Q: Key value pairs in relational database Does someone have experience with storing key-value pairs in a database? I've been using this type of table: CREATE TABLE key_value_pairs ( itemid varchar(32) NOT NULL, itemkey varchar(32) NOT NULL, itemvalue varchar(32) NOT NULL, CONSTRAINT ct_primarykey PRIMARY KEY(itemid,itemkey) ) Then for example the following rows could exist: itemid itemkey itemvalue ---------------- ------------- ------------ 123 Colour Red 123 Size Medium 123 Fabric Cotton The trouble with this scheme is the SQL syntax required to extract data is quite complex. Would it be better to just create a series of key/value columns? CREATE TABLE key_value_pairs ( itemid varchar(32) NOT NULL, itemkey1 varchar(32) NOT NULL, itemvalue1 varchar(32) NOT NULL, itemkey2 varchar(32) NOT NULL, itemvalue2 varchar(32) NOT NULL, . . .etc . . . ) This will be easier and faster to query but lacks the extensibility of the first approach. Any advice? A: From experience, i have found that certain keys will be more widely used or queried more often. We have usually then slightly de-normalized the design to include a specific field back in the main "item" table. eg. if every Item has a Colour, you might add the Colour column to your item table. Fabric and Size may be used less often and can be kept separate in the key-value pair table. You may even keep the colour in the key-value pair table, but duplicate the data in the item table to get the performance benefits. Obviously this varies depending on the data and how flexible you need the key-value pairs to be. It can also result in your attribute data not being located consistantly. However, de-normalizing does greatly simplify the queries and improves their performance as well. I would usually only consider de-normalizing when performance becomes and issue, not just to simplify a query. A: PostgreSQL 8.4 supports hstore data type for storing sets of (key,value) pairs within a single PostgreSQL data field. Please refer http://www.postgresql.org/docs/8.4/static/hstore.html for its usage information. Though it's very old question but thought to pass on this info thinking it might help someone. A: I think the best way to design such tables is as follows: * *Make the frequently used fields as columns in the database. *Provide a Misc column which contains a dictionary(in JSON/XML/other string formeat) which will contain the fields as key-value pairs. Salient points: * *You can write your normal SQL queries to query for SQL in most situations. *You can do a FullTextSearch on the key-value pairs. MySQL has a full text search engine, else you can use "like" queries which are a little slower. While full text search is bad, we assume that such queries are fewer, so that should not cause too many issues. *If your key-value pairs are simple boolean flags, this technique has the same power as having a separate column for the key. Any more complex operation on the key value pairs should be done outside the database. *Looking at the frequency of queries over a period of time will give tell you which key-value pairs need to be converted in columns. *This technique also makes it easy to force integrity constraints on the database. *It provides a more natural path for developers to re-factor their schema and code. A: If you have very few possible keys, then I would just store them as columns. But if the set of possible keys is large then your first approach is good (and the second approach would be impossible). 
Or is it so that each item can only have a finite number of keys, but the keys could be something from a large set? You could also consider using an Object Relational Mapper to make querying easier. A: I don't understand why the SQL to extract data should be complex for your first design. Surely to get all values for an item, you just do this: SELECT itemkey,itemvalue FROM key_value_pairs WHERE itemid='123'; or if you just want one particular key for that item: SELECT itemvalue FROM key_value_pairs WHERE itemid='123' AND itemkey='Fabric'; The first design also gives you the flexibility to easily add new keys whenever you like. A: There is another solution that falls somewhere between the two. You can use an xml type column for the keys and values. So you keep the itemid field, then have an xml field that contains the xml defined for some key value pairs like <items> <item key="colour" value="red"/><item key="xxx" value="blah"/></items> Then when you extract your data fro the database you can process the xml in a number of different ways. Depending on your usage. This is an extend able solution. A: In most cases that you would use the first method, it's because you haven't really sat down and thought out your model. "Well, we don't know what the keys will be yet". Generally, this is pretty poor design. It's going to be slower than actually having your keys as columns, which they should be. I'd also question why your id is a varchar. In the rare case that you really must implement a key/value table, the first solution is fine, although, I'd generally want to have the keys in a separate table so you aren't storing varchars as the keys in your key/value table. eg, CREATE TABLE valid_keys ( id NUMBER(10) NOT NULL, description varchar(32) NOT NULL, CONSTRAINT pk_valid_keys PRIMARY KEY(id) ); CREATE TABLE item_values ( item_id NUMBER(10) NOT NULL, key_id NUMBER(10) NOT NULL, item_value VARCHAR2(32) NOT NULL, CONSTRAINT pk_item_values PRIMARY KEY(item_id), CONSTRAINT fk_item_values_iv FOREIGN KEY (key_id) REFERENCES valid_keys (id) ); You can then even go nuts and add a "TYPE" to the keys, allowing some type checking. A: Before you continue on your approach, I would humbly suggest you step back and consider if you really want to store this data in a "Key-Value Pair"table. I don't know your application but my experience has shown that every time I have done what you are doing, later on I wish I had created a color table, a fabric table and a size table. Think about referential integrity constraints, if you take the key-value pair approach, the database can't tell you when you are trying to store a color id in a size field Think about the performance benefits of joining on a table with 10 values versus a generic value that may have thousands of values across multiple domains. How useful is an index on Key Value really going to be? Usually the reasoning behind doing what you are doing is because the domains need to be "user definable". If that is the case then even I am not going to push you towards creating tables on the fly (although that is a feasible approach). However, if your reasoning is because you think it will be easier to manage than multiple tables, or because you are envisioning a maintenance user interface that is generic for all domains, then stop and think really hard before you continue. A: I once used key-value pairs in a database for the purpose of creating a spreadsheet (used for data entry) in which a teller would summarize his activity from working a cash drawer. 
Each k/v pair represented a named cell into which the user entered a monetary amount. The primary reason for this approach is that the spreadsheet was highly subject to change. New products and services were added routinely (thus new cells appeared). Also, certain cells were not needed in certain situations and could be dropped. The app I wrote was a rewrite of an application that did break the teller sheet into separate sections each represented in a different table. The trouble here was that as products and services were added, schema modifications were required. As with all design choices there are pros and cons to taking a certain direction as compared to another. My redesign certainly performed slower and more quickly consumed disk space; however, it was highly agile and allowed for new products and services to be added in minutes. The only issue of note, however, was disk consumption; there were no other headaches I can recall. As already mentioned, the reason I usually consider a key-value pair approach is when users—this could be a the business owner—want to create their own types having a user-specific set of attributes. In such situations I have come to the following determination. If there is either no need to retrieve data by these attributes or searching can be deferred to the application once a chunk of data has been retrieved, I recommend storing all the attributes in a single text field (using JSON, YAML, XML, etc.). If there is a strong need to retrieve data by these attributes, it gets messy. You can create a single "attributes" table (id, item_id, key, value, data_type, sort_value) where the sort column coverts the actual value into a string-sortable representation. (e.g. date: “2010-12-25 12:00:00”, number: “0000000001”) Or you can create separate attribute tables by data-type (e.g. string_attributes, date_attributes, number_attributes). Among numerous pros and cons to both approaches: the first is simpler, the second is faster. Both will cause you to write ugly, complex queries. A: the first method is quite ok. you can create a UDF that extracts the desired data and just call that. A: The first method is a lot more flexible at the cost you mention. And the second approach is never viable as you showed. Instead you'd do (as per your first example) create table item_config (item_id int, colour varchar, size varchar, fabric varchar) of course this will only work when the amount of data is known and doesn't change a lot. As a general rule any application that demands changing DDL of tables to do normal work should be given a second and third thoughts. A: Violating normalization rules is fine as long as the business requirement can still be fulfilled. Having key_1, value_1, key_2, value_2, ... key_n, value_n can be OK, right up until the point that you need key_n+1, value_n+1. My solution has been a table of data for shared attributes and XML for unique attributes. That means I use both. If everything (or most things) have a size, then size is a column in the table. If only object A have attribute Z, then Z is stored as XML similar Peter Marshall's answer already given. A: The second table is badly de-normalised. I would stick with the first approach. A: I think you're doing the right thing, as long as the keys/values for a given type of item change frequently. If they are rather static, then simply making the item table wider makes more sense. 
We use a similar (but rather more complex) approach, with a lot of logic around the keys/values, as well as tables for the types of values permitted for each key. This allows us to define items as just another instance of a key, and our central table maps arbitrary key types to other arbitrary key types. It can rapidly tie your brain in knots, but once you've written and encapsulated the logic to handle it all, you have a lot of flexibility. I can write more details of what we do if required. A: If the keys are dynamic, or there are loads of them, then use the mapping table that you have as your first example. In addition this is the most general solution, it scales best in the future as you add more keys, it is easy to code the SQL to get the data out, and the database will be able to optimise the query better than you would imagine (i.e., I wouldn't put effort into prematurely optimising this case unless it was proven to be a bottleneck in testing later on, in which case you could consider the next two options below). If the keys are a known set, and there aren't many of them (<10, maybe <5), then I don't see the problem in having them as value columns on the item. If there are a medium number of known fixed keys (10 - 30) then maybe have another table to hold the item_details. However I don't ever see a need to use your second example structure, it looks cumbersome. A: If you go the route of a KVP table, and I have to say that I do not like that technique at all myself as it is indeed difficult to query, then you should consider clustering the values for a single item id together using an appropriate technique for whatever platform you're on. RDBMS's have a tendency to scatter rows around to avoid block contention on inserts and if you have 8 rowes to retrieve you could easily find yourself accessing 8 blocks of the table to read them. On Oracle you'd do well to consider a hash cluster for storing these, which would vastly improve performance on accessing the values for a given item id. A: Your example is not a very good example of the use of key value pairs. A better example would be the use of something like a Fee table a Customer table and a Customer_Fee table in a billing application. The Fee table would consist of fields like: fee_id, fee_name, fee_description The Customer_Fee table would consist of fields like: customer_id, fee_id, fee_value A: I've been thinking about the same challenge and this is what I came up with. The task is a relational table, where I store common attributes: CREATE TABLE `tasks` ( `task_id` BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT, `account_id` BIGINT(20) UNSIGNED NOT NULL, `type` VARCHAR(128) COLLATE UTF8MB4_UNICODE_CI DEFAULT NULL, `title` VARCHAR(256) COLLATE UTF8MB4_UNICODE_CI NOT NULL, `description` TEXT COLLATE UTF8MB4_UNICODE_CI NOT NULL, `priority` VARCHAR(40) COLLATE UTF8MB4_UNICODE_CI DEFAULT NULL, `created_by` VARCHAR(40) COLLATE UTF8MB4_UNICODE_CI DEFAULT NULL, `creation_date` TIMESTAMP NULL DEFAULT NULL, `last_updated_by` VARCHAR(40) COLLATE UTF8MB4_UNICODE_CI DEFAULT NULL, `last_updated_date` TIMESTAMP NULL DEFAULT NULL, PRIMARY KEY (`task_id`), KEY `tasks_fk_1` (`account_id`), CONSTRAINT `tasks_fk_1` FOREIGN KEY (`account_id`) REFERENCES `accounts` (`account_id`) ON DELETE CASCADE ON UPDATE NO ACTION ) ENGINE=INNODB AUTO_INCREMENT=1 DEFAULT CHARSET=UTF8MB4 COLLATE = UTF8MB4_UNICODE_CI ROW_FORMAT=DYNAMIC; And here is the KV table to store additional task info. I prefer storing values with their types to handle the data in a proper way. 
Feel free to comment. CREATE TABLE `task_variables` ( `row_id` bigint(20) unsigned NOT NULL AUTO_INCREMENT, `task_id` bigint(20) unsigned NOT NULL, `name` varchar(128) COLLATE utf8mb4_unicode_ci NOT NULL, `type` varchar(40) COLLATE utf8mb4_unicode_ci DEFAULT NULL, `variable_text_value` varchar(256) COLLATE utf8mb4_unicode_ci DEFAULT NULL, `variable_number_value` double DEFAULT NULL, `variable_date_value` datetime DEFAULT NULL, `created_by` varchar(40) COLLATE utf8mb4_unicode_ci DEFAULT NULL, `creation_date` timestamp NULL DEFAULT NULL, `last_updated_by` varchar(40) COLLATE utf8mb4_unicode_ci DEFAULT NULL, `last_updated_date` timestamp NULL DEFAULT NULL, PRIMARY KEY (`row_id`), KEY `task_variables_fk` (`task_id`), CONSTRAINT `task_variables_fk` FOREIGN KEY (`task_id`) REFERENCES `tasks` (`task_id`) ON DELETE CASCADE ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci ROW_FORMAT=DYNAMIC; A: Times have changed. Now you have other database types you can use beside relational databases. NOSQL choices now include, Column Stores, Document Stores, Graph, and Multi-model (See: http://en.wikipedia.org/wiki/NoSQL). For Key-Value databases, your choices include (but not limited to) CouchDb, Redis, and MongoDB.
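To make the first design from the question concrete, here is a small, self-contained sketch using Python's built-in sqlite3 module (chosen only so the snippet runs as-is; the DDL in the thread targets other engines, and the table and column names below follow the question's example). It shows the two simple lookups suggested above, plus the pivot-style query that is usually the source of the "complex SQL" complaint:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE key_value_pairs (
        itemid    TEXT NOT NULL,
        itemkey   TEXT NOT NULL,
        itemvalue TEXT NOT NULL,
        PRIMARY KEY (itemid, itemkey)
    );
""")
conn.executemany(
    "INSERT INTO key_value_pairs VALUES (?, ?, ?)",
    [("123", "Colour", "Red"),
     ("123", "Size", "Medium"),
     ("123", "Fabric", "Cotton")],
)

# The simple lookups suggested in the answers: all pairs for one item,
# then a single key for that item.
print(conn.execute(
    "SELECT itemkey, itemvalue FROM key_value_pairs WHERE itemid = ?",
    ("123",)).fetchall())
print(conn.execute(
    "SELECT itemvalue FROM key_value_pairs WHERE itemid = ? AND itemkey = ?",
    ("123", "Fabric")).fetchone())

# The query people usually find awkward: one row per item with the keys
# pivoted back into columns.
print(conn.execute("""
    SELECT itemid,
           MAX(CASE WHEN itemkey = 'Colour' THEN itemvalue END) AS colour,
           MAX(CASE WHEN itemkey = 'Size'   THEN itemvalue END) AS size,
           MAX(CASE WHEN itemkey = 'Fabric' THEN itemvalue END) AS fabric
    FROM key_value_pairs
    GROUP BY itemid
""").fetchall())

The pivot needs one CASE expression per key, which is exactly the boilerplate that pushes people towards promoting frequently queried keys to real columns, as several of the answers recommend.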
{ "language": "en", "url": "https://stackoverflow.com/questions/126271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "79" }
Q: Setting a Sharepoint Site Theme through a Web Service? Is it possible to change a Sharepoint 2007 Site Theme through a Web Service? I know it can be done through the Object Model with (i think) SPWeb.ApplyTheme, but I did not see anything in the available Web Services, apart from CustomizeCss in Webs.asmx, which does not seem to be exactly what I need. A: This is not possible out of the box. However, you can write your own custom SharePoint web service that exposes this feature to you. A walkthrough on how to make your own custom web service in SharePoint can be found here: http://msdn.microsoft.com/en-us/library/ms464040.aspx Another way would be to create your own themesetter and invoke it via a request. An example of this can be found here: http://www.sharepoint-tips.com/2006/03/automatically-applying-theme-to-site.html
{ "language": "en", "url": "https://stackoverflow.com/questions/126274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Making something both a C identifier and a string? Say you want to generate a matched list of identifiers and strings enum { NAME_ONE, NAME_TWO, NAME_THREE }; myFunction(NAME_ONE, "NAME_ONE"); myFunction(NAME_TWO, "NAME_TWO"); myFunction(NAME_THREE, "NAME_THREE"); ..without repeating yourself, and without auto-generating the code, using C/C++ macros Initial guess: You could add an #include file containing myDefine(NAME_ONE) myDefine(NAME_TWO) myDefine(NAME_THREE) Then use it twice like: #define myDefine(a) a, enum { #include "definitions" } #undef myDefine #define myDefine(a) myFunc(a, "a"); #include "definitions" #undef myDefine but #define doesn't let you put parameters within a string? A: Here's a good way to declare name-list: #define FOR_ALL_FUNCTIONS(F)\ F(NameOne)\ F(NameTwo)\ F(NameThree)\ #define DECLARE_FUNCTION(N)\ void N(); #define IMPLEMENT_FUNCTION(N)\ void N(){} FOR_ALL_FUNCTIONS(DECLARE_FUNCTION); FOR_ALL_FUNCTIONS(IMPLEMENT_FUNCTION); This way this name-list can be re-used multiple times. I have used it for prototyping new language features, although never ended up using them. So, if nothing else, they were a great way to find dead-ends in own inventions. I wonder if it's because what they say: "macros are bad"... :) A: For your second #define, you need to use the # preprocessor operator, like this: #define myDefine(a) myFunc(a, #a); That converts the argument to a string.
{ "language": "en", "url": "https://stackoverflow.com/questions/126277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: C99 stdint.h header and MS Visual Studio To my amazement I just discovered that the C99 stdint.h is missing from MS Visual Studio 2003 upwards. I'm sure they have their reasons, but does anyone know where I can download a copy? Without this header I have no definitions for useful types such as uint32_t, etc. A: Turns out you can download a MS version of this header from: https://github.com/mattn/gntp-send/blob/master/include/msinttypes/stdint.h A portable one can be found here: http://www.azillionmonkeys.com/qed/pstdint.h Thanks to the Software Ramblings blog. NB: The Public Domain version of the header, mentioned by Michael Burr in a comment, can be find as an archived copy here. An updated version can be found in the Android source tree for libusb_aah. A: Microsoft do not support C99 and haven't announced any plans to. I believe they intend to track C++ standards but consider C as effectively obsolete except as a subset of C++. New projects in Visual Studio 2003 and later have the "Compile as C++ Code (/TP)" option set by default, so any .c files will be compiled as C++. A: Just define them yourself. #ifdef _MSC_VER typedef __int32 int32_t; typedef unsigned __int32 uint32_t; typedef __int64 int64_t; typedef unsigned __int64 uint64_t; #else #include <stdint.h> #endif A: Update: Visual Studio 2010 and Visual C++ 2010 Express both have stdint.h. It can be found in C:\Program Files\Microsoft Visual Studio 10.0\VC\include A: Another portable solution: POSH: The Portable Open Source Harness "POSH is a simple, portable, easy-to-use, easy-to-integrate, flexible, open source "harness" designed to make writing cross-platform libraries and applications significantly less tedious to create and port." http://poshlib.hookatooka.com/poshlib/trac.cgi as described and used in the book: Write portable code: an introduction to developing software for multiple platforms By Brian Hook http://books.google.ca/books?id=4VOKcEAPPO0C -Jason A: Visual Studio 2003 - 2008 (Visual C++ 7.1 - 9) don't claim to be C99 compatible. (Thanks to rdentato for his comment.) A: Boost contains cstdint.hpp header file with the types you are looking for: http://www.boost.org/doc/libs/1_36_0/boost/cstdint.hpp
{ "language": "en", "url": "https://stackoverflow.com/questions/126279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "117" }
Q: Has anyone been able to get SharePoint using NTLM working with SQUID as a reverse proxy? * *We have a SQUID reverse proxy and a MOSS 2007 portal. All sites are using NTLM. *We cannot get it working with SQUID as a reverse proxy. Any ideas where to start? A: Can you switch to Kerberos instead of NTLM? You're encountering the "Double-Hop Issue", whereby NTLM authentication cannot traverse proxies or servers. This is outlined at this location: http://blogs.msdn.com/knowledgecast/archive/2007/01/31/the-double-hop-problem.aspx And over here: http://support.microsoft.com/default.aspx?scid=kb;en-us;329986 Double-Hop Issue The double-hop issue is when the ASPX page tries to use resources that are located on a server that is different from the IIS server. In our case, the first "hop" is from the web browser client to the IIS ASPX page; the second hop is to the AD. The AD requires a primary token. Therefore, the IIS server must know the password for the client to pass a primary token to the AD. If the IIS server has a secondary token, the NTAUTHORITY\ANONYMOUS account credentials are used. This account is not a domain account and has very limited access to the AD. The double-hop using a secondary token occurs, for example, when the browser client is authenticated to the IIS ASPX page by using NTLM authentication. In this example, the IIS server has a hashed version of the password as a result of using NTLM. If IIS turns around and passes the credentials to the AD, IIS is passing a hashed password. The AD cannot verify the password and, instead, authenticates by using the NTAUTHORITY\ANONYMOUS LOGON. On the other hand, if your browser client is authenticated to the IIS ASPX page by using Basic authentication, the IIS server has the client password and can make a primary token to pass to the AD. The AD can verify the password and does authenticate as the domain user. For more information, click the following article number to view the article in the Microsoft Knowledge Base: 264921 (http://support.microsoft.com/kb/264921/) How IIS authenticates browser clients If switching to Kerberos is not an option, have you investigated the Squid NTLM project? http://devel.squid-cache.org/ntlm/ A: you can use HAProxy for load balancing
{ "language": "en", "url": "https://stackoverflow.com/questions/126288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Automatic Casts redux After I messed up the description of my previous post on this I have sat down and tried to convey my exact intent. I have a class called P which performs some distinct purpose. I also have PW which perform some distinct purpose on P. PW has no member variables, just member functions. From this description you would assume that the code would follow like this: class P { public: void a( ); }; class PW { public: PW( const P& p ) : p( p ) { } void b( ); P& p; }; class C { public: P GetP( ) const { return p; } private: P p; }; // ... PW& p = c.GetP( ); // valid // ... However that brings up a problem. I can't call the functions of P without indirection everywhere. // ... p->p->a( ) // ... What I would like to do is call p->a( ) and have it automatically determine that I would like to call the member function of P. Also having a member of PW called a doesn't really scale - what if I add (or remove) another function to P - this will need to be added (or removed) to PW. A: You could try overriding operator* and operator-> to return access to the embedded p. Something like this might do the trick : class P { public: void a( ) { std::cout << "a" << std::endl; } }; class PW { public: PW(P& p) : p(p) { } void b( ) { std::cout << "b" << std::endl; } P & operator*() { return p; } P * operator->() { return &p; } private: P & p; }; class C { public: P & getP() { return p; } private: P p; }; int main() { C c; PW pw(c.getP()); (*pw).a(); pw->a(); pw.b(); return EXIT_SUCCESS; } This code prints a a b However, this method may confuse the user since the semantic of operator* and operator-> becomes a little messed up. A: If you make P a superclass to PW, like you did in your previous question, you could call p->a() and it would direct to class P. It seems like you've already considered and rejected this though, judging from this question. If so, would you care to elaborate why this wont work for you? A: I think you need to think through what kind of relationship exists between PW and P. Is it an is-a relationship? Are instances of PW instances of P? Then it would make sense to have PW inherit from P. Is it a has-a relationship? Then you should stick with containment, and put up with the syntactic inconvenience. Incidentally, it's generally not a good idea to expose a non-const reference to a member variable in a class's public interface.
{ "language": "en", "url": "https://stackoverflow.com/questions/126297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What happens when a Flex App can't run at the specified framerate? In our application (a game), in some cases it can't run fast enough. Obviously we'd like to speed it up, but in the mean-time when this happens it causes many problems (or if it's not causing them, the two are related). The one which is least related to our own functionality is that the built in Alert.show() method stops working. Typically the full-screen transparent box appears but not the actual popup. I believe this is down to Flex giving all available cycles to other tasks... but it's proving difficult to investigate analytically so I am happy to hear another explanation. To clarify, core parts of Flex are simply not working in this situation. I've stepped through the code for instance where a new element is added to the screen, everything happens and the addChild() method is called on the main display canvas... but then the element does not appear. If we then disable our update loop, the element suddenly appears. So whether Flex is supposed to run the exact same code or not, somehow it IS blocking is some strange way. As I said, even the Flex Alert.show() method doesn't work. A: All Flash content is executed frame-by-frame - Flash executes one frame's worth of code, then updates the screen, and then waits until the next frame update. When Flash can't keep up with the specified framerate, all that happens is that instead of waiting between frame updates, Flash does them as fast as it can with no waiting in between. So the only visible difference is that frame updates occur less frequently. There are never cases where code is skipped, events are dropped, or screen redraws are skipped for performance reasons (unless you've found new bugs). So the most likely culprit is that either you have a problem with code that's very time-dependent (such as code that expects two timers to trigger on the same frame), or some other problem that's being misdiagnosed. (For example, maybe there's a bug causing a slowdown, rather than a slowdown causing your bug.) A: I'm not too sure if Flex has some additional performance handling of it's own. But for pure actionscript the only thing that would happen is the framerate would slow to a crawl, everything will happen normally just slower. If you stack very large amounts of transparent or masked objects you might get some weird behavior, but that should be more noticable. And I guess telling you that making a game in Flex isn't that much of a good idea (just because of the performance overhead the framework has) is a bit late ;) A: I like to make games in FLEX 3 (actionscript3), its actually pretty handy solution when compared to Flash CS3: good debugging environment without hassle. Of course it depends on the game style which one is better, if you need lot of graphics you may like Flash more, but Flex allows you to use external images, components, etc. Notice I am not talking about Flex XML project here. Answer to your performance issue: You can use e.g. old MacOSX machine to see what happens in a very slow machine, a few solutions are: - move objects more than x++ y++ pixels when machine is old - reduce objects you can detect with a timer how slow machine is..
{ "language": "en", "url": "https://stackoverflow.com/questions/126316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Undoing specific revisions in Subversion Suppose I have a set of commits in a repository folder... 123 (250 new files, 137 changed files, 14 deleted files) 122 (150 changed files) 121 (renamed folder) 120 (90 changed files) 119 (115 changed files, 14 deleted files, 12 added files) 118 (113 changed files) 117 (10 changed files) I want to get a working copy that includes all changes from revision 117 onward but does NOT include the changes for revisions 118 and 120. EDIT: To perhaps make the problem clearer, I want to undo the changes that were made in 118 and 120 while retaining all other changes. The folder contains thousands of files in hundreds of subfolders. What is the best way to achieve this? The answer, thanks to Bruno and Bert, is the command (in this case, for removing 120 after the full merge was performed) svn merge -c -120 . Note that the revision number must be specified with a leading minus. '-120' not '120' A: There's a more straightforward way if you use TortoiseSVN, a Windows client for Subversion. You just click to view the log in your updated work copy, select the revisions you want to undo, right click, and select "Revert changes from these revisions". It is a safe operation because the changes are applied just in your workspace. You still have to commit to modify your repository. It is one of the best features of TortoiseSVN. I've always been a command line guy, but Tortoise changed my mind. A: To undo revisions 118 and 120: svn up -r HEAD # get latest revision svn merge -c -120 . # undo revision 120 svn merge -c -118 . # undo revision 118 svn commit # after solving problems (if any) Also see the description in Undoing changes. Note the minus in the -c -120 argument. The -c (or --change) switch is supported since Subversion 1.4, older versions can use -r 120:119. A: I suppose you could create a branch from revision 117, then merge everything except 118 and 120. svn copy -r 117 source destination Then checkout this branch and from there do svnmerge.py merge -r119,120-123 EDIT: This doesn't undo the revisions in the branch/trunk. Use svn merge instead.
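If you undo sets of revisions regularly, the accepted commands can be wrapped in a small helper script. The sketch below is only a convenience wrapper around the same svn update / svn merge -c -REV invocations; it assumes the svn command-line client is on your PATH and that it is run from the root of a working copy, and it deliberately leaves the review and the final svn commit to you:

import subprocess

def undo_revisions(revisions, working_copy="."):
    # Bring the working copy up to date first, as in the accepted answer.
    subprocess.run(["svn", "update"], cwd=working_copy, check=True)
    for rev in revisions:
        # Note the leading minus: -c -120 reverse-merges revision 120.
        subprocess.run(["svn", "merge", "-c", "-%d" % rev, "."],
                       cwd=working_copy, check=True)

if __name__ == "__main__":
    undo_revisions([120, 118])   # inspect the result, resolve conflicts, then `svn commit`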
{ "language": "en", "url": "https://stackoverflow.com/questions/126320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: Specifying a non-.NET dependency in Visual Studio I'm calling a non-.NET dll from my project using P/Invoke, meaning that the .dll must always be present in the .exe's directory. Is there any way to tell Visual Studio of this dependency, so that it will automatically copy the .dll to the output directory when compiling, and will automatically include the .dll in the setup? Or do I have to do this manually? A: You can simply add the .DLL to your project. Select the Properties pane for that file and set Build Action to Content and Copy to Output Directory to Copy if newer. A: You can copy/link this file(s) to the project, and in properties windows set "Build Action" to "None" and "Copy to Output Directory" to "Copy if newer" or "Copy always". Or you can use a "Pre-Build Events" & "Post-Build Events" where you can specify any batch scripts. I prefere the second option, because this way is more flexible than the first. Also you can modify a MSBuild file and add a task for copy the file(s). A: I think one problem with just adding a .DLL to the project is that you may need different versions of a DLL for debug and release builds. You'd think you can add both debug and release versions of the DLL to the file, and based on configurations, exclude the inappropriate one, but I couldn't find a way to do that. I'm using Visual Studio 2010. I am positive this worked in the old days with VS6.
{ "language": "en", "url": "https://stackoverflow.com/questions/126331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What is the best way in Rails to determine if two (or more) given URLs (as strings or hash options) are equal? I'm wanting a method called same_url? that will return true if the passed in URLs are equal. The passed in URLs might be either params options hash or strings. same_url?({:controller => :foo, :action => :bar}, "http://www.example.com/foo/bar") # => true The Rails Framework helper current_page? seems like a good starting point but I'd like to pass in an arbitrary number of URLs. As an added bonus It would be good if a hash of params to exclude from the comparison could be passed in. So a method call might look like: same_url?(projects_path(:page => 2), "projects?page=3", :excluding => :page) # => true A: Here's the method (bung it in /lib and require it in environment.rb): def same_page?(a, b, params_to_exclude = {}) if a.respond_to?(:except) && b.respond_to?(:except) url_for(a.except(params_to_exclude)) == url_for(b.except(params_to_exclude)) else url_for(a) == url_for(b) end end If you are on Rails pre-2.0.1, you also need to add the except helper method to Hash: class Hash # Usage { :a => 1, :b => 2, :c => 3}.except(:a) -> { :b => 2, :c => 3} def except(*keys) self.reject { |k,v| keys.include? k.to_sym } end end Later version of Rails (well, ActiveSupport) include except already (credit: Brian Guthrie) A: Is this the sort of thing you're after? def same_url?(one, two) url_for(one) == url_for(two) end A: def all_urls_same_as_current? *params_for_urls params_for_urls.map do |a_url_hash| url_for a_url_hash.except(*exclude_keys) end.all? do |a_url_str| a_url_str == request.request_uri end end Wherein: * *params_for_urls is an array of hashes of url parameters (each array entry are params to build a url) *exclude_keys is an array of symbols for keys you want to ignore *request.request_uri may not be exactly what you should use, see below. Then there are all sorts of things you'll want to consider when implementing your version: * *do you want to compare the full uri with domain, port and all, or just the path? *if just the path, do you want to still compare arguments passed after the question mark or just those that compose the actual path path?
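The comparison logic itself (normalize each URL, drop the excluded query parameters, then compare) is independent of Rails. Purely as an illustration of that logic, here is a small sketch using Python's standard urllib.parse; it is not Rails code, and it deliberately ignores scheme and host, which is one of the open questions raised in the last answer:

from urllib.parse import urlsplit, parse_qsl, urlencode

def same_url(*urls, excluding=()):
    # Reduce each URL to (path, sorted query string minus excluded keys).
    def normalize(url):
        parts = urlsplit(url)
        params = sorted((k, v) for k, v in parse_qsl(parts.query)
                        if k not in excluding)
        return parts.path.rstrip("/"), urlencode(params)
    first = normalize(urls[0])
    return all(normalize(u) == first for u in urls[1:])

print(same_url("/projects?page=2", "/projects?page=3", excluding=("page",)))  # True
print(same_url("/projects?page=2", "/projects?page=3"))                       # False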
{ "language": "en", "url": "https://stackoverflow.com/questions/126332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Which Ruby XML library would you recommend for a 2.4MB XML file? I have a 2.4 MB XML file, an export from Microsoft Project (hey I'm the victim here!) from which I am requested to extract certain details for re-presentation. Ignoring the intelligence or otherwise of the request, which library should I try first from a Ruby perspective? I'm aware of the following (in no particular order): * *REXML *Chilkat Ruby XML library *hpricot XML *libXML I'd prefer something packaged as a Ruby gem, which I suspect the Chilkat library is not. Performance isn't a major issue - I don't expect the thing to need to run more than once a day (once a week is more likely). I'm more interested in something that's as easy to use as anything XML-related is able to get. EDIT: I tried the gemified ones: hpricot is, by a country mile, easiest. For example, to extract the content of the SaveVersion tag in this XML (saved in a file called, say 'test.xml') <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <Project xmlns="http://schemas.microsoft.com/project"> <SaveVersion>12</SaveVersion> </Project> takes something like this: doc = Hpricot.XML(open('test.xml')) version = (doc/:Project/:SaveVersion).first.inner_html hpricot seems to be relatively unconcerned with namespaces, which in this example is fine: there's only one, but would potentially be a problem with a complex document. Since hpricot is also very slow, I rather imagine this would be a problem that solves itself. libxml-ruby is an order of magnitude faster, understands namespaces (it took me a good couple of hours to figure this out) and is altogether much closer to the XML metal - XPath queries and all the other stuff are in there. This is not necessarily a Good Thing if, like me, you open up an XML document only under conditions of extreme duress. The helper module was mostly helpful in providing examples of how to handle a default namespace effectively. This is roughly what I ended up with (I'm not in any way asserting its beauty, correctness or other value, it's just where I am right now): xml_parser = XML::Parser.new xml_parser.string = File.read(path) doc = xml_parser.parse @root = doc.root @scopes = { :in_node => '', :in_root => '/', :in_doc => '//' } @ns_prefix = 'p' @ns = "#{@ns_prefix}:#{@root.namespace[0].href}" version = @root.find_first(xpath_qry("Project/SaveVersion", :in_root), @ns).content.to_i def xpath_qry(tags, scope = :in_node) "#{@scopes[scope]}" + tags.split(/\//).collect{ |tag| "#{@ns_prefix}:#{tag}"}.join('/') end I'm still debating the pros and cons: libxml for its extra rigour, hpricot for the sheer style of _why's code. EDIT again, somewhat later: I discovered HappyMapper ('gem install happymapper') which is hugely promising, if still at an early stage. It's declarative and mostly works, although I have spotted a couple of edge cases that I don't have fixes for yet. It lets you do stuff like this, which parses my Google Reader OPML: module OPML class Outline include HappyMapper tag 'outline' attribute :title, String attribute :text, String attribute :type, String attribute :xmlUrl, String attribute :htmlUrl, String has_many :outlines, Outline end end xml_string = File.read("google-reader-subscriptions.xml") sections = OPML::Outline.parse(xml_string) I already love it, even though it's not perfect yet. A: Nokogiri wraps libxml2 and libxslt with a clean, Rubyish API that supports namespaces, XPath and CSS3 queries. Fast, too. 
http://nokogiri.org/ A: Hpricot is probably the best tool for you -- it is easy to use and should handle a 2 MB file with no problem. Speed-wise, libxml should be the best. I used the libxml2 binding for Python a few months ago (at that point rb-libxml was stale). The streaming interface worked best for me (LibXML::XML::Reader in the Ruby gem). It lets you process the file while it is still downloading, is a bit more user-friendly than SAX, and allowed me to load data from a 30 MB XML file from the internet into a MySQL database in a little more than a minute.
{ "language": "en", "url": "https://stackoverflow.com/questions/126337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Is it safe to redirect to the same URL? I have URLs of the form http://domain/image/⟨uuid⟩/42x42/some_name.png. The Web server (nginx) is configured to look for a file /some/path/image/⟨uuid⟩/thumbnail_42x42.png, and if it does not exist, it sends the URL to the backend (Django via mod_wsgi) which then generates the thumbnail. Then the backend emits a 302 redirect to exactly the same URL that was requested by the client, with the idea that upon this second request the server will notice the thumbnail file and send it directly. The question is, will this work with all the browsers? So far testing has shown no problems, but can I be sure all the user agents will interpret this as intended? Update: Let me clarify the intent. Currently this works as follows: * *The client requests a thumbnail of an image. *The server sees the file does not exist, so it forwards the request to the backend. *The backend creates the thumbnail and returns 302. *The backend releases all the resources, letting the server share the newly generated file to current and subsequent clients. Having the backend serve the newly created image is worse for two reasons: * *Two ways of serving the same data must be created; *The server is much better at serving static content. What if the client has an extremely slow link? The backend is not particularly fast nor memory-efficient, and keeping it in memory while spoon-feeding the client can be wasteful. So I keep the backend working for the minimum amount of time. Update²: I’d really appreciate some RFC references or opinions of someone with experience with lots of browsers. All those affirmative answers are pleasant but they look somewhat groundless. A: If it doesn't, the client's broken. Most clients will follow redirect loops until a maximum value. So yes, it should be fine until your backend doesn't generate the thumbnail for any reason. You could instead change URLs to be http://domain/djangoapp/generate_thumbnail and that'll return the thumbnail and the proper content-type and so on A: Yes, it's fine to re-direct to the same URI as you were at previously.
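For reference, the backend half of this scheme can be sketched in a few lines. The following is a rough Django-style view matching the setup described in the question (nginx tries the file first, and only misses reach the application); it assumes a URL pattern capturing the uuid, size and name segments, and the THUMB_ROOT path and make_thumbnail placeholder are illustrative assumptions rather than the asker's real code, which would do actual image resizing and error handling:

import os
from django.http import HttpResponseRedirect

THUMB_ROOT = "/some/path/image"   # matches the nginx file path in the question

def make_thumbnail(uuid, width, height, target):
    # Placeholder: real code would render a resized PNG (e.g. with an imaging library).
    os.makedirs(os.path.dirname(target), exist_ok=True)
    with open(target, "wb") as fh:
        fh.write(b"")             # stand-in for actual PNG data

def thumbnail(request, uuid, width, height, name):
    target = os.path.join(THUMB_ROOT, uuid, "thumbnail_%sx%s.png" % (width, height))
    if not os.path.exists(target):
        make_thumbnail(uuid, width, height, target)
    # 302 back to exactly the URL that was requested; on the retry nginx finds
    # the file and serves it without touching the backend again.
    return HttpResponseRedirect(request.get_full_path())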
{ "language": "en", "url": "https://stackoverflow.com/questions/126345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How to secure MS SSAS 2005 for HTTP remote access via Internet? We are building an hosted application that uses MS SQL Server Analysis Services 2005 for some of the reporting, specifically OLAP cube browsing. Since it is designed to be used by very large global organizations, security is important. It seems that Microsoft's preferred client tool for browsing OLAP cubes is Excel 2007 and the whole infrastructure is geared around Windows Integrated Authentication. We, however, are trying to build an internet-facing web application and do not want to create Windows Accounts for every user. It also seems that there are not many nice AJAXy web-based OLAP cube browsing tools (fast, drag-and-drop for dimensions, support for actions, cross-browser etc.) As an aside, we're currently using Dundas OLAP Grid but have also considered RadarCube and other more expensive commercial solutions and are still thinking of taking on CellSetGrid and developing it further - if you know of any other cheap/open solutions please let me know! We are therefore planning on providing two modes of access to the cube data: * *Through our own Web Application using one of these 3rd party Web-based OLAP browsing tools. *Direct access from Excel over HTTPS via the msmdpump.dll data pump, for when the web version is too slow/clunky or user needs more powerful analysis. For the web app access, the connection to the SSAS data source happens from the web server so we can happily pass a CustomData item on the Connection String which indicates which user is connecting. Since we potentially have too many combinations of rights to create individual SSAS roles for, we have implemented dynamic dimension security that uses a "Cube Users" dimension in conjunction with the CustomData item from the connection string and limits the Allowed Set of various other dimension members accordingly (via other Many-to-Many dinemsion relationships with Measure Groups that contain the 'rights mapping') See Mosha on Dimension Security: http://www.sqljunkies.com/WebLog/mosha/archive/2004/12/16/5605.aspx This all seems to work fine so far. For the 'direct connection' from Excel, we set up the data pump for HTTP access (see the MS Technet article) but have enabled anonymous access, relying again on the Connection String to control access since we don't have windows accounts. However, in this case, the connection string is controlled by the user (we push a .odc file from the web app, but a curious user could view & change it), so we cannot rely on users to be good and keep the CustomData=grunt@corp.org from changing to CustomData=superuser@corp.org. As it turns out, this also causes the same problem with Roles, since these are also specified on the connection string if you are not using Windows Integrated Authentication. The question therefore boils down to this: is there a way of getting basic authentication in IIS working without windows accounts in such a way that it can be used with the SSAS data pump to let SSAS know which user is connecting so that dynamic dimension security can be used successfully? (This is my first q on StackOverflow and probably the most complicated question I've ever asked: let me know where I haven't explained myself very well and I'll attempt to clarify) A: Basic authentication will work with local user accounts (non-domain) and even support passthrough authentication if the local accounts exist on different machines, however you should force SSL as basic authentication sends passwords in plaintext. 
You can use non-Windows accounts with basic authentication in IIS via an add-on such as http://www.codeplex.com/CustomBasicAuth, but SSAS will still need to know who that user is, and as far as I know SSAS supports only Windows authentication. A: For a (relatively) cheap thin-client front end for SSAS, look at RSInteract. For bonus points, it will also consume SSRS reports and report models. Any attempt to use dimension security will require SSAS to be aware of the user and to have their access rights available to it. I don't see any way to get around maintaining user permissions.
{ "language": "en", "url": "https://stackoverflow.com/questions/126350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Example Facebook Application using TurboGears -- pyFacebook I have a TurboGears application I'd like to run through Facebook, and am looking for an example TurboGears project using pyFacebook or minifb.py. pyFacebook is Django-centric, and I can probably figure it out, but this is, after all, the lazy web. A: Why is pyFacebook Django-centric? It looks like it works perfectly well with all kinds of WSGI apps, or with Python applications in general. There is no need to use Django. A: pyFacebook is Django-centric because it includes a Django example. I did not intend to irk; I am merely looking for a TurboGears example using pyFacebook.
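For what it's worth, here is a rough sketch of what the integration point can look like in a TurboGears 1.x controller. Rather than depending on a particular pyFacebook helper (whose convenience API has varied between versions), it verifies the fb_sig_* parameters that Facebook POSTs to a canvas page by hand, which is essentially what pyFacebook's request-checking code does for you. The signature rule used here (MD5 over the sorted key=value pairs with the fb_sig_ prefix stripped, followed by your application secret) is recalled from the old Facebook Platform documentation, so treat the details as assumptions and verify them against the pyFacebook source you actually ship:

import hashlib

from turbogears import controllers, expose

FACEBOOK_SECRET = "your-app-secret"     # placeholder

def valid_fb_sig(params, secret):
    # Rebuild the signature from every fb_sig_* parameter (prefix stripped),
    # sorted, concatenated, and hashed together with the application secret.
    sig = params.get("fb_sig", "")
    pieces = sorted("%s=%s" % (k[len("fb_sig_"):], v)
                    for k, v in params.items()
                    if k.startswith("fb_sig_"))
    digest = hashlib.md5(("".join(pieces) + secret).encode("utf-8")).hexdigest()
    return digest == sig

class Root(controllers.RootController):
    @expose()
    def canvas(self, **params):
        if not valid_fb_sig(params, FACEBOOK_SECRET):
            # FBML redirect to the Facebook login page; adjust for your app.
            return "<fb:redirect url='http://www.facebook.com/login.php'/>"
        uid = params.get("fb_sig_user", "unknown")
        return "<p>Hello, Facebook user %s (rendered by Facebook as FBML).</p>" % uid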
{ "language": "en", "url": "https://stackoverflow.com/questions/126356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: "cannot find -lpq" when trying to install psycopg2 Intro: I'm trying to migrate our Trac SQLite to a PostgreSQL backend, to do that I need psycopg2. After clicking past the embarrassing rant on www.initd.org I downloaded the latest version and tried running setup.py install. This didn't work, telling me I needed mingw. So I downloaded and installed mingw. Problem: I now get the following error when running setup.py build_ext --compiler=mingw32 install: running build_ext building 'psycopg2._psycopg' extension writing build\temp.win32-2.4\Release\psycopg\_psycopg.def C:\mingw\bin\gcc.exe -mno-cygwin -shared -s build\temp.win32-2.4\Release\psycopg \psycopgmodule.o build\temp.win32-2.4\Release\psycopg\pqpath.o build\temp.win32- 2.4\Release\psycopg\typecast.o build\temp.win32-2.4\Release\psycopg\microprotoco ls.o build\temp.win32-2.4\Release\psycopg\microprotocols_proto.o build\temp.win3 2-2.4\Release\psycopg\connection_type.o build\temp.win32-2.4\Release\psycopg\con nection_int.o build\temp.win32-2.4\Release\psycopg\cursor_type.o build\temp.win3 2-2.4\Release\psycopg\cursor_int.o build\temp.win32-2.4\Release\psycopg\lobject_ type.o build\temp.win32-2.4\Release\psycopg\lobject_int.o build\temp.win32-2.4\R elease\psycopg\adapter_qstring.o build\temp.win32-2.4\Release\psycopg\adapter_pb oolean.o build\temp.win32-2.4\Release\psycopg\adapter_binary.o build\temp.win32- 2.4\Release\psycopg\adapter_asis.o build\temp.win32-2.4\Release\psycopg\adapter_ list.o build\temp.win32-2.4\Release\psycopg\adapter_datetime.o build\temp.win32- 2.4\Release\psycopg\_psycopg.def -LC:\Python24\libs -LC:\Python24\PCBuild -Lc:/P ROGRA~1/POSTGR~1/8.3/lib -lpython24 -lmsvcr71 -lpq -lmsvcr71 -lws2_32 -ladvapi32 -o build\lib.win32-2.4\psycopg2\_psycopg.pyd C:\mingw\bin\..\lib\gcc\mingw32\3.4.5\..\..\..\..\mingw32\bin\ld.exe: cannot fin d -lpq collect2: ld returned 1 exit status error: command 'gcc' failed with exit status 1 What I've tried - I noticed the forward slashes in the -L option, so I manually entered my PostgreSQL lib directory in the library_dirs option in the setup.cfg, to no avail (the call then had a -L option with backslashes, but the error message stayed the same). A: Have you tried the binary build of psycopg2 for windows? If that works with your python then it mitigates the need to build by hand. I've seen random people ask this question on various lists and it seems one recommendation is to build postgresql by hand to work around this problem. A: Compiling extensions on windows can be tricky. There are precompiled libraries available however: http://www.stickpeople.com/projects/python/win-psycopg/
{ "language": "en", "url": "https://stackoverflow.com/questions/126364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Controlling access to an internal collection in c# - Pattern required This is kind of hard to explain, I hope my English is sufficient: I have a class "A" which should maintain a list of objects of class "B" (like a private List). A consumer of class "A" should be able to add items to the list. After the items are added to the list, the consumer should not be able to modify them again, left alone that he should not be able to temper with the list itself (add or remove items). But he should be able to enumerate the items in the list and get their values. Is there a pattern for it? How would you do that? If the question is not clear enough, please let me know. A: To prevent editing the list or its items you have to make them immutable, which means you have to return a new instance of an element on every request. See Eric Lippert's excellent series of "Immutability in C#": http://blogs.msdn.com/ericlippert/archive/tags/Immutability/C_2300_/default.aspx (you have to scroll down a bit) A: As many of these answers show, there are many ways to make the collection itself immutable. It takes more effort to keep the members of the collection immutable. One possibility is to use a facade/proxy (sorry for the lack of brevity): class B { public B(int data) { this.data = data; } public int data { get { return privateData; } set { privateData = value; } } private int privateData; } class ProxyB { public ProxyB(B b) { actual = b; } public int data { get { return actual.data; } } private B actual; } class A : IEnumerable<ProxyB> { private List<B> bList = new List<B>(); class ProxyEnumerator : IEnumerator<ProxyB> { private IEnumerator<B> b_enum; public ProxyEnumerator(IEnumerator<B> benum) { b_enum = benum; } public bool MoveNext() { return b_enum.MoveNext(); } public ProxyB Current { get { return new ProxyB(b_enum.Current); } } Object IEnumerator.Current { get { return this.Current; } } public void Reset() { b_enum.Reset(); } public void Dispose() { b_enum.Dispose(); } } public void AddB(B b) { bList.Add(b); } public IEnumerator<ProxyB> GetEnumerator() { return new ProxyEnumerator(bList.GetEnumerator()); } IEnumerator IEnumerable.GetEnumerator() { return this.GetEnumerator(); } } The downside of this solution is that the caller will be iterating over a collection of ProxyB objects, rather than the B objects they added. A: EDIT: Added support for edition contexts. Caller can only add elements inside an edition context. You can aditionally enforce that only one edition context can be created for the lifetime of the instance. Using encapsulation you can define any set of policies to access the inner private member. 
The following example is a basic implementation of your requirements: namespace ConsoleApplication2 { using System; using System.Collections.Generic; using System.Collections; class B { } interface IEditable { void StartEdit(); void StopEdit(); } class EditContext<T> : IDisposable where T : IEditable { private T parent; public EditContext(T parent) { parent.StartEdit(); this.parent = parent; } public void Dispose() { this.parent.StopEdit(); } } class A : IEnumerable<B>, IEditable { private List<B> _myList = new List<B>(); private bool editable; public void Add(B o) { if (!editable) { throw new NotSupportedException(); } _myList.Add(o); } public EditContext<A> ForEdition() { return new EditContext<A>(this); } public IEnumerator<B> GetEnumerator() { return _myList.GetEnumerator(); } IEnumerator IEnumerable.GetEnumerator() { return this.GetEnumerator(); } public void StartEdit() { this.editable = true; } public void StopEdit() { this.editable = false; } } class Program { static void Main(string[] args) { A a = new A(); using (EditContext<A> edit = a.ForEdition()) { a.Add(new B()); a.Add(new B()); } foreach (B o in a) { Console.WriteLine(o.GetType().ToString()); } a.Add(new B()); Console.ReadLine(); } } } A: You basically want to avoid to give away references to the class B items. That's why you should do a copy of the items. I think this can be solved with the ToArray() method of a List object. You need to create a deep-copy of the list if you want to prevent changes. Generally speaking: most of the times it is not worthwhile to do a copy to enforce good behaviour, especially when you also write the consumer. A: public class MyList<T> : IEnumerable<T>{ public MyList(IEnumerable<T> source){ data.AddRange(source); } public IEnumerator<T> GetEnumerator(){ return data.Enumerator(); } private List<T> data = new List<T>(); } The downside is that a consumer can modify the items it gets from the Enumerator, a solution is to make deepcopy of the private List<T>. A: It wasn't clear whether you also needed the B instances themselves to be immutable once added to the list. You can play a trick here by using a read-only interface for B, and only exposing these through the list. internal class B : IB { private string someData; public string SomeData { get { return someData; } set { someData = value; } } } public interface IB { string SomeData { get; } } A: The simplest that I can think of is return a readonly version of the underlying collection if editing is no longer allowed. public IList ListOfB { get { if (_readOnlyMode) return listOfB.AsReadOnly(); // also use ArrayList.ReadOnly(listOfB); else return listOfB; } } Personally though, I would not expose the underlying list to the client and just provide methods for adding, removing, and enumerating the B instances. A: Wow, there are some overly complex answers here for a simple problem. Have a private List<T> Have an public void AddItem(T item) method - whenever you decide to make that stop working, make it stop working. You could throw an exception or you could just make it fail silently. Depends on what you got going on over there. Have a public T[] GetItems() method that does return _theList.ToArray()
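Since the pattern in that last answer is language-agnostic (keep the collection private, accept items through a method, hand out only snapshots), here is the same idea sketched in Python purely for illustration; the class names are hypothetical and it mirrors the List<T>/ToArray() suggestion rather than the fancier proxy approaches:

class B:
    def __init__(self, data):
        self.data = data

class A:
    def __init__(self):
        self._items = []           # private by convention, never exposed directly

    def add_item(self, item):
        self._items.append(item)

    def get_items(self):
        return tuple(self._items)  # immutable snapshot, analogous to ToArray()

a = A()
a.add_item(B(1))
a.add_item(B(2))
for b in a.get_items():
    print(b.data)

As with the C# versions, this protects the list itself but not the B instances it contains; making those read-only still requires an immutable type or a proxy, as discussed above.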
{ "language": "en", "url": "https://stackoverflow.com/questions/126367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: In Scrum, where does the detail sit? We've been using Scrum on a few projects now with varying success, and I now have a query relating to documentation. In Scrum, you obviously have the product backlog ("The application begins by bringing up the last document the user was working with.") and the sprint task backlog ("Implement forgot password screen"). However, in all the examples I have seen, these two items are fairly high level in terms of detail (being designed to fit on a post-it note). So, where does the detail sit? Let's say the client has some very specific requirements for a stock management screen, or has a complex API that needs to be integrated with on the back end. Where is this documented, how, and who captures this information? Is it separate from the backlog but populated on a just-in-time basis, or handled some other way? A: Sprint backlog The sprint backlog is a greatly detailed document containing information about how the team is going to implement the requirements for the upcoming sprint. Tasks are broken down into hours, with no task being more than 16 hours. If a task is greater than 16 hours, it should be broken down further. Tasks on the sprint backlog are never assigned; rather, tasks are signed up for by the team members as they like. A: Detail can sit in a wiki available to the whole team and editable by the whole team. A: Not sure if this is as simple as it sounds. We've seen challenges with the detail part as well. Let's say we're developing a story that requires capturing simple contact information for, say, a CRM system. I now have the stories from the PO and we went through the sprint planning meeting and understood the first 5 stories that meet our velocity. However, it's always a struggle to capture all the details of the conversation: for example, how the screen needs to be laid out, what the 20+ fields on the screen need to be, whether some of these fields should look up information from other tables/views, etc. Who captures those details (should it be the PO or the developer), and what's the best practice for storing them? We're currently trying to use wikis for this; however, it becomes an overhead to maintain the action items on who needs to update which details and by when. A: My understanding is that specific requirements such as this are handled by the product owner. They will liaise with the client during Sprint Planning 2 and update the tasks with specific requirements as needed - hence why the Product Owner is an optional attendee of the Sprint Planning 2 meeting. This gives you a hybrid of just-in-time and Sprint Planning 2 population of the specifics. Anything that isn't resolved by the time you come to work on the task becomes an impediment and should be dealt with at the daily scrum, by the product owner. As development is agile when using Scrum, you shouldn't find it too much of an issue to get requirements just in time.
{ "language": "en", "url": "https://stackoverflow.com/questions/126369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to implement unobtrusive javascript with dynamic content generation? I write a lot of dynamically generated content (developing under PHP) and I use jQuery to add extra flexibility and functionality to my projects. Thing is that it's rather hard to add JavaScript in an unobtrusive manner. Here's an example: You have to generate a random number of div elements each with different functionality triggered onClick. I can use the onclick attribute on my div elements to call a JS function with a parameter but that is just a bad solution. Also I could generate some jQuery code along with each div in my PHP for loop, but then again this won't be entirely unobtrusive. So what's the solution in situations like this? A: You need to add something to the divs that defines what type of behaviour they have, then use jQuery to select those divs and add the behaviour. One option is to use the class attribute, although arguably this should be used for presentation rather than behaviour. An alternative would be the rel attribute, but I usually find that you also want to specify different CSS for each behaviour, so class is probably ok in this instance. So for instance, lets assume you want odd and even behaviour: <div class="odd">...</div> <div class="even">...</div> <div class="odd">...</div> <div class="even">...</div> Then in jQuery: $(document).load(function() { $('.odd').click(function(el) { // do stuff }); $('.even').click(function(el) { // dostuff }); }); jQuery has a very powerful selector engine that can find based on any CSS based selector, and also support some XPath and its own selectors. Get to know them! http://docs.jquery.com/Selectors A: I would recommend that you use this thing called "Event delegation". This is how it works. So, if you want to update an area, say a div, and you want to handle events unobtrusively, you attach an event handler to the div itself. Use any framework you prefer to do this. The event attachment can happen at any time, regardless of if you've updated the div or not. The event handler attached to this div will receive the event object as one of it's arguments. Using this event object, you can then figure which element triggered the event. You could update the div any number of times: events generated by the children of the div will bubble up to the div where you can catch and handle them. This also turns out to be a huge performance optimization if you are thinking about attaching multiple handlers to many elements inside the div. A: I would recommend disregarding the W3C standards and writing out HTML-properties on the elements that need handlers attached to them: Note: this will not break the rendering of the page! <ul> <li handler="doAlertOne"></li> <li handler="doAlertTwo"></li> <li handler="doAlertThree"></li> </ul> Declare a few functions: function doAlertOne() { } function doAlertTwo() { } function doAlertThree() { } And then using jQuery like so: $("ul li").each(function () { switch($(this).attr("handler")) { case "doAlertOne": doAlertOne(); break; case ... etc. } }); Be pragmatic. A: It's a bit hard to tell from your question, but perhaps you can use different jQuery selectors to set up different click behaviours? 
For example, say you have the following: <div class="section-1"> <div></div> </div> <div class="section-2"> <div></div> </div> Perhaps you could do the following in jQuery: $('.section-1 div').click(...one set of functionality...); $('.section-2 div').click(...another set of functionality...); Basically, decide based on context what needs to happen. You could also select all of the divs and test for some parent or child element to determine what functionality they get. I'd have to know more about the specifics of your situation to give more focused advice, but maybe this will get you started. A: I don't really know much about jQuery, but I do know that the Dojo toolkit makes highly unobtrusive JavaScript possible. Take a look at the example here: http://dojocampus.org/explorer/#Dojo_Query_Adding%20Events The demo dynamically adds events to a plain HTML table based on classes. Another example is the behaviour features, described here: http://dojocampus.org/content/2008/03/26/cleaning-your-markup-with-dojobehavior/
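A hedged jQuery sketch of the event-delegation idea described in the earlier answer: one handler on a container survives any number of dynamically generated child divs. The container id, class names and handler bodies below are placeholders, not from the original post.

    $(document).ready(function () {
        // A single click handler on the container; events from generated
        // children bubble up to it, so no per-div wiring is needed.
        $('#generated-area').click(function (e) {
            var target = $(e.target);
            if (target.hasClass('delete-item')) {
                // behaviour for "delete" divs
            } else if (target.hasClass('edit-item')) {
                // behaviour for "edit" divs
            }
        });
    });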
{ "language": "en", "url": "https://stackoverflow.com/questions/126373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Non colliding hash algorithm for strings up to 255 characters I am looking for a hash-algorithm, to create as close to a unique hash of a string (max len = 255) as possible, that produces a long integer (DWORD). I realize that 26^255 >> 2^32, but also know that the number of words in the English language is far less than 2^32. The strings I need to 'hash' would be mostly single words or some simple construct using two or three words. The answer: One of the FNV variants should meet your requirements. They're fast, and produce fairly evenly distributed outputs. (Answered by Arachnid) A: See here for a previous iteration of this question (and the answer). A: One technique is to use a well-known hash algorithm (say, MD5 or SHA-1) and use only the first 32 bits of the result. Be aware that the risk of hash collisions increases faster than you might expect. For information on this, read about the Birthday Paradox. A: Ronny Pfannschmidt did a test with common english words yesterday and hasn't encountered any collisions for the 10000 words he tested in the Python string hash function. I haven't tested it myself, but that algorithm is very simple and fast, and seems to be optimized for common words. Here the implementation: static long string_hash(PyStringObject *a) { register Py_ssize_t len; register unsigned char *p; register long x; if (a->ob_shash != -1) return a->ob_shash; len = Py_SIZE(a); p = (unsigned char *) a->ob_sval; x = *p << 7; while (--len >= 0) x = (1000003*x) ^ *p++; x ^= Py_SIZE(a); if (x == -1) x = -2; a->ob_shash = x; return x; } A: H(key) = [GetHash(key) + 1 + (((GetHash(key) >> 5) + 1) % (hashsize – 1))] % hashsize MSDN article on HashCodes A: Java's String.hash() can be easily viewed here, its algorithm is s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1]
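Since the accepted answer points at the FNV variants, here is a minimal 32-bit FNV-1a sketch in C. The two constants are the standard published FNV values; treat the function name and interface as illustrative only.

    #include <stdint.h>
    #include <stddef.h>

    /* 32-bit FNV-1a: XOR each byte into the hash, then multiply by the FNV prime. */
    uint32_t fnv1a_32(const char *s, size_t len)
    {
        uint32_t hash = 2166136261u;        /* FNV offset basis */
        for (size_t i = 0; i < len; ++i) {
            hash ^= (unsigned char)s[i];
            hash *= 16777619u;              /* FNV prime */
        }
        return hash;
    }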
{ "language": "en", "url": "https://stackoverflow.com/questions/126381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Define small row height in Reporting Services 2005 I want to specify a small row height in a Reporting Services report of about 3pt. While the report looks OK in the previewer, once deployed, the row height resets to the standard row height. I have adjusted the "CanGrow" and "CanShrink" settings as well as the padding, lineHeight, font size, etc... A: I've found that one way to fix this is to put a single underscore in each column of the row. The problem is actually with the way a blank row is output. If you view the source of the rendered report you will see that the row you are trying to keep short is output like so: <TR style="HEIGHT:1.06mm"> <TD class="a19">&nbsp;</TD> <TD class="a20">&nbsp;</TD> <TD class="a21">&nbsp;</TD> </TR> Those non-breaking spaces (&nbsp;) are what cause the incorrect height; if you removed them, the row would render at the requested height. Putting an underscore character in each column of the row removes the blank space that would normally be emitted, so your row height comes out more accurately. You may want to change the text color of each column to match your row background color, so the underscore is never visible.
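To make that trick concrete, here is an illustrative RDL-style fragment for one spacer cell (element names and order abbreviated, not copied from a real report): the value is a literal underscore and the font colour matches the row background so it stays invisible.

    <Textbox Name="SpacerCell">
      <Value>_</Value>
      <CanGrow>false</CanGrow>
      <Style>
        <Color>White</Color>        <!-- same as the row background, so the underscore never shows -->
        <FontSize>2pt</FontSize>
        <PaddingTop>0pt</PaddingTop>
        <PaddingBottom>0pt</PaddingBottom>
      </Style>
    </Textbox>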
{ "language": "en", "url": "https://stackoverflow.com/questions/126382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Is it a bad idea to implement a timer loop in Flex? In our game project we did have a timer loop set to fire about 20 times a second (the same as the application framerate). We use this to move some sprites around. I'm wondering if this could cause problems and we should instead do our updates using an EnterFrame event handler? I get the impression that having a timer loop run faster than the application framerate is likely to cause problems... is this the case? As an update, trying to do it on EnterFrame caused very weird problems. Instead of a frame every 75ms, suddenly it jumped to 25ms. Note, it wasn't just our calculation claimed the framerate was different, suddenly the animations sped up to a crazy rate. A: I'd go for the Enter frame, in some special cases it can be useful to have two "loops" one for logic and one for the visuals, but for most games I make I stick to the Enter frame-event listener. Having a separate timer for moving your stuff around is a bit unnecessary since having it set to anything except the framerate would make the motion either jerky or just not visible (since the frame is not redrawn). One thing to consider however is to decouple your logic from the framerate, this is most easily accomplished by using getTimer (available in both as2 and as3) to calculate the time that has expired since the last frame and adjusting the motions or whatever accordingly. A timer is no more reliable than the enter frame event, flash will try to keep up with whatever rate you've set, but if you're doing heavy processing or complex graphics it will slow down, both timers and framerate. A: Here's a rundown of how Flash handles framerates and why you saw your content play faster. At the deepest level, whatever host application that Flash is running in (the browser usually) polls flash at some interval. That interval might be every 10ms in one browser, or 50ms in another. Every time time that poll occurs, Flash does something like this: * *Have (1000/framerate) miliseconds passed since the last frame update? * *If no: do nothing and return *If yes: Execute a frame update: * *Advance all (playing) timelines one frame *Dispatch all events (including an ENTER_FRAME event *Execute all frame scripts and event handlers with pending events *Draw screen updates *return However, certain kinds of external events (such as keypresses, mouse events, and timer events) are handled asynchronously to the above process. So if you have an event handler that fires when a key is pressed, the code in that handler might be executed several times between frame updates. The screen will still only be redrawn once per frame update, unless you use the updateAfterEvent() method (global in AS2, attached to events in AS3). Note that the asynchronous behavior of these events does not affect the timing of frame updates. Even if you use timer events to, for example, redraw the screen 50 times per second, frame animations will still occur at the published framerate, and scripted animations will not execute any faster if they're driven by the enterFrame event (rather than the timer). A: The nice thing about using enter frame events, is your processing will degrade at the same pace as the rendering and you'll get a screen update right after the code block finishes. Either method isn't guaranteed to occur at a specific time interval. So your event handler should be determining how long it's been since it last executed, and making decisions off of that instead of purely how many times it's run. 
A: I think timerEvent and Enter Frame are both good options, I have used both of them in my games. ( Did you mean timerEvent by timer loop? ) PS: notice that in slow machines the timer may not refresh quick enough, so you may need to adjust your code to make game work "faster" in slow machines. A: I would suggest using a class such as TweenLite ( http://blog.greensock.com/tweenliteas3/ ) which is lightweight at about 3kb or if you need more power you can use TweenMax, which i believe is 11kb. There are many advantages here. First off, this "engine" has been thoroughly tested and benchmarked and is well known as one of the most resource friendly ways to animate few or even many things. I have seen a benchmark, where in AS3, 1,500 sprites are being animated with TweenLite and it holds a strong 20 fps, as where competitors like Tweener would bog down to 9 fps http://blog.greensock.com/tweening-speed-test/. The next advantage is the ease of use as I will demonstrate below. //Make sure you have a class path pointed at a folder that contains the following. import gs.TweenLite; import gs.easing.*; var ball_mc:MovieClip = new MovieClip(); var g:Graphics = ball_mc.graphics; g.beginFill(0xFF0000,1); g.drawCircle(0,0,10); g.endFill(); //Now we animate ball_mc //Example: TweenLite.to(displayObjectName, totalTweeningTime, {someProperty:someValue,anotherProperty:anotherValue,onComplete:aFunctionCalledWhenComplete}); TweenLite.to(ball_mc, 1,{x:400,alpha:0.5}); So this takes ball_mc and moves it to 400 from its current position on the x axis and during that same Tween it reduces or increases the alpha from its current value to 0.5. After importing the needed class, it is really only 1 line of code to animate each object, which is really nice. We can a also affect the ease, which I believe by default is Expo.easeOut(Strong easeOut). If you wanted it to bounce or be elastic such effects are available just by adding a property to the object as follows. TweenLite.to(ball_mc, 1,{x:400,alpha:0.5,ease:Bounce.easeOut}); TweenLite.to(ball_mc, 1,{x:400,alpha:0.5,ease:Elastic.easeOut}); The easing all comes from the gs.easing.* import which I believe is Penner's Easing Equations utilized through TweenLite. In the end we have no polling (Open loops) to manage such as Timer and we have very readable code that can be amended or removed with ease. It is also important to note that TweenLite and TweenMax offer far more than I have displayed here and it is safe to say that I use one of the two classes in every single project. The animations are custom, they have functionality attached to them (onComplete: functionCall), and again, they are optimal and resource friendly.
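A small ActionScript 3 sketch of the "decouple logic from framerate" advice above: drive movement from ENTER_FRAME, but scale it by the elapsed milliseconds from getTimer() so the speed stays constant even if the framerate wobbles. The ball_mc sprite and the speed value are just placeholders for this example.

    import flash.events.Event;
    import flash.utils.getTimer;

    var lastTime:int = getTimer();
    var pixelsPerSecond:Number = 120;

    addEventListener(Event.ENTER_FRAME, onEnterFrame);

    function onEnterFrame(e:Event):void {
        var now:int = getTimer();
        var elapsed:Number = (now - lastTime) / 1000; // seconds since the last frame
        lastTime = now;

        // Move by time, not by frame count, so a dropped frame just means a bigger step.
        ball_mc.x += pixelsPerSecond * elapsed;
    }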
{ "language": "en", "url": "https://stackoverflow.com/questions/126385", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: SQL server 2005 numeric precision loss While debugging some finance-related SQL code I found a strange issue with numeric(24,8) mathematics precision. Running the following query on MSSQL, the A + B * C expression comes out as 0.123457 SELECT A, B, C, A + B * C FROM ( SELECT CAST(0.12345678 AS NUMERIC(24,8)) AS A, CAST(0 AS NUMERIC(24,8)) AS B, CAST(500 AS NUMERIC(24,8)) AS C ) T So we have lost two significant digits. Trying to fix this in different ways, I found that converting the intermediate multiplication result (which is zero!) to numeric(24,8) works fine. So I have a solution, but I still have a question - why does MSSQL behave this way, and which type conversions actually occurred in my sample? A: Just as addition of the float type is inaccurate, multiplication of the decimal types can be inaccurate (or cause inaccuracy) if you exceed the precision. See Data Type Conversion and decimal and numeric. Since you multiplied NUMERIC(24,8) and NUMERIC(24,8), and SQL Server will only check the type not the content, it probably will try to save the potential 16 non-decimal digits (24 - 8) when it can't save all 48 digits of precision (max is 38). Combine two of them, you get 32 non-decimal digits, which leaves you with only 6 decimal digits (38 - 32). Thus the original query SELECT A, B, C, A + B * C FROM ( SELECT CAST(0.12345678 AS NUMERIC(24,8)) AS A, CAST(0 AS NUMERIC(24,8)) AS B, CAST(500 AS NUMERIC(24,8)) AS C ) T reduces to SELECT A, B, C, A + D FROM ( SELECT CAST(0.12345678 AS NUMERIC(24,8)) AS A, CAST(0 AS NUMERIC(24,8)) AS B, CAST(500 AS NUMERIC(24,8)) AS C, CAST(0 AS NUMERIC(38,6)) AS D ) T Again, between NUMERIC(24,8) and NUMERIC(38,6), SQL Server will try to save the potential 32 digits of non-decimals, so A + D reduces to SELECT CAST(0.12345678 AS NUMERIC(38,6)) which gives you 0.123457 after rounding. A: Following the logic pointed out by eed3si9n and what you said in your question, it seems the best approach when doing mathematical operations is to extract them into a function and additionally specify the precision after each operation. In this case the function could look something like: create function dbo.myMath(@a as numeric(24,8), @b as numeric(24,8), @c as numeric(24,8)) returns numeric(24,8) as begin declare @d as numeric(24,8) set @d = @b * @c return @a + @d end A: Despite what it says on Precision, Scale, and Length (Transact-SQL), I believe it is also applying a minimum 'scale' (number of decimal places) of 6 to the resulting NUMERIC type for multiplication, the same as it does for division etc.
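For completeness, the intermediate-cast workaround the question alludes to can also stay inline rather than in a function; a sketch using the same literals as the question:

    SELECT A, B, C,
           A + CAST(B * C AS NUMERIC(24,8)) AS Result  -- force the product back to (24,8) before adding
    FROM (
        SELECT CAST(0.12345678 AS NUMERIC(24,8)) AS A,
               CAST(0          AS NUMERIC(24,8)) AS B,
               CAST(500        AS NUMERIC(24,8)) AS C
    ) T;

With the product cast back to NUMERIC(24,8), the subsequent addition keeps a scale of 8 and all eight decimal places of A survive.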
{ "language": "en", "url": "https://stackoverflow.com/questions/126401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Change database files location in MySQL administrator? I would like to change the database files location of MySQL administrator to another drive of my computer. (I run Windows XP SP2 and MySQL Administrator 1.2.8.) --Under the startup variable --> General Parameters --> I changed Data directory: from C:/Program Files/MySQL/MySQL Server 5.0/data to D:/....., but after I stopped the service and restarted it, the following error appeared: Could not re-connect to the MySQL Server. Server could not be started. Fatal error: Can't open and lock privilege tables: Table 'mysql.host' doesn't exist Has anyone else had this problem? A: In Windows * *Navigate to C:\Program Files\MySQL\MySQL Server 5.4\ and locate the my.ini file *Find the SERVER SECTION and go to approx line 76 and modify the datadir line to where you want your MySQL application data to be stored *Now navigate to C:\Documents and Settings\All Users\Application Data\MySQL\MySQL Server 5.4\data\ and copy and paste the mysql folder into your new location. *Restart the MySQL Server in Control Panel > Administrative Tools > Service A: Normally it works like this: * *shut down MySQL *change the [mysqld] and [mysqld_safe] datadir variable in the MySQL configuration *change the basedir variable in the same section. *move the location over *restart MySQL If that doesn't work I have no idea. On linux you can try to move the socket to a new location too, but that shouldn't affect windows. Alternatively you can use a symbolic link on *nix what most people do I guess. A: You also have to manually modify mysql's configuration (usually my.conf) A: MySQL Administrator cannot be used for tasks like this. It is merely a tool for looking at MySQL servers, despite its name. Relocating data is described in many MySQL tutorials and in the manual IIRC. But basically it's just moving the data to a new location while the server is shut down and then correcting the paths in the servers config file. After that you should be able to restart the server and connect MySQL Administrator to it. A: Make sure you give the Network Service Full permissions in the security tab of Windows Explorer options. If the server can't read/write etc. to the selected folder the service will either not start or it will attempt a start and shut right down.
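A minimal sketch of the relevant my.ini change on Windows, per the answers above (the drive letter and folder are placeholders; your paths will differ):

    [mysqld]
    # Point the server at the new data location, then stop the service,
    # move the contents of the old data folder here, and start the service again.
    datadir=D:/MySQLData/data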
{ "language": "en", "url": "https://stackoverflow.com/questions/126406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Eliminating `switch` statements What are ways of eliminating the use of switch statements in code? A: I think that what you are looking for is the Strategy Pattern. This can be implemented in a number of ways, which have been mentionned in other answers to this question, such as: * *A map of values -> functions *Polymorphism. (the sub-type of an object will decide how it handles a specific process). *First class functions. A: 'switch' is just a language construct and all language constructs can be thought of as tools to get a job done. As with real tools, some tools are better suited to one task than another (you wouldn't use a sledge hammer to put up a picture hook). The important part is how 'getting the job done' is defined. Does it need to be maintainable, does it need to be fast, does it need to scale, does it need to be extendable and so on. At each point in the programming process there are usually a range of constructs and patterns that can be used: a switch, an if-else-if sequence, virtual functions, jump tables, maps with function pointers and so on. With experience a programmer will instinctively know the right tool to use for a given situation. It must be assumed that anyone maintaining or reviewing code is at least as skilled as the original author so that any construct can be safely used. A: switch statements would be good to replace if you find yourself adding new states or new behaviour to the statements: int state; String getString() { switch (state) { case 0 : // behaviour for state 0 return "zero"; case 1 : // behaviour for state 1 return "one"; } throw new IllegalStateException(); } double getDouble() { switch (this.state) { case 0 : // behaviour for state 0 return 0d; case 1 : // behaviour for state 1 return 1d; } throw new IllegalStateException(); } Adding new behaviour requires copying the switch, and adding new states means adding another case to every switch statement. In Java, you can only switch a very limited number of primitive types whose values you know at runtime. This presents a problem in and of itself: states are being represented as magic numbers or characters. Pattern matching, and multiple if - else blocks can be used, though really have the same problems when adding new behaviours and new states. The solution which others have suggested as "polymorphism" is an instance of the State pattern: Replace each of the states with its own class. Each behaviour has its own method on the class: IState state; String getString() { return state.getString(); } double getDouble() { return state.getDouble(); } Each time you add a new state, you have to add a new implementation of the IState interface. In a switch world, you'd be adding a case to each switch. Each time you add a new behaviour, you need to add a new method to the IState interface, and each of the implementations. This is the same burden as before, though now the compiler will check that you have implementations of the new behaviour on each pre-existing state. Others have said already, that this may be too heavyweight, so of course there is a point you reach where you move from one to another. Personally, the second time I write a switch is the point at which I refactor. A: A switch is a pattern, whether implemented with a switch statement, if else chain, lookup table, oop polymorphism, pattern matching or something else. Do you want to eliminate the use of the "switch statement" or the "switch pattern"? 
The first one can be eliminated, the second one, only if another pattern/algorithm can be used, and most of the time that is not possible or it's not a better approach to do so. If you want to eliminate the switch statement from code, the first question to ask is where does it make sense to eliminate the switch statement and use some other technique. Unfortunately the answer to this question is domain specific. And remember that compilers can do various optimizations to switch statements. So for example if you want to do message processing efficiently, a switch statement is pretty much the way to go. But on the other hand running business rules based on a switch statement is probably not the best way to go and the application should be rearchitected. Here are some alternatives to switch statement : * *lookup table *polymorphism *pattern matching (especially used in functional programming, C++ templates) A: if-else I refute the premise that switch is inherently bad though. A: Switch in itself isn't that bad, but if you have lots of "switch" or "if/else" on objects in your methods it may be a sign that your design is a bit "procedural" and that your objects are just value buckets. Move the logic to your objects, invoke a method on your objects and let them decide how to respond instead. A: Well, for one, I didn't know using switch was an anti pattern. Secondly, switch can always be replaced with if / else if statements. A: Why do you want to? In the hands of a good compiler, a switch statement can be far more efficient than if/else blocks (as well as being easier to read), and only the largest switches are likely to be sped up if they're replaced by any sort of indirect-lookup data structure. A: Switch-statements are not an antipattern per se, but if you're coding object oriented you should consider if the use of a switch is better solved with polymorphism instead of using a switch statement. With polymorphism, this: foreach (var animal in zoo) { switch (typeof(animal)) { case "dog": echo animal.bark(); break; case "cat": echo animal.meow(); break; } } becomes this: foreach (var animal in zoo) { echo animal.speak(); } A: See the Switch Statements Smell: Typically, similar switch statements are scattered throughout a program. If you add or remove a clause in one switch, you often have to find and repair the others too. Both Refactoring and Refactoring to Patterns have approaches to resolve this. If your (pseudo) code looks like: class RequestHandler { public void handleRequest(int action) { switch(action) { case LOGIN: doLogin(); break; case LOGOUT: doLogout(); break; case QUERY: doQuery(); break; } } } This code violates the Open Closed Principle and is fragile to every new type of action code that comes along. To remedy this you could introduce a 'Command' object: interface Command { public void execute(); } class LoginCommand implements Command { public void execute() { // do what doLogin() used to do } } class RequestHandler { private Map<Integer, Command> commandMap; // injected in, or obtained from a factory public void handleRequest(int action) { Command command = commandMap.get(action); command.execute(); } } A: I think the best way is to use a good Map. Using a dictionary you can map almost any input to some other value/object/function. your code would look something(psuedo) like this: void InitMap(){ Map[key1] = Object/Action; Map[key2] = Object/Action; } Object/Action DoStuff(Object key){ return Map[key]; } A: Switch is not a good way to go as it breaks the Open Close Principal. 
This is how I do it. public class Animal { public abstract void Speak(); } public class Dog : Animal { public virtual void Speak() { Console.WriteLine("Hao Hao"); } } public class Cat : Animal { public virtual void Speak() { Console.WriteLine("Meauuuu"); } } And here is how to use it (taking your code): foreach (var animal in zoo) { echo animal.speak(); } Basically what we are doing is delegating the responsibility to the child class instead of having the parent decide what to do with children. You might also want to read up on "Liskov Substitution Principle". A: In JavaScript using Associative arrays, this: function getItemPricing(customer, item) { switch (customer.type) { // VIPs are awesome. Give them 50% off. case 'VIP': return item.price * item.quantity * 0.50; // Preferred customers are no VIPs, but they still get 25% off. case 'Preferred': return item.price * item.quantity * 0.75; // No discount for other customers. case 'Regular': case default: return item.price * item.quantity; } } becomes this: function getItemPricing(customer, item) { var pricing = { 'VIP': function(item) { return item.price * item.quantity * 0.50; }, 'Preferred': function(item) { if (item.price <= 100.0) return item.price * item.quantity * 0.75; // Else return item.price * item.quantity; }, 'Regular': function(item) { return item.price * item.quantity; } }; if (pricing[customer.type]) return pricing[customer.type](item); else return pricing.Regular(item); } Courtesy A: Everybody loves HUGE if else blocks. So easy to read! I am curious as to why you would want to remove switch statements, though. If you need a switch statement, you probably need a switch statement. Seriously though, I'd say it depends on what the code's doing. If all the switch is doing is calling functions (say) you could pass function pointers. Whether it's a better solution is debatable. Language is an important factor here also, I think. A: If the switch is there to distinguish between various kinds of objects, you're probably missing some classes to precisely describe those objects, or some virtual methods... A: For C++ If you are referring to ie an AbstractFactory I think that a registerCreatorFunc(..) method usually is better than requiring to add a case for each and every "new" statement that is needed. Then letting all classes create and register a creatorFunction(..) which can be easy implemented with a macro (if I dare to mention). I believe this is a common approach many framework do. I first saw it in ET++ and I think many frameworks that require a DECL and IMPL macro uses it. A: In a procedural language, like C, then switch will be better than any of the alternatives. In an object-oriented language, then there are almost always other alternatives available that better utilise the object structure, particularly polymorphism. The problem with switch statements arises when several very similar switch blocks occur at multiple places in the application, and support for a new value needs to be added. It is pretty common for a developer to forget to add support for the new value to one of the switch blocks scattered around the application. With polymorphism, then a new class replaces the new value, and the new behaviour is added as part of adding the new class. Behaviour at these switch points is then either inherited from the superclass, overridden to provide new behaviour, or implemented to avoid a compiler error when the super method is abstract. 
Where there is no obvious polymorphism going on, it can be well worth implementing the Strategy pattern. But if your alternative is a big IF ... THEN ... ELSE block, then forget it. A: Function pointers are one way to replace a huge chunky switch statement, they are especially good in languages where you can capture functions by their names and make stuff with them. Of course, you ought not force switch statements out of your code, and there always is a chance you are doing it all wrong, which results with stupid redundant pieces of code. (This is unavoidable sometimes, but a good language should allow you to remove redundancy while staying clean.) This is a great divide&conquer example: Say you have an interpreter of some sort. switch(*IP) { case OPCODE_ADD: ... break; case OPCODE_NOT_ZERO: ... break; case OPCODE_JUMP: ... break; default: fixme(*IP); } Instead, you can use this: opcode_table[*IP](*IP, vm); ... // in somewhere else: void opcode_add(byte_opcode op, Vm* vm) { ... }; void opcode_not_zero(byte_opcode op, Vm* vm) { ... }; void opcode_jump(byte_opcode op, Vm* vm) { ... }; void opcode_default(byte_opcode op, Vm* vm) { /* fixme */ }; OpcodeFuncPtr opcode_table[256] = { ... opcode_add, opcode_not_zero, opcode_jump, opcode_default, opcode_default, ... // etc. }; Note that I don't know how to remove the redundancy of the opcode_table in C. Perhaps I should make a question about it. :) A: The most obvious, language independent, answer is to use a series of 'if'. If the language you are using has function pointers (C) or has functions that are 1st class values (Lua) you may achieve results similar to a "switch" using an array (or a list) of (pointers to) functions. You should be more specific on the language if you want better answers. A: Switch statements can often be replaced by a good OO design. For example, you have an Account class, and are using a switch statement to perform a different calculation based on the type of account. I would suggest that this should be replaced by a number of account classes, representing the different types of account, and all implementing an Account interface. The switch then becomes unnecessary, as you can treat all types of accounts the same and thanks to polymorphism, the appropriate calculation will be run for the account type. A: Depends why you want to replace it! Many interpreters use 'computed gotos' instead of switch statements for opcode execution. What I miss about C/C++ switch is the Pascal 'in' and ranges. I also wish I could switch on strings. But these, while trivial for a compiler to eat, are hard work when done using structures and iterators and things. So, on the contrary, there are plenty of things I wish I could replace with a switch, if only C's switch() was more flexible! A: Use a language that doesn't come with a built-in switch statement. Perl 5 comes to mind. Seriously though, why would you want to avoid it? And if you have good reason to avoid it, why not simply avoid it then? A: Another vote for if/else. I'm not a huge fan of case or switch statements because there are some people that don't use them. The code is less readable if you use case or switch. Maybe not less readable to you, but to those that have never needed to use the command. The same goes for object factories. If/else blocks are a simple construct that everyone gets. There's a few things you can do to make sure that you don't cause problems. Firstly - Don't try and indent if statements more than a couple of times. 
If you're finding yourself indenting, then you're doing it wrong. if a = 1 then do something else if a = 2 then do something else else if a = 3 then do the last thing endif endif endif Is really bad - do this instead. if a = 1 then do something endif if a = 2 then do something else endif if a = 3 then do something more endif Optimisation be damned. It doesn't make that much of a difference to the speed of your code. Secondly, I'm not averse to breaking out of an If Block as long as there are enough breaks statements scattered through the particular code block to make it obvious procedure processA(a:int) if a = 1 then do something procedure_return endif if a = 2 then do something else procedure_return endif if a = 3 then do something more procedure_return endif end_procedure EDIT: On Switch and why I think it's hard to grok: Here's an example of a switch statement... private void doLog(LogLevel logLevel, String msg) { String prefix; switch (logLevel) { case INFO: prefix = "INFO"; break; case WARN: prefix = "WARN"; break; case ERROR: prefix = "ERROR"; break; default: throw new RuntimeException("Oops, forgot to add stuff on new enum constant"); } System.out.println(String.format("%s: %s", prefix, msg)); } For me the issue here is that the normal control structures which apply in C like languages have been completely broken. There's a general rule that if you want to place more than one line of code inside a control structure, you use braces or a begin/end statement. e.g. for i from 1 to 1000 {statement1; statement2} if something=false then {statement1; statement2} while isOKtoLoop {statement1; statement2} For me (and you can correct me if I'm wrong), the Case statement throws this rule out of the window. A conditionally executed block of code is not placed inside a begin/end structure. Because of this, I believe that Case is conceptually different enough to not be used. Hope that answers your questions.
{ "language": "en", "url": "https://stackoverflow.com/questions/126409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "194" }
Q: Is Oracle Coherence stable? Has anyone used Oracle Coherence? It looks very promising at the roadshows. My concern is whether it's stable and robust enough to implement mission-critical financial solutions. I'd be grateful for any feedback on its performance, robustness and ease of maintenance. A: Coherence is in production at hundreds of companies. Many of the companies are large financial institutions and large consumer websites. For example, hotwire.com uses Coherence. A: As with any technology, Coherence has the capability to meet your needs for performance and robustness. If you understand the technology and implement it correctly, then yes. I've been using it for a few months now. So far, it's doing fine in production. We haven't gotten to the maintenance part yet. A: I've had first-hand experience with Oracle Coherence at two big investment banks and can say it's definitely stable. However, as with any complex piece of software, it's not without its quirks. EDIT: Doh, just realised the question is over a year old. Oh well... A: Oracle Coherence is a mature product, but you need to make sure that you have the latest patches from Oracle. The point releases that are downloadable from the Oracle website have many bugs that are fixed in the patch releases. As for development and maintenance, the learning curve is a little steeper than the documentation would suggest. I recommend the Oracle training course (or Alek Seovic's book).
{ "language": "en", "url": "https://stackoverflow.com/questions/126421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Is it possible to change the natural order of columns in Postgres? Is it possible to change the natural order of columns in Postgres 8.1? I know that you shouldn't rely on column order - it's not essential to what I am doing - I only need it to make some auto-generated stuff come out in a way that is more pleasing, so that the field order matches all the way from pgadmin through the back end and out to the front end. A: I'm wanting the same. Yes, order isn't essential for my use-case, but it just rubs me the wrong way :) What I'm doing to resolve it is as follows. This method will ensure you KEEP any existing data, * *Create a new version of the table using the ordering I want, using a temporary name. *Insert all data into that new table from the existing one. *Drop the old table. *Rename the new table to the "proper name" from "temporary name". *Re-add any indexes you previously had. *Reset ID sequence for primary key increments. Current table order: id, name, email 1. Create a new version of the table using the ordering I want, using a temporary name. In this example, I want email to be before name. CREATE TABLE mytable_tmp ( id SERIAL PRIMARY KEY, email text, name text ); 2. Insert all data into that new table from the existing one. INSERT INTO mytable_tmp --- << new tmp table ( id , email , name ) SELECT id , email , name FROM mytable; --- << this is the existing table 3. Drop the old table. DROP TABLE mytable; 4. Rename the new table to the "proper name" from "temporary name". ALTER TABLE mytable_tmp RENAME TO mytable; 5. Re-add any indexes you previously had. CREATE INDEX ... 6. Reset ID sequence for primary key increments. SELECT setval('public.mytable_id_seq', max(id)) FROM mytable; A: Reorder the columns in postgresql walkthrough Warning: this approach deletes table properties such as unique indexes and other unintended consequences that come with doing a drop your_table. So you'll need to add those back on after. --create a table where column bar comes before column baz: CREATE TABLE foo ( moo integer, bar character varying(10), baz date ); --insert some data insert into foo (moo, bar, baz) values (34, 'yadz', now()); insert into foo (moo, bar, baz) values (12, 'blerp', now()); select * from foo; ┌─────┬───────┬────────────┐ │ moo │ bar │ baz │ ├─────┼───────┼────────────┤ │ 34 │ yadz │ 2021-04-07 │ │ 12 │ blerp │ 2021-04-07 │ └─────┴───────┴────────────┘ -- Define your reordered columns here, don't forget one, -- or it'll be missing from the replacement. drop view if exists my_view; create view my_view as ( select moo, baz, bar from foo ); select * from my_view; DROP TABLE IF EXISTS foo2; --foo2 is your replacement table that has columns ordered correctly create table foo2 as select * from my_view; select * from foo2; --finally drop the view and the original table and rename DROP VIEW my_view; DROP TABLE foo; ALTER TABLE foo2 RENAME TO foo; --observe the reordered columns: select * from foo; ┌─────┬────────────┬───────┐ │ moo │ baz │ bar │ ├─────┼────────────┼───────┤ │ 34 │ 2021-04-07 │ yadz │ │ 12 │ 2021-04-07 │ blerp │ └─────┴────────────┴───────┘ Get the prior order of column names for copying and pasting If your table you want to reorder has hundreds of columns, you'll want to automate the getting of the given order of columns so you can copy, nudge, then paste into the above views. 
SELECT string_agg(column_name, ',') from ( select * FROM INFORMATION_SCHEMA.COLUMNS WHERE table_name = 'your_big_table' order by ordinal_position asc ) f1; Which prints: column_name_1,column_name_2, ..., column_name_n You copy the above named ordering, you move them to where they belong then paste into the view up top. A: You can actually just straight up change the column order, but I'd hardly recommend it, and you should be very careful if you decide to do it. eg. # CREATE TABLE test (a int, b int, c int); # INSERT INTO test VALUES (1,2,3); # SELECT * FROM test; a | b | c ---+---+--- 1 | 2 | 3 (1 row) Now for the tricky bit, you need to connect to your database using the postgres user so you can modify the system tables. # SELECT relname, relfilenode FROM pg_class WHERE relname='test'; relname | relfilenode ---------+------------- test_t | 27666 (1 row) # SELECT attrelid, attname, attnum FROM pg_attribute WHERE attrelid=27666; attrelid | attname | attnum ----------+----------+-------- 27666 | tableoid | -7 27666 | cmax | -6 27666 | xmax | -5 27666 | cmin | -4 27666 | xmin | -3 27666 | ctid | -1 27666 | b | 1 27666 | a | 2 27666 | c | 3 (9 rows) attnum is a unique column, so you need to use a temporary value when you're modifying the column numbers as such: # UPDATE pg_attribute SET attnum=4 WHERE attname='a' AND attrelid=27666; UPDATE 1 # UPDATE pg_attribute SET attnum=1 WHERE attname='b' AND attrelid=27666; UPDATE 1 # UPDATE pg_attribute SET attnum=2 WHERE attname='a' AND attrelid=27666; UPDATE 1 # SELECT * FROM test; b | a | c ---+---+--- 1 | 2 | 3 (1 row) Again, because this is playing around with database system tables, use extreme caution if you feel you really need to do this. This is working as of postgres 8.3, with prior versions, your milage may vary. A: If your database is not very big and you can afford some downtime then you can: * *Disable write access to the database this is essential as otherwise any changes after starting the next point will be lost *pg_dump --create --column-inserts databasename > databasename.pgdump.sql *Edit apropriate CREATE TABLE statement in databasename.pgdump.sql If the file is too big for your editor just split it using split command, edit, then assemble back using cat *drop database databasename You do have a recent backup, just in case, do you? *psql --single-transaction -f databasename.pgdump.sql If you don't use --single-transaction it will be very slow If you use so called large objects make sure they are included in the dump. I'm not sure if they are by default in 8.1. A: I have asked that question in pgsql-admin in 2007. Tom Lane himself declared it practically unfeasible to change the order in the catalogs. * *http://archives.postgresql.org/pgsql-admin/2007-06/msg00037.php Clarification: this applies for users with the present tools. Does not mean, it could not be implemented. IMO, it should be. Still true for Postgres 12. A: You can get the column ordering that you want by creating a new table and selecting columns of the old table in the order that you want them to present: CREATE TABLE test_new AS SELECT b, c, a FROM test; SELECT * from test_new; b | c | a ---+---+--- 2 | 3 | 1 (1 row) Note that this copies data only, not modifiers, constraints, indexes, etc.. Once the new table is modified the way you want, drop the original and alter the name of the new one: BEGIN; DROP TABLE test; ALTER TABLE test_new RENAME TO test; COMMIT; A: Unfortunately, no, it's not. Column order is entirely up to Postgres. 
A: Specifying the column order in the query is the only reliable (and sane) way. That said, you can usually get a different ordering by altering the table as shown in the example below as the columns are usually (not guaranteed to be) returned in the order they were added to the table. postgres=# create table a(a int, b int, c int); CREATE TABLE postgres=# insert into a values (1,2,3); INSERT 0 1 postgres=# select * from a; a | b | c ---+---+--- 1 | 2 | 3 (1 row) postgres=# alter table a add column a2 int; ALTER TABLE postgres=# select * from a; a | b | c | a2 ---+---+---+---- 1 | 2 | 3 | (1 row) postgres=# update a set a2 = a; UPDATE 1 postgres=# alter table a drop column a; ALTER TABLE postgres=# alter table a rename column a2 to a; ALTER TABLE postgres=# select * from a; b | c | a ---+---+--- 2 | 3 | 1 (1 row) postgres=#
{ "language": "en", "url": "https://stackoverflow.com/questions/126430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: Different odd/even pages in MS Access reports For a report in MS Access (2007) I need to put the data from some columns on all odd pages and other columns on all even pages. It is for printing out double-sided card files onto sheets of paper. Does somebody have an idea how to do that? A: Your question is too general. I would suggest you have all columns on all pages, and then add some code to the page header section (or even in the detail section) "On Format" to change the .Visible property of your Detail text boxes depending on the page number. I think you'll need to have a Text Box in the page header or footer with "=[Page]" as source data in order to know the correct page number. My Access report knowledge might be severely outdated, though. A: Well, you can check whether "Page" is odd or even in an "On Format" event and make columns visible or not visible depending on which page you are on. However, it would be far easier to: Put in a couple of sections and put in a new page between them. Then it's just a matter of ensuring that you don't overflow the page with too many rows per card. OR Make the report wide enough that it forces a second page and then place those columns on a second page (i.e. the back of the first page). As I recall, Access's print order is left-right, top-bottom, so pages laid out in a two-by-two grid (A B on the top row, C D below) would print in the order A, B, C, D. In the case of having to display data from the same record on two consecutive pages, this is the option I would choose. A: You could alternatively use a pair of queries, printing the first to side A, and the second to side B, perhaps?
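A hedged VBA sketch of the odd/even visibility approach from the first two answers; the control names (txtFront, txtBack) are placeholders for your own text boxes, and as the first answer notes you may need a text box bound to =[Page] on the report for the page number to be populated while formatting.

    ' In the report's Detail section Format event:
    Private Sub Detail_Format(Cancel As Integer, FormatCount As Integer)
        Dim oddPage As Boolean
        oddPage = (Me.Page Mod 2 = 1)

        ' Show one set of columns on odd pages, the other on even pages.
        Me.txtFront.Visible = oddPage
        Me.txtBack.Visible = Not oddPage
    End Sub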
{ "language": "en", "url": "https://stackoverflow.com/questions/126431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Any way to remove IEs black border around submit button in active forms? I am implementing a design that uses custom styled submit-buttons. They are quite simply light grey buttons with a slightly darker outer border: input.button { background: #eee; border: 1px solid #ccc; } This looks just right in Firefox, Safari and Opera. The problem is with Internet Explorer, both 6 and 7. Since the form is the first one on the page, it's counted as the main form - and thus active from the get go. The first submit button in the active form receives a solid black border in IE, to mark it as the main action. If I turn off borders, then the black extra border in IE goes away too. I am looking for a way to keep my normal borders, but remove the outline. A: I know I'm almost 2 years late to the game, but I found another solution (at least for IE7). If you add another input type="submit" to your form before any other submit button in the form the problem will go away. Now you just need to hide this new, black-border-absorbing-button. This works for me (overflow needs to be "auto"): <input type="submit" value="" style="height:0;overflow:auto;position:absolute;left:-9999px;" /> Note: I am using an HTML5 doctype (<!doctype html>). A: Well this works here: <html> <head> <style type="text/css"> span.button { background: #eee; border: 1px solid #ccc; } span.button input { background:none; border:0; margin:0; padding:0; } </style> </head> <body> <span class="button"><input type="button" name="..." value="Button"/></span> </body> </html> A: if you dont want to add a wrapper to the input / button then try doing this. As this is invalid CSS then make sre its for IE only. Have the border as per for other browsers but use the filter:chroma for IE... <!--[if IE]> <style type="text/css"> input { filter:chroma(color=#000000); border:none; } </style> <![endif]--> worked for me. A: I've found an answer that works for me on another forum. It removes the unwanted black border in ie6 and ie7. It's probable that some/many of you have not positioned your input="submit" in form tags. Don't overlook this. It worked for me after trying everything else. If you are using a submit button, make sure it is within a form and not just a fieldset: <form><fieldset><input type="submit"></fieldset></form> A: I was able to combine David Murdoch's suggestion with some JQuery such that the fix will automatically be applied for all 'input:submit' elements on the page: // Test for IE7. if ($.browser.msie && parseInt($.browser.version, 10) == 7) { $('<input type="submit" value="" style="height:0;overflow:auto;position:absolute;left:-9999px;" />') .insertBefore("input:submit"); } You can include this in a Master Page or equivalent, so it gets applied to all pages in your site. It works, but it does feel a bit wrong, somehow. A: I'm building on @nickmorss's example of using filters which didn't really work out for my situation... Using the glow filter instead worked out much better for me. <!--[if IE]> <style type="text/css"> input[type="submit"], input[type="button"], button { border: none !important; filter: progid:DXImageTransform.Microsoft.glow(color=#d0d0d0,strength=1); height: 24px; /* I had to adjust the height from the original value */ } </style> <![endif]--> A: Right, well here's an ugly fix for you to weigh up... Stick the button in a <span>, nuke the border on the button and give the border to the span instead. IE is a bit iffy about form element margins so this might not work precisely. 
Perhaps giving the span the same background as the button might help in that respect. span.button { background: #eee; border: 1px solid #ccc; } span.button input { background: #eee; border:0; } and <span class="button"><input type="button" name="..." value="Button"/></span> A: The best solution I have found, is to move the border to a wrapping element, like this: <div class='submit_button'><input type="submit" class="button"></div> With this CSS: .submit_button { width: 150px; border: 1px solid #ccc; } .submit_button .button { width: 150px; border: none; } The main problem with this solution is that the button now is a block-element, and needs to be fixed-width. We could use inline-block, except that Firefox2 does not support it. Any better solutions are welcome. A: I think filter:chroma(color=#000000); as metnioned a wile ago is the best as you can apply in certain class. Otherwise you will have to go and apply an extra tag on every button you have that is if you are using classes of course. .buttonStyle { filter:chroma(color=#000000); BACKGROUND-COLOR:#E5813C solid; BORDER-BOTTOM: #cccccc 1px solid; BORDER-LEFT: #cccccc 1px solid; BORDER-RIGHT: #cccccc 1px solid; BORDER-TOP: #cccccc 1px solid; COLOR:#FF9900; FONT-FAMILY: Verdana, Arial, Helvetica, sans-serif; FONT-SIZE: 10px; FONT-WEIGHT: bold; TEXT-DECORATION: none; } That did it for me! A: I had this problem and solved it with a div around the button, displayed it as a block, and positioned it manually. the margins for buttons in IE and FF was just too unpredictable and there was no way for them both to be happy. My submit button had to be perfectly lined up against the input, so it just wouldnt work without positioning the items as blocks. A: This is going to work: input[type=button] { filter:chroma(color=#000000); } This works even with button tag, and eventually you can safely use the background-image css property. A: The correct answer to this qustion is: outline: none; ... works for IE and Chrome, in my knowledge. A: A hackish solution might be to use markup like this: <button><span>Go</span></button> and apply your border styles to the span element. A: add *border:none this removes the border for IE6 and IE7, but keeps it for the other browsers A: With the sliding doors technique, use two spans inside of the button. And eliminate any formatting on the button in your IE override. <button><span class="open">Search<span class="close"></span></span></button> A: I can't comment (yet) so I have to add my comment this way. I thing Mr. David Murdoch's advice is the best for Opera ( here ). OMG, what a lovely girl he's got btw. I've tried his approach in Opera and I succeeded basically doubling the input tags in this way: <input type="submit" value="Go" style="display:none;" id="WorkaroundForOperaInputFocusBorderBug" /> <input type="submit" value="Go" /> This way the 1st element is hidden but it CATCHES the display focus Opera would give to the 2nd input element instead. LOVE IT! A: At least in IE7 you can style the border althogh you can't remove it (set it to none). So setting the color of the border to the same color that your background should do. .submitbutton { background-color: #fff; border: #fff dotted 1px; } if your background is white. A: For me the below code actually worked. <!--[if IE]> <style type="text/css"> input[type=submit],input[type=reset],input[type=button] { filter:chroma(color=#000000); color:#010101; } </style> <![endif]--> Got it from @Mark's answer and loaded it only for IE.
{ "language": "en", "url": "https://stackoverflow.com/questions/126445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: Call Stack at Runtime I want to access the call stack at runtime in a Native C++ application. I am not using the IDE. How do I display the call stack? Update: I have a function which is called from many points all over the application. It crashes on rare occasions. I was looking for a way to get name of the caller and log it. A: Have a look at StackWalk64. If you're used to doing this on .NET, then you're in for a nasty surprise. A: I believe that this page has the answer you are looking for. You said Visual C so I assume you mean windows. A: You should consider setting your unhandled exception filter and writing a minidump file from within it. It is not all that complicated and is well documented. Just stick to the minimum of things you do once in your unhandled exception filter (read what can all go wrong if you get creative). But to be on the safe side (your unhandled exception filter might get inadvertently overwritten), you could put your code inside __try/__except block and write the minidump from within the filter function (note, you cannot have objects that require automatic unwinding in a function with __try/__except block, if you do have them, consider putting them into a separate function): long __stdcall myfilter(EXCEPTION_POINTERS *pexcept_info) {     mycreateminidump(pexcept_info);     return EXCEPTION_EXECUTE_HANDLER; } void myfunc() { __try{     //your logic here } __except(myfilter(GetExceptionInformation())) {     // exception handled } } You can then inspect the dump file with a debugger of your choice. Both Visual Studio and debuggers from Windows Debugging Tools package can handle minidumps. A: If you want to get a callstack of the crash, what you really want to do is post mortem debugging. If you want to check a callstack of application while it is running, this is one of many functions SysInternals Process Explorer can offer. A: If you're not actively debugging, you can "crash" the app to produce a minidump (this can be done non-invasively and lets the app continue running). IIRC DrWatson will let you do this, if not userdump from MS support will. You can then load the dump into windbg and see the callstack + variables etc there. You will need your app's symbols to make sense of the trace. If you're looking for a simpler run-time code style traces, I recommend a simple class that you instantiate on every method, the constructor writes the method name using OutputDebugString. Use WinDebug to view the trace as the program runs. (put some form of control in your class, even if its just a global variable or registry value, or global Atom so you can turn the tracing on or off at will). A: It crashes on rare occasions. I was looking for a way to get name of the caller and log it. What do you mean by it crashes? Access Violation? Divide by zero? what exactly? Does it interact with kernel mode components? Turn on appverifier. that should eliminate a lot of things. create this: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\FileName.exe under that key, create a new string name : debugger value: c:\pathtowindbg\windbg.exe -gG -xe av If you're running 32bit code with WOW, you need to do this under the wow3264node.
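A hedged sketch of what the mycreateminidump helper referenced in one of the answers above could look like, using MiniDumpWriteDump from dbghelp (link against dbghelp.lib; the dump path is a placeholder, and error handling is trimmed):

    #include <windows.h>
    #include <dbghelp.h>
    #pragma comment(lib, "dbghelp.lib")

    void mycreateminidump(EXCEPTION_POINTERS* pexcept_info)
    {
        HANDLE hFile = CreateFile(TEXT("C:\\dumps\\myapp.dmp"), GENERIC_WRITE, 0,
                                  NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (hFile == INVALID_HANDLE_VALUE)
            return;

        MINIDUMP_EXCEPTION_INFORMATION mei;
        mei.ThreadId = GetCurrentThreadId();
        mei.ExceptionPointers = pexcept_info;
        mei.ClientPointers = FALSE;

        // MiniDumpNormal keeps the file small but still records the call stacks,
        // so the faulting caller shows up when the dump is opened in a debugger.
        MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), hFile,
                          MiniDumpNormal, &mei, NULL, NULL);
        CloseHandle(hFile);
    }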
{ "language": "en", "url": "https://stackoverflow.com/questions/126450", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Redirect the output of the exe batch file I want to execute a certain batch file and redirect its console output to a text control in Visual C++, or redirect the console output while the logs/echoes are being produced. A: Basically, you have to make the spawned process write to a pipe, and then read the output of that pipe. [EDIT] I know how SciTE does that (you can take a look at the source: win32/SciTEWin.cxx, ExecuteOne function); I searched for a slightly more generic way and found How to spawn console processes with redirected standard handles from Microsoft itself. If you search for the CreatePipe, PeekNamedPipe and CreateProcess keywords, for example, you might find other examples. A: Another option is to use Boost.Process (Boost.Process is not (yet) an official Boost C++ library; it must be downloaded and installed separately). The example "Child.4 - Reading from a child using asynchronous I/O" shows how to redirect the output of the child process into a C++ stream (and later access the content). The same example shows how to use Boost.Process together with Boost.Asio to access the child's I/O asynchronously. The advantage of this method is that Boost.Process supports both the Windows API and the POSIX API. A: If elegance is not a priority then a really simple solution might be to redirect the output to a file, and then read in the file contents.
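To illustrate the CreatePipe/CreateProcess approach from the first answer, here is a rough C++ sketch (error handling is trimmed for brevity, and the command line passed by the caller is a placeholder):

#include <windows.h>
#include <string>

// Runs a command (e.g. L"cmd.exe /c build.bat") and captures everything it
// writes to stdout/stderr through an anonymous pipe.
std::string RunAndCapture(const std::wstring& commandLine)
{
    SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };   // handles are inheritable
    HANDLE hRead = NULL, hWrite = NULL;
    CreatePipe(&hRead, &hWrite, &sa, 0);
    SetHandleInformation(hRead, HANDLE_FLAG_INHERIT, 0);   // keep the read end to ourselves

    STARTUPINFOW si = { sizeof(si) };
    si.dwFlags = STARTF_USESTDHANDLES;
    si.hStdOutput = hWrite;
    si.hStdError = hWrite;
    si.hStdInput = GetStdHandle(STD_INPUT_HANDLE);

    PROCESS_INFORMATION pi = { 0 };
    std::wstring cmd = commandLine;                        // CreateProcessW wants a writable buffer
    CreateProcessW(NULL, &cmd[0], NULL, NULL, TRUE, CREATE_NO_WINDOW, NULL, NULL, &si, &pi);
    CloseHandle(hWrite);                                   // close our copy so ReadFile can see end-of-pipe

    std::string output;
    char buffer[4096];
    DWORD bytesRead = 0;
    while (ReadFile(hRead, buffer, sizeof(buffer), &bytesRead, NULL) && bytesRead > 0)
        output.append(buffer, bytesRead);                  // feed this to the text control as it arrives

    CloseHandle(hRead);
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    return output;
}

Calling RunAndCapture(L"cmd.exe /c mybatch.bat") would then return the batch file's output as a string you can display in the text control.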
{ "language": "en", "url": "https://stackoverflow.com/questions/126452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I deploy registry keys and values using WiX 3.0? If I want to create the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog\Application\MyApp with the string value EventMessageFile : C:\Path\To\File.dll how do I define this in my WiX 3.0 WXS file? Examples of what the XML should look like are much appreciated. A: You seem to want to create an event log source. If that is the case, you should take a look at the <EventSource> element in the util extension. A: Check out this page. An example would be: <registry action="write" root="HKLM" key="SYSTEM\CurrentControlSet\Services\Eventlog\Application\MyApp" name="EventMessageFile" type="string" value="C:\Path\To\File.dll" /> A: I went with this: <Component Id="EventLogRegKeys" Guid="{my guid}"> <RegistryKey Id="Registry_EventLog" Root="HKLM" Key="SYSTEM\CurrentControlSet\Services\Eventlog\Application\MyApp" Action="create"> <RegistryValue Id="Registry_EventLog_EventSourceDll" Action="write" KeyPath="yes" Name="EventMessageFile" Type="string" Value="C:\Path\To\File.dll" /> </RegistryKey> </Component> A: It would be better to refer to File.dll using file reference syntax, to ensure that the actual path it's installed to is used. Use [#filekey], where filekey is the Id of the File element describing the file. A: Use the following under DirectoryRef --> Directory... <Component Id="RegisterAddReferencesTab32" Guid="D9D01248-8F19-45FC-B807-093CD6765A60"> <RegistryValue Action="write" Id="RegInstallDir32" Key="SYSTEM\CurrentControlSet\Services\Eventlog\Application\MyApp" Root="HKLM" Name="EventMessageFile" Type="string" Value="C:\Path\To\File.dll" /></Component>
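To illustrate the [#filekey] suggestion above, a sketch (the File Id, Source path and component GUID are made-up placeholders, and the Component is assumed to sit under an appropriate DirectoryRef):

<Component Id="EventLogSource" Guid="PUT-GUID-HERE">
  <File Id="File_MyAppDll" Source="path\to\File.dll" KeyPath="yes" />
  <RegistryValue Root="HKLM"
                 Key="SYSTEM\CurrentControlSet\Services\Eventlog\Application\MyApp"
                 Name="EventMessageFile" Type="string" Value="[#File_MyAppDll]" />
</Component>

At install time [#File_MyAppDll] resolves to the full path the file was actually installed to, so the registry value stays correct even if the user changes the install directory.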
{ "language": "en", "url": "https://stackoverflow.com/questions/126465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to stop right click dead-locking Visual Studio 2008 I have a very serious problem with Visual Studio 2008. Occasionally when I right-click (for Go To Definition, Set Next Statement, etc.) while I'm debugging, Visual Studio will just dead-lock and go into not responding mode. Has anyone had the same problem? Does anyone know how to solve it? Edit: I'm using SP1 with a couple of hot-fixes. A: Problem: Signed applications/DLLs load slowly in Vista. The Visual Studio IDE 'hangs' on offline/non-internet-connected workstations. Without internet connectivity the certificate revocation check times out and causes applications to hang. When debugging/stepping through code, DLLs are loaded as needed, and this is when the revocation check is attempted and the VS IDE becomes unresponsive. What this affects: This affects all signed applications/DLLs and is also the reason for Microsoft Word/Excel taking so long to open a simple document. Office applications, SQL Management Studio, Visual Studio, web applications that use a certificate. Fix: Disable checking of Publisher's Certificate Revocation Via IE: * *Go to Internet Options in IE 7 *Then go to the Advanced tab and scroll down to the Security section *Uncheck the 'Check for Publisher's Certificate Revocation' checkbox *Click OK Via the registry: * *Open regedit *Browse to the following key: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\WinTrust\Trust Providers\Software Publishing *To disable the check: Change the State value to 146944 Decimal or 0x00023e00 Hexadecimal To re-enable the check: Change the State value to 146432 Decimal or 0x00023c00 Hexadecimal Alternate Fix: Disable the Visual Studio Hosting Process: * *Open a project in Visual Studio. *On the Project menu, click Properties. *Click the Debug tab. *Clear the Enable the Visual Studio hosting process check box. Note: The Alternate Fix causes the loss of some debugging functionality. Background: Microsoft Connect Report A: This issue no longer occurs for me since I've moved to Windows 7. If you're unfortunate enough to still be stuck with Windows Vista, I did discover that it only hung on right-click when waiting for the IntelliSense database to be built (see the bottom left corner for its progress). The only "fix" I had was to wait for IntelliSense to stop building, then do a right-click. A: I wrote a piece of code the other day, a very crazy template, and the latest Visual Studio would just hang if I placed my mouse over the templated code. It was surreal :) Anyway, you might have an issue like that and you might want to delete your IntelliSense database and try again. A: Try launching Visual Studio in safe mode to rule out problems with any extension installed. A: When debugging multi-threaded apps, sometimes I get a hang when a breakpoint is hit. And sometimes VS would hang (hourglass) when I tried to look at a variable by right-clicking on the variable within the code. I googled and found a hint that explained that when VS breaks, it evaluates all the variables in the locals and watch panes in order to display them. But in threaded apps, this can cause deadlocks if the code takes locks when evaluating values, for example in property getters. By closing the locals pane before I break, I avoided the hangs. I'm not explaining this very well. I tried googling again to find the original hint, but did not succeed. It may have been this: Why does Visual Studio stall while debugging?. A: Exit Visual Studio and delete the .ncb file for the project. A: No, but it sounds like a bug. 
Report it to MS and they'll give you instructions on how to get a debug setup going to send them information to debug it. A: Mark, have you applied SP1? I haven't had your exact issue, but I did have problems with it locking up for 15 seconds in debug mode (or when coming out of debug mode). I found a blog post somewhere that suggested some possible fixes. One of them was to go into IE 7 and open up Tools->Internet Options->Advanced Tab->Security section and uncheck the 'Check for publisher's certificate revocation' and 'check for server certificate revocation' (or at least the first one). Once I did that, my lock-up woes were over. Granted, my dev box isn't on the internet, so I didn't care much about most of those settings anyhow in IE. Don't know if that is any help to you, but it certainly fixed my issues with VS 2008. All the best! A: I find Visual Studio (VC9) locks up regularly when debugging multithreaded apps. I usually have to reboot to get the system back. A: For me, I found that VS was trying to open an IP address that was no longer valid, left over from some previous remote debugging. Check your debugging setup under Tools -> Options -> Debugging -> Symbols and make sure you don't have a bad path there. A: Not a solution I know, but justification for moving my VC projects from VC2008 to VC2010, where IntelliSense has been disabled. The recovery worked OK despite not having explicitly hit save for 3 hours. A: The following worked for me: delete the .ncb and .suo files of the associated project. Source A: Ah, another big show stopper could be Active Directory. If this happens at your work and they use Active Directory, this can happen. Someone here claimed it was a bug with Google Toolbar, but I don't have solid evidence on whether Google is responsible or not.
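For convenience, the registry tweak described in the first answer can also be applied by importing a .reg file; this is just the same State value from that answer (0x00023e00 disables the check, 0x00023c00 re-enables it):

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\WinTrust\Trust Providers\Software Publishing]
"State"=dword:00023e00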
{ "language": "en", "url": "https://stackoverflow.com/questions/126472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: SWD files in Flash Player 9 Does anybody have any pointers to what SWD files are in Flash 9, why Flash Player wants to download them sometimes, how to make one, and how to make use of them? I didn't manage to dig up anything useful myself yet. Update: I know roughly what SWD files are used for in Flash 8, and there is even a way to make them, but Flash 9 doesn't seem to need it at first glance, yet still attempts to use them sometimes. A: SWD files are needed to debug content with Adobe's debugging tools. You can see this in action by publishing from Flash authoring with shift-control-enter. The SWD itself is only needed for the debugging tool to see inside the SWF. You can throw it away once you're done debugging, and you never need to upload it to the server unless you're planning to do remote debugging. Docs: Debugging local files Debugging remote files A: SWD files are similar to SWF files, except that they contain debugging-specific information that the debugger and Flash Debug Player watch for. From: About SWD files The citation above is from the Flex documentation but applies to 'normal' Flash too.
{ "language": "en", "url": "https://stackoverflow.com/questions/126510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: PHP and MS Access: Number of Records returned by SELECT query I am running following PHP code to interact with a MS Access database. $odbc_con = new COM("ADODB.Connection"); $constr = "DRIVER={Microsoft Access Driver (*.mdb)}; DBQ=" . $db_path . ";"; $odbc_con -> open($constr); $rs_select = $odbc_con -> execute ("SELECT * FROM Main"); Using ($rs_select -> RecordCount) gives -1 though the query is returning non-zero records. (a) What can be the reason? (b) Is there any way out? I have also tried using count($rs_select -> GetRows()). This satisfies the need but looks inefficient as it will involve copying of all the records into an array first. A: It is possible that ODBC doesn't know the recordcount yet. In that case it is possible to go to the last record and only then will recordcount reflect the true number of records. This will probably also not be very efficient since it will load all records from the query. As Oli said, using SELECT COUNT(*) will give you the result. I think that using 2 queries would still be more efficient than using my first method. A: ADODB has its own rules for what recordcount is returned depending on the type of recordset you've defined. See: MS Knowledge Base article 194973 W3C Schools article In the example above, the PHP COM() object is used to instantiate ADODB, a COM interface for generic database access. According to the PHP documentation, the object reference produced is overloaded, so you can just use the same properties/methods that the native ADODB object would have. This means that you need to use the ADODB methods to set the recordset type to one that will give an accurate recordcount (if you must have it). The alternative, as others have mentioned, is to use a second query to get the COUNT() of the records returned by the SELECT statement. This is easier, but may be inappropriate in the particular environment. I'm not an ADO guru, so can't provide you with the exact commands for setting your recordset type, but from the articles cited above, it is clear that you need a static or keyset cursor. It appears to me that the proper method of setting the CursorType is to use a parameter in the command that opens the recordset. This W3C Schools article on the CursorType property gives the appropriate arguments for that command. Hopefully, this information will help the original poster accomplish his task, one way or the other. A: Doesn't Access have its own COUNT operator? eg: $rs_select = $odbc_con -> execute ("SELECT COUNT(*) FROM Main"); A: Basically, Access is not going to show you the whole record set until it needs to (it's faster that way for much of the user experience anyway) - especially with larger recordsets. To get an accurate count, you must traverse the entire record set. In VBA I normally do that with a duo of foo.MoveLast and foo.MoveFirst - I don't know what the php equivalents are. It's expensive, but since it sounds like you are going to be processing the whole record set anyway, I guess it is OK. (a side note, this same traversal is also necessary if you are manipulating bookmarks in VBA, as you can get some wild results if you clone a recordset and don't traverse it before you copy the bookmark back to the form's recordset) A: If you're using a dynamic cursor type of connection, then it could actually change. Someone may delete a record from that database while you're browsing through pages of records. To avoid, use a static sort of snapshot cursor. I have this bookmarked which will explain it well. 
This always got me and the bookmark always reminded me why. http://support.microsoft.com/kb/194973
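As a rough sketch of the cursor-type suggestion above (not the original poster's code): open the recordset explicitly with a static cursor so RecordCount gets populated. It reuses the $db_path and connection string from the question; 3 and 1 are the raw values of the ADO constants adOpenStatic and adLockReadOnly, which aren't predefined on the PHP side.

$odbc_con = new COM("ADODB.Connection");
$constr = "DRIVER={Microsoft Access Driver (*.mdb)}; DBQ=" . $db_path . ";";
$odbc_con->open($constr);

$rs = new COM("ADODB.Recordset");
$rs->Open("SELECT * FROM Main", $odbc_con, 3, 1); // 3 = adOpenStatic, 1 = adLockReadOnly
echo $rs->RecordCount; // a static cursor knows its full result set, so this should no longer be -1
$rs->Close();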
{ "language": "en", "url": "https://stackoverflow.com/questions/126513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: GIS/Mapping solutions that provide easy access to routing data I'm looking for a GIS/Mapping tool that will give me easy SERVER-SIDE access to route information (specifically, trip time and distance) in an ASP.NET web application. From what I can tell, Google and Yahoo maps do everything client-side in javascript, but neither provides services to retrieve just the route information. I'm interested in both free and paid products. Also, if you have experience with a product I'd like to hear what you think (complexity of API, runtime performance, likes, dislikes, etc.) A: ESRI's ArcGIS Server and ArcWeb services provide point-to-point routing. You have full control over creating the data (if you want), changing the data, customizing parameters, and even adding dynamic cost analysis. Server might be a bit heavy-weight for JUST routing as it's a full mapping and analysis server system. ArcWeb is an online service where you can buy just the services you want. Another option is Oracle Spatial. They have some built-in networking/routing capabilities to do point-to-point routing. I personally have been unable to get it to work, but I've heard second/third-hand comments that it works, but has the normal complexities of Oracle (i.e. not a DIY job). MapQuest also has a comprehensive set of APIs (much better than Google or Yahoo IMHO) that can do routing without a map. Not sure what their licensing/costs are and how they compare to Google/Yahoo for non-personal use. (One note: many of the "free" services require you to pay a license fee for non-personal use... they might ignore you until your traffic gets high enough for them to notice, so watch out.) A: We've had success with using the web services from Map24 (http://developer.navteq.com/site/global/zones/ms/index.jsp) to do some GIS-y type things. They have a web service for doing routing (http://developer.navteq.com/site/global/zones/ms/map24webservices/map24routingservice.jsp) which I had a quick play with last year. There's a bit of a learning curve with their APIs but once you get past it their stuff is quite powerful and performs really well. Phil A: Arc Web Services is going away and is now becoming ArcGIS Online. This means a whole new updated SDK for developing Arc web applications for web sites. If you need a really advanced routing system developed, a company called RouteSmart makes DLL-based route solver solutions for client-server apps and now web applications using ArcGIS 9.x. All ESRI APIs are complex. You have to just start using them, and when you gain experience you will feel more comfortable. Since ESRI has somewhat of a shrinking monopoly on certain aspects of GIS capabilities it will be interesting to see what Google and others develop to create some decent competition for the GIS world. Until then, all-encompassing GIS solutions will not be flexible or easy to work with, and will be expensive. One major gripe I have with ESRI products is rather ethereal; however, it seems to be a recurring theme I have experienced in all of their product lines. You do a code\db\whatever update and test. It doesn't work. You try again without making changes and it still doesn't work, but if you try one more time it suddenly works. It's like ESRI has coded everything to fail two times first before any change to a point release, MXD, SDE data, or Python code begins performing as intended. As strange as this may seem, if you have ever worked with their products you know this because you have experienced it yourself. 
This may have to do with my lack of understanding, but all my GIS friends tend to agree with me on this point. A: As I mentioned in one comment, we've had very good experience with Oracle Spatial NDM. Here's a very handy tutorial. Another option I know of is pgRouting. See also this post on GIS Stackexchange.
{ "language": "en", "url": "https://stackoverflow.com/questions/126520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Java: Eclipse - Diff Trace I've got two versions of a project with a JUnit test. Looking at the diff of the files there's no obvious reason for this, but the test passes on one project and not on the other. For a while now I've wondered if there was a program that would record a trace through the whole program when it runs and save it to a file. This would be really handy for this exact example, where I want a trace through both tests and then I want to diff them together. Can anyone point me in the direction of a program that might be able to do this, or a way that I could diff the two programs' flows? Cheers A: I'd use Aspects for this. Check out AspectJ, for instance; it is very easy to design a rule (pointcut) that says "For all methods invoked in my Java code, log the method name". A: If I understand your question correctly, it looks like this might partially achieve what you are after. http://www.lambdacs.com/debugger/debugger.html A: The Eclipse TPTP project has a trace engine that you might find useful.
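As a sketch of the AspectJ suggestion above (annotation style; the com.example package in the pointcut is a placeholder for your own packages, and it assumes the AspectJ weaver is set up for the project):

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class TraceAspect {
    // Log every method entry in your own code; redirect System.out to a file
    // for each run and diff the two resulting traces.
    @Before("execution(* com.example..*.*(..))")
    public void traceEntry(JoinPoint jp) {
        System.out.println("ENTER " + jp.getSignature().toLongString());
    }
}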
{ "language": "en", "url": "https://stackoverflow.com/questions/126521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Iterate a list with indexes in Python I could swear I've seen the function (or method) that takes a list, like this [3, 7, 19] and makes it into an iterable list of tuples, like so: [(0,3), (1,7), (2,19)] to use it instead of: for i in range(len(name_of_list)): name_of_list[i] = something but I can't remember the name and googling "iterate list" gets nothing. A: Here is a solution using the map function: >>> a = [3, 7, 19] >>> map(lambda x: (x, a[x]), range(len(a))) [(0, 3), (1, 7), (2, 19)] And a solution using list comprehensions: >>> a = [3,7,19] >>> [(x, a[x]) for x in range(len(a))] [(0, 3), (1, 7), (2, 19)] A: Python's enumerate function will satisfy your requirements: result = list(enumerate([1,3,7,12])) print result output [(0, 1), (1, 3), (2, 7),(3,12)] A: >>> a = [3,4,5,6] >>> for i, val in enumerate(a): ... print i, val ... 0 3 1 4 2 5 3 6 >>> A: If you have multiple lists, you can do this by combining enumerate and zip: list1 = [1, 2, 3, 4, 5] list2 = [10, 20, 30, 40, 50] list3 = [100, 200, 300, 400, 500] for i, (l1, l2, l3) in enumerate(zip(list1, list2, list3)): print(i, l1, l2, l3) Output: 0 1 10 100 1 2 20 200 2 3 30 300 3 4 40 400 4 5 50 500 Note that the parentheses are required after i. Otherwise you get the error: ValueError: need more than 2 values to unpack A: Here's another using the zip function. >>> a = [3, 7, 19] >>> zip(range(len(a)), a) [(0, 3), (1, 7), (2, 19)] A: Yep, that would be the enumerate function! Or more to the point, you need to do: list(enumerate([3,7,19])) [(0, 3), (1, 7), (2, 19)]
{ "language": "en", "url": "https://stackoverflow.com/questions/126524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "216" }
Q: Unit Testing of .NET Add-In for Microsoft Office Has anyone got any suggestions for unit testing a Managed Application Add-In for Office? I'm using NUnit but I had the same issues with MSTest. The problem is that there is a .NET assembly loaded inside the Office application (in my case, Word) and I need a reference to that instance of the .NET assembly. I can't just instantiate the object because it wouldn't then have an instance of Word to do things to. Now, I can use the Application.COMAddIns("Name of addin").Object interface to get a reference, but that gets me a COM object that is returned through the RequestComAddInAutomationService. My solution so far is that for that object to have proxy methods for every method in the real .NET object that I want to test (all set under conditional-compilation so they disappear in the released version). The COM object (a VB.NET class) actually has a reference to the instance of the real add-in, but I tried just returning that to NUnit and I got a nice p/Invoke error: System.Runtime.Remoting.RemotingException : This remoting proxy has no channel sink which means either the server has no registered server channels that are listening, or this application has no suitable client channel to talk to the server. at System.Runtime.Remoting.Proxies.RemotingProxy.InternalInvoke(IMethodCallMessage reqMcmMsg, Boolean useDispatchMessage, Int32 callType) at System.Runtime.Remoting.Proxies.RemotingProxy.Invoke(IMessage reqMsg) at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type) I tried making the main add-in COM visible and the error changes: System.InvalidOperationException : Operation is not valid due to the current state of the object. at System.RuntimeType.ForwardCallToInvokeMember(String memberName, BindingFlags flags, Object target, Int32[] aWrapperTypes, MessageData& msgData) While I have a work-around, it's messy and puts lots of test code in the real project instead of the test project - which isn't really the way NUnit is meant to work. A: This is how I resolved it. * *Just about everything in my add-in runs from the Click method of a button in the UI. I have changed all those Click methods to consist only of a simple, parameterless call. *I then created a new file (Partial Class) called EntryPoint that had lots of very short Friend Subs, each of which was usually one or two calls to parameterised worker functions, so that all the Click methods just called into this file. So, for example, there's a function that opens a standard document and calls a "save as" into our DMS. The function takes a parameter of which document to open, and there are a couple of dozen standard documents that we use. So I have Private Sub btnMemo_Click(ByVal Ctrl As Microsoft.Office.Core.CommandBarButton, ByRef CancelDefault As Boolean) Handles btnMemo.Click DocMemo() End Sub in the ThisAddin and then Friend Sub DocMemo() OpenDocByNumber("Prec", 8862, 1) End Sub in my new EntryPoints file. 
*I add a new AddInUtilities file which has Public Interface IAddInUtilities #If DEBUG Then Sub DocMemo() #End If End Interface Public Class AddInUtilities Implements IAddInUtilities Private Addin as ThisAddIn #If DEBUG Then Public Sub DocMemo() Implements IAddInUtilities.DocMemo Addin.DocMemo() End Sub #End If Friend Sub New(ByRef theAddin as ThisAddIn) Addin=theAddin End Sub End Class *I go to the ThisAddIn file and add in Private utilities As AddInUtilities Protected Overrides Function RequestComAddInAutomationService() As Object If utilities Is Nothing Then utilities = New AddInUtilities(Me) End If Return utilities End Function And now it's possible to test the DocMemo() function in EntryPoints using NUnit, something like this: <TestFixture()> Public Class Numbering Private appWord As Word.Application Private objMacros As Object <TestFixtureSetUp()> Public Sub LaunchWord() appWord = New Word.Application appWord.Visible = True Dim AddIn As COMAddIn = Nothing Dim AddInUtilities As IAddInUtilities For Each tempAddin As COMAddIn In appWord.COMAddIns If tempAddin.Description = "CobbettsMacrosVsto" Then AddIn = tempAddin End If Next AddInUtilities = AddIn.Object objMacros = AddInUtilities.TestObject End Sub <Test()> Public Sub DocMemo() objMacros.DocMemo() End Sub <TestFixtureTearDown()> Public Sub TearDown() appWord.Quit(False) End Sub End Class The only thing you can't then unit test are the actual Click events, because you're calling into EntryPoints in a different way, ie through the RequestComAddInAutomationService interface rather than through the event handlers. But it works! A: Consider the various mocking frameworks NMock, RhinoMocks, etc. to fake the behavior of Office in your tests.
{ "language": "en", "url": "https://stackoverflow.com/questions/126526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How can I get <asp:Menu> working in Safari? On the Safari browser, the standard <asp:Menu> doesn't render well at all. How can this be fixed? A: Thanks for the advice, it led me to the following solution; I created a file named "safari.browser" and placed it in the App_Browsers directory. The content of this file is shown below: <browsers> <browser refID="safari1plus"> <controlAdapters> <adapter controlType="System.Web.UI.WebControls.Menu" adapterType="" /> </controlAdapters> </browser> </browsers> As I understand it, this tells ASP.NET not to use the adapter it would normally use to render the control content and instead use uplevel rendering. A: You can use ControlAdapters to alter the rendering of server controls. Here's an example: http://www.pluralsight.com/community/blogs/fritz/archive/2007/03/27/46598.aspx Though, in my opinion, it might be an equal amount of work to abandon the menu control for a pure CSS one (available on many sites). A: Oooof - was hoping it would be a simple case of adding a browserCaps item in web.config with appropriate values or similar... A: The best and simplest solution I've found for this problem is to include this bit of code in your Page_Load event. if (Request.UserAgent.IndexOf("AppleWebKit") > 0) Request.Browser.Adapters.Clear();
{ "language": "en", "url": "https://stackoverflow.com/questions/126528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: VS2003/05 constantly screws up the display - is there any known fix for this? Running VS2003/05 under Vista makes the former screw up the display at least 50% of the time - you start debugging, VS kicks in and you see the windows/docking panes screwed up, not refreshing, etc... I've contacted Microsoft about this, but they weren't much help, so I was wondering if anyone knows of any fixes. I'm running VS with visual styles turned off under Vista, so that it doesn't hang when you try to do a "find in files". All the latest updates/service packs are installed. A: For Visual Studio 2005, install the Microsoft® Visual Studio® 2005 Service Pack 1 and the Visual Studio 2005 Service Pack 1 Update for Windows Vista. Take a look at the Visual Studio .NET 2003 on Windows Vista Issue List and see if you find something there, and see if that helps things. A: I've done a lot with Visual Studio 2005 Express on Vista, and have never seen any display issues. Vista is pretty sensitive to the quality of the video drivers - have you tried updating yours?
{ "language": "en", "url": "https://stackoverflow.com/questions/126546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Custom painting Windows Forms Scrollbar? I know how to do custom drawing of a standalone Windows Forms ScrollBar because there are plenty of examples at places like codeproject.com. But how do you custom draw the scrollbars in controls you do not create yourself? For example, a Panel can show scrollbars, but how would I ensure that the scrollbars it shows are custom drawn? Or maybe this is not possible and I would need to create my own version of a Panel so that it uses my own custom-drawn scrollbars? A: The scrollbars you see most often, including those built into most WinForms controls, are rendered by Windows, and there is no way to override their appearance in WinForms short of implementing an entirely custom solution which completely takes over the rendering and behavior of the common scrollbar control. There are some commercial packages which claim to do this (google winforms skinning).
{ "language": "en", "url": "https://stackoverflow.com/questions/126555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Why does Firefox 2 display fonts larger than specified in CSS? I have a webpage where Firefox 2 displays the font of certain, really specific elements larger than what I specified in the CSS. When I look at the affected element (mostly td elements as far as I can tell) with Firebug, I see that the font-size is inherited from the body (11px, so it's not a relative size). No styles overwrite this font-size, anywhere. When I toggle the Show computed style option, the font-size is displayed as 16px, Firefox's default. It seems that the td does not inherit the font-size properly. Obviously, I could specify a more specific CSS selector targeting the td (which in fact works) but I can't find any documented behavior of Firefox not inheriting the font-size properly. It only happens in a few tables, which are completely unrelated to each other. Does anyone know if I am overlooking something or is this a rendering issue in Firefox? Internet Explorer (I know, not the best reference for standards compliance) does not scale the font sizes up in tables. A: This is a reasonably well known annoyance: TABLEs and TDs will inherit all font styles except font-size, at least in XHTML. To "fix" this, set the font size also for the TABLE or TD element. So to clarify, this is not Firefox-specific. Did you test in other browsers?
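For example, a minimal sketch of that workaround (the 11px value just mirrors the body size mentioned in the question):

body  { font-size: 11px; }
table { font-size: 100%; } /* or repeat font-size: 11px; either way the tables pick up the intended size */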
{ "language": "en", "url": "https://stackoverflow.com/questions/126557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to replace a file in a msi installer? I want to replace a single file inside a msi. How to do it? A: Use msi2xml. * *This command extracts the MSI files: msi2xml -c OutputDir TestMSI.MSI *Open OutputDir and modify the file. *To rebuild the MSI run: xml2msi.exe -m TestMSI.xml You need the -m to ignore the 'MD5 checksum test' that fails when an MSIs file(s) are modified. A: Try InstEd - an installer editor at http://www.instedit.com/. It has a 30 day trial, and it works for me. You extract the files to a folder, edit, rebuild the cab, and then save the MSI. Everything but the edit of your files is done in the GUI. Not a great program but I paid my $30 to be able to quickly edit files in the MSI. I don't work for InstEd or related in any way other than paying for and using the application. A: You need to extract the CAB file stream from your msi using MsiDB.exe (supplied with the Windows Installer SDK). Run it from the command line with the -x option and specify the name of the cab file - this is listed in the Media table in the msi database. Alternatively you can skip this part if you specify the "Package Files as:" option in the VSI options to "Compresses in Cabinet Files" to have the cab file left out of the msi when it's built (it will be created in the same directory as the msi). Once extracted you can change the specified file in the cab folder - its name has been mangled so you need to find out what msi name for the file is in the file table and then rename your new file to that. Once done you can pop it back in with the MsiDB utility using the -a option. Before you add with -a you need to use msidb -k to remove the cab from the MSI. A: Very simple example code to replace a file inside an MSI. This does not stream the new file/CAB back into the MSI but requires the CAB to be in the same directory as the MSI for installation to succeed. I'm sure with a little more effort you could alter the code to stream the CAB back in. Const MSI_SOURCE = "application.msi" Const FILE_REPLACE = "config.xml" Dim filesys, installer, database, view Dim objFile, size, result, objCab Set filesys=CreateObject("Scripting.FileSystemObject") Set installer = CreateObject("WindowsInstaller.Installer") Set database = installer.OpenDatabase (MSI_SOURCE, 1) Set objFile = filesys.GetFile(FILE_REPLACE) size = objFile.Size Set objCab = CreateObject("MakeCab.MakeCab.1") objCab.CreateCab "config.cab", False, False, False objCab.AddFile FILE_REPLACE, filesys.GetFileName(FILE_REPLACE) objCab.CloseCab Set view = database.OpenView ("SELECT LastSequence FROM Media WHERE DiskId = 1") view.Execute Set result = view.Fetch seq = result.StringData(1) + 1 ' Sequence for new configuration file Set view = database.OpenView ("INSERT INTO Media (DiskId, LastSequence, Cabinet) VALUES ('2', '" & seq & "', 'config.cab')") view.Execute Set view = database.OpenView ("UPDATE File SET FileSize = " & size & ", Sequence = " & seq & " WHERE File = '" & LCase(FILE_REPLACE) & "'") view.Execute A: This code has only been tested on 1 file, where the name is excactly the same as the file being replaced.. but it should implement Christopher Painters answer in C#, with DTF (from WIX) /** * this is a bastard class, as it is not really a part of building an installer package, * however, we need to be able to modify a prebuild package, and add user specific files, post build, to save memory on server, and have a fast execution time. * * \author Henrik Dalsager */ //I'm using everything... 
using System; using System.IO; using System.Linq; using System.Text; using System.Collections; using System.Collections.Generic; using System.Diagnostics.CodeAnalysis; using System.Globalization; using System.Text.RegularExpressions; using Microsoft.Deployment.Compression.Cab; using Microsoft.Deployment.WindowsInstaller; using Microsoft.Deployment.WindowsInstaller.Package; namespace MSIFileManipulator { /** * \brief updates an existing MSI, I.E. add new files * */ class updateMSI { //everything revolves around this package.. InstallPackage pkg = null; //the destruction should close connection with the database, just in case we forgot.. ~updateMSI() { if (pkg != null) { try { pkg.Close(); } catch (Exception ex) { //rollback? //do nothing.. we just don't want to break anything if database was already closed, but not dereffered. } } } /** * \brief compresses a list of files, in a workdir, to a cabinet file, in the same workdir. * \param workdir path to the workdir * \param filesToArchive a list of filenames, of the files to include in the cabinet file. * \return filename of the created cab file */ public string createCabinetFileForMSI(string workdir, List<string> filesToArchive) { //create temporary cabinet file at this path: string GUID = Guid.NewGuid().ToString(); string cabFile = GUID + ".cab"; string cabFilePath = Path.Combine(workdir, cabFile); //create a instance of Microsoft.Deployment.Compression.Cab.CabInfo //which provides file-based operations on the cabinet file CabInfo cab = new CabInfo(cabFilePath); //create a list with files and add them to a cab file //now an argument, but previously this was used as test: //List<string> filesToArchive = new List<string>() { @"C:\file1", @"C:\file2" }; cab.PackFiles(workdir, filesToArchive, filesToArchive); //we will ned the path for this file, when adding it to an msi.. return cabFile; } /** * \brief embeds a cabinet file into an MSI into the "stream" table, and adds it as a new media in the media table * This does not install the files on a clients computer, if he runs the installer, * as none of the files in the cabinet, is defined in the MSI File Table (that informs msiexec where to place mentioned files.) * It simply allows cabinet files to piggypack within a package, so that they may be extracted again at clients computer. * * \param pathToCabFile full absolute path to the cabinet file * \return media number of the new cabinet file wihtin the MSI */ public int insertCabFileAsNewMediaInMSI(string cabFilePath, int numberOfFilesInCabinet = -1) { if (pkg == null) { throw new Exception("Cannot insert cabinet file into non-existing MSI package. Please Supply a path to the MSI package"); } int numberOfFilesToAdd = numberOfFilesInCabinet; if (numberOfFilesInCabinet < 0) { CabInfo cab = new CabInfo(cabFilePath); numberOfFilesToAdd = cab.GetFiles().Count; } //create a cab file record as a stream (embeddable into an MSI) Record cabRec = new Record(1); cabRec.SetStream(1, cabFilePath); /*The Media table describes the set of disks that make up the source media for the installation. we want to add one, after all the others DiskId - Determines the sort order for the table. This number must be equal to or greater than 1, for out new cab file, it must be > than the existing ones... */ //the baby SQL service in the MSI does not support "ORDER BY `` DESC" but does support order by.. 
IList<int> mediaIDs = pkg.ExecuteIntegerQuery("SELECT `DiskId` FROM `Media` ORDER BY `DiskId`"); int lastIndex = mediaIDs.Count - 1; int DiskId = mediaIDs.ElementAt(lastIndex) + 1; //wix name conventions of embedded cab files is "#cab" + DiskId + ".cab" string mediaCabinet = "cab" + DiskId.ToString() + ".cab"; //The _Streams table lists embedded OLE data streams. //This is a temporary table, created only when referenced by a SQL statement. string query = "INSERT INTO `_Streams` (`Name`, `Data`) VALUES ('" + mediaCabinet + "', ?)"; pkg.Execute(query, cabRec); Console.WriteLine(query); /*LastSequence - File sequence number for the last file for this new media. The numbers in the LastSequence column specify which of the files in the File table are found on a particular source disk. Each source disk contains all files with sequence numbers (as shown in the Sequence column of the File table) less than or equal to the value in the LastSequence column, and greater than the LastSequence value of the previous disk (or greater than 0, for the first entry in the Media table). This number must be non-negative; the maximum limit is 32767 files. /MSDN */ IList<int> sequences = pkg.ExecuteIntegerQuery("SELECT `LastSequence` FROM `Media` ORDER BY `LastSequence`"); lastIndex = sequences.Count - 1; int LastSequence = sequences.ElementAt(lastIndex) + numberOfFilesToAdd; query = "INSERT INTO `Media` (`DiskId`, `LastSequence`, `Cabinet`) VALUES (" + DiskId.ToString() + "," + LastSequence.ToString() + ",'#" + mediaCabinet + "')"; Console.WriteLine(query); pkg.Execute(query); return DiskId; } /** * \brief embeds a cabinet file into an MSI into the "stream" table, and adds it as a new media in the media table * This does not install the files on a clients computer, if he runs the installer, * as none of the files in the cabinet, is defined in the MSI File Table (that informs msiexec where to place mentioned files.) * It simply allows cabinet files to piggypack within a package, so that they may be extracted again at clients computer. * * \param pathToCabFile full absolute path to the cabinet file * \param pathToMSIFile full absolute path to the msi file * \return media number of the new cabinet file wihtin the MSI */ public int insertCabFileAsNewMediaInMSI(string cabFilePath, string pathToMSIFile, int numberOfFilesInCabinet = -1) { //open the MSI package for editing pkg = new InstallPackage(pathToMSIFile, DatabaseOpenMode.Direct); //have also tried direct, while database was corrupted when writing. return insertCabFileAsNewMediaInMSI(cabFilePath, numberOfFilesInCabinet); } /** * \brief overloaded method, that embeds a cabinet file into an MSI into the "stream" table, and adds it as a new media in the media table * This does not install the files on a clients computer, if he runs the installer, * as none of the files in the cabinet, is defined in the MSI File Table (that informs msiexec where to place mentioned files.) * It simply allows cabinet files to piggypack within a package, so that they may be extracted again at clients computer. 
* * \param workdir absolute path to the cabinet files location * \param cabFile is the filename of the cabinet file * \param pathToMSIFile full absolute path to the msi file * \return media number of the new cabinet file wihtin the MSI */ public int insertCabFileAsNewMediaInMSI(string workdir, string cabFile, string pathToMSIFile, int numberOfFilesInCabinet = -1) { string absPathToCabFile = Path.Combine(workdir, cabFile); string absPathToMSIFile = Path.Combine(workdir, pathToMSIFile); return insertCabFileAsNewMediaInMSI(absPathToCabFile, absPathToMSIFile, numberOfFilesInCabinet); } /** * \brief reconfigures the MSI, so that a file pointer is "replaced" by a file pointer to another cabinets version of said file... * The original file will not be removed from the MSI, but simply orphaned (no component refers to it). It will not be installed, but will remain in the package. * * \param OriginalFileName (this is the files target name at the clients computer after installation. It is our only way to locate the file in the file table. If two or more files have the same target name, we cannot reorient the pointer to that file!) * \param FileNameInCabinet (In case you did not have the excact same filename for the new file, as the original file, you can specify the name of the file, as it is known in the cabinet, here.) * \param DiskIdOfCabinetFile - Very important information. This is the Id of the new cabinet file, it is the only way to know where the new source data is within the MSI cabinet stream. This function extracts the data it needs from there, like sequence numbers */ public void PointAPreviouslyConfiguredComponentsFileToBeFetchedFromAnotherCabinet(string OriginalFileName, string FileNameInCabinet, string newFileSizeInBytes, int DiskIdOfCabinetFile) { //retrieve the range of sequence numbers for this cabinet file. string query = "SELECT `DiskId` FROM `Media` ORDER BY `LastSequence`"; Console.WriteLine(query); IList<int> medias = pkg.ExecuteIntegerQuery("SELECT `DiskId` FROM `Media` ORDER BY `LastSequence`"); query = "SELECT `LastSequence` FROM `Media` ORDER BY `LastSequence`"; Console.WriteLine(query); IList<int> mediaLastSequences = pkg.ExecuteIntegerQuery("SELECT `LastSequence` FROM `Media` ORDER BY `LastSequence`"); if(medias.Count != mediaLastSequences.Count) { throw new Exception("there is something wrong with the Media Table, There is a different number of DiskId and LastSequence rows"); } if(medias.Count <= 0) { throw new Exception("there is something wrong with the Media Table, There are no rows with medias available.."); } int FirstSequence = -1; int LastSequence = -1; int lastIndex = medias.Count - 1; for (int index = lastIndex; index >= 0; index--) { int rowLastSequence = mediaLastSequences.ElementAt(index); int rowDiskId = medias.ElementAt(index); if (rowDiskId == DiskIdOfCabinetFile) { LastSequence = rowLastSequence; if (index < lastIndex) { //the next cabinet files last sequence number + 1, is this ones first.. 
FirstSequence = mediaLastSequences.ElementAt(index + 1) + 1; break; } else { //all files from the first, to this last sequence number, are found in this cabinet FirstSequence = mediaLastSequences.ElementAt(lastIndex); break; } } } //now we will look in the file table to get a vacant sequence number in the new cabinet (if available - first run will return empty, and thus default to FirstSequence) int Sequence = FirstSequence; query = "SELECT `Sequence` FROM `File` WHERE `Sequence` >= " + FirstSequence.ToString() + " AND `Sequence` <= " + LastSequence.ToString() + " ORDER BY `Sequence`"; Console.WriteLine(query); IList<int> SequencesInRange = pkg.ExecuteIntegerQuery(query); for (int index = 0; index < SequencesInRange.Count; index++) { if (FirstSequence + index != SequencesInRange.ElementAt(index)) { Sequence = FirstSequence + index; break; } } //now we set this in the file table, to re-point this file to the new media.. //File.FileName = FileNameInCabinet; //File.FileSize = newFileSizeInBytes; //File.Sequence = sequence; query = "UPDATE `File` SET `File`.`FileName`='" + FileNameInCabinet + "' WHERE `File`='" + OriginalFileName + "'"; Console.WriteLine(query); pkg.Execute(query); query = "UPDATE `File` SET `File`.`FileSize`=" + newFileSizeInBytes + " WHERE `File`='" + OriginalFileName + "'"; Console.WriteLine(query); pkg.Execute(query); query = "UPDATE `File` SET `File`.`Sequence`=" + Sequence.ToString() + " WHERE `File`='" + OriginalFileName + "'"; Console.WriteLine(query); pkg.Execute(query); } } } demonstration usage: using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace MSIFileManipulator { class Program { static void Main(string[] args) { string workdir = @"C:\Users\Me\MyDevFolder\tests"; string msiFile = "replace_test_copy.msi"; string fileName = "REPLACE_THIS_IMAGE.png"; List<string> filesToInclude = new List<string>(); System.IO.FileInfo fileInfo = new System.IO.FileInfo(System.IO.Path.Combine(workdir, fileName)); if (fileInfo.Exists) { Console.WriteLine("now adding: " + fileName + " to cabinet"); filesToInclude.Add(fileName); updateMSI myMSI = new updateMSI(); string cabfileName = myMSI.createCabinetFileForMSI(workdir, filesToInclude); Console.WriteLine("cabinet file saved as: " + cabfileName); int diskID = myMSI.insertCabFileAsNewMediaInMSI(workdir, cabfileName, msiFile); Console.WriteLine("new media added with disk ID: " + diskID.ToString()); myMSI.PointAPreviouslyConfiguredComponentsFileToBeFetchedFromAnotherCabinet(fileName, fileName, fileInfo.Length.ToString(), diskID); Console.WriteLine("Done"); } else { Console.WriteLine("Could not locate the replacement file:" + fileName); } Console.WriteLine("press any key to exit"); Console.ReadKey(); } } } I am aware that my test does not clean up after it self.. A: The easiest way to do it is to repackage MSI: * *Open MSI file in Wise for Windows Installer. Choose an option to to extract files. *Locate the file on disk and replace it. *Build MSI. These steps should also work for InstallShield.
{ "language": "en", "url": "https://stackoverflow.com/questions/126562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: How to do Automated UI testing for Flash I have an actionscript 2 application that I'd like to write automated UI testing for. For example I'd like to simulate a mouse click on a button and validate that a movie-clip is displayed at the right position and in the right color... Basically, UI testing. What are the best tools available or what is the desired approach? In JavaScript there is the selenium framework which does a nice job. Any similar tool for flash? A: I know this is an old question, but this could be useful for future reference. There is a relatively new project Automated UI tester for ActionScript Its installation is pretty simple, and is described in the user guide step by step. A: Use Adobe genie (UI Tester for ActionScript) http://sourceforge.net/adobe/genie/discussion/general/ But I don't think AS1 or AS2 will work. It works only with AS3. A: Another way is to use any HTML/JS autotesting tool and provide a JS api to your Flash app - at least, you can always expose functions like 'locate smth by id', 'click smth by id', 'enter some text into smth by id' or whatever. A: Try http://SauceLabs.com - they've got a Flash and Flex testing tool A: This may be of use Flash UI testing
{ "language": "en", "url": "https://stackoverflow.com/questions/126573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Displaying Tiff files in SSRS reports I have a requirement to be able to embed scanned tiff images into some SSRS reports. When I design a report in VS2005 and add an image control, the tiff image displays perfectly; however, when I build it, I get the warning: Warning 2 [rsInvalidMIMEType] The value of the MIMEType property for the image ‘image1’ is “image/tiff”, which is not a valid MIMEType. c:\SSRSStuff\TestReport.rdl 0 0 and instead of an image I get the little red x. Has anybody overcome this issue? A: Assuming you're delivering the image file via IIS, use an ASP.NET page to change the image format and mime type to something that you can use. Response.ContentType = "image/png"; Response.Clear(); using (Bitmap bmp = new Bitmap(tifFilepath)) bmp.Save(Response.OutputStream, ImageFormat.Png); Response.End(); A: I have been googling for a solution on how to display a TIFF image in an SSRS report but I couldn't find any, and since SSRS doesn't support TIFF, I thought converting the TIFF to one of the supported formats would do the trick. And it did. I don't know if there are similar implementations like this out there, but I am just posting so others could benefit as well. Note this only applies if you have a TIFF image saved in the database. Public Shared Function ToImage(ByVal imageBytes As Byte()) As Byte() Dim ms As System.IO.MemoryStream = New System.IO.MemoryStream(imageBytes) Dim os As System.IO.MemoryStream = New System.IO.MemoryStream() Dim img As System.Drawing.Image = System.Drawing.Image.FromStream(ms) img.Save(os, System.Drawing.Imaging.ImageFormat.Jpeg) Return os.ToArray() End Function Here’s how you can use the code: 1. In the Report Properties, select References, click Add and browse to System.Drawing, Version=2.0.0.0 2. Select the Code property and copy-paste the function above 3. Click OK 4. Drop an Image control from the toolbox 4.1. Right-click the image and select Image Properties 4.2. Set the Image Source to Database 4.3. In the Use this field, click Expression and paste the code below =Code.ToImage(Fields!FormImage.Value) 4.4. Set the appropriate MIME type to Jpeg Regards, Fulbert A: Thanks Peter, your code didn't compile, but the idea was sound. Here is my attempt that works for me. protected void Page_Load(object sender, EventArgs e) { Response.ContentType = "image/jpeg"; Response.Clear(); Bitmap bmp = new Bitmap(tifFileLocation); bmp.Save(Response.OutputStream, ImageFormat.Jpeg); Response.End(); }
{ "language": "en", "url": "https://stackoverflow.com/questions/126584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I attach a MSSQL 2000 database with only an MDF file I have an old server with a defunct evaluation version of SQL 2000 on it (from 2006), and two databases which were sitting on it. For some unknown reason, the LDF log files are missing. Presumed deleted. I have the mdf files (and in one case an ndf file too) for the databases which used to exist on that server, and I am trying to get them up and running on another SQL 2000 box I have sitting around. sp_attach_db complains that the logfile is missing, and will not attach the database. Attempts to fool it by using a logfile from a database with the same name failed miserably. sp_attach_single_file_db will not work either. The mdf files have obviously not been cleanly detached. How do I get the databases attached and readable? A: I found this answer, which worked with my SQL 2000 machines: How to attach a database with a non-cleanly detached MDF file. Step 1: Make a new database with same name, and which uses the same files as the old one on the new server. Step 2: Stop SQL server, and move your mdf files (and any ndf files you have) over the top of the new ones you just created. Delete any log files. Step 3: Start SQL and run this to put the DB in emergency mode. sp_configure 'allow updates', 1 go reconfigure with override GO update sysdatabases set status = 32768 where name = 'TestDB' go sp_configure 'allow updates', 0 go reconfigure with override GO Step 4: Restart SQL server and observe that the DB is successfully in emergency mode. Step 5: Run this undocumented dbcc option to rebuild the log file (in the correct place) DBCC REBUILD_LOG(TestDB,'D:\SQL_Log\TestDB_Log.LDF') Step 6: You might need to reset the status. Even if you don't, it won't do any harm to do so. exec sp_resetstatus TestDB Step 7: Stop and start SQL to see your newly restored database. A: In Enterprise Manager, right-click the server and choose Attach Database. Select the MDF file and click Ok. It will then ask you if you want to create a new log file or not. Say Yes.
{ "language": "en", "url": "https://stackoverflow.com/questions/126587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Installing a font on a client machine I'm using Visual Studio 2008 and the built-in installation tools for a C# client application. How can I use this installer to install a font on the client machine (if it's not already there)? A: For me, Timothy Carter had the answer mostly right: "right click on the File System on Target Machine, Add Special Folder -> Fonts Folder, then place your font file there." But that was not enough. The fonts didn't actually get installed as system fonts. To install the fonts, I had to visit the Properties for each font file and change Register=vsdrfDoNotRegister to Register=vsdrfFont. After that the setup file I generated would also install the font! A: In VS2005 (so I assume 2008 as well), right click on the File System on Target Machine, Add Special Folder -> Fonts Folder, then place your font file there. A: Take a look at this article. http://www.atakala.com/Browser/Item.aspx?user_id=amos&dict_id=83 The most important call is the AddFontResource Win32 API call, although the described sequence of operations must be respected for you to have a working font setup in the system. http://msdn.microsoft.com/en-us/library/ms534231(VS.85).aspx The AddFontResource function adds the font resource from the specified file to the system font table. The font can subsequently be used for text output by any application. A: I'm using InnoSetup to deliver my VS applications. It allows you to install fonts from your system to the client system. I never tested it in Windows 7 or Vista (probably you'll have some permission issues). Remember that not all fonts are freely distributable.
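As a footnote to the AddFontResource answer, a rough C# sketch of calling it from managed code (the P/Invoke declarations are the standard ones; the font path is a placeholder, and for a permanent install the file still has to be copied to the Fonts folder and registered under the Fonts registry key, as the linked article describes):

using System;
using System.Runtime.InteropServices;

static class FontInstaller
{
    [DllImport("gdi32.dll", CharSet = CharSet.Auto)]
    static extern int AddFontResource(string lpFileName);

    [DllImport("user32.dll")]
    static extern IntPtr SendMessage(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

    const uint WM_FONTCHANGE = 0x001D;
    static readonly IntPtr HWND_BROADCAST = new IntPtr(0xffff);

    // Adds the font to the system font table for this session and tells
    // running applications that the font list has changed.
    public static void Install(string fontPath)
    {
        if (AddFontResource(fontPath) == 0)
            throw new InvalidOperationException("AddFontResource failed for " + fontPath);
        SendMessage(HWND_BROADCAST, WM_FONTCHANGE, IntPtr.Zero, IntPtr.Zero);
    }
}

For a plain setup-project deployment, the Fonts Folder approach from the first two answers remains the simpler path; the API route is mainly useful from a custom action or the application itself.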
{ "language": "en", "url": "https://stackoverflow.com/questions/126594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Importing contacts from the top webmail services Is anyone familiar with a service or open-source scripts that can import contacts from Gmail, Yahoo Mail, AOL, Hotmail and other prominent webmail services? (Not Plaxo; it has a problem in IE7.) A: If it's just a one-off operation, LinkedIn can, but I'm not sure if you could do anything useful afterwards with the list.
{ "language": "en", "url": "https://stackoverflow.com/questions/126599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Accessing the windows admin shares when not connected to a network I'm finding that I can't access the admin shares on an XP64 box when there's no network connection. Trying to browse to \\localhost\c$ fails (although obviously browsing c: works). Reason for the question is that the NANT build script for our application uses this format to copy files from one local directory to another, and it's failing when I'm trying to use my laptop on the train (the same problem occurs if I unplug the network cable from my desktop and build). The whole build only uses local resources so should be possible without network connection. A: You could install a loopback adapter to fool the computer into thinking it's on a network. http://support.microsoft.com/kb/839013
{ "language": "en", "url": "https://stackoverflow.com/questions/126601", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is the "Fountain Development Model"? It is mentioned on the Systems Development Life Cycle page on Wikipedia: To manage this, a number of system development life cycle (SDLC) models have been created: waterfall, fountain, spiral, build and fix, rapid prototyping, incremental, and synchronize and stabilize. I found a few things on Google, but I felt that they were vague and they just didn't click for me. Perhaps an explanation from someone here might be clearer. A: Waterfall is a model that enforces control and avoids parallelism; every requirement for a task has to be fulfilled before starting the task. Fountain says that a new task can be started before all requirements are met, because not all requirements are necessary at the start of the task. Think of this: a Super Mario game. Waterfall: first, design everything, then get the hardware done (Hardware Team), then create some test sprites, then code the engine, then create artwork, then music, and finish. Fountain: while the hardware team is doing its job, artwork starts conceptual work, and coding starts some prototyping on preexisting hw. When the artists and hw finish, coders integrate these into their code and continue 'til finishing the game... A: Fountain: Stand in a circle and throw some patterns and key words in the air to see where they land. Pick up only the ones that land inside the circle. Repeat until cancelled. Waterfall: Wrangle everyone into a boat, then yell "Geronimo!" while going over Niagara Falls. Pick up the shattered pieces, then rinse and repeat. Make sure it's well documented what part of the boat each individual should be sitting in, what they should be holding on to, how loud to yell, and exactly where they should land. See form 3684-B for additional instructions. Spiral: Pick one team member and have everyone else spin them around in circles until dizzy. Build and Fix: Just throw it against the wall to see what sticks. If something falls off, add some duct tape. Used gum may also work. Any part that won't stay stuck, just throw away. Rapid Prototyping: Do exactly what the client asked for. Repeat until they figure out what they want. Incremental: Only build the parts you want to, and only when you want to do it. An alternate version is to only build the parts they scream loudest for, and only when they are actually standing at your desk waiting for it. Synchronize and Stabilize: Like Spiral, except only one person at a time spins the unlucky team member. When their turn is over, stop the spinning for a moment. A: As I understand it, they essentially contain the same steps, but a fountain approach is much more iterative, with less focus on initial design and more on analysis. You basically bodge your way through things. See what needs to happen, and improve it. See what needs to happen. Improve it. It's more agile, but at the cost of project stability. Waterfall is a lot better for large projects.
{ "language": "en", "url": "https://stackoverflow.com/questions/126607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Can a .NET windows application be compressed into a single .exe? I am not too familiar with .NET desktop applications (using Visual Studio 2005). Is it possible to have the entire application run from a single .exe file? A: Yes. In .NET you can have your entire application encapsulated as a single EXE file. Just make sure your solution only has one Windows Application project in it (and no other projects except for setups). The EXE file created by your .NET project will not be a stand-alone executable file, but the only dependency it will have will be the .NET runtime (unless you add references to other DLL assemblies). If you use .NET 2.0 (which I recommend), the runtime comes preinstalled on newer PCs and is very easy and quick to set up on older machines (installer is about 23 MB). If your app does need to reference other assemblies (like a data access DLL or a .NET class library DLL project), you could use one of the tools referenced by other posters here to combine your EXE and all referenced DLLs into a single EXE file. However, conventional practice would dictate simply deploying the EXE file along with any dependent DLLs as separate files. In .NET, deployment of dependent DLLs is pretty simple: just put them in the same folder as the EXE file on the client machine, and you're done. It is good practice to deploy your application (whether one file or many) as a single-file installer (either a setup.EXE or setup.MSI). .NET comes with deployment project templates that can create installers for you fairly easily. Slightly off-topic: You could use NGEN to compile your .NET application as a native EXE, but it would still be dependent upon the .NET runtime. The advantage of native compilation is that some things can be pre-compiled, but I've never seen a situation where this tiny performance increase is worth the bother. A: It's 2021 and support for this has improved by leaps and bounds. If you've made the jump to .NET 5 (which has Windows Forms support!) you can make a single file exe that even embeds the native binaries. You must run the publish command from the command line, but it will produce a single exe with any configuration or content files along side it. The dotnet sdk does not need to exist on the target machine. dotnet.exe publish YourProject.csproj -f net5.0 -o package/win-x64 -c Release -r win-x64 /p:PublishTrimmed=true /p:TrimMode=Link /p:PublishSingleFile=true /p:IncludeNativeLibrariesForSelfExtract=true A few notes: * *PublishTrimmed can be set to false to eliminate code removal in case of heavy reflection use. *TrimMode can be set to copyused or link. More details here: https://learn.microsoft.com/en-us/dotnet/core/deploying/trim-self-contained. *The -f parameter specifies the framework version, i.e. net5.0, etc. *The -c parameter is configuration, usually Release. *The -r parameter is the runtime to build for, can be win-x86, win-x64 and linux-x64. There may be options for ARM as well. *The -o parameter is the output folder. Full reference: https://learn.microsoft.com/en-us/dotnet/core/tools/dotnet-publish A: There is a third party tool called .NET Reactor that can do this for you. I have not used the tool and so am not sure how well it works. A: I have used the .NETZ .NET open source executable packer to pack EXE and DLL files into single EXE file. Here is a command line example for how to pack DLL files into one file: netz -s application.exe foo.dll bar.dll A: Yes, you can use the ILMerge tool. It is also available as a NuGet package. 
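To make the ILMerge suggestion above concrete, a typical command-line invocation looks something like this sketch (the assembly names here are placeholders, and depending on your project you may also need extra switches such as /targetplatform or /keyfile):

ilmerge /target:winexe /out:SingleApp.exe MyApp.exe DataAccess.dll Utilities.dll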
A: Jeffrey Richter wrote in his book excerpt that a callback method can be registered with the application domain's AssemblyResolve event to enable the CLR to find third-party assemblies and DLL files during program initialization: AppDomain.CurrentDomain.AssemblyResolve += (sender, args) => { String resourceName = "AssemblyLoadingAndReflection." + new AssemblyName(args.Name).Name + ".dll"; using (var stream = Assembly.GetExecutingAssembly().GetManifestResourceStream(resourceName)){ Byte[] assemblyData = new Byte[stream.Length]; stream.Read(assemblyData, 0, assemblyData.Length); return Assembly.Load(assemblyData); } }; Disclaimer: I have not used this myself. As far as I understood, the client still needs to have the .NET framework installed. A: Today, in 2015, you can use Costura.Fody for this. Using it amounts to adding the NuGet package to your project and recompiling. I'm not sure if it works with Visual Studio 2005 as in the question, but then it's not year 2008 either. I've used it in a few of my projects and it worked great. Fody is a general purpose Code Weaver that is free and open source. The basic idea is that it allows post-processing of the compiled .NET code to enhance it with features. For example, logging can be added to every method call, etc. In our case, it's packaging the dependent DLL files into the assembly resources, so that they can be loaded during the program runtime as if they were stand-alone dependencies. A: As has been said, you can use ILMerge. It may be easier, however, to use the free Phoenix Protector, which also protects your code. A: You can try the NBox utility. A: I'm using .netshrink myself, and it does exactly what you need. It packs the main and extra assemblies (DLL files) into one executable image. I've been using it for one year already, and I'm not going back to ILMerge (it always crashes at some point...). A: ILMerge can combine assemblies into one single assembly, provided the assemblies contain only managed code. You can use the command-line application, or add a reference to the EXE file and merge programmatically. For a GUI version, there is Eazfuscator, and also .Netz, both of which are free. Paid applications include BoxedApp and SmartAssembly. If you have to merge assemblies with unmanaged code, I would suggest SmartAssembly. I never had hiccups with SmartAssembly, but I did with all the others. Here, it can embed the required dependencies as resources to your main EXE file. You can do all this manually, not needing to worry whether an assembly is managed or in mixed mode, by embedding the DLL file in your resources and then relying on AppDomain's AssemblyResolve handler. This is a one-stop solution by adopting the worst case, that is, assemblies with unmanaged code. class Program { [STAThread] static void Main() { AppDomain.CurrentDomain.AssemblyResolve += (sender, args) => { string assemblyName = new AssemblyName(args.Name).Name; if (assemblyName.EndsWith(".resources")) return null; string dllName = assemblyName + ".dll"; string dllFullPath = Path.Combine(GetMyApplicationSpecificPath(), dllName); using (Stream s = Assembly.GetEntryAssembly().GetManifestResourceStream(typeof(Program).Namespace + ".Resources." + dllName)) { byte[] data = new byte[s.Length]; s.Read(data, 0, data.Length); // Or just byte[] data = new BinaryReader(s).ReadBytes((int)s.Length); File.WriteAllBytes(dllFullPath, data); } return Assembly.LoadFrom(dllFullPath); }; } } Where Program is the class name. The key here is to write the bytes to a file and load from its location.
To avoid a chicken-and-egg problem, you have to ensure you declare the handler before accessing the assembly, and that you do not access the assembly members (or instantiate anything that has to deal with the assembly) inside the loading (assembly resolving) part. Also take care to ensure GetMyApplicationSpecificPath() is not a temporary directory, since temporary files may get cleaned up by other programs or by yourself (not that it will get deleted while your program is accessing the DLL file, but at least it's a nuisance; AppData is a good location). Also note that you have to write the bytes each time; you can't load from the location just because the DLL file already resides there. For managed DLL files, you need not write the bytes, but can directly load from the location of the DLL file, or just read the bytes and load the assembly from memory. Like this or so: using (Stream s = Assembly.GetEntryAssembly().GetManifestResourceStream(typeof(Program).Namespace + ".Resources." + dllName)) { byte[] data = new byte[s.Length]; s.Read(data, 0, data.Length); return Assembly.Load(data); } // Or just return Assembly.LoadFrom(dllFullPath); // If location is known. If the assembly is fully unmanaged, you can see this link or this as to how to load such DLL files. A: Yes, you can compress a .NET Windows application into a single .exe file. You just have to follow the process below: * *Go to Manage NuGet Packages by right-clicking on your project. *Install Costura.Fody *Clean the Debug folder inside the bin folder (delete all files and folders). *Run the project. *Go to the Debug folder, then to the app.publish folder; you will find a single exe file there.
{ "language": "en", "url": "https://stackoverflow.com/questions/126611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: Passing a JS function to an applet as an event listener Is it possible to pass a function/callback from JavaScript to a Java applet? For example, I have an applet with a button that, when pressed, will call the passed JS callback function onCommand() { alert('Button pressed from applet'); } applet.onCommand(onCommand); A: I tend to use something I derived from the reflection example at the bottom of this page, as then you don't need to meddle with your classpath to get it to compile. Then I just pass JSON strings around between the applet and JavaScript. A: You can use JSObject to call back into JavaScript from Java. From that page: import netscape.javascript.*; import java.applet.*; import java.awt.*; class MyApplet extends Applet { public void init() { JSObject win = JSObject.getWindow(this); JSObject doc = (JSObject) win.getMember("document"); JSObject loc = (JSObject) doc.getMember("location"); String s = (String) loc.getMember("href"); // document.location.href win.call("f", null); // Call f() in HTML page } } A: P.S. To use JSObject you may need to add the "MAYSCRIPT" attribute to the applet HTML tag.
{ "language": "en", "url": "https://stackoverflow.com/questions/126631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: .Net MenuStrip background colour range I need to put a control to the right of my MenuStrip. The MenuStrip fades in colour away from the BackColor on the left-hand side to something whiter on the right, and ideally I would like to make my small control blend in by having the same BackColor as the MenuStrip has on that side. Does anyone know how that colour is computed? Worst case, can you recommend a tiny app for sampling colours off the screen? [Update] Sampling is not a useful approach if the MenuStrip can be resized A: The tool strip renderer draws a gradient between two colors that are defined in the ProfessionalColorTable that is passed into the ToolStripProfessionalRenderer constructor. It uses the MenuStripGradientBegin and MenuStripGradientEnd values. So you need to recover these two values and then draw your background appropriately as a linear gradient. A: I sample colours using Paint.NET: make a screendump of the whole screen, or window, and then use Paint.NET to sample the color. A: The background of a ToolStrip (i.e. MenuStrip, StatusStrip, etc.) is not hard-coded. Look for the "ToolStripRenderer" class and the derived classes "ToolStripSystemRenderer" and "ToolStripProfessionalRenderer". They have several methods for drawing parts of the ToolStrip.
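A follow-up to the first answer above: since the gradient endpoints come from the renderer's ProfessionalColorTable, you can read them back through the ProfessionalColors class rather than sampling the screen. A minimal C# sketch (sidePanel is a placeholder name for whatever control sits to the right of the MenuStrip):

using System.Windows.Forms;

// e.g. in the form's constructor, after InitializeComponent():
// The default MenuStrip gradient ends at MenuStripGradientEnd on the right-hand edge,
// so a control butted up against that edge can simply adopt it as its BackColor.
sidePanel.BackColor = ProfessionalColors.MenuStripGradientEnd;

// If the control should continue the fade rather than be a flat colour, paint it in its
// Paint handler with a LinearGradientBrush running from ProfessionalColors.MenuStripGradientBegin
// to ProfessionalColors.MenuStripGradientEnd instead.

Because these values come from the colour table rather than from a screen sample, they stay correct when the MenuStrip is resized or the Windows colour scheme changes.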
{ "language": "en", "url": "https://stackoverflow.com/questions/126635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Oracle Datediff (Oracle) I have to return all records from the last 12 months. How to do that in PL/SQL? EDIT: Sorry, I forgot to explain, I do have a column of DATE type A: Doing this in PL/SQL is pretty much synonymous with doing it in SQL. SELECT * FROM table WHERE date_column >= ADD_MONTHS(TRUNC(SYSDATE),-12) You might like to fiddle around with the TRUNC call to get exactly the range you want -- I used TRUNC(SYSDATE), which is the same as TRUNC(SYSDATE,'DD') -- i.e. remove the time portion of the sysdate. For example, if it is currently Aug 12 but you want values from Aug 01 of last year instead of Aug 12 of last year, then use: SELECT * FROM table WHERE date_column >= ADD_MONTHS(TRUNC(SYSDATE,'MM'),-12) Also, see the docs for the treatment of months having different numbers of days: http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions004.htm#SQLRF00603 A: SELECT * FROM table WHERE date_column > ADD_MONTHS(SYSDATE, -12) Not sure I deserved down-modding for the earlier posts... was only trying to help. A: SELECT * FROM table WHERE date_column > SYSDATE - 365
{ "language": "en", "url": "https://stackoverflow.com/questions/126652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is your favorite misconception about Lisp? Please respond with one misconception at a time. If you explain why it is not true, then try to avoid general statements and provide particular examples. A: My favourite lisp misconception: CL really means CthuLhu! Jokes aside; by far the most widespread misconception regarding all manner of lisp dialects is that the s-expression syntax's reliance on parentheses harms readability, when in fact it is this very trait that helps to make lisp code a lot more terse, clear and readable than the majority of other programming languages. A: I don't have a "favourite" misconception, because most misconceptions, and not only about Lisp, are just annoyances. But I have one misconception about Lisp that really shocked me when I read Lisp's history (specifically History of Lisp and The Evolution of Lisp). Lisp is known to be a slow interpreted language by many people, but in reality it had a compiler less than a year after its first working interpreter, I think, and the following history is a long list of work targeted at optimization that resulted in some Lisp implementations beating Fortran at number crunching! The most probable source of the myth is that many people only get to know Lisp from a CS course about Scheme, where they write themselves an interpreter. And probably many of them get to hate the course because it teaches them beautiful but complex concepts about computability or recursion, and they then conflate their issues with the course with issues with Lisp. A: "The Universe is written in Lisp". But apparently it's not. A: Everything in Lisp is a list, there are no other efficient complex data structures. Not true. In Common Lisp there are classes, structures, hashtables, multi-dimensional arrays, mutable strings, etc. Each one of those is implemented efficiently. For example, the SBCL compiler emits optimized inline native code for array access. A: Lisp cannot provide standalone executables. Using SBCL, you can save the current state of your lisp VM into a single executable file with a simple function call. When you start this executable, it will start from the original state and call the function or lambda that you have provided when it was saved. A: Lisp has no IDE, so you must use notepad and continuously count all the superfluous parentheses. In fact, there are quite a few. For Common Lisp, it is best to use Emacs with SLIME, both under Windows and Linux. It provides a Listener (to evaluate), an Inspector (to watch results), a Debugger (with stepping), and a highly customizable Editor (this is Emacs after all). SLIME supports 10 different implementations. A: That it is a programming language. Today Lisp is a family of programming languages including Common Lisp and Scheme, which are standards with various implementations each, and also Clojure, Arc and many others. A: That all the parentheses make code unreadable. After about two weeks, and with a decent text editor, you just stop noticing them. [ETA - just found a quote by long-time lisper Kenny Tilton: "Parentheses? What parentheses? I haven't noticed any parentheses since my first month of Lisp programming. I like to ask people who complain about parentheses in Lisp if they are bothered by all the spaces between words in a newspaper..."] A: Lisp is an interpreted language and thus it is really slow. Look at the function disassemble in the Common Lisp HyperSpec.
A Common Lisp implementation called SBCL comes with an efficient native compiler backend for a number of platforms. A: My favourite misconception: that it's a “functional language” that discourages iteration or OOP or whatever programming style you can name. Nothing could be farther from the truth. First of all, “Lisp”, depending on the meaning intended by the speaker, isn't really a language at all but a language family. There are a couple of more or less well-known and widespread members of this family: Scheme, Common Lisp, Emacs-Lisp, and AutoLisp are the traditional ones; nowadays, there are also Nu, newLISP (which is actually more like an “oldLISP” than a modern one, but anyway), Arc, and Clojure. It's certainly true that Schemers seem to favour a functional style and discourage iteration in favour of recursion. However, as far as I can tell, Schemers are actually a minority in the Lisp world, at least when considering practical usage of Lisp as a programming tool as opposed to a research or study subject. I do not know much about AutoLisp or all the other Lisp dialects beside Scheme and Common Lisp, but what I do know is that Common Lisp is definitely not a single-paradigm language that eschews imperative or even object-oriented programming. In fact, Common Lisp features the most powerful class-based object system that I know of, incorporating stuff like aspect-oriented programming out of the box. Lisp seems to me to actually be the language family that most encourages experiments in new directions and most easily incorporates programming paradigms that aren't supported from the get-go. This includes imperative programming. Common Lisp has even got GOTO! A: That it's for Artificial Intelligence A: many think that lisp is a list-oriented language (whatever that means). it's a widespread belief, even among lispers, that the one great and most important thing about lisp is the cons cell and the singly linked lists that one can build up using them (sexp's). people who believe that don't understand the real reasons that make lisp a more productive language than the other mainstream languages (this is only my subjective opinion!), and don't understand that lisp could pretty much be what it is without having the cons cell in the game at all. a random selection of things that are important to make lisp what it is, and that are easily doable without sexp's: * *REPL *compiler available at runtime *dynamic typing *syntactic abstractions (macros) *closures *simple syntax which is easy to edit (and to "parse" by the brain) a few explanations: simply put, dynamic typing means that not the places have type but the runtime values. this is an important help if you don't know beforehand all your requirements, and your application is constantly transforming as the project evolves. from my experience, this is by far most of the case - only with much more obstacles when static typing is in the picture. probably many would argue that the cons cell and sexp's are crucial for a flexible macro system. the crucial thing is a simple syntax and a simple to manipulate data structure it is parsed into. the sexp syntax and lists together are a good candidate, but there are many others. i'm happy with the parens, but i'm not happy with the other part, namely using cons'd lists to represent program code, because it has an important deficiency: you can't annotate it with random stuff without disturbing the data that is already represented with it. e.g. 
i can't annotate such a cons'd list representing program code with the fact that it was edited by Joe two days ago, without turning that form into nonsense for the lisp compiler. these annotations are more and more important as your DSL's become more complex, and as your macros turn into small compilers compiling your DSL into lisp. A: I don't know about having a favorite misconception... but one I often see is that programmers talk about "LISP" and why they don't like it/it's inappropriate/it's academic/it'll never work for project X. Which would be fine except, when they say "LISP" they mean "Scheme", by which they really mean "a subset of Scheme", by which they really mean "a half-remembered subset of Scheme from that AI or languages course 5 years ago that they didn't like". There seems to be quite a bit of irrational prejudice against Lisp(s) from this direction. Rational prejudices are something else entirely. A: That "CLOS" is a programming language, and that Lisp is an interpreted, functional-only language. Really, I heard that from CS professors. And I think there was some misleading chart in one of these Programming Language Concepts books that suggested that. CS teachers at colleges and universities today (particularly the young ones) were educated using Java, C and C++, and they probably learnt either Scheme or Common Lisp in a course called "comparative studies of programming languages" or "programming paradigms", which was probably taught by someone who doesn't like any Lisp language, and taught them about functions, lists, symbols and higher-order functions. Period. Then they end up teaching what they learned: * *Lisp is one programming language *Lisp is interpreted *Lisp is slow *Lisp is an AI language (the last time I checked Robert Sebesta's book, it still claimed that -- but there is a new edition, so he may have fixed this) *Lisp has no OO support (!!!) *Lisp is a functional language (as opposed to minimally supporting any other paradigm) *In Lisp there are no data types *Lisp has no data structures except lists (and hence is not useful for number crunching) *"Lisp is not used anymore, and is only used in this course because it is the most important functional language" I even saw a very intelligent professor giving an example of matrix multiplication in Common Lisp -- representing the matrices as lists, of course! A: That because people tackle hard problems using Lisp, the language itself must be difficult. Recursion is hard. Fixed points of functions are hard. But they barely cover chapter 1 of Structure and Interpretation of Computer Programs. That's not because Scheme is hard -- it's actually because it's so easy, the hard stuff is all that's left to do! A: Any sufficiently advanced application is indistinguishable from line noise. A: People blame Lisp for parentheses. I would like them to say how they would implement a list-array oriented language without them...
{ "language": "en", "url": "https://stackoverflow.com/questions/126653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Opensource Implementation of the Alias Method I am doing a project at the moment, and in the interest of code reuse, I went looking for a library that can perform some probabilistic accept/reject of an item: i.e., there are three people (a, b c), and each of them have a probability P{i} of getting an item, where p{a} denotes the probability of a. These probabilities are calculated at run time, and cannot be hardcoded. What I wanted to do is to generate one random number (for an item), and calculate who gets that item based on their probability of getting it. The alias method (http://books.google.com/books?pg=PA133&dq=alias+method+walker&ei=D4ORR8ncFYuWtgOslpVE&sig=TjEThBUa4odbGJmjyF4daF1AKF4&id=ERSSDBDcYOIC&output=html) outlined here explained how, but I wanted to see if there is a ready made implementation so I wouldn't have to write it up. A: Would something like this do? Put all p{i}'s in the array, function will return an index to the person who gets the item. Executes in O(n). public int selectPerson(float[] probabilies, Random r) { float t = r.nextFloat(); float p = 0.0f; for (int i = 0; i < probabilies.length; i++) { p += probabilies[i]; if (t < p) { return i; } } // We should not end up here if probabilities are normalized properly (sum up to one) return probabilies.length - 1; } EDIT: I haven't really tested this. My point was that the function you described is not very complicated (if I understood what you meant correctly, that is), and you shouldn't need to download a library to solve this. A: Here is a Ruby implementation: https://github.com/cantino/walker_method A: i just tested out the method above - its not perfect, but i guess for my purposes, it ought to be enough. (code in groovy, pasted into a unit test...) void test() { for (int i = 0; i < 10; i++) { once() } } private def once() { def double[] probs = [1 / 11, 2 / 11, 3 / 11, 1 / 11, 2 / 11, 2 / 11] def int[] whoCounts = new int[probs.length] def Random r = new Random() def int who int TIMES = 1000000 for (int i = 0; i < TIMES; i++) { who = selectPerson(probs, r.nextDouble()) whoCounts[who]++ } for (int j = 0; j < probs.length; j++) { System.out.printf(" %10f ", (probs[j] - (whoCounts[j] / TIMES))) } println "" } public int selectPerson(double[] probabilies, double r) { double t = r double p = 0.0f; for (int i = 0; i < probabilies.length; i++) { p += probabilies[i]; if (t < p) { return i; } } return probabilies.length - 1; } outputs: the difference betweenn the probability, and the actual count/total obtained over ten 1,000,000 runs: -0.000009 0.000027 0.000149 -0.000125 0.000371 -0.000414 -0.000212 -0.000346 -0.000396 0.000013 0.000808 0.000132 0.000326 0.000231 -0.000113 0.000040 -0.000071 -0.000414 0.000236 0.000390 -0.000733 -0.000368 0.000086 0.000388 -0.000202 -0.000473 -0.000250 0.000101 -0.000140 0.000963 0.000076 0.000487 -0.000106 -0.000044 0.000095 -0.000509 0.000295 0.000117 -0.000545 -0.000112 -0.000062 0.000306 -0.000584 0.000651 0.000191 0.000280 -0.000358 -0.000181 -0.000334 -0.000043 0.000484 -0.000156 0.000420 -0.000372
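Since no ready-made implementation turned up in the answers and the linear scan above costs O(n) per draw, here is a minimal sketch of Vose's variant of Walker's alias method, written in C# for illustration (the class and member names are my own invention). Setup is O(n) and each subsequent draw is O(1), which only pays off when you assign many items from the same distribution:

using System;
using System.Collections.Generic;

public class AliasSampler
{
    private readonly double[] prob;   // probability of keeping column i
    private readonly int[] alias;     // fallback column if we don't keep i
    private readonly Random rng;

    public AliasSampler(double[] probabilities, Random rng)
    {
        int n = probabilities.Length;
        prob = new double[n];
        alias = new int[n];
        this.rng = rng;

        // Scale so the average column height is 1.
        var scaled = new double[n];
        var small = new Stack<int>();
        var large = new Stack<int>();
        for (int i = 0; i < n; i++)
        {
            scaled[i] = probabilities[i] * n;
            if (scaled[i] < 1.0) small.Push(i); else large.Push(i);
        }

        // Pair each under-full column with an over-full one.
        while (small.Count > 0 && large.Count > 0)
        {
            int s = small.Pop();
            int l = large.Pop();
            prob[s] = scaled[s];
            alias[s] = l;
            scaled[l] = scaled[l] + scaled[s] - 1.0;
            if (scaled[l] < 1.0) small.Push(l); else large.Push(l);
        }

        // Whatever is left is (numerically) exactly full.
        while (large.Count > 0) prob[large.Pop()] = 1.0;
        while (small.Count > 0) prob[small.Pop()] = 1.0;
    }

    // Returns the index of the person who gets the item.
    public int Next()
    {
        int column = rng.Next(prob.Length);
        return rng.NextDouble() < prob[column] ? column : alias[column];
    }
}

Usage would look something like: var sampler = new AliasSampler(new[] { 1/11.0, 2/11.0, 3/11.0, 1/11.0, 2/11.0, 2/11.0 }, new Random()); int who = sampler.Next();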
{ "language": "en", "url": "https://stackoverflow.com/questions/126656", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is a good source for geometric algorithms? I am looking for any good sources for geometric algorithms specifically; the simple stuff like when two lines cross and so on is easy enough (and easy to find), but I would like to find somewhere with algorithms for the more tricky things, such as finding the shape formed by expanding a given polygon by some amount; fast algorithms for shapes with curved sides, etc. Any good tips? Thanks! A: Computational Geometry Algorithms Library is decent. A: I enjoy Dave Eberly's website, especially some of his PDFs. For curved surfaces, there's a pretty good free textbook here that covers Béziers, NURBS and subdivision surfaces. A: "Computational Geometry: Algorithms and Applications" by Mark de Berg, Otfried Cheong, Marc van Kreveld, and Mark Overmars is an excellent introductory computational geometry textbook. It is known as "the four-Marks book" even though only three of the four authors are named Mark or Marc. A: The definitive sourcebook for this is Mathematical Elements for Computer Graphics by Rogers and Adams http://www.nar-associates.com/nar-publishing/mecg2nd.htm A: Computational Geometry in C is a great book; I learnt a lot from it A: In the end, I did find exactly what I was looking for: Real-Time Collision Detection by Christer Ericson. This is wonderful, and I recommend it strongly. Not so much on curved sides etc, but for the essential stuff on how to actually program geometrical hit testing and so on properly, it seems hard to beat. A: A very nice source of inspiration is Paul Bourke. http://paulbourke.net/ Straight to his geometry stuff: http://paulbourke.net/geometry/index.html You might want to wander around on his site a bit, there's tons of nice stuff! A: I've gotten good use from the generically named Computer Graphics, C Version by Hearn and Baker. A: If you are interested in something really complex, try searching for it on http://citeseer.ist.psu.edu/ It's a scientific digital library, and computational geometry is well presented there. I used it a lot while implementing shadows in 3D.
{ "language": "en", "url": "https://stackoverflow.com/questions/126658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Why does Maven 2 try to download dependencies that I already have? When I launch the "mvn install" command, Maven sometimes tries to download dependencies that it has already downloaded. That's expected for SNAPSHOTs, but why does Maven do that for other JARs? I know I can avoid that behavior with the "-o" flag, but I just wonder what the cause is. A: I'd look for dependencies that don't have a specified version number. Maven will periodically check to make sure that it has the most up-to-date version of these artifacts. A: This is probably not what you're seeing, but in the past I've had to manually install artifacts into my local repository, and if you forget to include the -DgeneratePom=true option there will be no pom in the repo for that artifact, and Maven will go out to central (and any other remote repos you have configured) to try to download that pom on every build. A: While we're on the subject of this, I've run into a major bug in Maven 2.0.x. In offline mode, Maven will still attempt to download the latest snapshot, and when it can't find your snapshot repo, it fails the build. Imagine the hilarity that ensues when this happens on site with a client and you just needed to make a small change (but I digress). Here's the bug: http://jira.codehaus.org/browse/MNG-2433 Here's a workaround: http://mail-archives.apache.org/mod_mbox/maven-users/200601.mbox/%3C117228810601130559l7e79a5e2k@mail.gmail.com%3E A: The -o flag still wasn't working for me, but this did: find ~/.m2/repository -name '_maven*' | xargs rm find ~/.m2/repository -name '*lastUpdated' | xargs rm Which will delete all the .lastUpdated and _maven.repositories files in your local repo. I ran into this issue because we have a corporate Nexus repo that was unreachable, and I needed to do some work. Using Eclipse's Maven integration may have also contributed to this.
{ "language": "en", "url": "https://stackoverflow.com/questions/126667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Unable to close OledbDataReader to Sybase Database in VB.NET I don't seem to be able to close the OleDbDataReader object after reading data from it. Here is the relevant code - Dim conSyBase As New OleDb.OleDbConnection("Provider=Sybase.ASEOLEDBProvider.2;Server Name=xx.xx.xx.xx;Server Port Address=5000;Initial Catalog=xxxxxxxxxx;User ID=xxxxxxxx;Password=xxxxxxxxx;") conSyBase.Open() Dim cmdSyBase As New OleDb.OleDbCommand("MySQLStatement", conSyBase) Dim drSyBase As OleDb.OleDbDataReader = cmdSyBase.ExecuteReader Try While drSyBase.Read ' Do some stuff with the data here End While Catch ex As Exception NotifyError(ex, "Read failed.") End Try drSyBase.Close() ' CODE HANGS HERE conSyBase.Close() drSyBase.Dispose() cmdSyBase.Dispose() conSyBase.Dispose() The console application just hangs at the point at which I try to close the reader. Opening and closing a connection is not a problem, so does anyone have any ideas about what may be causing this? A: I found the answer! Before drSyBase.Close(), you need to call the Cancel method of the Command object: cmdSyBase.Cancel() I believe that this may be specific to Sybase databases. A: This is a long shot, but try moving your .Close() and .Dispose() lines into a Finally block of the Try. Like this: Dim conSyBase As New OleDb.OleDbConnection("Provider=Sybase.ASEOLEDBProvider.2;Server Name=xx.xx.xx.xx;Server Port Address=5000;Initial Catalog=xxxxxxxxxx;User ID=xxxxxxxx;Password=xxxxxxxxx;") conSyBase.Open() Dim cmdSyBase As New OleDb.OleDbCommand("MySQLStatement", conSyBase) Dim drSyBase As OleDb.OleDbDataReader = cmdSyBase.ExecuteReader Try While drSyBase.Read ' Do some stuff with the data here End While Catch ex As Exception NotifyError(ex, "Read failed.") Finally drSyBase.Close() conSyBase.Close() drSyBase.Dispose() cmdSyBase.Dispose() conSyBase.Dispose() End Try A: It's been a while since I used VB.NET, but the safest way to handle this in C# is to use a "using" statement. It's like an implicit Try/Finally, and it makes sure all resources are closed/cancelled and disposed when the "using" ends. using (OleDb.OleDbConnection connection = new OleDb.OleDbConnection(connectionString)) { DoDataAccessStuff(); } // Your resource(s) are killed, disposed and all that Update: Found a link about the Using statement in VB.NET 2.0, hope it helps. Using conSyBase As New OleDb.OleDbConnection("Provider=Sybase.ASEOLEDBProvider.2;Server Name=xx.xx.xx.xx;Server Port Address=5000;Initial Catalog=xxxxxxxxxx;User ID=xxxxxxxx;Password=xxxxxxxxx;"), _ cmdSyBase As New OleDb.OleDbCommand("MySQLStatement", conSyBase) conSyBase.Open() Dim drSyBase As OleDb.OleDbDataReader = cmdSyBase.ExecuteReader Try While drSyBase.Read() '...' End While Catch ex As Exception NotifyError(ex, "Read failed.") End Try cmdSyBase.Cancel() End Using
{ "language": "en", "url": "https://stackoverflow.com/questions/126678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Algorithm for counting the number of unique colors in an image Looking for one that is fast enough and still graceful with memory. The image is a 24bpp System.Drawing.Bitmap. A: Most people here have suggested solutions that will probably be fast (actually the one that only uses 2 MB is probably acceptable regarding memory usage and very fast; the one with the hash might be even faster, but it will definitely use more than 2 MB of memory). Programming is always a trade-off between memory usage and CPU time. You can usually get results faster if you are willing to "waste" more memory, or you can get results slower by "wasting" more computation time, however this usually saves you a lot of memory. Here's one solution nobody has suggested so far. It is probably the one that costs the least memory (you can optimize it so it will hardly use more memory than is necessary to keep the image in memory; however, the image will be altered, though you might have to copy it first). I doubt it can beat the hash or bit-mask solution in speed, it's just interesting if memory is your highest concern. * *Sort the pixels in the image by color. You can easily convert every pixel to a 32 bit number, and 32 bit numbers can be compared to each other, one number being smaller than another one, bigger or equal. If you use Quicksort, no extra storage space is needed for sorting, other than additional stack space. If you use Shellsort, no extra memory is needed at all (though Shellsort will be much slower than Quicksort). int num = (RED << 16) + (GREEN << 8) + BLUE; *Once you have sorted the pixels like that (which means you have re-arranged them within the image), all pixels of equal color are always next to each other. So you can just iterate once over the image and look how often the color changes. E.g. you store the current color of the pixel at (0, 0) and you init a counter with the value 1. Next step is you go to (0, 1). If it is the same color as before, nothing to do, continue with the next pixel (0, 2). However, if it is not the same, increase the counter by one and remember the color of that pixel for the next iteration. *Once you have looked at the last pixel (and possibly increased the counter again, if it was not the same as the second last pixel), the counter contains the number of unique colors. Iterating over all pixels at least once is something you must do in any case, regardless of solution, so it has no impact on this solution being slower or faster than other solutions. The speed of this algorithm depends on how fast you can sort the pixels of the image by color. As I said, this algorithm is easily beaten when speed is your main concern (other solutions here are probably all faster), but I doubt it can be beaten when memory usage is your main concern, since other than the counter, enough storage space to store one color, and storage space for the image itself, it will only need additional memory if your chosen sorting algorithm needs any. A: var cnt = new HashSet<System.Drawing.Color>(); for (int y = 0; y < image.Height; y++) for (int x = 0; x < image.Width; x++) cnt.Add(image.GetPixel(x, y)); Console.WriteLine("The image has {0} distinct colours.", cnt.Count); /EDIT: as Lou said, using .ToArgb() instead of the Color value itself might be slightly faster because of the way Color implements GetHashCode. A: Most of the other implementations here are going to be slow. For this to be fast, you need direct scanline access and some kind of sparse matrix to store the color data in.
First I will describe the 32bpp case; it's much easier: * *HashSet: a sparse matrix of colors *ImageData: Use a BitmapData object to directly access the underlying memory *PixelAccess: Use an int* to reference the memory as ints which you can iterate through For each iteration just do a HashSet.Add of that integer. At the end just see how many keys are in the HashSet and that's the total number of colors. It is important to note that resizing a HashSet is really painful (O(n) where n is the number of items in the set) and so you may want to construct a reasonably sized HashSet to begin with, maybe something like imageHeight*imageWidth/4 would be good. In the 24bpp case PixelAccess needs to be a byte* and you need to iterate over 3 bytes for each color in order to construct an int. For each byte in the set of 3, first bitshift to the left by 8 (one byte) and add it to an integer. You now have a 24bpp Color represented by a 32-bit int, and the rest is all the same. A: You didn't exactly define unique colors. If you actually mean truly unique code values (as opposed to visually the same), then the only exact solution is to actually count them up using one of the techniques described in other answers. If you are looking for visually similar colors, this does quickly distill down to a palette mapping problem, where you are looking for, say, the 256 best unique colors to use to most closely represent the original full dynamic color range image. For most images, it is amazing how well an image with 24 bits and up to 16 million different colors to start with can be mapped to an image with only 256 unique colors when those 256 colors are well chosen. The optimal selection of those right 256 colors (for this example) has been proven to be NP-complete, but there are practical solutions that can come very close. Search for papers by a guy named Shijie Wan and stuff built on his work. If you are looking for an approximation to the number of code value colors in an image, I would compress the image using a lossless compression scheme. The compression ratio will directly relate to the number of unique code values in the image. You don't even have to keep the compressed output, just accumulate the number of bytes along the way and throw away the actual output data. Using a set of sample images as a reference, you could build a lookup table between compression ratio and number of different code values in the image. Again, this last technique, while quite fast, will definitely be an approximation, but it should correlate reasonably well. A: If you need an exact number, then you are going to have to loop over all of the pixels. Probably storing the color and a count in a hash is the best way to go because of the sparseness of the colors. Using Color.ToArgb() in the hash instead of the color object would probably be a good idea too. Also, if speed is a major concern, you don't want to use a function like GetPixel(x, y) -- instead try to process chunks at a time (a row at a time). If you can, get a pointer to the beginning of the image memory and do it unsafe. A: Never implemented something like this before, but as I see it, a primitive implementation: For a 24-bit image, the maximum number of colours the image could have is the minimum of (2^24, pixel count of image). You only need to record whether a particular colour has been counted, not how many times it has been counted. That means you need 1 bit to record whether each colour is counted. That's 2MB of memory.
Iterate through the pixels, set the relevant bit in your 2MB colour set map. At the end iterate through the colour set map counting the set bits (if you are lucky you will have a POPCNT instruction to aid in this). For smaller images and certainly lower colour depths you might be better off keeping a colour table and count for each colour that is in the image. A: Before modern graphics cards, when most machines ran in 256 color palette mode, this was an area of considerable interest. The limits on processing power and memory imposed just the sort of constraint which might be useful to you - so a search on algorithms for handling palettes is likely to turn up something of use. A: That depends on what types of images you want to analyse. For 24 bit images you will need up to 2MB of memory (since in the worst case you have to process each color). For this a bitmap would be the best idea (you have a 2 MB bitmap, where each bit corresponds to a color). This would be a good solution for pictures with a high color count, which can be realized in O(#pixels). For 16 bit images you would only need 8 kB for this bitmap using this technique. However, if you have pictures with few colors it would be better to use something else. But then you would need some kind of check to indicate which algorithm you should use... A: The maximum number of unique colours in an image is equal to the number of pixels, so this is predictable from the very start of the process. Using the HashSet method proposed by Konrad would then seem to be a reasonable solution, as the size of the hash should be no greater than the number of pixels, whereas using the bitmap approach suggested by JeeBee would require 512 MB for a 32 bit image (if there's an Alpha channel, and this is determined to contribute to the uniqueness of the colour). The performance of the HashSet approach, though, is likely to be worse than that of the 'bit-per-colour' approach - you might want to try both and do some benchmarks, using lots of different images. A: The modern popular implementation of color quantization uses the octree data structure. Note the Wikipedia pages, the content is pretty good. The octree has the advantage of being as memory-limited as you want, so you can sample the whole image and decide on your palette without much additional memory. Once you understand the concept, follow the link to the 1996 Dr Dobb's journal article's source code. Since this is a C# question, see the May 2003 MSDN article Optimizing Color Quantization for ASP.NET Images, which includes some source code.
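Tying together the scanline/BitmapData answer and the HashSet suggestion above, here is a minimal C# sketch (a hypothetical helper, and it needs the project compiled with /unsafe). Locking the bitmap as 32bppArgb lets GDI+ hand back one int per pixel even for a 24bpp source, which sidesteps the manual 3-byte packing:

using System.Collections.Generic;
using System.Drawing;
using System.Drawing.Imaging;

static int CountUniqueColors(Bitmap bmp)
{
    var seen = new HashSet<int>();
    var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    try
    {
        unsafe
        {
            for (int y = 0; y < data.Height; y++)
            {
                // Stride is in bytes, so step row by row from Scan0.
                int* row = (int*)((byte*)data.Scan0.ToPointer() + y * data.Stride);
                for (int x = 0; x < data.Width; x++)
                    seen.Add(row[x]);
            }
        }
    }
    finally
    {
        bmp.UnlockBits(data);
    }
    return seen.Count;
}

GetPixel in a nested loop would give the same count, but LockBits avoids a per-pixel method call and is typically much faster on large images.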
{ "language": "en", "url": "https://stackoverflow.com/questions/126680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do I reserve low memory (pre OS)? Background: I need to reserve an amount of memory below 0xA0000 before my operating system starts. To do this I change the 0040:0013 (or 0x413) word, which is the amount of low memory available in KiB. However, Windows and other operating systems use E820h/INT15h to query the memory layout, and some BIOSes don't reflect 0x413 changes in the E820h/INT15h BIOS function. Therefore I also have to hook the E820h function if needed. Question: Is there another (more reliable) way to reserve low memory prior to the OS? Or any other way of changing the E820h/INT15h results other than hooking INT15h (by poking the EBDA perhaps?) A: I don't think so, but if you are not doing a bootloader, you could para-virtualize the OS. You could look at the Xen hypervisor for that.
{ "language": "en", "url": "https://stackoverflow.com/questions/126689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How does SOA service discovery (UDDI) work in practice? I'm just reading up on SOA, and the service registry / UDDI get mentioned regularly. It sounds nice, but how is it used in reality? * *Is the registry meant to decouple a logical service from its physical implementation (port, URL, etc.)? *Is the registry meant to be browsed by a human looking for an interesting service to play with? *Would it be 'wrong' to hard-wire an application to the services it uses? A: A service registry stores and publishes information about all available services, mainly their interface description and their current URI (IP, port, whatever). This way the application can simply ask the registry for the needed service and will get the details of a fitting service implementation, and can connect. UDDI is not the only way to get a registry for your services. But remember that UDDI is intended for web services only, so it's only useful if your SOA consists only of web services. 1) Correct. 2) No, it's not really meant for human eyes. Sure, there are tools to browse the directory, but they are mainly for checking whether the registry has the services you need, etc. The real usage happens directly between your application/service and the registry. 3) That depends on what you want to accomplish. If you want to build a SOA, I think it would be 'wrong' because this contradicts the loose coupling paradigm of SOA. If this is your only service, the only application that uses it, and it's likely that the service won't change its URI, there's definitely no problem in hard-wiring it - but then there's probably no need to separate this service :) A: I find it to be more theoretically useful than practically useful. It is infrequently implemented and infrequently used. In reality, DNS provides a sufficient abstraction tool for the location of resources on the network. A: How about using multicast to discover the service? Like using JGroups or SLP? All the services will discover each other and inject the ones they need into proxies. Then build an abstraction over the actual transport implementation (e.g. REST, SOAP, RMI).
{ "language": "en", "url": "https://stackoverflow.com/questions/126700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How accurate can I expect the time to be from a stratum 0 NTP server on the same subnet on ethernet? I have an application that depends on gpsd and ntpd to accurately set the system time on a Linux machine. gpsd is fed NMEA + PPS. The application is pumping ~25MB per second over the network, and I think the loading on the system is causing jitter in the time somehow (a loaded PCI Express bus causing irregular interrupt latency). I have another machine that is not loaded at all that I could set up to read the GPS and act as an NTP server for the loaded machine. (The loaded machine would be getting stratum 1 ???) How accurate can I expect the time to be from a stratum 0 NTP server on the same subnet on Ethernet? I hope this is not too off topic; I am sure at some point someone else will be happy the answer is documented here. ;-) A: The best info I could find on NTP accuracy seems to point at 1-2 ms in a LAN setting: NTP v4, with kernel mods to support it, is capable of much better than 1ms accuracy, possibly as good as 1ns. According to [Dave Mills] article, NTP v3 is accurate to 1-2ms in a LAN and 10s of ms in WAN nets. http://www.cis.udel.edu/~mills/ntp.html Other articles suggest that with an accurate time source, such as a GPS time source, NTP is accurate to 50us, but the links on the Linux kernel support say that accuracy of a few ms is possible. http://www.atomic-clock.galleon.eu.com/support/ntp-time-server-accuracy.html Another article says that it is dependent on the predictability of network delays (i.e. a low-jitter network). http://www.postel.org/pipermail/end2end-interest/2003-April/002925.html A: NTP is usually considered good for small single-digit ms in this sort of situation. After it has been running for a few days, there shouldn't really be much jitter in any of the actual clocks, because ntpd implements a heap of very long time-constant filtering. However, you don't really say how you're measuring the time, and whatever mechanism you're using might be just as jittery as (if not more than) the underlying synchronisation. If you do have a busy network and network cards with really deep buffering, then that might not be helping things, as the jitter between packet arrival and interrupt service will be larger. The fancier your Ethernet switching is, the worse it is for timing too - old fashioned hubs are better than switches in this regard. A: The stratum level of the NTP server in question has no relation to the accuracy of the clock/server. It purely indicates how far away from the "reference clock" you are. What matters more, in regards to NTP accuracy (in regards to time, of course), is the network latency between servers, the type of server being used, and potentially server load. Depending on what NTP server you use, they document how accurate their time will be. Each server software uses various algorithms to compute time based on network latency and server load, and it comes down to the accuracy of those algorithms. For instance, the MS NTP server states that it will be accurate within 2 seconds. OpenNTPd has stated that they won't give you the possible accuracy of the server. There are instances where stratum 3 servers can be more accurate than stratum 2 servers, etc.
{ "language": "en", "url": "https://stackoverflow.com/questions/126710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to make Wget handle an HTTP 100-Continue response? I am trying to POST HTML (contained in a file) to a URL using Wget like this: wget -O- --debug --header=Content-Type:text/html --post-file=index.html http://localhost/www/encoder.ashx The URL to which the HTML is being posted is a Web application end-point implemented using ASP.NET. The server replies with a 100 (Continue) response and Wget simply stops dead in its tracks rather than continuing with the real response that should follow next. Can Wget be somehow told to handle a 100 (Continue) response, or is this some well-known limitation of the tool? Notes: * *I noticed that Wget never sends the Expect: 100-Continue header so technically the server should not be issuing a 100 (Continue) response. UPDATE: Looks like this is possible, as per §8.2.3 of RFC 2616 (Hypertext Transfer Protocol -- HTTP/1.1): An origin server SHOULD NOT send a 100 (Continue) response if the request message does not include an Expect request-header field with the "100-continue" expectation, and MUST NOT send a 100 (Continue) response if such a request comes from an HTTP/1.0 (or earlier) client. There is an exception to this rule: for compatibility with RFC 2068, a server MAY send a 100 (Continue) status in response to an HTTP/1.1 PUT or POST request that does not include an Expect request-header field with the "100- continue" expectation. This exception, the purpose of which is to minimize any client processing delays associated with an undeclared wait for 100 (Continue) status, applies only to HTTP/1.1 requests, and not to requests with any other HTTP- version value. *cURL has no problems with such a transaction. It sends an Expect: 100-Continue header and continues past the 100 (Continue) response on to the real one. For more information, here is the full debug trace of the transaction from the invocation shown above: Setting --post-file (postfile) to index.html Setting --header (header) to Content-Type:text/html DEBUG output created by Wget 1.10 on Windows. --13:29:17-- http://localhost/www/encoder.ashx => `-' Resolving localhost... seconds 0.00, 127.0.0.1 Caching localhost => 127.0.0.1 Connecting to localhost|127.0.0.1|:80... seconds 0.00, connected. Created socket 296. Releasing 0x01621a10 (new refcount 1). ---request begin--- POST /www/encoder.ashx HTTP/1.0 User-Agent: Wget/1.10 Accept: */* Host: localhost Connection: Keep-Alive Content-Type: text/html Content-Length: 30984 ---request end--- [writing POST file index.html ... done] HTTP request sent, awaiting response... ---response begin--- HTTP/1.1 100 Continue Server: ASP.NET Development Server/9.0.0.0 Date: Wed, 24 Sep 2008 11:29:17 GMT Content-Length: 0 ---response end--- 100 Continue Closed fd 296 13:29:17 ERROR 100: Continue. A: I looked at the source code to Wget for Windows, and as far as I can tell the debug output is coming from the generic error condition when Wget fails to parse the response properly. Looks like this is just a limitation of Wget, so you'll probably have to use curl or some other method to avoid encountering this issue.
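Since the question notes that cURL negotiates the 100 (Continue) exchange correctly, the simplest workaround may be to switch tools for this one request. An equivalent cURL invocation would be something along these lines (adjust the header and URL to your setup):

curl -H "Content-Type: text/html" --data-binary @index.html http://localhost/www/encoder.ashx

Here --data-binary @index.html posts the file contents unmodified as the request body, and cURL sends the Expect: 100-continue header itself for larger POST bodies.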
{ "language": "en", "url": "https://stackoverflow.com/questions/126711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Immutable Styles in Silverlight 2 Anyone found a good pattern for getting around immutable styles in Silverlight 2? What I mean is does anyone have a workaround for the fact that you cannot switch the style of an element programmatically once it has been set, i.e. the second line here will throw a catastrophic failure exception: this.TestButton.Style = (Style)Application.Current.Resources["Fred"]; this.TestButton.Style = (Style)Application.Current.Resources["Barney"]; A: It's not possible. The best workaround I've seen is from Nikhil Kothari at Microsoft: http://www.nikhilk.net/Silverlight-Themes.aspx There is a major drawback to defining styles centrally in App.xaml anyway, which is that it breaks all designer support when you reference those styles from other user controls. I haven't used it but Nikhil's theme engine looks very promising, and I have a funny feeling that many of his ideas will make it into the silverlight product eventually anyway. A: The problem goes away in Silverlight 3 where styles are mutable - yay! A: I don't know if this helps, but I believe you can change the control's template as many times as you want during runtime. Maybe that would be a potential workaround.
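One further workaround pattern (a hedged sketch only: ApplyStyle is a hypothetical helper, and it ignores BasedOn styles and anything other than plain property Setters) is to avoid re-assigning the Style property altogether and instead copy the new style's setters straight onto the element:

using System.Windows;

public static class StyleHelper
{
    // Style can only be assigned once in Silverlight 2, but the individual
    // property values can still be pushed onto the element directly.
    public static void ApplyStyle(FrameworkElement target, Style style)
    {
        foreach (Setter setter in style.Setters)
        {
            target.SetValue(setter.Property, setter.Value);
        }
    }
}

So instead of the second assignment in the question you would call StyleHelper.ApplyStyle(this.TestButton, (Style)Application.Current.Resources["Barney"]). Local values set this way take precedence over the originally applied style, which is usually what you want for a theme switch.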
{ "language": "en", "url": "https://stackoverflow.com/questions/126716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is it possible to send and store a Type reference in VB6/VBA? I'm working on a VB6 application and I would like to send a Type as a reference and store it in another form. Is this possible? Sending it is no problem, I just use the ByRef keyword: Public Sub SetStopToEdit(ByRef currentStop As StopType) But when I try to use Set to store currentStop in the receiving module I get the "Object required" error when running the program: Private stopToEdit As StopTypeModule.StopType ' ... Lots of code Set stopToEdit = currentStop StopType is defined as follows in a Module (not a class module): Public Type StopType MachineName As String StartDate As Date StartTime As String Duration As Double End Type Is it possible to store the sent reference or do I have to turn StopType into a class? While just setting a local variable works: stopToEdit = currentStop When stopToEdit is later changed, the change is not visible in the variable sent to SetStopToEdit. A: You need to refactor it into a class. A: What is StopType? How is it defined? Is a Type the VB6 record stuff? If so (and if possible), you should redefine it as a class - and only use those, as you will run into problems with Collections otherwise. Try dropping the Set keyword - Strings, Integers and Numbers, but if I remember correctly also Records, are not Set, they are Let, but that is implicit in the assignment: stopToEdit = currentStop EDIT: If you want to change the passed-in (ByRef) record, do a manual element-for-element copy instead of reassigning the whole thing - that should do the trick. At the same time: DON'T! ByRef (sadly the default in VB) is not so very cool (to paraphrase my son). Try to design your functions so they don't change arguments passed in - this is what you have a return value for... A: The confusion here is that a StopType is not a reference like an object, but behaves more like a built-in type such as Long. When you do: stopToEdit = currentStop you are only taking a copy of currentStop. If you subsequently change stopToEdit, you'll need to copy it back: currentStop = stopToEdit That way the value will be passed back out of the Sub.
{ "language": "en", "url": "https://stackoverflow.com/questions/126718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to avoid storing credentials to connect to Oracle with JDBC? Is it possible to set up a JDBC connection to Oracle without providing username/password information in a configuration file (or in any other standard readable location)? Typically applications have a configuration file that contains setup parameters to connect to a database. Some DBAs have problems with the fact that usernames and passwords are in clear text in config files. I don't think this is possible with Oracle and JDBC, but I need some confirmation... A possible compromise is to encrypt the password in the config file and decrypt it before setting up the connection. Of course, the decryption key should not be in the same config file. This only protects against accidental viewing of the config file by unauthorized users. A: You may want to try Kerberos, which can use the OS user's credentials, adding the OS user to the database as identified externally. Make sure that you use Kerberos and not the old way of doing this, which had serious security issues. For Kerberos support you would need the advanced security option and a recent JDBC driver, probably the 11g version. Before trying to get it to work in Java, try it out in SQL*Plus using '/' as the username and an empty password. "select user from dual" should give you user@domain. You may also find that there is a fundamental difference between using the thin or the OCI driver when it comes to Kerberos configuration. A: You definitely don't want to be able to connect to the database without credentials, because that makes the database vulnerable. This is a general problem: how do I store credentials needed to access external systems? WebLogic has a credential mapper to solve this problem, in which credentials (encrypted) are stored in embedded LDAP. Many Oracle products use a credential store facility that stores credentials in an Oracle wallet. In the question, you provided the answer. Store the password encrypted and decrypt it when you need it. Obviously you have to use a symmetric encryption algorithm such as 3DES so you can decrypt it. Make sure the symmetric key is not something that can be guessed. The trick is where you keep the symmetric key needed for en/decryption. You can put it in a file that is secured through the OS, or you can keep it in the code, but then you need to keep the code secure. You can also generate the key if you use a technique that will produce the same key and the algorithm is reasonably secure. If you can keep the code secure, you can obviously keep the password in the code as well. However, you want the flexibility of being able to change the credentials without changing the code. You can add more layers to this solution as well. You can encrypt the configuration file (with a different key) as well as the password inside it, making the hacker discover 2 keys. There are other, even more secure methods using PKI, but they get hard to set up. A: I'd suggest you look into proxy authentication. This is documented in the Oracle® Database Security Guide, as well as the Oracle® Database JDBC Developer's Guide and Reference. Essentially what this allows you to do is have a user in the database that ONLY has connect privileges. The users' real database accounts are configured to be able to connect as the proxy user. Your application connecting through JDBC then stores the proxy username and password, and when connecting provides these credentials, PLUS the username of the real database user in the connect string.
Oracle connects as the proxy user, and then mimics the real database user, inheriting the database privileges of the real user. A: All J2EE containers (JBoss, Tomcat, BEA) have connection pools. They will open a number of connections, keep them alive and give them to you in 1/100th of the time it takes to create one from scratch. Additionally, they also have cool features; in JBoss, for example, all the connection info is stored in an external file. If you change the connection info, i.e. you switch from a test to a production DB, your application will dynamically be fed connections from the new pool. The good news is that you don't need to run a full J2EE container just to use connection pooling. The external resource allows the password to be stored either in plaintext or pseudo-encrypted. For a guide on using Tomcat's built-in connection pooling, see Apache Commons DBCP: * *http://vigilbose.blogspot.com/2009/03/apache-commons-dbcp-and-tomcat-jdbc.html A: Since I'm not entirely clear on your environment other than Java & JDBC talking to Oracle, I'll speak towards that. If you are talking about a Java EE app, you should be able to set up connection pools and data sources on the app server, then your application talks to the connection pool, which handles connectivity at that level. The connection pool and data source hold and secure the credentials. A: To my knowledge JDBC connection usernames/passwords need to be stored as plain text. One way to limit the possible risks of this is to restrict the rights of the user so that only the given application's database can be used and only from a predefined host. IMO, this would limit the attacker very much: he could only use the username/password from the same host where the original application resides and only to attack the original application's database. A: I have wondered about this in the past. The solution is certainly one that includes having proper network security at the server and network level to really reduce the number of people who can get access to the system, and having the database credentials only give access to a database account with the bare minimum of permissions required for the application to run. Encryption of properties files might be enough of a deterrent in terms of "can't be bothered to find the key or passphrase" to get an attacker to go on to their next compromised server. I wouldn't rely on "my neighbour is less secure so steal from him please" security however! A: There are two key approaches, and both have a significant impact on the design of the system, such that it is not easy to move from one to the other without a significant rewrite. You need to understand what your company's security governance policy is before choosing. 1) Every user has credentials, carried through the application, for the service that is being used by the application; in your case the Oracle database uses those user credentials to connect to the database. The downside is that every user needs credentials for each secure service. This is a reasonably secure approach but also requires significant extra work to provide and maintain the user credentials. Your database administrators will need to actively manage user credentials, which may run counter to your company’s security governance policies. 2) The application database credentials are stored on a secure directory service, e.g. secure LDAP. The application accesses the directory service with the users’ credentials. 
The directory service returns the appropriate credentials for the service being accessed. In both cases the database credentials should be limited to performing the appropriate operations only. The credentials should reflect the requirements of the business processes; for example, they allow selects from defined views/tables and inserts into others, but not create, update or drop tables. In the second approach use separate credentials for each major business process, e.g. Order Processing, Accounting, HR, etc. However, remember that security is like the layers of an onion: if somebody has stripped away enough layers to access the application, such that they can access the DB connection pool config file, they can probably Trojan the application to capture users’ credentials. This is where a good security governance policy comes in. Security governance is a complex issue that needs senior management commitment, because if you need this level of security for your live platform, it costs. You need to separate the responsibilities of development from deployment, operations & user authority management. You may also need to have security auditors, who have full access to view changes but no ability to change the configuration. It is far from simple and is a highly paid specialism. A: You can store the credentials anywhere, including as hardwired strings in the program or as entries in the Windows registry. It's up to you to retrieve them if you use something nonstandard, though; I'm not aware of any pre-rolled solutions that aren't plaintext. A: You could try Oracle's proxy authentication, where the JDBC client authenticates using a certificate against a known middle-tier component/service (the proxy) which is trusted by the database server. I've never tried that though, so I don't know whether it's easy to do. A: In addition to the solutions that were already mentioned (Kerberos authentication, using proxy authentication) there are 2 other solutions that both work with the JDBC thin driver: * *Store the password in an SSO wallet: a wallet can be used to store the user's password. If you use an SSO wallet then the wallet itself doesn't have a password. SSO wallets are commonly used in the context of SSL but they can also be used to just store a password. *Use SSL with user authentication: configure SSL with a user that's externally authenticated by the Distinguished Name (DN). This user doesn't have a password. As long as you connect with SSL using a certificate that has this DN you'll be able to create a session using this user.
{ "language": "en", "url": "https://stackoverflow.com/questions/126720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: Prisoner's Dilemma Algorithm After watching The Dark Knight I became rather enthralled with the concept of the Prisoner's Dilemma. There must be an algorithm that maximizes one's own gain given a situation. For those that find this foreign: http://en.wikipedia.org/wiki/Prisoner%27s_dilemma Very, very interesting stuff. Edit: The question is, what is, if any, the most efficient algorithm that exists for the Prisoner's Dilemma? A: The Wikipedia page seems to give all the answers... for the one-time prisoner's dilemma, the optimal solution for each prisoner (not both prisoners) is to betray. For the iterated prisoner's dilemma, it is best to remain silent on the first go, and then after that do whatever the other prisoner did on the last go. A: The whole point of the dilemma is that the optimal solution (both prisoners stay quiet) is dangerous because part of the problem is out of your hands. So, choosing the suboptimal solution seems to maximize your gain, but it's still suboptimal. I don't see how an algorithm could supply a solution when part of the problem is the unknown. A: I recommend reading Axelrod's The Evolution of Cooperation. This is a computer experiment of competing strategies for the iterated prisoner's dilemma. When I heard of it last, the tit-for-tat strategy came out first. It may have changed. A: For the one-off version of the game, the best strategy is always to defect since there is no chance of retaliation. It gets more interesting for an iterated version since players can respond to their opponents' previous choices. If we know in advance exactly how many rounds there will be, then the logical "best" strategy is still to defect always. This is because it always makes sense to defect on the last turn since there is no chance of retaliation. Of course, our rational opponent will know this and also always defect on the last turn. This makes it sensible for us to defect on the penultimate turn since there is no chance of cooperation on the final turn anyway. Following this logic to its natural conclusion, we should defect on every turn. When the total number of rounds is unknown, things become more interesting. A good strategy for the game should try to predict what an opponent will do. I researched using evolutionary algorithms and simple machine learning with opponent modelling to generate strategies for the game for my master's degree. If you're really interested, you can read my thesis. As recommended by Yuval, probably the best place to start is Axelrod's seminal book. If you're really, really interested in this stuff, there was a 20th anniversary follow-up that included a lot of the more recent work on IPD (the Iterated Prisoner's Dilemma) by other researchers. Also, I'd thoroughly recommend William Poundstone's Prisoner's Dilemma, which is part biography of John von Neumann and part introduction to game theory. A: Since there is only one choice to make, and in the absence of any changeable inputs, your algorithm is either going to be: cooperate = true; ...or... cooperate = false It's more interesting to find a strategy for the Iterated Prisoner's Dilemma, which is something many people have done. For example http://www.iterated-prisoners-dilemma.info/ Even then it's not 'solvable' since the other player is not predictable. A: Well, to my understanding, pattern recognition is a huge part of it as well. Finding the other prisoner's habit - how often he stays quiet and when he narcs. 
You also have to cross-reference that with your own choices to determine what you did to make him react in a certain way. I think it's a little more complex than the wiki explained. It's not just what the prisoner did on the last go, but on all the goes before that, stretching up to infinity. A: Ah yes. This made me remember this old article about The Prisoner's Dilemma in Software Development. For an algorithmic PD competition look here. This was a good one too. A: There isn't, since you cannot categorically predict the behavior of the second prisoner. There are all sorts of "solutions" that make underlying but very restrictive assumptions about the behavior of the second prisoner, but they have little to say about the unconstrained problem (that's what makes it such a compelling dilemma). My two cents, given that you cannot rely on the second prisoner's behavior, is that it comes down to: are you an optimist, or a cynic? Are the two prisoners going to stick together (honor among thieves), or are they going to rat each other out at the first opportunity to save their own throats...? A: Further, in an iterated prisoners' game the optimal strategy will vary based on the other strategies in play. In a series against a player who ALWAYS defects, always defecting is the best strategy. When playing against a player who might co-operate, a strategy that retaliates but occasionally forgives is likely to be best. I should add that this only applies in a game of unknown length. Any game of known length is identical to the single-round game. A: Attempting to find an optimal solution for the Prisoner's Dilemma is like trying to find one for Ro-Sham-Bo (rock-paper-scissors). The best you can do is model your opponent and try to exploit patterns. In the early days of game theory and computer science, John von Neumann and the Rand Corporation spent extensive amounts of skull sweat trying to come up with an optimal algorithm for resolving the Prisoner's Dilemma without success and, iirc, eventually proved mathematically that there was no optimal solution. A: The whole point of the prisoner's dilemma is that your optimal strategy is to betray the other prisoner. O(1) A: Mathematically the other posts answer the question, but in reality, there may be additional options. However absurd these options are, they will result in additional outcome possibilities, and they may result in an increased chance of personal gain. For example, in Batman's case, it would ruin the plot, but he could have just killed the Joker -- thus ruining any additional effects the Joker would have on the outcome. By letting the Joker live, Batman unwittingly allows the Joker the only "victory" he needs. A: The game becomes much more interesting when you step back and consider the whole tournament. For example, a few years back a PD tournament was won by a team from the UK which submitted multiple entries. One of them was the "master" and the others were "slaves". They would all start by playing a specific sequence of actions which would allow the masters and slaves to recognize each other. Once recognized, the master would defect and the slave would cooperate for the rest of the iterations. Thus, the master won the tournament, but at the cost of the slaves. The strategy made economic sense as there was a monetary prize for first place but the costs of entry were low. More generally, when writing a program for a PD tournament you need to look at the bigger picture: * *how are the prizes awarded? *can you conspire with other contestants? 
*what are the costs of entry? penalties? Otherwise, yes, the dominant strategy is to defect in the one-shot PD. Axelrod, as others mentioned, showed that tit-for-tat was robust in a series of tournaments, but in these tournaments nobody thought about conspiring with other contestants.
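To make the tit-for-tat strategy described in the answers above concrete, here is a minimal round-robin sketch in Python. The payoff values are the standard T=5, R=3, P=1, S=0 numbers and the strategy functions are illustrative choices of my own, not taken from any particular answer or tournament.

# Minimal iterated prisoner's dilemma sketch.
# 'C' = cooperate (stay silent), 'D' = defect (betray).

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def always_defect(my_history, their_history):
    return 'D'

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return 'C' if not their_history else their_history[-1]

def play(strategy_a, strategy_b, rounds=200):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

if __name__ == '__main__':
    print(play(tit_for_tat, always_defect))   # tit-for-tat only loses the first round
    print(play(tit_for_tat, tit_for_tat))     # mutual cooperation throughout

A tournament driver would simply call play() for every pair of entered strategies and sum the scores, which is essentially what Axelrod's experiments did.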
{ "language": "en", "url": "https://stackoverflow.com/questions/126737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How can I move an object drawn in a device context in Python? I have drawn an image in the device context using Python and I want to move it smoothly/animate it either vertically or horizontally. What algorithm should I use? Where can I get info on this kind of task in Python? A: Once an object is drawn in a device context it stays there. If you want to move it you need to redraw it. You can keep a background that's fixed and only redraw the movable elements each time they move. Basically that's how it's done. To move an object smoothly over a line you have to do something like this (I don't have a program ready, so can only give you an idea): * *choose the start and end positions: points A(x1, y1) and B(x2, y2) *choose in how much time the object should change position from A to B (say 10 seconds). *use a timer set to a certain interval (say 2 seconds) *calculate the delta X and Y that the object should change for each timer interval. In this case dx = (x2-x1)*2/10 and dy = (y2-y1)*2/10 *in the timer callback increment the current object position by dx and dy and redraw the image That would be the algorithm. I suggest that you also take a look at PyGame. Maybe you can use that, and it also has some tutorials. A: To smoothly move an object between a starting coordinate (x1, y1) and a destination coordinate (x2, y2), you first need to ask yourself how long the object should take to get to its destination. Let's say you want the object to get there in t time units (which may be seconds, hours, whatever). Once you have determined this it is then trivial to work out the displacement per unit time: dx = (x2-x1)/t dy = (y2-y1)/t Now you simply need to add (dx, dy) to the object's position ((x, y), initially (x1, y1)) every unit time, and stop when the object gets within some threshold distance of the destination. This is to account for the fact that errors in the divisions will accumulate, so if you did an equality check like (x,y)==(x2,y2) it is unlikely it would ever be true. Note the above method gives you constant-velocity, straight-line movement. You may wish to instead use a slightly more complex formula to give the object the appearance of accelerating, maintaining cruise speed, then decelerating. The following update formulae may then be useful, where v is velocity, a is acceleration and dt is the timer interval: v(t+dt) = v(t) + dt*a(t) x(t+dt) = x(t) + dt*v(t) This is merely Euler's method, and should suffice for animation purposes.
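As a rough illustration of the timer-based algorithm described in these answers, here is a minimal sketch in Python using Tkinter's after() callback as the timer. The window size, coordinates, duration and the red ball are illustrative assumptions of my own; the same structure applies to any drawing surface that can be redrawn on a timer.

import tkinter as tk

def animate(canvas, item, dx, dy, steps_left, interval_ms):
    # Move by (dx, dy) each tick until the remaining steps run out.
    if steps_left <= 0:
        return
    canvas.move(item, dx, dy)
    canvas.after(interval_ms, animate, canvas, item, dx, dy,
                 steps_left - 1, interval_ms)

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=300, bg='white')
canvas.pack()

# Illustrative start/end points and timing; tune to taste.
x1, y1, x2, y2 = 20, 20, 350, 250
duration_ms, interval_ms = 2000, 50
steps = duration_ms // interval_ms
dx, dy = (x2 - x1) / steps, (y2 - y1) / steps

ball = canvas.create_oval(x1, y1, x1 + 20, y1 + 20, fill='red')
animate(canvas, ball, dx, dy, steps, interval_ms)
root.mainloop()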
{ "language": "en", "url": "https://stackoverflow.com/questions/126738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: What good tools are available to create online help for .NET applications? I have a WinForms application that presently ships with a .chm file for context-sensitive help documentation (not API docs), created using MS HTML Help Workshop. I'd like to move to online documentation (don't have to ship it with the product, can update it easily, etc). What tools are recommended for this sort of thing, and what are their pros and cons? I'd like to be able to do the following: * *host the help files on my webserver *provide context-sensitive help *have reasonable-looking navigation/TOC for the help *host different versions of the help for different major versions of the application *easily edit the help. Something like a wiki would be nice, preferably with a good wysiwyg editor. *easily create a PDF manual from the help files *not have to pay (much) for the tool I guess I can do most of this with HTML Help Workshop and a bit of work, but if there are better tools out there I'd like to know. A: Most commercial help systems have an option to generate the help as a set of HTML pages that could then be uploaded to a website. Certainly HelpStudio, which I use myself, does. You could also try out Doc-To-Help, which is also a major player in the help market.
{ "language": "en", "url": "https://stackoverflow.com/questions/126744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Compilation fails randomly: "cannot open program database" During a long compilation with Visual Studio 2005 (version 8.0.50727.762), I sometimes get the following error in several files in some project: fatal error C1033: cannot open program database 'v:\temp\apprtctest\win32\release\vc80.pdb' (The file mentioned is either vc80.pdb or vc80.idb in the project's temp dir.) The next build of the same project succeeds. There is no other Visual Studio open that might access the same files. This is a serious problem because it makes nightly compilation impossible. A: This generally happens when your previous attempts at debugging have not killed the debugger fully. In Task Manager look for a process called vcjit, kill it and try again. As a worst-case option, restart Visual Studio; this should solve your problem. A: It is possible that an antivirus or a similar program is touching the pdb file on write - an antivirus is the most likely suspect in this scenario. I'm afraid that I can only give you some general pointers, based on my past experience in setting up nightly builds in our shop. Some of these may sound trivial, but I'm including them for the sake of completion. * *First and foremost: make sure you start up with a clean slate. That is, force-delete the output directory of the build before you start your nightly. *If you have an antivirus, antispyware or other such programs on your nightly machine, consider removing them. If that's not an option, add your obj folder to the exclusion list of the program. *(optional) Consider using tools such as VCBuild or MSBuild as part of your nightly. I think it's better to use MSBuild if you're on a multicore machine. We use IncrediBuild for nightlies and MSBuild for releases, and never encountered the problem you describe. If nothing else works, you can schedule a watchdog script a few hours after the build starts and check its status; if the build fails, the watchdog should restart it. This is an ugly hack, but it's better than nothing. A: We've seen this a lot at my site too. This explanation, from Peter Kaufmann, seems to be the most plausible based on our setup: When building a solution in Visual Studio 2005, you get errors like fatal error C1033: cannot open program database 'xxx\debug\vc80.pdb'. However, when running the build for a second time, it usually succeeds. Reason: It's possible that two projects in the solution are writing their outputs to the same directory (e.g. 'xxx\debug'). If the maximum number of parallel project builds setting in Tools - Options, Projects and Solutions - Build and Run is set to a value greater than 1, this means that two compiler threads could be trying to access the same files simultaneously, resulting in a file sharing conflict. Solution: Check your project's settings and make sure no two projects are using the same directory for output, target or any kind of intermediate files. Or set the maximum number of parallel project builds setting to 1 for a quick workaround. I experienced this very problem while using the VS project files that came with the CLAPACK library. UPDATE: There is a chance that Tortoise SVN accesses 'vc80.pdb', even if the file is not under version control, which could also result in the error described above (thanks to Liana for reporting this). However, I cannot confirm this, as I couldn't reproduce the problem after making sure different output directories are used for all projects. A: I had this problem today and it turned out to be non-ANSI characters in the path to the pdb that caused it. 
I'm using Windows through VMware, and my project was in a shared location: \vmware-host\Shared Folders\project When I moved it to \Users\julian\project it resolved the issue. A: I just ran into this problem. Visual Studio was complaining about not being able to open vc100.pdb. I looked for open file handles to this file using procexp and found out that the process mspdbsrv had an open file handle to it. Killing this process fixed the issue and I was able to compile. A: Switch the debug info to C7 format instead of using the PDB. Project Options -> C/C++ -> General -> Debug Information Format and set it to C7. A: Try right-clicking the executable file of VS, then Properties -> Compatibility -> and untick "Run this program in compatibility mode for:". A: I had a similar problem while working on a project which I had located in my Dropbox folder. I found that it would throw this error when the little "syncing" icon was going on the Dropbox icon in the system tray, since Dropbox was accessing the files to upload them to their server. When I waited to build until Dropbox finished syncing, it worked every time. A: I have the same problem, C1033: cannot open program database. Scenario: I have two DLLs, parent.dll and child.dll. I had attached the child.dll project to the Visual Studio debugger and at the same time was trying to build the parent.dll project, which produces the error C1033: cannot open program database. Solution: Stop debugging and kill the process attached to the debugger. Rebuild the project. A: This happens to me consistently if I Ctrl+Break to cancel a build (VS2015). There's some process that isn't shut down properly. I went on a rampage "End Tasking" MS/VS-related processes (look for duplicates) and my build worked again. A restart would probably work too. As would moving to GNU binutils. Annoyingly, unlocker tools don't report any processes locking the file; Windows doesn't let me delete the .pdb but I can rename it. My guess is two processes jump in at the same time during a build. A: Are you using LinqToSql at all? Perhaps it is similar to the odd error I will experience occasionally as I asked in this question: What causes Visual Studio to fail to load an assembly incorrectly? A: I changed my intermediate directory from: %TEMP%\$(ProjectName)\$(Platform)\$(Configuration)\ to C:\temp\$(ProjectName)\$(Platform)\$(Configuration)\ It works now. NO idea why. A: In my case the problem was Google Drive: I forgot that the project was under a synced folder and Google Drive probably locked that file. Pausing the sync didn't help since the error was thrown anyway. Moving the project folder to another location not synced by Google Drive solved my issue. Just to mention, at the beginning I thought it was my antivirus, since when examining the file using procexp it showed that the file was used by one of my antivirus processes. Excluding the project folder from my antivirus scan didn't help in my case. A: The simplest solution is "build one more time": BuildConsole abc.sln /rebuild /cfg="release|Win32" if %errorlevel% neq 0 ( BuildConsole abc.sln /cfg="release|Win32" if %errorlevel% neq 0 ( rem process error exit 1 ) ) A: I just ran into this problem and Google led me here. For me, it was Google Drive syncing my project files while I was trying to run. Pausing Google Drive sync temporarily solved it, but I'd rather there was a way for Google Drive to keep its hands off while Visual Studio is doing its stuff. If anyone knows how I can configure that, please let me know.
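One of the answers above suggests scheduling a watchdog script that restarts the nightly build if it fails. As a rough illustration only, here is a minimal retry wrapper in Python; the build command, retry count and delay are hypothetical placeholders, not taken from the thread, so substitute your real nightly invocation.

import subprocess
import time

# Hypothetical build command; replace with your real nightly build invocation.
BUILD_CMD = ['msbuild', 'MySolution.sln', '/t:Rebuild', '/p:Configuration=Release']
MAX_ATTEMPTS = 3

def run_nightly_build():
    for attempt in range(1, MAX_ATTEMPTS + 1):
        result = subprocess.run(BUILD_CMD)
        if result.returncode == 0:
            print('Build succeeded on attempt %d' % attempt)
            return True
        print('Build failed (exit code %d), retrying...' % result.returncode)
        time.sleep(60)  # give transient file locks (antivirus, sync tools) a chance to clear
    return False

if __name__ == '__main__':
    if not run_nightly_build():
        raise SystemExit('Nightly build failed after %d attempts' % MAX_ATTEMPTS)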
{ "language": "en", "url": "https://stackoverflow.com/questions/126751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: Examples of Recursive functions Can anybody suggest programming examples that illustrate recursive functions? There are the usual old horses such as the Fibonacci series and the Towers of Hanoi, but anything besides them would be fun. A: The interpreter design pattern is a quite nice example because many people don't spot the recursion. The example code listed in the Wikipedia article illustrates well how this can be applied. However, a much more basic approach that still implements the interpreter pattern is a ToString function for nested lists: class List { public List(params object[] items) { foreach (object o in items) this.Add(o); } // Most of the implementation omitted … public override string ToString() { var ret = new StringBuilder(); ret.Append("( "); foreach (object o in this) { ret.Append(o); ret.Append(" "); } ret.Append(")"); return ret.ToString(); } } var lst = new List(1, 2, new List(3, 4), new List(new List(5), 6), 7); Console.WriteLine(lst); // yields: // ( 1 2 ( 3 4 ) ( ( 5 ) 6 ) 7 ) (Yes, I know it's not easy to spot the interpreter pattern in the above code if you expect a function called Eval … but really, the interpreter pattern doesn't tell us what the function is called or even what it does and the GoF book explicitly lists the above as an example of said pattern.) A: In my opinion, recursion is good to know, but most solutions that could use recursion could also be done using iteration, and iteration is by far more efficient. That said, here is a recursive way to find a control in a nested tree (such as ASP.NET or Winforms): public Control FindControl(Control startControl, string id) { if (startControl.Id == id) return startControl; if (startControl.Children.Count > 0) { foreach (Control c in startControl.Children) { // Recurse into each child; keep searching siblings if not found there Control found = FindControl(c, id); if (found != null) return found; } } return null; } A: Here's a pragmatic example from the world of filesystems. This utility recursively counts files under a specified directory. (I don't remember why, but I actually had a need for something like this long ago...) public static int countFiles(File f) { if (f.isFile()){ return 1; } // Count children & recurse into subdirs: int count = 0; File[] files = f.listFiles(); for (File fileOrDir : files) { count += countFiles(fileOrDir); } return count; } (Note that in Java a File instance can represent either a normal file or a directory. This utility excludes directories from the count.) A common real world example would be e.g. FileUtils.deleteDirectory() from the Commons IO library; see the API doc & source. A: A real-world example is the "bill-of-materials costing" problem. Suppose we have a manufacturing company that makes final products. Each product is describable by a list of its parts and the time required to assemble those parts. For example, we manufacture hand-held electric drills from a case, motor, chuck, switch, and cord, and it takes 5 minutes. Given a standard labor cost per minute, how much does it cost to manufacture each of our products? Oh, by the way, some parts (e.g. the cord) are purchased, so we know their cost directly. But we actually manufacture some of the parts ourselves. We make a motor out of a housing, a stator, a rotor, a shaft, and bearings, and it takes 15 minutes. And we make the stator and rotor out of stampings and wire, ... So, determining the cost of a finished product actually amounts to traversing the tree that represents all whole-to-list-of-parts relationships in our processes. That is nicely expressed with a recursive algorithm. 
It can certainly be done iteratively as well, but the core idea gets mixed in with the do-it-yourself bookkeeping, so it's not as clear what's going on. A: In order to understand recursion, one must first understand recursion. A: The rule of thumb for recursion is, "Use recursion, if and only if on each iteration your task splits into two or more similar tasks". So Fibonacci is not a good example of recursion application, while Hanoi is a good one. So most of the good examples of recursion are tree traversal in different disguises. For example: graph traversal - the requirement that a visited node will never be visited again effectively makes the graph a tree (a tree is a connected graph without simple cycles) divide and conquer algorithms (quick sort, merge sort) - the parts after "divide" constitute child nodes, "conquer" constitutes the edges from parent node to child nodes. A: The hairiest example I know is Knuth's Man or Boy Test. As well as recursion it uses the Algol features of nested function definitions (closures), function references and constant/function dualism (call by name). A: As others have already said, a lot of canonical recursion examples are academic. Some practical uses I've encountered in the past are: 1 - Navigating a tree structure, such as a file system or the registry 2 - Manipulating container controls which may contain other container controls (like Panels or GroupBoxes) A: How about testing a string for being a palindrome? bool isPalindrome(char* s, int len) { if(len < 2) return true; else return s[0] == s[len-1] && isPalindrome(&s[1], len-2); } Of course, you could do that with a loop more efficiently. A: Write a recursive descent parser! A: This illustration is in English, rather than an actual programming language, but is useful for explaining the process in a non-technical way: A child couldn't sleep, so her mother told a story about a little frog, who couldn't sleep, so the frog's mother told a story about a little bear, who couldn't sleep, so the bear's mother told a story about a little weasel ...who fell asleep. ...and the little bear fell asleep; ...and the little frog fell asleep; ...and the child fell asleep. A: From the world of math, there is the Ackermann function: Ackermann(m, n) { if(m==0) return n+1; else if(m>0 && n==0) return Ackermann(m-1, 1); else if(m>0 && n>0) return Ackermann(m-1, Ackermann(m, n-1)); else throw exception; //not defined for negative m or n } It always terminates, but it produces extremely large results even for very small inputs. Ackermann(4, 2), for example, returns 2^65536 − 3. A: Another couple of "usual suspects" are Quicksort and MergeSort. A: My personal favorite is binary search. Edit: Also, tree traversal. Walking down a folder/file structure, for instance. A: Implementing Graphs by Guido van Rossum has some recursive functions in Python to find paths between two nodes in graphs. A: My favorite sort, Merge Sort (favorite since I can remember the algorithm and it is not too bad performance-wise). A: * *Factorial *Traversing a tree in depth (in a filesystem, a game space, or any other case) A: How about reversing a string? void rev(string s) { if (!s.empty()) { rev(s[1..s.length]); } print(s[0]); } Understanding this helps understand recursion. A: How about anything processing lists, like: * *map (and andmap, ormap) *fold (foldl, foldr) *filter *etc... A: Once upon a time, and not that long ago, elementary school children learned recursion using Logo and Turtle Graphics. 
http://en.wikipedia.org/wiki/Turtle_graphics Recursion is also great for solving puzzles by exhaustive trial. There is a kind of puzzle called a "fill in" (Google it) in which you get a grid like a crossword, and the words, but no clues, no numbered squares. I once wrote a program using recursion for a puzzle publisher to solve the puzzles in order to be sure the known solution was unique. A: Recursive functions are great for working with recursively defined datatypes: * *A natural number is zero or the successor of another natural number *A list is the empty list or another list with an element in front *A tree is a node with some data and zero or more other subtrees Etc. A: Translate a spreadsheet column index to a column name. It's trickier than it sounds, because spreadsheet column numbering has no '0' digit. For example, if you take A-Z as digits, when you increment from Z to AA it would be like going from 9 to 11 or 9 to 00 instead of 10 (depending on whether A is 1 or 0). Even the Microsoft Support example gets it wrong for anything higher than AAA! The recursive solution works because you can recurse right on those new-digit boundaries. This implementation is in VB.Net, and treats the first column ('A') as index 1. Function ColumnName(ByVal index As Integer) As String Static chars() As Char = {"A"c, "B"c, "C"c, "D"c, "E"c, "F"c, "G"c, "H"c, "I"c, "J"c, "K"c, "L"c, "M"c, "N"c, "O"c, "P"c, "Q"c, "R"c, "S"c, "T"c, "U"c, "V"c, "W"c, "X"c, "Y"c, "Z"c} index -= 1 'adjust index so it matches 0-indexed array rather than 1-indexed column' Dim quotient As Integer = index \ 26 'normal / operator rounds. \ does integer division' If quotient > 0 Then Return ColumnName(quotient) & chars(index Mod 26) Else Return chars(index Mod 26) End If End Function
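One of the answers above names binary search as a favorite recursive example but gives no code, so here is a minimal recursive sketch in Python; the function name and return conventions are my own illustrative choices.

def binary_search(items, target, lo=0, hi=None):
    # items must be sorted; returns an index of target, or -1 if absent.
    if hi is None:
        hi = len(items) - 1
    if lo > hi:
        return -1
    mid = (lo + hi) // 2
    if items[mid] == target:
        return mid
    if items[mid] < target:
        return binary_search(items, target, mid + 1, hi)
    return binary_search(items, target, lo, mid - 1)

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1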
{ "language": "en", "url": "https://stackoverflow.com/questions/126756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: Qt Jambi: QAbstractListModel not displaying in QListView I've created an implementation of the QAbstractListModel class in Qt Jambi 4.4 and am finding that using the model with a QListView results in nothing being displayed, however using the model with a QTableView displays the data correctly. Below is my implementation of QAbstractListModel: public class FooListModel extends QAbstractListModel { private List<Foo> _data = new Vector<Foo>(); public FooListModel(List<Foo> data) { if (data == null) { return; } for (Foo foo : data) { _data.add(foo); } reset(); } public Object data(QModelIndex index, int role) { if (index.row() < 0 || index.row() >= _data.size()) { return new QVariant(); } Foo foo = _data.get(index.row()); if (foo == null) { return new QVariant(); } return foo; } public int rowCount(QModelIndex parent) { return _data.size(); } } And here is how I set the model: Foo foo = new Foo(); foo.setName("Foo!"); List<Foo> data = new Vector<Foo>(); data.add(foo); FooListModel fooListModel = new FooListModel(data); ui.fooListView.setModel(fooListModel); ui.fooTableView.setModel(fooListModel); Can anyone see what I'm doing wrong? I'd like to think it was a problem with my implementation because, as everyone says, select ain't broken! A: I'm not experienced in Jambi, but shouldn't you be returning a QVariant from method data() instead of returning a Foo? It's not clear to me how the view is going to know how to convert the Foo into a string for display. Also, any chance I could sell you on the easier-to-use QStandardItemModel and QStandardItem instead of rolling a fully custom one the hard way? And if you are only going to have one view ever, you can avoid the whole MVC pattern altogether and just use the very, very easy to use QListWidget. A: Your model's data() implementation has two problems in it: * *It fails to return different values for different item data roles. Your current implementation returns the same value for all roles, and some views can have problems with it. *QVariant in Jambi is not the same as in regular Qt. When you have nothing to return, just return null. A better implementation would be: public Object data(QModelIndex index, int role) { if (index.row() < 0 || index.row() >= _data.size()) return null; if (role != Qt.ItemDataRole.DisplayRole) return null; return _data.get(index.row()); }
{ "language": "en", "url": "https://stackoverflow.com/questions/126759", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to force a web browser NOT to cache images Background I am writing and using a very simple CGI-based (Perl) content management tool for two pro-bono websites. It provides the website administrator with HTML forms for events where they fill the fields (date, place, title, description, links, etc.) and save it. On that form I allow the administrator to upload an image related to the event. On the HTML page displaying the form, I am also showing a preview of the picture uploaded (HTML img tag). The Problem The problem happens when the administrator wants to change the picture. He would just have to hit the "browse" button, pick a new picture and press ok. And this works fine. Once the image is uploaded, my back-end CGI handles the upload and reloads the form properly. The problem is that the image shown does not get refreshed. The old image is still shown, even though the database holds the right image. I have narrowed it down to the fact that the IMAGE IS CACHED in the web browser. If the administrator hits the RELOAD button in Firefox/Explorer/Safari, everything gets refreshed fine and the new image just appears. My Solution - Not Working I am trying to control the cache by writing an HTTP Expires header with a date very far in the past. Expires: Mon, 15 Sep 2003 1:00:00 GMT Remember that I am on the administrative side and I don't really care if the pages take a little longer to load because they are always expired. But, this does not work either. Notes When uploading an image, its filename is not kept in the database. It is renamed as Image.jpg (to simplify things when using it). When replacing the existing image with a new one, the name doesn't change either. Just the content of the image file changes. The webserver is provided by the hosting service/ISP. It uses Apache. Question Is there a way to force the web browser to NOT cache things from this page, not even images? I am juggling with the option to actually save the filename in the database. This way, if the image is changed, the src of the IMG tag will also change. However, this requires a lot of changes throughout the site and I'd rather not do it if I have a better solution. Also, this will still not work if the new image uploaded has the same name (say the image is photoshopped a bit and re-uploaded). A: You may write a proxy script for serving images - that's a bit more work though. Something like this: HTML: <img src="image.php?img=imageFile.jpg&some-random-number-262376" /> Script: // PHP if( isset( $_GET['img'] ) && is_file( IMG_PATH . $_GET['img'] ) ) { // read contents $img = file_get_contents( IMG_PATH . $_GET['img'] ); // no-cache headers - complete set // these copied from [php.net/header][1], tested myself - works header("Expires: Sat, 26 Jul 1997 05:00:00 GMT"); // Some time in the past header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT"); header("Cache-Control: no-store, no-cache, must-revalidate"); header("Cache-Control: post-check=0, pre-check=0", false); header("Pragma: no-cache"); // image related headers header('Accept-Ranges: bytes'); header('Content-Length: '.strlen( $img )); // How many bytes we're going to send header('Content-Type: image/jpeg'); // or image/png etc // actual image echo $img; exit(); } Actually either the no-cache headers or a random number in the image src should be sufficient, but since we want to be bulletproof.. 
A: Simple fix: Attach a random query string to the image: <img src="foo.cgi?random=323527528432525.24234" alt=""> What the HTTP RFC says: Cache-Control: no-cache But that doesn't work that well :) A: I checked all the answers around the web and, at first, the best one seemed to be (actually it isn't): <img src="image.png?cache=none"> However, if you add a cache=none parameter (which is the static word "none"), it doesn't affect anything; the browser still loads from cache. The solution to this problem was: <img src="image.png?nocache=<?php echo time(); ?>"> where you basically add a Unix timestamp to make the parameter dynamic and defeat the cache. It worked. However, my problem was a little different: I was loading an on-the-fly generated PHP chart image, and controlling the page with $_GET parameters. I wanted the image to be read from cache when the URL GET parameters stay the same, and not cached when the GET parameters change. To solve this problem, I needed to hash $_GET, but since it is an array here is the solution: $chart_hash = md5(implode('-', $_GET)); echo "<img src='/images/mychart.png?hash=$chart_hash'>"; Edit: Although the above solution works just fine, sometimes you want to serve the cached version UNTIL the file is changed. (With the above solution, the cache for that image is disabled completely.) So, to serve the cached image from the browser UNTIL there is a change in the image file, use: echo "<img src='/images/mychart.png?hash=" . filemtime('mychart.png') . "'>"; filemtime() gets the file modification time. A: I'm a NEW coder, but here's what I came up with, to stop the browser from caching and holding onto my webcam views: <meta Http-Equiv="Cache" content="no-cache"> <meta Http-Equiv="Pragma-Control" content="no-cache"> <meta Http-Equiv="Cache-directive" Content="no-cache"> <meta Http-Equiv="Pragma-directive" Content="no-cache"> <meta Http-Equiv="Cache-Control" Content="no-cache"> <meta Http-Equiv="Pragma" Content="no-cache"> <meta Http-Equiv="Expires" Content="0"> <meta Http-Equiv="Pragma-directive: no-cache"> <meta Http-Equiv="Cache-directive: no-cache"> Not sure what works on what browser, but it does work for some: IE: Works when the webpage is refreshed and when the website is revisited (without a refresh). CHROME: Works only when the webpage is refreshed (even after a revisit). SAFARI and iPad: Doesn't work, I have to clear the History & Web Data. Any ideas on SAFARI/iPad? A: When uploading an image, its filename is not kept in the database. It is renamed as Image.jpg (to simplify things when using it). Change this, and you've fixed your problem. I use timestamps, as with the solutions proposed above: Image-<timestamp>.jpg Presumably, whatever problems you're avoiding by keeping the same filename for the image can be overcome, but you don't say what they are. A: I use PHP's file modification time function, for example: echo "<img src='Images/image.png?" . filemtime('Images/image.png') . "' />"; If you change the image then the new image is used rather than the cached one, due to having a different modified timestamp. A: You must use unique filenames. Like this <img src="cars.png?1287361287" alt=""> But this technique means high server usage and bandwidth wastage. Instead, you should use a version number or date. Example: <img src="cars.png?2020-02-18" alt=""> But say you want it to never serve the image from cache. For this, if the page does not use a page cache, it is possible with PHP on the server side: <img src="cars.png?<?php echo time();?>" alt=""> However, it is still not effective. Reason: the browser cache ... 
The last but most effective method is native JavaScript. This simple code finds all images with a "NO-CACHE" class and makes the image URLs almost unique. Put this between script tags. var items = document.querySelectorAll("img.NO-CACHE"); for (var i = items.length; i--;) { var img = items[i]; img.src = img.src + '?' + Date.now(); } USAGE <img class="NO-CACHE" src="https://upload.wikimedia.org/wikipedia/commons/6/6a/JavaScript-logo.png" alt=""> RESULT(s) Like This https://example.com/image.png?1582018163634 A: Armin Ronacher has the correct idea. The problem is that random strings can collide. I would use: <img src="picture.jpg?1222259157.415" alt=""> Where "1222259157.415" is the current time on the server. Generate the time with JavaScript's performance.now() or with Python's time.time() A: Your problem is that despite the Expires: header, your browser is re-using its in-memory copy of the image from before it was updated, rather than even checking its cache. I had a very similar situation uploading product images in the admin backend for a store-like site, and in my case I decided the best option was to use JavaScript to force an image refresh, without using any of the URL-modifying techniques other people have already mentioned here. Instead, I put the image URL into a hidden IFRAME, called location.reload(true) on the IFRAME's window, and then replaced my image on the page. This forces a refresh of the image, not just on the page I'm on, but also on any later pages I visit - without either client or server having to remember any URL querystring or fragment identifier parameters. I posted some code to do this in my answer here. A: From my point of view, disabling image caching is a bad idea. At all. The root problem here is how to force the browser to update an image when it has been updated on the server side. Again, from my personal point of view, the best solution is to disable direct access to images. Instead, access images via a server-side filter/servlet/other similar tool/service. In my case it's a REST service that returns the image and attaches an ETag to the response. The service keeps a hash of all files; if a file is changed, the hash is updated. It works perfectly in all modern browsers. Yes, it takes time to implement it, but it is worth it. The only exception is favicons. For some reason, it does not work. I could not force the browser to update its cache from the server side. ETag, Cache-Control, Expires, Pragma headers, nothing helped. In this case, adding some random/version parameter to the URL, it seems, is the only solution. A: Add a timestamp <img src="picture.jpg?t=<?php echo time();?>"> This will always give your file a different number at the end and stop it being cached. A: I would use: <img src="picture.jpg?20130910043254"> where "20130910043254" is the modification time of the file. When uploading an image, its filename is not kept in the database. It is renamed as Image.jpg (to simplify things when using it). When replacing the existing image with a new one, the name doesn't change either. Just the content of the image file changes. I think there are two types of simple solutions: 1) those which come to mind first (straightforward solutions, because they are easy to come up with), 2) those which you end up with after thinking things over (because they are easy to use). Apparently, you won't always benefit if you choose to think things over. But the second option is rather underestimated, I believe. 
Just think why PHP is so popular ;) A: use Class="NO-CACHE" sample html: <div> <img class="NO-CACHE" src="images/img1.jpg" /> <img class="NO-CACHE" src="images/imgLogo.jpg" /> </div> jQuery: $(document).ready(function () { $('.NO-CACHE').attr('src',function () { return $(this).attr('src') + "?a=" + Math.random() }); }); javascript: var nods = document.getElementsByClassName('NO-CACHE'); for (var i = 0; i < nods.length; i++) { nods[i].attributes['src'].value += "?a=" + Math.random(); } Result: src="images/img1.jpg" => src="images/img1.jpg?a=0.08749723793963926" A: With the potential for badly behaved transparent proxies in between you and the client, the only way to totally guarantee that images will not be cached is to give them a unique URI, something like tagging a timestamp on as a query string or as part of the path. If that timestamp corresponds to the last update time of the image, then you can cache when you need to and serve the new image at just the right time. A: I assume the original question concerns images stored with some text info. So, if you have access to that text context when generating the src=... URL, consider storing/using a CRC32 of the image bytes instead of a meaningless random number or timestamp. Then, when a page with plenty of images is displayed, only updated images will be reloaded. If storing the CRC is impossible, it can be computed and appended to the URL at runtime. A: Ideally, you should add a button/keybinding/menu to each webpage with an option to synchronize content. To do so, you would keep track of resources that may need to be synchronized, and either use XHR to probe the images with a dynamic querystring, or create an image at runtime with a src using a dynamic querystring. Then use a broadcasting mechanism to notify all components of the webpages that are using the resource to switch to the resource with a dynamic querystring appended to its URL. A naive example looks like this: Normally, the image is displayed and cached, but if the user presses the button, an XHR request is sent to the resource with a time querystring appended to it; since the time can be assumed to be different on each press, this makes sure that the browser bypasses the cache, since it can't tell whether the resource is dynamically generated on the server side based on the query, or if it is a static resource that ignores the query. The result is that you can avoid having all your users bombard you with resource requests all the time, but at the same time allow a mechanism for users to update their resources if they suspect they are out of sync. 
<!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <meta name="mobile-web-app-capable" content="yes" /> <title>Resource Synchronization Test</title> <script> function sync() { var xhr = new XMLHttpRequest; xhr.onreadystatechange = function() { if (this.readyState == 4 && this.status == 200) { var images = document.getElementsByClassName("depends-on-resource"); for (var i = 0; i < images.length; ++i) { var image = images[i]; if (image.getAttribute('data-resource-name') == 'resource.bmp') { image.src = 'resource.bmp?i=' + new Date().getTime(); } } } } xhr.open('GET', 'resource.bmp', true); xhr.send(); } </script> </head> <body> <img class="depends-on-resource" data-resource-name="resource.bmp" src="resource.bmp"></img> <button onclick="sync()">sync</button> </body> </html> A: I've found Chrome specifically tries to get clever with the URL-arguments solution on images. That method to avoid caching only works some of the time. The most reliable solution I've found is to add both a URL argument (e.g. a timestamp or file version) AND also change the capitalisation of the image file extension in the URL. <img src="picture.jpg"> becomes <img src="picture.JPG?t=current_time"> A: All the answers are valid and work fine. But with that approach, the browser also creates another cache entry every time it loads the image with a different URL. So, instead of changing the URL by adding query params to it, we can update the browser cache directly using cache.put: caches.open('YOUR_CACHE_NAME').then(cache => { const url = 'URL_OF_IMAGE_TO_UPDATE' fetch(url).then(res => { cache.put(url, res.clone()) }) }) cache.put updates the cache with a new response. For more: https://developer.mozilla.org/en-US/docs/Web/API/Cache/put A: The best solution is to append the current time to the image src, like <img src="www.abc.com/123.png?t=current_time"> This removes the chance of referencing the already cached image. To get the current time one can use the performance.now() function in JavaScript. A: I made a PHP script that automatically appends timestamps to all images and also to links. You just need to include this script in your pages. Enjoy! http://alv90.altervista.org/how-to-force-the-browser-not-to-cache-images/
{ "language": "en", "url": "https://stackoverflow.com/questions/126772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "150" }
Q: C++ union in C# I'm translating a library written in C++ to C#, and the keyword 'union' exists once. In a struct. What's the correct way of translating it into C#? And what does it do? It looks something like this: struct Foo { float bar; union { int killroy; float fubar; } as; } A: You can use explicit field layouts for that: [StructLayout(LayoutKind.Explicit)] public struct SampleUnion { [FieldOffset(0)] public float bar; [FieldOffset(4)] public int killroy; [FieldOffset(4)] public float fubar; } Untested. The idea is that two variables have the same position in your struct. You can of course only use one of them. More information about unions in the struct tutorial. A: In C/C++ a union is used to overlay different members in the same memory location, so if you have a union of an int and a float they both use the same 4 bytes of memory for storage; obviously writing to one corrupts the other (since int and float have different bit layouts). In .NET Microsoft went with the safer choice and didn't include this feature. EDIT: except for interop A: If you're using the union to map the bytes of one of the types to the other, then in C# you can use BitConverter instead. float fubar = 125f; int killroy = BitConverter.ToInt32(BitConverter.GetBytes(fubar), 0); or: int killroy = 125; float fubar = BitConverter.ToSingle(BitConverter.GetBytes(killroy), 0); A: You can't really decide how to deal with this without knowing something about how it is used. If it is merely being used to save space, then you can ignore it and just use a struct. However that is not usually why unions are used. There are two common reasons to use them. One is to provide 2 or more ways to access the same data. For instance, a union of an int and an array of 4 bytes is one (of many) ways to separate out the bytes of a 32-bit integer. The other is when the data in the struct came from an external source such as a network data packet. Usually one element of the struct enclosing the union is an ID that tells you which flavor of the union is in effect. In neither of these cases can you blindly ignore the union and convert it to a struct where the two (or more) fields do not coincide. A: You could write a simple wrapper, but in most cases just using an object is less confusing. public class MyUnion { private object _id; public T GetValue<T>() => (T)_id; public void SetValue<T>(T value) => _id = value; } A: public class Foo { public float bar; public int killroy; public float fubar { get{ return (float)killroy;} set{ killroy = (int)value;} } } A: Personally, I would ignore the UNION altogether and implement Killroy and Fubar as separate fields public struct Foo { float bar; int Kilroy; float Fubar; } Using a UNION saves the 32 bits of memory allocated by the int....not going to make or break an app these days.
{ "language": "en", "url": "https://stackoverflow.com/questions/126781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "85" }
Q: Checking for code changes in all imported Python modules Almost every Python web framework has a simple server that runs a WSGI application and automatically reloads the imported modules every time the source gets changed. I know I can look at the code and see how it's done, but that may take some time and I'm asking just out of curiosity. Does anyone have any idea how this is implemented? A: As the author of one of the reloader mechanisms (the one in Werkzeug) I can tell you that it doesn't work. What all the reloaders do is fork once and restart the child process whenever a monitor thread notices that a module changed on the file system. Inline reload()ing doesn't work because references to the reloaded module are not updated. A: reload() does not work. "Reloading" is usually implemented by forking. Implementing "real" reload() is extremely difficult, and even the most serious attempt, twisted.python.rebuild, isn't perfect.
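As a rough illustration of the restart-based approach described above (this is a simplified sketch, not the actual Werkzeug code; it respawns a subprocess rather than literally forking, and the exit code and polling interval are arbitrary choices of my own):

import os
import sys
import subprocess
import threading
import time

RESTART_EXIT_CODE = 3  # arbitrary code the child uses to request a restart

def watch_modules(interval=1.0):
    # Child-side monitor thread: exit with RESTART_EXIT_CODE when a source file changes.
    mtimes = {}
    while True:
        for module in list(sys.modules.values()):
            filename = getattr(module, '__file__', None)
            if not filename or not os.path.exists(filename):
                continue
            mtime = os.stat(filename).st_mtime
            if filename not in mtimes:
                mtimes[filename] = mtime
            elif mtime > mtimes[filename]:
                os._exit(RESTART_EXIT_CODE)
        time.sleep(interval)

def run_with_reloader(main):
    if os.environ.get('RELOADER_CHILD') == '1':
        # We are the child: start the monitor thread, then run the app.
        threading.Thread(target=watch_modules, daemon=True).start()
        main()
        return
    # We are the parent: keep respawning the child until it exits normally.
    while True:
        env = dict(os.environ, RELOADER_CHILD='1')
        code = subprocess.call([sys.executable] + sys.argv, env=env)
        if code != RESTART_EXIT_CODE:
            sys.exit(code)

A real web framework would call run_with_reloader() with the function that starts its development server; the key point is that nothing is ever reload()ed in place, the whole child process is thrown away and started fresh.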
{ "language": "en", "url": "https://stackoverflow.com/questions/126787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: If you already know LISP, why would you also want to learn F#? What is the added value for learning F# when you are already familiar with LISP? A: Comparing Lisp directly to F# isn't really fair, because at the end of the day with enough time you could write the same app in either language. However, you should learn F# for the same reasons that a C# or Java developer should learn it - because it allows functional programming on the .NET platform. I'm not 100% familiar with Lisp, but I assume it has some of the same problems as OCaml in that there isn't stellar library support. How do you do database access in Lisp? What about high-performance graphics? If you want to learn more about 'Why .NET', check out this SO question. A: If you knew F# and Lisp, you'd find this a rather strange question to ask. As others have pointed out, Lisp is dynamically typed. More importantly, the unique feature of Lisp is that it's homoiconic: Lisp code is a fundamental Lisp data type (a list). The macro system takes advantage of that by letting you write code which executes at compile-time and modifies other code. F# has nothing like this - it's a statically typed language which borrows a lot of ideas from ML and Haskell, and runs on .NET. What you are asking is akin to "Why do I need to learn to use a spoon if I know how to use a fork?" A: Given that LISP is dynamically typed and F# is statically typed, I find such comparisons strange. A: If I were switching from Lisp to F#, it would be solely because I had a task on my hands that hugely benefitted from some .NET-only library. But I don't, so I'm not. A: Money. F# code is already more valuable than Lisp code and this gap will widen very rapidly as F# sees widespread adoption. In other words, you have a much better chance of earning a stable income using F# than using Lisp. Cheers, Jon Harrop. A: F# is a very different language compared to most Lisp dialects. So F# gives you a very different angle on programming - an angle that you won't learn from Lisp. Most Lisp dialects are best used for incremental, interactive development of symbolic software. At the same time most Lisp dialects are not functional programming languages, but more like multi-paradigm languages - with different dialects placing different weight on supporting FPL features (freedom from side effects, immutable data structures, algebraic data types, ...). Thus most Lisp dialects either lack static typing or don't put much emphasis on it. So, if you know some Lisp dialect, then learning F# can make a lot of sense. Just don't think that much of your Lisp knowledge applies to F#, since F# is a very different language. As much as an imperative programmer used to C or Java needs to unlearn some ideas when learning Lisp, one also needs to unlearn Lisp habits (no types, side effects, macros, ...) when using F#. F# is also driven by Microsoft and takes advantage of the .NET Framework. A: F# has the benefit that .NET development (in general) is very widely adopted, easily available, and more mass market. If you want to code F#, you can get Visual Studio, which many developers will already have...as opposed to getting the LISP environment up and running. Additionally, existing .NET developers are much more likely to look at F# than LISP, if that means anything to you. (This is coming from a .NET developer who coded, and loved, LISP, while in college). A: * *Static typing (with type inference) *Algebraic data types *Pattern matching *Extensible pattern matching with active patterns. 
*Currying (with a nice syntax) *Monadic programming, called 'workflows', provides a nice way to do asynchronous programming. A lot of these are relatively recent developments in the programming language world. This is something you'll see in F# that you won't in Lisp, especially Common Lisp, because the F# standard is still under development. As a result, you'll find there is a quite a bit to learn. Of course things like ADTs, pattern matching, monads and currying can be built as a library in Lisp, but it's nicer to learn how to use them in a language where they are conveniently built-in. The biggest advantage of learning F# for real-world use is its integration with .NET. A: I'm not sure if you would? If you find F# interesting that would be a reason. If you work requires it, it would be a reason. If you think it would make you more productive or bring you added value over your current knowledge, that would be a reason. But if you don't find F# interesting, your work doesn't require it and you don't think it would make you more productive or bring you added value, then why would you? If the question on the other hand is what F# gives that lisp don't, then type inference, pattern matching and integration with the rest of the .NET framework should be considered. A: I know this thread is old but since I stumbled on this one I just wanted to comment on my reasons. I am learning F# simply for professional opportunities since .NET carries a lot of weight in a category of companies that dominate my field. The functional paradigm has been growing in use among more quantitatively and data oriented companies and I'd like to be one of the early comers to this trend. Currently there doesn't an exist a strong functional language that fully and safely integrates with the .NET library. I actually attempted to port some .NET from Lisp code and it's really a pain b/c the FFI only supports C primitives and .NET interoperability requires an 'interface' construct and even though I know how to do this in C it's really a huge pain. It would be really, really, good if Lisp went the extra mile in it's next standard and required a c++ class (including virtual functions w/ vtables), and a C# style interface type in it's FFI. Maybe even throw in a Java interface style type too. This would allow complete interoperability with the .NET library and make Lisp a strong contender as a large-scale language. However with that said, coming from a Lisp background made learning F# rather easy. And I like how F# has gone the extra mile to provide types that you would commonly see it quantitative type work. I believe F# was created with mathematical work in mind and that in itself has value over Lisp. A: One way to look at this (the original question) is to match up the language (and associated tools and platforms) to the immediate task. If the task requires an overwhelming percentage of .NET code, and it would require less shoe-horning in one language than another to meet the task head-on, then take the path of least resistance (F#). If you don't need .NET capabilities, and you're comfortable working with LISP and there's no arm-bending to move away from it, keep using it. Not really much different from comparing a hammer with a wrench. Pick the tool that fits the job most effectively. Trying to pick a tool that's objectively "best" is nonsense. And in any case, in 20 years, all of the currently "hot" languages might be outdated anyway.
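If you have never seen F#, here is a small sketch of the bullet-point features above (algebraic data types, pattern matching, currying). It is illustrative only, not tied to any answer here:

type Shape =
    | Circle of float
    | Rect of float * float

// Pattern matching over an algebraic data type
let area shape =
    match shape with
    | Circle r -> System.Math.PI * r * r
    | Rect (w, h) -> w * h

// Currying: scale takes two arguments, so partially applying it
// with one argument yields a new one-argument function
let scale factor shape =
    match shape with
    | Circle r -> Circle (factor * r)
    | Rect (w, h) -> Rect (factor * w, factor * h)

let doubleSize = scale 2.0
printfn "%f" (area (doubleSize (Circle 1.0)))

The compiler infers every type here, which is the "static typing with type inference" point several answers make.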
{ "language": "en", "url": "https://stackoverflow.com/questions/126790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Find two consecutive rows I'm trying to write a query that will pull back the two most recent rows from the Bill table where the Estimated flag is true. The catch is that these need to be consecutive bills. To put it shortly, I need to enter a row in another table if a Bill has been estimated for the last two bill cycles. I'd like to do this without a cursor, if possible, since I am working with a sizable amount of data and this has to run fairly often. Edit There is an AUTOINCREMENT(1,1) column on the table. Without giving away too much of the table structure, the table is essentially of the structure: CREATE TABLE Bills ( BillId INT AUTOINCREMENT(1,1,) PRIMARY KEY, Estimated BIT NOT NULL, InvoiceDate DATETIME NOT NULL ) So you might have a set of results like: BillId AccountId Estimated InvoiceDate -------------------- -------------------- --------- ----------------------- 1111196 1234567 1 2008-09-03 00:00:00.000 1111195 1234567 0 2008-08-06 00:00:00.000 1111194 1234567 0 2008-07-03 00:00:00.000 1111193 1234567 0 2008-06-04 00:00:00.000 1111192 1234567 1 2008-05-05 00:00:00.000 1111191 1234567 0 2008-04-04 00:00:00.000 1111190 1234567 1 2008-03-05 00:00:00.000 1111189 1234567 0 2008-02-05 00:00:00.000 1111188 1234567 1 2008-01-07 00:00:00.000 1111187 1234567 1 2007-12-04 00:00:00.000 1111186 1234567 0 2007-11-01 00:00:00.000 1111185 1234567 0 2007-10-01 00:00:00.000 1111184 1234567 1 2007-08-30 00:00:00.000 1111183 1234567 0 2007-08-01 00:00:00.000 1111182 1234567 1 2007-07-02 00:00:00.000 1111181 1234567 0 2007-06-01 00:00:00.000 1111180 1234567 1 2007-05-02 00:00:00.000 1111179 1234567 0 2007-03-30 00:00:00.000 1111178 1234567 1 2007-03-02 00:00:00.000 1111177 1234567 0 2007-02-01 00:00:00.000 1111176 1234567 1 2007-01-03 00:00:00.000 1111175 1234567 0 2006-11-29 00:00:00.000 In this case, only records 1111188 and 1111187 would be consecutive. A: select top 2 * from bills where estimated = 1 order by billdate desc A: Assuming the rows have sequential IDs, something like this may be what you're looking for: select top 1 * from Bills b1 inner join Bills b2 on b1.id = b2.id - 1 where b1.IsEstimate = 1 and b2.IsEstimate = 1 order by b1.BillDate desc A: You should be able to do a descensing sorted query on estimated = true and select top 2. I am not the best at SQL so i cant give exact language structure A: Do you have a column for "statement number", e.g., if Q12008 was statement 28 for a particular customer, then Q22008's bill would be 29, Q32008's bill would be 30 (assuming quarterly billing). You could then check that the statement numbers were adjacent rather than having to do date manipulation.
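If the end goal is a row per account whose two most recent bills are both estimated, one way to sketch it is a self-join on the immediately preceding bill by date. This assumes the AccountId column shown in the sample results exists on the Bills table, even though it isn't in the abbreviated CREATE TABLE:

SELECT curr.AccountId, curr.BillId AS LatestBill, prev.BillId AS PreviousBill
FROM Bills curr
INNER JOIN Bills prev
    ON prev.AccountId = curr.AccountId
   AND prev.InvoiceDate = (SELECT MAX(p.InvoiceDate)
                           FROM Bills p
                           WHERE p.AccountId = curr.AccountId
                             AND p.InvoiceDate < curr.InvoiceDate)
WHERE curr.InvoiceDate = (SELECT MAX(b.InvoiceDate)
                          FROM Bills b
                          WHERE b.AccountId = curr.AccountId)
  AND curr.Estimated = 1
  AND prev.Estimated = 1

In the sample data this returns nothing, because the latest bill (1111196) is estimated but the one before it (1111195) is not; rows 1111188/1111187 would only have matched while 1111188 was still the most recent bill.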
{ "language": "en", "url": "https://stackoverflow.com/questions/126794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Java: Invalid Keystore format Error Does anyone know how to solve this Java error? java.io.IOException: Invalid keystore format I get it when I try to access the certificate store from the Java item in the Control Panel. It's stopping me from loading applets that require elevated privileges. Error Image A: I was able to reproduce the error by mangling the trusted.certs file in the directory C:\Documents and Settings\CDay\Application Data\Sun\Java\Deployment\security. Deleting the file fixed the problem. A: Do not include special characters in the organization name and unit. A: Seems to be a missing certificate or an invalid format. Did you already generate a certificate with keytool? A: For me it meant that the key file I was trying to import was invalid (it was actually a 404 page, not a valid key). A: For those who can't find 'Documents and Settings' (for whatever reason), here is another path where trusted.certs can be found: C:\Users\<username>\AppData\LocalLow\Sun\Java\Deployment\security Hope this helps!
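If you want to confirm that the keystore file itself is the problem, keytool can try to list it directly (the path below is just the Windows 7 location from the last answer; press Enter if prompted for a password, since trusted.certs normally has none):

keytool -list -keystore "C:\Users\<username>\AppData\LocalLow\Sun\Java\Deployment\security\trusted.certs"

If keytool itself cannot read the file and reports an invalid keystore format, the file is corrupt and deleting (or restoring) it, as described above, is the fix.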
{ "language": "en", "url": "https://stackoverflow.com/questions/126798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is there a way to determine if an exception is occurring? In a destructor, is there a way to determine if an exception is currently being processed? A: You can use std::uncaught_exception(), but it might not do what you think it does: see GoTW#47 for more information. A: As Luc said, you can use std::uncaught_exception(). But why do you want to know? In any case, destructors should never throw exceptions! A: You can use the Boost Test Library. Look here for a small example: struct my_exception1 { explicit my_exception1( int res_code ) : m_res_code( res_code ) {} int m_res_code; }; struct my_exception2 { explicit my_exception2( int res_code ) : m_res_code( res_code ) {} int m_res_code; }; class dangerous_call { public: dangerous_call( int argc ) : m_argc( argc ) {} int operator()() { if( m_argc < 2 ) throw my_exception1( 23 ); if( m_argc > 3 ) throw my_exception2( 45 ); else if( m_argc > 2 ) throw "too many args"; return 1; } private: int m_argc; }; void translate_my_exception1( my_exception1 const& ex ) { std::cout << "Caught my_exception1(" << ex.m_res_code << ")"<< std::endl; } void translate_my_exception2( my_exception2 const& ex ) { std::cout << "Caught my_exception2(" << ex.m_res_code << ")"<< std::endl; } int cpp_main( int argc , char *[] ) { ::boost::execution_monitor ex_mon; ex_mon.register_exception_translator<my_exception1>( &translate_my_exception1); ex_mon.register_exception_translator<my_exception2>( &translate_my_exception2); try{ // ex_mon.detect_memory_leak( true); ex_mon.execute( ::boost::unit_test::callback0<int>( dangerous_call( argc ) ) ); } catch ( boost::execution_exception const& ex ) { std::cout << "Caught exception: " << ex.what() << std::endl; } return 0; } You have to dig in the documentation. It is a very powerful library to test your software! Anyway with the help of Boost you can catch any kind of exception trigerred anywhere in your function test!
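A minimal sketch of what the std::uncaught_exception() check looks like in practice (the pre-C++17 API; keep the GotW #47 caveats in mind before relying on it):

#include <exception>
#include <iostream>
#include <stdexcept>

class Logger {
public:
    ~Logger() {
        // True while stack unwinding caused by an exception is in progress.
        if (std::uncaught_exception())
            std::cout << "destroyed during exception propagation\n";
        else
            std::cout << "destroyed normally\n";
    }
};

int main() {
    try {
        Logger inFlight;
        throw std::runtime_error("boom");   // inFlight is destroyed while unwinding
    } catch (const std::exception&) {
    }
    Logger normal;                           // destroyed normally at the end of main
    return 0;
}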
{ "language": "en", "url": "https://stackoverflow.com/questions/126800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Ruby on Rails - Why use tests? I'm confused about what the various testing appliances in Ruby on Rails are for. I have been using the framework for about 6 months but I've never understood the testing part of it. The only testing I've used is JUnit3 in Java and that only briefly. Everything I've read about it just shows testing validations. Shouldn't the validations in rails just work? It seems more like testing the framework than testing the your code. Why would you need to test validations? Furthermore, the tests seem super fragile to any change in your code. So if you change anything in your models, you have to change your tests and fixtures to match. Doesn't this violate the DRY principle? Third, writing test code seems to take alot of time. Is that normal? Wouldn't it just be faster to refresh my browser and see if it worked? I already have to play with my application just to see if it flows correctly and make sure my CSS hasn't exploded. Why wouldn't manual testing be enough? I've asked these questions before and I haven't gotten more than "automated testing is automated". I am smart enough to figure out the advantages of automating a task. My problem is that costs of writing tests seem absurdly high compared to the benefits. That said, any detailed response is welcome because I probably missed a benefit or two. A: Shouldn't the validations in rails just work? It seems more like testing the framework than testing the your code. Why would you need to test validations? The validations in Rails do work -- in fact, there are unit tests in the Rails codebase to ensure it. When you test a model's validation, you're testing the specifics of the validation: the length, the accepted values, etc. You're making sure the code was written as intended. Some validations are simple helpers and you may opt not to test them on the notion that "no one can mess up a validates_numericality_of call." Is that true? Does every developer always remember to write it in the first place? Does every developer never accidentally delete a line on a bad copy paste? In my personal opinion, you don't need to test every last combination of values for a Rails' validation helper, but you need a line to test that it's there with the right values passed, just in case some punk changes it in the future without proper forethought. Further, other validations are more complex, requiring lots of custom code -- they may warrant more thorough testing. Furthermore, the tests seem super fragile to any change in your code. So if you change anything in your models, you have to change your tests and fixtures to match. Doesn't this violate the DRY principle? I don't believe it violates DRY. They're communicating (that's what programming is, communication) two very different things. The test says the code should do something. The code says what it actually does. Testing is extremely important when there is a disconnect between those things. Test code and application code are intimately linked, obviously. I think of them as two sides of a coin. You wouldn't want a front without a back, or a back without a front. Good test code reinforces good application code, and vice versa. The two together are used to understand the whole problem that you're trying to solve. And well written test code is documentation -- it shows how the application code should be used. Third, writing test code seems to take alot of time. Is that normal? Wouldn't it just be faster to refresh my browser and see if it worked? 
I already have to play with my application just to see if it flows correctly and make sure my CSS hasn't exploded. Why wouldn't manual testing be enough? You've only worked on very small projects, for which that testing is arguably sufficient. However, when you work on a project with several developers, thousands or tens of thousands of lines of code, integration points with web services, third party libraries, multiple databases, months of development and requirements changes, etc, there are a lot of other factors in play. Manual testing is simply not enough. In a project of any real complexity, changes in one place can often have unforeseen results in others. Proper architecture helps mitigate this problem, but automated testing helps as well (and helps identify points where the architecture can be improved) by identifying when a change in one place breaks another. My problem is that costs of writing tests seem absurdly high compared to the benefits. That said, any detailed response is welcome because I probably missed a benefit or two. I'll list a few more benefits. If you test first (Test Driven Development) your code will probably be better. I haven't met a programmer who gave it a solid shot for whom this wasn't the case. Testing first forces you to think about the problem and actually design your solution, versus hacking it out. Further, it forces you to understand the problem domain well enough to where if you do have to hack it out, you know your code works within the limitations you've defined. If you have full test coverage, you can refactor with NO RISK. If a software problem is very complicated (again, real world projects that last for months tend to be complicated) then you may wish to simplify code that has previously been written. So, you can write new code to replace the old code, and if it passes all of your tests, you're done. It does exactly what the old code did with respect to the tests. For a project that plans to use an agile development method, refactoring is absolutely essential. Changes will always need to be made. To sum up, automated testing, especially test driven development, is basically a method of managing the complexity of software development. If your project isn't very complex, the cost may outweigh the benefits (although I doubt it). However, real world projects tend to be very complex, and the results of testing and TDD speak for themselves: they work. (If you're curious, I find Dan North's article on Behavior Driven Development to be very helpful in understanding a lot of the value in testing: http://dannorth.net/introducing-bdd) A: I haven't really used Rails much, but I would think that these automated tests would be useful as smoke tests to be sure that the thing you just did doesn't break something that you did last week. This will become increasingly important as your project grows. Also, writing the tests before you write the code (using the Test-Driven-Development model) will help you write the code better and faster, since the tests force you to fully think the problem through. It will also help you to know where to break up complex methods into smaller methods that you can test individually. You are right, writing and maintaining tests takes a lot of time. Sometimes more time than the code itself. However, it can save you time in bug fixing and refactoring for the reasons above. A: Tests should validate your application logic. Personally, I think my most important tests are the ones I run in Selenium. 
They check that what shows up in the browser is actually what I expect to see. However, if that's all I had, then I would find it hard to debug - it helps to have lower level tests as well and integration, functional, and unit tests are all useful tools. Unit tests let you check that the model behaves the way you expect it to (and that means every method, not just validatins). Validatins will certainly Just Work, but only if you get them right. If you get them wrong, they will Just Work, but not the way you expected. Writing a couple of lines of test is quicker than debugging later on. A simple example like the one at http://wiseheartdesign.com/2006/01/16/testing-rails-validations just checks validations in a unit test. The O'Reilly article at http://www.oreillynet.com/pub/a/ruby/2007/06/07/rails-testing-not-just-for-the-paranoid.html?page=1 is a bit more complete (though still fairly basic). Automated testing is particularly useful in regression testing where you change something and run a suite of tests to check that you didn't break anything else. Tests are a form of repetition, but they don't violate DRY because they express things in a different way. A test says "I did X so Y should happen". Code says "X happened, so now I need to do Z, which happens to cause Y to happen". i.e. a test stimulates a cause and checks an effect, while code responds to a cause, and effects something. A: A lot of the testing tutorials and the sample tests created by the Rails generators are pretty lame and IMHO that can give the mistaken impression that you're supposed to test stupid stuff like the built in Rails methods, etc. Since Rails has it's own test suite, there's no point in you writing or running tests that only test built in Rails functionality. Your tests should exercise the code you're writing! :-) As for the relative merit of running tests vs just refreshing in your browser.. The larger your app gets, the more of a pain in the ass it is to have to manually run through numerous scenarios and edge cases to make sure nothing in your application has broken. Eventually, you'll stop testing your entire application after each change and just start "spot testing" the areas you think should have been affected. Inevitably, you'll find something that used to work months ago that is now completely broken, and you have no certainty when it broke or which changes broke it. After that happens enough times... you'll come to value automated testing.... :-) A: For example: I work on a 25000+ lines project (yes, in rails 1.2) and last monday I was told if I could make Users dissapear from every list except admin ones if they had "leave_date" attribute set to the past. You can rewrite every list action (50+) to put a @users.reject!{|u| Date.today > u.leave_date} Or you can override the "find" method (DRY;-), but only if you have tests (on everything that finds users!) you will know you didn't break anything by overriding User#find !! A: Everything I've read about it just shows testing validations. Shouldn't the validations in rails just work? It seems more like testing the framework than testing the your code. Why would you need to test validations? There's a good Railscast showing one way to test controllers.
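To make the validation-testing discussion above concrete, a typical model unit test looks roughly like this (Rails 2-era Test::Unit style; the Widget model and its colour validation are invented for the example, not taken from the question):

require File.dirname(__FILE__) + '/../test_helper'

class WidgetTest < ActiveSupport::TestCase
  def test_should_require_colour
    widget = Widget.new(:colour => nil)
    assert !widget.valid?, "widget should be invalid without a colour"
    assert widget.errors.on(:colour)
  end

  def test_should_accept_a_valid_colour
    widget = Widget.new(:colour => 'Red')
    widget.valid?
    assert_nil widget.errors.on(:colour)
  end
end

A few lines like this are exactly what catch the "someone deleted a validation on a bad copy-paste" case mentioned above.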
{ "language": "en", "url": "https://stackoverflow.com/questions/126801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: What should I know about Git before I start using it? I have used "traditional" version control systems to maintain source code repositories on past projects. I am starting a new project with a distributed team and I can see advantages to using a distributed system. Given that I understand SourceSafe, CVS, and Subversion; what suggestions do you have for a Git newbie? A: In my own experience moving from Subversion to Git, the most important thing is not what you need to learn, but what you need to unlearn. Distributed Version Control is very different from Centralized Version Control. CVC is a subset of DVC, so you can just do CVC in a DVC tool, but it will be more complicated than with a CVC tool. Try to unlearn CVC, and get into the DVC mindset. If you just end up doing CVC in a DVC tool, you will merely be frustrated by all the added complexity and you will not realize what that added complexity is buying you in terms of flexibility. All DVC tools have great and very powerful support for branching and merging. Use it. All the history is available at your fingertips. Use it. (For example: never comment out code, just delete it. You can always get it back, even on an airplane with no internet connection.) One very important aspect of Git: all other tools have a more or less defined workflow. Git doesn't. Git is a DVCS workflow construction kit. This makes it sometimes hard to know what to do: you have to design and implement your own workflow (hint: use lots of shell scripts). I have been using Git for more than a year now, and I still haven't completely figured out my workflow yet. A: Do the tutorial, then play around with it. Do a little toy project to get a feel for it before you start working with your main codebase. I use gitk a lot to review patches and track how the code changes from commit to commit. A: Before committing files, they have to be added to the Git staging area, every time. To make this easier, there is a -a option to add all tracked files, as in git commit -a. Also, when you do git diff, it only shows you the difference between your working copy and what's in the staging area. If you've added changed files to your staging area, git diff may report nothing even though you may have uncommitted changes. Use git status to see for sure. A: The Git - SVN Crash Course is a good read for getting going. A: I tried Git in my company. We used CVS and wanted to move to a better VC tool, and we chose Git as the best tool for versioning files (see Linus on Git). Its performance is excellent and it is a really great tool for a developer who understands version control in depth, but it is a nightmare for regular developers who use version control in the background and don't want to spend more than a few hours learning how to use it (and they do need to learn a lot). Also, its integration with existing IDEs is far from complete. Overall usability is a pretty big issue for a regular developer. After a pilot with 4 developers we switched to Subversion as the simpler, though less capable, tool. There is also a commercial solution for Subversion, MultiSite by WANdisco (which we haven't tried yet but will try shortly).
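To make the staging-area answer above concrete, a minimal day-to-day cycle looks something like this (plain Git commands, nothing project-specific):

git status                  # what is untracked, modified, or already staged
git add file1.c file2.c     # stage specific changes
git diff                    # unstaged changes only
git diff --cached           # what will actually go into the next commit
git commit -m "message"     # commit the staged snapshot
git commit -a -m "message"  # or: stage every tracked, modified file and commit

git diff --cached is the companion to the "git diff may report nothing" surprise described above.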
{ "language": "en", "url": "https://stackoverflow.com/questions/126804", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Pattern for trying different methods when exception is thrown Here's a question to expose my lack of experience: I have a method DoSomething() which throws an exception if it doesn't manage to do it cleanly. If it fails, I try the less accurate method DoSomethingApproximately() several times in the hope that it will find a sufficiently good solution; if this also fails I finally call DoSomethingInaccurateButGuaranteedToWork(). All three are methods belonging to this object. Two questions: first, is this (admittedly ugly) pattern acceptable, or is there a more elegant way? Second, what is the best way to keep track of how many times I have called DoSomethingApproximately(), given that it is likely to throw an exception? I am currently keeping a variable iNoOfAttempts in the object, and nesting try blocks... this is horrible and I am ashamed. A: You should never use exceptions for control flow of your application. In your case I'd bunch the three methods together into a single method and have it return which particular approach succeeded, maybe with an enum or something like that. A: Return an error code instead of throwing an exception. If the ways those methods fail do throw exceptions, catch them all in the same method and take appropriate action, like increasing a counter and returning an error code. bool result = DoSomething(); while (!result && tries < MAX_TRIES) { result = DoSomethingApproximately(); //This will increment tries if (tries > THRESHOLD) { result = DoSomethingThatAlwaysWorks(); } } A: How about (pseudocode): try{ return doSomething(); } catch(ExpectedException) { ...not much here probably...} for(i = 0 to RETRIES){ try{ return doSomethingApproximately; } catch(ExpectedException) { ...not much here probably...} } doSomethingGuaranteed(); Addendum: I strongly recommend that you do not use special return values, because that means that every single user of the function has to know that some of the return values are special. Depending on the range of the function, it may be sensible to return an ordinary part of the range that can be dealt with normally, e.g. an empty collection. Of course, that may make it impossible to distinguish between failure, and the "right" answer being the empty collection (in this example). A: Go the whole function pointers in a structure route. Just to spice it up, I'll use a Queue and some LINQ. Queue<Action> actions = new Queue<Action>(new Action[] { obj.DoSomething, obj.DoSomethingApproximately, obj.DoSomethingApproximately, obj.DoSomethingApproximately, obj.DoSomethingApproximately, obj.DoSomethingGuaranteed }); actions.First(a => { try { a(); return true; } catch (Exception) { return false; } });
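Here is a sketch of the enum-returning approach suggested in the first answer. The class, method and exception names are invented stand-ins for the ones in the question, and the TryRun wrapper simply converts the known "failure" exception into a bool:

using System;

public enum SolveOutcome { Exact, Approximate, Fallback }

public class SolveFailedException : Exception { }

public class Solver
{
    private const int MaxApproximateTries = 3;

    public SolveOutcome Solve(int value)
    {
        if (TryRun(() => DoSomething(value)))
            return SolveOutcome.Exact;

        for (int attempt = 0; attempt < MaxApproximateTries; attempt++)
        {
            if (TryRun(() => DoSomethingApproximately(value)))
                return SolveOutcome.Approximate;
        }

        DoSomethingInaccurateButGuaranteedToWork(value);
        return SolveOutcome.Fallback;
    }

    // Converts the known failure exception into a bool; anything unexpected still propagates.
    private static bool TryRun(Action action)
    {
        try { action(); return true; }
        catch (SolveFailedException) { return false; }
    }

    // Stand-ins for the three methods from the question.
    private void DoSomething(int value) { /* may throw SolveFailedException */ }
    private void DoSomethingApproximately(int value) { /* may throw SolveFailedException */ }
    private void DoSomethingInaccurateButGuaranteedToWork(int value) { }
}

The caller then switches on the returned SolveOutcome instead of counting attempts or nesting try blocks.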
{ "language": "en", "url": "https://stackoverflow.com/questions/126829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to solve "An attempt to attach an auto-named database for file..." SQL error? I've got a local .mdf SQL database file that I am using for an integration testing project. Everything works fine on the initial machine I created the project, database, etc. on, but when I try to run the project on another machine I get the following: System.Data.SqlClient.SqlException : A connection was successfully established with the server, but then an error occurred during the login process. (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.) I figure while I am investigating this problem I would also ask the community here to see if someone has already overcome this. The exception occurs when I instantiate the new data context. I am using LINQ-to-SQL. m_TransLogDataContext = new TransLogDataContext (); Let me know if any additional info is needed. Thanks. A: I'm going to answer my own question as I have the solution. I was relying on the automatic connection string which had an incorrect "AttachDbFilename" property set to a location that was fine on the original machine but which did not exist on the new machine. I'm going to have to dynamically build the connection string since I want this to run straight out of source control with no manual tweaking necessary. Easy enough. A: That can happen because your application has more than one connection setting for the database. Try a "Find All" on your solution, searching for your connection name (mine is "EnergyRetailSystemConnectionString"), or you can search by your database name.
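For the "dynamically build the connection string" part, SqlConnectionStringBuilder keeps it readable. A sketch follows; the SQL Express instance name and the .mdf location are assumptions for illustration, and the LINQ to SQL generated context accepts the resulting string:

using System;
using System.Data.SqlClient;
using System.IO;

string mdfPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "TransLog.mdf");

var builder = new SqlConnectionStringBuilder
{
    DataSource = @".\SQLEXPRESS",       // assumed local SQL Express instance
    AttachDBFilename = mdfPath,         // resolved at runtime, so it works on any machine
    IntegratedSecurity = true,
    UserInstance = true                 // only relevant for SQL Express user instances
};

m_TransLogDataContext = new TransLogDataContext(builder.ConnectionString);

Because the path is computed from the application's own base directory, the project can run straight out of source control on any machine without editing the config.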
{ "language": "en", "url": "https://stackoverflow.com/questions/126837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I set the UI language in vim? I saw this on reddit, and it reminded me of one of my vim gripes: It shows the UI in German. I want English. But since my OS is set up in German (the standard at our office), I guess vim is actually trying to be helpful. What magic incantations must I perform to get vim to switch the UI language? I have tried googling on various occasions, but can't seem to find an answer. A: Ubuntu 10.10 + VIM 7.2 IMproved. Code below changes language for console vim. Add it at top of your vim.rc if has('unix') language messages C else language messages en endif A: Adding this to _vimrc works for me in windows 8: set langmenu=en_US let $LANG = 'en_US' (note that _vimrc is in the same directory that contains my vim74 dir, thats the _vimrc file that vim reads at startup) A: Try this in _vimrc. It works with my win7. set langmenu=en_US let $LANG = 'en_US' source $VIMRUNTIME/delmenu.vim source $VIMRUNTIME/menu.vim A: :help language :language fr_FR.ISO_8859-1 A: These two lines at the begining of your .vimrc file will do the job: let $LANG = 'en' set langmenu=none A: As Ken noted, you want the :language command. Note that putting this in your .vimrc or .gvimrc won’t help you with the menus in gvim, since their definition is loaded once at startup, very early on, and not re-read again later. So you really do need to set LC_ALL (or more specifically LC_MESSAGES) in your environment – or on non-Unixoid systems (eg. Windows), you can pass the --cmd switch (which executes the given command first thing, as opposed to the -c option): gvim --cmd "lang en_US" As I mentioned, you don’t need to use LC_ALL, which will forcibly switch all aspects of your computing environment. You can do more nuanced stuff. F.ex., my own locale settings look like this: LANG=en_US.utf8 LC_CTYPE=de_DE.utf8 LC_COLLATE=C This means I get a largely English system, but with German semantics for letters, except that the default sort order is ASCIIbetical (ie. sort by codepoint, not according to language conventions). You could use a different variation; see man 7 locale for more. A: Start vim with a changed locale: LC_ALL=en_GB.utf-8 vim Or export that variable per default in your bashrc/profile. A: Two Vim installations on Windows Nothing from here around have helped me until I have realized that I have 2 Vim installed. * *Git Bash via MinGW (Cygwin, mintty) *A separate installation in the Program Files on Windows Next command will filter you all watched vimrc-files and their locations. vim --version | grep vimrc * *_vimrc (Windows & CMD) *.vimrc (Bash for Git) *vimrc (has different locations for both) 1: Vim on Windows & CMD Only renaming (deletion) of the lang folder helped me. You can find it here C:\Program Files (x86)\Vim\vim80\lang I tried all config settings listed here around and it was useless. 2.1: Git Bash through MinGW, Cygwin, mintty For Git Bash I added language messages en_US at the top of C:\Program Files\Git\etc\vimrc Of course, if you prefer to delete the lang folder you can find it here * *C:\Program Files\Git\usr\share\vim\vim80\lang *C:\Users\User_name_xxx\AppData\Local\Programs\Git\usr\share\vim\vim80\lang for a local user installation. 2.2: Tuning only Git's Bash (MinGW64, Cygwin, mintty) At the end, for Bash on Windows I have chosen to skip manipulations with vimrc I opened C:\Program Files\Git\etc\bash.bashrc and added the following line LANG='en_US' or LANG=C Try to do not use en_US.UTF-8 because it forces some bash commands to produce weird chars. 
For example, running find 'xxx_yyy_zzz_aaa.bbbddd' against a non-existing file produces weird characters in the error output. A: Putting this line of code at the top of my _vimrc file saved my day: set langmenu=en_US.UTF-8 A: This worked for changing vim's menu language set langmenu=en_US.UTF-8 [or just set langmenu=en for short] But language en gave me an error saying it couldn't set en as a language, but this line did the job: :let $LANG = 'en' The latter comes from Vim's docs. I added both lines at the beginning of the _vimrc file. I use a 64-bit Windows 7 computer. PS: this line changes both the language and the menu language: language messages en in the .vimrc file (or _vimrc file if you are on Windows) A: For reference, in Windows (7) I just deleted the directory C:\Program Files (x86)\Vim\vim72\lang. That made it fall back to en_US. A: I don't know why all of the above answers did not work for me. I kept getting errors about the locales not existing. Maybe it's a Windows thing. At any rate, my solution was to add this to my vimrc: let $LANG = 'en' Ah, I spoke too soon. The menus of gVim are still in Japanese, but the intro screen is in English. A: Try adding this to your _vimrc: let $LANG='en_US' A: I simply disabled the Native Language Support when installing gvim (thus making it a custom installation). Tested successfully with gvim82.exe under Windows 7. A: Had a similar issue, but none of the above solutions worked: https://superuser.com/questions/552504/vim-ui-language-issue/552523 I resolved it by removing all Vim packages and building Vim from source. Hope it'll help someone. A: If you're on Windows and don't want to be bothered issuing commands: To prevent the GUI from loading localization files, just go to Program Files\Vim\vim80\lang and put an underscore as a prefix in front of all the files that look like they have something to do with your locale. To prevent Vim itself from loading localization files, in the same folder as above, prefix the folder named with your country code with an underscore. Note: Windows 10 will probably ask for Administrator privileges by raising a UAC warning. By the way, this same technique can be applied to a lot of Unix/Linux tools ported to Windows, and generally all software packages where the localization files can readily be accessed. If you rename those to prevent the application from finding them, the fallback language will most probably be English.
{ "language": "en", "url": "https://stackoverflow.com/questions/126853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "112" }
Q: Compute Users average weight I have two tables, Users and DoctorVisit User - UserID - Name DoctorsVisit - UserID - Weight - Date The doctorVisit table contains all the visits a particular user did to the doctor. The user's weight is recorded per visit. Query: Sum up all the Users weight, using the last doctor's visit's numbers. (then divide by number of users to get the average weight) Note: some users may have not visited the doctor at all, while others may have visited many times. I need the average weight of all users, but using the latest weight. Update I want the average weight across all users. A: If I understand your question correctly, you should be able to get the average weight of all users based on their last visit from the following SQL statement. We use a subquery to get the last visit as a filter. SELECT avg(uv.weight) FROM (SELECT weight FROM uservisit uv INNER JOIN (SELECT userid, MAX(dateVisited) DateVisited FROM uservisit GROUP BY userid) us ON us.UserID = uv.UserId and us.DateVisited = uv.DateVisited I should point out that this does assume that there is a unique UserID that can be used to determine uniqueness. Also, if the DateVisited doesn't include a time but just a date, one patient who visits twice on the same day could skew the data. A: This should get you the average weight per user if they have visited: select user.name, temp.AvgWeight from user left outer join (select userid, avg(weight) from doctorsvisit group by userid) temp on user.userid = temp.userid A: Write a query to select the most recent weight for each user (QueryA), and use that query as an inner select of a query to select the average (QueryB), e.g., SELECT AVG(weight) FROM (QueryA) A: I think there's a mistake in your specs. If you divide by all the users, your average will be too low. Each user that has no doctor visits will tend to drag the average towards zero. I don't believe that's what you want. I'm too lazy to come up with an actual query, but it's going to be one of these things where you use a self join between the base table and a query with a group by that pulls out all the relevant Id, Visit Date pairs from the base table. The only thing you need the User table for is the Name. We had a sample of the same problem in here a couple of weeks ago, I think. By the "same problem", I mean the problem where we want an attribute of the representative of a group, but where the attribute we want isn't included in the group by clause. A: I think this will work, though I could be wrong: Use an inner select to make sure you have the most recent visit, then use AVG. Your User table in this example is superfluous: since you have no weight data there and you don't care about user names, it doesn't do you any good to examine it. SELECT AVG(dv.Weight) FROM DoctorsVisit dv WHERE dv.Date = ( SELECT MAX(Date) FROM DoctorsVisit innerdv WHERE innerdv.UserID = dv.UserID ) A: If you're using SQL Server 2005 you don't need the sub query on the GROUP BY. You can use the new ROW_NUMBER and PARTION BY functionality. SELECT AVG(a.weight) FROM (select ROW_NUMBER() OVER(PARTITION BY dv.UserId ORDER BY Date desc) as ID, dv.weight from DoctorsVisit dv) a WHERE a.Id = 1 As someone else has mentioned though, this is the average weight across all the users who have VISITED the doctor. If you want the average weight across ALL of the users then anyone not visiting the doctor will give a misleading average. 
A: Here's my stab at the solution: select avg(a.Weight) as AverageWeight from DoctorsVisit as a inner join (select UserID, max(Date) as LatestDate from DoctorsVisit group by UserID) as b on a.UserID = b.UserID and a.Date = b.LatestDate; Note that the User table isn't used at all. This average entirely omits users who have no doctor's visits at all, or whose weight is recorded as NULL in their latest doctor's visit. This average is skewed if any users have more than one visit on the same date, and if the latest date is one of those dates where the user got weighed more than once.
{ "language": "en", "url": "https://stackoverflow.com/questions/126855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Use custom objects as the source for Microsoft Reports (.rdlc) In some instances, I prefer working with custom objects instead of strongly typed datasets and data rows. However, it seems like Microsoft Reporting (included with VS2005) requires strongly typed datasets. Is there a way to use my custom objects to design and populate reports? A: I found the answer. Yes, it's possible. You just have to add a custom object as a datasource in visual studio. http://www.gotreportviewer.com/objectdatasources/index.html A: I could never choose one of my own POCOs in Report Data setup from my project to be a model for the report - the alleged 'global' option mentioned in the walkthrough was not there. So I ended up having to edit the XML to define the type and an imitation data source (which does not actually exist in my project). I assign the data of type Aies.Core.Model.Invoice.MemberInvoice to the report in code reportViewer.LocalReport.DataSources.Add(new ReportDataSource("MemberInvoice", new[] { invoice1 })); And the custom definition is: <DataSources> <DataSource Name="MemberInvoice"> <ConnectionProperties> <DataProvider>System.Data.DataSet</DataProvider> <ConnectString>/* Local Connection */</ConnectString> </ConnectionProperties> <rd:DataSourceID>3fe04def-105a-4e9b-99db-630c1f8bb2c9</rd:DataSourceID> </DataSource> </DataSources> <DataSets> <DataSet Name="MemberInvoice"> <Fields> <Field Name="MemberId"> <DataField>MemberId</DataField> <rd:TypeName>System.Int32</rd:TypeName> </Field> <Field Name="DateOfIssue"> <DataField>DateOfIssue</DataField> <rd:TypeName>System.DateTime</rd:TypeName> </Field> <Field Name="DateDue"> <DataField>DateDue</DataField> <rd:TypeName>System.DateTime</rd:TypeName> </Field> <Field Name="Amount"> <DataField>Amount</DataField> <rd:TypeName>System.Decimal</rd:TypeName> </Field> </Fields> <Query> <DataSourceName>MemberInvoice</DataSourceName> <CommandText>/* Local Query */</CommandText> </Query> <rd:DataSetInfo> <rd:DataSetName>Aies.Core.Model.Invoice</rd:DataSetName> <rd:TableName>MemberInvoiceData</rd:TableName> <rd:ObjectDataSourceSelectMethod>GetInvoices</rd:ObjectDataSourceSelectMethod> <rd:ObjectDataSourceSelectMethodSignature>System.Collections.Generic.IEnumerable`1[Aies.Core.Model.Invoice.MemberInvoice] GetInvoices()</rd:ObjectDataSourceSelectMethodSignature> <rd:ObjectDataSourceType>Aies.Core.Model.Invoice.MemberInvoiceData, Aies.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null</rd:ObjectDataSourceType> </rd:DataSetInfo> </DataSet> </DataSets> A: I believe you can set up SSRS to read data values from a more or less arbitrary object. This Link describes the IDataReaderFieldProperties object in the API which (IIRC) allows you to specify the getter method to invoke to get a value.
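In code, binding a plain list of objects ends up looking something like the sketch below (WinForms ReportViewer; MemberInvoice and invoice1 are the names from the answer above, the .rdlc path is a placeholder, and the data source name passed in must match the dataset name defined in the .rdlc):

using System.Collections.Generic;
using Microsoft.Reporting.WinForms;

var invoices = new List<MemberInvoice> { invoice1 };

reportViewer.LocalReport.ReportPath = "MemberInvoiceReport.rdlc";  // or ReportEmbeddedResource
reportViewer.LocalReport.DataSources.Clear();
reportViewer.LocalReport.DataSources.Add(
    new ReportDataSource("MemberInvoice", invoices));
reportViewer.RefreshReport();

The report then reads the public properties of MemberInvoice (MemberId, DateOfIssue, and so on) as its fields.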
{ "language": "en", "url": "https://stackoverflow.com/questions/126863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I debug a C# COM assembly when it's being called from a native win32 application? I'm developing a C# assembly which is to be called via COM from a Delphi 7 (iow, native win32, not .net) application. So far, it seems to work. I've exported a TLB file, imported that into my Delphi project, and I can create my C# object and call its functions. So that's great, but soon I'm going to really want to use Visual Studio to debug the C# code while it's running. Set breakpoints, step through code, all that stuff. I've tried breaking in the Delphi code after the COM object is created, then looking for a process for VS to attach to, but I can't find one. Is there a way to set VS2008 up to do this? I'd prefer to just be able to hit f5 and have VS start the Delphi executable, wait for the C# code to be called, and then attach itself to it.. But I could live with manually attaching to a process, I suppose. Just please don't tell me I have to make do with MessageBox.Show etc. A: In the VS2008 project properties page, on the Debug tab, there's an option to set a different Start Action. This can be used to run an external program (e.g. your Delphi app) when you press F5. A: Place the following in the method you wish to debug: #if DEBUG if (!System.Diagnostics.Debugger.IsAttached) Debugger.Launch(); #endif When you want to debug, build a debug version and use that in your application. When this code runs, a dialog pops up asking if you want to attach a debugger. A: You can just attach to the native application and see breakpoint, view stacks, watches etc. normally. You'll need to attach after the COM object is created. I put a Afx MsgBox when the object is created to stop the application's flow and then attach the debugger.
{ "language": "en", "url": "https://stackoverflow.com/questions/126868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Best design for lookup-and-possibly-change method I am designing a class that stores (caches) a set of data. I want to lookup a value, if the class contains the value then use it and modify a property of the class. I am concerned about the design of the public interface. Here is how the class is going to be used: ClassItem *pClassItem = myClass.Lookup(value); if (pClassItem) { // item is found in class so modify and use it pClassItem->SetAttribute(something); ... // use myClass } else { // value doesn't exist in the class so add it myClass.Add(value, something); } However I don't want to have to expose ClassItem to this client (ClassItem is an implementation detail of MyClass). To get round that the following could be considered: bool found = myClass.Lookup(value); if (found) { // item is found in class so modify and use it myClass.ModifyAttribute(value, something); ... // use myClass } else { // value doesn't exist in the class so add it myClass.Add(value, something); } However this is inefficient as Modify will have to do the lookup again. This would suggest a lookupAndModify type of method: bool found = myClass.LookupAndModify(value, something); if (found) { // item is found in class ... // use myClass } else { // value doesn't exist in the class so add it myClass.Add(value, something); } But rolling LookupAndModify into one method seems like very poor design. It also only modifies if value is found and so the name is not only cumbersome but misleading as well. Is there another better design that gets round this issue? Any design patterns for this (I couldn't find anything through google)? A: Actually std::set<>::insert() does precisely this. If the value exists, it returns the iterator pointing to the existing item. Otherwise, the iterator where the insertion was made is returned. It is likely that you are using a similar data structure for fast lookups anyway, so a clean public interface (calling site) will be: myClass.SetAttribute(value, something) which always does the right thing. MyClass handles the internal plumbing and clients don't worry about whether the value exists. A: This assumes that you're setting value to the same "something" in both the Modify and Add cases: if (!myClass.AddIfNotExists(value, something)) { // use myClass } Otherwise: if (myClass.TryModify(value, something)) { // use myClass } else { myClass.Add(value, otherSomething); } A: Two things. The first solution is close. Don't however, return ClassItem *. Return an "opaque object". An integer index or other hash code that's opaque (meaningless) to the client, but usable by the myClass instance. Then lookup returns an index, which modify can subsequently use. void *index = myClass.lookup( value ); if( index ) { myClass.modify( index, value ); } else { myClass.add( value ); } After writing the "primitive" Lookup, Modify and Add, then write your own composite operations built around these primitives. Write a LookupAndModify, TryModify, AddIfNotExists and other methods built from your lower-level pieces.
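To tie the std::set::insert() remark back to the original example, here is the same single-lookup idea with a std::map (a sketch only; the string value/attribute types stand in for whatever ClassItem really holds):

#include <map>
#include <string>

class MyClass {
public:
    // Always does the right thing: one lookup, then either update the
    // existing entry or keep the freshly inserted one.
    void SetAttribute(const std::string& value, const std::string& attribute) {
        std::pair<std::map<std::string, std::string>::iterator, bool> result =
            m_items.insert(std::make_pair(value, attribute));
        if (!result.second) {
            // The value was already cached; modify it in place.
            result.first->second = attribute;
        }
    }

private:
    std::map<std::string, std::string> m_items;
};

The client never sees ClassItem (or an iterator) at all; it just calls myClass.SetAttribute(value, something).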
{ "language": "en", "url": "https://stackoverflow.com/questions/126870", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I disable updating a form in Windows Forms? During a complicated update I might prefer to display all the changes at once. I know there is a method that allows me to do this, but what is it? A: You can use the old Win32 LockWindowUpdate function: [DllImport("user32.dll")] private static extern bool LockWindowUpdate(IntPtr hWndLock); try { // Lock Window... LockWindowUpdate(frm.Handle); // Perform your painting / updates... } finally { // Release the lock... LockWindowUpdate(IntPtr.Zero); } A: Most complex third-party Windows Forms components have BeginUpdate and EndUpdate methods or similar, to perform a batch of updates and then redraw the control. At the form level, there is no such thing, but you might be interested in enabling double buffering. A: I think this.SuspendLayout() & ResumeLayout() should do it A: I don't find that SuspendLayout() and ResumeLayout() do what you are asking for. LockWindowUpdate() mentioned by moobaa does the trick. However, LockWindowUpdate only works for one window at a time. You can also try this: using System; using System.Windows.Forms; using Microsoft.Win32; using System.Runtime.InteropServices; public partial class Form1 : Form { [DllImport("user32.dll")] public static extern int SendMessage(IntPtr hWnd, Int32 wMsg, bool wParam, Int32 lParam); private const int WM_SETREDRAW = 11; public Form1() { InitializeComponent(); } private void button1_Click(object sender, EventArgs e) { SendMessage(this.Handle, WM_SETREDRAW, false, 0); // Do your thingies here SendMessage(this.Handle, WM_SETREDRAW, true, 0); this.Refresh(); } } A: You can use the SuspendLayout and ResumeLayout methods on the form or controls while updating properties. If you're binding data to controls you can use the BeginUpdate and EndUpdate methods. A: SuspendLayout will help performance if the updates involve changes to controls and layout: MSDN
{ "language": "en", "url": "https://stackoverflow.com/questions/126876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How do I make a gwt-ext window not resize when its content resizes? I've run setHeight(600) on a Window; that's its initial size. I've also gone ahead and done setAutoScroll(true). When the content of the window resizes, the window itself resizes. What I want is for the window to stay fixed in size, and when the content grows larger, add scrollbars. I can get this if I resize the window manually, then let the content grow or shrink. A: Have you checked the feature provided by GWT, setAutoHeights(false)?
{ "language": "en", "url": "https://stackoverflow.com/questions/126881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Data Comparison We have a SQL Server table containing Company Name, Address, and Contact name (among others). We regularly receive data files from outside sources that require us to match up against this table. Unfortunately, the data is slightly different since it is coming from a completely different system. For example, we have "123 E. Main St." and we receive "123 East Main Street". Another example, we have "Acme, LLC" and the file contains "Acme Inc.". Another is, we have "Ed Smith" and they have "Edward Smith" We have a legacy system that utilizes some rather intricate and CPU intensive methods for handling these matches. Some involve pure SQL and others involve VBA code in an Access database. The current system is good but not perfect and is cumbersome and difficult to maintain The management here wants to expand its use. The developers who will inherit the support of the system want to replace it with a more agile solution that requires less maintenance. Is there a commonly accepted way for dealing with this kind of data matching? A: Here's something I wrote for a nearly identical stack (we needed to standardize the manufacturer names for hardware and there were all sorts of variations). This is client side though (VB.Net to be exact) -- and use the Levenshtein distance algorithm (modified for better results): Public Shared Function FindMostSimilarString(ByVal toFind As String, ByVal ParamArray stringList() As String) As String Dim bestMatch As String = "" Dim bestDistance As Integer = 1000 'Almost anything should be better than that! For Each matchCandidate As String In stringList Dim candidateDistance As Integer = LevenshteinDistance(toFind, matchCandidate) If candidateDistance < bestDistance Then bestMatch = matchCandidate bestDistance = candidateDistance End If Next Return bestMatch End Function 'This will be used to determine how similar strings are. Modified from the link below... 
'Fxn from: http://ca0v.terapad.com/index.cfm?fa=contentNews.newsDetails&newsID=37030&from=list Public Shared Function LevenshteinDistance(ByVal s As String, ByVal t As String) As Integer Dim sLength As Integer = s.Length ' length of s Dim tLength As Integer = t.Length ' length of t Dim lvCost As Integer ' cost Dim lvDistance As Integer = 0 Dim zeroCostCount As Integer = 0 Try ' Step 1 If tLength = 0 Then Return sLength ElseIf sLength = 0 Then Return tLength End If Dim lvMatrixSize As Integer = (1 + sLength) * (1 + tLength) Dim poBuffer() As Integer = New Integer(0 To lvMatrixSize - 1) {} ' fill first row For lvIndex As Integer = 0 To sLength poBuffer(lvIndex) = lvIndex Next 'fill first column For lvIndex As Integer = 1 To tLength poBuffer(lvIndex * (sLength + 1)) = lvIndex Next For lvRowIndex As Integer = 0 To sLength - 1 Dim s_i As Char = s(lvRowIndex) For lvColIndex As Integer = 0 To tLength - 1 If s_i = t(lvColIndex) Then lvCost = 0 zeroCostCount += 1 Else lvCost = 1 End If ' Step 6 Dim lvTopLeftIndex As Integer = lvColIndex * (sLength + 1) + lvRowIndex Dim lvTopLeft As Integer = poBuffer(lvTopLeftIndex) Dim lvTop As Integer = poBuffer(lvTopLeftIndex + 1) Dim lvLeft As Integer = poBuffer(lvTopLeftIndex + (sLength + 1)) lvDistance = Math.Min(lvTopLeft + lvCost, Math.Min(lvLeft, lvTop) + 1) poBuffer(lvTopLeftIndex + sLength + 2) = lvDistance Next Next Catch ex As ThreadAbortException Err.Clear() Catch ex As Exception WriteDebugMessage(Application.StartupPath , [Assembly].GetExecutingAssembly().GetName.Name.ToString, MethodBase.GetCurrentMethod.Name, Err) End Try Return lvDistance - zeroCostCount End Function A: SSIS (in Sql 2005+ Enterprise) has Fuzzy Lookup which is designed for just such data cleansing issues. Other than that, I only know of domain specific solutions - such as address cleaning, or general string matching techniques. A: There are many vendors out there that offer products to do this kind of pattern matching. I would do some research and find a good, well-reputed product and scrap the home-grown system. As you say, your product is only good, and this is a common-enough need for businesses that I'm sure there's more than one excellent product out there. Even if it costs a few thousand bucks for a license, it will still be cheaper than paying a bunch of developers to work on something in-house. Also, the fact that the phrases "intricate", "CPU intensive", "VBA code" and "Access database" appear together in your system's description is another reason to find a good third-party tool. EDIT: it's also possible that .NET has a built-in component that does this kind of thing, in which case you wouldn't have to pay for it. I still get surprised once in a while by the tools that .NET offers. A: Access doesn't really have the tools for this. In an ideal world I would go with the SSIS solution and use fuzzy lookup. But if you are currently using Access, the chances of your office buying SQL Server Enterprise edition seem low to me. If you are stuck with the current environment, you could try a brute force approach. Start with standardized cleansing of addresses. PIck standard abbreviations for Street, raod, etc. and write code to change all the normal variations to those standard addesses. Replace any instances of two spaces with one space, trim all the data and remove any non-alphanumeric characters. As you can see this is quite a task. As for company names, maybe you can try matching on first 5 characters of the name and the address or phone. 
You could also create a table of known variations and what they will relate to in your database to use for cleanising future files. So if you record with id 100 is Acme, Inc. you could have a table like this: idfield Name 100 Acme, Inc. 100 Acme, Inc 100 Acme, Incorporated 100 Acme, LLC 100 Acme This will start small but build over time if you make an entry every time you find and fix a duplicate (make it part of you de-dupping process) and if you make an entry every time you are able to match the first part of the name and address to an existing company. I'd also look at that function Torial posted and see if it helps. All of this would be painful and timeconsuming, but would get better over time as you find new variations and add them to the code or list. If you do decide to stardardize your addressdata, make sure to clean production data first, then do any imports to a work table and clean it, then try to match to production data and insert new records. A: There's quite a few ways to tackle this that may not be obvious. The best is finding unique identifiers that you can use for matching outside of the fields with mis spellings, etc. Some thoughts * *The obvious, Social security number, drivers license, etc *Email address *Cleansed phone number (Rremove punctuation, etc) As far as vendors go I just answered a similar question and am pasting below. Each major provider does have their own solution. Oracle, IBM, SAS Dataflux, etc and each claim to be the best at this kind of problem. Independent verified evaluation: There was a study done at Curtin University Centre for Data Linkage in Australia that simulated the matching of 4.4 Million records. Identified what providers had in terms of accuracy (Number of matches found vs available. Number of false matches) DataMatch Enterprise, Highest Accuracy (>95%), Very Fast, Low Cost IBM Quality Stage , high accuracy (>90%), Very Fast, High Cost (>$100K) SAS Data Flux, Medium Accuracy (>85%), Fast, High Cost (>100K) That was the best independent evaluation we could find, was very thorough.
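A sketch of what that variations table and the matching step of an import might look like (the table and column names here are made up for illustration):

CREATE TABLE CompanyNameVariation (
    CompanyId   int          NOT NULL,   -- id of the canonical company record
    VariantName varchar(100) NOT NULL,   -- a spelling seen in an incoming file
    CONSTRAINT pk_CompanyNameVariation PRIMARY KEY (CompanyId, VariantName)
)

-- During an import: match what you can automatically, leave the rest
-- (CompanyId IS NULL) for manual review, then record any manual matches
-- back into CompanyNameVariation so the next file needs less hand work.
SELECT s.*, v.CompanyId
FROM ImportStaging s
LEFT JOIN CompanyNameVariation v
    ON v.VariantName = LTRIM(RTRIM(s.CompanyName))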
{ "language": "en", "url": "https://stackoverflow.com/questions/126885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Visual Studio 2005 - Refactor multiple attributes at the same time? I use right-click > Refactor > Encapsulate Field to generate my accessors every time. The problem is that when I create a new class, I can have more than 10 attributes, and it takes a long time to encapsulate every one of them one by one. Is there a faster way to create them? Thank you for your time. A: If you create a new class, you can use code snippets to create encapsulated fields instead of first creating the field and then encapsulating it. In C#, the shortcuts are prop and propg (for a private setter). A: Looks like the refactoring built into Studio only supports a single field at a time for the Encapsulate Field refactoring. Refactor Pro! (http://www.devexpress.com/Products/Visual_Studio_Add-in/Refactoring/) or Resharper (http://www.jetbrains.com/resharper/index.html) both have support for encapsulating multiple fields. You may be able to get fancy and put together a macro that would allow you to select multiple fields and then encapsulate each one, but VS macros are not my ball of wax. A: In C# 3.0, the new property syntax saves you the need to declare the field and implement the accessors. The syntax looks like: public string Name { get; private set; } Also, I want to point out that for internal members, trivial properties have very little value over internal fields, since you have control over both caller and implementation - you can switch to a property in the future without a lot of work. Even for public members, thinking you can future-proof your code merely by making public data fields into properties is myopic. At the very least, you should add indirection around your constructor (with a factory) and your interface (with an interface). It also requires deep thought into how the consumers of your API will expect you to work over multiple versions. It's really hard, and it's only worth doing if you are an API vendor, in my opinion. In my code, the main reason I use properties is that a lot of tools that use reflection look at properties but not fields. I think this is a mistake, but that's the way the tools work.
{ "language": "en", "url": "https://stackoverflow.com/questions/126894", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: git-svn merges and commit details

We are using git-svn to manage branches of an SVN repo. We are facing the following problem: after a number of commits by user X in the branch, user Y would like to use git-svn to merge the changes from the branch to trunk. The problem we're seeing is that the commit messages for all the individual merge operations look as if they were made by user Y, whereas the actual change in the branch was made by user X. Is there a way to indicate to git-svn that when merging, it should use the original commit message/author for a given change rather than the person doing the merge?

A: You can use grafts to teach git about merges that are not denoted in the commit object in question.

echo "$merge_sha1 $parent1_sha1 $parent2_sha1" >> .git/info/grafts

Finding this info is easy enough: given the merge commit in question, you know $merge_sha1 and $parent1_sha1 already. Conventionally, the commit message of such a commit will contain the SVN revision number of the second parent commit, which you simply translate to the corresponding commit ID:

git svn find-rev r$revnum $branch

Presto, you have all 3 pieces of information you need to create the graft.

A: Try using the --add-author-from and --use-log-author options to git-svn.

A: The git-svn man page recommends that you don't use merge: "It is recommended that you run git-svn fetch and rebase (not pull or merge)". Having said that, you can do what you like :-)

There are 2 issues here. First, svn only stores the committer, not the author of a patch as git does. So when Y commits the merges to trunk, svn only records her name, even though the patches were authored by X. This is an amazing feature of git, stunningly simple yet vital for open source projects where attributing changes to the author can avoid legal problems down the road.

Secondly, git doesn't seem to use the relatively new svn merge features. This may be a temporary thing, as git is actively developed and new features are added all the time. But for now, it doesn't use them. I've just tried with git 1.6.0.2 and it "loses" information compared to doing the same operation with svn merge.

In svn 1.5, a new feature was added to the logging and annotation methods, so that svn log -g on the trunk would output something like this for a merge:

------------------------------------------------------------------------
r5 | Y | 2008-09-24 15:17:12 +0200 (Wed, 24 Sep 2008) | 1 line

Merged release-1.0 into trunk
------------------------------------------------------------------------
r4 | X | 2008-09-24 15:16:13 +0200 (Wed, 24 Sep 2008) | 1 line
Merged via: r5

Return 1
------------------------------------------------------------------------
r3 | X | 2008-09-24 15:15:48 +0200 (Wed, 24 Sep 2008) | 2 lines
Merged via: r5

Create a branch

Here, Y commits r5, which incorporates the changes from X on the branch into the trunk. The format of the log is not really that great, but it comes into its own with svn blame -g:

  2  Y  int main()
  2  Y  {
G 4  X    return 1;
  2  Y  }

Here, assuming Y only commits to trunk, we can see that one line was edited by X (on the branch) and merged. So, if you are using svn 1.5.2, you are possibly better off merging with the real svn client for now. Although you would lose merge info in git, it is usually clever enough not to complain.

Update: I've just tried this with git 1.7.1 to see if there have been any advances in the interim.
The bad news is that merge within git still does not populate the svn:mergeinfo values, so git merge followed by git svn dcommit will not set svn:mergeinfo and you will lose merge information if the Subversion repository is the canonical source, which it probably is. The good news is that git svn clone does read in svn:mergeinfo properties to construct a better merge history, so if you use svn merge correctly (it requires merging full branches) then the git clone will look correct to git users.
{ "language": "en", "url": "https://stackoverflow.com/questions/126896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What is the equivalent of the SQLite datetime function in PostgreSQL? The question is pretty self-explanatory. I'm looking for a PostgreSQL equivalent of the SQLite datetime function.

A:

postgres=# select to_char(now(),'YYYY-MM-DD HH:MI:SS');
       to_char
---------------------
 2008-09-24 02:09:20
(1 row)

postgres=# select to_char(now(),'YYYY-MM-DD HH24:MI:SS');
       to_char
---------------------
 2008-09-24 14:09:20
(1 row)

A: I think this is what you're searching for:

timestamp [ (p) ] [ without time zone ]

or

timestamp [ (p) ] with time zone

Otherwise have a look at http://www.postgresql.org/docs/8.1/interactive/datatype-datetime.html

A: OK, thanks for the answers, they helped point me in the right direction. What I actually was looking for was to_timestamp.
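For completeness, a small sketch of the two directions (the literal values below are only examples): to_timestamp() parses a string into a timestamp, while to_char() formats a timestamp as a string, which together cover most of what SQLite's datetime() is typically used for.

-- String -> timestamp (what the asker ended up using)
SELECT to_timestamp('2008-09-24 14:09:20', 'YYYY-MM-DD HH24:MI:SS');

-- Timestamp -> string (the to_char direction shown above)
SELECT to_char(now(), 'YYYY-MM-DD HH24:MI:SS');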
{ "language": "en", "url": "https://stackoverflow.com/questions/126897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Organizing the source code base when mixing two or more languages (like Java and C++)

I ran into a problem a few days ago when I had to introduce C++ files into a Java project. It started with a need to measure the CPU usage of the Java process, and it was decided that the way to go was to use JNI to call out to a native library (a shared library on a Unix machine) written in C. The problem was to find an appropriate place to put the C files in the source repository (incidentally ClearCase), which consists of only Java files. I thought of a couple of alternatives:

(a) Create a separate directory for the C files (specifically, one .h file and one .c file) at the top of the source base, like:

/vobs/myproduct/javasrc
/vobs/myproduct/cppsrc

I didn't like this because I have only two C files and it seemed very odd to split the source base at the language level like this. Had substantial portions of the project been written more or less equally in C++ and Java, this could be okay.

(b) Put the C files into the Java package that uses them. I have the calling Java classes in /vobs/myproduct/com/mycompany/myproduct/util/ and the C files also go in there. I didn't like this either, because I think the C files just don't belong in the Java package.

Has anybody solved a problem like this before? Generally, what's a good strategy to follow when organizing a codebase that mixes two or more languages?

Update: I don't have any plans to use any more C or C++ in my project (some Jython perhaps), but you never know when a customer needs a feature that can only be solved, or is best solved, by using C.

A: "I didn't like this because I have only two C files and it seemed very odd to split the source base at the language level like this."

Why does it seem odd? Consider this project:

project1\src\java
project1\src\cpp
project1\src\python

Or, if you decide to split things up into modules:

project1\module1\src\java
project1\module1\src\cpp
project1\module2\src\java
project1\module2\src\python

I guess it's a matter of personal taste, but the above structure is fairly common, and I think it works quite well once you get used to it.

A: The default Maven-generated layout for web apps is src/main/java, src/test/java, src/main/resources, and src/test/resources. I would assume that it would default to adding src/main/cpp and src/test/cpp as well. This seems like a decent enough convention to me.

A: Keeping them in separate folders is a good idea. It makes the C files easier to find than searching through Java packages for them, and it also allows for the possibility of adding more C code in the future without having to move it all around later.

A: Personally I'd separate the two, possibly even into their own separate projects, but that's when they are both separate things, much like you wouldn't put two different concepts in the same class. It gets much vaguer when they both touch the same conceptual area. Of course there are always issues when it comes to building the code: is putting it in structure (b) possible, for instance, without needing to do all sorts of tricks to get it to compile? Are you planning on using more C in the project? In that case the C files would get spread all over your project if you follow the same pattern...

A: Personally, in the case of split-language solutions, I would keep them in separate projects or folders. One way of looking at the problem is to treat the C classes like any other third-party API. Interface out the dependencies (i.e. avoid direct calls) in your Java code to avoid tight coupling, and keep the C source in a separate project/folder from the Java.

A: Let's use different terminology. There is one product, which is not a project. The product consists of a Java workspace and a C/C++ workspace, each loadable from a different IDE. Eventually, if you use one and the same IDE, there will be only one workspace. Each workspace consists of several projects. Each project has its own folder structure (src, bin, res, etc.). So if it is only one workspace, then it is better to have at least one Java and one C/C++ project inside, each with different compile/run/debug/output/... settings. So, I would use:

Product/Workspace(1)/JavaProject1/src
Product/Workspace(1)/JavaProject2/src
Product/Workspace(1 or 2)/CPPproject1/src
Product/Workspace(1 or 2)/CPPproject2/src
...

This way you can eventually use one and the same folder structure for each project, which is more consistent. Basically this is just one more level of abstraction - dividing the product into different related projects.

A: In this case, the files in question are not just in a different language, but also run as a separate program that interacts through a defined interface. This means that the source files can be treated as a separate project, and therefore kept elsewhere. The case is different in .NET projects which mix C# and ASP.NET (for example) within one codebase. How do people organise their code in such cases?
{ "language": "en", "url": "https://stackoverflow.com/questions/126898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Methods for caching PHP objects to file?

In ASP.NET, I grew to love the Application and Cache stores. They're awesome. For the uninitiated, you can just throw your data-logic objects into them and, hey presto, you only need to query the database once for a bit of data. By far one of the best ASP.NET features, IMO.

I've since ditched Windows for Linux, and therefore moved to PHP, Python and Ruby for web dev. I use PHP most because I develop several open source projects, all using PHP. Needless to say, I've explored what PHP has to offer in terms of caching data objects. So far I've played with:

*Serializing to file (a pretty slow/expensive process)
*Writing the data to file as JSON/XML/plaintext/etc. (even slower for read ops)
*Writing the data to file as pure PHP (the fastest read, but quite a convoluted write op)

I should stress now that I'm looking for a solution that doesn't rely on a third-party app (e.g. memcached), as the apps are installed in all sorts of scenarios, most of which don't have install rights (e.g. a cheap shared hosting account).

So back to what I'm doing now: is persisting to file secure? Rule 1 in production server security has always been to disable file writing, but I really don't see any way PHP could cache if it couldn't write. Are there any tips and/or tricks to boost the security? Is there another persist-to-file method that I'm forgetting? Are there any better methods of caching in "limited" environments?

A: Re: Is there another persist-to-file method that I'm forgetting?

It's of limited utility, but if you have a particularly beefy database query you could write the serialized object back out to an indexed database table. You'd still have the overhead of a database query, but it would be a simple select as opposed to the beefy query.

Re: Is persisting to file secure? (and "cheap shared hosting account")

The sad fact is cheap shared hosting isn't secure. How much do you trust the 100, 500, or 1000 other people who have access to your server? For historic and (ironically) security reasons, shared hosting environments have PHP/Apache running as an unprivileged user (with PHP running as an Apache module). The security rationale here is that if the world-facing Apache process gets compromised, the exploiters only have access to an unprivileged account that can't screw with important system files.

The bad part is, that means whenever you write to a file using PHP, the owner of that file is the same unprivileged Apache user. This is true for every user on the system, which means anyone has read and write access to the files. The theoretical hackers in the above scenario would also have access to the files.

There's also a persistent bad practice in PHP of giving permissions of 777 to directories and files to enable the unprivileged Apache user to write files out, and then leaving the directory or file in that state. That gives anyone on the system read/write access.

Finally, you may think obscurity saves you - "there's no way they can know where my secret cache files are" - but you'd be wrong. Shared hosting sets up users in the same group, and most default file masks will give your group users read permission on files you create. SSH into your shared hosting account sometime, navigate up a directory, and you can usually start browsing through other users' files on the system. This can be used to sniff out writable files.

The solutions aren't pretty. Some hosts will offer a CGI wrapper that lets you run PHP as a CGI. The benefit here is that PHP will run as the owner of the script, which means it will run as you instead of the unprivileged user. Problem averted! New problem! Traditional CGI is slow as molasses in February. There is FastCGI, but FastCGI is finicky and requires constant tuning. Not many shared hosts offer it. If you find one that does, chances are they'll have APC enabled, and may even be able to provide a mechanism for memcached.

A: I had a similar problem, and so wrote a solution: a memory cache written in PHP. It only requires the PHP build to support sockets. Other than that, it is a pure PHP solution and should run just fine on shared hosting. http://code.google.com/p/php-object-cache/

A: What I always do if I have to be able to write is to ensure I'm not writing anywhere I have PHP code. Typically my directory structure looks something like this (it's varied between projects, but this is the general idea):

project/
    app/
    html/
        index.php
    data/
    cache/

app is not writable by the web server (neither is index.php, preferably). cache is writable and used for caching things such as parsed templates and objects. data is possibly writable, depending on need. That is, if users upload data, it goes into data.

The web server gets pointed at project/html, and whatever method is convenient is used to set up index.php as the script to run for every page in the project. You can use mod_rewrite in Apache, or content negotiation (my preference, but often not possible), or whatever other method you like. All your real code lives in app, which is not directly accessible by the web server but should be added to the PHP path.

This has worked quite well for me for several projects. I've even been able to get, for instance, Wikimedia to work with a modified version of this structure.

Oh... and I'd use serialize()/unserialize() to do the caching, although generating PHP code has a certain appeal. All the templating engines I know of generate PHP code to execute, making post-parse very fast.

A: Serializing is quite safe and commonly used. There is an alternative, however, and that is to cache to memory. Check out memcached and APC; they're both free and highly performant. This article on different caching techniques in PHP might also be of interest.

A: If you have access to the database query cache (i.e. MySQL) you could go with serializing your objects and storing them in the DB. The database will take care of holding the query results in memory, so that should be pretty fast.

A: You don't spell out -why- you're trying to cache objects. Are you trying to speed up a slow database query, work around expensive object instantiation, avoid repeated generation of complex pages, maintain application state, or are you just compulsively storing away objects in case of a long winter?

The best solution, given the atrocious limitations of most low-cost shared hosting, is going to depend on what you're trying to accomplish. Going for bottom-of-the-barrel shared hosting means you have to accept that you won't be working with the best tools. The numbers are hard to quantify, but there's a trade-off between hosting costs, site performance and developer time (i.e. fast, cheap or easy).

A: It's in theory possible to store objects in sessions. That might get you past the problem of file writing being disabled. Additionally, you could store the session in a MySQL memory-backed table to speed up the query.

A: Some hosting places may have APC compiled in. That would allow you to store the objects in memory.
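Tying together the writable cache/ directory and the serialize()/unserialize() suggestion above, here is a minimal sketch of a file-based cache; the function names and paths are hypothetical, and you would add whatever expiry and error handling your app needs:

<?php
// Minimal sketch of the serialize()/unserialize() approach described above.

function cache_set($dir, $key, $value)
{
    // md5() keeps the file name filesystem-safe even if the key contains slashes
    $file = $dir . '/' . md5($key) . '.cache';
    // LOCK_EX avoids two requests writing the same file at once
    file_put_contents($file, serialize($value), LOCK_EX);
}

function cache_get($dir, $key, $maxAgeSeconds)
{
    $file = $dir . '/' . md5($key) . '.cache';
    if (!is_file($file) || (time() - filemtime($file)) > $maxAgeSeconds) {
        return null; // missing or stale
    }
    $data = unserialize(file_get_contents($file));
    return $data === false ? null : $data;
}

// Usage: point this at the writable project/cache/ directory mentioned above
cache_set('/path/to/project/cache', 'expensive_query', $rows);
$rows = cache_get('/path/to/project/cache', 'expensive_query', 300);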
{ "language": "en", "url": "https://stackoverflow.com/questions/126917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: How can I ensure a dialog will be modal when opened from an IE BHO?

I have an Internet Explorer Browser Helper Object (BHO), written in C#, and in various places I open forms as modal dialogs. Sometimes this works, but in some cases it doesn't. The case that I can replicate at present is where IE is running JavaScript to open other child windows... I guess it's getting a bit confused somewhere....

The problem is that when I call:

(new MyForm(someParam)).ShowDialog();

the form is not modal, so I can click on the IE window and it gets focus. Since IE is in the middle of running my code it doesn't refresh, and therefore to the user it appears that IE is hanging.

Is there a way of ensuring that the form will be opened as modal, i.e. that it's not possible for the form to be hidden behind IE windows? (I'm using IE7.)

NB: this is a similar question to this post, although that one is using Java. I guess the solution is around correctly passing in the IWin32Window of the IE window, so I'm looking into that.

A: Here's a more concise version of Ryan/Rory's WindowWrapper code:

internal class WindowWrapper : IWin32Window
{
    public IntPtr Handle { get; private set; }

    public WindowWrapper(IntPtr hwnd)
    {
        Handle = hwnd;
    }
}

A: It wasn't my intention to answer my own question, but... it seems that if you pass in the correct IWin32Window to the ShowDialog() method it works fine. The trick is how to get this. Here's how I did it, where 'siteObject' is the object passed in to the SetSite() method of the BHO:

IWebBrowser2 browser = siteObject as IWebBrowser2;
if (browser != null)
    hwnd = new IntPtr(browser.HWND);

(new MyForm(someParam)).ShowDialog(new WindowWrapper(hwnd));

...

// Wrapper class so that we can return an IWin32Window given a hwnd
public class WindowWrapper : System.Windows.Forms.IWin32Window
{
    public WindowWrapper(IntPtr handle)
    {
        _hwnd = handle;
    }

    public IntPtr Handle
    {
        get { return _hwnd; }
    }

    private IntPtr _hwnd;
}

Thanks to Ryan for the WindowWrapper class, although I'd hoped there was a built-in way to do this?

UPDATE: this won't work on IE8 with Protected Mode, since it's accessing an HWND outside what it should be. Instead you'll have to use the HWND of the current tab (or some other solution?), e.g. see the .NET example in this post for a way of getting that.
{ "language": "en", "url": "https://stackoverflow.com/questions/126925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I make the AssemblyInfo version of a C# .NET CF program propagate to the Explorer Properties window?

It seems like if you compile a Visual Studio solution and have a version # in your AssemblyInfo.cs file, that should propagate to, say, the Windows Explorer properties dialog. This way, someone could simply right-click on the *.exe and click 'Properties' to see the version #. Is there a special setting in Visual Studio to make this happen?

example picture http://content.screencast.com/users/Pincas/folders/Jing/media/40442efd-6d74-4d8a-8e77-c1e725e6c150/2008-09-24_0849.png

Edit: I should have mentioned that this is, specifically, for .NET Compact Framework 2.0, which doesn't support AssemblyFileVersion. Is all hope lost?

A: Note that the AssemblyFileVersion attribute is not available under the .NET Compact Framework! See this article from Daniel Mooth for a workaround.

A: NOTE: This answer is for accessing the AssemblyInfo properties within a .NET CF 3.5 application. It does not propagate to the executable's "Properties" inside Windows Explorer (but could be used to write the version to a file, to the console, or to display it in the application).

I know this is a very old question, but here is a solution I found using reflection and LINQ to get the "AssemblyInformationalVersion" (product version in newer Visual Studio projects).

First, I added this to AssemblyInfo.cs (replace the string with whatever you want to use):

[assembly: AssemblyInformationalVersion("1.0.0.0 Alpha")]

Then, you can use this method to pull out the attribute (I placed it inside a static class in the AssemblyInfo.cs file). The method gets an array of all assembly attributes, then selects the first result matching the attribute name (and casts it to the proper type). The InformationalVersion string can then be accessed.

//using System.Reflection;
//using System.Linq;
public static string AssemblyInformationalVersion
{
    get
    {
        AssemblyInformationalVersionAttribute informationalVersion =
            (AssemblyInformationalVersionAttribute)
            (AssemblyInformationalVersionAttribute.GetCustomAttributes(Assembly.GetExecutingAssembly())).Where(
                at => at.GetType().Name == "AssemblyInformationalVersionAttribute").First();
        return informationalVersion.InformationalVersion;
    }
}

To get the normal "AssemblyVersion" attribute I used:

//using System.Reflection;
public static string AssemblyVersion
{
    get
    {
        return Assembly.GetExecutingAssembly().GetName().Version.ToString();
    }
}

A: Does the AssemblyFileVersion attribute help?

A: You need to add another attribute:

[assembly: AssemblyFileVersion("1.0.114.0")]

Note that you still need the AssemblyVersion one as well to correctly identify the assembly to the .NET runtime.

A: The version number does propagate through to the "Version" tab in the properties dialogue, but not through to the summary. Unfortunately VS will not auto-populate the summary information of a file, as that information is metadata attached to the file itself. You can however access and manipulate the summary information yourself by using the free DSO OleDocument Properties Reader from Microsoft.

You can acquire the library from: http://www.microsoft.com/downloads/details.aspx?FamilyId=9BA6FAC6-520B-4A0A-878A-53EC8300C4C2&displaylang=en

Further information on its use can be found at: http://www.developerfusion.co.uk/show/5093/

EDIT: As noted above by pb and Nigel Hawkins, you can get the property to propagate by using the AssemblyFileVersion attribute like:

[assembly: AssemblyFileVersion("1.0.114.0")]

A: I'm not sure that RevisionNumber is the correct field to be looking for. Try Explorer, right-click -> Version tab, and look at the AssemblyVersion field there.

A: In my project we use FileVersion = YYYY.MM.DD.BUILD (e.g., 2008.9.24.1) but the ProductVersion should be major.minor.revision.BUILD. We use the AssemblyInformationalVersion to get around this:

AssemblyVersion="MAJ.MIN.REV.1" --> used by .NET
AssemblyInformationalVersion="MAJ.MIN.REV.XXX" --> used in Explorer's ProductVersion
AssemblyFileVersion="YYYY.MM.DD.XXX" --> used in Explorer's FileVersion
{ "language": "en", "url": "https://stackoverflow.com/questions/126926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Log4Net/C# - Disable default logging

I am using log4net in a C# project. In the production environment I want to disable all logging, but when some fatal error occurs it should log the previous 512 messages to a file. I have successfully configured this, and it is working fine: it logs the messages to a file when a fatal error occurs.

But when I run it from Visual Studio, I can see that all the log messages are written to the Output window, regardless of whether they are FATAL or not. (I can't see these messages when I run from Windows Explorer - my application is a WinForms exe and there is no console window to see the output.)

Is there any way to disable this logging? I need my logs only in the file, and only when some fatal error occurs.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <log4net debug="false">
    <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender">
      <file value="log.txt" />
      <appendToFile value="true" />
      <rollingStyle value="Size" />
      <maxSizeRollBackups value="10" />
      <maximumFileSize value="1MB" />
      <staticLogFileName value="true" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline" />
      </layout>
    </appender>
    <appender name="BufferingForwardingAppender" type="log4net.Appender.BufferingForwardingAppender" >
      <bufferSize value="512" />
      <lossy value="true" />
      <evaluator type="log4net.Core.LevelEvaluator">
        <threshold value="FATAL"/>
      </evaluator>
      <appender-ref ref="RollingFileAppender" />
    </appender>
    <root>
      <level value="DEBUG" />
      <appender-ref ref="BufferingForwardingAppender" />
    </root>
  </log4net>
</configuration>

And this is how I configure it in the static initializer of the Windows Form:

static Window1()
{
    Stream vStream = typeof(Window1).Assembly.GetManifestResourceStream("TestLogNet.log4net.config");
    XmlConfigurator.Configure(vStream);
    BasicConfigurator.Configure();
}

And I have the logger object initialized in the constructor of the WinForm:

logger = LogManager.GetLogger(typeof(Window1));

[language - C#, .NET Framework - 3.5, Visual Studio 2008, log4net 1.2.10, project type - WinForms]

A: Remove the BasicConfigurator.Configure() line. That's what that line does -- adds a ConsoleAppender pointing to Console.Out.

A: Do you still see the messages in Visual Studio if the application is compiled in release mode? It's possible that log4net uses Debug.Write to show the errors anyway. If that's the case then those messages shouldn't appear in release mode.
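Following the first answer, the static initializer from the question would then shrink to something like this sketch (only the XmlConfigurator call is kept):

static Window1()
{
    // Only the embedded XML configuration is applied; the BasicConfigurator.Configure()
    // call that added the console appender has been removed.
    Stream vStream = typeof(Window1).Assembly.GetManifestResourceStream("TestLogNet.log4net.config");
    XmlConfigurator.Configure(vStream);
}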
{ "language": "en", "url": "https://stackoverflow.com/questions/126939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }