Q: Combination of more than one crypto algorithm

I'm considering the following: I have a data stream which I'd like to protect as securely as possible -- does it make any sense to apply, say, AES with some IV, then Blowfish with some IV, and finally AES again with some IV? The encryption/decryption process will be hidden (even protected against debugging), so it won't be easy to guess which crypto methods and which IVs were used (however, I'm aware that the strength of this crypto chain can't depend on this fact, since every protection against debugging is breakable given enough time). I have the computing power for this (the amount of data isn't that big), so the only question is whether it's worth implementing. For example, TripleDES works very similarly, using three keys and an encrypt/decrypt/encrypt scheme, so it probably isn't total nonsense. Another question is how much I decrease the security if I use the same IV for the 1st and 3rd passes, or even the same IV for all three. I welcome any hints on this subject.

A: I don't think you have anything to lose by applying one encryption algorithm on top of another that is very different from the first one. I would, however, be wary of running a second round of the same algorithm on top of the first one, even if you've run another one in between. The interaction between the two runs may open a vulnerability. Having said that, I think you're agonizing too much over the encryption part. Most exposures of data do not happen by breaking an industry-standard encryption algorithm like AES, but through other weaknesses in the system. I would suggest spending more time looking at key management, the handling of unencrypted data, weaknesses in the algorithm's implementation (the possibility of leaking data or keys), and wider system issues -- for instance, what you are doing with data backups.

A: A hacker will always attack the weakest element in a chain, so it helps little to make a strong element even stronger. Cracking AES is already infeasible at a 128-bit key length, and the same goes for Blowfish. Choosing even bigger key lengths makes it even harder, but a 128-bit key has never been cracked to date (and probably will not be within the next 10 or 20 years). So the encryption is probably not the weakest element -- why make it stronger? It is already strong. Think about what else might be the weakest element. The IV? Actually, I wouldn't waste too much time on selecting a great IV or hiding it. The weakest element is usually the encryption key. E.g. if you are encrypting data stored to disk, but this data needs to be read by your application, your application needs to know the IV and it needs to know the encryption key, so both of them need to be within the binary. This is actually the weakest element. Even if you take 20 encryption methods and chain them over your data, the IVs and encryption keys of all 20 need to be in the binary, and if a hacker can extract them, the fact that you used 20 instead of 1 encryption method provides zero additional security. Since I still don't know what the whole process is (who encrypts the data, who decrypts the data, where the data is stored, how it is transported, who needs to know the encryption keys, and so on), it's very hard to say what the weakest element really is, but I doubt that AES or Blowfish encryption itself is your weakest element.
A: I'm not sure about this specific combination, but it's generally a bad idea to mix things like this unless that specific combination has been extensively researched. It's possible the mathematical transformations would actually counteract one another and the end result would be easier to hack. A single pass of either AES or Blowfish should be more than sufficient.

UPDATE: From my comment below... Using TripleDES as an example: think of how much time and effort from the world's best cryptographers went into creating that combination (note that DoubleDES had a vulnerability), and the best they could do is 112 bits of security despite 192 bits of key.

UPDATE 2: I have to agree with Diomidis that AES is extremely unlikely to be the weak link in your system. Virtually every other aspect of your system is more likely to be compromised than AES.

UPDATE 3: Depending on what you're doing with the stream, you may want to just use TLS (the successor to SSL). I recommend Practical Cryptography for more details - it does a pretty good job of addressing a lot of the concerns you'll need to address. Among other things, it discusses stream ciphers, which may or may not be more appropriate than AES (since AES is a block cipher and you specifically mentioned that you had a data stream to encrypt).

A: Who are you trying to protect your data from? Your brother, your competitor, your government, or the aliens? Each of these has a different level at which you could consider the data to be "as secure as possible" within a meaningful budget (of time/cash).

A: I wouldn't rely on obscuring the algorithms you're using. This kind of "security by obscurity" doesn't work for long. Decompiling the code is one way of revealing the crypto you're using, and usually people don't keep secrets like this for long anyway. That's why we have private/public key crypto in the first place.

A: Also, don't waste time obfuscating the algorithm - apply Kerckhoffs's principle, and remember that AES, in and of itself, is used (and acknowledged to be used) in a large number of places where the data needs to be "secure".

A: Damien: you're right, I should have written it more clearly. I'm talking about a competitor; it's for commercial use. So there's a meaningful budget available, but I don't want to implement it without being sure I know why I'm doing it :) Hank: yes, this is what I'm scared of, too. The most supportive source for this idea was the TripleDES example mentioned above. On the other side, when I use one algorithm to encrypt some data and then apply another one, it would be very strange if the 'power' of the whole encryption were less than that of a standalone algorithm. But this doesn't mean it can't be equal... This is the reason why I'm asking for some hints; this isn't my area of knowledge...

A: Diomidis: this is basically my point of view, but my colleague is trying to convince me it really 'boosts' security. My proposal would be to use a stronger encryption key instead of chaining one algorithm after another without any deep knowledge of what I'm doing.

A: @Miro Kropacek - your colleague is trying to add security through voodoo. Instead, try to build something simple that you can analyse for flaws - such as just using AES. I'm guessing it was he (she?) who suggested enhancing the security through protection from debugging too...
A: You can't actually make things less secure if you encrypt more than once with distinct IVs and keys, but the gain in security may be much less than you anticipate: in the example of 2DES, the meet-in-the-middle attack means it's only twice as hard to break, rather than squaring the difficulty. In general, though, it's much safer to stick with a single well-known algorithm and increase the key length if you need more security. Leave composing cryptosystems to the experts (and I don't number myself one of them).

A: Encrypting twice is more secure than encrypting once, even though this may not be clear at first. Intuitively, it appears that encrypting twice with the same algorithm gives no extra protection because an attacker might find a key which decrypts all the way from the final ciphertext back to the plaintext. ... But this is not the case. E.g. I start with plaintext A and encrypt it with key K1 to get B. Then I encrypt B with key K2 to get C. Intuitively, it seems reasonable to assume that there may well be a key, K3, which I could use to encrypt A and get C directly. If this is the case, then an attacker using brute force would eventually stumble upon K3 and be able to decrypt C, with the result that the extra encryption step has not added any security. However, it is highly unlikely that such a key exists (for any modern encryption scheme). (When I say "highly unlikely" here, I mean what a normal person would express using the word "impossible".) Why? Consider the keys as functions which provide a mapping from plaintext to ciphertext. If our keys are all KL bits in length, then there are 2^KL such mappings. However, if I use 2 keys of KL bits each, this gives me (2^KL)^2 mappings. Not all of these can be equivalent to a single-stage encryption. Another advantage of encrypting twice, if 2 different algorithms are used, is that if a vulnerability is found in one of the algorithms, the other algorithm still provides some security. As others have noted, brute-forcing the key is typically a last resort: an attacker will often try to break the process at some other point (e.g. using social engineering to discover the passphrase). Another way of increasing security is to simply use a longer key with one encryption algorithm. ...Feel free to correct my maths!

A: Yes, it can be beneficial, but it is probably overkill in most situations. Also, as Hank mentions, certain combinations can actually weaken your encryption. TrueCrypt provides a number of combined encryption algorithms like AES-Twofish-Serpent. Of course, there's a performance penalty when using them.

A: Changing the algorithm does not improve the quality (unless you expect an algorithm to be broken); it's only about the key/block length and some advantage in obfuscation. Doing it several times is interesting, since even if the first key leaked, the resulting data is not distinguishable from random data. There are block sizes that are processed better on a given platform (e.g. register size). Attacking quality encryption algorithms only works by brute force, and thus depends on the computing power you can spend. This means you can ultimately only increase the probable average time somebody needs to decrypt it. If the data is of real value, they'd better not attack the data but the key holder...

A: I agree with what has been said above. Multiple stages of encryption won't buy you much. If you are using a 'secure' algorithm then it is practically impossible to break. Use AES in some standard streaming mode.
See http://csrc.nist.gov/groups/ST/toolkit/index.html for accepted ciphers and modes. Anything recommended on that site should be sufficiently secure when used properly. If you want to be extra secure, use AES-256, although AES-128 should still be sufficient anyway. The greatest risks are not attacks against the algorithm itself, but rather attacks against key management, or side-channel attacks (which may or may not be a risk depending on the application and usage). If your application is vulnerable to key management attacks or to side-channel attacks, then it really doesn't matter how many levels of encryption you apply. This is where I would focus your efforts.
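To make the "two independent layers with independent keys" idea from the answers above concrete, here is a minimal sketch in Python using the cryptography package (assuming a recent version where the backend argument is optional). ChaCha20 stands in for Blowfish as the structurally different second cipher, since Blowfish support varies between library versions; the key handling is deliberately naive, and, as the answers stress, key management is where the real weakness usually lies. There is also no authentication (MAC) layer, which a production design would need.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def cascade_encrypt(plaintext):
    # Layer 1: AES-256 in CTR mode with its own key and nonce.
    aes_key, aes_nonce = os.urandom(32), os.urandom(16)
    # Layer 2: ChaCha20 with an independent key and nonce.
    cha_key, cha_nonce = os.urandom(32), os.urandom(16)

    enc1 = Cipher(algorithms.AES(aes_key), modes.CTR(aes_nonce)).encryptor()
    inner = enc1.update(plaintext) + enc1.finalize()

    enc2 = Cipher(algorithms.ChaCha20(cha_key, cha_nonce), mode=None).encryptor()
    outer = enc2.update(inner) + enc2.finalize()

    # All four secrets still have to be stored and protected somewhere:
    # the cascade does not remove the key-management problem.
    return outer, (aes_key, aes_nonce), (cha_key, cha_nonce)

def cascade_decrypt(ciphertext, aes_secrets, cha_secrets):
    cha_key, cha_nonce = cha_secrets
    aes_key, aes_nonce = aes_secrets
    dec2 = Cipher(algorithms.ChaCha20(cha_key, cha_nonce), mode=None).decryptor()
    inner = dec2.update(ciphertext) + dec2.finalize()
    dec1 = Cipher(algorithms.AES(aes_key), modes.CTR(aes_nonce)).decryptor()
    return dec1.update(inner) + dec1.finalize()

data = b"some data stream worth protecting"
blob, aes_s, cha_s = cascade_encrypt(data)
assert cascade_decrypt(blob, aes_s, cha_s) == data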
{ "language": "en", "url": "https://stackoverflow.com/questions/120131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Failover & Disaster Recovery

What's the difference between failover and disaster recovery?

A: Failover is more closely linked to a backup procedure. The main difference between the two, from the end client's point of view, is the downtime.

* Failover is expected to have a low downtime (1-2 hours, tops).
* DR can take anything from 6 hours to a day or two.

The other difference is the nature of the environments available after a failover or a DR.

* Failover means the end clients see nothing and can continue their activity (development or production management).
* DR should mean only the production environment is back up. All development environments are down, or seriously degraded.

A: Failover: when one machine fails, another machine (usually in the same location) takes over and resumes service. Disaster recovery: when Godzilla destroys your data center, you have alternative locations to keep providing your service, and protocols/means for the other location to know how to keep delivering the service. Depending on the particular needs of each service, disaster recovery might just be a backup tape in a safe in a different location; in other words, it's just having a defined protocol to recover from disaster. Likewise, failover might just be having a spare backup machine, which makes you go to the data center for it to take over the place of the failed one -- that is, having a defined protocol about what to do in case of machine failure. Summing up, failover answers the question 'what do I do in case a single machine fails?', while disaster recovery answers 'what do I do in case a disaster happens (fire, floods, war, the ISP goes bankrupt, whatever)?' See also: High Availability Deployment Architecture.

A: Since a disaster (like 9/11) can completely destroy a datacenter, does it mean that DR is the process of rebuilding everything for that datacenter?
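To ground the distinction, here is a minimal, illustrative Python sketch of the "defined protocol about what to do in case of machine failure" that failover amounts to: a health check against a primary host with a fall-back to a standby. The URLs are hypothetical placeholders, and in practice this logic normally lives in a load balancer or cluster manager rather than a hand-rolled script; disaster recovery, by contrast, is the documented plan for when neither host (nor the site they sit in) exists any more.

import time
import urllib.request

# Hypothetical health-check endpoints; substitute your own hosts.
PRIMARY = "http://primary.example.com/health"
STANDBY = "http://standby.example.com/health"

def healthy(url, timeout=2):
    """Return True if the host answers its health check with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def active_backend():
    """Failover decision: prefer the primary, fall back to the standby."""
    if healthy(PRIMARY):
        return PRIMARY
    if healthy(STANDBY):
        return STANDBY
    raise RuntimeError("no backend available - time for the disaster recovery plan")

if __name__ == "__main__":
    while True:
        print("active backend:", active_backend())
        time.sleep(5)  # polling interval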
{ "language": "en", "url": "https://stackoverflow.com/questions/120139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: User Monitoring in Rails

We have an app with an extensive admin section. We got a little trigger happy with features (as you do) and are looking for some quick and easy way to monitor "who uses what". Ideally a simple gem that will allow us to track controller/actions on a per-user basis to build up a picture of the features that are used and those that are not. Anything out there that you'd recommend? Thanks, Dom

A: I don't know that there's a popular gem or plugin for this; in the past, I've implemented this sort of auditing as a before_filter in ApplicationController, from memory:

class ApplicationController < ActionController::Base
  before_filter :audit_events

  # ...

  protected

  def audit_events
    local_params = params.clone
    controller = local_params.delete(:controller)
    action = local_params.delete(:action)
    Audit.create(
      :user => current_user,
      :controller => controller,
      :action => action,
      :params => local_params
    )
  end
end

This assumes that you're using something like restful_authentication to get the current user, of course.

EDIT: Depending on how your associations are set up, you'd do even better to replace the Audit.create bit with this:

current_user.audits.create({
  :controller => controller,
  :action => action,
  :params => local_params
})

Scoping creations via ActiveRecord associations == best practice.
{ "language": "en", "url": "https://stackoverflow.com/questions/120149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Long-lived RESTful interactions

We have a discussion going on in my team at the moment, and I'd be interested in other views. Suppose we have a RESTful web service whose role is to annotate documents by applying a variety of analysis algorithms and services. The basic interaction is clear: we have a resource which is the document collection; the client POSTs a new document to the collection, gets back the URI of the new document, then can GET that docURI to get the document back, or GET {docURI}/metadata to see the general metadata, {docURI}/ne for named entities, etc. The problem is that some of the analyses may take a long time to complete. Suppose the client GETs the metadata URI before the analysis is complete, because it wants to be able to show partial or incremental results in the UI. Repeating the GET in the future may yield more results. Solutions we've discussed include:

* keeping the HTTP connection open until all analyses are done (which doesn't seem scalable)
* using content-length and accept-range headers to get incremental content (but we don't know in advance how long the final content will be)
* providing an Atom feed for each resource so the client subscribes to update events rather than simply GETting the resource (seems overly complicated and possibly resource hungry if there are many active documents)
* just having GET return whatever is available at the time (but it still leaves the problem of the client knowing when we're finally done)

[edited to remove reference to idempotency following comments]. Any opinions, or suggestions for alternative ways to handle long-lived or asynchronous interactions in a RESTful architecture? Ian

A: I would implement it the following way:

1) the client requests the metadata
2) the server returns either the actual data (if it's already available) or a NotReady marker
3) the client asks the server when the data will be available (this step can be merged with the previous one)
4) the server returns a time interval (there might be some heuristics based on the total number of executing jobs, etc.)
5) the client waits for the specified period of time and goes to step 1

This way you can provide data to clients as soon as possible. You can shape server load by tweaking the delay interval returned at step 4).

A: "providing an Atom feed for each resource so the client subscribes to update events rather than simply GETting the resource (seems overly complicated and possibly resource hungry if there are many active documents)" -- Have you considered SUP? If polling is an option, why bother with a feed? Why not just have the clients poll the resource itself? Could you cut down on unnecessary polling by including an estimated time for completion of the analyses?

A: Use HTTP 202 Accepted. Also, check out RESTful Web Services - it's where I learned about the above.

A: "just having GET return whatever is available at the time (but it still leaves the problem of the client knowing when we're finally done)." Does a GET that returns different results over time really mean that it is not idempotent? The spec says: "Methods can also have the property of 'idempotence' in that (aside from error or expiration issues) the side-effects of N > 0 identical requests is the same as for a single request." That is, multiple calls to GET may return different results, so long as the calls themselves are side-effect free.
In which case, perhaps your REST method could use the conditional GET and caching mechanisms to indicate when it is done:

* While the analysis is in progress, a GET {docURI}/metadata response could have:
  * an Expires header set to a few seconds in the future.
  * no ETag header.
* Once the analysis is done, responses for that resource have:
  * no Expires header.
  * an ETag. Subsequent requests with the ETag should return 304 Not Modified.

NB you may want to consider the other response headers involved in caching, not just Expires. This "feels" like a RESTful design - you can imagine a web browser doing the right thing as it made consecutive requests to this resource.

A: "just having GET return whatever is available at the time" makes a ton of sense. Except, when they're polling, you don't want to keep returning stuff they already know, or the answers get longer each time they poll. You need them to provide you with their "what I've seen so far" in the GET request. This gives you idempotency: if they ask for chunk 1, they always get the same answer. Once they've seen chunk 1, they can ask for chunk 2. The answer doesn't get bigger; more pieces become available. A "collection-level" GET provides the size of the response, and you have "detail-level" GETs for each piece that's available. Essentially, this is an algorithm like the TCP/IP acknowledgment: when they ack a piece, you send the next piece if there is one; otherwise you send a 200 with nothing new to report. The "problem of the client knowing when we're finally done" is imponderable: they can't know and you can't predict how long it will take. You don't want them doing "busy waiting" -- polling to see if you're done yet -- that's a pretty big load on your server. If they're impatient, you can throttle their requests. You can send them a "check back in x seconds" where x gets progressively bigger. You can even use a Unix-style scheduler algorithm where their score goes down when they poll and up if they don't poll for X seconds. The alternative is some kind of queue where you post the results back to them. To do this, they'd have to provide a URI that you could POST to, to tell them you're done. Or they could use Atom for a lightweight polling architecture. While Atom seems complex -- and it still involves polling -- you provide a minimal Atom answer ("not changed yet") until you're done, when you provide the "new results" so they can do the real heavyweight GET. This is all-or-nothing, instead of the incremental response technique above. You can also think of the "collection-level" GET as your Atom status on the process as a whole.

A: One alternative solution that may or may not be suitable in your case is to add a new endpoint called "AnnotationRequests". POST the document (or a link to it) to the AnnotationRequests endpoint and it should return a Location (e.g. http://example.org/AnnotationRequest/2042) that will allow your client to poll the status of the process. When the process is complete, the "AnnotationRequest" representation can contain a link to the completed document. One nice side effect of this is that you can do a GET on AnnotationRequests to see documents that are currently being processed. It is up to you to decide how long you want to keep the AnnotationRequests around: it may be valuable to keep a complete history of when they were requested, by whom, and how long each took, or you could throw them away periodically.

A: You may want to check out Udi Dahan's nServiceBus.
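As a concrete illustration of the 202 Accepted / Retry-After and conditional GET ideas discussed above, here is a minimal sketch of the metadata resource using Python and Flask. The route, the in-memory JOBS store, and the version-based ETag scheme are all hypothetical stand-ins for whatever the real service uses; the point is only the shape of the responses.

from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical job store: doc_id -> {"done": bool, "version": int, "metadata": dict}
JOBS = {}

@app.route("/documents/<doc_id>/metadata")
def metadata(doc_id):
    job = JOBS.get(doc_id)
    if job is None:
        return jsonify(error="unknown document"), 404

    if not job["done"]:
        # Analysis still running: return whatever is available plus a polling hint.
        resp = jsonify(status="processing", partial=job["metadata"])
        resp.status_code = 202               # Accepted, not yet complete
        resp.headers["Retry-After"] = "5"    # client should check back later
        return resp

    # Finished: expose a stable ETag so repeated polls can get cheap 304s.
    etag = '"meta-%s-v%d"' % (doc_id, job["version"])
    if request.headers.get("If-None-Match") == etag:
        return "", 304

    resp = jsonify(status="complete", metadata=job["metadata"])
    resp.headers["ETag"] = etag
    return resp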
{ "language": "en", "url": "https://stackoverflow.com/questions/120158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Save drop-down history in a Firefox Toolbar I'm doing some testing on Firefox toolbars for the sake of learning and I can't find out any information on how to store the contents of a "search" drop-down inside the user's profile. Is there any tutorial on how to sort this out? A: Since it's taking quite a bit to get an answer I went and investigate it myself. Here is what I've got now. Not all is clear to me but it works. Let's assume you have a <textbox> like this, on your .xul: <textbox id="search_with_history" /> You now have to add some other attributes to enable history. <textbox id="search_with_history" type="autocomplete" autocompletesearch="form-history" autocompletesearchparam="Search-History-Name" ontextentered="Search_Change(param);" enablehistory="true" /> This gives you the minimum to enable a history on that textbox. For some reason, and here is where my ignorance shows, the onTextEntered event function has to have the param to it called "param". I tried "event" and it didn't work. But that alone will not do work by itself. One has to add some Javascript to help with the job. // This is the interface to store the history const HistoryObject = Components.classes["@mozilla.org/satchel/form-history;1"] .getService( Components.interfaces.nsIFormHistory2 || Components.interfaces.nsIFormHistory ); // The above line was broken into 4 for clearness. // If you encounter problems please use only one line. // This function is the one called upon the event of pressing <enter> // on the text box function Search_Change(event) { var terms = document.getElementById('search_with_history').value; HistoryObject.addEntry('Search-History-Name', terms); } This is the absolute minimum to get a history going on. A: Gustavo, I wanted to do the same thing - I found an answer here on the Mozilla support forums. (Edit: I wanted to save my search history out of interest, not because I wanted to learn how the Firefox toolbars work, as you said.) Basically, that data is stored in a sqlite database file called formhistory.sqlite (in your Firefox profile directory). You can use the Firefox extension SQLite Manager to retrieve and export the data: https://addons.mozilla.org/firefox/addon/5817 You can export it as a CSV (comma- separated values) file and open it with Excel or other software. This has the added benefit of also saving the history of data you've entered into other forms/fields on sites, such as the Search field on Google, etc, if this data is of interest to you. A: Gustavo's solution is good, but document.getElemenById('search_with_history').value; is missing a 't' in getElementById
{ "language": "en", "url": "https://stackoverflow.com/questions/120170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to do query auto-completion/suggestions in Lucene? I'm looking for a way to do query auto-completion/suggestions in Lucene. I've Googled around a bit and played around a bit, but all of the examples I've seen seem to be setting up filters in Solr. We don't use Solr and aren't planning to move to using Solr in the near future, and Solr is obviously just wrapping around Lucene anyway, so I imagine there must be a way to do it! I've looked into using EdgeNGramFilter, and I realise that I'd have to run the filter on the index fields and get the tokens out and then compare them against the inputted Query... I'm just struggling to make the connection between the two into a bit of code, so help is much appreciated! To be clear on what I'm looking for (I realised I wasn't being overly clear, sorry) - I'm looking for a solution where when searching for a term, it'd return a list of suggested queries. When typing 'inter' into the search field, it'll come back with a list of suggested queries, such as 'internet', 'international', etc. A: You can use the class PrefixQuery on a "dictionary" index. The class LuceneDictionary could be helpful too. Take a look at this article linked below. It explains how to implement the feature "Did you mean ?" available in modern search engine such as Google. You may not need something as complex as described in the article. However the article explains how to use the Lucene spell package. One way to build a "dictionary" index would be to iterate on a LuceneDictionary. Hope it helps Did You Mean: Lucene? (page 1) Did You Mean: Lucene? (page 2) Did You Mean: Lucene? (page 3) A: my code based on lucene 4.2,may help you import java.io.File; import java.io.IOException; import org.apache.lucene.analysis.miscellaneous.PerFieldAnalyzerWrapper; import org.apache.lucene.index.DirectoryReader; import org.apache.lucene.index.IndexWriterConfig; import org.apache.lucene.index.IndexWriterConfig.OpenMode; import org.apache.lucene.search.spell.Dictionary; import org.apache.lucene.search.spell.LuceneDictionary; import org.apache.lucene.search.spell.PlainTextDictionary; import org.apache.lucene.search.spell.SpellChecker; import org.apache.lucene.store.Directory; import org.apache.lucene.store.FSDirectory; import org.apache.lucene.store.IOContext; import org.apache.lucene.store.RAMDirectory; import org.apache.lucene.util.Version; import org.wltea4pinyin.analyzer.lucene.IKAnalyzer4PinYin; /** * * * @author <a href="mailto:liu.gang@renren-inc.com"></a> * @version 2013-11-25上午11:13:59 */ public class LuceneSpellCheckerDemoService { private static final String INDEX_FILE = "/Users/r/Documents/jar/luke/youtui/index"; private static final String INDEX_FILE_SPELL = "/Users/r/Documents/jar/luke/spell"; private static final String INDEX_FIELD = "app_name_quanpin"; public static void main(String args[]) { try { // PerFieldAnalyzerWrapper wrapper = new PerFieldAnalyzerWrapper(new IKAnalyzer4PinYin( true)); // read index conf IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_42, wrapper); conf.setOpenMode(OpenMode.CREATE_OR_APPEND); // read dictionary Directory directory = FSDirectory.open(new File(INDEX_FILE)); RAMDirectory ramDir = new RAMDirectory(directory, IOContext.READ); DirectoryReader indexReader = DirectoryReader.open(ramDir); Dictionary dic = new LuceneDictionary(indexReader, INDEX_FIELD); SpellChecker sc = new SpellChecker(FSDirectory.open(new File(INDEX_FILE_SPELL))); //sc.indexDictionary(new PlainTextDictionary(new File("myfile.txt")), conf, false); 
sc.indexDictionary(dic, conf, true); String[] strs = sc.suggestSimilar("zhsiwusdazhanjiangshi", 10); for (int i = 0; i < strs.length; i++) { System.out.println(strs[i]); } sc.close(); } catch (IOException e) { e.printStackTrace(); } } } A: In addition to the above (much appreciated) post re: c# conversion, should you be using .NET 3.5 you'll need to include the code for the EdgeNGramTokenFilter - or at least I did - using Lucene 2.9.2 - this filter is missing from the .NET version as far as I could tell. I had to go and find the .NET 4 version online in 2.9.3 and port back - hope this makes the procedure less painful for someone... Edit : Please also note that the array returned by the SuggestTermsFor() function is sorted by count ascending, you'll probably want to reverse it to get the most popular terms first in your list using System.IO; using System.Collections; using Lucene.Net.Analysis; using Lucene.Net.Analysis.Tokenattributes; using Lucene.Net.Util; namespace Lucene.Net.Analysis.NGram { /** * Tokenizes the given token into n-grams of given size(s). * <p> * This {@link TokenFilter} create n-grams from the beginning edge or ending edge of a input token. * </p> */ public class EdgeNGramTokenFilter : TokenFilter { public static Side DEFAULT_SIDE = Side.FRONT; public static int DEFAULT_MAX_GRAM_SIZE = 1; public static int DEFAULT_MIN_GRAM_SIZE = 1; // Replace this with an enum when the Java 1.5 upgrade is made, the impl will be simplified /** Specifies which side of the input the n-gram should be generated from */ public class Side { private string label; /** Get the n-gram from the front of the input */ public static Side FRONT = new Side("front"); /** Get the n-gram from the end of the input */ public static Side BACK = new Side("back"); // Private ctor private Side(string label) { this.label = label; } public string getLabel() { return label; } // Get the appropriate Side from a string public static Side getSide(string sideName) { if (FRONT.getLabel().Equals(sideName)) { return FRONT; } else if (BACK.getLabel().Equals(sideName)) { return BACK; } return null; } } private int minGram; private int maxGram; private Side side; private char[] curTermBuffer; private int curTermLength; private int curGramSize; private int tokStart; private TermAttribute termAtt; private OffsetAttribute offsetAtt; protected EdgeNGramTokenFilter(TokenStream input) : base(input) { this.termAtt = (TermAttribute)AddAttribute(typeof(TermAttribute)); this.offsetAtt = (OffsetAttribute)AddAttribute(typeof(OffsetAttribute)); } /** * Creates EdgeNGramTokenFilter that can generate n-grams in the sizes of the given range * * @param input {@link TokenStream} holding the input to be tokenized * @param side the {@link Side} from which to chop off an n-gram * @param minGram the smallest n-gram to generate * @param maxGram the largest n-gram to generate */ public EdgeNGramTokenFilter(TokenStream input, Side side, int minGram, int maxGram) : base(input) { if (side == null) { throw new System.ArgumentException("sideLabel must be either front or back"); } if (minGram < 1) { throw new System.ArgumentException("minGram must be greater than zero"); } if (minGram > maxGram) { throw new System.ArgumentException("minGram must not be greater than maxGram"); } this.minGram = minGram; this.maxGram = maxGram; this.side = side; this.termAtt = (TermAttribute)AddAttribute(typeof(TermAttribute)); this.offsetAtt = (OffsetAttribute)AddAttribute(typeof(OffsetAttribute)); } /** * Creates EdgeNGramTokenFilter that can generate n-grams in the sizes 
of the given range * * @param input {@link TokenStream} holding the input to be tokenized * @param sideLabel the name of the {@link Side} from which to chop off an n-gram * @param minGram the smallest n-gram to generate * @param maxGram the largest n-gram to generate */ public EdgeNGramTokenFilter(TokenStream input, string sideLabel, int minGram, int maxGram) : this(input, Side.getSide(sideLabel), minGram, maxGram) { } public override bool IncrementToken() { while (true) { if (curTermBuffer == null) { if (!input.IncrementToken()) { return false; } else { curTermBuffer = (char[])termAtt.TermBuffer().Clone(); curTermLength = termAtt.TermLength(); curGramSize = minGram; tokStart = offsetAtt.StartOffset(); } } if (curGramSize <= maxGram) { if (!(curGramSize > curTermLength // if the remaining input is too short, we can't generate any n-grams || curGramSize > maxGram)) { // if we have hit the end of our n-gram size range, quit // grab gramSize chars from front or back int start = side == Side.FRONT ? 0 : curTermLength - curGramSize; int end = start + curGramSize; ClearAttributes(); offsetAtt.SetOffset(tokStart + start, tokStart + end); termAtt.SetTermBuffer(curTermBuffer, start, curGramSize); curGramSize++; return true; } } curTermBuffer = null; } } public override Token Next(Token reusableToken) { return base.Next(reusableToken); } public override Token Next() { return base.Next(); } public override void Reset() { base.Reset(); curTermBuffer = null; } } } A: Based on @Alexandre Victoor's answer, I wrote a little class based on the Lucene Spellchecker in the contrib package (and using the LuceneDictionary included in it) that does exactly what I want. This allows re-indexing from a single source index with a single field, and provides suggestions for terms. Results are sorted by the number of matching documents with that term in the original index, so more popular terms appear first. Seems to work pretty well :) import java.io.IOException; import java.io.Reader; import java.util.ArrayList; import java.util.HashMap; import java.util.Iterator; import java.util.List; import java.util.Map; import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.analysis.ISOLatin1AccentFilter; import org.apache.lucene.analysis.LowerCaseFilter; import org.apache.lucene.analysis.StopFilter; import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.analysis.ngram.EdgeNGramTokenFilter; import org.apache.lucene.analysis.ngram.EdgeNGramTokenFilter.Side; import org.apache.lucene.analysis.standard.StandardFilter; import org.apache.lucene.analysis.standard.StandardTokenizer; import org.apache.lucene.document.Document; import org.apache.lucene.document.Field; import org.apache.lucene.index.CorruptIndexException; import org.apache.lucene.index.IndexReader; import org.apache.lucene.index.IndexWriter; import org.apache.lucene.index.Term; import org.apache.lucene.search.IndexSearcher; import org.apache.lucene.search.Query; import org.apache.lucene.search.ScoreDoc; import org.apache.lucene.search.Sort; import org.apache.lucene.search.TermQuery; import org.apache.lucene.search.TopDocs; import org.apache.lucene.search.spell.LuceneDictionary; import org.apache.lucene.store.Directory; import org.apache.lucene.store.FSDirectory; /** * Search term auto-completer, works for single terms (so use on the last term * of the query). * <p> * Returns more popular terms first. 
* * @author Mat Mannion, M.Mannion@warwick.ac.uk */ public final class Autocompleter { private static final String GRAMMED_WORDS_FIELD = "words"; private static final String SOURCE_WORD_FIELD = "sourceWord"; private static final String COUNT_FIELD = "count"; private static final String[] ENGLISH_STOP_WORDS = { "a", "an", "and", "are", "as", "at", "be", "but", "by", "for", "i", "if", "in", "into", "is", "no", "not", "of", "on", "or", "s", "such", "t", "that", "the", "their", "then", "there", "these", "they", "this", "to", "was", "will", "with" }; private final Directory autoCompleteDirectory; private IndexReader autoCompleteReader; private IndexSearcher autoCompleteSearcher; public Autocompleter(String autoCompleteDir) throws IOException { this.autoCompleteDirectory = FSDirectory.getDirectory(autoCompleteDir, null); reOpenReader(); } public List<String> suggestTermsFor(String term) throws IOException { // get the top 5 terms for query Query query = new TermQuery(new Term(GRAMMED_WORDS_FIELD, term)); Sort sort = new Sort(COUNT_FIELD, true); TopDocs docs = autoCompleteSearcher.search(query, null, 5, sort); List<String> suggestions = new ArrayList<String>(); for (ScoreDoc doc : docs.scoreDocs) { suggestions.add(autoCompleteReader.document(doc.doc).get( SOURCE_WORD_FIELD)); } return suggestions; } @SuppressWarnings("unchecked") public void reIndex(Directory sourceDirectory, String fieldToAutocomplete) throws CorruptIndexException, IOException { // build a dictionary (from the spell package) IndexReader sourceReader = IndexReader.open(sourceDirectory); LuceneDictionary dict = new LuceneDictionary(sourceReader, fieldToAutocomplete); // code from // org.apache.lucene.search.spell.SpellChecker.indexDictionary( // Dictionary) IndexReader.unlock(autoCompleteDirectory); // use a custom analyzer so we can do EdgeNGramFiltering IndexWriter writer = new IndexWriter(autoCompleteDirectory, new Analyzer() { public TokenStream tokenStream(String fieldName, Reader reader) { TokenStream result = new StandardTokenizer(reader); result = new StandardFilter(result); result = new LowerCaseFilter(result); result = new ISOLatin1AccentFilter(result); result = new StopFilter(result, ENGLISH_STOP_WORDS); result = new EdgeNGramTokenFilter( result, Side.FRONT,1, 20); return result; } }, true); writer.setMergeFactor(300); writer.setMaxBufferedDocs(150); // go through every word, storing the original word (incl. n-grams) // and the number of times it occurs Map<String, Integer> wordsMap = new HashMap<String, Integer>(); Iterator<String> iter = (Iterator<String>) dict.getWordsIterator(); while (iter.hasNext()) { String word = iter.next(); int len = word.length(); if (len < 3) { continue; // too short we bail but "too long" is fine... 
} if (wordsMap.containsKey(word)) { throw new IllegalStateException( "This should never happen in Lucene 2.3.2"); // wordsMap.put(word, wordsMap.get(word) + 1); } else { // use the number of documents this word appears in wordsMap.put(word, sourceReader.docFreq(new Term( fieldToAutocomplete, word))); } } for (String word : wordsMap.keySet()) { // ok index the word Document doc = new Document(); doc.add(new Field(SOURCE_WORD_FIELD, word, Field.Store.YES, Field.Index.UN_TOKENIZED)); // orig term doc.add(new Field(GRAMMED_WORDS_FIELD, word, Field.Store.YES, Field.Index.TOKENIZED)); // grammed doc.add(new Field(COUNT_FIELD, Integer.toString(wordsMap.get(word)), Field.Store.NO, Field.Index.UN_TOKENIZED)); // count writer.addDocument(doc); } sourceReader.close(); // close writer writer.optimize(); writer.close(); // re-open our reader reOpenReader(); } private void reOpenReader() throws CorruptIndexException, IOException { if (autoCompleteReader == null) { autoCompleteReader = IndexReader.open(autoCompleteDirectory); } else { autoCompleteReader.reopen(); } autoCompleteSearcher = new IndexSearcher(autoCompleteReader); } public static void main(String[] args) throws Exception { Autocompleter autocomplete = new Autocompleter("/index/autocomplete"); // run this to re-index from the current index, shouldn't need to do // this very often // autocomplete.reIndex(FSDirectory.getDirectory("/index/live", null), // "content"); String term = "steve"; System.out.println(autocomplete.suggestTermsFor(term)); // prints [steve, steven, stevens, stevenson, stevenage] } } A: Here's a transliteration of Mat's implementation into C# for Lucene.NET, along with a snippet for wiring a text box using jQuery's autocomplete feature. <input id="search-input" name="query" placeholder="Search database." type="text" /> ... JQuery Autocomplete: // don't navigate away from the field when pressing tab on a selected item $( "#search-input" ).keydown(function (event) { if (event.keyCode === $.ui.keyCode.TAB && $(this).data("autocomplete").menu.active) { event.preventDefault(); } }); $( "#search-input" ).autocomplete({ source: '@Url.Action("SuggestTerms")', // <-- ASP.NET MVC Razor syntax minLength: 2, delay: 500, focus: function () { // prevent value inserted on focus return false; }, select: function (event, ui) { var terms = this.value.split(/\s+/); terms.pop(); // remove dropdown item terms.push(ui.item.value.trim()); // add completed item this.value = terms.join(" "); return false; }, }); ... here's the ASP.NET MVC Controller code: // // GET: /MyApp/SuggestTerms?term=something public JsonResult SuggestTerms(string term) { if (string.IsNullOrWhiteSpace(term)) return Json(new string[] {}); term = term.Split().Last(); // Fetch suggestions string[] suggestions = SearchSvc.SuggestTermsFor(term).ToArray(); return Json(suggestions, JsonRequestBehavior.AllowGet); } ... and here's Mat's code in C#: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Lucene.Net.Store; using Lucene.Net.Index; using Lucene.Net.Search; using SpellChecker.Net.Search.Spell; using Lucene.Net.Analysis; using Lucene.Net.Analysis.Standard; using Lucene.Net.Analysis.NGram; using Lucene.Net.Documents; namespace Cipher.Services { /// <summary> /// Search term auto-completer, works for single terms (so use on the last term of the query). /// Returns more popular terms first. 
/// <br/> /// Author: Mat Mannion, M.Mannion@warwick.ac.uk /// <seealso cref="http://stackoverflow.com/questions/120180/how-to-do-query-auto-completion-suggestions-in-lucene"/> /// </summary> /// public class SearchAutoComplete { public int MaxResults { get; set; } private class AutoCompleteAnalyzer : Analyzer { public override TokenStream TokenStream(string fieldName, System.IO.TextReader reader) { TokenStream result = new StandardTokenizer(kLuceneVersion, reader); result = new StandardFilter(result); result = new LowerCaseFilter(result); result = new ASCIIFoldingFilter(result); result = new StopFilter(false, result, StopFilter.MakeStopSet(kEnglishStopWords)); result = new EdgeNGramTokenFilter( result, Lucene.Net.Analysis.NGram.EdgeNGramTokenFilter.DEFAULT_SIDE,1, 20); return result; } } private static readonly Lucene.Net.Util.Version kLuceneVersion = Lucene.Net.Util.Version.LUCENE_29; private static readonly String kGrammedWordsField = "words"; private static readonly String kSourceWordField = "sourceWord"; private static readonly String kCountField = "count"; private static readonly String[] kEnglishStopWords = { "a", "an", "and", "are", "as", "at", "be", "but", "by", "for", "i", "if", "in", "into", "is", "no", "not", "of", "on", "or", "s", "such", "t", "that", "the", "their", "then", "there", "these", "they", "this", "to", "was", "will", "with" }; private readonly Directory m_directory; private IndexReader m_reader; private IndexSearcher m_searcher; public SearchAutoComplete(string autoCompleteDir) : this(FSDirectory.Open(new System.IO.DirectoryInfo(autoCompleteDir))) { } public SearchAutoComplete(Directory autoCompleteDir, int maxResults = 8) { this.m_directory = autoCompleteDir; MaxResults = maxResults; ReplaceSearcher(); } /// <summary> /// Find terms matching the given partial word that appear in the highest number of documents.</summary> /// <param name="term">A word or part of a word</param> /// <returns>A list of suggested completions</returns> public IEnumerable<String> SuggestTermsFor(string term) { if (m_searcher == null) return new string[] { }; // get the top terms for query Query query = new TermQuery(new Term(kGrammedWordsField, term.ToLower())); Sort sort = new Sort(new SortField(kCountField, SortField.INT)); TopDocs docs = m_searcher.Search(query, null, MaxResults, sort); string[] suggestions = docs.ScoreDocs.Select(doc => m_reader.Document(doc.Doc).Get(kSourceWordField)).ToArray(); return suggestions; } /// <summary> /// Open the index in the given directory and create a new index of word frequency for the /// given index.</summary> /// <param name="sourceDirectory">Directory containing the index to count words in.</param> /// <param name="fieldToAutocomplete">The field in the index that should be analyzed.</param> public void BuildAutoCompleteIndex(Directory sourceDirectory, String fieldToAutocomplete) { // build a dictionary (from the spell package) using (IndexReader sourceReader = IndexReader.Open(sourceDirectory, true)) { LuceneDictionary dict = new LuceneDictionary(sourceReader, fieldToAutocomplete); // code from // org.apache.lucene.search.spell.SpellChecker.indexDictionary( // Dictionary) //IndexWriter.Unlock(m_directory); // use a custom analyzer so we can do EdgeNGramFiltering var analyzer = new AutoCompleteAnalyzer(); using (var writer = new IndexWriter(m_directory, analyzer, true, IndexWriter.MaxFieldLength.LIMITED)) { writer.MergeFactor = 300; writer.SetMaxBufferedDocs(150); // go through every word, storing the original word (incl. 
n-grams) // and the number of times it occurs foreach (string word in dict) { if (word.Length < 3) continue; // too short we bail but "too long" is fine... // ok index the word // use the number of documents this word appears in int freq = sourceReader.DocFreq(new Term(fieldToAutocomplete, word)); var doc = MakeDocument(fieldToAutocomplete, word, freq); writer.AddDocument(doc); } writer.Optimize(); } } // re-open our reader ReplaceSearcher(); } private static Document MakeDocument(String fieldToAutocomplete, string word, int frequency) { var doc = new Document(); doc.Add(new Field(kSourceWordField, word, Field.Store.YES, Field.Index.NOT_ANALYZED)); // orig term doc.Add(new Field(kGrammedWordsField, word, Field.Store.YES, Field.Index.ANALYZED)); // grammed doc.Add(new Field(kCountField, frequency.ToString(), Field.Store.NO, Field.Index.NOT_ANALYZED)); // count return doc; } private void ReplaceSearcher() { if (IndexReader.IndexExists(m_directory)) { if (m_reader == null) m_reader = IndexReader.Open(m_directory, true); else m_reader.Reopen(); m_searcher = new IndexSearcher(m_reader); } else { m_searcher = null; } } } }
{ "language": "en", "url": "https://stackoverflow.com/questions/120180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51" }
Q: XSS Blacklist - Is anyone aware of a reasonable one?

As a temporary quick fix to mitigate the major risk while working on the permanent fix for an XSS vulnerability in a very large code base, I'm looking for a pre-existing XSS prevention blacklist that does a reasonable job of protecting against XSS. Preferably a set of regular expressions. I'm aware of plenty of cheat sheets for testing and smoke tests etc.; what I'm looking for is pre-tuned regexps for blocking the attacks. I am fully aware that the best way is output escaping, or whitelisting if you need some markup from users. But, with the size of the code base, we need something in quickly to reduce the immediate footprint of the vulnerability and raise the bar whilst working on the real solution. Is anyone aware of a good set?

A: I realise this may not be a direct answer to your question, but ASP.NET developers in a similar situation may find this useful: Microsoft Anti-Cross Site Scripting Library V1.5. This library differs from most encoding libraries in that it uses the "principle of inclusions" technique to provide protection against XSS attacks. This approach works by first defining a valid or allowable set of characters, and encoding anything outside this set (invalid characters or potential attacks). The principle of inclusions approach provides a high degree of protection against XSS attacks and is suitable for Web applications with high security requirements.

A: Here is one: http://ha.ckers.org/xss.html but I don't know if it's complete. CAL9000 is another list where you could find something like that.

A: Not sure if you're using PHP, but if so you should look at HTML Purifier. It's extremely simple to use; just add a call to the purify() method where you accept your input, or where you output it. Its whitelist-based approach blocks every XSS attack I've tested it against.

A: The cheat sheet at ha.ckers.org/xss.html is not complete. A colleague of mine found one or two attacks that aren't on there. RSnake does list many of the regex filters each attack string gets past. Use a few and you may close enough holes. It would be a good starting place, if nothing else, to know what kinds of things you need to be looking for. Use it as a place to start, and make sure the scripts you write escape enough characters to render any attacks your blacklists miss benign. What good is XSS injection if no browser renders it? In reality, escaping enough of the right characters goes most of the way here. It's quite hard to inject XSS into a script that turns every < into a &lt; and escapes " into &quot;.

A: If you run Apache you could use mod_security to close some holes. At least it would provide you with a tool (the console or a plain logfile) to monitor the traffic and to react before it's too late. Also, gotroot.com has a couple of interesting rules for web applications. Then again, I don't really know what kind of holes you are closing.

A: What you want is an IDS (intrusion detection system). If you're using PHP, there is PHPIDS. It's maintained and tested by an excellent hacker community. They have been throwing all kinds of things at it to improve the filters, well beyond RSnake's original list. There was also a .NET port somewhere; not sure if it's still maintained.
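Since several answers come back to "escape enough of the right characters on output", here is a small, language-neutral illustration of that point in Python (the question's code base may well be PHP, where htmlspecialchars() or HTML Purifier plays the same role); the render_comment helper is a made-up example, not part of any library:

import html

def render_comment(user_input):
    # Escape on output: <, >, &, and quotes become entities, so any markup
    # that slips past an input blacklist renders as inert text in the browser.
    return "<p>" + html.escape(user_input, quote=True) + "</p>"

print(render_comment('<script>alert("xss")</script>'))
# -> <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>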
{ "language": "en", "url": "https://stackoverflow.com/questions/120189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What are the steps needed to create and publish a rubygem of your own? So you've created a ruby library. How do you create and publish your rubygem? And what common pitfalls and gotchas are there pertaining to creating and publishing rubygems? A: There are several tools to help you build your own gems. hoe and newgem are the best-known, and have a lot of good qualities. However, hoe adds itself as a dependency to your gem, and newgem has become a very large tool, one that I find unwieldy when I want to create and deploy a gem quickly. My favorite tool is Mr Bones by Tim Pease. It’s lightweight, featureful, and does not add dependencies to your project. To create a project with it, you just run bones <my_project_name> on the command line, and a skeleton is built for you, complete with a lib directory for your code, a bin directory for your tools, and a test directory. The configuration is in a Rakefile, and it’s clear and concise. Here's the configuration for a project I did a few months ago: load 'tasks/setup.rb' ensure_in_path 'lib' require 'friend-feed' task :default => 'test' PROJ.name = 'friend-feed' PROJ.authors = 'Clinton R. Nixon' PROJ.email = 'crnixon@gmail.com' PROJ.url = 'friend-feed.rubyforge.org' PROJ.rubyforge_name = 'friend-feed' PROJ.dependencies = ['json'] PROJ.version = FriendFeed::VERSION PROJ.exclude = %w(.git pkg) Mr Bones has the standard set of features you’d expect: you can use it to package up gems and tarfiles of your library, as well as release it on RubyForge and deploy your documentation there. Its killer feature, though, is its ability to freeze its skeleton in your home directory. When you run bones --freeze, a directory named .mrbones is copied into your home directory. You can edit the files in there to make a skeleton for your gems that works the way you work, and from then on, when you run bones to create a new gem, it will use your personal gem skeleton. You can unfreeze Mr Bones by running bones --unfreeze and your skeleton will be backed up, and the default skeleton will be used again. (Editorial note: I wrote a blog post about this several months ago, and most of this is copied from it.) A: I recommend github as a place to start, especially for open source projects. A: And try a google search as well... Very first search result for me... * *http://www.5dollarwhitebox.org/drupal/creating_a_rubygem_package A: You may also want to checkout the Hoe gem, which can automate the gem creation process. See: http://nubyonrails.com/articles/tutorial-publishing-rubygems-with-hoe A: I actually wrote a tutorial on exactly this, and I wrote it as I was learning. It's more focused on the game I'd written than a library. Also, it assumes you want to build the gem via rake rather than on your own: * *Part 1 on how to create the gem. *Part 2 on how to run binaries installed by your gem, and get to resources. A: hoe no longer adds itself as a dependency as off rubygems 1.2. Its rake tasks are focused on deployment of the rubygem to rubyforge. If you just want to serve the gem from github I think there some new hoe-esque rake task tools to help.
{ "language": "en", "url": "https://stackoverflow.com/questions/120191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Updating server-side progress on Rails application I want to upload and then process a file in a Ruby on Rails app. The file upload is usually quite short, but the server-side processing can take some time (more than 20 seconds) so I want to give the user some indicator - something better than a meaningless 'processing...' screen. I'm trying to use the following code in the view <%= periodically_call_remote(:url => {:action => 'progress_monitor', :controller => 'files'}, :frequency => '5', :update => "setProgress('progressBar','5')" ) %> The content of the :update parameter is the javascript I want to run every 5 seconds and the following code is in the files controller def progress_monitor render :text => 'whatever' end Eventually the progress_monitor method will return the current progress as an integer (% complete) and that will be passed into the 'setProgress' JavaScript code (that will update an on screen element) However, I'm struggling to get a correct response from from the server that can then be passed into JavaScript. Can anyone help, or am I approaching this the wrong way? There is a follow up question to this, I originally updated this question but the update was sufficiently different to warrant a new question, here. A: periodically_call_remote() updates a div. It won't call your JavaScript function. I'm no JavaScript guru, but to solve your problem, you should do your own xmlhttp call. If I were you, I'd use prototype's AJAX request http://www.prototypejs.org/api/ajax/request and use JavaScript's settimeout or setinterval to do the periodic polling http://www.elated.com/articles/javascript-timers-with-settimeout-and-setinterval/ hope this helps cos actually I've encountered the same prb too =) A: The :update option should only list the div you want to update, NOT have Javascript that you want to evaluate. The Rails helpers are still very good for this situation, there's no need to write much custom JS. If you wish to execute JS on return, an easy way would be to render an RJS template. If you wish to do it with pure HTML, simply render a view (or partial, or return directly from the render method with the HTML of the progress bar) and put the div of the progress bar in the :update option of the periodically_call_remote method.
{ "language": "en", "url": "https://stackoverflow.com/questions/120201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What is the best replacement for Windows' rand_s in Linux/POSIX?

The problem is not about randomness itself (we have rand), but about a cryptographically secure PRNG. What can be used on Linux, or ideally POSIX? Does NSS have something useful? Clarification: I know about /dev/random, but it may exhaust its entropy pool. And I'm not sure whether /dev/urandom is guaranteed to be cryptographically secure.

A: Use /dev/random (requires user input, e.g. mouse movements) or /dev/urandom. The latter has an entropy pool and doesn't require any user input unless the pool is empty. You can read from the pool like this:

char buf[100];
FILE *fp;
if ((fp = fopen("/dev/urandom", "r"))) {
    fread(buf, 1, sizeof buf, fp);
    fclose(fp);
}

Or something like that.

A: From Wikipedia (my italics): A counterpart to /dev/random is /dev/urandom ("unlocked" random source) which reuses the internal pool to produce more pseudo-random bits. This means that the call will not block, but the output may contain less entropy than the corresponding read from /dev/random. The intent is to serve as a cryptographically secure pseudorandom number generator. This may be used for less secure applications.

A: The /dev/random device is intended to be a source of cryptographically secure bits.
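For what it's worth, if the code consuming the random bytes is in a higher-level language rather than C, the same kernel CSPRNG is usually already wrapped for you. For example, Python's os.urandom and the secrets module read from the system source (/dev/urandom, or the getrandom() system call on modern Linux), so a short sketch looks like this:

import os
import secrets

key = os.urandom(32)             # 32 bytes straight from the OS CSPRNG
token = secrets.token_hex(16)    # 16 random bytes, hex-encoded
nonce = secrets.token_bytes(12)  # raw random bytes

print(len(key), token, len(nonce))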
{ "language": "en", "url": "https://stackoverflow.com/questions/120206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Virtual Machine Benchmarks I am using VMware Server 1.0.7 on Windows XP SP3 at the moment to test software in virtual machines. I have also tried Microsoft Virtual PC (do not remeber the version, could be 2004 or 2007) and VMware was way faster at the time. I have heard of Parallels and VirtualBox but I did not have the time to try them out. Anybody has some benchmarks how fast is each of them (or some other)? I searched for benchmarks on the web, but found nothing useful. I am looking primarily for free software, but if it is really better than free ones I would pay for it. Also, if you are using (or know of) a good virtualization software but have no benchmarks for it, please let me know. A: You'll get best performance if your hardware supports hardware virtualization, such as AMD's AMD-V or Intel's VT, and you enable this feature on the computer and in your virtualization software. For Microsoft solutions, you need at least Virtual PC 2007 or Virtual Server 2005 R2 SP1, or Hyper-V on Windows Server 2008 (I don't expect you'll rebuild your system just to run Hyper-V, but I thought I'd mention it). Subjectively I haven't noticed any difference between Virtual PC and VMware Workstation performance; I'm using VMware now as it supports USB virtualization, which Virtual PC doesn't. You also generally need to install appropriate custom, virtualization-aware, drivers in the guest OS, as the standard drivers are expecting to talk to real hardware. In Virtual PC and Server these are called Additions, in VMware they are VMware Tools. A: From my experience of Parallels and VMware (on the PC and more extensively on the Mac) the difference between any 2 competing versions of the software is usually quite small and often 'reversed' in the next releases. I never found Parallels to be much faster (or slower) than VMware - it often would be a case of the state of the VM I was running, the host machine itself and the app(s) I was running within the VM. If VMWare brought out a new release which did something faster, you could be sure that Parallels would improve their performance in that area in the next release, too. In the end I settled on VMWare Fusion and the key reason for this was just that it played nicely with VMware Workstation on the PC. I have trouble taking Parallels VMs from the Mac to the PC and back again, and this worked fine on VMware. Finally, though this is less of a concern, I was unhappy that sometimes it felt as if Parallels would release a version without proper regression testing - you'd get the up-to-date version and find that networking was suddenly unexplicably broken until they released another patch a few days later. I doubt this is still the case but VMware always felt a little more 'in control' and professional to me. I'd go for a solution that you can get running in a stable fashion on your PC, that is compatible with your other requirements (such as your co-workers' platforms and your overall budget). You can waste your lifetime trying to measure which one is faster at any given task! One other thing - it's worth checking the documentation that comes with the software, and any forums etc, before making judgements about performance. For instance, in my experience throwing huge amounts of ram at your VM (at the expense of free ram in the host system) does NOT automatically make it faster; better to split the ram up evenly, and certainly keep an eye on any recommended figure. In VMware, that recommended figure is a good guide. 
A: Anandtech has some great info on virtualization. Although it doesn't include actual benchmarks, it provides great insight into why it is so difficult to do proper virtualization benchmarks. I cannot suggest a specific product, because it depends very much on your requirements.
{ "language": "en", "url": "https://stackoverflow.com/questions/120212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Windows 2003 Standard IIS Remote Admin - Can't login I have just installed Windows Server 2003 Standard Edition and therefore IIS6 (comes as standard). I have also installed the Windows component that enables the administration of IIS from the browser (https://server:8098/). The problem I have is that I have to log in to this tool, but the Server Administrator username and password do not let me in. The Windows documentation on this tool (http://support.microsoft.com/kb/324282) says "You are prompted for a user name and password that exist on the Web Server", but none of the standard user accounts on the server let me in. Thanks,
A: Here are a couple of ideas: * *Take a look at the security log on the server for clues. *Look at the "Directory Security" tab on the properties of the admin site and ensure "Enable anonymous access" is unchecked. You will need to use "Integrated Windows authentication" or "Basic authentication". If you use Basic auth then the password is sent across the network base64 encoded - you will want to use SSL to encrypt it. *Is there a specific requirement to use the web tools? You can download Internet Information Services (IIS) 6.0 Manager for Windows XP from Microsoft and run it from a client.
A: I'm not so sure now, I haven't set up a Win 2003 box in a while, but as far as I remember you have to activate Remote Desktop first and then you can use an RDP client to access the server. I recommend that over the ActiveX RDP client.
A: Is the server part of a domain? It may be defaulting to a domain username/password combo rather than a local username/password. Try "server.domain.local\administrator" or "administrator@server.domain.local".
A: I would check the permissions on that site in IIS - make sure you are using an account that is a member of a group specifically assigned permissions. I understand that the built-in admin account is not working, but it's possible the site permissions have changed, removing that account or group - hope that makes some sort of sense.
A: This might be unlikely, but are you trying to use a username that has a blank password? Windows restricts remote access when using those accounts. If that's the case, you can check the Group Policy (gpedit.msc for the local computer, or the one for domains if it's in a domain): Computer Configuration -> Windows Settings -> Security Settings -> Local Policies -> Security Options -> "Accounts: Limit local account use of blank passwords to console logon only"
{ "language": "en", "url": "https://stackoverflow.com/questions/120226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: PHP: running scheduled jobs (cron jobs) I have a site on my webhotel I would like to run some scheduled tasks on. What methods of achieving this would you recommend? What I’ve thought out so far is having a script included in the top of every page and then let this script check whether it’s time to run this job or not. This is just a quick example of what I was thinking about: if ($alreadyDone == 0 && time() > $timeToRunMaintainance) { runTask(); $timeToRunMaintainance = time() + $interval; } Anything else I should take into consideration or is there a better method than this? A: Have you ever looked ATrigger? The PHP library is also available to start creating scheduled tasks without any overhead. Disclaimer: I'm among their team. A: if you're wondering how to actually run your PHP script from cron, there are two options: Call the PHP interpreter directly (i.e., "php /foo/myscript.php"), or use lynx (lynx http://mywebsite.com/myscript.php). Which one you choose depends mostly on how your script needs its environment configured - the paths and file access permissions will be different depending on whether you call it through the shell or the web browser. I'd recommend using lynx. One side effect is that you get an e-mail every time it runs. To get around this, I make my cron PHP scripts output nothing (and it has to be nothing, not even whitespace) if they complete successfully, and an error message if they fail. I then call them using a small PHP script from cron. This way, I only get an e-mail if it fails. This is basically the same as the lynx method, except my shell script makes the HTTP request and not lynx. Call this script "docron" or something (remember to chmod +x), and then use the command in your crontab: "docron http://mydomain.com/myscript.php". It e-mails you the output of the page as an HTML e-mail, if the page returns something. #!/usr/bin/php <?php $h = @file_get_contents($_SERVER['argv'][1]); if ($h === false) { $h = "<b>Failed to open file</b>: " . $_SERVER['argv'][1]; } if ($h != '') { @mail("cron@mydomain.com", $_SERVER['argv']['1'], $h, "From: cron@mydomain.com\nMIME-Version: 1.0\nContent-type: text/html; charset=iso-8859-1"); } ?> A: If you want to avoid setting up cron jobs and whatnot (though I'd suggest it's a better method), the solution you've provided is pretty good. On a number of projects, I've had the PHP script itself do the check to see whether it's time to run the update. The down-side (okay, one of the down sides) is that if no one is using the app during a certain period then the script won't run. The up-side is that if no one is using the app during a certain period then the script won't run. The tasks I've got it set up to do are things like "update a cache file", "do a daily backup" and whatnot. If someone isn't using the app, then you aren't going to need updated cache files, nor are there going to be any database changes to backup. The only modification to your method which I'd suggest is that you only run those checks when someone successfully logs in. You don't need to check on every page load. A: Cron is a general purpose solution for scheduling problems. But when you go big and schedules go high in frequency, there can be reliability/overlapping issues. If you see such problems, consider something like supervise or more sophisticated monit. A: That's what cronjobs are made for. man crontab assuming you are running a linux server. 
If you don't have shell access or no way to set up cron jobs, there are free services that set up cron jobs on external servers and ping one of your URLs.
A: If you are using cPanel you should add something like this: /usr/local/bin/php -q /home/yoursite/public_html/yourfile.php
A: I'm answering this now because no-one seems to have mentioned this exact solution. On a site I'm currently working on, we've set up a cron job using cPanel, but instead of running the PHP interpreter directly (because we're using CodeIgniter and our code is mapped to a controller function, this probably isn't a great idea) we're using wget. wget -q -O cron_job.log http://somehost/controller/method -q is so that wget won't generate any output (so you won't keep getting emails). -O cron_job.log will save the contents of whatever your controller generates to a log file (overwritten each time so it won't keep growing). I've found this to be the easiest way of getting 'proper' cron working.
A: If you have a cPanel host, you can add cron jobs through the web interface. Go to Advanced -> Cron Jobs and use the non-advanced form to set up the cron frequency. You want a command like this: /usr/bin/php /path/to/your/php/script.php
A: The method you are using is fine if you don't want to use cron jobs or anything external, but these checks can be heavy to run on every page load. First, some cron jobs can probably be replaced entirely. For example, if you have a counter for how many users have registered on your website, you can simply update this number when a user registers, so you don't need a cron job or any scheduled task for this. If you do want to use scheduled tasks, I suggest you use the method you are using right now, but with a little modification. If your site gets enough hits in a day, you can make the tasks (or the task-check function) run for only 1% or maybe 0.01% of the hits instead of all of them; the percentage you should use depends on how many page hits you get and how often you want the task to run. So simply add a randomizer to achieve this. You could use a function like this: if(rand(1, 100) <= 1) { // 1, 100 is used to generate a number between 1 and 100. 1 is for one percent. // Run the task system }
A: I would outsource the cron jobs with www.guardiano.pm and call a URL every X minutes. When your URL (e.g. www.yoursite.com/dothis.php) is called, you execute some code. If you don't want the page to run whenever anyone requests it, you can accept only POST requests and have guardiano.pm send some parameter that only you know. That's what I would do, because that is what I do on my pet projects. Have fun!
A: Command-line PHP + cron would be the way I would go. It's simple and should fit the bill. It is usually installed with PHP as a matter of course.
A: If you do not have the option to set up a cron job, you can call the script with cURL (as an alternative to wget - same functionality). Just create a scheduled task on your local machine that executes the cURL call.
A: If you want something more abstract, you might consider using something like a PHP scheduler. For example: * *https://github.com/lavary/crunz *https://github.com/peppeocchi/php-cron-scheduler And also, to parse the cron expression, you could use an existing library such as https://github.com/mtdowling/cron-expression. It provides a lot of useful methods to help you figure out information about a cron job. Hope that helps.
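As a footnote to the answers above, here is a minimal, unverified sketch of the "check on each request" approach from the question, extended with a timestamp file and a lock so concurrent requests don't run the task twice. The file name, interval and runTask() body are placeholders, not anything prescribed by the answers:

<?php
// Run the maintenance task at most once per hour, triggered by normal page views.
define('MAINTENANCE_INTERVAL', 3600);
define('MAINTENANCE_STAMP', __DIR__ . '/maintenance.stamp');

function runTask()
{
    // Whatever maintenance work is needed (cache rebuild, cleanup, ...).
}

function maybeRunMaintenance()
{
    $lastRun = file_exists(MAINTENANCE_STAMP) ? (int) file_get_contents(MAINTENANCE_STAMP) : 0;
    if (time() - $lastRun < MAINTENANCE_INTERVAL) {
        return; // not due yet
    }
    // flock() keeps two concurrent requests from running the task at the same time.
    $fp = fopen(MAINTENANCE_STAMP, 'c+');
    if ($fp === false || !flock($fp, LOCK_EX | LOCK_NB)) {
        return; // another request is already doing the work
    }
    runTask();
    ftruncate($fp, 0);
    rewind($fp);
    fwrite($fp, (string) time()); // record the new "last run" time
    flock($fp, LOCK_UN);
    fclose($fp);
}

maybeRunMaintenance();

Like the randomizer answer above, this keeps the per-request overhead down to one small file read; a real cron job is still the cleaner option when shell access is available.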
{ "language": "en", "url": "https://stackoverflow.com/questions/120228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "49" }
Q: Is there an alternative to the SQL Profiler for SQL Server 2000 I am trying to optimize some stored procedures on a SQL Server 2000 database, and when I try to use SQL Profiler I get the error message "In order to run a trace against SQL Server you have to be a member of sysadmin fixed server role." It seems that only members of the sysadmin role can run traces on the server (something that was fixed in SQL Server 2005), and there is no way in hell that I will be granted that server role (company policies). What I'm doing now is inserting the current time minus the time the procedure started at various stages of the code, but I find this very tedious. I was also thinking of replicating the database to a local installation of SQL Server, but the stored procedure uses data from many different databases, so I would spend a lot of time copying data locally. So I was wondering if there is some other way of profiling SQL code (third-party tools, different practices, something else)?
A: In Query Analyzer: SET STATISTICS TIME ON SET STATISTICS IO ON Run the query and look in the Messages tab. It occurs to me this may require the same privileges, but it's worth a try.
A: Your hands are kind of tied without Profiler. You can, however, start by tuning your existing queries using Query Analyzer or any query tool and examining the execution plans. With QA, you can use the Show Execution Plan option. From other tools you can use SET STATISTICS PROFILE ON / OFF.
A: There is a workaround on SQL 2000 to obfuscate the Profiler connection dialogue box to limit the sysadmin connection to running traces only. SQLTeam Blog
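To make the first answer concrete, this is roughly how the SET STATISTICS approach is used in Query Analyzer; the procedure name and parameter are placeholders for whatever you are tuning:

-- Run in Query Analyzer; dbo.usp_MyProcedure is a placeholder name
SET STATISTICS TIME ON
SET STATISTICS IO ON

EXEC dbo.usp_MyProcedure @SomeParam = 42

SET STATISTICS TIME OFF
SET STATISTICS IO OFF
-- The Messages tab now shows parse/compile and execution times,
-- plus logical/physical reads for each table the procedure touches.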
{ "language": "en", "url": "https://stackoverflow.com/questions/120240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to chain pseudo selectors in SASS I'm trying to put together a selector in SASS that will operate on the visited, hovered state of a link, but I can't quite seem to get the markup right, can someone enlighten me? I was writing it like this: &:visited:hover attribute: foo
A: a &:visited:hover attribute: foo Nowadays, this is the only valid form. Indentation has to be consistent (2 spaces are recommended) and the colon follows the attribute.
A: a &:visited:hover :attribute foo Try that - note that indentation is two spaces, and the colon goes before the attribute, not after.
A: Perfect for hover and before/after: &:hover { color:#FFFFFF; &::before { color:#FFFFFF; } }
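In case it helps to see what the accepted nesting produces, the &:visited:hover rule nested under a should compile to a single combined selector - roughly this CSS (using color in place of the attribute: foo placeholder):

a:visited:hover {
  color: #c00;
}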
{ "language": "en", "url": "https://stackoverflow.com/questions/120244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Short Integers in Python Python allocates integers automatically based on the underlying system architecture. Unfortunately I have a huge dataset which needs to be fully loaded into memory. So, is there a way to force Python to use only 2 bytes for some integers (equivalent of C++ 'short')?
A: Thanks to Armin for pointing out the 'array' module. I also found the 'struct' module that packs C-style structs in a string: From the documentation (https://docs.python.org/library/struct.html): >>> from struct import * >>> pack('hhl', 1, 2, 3) '\x00\x01\x00\x02\x00\x00\x00\x03' >>> unpack('hhl', '\x00\x01\x00\x02\x00\x00\x00\x03') (1, 2, 3) >>> calcsize('hhl') 8
A: Nope. But you can use short integers in arrays: from array import array a = array("h") # h = signed short, H = unsigned short As long as the value stays in that array it will be a short integer. * *documentation for the array module
A: You can use NumPy's int types such as np.int8 or np.int16.
A: Armin's suggestion of the array module is probably best. Two possible alternatives: * *You can create an extension module yourself that provides the data structure that you're after. If it's really just something like a collection of shorts, then that's pretty simple to do. *You can cheat and manipulate bits, so that you're storing one number in the lower half of the Python int, and another one in the upper half. You'd write some utility functions to convert to/from these within your data structure. Ugly, but it can be made to work. It's also worth realising that a Python integer object is not 4 bytes - there is additional overhead. So if you have a really large number of shorts, then you can save more than two bytes per number by using a C short in some way (e.g. the array module). I had to keep a large set of integers in memory a while ago, and a dictionary with integer keys and values was too large (I had 1GB available for the data structure IIRC). I switched to using an IIBTree (from ZODB) and managed to fit it. (The ints in an IIBTree are real C ints, not Python integers, and I hacked up an automatic switch to an IOBTree when the number was larger than 32 bits.)
A: You can also pack multiple integers of any size into a single large integer. For example, as seen below, in Python 3 on a 64-bit x86 system, a 1024-bit integer takes 164 bytes of memory. That means on average one byte can store around 6.24 bits. If you go with even larger integers you can get even higher bit-storage density, for example around 7.50 bits per byte with a 2**20-bit-wide integer. Obviously you will need some wrapper logic to access the individual short numbers stored in the larger integer, which is easy to implement. One issue with this approach is that data access will slow down due to the use of large-integer operations. If you access a big batch of consecutively stored integers at once to minimize the number of large-integer operations, the slower access won't be an issue. I guess using numpy will be the easier approach. >>> a = 2**1024 >>> sys.getsizeof(a) 164 >>> 1024/164 6.2439024390243905 >>> a = 2**(2**20) >>> sys.getsizeof(a) 139836 >>> 2**20 / 139836 7.49861266054521
A: Using bytearray in Python, which is basically a C unsigned char array under the hood, will be a better solution than using large integers. There is no overhead for manipulating a byte array and it has much less storage overhead compared to large integers. It's possible to get a storage density of 7.99+ bits per byte with bytearrays.
>>> import sys >>> a = bytearray(2**32) >>> sys.getsizeof(a) 4294967353 >>> 8 * 2**32 / 4294967353 7.999999893829228
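As a rough, unverified illustration of the array/NumPy suggestions above (exact numbers vary by platform and Python version), this compares the memory footprint of a plain list of ints with 2-byte-per-element containers:

import sys
from array import array
import numpy as np

n = 1000000

as_list = list(range(n))                 # ordinary Python int objects
as_array = array('h', [0] * n)           # C signed shorts, 2 bytes each
as_numpy = np.zeros(n, dtype=np.int16)   # NumPy 16-bit ints, 2 bytes each

print(sys.getsizeof(as_list))            # size of the list object alone; the int objects cost extra
print(as_array.itemsize * len(as_array)) # about 2 MB of payload
print(as_numpy.nbytes)                   # exactly 2 * n bytes of payload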
{ "language": "en", "url": "https://stackoverflow.com/questions/120250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: What's the best way to detect if a given JavaScript object is a DOM Element? Say for instance I was writing a function that was designed to accept multiple argument types: var overloaded = function (arg) { if (is_dom_element(arg)) { // Code for DOM Element argument... } }; What's the best way to implement is_dom_element so that it works in a cross-browser, fairly accurate way?
A: Probably this one: node instanceof HTMLElement. That should work in most browsers. Otherwise you have to duck-type it (e.g. typeof x.nodeType != 'undefined').
A: jQuery checks the nodeType property. So you would have: var overloaded = function (arg) { if (arg.nodeType) { // Code for DOM Element argument... } }; Although this would detect all DOM objects, not just elements. If you want elements alone, that would be: var overloaded = function (arg) { if (arg.nodeType && arg.nodeType == 1) { // Code for DOM Element argument... } };
A: What about obj instanceof HTMLElement?
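Putting the two approaches above together, a cross-browser helper might look roughly like this; it prefers instanceof where HTMLElement exists and falls back to duck-typing on nodeType (note that instanceof can still fail for elements created in another frame):

function isDomElement(obj) {
    if (typeof HTMLElement === "object" || typeof HTMLElement === "function") {
        return obj instanceof HTMLElement;
    }
    // Older browsers without an HTMLElement constructor: duck-type instead.
    return !!obj && typeof obj === "object" &&
           obj.nodeType === 1 && typeof obj.nodeName === "string";
}

// isDomElement(document.body)                 -> true
// isDomElement(document.createTextNode("x"))  -> false
// isDomElement({})                            -> false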
{ "language": "en", "url": "https://stackoverflow.com/questions/120262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Best way to store constants referenced in the DB? In my database, I have a model which has a field which should be selected from one of a list of options. As an example, consider a model which needs to store a measurement, such as 5ft or 13cm or 12.24m3. The obvious way to achieve this is to have a decimal field and then some other field to store the unit of measurement. So what is the best way to store the unit of measurement? I've used a couple of approaches in the past: 1) Storing the various options in another DB table (and associated model), and linking the two with a standard foreign key (and usually eager loading the associated model). This seems like overkill, as you are forcing the DB to perform a join on every query. 2) Storing the options as a constant Hash, loaded in one of the initializers, where the key into the Hash is stored in the unit of measurement field. This way, you effectively do the join in Ruby (which may or may not be a performance increase), but you lose the ability to query from the "unit of measurement" side. This wouldn't be a problem provided it's unlikely you'd need to do queries like "find me all measurements with units of cm". Neither of these feels particularly elegant to me... can anyone suggest something better?
A: Have you seen constant_cache? It's sort of the combination of the best of 1 and 2 - lookup data is stored in the DB, but it's exposed as class constants on the lookup model and only loaded at application start, so you don't suffer the join penalties constantly. The following example comes from the README: migration: create_table :account_statuses do |t| t.string :name, :description end AccountStatus.create!(:name => 'Active', :description => 'Active user account') AccountStatus.create!(:name => 'Pending', :description => 'Pending user account') AccountStatus.create!(:name => 'Disabled', :description => 'Disabled user account') model: class AccountStatus < ActiveRecord::Base caches_constants end using it: Account.new(:username => 'preagan', :status => AccountStatus::PENDING)
A: I would go with option one. How large will the UnitOfMeasurement table be? And, if you're using an integer primary key, why worry so much about speed? Option 1 is the way to go for design reasons. Just declare it with an integer (even smallint) primary key and a field for the unit description.
A: Has ActiveRecord gotten support for natural keys yet? If it has, you can just make the name (or whatever) column of the UnitOfMeasure table the PK; that way the value of the FK column has all the info you need, and you still have a fully normalized DB with a canonical set of UnitOfMeasurement values.
A: Do you need to perform lookups on these values? If not, you could just as well store them as a string and parse the string later on in the application that reads the values. While you risk storing unparseable data, you gain speed and reduce DB complexity. Sometimes normalizing a database is not helpful. In the end /something/ within your system needs to know that "cm" is a length measure and "m3" is a volume measure, and comparing "3cm" to "1m3" doesn't make any sense anyway. So you might just as well put all that knowledge in code. If you are only going to display that data anyway, what is normalizing good for here?
{ "language": "en", "url": "https://stackoverflow.com/questions/120266", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How can I measure distance and create a bounding box based on two latitude+longitude points in Java? I am wanting to find the distance between two different points. This I know can be accomplished with the great circle distance. http://www.meridianworlddata.com/Distance-calculation.asp Once done, with a point and distance I would like to find the point that distance north, and that distance east in order to create a box around the point. A: Corrected Haversine Distance formula.... public static double HaverSineDistance(double lat1, double lng1, double lat2, double lng2) { // mHager 08-12-2012 // http://en.wikipedia.org/wiki/Haversine_formula // Implementation // convert to radians lat1 = Math.toRadians(lat1); lng1 = Math.toRadians(lng1); lat2 = Math.toRadians(lat2); lng2 = Math.toRadians(lng2); double dlon = lng2 - lng1; double dlat = lat2 - lat1; double a = Math.pow((Math.sin(dlat/2)),2) + Math.cos(lat1) * Math.cos(lat2) * Math.pow(Math.sin(dlon/2),2); double c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a)); return EARTH_RADIUS * c; } A: Or you could use SimpleLatLng. Apache 2.0 licensed and used in one production system that I know of: mine. Short story: I was searching for a simple geo library and couldn't find one to fit my needs. And who wants to write and test and debug these little geo tools over and over again in every application? There's got to be a better way! So SimpleLatLng was born as a way to store latitude-longitude data, do distance calculations, and create shaped boundaries. I know I'm two years too late to help the original poster, but my aim is to help the people like me who find this question in a search. I would love to have some people use it and contribute to the testing and vision of this little lightweight utility. A: We've had some success using OpenMap to plot a lot of positional data. There's a LatLonPoint class that has some basic functionality, including distance. A: http://www.movable-type.co.uk/scripts/latlong.html public static Double distanceBetweenTwoLocationsInKm(Double latitudeOne, Double longitudeOne, Double latitudeTwo, Double longitudeTwo) { if (latitudeOne == null || latitudeTwo == null || longitudeOne == null || longitudeTwo == null) { return null; } Double earthRadius = 6371.0; Double diffBetweenLatitudeRadians = Math.toRadians(latitudeTwo - latitudeOne); Double diffBetweenLongitudeRadians = Math.toRadians(longitudeTwo - longitudeOne); Double latitudeOneInRadians = Math.toRadians(latitudeOne); Double latitudeTwoInRadians = Math.toRadians(latitudeTwo); Double a = Math.sin(diffBetweenLatitudeRadians / 2) * Math.sin(diffBetweenLatitudeRadians / 2) + Math.cos(latitudeOneInRadians) * Math.cos(latitudeTwoInRadians) * Math.sin(diffBetweenLongitudeRadians / 2) * Math.sin(diffBetweenLongitudeRadians / 2); Double c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a)); return (earthRadius * c); } A: Here is a Java implementation of Haversine formula. I use this in a project to calculate distance in miles between lat/longs. 
public static double distFrom(double lat1, double lng1, double lat2, double lng2) { double earthRadius = 3958.75; // miles (or 6371.0 kilometers) double dLat = Math.toRadians(lat2-lat1); double dLng = Math.toRadians(lng2-lng1); double sindLat = Math.sin(dLat / 2); double sindLng = Math.sin(dLng / 2); double a = Math.pow(sindLat, 2) + Math.pow(sindLng, 2) * Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2)); double c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a)); double dist = earthRadius * c; return dist; } A: For a more accurate distance (0.5mm) you can also use the Vincenty approximation: /** * Calculates geodetic distance between two points specified by latitude/longitude using Vincenty inverse formula * for ellipsoids * * @param lat1 * first point latitude in decimal degrees * @param lon1 * first point longitude in decimal degrees * @param lat2 * second point latitude in decimal degrees * @param lon2 * second point longitude in decimal degrees * @returns distance in meters between points with 5.10<sup>-4</sup> precision * @see <a href="http://www.movable-type.co.uk/scripts/latlong-vincenty.html">Originally posted here</a> */ public static double distVincenty(double lat1, double lon1, double lat2, double lon2) { double a = 6378137, b = 6356752.314245, f = 1 / 298.257223563; // WGS-84 ellipsoid params double L = Math.toRadians(lon2 - lon1); double U1 = Math.atan((1 - f) * Math.tan(Math.toRadians(lat1))); double U2 = Math.atan((1 - f) * Math.tan(Math.toRadians(lat2))); double sinU1 = Math.sin(U1), cosU1 = Math.cos(U1); double sinU2 = Math.sin(U2), cosU2 = Math.cos(U2); double sinLambda, cosLambda, sinSigma, cosSigma, sigma, sinAlpha, cosSqAlpha, cos2SigmaM; double lambda = L, lambdaP, iterLimit = 100; do { sinLambda = Math.sin(lambda); cosLambda = Math.cos(lambda); sinSigma = Math.sqrt((cosU2 * sinLambda) * (cosU2 * sinLambda) + (cosU1 * sinU2 - sinU1 * cosU2 * cosLambda) * (cosU1 * sinU2 - sinU1 * cosU2 * cosLambda)); if (sinSigma == 0) return 0; // co-incident points cosSigma = sinU1 * sinU2 + cosU1 * cosU2 * cosLambda; sigma = Math.atan2(sinSigma, cosSigma); sinAlpha = cosU1 * cosU2 * sinLambda / sinSigma; cosSqAlpha = 1 - sinAlpha * sinAlpha; cos2SigmaM = cosSigma - 2 * sinU1 * sinU2 / cosSqAlpha; if (Double.isNaN(cos2SigmaM)) cos2SigmaM = 0; // equatorial line: cosSqAlpha=0 (§6) double C = f / 16 * cosSqAlpha * (4 + f * (4 - 3 * cosSqAlpha)); lambdaP = lambda; lambda = L + (1 - C) * f * sinAlpha * (sigma + C * sinSigma * (cos2SigmaM + C * cosSigma * (-1 + 2 * cos2SigmaM * cos2SigmaM))); } while (Math.abs(lambda - lambdaP) > 1e-12 && --iterLimit > 0); if (iterLimit == 0) return Double.NaN; // formula failed to converge double uSq = cosSqAlpha * (a * a - b * b) / (b * b); double A = 1 + uSq / 16384 * (4096 + uSq * (-768 + uSq * (320 - 175 * uSq))); double B = uSq / 1024 * (256 + uSq * (-128 + uSq * (74 - 47 * uSq))); double deltaSigma = B * sinSigma * (cos2SigmaM + B / 4 * (cosSigma * (-1 + 2 * cos2SigmaM * cos2SigmaM) - B / 6 * cos2SigmaM * (-3 + 4 * sinSigma * sinSigma) * (-3 + 4 * cos2SigmaM * cos2SigmaM))); double dist = b * A * (sigma - deltaSigma); return dist; } This code was freely adapted from http://www.movable-type.co.uk/scripts/latlong-vincenty.html A: You can use the Java Geodesy Library for GPS, it uses the Vincenty's formulae which takes account of the earths surface curvature. Implementation goes like this: import org.gavaghan.geodesy.*; ... 
GeodeticCalculator geoCalc = new GeodeticCalculator(); Ellipsoid reference = Ellipsoid.WGS84; GlobalPosition pointA = new GlobalPosition(latitude, longitude, 0.0); GlobalPosition userPos = new GlobalPosition(userLat, userLon, 0.0); double distance = geoCalc.calculateGeodeticCurve(reference, userPos, pointA).getEllipsoidalDistance(); The resulting distance is in meters. A: I know that there are many answers, but in doing some research on this topic, I found that most answers here use the Haversine formula, but the Vincenty formula is actually more accurate. There was one post that adapted the calculation from a Javascript version, but it's very unwieldy. I found a version that is superior because: * *It also has an open license. *It uses OOP principles. *It has greater flexibility to choose the ellipsoid you want to use. *It has more methods to allow for different calculations in the future. *It is well documented. VincentyDistanceCalculator A: This method would help you find the distance between to geographic location in km. private double getDist(double lat1, double lon1, double lat2, double lon2) { int R = 6373; // radius of the earth in kilometres double lat1rad = Math.toRadians(lat1); double lat2rad = Math.toRadians(lat2); double deltaLat = Math.toRadians(lat2-lat1); double deltaLon = Math.toRadians(lon2-lon1); double a = Math.sin(deltaLat/2) * Math.sin(deltaLat/2) + Math.cos(lat1rad) * Math.cos(lat2rad) * Math.sin(deltaLon/2) * Math.sin(deltaLon/2); double c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a)); double d = R * c; return d; } A: Kotlin version of Haversine formula. Returned result in meters. Tested on https://www.vcalc.com/wiki/vCalc/Haversine+-+Distance const val EARTH_RADIUS_IN_METERS = 6371007.177356707 fun distance(lat1: Double, lng1: Double, lat2: Double, lng2: Double): Double { val latDiff = Math.toRadians(abs(lat2 - lat1)) val lngDiff = Math.toRadians(abs(lng2 - lng1)) val a = sin(latDiff / 2) * sin(latDiff / 2) + cos(Math.toRadians(lat1)) * cos(Math.toRadians(lat2)) * sin(lngDiff / 2) * sin(lngDiff / 2) val c = 2 * atan2(sqrt(a), sqrt(1 - a)) return EARTH_RADIUS_IN_METERS * c } A: I typically use MATLAB with the Mapping Toolbox, and then use the code in my Java using MATLAB Builder JA. It makes my life a lot simpler. Given most schools have it for free student access, you can try it out (or get the trial version to get over your work). A: For Android, there is a simple approach. public static float getDistanceInMeter(LatLng start, LatLng end) { float[] results = new float[1]; Location.distanceBetween(start.latitude, start.longitude, end.latitude, end.longitude, results); return results[0]; } ; https://developer.android.com/reference/android/location/Location#distanceBetween(lat1,lng1,lat2,lng2,output[])
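The answers above cover the distance half of the question; for the bounding-box half, here is a rough, unverified sketch using a simple spherical-earth approximation (it ignores the poles and the ±180° antimeridian, and the class and method names are just for illustration):

public final class GeoBox {
    private static final double EARTH_RADIUS_KM = 6371.0;

    // Returns { minLat, minLng, maxLat, maxLng } for a box that extends
    // distanceKm north/south and east/west of the given centre point.
    public static double[] boundingBox(double latDeg, double lngDeg, double distanceKm) {
        double deltaLatDeg = Math.toDegrees(distanceKm / EARTH_RADIUS_KM);
        // Longitude degrees get "narrower" as you move away from the equator.
        double deltaLngDeg = Math.toDegrees(distanceKm / (EARTH_RADIUS_KM * Math.cos(Math.toRadians(latDeg))));
        return new double[] {
            latDeg - deltaLatDeg, lngDeg - deltaLngDeg,
            latDeg + deltaLatDeg, lngDeg + deltaLngDeg
        };
    }

    public static void main(String[] args) {
        double[] box = boundingBox(52.52, 13.405, 10.0); // 10 km box, illustrative coordinates
        System.out.printf("min %.5f,%.5f  max %.5f,%.5f%n", box[0], box[1], box[2], box[3]);
    }
}

The corners can then be used as a cheap pre-filter for candidate points before doing the exact great-circle check with whichever distance formula above you settle on.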
{ "language": "en", "url": "https://stackoverflow.com/questions/120283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "76" }
Q: What is a good OO C++ wrapper for sqlite I'd like to find a good object oriented C++ (as opposed to C) wrapper for sqlite. What do people recommend? If you have several suggestions please put them in separate replies for voting purposes. Also, please indicate whether you have any experience of the wrapper you are suggesting and how you found it to use. A: I also wasn't pleased with what I could find. Now you can write: class Person { public: Person() {} static SqlTable<Person>& table() { static SqlTable<Person> tab = SqlTable<Person>::sqlTable("Person", SqlColumn<Person>("Firstname", makeAttr(&Person::firstname)), SqlColumn<Person>("Lastname", makeAttr(&Person::lastname)), SqlColumn<Person>("Age", makeAttr(&Person::age)), return tab; } std::string firstname; std::string lastname; int age; }; SqliteDB db("testtable.db"); auto sel(db.select<Person>("Firstname=\"Danny\" and Lastname=\"Zeckzer\"")); std::for_each(sel.first, sel.second, [](const Person& p) { ... Person me; db.insert<Person>(me); ... std::vector<Person> everybody; db.insert<Person>(everybody.begin(), everybody.end()); The table method is all you need to write as long as you stick to the sqlite3 data types. As everything is a template not much abstraction layer code remains after -O. Natural joins require a result class similar to the Person class. The implementation is a single header with less than 500 lines. License is LGPL. Source A: Everyone have given good advice on what to use: I'll tell you what instrument NOT use. LiteSQL. My experience is terrible. I'm just doing some reasearch on what orm use, and I'm testing a lot of it. Weaknesses: * *no documentation *no explanatory README *no explanation on prerequisites *do not compile due to a lot of bug (isn't true, isn't fixed in v0.3.17) A: This is really inviting down-votes, but here goes... I use sqlite directly from C++, and don't see any value with an added C++ abstraction layer. It's quite good (and efficient) as is. A: I wasn't pleased with any I could find either, so I wrote my own: sqlite3cc. Here's a code example: sqlite::connection db( filename ); sqlite::command c( db, "UPDATE foo SET bar = ? WHERE name = ?" ); c << 123 << name << sqlite::exec; sqlite::query q( db, "SELECT foo FROM bar" ); for( sqlite::query::iterator i = q.begin(); i != q.end(); i++ ) std::cout << i->column< std::string >( 0 ) << "\n"; A: I've used this one http://www.codeproject.com/KB/database/CppSQLite.aspx but I've moved to C#, so there may be newer/better ones now A: http://www.codeproject.com/KB/database/CppSQLite.aspx is just fantastic, it is very easy to port, I had it working on bcb5 (omg) in half an hour or so. It is about as thin as you can get and easy to understand. There are a goodly number of examples that cover just about every thing you need to know. It uses exceptions for error handling - I modified it to provide return codes in a mater of minutes. Only tricky issue is to create your own lib file none are provided. try { CppSQLite3DB db; db.open(asFileName.c_str()); db.execDML("Update data set hrx = 0"); } // try catch (...) { } // catch Could not be much simpler than this..... A: Another simple one is NLDatabase. Disclaimer: I'm the author. 
Basic usage (and to be honest, you won't get much more than "basic" from this one) looks like this: #include "NLDatabase.h" using namespace std; using namespace NL::DB; int main(int argc, const char * argv[]) { Database db( "test.sqlite" ); auto results = db.query("SELECT * FROM test WHERE name <> ?").select("TOM"); for ( auto const & row : results ) { cout << "column[0]=" << row.column_string( 0 ) << endl; } } And just for fun, open a database, run a query and fetch results all in one line: for ( auto & row : Database( "test.sqlite" ).query( "SELECT * FROM test").select() ) { cout << row.column_string( 0 ) << endl; } A: I made one because of the need in our company. https://www.github.com/rubdos/libsqlitepp It's C++11, and header only. Just put the header in your project, include it and link to the C sqlite libraries. Examples should be somewhere on that git repo too, fairly easy to use. A: Perhaps you can take a look at http://pocoproject.org or Platinum C++ Framework A: Oracle/OCI/ODBC Template Library A: Another good wraper for databases in C++ is SOCI. It's not very OO, but the more Modern C++. It supports Oracle, PostgreSQL and MySQL. A SQLite backend is in the CVS. A: I read this post and tried some of the libraries mentioned in the answers , But none of them was easy enough for me ( i am a lazy programmer ! ). So i wrote my own wrapper : sqlite modern cpp database db("dbfile.db"); // executes the query and creates a 'user' table if not exists db << "create table if not exists user (" " age int," " name text," " weight real" ");"; // inserts a new user and binds the values to '?' marks db << "insert into user (age,name,weight) values (?,?,?);" << 20 << "bob" << 83.0; // slects from table user on a condition ( age > 18 ) and executes // the lambda for every row returned . db << "select age,name,weight from user where age > ? ;" << 18 >> [&](int age, string name, double weight) { cout << age << ' ' << name << ' ' << weight << endl; }; // selects the count(*) of table user int count = 0; db << "select count(*) from user" >> count; Have fun ! A: Here's one that hasn't been updated in a while, but compiles and runs on Mac OS GCC 4.3. It's also released under the MIT License, so you can use it in a commercial project, no problems. http://code.google.com/p/sqlite3pp/ The usage is boost-ified and very clean: sqlite3pp::database db("test.db"); sqlite3pp::transaction xct(db); { sqlite3pp::command cmd(db, "INSERT INTO contacts (name, phone) VALUES (:user, :phone)"); cmd.bind(":user", "Mike"); cmd.bind(":phone", "555-1234"); cmd.execute(); } xct.rollback(); See: http://code.google.com/p/sqlite3pp/wiki/UsagePage A: Use Qt - it has great binding for SQLite that fits well into its overall design
{ "language": "en", "url": "https://stackoverflow.com/questions/120295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "69" }
Q: Used Ctrl-Alt-F6 in Linux, and can't get my screen back This is obviously a stupid question. I am coding in Eclipse both on Mac and Linux, but I mixed up and used the Mac shortcut for window tabbing (Ctrl-Cmd-F6) while I was on the Linux machine at uni, and the screen went black. I've done this before, but this time I can't get back to my desktop. Ctrl-Alt F1-F6 gives me different terminals, F7 gives me a black screen and F8 a blinking underscore in the top left corner. Shouldn't my session have been somewhere in F1-F6, and is it lost?
A: The Ctrl+Alt+Fx (x=1..6) key combinations often allow you to have up to 6 concurrent terminal sessions on the console. Usually one is set up to run X, and which one differs from distribution to distribution. Typically it's on Ctrl+Alt+F7. http://linux.about.com/od/linux101/l/blnewbie5_1.htm Some distributions of Linux allow you to kill the X Windows session with Ctrl+Alt+Backspace, at which point the operating system will attempt to restart it.
A: Alt+F1 works for Ubuntu 18.04.
A: In the future, you can go into a terminal and type: init 3 to bring the system into text mode, and: init 5 to return the system to X mode. The nice thing about doing it that way is that everything should be shut down and restarted cleanly.
A: In my case, hitting Ctrl+Alt+F2 worked (on Fedora).
A: Ctrl-Alt-F7 should work - perhaps your X has crashed? I just did what you did and F7 got it back for me; having said that, I remember X crashing before and getting the same black screen.
A: I had the same issue. Hitting Ctrl+Alt+F1 worked for me.
A: X is probably still running on F7, your display driver (or something else) is just misbehaving. You might be able to trick it into coming back on by going to F7 and blindly opening a terminal and playing with xset ($ xset dpms force on). Or you can Ctrl-Alt-Backspace to kill X and GDM should restart it. Try seeing if you can repeat the problem and then file a bug report (or let the lab admin know if it isn't your computer). It probably has something to do with your distro's kernel configuration/patching. I've had this happen before on Ubuntu but not any other distros (I've used many), which is why I am assuming it might be a distro-specific issue. Probably the unintended consequence of some kernel patching.
A: Try Ctrl-Alt-F9, and Ctrl-Alt-F10. :-)
A: Looks like X crashed. To check, you could log in on one of the terminals (on Ctrl+F1 etc.) and check that the "X" process is still running. I've had the same happen to me recently, and found the SIGSEGV and backtrace later in /var/log/Xorg.0.log. Curse your graphics driver vendor (usually) and then reboot.
A: We're running GNOME on Red Hat 5. ps axu in one of the other terminals showed some of the processes still running. Probably something with the display drivers then. Did Ctrl-Alt-Backspace and restarted it. Thanks for the help.
A: F8 solves the problem in Linux Mint 17.3 Rosa
{ "language": "en", "url": "https://stackoverflow.com/questions/120296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Delete variables in immediate window in C# Once I've created a variable in the immediate window in C# (VS2008), is there any way to delete it so I can create a new variable with the same name but a different type? Apart from restarting the program, that is. The reason would be to keep the immediate window's namespace clean, since it's difficult to keep track of variable declarations once they scroll off the visible part of the window.
A: I don't think it is possible to do what you are asking.
A: You could make a dictionary to hold your immediate window "variables". Then you can remove items from the dictionary when you're done with them.
A: If you just want to keep the namespace clean, you could declare the variable as object instead of the concrete type. But I don't know of any method to clear them!
A: You can do something similar to the following (nested scopes let you reuse the same name): public void LolFunction() { { int main = 0; Console.WriteLine(main); } { string main = "Roflstring"; Console.WriteLine(main); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/120315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How to sort an array of UTF-8 strings? I currentyl have no clue on how to sort an array which contains UTF-8 encoded strings in PHP. The array comes from a LDAP server so sorting via a database (would be no problem) is no solution. The following does not work on my windows development machine (although I'd think that this should be at least a possible solution): $array=array('Birnen', 'Äpfel', 'Ungetüme', 'Apfel', 'Ungetiere', 'Österreich'); $oldLocal=setlocale(LC_COLLATE, "0"); var_dump(setlocale(LC_COLLATE, 'German_Germany.65001')); usort($array, 'strcoll'); var_dump(setlocale(LC_COLLATE, $oldLocal)); var_dump($array); The output is: string(20) "German_Germany.65001" string(1) "C" array(6) { [0]=> string(6) "Birnen" [1]=> string(9) "Ungetiere" [2]=> string(6) "Äpfel" [3]=> string(5) "Apfel" [4]=> string(9) "Ungetüme" [5]=> string(11) "Österreich" } This is complete nonsense. Using 1252 as the codepage for setlocale() gives another output but still a plainly wrong one: string(19) "German_Germany.1252" string(1) "C" array(6) { [0]=> string(11) "Österreich" [1]=> string(6) "Äpfel" [2]=> string(5) "Apfel" [3]=> string(6) "Birnen" [4]=> string(9) "Ungetüme" [5]=> string(9) "Ungetiere" } Is there a way to sort an array with UTF-8 strings locale aware? Just noted that this seems to be PHP on Windows problem, as the same snippet with de_DE.utf8 used as locale works on a Linux machine. Nevertheless a solution for this Windows-specific problem would be nice... A: Update on this issue: Even though the discussion around this problem revealed that we could have discovered a PHP bug with strcoll() and/or setlocale(), this is clearly not the case. The problem is rather a limitation of the Windows CRT implementation of setlocale() (PHPs setlocale() is just a thin wrapper around the CRT call). The following is a citation of the MSDN page "setlocale, _wsetlocale": The set of available languages, country/region codes, and code pages includes all those supported by the Win32 NLS API except code pages that require more than two bytes per character, such as UTF-7 and UTF-8. If you provide a code page like UTF-7 or UTF-8, setlocale will fail, returning NULL. The set of language and country/region codes supported by setlocale is listed in Language and Country/Region Strings. It therefore is impossible to use locale-aware string operations within PHP on Windows when strings are multi-byte encoded. A: Eventually this problem cannot be solved in a simple way without using recoded strings (UTF-8 → Windows-1252 or ISO-8859-1) as suggested by ΤΖΩΤΖΙΟΥ due to an obvious PHP bug as discovered by Huppie. To summarize the problem, I created the following code snippet which clearly demonstrates that the problem is the strcoll() function when using the 65001 Windows-UTF-8-codepage. function traceStrColl($a, $b) { $outValue=strcoll($a, $b); echo "$a $b $outValue\r\n"; return $outValue; } $locale=(defined('PHP_OS') && stristr(PHP_OS, 'win')) ? 'German_Germany.65001' : 'de_DE.utf8'; $string="ABCDEFGHIJKLMNOPQRSTUVWXYZÄÖÜabcdefghijklmnopqrstuvwxyzäöüß"; $array=array(); for ($i=0; $i<mb_strlen($string, 'UTF-8'); $i++) { $array[]=mb_substr($string, $i, 1, 'UTF-8'); } $oldLocale=setlocale(LC_COLLATE, "0"); var_dump(setlocale(LC_COLLATE, $locale)); usort($array, 'traceStrColl'); setlocale(LC_COLLATE, $oldLocale); var_dump($array); The result is: string(20) "German_Germany.65001" a B 2147483647 [...] 
array(59) { [0]=> string(1) "c" [1]=> string(1) "B" [2]=> string(1) "s" [3]=> string(1) "C" [4]=> string(1) "k" [5]=> string(1) "D" [6]=> string(2) "ä" [7]=> string(1) "E" [8]=> string(1) "g" [...] The same snippet works on a Linux machine without any problems producing the following output: string(10) "de_DE.utf8" a B -1 [...] array(59) { [0]=> string(1) "a" [1]=> string(1) "A" [2]=> string(2) "ä" [3]=> string(2) "Ä" [4]=> string(1) "b" [5]=> string(1) "B" [6]=> string(1) "c" [7]=> string(1) "C" [...] The snippet also works when using Windows-1252 (ISO-8859-1) encoded strings (of course the mb_* encodings and the locale must be changed then). I filed a bug report on bugs.php.net: Bug #46165 strcoll() does not work with UTF-8 strings on Windows. If you experience the same problem, you can give your feedback to the PHP team on the bug-report page (two other, probably related, bugs have been classified as bogus - I don't think that this bug is bogus ;-). Thanks to all of you. A: This is a very complex issue, since UTF-8 encoded data can contain any Unicode character (i.e. characters from many 8-bit encodings which collate differently in different locales). Perhaps if you converted your UTF-8 data into Unicode (not familiar with PHP unicode functions, sorry) and then normalized them into NFD or NFKD and then sorting on code points might give some collation that would make sense to you (ie "A" before "Ä"). Check the links I provided. EDIT: since you mention that your input data are clear (I assume they all fall in the "windows-1252" codepage), then you should do the following conversion: UTF-8 → Unicode → Windows-1252, on which Windows-1252 encoded data do a sort selecting the "CP1252" locale. A: $a = array( 'Кръстев', 'Делян1', 'делян1', 'Делян2', 'делян3', 'кръстев' ); $col = new \Collator('bg_BG'); $col->asort( $a ); var_dump( $a ); Prints: array 2 => string 'делян1' (length=11) 1 => string 'Делян1' (length=11) 3 => string 'Делян2' (length=11) 4 => string 'делян3' (length=11) 5 => string 'кръстев' (length=14) 0 => string 'Кръстев' (length=14) The Collator class is defined in PECL intl extension. It is distributed with PHP 5.3 sources but might be disabled for some builds. E.g. in Debian it is in package php5-intl . Collator::compare is useful for usort. A: I found this following helper function to convert all letters of a string to ASCII letters very helpful here. function _all_letters_to_ASCII($string) { return strtr(utf8_decode($string), utf8_decode('ŠŒŽšœžŸ¥µÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýÿ'), 'SOZsozYYuAAAAAAACEEEEIIIIDNOOOOOOUUUUYsaaaaaaaceeeeiiiionoooooouuuuyy'); } After that a simple array_multisort() gives you what you want. $array = array('Birnen', 'Äpfel', 'Ungetüme', 'Apfel', 'Ungetiere', 'Österreich'); $reference_array = $array; foreach ($reference_array as $key => &$value) { $value = _all_letters_to_ASCII($value); } var_dump($reference_array); array_multisort($reference_array, $array); var_dump($array); Of course you can make the helper function fit more advanced needs. But for now, it looks pretty good. array(6) { [0]=> string(6) "Birnen" [1]=> string(5) "Apfel" [2]=> string(8) "Ungetume" [3]=> string(5) "Apfel" [4]=> string(9) "Ungetiere" [5]=> string(10) "Osterreich" } array(6) { [0]=> string(5) "Apfel" [1]=> string(6) "Äpfel" [2]=> string(6) "Birnen" [3]=> string(11) "Österreich" [4]=> string(9) "Ungetiere" [5]=> string(9) "Ungetüme" } A: Using your example with codepage 1252 worked perfectly fine here on my windows development machine. 
$array=array('Birnen', 'Äpfel', 'Ungetüme', 'Apfel', 'Ungetiere', 'Österreich'); $oldLocal=setlocale(LC_COLLATE, "0"); var_dump(setlocale(LC_COLLATE, 'German_Germany.1252')); usort($array, 'strcoll'); var_dump(setlocale(LC_COLLATE, $oldLocal)); var_dump($array); ...snip... This was with PHP 5.2.6. btw. The above example is wrong, it uses ASCII encoding instead of UTF-8. I did trace the strcoll() calls and look what I found: function traceStrColl($a, $b) { $outValue = strcoll($a, $b); echo "$a $b $outValue\r\n"; return $outValue; } $array=array('Birnen', 'Äpfel', 'Ungetüme', 'Apfel', 'Ungetiere', 'Österreich'); setlocale(LC_COLLATE, 'German_Germany.65001'); usort($array, 'traceStrColl'); print_r($array); gives: Ungetüme Äpfel 2147483647 Ungetüme Birnen 2147483647 Ungetüme Apfel 2147483647 Ungetüme Ungetiere 2147483647 Österreich Ungetüme 2147483647 Äpfel Ungetiere 2147483647 Äpfel Birnen 2147483647 Apfel Äpfel 2147483647 Ungetiere Birnen 2147483647 I did find some bug reports which have been flagged being bogus... The best bet you have is filing a bug-report I suppose though... A: I am confronted with the same problem with German "Umlaute". After some research, this worked for me: $laender =array("Österreich", "Schweiz", "England", "France", "Ägypten"); $laender = array_map("utf8_decode", $laender); setlocale(LC_ALL,"de_DE@euro", "de_DE", "deu_deu"); sort($laender, SORT_LOCALE_STRING); $laender = array_map("utf8_encode", $laender); print_r($laender); The result: Array ( [0] => Ägypten [1] => England [2] => France [3] => Österreich [4] => Schweiz ) A: Your collation needs to match the character set. Since your data is UTF-8 encoded, you should use a UTF-8 collation. It could be named differently on different platforms, but a good guess would be de_DE.utf8. On UNIX systems, you can get a list of currently installed locales with the command locale -a
{ "language": "en", "url": "https://stackoverflow.com/questions/120334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: How do I get Maven to "stamp" a build number into a properties file I would like to be able to, as part of a Maven build, set the build number (doesn't matter exactly what) in a properties file/class (so I can show it in a UI). Any ideas?
A: We used the Build Number Plugin now available from Codehaus. It can generate a sequential build number or let you use a timestamp.
A: The Maven build number plugin should do what you want.
A: I use the maven-property-plugin to store the CruiseControl build label in a properties file (the build label is available as a system property named 'label'). A post on how to do this with Hudson.
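For reference, a configuration for the Codehaus Build Number Plugin might look roughly like the sketch below; treat it as a starting point rather than a verified snippet (the plugin normally derives ${buildNumber} from your SCM revision, so it may need an <scm> section or a timestamp format configured), and the file and property names are placeholders:

<build>
  <resources>
    <resource>
      <directory>src/main/resources</directory>
      <!-- filtering lets Maven substitute ${buildNumber} into the properties file -->
      <filtering>true</filtering>
    </resource>
  </resources>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>buildnumber-maven-plugin</artifactId>
      <executions>
        <execution>
          <phase>validate</phase>
          <goals>
            <goal>create</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

A filtered file such as src/main/resources/version.properties would then contain a line like build.number=${buildNumber}, which Maven replaces at build time and which the UI can read as a normal properties file.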
{ "language": "en", "url": "https://stackoverflow.com/questions/120335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Performance Profiling Tips NetBeans for client applications Do you have any tips for effective profiling using NetBeans? The profiler is quite nice and powerful. I've used it to find problems in some of my Eclipse RCP client applications. However, I get the feeling that I could get some more value out of it. Normally I set it to profile either all my classes (starting with xxx.mydomain) using an inclusive filter, or I use an exclude filter to remove all org.eclipse classes. This helps keep the overhead down. After running the section of code I am interested in, I take a snapshot. I analyze for hotspots and then change the code, repeat the profiling, take another snapshot and compare again. Any other suggestions or tips on how to get the most out of the profiler with client applications?
A: The JavaOne lab exercises are available online for free; you should be able to get some good tips there. http://developers.sun.com/learning/javaoneonline/j1labs2008.jsp?track=1&yr=2008
A: Specifically, this link from the JavaOne lab is interesting: http://developers.sun.com/learning/javaoneonline/j1lab.jsp?lab=LAB-8430&yr=2008&track=1
{ "language": "en", "url": "https://stackoverflow.com/questions/120351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Delphi 2009 TurboPower library conversions In the next few months I will be resurrecting a project which made extensive use of Orpheus and SysTools. The development system I used is long gone, so I would like to update the libraries to my current development environment. My question(s): is anyone porting, or has anyone ported, the TurboPower libraries to Tiburon, and if so did you encounter any problems? And if the answer is nobody, is it worth collaborating to produce a Delphi 2009 version, sharing the load?
A: Some components are in the process of being ported to Delphi 2009, including 5 TurboPower libraries. No Orpheus or SysTools, though. http://www.songbeamer.com/delphi/ Update: As M Plaut pointed out, Orpheus has been added to the site and has been updated as recently as Nov 13.
A: Orpheus407AU_3 was posted at http://sourceforge.net/projects/tporpheus/ on Sept 5, 2009.
A: There is an Orpheus project at SourceForge, but the last release was made in 2005 :( SysTools is also to be found there.
A: When TurboPower closed their doors, I analysed my code that was using Orpheus and SysTools. I found that there were only a handful of SysTools functions I was using, so we wrote our own replacements. (Can't remember what they were.) It was fairly straightforward; some of them were in the newer versions of Delphi and the rest were easy to code. Orpheus was a little more difficult. I would be willing to throw some time into bringing back Orpheus. We replaced it with standard Delphi components and some code, but our application lacks some of the coolness it once had.
A: We would definitely be looking to port this as well. We use a lot of Orpheus components in our current applications and this would be a definite roadblock to Delphi 2009.
A: As of 10-11-2008, there is a version of Orpheus at http://www.songbeamer.com/delphi/ as well. The following comments are attached: This is based on the version from CVS. The first two packages compile and are partly tested. Some asm code still needs updating. Some bugs also need to be found and fixed. Contributions are welcome (use the contact form at the top). Search for "FIXME" in the source. Files that may need special attention and bugfixes: OVCDRPVW.PAS, OVCPF.PAS, OVCEDITU.PAS, OVCVIEWR.PAS, OVCSTR.PAS
A: I have to bring a very old project to Delphi 2009: a CNC editor. The project didn't use Orpheus at that time, but I was looking into it (did some tests), and the Orpheus text editor is still the fastest on the market. So yes, I am very interested. I tried to compile the old source in Delphi 9, but it crashes. I am not a good programmer, but I can do tests for you.
{ "language": "en", "url": "https://stackoverflow.com/questions/120353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to create Editable PDF by ASP.NET How do I create an editable PDF using ASP.NET? I want to create a PDF from a master template, edit it (fill in some values - input not from a database) and save it. Is it possible without using a 3rd-party library? If some sample code is available, that would be great.
A: I use PDF4NET in a couple of projects, can definitely recommend it. There are code samples on their website.
A: You can use the open-source iTextSharp, which is a port of iText (Java). Here is a code sample that creates a form with a text field (it's Java, but the iTextSharp interface is nearly identical).
A: Maybe you can try the ReportViewer control. You can create a "report template", assign data to it at runtime (fill in values), render it, and save the result.
A: I like http://dynamicpdf.com - they have a free community edition which allows you to programmatically create PDFs from scratch. The paid products offer much more functionality and they have a designer as well.
A: You can use Aspose.Pdf.Kit for this purpose. In fact, it can be used either in an ASP.NET or a Windows application. You can download the trial version. Have a look at the related documentation; that might be of some help.
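Since iTextSharp keeps coming up, here is a rough sketch (not verified against a specific version) of filling an existing template's form fields from C#; the file paths and field names are placeholders and assume the template already contains AcroForm fields:

using System.IO;
using iTextSharp.text.pdf;

public class PdfFormFiller
{
    public static void Fill(string templatePath, string outputPath)
    {
        PdfReader reader = new PdfReader(templatePath);
        using (FileStream output = new FileStream(outputPath, FileMode.Create))
        {
            PdfStamper stamper = new PdfStamper(reader, output);

            // Write values into the named form fields of the template.
            stamper.AcroFields.SetField("FirstName", "Jane");
            stamper.AcroFields.SetField("Amount", "123.45");

            // false keeps the form editable; true locks the values in.
            stamper.FormFlattening = false;

            stamper.Close();
        }
        reader.Close();
    }
}

In an ASP.NET page you could write to the response stream instead of a FileStream to send the filled PDF straight to the browser.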
{ "language": "en", "url": "https://stackoverflow.com/questions/120355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Irretrievably destroying data in Java Is there anyway in Java to delete data (e.g., a variable value, object) and be sure it can't be recovered from memory? Does assigning null to a variable in Java delete the value from memory? Any ideas? Answers applicable to other languages are also acceptable. A: Store sensitive data in an array, then "zero" it out as soon as possible. Any data in RAM can be copied to the disk by a virtual memory system. Data in RAM (or a core dump) can also be inspected by debugging tools. To minimize the chance of this happening, you should strive for the following * *keep the time window a secret is present in memory as short as possible *be careful about IO pipelines (e.g., BufferedInputStream) that internally buffer data *keep the references to the secret on the stack and out of the heap *don't use immutable types, like String, to hold secrets The cryptographic APIs in Java use this approach, and any APIs you create should support it too. For example, KeyStore.load allows a caller to clear a password char[], and when the call completes, as does the KeySpec for password-based encryption. Ideally, you would use a finally block to zero the array, like this: KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType()); InputStream is = … char[] pw = System.console().readPassword(); try { ks.load(is, pw); } finally { Arrays.fill(pw, '\0'); } A: Nothing gets deleted, its just about being accessible or not to the application. Once inaccessible, the space becomes a candidate for subsequent usage when need arises and the space will be overwritten. In case of direct memory access, something is always there to read but it might be junk and wont make sense. A: By setting your Object to null doesn't mean that your object is removed from memory. The Virtual Machine will flag that Object as ready for Garbage Collection if there are no more references to that Object. Depending on your code it might still be referenced even though you have set it to null in which case it will not be removed. (Essentially if you expect it to be garbage collected and it is not you have a memory leak!) Once it is flagged as ready for collection you have no control over when the Garbage Collector will remove it. You can mess around with Garbage Collection strategies but I wouldn't advise it. Profile your application and look at the object and it's id and you can see what is referencing it. Java provide VisualVM with 1.6.0_07 and above or you can use NetBeans A: As zacherates said, zero out the sensitive fields of your Object before removing references to it. Note that you can't zero out the contents of a String, so use char arrays and zero each element. A: Due to the wonders virtual memory, it is nearly impossible to delete something from memory in a completely irretrievable manner. Your best bet is to zero out the value fields; however: * *This does not mean that an old (unzeroed) copy of the object won't be left on an unused swap page, which could persist across reboots. *Neither does it stop someone from attaching a debugger to your application and poking around before the object gets zeroed or crashing the VM and poking around in the heap dump. A: Nope, unless you have direct answer to hardware. There is a chance that variable will be cached somewhere. Sensitive data can even be stored in swap :) If you're concerning only about RAM, you can play with garbage collector. In high level langs usually you don't have a direct access to memory, so it's not possible to control this aspect. 
For example, in .NET there is a class SecureString which uses interop and direct memory access.
A: I would think that your best bet (that isn't complex) is to use a char[] and then change each position in the array. The other comments about it being possible for it to be copied in memory still apply.
A: Primitive data (byte, char, int, double) and arrays of them (byte[], ...) are erasable by writing new random content into them. Object data has to be sanitized by overwriting its primitive properties; setting a variable to null just makes the object available for GC, but not immediately dead. A dump of the VM will contain them for anyone to see. Immutable data such as String cannot be overwritten in any way. Any modification just makes a copy. You should avoid keeping sensitive data in such objects. P.S. If we talk about passwords, it's better to use crypto-strong hash functions (MD5, SHA1, ...), and never ever work with passwords in clear text.
A: If you're thinking about securing password/key management, you could write some JNI code that uses a platform-specific API to store the keys in a secure way and not leak the data into the memory managed by the JVM. For example, you could store the keys in a page locked in physical memory and could prevent the IO bus from accessing the memory. EDIT: To comment on some of the previous answers, the JVM could relocate your objects in memory without erasing their previous locations, so even char[], byte, int and other "erasable" data types aren't an answer if you really want to make sure that no sensitive information is stored in the memory managed by the JVM or swapped onto the hard drive.
A: Totally and completely irretrievable is something almost impossible in this day and age. When you normally delete something, the only thing that happens is that the first spot in your memory is emptied. This first spot used to contain the information as to how far the memory had to be reserved for that program or something else. But all the other info is still there until it's overwritten by someone else. I suggest either TinyShredder, or using CCleaner set to the Gutmann pass.
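To make the "overwrite, don't just null out" advice above concrete, here is a minimal Java sketch; the SecretHolder class name, the example passphrase and the use of SecureRandom are illustrative assumptions rather than something from the answers:

import java.security.SecureRandom;
import java.util.Arrays;

public class SecretHolder {
    public static void main(String[] args) {
        char[] secret = "s3cr3t-passphrase".toCharArray();
        try {
            // ... use the secret while it is needed ...
        } finally {
            // Overwrite with random data, then zero, so the plaintext
            // no longer sits in the heap waiting to be swapped or dumped.
            SecureRandom rnd = new SecureRandom();
            for (int i = 0; i < secret.length; i++) {
                secret[i] = (char) rnd.nextInt(Character.MAX_VALUE);
            }
            Arrays.fill(secret, '\0');
            secret = null; // the reference itself can now go away too
        }
    }
}

As the answers point out, this only narrows the window: copies made by the VM, the swap file, or a debugger are still possible.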
{ "language": "en", "url": "https://stackoverflow.com/questions/120380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How to save file using Eclipse Monkey? I've recently started working with Aptana and Eclipse Monkey. What I want to do now is to create a script that saves a file and then uploads it to an FTP server. The problem is that Eclipse Monkey is VERY poorly documented and Google returned only unrelated results. I've made it upload the file, but I need the script to save the file before uploading. sync.uploadCurrentEditor(); Do you know of any resource for Eclipse Monkey with its methods, etc.?
A: Check this out, it has the solution for your problem: http://forums.aptana.com/viewtopic.php?t=5216 Edit: to be more specific, you need to add the following line into the meta-data piece at the top, so the script knows the reference 'editors': * DOM: http://download.eclipse.org/technology/dash/update/org.eclipse.eclipsemonkey.lang.javascript After you've done that, you need to add the following line right before the sync call: editors.activeEditor.save(); That's it :)
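Putting the answer's two pieces together, a complete Monkey script might look roughly like the sketch below. Only the DOM line and the two calls come from the answer; the Menu entry and the main() wrapper are illustrative assumptions about the usual script layout:

/*
 * Menu: Samples > Save and Upload
 * DOM: http://download.eclipse.org/technology/dash/update/org.eclipse.eclipsemonkey.lang.javascript
 */
function main() {
    // Save the active editor first, then push the current file to the sync target
    editors.activeEditor.save();
    sync.uploadCurrentEditor();
}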
{ "language": "en", "url": "https://stackoverflow.com/questions/120383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What non-web-based tools exist to view IIS logs? I'm looking for non-web-based tools to view IIS logs. I've been using LogParser but I was hoping for something with an interface. It doesn't need to do any fancy reports or charts -- just a list with some search filters is plenty. I need something lightweight that I can run directly on the web server, preferably without a complex install process.
A: VisualLogParser wraps Log Parser in a GUI. I'm sure there are others as well, but it has fit the bill for me. All the yumminess of Log Parser, with a half-decent interface.
A: I find that command-line tools are often enough. For example, to list all log entries with a 404 response: findstr "404" logfilename > out.txt Findstr supports regular expressions in the search term and wildcards in the filename, so it is quite flexible for dealing with logfiles.
A: I'm intrigued why you need more of an "interface" than the command-line interface already provided by LogParser. Are you struggling with the SQL-like syntax maybe, or is there something else? (See the example query below.) LogParser ticks ALL your other requirements. It totally rocks my socks.
A: Nothing beats Log Parser. However, you may want to check out Sawmill & Splunk.
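For reference, the "SQL-like syntax" mentioned above looks roughly like this. The query below is a sketch that counts 404s per URL in W3C-format IIS logs; the ex*.log file pattern is an assumption and will differ per site and IIS version:

LogParser.exe "SELECT cs-uri-stem, COUNT(*) AS Hits FROM ex*.log WHERE sc-status = 404 GROUP BY cs-uri-stem ORDER BY Hits DESC" -i:IISW3C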
{ "language": "en", "url": "https://stackoverflow.com/questions/120395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to price a corporate web app? I am putting together a proposal for a large multinational company for our licenced solution. The problem is I've never put together something this big before, so I don't know what is acceptable. Ignoring hosting or support (or even the functionality of the app) for the moment and just concentrating on the licence - do you think it's more usual to do it on a per-user basis - and if so, would the company mind when prices fluctuate as more users come on board? Or is it more normal to do bands of users: 1-1000, 1000-5000, 5000-10000, 10000-25000, 25000-50000, 50000-100k, 100k+ (probably maximum usage). Does anyone know of any good links about this kind of thing? Has anyone here procured a complex, multilingual web app for 30000 users, and how much is acceptable?
A: With big deals it's usually discounted on each unit of whatever you're selling so that you can scale with their volume. It probably makes sense to have a base price plus a per-user license that scales with their usage. Another rule of thumb is that when you have a large purchase to get approved your sales cycle will be fairly long, so it's a good idea to price with that in mind. This is a pretty good article by Joel Spolsky: http://www.joelonsoftware.com/articles/CamelsandRubberDuckies.html If you never took an econ course then this will help a lot.
A: It's going to depend on your situation I suppose: is this licensed solution already complete, or is it going to be created specifically for this company? If it's the first, I'd suggest using previous offers as a basis. If it's going to be developed, it's going to depend on how you want to recoup your money for developing it - over what period of time do you want to recoup your costs, etc. I'd say that bands of users is probably the most common solution, as people probably don't want to relicense for every individual user.
{ "language": "en", "url": "https://stackoverflow.com/questions/120396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Fax barcode recognition software to integrate in a C# application My application is preparing a cover-sheet for submitting documents by fax. I identify the batch by printing a Code39 on it. The incoming faxes are received as tif files, and my small C# service is polling for new faxes and calling a decoding function. Currently I use a Win32 dll to recognize the barcode (QSbar39) and I am facing some problems with it - timeouts, hangs, errors etc. which are painful to handle. I'd prefer to have some free .Net component or an open source class to integrate. Would you take a different approach? What tool do you use? Many thanks for your answers!
A: BarBara is an open source barcode recognition library. It certainly isn't perfect, but I have gotten it to work (if I remember it needed some twiddling to get it to compile in VS2008) pretty well.
A: I've used this Java library with good results, and have gotten it to compile in .Net with minimal effort. It's been around for a while and works pretty well. It had some minor issues with EAN codes, but that has probably been resolved now. http://barbecue.sourceforge.net/
A: I'm not sure there is anything available free or open source, but VintaSoftBarcode.NET Library is pretty cheap and did the job for me on a recent project.
{ "language": "en", "url": "https://stackoverflow.com/questions/120402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Problem accessing file from different thread in Asp.net I have a process in a website (Asp.net 3.5 using Linq-to-Sql for data access) that needs to work as follows: * *Upload file *Record and save info regarding file to database *Import data from file into database *Redirect to different page When run sequentially like this, everything works fine. However, since the files being imported can be quite large, I would like step 3 to run on a different thread from the UI thread. The user should get to step 4 while step 3 is still in progress, and the screen on step 4 will periodically update to let the user know when the import is complete. I am handling the threading as follows: public class Import { public static void ImportPendingFile() { Import i = new Import(); Thread newThread = new Thread(new ThreadStart(i.ImportFile)); newThread.Start(); } public void ImportFile() { // 1. Query DB to identify pending file // 2. Open up and parse pending file // 3. Import all data from file into DB // 4. Update db to reflect that import completed successfully } } And in the codebehind: protected void butUpload(object sender, EventArgs e) { // Save file, prepare for import Import.ImportPendingFile(); Response.Redirect(NewLocation); } When doing this, I am able to confirm via debugger that the new thread is starting up properly. However, whenever I do this, the thread aborts when trying to access the file (step 2 in ImportFile above). This works fine when run in the main thread, so something about the multi-threaded situation is preventing this. I had thought that since the file is saved to disk (which it is), there shouldn't be any problem with opening it up in a different thread. Any ideas where I have gone wrong and how I can fix it? Thanks! Note: I am using a third-party assembly to open the file. Using Reflector, I have found the following code related to how it opens up the file: if (File.Exists(fileName)) { using (FileStream stream = new FileStream(fileName, FileMode.Open)) { // use stream to open file } }
A: Try Response.Redirect(url, false); otherwise the Response will be ended just after that call.
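In other words, the only change suggested by the answer is in the button handler from the question. A sketch of the modified handler, reusing the same NewLocation value from the question, might look like this:

protected void butUpload(object sender, EventArgs e)
{
    // Save file, prepare for import
    Import.ImportPendingFile();

    // Pass endResponse = false so the redirect does not terminate the
    // response immediately, which the answer above suggests is what
    // interferes with the freshly started import thread.
    Response.Redirect(NewLocation, false);
}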
{ "language": "en", "url": "https://stackoverflow.com/questions/120404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Image icon beside the site URL I would like to have information about the icons which are displayed alongside the site URLs in a web browser. Is this some browser-specific feature? Where do we specify the icon source, i.e., is it in some tag on the web page itself?
A: These icons are called favicons. Most web browsers support http://mysite.com/favicon.ico but the proper way to do it is to include an icon link tag in the head profile. <head profile="http://www.w3.org/2005/10/profile"> <link rel="icon" type="image/png" href="/somewhere/myicon.png" /> […] </head> Source from the W3C itself. Your best bet is probably to do both with the same icon image.
A: It's called a favicon. You might want to check out these three questions: * *What is currently the best way to get a favicon to display in all browsers that support Favicons? *Why no favicon for my web site? *Preferred way to use favicons?
A: I believe you're referring to the Favicon, which allows a website to specify a 16x16 (or larger) image which is displayed in the address bar next to the URL in most modern browsers. Some browsers just pick the file called favicon.ico which is in the root of your web folder, whereas others require it to be specified in the <head> of the HTML using the following code: <link rel="shortcut icon" href="favicon.ico" type="image/x-icon" /> This was originally the way it was done with IE, but that doesn't conform to standards (because of the space in the rel), so most browsers now let you do it as follows, where you can use any standard image format, not just .ico: <link rel="icon" href="favicon.png" type="image/png" />
A: These are favicons - more info on that page. Basically, .ico files in the root directory on the webserver.
A: I will just add that some sites use an animated GIF as a favicon. Which can be seen as über cool or supremely annoying, depending on your tastes... And probably not supported by all browsers.
A: The easiest way to get that info is via this simple web app: link text. You only have to type the URL of the page and it returns all the image properties.
A: They're favicons. Browsers look at / on a server for favicon.ico
A: This is called a favicon; you can look at this tutorial on using a favicon in an ASP.NET application: LINK
A: It was originally a Windows icon format file, stored under the URL http://site/favicon.ico. Most sites still use favicon.ico, and many browsers still automatically look there, regardless of the meta tags.
{ "language": "en", "url": "https://stackoverflow.com/questions/120420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: How do I iterate over a set of records in RPG(LE) with embedded SQL? How do I iterate over a set of records in RPG(LE) with embedded SQL?
A: As Mike said, iterating over a cursor is the best solution. I would add that, to get slightly better performance, you might want to fetch into an array to process in blocks rather than one record at a time. Example: EXEC SQL OPEN order_history; // Set the length len = %elem(results); // Loop through all the results dow (SqlState = Sql_Success); EXEC SQL FETCH FROM order_history FOR :len ROWS INTO :results; if (SQLER3 <> *zeros); for i = 1 to SQLER3 by 1; // Load the output eval-corr output = results(i); // Do something endfor; endif; enddo; HTH, James R. Perkins
A: Usually I'll create a cursor and fetch each record. //*********************************************************************** // Main - Main Processing Routine begsr Main; exsr BldSqlStmt; if OpenSqlCursor() = SQL_SUCCESS; dow FetchNextRow() = SQL_SUCCESS; exsr ProcessRow; enddo; if sqlStt = SQL_NO_MORE_ROWS; CloseSqlCursor(); endif; endif; CloseSqlCursor(); endsr; // Main I have added more detail to this answer in a post on my website.
{ "language": "en", "url": "https://stackoverflow.com/questions/120422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: What's the difference between "Layers" and "Tiers"? What's the difference between "Layers" and "Tiers"?
A: Yes, my dear friends said it correctly. A layer is a logical partition of the application, whereas a tier is a physical partition of the system, and the tier partitioning depends on the layer partitioning. For example, an application may execute on a single machine while following a 3-layered architecture, so we can say that a layer architecture can exist within a tier architecture. In simple terms, if a 3-layer architecture is implemented on a single machine, we can say it is a 1-tier architecture; if we implement each layer on a separate machine, it is called a 3-tier architecture. A single tier may also host several layers. In a layered architecture, related components can communicate with each other easily. For example, we might follow the architecture given below: * *presentation layer *business logic layer *data access layer A client interacts with the "presentation layer", but for security reasons it only accesses the public components of the layer below it (such as the business logic layer's public components). Q * Why do we use a layer architecture? Because implementing a layered architecture improves our application's qualities, such as ==>security ==>manageability ==>scalability It also helps with needs that come later, such as changing the DBMS or modifying the business logic after the application has been developed. Q * Why do we use a tier architecture? Because physically implementing each layer separately gives better efficiency; without a layer architecture we cannot implement a tier architecture. A separate machine implements a separate tier, and a separate tier implements one or more layers; that's why we use it. It is also used for fault tolerance and ==>ease of maintenance. Simple example: think of a bank branch and the categories of employee in it: * *gate keeper *a person for cash *a person who is responsible for introducing banking schemes *manager They are all related components of the system. If we go to the bank for a loan, first the gate keeper opens the door with a smile, then we go to a person who introduces all the loan schemes, then we go to the manager's cabin to get the loan approved, and finally we go to the cashier's counter to take the loan. That is the layer architecture of the bank. What about tiers? The bank opens a branch in one town, then in another town, then in another - but what is the basic requirement of each branch? * *gate keeper *a person for cash *a person who is responsible for introducing banking schemes *manager Exactly the same concept as layers and tiers.
A: I like the below description from Microsoft Application Architecture Guide 2 Layers describe the logical groupings of the functionality and components in an application; whereas tiers describe the physical distribution of the functionality and components on separate servers, computers, networks, or remote locations. Although both layers and tiers use the same set of names (presentation, business, services, and data), remember that only tiers imply a physical separation.
A: Read Scott Hanselman's post on the issue: A reminder on "Three/Multi Tier/Layer Architecture/Design": Remember though, that in "Scott World" (which is hopefully your world also :) ) a "Tier" is a unit of deployment, while a "Layer" is a logical separation of responsibility within code. You may say you have a "3-tier" system, but be running it on one laptop. You may say you have a "3-layer" system, but have only ASP.NET pages that talk to a database. There's power in precision, friends.
A: I use layers to describe the architecture or technology stack within a component of my solutions. I use tiers to group those components, typically when network or interprocess communication is involved.
A: Layers refer to the logical separation of code. Logical layers help you organize your code better. For example, an application can have the following layers: * *Presentation Layer or UI Layer *Business Layer or Business Logic Layer *Data Access Layer or Data Layer The above three layers reside in their own projects, maybe 3 projects or even more. When we compile the projects we get the respective layer DLL. So we have 3 DLLs now. Depending upon how we deploy our application, we may have 1 to 3 tiers. As we now have 3 DLLs, if we deploy all the DLLs on the same machine, then we have only 1 physical tier but 3 logical layers. If we choose to deploy each DLL on a separate machine, then we have 3 tiers and 3 layers. So, Layers are a logical separation and Tiers are a physical separation. We can also say that tiers are the physical deployment of layers.
A: Why always try to use complex words? A layer = a part of your code; if your application is a cake, this is a slice. A tier = a physical machine, a server. A tier hosts one or more layers. Example of layers: * *Presentation layer = usually all the code related to the User Interface *Data Access layer = all the code related to your database access Tier: Your code is hosted on a server = Your code is hosted on a tier. Your code is hosted on 2 servers = Your code is hosted on 2 tiers. For example, one machine hosting the Web Site itself (the Presentation layer), another, more secured machine hosting all the more security-sensitive code (real business code - business layer, database access layer, etc.). There are so many benefits to implementing a layered architecture. This is tricky, and properly implementing a layered application takes time. If you have some, have a look at this post from Microsoft: http://msdn.microsoft.com/en-gb/library/ee658109.aspx
A: Logical layers are merely a way of organizing your code. Typical layers include Presentation, Business and Data – the same as the traditional 3-tier model. But when we’re talking about layers, we’re only talking about logical organization of code. In no way is it implied that these layers might run on different computers or in different processes on a single computer or even in a single process on a single computer. All we are doing is discussing a way of organizing code into a set of layers defined by specific function. Physical tiers, however, are only about where the code runs. Specifically, tiers are places where layers are deployed and where layers run. In other words, tiers are the physical deployment of layers. Source: Rockford Lhotka, Should all apps be n-tier?
A: Technically a Tier can be a kind of minimum environment required for the code to run. E.g. hypothetically a 3-tier app can be running on * *3 physical machines with no OS. *1 physical machine with 3 virtual machines with no OS. (That was a 3-(hardware)tier app) *1 physical machine with 3 virtual machines with 3 different/same OSes (That was a 3-(OS)tier app) *1 physical machine with 1 virtual machine with 1 OS but 3 AppServers (That was a 3-(AppServer)tier app) *1 physical machine with 1 virtual machine with 1 OS with 1 AppServer but 3 DBMS (That was a 3-(DBMS)tier app) *1 physical machine with 1 virtual machine with 1 OS with 1 AppServer and 1 DBMS but 3 Excel workbooks.
(That was a 3-(workbook)tier app) An Excel workbook is the minimum required environment for VBA code to run. Those 3 workbooks can sit on a single physical computer or on multiple ones. I have noticed that in practice people mean "OS Tier" when they say "Tier" in the app description context. That is, if an app runs on 3 separate OSes then it's a 3-Tier app. So a pedantically correct way of describing an app would be a "1-to-3-Tier capable, running on 2 Tiers" app. :) Layers are just types of code in respect to the functional separation of duties within the app (e.g. Presentation, Data, Security etc.)
A: I've found a definition that says that Layers are a logical separation and tiers are a physical separation.
A: * *In plain English, the Tier refers to "each in a series of rows or levels of a structure placed one above the other" whereas the Layer refers to "a sheet, quantity, or thickness of material, typically one of several, covering a surface or body". *Tier is a physical unit, where the code / process runs. E.g.: client, application server, database server; Layer is a logical unit, how to organize the code. E.g.: presentation (view), controller, models, repository, data access. *Tiers represent the physical separation of the presentation, business, services, and data functionality of your design across separate computers and systems. Layers are the logical groupings of the software components that make up the application or service. They help to differentiate between the different kinds of tasks performed by the components, making it easier to create a design that supports reusability of components. Each logical layer contains a number of discrete component types grouped into sublayers, with each sublayer performing a specific type of task. The two-tier pattern represents a client and a server. In this scenario, the client and server may exist on the same machine, or may be located on two different machines. The figure below illustrates a common Web application scenario where the client interacts with a Web server located in the client tier. This tier contains the presentation layer logic and any required business layer logic. The Web application communicates with a separate machine that hosts the database tier, which contains the data layer logic. Advantages of Layers and Tiers: * *Layering helps you to maximize maintainability of the code, optimize the way that the application works when deployed in different ways, and provide a clear delineation between locations where certain technology or design decisions must be made. *Placing your layers on separate physical tiers can help performance by distributing the load across multiple servers. It can also help with security by segregating more sensitive components and layers onto different networks or on the Internet versus an intranet. A 1-Tier application could be a 3-Layer application.
A: Layers are a logical separation of related functionality [code] within the application, and communication between the layers is explicit and loosely coupled. [Presentation logic, Application logic, Data Access logic] Tiers are the physical separation of layers [which get hosted on individual servers] onto individual computers (processes). As shown in the diagram: 1-Tier & 3-Layers « App Logic without DB access, stores data in files. 2-Tier & 3-Layers « App Logic & DataStorage-box. 2-Tier & 2-Layers « Browser View[php] & DataStorage[procedures] 2-Tier & 1-Layer « Browser View[php] & DataStorage, query sending is common.
3-Tier & n-Layer « Browser View[php], App Logic[jsp], DataStorage n-Tier advantages: Better security. Scalability: as your organization grows you can scale up your DB tier with DB clustering without touching the other tiers. Maintainability: a web designer can change the view code without touching the other layers on the other tiers. Easy to upgrade or enhance [e.g. you can add additional application code, upgrade the storage area, or even add multiple presentation layers for separate devices like mobile, tablet, PC] Diagram from the blog
A: When you talk about the presentation, service, data, or network layer, you are talking about layers. When you "deploy them separately", you talk about tiers. Tiers are all about deployment. Take it this way: we have an application which has a frontend created in Angular, a backend in MongoDB, and a middle layer which interacts between the frontend and the backend. So, when this frontend application, the database, and the middle layer are all deployed separately, we say it's a 3-tier application. Benefit: if we need to scale our backend in the future, we only need to scale the backend independently, and there's no need to scale up the frontend.
A: Layers are conceptual entities, and are used to separate the functionality of a software system from a logical point of view; when you implement the system you organize these layers using different methods; in this condition we refer to them not as layers but as tiers.
A: IBM's Three-Tier Architecture article has a section dedicated to this topic: In discussions of three-tier architecture, layer is often used interchangeably – and mistakenly – for tier, as in 'presentation layer' or 'business logic layer.' They aren't the same. A 'layer' refers to a functional division of the software, but a 'tier' refers to a functional division of the software that runs on infrastructure separate from the other divisions. The Contacts app on your phone, for example, is a three-layer application, but a single-tier application, because all three layers run on your phone. The difference is important, because layers can't offer the same benefits as tiers.
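To tie the two terms together in code, here is a deliberately tiny Java sketch of three layers; the class and method names are illustrative assumptions. Whether it runs as one tier or three is purely a deployment decision - the same three classes could live in one JVM or be split across a web server, an application server, and a database server:

// Data access layer: the only code that knows where the data lives.
class CustomerRepository {
    String findName(int id) { return "customer-" + id; } // stand-in for a real DB call
}

// Business layer: rules and orchestration, no UI and no SQL.
class CustomerService {
    private final CustomerRepository repo = new CustomerRepository();
    String greetingFor(int id) { return "Hello, " + repo.findName(id); }
}

// Presentation layer: formatting and user interaction only.
public class CustomerPage {
    public static void main(String[] args) {
        System.out.println(new CustomerService().greetingFor(42));
    }
}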
{ "language": "en", "url": "https://stackoverflow.com/questions/120438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "253" }
Q: What is the most convincing way to require formalized unit testing? This certainly presupposes that unit testing is a good thing. Our projects have some level of unit testing, but it's inconsistent at best. What are the most convincing ways that you have used, or have had used with you, to convince everyone that formalized unit testing is a good thing and that making it required is really in the best interest of the 'largeish' projects we work on? I am not a developer, but I am in Quality Assurance and would like to improve the quality of the work delivered to ensure it is ready to test. By formalized unit tests, I'm simply talking about * *Identifying the Unit Tests to be written *Identifying the test data (or describing it) *Writing these tests *Tracking these tests (and re-using as needed) *Making the results available
A: A very convincing way is to do formalized unit testing yourself, regardless of what your team/company does. This might take some extra effort on your side, especially if you're not experienced with this sort of practice. When you can then show your code is better and you are being more productive than your fellow developers, they are going to want to know why. Then feed them your favorite unit testing methods. Once you've convinced your fellow developers, convince management together.
A: I use Maven with the Surefire and Cobertura plugins for all my builds. The actual test cases are created with JUnit, DbUnit and EasyMock. Identifying Unit Tests: I try to follow Test Driven Development, but to be honest I usually just do that for a handful of the test cases and then come back and create tests for the edge and exception cases later. Identifying Test Data: DbUnit is great for loading test data for your unit tests. Writing Test Cases: I use JUnit to create the test cases. I try to write self-documenting test cases but will use Javadocs to comment something that is not obvious. Tracking & Making The Results Available: I integrate the unit testing into my Maven build cycle using the Surefire plugin and I use the Cobertura plugin to measure the coverage achieved by those tests. I always generate and publish a web-site including the Surefire and Cobertura reports as part of my daily build so I can see what tests failed/passed.
A: The event which convinced me was when we managed to regress a bug three times, in three consecutive releases. Once I realised how much more productive I was as a programmer when I wasn't constantly fixing trivial mistakes after they had gone to the client, and I could have a warm fuzzy feeling that colleagues' code would do what they claimed it would, I became a convert.
A: Back in the day when I did Cobol development on mainframes, we did this religiously in the several companies I worked in, and it was accepted as the way you did things because the environment enforced it. I think it was a very typical scheme for the era and maybe some of the reasons might be applicable to you: Like most mainframe environments we had three realms, development, Quality Assurance and Production. Programmers developed in development and unit tested there, and once they signed off and were happy the unit was migrated to the QA environment (with the test and results docs) where it was system tested by dedicated QA staff. The development to QA migration was a formal step which happened overnight. Once QA'ed, the code was migrated to Production - and we had very few bugs.
The motivation to get the unit testing done, and done right, was that if you didn't, and a bug was found by QA staff, it was obvious that you hadn't done the work. Consequently your reputation depended on how rigorous you were. Of course most people would end up with the occasional bug, but coders who produced solid tested code all the time soon got a star reputation, and those who produced buggy code got noticed too. The push was always to up your game, and consequently the culture produced was one that pushed towards bug-free code delivered first time. Extracting pertinent points - * *Coder reputation tied up with delivery of bug-free, tested code *Significant overhead associated with moving unit-tested code to the next level, so motivation not to repeat this and to get it right first time. *System testing performed by different people to those doing the unit testing - ideally a different team.
A: Education and/or certification. Give your team members formal training in the field of testing - maybe with a certification exam (depending on your team members and your own attitude towards certification). You'll take testing to a higher level that way, and your team members will be more likely to take a professional attitude towards testing.
A: Sometimes leading by example is the best way. I also find it helps to remind people that certain things just don't happen when the code is under test. Next time somebody asks you to write something, do it with tests regardless. Eventually your peers will be jealous of the ease with which you can change your code and know that it still works. As for management, you need to emphasise how much time gets wasted due to the nuclear explosion that occurs when you need to make a change to codebase X that isn't under test. Many developers don't realise just how much they refactor without ensuring they are preserving behaviour across the entire system. For me this is the biggest benefit of unit testing and TDD. * *Software requirements change *Software changes to suit the requirements The only certainty is change. Changing code that is not under test requires the developer to be aware of every behavioural side effect possible. The reality is that the coders who think they can read into every permutation do so by a painstaking process of trial and error until nothing breaks obviously. At this point they check in. The pragmatic programmer recognizes that he/she is not perfect and all-knowing, and that tests are like a safety net that allows them to walk the refactoring tightrope quickly and safely. As for when to write tests on greenfield code, I'd have to advocate doing it as much as possible. Spend the time defining the behaviours that you want out of your system and write tests initially to express those higher-level constructs. Unit tests can come as thoughts crystallize. Hope this helps.
A: There is a big difference between convincing and requiring. If you find a way to convince your colleagues to write them - great. However, if you create some formalized rules and require them to write unit tests, they will find a way to overcome this. As a result you will get a bunch of unit tests which are worth nothing: there will be a unit test for every single class available and they will test setters and getters. Think twice before creating and enforcing rules. Developers are good at overcoming them.
A: Remind your team or the other developers that they're professionals, not amateurs. Worked for me!
Also, it's an industry standard these days. Without unit testing experience, they are less desirable and less valuable as employees to potential future employers.
A: First time around you just need to go ahead and write them and show people that it's worth it. I've found on three projects that it's the only way to convince people. Some people who don't code (e.g. junior project managers) won't be able to see the value until it's staring them right in the face.
A: On my software team, we tend to write a small business case on these issues and present them to management in order to have the time available to create and track tests. We explain that the time taken to test is well made up for when crunch time comes and everything is on the line. We also set up a Hudson build server to centralize the tracking of the unit tests. This makes it a lot easier for the developers to keep track of failing tests and to discover recurring problems.
A: As a team lead, it is my responsibility to ensure that my programmers are doing unit testing on all the modules they work on. I suppose at this point, it's not even a question of how to convince them; it's required. Not sometimes, not on largish projects, all the time. Unit testing is the first line of defense against putting something in production that you will have to maintain. If something is put into production that has not been completely unit and system tested, then it will come back to bite you. I guess one of the policies we have here to support this is that if it blows up in production, or causes problems, then the programmer responsible for coding and testing that module will be the one that has to take care of the problems, do the cleanup, etc. That alone is a fairly good motivator. The other is that it is about pride. I work in a shop of about 75 coders; although that is large by some standards, it's really small enough for all of us to know one another. It's also small enough that we know what one another is working on, and when it does move to production, we are aware of any abends, failures, etc. If you are careful and do the unit and system testing, the chances of moving something to production without causing failures increase significantly. It may take a time or two of moving something to production and failing to realize it, but there are great rewards involved in not messing up. It's really nice to hear congratulations in the hallway when you move a project in and it doesn't screw up.
A: Write a bunch of them and demonstrate that unit testing has improved your productivity and the quality of your code. Without some kind of proof, sometimes people won't believe it's worth it.
A: So, two years after I asked this question, I find that one unexpected answer was that moving to a new SDLC was what was needed. Five years ago, we established our first formal SDLC. It improved our situation, but left out some important things, such as automation. We are now in the process of establishing a new SDLC (under new management) where one of the tenets is automation. Not just automated unit tests, but automated functional tests. I guess the lesson is that I was thinking too small. If you are going to change how you create software, go 'whole hog' and make a drastic change rather than propose incremental change if you are not used to that.
A: You could take some inspiration from an initiative at Google. Their test team started putting up examples, tips and benefits inside the toilet cubicles to raise the profile of the merits of test automation.
https://testing.googleblog.com/2007/01/introducing-testing-on-toilet.html
{ "language": "en", "url": "https://stackoverflow.com/questions/120460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Use of Migrations in Ruby on Rails I would like to confirm that the following analysis is correct: I am building a web app in RoR. I have a data structure for my Postgres db designed (around 70 tables; this design may need changes and additions during development to reflect Rails ways of doing things. E.g., I designed some user and role tables - but if it makes sense to use Restful Authentication, I will scrub them and replace with whatever RA requires.) I have a shellscript which calls a series of .sql files to populate the empty database with tables and initial data (e.g., Towns gets pre-filled with post towns) as well as test data (e.g., Companies gets a few dummy companies so I have data to play with). For example: CREATE TABLE towns ( id integer PRIMARY KEY DEFAULT nextval ('towns_seq'), county_id integer REFERENCES counties ON DELETE RESTRICT ON UPDATE CASCADE, country_id integer REFERENCES countries ON DELETE RESTRICT ON UPDATE CASCADE NOT NULL, name text NOT NULL UNIQUE ); Proposition 0: Data lasts longer than apps, so I am convinced that I want referential integrity enforced at the DB level as well as validations in my RoR models, despite the lack of DRYNESS. Proposition 1: If I replace the script and sql files with Migrations, it is currently impossible to tell my Postgres database about the Foreign Key and other constraints I currently set in SQL DDL files within the migration code. Proposition 2: The touted benefit of migrations is that changes to the schema are versioned along with the RoR model code. But if I keep my scripts and .sql files in railsapp/db, I can version them just as easily. Proposition 3: Given that migrations lack functionality I want, and provide benefits I can replicate, there is little reason for me to consider using them. So I should --skipmigrations at script/generate model time. My question: If Proposition 0 is accepted, are Propositions 1, 2, 3 true or false, and why? Thanks!
A: Proposition 1 is mistaken: you can definitely define referential integrity using migrations, if only by using direct SQL inside the migration; see this post for more details. Proposition 2: The touted interest of migrations is to be able to define your database model incrementally while keeping track of what each change added, and to be able to easily roll back any such change at a later time. You have to be careful with the order you create/modify things in, but you can do it. One thing to keep in mind: Rails is better suited for application-centric design. In the Rails Way(tm), the database is only ever accessed through the application's Active Record layer and exposes data to the outside using web services.
A: Proposition 1 is false in at least two situations - you can use plugins like foreign_key_migrations to do the following: def self.up create_table :users do |t| t.column :department_id, :integer, :references => :departments end end which creates the appropriate foreign key constraint in your DB. Of course, you might have other things that you want to do in your DDL, in which case the second situation becomes more compelling: you're not forced to use the Ruby DSL in migrations. Try the execute method, instead: def self.up execute 'YOUR SQL HERE' end With that, you can keep the contents of your SQL scripts in migrations, gaining the benefits of the latter (most prominently the down methods, which you didn't address in your original question) and retaining the lower-level control you prefer.
A: 1: You may want to try out this plugin.
I haven't tried it myself, but it seems to be able to add foreign key constraints through migrations. 2: The real benefit of migrations is the ability to go back and forth in the history of your database. That's not as easy with your .sql files. 3: See if the above-mentioned plugin works for you, then decide :) At any rate, it's not a capital sin if you don't use them!
A: Since you are using Postgres and may not want to install the foreign_key_migrations plugin, here is what I do when I want to use both migrations and foreign key constraints. I add a method called "add_fk_constraint" to ActiveRecord::ConnectionAdapters::SchemaStatements. This could go in some centralized file, but in the example migration file below, I have just put it inline. module ActiveRecord module ConnectionAdapters # :nodoc: module SchemaStatements # Example call: # add_fk_constraint 'orders','advertiser_id','advertisers','id' # "If you want to add/alter an 'orders' record, then its 'advertiser_id' had # better point to an existing 'advertisers' record with a corresponding 'id'" def add_fk_constraint(table_name, referencing_col, referenced_table, referenced_col) fk_name = "#{table_name}_#{referencing_col}" sql = <<-ENDSQL ALTER TABLE #{table_name} ADD CONSTRAINT #{fk_name} FOREIGN KEY (#{referencing_col}) REFERENCES #{referenced_table} (#{referenced_col}) ON UPDATE NO ACTION ON DELETE CASCADE; CREATE INDEX fki_#{fk_name} ON #{table_name}(#{referencing_col}); ENDSQL execute sql end end end end class AdvertisersOrders < ActiveRecord::Migration def self.up create_table :advertisers do |t| t.column :name, :string, :null => false t.column :net_id, :integer, :null => false t.column :source_service_id, :integer, :null => false, :default => 1 t.column :source_id, :integer, :null => false end create_table :orders do |t| t.column :name, :string, :null => false t.column :advertiser_id, :integer, :null => false t.column :source_id, :integer, :null => false end add_fk_constraint 'orders','advertiser_id','advertisers','id' end def self.down drop_table :orders drop_table :advertisers end end I hope this helps someone. It has been very useful to me since I need to load a lot of externally supplied data with SQL "COPY" calls, yet I find the migrations system very convenient.
{ "language": "en", "url": "https://stackoverflow.com/questions/120467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: NHibernate nvarchar/ntext truncation problem I'm using NHibernate to store some user settings for an app in a SQL Server Compact Edition table. This is an excerpt of the mapping file: <property name="Name" type="string" /> <property name="Value" type="string" /> Name is a regular string/nvarchar(50), and Value is set as ntext in the DB. I'm trying to write a large amount of xml to the "Value" property. I get an exception every time: @p1 : String truncation: max=4000, len=35287, value='<lots of xml..../>' I've googled it quite a bit, and tried a number of different mapping configurations: <property name="Name" type="string" /> <property name="Value" type="string" > <column name="Value" sql-type="StringClob" /> </property> That's one example. Other configurations include "ntext" instead of "StringClob". Those configurations that don't throw mapping exceptions still throw the string truncation exception. Is this a problem ("feature") with SQL CE? Is it possible to put more than 4000 characters into a SQL CE database with NHibernate? If so, can anyone tell me how? Many thanks!
A: Okay, with many thanks to Artur in this thread, here's the solution: inherit from the SqlServerCeDriver with a new one, and override the InitializeParameter method: using System.Data; using System.Data.SqlServerCe; using NHibernate.Driver; using NHibernate.SqlTypes; namespace MySqlServerCeDriverNamespace { /// <summary> /// Overridden NHibernate SQL CE Driver, /// so that ntext fields are not truncated at 4000 characters /// </summary> public class MySqlServerCeDriver : SqlServerCeDriver { protected override void InitializeParameter( IDbDataParameter dbParam, string name, SqlType sqlType) { base.InitializeParameter(dbParam, name, sqlType); if (sqlType is StringClobSqlType) { var parameter = (SqlCeParameter)dbParam; parameter.SqlDbType = SqlDbType.NText; } } } } Then, use this driver instead of NHibernate's in your app.config <nhibernateDriver>MySqlServerCeDriverNamespace.MySqlServerCeDriver , MySqlServerCeDriverNamespace</nhibernateDriver> I saw a lot of other posts where people had this problem and solved it by just changing the sql-type attribute to "StringClob" - as attempted in this thread. I'm not sure why it wouldn't work for me, but I suspect it is the fact that I'm using SQL CE and not some other DB. But, there you have it!
A: <property name="Value" type="string" /> <column name="Value" sql-type="StringClob" /> </property> I'm assuming this is a small typo, since you've closed the property tag twice. Just pointing this out, in case it wasn't a typo.
A: Try <property name="Value" type="string" length="4001" />
A: Tried: <property name="Value" type="string" length="4001" /> and <property name="Value" type="string" > <column name="Value" sql-type="StringClob" length="5000"/> </property> Neither worked, I'm afraid... Same exception - it still says that the max value is 4000.
A: Why are you using the sub-element syntax? Try: <property name='Value' type='StringClob' />
A: On my current deployment of SQL CE and NHibernate I use a length of 4001. Then NHibernate generates the stuff as NTEXT instead of NVARCHAR. Try that. Another thing to use with NHibernate and SQL CE is: <session-factory> ... <property name="connection.release_mode">on_close</property> </session-factory> That solves some other problems for me, at least.
A: After reading your post, this modification got it working in my code: protected override void InitializeParameter(IDbDataParameter dbParam, string name, SqlType sqlType) { base.InitializeParameter(dbParam, name, sqlType); var stringType = sqlType as StringSqlType; if (stringType != null && stringType.LengthDefined && stringType.Length > 4000) { var parameter = (SqlCeParameter)dbParam; parameter.SqlDbType = SqlDbType.NText; } }
{ "language": "en", "url": "https://stackoverflow.com/questions/120470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do you visualize your sprint backlog? Most Scrum teams have some sort of whiteboard or other board upon which the stories/tasks for the current sprint are visualized. I'm curious as to how people organize this board. Do you use post-it notes? Are they color-coded? How do you group tasks? How do you distinguish the state of tasks? Etc...
A: I've seen groups use a whiteboard, and use different colors for each group of tasks. If you use note cards for your stories, you can put them up there as well, and divide them by release/iteration/group of tasks. This concept is explained better here. Update: I also use spreadsheets to visualize my sprints/iterations, because my team is not all co-located. I use tables and graphs similar to what was mentioned in Jim's answer.
A: Not for everyone, but for those running TFS, Scrum for Team System provides excellent sprint backlog reports. Failing that, I've personally maintained sprint backlogs using a spreadsheet, as per this article, sharing via something along the lines of Google Docs.
A: Somewhere on the web there is a blog post which is just a lot of scrum boards. It is really good to see how other people do it. Maybe someone can find it for us :) I think this looks like a pretty comprehensive way of doing things! http://www.xpday.net/Xpday2007/session/AgileInGovernment.html
A: Check out the Rally tool at rallydev.com. Depending on your needs, there is a free community edition. It's very easy to track stories and tasks within a given sprint, including estimations, actuals, and states for each story and task.
A: I usually use an Excel sheet on a shared network folder: one column is used to specify the "group" of the task, and one to specify the task itself. For completed tasks, we simply mark the row in green. The primary disadvantage of that is sharing - I've yet to find a decent solution that allows more than one person to edit the backlog. We have some ways to deal with it (by limiting the updates to a specific time of day, and then having the team update it together), but it is still annoying. For sprints with a small number of tasks, we simply write the tasks on a whiteboard and strike over the tasks as they are completed.
{ "language": "en", "url": "https://stackoverflow.com/questions/120474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do you deploy your common SharePoint library We have a class library where we keep a lot of the stuff that we often use when doing SharePoint development. How would you go about deploying this? Right now our best bet is to have it in a separate solution, and deploy that so that the assembly is deployed to the GAC. That way we ensure that the assembly is deployed to all application tiers and is available. Is there a better approach than this?
A: The GAC, of course, is the easiest way to deploy an assembly; however, what if you don't want to share this assembly across an entire server? Or what if the license doesn't permit that? So, there are two ways to deploy an assembly: * *GAC (you already know about it) *BIN folder. To deploy your assembly to the bin folder of your site (e.g. C:\Inetpub\wwwroot\wss\VirtualDirectories\80) you'll need to create a custom Security Policy file and change a security level in the web.config. This is not easy at all and can be quite frustrating, but it may be well worth it. More information: http://msdn.microsoft.com/en-us/library/cc768621.aspx
A: The GAC is usually your best choice. Besides ensuring you deploy to all applications, it's also easier in terms of security.
A: If I remember correctly, putting it in the GAC is the recommended course of action.
A: Also, remember that you have to add it to the SafeControls list in the web.config (see the example entry below). http://grounding.co.za/blogs/brett/archive/2008/05/23/sharepoint-register-an-assembly-as-a-safe-control-in-the-web-config-file.aspx
A: I've decided to deploy it to the GAC, since the assembly doesn't pose a security risk: it will not be used from Web Parts. I've researched a bit and deploying to the GAC is the recommended way to do it. You could argue that everything but Web Parts should be deployed to the GAC. Since Web Parts pose a potential security risk, it can be a good idea to make your own CAS policy and deploy to the SharePoint bin. Cheers.
A: Note that if you do decide to deploy to the BIN folder, you can deploy custom security policy settings such as new Permission Sets through your solution manifest file.
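For reference, a SafeControl entry of the kind mentioned above is just a line in the <SafeControls> section of the web application's web.config; the assembly name, namespace and public key token below are placeholders for your own library:

<SafeControls>
  <SafeControl Assembly="My.Common.Library, Version=1.0.0.0, Culture=neutral, PublicKeyToken=abcdef0123456789"
               Namespace="My.Common.Library" TypeName="*" Safe="True" />
</SafeControls>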
{ "language": "en", "url": "https://stackoverflow.com/questions/120484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to work around the [1] IE bug while saving an Excel file from a Web server? I've noticed that Internet Explorer adds a number in square brackets to files downloaded from the internet (usually [1]). This creates a big problem with downloading Excel spreadsheets, as square brackets are not valid filename characters inside an Excel worksheet name. That problem is IE specific; other browsers keep the same file name. So, if you have a pivot table auto-refreshed on file opening for example, you'll get an error message saying the name "file[1].yourPivotTableName" is not valid. Is there any solution to that problem? EDIT: It seems that whatever the filename suggested by HTTP directives, IE adds [1] in all cases, which causes the problem! (So, answers about filenames aren't helpful in that case) EDIT: I've tried some VBA code to save the file under another name when it opens. However, it doesn't work (same error message as before). Do you think there's a way to fix that with VBA?
A: I think that this happens when you open the spreadsheet in IE and IE saves it to a temporary file. And I think it only happens when the spreadsheet's filename has more than one dot in it. Try it with a simple "sample.xls". Another workaround is to tell users to save the file to the desktop and then open it.
A: It's a built-in feature in Internet Explorer. Stop using "Open", start using "Save" in the file-download window; otherwise IE will append "[1]" to the filename of the file that it places in some temporary folder. You could build some .NET application using System.IO.FileSystemWatcher that catches the event of the creation of the downloaded file or something and renames the file.
A: I've got it working using VBA provided by this cool guy (think of him fondly). It renames the file and then reattaches the pivots. http://php.kennedydatasolutions.com/blog/2008/02/05/internet-explorer-breaks-excel-pivot-tables/
A: I have solved this issue by using a method where we pass 3 parameters: the filename, the file extension (without the dot) and the HTTP request, then doing UTF-8 encoding of the filename and extension. Sample Code: public static String encoding(String fileName, String extension, HttpServletRequest request) { String user = request.getHeader( "user-agent" ); boolean isInternetExplorer = ( user.indexOf( "MSIE" ) > -1 ); String var = ""; try { fileName = URLEncoder.encode( fileName, "UTF-8" ); fileName = fileName.trim().replaceAll( "\\+", " " ); extension = URLEncoder.encode( extension, "UTF-8" ); extension = extension.trim().replaceAll( "\\+", " " ); if ( isInternetExplorer ) { var = "attachment; filename=\"" + fileName+"."+extension+"\""; } else { var = "attachment; filename*=UTF-8''" + fileName+"."+extension; } } catch ( UnsupportedEncodingException ence ) { var = "attachment; filename=\"" + fileName+"."+extension+"\""; ence.printStackTrace(); } return var; } This worked just fine in my case. Hope it will help you all.
A: Actually, the correct .NET code is as follows: Response.AppendHeader("content-disposition", "attachment;filename=file.xls"); Response.ContentType = "application/vnd.ms-excel"; Note: AppendHeader, not AddHeader, which I think only works in the debug web server and IIS7.
A: The following has worked for me: private string EncodeFileName(string fileName) { fileName = HttpUtility.UrlEncode(fileName, Encoding.UTF8).Replace("+", " "); if (HttpContext.Current.Request.UserAgent.ToLower().Contains("msie")) { var res = new StringBuilder(); var chArr = fileName.ToCharArray(); for (var j = 0; j < chArr.Length; j++) { if (chArr[j] == '.' && j != fileName.LastIndexOf(".")) res.Append("%2E"); else res.Append(chArr[j]); } fileName = res.ToString(); } return "\"" + fileName + "\""; }
A: You could just make sure that in the options box for the pivot the auto-refresh is switched off. Now, even when opened from the server, the pivot will work perfectly.
A: Put these four lines in your code: response.reset(); response.setHeader("Expires", "0"); response.setHeader("Cache-Control","must-revalidate,post-check=0, pre-check=0"); response.setHeader("Pragma", "public"); Hope this helps.
A: I have encountered the same problem and came up with (imo) a better solution that does not need any VBA. If you set the "Content-Disposition" header to "attachment; filename=<...>" instead of "inline; filename=<...>", normal browsers will open a dialog that allows you to save or open the file with the filename defined in the header, but Internet Explorer will behave in a kind of weird way. It will open the file download dialog, and if you press Save it will suggest the filename that is defined in the header, but if you press Open it will save the file to a temporary folder and open it with a name that is the same as your URN (without the 'namespace'), e.g. if your URI is http://server/folder/file.html, then IE will save your file as file.html (no brackets, woo hoo!). This leads us to a solution: write a script that handles requests for http://server/folder/*, and when you need to serve an XLS file just redirect to that script (use your filename instead of the asterisk) with Content-Disposition set to inline.
A: In .NET I have found from experience that only this seems to work for me: Response.AddHeader("Content-Disposition", "attachment; filename=excel.xls"); Response.AddHeader("Content-Type", "application/vnd.ms-excel"); Response.ContentType = "application/vnd.ms-excel"; The duplication smells, but so far I have never got to the bottom of it (maybe Seb's post explains this). Also, the "content-Disposition" value appears very finicky: use a : instead of a ; or omit the space between it and 'filename' and it blows! Also, if you have compression enabled on IIS this may fix things for you: Response.ClearHeaders()
{ "language": "en", "url": "https://stackoverflow.com/questions/120497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: delphi "Invalid use of keyword" in TQuery I'm trying to populate a TDBGrid with the results of the following TQuery against the file Journal.db: select * from Journal where Journal.where = "RainPump" I've tried both Journal."Where" and Journal.[Where] to no avail. I've also tried: select Journal.[Where] as "Location" with the same result. Journal.db is a file created by a third party and I am unable to change the field names. The problem is that the field I'm interested in is called 'where' and understandably causes the above error. How do I reference this field without causing the BDE (presumably) to explode? A: Aah, I'm loving delphi again... I found a workaround. The TQuery component has the Filter property :-) I omitted the "Where=" where clause from the query whilst still keeping all the other 'and' conditions. I set the Filter property to "Where = 'RainPump'". I set the Filtered property to True and life is good again. I'm still wondering if there's a smarter way to do this using this old technology but if it's stupid and it works, then it's not stupid. A: I'm afraid that someone reading this thread will get the impression that the BDE SQL engine cannot handle the query: select * from Journal where Journal."Where" = "RainPump" and will waste their time unnecessarily circumlocuting around it. In fact this construction works fine. The quotes around "Where" keeps the BDE from interpreting it as a keyword, just as you would expect. I don't know what is wrong in Baldric's particular situation, or what he tried in what order. He describes the problem as querying a *.db table, but his SQL error looks more like something you'd get in passthrough mode. Or, possibly he simplified his code for submission, thus eliminating the true cause of the error. My tests performed with: BDE v.5.2 (5.2.0.2) Paradox for Windows v. 7 (32b) Delphi 5.0 (5.62) Various versions of the statement that succeed: select * from Journal D0 where D0."Where" = "RainPump" select * from Journal where Journal."Where" = "RainPump" select * from ":common:Journal" D0 where D0."Where" = "RainPump" select * from ":common:Journal" where ":common:Journal"."Where" = "RainPump" select * from :common:Journal where Journal."Where" = "RainPump" select * from ":common:Journal" D0 where D0."GUMPIK" = 3 select * from ":common:Journal" where ":common:Journal"."GUMPIK" = 3 select * from :common:Journal where Journal."GUMPIK" = 3 Versions of the statement that look correct but fail with "Invalid use of keyword": select * from ":common:Journal" where :common:Journal."Where" = "RainPump" select * from :common:Journal where :common:Journal."Where" = "RainPump" select * from ":common:Journal" where :common:Journal."GUMPIK" = 3 select * from :common:Journal where :common:Journal."GUMPIK" = 3 -Al. A: Rewrite it like this, should work: select * from Journal where Journal.[where] = "RainPump" A: You can insert the resultset into a new table with "values" (specifying no column names) where you have given your own column names in the new table and then do a select from that table, Using a TQuery, something like: Query1.sql.clear; query1,sql.add('Insert into newtable values (select * from Journal);'); query1.sql.add('Select * from newtable where newcolumn = "Rainpump";'); query1.open; A: select * from Journal where Journal."where" = "RainPump" A: Me, I'd rename the awkward column. A: In MySQL, table/column names can be enclosed in `` (the angled single quotes). 
I'm not sure what the BDE allows, but you could try replacing [where] with `where` A: OK, so naming columns after keywords is bad in ANY SQL system. Would you name a column "select" or "count" or "alter" or "table" or perhaps just for the fun of it "truncate" or "drop"? I would hope not. Even if you build in the workaround for this instance you are creating a minefield for whoever comes after you. Do what mj2008 said and rename the bloody column. Allowing this column name to persist sets the worst kind of example when building a system, and it would get you on the poop list of any project manager.
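As a concrete illustration of the Filter workaround described in the accepted answer above, a minimal sketch might look like this (the component name and exact filter expression syntax are assumptions and may need tweaking for your BDE version):

// Drop the condition on the awkward "where" column from the SQL
// and filter client-side instead, as described above.
Query1.SQL.Text := 'select * from Journal';
Query1.Filter   := 'Where = ''RainPump''';  // filter on the field named "Where"
Query1.Filtered := True;
Query1.Open;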
{ "language": "en", "url": "https://stackoverflow.com/questions/120503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Optimising a SELECT query that runs slowly on Oracle but quickly on SQL Server I'm trying to run the following SQL statement in Oracle, and it takes ages to run: SELECT orderID FROM tasks WHERE orderID NOT IN (SELECT DISTINCT orderID FROM tasks WHERE engineer1 IS NOT NULL AND engineer2 IS NOT NULL) If I run just the sub-part that is in the IN clause, that runs very quickly in Oracle, i.e. SELECT DISTINCT orderID FROM tasks WHERE engineer1 IS NOT NULL AND engineer2 IS NOT NULL Why does the whole statement take such a long time in Oracle? In SQL Server the whole statement runs quickly. Alternatively, is there a simpler/different/better SQL statement I should use? Some more details about the problem: * *Each order is made of many tasks *Each order will be allocated (one or more of its tasks will have engineer1 and engineer2 set) or the order can be unallocated (all its tasks have null values for the engineer fields) *I am trying to find all the orderIDs that are unallocated. Just in case it makes any difference, there are ~120k rows in the table, and 3 tasks per order, so ~40k different orders. Responses to answers: * *I would prefer a SQL statement that works in both SQL Server and Oracle. *The tasks table only has an index on orderID and taskID. *I tried the NOT EXISTS version of the statement but it ran for over 3 minutes before I cancelled it. Perhaps I need a JOIN version of the statement? *There is an "orders" table as well with the orderID column, but I was trying to simplify the question by not including it in the original SQL statement. I guess that in the original SQL statement the sub-query is run every time for each row in the first part of the SQL statement - even though it is static and should only need to be run once? Executing ANALYZE TABLE tasks COMPUTE STATISTICS; made my original SQL statement execute much faster. Although I'm still curious why I have to do this, and if/when I would need to run it again? The statistics give Oracle's cost-based optimizer information that it needs to determine the efficiency of different execution plans: for example, the number of rows in a table, the average width of rows, highest and lowest values per column, number of distinct values per column, clustering factor of indexes etc. In a small database you can just set up a job to gather statistics every night and leave it alone. In fact, this is the default under 10g. For larger implementations you usually have to weigh the stability of the execution plans against the way that the data changes, which is a tricky balance. Oracle also has a feature called "dynamic sampling" that is used to sample tables to determine relevant statistics at execution time. It's much more often used with data warehouses, where the overhead of the sampling is outweighed by the potential performance increase for a long-running query. A: The IN clause is known to be pretty slow in Oracle. In fact, the internal query optimizer in Oracle does not handle statements with IN very well. Try using EXISTS: SELECT orderID FROM tasks WHERE orderID NOT EXISTS (SELECT DISTINCT orderID FROM tasks WHERE engineer1 IS NOT NULL AND engineer2 IS NOT NULL) Caution: please check that the query produces the same results. Edit: oops, the query is not well formed, but the general idea is correct. Oracle has to perform a full table scan for the second (inner) query, build the results and then compare them to the first (outer) query; that's why it's slowing down.
Try SELECT orderID AS oid FROM tasks WHERE NOT EXISTS (SELECT DISTINCT orderID AS oid2 FROM tasks WHERE engineer1 IS NOT NULL AND engineer2 IS NOT NULL and oid=oid2) or something similiar ;-) A: I would try using joins instead SELECT t.orderID FROM tasks t LEFT JOIN tasks t1 ON t.orderID = t1.orderID AND t1.engineer1 IS NOT NULL AND t1.engineer2 IS NOT NULL WHERE t1.orderID IS NULL also your original query would probably be easier to understand if it was specified as: SELECT orderID FROM orders WHERE orderID NOT IN (SELECT DISTINCT orderID FROM tasks WHERE engineer1 IS NOT NULL AND engineer2 IS NOT NULL) (assuming you have orders table with all the orders listed) which can be then rewritten using joins as: SELECT o.orderID FROM orders o LEFT JOIN tasks t ON o.orderID = t.orderID AND t.engineer1 IS NOT NULL AND t.engineer2 IS NOT NULL WHERE t.orderID IS NULL A: Some questions: * *How many rows are there in tasks? *What indexes are defined on it? *Has the table been analyzed recently? Another way to write the same query would be: select orderid from tasks minus select orderid from tasks where engineer1 IS NOT NULL AND engineer2 IS NOT NULL However, I would rather expect the query to involve an "orders" table: select orderid from ORDERS minus select orderid from tasks where engineer1 IS NOT NULL AND engineer2 IS NOT NULL or select orderid from ORDERS where orderid not in ( select orderid from tasks where engineer1 IS NOT NULL AND engineer2 IS NOT NULL ) or select orderid from ORDERS where not exists ( select null from tasks where tasks.orderid = orders.orderid and engineer1 IS NOT NULL OR engineer2 IS NOT NULL ) A: I agree with TZQTZIO, I don't get your query. If we assume the query did make sense then you might want to try using EXISTS as some suggest and avoid IN. IN is not always bad and there are likely cases which one could show it actually performs better than EXISTS. The question title is not very helpful. I could set this query up in one Oracle database and make it run slow and make it run fast in another. There are many factors that determine how the database resolves the query, object statistics, SYS schema statistics, and parameters, as well as server performance. Sqlserver vs. Oracle isn't the problem here. For those interested in query tuning and performance and want to learn more some of the google terms to search are "oak table oracle" and "oracle jonathan lewis". A: Often this type of problem goes away if you analyze the tables involved (so Oracle has a better idea of the distribution of the data) ANALYZE TABLE tasks COMPUTE STATISTICS; A: I think several people have pretty much the right SQL, but are missing a join between the inner and outer queries. Try this: SELECT t1.orderID FROM tasks t1 WHERE NOT EXISTS (SELECT 1 FROM tasks t2 WHERE t2.orderID = t1.orderID AND t2.engineer1 IS NOT NULL AND t2.engineer2 IS NOT NULL) A: "Although I'm still curious why I have to do this, and if/when I would need to run it again?" The statistics give Oracle's cost-based optimzer information that it needs to determine the efficiency of different execution plans: for example, the number of rowsin a table, the average width of rows, highest and lowest values per column, number of distinct values per column, clustering factor of indexes etc. In a small database you can just setup a job to gather statistics every night and leave it alone. In fact, this is the default under 10g. 
For larger implementations you usually have to weigh the stability of the execution plans against the way that the data changes, which is a tricky balance. Oracle also has a feature called "dynamic sampling" that is used to sample tables to determine relevant statistics at execution time. It's much more often used with data warehouses, where the overhead of the sampling is outweighed by the potential performance increase for a long-running query. A: Isn't your query the same as SELECT orderID FROM tasks WHERE engineer1 IS NOT NULL OR engineer2 IS NOT NULL ? A: How about: SELECT DISTINCT orderID FROM tasks t1 WHERE NOT EXISTS (SELECT * FROM tasks t2 WHERE t2.orderID=t1.orderID AND (engineer1 IS NOT NULL OR engineer2 IS NOT NULL)); I am not a guru of optimization, but maybe you also overlooked some indexes in your Oracle database. A: Another option is to use MINUS (EXCEPT on MSSQL): SELECT orderID FROM tasks MINUS SELECT DISTINCT orderID FROM tasks WHERE engineer1 IS NOT NULL AND engineer2 IS NOT NULL A: If you decide to create an ORDERS table, I'd add an ALLOCATED flag to it, and create a bitmap index. This approach also forces you to modify the business logic to keep the flag updated, but the queries will be lightning fast. It depends on how critical the queries are for the application. Regarding the answers, the simpler the better in this case. Forget subqueries, joins, distinct and group bys, they are not needed at all! A: What proportion of the rows in the table meet the condition "engineer1 IS NOT NULL AND engineer2 IS NOT NULL"? This tells you (roughly) whether it might be worth trying to use an index to retrieve the associated orderids. Another way to write the query in Oracle that would handle unindexed cases very well would be: select distinct orderid from ( select orderid, max(case when engineer1 is null and engineer2 is null then 0 else 1 end) over (partition by orderid) as max_null_finder from tasks ) where max_null_finder = 0 A: The Oracle optimizer does a good job of processing MINUS statements. If you re-write your query using MINUS, it is likely to run quite quickly: SELECT orderID FROM tasks MINUS SELECT DISTINCT orderID FROM tasks WHERE engineer1 IS NOT NULL AND engineer2 IS NOT NULL A: New take. Iff: * *The COUNT() function does not count NULL values and * *You want the orderID of all tasks where none of the tasks have either engineer1 or engineer2 set to a value then this should do what you want: SELECT orderID FROM tasks GROUP BY orderID HAVING COUNT(engineer1) = 0 AND COUNT(engineer2) = 0 Please test it. A: I agree with ΤΖΩΤΖΙΟΥ and wearejimbo that your query should be... SELECT DISTINCT orderID FROM Tasks WHERE Engineer1 IS NULL OR Engineer2 IS NULL; I don't know about SQL Server, but this query won't be able to take advantage of any indexes because null rows aren't in indexes. The solution to this would be to re-write the query in a way that would allow a function-based index to be created that only includes the null-value rows. This could be done with NVL2, but would likely not be portable to SQL Server. I think the best answer is one that does not meet your criteria: write a different statement for each platform, whichever is best for that platform. A: Sub-queries are "bad" with Oracle. It's generally better to use joins.
Here's an article on how to rewrite your subqueries with join : http://www.dba-oracle.com/sql/t_rewrite_subqueries_performance.htm A: Here is an alternate approach which I think gives what you want: SELECT orderID FROM tasks GROUP BY orderID HAVING COUNT(engineer1) = 0 OR COUNT(engineer2) = 0 I'm not sure if you want "AND" or "OR" in the HAVING clause. It sounds like according to business logic these two fields should either both be populated or both be NULL; if this is guaranteed then you could reduce the condition to just checking engineer1. Your original query would, I think, give multiple rows per orderID, whereas mine will only give one. I am guessing this is OK since you are only fetching the orderID. A: If you have no index over the Engineer1 and Engineer2 columns then you are always going to generate a Table Scan in SQL Server and the equivalent whatever that may be in Oracle. If you just need the Orders that have unallocated tasks then the following should work just fine on both platforms, but you should also consider adding the indexes to the Tasks table to improve query perfomance. SELECT DISTINCT orderID FROM tasks WHERE (engineer1 IS NULL OR engineer2 IS NULL)
{ "language": "en", "url": "https://stackoverflow.com/questions/120504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Where can I find a free compiler for Windows Vista which works in Fullscreen mode? I need to know where i can get a free version of C and C++ compilers for Windows Vista. Many of the versions i have tried are not working in fullscreen mode. A: Have you tried MinGW? It's a command-line compiler. I don't have Vista, so I can't test it, but it should work. A: Visual Studio Express 2008 if free. It's lacking some specific features that might be a requirements for you. You can check here. I'm not sure what you mean by "not working in fullscreen mode". Can you be more explicit about what you have tried and exactly how it hasn't worked? A: Please take a look at visual studio 2008 express edition. It is a freeware IDE and compiler from Microsoft for C#, VB.Net, C++, SQL, and web. The Express line is a lightweight version of Microsoft Visual Studio 2008 product. I believe it has full screen support in the IDE. See the wikipedia page for further reading. A: I would recommend trying out the on of thew Microsoft Visual Express packages. http://www.microsoft.com/express/. I found them very good for trying things out especially for a single developer who is not too concerned about deployment and getting a product to market. I am not sure if it works for Vista but I can't see why it wouldn't work. A: Apart from the aforementioned MSVS and MinGW, you could try Eclipse CDT and Code::Blocks. While MSVS, especially with Visual Assist (costs money), is quite a powerful and convenient tool, the other IDEs have the benefit of working with more OSes. And yes, they’re powerful and convenient, too. Update: True, these are IDEs, not compilers. For C++ under MS Windows, they both use MinGW as the compiler. But it’s my understanding that it’s IDEs that the OP needs. A: You can try the following to see if they work on Vista. http://nuwen.net/mingw.html#download (comes with Boost with all the libs already built) http://www.tdragon.net/recentgcc/
{ "language": "en", "url": "https://stackoverflow.com/questions/120533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: How to update components in different threads? .NET CF I'm programming a .NET Compact Framework application which shows maps on a PDA. I've created an ad hoc component that paints its own piece of the whole map; the big picture is composed of several of these components. I did it this way to avoid the latency of painting the whole map in a single step. What I would like to do now is to paint these pieces in their own threads, so the map appears to grow as a single entity and (also, and more importantly) to avoid freezing the rest of the user interface. Right now each piece of the map is painted in its OnPaint method. My idea is to, somehow, tell the system "execute this code in a thread please". Something like: protected override void OnPaint(PaintEventArgs e) { // <code to be executed in a thread> e.Graphics.paintThis(); e.Graphics.paintThat(); whateverItTakesToPaintThisPieceOfTheMap(); // </code to be executed in a thread> } Do you know how to do this? Or is my approach simply wrong? Thanks for your time! A: The approach is wrong. Code that updates the UI has to run on the UI thread. You'll get an exception if you update the UI from another thread. A: In order to call a function that updates the UI from another thread, use the Invoke function of the Form. Here is a good reference: http://weblogs.asp.net/justin_rogers/articles/126345.aspx Edit: as pointed out in comments, BeginInvoke would be better if you want the calling code not to wait for the UI thread. (A small sketch of this approach appears at the end of this thread.) A: Draw the map in memory in a background thread, then render (in the UI thread) that raster image to screen when ready. Use BufferedGraphics if possible, GDI otherwise. A: If rendering the map is time consuming and you don't want to freeze the GUI thread (making your app unresponsive), you could divide the screen into cells. Use a background thread to render a cell as a bitmap, use Invoke to tell the GUI thread to draw the finished cell, and when Invoke returns, let the thread continue on with the next cell. You will need to draw directly onto the control (not inside Paint()) or you will need to invalidate the relevant rect and have some logic to make sure your calculated image matches what the system wants you to draw. This will make your image appear gradually, and your UI will be responsive. If the user does something that makes it unnecessary to continue drawing, just abort the thread. A: I would suggest that you have some kind of messaging system from the underlying threads to the main UI thread. This way the main UI thread makes all the changes and is triggered from the underlying threads. Also make sure you can send some data with those messages, in case you want to send some complex information back to the main UI thread.
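A minimal sketch of the Invoke approach described above, for a hypothetical map-tile control (the names RenderTileOffscreen and backBuffer are illustrative, not from the original post; the Compact Framework's Invoke overloads are more limited than the desktop ones, so the EventHandler form is used here):

private Bitmap backBuffer;   // assumed field holding the most recently rendered tile

private void StartRenderingTile()
{
    // Do the slow drawing on a worker thread...
    ThreadPool.QueueUserWorkItem(delegate
    {
        Bitmap tile = RenderTileOffscreen();   // hypothetical slow off-screen rendering

        // ...then marshal back to the UI thread before touching the control.
        Invoke(new EventHandler(delegate
        {
            backBuffer = tile;
            Invalidate();   // OnPaint now only blits backBuffer, so it stays fast
        }));
    });
}

protected override void OnPaint(PaintEventArgs e)
{
    if (backBuffer != null)
        e.Graphics.DrawImage(backBuffer, 0, 0);
}

This keeps every Graphics call that touches the screen on the UI thread, which is what the answers above require.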
{ "language": "en", "url": "https://stackoverflow.com/questions/120540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Why would an application act differently after the VS debugger is attached? I have a desktop application written in C#. It is trying to manage a socket connection and fails. The same application is successful if it is attached to the Visual Studio debugger. How can it be debugged? A: I'd say timing issues too: having the debugger attached will slow down the code slightly, which might mean that a race condition isn't occurring. To debug it, try adding some logging code to your application; I personally use log4net (a minimal example appears at the end of this thread). You shouldn't have any problems with malloc and the like as you are coding in C#. If you are running a web app, it might also be that there is a difference between the Cassini web server in VS and the one you are deploying to. A: Usually, timing issues. Are there threads involved? If C/C++, then there could be a lot of reasons because of how memory management bugs might behave. A: You might have variables whose default values are different when running under the debugger as opposed to standalone. Race conditions might be another idea if there are threads involved. If you are allocating RAM via malloc or new, then make sure that the memory is initialized properly before using it. A: This is a classic example of timing. If it works in the debugger then it means you have to re-factor your code a bit to handle this. Now if your app is a server socket that receives connections from clients and tries to spawn a thread for each of those connections, you might have to consider using select() to manage connections within one thread. A: We've actually encountered a similar issue. Timing is a critical part of this, as well as throwing no-ops into the code (a primary difference with debugged code). With socket programming, it seems as though debugging with VisualStudio.Net is like having additional Application.DoEvents() calls made. We have found that we have stuff that will fail (non-debugging) unless we allow the component to breathe (e.g. handle its own events) by calling Application.DoEvents(). A: When Visual Studio attaches to your application, the CLR and JIT have subtle runtime differences to enable debugging. Garbage collection, for example, is different. http://stupiddumbguy.blogspot.com/2008/05/net-garbage-collection-behavior-for.html A: It might be because you're watching properties with side effects in your debugger. Though the other answers here are more likely...
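As an illustration of the logging suggestion in the first answer, a minimal log4net setup might look like the following (the class name and configuration are assumptions; see the log4net documentation for real configuration options). Timestamped log lines are particularly useful for spotting the kind of race conditions discussed above.

using System;
using log4net;
using log4net.Config;

class SocketClient
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(SocketClient));

    static void Main()
    {
        // Simplest possible configuration: log everything to the console.
        BasicConfigurator.Configure();

        Log.Debug("About to connect...");
        try
        {
            // ... the socket code that misbehaves without the debugger ...
        }
        catch (Exception ex)
        {
            Log.Error("Connection attempt failed", ex);
        }
    }
}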
{ "language": "en", "url": "https://stackoverflow.com/questions/120556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can a task wait on multiple vxworks Queues? We have a vxWorks design which requires one task to process both high and low priority messages sent over two message queues. The messages for a given priority have to be processed in FIFO order. For example, process all the high priority messages in the order they were received, then process the low priority messages. If there is no high priority message, then process the low priority message immediately. Is there a way to do this? A: If you use named pipes (pipeDevCreate(), write(), read()) instead of message queues, you can use select() to block until there are messages in either pipe. Whenever select() triggers, you process all messages in the high priority pipe. Then you process a single message from the low priority pipe. Then call select again (loop). Example Code snippets: // Initialization: Create high and low priority named pipes pipeDrv(); //initialize pipe driver int fdHi = pipeDevCreate("/pipe/high",numMsgs,msgSize); int fdLo = pipeDevCreate("/pipe/low",numMsgs,msgSize); ... // Message sending thread: Add messages to pipe write(fdHi, buf, sizeof(buf)); ... // Message processing Thread: select loop fd_set rdFdSet; while(1) { FD_ZERO(&rdFdSet); FD_SET(fdHi, &rdFdSet); FD_SET(fdLo, &rdFdSet; if (select(FD_SETSIZE, &rdFdSet, NULL, NULL, NULL) != ERROR) { if (FD_ISSET(fdHi, &rdFdSet)) { // process all high-priority messages while(read(fdHi,buf,size) > 0) { //process high-priority } } if (FD_ISSET(fdLo, &rdFdSet)) { // process a single low priority message if (read(fdLo,buf,size) > 0) { // process low priority } } } } A: In vxWorks, you can't wait directly on multiple queues. You can however use the OS events (from eventLib) to achieve this result. Here is a simple code snippet: MSG_Q_ID lowQ, hiQ; void Init() { // Task Initialization Code. This should be called from the task that will // be receiving the messages ... hiQ = msgQCreate(...); lowQ = msgQCreate(...); msgQEvStart(hiQ, VX_EV01); // Event 1 sent when hiQ receives message msgQEvStart(loQ, VX_EV02); // Event 2 sent when loQ receives message ... } void RxMessages() { ... UINT32 ev; // Event received // Blocks until we receive Event 1 or 2 eventReceive(VX_EV01 | VX_EV02, EVENT_WAIT_ANY, WAIT_FOREVER, &ev); if(ev & VX_EV01) { msgQReceive(hiQ, ...); } if(ev & VX_EV02) { msgQReceive(loQ, ...); } } Note that you need to modify that code to make sure you drain all your queues in case there is more than one message that was received. The same mechanism can also be applied to Binary semaphores using the semEvStart() function.
{ "language": "en", "url": "https://stackoverflow.com/questions/120561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: WSE Server under Windows 2000 I have a problem with WSE on Windows 2000 machines. The method SoapReceivers.Add crashes under Windows 2000. It seems that the Windows 2000 OS does not allow starting a listening service on exclusive addresses. So I found out that I can set this to "false" in the config file, but the problem is still there. Anybody with an idea? Greetings, Kai. A: I vaguely remember there were a bunch of different WSE releases (1-3) and service packs. I know that the major releases were not backward compatible and release 2.0 had some internal incompatibilities between service packs. You may have a compatibility problem. Make sure that you've installed the version that the app was written against. If that's not the problem, then look at this discussion thread. It might offer some clues. Good luck!
{ "language": "en", "url": "https://stackoverflow.com/questions/120567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: A good pattern/solution to the social web user issue of point whoring? Take any social website like Digg or Stack Overflow that somehow lets users reward points for stories, questions, etc.. What happens is quite similar to the process that lead to the rise of tabloid newspapers that feed only headlines and no content to its readers. Users are usually smart enough to figure out strategies to maximize their point rewards regardless of whether that strategy harmonized with the goal of the website or not. I identify the following problems * *People will swamp more general and more entertaining questions with answers. Answering more specific questions requires actual domain knowledge. *Getting most points is often tied to involving most users. Given a random web crowd this unfortunately means mostly generic, subjective, argumentative and unspecific entries. As the creator of a social website you have the unique chance to influence social behavior towards a favorable direction. I think that the influence the system has on the behavior of the people far outweighs the initial seed of users. I'm interested in patterns/solutions that aim to solve this problem in terms of: * *ranking algorithms *expert systems *limiting/creating ways of social interaction *information that is provided/hidden In particular, given the perspective of Stack Overflow, how could one solve entries like "What's your favorite programmer cartoon" become the most popular entries (I pick this one because it is a good example for the undesired phenomenon). A: This isn't fundamentally different from asking what's the best way to stop people from cash whoring in real life. Looking at it in that context, I suspect the only way to stop this behaviour is to remove the opportunity ie don't use points (or money). The problem is that if you do that, there's no quantifiable reason to get out of bed. Points and cash both measure social status and ability to influence others. Ability to sequestre resources is a primary sexual selection criterion, which is why greed is so powerful; it is a direct sublimation of the sex drive. A: I find the voting system that Stack Overflow uses to be quite good. Other people essentially judge you as an expert or not. And the good answers bubble up to the top. I'd stray away from punishing people who don't get voted for, and maybe have some threshold for those who get consistently voted down, losing points. That said, with popular topics most people won't be bothered sorting through the flack to find the diamonds in the rough. So you are going to lose some good answers. Also Stack Overflow seems to punish people who post lost of unvoted answers... My score went down after posting this as I have a few posts with 0 votes. [Update] In response to comments: I think if you want more specific answers you have to dig deeper and look at a questions for a particular tag. I think the recent post What's your favorite “programmer” cartoon? has shown that people will swamp more general and more "entertaining" questions with answers as they are more like procrastination. Answering more specific questions requires actual domain knowledge. As for why my score went down it may have been a bug. My score went from 91 to 81 when I posted this answer and then rose to 111 after that. As I'm not privy to the algorithm that Stack overflow uses, I assumed that that was what had happened. It might just have normalised my score. [Update 2] I think that social networks have to police themselves. 
They are owned and run by the community by their very nature. Just looking at the AACS Revolt that happened on Digg last year is proof enough that you can't control it. The trick is to have enough users who will mod down the garbage and mod up the good stuff. Perhaps hiring a number of moderators who can do this full time, or even just giving a few people who have proven themselves to be good citizens moderator rights, with extra weight on their moderation, most people online live for this kind of recognition and some might be willing to do it for free as they will have become members of some kind of social networking elite. The question is how do you stop them from abusing this power? As Stan Lee is fond of saying: With great power comes great responsibility. A: It's probably hard to have a completely workable solution. Essentially, if the reason people attain points has a positive effect on the community, then point-whoring will increase the quality of the community over time. Would a measure that decreases the rate at which points grow in proportion to the number of points attained be workable? (i.e first 100 points are 'normal', next 100 take 20% longer to attain, etc) A: Where there's a lot of people you should always expect the casual jerk or childish individual or attention whore or troll to show up. I don't think there's a method, a scientifically proved strategy, a technology to prevent this from happening. After all programmers are (almost always= people, also, and people tend to be quite diverse in behavior, communication skills, patience to bear the stupid, self-consciousness, sense of opportunity and even common sense. Being an heterogeneous group is something that might probably enrich everyone involved. There shall be, nonetheless, a system to slow down the annoying people. Downvoting and closing question and answers might work - provided that the vast majority of the participants are well-behaved, responsible adults. If this is not the case... well, then everyone who feels offended would better bail out, because there's no value in attending a community where these tenets are widely disregarded. The chance is that if the subject around which the community gathered in the beginning is quite selective in itself then a natural selection will occurr in the long term. I mean: if someone is REALLY interested in programming after a while he/she will calm down, ask more intelligent answers, give more ponderate answer, more polite comments... A: It is impossible to avoid entries that are in conflict with a specific social websites goal because one cannot prevent entertainment and procrastination. But given any entry it should be easy to decide weather it is harmonizing with the websites goal. The solution could be to reward posters if they self police/tag their entry to the right category, and give all users a way to filter out categories they do not want to see. For instance, on Stack Overflow most entertainment posts appear because people want to collect points (hypothesis). If this assumption is correct, then the solution of the pattern above would be that whoever tags his entry as entertaining gets twice as many points for upvotes. However nobody interested in entries tagged specific is going to see it when he has entertaining filtered out. A: This is fundamentally the same as making a startup. The initial moderators, hired by the owner/ceo/founder(s), set the tone for the whole community. Fortunately Jeff and Co. 
are moderating as well, so we can look at their example as we choose what stories to dismiss. I find this to be a very difficult problem, though, as I too enjoy these offtopic conversations, and I know there will be backlash against me, individually, if I close some of these topics - I've already experienced it, and other moderators have to some greater degree. But these are growing pains in any community, and there's no technical solution. Jeff et al need to cultivate the culture here so that the memes and DNA of the community are set and directed in a good direction. But it's not as easy as that either, because until you start a community you don't know what it's capable of, or what it will become. You have to let it grow a bit itself, and exercise light authority so you don't stifle something that may be better (inevitably will be, actually) than your initial vision. In other words, the key here is to solve the social problem with a social solution - select strong moderators that will set the tone and enforce it. There is no way, technically, to prevent people from gaming the system, especially when there are other humans in the feedback loop - they will always find a way to game the other players into doing what they want. -Adam A: The most important lesson with regard to the design of any social computing is that community dynamics problems cannot be solved purely by technological means. In other words, whatever the solution you implement, if you have users to whom getting points (or trolling or getting involved in flame wars or whatever other disruption) is more important that participating in the community, that is what they will do. When designing the solution, you "simply" have to make sure that there are sufficient benefits (badges, entertainment, information, rewarding feedback) in place for people who aren't in it just for the points. Then, if you are very lucky, you will attract the right kind of people. This may seem trite, but it is one of the most importat results of CSCW research (Olson and Olson, 2000): unless users are prepared to collaborate/play fair/be productive, then no amount of technology is going to solve that problem. A: The problem is that everyone wants to be included. Look at questions with 100+ answers: Who is really going to read the last one? I'm not an expert in anything, if I compare myself to others on this site, but I still want to answer questions (see I'm doing it now). I wish someone would start a forum for some of these issues. A: Perhaps there could be a "difficulty" or "obscurity" multiplier for answers to less popular questions. People will point-whore: it's in our nature to crave recognition. Trying to stop them would be a lot of work, and if you succeed, many of the largest contributors to the site might then leave. You need to make the behaviour you desire attractive to point-whores. A: Perhaps you should compute reputation on something based on the ratio of votes:views. This might discourage the populism going on and focus people's attention to providing answers across a breadth of less popular but still valid and relevant questions - questions that might actually be of interest to someone. I would actually like to do some editing and tidying up of posts - Many posts could be improved by a bit of editorial work, filling out detail or providing links to other sources. 
However, without a dedicated exercise in karma-whoring (on what would be off-topic posts for me) it's very unlikely I would ever my rep up to the level where I have permissions to do this in my real areas of expertise where I might actually have something of value to say. A: Separate "points" into different categories. If Digg or Reddit had a "funny/lame" vote in addition to a "interesting/uninteresting" vote, one could screen out things which have a "funny" vote that's higher than its "interesting" vote. Slashdot has a moderation system that makes such distinctions, and you can filter it such that "+1, Funny" becomes "+0, Funny". Heck, you can even make "-1, Troll" work as "+5, Troll" (not that you should, but you could). A: People will swamp more general and more entertaining questions with answers. Answering more specific questions requires actual domain knowledge. First of all, I challenge your assertion that this is a problem. More general questions will have a more general audience and will be relevant to a greater number of people. Asking a question like "How do I do OBSCURE_TASK in LANGUAGE using TOOLKIT?" is indeed more specific and probably requires more specific knowledge to answer, but is likely only to be useful to people doing that task in that language in that toolkit. Should that float to the top when it's only of interest to a small number of people? Answered obscure questions can be found by searching. Browsing is better for finding general things. Users are usually smart enough to figure out strategies to maximize their point rewards regardless weather that strategy harmonized with the goal of the website or not. Yes, that's the cost of involving other people in the process. They will behave based on their own desires and motivations. If you truly want to enforce a reputation system based on whether or not it "harmonizes with the goal of the website," you want a dictatorship, not a community site.
{ "language": "en", "url": "https://stackoverflow.com/questions/120578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Expose VSTO functionality to VBA w/o local admin What would be the best way to expose certain functionality in a Dotnet VSTO Excel add-in to VBA, without requiring the user to be a local administrator (i.e. no COM registration, no HttpListener)? Would it be possible to use Microsoft Message Queues from VBA? A: If I may interpret your question as broadly as "How do I expose functionality in a .Net assembly to Excel without COM registration" then an excellent solution is to use Excel's XLL interface. Basically one deploys an xll shim and an associated .Net dll. When the xll is loaded it reflects over the dll and exposes the functions therein to Excel. An open source implementation can be found here http://exceldna.typepad.com/blog/2006/01/introducing_exc.html A commercial, closed source, but more feature rich one here http://www.managedxll.com/ A: You can't simply instantiate them as COM objects, as VSTO will not be running in the default application domain. Here is how I've done it, which is admittedly a bit convoluted. This was with a VSTO workbook saved as an XLA file, which in some ways is more flexible than a pure VSTO add-in. * *You need to generate a type library using regasm.exe that will be referenced by your VBA code. *Create a root factory class in your .NET object model, which is capable of instantiating any of the classes you want to consume in VBA (something like the "Application" class in the Office object models). *You then need to find a way to pass a reference to an instance of this factory class to VBA. Once VBA has a reference to an instance of this factory class, it can call its methods to instantiate any other objects in your .NET object model. *To pass an instance to VBA, define a macro in your VBA code as follows Example code: Private m_objMyFactory As Object Public Sub RegisterFactory(MyFactory As Object) On Error GoTo ErrHandler Set m_objMyFactory = MyFactory Exit Sub ErrHandler: MsgBox "An unexpected error occurred when registering the Factory component: " & Err.Description Exit Sub End Sub * *Now add code to the VSTO ThisWorkbook_Open event handler, which instantiates your factory object and calls the above macro passing a reference to the factory object. Example code: void ThisWorkbook_Open() { try { ThisApplication.Run("RegisterFactory", new MyNamespace.MyFactory(), Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing); } catch (Exception ex) { MessageBox.Show("Load error: " + ex.ToString()); } } There are a few more issues to consider to get this working robustly - if you're interested in following this up let me know and I'll post more details. 
A: You may be interested in Excel4Net (it is similar to ExcelDNA and ManagedXll, but easier to use): website: http://www.excel4net.com blog: http://excel4net.blogspot.com A: Just for reference for future readers: You might also want to have a look to this question: Accessing a VSTO application-addin types from VBA (Excel) and, in particular, to the blog that is referenced there: VSTO Add-ins, COMAddIns and RequestComAddInAutomationService By overriding RequestComAddInAutomationService() you can expose whatever functionality you want, by defining a Facade class that provides entry points for all those features, and exposing that class to VBA.
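To make the RequestComAddInAutomationService route mentioned above a little more concrete, here is a minimal sketch for an application-level VSTO add-in (the interface, class and add-in names are placeholders, and this is only the general pattern described in the linked post, not code taken from it):

using System.Runtime.InteropServices;

[ComVisible(true)]
public interface IAddInFacade
{
    string Ping();
}

[ComVisible(true)]
[ClassInterface(ClassInterfaceType.None)]
public class AddInFacade : IAddInFacade
{
    public string Ping() { return "pong"; }
}

public partial class ThisAddIn
{
    private AddInFacade facade;

    // Office asks the add-in for this object once; whatever is returned here
    // becomes the COMAddIn.Object that VBA can see, with no extra COM registration.
    protected override object RequestComAddInAutomationService()
    {
        if (facade == null)
            facade = new AddInFacade();
        return facade;
    }
}

On the VBA side the object is then reached with something like Application.COMAddIns("MyAddInName").Object, where the add-in name/ProgId depends on how the add-in is registered.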
{ "language": "en", "url": "https://stackoverflow.com/questions/120579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: SVG rendering in a PyGame application. Prior to Pygame 2.0, Pygame did not support SVG. Then how did you load it? In a pyGame application, I would like to render resolution-free GUI widgets described in SVG. How can I achieve this? (I like the OCEMP GUI toolkit but it seems to be bitmap dependent for its rendering) A: You can use Cairo (with PyCairo), which has support for rendering SVGs. The PyGame webpage has a HOWTO for rendering into a buffer with a Cairo, and using that buffer directly with PyGame. A: I realise this doesn't exactly answer your question, but there's a library called Squirtle that will render SVG files using either Pyglet or PyOpenGL. A: Cairo cannot render SVG out of the box. It seems we have to use librsvg. Just found those two pages: * *Rendering SVG with libRSVG,Python and c-types *How to use librsvg from Python Something like this should probably work (render test.svg to test.png): import cairo import rsvg WIDTH, HEIGHT = 256, 256 surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, WIDTH, HEIGHT) ctx = cairo.Context (surface) svg = rsvg.Handle(file="test.svg") svg.render_cairo(ctx) surface.write_to_png("test.png") A: This is a complete example which combines hints by other people here. It should render a file called test.svg from the current directory. It was tested on Ubuntu 10.10, python-cairo 1.8.8, python-pygame 1.9.1, python-rsvg 2.30.0. #!/usr/bin/python import array import math import cairo import pygame import rsvg WIDTH = 512 HEIGHT = 512 data = array.array('c', chr(0) * WIDTH * HEIGHT * 4) surface = cairo.ImageSurface.create_for_data( data, cairo.FORMAT_ARGB32, WIDTH, HEIGHT, WIDTH * 4) pygame.init() window = pygame.display.set_mode((WIDTH, HEIGHT)) svg = rsvg.Handle(file="test.svg") ctx = cairo.Context(surface) svg.render_cairo(ctx) screen = pygame.display.get_surface() image = pygame.image.frombuffer(data.tostring(), (WIDTH, HEIGHT),"ARGB") screen.blit(image, (0, 0)) pygame.display.flip() clock = pygame.time.Clock() while True: clock.tick(15) for event in pygame.event.get(): if event.type == pygame.QUIT: raise SystemExit A: pygamesvg seems to do what you want (though I haven't tried it). A: The question is quite old but 10 years passed and there is new possibility that works and does not require librsvg anymore. There is Cython wrapper over nanosvg library and it works: from svg import Parser, Rasterizer def load_svg(filename, surface, position, size=None): if size is None: w = surface.get_width() h = surface.get_height() else: w, h = size svg = Parser.parse_file(filename) rast = Rasterizer() buff = rast.rasterize(svg, w, h) image = pygame.image.frombuffer(buff, (w, h), 'ARGB') surface.blit(image, position) I found Cairo/rsvg solution too complicated to get to work because of dependencies are quite obscure to install. A: SVG files are supported with Pygame Version 2.0. Since Version 2.0.2, SDL Image supports SVG (Scalable Vector Graphics) files (see SDL_image 2.0). Therefore, with pygame version 2.0.1, SVG files can be loaded into a pygame.Surface object with pygame.image.load(): surface = pygame.image.load('my.svg') Before Pygame 2, you had to implement Scalable Vector Graphics loading with other libraries. Below are some ideas on how to do this. A very simple solution is to use CairoSVG. With the function cairosvg.svg2png, an Vector Graphics (SVG) files can be directly converted to an [Portable Network Graphics (PNG)] file Install CairoSVG. 
pip install CairoSVG Write a function that converts a SVF file to a PNG (ByteIO) and creates a pygame.Surface object may look as follows: import cairosvg import io def load_svg(filename): new_bites = cairosvg.svg2png(url = filename) byte_io = io.BytesIO(new_bites) return pygame.image.load(byte_io) See also Load SVG An alternative is to use svglib. However, there seems to be a problem with transparent backgrounds. There is an issue about this topic How to make the png background transparent? #171. Install svglib. pip install svglib A function that parses and rasterizes an SVG file and creates a pygame.Surface object may look as follows: from svglib.svglib import svg2rlg import io def load_svg(filename): drawing = svg2rlg(filename) str = drawing.asString("png") byte_io = io.BytesIO(str) return pygame.image.load(byte_io) Anther simple solution is to use pynanosvg. The downside of this solution is that nanosvg is no longer actively supported and does not work with Python 3.9. pynanosvg can be used to load and rasterize Vector Graphics (SVG) files. Install Cython and pynanosvg: pip install Cython pip install pynanosvg The SVG file can be read, rasterized and loaded into a pygame.Surface object with the following function: from svg import Parser, Rasterizer def load_svg(filename, scale=None, size=None, clip_from=None, fit_to=None, foramt='RGBA'): svg = Parser.parse_file(filename) scale = min((fit_to[0] / svg.width, fit_to[1] / svg.height) if fit_to else ([scale if scale else 1] * 2)) width, height = size if size else (svg.width, svg.height) surf_size = round(width * scale), round(height * scale) buffer = Rasterizer().rasterize(svg, *surf_size, scale, *(clip_from if clip_from else 0, 0)) return pygame.image.frombuffer(buffer, surf_size, foramt) Minimal example: import cairosvg import pygame import io def load_svg(filename): new_bites = cairosvg.svg2png(url = filename) byte_io = io.BytesIO(new_bites) return pygame.image.load(byte_io) pygame.init() window = pygame.display.set_mode((300, 300)) clock = pygame.time.Clock() pygame_surface = load_svg('Ice-001.svg') size = pygame_surface.get_size() scale = min(window.get_width() / size[0], window.get_width() / size[1]) * 0.8 pygame_surface = pygame.transform.scale(pygame_surface, (round(size[0] * scale), round(size[1] * scale))) run = True while run: clock.tick(60) for event in pygame.event.get(): if event.type == pygame.QUIT: run = False window.fill((127, 127, 127)) window.blit(pygame_surface, pygame_surface.get_rect(center = window.get_rect().center)) pygame.display.flip() pygame.quit() exit() A: The last comment crashed when I ran it because svg.render_cairo() is expecting a cairo context and not a cairo surface. I created and tested the following function and it seems to run fine on my system. 
import array,cairo, pygame,rsvg def loadsvg(filename,surface,position): WIDTH = surface.get_width() HEIGHT = surface.get_height() data = array.array('c', chr(0) * WIDTH * HEIGHT * 4) cairosurface = cairo.ImageSurface.create_for_data(data, cairo.FORMAT_ARGB32, WIDTH, HEIGHT, WIDTH * 4) svg = rsvg.Handle(filename) svg.render_cairo(cairo.Context(cairosurface)) image = pygame.image.frombuffer(data.tostring(), (WIDTH, HEIGHT),"ARGB") surface.blit(image, position) WIDTH = 800 HEIGHT = 600 pygame.init() window = pygame.display.set_mode((WIDTH, HEIGHT)) screen = pygame.display.get_surface() loadsvg("test.svg",screen,(0,0)) pygame.display.flip() clock = pygame.time.Clock() while True: clock.tick(15) event = pygame.event.get() for e in event: if e.type == 12: raise SystemExit A: Based on other answers, here's a function to read a SVG file into a pygame image - including correcting color channel order and scaling: def pygame_svg( svg_file, scale=1 ): svg = rsvg.Handle(file=svg_file) width, height= map(svg.get_property, ("width", "height")) width*=scale; height*=scale data = array.array('c', chr(0) * width * height * 4) surface = cairo.ImageSurface.create_for_data( data, cairo.FORMAT_ARGB32, width, height, width*4) ctx = cairo.Context(surface) ctx.scale(scale, scale) svg.render_cairo(ctx) #seemingly, cairo and pygame expect channels in a different order... #if colors/alpha are funny, mess with the next lines import numpy data= numpy.fromstring(data, dtype='uint8') data.shape= (height, width, 4) c= data.copy() data[::,::,0]=c[::,::,1] data[::,::,1]=c[::,::,0] data[::,::,2]=c[::,::,3] data[::,::,3]=c[::,::,2] image = pygame.image.frombuffer(data.tostring(), (width, height),"ARGB") return image A: Despite Pygame/SDL new support for SVG files, its rendering features are still very limited, so LibRsvg might still be needed. This is a 2022 update for the accepted answer that works with modern versions of Python, Pygame and Pycairo: #!/usr/bin/env python3 import sys import cairo import gi import PIL.Image import pygame gi.require_version('Rsvg', '2.0') from gi.repository import Rsvg WIDTH = 512 HEIGHT = 512 PATH = sys.argv[1] if len(sys.argv) > 1 else "test.svg" def load_svg(path: str, size: tuple) -> pygame.Surface: """Render an SVG file to a new pygame surface and return that surface.""" svg = Rsvg.Handle.new_from_file(path) # Create a Cairo surface. # Nominally ARGB, but in little-endian architectures it is effectively BGRA surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, *size) # Create a Cairo context and scale it context = cairo.Context(surface) context.scale(size[0]/svg.props.width, size[1]/svg.props.height) # Render the SVG svg.render_cairo(context) # Get image data buffer data = surface.get_data() if sys.byteorder == 'little': # Convert from effective BGRA to actual RGBA. # PIL is surprisingly faster than NumPy, but can be done without neither data = PIL.Image.frombuffer('RGBA', size, data.tobytes(), 'raw', 'BGRA', 0, 1).tobytes() return pygame.image.frombuffer(data, size, "RGBA").convert_alpha() pygame.init() window = pygame.display.set_mode((WIDTH, HEIGHT)) image = load_svg(PATH, (WIDTH, HEIGHT)) window.blit(image, (0, 0)) pygame.display.update() clock = pygame.time.Clock() while True: if pygame.event.get([pygame.QUIT]): break clock.tick(30) pygame.quit()
{ "language": "en", "url": "https://stackoverflow.com/questions/120584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: What is the best way to forward a single qmail alias to /dev/null? I would like to trash the mail received by a single qmail alias. I don't want any mail delivery errors, and I want qmail to be happy about having delivered the mail. How can I do this, preferably without adding another local email account? A: Create an alias by creating a file /var/qmail/aliases/.qmail-blackhole with this content: |cat >/dev/null Then redirect whatever you want to this ‘blackhole’ alias (or use whatever you want in place of ‘blackhole’). Merely using /dev/null won’t work (Unable_to_write_/dev/null). The messages will still be logged, however. Though it’s more of a feature than a bug. A: Create an alias with only a comment and no delivery instructions, like: echo "# drop all messages on the floor" > ~alias/.qmail-devnull Replace "devnull" with whatever alias name you need of course. A: A meta-question: why would this get rated down? Is it not appropriate for the site?
{ "language": "en", "url": "https://stackoverflow.com/questions/120587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Case-insensitive search using Hibernate I'm using Hibernate for ORM of my Java app to an Oracle database (not that the database vendor matters, we may switch to another database one day), and I want to retrieve objects from the database according to user-provided strings. For example, when searching for people, if the user is looking for people who live in 'fran', I want to be able to give her people in San Francisco. SQL is not my strong suit, and I prefer Hibernate's Criteria building code to hard-coded strings as it is. Can anyone point me in the right direction about how to do this in code, and if that's impossible, what the hard-coded SQL should look like? Thanks, Yuval =8-) A: If you use Spring's HibernateTemplate to interact with Hibernate, here is how you would do a case-insensitive search on a user's email address: getHibernateTemplate().find("from User where upper(email)=?", emailAddr.toUpperCase()); A: For the simple case you describe, look at Restrictions.ilike(), which does a case-insensitive search. Criteria crit = session.createCriteria(Person.class); crit.add(Restrictions.ilike("town", "%fran%")); List results = crit.list(); A: The usual approach to ignoring case is to convert both the database values and the input value to upper or lower case - the resulting SQL would contain something like select f.name from f where TO_UPPER(f.name) like '%FRAN%' In Hibernate criteria this is Restrictions.like(...).ignoreCase(). I'm more familiar with NHibernate so the syntax might not be 100% accurate; for some more info see the Pro Hibernate 3 extract and the Hibernate docs, 15.2 Narrowing the result set. A: You also do not have to put in the '%' wildcards. You can pass MatchMode (docs for previous releases here) in to tell the search how to behave. START, ANYWHERE, EXACT, and END matches are the options. A: Criteria crit = session.createCriteria(Person.class); crit.add(Restrictions.ilike("town", "fran", MatchMode.ANYWHERE)); List results = crit.list(); A: This can also be done using the criterion Example, in the org.hibernate.criterion package. public List findLike(Object entity, MatchMode matchMode) { Example example = Example.create(entity); example.enableLike(matchMode); example.ignoreCase(); return getSession().createCriteria(entity.getClass()).add( example).list(); } Just another way that I find useful to accomplish the above. A: Since Hibernate 5.2, session.createCriteria is deprecated. Below is a solution using the JPA 2 CriteriaBuilder. It uses like and upper: CriteriaBuilder builder = session.getCriteriaBuilder(); CriteriaQuery<Person> criteria = builder.createQuery(Person.class); Root<Person> root = criteria.from(Person.class); Expression<String> upper = builder.upper(root.get("town")); criteria.where(builder.like(upper, "%FRAN%")); session.createQuery(criteria.select(root)).getResultList(); A: Most default database collations are not case-sensitive, but in the SQL Server world it can be set at the instance, the database, and the column level. A: You could look at using Compass, a wrapper on top of Lucene. http://www.compass-project.org/ By adding a few annotations to your domain objects you can achieve this kind of thing. Compass provides a simple API for working with Lucene. If you know how to use an ORM, then you will feel right at home with Compass, with simple operations for save, delete & query. From the site itself.
"Building on top of Lucene, Compass simplifies common usage patterns of Lucene such as google-style search, index updates as well as more advanced concepts such as caching and index sharding (sub indexes). Compass also uses built in optimizations for concurrent commits and merges." I have used this in the past and I find it great.
{ "language": "en", "url": "https://stackoverflow.com/questions/120588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "49" }
Q: Auto-updating Bags in NHibernate I use ASP.Net with NHibernate accessing a Pgsql database. For some of our Objects, we use NHibernate bags, which map to List objects in our application. Sometimes we have issues with needing to refresh the objects through NHibernate when we update anything to do with the lists in the database. <bag name="Objects" inverse="true" lazy="true" generic="true" > <key column="object_id" /> <one-to-many class="Object" /> </bag> Above is a sample of the code I use for our bags. I was wondering if anyone else came across this issue anywhere, and what you do to work around it? A: Have you tried NHibernate cascades, such as save-update? You are able to tell NHibernate to automatically traverse an entity's associations, and act according to the cascade option. For instance, adding an unsaved entity to a collection with save-update cascade will cause it to be saved along with its parent object, without any need for explicit instructions on our side. Here is what each cascade option means: * *none - do not do any cascades, let the users handle them *save-update - when the object is saved/updated, check the associations and save/update any object that requires it (including save/update the associations in many-to-many scenario). *delete - when the object is deleted, delete all the objects in the association. *delete-orphans - when the object is deleted, delete all the objects in the association. In addition to that, when an object is removed from the association and not associated with another object (orphaned), also delete it. *all - when an object is saved/updated/deleted, check the associations and save/update/delete all the objects found. *all-delete-orphans - when an object is saved/updated/deleted, check the associations and save/update/delete all the objects found. In addition to that, when an object is removed from the association and not associated with another object (orphaned), also delete it. More info here: NHibernate Cascades: the different between all, all-delete-orphans and save-update
{ "language": "en", "url": "https://stackoverflow.com/questions/120596", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What powers Google Charts? Does any body know what powers Google Charts? I have been using it for a while but not sure what Google used to build it. A: They bought the Gapminder library for doing charts. It's a Java library as far as I know, but they don't seem very anxious to release the code as open-source. A: Everything at google is done in C++, Java, or Python. I'm guessing the internals is probably done in one of the latter two. A: Mathplotlib was my guess too - ( thanks "davidg" ). SVG - got my own doubts because you don't have to go the length of server side SVG just to produce a static image. No panning or scaling required so not sure if they used SVG A: I feel the touch of SVG there.. Maybe Internal engine to generate and work with SVG and export images as PNG images. Any other thoughts? A: Just guessing here: they must be using Python with some charting library and then returning the produced files. There are a few tools to do charts in Python. Matplotlib and ReportLab come to mind. A: What is sure is that you can do it with a Java servlet. Eastwood is an open source implementation of the Google Chart API. (powered by JFreeChart) A: Probably just libraries they have written themselves, it's pretty easy to throw together a chart drawing library, but hard to do it right. So someone hacked together a custom java/C++/python library using already available stuff to be able to update the graphics of his charts easily, and then it extended. That's the great thing about it, that you can make your own version without much effort, just change the URL and design your own flash animation of the chart. And that the data available in the graphs is easily webscraped.. Just theory, but something like this is perfect small project to do in 20% of your time.
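For what it's worth, the "Python plus a charting library" guess above is easy to picture: a server-side endpoint simply renders a PNG and returns it. The snippet below is a purely illustrative matplotlib sketch of that kind of rendering (nothing to do with Google's actual implementation, and the data is made up).

    import matplotlib
    matplotlib.use("Agg")            # render off-screen; no display needed on a server
    import matplotlib.pyplot as plt

    def render_chart(path="chart.png"):
        fig, ax = plt.subplots()
        ax.bar(["mon", "tue", "wed"], [3, 7, 5])
        ax.set_title("Hypothetical server-side chart")
        fig.savefig(path)            # this file is what the HTTP response would return
        plt.close(fig)

    render_chart()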
{ "language": "en", "url": "https://stackoverflow.com/questions/120601", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Classic asp include I am trying to separate some asp logic out into a separate page. For now, I am trying to call a simple function. Here is the simple index page that I am using <html> <head> <title>Calling a webservice from classic ASP</title> </head> <body> <% If Request.ServerVariables("REQUEST_METHOD") = "POST" Then %> <!--#include file="aspFunctions.asp"--> <% doStuff() End If %> <FORM method=POST name="form1" ID="Form1"> ID: <INPUT type="text" name="corpId" ID="id" value="050893"> <BR><BR> <INPUT type="submit" value="GO" name="submit1" ID="Submit1" > </form> </body> </html> Here is aspfunctions.asp sub doStuff() Response.Write("In Do Stuff") end sub When I hit the submit button on my form I get the output below sub doStuff() Response.Write("In Do Stuff") end sub Microsoft VBScript runtime error '800a000d' Type mismatch: 'doStuff' /uat/damien/index.asp, line 15 Does anyone have any idea what I could be doing wrong? Any help is greatly appreciated. Thanks, Damien A: You must have the asp functions inside <% %> tags. A: aspfunctions.asp should be inside <% %> tags so the asp is "executed", e.g. aspfunctions.asp file: <% sub doStuff() Response.Write("In Do Stuff") end sub %> Otherwise the asp in aspfunctions.asp is just seen as plain text, so as far as the server is concerned, doStuff has never been defined. A: You're including the other file within an if statement. This does not mean that it's dynamically included, it's not. It will always be included. To see this in action try this sample: <% If 1=0 Then 'We never get here %> <!--#include file="aspFunctions.asp"--> <% dostuff() End If dostuff() %> A: If I remember correctly, you need no brackets for calls without a return value (untested solution): doStuff A: Make changes in two places: * *In aspfunctions.asp write "sub doStuff" instead of sub doStuff() *Call the function as doStuff not doStuff()
{ "language": "en", "url": "https://stackoverflow.com/questions/120607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to display the total of a level as the value of its last child in MDX I have an MDX query which lists a measure for all 'Week' and 'Day' levels in the OLAP database. For example SELECT { HIERARCHIZE( {[Business Time].[Week].members, [Business Time].[Date].members} ) } ON ROWS, { [Measures].[Quantity] } ON COLUMNS FROM [Sales] Where the measure is displayed for a Week though, instead of showing the total of all the Day values, I would like to show the value for the last day within the week. For example Week 1: 12 15 Sept: 10 16 Sept: 20 17 Sept: 12 18 Sept: 15 19 Sept: 8 20 Sept: 9 21 Sept: 12 Week 2: 15 22 Sept: 12 23 Sept: 15 How can I achieve this within the MDX? A: Add a new calculated measures onto the start of your MDX that shows the last day's value only if it is being shown at a week level, otherwise leave it unaltered: WITH MEMBER [Measures].[Blah] AS 'IIF( [Business Time].currentMember.level.name = "Week", ([Business Time].currentMember.lastChild, [Measures].[Quantity]), ([Business Time].currentMember, [Measures].[Quantity]) )' I expect a client asked for this odd request, and I predict you'll get a call in a month from someone in the same office, saying the weekly 'total' is wrong on this report!
{ "language": "en", "url": "https://stackoverflow.com/questions/120616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Logging in J2ME What logging solutions exist for J2ME? I'm specifically interested in easily excluding logging for the "release" version, to have a smaller package & memory footprint. A: You can use the -assumenosideeffects option in ProGuard to completely remove your logging class: -assumenosideeffects public class logger.Logger {*;} Rather than having to preprocess. A: The Series60 and UIQ phones that have a Sun virtual machine modified by Symbian itself have Standard Output redirection. Not only can you capture System.out but Throwable.printStackTrace() also works. On early handsets, you would need to write a C++ application that hooks into the standard library server process. Symbian produced the Redirector application that could capture the VM standard output to a console or a file. On newer handsets, a "redirect://" GCF protocol was introduced that could read the VM standard output into a Java byte[] or String object (you would want to do that in a separate MIDlet) and the Redirector application was rewritten in Java. On the newest J9 VM used in Series60 3rd Edition Feature Pack 2 handsets (and later), you may need to try "redirect://test" instead. A: MicroLog is a sure bet. It is a small logging library for Java ME (J2ME) like Log4j. It has support for logging to console, file, RecordStore, Canvas, Form, Bluetooth, a serial port (Bluetooth, IR, USB), Socket (incl. SSL), UDP, Syslog, MMS, SMS, e-mail or to Amazon S3. http://sourceforge.net/projects/microlog/ A: If you are using preprocessing and obfuscation with ProGuard, then you can have a simple logging class. public class Log { public static void debug(final String message) { //#if !release.build System.out.println(message); //#endif } } Or do logging wherever you need to. Now, if the release.build property is set to true, this code will be commented out, which will result in an empty method. ProGuard will remove all usages of the empty method - in effect the release build will have all debug messages removed. Edit: Thinking about it at the library level (I'm working on a mapping J2ME library), I have probably found a better solution. public class Log { private static boolean showDebug; public static void debug(final String message) { if (showDebug) { System.out.println(message); } } public static void setShowDebug(final boolean show) { showDebug = show; } } This way the end developer can enable the log levels inside the library that he/she is interested in. If nothing is enabled, all logging code will be removed in the end-product obfuscation. Sweet :) /JaanusSiim A: I've used MIDPLogger to some acceptable level in a production application, although I have found it has more use after integrating into the application rather than as another MIDlet in the suite or so forth. I also found MicroLog but haven't used it in any great detail. A: I wrote a bytecode optimizer, and because of the format of class files you can point to the UTF encoding of classname & function, which allows you to output logs with MyClass.someFunc() (you can process the signature if you want to get the types) - something like C-style debugging using the __LINE__ & __FILE__ macros. A: Using conditional compilation of the logger class does not solve the problem of completely removing logging statements because you will quite often log more than a simple string. You will look up variable values and then assemble them into strings, e.g.: WhateverLog.log( "Loaded " + someclass.size() + " foos" ).
Now if you only leave out the body of WhateverLog.log (as shown in the accepted solution), you will still leave a lot of unnecessary code in, including String concatenation (and thus a StringBuffer creation). That's why you'd better use a bytecode post-processing tool like ProGuard (already mentioned). ProGuard's -assumenosideeffects will allow its optimizer to remove not only the logging statements but also all code whose results would only be used by the logging call. A: The LWUIT framework for J2ME provides a logging Form which can have the log statements inside it. You can add a log call at every place you think may generate an exception. Example : Log.getInstance().showLog(); By adding the above line you are able to view the log on J2ME devices.
{ "language": "en", "url": "https://stackoverflow.com/questions/120618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Dark color scheme for Eclipse Is Eclipse at all theme-able? I would like to install a dark color scheme for it, since I much prefer white text on dark background than the other way around. A: This is the best place for Eclipse color themes: http://www.eclipsecolorthemes.org/ A: I've created my own dark color scheme (based on Oblivion from gedit), which I think is very nice to work with. Preview & details at: http://www.rogerdudler.com/?p=362 We're happy to announce the beta of eclipsecolorthemes.org, a new website to download, create and maintain Eclipse color themes / schemes. The theme editor allows you to copy an existing theme and edit the colors with a live preview of your changes on specific editors. The downloadable themes support a lot of editors (PHP, Java, SQL, Ant, text, HTML, CSS, and more to follow) There's a growing list of themes already available on the site: * *Zenburn *Oblivion *Inkpot *Vibrant Ink You can read more about the launch here. A: I have to say, this is one area where Eclipse is really weak. Specifically, the import/export of preferences applies to ALL preferences. There is no way to import say just the fonts/color preferences (like you can with Visual Studio) without mucking up my key binding preferences. Also, I have tried several of these preference files referenced above, and they completely break my Eclipse install. A: I've created several color themes, and a script to extract a new one from someone's color preferences. I'm currently using one I still have yet to post on the site, but I should eventually get to it. http://eclipsecolorthemes.jottit.com A: Easiest way: change the Windows Display Properties main window background color. I went to Appearance tab, changed to Silver scheme, clicked Advanced, clicked on "Active Window" and changed Color 1 to a light gray. All Eclipse views softened. Since Luna (4.4) there seems to be a full Dark them in Window -> Preferences -> General -> Appearance -> Theme -> Dark A: Here's a guy that posted his Eclipse preferences for changing the colors like a theme: http://blog.codefront.net/2006/09/28/vibrant-ink-textmate-theme-for-eclipse/ And here's more about how to set the colors in the Ganymede Eclipse version (v. 3.4, mid 2008): http://help.eclipse.org/ganymede/index.jsp?topic=/org.eclipse.platform.doc.user/concepts/accessibility/fontsandcolors.htm A: For the quick hack, on Linux running GNOME with a Windows keyboard, Windows-Key-M will inverse-color all windows, and Windows-Key-N will inverse color a single window. It's an awesome feature, in my book. A: As I replied to "Is there a simple, consistent way to change the color scheme of Eclipse editors?": I've been looking for this too and after a bit of research found a workable solution. This is based on the FDT editor for Eclipse, but I'm sure you could apply the same logic to other editors. My blog post: Howto create a color-scheme for FDT Hope this helps! A: The best solution I've found is to leave Eclipse in normal bright mode, and use an OS level screen inverter. On OS X you can do Command + Option + Ctrl + 8, inverts the whole screen. On Linux with Compiz, it's even better, you can do Windows + N to darken windows selectively (or Windows + M to do the whole screen). On Windows, the only decent solution I've found is powerstrip, but it's only free for one year... then it's like $30 or something... Then you can invert the screen, adjust the syntax-level colours to your liking, and you're off to the races, with cool shades on. 
A: As posted to a few related questions already, I'm working on a plugin for easy, cross-editor color theme management: http://marketplace.eclipse.org/content/eclipse-color-theme It is still work in progress, but already supports many editors and a few dark color themes. A: For Linux users, assuming you run a compositing window manager (Compiz), you can just turn the window negative. I use Eclipse like this all the time, the normal (whitie) looks is blowing my eyes off. A: If you use Aptana then you can download a dark color theme! I have been looking for one recently and found the Aptana one. Thought others might be interested! Check out: http://www.nightlion.net/themes/2009/aptana-dark-color-theme/ A: These are the pleasing colors for my eyes during coding. Jazz music not included in theme. Eclipse Color Themes Plugin file: LukinaJama3.xml on depositfiles A: I have finally found exactly what I have been looking for, i.e. a dark theme for PyDev (although I still feel like Eclipse is missing out on this). A: This is another dark Eclipse theme: http://blog.prabir.me/post/Dark-Eclipse-Theme.aspx. I have the Visual Studio equivalent of the theme. A: Here's a rev 0.0.1 of an attempt at a dark background colour scheme for Eclipse (and a screenshot). Any feedback at all? (this is a big departure from what I normally use for Vim. A: Checkout this color scheme I created for Eclipse PDT. It is based on the Vim Zenburn color scheme developed by slinky A: Some people posted options for Linux and Mac, and the Windows (free) equivalent is, if you can deal with it globally: Set Windows desktop appearance theme window background color. You can keep current/desired theme, just modify the background color of windows. By default, it is set to white. I change it to a shade of grey. I tried dark grey and black before, but then you have to change text font colors globally, and all that's painful. But a simple shade of grey as background does the trick globally, works with any color text font as long as the shade of grey is not too dark. It's not the best solution for all editors/IDEs, as I prefer black, but it's the next best free & global workaround on Windows. A: I played with customizing the colors. I went with the yellow text/blue background I've liked from Turbo Pascal. The problem I ran into was it let you set the colors of the editors but then the other views like Package Explorer or Navigator stayed with the default black-on-white colors. I'm sure you could do it programatically but there are waaaay to many settings for my patience. A: In response to this comment I made a filter for Color Filter plugin for Compiz. Here's what I got: Howto: * *Go to /usr/share/compiz/filters/ *Create new file "negative-low-contrast" (as root) *Insert the attached code into it. *Go to System->Preferences->CompizConfig ... *Enter Color Filter Plugin *Enable it and add newly created filter to the list Profeet!! Filter code: !!ARBfp1.0 TEMP temp, neg; # Dunno what's this... but every other filter starts with this :) ; TEX temp, fragment.texcoord[0], texture[0], RECT; # Applying negative filter ; RCP neg.a, temp.a; MAD temp.rgb, -neg.a, temp, 1.0; MUL temp.rgb, temp.a, temp; MUL temp, fragment.color, temp; # Lowering contrast and shifting brightness ; MUL temp.rgb, temp, 0.8; ADD temp.rgb, temp, 0.25; MOV result.color, temp; END You also can play with the filter. May be you will get something more facinating :) Feel free to share!
{ "language": "en", "url": "https://stackoverflow.com/questions/120621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "425" }
Q: One or more files do not match the primary file of the database (error 5173) I got this error when I checked out my database from source control. It might sound weird to check a SQL Server database into source control, but that is what I have done because this is just a personal project. Does anyone know how to fix this? A: Here's my finding. As mentioned by other posters, you really don't want to check database files into and out of source control. But if you absolutely need to, and you have checked in the database files and you are encountering the same error that I encountered, here is a workaround: first, detach the database; then delete the .ldf file and reattach the database. This is how I solved my problem. A: You really don't want to be checking database files into and out of source control - in SQL Server you have to detach the files for this to even work and you run all kinds of risks. If you absolutely have to do this, you should version backups. I recommend versioning a script which creates the entire database (tables, sprocs, views, etc.) You can try creating a database by attaching from that data file, using CREATE DATABASE with the ATTACH_REBUILD_LOG option, but I'm not confident it's going to work since the files probably weren't detached properly. A: Did you take a copy of the log file (.ldf) as well as the ".mdf" file? You need the matching set of both to re-attach the database. A: This sounds like the data files do not match the structure files of your database. In short, the files your data (i.e. table rows) resides in are (mostly) not the files where the structure of your data (i.e. the description of the tables) is stored. At least in "modern" RDBMS systems. So you checked out your data and the database recognized some changes in the structure which happened in the meantime (you altered a table or something like that). The way "to fix this" would be to check in all files your database relies on, but I think that is not really what you wanted to achieve. Better (as mentioned above) to do backups and then drop / restore the database from them.
{ "language": "en", "url": "https://stackoverflow.com/questions/120625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is there a way to redefine malloc at link time on Windows? I would like to replace the default malloc at link time to use a custom malloc. But when I try to redefine malloc in my program, I get this error: MSVCRT.lib(MSVCR80.dll) : error LNK2005: _malloc already defined in test.lib(test.obj) This works perfectly on any Unix, and it works on Windows with most functions, but not with malloc. How can I do this? And what is different about malloc that disallows overriding it? I know I could replace every call to malloc with my custom malloc, or use a macro to do this, but I would rather not modify every third-party library. A: I think it depends in which order you link the files. I think you need to link your custom function first, then the import library. A: There is a really good discussion of how hard this is here: http://benjamin.smedbergs.us/blog/2008-01-10/patching-the-windows-crt/ Apparently, you need to patch the CRT. Edit: actually, a MS employee gave the technique in the discussion. You need to move your malloc to a lib, and then link it before the CRT: "he also mentions that if you link your malloc as a lib before the CRT (i.e. make sure to turn on ‘ignore default libs’ and explicitly include the CRT), you’ll get what you want, and can redistribute this lib without problems." A: From version 3.0 Firefox uses a custom allocator (AFAIR jemalloc) -- you could check how they did it. I read that they had some problems with it. You may check this blog post. A: What about defining malloc=_custom_malloc in the project makefile? Then add a file such as: my_memory.c #undef malloc #undef calloc ... void *_custom_malloc(size_t size) { return jmalloc(size); } void *_custom_calloc(size_t nmemb, size_t size) { return jcalloc(nmemb, size); } ...
{ "language": "en", "url": "https://stackoverflow.com/questions/120627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Tag hierarchies and handling of This is a real issue that applies to tagging items in general (and yes, this applies to StackOverflow too, and no, it is not a question about StackOverflow). The whole tagging issue helps cluster similar items, whatever items they may be (jokes, blog posts, SO questions etc). However, there (usually but not strictly) is a hierarchy of tags, meaning that some tags imply other tags too. To use a familiar example, the "c#" SO tag implies also ".net"; another example, in a jokes database, a "blondes" tag implies the "derisive" tag, similarly to "irish" or "belge" or "canadian" etc depending on the joke's country origin. How have you handled this, if you have, in your projects? I will supply an answer describing two different methods I have used in two separate cases (actually, the same mechanism but implemented in two different environments), but I am interested not only in similar mechanisms, but also in your opinion on the hierarchy issue. A: This is a tough question. The two extremes are an ontology (everything is hierarchical) and a folksonomy (tags have no hierarchy). I have answered this on WikiAnswers, with a reference to Clay Shirky's "Ontology is Overrated" article which claims you should set no hierarchy. A: Actually I would say that it is not so much a hierarchical system but a semantic net with perceived distances between tag meanings. What do I mean: mathematics is closer to experimental physics than to gardening. One possibility to build such a net: build pairs of tags and let people judge the perceived distance (using a measure like 1-10, meaning something like [synonyms, alike,...,antonyms], ...) and when searching, search for all tags within a certain distance. Does a measure have to be equal distance if coming from the opposite direction ([a,b] close -> [b,a] close)? Or does proximity imply transitivity ([a,b] close and [b,c] close -> [a,c] close)? Maybe the first word will by default trigger another semantic field? If you start at "social worker", "analyst" is near. If you start at "programmer", "analyst" is near as well. But starting at any of these points, you probably would not count the other as near ("social worker" is by no means close to "programmer"). You therefore would have only pairs judged, and judged in both directions (in random order). [TagRelations] tagId integer closeTagId integer proximity integer Example for selection of similar tags: select closeTagId from TagRelations where tagId = :tagID and proximity < 3 A: The mechanism I have implemented was to not use the given tags themselves, but an indirect lookup table (not strictly DBMS terms) which links a tag to many implied tags (obviously, a tag is linked with itself for this to work). In a Python project, the lookup table is a dictionary keyed on tags, with sets of tags as values (where tags are plain strings). In a database project (it doesn't matter which RDBMS engine it was), there were the following tables: [Tags] tagID integer primary key tagName text [TagRelations] tagID integer # first part of two-field key tagID_parent integer # second part of key trlValue float where the trlValue was a value in the (0, 1] space, used to give a gravity for each linked tag; a self-to-self tag relation always carries 1.0 in the trlValue, while the rest are algorithmically calculated (it's not important how exactly). Think of the example jokes database I gave; a ['blonde', 'derisive', 0.5] record would correlate to a ['pondian', 'derisive', 0.5] and therefore suggest all derisive jokes given another.
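To make the dictionary-based variant above concrete, here is a minimal Python sketch; the tag names and the expand() helper are illustrative assumptions, not code from the original project.

    # implied[tag] is the set of tags that tag implies (every tag implies itself)
    implied = {
        "c#": {"c#", ".net"},
        "blondes": {"blondes", "derisive"},
        "irish": {"irish", "derisive"},
    }

    def expand(tags):
        """Return the full set of tags implied by the given tags."""
        result = set()
        for tag in tags:
            result |= implied.get(tag, {tag})  # unknown tags only imply themselves
        return result

    print(expand(["blondes"]))       # {'blondes', 'derisive'}
    print(expand(["c#", "irish"]))   # {'c#', '.net', 'irish', 'derisive'}

Clustering then works on the expanded sets, so a search for "derisive" finds items tagged "blondes" or "irish" even though they were never tagged "derisive" directly.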
{ "language": "en", "url": "https://stackoverflow.com/questions/120628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Best practice regarding number of threads in GUI applications In the past I've worked with a number of programmers who have worked exclusively writing GUI applications. And I've been given the impression that they have almost universally minimised the use of multiple threads in their applications. In some cases they seem to have gone to extreme lengths to ensure that they use a single thread. Is this common? Is this the generally accepted philosophy for GUI application design? And if so, why? [edit] There are a number of answers saying that thread usage should be minimised to reduce complexity. Reducing complexity in general is a good thing. But if you look at any number of applications where response to external events is of paramount importance (eg. web servers, any number of embedded applications) there seems to be a world of difference in the attitude toward thread usage. A: I think in terms of Windows you are limited to all GUI operations happening on a single thread - because of the way the Windows message pump works. To increase responsiveness most apps add at least one additional worker thread for longer-running tasks that would otherwise block and make the UI unresponsive. Threading is fundamentally hard and so thinking in terms of more than a couple of threads can often result in a lot of debugging effort - there is a quote that escapes me right now that goes something like - "if you think you understand threading then you really don't" A: Generally speaking, GUI frameworks aren't thread safe. For things like Swing (Java's GUI API), only one thread can be updating the UI (or bad things can happen). Only one thread handles dispatching events. If you have multiple threads updating the screen, you can get some ugly flicker and incorrect drawing. That doesn't mean the application needs to be single threaded, however. There are certainly circumstances when you don't want this to be the case. If you click on a button that calculates pi to 1000 digits, you don't want the UI to be locked up and the button to be depressed for the next couple of days. This is when things like SwingWorker come in handy. It has two parts: a doInBackground() which runs in a separate thread and a done() that gets called by the thread that handles updating the UI sometime after the doInBackground thread has finished. This allows short events to be handled quickly and long-running work to be processed in the background, while still having a single thread updating the screen. A: I've seen the same thing. Ideally you should perform any operation that is going to take longer than a few hundred ms in a background thread. Anything shorter than 100ms and a human probably won't notice the difference. A lot of GUI programmers I've worked with in the past are scared of threads because they are "hard". In some GUI frameworks such as the Delphi VCL there are warnings about using the VCL from multiple threads, and this tends to scare some people (others take it as a challenge ;) ) One interesting example of multi-threaded GUI coding is the BeOS API. Every window in an application gets its own thread. From my experience this made BeOS apps feel more responsive, but it did make programming things a little more tricky. Fortunately since BeOS was designed to be multi-threaded by default there was a lot of stuff in the API to make things easier than on some other OSs I've used. A: Most GUI frameworks are not thread safe, meaning that all controls have to be accessed from the same thread that created them.
Still, it's a good practice to create worker threads to have responsive applications, but you need to be careful to delegate GUI updates to the GUI thread. A: Yes. GUI applications should minimize the number of threads that they use for the following reasons: * *Thread programming is very hard and complicated *In general, GUI applications do at most 2 things at once: a) Respond to User Input, and b) Perform a background task (such as loading data) in response to a user action or an anticipated user action In general therefore, the added complexity of using multiple threads is not justified by the needs of the application. There are of course exceptions to the rule. A: GUIs generally don't use a whole lot of threads, but they often do throw off another thread for interacting with certain sub-systems, especially if those systems take a while or are heavily shared resources. For example, if you're going to print, you'll often want to throw off another thread to interact with the printer pool as it may be very busy for a while and there's no reason not to keep working. Another example would be database loads where you're interacting with SQL Server or something like that, and because of the latency involved you may want to create another thread so your main UI processing thread can continue to respond to commands. A: The more threads you have in an application, (generally) the more complex the solution is. By attempting to minimise the number of threads being utilised within a GUI, there are fewer potential areas for problems. The other issue is the biggest problem in GUI design: the human. Humans are notorious for their inability to do multiple things at the same time. Users have a habit of clicking multiple buttons/controls in quick succession in order to attempt to get something done quicker. Computers cannot generally keep up with this (this is only compounded by the GUI's apparent ability to keep up by using multiple threads), so to minimise this effect GUIs will respond to input on a first-come, first-served basis on a single thread. By doing this, the GUI is forced to wait until system resources are free before it can move on, therefore eliminating all the nasty deadlock situations that can arise. Obviously if the program logic and the GUI are on different threads, then this goes out the window. As a personal preference, I prefer to keep things simple on one thread, but not to the detriment of the responsiveness of the GUI. If a task is taking too long, then I'll use a different thread; otherwise I'll stick to just one. A: As the prior comments said, GUI frameworks (at least on Windows) are single threaded, thus the single thread. Another recommendation (that is difficult to code in practice) is to limit the number of threads to the number of available cores on the machine. Your CPU can only do one operation at a time with one core. If there are two threads, a context switch has to happen at some point. If you've got too many threads, the computer can sometimes spend more time swapping between threads than letting threads work. As Moore's Law changes with more cores, this will change and hopefully programming frameworks will evolve to help us use threads more effectively, depending on the number of cores available to the program, such as the TPL. A: Generally all the windowing messages from the window manager / OS will go to a single queue so it's natural to have all UI elements in a single thread.
Some frameworks, such as .NET, actually throw exceptions if you attempt to directly access UI elements from a thread other than the thread that created them.
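To illustrate the pattern the answers above describe - keep all widget work on the single GUI thread and push slow work onto a worker - here is a minimal Python/tkinter sketch. It is only an illustration of the general idea (the frameworks discussed above have their own mechanisms, such as SwingWorker); the two-second sleep stands in for any long-running task.

    import queue
    import threading
    import time
    import tkinter as tk

    def slow_task(results):
        time.sleep(2)                 # pretend this is a long computation
        results.put("pi to 1000 digits... done")

    def poll(root, label, results):
        try:
            label.config(text=results.get_nowait())      # only the GUI thread touches widgets
        except queue.Empty:
            root.after(100, poll, root, label, results)   # check again in 100 ms

    root = tk.Tk()
    label = tk.Label(root, text="working...")
    label.pack(padx=20, pady=20)

    results = queue.Queue()
    threading.Thread(target=slow_task, args=(results,), daemon=True).start()
    root.after(100, poll, root, label, results)
    root.mainloop()

The worker thread never touches a widget; it only posts a result onto the queue, and the GUI thread picks it up in its own event loop - which is exactly the "delegate GUI updates to the GUI thread" advice above.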
{ "language": "en", "url": "https://stackoverflow.com/questions/120636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do you get a spreadsheet to open Excel instead of a browser window? If you call javascript window.open and pass a url to a .xls file it open on some machines in the browser window. How can you force it into Excel? A: Only the users machine can "force" it into Excel. That said, 99% of the time if you send the correct mime-type and a user has Excel, then it will open in Excel assuming they approve. And only the server can send the correct mime-type. The document type you pass to a JavaScript window.open call will have no effect on this. In fact, calling window.open will at best just open a superfluous window. It's best to just link to the document with <a href="foo.xls">. And provided your server is sending a mime-type of application/x-excel or application/x-msexcel this will almost always nudge the browser into opening a new window with the Excel document. A: If it's just a static file, and you're using Apache on Linux, check for a file called /etc/mime.types, and ensure that it has the following line in there to associate the .xls file extension with the correct MIME type: application/vnd.ms-excel xls I'm guessing the location of that file might vary across systems, but it's in /etc/mime.types on my server which is running RHEL4. A: AFAIK you can't do this with JavaScript alone. If you have some sort of scripting language on the server's side you can alter the header to force a download. Here's a simple tutorial in PHP, but you can easily find one in your favorite language. A: You cannot force it into Excel. You can allow the browser to handle it whichever way it is configured to do so, or you can try to force it to download the file and let the user open if from their desktop. To force a download, search for "force download" and your server-side language (PHP, ASP.NET, JSP, etc.) A: I don't think you can: you cannot call external programs using Javascript for security reasons. Assuming that the user has Excel installed, you may want to open the new window without the address bar to give the user "the illusion" that the file has been opened with Excel in Internet Explorer. A: I wouldn't think this is possible from javascript due to security issues, there would be nothing stopping a rogue webpage from opening dozens of excel/word instances. Could you not set a hyperlink to the url of the .xls, that way the user would get the usual download prompt to view the file. A: * *Set the http content type to the Excel datatype: application/vnd.ms-excel *You shouldn't need to redirect to a new window, but you will get a popup asking the user to save or open the file. *In relation to (2): I'd worry if a browser could launch an external application and load data into it automatically without user intervention. A: This is a setting in each user's browser and not in something that can be set by code. So unfortunately you do not have control of that. A: You can not, as it depends on the client machine. For example on Windows if you want it to always open it with Excel, not in the browser window, you have to open My Computer, Tools, Folder Options, File Types, select the XLS type, and click on Advanced. There are two checkboxes: Browse in same window and Open web documents in place. Uncheck both, close browser window, open it again and try again. However as I said: it depends on the client, you can not force it. A: You can do this using LaunchinIE, an ActiveX Control that will enable HTML pages to start whatever application on the client's machine, without security warnings. 
Quote from the site: "At last, web pages can start Word, Excel, or any other corporate application without complaints. Securely." For this you do have to install the control on the user machine and also add the URL that is allowed to execute local applications to the Windows registry. Another quote from the site: "To ensure security, LaunchinIE needs to be carefully configured client-side; due to this restriction it's only fit for intranet use." I use LaunchinIE in our training facility so I can use Internet Explorer as a menu which lets the user choose the machine setup. LaunchinIE then calls a batch script that configures the machine to best support the selected training. A: Here are the steps to get this pop-up back when opening a saved Excel file. * *Right-Click on the windows [START] button and select Explore to open Windows Explorer window will open. *From the menu select Tools \ Folder Options… • Choose the File Types tab and scroll down the list of files. • Left-Click to highlight the XLS Microsoft Excel Worksheet file extension and click on the Advanced button. *In the Edit File Type window, Uncheck the “Browse in same window” option. *Click OK button to accept your changes. *Start a new browser session. The next time you open your Excel spreadsheet in your Inbox, you should be prompted with the following window. Be sure to leave the “Always ask before opening this type of file” as checked. Clicking the Open button should now open your file in Excel.
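As several answers note, the browser behaviour is driven by the Content-Type (and optionally Content-Disposition) headers the server sends, whatever server-side language you use. Purely as an illustration, here is a minimal sketch using Python's standard http.server of a handler that serves an .xls file with the Excel MIME type and asks the browser to treat it as a download; the file name is made up.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ExcelHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            with open("report.xls", "rb") as f:   # hypothetical spreadsheet on disk
                data = f.read()
            self.send_response(200)
            self.send_header("Content-Type", "application/vnd.ms-excel")
            self.send_header("Content-Disposition", 'attachment; filename="report.xls"')
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)

    HTTPServer(("", 8000), ExcelHandler).serve_forever()

Whether Excel then opens in-place in the browser or as a separate application is still decided by the client's file-type settings, as described in the last answer.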
{ "language": "en", "url": "https://stackoverflow.com/questions/120646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Is Assert.Fail() considered bad practice? I use Assert.Fail a lot when doing TDD. I'm usually working on one test at a time but when I get ideas for things I want to implement later I quickly write an empty test where the name of the test method indicates what I want to implement as sort of a todo-list. To make sure I don't forget I put an Assert.Fail() in the body. When trying out xUnit.Net I found they hadn't implemented Assert.Fail. Of course you can always Assert.IsTrue(false) but this doesn't communicate my intention as well. I got the impression Assert.Fail wasn't implemented on purpose. Is this considered bad practice? If so why? @Martin Meredith That's not exactly what I do. I do write a test first and then implement code to make it work. Usually I think of several tests at once. Or I think about a test to write when I'm working on something else. That's when I write an empty failing test to remember. By the time I get to writing the test I neatly work test-first. @Jimmeh That looks like a good idea. Ignored tests don't fail but they still show up in a separate list. Have to try that out. @Matt Howells Great idea. NotImplementedException communicates intention better than Assert.Fail() in this case. @Mitch Wheat That's what I was looking for. It seems it was left out to prevent it being abused in another way I abuse it. A: For this scenario, rather than calling Assert.Fail, I do the following (in C# / NUnit): [Test] public void MyClassDoesSomething() { throw new NotImplementedException(); } It is more explicit than an Assert.Fail. There seems to be general agreement that it is preferable to use more explicit assertions than Assert.Fail(). Most frameworks have to include it though because they don't offer a better alternative. For example, NUnit (and others) provide an ExpectedExceptionAttribute to test that some code throws a particular class of exception. However in order to test that a property on the exception is set to a particular value, one cannot use it. Instead you have to resort to Assert.Fail: [Test] public void ThrowsExceptionCorrectly() { const string BAD_INPUT = "bad input"; try { new MyClass().DoSomething(BAD_INPUT); Assert.Fail("No exception was thrown"); } catch (MyCustomException ex) { Assert.AreEqual(BAD_INPUT, ex.InputString); } } The xUnit.Net method Assert.Throws makes this a lot neater without requiring an Assert.Fail method. By not including an Assert.Fail() method xUnit.Net encourages developers to find and use more explicit alternatives, and to support the creation of new assertions where necessary. A: I use MbUnit for my Unit Testing. They have an option to Ignore tests, which show up as Orange (rather than Green or Red) in the test suite. Perhaps xUnit has something similar, and would mean you don't even have to put any assert into the method, because it would show up in an annoyingly different colour making it hard to miss? Edit: In MbUnit it is done in the following way: [Test] [Ignore] public void YourTest() { } A: This is the pattern that I use when writing a test for code that I want to throw an exception by design: [TestMethod] public void TestForException() { Exception _Exception = null; try { //Code that I expect to throw the exception. MyClass _MyClass = null; _MyClass.SomeMethod(); //Code that I expect to throw the exception. } catch(Exception _ThrownException) { _Exception = _ThrownException; } finally { Assert.IsNotNull(_Exception); //Replace NullReferenceException with expected exception.
Assert.IsInstanceOfType(_Exception, typeof(NullReferenceException)); } } IMHO this is a better way of testing for exceptions than using Assert.Fail(). The reason for this is that not only do I test for an exception being thrown at all but I also test for the exception type. I realise that this is similar to the answer from Matt Howells but IMHO using the finally block is more robust. Obviously it would still be possible to include other Assert methods to test the exception's input string etc. I would be grateful for your comments and views on my pattern. A: Personally I have no problem with using a test suite as a todo list like this as long as you eventually get around to writing the test before you implement the code to pass. Having said that, I used to use this approach myself, although now I'm finding that doing so leads me down a path of writing too many tests upfront, which in a weird way is like the reverse problem of not writing tests at all: you end up making decisions about design a little too early IMHO. Incidentally in MSTest, the standard Test template uses Assert.Inconclusive at the end of its samples. AFAIK the xUnit.NET framework is intended to be extremely lightweight and yes they did cut Fail deliberately, to encourage the developer to use an explicit failure condition. A: Wild guess: withholding Assert.Fail is intended to stop you thinking that a good way to write test code is as a huge heap of spaghetti leading to an Assert.Fail in the bad cases. [Edit to add: other people's answers broadly confirm this, but with quotations] Since that's not what you're doing, it's possible that xUnit.Net is being over-protective. Or maybe they just think it's so rare and so unorthogonal as to be unnecessary. I prefer to implement a function called ThisCodeHasNotBeenWrittenYet (actually something shorter, for ease of typing). Can't communicate intention more clearly than that, and you have a precise search term. Whether that fails, or is not implemented (to provoke a linker error), or is a macro that doesn't compile, can be changed to suit your current preference. For instance when you want to run something that is finished, you want a fail. When you're sitting down to get rid of them all, you may want a compile error. A: With the good code I usually do: void goodCode() { // TODO void goodCode() throw new UnsupportedOperationException("void goodCode()"); } With the test code I usually do: @Test void testSomething() { // TODO void testSomething Assert.fail("Some descriptive text about what to test"); } If using JUnit, and you don't want to get a failure but an error, then I usually do: @Test void testSomething() { // TODO void testSomething throw new UnsupportedOperationException("Some descriptive text about what to test"); } A: It was deliberately left out. This is Brad Wilson's reply as to why there is no Assert.Fail(): We didn't overlook this, actually. I find Assert.Fail is a crutch which implies that there is probably an assertion missing. Sometimes it's just the way the test is structured, and sometimes it's because Assert could use another assertion. A: I've always used Assert.Fail() for handling cases where you've detected that a test should fail through logic beyond simple value comparison. As an example: try { // Some code that should throw ExceptionX Assert.Fail("ExceptionX should be thrown"); } catch ( ExceptionX ex ) { // test passed } Thus the lack of Assert.Fail() in the framework looks like a mistake to me.
I'd suggest patching the Assert class to include a Fail() method, and then submitting the patch to the framework developers, along with your reasoning for adding it. As for your practice of creating tests that intentionally fail in your workspace, to remind yourself to implement them before committing, that seems like a fine practice to me. A: Beware Assert.Fail and its corrupting influence in making developers write silly or broken tests. For example: [TestMethod] public void TestWork() { try { Work(); } catch { Assert.Fail(); } } This is silly, because the try-catch is redundant. A test fails if it throws an exception. Also [TestMethod] public void TestDivide() { try { Divide(5,0); Assert.Fail(); } catch { } } This is broken: the test will always pass, whatever the outcome of the Divide function. Again, a test fails if and only if it throws an exception. A: If you're writing a test that just fails, then writing the code for it, and then writing the real test, that isn't Test Driven Development. Technically, Assert.fail() shouldn't be needed if you're using test driven development correctly. Have you thought of using a Todo List, or applying a GTD methodology to your work? A: MS Test has Assert.Fail() but it also has Assert.Inconclusive(). I think that the most appropriate use for Assert.Fail() is if you have some in-line logic that would be awkward to put in an assertion, although I can't even think of any good examples. For the most part, if the test framework supports something other than Assert.Fail() then use that. A: Why would you use Assert.Fail for saying that an exception should be thrown? That is unnecessary. Why not just use the ExpectedException attribute? A: I think you should ask yourselves what (upfront) testing should do. First, you write a (set of) tests without implementation. Maybe also the rainy-day scenarios. All those tests must fail to be correct tests. So you want to achieve two things: 1) verify that your implementation is correct; 2) verify that your unit tests are correct. Now, if you do upfront TDD, you want to execute all your tests, including the NYI (not yet implemented) parts. The result of your total test run passes if: 1) all implemented stuff succeeds, and 2) all NYI stuff fails. After all, it would be a unit test omission if your unit tests succeed whilst there is no implementation, wouldn't it? You want to end up with something like a mail from your continuous integration test that checks all implemented and not-implemented code, and is sent if any implemented code fails or any not-implemented code succeeds. Both are undesired results. Just writing [Ignore] tests won't do the job. Neither will an assert that stops at the first assert failure, not running the other test lines in the test. Now, how to achieve this then? I think it requires some more advanced organisation of your testing. And it requires some other mechanism than asserts to achieve these goals. I think you have to split up your tests and create some tests that completely run but must fail, and vice versa. Ideas are to split your tests over multiple assemblies, or use grouping of tests (ordered tests in MSTest may do the job). Still, a CI build that mails if not all tests in the NYI department fail is not easy and straightforward. A: This is our use case for Assert.Fail(). One important goal for our Unit tests is that they don't touch the database. Sometimes mocking doesn't happen properly, or application code is modified and a database call is inadvertently made. This can be quite deep in the call stack.
The exception may be caught so it won't bubble up, or because the tests are running initially with a database the call will work. What we've done is add a config value to the unit test project so that when the database connection is first requested we can call Assert.Fail("Database accessed"); Assert.Fail() acts globally, even in different libraries. This therefore acts as a catch-all for all of the unit tests. If any one of them hits the database in a unit test project then they will fail. We therefore fail fast.
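For comparison only - this thread is about .NET frameworks - the try / fail-in-the-middle / catch pattern discussed above looks much the same in Python's unittest, where fail() plays the role of Assert.Fail(); the divide example and the message check here are made up.

    import unittest

    class DivideTests(unittest.TestCase):
        def test_divide_by_zero_reports_division(self):
            try:
                1 / 0
                self.fail("expected ZeroDivisionError, but no exception was raised")
            except ZeroDivisionError as exc:
                # assert on a property of the exception, as in the NUnit example above
                self.assertIn("division", str(exc))

    if __name__ == "__main__":
        unittest.main()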
{ "language": "en", "url": "https://stackoverflow.com/questions/120648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "76" }
Q: Directory-tree listing in Python How do I get a list of all files (and directories) in a given directory in Python? A: import os for filename in os.listdir("C:\\temp"): print filename A: This is a way to traverse every file and directory in a directory tree: import os for dirname, dirnames, filenames in os.walk('.'): # print path to all subdirectories first. for subdirname in dirnames: print(os.path.join(dirname, subdirname)) # print path to all filenames. for filename in filenames: print(os.path.join(dirname, filename)) # Advanced usage: # editing the 'dirnames' list will stop os.walk() from recursing into there. if '.git' in dirnames: # don't go into any .git directories. dirnames.remove('.git') A: While os.listdir() is fine for generating a list of file and dir names, frequently you want to do more once you have those names - and in Python3, pathlib makes those other chores simple. Let's take a look and see if you like it as much as I do. To list dir contents, construct a Path object and grab the iterator: In [16]: Path('/etc').iterdir() Out[16]: <generator object Path.iterdir at 0x110853fc0> If we want just a list of names of things: In [17]: [x.name for x in Path('/etc').iterdir()] Out[17]: ['emond.d', 'ntp-restrict.conf', 'periodic', If you want just the dirs: In [18]: [x.name for x in Path('/etc').iterdir() if x.is_dir()] Out[18]: ['emond.d', 'periodic', 'mach_init.d', If you want the names of all conf files in that tree: In [20]: [x.name for x in Path('/etc').glob('**/*.conf')] Out[20]: ['ntp-restrict.conf', 'dnsextd.conf', 'syslog.conf', If you want a list of conf files in the tree >= 1K: In [23]: [x.name for x in Path('/etc').glob('**/*.conf') if x.stat().st_size > 1024] Out[23]: ['dnsextd.conf', 'pf.conf', 'autofs.conf', Resolving relative paths become easy: In [32]: Path('../Operational Metrics.md').resolve() Out[32]: PosixPath('/Users/starver/code/xxxx/Operational Metrics.md') Navigating with a Path is pretty clear (although unexpected): In [10]: p = Path('.') In [11]: core = p / 'web' / 'core' In [13]: [x for x in core.iterdir() if x.is_file()] Out[13]: [PosixPath('web/core/metrics.py'), PosixPath('web/core/services.py'), PosixPath('web/core/querysets.py'), A: You can use os.listdir(path) For reference and more os functions look here: * *Python 2 docs: https://docs.python.org/2/library/os.html#os.listdir *Python 3 docs: https://docs.python.org/3/library/os.html#os.listdir A: A recursive implementation import os def scan_dir(dir): for name in os.listdir(dir): path = os.path.join(dir, name) if os.path.isfile(path): print path else: scan_dir(path) A: I wrote a long version, with all the options I might need: http://sam.nipl.net/code/python/find.py I guess it will fit here too: #!/usr/bin/env python import os import sys def ls(dir, hidden=False, relative=True): nodes = [] for nm in os.listdir(dir): if not hidden and nm.startswith('.'): continue if not relative: nm = os.path.join(dir, nm) nodes.append(nm) nodes.sort() return nodes def find(root, files=True, dirs=False, hidden=False, relative=True, topdown=True): root = os.path.join(root, '') # add slash if not there for parent, ldirs, lfiles in os.walk(root, topdown=topdown): if relative: parent = parent[len(root):] if dirs and parent: yield os.path.join(parent, '') if not hidden: lfiles = [nm for nm in lfiles if not nm.startswith('.')] ldirs[:] = [nm for nm in ldirs if not nm.startswith('.')] # in place if files: lfiles.sort() for nm in lfiles: nm = os.path.join(parent, nm) yield nm def test(root): print "* directory listing, 
with hidden files:" print ls(root, hidden=True) print print "* recursive listing, with dirs, but no hidden files:" for f in find(root, dirs=True): print f print if __name__ == "__main__": test(*sys.argv[1:]) A: Here is another option. os.scandir(path='.') It returns an iterator of os.DirEntry objects corresponding to the entries (along with file attribute information) in the directory given by path. Example: with os.scandir(path) as it: for entry in it: if not entry.name.startswith('.'): print(entry.name) Using scandir() instead of listdir() can significantly increase the performance of code that also needs file type or file attribute information, because os.DirEntry objects expose this information if the operating system provides it when scanning a directory. All os.DirEntry methods may perform a system call, but is_dir() and is_file() usually only require a system call for symbolic links; os.DirEntry.stat() always requires a system call on Unix but only requires one for symbolic links on Windows. Python Docs A: The one worked with me is kind of a modified version from Saleh's answer elsewhere on this page. The code is as follows: dir = 'given_directory_name' filenames = [os.path.abspath(os.path.join(dir,i)) for i in os.listdir(dir)] A: If you need globbing abilities, there's a module for that as well. For example: import glob glob.glob('./[0-9].*') will return something like: ['./1.gif', './2.txt'] See the documentation here. A: Here's a helper function I use quite often: import os def listdir_fullpath(d): return [os.path.join(d, f) for f in os.listdir(d)] A: For files in current working directory without specifying a path Python 2.7: import os os.listdir('.') Python 3.x: import os os.listdir() A: Try this: import os for top, dirs, files in os.walk('./'): for nm in files: print os.path.join(top, nm) A: A nice one liner to list only the files recursively. I used this in my setup.py package_data directive: import os [os.path.join(x[0],y) for x in os.walk('<some_directory>') for y in x[2]] I know it's not the answer to the question, but may come in handy A: For Python 2 #!/bin/python2 import os def scan_dir(path): print map(os.path.abspath, os.listdir(pwd)) For Python 3 For filter and map, you need wrap them with list() #!/bin/python3 import os def scan_dir(path): print(list(map(os.path.abspath, os.listdir(pwd)))) The recommendation now is that you replace your usage of map and filter with generators expressions or list comprehensions: #!/bin/python import os def scan_dir(path): print([os.path.abspath(f) for f in os.listdir(path)]) A: #import modules import os _CURRENT_DIR = '.' def rec_tree_traverse(curr_dir, indent): "recurcive function to traverse the directory" #print "[traverse_tree]" try : dfList = [os.path.join(curr_dir, f_or_d) for f_or_d in os.listdir(curr_dir)] except: print "wrong path name/directory name" return for file_or_dir in dfList: if os.path.isdir(file_or_dir): #print "dir : ", print indent, file_or_dir,"\\" rec_tree_traverse(file_or_dir, indent*2) if os.path.isfile(file_or_dir): #print "file : ", print indent, file_or_dir #end if for loop #end of traverse_tree() def main(): base_dir = _CURRENT_DIR rec_tree_traverse(base_dir," ") raw_input("enter any key to exit....") #end of main() if __name__ == '__main__': main() A: FYI Add a filter of extension or ext file import os path = '.' for dirname, dirnames, filenames in os.walk(path): # print path to all filenames with extension py. 
for filename in filenames: fname_path = os.path.join(dirname, filename) fext = os.path.splitext(fname_path)[1] if fext == '.py': print fname_path else: continue A: Figured I'd throw this in. Simple and dirty way to do wildcard searches. import re import os [a for a in os.listdir(".") if re.search("^.*\.py$",a)] A: The code below will list the directories and the files within the given directory def print_directory_contents(sPath): import os for sChild in os.listdir(sPath): sChildPath = os.path.join(sPath,sChild) if os.path.isdir(sChildPath): print_directory_contents(sChildPath) else: print(sChildPath) A: Here is a one-line Pythonic version: import os dir = 'given_directory_name' filenames = [os.path.join(os.path.dirname(os.path.abspath(__file__)),dir,i) for i in os.listdir(dir)] This code lists the full path of all files and directories in the given directory name. A: I know this is an old question. This is a neat way I came across if you are on a Linux machine. import subprocess print(subprocess.check_output(["ls", "/"]).decode("utf8")) A: Easiest way: list_output_files = [os.getcwd()+"\\"+f for f in os.listdir(os.getcwd())]
{ "language": "en", "url": "https://stackoverflow.com/questions/120656", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "603" }
Q: Python subprocess issue with ampersands I'm currently having a major issue with a python script. The script runs arbitrary commands through a handler to convert incorrect error reporting into correct error reporting. The issue I'm having is getting the script to work correctly on Windows with a command that contains ampersands in its path. I've attempted quoting the command, escaping the ampersand with ^ and neither works. I'm now out of ideas. Any suggestions? To clarify from current responses: * *I am using the subprocess module *I am passing the command line + arguments in as a list *The issue is with the path to the command itself, not any of the arguments *I've tried quoting the command. It causes a [Error 123] The filename, directory name, or volume label syntax is incorrect error *I'm using no shell argument (so shell=False) *In case it matters, I'm grabbing a pipe to stderr for processing it, but ignoring stdout and stdin *It is only for use on Windows currently, and works as expected in all other cases that I've tested so far. *The command that is failing is: p = subprocess.Popen(prog, stderr = subprocess.PIPE, bufsize=-1) when the first element of the list 'prog' contains any ampersands. Quoting this first string does not work. A: Make sure you are using lists and no shell expansion: subprocess.Popen(['command', 'argument1', 'argument2'], shell=False) A: Try quoting the argument that contains the &: wget "http://foo.com/?bar=baz&baz=bar" is usually what has to be done in a Linux shell. A: To answer my own question: Quoting the actual command when passing the parameters as a list doesn't work correctly (the command is the first item of the list), so to solve the issue I turned the list into a space-separated string and passed that into subprocess instead. Better solutions still welcomed. A: "escaping the ampersand with ^" Are you sure ^ is an escape character to Windows? Shouldn't you use \? A: I tried a situation like the following: exe = 'C:/Program Files (x86)/VideoLAN/VLC/VLC.exe' url = 'http://translate.google.com/translate_tts?tl=en&q=hello+world' subprocess.Popen([exe, url.replace("&","^&")],shell=True) This does work.
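To make the list-to-string workaround from the self-answer above concrete, here is a minimal sketch; the executable path is made up for illustration and contains no spaces, so the plain join is safe:

import subprocess

# Hypothetical command whose executable path contains an ampersand.
prog = [r'C:\Tools&Scripts\mytool.exe', 'input.txt', '-v']

# Workaround described above: collapse the list into one space-separated
# command string and hand that to Popen instead of the list itself.
p = subprocess.Popen(' '.join(prog), stderr=subprocess.PIPE, bufsize=-1)
stderr_output = p.communicate()[1]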
{ "language": "en", "url": "https://stackoverflow.com/questions/120657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: "Could not find the main class. Program will exit" I'm trying to run SQuirreL SQL. I've downloaded it and installed it, but when I try to run it I get this error message: Java Virtual Machine Launcher. Could not find the main class. Program will exit. I get the gist of this, but I have not idea how to fix it. Any help? more info: * *I'm on Windows XP pro. *I have java 1.6 installed, and other apps are running OK. *The install ran OK. *I believe I've followed the installation instructions correctly. *To run it, I'm invoking the squirrel-sql.bat file. Update This question: "Could not find the main class: XX. Program will exit." gives some background on this error from the point of view of a java developer. A: The classpath is the path that the system will follow when trying to find the classes that you're trying to run. In the batch file you're trying to execute it probably has a variable like CLASSPATH=blah;blah;etc or a java command that looks similar to java -classpath "c:\directory\lib\squirrel-sql.jar" com.some.squirrel.package.file If you can find or add that classpath setting, make sure that it includes a path to the squirrel-sql.jar and any other jar files that it may depend on separated by semicolons (or the root /lib directory that may be included with the installation). Basically you just need to tell java where to find the class files that you're trying to execute. Wikipedia has a more indepth discussion about classpath and can offer you more insight. http://en.wikipedia.org/wiki/Classpath_(Java) A: * *JAVA_HOME variable must be set, to point to the prog files/java/version???/bin *open squirrel-sql.bat file with some text editor and see if the JAVA_HOME variable there is the same as the one in your enviroment variable *change it if it doesn't match....and than run bat file again A: Is Java installed on your computer? Is the path to its bin directory set properly (in other words if you type 'java' from the command line do you get back a list of instructions or do you get something like "java is not recognized as a .....")? You could try try running squirrel-sql.jar from the command line (from the squirrel sql directory), using: java -jar squirrel-sql.jar A: Have you followed these instructions: http://www.squirrelsql.org/#installation If so, are you running the batch file or the shell script to run it? A: Tweaking MB's answer for windows, will get rid of the console window: start javaw -jar squirrel-sql.jar A: The .bat file does not seem to work. Just double-click on: squirrel-sql.jar or type: java -jar squirrel-sql.jar in the command-line. A: You can place .; in classpath in environmental variables to overcome this problem. A: I tried to start SQUirrel 3.1 but I received a message stating "Could not find the main class Files\Rational\ClearQuest\cqjni.jar" I noticed that C:\Program Files\Rational\ClearQuest\cqjni.jar is in my existing classpath as defined by the Windows environment variable, CLASSPATH. SQUirrel doesn't need my existing classpath, so I updated the SQUirrel bat file, squirrel-sql.bat. REM SET SQUIRREL_CP=%TMP_CP%;%CLASSPATH% SET SQUIRREL_CP=%TMP_CP% It no longer appends my existing classpath to its classpath and runs fine. A: I had this problem when I "upgraded" to Windows 7, which is 64-bit. My go to Java JRE is a 64-bit JVM. 
I had a 32-bit JRE on my machine for my browser, so I set up a system variable: JRE32=C:\Program Files\Java\jre7 When I run: "%JRE32\bin\java" -version I get: java version "1.7.0_51" Java(TM) SE Runtime Environment (build 1.7.0_51-b13) Java HotSpot(TM) Client VM (build 24.51-b03, mixed mode, sharing) Which is a 32-bit JVM. It would say "Java HotSpot(TM) 64-Bit" otherwise. I edited the "squirrel-sql.bat" file, REMarking out line 4 and adding line 5 as follows: (4) rem set "IZPACK_JAVA=%JAVA_HOME%" (5) set IZPACK_JAVA=%JRE32% And now everything works, fine and dandy. A: I had the same issue with a different application (BI Publisher) because I installed a 32 bit version of this application on a 64 bit version of Windows. Java Virtual Machine Launcher - could not find the main class The solution for my case was to tell BI Publisher where to find the x86 version of JRE:
{ "language": "en", "url": "https://stackoverflow.com/questions/120662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Will Subclipse 1.4.4 work with Subversion 1.3.2 Will Subclipse 1.4.4 work safely with Subversion 1.3.2? I am confused because its changelog says NOTE: As of Subclipse 1.3.0, the minimum Subversion JavaHL requirement is 1.5.0. A: Subclipse requires Subversion 1.5.x on the client. A Subversion 1.5.x client can talk to any 1.x server, all the way back to 1.0.0. A: Do you mean whether Subclipse 1.4.4 will work with a server that is 1.3.2? If so, probably yes since clients tend to be updated more often than the servers, and thus they try to be backwards compatible. 1.3.2 is starting to get old though, if the client is based on subversion 1.5. A: compatible with the server yes. But if you use it on a local working copy, your working copy gets updated to the new 1.5 format, so if you also use a command line svn client, or tortoise, you have to upgrade those as well.
{ "language": "en", "url": "https://stackoverflow.com/questions/120669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Javascript Try/Catch I've got a function that runs a user generated Regex. However, if the user enters a regex that won't run then it stops and falls over. I've tried wrapping the line in a Try/Catch block but alas nothing happens. If it helps, I'm running jQuery but the code below does not have it as I'm guessing that it's a little more fundamental than that. Edit: Yes, I know that I am not escaping the "[", that's intentional and the point of the question. I'm accepting user input and I want to find a way to catch this sort of problem without the application falling flat on it's face. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html> <head> <title>Regex</title> <script type="text/javascript" charset="utf-8"> var grep = new RegExp('gr['); try { var results = grep.exec('bob went to town'); } catch (e) { //Do nothing? } alert('If you can see this then the script kept going'); </script> </head> <body> </body> </html> A: The problem is with this line: var grep = new RegExp('gr['); '[' is a special character so it needs to be escaped. Also this line is not wrapped in try...catch, so you still get the error. Edit: You could also add an alert(e.message); in the catch clause to see the error message. It's useful for all kind of errors in javascript. Edit 2: OK, I needed to read more carefully the question, but the answer is still there. In the example code the offending line is not wrapped in the try...catch block. I put it there and didn't get errors in Opera 9.5, FF3 and IE7. A: var grep, results; try { grep = new RegExp("gr["); results = grep.exec('bob went to town'); } catch(e) { alert(e); } alert('If you can see this then the script kept going'); A: putting the RegExp initialization inside the try/catch will work (just tested in FireFox) var grep, results; try { grep = new RegExp("gr["); // your user input here } catch(e) { alert("The RegExpr is invalid"); } // do your stuff with grep and results Escaping here is not the solution. Since the purpose of this snippet is to actually test a user-generated RegExpr, you will want to catch [ as an unclosed RegExpr container. A: Try this the new RegExp is throwing the exception Regex <script type="text/javascript" charset="utf-8"> var grep; try { grep = new RegExp("gr["); } catch(e) { alert(e); } try { var results = grep.exec('bob went to town'); } catch (e) { //Do nothing? } alert('If you can see this then the script kept going'); </script> A: your RegExp doesn't close the [ In my FireFox, it never returns from the constructor -- looks like a bug in the implementation of RegExp, but if you provide a valid expression, it works A: One option is to validate the user-generated expressions. That is; escape characters that you know will stall your script.
{ "language": "en", "url": "https://stackoverflow.com/questions/120693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Recursive overloading semantics in the Scala REPL - JVM languages Using Scala's command line REPL: def foo(x: Int): Unit = {} def foo(x: String): Unit = {println(foo(2))} gives error: type mismatch; found: Int(2) required: String It seems that you can't define overloaded recursive methods in the REPL. I thought this was a bug in the Scala REPL and filed it, but it was almost instantly closed with "wontfix: I don't see any way this could be supported given the semantics of the interpreter, because these two methods must to be compiled together." He recommended putting the methods in an enclosing object. Is there a JVM language implementation or Scala expert who could explain why? I can see it would be a problem if the methods called each other for instance, but in this case? Or if this is too large a question and you think I need more prerequisite knowledge, does someone have any good links to books or sites about language implementations, especially on the JVM? (I know about John Rose's blog, and the book Programming Language Pragmatics... but that's about it. :) A: % scala28 Welcome to Scala version 2.8.0.final (Java HotSpot(TM) 64-Bit Server VM, Java 1.6.0_20). Type in expressions to have them evaluated. Type :help for more information. scala> def foo(x: Int): Unit = () ; def foo(x: String): Unit = { println(foo(2)) } foo: (x: String)Unit <and> (x: Int)Unit foo: (x: String)Unit <and> (x: Int)Unit scala> foo(5) scala> foo("abc") () A: REPL will accept if you copy both lines and paste both at same time. A: The issue is due to the fact that the interpreter most often has to replace existing elements with a given name, rather than overload them. For example, I will often be running through experimenting with something, often creating a method called test: def test(x: Int) = x + x A little later on, let's say that I'm running a different experiment and I create another method named test, unrelated to the first: def test(ls: List[Int]) = (0 /: ls) { _ + _ } This isn't an entirely unrealistic scenario. In fact, it's precisely how most people use the interpreter, often without even realizing it. If the interpreter arbitrarily decided to keep both versions of test in scope, that could lead to confusing semantic differences in using test. For example, we might make a call to test, accidentally passing an Int rather than List[Int] (not the most unlikely accident in the world): test(1 :: Nil) // => 1 test(2) // => 4 (expecting 2) Over time, the root scope of the interpreter would get incredibly cluttered with various versions of methods, fields, etc. I tend to leave my interpreter open for days at a time, but if overloading like this were allowed, we would be forced to "flush" the interpreter every so often as things got to be too confusing. It's not a limitation of the JVM or the Scala compiler, it's a deliberate design decision. As mentioned in the bug, you can still overload if you're within something other than the root scope. Enclosing your test methods within a class seems like the best solution to me. A: As shown by extempore's answer, it is possible to overload. Daniel's comment about design decision is correct, but, I think, incomplete and a bit misleading. There's no outlawing of overloads (since they are possible), but they are not easily achieved. The design decisions that lead to this are: * *All previous definitions must be available. *Only newly entered code is compiled, instead of recompiling everything ever entered every time. 
*It must be possible to redefine definitions (as Daniel mentioned). *It must be possible to define members such as vals and defs, not only classes and objects. The problem is... how to achieve all these goals? How do we process your example? def foo(x: Int): Unit = {} def foo(x: String): Unit = {println(foo(2))} Starting with the 4th item, A val or def can only be defined inside a class, trait, object or package object. So, REPL puts the definitions inside objects, like this (not actual representation!) package $line1 { // input line object $read { // what was read object $iw { // definitions def foo(x: Int): Unit = {} } // val res1 would be here somewhere if this was an expression } } Now, due to how JVM works, once you defined one of them, you can't extend them. You could, of course, recompile everything, but we discarded that. So you need to place it in a different place: package $line1 { // input line object $read { // what was read object $iw { // definitions def foo(x: String): Unit = { println(foo(2)) } } } } And this explains why your examples are not overloads: they are defined in two different places. If you put them in the same line, they'd all be defined together, which would make them overloads, as shown in extempore's example. As for the other design decisions, each new package import definitions and "res" from previous packages, and the imports can shadow each other, which makes it possible to "redefine" stuff.
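Since the bug report's reply suggested putting the methods in an enclosing object, here is a rough sketch of what that looks like in the REPL; the transcript below is approximated rather than copied from a real session:

scala> object Overloads {
     |   def foo(x: Int): Unit = {}
     |   def foo(x: String): Unit = { println(foo(2)) }
     | }
defined module Overloads

scala> Overloads.foo(2)

scala> Overloads.foo("abc")
()

Because both definitions are entered (and therefore compiled) together inside one object, they become genuine overloads instead of the second one shadowing the first.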
{ "language": "en", "url": "https://stackoverflow.com/questions/120702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Getting started with AJAX with ASP.NET 3.5, what do I need on the server I have the .NET Framework 3.5 on my development machine and am also using the AJAX toolkit in my code. In order to publish this code to a server for my users, do I need anything on the server in order for my AJAX code to operate correctly? A: You need only the .NET Framework 3.5. If you publish your project, the AJAX Toolkit used will also be copied over. If you only reference the AJAX Toolkit via file, not via project, then be sure you set the dll to "Copy always" in the properties window. A: .NET 3.5 needs to be installed on the server. The Ajax Control Toolkit assembly does not need to be actually loaded on the server, but needs to be at least in the Bin folder, with references in the web.config. A: Of course, you will need to install the .NET Framework on your server. If you are using the AJAX Toolkit you will want to copy over the AjaxControlToolkit.dll to the bin folder of your web application on your server. You also want to make sure that you set your web application to use .NET Framework 2.0. In IIS you go to the properties of your web site, then the ASP.NET tab, and make sure the ASP.NET version is set to 2.0.
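For reference, the web.config entry mentioned above usually looks something like this fragment inside the existing <system.web> section; the tag prefix is your choice, and the exact assembly attributes depend on the toolkit build you deploy to the bin folder:

<pages>
  <controls>
    <add tagPrefix="ajaxToolkit"
         namespace="AjaxControlToolkit"
         assembly="AjaxControlToolkit" />
  </controls>
</pages>

Alternatively, register it per page with <%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="ajaxToolkit" %>.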
{ "language": "en", "url": "https://stackoverflow.com/questions/120704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Setting up F# in Visual Studio 2005 Are there any decent tutorials for setting up F# in Visual Studio 2005? Everything I have found points at VS2008. ie: 'F# projects' under projects, etc. A: Unfortunately the CTP release of F# doesn't support VS 2005. Two options: * *Use 1.9.4.19, the most recent pre-CTP release *Download the free VS2008 shell and use that instead (I haven't tried it, but apparently it works) A: Installing F# Editing for Microsoft Visual Studio 2005 That page also links to Don Syme's F# blog, which has a lot of useful stuff on working with F# in VS, including a demo of F# intellisense in VS2005.
{ "language": "en", "url": "https://stackoverflow.com/questions/120715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: SQL Server - Partitioned Tables vs. Clustered Index? Let's assume you have one massive table with three columns as shown below: [id] INT NOT NULL, [date] SMALLDATETIME NOT NULL, [sales] FLOAT NULL Also assume you are limited to one physical disk and one filegroup (PRIMARY). You expect this table to hold sales for 10,000,000+ ids, across 100's of dates (easily 1B+ records). As with many data warehousing scenarios, the data will typically grow sequentially by date (i.e., each time you perform a data load, you will be inserting new dates, and maybe updating some of the more recent dates of data). For analytic purposes, the data will often be queried and aggregated for a random set of ~10,000 ids which will be specified via a join with another table. Often, these queries don't specify date ranges, or specify very wide date ranges, which leads me to my question: What is the best way to index / partition this table? I have thought about this for a while, but am stuck with conflicting solutions: Option #1: As data will be loaded sequentially by date, define the clustered index (and primary key) as [date], [id]. Also create a "sliding window" partitioning function / scheme on date allowing rapid movement of new data in / out of the table. Potentially create a non-clustered index on id to help with querying. Expected Outcome #1: This setup will be very fast for data loading purposes, but sub-optimal when it comes to analytic reads as, in a worst case scenario (no limiting by dates, unlucky with set of id's queried), 100% of the data pages may be read. Option #2: As the data will be queried for only a small subset of ids at a time, define the clustered index (and primary key) as [id], [date]. Do not bother to create a partitioned table. Expected Outcome #2: Expected huge performance hit when it comes to loading data as we can no longer quickly limit by date. Expected huge performance benefit when it comes to my analytic queries as it will minimize the number of data pages read. Option #3: Clustered (and primary key) as follows: [id], [date]; "sliding window" partition function / scheme on date. Expected Outcome #3: Not sure what to expect. Given that the first column in the clustered index is [id] and thus (it is my understanding) the data is arranged by ID, I would expect good performance from my analytic queries. However, the data is partitioned by date, which is contrary to the definition of the clustered index (but still aligned as date is part of the index). I haven't found much documentation that speaks to this scenario and what, if any, performance benefits I may get from this, which brings me to my final, bonus question: If I am creating a table on one filegroup on one disk, with a clustered index on one column, is there any benefit (besides partition switching when loading the data) that comes from defining a partition on the same column? A: This table is awesomely narrow. If the real table will be this narrow, you should be happy to have table scans instead of index->lookups. I would do this: CREATE TABLE Narrow ( [id] INT NOT NULL, [date] SMALLDATETIME NOT NULL, [sales] FLOAT NULL, PRIMARY KEY(id, date) --EDIT, just noticed your id is not unique. ) CREATE INDEX CoveringNarrow ON Narrow(date, id, sales) This handles point queries with seeks and wide-range queries with limited scans against date criteria and id criteria. There is no per-record lookup from index. Yes, I've doubled the write time (and space used) but that's fine, imo. 
If there's some need for a specific piece of data (and that need is demonstrated by profiling!!), I'd create a clustered view targeting that section of the table. CREATE VIEW Narrow200801 AS SELECT * FROM Narrow WHERE '2008-01-01' <= [date] AND [date] < '2008-02-01' --There is some command that I don't have at my fingertips to make this a clustered view. Clustered views can be used in queries by name, or the optimizer will choose to use the clustered views when the FROM and WHERE clauses are appropriate. For example, this query will use the clustered view. Note that the base table is referred to in the query. SELECT SUM(sales) FROM Narrow WHERE '2008-01-01' <= [date] AND [date] < '2008-02-01' A: A clustered index will give you performance benefits for queries when localising the I/O. Date is a traditional partitioning strategy as many D/W queries look at movements by date. A rule of thumb for a partitioned table suggests that partitions should be around 10m rows in size. It would be somewhat unusual to see much performance gain from a clustered index on a diverse analytic workload. The query optimiser will use a technique called 'Index Intersection' to select rows without even hitting the fact table. See here for a post I did on another question that explains this in more depth with some links. A clustered index may or may not participate in the index intersection, so you may find that it gains you relatively little on a general query workload. You may find circumstances in loading where clustered indexes give you some gain, particularly if you have derived calculations (such as Earned Premium) that are computed within the ETL process. In this case you may get some benefits. If you have a specific query that you know will be executed all the time, it might make sense to use clustered indexes for this. Options #2 and #3 are only going to significantly benefit you if you expect this type of query to be the overwhelming majority of the work done by the application. For a flexible system, a simple date range partition with an index on the ID (and date, if the partitions hold a range) would probably get you as good a performance as any. You might get some benefit from clustering the index in limited circumstances. You might also get some mileage from building a cube over the data and ensuring that the aggregations are set up correctly for this query. A: If you are using the partitions in the select statements, then you can gain some speed. If you are not using them, only using "standard" selects, then you have no benefit. On your original problem: I would recommend option #1 with the non-clustered index on id included. A: I would do the following: * *Non-Clustered Index on [Id] *Clustered Index on [Date] *Convert the [sales] datatype to numeric instead of float A: Partition the table by date. Several horizontal partitions will be more performant than one large table with that many rows. A: A clustered index on the date column isn't good if you'll have inserts that arrive faster than the datetime resolution of 3.33 ms. If you do, you'll get 2 keys with the same value and your index will have to get another internal uniquifier, which will increase its size. I'd go with #2 of your options.
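To make the "sliding window" partitioning idea concrete, a SQL Server 2005 sketch along the lines of Option #1 might look like the following; the table, function and scheme names and the monthly boundary dates are placeholders, and everything stays on the PRIMARY filegroup as the question assumes:

-- Monthly partition function and scheme on the date column.
CREATE PARTITION FUNCTION pfSalesDate (SMALLDATETIME)
AS RANGE RIGHT FOR VALUES ('2008-01-01', '2008-02-01', '2008-03-01');

CREATE PARTITION SCHEME psSalesDate
AS PARTITION pfSalesDate ALL TO ([PRIMARY]);

-- Option #1: clustered key leading on [date], table aligned to the scheme.
CREATE TABLE Sales
(
    [id]    INT NOT NULL,
    [date]  SMALLDATETIME NOT NULL,
    [sales] FLOAT NULL,
    CONSTRAINT PK_Sales PRIMARY KEY CLUSTERED ([date], [id])
) ON psSalesDate ([date]);

-- Sliding window, step 1: open a new partition for the next load.
ALTER PARTITION SCHEME psSalesDate NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pfSalesDate() SPLIT RANGE ('2008-04-01');

-- Sliding window, step 2: switch the oldest partition out to an archive
-- table with an identical structure (not shown), then merge the boundary.
-- ALTER TABLE Sales SWITCH PARTITION 1 TO SalesArchive;
-- ALTER PARTITION FUNCTION pfSalesDate() MERGE RANGE ('2008-01-01');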
{ "language": "en", "url": "https://stackoverflow.com/questions/120731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: How can I enumerate all *.exes and the details about each? Something like: //Get all search data $search = new search('C:\', '*.exe'); while($item = $search->next()) { $details = $item->getDetails(); append_details('C:\files.txt', implode_details($details)); } But in NSIS (http://nsis.sourceforge.net/) A: You could use the FindFirst/FindNext functions to loop through everything in a particular directory. FindFirst $0 $1 "c:\*.exe" FileLoop: StrCmp $1 "" DoneFileLoop ;Check for no files DetailPrint $1 ;Print file name ;Code to output whatever details you wanted to a txt file here FindNext $0 $1 ;Get the next file from the list goto FileLoop ;Go back to the top and check for no files DoneFileLoop:
{ "language": "en", "url": "https://stackoverflow.com/questions/120733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Supporting URLs like /similar-to-:product in Ruby on Rails? I have been trying to use routes.rb for creating a URL /similar-to-:product (where product is dynamic) for my website. The issue is that routes.rb readily supports URLs like /:product-similar but doesn't support the former because it requires :product to be preceded with a separator ('/' is a separator but '-' isn't). The list of separators is in ActionController::Routing::SEPARATORS. I can't add '-' as a separator because :product can also contain a hyphen. What is the best way of supporting a URL like this? One way that I have successfully tried is to not use routes.rb and put the URL parsing logic in the controller itself, but that isn't the cleanest way. A: I would refactor your URLs so that they're simply "similar-to/product" A: In fact you can add - as a separator, then use route globbing. map.similar_product '/similar-to-*product', :controller => 'products', :action => 'similar' then, in ProductsController#similar @product = Product.find_by_slug params[:product].join('-') Though refactoring does seem nicer, since with this approach you'll need to specially handle all slugs that can contain hyphens. A: An easy solution is using a routing filter. See its README for details. With a routing filter you can have a URL /similar-to-:product and preprocess it to /similar-to/:product before it gets to route recognition. You'll also want to post-process generated paths back from /similar-to/:product to /similar-to-:product. A: I'm a little confused, but could you maybe add "to-" as a separator?
{ "language": "en", "url": "https://stackoverflow.com/questions/120751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: ADO.NET Entity Framework and identity columns Is the Entity Framework aware of identity columns? I am using SQL Server 2005 Express Edition and have several tables where the primary key is an identity column. when I use these tables to create an entity model and use the model in conjunction with an entity datasource bond to a formview in order to create a new entity I am asked to enter a value for the identity column. Is there a way to make the framework not ask for values for identity columns? A: I know this post is quite old, but this may help the next person arriving hear via a Google search for "Entitiy Framework" and "Identity". It seems that Entity Frameworks does respect server-generated primary keys, as the case would be if the "Identity" property is set. However, the application side model still requires a primary key to be supplied in the CreateYourEntityHere method. The key specified here is discarded upon the SaveChanges() call to the context. The page here gives the detailed information regarding this. A: If you are using Entity Framework 5, you can use the following attribute. [DatabaseGenerated(DatabaseGeneratedOption.Identity)] A: You should set the identity columns' identity specification so that the (Is Identity) property is set to true. You can do this in your table designer in SSMS. Then you may need to update the entity data model. Perhaps that what you mean by saying the "Primary key is an identity column," or perhaps you missed this step. A: This is the best answer I've seen. You have to manually edit the storage layer xml to set StoreGeneratedPattern="Identity" on each primary key of type UniqueIdentifier that has the default value set to NewID(). http://web.archive.org/web/20130728225149/http://leedumond.com/blog/using-a-guid-as-an-entitykey-in-entity-framework-4/ A: Entity Framework is aware and can handle identity columns. Your problem can be maybe not the EF itself but the generated formview of it. Try to delete the input for the identity column from the insert form and let's see what happens. A: If all else fails before you rip out your hair - try deleting your EntityModel and re-importing from SQL Server. If you've been tweaking the keys and relationships and relying on the 'update model from database' function it's still a bit buggy in the RC version I've found - a fresh import may help. A: In C# you can do something like this to make it aware: In your FooConfiguration.cs : EntityTypeConfiguration<Foo>: this.Property(x => x.foo).HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity); Then to use it, just be sure to insert the item into the context and call context.SaveChanges() before using x.foo to get the updated auto-incremented value. Otherwise x.foo will just be 0 or null. A: Entity framework does not fully understand Identities for some reason. The correct workaround is to set the Setter for that column to Private. This will make any generated UI understand that it should not set the identity value since it is impossible for it to set a private field. A: What worked for me was setting the StoreGeneratedPattern to None, when it was an Identity column. Now it all works consistently. The main problem with this is editing the models is an extreme chore if you have many models. A: I cannot believe it. Intellisensing ItemCollection yield the single item with ID = 0 after SaveChanges. 
Dim ItemCollection = From d In action.Parameter Select New STOCK_TYPE With { .Code = d.ParamValue.<Code>.Value, .GeneralUseID = d.ParamValue.<GeneralUse>.Value, } GtexCtx.STOCK_TYPE.AddObject(ItemCollection.FirstOrDefault) GtexCtx.SaveChanges() No matter what I did, nothing helped. After 8 hours of deleting my model, building and rebuilding 35 times, and experimenting with and editing the XML of the EDMX, I was close to deleting my whole SQL Server database. At the 36th compile, this dumbfounding solution worked: Dim abc = ItemCollection.FirstOrDefault GtexCtx.STOCK_TYPE.AddObject(abc) GtexCtx.SaveChanges() abc.ID yields 41 (the identity I needed). EDIT: Here's some simple code for thought for looping AddObject and still getting the IDs: Dim listOfST As List(Of STOCK_TYPE) = ItemCollection.ToList() For Each q As STOCK_TYPE In listOfST GtexCtx.STOCK_TYPE.AddObject(q) Next GtexCtx.SaveChanges() ...more code for inter-relationship tables Try IntelliSense on listOfST after SaveChanges and you will find the updated IDs. Maybe there's a better way, but the concept is there.
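Pulling the attribute answer and the SaveChanges() behaviour together, a minimal code-first sketch could look like the following; Widget, MyContext and the rest of the names are invented for illustration and are not taken from any answer above:

using System;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;
using System.Data.Entity;

public class Widget
{
    [Key]
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]   // let SQL Server assign the value
    public int Id { get; set; }

    public string Name { get; set; }
}

public class MyContext : DbContext
{
    public DbSet<Widget> Widgets { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        using (var context = new MyContext())
        {
            var widget = new Widget { Name = "example" };
            context.Widgets.Add(widget);   // widget.Id is still 0 here
            context.SaveChanges();         // EF copies the generated identity back
            Console.WriteLine(widget.Id);  // now holds the database value
        }
    }
}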
{ "language": "en", "url": "https://stackoverflow.com/questions/120755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "49" }
Q: Using the literal '@' with a string variable I have a helper class pulling a string from an XML file. That string is a file path (so it has backslashes in it). I need to use that string as it is... How can I use it like I would with the literal command? Instead of this: string filePath = @"C:\somepath\file.txt"; I want to do this: string filePath = @helper.getFilePath(); //getFilePath returns a string This isn't how I am actually using it; it is just to make what I mean a little clearer. Is there some sort of .ToLiteral() or something? A: I'm not sure if I understand. In your example: if helper.getFilePath() returns "c:\somepath\file.txt", there will be no problem, since the @ is only needed if you are explicitely specifying a string with "". When Functions talk to each other, you will always get the literal path. If the XML contains c:\somepath\file.txt and your function returns c:\somepath\file.txt, then string filePath will also contain c:\somepath\file.txt as a valid path. A: The @"" just makes it easier to write string literals. string (C# Reference, MSDN) Verbatim string literals start with @ and are also enclosed in double quotation marks. For example: @"good morning" // a string literal The advantage of verbatim strings is that escape sequences are not processed, which makes it easy to write, for example, a fully qualified file name: @"c:\Docs\Source\a.txt" // rather than "c:\\Docs\\Source\\a.txt" One place where I've used it is in a regex pattern: string pattern = @"\b[DdFf][0-9]+\b"; If you have a string in a variable, you do not need to make a "literal" out of it, since if it is well formed, it already has the correct contents. A: In C# the @ symbol combined with doubles quotes allows you to write escaped strings. E.g. print(@"c:\mydir\dont\have\to\escape\backslashes\etc"); If you dont use it then you need to use the escape character in your strings. http://msdn.microsoft.com/en-us/library/aa691090(VS.71).aspx You dont need to specify it anywhere else in code. In fact doing so should cause a compiler error. A: I don't think you have to worry about it if you already have the value. The @ operator is for when you're specifying the string (like in your first code snippet). What are you attempting to do with the path string that isn't working? A: You've got it backwards. The @-operator is for turning literals into strings, while keeping all funky characters. Your path is already a string - you don't need to do anything at all to it. Just lose the @. string filePath = helper.getFilePath(); A: The string returned from your helper class is not a literal string so you don't need to use the '@' character to remove the behaviour of the backslashes.
{ "language": "en", "url": "https://stackoverflow.com/questions/120763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How to change CSS class of a HTML page element using ASP.NET? I have several <li> elements with different id's on ASP.NET page: <li id="li1" class="class1"> <li id="li2" class="class1"> <li id="li3" class="class1"> and can change their class using JavaScript like this: li1.className="class2" But is there a way to change <li> element class using ASP.NET? It could be something like: WebControl control = (WebControl)FindControl("li1"); control.CssClass="class2"; But FindControl() doesn't work as I expected. Any suggestions? Thanks in advance! A: The FindControl method searches for server controls. That is, it looks for controls with the attribute "runat" set to "server", as in: <li runat="server ... ></li> Because your <li> tags are not server controls, FindControl cannot find them. You can add the "runat" attribute to these controls or use ClientScript.RegisterStartupScript to include some client side script to manipulate the class, e.g. System.Text.StringBuilder sb = new System.Text.StringBuilder(); sb.Append("<script language=\"javascript\">"); sb.Append("document.getElementById(\"li1\").className=\"newClass\";") sb.Append("</script>"); ClientScript.RegisterStartupScript(this.GetType(), "MyScript", sb.ToString()); A: This will find the li element and set a CSS class on it. using System.Web.UI.HtmlControls; HtmlGenericControl liItem = (HtmlGenericControl) ctl.FindControl("liItemID"); liItem.Attributes.Add("class", "someCssClass"); Remember to add your runat="server" attribute as mentioned by others. A: you must set runat="server" like: <li id="li1" runat="server">stuff</li> A: Add runat="server" in your HTML page then use the attribute property in your asp.Net page like this li1.Attributes["Class"] = "class1"; li2.Attributes["Class"] = "class2"; A: Please try this if you want to apply style: li1.Style.Add("background-color", "black"); For CSS, you can try below syntax : li1.Attributes.Add("class", "clsItem"); A: FindControl will work if you include runat="server" in the <li> <li id="li1" runat="server">stuff</li> Otherwise you server side code can't 'see' it. A: Leaf Dev provided the solution for this, but in the place of "ctl" you need to insert "Master". It's working for me anyway: using System.Web.UI.HtmlControls; HtmlGenericControl liItem = (HtmlGenericControl) ctl.FindControl("liItemID"); liItem.Attributes.Add("class", "someCssClass"); A: You also can try this too if u want to add some few styles: li1.Style.add("color","Blue"); li2.Style.add("text-decoration","line-through");
{ "language": "en", "url": "https://stackoverflow.com/questions/120766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Implementing Google maps in mobile How do I show Google Maps in a mobile application using .NET 2.0? A: A very simple approach is to use the Google Maps Static API, an HttpWebRequest and Image.Save to download an image of the map, e.g. "http://maps.google.com/staticmap?zoom=14&size=512x512&maptype=mobile&markers=40.714728,-73.998672&key=YOUR-GOOGLE-MAPS-API-KEY" which can be shown in a PictureBox. Regards, tamberg A: Late to the party, but you might find this helpful as well: http://www.koushikdutta.com/2008/07/virtual-earth-and-google-maps-tiled-map.html Should give you a bit more flexibility (zooming, moving around, etc... ) than the static maps API. A: Do the maps absolutely have to be from Google? I was writing an application that made use of Live Maps to display maps in a Compact Framework application. When I wrote the app I used the 3.5 framework though, so I am not sure whether or not this will fit your needs. http://www.codeproject.com/KB/mobile/WiMoWifiPosition.aspx A: There are various controls available to integrate Google Maps into an ASP.NET website. Since the Sockets and Net libraries are available on Windows Mobile, you could possibly integrate using the Google Maps API, however you will have to build your own controls to display the maps. The API can be found here.
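A rough sketch of the static-map approach from the first answer, for the Compact Framework; the API key, coordinates and image size are placeholders, error handling is omitted, and it assumes the Bitmap stream constructor is available on the target framework:

using System.Drawing;
using System.IO;
using System.Net;

public static class StaticMapLoader
{
    public static Bitmap DownloadMap()
    {
        string url = "http://maps.google.com/staticmap?zoom=14&size=240x320"
                   + "&maptype=mobile&markers=40.714728,-73.998672&key=YOUR-KEY";

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        using (WebResponse response = request.GetResponse())
        using (Stream stream = response.GetResponseStream())
        using (MemoryStream buffer = new MemoryStream())
        {
            byte[] chunk = new byte[4096];
            int read;
            while ((read = stream.Read(chunk, 0, chunk.Length)) > 0)
            {
                buffer.Write(chunk, 0, read);
            }
            // Keep the image bytes in their own stream so the Bitmap
            // does not depend on the disposed network stream.
            return new Bitmap(new MemoryStream(buffer.ToArray()));
        }
    }
}

// e.g. on the form: pictureBox1.Image = StaticMapLoader.DownloadMap();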
{ "language": "en", "url": "https://stackoverflow.com/questions/120775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What should an Application Controller do? I am a bit confused in what the application controller should do? Because I see the functionality will also exists in your MVP pattern to make the decisions which form should be shown when a button is clicked? Are there any good examples for Windows Forms that uses the application controller pattern? There is a difference in the MVC(ontroler) and the Application Controller. I know the MVC(ontroller), I am not sure what is the responsibilities for an Application Controller, and how does it fit into a WinForms application. Martin Fowler also calls this the Application Controller pattern, surely it is not the same thing as the MVC(ontroller)? A: An Application Controller is a bit of a different beast than the controller used in MVC. Martin Fowler's page on the Application Controller. In the case of an MVP WinForms app, which seems to be what the question topic is about I think. You can put all the logic for "what form do I show now" into the Presenter, but as your application grows you're going to be duplicating a lot of code between Presenters. Say you have 2 views that both have a button for "Edit this Widget", both of them would have to have logic to get a WidgetEditorPresenter and show the associated view. If you have an ApplicationController, you move that logic into the ApplicationController, and now you simply have a dependency in all your presenters on the ApplicationController and you can call appController.EditWidget() and it will pop up the correct view. The application controller is an uber-controller that controls application flow throughout your system as you move from screen to screen. A: I recently wrote an article on creating and using an ApplicationController in a C# Winforms project, to decouple the workflow and presenters from the forms directly. It may help: Decoupling Workflow And Forms With An Application Controller edit: Archive.org has got a more readable copy of the article at this time. A: Personally I have no experience with MVP or winforms, but I have worked with MVC. I hope this is what you're asking, otherwise ignore my answer completely. The C in MVC is responsible for more than just choosing the next view to be presented to the client. It holds most, preferably all, business-logic of the application, including performance of system tasks (such as logging and enforcement of permissions upon the flow of data from the model and to it). Its primary task is, naturally, to serve the presentation layer above it and separate it from the model layer below while mediating between them. I guess you can think of it as the brain of the application. Hope this helps, Yuval =8-)
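As a rough C# illustration of the "uber-controller" idea described above, with every type name invented for the example rather than taken from a real framework:

public class Widget { }

public class WidgetEditorView : System.Windows.Forms.Form { }

public class WidgetEditorPresenter
{
    public WidgetEditorPresenter(WidgetEditorView view, Widget widget, ApplicationController appController) { }
}

public class ApplicationController
{
    // Central place for "what screen comes next" decisions.
    public void EditWidget(Widget widget)
    {
        var view = new WidgetEditorView();
        new WidgetEditorPresenter(view, widget, this);
        view.ShowDialog();
    }
}

public class CustomerListPresenter
{
    private readonly ApplicationController _appController;

    public CustomerListPresenter(ApplicationController appController)
    {
        _appController = appController;
    }

    // The view's "Edit this Widget" button ends up here; the presenter
    // only delegates the navigation, it never creates the next form itself.
    public void OnEditWidgetClicked(Widget selected)
    {
        _appController.EditWidget(selected);
    }
}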
{ "language": "en", "url": "https://stackoverflow.com/questions/120781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How do I implement IEqualityComparer on an immutable generic Pair struct? Currently I have this (edited after reading advice): struct Pair<T, K> : IEqualityComparer<Pair<T, K>> { readonly private T _first; readonly private K _second; public Pair(T first, K second) { _first = first; _second = second; } public T First { get { return _first; } } public K Second { get { return _second; } } #region IEqualityComparer<Pair<T,K>> Members public bool Equals(Pair<T, K> x, Pair<T, K> y) { return x.GetHashCode(x) == y.GetHashCode(y); } public int GetHashCode(Pair<T, K> obj) { int hashCode = obj.First == null ? 0 : obj._first.GetHashCode(); hashCode ^= obj.Second == null ? 0 : obj._second.GetHashCode(); return hashCode; } #endregion public override int GetHashCode() { return this.GetHashCode(this); } public override bool Equals(object obj) { return (obj != null) && (obj is Pair<T, K>) && this.Equals(this, (Pair<T, K>) obj); } } The problem is that First and Second may not be reference types (VS actually warns me about this), but the code still compiles. Should I cast them (First and Second) to objects before I compare them, or is there a better way to do this? Edit: Note that I want this struct to support value and reference types (in other words, constraining by class is not a valid solution) Edit 2: As to what I'm trying to achieve, I want this to work in a Dictionary. Secondly, SRP isn't important to me right now because that isn't really the essence of this problem - it can always be refactored later. Thirdly, comparing to default(T) will not work in lieu of comparing to null - try it. A: Your IEqualityComparer implementation should be a different class (and definately not a struct as you want to reuse the reference). Also, your hashcode should never be cached, as the default GetHashcode implementation for a struct (which you do not override) will take that member into account. A: If you use hashcodes in comparing methods, you should check for "realy value" if the hash codes are same. bool result = ( x._hashCode == y._hashCode ); if ( result ) { result = ( x._first == y._first && x._second == y._second ); } // OR?: if ( result ) { result = object.Equals( x._first, y._first ) && object.Equals( x._second, y._second ); } // OR?: if ( result ) { result = object.ReferenceEquals( x._first, y._first ) && object.Equals( x._second, y._second ); } return result; But there is littlebit problem with comparing "_first" and "_second" fields. By default reference types uses fore equality comparing "object.ReferenceEquals" method, bud they can override them. So the correct solution depends on the "what exactly should do" the your comparing method. Should use "Equals" method of the "_first" & "_second" fields, or object.ReferenceEquals ? Or something more complex? A: It looks like you need IEquatable instead: internal struct Pair<T, K> : IEquatable<Pair<T, K>> { private readonly T _first; private readonly K _second; public Pair(T first, K second) { _first = first; _second = second; } public T First { get { return _first; } } public K Second { get { return _second; } } public bool Equals(Pair<T, K> obj) { return Equals(obj._first, _first) && Equals(obj._second, _second); } public override bool Equals(object obj) { return obj is Pair<T, K> && Equals((Pair<T, K>) obj); } public override int GetHashCode() { unchecked { return (_first != null ? _first.GetHashCode() * 397 : 0) ^ (_second != null ? _second.GetHashCode() : 0); } } } A: Regarding the warning, you can use default(T) and default(K) instead of null. 
I can't see what you're trying to achieve, but you shouldn't be using the hashcode to compare for equality - there is no guarantee that two different objects won't have the same hashcode. Also, even though your struct is immutable, the members _first and _second aren't. A: First of all, this code violates the SRP principle. The Pair class is used to hold pairs of items, right? It's incorrect to delegate equality-comparison functionality to it. Next, let's take a look at your code: The Equals method will fail if one of the arguments is null - no good. Equals uses the hash code of the Pair class, but take a look at the definition of GetHashCode: it is just a combination of the pair members' hash codes, which has nothing to do with equality of the items. I would expect the Equals method to compare the actual data. I'm too busy at the moment to provide a correct implementation, unfortunately. But at first glance, your code seems to be wrong. It would be better if you provided us a description of what you want to achieve. I'm sure SO members will be able to give you some advice. A: Might I suggest the use of lambda expressions as a parameter? This would allow you to specify how to compare the internal generic types. A: I don't get any warning when compiling about this but I assume you are talking about the == null comparison? A cast seems like it would make this all somewhat cleaner, yes. PS. You really should use a separate class for the comparer. This class that fills two roles (being a pair and comparing pairs) is plain ugly.
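Following the "use a separate class for the comparer" advice, one possible sketch for use with a Dictionary, assuming the Pair<T, K> struct from the question keeps only First and Second:

using System.Collections.Generic;

public sealed class PairComparer<T, K> : IEqualityComparer<Pair<T, K>>
{
    public bool Equals(Pair<T, K> x, Pair<T, K> y)
    {
        // Compare the actual values; never compare hash codes for equality.
        return EqualityComparer<T>.Default.Equals(x.First, y.First)
            && EqualityComparer<K>.Default.Equals(x.Second, y.Second);
    }

    public int GetHashCode(Pair<T, K> obj)
    {
        unchecked
        {
            // The default comparers handle null values for reference types.
            return (EqualityComparer<T>.Default.GetHashCode(obj.First) * 397)
                 ^ EqualityComparer<K>.Default.GetHashCode(obj.Second);
        }
    }
}

// Usage:
// var sales = new Dictionary<Pair<string, int>, double>(new PairComparer<string, int>());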
{ "language": "en", "url": "https://stackoverflow.com/questions/120783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What are the pros and cons of Web Services and RMI in a Java-only environment? When developing distributed applications, all written in Java by the same company, would you choose Web Services or RMI? What are the pros and cons in terms of performance, loose coupling, ease of use, ...? Would anyone choose WS? Can you build a service-oriented architecture with RMI? A: I'd try to think about it this way: Are you going for independent services running beneath each other, and those services may be accessed by non-Java applications some time in the future? Then go for web services. Do you just want to spread parts of an application (mind the singular) over several servers? Then go for RMI and you won't have to leave the Java universe to get everything working together tightly coupled. A: I would choose WS. * *It is unlikely that WS/RMI will be your bottleneck. *Why close the door on other possible technologies in the future? *RMI might have problems if the versions of the classes on the client and server get out of sync. And... I would most likely choose REST services. A: If You Aren't Gonna Need It (interop with non-Java), and you probably aren't, RMI is going to be better; less code, less configuration, less bandwidth overhead. An option if you are scared that you Are Gonna Need It is to use EJB3; it uses RMI, is very easy to set up and deploy, but also allows you to turn your calls into Web Services easily if you need them. Whatever you do, do not create your own thing; stick to a standard. A: My choices are: standard Java serialization - pros: IMHO offers the best performance and is simple to implement (I'm using Spring to expose a local interface as a remote one); cons: serialization doesn't work between different JVM versions. Binary serialization (for example Hessian from Jetty) - pros: same performance as Java serialization, and it works between different JVM versions. WS: only if there is a need for interoperability between different platforms (Java + .NET); otherwise it is just too heavyweight. A: RMI is a great rapid-development transport, but I would advise against using it in a production environment. The serialization compatibility problem can make things awkward; you have to coordinate your deployments very carefully. Web services are inefficient, yes, but you can just throw hardware at it. Alternatively, use plain, lightweight XML-over-HTTP, rather than full-fat SOAP/WSDL.
{ "language": "en", "url": "https://stackoverflow.com/questions/120791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How do I set the proxy to be used by the JVM Many times, a Java app needs to connect to the Internet. The most common example happens when it is reading an XML file and needs to download its schema. I am behind a proxy server. How can I set my JVM to use the proxy ? A: To set an HTTP/HTTPS and/or SOCKS proxy programmatically: ... public void setProxy() { if (isUseHTTPProxy()) { // HTTP/HTTPS Proxy System.setProperty("http.proxyHost", getHTTPHost()); System.setProperty("http.proxyPort", getHTTPPort()); System.setProperty("https.proxyHost", getHTTPHost()); System.setProperty("https.proxyPort", getHTTPPort()); if (isUseHTTPAuth()) { String encoded = new String(Base64.encodeBase64((getHTTPUsername() + ":" + getHTTPPassword()).getBytes())); con.setRequestProperty("Proxy-Authorization", "Basic " + encoded); Authenticator.setDefault(new ProxyAuth(getHTTPUsername(), getHTTPPassword())); } } if (isUseSOCKSProxy()) { // SOCKS Proxy System.setProperty("socksProxyHost", getSOCKSHost()); System.setProperty("socksProxyPort", getSOCKSPort()); if (isUseSOCKSAuth()) { System.setProperty("java.net.socks.username", getSOCKSUsername()); System.setProperty("java.net.socks.password", getSOCKSPassword()); Authenticator.setDefault(new ProxyAuth(getSOCKSUsername(), getSOCKSPassword())); } } } ... public class ProxyAuth extends Authenticator { private PasswordAuthentication auth; private ProxyAuth(String user, String password) { auth = new PasswordAuthentication(user, password == null ? new char[]{} : password.toCharArray()); } protected PasswordAuthentication getPasswordAuthentication() { return auth; } } ... Remember that HTTP proxies and SOCKS proxies operate at different levels in the network stack, so you can use one or the other or both. A: reading an XML file and needs to download its schema If you are counting on retrieving schemas or DTDs over the internet, you're building a slow, chatty, fragile application. What happens when that remote server hosting the file takes planned or unplanned downtime? Your app breaks. Is that OK? See http://xml.apache.org/commons/components/resolver/resolver-article.html#s.catalog.files URL's for schemas and the like are best thought of as unique identifiers. Not as requests to actually access that file remotely. Do some google searching on "XML catalog". An XML catalog allows you to host such resources locally, resolving the slowness, chattiness and fragility. It's basically a permanently cached copy of the remote content. And that's OK, since the remote content will never change. If there's ever an update, it'd be at a different URL. Making the actual retrieval of the resource over the internet especially silly. A: If you want "Socks Proxy", inform the "socksProxyHost" and "socksProxyPort" VM arguments. e.g. java -DsocksProxyHost=127.0.0.1 -DsocksProxyPort=8080 org.example.Main A: I am also behind firewall, this worked for me!! System.setProperty("http.proxyHost", "proxy host addr"); System.setProperty("http.proxyPort", "808"); Authenticator.setDefault(new Authenticator() { protected PasswordAuthentication getPasswordAuthentication() { return new PasswordAuthentication("domain\\user","password".toCharArray()); } }); URL url = new URL("http://www.google.com/"); URLConnection con = url.openConnection(); BufferedReader in = new BufferedReader(new InputStreamReader( con.getInputStream())); // Read it ... 
String inputLine; while ((inputLine = in.readLine()) != null) System.out.println(inputLine); in.close(); A: You can set those flags programmatically this way: if (needsProxy()) { System.setProperty("http.proxyHost",getProxyHost()); System.setProperty("http.proxyPort",getProxyPort()); } else { System.setProperty("http.proxyHost",""); System.setProperty("http.proxyPort",""); } Just return the right values from the methods needsProxy(), getProxyHost() and getProxyPort() and you can call this code snippet whenever you want. A: Add this before you connect to a URL behind a proxy. System.getProperties().put("http.proxyHost", "someProxyURL"); System.getProperties().put("http.proxyPort", "someProxyPort"); System.getProperties().put("http.proxyUser", "someUserName"); System.getProperties().put("http.proxyPassword", "somePassword"); A: This is a minor update, but since Java 7, proxy connections can now be created programmatically rather than through system properties. This may be useful if: * *Proxy needs to be dynamically rotated during the program's runtime *Multiple parallel proxies need to be used *Or just make your code cleaner :) Here's a contrived example in groovy: // proxy configuration read from file resource under "proxyFileName" String proxyFileName = "proxy.txt" String proxyPort = "1234" String url = "http://www.promised.land" File testProxyFile = new File(proxyFileName) URLConnection connection if (!testProxyFile.exists()) { logger.debug "proxyFileName doesn't exist. Bypassing connection via proxy." connection = url.toURL().openConnection() } else { String proxyAddress = testProxyFile.text connection = url.toURL().openConnection(new Proxy(Proxy.Type.HTTP, new InetSocketAddress(proxyAddress, proxyPort))) } try { connection.connect() } catch (Exception e) { logger.error e.printStackTrace() } Full Reference: http://docs.oracle.com/javase/7/docs/technotes/guides/net/proxies.html A: From the Java documentation (not the javadoc API): http://download.oracle.com/javase/6/docs/technotes/guides/net/proxies.html Set the JVM flags http.proxyHost and http.proxyPort when starting your JVM on the command line. This is usually done in a shell script (in Unix) or bat file (in Windows). Here's the example with the Unix shell script: JAVA_FLAGS=-Dhttp.proxyHost=10.0.0.100 -Dhttp.proxyPort=8800 java ${JAVA_FLAGS} ... When using containers such as JBoss or WebLogic, my solution is to edit the start-up scripts supplied by the vendor. Many developers are familiar with the Java API (javadocs), but many times the rest of the documentation is overlooked. It contains a lot of interesting information: http://download.oracle.com/javase/6/docs/technotes/guides/ Update : If you do not want to use proxy to resolve some local/intranet hosts, check out the comment from @Tomalak: Also don't forget the http.nonProxyHosts property! -Dhttp.nonProxyHosts="localhost|127.0.0.1|10.*.*.*|*.example.com|etc" A: Recently I've discovered the way to allow JVM to use browser proxy settings. 
What you need to do is to add ${java.home}/lib/deploy.jar to your project and to init the library like the following: import com.sun.deploy.net.proxy.DeployProxySelector; import com.sun.deploy.services.PlatformType; import com.sun.deploy.services.ServiceManager; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; public abstract class ExtendedProxyManager { private static final Log logger = LogFactory.getLog(ExtendedProxyManager.class); /** * After calling this method, proxy settings can be magically retrieved from default browser settings. */ public static boolean init() { logger.debug("Init started"); // Initialization code was taken from com.sun.deploy.ClientContainer: ServiceManager .setService(System.getProperty("os.name").toLowerCase().indexOf("windows") != -1 ? PlatformType.STANDALONE_TIGER_WIN32 : PlatformType.STANDALONE_TIGER_UNIX); try { // This will call ProxySelector.setDefault(): DeployProxySelector.reset(); } catch (Throwable throwable) { logger.error("Unable to initialize extended dynamic browser proxy settings support.", throwable); return false; } return true; } } Afterwards the proxy settings are available to Java API via java.net.ProxySelector. The only problem with this approach is that you need to start JVM with deploy.jar in bootclasspath e.g. java -Xbootclasspath/a:"%JAVA_HOME%\jre\lib\deploy.jar" -jar my.jar. If somebody knows how to overcome this limitation, let me know. A: That works for me: public void setHttpProxy(boolean isNeedProxy) { if (isNeedProxy) { System.setProperty("http.proxyHost", getProxyHost()); System.setProperty("http.proxyPort", getProxyPort()); } else { System.clearProperty("http.proxyHost"); System.clearProperty("http.proxyPort"); } } P/S: I base on GHad's answer. A: JVM uses the proxy to make HTTP calls System.getProperties().put("http.proxyHost", "someProxyURL"); System.getProperties().put("http.proxyPort", "someProxyPort"); This may use user setting proxy System.setProperty("java.net.useSystemProxies", "true"); A: Combining Sorter's and javabrett/Leonel's answers: java -Dhttp.proxyHost=10.10.10.10 -Dhttp.proxyPort=8080 -Dhttp.proxyUser=username -Dhttp.proxyPassword=password -jar myJar.jar A: Set the java.net.useSystemProxies property to true. You can set it, for example, through the JAVA_TOOL_OPTIONS environmental variable. In Ubuntu, you can, for example, add the following line to .bashrc: export JAVA_TOOL_OPTIONS+=" -Djava.net.useSystemProxies=true" A: To use the system proxy setup: java -Djava.net.useSystemProxies=true ... Or programatically: System.setProperty("java.net.useSystemProxies", "true"); Source: http://docs.oracle.com/javase/7/docs/api/java/net/doc-files/net-properties.html A: You can set some properties about the proxy server as jvm parameters -Dhttp.proxyPort=8080, proxyHost, etc. 
but if you need to pass through an authenticating proxy, you need an authenticator, as in this example: ProxyAuthenticator.java import java.net.*; import java.io.*; public class ProxyAuthenticator extends Authenticator { private String userName, password; protected PasswordAuthentication getPasswordAuthentication() { return new PasswordAuthentication(userName, password.toCharArray()); } public ProxyAuthenticator(String userName, String password) { this.userName = userName; this.password = password; } } Example.java (this class must live in the same package as ProxyAuthenticator, so no import of it is needed): import java.net.Authenticator; public class Example { public static void main(String[] args) { String username = System.getProperty("proxy.authentication.username"); String password = System.getProperty("proxy.authentication.password"); if (username != null && !username.equals("")) { Authenticator.setDefault(new ProxyAuthenticator(username, password)); } // from here on your JVM will authenticate against the proxy
} } Based on this reply: http://mail-archives.apache.org/mod_mbox/jakarta-jmeter-user/200208.mbox/%3C494FD350388AD511A9DD00025530F33102F1DC2C@MMSX006%3E A: The following shows how to set a proxy with a proxy user and proxy password from the command line in Java, which is a very common case. As a rule, you should not store passwords and hosts in the code in the first place. Passing the system properties on the command line with -D and setting them in the code with System.setProperty("name", "value") are equivalent. But note the difference below. This example works: C:\temp>java -Dhttps.proxyHost=host -Dhttps.proxyPort=port -Dhttps.proxyUser=user -Dhttps.proxyPassword="password" -Djavax.net.ssl.trustStore=c:/cacerts -Djavax.net.ssl.trustStorePassword=changeit com.andreas.JavaNetHttpConnection But the following does not work: C:\temp>java com.andreas.JavaNetHttpConnection -Dhttps.proxyHost=host -Dhttps.proxyPort=port -Dhttps.proxyUser=user -Dhttps.proxyPassword="password" -Djavax.net.ssl.trustStore=c:/cacerts -Djavax.net.ssl.trustStorePassword=changeit The only difference is the position of the system properties (before vs. after the class name). If you have special characters in the password, you may put it in quotes ("@MyPass123%"), as in the example above. If you access an HTTPS service, you have to use https.proxyHost, https.proxyPort etc. If you access an HTTP service, you have to use http.proxyHost, http.proxyPort etc. A: As is pointed out in other answers, if you need to use authenticated proxies, there's no reliable way to do this purely using command-line variables - which is annoying if you're using someone else's application and don't want to mess with the source code. Will Iverson makes the helpful suggestion over at Using HttpProxy to connect to a host with preemptive authentication to use a proxy-management tool such as Proxifier ( http://www.proxifier.com/ for Mac OS X and Windows) to handle this. For example, with Proxifier you can set it up to intercept only java commands, which are then managed and redirected through its (authenticated) proxy. You're going to want to set the proxyHost and proxyPort values to blank in this case though, e.g. pass in -Dhttp.proxyHost= -Dhttp.proxyPort= to your java commands. A: This is a complete example that worked for me - note that for HTTPS there are separate properties (as per https://docs.oracle.com/javase/8/docs/technotes/guides/net/proxies.html). The code below sends a request to the https://api.myip.com API and prints the response.
public static void main(String[] args) throws IOException { System.setProperty("java.net.useSystemProxies", "true"); final String proxyUser = "proxy-user"; final String proxyPass = "password123"; final String host = "some.proxy.io"; final Integer port = 50201; // http System.setProperty("http.proxyHost",host); System.setProperty("http.proxyPort", String.valueOf(port)); System.setProperty("http.proxyUser", proxyUser); System.setProperty("http.proxyPassword", proxyPass); // https System.setProperty("https.proxyHost",host); System.setProperty("https.proxyPort", String.valueOf(port)); System.setProperty("https.proxyUser", proxyUser); System.setProperty("https.proxyPassword", proxyPass); System.setProperty("jdk.http.auth.tunneling.disabledSchemes", ""); System.setProperty("jdk.https.auth.tunneling.disabledSchemes", ""); Authenticator.setDefault(new Authenticator() { @Override public PasswordAuthentication getPasswordAuthentication() { return new PasswordAuthentication(proxyUser, proxyPass.toCharArray()); } } ); // create and send a https request to myip.com API URL url = new URL("https://api.myip.com"); HttpURLConnection connection = (HttpURLConnection) url.openConnection(); connection.setRequestMethod("GET"); int status = connection.getResponseCode(); // read the response BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream())); String responseLine; StringBuffer responseContent = new StringBuffer(); while ((responseLine = in.readLine()) != null) responseContent.append(responseLine); in.close(); connection.disconnect(); // print the response System.out.println(status); System.out.println(responseContent); } A: I think configuring WINHTTP will also work. Many programs including Windows Updates are having problems behind proxy. By setting up WINHTTP will always fix this kind of problems A: You can utilize the http.proxy* JVM variables if you're within a standalone JVM but you SHOULD NOT modify their startup scripts and/or do this within your application server (except maybe jboss or tomcat). Instead you should utilize the JAVA Proxy API (not System.setProperty) or utilize the vendor's own configuration options. Both WebSphere and WebLogic have very defined ways of setting up the proxies that are far more powerful than the J2SE one. Additionally, for WebSphere and WebLogic you will likely break your application server in little ways by overriding the startup scripts (particularly the server's interop processes as you might be telling them to use your proxy as well...).
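For completeness, here is a minimal, self-contained sketch of the per-connection java.net.Proxy API that several of the answers above mention as the alternative to global system properties. The proxy host, port and target URL are placeholders, not values taken from any answer:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;

public class PerConnectionProxyExample {
    public static void main(String[] args) throws Exception {
        // Build a Proxy object once and hand it to each connection that should use it;
        // connections opened without it are not affected, so nothing is set globally.
        Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress("proxy.example.com", 8080));
        HttpURLConnection con = (HttpURLConnection) new URL("http://example.org/").openConnection(proxy);
        try (BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}

Because no JVM-wide state is touched, this style avoids the application-server side effects described in the last answer; authentication, if needed, still goes through Authenticator.setDefault as shown earlier in the thread.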
{ "language": "en", "url": "https://stackoverflow.com/questions/120797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "360" }
Q: Learning FreeBSD What is the average time that it would take a complete novice, whose background is mostly Windows XP, to go through the FreeBSD handbook and get sufficient mastery to setup a server from the ground up? A: It would depend on how much knowledge you have of unix, and from the sounds of things, you probably do not have a whole lot. Assuming you have little knowledge of unix at all, I would say that it will probably take a few days to get a grasp of what is going on, and possibly a week to have something working. The FreeBSD handbook is pretty detailed though, and does provide you with a good grounding of everything you need to do to get things to work. I know that this sounds like an awful lot of time, but in my experience, they really are quite different OS paradigms. A: It's impossible to say. Not only is it highly dependent upon what sort of person you are, but it also depends on what exactly you are doing and how you define "sufficient mastery". Being able to get Apache operational is a simple matter of following step-by-step tutorials, you could do that in a matter of hours. Being able to run a multi-user server competently takes a hell of a lot longer, and the handbook isn't nearly enough. A: You could start with PC BSD (an easy to use distro) to get a feeling of BSD and then move to more advanced stuff like setting up servers. As others have noted, configuring a service to do a couple of things isn't very hard, you just have to follow some steps (which any monkey could do), but if you want more, you'll need extra time. A competent sysadmin does not know only the how, but also the why. Grandma can click all day in Windows and even if Windows Server has a GUI for server administration, it doesn't mean she can configure IIS or the DHCP service. By the way, it would be a good thing if you could learn an (Unix) editor, preferably vi, since it's the standard on BSDs; emacs, joe, pico are nice too, but they aren't so popular. As for the time, it took about two days for me to configure a server. But I had previous Linux experience and the server didn't do anything fancy. A: Look if you've never touched a Unix platform, you should learn a lot of things, basically a different philosophy. The FreeBSD Handbook and the community is simply wonderful, but a reference book like the FBSD handbook contains a lot of information that you must develop yourself. Also, the BSD platform is not easiest of the Unix family to begin from zero. Good sources to learn: * *Absolute BSD book. *The Complete BSD book (this is for Release 5, it's good for learning also). *Man pages. The BSDs man pages are a LOT better than the Linux ones. *FreeBSD Handbook. *FreeBSD forums: forums.freebsd.org and daemonforums. *Any Unix/Linux resource you can get your hands on. Many things are compatible (or near-compatible). e.g, if your friend tells you "I've found an old SGI IRIX / HPUX or (insert unix here) manual that I will throw in the thrashcan" stop it and see what you can learn from it. Keep in mind that you've a long road ahead. But you'll enjoy it. A: Depends on your reading speed :-) Depends on your needs (I mean: what kind of server). Once upon a time I did this - installing a FreeBSD on x86- (although I had some Linux knowledge already at that time), and it took me 3 hours, mainly that much time, because I was working on another machine in parallel. A: Depends on your background: Did you ever use power shell or other command line "applications" (like batches ;-). 
For me, one of the greatest challenges was switching from a completely GUI'd operating system to an operating system that works best with a shell (something a little bit like the DOS prompt). But the moment you get the hang of it you'll be fine again. A: Another aspect is the availability of a second computer beside the one you are setting up. If you can do web searches for additional information while in the midst of doing an install, it can save a lot of time. As for the original topic, I've used Linux and Unix extensively, but have yet to get FreeBSD working after several tries over many years. I'd always get frustrated before I could get it fully installed and configured for a nice graphical desktop. (So personality obviously matters.) But it has been about two years since I've tried, and it may be simple now... Please do not consider this a flame against FreeBSD... just a true story that for some reason I couldn't seem to make it work. If it were not a good OS, I wouldn't have attempted so many times. A: If you're coming from a primarily Windows background, I think FreeBSD would be a great way to dive into UNIX, but you may also want to check out Ubuntu Linux-- specifically, Ubuntu Server. Got a spare Pentium 4-based system lying around at home? Burn yourself a CD and go to it. As a fan of FreeBSD myself, I have to second the recommendation for the "Absolute FreeBSD" book above-- another book worth a look is "Building a Server with FreeBSD 7." My original rationale for choosing FreeBSD was getting better control over what gets installed-- I was really tired of installing RedHat and/or SuSE and having a few gigabytes of stuff I wasn't going to use installed as part of the base install that wasn't easily removed after the fact. I've grown rather enamored with the BSD way of doing things, but it isn't necessarily for everyone. Something to consider-- if you have the hardware, run VMWare or VirtualBox, and set up a few virtual machines to get used to various distributions before making the commitment to install a particular one on bare hardware.
{ "language": "en", "url": "https://stackoverflow.com/questions/120803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Difference between Array.slice and Array().slice I am going through John Resig's excellent Advanced javascript tutorial and I do not thoroughly understand what's the difference between the following calls: (please note that 'arguments' is a builtin javascript word and is not exactly an array hence the hacking with the Array.slice instead of simply calling arguments.slice) >>> arguments [3, 1, 2, 3] >>> Array.slice.call( arguments ) 3,1,2,3 0=3 1=1 2=2 3=3 >>> Array.slice.call( arguments, 1 ) [] >>> Array().slice.call( arguments ) 3,1,2,3 0=3 1=1 2=2 3=3 >>> Array().slice.call( arguments, 1 ) 1,2,3 0=1 1=2 2=3 Basically my misunderstanding boils down to the difference between Array.slice and Array().slice. What exactly is the difference between these two and why does not Array.slice.call behave as expected? (which is giving back all but the first element of the arguments list). A: Array is just a function, albeit a special one (used to initialize arrays). Array.slice is a reference to the slice() function in the Array prototype. It can only be called on an array object and not on the Constructor (i.e. Array) itself. Array seems to behave specially though, as Array() returns an empty array. This doesn't seem to work for non-builtin Constructor functions (there you have to use new). So Array().slice.call is the same as [].slice.call A: Not quite. Watch what happens when you call String.substring.call("foo", 1) and String().substring.call("foo", 2): >>> String.substring.call("foo", 1) "1" >>> String().substring.call("foo", 1) "oo" Array.slice is neither properly referencing the slice function attached to the Array prototype nor the slice function attached to any instantiated Array instance (such as Array() or []). The fact that Array.slice is even non-null at all is an incorrect implementation of the object (/function/constructor) itself. Try running the equivalent code in IE and you'll get an error that Array.slice is null. This is why Array.slice does not behave correctly (nor does String.substring). Proof (the following is something one should never expect based on the definition of slice()...just like substring() above): >>> Array.slice.call([1,2], [3,4]) 3,4 Now, if you properly call slice() on either an instantiated object or the Array prototype, you'll get what you expect: >>> Array.prototype.slice.call([4,5], 1) [5] >>> Array().slice.call([4,5], 1) [5] More proof... >>> Array.prototype.slice == Array().slice true >>> Array.slice == Array().slice false A: How is any call to slice.call() working in the examples provided since a context parameter is not being supplied? Does slice implement it's own call method, thus overriding JavaScript's call method? The call and apply methods take as the first parameter an object to specify the context (this) object to apply to the invocation. A: I believe Array is the type and Array() is the constructor function. Messing around in FireBug: >>> Array === Array() false >>> Array.constructor Function() >>> Array().constructor Array() A: Well, Looking at http://www.devguru.com/Technologies/ecmascript/quickref/slice.html Array().slice is a function (constructor)in the array class, It cant be used as a data member. If you didn't want to use the '()' you would need to call it on the array. ie - arguments.slice(1) A: My guess is that Array is a prototype while Array() is an actual array object. Depending on the JavaScript interpretation, directly calling the prototype method of a builtin object type might work or it might not. 
I don't believe the spec says it has to work, just that calling it on an instantiated object works.
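To tie this back to the arguments object from the original question, the portable idiom is to borrow slice explicitly from Array.prototype (or from an array instance); the small function below is only an illustration and is not part of the question's code:

function toArray() {
    // arguments is array-like but not a real Array, so we borrow slice from Array.prototype
    var args = Array.prototype.slice.call(arguments);
    return args; // a genuine Array; args.slice(1), args.map(...), etc. now behave as expected
}

toArray(3, 1, 2, 3);          // [3, 1, 2, 3]
toArray(3, 1, 2, 3).slice(1); // [1, 2, 3]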
{ "language": "en", "url": "https://stackoverflow.com/questions/120804", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: Reading custom resource from within MIDP 2.0 midlet How can one load a custom resource file (not an image, nor a sound file) from /res within the .jar using MIDP 2.0? A: I'm working with MIDP 2.1, but I hope this is in 2.0 too. Class.getResourceAsStream(path_to_resource) should give you an InputStream to the file. A: getResourceAsStream("/res/yourresource");
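Building on those two answers, a slightly fuller sketch of reading such a resource into a byte array under CLDC/MIDP might look like the following method placed inside the MIDlet class. The resource name is made up; adjust it to the path inside your JAR, and note the imports java.io.InputStream, java.io.ByteArrayOutputStream and java.io.IOException:

private byte[] loadResource(String name) throws IOException {
    // e.g. loadResource("/res/mydata.bin")
    InputStream in = getClass().getResourceAsStream(name);
    if (in == null) {
        throw new IOException("Resource not found: " + name);
    }
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] buffer = new byte[256];
    int read;
    while ((read = in.read(buffer)) != -1) {
        out.write(buffer, 0, read);
    }
    in.close();
    return out.toByteArray();
}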
{ "language": "en", "url": "https://stackoverflow.com/questions/120836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I create a WPF Rounded Corner container? We are creating an XBAP application that we need to have rounded corners in various locations in a single page and we would like to have a WPF Rounded Corner container to place a bunch of other elements within. Does anyone have some suggestions or sample code on how we can best accomplish this? Either with styles on a or with creating a custom control? A: I know that this isn't an answer to the initial question ... but you often want to clip the inner content of that rounded corner border you just created. Chris Cavanagh has come up with an excellent way to do just this. I have tried a couple different approaches to this ... and I think this one rocks. Here is the xaml below: <Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Background="Black" > <!-- Rounded yellow border --> <Border HorizontalAlignment="Center" VerticalAlignment="Center" BorderBrush="Yellow" BorderThickness="3" CornerRadius="10" Padding="2" > <Grid> <!-- Rounded mask (stretches to fill Grid) --> <Border Name="mask" Background="White" CornerRadius="7" /> <!-- Main content container --> <StackPanel> <!-- Use a VisualBrush of 'mask' as the opacity mask --> <StackPanel.OpacityMask> <VisualBrush Visual="{Binding ElementName=mask}"/> </StackPanel.OpacityMask> <!-- Any content --> <Image Source="http://chriscavanagh.files.wordpress.com/2006/12/chriss-blog-banner.jpg"/> <Rectangle Height="50" Fill="Red"/> <Rectangle Height="50" Fill="White"/> <Rectangle Height="50" Fill="Blue"/> </StackPanel> </Grid> </Border> </Page> A: You don't need a custom control, just put your container in a border element: <Border BorderBrush="#FF000000" BorderThickness="1" CornerRadius="8"> <Grid/> </Border> You can replace the <Grid/> with any of the layout containers... A: VB.Net code based implementation of kobusb's Border control solution. I used it to populate a ListBox of Button controls. The Button controls are created from MEF extensions. Each extension uses MEF's ExportMetaData attribute for a Description of the extension. The extensions are VisiFire charting objects. The user pushes a button, selected from the list of buttons, to execute the desired chart. ' Create a ListBox of Buttons, one button for each MEF charting component. 
For Each c As Lazy(Of ICharts, IDictionary(Of String, Object)) In ext.ChartDescriptions Dim brdr As New Border brdr.BorderBrush = Brushes.Black brdr.BorderThickness = New Thickness(2, 2, 2, 2) brdr.CornerRadius = New CornerRadius(8, 8, 8, 8) Dim btn As New Button AddHandler btn.Click, AddressOf GenericButtonClick brdr.Child = btn brdr.Background = btn.Background btn.Margin = brdr.BorderThickness btn.Width = ChartsLBx.ActualWidth - 22 btn.BorderThickness = New Thickness(0, 0, 0, 0) btn.Height = 22 btn.Content = c.Metadata("Description") btn.Tag = c btn.ToolTip = "Push button to see " & c.Metadata("Description").ToString & " chart" Dim lbi As New ListBoxItem lbi.Content = brdr ChartsLBx.Items.Add(lbi) Next Public Event Click As RoutedEventHandler Private Sub GenericButtonClick(sender As Object, e As RoutedEventArgs) Dim btn As Button = DirectCast(sender, Button) Dim c As Lazy(Of ICharts, IDictionary(Of String, Object)) = DirectCast(btn.Tag, Lazy(Of ICharts, IDictionary(Of String, Object))) Dim w As Window = DirectCast(c.Value, Window) Dim cc As ICharts = DirectCast(c.Value, ICharts) c.Value.CreateChart() w.Show() End Sub <System.ComponentModel.Composition.Export(GetType(ICharts))> _ <System.ComponentModel.Composition.ExportMetadata("Description", "Data vs. Time")> _ Public Class DataTimeChart Implements ICharts Public Sub CreateChart() Implements ICharts.CreateChart End Sub End Class Public Interface ICharts Sub CreateChart() End Interface Public Class Extensibility Public Sub New() Dim catalog As New AggregateCatalog() catalog.Catalogs.Add(New AssemblyCatalog(GetType(Extensibility).Assembly)) 'Create the CompositionContainer with the parts in the catalog ChartContainer = New CompositionContainer(catalog) Try ChartContainer.ComposeParts(Me) Catch ex As Exception Console.WriteLine(ex.ToString) End Try End Sub ' must use Lazy otherwise instantiation of Window will hold open app. Otherwise must specify Shutdown Mode of "Shutdown on Main Window". <ImportMany()> _ Public Property ChartDescriptions As IEnumerable(Of Lazy(Of ICharts, IDictionary(Of String, Object))) End Class A: I just had to do this myself, so I thought I would post another answer here. Here is another way to create a rounded corner border and clip its inner content. This is the straightforward way by using the Clip property. It's nice if you want to avoid a VisualBrush. 
The xaml: <Border Width="200" Height="25" CornerRadius="11" Background="#FF919194" > <Border.Clip> <RectangleGeometry RadiusX="{Binding CornerRadius.TopLeft, RelativeSource={RelativeSource AncestorType={x:Type Border}}}" RadiusY="{Binding RadiusX, RelativeSource={RelativeSource Self}}" > <RectangleGeometry.Rect> <MultiBinding Converter="{StaticResource widthAndHeightToRectConverter}" > <Binding Path="ActualWidth" RelativeSource="{RelativeSource AncestorType={x:Type Border}}" /> <Binding Path="ActualHeight" RelativeSource="{RelativeSource AncestorType={x:Type Border}}" /> </MultiBinding> </RectangleGeometry.Rect> </RectangleGeometry> </Border.Clip> <Rectangle Width="100" Height="100" Fill="Blue" HorizontalAlignment="Left" VerticalAlignment="Center" /> </Border> The code for the converter: public class WidthAndHeightToRectConverter : IMultiValueConverter { public object Convert(object[] values, Type targetType, object parameter, CultureInfo culture) { double width = (double)values[0]; double height = (double)values[1]; return new Rect(0, 0, width, height); } public object[] ConvertBack(object value, Type[] targetTypes, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException(); } } A: If you're trying to put a button in a rounded-rectangle border, you should check out msdn's example. I found this by googling for images of the problem (instead of text). Their bulky outer rectangle is (thankfully) easy to remove. Note that you will have to redefine the button's behavior (since you've changed the ControlTemplate). That is, you will need to define the button's behavior when clicked using a Trigger tag (Property="IsPressed" Value="true") in the ControlTemplate.Triggers tag. Hope this saves someone else the time I lost :)
{ "language": "en", "url": "https://stackoverflow.com/questions/120851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "131" }
Q: Can you read the body of a reply to a POST request from flash? I am trying to POST a file to a server that replies with some basic status information in the body. Everything is easy from javascript but I was wondering if I could read the status information from the reply using flash. That way I can use the multifile picker flash provides and send several files at once. BTW would it work with PUT? A: To get information out of your application (one-way information flow) Using the getURL function, you can use the form which specifies methods. getURL (url, window, method) Arguments: * *url should be the url of the script you want to get data to. *window should be a custom named window, or one of the four presets, "_blank", "_parent", "_self" or "_top" *method should be "GET" or "POST" When using the POST method, the current movie clips timeline variables are sent to the script as a seperate block of data after the HTTP POST-request header (exactly the same way a regular HTML form that uses the POST method). You will need Flash 6 or later on the player to get this to work. Example: getURL("http://example.org/myscript.pl", "_top", "POST"); To get information out of scripts (two-way information flow) If you want to get information back into your flash application from your script, you can use loadVariables with similar arguments, the only difference is the second argument, which is the target for the variables to load into. i.e. To load variables into the main movie timeline. loadVariables ("http://example.org/myscript.pl", "_root", "POST");
{ "language": "en", "url": "https://stackoverflow.com/questions/120854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Remove and Replace a visual component at runtime Is it possible to, for instance, replace and free a TEdit with a subclassed component instantiated (conditionally) at runtime? If so, how and when should it be done? I've tried to set the parent to nil and to call free() in the form constructor and AfterConstruction methods, but in both cases I got a runtime error. Being more specific, I got an Access violation error (EAccessViolation). It seems François is right when he says that freeing components at frame construction messes with Form controls housekeeping. A: You have to call RemoveControl of the TEdit's parent to remove the control. Use InsertControl to add the new control. var Edit2: TEdit; begin Edit2 := TEdit.Create(self); Edit2.Left := Edit1.Left; Edit2.Top := Edit1.Top; Edit1.Parent.InsertControl(Edit2); TWinControl(Edit1.Parent).RemoveControl(Edit1); Edit1.Free; end; Replace TEdit.Create with the class you want to use, and copy all properties you need, as I did with Left and Top. A: This more generic routine works either with a Form or Frame (updated to use a subclass for the new control): function ReplaceControlEx(AControl: TControl; const AControlClass: TControlClass; const ANewName: string; const IsFreed : Boolean = True): TControl; begin if AControl = nil then begin Result := nil; Exit; end; Result := AControlClass.Create(AControl.Owner); CloneProperties(AControl, Result); // copy all properties to the new control
// Result.Left := AControl.Left; // or copy some properties manually...
// Result.Top := AControl.Top;
Result.Name := ANewName; Result.Parent := AControl.Parent; // needed for the InsertControl & RemoveControl magic
if IsFreed then FreeAndNil(AControl); end; function ReplaceControl(AControl: TControl; const ANewName: string; const IsFreed : Boolean = True): TControl; begin if AControl = nil then Result := nil else Result := ReplaceControlEx(AControl, TControlClass(AControl.ClassType), ANewName, IsFreed); end; using this routine to pass the properties to the new control: procedure CloneProperties(const Source: TControl; const Dest: TControl); var ms: TMemoryStream; OldName: string; begin OldName := Source.Name; Source.Name := ''; // needed to avoid Name collision
try ms := TMemoryStream.Create; try ms.WriteComponent(Source); ms.Position := 0; ms.ReadComponent(Dest); finally ms.Free; end; finally Source.Name := OldName; end; end; use it like: procedure TFrame1.AfterConstruction; var I: Integer; NewEdit: TMyEdit; begin inherited; NewEdit := ReplaceControlEx(Edit1, TMyEdit, 'Edit2') as TMyEdit; if Assigned(NewEdit) then begin NewEdit.Text := 'My Brand New Edit'; NewEdit.Author := 'Myself'; end; for I := 0 to ControlCount - 1 do begin ShowMessage(Controls[I].Name); end; end; CAUTION: If you are doing this inside the AfterConstruction of the Frame, beware that the hosting Form's construction is not finished yet. Freeing controls there might cause a lot of problems, as you're messing with the Form's controls housekeeping. See what you get if you try to read the new Edit's Caption to display in the ShowMessage... In that case you would want to use ...ReplaceControl(Edit1, 'Edit2', False) and then do a ...FreeAndNil(Edit1) later. A: You can actually use RTTI (look in the TypInfo unit) to clone all the matching properties. I wrote code for this a while back, but I can't find it now. I'll keep looking.
{ "language": "en", "url": "https://stackoverflow.com/questions/120858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Flash Charts and Graphs exported for use in PDF reports - automation I want to put beautiful charts in a report that is available via html and pdf. I'd prefer to use just one API and that all processing occur server-side. I want to embed Flash charts in the html version of reports. I want to embed a static image (preferably in a vector-based format) in the pdf version. What is the best way to accomplish this? I've seen a product called Swiff Chart Generator but it's pretty weak on chart interactivity. I've also seen amcharts, which is strong on interactivity, but weak on pdf output. I'll probably use princexml to handle the overall pdf generation. Princexml doesn't render embedded flash. It does render embedded images and SVG. Another option is flying saucer, which is less feature-full but free. Corda - They make mapping and graphing software that supports some amount of interactivity. They support SVG, PNG and flash formats out of the box. Of course, they are quite expensive. A: Take a look at AlivePDF. I believe it can do what you need. They have a demo where you can export and download a pdf of the swf you have just drawn into, very cool. Alternatively here is a Jpeg Exporter by the same folks. EDIT: Also take a look at Degrafa for charting in Flex. It's very good, and the underlying code is actually being folded into Adobe's next release! A: I did something similar 8 years ago with a Java library from Visual Engineering. It looks like their products have changed but someone has their old demos online. It worked well as an applet for HTML output, and I wrote a simple Java class to write a .png to embed in the pdfs on the server. Strangely enough it was all called from PHP but hung together well. Java was a good choice as this had to work on Sun and Linux servers with IE front ends. Unfortunately this isn't Flash and isn't vector based. I'd be looking for tools like swf2jpg or swf2png. However, if there are no other options for server-side flash you may want to consider using a Java applet / application combo. A: You can use FusionCharts. It allows you to embed Flash charts in HTML pages, and the same can be exported as image/PDF easily, which you can then embed in your PDF report. A demo which might be of help to you: http://www.fusioncharts.com/Demos/ExportChart/ Hope this helps :) A: You can grab the bitmap data of the chart straight from Flash using ActionScript. Unfortunately, I don't believe there is a way to export the vector data.
{ "language": "en", "url": "https://stackoverflow.com/questions/120862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I get files in my own file format to have its own dynamic icon? Our application has a file format similar to the OpenDocument file format (see http://en.wikipedia.org/wiki/OpenDocument) - i.e. zipped with a manifest file, a thumbnail image, etc. I notice that OpenOffice files have a preview image of the Open Office file as their icons, both in Windows and in Linux. Is there some way to accomplish this for our files: i.e. I want a dynamic icon based on the internal thumbnail.png? Edit 1 Wow, thanks for all the quick answers. Thumbnailer looks great for the GNOME world. Windows I'll be looking into those links, thanks. As for the comment question: programmatically OR via our installer. Edit 2 Oh, forgot Mac. How about on the Mac? (Sorry Mac lovers!) Also are there any links or info for how OpenOffice does their IconHandler stuff - since ours would be very similar? A: Windows What you need is an Icon Handler, also known as a Thumbnail Handler. Here is an example written as an active x control. Another resource is to look up Property Handlers, which should also point to you to the latest and greatest way of having dynamic meta data handled correctly in windows. These are dynamic solutions - they aren't needed if you just want an icon associated with all your files - they are only used when you want windows explorer to display an icon based on what's in the file, not just the extension, and when the file changes the icon is updated to reflect the changes. It doesn't have to be an image of the file itself, either, the thumbnail handler can generate any image based on the file contents. The property handler updates other metadata, such as song or video length, so you can use all the metadata Windows Explorer supports. Regarding MAC support, this page says, "The Mac and Windows operating systems have different methods of enabling this type of thumbnail, and in the case of the Mac OS, this support has been inconsistent from version to version so it hasn't been pursued [for Adobe InDesign]." OS X Icons for Mac OSX are determined by the Launch Services Database. However, it refers to a static icon file for all files handled by a registered application (it's not based on extension - each file has meta data attached that determines the application to which it belongs, although extensions give hints when the meta data doesn't exist, such as getting the file from a different OS or file system) It appears that the dynamic icon functionality in OSX is provided by Finder, but searches aren't bringing up any easy pointers in this direction. Since Finder keeps changing over time, I can see why this target is hard to hit... Gnome For Gnome you use a thumbnailer. (thanks Dorward) This is an extraordinarily simple program you write, which has 3 command line arguments: * *input file name, the file you are describing with the thumbnail (or URI if you accept those instead) *output file name, where you need to write the PNG *size, a number, in pixels, that describes the maximum square image size you should produce (128 --> 128x128 or smaller) I wish all systems were this simple. On the other hand this doesn't support animation and a few other features that are provided by more difficult to implement plugins on other systems. KDE I'm a bit uncertain, but there are a few pointers that should get you started. First is that Konqueror is the file manager and displays the icons - it supports dynamic icons for some inbuilt types, but I don't know if these are hardcoded, or plugins you can write. 
Check out the Embedded Components Tutorial for a starting point. There's a new (ish?) feature (or planned feature...) called Plasma which has a great deal to do with icons and icon functionality. Check out this announcment and this initial implementation. You may need to dig into the source of Konqueror and check out how they did this for text files and others already implemented. -Adam A: Mac OSX since version 10.5 … … has two approaches: * *Your document is in the standard OSX bundle format and has a static image This can be done by creating a subfolder QuickLook and placing the Thumbnail/Preview.png/tiff/jpg inside. *Everything else needs a QuickLook generator plugin which can be stored in either /Library/QuickLook ~/Library/QuickLook or inside the YourApp.app/Contents/Library/QuickLook Folders. This generator is being used to create Thumbnails and QuickLook previews on the fly. XCode offers a template for this. The template generates the needed ANSI C files which have to be implemented. If you want to write Object-C code you have to rename the GenerateThumbnailForURL.c and GeneratePreviewForURL.c to GenerateThumbnailForURL.m and GeneratePreviewForURL.m (and read the Apple Devel Docs carefully ;) ) Simple zip container based demo: You will have to add the Cocoa.framework and Foundation.framework to your project In your GenerateThumbnailForURL.c (this is partly out of my head - so no guarantee that it works out of the box ;) ): #include <Cocoa/Cocoa.h> #include <Foundation/Foundation.h> OSStatus GenerateThumbnailForURL(void *thisInterface, QLThumbnailRequestRef thumbnail, CFURLRef url, CFStringRef contentTypeUTI, CFDictionaryRef options, CGSize maxSize) { NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; /* unzip the thumbnail and put it into an NSData object */ // Create temporary path and writing handle for extraction NSString *tmpPath = [NSTemporaryDirectory() stringByAppendingFormat: [NSString stringWithFormat: @"%.0f.%@" , [NSDate timeIntervalSinceReferenceDate] * 1000.0, @"png"]]; [[NSFileManager defaultManager] createFileAtPath: tmpPath contents: [NSData alloc] attributes:nil]; NSFileHandle *writingHandle = [NSFileHandle fileHandleForWritingAtPath: tmpPath]; // Use task to unzip - create command: /usr/bin/unzip -p <pathToFile> <fileToExtract> NSTask *unzipTask = [[NSTask alloc] init]; [unzipTask setLaunchPath: @"/usr/bin/unzip"]; // -p -> output to StandardOut, added File to extract, nil to terminate Array [unzipTask setArguments: [NSArray arrayWithObjects: @"-p", [(NSURL *) url path], @"Thumbnails/thumbnail.png", nil]]; // redirect standardOut to writingHandle [unzipTask setStandardOutput: writingHandle]; // Unzip - run task [unzipTask launch]; [unzipTask waitUntilExit]; // Read Image Data and remove File NSData *thumbnailData = [NSData dataWithContentsOfFile: tmpPath]; [[NSFileManager defaultManager] removeFileAtPath: tmpPath handler:nil]; if ( thumbnailData == nil || [thumbnailData length] == 0 ) { // Nothing Found. Don't care. 
[pool release]; return noErr; } // That is the Size our image should have - create a dictionary too CGSize size = CGSizeMake(256, 256); NSDictionary *properties = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:size.width],kQLPreviewPropertyWidthKey, [NSNumber numberWithInt:size.height],kQLPreviewPropertyHeightKey, nil]; // Get CGContext for Thumbnail CGContextRef CGContext = QLThumbnailRequestCreateContext(thumbnail, size, TRUE, (CFDictionaryRef)properties); if(CGContext) { NSGraphicsContext* context = [NSGraphicsContext graphicsContextWithGraphicsPort:(void *)CGContext flipped:size.width > size.height]; if(context) { //These two lines of code are just good safe programming… [NSGraphicsContext saveGraphicsState]; [NSGraphicsContext setCurrentContext:context]; NSBitmapImageRep *thumbnailBitmap = [NSBitmapImageRep imageRepWithData:thumbnailData]; [thumbnailBitmap draw]; //This line sets the context back to what it was when we're done [NSGraphicsContext restoreGraphicsState]; } // When we are done with our drawing code QLThumbnailRequestFlushContext() is called to flush the context QLThumbnailRequestFlushContext(thumbnail, CGContext); // Release the CGContext CFRelease(CGContext); } [pool release]; return noErr; } Info.plist You will have to modify your info.plist file too - when you open it up it has a lot of fields pre-set. Most of them are self-explaning (or will not have to be changed) but I had to add the following structure (copy paste should do - copy the text, go into the plist editor and just paste.): <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <array> <dict> <key>UTTypeConformsTo</key> <array> <string>com.pkware.zip-archive</string> </array> <key>UTTypeDescription</key> <string>i-net Crystal-Clear Report File</string> <key>UTTypeIconName</key> <string>generic</string> <key>UTTypeIdentifier</key> <string>com.company.product</string> <key>UTTypeReferenceURL</key> <string>http://your-url.com</string> <key>UTTypeTagSpecification</key> <dict> <key>public.filename-extension</key> <array> <string>$fileEXT$</string> </array> </dict> </dict> </array> </plist> This will register your filetype $fileExt$ and tell the system that your filetype is a zipy format type. A nice refference, that I used here is the QuickLook IPA Plugin from googlecode A: In Windows, what you need is to implement an Icon Handler. I did this many moons ago and it is not difficult as long as you know the basics of COM. See: http://msdn.microsoft.com/en-us/library/bb776857(VS.85).aspx A: For Gnome you use a thumbnailer. A: for WINDOWS try this: http://www.easydesksoftware.com/news/news12.htm A: Executables have the icon inside the file (potentially multiple) as a "resource". Data files pick up an icon based on file association. If you want a custom icon per file that is much harder. you either need too fool the OS into thinking it is an executable and embed the icon as a resource in the file, or deep link into the OS to override the default icon selection routine. A: I think, "custom own" icon can have only PE files in windows. Every other icons for file extensions are stored in windows registry. For specification of PE file, you can look at An In-Depth Look into the Win32 Portable Executable File Format and Peering Inside the PE: A Tour of the Win32 Portable Executable File Format. How it works in other OS, I don't know :/. 
A: I don't know about Linux, but for Windows you can start here: http://msdn.microsoft.com/en-us/library/bb774614.aspx Edit: I think this interface is for the thumbnails shown in thumbnail view, not icons. Sorry for wasting your time.
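As a concrete illustration of the GNOME thumbnailer approach described above, here is a rough Python sketch that just extracts the embedded preview from the zip-based document and scales it to the requested size. It assumes an ODF-style Thumbnails/thumbnail.png entry and the Pillow imaging library, neither of which is given in the question, and it leaves out the desktop registration step:

#!/usr/bin/env python
# Invoked by the desktop as: thumbnailer <input-file> <output-png> <max-size>
import sys
import zipfile
from io import BytesIO
from PIL import Image  # Pillow is an assumption; any image library would do

def main():
    src, dest, size = sys.argv[1], sys.argv[2], int(sys.argv[3])
    with zipfile.ZipFile(src) as archive:
        data = archive.read("Thumbnails/thumbnail.png")  # assumed location of the embedded preview
    image = Image.open(BytesIO(data))
    image.thumbnail((size, size))  # shrinks in place, preserving aspect ratio
    image.save(dest, "PNG")

if __name__ == "__main__":
    main()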
{ "language": "en", "url": "https://stackoverflow.com/questions/120865", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What are the rules for calling the base class constructor? What are the C++ rules for calling the base class constructor from a derived class? For example, I know in Java, you must do it as the first line of the subclass constructor (and if you don't, an implicit call to a no-arg super constructor is assumed - giving you a compile error if that's missing). A: Nobody mentioned the sequence of constructor calls when a class derives from multiple classes. The sequence is as mentioned while deriving the classes. A: In C++ there is a concept of constructor's initialization list, which is where you can and should call the base class' constructor and where you should also initialize the data members. The initialization list comes after the constructor signature following a colon, and before the body of the constructor. Let's say we have a class A: class A : public B { public: A(int a, int b, int c); private: int b_, c_; }; Then, assuming B has a constructor which takes an int, A's constructor may look like this: A::A(int a, int b, int c) : B(a), b_(b), c_(c) // initialization list { // do something } As you can see, the constructor of the base class is called in the initialization list. Initializing the data members in the initialization list, by the way, is preferable to assigning the values for b_, and c_ inside the body of the constructor, because you are saving the extra cost of assignment. Keep in mind, that data members are always initialized in the order in which they are declared in the class definition, regardless of their order in the initialization list. To avoid strange bugs, which may arise if your data members depend on each other, you should always make sure that the order of the members is the same in the initialization list and the class definition. For the same reason the base class constructor must be the first item in the initialization list. If you omit it altogether, then the default constructor for the base class will be called automatically. In that case, if the base class does not have a default constructor, you will get a compiler error. A: Everybody mentioned a constructor call through an initialization list, but nobody said that a parent class's constructor can be called explicitly from the derived member's constructor's body. See the question Calling a constructor of the base class from a subclass' constructor body, for example. The point is that if you use an explicit call to a parent class or super class constructor in the body of a derived class, this is actually just creating an instance of the parent class and it is not invoking the parent class constructor on the derived object. The only way to invoke a parent class or super class constructor on a derived class' object is through the initialization list and not in the derived class constructor body. So maybe it should not be called a "superclass constructor call". I put this answer here because somebody might get confused (as I did). A: If you simply want to pass all constructor arguments to the base-class (=parent), here is a minimal example. This uses templates to forward every constructor call with 1, 2 or 3 arguments to the parent class std::string. Code Live-Version #include <iostream> #include <string> class ChildString: public std::string { public: template<typename... Args> ChildString(Args... args): std::string(args...) 
{ std::cout << "\tConstructor call ChildString(nArgs=" << sizeof...(Args) << "): " << *this << std::endl; } }; int main() { std::cout << "Check out:" << std::endl; std::cout << "\thttp://www.cplusplus.com/reference/string/string/string/" << std::endl; std::cout << "for available string constructors" << std::endl; std::cout << std::endl; std::cout << "Initialization:" << std::endl; ChildString cs1 ("copy (2)"); char char_arr[] = "from c-string (4)"; ChildString cs2 (char_arr); std::string str = "substring (3)"; ChildString cs3 (str, 0, str.length()); std::cout << std::endl; std::cout << "Usage:" << std::endl; std::cout << "\tcs1: " << cs1 << std::endl; std::cout << "\tcs2: " << cs2 << std::endl; std::cout << "\tcs3: " << cs3 << std::endl; return 0; } Output Check out: http://www.cplusplus.com/reference/string/string/string/ for available string constructors Initialization: Constructor call ChildString(nArgs=1): copy (2) Constructor call ChildString(nArgs=1): from c-string (4) Constructor call ChildString(nArgs=3): substring (3) Usage: cs1: copy (2) cs2: from c-string (4) cs3: substring (3) Update: Using Variadic Templates To generalize to n arguments and simplify template <class C> ChildString(C arg): std::string(arg) { std::cout << "\tConstructor call ChildString(C arg): " << *this << std::endl; } template <class C1, class C2> ChildString(C1 arg1, C2 arg2): std::string(arg1, arg2) { std::cout << "\tConstructor call ChildString(C1 arg1, C2 arg2, C3 arg3): " << *this << std::endl; } template <class C1, class C2, class C3> ChildString(C1 arg1, C2 arg2, C3 arg3): std::string(arg1, arg2, arg3) { std::cout << "\tConstructor call ChildString(C1 arg1, C2 arg2, C3 arg3): " << *this << std::endl; } to template<typename... Args> ChildString(Args... args): std::string(args...) { std::cout << "\tConstructor call ChildString(nArgs=" << sizeof...(Args) << "): " << *this << std::endl; } A: In C++, the no-argument constructors for all superclasses and member variables are called for you, before entering your constructor. If you want to pass them arguments, there is a separate syntax for this called "constructor chaining", which looks like this: class Sub : public Base { Sub(int x, int y) : Base(x), member(y) { } Type member; }; If anything run at this point throws, the bases/members which had previously completed construction have their destructors called and the exception is rethrown to to the caller. If you want to catch exceptions during chaining, you must use a function try block: class Sub : public Base { Sub(int x, int y) try : Base(x), member(y) { // function body goes here } catch(const ExceptionType &e) { throw kaboom(); } Type member; }; In this form, note that the try block is the body of the function, rather than being inside the body of the function; this allows it to catch exceptions thrown by implicit or explicit member and base class initializations, as well as during the body of the function. However, if a function catch block does not throw a different exception, the runtime will rethrow the original error; exceptions during initialization cannot be ignored. A: If you have a constructor without arguments it will be called before the derived class constructor gets executed. 
If you want to call a base-constructor with arguments you have to explicitly write that in the derived constructor like this: class base { public: base (int arg) { } }; class derived : public base { public: derived () : base (number) { } }; You cannot construct a derived class without calling the parents constructor in C++. That either happens automatically if it's a non-arg C'tor, it happens if you call the derived constructor directly as shown above or your code won't compile. A: The only way to pass values to a parent constructor is through an initialization list. The initilization list is implemented with a : and then a list of classes and the values to be passed to that classes constructor. Class2::Class2(string id) : Class1(id) { .... } Also remember that if you have a constructor that takes no parameters on the parent class, it will be called automatically prior to the child constructor executing. A: If you have default parameters in your base constructor the base class will be called automatically. using namespace std; class Base { public: Base(int a=1) : _a(a) {} protected: int _a; }; class Derived : public Base { public: Derived() {} void printit() { cout << _a << endl; } }; int main() { Derived d; d.printit(); return 0; } Output is: 1 A: Base class constructors are automatically called for you if they have no argument. If you want to call a superclass constructor with an argument, you must use the subclass's constructor initialization list. Unlike Java, C++ supports multiple inheritance (for better or worse), so the base class must be referred to by name, rather than "super()". class SuperClass { public: SuperClass(int foo) { // do something with foo } }; class SubClass : public SuperClass { public: SubClass(int foo, int bar) : SuperClass(foo) // Call the superclass constructor in the subclass' initialization list. { // do something with bar } }; More info on the constructor's initialization list here and here. A: CDerived::CDerived() : CBase(...), iCount(0) //this is the initialisation list. You can initialise member variables here too. (e.g. iCount := 0) { //construct body }
{ "language": "en", "url": "https://stackoverflow.com/questions/120876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "828" }
Q: Python idiom to chain (flatten) an infinite iterable of finite iterables? Suppose we have an iterator (an infinite one) that returns lists (or finite iterators), for example one returned by infinite = itertools.cycle([[1,2,3]]) What is a good Python idiom to get an iterator (obviously infinite) that will return each of the elements from the first iterator, then each from the second one, etc. In the example above it would return 1,2,3,1,2,3,.... The iterator is infinite, so itertools.chain(*infinite) will not work. Related * *Flattening a shallow list in python A: Starting with Python 2.6, you can use itertools.chain.from_iterable: itertools.chain.from_iterable(iterables) You can also do this with a nested generator comprehension: def flatten(iterables): return (elem for iterable in iterables for elem in iterable) A: Use a generator: (item for it in infinite for item in it) The * construct unpacks into a tuple in order to pass the arguments, so there's no way to use it.
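A quick check of both approaches against the infinite iterator from the question (itertools.islice is used only to cut the output short):

import itertools

infinite = itertools.cycle([[1, 2, 3]])
flat = itertools.chain.from_iterable(infinite)
print(list(itertools.islice(flat, 8)))   # [1, 2, 3, 1, 2, 3, 1, 2]

# the nested generator expression behaves the same way
flat2 = (elem for iterable in itertools.cycle([[1, 2, 3]]) for elem in iterable)
print(list(itertools.islice(flat2, 8)))  # [1, 2, 3, 1, 2, 3, 1, 2]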
{ "language": "en", "url": "https://stackoverflow.com/questions/120886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: Unrooted Tests When running all my tests in Eclipse (Eclipse 3.4 'Ganymede'), one test is listed under "Unrooted Tests". I'm using Junit 3.8 and this particular test extends TestCase. I do not see any difference between this test and the other tests. I don't remember seeing this occur in Eclipse 3.3 (Europa). Clarification: We haven't moved to JUnit 4.0 yet, so we are not using annotations. I also googled and it seemed like most people were having issues with JUnit 4, but I did not see any solutions. At this point the test passes both locally and in CruiseControl so I'm not overly concerned, but curious. When I first saw this, though, it was on a failing test that only failed when run with other tests. This led me down the rabbit hole looking for a solution to the "Unrooted" issue that I never found. Eventually I found the culprit in another test that was not properly tearing down. I agree, it does seem like an Eclipse issue. A: I got this error because I renamed my test method and then tried to run the test in Eclipse by clicking on the same run configuration - referring to the old method which now didn't exist. A: We solved the problem by making sure our test project was built. We had an issue in the build path which would not allow our test class to be compiled. Once we resolved the build path issue, the test compiled and the "new" method was able to be run. So we can assume that "Unrooted" tests also mean that they don't exist in the compiled binary. A: Finally I found the solution. The problem is that you are not defining your test cases using annotations but are still doing it the "old way". As soon as you convert over to using annotations you will be able to run one test at a time again. Here is an example of what a basic test should now look like using annotations: import static org.junit.Assert.*; // Notice the use of "static" here import org.junit.Before; import org.junit.Test; public class MyTests { // Notice we don't extent TestCases anymore @Before public void setUp() { // Note: It is not required to call this setUp() // ... } @Test public void doSomeTest() { // Note: method need not be called "testXXX" // ... assertTrue(1 == 1); } } A: I've never seen this -- but as far as I can tell from skimming Google for a few minutes, this appears as though it could be a bug in Eclipse rather than a problem with your test. You don't have the @Test annotation on the test, I assume? Can you blow the test away and recreate it, and if so do you get the same error? A: I was getting the "unrooted tests" error message as well and it went away magically. I believe it was due to the fact that I was using Eclipse with a Maven project. When I added a new method to my Test class and gave it the @Test annotation, it began getting the error message when I tried to run that one method using the "Run as Junit test" menu option; however, once I ran a maven build the unrooted tests message disappeared and I believe that is the solution to the problem in the future. Run a maven build because it will refresh the class that JUnit is using. A: If your class extends TestCase somewhere in its hierarchy, you have to use the JUnit 3 test runner listed in the drop down under run configurations. Using the JUnit 4 runner (the default I believe) causes that unrooted test phenomenon to occur. A: Another scenario that causes this problem was me blindly copy/pasting a method that requires a parameter. i.e. 
    import org.junit.Test;

    public class MyTest {
        @Test
        public void someMethod(String param) {
            // stuff
        }
    }

You have a few simple solutions:
* define the variable in the specific test method
* add it as an instance variable to the test class
* create a setup method and annotate it with @Before

A: For me, it was because the project had build path issues; my Maven dependencies configuration needed to be updated.

A: I had that problem and putting "@Test" before the test method solved it, like this:

    @Test
    public void testOne() {
        // ...
        assertTrue(1 == 1);
    }

A: These are two scenarios in which the Unrooted errors show up:
* You have missed the @Test annotation before the test:

    @Test
    public void foo() {
    }

* It is a GWT project and two mocks of the same object are defined. Let's say there is one class A and

    @GwtMock private A atest;
    @GwtMock private A a;

Then this will also show an Unrooted test error.

A: One other thing you can try is to upgrade your version of JUnit to at least 4.12. I was experiencing this problem for a while with a class that extended one that used @RunWith(Parameterized.class). After a while - and I'm sorry that I don't know precisely what I did to cause this - the 'Unrooted Tests' message went away, but the test still didn't run correctly: the constructor that should have accepted arguments from the @Parameters method was never getting called; execution jumped straight from @BeforeClass to @AfterClass. The fix for that problem was to upgrade JUnit from the 4.8.1 it was using to the latest (4.12). So maybe that could help someone else in the future.

A: I had the same problem with java.lang.NoClassDefFoundError: org/hamcrest/SelfDescribing - you need the Hamcrest jar. See the same question 14539072: java.lang.NoClassDefFoundError: org/hamcrest/SelfDescribing

A: I could fix the issue by switching from test runner version 4.0 to 3 in the run configuration for the individual test method.

A: Do not extend junit.framework.TestCase in your test class with JUnit 4 and this should solve the problem.

A: Are you using Hamcrest or another library to help in your test, without using import static org.junit.Assert.*;? Check whether your test uses import static org.hamcrest.MatcherAssert.assertThat; or some other assert that isn't a JUnit assert.

A: It turned out that my build path had some errors - some jars were missing. I reconfigured the build path and it worked!

A: For me the problem was that an exception was thrown in the @BeforeClass or @AfterClass methods. This will also cause tests to be categorized as unrooted.

A: I got this error with a test method named "test":

    @Test
    public void test() {
        // ...
        assertTrue(1 == 1);
    }

I renamed the method and it worked.

A: I ran into this problem by not also declaring the test to be static.

A: Maybe it's just a logical confusion about the goal of the method. Let's remember what a correctly tagged test method looks like:

    @Test
    @Transactional
    @Rollback(true)
    public void testInsertCustomer() {
        (...)
    }

With the Eclipse JUnit plugin, you can run that test method using the context menu over the method (e.g. in the Package Explorer, expand the class and its methods, select the "testInsertCustomer()" method, and from that item select "Run as >> JUnit test"). If you forgot the "@Test" tag, or the method is simply not a test but a (private or not) common method used as a utility for the other tests (e.g. "private fillCustomerObject()"), then the method does not require the "@Test" tag - and you simply cannot run it as a JUnit test!
It's easy to create a utility method and later forget its real purpose, so if you try to run it as a test, JUnit will shout "Unrooted Tests".

A: For me this problem was caused by a runtime exception thrown in the @AfterClass method. Basically all the test methods succeeded, but at the end of the class this method was failing. Therefore all the tests seemed fine, but Eclipse reported an additional failed "unrooted test".

A: I got these errors for a Maven project. I rebuilt the project with mvn clean install and the issue was solved.

A: It actually told me there was a test with the annotation @RunWith(MockitoJUnitRunner.class).
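Pulling the recurring fixes from the answers above into one place, the sketch below is a minimal JUnit 4 test class that avoids the most commonly reported causes of unrooted tests: nothing in the hierarchy extends TestCase, @Test is present, and the test methods are public, void, and parameterless. The class and method names are invented for the example:

    import static org.junit.Assert.assertTrue;

    import org.junit.Before;
    import org.junit.Test;

    // Does not extend junit.framework.TestCase, so the JUnit 4 runner can root it correctly.
    public class WidgetTest {

        private int answer;

        @Before
        public void setUp() {
            // Plain public no-arg method annotated with @Before; no call to super.setUp().
            answer = 42;
        }

        @Test
        public void computesTheAnswer() {
            // Public, void, and parameterless -- a stray parameter is what broke the copy/pasted test above.
            assertTrue(answer == 42);
        }
    }

Running this class with the JUnit 4 runner selected in the Eclipse run configuration (and rebuilding the project first, as several answers suggest) should list the test under the class name rather than under "Unrooted Tests".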
{ "language": "en", "url": "https://stackoverflow.com/questions/120889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: Cyclic Dependencies Consider a normal customer-orders application based on the MVC pattern using WinForms. The view part has grown too much (over 4000 files) and it needs to be split into smaller ones. For this example we are going to use 3 projects for the view part:
* Main - has dependencies on the other 2 projects. Instantiates the forms with the lists.
* Customers - has 2 forms - customers list and customer details.
* Orders - has 2 forms - orders list and order details.
On the customer details form there is also a list of orders for that customer. The list is received from the OrdersController, so it's no problem getting it. When the user selects an order, the list will get its GUID and pass it as a reference to the Order Details form. This would mean that we need to have a reference to the Orders project in the Customers project. (1) But also on the order details form there is a link to the customer that made that order. When clicked, it should open the Customer Details form. This would mean that we need to have a reference to the Customers project in the Orders project. (2) From (1) and (2) we'll have cyclic dependencies between the Orders and Customers projects. How can this be avoided? Some kind of plug-in architecture? The project is already developed and the best solution would involve as little code change as possible.

A: Change at least one of the types to an interface. For example, have an ICustomer interface and a Customer type that implements this interface. Now add ICustomer to the Orders project, and from the Customers project set a reference to the Orders project so you can implement the interface. The Order type can now work against the ICustomer type without knowing the actual implementation. And for a better solution :-) create both an ICustomer and an IOrder interface and add them to a third library project. Reference this project from the other two and work only with the interfaces, never with the implementation.

A: If they are that tightly coupled, maybe they should not be split.

A: Extract interfaces and put them in a separate assembly. Since you're using an MVC architecture, it shouldn't be hard. Take a look at the Microsoft Composite UI Application Block for examples and good practices.

A: I think that your main problem is not the architecture of your application. You need to understand the boundaries and how to divide the functionality among them. A division like yours is quite artificial: you try to split the application based on the domain objects. Try using user roles or functional themes to do that and the problem might go away. From the technical point of view I don't understand why your views should be aware of each other's existence - that sounds a little bit odd to me. You are not going to split your data and your business logic, and at the end of the day a GUID is just a string you could easily pass across using different methods.
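To make the interface-extraction answers above concrete, here is a minimal sketch of the resulting dependency structure. The question is about C# projects, but the shape is the same in any object-oriented language, so the sketch uses Java purely for illustration; all type names are invented for the example:

    import java.util.List;
    import java.util.UUID;

    public class DependencyStructureSketch {

        // --- shared "contracts" library: referenced by Customers and Orders, references neither ---
        interface ICustomer {
            UUID getId();
            void showDetails();      // e.g. opens the customer details form
        }

        interface IOrder {
            UUID getId();
            void showDetails();      // e.g. opens the order details form
        }

        // --- "Orders" module: knows customers only through ICustomer ---
        static class OrderDetailsForm implements IOrder {
            private final UUID id;
            private final ICustomer customer;    // no compile-time reference to the Customers module

            OrderDetailsForm(UUID id, ICustomer customer) {
                this.id = id;
                this.customer = customer;
            }
            public UUID getId() { return id; }
            public void showDetails() {
                System.out.println("Order " + id + " placed by customer " + customer.getId());
            }
        }

        // --- "Customers" module: knows orders only through IOrder ---
        static class CustomerDetailsForm implements ICustomer {
            private final UUID id;
            private final List<IOrder> orders;   // no compile-time reference to the Orders module

            CustomerDetailsForm(UUID id, List<IOrder> orders) {
                this.id = id;
                this.orders = orders;
            }
            public UUID getId() { return id; }
            public void showDetails() {
                System.out.println("Customer " + id + " has " + orders.size() + " orders");
            }
        }
    }

Both feature modules now depend only on the shared contracts library, so the Customers-to-Orders-to-Customers cycle disappears; Main (or a small factory) is the only place that wires the concrete forms together.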
{ "language": "en", "url": "https://stackoverflow.com/questions/120898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How to check if a trigger is invalid? I'm working on databases that have moving tables auto-generated by some obscure tools. Because of this, we have to track information changes in the tables via some triggers. And, of course, it happens that some changes in the table structure break some triggers, by removing a column or changing its type, for example. So, the question is: is there a way to query the Oracle metadata to check whether some triggers are broken, in order to send a report to the support team? The user_triggers view gives all the triggers and tells whether they are enabled or not, but does not indicate whether they are still valid.

A:
    SELECT *
      FROM ALL_OBJECTS
     WHERE OBJECT_NAME = :trigger_name
       AND OBJECT_TYPE = 'TRIGGER'
       AND STATUS <> 'VALID'

(Here :trigger_name is a placeholder for the trigger you want to check; drop that predicate to list every invalid trigger.)

A: Have a look at SYS.OBJ$, specifically the STATUS column.
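Since the goal is a report to the support team, the dictionary query above can be wrapped in a small scheduled job. The sketch below is one possible shape, assuming JDBC with the Oracle driver on the classpath; the connection string, credentials, and class name are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class InvalidTriggerReport {
        public static void main(String[] args) throws Exception {
            // Placeholder connection string and credentials -- substitute your own.
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "app_user", "secret");
                 PreparedStatement stmt = conn.prepareStatement(
                     "SELECT owner, object_name FROM all_objects "
                   + "WHERE object_type = 'TRIGGER' AND status <> 'VALID'");
                 ResultSet rs = stmt.executeQuery()) {

                while (rs.next()) {
                    // In a real job this line would feed an e-mail or ticket instead of stdout.
                    System.out.println("Invalid trigger: "
                        + rs.getString("owner") + "." + rs.getString("object_name"));
                }
            }
        }
    }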
{ "language": "en", "url": "https://stackoverflow.com/questions/120900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Designing an 'Order' schema in which there are disparate product definition tables This is a scenario I've seen in multiple places over the years; I'm wondering if anyone else has run across a better solution than I have... My company sells a relatively small number of products, however the products we sell are highly specialized (i.e. in order to select a given product, a significant number of details must be provided about it). The problem is that while the amount of detail required to choose a given product is relatively constant, the kinds of details required vary greatly between products. For instance, Product X might have identifying characteristics like (hypothetically)
* 'Color'
* 'Material'
* 'Mean Time to Failure'
but Product Y might have characteristics
* 'Thickness'
* 'Diameter'
* 'Power Source'
The problem (one of them, anyway) in creating an order system that utilizes both Product X and Product Y is that an Order Line has to refer, at some point, to what it is "selling". Since Product X and Product Y are defined in two different tables - and denormalization of products using a wide table scheme is not an option (the product definitions are quite deep) - it's difficult to see a clear way to define the Order Line in such a way that order entry, editing and reporting are practical.
Things I've Tried In the Past
* Create a parent table called 'Product' with columns common to Product X and Product Y, then using 'Product' as the reference for the OrderLine table, and creating a FK relationship with 'Product' as the primary side between the tables for Product X and Product Y. This basically places the 'Product' table as the parent of both OrderLine and all the disparate product tables (e.g. Products X and Y). It works fine for order entry, but causes problems with order reporting or editing since the 'Product' record has to track what kind of product it is in order to determine how to join 'Product' to its more detailed child, Product X or Product Y. Advantages: key relationships are preserved. Disadvantages: reporting, editing at the order line/product level.
* Create 'Product Type' and 'Product Key' columns at the Order Line level, then use some CASE logic or views to determine the customized product to which the line refers. This is similar to item (1), without the common 'Product' table. I consider it a more "quick and dirty" solution, since it completely does away with foreign keys between order lines and their product definitions. Advantages: quick solution. Disadvantages: same as item (1), plus lost RI.
* Homogenize the product definitions by creating a common header table and using key/value pairs for the customized attributes (OrderLine [n] <- [1] Product [1] <- [n] ProductAttribute). Advantages: key relationships are preserved; no ambiguity about product definition. Disadvantages: reporting (retrieving a list of products with their attributes, for instance), data typing of attribute values, performance (fetching product attributes, inserting or updating product attributes etc.)
If anyone else has tried a different strategy with more success, I'd sure like to hear about it. Thank you.

A: The first solution you describe is the best if you want to maintain data integrity, and if you have relatively few product types and seldom add new product types. This is the design I'd choose in your situation. Reporting is complex only if your reports need the product-specific attributes. If your reports need only the attributes in the common Products table, it's fine.
The second solution you describe is called "Polymorphic Associations" and it's no good. Your "foreign key" isn't a real foreign key, so you can't use a DRI constraint to ensure data integrity. OO polymorphism doesn't have an analog in the relational model.
The third solution you describe, involving storing an attribute name as a string, is a design called "Entity-Attribute-Value" and you can tell this is a painful and expensive solution. There's no way to ensure data integrity, no way to make one attribute NOT NULL, no way to make sure a given product has a certain set of attributes. No way to restrict one attribute against a lookup table. Many types of aggregate queries become impossible to do in SQL, so you have to write lots of application code to do reports. Use the EAV design only if you must, for instance if you have an unlimited number of product types, the list of attributes may be different on every row, and your schema must accommodate new product types frequently, without code or schema changes.
Another solution is "Single-Table Inheritance." This uses an extremely wide table with a column for every attribute of every product. Leave NULLs in columns that are irrelevant to the product on a given row. This effectively means you can't declare an attribute as NOT NULL (unless it's in the group common to all products). Also, most RDBMS products have a limit on the number of columns in a single table, or the overall width in bytes of a row. So you're limited in the number of product types you can represent this way.
Hybrid solutions exist; for instance, you can store common attributes normally, in columns, but product-specific attributes in an Entity-Attribute-Value table. Or you could store product-specific attributes in some other structured way, like XML or YAML, in a BLOB column of the Products table. But these hybrid solutions suffer because now some attributes must be fetched in a different way.
The ultimate solution for situations like this is to use a semantic data model, using RDF instead of a relational database. This shares some characteristics with EAV but it's much more ambitious. All metadata is stored in the same way as data, so every object is self-describing and you can query the list of attributes for a given product just as you would query data. Special products exist, such as Jena or Sesame, implementing this data model and a special query language that is different than SQL.

A: This might get you started; it will need some refinement.

    Table Product (id PK, name, price, units_per_package)
    Table Product_Attribs (id FK ref Product, AttribName, AttribValue)

Which would allow you to attach a list of attributes to the products. -- This is essentially your option 3.
If you know a max number of attributes, you could go

    Table Product (id PK, name, price, units_per_package, attrName_1, attrValue_1 ...)

Which would of course de-normalize the database, but make queries easier.
I prefer the first option because
* It supports an arbitrary number of attributes.
* Attribute names can be stored in another table, and referential integrity enforced so that those damn Canadians don't stick a "colour" in there and break reporting.

A: There's no magic bullet that you've overlooked. You have what are sometimes called "disjoint subclasses". There's the superclass (Product) with two subclasses (ProductX) and (ProductY). This is a problem that -- for relational databases -- is Really Hard. [Another hard problem is Bill of Materials. Another hard problem is Graphs of Nodes and Arcs.]
You really want polymorphism, where OrderLine is linked to a subclass of Product, but doesn't know (or care) which specific subclass. You don't have too many choices for modeling. You've pretty much identified the bad features of each. This is pretty much the whole universe of choices.
* Push everything up to the superclass. That's the uni-table approach where you have Product with a discriminator (type="X" and type="Y") and a million columns. The columns of Product are the union of columns in ProductX and ProductY. There will be nulls all over the place because of unused columns.
* Push everything down into the subclasses. In this case, you'll need a view which is the union of ProductX and ProductY. That view is what's joined to create a complete order. This is like the first solution, except it's built dynamically and doesn't optimize well.
* Join the superclass instance to a subclass instance. In this case, the Product table is the intersection of ProductX and ProductY columns. Each Product has a reference to a key either in ProductX or ProductY.
There isn't really a bold new direction. In the relational database world-view, those are the choices. If, however, you elect to change the way you build application software, you can get out of this trap. If the application is object-oriented, you can do everything with first-class, polymorphic objects. You have to map from the kind-of-clunky relational processing; this happens twice: once when you fetch stuff from the database to create objects and once when you persist objects back to the database. The advantage is that you can describe your processing succinctly and correctly. As objects, with subclass relationships. The disadvantage is that your SQL devolves to simplistic bulk fetches, updates and inserts. This becomes an advantage when the SQL is isolated into an ORM layer and managed as a kind of trivial implementation detail. Java programmers use iBatis (or Hibernate or TopLink or Cocoon), Python programmers use SQLAlchemy or SQLObject. The ORM does the database fetches and saves; your application directly manipulates Orders, Lines and Products.

A: Does your product line ever change? If it does, then creating a table per product will cost you dearly, and the key/value pairs idea will serve you well. That's the kind of direction down which I am naturally drawn. I would create tables like this:

    Attribute (attribute_id, description, is_listed)
    -- contains values like "colour", "width", "power source", etc.
    -- "is_listed" tells us if we can get a list of valid values

    AttributeValue (attribute_id, value)
    -- lists of valid values for different attributes

    Product (product_id, description)

    ProductAttribute (product_id, attribute_id)
    -- tells us which attributes apply to which products

    Order (order_id, etc)

    OrderLine (order_id, order_line_id, product_id)

    OrderLineProductAttributeValue (order_line_id, attribute_id, value)
    -- tells us things like: order line 999 has "colour" of "blue"

The SQL to pull this together is not trivial, but it's not too complex either... and most of it will be write once and keep (either in stored procedures or your data access layer). We do similar things with a number of types of entity.

A: Chris and AJ: Thanks for your responses. The product line may change, but I would not term it "volatile". The reason I dislike the third option is that it comes at the cost of metadata for the product attribute values.
It essentially turns columns into rows, losing most of the advantages of the database column in the process (data type, default value, constraints, foreign key relationships etc.) I've actually been involved in a past project where the product definition was done in this way. We essentially created a full product/product attribute definition system (data types, min/max occurrences, default values, 'required' flags, usage scenarios etc.) The system worked, ultimately, but came with a significant cost in overhead and performance (e.g. materialized views to visualize products, custom "smart" components to represent and validate data entry UI for product definition, another "smart" component to represent the product instance's customizable attributes on the order line, blahblahblah). Again, thanks for your replies!
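For readers weighing the ORM route described in the answers above, here is a minimal sketch of the "join superclass to subclass" option expressed as Java/JPA mappings. The original system is on .NET, so treat this purely as a structural analogy; the entity names, columns, and the choice of JPA are all assumptions made for the example:

    import javax.persistence.*;   // each of the files sketched below needs this import

    // --- Product.java: common columns live in the superclass table ---
    @Entity
    @Inheritance(strategy = InheritanceType.JOINED)   // one table per class, joined on the primary key
    public abstract class Product {
        @Id @GeneratedValue
        private Long id;
        private String name;
    }

    // --- ProductX.java ---
    @Entity
    public class ProductX extends Product {
        private String color;
        private String material;
        private int meanTimeToFailureHours;
    }

    // --- ProductY.java ---
    @Entity
    public class ProductY extends Product {
        private double thicknessMm;
        private double diameterMm;
        private String powerSource;
    }

    // --- OrderLine.java ---
    @Entity
    public class OrderLine {
        @Id @GeneratedValue
        private Long id;

        @ManyToOne(optional = false)   // FK to the superclass table; the ORM resolves the concrete subclass
        private Product product;

        private int quantity;
    }

This keeps real foreign keys and NOT NULL constraints on the product-specific tables, while application code works polymorphically against Product - the trade-off described in the answer above.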
{ "language": "en", "url": "https://stackoverflow.com/questions/120907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: LabVIEW holds Excel Reference I try to open an Excel reference in LabVIEW and then close it after some time, but LabVIEW keeps holding the reference and does not release it unless I close the VI. Why is this happening? Is there any way to force it to release the reference? I am checking the error out terminal for any errors, but it is not throwing up any errors.

A: Are you checking the return value from the close file command? I've had this problem with LV in the past and have found this to be one possible root cause for this problem. Check out the following example file to see if you are doing things the same way: labview\examples\file\datalog.llb\Read Datalog File Example.vi HTH

A: What are you doing in Excel? Typically, LabVIEW will hold the reference open only until it is closed. However, this includes any references to any part of Excel (excel.worksheet, excel.range, excel.workbook, etc.). You need to close each reference explicitly. It can be painstaking to debug, but you need to manually go through your entire Excel-handling section and make sure that every reference is closed.
{ "language": "en", "url": "https://stackoverflow.com/questions/120908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }