Q: what is the difference between OLE DB and ODBC data sources? I was reading a MS Excel help article about pivotcache and wonder what they mean by OLE DB and ODBC sources ...You should use the CommandText property instead of the SQL property, which now exists primarily for compatibility with earlier versions of Microsoft Excel. If you use both properties, the CommandText property’s value takes precedence. For OLE DB sources, the CommandType property describes the value of the CommandText property. For ODBC sources, the CommandText property functions exactly like the SQL property, and setting the property causes the data to be refreshed... I really appreciate your short answers. A: • August, 2011: Microsoft deprecates OLE DB (Microsoft is Aligning with ODBC for Native Relational Data Access) • October, 2017: Microsoft undeprecates OLE DB (Announcing the new release of OLE DB Driver for SQL Server) A: On a very basic level those are just different APIs for the different data sources (i.e. databases). OLE DB is newer and arguably better. You can read more on both in Wikipedia: * *OLE DB *ODBC I.e. you could connect to the same database using an ODBC driver or OLE DB driver. The difference in the database behaviour in those cases is what your book refers to. A: ODBC:- Only for relational databases (Sql Server, Oracle etc) OLE DB:- For both relational and non-relational databases. (Oracle, Sql-Server, Excel, raw files, etc) A: Both are data providers (API that your code will use to talk to a data source). Oledb which was introduced in 1998 was meant to be a replacement for ODBC (introduced in 1992) A: Here's my understanding (non-authoritative): ODBC is a technology-agnostic open standard supported by most software vendors. OLEDB is a technology-specific Microsoft's API from the COM-era (COM was a component and interoperability technology before .NET) At some point various datasouce vendors (e.g. Oracle etc.), willing to be compatible with Microsoft data consumers, developed OLEDB providers for their products, but for the most part OLEDB remains a Microsoft-only standard. Now, most Microsoft data sources allow both ODBC and OLEDB access, mainly for compatibility with legacy ODBC data consumers. Also, there exists OLEDB provider (wrapper) for ODBC which allows one to use OLEDB to access ODBC data sources if one so wishes. In terms of the features OLEDB is substantially richer than ODBC but suffers from one-ring-to-rule-them-all syndrome (overly generic, overcomplicated, non-opinionated). In non-Microsoft world ODBC-based data providers and clients are widely used and not going anywhere. Inside Microsoft bubble OLEDB is being phased out in favor of native .NET APIs build on top of whatever the native transport layer for that data source is (e.g. TDS for MS SQL Server). A: I'm not sure of all the details, but my understanding is that OLE DB and ODBC are two APIs that are available for connecting to various types of databases without having to deal with all the implementation specific details of each. According to the Wikipedia article on OLE DB, OLE DB is Microsoft's successor to ODBC, and provides some features that you might not be able to do with ODBC such as accessing spreadsheets as database sources. A: ODBC and OLE DB are two competing data access technologies. Specifically regarding SQL Server, Microsoft has promoted both of them as their Preferred Future Direction - though at different times. ODBC ODBC is an industry-wide standard interface for accessing table-like data. 
It was primarily developed for databases and presents data in collections of records, each of which is grouped into a collection of fields. Each field has its own data type suitable to the type of data it contains. Each database vendor (Microsoft, Oracle, Postgres, …) supplies an ODBC driver for their database. There are also ODBC drivers for objects which, though they are not database tables, are sufficiently similar that accessing data in the same way is useful. Examples are spreadsheets, CSV files and columnar reports. OLE DB OLE DB is a Microsoft technology for access to data. Unlike ODBC it encompasses both table-like and non-table-like data such as email messages, web pages, Word documents and file directories. However, it is procedure-oriented rather than object-oriented and is regarded as a rather difficult interface with which to develop access to data sources. To overcome this, ADO was designed to be an object-oriented layer on top of OLE DB and to provide a simpler and higher-level – though still very powerful – way of working with it. ADO’s great advantage it that you can use it to manipulate properties which are specific to a given type of data source, just as easily as you can use it to access those properties which apply to all data source types. You are not restricted to some unsatisfactory lowest common denominator. While all databases have ODBC drivers, they don’t all have OLE DB drivers. There is however an interface available between OLE and ODBC which can be used if you want to access them in OLE DB-like fashion. This interface is called MSDASQL (Microsoft OLE DB provider for ODBC). SQL Server Data Access Technologies Since SQL Server is (1) made by Microsoft, and (2) the Microsoft database platform, both ODBC and OLE DB are a natural fit for it. ODBC Since all other database platforms had ODBC interfaces, Microsoft obviously had to provide one for SQL Server. In addition to this, DAO, the original default technology in Microsoft Access, uses ODBC as the standard way of talking to all external data sources. This made an ODBC interface a sine qua non. The version 6 ODBC driver for SQL Server, released with SQL Server 2000, is still around. Updated versions have been released to handle the new data types, connection technologies, encryption, HA/DR etc. that have appeared with subsequent releases. As of 09/07/2018 the most recent release is v13.1 “ODBC Driver for SQL Server”, released on 23/03/2018. OLE DB This is Microsoft’s own technology, which they were promoting strongly from about 2002 – 2005, along with its accompanying ADO layer. They were evidently hoping that it would become the data access technology of choice. (They even made ADO the default method for accessing data in Access 2002/2003.) However, it eventually became apparent that this was not going to happen for a number of reasons, such as: * *The world was not going to convert to Microsoft technologies and away from ODBC; *DAO/ODBC was faster than ADO/OLE DB and was also thoroughly integrated into MS Access, so wasn’t going to die a natural death; *New technologies that were being developed by Microsoft, specifically ADO.NET, could also talk directly to ODBC. ADO.NET could talk directly to OLE DB as well (thus leaving ADO in a backwater), but it was not (unlike ADO) solely dependent on it. For these reasons and others, Microsoft actually deprecated OLE DB as a data access technology for SQL Server releases after v11 (SQL Server 2012). 
For a couple of years before this point, they had been producing and updating the SQL Server Native Client, which supported both ODBC and OLE DB technologies. In late 2012 however, they announced that they would be aligning with ODBC for native relational data access in SQL Server, and encouraged everybody else to do the same. They further stated that SQL Server releases after v11/SQL Server 2012 would actively not support OLE DB! This announcement provoked a storm of protest. People were at a loss to understand why MS was suddenly deprecating a technology that they had spent years getting people to commit to. In addition, SSAS/SSRS and SSIS, which were MS-written applications intimately linked to SQL Server, were wholly or partly dependent on OLE DB. Yet another complaint was that OLE DB had certain desirable features which it seemed impossible to port back to ODBC – after all, OLE DB had many good points. In October 2017, Microsoft relented and officially un-deprecated OLE DB. They announced the imminent arrival of a new driver (MSOLEDBSQL) which would have the existing feature set of the Native Client 11 and would also introduce multi-subnet failover and TLS 1.2 support. The driver was released in March 2018. A: The Microsoft website shows that the native OLEDB provider applies to SQL Server directly, and another OLEDB provider, called the OLEDB Provider for ODBC, is used to access other databases such as Sybase, DB2, etc. There are different kinds of components under the OLEDB Provider. See Distributed Queries on MSDN for more. A: According to ADO: ActiveX Data Objects, a book by Jason T. Roff, published by O'Reilly Media in 2001 (excellent diagram here), he says precisely what MOZILLA said. (directly from page 7 of that book) * *ODBC provides access only to relational databases * *OLE DB provides the following features *Access to data regardless of its format or location *Full access to ODBC data sources and ODBC drivers So it would seem that OLE DB interacts with SQL-based datasources THRU the ODBC driver layer. I'm not 100% sure this image is correct. The two connections I'm not certain about are ADO.NET thru ADO C-api, and OLE DB thru ODBC to SQL-based data source (because in this diagram the author doesn't put OLE DB's access thru ODBC, which I believe is a mistake). A: ODBC works only for relational databases; it can't work with non-relational sources such as MS Excel files, whereas OLE DB can do everything. A: To understand why M$ invented OLEDB, you can't compare OLEDB with ODBC. Instead, you should compare OLEDB with DAO, RDO, or ADO. The latter largely rely on SQL. However, OLEDB relies on COM. But ODBC had already been around for many years, so there's an OLEDB-ODBC bridge to remedy this. I think there was a bigger picture when M$ invented OLEDB.
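As a concrete illustration of the point several answers make above, namely that the same database can be reached through either API, here is a minimal VBA/ADO sketch. The server, database, provider and driver names are placeholders rather than a recommendation:

Dim cn As Object
Set cn = CreateObject("ADODB.Connection")

' OLE DB: a provider is named directly in the connection string.
cn.Open "Provider=SQLOLEDB;Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI"
cn.Close

' ODBC: a driver (or a DSN) is named instead; ADO routes this through the
' MSDASQL bridge mentioned above.
cn.Open "Driver={SQL Server};Server=myServer;Database=myDb;Trusted_Connection=Yes"
cn.Close

Either connection reaches the same data; what differs is the data access layer carrying the calls.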
{ "language": "en", "url": "https://stackoverflow.com/questions/103167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "194" }
Q: Figuring out the right language for the job: branching out from C# I work in a Microsoft environment, so I can use my C# hammer on any nails I come across. That being said, what languages (compiled, interpreted, scripting, functional, any types!) complement knowing C#, and for what purposes? For example, I've moved a lot of script functionality away from compiled console apps and into Powershell scripts. If you're an MS developer, have you found a niche in your world for other languages like F#, IronRuby, IronPython, or something similar, and what niche do they fill? Note: this question is directed at the Microsoft dev people since I can't run off and start installing LAMP stacks around my company, and therefore having to support it forever. :) However, feel free to mention any other languages that you found interesting to fulfill a certain task/role in your world apart from your main language. A: Python/Perl/Ruby/PowerShell are great supplements to C#/VB.NET. If your boss hands you a text file and says insert it into the database once or twice, then any of Perl/Python/Ruby (I'm not sure about powershell but I imagine it is not that much more difficult) should be fine to parse it. Either way, for your main applications you will probably be stuck in C#. You can use one of the more dynamic languages to do code generation in C#. Since you are in a Microsoft Environment, probably your best chance at getting your solution accepted is PowerShell. Next to that I'd say IronPython or something else that integrates with the CLR. But main issue is that for someone else to maintain what you do, they would have to know whatever language you are using. MS in the future has plans to use PowerShell a lot more, so it is probably easier to justify PowerShell then say Python/Perl/Ruby. If you are just processing a text file for one time use. Or creating a one time code generator to generate all the code and then intend to maintain the generated code, then it doesn't matter. You are the one who will consume the results and if you save time using Perl then more power to you. But if you are doing something that will be used over and over again (like an active code generator where you change the templates and run the generator instead of maintaining the generated code) then other developers working on what you did will need to know the language you used. It is much harder to argue learning Perl/Ruby/Python in a Microsoft Shop. But PowerShell seems like the easier argument. I think the MS grand plan is that eventually applications will expose more functionality for power shell through commandlets. Assuming this happens then PowerShell is even more of a no-brainer because it will expose tons of scriptable functionality that you won't get any other way. A: A nice scripting language is always a good tool to have on you belt. See Ruby, or Python. A: I use python for prototyping, since there's almost no turn around time between edits and actually running the new version of the code. I may even end up using it for a real project - the more I use it, the more I like it. It will take some getting used to as a C# programmer, though - the indentation-defines-structure system it uses is a little weird at first. A: Since you are in a MS shop, I would suggest PowerShell as a decent scripting language to learn. It plays well with C#. I'm a big fan of Ruby too. A: While it's a bit of a fringe language, I'm compelled to mention Erlang. 
Erlang is an excellent language to have in your toolbox since its unusual strengths tend to complement other programming platforms. Erlang is very useful for building distributed, concurrent, fault-tolerant systems. It's used a lot in the instant-messaging and telephony world where there's a need for distributed, yet interconnected architectures. A: I'd like to second or third python. Specifically, IronPython (http://www.codeplex.com/Wiki/View.aspx?ProjectName=IronPython) lets you learn python but also gives you access to the .net framework goodies. It's quite nice for scripting-related tasks so it'll probably be useful for your day-to-day coding life, and also a nice way to muck around in an experimental coding/prototyping way. A: Maybe play around with Boo and see what you think. Boo at Codehaus.org Boo at Wikipedia A: If you're using the .Net framework, the language is really not important as the compiler and interpreter create the same IL code in any case. If you step away from the .Net world, I am of the opinion that development tools and languages are a tool box. I strive to use the right tools for the job at hand, taking into consideration what the skill base of the other developers is and what direction a company is seeking of course (I'm a consultant). A: I'm with jjnguy. Try one of the scripting languages. Plus as a bonus, when you learn Ruby/Python/Perl, etc...it's a gateway drug...err language to developing for other environments. A: Trying out languages outside of your normal toolbox will give you new ways of approaching things in your current favorite language. Even if you don't use them for serious projects languages like Perl (for data mangling), Lisp (functional programming), and Javascript (prototype based programming) will teach you new ways to think about problems in your current language. A: As a web developer by trade, you might look into the XSLT/XPath family as for certain types of XML processing they can be very powerful tools. Granted, in C# 3.x Linq2Xml exposes some similar functionality inline. XSLT, however, can be a powerful way to separate data from presentation in your apps. A: I am very interested in F# and some of the other new languages in the CLR/DLR. The DLR languages might be a lot better for your UI, because they don't make you cast a lot of stupid things. However, I think that it is important to keep in mind that learning a new language, especially in a new area, like functional programming, is always a good way to re-train your mind so that you are exposed to new concepts and you can code better in your language of choice, even if you never use those new languages. A: Check out Boo - it runs on top of the .NET stack, but its syntax is more like Python. A: To learn a new language that complements C#, I'd go with C++. You can use it in a 'way better than p/invoke' style to get access to unmanaged code from your C# apps. You can then start using it to write the memory-constrained apps, and/or performance-critical bits in, if you find some of your .NET applications start hogging all the RAM and/or CPU or just generally aren't as fast as you'd like.
{ "language": "en", "url": "https://stackoverflow.com/questions/103174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Enumerating Certificate Fields in C# How do you enumerate the fields of a certificate held in a store? Specifically, I am trying to enumerate the fields of personal certificates issued to the logged on user. A: See http://msdn.microsoft.com/en-us/library/system.security.cryptography.x509certificates.x509store.aspx using System.Security.Cryptography.X509Certificates; ... var store = new X509Store(StoreName.My); foreach(var cert in store.Certificates) ... A: You also must call store.Open() before you can access the store.
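Putting the two answers together, a minimal self-contained sketch might look like this (the store location and the choice of printed fields are assumptions for illustration):

using System;
using System.Security.Cryptography.X509Certificates;

class Program
{
    static void Main()
    {
        // Personal ("My") store of the logged-on user.
        var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);   // must be opened before Certificates is used
        try
        {
            foreach (X509Certificate2 cert in store.Certificates)
            {
                Console.WriteLine("Subject:   " + cert.Subject);
                Console.WriteLine("Issuer:    " + cert.Issuer);
                Console.WriteLine("Not After: " + cert.NotAfter);

                // Each extension is one of the certificate's additional fields.
                foreach (X509Extension ext in cert.Extensions)
                    Console.WriteLine("  Extension: " + ext.Oid.FriendlyName);
            }
        }
        finally
        {
            store.Close();
        }
    }
}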
{ "language": "en", "url": "https://stackoverflow.com/questions/103177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Where can I find Microsoft assemblies that are not already in Visual Studio? I figured someone can answer the question generally but if anyone wants to get specific I am trying to use: using System.Web.Security.SingleSignOn; using System.Web.Security.SingleSignOn.Authorization; I've googled my brains out and this is the closest answer I found: "We discussed this offline, but it looks like the ADFS assembly is GACed, but not installed on the file system or registered with VS.NET so that it shows up in the .NET tab. I'm guessing MS may need to beef up the installer for this scenario. In the meantime, you probably need to do this yourself." What on earth, do WHAT myself? A: I found an install log showing that it was expected to be in C:\WINDOWS\ADFS\System.Web.Security.SingleSignon.dll on Windows Server 2003. You probably need to have active directory installed for it to appear there because I checked one of my 2003 servers without AD and it wasn't there. Normally I would guess the DLL would be registered in the system-wide Global Assembly Cache (GAC), so you wouldn't have to know the actual path for it. If an assembly is registered in the GAC, then you can add a reference to it by bringing up the "Add Reference" dialog and clicking on the ".NET" Tab. A: You can find the specified namespace in this file: system.web.security.singlesignon.claimtransforms.dll But this file isn't normally available; it is only installed in the GAC (Global Assembly Cache). You may find it under e.g. c:\windows\assembly... and copy the dll to another path. Then you can manually reference it within Visual Studio. A: For projects using a specific environment (like the SharePoint object model), it is recommended to use a virtual PC with the assemblies installed in the GAC. The ADFS assemblies are only present on Windows Server. If you find them and install them manually in your work environment (desktop), some possibilities (like debugging) will not be available. A: If you're trying to add the assembly to the ".NET" tab in the Visual Studio "Add References" dialog box, there's a registry setting you need to make. KB30149 explains it in greater detail. The short version: You need to add an entry to the HKEY_CURRENT_USER\SOFTWARE\Microsoft\.NETFramework\AssemblyFolders registry key. If you're trying to locate a physical file corresponding to an assembly in the GAC, drop to a command prompt and go to %WINDIR%\Assembly (e.g., C:\WINDOWS\Assembly). Navigate around in there - that's where GAC'd assemblies live.
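For the registry approach in the last answer, the entry is a subkey under AssemblyFolders whose default value points at the folder holding the assemblies. A hedged sketch of what that might look like as a .reg file (the subkey name and folder path are illustrative only):

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\SOFTWARE\Microsoft\.NETFramework\AssemblyFolders\My ADFS Assemblies]
@="C:\\WINDOWS\\ADFS"

After importing something like this and restarting Visual Studio, assemblies in that folder should appear on the ".NET" tab of the "Add Reference" dialog.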
{ "language": "en", "url": "https://stackoverflow.com/questions/103178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I set an Application's Icon Globally in Swing? I know I can specify one for each form, or for the root form and then it'll cascade through to all of the children forms, but I'd like to have a way of overriding the default Java Coffee Cup for all forms even those I might forget. Any suggestions? A: You can make the root form (by which I assume you mean JFrame) be your own subclass of JFrame, and put standard functionality in its constructor, such as: this.setIconImage(STANDARD_ICON); You can bundle other standard stuff in here too, such as memorizing the frame's window metrics as a user preference, managing splash panes, etc. Any new frames spawned by this one would also be instances of this JFrame subclass. The only thing you have to remember is to instantiate your subclass, instead of JFrame. I don't think there's any substitute for remembering to do this, but at least now it's a matter of remembering a subclass instead of a setIconImage call (among possibly other features). A: There is another way, but it's more of a "hack" than a real fix.... If you are distributing the JRE with your Application, you could replace the coffee cup icon resource in the java exe/dll/rt.jar wherever that is with your own icon. It might not be very legit, but it is a possibility... A: Also, if you have one "main" window, and set its icon properly, as long as you use that main window as the "parent" for any Dialog classes, they will inherit the icon. Any new Frames need to have the icon set on them, though. As Paul/Andreas said, subclassing JFrame is going to be your best bet. A: Extend the JDialog class (for example name it MyDialog) and set the icon in the constructor. Then all dialogs should extend your implementation (MyDialog).
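A minimal sketch of the subclass approach from the first answer (the class name and the icon resource path are assumptions; the code expects an /app-icon.png on the classpath):

import java.awt.Image;
import javax.swing.ImageIcon;
import javax.swing.JFrame;

// Every window in the application extends this instead of JFrame.
public class AppFrame extends JFrame {
    private static final Image STANDARD_ICON =
            new ImageIcon(AppFrame.class.getResource("/app-icon.png")).getImage();

    public AppFrame(String title) {
        super(title);
        setIconImage(STANDARD_ICON);
        // other app-wide defaults (close operation, saved window metrics, ...) go here
    }
}

Dialogs parented to an AppFrame instance will pick up the same icon, as the later answers note.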
{ "language": "en", "url": "https://stackoverflow.com/questions/103179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Best way to prevent duplicate use of credit cards We have a system where we want to prevent the same credit card number being registered for two different accounts. As we don't store the credit card number internally - just the last four digits and expiration date - we cannot simply compare credit card numbers and expiration dates. Our current idea is to store a hash (SHA-1) in our system of the credit card information when the card is registered, and to compare hashes to determine if a card has been used before. Usually, a salt is used to avoid dictionary attacks. I assume we are vulnerable in this case, so we should probably store a salt along with the hash. Do you guys see any flaws in this method? Is this a standard way of solving this problem? A: Do not store a simple SHA-1 of the credit card number, it would be way too easy to crack (especially since the last 4 digits are known). We had the same problem in my company: here is how we solved it. First solution * *For each credit card, we store the last 4 digits, the expiration date, a long random salt (50 bytes long), and the salted hash of the CC number. We use the bcrypt hash algorithm because it is very secure and can be tuned to be as CPU-intensive as you wish. We tuned it to be very expensive (about 1 second per hash on our server!). But I guess you could use SHA-256 instead and iterate as many times as needed. *When a new CC number is entered, we start by finding all the existing CC numbers that end with the same 4 digits and have the same expiration date. Then, for each matching CC, we check whether its stored salted hash matches the salted hash calculated from its salt and the new CC number. In other words, we check whether or not hash(stored_CC1_salt+CC2)==stored_CC1_hash. Since we have roughly 100k credit cards in our database, we need to calculate about 10 hashes, so we get the result in about 10 seconds. In our case, this is fine, but you may want to tune bcrypt down a bit. Unfortunately, if you do, this solution will be less secure. On the other hand, if you tune bcrypt to be even more CPU-intensive, it will take more time to match CC numbers. Even though I believe that this solution is way better than simply storing an unsalted hash of the CC number, it will not prevent a very motivated pirate (who manages to get a copy of the database) to break one credit card in an average time of 2 to 5 years. So if you have 100k credit cards in your database, and if the pirate has a lot of CPU, then he can can recover a few credit card numbers every day! This leads me to the belief that you should not calculate the hash yourself: you have to delegate that to someone else. This is the second solution (we are in the process of migrating to this second solution). Second solution Simply have your payment provider generate an alias for your credit card. * *for each credit card, you simply store whatever you want to store (for example the last 4 digits & the expiration date) plus a credit card number alias. *when a new credit card number is entered, you contact your payment provider and give it the CC number (or you redirect the client to the payment provider, and he enters the CC number directly on the payment provider's web site). In return, you get the credit card alias! That's it. Of course you should make sure that your payment provider offers this option, and that the generated alias is actually secure (for example, make sure they don't simply calculate a SHA-1 on the credit card number!). 
Now the pirate has to break your system plus your payment provider's system if he wants to recover the credit card numbers. It's simple, it's fast, it's secure (well, at least if your payment provider is). The only problem I see is that it ties you to your payment provider. Hope this helps. A: PCI DSS states that you can store PANs (credit card numbers) using a strong one-way hash. They don't even require that it be salted. That said you should salt it with a unique per card value. The expiry date is a good start but perhaps a bit too short. You could add in other pieces of information from the card, such as the issuer. You should not use the CVV/security number as you are not allowed to store it. If you do use the expiry date then when the cardholder gets issued a new card with the same number it will count as a different card. This could be a good or bad thing depending on your requirements. An approach to make your data more secure is to make each operation computationally expensive. For instance if you md5 twice it will take an attacker longer to crack the codes. Its fairly trivial to generate valid credit card numbers and to attempt a charge through for each possible expiry date. However, it is computationally expensive. If you make it more expensive to crack your hashes then it wouldn't be worthwhile for anyone to bother; even if they had the salts, hashes and the method you used. A: Let's do a little math: Credit card numbers are 16 digits long. The first seven digits are 'major industry' and issuer numbers, and the last digit is the luhn checksum. That leaves 8 digits 'free', for a total of 100,000,000 account numbers, multiplied by the number of potential issuer numbers (which is not likely to be very high). There are implementations that can do millions of hashes per second on everyday hardware, so no matter what salting you do, this is not going to be a big deal to brute force. By sheer coincidence, when looking for something giving hash algorithm benchmarks, I found this article about storing credit card hashes, which says: Storing credit cards using a simple single pass of a hash algorithm, even when salted, is fool-hardy. It is just too easy to brute force the credit card numbers if the hashes are compromised. ... When hashing credit card number, the hashing must be carefully designed to protect against brute forcing by using strongest available cryptographic hash functions, large salt values, and multiple iterations. The full article is well worth a thorough read. Unfortunately, the upshot seems to be that any circumstance that makes it 'safe' to store hashed credit card numbers will also make it prohibitively expensive to search for duplicates. A: @Cory R. King SHA 1 isn't broken, per se. What the article shows is that it's possible to generate 2 strings which have the same hash value in less than brute force time. You still aren't able to generate a string that equates to a SPECIFIC hash in a reasonable amount of time. There is a big difference between the two. A: I believe I have found a fool-proof way to solve this problem. Someone please correct me if there is a flaw in my solution. * *Create a secure server on EC2, Heroku, etc. This server will serve ONE purpose and ONLY one purpose: hashing your credit card. *Install a secure web server (Node.js, Rails, etc) on that server and set up the REST API call. *On that server, use a unique salt (1000 characters) and SHA512 it 1000 times. 
That way, even if hackers get your hashes, they would need to break into your server to find your formula. A: People are over thinking the design of this, I think. Use a salted, highly secure (e.g. "computationally expensive") hash like sha-256, with a per-record unique salt. You should do a low-cost, high accuracy check first, then do the high-cost definitive check only if that check hits. Step 1: Look for matches to the last 4 digits (and possibly also the exp. date, though there's some subtleties there that may need addressing). Step 2: If the simple check hits, use the salt, get the hash value, do the in depth check. The last 4 digits of the cc# are the most unique (partly because it includes the LUHN check digit as well) so the percentage of in depth checks you will do that won't ultimately match (the false positive rate) will be very, very low (a fraction of a percent), which saves you a tremendous amount of overhead relative to the naive "do the hash check every time" design. A: Comparing hashes is a good solution. Make sure that you don't just salt all the credit card numbers with the same constant salt, though. Use a different salt (like the expiration date) on each card. This should make you fairly impervious to dictionary attacks. From this Coding Horror article: Add a long, unique random salt to each password you store. The point of a salt (or nonce, if you prefer) is to make each password unique and long enough that brute force attacks are a waste of time. So, the user's password, instead of being stored as the hash of "myspace1", ends up being stored as the hash of 128 characters of random unicode string + "myspace1". You're now completely immune to rainbow table attack. A: Almost a good idea. Storing just the hash is a good idea, it has served in the password world for decades. Adding a salt seems like a fair idea, and indeed makes a brute force attack that much harder for the attacker. But that salt is going to cost you a lot of extra effort when you actually check to ensure that a new CC is unique: You'll have to SHA-1 your new CC number N times, where N is the number of salts you have already used for all of the CCs you are comparing it to. If indeed you choose good random salts you'll have to do the hash for every other card in your system. So now it is you doing the brute force. So I would say this is not a scalable solution. You see, in the password world a salt adds no cost because we just want to know if the clear text + salt hashes to what we have stored for this particular user. Your requirement is actually pretty different. You'll have to weigh the trade off yourself. Adding salt doesn't make your database secure if it does get stolen, it just makes decoding it harder. How much harder? If it changes the attack from requiring 30 seconds to requiring one day you have achieved nothing -- it will still be decoded. If it changes it from one day to 30 years you have achived someting worth considering. A: Yes, comparing hashes should work fine in this case. A: A salted hash should work just fine. Having a salt-per-user system should be plenty of security. A: SHA1 is broken. Course, there isn't much information out on what a good replacement is. SHA2? A: If you combine the last 4 digits of the card number with the card holder's name (or just last name) and the expiration date you should have enough information to make the record unique. Hashing is nice for security, but wouldn't you need to store/recall the salt in order to replicate the hash for a duplicate check? 
A: I think a good solution, as hinted at above, would be to store a hash value of, say, Card Number, Expiration date, and name. That way you can still do quick comparisons... A: SHA-1 being broken is not a problem here. All "broken" means is that it's possible to calculate collisions (2 data sets that have the same sha1) more easily than you would expect. This might be a problem for accepting arbitrary files based on their sha1 but it has no relevance for an internal hashing application. A: If you are using a payment processor like Stripe / Braintree, let them do the "heavy lifting". They both offer card fingerprinting that you can safely store in your db and compare later to see if a card already exists: * *Stripe returns fingerprint string - see doc *Braintree returns unique_number_identifier string - see doc
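To make the "per-record salt plus slow hash, cheap check first" scheme above concrete, here is a minimal Java sketch. The class and method names, the iteration count, and the use of iterated SHA-256 rather than bcrypt are illustrative assumptions, not a recommendation:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;

class CardFingerprint {
    // A fresh random salt stored alongside each card record.
    static byte[] newSalt() {
        byte[] salt = new byte[32];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    // Iterated SHA-256 over salt + PAN; a tunable work factor such as
    // bcrypt or PBKDF2 is preferable in practice.
    static byte[] hash(byte[] salt, String pan, int iterations) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(salt);
        byte[] digest = md.digest(pan.getBytes(StandardCharsets.US_ASCII));
        for (int i = 1; i < iterations; i++) {
            digest = md.digest(digest);
        }
        return digest;
    }

    // Only called for stored cards whose last four digits and expiry date
    // already match the candidate (the cheap first check).
    static boolean matches(byte[] storedSalt, byte[] storedHash, String candidatePan) throws Exception {
        return MessageDigest.isEqual(storedHash, hash(storedSalt, candidatePan, 100_000));
    }
}

Because the expensive hash only runs against the few records that survive the cheap last-four/expiry filter, the per-hash cost can be made high without slowing down normal registrations.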
{ "language": "en", "url": "https://stackoverflow.com/questions/103184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Is there a macro recorder for Eclipse? Is there a good Eclipse plugin for recording and playing back macros? I've tried this one, but it didn't do me any good- it seemed like it wasn't ready for prime time. I know about editor templates, but I'm looking for something that I can use to record my keystrokes and then apply multiple times against a wad of text. This seems like a strange hole in an IDE. Am I missing some built-in facility for this? A: This seems like a strange hole in an IDE, am I missing some builtin facility for this? This is a common problem. There are around four bugs opened in Eclipse tracker for this. Unfortunately you would probably see macros in Eclipse in v4.0 or later. A: I've had success using AutoHotKey (Windows only, though). A: There was a plug-in called Eclipse Monkey which allowed writing scripts that execute inside the IDE. It was terminated about a month ago due to lack of interest. It is based on an older plug-in called Groovy Monkey. If you google it, you can still get it. The Aptana team has some more information on using it. Note that this allows writing scripts, but not recording actions. A: This is not an Eclipse-specific one, but it can be used there as well: http://sikuli.org/ A: I put something together over the last month or so that you may find useful. It has limitations since the Eclipse editor/commands weren't designed with macro support in mind, but I think it works reasonably well. I just posted Practical Macro at SourceForge a couple of days ago. It requires Eclipse 3.4. A: Just for the record, there is another project called MacroSchmacro that does Eclipse macros, but it doesn't record many important things (like searching to navigate). It is also extremely slow. A: For simple text expansion on a Windows computer, you could use AutoHotkey. It's not as powerful as most macro tools, but since it's not tied to any one program, it can be used in other editors, emails, etc. For example, if I type ";;ln" AutoHotkey instantly sends the keystrokes to delete this and replace it with "System.out.println();" with the cursor in between the parentheses. A: Talking about Emacs, jEdit has a very strong macro facility. There are a lot of high quality macros and plug-ins, and several macros are already built it in. You can even add some logic using bean scripting, which is analogous to VBA. So, you can write very powerful stuff (any many people have done so). jEdit is obviously a separate editor, but I think it's worth a shot. See http://www.jedit.org/ A: Emacs+ Version 3.x adds keyboard macros (http://www.mulgasoft.com/emacsplus) to its feature set.
{ "language": "en", "url": "https://stackoverflow.com/questions/103202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "115" }
Q: How would you implement a secure static login credentials system in Java? We recently had a security audit and it exposed several weaknesses in the systems that are in place here. One of the tasks that resulted from it is that we need to update our partner credentials system make it more secure. The "old" way of doing things was to generate a (bad) password, give it to the partner with an ID and then they would send that ID and a Base 64 encoded copy of that password in with all of their XML requests over https. We then decode them and validate them. These passwords won't change (because then our partners would have to make coding/config changes to change them and coordinating password expirations with hundreds of partners for multiple environments would be a nightmare) and they don't have to be entered by a human or human readable. I am open to changing this if there is a better but still relatively simple implementation for our partners. Basically it comes down to two things: I need a more secure Java password generation system and to ensure that they are transmitted in a secure way. I've found a few hand-rolled password generators but nothing that really stood out as a standard way to do this (maybe for good reason). There may also be a more secure way to transmit them than simple Base 64 encoding over https. What would you do for the password generator and do you think that the transmission method in place is secure enough for it? Edit: The XML comes in a SOAP message and the credentials are in the header not in the XML itself. Also, since the passwords are a one-off operation for each partner when we set them up we're not too worried about efficiency of the generator. A: Password Generation As far as encoding a password for transmission, the only encoding that will truly add security is encryption. Using Base-64 or hexadecimal isn't for security, but just to be able to include it in a text format like XML. Entropy is used to measure password quality. So, choosing each bit with a random "coin-flip" will give you the best quality password. You'd want passwords to be as strong as other cryptographic keys, so I'd recommend a minimum of 128 bits of entropy. There are two easy methods, depending on how you want to encode the password as text (which really doesn't matter from a security standpoint). For Base-64, use something like this: SecureRandom rnd = new SecureRandom(); /* Byte array length is multiple of LCM(log2(64), 8) / 8 = 3. */ byte[] password = new byte[18]; rnd.nextBytes(password); String encoded = Base64.encode(password); The following doesn't require you to come up with a Base-64 encoder. The resulting encoding is not as compact (26 characters instead of 24) and the password doesn't have as much entropy. (But 130 bits is already a lot, comparable to a password of at least 30 characters chosen by a human.) SecureRandom rnd = new SecureRandom(); /* Bit length is multiple of log2(32) = 5. */ String encoded = new BigInteger(130, rnd).toString(32); Creating new SecureRandom objects is computationally expensive, so if you are going to generate passwords frequently, you may want to create one instance and keep it around. A Better Approach Embedding the password in the XML itself seems like a mistake. First of all, it seems like you would want to authenticate a sender before processing any documents they send you. Suppose I hate your guts, and start sending you giant XML files to execute a denial of service attack. 
Do you want to have to parse the XML only to find out that I'm not a legitimate partner? Wouldn't it be better if the servlet just rejected requests from unauthenticated users up front? Second, the passwords of your legitimate partners were protected during transmission by HTTPS, but now they are likely stored "in the clear" on your system somewhere. That's bad security. A better approach would be to authenticate partners when they send you a document with credentials in the HTTP request headers. If you only allow HTTPS, you can take the password out of the document completely and put it into an HTTP "Basic" authentication header instead. It's secured by SSL during transmission, and not stored on your system in the clear (you only store a one-way hash for authentication purposes). HTTP Basic authentication is simple, widely supported, and will be much easier for you and your partners to implement than SSL client certificates. Protecting Document Content If the content of the documents themselves is sensitive, they really should be encrypted by the sender, and stored by you in their encrypted form. The best way to do this is with public key cryptography, but that would be a subject for another question. A: I'm unclear why transmitting the passwords over SSL -- via HTTPS -- is being considered "insecure" by your audit team. So when you ask for two things, it seems the second -- ensuring that the passwords are being transmitted in a secure way -- is already being handled just fine. As for the first, we'd have to know what about the audit exposed your passwords as insecure... A: I'd abandon the whole password approach and start using client certificates allowing a two-way authenticated SSL connection. You have to generate and sign individual certificates for each client. In the SSL handshake, you request the client's certificate and verify it. If it fails, the connection ends with a 401 status code. Certificates can be revoked at any time by your side, allowing you to easily disconnect former customers. Since all this happens in the handshake prior to the communication, it is not possible to flood your server with data.
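For the HTTP "Basic" header suggested above, the credential is simply the ID and password joined with a colon and Base64-encoded. A small sketch (Java 8+ for java.util.Base64; the partner ID and password are placeholders):

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {
    public static void main(String[] args) {
        String partnerId = "partner42";
        String partnerPassword = "s3cretFromTheGenerator";
        String token = Base64.getEncoder().encodeToString(
                (partnerId + ":" + partnerPassword).getBytes(StandardCharsets.UTF_8));
        // Sent by the partner on every HTTPS request instead of embedding
        // the password in the SOAP header:
        System.out.println("Authorization: Basic " + token);
    }
}

On the receiving side, the servlet can reject any request without valid credentials before the XML is parsed at all, which also addresses the denial-of-service concern raised above.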
{ "language": "en", "url": "https://stackoverflow.com/questions/103203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Finding First Row in a RDLC Table I have a table in a RDLC report which is utilized as a subreport, and the first column of this table is a static string. Does anyone know how I can determine if a row is the first in the table? I tried using "=First("My String")" but it didn't work. A: Looking at the link supplied by ThatBloke in his answer, I found the RowNumber command. Which means that this worked: =IIf(RowNumber(Nothing)=1,"myString", "") A: Aggregate functions work with "Scope"; referring to the paragraph on scope in this MSDN article might help: http://msdn.microsoft.com/fr-fr/library/ms252112(VS.80).aspx From what I understand you may have to define a scope or try =First("MyString", Nothing). A: =IIF((RowNumber(Nothing) Mod <N>)=0, ...), where <N> indicates the number of rows you want to display.
{ "language": "en", "url": "https://stackoverflow.com/questions/103240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: VB6 / Crystal Report 8.5 error: A string is required here I recently inherited an old visual basic 6/ crystal reports project which connects to a sql server database. The error message I get (Error# -2147191803 A String is required here) when I attempt to run the project seems to be narrowed down to the .Printout command in the following code: 'Login to database Set Tables = Report.Database.Tables Set Table = Tables.Item(1) Table.SetLogOnInfo ConnName, DBName, user, pass DomainName = CStr(selected) 'Set parameter Fields 'Declare parameter holders Set ParamDefs = Report.ParameterFields 'Store parameter objects For Each ParamDef In ParamDefs With ParamDef MsgBox("DomainName : " + DomainName) Select Case .ParameterFieldName Case "Company Name" .SetCurrentValue DomainName End Select Select Case .Name Case "{?Company Name}" .SetCurrentValue DomainName End Select 'Flag to see what is assigned to Current value MsgBox("paramdef: " + ParamDef.Value) End With Next Report.EnableParameterPrompting = False Screen.MousePointer = vbHourglass 'CRViewer1.ReportSource = Report 'CRViewer1.ViewReport test = 1 **Report.PrintOut** test = test + 3 currenttime = Str(Now) currenttime = Replace(currenttime, "/", "-") currenttime = Replace(currenttime, ":", "-") DomainName = Replace(DomainName, ".", "") startName = mPath + "\crysta~1.pdf" endName = mPath + "\" + DomainName + "\" + DomainName + " " + currenttime + ".pdf" rc = MsgBox("Wait for PDF job to finish", vbInformation, "H/W Report") Name startName As endName Screen.MousePointer = vbDefault End If During the run, the form shows up, the ParamDef variable sets the "company name" and when it gets to the Report.PrintOut line which prompts to print, it throws the error. I'm guessing the crystal report isn't receiving the "Company Name" to properly run the crystal report. Does any one know how to diagnose this...either on the vb6 or crystal reports side to determine what I'm missing here? UPDATE: * *inserted CStr(selected) to force DomainName to be a string *inserted msgboxes into the for loop above and below the .setcurrentvalue line *inserted Case "{?Company Name}" statement to see if that helps setting the value *tried .AddCurrentValue and .SetCurrentValue functions as suggested by other forum websites *ruled out that it was my development environement..loaded it on another machine with the exact same vb6 crystal reports 8.5 running on winxp prof sp2 and the same errors come up. and when I run the MsgBox(ParamDef.Value) and it also turns up blank with the same missing string error. I also can't find any documentation on the craxdrt.ParameterFieldDefinition class to see what other hidden functions are available. When I see the list of methods and property variables, it doesn't list SetCurrentValue as one of the functions. Any ideas on this? A: What is the value of your selected variable? Table.SetLogOnInfo ConnName, DBName, user, pass DomainName = selected 'Set parameter Fields If it is not a string, then that might be the problem. Crystal expects a string variable and when it doesn't receive what it expects, it throws errors. A: selected is a string variable input taken from a form with a drop down select box. I previously put a message box there to ensure there the variable is passing through right before the Report.Printout and it does come up. DomainName variable is also declared as a string type. A: Here's how I set my parameters in Crystal (that came with .NET -- don't know if it helps you). 
Dim dv As New ParameterDiscreteValue dv.Value = showphone rpt.ParameterFields("showphone").CurrentValues.Add(dv) A: This can happen in crystal reports 8.5 if you changed the length of a string column you use in your report so that it exceeds 255 bytes. This can also happen if you change the column type from varchar to nvarchar (double byte string!) The reason for this is that crystal reports 8.5 treats all strings longer than 255 bytes as memo fields. I would suggest youe upgrade to crystal reports XI - the API has not changed that much. A: From my point of view you should get the same error message if you open the report in the Crystal Reports Designer and switch to preview mode. The Designer should also show you a message with the exact location of the problem, e.g. the field which can not be treated as a string. The problem is not that a field longer than 255 bytes cannot be printed. The problem is that a field longer than 255 bytes cannot be used in a formula, as a sort criteria ... A: Here is a live example of setting the parameters that we use: For Each CRXParamDef In CrystalReport.ParameterFields Select Case CRXParamDef.ParameterFieldName Case "@start" CRXParamDef.AddCurrentValue CDate("1/18/2002 12:00:00AM") Case "@end" CRXParamDef.AddCurrentValue Now End Select Next This is actually a sample written in VBScript for printing Crystal 8.5 reports, but the syntax is the same for VB6
{ "language": "en", "url": "https://stackoverflow.com/questions/103261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Does Weblogic 9.x support the 2.4 Servlet standard? Seems like a simple enough question but I can't seem to find the answer. And hey, dead simple questions like this with dead simple answers is what Joel and Jeff want SO to be all about, right? A: http://e-docs.bea.com/wls/docs92/compatibility/compatibility.html BEA WebLogic Server is one hundred percent J2EE 1.4 compatible ...which means that it supports the servlet 2.4 specification.
{ "language": "en", "url": "https://stackoverflow.com/questions/103271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What's the best way to make a mobile friendly site? Speaking entirely in technology-free terms, what is the best way to make a mobile friendly site? That is, I want to make a site that will work on a regular computer but also have mobile versions of the pages. Should I rewrite each page? The pages will probably have different functionality, so should I rewrite the backend code? Should it be an effectively different site with the same database? A: On my site, I detect user agent, and for known mobile browsers I serve a different stylesheet, with some larger/less necessary items left off some pages. The backend doesn't really change. A: I added a mobile presentation layer to an operational site about a year ago. Based on the architecture of the site (hopefully this isn't too technology dependent for you) I added a new set of JSPs to accommodate mobile browsers (sidenote: see http://wurfl.sourceforge.net/ for a great way to build mobile pages independent of browser type). Additionally some of the back-end functionality was changed due to the limited functionality of most mobile browsers. So, in short, the integration wasn't as painful as one would expect. Good luck! A: This is a pretty broad question, but here goes: * *If the site is primarily about the content, meaning it's not so much a service you use as it's a publication you read, then I'd try to avoid publishing two sites wherever possible. Concentrate on simple presentation using mature technologies that mobile browsers can handle fairly well. *If it's essentially a software application delivered via the network, then things get trickier, because you're going to want to consider the UI of the mobile device, and how it differs from the desktop. *This should go without saying, but either way, if you have many mobile users, you should keep that in mind when you author content for the site. Formats, length, voice, etc. A: In addition to the WURFL / WALL capabilities system that todd mentioned, there are Java Server Faces libraries available that use alternate WML renderkits for mobile phones. A: One way I have done it in the past was to make sure my data was abstracted well in the data tier and then use separate middle tier models to pull what was appropriate. In my case the application was a weather application and the display methods of the target devices were really limited so we opted to only show the user the essentials on the mobile devices while the website was full featured. That was probably 10 years ago when WAP was big. But these days with devices getting bigger screens, better bandwidth, you may want to consume and display the exact same data with a different view model. I never really know what type of application will need to consume the data in the future. We do a lot of apps across platforms but the domain model rarely changes. So I end up using the same middle tier objects where I can and pulling that data in different clients. A good example of this is a recent project where we had a rich internet application (widget), a full website, and a web service consuming the same data. Data abstraction in the middle-tier really shines in this environment. A: On a very high level of abstraction, there are two main caveats with mobile devices: (1) their screen is small, (2) their network connection is intermittent. This basically means that your need to present the content so that it looks fine even on a small (variable size) screen, and preferably make it cacheable too so that your users can browse the content while offline. 
Then there's also the problem of low bandwidth and high latency, but those are slightly less important nowadays. A: This is a very thorough overview of how to make a site mobile, though i hope its fair to say that there will always be different requirements for anyone seeking to go mobile. If you have a Blog, then you could just as easily make it mobile friendly using Mippin Mobilizer; its free, provides branding customisation tools, and with a big audience already browsing a wide mix of mobilized content, there's opportunities to generate advertising revenue around your blog. This is because the Mippin Mobilized blog then becomes part of a much wider community of content, people, news, blogs, listings, all connecting around content, and much more at the mobile site: http://mippin.com (on a mobile browser.) Take a look at the Mobilizing tool because it shows off what the site can do in a second: www.mippin.com/mobilizer Only if you have a blog of course...
{ "language": "en", "url": "https://stackoverflow.com/questions/103273", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Portable way to catch signals and report problem to the user If by some miracle a segfault occurs in our program, I want to catch the SIGSEGV and let the user (possibly a GUI client) know with a single return code that a serious problem has occurred. At the same time I would like to display information on the command line to show which signal was caught. Today our signal handler looks as follows: void catchSignal (int reason) { std :: cerr << "Caught a signal: " << reason << std::endl; exit (1); } I can hear the screams of horror with the above, as I have read from this thread that it is evil to call a non-reentrant function from a signal handler. Is there a portable way to handle the signal and provide information to users? EDIT: Or at least portable within the POSIX framework? A: This table lists all of the functions that POSIX guarantees to be async-signal-safe and so can be called from a signal handler. By using the 'write' command from this table, the following relatively "ugly" solution hopefully will do the trick: #include <csignal> #ifdef _WINDOWS_ #define _exit _Exit #else #include <unistd.h> #endif #define PRINT_SIGNAL(X) case X: \ write (STDERR_FILENO, #X ")\n" , sizeof(#X ")\n")-1); \ break; void catchSignal (int reason) { char s[] = "Caught signal: ("; write (STDERR_FILENO, s, sizeof(s) - 1); switch (reason) { // These are the handlers that we catch PRINT_SIGNAL(SIGUSR1); PRINT_SIGNAL(SIGHUP); PRINT_SIGNAL(SIGINT); PRINT_SIGNAL(SIGQUIT); PRINT_SIGNAL(SIGABRT); PRINT_SIGNAL(SIGILL); PRINT_SIGNAL(SIGFPE); PRINT_SIGNAL(SIGBUS); PRINT_SIGNAL(SIGSEGV); PRINT_SIGNAL(SIGTERM); } _Exit (1); // 'exit' is not async-signal-safe } EDIT: Building on Windows. After trying to build this on Windows, it appears that 'STDERR_FILENO' is not defined. From the documentation however its value appears to be '2'. #include <io.h> #define STDERR_FILENO 2 EDIT: 'exit' should not be called from the signal handler either! As pointed out by fizzer, calling _Exit in the above is a sledge hammer approach for signals such as HUP and TERM. Ideally, when these signals are caught a flag with "volatile sig_atomic_t" type can be used to notify the main program that it should exit. The following I found useful in my searches. * *Introduction To Unix Signals Programming *Extending Traditional Signals A: FWIW, 2 is standard error on Windows also, but you're going to need some conditional compilation because their write() is called _write(). You'll also want #ifdef SIGUSR1 /* or whatever */ etc around all references to signals not guaranteed to be defined by the C standard. Also, as noted above, you don't want to handle SIGUSR1, SIGHUP, SIGINT, SIGQUIT and SIGTERM like this. A: Richard, still not enough karma to comment, so a new answer I'm afraid. These are asynchronous signals; you have no idea when they are delivered, so possibly you will be in library code which needs to complete to stay consistent. Signal handlers for these signals are therefore required to return. If you call exit(), the library will do some work post-main(), including calling functions registered with atexit() and cleaning up the standard streams. This processing may fail if, say, your signal arrived in a standard library I/O function. Therefore in C90 you are not allowed to call exit(). I see now C99 relaxes the requirement by providing a new function _Exit() in stdlib.h. _Exit() may safely be called from a handler for an asynchronous signal.
_Exit() will not call atexit() functions and may omit cleaning up the standard streams at the implementation's discretion. To bk1e (commenter a few posts up) The fact that SIGSEGV is synchronous is why you can't use functions that are not designed to be reentrant. What if the function that crashed was holding a lock, and the function called by the signal handler tries to acquire the same lock? This is a possibility, but it's not 'the fact that SIGSEGV is synchronous' which is the problem. Calling non-reentrant functions from the handler is much worse with asynchronous signals for two reasons: * *asynchronous signal handlers are (generally) hoping to return and resume normal program execution. A handler for a synchronous signal is (generally) going to terminate anyway, so you've not lost much if you crash. *in a perverse sense, you have absolute control over when a synchronous signal is delivered - it happens as you execute your defective code, and at no other time. You have no control at all over when an async signal is delivered. Unless the OP's own I/O code is ifself the cause of the defect - e.g. outputting a bad char* - his error message has a reasonable chance of succeeding. A: Write a launcher program to run your program and report abnormal exit code to the user.
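For completeness, here is a minimal sketch of the "set a volatile sig_atomic_t flag and let main() exit cleanly" approach mentioned in the question's edits, which suits signals such as HUP, INT and TERM better than calling _Exit from the handler. The structure of the main loop is illustrative:

#include <csignal>

volatile std::sig_atomic_t gotSignal = 0;

extern "C" void requestShutdown(int reason)
{
    gotSignal = reason;   // the only thing the handler does
}

int main()
{
    std::signal(SIGINT,  requestShutdown);
    std::signal(SIGTERM, requestShutdown);

    while (gotSignal == 0)
    {
        // ... normal work ...
    }

    // Clean up and report gotSignal to the user here, outside the handler,
    // where any function may safely be called.
    return 1;
}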
{ "language": "en", "url": "https://stackoverflow.com/questions/103280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What happens if MySQL connections continually aren't closed on PHP pages? At the beginning of each PHP page I open up the connection to MySQL, use it throughout the page and close it at the end of the page. However, I often redirect in the middle of the page to another page, and so in those cases the connection is not explicitly closed. I understand that this is not bad for performance of the web server since PHP automatically closes all MySQL connections at the end of each page anyway. Are there any other issues here to keep in mind, or is it really true that you don't have to worry about closing your database connections in PHP? $mysqli = new mysqli("localhost", "root", "", "test"); ...do stuff, perhaps redirect to another page... $mysqli->close(); A: Just because you redirect doesn't mean the script stops executing. A redirect is just a header being sent. If you don't exit() right after, the rest of your script will continue running. When the script does finish running, it will close off all open connections (or release them back to the pool if you're using persistent connections). Don't worry about it. A: From: http://us3.php.net/manual/en/mysqli.close.php "Open connections (and similar resources) are automatically destroyed at the end of script execution. However, you should still close or free all connections, result sets and statement handles as soon as they are no longer required. This will help return resources to PHP and MySQL faster." A: There might be a limit on how many connections can be open at once, so if you have many users you might run out of SQL connections. In effect, users will see SQL errors instead of nice web pages. It's better to open a connection to read data, then close it, then display the data, and once the user clicks "submit", open another connection and then submit all changes.
{ "language": "en", "url": "https://stackoverflow.com/questions/103281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How to convert an unmanaged double to a managed string? From managed C++, I am calling an unmanaged C++ method which returns a double. How can I convert this double into a managed string? A: I assume something like (gcnew System::Double(d))->ToString() A: C++ is definitely not my strongest skillset. I misread the question: this converts to a std::string, which is not exactly what you are looking for, but I'm leaving it since it was the original post.... double d = 123.45; std::ostringstream oss; oss << d; std::string s = oss.str(); This should convert to a managed string, however: double d = 123.45; String^ s = System::Convert::ToString(d);
{ "language": "en", "url": "https://stackoverflow.com/questions/103298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to test for the EOF flag in R? How can I test for the EOF flag in R? For example: f <- file(fname, "rb") while (???) { a <- readBin(f, "int", n=1) } A: The readLines function will return a zero-length value when it reaches the EOF. A: Try checking the length of data returned by readBin: while (length(a <- readBin(f, 'int', n=1)) > 0) { # do something }
{ "language": "en", "url": "https://stackoverflow.com/questions/103312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: In bash, environmental variables not tab-expanding correctly In bash, environmental variables will tab-expand correctly when placed after an echo command, for example: echo $HOME But after cd or cat, bash places a \ before the $ sign, like so: cd \$HOME If I use a variable as the second argument to a command, it won't expand at all: cp somefile $HOM What mysterious option do I have in my .bashrc or .inputrc file that is causing me such distress? A: What you're describing is a "feature" introduced in bash 4.2. So you don't have any mysterious option causing you distress, but just "intended" behaviour. I find this very annoying since I preferred it the way it used to be and haven't found any configuration options yet to get the earlier behaviour back. Playing with complete options as suggested by other answers didn't get me anywhere. A: Try complete -r cd to remove the special programmatic completion function that many Linux distributions install for the cd command. The function adds searching a list of directories specified in the CDPATH variable to tab completions for cd, but at the expense of breaking the default completion behavior. See http://www.gnu.org/software/bash/manual/bashref.html#Programmable-Completion for more gory details. A: For the second instance, you can press ESC before tab to solve it. I don't know the solution to your problem, but you could look in /etc/bash_completion or the files under /etc/bash_completion.d to determine what commands use autocompletion and how. help complete might also be helpful. A: The Bash Reference Manual has more information than you might want on expansion errata. Section 8.7 looks like it would be the place to start. It gives information on the 'complete' function, among other things. A: Check the answer for https://superuser.com/questions/434139/urxvt-tab-expand-environment-variables by Dmitry Alexandrov: This is about the direxpand option. $ shopt -s direxpand and $FOO_PATH/ will be expanded by TAB. A: I'm answering a 4-year-old question! Fantastic! This is a bash bug/feature which was unintentionally introduced in v4.2 and went unnoticed for a long period of time. This was pointed out by geirha in this thread. Confirmed as an unintended feature here. I came across this problem when running Ubuntu at home. At work I have bash-3.00, so I've spent some time browsing around to see what's going on. I wonder if I can 'downgrade'....
{ "language": "en", "url": "https://stackoverflow.com/questions/103316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: What is the correct XPath for choosing attributes that contain "foo"? Given this XML, what XPath returns all elements whose prop attribute contains Foo (the first three nodes): <bla> <a prop="Foo1"/> <a prop="Foo2"/> <a prop="3Foo"/> <a prop="Bar"/> </bla> A: Have you tried something like: //a[contains(@prop, "Foo")] I've never used the contains function before but suspect that it should work as advertised... A: John C is the closest, but XPath is case sensitive, so the correct XPath would be: /bla/a[contains(@prop, 'Foo')] A: If you also need to match the content of the link itself, use text(): //a[contains(@href,"/some_link")][text()="Click here"] A: //a[contains(@prop,'Foo')] Works if I use this XML to get results back. <bla> <a prop="Foo1">a</a> <a prop="Foo2">b</a> <a prop="3Foo">c</a> <a prop="Bar">a</a> </bla> Edit: Another thing to note is that while the XPath above will return the correct answer for that particular XML, if you want to guarantee you only get the "a" elements in element "bla", you should, as others have mentioned, also use /bla/a[contains(@prop,'Foo')] This will find all "a" elements in your entire XML document, regardless of whether they are nested in a "bla" element: //a[contains(@prop,'Foo')] I added this for the sake of thoroughness and in the spirit of stackoverflow. :) A: This XPath will give you all nodes that have attributes containing 'Foo' regardless of node name or attribute name: //attribute::*[contains(., 'Foo')]/.. Of course, if you're more interested in the contents of the attributes themselves, and not necessarily their parent node, just drop the /.. //attribute::*[contains(., 'Foo')] A: /bla/a[contains(@prop, "foo")] A: try this: //a[contains(@prop,'foo')] that should work for any "a" tags in the document A: descendant-or-self::*[contains(@prop,'Foo')] Or: /bla/a[contains(@prop,'Foo')] Or: /bla/a[position() <= 3] Dissected: descendant-or-self:: The Axis - search through every node underneath and the node itself. It is often better to say this than //. I have encountered some implementations where // means anywhere (descendant or self of the root node). The others use the default axis. * or /bla/a The Tag - a wildcard match, and /bla/a is an absolute path. [contains(@prop,'Foo')] or [position() <= 3] The condition within [ ]. @prop is shorthand for attribute::prop, as attribute is another search axis. Alternatively you can select the first 3 by using the position() function. A: For the code above... //*[contains(@prop,'foo')]
{ "language": "en", "url": "https://stackoverflow.com/questions/103325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "215" }
Q: How do you glue Lua to C++ code? Do you use Luabind, toLua++, or some other library (if so, which one) or none at all? For each approach, what are the pros and cons? A: I can't really agree with the 'roll your own' vote; binding basic types and static C functions to Lua is trivial, yes, but the picture changes the moment you start dealing with tables and metatables; things get trickier very quickly. LuaBind seems to do the job, but I have a philosophical issue with it. For me it seems like, if your types are already complicated, the fact that Luabind is heavily templated is not going to make your code any easier to follow; as a friend of mine said, "you'll need Herb Sutter to figure out the compilation messages". Plus it depends on Boost, plus compilation times take a serious hit, etc. After trying a few bindings, Tolua++ seems the best. Tolua doesn't seem to be very much in development, whereas Tolua++ seems to work fine (plus half the 'Tolua' tutorials out there are, in fact, 'Tolua++' tutorials, trust me on that :) Tolua does generate the right stuff, the source can be modified, and it seems to deal with complicated cases (like templates, unions, nameless structs, etc, etc). The biggest issue with Tolua++ seems to be the lack of proper tutorials, pre-set Visual Studio projects, or the fact that the command line is a bit tricky to follow (your paths/files can't have white spaces - in Windows at least - and so on). Still, for me it is the winner. A: To answer my own question in part: Luabind: once you know how to bind methods and classes via this awkward template syntax, it's pretty straightforward and easy to add new bindings. However, luabind has a significant performance impact and shouldn't be used for realtime applications. About 5-20 times more overhead than calling C functions that manipulate the stack directly. A: I don't use any library. I used SWIG to expose a C library some time ago, but there was too much overhead, and I stopped using it. The pros are better performance and more control, but it takes more time to write. A: Use the raw Lua API for your bindings -- and keep them simple. Take inspiration from the API itself (the AUX library) and libraries by the Lua authors. With some practice the raw API is the best option -- maximum flexibility and a minimum of unneeded overhead. You've got what you want and no more, the way you need it to be. If you must bind large third-party libraries, use automated generators like tolua or tolua++ (or even roll your own for the specific case). It would free you from manual work. I would not recommend using Luabind. At the moment its development has stalled (however, it is starting to come back to life), and if you meet some corner case, you may be on your own. Also, Luabind heavily uses template metaprogramming. This may (and may not) be unacceptable, depending on the point of view.
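To make the "use the raw Lua API" suggestion concrete, here is a minimal sketch of exposing one C++ function to Lua by hand; it assumes Lua 5.1 or later, and the function and script chunk are made up for illustration:

#include <lua.hpp>     // lua.h / lauxlib.h / lualib.h with extern "C" wrappers
#include <iostream>

// Any function with this signature can be pushed into Lua. It reads its
// arguments off the Lua stack and pushes its results back onto the stack.
static int l_square(lua_State* L)
{
    double x = luaL_checknumber(L, 1);  // argument #1
    lua_pushnumber(L, x * x);           // push one result
    return 1;                           // number of results pushed
}

int main()
{
    lua_State* L = luaL_newstate();
    luaL_openlibs(L);

    lua_register(L, "square", l_square);          // expose it as the global 'square'

    if (luaL_dostring(L, "print(square(7))") != 0)
        std::cerr << lua_tostring(L, -1) << std::endl;

    lua_close(L);
    return 0;
}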
{ "language": "en", "url": "https://stackoverflow.com/questions/103347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: C++ strings: UTF-8 or 16-bit encoding? I'm still trying to decide whether my (home) project should use UTF-8 strings (implemented in terms of std::string with additional UTF-8-specific functions when necessary) or some 16-bit string (implemented as std::wstring). The project is a programming language and environment (like VB, it's a combination of both). There are a few wishes/constraints: * *It would be cool if it could run on limited hardware, such as computers with limited memory. *I want the code to run on Windows, Mac and (if resources allow) Linux. *I'll be using wxWidgets as my GUI layer, but I want the code that interacts with that toolkit confined in a corner of the codebase (I will have non-GUI executables). *I would like to avoid working with two different kinds of strings when working with user-visible text and with the application's data. Currently, I'm working with std::string, with the intent of using UTF-8 manipulation functions only when necessary. It requires less memory, and seems to be the direction many applications are going anyway. If you recommend a 16-bit encoding, which one: UTF-16? UCS-2? Another one? A: I have never found any reason to use anything other than UTF-8, to be honest. A: If you decide to go with UTF-8 encoding, check out this library: http://utfcpp.sourceforge.net/ It may make your life much easier. A: I've actually written a widely used application (5 million+ users) so every kilobyte used adds up, literally. Despite that, I just stuck to wxString. I've configured it to be derived from std::wstring, so I can pass them to functions expecting a wstring const&. Please note that std::wstring is native Unicode on the Mac (no UTF-16 needed for characters above U+10000), and therefore it uses 4 bytes/wchar_t. The big advantage of this is that i++ gets you the next character, always. On Win32 that is true in only 99.9% of the cases. As a fellow programmer, you'll understand how little 99.9% is. But if you're not convinced, write the function to uppercase a std::string[UTF-8] and a std::wstring. Those 2 functions will tell you which way is insanity. Your on-disk format is another matter. For portability, that should be UTF-8. There's no endianness concern in UTF-8, nor a discussion over the width (2/4). This may be why many programs appear to use UTF-8. On a slightly unrelated note, please read up on Unicode string comparisons and normalization. Or you'll end up with the same bug as .NET, where you can have two variables föö and föö differing only in (invisible) normalization. A: UTF-16 is still a variable length character encoding (there are more than 2^16 Unicode codepoints), so you can't do O(1) string indexing operations. If you're doing lots of that sort of thing, you're not saving anything in speed over UTF-8. On the other hand, if your text includes a lot of codepoints in the 256-65535 range, UTF-16 can be a substantial improvement in size. UCS-2 is a variation on UTF-16 that is fixed length, at the cost of prohibiting any codepoints at or above 2^16. Without knowing more about your requirements, I would personally go for UTF-8. It's the easiest to deal with for all the reasons others have already listed. A: I would recommend UTF-16 for any kind of data manipulation and UI. The Mac OS X and Win32 APIs use UTF-16, same for wxWidgets, Qt, ICU, Xerces, and others. UTF-8 might be better for data interchange and storage. See http://unicode.org/notes/tn12/.
But whatever you choose, I would definitely recommend against std::string with UTF-8 "only when necessary". Go all the way with UTF-16 or UTF-8, but do not mix and match; that is asking for trouble. A: MicroATX is pretty much a standard PC motherboard format, most capable of 4-8 GB of RAM. If you're talking picoATX maybe you're limited to 1-2 GB RAM. Even then that's plenty for a development environment. I'd still stick with UTF-8 for the reasons mentioned above, but memory shouldn't be your concern. A: From what I've read, it's better to use a 16-bit encoding internally unless you're short on memory. It fits almost all living languages in one character. I'd also look at ICU. If you're not going to be using certain STL features of strings, using the ICU string types might be better for you. A: Have you considered using wxStrings? If I remember correctly, they can do UTF-8 <-> Unicode conversions and it will make it a bit easier when you have to pass strings to and from the UI.
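To make the trade-off concrete: with UTF-8 in a plain std::string you give up O(1) indexing by character, but walking code points stays simple because continuation bytes are easy to recognize. A minimal sketch (illustrative only; it does not validate malformed input):

#include <string>
#include <cstddef>

// Counts Unicode code points in a UTF-8 encoded std::string.
// Every byte that is NOT of the form 10xxxxxx starts a new code point.
std::size_t utf8_length(const std::string& s)
{
    std::size_t count = 0;
    for (std::string::size_type i = 0; i < s.size(); ++i) {
        unsigned char c = static_cast<unsigned char>(s[i]);
        if ((c & 0xC0) != 0x80)  // skip continuation bytes
            ++count;
    }
    return count;
}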
{ "language": "en", "url": "https://stackoverflow.com/questions/103358", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How well does the Android Phone Emulator reflect the performance? I've been playing around with OpenGL ES development on Android. OpenGL ES applications seem to run slowly in the Emulator on my development machine. Does this reflect the likely performance of actual hardware? I'm concerned about spending too much time developing an application if the graphics performance is going to be sluggish. A: The emulator is super slow on my Mobile Intel Pentium M 725, 1600 MHz. I'm assuming the emulator isn't representative of real world performance. A: The emulator is so slow that in some cases an OpenGL application won't even start when using it, while actual Android hardware can be powerful enough that you can even play GTA on it. A: Configuring VM Acceleration on Windows Virtual machine acceleration for Windows requires the installation of the Intel Hardware Accelerated Execution Manager (Intel HAXM). The software requires an Intel CPU with Virtualization Technology (VT) support and one of the following operating systems: Windows 7 (32/64-bit) Windows Vista (32/64-bit) Windows XP (32-bit only) To install the virtualization driver: Start the Android SDK Manager, select Extras and then select Intel Hardware Accelerated Execution Manager. After the download completes, execute <sdk>/extras/intel/Hardware_Accelerated_Execution_Manager/IntelHAXM.exe Follow the on-screen instructions to complete installation. After installation completes, confirm that the virtualization driver is operating correctly by opening a command prompt window and running the following command: sc query intelhaxm You should see a status message including the following information: SERVICE_NAME: intelhaxm ... STATE : 4 RUNNING ... To run an x86-based emulator with VM acceleration: If you are running the emulator from the command line, just specify an x86-based AVD: emulator -avd <avd_name> A: After the new update the emulators have become much more reliable, but they still can't be taken as the way to check the performance of your application. For now, testing the application on real devices is more reliable than the emulators. A: With the new emulator in Android Studio 2.0, if you have a good computer it runs quite smoothly, at least for my application! Check the features!
{ "language": "en", "url": "https://stackoverflow.com/questions/103366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Managed C++ Method naming I'm using managed c++ to implement a method that returns a string. I declare the method in my header file using the following signature: String^ GetWindowText() However, when I'm using this method from C#, the signature is: string GetWindowTextW(); How do I get rid of the extra "W" at the end of the method's name? A: To get around the preprocessor hackery of the Windows header files, declare it like this: #undef GetWindowText String^ GetWindowText() Note that, if you actually use the Win32 or MFC GetWindowText() routines in your code, you'll need to either redefine the macro or call them as GetWindowTextW(). A: GetWindowText is a win32 api call that is aliased via a macro to GetWindowTextW in your C++ project. Try adding #undef GetWindowText to you C++ project. A: Not Managed c++ but C++/CLI for the .net platform. A set of Microsoft extensions to C++ for use with their .Net system. Bjarne Stroustrup's FAQ http://www.research.att.com/~bs/bs_faq.html#CppCLI C++/CLI is not C++, don't tag it as such. Txs
{ "language": "en", "url": "https://stackoverflow.com/questions/103382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Oracle Natural Joins and Count(1) Does anyone know why in Oracle 11g when you do a Count(1) with more than one natural join it does a cartesian join and throws the count way off? Such as SELECT Count(1) FROM record NATURAL join address NATURAL join person WHERE status=1 AND code = 1 AND state = 'TN' This pulls back like 3 million rows when SELECT * FROM record NATURAL join address NATURAL join person WHERE status=1 AND code = 1 AND state = 'TN' pulls back like 36000 rows, which is the correct amount. Am I just missing something? Here are the tables I'm using to get this result. CREATE TABLE addresses ( address_id NUMBER(10,0) NOT NULL, address_1 VARCHAR2(60) NULL, address_2 VARCHAR2(60) NULL, city VARCHAR2(35) NULL, state CHAR(2) NULL, zip VARCHAR2(5) NULL, zip_4 VARCHAR2(4) NULL, county VARCHAR2(35) NULL, phone VARCHAR2(11) NULL, fax VARCHAR2(11) NULL, origin_network NUMBER(3,0) NOT NULL, owner_network NUMBER(3,0) NOT NULL, corrected_address_id NUMBER(10,0) NULL, "HASH" VARCHAR2(200) NULL ); CREATE TABLE rates ( rate_id NUMBER(10,0) NOT NULL, eob VARCHAR2(30) NOT NULL, network_code NUMBER(3,0) NOT NULL, product_code VARCHAR2(2) NOT NULL, rate_type NUMBER(1,0) NOT NULL ); CREATE TABLE records ( pk_unique_id NUMBER(10,0) NOT NULL, rate_id NUMBER(10,0) NOT NULL, address_id NUMBER(10,0) NOT NULL, effective_date DATE NOT NULL, term_date DATE NULL, last_update DATE NULL, status CHAR(1) NOT NULL, network_unique_id VARCHAR2(20) NULL, rate_id_2 NUMBER(10,0) NULL, contracted_by VARCHAR2(50) NULL, contract_version VARCHAR2(5) NULL, bill_address_id NUMBER(10,0) NULL ); I should mention this wasn't a problem in Oracle 9i, but when we switched to 11g it became a problem. A: My advice would be to NOT use NATURAL JOIN. Explicitly define your join conditions to avoid confusion and "hidden bugs". Here is the official NATURAL JOIN Oracle documentation and more discussion about this subject. A: If it happens exactly as you say, then it must be an optimiser bug; you should report it to Oracle. A: You should try a count(*). There is a difference between the two: count(1) signifies "count rows where 1 is not null", while count(*) signifies "count the rows". A: Just noticed you used 2 natural joins... From the documentation (Natural_Join), you can only use a natural join on 2 tables.
{ "language": "en", "url": "https://stackoverflow.com/questions/103389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Setting the SVN "execute" bit in a Subversion repository using TortoiseSVN or command line SVN I've got an open-source app that is hosted at code.google.com. It is cross platform ( Linux / Windows / Mac ). I uploaded the code initially from a WinXP machine using TortoiseSVN and it seems that none of the "configure" batch files that are used for the linux build have their "execute" bits set. What would be the easiest way to set these for the files that need them? Using TortoiseSVN would be easier, I suppose, but if that can't be used, then I could also use the command line SVN on my linux machine. A: Here's how to do it on the command line: for file in `find . -name configure`; do svn ps svn:executable yes ${file} done Or for just one file (configure is the filename here): svn ps svn:executable yes configure A: On Unix use {} to adress resulset: find . -type f -name "*.bat" -exec svn propset svn:executable yes '{}' \; Does anyone know why this property requires "yes" as valid argument? Found another example with '' instead of yes, works too... A: find . -type f -name "*.bat" -exec svn propset svn:executable yes "${}" \; Of course the same goes for .exe, etc. A: With tortoise SVN, it's quite easy: you can select several files (may be from search results, so they don't have to be in the same directory), select "properties" in the TortoiseSVN menu, add the needed property (there is a drop-down list of the mostly used properties, in this case "svn:executable") and set the value (in this case "*"). If committing the changed files and checking them out under linux, the executable bit will be set. If you want to set more than one property at once, it may be more secure (in case of mistakes) to first set the properties correctly for one file, export them into a file, select all needed files, select the "properties" menu and import the previously saved properties. A: Method for restoring executable permissions that are lost during svn import: copy permissions from your original source that you used during svn import (current dir to version1): find . -type f | xargs -I {} chmod --reference {} ../version1/{} then set svn:executable for all executables using the following shell script: for file in `find . -executable -type f`; do svn ps svn:executable yes ${file} done
{ "language": "en", "url": "https://stackoverflow.com/questions/103395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50" }
Q: How can you have SharePoint Link Lists default to opening in a new window? In SharePoint, it is easy to set up a List webpart consisting of Links to other documents, folders, sites, etc. Unfortunately, when clicking these links, the default behavior is for the page to open in the current browser window. That is, it does NOT open the page in a new instance of the browser. This has proven annoying for a number of the users on my site. Does anyone know of a way to have the default behavior be to open in a NEW browser window? I'm hoping this is something that can be set in SharePoint rather than having users need to adjust some sort of setting in their browser. A: It is not possible with the default Link List web part, but there are resources describing how to extend SharePoint server-side to add this functionality. Share Point Links Open in New Window Changing Link Lists in Sharepoint 2007 A: You can edit the page in SharePoint Designer and convert the List View web part to an XSLT Data View (by right-clicking + "Convert to XSLT Data View"). Then you can edit the XSLT - find the A tag and add an attribute target="_blank" A: The same is true for SP2010; the Links List web part will not automatically open links in a new window; rather, the user must manually right-click the Link object and select Open in New Window. The add/insert Link option within SP2010 will allow a user to manually configure the link to open in a new window. Maybe the SP2012 release will address this... A: Under the Links Tab ==> Edit the URL Item ==> Under the URL (Type the Web address) - format the value as follows: Example: if the URL = http://www.abc.com ==> then suffix the value with ==> * *#openinnewwindow/,'" target="http://www.abc.com' SO, the final value should read as ==> http://www.abc.com#openinnewwindow/,'" target="http://www.abc.com' DONE ==> this will open the URL in a new window
{ "language": "en", "url": "https://stackoverflow.com/questions/103402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Why Am I Getting Link Errors When Calling Function in Math.h? When attempting to call functions in math.h, I'm getting link errors like the following: undefined reference to sqrt But I'm doing a #include <math.h> I'm using gcc and compiling as follows: gcc -Wall -D_GNU_SOURCE blah.c -o blah Why can't the linker find the definition for sqrt? A: Add -lm to the command when you call gcc: gcc -Wall -D_GNU_SOURCE blah.c -o blah -lm This will tell the linker to link with the math library. Including math.h will tell the compiler that the math functions like sqrt() exist, but they are defined in a separate library, which the linker needs to pack with your executable. As FreeMemory pointed out, the library is called libm.a. On Unix-like systems, the rule for naming libraries is lib[blah].a. Then if you want to link them to your executable you use -l[blah]. A: You need to link the math library explicitly. Add -lm to the flags you're passing to gcc so that the linker knows to link libm.a A: Append -lm to the end of the gcc command to link the math library: gcc -Wall -D_GNU_SOURCE blah.c -o blah -lm For things to be linked properly, the order of the compiler flags matters! Specifically, the -lm should be placed at the end of the line. If you're wondering why the math.h library needs to be included at all when compiling in C, check out this explanation here.
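For completeness, here is a minimal blah.c you could use to verify the link step; the file name simply matches the command lines above, and the program itself is an assumed illustration rather than the asker's code:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 2.0;
    /* sqrt lives in libm on most Unix-like systems, hence the -lm flag */
    printf("sqrt(%f) = %f\n", x, sqrt(x));
    return 0;
}

/* Build with: gcc -Wall -D_GNU_SOURCE blah.c -o blah -lm */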
{ "language": "en", "url": "https://stackoverflow.com/questions/103407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do I flip a sprite/texture in OpenGLES? I have a sprite loaded as a texture and I need to animate it, allowing it to "face" left or right -- essentially sometimes I need to "flip" it. I know that OpenGL has a gltranslate which repositions an object, and glrotate which rotates it. Is there a method that simply flips it across one axis? If not, how would you accomplish this? A: I haven't messed around with point sprites, but I believe that they are textures. Textures have texture matrices, which means you can use glTranslatef(), glScalef() and glRotatef() on them. I would try out something along the lines of glScalef(-1,1,1); which would flip the texture coordinate by the X axis. As I said, I haven't played with point sprites, but I didn't mess with texture matrices either. They do seem quite useful, though. Update: I have played with texture matrices in the meantime. In the same way that you switch between modelview and projection matrices, you can switch to texture matrix; approximately: glMatrixMode(GL_TEXTURE); after which you can do the aforementioned operations. You could also just paint a quad/two triangles and be done with it :) A: You can't do this with OpenGL point-sprites; although you can move the center of the sprite around, the shape of it is always oriented the same way. What you can do is draw your sprites as quads, which lets you flip, rotate and mess with them any way you want. There are tutorials on manually drawing sprites (aka billboards) on NeHe
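Building on the quad/billboard suggestion, here is a minimal fixed-pipeline OpenGL ES 1.x sketch of flipping a sprite by mirroring its texture coordinates; the function name, array layout and surrounding setup (texture bound, GL_TEXTURE_2D enabled, matrices configured) are assumed for illustration:

#include <GLES/gl.h>

static const GLfloat quadVertices[] = {
    -0.5f, -0.5f,    0.5f, -0.5f,
    -0.5f,  0.5f,    0.5f,  0.5f,
};

/* u runs 0 -> 1 for the normal orientation... */
static const GLfloat texNormal[] = {
    0.0f, 1.0f,   1.0f, 1.0f,
    0.0f, 0.0f,   1.0f, 0.0f,
};

/* ...and 1 -> 0 for the horizontally flipped sprite. */
static const GLfloat texFlipped[] = {
    1.0f, 1.0f,   0.0f, 1.0f,
    1.0f, 0.0f,   0.0f, 0.0f,
};

void drawSprite(int faceLeft)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glVertexPointer(2, GL_FLOAT, 0, quadVertices);
    glTexCoordPointer(2, GL_FLOAT, 0, faceLeft ? texFlipped : texNormal);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}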
{ "language": "en", "url": "https://stackoverflow.com/questions/103421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Simple way to parse a person's name into its component parts? A lot of contact management programs do this - you type in a name (e.g., "John W. Smith") and it automatically breaks it up internally into: First name: John Middle name: W. Last name: Smith Likewise, it figures out things like "Mrs. Jane W. Smith" and "Dr. John Doe, Jr." correctly as well (assuming you allow for fields like "prefix" and "suffix" in names). I assume this is a fairly common thing that people would want to do... so the question is... how would you do it? Is there a simple algorithm for this? Maybe a regular expression? I'm after a .NET solution, but I'm not picky. Update: I appreciate that there is no simple solution for this that covers ALL edge cases and cultures... but let's say for the sake of argument that you need the name in pieces (filling out forms - as in, say, tax or other government forms - is one case where you are bound to enter the name into fixed fields, whether you like it or not), but you don't necessarily want to force the user to enter their name into discrete fields (less typing = easier for novice users). You'd want to have the program "guess" (as best it can) on what's first, middle, last, etc. If you can, look at how Microsoft Outlook does this for contacts - it lets you type in the name, but if you need to clarify, there's an extra little window you can open. I'd do the same thing - give the user the window in case they want to enter the name in discrete pieces - but allow for entering the name in one box and doing a "best guess" that covers most common names. A: You probably don't need to do anything fancy really. Something like this should work. Name = Name.Trim(); arrNames = Name.Split(' '); if (arrNames.Length > 0) { GivenName = arrNames[0]; } if (arrNames.Length > 1) { FamilyName = arrNames[arrNames.Length - 1]; } if (arrNames.Length > 2) { MiddleName = string.Join(" ", arrNames, 1, arrNames.Length - 2); } You may also want to check for titles first. A: I had to do this. Actually, something much harder than this, because sometimes the "name" would be "Smith, John" or "Smith John" instead of "John Smith", or not a person's name at all but instead a name of a company. And it had to do it automatically with no opportunity for the user to correct it. What I ended up doing was coming up with a finite list of patterns that the name could be in, like: Last, First Middle-Initial First Last First Middle-Initial Last Last, First Middle First Middle Last First Last Throw in your Mr's, Jr's, there too. Let's say you end up with a dozen or so patterns. My application had a dictionary of common first names, common last names (you can find these on the web), common titles, common suffixes (jr, sr, md), and using that it was able to make really good guesses about the patterns. I'm not that smart, my logic wasn't that fancy, and yet still, it wasn't that hard to create some logic that guessed right more than 99% of the time.
A: Having come to this conversation 10 years late, but still looking for an elegant solution, I read through this thread, and decided to take the path @eselk took, but expand on it: public class FullNameDTO { public string Prefix { get; set; } public string FirstName { get; set; } public string MiddleName { get; set; } public string LastName { get; set; } public string Suffix { get; set; } } public static class FullName { public static FullNameDTO GetFullNameDto(string fullName) { string[] knownPrefixes = { "mr", "mrs", "ms", "miss", "dr", "sir", "madam", "master", "fr", "rev", "atty", "hon", "prof", "pres", "vp", "gov", "ofc" }; string[] knownSuffixes = { "jr", "sr", "ii", "iii", "iv", "v", "esq", "cpa", "dc", "dds", "vm", "jd", "md", "phd" }; string[] lastNamePrefixes = { "da", "de", "del", "dos", "el", "la", "st", "van", "von" }; var prefix = string.Empty; var firstName = string.Empty; var middleName = string.Empty; var lastName = string.Empty; var suffix = string.Empty; var fullNameDto = new FullNameDTO { Prefix = prefix, FirstName = firstName, MiddleName = middleName, LastName = lastName, Suffix = suffix }; // Split on period, commas or spaces, but don't remove from results. var namePartsList = Regex.Split(fullName, "(?<=[., ])").ToList(); #region Clean out the crap. for (var x = namePartsList.Count - 1; x >= 0; x--) { if (namePartsList[x].Trim() == string.Empty) { namePartsList.RemoveAt(x); } } #endregion #region Trim all of the parts in the list for (var x = namePartsList.Count - 1; x >= 0; x--) { namePartsList[x] = namePartsList[x].Trim(); } #endregion #region Only one Name Part - assume a name like "Cher" if (namePartsList.Count == 1) { firstName = namePartsList.First().Replace(",", string.Empty).Trim(); fullNameDto.FirstName = firstName; namePartsList.RemoveAt(0); } #endregion #region Get the Prefix if (namePartsList.Count > 0) { //If we find a prefix, save it and drop it from the overall parts var cleanedPart = namePartsList.First() .Replace(".", string.Empty) .Replace(",", string.Empty) .Trim() .ToLower(); if (knownPrefixes.Contains(cleanedPart)) { prefix = namePartsList[0].Trim(); fullNameDto.Prefix = prefix; namePartsList.RemoveAt(0); } } #endregion #region Get the Suffix if (namePartsList.Count > 0) { #region Scan the full parts list for a potential Suffix foreach (var namePart in namePartsList) { var cleanedPart = namePart.Replace(",", string.Empty) .Trim() .ToLower(); if (!knownSuffixes.Contains(cleanedPart.Replace(".", string.Empty))) { continue; } if (namePart.ToLower() == "jr" && namePart != namePartsList.Last()) { continue; } suffix = namePart.Replace(",", string.Empty).Trim(); fullNameDto.Suffix = suffix; namePartsList.Remove(namePart); break; } #endregion } #endregion //If, strangely, there's nothing else in the overall parts... we're done here. if (namePartsList.Count == 0) { return fullNameDto; } #region Prefix/Suffix taken care of - only one "part" left. if (namePartsList.Count == 1) { //If no prefix, assume first name (e.g. "Cher"), otherwise last (e.g. 
"Dr Jones", "Ms Jones") if (prefix == string.Empty) { firstName = namePartsList.First().Replace(",", string.Empty).Trim(); fullNameDto.FirstName = firstName; } else { lastName = namePartsList.First().Replace(",", string.Empty).Trim(); fullNameDto.LastName = lastName; } } #endregion #region First part ends with a comma else if (namePartsList.First().EndsWith(",") || (namePartsList.Count >= 3 && namePartsList.Any(n => n == ",") && namePartsList.Last() != ",")) { #region Assume format: "Last, First" if (namePartsList.First().EndsWith(",")) { lastName = namePartsList.First().Replace(",", string.Empty).Trim(); fullNameDto.LastName = lastName; namePartsList.Remove(namePartsList.First()); firstName = namePartsList.First(); fullNameDto.FirstName = firstName; namePartsList.Remove(namePartsList.First()); if (!namePartsList.Any()) { return fullNameDto; } foreach (var namePart in namePartsList) { middleName += namePart.Trim() + " "; } fullNameDto.MiddleName = middleName; return fullNameDto; } #endregion #region Assume strange scenario like "Last Suffix, First" var indexOfComma = namePartsList.IndexOf(","); #region Last Name is the first thing in the list if (indexOfComma == 1) { namePartsList.Remove(namePartsList[indexOfComma]); lastName = namePartsList.First().Replace(",", string.Empty).Trim(); fullNameDto.LastName = lastName; namePartsList.Remove(namePartsList.First()); firstName = namePartsList.First(); fullNameDto.FirstName = firstName; namePartsList.Remove(namePartsList.First()); if (!namePartsList.Any()) { return fullNameDto; } foreach (var namePart in namePartsList) { middleName += namePart.Trim() + " "; } fullNameDto.MiddleName = middleName; return fullNameDto; } #endregion #region Last Name might be a prefixed one, like "da Vinci" if (indexOfComma == 2) { var possibleLastPrefix = namePartsList.First() .Replace(".", string.Empty) .Replace(",", string.Empty) .Trim() .ToLower(); if (lastNamePrefixes.Contains(possibleLastPrefix)) { namePartsList.Remove(namePartsList[indexOfComma]); var lastPrefix = namePartsList.First().Trim(); namePartsList.Remove(lastPrefix); lastName = $"{lastPrefix} {namePartsList.First().Replace(",", string.Empty).Trim()}"; fullNameDto.LastName = lastName; namePartsList.Remove(namePartsList.First()); } else { lastName = namePartsList.First().Replace(",", string.Empty).Trim(); namePartsList.Remove(namePartsList.First()); lastName = lastName + " " + namePartsList.First().Replace(",", string.Empty).Trim(); namePartsList.Remove(namePartsList.First()); fullNameDto.LastName = lastName; } namePartsList.Remove(","); firstName = namePartsList.First(); fullNameDto.FirstName = firstName; namePartsList.Remove(namePartsList.First()); if (!namePartsList.Any()) { return fullNameDto; } foreach (var namePart in namePartsList) { middleName += namePart.Trim() + " "; } fullNameDto.MiddleName = middleName; return fullNameDto; } #endregion #endregion } #endregion #region Everything else else { if (namePartsList.Count >= 3) { firstName = namePartsList.First().Replace(",", string.Empty).Trim(); fullNameDto.FirstName = firstName; namePartsList.RemoveAt(0); //Check for possible last name prefix var possibleLastPrefix = namePartsList[namePartsList.Count - 2] .Replace(".", string.Empty) .Replace(",", string.Empty) .Trim() .ToLower(); if (lastNamePrefixes.Contains(possibleLastPrefix)) { lastName = $"{namePartsList[namePartsList.Count - 2].Trim()} {namePartsList[namePartsList.Count -1].Replace(",", string.Empty).Trim()}"; fullNameDto.LastName = lastName; namePartsList.RemoveAt(namePartsList.Count - 1); 
namePartsList.RemoveAt(namePartsList.Count - 1); } else { lastName = namePartsList.Last().Replace(",", string.Empty).Trim(); fullNameDto.LastName = lastName; namePartsList.RemoveAt(namePartsList.Count - 1); } middleName = string.Join(" ", namePartsList).Trim(); fullNameDto.MiddleName = middleName; namePartsList.Clear(); } else { if (namePartsList.Count == 1) { lastName = namePartsList.First().Replace(",", string.Empty).Trim(); fullNameDto.LastName = lastName; namePartsList.RemoveAt(0); } else { var possibleLastPrefix = namePartsList.First() .Replace(".", string.Empty) .Replace(",", string.Empty) .Trim() .ToLower(); if (lastNamePrefixes.Contains(possibleLastPrefix)) { lastName = $"{namePartsList.First().Replace(",", string.Empty).Trim()} {namePartsList.Last().Replace(",", string.Empty).Trim()}"; fullNameDto.LastName = lastName; namePartsList.Clear(); } else { firstName = namePartsList.First().Replace(",", string.Empty).Trim(); fullNameDto.FirstName = firstName; namePartsList.RemoveAt(0); lastName = namePartsList.Last().Replace(",", string.Empty).Trim(); fullNameDto.LastName = lastName; namePartsList.Clear(); } } } } #endregion namePartsList.Clear(); fullNameDto.Prefix = prefix; fullNameDto.FirstName = firstName; fullNameDto.MiddleName = middleName; fullNameDto.LastName = lastName; fullNameDto.Suffix = suffix; return fullNameDto; } } This will handle quite a few different scenarios, and I've written out (thus far) over 50 different unit tests against it to make sure. Props again to @eselk for his ideas that helped in my writing an expanded version of his excellent solution. And, as a bonus, this also handles the strange instance of a person named "JR". A: If you must do this parsing, I'm sure you'll get lots of good suggestions here. My suggestion is - don't do this parsing. Instead, create your input fields so that the information is already separated out. Have separate fields for title, first name, middle initial, last name, suffix, etc. A: I appreciate that this is hard to do right - but if you provide the user a way to edit the results (say, a pop-up window to edit the name if it didn't guess right) and still guess "right" for most cases... of course it's the guessing that's tough. It's easy to say "don't do it" when looking at the problem theoretically, but sometimes circumstances dictate otherwise. Having fields for all the parts of a name (title, first, middle, last, suffix, just to name a few) can take up a lot of screen real estate - and combined with the problem of the address (a topic for another day) can really clutter up what should be a clean, simple UI. I guess the answer should be "don't do it unless you absolutely have to, and if you do, keep it simple (some methods for this have been posted here) and provide the user the means to edit the results if needed." A: Understanding this is a bad idea, I wrote this regex in perl - here's what worked the best for me. I had already filtered out company names. Output in vcard format: (hon_prefix, given_name, additional_name, family_name, hon. suffix) /^ \s* (?:((?:Dr.)|(?:Mr.)|(?:Mr?s.)|(?:Miss)|(?:2nd\sLt.)|(?:Sen\.?))\s+)? # prefix ((?:\w+)|(?:\w\.)) # first name (?: \s+ ((?:\w\.?)|(?:\w\w+)) )? # middle initial (?: \s+ ((?:[OD]['’]\s?)?[-\w]+)) # last name (?: ,? \s+ ( (?:[JS]r\.?) | (?:Esq\.?) | (?: (?:M)|(?:Ph)|(?:Ed) \.?\s*D\.?) | (?: R\.?N\.?) | (?: I+) ) )? # suffix \s* $/x notes: * *doesn't handle IV, V, VI *Hard-coded lists of prefixes, suffixes. evolved from dataset of ~2K names *Doesn't handle multiple suffixes (eg. 
MD, PhD) *Designed for American names - will not work properly on romanized Japanese names or other naming systems A: The real solution here does not answer the question. The portent of the information must be observed. A name is not just a name; it is how we are known. The problem here is not knowing exactly what parts are labeled what, and what they are used for. Honorable prefixes should be granted only in personal correspondence; Doctor is an honorific that is derived from a title. All information about a person is relevant to their identity; it is just a matter of determining what is relevant information. You need a first and last name for reasons of administration; phone number, email addresses, land descriptions and mailing addresses; all to the portent of identity, knowing who you are dealing with. The real problem here is that the person gets lost in the administration. All of a sudden, after only entering their personal information into a form and submitting it to an arbitrary program for processing, they become afforded all sorts of honorifics and pleasantries spewed out by a prefabricated template. This is wrong; honorable Sir or Madam, if personal interest is shown toward the reason of correspondence, then a letter should never be written from a template. Personal correspondence requires a little knowledge about the recipient. Male or female, went to school to be a doctor or judge, what culture they were raised in. In other cultures, a name is made up from a variable number of characters. The person's name to us can only be interpreted as a string of numbers where the spaces are actually determined by character width instead of the space character. Honorifics in these cases are instead one or many characters prefixing and suffixing the actual name. The polite thing to do is use the string you are given; if you know the honorific then by all means use it, but this again implies some sort of personal knowledge of the recipient. Calling Sensei anything other than Sensei is wrong. Not in the sense of a logic error, but in that you have just insulted your caller, and now you should find a template that helps you apologize. For the purposes of automated, impersonal correspondence, a template may be devised for such things as daily articles, weekly issues or whatever, but the problem becomes important when correspondence is instigated by the recipient to an automated service. What happens is an error. Missing information. Unknown or missing information will always generate an Exception. The real problem is not how you separate a person's name into its separate components with an expression, but what you call them. The solution is to create an extra field, make it optional if there is already a first and last name, and call it "What may we call you" or "How should we refer to you". A doctor and a judge will ensure you address them properly. These are not programming issues; they are issues of communication. OK, bad way to put it, but in my opinion, Username, Tagname, and ID are worse. So my solution is the missing question, "What should we call you?" This is only a solution where you can afford to add a new question. Tact prevails. Create a new field on your user forms, call it Alias, label it for the user "What should we call you?", and then you have a means to communicate. Use the first and last name unless the recipient has given an Alias, or is personally familiar with the sender, in which case first and middle is acceptable.
To Me, _______________________ (standard subscribed correspondence) To Me ( Myself | I ), ________ (standard recipient instigated correspondence) To Me Myself I, ______________ (look out, it's your mother, and you're in big trouble; nobody addresses a person by their actual full name) Dear *(Mr./Mrs./Ms./Dr./Hon./Sen.) Me M. I *(I), To Whom it may Concern; Otherwise you are looking for something standard: hello, greetings, you may be a winner. Where you have data that is a person's name all in one string, you don't have a problem because you already have their alias. If what you need is the first and last name, then just Left(name,instr(name," ")) & " " & Right(name,instrrev(name," ")), my math is probably wrong, I'm a bit out of practice. Compare left and right with known prefixes and suffixes and eliminate them from your matches. Generally the middle name is rarely used except for instances of confirming an identity, and an address or phone number tells you a lot more. Watching for hyphenation, one can determine that if the last name is not used, then one of the middle ones would be instead. For searching lists of first and last names, one must consider the possibility that one of the middle ones was instead used; this would require four searches: one to filter for first & last, then another to filter first & middle, then another to filter middle & last, and then another to filter middle & middle. Ultimately, the first name is always first, and the last is always last, and there can be any number of middle names; less is more, and where zero is likely, but improbable. Sometimes people prefer to be called Bill, Harry, Jim, Bob, Doug, Beth, Sue, or Madonna rather than their actual names; similar, but it is unrealistic to expect anyone to fathom all the different possibilities. The most polite thing you could do is ask: What can we call you? A: There are a few add-ins we have used in our company to accomplish this. I ended up creating a way to actually specify the formats for the name on our different imports for different clients. There is a company that has a tool that in my experience is well worth the price and is really incredible when tackling this subject. It's at: http://www.softwarecompany.com/ and works great. The most efficient way to do this without using any statistical approach is to split the string by commas or spaces, then: 1. strip titles and prefixes out 2. strip suffixes out 3. parse the name in the order of ( 2 names = F & L, 3 names = F M L or L M F) depending on the order of the string. A: I know this is old and might be answered somewhere I couldn't find already, but since I couldn't find anything that works for me, this is what I came up with, which I think works a lot like Google Contacts and Microsoft Outlook. It doesn't handle edge cases well, but for a good CRM type app, the user can always be asked to resolve those (in my app I actually have separate fields all the time, but I need this for data import from another app that only has one field): public static void ParseName(this string s, out string prefix, out string first, out string middle, out string last, out string suffix) { prefix = ""; first = ""; middle = ""; last = ""; suffix = ""; // Split on period, commas or spaces, but don't remove from results.
List<string> parts = Regex.Split(s, @"(?<=[., ])").ToList(); // Remove any empty parts for (int x = parts.Count - 1; x >= 0; x--) if (parts[x].Trim() == "") parts.RemoveAt(x); if (parts.Count > 0) { // Might want to add more to this list string[] prefixes = { "mr", "mrs", "ms", "dr", "miss", "sir", "madam", "mayor", "president" }; // If first part is a prefix, set prefix and remove part string normalizedPart = parts.First().Replace(".", "").Replace(",", "").Trim().ToLower(); if (prefixes.Contains(normalizedPart)) { prefix = parts[0].Trim(); parts.RemoveAt(0); } } if (parts.Count > 0) { // Might want to add more to this list, or use code/regex for roman-numeral detection string[] suffixes = { "jr", "sr", "i", "ii", "iii", "iv", "v", "vi", "vii", "viii", "ix", "x", "xi", "xii", "xiii", "xiv", "xv" }; // If last part is a suffix, set suffix and remove part string normalizedPart = parts.Last().Replace(".", "").Replace(",", "").Trim().ToLower(); if (suffixes.Contains(normalizedPart)) { suffix = parts.Last().Replace(",", "").Trim(); parts.RemoveAt(parts.Count - 1); } } // Done, if no more parts if (parts.Count == 0) return; // If only one part left... if (parts.Count == 1) { // If no prefix, assume first name, otherwise last // i.e.- "Dr Jones", "Ms Jones" -- likely to be last if(prefix == "") first = parts.First().Replace(",", "").Trim(); else last = parts.First().Replace(",", "").Trim(); } // If first part ends with a comma, assume format: // Last, First [...First...] else if (parts.First().EndsWith(",")) { last = parts.First().Replace(",", "").Trim(); for (int x = 1; x < parts.Count; x++) first += parts[x].Replace(",", "").Trim() + " "; first = first.Trim(); } // Otherwise assume format: // First [...Middle...] Last else { first = parts.First().Replace(",", "").Trim(); last = parts.Last().Replace(",", "").Trim(); for (int x = 1; x < parts.Count - 1; x++) middle += parts[x].Replace(",", "").Trim() + " "; middle = middle.Trim(); } } Sorry that the code is long and ugly, I haven't gotten around to cleaning it up. It is a C# extension, so you would use it like: string name = "Miss Jessica Dark-Angel Alba"; string prefix, first, middle, last, suffix; name.ParseName(out prefix, out first, out middle, out last, out suffix); A: There is no simple solution for this. Name construction varies from culture to culture, and even in the English-speaking world there's prefixes and suffixes that aren't necessarily part of the name. A basic approach is to look for honorifics at the beginning of the string (e.g., "Hon. John Doe") and numbers or some other strings at the end (e.g., "John Doe IV", "John Doe Jr."), but really all you can do is apply a set of heuristics and hope for the best. It might be useful to find a list of unprocessed names and test your algorithm against it. I don't know that there's anything prepackaged out there, though. A: You can do the obvious things: look for Jr., II, III, etc. as suffixes, and Mr., Mrs., Dr., etc. as prefixes and remove them, then first word is first name, last word is last name, everything in between are middle names. Other than that, there's no foolproof solution for this. A perfect example is David Lee Roth (last name: Roth) and Eddie Van Halen (last name: Van Halen). If Ann Marie Smith's first name is "Ann Marie", there's no way to distinguish that from Ann having a middle name of Marie. A: If you simply have to do this, add the guesses to the UI as an optional selection. 
This way, you could tell the user how you parsed the name and let them pick a different parsing from a list you provide. A: There is a 3rd party tool for this kind of thing called NetGender that works surprisingly well. I used it to parse a massive amount of really mal-formed names in unpredictable formats. Take a look at the examples on their page, and you can download and try it as well. http://www.softwarecompany.com/dotnet/netgender.htm I came up with these statistics based on a sampling of 4.2 million names. Name Parts means the number of distinct parts separated by spaces. A very high percentage were correct for most names in the database. The correctness went down as the parts went up, but there were very few names with >3 parts and fewer with >4. This was good enough for our case. Where the software fell down was recognizing not-well-known multi-part last names, even when separated by a comma. If it was able to decipher this, then the number of mistakes in total would have been less than 1% for all data. Name Parts | Correct | Percent of Names in DB 2 100% 48% 3 98% 42% 4 70% 9% 5 45% 0.25% A: I already do this server-side on page load. Wrote a ColdFusion CFC that gets two params passed to it - actual user data (first name, middle, last name) and data type (first, middle, last). Then the routine checks for hyphens, apostrophes, spaces and formats accordingly. ex. MacDonald, McMurray, O'Neill, Rodham-Clinton, Eric von Dutch, G. W. Bush, Jack Burton Jr., Dr. Paul Okin, Chris di Santos. For the case where users only have one name, only the first name field is required; middle and last names are optional. All info is stored lowercase - except Prefix, Suffix and Custom. This formatting is done on page render, not during store to db. Though there is validation filtering when the user inputs data. Sorry, cannot post code. Started out using Regex but it became too confusing and unreliable for all scenarios. Used standard logic blocks (if/else, switch/case), easier to read and debug. MAKE EVERY INPUT/DB FIELD SEPARATE! Yes, this will take some coding, but after you are finished it should account for 99% of combinations. Only based on English names so far, no internationalization, that's another ball of wax. Here are some things to consider: * *Hyphens (ex. Rodham-Clinton, could be in first, middle or last) *Apostrophes (ex. O'Neill, could be in first, middle or last) *Spaces *Mc and Mac (ex. McDonald, MacMurray, could be in first, middle or last) *First names: multiple first names (ex. Joe Bob Briggs) *Last names: de,di,et,der,den,van,von,af should be lowercase (ex Eric von Dander, Mary di Carlo) *Prefix: Dr., Prof., etc *Suffix: Jr., Sr., Esq., II, III, etc When the user enters info, the field schema in the db is like so: * *Prefix/Title (Dr., etc using a dropdown) *Prefix/Title Custom (user can enter custom, ex. Capt. using a text field) *First Name *Middle *Last Name *Suffix (Jr., III, Prof., Ret., etc using a dropdown) *Suffix Custom (user can enter custom, ex. CPA) Here's the one Regex I do use to make the first letter of each name uppercase. I run this first, then the following routines format according to rules (it's in ColdFusion format but you get the idea): <cfset var nameString = REReplace(LCase(nameString), "(^[[:alpha:]]|[[:blank:]][[:alpha:]])", "\U\1\E", "ALL")> You could also do this client-side using JavaScript and CSS - might even be easier - but I prefer to do it server-side since I need the variables set before the page loads client-side.
A: I would say: strip out salutations from a list, then split by space, placing list.first() as the first name and list.last() as the last name, then join the remainder by a space and have that as a middle name. And ABOVE ALL display your results and let the user modify them!

A: Sure, there is a simple solution - split the string by spaces and count the number of tokens; if there are 2, interpret them to be FIRST and LAST name, if there are 3, interpret it to be FIRST, MIDDLE, and LAST. The problem is that the simple solution will not be a 100% correct solution - someone could always enter a name with many more tokens, or could include titles, last names with a space in them (is this possible?), etc. You can come up with a solution that works for most names most of the time, but not an absolute solution. I would follow Shad's recommendation to split the input fields.

A: You don't want to do this, unless you are only going to be contacting people from one culture. For example: Guido van Rossum's last name is van Rossum. MIYAZAKI Hayao's first name is Hayao. The best you can do is to strip off common titles and salutations, and try some heuristics. Even so, the easiest solution is to just store the full name, or ask for given and family name separately.

A: This is a fool's errand. There are too many exceptions to be able to do this deterministically. If you were doing this to pre-process a list for further review I would contend that less would certainly be more.

* Strip out salutations, titles and generational suffixes (big regex, or several small ones)
* If only one name, it is 'last'.
* If only two names, split them first, last.
* If three tokens and the middle is an initial, split them first, middle, last.
* Sort the rest by hand.

Any further processing is almost guaranteed to create more work, as you have to go through recombining what your processing split up.

A: I agree, there's no simple solution for this. But I found an awful approach in a Microsoft KB article for VB 5.0 that is an actual implementation of much of the discussion here: http://support.microsoft.com/kb/168799 Something like this could be used in a pinch.

A: There is no 100% way to do this. You can split on spaces, and try to understand the name all you want, but when it comes down to it, you will get it wrong sometimes. If that is good enough, go for any of the answers here that give you ways to split. But some people will have a name like "John Wayne Olson", where "John Wayne" is the first name, and someone else will have a name like "John Wayne Olson" where "Wayne" is their middle name. There is nothing present in that name that will tell you which way to interpret it. That's just the way it is. It's an analogue world. My rules are pretty simple:

* Take the last part --> Last Name
* If there are multiple parts left, take the last part --> Middle name
* What is left --> First name

But don't assume this will be 100% accurate, nor will any other hardcoded solution. You will need to have the ability to let the user edit this him/her-self.

A: I did something similar. The main problem I had was when people entered stuff like "Richard R. Smith, Jr." I posted my code at http://www.blackbeltcoder.com/Articles/strings/splitting-a-name-into-first-and-last-names. It's in C# but could easily be converted to VB.

A: A simpler way: install the HumanNameParser NuGet package:

Install-Package HumanNameParser

And call the Parse extension method.
string name = "Mr Ali R Von Bayat III";
var result = name.Parse();
//result = new Name()
//{
//    Salutation = "Mr",
//    FirstName = "Ali",
//    MiddleInitials = "R",
//    LastName = "Von Bayat",
//    Suffix = "III"
//};

A: I agree with not doing this. The name Rick Van DenBoer would end up with a middle name of Van, but it's part of the last name.
{ "language": "en", "url": "https://stackoverflow.com/questions/103422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32" }
Q: What are the best SQL Server performance optimization techniques? I've always taken the approach of first deploying the database with a minimal set of indexes and then adding/changing indexes as performance dictates. This approach works reasonably well. However, it still doesn't tell me where I could improve performance. It only tells me where performance is so bad that users complain about it. Currently, I'm in the process of refactoring database objects on a lot of our applications. So should I not bother to look for performance improvements since "premature optimization is the root of all evil"? When refactoring application code, the developer is constantly looking for ways to improve the code quality. Is there a way to constantly be looking for improvements in database performance as well? If so, what tools and techniques have you found to be most helpful? I've briefly played around with the "Database engine tuning advisor" but didn't find it to be helpful at all. Maybe I just need more experience interpreting the results. A: SQL Server Execution Plan!!! Go here: http://dbalink.wordpress.com/2008/08/08/dissecting-sql-server-execution-plans-free-ebook/ A: After you profile, put the queries you see as troublesome into SQL Query Analyzer and display the execution plan. Identify portions of the queries that are performing costly table scans and re-index those tables to minimize this cost. Try these references: Optimizing SQL How to Optimize Queries A: My approach is to gather commands against the server or database into a table using SQL Server Profiler. Once you have that, you can query based on the max and avg execution times, max and avg cpu times, and (also very important) the number of times that the query was run. Since I try to put all database access code in stored procedures it's easy for me to break out queries. If you use inline SQL it might be harder, since a change to a value in the query would make it look like a different query. You can try to work around this using the LIKE operator to put the same types of queries into the same buckets for calculating the aggregates (max, avg, count). Once you have a "top 10" list of potential problems you can start looking at them individually to see if either the query can be reworked, an index might help, or making a minor architecture change is in order. To come up with the top 10, try looking at the data in different ways: avg * count for total cost during the period, max for worst offender, just plain avg, etc. Finally, be sure to monitor over different time periods if necessary. The database usage might be different in the morning when everyone is getting in and running their daily reports than it is at midday when users are entering new data. You may also decide that even though some nightly process takes longer than any other query it doesn't matter since it's run during off hours. Good luck! A: "premature optimization is the root of all evil" In terms of database programming, I think this quote is nonsense. It is extremely expensive to re-write your whole application because your developers don't care to write efficient code the first time. All t-sql code should be thought of in terms of how it will affect database performance second (data integrity is, of course, first). Perfomance should trump everything except data integrity. Yes, there are optimization things you shouldn't do until you have issues, but some things should be done as a matter of course and not fixed later. 
Once you understand how you are affecting efficiency with bad code, it takes no more time to write code that has a better chance of being efficient than code which will not be. Cervo's discussion of cursor code is one example. Set-based actions are almost always much faster than cursor solutions, so cursors should not ever be written initially when a set-based solution will do. It almost always takes me less time to write a set-based solution than it would to write a cursor, but the only way to get that way is to never write cursors. And there is no reason to ever use select * instead of specifying your field names. In MSSQL you can drag those names over from the object explorer, so you can't tell me it's too hard to do that. But by specifying only the fields you actually need, you save network resources, database server resources and web server resources. So why should a programmer ever take the lazy option of select * and worry about optimizing later? The same thing with indexes. You say you do a minimal set of indexes. Depending on how you define minimal, that could be ok, but it is critical to have indexes on all foreign keys, and I wouldn't want to push a database that didn't have indexes on a few fields that are most often in the where clauses. If your users are outside clients and not internal, they won't complain about how slow your site is, they will go elsewhere. It only makes business sense to plan for efficient database access from the start. One of my main concerns about failing to consider efficiency from the beginning is that the first couple of times that things are too slow, companies tend to just throw more equipment at the issue rather than performance tune. By the time people start performance tuning you have a several-gigabyte or larger database with many unhappy customers who are getting timeouts more than results. At this point, often almost everything in the database has to be re-written, and in the meantime you are losing customers. I remember providing support at one company with a commercial application where it literally took ten minutes for the customer service reps to move from one screen to another while they were trying to help already disgruntled customers on the phone. You can imagine how many customers the company lost due to poorly designed database queries in the commercial product that we could not change.

A: Profile your queries - not the obvious ones, but the complex ones that access different tables, views, etc. and/or the ones that return many rows from different tables. That will tell you exactly where you should focus.

A: Profiling is key, but when using a profiling set you MUST be sure that it is an accurate test set of data, otherwise the tuning tools will not be able to get you the accurate result that is needed. Also, the management objects with fragmentation and usage reporting in 2005 are very helpful!

A: Of course you have to profile your queries and look at the execution plan. But the two main things that come up over and over again are: filter out as much as you can as soon as you can, and try to avoid cursors. I saw an application where someone downloaded an entire database table of events to a client and then went through each row one by one, filtering based on some criteria. There was a HUGE performance increase in passing the filter criteria to the database and having the query apply the criteria in a where clause. This is obvious to people who work with databases, but I have seen similar things crop up.
Also some people have queries that store a bunch of temp tables full of rows that they don't need which are then eliminated in a final join of the temp tables. Basically if you eliminate from the queries that populate the temp tables then there is less data for the rest of the query and the whole query runs faster. Cursors are obvious. If you have a million rows and go row by row then it will take forever. Doing some tests, if you connect to a database even with a "slow" dynamic language like Perl and perform some row by row operation on a dataset, the speed will still be much greater than a cursor in the database. Do it with something like Java/C/C++ and the speed difference is even bigger. If you can find/eliminate a cursor in the database code, it will run much faster... If you must use a cursor, rewriting that part in any programming language and getting it out of the database will probably yield huge performance increases. One more note on cursors, beware code like SELECT @col1 = col1, @col2 = col2, @col3 = col3 where id = @currentid in a loop that goes through IDs and then executes statements on each column. Basically this is a cursor as well. Not only that but using real cursors is often faster than this, especially static and forward_only. If you can change the operation to be set based it will be much faster.....That being said, cursors have a place for some things....but from a performance perspective there is a penalty to using them over set based approaches. Also beware the execution plan. Sometimes it estimates operations that take seconds to be very expensive and operations that take minutes to be very cheap. When viewing an execution plan make sure to check everything by maybe inserting some SELECT 'At this area', GETDATE() into your code. A: My advice is that "premature optimization is the root of all evil" in this context is absoulte nonsense. In my view its all about design - you need to think about concurrency, hotspots, indexing, scaling and usage patterns when you are designing your data schema. If you don't know what indexes you need and how they need to be configured right off the bat without doing profiling you have already failed. There are millions of ways to optimize query execution that are all well and good but at the end of the day the data lands where you tell it to. A: Apply proper indexing in the table columns in the database * *Make sure that every table in your database has a primary key. This will ensure that every table has a clustered index created (and hence, the corresponding pages of the table are physically sorted in the disk according to the primary key field). So, any data retrieval operation from the table using the primary key, or any sorting operation on the primary key field or any range of primary key values specified in the where clause will retrieve data from the table very fast. * *Create non-clustered indexes on columns which are Frequently used in the search criteria. Used to join other tables. Used as foreign key fields. Of having high selectivity (column which returns a low percentage (0-5%) of rows from a total number of rows on a particular value). Used in the ORDER BY clause. Don't use "SELECT*" in a SQL query Unnecessary columns may get fetched that will add expense to the data retrieval time. The database engine cannot utilize the benefit of "Covered Index" and hence the query performs slowly. 
Example: SELECT Cash, Age, Amount FROM Investments; Instead of: SELECT * FROM Investments; Try to avoid HAVING Clause in Select statements HAVING clause is used to filter the rows after all the rows are selected and is used like a filter. Try not to use HAVING clause for any other purposes. Example: SELECT Name, count (Name) FROM Investments WHERE Name!= ‘Test’ AND Name!= ‘Value’ GROUP BY Name; Instead of: SELECT Name, count (Name) FROM Investments GROUP BY Name HAVING Name!= ‘Test’ AND Name!= ‘Value’ ; Try to minimize number of sub query blocks within a query Sometimes we may have more than one sub query in our main query. We should try to minimize the number of sub query block in our query. Example: SELECT Amount FROM Investments WHERE (Cash, Fixed) = (SELECT MAX (Cash), MAX (Fixed) FROM Retirements) AND Goal = 1; Instead of: SELECT Amount FROM Investments WHERE Cash = (SELECT MAX (Cash) FROM Retirements) AND Fixed = (SELECT MAX (Fixed) FROM Retirements) AND Goal = 1; Avoid unnecessary columns in the SELECT list and unnecessary tables in join conditions Selecting unnecessary columns in a Select query adds overhead to the actual query, especially if the unnecessary columns are of LOB types. Including unnecessary tables in join conditions forces the database engine to retrieve and fetch unnecessary data and increases the query execution time. Do not use the COUNT() aggregate in a subquery to do an existence check When you use COUNT(), SQL Server does not know that you are doing an existence check. It counts all matching values, either by doing a table scan or by scanning the smallest non-clustered index. When you use EXISTS, SQL Server knows you are doing an existence check. When it finds the first matching value, it returns TRUE and stops looking. Try to avoid joining between two types of columns When joining between two columns of different data types, one of the columns must be converted to the type of the other. The column whose type is lower is the one that is converted. If you are joining tables with incompatible types, one of them can use an index, but the query optimizer cannot choose an index on the column that it converts. Try not to use COUNT(*) to obtain the record count in a table To get the total row count in a table, we usually use the following Select statement: SELECT COUNT(*) FROM [dbo].[PercentageForGoal] This query will perform a full table scan to get the row count. The following query would not require a full table scan. (Please note that this might not give you 100% perfect results always, but this is handy only if you don't need a perfect count.) SELECT rows FROM sysindexes WHERE id = OBJECT_ID('[dbo].[PercentageForGoal]') AND indid< 2 Try to use operators like EXISTS, IN and JOINS appropriately in your query * *Usually IN has the slowest performance. *IN is efficient, only when most of the filter criteria for selection are placed in the sub-query of a SQL statement. *EXISTS is efficient when most of the filter criteria for selection is in the main query of a SQL statement. Try to avoid dynamic SQL Unless really required, try to avoid the use of dynamic SQL because: Dynamic SQL is hard to debug and troubleshoot. If the user provides the input to the dynamic SQL, then there is a possibility of SQL injection attacks. Try to avoid the use of temporary tables Unless really required, try to avoid the use of temporary tables. Rather use table variables. In 99% of cases, table variables reside in memory, hence it is a lot faster. 
Temporary tables reside in the TempDb database. So operating on temporary tables require inter database communication and hence will be slower. Instead of LIKE search, use full text search for searching textual data Full text searches always outperform LIKE searches. Full text searches will enable you to implement complex search criteria that can't be implemented using a LIKE search, such as searching on a single word or phrase (and optionally, ranking the result set), searching on a word or phrase close to another word or phrase, or searching on synonymous forms of a specific word. Implementing full text search is easier to implement than LIKE search (especially in the case of complex search requirements). Try to use UNION to implement an "OR" operation Try not to use "OR" in a query. Instead use "UNION" to combine the result set of two distinguished queries. This will improve query performance. Better use UNION ALL if a distinguished result is not required. UNION ALL is faster than UNION as it does not have to sort the result set to find out the distinguished values. Implement a lazy loading strategy for large objects Store Large Object columns (like VARCHAR(MAX), Image, Text etc.) in a different table than the main table, and put a reference to the large object in the main table. Retrieve all the main table data in a query, and if a large object is required to be loaded, retrieve the large object data from the large object table only when it is required. Implement the following good practices in User Defined Functions Do not call functions repeatedly within your Stored Procedures, triggers, functions, and batches. For example, you might need the length of a string variable in many places of your procedure, but don't call the LEN function whenever it's needed; instead, call the LEN function once, and store the result in a variable for later use. Implement the following good practices in Triggers * *Try to avoid the use of triggers. Firing a trigger and executing the triggering event is an expensive process. *Never use triggers that can be implemented using constraints. *Do not use the same trigger for different triggering events (Insert, Update and Delete). *Do not use transactional code inside a trigger. The trigger always runs within the transactional scope of the code that fires the trigger. A: It seems that you're talking about MS SQL. Start the profiler and record tehe most common queries you run on the database. Then run those queries with the Execution Plan turned on and you will see what (if anything) is slowing your queries down. You could then go and optimize the queries or add more indexes on your fields. SQL Books will give you a good overview of both profiling and query analysis functionality. A: You might want to check internal and external framentation of current indexes and either drop and re-create them or re organize them. A: Make sure you are profiling using production volumes - in terms of number of rows and load. The queries and their plans behave differently under different load/volume scenarios A: Generally, the tips here: http://www.sql-server-performance.com/ have been high quality and useful for me in the past. A: My advice would be to start with techniques applicable to all databases and then try the ones specific to MsSQL. Optimizing SQL is difficult, and there are no hard and fast rules. 
There are very few generic guidelines that you can follow, such as: * *95% of performance improvements will come from the application, not from server or database engine configuration. *Design for correctness first, tweak for performance later *Reduce trips to the database *Try to express things in a way that fits your data model *Ignore generic advice about performance - yes, at some point you'll find a system or SQL statement where one of those rules does not apply. But the key point is that you should always apply the 80-20 rule. Which means that in any system you need to tweak 20% (often much less) of your code for the biggest performance gains. That's where the vendor provided tools usually fail, as they cannot usually guess the application/business context of execution.
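As a rough illustration of the "reduce trips to the database" guideline above, the sketch below batches two result sets into a single round trip with ADO.NET instead of issuing two separate calls. It is only a sketch: the connection string, table and column names are placeholders, not anything from this thread.

using System.Data.SqlClient;

// One command, two SELECTs, one network round trip.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "SELECT CustomerId, Name FROM dbo.Customers WHERE Region = @r; " +
    "SELECT OrderId, CustomerId, Total FROM dbo.Orders WHERE Region = @r;", conn))
{
    cmd.Parameters.AddWithValue("@r", region);
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read()) { /* read customers */ }
        reader.NextResult();            // move to the second result set
        while (reader.Read()) { /* read orders */ }
    }
}

The same idea applies to a stored procedure that returns several result sets; the point is simply that one round trip carrying more data is usually cheaper than many small ones.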
{ "language": "en", "url": "https://stackoverflow.com/questions/103423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Displaying Window on Logon Screen Using C# in Windows XP I am trying to create a service with C# that launches a process that can be displayed on the Windows XP logon screen. I found some code that is doing this in C++. The C++ code is for a service that creates another process with STARTUPINFO.lpDesktop set to "WinSta0\WinLogon". The created process is then displayed on the Windows logon screen. I can't seem to find a way to specify the 'desktop' of a new process in C# using the System.Diagnostics.Process class. Does anyone know how to do this with C#? A: The solution was to call the Win32 API function CreateProcess from kernel32.dll from the C# code. This site was very helpful in getting the correct function signature for C#: http://www.pinvoke.net/default.aspx/kernel32/CreateProcess.html A: I think you'll have to write it in C++, compile that to a DLL and then call the DLL from your managed code.
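For reference, here is a rough C# sketch of the approach the first answer describes - P/Invoking CreateProcess with STARTUPINFO.lpDesktop pointed at the logon desktop. The declarations are abridged from the pinvoke.net page linked above, so verify the full signatures there; it also assumes the calling service runs as LocalSystem on Windows XP, where services still share session 0 with the logon desktop.

using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)]
struct STARTUPINFO
{
    public int cb;
    public string lpReserved;
    public string lpDesktop;
    public string lpTitle;
    public int dwX, dwY, dwXSize, dwYSize;
    public int dwXCountChars, dwYCountChars;
    public int dwFillAttribute, dwFlags;
    public short wShowWindow, cbReserved2;
    public IntPtr lpReserved2, hStdInput, hStdOutput, hStdError;
}

[StructLayout(LayoutKind.Sequential)]
struct PROCESS_INFORMATION
{
    public IntPtr hProcess, hThread;
    public int dwProcessId, dwThreadId;
}

static class LogonScreenLauncher
{
    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Auto)]
    static extern bool CreateProcess(
        string lpApplicationName, string lpCommandLine,
        IntPtr lpProcessAttributes, IntPtr lpThreadAttributes,
        bool bInheritHandles, uint dwCreationFlags,
        IntPtr lpEnvironment, string lpCurrentDirectory,
        ref STARTUPINFO lpStartupInfo,
        out PROCESS_INFORMATION lpProcessInformation);

    // exePath is whatever program the service should show on the logon screen (hypothetical parameter).
    public static void Launch(string exePath)
    {
        var si = new STARTUPINFO();
        si.cb = Marshal.SizeOf(typeof(STARTUPINFO));
        si.lpDesktop = @"WinSta0\WinLogon";   // target the logon desktop

        PROCESS_INFORMATION pi;
        if (!CreateProcess(null, exePath, IntPtr.Zero, IntPtr.Zero,
                           false, 0, IntPtr.Zero, null, ref si, out pi))
        {
            throw new System.ComponentModel.Win32Exception(Marshal.GetLastWin32Error());
        }
    }
}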
{ "language": "en", "url": "https://stackoverflow.com/questions/103427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: LSL communications Years ago I created a programming collaboratory in Diversity University MOO -- a room written in MOOcode that used TCP/IP to communicate with a perl server back at my campus to compile and execute C, Perl, Bash and other programs and return results to the MOO collaboratory -- all for demonstrating programming languages in a MOO teaching environment. The application is usually a romp in five or six languages and fun to play with. Now I'd like to do the same thing in SecondLife using LSL. The only suggestion I've gotten so far from that crowd is to use a WWW request, presumeably constructing an http POST message to a CGI process. I never cared much for html forms so I'd rather use TCP/IP or some other communications protocol. Has anyone tried this who'd care to provide a few hints? There are several good LSL demo sites in SecondLife but I'd like to demo other compiler and script languages, maybe even PowerShell. Dick S. A: REST is now in fashion for web services. There is no real reason to get down to TCP/IP layer for something which from your description does not require super performance or response times. LSL HTTP support is quite good so you should not have any problems. Of course it is not ideal to get the output of your programs back in real-time - for that you would need to open http connection on the server and constantly write to the body of the page (while the client would read that). But even with going back and forth between the server and the client you should get moderately good experience. A: LSL's external communication options are limited to three specific options. The official LSL wiki provides more detailed information on each option. * *Raw HTTP: requests must be initiated by LSL script *XmlHTTP: requests must be initiated by external service *Email: full two-way communication, but with enforced sleep timers. A: I would tend to agree with Ilya. The Best you might be able to pull if you want the script to be very responsive is to have your server side code call back to the object once the server is made aware of it using the XML-RPC. The main wiki for Second Life is pretty good for sample code, etc. XML-RPC A: LSL's llHTTPRequest function and corresponding http_response event are definitely your best bet. Contrary to the assumption posed in your question, using http does not necessitate using "html forms". The POST (or PUT) payload can contain data organized however you want. A REST interface is a good way to do the kind of machine-to-machine http communication we're talking about. One advantage of REST over html or xml is that REST can be much less verbose. This is important when you start approaching LSL's 2048 character limit on http responses. Though LSL has two other methods of communicating with the rest of the internet (email and xml-rpc), their use in LSL scripts is highly discouraged these days. Both of these systems (as currently implemented in Second Life) rely on centralized servers to route messages to their destinations. This doesn't scale well. These servers are under ever-increasing load as Second Life grows. llHTTPRequest on the other hand runs entirely on the simulator running your script, which means you don't have to worry about missing messages because of overloaded central servers. Finally, there will soon be a new feature added to LSL allowing any script to act as an http server (see http://wiki.secondlife.com/wiki/LSL_http_server). 
It's currently (as of June 2009) deployed on the beta grid, but should be on the main grid with the next major update. With this addition, many of the current LSL-to-web programs that regularly poll a web server for updated data will instead be able to have updates pushed to them when they happen. A: As Ilya said, REST and LSL-HTTP would be the way to go. The new implementation of JSON within the Linden Scripting Language should help with that. You might want to start with reading the Json usage in LSL page on the official wiki.
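On the server side of an llHTTPRequest call, almost any stack will do. As one hypothetical example (the prefix, port and response text are made up, and C# is used here purely for illustration - the original collaboratory used Perl), a small HttpListener loop could accept the POST from the in-world script and hand back a short plain-text reply, keeping the body under the 2048-character response limit mentioned above.

using System;
using System.IO;
using System.Net;
using System.Text;

class LslBackend
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://+:8080/run/");   // hypothetical endpoint
        listener.Start();

        while (true)
        {
            HttpListenerContext ctx = listener.GetContext();

            // Body of the llHTTPRequest POST from the in-world script.
            string body;
            using (var reader = new StreamReader(ctx.Request.InputStream,
                                                 ctx.Request.ContentEncoding))
            {
                body = reader.ReadToEnd();
            }

            // Placeholder: compile/run the submitted snippet here and capture its output.
            string reply = "received " + body.Length + " chars";
            if (reply.Length > 2048) reply = reply.Substring(0, 2048);

            byte[] buf = Encoding.UTF8.GetBytes(reply);
            ctx.Response.ContentType = "text/plain";
            ctx.Response.ContentLength64 = buf.Length;
            ctx.Response.OutputStream.Write(buf, 0, buf.Length);
            ctx.Response.Close();
        }
    }
}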
{ "language": "en", "url": "https://stackoverflow.com/questions/103439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Identify the host from a Windows user mode dump file Is there an easy way of finding out the host name of a machine that generated a user mode dump file via WinDbg? Or at least any piece of identifying information to try and confirm that two dump files came from the same system. A: You can do so by analyzing the user dump file with WinDbg. Run the !peb command and look for the value of COMPUTERNAME in its output. A: From debugger.chm: Finding the Computer Name in a Kernel-Mode Dump File If you need to determine the name of the computer on which the crash dump was made, you can use the !peb extension and look for the value of COMPUTERNAME it its output. Or you can use the following command: 0: kd> x srv!SrvComputerName be8ce2e8 srv!SrvComputerName = _UNICODE_STRING "AIGM-MYCOMP-PUB01" Finding the IP Address in a Kernel-Mode Dump File To determine the IP address of the computer on which the crash dump was made, find a thread stack that shows some send/receive network activity. Open one of the send packets or receive packets. The IP address will be visible in that packet. EDIT: I will note that depending on how the dump file was created, the PEB information may not be available and so you won't always be able to find the computer name. Particularly if something came through the Microsoft Winqual site, it has been sanitized. Using the shortcut for environment variables in the PEB: !envvar COMPUTERNAME A: For IP Address list: 3: kd> du poi(poi(srvnet!SrvAdminIpAddressList)) ffffe001d3d58450 "127.0.0.1" 3: kd> du ffffe001d3d58464 "::1" 3: kd> ffffe001d3d5846c "169.254.66.248" 3: kd> ffffe001d3d5848a "" 3: kd> ffffe001d3d5848c "fe80::f0cb:5439:f12f:42f8" 3: kd> ffffe001d3d584c0 "" 3: kd> ffffe001d3d584c2 "192.168.104.249" 3: kd> ffffe001d3d584e2 "" 3: kd> ffffe001`d3d584e4 "fe80::fc6f:ae16:b336:83dc" 3: kd> A: In both kernel and user mode, 10: kd> !envvar COMPUTERNAME COMPUTERNAME = a-host-name Retrieves the computer name aka hostname of the target PC. It requires EXTS.dll extension to be loaded, and Windows XP+ (W10 RS3 at the time of writing). In kernel mode, this does not work directly, !envvar will return empty 10: kd> !peb PEB NULL... Your current context is an idle thread. WinDbg (Windows 10 RS3 16299.15 SDK) help for !process only lists bits 0-4, however I found bit 5 dumps whole environment when used with 0 and 4. Flags = 0b110001. So I use this during WinDbg startup script to automatically log the computer name. !process 0 0x31 wininit.exe Will dump the all the environment variables: 10: kd> !process 0 0x31 wininit.exe PROCESS ffffc485c82655c0 SessionId: 0 Cid: 02d0 Peb: 8d04c6b000 ParentCid: 0258 DirBase: 40452f000 ObjectTable: ffffe30b1150fb40 HandleCount: 163. Image: wininit.exe VadRoot ffffc485c862b990 Vads 61 Clone 0 Private 326. Modified 12. Locked 2. DeviceMap ffffe30b0a817880 Token ffffe30b1150f060 ElapsedTime 00:00:18.541 UserTime 00:00:00.000 KernelTime 00:00:00.015 QuotaPoolUsage[PagedPool] 121696 QuotaPoolUsage[NonPagedPool] 11448 Working Set Sizes (now,min,max) (1750, 50, 345) (7000KB, 200KB, 1380KB) PeakWorkingSetSize 1697 VirtualSize 2097239 Mb PeakVirtualSize 2097239 Mb PageFaultCount 2104 MemoryPriority BACKGROUND BasePriority 13 CommitCharge 470 PEB at 0000008d04c6b000 InheritedAddressSpace: No ReadImageFileExecOptions: No BeingDebugged: No ImageBaseAddress: 00007ff7be3d0000 Ldr 00007ff8dff4f3a0 Ldr.Initialized: Yes Ldr.InInitializationOrderModuleList: 000001be470e1c10 . 000001be47128d60 Ldr.InLoadOrderModuleList: 000001be470e1d80 . 
000001be47128d40 Ldr.InMemoryOrderModuleList: 000001be470e1d90 . 000001be47128d50 Base TimeStamp Module 7ff7be3d0000 600d94df Jan 24 10:40:15 2021 C:\Windows\system32\wininit.exe 7ff8dfdf0000 493793ea Dec 04 03:25:14 2008 C:\Windows\SYSTEM32\ntdll.dll ... SubSystemData: 0000000000000000 ProcessHeap: 000001be470e0000 ProcessParameters: 000001be470e1460 CurrentDirectory: 'C:\Windows\system32\' WindowTitle: '< Name not readable >' ImageFile: 'C:\Windows\system32\wininit.exe' CommandLine: 'wininit.exe' DllPath: '< Name not readable >' Environment: 000001be47104460 ALLUSERSPROFILE=C:\ProgramData CommonProgramFiles=C:\Program Files\Common Files CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files CommonProgramW6432=C:\Program Files\Common Files COMPUTERNAME=a-host-name ComSpec=C:\Windows\system32\cmd.exe NUMBER_OF_PROCESSORS=16 OS=Windows_NT Path=C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\ PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC PROCESSOR_ARCHITECTURE=AMD64 PROCESSOR_IDENTIFIER=AMD64 Family 23 Model 1 Stepping 1, AuthenticAMD PROCESSOR_LEVEL=23 PROCESSOR_REVISION=0101 ProgramData=C:\ProgramData ProgramFiles=C:\Program Files ProgramFiles(x86)=C:\Program Files (x86) ProgramW6432=C:\Program Files PSModulePath=%ProgramFiles%\WindowsPowerShell\Modules;C:\Windows\system32\WindowsPowerShell\v1.0\Modules PUBLIC=C:\Users\Public SystemDrive=C: SystemRoot=C:\Windows TEMP=C:\temp TMP=C:\temp USERNAME=SYSTEM USERPROFILE=C:\Windows\system32\config\systemprofile windir=C:\Windows You could click on a PEB dml link, or switch context via .process /p <PROCESS_ADDRESS>, then !envvar COMPUTERNAME would also work.
{ "language": "en", "url": "https://stackoverflow.com/questions/103453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What's the best way to determine the name of your machine in a .NET app? I need to get the name of the machine my .NET app is running on. What is the best way to do this? A: Whilst others have already said that the System.Environment.MachineName returns you the name of the machine, beware... That property is only returning the NetBIOS name (and only if your application has EnvironmentPermissionAccess.Read permissions). It is possible for your machine name to exceed the length defined in: MAX_COMPUTERNAME_LENGTH In these cases, System.Environment.MachineName will not return you the correct name! Also note, there are several names your machine could go by and in Win32 there is a method GetComputerNameEx that is capable of getting the name matching each of these different name types: * *ComputerNameDnsDomain *ComputerNameDnsFullyQualified *ComputerNameDnsHostname *ComputerNameNetBIOS *ComputerNamePhysicalDnsDomain *ComputerNamePhysicalDnsFullyQualified *ComputerNamePhysicalDnsHostname *ComputerNamePhysicalNetBIOS If you require this information, you're likely to need to go to Win32 through p/invoke, such as: class Class1 { enum COMPUTER_NAME_FORMAT { ComputerNameNetBIOS, ComputerNameDnsHostname, ComputerNameDnsDomain, ComputerNameDnsFullyQualified, ComputerNamePhysicalNetBIOS, ComputerNamePhysicalDnsHostname, ComputerNamePhysicalDnsDomain, ComputerNamePhysicalDnsFullyQualified } [DllImport("kernel32.dll", SetLastError=true, CharSet=CharSet.Auto)] static extern bool GetComputerNameEx(COMPUTER_NAME_FORMAT NameType, [Out] StringBuilder lpBuffer, ref uint lpnSize); [STAThread] static void Main(string[] args) { bool success; StringBuilder name = new StringBuilder(260); uint size = 260; success = GetComputerNameEx(COMPUTER_NAME_FORMAT.ComputerNameDnsDomain, name, ref size); Console.WriteLine(name.ToString()); } } A: System.Environment.MachineName A: Try Environment.MachineName. Ray actually has the best answer, although you will need to use some interop code.
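As a quick way to see how the different name types compare on a given box, the loop below assumes the COMPUTER_NAME_FORMAT enum and the GetComputerNameEx declaration from the answer above are already in scope, and just prints each variant next to Environment.MachineName; treat it as a sketch rather than production code.

foreach (COMPUTER_NAME_FORMAT fmt in Enum.GetValues(typeof(COMPUTER_NAME_FORMAT)))
{
    var name = new StringBuilder(260);
    uint size = (uint)name.Capacity;
    if (GetComputerNameEx(fmt, name, ref size))
        Console.WriteLine("{0,-40} {1}", fmt, name);   // e.g. ComputerNameDnsFullyQualified  myhost.example.com
}
Console.WriteLine("Environment.MachineName = " + Environment.MachineName);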
{ "language": "en", "url": "https://stackoverflow.com/questions/103460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Adding .net Webservice references I'm getting near completely different objects from the same WSDL file when I try to Add a Web Reference depending if I am using Express or Pro version of the vs2008 .NET IDE. 1) Why is this happening? I'd expect the WSDL's to act the same across platforms--clearly they are not! 2) How do I determine what tool/wizard the IDE is calling when I select "Add Service Reference". Details: The VB.NET Express version is adding objects that are desired and expected. I'd like to use the IDE to add the service (not muck with wsdl.exe or svcutil.exe). I'm using vs2008 Pro v9.0.30729.1 on Windows Server 2003. Express version 9.0.21022.8 RTM on XP.The respective Reference.vb shows the same header "This code was generated by a tool. Runtime Version:2.0.50727.3053". The wizard UI's to add the service WSDL are visually different between the two IDEs. Express has Strict On and Pro has Strict Off. The general IDE Strict settings seem to have no control over this. Java/Eclipse are having no issues with these WSDLs. A: I'm sorry to say that the proxies visual studio generates are pretty bad. The real solution for this right now is to write your own contracts and proxies. I know, it's not great news, but 30 minutes of typing might save you from a world of hurt. Check out the helper classes at idesign.net A: I have seen in the past where wsdl.exe would produce different proxy classes than the VS IDE wizard does. This is probably the explanation. A: Try using svcutil.exe instead. A: VS 2005 and 2008 Pro generate different classes when you're adding web references - perhaps this is similar? If you click Advanced when adding a service reference, you'll find Add Web Reference at the bottom of the form.
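To make the "write your own contracts and proxies" suggestion concrete, here is a minimal hand-rolled WCF client sketch. The contract, binding and address are hypothetical placeholders - they are not the actual service from the question - but this is the general shape you would write instead of relying on the generated Reference.vb/Reference.cs.

using System.ServiceModel;

[ServiceContract]
public interface IOrderService               // hypothetical contract
{
    [OperationContract]
    string GetOrderStatus(int orderId);
}

class Client
{
    static void Main()
    {
        var factory = new ChannelFactory<IOrderService>(
            new BasicHttpBinding(),
            new EndpointAddress("http://example.com/OrderService.svc"));  // placeholder URL

        IOrderService proxy = factory.CreateChannel();
        try
        {
            System.Console.WriteLine(proxy.GetOrderStatus(42));
        }
        finally
        {
            ((IClientChannel)proxy).Close();
            factory.Close();
        }
    }
}

Because the contract is written by hand, both Express and Pro compile exactly the same types, which sidesteps the wizard differences described above.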
{ "language": "en", "url": "https://stackoverflow.com/questions/103474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: iPhone programming - impressions, opinions? I've been programming in C,C++,C# and a few other languages for many years, mainly for Windows and Linux but also embedded platforms. Recently started to do some iPhone programming as a side project so I'm using Apple platforms for the first time since my Apple II days. I'm wondering what other developers that are coming to Mac OSX, Xcode and iPhone SDK think. Here are my impressions, so far: * *Mac OSX: very confusing, I tend to end up with too many open windows and don't know what's where. Luckily there's the bird's eye view, without it I'd be lost. With the shell at least there's all the familiar stuff so that helps me a lot. *Xcode: doesn't feel as good as VisualStudio or Eclipse, the two environments I'm familiar with. I think I could get used to it but I'm wondering if Apple wouldn't be better off with Eclipse. Before I found the setting where all the windows are stuck together I hated it, now I can tolerate it. *iPhone SDK: strange indeed. I understand Apple's desire to control their environment but in this day and age it just seems a little sleazy and they are missing out on so much by destroying developer goodwill. *Objective-C: I've known about it for years but never even took a look at it. The syntax is off-putting but I'm actually very intrigued by the language. I think it's an interesting third leg between C++ and C#, both of which I like a lot. Is there any chance Obj-C will break out of the Mac sandbox due to the uptick in the popularity of Apple technology? Curious to read your thoughts, Andrew A: I'm in the same boat as you (somewhat). I've been developing in C# for 7 years, ever since .NET 1.0. Over the past couple weeks I've been teaching myself Cocoa and Objective-C. Here are my impressions (note for note with yours) * *Agreed in that clutter can be a problem. I tend to use Spaces heavily when developing in XCode (put XCode in one space, Interface Builder in another space, Instruments in a third space). If you don't have Leopard (and thus, no spaces), then use Command-H to hide your active window. Using that tends to clean things up quite a bit (however it'd be nice if you could command-h automagically the current window when command-tab'ing to another app). *I'm liking XCode more and more. I hate Visual Studio - I find it to be unstable, slow, and well, just kind of a crappy IDE. Comparatively I've found XCode to be fast, stable, and I like how it organizes and filters your files. I'm not too up on my XCode shortcuts, but I'm hoping there's a way I can quick-switch from one class to another (similar to ctrl +n shortcut in ReSharper). Intellisense could be better with regards to how it displays to the user, but I really like how it essentially creates a template and you can ctrl + / to jump to the next argument in a message. *I'm hating the documentation in XCode. The help system sucks, and for whatever reason it never finds what I'm searching for. I end up just googling for anything I need to know... I hope they improve the documentation. This is my biggest beef right now. *Not quite there yet, as I'm going through the full Cocoa framework for Mac desktops. So far I'm really, really liking what I see. One thing I will say is that it would be nice if the iPhone SDK allowed for garbage collection... *Objective-C - I've never used it, this is my first foray into it. At first I was kinda wierded out by the syntax and the square brackets for messaging, but it's really growing on me. 
It's so quick to skim a method and see the message calls that method makes. The more I use it, the more Objective-C just feels nice... however templating/generics would be a welcome addition to the language. All in all, my foray into Mac development has been enjoyable, and I'm excited to start working (today! yay!) on some actual mac/iphone projects. A: I agree with your sentiments. Coming from Microsoft development tools (and eclipse) to XCode is kind of harsh. XCode just feels... unfinished in some respects. It certainly doesn't have the polish that I come to expect from VS and Eclipse. The SDK is similar, much of it is poorly documented, and there are a lot of holes where you know something should be, but it just isn't. Trying to carefully control audio/video file playback is one example. Objective-C, however, is great. I really like the language, despite its quirks and idiosyncrasies (messages to null isn't a run-time exception? really?) Once your C++ eyes get used to the syntax, loosely-typed anonymous messaging actually ends up being really cool to play with (if somewhat dangerous and prone to RTEs.) A: I really want to jump on and start developing iPhone apps as well. I've done a bit of Motorola, Blackberry and Windows Mobile development, which were all cool and east to get into with good documentation, easy to access and install SDKs. So far, I feel Apple is being a bit elitist in the fact that it seems their development environment is only available on a Mac. I'm also, not quite liking their licensing concepts. If you want to be able to actually publish apps, you need to go through them, and they have the final say on whether you can or can't or whether your app is deemed acceptable to run on their superior product. It is my belief that they are making it more difficult for the open source community to maintain and produce applications or for the iPhone neophite, like me, to even get started writing apps for the products. There's a lot of bad things said about Microsoft, but, I have to say they get their APIs and SDKs out there long before their products hit the market and really encourage programmers of all levels to dig in and get involved writing apps for their frameworks and operating systems. A: I have worked on a few small iPhone apps and I am just amazed that they didn't include the components of the framework that enable developers to easily access SOAP web services. Anyone else working in an enterprise IT environment feeling the pain? A: I personally think that the documentation is very good at this point. On any Objective C class you can option-doubleClick to bring up the documentation for that term, and if there are any example projects using that particular class that's listed to (at least for many iPhone specific classes). Also look into turning on Research Assistant when you are first starting out, and turn on Code Sense (don't think it's on by default). The combination of XCode + Interface Builder is quite powerful when you get used to it, and frankly in a few decades I have never used a better interface builder in terms of how the integration to code works or the ability to design interfaces that intelligently resize without a ton of extra work. A: I'm new to iPhone programming and XCode too, after many years of programming for many platforms and my impression is rather close to yours (with some differences): * *Mac OSX: I switched from Windows about 2 years ago (as an experiment) and I stayed :) - I don't think I'll switch back. 
Having a Unix foundation is very cool and I love the flashy GUI + I like the basic simplicity of the interface. It took me about 2 months to get used to it, but I can't imagine going back. I hate the MacBook keyboard layout and some of OSX's keyboard limitation though. It's funny how a company that is so proud of its usability insights can come up with such a lousy set of decisions. Perhaps the best examples are not having a context-menu (right-click) keyboard shortcut and the fact that you need two keys to accomplish tasks like Home, End, PgUp, etc. My main advice is to spend the time learning as many keyboard shortcuts as possible. I also recommend installing & using the following 3rd party apps that substantially improved my Mac experience: Quicksilver, Path Finder, 1Password, Things, TextMate, Text Wrangler & Transmit. *Xcode: I totally agree with you. I think XCode is rather primitive. I compare it with IntelliJ IDEA that I work with a lot and it feels like Apple is stuck at least 7 years in the past: * *code navigation is so primitive with too many windows bouncing around *you have to use the mouse all the time *templating is very limited and is based on naive macro concepts with no relation to context or scoping *refactoring is limited to just a few simple actions *you can't even easily accomplish trivial tasks like overriding a method *Code Sense is nice but could have been much better if it understood typing... The big irony is that serious Mac developers don't even understand that they have a problem... They are so used to the mess they have to deal with that they can't imagine a better world... Instead of helping you, XCode keeps getting in the way. I can come up with dozens of examples about how this environment sucks, when compared to modern Java IDEs (Eclipse, IntelliJ), but I believe it's a waste of time - it seems like Apple is too proud to learn from others... which is funny if you consider the fact that the inventors of Java weren't shy to learn from Objective-C. My only advice (to myself too) is to take a deep breath whenever you open XCode and learn as much as possible from the experts who are more used to this environment. *iPhone SDK: it's even worse than that - we considered porting our mobile app to the iPhone a couple of months ago but decided not to bother because we were worried that Apple might reject it from the app store and you can't know in advance (they've rejected a somewhat similar app in the past on the ground that it's too close to iTunes!) *Obj-C: I find Objective-C quite nice and after a few days you get used to the awkward messaging syntax, but boy do I miss garbage collection... Having to deal with memory allocations and releases feels a bit like going back in time to my early C/C++ days. I'm just beginning to learn the nuances of this language, but so far I like what I found. There are quite a few tips scattered around the web about Obj-C best practices that you can't find in the official docs and I learned a lot from them (see for example the following discussion here on stackoverflow) A: I came from a C# background as well and have been working with the iPhone SDK since beta 2. I totally agree with cranley about VS being a bit clunky compared to Xcode. Xcode is WAY different, and totally foreign when you start using it. So was VS though back in the day. Once you get by the learning curve it is a wonderful experience. 
The apps I am developing use C# server side (web service) and I absolutely hate having to switch to VS to write the web service code from Xcode. Obj-C is also quite fun to use once you learn how it works best: delegates (very different than .NET delegates), messages, Categories and all the other oddities present. I did some Java and Flex programming previous to .NET and I always hated the .NET docs compared to Java docs. They just don't cut it. I have personally found Xcodes docs and search system to be nothing short of amazing. There are countless PDF guides linked from the docs that have tons of sample code. Think about this: the iPhone SDK has been out of beta for about 2 months now. The docs show a maturity level of many years. And yes, it is because Obj-C has been around over a year and the frameworks are similar. Overall, the biggest issue I have found is that there are a LOT of .NET developers jumping on the iPhone bandwagon and trying to use Obj-C as if it were C# or VB. They fail to read the basic Obj-C docs let alone the iPhone docs and then they get very frustrated and eventually fail. The discussion forums are full of this scenario. iPhone programming is not easy. Learning a new language is not easy. It takes time and a lot of try.fail.try. It's not .NET so lose that mindset before you even begin and things will be wonderful.
{ "language": "en", "url": "https://stackoverflow.com/questions/103480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Building an HTML table on the fly using jQuery Below is the code I use to build an HTML table on the fly (using JSON data received from the server). I display an animated pleasewait (.gif) graphic while the data is loading. However, the graphic freezes while the JavaScript function is building the table. At first, I was just happy to make this happen (display the table); I guess now I need to work on efficiency. At the very least I need to stop the animated graphic from freezing. I can go to a static "Loading" display, but I would rather make this method work. Suggestions for my pleasewait display? And efficiency? Possibly a better way to build the table? Or maybe not a table, but some other "table"-like display:

var t = eval( "(" + request + ")" );
var myTable = '';
myTable += '<table id="myTable" cellspacing=0 cellpadding=2 border=1>';
myTable += "<thead>";
myTable += "<tr>";
for (var i = 0; i < t.hdrs.length; i++) {
    myTable += "<th>" + t.hdrs[i] + "</th>";
}
myTable += "</tr>";
myTable += "</thead>";
myTable += "<tbody>";
for (var i = 0; i < t.data.length; i++) {
    myTable += '<tr>';
    for (var j = 0; j < t.hdrs.length; j++) {
        myTable += '<td>';
        if (t.data[i][t.hdrs[j]] == "") {
            myTable += "&nbsp;";
        } else {
            myTable += t.data[i][t.hdrs[j]];
        }
        myTable += "</td>";
    }
    myTable += "</tr>";
}
myTable += "</tbody>";
myTable += "</table>";
$("#result").append(myTable);
$("#PleaseWaitGraphic").addClass("hide");
$(".rslt").removeClass("hide");

A: What you are doing is building a string, and then parsing it all at once upon insertion. What about creating an actual table element (i.e. $("<table>")), and then adding each row to it in turn? By the time you actually insert it into the page, the DOM nodes will all have been constructed, so it shouldn't be as big a hit.

A: Using innerHTML can definitely be much faster than using jQuery's HTML-to-DOM-ifier, which uses innerHTML but does a lot of processing on the inputs. I'd suggest checking out chain.js as a way to quickly build out tables and other repeating data structures from JavaScript objects. It's a really lightweight, smart databinding plugin for jQuery.

A: I've been using JTemplates to accomplish what you are describing. Dave Ward has an example on his blog here. The main benefit of JTemplates is that your html isn't woven into your javascript.
You write a template and call two functions to have jTemplate build the html from your template and your json. A: For starters, check out flydom and it's variants, they will help termendously. Could you possibly give more context? If this is not in the onload and just pasted in the page, just wrapping the whole thing in $(function () { /* code */ }) will probably clean up everything you are having problems with. Inline JS is executed immediately, which means that loop for the table. onload is an event and essentially 'detached'. A: My experience has been that there are two discrete delays. One is concatenating all those strings together. The other is when the browser actually tries to render the string. Typically, it's IE that has the most trouble with UI freezes, in part because it's a lot slower at running javascript. This should get better in IE8. What I would suggest in your case is breaking the operation into steps. Say for a 100 row table, you produce a valid 10 row table first. Then you output that to screen and use a setTimeout to return control to the browser so the UI stops blocking. When the setTimeout comes back, you do the next 10 rows, etc. Creating the table using DOM is certainly "cleaner", as others have said. However, there is a steep price to pay in terms of performance. See the excellent quirksmode article on this subject, which has some benchmarks you can run yourself. Long story short, innerHTML is much, much faster than DOM, even on modern JS engines. A: Search the web for JavaScript and StringBuilder. Once you have a JavaScript string builder make sure you use the .append method for every concatenation. That is, you don't want to have any + concatenations. After that, search for JavaScript and replacehtml. Use this function instead of innerHTML. A: You could insert the table into the DOM bit by bit. Honestly I'm not entirely sure if this will help with your problem, but it's worth a try. I'd do it roughly like this (untested code, could be refine some more): $("#result").append('<table id="myTable" cellspacing=0 cellpadding=2 border=1></table>'); $('#myTable').append('<thead><tr></tr></thead>'); $('#myTable').append('<tbody></tbody>'); for (var i = 0; i < t.hdrs.length; i++) { $('#myTable thead tr').append('<th>'+header+'</th>'); } for (var i = 0; i < t.data.length; i++) { myTr = '<tr>'; for (var j = 0; j < t.hdrs.length; j++) { myTr += '<td>'; if (t.data[i][t.hdrs[j]] == "") { myTr += "&nbsp;" ; } else { myTr += t.data[i][t.hdrs[j]] ; } myTr += "</td>"; } myTr += "</tr>"; $('#myTable tbody').append(myTr); } $("#PleaseWaitGraphic").addClass("hide"); $(".rslt").removeClass("hide") ;
{ "language": "en", "url": "https://stackoverflow.com/questions/103489", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Best way of restarting a jetty instance I'm using start.jar and stop.jar to stop and start my jetty instance. I restart by calling stop.jar, then start.jar. The problem is, if I don't sleep long enough between stop.jar and start.jar I start getting these random ClassNotFoundExceptions and the application doesn't work correctly. Sleeping for a longer period of time between stop and start is my current option. I also heard from someone that I should have something that manages my threads so that I end those before jetty finishes. Is this correct? The question I have about this is that stop.jar returns immediately, so it doesn't seem to help me, unless there's something I'm missing. Another option might be to poll the log file, but that's pretty ugly. What's the best way of restarting jetty? Gilbert: The Ant task is definitely not a bad way of doing it. However, it sleeps for a set amount of time, which is exactly what I'm trying to avoid. A: Can you write a shell script that does something like this after calling shutdown and before calling startup?

LISTEN_PORT=`netstat -vatn | grep LISTEN | grep 8080 | wc -l`
while [ $LISTEN_PORT -ne 0 ]; do
    sleep 1
    LISTEN_PORT=`netstat -vatn | grep LISTEN | grep 8080 | wc -l`
done

A: This thread looks old but posting anyway, may help someone. A cross-platform approach: http://ptrthomas.wordpress.com/2009/01/24/how-to-start-and-stop-jetty-revisited/ A: Did you try JFGI? Setting up an Ant task that could do the work for you? This blog post details how to set up targets that can start and stop jetty for you. You could easily cobble together another target called 'jetty-restart' which depends on 'jetty-stop' and then calls 'jetty-start'. http://ptrthomas.wordpress.com/2006/10/10/how-to-start-and-stop-jetty-from-ant/
{ "language": "en", "url": "https://stackoverflow.com/questions/103508", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Why use static_cast(x) instead of (int)x? I've heard that the static_cast function should be preferred to C-style or simple function-style casting. Is this true? Why? A: * *Allows casts to be found easily in your code using grep or similar tools. *Makes it explicit what kind of cast you are doing, and engaging the compiler's help in enforcing it. If you only want to cast away const-ness, then you can use const_cast, which will not allow you to do other types of conversions. *Casts are inherently ugly -- you as a programmer are overruling how the compiler would ordinarily treat your code. You are saying to the compiler, "I know better than you." That being the case, it makes sense that performing a cast should be a moderately painful thing to do, and that they should stick out in your code, since they are a likely source of problems. See Effective C++ Introduction A: The main reason is that classic C casts make no distinction between what we call static_cast<>(), reinterpret_cast<>(), const_cast<>(), and dynamic_cast<>(). These four things are completely different. A static_cast<>() is usually safe. There is a valid conversion in the language, or an appropriate constructor that makes it possible. The only time it's a bit risky is when you cast down to an inherited class; you must make sure that the object is actually the descendant that you claim it is, by means external to the language (like a flag in the object). A dynamic_cast<>() is safe as long as the result is checked (pointer) or a possible exception is taken into account (reference). A reinterpret_cast<>() (or a const_cast<>()) on the other hand is always dangerous. You tell the compiler: "trust me: I know this doesn't look like a foo (this looks as if it isn't mutable), but it is". The first problem is that it's almost impossible to tell which one will occur in a C-style cast without looking at large and disperse pieces of code and knowing all the rules. Let's assume these: class CDerivedClass : public CMyBase {...}; class CMyOtherStuff {...} ; CMyBase *pSomething; // filled somewhere Now, these two are compiled the same way: CDerivedClass *pMyObject; pMyObject = static_cast<CDerivedClass*>(pSomething); // Safe; as long as we checked pMyObject = (CDerivedClass*)(pSomething); // Same as static_cast<> // Safe; as long as we checked // but harder to read However, let's see this almost identical code: CMyOtherStuff *pOther; pOther = static_cast<CMyOtherStuff*>(pSomething); // Compiler error: Can't convert pOther = (CMyOtherStuff*)(pSomething); // No compiler error. // Same as reinterpret_cast<> // and it's wrong!!! As you can see, there is no easy way to distinguish between the two situations without knowing a lot about all the classes involved. The second problem is that the C-style casts are too hard to locate. In complex expressions it can be very hard to see C-style casts. It is virtually impossible to write an automated tool that needs to locate C-style casts (for example a search tool) without a full blown C++ compiler front-end. On the other hand, it's easy to search for "static_cast<" or "reinterpret_cast<". pOther = reinterpret_cast<CMyOtherStuff*>(pSomething); // No compiler error. // but the presence of a reinterpret_cast<> is // like a Siren with Red Flashing Lights in your code. // The mere typing of it should cause you to feel VERY uncomfortable. That means that, not only are C-style casts more dangerous, but it's a lot harder to find them all to make sure that they are correct. 
A: It's about how much type-safety you want to impose. When you write (bar) foo (which is equivalent to reinterpret_cast<bar>(foo) if you haven't provided a type conversion operator) you are telling the compiler to ignore type safety, and just do as it's told. When you write static_cast<bar>(foo) you are asking the compiler to at least check that the type conversion makes sense and, for integral types, to insert some conversion code. EDIT 2014-02-26 I wrote this answer more than 5 years ago, and I got it wrong. (See comments.) But it still gets upvotes! A: C-style casts are easy to miss in a block of code. C++ style casts are not only better practice; they offer a much greater degree of flexibility. reinterpret_cast allows integral to pointer type conversions, however it can be unsafe if misused. static_cast offers good conversion for numeric types, e.g. from enums to ints or ints to floats, or any data type whose type you are confident of. It does not perform any run time checks. dynamic_cast on the other hand will perform these checks, flagging any ambiguous assignments or conversions. It only works on pointers and references and incurs an overhead. There are a couple of others but these are the main ones you will come across. A: static_cast, aside from manipulating pointers to classes, can also be used to perform conversions explicitly defined in classes, as well as to perform standard conversions between fundamental types: double d = 3.14159265; int i = static_cast<int>(d); A: The question is bigger than just whether to use static_cast<> or C-style casting, because there are different things that happen when using C-style casts. The C++ casting operators are intended to make those different operations more explicit. On the surface static_cast<> and C-style casts appear to be the same thing, for example when casting one value to another: int i; double d = (double)i; //C-style cast double d2 = static_cast<double>( i ); //C++ cast Both of those cast the integer value to a double. However when working with pointers things get more complicated. Some examples: class A {}; class B : public A {}; A* a = new B; B* b = (B*)a; //(1) what is this supposed to do? char* c = (char*)new int( 5 ); //(2) isn't that weird? char* c1 = static_cast<char*>( new int( 5 ) ); //(3) compile time error In this example (1) may be OK because the object pointed to by A is really an instance of B. But what if you don't know at that point in code what a actually points to? (2) may be perfectly legal (you only want to look at one byte of the integer), but it could also be a mistake in which case an error would be nice, like (3). The C++ casting operators are intended to expose these issues in the code by providing compile-time or run-time errors when possible. So, for strict "value casting" you can use static_cast<>. If you want run-time polymorphic casting of pointers use dynamic_cast<>. If you really want to forget about types, you can use reinterpret_cast<>. And to just throw const out the window there is const_cast<>. They just make the code more explicit so that it looks like you know what you were doing. A: static_cast means that you can't accidentally const_cast or reinterpret_cast, which is a good thing. A: One pragmatic tip: you can search easily for the static_cast keyword in your source code if you plan to tidy up the project. A: In short: * *static_cast<>() gives you compile-time checking ability; a C-style cast doesn't. 
*static_cast<>() can be spotted easily anywhere inside C++ source code; in contrast, a C-style cast is harder to spot. *Intentions are conveyed much better using C++ casts. More Explanation: The static cast performs conversions between compatible types. It is similar to the C-style cast, but is more restrictive. For example, the C-style cast would allow an integer pointer to point to a char. char c = 10; // 1 byte int *p = (int*)&c; // 4 bytes Since this results in a 4-byte pointer pointing to 1 byte of allocated memory, writing to this pointer will either cause a run-time error or will overwrite some adjacent memory. *p = 5; // run-time error: stack corruption In contrast to the C-style cast, the static cast will allow the compiler to check that the pointer and pointee data types are compatible, which allows the programmer to catch this incorrect pointer assignment during compilation. int *q = static_cast<int*>(&c); // compile-time error Read more on: What is the difference between static_cast<> and C style casting and Regular cast vs. static_cast vs. dynamic_cast
{ "language": "en", "url": "https://stackoverflow.com/questions/103512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "791" }
Q: What's the right way to dynamically choose menu items for a context menu in WinForms? I'm trying to make a context menu for a control that is "linked" to a main menu item. There are two fixed menu items that are always there and an arbitrary number of additional menu items that might need to be on the menu. I've tried solving the problem by keeping a class-level reference to the fixed menu items and a list of the dynamic menu items. I am handling both menus' Opening events by clearing the current list of items, then adding the appropriate items to the menu. This works fine for the main menu, but the context menu behaves oddly. The major problem seems to be that by the time Opening is raised, the menu has already decided which items it is going to display. This form demonstrates: using System.Collections.Generic; using System.ComponentModel; using System.Windows.Forms; namespace WindowsFormsApplication1 { public class DemoForm : Form { private List<ToolStripMenuItem> _items; public DemoForm() { var contextMenu = new ContextMenuStrip(); contextMenu.Opening += contextMenu_Opening; _items = new List<ToolStripMenuItem>(); _items.Add(new ToolStripMenuItem("item 1")); _items.Add(new ToolStripMenuItem("item 2")); this.ContextMenuStrip = contextMenu; } void contextMenu_Opening(object sender, CancelEventArgs e) { var menu = sender as ContextMenuStrip; if (menu != null) { foreach (var item in _items) { menu.Items.Add(item); } } } } } When you right-click the form the first time, nothing is displayed. The second time, the menu is displayed as expected. Is there another event that is raised where I could update the items? Is it a bad practice to dynamically choose menu items? (Note: This is for an example I started making for someone who wanted such functionality and I was curious about how difficult it is, so I can't provide details about why this might be done. This person wants to "link" a main menu item to the context menu, and since menu items can only be the child of a single menu this seemed a reasonable way to do so. Any alternative suggestions for an approach are welcome.) A: You could work out the items during the MouseDown event of the control. Check that it is the right mouse button too.
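A minimal sketch of that MouseDown suggestion, reusing the _items field from the form above (the handler name and wiring are made up for illustration):

    // Wire up in the constructor: this.MouseDown += DemoForm_MouseDown;
    void DemoForm_MouseDown(object sender, MouseEventArgs e)
    {
        if (e.Button == MouseButtons.Right)
        {
            // Rebuild the menu before it is shown, instead of waiting for Opening.
            this.ContextMenuStrip.Items.Clear();
            // any fixed items would be re-added here as well
            foreach (var item in _items)
            {
                this.ContextMenuStrip.Items.Add(item);
            }
        }
    }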
{ "language": "en", "url": "https://stackoverflow.com/questions/103514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Excel Addin Access Violation Using c#, VS2005, and .NET 2.0. (XP 32 bit) This is a Winforms app that gets called by a VBA addin (.xla) via Interop libraries. This app has been around for a while and works fine when the assembly is compiled and executed anywhere other than my dev machine. On dev it crashes hard (in debugger and just running the object) with "Unhandled exception at 0x... in EXCEL.EXE: 0x...violation reading location 0x... But here's the weird part: The first method in my interface works fine. All the other methods crash as above. Here is an approximation of the code: [Guid("123Fooetc...")] [InterfaceType(ComInterfaceType.InterfaceIsIDispatch)] public interface IBar { [DispId(1)] void ThisOneWorksFine(Excel.Workbook ActiveWorkBook); [DispId(2)] string Crash1(Excel.Workbook ActiveWorkBook); [DispId(3)] int Crash2(Excel.Workbook activeWorkBook, Excel.Range target, string someStr); } [Guid("345Fooetc..")] [ClassInterface(ClassInterfaceType.None)] [ProgId("MyNameSpace.MyClass")] public class MyClass : IBar { public void ThisOneWorksFine(Excel.Workbook ActiveWorkBook) {...} string Crash1(Excel.Workbook ActiveWorkBook); {...} int Crash2(Excel.Workbook activeWorkBook, Excel.Range target, string someStr); {...} } It seems like some kind of environmental thing. Registry chundered? Could be code bugs, but it works fine elsewhere. A: I've had problems in this scenario with Office 2003 in the past. Some things that have helped: * *Installing Office 2003 Service Pack 2 stopped some crashes that happened when closing Excel. *Installing Office 2003 Service Pack 3 fixes a bug with using XP styles in a VSTO2005 application (not your case here) *Running the Excel VBA CodeCleaner http://www.appspro.com/Utilities/CodeCleaner.htm periodically helps prevent random crashes. *Accessing Excel objects from multiple threads would be dodgy, so I hope you aren't doing that. If you have the possibility you could also try opening a case with Microsoft PSS. They are pretty good if you are able to reproduce the problem. And in most cases, this kind of thing is a bug, so you won't be charged for it :) A: Is your dev machine Win64? I've had problems with win64 builds of apps that go away if you set the build platform to x86. A: Is your dev machine running a different version of Office than the other machines? I know that the PIAs differ. So if you're developing on Office 2003 and testing on Office 2007 (or vice versa), for example, you will run into problems.
{ "language": "en", "url": "https://stackoverflow.com/questions/103516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Junit output and OutOfMemoryError I'm running some JUnit tests on my applications. Every test has a for loop calling the respective method 10000 times. The tested methods produce a lot of log output. These logs are also automatically collected by JUnit as test output. This situation leads to an OutOfMemoryError because the string buffer where JUnit keeps the output becomes too large. I don't need these logs during tests, so if there is a way to tell JUnit "don't keep program output" it would be enough. Any ideas? A: What type of logging are you using? Is there some way you can override the default logging behavior to just disregard all log messages? A: Some options: * *Change your logging so that it dumps to a file instead of standard output. *Increase the maximum heap size with -Xmx<some number>M, like -Xmx256M. A: I would just increase the available memory. Try adding -Xmx256m to your VM. A: The solution was overriding logging properties. Now I disabled logging and all seems to work (the test is still running). If it works I'll configure a way for logging to a file. Thanks everybody (and congratulations to Jeff & friends for this site). A: I see that an answer has been accepted already, but here's what I would have submitted if I had gotten it typed up and tested faster: If by "logging" you mean System.out.println() or System.err.println(), and if you're sure that your test really doesn't need the logs, then you can redirect stdout and stderr programmatically. // Save the original stdout and stderr PrintStream psOut = System.out; PrintStream psErr = System.err; PrintStream psDevNull = null; try { // Send stdout and stderr to /dev/null psDevNull = new PrintStream(new ByteArrayOutputStream()); System.setOut(psDevNull); System.setErr(psDevNull); // run tests in loop for (...) { } } finally { // Restore stdout and stderr System.setOut(psOut); System.setErr(psErr); if (psDevNull != null) { psDevNull.close(); psDevNull = null; } } This way, your test output will be disabled, but the other output from JUnit will not be, as it would be if you used redirection on the command line like this: ant test &> /dev/null The command line redirection causes all of Ant/JUnit's output to be redirected, not just what is coming from the class you are testing, so that's probably not what you want. The programmatic redirection causes only the prints/writes to System.out and System.err in your program to be redirected, and you will still get the output from Ant and JUnit. A: Setting outputtoformatters="no" solved all my memory problems (in Ant 1.7.1 this prevents output generated by tests being sent to the test formatters).
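For the Ant-based suggestion in that last answer, the relevant build-file fragment might look something like this; the classpath reference and directory properties are placeholders, and outputtoformatters requires Ant 1.7.0 or later:

    <junit printsummary="yes" fork="yes" maxmemory="256m" outputtoformatters="no">
        <classpath refid="test.classpath"/>
        <formatter type="plain"/>
        <batchtest todir="${reports.dir}">
            <fileset dir="${test.src.dir}" includes="**/*Test.java"/>
        </batchtest>
    </junit>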
{ "language": "en", "url": "https://stackoverflow.com/questions/103519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: The essential steps in verifying a file upload What would you say are the essential steps in verifying a file upload? I'd tend to check the MIME type, give it a new (random) name, make sure it's got an allowed file extension, and then I'd check the contents of the file. How do you go about it? A: Check (in this order): the file MIME type (and note certain browsers have MIME type detection problems...); that the file path exists; that a previous version of the file with the same name doesn't exist, else, rev it; that the file isn't too big; on success, run a virus check on the server. A: Check the file type, the file size, and the image dimensions. These are the 3 I always check to be sure of a good result. A: If you are receiving a gif file for a profile photo, as an example, you should check that the MIME type is gif. That way you avoid uploading bad files. Here is an example using PHP. A: Depends on the expected file contents... might be a good idea to run a virus scan on the file.
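Pulling those checks together, a rough PHP sketch (the allowed types, size limit, and upload directory are arbitrary assumptions) could look like this:

    <?php
    // allowed image types mapped to a safe extension
    $allowed  = array('image/gif' => 'gif', 'image/jpeg' => 'jpg', 'image/png' => 'png');
    $maxBytes = 2 * 1024 * 1024; // 2 MB

    $file = $_FILES['upload'];

    if ($file['error'] !== UPLOAD_ERR_OK) { die('Upload failed'); }
    if ($file['size'] > $maxBytes)        { die('File is too large'); }

    // check the actual contents, not just the client-supplied MIME type
    $info = getimagesize($file['tmp_name']);
    if ($info === false || !isset($allowed[$info['mime']])) { die('Not an allowed image type'); }

    // give the file a new random name with a whitelisted extension
    $newName = md5(uniqid(rand(), true)) . '.' . $allowed[$info['mime']];
    move_uploaded_file($file['tmp_name'], '/var/uploads/' . $newName);
    ?>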
{ "language": "en", "url": "https://stackoverflow.com/questions/103522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is there a list of AJAX JSF Libraries available? I'm looking for an alternative to www.jsfmatrix.net to get a better idea of what JSF libraries are out there and to avoid having to write my own grid/table components. Or are these 27 the best the world has to offer (really, only 3 are worth their salt)? A: Have you looked at RichFaces? It comes with a lot of really cool rich UI stuff. A: I'll second RichFaces, but also point out IceFaces (which isn't free, but is also very nice).
{ "language": "en", "url": "https://stackoverflow.com/questions/103523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Handling different ConnectionStates before opening SqlConnection If you need to open a SqlConnection before issuing queries, can you simply handle all non-Open ConnectionStates in the same way? For example: if (connection.State != ConnectionState.Open) { connection.Open(); } I read somewhere that for ConnectionState.Broken the connection needs to be closed before its re-opened. Does anyone have experience with this? Thanks- A: http://msdn.microsoft.com/en-us/library/system.data.connectionstate.aspx Broken connection state does need to be closed and reopened before eligible for continued use. Edit: Unfortunately closing a closed connection will balk as well. You'll need to test the ConnectionState before acting on an unknown connection. Perhaps a short switch statement could do the trick. A: This isn't directly answering your question, but the best practice is to open and close a connection for every access to the database. ADO.NET connection pooling ensures that this performs well. It's particularly important to do this in server apps (e.g. ASP.NET), but I would do it even in a WinForms app that accesses the database directly. Example: using(SqlConnection connection = new SqlConnection(...)) { connection.Open(); // ... do your stuff here } // Connection is disposed and closed here, even if an exception is thrown In this way you never need to check the connection state when opening a connection. A: You can handle it the same way. I was getting numerous connection state == broken while using IE9. There is something fundamentally wrong with IE9 in this regard since no other browser had this issue of broken connection states after 5 or 6 updates to the database tables. So I use object context. So basically just close it and re-open it. I have this code before all my reads and updates in the business logic layer: if (context.Connection.State == System.Data.ConnectionState.Broken) { context.Connection.Close(); context.Connection.Open(); }
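A small sketch of the switch statement mentioned in the first answer; the helper name EnsureOpen is made up for illustration:

    private static void EnsureOpen(SqlConnection connection)
    {
        switch (connection.State)
        {
            case ConnectionState.Broken:
                // a broken connection must be closed before it can be reopened
                connection.Close();
                connection.Open();
                break;
            case ConnectionState.Closed:
                connection.Open();
                break;
            case ConnectionState.Open:
                break; // already usable
            default:
                // Connecting, Executing, Fetching: another operation is still in progress
                break;
        }
    }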
{ "language": "en", "url": "https://stackoverflow.com/questions/103532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do I create a zoom effect in OpenGLES on the iPhone? I have an OpenGL ES game that I am hacking together. One part of it involves looking at a large "map-like" area and then double-tapping on one part to "zoom into" it. How would you use OpenGL ES to provide this effect (given that it may need to zoom in on different parts of the map). I've heard of glScale and glOrtho, but I'm unclear on how they actually work since the whole openGL world is very new to me. A: The 2-D zooming you describe might be better achieved using Core Animation. NSView (and its NDA'd iPhone counterpart) provide implicit animation when you change their frame. All you'd need to do in this case would be to set the frame's origin.x and origin.y and size.width and size.height to such values to make the view larger than the screen. If you did this and wrapped it in the appropriate calls to start and commit an animation, you'd get a zooming animation for free. Core Animation uses OpenGL behind the scenes for its animations. If, however, you feel that you have to do this in OpenGL, may I suggest a little writeup I did at http://www.sunsetlakesoftware.com/2008/08/05/lessons-molecules-opengl-es? I'm the author of Molecules, a free 3-D molecular visualizer for the iPhone, and I knew nothing about OpenGL ES before I started that project. 3 weeks later, it was in the App Store as it launched. OpenGL calls are pretty simple, it's the math surrounding them that can give you headaches. Zooming in on objects is actually pretty simple, and can be done either by moving the camera or by actually physically scaling objects. For Molecules, I went the route of scaling the object using the glScalef(x,y,z) function, where x, y, and z are the scale factors you wish to apply to your model object. I do my scaling incrementally. That is, I don't reset the transformation matrix at the start of each rendered frame (using glLoadIdentity()), but just scale it a little bit based on user input. If the user moves their fingers apart by 5%, I increase the scale by 5%. Again, I'd suggest Core Animation for the 2-D zooming you describe, but it isn't too hard to achieve the same results in OpenGL ES. A: Respectfully, the answer is to take a few days to learn the basics of OpenGL, and there are much better places for that on the net than here.
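To illustrate the incremental glScalef approach from the second answer, a rough fragment for OpenGL ES 1.x might look like the following; the zoomFactor value, the currentScale field, and the drawView call are assumptions about the surrounding code:

    // called from the double-tap handler; zooms in by 50% each time
    GLfloat zoomFactor = 1.5f;
    currentScale *= zoomFactor;               // keep track of the total zoom if needed later
    glMatrixMode(GL_MODELVIEW);
    glScalef(zoomFactor, zoomFactor, 1.0f);   // a 2-D map, so leave z alone
    [self drawView];                          // re-render with the new transform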
{ "language": "en", "url": "https://stackoverflow.com/questions/103537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Locking File using Apache Server and TortoiseSVN I am setting up an Apache server with TortoiseSVN for a local source code repository. Currently, for trial purposes, I am setting up only two users. Is it possible for the administrator to set things up so that a file gets compulsorily locked once it is checked out (copied to a working directory) by someone? Abhijit Dhopate A: The main reason you might want to do this on subversion is for binary files (i.e. images, etc.) that are difficult or impossible to 'merge'. In those cases, each user can request a lock on a file. There is also an svn property (svn:needs-lock) that can be applied to files that makes them read-only on checkout, and read-write when you lock, so that you remember to request the lock before editing. See the chapter on locking in the svn book. A: Wouldn't that defeat one of the purposes of a concurrent versioning system like SubVersion? Generally, you'll check out a block of files, but the server doesn't know whether anyone is editing those files. Why not allow another user access to those files and deal with the results if a conflict emerges?
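The svn:needs-lock workflow from the first answer looks roughly like this from the command line (the file name is just an example):

    # mark the file so it is checked out read-only until someone takes the lock
    svn propset svn:needs-lock '*' design.psd
    svn commit -m "Require a lock before editing design.psd"

    # before editing, a user acquires the lock (making the working copy writable)
    svn lock design.psd -m "Editing the mockup"

    # committing releases the lock by default; it can also be released explicitly
    svn unlock design.psd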
{ "language": "en", "url": "https://stackoverflow.com/questions/103548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to render unicode characters in the correct font? (C#/WinForms) My application correctly handles different kinds of character sets, but only internally - when it comes to displaying text in standard WinForms labels and textboxes, it seems to have problems with Chinese characters. The problem seems to be the font used (Tahoma), because when I copy&paste the text, or view it in the debugger, it is displayed correctly. Also when I set MS Mincho as the font to be used, the characters on the screen look OK. Of course, I don't want to use MS Mincho in the entire application. Do I have to switch the font depending on the characters displayed, or is there a better way I have missed? A: UniScribe, which was introduced in Windows 2000, is supposed to handle this transparently, meaning that it will automatically use a different font (such as Mincho) for characters that aren't present in the font you've selected. This is why it looks correct in the debugger, even though the font used in the debugger doesn't contain Chinese characters. Perhaps you are doing something that disables UniScribe, or is problematic in some other way. Perhaps if you could paste some code it would be easier to identify the problem. A: Not all fonts have data for all the glyphs that can be expressed in unicode. You'll have to locate a suitable font which has the subset you want. edit: Just to clarify, there are fonts which cover the full unicode range, but the one you're using now isn't one of them.
{ "language": "en", "url": "https://stackoverflow.com/questions/103556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Rhino mocks ordered reply, throw exception problem I'm trying to implement some retry logic if there is an exception in my code. I've written the code and now I'm trying to get Rhino Mocks to simulate the scenario. The gist of the code is the following: class Program { static void Main(string[] args) { MockRepository repo = new MockRepository(); IA provider = repo.CreateMock<IA>(); using (repo.Record()) { SetupResult.For(provider.Execute(23)) .IgnoreArguments() .Throw(new ApplicationException("Dummy exception")); SetupResult.For(provider.Execute(23)) .IgnoreArguments() .Return("result"); } repo.ReplayAll(); B retryLogic = new B { Provider = provider }; retryLogic.RetryTestFunction(); repo.VerifyAll(); } } public interface IA { string Execute(int val); } public class B { public IA Provider { get; set; } public void RetryTestFunction() { string result = null; //simplified retry logic try { result = Provider.Execute(23); } catch (Exception e) { result = Provider.Execute(23); } } } What seems to happen is that the exception gets thrown every time instead of just once. What should I change the setup to be? A: You need to use Expect.Call instead of SetupResult: using (repo.Record()) { Expect.Call(provider.Execute(23)) .IgnoreArguments() .Throw(new ApplicationException("Dummy exception")); Expect.Call(provider.Execute(23)) .IgnoreArguments() .Return("result"); } The Rhino.Mocks wiki says, Using SetupResult.For() completely bypasses the expectations model in Rhino Mocks
{ "language": "en", "url": "https://stackoverflow.com/questions/103557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Invalid postback or callback argument I have an ASP.net application that works fine in the development environment but in the production environment throws the following exception when clicking a link that performs a postback. Any ideas? Invalid postback or callback argument. Event validation is enabled using <pages enableEventValidation="true"/> in configuration or <%@ Page EnableEventValidation="true" %> in a page. For security purposes, this feature verifies that arguments to postback or callback events originate from the server control that originally rendered them. If the data is valid and expected, use the ClientScriptManager.RegisterForEventValidation method in order to register the postback or callback data for validation. Edit: This seems to only be happening when viewed with IE6 but not with IE7, any ideas? A: Problem description: This is one of the common issues that a lot of ASP.NET beginners face, post, and ask about. Typically, they post the error message as below and seek a resolution without sharing much about what they were trying to do. [ArgumentException: Invalid postback or callback argument. Event validation is enabled using <pages enableEventValidation="true"/> in configuration or <%@ Page EnableEventValidation="true" %> in a page. For security purposes, this feature verifies that arguments to postback or callback events originate from the server control that originally rendered them. If the data is valid and expected, use the ClientScriptManager.RegisterForEventValidation method in order to register the postback or callback data for validation.] Though the error stack trace itself suggests a quick resolution by turning event validation off, it is not a recommended solution as it opens up a security hole. It is always good to know why it happened and how to solve/handle the root problem. Assessment: Event validation is done to validate that the origin of the event is the related rendered control (and not some cross site script or so). Since a control registers its events during rendering, events can be validated during postback or callback (via arguments of __doPostBack). This reduces the risk of unauthorized or malicious postback requests and callbacks. Refer: MSDN: Page.EnableEventValidation Property Based on the above, possible scenarios that I have faced or heard of that raise the issue in discussion are: Case #1: If we have angular brackets in the request data, it looks like some script tag is being passed to the server. Possible Solution: HTML encode the angular brackets with the help of JavaScript before submitting the form, i.e., replace "<" with "&lt;" and ">" with "&gt;" function HTMLEncodeAngularBrackets(someString) { var modifiedString = someString.replace("<","&lt;"); modifiedString = modifiedString.replace(">","&gt;"); return modifiedString; } Case #2: If we write client script that changes a control in the client at run time, we might have a dangling event. An example could be having embedded controls where an inner control registers for postback but is hidden at runtime because of an operation done on the outer control. This I read about on an MSDN blog written by Carlo, when looking for the same issue caused by multiple form tags. Possible Solution: Manually register the control for event validation within the Render method of the page. 
protected override void Render(HtmlTextWriter writer) { ClientScript.RegisterForEventValidation(myButton.UniqueID.ToString()); base.Render(writer); } As said, one of the other common scenarios reported (which looks like it falls in this same category) is building a page where one form tag is embedded in another form tag that runs on the server. Removing one of them corrects the flow and resolves the issue. Case #3: If we re-define/instantiate controls or commands at runtime on every postback, respective/related events might go for a toss. A simple example could be re-binding a datagrid on every page load (including postbacks). Since on rebind all the controls in the grid get new IDs, the control IDs have changed by the time a postback event triggered by the datagrid is processed, and thus the event might not connect to the correct control, raising the issue. Possible Solution: This can be simply resolved by making sure that controls are not re-created on every postback (re-bound here). Using the Page property IsPostBack can easily handle it. If you want to create a control on every postback, then it is necessary to make sure that the IDs are not changed. protected void Page_Load(object sender, EventArgs e) { if(!Page.IsPostBack) { // Create controls // Bind Grid } } Conclusion: As said, an easy/direct solution can be adding enableEventValidation="false" in the Page directive or Web.config file, but it is not recommended. Based on the implementation and cause, find the root cause and apply the resolution accordingly. A: This can happen if you're posting what appears to be possibly malicious things; such as a textbox that has html in it, but is not encoded prior to postback. If you are allowing html or script to be submitted, you need to encode it so that the characters, such as <, are passed as &lt;. A: It seems that the data/controls on the page are changed when the postback occurs. What happens if you turn off the event validation in the page directive? <%@ Page ... EnableEventValidation = "false" /> A: I only ever get this when I have nested <form> tags in my pages. IE6 will look at the nested form tags and try to post the values in those forms as well as the main ASP.NET form, causing the error. Other browsers don't post the nested forms (since it's invalid HTML) and don't get the error. You can certainly solve this by doing an EnableEventValidation = "false", but that can mean problems for your posted values and viewstate. It's better to weed out the nested <form> tags first. There are other spots where this can come up, like HTML-esque values in form fields, but I think the error messages for those were more specific. On a generic postback that throws this, I'd just check the rendered page for extra <form> tags. A: I had something similar happen where I had a ListBox that worked fine when I entered text manually, but when I entered data based off of a SQL query, the last item in the list would throw this exception. I searched all the Q&A and nothing matched my issue. In my case, the issue was that there was an unprintable character (\r) in the SQL data. I am guessing that the server created a hash code based on the existence of the unprintable character, but it was removed from the string that actually showed up in the ListBox so the 2nd hash didn't match the 1st. Cleaning up the string and removing the unprintable characters before putting it into the ListBox fixed my issue. 
This is probably a super edge-case but I wanted to add this just to be complete (and hopefully help someone else not spend 2 days going crazy).
{ "language": "en", "url": "https://stackoverflow.com/questions/103560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Can FxCop/StyleCop be limited to only analyze selected methods from with Visual Studio? I am taking on a maintenance team and would like to introduce tools like FxCop and StyleCop to help improve the code and introduce the developers to better programming techniques and standards. Since we are maintaining code and not making significant enhancements, we will probably only deal with a couple of methods/routines at a time when making changes. Is it possible to target FxCop/StyleCop to specific areas of code within Visual Studio to avoid getting overwhelmed with all of the issues that would get raised when analyzing a whole class or project? If it is possible, how do you go about it? Thanks, Matt A: I am using FxCopCmd.exe (FxCop 1.36) as an external tool with various command line parameters, including this one: /types:<type list> [Short form: /t:<type list>] Analyze only these types and members. A: FxCop can target specific types using either the command line or the GUI. StyleCop does not provide such a mechanism. However, for both of them you can target specific rules instead types which may work better to reduce the amount of "noise" to more manageable chunks. A: Try the approach in this article http://blogs.msdn.com/sourceanalysis/archive/2008/11/11/introducing-stylecop-on-legacy-projects.aspx You can remove individual files from StyleCop's beady eye by adding properties to include or exclude each file in the .csproj A: I would guess that it can't (seems a too-specific need). A: Creating new rules for FXCop is possible, but "advanced". Configuring FXCop to only use certain rules from those available is trivial.
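As an example of the command-line option quoted in the first answer (the assembly path and type names are placeholders):

    :: analyze only two specific types from one assembly and print results to the console
    FxCopCmd.exe /file:bin\MyApp.dll /types:MyApp.Billing.InvoiceService,MyApp.Billing.TaxCalculator /console /summary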
{ "language": "en", "url": "https://stackoverflow.com/questions/103561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: The performance impact of using instanceof in Java I am working on an application and one design approach involves extremely heavy use of the instanceof operator. While I know that OO design generally tries to avoid using instanceof, that is a different story and this question is purely related to performance. I was wondering if there is any performance impact? Is it just as fast as ==? For example, I have a base class with 10 subclasses. In a single function that takes the base class, I check whether the class is an instance of a subclass and carry out some routine. One of the other ways I thought of solving it was to use a "type id" integer primitive instead, and use a bitmask to represent categories of the subclasses, and then just do a bit mask comparison of the subclasses' "type id" to a constant mask representing the category. Is instanceof somehow optimized by the JVM to be faster than that? I want to stick to Java but the performance of the app is critical. It would be cool if someone that has been down this road before could offer some advice. Am I nitpicking too much or focusing on the wrong thing to optimize? A: I just made a simple test to see how instanceOf performance compares to a simple s.equals() call to a string object with only one letter. In a 10.000.000 loop the instanceOf gave me 63-96ms, and the string equals gave me 106-230ms. I used Java JVM 6. So in my simple test it is faster to do an instanceOf instead of a one character string comparison. Using Integer's .equals() instead of string's gave me the same result; only when I used == was it faster than instanceOf, by 20ms (in a 10.000.000 loop). A: instanceof is probably going to be more costly than a simple equals in most real world implementations (that is, the ones where instanceof is really needed, and you can't just solve it by overriding a common method, like every beginner textbook as well as Demian above suggest). Why is that? Because what is probably going to happen is that you have several interfaces, that provide some functionality (let's say, interfaces x, y and z), and some objects to manipulate that may (or not) implement one of those interfaces... but not directly. Say, for instance, I have: w extends x A implements w B extends A C extends B, implements y D extends C, implements z Suppose I am processing an instance of D, the object d. Computing (d instanceof x) requires taking d.getClass(), looping through the interfaces it implements to know whether one is == to x, and if not doing so again recursively for all of their ancestors... In our case, a breadth-first exploration of that tree yields at least 8 comparisons, supposing y and z don't extend anything... The complexity of a real-world derivation tree is likely to be higher. In some cases, the JIT can optimize most of it away, if it is able to resolve in advance d as being, in all possible cases, an instance of something that extends x. Realistically, however, you are going to go through that tree traversal most of the time. If that becomes an issue, I would suggest using a handler map instead, linking the concrete class of the object to a closure that does the handling. It removes the tree traversal phase in favor of a direct mapping. However, beware that if you have set a handler for C.class, my object d above will not be recognized. Here are my 2 cents, I hope they help... A: Instanceof is very fast. It boils down to a bytecode that is used for class reference comparison. Try a few million instanceofs in a loop and see for yourself. 
A: instanceof is very efficient, so your performance is unlikely to suffer. However, using lots of instanceof suggests a design issue. If you can use xClass == String.class, this is faster. Note: you don't need instanceof for final classes. A: I wrote a performance test based on jmh-java-benchmark-archetype:2.21. The JDK is OpenJDK, version 1.8.0_212. The test machine is a Mac Pro. The test result is: Benchmark Mode Cnt Score Error Units MyBenchmark.getClasses thrpt 30 510.818 ± 4.190 ops/us MyBenchmark.instanceOf thrpt 30 503.826 ± 5.546 ops/us The result shows that getClass is better than instanceOf, which is contrary to the other tests. However, I don't know why. The test code is below: public class MyBenchmark { public static final Object a = new LinkedHashMap<String, String>(); @Benchmark @BenchmarkMode(Mode.Throughput) @OutputTimeUnit(TimeUnit.MICROSECONDS) public boolean instanceOf() { return a instanceof Map; } @Benchmark @BenchmarkMode(Mode.Throughput) @OutputTimeUnit(TimeUnit.MICROSECONDS) public boolean getClasses() { return a.getClass() == HashMap.class; } public static void main(String[] args) throws RunnerException { Options opt = new OptionsBuilder().include(MyBenchmark.class.getSimpleName()).warmupIterations(20).measurementIterations(30).forks(1).build(); new Runner(opt).run(); } } A: Generally the reason why the "instanceof" operator is frowned upon in a case like that (where the instanceof is checking for subclasses of this base class) is because what you should be doing is moving the operations into a method and overriding it for the appropriate subclasses. For instance, if you have: if (o instanceof Class1) doThis(); else if (o instanceof Class2) doThat(); //... You can replace that with o.doEverything(); and then have the implementation of "doEverything()" in Class1 call "doThis()", and in Class2 call "doThat()", and so on. A: 'instanceof' is actually an operator, like + or -, and I believe that it has its own JVM bytecode instruction. It should be plenty fast. I should note that if you have a switch where you are testing if an object is an instance of some subclass, then your design might need to be reworked. Consider pushing the subclass-specific behavior down into the subclasses themselves. A: It's hard to say how a certain JVM implements instanceof, but in most cases, Objects are comparable to structs and classes are as well, and every object struct has a pointer to the class struct it is an instance of. So actually instanceof for if (o instanceof java.lang.String) might be as fast as the following C code if (objectStruct->iAmInstanceOf == &java_lang_String_class) assuming a JIT compiler is in place and does a decent job. Considering that this is only accessing a pointer, getting a pointer at a certain offset the pointer points to and comparing this to another pointer (which is basically the same as testing two 32-bit numbers for equality), I'd say the operation can actually be very fast. It doesn't have to be, though, it depends a lot on the JVM. However, if this would turn out to be the bottleneck operation in your code, I'd consider the JVM implementation rather poor. Even one that has no JIT compiler and only interprets code should be able to make an instanceof test in virtually no time. A: Demian and Paul mention a good point; however, the placement of the code to execute really depends on how you want to use the data... I'm a big fan of small data objects that can be used in many ways. 
If you follow the override (polymorphic) approach, your objects can only be used "one way". This is where patterns come in... You can use double-dispatch (as in the visitor pattern) to ask each object to "call you" passing itself -- this will resolve the type of the object. However (again) you'll need a class that can "do stuff" with all of the possible subtypes. I prefer to use a strategy pattern, where you can register strategies for each subtype you want to handle. Something like the following. Note that this only helps for exact type matches, but has the advantage that it's extensible - third-party contributors can add their own types and handlers. (This is good for dynamic frameworks like OSGi, where new bundles can be added) Hopefully this will inspire some other ideas... package com.javadude.sample; import java.util.HashMap; import java.util.Map; public class StrategyExample { static class SomeCommonSuperType {} static class SubType1 extends SomeCommonSuperType {} static class SubType2 extends SomeCommonSuperType {} static class SubType3 extends SomeCommonSuperType {} static interface Handler<T extends SomeCommonSuperType> { Object handle(T object); } static class HandlerMap { private Map<Class<? extends SomeCommonSuperType>, Handler<? extends SomeCommonSuperType>> handlers_ = new HashMap<Class<? extends SomeCommonSuperType>, Handler<? extends SomeCommonSuperType>>(); public <T extends SomeCommonSuperType> void add(Class<T> c, Handler<T> handler) { handlers_.put(c, handler); } @SuppressWarnings("unchecked") public <T extends SomeCommonSuperType> Object handle(T o) { return ((Handler<T>) handlers_.get(o.getClass())).handle(o); } } public static void main(String[] args) { HandlerMap handlerMap = new HandlerMap(); handlerMap.add(SubType1.class, new Handler<SubType1>() { @Override public Object handle(SubType1 object) { System.out.println("Handling SubType1"); return null; } }); handlerMap.add(SubType2.class, new Handler<SubType2>() { @Override public Object handle(SubType2 object) { System.out.println("Handling SubType2"); return null; } }); handlerMap.add(SubType3.class, new Handler<SubType3>() { @Override public Object handle(SubType3 object) { System.out.println("Handling SubType3"); return null; } }); SubType1 subType1 = new SubType1(); handlerMap.handle(subType1); SubType2 subType2 = new SubType2(); handlerMap.handle(subType2); SubType3 subType3 = new SubType3(); handlerMap.handle(subType3); } } A: Approach I wrote a benchmark program to evaluate different implementations: * *instanceof implementation (as reference) *object-orientated via an abstract class and @Override a test method *using an own type implementation *getClass() == _.class implementation I used jmh to run the benchmark with 100 warmup calls, 1000 iterations under measuring, and with 10 forks. So each option was measured with 10 000 times, which takes 12:18:57 to run the whole benchmark on my MacBook Pro with macOS 10.12.4 and Java 1.8. The benchmark measures the average time of each option. For more details see my implementation on GitHub. For the sake of completeness: There is a previous version of this answer and my benchmark. 
Results
| Operation  | Runtime in nanoseconds per operation | Relative to instanceof |
|------------|--------------------------------------|------------------------|
| INSTANCEOF | 39,598 ± 0,022 ns/op                 | 100,00 %               |
| GETCLASS   | 39,687 ± 0,021 ns/op                 | 100,22 %               |
| TYPE       | 46,295 ± 0,026 ns/op                 | 116,91 %               |
| OO         | 48,078 ± 0,026 ns/op                 | 121,42 %               |
tl;dr In Java 1.8 instanceof is the fastest approach, although getClass() is very close. A: I'll get back to you on instanceof performance. But a way to avoid the problem (or lack thereof) altogether would be to create a parent interface for all the subclasses on which you need to do instanceof. The interface will be a super set of all the methods in sub-classes for which you need to do an instanceof check. Where a method does not apply to a specific sub-class, simply provide a dummy implementation of this method. If I didn't misunderstand the issue, this is how I've gotten around the problem in the past. A: Modern JVM/JIT compilers have removed the performance hit of most of the traditionally "slow" operations, including instanceof, exception handling, reflection, etc. As Donald Knuth wrote, "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." The performance of instanceof probably won't be an issue, so don't waste your time coming up with exotic workarounds until you're sure that's the problem. A: The items which will determine the performance impact are: * *The number of possible classes for which the instanceof operator could return true *The distribution of your data - are most of the instanceof operations resolved in the first or second attempt? You'll want to put your most likely to return true operations first. *The deployment environment. Running on a Sun Solaris VM is significantly different than Sun's Windows JVM. Solaris will run in 'server' mode by default, while Windows will run in client mode. The JIT optimizations on Solaris will make all method access about the same. I created a microbenchmark for four different methods of dispatch. The results from Solaris are as follows, with the smaller number being faster:
InstanceOf 3156
class==    2925
OO         3083
Id         3067
A: In modern Java versions the instanceof operator is faster than a simple method call. This means: if(a instanceof AnyObject){ } is faster than: if(a.getType() == XYZ){ } Another thing is if you need to cascade many instanceof. Then a switch that calls getType() only once is faster.
On my hardware and Java 6 & 7, the difference between instanceof and switch on 10mln iterations is for 10 child classes - instanceof: 1200ms vs switch: 470ms for 5 child classes - instanceof: 375ms vs switch: 204ms So, instanceof is really slower, especially on huge number of if-else-if statements, however difference will be negligible within real application. import java.util.Date; public class InstanceOfVsEnum { public static int c1, c2, c3, c4, c5, c6, c7, c8, c9, cA; public static class Handler { public enum Type { Type1, Type2, Type3, Type4, Type5, Type6, Type7, Type8, Type9, TypeA } protected Handler(Type type) { this.type = type; } public final Type type; public static void addHandlerInstanceOf(Handler h) { if( h instanceof H1) { c1++; } else if( h instanceof H2) { c2++; } else if( h instanceof H3) { c3++; } else if( h instanceof H4) { c4++; } else if( h instanceof H5) { c5++; } else if( h instanceof H6) { c6++; } else if( h instanceof H7) { c7++; } else if( h instanceof H8) { c8++; } else if( h instanceof H9) { c9++; } else if( h instanceof HA) { cA++; } } public static void addHandlerSwitch(Handler h) { switch( h.type ) { case Type1: c1++; break; case Type2: c2++; break; case Type3: c3++; break; case Type4: c4++; break; case Type5: c5++; break; case Type6: c6++; break; case Type7: c7++; break; case Type8: c8++; break; case Type9: c9++; break; case TypeA: cA++; break; } } } public static class H1 extends Handler { public H1() { super(Type.Type1); } } public static class H2 extends Handler { public H2() { super(Type.Type2); } } public static class H3 extends Handler { public H3() { super(Type.Type3); } } public static class H4 extends Handler { public H4() { super(Type.Type4); } } public static class H5 extends Handler { public H5() { super(Type.Type5); } } public static class H6 extends Handler { public H6() { super(Type.Type6); } } public static class H7 extends Handler { public H7() { super(Type.Type7); } } public static class H8 extends Handler { public H8() { super(Type.Type8); } } public static class H9 extends Handler { public H9() { super(Type.Type9); } } public static class HA extends Handler { public HA() { super(Type.TypeA); } } final static int cCycles = 10000000; public static void main(String[] args) { H1 h1 = new H1(); H2 h2 = new H2(); H3 h3 = new H3(); H4 h4 = new H4(); H5 h5 = new H5(); H6 h6 = new H6(); H7 h7 = new H7(); H8 h8 = new H8(); H9 h9 = new H9(); HA hA = new HA(); Date dtStart = new Date(); for( int i = 0; i < cCycles; i++ ) { Handler.addHandlerInstanceOf(h1); Handler.addHandlerInstanceOf(h2); Handler.addHandlerInstanceOf(h3); Handler.addHandlerInstanceOf(h4); Handler.addHandlerInstanceOf(h5); Handler.addHandlerInstanceOf(h6); Handler.addHandlerInstanceOf(h7); Handler.addHandlerInstanceOf(h8); Handler.addHandlerInstanceOf(h9); Handler.addHandlerInstanceOf(hA); } System.out.println("Instance of - " + (new Date().getTime() - dtStart.getTime())); dtStart = new Date(); for( int i = 0; i < cCycles; i++ ) { Handler.addHandlerSwitch(h1); Handler.addHandlerSwitch(h2); Handler.addHandlerSwitch(h3); Handler.addHandlerSwitch(h4); Handler.addHandlerSwitch(h5); Handler.addHandlerSwitch(h6); Handler.addHandlerSwitch(h7); Handler.addHandlerSwitch(h8); Handler.addHandlerSwitch(h9); Handler.addHandlerSwitch(hA); } System.out.println("Switch of - " + (new Date().getTime() - dtStart.getTime())); } } A: instanceof is really fast, taking only a few CPU instructions. 
Apparently, if a class X has no subclasses loaded (JVM knows), instanceof can be optimized as: x instanceof X ==> x.getClass()==X.class ==> x.classID == constant_X_ID The main cost is just a read! If X does have subclasses loaded, a few more reads are needed; they are likely co-located so the extra cost is very low too. Good news everyone! A: InstanceOf is a warning of poor Object Oriented design. Current JVMs do mean the instanceOf is not much of a performance worry in itself. If you are finding yourself using it a lot, especially for core functionality, it is probably time to look at the design. The performance (and simplicity/maintainability) gains of refactoring to a better design will greatly outweigh any actual processor cycles spent on the actual instanceOf call. To give a very small simplistic programming example. if (SomeObject instanceOf Integer) { [do something] } if (SomeObject instanceOf Double) { [do something different] } Is a poor architecture a better choice would have been to have SomeObject be the parent class of two child classes where each child class overrides a method (doSomething) so the code would look as such: Someobject.doSomething(); A: If speed is your sole aim then using int constants to identify sub classes seems to shave a milliseconds of the time static final int ID_A = 0; static final int ID_B = 1; abstract class Base { final int id; Base(int i) { id = i; } } class A extends Base { A() { super(ID_A); } } class B extends Base { B() { super(ID_B); } } ... Base obj = ... switch(obj.id) { case ID_A: .... break; case ID_B: .... break; } terrible OO design, but if your performance analysis indicates this is where you bottleneck is then maybe. In my code the dispatch code takes 10% of total execution time and this maybe contributed to a 1% total speed improvement. A: You should measure/profile if it's really a performance issue in your project. If it is I'd recommend a redesign - if possible. I'm pretty sure you can't beat the platform's native implementation (written in C). You should also consider the multiple inheritance in this case. You should tell more about the problem, maybe you could use an associative store, e.g. a Map<Class, Object> if you are only interested in the concrete types. A: With regard to Peter Lawrey's note that you don't need instanceof for final classes and can just use a reference equality, be careful! Even though the final classes cannot be extended, they are not guaranteed to be loaded by the same classloader. Only use x.getClass() == SomeFinal.class or its ilk if you are absolutely positive that there is only one classloader in play for that section of code. A: I also prefer an enum approach, but I would use a abstract base class to force the subclasses to implement the getType() method. public abstract class Base { protected enum TYPE { DERIVED_A, DERIVED_B } public abstract TYPE getType(); class DerivedA extends Base { @Override public TYPE getType() { return TYPE.DERIVED_A; } } class DerivedB extends Base { @Override public TYPE getType() { return TYPE.DERIVED_B; } } } A: I thought it might be worth submitting a counter-example to the general consensus on this page that "instanceof" is not expensive enough to worry about. I found I had some code in an inner loop that (in some historic attempt at optimization) did if (!(seq instanceof SingleItem)) { seq = seq.head(); } where calling head() on a SingleItem returns the value unchanged. 
Replacing the code by seq = seq.head(); gives me a speed-up from 269ms to 169ms, despite the fact that there are some quite heavy things happening in the loop, like string-to-double conversion. It's possible of course that the speed-up is more due to eliminating the conditional branch than to eliminating the instanceof operator itself; but I thought it worth mentioning. A: You're focusing on the wrong thing. The difference between instanceof and any other method for checking the same thing would probably not even be measurable. If performance is critical then Java is probably the wrong language. The major reason being that you can't control when the VM decides it wants to go collect garbage, which can take the CPU to 100% for several seconds in a large program (MagicDraw 10 was great for that). Unless you are in control of every computer this program will run on you can't guarantee which version of JVM it will be on, and many of the older ones had major speed issues. If it's a small app you may be ok with Java, but if you are constantly reading and discarding data then you will notice when the GC kicks in.
{ "language": "en", "url": "https://stackoverflow.com/questions/103564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "347" }
Q: Visual Studio 2008 saving files gets slow I've looked for some other articles on this problem and even tried some of the ideas in this thread; however, nothing has solved the issue yet. So, on to the issue. Something happens when working in Visual Studio (usually C#) that causes the IDE to become a bit wonky when saving a file. I will be working along just fine for a while, then at some point I notice that every time I save a file (Ctrl+S) it becomes very slow. The behavior I notice is this: I hit save in some fashion (Ctrl+S, menu, etc...) and in the status bar I see the word Searching show up. It looks like it is scanning through all of the loaded namespaces for something, although I have no idea for what or why it is doing so. It causes a real hiccup in workflow since typically I will hit Ctrl+S often and keep typing. I have been unable to track down what exactly causes this to start happening. It has happened in multiple project types (web, WPF, console). Has anyone seen this behavior or have any suggestions? A: I know the question is old but this may help others who are having the same issue. I too had a problem with VS 2008 taking a long time to save some files. Not all files, just a few files. Hitting Ctrl+S would take anywhere from 30-120 seconds. I figured out it was on pages having external JavaScript. So I selectively commented them out and tried saving and found the offender. The culprit was the Google Translate JavaScript code. It starts with <script src="//translate.google.com/translate_a... Notice the // at the beginning of the src. All other external scripts that started with http:// had no problem, so I changed the // to http:// and the problem was solved. It seems VS is trying to get the file locally if the path is not http. I don't know what it does, but this fixed the problem for me. A: I've had a problem similar to this happen before. Are you using any plugins like ReSharper or DevExpress? A: Have you tried disabling IntelliSense? We've seen it bog down all sorts of things in Visual Studio. A: I had a similar problem with Visual Studio 2005. I read through several posts (sorry, I could not post the links because of: sorry, new users can only post a maximum of one hyperlink). I ran FileMon and discovered that on save the IDE keeps querying C:\Documents and Settings\iguigova\Local Settings\Application Data\Microsoft\WebsiteCache Then I came across this post: http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=347228 Now I am trying to clear the directory. It was FULL. I plan to set up a batch file to delete its contents daily... Good luck!
{ "language": "en", "url": "https://stackoverflow.com/questions/103569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: WPF Validation Not Firing on First LostFocus of the TextBox I am trying to validate the WPF form against an object. The validation fires when I type something in the textbox, lose focus, come back to the textbox, and then erase whatever I have written. But if I just load the WPF application and tab off the textbox without writing and erasing anything in the textbox, then the validation does not fire. Here is the Customer.cs class: public class Customer : IDataErrorInfo { public string FirstName { get; set; } public string LastName { get; set; } public string Error { get { throw new NotImplementedException(); } } public string this[string columnName] { get { string result = null; if (columnName.Equals("FirstName")) { if (String.IsNullOrEmpty(FirstName)) { result = "FirstName cannot be null or empty"; } } else if (columnName.Equals("LastName")) { if (String.IsNullOrEmpty(LastName)) { result = "LastName cannot be null or empty"; } } return result; } } } And here is the WPF code: <TextBlock Grid.Row="1" Margin="10" Grid.Column="0">LastName</TextBlock> <TextBox Style="{StaticResource textBoxStyle}" Name="txtLastName" Margin="10" VerticalAlignment="Top" Grid.Row="1" Grid.Column="1"> <Binding Source="{StaticResource CustomerKey}" Path="LastName" ValidatesOnExceptions="True" ValidatesOnDataErrors="True" UpdateSourceTrigger="LostFocus"/> </TextBox> A: Take a look at the ValidatesOnTargetUpdated property of ValidationRule. It will validate when the data is first loaded. This is good if you're trying to catch empty or null fields. You'd update your binding element like this: <Binding Source="{StaticResource CustomerKey}" Path="LastName" ValidatesOnExceptions="True" ValidatesOnDataErrors="True" UpdateSourceTrigger="LostFocus"> <Binding.ValidationRules> <DataErrorValidationRule ValidatesOnTargetUpdated="True" /> </Binding.ValidationRules> </Binding> A: If you're not averse to putting a bit of logic in your code-behind, you can handle the actual LostFocus event with something like this: .xaml <TextBox LostFocus="TextBox_LostFocus" .... .xaml.cs private void TextBox_LostFocus(object sender, RoutedEventArgs e) { ((Control)sender).GetBindingExpression(TextBox.TextProperty).UpdateSource(); } A: Unfortunately this is by design. WPF validation only fires if the value in the control has changed. Unbelievable, but true. So far, WPF validation is the big proverbial pain - it's terrible. One of the things you can do, however, is get the binding expression from the control's property and manually invoke the validations. It sucks, but it works. A: I found the best way for me to handle this was to do something like this on the LostFocus event of the text box: private void dbaseNameTextBox_LostFocus(object sender, RoutedEventArgs e) { if (string.IsNullOrWhiteSpace(dbaseNameTextBox.Text)) { dbaseNameTextBox.Text = string.Empty; } } Then it sees an error. A: I've gone through the same problem and found an ultra-simple way to resolve this: in the Loaded event of your window, simply put txtLastName.Text = String.Empty. That's it!! Since the property of your object has changed (it has been set to an empty string), the validation fires! A: The following code loops over all the controls and validates them. Not necessarily the preferred way, but it seems to work. It only does TextBlocks and TextBoxes but you can easily change it. 
public static class PreValidation { public static IEnumerable<T> FindVisualChildren<T>(DependencyObject depObj) where T : DependencyObject { if (depObj != null) { for (int i = 0; i < VisualTreeHelper.GetChildrenCount(depObj); i++) { DependencyObject child = VisualTreeHelper.GetChild(depObj, i); if (child != null && child is T) { yield return (T)child; } foreach (T childOfChild in FindVisualChildren<T>(child)) { yield return childOfChild; } } } } public static void Validate(DependencyObject depObj) { foreach(var c in FindVisualChildren<FrameworkElement>(depObj)) { DependencyProperty p = null; if (c is TextBlock) p = TextBlock.TextProperty; else if (c is TextBox) p = TextBox.TextProperty; if (p != null && c.GetBindingExpression(p) != null) c.GetBindingExpression(p).UpdateSource(); } } } Just call Validate on your window or control and it should pre-validate them for you.
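For example, a minimal usage sketch (the Loaded event handler shown here is an assumption about where you would call it, not something from the original answer):

// Code-behind for the window; forces every Text binding to validate as soon as the view loads.
private void Window_Loaded(object sender, RoutedEventArgs e)
{
    // Walks the visual tree and calls UpdateSource() on each Text binding,
    // which runs ValidatesOnDataErrors and any ValidationRules immediately.
    PreValidation.Validate(this);
}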
{ "language": "en", "url": "https://stackoverflow.com/questions/103575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: What's wrong with my XPath/XML? I'm trying a very basic XPath on this xml (same as below), and it doesn't find anything. I'm trying both .NET and this website, and XPaths such as //PropertyGroup, /PropertyGroup and //MSBuildCommunityTasksPath are simply not working for me (they compiled but return zero results). Source XML: <?xml version="1.0" encoding="utf-8"?> <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <!-- $Id: FxCop.proj 114 2006-03-14 06:32:46Z pwelter34 $ --> <PropertyGroup> <MSBuildCommunityTasksPath>$(MSBuildProjectDirectory)\MSBuild.Community.Tasks\bin\Debug</MSBuildCommunityTasksPath> </PropertyGroup> <Import Project="$(MSBuildProjectDirectory)\MSBuild.Community.Tasks\MSBuild.Community.Tasks.Targets" /> <Target Name="DoFxCop"> <FxCop TargetAssemblies="$(MSBuildCommunityTasksPath)\MSBuild.Community.Tasks.dll" RuleLibraries="@(FxCopRuleAssemblies)" AnalysisReportFileName="Test.html" DependencyDirectories="$(MSBuildCommunityTasksPath)" FailOnError="True" ApplyOutXsl="True" OutputXslFileName="C:\Program Files\Microsoft FxCop 1.32\Xml\FxCopReport.xsl" /> </Target> </Project> A: The tags in the document end up in the "default" namespace created by the xmlns attribute with no prefix. Unfortunately, XPath alone can not query elements in the default namespace. I'm actually not sure of the semantic details, but you have to explicitly attach a prefix to that namespace using whatever tool is hosting XPath. There may be a shorter way to do this in .NET, but the only way I've seen is via a NameSpaceManager. After you explicitly add a namespace, you can query using the namespace manager as if all the tags in the namespaced element have that prefix (I chose 'msbuild'): using System; using System.Xml; public class XPathNamespace { public static void Main(string[] args) { XmlDocument xmlDocument = new XmlDocument(); xmlDocument.LoadXml( @"<?xml version=""1.0"" encoding=""utf-8""?> <Project xmlns=""http://schemas.microsoft.com/developer/msbuild/2003""> <!-- $Id: FxCop.proj 114 2006-03-14 06:32:46Z pwelter34 $ --> <PropertyGroup> <MSBuildCommunityTasksPath>$(MSBuildProjectDirectory)\MSBuild.Community.Tasks\bin\Debug</MSBuildCommunityTasksPath> </PropertyGroup> <Import Project=""$(MSBuildProjectDirectory)\MSBuild.Community.Tasks\MSBuild.Community.Tasks.Targets""/> <Target Name=""DoFxCop""> <FxCop TargetAssemblies=""$(MSBuildCommunityTasksPath)\MSBuild.Community.Tasks.dll"" RuleLibraries=""@(FxCopRuleAssemblies)"" AnalysisReportFileName=""Test.html"" DependencyDirectories=""$(MSBuildCommunityTasksPath)"" FailOnError=""True"" ApplyOutXsl=""True"" OutputXslFileName=""C:\Program Files\Microsoft FxCop 1.32\Xml\FxCopReport.xsl"" /> </Target> </Project>"); XmlNamespaceManager namespaceManager = new XmlNamespaceManager(xmlDocument.NameTable); namespaceManager.AddNamespace("msbuild", "http://schemas.microsoft.com/developer/msbuild/2003"); foreach (XmlNode n in xmlDocument.SelectNodes("//msbuild:MSBuildCommunityTasksPath", namespaceManager)) { Console.WriteLine(n.InnerText); } } } A: You can add namespaces in your code and all that, but you can effectively wildcard the namespace. Try the following XPath idiom. 
//*[local-name()='PropertyGroup'] //*[local-name()='MSBuildCommunityTasksPath'] name() usually works as well, as in: //*[name()='PropertyGroup'] //*[name()='MSBuildCommunityTasksPath'] EDIT: Namespaces are great and i'm not suggesting they're not important, but wildcarding them comes in handy when cobbling together prototype code, one-off desktop tools, experimenting with XSLT, and so forth. Balance your need for convenience against acceptable risk for the task at hand. FYI, if need be, you can also strip or reassign namespaces. A: Your issue is with the namespace (xmlns="http://schemas.microsoft.com/developer/msbuild/2003"). You're receiving zero nodes because you aren't qualifying it with the namespace. If you remove the xmlns attribute, your "//PropertyGroup" XPath will work. How you query with namespace usually involves aliasing a default xmlns to an identifier (since one is not specified on the attribute), and selecting like "//myXMLNStoken:PropertyGroup".
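For completeness, the local-name() wildcard plugs straight into the same .NET API shown in the first answer; a small hedged sketch (the file path and variable names are illustrative):

using System;
using System.Xml;

XmlDocument doc = new XmlDocument();
doc.Load("FxCop.proj"); // assumed path to the project file above

// No XmlNamespaceManager is needed when the namespace is wildcarded away.
XmlNodeList nodes = doc.SelectNodes("//*[local-name()='MSBuildCommunityTasksPath']");
foreach (XmlNode n in nodes)
{
    Console.WriteLine(n.InnerText);
}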
{ "language": "en", "url": "https://stackoverflow.com/questions/103576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Exceptions: Is this a good practice? This is written in PHP, but it's really language agnostic. try { try { $issue = new DM_Issue($core->db->escape_string($_GET['issue'])); } catch(DM_Exception $e) { throw new Error_Page($tpl, ERR_NOT_FOUND, $e->getMessage()); } } catch(Error_Page $e) { die($e); } Are nested try/catch blocks a good practice to follow? It seems a little bulky just for an error page - however, my Issue Datamanager throws an Exception if an error occurs, and I consider that to be a good way of detecting errors. The Error_Page exception is simply an error page compiler. I might just be pedantic, but do you think this is a good way to report errors, and if so, can you suggest a better way to write this? Thanks A: You're using Exceptions for page logic, and I personally think that's not a good thing. Exceptions should be used to signal when bad or unexpected things happen, not to control the output of an error page. If you want to generate an error page based on Exceptions, consider using set_exception_handler. Any uncaught exceptions are run through whatever callback method you specify. Keep in mind that this doesn't stop the "fatalness" of an Exception. After an exception is passed through your callback, execution will stop as normal after any uncaught exception. A: I think you'd be better off not nesting. If you expect multiple exception types, have multiple catches. try{ Something(); } catch( SpecificException se ) {blah();} catch( AnotherException ae ) {blah();} A: The ideal is for exceptions to be caught at the level which can handle them. Not before (waste of time), and not after (you lose context). So, if $tpl and ERR_NOT_FOUND are information which is only "known" close to the new DM_Issue call, for example because there are other places where you create a DM_Issue and would want ERR_SOMETHING_ELSE, or because the value of $tpl varies, then you're catching the first exception at the right place. How to get from that place to dying is another question. An alternative would be to die right there. But if you do that then there's no opportunity for intervening code to do anything (such as clearing something up in some way or modifying the error page) after the error but before exit. It's also good to have explicit control flow. So I think you're good. I'm assuming that your example isn't a complete application - if it is then it's probably needlessly verbose, and you could just die in the DM_Exception catch clause. But for a real app I approve of the principle of not just dying in the middle of nowhere. A: Depending on your needs this could be fine, but I am generally pretty hesitant to catch an exception, wrap the message in a new exception, and rethrow it, because you lose the stack trace (and potentially other) information from the original exception in the wrapping exception. If you're sure that you don't need that information when examining the wrapping exception then it's probably alright. A: I'm not sure about PHP, but in e.g. C# you can have more than one catch block, so there is no need for nested try/catch combinations. Generally I believe that error handling with try/catch/finally is always common sense, even for showing "only" an error page. It's a clean way to handle errors and avoid strange behavior on crashing. A: I wouldn't throw an exception on issue not found - it's a valid state of an application, and you don't need a stack trace just to display a 404. 
What you need to catch is unexpected failures, like SQL errors - that's when exception handling comes in handy. I would change your code to look more like this: try { $issue = DM_Issue::fetch($core->db->escape_string($_GET['issue'])); } catch (SQLException $e) { log_error('SQL Error: DM_Issue::fetch()', $e->getMessage()); } catch (Exception $e) { log_error('Exception: DM_Issue::fetch()', $e->getMessage()); } if(!$issue) { display_error_page($tpl, ERR_NOT_FOUND); } else { // ... do stuff with $issue object. } A: Exceptions should be used only if there is a potentially site-breaking event - such as a database query not executing properly or something being misconfigured. A good example is that a cache or log directory is not writable by the Apache process. The idea here is that exceptions are for you, the developer, to halt code that can break the entire site so you can fix it before deployment. They are also sanity checks to make sure that if the environment changes (i.e. somebody alters the permissions of the cache folder or changes the database schema) the site is halted before it can damage anything. So, no; nested catch handlers are not a good idea. In my pages, my index.php file wraps its code in a try...catch block - and if something bad happens it checks to see if it's in production or not, and either emails me and displays a generic error page, or shows the error right on the screen. Remember: PHP is not C#. C# is (with the exception (hehe, no pun intended :p) of ASP.NET) for applications that contain state - whereas PHP is a stateless scripting language.
{ "language": "en", "url": "https://stackoverflow.com/questions/103583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Using PHP, how do I insert text at the beginning of a text file without overwriting it? I have: <?php $file=fopen(date("Y-m-d").".txt","r+") or exit("Unable to open file!"); if ($_POST["lastname"] <> "") { fwrite($file,$_POST["lastname"]."\n"); } fclose($file); ?> but it overwrites the beginning of the file. How do I make it insert? A: I'm not entirely sure of your question - do you want to write data and not have it overwrite the beginning of an existing file, or write new data to the start of an existing file, keeping the existing content after it? To insert text without overwriting the beginning of the file, you'll have to open it for appending (a+ rather than r+): $file=fopen(date("Y-m-d").".txt","a+") or exit("Unable to open file!"); if ($_POST["lastname"] <> "") { fwrite($file,$_POST["lastname"]."\n"); } fclose($file); If you're trying to write to the start of the file, you'll have to read in the file contents first (see file_get_contents, which takes the file name rather than the handle), then write your new string followed by the old contents to the output file: $old_content = file_get_contents(date("Y-m-d").".txt"); fwrite($file, $new_content."\n".$old_content); The above approach will work with small files, but you may run into memory limits trying to read a large file in using file_get_contents. In this case, consider using rewind($file), which sets the file position indicator for the handle to the beginning of the file stream. Note when using rewind() not to open the file with the a (or a+) options, as: If you have opened the file in append ("a" or "a+") mode, any data you write to the file will always be appended, regardless of the file position. A: A working example for inserting in the middle of a file stream without overwriting, and without having to load the whole thing into a variable/memory:
function finsert($handle, $string, $bufferSize = 16384) {
    $insertionPoint = ftell($handle);
    // Create a temp file to stream into
    $tempPath = tempnam(sys_get_temp_dir(), "file-chainer");
    $lastPartHandle = fopen($tempPath, "w+");
    // Read in everything from the insertion point and forward
    while (!feof($handle)) {
        fwrite($lastPartHandle, fread($handle, $bufferSize), $bufferSize);
    }
    // Rewind to the insertion point
    fseek($handle, $insertionPoint);
    // Rewind the temporary stream
    rewind($lastPartHandle);
    // Write back everything starting with the string to insert
    fwrite($handle, $string);
    while (!feof($lastPartHandle)) {
        fwrite($handle, fread($lastPartHandle, $bufferSize), $bufferSize);
    }
    // Close the last part handle and delete it
    fclose($lastPartHandle);
    unlink($tempPath);
    // Re-set pointer
    fseek($handle, $insertionPoint + strlen($string));
}
$handle = fopen("file.txt", "w+"); fwrite($handle, "foobar"); rewind($handle); finsert($handle, "baz"); // File stream is now: bazfoobar Composer lib for it can be found here A: If you want to put your text at the beginning of the file, you'd have to read the file contents first, like: <?php $filename = date("Y-m-d").".txt"; $file=fopen($filename,"r+") or exit("Unable to open file!"); if ($_POST["lastname"] <> "") { $existingText = file_get_contents($filename); fwrite($file, $existingText . $_POST["lastname"]."\n"); } fclose($file); ?> A: You get the same result by opening the file for appending: <?php $file=fopen(date("Y-m-d").".txt","a+") or exit("Unable to open file!"); if ($_POST["lastname"] <> "") { fwrite($file,$_POST["lastname"]."\n"); } fclose($file); ?>
{ "language": "en", "url": "https://stackoverflow.com/questions/103593", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Why was the arguments.callee.caller property deprecated in JavaScript? Why was the arguments.callee.caller property deprecated in JavaScript? It was added and then deprecated in JavaScript, but it was omitted altogether by ECMAScript. Some browser (Mozilla, IE) have always supported it and don't have any plans on the map to remove support. Others (Safari, Opera) have adopted support for it, but support on older browsers is unreliable. Is there a good reason to put this valuable functionality in limbo? (Or alternately, is there a better way to grab a handle on the calling function?) A: arguments.callee.caller is not deprecated, though it does make use of the Function.caller property. (arguments.callee will just give you a reference to the current function) * *Function.caller, though non-standard according to ECMA3, is implemented across all current major browsers. *arguments.caller is deprecated in favour of Function.caller, and isn't implemented in some current major browsers (e.g. Firefox 3). So the situation is less than ideal, but if you want to access the calling function in Javascript across all major browsers, you can use the Function.caller property, either accessed directly on a named function reference, or from within an anonymous function via the arguments.callee property. A: It is better to use named functions than arguments.callee: function foo () { ... foo() ... } is better than function () { ... arguments.callee() ... } The named function will have access to its caller through the caller property: function foo () { alert(foo.caller); } which is better than function foo () { alert(arguments.callee.caller); } The deprecation is due to current ECMAScript design principles. A: Early versions of JavaScript did not allow named function expressions, and because of that we could not make a recursive function expression: // This snippet will work: function factorial(n) { return (!(n>1))? 1 : factorial(n-1)*n; } [1,2,3,4,5].map(factorial); // But this snippet will not: [1,2,3,4,5].map(function(n) { return (!(n>1))? 1 : /* what goes here? */ (n-1)*n; }); To get around this, arguments.callee was added so we could do: [1,2,3,4,5].map(function(n) { return (!(n>1))? 1 : arguments.callee(n-1)*n; }); However this was actually a really bad solution as this (in conjunction with other arguments, callee, and caller issues) make inlining and tail recursion impossible in the general case (you can achieve it in select cases through tracing etc, but even the best code is sub optimal due to checks that would not otherwise be necessary). The other major issue is that the recursive call will get a different this value, for example: var global = this; var sillyFunction = function (recursed) { if (!recursed) return arguments.callee(true); if (this !== global) alert("This is: " + this); else alert("This is the global"); } sillyFunction(); Anyhow, EcmaScript 3 resolved these issues by allowing named function expressions, e.g.: [1,2,3,4,5].map(function factorial(n) { return (!(n>1))? 1 : factorial(n-1)*n; }); This has numerous benefits: * *The function can be called like any other from inside your code. *It does not pollute the namespace. *The value of this does not change. *It's more performant (accessing the arguments object is expensive). Whoops, Just realised that in addition to everything else the question was about arguments.callee.caller, or more specifically Function.caller. 
At any point in time you can find the deepest caller of any function on the stack, and as I said above, looking at the call stack has one single major effect: It makes a large number of optimizations impossible, or much much more difficult. Eg. if we can't guarantee that a function f will not call an unknown function, then it is not possible to inline f. Basically it means that any call site that may have been trivially inlinable accumulates a large number of guards, take: function f(a, b, c, d, e) { return a ? b * c : d * e; } If the js interpreter cannot guarantee that all the provided arguments are numbers at the point that the call is made, it needs to either insert checks for all the arguments before the inlined code, or it cannot inline the function. Now in this particular case a smart interpreter should be able to rearrange the checks to be more optimal and not check any values that would not be used. However in many cases that's just not possible and therefore it becomes impossible to inline. A: Just an extension. The value of "this" changes during recursion. In the following (modified) example, factorial gets the {foo:true} object. [1,2,3,4,5].map(function factorial(n) { console.log(this); return (!(n>1))? 1 : factorial(n-1)*n; }, {foo:true} ); factorial called first time gets the object, but this is not true for recursive calls.
{ "language": "en", "url": "https://stackoverflow.com/questions/103598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "221" }
Q: What are you using for Distributed Caching in web farms running ASP.NET? I am curious as to what others are using in this situation. I know a couple of the options that are out there like a memcached port or ScaleOutSoftware. The memcached ports don't seem to be actively worked on (correct me if I'm wrong). ScaleOutSoftware is too expensive for me (I don't doubt it is worth it). This is not to say that I don't want to hear about people using memcached or ScaleOutSoftware. I'm just stating what I "know" at this point. So my question is basically this: for those of you ACTIVELY using distributed caching, what are you using, are you happy with it, and what should I look out for? I am moving to two servers very soon...both will be at the same location. I use caching fairly heavily (but carefully) to reduce the load on my database server. Edit: I downloaded Scaleout Software's solution. I've coded for it and it seems to work real well. I just have to decide if my wallet will part with the cash for it. :) Anyone have experiences good or bad with ScaleoutSoftware? Edit Again: It's been a little while since I asked this? Any more thoughts on it? We ended up buying the solution from ScaleOutSoftware and have been happy with it, but I'm curious what others are doing. A: Microsoft has a product pending code-named Velocity. It's still in CTP, and is moving slowly, but looks like it will be pretty good. We'll be beating it up in the near future to see how it handles what we want it to do (> 2 million read/writes per hour). Will post back with results. A: There is a 100% native .NET, well documented open source (LGPL) project called Shared Cache. Looks like it is not yet mentioned on SO, but it's promising and should be able to do what most people expect from a distributed cache. It even supports different strategies like distributed or replicated caching etc. I will update this post with more details as soon as I had a chance to try it on a real project. A: We are using the memcached port for Windows and we are very pleased with it. The enyim.com memcached client API is great and easy to work with. It's also open source, which is a big advantage, if you ask me. We are now using this setup in a production web-app and it has helped a lot in improving its performance. A: There's a great .NET wrapper/port found here on Codeplex. Awesomesauce! A: We're currently using an incredibly simple cache that I wrote in a couple of hours, based on re-hosting the ASP.NET cache in a Windows Service (more info and source code here). I won't pretend it's anywhere near as optimised as something like Memcached but we were just looking for something simple and free until Velocity came along, and it's held up extremely well even under fairly heavy load. It comes down to our personal preference for core components - i.e. ones that affect whether the site is available or not - that they are either (a) supported by a vendor with a history of rapid and high quality support, or (b) written by us so that if something goes wrong we can fix it quickly. Open source is all well and good, and indeed we do use some OSS, but if your site is offline then unfortunately newsgroups et al don't have a 1 hour SLA, and just because it's OSS doesn't mean you have the necessary understanding or ability to fix it yourself. A: We use memcached with the enyim library in a production environment (www.funda.nl). Works fine, very pleased with it, but we did notice a substantial raise in CPU use on the clients. 
Presumably due to the serializing/deserializing going on. We do around 1000 reads per second. A: One product tried and tested by hundreds of customers worldwide is NCache. It's a feature-rich product that lets you store session state in a redundant and highly available manner, lets you share data within the enterprise as well as bridge over the WAN (essentially acting as a data fabric), and lastly lets you build an elastic caching tier so that when your application scales, you can add servers to the cache and actually boost performance further.
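Whichever store you end up with (a memcached port via the enyim client, Shared Cache, Velocity, or NCache), the calling code tends to follow the same cache-aside idiom. A rough sketch against a hypothetical ICache abstraction - deliberately not any particular vendor's API - to keep the web farm from hammering the database:

// Hypothetical abstraction so the distributed cache vendor can be swapped out.
public interface ICache
{
    object Get(string key);
    void Set(string key, object value, TimeSpan timeToLive);
}

public class ProductRepository
{
    private readonly ICache _cache;
    public ProductRepository(ICache cache) { _cache = cache; }

    public Product GetProduct(int id)
    {
        string key = "product:" + id;
        Product cached = _cache.Get(key) as Product;        // 1. try the distributed cache
        if (cached != null) return cached;

        Product fromDb = LoadFromDatabase(id);              // 2. fall back to the database
        _cache.Set(key, fromDb, TimeSpan.FromMinutes(5));   // 3. populate for the next request
        return fromDb;
    }

    private Product LoadFromDatabase(int id)
    {
        // Data access elided; this is the expensive call the cache is protecting.
        return new Product { Id = id };
    }
}

public class Product { public int Id { get; set; } }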
{ "language": "en", "url": "https://stackoverflow.com/questions/103601", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Salesforce.com - Why is uploading an Apex class not working? I have an Apex class (controller) originally developed under Developer Edition and need to upload it to production, which is Enterprise Edition. The upload fails with the following message: classes/RenewalController.cls(RenewalController):An error occurred on your page. package.xml(RenewalController):An object 'RenewalController' of type ApexClass was named in manifest but was not found in zipped directory I get the same message when I try to use the Force.com IDE: Save error: An error occurred on your page. This class works under Developer Edition but not with Enterprise Edition. What can the problem be? A: Dmytro, you are correct. Visualforce pages, Apex classes, and components must be uploaded in the correct order. Generally the pattern I use is to upload the controllers, then components, and then Visualforce pages. A: A controller class may reference other custom Salesforce objects such as pages. If the controller is uploaded before these objects, the error "Save error: An error occurred on your page." is reported. The correct order of uploading custom components should be used.
{ "language": "en", "url": "https://stackoverflow.com/questions/103616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Tools to view/solve Windows XP memory fragmentation We have a java program that requires a large amount of heap space - we start it with (among other command line arguments) the argument -Xmx1500m, which specifies a maximum heap space of 1500 MB. When starting this program on a Windows XP box that has been freshly rebooted, it will start and run without issues. But if the program has run several times, the computer has been up for a while, etc., when it tries to start I get this error: Error occurred during initialization of VM Could not reserve enough space for object heap Could not create the Java virtual machine. I suspect that Windows itself is suffering from memory fragmentation, but I don't know how to confirm this suspicion. At the time that this happens, Task manager and sysinternals procexp report 2000MB free memory. I have looked at this question related to internal fragmentation So the first question is, How do I confirm my suspicion? The second question is, if my suspicions are correct, does anyone know of any tools to solve this problem? I've looked around quite a bit, but I haven't found anything that helps, other than periodic reboots of the machine. ps - changing operating systems is also not currently a viable option. A: Unless you are running out of page file space, this issue isn't that the computer is running out of memory. The whole point of virtual memory is to allow the processes to use more virtual memory than is physically available. Not knowing how the JVM handles the heap, it is a bit hard to say exactly what the problem is, but one of the common issues is that there isn't enough contiguous free address space available in your process to allow the heap to be extended. Why this would be a problem after the machine has been running a while is a bit confusing. I've been working on a similar problem at work. I have found that running the program using WinDBG and using the "!address" and "!address -summary" commands have been invaluable in tracking down why a processes' virtual address space has become fragmented. You can also try running the program after reboot and using the "!address" command to take a picture of the address space and then do the same when the program no longer runs. This might clue you in on the problem. Maybe something simple as an extra DLL getting loading might cause the problem. A: Agree with Torlack, a lot of this is because other DLLs are getting loaded and go into certain spots, breaking up the amount of memory you can get for the VM in one big chunk. You can do some work on WinXP if you have more than 3G of memory to get some of the windows stuff moved around, look up PAE here: http://www.microsoft.com/whdc/system/platform/server/PAE/PAEdrv.mspx Your best bet, if you really need more than 1.2G of memory for your java app, is to look at 64 bit windows or linux or OSX. If you're using any kind of native libraries with your app you'll have to recompile them for 64 bit, but its going to be a lot easier than trying to rebase dlls and stuff to maximize the memory you can get on 32 bit windows. Another option would be to split your program up into multiple VMs and have them communicate with eachother via RMI or messaging or something. That way each VM can have some subset of the memory you need. Without knowing what your app does, i'm not sure that this will help in any way, though... A: I suspect that the problem is Windows memory fragmentation. 
There is another question here on StackOverflow called Java Maximum Memory on Windows XP that mentions using Process Explorer to look at where DLLs are mapped into memory, and then addressing the problem by rebasing the DLLs so that they load into memory in a more compact way. A: Using Minimem (http://minimem.kerkia.net/) for that application might fix your problem. However, I'm not sure this is the answer you are looking for. I hope it helps. A: Maybe you should consider starting the program once, reserving the memory, and not ending the VM after each run. Look for different GC options and release your objects. A: Use vmmap from Microsoft's SysInternals tools to view the fragmentation of the virtual address space, and identify what's breaking up the space.
{ "language": "en", "url": "https://stackoverflow.com/questions/103622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: jQuery Menu and ASP.NET Sitemap Is it possible to use an ASP.NET web.sitemap with a jQuery Superfish menu? If not, are there any standards based browser agnostic plugins available that work with the web.sitemap file? A: Yes, it is totally possible. I have used it with the ASP:Menu control and jQuery 1.2.6 with the Superfish plugin. Note, you will need the ASP.NET 2.0 CSS Friendly Control Adapters. ASP.NET generates the ASP:Menu control as a table layout. The CSS Friendly Control Adapter will make ASP.NET generate the ASP:Menu control as a UL/LI layout inside a div. This will allow easy integration of the jQuery and Superfish plugin because the Superfish plugin relies on a UL/LI layout. A: I found this question while looking for the same answer... everyone says it's possible but no-one gives the actual solution! I seem to have it working now so thought I'd post my findings... Things I needed: * *Superfish which also includes a version of jQuery *CSS Friendly Control Adaptors download DLL and .browsers file (into /bin and /App_Browsers folders respectively) *ASP.NET SiteMap (a .sitemap XML file and siteMap provider entry in web.config) My finished Masterpage.master has the following head tag: <head runat="server"> <script type="text/javascript" src="/script/jquery-1.3.2.min.js"></script> <script type="text/javascript" src="/script/superfish.js"></script> <link href="~/css/superfish.css" type="text/css" rel="stylesheet" media="screen" runat="server" /> <script type="text/javascript"> $(document).ready(function() { $('ul.AspNet-Menu').superfish(); }); </script> </head> Which is basically all the stuff needed for the jQuery Superfish menu to work. Inside the page (where the menu goes) looks like this (based on these instructions): <asp:SiteMapDataSource ID="SiteMapDataSource" runat="server" ShowStartingNode="false" /> <asp:Menu ID="Menu1" runat="server" DataSourceID="SiteMapDataSource" Orientation="Horizontal" CssClass="sf-menu"> </asp:Menu> Based on the documentation, this seems like it SHOULD work - but it doesn't. The reason is that the CssClass="sf-menu" gets overwritten when the Menu is rendered and the <ul> tag gets a class="AspNet-Menu". I thought the line $('ul.AspNet-Menu').superfish(); would help, but it didn't. ONE MORE THING Although it is a hack (and please someone point me to the correct solution) I was able to get it working by opening the superfish.css file and search and replacing sf-menu with AspNet-Menu... and voila! the menu appeared. I thought there would be some configuration setting in the asp:Menu control where I could set the <ul> class but didn't find any hints via google. A: It looks like you need to generate a UL for Superfish. You should be able to do this with ASP.Net from your site map. I think the site map control will do something like this. If not, it should be pretty trivial to call the site map directly from C# and generate the DOM programmatically. You could build a user control to do this, or do it in the master page. Check out this MSDN article on how to programmatically enumerate the nodes in your site map. A: Remember to add css classes for NonLink elements. Superfish css elements don't acccont for them. And if you're like me and have root menu's that are not links, then it renders horribly. Just add AspNet-Menu-NonLink elements to the superfish.css file and it should render fine. A: The SiteMapDataSource control should be able to bind to any hierarchical data bound control. 
I'm not familiar with Superfish, but I know there are plenty of jQuery-ish controls out there to do this. A: Remember to add CSS classes for NonLink elements. The Superfish CSS doesn't account for them. And if you're like me and have root menus that are not links, then it renders horribly. Just add AspNet-Menu-NonLink elements to the superfish.css file and it should render fine. A: I created a neat little sample project you can use at http://simplesitemenu.codeplex.com/ It is a composite control which generates a nested UL/LI listing from your sitemap. Enjoy!
{ "language": "en", "url": "https://stackoverflow.com/questions/103630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Pascal's Theorem for non-unique sets? Pascal's rule on counting the subset's of a set works great, when the set contains unique entities. Is there a modification to this rule for when the set contains duplicate items? For instance, when I try to find the count of the combinations of the letters A,B,C,D, it's easy to see that it's 1 + 4 + 6 + 4 + 1 (from Pascal's Triangle) = 16, or 15 if I remove the "use none of the letters" entry. Now, what if the set of letters is A,B,B,B,C,C,D? Computing by hand, I can determine that the sum of subsets is: 1 + 4 + 8 + 11 + 11 + 8 + 4 + 1 = 48, but this doesn't conform to the Triangle I know. Question: How do you modify Pascal's Triangle to take into account duplicate entities in the set? A: A set only contains unique items. If there are duplicates, then it is no longer a set. A: Yes, if you don't want to consider sets, consider the idea of 'factors.' How many factors does: p1^a1.p2^a2....pn^an have if p1's are distinct primes. If the ai's are all 1, then the number is 2^n. In general, the answer is (a1+1)(a2+1)...(an+1) as David Nehme notes. Oh, and note that your answer by hand was wrong, it should be 48, or 47 if you don't want to count the empty set. A: It looks like you want to know how many sub-multi-sets have, say, 3 elements. The math for this gets very tricky, very quickly. The idea is that you want to add together all of the combinations of ways to get there. So you have C(3,4) = 4 ways of doing it with no duplicated elements. B can be repeated twice in C(1,3) = 3 ways. B can be repeated 3 times in 1 way. And C can be repeated twice in C(1,3) = 3 ways. For 11 total. (Your 10 you got by hand was wrong. Sorry.) In general trying to do that logic is too hard. The simpler way to keep track of it is to write out a polynomial whose coefficients have the terms you want which you multiply out. For Pascal's triangle this is easy, the polynomial is (1+x)^n. (You can use repeated squaring to calculate this more efficiently.) In your case if an element is repeated twice you would have a (1+x+x^2) factor. 3 times would be (1+x+x^2+x^3). So your specific problem would be solved as follows: (1 + x) (1 + x + x^2 + x^3) (1 + x + x^2) (1 + x) = (1 + 2x + 2x^2 + 2x^3 + x^4)(1 + 2x + 2x^2 + x^3) = 1 + 2x + 2x^2 + x^3 + 2x + 4x^2 + 4x^3 + 2x^4 + 2x^2 + 4x^3 + 4x^4 + 2x^5 + 2x^3 + 4x^4 + 4x^5 + 2x^6 + x^4 + 2x^5 + 2x^6 + x^7 = 1 + 4x + 8x^2 + 11x^3 + 11x^4 + 8x^5 + 4x^6 + x^7 If you want to produce those numbers in code, I would use the polynomial trick to organize your thinking and code. (You'd be working with arrays of coefficients.) A: You don't need to modify Pascal's Triangle at all. Study C(k,n) and you'll find out -- you basically need to divide the original results to account for the permutation of equivalent letters. E.g., A B1 B2 C1 D1 == A B2 B1 C1 D1, therefore you need to divide C(5,5) by C(2,2). A: Without duplicates (in a set as earlier posters have noted), each element is either in or out of the subset. So you have 2^n subsets. With duplicates, (in a "multi-set") you have to take into account the number the number of times each element is in the "sub-multi-set". If it m_1,m_2...m_n represent the number of times each element repeats, then the number of sub-bags is (1+m_1) * (1+m_2) * ... (1+m_n). A: Even though mathematical sets do contain unique items, you can run into the problem of duplicate items in 'sets' in the real world of programming. See this thread on Lisp unions for an example.
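To tie the answers together in code: the "polynomial trick" is just repeated multiplication of coefficient arrays, one factor (1 + x + ... + x^m) per distinct element with multiplicity m. A small sketch (the method name and array representation are illustrative) that reproduces the 1, 4, 8, 11, 11, 8, 4, 1 counts for A,B,B,B,C,C,D:

static int[] SubMultisetCounts(int[] multiplicities)
{
    int[] coeffs = { 1 };                              // the constant polynomial 1
    foreach (int m in multiplicities)
    {
        int[] next = new int[coeffs.Length + m];
        for (int i = 0; i < coeffs.Length; i++)        // multiply by (1 + x + ... + x^m)
            for (int power = 0; power <= m; power++)
                next[i + power] += coeffs[i];
        coeffs = next;
    }
    return coeffs;                                     // coeffs[k] = number of sub-multisets of size k
}

// SubMultisetCounts(new[] { 1, 3, 2, 1 }) returns 1, 4, 8, 11, 11, 8, 4, 1 - summing to 48.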
{ "language": "en", "url": "https://stackoverflow.com/questions/103633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Why don't languages raise errors on integer overflow by default? In several modern programming languages (including C++, Java, and C#), the language allows integer overflow to occur at runtime without raising any kind of error condition. For example, consider this (contrived) C# method, which does not account for the possibility of overflow/underflow. (For brevity, the method also doesn't handle the case where the specified list is a null reference.) //Returns the sum of the values in the specified list. private static int sumList(List<int> list) { int sum = 0; foreach (int listItem in list) { sum += listItem; } return sum; } If this method is called as follows: List<int> list = new List<int>(); list.Add(2000000000); list.Add(2000000000); int sum = sumList(list); An overflow will occur in the sumList() method (because the int type in C# is a 32-bit signed integer, and the sum of the values in the list exceeds the value of the maximum 32-bit signed integer). The sum variable will have a value of -294967296 (not a value of 4000000000); this most likely is not what the (hypothetical) developer of the sumList method intended. Obviously, there are various techniques that can be used by developers to avoid the possibility of integer overflow, such as using a type like Java's BigInteger, or the checked keyword and /checked compiler switch in C#. However, the question that I'm interested in is why these languages were designed to by default allow integer overflows to happen in the first place, instead of, for example, raising an exception when an operation is performed at runtime that would result in an overflow. It seems like such behavior would help avoid bugs in cases where a developer neglects to account for the possibility of overflow when writing code that performs an arithmetic operation that could result in overflow. (These languages could have included something like an "unchecked" keyword that could designate a block where integer overflow is permitted to occur without an exception being raised, in those cases where that behavior is explicitly intended by the developer; C# actually does have this.) Does the answer simply boil down to performance -- the language designers didn't want their respective languages to default to having "slow" arithmetic integer operations where the runtime would need to do extra work to check whether an overflow occurred, on every applicable arithmetic operation -- and this performance consideration outweighed the value of avoiding "silent" failures in the case that an inadvertent overflow occurs? Are there other reasons for this language design decision as well, other than performance considerations? A: C/C++ never mandate trap behaviour. Even the obvious division by 0 is undefined behaviour in C++, not a specified kind of trap. The C language doesn't have any concept of trapping, unless you count signals. C++ has a design principle that it doesn't introduce overhead not present in C unless you ask for it. So Stroustrup would not have wanted to mandate that integers behave in a way which requires any explicit checking. Some early compilers, and lightweight implementations for restricted hardware, don't support exceptions at all, and exceptions can often be disabled with compiler options. Mandating exceptions for language built-ins would be problematic. Even if C++ had made integers checked, 99% of programmers in the early days would have turned if off for the performance boost... A: Because checking for overflow takes time. 
Each primitive mathematical operation, which normally translates into a single assembly instruction would have to include a check for overflow, resulting in multiple assembly instructions, potentially resulting in a program that is several times slower. A: It is likely 99% performance. On x86 would have to check the overflow flag on every operation which would be a huge performance hit. The other 1% would cover those cases where people are doing fancy bit manipulations or being 'imprecise' in mixing signed and unsigned operations and want the overflow semantics. A: In C#, it was a question of performance. Specifically, out-of-box benchmarking. When C# was new, Microsoft was hoping a lot of C++ developers would switch to it. They knew that many C++ folks thought of C++ as being fast, especially faster than languages that "wasted" time on automatic memory management and the like. Both potential adopters and magazine reviewers are likely to get a copy of the new C#, install it, build a trivial app that no one would ever write in the real world, run it in a tight loop, and measure how long it took. Then they'd make a decision for their company or publish an article based on that result. The fact that their test showed C# to be slower than natively compiled C++ is the kind of thing that would turn people off C# quickly. The fact that your C# app is going to catch overflow/underflow automatically is the kind of thing that they might miss. So, it's off by default. I think it's obvious that 99% of the time we want /checked to be on. It's an unfortunate compromise. A: Backwards compatibility is a big one. With C, it was assumed that you were paying enough attention to the size of your datatypes that if an over/underflow occurred, that that was what you wanted. Then with C++, C# and Java, very little changed with how the "built-in" data types worked. A: I think performance is a pretty good reason. If you consider every instruction in a typical program that increments an integer, and if instead of the simple op to add 1, it had to check every time if adding 1 would overflow the type, then the cost in extra cycles would be pretty severe. A: You work under the assumption that integer overflow is always undesired behavior. Sometimes integer overflow is desired behavior. One example I've seen is representation of an absolute heading value as a fixed point number. Given an unsigned int, 0 is 0 or 360 degrees and the max 32 bit unsigned integer (0xffffffff) is the biggest value just below 360 degrees. int main() { uint32_t shipsHeadingInDegrees= 0; // Rotate by a bunch of degrees shipsHeadingInDegrees += 0x80000000; // 180 degrees shipsHeadingInDegrees += 0x80000000; // another 180 degrees, overflows shipsHeadingInDegrees += 0x80000000; // another 180 degrees // Ships heading now will be 180 degrees cout << "Ships Heading Is" << (double(shipsHeadingInDegrees) / double(0xffffffff)) * 360.0 << std::endl; } There are probably other situations where overflow is acceptable, similar to this example. A: If integer overflow is defined as immediately raising a signal, throwing an exception, or otherwise deflecting program execution, then any computations which might overflow will need to be performed in the specified sequence. Even on platforms where integer overflow checking wouldn't cost anything directly, the requirement that integer overflow be trapped at exactly the right point in a program's execution sequence would severely impede many useful optimizations. 
If a language were to specify that integer overflows would instead set a latching error flag, were to limit how actions on that flag within a function could affect its value within calling code, and were to provide that the flag need not be set in circumstances where an overflow could not result in erroneous output or behavior, then compilers could generate more efficient code than any kind of manual overflow-checking programmers could use. As a simple example, if one had a function in C that would multiply two numbers and return a result, setting an error flag in case of overflow, a compiler would be required to perform the multiplication whether or not the caller would ever use the result. In a language with looser rules like I described, however, a compiler that determined that nothing ever uses the result of the multiply could infer that overflow could not affect a program's output, and skip the multiply altogether. From a practical standpoint, most programs don't care about precisely when overflows occur, so much as they need to guarantee that they don't produce erroneous results as a consequence of overflow. Unfortunately, programming languages' integer-overflow-detection semantics have not caught up with what would be necessary to let compilers produce efficient code. A: My understanding of why errors would not be raised by default at runtime boils down to the legacy of desiring to create programming languages with ACID-like behavior. Specifically, the tenet that anything that you code it to do (or don't code), it will do (or not do). If you didn't code some error handler, then the machine will "assume" by virtue of no error handler, that you really want to do the ridiculous, crash-prone thing you're telling it to do. (ACID reference: http://en.wikipedia.org/wiki/ACID)
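For reference, C# exposes both behaviours the question mentions; a small sketch of the checked and unchecked keywords, using the 2000000000 + 2000000000 example from the question:

int a = 2000000000;
int b = 2000000000;

int wrapped = unchecked(a + b);   // silently wraps to -294967296 (the default without /checked)

try
{
    int sum = checked(a + b);     // overflow is detected at runtime
}
catch (OverflowException)
{
    Console.WriteLine("Overflow detected");
}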
{ "language": "en", "url": "https://stackoverflow.com/questions/103654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50" }
Q: What makes you a C programming expert? I attended a job fair yesterday and a developer asked me how I would rank my proficiency in C. I then realized that this is incredibly arbitrary and almost impossible to nail down, so my question is what knowledge makes you an expert in programming C? Edit: or what would the breakdown be? what makes you good, decent, proficient, etc. Edit again: I was looking more for like a list of skills or some other constructive measure by which to judge one's own proficiency in C, as that's hard to do. List so far: * *Experience in large projects *Mastery of Pointers (and memory management, I'd assume) *Mastery of a debugger (gdb, ...) *Mastery of a profiler (gprof,...) *Mastery of a memory profiler (valgrind, ...) *Knowledge of the fundamental standards A: Everyone is an expert at a job fair A: You're an expert in c if you can answer all the questions tagged "c" on stackoverflow.com without blinking. A: This doesn't directly answer your question (sorry), but it might help you decide how you classify yourself. Instead of just "expert" and "clueless newbie" I prefer the three-level system of expertise used by the medieval guilds: Apprentice * *Still needs to RTFM. *Getting to grips with the tools and techniques of his craft. *Needs supervision. Journeyman * *Has Read The Effin' Manual. *Competent with all the standard tools and techniques of his craft. *Can work alone, and can supervise apprentices on routine jobs. Master * *Could have Written The Effin' Manual. *Is developing or adopting new tools and techniques. *Can oversee a major project that might never have been attempted before. At a job fair? There are no experts: everybody's an expert. :) A: Some may disagree but I think experience is key to being an expert in any language. I know plenty of people who've past the certification test but couldn't apply their knowledge to anything practical in the real world. So I think overall being an expert is a product of having enough knowledge on a given subject (C) and then having applied it to enough real world scenarios to make the mistakes that we all do and learn from them. A: The answers to this question do make for some interesting reading - it seems that we can't get good convergence on what defines an expert here. What hope is there going to be in a broader forum like a jobs fair? :-) But to put my own 2 cents in... I think there's two kinds of C expert. * *There's the expert in the academic sense (as in "could write their own compiler", "has written papers"). *There's the pragmatic expert. I would like to define this as "someone who can write elegant C code that anyone can understand". I would take one of the latter over the former in a heartbeat. If you've got a chunk of code written by an expert that is so brittle that can only be read and understood by another expert then for all intents that code is unmaintainable. It's all very nice that the author of this code remembers the intricacies of type conversions in the middle of expressions, but it's much better if the code has been written so that it's completely unambiguous. Projects usually have enough technical challenge without adding the need for all team members to have memorized the C'99 standard. A: I think the trouble with this question is that the answer is kind of meaningless. I see people talking about experience, and that's good, and I see people talking about understanding the intricacies of the language, and that's good. 
However, if I were hiring someone to work on my C project, and I had a magic 8-ball that would give me an accurate answer to any one (and only one) question, I would never ask it, "Are they a C expert". Why? Just because someone is a C expert doesn't mean that they're a good software developer. Experience and language familiarity are good, but I think they are both trumped by that intangible, un-quantifiable property that makes someone a "good software developer". What I'm trying to say is, "What makes you a C programming expert?" is not a useful question, because there are more important questions. If someone is a Good Programmer, they will rise to the occasion. As an example: You can be a C programming expert and be horrible on a team. You can be a C programming expert and refuse to use version control. You can be a C programming expert without knowing how to actually DO anything with C. The "without" clauses in those sentences are equally important questions: What makes you a good team programmer? What's the best way to use SCM x or y? How do you approach programming a client/server game, or billing application, or web browser, or operating system, or compiler, in C? If a candidate told me "No, I am not a C expert", but gave me great answers to these other questions, I would hire them in a heartbeat over the guy who the magic 8-ball said was a C expert, but doesn't know how to check his code into subversion and hasn't learned a new language in 12 years. A: You're an expert in C when you can write your own C compiler. A: When I interviewed with Google, the interviewer told me to think about it this way. On a scale of 1-10 for C proficiency, to say you're a "10" means you've written papers and/or books or been a speaker in a conference on programming in C. Based on this, very few people are 10s. FWIW, I have been programming in C for 15 years. I consider myself very proficient. I'd perhaps give myself a solid 8 or 8.5. A: Interview questions like this are always tough. You want to blow your own horn a little, but not sound like a blowhard. If you have done a lot with C (say, worked on open source projects in C), then I'd respond with that, but not just by pointing to the list of accomplishments on your resume, but by talking about one or more of them and what was particularly interesting or challenging about it (in regards to its use of C). A: How about having read "Expert C Programming" by Peter van der Linden and remembering everything he covered? A: Lacking a standard test there's really no way to decide what expert level is but here are a few of my litmus tests, everyone's list is different, I'm sure. Without looking at documentation: * *Know the precedence of the main operators so you don't have to litter your code with parens to avoid getting the wrong order of evaluation *Be able to write a prototype for a simple function pointer *Be comfortable with passing a pointer to a pointer *Understand block, function, module scope There are more items like this. On the other hand, I don't think you have to be able to understand or be able to write out Duff's device or figure out obfuscated C contests in your head to consider yourself an expert. Even if I considered myself an expert (not sure I do) I probably would never claim it in a job interview. Andrew A: To someone less skilled than you, you're an expert. To someone more skilled than you, you're a newbie. A: Experience is key, knowing the "rules" and syntax of the of the language is of course a must, but it is only a base. 
Learning the common pitfalls and idioms for doing things right is key. Knowing what resources, if any, exist to get help from while you're programming, and of course, knowing your tool chain. I've known many C++ "experts" who had never used a debugger, or a memory tracker. If you ask me, being an expert in something is different from being proficient in it: it means knowing all aspects of it. A: Mastery of pointers. A: I would say that for any given language, experience is the key thing. It just takes time to learn a language and learn the APIs and 'idioms' that the language uses. Whether someone is an expert in anything or not is something that should be asked of one's peers. To paraphrase Jeff Foxworthy, "If you answer more questions than you ask, you might be an expert." A: I think a fair answer would be understanding all of the intricacies of ISO C. The reality, as any comp.lang.c regular will tell you, is that almost nothing that people need to do can be done in pure ISO C, as you generally need to interact with your environment in a more well-defined way. That's where POSIX comes in. I would not blink at anyone who ranked themselves as an "Expert" who had a solid understanding of the language of C, a decent understanding of what ISO C promises, and a working understanding of the POSIX functions. A: Being able to write papers/books doesn't necessarily make one an expert programmer. It takes plenty of hard work, practical experience and a good understanding of various C libraries. Good luck!
{ "language": "en", "url": "https://stackoverflow.com/questions/103669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: What's the best database access pattern for testability? I've read about and dabbled with a few, including active record, repository, and data transfer objects. Which is best? A: 'Best' questions are not really valid. The world is filled with combinations and variations. You should start with the question that you have to answer: What problem are you trying to solve? After you answer that, look at the tools that work best with the issue. A: While I agree "best" questions are not the greatest form (since they are so arbitrary), they're not totally irrelevant either. In the world constructed here at S.O. where developers vote on what's "best", why not have best questions? "Best questions" prompt discussion and differing opinions. Eventually, when someone "googles" 'data access pattern' they should come to this page and see a plethora of answers then, right? A: It really depends on your task. At the least you should know and understand all the database access patterns so you can choose the one most suitable for the current problem. A: This is a good question which should provoke some thought. Database access is often not subject to rigorous testing - particularly not automated testing - and I would certainly like to increase the amount of testing on my database. I'm using the MbUnit test framework running from inside Visual Studio to do some testing. Our application uses stored procedures wherever possible, and the tests I have written set up the database for testing, call a stored procedure, and check the results. For a collection of related stored procedures, we have a C# file with tests for those stored procs. (However, our coverage is probably about 1% so far!) Active record is an attractive option because of Ruby's built-in emphasis on automated testing. If I were starting over, that would be a point for using active record. A: Repository is probably the best pattern for testability, since it allows you to replace a repository with a mock when you need to test. ActiveRecord ties your models to the database (convenient sometimes, but generally more difficult to test).
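To make the repository suggestion in the last answer concrete, here is a minimal sketch of why the pattern tests well. It is written in Python purely as an illustration, and all of the names (User, UserRepository, deactivate_user, and so on) are invented for the example: the point is that the code under test depends on a small interface, and the test hands it an in-memory fake instead of a real database.

from dataclasses import dataclass
from typing import Dict, Optional, Protocol

@dataclass
class User:
    user_id: int
    name: str
    active: bool = True

class UserRepository(Protocol):
    """The seam the domain code depends on; the production version talks to the DB."""
    def get(self, user_id: int) -> Optional[User]: ...
    def save(self, user: User) -> None: ...

class InMemoryUserRepository:
    """Test double with the same interface but no database behind it."""
    def __init__(self) -> None:
        self._users: Dict[int, User] = {}

    def get(self, user_id: int) -> Optional[User]:
        return self._users.get(user_id)

    def save(self, user: User) -> None:
        self._users[user.user_id] = user

def deactivate_user(repo: UserRepository, user_id: int) -> bool:
    """Domain logic under test; it never knows which repository it was given."""
    user = repo.get(user_id)
    if user is None:
        return False
    user.active = False
    repo.save(user)
    return True

if __name__ == "__main__":
    repo = InMemoryUserRepository()
    repo.save(User(1, "Ada"))
    assert deactivate_user(repo, 1)
    assert repo.get(1).active is False
    assert not deactivate_user(repo, 99)
    print("repository fake behaves as expected")

The same shape carries over to any language: the production repository wraps the real data access, while the fake keeps everything in a dictionary, so the test never needs a live database.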
{ "language": "en", "url": "https://stackoverflow.com/questions/103676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Machine.Migrations mature enough to be used? OK, thanks to the first answerer; I found the code. Machine.Migrations: I mean the data schema/data migration framework. I mean the one mentioned here: http://blog.eleutian.com/2008/04/25/AFirstLookAtMachineMigrations.aspx OK, so has anybody used it? I think I would like to have such a framework in my project (I have been using Rails, and such a framework helps a lot in making migrations more systematic). A: It is an experimental library authored by the blog author. See http://blog.eleutian.com/Default.aspx#a2e4f933f-e00c-445e-ab64-17bf9a64d96f There is a link at the bottom of that post. A: I've just published a new migrations library called Mig#. You can find it on GitHub: https://github.com/dradovic/MigSharp It should be mature enough, as we are using it at my workplace for production code. Mig# currently offers support for SQL Server 2005/2008/CE 4, Oracle, and Teradata. On top of that, you get a nifty validation framework and more (see https://github.com/dradovic/MigSharp/wiki/Feature-Overview).
{ "language": "en", "url": "https://stackoverflow.com/questions/103677", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is a javascript hash table implementation that avoids object namespace collisions? First off: I'm using a rather obscure implementation of JavaScript embedded as a scripting engine for Adobe InDesign CS3. This implementation sometimes diverges from "standard" JavaScript, hence my problem. I'm using John Resig's jsdiff library (source here) to compare selections of text between two documents. jsdiff uses vanilla objects as associative arrays to map a word from the text to another object. (See the "ns" and "os" variables in jsdiff.js, around line 129.) My headaches start when the word "reflect" comes up in the text. "reflect" is a default, read-only property on all objects. When jsdiff tries to assign a value on the associative array to ns['reflect'], everything explodes. My question: is there a way around this? Is there a way to do a hash table in JavaScript without using the obvious vanilla object? Ground rules: switching scripting engines isn't an option. :) A: You might be "asking the wrong question" (as Raymond Chen would say); rather than trying to avoid using the vanilla objects, try changing the way the associative array members are named. The way I'd try to approach this: instead of there being an array member ns["reflect"], change the way that jsdiff builds the arrays so that the member is ns["_reflect"] or some other variation on that. A: If the JS implementation you're using supports the hasOwnProperty method for objects, you can use it to test whether a property has explicitly been set on an object or is inherited from its prototype. Example:
if (object.hasOwnProperty('testProperty')) {
    // do something
}
A: Well, given that objects in JavaScript are just associative arrays, there really isn't another built-in solution for a hash. You might be able to create your own pseudo hash table by wrapping a class around some arrays, although there will probably be a significant performance hit with the manual work involved. Just a side note: I haven't really used or looked at the jsdiff library, so I can't offer any specific tips or tricks.
{ "language": "en", "url": "https://stackoverflow.com/questions/103679", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: ReportViewer displaying black background in Print Layout mode In my ReportViewer control, when I click on Print Layout, the background turns black on the report. This must be a bug. Is there a workaround? A: Microsoft is already aware of this issue; its KB article is located here. You can solve the problem by installing Cumulative Update 1 (build 3161), which can be requested for download through the following Microsoft page. If you can wait a little while longer, I think the fix will come out with SQL Server 2005 SP3.
{ "language": "en", "url": "https://stackoverflow.com/questions/103688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: PHP/JS - Create thumbnails on the fly or store as files For an image hosting web application: For my stored images, is it feasible to create thumbnails on the fly using PHP (or whatever), or should I save 1 or more different sized thumbnails to disk and just load those? Any help is appreciated. A: I use phpThumb, as it's the best of both worlds. You can create thumbnails on the fly, but it automatically caches the images to speed up future requests. It creates a nice wrapper around the GD and ImageMagick libraries. Worth a look! A: Save thumbnails to disk. Image processing takes a lot of resources and, depending on the size of the image, might exceed the default allowed memory limit for php. It is less of a concern if you have your own server with only your application running but it still takes a lot of cpu power and memory to resize images. If you're considering creating thumbnails on the fly anyway, you don't have to change much - upon the first request, create the thumbnail from the source file, save it to disk and upon subsequent requests just read it off the disk. A: It would be much better to cache the thumbnails. Generating them on the fly would be very taxing on the system. A: It depends on the usage pattern of the site, but, basically, how many times do you expect each image to be viewed? In the case of thumbnails, they're most likely to be around for quite a while (the image is uploaded once and never changed, so the thumbnail doesn't change either), so it's generally worthwhile to generate when the full image is uploaded and store them for later. Unless the site is completely dead, they'll be viewed many (hundreds or thousands of) times over their lifetime and disk is a lot cheaper than latency these days. This also becomes more significant as load on the server increases, of course. Conversely, for something like stock charts that get updated every hour (if not more frequently), that would be a situation where you'd do better to create them on the fly, so as to avoid wasting CPU time on constantly generating images which no user will ever see. Or, if you want to get fancy, you can optimize to handle either access pattern by generating the images on the fly the first time they're needed and then showing the pre-generated one afterwards, up until the data it's generated from changes, at which point you delete it so that it will be regenerated the next time it's needed. But that would be overkill for something as static as thumbnails, IMO. A: check out the gd library and imagemagick
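The "generate on the first request, then serve from disk" advice above is language-agnostic. Here is a rough sketch of that caching pattern, shown in Python with the Pillow imaging library purely as an illustration (the original question is about PHP, where GD/ImageMagick or phpThumb fill the same role); the cache directory, thumbnail size, and function name are all invented for the example.

import os
from PIL import Image  # Pillow; a third-party package assumed to be installed

CACHE_DIR = "thumb_cache"      # illustrative location
THUMB_SIZE = (150, 150)        # illustrative size

def get_thumbnail(source_path):
    """Return the path of a cached thumbnail, generating it on the first request."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    thumb_path = os.path.join(CACHE_DIR, os.path.basename(source_path) + ".thumb.jpg")

    # Regenerate only if the thumbnail is missing or older than the source image.
    if (not os.path.exists(thumb_path)
            or os.path.getmtime(thumb_path) < os.path.getmtime(source_path)):
        with Image.open(source_path) as im:
            im.thumbnail(THUMB_SIZE)  # resize in place, keeping aspect ratio
            im.convert("RGB").save(thumb_path, "JPEG", quality=85)

    return thumb_path

Subsequent requests pay only the cost of a file-existence check, which is the trade-off the answers above are describing.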
{ "language": "en", "url": "https://stackoverflow.com/questions/103707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Disappearing checkbox label in ASP.Net 3.5 DetailsView control I have a web form with a button and a DetailsView control on it. In the button's click event I change the DetailsView control to insert mode so I can add records: DetailsView1.ChangeMode(DetailsViewMode.Insert) Everything works fine, except for a checkbox in the DetailsView. When the DetailsView goes into insert mode, the text describing what the checkbox is for disappears. The checkbox itself works fine. Why is my text disappearing and how can I fix it? A: I was able to fix my problem by changing it to a template field. Not sure why it wouldn't work the other way. A: Is the text in a Label that is in the item template? If so, you'd need to add it to the Edit Item Template. Also check that the width of the control is wide enough for all the controls and text. It may be getting hidden due to absolute positioning. A: Thanks for the quick reply. The text isn't in a template. It's just a CheckBoxField with the Text property set to "Active": I've tried widening the field and the DetailsView control, but the text still disappears when I click the button.
{ "language": "en", "url": "https://stackoverflow.com/questions/103708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there a way to get FlexBuilder 3 to treat a project as an Application and a Library? My team builds reusable libraries for other (internal) software development teams. We use FlexBuilder 3 as our development environment. Our SCM standards state that these projects must include test harnesses and a unit test runner, and (of course) we want to be able to use the debugger. For that reason, all the projects are Applications. Our build scripts (used primarily by the CI system and for release deployment) build our actual libraries, which works great. This approach is used so that FlexBuilder is not required to actually build our production artifacts (on the command line). The problem is this - in order to add a FlexBuilder Project to the Library Path for an Application, it must be a Library Project. I have tried adding a nature to the project that we want included, but haven't gotten it to work yet. You would want to do that if you wanted to debug source files in another project. A simple (yet annoying) workaround is to include the source folder of the "library project" as a source folder in the "application project." It's annoying because it takes multiple steps to swap between a SWC of the "library project" and the source folder of the project itself. A: I would also suggest breaking this up into 2 projects. Have 1 library project and 1 application for the tests and the test runner. On a side note: FlexBuilder 4 will have support for running FlexUnit tests in the IDE, for both Flex applications and Flex library projects. So you won't have to maintain an application just for the sake of running the tests. A: Assuming it is possible, I'd suggest adjusting your SCM standards to allow test harnesses and unit test runners to exist in other projects. Simply mandate that any library project must include a companion test project. A: I don't know that this is going to make it any easier, but I would actually make the library and the testing harness separate projects. This would allow you to source control each and would solve your problem with FlexBuilder. It's not going to make it easier to work with, but it will be cleaner and the easiest to update. A: I didn't totally understand the description of your situation, but if it's helpful, I'll describe how we have organized our Flex projects. The majority of our application code is contained within a SWC ("library") project. We then create two SWF ("application") projects - a "shell" application which represents the final output SWF, and a test harness FlexUnit 2 application. Both of these SWF projects reference the SWC project using a source path. Using this approach has made it trivial to enable unit testing for the application codebase in the SWC.
{ "language": "en", "url": "https://stackoverflow.com/questions/103723", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is there a way to programmatically determine if a font file has a specific Unicode Glyph? I'm working on a project that generates PDFs that can contain fairly complex math and science formulas. The text is rendered in Times New Roman, which has pretty good Unicode coverage, but not complete. We have a system in place to swap in a more Unicode complete font for code points that don't have a glyph in TNR (like most of the "stranger" math symbols,) but I can't seem to find a way to query the *.ttf file to see if a given glyph is present. So far, I've just hard-coded a lookup table of which code points are present, but I'd much prefer an automatic solution. I'm using VB.Net in a web system under ASP.net, but solutions in any programming language/environment would be appreciated. Edit: The win32 solution looks excellent, but the specific case I'm trying to solve is in an ASP.Net web system. Is there a way to do this without including the windows API DLLs into my web site? A: Scott's answer is good. Here is another approach that is probably faster if checking just a couple of strings per font (in our case 1 string per font). But probably slower if you are using one font to check a ton of text. [DllImport("gdi32.dll", EntryPoint = "CreateDC", CharSet = CharSet.Auto, SetLastError = true)] private static extern IntPtr CreateDC(string lpszDriver, string lpszDeviceName, string lpszOutput, IntPtr devMode); [DllImport("gdi32.dll", ExactSpelling = true, SetLastError = true)] private static extern bool DeleteDC(IntPtr hdc); [DllImport("Gdi32.dll")] private static extern IntPtr SelectObject(IntPtr hdc, IntPtr hgdiobj); [DllImport("Gdi32.dll", CharSet = CharSet.Unicode)] private static extern int GetGlyphIndices(IntPtr hdc, [MarshalAs(UnmanagedType.LPWStr)] string lpstr, int c, Int16[] pgi, int fl); /// <summary> /// Returns true if the passed in string can be displayed using the passed in fontname. It checks the font to /// see if it has glyphs for all the chars in the string. /// </summary> /// <param name="fontName">The name of the font to check.</param> /// <param name="text">The text to check for glyphs of.</param> /// <returns></returns> public static bool CanDisplayString(string fontName, string text) { try { IntPtr hdc = CreateDC("DISPLAY", null, null, IntPtr.Zero); if (hdc != IntPtr.Zero) { using (Font font = new Font(new FontFamily(fontName), 12, FontStyle.Regular, GraphicsUnit.Point)) { SelectObject(hdc, font.ToHfont()); int count = text.Length; Int16[] rtcode = new Int16[count]; GetGlyphIndices(hdc, text, count, rtcode, 0xffff); DeleteDC(hdc); foreach (Int16 code in rtcode) if (code == 0) return false; } } } catch (Exception) { // nada - return true Trap.trap(); } return true; } A: Here's a pass at it using c# and the windows API. 
[DllImport("gdi32.dll")] public static extern uint GetFontUnicodeRanges(IntPtr hdc, IntPtr lpgs); [DllImport("gdi32.dll")] public extern static IntPtr SelectObject(IntPtr hDC, IntPtr hObject); public struct FontRange { public UInt16 Low; public UInt16 High; } public List<FontRange> GetUnicodeRangesForFont(Font font) { Graphics g = Graphics.FromHwnd(IntPtr.Zero); IntPtr hdc = g.GetHdc(); IntPtr hFont = font.ToHfont(); IntPtr old = SelectObject(hdc, hFont); uint size = GetFontUnicodeRanges(hdc, IntPtr.Zero); IntPtr glyphSet = Marshal.AllocHGlobal((int)size); GetFontUnicodeRanges(hdc, glyphSet); List<FontRange> fontRanges = new List<FontRange>(); int count = Marshal.ReadInt32(glyphSet, 12); for (int i = 0; i < count; i++) { FontRange range = new FontRange(); range.Low = (UInt16)Marshal.ReadInt16(glyphSet, 16 + i * 4); range.High = (UInt16)(range.Low + Marshal.ReadInt16(glyphSet, 18 + i * 4) - 1); fontRanges.Add(range); } SelectObject(hdc, old); Marshal.FreeHGlobal(glyphSet); g.ReleaseHdc(hdc); g.Dispose(); return fontRanges; } public bool CheckIfCharInFont(char character, Font font) { UInt16 intval = Convert.ToUInt16(character); List<FontRange> ranges = GetUnicodeRangesForFont(font); bool isCharacterPresent = false; foreach (FontRange range in ranges) { if (intval >= range.Low && intval <= range.High) { isCharacterPresent = true; break; } } return isCharacterPresent; } Then, given a char toCheck that you want to check and a Font theFont to test it against... if (!CheckIfCharInFont(toCheck, theFont) { // not present } Same code using VB.Net <DllImport("gdi32.dll")> _ Public Shared Function GetFontUnicodeRanges(ByVal hds As IntPtr, ByVal lpgs As IntPtr) As UInteger End Function <DllImport("gdi32.dll")> _ Public Shared Function SelectObject(ByVal hDc As IntPtr, ByVal hObject As IntPtr) As IntPtr End Function Public Structure FontRange Public Low As UInt16 Public High As UInt16 End Structure Public Function GetUnicodeRangesForFont(ByVal font As Font) As List(Of FontRange) Dim g As Graphics Dim hdc, hFont, old, glyphSet As IntPtr Dim size As UInteger Dim fontRanges As List(Of FontRange) Dim count As Integer g = Graphics.FromHwnd(IntPtr.Zero) hdc = g.GetHdc() hFont = font.ToHfont() old = SelectObject(hdc, hFont) size = GetFontUnicodeRanges(hdc, IntPtr.Zero) glyphSet = Marshal.AllocHGlobal(CInt(size)) GetFontUnicodeRanges(hdc, glyphSet) fontRanges = New List(Of FontRange) count = Marshal.ReadInt32(glyphSet, 12) For i = 0 To count - 1 Dim range As FontRange = New FontRange range.Low = Marshal.ReadInt16(glyphSet, 16 + (i * 4)) range.High = range.Low + Marshal.ReadInt16(glyphSet, 18 + (i * 4)) - 1 fontRanges.Add(range) Next SelectObject(hdc, old) Marshal.FreeHGlobal(glyphSet) g.ReleaseHdc(hdc) g.Dispose() Return fontRanges End Function Public Function CheckIfCharInFont(ByVal character As Char, ByVal font As Font) As Boolean Dim intval As UInt16 = Convert.ToUInt16(character) Dim ranges As List(Of FontRange) = GetUnicodeRangesForFont(font) Dim isCharacterPresent As Boolean = False For Each range In ranges If intval >= range.Low And intval <= range.High Then isCharacterPresent = True Exit For End If Next range Return isCharacterPresent End Function A: FreeType is a library that can read TrueType font files (among others) and can be used to query the font for a specific glyph. However, FreeType is designed for rendering, so using it might cause you to pull in more code than you need for this solution. 
Unfortunately, there's not really a clear solution even within the world of OpenType / TrueType fonts; the character-to-glyph mapping has about a dozen different definitions depending on the type of font and what platform it was originally designed for. You might try to look at the cmap table definition in Microsoft's copy of the OpenType spec, but it's not exactly easy reading. A: This Microsoft KB article may help: http://support.microsoft.com/kb/241020 It's a bit dated (was originally written for Windows 95), but the general principle may still apply. The sample code is C++, but since it's just calling standard Windows APIs, it'll more than likely work in .NET languages as well with a little elbow grease. -Edit- It seems that the old 95-era APIs have been obsoleted by a new API Microsoft calls "Uniscribe", which should be able to do what you need it to. A: The code posted by Scott Nichols is great, except for one bug: if the glyph id is greater than Int16.MaxValue, it throws an OverflowException. To fix it, I added the following function: Protected Function Unsign(ByVal Input As Int16) As UInt16 If Input > -1 Then Return CType(Input, UInt16) Else Return UInt16.MaxValue - (Not Input) End If End Function And then changed the main for loop in the function GetUnicodeRangesForFont to look like this: For i As Integer = 0 To count - 1 Dim range As FontRange = New FontRange range.Low = Unsign(Marshal.ReadInt16(glyphSet, 16 + (i * 4))) range.High = range.Low + Unsign(Marshal.ReadInt16(glyphSet, 18 + (i * 4)) - 1) fontRanges.Add(range) Next A: I have done this with just a VB.Net Unit Test and no WIN32 API calls. It includes code to check specific characters U+2026 (ellipsis) & U+2409 (HTab), and also returns # of characters (and low and high values) that have glyphs. I was only interested in Monospace fonts, but easy enough to change ... 
Dim fnt As System.Drawing.Font, size_M As Drawing.Size, size_i As Drawing.Size, size_HTab As Drawing.Size, isMonospace As Boolean Dim ifc = New Drawing.Text.InstalledFontCollection Dim bm As Drawing.Bitmap = New Drawing.Bitmap(640, 64), gr = Drawing.Graphics.FromImage(bm) Dim tf As Windows.Media.Typeface, gtf As Windows.Media.GlyphTypeface = Nothing, ok As Boolean, gtfName = "" For Each item In ifc.Families 'TestContext_WriteTimedLine($"N={item.Name}.") fnt = New Drawing.Font(item.Name, 24.0) Assert.IsNotNull(fnt) tf = New Windows.Media.Typeface(item.Name) Assert.IsNotNull(tf, $"item.Name={item.Name}") size_M = System.Windows.Forms.TextRenderer.MeasureText("M", fnt) size_i = System.Windows.Forms.TextRenderer.MeasureText("i", fnt) size_HTab = System.Windows.Forms.TextRenderer.MeasureText(ChrW(&H2409), fnt) isMonospace = size_M.Width = size_i.Width Assert.AreEqual(size_M.Height, size_i.Height, $"fnt={fnt.Name}") If isMonospace Then gtfName = "-" ok = tf.TryGetGlyphTypeface(gtf) If ok Then Assert.AreEqual(True, ok, $"item.Name={item.Name}") Assert.IsNotNull(gtf, $"item.Name={item.Name}") gtfName = $"{gtf.FamilyNames(Globalization.CultureInfo.CurrentUICulture)}" Assert.AreEqual(True, gtf.CharacterToGlyphMap().ContainsKey(AscW("M")), $"item.Name={item.Name}") Assert.AreEqual(True, gtf.CharacterToGlyphMap().ContainsKey(AscW("i")), $"item.Name={item.Name}") Dim t = 0, nMin = &HFFFF, nMax = 0 For n = 0 To &HFFFF If gtf.CharacterToGlyphMap().ContainsKey(n) Then If n < nMin Then nMin = n If n > nMax Then nMax = n t += 1 End If Next gtfName &= $",[x{nMin:X}-x{nMax:X}]#{t}" ok = gtf.CharacterToGlyphMap().ContainsKey(AscW(ChrW(&H2409))) If ok Then gtfName &= ",U+2409" End If ok = gtf.CharacterToGlyphMap().ContainsKey(AscW(ChrW(&H2026))) If ok Then gtfName &= ",U+2026" End If End If Debug.WriteLine($"{IIf(isMonospace, "*M*", "")} N={fnt.Name}, gtf={gtfName}.") gr.Clear(Drawing.Color.White) gr.DrawString($"Mi{ChrW(&H2409)} {fnt.Name}", fnt, New Drawing.SolidBrush(Drawing.Color.Black), 10, 10) bm.Save($"{fnt.Name}_MiHT.bmp") End If Next The output was M N=Consolas, gtf=Consolas,[x0-xFFFC]#2488,U+2026. M N=Courier New, gtf=Courier New,[x20-xFFFC]#3177,U+2026. M N=Lucida Console, gtf=Lucida Console,[x20-xFB02]#644,U+2026. M N=Lucida Sans Typewriter, gtf=Lucida Sans Typewriter,[x20-xF002]#240,U+2026. M N=MingLiU-ExtB, gtf=MingLiU-ExtB,[x0-x2122]#212. M N=MingLiU_HKSCS-ExtB, gtf=MingLiU_HKSCS-ExtB,[x0-x2122]#212. M N=MS Gothic, gtf=MS Gothic,[x0-xFFEE]#15760,U+2026. M N=NSimSun, gtf=NSimSun,[x20-xFFE5]#28737,U+2026. M N=OCR A Extended, gtf=OCR A Extended,[x20-xF003]#248,U+2026. M N=SimSun, gtf=SimSun,[x20-xFFE5]#28737,U+2026. M N=SimSun-ExtB, gtf=SimSun-ExtB,[x20-x7F]#96. M N=Webdings, gtf=Webdings,[x20-xF0FF]#446.
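Since the question explicitly welcomes solutions in any language or environment, here is a hedged sketch of the cmap-table approach (the same table the OpenType discussion above refers to) using the third-party Python fontTools package; the font path and the sample characters are illustrative, and the exact API should be checked against the fontTools version you install.

# pip install fonttools   (third-party package, not part of the standard library)
from fontTools.ttLib import TTFont

def has_glyph(font_path, char):
    """True if the font's best Unicode cmap subtable maps the character to a glyph."""
    font = TTFont(font_path)
    try:
        cmap = font["cmap"].getBestCmap()  # {codepoint: glyph name}
        return ord(char) in cmap
    finally:
        font.close()

if __name__ == "__main__":
    path = r"C:\Windows\Fonts\times.ttf"      # illustrative path to Times New Roman
    for ch in (u"A", u"\u222B", u"\u2A0C"):   # 'A', integral, quadruple integral
        print(repr(ch), has_glyph(path, ch))

This reads the font file directly, so it does not depend on GDI or on the font being installed for the process that runs the check.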
{ "language": "en", "url": "https://stackoverflow.com/questions/103725", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: How to think in data stores instead of databases? As an example, Google App Engine uses Google Datastore, not a standard database, to store data. Does anybody have any tips for using Google Datastore instead of databases? It seems I've trained my mind to think 100% in object relationships that map directly to table structures, and now it's hard to see anything differently. I can understand some of the benefits of Google Datastore (e.g. performance and the ability to distribute data), but some good database functionality is sacrificed (e.g. joins). Does anybody who has worked with Google Datastore or BigTable have any good advice to working with them? A: Take a look at the Objectify documentation. The first comment at the bottom of the page says: "Nice, although you wrote this to describe Objectify, it is also one of the most concise explanation of appengine datastore itself I've ever read. Thank you." https://github.com/objectify/objectify/wiki/Concepts A: The way I have been going about the mind switch is to forget about the database altogether. In the relational db world you always have to worry about data normalization and your table structure. Ditch it all. Just layout your web page. Lay them all out. Now look at them. You're already 2/3 there. If you forget the notion that database size matters and data shouldn't be duplicated then you're 3/4 there and you didn't even have to write any code! Let your views dictate your Models. You don't have to take your objects and make them 2 dimensional anymore as in the relational world. You can store objects with shape now. Yes, this is a simplified explanation of the ordeal, but it helped me forget about databases and just make an application. I have made 4 App Engine apps so far using this philosophy and there are more to come. A: If you're used to thinking about ORM-mapped entities then that's basically how an entity-based datastore like Google's App Engine works. For something like joins, you can look at reference properties. You don't really need to be concerned about whether it uses BigTable for the backend or something else since the backend is abstracted by the GQL and Datastore API interfaces. A: I always chuckle when people come out with - it's not relational. I've written cellectr in django and here's a snippet of my model below. As you'll see, I have leagues that are managed or coached by users. I can from a league get all the managers, or from a given user I can return the league she coaches or managers. Just because there's no specific foreign key support doesn't mean you can't have a database model with relationships. My two pence. 
class League(BaseModel): name = db.StringProperty() managers = db.ListProperty(db.Key) #all the users who can view/edit this league coaches = db.ListProperty(db.Key) #all the users who are able to view this league def get_managers(self): # This returns the models themselves, not just the keys that are stored in teams return UserPrefs.get(self.managers) def get_coaches(self): # This returns the models themselves, not just the keys that are stored in teams return UserPrefs.get(self.coaches) def __str__(self): return self.name # Need to delete all the associated games, teams and players def delete(self): for player in self.leagues_players: player.delete() for game in self.leagues_games: game.delete() for team in self.leagues_teams: team.delete() super(League, self).delete() class UserPrefs(db.Model): user = db.UserProperty() league_ref = db.ReferenceProperty(reference_class=League, collection_name='users') #league the users are managing def __str__(self): return self.user.nickname # many-to-many relationship, a user can coach many leagues, a league can be # coached by many users @property def managing(self): return League.gql('WHERE managers = :1', self.key()) @property def coaching(self): return League.gql('WHERE coaches = :1', self.key()) # remove all references to me when I'm deleted def delete(self): for manager in self.managing: manager.managers.remove(self.key()) manager.put() for coach in self.managing: coach.coaches.remove(self.key()) coaches.put() super(UserPrefs, self).delete() A: There's two main things to get used to about the App Engine datastore when compared to 'traditional' relational databases: * *The datastore makes no distinction between inserts and updates. When you call put() on an entity, that entity gets stored to the datastore with its unique key, and anything that has that key gets overwritten. Basically, each entity kind in the datastore acts like an enormous map or sorted list. *Querying, as you alluded to, is much more limited. No joins, for a start. The key thing to realise - and the reason behind both these differences - is that Bigtable basically acts like an enormous ordered dictionary. Thus, a put operation just sets the value for a given key - regardless of any previous value for that key, and fetch operations are limited to fetching single keys or contiguous ranges of keys. More sophisticated queries are made possible with indexes, which are basically just tables of their own, allowing you to implement more complex queries as scans on contiguous ranges. Once you've absorbed that, you have the basic knowledge needed to understand the capabilities and limitations of the datastore. Restrictions that may have seemed arbitrary probably make more sense. The key thing here is that although these are restrictions over what you can do in a relational database, these same restrictions are what make it practical to scale up to the sort of magnitude that Bigtable is designed to handle. You simply can't execute the sort of query that looks good on paper but is atrociously slow in an SQL database. In terms of how to change how you represent data, the most important thing is precalculation. Instead of doing joins at query time, precalculate data and store it in the datastore wherever possible. If you want to pick a random record, generate a random number and store it with each record. There's a whole cookbook of this sort of tips and tricks here. A: I came from Relational Database world then i found this Datastore thing. it took several days to get hang of it. 
Well, here are some of my findings. You must already know that the Datastore is built to scale, and that is the thing that separates it from an RDBMS. To scale better with large datasets, App Engine has made some changes (and by "some" I mean a lot of changes). RDBMS vs. Datastore Structure In a database we usually structure our data in tables and rows; in the Datastore this becomes kinds and entities. Relations In an RDBMS most people follow the one-to-one, many-to-one, and many-to-many relationships. The Datastore has its "no joins" thing, but we can still achieve our normalization using "ReferenceProperty", e.g. a one-to-one relationship example. Indexes Usually in an RDBMS we make indexes like primary key, foreign key, unique key and index key to speed up searches and boost our database performance. In the Datastore you have to have at least one index per kind (it will be generated automatically whether you like it or not), because the Datastore finds your entities on the basis of these indexes, and believe me, that is the best part: in an RDBMS you can search using a non-indexed field (it will take some time, but it will work); in the Datastore you cannot search using a non-indexed property. Count In an RDBMS it is much easier to count(*), but in the Datastore please don't even think of it in the normal way (yes, there is a count function), as it has a 1,000 limit and it will cost as many small operations as there are entities, which is not good. But we always have good choices: we can use shard counters. Unique Constraints In an RDBMS we love this feature, right? But the Datastore has its own way: you cannot define a property as unique :(. Query The GAE Datastore provides a query language much like SQL (oh no! the Datastore does not have a LIKE keyword), which is GQL. Data Insert/Update/Delete/Select This is where we are all interested: in an RDBMS we use one query each for insert, update, delete and select; the Datastore instead has put, delete, and get (don't get too excited), because the Datastore bills put and get in terms of writes, reads, and small operations (reads cost for Datastore calls), and that's where data modeling comes into action. You have to minimize these operations to keep your app running. For reducing read operations you can use memcache. A: The way I look at the datastore is: a kind identifies a table, per se, and an entity is an individual row within a table. If Google were to take out kind, then it's just one big table with no structure, and you can dump whatever you want in an entity. In other words, if entities are not tied to a kind, you can pretty much have any structure to an entity and store it in one location (kind of a big file with no structure to it, where each line has a structure of its own). Now back to the original comment: Google Datastore and Bigtable are two different things, so do not confuse the Google Datastore with a generic data store in the data-storage sense. Bigtable is more expensive than BigQuery (the primary reason we didn't go with it). BigQuery does have proper joins and an RDBMS-like SQL language, and it's cheaper, so why not use BigQuery? That being said, BigQuery does have some limitations; depending on the size of your data you might or might not encounter them. Also, instead of thinking in terms of the datastore, I think the proper statement would have been "thinking in terms of NoSQL databases". There are too many of them available out there these days, but when it comes to Google products, except for Google Cloud SQL (which is MySQL) everything else is NoSQL. A: Being rooted in the database world, a data store to me would be a giant table (hence the name "bigtable"). 
BigTable is a bad example though because it does a lot of other things that a typical database might not do, and yet it is still a database. Chances are unless you know you need to build something like Google's "bigtable", you will probably be fine with a standard database. They need that because they are handling insane amounts of data and systems together, and no commercially available system can really do the job the exact way they can demonstrate that they need the job to be done. (bigtable reference: http://en.wikipedia.org/wiki/BigTable)
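One of the answers above mentions shard counters as the datastore-friendly substitute for count(*). For reference, here is a minimal sketch based on the widely published App Engine sharded-counter recipe, written against the same legacy db API as the model code earlier in this question; the class name, counter naming scheme, and shard count are illustrative.

import random
from google.appengine.ext import db

NUM_SHARDS = 20  # more shards = more sustainable write throughput

class SimpleCounterShard(db.Model):
    """One shard of a named counter; the total is the sum over all shards."""
    name = db.StringProperty(required=True)
    count = db.IntegerProperty(required=True, default=0)

def increment(name):
    """Add 1 by updating one randomly chosen shard inside a transaction."""
    index = random.randint(0, NUM_SHARDS - 1)
    shard_key_name = '%s-%d' % (name, index)

    def txn():
        shard = SimpleCounterShard.get_by_key_name(shard_key_name)
        if shard is None:
            shard = SimpleCounterShard(key_name=shard_key_name, name=name)
        shard.count += 1
        shard.put()

    db.run_in_transaction(txn)

def get_count(name):
    """Read the counter by summing its shards (a query, not a count(*))."""
    return sum(shard.count for shard in
               SimpleCounterShard.all().filter('name =', name))

Because each increment touches a single small entity picked at random, concurrent writers rarely contend on the same entity group, which is exactly the property the datastore rewards.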
{ "language": "en", "url": "https://stackoverflow.com/questions/103727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "185" }
Q: How do I persist the value of a label through a response.redirect? Here's the situation: I have a label's text set, immediately followed by a response.redirect() call as follows (this is just an example, but I believe it describes my situation accurately): aspx: <asp:Label runat="server" Text="default text" /> Code-behind (code called on an onclick event): Label.Text = "foo"; Response.Redirect("Default.aspx"); When the page renders, the label says "default text". What do I need to do differently? My understanding was that such changes would be done automatically behind the scenes, but apparently, not in this case. Thanks. For a little extra background, the code-behind snippet is called inside a method that's invoked upon an onclick event. There is more to it, but I only included that which is of interest to this issue. A: A Response.Redirect call will ask the user's browser to load the page specified in the URL you give it. Because this is a new request for your page the page utilises the text which is contained in your markup (as I assume that the label text is being set inside a button handler or similar). If you remove the Response.Redirect call your page should work as advertised. A: ASP and ASP.Net are inherently stateless unless state is explicitly specified. Normally between PostBacks information like the value of a label is contained in the viewstate, but if you change pages that viewstate is lost because it is was being stored in a hidden field on the page. If you want to maintain the value of the label between calls you need to use one of the state mechanisms (e.g. Session, Preferences) or communication systems (Request (GET, POST)). Additionally you may be looking for Server.Transfer which will change who is processing the page behind the scenes. Response.Redirect is designed to ditch your current context in most cases. A: After a redirect you will loose any state information associated to your controls. If you simply want the page to refresh, remove the redirect. After the code has finished executing, the page will refresh and any state will be kept. Behind the scenes, this works because ASP.NET writes the state information to a hidden input field on the page. When you click a button, the form is posted and ASP.NET deciphers the viewstate. Your code runs, modifying the state, and after that the state is again written to the hidden field and the cycle continues, until you change the page without a POST. This can happen when clicking an hyperlink to another page, or via Response.Redirect(), which instructs the browser to follow the specified url. A: To persist state, use Server.Transfer instead of Response.Redirect. A: So, if I may answer my own question (according to the FAQ, that's encouraged), the short answer is, you don't persist view state through redirects. View state is for postbacks, not redirects. Bonus: Everything you ever wanted to know about View State in ASP.NET, with pictures! A: For what it's worth (and hopefully it's worth something), Chapter 6 of Pro ASP.NET 3.5 in C# 2008, Second Edition is a terrific resource on the subject. The whole book has been great so far.
{ "language": "en", "url": "https://stackoverflow.com/questions/103765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What's the best way to store changes to database records that require approval before being visible? I need to store user entered changes to a particular table, but not show those changes until they have been viewed and approved by an administrative user. While those changes are still in a pending state, I would still display the old version of the data. What would be the best way of storing these changes waiting for approval? I have thought of several ways, but can't figure out what is the best method. This is a very small web app. One way would be to have a PendingChanges table that mimics the other table's schema, and then once the change is approved, I could update the real table with the information. Another approach would be to do some sort of record versioning where I store multiple versions of the data in the table and then always pull the record with the highest version number that has been marked approved. That would limit the number of extra tables (I need to do this for multiple tables), but would require me to do extra processing every time I pull out a set of records to make sure I get the right ones. Any personal experiences with these methods or others that might be good? Update: Just to clarify, in this particular situation I am not interested so much in historical data. I just need some way of approving any changes that are made by a user before they go live on the site. So, a user will edit their "profile" and then an administrator will look at that modification and approve it. Once approved, that will become the displayed value and the old version does not need to be kept. Anybody tried the solution below where you store pending changes from any table that needs to track them as XML in a special PendingChanges table? Each record would have a column that said which table the changes were for, a column that maybe stored the id of the record that would be changed (null if it's a new record), a datetime column to store when the change was made, and a column to store the xml of the changed record (could maybe serialize my data object). Since I don't need history, after a change was approved, the real table would be updated and the PendingChange record could be deleted. Any thoughts about that method? A: I work in a banking domain and we have this need - that the changes done by one user must only be reflected after being approved by another. The design we use is as below * *Main Table A *Another Table B that stores the changed record (and so is exactly similar to the first) + 2 additional columns (an FKey to C and a code to indicate the kind of change) *A third table C that stores all such records that need approval *A fourth table D that stores history (you probably don't need this). I recommend this approach. It handles all scenarios including updates and deletions very gracefully. A: Given the SOx compliance movement that has been shoved in the face of most publically traded companies, I've had quite a bit of experience in this area. Usually I have been using a separate table with a time stamped pending changes with some sort of flag column. The person in charge of administration of this data gets a list of pending changes and can choose to accept or not to accept. When a piece of data gets accepted, I use triggers to integrate the new data into the table. Though some people don't like the trigger method and would rather code this into the stored procs. This has worked well for me, even in rather large databases. 
The complexity can get a little difficult to deal with, especially in a situation where one change directly conflicts with another change and you have to decide what order to process these changes in. The table holding the request data can never be deleted, since it holds the "bread crumbs", so to speak, that are required in case there is a need to trace back what happened in a particular situation. But in any approach, the risks need to be assessed, such as what I mentioned with the conflicting data, and a business logic layer needs to be in place to determine the process in these situations. I personally don't like the same-table method, because in the case of data stores that are constantly being changed, this extra data in a table can unnecessarily bog down requests on the table, and it would require a lot more attention to how you are indexing the table and to your execution plans. A: Definitely store them in the main table with a column to indicate whether the data is approved or not. When the change is approved, no copying is required. The extra work to filter the unapproved data is the sort of thing databases are supposed to do, when you think about it. If you index the approved column, it shouldn't be too burdensome to do the right thing. A: Yet another idea would be to have three tables. * *One would be the main table to hold the original data. *The second would hold the proposed data. *The third would hold the historical data. This approach gives you the ability to quickly and easily roll back and also gives you an audit trail if you need it. A: I would create a table with a flag and create a view like CREATE OR REPLACE VIEW my_view AS SELECT * FROM my_table WHERE approved = 1 It can help to separate the dependencies between the approval and the queries. But maybe it is not the best idea if you need to make updates through the view. Moving records might have some performance considerations. But partitioned tables could do something quite similar. A: Size is your enemy. If you are dealing with lots of data and large numbers of rows, then having the historical mixed in with the current will hammer you. You'll also have problems making sure you've got the right rows if you join out to other data. If you need to save the historical data to show changes over time, I would go with the separate historical table that updates the live, real data once it's approved. It's just all-around cleaner. If you have a lot of data types that will have this mechanism but don't need to keep a historical record, I would suggest a common queue table for reviewing pending items, say stored as XML. This would allow just one table to be read by administrators and would enable you to add this functionality to any table in your system fairly easily. A: As this is a web app I'm going to assume there are more reads than writes, that you want something reasonably fast, and that your conflict resolution (i.e. out-of-order approvals) results in the same behaviour - the latest update is the one that is used. Both of the strategies you propose are similar in that they both hold one row per change set and have to deal with conflicts etc.; the only difference is whether to store the data in one table or two. Given the scenario, two tables seem the better solution for performance reasons. You could also solve this with the one table and a view of the most recent approved changes, if your database supports it. A: I think the second way is the better approach, simply because it scales better to multiple tables. 
Also, the extra processing would be minimal, as you can create an index to the table based on the 'approved' bit, and you can specialize your queries to either pull approved (for viewing) or unapproved (for approving) entries.
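As a concrete illustration of the "PendingChanges table" idea from the question, here is a small sketch using SQLite through Python, chosen only so the example is self-contained and runnable; the table names, columns, and helper functions are all invented. The approve step copies the pending values onto the live row and then discards the pending record, so unapproved edits never show up in normal queries.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE profile (
        profile_id   INTEGER PRIMARY KEY,
        display_name TEXT NOT NULL
    );
    CREATE TABLE pending_change (
        change_id   INTEGER PRIMARY KEY AUTOINCREMENT,
        profile_id  INTEGER NOT NULL REFERENCES profile(profile_id),
        new_name    TEXT NOT NULL,
        submitted   TEXT DEFAULT CURRENT_TIMESTAMP
    );
""")

def submit_change(profile_id, new_name):
    """A user edit only lands in the pending table; the live row is untouched."""
    cur = conn.execute(
        "INSERT INTO pending_change (profile_id, new_name) VALUES (?, ?)",
        (profile_id, new_name))
    return cur.lastrowid

def approve_change(change_id):
    """Admin approval copies the pending values to the live row, then removes them."""
    row = conn.execute(
        "SELECT profile_id, new_name FROM pending_change WHERE change_id = ?",
        (change_id,)).fetchone()
    if row is not None:
        conn.execute("UPDATE profile SET display_name = ? WHERE profile_id = ?",
                     (row[1], row[0]))
        conn.execute("DELETE FROM pending_change WHERE change_id = ?", (change_id,))

conn.execute("INSERT INTO profile (profile_id, display_name) VALUES (1, 'Old Name')")
change_id = submit_change(1, "New Name")
print(conn.execute("SELECT display_name FROM profile").fetchone())  # still ('Old Name',)
approve_change(change_id)
print(conn.execute("SELECT display_name FROM profile").fetchone())  # now ('New Name',)

Since history is not needed here, nothing survives approval; if an audit trail were required, the DELETE would become a move into a history table, as several of the answers suggest.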
{ "language": "en", "url": "https://stackoverflow.com/questions/103766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45" }
Q: What are the (dis)advantages of using Cassini instead of IIS? I've found that on some occasions I can edit the source while debugging. Are there any other advantages of using the Visual Studio built-in webserver instead of a virtual directory in IIS? I'm using Windows XP on my development environment, and a local instance of IIS 5. I work on several projects, so I use multiple virtual directories to manage all the different sites. Are there any disadvantages? A: Cassini does not support virtual directories. A: It looks like a third option is coming soon: IIS Express. A: The built-in web server for Visual Studio is called Cassini and here are a few of its limitations... * *It can host only one ASP.NET application per port. *It does not support HTTPS. *It does not support authentication. *It responds only to localhost requests. *It is slow startup compared to IIS A: The built-in server works well for larger corporations that don't want to give developers any administrator access on their own machines to configure IIS. A: The Visual Studio web server is less forgiving about // in the path. It will refuse to serve a link like http://localhost:52632/main//images/logo.jpg where IIS will do. That's pretty obscure, but it means we have a lot of fixing to do to get rid of all the // occurrences. A: Another disadvantage I've run into is on a Forms authenticated website using custom IPrincipal/IIdentity. Cassini will switch the AppDomains without warning (or notice). Check this blog post for more.The headache on this made me drop Cassini and stick with IIS. A: There's a bug in the way the built-in server handles HTTPModules - there is a workaround, but I hate having to put in code that'll never be needed in production. A: * *You need to have Visual Studio running to use it (under normal circumstances) *It only responds to localhost, so you can't give the link http://simon-laptop:37473/app1 to a friend to view your site over the network *Big disadvantage: it's harder to get fiddler working, because localhost traffic isn't sent through the proxy. Using http://ipv4.fiddler:37473 is the best way to get Fiddler working with it. A: If you 'web reference' the URL for web services that are on the built-in webserver, the port might change. Unless you have set a "Specific port" mentioned in menu Project → Properties options page. This is something I've gotten used to now. I always set a specific port. Now when sometimes the webserver crashes (I've had that happen), I simply change the port number, and all is well. I reckon restarting will also fix this. A: The built-in server means the developer doesn't have to know how to set up IIS to test their site. You could argue this is a disadvantage, and that a Windows developer should know at least that much IIS. Or you could argue that a developer who isn't a system administrator shouldn't be messing around with the web server at all. A: Cassini also does not support ASP Classic pages. This is only an issue for legacy projects where old ASP Classic pages still exist (like our web application at work). A: You cant use virtual directories :( A: All the previous responses are great answers - here's one gottcha with Cassini that might require IIS on the destkop. Cassini runs in the context of the developer, not as the IIS user (IUSR_, IWAM, or in WinXP x64, the w3wp process). This can be a bit painful if you've got a web site that is accessing external files or creating temp files. It is most evident when your developer is running as an Admin of their desktop. 
When you move to the server IIS, something that you would have had access to in Cassini doesn't work the same. Setting ACLs (e.g. with cacls) for the IIS_WPG group is usually all it takes to fix, but if your developer is not thinking about this, they will quickly get quite frustrated with their deploy. A: If you do hobby work at home using XP Home, you can't install IIS locally. A: The built-in server isn't as configurable, and it runs on an odd port, so if you're counting on specific behavior it can be troublesome. A: I often take the best of both worlds and create an application in IIS, and use the built-in web server for more efficient debugging. A: Cassini is meant to be a lightweight test webserver. The idea is that a developer does not need to have IIS installed and configured to test his/her application. Use IIS if you are familiar with it and you have it set up and your box can handle it. Cassini is not meant to be a replacement. A: When you use IIS in Vista or Windows 7 with UAC enabled, you must run Visual Studio with administrative rights. If you do this, you can't drag and drop from your shell to Visual Studio (even if you run an instance of explorer.exe as administrator). For this reason I use Cassini for most projects. A: FYI, Windows XP 64-bit comes with IIS 6. A: This is an old thread started 2 years ago. I just stumbled upon UtilDev Cassini while googling. Looks promising to me. At least it has the ability to run multiple sites simultaneously. That feature is really useful for me, because I work on 2 different sites and have to continuously switch between them using IIS. A: Here's a reason for a third way: although UWS Pro is probably closer to IIS than Cassini (it was inspired by Cassini and comes from the vendor of the UltiDev Cassini fork), its main purpose is to be redistributable along with ASP.NET applications. A: Install IISAdmin, and you can set up separate sites in IIS 5, instead of using virtual directories. A: The built-in webserver is a little less robust than IIS, but requires no setup, so it is just a tradeoff. You may not always want your development projects exposed on your IIS server (even your local IIS server), so the built-in server is good for that. However, if your application is going to access resources outside of the norm for a web app, then you may want to debug frequently in IIS so that your app will run with restricted permissions and you can see where the pain points will be. A: We've also seen some issues with the Visual Studio built-in server regarding some third-party controls which put their scripts in the \aspnet_client folder. Since the folder isn't there when you're not running under IIS, the controls didn't work. It seems a lot simpler to always work with IIS and avoid strange problems. A: One difference I've found is that the development server handles uploading files differently than IIS does. You can't trap the error if the file being uploaded is bigger than your Max_File_Size setting. The page just dies and returns a 500. A: Another disadvantage is that it sends every request through the global.asax file, which includes all requests for images and stylesheets. This means if you have code in there which does things with the file names, such as a lookup, then the auxiliary files will get processed too. A: Also via IIS, you don't have to worry about remembering and setting a stupid port number in your localhost URL. That's something funky directly relied upon with Cassini... big pain in the ass. Who wants to remember some arbitrary port number? 
Just run the damn site in IIS. Plain and simple. A: If your project resides in the IIS directory you can still edit code; it just depends on whether it has been published or not. You will run into so many issues with Cassini vs. IIS when you are debugging certain permission-based scenarios, like Kerberos and NTLM authentication, as well as issues like server compression, etc. All in all, Cassini is still okay to develop with, but make sure you do extensive testing when publishing to IIS.
{ "language": "en", "url": "https://stackoverflow.com/questions/103785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52" }
Q: How do you add a button to the email message window toolbar in Lotus Notes 8.5+? A coworker has been struggling with this problem. The desired result is an installable plugin for Notes that will add a button to emails with attachments that will let users save the attachment to a document management system. Finding documentation on doing this for Notes has been an uphill battle, to say the least. Writing the actual Java to do the work isn't a problem, but figuring out how to extend Notes is. So, is there a way to add a button/icon to the toolbar, or is it just a matter of adding a new toolbar? If we add a new toolbar, then can we make it only visible (or just grey it out otherwise) when no email is open? A: Both Lotus Notes 8+ and Lotus Symphony use the IBM Lotus Expeditor Toolkit. If you get the Lotus Symphony SDK here, there are one or two examples dealing with adding buttons to the Symphony toolbar. They should translate almost identically to Notes. Good luck, Brian Gianforcaro A: I had to do this once in Notes for a plugin I was developing. What I ended up doing was editing the Notes template in the designer, and then writing some LotusScript behind it that called a .NET class via a DLL. So when you clicked the button, it triggered the event in the LotusScript, which then called the DLL and passed the item information to it. I should also note that it was a freakin' bear to figure out because Notes documentation is terrible. A: Depending on what access you have to the system, the task can be fairly easy. Typically you customize your mail template to include a button in the inbox folder and the all-documents view (for safety precautions see this entry). You customize ($Inbox) and ($All) if you want to have the buttons only on the view level, or additionally the forms (there is a shared header subform you can use). Give the button a meaningful label and add this code: @Command([ToolsRunMacro];"(ExportDocumentsTo[yourSystemNameHere])") The round brackets are actually important. Your code (Java I presume) then goes into an agent. You select "Create Agent" and Java as the language. You specify "selected documents" to run against and agent list selection as the trigger (this puts the () around your name). You can get them from the Session class. If your users are OK using a menu instead of a button, you can simply select Action list as the trigger and the agent will be listed in the action menu. A: From your question I gather you want this for the Eclipse client. Please peruse Mikkel Heisterberg's site LekkimWorld.com. It contains tons of material. Start by reading his presentations and search the site. It has a lot of useful material.
{ "language": "en", "url": "https://stackoverflow.com/questions/103791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: T-SQL: How do I get the rows from one table whose values completely match up with values in another table? Given the following: declare @a table ( pkid int, value int ) declare @b table ( otherID int, value int ) insert into @a values (1, 1000) insert into @a values (1, 1001) insert into @a values (2, 1000) insert into @a values (2, 1001) insert into @a values (2, 1002) insert into @b values (-1, 1000) insert into @b values (-1, 1001) insert into @b values (-1, 1002) How do I query for all the values in @a that completely match up with @b? {@a.pkid = 1, @b.otherID = -1} would not be returned (only 2 of 3 values match) {@a.pkid = 2, @b.otherID = -1} would be returned (3 of 3 values match) Refactoring tables can be an option. EDIT: I've had success with the answers from James and Tom H. When I add another case in @b, they fall a little short. insert into @b values (-2, 1000) Assuming this should return two additional rows ({@a.pkid = 1, @b.otherID = -2} and {@a.pkid = 2, @b.otherID = -2}, it doesn't work. However, for my project this is not an issue. A: This is more efficient (it uses TOP 1 instead of COUNT), and works with (-2, 1000): SELECT * FROM ( SELECT ab.pkid, ab.otherID, ( SELECT TOP 1 COALESCE(ai.value, bi.value) FROM ( SELECT * FROM @a aii WHERE aii.pkid = ab.pkid ) ai FULL OUTER JOIN ( SELECT * FROM @b bii WHERE bii.otherID = ab.otherID ) bi ON ai.value = bi.value WHERE ai.pkid IS NULL OR bi.otherID IS NULL ) unmatch FROM ( SELECT DISTINCT pkid, otherid FROM @a a , @b b ) ab ) q WHERE unmatch IS NOT NULL A: Probably not the cheapest way to do it: SELECT a.pkId,b.otherId FROM (SELECT a.pkId,CHECKSUM_AGG(DISTINCT a.value) as 'ValueHash' FROM @a a GROUP BY a.pkId) a INNER JOIN (SELECT b.otherId,CHECKSUM_AGG(DISTINCT b.value) as 'ValueHash' FROM @b b GROUP BY b.otherId) b ON a.ValueHash = b.ValueHash You can see, basically I'm creating a new result set for each representing one value for each Id's set of values in each table and joining only where they match. 
A: The following query gives you the requested results: select A.pkid, B.otherId from @a A, @b B where A.value = B.value group by A.pkid, B.otherId having count(B.value) = ( select count(*) from @b BB where B.otherId = BB.otherId) A: Works for your example, and I think it will work for all cases, but I haven't tested it thoroughly: SELECT SQ1.pkid FROM ( SELECT a.pkid, COUNT(*) AS cnt FROM @a AS a GROUP BY a.pkid ) SQ1 INNER JOIN ( SELECT a1.pkid, b1.otherID, COUNT(*) AS cnt FROM @a AS a1 INNER JOIN @b AS b1 ON b1.value = a1.value GROUP BY a1.pkid, b1.otherID ) SQ2 ON SQ2.pkid = SQ1.pkid AND SQ2.cnt = SQ1.cnt INNER JOIN ( SELECT b2.otherID, COUNT(*) AS cnt FROM @b AS b2 GROUP BY b2.otherID ) SQ3 ON SQ3.otherID = SQ2.otherID AND SQ3.cnt = SQ1.cnt A: -- Note, only works as long as no duplicate values are allowed in either table DECLARE @validcomparisons TABLE ( pkid INT, otherid INT, num INT ) INSERT INTO @validcomparisons (pkid, otherid, num) SELECT a.pkid, b.otherid, A.cnt FROM (select pkid, count(*) as cnt FROM @a group by pkid) a INNER JOIN (select otherid, count(*) as cnt from @b group by otherid) b ON b.cnt = a.cnt DECLARE @comparison TABLE ( pkid INT, otherid INT, same INT) insert into @comparison(pkid, otherid, same) SELECT a.pkid, b.otherid, count(*) FROM @a a INNER JOIN @b b ON a.value = b.value GROUP BY a.pkid, b.otherid SELECT COMP.PKID, COMP.OTHERID FROM @comparison comp INNER JOIN @validcomparisons val ON comp.pkid = val.pkid AND comp.otherid = val.otherid AND comp.same = val.num A: I've added a few extra test cases. You can change your duplicate handling by changing the way you use distinct keywords in your aggregates. Basically, I'm getting a count of matches and comparing it to a count of required matches in each @a and @b. declare @a table ( pkid int, value int ) declare @b table ( otherID int, value int ) insert into @a values (1, 1000) insert into @a values (1, 1001) insert into @a values (2, 1000) insert into @a values (2, 1001) insert into @a values (2, 1002) insert into @a values (3, 1000) insert into @a values (3, 1001) insert into @a values (3, 1001) insert into @a values (4, 1000) insert into @a values (4, 1000) insert into @a values (4, 1001) insert into @b values (-1, 1000) insert into @b values (-1, 1001) insert into @b values (-1, 1002) insert into @b values (-2, 1001) insert into @b values (-2, 1002) insert into @b values (-3, 1000) insert into @b values (-3, 1001) insert into @b values (-3, 1001) SELECT Matches.pkid, Matches.otherId FROM ( SELECT a.pkid, b.otherId, n = COUNT(*) FROM @a a INNER JOIN @b b ON a.Value = b.Value GROUP BY a.pkid, b.otherId ) AS Matches INNER JOIN ( SELECT pkid, n = COUNT(DISTINCT value) FROM @a GROUP BY pkid ) AS ACount ON Matches.pkid = ACount.pkid INNER JOIN ( SELECT otherId, n = COUNT(DISTINCT value) FROM @b GROUP BY otherId ) AS BCount ON Matches.otherId = BCount.otherId WHERE Matches.n = ACount.n AND Matches.n = BCount.n A: How do I query for all the values in @a that completely match up with @b? I'm afraid this definition is not quite perfectly clear. It seems from your additional example that you want all pairs of a.pkid, b.otherID for which every b.value for the given b.otherID is also an a.value for the given a.pkid. In other words, you want the pkids in @a that have at least all the values for otherIDs in b. Extra values in @a appear to be okay. Again, this is reasoning based on your additional example, and the assumption that (1, -2) and (2, -2) would be valid results. 
In both of those cases, the a.value values for the given pkid are more than the b.value values for the given otherID. So, with that in mind: select matches.pkid ,matches.otherID from ( select a.pkid ,b.otherID ,count(1) as cnt from @a a inner join @b b on b.value = a.value group by a.pkid ,b.otherID ) as matches inner join ( select otherID ,count(1) as cnt from @b group by otherID ) as b_counts on b_counts.otherID = matches.otherID where matches.cnt = b_counts.cnt A: Several ways of doing this, but a simple one is to create a union view as create view qryMyUnion as select * from table1 union all select * from table2 be careful to use union all, not a simple union as that will omit the duplicates then do this select count( * ), [field list here] from qryMyUnion group by [field list here] having count( * ) > 1 the Union and Having statements tend to be the most overlooked part of standard SQL, but they can solve a lot of tricky issues that otherwise require procedural code A: If you are trying to return only complete sets of records, you could try this. I would definitely recommend using meaningful aliases, though ... Cervo is right, we need an additional check to ensure that a is an exact match of b and not a superset of b. This is more of an unwieldy solution at this point, so this would only be reasonable in contexts where analytical functions in the other solutions do not work. select a.pkid, a.value from @a a where a.pkid in ( select pkid from ( select c.pkid, c.otherid, count(*) matching_count from ( select a.pkid, a.value, b.otherid from @a a inner join @b b on a.value = b.value ) c group by c.pkid, c.otherid ) d inner join ( select b.otherid, count(*) b_record_count from @b b group by b.otherid ) e on d.otherid = e.otherid and d.matching_count = e.b_record_count inner join ( select a.pkid match_pkid, count(*) a_record_count from @a a group by a.pkid ) f on d.pkid = f.match_pkid and d.matching_count = f.a_record_count ) A: To reiterate the point: select a.* from @a a inner join @b b on a.value = b.value This will return all the values in @a that match @b A: 1) I assume that you don't have duplicate ids. 2) Get the keys with the same number of values. 3) The rows where the number of matching values equals that count are the target. I hope this is what you were looking for (you're not after performance, are you?)
declare @a table( pkid int, value int) declare @b table( otherID int, value int) insert into @a values (1, 1000) insert into @a values (1, 1001) insert into @a values (2, 1000) insert into @a values (2, 1001) insert into @a values (2, 1002) insert into @a values (3, 1000) insert into @a values (3, 1001) insert into @a values (4, 1000) insert into @a values (4, 1001) insert into @b values (-1, 1000) insert into @b values (-1, 1001) insert into @b values (-1, 1002) insert into @b values (-2, 1001) insert into @b values (-2, 1002) insert into @b values (-3, 1000) insert into @b values (-3, 1001) select cntok.cntid1 as cntid1, cntok.cntid2 as cntid2 from (select cnt.cnt, cnt.cntid1, cnt.cntid2 from (select acnt.cnt as cnt, acnt.cntid as cntid1, bcnt.cntid as cntid2 from (select count(pkid) as cnt, pkid as cntid from @a group by pkid) as acnt full join (select count(otherID) as cnt, otherID as cntid from @b group by otherID) as bcnt on acnt.cnt = bcnt.cnt) as cnt where cntid1 is not null and cntid2 is not null) as cntok inner join (select count(1) as cnt, cnta.cntid1 as cntid1, cnta.cntid2 as cntid2 from (select cnt, cntid1, cntid2, a.value as value1 from (select cnt.cnt, cnt.cntid1, cnt.cntid2 from (select acnt.cnt as cnt, acnt.cntid as cntid1, bcnt.cntid as cntid2 from (select count(pkid) as cnt, pkid as cntid from @a group by pkid) as acnt full join (select count(otherID) as cnt, otherID as cntid from @b group by otherID) as bcnt on acnt.cnt = bcnt.cnt) as cnt where cntid1 is not null and cntid2 is not null) as cntok inner join @a as a on a.pkid = cntok.cntid1) as cnta inner join (select cnt, cntid1, cntid2, b.value as value2 from (select cnt.cnt, cnt.cntid1, cnt.cntid2 from (select acnt.cnt as cnt, acnt.cntid as cntid1, bcnt.cntid as cntid2 from (select count(pkid) as cnt, pkid as cntid from @a group by pkid) as acnt full join (select count(otherID) as cnt, otherID as cntid from @b group by otherID) as bcnt on acnt.cnt = bcnt.cnt) as cnt where cntid1 is not null and cntid2 is not null) as cntok inner join @b as b on b.otherid = cntok.cntid2) as cntb on cnta.cntid1 = cntb.cntid1 and cnta.cntid2 = cntb.cntid2 and cnta.value1 = cntb.value2 group by cnta.cntid1, cnta.cntid2) as cntequals on cntok.cnt = cntequals.cnt and cntok.cntid1 = cntequals.cntid1 and cntok.cntid2 = cntequals.cntid2 A: As CQ says, a simple inner join is all you need. Select * -- all columns but only from #a from #a inner join #b on #a.value = #b.value -- only return matching rows where #a.pkid = 2
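For completeness, this is the classic relational-division problem, and on SQL Server 2005 or later it can also be phrased with EXCEPT: a (pkid, otherID) pair qualifies when no value of @b for that otherID is missing from @a for that pkid. This is a sketch of that formulation against the original sample data, not a tested replacement for the answers above.

    select distinct a.pkid, b.otherID
    from @a a
    cross join @b b
    where not exists
    (
        select bb.value from @b bb where bb.otherID = b.otherID
        except
        select aa.value from @a aa where aa.pkid = a.pkid
    )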
{ "language": "en", "url": "https://stackoverflow.com/questions/103829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Is GnuPG compatible with McAfee eBusiness Server 7.1? Right now, we're using PGP command line 9.0. Does anybody know if GnuPG will work? It'd be a lot cheaper. EDIT: Theoretically, GnuPG/PGP/McAfee eBusiness Server should be able to interoperate. In practice, you pretty much just have to test to see. We did not make GnuPG work with McAfee eBusiness Server. A: I've never used McAfee eBusiness Server specifically, but the entire point of GnuPG was to provide Free Software that implemented the OpenPGP spec. Unless McAfee is for some hideously obnoxious reason mandating specific ciphers, there shouldn't be a problem. Note that if some components are going to be checking a key with PGP, and some with GnuPG, you may want to doublecheck the interoperability FAQ question for GnuPG, as you may, in fact, have to limit your cipher and compression algorithms or signature versions. That FAQ is discussing a much older version of PGP, so it may actually no longer be an issue.
{ "language": "en", "url": "https://stackoverflow.com/questions/103831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: How do I run a batch file on startup for a Win x64 machine? I know you can use autoexnt to run a batch file on startup for Windows XP, but that only seems to work for 32-bit machines. I'm running Windows XP x64 on a box, and I need to have a script run on startup (without anyone logging in). Any ideas? Thanks for the help. A: You can also use local computer policy to configure startup and shutdown scripts. http://vlaurie.com/computers2/Articles/group_policy_editor.htm has a good walkthrough of how to do it. A: In your registry, accessible through "regedit", you can navigate to the following key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run Add a REG_SZ type entry; it doesn't matter what the key name is really, but as the value give the fully qualified path name to your program or batch file. A: On startup meaning login, or on startup meaning before anyone logs in? On login, you could just put a BAT in your Startup folder.
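To make the registry answer concrete, the entry can also be added from a command prompt with reg.exe; the value name MyStartupScript and the path below are placeholders. Note that the Run key fires at logon, so on its own it does not cover the "before anyone logs in" case - for that, the local policy startup script mentioned above is the closer fit.

    reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run" /v MyStartupScript /t REG_SZ /d "C:\scripts\startup.bat"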
{ "language": "en", "url": "https://stackoverflow.com/questions/103842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: TIGER shapefiles - using and interpreting I know of the US GIS TIGER file format from years ago, but have never used it. I'm very shortly going to need to very quickly implement simple geocoding and vector graphics of roads and other features. * *Where do I go for information - are there tutorials, example queries, etc? Are there other ways to include geocoding and basic mapping on a mobile (no-internet) device? -Adam A: As far as I'm aware, there aren't many applications that make use of the TIGER/Line format directly. Most apps use TIGER files that have been translated into ESRI's shapefile format. Edited to add: Is there information on ESRI's format available? There's an ESRI whitepaper describing the file format. If you're planning to use shapefiles in an application, there are various libraries out there. A: The OpenStreetMap project imported TIGER data, so you might find useful code snippets there. See the TIGER page on the OpenStreetMap wiki for more information and links.
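As a small illustration of reading a TIGER-derived shapefile from code, here is a sketch using the pyshp library; the file name roads.shp and the way the attributes are used are assumptions, and whether this fits a no-internet mobile device depends on the platform.

    import shapefile  # pyshp

    sf = shapefile.Reader("roads")        # opens roads.shp / roads.dbf / roads.shx
    print(sf.fields)                      # attribute column definitions from the .dbf
    for rec in sf.shapeRecords():         # geometry plus attributes, record by record
        points = rec.shape.points         # list of (x, y) vertices for the road segment
        attrs = rec.record                # attribute row (names, feature codes, ...)
        # draw the polyline / index the attributes for simple geocoding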
{ "language": "en", "url": "https://stackoverflow.com/questions/103843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I merge a 2D array in Python into one string with List Comprehension? List Comprehension for me seems to be like the opaque block of granite that regular expressions are for me. I need pointers. Say, I have a 2D list: li = [[0,1,2],[3,4,5],[6,7,8]] I would like to merge this either into one long list li2 = [0,1,2,3,4,5,6,7,8] or into a string with separators: s = "0,1,2,3,4,5,6,7,8" Really, I'd like to know how to do both. A: Try that: li=[[0,1,2],[3,4,5],[6,7,8]] li2 = [ y for x in li for y in x] You can read it like this: Give me the list of every ys. The ys come from the xs. The xs come from li. To map that in a string: ','.join(map(str,li2)) A: There's a couple choices. First, you can just create a new list and add the contents of each list to it: li2 = [] for sublist in li: li2.extend(sublist) Alternately, you can use the itertools module's chain function, which produces an iterable containing all the items in multiple iterables: import itertools li2 = list(itertools.chain(*li)) If you take this approach, you can produce the string without creating an intermediate list: s = ",".join(itertools.chain(*li)) A: My favorite, and the shortest one, is this: li2 = sum(li, []) and s = ','.join(li2) EDIT: use sum instead of reduce, (thanks Thomas Wouters!) A: Like so: [ item for innerlist in outerlist for item in innerlist ] Turning that directly into a string with separators: ','.join(str(item) for innerlist in outerlist for item in innerlist) Yes, the order of 'for innerlist in outerlist' and 'for item in innerlist' is correct. Even though the "body" of the loop is at the start of the listcomp, the order of nested loops (and 'if' clauses) is still the same as when you would write the loop out: for innerlist in outerlist: for item in innerlist: ... A: For the second one, there is a built-in string method to do that : >>> print ','.join(str(x) for x in li2) "0,1,2,3,4,5,6,7,8" For the first one, you can use join within a comprehension list : >>> print ",".join([",".join(str(x) for x in li]) "0,1,2,3,4,5,6,7,8" But it's easier to use itertools.flatten : >>> import itertools >>> print itertools.flatten(li) [0,1,2,3,4,5,6,7,8] >>> print ",".join(str(x) for x in itertools.flatten(li)) "0,1,2,3,4,5,6,7,8" N.B : itertools is a module that help you to deal with common tasks with iterators such as list, tuples or string... It's handy because it does not store a copy of the structure you're working on but process the items one by one. EDIT : funny, I am learning plenty of way to do it. Who said that there was only one good way to do it ? A: import itertools itertools.flatten( li ) A: To make it a flattened list use either: * *http://code.activestate.com/recipes/121294/ *http://code.activestate.com/recipes/363051/ Then, join to make it a string. A: There are many ways to do this problem. I like Numpy's tools because it is normally already imported in everything I do. However, if you aren't using Numpy for anything else this probably isn't a good method. import numpy li = [[0,1,2],[3,4,5],[6,7,8]] li2=li[0] #first element of array to merge i=1 while i<len(li): li2=numpy.concatenate((li2,li[i])) i+=1 print li2 This would print [0 1 2 3 4 5 6 7 8] and then you can convert this into your string too. A: Here is a way: def convert2DArrtostring(ndArr): '''converts 2D array to string''' arr_str = "[" for i in ndArr: arr_str += "[" for j in i: arr_str += str(j) + " " arr_str += "]\n" arr_str += "]" return arr_str
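For reference, on current Python versions the flattening step above is usually spelled with itertools.chain.from_iterable (a flatten function is not part of the standard itertools module), and the string is built with join; a short sketch:

    from itertools import chain

    li = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]

    li2 = list(chain.from_iterable(li))    # [0, 1, 2, 3, 4, 5, 6, 7, 8]
    s = ",".join(str(x) for x in li2)      # "0,1,2,3,4,5,6,7,8"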
{ "language": "en", "url": "https://stackoverflow.com/questions/103844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Communicating between Java and Flash without a Flash-specific server I have Java and Flash client applications. What is the best way for the two to communicate without special Flash-specific servers such as BlazeDS or Red5? I am looking for a light client-only solution. A: Well, you can make HTTP requests from Flash to any URL... so if your Java server has an endpoint where it can listen to incoming requests and process XML or JSON, your Flash client can just make the request to that URL. BlazeDS and Red5 just aim to make it simpler by handling the translation for you, making it possible to call the server-side functions transparently. A: Are they running in a browser (applet and SWF), or are they standalone apps? If they're running in a browser then you can use javascript. Both Flash and Java can access JavaScript. It's fragile, but it works. If they're running as actual applications then you can have Java open a socket connection on some port. Then Flash can connect to that and they can send XML data back and forth. I've done both of these, so I know they both work. The javascript thing is fragile, but the socket stuff has worked great. A: WebORB for Java may be of some help to you. It integrates with your J2EE code. For more info: http://www.themidnightcoders.com/weborb/java/ I'm sorry, I reread your question and see that you are only looking for a client-side solution. In this case, WebORB will not help you. Sorry for the misunderstanding. A: Merapi Bridge API Merapi allows developers to connect Adobe AIR applications, written in Adobe Flex, to Java applications running on the user's local computer. A: There's a Flash implementation of Caucho's Hessian web service protocol. This approach would be similar to using JSON or XML, but is more performant, since Hessian is a binary protocol. If you happen to be using Spring on your server, you can use the Spring/Hessian binding to call your Spring services directly from your Flash application with minimal work.
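To make the socket suggestion a bit more concrete, here is a rough Java-side sketch. It assumes the Flash side connects with XMLSocket, whose messages are terminated by a zero byte; the port number and XML payload are arbitrary, and depending on the Flash Player version the real server would also have to answer the socket policy file request.

    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class FlashBridgeServer {
        public static void main(String[] args) throws Exception {
            ServerSocket server = new ServerSocket(9999);   // arbitrary port
            while (true) {
                Socket client = server.accept();            // Flash XMLSocket connects here
                OutputStream out = client.getOutputStream();
                String xml = "<message><text>hello from Java</text></message>";
                out.write(xml.getBytes("UTF-8"));
                out.write(0);                               // XMLSocket messages end with a zero byte
                out.flush();
            }
        }
    }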
{ "language": "en", "url": "https://stackoverflow.com/questions/103861", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How does Authorize.Net Silent Post work? Authorize.net offers a "Silent POST" feature for their Automated Recurring Billing. It's supposed to POST data to a URL of your choosing, telling you whether they were able to charge the customer, how much, etc. The problem is, it isn't very well documented. * *Is there any way to test a post to that URL? I've signed up for a developer account, but there's no way to specify that URL like you could in the actual system. Hence, there doesn't seem to be a way to test it out. *If not, is there a list of possible values it could return? It appears to send x_first_name, x_amount - I've seen code that uses those values - but since I can't actually get it to send a response, I'm not sure. *Is there documentation for this feature anywhere? Or even a class that implements it fully? A: Better late than never: All About Authorize.Net’s Silent Post A: I have not seen much on it, only for AIM and SIM; you might just give them a call. A: Log in to your Authorize.Net order processing account, and click on the Settings link (under ACCOUNT, in the left column). Then click on the "Silent Post URL" link in the Transaction Format Settings area. You can enter your silent post URL on the next page. The next page also contains a link to the documentation explaining the technical details. HTH A: Here are a few more (somewhat) useful posts I found on the subject. * *Merchant Account Services - gives some limited sample code (PHP) *Experts Exchange - lists a few helpful variables, gives an idea of what's being sent (ASP). A: You still have to call your account rep for them to activate the Silent Post URL with your account, because that is not something that is enabled automatically. A: Our clients use the following tool to test Silent Post URL requests sent from the Authorize.Net gateway. Simply add the following URL to your silent post settings and change the email address for the results to be delivered to an email of choice. URL: http://www.silentposturl.com/action/email/index.php?support@silentposturl.com
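On the receiving side, the Silent Post is just an ordinary form POST to your URL, so any handler that can parse URL-encoded fields will do. As a rough illustration only - the x_* field names follow the convention mentioned above but should be verified against Authorize.Net's own documentation, and the WSGI framing is just one arbitrary choice - a minimal receiver might look like:

    from urllib.parse import parse_qs

    def application(environ, start_response):
        # Authorize.Net posts the transaction as application/x-www-form-urlencoded
        length = int(environ.get("CONTENT_LENGTH") or 0)
        fields = parse_qs(environ["wsgi.input"].read(length).decode("utf-8"))

        response_code = fields.get("x_response_code", [""])[0]  # assumed field name
        amount = fields.get("x_amount", [""])[0]                # field mentioned in the question
        # record the result, mark the subscription as paid/failed, etc.

        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"ok"]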
{ "language": "en", "url": "https://stackoverflow.com/questions/103892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Automate a Ruby Gem install that has input I am trying to install the ibm_db gem so that I can access DB2 from Ruby. When I try: sudo gem install ibm_db I get the following request for clarification: Select which gem to install for your platform (i486-linux) 1. ibm_db 0.10.0 (ruby) 2. ibm_db 0.10.0 (mswin32) 3. ibm_db 0.9.5 (mswin32) 4. ibm_db 0.9.5 (ruby) 5. Skip this gem 6. Cancel installation I am always going to be installing the linux version (which I assume is the "ruby" version), so is there a way to pick which one I will install straight from the gem install command? The reason this is a problem is that I need to automate this install via a bash script, so I would like to select that I want the "ruby" version ahead of time. A: You can use a 'here document'. That is: sudo gem install ibm_db <<heredoc 1 heredoc What's between the <<SOMETHING and SOMETHING gets fed as input to the previous command (somewhat like Ruby's own heredocs). The 1 there alone, of course, is the selection of the "ibm_db 0.10.0 (ruby)" platform. Hope it's enough. A: Try this: sudo gem install --platform ruby ibm_db Note that you can get help on the install command using: gem help install UPDATE: Looks like this option only works for RubyGems 0.9.5 or above. A: @John Topley I already tried gem help install, and --platform is not an option, both in help and in practice: $ sudo gem install ibm_db --platform ruby ERROR: While executing gem ... (OptionParser::InvalidOption) invalid option: --platform UPDATE: The Ubuntu repos have the 0.9.4 version of rubygems, which doesn't have the --platform option. It appears it may be a new feature in 0.9.5, but there is still no online documentation for it, and regardless, it won't work on Ubuntu, which is the platform I need it to work on. A: Try this, I think it only works on Bash though: sudo gem install ibm_db < <(echo 1) A: Versions of Rubygems from 1.0 and up automatically detect the platform you are running and thus do not ask that question. Are you able to update your gems to the latest? $ sudo gem update --system Be warned if you are on Windows: once you have updated, you might run into this issue. A: Another option is to download the .gem file and install it manually as such: sudo gem install path/to/ibm_db-0.10.0.gem This particular gem was at rubyforge.
{ "language": "en", "url": "https://stackoverflow.com/questions/103918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you implement a function which returns the URL of last visited page I want to store the current URL in a session variable to reference the previously visited page. If I store every URL (via a before_filter on ApplicationController), then actions which end in a redirect (create, update, etc.) are also considered the last visited page. Is there a way to tell Rails only to execute a function if a template is rendered? Update Thanks for the after_filter tip... having written so many before_filters I didn't see the obvious. But the trick with @performed_redirect doesn't work - this is what I've got so far: class ApplicationController < ActionController::Base after_filter :set_page_as_previous_page def set_page_as_previous_page unless @performed_redirect flash[:previous_page] = request.request_uri else flash[:previous_page] = flash[:previous_page] end end end I need to implement a "Go Back" link without the use of JavaScript or the HTTP Referer. Sorry, I should have mentioned that; I appreciate your help! Update 2 I found a solution, which is not very elegant and only works if your app follows the standard naming scheme: def after_filter if File.exists?(File.join(Rails.root,"app","views", controller_path, action_name+".html.erb")) flash[:previous_page] = request.request_uri else flash[:previous_page] = flash[:previous_page] end end A: Not sure why @performed_redirect isn't working; you can see that it does exist and has the desired values by calling the actions on this test controller: class RedirController < ApplicationController after_filter :redir_raise def raise_true redirect_to :action => :whatever end def raise_false render :text => 'foo' end private def redir_raise raise @performed_redirect.to_s end end As an aside, instead of doing flash[:previous_page] = flash[:previous_page] you can do flash.keep :previous_page (My patch, that, back in the day :P) A: Another possible approach to determine whether the response is a redirect vs. a render is to check the status code: class ApplicationController < ActionController::Base after_filter :set_page_as_previous_page def set_page_as_previous_page unless 302 == request.status #redirecting flash[:previous_page] = request.request_uri else flash[:previous_page] = flash[:previous_page] end end end It really seems like there should be a redirect? method in ActionController::Base for this. A: Can you be a bit more specific? I can't quite get your question - by template do you mean render :view, the layout? Or only when called with render :template? Rendering a page? render :action=>:new is a page too... Can you be a bit more specific on which you want to capture and which you want to exclude? Saw the accepted answer :) A: The controller will have these variables, which might be helpful: @performed_render @performed_redirect But anyway, how exactly are you storing the URL? Show us the filter code. Why not use an after_filter?
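On current Rails versions the same idea can be written without poking at @performed_redirect, since the response object can be asked whether it is a redirect; a sketch (using session rather than flash, and assuming a reasonably recent Rails where after_action and response.redirect? exist):

    class ApplicationController < ActionController::Base
      after_action :remember_previous_page

      private

      def remember_previous_page
        # Only remember pages that actually rendered, not redirects
        session[:previous_page] = request.fullpath unless response.redirect?
      end
    end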
{ "language": "en", "url": "https://stackoverflow.com/questions/103919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: ResultSet not closed when connection closed? I've been doing code review (mostly using tools like FindBugs) of one of our pet projects and FindBugs marked the following code as erroneous (pseudocode): Connection conn = dataSource.getConnection(); try{ PreparedStatement stmt = conn.prepareStatement(); //initialize the statement stmt.execute(); ResultSet rs = stmt.getResultSet(); //get data }finally{ conn.close(); } The error was that this code might not release resources. I figured out that the ResultSet and Statement were not closed, so I closed them in finally: finally{ try{ rs.close(); }catch(SQLException se){ //log it } try{ stmt.close(); }catch(SQLException se){ //log it } conn.close(); } But I encountered the above pattern in many projects (from quite a few companies), and no one was closing ResultSets or Statements. Have you had trouble with ResultSets and Statements not being closed when the Connection is closed? I found only this and it refers to Oracle having problems with closing ResultSets when closing Connections (we use Oracle db, hence my corrections). The java.sql API says nothing about it in the Connection.close() javadoc. A: Oracle will give you errors about open cursors in this case. According to: http://java.sun.com/javase/6/docs/api/java/sql/Statement.html it looks like reusing a statement will close any open resultsets, and closing a statement will close any resultsets, but I don't see anything saying that closing a connection will close any of the resources it created. All of those details are left to the JDBC driver provider. It's always safest to close everything explicitly. We wrote a util class that wraps everything with try{ xxx } catch (Throwable t) {} so that you can just call Utils.close(rs) and Utils.close(stmt), etc. without having to worry about exceptions that close can supposedly throw. A: The ODBC Bridge can produce a memory leak with some ODBC drivers. If you use a good JDBC driver then you should not have any problems with closing the connection. But there are 2 problems: * *Do you know if you have a good driver? *Will you use other JDBC drivers in the future? So the best practice is to close it all. A: I work in a large J2EE web environment. We have several databases that may be connected to in a single request. We began getting logical deadlocks in some of our applications. The issue was as follows: * *User would request page *Server connects to DB 1 *Server Selects on DB 1 *Server "closes" connection to DB 1 *Server connects to DB 2 *Deadlocked! This occurred for 2 reasons: we were experiencing a far higher volume of traffic than normal, and the J2EE Spec by default does not actually close your connection until the thread finishes execution. So, in the above example, step 4 never actually closed the connection even though it was closed properly in finally. To fix this, you have to use resource references in the web.xml for your Database Connections and you have to set the res-sharing-scope to Unshareable. Example: <resource-ref> <description>My Database</description> <res-ref-name>jdbc/jndi/pathtodatasource</res-ref-name> <res-type>javax.sql.DataSource</res-type> <res-auth>Container</res-auth> <res-sharing-scope>Unshareable</res-sharing-scope> </resource-ref> A: One problem with ONLY closing the connection and not the result set is that if your connection management code is using connection pooling, the connection.close() would just put the connection back in the pool.
Additionally, some databases have a cursor resource on the server that will not be freed properly unless it is explicitly closed. A: I've definitely seen problems with unclosed ResultSets, and what can it hurt to close them all the time, right? The unreliability of needing to remember to do this is one of the best reasons to move to frameworks that manage these details for you. It might not be feasible in your development environment, but I've had great luck using Spring to manage JPA transactions. The messy details of opening connections, statements, result sets, and writing over-complicated try/catch/finally blocks (with try/catch blocks in the finally block!) to close them again just disappear, leaving you to actually get some work done. I'd highly recommend migrating to that kind of a solution. A: In Java, Statements (not Resultsets) correlate to Cursors in Oracle. It is best to close the resources that you open, as unexpected behavior can occur with regard to the JVM and system resources. Additionally, some JDBC pooling frameworks pool Statements and Connections, so not closing them might not mark those objects as free in the pool, and cause performance issues in the framework. In general, if there is a close() or destroy() method on an object, there's a reason to call it, and ignoring it is done at your own peril. A: I've had problems with unclosed ResultSets in Oracle, even though the connection was closed. The error I got was "ORA-01000: maximum open cursors exceeded" So: Always close your ResultSet! A: You should always close all JDBC resources explicitly. As Aaron and John already said, closing a connection will often only return it to a pool, and not all JDBC drivers are implemented exactly the same way. Here is a utility method that can be used from a finally block: public static void closeEverything(ResultSet rs, Statement stmt, Connection con) { if (rs != null) { try { rs.close(); } catch (SQLException e) { } } if (stmt != null) { try { stmt.close(); } catch (SQLException e) { } } if (con != null) { try { con.close(); } catch (SQLException e) { } } }
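On Java 7 and later, the same close-everything rule can be written with try-with-resources, which closes the ResultSet, Statement and Connection automatically and in reverse order even when an exception is thrown. A sketch (the table, column and DataSource wiring are placeholders):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public final class UserDao {
        private final DataSource dataSource;

        public UserDao(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        public String findName(long userId) throws SQLException {
            // Each resource is closed automatically, in reverse order, even on exceptions.
            try (Connection conn = dataSource.getConnection();
                 PreparedStatement stmt = conn.prepareStatement(
                         "SELECT name FROM users WHERE id = ?")) {
                stmt.setLong(1, userId);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next() ? rs.getString("name") : null;
                }
            }
        }
    }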
{ "language": "en", "url": "https://stackoverflow.com/questions/103938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "49" }
Q: Flex: Custom Item Renderer For Combobox controls truncates text I've implemented a custom item renderer that I'm using with a combobox on a flex project I'm working on. It displays and icon and some text for each item. The only problem is that when the text is long the width of the menu is not being adjusted properly and the text is being truncated when displayed. I've tried tweaking all of the obvious properties to alleviate this problem but have not had any success. Does anyone know how to make the combobox menu width scale appropriately to whatever data it's rendering? My custom item renderer implementation is: <?xml version="1.0" encoding="utf-8"?> <mx:HBox xmlns:mx="http://www.adobe.com/2006/mxml" styleName="plain" horizontalScrollPolicy="off"> <mx:Image source="{data.icon}" /> <mx:Label text="{data.label}" fontSize="11" fontWeight="bold" truncateToFit="false"/> </mx:HBox> And my combobox uses it like so: <mx:ComboBox id="quicklinksMenu" change="quicklinkHandler(quicklinksMenu.selectedItem.data);" click="event.stopImmediatePropagation();" itemRenderer="renderers.QuickLinkItemRenderer" width="100%"/> EDIT: I should clarify on thing: I can set the dropdownWidth property on the combobox to some arbitrarily large value - this will make everything fit, but it will be too wide. Since the data being displayed in this combobox is generic, I want it to automatically size itself to the largest element in the dataprovider (the flex documentation says it will do this, but I have the feeling my custom item renderer is somehow breaking that behavior) A: Just a random thought (no clue if this will help): Try setting the parent HBox and the Label's widths to 100%. That's generally fixed any problems I've run into that were similar. A: Have you tried using the calculatePreferredSizeFromData() method? protected override function calculatePreferredSizeFromData(count:int):Object A: This answer is probably too late, but I had a very similar problem with the DataGrid's column widths. After much noodling, I decided to pre-render my text in a private TextField, get the width of the rendered text from that, and explicitly set the width of the column on all of the appropriate resize type events. A little hack-y but works well enough if you haven't got a lot of changing data. A: You would need to do two things: * *for the text, use mx.controls.Text (that supports text wrapping) instead of mx.controls.Label *set comboBox's dropdownFactory.variableRowHeight=true -- this dropdownFactory is normally a subclass of List, and the itemRenderer you are setting on ComboBox is what will be used to render each item in the list And, do not explicitly set comboBox.dropdownWidth -- let the default value of comboBox.width be used as dropdown width. A: If you look at the measure method of mx.controls.ComboBase, you'll see that the the comboBox calculates it's measuredMinWidth as a sum of the width of the text and the width of the comboBox button. // Text fields have 4 pixels of white space added to each side // by the player, so fudge this amount. 
// If we don't have any data, measure a single space char for defaults if (collection && collection.length > 0) { var prefSize:Object = calculatePreferredSizeFromData(collection.length); var bm:EdgeMetrics = borderMetrics; var textWidth:Number = prefSize.width + bm.left + bm.right + 8; var textHeight:Number = prefSize.height + bm.top + bm.bottom + UITextField.TEXT_HEIGHT_PADDING; measuredMinWidth = measuredWidth = textWidth + buttonWidth; measuredMinHeight = measuredHeight = Math.max(textHeight, buttonHeight); } The calculatePreferredSizeFromData method mentioned by @defmeta (implemented in mx.controls.ComboBox) assumes that the data renderer is just a text field, and uses flash.text.lineMetrics to calculate the text width from label field in the data object. If you want to add an additional visual element to the item renderer and have the ComboBox take it's size into account when calculating it's own size, you will have to extend the mx.controls.ComboBox class and override the calculatePreferredSizeFromData method like so: override protected function calculatePreferredSizeFromData(count:int):Object { var prefSize:Object = super.calculatePrefferedSizeFromData(count); var maxW:Number = 0; var maxH:Number = 0; var bookmark:CursorBookmark = iterator ? iterator.bookmark : null; var more:Boolean = iterator != null; for ( var i:int = 0 ; i < count ; i++) { var data:Object; if (more) data = iterator ? iterator.current : null; else data = null; if(data) { var imgH:Number; var imgW:Number; //calculate the image height and width using the data object here maxH = Math.max(maxH, prefSize.height + imgH); maxW = Math.max(maxW, prefSize.width + imgW); } if(iterator) iterator.moveNext(); } if(iterator) iterator.seek(bookmark, 0); return {width: maxW, height: maxH}; } If possible store the image dimensions in the data object and use those values as imgH and imgW, that will make sizing much easier. EDIT: If you are adding elements to the render besides an image, like a label, you will also have to calculate their size as well when you iterate through the data elements and take those dimensions into account when calculating maxH and maxW.
{ "language": "en", "url": "https://stackoverflow.com/questions/103945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: SSRS - ReportViewer LocalReport Set SubReport parameter value How can you programmatically set the parameters for a subreport? For the top-level report, you can do the following: reportViewer.LocalReport.SetParameters ( new Microsoft.Reporting.WebForms.ReportParameter[] { new Microsoft.Reporting.WebForms.ReportParameter("ParameterA", "Test"), new Microsoft.Reporting.WebForms.ReportParameter("ParameterB", "1/10/2009 10:30 AM"), new Microsoft.Reporting.WebForms.ReportParameter("ParameterC", "1234") } ); Passing parameters like the above only seems to pass them to the top-level report, not the subreports. The LocalReport allows you to handle the SubreportProcessing event. That passes you an instance of SubreportProcessingEventArgs, which has a property of Type ReportParameterInfoCollection. The values in this collection are read-only. A: Add the parameter to the parent report and set the sub report parameter value from the parent report (in the actual report definition). This is what I've read. Let me know if it works for you. A: set the parameter to <Expression...> and use formula builder to add the parent parameter.
{ "language": "en", "url": "https://stackoverflow.com/questions/103976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Specifying location of new inlineshape in Word VBA? I'm working on a document "wizard" for the company that I work for. It's a .dot file with a header consisting of some text and some form fields, and a lot of VBA code. The body of the document is pulled in as an OLE object from a separate .doc file. Currently, this is being done as a Shape, rather than an InlineShape. I did this because I can absolutely position the Shape, whereas the InlineShape always appears at the beginning of the document. The problem with this is that a Shape doesn't move when the size of the header changes. If someone needs to add or remove a line from the header due to a special case, they also need to move the object that defines the body. This is a pain, and I'd like to avoid it if possible. Long story short, how do I position an InlineShape using VBA in Word? The version I'm using is Word 97. A: InlineShape is treated as a letter. Hence, the same technique. ThisDocument.Range(15).InlineShapes.AddPicture "1.gif" A: My final code ended up using ThisDocument.Paragraphs to get the range I needed. But GSerg pointed me in the right direction of using a Range to get my object where it needed to be.
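Building on the accepted approach, here is a sketch of inserting the OLE object at a paragraph-based range rather than at the start of the document; the file path and paragraph index are placeholders, and the exact AddOLEObject arguments should be double-checked against the Word object model reference for your Word version.

    Sub InsertBodyAtParagraph()
        Dim rng As Range
        ' Anchor the inline OLE object at the third paragraph instead of the start of the document
        Set rng = ThisDocument.Paragraphs(3).Range
        rng.InlineShapes.AddOLEObject FileName:="C:\templates\body.doc", LinkToFile:=False
    End Sub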
{ "language": "en", "url": "https://stackoverflow.com/questions/103980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: looking for a tuple matching algorithm I need to implement an in-memory tuple-of-strings matching feature in C. There will be large list of tuples associated with different actions and a high volume of events to be matched against the list. List of tuples: ("one", "four") ("one") ("three") ("four", "five") ("six") event ("one", "two", "three", "four") should match list item ("one", "four") and ("one") and ("three") but not ("four", "five") and not ("six") my current approach uses a map of all tuple field values as keys for lists of each tuple using that value. there is a lot of redundant hashing and list insertion. is there a right or classic way to do this? A: If you only have a small number of possible tuple values it would make sense to write some sort of hashing function which could turn them into integer indexes for quick searching. If there are < 32 values you could do something with bitmasks: unsigned int hash(char *value){...} typedef struct _tuple { unsigned int bitvalues; void * data } tuple; tuple a,b,c,d; a.bitvalues = hash("one"); a.bitvalues |= hash("four"); //a.data = something; unsigned int event = 0; //foreach value in event; event |= hash(string_val); // foreach tuple if(x->bitvalues & test == test) { //matches } If there are too many values to do a bitmask solution you could have an array of linked lists. Go through each item in the event. If the item matches key_one, walk through the tuples with that first key and check the event for the second key: typedef struct _tuple { unsigned int key_one; unsigned int key_two; _tuple *next; void * data; } tuple; tuple a,b,c,d; a.key_one = hash("one"); a.key_two = hash("four"); tuple * list = malloc(/*big enough for all hash indexes*/ memset(/*clear list*/); //foreach touple item if(list[item->key_one]) put item on the end of the list; else list[item->key_one] = item; //foreach event //foreach key if(item_ptr = list[key]) while(item_ptr.next) if(!item_ptr.key_two || /*item has key_two*/) //match item_ptr = item_ptr.next; This code is in no way tested and probably has many small errors but you should get the idea. (one error that was corrected was the test condition for tuple match) If event processing speed is of utmost importance it would make sense to iterate through all of your constructed tuples, count the number of occurrences and go through possibly re-ordering the key one/key two of each tuple so the most unique value is listed first. A: A possible solution would be to assign a unique prime number to each of the words. Then if you multiply the words together in each tuple, then you have a number that represents the words in the list. Divide one list by another, and if you get an integer remainder, then the one list is contained in the other. A: I don't know of any classical or right way to do this, so here is what I would do :P It looks like you want to decide if A is a superset of B, using set theory jargon. One way you can do it is to sort A and B, and do a merge sort-esque operation on A and B, in that you try to find where in A a value in B goes. Those elements of B which are also in A, will have duplicates, and the other elements won't. Because both A and B are sorted, this shouldn't be too horrible. For example, you take the first value of B, and walk A until you find its duplicate in A. Then you take the second value of B, and start walking A from where you left off previously. If you get to end of A without finding a match, then A is not a superset of B, and you return false. 
If these tuples can stay sorted, then the sorting cost is only incurred once. A: If you have a smallish number of possible strings, you can assign an index to each and use bitmaps. That way a simple bitwise and will tell you if there's overlap. If that's not practical, your inverted index setup is probably going to be hard to match for speed, especially if you only have to build it once. (does the list of tuples change at runtime?) A: public static void Main() { List<List<string>> tuples = new List<List<string>>(); string [] tuple = {"one", "four"}; tuples.Add(new List<string>(tuple)); tuple = new string [] {"one"}; tuples.Add(new List<string>(tuple)); tuple = new string [] {"three"}; tuples.Add(new List<string>(tuple)); tuple = new string[]{"four", "five"}; tuples.Add(new List<string>(tuple)); tuple = new string[]{"six"}; tuples.Add(new List<string>(tuple)); tuple = new string[] {"one", "two", "three", "four"}; List<string> checkTuple = new List<string>(tuple); List<List<string>> result = new List<List<string>>(); foreach (List<string> ls in tuples) { bool ok = true; foreach(string s in ls) if(!checkTuple.Contains(s)) { ok = false; break; } if (ok) result.Add(ls); } }
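Pulling the bitmask idea from the answers above together, here is a compact C sketch. It assumes the full vocabulary of strings is known and is at most 64 words, so each tuple and each event can be reduced to a bitmask once, and each match check becomes a single AND; intern() below is a stand-in for whatever string-to-index lookup (hash table, sorted array) the real code would use.

    #include <stdint.h>
    #include <string.h>

    #define MAX_WORDS 64

    static const char *vocab[MAX_WORDS];   /* known words, filled in as they are seen */
    static int vocab_len = 0;

    /* Map a word to its bit index; a real implementation would hash instead of scanning. */
    static int intern(const char *word)
    {
        for (int i = 0; i < vocab_len; i++)
            if (strcmp(vocab[i], word) == 0)
                return i;
        vocab[vocab_len] = word;
        return vocab_len++;
    }

    static uint64_t to_mask(const char **words, int n)
    {
        uint64_t mask = 0;
        for (int i = 0; i < n; i++)
            mask |= (uint64_t)1 << intern(words[i]);
        return mask;
    }

    /* A tuple matches an event when every one of its words occurs in the event. */
    static int tuple_matches(uint64_t tuple_mask, uint64_t event_mask)
    {
        return (tuple_mask & event_mask) == tuple_mask;
    }

With the example from the question, to_mask would be called once per tuple and once per incoming event, and ("one", "four") matches the event ("one", "two", "three", "four") because both of its bits are set in the event's mask.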
{ "language": "en", "url": "https://stackoverflow.com/questions/103989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How can I get full string value of variable in VC6 watch window? I'm wanting to get the full value of a char[] variable in the VC6 watch window, but it only shows a truncated version. I can copy the value from a debug memory window, but that contains mixed lines of hex and string values. Surely there is a better way? A: For large strings, you're pretty much stuck with the memory window - the tooltip would truncate eventually. Fortunately, the memory window is easy to get data from - I tend to show it in 8-byte chunks so it's easy to manage, find your string data and cut & paste the lot into a blank window, then use alt+drag to select columns and delete the hex values. Then start at the bottom of the string and continually page up/delete (the newline) to build your string (I use a macro for that bit). I don't think there's any better way once you get long strings. A: Push comes to shove, you can put in the watch, given char bigArray[1000]; watch: &bigArray[0] &bigArray[100] &bigArray[200] ... or change the index for where in the string you want to look... It's clunky, but it's worked for me in the past. A: The only technique I have seen is to watch the string, then the string + 50, + 100, etc. Eugene Ivakhiv wrote an addin for MSVC 6 that lets you display the full string in an edit box. A: There's a cute plugin for VC6 called XDebug. It adds a dialog for viewing different types of strings. It worked great for me. A: Perhaps get used to creating logfiles, and write output into the file directly, then bring it up in your favorite text editor.
{ "language": "en", "url": "https://stackoverflow.com/questions/104009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Localize Strings in Javascript I'm currently using .resx files to manage my server side resources for .NET. the application that I am dealing with also allows developers to plugin JavaScript into various event handlers for client side validation, etc.. What is the best way for me to localize my JavaScript messages and strings? Ideally, I would like to store the strings in the .resx files to keep them with the rest of the localized resources. I'm open to suggestions. A: I would use an object/array notation: var phrases={}; phrases['fatalError'] ='On no!'; Then you can just swap the JS file, or use an Ajax call to redefine your phrase list. A: With a satellite assembly (instead of a resx file) you can enumerate all strings on the server, where you know the language, thus generating a Javascript object with only the strings for the correct language. Something like this works for us (VB.NET code): Dim rm As New ResourceManager([resource name], [your assembly]) Dim rs As ResourceSet = rm.GetResourceSet(Thread.CurrentThread.CurrentCulture, True, True) For Each kvp As DictionaryEntry In rs [Write out kvp.Key and kvp.Value] Next However, we haven't found a way to do this for .resx files yet, sadly. A: JSGettext does an excellent job -- dynamic loading of GNU Gettext .po files using pretty much any language on the backend. Google for "Dynamic Javascript localization with Gettext and PHP" to find a walkthrough for JSGettext with PHP (I'd post the link, but this silly site won't let me, sigh...) Edit: this should be the link A: There's a library for localizing JavaScript applications: https://github.com/wikimedia/jquery.i18n It can do parameter replacement, supports gender (clever he/she handling), number (clever plural handling, including languages that have more than one plural form), and custom grammar rules that some languages need. The strings are stored in JSON files. The only requirement is jQuery. A: A basic JavaScript object is an associative array, so it can easily be used to store key/value pairs. So using JSON, you could create an object for each string to be localized like this: var localizedStrings={ confirmMessage:{ 'en/US':'Are you sure?', 'fr/FR':'Est-ce que vous êtes certain?', ... }, ... } Then you could get the locale version of each string like this: var locale='en/US'; var confirm=localizedStrings['confirmMessage'][locale]; A: I did the following to localize JavaScript for a mobile app running HTML5: 1.Created a set of resource files for each language calling them like "en.js" for English. 
Each contained the different strings the app as follows: var localString = { appName: "your app name", message1: "blah blah" }; 2.Used Lazyload to load the proper resource file based on the locale language of the app: https://github.com/rgrove/lazyload 3.Pass the language code via a Query String (As I am launching the html file from Android using PhoneGap) 4.Then I wrote the following code to load dynamically the proper resource file: var lang = getQueryString("language"); localization(lang); function localization(languageCode) { try { var defaultLang = "en"; var resourcesFolder = "values/"; if(!languageCode || languageCode.length == 0) languageCode = defaultLang; // var LOCALIZATION = null; LazyLoad.js(resourcesFolder + languageCode + ".js", function() { if( typeof LOCALIZATION == 'undefined') { LazyLoad.js(resourcesFolder + defaultLang + ".js", function() { for(var propertyName in LOCALIZATION) { $("#" + propertyName).html(LOCALIZATION[propertyName]); } }); } else { for(var propertyName in LOCALIZATION) { $("#" + propertyName).html(LOCALIZATION[propertyName]); } } }); } catch (e) { errorEvent(e); } } function getQueryString(name) { name = name.replace(/[\[]/, "\\\[").replace(/[\]]/, "\\\]"); var regexS = "[\\?&]" + name + "=([^]*)"; var regex = new RegExp(regexS); var results = regex.exec(window.location.href); if(results == null) return ""; else return decodeURIComponent(results[1].replace(/\+/g, " ")); } 5.From the html file I refer to the strings as follows: span id="appName" A: Well, I think that you can consider this. English-Spanish example: Write 2 Js Scripts, like that: en-GB.js lang = { date_message: 'The start date is incorrect', ... }; es-ES.js lang = { date_message: 'Fecha de inicio incorrecta', ... }; Server side - code behind: Protected Overrides Sub InitializeCulture() Dim sLang As String sLang = "es-ES" Me.Culture = sLang Me.UICulture = sLang Page.ClientScript.RegisterClientScriptInclude(sLang & ".js", "../Scripts/" & sLang & ".js") MyBase.InitializeCulture() End Sub Where sLang could be "en-GB", you know, depending on current user's selection ... Javascript calls: alert (lang.date_message); And it works, very easy, I think. A: Inspired by SproutCore You can set properties of strings: 'Hello'.fr = 'Bonjour'; 'Hello'.es = 'Hola'; and then simply spit out the proper localization based on your locale: var locale = 'en'; alert( message[locale] ); A: After Googling a lot and not satisfied with the majority of solutions presented, I have just found an amazing/generic solution that uses T4 templates. The complete post by Jochen van Wylick you can read here: Using T4 for localizing JavaScript resources based on .resx files Main advantages are: * *Having only 1 place where resources are managed ( namely the .resx files ) *Support for multiple cultures *Leverage IntelliSense - allow for code completion Disadvantages: The shortcomings of this solution are of course that the size of the .js file might become quite large. However, since it's cached by the browser, we don't consider this a problem for our application. However - this caching can also result in the browser not finding the resource called from code. How this works? Basically he defined a T4 template that points to your .resx files. With some C# code he traverses each and every resource string and add it to JavaScript pure key value properties that then are output in a single JavaScript file called Resources.js (you can tweak the names if you wish). 
T4 template [ change accordingly to point to your .resx files location ] <#@ template language="C#" debug="false" hostspecific="true"#> <#@ assembly name="System.Windows.Forms" #> <#@ import namespace="System.Resources" #> <#@ import namespace="System.Collections" #> <#@ import namespace="System.IO" #> <#@ output extension=".js"#> <# var path = Path.GetDirectoryName(Host.TemplateFile) + "/../App_GlobalResources/"; var resourceNames = new string[1] { "Common" }; #> /** * Resources * --------- * This file is auto-generated by a tool * 2012 Jochen van Wylick **/ var Resources = { <# foreach (var name in resourceNames) { #> <#=name #>: {}, <# } #> }; <# foreach (var name in resourceNames) { var nlFile = Host.ResolvePath(path + name + ".nl.resx" ); var enFile = Host.ResolvePath(path + name + ".resx" ); ResXResourceSet nlResxSet = new ResXResourceSet(nlFile); ResXResourceSet enResxSet = new ResXResourceSet(enFile); #> <# foreach (DictionaryEntry item in nlResxSet) { #> Resources.<#=name#>.<#=item.Key.ToString()#> = { 'nl-NL': '<#= ("" + item.Value).Replace("\r\n", string.Empty).Replace("'","\\'")#>', 'en-GB': '<#= ("" + enResxSet.GetString(item.Key.ToString())).Replace("\r\n", string.Empty).Replace("'","\\'")#>' }; <# } #> <# } #> In the Form/View side To have the correct translation picked up, add this in your master if you're using WebForms: <script type="text/javascript"> var locale = '<%= System.Threading.Thread.CurrentThread.CurrentCulture.Name %>'; </script> <script type="text/javascript" src="/Scripts/Resources.js"></script> If you're using ASP.NET MVC (like me), you can do this: <script type="text/javascript"> // Setting Locale that will be used by JavaScript translations var locale = $("meta[name='accept-language']").attr("content"); </script> <script type="text/javascript" src="/Scripts/Resources.js"></script> The MetaAcceptLanguage helper I got from this awesome post by Scott Hanselman: Globalization, Internationalization and Localization in ASP.NET MVC 3, JavaScript and jQuery - Part 1 public static IHtmlString MetaAcceptLanguage<T>(this HtmlHelper<T> html) { var acceptLanguage = HttpUtility.HtmlAttributeEncode( Thread.CurrentThread.CurrentUICulture.ToString()); return new HtmlString( String.Format("<meta name=\"{0}\" content=\"{1}\">", "accept-language", acceptLanguage)); } Use it var msg = Resources.Common.Greeting[locale]; alert(msg); A: Expanding on diodeus.myopenid.com's answer: Have your code write out a file containing a JS array with all the required strings, then load the appropriate file/script before the other JS code. A: The MSDN way of doing it, basically is: You create a separate script file for each supported language and culture. In each script file, you include an object in JSON format that contains the localized resources values for that language and culture. I can't tell you the best solution for your question, but IMHO this is the worst way of doing it. At least now you know how NOT to do it. A: We use MVC and have simply created a controller action to return a localized string. We maintain the user's culture in session and set the thread culture before any call to retrieve a language string, AJAX or otherwise. This means we always return a localized string. I'll admit, it isn't the most efficient method but getting a localised string in javascript is seldom required as most localization is done in our partial views. 
Global.asax.cs

protected void Application_PreRequestHandlerExecute(object sender, EventArgs e)
{
    if (Context.Handler is IRequiresSessionState || Context.Handler is IReadOnlySessionState)
    {
        // Set the current thread's culture
        var culture = (CultureInfo)Session["CultureInfo"];
        if (culture != null)
        {
            Thread.CurrentThread.CurrentCulture = culture;
            Thread.CurrentThread.CurrentUICulture = culture;
        }
    }
}

Controller action

public string GetString(string key)
{
    return Language.ResourceManager.GetString(key);
}

JavaScript

/*
  Retrieve a localized language string given a lookup key.
  Example use: var str = language.getString('MyString');
*/
var language = new function () {
    this.getString = function (key) {
        var retVal = '';
        $.ajax({
            url: rootUrl + 'Language/GetString?key=' + key,
            async: false,
            success: function (results) {
                retVal = results;
            }
        });
        return retVal;
    }
};
{ "language": "en", "url": "https://stackoverflow.com/questions/104022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43" }
Q: How to list the contents of a package using YUM? I know how to use rpm to list the contents of a package (rpm -qpil package.rpm). However, this requires knowing the location of the .rpm file on the filesystem. A more elegant solution would be to use the package manager, which in my case is YUM. How can YUM be used to achieve this?

A:

$ yum install -y yum-utils
$ repoquery -l packagename

A: There is a package called yum-utils that builds on YUM and contains a tool called repoquery that can do this.

$ repoquery --help | grep -E "list\ files"
  -l, --list      list files in this package/group

Combined into one example:

$ repoquery -l time
/usr/bin/time
/usr/share/doc/time-1.7
/usr/share/doc/time-1.7/COPYING
/usr/share/doc/time-1.7/NEWS
/usr/share/doc/time-1.7/README
/usr/share/info/time.info.gz

On at least one RH system, with rpm v4.8.0, yum v3.2.29, and repoquery v0.0.11, repoquery -l rpm prints nothing. If you are having this issue, try adding the --installed flag: repoquery --installed -l rpm.

DNF update: to use dnf instead of yum-utils, use the following command:

$ dnf repoquery -l time
/usr/bin/time
/usr/share/doc/time-1.7
/usr/share/doc/time-1.7/COPYING
/usr/share/doc/time-1.7/NEWS
/usr/share/doc/time-1.7/README
/usr/share/info/time.info.gz

A: I don't think you can list the contents of a package using yum, but if you have the .rpm file on your local system (as will most likely be the case for all installed packages), you can use the rpm command to list the contents of that package like so:

rpm -qlp /path/to/fileToList.rpm

If you don't have the package file (.rpm), but you have the package installed, try this:

rpm -ql packageName

A: There are several good answers here, so let me provide a terrible one:

: you can type in anything below, doesn't have to match anything
yum whatprovides "me with a life"
: result of the above (some liberties taken with spacing):
Loaded plugins: fastestmirror
base                                 | 3.6 kB  00:00
extras                               | 3.4 kB  00:00
updates                              | 3.4 kB  00:00
(1/4): extras/7/x86_64/primary_db    | 166 kB  00:00
(2/4): base/7/x86_64/group_gz        | 155 kB  00:00
(3/4): updates/7/x86_64/primary_db   | 9.1 MB  00:04
(4/4): base/7/x86_64/primary_db      | 5.3 MB  00:05
Determining fastest mirrors
 * base: mirrors.xmission.com
 * extras: mirrors.xmission.com
 * updates: mirrors.xmission.com
base/7/x86_64/filelists_db           | 6.2 MB  00:02
extras/7/x86_64/filelists_db         | 468 kB  00:00
updates/7/x86_64/filelists_db        | 5.3 MB  00:01
No matches found
: the key result above is that "primary_db" files were downloaded
: filelists are downloaded EVEN IF you have keepcache=0 in your yum.conf
: note you can limit this to "primary_db.sqlite" if you really want
find /var/cache/yum -name '*.sqlite'
: if you download/install a new repo, run the exact same command again
: to get the databases for the new repo
: if you know sqlite you can stop reading here
: if not, here's a sample command to dump the contents
echo 'SELECT packages.name, GROUP_CONCAT(files.name, ", ") AS files FROM files JOIN packages ON (files.pkgKey = packages.pkgKey) GROUP BY packages.name LIMIT 10;' | sqlite3 -line /var/cache/yum/x86_64/7/base/gen/primary_db.sqlite
: remove "LIMIT 10" above for the whole list
: format chosen for proof-of-concept purposes, probably can be improved a lot

A: rpm -ql [packageName]

Example:

# rpm -ql php-fpm
/etc/php-fpm.conf
/etc/php-fpm.d
/etc/php-fpm.d/www.conf
/etc/sysconfig/php-fpm
...
/run/php-fpm
/usr/lib/systemd/system/php-fpm.service
/usr/sbin/php-fpm
/usr/share/doc/php-fpm-5.6.0
/usr/share/man/man8/php-fpm.8.gz
...
/var/lib/php/sessions
/var/log/php-fpm

No need to install yum-utils, or to know the location of the rpm file.

A: Currently repoquery is integrated into dnf and yum, so typing

dnf repoquery -l <pkg-name>

will list package contents from a remote repository (even for packages that are not installed yet), meaning that installing a separate dnf-utils or yum-utils package is no longer required for this functionality, as it is now supported natively. For listing the contents of installed or local (*.rpm file) packages there is rpm -ql. I don't think it is possible with plain yum or dnf (that is, without the repoquery subcommand) - please correct me if I am wrong.

A: Yum doesn't have its own package type. Yum operates on and helps manage RPMs. So you can use yum to list the available RPMs, and then run the rpm -qlp command to see the contents of a package.
{ "language": "en", "url": "https://stackoverflow.com/questions/104055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "347" }
Q: SQL Server 2005 - clicking on job->Properties yields "New Job" window Recently, I've started having a problem with my SQL Server 2005 client running on Windows XP where right-clicking on any job and selecting Properties instead brings me to the New Job window. Also, if I select "View History", I get the history for all jobs, instead of the one I right-clicked on. This happened to me once before, and I found that I hadn't installed a service pack for SQL 2005. Once I installed it, the problem went away, and I haven't seen it in about a year. I haven't run any updates on it since, and I'm not sure what could have caused this. As a possibly related note, I've tried installing XP Service Pack 3 on my machine twice, and it just hung (I started running it on Friday before leaving for the weekend, and it hadn't gone more than 5-10% when I got back on Monday). I'm not sure if that fact is related at all, but I thought it possible that the XP update somehow overwrote something that SQL 2005 used before hanging. Any ideas on what could cause this? I've included the current version info that shows up in SQL 2005:

Microsoft SQL Server Management Studio - 9.00.1399.00
Microsoft Analysis Services Client Tools - 2005.090.1399.00
Microsoft Data Access Components (MDAC) - 2000.085.1117.00 (xpsp_sp2_rtm.040803-2158)
Microsoft MSXML - 2.6 3.0 4.0 5.0 6.0
Microsoft Internet Explorer - 7.0.5730.13
Microsoft .NET Framework - 2.0.50727.1433
Operating System - 5.1.2600

Update: I reinstalled SQL 2005 Service Pack 2 on my machine and it fixed the problem. I'll have to see if the problem was caused when I tried installing XP SP3.

A: I would suggest the following path:
* Make sure that you have current backups for the server
* Try to get a clean install of the XP service pack
* Try reinstalling the client tools on the machine
* If that fails, try to install (or reinstall) SP2 for SQL Server
{ "language": "en", "url": "https://stackoverflow.com/questions/104057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: System.Convert.ToInt vs (int) I noticed in another post someone had done something like:

double d = 3.1415;
int i = Convert.ToInt32(Math.Floor(d));

Why did they use the Convert function, rather than:

double d = 3.1415;
int i = (int)d;

which has an implicit floor and convert? Also, more concerning, I noticed in some production code I was reading:

double d = 3.1415;
float f = Convert.ToSingle(d);

Is that the same as:

float f = (float)d;

Are all those otherwise implicit conversions just in the Convert class for completeness, or do they serve a purpose? I can understand a need for .ToString(), but not the rest.

A: Rounding is also handled differently:

x = -2.5   (int)x = -2   Convert.ToInt32(x) = -2
x = -1.5   (int)x = -1   Convert.ToInt32(x) = -2
x = -0.5   (int)x =  0   Convert.ToInt32(x) =  0
x =  0.5   (int)x =  0   Convert.ToInt32(x) =  0
x =  1.5   (int)x =  1   Convert.ToInt32(x) =  2
x =  2.5   (int)x =  2   Convert.ToInt32(x) =  2

Notice the x = -1.5 and x = 1.5 cases. In some algorithms, the rounding method used is critical to getting the right answer.

A: Casting to int is implicit truncation, not implicit flooring:

double d = -3.14;
int i = (int)d;   // i == -3

I choose Math.Floor or Math.Round to make my intentions more explicit.

A: You can use Convert when you have a string that you want to convert to an int:

int i = Convert.ToInt32("1234");

Convert and casting will both throw an exception if they fail, i.e. this will still throw an exception; it will not return 0:

Convert.ToInt32("1234NonNumber");

In many cases Convert and casting will have the same result, but a cast is often easier to read.

A: Convert.ToInt32() is used on strings (http://msdn.microsoft.com/en-us/library/sf1aw27b.aspx) while casting can only be used on types that have internal converters (numeric types). The real trick comes in deciding between Int32.Parse and Convert.ToInt32(): Convert.ToInt32() is tolerant of a null parameter and returns 0, while Int32.Parse() will throw an ArgumentNullException.
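To make the rounding and null-handling differences described in the answers above concrete, here is a minimal sketch (the class name is mine; the behaviour shown is standard .NET):

using System;

class ConvertVsCastDemo
{
    static void Main()
    {
        double d = 1.5;
        Console.WriteLine((int)d);                 // 1 - a cast truncates toward zero
        Console.WriteLine(Convert.ToInt32(d));     // 2 - Convert rounds, using round-half-to-even
        Console.WriteLine(Convert.ToInt32(2.5));   // 2 - 2.5 also rounds to the even neighbour

        string s = null;
        Console.WriteLine(Convert.ToInt32(s));     // 0 - a null string is tolerated by Convert
        // int n = int.Parse(s);                   // int.Parse(null) would throw ArgumentNullException
    }
}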
{ "language": "en", "url": "https://stackoverflow.com/questions/104063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How do I troubleshoot performance problems with an Oracle SQL statement I have two insert statements, almost exactly the same, which run in two different schemas on the same Oracle instance. What the insert statement looks like doesn't matter - I'm looking for a troubleshooting strategy here. Both schemas have 99% the same structure. A few columns have slightly different names; other than that they're the same. The insert statements are almost exactly the same. The explain plan on one gives a cost of 6, the explain plan on the other gives a cost of 7. The tables involved in both sets of insert statements have exactly the same indexes. Statistics have been gathered for both schemas. One insert statement inserts 12,000 records in 5 seconds. The other insert statement inserts 25,000 records in 4 minutes 19 seconds. The number of records being inserted is correct. It's the vast disparity in execution times that confuses me. Given that nothing stands out in the explain plan, how would you go about determining what's causing this disparity in runtimes? (I am using Oracle 10.2.0.4 on a Windows box.)

Edit: The problem ended up being an inefficient query plan, involving a Cartesian merge which didn't need to be done. Judicious use of index hints and a hash join hint solved the problem. It now takes 10 seconds. SQL Trace / TKPROF gave me the direction, as it showed me how many seconds each step in the plan took, and how many rows were being generated. Thus TKPROF showed me:

Rows     Row Source Operation
-------  ---------------------------------------------------
  23690  NESTED LOOPS OUTER (cr=3310466 pr=17 pw=0 time=174881374 us)
  23690   NESTED LOOPS (cr=3310464 pr=17 pw=0 time=174478629 us)
2160900    MERGE JOIN CARTESIAN (cr=102 pr=0 pw=0 time=6491451 us)
   1470     TABLE ACCESS BY INDEX ROWID TBL1 (cr=57 pr=0 pw=0 time=23978 us)
   8820      INDEX RANGE SCAN XIF5TBL1 (cr=16 pr=0 pw=0 time=8859 us)(object id 272041)
2160900     BUFFER SORT (cr=45 pr=0 pw=0 time=4334777 us)
   1470      TABLE ACCESS BY INDEX ROWID TBL1 (cr=45 pr=0 pw=0 time=2956 us)
   8820       INDEX RANGE SCAN XIF5TBL1 (cr=10 pr=0 pw=0 time=8830 us)(object id 272041)
  23690    MAT_VIEW ACCESS BY INDEX ROWID TBL2 (cr=3310362 pr=17 pw=0 time=235116546 us)
  96565     INDEX RANGE SCAN XPK_TBL2 (cr=3219374 pr=3 pw=0 time=217869652 us)(object id 272084)
      0   TABLE ACCESS BY INDEX ROWID TBL3 (cr=2 pr=0 pw=0 time=293390 us)
      0    INDEX RANGE SCAN XIF1TBL3 (cr=2 pr=0 pw=0 time=180345 us)(object id 271983)

Notice the rows where the operations are MERGE JOIN CARTESIAN and BUFFER SORT. Things that keyed me into looking at this were the number of rows generated (over 2 million!), and the amount of time spent on each operation (compare to the other operations).

A: Use the SQL Trace facility and TKPROF.

A: The main culprits in insert slowdowns are indexes, constraints, and on-insert triggers. Do a test with as many of these removed as you can, and see if it's fast. Then introduce them back in and see which one is causing the problem. I have seen systems where they drop indexes before bulk inserts and rebuild at the end -- and it's faster.

A: The first thing to realize is that, as the documentation says, the cost you see displayed is relative to one of the query plans. The costs for two different explains are not comparable. Secondly, the costs are based on an internal estimate. As hard as Oracle tries, those estimates are not accurate - particularly not when the optimizer misbehaves. Your situation suggests that there are two query plans which, according to Oracle, are very close in performance.
But which, in fact, perform very differently. The actual information that you want to look at is the actual explain plan itself. That tells you exactly how Oracle executes that query. It has a lot of technical gobbledygook, but what you really care about is knowing that it works from the most indented part out, and that at each step it merges according to one of a small number of rules. That will tell you what Oracle is doing differently in your two instances. What next? Well, there are a variety of strategies to tune bad statements. The first option that I would suggest, if you're on Oracle 10g, is to try their SQL Tuning Advisor to see if a more detailed analysis will tell Oracle the error of its ways. It can then store that plan, and you will use the more efficient plan. If you can't do that, or if that doesn't work, then you need to get into things like providing query hints, manual stored query outlines, and the like. That is a complex topic. This is where it helps to have a real DBA. If you don't, then you'll want to start reading the documentation, but be aware that there is a lot to learn. (Oracle also has a SQL tuning class that is, or at least used to be, very good. It isn't cheap though.)

A: I've put up my general list of things to check to improve performance as an answer to another question: Favourite performance tuning tricks ... It might be helpful as a checklist, even though it's not Oracle-specific.

A: I agree with a previous poster that SQL Trace and TKPROF are a good place to start. I also highly recommend the book Optimizing Oracle Performance, which discusses similar tools for tracing execution and analyzing the output.

A: SQL Trace and TKPROF are only good if you have access to these tools. Most of the large companies that I do work for do not allow developers to access anything under the Oracle unix IDs. I believe you should be able to determine the problem by first understanding the question that is being asked and by reading the explain plans for each of the queries. Many times I find that the big difference is that some tables and indexes have not been analyzed.

A: Another good reference that presents a general technique for query tuning is the book SQL Tuning by Dan Tow.

A: When the performance of a SQL statement isn't as expected/desired, one of the first things I do is check the execution plan. The trick is to check for things that aren't as expected. For example, you might find table scans where you think an index scan should be faster, or vice versa. A point where the Oracle optimizer sometimes takes a wrong turn is its estimate of how many rows a step will return. If the execution plan expects 2 rows, but you know it will be more like 2000 rows, the execution plan is bound to be less than optimal. With two statements to compare, you can obviously compare the two execution plans to see where they differ. From this analysis, I come up with an execution plan that I think should be better suited. This is not an exact execution plan, but just some crucial changes to the one I found, like: it should use index X, or a hash join instead of a nested loop. The next thing is to figure out a way to make Oracle use that execution plan - often by using hints, or creating additional indexes, sometimes by changing the SQL statement. Then of course test that the changed statement (a) still does what it is supposed to do, and (b) is actually faster. With (b) it is very important to make sure you are testing the correct use case.
A typical pitfall is the difference between returning the first row versus returning the last row. Most tools show you the first results as soon as they are available, with no direct indication that there is more work to be done. But if your actual program has to process all rows before it continues to the next processing step, it is almost irrelevant when the first row appears; it is only relevant when the last row is available. If you find a better execution plan, the final step is to make your database actually use it in the actual program. If you added an index, this will often work out of the box. Hints are an option, but can be problematic if a library creates your SQL statements; those often don't support hints. As a last resort you can save and fix execution plans for specific SQL statements. I'd avoid this approach, because such fixes are easily forgotten, and in a year or so some poor developer will scratch her head over why the statement performs in a way that might have been appropriate with the data one year ago, but not with the current data ...
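Several answers here recommend SQL Trace and TKPROF without showing the mechanics, so here is a minimal sketch of one common way to capture a trace on 10g (the trace-file identifier and file names are illustrative assumptions; the raw trace file ends up in the directory pointed to by the user_dump_dest parameter):

-- In the session that runs the slow statement:
ALTER SESSION SET tracefile_identifier = 'slow_insert';
EXEC DBMS_MONITOR.session_trace_enable(waits => TRUE, binds => TRUE);

-- run the statement under investigation here, e.g. the problem INSERT

EXEC DBMS_MONITOR.session_trace_disable;

Then format the resulting trace file on the database server:

tkprof orcl_ora_1234_slow_insert.trc slow_insert_report.txt sys=no sort=exeela

The row-source section of the TKPROF report is what surfaced the MERGE JOIN CARTESIAN step in the edit above.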
{ "language": "en", "url": "https://stackoverflow.com/questions/104066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How Do VB.NET Optional Parameters work 'Under the hood'? Are they CLS-Compliant? Let's say we have the following method declaration:

Public Function MyMethod(ByVal param1 As Integer, _
        Optional ByVal param2 As Integer = 0, _
        Optional ByVal param3 As Integer = 1) As Integer
    Return param1 + param2 + param3
End Function

How does VB.NET make the optional parameters work within the confines of the CLR? Are optional parameters CLS-compliant?

A: Interestingly, this is the decompiled C# code, obtained via Reflector:

public int MyMethod(int param1, [Optional, DefaultParameterValue(0)] int param2, [Optional, DefaultParameterValue(1)] int param3)
{
    return ((param1 + param2) + param3);
}

Notice the Optional and DefaultParameterValue attributes. Try putting them on C# methods: you will find that you are still required to pass values to the method. In VB code, however, it's turned into Default! That being said, I personally have never used Default even in VB code. It feels like a hack. Method overloading does the trick for me. Default does help, though, when dealing with the Excel Interop, which is a pain in the ass to use straight out of the box in C#.

A: Contrary to popular belief, optional parameters do appear to be CLS-compliant. (However, my primary check for this was to mark the assembly, class and method all with the CLSCompliant attribute, set to True.) So what does this look like in MSIL?

.method public static int32 MyMethod(int32 param1, [opt] int32 param2, [opt] int32 param3) cil managed
{
    .custom instance void [mscorlib]System.CLSCompliantAttribute::.ctor(bool) = ( 01 00 01 00 00 )
    .param [2] = int32(0x00000000)
    .param [3] = int32(0x00000001)
    // Code size 11 (0xb)
    .maxstack 2
    .locals init ([0] int32 MyMethod)
    IL_0000: nop
    IL_0001: ldarg.0
    IL_0002: ldarg.1
    IL_0003: add.ovf
    IL_0004: ldarg.2
    IL_0005: add.ovf
    IL_0006: stloc.0
    IL_0007: br.s IL_0009
    IL_0009: ldloc.0
    IL_000a: ret
} // end of method Module1::MyMethod

Note the [opt] markings on the parameters -- MSIL supports this natively, without any hacks. (Unlike MSIL's support for VB's Static keyword, which is another topic altogether.) So, why aren't these in C#? I can't answer that, other than my speculation that it might be a presumed lack of demand. My own preference has always been to specify the parameters, even if they are optional -- to me, the code looks cleaner and is easier to read. (If there are omitted parameters, I often look first for an overload that matches the visible signature -- it's only after I fail to find one that I realize that optional parameters are involved.)
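As a rough sketch of the consumer side, as C# stood at the time of this question (before C# 4.0 added its own optional-parameter support): a C# caller sees the [opt] metadata but must still supply every argument, while late binding through reflection honours the stored defaults. The type name VbLibrary.Module1 is a placeholder for wherever the VB method above actually lives, and the static/module layout follows the MSIL shown in the second answer:

using System;
using System.Reflection;

class Caller
{
    static void Main()
    {
        // The C# 2.0/3.0 compilers ignore the defaults: every argument is required.
        int direct = VbLibrary.Module1.MyMethod(1, 0, 1);

        // Late binding honours DefaultParameterValue when Type.Missing is passed
        // together with BindingFlags.OptionalParamBinding.
        object viaReflection = typeof(VbLibrary.Module1).InvokeMember(
            "MyMethod",
            BindingFlags.InvokeMethod | BindingFlags.Public |
            BindingFlags.Static | BindingFlags.OptionalParamBinding,
            null,
            null,
            new object[] { 1, Type.Missing, Type.Missing });   // uses the defaults 0 and 1, so yields 2

        Console.WriteLine("{0} {1}", direct, viaReflection);
    }
}

This OptionalParamBinding/Type.Missing pattern is the same one that was commonly used for Excel Interop calls from C# before C# 4.0.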
{ "language": "en", "url": "https://stackoverflow.com/questions/104068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Is it possible to tell if a user has viewed a portion of the page? As the title says: on a website, is it possible to tell if a user has viewed a portion of the page?

A: Will moving that portion to a separate iframe work? Then if they scroll to the bottom, issue a GET request for a small image file... I forgot the name of the technique. Update: It is called a Web bug. A Web bug is an object that is embedded in a web page or e-mail and is usually invisible to the user, but allows checking that a user has viewed the page or e-mail. One common use is in e-mail tracking. Alternative names are Web beacon, tracking bug, tracking pixel, pixel tag, 1×1 gif, and clear gif.

A: If you are talking about checking whether the user has actually viewed some part of the page, you would need to install a web camera and track their eye movement. If you are talking about detecting how far the user has scrolled down the page, you can use JavaScript to detect this in the onscroll event. You can then fire some AJAX to the server if you need to record this.

A: I'm not sure this would be ethical - but technically, if you use JavaScript, you could detect the mouseover event of each paragraph tag in the document, and then AJAX that information back to the server. As the user scrolls down the page, they're likely to mouse over the paragraphs, and then you know at least approximately where they've read to.

A: Not reliably, no. Simple example: I middle-click on a link, which opens it in a new background tab. I then decide against it, and close the tab without ever looking at it. Any JavaScript trick is going to report that I viewed everything above the fold. More complicated example: a newbie user doesn't have the browser window maximised, and a portion of the browser window is off-screen. Any JavaScript trick is going to report as if the entire viewport is being viewed, so even restricting your query to only the cases where scrolling occurs will not help.

A: Unless you require a user action of some kind, all you will be able to tell is that they downloaded some portion, not that they actually looked at it.

A: Sure. Put that content inside a div, then in your HTML, with some JavaScript, capture the onmouseover event and do your work there. If they've put their mouse over something, it's a pretty safe bet that they've seen it, I'd say... Hope this helps.
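As a minimal sketch of the scroll-detection idea combined with the "Web bug" style report described above (the element id, image URL, and "scrolled fully past" threshold are illustrative assumptions, and older IE versions would need document.documentElement fallbacks for the scroll metrics):

// Report once, the first time the tracked section has been scrolled fully into view.
var reported = false;
window.onscroll = function () {
    if (reported) return;
    var section = document.getElementById("trackedSection");        // hypothetical element id
    var viewportBottom = window.pageYOffset + window.innerHeight;
    var sectionBottom = section.offsetTop + section.offsetHeight;   // assumes a simple layout (offsetParent is the body)
    if (viewportBottom >= sectionBottom) {
        reported = true;
        // "Web bug" style report: request a tiny image whose query string the server can log.
        new Image().src = "/track.gif?section=trackedSection&t=" + new Date().getTime();
    }
};

As the other answers point out, this only proves the section scrolled through the viewport, not that anyone actually read it.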
{ "language": "en", "url": "https://stackoverflow.com/questions/104076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Polymorphic Models in Ruby on Rails? For a project I'm working on, the store has two types of products - a real product and a group of products. For this discussion, let's call them "1 T-shirt" and "a box of T-shirts". For one T-shirt, I need to store the normal attributes - price, SKU, size, color, description, etc. For the box of T-shirts I need to have a price, SKU, description, and a list of the T-shirts that are included. So right now, I'm representing this with the Shirt and ShirtCollection models. I can see this causing difficulty down the road when I need to do reporting and order management, and when making sure SKUs are unique. So what's the best way of representing this?

A: You can have a Tshirt table and then self-reference it with a has_many :through association.

Tshirt - id, sku, price, size, color, description, is_box
TshirtBox - parent_tshirt (id that references tshirt), child_tshirt (id that references tshirt)

Check out this link for more on self-referential has_many :through: http://www.aldenta.com/2006/11/10/has_many-through-self-referential-example/

A: I would have the following models:

Tshirt
TshirtBox - has_many TshirtItems
TshirtBoxItems (this is basically a join table with an id, tshirt_box_id and tshirt_id) - belongs_to TshirtBox

TshirtBoxItems is a way to link a Tshirt with a box, and potentially other things in the future.
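As a rough sketch of the first answer's self-referential has_many :through idea (the column names follow that answer; the association names and the Rails 2.x-era syntax are my own assumptions, not from the original post):

class Tshirt < ActiveRecord::Base
  # Rows with is_box = true represent "a box of T-shirts"; the rest are single shirts.
  has_many :tshirt_boxes, :foreign_key => 'parent_tshirt'
  has_many :contents, :through => :tshirt_boxes, :source => :child
end

class TshirtBox < ActiveRecord::Base
  belongs_to :parent, :class_name => 'Tshirt', :foreign_key => 'parent_tshirt'
  belongs_to :child,  :class_name => 'Tshirt', :foreign_key => 'child_tshirt'
end

# Usage sketch:
# box = Tshirt.find_by_sku('BOX-0001')
# box.contents   # => the individual T-shirt records contained in the box

Keeping both boxes and single shirts in one table is what lets SKUs stay unique across both product types, at the cost of a flag column (is_box) and some columns that are unused for boxes.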
{ "language": "en", "url": "https://stackoverflow.com/questions/104086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }