Q: How to use Java reflection when the enum type is a Class? I was using an enum in which the constant was a Class. I needed to invoke a method on the constant but could not introduce a compile time dependency and the enum was not always available at runtime (part of optional install). Therefore, I wanted to use reflection. This is easy, but I hadn't used reflection with enums before. The enum looked something like this: public enum PropertyEnum { SYSTEM_PROPERTY_ONE("property.one.name", "property.one.value"), SYSTEM_PROPERTY_TWO("property.two.name", "property.two.value"); private String name; private String defaultValue; PropertyEnum(String name) { this.name = name; } PropertyEnum(String name, String value) { this.name = name; this.defaultValue = value; } public String getName() { return name; } public String getValue() { return System.getProperty(name); } public String getDefaultValue() { return defaultValue; } } What is an example of invoking a method of the constant using reflection? A: import java.lang.reflect.Method; class EnumReflection { public static void main(String[] args) throws Exception { Class<?> clz = Class.forName("test.PropertyEnum"); /* Use method added in Java 1.5. */ Object[] consts = clz.getEnumConstants(); /* Enum constants are in order of declaration. */ Class<?> sub = consts[0].getClass(); Method mth = sub.getDeclaredMethod("getDefaultValue"); String val = (String) mth.invoke(consts[0]); /* Prove it worked. */ System.out.println("getDefaultValue " + val.equals(PropertyEnum.SYSTEM_PROPERTY_ONE.getDefaultValue())); } }
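A small variation on the above, assuming the same test.PropertyEnum class: if you know the constant's name, you can avoid relying on declaration order by scanning the constants for a matching name instead of indexing into the array.

import java.lang.reflect.Method;

class EnumByName {
    public static void main(String[] args) throws Exception {
        Class<?> clz = Class.forName("test.PropertyEnum");

        /* Find the constant by name instead of relying on declaration order. */
        Object wanted = null;
        for (Object constant : clz.getEnumConstants()) {
            if (((Enum<?>) constant).name().equals("SYSTEM_PROPERTY_ONE")) {
                wanted = constant;
            }
        }
        if (wanted == null) {
            throw new IllegalArgumentException("constant not found");
        }

        /* getMethod is enough here because getDefaultValue is public. */
        Method mth = clz.getMethod("getDefaultValue");
        System.out.println(mth.invoke(wanted));
    }
}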
{ "language": "en", "url": "https://stackoverflow.com/questions/140537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: What character set should I assume the encoded characters in a URL to be in? RFC 1738 specifies the syntax for URLs, and mentions that URLs are written only with the graphic printable characters of the US-ASCII coded character set. The octets 80-FF hexadecimal are not used in US-ASCII, and the octets 00-1F and 7F hexadecimal represent control characters; these must be encoded. It does not, however, say what code set these octets then represent. RFC 2396 seems to try and improve on the situation, but: For original character sequences that contain non-ASCII characters, however, the situation is more difficult. Internet protocols that transmit octet sequences intended to represent character sequences are expected to provide some way of identifying the charset used, if there might be more than one [RFC2277]. However, there is currently no provision within the generic URI syntax to accomplish this identification. An individual URI scheme may require a single charset, define a default charset, or provide a way to indicate the charset used. It is expected that a systematic treatment of character encoding within URI will be developed as a future modification of this specification. Is there any unambiguous way in which a client can determine in which character set to interpret encoded octets, or in which a server can determine what a client used to encode with? It looks to me like most servers default to UTF-8, but this seems to be a de facto choice more than a specified one. A: I believe the specification you are looking for is RFC 3987, which describes IRIs - Internationalized Resource Identifiers. A: As per your quote, URLs are ASCII. That's all. URIs, OTOH, allow for bigger charsets; usually UTF-8, as you said yourself. The point to remember is that URLs are a subset of URIs. Therefore, the real question is, which of these is what you write in a browser? I'd guess you can write a URI, and the browser should try its best to transform it to a URL (which is what HTTP/1.1 supports, AFAICR). For non-ASCII characters, that means hex codes, usually encoding UTF-8.
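A small illustration of the ambiguity, using Java's URLEncoder (the character é is just an example): the same character produces different percent-encoded octets depending on which charset the encoder assumes, and nothing in the URL itself says which one was used.

import java.net.URLEncoder;

public class UrlEncodingDemo {
    public static void main(String[] args) throws Exception {
        // The same character yields different octets under different charsets,
        // and a decoder has no in-band way to tell which one was chosen.
        System.out.println(URLEncoder.encode("é", "UTF-8"));      // prints %C3%A9
        System.out.println(URLEncoder.encode("é", "ISO-8859-1")); // prints %E9
    }
}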
{ "language": "en", "url": "https://stackoverflow.com/questions/140549", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: What is the best way to store a large amount of text in a SQL server table? What is the best way to store a large amount of text in a table in SQL server? Is varchar(max) reliable? A: In SQL 2005 and higher, VARCHAR(MAX) is indeed the preferred method. The TEXT type is still available, but primarily for backward compatibility with SQL 2000 and lower. A: Varchar(max) is available only in SQL 2005 or later. This will store up to 2GB and can be treated as a regular varchar. Before SQL 2005, use the "text" type. A: According to the text found here, varbinary(max) is the way to go. You'll be able to store approximately 2GB of data. A: Split the text into chunks that your database can actually handle. And, put the split-up text in another table. Use the id from the text_chunk table as text_chunk_id in your original table. You might want another column in your table to keep text that fits within your largest text data type. CREATE TABLE text_chunk ( id INT, chunk_sequence INT, chunk_text TEXT ) A: I like using VARCHAR(MAX) (or actually NVARCHAR(MAX)) because it works like a standard VARCHAR field. Since its introduction, I use it rather than TEXT fields whenever possible. A: In a BLOB. BLOBs are very large variable binary or character data, typically documents (.txt, .doc) and pictures (.jpeg, .gif, .bmp), which can be stored in a database. In SQL Server, BLOBs can be of the text, ntext, or image data types. You can use the text type: variable-length non-Unicode data, stored in the code page of the server, with a maximum length of 2^31 - 1 (2,147,483,647) characters. A: Depending on your situation, a design alternative to consider is saving them as .txt files on the server and saving the file path in your database. A: Use nvarchar(max) to store the whole chat conversation thread in a single record. Each individual text message (or block) is identified in the content text by inserting markers. Example: {{UserId: Date and time}}<Chat Text>. At display time the UI should be intelligent enough to understand these markers and display them correctly. This way one record should suffice for a single conversation as long as the size limit is not reached.
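A minimal sketch of the preferred approach (SQL Server 2005+; the table and column names are illustrative):

-- NVARCHAR(MAX) holds up to 2 GB of Unicode text and behaves like a
-- regular NVARCHAR column in queries.
CREATE TABLE article (
    id   INT IDENTITY(1,1) PRIMARY KEY,
    body NVARCHAR(MAX)
);

INSERT INTO article (body) VALUES (N'...an arbitrarily long block of text...');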
{ "language": "en", "url": "https://stackoverflow.com/questions/140550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52" }
Q: Buying a machine for continuous integration - key factors? I'm planning to propose to my (very small) company that we buy a computer to run continuous integration on. If they say yes, the task of actually buying the machine will probably fall on me, so my question is: What do I look for in a computer that will be used for continuous integration for a very small (3 people) php team? What "stuff" (memory, processor, etc.) is important, and what is not? A: You're not really going to need that powerful of a machine. If you are running tests or other metrics, processing ability is probably your primary concern but really you could run that on an old Pentium 1 and it would probably work. Your constraints are going to be your operating environment. If you are running LAMP you want to use a machine that can handle a LAMP setup well, which is pretty much any *nix machine that is fairly modern. I set up a continuous integration setup for .NET on an old Pentium 4 workstation we had lying around and it handled it just fine. One thing to keep in mind is storage space if you are archiving your code in the build. A: Storage became the issue for us when we were using Maven, Continuum and ClearCase and building hourly. The snapshot views were being left around after each build. We had a powerful enough box (Sun Fire V490) and used it for our development integration environment and Archiva repository. So we never really had any issues with performance and memory. In fact the only time we had a problem with PermGen memory it was building the Maven site target and that just meant using -XX:MaxPermSize=128m. A: From my experience, this does not have to be a powerhouse machine. Any machine you'd use for development would be more than satisfactory. Obviously, the faster the machine, the faster the response if you are running unit tests on code commits. Our CI server is running XP SP2, a 3 GHz processor, 3 GB of RAM, and it's way overpowered for our needs right now. That said, it's nice to get an email no more than 6 minutes after you commit that lets you know if the build is clean and all the tests pass. For doing nightly builds, the specs can probably go down more, as you probably have more time to get those done. Hard drive space (300 GB is reasonably attainable these days) is nice for storing reports and builds for regression testing, but if you have a NAS you can probably push off artifacts after they've been built. A: Pretty much any new machine you could buy today can handle the task of continuous integration on a not-too-large source tree. Some things to look for: * *2-4GB of RAM, more if you want to run many tests in parallel or you want to run virtual machines to simulate clients. *A multi-core processor (or multiple processors) to increase the chances of catching threading bugs. *"Server" class machines tend to handle 24/7 operation better than "desktop" class machines, but there is no clear line between the two. *RAID1 or RAID1+0 redundant disks are a must. Even if you have backups (and you should have them anyway) it's a pain to rebuild a server and an extra $100 hard disk is more than worth the money as insurance. A: RAM: enough to run your CI tool (phpUnderControl?) and whatever supporting software you want for your build and tests. Storage: decide how many old builds you want to keep on the machine. In my experience it isn't useful to keep very many, esp. if you have a small team w/out a lot of formal process for rolling back to older builds. CPU: non-issue. Any machine you can buy will work. 
So between the two I tend to favor RAM over storage space. A: Unless the app is huge I'd just get a dual core box with about 4 gigs of RAM and probably 2 reasonably fast SATA disks set in RAID 0. 500 Gigs maybe? If you want to be really safe with it, get two 70ish gig drives in RAID 1 for the OS partition and then 3 140+ gig drives in RAID 5 for the data. A: The machine performance hardly matters, but take good care of availability because once you start using it and one day the magic smoke gets out, you need it replaced soon to continue working. Define a sensible backup policy and make sure you know how to set up a new identical system when necessary. For example, you might have it run from a small partition that you can image into another machine, and then the main part of the data can be physically moved if it resides on RAID 1 and at least one drive works (though have a backup available elsewhere on the network as well). A: I think one thing that a lot of people here are getting at is the machine isn't as important as the CI software. The only time the machine is important is if you need different architectures. Otherwise, get a machine that matches your target environment. If you are building a server app, it might be wise to get a 64-bit processor since your app probably will be running on a 64-bit server. I would care more about which tool I'm using for CI. You need something that will run fast, and as people here have pointed out, it shouldn't hold onto the old builds unless you need them to be available. If so, I'd look for something that allows uploading builds and results to a separate server.
{ "language": "en", "url": "https://stackoverflow.com/questions/140574", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Enterprise App and the Enterprise App Client I came aboard a new project with a new company and we are trying to use JPA to do some DB work. So we have an Ear with an EJB, a webservice, and then there is an app client in the ear that really does all the work. The web service calls the EJB, and the EJB calls the client to do the DB work. So within the appclient I want to load an EntityManager via annotations, but it does not seem to work (em is always null): @Entity public class Whatever...{ @PersistenceContext(unitName="pu") EntityManager em; } So I was thinking that I need to load the EntityManager at the EJB, but that didn't work either, because it seems that JPA didn't see the Entity classes since they are in the appclient and not the EJB. Can anyone give me some guidance? A: This is a misuse of an app client. All your db processing should occur in the EJB. There doesn't seem to be any apparent reason for the app client's existence. This link is to an old article, but gives examples as to what an app client is used for (applications, not backend services). Application Client
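To illustrate the answer's point, here is a minimal sketch of doing the persistence work in the EJB instead; the bean and entity names (PersonDao, Person) are made up for the example. Container-managed injection only works in managed components such as session beans, not on @Entity classes or in the app client.

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class PersonDao {

    // Injection works here because a session bean is container-managed.
    @PersistenceContext(unitName = "pu")
    private EntityManager em;

    public Person find(long id) {
        return em.find(Person.class, id);
    }
}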
{ "language": "en", "url": "https://stackoverflow.com/questions/140575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I detect if my program runs in an Active Directory environment? How do I detect if my program runs in an Active Directory environment? I'm using C# and .Net 2.0 A: Try getting Environment.UserDomainName and comparing it to Environment.MachineName. If the two are the same then it's likely that the user does not have a domain. If they are not the same then the user is logged into a domain which must have a directory server. A: This code will check if the computer itself is a member of a domain: using System.DirectoryServices.ActiveDirectory; bool isDomain = false; try { Domain.GetComputerDomain(); isDomain = true; } catch (ActiveDirectoryObjectNotFoundException) { } However the computer can be in a domain, but the currently logged in user may be a local user account. If you want to check for this use the Domain.GetCurrentDomain() function A: One way might be to query the LOGONSERVER environment variable. That'll give the server name of your AD controller... which, as far as I know, will be blank (or match the current workstation? Not sure) if it isn't currently logged into a domain. Example usage: string ADServer = Environment.GetEnvironmentVariable("LOGONSERVER"); A: From http://msdn.microsoft.com/en-us/library/system.directoryservices.directoryentry.path.aspx To bind to the current domain using LDAP, use the path "LDAP://RootDSE", then get the default naming context and rebind the entry. So without a domain the binding to "LDAP://RootDSE" should either fail or return nothing. I didn't try it for myself. using System.DirectoryServices; // add reference to System.DirectoryServices.dll ... DirectoryEntry ent = new DirectoryEntry("LDAP://RootDSE"); string str = (string)ent.Properties["defaultNamingContext"][0]; DirectoryEntry domain = new DirectoryEntry("LDAP://" + str); This is definitely a cleaner way of checking for an Active Directory than relying on an environment variable (which the user could delete or add to spoof the program). A: I found something that works: using System.Net.NetworkInformation; IPGlobalProperties.GetIPGlobalProperties().DomainName; Works with a local user and a domain user.
{ "language": "en", "url": "https://stackoverflow.com/questions/140579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How do I call a WCF webservice from Silverlight? I am trying to call a WCF webservice (which I developed) from a Silverlight application. For some reason the Silverlight app does not make the HTTP SOAP call to the service. I know this because I am sniffing all HTTP traffic with Fiddler (and it is not a localhost call). This is my configuration on the server relevant to WCF: <system.serviceModel> <behaviors> <serviceBehaviors> <behavior name="ServiceBehavior"> <serviceMetadata httpGetEnabled="true"/> <serviceDebug includeExceptionDetailInFaults="false"/> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/> <services> <service behaviorConfiguration="ServiceBehavior" name="Service"> <endpoint address="" binding="basicHttpBinding" contract="Service"/> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/> </service> </services> </system.serviceModel> And the ServiceReferences.ClientConfig file in the Silverlight app (I am using Beta 2): <system.serviceModel> <bindings> <basicHttpBinding> <binding name="BasicHttpBinding_Service" maxBufferSize="65536" maxReceivedMessageSize="65536"> <security mode="None" /> </binding> </basicHttpBinding> </bindings> <client> <endpoint address="http://itlabws2003/Service.svc" binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_Service" contract="Silverlight_organigram.DataService.Service" name="BasicHttpBinding_Service" /> </client> </system.serviceModel> This is the Silverlight method that calls the service. I paste the whole method for completeness; the anonymous delegate and ManualResetEvent are there to make the call synchronous. I have debugged it, and after the line client.GetPersonsAsync(), Fiddler does not show any message travelling to the server. public static List<Person> GetPersonsFromDatabase() { List<Person> persons = new List<Person>(); ServiceClient client = new ServiceClient(); ManualResetEvent eventGetPersons = new ManualResetEvent(false); client.GetPersonsCompleted += new EventHandler<GetPersonsCompletedEventArgs>(delegate(object sender, GetPersonsCompletedEventArgs e) { foreach (DTOperson dtoPerson in e.Result) { persons.Add(loadFromDto(dtoPerson)); } eventGetPersons.Set(); }); client.GetPersonsAsync(); eventGetPersons.WaitOne(); return persons; } Does anyone have any suggestions on how I might fix this? A: If the Silverlight application is not hosted in the same domain that exposes the Web service you want to call, then cross-domain restrictions apply. If you want the Silverlight application to be hosted in a different domain than the web service, you may want to have a look at this post, which helps you create a cross-domain definition file, or write a middle "proxy" instead. A: You wouldn't happen to be running from the filesystem, would you? If you are serving up the Silverlight application from your local machine and not using the VS Web Server or IIS, you won't be able to make HTTP calls for security reasons. Similarly, if you're loading from a web server, you can't access local resources. Also I've found that Nikhil's Web Development Helper http://www.nikhilk.net/ASPNETDevHelperTool.aspx can be more useful than Fiddler because you will see local traffic as well, although it doesn't look like that is your issue in this case. A: I am not 100% certain, but if you are running on Vista or Server 2008 you may have run into the User Access Control issue with http.sys. So in Vista and Win2k8 server, the HttpListener will listen only if you are running under a high-privilege account. 
In fact, from my experience, even if you add yourself to the local administrators group, you might run into this issue. In any case, try launching Visual Studio on Vista by right-clicking and choosing Run as Administrator. See if that fixes it. If it does, you're good, but... ideally you should run httpcfg like: httpcfg set urlacl -u http://itlabws2003 -a D:(A;;GX;;;yoursid) Your SID is the security identifier for the account you're running as; you can find it here: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList, if you don't know it already. Or you could possibly add yourself to BUILTIN\Administrators, find that SID, and run httpcfg via the command line again, specifying that SID. User Access Control, Vista and Http.sys cause all this... if this is indeed the problem you are running into. Not sure, but maybe it's worth a try.
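If the problem turns out to be the cross-domain restriction mentioned in the first answer, a clientaccesspolicy.xml file placed at the root of the domain hosting the service opens it up to Silverlight clients. This is a deliberately permissive sketch; tighten the domain and resource paths for real use:

<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <!-- Allow calls from any origin; restrict this in production. -->
      <allow-from http-request-headers="*">
        <domain uri="*"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>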
{ "language": "en", "url": "https://stackoverflow.com/questions/140602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What tools are available to measure the "health" of an enterprise web-based system? I assist in maintaining an enterprise web-based system (programmed in J2EE, but this is a more general question) and I'd like to know: what good tools are out there to measure the "health" of an enterprise system? For instance, tools to check memory space on servers, check the status of batch runs, the number of records processed in a certain amount of time, etc? I don't wish to limit this to one tool per answer, though, multiple tools per answer are certainly acceptable. A: OpenNMS is a nice monitoring tool. Out of the box it can monitor various aspects of a server, mostly things like memory, network usage, disk space. But it's open source, and can be extended to monitor other things. We use it to monitor thousands of services. It's very good at what it does. It may not be a good fit for the number of records processed, at least we don't use it that way. A: We use Nagios. I'd provide more detail, but our admin guys set it up, so hopefully someone can give more info in the comments. What I do know is that we use it for hosting a couple of client sites and the sites are rather large with quite a bit of traffic. It works exceptionally well. A: +1 for OpenNMS. In addition to its out-of-the-box system-level monitoring, it can be easily extended with JMX, so your applications can expose their innards as JMX attributes, and OpenNMS can monitor them, graph them, raise alerts based on them, etc. We've also extended OpenNMS to send SMS alerts when things go wonky.
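As a sketch of the JMX approach from the last answer (all names are illustrative, and remote polling additionally requires JMX remoting to be enabled on the JVM):

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Standard MBean convention: the interface name is the class name + "MBean".
interface HealthMBean {
    int getRecordsProcessed();
}

class Health implements HealthMBean {
    private volatile int recordsProcessed;

    public int getRecordsProcessed() { return recordsProcessed; }
    public void recordProcessed() { recordsProcessed++; }
}

public class JmxHealthDemo {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        Health health = new Health();
        server.registerMBean(health, new ObjectName("myapp:type=Health"));

        // A monitor such as OpenNMS can now poll the recordsProcessed attribute.
        health.recordProcessed();
        Thread.sleep(Long.MAX_VALUE); // keep the JVM alive for polling
    }
}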
{ "language": "en", "url": "https://stackoverflow.com/questions/140608", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to address OpenID provider downtime? OpenID is all good... UNTIL the provider goes down. At that point you're potentially locked out of EVERYTHING (since you jumped on the bandwagon and applied OpenID everywhere you could). This question came up because I can't, for the life of me, log in with my myopenid.com provider. :-( A: This is why I use my personal website to delegate OpenID services to another site. If WordPress.com (my current chosen provider) goes down, I just switch the code in my site to point at a different provider. A few seconds and I'm back up and running. A: The fix is for your OpenID site to accept multiple OpenIDs per user account, something that the spec recommends. A: The answer is simple. Store an email for the user. Have your own login mechanism. Making OpenID optional is the straightforward answer to this. Unfortunately some sites are closed-minded about OpenID.
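The delegation approach in the first answer works by adding link tags to the <head> of your personal site; when the chosen provider goes down, you only change the href values. A sketch (the myopenid.com endpoints and username are illustrative):

<!-- OpenID 1.x delegation -->
<link rel="openid.server" href="https://www.myopenid.com/server">
<link rel="openid.delegate" href="https://username.myopenid.com/">
<!-- OpenID 2.0 equivalents -->
<link rel="openid2.provider" href="https://www.myopenid.com/server">
<link rel="openid2.local_id" href="https://username.myopenid.com/">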
{ "language": "en", "url": "https://stackoverflow.com/questions/140613", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How can I use post-commit hooks to copy committed files to a web directory from SVN? My Ubuntu server has Apache and Subversion installed. I use this server as a staging server, purely for testing purposes. I use Apache to host the web application, and Subversion to keep versioned copies of the source code. My current workflow: * *Make changes to a file *Commit the file to the Subversion repository *Upload the new file over SFTP to the Apache public directory *View the changes in my web browser I would be much happier if my workflow was like this: * *Make changes to a file *Commit the file to the Subversion repository *In the background, Subversion puts a copy of the committed file into the Apache public directory *View the changes in my web browser I have very little server admin experience, and any help or pointers are appreciated. I heard that post-commit hooks are what I need, and that I can write bash scripts to do this, but I'm not sure where to start and didn't really find anything after quite a lot of Googling. Thank you! A: The "official" answer is here. I'm managing a website in my repository. How can I make the live site automatically update after every commit? A: It can be done, but automatically pushing every commit to the production website isn't always a good idea. Sometimes there are other changes that need to go along, and breaking the site because the new code is there, but the database schema hasn't been updated yet is just embarrassing. What I tend to do instead is make the server check out a copy from svn, then, once I'm ready with everything else that has to happen, I do an svn update on it. But if you really wanted, you can put commands in the post-commit trigger that will do everything automatically for you. This could include running a migration script on the server (if one exists for this change) to take care of any non-code changes that need to happen. A: I think the real, overarching question you should be asking yourself---which you may already have asked yourself, of course---is this: "how can I test my code most easily before deploying it?" I think a good answer is to install Apache on your development box and run it as your own user, with webroot and/or cgi path at /home/richardhenry/src/mywebsite (or wherever you check out your code). That way, you can test your code without even committing. As a result, you won't litter your trunk with broken or useless commits. In general, keeping independent things independent tends to be A Good Idea (TM). Alternatively, sync the web server against your working directory with rsync, or write a script which pushes your file(s) from the dev box to your staging server and add a Makefile rule which runs your script (or calls up rsync).
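A minimal post-commit hook along the lines of the "official" answer; it assumes a working copy has already been checked out at /var/www/mysite (all paths here are illustrative). Save it as hooks/post-commit inside the repository directory and make it executable:

#!/bin/sh
# Subversion passes the repository path and the new revision number.
REPOS="$1"
REV="$2"

# Update the working copy that Apache serves; run non-interactively so the
# hook never blocks waiting for input.
/usr/bin/svn update /var/www/mysite --non-interactive >> /var/log/svn-deploy.log 2>&1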
{ "language": "en", "url": "https://stackoverflow.com/questions/140614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Is there a NAnt task that will display all property name / values? Is there a NAnt task that will echo out all property names and values that are currently set during a build? Something equivalent to the Ant echoproperties task maybe? A: I wanted them sorted, so I expanded on the other answer. It's not very efficient, but it works: <script language="C#" prefix="util" > <references> <include name="System.dll" /> </references> <imports> <import namespace="System.Collections.Generic" /> </imports> <code> <![CDATA[ public static void ScriptMain(Project project) { SortedDictionary<string, string> sorted = new SortedDictionary<string, string>(); foreach (DictionaryEntry entry in project.Properties) { sorted.Add((string)entry.Key, (string)entry.Value); } foreach (KeyValuePair<string, string> entry in sorted) { project.Log(Level.Info, "{0}={1}", entry.Key, entry.Value); } } ]]> </code> </script> A: Try this snippet: <project> <property name="foo" value="bar"/> <property name="fiz" value="buz"/> <script language="C#" prefix="util" > <code> <![CDATA[ public static void ScriptMain(Project project) { foreach (DictionaryEntry entry in project.Properties) { Console.WriteLine("{0}={1}", entry.Key, entry.Value); } } ]]> </code> </script> </project> You can just save and run it with NAnt. And no, there isn't a task or function to do this for you already. A: I tried the solutions suggested by Brad C, but they did not work for me (running Windows 7 Professional on x64 with NAnt 0.92). However, this works for my local configuration: <target name="echo-properties" verbose="false" description="Echo property values" inheritall="true"> <script language="C#"> <code> <![CDATA[ public static void ScriptMain(Project project) { System.Collections.SortedList sortedByKey = new System.Collections.SortedList(); foreach(DictionaryEntry de in project.Properties) { sortedByKey.Add(de.Key, de.Value); } NAnt.Core.Tasks.EchoTask echo = new NAnt.Core.Tasks.EchoTask(); echo.Project = project; foreach(DictionaryEntry de in sortedByKey) { if(de.Key.ToString().StartsWith("nant.")) { continue; } echo.Message = String.Format("{0}: {1}", de.Key, de.Value); echo.Execute(); } } ]]> </code> </script> </target> A: You can't prove a negative, but I can't find one and haven't seen one. I've traditionally rolled my own property echoes.
{ "language": "en", "url": "https://stackoverflow.com/questions/140616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Session Variables and Web Services I just wrote my first web service, so let's assume that my web service knowledge is nonexistent. I want to try to call a dbClass function from the web service. However I need some params that are in the session. Is there any way I can access these session variables from the web service? A: You should avoid increasing the complexity of the service layer by adding session variables. As someone previously pointed out, think of the web services as isolated methods that take everything needed to perform the task from their argument list. A: In general web services should not rely on session data. Think of them as ordinary methods: parameters go in and an answer comes out. A: If you are using ASP.NET web services and you want to have a session environment maintained for you, you need to embellish your web service method with an attribute that indicates you require a session. [WebMethod(EnableSession = true)] public void MyWebService() { Foo foo; Session["MyObjectName"] = new Foo(); foo = Session["MyObjectName"] as Foo; } Once you have done this, you may access session objects similar to how you would in ASPX pages. Metro. A: If you want to use Session["username"].ToString() as you would in other C# code-behind pages, you should simply replace the [WebMethod] attribute above the web service method with [WebMethod(EnableSession = true)]. Thanks, Metro. :) A: Maybe this will work: HttpContext.Current.Session["Name"]. Or else you might have to take in some parameters or store them in a database. A: Your question is a little vague, but I'll try my best to answer. I'm assuming that your session variables exist on the server that is making the webservice call, and not on the server that hosts the webservice. In that case, you will need to pass the necessary values as parameters of your web service methods. A: To use session state in a web service, we have to follow two steps: * *Use the [WebMethod(EnableSession = true)] attribute on the method. *Assign Session["Name"] = value (whatever you want to save). Please check the following example. [WebMethod(EnableSession = true)] public void saveName(string pname) { Session["Name"] = pname; }
{ "language": "en", "url": "https://stackoverflow.com/questions/140627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: "Show All Files" option missing in VS Database Project The VS Database Project does not seem to have the "Show All Files" option in the Solution Explorer. Does anyone know of a way to turn the option on in VS? The "Show All Files" option on the solution explorer actually does two things. With the option selected, VS shows "hidden/nested" files within the project AND it shows files within the directory of the project that are not currently part of the project. (It shows the latter with a ghosted icon.) While DB projects may not have nested or hidden files within the project, there is no other way that I know of to have the solution explorer show files within the directory that are not part of the project. Also, while this action occurs within the solution explorer, it is actually a project issue. A: The correct answer to this issue is:- You must have the Database Project root node selected in the Solution Explorer. Selecting any other node in the Solution Explorer other than the Database Project root node will cause the "Show All Files" button to become hidden. A: it is by design. Some project types just don't allow you to. You might be able to go into the sln file in notepad and try something, but it might just screw up your solution.
{ "language": "en", "url": "https://stackoverflow.com/questions/140632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: NSIS get path of current installer file that is running Is there an NSIS var to get the path of the currently running installer? A: Found it: $EXEPATH A: There are a few useful variables: * *$EXEPATH - holds the full path of the installer executable. *$EXEDIR - holds the directory containing the installer. So, according to the topic NSIS get path of current installer file that is running, the most appropriate is $EXEPATH for the file itself, or $EXEDIR for its directory.
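A quick sketch showing both variables in use:

; $EXEPATH is the full path of the running installer executable,
; $EXEDIR is the directory that contains it.
Section
  MessageBox MB_OK "Installer file: $EXEPATH$\nInstaller directory: $EXEDIR"
SectionEnd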
{ "language": "en", "url": "https://stackoverflow.com/questions/140640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: ORA-01031: insufficient privileges when selecting view When I try to execute a view that includes tables from different schemas, an ORA-01031: insufficient privileges error is thrown. These tables have execute permission for the schema where the view was created. If I execute the view's SQL statement it works. What am I missing? A: Q. When is the "with grant option" required? A. When you have a view executed from a third schema. Example: schema DSDW has a view called view_name a) that view selects from a table in another schema (FDR.balance) b) a third schema X_WORK tries to select from that view Typical grants: grant select on dsdw.view_name to dsdw_select_role; grant dsdw_select_role to fdr; But: fdr gets select count(*) from dsdw.view_name; ERROR at line 1: ORA-01031: insufficient privileges issue the grant: grant select on fdr.balance to dsdw with grant option; now fdr: select count(*) from dsdw.view_name; 5 rows A: Let me recap. When you build a view containing objects of different owners, those other owners have to grant "with grant option" to the owner of the view. So, the view owner can grant to other users or schemas.... Example: User_a is the owner of a table called mine_a User_b is the owner of a table called yours_b. Let's say user_b wants to create a view with a join of mine_a and yours_b. For the view to work fine, user_a has to give "grant select on mine_a to user_b with grant option" Then user_b can grant select on that view to everybody. A: Finally I got it to work. Steve's answer is right but not for all cases. It fails when that view is being executed from a third schema. For that to work you have to add the grant option: GRANT SELECT ON [TABLE_NAME] TO [READ_USERNAME] WITH GRANT OPTION; That way, [READ_USERNAME] can also grant select privilege over the view to another schema. A: As the table owner you need to grant SELECT access on the underlying tables to the user you are running the SELECT statement as. grant SELECT on TABLE_NAME to READ_USERNAME; A: If the view is accessed via a stored procedure, the execute grant is insufficient to access the view. You must grant select explicitly. A: If the view is accessed via a stored procedure, the execute grant is insufficient to access the view. You must grant select explicitly; for example: grant all on [table_name] to public; A: To use a view, the user must have the appropriate privileges but only for the view itself, not its underlying objects. However, if access privileges for the underlying objects of the view are removed, then the user no longer has access. This behavior occurs because the security domain that is used when a user queries the view is that of the definer of the view. If the privileges on the underlying objects are revoked from the view's definer, then the view becomes invalid, and no one can use the view. Therefore, even if a user has been granted access to the view, the user may not be able to use the view if the definer's rights have been revoked from the view's underlying objects. Oracle Documentation http://docs.oracle.com/cd/B28359_01/network.111/b28531/authorization.htm#DBSEG98017 A: You may also create the view with the schema name, for example: create or replace view schema_name.view_name as select..
{ "language": "en", "url": "https://stackoverflow.com/questions/140643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: Where can I find documentation for the Erlang shell? The Erlang documentation contains the documentation of modules. Where can I find the documentation of the Erlang shell? (Which is not a module, I suppose.) A: This page in the documentation seems to be a starting point. Especially the link in it. Check also the first link in it, with the shell's manpage. A: I'm using Getting Started with Erlang, chapter 1.2.1. It's about the shell and brings you up to speed on usage, etc. A: I think the Erlang shell is also a module. Take a look at this: Erlang shell A: This might also be helpful, as well as this.
{ "language": "en", "url": "https://stackoverflow.com/questions/140648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How often should you refactor? I had a discussion a few weeks back with some co-workers on refactoring, and I seem to be in a minority that believes "Refactor early, refactor often" is a good approach that keeps code from getting messy and unmaintainable. A number of other people thought that it just belongs in the maintenance phases of a project. If you have an opinion, please defend it. A: Just like you said: refactor early, refactor often. Refactoring early means the necessary changes are still fresh in my mind. Refactoring often means the changes tend to be smaller. Delaying refactoring only ends up making a big mess, which makes it even harder to refactor. Cleaning up as soon as I notice the mess prevents it from building up and becoming a problem later. A: As the book says, you refactor when: * *you add some code... a new feature *when you fix a bug / defect *when you do a code-review with multiple people *when you find yourself duplicating something for the third time... the 3 strikes rule A: I try to go by this motto: leave all the code you touch better than it was. When I make a fix or add a feature I usually use that opportunity to do limited refactoring on the impacted code. Often this makes it easier to make my intended change, so it actually doesn't cost anything. Otherwise, you should budget dedicated time for refactoring. If you can't because you are always fighting fires (I wonder why), then you should force yourself to refactor when you find making changes becomes much harder than it should and when "code smells" are just unbearable. A: A lot of times when I'm fleshing out ideas my code starts out very tightly coupled and messy. As I polish the idea, the logical separations start becoming more and more clear and I begin refactoring. It's a constant process and, as everyone suggests, should be done 'early and often'. A: I refactor when: I'm modifying code and I'm confused by it. If it takes me a while to sift it out, it needs refactoring. I'm creating new code and after I've got it "working". Oftentimes I'll get things working and as I'm coding I realize "Hey, I need to redo what I did 20 lines up, only with a few changes". At that point I refactor and continue. The only thing that in my opinion should stop you from doing this is time constraints. Like it or not, sometimes you just don't have the time to do it. A: It's like the National Parks -- always leave it a little better than you found it. To me, that means any time I open code, and have to scratch my head to figure out what's going on, I should refactor something. My primary goal is readability and understanding. Usually it's just renaming a variable for clarity. Sometimes it's extracting a method. For a trivial example, if I came across temp = array[i]; array[i] = array[j]; array[j] = temp; I would probably replace that with a swap(i,j) method. The compiler will likely inline it anyway, and a swap() tells everyone semantically what's going on. That being said, with my own code (starting from scratch), I tend to refactor for design. I often find it easier to work in a concrete class. When it's done and debugged, then I'll pull the old Extract Interface trick. I'll leave it to a co-worker to refactor for readability, as I'm too close to the code to notice the holes. After all, I know what I meant. A: Refactor opportunistically! Do it whenever it's easy. 
If refactoring is difficult, then you're doing it at the wrong time (when the code doesn't need it) or on the wrong part of the code (where there are better efficiencies to be gained elsewhere). (Or you're not that good at refactoring yet.) Saving refactoring for "maintenance" is a tautology. Refactoring is maintenance. A: I refactor every time I read anything and can make it more readable. Not a major restructuring. But if I think to myself "what does this List contain? Oh, Integers!" then I'll change it to List<Integer>. Also, I often extract methods in the IDE to put a good name on a few lines of code. A: I refactor code as soon as it's functional (all the tests pass). This way I clean it up while it's still fresh in my mind, and before anyone else sees how ugly the first version was. After the initial check-in I typically refactor every time I touch a piece of code. Refactoring isn't something you should set aside separate time for. It should be something you just do as you go. A: The answer is always, but more specifically: * *Assuming you branch for each task, then on each new branch, before it goes to QA. *If you develop all in the trunk, then before each commit. *When maintaining old code, use the above for new tasks, and for old code do refactoring on major releases that will obtain extra QA. A: You write code with two hats on. The just-get-the-thing-working hat and the I-need-to-understand-this-tomorrow hat. Obviously the second hat is the refactoring one. So you refactor every time you have made something work but have (inevitably) introduced smells like duplicated code, long methods, fragile error handling, bad variable names, etc. Refactoring whilst trying to get something working (i.e. wearing both hats) isn't practical for non-trivial tasks. But postponing refactoring till the next day/week/iteration is very bad because the context of the problem will be gone from your head. So switch between hats as often as possible but never combine them. A: Every time you encounter a need. At least when you're going to change a piece of code that needs refactoring. A: I localize refactoring to code related to my current task. I try to do my refactoring up front. I commit these changes separately since, from a functional standpoint, they are unrelated to the actual task. This way the code is cleaner to work with and the revision history is also cleaner. A: "Refactor early, refactor often" is a productive guideline. Though that kind of assumes that you really know the code. The older a system gets, the more dangerous refactoring becomes, and the more deliberation is required. In some cases refactoring needs to be a managed task, with effort level and time estimates, etc. A: Continuously, within reason. You should always be looking for ways to improve your software, but you have to be careful to avoid situations where you're refactoring for the sake of refactoring (Refactorbation). If you can make a case that a refactoring will make a piece of code faster, easier to read, easier to maintain, or provide some other value to the business, I say go for it! A: If you have a refactoring tool that will make the changes safely, then you should refactor whenever the code will compile, if it will make the code clearer. If you do not have such a tool, you should refactor whenever the tests are green, if it will make the code clearer. Make small changes -- rename a method to make what it does clearer. Extract a class to make a group of related variables be clearly related. 
Refactoring is not about making large changes, but about making things cleaner minute by minute. Refactoring is clearing your dishes after each meal, instead of waiting until every surface is covered in dirty plates. A: I refactor every chance I get because it lets me hone my code into the best it can be. I do this even when actively developing to prevent creating unmaintainable code in the first place. It also oftentimes lets me straighten out a poor design decision before it becomes unfixable. A: Three good reasons to refactor: * *Your original design (perhaps in a very small area, but design nonetheless) was wrong. This includes where you discover a common operation and want to share code. *You are designing iteratively. *The code is so bad that it needs major refurbishment. Three good reasons not to refactor: * *"This looks a little bit messy". *"I don't entirely agree with the way the last guy did this". *"It might be more efficient". (The problem there is 'might'). "Messy" is controversial - there is a valid argument variously called "fixing broken windows", or "code hygiene", which suggests that if you let small things slide, then you will start to let large things slide too. That's fine, and is a good thing to bear in mind, but remember that it's an analogy. It doesn't excuse shunting stuff around interminably, in search of the cleanest possible solution. How often you refactor should depend on how often the good reasons occur, and how confident you are that your test process protects you from introducing bugs. Refactoring is never a goal in itself. But if something doesn't work, it has to be fixed, and that's as true in initial development as it is in maintenance. For non-trivial changes it's almost always better to refactor, and incorporate the new concepts cleanly, than to patch a single place with great lumps of junk in order to avoid any change elsewhere. For what it's worth, I think nothing of changing an interface provided that I have a handle on what uses it, and that the scope of the resulting change is manageable. A: Absolutely as soon as it seems expedient. If you don't, the pain builds up. Since switching to Squeak (which I now seem to mention in every post) I've realised that lots of design questions during prototyping fall away because refactoring is really easy in that environment. To be honest, if you don't have an environment where refactoring is basically painless, I recommend that you try Squeak just to know what it can be like. A: Refactoring often can save the day, or at least some time. There was a project I was working on and we refactored all of our code after we hit some milestone. It worked well because, when we needed to rip out code that was no longer useful, it was easier to patch in whatever new thing we needed. A: We're having a discussion at work on this right now. We more or less agree that "write it so it works, then fix it". But we differ on the time perspective. I am more "fix it right away", my coworker is more "fix it in the next iteration". Some quotes that back him up: Douglas Crockford, Senior JavaScript Architect at Yahoo: refactor every 7th sprint. Ken Thompson (unix man): Code by itself almost rots and it's gonna be rewritten. Even when nothing has changed, for some reason it rots. I would like the code submitted for a task, once done, to be something you can come back to in 2 months and think "yes, I did well here". I do not believe that it is easy to find time to come back later and fix it. 
Believing this is somewhat naive from my point of view. A: I think you should refactor something when you're currently working on a part of it. Meaning, if you have to enhance function A, then you should refactor it before (and afterwards?). If you don't do anything with this function, then leave it as it is, as long as you have something else to do. Do not refactor a working part of the system, unless you already have to change it. A: There are many views on this topic, some linked to a particular methodology or approach to development. When using TDD, refactor early and often is, as you say, a favoured approach. In other situations you may refactor as and when needed. For example, when you spot repetitious code. When following more traditional methods with detailed up-front design, refactoring may happen less often. However, I would recommend not leaving refactoring until the end of a project. Not only will you be likely to introduce problems, potentially post-UAT, it is often also the case that refactoring gets progressively more difficult. For this reason, time constraints on the classic project cause refactoring and extra testing to be dropped, and when maintenance kicks in you may have already created a spaghetti-code monster. A: If it ain't broke, don't refactor it. I'd say the time to refactor belongs in the initial coding stage, and you can do it as often and as many times as you like. Once it's in the hands of a customer, then it becomes a different matter. You do not want to make work for yourself 'tidying' up code only to find that it gets shipped and breaks something. The time after initial delivery to refactor is when you say you'll do it. When the code gets a bit too smelly, then have a dedicated release that contains refactorings and probably a few more important fixes. That way, if you do happen to break something, you know where it went wrong, and you can much more easily fix it. If you refactor all the time, you will break things, you will not know that it's broken until it gets QAd, and then you'll have a hard time trying to figure out whether the bugfix/feature code changes caused the problem, or some refactoring you performed ages ago. Checking for breaking changes is a lot easier when the code looks roughly like it used to. Refactor a lot of code structure and you can make it next to impossible, so only refactor when you seriously mean to. Treat it like you would any other product code change and you should be ok. A: I think this Top 100 WordPress blog post may have some good advice. http://blog.accurev.com/2008/09/17/dr-strangecode-or-how-i-learned-to-stop-worrying-and-love-old-code/
{ "language": "en", "url": "https://stackoverflow.com/questions/140677", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "62" }
Q: Firebug - how can I run multiline scripts or create a new JavaScript file? Is there a way in Firebug to start a new script file to apply to the page? Basically I want to do work like I'd normally do on the Firebug console but be able to paste in multi-line functions, etc. It doesn't seem like the console is amenable to that. A: Down in the lower-right corner of the Firebug UI you should see a red square icon with an up arrow. Use that and stretch it to a size you like. A: Maybe not within Firebug, but you could try some techniques similar to the jQuery bookmarklet. bookmarklet link A: What about this idea: Assuming your page can already have script tags that reference jQuery and a 'Script Include' jQuery plugin, then in the Console you could arbitrarily do: $.include('http://example.com/scripts/some_script.js');
{ "language": "en", "url": "https://stackoverflow.com/questions/140680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What high level languages support multithreading? I'm wondering which languages support (or don't support) native multithreading, and perhaps get some details about the implementation. Hopefully we can produce a complete overview of this specific functionality. A: Erlang has built-in support for concurrent programming. Strictly speaking, Erlang processes are greenlets. But the language and virtual machine are designed from the ground up to support concurrency. The language has specific control structures for asynchronous inter-process messaging. In Python, greenlet is a third-party package that provides lightweight threads and channel-based messaging. But it does not bear the comparison with Erlang. A: I suppose that the list of languages that are higher-level than Haskell is pretty short, and it has pretty good support for concurrency and parallelism. A: Older versions of C and C++ (namely, C89, C99, C++98, and C++03) have no support at all in the core language, although libraries such as POSIX threads are available for pretty much every platform in common use today. The newest versions of C and C++, C11 and C++11, do have built-in threading support in the language, but it's an optional feature of C11, so implementations such as single-core embedded systems can choose not to support it while supporting the rest of C11 if they desire. A: With CPython, one has to remember about the GIL. To summarize: only one processor is used, even on multiprocessor machines. There are multiple ways around this, as the comment shows. A: Delphi/FreePascal also has support for threads. I'll assume, from other answers, that it's only native on the Windows platforms. Some nice libraries that implement better features on top of the TThread Object: * *OmniThreadLibrary *BMThread A: Clojure is an up-and-coming Lisp dialect for the JVM that is specifically designed to handle concurrency well. It features a functional-style API, some very efficient implementations of various immutable data structures, and an agent system (a bit like actors in Scala and processes in Erlang). It even has software transactional memory. All in all, Clojure goes to great lengths to help you write correct multithreaded and concurrent code. A: I believe that the official Squeak VM does not support native (OS) threads, but that the Gemstone version does. (Feel free to edit this if not correct). A: You need to define "native" in this context. Java claims some sort of built-in multithreading, but is just based on coarse-grained locking and some library support. At this moment, it is not more 'native' than C with POSIX threads. The next version of C++ (0x) will include a threading library as well. A: I know Java and C# support multithreading and that the next version of C++ will support it directly... (The planned implementation is available as part of the boost.org libraries...) A: Boost::thread is great, I'm not sure whether you can say it's part of the language though. It depends if you consider the CRT/STL/Boost to be 'part' of C++, or an optional add-on library. (Otherwise practically no language has native threading as they're all a feature of the OS.) A: This question doesn't make sense: whether a particular implementation chooses to implement threads as native threads or green threads has nothing to do with the language, that is an internal implementation detail. There are Java implementations that use native threads and Java implementations that use green threads. 
There are Ruby implementations that use native threads and Ruby implementations that use green threads. There are Python implementations that use native threads and Python implementations that use green threads. There are even POSIX Thread implementations that use green threads, e.g. the old LinuxThreads library or the GNU pth library. And just because an implementation uses native threads doesn't mean that these threads can actually run in parallel; many implementations use a Global Interpreter Lock to ensure only one thread can run at a time. On the other hand, using green threads doesn't mean that they can't run in parallel: the BEAM Erlang VM for example can schedule its green threads (more precisely green processes) across multiple CPU cores; the same is planned for the Rubinius Ruby VM. A: Perl doesn't usefully support native threads. Yes, there is a Perl threads module, and yes it uses native platform threads in its implementation. The problem is, it isn't very useful in the general case. When you create a new thread using Perl threads, it copies the entire state of the Perl interpreter. This is very slow and uses lots of RAM. In fact it's probably slower than using fork() on Unix, as the latter uses copy-on-write and Perl threads do not. But in general each language has its own threading model, some are different from others. Python (mostly) uses native platform threads but has a big lock which ensures that only one thread runs Python code at once. This actually has some advantages. Aren't threads out of fashion these days in favour of processes? (Think Google Chrome, IE8) A: I made a multithreading extension for Lua recently, called Lua Lanes. It merges multithreading concepts so naturally into the language that I would not see 'built in' multithreading being any better. For the record, Lua's built-in co-operative multithreading (coroutines) can often also be used. With or without Lanes. Lanes has no GIL, and runs code in separate Lua universes per thread. Thus, unless your C libraries crash, it is immune to the problems associated with thread usage. In fact, the concept is more like processes and message passing, although only one OS process is used. A: Finally, Go is here with multithreading via its own goroutines. Its syntax is in the style of the C language, and it is also easy to use and understand. A: Perl and Python do. Ruby is working on it, but the threads in Ruby 1.8 are not really threads.
{ "language": "en", "url": "https://stackoverflow.com/questions/140696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Glassfish: web application deployed with non-root context interprets requests relative to domain1/docroot The webapp uses Spring MVC. <bean class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping"> <property name="urlMap"> <map> <entry key="/*" value-ref="defaultHandler"/> </map> </property> <property name="order" value="2"/> </bean> <bean name="defaultHandler" class="org.springframework.web.servlet.mvc.UrlFilenameViewController"/> <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"/> <property name="prefix" value="/"/> <property name="suffix" value=""/> </bean> So requests like http://localhost:8080/application-context-folder/index.jsp should resolve to application-context-folder/index.jsp, but they resolve to domain1/docroot/application-context-folder instead. Is it by design or do I need to change something in the application or configuration? @Edit: there was a typo, the requested URL is http://localhost:8080/application-context-folder/index.jsp, not http://localhost:8080/index.jsp A: Use a redirect to your application context. Place an index.html file in the docroot folder of your domain. The file may look something like this: <html> <head> <title>Your application title</title> </head> <frameset rows="100%"> <frame src="http://localhost:8080/[application_context]"> <noframes> <body> Redirecting to <a href="http://localhost:8080/[application_context]">Some title</a>... </body> </noframes> </frameset> </html>
{ "language": "en", "url": "https://stackoverflow.com/questions/140708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do I ensure that user-entered data containing international characters doesn't get corrupted? It often happens that characters such as é get transformed to é, even though the collation for the MySQL DB, table and field is set to utf8_general_ci. The encoding in the Content-Type for the page is also set to UTF8. I know about utf8_encode/decode, but I'm not quite sure about where and how to use it. I have read the "The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)" article, but I need some MySQL / PHP specific pointers. How do I ensure that user-entered data containing international characters doesn't get corrupted?
A: Collation and charset are not the same thing. Your collation needs to match the charset, so if your charset is utf-8, so should the collation. Picking the wrong collation won't garble your data though - it just makes string comparison/sorting work wrongly. That said, there are several places where you can set charset settings in PHP. I would recommend that you use utf-8 throughout, if possible. Places that need a charset specified are:
* The database. This can be set on database, table and field level, and even on a per-query level.
* Connection between PHP and database.
* HTTP output; make sure that the HTTP header Content-Type specifies utf-8. You can set default values in PHP and in Apache, or you can use PHP's header function.
* HTTP input. Generally forms will be submitted in the same charset as the page was served up in, but to make sure, you should specify the accept-charset property. Also make sure that URLs are utf-8 encoded, or avoid using non-ASCII characters in URLs (and GET parameters).
utf8_encode/decode functions are a little strangely named. They specifically convert between latin1 (ISO-8859-1) and utf-8. If everything in your application is utf-8, you won't have to use them much. There are at least two gotchas in regards to utf-8 and PHP. The first is that PHP's builtin string functions expect strings to be single-byte. For a lot of operations, this doesn't matter, but it means that you can't rely on strlen and other functions. There is a good run-down of the limitations at this page. Usually, it's not a big problem, but especially when using third-party libraries, you need to be aware that things could blow up on this. One option is also to use the mbstring extension, which has the option to replace all troublesome functions with utf-8 aware alternatives. It's still not a 100% bulletproof solution, but it'll work for most cases. Another problem is that some installations of PHP still have the magic_quotes setting turned on. This problem is orthogonal to utf-8, but can lead to some head scratching. Turn it off, for your own sanity's sake.
A: Things you should do:
* Make sure Apache puts out UTF-8 content. Do this in your httpd.conf, or use PHP's header() function to do it manually.
* Make sure your database connection is UTF8. SET NAMES utf8 does the trick.
* Make sure all your tables are set to UTF8.
* Make sure all your PHP and template files are encoded as UTF8 if you store international characters in them.
You usually don't have to do too much with the mbstring or utf8_encode/decode functions when you do this.
A: On first look at http://www.nicknettleton.com/zine/php/php-utf-8-cheatsheet I think that one important thing is missing (perhaps I overlooked this one).
Depending on your MySQL installation and/or configuration you have to set the connection encoding so that MySQL knows what encoding you're expecting on the client side (meaning the client side of the MySQL connection, which should be your PHP script). You can do this by manually issuing a SET NAMES utf8 query prior to any other query you send to the MySQL server. If you're using PDO on the PHP side you can set up the connection to automatically issue this query on every (re)connect by passing it as a driver option when initializing your db connection:
$db = new PDO($dsn, $user, $pass,
    array(PDO::MYSQL_ATTR_INIT_COMMAND => "SET NAMES utf8"));
(Note that PDO::MYSQL_ATTR_INIT_COMMAND only takes effect when passed in the constructor's driver-options array; setting it afterwards with setAttribute() has no effect.)
A: For better unicode correctness, you should use utf8_unicode_ci (though the documentation is a little vague on the differences). You should also make sure the following MySQL flags are set correctly:
* default-character-set=utf8
* skip-character-set-client-handshake (important so the client doesn't enforce another encoding)
Those can be set in the MySQL configuration file (under the [mysqld] section) or at run time by sending the appropriate queries.
A: Regardless of the language it's written in, if you were to create an app that allows a wide array of encodings, handle it in pieces:
* Identify the encoding
  * Somehow you want to find out what kind of encoding you're dealing with; otherwise, it's pretty pointless to consider it further. You'll end up with junk chars.
* Handle your bytes
  * Think of these strings less like 'strings' of characters, and more like lists of bytes.
  * PHP is especially sneaky. Don't let it truncate your data on the fly. If you're regexing a UTF-8 string, make sure you identify it as such.
* Store for the LCD
  * Again, you don't want to truncate data. If you're storing a sentence in English, can you also store a set of Mandarin glyphs? How about Arabic? Which of these is going to require the most space? Account for it.
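Pulling the answers above together, here is a minimal PHP sketch of the three settings under your control; the DSN, credentials and input variable are placeholders, not values from the original posts:
<?php
// 1) Tell the browser the page is UTF-8.
header('Content-Type: text/html; charset=utf-8');

// 2) Make the MySQL connection speak UTF-8 (DSN/credentials are placeholders).
$db = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass',
    array(PDO::MYSQL_ATTR_INIT_COMMAND => "SET NAMES utf8"));

// 3) Use multibyte-aware string functions instead of the single-byte ones.
$length = mb_strlen($userInput, 'UTF-8');
?>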
{ "language": "en", "url": "https://stackoverflow.com/questions/140728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Why should I upgrade from SQL2000 to SQL2005? I'm looking for the single biggest reason you are glad that you've already made the jump from SQL2000 to SQL2005.
A:
* Recursion without creating temporary tables.
* Native exception support (TRY/CATCH instead of IF @@ERROR GOTO).
A: Because: Microsoft would like to remind customers that support for SQL Server 2000 Service Pack 3a (SP3a) will end on July 10, 2007.
A: Native XML support is big for us here.
A: SSIS support. Blows DTS away and is quite handy. :)
A:
* SSRS - A really huge advantage for my organization is having the free reporting tools that come with SQL Server 2005. Reporting Services allows me to produce nice looking reports that have exactly the fields that our managers need in very little time. It has a built-in tool so they can convert to Excel, PDF, or several other formats. Lots of value here.
* SSIS - Integration Services in 2005 is very powerful for ETL (extract, transform, load) functions. You can set up automated processes to run on a schedule.
* SSAS - Analysis Services looks promising. I have not made any data cubes yet because I want to organize an actual data warehouse. Once you have that, robust data mining algorithms are already built in.
Take a look at these three tools that are included with SQL Server 2005. If I had to pick one as the single biggest reason to move to 2005, it would be SSRS. At this point, I would suggest looking at SQL Server 2008.
A: Pagination without (manually) creating temporary tables is a basic, but huge improvement. However, if you are then going to drag & drop some GridViews in your ASP.NET app directly from the data table, you'd be paging in the app...
A: CLR integration
A: Row Versioning-Based Transaction Isolation
A: I think the single biggest reason is that SQL 2000 is not supported on Vista. I had to move to SQL 2005 because of that.
A: I don't know if it's just me, but Linq2SQL doesn't exactly work perfectly with SQL 2000. Ordinarily it's supposed to automatically generate and populate child collections based on inferences from your schema's keys, foreign keys, RI, etc. Works fine in 2005, but I haven't had much luck in 2000.
A: Common Table Expressions have proven incredibly useful.
A: Exception handling... how did we ever manage on SQL 2000...?
A: PIVOT. That beautiful little statement has saved me more time than any other SQL Server 2005 enhancement.
A: YOU CAN'T EXPORT TABLES with a "right click" anymore. This is more of a problem than a good reason.
A: Dynamic Management Views for optimisation and to quickly find out the state of the server.
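To make two of the most-cited features above concrete, here is a hedged T-SQL sketch; the Employees table and its columns are purely illustrative:
-- Recursion via a common table expression - no temporary tables needed.
WITH OrgChart AS (
    SELECT EmployeeID, ManagerID, 0 AS Depth
    FROM Employees
    WHERE ManagerID IS NULL
    UNION ALL
    SELECT e.EmployeeID, e.ManagerID, oc.Depth + 1
    FROM Employees e
    INNER JOIN OrgChart oc ON e.ManagerID = oc.EmployeeID
)
SELECT * FROM OrgChart;

-- Native exception handling instead of "IF @@ERROR GOTO ..." chains.
BEGIN TRY
    UPDATE Employees SET ManagerID = 42 WHERE EmployeeID = 7;
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
END CATCH;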
{ "language": "en", "url": "https://stackoverflow.com/questions/140733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Best way to cache resized images using PHP and MySQL What would be the best practice way to handle the caching of images using PHP? The file is renamed to a GUID on upload, and the GUID filename is stored in a MySQL database along with the original filename and alt tag. When the image is put into the HTML pages it is done so using a URL such as '/images/get/200x200/{guid}.jpg', which is rewritten to a PHP script. This allows my designers to specify (roughly - the source image may be smaller) the image size. The PHP script then creates a hash of the size (200x200 in the URL) and the GUID filename, and if the file has been generated before (a file with the name of the hash exists in the TMP directory) sends the file from the application TMP directory. If the hashed filename does not exist, then it is created, written to disk and served up in the same manner. Is this as efficient as it could be? (It also supports watermarking the images, and the watermarking settings are stored in the hash as well, but that's out of scope for this.)
A: One note worth adding is to make sure your code does not generate "unauthorized" sizes of these images. So the following URL will create a 200x200 version of image 1234 if one doesn't already exist.
/images/get/200x200/1234.jpg
A malicious person could start requesting random URLs, always altering the height & width of the image. This would cause your server some serious issues b/c it will be sitting there, essentially under attack, generating images of sizes you do not support.
/images/get/0x1/1234.jpg
/images/get/0x2/1234.jpg
...
/images/get/0x9999999/1234.jpg
/images/get/1x1/1234.jpg
... etc
Here's a random snip of code illustrating this:
<?php

$pathOnDisk = getImageDiskPath($_SERVER['REQUEST_URI']);

if(file_exists($pathOnDisk)) {
    // send header with image mime type
    header('Content-Type: image/jpeg'); // adjust to the image's actual type
    echo file_get_contents($pathOnDisk);
    exit;
} else {
    $matches = array();
    $ok = preg_match(
        '/\/images\/get\/(\d+)x(\d+)\/(\w+)\.jpg/',
        $_SERVER['REQUEST_URI'], $matches);
    if(! $ok) {
        // invalid url
        handleInvalidRequest();
    } else {
        list(, $width, $height, $guid) = $matches;
        // you should do this!
        if(isSupportedSize($width, $height)) {
            // size is supported. all good
            // generate the resized image, save it & output it
        } else {
            // invalid size requested!!!
            handleInvalidRequest();
        }
    }
}

// snip
function handleInvalidRequest() {
    // do something w/ invalid request
    // show a default graphic, log it etc
}
?>
A: I would do it in a different manner. Problems:
1. Having PHP serve the files out is less efficient than it could be.
2. PHP has to check the existence of files every time an image is requested.
3. Apache is far better at this than PHP will ever be.
There are a few solutions here. You can use mod_rewrite on Apache. It's possible to use mod_rewrite to test to see if a file exists, and if so, serve that file instead. This bypasses PHP entirely, and makes things far faster. The real way to do this, though, would be to generate a specific URL schema that should always exist, and then redirect to PHP if not. For example:
RewriteCond %{REQUEST_URI} ^/images/cached/
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
RewriteRule (.*) /images/generate.php?$1 [L]
So if a client requests /images/cached/<something> and that file doesn't exist already, Apache will redirect the request to /images/generate.php?/images/cached/<something>. This script can then generate the image, write it to the cache, and then send it to the client.
In the future, the PHP script is never called except for new images. Use caching. As another poster said, use things like mod_expires, Last-Modified headers, etc. to respond to conditional GET requests. If the client doesn't have to re-request images, page loads will speed up dramatically, and load on the server will decrease. For cases where you do have to send an image from PHP, you can use mod_xsendfile to do it with less overhead. See the excellent blog post from Arnold Daniels on the issue, but note that his example is for downloads. To serve images inline, take out the Content-Disposition header (the third header() call). Hope this helps - more after my migraine clears up.
A: There are two typos in Dan Udey's rewrite example (and I can't comment on it); it should rather be:
RewriteCond %{REQUEST_URI} ^/images/cached/
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
RewriteRule (.*) /images/generate.php?$1 [L]
Regards.
A: This seems like a great post, but my problem still remains unsolved. I don't have access to .htaccess with my host provider, so Apache tweaking is out of the question. Is there really a way to set a Cache-Control header for images?
A: Your approach seems quite reasonable - I would add that some mechanism should be put into place to check that the date the cached version was generated is after the last-modified timestamp of the original (source) image file, and if not, regenerate the cached/resized version. This will ensure that if an image is changed by the designers the cache will be updated appropriately.
A: That sounds like a solid way to do it. The next step may be to go beyond PHP/MySQL. Perhaps tweak your headers: if you're using PHP to send MIME types, you might also use 'Keep-Alive' and 'Cache-Control' headers to extend the life of your images on the server and take some of the load off of PHP/MySQL. Also, consider Apache plugins for caching as well, like mod_expires. Oh, one more thing: how much control do you have over your server? Should we limit this conversation to just PHP/MySQL?
A: phpThumb is a framework that generates resized images/thumbnails on the fly. It also implements caching and it's very easy to implement. The code to resize an image is:
<img src="/phpThumb.php?src=/path/to/image.jpg&amp;w=200&amp;h=200" alt="thumbnail"/>
This will give you a thumbnail of 200 x 200. It also supports watermarking. Check it out at: http://phpthumb.sourceforge.net/
A: I've managed to do this simply using a redirect header in PHP:
if (!file_exists($filename)) {
    // *** Insert code that generates image ***

    // Content type
    header('Content-type: image/jpeg');

    // Output
    readfile($filename);
} else {
    // Redirect
    $host = $_SERVER['HTTP_HOST'];
    $uri = rtrim(dirname($_SERVER['PHP_SELF']), '/\\');
    $extra = $filename;
    header("Location: http://$host$uri/$extra");
}
A: Instead of keeping the file address in the db I prefer adding a random number to the file name whenever the user logs in. Something like this for user 1234: image/picture_1234.png?rnd=6534122341 If the user submits a new picture during the session I just refresh the random number. The GUID approach tackles the cache problem 100%. However, it sort of makes it harder to keep track of the picture files. With this method there is a chance the user might see the same picture again at a future login, but the odds are low if you generate your random number from a billion numbers.
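For illustration, here is a rough sketch of what the generate.php handler described in the mod_rewrite answer above might look like; the size whitelist, the originals path and the GD resizing calls are all assumptions, not the poster's actual code:
<?php
// Apache rewrites a missing /images/cached/200x200/abc123.jpg to
// /images/generate.php?/images/cached/200x200/abc123.jpg
$requested = $_SERVER['QUERY_STRING'];

if (!preg_match('#^/images/cached/(\d+)x(\d+)/(\w+)\.jpg$#', $requested, $m)) {
    header('HTTP/1.1 404 Not Found');
    exit;
}
list(, $w, $h, $guid) = $m;

// Only generate sizes you actually support (see the warning above).
if (!in_array($w . 'x' . $h, array('100x100', '200x200'))) {
    header('HTTP/1.1 404 Not Found');
    exit;
}

// Resize with GD and write the result where Apache will find it next time.
$src = imagecreatefromjpeg('/path/to/originals/' . $guid . '.jpg'); // placeholder path
$dst = imagecreatetruecolor($w, $h);
imagecopyresampled($dst, $src, 0, 0, 0, 0, $w, $h, imagesx($src), imagesy($src));
imagejpeg($dst, $_SERVER['DOCUMENT_ROOT'] . $requested, 90);

// Serve this first request ourselves; later requests hit the static file.
header('Content-Type: image/jpeg');
readfile($_SERVER['DOCUMENT_ROOT'] . $requested);
?>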
{ "language": "en", "url": "https://stackoverflow.com/questions/140734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: In BIRT, how do I access an arbitrary data set from JavaScript? I am building my first report in BIRT. Very quickly I ran into a problem in which I wanted to display some text or data based on an expression that included data from two different tables (not tables that can/should be joined; hypothetical example: take a student's ACT score from his record in the student table and compare it against the statistics table's entry for ACT statistics). I soon realized that a data element has to be bound to a dataset (only one of them). I found a similar question in the BIRT mailing list which helped me get to a solution - I can bind an individual data element to a different dataset, but it can still access the elements of its container. I can send parameters to the dataset that the element is bound to (e.g. "ACT" in the example I mentioned above). Eventually, however, I came to a place where I needed to use data from three different tables. I am stuck here, and I'm assuming that there is a way to do this through the scripting abilities, but I have yet to see in the documentation a way to extract data from a data set - everything I have dealt with so far is associated with binding a report element to a dataset. To be clear, I have seen that I can add JavaScript functions to the initialize section of the top-level report (and call them from an expression in a data element), but I don't see how in a script I can query any of my datasets - as opposed to only interacting with the dataset bound to my data element. How can I access an arbitrary (though already defined) data set from JavaScript in BIRT? (Or how can I access more than two datasets from an element - one that it is bound to, and one that its container is bound to?)
A: I have not tried to do this for a while. The immediate answer that pops to mind is that you need to put the third data set into a table (which can have visibility set to false), and you would need to populate the table values into a GlobalValue. Then you could get at the GlobalValues from the data control through script. I know that it is not pretty. I will have a look over the weekend and see if 2.3 has added any functionality that makes this easier.
A: Use this.getValue(), which will return the current column's binding value, instead of dataSetRow["RUN"].
{ "language": "en", "url": "https://stackoverflow.com/questions/140743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is it possible to compile .NET IL code to machine code? I would like to distribute my .NET programs without the .NET framework. Is it possible to compile a .NET program to machine code?
A: There are some third-party tools that do this, e.g.
* http://www.remotesoft.com/linker/
* http://www.xenocode.com/
A: Another (expensive and proprietary, licenses start at $1599) product that can do this is Xenocode Postbuild. Haven't used it myself though, with it costing about the gross national product of a small African country and all...
A: Yes, you can now! Recently Microsoft announced .NET Native, a piece of technology that compiles your application together with the .NET Native runtime (they call it MRT) into one binary (actually one executable and one dynamic library). See links: http://blogs.msdn.com/b/dotnet/archive/2014/04/02/announcing-net-native-preview.aspx http://blogs.msdn.com/b/dotnet/archive/2014/04/24/dotnetnative-performance.aspx
A: Is it possible to compile .NET IL code to machine code? Yes, but the .NET Framework does it for you at runtime (default) or at install time (ngen). Among other reasons, this IL -> machine code compilation is done separately on each install machine so it can be optimized for that particular machine. I would like to distribute my .NET programs without the .NET framework. Is it possible to compile a .NET program to machine code? No, for all intents and purposes you cannot do this. The third-party workarounds may work in some scenarios, but by then I wouldn't really consider it "managed code" or ".NET" anymore.
A: Yes, you can precompile using Ngen.exe; however, this does not remove the CLR dependence. You must still ship the IL assemblies as well; the only benefit of Ngen is that your application can start without invoking the JIT, so you get a really fast startup time. According to CLR via C#: also, assemblies precompiled using Ngen are usually slower than JIT'ed assemblies because the JIT compiler can optimize for the target machine (32-bit? 64-bit? Special registers? etc.), while Ngen will just produce a baseline compilation. EDIT: There is some debate on the above info from CLR via C#, as some say that you are required to run Ngen on the target machine only, as part of the install process.
A: Remotesoft has one: Salamander .NET Linker. I don't have any experience with it though.
A: There is IL2CPU for the compilation part.
A: I don't think that it is possible. You can make a pre-JITted assembly, but you still need the framework.
A: I think you should not: that's the task of the JIT compiler. However, you could use ClickOnce or Windows Installer to deploy it so that the missing framework isn't such a big problem: you could tell the installer to download the Framework and install it.
A: If you're just concerned with the size of deploying the Framework, you might read up on this.
A: Look at the Mono project: the mono command with the --aot option. http://www.mono-project.com/AOT
A: I'd always thought it would be cool to compile C# to machine code directly without the CLR dependency though....
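For the Ngen route specifically, the tool is driven from the command line on the target machine, typically during installation; a minimal sketch (MyApp.exe is a placeholder name, and the commands assume a .NET Framework or Visual Studio command prompt):
rem Pre-compile the IL in MyApp.exe to a native image on this machine.
ngen install MyApp.exe

rem Remove the native image again.
ngen uninstall MyApp.exe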
{ "language": "en", "url": "https://stackoverflow.com/questions/140750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: How do you manipulate GUIDs when programming C or C++ in Windows? How do you manipulate GUIDs when doing Windows programming in C or C++?
A: Under Win32, if you want to manipulate GUIDs you can use the UuidXXX() Network/RPC functions:
* UuidCompare()
* UuidCreate()
* UuidCreateNil()
* UuidCreateSequential()
* UuidEqual()
* UuidFromString()
* UuidHash()
* UuidIsNil()
* UuidToString()
A: I use boost::uuid: http://www.boost.org/doc/libs/1_43_0/libs/uuid/index.html That way it is guaranteed to still work if I port my application to another platform.
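To make the first answer concrete, here is a minimal sketch in C using a few of the UuidXXX() functions listed above; link against rpcrt4.lib:
#include <stdio.h>
#include <windows.h>
#include <rpc.h>

int main(void)
{
    UUID id;
    RPC_CSTR str = NULL;

    /* Create a new GUID and render it as a string. */
    if (UuidCreate(&id) == RPC_S_OK && UuidToStringA(&id, &str) == RPC_S_OK)
    {
        printf("%s\n", (char *)str);
        RpcStringFreeA(&str); /* the string is allocated by the RPC runtime */
    }
    return 0;
}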
{ "language": "en", "url": "https://stackoverflow.com/questions/140751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Looking for File Traversal Functions in Python that are Like Java's In Java you can do File.listFiles() and receive all of the files in a directory. You can then easily recurse through directory trees. Is there an analogous way to do this in Python?
A: As a long-time Pythonista, I have to say the path/file manipulation functions in the std library are sub-par: they are not object-oriented and they reflect an obsolete, lets-wrap-OS-system-functions-without-thinking philosophy. I'd heartily recommend the 'path' module as a wrapper (around os, os.path, glob and tempfile if you must know): much nicer and OOPy: http://pypi.python.org/pypi/path.py/2.2 This is walk() with the path module:
import os
from path import path

home = path(os.environ['HOME'])
for f in home.walk():
    if f.isfile() and f.endswith('~'):
        f.remove()
A: Try "listdir()" in the os module (docs):
import os
print os.listdir('.')
A: Yes, there is. The Python way is even better. There are three possibilities:
1) Like File.listFiles(): Python has the function os.listdir(path). It works like the Java method.
2) Pathname pattern expansion with glob: the glob module contains functions to list files on the file system using Unix shell-like patterns, e.g.
files = glob.glob('/usr/joe/*.gif')
3) File traversal with walk: really nice is the os.walk function of Python. The walk method returns a generator that recursively lists all directories and files below a given starting path. An example:
import os
from os.path import join
for root, dirs, files in os.walk('/usr'):
    print "Current directory", root
    print "Sub directories", dirs
    print "Files", files
You can even remove directories from "dirs" on the fly:
if "joe" in dirs:
    dirs.remove("joe")
to avoid walking into directories called "joe". listdir and walk are documented here. glob is documented here.
A: Straight from Python's Reference Library
>>> import glob
>>> glob.glob('./[0-9].*')
['./1.gif', './2.txt']
>>> glob.glob('*.gif')
['1.gif', 'card.gif']
>>> glob.glob('?.gif')
['1.gif']
A: Take a look at os.walk() and the examples here. With os.walk() you can easily process a whole directory tree. An example from the link above...
# Delete everything reachable from the directory named in 'top',
# assuming there are no symbolic links.
# CAUTION: This is dangerous! For example, if top == '/', it
# could delete all your disk files.
import os
for root, dirs, files in os.walk(top, topdown=False):
    for name in files:
        os.remove(os.path.join(root, name))
    for name in dirs:
        os.rmdir(os.path.join(root, name))
A: Use os.path.walk if you want subdirectories as well. walk(top, func, arg) Directory tree walk with callback function. For each directory in the directory tree rooted at top (including top itself, but excluding '.' and '..'), call func(arg, dirname, fnames). dirname is the name of the directory, and fnames a list of the names of the files and subdirectories in dirname (excluding '.' and '..'). func may modify the fnames list in-place (e.g. via del or slice assignment), and walk will only recurse into the subdirectories whose names remain in fnames; this can be used to implement a filter, or to impose a specific order of visiting. No semantics are defined for, or required of, arg, beyond that arg is always passed to func. It can be used, e.g., to pass a filename pattern, or a mutable object designed to accumulate statistics. Passing None for arg is common.
A: I'd recommend against os.path.walk as it is being removed in Python 3.0. os.walk is simpler, anyway, or at least I find it simpler.
A: You can also check out Unipath, an object-oriented wrapper of Python's os, os.path and shutil modules. Example:
>>> from unipath import Path
>>> p = Path('/Users/kermit')
>>> p.listdir()
Path(u'/Users/kermit/Applications'),
Path(u'/Users/kermit/Desktop'),
Path(u'/Users/kermit/Documents'),
Path(u'/Users/kermit/Downloads'),
...
Installation through the Cheese Shop:
$ pip install unipath
A: Seeing as I have programmed in Python for a long time, I have many times used the os module and made my own function to print all files in a directory. The code for the function:
import os

def PrintFiles(direc):
    files = os.listdir(direc)
    for x in range(len(files)):
        print("File no. " + str(x + 1) + ": " + files[x])

PrintFiles(".")  # e.g. list the current directory
{ "language": "en", "url": "https://stackoverflow.com/questions/140758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How do I know WHEN to close an HTTP 1.1 Keep-Alive Connection? I am writing a web server in Java and I want it to support HTTP 1.1 Keep-Alive connections. But how can I tell when the client is done sending requests for a given connection? (Like a double end-of-line or something.) Let's see how stackoverflow handles this very obscure question -- answers for which, on Google, are mired in technical specifications and obscure language. I want a plain-English answer for a non-C programmer :) I see. That confirms my suspicion of having to rely on the SocketTimeoutException. But I wasn't sure if there was something I could rely on from the client that indicates it is done with the connection -- which would allow me to close the connections sooner in most cases -- instead of waiting for the timeout. Thanks
A: If you're building your server to meet the standard, then you've got a lot of information to guide you here already. Simply put, it should be based on the time since the connection was last used, and not so much on the request data. In a longer-winded way, the practical considerations section of the HTTP/1.1 document has some guidance for you: "Servers will usually have some time-out value beyond which they will no longer maintain an inactive connection. Proxy servers might make this a higher value since it is likely that the client will be making more connections through the same server. The use of persistent connections places no requirements on the length (or existence) of this time-out for either the client or the server." or "When a client or server wishes to time-out it SHOULD issue a graceful close on the transport connection. Clients and servers SHOULD both constantly watch for the other side of the transport close, and respond to it as appropriate. If a client or server does not detect the other side's close promptly it could cause unnecessary resource drain on the network."
A: Let's see how stackoverflow handles this very obscure question -- answers for which, on Google, are mired in technical specifications and obscure language. I just put When should I close an HTTP 1.1 connection? into Google, and the third hit was HTTP Made Really Easy. In the table of contents, there is a link to a section entitled Persistent Connections and the "Connection: close" Header. This section is three paragraphs long, uses very simple language, and tells you exactly what you want to know. I want a plain-English answer for a non-C programmer :) With all due respect, programming is a technical endeavour where the details matter a great deal. Reading technical documentation is an absolutely essential skill. Relying on "plain English" third-party interpretations of the specifications will only result in you doing a poor job.
A: You close it whenever you'd like. The header indicates that the client would prefer you to leave the connection open, but that doesn't require the server to comply. Most servers leave it open for about 5-10 seconds; some don't pay attention to it at all.
A: You should read the RFCs dealing with the Keep-Alive feature. Otherwise you might end up with a server that doesn't work as expected. As @[Stephen] has already pointed out, the server is free to close the connection anytime it wishes (OK, not in the middle of a request/response pair though). Ditto for the client. Any other solution would allow the server or the client to perform a DoS on the other party. EDIT: Have a look at the Connection header.
The client (and the server) can request a graceful connection closure using the header. For example, Connection: close inside the request is a request to the server to close the connection after it sends the response.
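Tying the answers together, here is a hedged Java sketch of the usual server loop: honor a Connection: close request when the client sends one, and otherwise fall back on a socket read timeout as the idle cutoff. Request, readRequest, writeResponse and handle are assumed helpers from your own server, not real library calls:
import java.io.IOException;
import java.net.Socket;
import java.net.SocketTimeoutException;

void serve(Socket client) throws IOException {
    client.setSoTimeout(10000); // idle timeout: give up after 10s of silence
    try {
        while (true) {
            Request req = readRequest(client.getInputStream());   // assumed helper
            writeResponse(client.getOutputStream(), handle(req)); // assumed helper
            if ("close".equalsIgnoreCase(req.header("Connection"))) {
                break; // the client asked for a graceful close
            }
        }
    } catch (SocketTimeoutException idle) {
        // no new request arrived within the keep-alive window
    } finally {
        client.close();
    }
}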
{ "language": "en", "url": "https://stackoverflow.com/questions/140765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How to simplify this code (generates a random int between min and max based on an unsigned int)? The code is
return min + static_cast<int>(static_cast<double>(max - min + 1.0) * (number / (UINT_MAX + 1.0)));
number is a random number obtained by rand_s. min and max are ints and represent minimum and maximum values (inclusive). If you provide a solution not using an unsigned int as the number, please also explain how to make it random. Please do not submit solutions using rand().
A: @Andrew Stein In Numerical Recipes in C: The Art of Scientific Computing (William H. Press, Brian P. Flannery, Saul A. Teukolsky, William T. Vetterling; New York: Cambridge University Press, 1992 (2nd ed., p. 277)), the following comments are made: "If you want to generate a random integer between 1 and 10, you should always do it by using high-order bits, as in j = 1 + (int) (10.0 * (rand() / (RAND_MAX + 1.0))); and never by anything resembling j = 1 + (rand() % 10); (which uses lower-order bits)." From man 3 rand.
A: The static_cast<double> is redundant because the "+1.0"s will cause promotion to double anyway.
A: How about Boost.Random?
A: You could do the arithmetic in an unsigned long long instead of a double, but only if ULONGLONG_MAX >= UINT_MAX*UINT_MAX, which is probably implementation-defined. But if you're worried about that, you'd be worried about potential loss of precision in the original code in the case where (max - min) or RAND_MAX is large. Whether the long long is actually faster might depend on how good your platform's hardware float is. But integer arithmetic arguably is inherently simpler than floating-point.
A: Something like
min + number % (max - min + 1)
Check the end cases.
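To make the trade-off concrete, here is a sketch of both approaches as functions; scale_high_bits is the question's own formula factored out, and scale_modulo is the shorter form from the last answer, which is slightly biased whenever the range size does not evenly divide UINT_MAX + 1:
#include <climits>

// High-order-bits scaling (the question's formula, factored out).
// Assumes max >= min and that max - min + 1 fits in an int.
int scale_high_bits(unsigned int number, int min, int max)
{
    return min + static_cast<int>((max - min + 1.0) * (number / (UINT_MAX + 1.0)));
}

// Modulo scaling: shorter, but biased toward the low end of the range
// unless the range size evenly divides the number of possible inputs.
int scale_modulo(unsigned int number, int min, int max)
{
    return min + static_cast<int>(number % (static_cast<unsigned int>(max - min) + 1u));
}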
{ "language": "en", "url": "https://stackoverflow.com/questions/140786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I test if another installation is already in progress? Assuming I'm trying to automate the installation of something on Windows, I want to test whether another installation is in progress before attempting the install. I don't have control over the installer and have to do this in the automation framework. Is there a better way to do this (some Win32 API?) than just testing if msiexec is running? [Update 2] Improved the previous code I had been using to just access the mutex directly; this is a lot more reliable:
using System.Threading;
[...]
/// <summary>
/// Wait (up to a timeout) for the MSI installer service to become free.
/// </summary>
/// <returns>
/// Returns true for a successful wait, when the installer service has become free.
/// Returns false when waiting for the installer service has exceeded the timeout.
/// </returns>
public static bool WaitForInstallerServiceToBeFree(TimeSpan maxWaitTime)
{
    // The _MSIExecute mutex is used by the MSI installer service to serialize installations
    // and prevent multiple MSI based installations happening at the same time.
    // For more info: http://msdn.microsoft.com/en-us/library/aa372909(VS.85).aspx
    const string installerServiceMutexName = "Global\\_MSIExecute";

    try
    {
        Mutex MSIExecuteMutex = Mutex.OpenExisting(installerServiceMutexName,
            System.Security.AccessControl.MutexRights.Synchronize | System.Security.AccessControl.MutexRights.Modify);
        bool waitSuccess = MSIExecuteMutex.WaitOne(maxWaitTime, false);
        MSIExecuteMutex.ReleaseMutex();
        return waitSuccess;
    }
    catch (WaitHandleCannotBeOpenedException)
    {
        // Mutex doesn't exist, do nothing
    }
    catch (ObjectDisposedException)
    {
        // Mutex was disposed between opening it and attempting to wait on it, do nothing
    }
    return true;
}
A: See the description of the _MSIExecute Mutex on MSDN.
A: I was getting an unhandled exception using the code above. I cross-referenced this article with this one. Here's my updated code:
/// <summary>
/// Wait (up to a timeout) for the MSI installer service to become free.
/// </summary>
/// <returns>
/// Returns true for a successful wait, when the installer service has become free.
/// Returns false when waiting for the installer service has exceeded the timeout.
/// </returns>
public static bool IsMsiExecFree(TimeSpan maxWaitTime)
{
    // The _MSIExecute mutex is used by the MSI installer service to serialize installations
    // and prevent multiple MSI based installations happening at the same time.
    // For more info: http://msdn.microsoft.com/en-us/library/aa372909(VS.85).aspx
    const string installerServiceMutexName = "Global\\_MSIExecute";
    Mutex MSIExecuteMutex = null;
    var isMsiExecFree = false;

    try
    {
        MSIExecuteMutex = Mutex.OpenExisting(installerServiceMutexName,
            System.Security.AccessControl.MutexRights.Synchronize);
        isMsiExecFree = MSIExecuteMutex.WaitOne(maxWaitTime, false);
    }
    catch (WaitHandleCannotBeOpenedException)
    {
        // Mutex doesn't exist, do nothing
        isMsiExecFree = true;
    }
    catch (ObjectDisposedException)
    {
        // Mutex was disposed between opening it and attempting to wait on it, do nothing
        isMsiExecFree = true;
    }
    finally
    {
        if (MSIExecuteMutex != null && isMsiExecFree)
            MSIExecuteMutex.ReleaseMutex();
    }
    return isMsiExecFree;
}
A: I have been working on this - for about a week - using your notes (Thank you) and those from other sites - too many to name (Thank you all). I stumbled across information revealing that the Service could yield enough information to determine if the MSIEXEC service is already in use.
The service being 'msiserver' (Windows Installer), and its information of interest being both State and AcceptStop. The following VBScript code checks this:
Set objWMIService = GetObject("winmgmts:\\.\root\cimv2")
Check = False
Do While Not Check
    WScript.Sleep 3000
    Set colServices = objWMIService.ExecQuery("Select * From Win32_Service Where Name='msiserver'")
    For Each objService In colServices
        If (objService.Started And Not objService.AcceptStop) Then
            WScript.Echo "Another .MSI is running."
        ElseIf ((objService.Started And objService.AcceptStop) Or Not objService.Started) Then
            WScript.Echo "Ready to install an .MSI application."
            Check = True
        End If
    Next
Loop
{ "language": "en", "url": "https://stackoverflow.com/questions/140820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Can a macro be used for read-only access to a variable? Can you define a macro that accesses a normal variable, but in a read-only fashion (other than defining it as a call to a function)? For example, can the VALUE macro in the following code be defined in such a way that the dostuff() function causes a compile error?
struct myobj {
    int value;
};

/* This macro does not satisfy the read-only requirement */
#define VALUE(o) (o)->value

/* This macro uses a function, unfortunately */
int getvalue(struct myobj *o) { return o->value; }
#define VALUE(o) getvalue(o)

void dostuff(struct myobj *foo)
{
    printf("The value of foo is %d.\n", VALUE(foo)); /* OK */
    VALUE(foo) = 1;  /* We want a compile error here */
    foo->value = 1;  /* This is ok. */
}
A: If the variable is always numeric, this works:
#define VALUE(x) (x+0)
or in the context of your example,
#define VALUE(x) (x->value+0)
A: See §6.5.17 in the C standard (C99 & C1x): "A comma operator does not yield an lvalue."
#define VALUE(x) (0, x)
(Not portable to C++.)
A: Try
#define VALUE(o) (const int)((o)->value)
A: Ok, I came up with one:
#define VALUE(o) (1 ? (o)->value : 0)
A: Is this a puzzle or is it an engineering task? If it's an engineering task, then there are better ways to get opacity of structures in C. In this blog article, I wrote a decent enough description of how to do that in C.
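On the last answer's point about opacity, here is a minimal sketch of the usual opaque-struct pattern in C; the file and function names are illustrative. Clients only ever see an incomplete type, so o->value will not even compile outside myobj.c:
/* myobj.h - public header: the struct is an incomplete type here. */
struct myobj;
int myobj_get_value(const struct myobj *o); /* read-only access */

/* myobj.c - the only translation unit that can touch the members. */
struct myobj { int value; };
int myobj_get_value(const struct myobj *o) { return o->value; }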
{ "language": "en", "url": "https://stackoverflow.com/questions/140825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Has anyone used Raven? What do you think about this build tool? I'm thinking of migrating from maven2 to raven (my poms are getting bigger and bigger), but I'd like to hear some opinions first. Thanks! @andre: Thanks for writing, but I was actually looking for real experiences using raven. Anyway, the fact that nobody wrote is an indicator by itself (it seems few people are using it).
A: I haven't used either Raven or Buildr, but I have heard good things about the latter. In this blog article by Assaf Arkin, there is a nice case study: a 5,443 line, 52 file Maven configuration was reduced to 485 lines of Buildr. And, even though everybody says "Ruby is slow", Buildr was 2-6x faster than Maven. Also, unlike Raven, Buildr seems to still be maintained: it is currently in the incubator stage as an official Apache project.
A: pom growth is a problem that everybody faces w/ maven I guess, but maven is at least maintained (2.1 is just around the corner) and the raven project looks pretty dead to me. No updates this year, and the mailing list archives are also very small. It looks to me as if it's too risky to switch your build process to a tool w/o a living community. Not quite the answer you wanted I guess, but my 2 cents.
A: I don't know anything about raven. You should check out plain old rake, which lets you create very powerful tasks. I've also heard about sake, which is just like rake tasks but system-wide, instead of being only available inside one of your projects. They may not be specialized for Java, but they sure beat the hell out of plain old bash or (heaven forbid) batch scripts.
{ "language": "en", "url": "https://stackoverflow.com/questions/140843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to avoid SSIS FTP task from failing when there are no files to download? I'm using SQL Server 2005, and creating FTP tasks within SSIS. Sometimes there will be files to FTP over, sometimes not. If there are no files, I don't want the task nor the package to fail. I've changed the arrow going from the FTP task to the next to "completion", so the package runs through. I've changed the allowed number of errors to 4 (because there are 4 FTP tasks, and any of the 4 directories may or may not have files). But, when I run the package from a job in Agent, it marks the job as failing. Since this will be running every 15 minutes, I don't want a bunch of red X's in my job history, which will cause us to not see a problem when it really does occur. How do I set the properties in the FTP task so that not finding files to FTP is not a failure? The operation I am using is "Send files". Here is some more information: the files are on a server that I have no access to except through FTP. And, I don't know the filenames ahead of time. The user can call them whatever they want. So I can't check for specific files, nor, I think, can I check at all, except through using the FTP connection and tasks based upon that connection. The files are on a remote server, and I want to copy them over to my server, to get them from that remote server. I can shell out to a command-line ftp in a script task. Perhaps that is what I need to use instead of an FTP task. (I have changed to use the ftp command line, with a parameter file, called from a script task. It gives no errors when there are no files to get. I think this solution is going to work for me. I'm creating the parameter file dynamically, which means I don't need to have connection information in the plain text file; it can instead be stored in my configuration file, which is in a more secure location.)
A: I just had this issue; after reading some of the replies here, nothing really sorted out my problem and the solutions in here seem insane in terms of complexity. My FTP task was failing since I did not allow overwriting files. Let's say the job was kicked off twice in a row: the first pass will be fine, because some files are transferred over, but it will fail if a local file already exists. My solution was simple:
* Right-click task - Properties
* Set ForceExecutionResult = "Success"
A: (I can't accept my own answer, but this was the solution that worked for me.) It may not be the best solution, but this works. I use a script task, and have a bunch of variables for the FTP connection information, and source and destination directories. (Because we'll be changing the server this is run on, it will be easier to change in a config package.) I create a text file on the fly, and write the FTP commands to it:
Dim ftpStream As StreamWriter = ftpFile.CreateText()
ftpStream.WriteLine(ftpUser)
ftpStream.WriteLine(ftpPassword)
ftpStream.WriteLine("prompt off")
ftpStream.WriteLine("binary")
ftpStream.WriteLine("cd " & ftpDestDir)
ftpStream.WriteLine("mput " & ftpSourceDir)
ftpStream.WriteLine("quit 130")
ftpStream.Close()
Then, after giving it enough time to really close, I start a process to run the ftp command:
ftpParameters = "-s:" & ftpParameterLoc & ftpParameterFile & " " & ftpServer
proc = System.Diagnostics.Process.Start("ftp", ftpParameters)
Then, after giving it some more time for the ftp process to run, I delete the temporary ftp file (that has connection information in it!).
If files don't exist in the source directory (the variable has the \\drive\dir\*.* mapping), then there is no error. If some other error happens, the task still fails, as it should. I'm new to SSIS, and this may be a kludge. But it works for now. I guess I asked for the best way, and I'll certainly not claim that this is it. As I pointed out, I have no way of knowing what the files are named, or even if there are any files there at all. If they are there, I want to get them.
A: Check this link, which describes how to gracefully handle a task error in an SSIS package. I had almost the same problem, but with retrieving files. I wanted the package NOT to fail when no files were found on the FTP server. The above link stops the error bubbling up and causing the package to fail; something you would have thought FailPackageOnError=false should have done? :-S Hope this solves it for you too!
A: I understand that you have found an answer to your question. This is for other users who might stumble upon this question. Here is one possible way of achieving this. A Script Task can be used to find the list of files present in an FTP folder path for a given pattern (say *.txt). The below example shows how this can be done.
Step-by-step process:
* On the SSIS package, create an FTP Connection named FTP and also create 5 variables as shown in screenshot #1. Variable RemotePath contains the FTP folder path; LocalPath contains the folder where the files will be downloaded to; FilePattern contains the file pattern to find the list of files to download from the FTP server; FileName will be populated by the Foreach Loop container, but to avoid an FTP Task design-time error, it can be populated with / or the DelayValidation property on the FTP Task can be set to True.
* On the SSIS package, place a Script Task, a Foreach Loop container, and an FTP Task within the Foreach Loop container as shown in screenshot #2.
* Replace the Main() method within the Script Task with the code under the Script Task Code section. The Script Task will populate the variable ListOfFiles with the collection of files matching a given pattern. This example will first use the pattern *.txt, which yields no results, and then later the pattern *.xls, which will match a few files on the FTP server.
* Configure the Foreach Loop container as shown in screenshots #3 and #4. This task will loop through the variable ListOfFiles. If there are no files, the FTP task inside the loop container will not execute. If there are files, the FTP task inside the loop container will execute once for each file found on the FTP server.
* Configure the FTP Task as shown in screenshots #5 and #6.
* Screenshot #7 shows a sample package execution when no matching files are found for the pattern *.txt.
* Screenshot #8 shows the contents of the folder C:\temp\ before execution of the package.
* Screenshot #9 shows a sample package execution when matching files are found for the pattern *.xls.
* Screenshot #10 shows the contents of the FTP remote path /Practice/Directory_New.
* Screenshot #11 shows the contents of the folder C:\temp\ after execution of the package.
* Screenshot #12 shows the package failure when provided with an incorrect remote path.
* Screenshot #13 shows the error message related to the package failure.
Hope that helps.
Script Task Code: C# code that can be used in SSIS 2008 and above.
Include the using statement using System.Text.RegularExpressions;
public void Main()
{
    Variables varCollection = null;
    ConnectionManager ftpManager = null;
    FtpClientConnection ftpConnection = null;
    string[] fileNames = null;
    string[] folderNames = null;
    System.Collections.ArrayList listOfFiles = null;
    string remotePath = string.Empty;
    string filePattern = string.Empty;
    Regex regexp;
    int counter;

    Dts.VariableDispenser.LockForWrite("User::RemotePath");
    Dts.VariableDispenser.LockForWrite("User::FilePattern");
    Dts.VariableDispenser.LockForWrite("User::ListOfFiles");
    Dts.VariableDispenser.GetVariables(ref varCollection);

    try
    {
        remotePath = varCollection["User::RemotePath"].Value.ToString();
        filePattern = varCollection["User::FilePattern"].Value.ToString();
        ftpManager = Dts.Connections["FTP"];
        ftpConnection = new FtpClientConnection(ftpManager.AcquireConnection(null));
        ftpConnection.Connect();
        ftpConnection.SetWorkingDirectory(remotePath);
        ftpConnection.GetListing(out folderNames, out fileNames);
        ftpConnection.Close();
        listOfFiles = new System.Collections.ArrayList();
        if (fileNames != null)
        {
            regexp = new Regex("^" + filePattern + "$");
            for (counter = 0; counter <= fileNames.GetUpperBound(0); counter++)
            {
                if (regexp.IsMatch(fileNames[counter]))
                {
                    listOfFiles.Add(remotePath + fileNames[counter]);
                }
            }
        }
        varCollection["User::ListOfFiles"].Value = listOfFiles;
    }
    catch (Exception ex)
    {
        Dts.Events.FireError(-1, string.Empty, ex.ToString(), string.Empty, 0);
        Dts.TaskResult = (int) ScriptResults.Failure;
    }
    finally
    {
        varCollection.Unlock();
        ftpConnection = null;
        ftpManager = null;
    }

    Dts.TaskResult = (int)ScriptResults.Success;
}
VB code that can be used in SSIS 2005 and above. Include the Imports statement Imports System.Text.RegularExpressions
Public Sub Main()
    Dim varCollection As Variables = Nothing
    Dim ftpManager As ConnectionManager = Nothing
    Dim ftpConnection As FtpClientConnection = Nothing
    Dim fileNames() As String = Nothing
    Dim folderNames() As String = Nothing
    Dim listOfFiles As Collections.ArrayList
    Dim remotePath As String = String.Empty
    Dim filePattern As String = String.Empty
    Dim regexp As Regex
    Dim counter As Integer

    Dts.VariableDispenser.LockForRead("User::RemotePath")
    Dts.VariableDispenser.LockForRead("User::FilePattern")
    Dts.VariableDispenser.LockForWrite("User::ListOfFiles")
    Dts.VariableDispenser.GetVariables(varCollection)

    Try
        remotePath = varCollection("User::RemotePath").Value.ToString()
        filePattern = varCollection("User::FilePattern").Value.ToString()
        ftpManager = Dts.Connections("FTP")
        ftpConnection = New FtpClientConnection(ftpManager.AcquireConnection(Nothing))
        ftpConnection.Connect()
        ftpConnection.SetWorkingDirectory(remotePath)
        ftpConnection.GetListing(folderNames, fileNames)
        ftpConnection.Close()
        listOfFiles = New Collections.ArrayList()
        If fileNames IsNot Nothing Then
            regexp = New Regex("^" & filePattern & "$")
            For counter = 0 To fileNames.GetUpperBound(0)
                If regexp.IsMatch(fileNames(counter)) Then
                    listOfFiles.Add(remotePath & fileNames(counter))
                End If
            Next counter
        End If
        varCollection("User::ListOfFiles").Value = listOfFiles
        Dts.TaskResult = ScriptResults.Success
    Catch ex As Exception
        Dts.Events.FireError(-1, String.Empty, ex.ToString(), String.Empty, 0)
        Dts.TaskResult = ScriptResults.Failure
    Finally
        varCollection.Unlock()
        ftpConnection = Nothing
        ftpManager = Nothing
    End Try

    Dts.TaskResult = ScriptResults.Success
End Sub
A: I don't have a packaged answer for you, but since no one else has posted anything yet... You should be able to set a variable in an ActiveX script task and then use that to decide whether or not the FTP task should run. There is an example here that works with local paths. Hopefully you can adapt the concept (or, if possible, map the FTP drive and do it that way).
A: 1) Set the FTP Task property ForceExecutionResult = Success 2) Add this code to the FTP Task OnError event handler.
public void Main()
{
    // TODO: Add your code here
    int errorCode = (int)Dts.Variables["System::ErrorCode"].Value;
    if (errorCode.ToString().Equals("-1073573501"))
    {
        Dts.Variables["System::Propagate"].Value = false;
    }
    else
    {
        Dts.Variables["System::Propagate"].Value = true;
    }
    Dts.TaskResult = (int)ScriptResults.Success;
}
A: Put it in a ForEach container, which iterates over the files to upload. No files, no FTP, no failure.
A: You can redirect on failure, to another task that does nothing, i.e. a script that just returns true. To do this, add the new script task, highlight your FTP task, and a second green connector will appear; drag this to the script task, and then double-click it. Select Failure in the Value drop-down. Obviously, you'll then need to handle real failures in this script task to still display right in the job history.
A: Aha, OK - thanks for clarification. As the FTP task cannot return a folder listing, it will not be possible to use the ForEach as I initially said - that only works if you're uploading X amount of files to a remote source. To download X amount of files, you can go two ways: either you can do it entirely in .Net in a script task, or you can populate an ArrayList with the file names from within a .Net script task, then ForEach over the ArrayList, passing the file name to a variable and downloading that variable name in a standard FTP task. Code example to suit: http://forums.microsoft.com/msdn/ShowPost.aspx?PostID=2472491&SiteID=1 So, in the above, you'd get the FileNames() and populate the ArrayList from that, then assign the ArrayList to an Object type variable in Dts.Variables, then ForEach over that Object (ArrayList) variable using code something like: http://www.sqlservercentral.com/articles/SSIS/64014/
A: You can use the free SSIS FTP Task++ from eaSkills. It doesn't throw an error if the file or files don't exist, it supports wildcards, and it gives you the option to download and delete if you need to do so.
Here's the link to the feature page: http://www.easkills.com/ssis/ftptask A: This is another solution that is working for me, using built-in stuff and so without manually re-writing the FTP logic: 1) Create a variable in your package called FTP_Error 2) Click your FTP Task, then click "Event Handlers" tab 3) Click within the page to create an event handler for "FTP Task/OnError" - this will fire whenever there is trouble with the FTP 4) From the toolbox, drag in a Script Task item, and double-click to open that up 5) In the first pop-up, ReadOnlyVariables - add System::ErrorCode, System::ErrorDescription 6) In the first pop-up, ReadWriteVariables - add your User::FTP_Error variable 7) Edit Script 8) In the script set your FTP_Error variable to hold the ReadOnlyVariables we had above: Dts.Variables["FTP_Error"].Value = "ErrorCode:" + Dts.Variables["ErrorCode"].Value.ToString() + ", ErrorDescription=" + Dts.Variables["ErrorDescription"].Value.ToString(); 9) Save and close script 10) Hit "OK" to script task 11) Go back to "Control Flow" tab 12) From the FTP task, OnError go to a new Script task, and edit that 13) ReadOnlyVariables: User::FTP_Error from before 14) Now, when there are no files found on the FTP, the error code is -1073573501 (you can find the error code reference list here: http://msdn.microsoft.com/en-us/library/ms345164.aspx) 15) In your script, put in the logic to do what you want - if you find a "no files found" code, then maybe you say task successful. If not, then task failed. And your normal flow can handle this as you wish: if (Dts.Variables["FTP_Error"].Value.ToString().Contains("-1073573501")) { // file not found - not a problem Dts.TaskResult = (int)ScriptResults.Success; } else { // some other error - raise alarm! Dts.TaskResult = (int)ScriptResults.Failure; } And from there your Succeeded/Failed flow will do what you want to do with it. A: An alternative is to use this FTP File Enumerator
{ "language": "en", "url": "https://stackoverflow.com/questions/140850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Nested class in .net I have a nested class. I want to access both the outer class and the nested class from another class. How can I access both classes' properties and methods, given that I only want to create an object of one of the classes? Please provide a code snippet.
A: In my opinion, the reason to nest a class is that it will only ever be used by its parent class. If you need to access the inner class, you should revisit using the nested class in the first place.
A: You can learn about Nested Types here.
A: Your question isn't very clear. However, I guess you're from a Java background. C#'s and VB's nested classes behave much differently from Java's nested classes. In fact, they behave much like Java's static nested classes, i.e. they don't belong to an instance of the outer class. Therefore, instances of the inner class can't access non-static fields in the outer class (at least not without being given an instance explicitly).
A: Although I would never recommend nested public classes, here's some code:
public class Foo
{
    public Foo()
    { }

    private Bar m_Bar = new Bar();

    public Bar TheBar
    {
        get { return m_Bar; }
    }

    public class Bar
    {
        ...
    }
}
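A short usage sketch against the illustrative Foo/Bar pair above; only the outer object is constructed directly, and the nested instance is reached through it:
Foo foo = new Foo();
Foo.Bar bar = foo.TheBar; // the nested type is named through the outer class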
{ "language": "en", "url": "https://stackoverflow.com/questions/140852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Can you make a site with ASP.NET MVC Framework using .NET 2.0? Is it possible to make a site with ASP.NET MVC Framework using .NET 2.0? I am limited to using .NET 2.0 (we use VS 2008, but we have to use the 2.0 Framework) and I really want to try out the MVC Framework.
A: Scott Hanselman described a way to make it work, with some caveats, in his blog: Deploying ASP.NET MVC on ASP.NET 2.0
A: It can be done using Visual Studio 2008, but it can cause headaches...
* Create an ASP.NET MVC Web Application
* Set the project's Target Framework to 2.0 in Project Properties
* Add a reference to System.Web.MVC (click through warning messages)
* Add any additional references you may need (System.Web.Routing, System.Web.Abstractions), again clicking through any warning messages
* Start coding!
* Not everything you try will work; if you see errors like this on deployment, it means that whatever you are doing isn't supported by the 2.0 framework:
  * "The type or namespace name 'var' could not be found (are you missing a using directive or an assembly reference?)"
* Configure your IIS to support MVC routes and extensions
* Copy "C:\windows\assembly\GAC_MSIL\System.Core" from the .NET 3.5 development framework to the /bin folder of the IIS server running .NET 2.0 SP1.
Much of this can be found in a lot more detail on Scott Hanselman's blog
{ "language": "en", "url": "https://stackoverflow.com/questions/140858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is there ever any reason not to take the advice of the Database Engine Tuning Advisor? I'm on a team maintaining a .Net web app with a SQL Server 2005 back end. The system's been running a little slow in places lately, so after doing all the tuning kind of stuff we could think of (adding indexes, cleaning up really badly written stored procedures, etc.) I ran a typical workload through the Tuning Advisor - and it spit out a huge list of additional Indexes and Statistics to create. My initial reaction was to say "sure, you got it, SQL Server," but is there ever any reason NOT to just do what the Advisor says?
A: There are 2 problems with indexes.
* Indexes take space. Space is cheap, so this is usually not a strong argument against indexes. However, it is worth considering.
* Indexes will slow down certain queries (like insert, update, and delete).
Creating proper indexes is a balancing act. If you don't have enough, your system will be slow. If you have too many, your system will be slow. For systems that perform more reads than writes, you can get away with adding more indexes.
A: Sql Server does a good job of managing statistics if you have enabled auto-create and auto-update of statistics (you should), so ignore the statistics recommendations. Take the indexes and analyze them to make sure you can handle the extra space requirements, and also make sure they aren't duplicating some other index that has similar columns. You can often consolidate indexes by just adding a column or two (paying attention to the order of columns) or by adding an included column (covering index). If the index is on a table with heavy OLTP use, you want to limit your indexes to maybe 5-10. For tables that rarely get inserts or updates (less than several per second), space limitations should be the only concern. The tuning wizard recommendations can be a great learning tool. Take the indexes, go back to the query plan and try to figure out why exactly the recommendation was made.
A: I recommend this SQL script; it uses SQL 2005's built-in performance counters to suggest indexes:
SELECT
    migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) AS improvement_measure,
    'CREATE INDEX [missing_index_' + CONVERT (varchar, mig.index_group_handle) + '_' + CONVERT (varchar, mid.index_handle)
        + '_' + LEFT (PARSENAME(mid.statement, 1), 32) + ']'
        + ' ON ' + mid.statement
        + ' (' + ISNULL (mid.equality_columns, '')
        + CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END
        + ISNULL (mid.inequality_columns, '') + ')'
        + ISNULL (' INCLUDE (' + mid.included_columns + ')', '') AS create_index_statement,
    migs.*,
    mid.database_id,
    mid.[object_id]
FROM sys.dm_db_missing_index_groups mig
INNER JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle
WHERE migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) > 10
ORDER BY migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC
A: It has suggested indexes on computed columns, which then prevent users from inserting rows into the table. So watch out for stuff like that.
A: I think the advice is helpful, but in my opinion it only gives you things to try. You have to actually do some benchmarking and see what helps and what doesn't. This can be very time consuming but is probably worth it for the reasons G Mastros pointed out. Database optimization is not a straightforward science but rather a matter of striking the right balance for your exact situation.
A: Be careful of its DROP INDEX recommendations - if your trace capture missed some scheduled or rare queries, they could suffer the next time they're run.
A: Also note that database tweaking will depend heavily on your usage patterns, which might change a lot between prototyping, development and production. So my best recommendation is to tweak your heart away now, while you have the time, and learn what effects your changes may have. It'll definitely serve you later.
A: Like all advice, take it with a grain of salt and use it to reach your own conclusion.
{ "language": "en", "url": "https://stackoverflow.com/questions/140869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Windows Installer - upgrade resuming after reboot I have a Windows Installer MSI package that installs drivers which sometimes require a restart before they can be upgraded; the drivers are installed by a deferred custom action after WriteRegistryValues. When a reboot IS needed there's a rollback and the user is told to reboot and run the install again. In the InstallExecuteSequence the RemoveExistingProducts action is between InstallValidate and InstallInitialize, so when an upgrade signals that it needs a restart, the previous package has been removed. So when a reboot's needed, after the rollback and the error message saying "reboot and re-run this" our software is no longer installed. If the user reboots and repeats the install things work fine. I need to automate rebooting and resuming the install, so the user doesn't have to actually do anything (apart from agreeing to the restart of course.) A command written into the registry's RunOnce key can run the install again after the reboot, but I'm thinking it will be tricky to condition ForceReboot on what happens in the deferred custom action that does the driver install. Also maybe tricky to decide what to do in the resumed install. Advice on best practices or pointers to potential problems will be very welcome.
A: RemoveExistingProducts before InstallInitialize or after InstallFinalize will not put the action in the audit script of the new product, so as you said the old product is removed before the upgrade is done. You might therefore want to try putting the RemoveExistingProducts execution between InstallInitialize and InstallFinalize; that way the removal is in the audit-scripted part, so it will track the reboot and resume. Have a look at the system reboot properties here
{ "language": "en", "url": "https://stackoverflow.com/questions/140887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: IE Script debugging pop up In order to debug an asp.net web app I have to have IE Script debugging enabled. Unfortunately, in the past week or so google's analytics javascript has developed a problem, so that when I browse to a site that has google analytics I receive the little pop up "A runtime error has occurred. Do you wish to debug?" Yes, even stackoverflow is affected. This is a tremendous pain in the butt. Is there any way to keep IE Script debugging enabled to keep the .net debugger happy, but get rid of the pop up?
A: I would suggest using IE for debugging purposes only, and Firefox for darn near everything else. Your life will benefit from this.
A: Script debugging is not required to debug ASP.NET pages; it is only required if you wish to step through script errors, and you can disable this option in IE and still debug your server code fine. Unfortunately it is an IE-wide option, so if it's on it's on and if it's off it's off. But disabling the option will not prevent you from debugging your ASP.NET applications.
A: Are you debugging javascript behaviour specifically in IE? If not, get Firefox and Firebug. Javascript debugging in IE is painful, and should only be resorted to in situations where you're trying to fix IE idiosyncrasies. You don't need script debugging enabled in IE if you're just debugging web forms.
A: You could write some code to make a change to the following key in the Registry: HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main\Disable Script Debugger
Depending on how you have your project, you might be able to tie it to the actual build (via a macro), or if anything just put it somewhere in your Application_Start event. Returning back to normal might be a little more difficult. Potentially you could watch for the iexplore.exe process and when it dies you revert it back. Hope that helps.
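A minimal C# sketch of flipping that registry value (the "yes"/"no" string data is an assumption based on how IE stores this setting; verify it against your own registry before automating the change):
using Microsoft.Win32;

class ScriptDebuggerToggle
{
    static void SetScriptDebuggerDisabled(bool disabled)
    {
        // opens the per-user IE settings key for writing
        using (RegistryKey key = Registry.CurrentUser.OpenSubKey(
            @"Software\Microsoft\Internet Explorer\Main", true))
        {
            if (key != null)
            {
                key.SetValue("Disable Script Debugger", disabled ? "yes" : "no");
            }
        }
    }
}
You could call SetScriptDebuggerDisabled(false) in Application_Start and SetScriptDebuggerDisabled(true) when reverting, as the answer suggests.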
{ "language": "en", "url": "https://stackoverflow.com/questions/140899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Querying XML like SQL? Is there any framework for querying XML with SQL-like syntax? I seriously tire of iterating through node lists. Or is this just wishful thinking (if not idiotic) and certainly not possible since XML isn't a relational database?
A: .Net Framework provides LINQ to do this, or you can use the .Net System.Data namespace to load data from XML files. You can even create queries that have joins among the tables, etc. For example, System.Data.DataTable provides a ReadXml() method.
A: You could try LINQ to XML, but it's not language agnostic.
A: XQuery and XPath... XQuery is more what you are looking for if a SQL-like structure is desirable.
A: XQuery is a functional language that is closest to SQL. XPath is a notation for locating a node within the document that is used as part of XSLT and XQuery. XML databases such as MarkLogic serve as XQuery engines for XML data, much as relational databases serve as SQL engines for relational data.
A: That depends on the problem you are solving. If the XML file is pretty large, sometimes it's a necessity to use something like SAX parsers to traverse the file node by node, or you will get an OutOfMemoryException or even run out of virtual memory on your computer. But if the expected size of the XML file is relatively small, you can simply use something like LINQ; also see my answer - here I tried to explain how to make traversing through nodes much easier with constructions like yield return.
A: SQL Server 2005 supports XML DML on its native xml data type.
A: XQuery is certainly the way forward. This is what is used by XML databases like eXist and MarkLogic. In the Java world there are several solutions for running XQuery on flat files, most notably Saxon. For .NET, there is not so much available. Microsoft did have an XQuery library, although this was pulled from .NET 2 and has never resurfaced. XQSharp is a native .NET alternative, although currently only a command line version has been released.
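For a concrete taste of the LINQ to XML option mentioned above, here is a small sketch (the element and attribute names are made up for the example):
using System;
using System.Linq;
using System.Xml.Linq;

class XmlQueryDemo
{
    static void Main()
    {
        XDocument doc = XDocument.Parse(
            "<orders><order id='1' total='9.50'/><order id='2' total='120.00'/></orders>");

        // roughly: SELECT id FROM orders WHERE total > 100
        var bigOrders = from o in doc.Descendants("order")
                        where (decimal)o.Attribute("total") > 100m
                        select (string)o.Attribute("id");

        foreach (string id in bigOrders)
            Console.WriteLine(id); // prints 2
    }
}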
{ "language": "en", "url": "https://stackoverflow.com/questions/140908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How to implement sorting in a listview while maintaining neighboring columns? I have a ListView that has several columns. One of them contains "Names", the other contains "Amount". I would like to allow the user to click the Names column in the listview and have it sort alphabetically, and also allow the user to click the Amount column and have it sort numerically (higher/lower - lower/higher). What is the best way to implement this?
A: It is partially implemented but not completely. Microsoft has a description of how to approach this problem at http://support.microsoft.com/kb/319401.
A: To solve this, I wrote my own ListViewItemComparer which implemented the IComparer interface. Then, based on whether the column was numeric or string, I did the appropriate comparison.
A: ObjectListView (an open source wrapper around the .NET WinForms ListView) does exactly this for you automatically.
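For the ListViewItemComparer approach mentioned above, a minimal sketch (the numeric-column test here is an assumption; adapt it to however you distinguish your Amount column):
using System.Collections;
using System.Windows.Forms;

class ListViewItemComparer : IComparer
{
    private readonly int column;
    private readonly bool numeric;

    public ListViewItemComparer(int column, bool numeric)
    {
        this.column = column;
        this.numeric = numeric;
    }

    public int Compare(object x, object y)
    {
        string a = ((ListViewItem)x).SubItems[column].Text;
        string b = ((ListViewItem)y).SubItems[column].Text;
        if (numeric)
        {
            decimal da, db;
            if (decimal.TryParse(a, out da) && decimal.TryParse(b, out db))
                return da.CompareTo(db); // numeric comparison for the Amount column
        }
        return string.Compare(a, b);     // alphabetic comparison for the Names column
    }
}
Hooked up in the ColumnClick handler, it would look something like: listView1.ListViewItemSorter = new ListViewItemComparer(e.Column, e.Column == amountColumnIndex); listView1.Sort(); (amountColumnIndex being whatever index your Amount column has).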
{ "language": "en", "url": "https://stackoverflow.com/questions/140918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Has anyone used Ruby/Rails with a Sales Logix database? Has anyone used Ruby/Rails with a Sales Logix database?
A: This page says SalesLogix runs on MS SQL Server or Oracle, both of which can connect with Rails through ActiveRecord. Here is a page that details setting up for MS SQL (which is what it is more likely to be running on).
{ "language": "en", "url": "https://stackoverflow.com/questions/140922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Normalize newlines in C# I have a data stream that may contain \r, \n, \r\n, \n\r or any combination of them. Is there a simple way to normalize the data to make all of them simply become \r\n pairs to make display more consistent? So something that would yield this kind of translation table:
\r --> \r\n
\n --> \r\n
\n\n --> \r\n\r\n
\n\r --> \r\n
\r\n --> \r\n
\r\n\n --> \r\n\r\n
A: I believe this will do what you need:
using System.Text.RegularExpressions;
// ...
string normalized = Regex.Replace(originalString, @"\r\n|\n\r|\n|\r", "\r\n");
I'm not 100% sure on the exact syntax, and I don't have a .Net compiler handy to check. I wrote it in perl, and converted it into (hopefully correct) C#. The only real trick is to match "\r\n" and "\n\r" first. To apply it to an entire stream, just run it on chunks of input. (You could do this with a stream wrapper if you want.)
The original perl:
$str =~ s/\r\n|\n\r|\n|\r/\r\n/g;
The test results:
[bash$] ./test.pl
\r -> \r\n
\n -> \r\n
\n\n -> \r\n\r\n
\n\r -> \r\n
\r\n -> \r\n
\r\n\n -> \r\n\r\n
Update: Now converts \n\r to \r\n, though I wouldn't call that normalization.
A: Since .NET 6 it is supported out of the box:
string normalized = originalString.ReplaceLineEndings(); // uses Environment.NewLine
string normalized = originalString.ReplaceLineEndings("\r\n");
see https://github.com/dotnet/runtime/blob/a879885975b5498db559729811304888463c15ed/src/libraries/System.Private.CoreLib/src/System/String.Manipulation.cs#L1183
A: A Regex would help.. you could do something roughly like this: match (\r\n|\n\r|\r|\n) and replace with \r\n. This regex matched every left-hand entry of the table posted, so a replace should normalize them all.
A: Normalise breaks, so that they are all \r\n:
var normalisedString = sourceString
    .Replace("\r\n", "\n")
    .Replace("\n\r", "\n")
    .Replace("\r", "\n")
    .Replace("\n", "\r\n");
A: It's a two step process. First you convert all the combinations of \r and \n into a single one, say \r. Then you convert all the \r into your target \r\n:
normalized = original.Replace("\r\n", "\r").
             Replace("\n\r", "\r").
             Replace("\n", "\r").
             Replace("\r", "\r\n"); // last step
A: You're thinking too complicated. Ignore every \r and turn every \n into an \r\n. In pseudo-C#:
char[] chunk = new char[X];
StringBuilder output = new StringBuilder();
buffer.Read(chunk);
foreach (char c in chunk)
{
    switch (c)
    {
        case '\r': break; // ignore
        case '\n': output.Append("\r\n"); break;
        default: output.Append(c); break;
    }
}
EDIT: \r alone is no line-terminator so I doubt you really want to expand \r to \r\n.
A: I'm with Jamie Zawinski on RegEx: "Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems"
For those of us who prefer readability:
* Step 1: Replace \r\n by \n; replace \n\r by \n (if you really want this, some posters seem to think not); replace \r by \n
* Step 2: Replace \n by Environment.NewLine or \r\n or whatever.
A: This is the answer to the question. The given solution replaces a string according to the given translation table. It does not use an expensive regex function. It also does not use multiple replacement functions that each individually loop over the data with several checks etc. So the search is done directly in one for loop. A loop is also used within the Array.Copy function each time the capacity of the result array has to be increased. Those are all the loops.
In some cases, a larger page size might be more efficient.
public static string NormalizeNewLine(this string val)
{
    if (string.IsNullOrEmpty(val))
        return val;

    const int page = 6;
    int a = page;
    int j = 0;
    int len = val.Length;
    char[] res = new char[len];

    for (int i = 0; i < len; i++)
    {
        char ch = val[i];
        if (ch == '\r')
        {
            int ni = i + 1;
            if (ni < len && val[ni] == '\n')
            {
                res[j++] = '\r';
                res[j++] = '\n';
                i++;
            }
            else
            {
                if (a == page) // ensure capacity
                {
                    char[] nres = new char[res.Length + page];
                    Array.Copy(res, 0, nres, 0, res.Length);
                    res = nres;
                    a = 0;
                }
                res[j++] = '\r';
                res[j++] = '\n';
                a++;
            }
        }
        else if (ch == '\n')
        {
            int ni = i + 1;
            if (ni < len && val[ni] == '\r')
            {
                res[j++] = '\r';
                res[j++] = '\n';
                i++;
            }
            else
            {
                if (a == page) // ensure capacity
                {
                    char[] nres = new char[res.Length + page];
                    Array.Copy(res, 0, nres, 0, res.Length);
                    res = nres;
                    a = 0;
                }
                res[j++] = '\r';
                res[j++] = '\n';
                a++;
            }
        }
        else
        {
            res[j++] = ch;
        }
    }
    return new string(res, 0, j);
}
The translation table really appeals to me, even if '\n\r' is not actually used on common platforms. Who would use two types of linebreaks to indicate two linebreaks? If you want to know that, then you need to look beforehand at whether \n and \r are both used separately in the same document.
{ "language": "en", "url": "https://stackoverflow.com/questions/140926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: Partial class definition on C++? Does anyone know if it is possible to have partial class definitions in C++? Something like:
file1.h:
class Test
{
public:
    int test1();
};
file2.h:
class Test
{
public:
    int test2();
};
For me it seems quite useful for defining multi-platform classes that have common functions between them that are platform-independent, because inheritance is a cost to pay that is non-useful for multi-platform classes. I mean you will never have two multi-platform specialization instances at runtime, only at compile time. Inheritance could be useful to fulfill your public interface needs, but after that it won't add anything useful at runtime, just costs. Also you will have to use an ugly #ifdef to use the class, because you can't make an instance from an abstract class:
class genericTest
{
public:
    int genericMethod();
};
Then let's say for win32:
class win32Test : public genericTest
{
public:
    int win32Method();
};
And maybe:
class macTest : public genericTest
{
public:
    int macMethod();
};
Let's think that both win32Method() and macMethod() call genericMethod(), and you will have to use the class like this:
#ifdef _WIN32
genericTest *test = new win32Test();
#elif MAC
genericTest *test = new macTest();
#endif
test->genericMethod();
Now thinking about it a while, the inheritance was only useful for giving them both a genericMethod() that is dependent on the platform-specific one, but you have the cost of calling two constructors because of that. Also you have ugly #ifdefs scattered around the code. That's why I was looking for partial classes. I could at compile-time define the specific platform-dependent partial end; of course on this silly example I still need an ugly #ifdef inside genericMethod(), but there are other ways to avoid that.
A: #include will work, as that is preprocessor stuff.
class Foo
{
    #include "FooFile_Private.h"
};
//////// FooFile_Private.h:
private:
    void DoSg();
A: This is not possible in C++; it will give you an error about redefining already-defined classes. If you'd like to share behavior, consider inheritance.
A: How about this:
class WindowsFuncs
{
public:
    int f();
    int winf();
};

class MacFuncs
{
public:
    int f();
    int macf();
};

class Funcs
#ifdef Windows
    : public WindowsFuncs
#else
    : public MacFuncs
#endif
{
public:
    Funcs();
    int g();
};
Now Funcs is a class known at compile-time, so no overheads are caused by abstract base classes or whatever.
A: As written, it is not possible, and in some cases it is actually annoying. There was an official proposal to the ISO, with embedded software in mind, in particular to avoid the RAM overhead caused by both inheritance and the pimpl pattern (both approaches require an additional pointer for each object): http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0309r0.pdf
Unfortunately the proposal was rejected.
A: Try inheritance. Specifically:
class AllPlatforms
{
public:
    int common();
};
and then
class PlatformA : public AllPlatforms
{
public:
    int specific();
};
A: You can't partially define classes in C++. Here's a way to get the "polymorphism, where there's only one subclass" effect you're after without overhead and with a bare minimum of #define or code duplication.
It's called simulated dynamic binding:
template <typename T>
class genericTest
{
public:
    void genericMethod()
    {
        // do some generic things
        std::cout << "Could be any platform, I don't know" << std::endl;
        // base class can call a method in the child with static_cast
        (static_cast<T*>(this))->doClassDependentThing();
    }
};

#ifdef _WIN32
typedef Win32Test Test;
#elif MAC
typedef MacTest Test;
#endif
Then off in some other headers you'll have:
class Win32Test : public genericTest<Win32Test>
{
public:
    void win32Method()
    {
        // windows-specific stuff:
        std::cout << "I'm in windows" << std::endl;
        // we can call a method in the base class
        genericMethod();
        // more windows-specific stuff...
    }
    void doClassDependentThing()
    {
        std::cout << "Yep, definitely in windows" << std::endl;
    }
};
and
class MacTest : public genericTest<MacTest>
{
public:
    void macMethod()
    {
        // mac-specific stuff:
        std::cout << "I'm in MacOS" << std::endl;
        // we can call a method in the base class
        genericMethod();
        // more mac-specific stuff...
    }
    void doClassDependentThing()
    {
        std::cout << "Yep, definitely in MacOS" << std::endl;
    }
};
This gives you proper polymorphism at compile time. genericTest can non-virtually call doClassDependentThing in a way that gives it the platform version (almost like a virtual method), and when win32Method calls genericMethod it of course gets the base class version. This creates no overhead associated with virtual calls - you get the same performance as if you'd typed out two big classes with no shared code. It may create a non-virtual call overhead at con(de)struction, but if the con(de)structor for genericTest is inlined you should be fine, and that overhead is in any case no worse than having a genericInit method that's called by both platforms. Client code just creates instances of Test, and can call methods on them which are either in genericTest or in the correct version for the platform. To help with type safety in code which doesn't care about the platform and doesn't want to accidentally make use of platform-specific calls, you could additionally do:
#ifdef _WIN32
typedef genericTest<Win32Test> BaseTest;
#elif MAC
typedef genericTest<MacTest> BaseTest;
#endif
You have to be a bit careful using BaseTest, but not much more so than is always the case with base classes in C++. For instance, don't slice it with an ill-judged pass-by-value. And don't instantiate it directly, because if you do and call a method that ends up attempting a "fake virtual" call, you're in trouble. The latter can be enforced by ensuring that all of genericTest's constructors are protected.
A: As written, it is not possible. You may want to look into namespaces. You can add a function to a namespace in another file. The problem with a class is that each .cpp needs to see the full layout of the class.
A: Nope. But you may want to look up a technique called "Policy Classes". Basically, you make micro-classes (that aren't useful on their own) then glue them together at some later point.
A: Either use inheritance, as Jamie said, or #ifdef to make different parts compile on different platforms.
A: Since headers are just textually inserted, one of them could omit the "class Test {" and "}" and be #included in the middle of the other. I've actually seen this in production code, albeit Delphi not C++. It particularly annoyed me because it broke the IDE's code navigation features.
A: For me it seems quite useful for defining multi-platform classes that have common functions between them that are platform-independent.
Except developers have been doing this for decades without this 'feature'. I believe partial was created because Microsoft has had, for decades also, a bad habit of generating code and handing it off to developers to develop and maintain. Generated code is often a maintenance nightmare. What happens to that entire MFC-generated framework when you need to bump your MFC version? Or how do you port all that code in *.designer.cs files when you upgrade Visual Studio? Most other platforms rely more heavily on generating configuration files instead, which the user/developer can modify. Those have a more limited vocabulary and are not prone to be mixed with unrelated code. The configuration files can even be inserted in the binary as a resource file if deemed necessary. I have never seen 'partial' used in a place where inheritance or a configuration resource file wouldn't have done a better job.
A: A dirty but practical way is using the #include preprocessor directive:
Test.h:
#ifndef TEST_H
#define TEST_H
class Test
{
public:
    Test(void);
    virtual ~Test(void);
#include "Test_Partial_Win32.h"
#include "Test_Partial_OSX.h"
};
#endif // !TEST_H
Test_Partial_OSX.h:
// This file should be included in Test.h only.
#ifdef MAC
public:
    int macMethod();
#endif // MAC
Test_Partial_WIN32.h:
// This file should be included in Test.h only.
#ifdef _WIN32
public:
    int win32Method();
#endif // _WIN32
Test.cpp:
// Implement common member functions of class Test in this file.
#include "stdafx.h"
#include "Test.h"
Test::Test(void)
{
}
Test::~Test(void)
{
}
Test_Partial_OSX.cpp:
// Implement OSX platform specific functions of class Test in this file.
#include "stdafx.h"
#include "Test.h"
#ifdef MAC
int Test::macMethod()
{
    return 0;
}
#endif // MAC
Test_Partial_WIN32.cpp:
// Implement WIN32 platform specific functions of class Test in this file.
#include "stdafx.h"
#include "Test.h"
#ifdef _WIN32
int Test::win32Method()
{
    return 0;
}
#endif // _WIN32
A: Suppose that I have MyClass_Part1.hpp, MyClass_Part2.hpp and MyClass_Part3.hpp. Theoretically someone could develop a GUI tool that reads all these hpp files and creates the following hpp file:
MyClass.hpp
class MyClass
{
#include <MyClass_Part1.hpp>
#include <MyClass_Part2.hpp>
#include <MyClass_Part3.hpp>
};
The user can theoretically tell the GUI tool where each input hpp file is and where to create the output hpp file. Of course the developer can theoretically program the GUI tool to work with any varying number of hpp files (not necessarily 3 only) whose prefix can be any arbitrary string (not necessarily "MyClass" only). Just don't forget to #include <MyClass.hpp> to use the class "MyClass" in your projects.
A: Or you could try PIMPL. Common header file:
class Test
{
public:
    ...
    void common();
    ...
private:
    class TestImpl;
    TestImpl* m_customImpl;
};
Then create the cpp files doing the custom implementations that are platform specific.
A: Declaring a class body twice will likely generate a type redefinition error. If you're looking for a workaround, I'd suggest #ifdef'ing, or using an Abstract Base Class to hide platform specific details.
A: You can get something like partial classes using template specialization and partial specialization. Before you invest too much time, check your compiler's support for these. Older compilers like MSC++ 6.0 didn't support partial specialization.
Partial classes is strange construct that makes it very difficult to maintain afterwards. It is difficult to locate on which partial class each member is declared and redefinition or even reimplementation of features are hard to avoid. Do you want to extend the std::vector, you have to inherit from it. This is because of several reasons. First of all you change the responsibility of the class and (properly?) its class invariants. Secondly, from a security point of view this should be avoided. Consider a class that handles user authentication... partial class UserAuthentication { private string user; private string password; public bool signon(string usr, string pwd); } partial class UserAuthentication { private string getPassword() { return password; } } A lot of other reasons could be mentioned... A: Let platform independent and platform dependent classes/functions be each-others friend classes/functions. :) And their separate name identifiers permit finer control over instantiation, so coupling is looser. Partial breaks encapsulation foundation of OO far too absolutely, whereas the requisite friend declarations barely relax it just enough to facilitate multi-paradigm Separation of Concerns like Platform Specific aspects from Domain-Specific platform independent ones. A: I've been doing something similar in my rendering engine. I have a templated IResource interface class from which a variety of resources inherit (stripped down for brevity): template <typename TResource, typename TParams, typename TKey> class IResource { public: virtual TKey GetKey() const = 0; protected: static shared_ptr<TResource> Create(const TParams& params) { return ResourceManager::GetInstance().Load(params); } virtual Status Initialize(const TParams& params, const TKey key, shared_ptr<Viewer> pViewer) = 0; }; The Create static function calls back to a templated ResourceManager class that is responsible for loading, unloading, and storing instances of the type of resource it manages with unique keys, ensuring duplicate calls are simply retrieved from the store, rather than reloaded as separate resources. template <typename TResource, typename TParams, typename TKey> class TResourceManager { sptr<TResource> Load(const TParams& params) { ... } }; Concrete resource classes inherit from IResource utilizing the CRTP. ResourceManagers specialized to each resource type are declared as friends to those classes, so that the ResourceManager's Load function can call the concrete resource's Initialize function. One such resource is a texture class, which further uses a pImpl idiom to hide its privates: class Texture2D : public IResource<Texture2D , Params::Texture2D , Key::Texture2D > { typedef TResourceManager<Texture2D , Params::Texture2D , Key::Texture2D > ResourceManager; friend class ResourceManager; public: virtual Key::Texture2D GetKey() const override final; void GetWidth() const; private: virtual Status Initialize(const Params::Texture2D & params, const Key::Texture2D key, shared_ptr<Texture2D > pTexture) override final; struct Impl; unique_ptr<Impl> m; }; Much of the implementation of our texture class is platform-independent (such as the GetWidth function if it just returns an int stored in the Impl). However, depending on what graphics API we're targeting (e.g. Direct3D11 vs. OpenGL 4.3), some of the implementation details may differ. 
One solution could be to inherit from IResource an intermediary Texture2D class that defines the extended public interface for all textures, and then inherit from that a D3DTexture2D and an OGLTexture2D class. The first problem with this solution is that it requires users of your API to be constantly mindful of which graphics API they're targeting (they could call Create on both child classes). This could be resolved by restricting Create to the intermediary Texture2D class, which uses maybe an #ifdef switch to create either a D3D or an OGL child object. But then there is still the second problem with this solution, which is that the platform-independent code would be duplicated across both children, causing extra maintenance effort. You could attempt to solve this problem by moving the platform-independent code into the intermediary class, but what happens if some of the member data is used by both platform-specific and platform-independent code? The D3D/OGL children won't be able to access those data members in the intermediary's Impl, so you'd have to move them out of the Impl and into the header, along with any dependencies they carry, exposing anyone who includes your header to all that crap they don't need to know about. APIs should be easy to use right and hard to use wrong. Part of being easy to use right is restricting the user's exposure to only the parts of the API they should be using. This solution opens it up to be easily used wrong and adds maintenance overhead. Users should only have to care about the graphics API they're targeting in one spot, not everywhere they use your API, and they shouldn't be exposed to your internal dependencies. This situation screams for partial classes, but they are not available in C++. So instead, you might simply define the Impl structure in separate header files, one for D3D and one for OGL, put an #ifdef switch at the top of the Texture2D.cpp file, and define the rest of the public interface universally. This way, the public interface has access to the private data it needs, the only duplicate code is data member declarations (construction can still be done in the Texture2D constructor that creates the Impl), your private dependencies stay private, and users don't have to care about anything except using the limited set of calls in the exposed API surface:
// D3DTexture2DImpl.h
#include "Texture2D.h"
struct Texture2D::Impl
{
    /* insert D3D-specific stuff here */
};

// OGLTexture2DImpl.h
#include "Texture2D.h"
struct Texture2D::Impl
{
    /* insert OGL-specific stuff here */
};

// Texture2D.cpp
#include "Texture2D.h"
#ifdef USING_D3D
#include "D3DTexture2DImpl.h"
#else
#include "OGLTexture2DImpl.h"
#endif

Key::Texture2D Texture2D::GetKey() const
{
    return m->key;
}
// etc...
{ "language": "en", "url": "https://stackoverflow.com/questions/140935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41" }
Q: Has anyone used C# with a Sales Logix database? Has anyone used C# with a Sales Logix database? A: Yes. I rewrote a History tab that worked better and faster than the original Best/Sage built-in tab version using datagridviews in C#. It was a .NET plugin and I used Ryan Farley's instructions for getting it to work in SLX 6.x. In SLX 7.x you can use C# and VB.NET natively for building plug in components. A: Yes. I have. (Not the most interesting answer on SOB today, but that's what the question asked...)
{ "language": "en", "url": "https://stackoverflow.com/questions/140936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-4" }
Q: Is there a way to make Strongly Typed Resource files public (as opposed to internal)? Here's what I'd like to do: I want to create a library project that contains my Resource files (ie, UI Labels and whatnot). I'd like to then use the resource library both in my UI and in my Tests. (Ie, basically have a common place for my resources that I reference from multiple projects.) Unfortunately, because the StronglyTypedResourceBuilder (the .Net class which generates the code for Resources) makes resource files internal by default, I can't reference my strongly typed resources in the library from another project (ie, my UI or tests) without jumping through hoops (ie, something similar to what is described here, or writing a public wrapper class/function). Unfortunately, both those solutions remove my ability to keep the references strongly-typed. Has anyone found a straight-forward way to create strongly typed .Net resources that can be referenced from multiple projects? I'd prefer to avoid having to use a build event in order to accomplish this (ie, to do something like replace all instances of 'internal' with 'public'), but that's basically my fall-back plan if I can't find an answer.
A: Not sure which version of Visual Studio you are using, so I will put steps for either one:
VS 2008 - When you open the resx file in design view, there is an option at the top beside Add Resource and Remove Resource, called Access Modifier; it is a drop down where you can change the generated code from internal to public.
VS 2005 - You don't have the option to generate the code like in VS 2008; it was a feature that was added because of this headache. There are workarounds though. You could use a third party generator like this tool, or you could use the InternalsVisibleTo attribute in your AssemblyInfo.cs to add the projects that will have access to the internal classes of your resource library.
A: Visual Studio 2008 allows you to select whether the generated resource class should be internal or public. There is also the ResXFileCodeGeneratorEx, which should do that for Visual Studio 2005.
A: From the dataset designer, with the properties window visible, there is a "Modifier" property. For your datasets, it is likely saying internal. I don't know if there is a setting to default all new datasets to public.
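For the InternalsVisibleTo route mentioned in the first answer, the attributes go in the resource library's AssemblyInfo.cs; a small sketch (the assembly names are placeholders for your UI and test projects):
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyApp.UI")]
[assembly: InternalsVisibleTo("MyApp.Tests")]
// if the consuming assemblies are strong-named, the full public key must be appended:
// [assembly: InternalsVisibleTo("MyApp.Tests, PublicKey=0024000004...")]
This keeps the generated resource classes internal while still letting the listed projects use them in a strongly-typed way.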
{ "language": "en", "url": "https://stackoverflow.com/questions/140937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Faster Directory Walk with VB6 Query: Cache and Ram issues? Below is a rather simple function which counts how many files are on a machine. Called on "C:\", it takes about 5 seconds to run - unless I haven't run it in a while or first run a ram-clearing program, in which case it takes 60 seconds or more. I wouldn't have thought it could be caching since I'm doing a new scan each time (i.e. starting a new instance of the program, since all it does is this scan), but perhaps it relates to memory allocation? Any ideas on how to make that fast run happen every time, or on why it can't be done? Other programs (e.g. SpaceMonger) manage to get a total count of files in 10s even when I clear my ram or wait a long time between runs. So, there is definitely a way to do this, though not necessarily in VB. (The numeric labels in the code are kept because the error handler reports them via Erl.)
Private Function countFiles(Name As String) As Long
    On Error GoTo ErrorHandler
    DoEvents
    Const CurMthd = "countFiles"
    Dim retval As Long
13  Dim FindData As win.WIN32_FIND_DATA
14  Dim SearchPath As String
15  Dim FileName As String
17  Dim SearchHandle As Long
    If Right(Name, 1) <> "\" Then Name = Name & "\"
19  SearchPath = Name & "*.*"
20  SearchHandle = win.FindFirstFile(SearchPath, FindData)
    Do
        DoEvents
        ' g_Cancel = True
        If g_Cancel Then
            countFiles = retval
            Exit Function
        End If
22      If SearchHandle = win.INVALID_HANDLE_VALUE Or SearchHandle = ERROR_NO_MORE_FILES Then Exit Do
23      FileName = dsMain.RetainedStrFromPtrA(VarPtr(FindData.cFileName(0)))
24      If AscW(FileName) <> 46 Then
            If (FindData.dwFileAttributes And win.FILE_ATTRIBUTE_DIRECTORY) Then
                retval = retval + countFiles(Name & FileName)
            Else
                retval = retval + 1
            End If
28      End If
29  Loop Until win.FindNextFile(SearchHandle, FindData) = 0
    win.FindClose SearchHandle
    countFiles = retval
    Exit Function
ErrorHandler:
    Debug.Print "Oops: " & Erl & ":" & Err.Description
    Resume Next
End Function
A: The operating system itself caches data read from disk. This is completely outside your program, and you don't really have any control over it. Thus, when you run your "ram clearing" program it clears out these caches. This is why those "ram clearing" programs are generally completely useless - as you can see, by emptying the cache it makes your program run slower.
{ "language": "en", "url": "https://stackoverflow.com/questions/140948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Grab and move application windows from a .NET app? Is it possible for a .NET application to grab all the window handles currently open, and move/resize these windows? I'm pretty sure it's possible using P/Invoke, but I was wondering if there were some managed code wrappers for this functionality.
A: Yes, it is possible using the Windows API. This post has information on how to get all window handles from active processes: http://www.c-sharpcorner.com/Forums/ShowMessages.aspx?ThreadID=35545
using System;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        Process[] procs = Process.GetProcesses();
        IntPtr hWnd;
        foreach (Process proc in procs)
        {
            if ((hWnd = proc.MainWindowHandle) != IntPtr.Zero)
            {
                Console.WriteLine("{0} : {1}", proc.ProcessName, hWnd);
            }
        }
    }
}
And then you can move the window using the Windows API: http://www.devasp.net/net/articles/display/689.html
[DllImport("User32.dll", ExactSpelling = true, CharSet = System.Runtime.InteropServices.CharSet.Auto)]
private static extern bool MoveWindow(IntPtr hWnd, int x, int y, int cx, int cy, bool repaint);
...
MoveWindow((IntPtr)handle, (trackBar1.Value * 80), 20, (trackBar1.Value * 80) - 800, 120, true);
Here are the parameters for the MoveWindow function: In order to move the window, we use the MoveWindow function, which takes the window handle, the co-ordinates for the top corner, as well as the desired width and height of the window, based on the screen co-ordinates. The MoveWindow function is defined as:
MoveWindow(HWND hWnd, int nX, int nY, int nWidth, int nHeight, BOOL bRepaint);
The bRepaint flag determines whether the client area should be invalidated, causing a WM_PAINT message to be sent, allowing the client area to be repainted. As an aside, the screen co-ordinates can be obtained using a call similar to GetClientRect(GetDesktopWindow(), &rcDesktop), with rcDesktop being a variable of type RECT, passed by reference. (http://windows-programming.suite101.com/article.cfm/client_area_size_with_movewindow)
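Putting the two snippets above together into one runnable sketch (the target process name and coordinates are arbitrary choices for the example):
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

class WindowMover
{
    [DllImport("user32.dll")]
    static extern bool MoveWindow(IntPtr hWnd, int x, int y, int cx, int cy, bool repaint);

    static void Main()
    {
        foreach (Process proc in Process.GetProcesses())
        {
            IntPtr hWnd = proc.MainWindowHandle;
            // move/resize Notepad's main window to a fixed spot, as an example
            if (hWnd != IntPtr.Zero && proc.ProcessName == "notepad")
            {
                MoveWindow(hWnd, 100, 100, 800, 600, true);
            }
        }
    }
}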
{ "language": "en", "url": "https://stackoverflow.com/questions/140993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How can I set the text of a WPF Hyperlink via data binding? In WPF, I want to create a hyperlink that navigates to the details of an object, and I want the text of the hyperlink to be the name of the object. Right now, I have this:
<TextBlock><Hyperlink Command="local:MyCommands.ViewDetails" CommandParameter="{Binding}">Object Name</Hyperlink></TextBlock>
But I want "Object Name" to be bound to the actual name of the object. I would like to do something like this:
<TextBlock><Hyperlink Command="local:MyCommands.ViewDetails" CommandParameter="{Binding}" Text="{Binding Path=Name}"/></TextBlock>
However, the Hyperlink class does not have a text or content property that is suitable for data binding (that is, a dependency property). Any ideas?
A: It looks strange, but it works. We do it in about 20 different places in our app. Hyperlink implicitly constructs a <Run/> if you put text in its "content", but in .NET 3.5 <Run/> won't let you bind to it, so you've got to explicitly use a TextBlock.
<TextBlock>
    <Hyperlink Command="local:MyCommands.ViewDetails" CommandParameter="{Binding}">
        <TextBlock Text="{Binding Path=Name}"/>
    </Hyperlink>
</TextBlock>
Update: Note that as of .NET 4.0 the Run.Text property can now be bound:
<Run Text="{Binding Path=Name}" />
A: On Windows Store apps (and Windows Phone 8.1 RT apps) the above example does not work; use HyperlinkButton and bind the Content and NavigateUri properties as usual.
A: This worked for me in a "Page".
<TextBlock>
    <Hyperlink NavigateUri="{Binding Path}">
        <TextBlock Text="{Binding Path=Path}" />
    </Hyperlink>
</TextBlock>
{ "language": "en", "url": "https://stackoverflow.com/questions/140996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "135" }
Q: Javascript error: [elementname] has no properties I'm doing some maintenance coding on a webapp and I am getting a javascript error of the form: "[elementname] has no properties". Part of the code is being generated on the fly with an AJAX call that changes innerHTML for part of the page; after this is finished I need to copy a piece of data from a hidden input field to a visible input field. So we have the destination field:
<input id="dest" name="dest" value="0">
And the source field:
<input id="source" name="source" value="1">
Now when the ajax runs it overwrites the innerHTML of the div that source is in, so the source field now reads:
<input id="source" name="source" value="2">
Ok, after the javascript line that copies the ajax data to innerHTML, the next line is:
document.getElementById('dest').value = document.getElementById('source').value;
I get the following error: Error: document.getElementById("source") has no properties (I also tried document.formname.source and document.formname.dest and had the same problem). What am I missing?
Note1: The page is fully loaded and the element exists. The ajax call only happens after a user action and replaces the html section that the element is in.
Note2: As for not using innerHTML, this is how the codebase was given to me, and in order to remove it I would need to rewrite all the ajax calls, which is not in the scope of the current maintenance cycle.
Note3: The innerHTML is updated with the new data; a whole table with data and formatting is being copied. I am trying to add a boolean to the end of this big chunk, instead of creating a whole new ajax call for one boolean. It looks like that is what I will have to do... as my hack-on-the-end-then-copy method is not working.
Extra pair of eyes FTW. Yeah, I had a couple guys take a look here at work and they found my simple typing mistake... I swear I had those right to begin with, but hey, we live and learn... Thanks for the help guys.
A: "[elementname] has no properties" is javascript error speak for "the element you tried to reference doesn't exist or is nil". This means you've got one or more of a few possible problems:
* Your page hasn't rendered yet and you're trying to reference it before it exists
* You've got a spelling error
* You've named your id the same as a reserved word (submit on a submit button for instance)
* What you think you're referencing you're really not (a passed variable that isn't what you think you're passing)
A: Make sure your code runs AFTER the page fully loads. If your code runs before the element you are looking for is rendered, this type of error will occur.
A: What you're describing is this functionality:
<div id="test2">
    <input id="source" value="0" />
</div>
<input id="dest" value="1" />
<script type="text/javascript" charset="utf-8">
//<![CDATA[
function pageLoad() {
    var container = document.getElementById('test2');
    container.innerHTML = "<input id='source' value='2' />";
    var source = document.getElementById('source');
    var dest = document.getElementById('dest');
    dest.value = source.value;
}
//]]>
</script>
This works in common browsers (I checked in IE, Firefox and Safari); are you using some other browser, or are you sure that it created the elements correctly on the innerHTML action?
A: It sounds to me like the DOM isn't being updated with the new elements. For that matter, why are you rewriting the entire div just to change the source input? Wouldn't it be just as easy to change source's value directly?
A: This is a stretch, but it just may be the trick - I have seen this before and this hack actually worked. So, you said: "after the javascript line that copies the ajax data to innerHTML the next line is: document.getElementById('dest').value = document.getElementById('source').value;" Change that line to this:
setTimeout(function() {
    document.getElementById("dest").value = document.getElementById("source").value;
}, 10);
You really shouldn't need this, but it is possible that the time between your setting the innerHTML and then trying to access the "source" element is so fast that the browser is unable to find it. I know, sounds completely whack, but I have seen browsers do this in certain instances for some reason that is beyond me.
{ "language": "en", "url": "https://stackoverflow.com/questions/141002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Creating a XAML Resource from Code Without a Key Is there a way to add a resource to a ResourceDictionary from code without giving it a resource key? For instance, I have this resource in XAML:
<TreeView.Resources>
    <HierarchicalDataTemplate DataType="{x:Type xbap:FieldPropertyInfo}" ItemsSource="{Binding Path=Value.Values}">
        <TextBlock Text="{Binding Path=Name}" />
    </HierarchicalDataTemplate>
</TreeView.Resources>
I need to create this resource dynamically from code and add it to the TreeView ResourceDictionary. However, in XAML having no Key means that it's used, by default, for all FieldPropertyInfo types. Is there a way to add it to the resource in code without having a key, or is there a way I can use a key and still have it used on all FieldPropertyInfo types? Here's what I've done in C# so far:
HierarchicalDataTemplate fieldPropertyTemplate = new HierarchicalDataTemplate("FieldProperyInfo");
fieldPropertyTemplate.ItemsSource = new Binding("Value.Values");
this.Resources.Add(null, fieldPropertyTemplate);
Obviously, adding a resource to the ResourceDictionary with the key null doesn't work.
A: Use the type that you want the template to apply to as the key:
HierarchicalDataTemplate fieldPropertyTemplate = new HierarchicalDataTemplate(typeof(FieldPropertyInfo));
fieldPropertyTemplate.ItemsSource = new Binding("Value.Values");
this.Resources.Add(new DataTemplateKey(typeof(FieldPropertyInfo)), fieldPropertyTemplate);
The reason your code wasn't working was the null key: a data template's implicit key is built from its data type, so the type has to go into both the template's DataType and the dictionary key.
A: Use the type that you want the template to apply to as the key:
this.Resources.Add(new DataTemplateKey(typeof(FieldPropertyInfo)), fieldPropertyTemplate);
As with your template above, you provide a type. You have to provide either a name or a type.
{ "language": "en", "url": "https://stackoverflow.com/questions/141007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Roll Up Task in Team Foundation Server We are using TFS 2008 for project management and I am looking for a method to roll up smaller tasks into larger tasks within TFS. Our work flow works like this:
* I create a new large work item, say 'Implement web page X', and assign it to my developer (let's call him Brad)
* Brad receives the task. Now he has never designed 'web page X' before and has no idea how long the whole thing will take to implement. However, he has done pages like it in the past... so Brad takes the large task and splits it into four or five smaller tasks that he can estimate.
* Brad takes his five new tasks and creates an estimate for how long each task will take. Even a few of these tasks are longer than say 8 hours of work, so he continues to break the tasks down into small enough pieces where he believes he can accurately estimate how long it will take him to implement the new web page.
* Brad now has a great task list with realistic estimates of how long each task will take. I can use this data and figure out how long it will take to implement this new page.
After going through all of this we have lost the overall 'master task'. I am looking for a workflow that would allow Brad and myself to reference the master task easily, and to communicate that all of these sub tasks belong to the larger master task. Any thoughts on how to implement such a work flow, or better suggestions on how to tackle task roll up in TFS?
A: There is no such support in TFS yet. However, it will be possible to do something like this in Rosario because it will support nested tasks, just like a tree structure. You could utilize iterations and areas within TFS to accommodate this need. Use these links as inspiration:
http://blogs.msdn.com/ericlee/archive/2006/08/09/when-to-use-team-projects.aspx
http://blogs.msdn.com/slange/archive/2007/01/30/my-2-cents-on-areas-and-iterations-in-team-foundation-server.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/141019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What's a good API for recording/capturing and playing back sound in Delphi and/or C#? <flavor> I want to create a spelling test program for my grade schoolers that would let them enter and record their spelling words, then test them on them throughout the week.</flavor> What's a good Delphi API with which I could select a recording device, capture and save sound files, then play them back? I'm also toying with doing the same project in C#, so C# sound capture/playback API recommendations would also be appreciated.
A: I've found New Audio Components to be quite good for Delphi.
A: An alternative to recording would be to use the MS Speech API with C#, enter the words via keyboard, and have it state what was keyed in. Just a thought... Good luck on your app -- it sounds like a really cool program!
A: This component set looks promising, though I've never used it myself. AudioLab 3.1 has both VCL components as well as .NET 2.0 components, which should allow you to use it whether you stay with developing your application in Delphi or move to C#. Finally, it appears to be free for non-commercial use.
A: The best place to look for Delphi components (audio): http://www.torry.net/pages.php?id=167
A: Why not use the TMediaPlayer that comes with Delphi (in the System tab of the Palette)? It can record and play wave files very easily.
A: I was also going to suggest AudioLab.
{ "language": "en", "url": "https://stackoverflow.com/questions/141023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: ActionScript3 User Interface Components? After using AS2 for several years, I'm getting started with writing applications in AS3 (Flash9/Flash10). I've come to the point where I need some full sets of GUI components, and I need to decide which set I'm going to use. Back in the AS2 days, the built-in components included with Flash were pretty crappy - bloated filesize, slow, buggy, etc. However, I heard good things about the new ones (included with CS3). So I'm looking for advice from people who have used a few different sets. Component sets I've heard of:
* CS3 Components - downside, I need to fiddle with the CS3 IDE; I'd prefer to work all from FlashDevelop only.
* Flex Components - downside, I need the Flex Framework, meaning I have to start with an MXML file, plus the bloat of the framework
* bit101's MinimalComps - These look like they might be a good starting point, though a bit limited
* ASwing A3 - These look interesting, but they seem a bit overengineered.
Ideally, they would be lightweight, have a decent API, and not be overly complex.
A: I'm actually a fan of the CS3 ones, mainly because it is so easy to just double click on those bad boys and edit right in the Flash IDE using the drawing tools. Very helpful for those times where you have to rapidly push a skinned video player to production... On the open source side there's also Thibault Imbert's Liquid Components (http://www.bytearray.org/?p=137), demo here (http://www.bytearray.org/?p=109)... It's pretty darn awesome and does runtime skinning, which was a major time saver in a project I was working on. Pretty easy to get started with too...
A: Yahoo's Astra components aren't bad either: http://developer.yahoo.com/flash/astra-flash/
A: If you are making a GUI application, this is exactly what Flex is for. As well as the built-in types you get a visual editor which is very nice, cool binding functionality, and other stuff like built-in drag & drop.
A: Flex framework has a nice, consistent API that I wouldn't consider "bloated" - yes, it's a feature-packed UI framework, but basic things like buttons and layout panels don't require any deep knowledge. And an XML-based language for UIs is a blessing (all modern UI platforms do that, be it DHTML, Flex or WPF/Silverlight).
A: Go with the CS3 ones - created by Grant Skinner, I think.
A: Try out AS DataProvider Controls
A: MyLib is another nice component library for AS
{ "language": "en", "url": "https://stackoverflow.com/questions/141024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Is the SendChunked property missing from System.Net.HttpWebRequest in Silverlight 2? I need to send large files from Silverlight 2 beta 2, so I need to set the SendChunked property to true, but the property doesn't seem to exist. I've seen other posts on the internet where people have been able to set this in Silverlight. I disassembled the .dll at C:\Program Files\Microsoft SDKs\Silverlight\v2.0\Reference Assemblies in Reflector and have confirmed that the property doesn't exist. Am I missing something?
A: After a bit more research, I was looking at the documentation for ASP.NET. You need to wire up your own file upload chunking mechanism. See Tim Heuer's great video about this topic.
A: Could you provide links to those posts? Maybe they refer to a different version of Silverlight; some things have been removed from S2B2.
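Going back to the first answer's "wire up your own chunking" suggestion: it boils down to slicing the file client-side and posting the pieces to a server handler that reassembles them. A rough plain-.NET sketch of the slicing loop follows (the handler URL, query parameters, and chunk size are all assumptions, and Silverlight itself would need the asynchronous WebRequest APIs rather than this synchronous form):
using System;
using System.IO;
using System.Net;

class ChunkedUploader
{
    static void Upload(string path)
    {
        const int chunkSize = 64 * 1024; // 64 KB per request, arbitrary
        byte[] buffer = new byte[chunkSize];
        using (FileStream fs = File.OpenRead(path))
        {
            int read, index = 0;
            while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
            {
                byte[] piece = new byte[read];
                Array.Copy(buffer, piece, read);
                // hypothetical receiver that reassembles chunks by file name and index
                string url = "http://example.com/upload.ashx?file=" +
                             Uri.EscapeDataString(Path.GetFileName(path)) + "&chunk=" + index++;
                new WebClient().UploadData(url, piece);
            }
        }
    }
}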
{ "language": "en", "url": "https://stackoverflow.com/questions/141031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I replace the *first instance* of a string in .NET? I want to replace the first occurrence in a given string. How can I accomplish this in .NET?
A: In C# syntax:
int loc = original.IndexOf(oldValue);
if (loc < 0)
{
    return original;
}
return original.Remove(loc, oldValue.Length).Insert(loc, newValue);
A: As itsmatt said, Regex.Replace is a good choice for this; however, to make his answer more complete I will fill it in with a code sample:
using System.Text.RegularExpressions;
...
Regex regex = new Regex("foo");
string result = regex.Replace("foo1 foo2 foo3 foo4", "bar", 1);
// result = "bar1 foo2 foo3 foo4"
The third parameter, set to 1 in this case, is the number of occurrences of the regex pattern that you want to replace in the input string from the beginning of the string. I was hoping this could be done with a static Regex.Replace overload, but unfortunately it appears you need a Regex instance to accomplish it.
A: Assumes that AA only needs to be replaced if it is at the very start of the string:
string newString = myString;
if (myString.StartsWith("AA"))
{
    newString = "XQ" + myString.Substring(2);
}
If you need to replace the first occurrence of AA, whether the string starts with it or not, go with the solution from Marc.
A: And because there is also VB.NET to consider, I would like to offer up:
Private Function ReplaceFirst(ByVal text As String, ByVal search As String, ByVal replace As String) As String
    Dim pos As Integer = text.IndexOf(search)
    If pos >= 0 Then
        Return text.Substring(0, pos) + replace + text.Substring(pos + search.Length)
    End If
    Return text
End Function
A: One of the overloads of Regex.Replace takes an int for "The maximum number of times the replacement can occur". Obviously, using Regex.Replace for plain text replacement may seem like overkill, but it's certainly concise:
string output = (new Regex("AA")).Replace(input, "XQ", 1);
A: For anyone that doesn't mind a reference to Microsoft.VisualBasic, there is the Replace method:
string result = Microsoft.VisualBasic.Strings.Replace("111", "1", "0", 2, 1); // "101"
A: Take a look at Regex.Replace.
A: using System.Text.RegularExpressions;
Regex myRegex = new Regex("F");
string result = myRegex.Replace(InputString, "R", 1);
will find the first F in InputString and replace it with R.
A: string ReplaceFirst(string text, string search, string replace)
{
    int pos = text.IndexOf(search);
    if (pos < 0)
    {
        return text;
    }
    return text.Substring(0, pos) + replace + text.Substring(pos + search.Length);
}
Example:
string str = "The brown brown fox jumps over the lazy dog";
str = ReplaceFirst(str, "brown", "quick");
EDIT: As @itsmatt mentioned, there's also Regex.Replace(String, String, Int32), which can do the same, but is probably more expensive at runtime, since it's utilizing a full-featured parser where my method does one find and three string concatenations.
EDIT2: If this is a common task, you might want to make the method an extension method:
public static class StringExtension
{
    public static string ReplaceFirst(this string text, string search, string replace)
    {
        // ...same as above...
    }
}
Using the above example it's now possible to write:
str = str.ReplaceFirst("brown", "quick");
A: Taking the "first only" into account, perhaps:
int index = input.IndexOf("AA");
if (index >= 0)
    output = input.Substring(0, index) + "XQ" + input.Substring(index + 2);
? Or more generally:
public static string ReplaceFirstInstance(this string source, string find, string replace)
{
    int index = source.IndexOf(find);
    return index < 0
        ? source
        : source.Substring(0, index) + replace + source.Substring(index + find.Length);
}
Then:
string output = input.ReplaceFirstInstance("AA", "XQ");
A: C# extension method that will do this:
public static class StringExt
{
    public static string ReplaceFirstOccurrence(this string s, string oldValue, string newValue)
    {
        int i = s.IndexOf(oldValue);
        if (i < 0)
            return s; // nothing to replace
        return s.Remove(i, oldValue.Length).Insert(i, newValue);
    }
}
A: This example abstracts away the substring handling (but is slower than direct indexing), and is probably still much faster than a RegEx:
var parts = contents.ToString().Split(new string[] { "needle" }, 2, StringSplitOptions.None);
return parts[0] + "replacement" + parts[1];
A: Updated extension method utilizing Span to minimize new string creation:
public static string ReplaceFirstOccurrence(this string source, string search, string replace)
{
    int index = source.IndexOf(search);
    if (index < 0)
        return source;
    var sourceSpan = source.AsSpan();
    return string.Concat(sourceSpan.Slice(0, index), replace, sourceSpan.Slice(index + search.Length));
}
A: With ranges and C# 10 we can do:
public static string ReplaceFirst(this string text, string search, string replace)
{
    int pos = text.IndexOf(search, StringComparison.Ordinal);
    return pos < 0 ? text : string.Concat(text[..pos], replace, text.AsSpan(pos + search.Length));
}
A: string abc = "AAAAX1";
if (abc.IndexOf("AA") == 0)
{
    abc = abc.Remove(0, 2);
    abc = "XQ" + abc;
}
{ "language": "en", "url": "https://stackoverflow.com/questions/141045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "118" }
Q: MS Paint command line switches I have been looking for documentation related to interacting with MSPaint from the command line. I have only found references to /p, /pt and /wia, but no guidance as to how to use them and their limitations. I am trying to send some graphics files to the printer and when I drop the file on my printer driver I get a different print output than if I call paint from the command line. I am using the UDC print driver to convert graphics, and I am using paint to send my graphics file to the printer driver in order for my file to convert. Any ideas? A: I actually suggest you look into doing this in Paint.Net instead. You will have much more freedom. A: I suggest ImageMagick hands down... it's like having Photoshop on the command line! A: I know that mspaint /p filename and mspaint /pt filename both print straight to the default printer (with /pt you can reportedly also append a printer name to target a specific printer). Not sure what /wia does, maybe something to do with Windows Image Acquisition? Also, as others have pointed out, there are many programs a lot more capable for doing what you want than MSPaint. A: Use PngOptimizer: https://portableapps.com/apps/graphics_pictures/pngoptimizer-portable it is freeware, doesn't require installation, and is less than 1MB. Converts nicely from BMP to PNG and many other things. There is a specific command line version PngOptimizerCL to be downloaded from: http://psydk.org/pngoptimizer To run it on the command line, converting from BMP to PNG: PngOptimizerCL.exe file.bmp file.png A: What OS (specific version) are you using? The newer versions of Windows support printing graphics files without the need for MS Paint or any other graphics program. It's called the "Photo Printing Wizard" in XP, and you can even just right-click on a graphics file and choose "Print" right from Explorer - no other program required (and no command line switches needed either). If all you are trying to do is send some graphics files to the printer, and you're able to drag & drop them, then this is what I'd recommend using.
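For reference, here is a minimal sketch of the print switches as commonly reported; the named-printer argument to /pt is not officially documented, so treat it as an assumption to verify on your system:

mspaint /p image.bmp
mspaint /pt image.bmp "My Printer Name"

The first prints image.bmp to the default printer; the second reportedly sends it to the named printer, which can be useful when targeting a conversion driver such as UDC without changing the system default.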
{ "language": "en", "url": "https://stackoverflow.com/questions/141052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Crystal Report 10.5 [included with Visual Studio 2008] Selection by Image column I have an Image column (Allow Null = true) in SQL Server 2005. I am using Crystal Reports designer (ver 10.5) that comes with Visual Studio 2008. Crystal sees the column as blob field and puts an image object for the column. When I am trying to limit the record selection by using NOT ISNULL({Employee.Picture}) as Selection Formula, I get the following error: Error in formula . 'NOT (ISNULL({Employee.Picture}))' This function cannot be used because it must be evaluated later. Is there a way to filter out rows without pictures? Thanks, Kishore A A: The ISNULL function is more like the SWITCH function in VB. What you are looking for is probably something more along the lines of: WHERE NOT Employee.Picture IS NULL Two separate words for IS NULL. A: since no one is jumping in on this I'll try (note: I can't test these possibilities at the moment). * *if you can get a size out of the field using the formula editor, try size greater than 0 or whatever. *if you pull the field onto the report, does it show a pic? *if you look at the database, are the empty fields actually null, or could CR be reading them as "0"?
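Following the first answer's suggestion to filter at the database level rather than in the record-selection formula, here is a minimal sketch of a view or SQL command you could report against; the EmployeeId and Name columns are assumed for illustration, only Picture comes from the question:

SELECT EmployeeId, Name, Picture
FROM Employee
WHERE Picture IS NOT NULL

Reporting against a result set like this sidesteps the formula's evaluation-time restriction entirely, since rows without pictures never reach the report.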
{ "language": "en", "url": "https://stackoverflow.com/questions/141068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to iterate over a dictionary? I've seen a few different ways to iterate over a dictionary in C#. Is there a standard way? A: Generally, asking for "the best way" without a specific context is like asking what is the best color? On the one hand, there are many colors and there's no best color. It depends on the need and often on taste, too. On the other hand, there are many ways to iterate over a Dictionary in C# and there's no best way. It depends on the need and often on taste, too. Most straightforward way foreach (var kvp in items) { // key is kvp.Key doStuff(kvp.Value) } If you need only the value (this allows calling it item, which is more readable than kvp.Value). foreach (var item in items.Values) { doStuff(item) } If you need a specific sort order Generally, beginners are surprised by the order of enumeration of a Dictionary. LINQ provides a concise syntax that allows specifying order (and many other things), e.g.: foreach (var kvp in items.OrderBy(kvp => kvp.Key)) { // key is kvp.Key doStuff(kvp.Value) } Again you might only need the value. LINQ also provides a concise solution to: * *iterate directly on the value (allowing you to call it item, which is more readable than kvp.Value) *but sorted by the keys Here it is: foreach (var item in items.OrderBy(kvp => kvp.Key).Select(kvp => kvp.Value)) { doStuff(item) } There are many more real-world use cases you can build from these examples. If you don't need a specific order, just stick to the "most straightforward way" (see above)! A: I found this method in the documentation for the DictionaryBase class on MSDN: foreach (DictionaryEntry de in myDictionary) { //Do some stuff with de.Value or de.Key } This was the only one I was able to get functioning correctly in a class that inherited from the DictionaryBase. A: Sometimes, if you only need the values to be enumerated, use the dictionary's value collection: foreach(var value in dictionary.Values) { // do something with value only } Reported by this post which states it is the fastest method: http://alexpinsker.blogspot.hk/2010/02/c-fastest-way-to-iterate-over.html A: C# 7.0 introduced Deconstructors, and if you are using a .NET Core 2.0+ application, the struct KeyValuePair<> already includes a Deconstruct() for you. So you can do: var dic = new Dictionary<int, string>() { { 1, "One" }, { 2, "Two" }, { 3, "Three" } }; foreach (var (key, value) in dic) { Console.WriteLine($"Item [{key}] = {value}"); } //Or foreach (var (_, value) in dic) { Console.WriteLine($"Item [NO_ID] = {value}"); } //Or foreach ((int key, string value) in dic) { Console.WriteLine($"Item [{key}] = {value}"); } A: I would say foreach is the standard way, though it obviously depends on what you're looking for foreach(var kvp in my_dictionary) { ... } Is that what you're looking for?
A: I know this is a very old question, but I created some extension methods that might be useful: public static void ForEach<T, U>(this Dictionary<T, U> d, Action<KeyValuePair<T, U>> a) { foreach (KeyValuePair<T, U> p in d) { a(p); } } public static void ForEach<T, U>(this Dictionary<T, U>.KeyCollection k, Action<T> a) { foreach (T t in k) { a(t); } } public static void ForEach<T, U>(this Dictionary<T, U>.ValueCollection v, Action<U> a) { foreach (U u in v) { a(u); } } This way I can write code like this: myDictionary.ForEach(pair => Console.Write($"key: {pair.Key}, value: {pair.Value}")); myDictionary.Keys.ForEach(key => Console.Write(key)); myDictionary.Values.ForEach(value => Console.Write(value)); A: If you want to use a for loop, you can do as below: var keyList=new List<string>(dictionary.Keys); for (int i = 0; i < keyList.Count; i++) { var key= keyList[i]; var value = dictionary[key]; } A: You can also try this on big dictionaries for multithreaded processing. dictionary .AsParallel() .ForAll(pair => { // Process pair.Key and pair.Value here }); A: I will take advantage of .NET 4.0+ and provide an updated answer to the originally accepted one: foreach(var entry in MyDic) { // do something with entry.Value or entry.Key } A: foreach(KeyValuePair<string, string> entry in myDictionary) { // do something with entry.Value or entry.Key } A: If, say, you want to iterate over the values collection by default, I believe you can implement IEnumerable<T>, where T is the type of the values in the dictionary, and "this" is a Dictionary. public new IEnumerator<T> GetEnumerator() { return this.Values.GetEnumerator(); } A: The standard way to iterate over a Dictionary, according to official documentation on MSDN is: foreach (DictionaryEntry entry in myDictionary) { //Read entry.Key and entry.Value here } A: I wrote an extension to loop over a dictionary. public static class DictionaryExtension { public static void ForEach<T1, T2>(this Dictionary<T1, T2> dictionary, Action<T1, T2> action) { foreach(KeyValuePair<T1, T2> keyValue in dictionary) { action(keyValue.Key, keyValue.Value); } } } Then you can call myDictionary.ForEach((x,y) => Console.WriteLine(x + " - " + y)); A: I appreciate this question has already had a lot of responses but I wanted to throw in a little research. Iterating over a dictionary can be rather slow when compared with iterating over something like an array. In my tests an iteration over an array took 0.015003 seconds whereas an iteration over a dictionary (with the same number of elements) took 0.0365073 seconds; that's 2.4 times as long! Although I have seen much bigger differences. For comparison a List was somewhere in between at 0.00215043 seconds. However, that is like comparing apples and oranges. My point is that iterating over dictionaries is slow. Dictionaries are optimised for lookups, so with that in mind I've created two methods. One simply does a foreach, the other iterates the keys then looks up. public static string Normal(Dictionary<string, string> dictionary) { string value; int count = 0; foreach (var kvp in dictionary) { value = kvp.Value; count++; } return "Normal"; } This one loads the keys and iterates over them instead (I did also try pulling the keys into a string[], but the difference was negligible.)
public static string Keys(Dictionary<string, string> dictionary) { string value; int count = 0; foreach (var key in dictionary.Keys) { value = dictionary[key]; count++; } return "Keys"; } With this example the normal foreach test took 0.0310062 seconds and the keys version took 0.2205441 seconds. Loading all the keys and iterating over all the lookups is clearly a LOT slower! For a final test I've performed my iteration ten times to see if there are any benefits to using the keys here (by this point I was just curious): Here's the RunTest method if that helps you visualise what's going on. private static string RunTest<T>(T dictionary, Func<T, string> function) { DateTime start = DateTime.Now; string name = null; for (int i = 0; i < 10; i++) { name = function(dictionary); } DateTime end = DateTime.Now; var duration = end.Subtract(start); return string.Format("{0} took {1} seconds", name, duration.TotalSeconds); } Here the normal foreach run took 0.2820564 seconds (around ten times longer than a single iteration took - as you'd expect). The iteration over the keys took 2.2249449 seconds. Edited To Add: Reading some of the other answers made me question what would happen if I used a Dictionary with different type parameters. In this example the array took 0.0120024 seconds, the list 0.0185037 seconds and the dictionary 0.0465093 seconds. It's reasonable to expect that the data type makes a difference on how much slower the dictionary is. What are my Conclusions? * *Avoid iterating over a dictionary if you can, they are substantially slower than iterating over an array with the same data in it. *If you do choose to iterate over a dictionary don't try to be too clever, although slower you could do a lot worse than using the standard foreach method. A: As already pointed out in this answer, KeyValuePair<TKey, TValue> implements a Deconstruct method starting with .NET Core 2.0, .NET Standard 2.1 and .NET Framework 5.0 (preview). With this, it's possible to iterate through a dictionary in a KeyValuePair-agnostic way: var dictionary = new Dictionary<int, string>(); // ... foreach (var (key, value) in dictionary) { // ... } A: There are plenty of options. My personal favorite is using KeyValuePair Dictionary<string, object> myDictionary = new Dictionary<string, object>(); // Populate your dictionary here foreach (KeyValuePair<string,object> kvp in myDictionary) { // Do some interesting things } You can also use the Keys and Values Collections A: With .NET Framework 4.7 one can use decomposition var fruits = new Dictionary<string, int>(); ... foreach (var (fruit, number) in fruits) { Console.WriteLine(fruit + ": " + number); } To make this code work on lower C# versions, add System.ValueTuple NuGet package and write somewhere public static class MyExtensions { public static void Deconstruct<T1, T2>(this KeyValuePair<T1, T2> tuple, out T1 key, out T2 value) { key = tuple.Key; value = tuple.Value; } } A: As of C# 7, you can deconstruct objects into variables. I believe this to be the best way to iterate over a dictionary. Example: Create an extension method on KeyValuePair<TKey, TVal> that deconstructs it: public static void Deconstruct<TKey, TVal>(this KeyValuePair<TKey, TVal> pair, out TKey key, out TVal value) { key = pair.Key; value = pair.Value; } Iterate over any Dictionary<TKey, TVal> in the following manner // Dictionary can be of any types, just using 'int' and 'string' as examples. Dictionary<int, string> dict = new Dictionary<int, string>(); // Deconstructor gets called here.
foreach (var (key, value) in dict) { Console.WriteLine($"{key} : {value}"); } A: In some cases you may need a counter like the one provided by a for-loop implementation. For that, LINQ provides ElementAt which enables the following: for (int index = 0; index < dictionary.Count; index++) { var item = dictionary.ElementAt(index); var itemKey = item.Key; var itemValue = item.Value; } A: foreach is fastest, and if you only iterate over the dictionary's Values collection, it is faster still A: Using C# 7, add this extension method to any project of your solution: public static class IDictionaryExtensions { public static IEnumerable<(TKey, TValue)> Tuples<TKey, TValue>( this IDictionary<TKey, TValue> dict) { foreach (KeyValuePair<TKey, TValue> kvp in dict) yield return (kvp.Key, kvp.Value); } } And use this simple syntax foreach (var(id, value) in dict.Tuples()) { // your code using 'id' and 'value' } Or this one, if you prefer foreach ((string id, object value) in dict.Tuples()) { // your code using 'id' and 'value' } In place of the traditional foreach (KeyValuePair<string, object> kvp in dict) { string id = kvp.Key; object value = kvp.Value; // your code using 'id' and 'value' } The extension method transforms the KeyValuePair of your IDictionary<TKey, TValue> into a strongly typed tuple, allowing you to use this new comfortable syntax. It converts just the required dictionary entries to tuples, so it does NOT convert the whole dictionary to tuples, and there are no performance concerns related to that. There is only a minor cost in calling the extension method to create a tuple in comparison with using the KeyValuePair directly, which should NOT be an issue if you are assigning the KeyValuePair's properties Key and Value to new loop variables anyway. In practice, this new syntax suits most cases very well, except for low-level ultra-high performance scenarios, where you still have the option to simply not use it on that specific spot. Check this out: MSDN Blog - New features in C# 7 A: The simplest form to iterate over a dictionary: foreach(var item in myDictionary) { Console.WriteLine(item.Key); Console.WriteLine(item.Value); } A: Depends on whether you're after the keys or the values... From the MSDN Dictionary(TKey, TValue) Class description: // When you use foreach to enumerate dictionary elements, // the elements are retrieved as KeyValuePair objects. Console.WriteLine(); foreach( KeyValuePair<string, string> kvp in openWith ) { Console.WriteLine("Key = {0}, Value = {1}", kvp.Key, kvp.Value); } // To get the values alone, use the Values property. Dictionary<string, string>.ValueCollection valueColl = openWith.Values; // The elements of the ValueCollection are strongly typed // with the type that was specified for dictionary values. Console.WriteLine(); foreach( string s in valueColl ) { Console.WriteLine("Value = {0}", s); } // To get the keys alone, use the Keys property. Dictionary<string, string>.KeyCollection keyColl = openWith.Keys; // The elements of the KeyCollection are strongly typed // with the type that was specified for dictionary keys.
Console.WriteLine(); foreach( string s in keyColl ) { Console.WriteLine("Key = {0}", s); } A: If you are trying to use a generic Dictionary in C# like you would use an associative array in another language: foreach(var item in myDictionary) { foo(item.Key); bar(item.Value); } Or, if you only need to iterate over the collection of keys, use foreach(var item in myDictionary.Keys) { foo(item); } And lastly, if you're only interested in the values: foreach(var item in myDictionary.Values) { foo(item); } (Take note that the var keyword is an optional C# 3.0 and above feature, you could also use the exact type of your keys/values here) A: Dictionary<TKey, TValue> is a generic collection class in C# that stores data in key/value format. Keys must be unique and cannot be null, whereas values can be duplicated and can be null. Each item in the dictionary is treated as a KeyValuePair<TKey, TValue> structure representing a key and its value, and hence we should use KeyValuePair<TKey, TValue> as the element type during iteration. Below is an example. Dictionary<int, string> dict = new Dictionary<int, string>(); dict.Add(1,"One"); dict.Add(2,"Two"); dict.Add(3,"Three"); foreach (KeyValuePair<int, string> item in dict) { Console.WriteLine("Key: {0}, Value: {1}", item.Key, item.Value); } A: The best answer is of course: think about whether you could use a more appropriate data structure than a dictionary if you plan to iterate over it, as Vikas Gupta already mentioned at the beginning of the discussion under the question. But that discussion, like this whole thread, still lacks surprisingly good alternatives. One is: SortedList<string, string> x = new SortedList<string, string>(); x.Add("key1", "value1"); x.Add("key2", "value2"); x["key3"] = "value3"; foreach( KeyValuePair<string, string> kvPair in x ) Console.WriteLine($"{kvPair.Key}, {kvPair.Value}"); Why could iterating over a dictionary (e.g. by foreach over KeyValuePair<,>) be argued to be a code smell? A basic principle of Clean Coding: "Express intent!" Robert C. Martin writes in "Clean Code": "Choosing names that reveal intent". Obviously naming alone is too weak. "Express (reveal) intent with every coding decision" expresses it better. A related principle is "Principle of least surprise" (=Principle of Least Astonishment). Why is this related to iterating over a dictionary? Choosing a dictionary expresses the intent of choosing a data structure which was made primarily for finding data by key. Nowadays there are so many alternatives in .NET if you want to iterate through key/value pairs that you could choose something else. Moreover: If you iterate over something, you have to reveal something about how the items are (to be) ordered and expected to be ordered! Although the known implementations of Dictionary keep the key collection in the order the items were added, AFAIK Dictionary has no assured specification about ordering (has it?). But what are the alternatives? TLDR: SortedList: If your collection is not getting too large, a simple solution would be to use SortedList<,> which gives you also full indexing of key/value pairs. Microsoft has a long article mentioning and explaining fitting collections: Keyed collection To mention the most important: KeyedCollection<,> and SortedDictionary<,> . SortedDictionary<,> is a bit faster than SortedList for inserting only if it gets large, but lacks indexing and is needed only if O(log n) insertion is preferred over other operations.
If you really need O(1) for inserting and accept slower iterating in exchange, you have to stay with the simple Dictionary<,>. Obviously there is no data structure which is the fastest for every possible operation. Additionally there is ImmutableSortedDictionary<,>. And if one data structure is not exactly what you need, then derive from Dictionary<,> or even from the new ConcurrentDictionary<,> and add explicit iteration/sorting functions! A: in addition to the highest ranking posts where there is a discussion between using foreach(KeyValuePair<string, string> entry in myDictionary) { // do something with entry.Value or entry.Key } or foreach(var entry in myDictionary) { // do something with entry.Value or entry.Key } the most complete is the following, because you can see the dictionary type from the initialization, and kvp stands for KeyValuePair var myDictionary = new Dictionary<string, string>(x);//fill dictionary with x foreach(var kvp in myDictionary)//iterate over dictionary { // do something with kvp.Value or kvp.Key } A: Just wanted to add my 2 cents, as most answers relate to the foreach loop. Please, take a look at the following code: Dictionary<String, Double> myProductPrices = new Dictionary<String, Double>(); //Add some entries to the dictionary myProductPrices.ToList().ForEach(kvP => { myProductPrices[kvP.Key] = kvP.Value * 1.15; Console.WriteLine(String.Format("Product '{0}' has a new price: {1} $", kvP.Key, myProductPrices[kvP.Key])); }); (Note that KeyValuePair.Value is read-only, so the new price is written back through the dictionary; iterating over the ToList() copy makes that safe.) Although this adds an additional call to '.ToList()', there might be a slight performance improvement (as pointed out here: foreach vs someList.ForEach(){}), especially when working with large dictionaries and running in parallel is not an option / won't have an effect at all. Also, please note that you won't be able to assign values to the 'Value' property inside a foreach loop. On the other hand, you won't be able to manipulate the 'Key' either, which could otherwise get you into trouble at runtime. When you just want to "read" Keys and Values, you might also use IEnumerable.Select(). var newProductPrices = myProductPrices.Select(kvp => new { Name = kvp.Key, Price = kvp.Value * 1.15 } ); A: You can also project the entries into your own type with LINQ: var dictionary = new Dictionary<string, int> { { "Key", 12 } }; var aggregateObjectCollection = dictionary.Select( entry => new AggregateObject(entry.Key, entry.Value));
{ "language": "en", "url": "https://stackoverflow.com/questions/141088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3199" }
Q: Are "dirty reads" safe to use in Terracotta? "Dirty reads", meaning reading an object's value even though it is write-locked by another thread, are described on Terracotta's website, yet I've heard that they shouldn't be used, even if you don't care about the possibility that you might get old data when you dirty-read the locked object. Does anyone have any experience of using dirty reads in Terracotta, and are they safe to use if you don't care about the possibility of reading an old value? A: A dirty read is a dirty read. Terracotta, being distributed/clustered, only adds the possibility to read even older values of the shared mutable state that you are accessing without proper synchronization. You should note that, under the memory model in Java 5, you are not guaranteed to ever read an updated value if you don't use proper synchronization. Terracotta may decide to take advantage of this possibility. In fact, any JVM may, at their leisure, take advantage of it. Even if it might work on your machine, it may break on other machines. It may break on minor updates of the JVM, and it may break for the same version of the same JVM on a different CPU. With that in mind, you can say that dirty reads isn't safe in any JVM... Unless you don't mind the possibility that you won't ever be able to read the updates that other threads make - an unlikely situation but it could happen. Also, when you actually follow your link to Terracottas wiki, it says that the article has been removed and that the pattern is discouraged. A: I'm a Terracotta developer. The gist of the answer is just as Christian Vest Hansen already noted - just as the JVM makes no guarantees about the visibility of updates of a shared object that is accessed w/o proper synchronization, Terracotta likewise can make no guarantees about dirty reads of a clustered object. The link has indeed purposely been removed and replaced with a warning to not use this pattern.
{ "language": "en", "url": "https://stackoverflow.com/questions/141090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I integrate the ASP .Net Model View Presenter (MVP) pattern and static page methods marked as [WebMethod]? In an asp.net application, I would like to combine the use of the Webclient Software Factory (WCSF), and its associated Model View Presenter pattern (MVP), with Page Methods, that is, static methods on the .aspx Views marked with the [WebMethod] attribute. However, static methods on the aspx page would seem to break the Model View Presenter pattern since an instance method is required on the page to have the context of the Presenter and Controller necessary for the View to talk to. How would one extend asp .net's MVP pattern in WCSF to support [WebMethods] on the page, aka the View? A: I had a similar problem recently when doing an MVP-patterned project and wanting a lot of AJAX integration. You're best off calling web services which conform to the MVP pattern. Keep in mind that a PageMethod is little more than a web service, just in the current page. It doesn't have access to any page-level objects, so the advantages of having it there are minimal. I actually think they are disadvantageous; they give developers (who are unfamiliar with the concept) the idea that they can interact with page-level objects. The flip-side of the coin is what your PageMethod is doing: if your page method does not need to interact with the Model (say, it's handling complex arithmetic calculations which are faster in C#/VB.NET than JS) then the operation is really a UI level operation and quite probably irrelevant if you were to turn the app into a WinForm (or something else). Keep in mind that all interaction with data at a UI level is specific to that UI implementation. If you were to write a different UI for the presenters then chances are you'll have different UI level data interaction. A: I think you could come close to what you are looking for by using an ASP.Net AJAX Web Service instead of static page methods. The web service has the advantage of not being static, and depending on how your views are implemented, (I'm not familiar with the specifics of the WCSF MVP pattern) you could potentially make the web service your "View" layer... or at least something fairly close. I've done something similar in a project I'm working on. I ended up needing to create a thin data-only class which got serialized to JSON by the web service to carry the data from the model to the "view", but the web service had essentially the same methods that would be exposed as events on the view. One of the things I liked about this approach is that all the bits, including the web service, are testable.
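To make the web-service-as-view idea concrete, here is a minimal sketch; the IEmployeeView, EmployeePresenter and EmployeeDto names are hypothetical, invented for illustration rather than taken from WCSF:

using System.Web.Services;
using System.Web.Script.Services;

// Hypothetical data-only class that gets serialized to JSON.
public class EmployeeDto { public int Id; public string Name; }

// Hypothetical view contract the presenter talks to.
public interface IEmployeeView { EmployeeDto Employee { get; set; } }

// Hypothetical presenter; in a real project this would call into the model.
public class EmployeePresenter
{
    private readonly IEmployeeView _view;
    public EmployeePresenter(IEmployeeView view) { _view = view; }
    public void LoadEmployee(int id) { _view.Employee = new EmployeeDto { Id = id, Name = "..." }; }
}

[ScriptService]
public class EmployeeService : WebService, IEmployeeView
{
    public EmployeeDto Employee { get; set; }

    [WebMethod]
    public EmployeeDto GetEmployee(int id)
    {
        // The service instance acts as the view; the presenter populates it.
        var presenter = new EmployeePresenter(this);
        presenter.LoadEmployee(id);
        return Employee; // serialized to JSON for the client
    }
}

Because a web service is instantiated per request, the presenter has a real instance to talk to, which is exactly what a static page method cannot offer.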
{ "language": "en", "url": "https://stackoverflow.com/questions/141104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to find the foreach index? Is it possible to find the foreach index? In a for loop as follows: for ($i = 0; $i < 10; ++$i) { echo $i . ' '; } $i will give you the index. Do I have to use the for loop or is there some way to get the index in the foreach loop? A: PHP arrays have internal pointers, so try this: foreach($array as $key => $value){ $index = current($array); } Works okay for me (only very preliminarily tested though). A: I use ++$key instead of $key++ to start from 1. Normally it starts from 0. @foreach ($quiz->questions as $key => $question) <h2> Question: {{++$key}}</h2> <p>{{$question->question}}</p> @endforeach Output: Question: 1 ...... Question:2 ..... . . . A: Jonathan is correct. PHP arrays act as a map table mapping keys to values. In some cases you can get an index if your array is defined, such as $var = array(2,5); for ($i = 0; $i < count($var); $i++) { echo $var[$i]."\n"; } your output will be 2 5 in which case each element in the array has a knowable index, but if you then do something like the following $var = array_push($var,10); for ($i = 0; $i < count($var); $i++) { echo $var[$i]."\n"; } you get no output. This happens because arrays in PHP are not linear structures like they are in most languages. They are more like hash tables that may or may not have keys for all stored values. Hence foreach doesn't use indexes to crawl over them because they only have an index if the array is defined. If you need to have an index, make sure your arrays are fully defined before crawling over them, and use a for loop. A: It should be noted that you can call key() on any array to find the current key it's on. As you can guess current() will return the current value and next() will move the array's pointer to the next element. A: I solved it this way, when I had to use the foreach index and value in the same context: $array = array('a', 'b', 'c'); foreach ($array as $letter=>$index) { echo $letter; // Here $letter contains the actual index echo $array[$letter]; // echoes the array value }//foreach A: Owen has a good answer. If you want just the key, and you are working with an array, this might also be useful. foreach(array_keys($array) as $key) { // do stuff } A: You can put a hack in your foreach, such as a field incremented on each run-through, which is exactly what the for loop gives you in a numerically-indexed array. Such a field would be a pseudo-index that needs manual management (increments, etc). A foreach will give you your index in the form of your $key value, so such a hack shouldn't be necessary. e.g., in a foreach $index = 0; foreach($data as $key=>$val) { // Use $key as an index, or... // ... manage the index this way.. echo "Index is $index\n"; $index++; } A: I normally do this when working with associative arrays: foreach ($assoc_array as $key => $value) { //do something } This will work fine with non-associative arrays too. $key will be the index value. If you prefer, you can do this too: foreach ($array as $indx => $value) { //do something } A: You can create $i outside the loop and do $i++ at the bottom of the loop. A: foreach($array as $key=>$value) { // do stuff } $key is the index of each $array element A: These two loops are equivalent (bar the safety railings of course): for ($i=0; $i<count($things); $i++) { ... } foreach ($things as $i=>$thing) { ... } eg for ($i=0; $i<count($things); $i++) { echo "Thing ".$i." is ".$things[$i]; } foreach ($things as $i=>$thing) { echo "Thing ".$i."
is ".$thing; } A: I think best option is like same: foreach ($lists as $key=>$value) { echo $key+1; } it is easy and normally A: I would like to add this, I used this in laravel to just index my table: * *With $loop->index *I also preincrement it with ++$loop to start at 1 My Code: @foreach($resultsPerCountry->first()->studies as $result) <tr> <td>{{ ++$loop->index}}</td> </tr> @endforeach A: foreach(array_keys($array) as $key) { // do stuff }
{ "language": "en", "url": "https://stackoverflow.com/questions/141108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "631" }
Q: Why is deleting a branch in CVS not recommended? Under what circumstances would this or would this not be safe? I have a branch that has four changes (no file adds or deletes). Would this be safe to delete? Edit: The reason for wanting to delete it is that it was misnamed and is going to lead to confusion. A: People landing here looking for the answer to "How to delete a branch in cvs": cvs tag -dB branchname The -d will delete, the -B will override and let it know to delete the branch (not a tag). A: Deleting branches is normally not recommended because it loses so much history and cannot be undone. The general recommendation is actually to only remove a branch when it is very young, and when you've made a mistake. Like a typo in the branch name. A: If a branch is empty (you didn't commit anything in it), then it is OK to delete; it works just like untagging files. But if you have already committed some files a few times, this would be a little dangerous, since you can remove the branch reference, but not the files under it. This would be messy at least. Instead, if you really want to "secure" this branch, you could lock the files under it (by script is better) so no one could make changes to it anymore, and forget about it. A: I believe that CVS won't actually delete the branch, it will just remove the tag from the branch -- so the branch is still present in each ",v" file that is affected, it just won't be trivially accessible any more. The result is weird, but probably not dangerous. A: Curt is correct, to delete a branch you have to physically run a delete command from the box. It seems like in the case you mentioned, it would be OK to delete it.
{ "language": "en", "url": "https://stackoverflow.com/questions/141123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: What is important to keep in mind when designing a database? What is important to keep in mind when designing a database? I don't want to limit your answer to my needs as I am sure that others can benefit from your insights as well. But I am planning a content management system for a multi-client community driven site. A: Try to imagine the SQL queries that you will perform against it. This is important because you will do it A LOT! A: Some things I would keep in mind. Make sure every table has a way to uniquely identify records (you will save untold hours of pain doing this). Normalize but do not join on large multi-column natural keys unless you want the whole thing to be slow. Use a numeric key that is autogenerated in the parent table instead. Yes, think about the kinds of queries and reports you will need to run. Think about extensibility. It may seem like you won't need more than 10 product columns in the order table, but what happens when you need 11? Better to have an order table and an order detail table. Make sure all data integrity rules are incorporated into the database. Not all data changes happen from the user interface and I've had to try to fix too many badly messed up databases because the designers figured it was OK to put all rules in the GUI. The most critical things to consider when designing are first how to ensure data integrity (if the data is meaningless then the database is useless) and second how to ensure performance. Do not use an object model to design a relational database unless you want bad performance. The next most important thing is data protection and security. Users should never have direct access to the database tables. If your design requires dynamic SQL they will have to have that access. This is bad from the perspective of potential hacking in through things like SQL injection attacks, but even more importantly, it opens up your database for internal people to commit fraud. Are there fields where you need to encrypt the data (credit card information, passwords, and Social Security numbers are among the items that should never be stored unencrypted)? How do you plan to do that, and how do you plan to audit decryption to ensure people are not decrypting when they have no need to see the data? Are there legal hoops you must go through (HIPAA and Sarbanes-Oxley spring to mind)? A: "Normalize till it hurts; de-normalize till it works." A: Get a really good book on data modeling - one written by a true database developer, not a .NET developer who tries to teach you how it's done in the "real world". The problem space of database design is simply way too large to be significantly covered in a forum like this. Despite that though, I'll give you a few personal pointers: Listen to the above posts about normalization. NEVER denormalize because you THINK that you have to for performance reasons. You should only denormalize after you've experienced actual performance issues (ideally in your QA environment, not production). Even then, consider that there may be a better way to write your queries or improve indexing first. Constrain the data as much as possible. Columns should be NOT NULL as much as possible. Use CHECK constraints and FOREIGN KEYs wherever they should be. If you don't do this, bad data will get into your database and cause a lot of headaches and special case programming. Think through your data before you actually start designing tables. Get a good handle on how your processes will flow and what data they will need to track.
Often times what you think is an entity at first glance turns out to be two entities. As an example, in a system that I'm working on, the previous designer created a Member table and all of the information from their application was part of the Member table. It turns out that a Member might want to change data that was on their application, but we still need to track what the original application looked like, so the Application is really its own entity and the Member is an entity that might initially be populated from the Application. In short, do extensive data analysis, don't just start creating tables. A: Since there have been several posts advocating this now, I'll add one more thing... DON'T fall into the trap of putting ID columns on all of your tables. There are many VERY good reasons why modern database design theory uses real primary keys and they aren't strictly academic reasons. I've worked with databases that included hundreds of tables, many of which were multi-million row tables, with over 1000 concurrent users, and using real primary keys did not "break down". Using ID columns on all of your tables means that you will have to do multi-table joins to traverse across the database, which becomes a big hassle. It also tends to promote sloppy database design and even beyond that often results in problems with duplicate rows. Another issue is that when dealing with outside systems you now have to communicate these IDs around. There are places for surrogate IDs - type code tables and conceptual tables (for example, a table of system rules could use an ID if the rules don't have real-world identifiers). Using them everywhere is a mistake IMO. It's a long-standing debate, but that's my opinion on the matter, for what it's worth. A: Data Is Eternal. Processing Comes and Goes. Get the relational model to be a high-fidelity representation of the real world. This matters more than anything else. Processing will change and evolve for years. But your data -- and the data model -- can't evolve at the same pace and with the same flexibility. You can add processing, but you can't magically add information. You don't want to delete information (but you can ignore it). Get the model right. The entities and relationships in your diagrams should make rational sense to a casual non-technical user. Even the application programming should be simple, clear and precise. If you're struggling with the model, don't invent big, complex queries or (worse) stored procedures to work around the problems. Procedural work-arounds are a costly mistake. Understand what you have, what you want to do, and apply the YAGNI principle to pare things down to the essentials. A: (Assuming OLTP) Normalisation of your data-structures. (Performance de-normalisations can generally follow later where needed) http://en.wikipedia.org/wiki/Database_normalization A: I know this has been stated, but normalization, normalization, normalization is the key. If there is an instance where you feel that, for whatever reason, you need to store data in a non-normalized format, don't do it. This should be handled through views or in a separate reporting database. My other key advice is to avoid text/ntext fields wherever possible. A: "Thumb rule of Databases - Down Always Beats Across!" Examples: If you have a Customer table with columns for Mailing Address and Shipping address and Billing address...
Create a separate CustomerAddress table with an Address Type. If you have a CancellationDetails table with CancellationReason01, CancellationReason02, CancellationReason03... create a separate CancellationReason table A: Be practical. Keep in mind what your goals are and don't go crazy creating unnecessary complexity. I have some preferences: * *Keep the number of tables small *Prefer narrow tables over wide ones full of null values. *Normalization is generally good *Triggers are typically very painful But these are a means to an end (and are contradictory in many cases and require careful balancing); the main thing is to let the requirements drive the design. Your choice of what is a separate entity, and what is part of another entity, and what is cat food (not anything whose identity you care about) depends entirely on your requirements. A: Make sure you use constraints (CHECK, NOT NULL, FOREIGN KEY, PRIMARY KEY, and DEFAULT) to ensure that only correct data is stored in the database in the first place. You can always buy faster hardware but you cannot buy more correct data. A: Establish consistent naming standards up-front. It will save several minutes of unnecessary thinking in the long run. (This may read as irony, but I am serious.) And don't abbreviate anything, unless it is extremely common. Don't turn the database into a license-plate message guessing game. It's amazing what becomes not-obvious after a year. A: If you have queries that you're going to be running A LOT, make them into stored procedures. They will almost always run faster. A: If you'll be looking up rows by fields other than the primary key, make sure to index them. A: Is it for an object-oriented language? If so, try modelling your objects before the database. This will help you to focus on the model. A: Understand the requirements as much as you possibly can up front. Then design a logical schema that will only have to change if the requirements change, or if you migrate to a completely different kind of database, like one that doesn't use SQL. Then refine and extend your design into a physical design that takes into account your particular DBMS product, your volume, your load, and your speed requirements. Learn how to normalise, but also learn when to break the normalization rules. A: I strongly echo that normalization is critical, with tactical de-normalization to follow for performance or other maintainability reasons. However, if you're expecting to have more than just a few tables, I'd like to offer one caveat about normalization that will make your life a lot easier as the number of tables grows. The caveat is to make the primary key for each table a single numeric column (appropriate for your flavor of DB). In academic normalization, the idea is to combine the attributes (columns) of an entity (table) so that you can uniquely identify an instance of what is being described (row), and you can end up with a multi-column composite primary key. So then whenever you migrate that composite key as a foreign key to other tables, you end up duplicating those multiple columns in every table that references it. That might work for you if you only have half a dozen tables. But it falls apart quickly when you go much bigger than that. So instead of a multi-column composite primary key, go with a sequential numeric primary key even though that approach goes against some of the strict normalization teachings. A: Make sure that as much meta data as possible is encoded in the model.
It should be possible to infer almost any business rule or concept from just looking at the data model. This means, take care to pick names that reflect the reality of the users (but don't be afraid to change their perception of reality if it helps the model). Encode all constraints you can in the database. Don't rely on the application layer to only supply sensible data. Make sure that only sensible data can exist in the first place. Don't put aggregate data in the model. Keep the model as atomic as possible. Either aggregate on the fly or run regular aggregation jobs into aggregate tables. Pick a good partition between schemas. Some partitioning makes sense to do with foreign keys, and some by pure physical separation. A: Don't use a large set of columns as primary keys A: Remember that normalisation is only relative to what you are modelling. Perhaps you are modelling a collection of objects in your domain. Maybe you are recording a series of events, in which data are repeated because the same data happen to apply at more than one time. Don't mix up the two things. A: I agree that knowing your data and normalizing are both good. Something else I would suggest is to keep very large text fields in a separate table. For example, if you have a contract you might want to keep a lot of the information about the contract in one table but keep the legal (and very large) document in a separate table. Just put a reference in the main table to the legal document. A: I'd say an important thing to keep in mind is that the structure may change. So don't design yourself into a corner. Make sure whatever you do leaves you some "room" and even an avenue to migrate the data into a different structure some day. A: As much as you can, make the primary key a sequence-generated number.
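To make several of the suggestions above concrete (surrogate keys, NOT NULL, CHECK, FOREIGN KEY and DEFAULT constraints, and the "down beats across" address split), here is a minimal SQL sketch; the table and column names are invented for illustration, and the identity syntax shown is the SQL-standard form, which varies by vendor:

CREATE TABLE Customer (
    CustomerId  INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY, -- surrogate key
    Email       VARCHAR(254) NOT NULL UNIQUE,
    CreatedAt   TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE CustomerAddress (
    AddressId   INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    CustomerId  INTEGER NOT NULL REFERENCES Customer (CustomerId), -- no orphan addresses
    AddressType VARCHAR(10) NOT NULL
                CHECK (AddressType IN ('MAILING', 'SHIPPING', 'BILLING')),
    Line1       VARCHAR(100) NOT NULL
);

One CustomerAddress row per address replaces three groups of address columns on Customer, and the constraints keep bad data out regardless of which application writes to the tables.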
{ "language": "en", "url": "https://stackoverflow.com/questions/141126", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Does TCP/IP prevent packet replays? Does TCP/IP prevent multiple copies of the same packet from reaching the destination? Or is it up to the endpoint to layer idempotency logic above it? Please reference specific paragraphs from the TCP/IP specification if possible. A: It's the TCP stack's job to recover from duplicate packets: The TCP must recover from data that is damaged, lost, duplicated, or delivered out of order by the internet communication system. This is achieved by assigning a sequence number to each octet transmitted, and requiring a positive acknowledgment (ACK) from the receiving TCP. If the ACK is not received within a timeout interval, the data is retransmitted. At the receiver, the sequence numbers are used to correctly order segments that may be received out of order and to eliminate duplicates. Damage is handled by adding a checksum to each segment transmitted, checking it at the receiver, and discarding damaged segments. -- RFC 793 - Transmission Control Protocol, Section 1.5 However, if they're the same packets with new sequence numbers, then no. A: TCP uses sequence numbers to detect duplication in the case of retransmission, which will also prevent trivial replay attacks. From RFC 793, Section 3.3 - Sequence Numbers: A fundamental notion in the design is that every octet of data sent over a TCP connection has a sequence number. Since every octet is sequenced, each of them can be acknowledged. The acknowledgment mechanism employed is cumulative so that an acknowledgment of sequence number X indicates that all octets up to but not including X have been received. This mechanism allows for straight-forward duplicate detection in the presence of retransmission. Numbering of octets within a segment is that the first data octet immediately following the header is the lowest numbered, and the following octets are numbered consecutively. The duplicate detection will ensure that the same packet cannot be trivially retransmitted. Sequence numbers will also ensure that insertion (rather than replacement) of data in the data stream will be noticed, as further legitimate packets following forged packets will have duplicate sequence numbers, which will disrupt the data flow. This will likely cause those packets to be dropped as duplicates, which will probably break the protocol being used. More information about the original (1981) TCP/IP specification can be found in RFC 793, and the many other RFCs involving extensions or modifications to the TCP/IP protocol. A: Yes, the TCP layer prevents duplicate packets. The IP layer below it does not. Details in RFC 1122. A: You seem to be concerned about two different things: * *What guarantees does TCP's reliable delivery provide *Can an attacker affect my server process with a replay attack Answer to 1: TCP guarantees reliable, in-order delivery of a sequence of bytes. Whatever data the client application sends to TCP via write() will come out exactly the same during the server's read() call. Answer to 2: Replay attacks do not work well with TCP, since every connection depends on two random 32 bit numbers generated by the client and server respectively. For a replay attack to work, the attacker must guess the sequence number generated by the server for the fake connection it is initiating (theoretically, the attacker has a 1 / 2**32 chance to guess correctly). If the attacker guesses incorrectly, she will at worst cause some buffering of data in your OS.
Note that just because a replay attack doesn't work, nothing prevents an attacker from forming a legitimate connection with your server and transmitting whatever data stream she wants to your application. This is why it's important to always validate input. A: Layers below TCP can experience multiple packets or dropped packets. Layers above TCP do not experience repetition or dropped packets. A: I don't know about packet repetition, but I've never encountered it using TCP/IP, and I know that it does guarantee that the packets all arrive, and in the correct order, so I can't understand why it wouldn't. A: It really depends on how you are receiving your data - although technically the protocol should not give you duplicates (i.e. packets with the same tcp checksum), other factors could cause you to see duplicates - for example, the network hardware you are using; also if you are using sniffers to look at tcp streams, rather than just reading an open socket in your application, it's possible to get dup packets from the sniffers even if the actual tcp streams they were monitoring did not have dup packets. To give a real-world example: at the moment I'm working on some tcp analysis of internal networks for a major stock exchange, and the data I'm looking at is coming in from multiple sniffers and being spliced back together. So in pulling in the data, I've found that I need to do a number of pre-processing steps, including finding and removing duplicates. For example, in a stream I just read in, of approx 60,000 data packets, I have located and removed 95 duplicate packets. The strategy I take here is to keep a rolling window of the 10 most recent tcp checksums, and to ignore packets that match those checksums. Note this works well for PSH packets, but not so well for ACK packets - but I'm less concerned with those anyways. I've written a special collection for the purpose of tracking this rolling window of tcp checksums, which might be helpful to others: /// <summary> /// Combination of a double-linked-list and a hashset with a max bound; /// Works like a bounded queue where new incoming items force old items to be dequeued; /// Re-uses item containers to avoid GC'ing; /// Public Add() and Contains() methods are fully thread safe through a ReaderWriterLockSlim; /// Write() and Read() are assumed to be small helper extension methods that wrap /// EnterWriteLock/ExitWriteLock and EnterReadLock/ExitReadLock on ReaderWriterLockSlim. /// </summary> public class BoundedHashQueue<T> { private readonly int _maxSize = 100; private readonly HashSet<T> _hashSet = new HashSet<T>(); private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim(); private readonly Item _head; private readonly Item _tail; private int _currentCount = 0; public BoundedHashQueue(int maxSize) { _maxSize = maxSize; /* two distinct sentinel nodes keep the head/tail links consistent */ _head = new Item(); _tail = new Item(); _head.Next = _tail; _tail.Previous = _head; } private class Item { internal T Value; internal Item Next; internal Item Previous; } public void Add(T value) { _lock.Write(() => { Item item; if (_currentCount >= _maxSize) { /* recycle the oldest node instead of allocating a new one */ item = _tail.Previous; _tail.Previous = item.Previous; _tail.Previous.Next = _tail; _hashSet.Remove(item.Value); } else { item = new Item(); _currentCount++; } item.Value = value; item.Next = _head.Next; item.Next.Previous = item; item.Previous = _head; _head.Next = item; _hashSet.Add(value); }); } public bool Contains(T value) { return _lock.Read(() => _hashSet.Contains(value)); } } A: You don't fully understand the problem.
See this link: http://en.wikipedia.org/wiki/Transmission_Control_Protocol On this page it is written: "TCP timestamps, defined in RFC 1323, help TCP compute the round-trip time between the sender and receiver. Timestamp options include a 4-byte timestamp value, where the sender inserts its current value of its timestamp clock, and a 4-byte echo reply timestamp value, where the receiver generally inserts the most recent timestamp value that it has received. The sender uses the echo reply timestamp in an acknowledgment to compute the total elapsed time since the acknowledged segment was sent.[2] TCP timestamps are also used to help in the case where TCP sequence numbers encounter their 2^32 bound and "wrap around" the sequence number space. This scheme is known as Protect Against Wrapped Sequence numbers, or PAWS (see RFC 1323 for details)." Regards, Joint (Poland)
{ "language": "en", "url": "https://stackoverflow.com/questions/141128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Calculate timespan in JavaScript I have a .net 2.0 ascx control with start time and end time textboxes. The data is as follows: txtStart.Text = 09/19/2008 07:00:00 txtEnd.Text = 09/19/2008 05:00:00 I would like to calculate the total time (hours and minutes) in JavaScript then display it in a textbox on the page. A: function stringToDate(string) { var matches; if (matches = string.match(/^(\d{4,4})-(\d{2,2})-(\d{2,2}) (\d{2,2}):(\d{2,2}):(\d{2,2})$/)) { return new Date(matches[1], matches[2] - 1, matches[3], matches[4], matches[5], matches[6]); } else { return null; }; } function getTimeSpan(ticks) { var d = new Date(ticks); return { hour: d.getUTCHours(), minute: d.getMinutes(), second: d.getSeconds() } } var beginDate = stringToDate('2008-09-19 07:14:00'); var endDate = stringToDate('2008-09-19 17:35:00'); var sp = getTimeSpan(endDate - beginDate); alert("timeuse:" + sp.hour + " hour " + sp.minute + " minute " + sp.second + " second "); you can use getUTCHours() instead of Math.floor(n / 3600000) A: Once your textbox date formats are known in advance, you can use Matt Kruse's Date functions in Javascript to convert the two to a timestamp, subtract and then write to the resulting text box. Equally the JQuery Date Input code for stringToDate could be adapted for your purposes - the below takes a string in the format "YYYY-MM-DD" and converts it to a date object. The timestamp (getTime()) of these objects could be used for your calculations. stringToDate: function(string) { var matches; if (matches = string.match(/^(\d{4,4})-(\d{2,2})-(\d{2,2})$/)) { return new Date(matches[1], matches[2] - 1, matches[3]); } else { return null; }; } A: I took what @PConroy did and added to it by doing the calculations for you. I also added the regex to make sure the time is part of the string to create the date object. <html> <head> <script type="text/javascript"> function stringToDate(string) { var matches; if (matches = string.match(/^(\d{4,4})-(\d{2,2})-(\d{2,2}) (\d{2,2}):(\d{2,2}):(\d{2,2})$/)) { return new Date(matches[1], matches[2] - 1, matches[3], matches[4], matches[5], matches[6]); } else { return null; }; } //Convert duration from milliseconds to 0000:00:00.00 format function MillisecondsToDuration(n) { var hms = ""; var dtm = new Date(); dtm.setTime(n); var h = "000" + Math.floor(n / 3600000); var m = "0" + dtm.getMinutes(); var s = "0" + dtm.getSeconds(); var cs = "0" + Math.round(dtm.getMilliseconds() / 10); hms = h.substr(h.length-4) + ":" + m.substr(m.length-2) + ":"; hms += s.substr(s.length-2) + "." + cs.substr(cs.length-2); return hms; } var beginDate = stringToDate('2008-09-19 07:14:00'); var endDate = stringToDate('2008-09-19 17:35:00'); var n = endDate.getTime() - beginDate.getTime(); alert(MillisecondsToDuration(n)); </script> </head> <body> </body> </html> This is pretty rough, since I coded it up pretty fast, but it works. I tested it out. The alert box will display 0010:21:00.00 (HHHH:MM:SS.SS). Basically all you need to do is get the values from your text boxes. A: The answers above all assume string manipulation.
Here's a solution that works with pure date objects:
var start = new Date().getTime();
window.setTimeout(function(){
    var diff = new Date(new Date().getTime() - start);
    // this will log 0 hours, 0 minutes, 1 second
    console.log(diff.getUTCHours(), diff.getUTCMinutes(), diff.getUTCSeconds());
},1000);
(The UTC getters are used so the local time-zone offset doesn't skew the hour count.)
A: I googled for calculating a timespan in javascript and found this question on SO; unfortunately the question text and actual question (only needing hours and minutes) are not the same... so I think I arrived here in error. I did write an answer to the question title, however - so if anyone else wants something that prints out something like "1 year, and 15 minutes", then this is for you (note that it assumes Date has been extended with helpers such as addYears, addMonths, addDays, addHours and addMinutes, e.g. from a date library):
function formatTimespan(from, to) {
    var text = '',
        span = { y: 0, m: 0, d: 0, h: 0, n: 0 };

    function calcSpan(n, fnMod) {
        while (from < to) {
            // Modify the date, and check if the from now exceeds the to:
            from = from[fnMod](1);
            if (from <= to) {
                span[n] += 1;
            } else {
                from = from[fnMod](-1);
                return;
            }
        }
    }

    function appendText(n, unit) {
        if (n > 0) {
            text += ((text.length > 0) ? ', ' : '') + n.toString(10) + ' ' + unit + ((n === 1) ? '' : 's');
        }
    }

    calcSpan('y', 'addYears');
    calcSpan('m', 'addMonths');
    calcSpan('d', 'addDays');
    calcSpan('h', 'addHours');
    calcSpan('n', 'addMinutes');

    appendText(span.y, 'year');
    appendText(span.m, 'month');
    appendText(span.d, 'day');
    appendText(span.h, 'hour');
    appendText(span.n, 'minute');

    if (text.lastIndexOf(',') < 0) {
        return text;
    }
    return text.substring(0, text.lastIndexOf(',')) + ', and' + text.substring(text.lastIndexOf(',') + 1);
}
A: Use Math.floor(n / 3600000) instead of getUTCHours() or else you would lose the number of hours greater than 24. For example, if you have 126980000 milliseconds, this should translate to 0035:16:20.00 If you use getUTCHours() you get an incorrect string 0011:16:20.00 Better instead, use this (modifications denoted by KK-MOD):
function MillisecondsToDuration(n) {
    var hms = "";
    var dtm = new Date();
    dtm.setTime(n);
    var d = Math.floor(n / 3600000 / 24); // KK-MOD
    var h = "0" + (Math.floor(n / 3600000) - (d * 24)); // KK-MOD
    var m = "0" + dtm.getMinutes();
    var s = "0" + dtm.getSeconds();
    var cs = "0" + Math.round(dtm.getMilliseconds() / 10);
    hms = (d > 0 ? d + "T" : "") + h.substr(h.length - 2) + ":" + m.substr(m.length - 2) + ":"; // KK-MOD
    hms += s.substr(s.length - 2) + "." + cs.substr(cs.length - 2);
    return hms;
}
So now, 126980000 gets displayed as 1T11:16:20.00 which is 1 day 11 hours 16 minutes and 20 seconds
A: I like the K3 + KK-MOD approach, but I needed to show negative timespans, so I made the following modifications:
function MillisecondsToDuration(milliseconds) {
    var n = Math.abs(milliseconds);
    var hms = "";
    var dtm = new Date();
    dtm.setTime(n);
    var d = Math.floor(n / 3600000 / 24); // KK-MOD
    var h = "0" + (Math.floor(n / 3600000) - (d * 24)); // KK-MOD
    var m = "0" + dtm.getMinutes();
    var s = "0" + dtm.getSeconds();
    var cs = "0" + Math.round(dtm.getMilliseconds() / 10);
    hms = (milliseconds < 0 ? " - " : "");
    hms += (d > 0 ? d + "." : "") + h.substr(h.length - 2) + ":" + m.substr(m.length - 2) + ":"; // KK-MOD
    hms += s.substr(s.length - 2) + "." + cs.substr(cs.length - 2);
    return hms;
}
I also changed the 'T' separator to a '.' for my own formatting purposes. Now a negative value passed in, say -360000 (negative six minutes), will produce the following output: - 00:06:00.00
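For the question as actually asked (MM/DD/YYYY HH:MM:SS textbox values in, hours and minutes out), the pieces above combine into something like the following sketch. The element IDs are assumptions, since ASP.NET may rewrite control IDs on the client:
function parseUsDateTime(s) {
    // "09/19/2008 07:00:00" -> Date
    var m = s.match(/^(\d{2})\/(\d{2})\/(\d{4}) (\d{2}):(\d{2}):(\d{2})$/);
    return m ? new Date(m[3], m[1] - 1, m[2], m[4], m[5], m[6]) : null;
}

function showTimespan() {
    var start = parseUsDateTime(document.getElementById('txtStart').value);
    var end = parseUsDateTime(document.getElementById('txtEnd').value);
    var ms = Math.abs(end - start);
    var hours = Math.floor(ms / 3600000);            // avoids the 24-hour wrap
    var minutes = Math.floor((ms % 3600000) / 60000);
    document.getElementById('txtTotal').value = hours + ':' + (minutes < 10 ? '0' : '') + minutes;
}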
{ "language": "en", "url": "https://stackoverflow.com/questions/141136", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Why does Java not have block-scoped variable declarations? The following method does not work because the inner block declares a variable of the same name as one in the outer block. Apparently variables belong to the method or class in which they are declared, not to the block in which they are declared, so I therefore can't write a short little temporary block for debugging that happens to push a variable in the outer scope off into shadow just for a moment:
void methodName() {
    int i = 7;
    for (int j = 0; j < 10; j++) {
        int i = j * 2;
    }
}
Almost every block-scoped language I've ever used supported this, including trivial little languages that I wrote interpreters and compilers for in school. Perl can do this, as can Scheme, and even C. Even PL/SQL supports this! What's the rationale for this design decision for Java? Edit: as somebody pointed out, Java does have block-scoping. What's the name for the concept I'm asking about? I wish I could remember more from those language-design classes. :)
A: Well, strictly speaking, Java does have block-scoped variable declarations; so this is an error:
void methodName() {
    for (int j = 0; j < 10; j++) {
        int i = j * 2;
    }
    System.out.println(i); // error
}
Because 'i' doesn't exist outside the for block. The problem is that Java doesn't allow you to create a variable with the same name as another variable that was declared in an outer block of the same method. As other people have said, supposedly this was done to prevent bugs that are hard to identify.
A: Because it's not uncommon for writers to do this intentionally and then totally screw it up by forgetting that there are now two variables with the same name. They change the inner variable name, but leave code that uses the variable, which now unintentionally uses the previously-shadowed variable. This results in a program that still compiles, but executes buggily. Similarly, it's not uncommon to accidentally shadow variables and change the program's behavior. Unknowingly shadowing an existing variable can change the program as easily as unshadowing a variable as I mentioned above. There's so little benefit to allowing this shadowing that they ruled it out as too dangerous. Seriously, just call your new variable something else and the problem goes away.
A: I believe the rationale is that most of the time, that isn't intentional; it is a programming or logic flaw. In an example as trivial as yours, it's obvious, but in a large block of code, accidentally redeclaring a variable may not be obvious. ETA: it might also be related to exception handling in Java. I thought part of this question was discussed in a question related to why variables declared in a try section were not available in the catch/finally scopes.
A: It leads to bugs that are hard to spot, I guess. It's similar in C#. Pascal does not support this, since you have to declare variables above the function body.
A: The underlying assumption in this question is wrong. Java does have block-level scope. But it also has a hierarchy of scope, which is why you can reference i within the for loop, but not j outside of the for loop.
public void methodName() {
    int i = 7;
    for (int j = 0; j < 10; j++) {
        i = j * 2;
    }
    //this would cause a compilation error!
    j++;
}
I can't for the life of me figure out why you would want scoping to behave any other way. It'd be impossible to determine which i you were referring to inside the for loop, and I'd bet chances are 99.999% of the time you want to refer to the i inside the method.
A: Another reason: if this kind of variable declaration were allowed, people would want (need?) a way to access outer block variables. Maybe something like an "outer" keyword would be added:
void methodName() {
    int i = 7;
    for (int j = 0; j < 10; j++) {
        int i = outer.i * 2;
        if (i > 10) {
            int i = outer.outer.i * 2 + outer.i;
        }
    }
}
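For reference, the concept the question's edit asks about is usually called variable shadowing. A short sketch of what Java actually accepts (the class and variable names here are made up for illustration): shadowing a field with a local is legal; re-declaring a local that is already in scope is what the language forbids.
class ShadowDemo {
    private int i = 1; // a field

    void fieldShadowing() {
        int i = 2; // legal: a local may shadow a field
        System.out.println(i + this.i); // prints 3; this.i still reaches the field
    }

    void localShadowing() {
        int i = 7;
        for (int j = 0; j < 10; j++) {
            // int i = j * 2; // illegal: re-declares the local i from the enclosing block
            int k = j * 2;    // legal: just pick a different name
        }
    }
}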
{ "language": "en", "url": "https://stackoverflow.com/questions/141140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: SQL trace in Great Plains shows Invalid column name 'desSPRkmhBBCreh' This error seems to just pop up now and again. It is not restricted to a single table and even happens on tables it just created. Anybody else see this weird behavior? [Edit w/solution] It turns out that this query is used to determine if the table exists. Apparently it is much quicker to query an invalid column than to just check for a table. SQL, go figure. :)
A: Yes, it looks like someone else has seen it: http://microsoft-programming.hostweb.com/TopicMessages/microsoft.public.greatplains/1866812/1/Default.aspx Unfortunately, I can't find the knowledge base article they refer to. Victoria Yudin there says "Take a look at KB article 875229 - it addresses this exact question. Basically, this is a dummy value entered to save time in getting information from SQL and the error is expected."
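To make the pattern concrete, here is a sketch of the two approaches; the table name GL00100 is only an example:
-- Great Plains' trick: select a nonsense column. Error 207 (invalid column
-- name) can only come back if the table itself was found first.
SELECT desSPRkmhBBCreh FROM GL00100

-- The conventional existence check it stands in for:
IF OBJECT_ID('GL00100', 'U') IS NOT NULL
    PRINT 'table exists'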
{ "language": "en", "url": "https://stackoverflow.com/questions/141144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to find my Subversion server version number? I want to know if my server is running Subversion 1.5. How can I find that out? It would also be nice to know my SVN client version number. svn help hasn't been helpful. Note: I don't want my project's revision number, etc. This question is about the Subversion software itself.
A: One more option: If you have Firefox (I am using 14.0.1) and an SVN web interface: * *Open Tools->Web Developer->Web Console on a repo page *Refresh page *Click on the GET line *Look in the Response Headers section at the Server: line There should be an "SVN/1.7.4" string or similar there. Again, this will probably only work if you have "ServerTokens Full" as mentioned above.
A: There really isn't an easy way to find out what version of Subversion your server is running -- except to get onto the server and see for yourself. However, this may not be as big a problem as you may think. Subversion clients are where much of the grunt work is handled, and most versions of the Subversion clients can work with almost any version of the server. The last release where the server version really made a difference to the client was the change from release 1.4 to release 1.5 when merge tracking was added. Merge tracking was greatly improved in version 1.6, but that doesn't really affect the interactions between the client and server. Let's take the latest changes in Subversion 1.8: * *svn move is now a first class operation: Subversion finally understands that svn move is not a svn copy plus a svn delete. However, this is something that the client handles and doesn't really affect the server version. *svn merge --reintegrate deprecated: Again, as long as the server is at version 1.5 or greater this isn't an issue. *Property Inheritance: This is another 1.8 release update, but this will work with any Subversion server -- although Subversion servers running 1.8 will deliver better performance on inheritable properties. *Two new inheritable properties - svn:global-ignores and svn:auto-props: Alas! What we really wanted. A way to set up these two properties without depending upon the Subversion configuration file itself. However, this is a client-only issue, so it again doesn't matter what version of the server you're using. *gpg-agent memory caching: Another client-only feature. *fsfs performance enhancements and authz in-repository authentication. Nice features, but these work no matter what version of the client you're using. Of all the features, only one depends upon the version of the server being 1.5 or greater (and 1.4 has been obsolete for quite a while). The newer features of 1.8 will improve performance of your working copy, but the server being at revision 1.8 isn't necessary. You're much more affected by your client version than your server version. I know this isn't the answer you wanted (no official way to see the server version), but fortunately the server version doesn't really affect you that much.
A: On the server: svnserve --version in case of a svnserve-based configuration (svn:// and svn+xxx://). (For completeness.)
A: Try this:
ssh your_user@your_server svnserve --version
svnserve, version 1.3.1 (r19032) compiled May 8 2006, 07:38:44
I hope it helps.
A: Here's the simplest way to get the SVN server version. HTTP works even if your SVN repository requires HTTPS.
$ curl -X OPTIONS http://my-svn-domain/
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>...</head>
<body>...
<address>Apache/2.2.11 (Debian) DAV/2 SVN/1.5.6 PHP/5.2.9-4 ...</address> </body></html> A: For a svn+ssh configuration, use ssh to run svnserve --version on the host machine: $ ssh user@host svnserve --version It is necessary to run the svnserve command on the machine that is actually serving as the server. A: For an HTTP-based server there is a Python script to find the server version at: http://svn.apache.org/repos/asf/subversion/trunk/tools/client-side/server-version.py You can get the client version with `svn --version` A: If the Subversion server version is not printed in the HTML listing, it is available in the HTTP RESPONSE header returned by the server. You can get it using this shell command wget -S --no-check-certificate \ --spider 'http://svn.server.net/svn/repository' 2>&1 \ | sed -n '/SVN/s/.*\(SVN[0-9\/\.]*\).*/\1/p'; If the SVN server requires you provide a user name and password, then add the wget parameters --user and --password to the command like this wget -S --no-check-certificate \ --user='username' --password='password' \ --spider 'http://svn.server.net/svn/repository' 2>&1 \ | sed -n '/SVN/s/.*\(SVN[0-9\/\.]*\).*/\1/p'; A: To find the version of the subversion REPOSITORY you can: * *Look to the repository on the web and on the bottom of the page it will say something like: "Powered by Subversion version 1.5.2 (r32768)." *From the command line: <insert curl, grep oneliner here> If not displayed, view source of the page <svn version="1.6.13 (r1002816)" href="http://subversion.tigris.org/"> Now for the subversion CLIENT: svn --version will suffice A: Just use a web browser to go to the SVN address. Check the source code (Ctrl + U). Then you will find something like in the HTML code: <svn version="1.6. ..." ... A: Browse the repository with Firefox and inspect the element with Firebug. Under the NET tab, you can check the Header of the page. It will have something like: Server: Apache/2.2.14 (Win32) DAV/2 SVN/1.X.X A: If you use VisualSVN Server, you can find out the version number by several different means. Use VisualSVN Server Manager Follow these steps to find out the version via the management console: * *Start the VisualSVN Server Manager console. *See the Version at the bottom-right corner of the dashboard. If you click Version you will also see the versions of the components. Check the README.txt file Follow these steps to find out the version from the readme.txt file: * *Start notepad.exe. *Open the %VISUALSVN_SERVER%README.txt file. The first line shows the version number. A: Let's merge these responses: For REPOSITORY / SERVER (the original question): If able to access the Subversion server: * *From an earlier answer by Manuel, run the following on the SVN server: svnadmin --version If HTTP/HTTPS access: * *See the "powered by Subversion" line when accessing the server via a browser. *Access the repository via browser and then look for the version string embedded in the HTML source. From earlier answers by elviejo and jaredjacobs. Similarly, from ??, use your browser's developer tools (usually Ctrl + Shift + I) to read the full response. This is also the easiest (non-automated) way to deal with certificates and authorization - your browser does it for you. 
*Check the response tags (these are not shown in the HTML source), from an earlier answer by Christopher:
wget -S --spider 'http://svn.server.net/svn/repository' 2>&1 | sed -n '/SVN/s/.*\(SVN[0-9\/\.]*\).*/\1/p'
If svn:// or ssh+svn access * *From an earlier answer by Milen: svnserve --version (run on the svn server) *From an earlier answer by Glenn: ssh user@host svnserve --version
If GoogleCode SVN servers: Check out the current version in a FAQ: http://code.google.com/p/support/wiki/SubversionFAQ#What_version_of_Subversion_do_you_use?
For other custom SVN servers: TBD. Please edit to finish this answer.
For CLIENT (not the original question): svn --version
A: You can connect to your Subversion server using HTTP and find the version number in the HTTP header.
A: For Subversion 1.7 and above, the server doesn't provide a footer that indicates the server version. But you can run the following command to obtain the version from the response headers:
$ curl -s -D - http://svn.server.net/svn/repository
HTTP/1.1 401 Authorization Required
Date: Wed, 09 Jan 2013 03:01:43 GMT
Server: Apache/2.2.9 (Unix) DAV/2 SVN/1.7.4
Note that this also works on Subversion servers that you don't have authorization to access.
{ "language": "en", "url": "https://stackoverflow.com/questions/141146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "305" }
Q: How can I determine installed SQL Server instances and their versions? I'm trying to determine what instances of sql server/sql express I have installed (either manually or programmatically) but all of the examples are telling me to run a SQL query to determine this which assumes I'm already connected to a particular instance.
A: You could query this registry value to get the SQL version directly:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\90\Tools\ClientSetup\CurrentVersion
Alternatively you can query your instance name and then use sqlcmd with your instance name that you would like: To see your instance name:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names
Then execute this:
SELECT SERVERPROPERTY('productversion'), SERVERPROPERTY ('productlevel'), SERVERPROPERTY ('edition')
If you are using C++ you can use this code to get the registry information.
A: All of the instances installed should show up in the Services Snap-In in the Microsoft Management Console. To get the instance names, go to Start | Run | type Services.msc and look for all entries with "Sql Server (Instance Name)".
A: SQL Server permits applications to find SQL Server instances within the current network. The SqlDataSourceEnumerator class exposes this information to the application developer, providing a DataTable containing information about all the visible servers. This returned table contains a list of server instances available on the network that matches the list provided when a user attempts to create a new connection, and expands the drop-down list containing all the available servers on the Connection Properties dialog box. The results displayed are not always complete. In order to retrieve the table containing information about the available SQL Server instances, you must first retrieve an enumerator, using the shared/static Instance property:
using System.Data.Sql;

class Program
{
    static void Main()
    {
        // Retrieve the enumerator instance and then the data.
        SqlDataSourceEnumerator instance = SqlDataSourceEnumerator.Instance;
        System.Data.DataTable table = instance.GetDataSources();

        // Display the contents of the table.
        DisplayData(table);

        Console.WriteLine("Press any key to continue.");
        Console.ReadKey();
    }

    private static void DisplayData(System.Data.DataTable table)
    {
        foreach (System.Data.DataRow row in table.Rows)
        {
            foreach (System.Data.DataColumn col in table.Columns)
            {
                Console.WriteLine("{0} = {1}", col.ColumnName, row[col]);
            }
            Console.WriteLine("============================");
        }
    }
}
from msdn http://msdn.microsoft.com/en-us/library/a6t1z9x2(v=vs.80).aspx
A: One more option would be to run the SQL Server discovery report: go to the installation media of SQL Server, double-click setup.exe, and in the next screen go to Tools and click Discovery Report. This will show you all the instances present along with their features.
A:
-- T-SQL Query to find list of Instances Installed on a machine
DECLARE @GetInstances TABLE
( Value nvarchar(100),
  InstanceNames nvarchar(100),
  Data nvarchar(100))

Insert into @GetInstances
EXECUTE xp_regread
  @rootkey = 'HKEY_LOCAL_MACHINE',
  @key = 'SOFTWARE\Microsoft\Microsoft SQL Server',
  @value_name = 'InstalledInstances'

Select InstanceNames from @GetInstances
A: SQL Server Browser Service http://msdn.microsoft.com/en-us/library/ms181087.aspx
A: If you are interested in determining this in a script, you can try the following: sc \\server_name query | grep MSSQL Note: grep is part of the gnuwin32 tools.
A: This query should get you the server name and instance name: SELECT @@SERVERNAME, @@SERVICENAME
A: From the Windows command line, type: SC \\server_name query | find /I "SQL Server (" Where "server_name" is the name of any remote server on which you wish to display the SQL instances. This requires enough permissions of course.
A: I know this thread is a bit old, but I came across this thread before I found the answer I was looking for and thought I'd share. If you are using SQLExpress (or localdb) there is a simpler way to find your instance names. At a command line type: > sqllocaldb i This will list the instance names you have installed locally. So your full server name should include (localdb)\ in front of the instance name to connect. Also, sqllocaldb allows you to create new instances or delete them as well as configure them. See: SqlLocalDB Utility.
A: At a command line: SQLCMD -L or OSQL -L (Note: must be a capital L) This will list all the SQL Servers installed on your network. There are configuration options you can set to prevent a SQL Server from showing in the list. To do this... At a command line: svrnetcn In the enabled protocols list, select 'TCP/IP', then click properties. There is a check box for 'Hide server'.
A: I had the same problem. The "osql -L" command displayed only a list of servers but without instance names (only the instance of my local SQL Server was displayed). With Wireshark and sqlbrowser.exe (which can be found in the shared folder of your SQL installation) I found a solution for my problem. The local instance is resolved by a registry entry. The remote instances are resolved by UDP broadcast (port 1434) and SMB. Use "sqlbrowser.exe -c" to list the requests. My configuration uses 1 physical and 3 virtual network adapters. If I used the "osql -L" command the sqlbrowser displayed a request from one of the virtual adapters (which is in another network segment), instead of the physical one. osql selects the adapter by its metric. You can see the metric with the command "route print". For my configuration the routing table showed a lower metric for the virtual adapter than for the physical one.
So I changed the interface metric in the network properties by deselecting automatic metric in the advanced network settings. osql now uses the physical adapter.
A: The commands OSQL -L and SQLCMD -L will show you all instances on the network. If you want to have a list of all instances on the server and don't feel like doing scripting or programming, do this: * *Start Windows Task Manager *Tick the checkbox "Show processes from all users" or equivalent *Sort the processes by "Image Name" *Locate all sqlservr.exe images The instances should be listed in the "User Name" column as MSSQL$INSTANCE_NAME. And I went from thinking the poor server was running 63 instances to realizing it was running three (out of which one was behaving like a total bully with the CPU load...)
A: If you just want to see what's installed on the machine you're currently logged in to, I think the most straightforward manual process is to just open the SQL Server Configuration Manager (from the Start menu), which displays all the SQL Services (and only SQL services) on that hardware (running or not). This assumes SQL Server 2005, or greater; dotnetengineer's recommendation to use the Services Management Console will show you all services, and should always be available (if you're running earlier versions of SQL Server, for example). If you're looking for a broader discovery process, however, you might consider third party tools such as SQLRecon and SQLPing, which will scan your network and build a report of all SQL Service instances found on any server to which they have access. It's been a while since I've used tools like this, but I was surprised at what they found (namely, a handful of instances that I didn't know existed). YMMV. You might Google for details, but I believe this page has the relevant downloads: http://www.sqlsecurity.com/Tools/FreeTools/tabid/65/Default.aspx
A: I just installed SQL Server 2008, but I was unable to connect to any database instances. The commands @G Mastros posted listed no active instances. So I looked in services and found that the SQL Server Agent was disabled. I fixed it by setting it to automatic and then starting it.
A: I had this same issue when I was assessing 100+ servers. I had a script written in C# that browses the service names for ones containing "SQL". When instances are installed on the server, SQL Server adds a service for each instance, with the instance name in the service name. It may vary between versions like 2000 and 2008, but for sure there is a service with the instance name. I take the service name and obtain the instance name from it.
Here is the sample code used with the WMI query result:
if (ServiceData.DisplayName == "MSSQLSERVER" || ServiceData.DisplayName == "SQL Server (MSSQLSERVER)")
{
    InstanceData.Name = "DEFAULT";
    InstanceData.ConnectionName = CurrentMachine.Name;
    CurrentMachine.ListOfInstances.Add(InstanceData);
}
else if (ServiceData.DisplayName.Contains("SQL Server (") == true)
{
    InstanceData.Name = ServiceData.DisplayName.Substring(
        ServiceData.DisplayName.IndexOf("(") + 1,
        ServiceData.DisplayName.IndexOf(")") - ServiceData.DisplayName.IndexOf("(") - 1
    );
    InstanceData.ConnectionName = CurrentMachine.Name + "\\" + InstanceData.Name;
    CurrentMachine.ListOfInstances.Add(InstanceData);
}
else if (ServiceData.DisplayName.Contains("MSSQL$") == true)
{
    InstanceData.Name = ServiceData.DisplayName.Substring(
        ServiceData.DisplayName.IndexOf("$") + 1,
        ServiceData.DisplayName.Length - ServiceData.DisplayName.IndexOf("$") - 1
    );
    InstanceData.ConnectionName = CurrentMachine.Name + "\\" + InstanceData.Name;
    CurrentMachine.ListOfInstances.Add(InstanceData);
}
A: This will get the instances of SQL Server:
reg query "HKLM\Software\Microsoft\Microsoft SQL Server\Instance Names\SQL"
or use SQLCMD -L
A: Here is a simple method: go to Start, then Programs, then Microsoft SQL Server 2005, then Configuration Tools, then SQL Server Configuration Manager, then SQL Server 2005 Network Configuration. Here you can locate all the instances installed on your machine.
A: I know it's an old post, but I found a nice solution with PowerShell where you can find the SQL instances installed on a local or remote machine, including the version, and it can be extended to get other properties.
$MachineName = '.' # Default: the local computer. Replace . with a server name for a remote computer.
$reg = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey('LocalMachine', $MachineName)
$regKey = $reg.OpenSubKey("SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL")
$values = $regKey.GetValueNames()
$values | ForEach-Object {
    $value = $_
    $inst = $regKey.GetValue($value)
    $path = "SOFTWARE\Microsoft\Microsoft SQL Server\" + $inst + "\MSSQLServer\CurrentVersion"
    #write-host $path
    $version = $reg.OpenSubKey($path).GetValue("CurrentVersion")
    write-host "Instance" $value
    write-host "Version" $version
}
A: If you're within SSMS you might find it easier to use: SELECT @@Version
{ "language": "en", "url": "https://stackoverflow.com/questions/141154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "251" }
Q: How can I determine if a different process id is running using Java or JRuby on Linux? I need to see if a given process id is running, and it must work in either Java or JRuby (preferably a Ruby solution). It can be system dependent for Linux (specifically Debian and/or Ubuntu). I already have the PID I am looking for, just need to see if it is currently running. UPDATE: Thanks for all the responses everyone! I appreciate it, however it's not QUITE what I'm looking for... I am hoping for something in a standard Ruby library (or Java, but preferably Ruby)... if no such library call exists, I will probably stick with the procfs solution I already have.
A: Unix has a special feature of the kill system call around signal zero. Error checking is performed, but no signal is sent.
def pid_exists?(pid)
  system "kill -0 #{pid}"
  return $? == 0
end
One caveat: this won't detect processes with that pid that you don't have permission to signal.
A: From my answer to this question, I was thinking of just using procfs again, by checking if the given directory exists via File.exist? "/proc/#{pid}". This worked in jirb:
irb(main):001:0> File.exist? "/proc/5555"
=> false
irb(main):002:0> File.exist? "/proc/7677"
=> true
However, I would still prefer to use a method that specifically exists to detect if a process is running... like Process.exist?(pid)... which unfortunately doesn't exist that I've seen.
A: Darron's comment was spot on, but rather than calling the "kill" binary, you can just use Ruby's Process.kill method with the 0 signal:
#!/usr/bin/ruby
pid = ARGV[0].to_i
begin
  Process.kill(0, pid)
  puts "#{pid} is running"
rescue Errno::EPERM # changed uid
  puts "No permission to query #{pid}!";
rescue Errno::ESRCH
  puts "#{pid} is NOT running."; # or zombied
rescue
  puts "Unable to determine status for #{pid} : #{$!}"
end
[user@host user]$ ./is_running.rb 14302
14302 is running
[user@host user]$ ./is_running.rb 99999
99999 is NOT running.
[user@host user]$ ./is_running.rb 37
No permission to query 37!
[user@host user]$ sudo ./is_running.rb 37
37 is running
Reference: http://pleac.sourceforge.net/pleac_ruby/processmanagementetc.html
A: I can't speak for JRuby, but in Java, the only way to check is if you launched the process from Java (in which case you would have an instance of Process that you could do things with).
A: You'll probably want to double-check for the JVM that you're using, but if you send a SIGQUIT signal (kill -3, I believe; I don't have a terminal handy), that should generate a Javacore file which will have stack traces of the in-use threads; check for JRuby packages in that file. It shouldn't terminate anything, but as always be careful sending signals.
A: If you don't mind creating a whole new process then this lazy way should work:
def pid_exists?(pid)
  system "ps -p #{pid} > /dev/null"
  return $? == 0
end
For most variations of ps, it should return 0 on success and non-zero on error. The usual error with the usage above will be not finding the process with the given PID. The version of ps I have under Ubuntu returns 256 in this case. You could also use Process.kill to send a signal of 0 to the process (signal 0 indicates if a signal may be sent), but that seems to only work if you own the process you're sending the signal to (or otherwise have permissions to send it signals).
A: You could use the command line tool jps which comes with your Java installation. jps lists all Java processes of a user. E.g.
>jps -l
5960 org.jruby.Main
2124 org.jruby.Main
5376 org.jruby.Main
4428 sun.tools.jps.Jps
Or if you need to get the results into your script you could use %x[..]:
>> result = %x[jps -l]
=> "5960 org.jruby.Main\n2264 sun.tools.jps.Jps\n2124 org.jruby.Main\n5376 org.jruby.Main\n"
>> p result
"5960 org.jruby.Main\n2264 sun.tools.jps.Jps\n2124 org.jruby.Main\n5376 org.jruby.Main\n"
=> nil
{ "language": "en", "url": "https://stackoverflow.com/questions/141162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: C# Dynamically created LinkButton Command Event Handler So I have a weird situation here... I have a System.Web.UI.WebControls.WebParts.EditorPart class. It renders a "Search" button; when you click this button, its clickHandler method does a DB search, and dynamically creates a LinkButton for each row it returns, sets the CommandName and CommandArgument properties, adds a CommandEventHandler method, then adds the LinkButton control to the page. The problem is, when you click a LinkButton, its CommandEventHandler method is never called; it looks like the page just posts back to where it was before the ORIGINAL "Search" button was pressed. I have seen postings saying that you need to add the event handlers in OnLoad() or some other early method, but my LinkButtons haven't even been created until the user tells us what to search for and hits the "Search" button... Any ideas on how to deal with this? Thanks!
A: This is my favorite trick :) Our scenario is to first render a control. Then, using some input from the user, render further controls and have them respond to events. The key here is state - you need to know the state of the control when it arrives at PostBack - so we use ViewState. The issue then becomes a chicken-and-egg problem; ViewState isn't available until after the LoadViewState() call, but you must create the controls before that call to have the events fired correctly. The trick is to override LoadViewState() and SaveViewState() so we can control things. (note that the code below is rough, from memory and probably has issues)
private string searchQuery = null;

private void SearchButton(object sender, EventArgs e)
{
    searchQuery = searchBox.Text;
    var results = DataLayer.PerformSearch(searchQuery);
    CreateLinkButtonControls(results);
}

// We save both the base state object, plus our query string. Everything here must be serializable.
protected override object SaveViewState()
{
    object baseState = base.SaveViewState();
    return new object[] { baseState, searchQuery };
}

// The parameter to this method is the exact object we returned from SaveViewState().
protected override void LoadViewState(object savedState)
{
    object[] stateArray = (object[])savedState;
    searchQuery = stateArray[1] as string;

    // Re-run the query
    var results = DataLayer.PerformSearch(searchQuery);

    // Re-create the exact same control tree as at the point of SaveViewState above.
    // It must be the same otherwise things will break.
    CreateLinkButtonControls(results);

    // Very important - load the rest of the ViewState, including our controls above.
    base.LoadViewState(stateArray[0]);
}
A: You need to re-add the dynamically created controls in OnLoad, so that they are in the page hierarchy and can fire their events.
A:
LinkButton link = new LinkButton();
link.Command += new CommandEventHandler(LinkButton1_Command);

protected void LinkButton1_Command(object sender, CommandEventArgs e)
{
    try
    {
        System.Threading.Thread.Sleep(300);
        if (e.CommandName == "link")
        {
            //////////
        }
    }
    catch
    {
    }
}
A: A dirty hack I just came up with is to create dummy LinkButtons with the same IDs as the real buttons. So let's say you are going to create a LinkButton "foo" at Pre_Render (which is too late); then also create a dummy foo at Page_Load:
var link = new LinkButton();
link.ID = "foo";
link.Click += fooEventHandler;
dummyButtons.Controls.Add(link);
(Where "dummyButtons" is just a PlaceHolder on the page with Visibility set to false.) It's ugly, but it works.
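To make the "re-create the controls early" advice concrete, here is a minimal sketch; the control, helper and type names (resultsPlaceHolder, CreateLinkButtons, SearchRow) are illustrative assumptions, not from the original post. The essential point is that on every postback the LinkButtons are rebuilt with the same IDs they had when rendered, and their handlers are wired before the postback events are raised:
protected void Page_Load(object sender, EventArgs e)
{
    // Rebuild on every request, including postbacks, from state the page
    // can reach this early (here ViewState, stored by the search handler).
    string query = ViewState["searchQuery"] as string;
    if (query != null)
        CreateLinkButtons(DataLayer.PerformSearch(query));
}

private void CreateLinkButtons(IEnumerable<SearchRow> rows)
{
    int i = 0;
    foreach (SearchRow row in rows)
    {
        LinkButton link = new LinkButton();
        link.ID = "result" + i++;          // deterministic ID, same every request
        link.Text = row.Title;
        link.CommandName = "open";
        link.CommandArgument = row.Id.ToString();
        link.Command += Result_Command;    // wired before the event is dispatched
        resultsPlaceHolder.Controls.Add(link);
    }
}

private void Result_Command(object sender, CommandEventArgs e)
{
    // e.CommandArgument identifies the clicked row.
}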
{ "language": "en", "url": "https://stackoverflow.com/questions/141169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I detect a keyboard modifier in a bookmarklet? Is there a way to detect if the user is holding down the shift key (or other modifier keys) when executing a javascript bookmarklet? In my tests of Safari 3.1 and Firefox 3, window.event is always undefined.
A: window.event is IE-only. Event objects are passed to an event listener as an argument in Firefox and Safari. So you can tell in IE, but not in any other popular browser.
A: If you're looking for a way to detect the modifier state while the bookmarklet is being physically clicked, no, there is no way. Since the bookmarklet is positioned outside of any page (this area is generally called the browser "chrome" - which is confusing since there's now a browser with that name) it's not possible to detect JavaScript-related events there. That being said, if you created this as a Firefox extension then you would have access to event information, JavaScript, and keyboard modifiers. But that doesn't appear to be what you're looking for.
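One workaround consistent with the answers above, sketched here as an assumption rather than a tested recipe: since the bookmarklet's own activation happens in browser chrome, have the bookmarklet install a one-shot listener in the page and read the modifier flags from the next in-page event, where a real event object is available (collapse to one line when saving it as a bookmarklet):
javascript:(function () {
  function onClick(e) {
    document.removeEventListener('click', onClick, true);
    // Modifier flags live on the event object handed to in-page listeners.
    alert('shift=' + e.shiftKey + ' ctrl=' + e.ctrlKey + ' alt=' + e.altKey);
  }
  document.addEventListener('click', onClick, true);
})();
This only reports the modifiers held during a later click inside the page, not those held while the bookmarklet itself was activated.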
{ "language": "en", "url": "https://stackoverflow.com/questions/141198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How to best handle per-Model database connections with ActiveRecord? I'd like the canonical way to do this. My Google searches have come up short. I have one ActiveRecord model that should map to a different database than the rest of the application. I would like to store the new configurations in the database.yml file as well. I understand that establish_connection should be called, but it's not clear where. Here's what I've got so far, and it doesn't work:
class Foo < ActiveRecord::Base
  establish_connection(('foo_' + ENV['RAILS_ENV']).intern)
end
A: Heh. I was right! More cleanly:
class Foo < ActiveRecord::Base
  establish_connection "foo_#{ENV['RAILS_ENV']}"
end
Great post at pragedave.pragprog.com.
A: Also, it is a good idea to subclass your model that uses a different database, such as:
class AnotherBase < ActiveRecord::Base
  self.abstract_class = true
  establish_connection "anotherbase_#{RAILS_ENV}"
end
And in your model:
class Foo < AnotherBase
end
This is useful when you later need to add more models that access the same alternate database.
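For completeness, the database.yml entries that the code above expects would look something like the following sketch; the adapter, host and credentials are placeholders:
# config/database.yml
development:
  adapter: mysql
  database: app_development

foo_development:
  adapter: mysql
  host: other-db.example.com
  database: foo_development
  username: foo
  password: secret
Each environment (development, test, production) needs its own foo_* entry, since the model resolves the key from RAILS_ENV at load time.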
{ "language": "en", "url": "https://stackoverflow.com/questions/141201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: AppDomain And the Current Directory I have a class that utilizes a directory swap method for the Environment.CurrentDirectory. The code looks something like this:
var str = Environment.CurrentDirectory;
Environment.CurrentDirectory = Path.GetDirectoryName(pathToAssembly);
var assembly = Assembly.Load(Path.GetFileNameWithoutExtension(pathToAssembly));
Environment.CurrentDirectory = str;
As with my earlier post, we are using this directory switching method to allow for loading of the specified assembly as well as any referenced assemblies and unmanaged assemblies. The problem I am having is that this function is being run in two separate AppDomains. In AppDomain A (an AppDomain I create) the code works fine. In AppDomain B (the default AppDomain) it throws FileNotFoundException. For both of the calls I am trying to load the same assembly. Any clue why this would be the case?
A: This post suggests that you can't change the search path of the primary AppDomain once it is loaded -- you have to set it in the config file -- and has a number of suggestions, though they all boil down to "you can't do it in the primary AppDomain".
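One common workaround in the default AppDomain, sketched here under the assumption that it fits the poster's scenario, is to hook the AppDomain.AssemblyResolve event instead of relying on the probing path (pluginDir is a placeholder for wherever the extra assemblies live):
// Hook once at startup, before the first load that can fail.
string pluginDir = Path.GetDirectoryName(pathToAssembly);
AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
{
    // args.Name is a full assembly name like "Foo, Version=1.0.0.0, ...".
    string file = new AssemblyName(args.Name).Name + ".dll";
    string candidate = Path.Combine(pluginDir, file);
    return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
};
Note this only helps with managed references; unmanaged DLLs are still resolved by the operating system's own search rules.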
{ "language": "en", "url": "https://stackoverflow.com/questions/141202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: When would I need a SecureString in .NET? I'm trying to grok the purpose of .NET's SecureString. From MSDN: An instance of the System.String class is both immutable and, when no longer needed, cannot be programmatically scheduled for garbage collection; that is, the instance is read-only after it is created and it is not possible to predict when the instance will be deleted from computer memory. Consequently, if a String object contains sensitive information such as a password, credit card number, or personal data, there is a risk the information could be revealed after it is used because your application cannot delete the data from computer memory. A SecureString object is similar to a String object in that it has a text value. However, the value of a SecureString object is automatically encrypted, can be modified until your application marks it as read-only, and can be deleted from computer memory by either your application or the .NET Framework garbage collector. The value of an instance of SecureString is automatically encrypted when the instance is initialized or when the value is modified. Your application can render the instance immutable and prevent further modification by invoking the MakeReadOnly method. Is the automatic encryption the big payoff? And why can't I just say:
SecureString password = new SecureString("password");
instead of
SecureString pass = new SecureString();
foreach (char c in "password".ToCharArray())
    pass.AppendChar(c);
What aspect of SecureString am I missing?
A: One of the big benefits of a SecureString is that it is supposed to avoid the possibility of your data being stored to disk due to page caching. If you have a password in memory and then load a large program or data set, your password may get written to the swap file as your program is paged out of memory. With a SecureString, at least the data will not be sitting around indefinitely on your disk in clear text.
A: I guess it's because the string is meant to be secure, i.e. a hacker should not be able to read it. If you initialize it with a string, the hacker could read the original string.
A: Well, as the description states, the value is stored encrypted, which means that a memory dump of your process won't reveal the string's value (without some fairly serious work). The reason you can't just construct a SecureString from a constant string is because then you would have an unencrypted version of the string in memory. Limiting you to creating the string in pieces reduces the risk of having the whole string in memory at once.
A: I would stop using SecureString. It looks like the product group is dropping support for it, and may possibly even pull it in the future - https://github.com/dotnet/apireviews/tree/master/2015-07-14-securestring . We should remove encryption from SecureString across all platforms in .NET Core - We should obsolete SecureString - We probably shouldn't expose SecureString in .NET Core
A: Edit: Don't use SecureString Current guidance now says the class should not be used. The details can be found at this link: https://github.com/dotnet/platform-compat/blob/master/docs/DE0001.md From the article: DE0001: SecureString shouldn't be used Motivation * *The purpose of SecureString is to avoid having secrets stored in the process memory as plain text. *However, even on Windows, SecureString doesn't exist as an OS concept. * *It just makes the window getting the plain text shorter; it doesn't fully prevent it as .NET still has to convert the string to a plain text representation.
*The benefit is that the plain text representation doesn't hang around as an instance of System.String -- the lifetime of the native buffer is shorter. *The contents of the array are unencrypted except on .NET Framework. * *In .NET Framework, the contents of the internal char array are encrypted. .NET doesn't support encryption in all environments, either due to missing APIs or key management issues. Recommendation Don't use SecureString for new code. When porting code to .NET Core, consider that the contents of the array are not encrypted in memory. The general approach of dealing with credentials is to avoid them and instead rely on other means to authenticate, such as certificates or Windows authentication.
End Edit: Original Summary below
Lots of great answers; here's a quick synopsis of what has been discussed. Microsoft has implemented the SecureString class in an effort to provide better security with sensitive information (like credit cards, passwords, etc.). It automatically provides: * *encryption (in case of memory dumps or page caching) *pinning in memory *ability to mark as read-only (to prevent any further modifications) *safe construction by NOT allowing a constant string to be passed in Currently, SecureString is limited in use but expect better adoption in the future. Based on this information, the constructor of the SecureString should not just take a string and slice it up into a char array, as having the string spelled out defeats the purpose of SecureString. Additional info: * *A post from the .NET Security blog talking about much the same as covered here. *And another one revisiting it and mentioning a tool that CAN dump the contents of the SecureString. Edit: I found it tough to pick the best answer as there's good information in many; too bad there are no assisted answer options.
A: Short Answer why can't I just say:
SecureString password = new SecureString("password");
Because now you have the password in memory with no way to wipe it - which is exactly the point of SecureString.
Long Answer The reason SecureString exists is because you cannot use ZeroMemory to wipe sensitive data when you're done with it. It exists to solve an issue that exists because of the CLR. In a regular native application you would call SecureZeroMemory: Fills a block of memory with zeros. Note: SecureZeroMemory is identical to ZeroMemory, except the compiler won't optimize it away. The problem is that you can't call ZeroMemory or SecureZeroMemory inside .NET. And in .NET strings are immutable; you can't even overwrite the contents of the string like you can do in other languages:
//Wipe out the password
for (int i=0; i<password.Length; i++)
    password[i] = '\0';
So what can you do? How do we provide the ability in .NET to wipe a password, or credit card number, from memory when we're done with it? The only way it can be done would be to place the string in some native memory block, where you can then call ZeroMemory. A native memory object such as: * *a BSTR *an HGLOBAL *CoTaskMem unmanaged memory SecureString gives the lost ability back In .NET, Strings cannot be wiped when you are done with them: * *they are immutable; you cannot overwrite their contents *you cannot Dispose of them *their cleanup is at the mercy of the garbage collector SecureString exists as a way to pass around strings safely, and be able to guarantee their cleanup when you need to.
You asked the question: why can't I just say:
SecureString password = new SecureString("password");
Because now you have the password in memory, with no way to wipe it. It's stuck there until the CLR happens to decide to re-use that memory. You've put us right back where we started: a running application with a password we can't get rid of, and where a memory dump (or Process Monitor) can see the password. SecureString uses the Data Protection API to store the string encrypted in memory; that way the string will not exist in swapfiles, crash dumps, or even in the local variables window with a colleague looking over your shoulder.
How do I read the password? Then the question is: how do I interact with the string? You absolutely don't want a method like:
String connectionString = secureConnectionString.ToString()
because now you're right back where you started - a password you cannot get rid of. You want to force developers to handle the sensitive string correctly - so that it can be wiped from memory. That is why .NET provides three handy helper functions to marshal a SecureString into unmanaged memory: * *SecureStringToBSTR (freed with ZeroFreeBSTR) *SecureStringToCoTaskMemUnicode (freed with ZeroFreeCoTaskMemUnicode) *SecureStringToGlobalAllocUnicode (freed with ZeroFreeGlobalAllocUnicode) You convert the string into an unmanaged memory blob, handle it, and then wipe it again. Some APIs accept SecureStrings. For example, in ADO.NET 4.5 the SqlConnection.Credential property takes a SqlCredential:
SqlCredential cred = new SqlCredential(userid, password); //password is a SecureString
SqlConnection conn = new SqlConnection(connectionString);
conn.Credential = cred;
conn.Open();
You can also change the password within a Connection String:
SqlConnection.ChangePassword(connectionString, cred, newPassword);
And there are a lot of places inside .NET where they continue to accept a plain String for compatibility purposes, then quickly turn around and put it into a SecureString.
How to put text into the SecureString? This still leaves the problem: how do I get a password into the SecureString in the first place? This is the challenge, but the point is to get you thinking about security. Sometimes the functionality is already provided for you. For example, the WPF PasswordBox control can return you the entered password as a SecureString directly: PasswordBox.SecurePassword Property Gets the password currently held by the PasswordBox as a SecureString. This is helpful because everywhere you used to pass around a raw string, you now have the type system complaining that SecureString is incompatible with String. You want to go as long as possible before having to convert your SecureString back into a regular string. Converting a SecureString is easy enough: * *SecureStringToBSTR *PtrToStringBSTR as in:
private static string CreateString(SecureString secureString)
{
    IntPtr intPtr = IntPtr.Zero;
    if (secureString == null || secureString.Length == 0)
    {
        return string.Empty;
    }
    string result;
    try
    {
        intPtr = Marshal.SecureStringToBSTR(secureString);
        result = Marshal.PtrToStringBSTR(intPtr);
    }
    finally
    {
        if (intPtr != IntPtr.Zero)
        {
            Marshal.ZeroFreeBSTR(intPtr);
        }
    }
    return result;
}
They just really don't want you doing it. But how do I get a string into a SecureString? Well, what you need to do is stop having a password in a String in the first place. You needed to have it in something else. Even a Char[] array would be helpful.
That's when you can append each character and wipe the plaintext when you're done:
for (int i=0; i < PasswordArray.Length; i++)
{
    password.AppendChar(PasswordArray[i]);
    PasswordArray[i] = (Char)0;
}
You need your password stored in some memory that you can wipe. Load it into the SecureString from there. tl;dr: SecureString exists to provide the equivalent of ZeroMemory. Some people don't see the point in wiping the user's password from memory when a device is locked, or wiping keystrokes from memory after they've authenticated. Those people do not use SecureString.
As others have said, the reason why you have to do character appending instead of one flat instantiation is because, in the background, passing "password" to the constructor of SecureString would put that "password" string in memory, defeating the purpose of SecureString. By appending, you are only putting one character at a time into memory; the characters are likely not to be physically adjacent to each other, making it much harder to reconstruct the original string. I could be wrong here but that's how it was explained to me. The purpose of the class is to prevent secure data from being exposed via a memory dump or similar tool.
A: MS found that in certain cases where the server (desktop, whatever) crashed, the runtime environment would do a memory dump exposing the contents of what was in memory. SecureString encrypts it in memory to prevent the attacker from being able to retrieve the contents of the string.
A: Some parts of the framework that currently use SecureString: * *WPF's System.Windows.Controls.PasswordBox control keeps the password as a SecureString internally (exposed as a copy through PasswordBox::SecurePassword) *The System.Diagnostics.ProcessStartInfo::Password property is a SecureString *The constructor for X509Certificate2 takes a SecureString for the password The main purpose is to reduce the attack surface, rather than eliminate it. SecureStrings are "pinned" in RAM so the Garbage Collector won't move them around or make copies of them. It also makes sure the plain text won't get written to the swap file or in core dumps. The encryption is more like obfuscation and won't stop a determined hacker, though, who would be able to find the symmetric key used to encrypt and decrypt it. As others have said, the reason you have to create a SecureString character-by-character is because of the first obvious flaw of doing otherwise: you presumably have the secret value as a plain string already, so what's the point? SecureStrings are the first step in solving a Chicken-and-Egg problem, so even though most current scenarios require converting them back into regular strings to make any use of them at all, their existence in the framework now means better support for them in the future - at least to a point where your program doesn't have to be the weak link.
A: Another use case is when you are working with payment applications (POS) and you simply can't use immutable data structures to store sensitive data, because you are a careful developer. For instance: if I store sensitive card data or authorisation metadata in an immutable string, there will always be a window in which this data stays in memory for a significant amount of time after it has been discarded, and I cannot simply overwrite it. Another huge advantage is that such sensitive data is kept encrypted in memory.
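Tying the thread together, here is a minimal sketch (not the MSDN sample itself, and the method name is made up) of getting a password into a SecureString without ever materializing a System.String:
using System;
using System.Security;

static SecureString ReadPassword()
{
    SecureString secure = new SecureString();
    ConsoleKeyInfo key;
    // Intercept keystrokes so they never echo or land in a managed string.
    while ((key = Console.ReadKey(true)).Key != ConsoleKey.Enter)
    {
        if (key.Key == ConsoleKey.Backspace)
        {
            if (secure.Length > 0) secure.RemoveAt(secure.Length - 1);
        }
        else if (key.KeyChar != '\0')
        {
            secure.AppendChar(key.KeyChar);
        }
    }
    secure.MakeReadOnly();
    return secure;
}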
{ "language": "en", "url": "https://stackoverflow.com/questions/141203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "185" }
Q: What is the proper way to ensure a SQL connection is closed when an exception is thrown? I use a pattern that looks something like this often. I'm wondering if this is alright or if there is a best practice that I am not applying here. Specifically, I'm wondering: in the case that an exception is thrown, is the code that I have in the finally block enough to ensure that the connection is closed appropriately?
public class SomeDataClass : IDisposable
{
    private SqlConnection _conn;

    //constructors and methods

    private void DoSomethingWithTheSqlConnection()
    {
        //some code excluded for brevity
        try
        {
            using (SqlCommand cmd = new SqlCommand(SqlQuery.CountSomething, _SqlConnection))
            {
                _SqlConnection.Open();
                countOfSomething = Convert.ToInt32(cmd.ExecuteScalar());
            }
        }
        finally
        {
            //is this the best way?
            if (_SqlConnection.State == ConnectionState.Closed)
                _SqlConnection.Close();
        }
        //some code excluded for brevity
    }

    public void Dispose()
    {
        _conn.Dispose();
    }
}
A: The .NET Framework maintains a connection pool for a reason. Trust it! :) You don't have to write so much code just to connect to the database and release the connection. You can just use the 'using' statement and rest assured that IDbConnection.Dispose() will close the connection for you. Highly elaborate 'solutions' tend to result in buggy code. Simple is better.
A: MSDN Docs make this pretty clear... * *The Close method rolls back any pending transactions. It then releases the connection to the connection pool, or closes the connection if connection pooling is disabled. You probably haven't (and don't want to) disable connection pooling, so the pool ultimately manages the state of the connection after you call "Close". This could be important as you may be confused looking from the database server side at all the open connections. * *An application can call Close more than one time. No exception is generated. So why bother testing for Closed? Just call Close(). * *Close and Dispose are functionally equivalent. This is why a using block results in a closed connection. using calls Dispose for you. * *Do not call Close or Dispose on a Connection, a DataReader, or any other managed object in the Finalize method of your class. Important safety tip. Thanks, Egon.
A: Wrap your database handling code inside a "using":
using (SqlConnection conn = new SqlConnection (...))
{
    // Whatever happens in here, the connection is
    // disposed of (closed) at the end.
}
A: I'm guessing that by _SqlConnection.State == ConnectionState.Closed you meant != This will certainly work. I think it is more customary to contain the connection object itself inside a using statement, but what you have is good if you want to reuse the same connection object for some reason. One thing that you should definitely change, though, is the Dispose() method. You should not reference the connection object in dispose, because it may have already been finalized at that point. You should follow the recommended Dispose pattern instead.
A: Since you're using IDisposables anyway, you can use the 'using' keyword, which is basically equivalent to calling dispose in a finally block, but it looks better.
A: Put the connection close code inside a "Finally" block like you show. Finally blocks are executed before the exception propagates up the call stack. Using a "using" block works just as well, but I find the explicit "Finally" method more clear. Using statements are old hat to many developers, but younger developers might not know that off hand.
A: See this question for the answer: Close and Dispose - which to call?
If your connection lifetime is a single method call, use the using feature of the language to ensure the proper clean-up of the connection. While a try/finally block is functionally the same, it requires more code and IMO is less readable. There is no need to check the state of the connection, you can call Dispose regardless and it will handle cleaning-up the connection. If your connection lifetime corresponds to the lifetime of a containing class, then implement IDisposable and clean-up the connection in Dispose. A: no need for a try..finally around a "using", the using IS a try..finally A: Might I suggest this: class SqlOpener : IDisposable { SqlConnection _connection; public SqlOpener(SqlConnection connection) { _connection = connection; _connection.Open(); } void IDisposable.Dispose() { _connection.Close(); } } public class SomeDataClass : IDisposable { private SqlConnection _conn; //constructors and methods private void DoSomethingWithTheSqlConnection() { //some code excluded for brevity using (SqlCommand cmd = new SqlCommand("some sql query", _conn)) using(new SqlOpener(_conn)) { int countOfSomething = Convert.ToInt32(cmd.ExecuteScalar()); } //some code excluded for brevity } public void Dispose() { _conn.Dispose(); } } Hope that helps :)
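To make the pattern the answers recommend concrete, here is a minimal sketch of the connection-per-call approach (the connection string and query are illustrative, not from the original question):

using System;
using System.Data.SqlClient;

public class SomeDataClass
{
    public int CountSomething()
    {
        // Stacked using blocks: Dispose (and therefore Close) runs on both
        // objects even if Open() or ExecuteScalar() throws.
        using (SqlConnection conn = new SqlConnection("Data Source=.;Initial Catalog=MyDb;Integrated Security=True"))
        using (SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM SomeTable", conn))
        {
            conn.Open();
            return Convert.ToInt32(cmd.ExecuteScalar());
        }
    }
}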
{ "language": "en", "url": "https://stackoverflow.com/questions/141204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: How do you pass parameters to called function using ASP.Net Ajax $addHandler I am trying to use the $addHandler function to add a handler to a text box's click event var o=$get('myTextBox'); var f = Type.parse('funcWithArgs'); $addHandler(o, 'click', f); However I need to pass parameters to the called function. How do you do that? TIA A: Wrap your function with an anonymous function (aka lambda): $addHandler(o, 'click', function() { f(my, arguments, go, here); }); Alternative solution: If you had a function that created partials, you could do that as well - I use a toolkit that provides for that, and this is how it would be done: $addHandler(o, 'click', partial(f, my, arguments, go, here)); I don't know (and actually doubt) that Microsoft's framework provides for that, but you could look into writing your own partial function.
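If your toolkit doesn't provide a partial function, one is only a few lines of JavaScript; a minimal sketch (the helper name is illustrative):

function partial(fn) {
    // Capture the arguments supplied now (everything after fn)...
    var bound = Array.prototype.slice.call(arguments, 1);
    return function () {
        // ...and prepend them to whatever arrives later (e.g. the DOM event).
        return fn.apply(this, bound.concat(Array.prototype.slice.call(arguments)));
    };
}

// Usage with the handler from the question:
// $addHandler(o, 'click', partial(funcWithArgs, 'firstArg', 'secondArg'));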
{ "language": "en", "url": "https://stackoverflow.com/questions/141207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to have a LinkClicked event using an ArrayList of LinkLabels in .NET I'm working on a form that will display links to open different types of reports. This system has different types of users, so the users should only be able to see the links to the types of reports they can access. Currently, the way I have this set up is that I have an ArrayList of LinkLabels, but the problem I'm having is how to have a LinkClicked event for each LinkLabel in the ArrayList so that it will bring up a form specific to each report. A: Actually, I would have a single event handler for all the LinkLabels (add the handler during the databinding process of the ArrayList), with the name of the report to be loaded in the CommandName property of the LinkLabel. When the event handler fires, you would check the CommandName property and then fire off the appropriate functionality to load the given report. A: You can apply the same event handler to every LinkLabel in your list and get the specific LinkLabel from the sender argument. A: Definitely recommend a single event handler for all of the dynamic LinkLabel instances. I usually use a Hashtable where the key is the LinkLabel instance and the value is something that will be used within the click event (such as the report instance, if appropriate). Then in the click event you use (for example) Report r = m_TheTable[sender] as Report; if( r != null ) r.DoSomething();
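Putting the suggestions together, a minimal WinForms sketch that uses one shared handler and the control's Tag property to carry the per-link data (OpenReportForm and the names are hypothetical):

private void AddReportLink(string text, string reportName)
{
    LinkLabel link = new LinkLabel();
    link.Text = text;
    link.Tag = reportName; // stash the report identifier on the control itself
    link.LinkClicked += new LinkLabelLinkClickedEventHandler(ReportLink_LinkClicked);
    this.Controls.Add(link);
}

private void ReportLink_LinkClicked(object sender, LinkLabelLinkClickedEventArgs e)
{
    // One handler for every link; the sender tells us which one was clicked.
    LinkLabel link = (LinkLabel)sender;
    string reportName = (string)link.Tag;
    OpenReportForm(reportName); // hypothetical method that shows the right form
}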
{ "language": "en", "url": "https://stackoverflow.com/questions/141212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How many database indexes is too many? I'm working on a project with a rather large Oracle database (although my question applies equally well to other databases). We have a web interface which allows users to search on almost any possible combination of fields. To make these searches go fast, we're adding indexes to the fields and combinations of fields on which we believe users will commonly search. However, since we don't really know how our customers will use this software, it's hard to tell which indexes to create. Space isn't a concern; we have a 4 terabyte RAID drive of which we are using only a small fraction. However, I'm worried about the possible performance penalties of having too many indexes. Because those indexes need to be updated every time a row is added, deleted, or modified, I imagine it'd be a bad idea to have dozens of indexes on a single table. So how many indexes is considered too many? 10? 25? 50? Or should I just cover the really, really common and obvious cases and ignore everything else? A: In addition to the points everyone else has raised, the Cost Based Optimizer incurs a cost when creating a plan for an SQL statement if there are more indexes because there are more combinations for it to consider. You can reduce this by correctly using bind variables so that SQL statements stay in the SQL cache. Oracle can then do a soft parse and re-use the plan it found last time. As always, nothing is simple. If there are skewed columns and histograms involved then this can be a bad idea. In our web applications we tend to limit the combinations of searches that we allow. Otherwise you would have to test literally every combination for performance to ensure you did not have a lurking problem that someone will find one day. We have also implemented resource limits to stop this causing issues elsewhere in the application should something go wrong. A: I made some simple tests on my real project and real MySql database. I already answered in this topic: What is the cost of indexing multiple db columns? But I think it will be better if I quote it here: I made some simple tests using my real project and real MySql database. My results are: adding an average index (1-3 columns in an index) to a table - makes inserts slower by 2.1%. So, if you add 20 indexes, your inserts will be slower by 40-50%. But your selects will be 10-100 times faster. So is it ok to add many indexes? - It depends :) I gave you my results - You decide! A: I usually proceed like this. * *Get a log of the real queries run on the data on a typical day. *Add indexes so the most important queries hit the indexes in their execution plan. *Try to avoid indexing fields that have a lot of updates or inserts *After a few indexes, get a new log and repeat. As with any optimization, I stop when the requested performance is reached (this obviously implies that point 0. would be getting specific performance requirements). A: Ultimately how many indexes you need depends on the behavior of your applications that ride on top of your database server. In general the more inserting you do the more painful your indexes become. Each time you do an insert, all the indexes on that table have to be updated. Now if your application has a decent amount of reading, or even more so if it's almost all reading, then indexes are the way to go as there will be major performance improvements for very little cost. A: There's no static answer in my opinion, this sort of thing falls under 'performance tuning'.
It could be that everything your app does is looked up by a primary key, or it could be the opposite in that queries are done over unrestricted combinations of fields and any one in particular could be used at any given time. Beyond just indexing, there's reorganizing your DB to include calculated search fields, splitting tables, etc - it's really dependent on your load shapes and query parameters, how much/what data 'really' needs to be returned by a query. If your entire DB is fronted by stored-procedure facades tuning becomes a bit easier, as you don't have to worry about every ad-hoc query. Or you may have a deep understanding of the kind of queries that will hit your DB, and can limit the tuning to those. For SQL Server I've found the Database Engine Tuning Advisor useful - you set up 'typical' workloads and it can make recommendations about adding/removing indexes and statistics. I'm sure other DBs have similar tools, either 'official' or third party. A: This really is a more theoretical question than a practical one. The impact of indexes on your performance depends on the hardware you have, the version of Oracle, index types, etc. Yesterday I heard Oracle announced a dedicated storage system, made by HP, which is supposed to perform 10 times faster with an 11g database. As for your case, there can be several solutions: 1. Have a large number of indexes (>20) and rebuild them daily (nightly). This would be especially useful if the table gets thousands of updates/deletes daily. 2. Partition your table (if that applies to your data model). 3. Use a separate table for new/updated data, and run a nightly process which combines the data together. This would require a change in your application logic. 4. Switch to an IOT (index organized table), if your data supports this. Of course there might be many more solutions for such a case. My first suggestion to you would be to clone the DB to a development environment, and run some stress testing against it. A: An index imposes a cost when the underlying table is updated. An index provides a benefit when it is used to speed up a query. For each index, you need to balance the cost against the benefit. How much slower does the query run without the index? How much of a benefit is running faster? Can you or your users tolerate the slow speed when the index is missing? Can you tolerate the additional time it takes to complete an update? You need to compare costs and benefits. That's particular to your situation. There's no magic number of indexes that passes the threshold of "too many". There's also the cost of the space needed to store the index, but you've said that in your situation that's not an issue. The same is true in most situations, given how cheap disk space has become. A: Everyone else has been giving you great advice. I have an added suggestion for you as you move forward. At some point you have to make a decision as to your best indexing strategy. In the end though, the best PLANNED indexing strategy can still end up creating indexes that don't end up getting used. One strategy that lets you find indexes that aren't used is to monitor index usage. You do this as follows: alter index my_index_name monitoring usage; You can then monitor whether the index is used or not from that point forward by querying v$object_usage. Information on this can be found in the Oracle® Database Administrator's Guide.
Just remember that if you have a warehousing strategy of dropping indexes before updating a table, then recreating them, you will have to set the index up for monitoring again, and you'll lose any monitoring history for that index. A: If you do mostly reads (and few updates) then there's really no reason not to index everything you'll need to index. If you update often, then you may need to be cautious about how many indexes you have. There's no hard number, but you'll notice when things start to slow down. Make sure your clustered index is the one that makes the most sense based on the data. A: One thing you may consider is building indexes to target a standard combination of searches. If column1 is commonly searched, and column2 is often used with it, and column3 is sometimes used with column2 and column1, then an index on column1, column2, and column3 in that order can be used for any of those three circumstances, though it is only one index that has to be maintained. A: In data warehousing it is very common to have a high number of indexes. I have worked with fact tables having two hundred columns and 190 of them indexed. Although there is an overhead to this it must be understood in the context that in a data warehouse we generally only insert a row once, we never update it, but it can then participate in thousands of SELECT queries which might benefit from indexing on any of the columns. For maximum flexibility a data warehouse generally uses single column bitmap indexes except on high cardinality columns, where (compressed) btree indexes can be used. The overhead on index maintenance is mostly associated with the expense of writing to a great many blocks and the block splits as new rows are added with values that are "in the middle" of existing value ranges for that column. This can be mitigated by partitioning and having the new data loads aligned with the partitioning scheme, and by using direct path inserts. To address your question more directly, I think it is probably fine to index the obvious at first, but do not be afraid of adding more indexes if the queries against the table would benefit. A: In a paraphrase of Einstein about simplicity, add as many indexes as you need and no more. Seriously, however, every index you add requires maintenance whenever data is added to the table. On tables that are primarily read only, lots of indexes are a good thing. On tables that are highly dynamic, fewer is better. My advice is to cover the common and obvious cases and then, as you encounter issues where you need more speed in getting data from specific tables, evaluate and add indices at that point. Also, it's a good idea to re-evaluate your indexing schemes every few months, just to see if there is anything new that needs indexing or any indices that you've created that aren't being used for anything and should be gotten rid of. A: It depends on the operations that occur on the table. If there's lots of SELECTs and very few changes, index all you like.... these will (potentially) speed the SELECT statements up. If the table is heavily hit by UPDATEs, INSERTs + DELETEs ... these will be very slow with lots of indexes since they all need to be modified each time one of these operations takes place. Having said that, you can clearly add a lot of pointless indexes to a table that won't do anything. Adding B-Tree indexes to a column with 2 distinct values will be pointless since it doesn't add anything in terms of looking the data up.
The more unique the values in a column, the more it will benefit from an index. A: How many columns are there? I have always been told to make single-column indexes, not multi-column indexes. So no more indexes than the number of columns, IMHO. A: What it really comes down to is, don't add an index unless you know (and this often means gathering usage statistics) that it will be used far more often than it's updated. Any index that doesn't meet that criterion will cost you more to rebuild than the performance penalty of not having it in the odd case it got used. A: SQL Server gives you some good tools that let you see which indexes are actually being used. This article, http://www.mssqltips.com/tip.asp?tip=1239, gives you some queries that let you get a better insight into how much an index is used, as opposed to how much it is updated. A: It is totally based on the columns which are being used in the WHERE clause. And as a rule of thumb, we must have indexes on foreign key columns to avoid deadlocks. AWR reports should be analyzed periodically to understand the need for indexes.
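To make the index-monitoring suggestion above concrete, a minimal Oracle sketch (the index name is illustrative):

-- Start tracking whether this index is ever used
ALTER INDEX my_index_name MONITORING USAGE;

-- ...let the application run for a representative period...

-- USED = 'YES' means at least one query plan touched the index
SELECT index_name, table_name, monitoring, used
  FROM v$object_usage
 WHERE index_name = 'MY_INDEX_NAME';

-- Stop tracking once you have your answer
ALTER INDEX my_index_name NOMONITORING USAGE;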
{ "language": "en", "url": "https://stackoverflow.com/questions/141232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "127" }
Q: Does java have an equivalent to the C# "using" clause I've seen reference in some C# posted questions to a "using" clause. Does java have the equivalent? A: The nearest equivalent within the language is to use try-finally. using (InputStream in = new FileInputStream("myfile")) { ... use in ... } becomes final InputStream in = new FileInputStream("myfile"); try { ... use in ... } finally { in.close(); } Note the general form is always: acquire; try { use; } finally { release; } If acquisition is within the try block, you will release in the case that the acquisition fails. In some cases you might be able to hack around with unnecessary code (typically testing for null in the above example), but in the case of, say, ReentrantLock bad things will happen. If you're doing the same thing often, you can use the "execute around" idiom. Unfortunately Java's syntax is verbose, so there is a lot of boilerplate. fileInput("myfile", new FileInput<Void>() { public Void read(InputStream in) throws IOException { ... use in ... return null; } }); where public static <T> T fileInput(String fileName, FileInput<T> handler) throws IOException { final InputStream in = new FileInputStream(fileName); try { return handler.read(in); } finally { in.close(); } } A more complicated example might, for instance, wrap exceptions. A: Yes. Java 1.7 introduced the try-with-resources construct allowing you to write: try(InputStream is1 = new FileInputStream("/tmp/foo"); InputStream is2 = new FileInputStream("/tmp/bar")) { /* do stuff with is1 and is2 */ } ... just like a using statement. Unfortunately, before Java 1.7, Java programmers were forced to use try{ ... } finally { ... }. In Java 1.6: InputStream is1 = new FileInputStream("/tmp/foo"); try{ InputStream is2 = new FileInputStream("/tmp/bar"); try{ /* do stuff with is1 and is2 */ } finally { is2.close(); } } finally { is1.close(); } A: It was a long time coming but with Java 7 the try-with-resources statement was added, along with the AutoCloseable interface. A: Not that I'm aware of. You can somewhat simulate with a try...finally block, but it's still not quite the same. A: The closest you can get in Java is try/finally. Also, Java does not provide an implicit Disposable type.
C#: scoping the variable outside a using block public class X : System.IDisposable { public void Dispose() { System.Console.WriteLine("dispose"); } private static void Demo() { X x = new X(); using(x) { int i = 1; i = i/0; } } public static void Main(System.String[] args) { try { Demo(); } catch (System.DivideByZeroException) {} } } Java: scoping the variable outside a block public class X { public void dispose() { System.out.println("dispose"); } private static void demo() { X x = new X(); try { int i = 1 / 0; } finally { x.dispose(); } } public static void main(String[] args) { try { demo(); } catch(ArithmeticException e) {} } } C#: scoping the variable inside a block public class X : System.IDisposable { public void Dispose() { System.Console.WriteLine("dispose"); } private static void Demo() { using(X x = new X()) { int i = 1; i = i/0; } } public static void Main(System.String[] args) { try { Demo(); } catch (System.DivideByZeroException) {} } } Java: scoping the variable inside a block public class X { public void dispose() { System.out.println("dispose"); } private static void demo() { { X x = new X(); try { int i = 1 / 0; } finally { x.dispose(); } } } public static void main(String[] args) { try { demo(); } catch(ArithmeticException e) {} } } A: Yes, since Java 7 you can rewrite: InputStream is1 = new FileInputStream("/tmp/foo"); try{ InputStream is2 = new FileInputStream("/tmp/bar"); try{ /* do stuff with is1 and is2 */ } finally { is2.close(); } } finally { is1.close(); } As try(InputStream is1 = new FileInputStream("/tmp/foo"); InputStream is2 = new FileInputStream("/tmp/bar")) { /* do stuff with is1 and is2 */ } The objects passed as parameters to the try statement should implement java.lang.AutoCloseable. Have a look at the official docs. For older versions of Java check out this answer and this answer. A: I think you can achieve something similar to the "using" block, implementing an anonymous inner class. Like Spring does with the "Dao Templates". A: Well, using was syntactic sugar anyway so Java fellows, don't sweat it. A: If we get BGGA closures in Java, this would also open up for similar structures in Java. Gafter has used this example in his slides, for example: withLock(lock) { //closure } A: The actual idiom used by most programmers for the first example is this: InputStream is1 = null; InputStream is2 = null; try{ is1 = new FileInputStream("/tmp/bar"); is2 = new FileInputStream("/tmp/foo"); /* do stuff with is1 and is2 */ } finally { if (is1 != null) { is1.close(); } if (is2 != null) { is2.close(); } } There is less indenting using this idiom, which becomes even more important when you have more than 2 resources to clean up. Also, you can add a catch clause to the structure that will deal with the new FileStream()'s throwing an exception if you need it to. In the first example you would have to have another enclosing try/catch block if you wanted to do this. A: No, there is no using in Java, the most similar functionality is the "import" keyword. A: No there isn't. You can do public void func(){ { ArrayList l = new ArrayList(); } System.out.println("Hello"); } This gives you the limited scope of the using clause, but there isn't any IDisposable interface to call finalization code. You can use try{}catch(){}finally{}, but it doesn't have the sugar of using. Incidentally using finalizers in Java is generally a bad idea.
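Tying the two idioms together: since Java 7 your own classes can opt into try-with-resources by implementing AutoCloseable, which gives the same guarantee as the C# IDisposable examples above. A minimal sketch:

public class X implements AutoCloseable {
    @Override
    public void close() {
        System.out.println("dispose");
    }

    public static void main(String[] args) {
        try (X x = new X()) {
            int i = 1 / 0; // close() still runs before the exception propagates
        } catch (ArithmeticException e) {
            // handle or ignore, as in the examples above
        }
    }
}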
{ "language": "en", "url": "https://stackoverflow.com/questions/141241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: Exceptions vs Result Codes for a socket client class I have a class that encapsulates tcp socket communications with a server. For each command message sent to the server, the server will send back a response message that invariably contains a response code (OK, Fail). Using my class, each command can be executed either sync or async. There are basically two types of exceptions that can take place: A "fault" that is caused by a disconnect or some other non-recoverable error and an unexpected exception like "send buffer is full". In the event of a fault, no command can continue or try again or anything until the connection is re-established. In the event of a fail response or even an exception, the command can be tried again... So, right now my sync command methods return an enum that can have the following values: OK, Fail, Fault. If an exception occurs, it is simply raised to the calling thread (in a sync command). For async commands, the Result property enum value can contain an extra value: OK, Fail, Fault or Exception and the callback can access the actual exception object via the command object's Exception property. What do you think about this strategy? I am tempted to not raise exceptions at all for sync commands and just log the exception internally and return the 4th enum value instead because that's all I'll really do with exceptions in any given case anyway... Or, should I not be using result codes at all and just raise exceptions in all cases, even faults? Thanks. A: I think your strategy is basically sound. Keep in mind that the purpose of Exceptions is to deal with exceptional conditions. The closer to the source of the problem, the better. In your case, it appears that your strategy is something like "It didn't work right now. Let's retry". I don't see a reason to really raise exceptions. If dealing with a closed socket was something that required a totally different flow in your code, then maybe exceptions would make sense. From your description, that's not really the case. My philosophy on Exceptions is that they should be for exceptional conditions that you can't really deal with. A closed socket? Hmm...how many times does the internet go down at my house... A: My preference is you throw an exception any time your method does not successfully complete its mission. So if I, the caller, call yourObject.UploadFile(), I will assume the file was uploaded successfully when the call returns. If it fails for any reason, I expect your object will throw an exception. If you want to distinguish between commands I can retry and commands I shouldn't retry, put that information in the exception and I can decide how to react accordingly. When calling yourObject.BeginAsyncUploadFile(), I'd expect the same behavior except that I'd need to wait on the IAsyncResult or equivalent object to find out whether the file upload succeeded or not and then check an Exception/Error property if it didn't. A: Result codes and exceptions can both work fine. It is a matter of personal taste (and the taste of the others on your team). Exceptions have some advantages, especially in more complex settings, but in your setting it sounds simple enough that return codes should work okay. Some people will foam at the mouth and insist on exceptions, but on my project people like the simplicity of return codes, making them the better choice overall. A: This is a rather interesting question.
As such, there's probably no '100% correct' answer, and mostly it depends on how you think the code using your function should be structured. The way I see it is you use exceptions only when you want to provide the code calling your function with a way to escape gracefully from a 'disastrous' situation. So, in my code I normally throw an exception when something really, really horrible happens, and the caller needs to know. Now, if what you have is a normal, and expected, situation you should probably return an error value. That way the code knows it needs to 'try harder', but it won't be compromised by what happened. In your case, for example, you could treat the timeouts as something expected, and hence return an error code, and treat more severe problems (like a full send buffer), where the calling code needs to perform some extra actions to go back to 'normal', as exceptions. But then, beauty is in the eye of the beholder, and some people will tell you to only use exceptions, others (mostly C programmers) to only use return codes. Just remember, exceptions should always be exceptional. :)
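For what it's worth, a minimal C# sketch of the enum-plus-Exception-property shape the question describes (all names are illustrative):

using System;

public enum CommandResult { OK, Fail, Fault, Exception }

public class AsyncCommand
{
    public CommandResult Result { get; private set; }
    public Exception Exception { get; private set; }

    // Called by the socket worker before invoking the callback, so nothing
    // is ever thrown across the async boundary.
    internal void Complete(CommandResult result, Exception error)
    {
        Result = result;
        Exception = error; // non-null only when Result == CommandResult.Exception
    }
}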
{ "language": "en", "url": "https://stackoverflow.com/questions/141242", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Controlling property names on serialized ASP.Net Ajax Objects To start, I know there are two "kinds" of JSON serialization currently built into ASP.Net: you can either use the JavaScriptSerializer class to serialize your object to JSON or the new DataContractJsonSerializer class to convert an object to JSON. If you use the JavaScriptSerializer() method, you must mark your class as Serializable() -- if you use the DataContractJsonSerializer method, you must mark your class as DataContract(), and mark your properties as DataMembers(). If you want, you can specify a NAME attribute for each DataMember, so when the property gets serialized/deserialized, it uses that name. For my purposes, I see this as being useful to make the JSON not so "wordy". For example, instead of stating "UserID" as my property (and having it repeat throughout my JSON object), I'd like to simply use "u". Less data across the wire, etc. The two serialization engines render a bit differently, and you can only use the JavaScriptSerializer with ASP.Net Web/Script Methods. Therein lies my problem. Is there an equivalent way of setting a property name to something else, strictly for the purposes of serialization/deserialization using the regular JavaScriptSerializer? A: No, there isn't. A: Assuming UTF-8, you are saving 40 bits over the wire by this technique; it's not worth the lost sleep.
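For reference, the DataContractJsonSerializer renaming described in the question looks like this (the class is illustrative):

using System.Runtime.Serialization;

[DataContract]
public class UserInfo
{
    // Serializes as {"u":123} instead of {"UserID":123}
    [DataMember(Name = "u")]
    public int UserID { get; set; }
}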
{ "language": "en", "url": "https://stackoverflow.com/questions/141244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: MATLAB Environment Tweaks How have you tweaked the MATLAB environment to better suit your needs? One tweak per answer. A: I run "format compact" to remove all those frustrating blank lines spacing out my output. Oh so simple, but makes it so much nicer to me. A: I use a function idetitle() that can change the window title of the Matlab GUI itself. Useful in a development environment where I'm running several Matlab processes, possibly on different branches of source code or model runs. Sometimes I'll put the PID in the window title to make it easy to find in Process Explorer for monitoring resource usage. function idetitle(Title) %IDETITLE Set Window title of the Matlab IDE % % Examples: % idetitle('Matlab - Foo model') % idetitle(sprintf('Matlab - some big model - #%d', feature('getpid'))) win = appwin(); if ~isempty(win) win.setTitle(Title); end function out = appwin() %APPWIN Get main application window wins = java.awt.Window.getOwnerlessWindows(); for i = 1:numel(wins) if isa(wins(i), 'com.mathworks.mde.desk.MLMainFrame') out = wins(i); return end end out = []; A: I changed the default font in the MATLAB editor to 10 point ProFont (which can be obtained here) so I could write code for long periods of time without giving myself a headache from straining my eyes. A: I run Matlab with the options -nodesktop -nojvm. That way it just sits in a terminal out of the way, and I can use my favourite text editor to my heart's content. You do miss out on some killer features this way though. A: I set the number of lines in the command window scroll buffer to the maximum (25,000). This doesn't seem to noticeably affect performance and allows me to display a large amount of data/results. A: I use a startup.m file (sits in the local MATLAB path) to make sure that I have the settings I want whenever I start up MATLAB. This includes such things as formatting the REPL and plot parameters. A: I set the Command Window output numeric format to long g. A: I implemented analogues of xlim and ylim: xlim_global([xmin xmax]) and ylim_global([ymin ymax]), which set the axes' limits the same for every subplot in the figure. A: I invert colors to have a black background, easier on the eyes. (Alt+Shift+PrintScreen on Windows, you can configure away the huge icons) A: I keep a diary for each session (possibly multiple diary files per day) to recall all commands executed. This is controlled by a startup.m file that checks for previous diary files from that day. A: I wrote a small function called fig.m to call up figure windows with names rather than numbers and display the name in the status bar. Funnily enough, there are two or three identically named files that do exactly the same thing on the file exchange. A: I have functions to 1) save the current figure locations and sizes on the screen, and 2) one to load such a configuration. It's very useful e.g. when monitoring data-heavy simulations.
A: I set shortcuts for * *open current directory *up 1 folder *an action to do 'close all; clear all; clc;' Ref: http://www.mathworks.com/matlabcentral/fileexchange/19097-custom-panzoom-icons A: * *send the outputs to your email, especially when the run is long http://www.mathworks.com/matlabcentral/fileexchange/29183-sending-reports-and-timestamped-file-by-emailing *create a result collector for archiving and sending http://www.mathworks.com/matlabcentral/fileexchange/29255-track-collect-and-tar-inputs-and-outputs *a patch to line up the files within a directory in the proper order http://www.mathworks.com/matlabcentral/fileexchange/29033-file-ordering-patch-utility-for-matlab
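Several of the tweaks above can live in the startup.m file mentioned earlier; a minimal sketch combining a few of them:

% startup.m - runs automatically when MATLAB starts
format compact                % drop the blank lines between outputs
format long g                 % friendlier numeric display
% one diary file per session, timestamped so sessions don't collide
diary(['matlab_' datestr(now, 'yyyy-mm-dd_HHMMSS') '.log']);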
{ "language": "en", "url": "https://stackoverflow.com/questions/141247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Suppress ClientAbortException in struts2 VelocityResult class I am getting the following stack trace in my log file and want to suppress just this error from displaying in the log: ERROR 08-09-26 14:48:45.141 http-80-215 org.apache.struts2.dispatcher.VelocityResult: Unable to render Velocity Template, '/jsondata.vm' ClientAbortException: java.net.SocketException: Broken pipe I understand what causes the error, and it is not really exceptional in this particular use case; I just want to suppress the ClientAbortException from displaying in the log file, but display a debug level message instead. A: As a workaround you can set the log level for the class org.apache.struts2.dispatcher.VelocityResult to fatal.
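Assuming a log4j backend (a common setup for Struts 2), that workaround is a one-line change in log4j.properties:

# Only FATAL messages from VelocityResult reach the log; the ERROR above is dropped
log4j.logger.org.apache.struts2.dispatcher.VelocityResult=FATAL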
{ "language": "en", "url": "https://stackoverflow.com/questions/141248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Stateless EJB question We have a stateless EJB that sits behind a webservice (EJB3); this EJB also loads an EntityManager that is passed in calls that it makes. With that I have a question. Do simultaneous calls to the webservice use the same EJB or are there different instances? I ask this especially concerning the use of the EntityManager, which is injected. Thanks A: It is up to the application server to use the same instance or different ones. You may treat them as if they were different. Now, if you're injecting it I assume you have it declared as an instance variable; this is a very bad idea for a stateless EJB because, well, it should not have state. Instead of injecting the EntityManager yourself, let the app server do its work and just take it from the context. Each method call on a stateless bean belongs to a transaction and won't interfere with other calls. In summary: assume they are different instances, and don't inject those kinds of objects yourself. Take them from the context, where the app server is responsible for leaving them. I hope I've understood your question properly.
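For reference, a minimal sketch of the container-managed approach, where the server scopes the EntityManager for you (bean, unit, and entity names are illustrative):

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class ReportServiceBean {

    // The container injects a proxy; the underlying EntityManager is
    // resolved per transaction, so concurrent calls don't interfere.
    @PersistenceContext(unitName = "myUnit")
    private EntityManager em;

    public Report find(Long id) {
        return em.find(Report.class, id);
    }
}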
{ "language": "en", "url": "https://stackoverflow.com/questions/141249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Windows .url links that point to same address when copied over or deleted This is really annoying, we've switched our client downloads page to a different site and want to send a link out with our installer. When the link is created and overwrites the existing file, the metadata in Windows XP still points to the same place even though the contents of the .url show the correct address. I can change that URL property to google.com and it points to the same place when I copy over the file. [InternetShortcut] URL=https://www.xxxx.com/?goto=clientlogon.php IDList= HotKey=0 It works if we rename our link .url file. But we expect that the directory will be reused and that would result in one bad link and one good link which is more confusing than it is cool. A: Take a look here: http://www.cyanwerks.com/file-format-url.html It explains there's a Modified field you can add to the .url file. It also explains how to interpret it. A: .URL files are weird (are they documented anywhere?) Mine look like this and I don't seem to have that problem (maybe because of the Modified entry?) [DEFAULT] BASEURL=http://www.xxxx.com/Help [InternetShortcut] URL=http://www.xxxx.com/Help Modified=60D0EDADF1CAC5014B
{ "language": "en", "url": "https://stackoverflow.com/questions/141251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Can someone explain hex offsets to me? I downloaded Hex Workshop, and I was told to read a .dbc file. It should contain 28,315 if you read offset 0x04 and 0x05 I am unsure how to do this? What does 0x04 mean? A: Think of a binary file as a linear array of bytes. 0x04 would be the 5th (in a 0 based array) element in the array, and 0x05 would be the 6th. The two values at 0x04 and 0x05 can be OR'ed together to create the number 28,315. Since the value you are reading is 16 bit, you need to bitshift one value over and then OR them together, i.e. if you were manipulating the file in C#, you would use something like this: int value = (ByteArray[4] << 8) | ByteArray[5]; Hopefully this helps explain how hex addresses work. A: It's the 4th and the 5th byte you're viewing... 1 2 3 4 5 6 01 AB 11 7B FF 5A So, the 0x04 and 0x05 values are "7B" and "FF". Assuming what you're saying, in your case 7BFF should be equal to your desired value. HTH A: 0x04 in hex is 4 in decimal. 0x10 in hex is 16 in decimal. calc.exe can convert between hex and decimal for you. Offset 4 means 4 bytes from the start of the file. Offset 0 is the first byte in the file. A: Look at bytes 4 and 5; they should have the values 0x6E 0x9B (or 0x9B 0x6E) depending on your endianness. A: 0x04 is hex for 4 (the 0x is just a common prefix convention for base 16 representation of numbers - since many people think in decimal), and that would be the fourth byte (since they are saying offset, they probably count the first byte as byte 0, so offset 0x04 would be the 5th byte). I guess they are saying that the 4th and 5th byte together would be 28315, but did they say if this is little-endian or big-endian? 28315 (decimal) is 0x6E9B in hexadecimal notation, probably in the file in order 0x9B 0x6E if it's little-endian. Note: Little-endian and big-endian refer to the order bytes are written. Humans typically write decimal notation and hexadecimal in a big-endian way, so: 256 would be written as 0x0100 (digits on the left are the biggest scale) But that takes two bytes and little-endian systems will write the low byte first: 0x00 0x01. Big-endian systems will write the high-byte first: 0x01 0x00. Typically Intel systems are little-endian and other systems vary. A: Start here. Once you learn how to read hexadecimal values, you'll be in much better shape to actually solve your problem.
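Putting the corrected formula together with the endianness discussion, a small C# sketch (the file name is illustrative):

using System;
using System.IO;

class HexOffsets
{
    static void Main()
    {
        byte[] bytes = File.ReadAllBytes("myfile.dbc");

        // Little-endian: low byte at offset 4, high byte at offset 5
        int little = bytes[4] | (bytes[5] << 8);

        // Big-endian: high byte at offset 4, low byte at offset 5
        int big = (bytes[4] << 8) | bytes[5];

        // One of the two should print 28315 (0x6E9B)
        Console.WriteLine("little-endian: " + little + ", big-endian: " + big);
    }
}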
{ "language": "en", "url": "https://stackoverflow.com/questions/141262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Subqueries vs joins I refactored a slow section of an application we inherited from another company to use an inner join instead of a subquery like: WHERE id IN (SELECT id FROM ...) The refactored query runs about 100x faster. (~50 seconds to ~0.3) I expected an improvement, but can anyone explain why it was so drastic? The columns used in the where clause were all indexed. Does SQL execute the query in the where clause once per row or something? Update - Explain results: The difference is in the second part of the "where id in ()" query - 2 DEPENDENT SUBQUERY submission_tags ref st_tag_id st_tag_id 4 const 2966 Using where vs 1 indexed row with the join: SIMPLE s eq_ref PRIMARY PRIMARY 4 newsladder_production.st.submission_id 1 Using index A: Before the queries are run against the dataset they are put through a query optimizer; the optimizer attempts to organize the query in such a fashion that it can remove as many tuples (rows) from the result set as quickly as it can. Often when you use subqueries (especially bad ones) the tuples can't be pruned out of the result set until the outer query starts to run. Without seeing the query it's hard to say what was so bad about the original, but my guess would be it was something that the optimizer just couldn't make much better. Running 'explain' will show you the optimizer's method for retrieving the data. A: Look at the query plan for each query. Where in and Join can typically be implemented using the same execution plan, so typically there is zero speed-up from changing between them. A: The optimizer didn't do a very good job. Usually the two forms can be transformed into each other without any difference and the optimizer can do this. A: This question is somewhat general, so here's a general answer: Basically, queries take longer when MySQL has tons of rows to sort through. Do this: Run an EXPLAIN on each of the queries (the JOIN'ed one, then the Subqueried one), and post the results here. I think seeing the difference in MySQL's interpretation of those queries would be a learning experience for everyone. A: The where subquery has to run 1 query for each returned row. The inner join just has to run 1 query. A: You are running the subquery once for every row whereas the join happens on indexes. A: Usually it's the result of the optimizer not being able to figure out that the subquery can be executed as a join, in which case it executes the subquery for each record in the table rather than joining the table in the subquery against the table you are querying. Some of the more "enterprisey" databases are better at this, but they still miss it sometimes. A: With a subquery, you have to re-execute the 2nd SELECT for each result, and each execution typically returns 1 row. With a join, the 2nd SELECT returns a lot more rows, but you only have to execute it once. The advantage is that now you can join on the results, and joining relations is what a database is supposed to be good at. For example, maybe the optimizer can spot how to take better advantage of an index now. A: It isn't so much the subquery as the IN clause, although joins are at the foundation of at least Oracle's SQL engine and run extremely quickly. A: The subquery was probably executing a "full table scan". In other words, not using the index and returning way too many rows that the WHERE from the main query needed to filter out. Just a guess without details of course, but that's the common situation.
A: Taken from the Reference Manual (14.2.10.11 Rewriting Subqueries as Joins): A LEFT [OUTER] JOIN can be faster than an equivalent subquery because the server might be able to optimize it better—a fact that is not specific to MySQL Server alone. So subqueries can be slower than LEFT [OUTER] JOINs. A: A "correlated subquery" (i.e., one in which the where condition depends on values obtained from the rows of the containing query) will execute once for each row. A non-correlated subquery (one in which the where condition is independent of the containing query) will execute once at the beginning. The SQL engine makes this distinction automatically. But, yeah, explain-plan will give you the dirty details. A: Here's an example of how subqueries are evaluated in MySQL 6.0. The new optimizer will convert this kind of subquery into a join.
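For reference, here is the shape of the rewrite the question describes, using the table names visible in the EXPLAIN output (the outer table and column names are illustrative):

-- Before: IN-subquery, which an older optimizer may execute as a
-- dependent subquery evaluated per row
SELECT s.*
FROM submissions s
WHERE s.id IN (SELECT st.submission_id
               FROM submission_tags st
               WHERE st.tag_id = 1234);

-- After: equivalent inner join; add DISTINCT (or keep the subquery) if a
-- submission can match the tag more than once, or the join will duplicate rows
SELECT s.*
FROM submissions s
INNER JOIN submission_tags st ON st.submission_id = s.id
WHERE st.tag_id = 1234;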
{ "language": "en", "url": "https://stackoverflow.com/questions/141278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "164" }
Q: What's the best way to count keywords in JavaScript? What's the best and most efficient way to count keywords in JavaScript? Basically, I'd like to take a string and get the top N words or phrases that occur in the string, mainly for the use of suggesting tags. I'm looking more for conceptual hints or links to real-life examples than actual code, but I certainly wouldn't mind if you'd like to share code as well. If there are particular functions that would help, I'd also appreciate that. Right now I'm thinking of using the split() function to separate the string by spaces and then cleaning punctuation out with a regular expression. I'd also want it to be case-insensitive. A: Once you have that array of words cleaned up, and let's say you call it wordArray: var keywordRegistry = {}; for(var i = 0; i < wordArray.length; i++) { if(keywordRegistry.hasOwnProperty(wordArray[i]) == false) { keywordRegistry[wordArray[i]] = 0; } keywordRegistry[wordArray[i]] = keywordRegistry[wordArray[i]] + 1; } // now keywordRegistry will have, as properties, all of the // words in your word array with their respective counts // this will alert (choose something better than alert) all words and their counts for(var keyword in keywordRegistry) { alert("The keyword '" + keyword + "' occurred " + keywordRegistry[keyword] + " times"); } That should give you the basics of doing this part of the work. A: Cut, paste + execute demo: var text = "Text to be examined to determine which n words are used the most"; // Find 'em! var wordRegExp = /\w+(?:'\w{1,2})?/g; var words = {}; var matches; while ((matches = wordRegExp.exec(text)) != null) { var word = matches[0].toLowerCase(); if (typeof words[word] == "undefined") { words[word] = 1; } else { words[word]++; } } // Sort 'em! var wordList = []; for (var word in words) { if (words.hasOwnProperty(word)) { wordList.push([word, words[word]]); } } wordList.sort(function(a, b) { return b[1] - a[1]; }); // Come back any time, straaanger! var n = 10; var message = ["The top " + n + " words are:"]; for (var i = 0; i < n && i < wordList.length; i++) { message.push(wordList[i][0] + " - " + wordList[i][1] + " occurrence" + (wordList[i][1] == 1 ? "" : "s")); } alert(message.join("\n")); Reusable function: function getTopNWords(text, n) { var wordRegExp = /\w+(?:'\w{1,2})?/g; var words = {}; var matches; while ((matches = wordRegExp.exec(text)) != null) { var word = matches[0].toLowerCase(); if (typeof words[word] == "undefined") { words[word] = 1; } else { words[word]++; } } var wordList = []; for (var word in words) { if (words.hasOwnProperty(word)) { wordList.push([word, words[word]]); } } wordList.sort(function(a, b) { return b[1] - a[1]; }); var topWords = []; for (var i = 0; i < n && i < wordList.length; i++) { topWords.push(wordList[i][0]); } return topWords; } A: Try to split your string on words and count the resulting words, then sort on the counts.
var text = "Words in here are repeated. Are repeated, repeated!" alert(top_words(text, 3)) The result of the example is: [['repeated',3], ['are',2], ['words', 1]] A: I would do exactly what you have mentioned above to isolate each word. I would then probably add each word as the index of an array with the number of occurrences as the value. For example: var a = new Array; a[word] = a[word]?a[word]+1:1; Now you know how many unique words there are (a.length) and how many occurrences of each word existed (a[word]).
{ "language": "en", "url": "https://stackoverflow.com/questions/141280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: The difference between the Runnable and Callable interfaces in Java What is the difference between using the Runnable and Callable interfaces when designing a concurrent thread in Java, and why would you choose one over the other? A: Callable and Runnable are similar to each other and both can be used to implement a thread. In the case of Runnable you must implement the run() method; in the case of Callable you must implement the call() method. Both work in similar ways, but Callable's call() method has more flexibility. There are some differences between them. The differences between Runnable and Callable are as below-- 1) The run() method of Runnable returns void, meaning that if you want your thread to return something you can use further, you have no choice with the Runnable run() method. There is a solution, 'Callable': if you want to return anything in the form of an object then you should use Callable instead of Runnable. The Callable interface has the method 'call()' which returns an Object. Method signature - Runnable-> public void run(){} Callable-> public Object call(){} 2) In the case of the Runnable run() method, if any checked exception arises then you must handle it with a try/catch block, but in the case of the Callable call() method you can throw a checked exception as below: public Object call() throws Exception {} 3) Runnable comes from the legacy Java 1.0 version, but Callable came in the Java 1.5 version with the Executor framework. If you are familiar with Executors then you should use Callable instead of Runnable. Hope you understand. A: Runnable (vs) Callable comes into play when we are using the Executor framework. ExecutorService is a subinterface of Executor, which accepts both Runnable and Callable tasks. Earlier, multi-threading could be achieved using the interface Runnable (since 1.0), but here the problem is that after completing the thread task we are unable to collect the threads' information. In order to collect the data we may use static fields. Example: separate threads to collect each student's data. static HashMap<String, List> multiTasksData = new HashMap(); public static void main(String[] args) { Thread t1 = new Thread( new RunnableImpl(1), "T1" ); Thread t2 = new Thread( new RunnableImpl(2), "T2" ); Thread t3 = new Thread( new RunnableImpl(3), "T3" ); multiTasksData.put("T1", new ArrayList() ); // later get the value and update it. multiTasksData.put("T2", new ArrayList() ); multiTasksData.put("T3", new ArrayList() ); } To resolve this problem they introduced Callable<V> (since 1.5), which returns a result and may throw an exception. * *Single Abstract Method : Both the Callable and Runnable interfaces have a single abstract method, which means they can be used in lambda expressions in Java 8. public interface Runnable { public void run(); } public interface Callable<V> { public V call() throws Exception; } There are a few different ways to delegate tasks for execution to an ExecutorService. * *execute(Runnable task):void creates a new thread but does not block the main thread or caller thread, as this method returns void. *submit(Callable<?>):Future<?>, submit(Runnable):Future<?> creates a new thread and blocks the main thread when you are using future.get(). Example of using the interfaces Runnable and Callable with the Executor framework.
class CallableTask implements Callable<Integer> { private int num = 0; public CallableTask(int num) { this.num = num; } @Override public Integer call() throws Exception { String threadName = Thread.currentThread().getName(); System.out.println(threadName + " : Started Task..."); for (int i = 0; i < 5; i++) { System.out.println(i + " : " + threadName + " : " + num); num = num + i; MainThread_Wait_TillWorkerThreadsComplete.sleep(1); } System.out.println(threadName + " : Completed Task. Final Value : "+ num); return num; } } class RunnableTask implements Runnable { private int num = 0; public RunnableTask(int num) { this.num = num; } @Override public void run() { String threadName = Thread.currentThread().getName(); System.out.println(threadName + " : Started Task..."); for (int i = 0; i < 5; i++) { System.out.println(i + " : " + threadName + " : " + num); num = num + i; MainThread_Wait_TillWorkerThreadsComplete.sleep(1); } System.out.println(threadName + " : Completed Task. Final Value : "+ num); } } public class MainThread_Wait_TillWorkerThreadsComplete { public static void main(String[] args) throws InterruptedException, ExecutionException { System.out.println("Main Thread start..."); Instant start = java.time.Instant.now(); runnableThreads(); callableThreads(); Instant end = java.time.Instant.now(); Duration between = java.time.Duration.between(start, end); System.out.format("Time taken : %02d:%02d.%04d \n", between.toMinutes(), between.getSeconds(), between.toMillis()); System.out.println("Main Thread completed..."); } static void sleep(long millis) { /* helper referenced by the tasks above; re-interrupts if the sleep is interrupted */ try { Thread.sleep(millis); } catch (InterruptedException e) { Thread.currentThread().interrupt(); } } public static void runnableThreads() throws InterruptedException, ExecutionException { ExecutorService executor = Executors.newFixedThreadPool(4); Future<?> f1 = executor.submit( new RunnableTask(5) ); Future<?> f2 = executor.submit( new RunnableTask(2) ); Future<?> f3 = executor.submit( new RunnableTask(1) ); // Waits until pool-thread complete, return null upon successful completion. System.out.println("F1 : "+ f1.get()); System.out.println("F2 : "+ f2.get()); System.out.println("F3 : "+ f3.get()); executor.shutdown(); } public static void callableThreads() throws InterruptedException, ExecutionException { ExecutorService executor = Executors.newFixedThreadPool(4); Future<Integer> f1 = executor.submit( new CallableTask(5) ); Future<Integer> f2 = executor.submit( new CallableTask(2) ); Future<Integer> f3 = executor.submit( new CallableTask(1) ); // Waits until pool-thread complete, returns the result. System.out.println("F1 : "+ f1.get()); System.out.println("F2 : "+ f2.get()); System.out.println("F3 : "+ f3.get()); executor.shutdown(); } } A: See explanation here. The Callable interface is similar to Runnable, in that both are designed for classes whose instances are potentially executed by another thread. A Runnable, however, does not return a result and cannot throw a checked exception. A: I found this in another blog that explains these differences a little bit more: Though both interfaces are implemented by classes that wish to execute in a different thread of execution, there are a few differences between the two interfaces: * *A Callable<V> instance returns a result of type V, whereas a Runnable instance doesn't.
*A Callable<V> instance may throw checked exceptions, whereas a Runnable instance can't. The designers of Java felt a need to extend the capabilities of the Runnable interface, but they didn't want to affect the uses of the Runnable interface, and probably that was the reason why they went for having a separate interface named Callable in Java 1.5 rather than changing the already existing Runnable. A: Java functional interfaces It is a kind of interface naming convention that matches functional programming //Runnable interface Runnable { void run(); } //Action - throws exception interface Action { void run() throws Exception; } //Consumer - consumes a value/values, throws exception //BiConsumer, interface Consumer1<T> { void accept(T t) throws Exception; } //Callable - returns result, throws exception interface Callable<R> { R call() throws Exception; } //Supplier - returns result, throws exception interface Supplier<R> { R get() throws Exception; } //Predicate - consumes a value/values, returns true or false, throws exception interface Predicate1<T> { boolean test(T t) throws Exception; } //Function - consumes a value/values, returns result, throws exception //BiFunction, Function3... public interface Function1<T, R> { R apply(T t) throws Exception; } ... //Executor public interface Executor { void execute(Runnable command); } [Swift closure naming] A: Let us look at where one would use Runnable and Callable. Runnable and Callable both run on a different thread than the calling thread. But Callable can return a value and Runnable cannot. So where does this really apply? Runnable : If you have a fire-and-forget task then use Runnable. Put your code inside a Runnable and when the run() method is called, you can perform your task. The calling thread really does not care when you perform your task. Callable : If you are trying to retrieve a value from a task, then use Callable. Now Callable on its own will not do the job. You will need a Future that you wrap around your Callable and get your values on future.get(). Here the calling thread will be blocked till the Future comes back with results, which in turn is waiting for Callable's call() method to execute. So think about an interface to a target class where you have both Runnable and Callable wrapped methods defined. The calling class will randomly call your interface methods not knowing which is Runnable and which is Callable. The Runnable methods will execute asynchronously, till a Callable method is called. Here the calling class's thread will block since you are retrieving values from your target class. NOTE : Inside your target class you can make the calls to Callable and Runnable on a single-thread executor, making this mechanism similar to a serial dispatch queue. So as long as the caller calls your Runnable wrapped methods the calling thread will execute really fast without blocking. As soon as it calls a Callable method wrapped in a Future it will have to block till all the other queued items are executed. Only then will the method return with values. This is a synchronization mechanism. A: What are the differences in the applications of Runnable and Callable? Is the difference only with the return parameter present in Callable? Basically, yes. See the answers to this question. And the javadoc for Callable. What is the need of having both if Callable can do all that Runnable does? Because the Runnable interface cannot do everything that Callable does!
Runnable has been around since Java 1.0, but Callable was only introduced in Java 1.5 ... to handle use-cases that Runnable does not support. In theory, the Java team could have changed the signature of the Runnable.run() method, but this would have broken binary compatibility with pre-1.5 code, requiring recoding when migrating old Java code to newer JVMs. That is a BIG NO-NO. Java strives to be backwards compatible ... and that's been one of Java's biggest selling points for business computing. And, obviously, there are use-cases where a task doesn't need to return a result or throw a checked exception. For those use-cases, using Runnable is more concise than using Callable<Void> and returning a dummy (null) value from the call() method. A: In addition to all other answers: We cannot pass/use a Callable to an individual thread for execution, i.e. Callable can be used only in the Executor framework. But a Runnable can be passed to an individual thread for execution (new Thread(new CustomRunnable())), as well as used in the Executor framework. A: The Callable interface declares the call() method, and you need to provide a generic type parameter as the type that call() should return - public interface Callable<V> { /** * Computes a result, or throws an exception if unable to do so. * * @return computed result * @throws Exception if unable to compute a result */ V call() throws Exception; } Runnable, on the other hand, is an interface that declares the run() method, which is called when you create a Thread with the runnable and call start() on it. You can also directly call run(), but that just executes the run() method in the same thread. public interface Runnable { /** * When an object implementing interface <code>Runnable</code> is used * to create a thread, starting the thread causes the object's * <code>run</code> method to be called in that separately executing * thread. * <p> * The general contract of the method <code>run</code> is that it may * take any action whatsoever. * * @see java.lang.Thread#run() */ public abstract void run(); } To summarize, a few notable differences are * *A Runnable object does not return a result whereas a Callable object returns a result. *A Runnable object cannot throw a checked exception whereas a Callable object can throw an exception. *The Runnable interface has been around since Java 1.0 whereas Callable was only introduced in Java 1.5. A few similarities include * *Instances of the classes that implement the Runnable or Callable interfaces are potentially executed by another thread. *Instances of both the Callable and Runnable interfaces can be executed by an ExecutorService via the submit() method. *Both are functional interfaces and can be used in lambda expressions since Java 8. The relevant methods in the ExecutorService interface are <T> Future<T> submit(Callable<T> task); Future<?> submit(Runnable task); <T> Future<T> submit(Runnable task, T result); A: The purpose of these interfaces, from the Oracle documentation: The Runnable interface should be implemented by any class whose instances are intended to be executed by a Thread. The class must define a method of no arguments called run. Callable: A task that returns a result and may throw an exception. Implementors define a single method with no arguments called call. The Callable interface is similar to Runnable, in that both are designed for classes whose instances are potentially executed by another thread. A Runnable, however, does not return a result and cannot throw a checked exception. Other differences: * *You can pass a Runnable to create a Thread.
A: The purpose of these interfaces, from the Oracle documentation:

The Runnable interface should be implemented by any class whose instances are intended to be executed by a Thread. The class must define a method of no arguments called run.

Callable: A task that returns a result and may throw an exception. Implementors define a single method with no arguments called call. The Callable interface is similar to Runnable, in that both are designed for classes whose instances are potentially executed by another thread. A Runnable, however, does not return a result and cannot throw a checked exception.

Other differences:

*You can pass a Runnable to create a Thread. But you can't create a new Thread by passing a Callable as a parameter. You can pass a Callable only to ExecutorService instances. Example:

public class HelloRunnable implements Runnable {

    public void run() {
        System.out.println("Hello from a thread!");
    }

    public static void main(String args[]) {
        (new Thread(new HelloRunnable())).start();
    }
}

*Use Runnable for fire-and-forget calls. Use Callable to verify the result.

*Callable can be passed to the invokeAll method, unlike Runnable. The methods invokeAny and invokeAll perform the most commonly useful forms of bulk execution, executing a collection of tasks and then waiting for at least one, or all, to complete.

*A trivial difference: the method name to be implemented => run() for Runnable and call() for Callable.

A: As was already mentioned here, Callable is a relatively new interface and was introduced as a part of the concurrency package. Both Callable and Runnable can be used with executors. The class Thread (which implements Runnable itself) supports Runnable only.

You can still use Runnable with executors. The advantage of Callable is that you can send it to an executor and immediately get back a Future result that will be updated when the execution is finished. The same may be implemented with Runnable, but in this case you have to manage the results yourself. For example, you can create a results queue that will hold all results. Another thread can wait on this queue and deal with results that arrive.

A: The differences between Callable and Runnable are the following:

*Callable was introduced in JDK 5.0, but Runnable was introduced in JDK 1.0.
*Callable has the call() method, but Runnable has the run() method.
*Callable's call method returns a value, but Runnable's run method doesn't return any value.
*The call method can throw a checked exception, but the run method can't throw a checked exception.
*A Callable is put in the task queue using the submit() method, but a Runnable is put in the task queue using the execute() method.

A:

*A Callable needs to implement the call() method, while a Runnable needs to implement the run() method.
*A Callable can return a value but a Runnable cannot.
*A Callable can throw a checked exception but a Runnable cannot.
*A Callable can be used with the ExecutorService#invokeXXX(Collection<? extends Callable<T>> tasks) methods, but a Runnable cannot be.
public interface Runnable {
    void run();
}

public interface Callable<V> {
    V call() throws Exception;
}

A:

+----------------------------------------+----------------------------------------------------------------------------------------------------+
|                Runnable                | Callable<T>                                                                                        |
+----------------------------------------+----------------------------------------------------------------------------------------------------+
| Introduced in Java 1.0, in java.lang   | Introduced in Java 1.5, in the java.util.concurrent library                                       |
| Runnable cannot be parametrized        | Callable is a parametrized type whose type parameter indicates the return type of its call method |
| Runnable has the run() method          | Callable has the call() method                                                                    |
| Runnable.run() returns void            | Callable.call() returns a generic value V                                                         |
| No way to propagate checked exceptions | Callable's call() has a "throws Exception" clause, so checked exceptions can easily be propagated |
+----------------------------------------+----------------------------------------------------------------------------------------------------+

The designers of Java felt the need to extend the capabilities of the Runnable interface, but they didn't want to affect the uses of the Runnable interface, and probably that was the reason why they went for a separate interface named Callable in Java 1.5 rather than changing the already existing Runnable interface, which has been a part of Java since Java 1.0. source
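Since both sides of this table are functional interfaces (single abstract method), either one can be written as a lambda, as one of the answers above notes. A minimal sketch; the string values are arbitrary placeholders:

import java.util.concurrent.Callable;

public class LambdaForms {
    public static void main(String[] args) throws Exception {
        Runnable r = () -> System.out.println("no result, no checked exception");
        Callable<String> c = () -> "a result (and call() may throw)";

        r.run();
        System.out.println(c.call());
    }
}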
{ "language": "en", "url": "https://stackoverflow.com/questions/141284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "557" }
Q: Possible to use Flex Framework/Components without using MXML? Is it possible to use the Flex Framework and Components, without using MXML? I know ActionScript pretty decently, and don't feel like messing around with some new XML language just to get some simple UI in there. Can anyone provide an example consisting of an .as file which can be compiled (ideally via FlashDevelop, though just telling how to do it with the Flex SDK is ok too) and uses the Flex Framework? For example, just showing a Flex button that pops open an Alert would be perfect. If it's not possible, can someone provide a minimal MXML file which will bootstrap a custom AS class which then has access to the Flex SDK?

A: This is a very simple app that does only the basic bootstrapping in MXML. This is the MXML:

<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute" creationComplete="onCreationComplete()">
    <mx:Script source="Script.as" />
</mx:Application>

This is the Script.as:

import mx.controls.Button;
import flash.events.MouseEvent;
import mx.controls.Alert;
import mx.core.Application;

private function onCreationComplete() : void {
    var button : Button = new Button();
    button.label = "Click me";
    button.addEventListener(MouseEvent.CLICK, function(e : MouseEvent) : void {
        Alert.show("Clicked");
    });
    Application.application.addChild(button);
}

A: NOTE: The below answer will not actually work unless you initialize the Flex library first. There is a lot of code involved to do that. See the comments below, or other answers, for more details.

The main class doesn't even have to be in MXML, just create a class that inherits from mx.core.Application (which is what an MXML class with a <mx:Application> root node is compiled as anyway):

package {
    import mx.core.Application;

    public class MyFancyApplication extends Application {
        // do whatever you want here
    }
}

Also, any ActionScript code compiled with the mxmlc compiler -- or even the Flash CS3 authoring tool -- can use the Flex classes, it's just a matter of making them available in the classpath (referring to the framework SWC when using mxmlc, or pointing to a folder containing the source when using either). Unless the document class inherits from mx.core.Application you might run into some trouble, though, since some things in the framework assume that this is the case.

A: I did a simple bootstrap similar to Borek's (see below). I would love to get rid of the MXML file, but if I don't have it, I don't get any of the standard themes that come with Flex (haloclassic.swc, etc.). Does anybody know how to do what Theo suggests and still have the standard themes applied?
Here's my simplified bootstrapping method:

main.mxml

<?xml version="1.0" encoding="utf-8"?>
<custom:ApplicationClass xmlns:custom="components.*"/>

ApplicationClass.as

package components {
    import mx.core.Application;
    import mx.events.FlexEvent;
    import flash.events.MouseEvent;
    import mx.controls.Alert;
    import mx.controls.Button;

    public class ApplicationClass extends Application {
        public function ApplicationClass() {
            addEventListener(FlexEvent.CREATION_COMPLETE, handleComplete);
        }

        private function handleComplete( event : FlexEvent ) : void {
            var button : Button = new Button();
            button.label = "My favorite button";
            button.styleName = "halo";
            button.addEventListener(MouseEvent.CLICK, handleClick);
            addChild( button );
        }

        private function handleClick(e:MouseEvent):void {
            Alert.show("You clicked on the button!", "Clickity");
        }
    }
}

Here are the necessary updates to use it with Flex 4:

main.mxml

<?xml version="1.0" encoding="utf-8"?>
<local:MyApplication xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:local="components.*" />

MyApplication.as

package components {
    import flash.events.MouseEvent;
    import mx.controls.Alert;
    import mx.events.FlexEvent;
    import spark.components.Application;
    import spark.components.Button;

    public class MyApplication extends Application {
        public function MyApplication() {
            addEventListener(FlexEvent.CREATION_COMPLETE, creationHandler);
        }

        private function creationHandler(e:FlexEvent):void {
            var button : Button = new Button();
            button.label = "My favorite button";
            button.styleName = "halo";
            button.addEventListener(MouseEvent.CLICK, handleClick);
            addElement( button );
        }

        private function handleClick(e:MouseEvent):void {
            Alert.show("You clicked it!", "Clickity!");
        }
    }
}

A: Yes, you just need to include the flex swc in your classpath. You can find flex.swc in the Flex SDK in frameworks/lib/flex.swc

edit: One more thing: if you're using Flex Builder you can simply create a new ActionScript project, which will essentially do the same as above.
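For the "just telling how to do it with the Flex SDK" part of the question: whichever bootstrap variant above you use, it can be compiled from the command line with the SDK's mxmlc compiler. A minimal sketch, assuming the SDK's bin directory is on your PATH, that main.mxml sits in the current directory, and that Main.swf is an arbitrary output name of your choosing:

> mxmlc -output Main.swf main.mxml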
{ "language": "en", "url": "https://stackoverflow.com/questions/141288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: How to list only top level directories in Python? I want to be able to list only the directories inside some folder. This means I don't want filenames listed, nor do I want additional sub-folders. Let's see if an example helps. In the current directory we have:

>>> os.listdir(os.getcwd())
['cx_Oracle-doc', 'DLLs', 'Doc', 'include', 'Lib', 'libs', 'LICENSE.txt', 'mod_python-wininst.log', 'NEWS.txt', 'pymssql-wininst.log', 'python.exe', 'pythonw.exe', 'README.txt', 'Removemod_python.exe', 'Removepymssql.exe', 'Scripts', 'tcl', 'Tools', 'w9xpopen.exe']

However, I don't want filenames listed. Nor do I want sub-folders such as \Lib\curses. Essentially what I want works with the following:

>>> for root, dirnames, filenames in os.walk('.'):
...     print dirnames
...     break
...
['cx_Oracle-doc', 'DLLs', 'Doc', 'include', 'Lib', 'libs', 'Scripts', 'tcl', 'Tools']

However, I'm wondering if there's a simpler way of achieving the same results. I get the impression that using os.walk only to return the top level is inefficient/too much.

A: Just to add that using os.listdir() does not "take a lot of processing vs very simple os.walk().next()[1]". This is because os.walk() uses os.listdir() internally. In fact, if you test them together:

>>> import timeit
>>> timeit.timeit("os.walk('.').next()[1]", "import os", number=10000)
1.1215229034423828
>>> timeit.timeit("[ name for name in os.listdir('.') if os.path.isdir(os.path.join('.', name)) ]", "import os", number=10000)
1.0592019557952881

The filtering of os.listdir() is very slightly faster.

A: Filter the list using os.path.isdir to detect directories.

filter(os.path.isdir, os.listdir(os.getcwd()))

A: A much simpler and more elegant way is to use this:

import os
dir_list = os.walk('.').next()[1]
print dir_list

Run this script in the folder whose sub-folder names you want. It will give you exactly the immediate folder names only (and without the full paths of the folders).

A: You could also use os.scandir:

with os.scandir(os.getcwd()) as mydir:
    dirs = [i.name for i in mydir if i.is_dir()]

In case you want the full path, you can use i.path.

Using scandir() instead of listdir() can significantly increase the performance of code that also needs file type or file attribute information, because os.DirEntry objects expose this information if the operating system provides it when scanning a directory.

A: [x for x in os.listdir(somedir) if os.path.isdir(os.path.join(somedir, x))]

A: Python 3.4 introduced the pathlib module into the standard library, which provides an object oriented approach to handle filesystem paths:

from pathlib import Path
p = Path('./')
[f for f in p.iterdir() if f.is_dir()]

A: 2021 answer using glob:

import glob, os
p = "/some/path/"
for d in glob.glob(p + "*" + os.path.sep):
    print(d)

A: os.walk

Use os.walk with the built-in next function:

next(os.walk('.'))[1]

For Python <= 2.5 use:

os.walk('.').next()[1]

How this works

os.walk is a generator and calling next will get the first result in the form of a 3-tuple (dirpath, dirnames, filenames). Thus the [1] index returns only the dirnames from that tuple.
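One caveat with the next(os.walk(...)) idiom: if the path doesn't exist, the generator yields nothing and next() raises StopIteration. A minimal sketch of a guarded variant using next()'s default argument; the wrapper name is my own:

import os

def immediate_subdirs(path):
    """Return the names of the top-level directories under path.

    Falls back to an empty list when path does not exist, because
    next() then returns the supplied default instead of raising.
    """
    return next(os.walk(path), (path, [], []))[1]

print(immediate_subdirs('.'))            # e.g. ['Lib', 'DLLs', ...]
print(immediate_subdirs('no/such/dir'))  # []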
A: directories = [d for d in os.listdir(os.getcwd()) if os.path.isdir(d)]

A: Being a newbie here I can't yet comment directly, but here is a small correction I'd like to add to the following part of ΤΖΩΤΖΙΟΥ's answer:

If you prefer full pathnames, then use this function:

def listdirs(folder):
    return [
        d for d in (os.path.join(folder, d1) for d1 in os.listdir(folder))
        if os.path.isdir(d)
    ]

For those still on Python < 2.4: the inner construct needs to be a list comprehension instead of a generator expression, and therefore should read like this:

def listdirs(folder):
    return [
        d for d in [os.path.join(folder, d1) for d1 in os.listdir(folder)]
        if os.path.isdir(d)
    ]

Otherwise one gets a syntax error.

A: This seems to work too (at least on Linux):

import glob, os
glob.glob('*' + os.path.sep)

A: Filter the result using os.path.isdir() (and use os.path.join() to get the real path):

>>> [ name for name in os.listdir(thedir) if os.path.isdir(os.path.join(thedir, name)) ]
['ctypes', 'distutils', 'encodings', 'lib-tk', 'config', 'idlelib', 'xml', 'bsddb', 'hotshot', 'logging', 'doc', 'test', 'compiler', 'curses', 'site-packages', 'email', 'sqlite3', 'lib-dynload', 'wsgiref', 'plat-linux2', 'plat-mac']

A: Note that, instead of doing os.listdir(os.getcwd()), it's preferable to do os.listdir(os.path.curdir). One less function call, and it's as portable.

So, to complete the answer, to get a list of directories in a folder:

def listdirs(folder):
    return [d for d in os.listdir(folder) if os.path.isdir(os.path.join(folder, d))]

If you prefer full pathnames, then use this function:

def listdirs(folder):
    return [
        d for d in (os.path.join(folder, d1) for d1 in os.listdir(folder))
        if os.path.isdir(d)
    ]

A: Using a list comprehension,

[a for a in os.listdir() if os.path.isdir(a)]

I think it is the simplest way.

A: Like so?

>>> [path for path in os.listdir(os.getcwd()) if os.path.isdir(path)]

A: For a list of full path names I prefer this version to the other solutions here:

def listdirs(dir):
    return [os.path.join(dir, x) for x in os.listdir(dir) if os.path.isdir(os.path.join(dir, x))]

A: scanDir = "abc"
directories = [d for d in os.listdir(scanDir) if os.path.isdir(os.path.join(os.path.abspath(scanDir), d))]

A: FWIW, the os.walk approach is almost 10x faster than the list comprehension and filter approaches:

In [30]: %timeit [d for d in os.listdir(os.getcwd()) if os.path.isdir(d)]
1.23 ms ± 97.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In [31]: %timeit list(filter(os.path.isdir, os.listdir(os.getcwd())))
1.13 ms ± 13.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In [32]: %timeit next(os.walk(os.getcwd()))[1]
132 µs ± 9.34 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

A: Using Python 3.x with pathlib.Path.iterdir:

$ mkdir tmpdir
$ mkdir -p tmpdir/a/b/c
$ mkdir -p tmpdir/x/y/z
$ touch tmpdir/a/b/c/abc.txt
$ touch tmpdir/a/b/ab.txt
$ touch tmpdir/a/a.txt
$ python --version
Python 3.7.12

>>> from pathlib import Path
>>> tmpdir = Path("./tmpdir")
>>> [d for d in tmpdir.iterdir() if d.is_dir()]
[PosixPath('tmpdir/x'), PosixPath('tmpdir/a')]
>>> sorted(d for d in tmpdir.iterdir() if d.is_dir())
[PosixPath('tmpdir/a'), PosixPath('tmpdir/x')]

A: A safer option that does not fail when there is no directory:
def listdirs(folder):
    if os.path.exists(folder):
        return [d for d in os.listdir(folder) if os.path.isdir(os.path.join(folder, d))]
    else:
        return []

A: This will exclude files in the root itself and traverse through one level of sub-folders below it:

import os

def list_files(top):
    """List files that live exactly one level below top.

    Files directly in top, and anything nested deeper than the
    first level of sub-folders, are skipped.
    """
    result = []
    for root, dirs, files in os.walk(top, topdown=True):
        if root == top:
            continue  # skip files that sit directly in the root
        for name in files:
            result.append(os.path.join(root, name))
        dirs[:] = []  # prune: do not descend below the first level
    return result
{ "language": "en", "url": "https://stackoverflow.com/questions/141291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "189" }
Q: Checking File is Open in Delphi Is there a way to check if a file has been opened by ReWrite in Delphi? Code would go something like this:

AssignFile(textfile, 'somefile.txt');
if not textFile.IsOpen then
  Rewrite(textFile);

A: This works fine:

function IsOpen(const txt:TextFile):Boolean;
const
  fmTextOpenRead = 55217;
  fmTextOpenWrite = 55218;
begin
  Result := (TTextRec(txt).Mode = fmTextOpenRead) or (TTextRec(txt).Mode = fmTextOpenWrite)
end;

A: You can get the file mode. (One moment, I'll create an example.)

TTextRec(txt).Mode gives you the mode:

55216 = closed
55217 = open read
55218 = open write

fmClosed = $D7B0;
fmInput  = $D7B1;
fmOutput = $D7B2;
fmInOut  = $D7B3;

Search for TTextRec in the system unit for more information.

A: Try this:

function IsFileInUse(fName: string) : boolean;
var
  HFileRes: HFILE;
begin
  Result := False;
  if not FileExists(fName) then
  begin
    Exit;
  end;
  HFileRes := CreateFile(PChar(fName)
    ,GENERIC_READ or GENERIC_WRITE
    ,0
    ,nil
    ,OPEN_EXISTING
    ,FILE_ATTRIBUTE_NORMAL
    ,0);
  Result := (HFileRes = INVALID_HANDLE_VALUE);
  if not(Result) then
  begin
    CloseHandle(HFileRes);
  end;
end;

A: I found it easier to keep a boolean variable as a companion; for example: bFileIsOpen. Wherever the file is opened, set bFileIsOpen := true. Then, whenever you need to know if the file is open, just test this variable; for example: if (bFileIsOpen) then Close(datafile);

A: Joseph's answer works perfectly - I called the function filenotopen and changed the line

Result := (HFileRes = INVALID_HANDLE_VALUE);

to

Result := NOT (HFileRes = INVALID_HANDLE_VALUE);

I also removed the line 'if not(Result) then begin' (and the 'end') so that it ALWAYS closes the handle; otherwise subsequent assignments and reads give errors.

I now call it like this:

if filenotopen(filename) then
begin
  assignfile(f, filename);
  reset(f);
  // etc.
end
else
  message('file open by a different program');
{ "language": "en", "url": "https://stackoverflow.com/questions/141302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Pronouncing dictionaries Are there any alternatives to The CMU Pronouncing Dictionary, commercial or open source?

A: I don't believe the answer is definitively "no," but I do know that CMU is the most popular pronouncing dictionary in my anecdotal experience. I believe it is open source, so if it's missing something, perhaps you could find a way to add it (or request that it be added). Barring that, I would check with the folks at Language Log. They deal a lot with phonetics.

A: I am searching for something similar, too. Besides that, I found http://www.voxforge.org/home/downloads

A: There is CELEX 2, available from the Linguistic Data Consortium, which contains phonology information and costs $300. The problem is that it's a little dated, and the English dictionary is British English (BE), not American English (AE). You can use CALLHOME, too, but at $2250 it's much more expensive than CELEX.

A: I found DictionaryForMIDs and desktionary. I haven't used either, but both are open source.

A: forvo.com. Free and open.

A: Check out Merriam-Webster for things like this: stack overflow
{ "language": "en", "url": "https://stackoverflow.com/questions/141312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: php check for a valid date, weird date conversions Is there a way to check to see if a date/time is valid? You would think these would be easy to check:

$date = '0000-00-00';
$time = '00:00:00';
$dateTime = $date . ' ' . $time;

if(strtotime($dateTime)) {
    // why is this valid?
}

What really gets me is this:

echo date('Y-m-d', strtotime($date));

results in: "1999-11-30", huh? I went from 0000-00-00 to 1999-11-30???

I know I could do a comparison to see if either of those values is equal to the date I have, but that isn't a very robust way to check. Is there a good way to check to see if I have a valid date? Does anyone have a good function to check this?

Edit: People are asking what I'm running:

Running PHP 5.2.5 (cli) (built: Jul 23 2008 11:32:27) on Linux localhost 2.6.18-53.1.14.el5 #1 SMP Wed Mar 5 11:36:49 EST 2008 i686 i686 i386 GNU/Linux

A: As mentioned here: https://bugs.php.net/bug.php?id=45647

There is no bug here, 00-00-00 means 2000-00-00, which is 1999-12-00, which is 1999-11-30. No bug, perfectly normal.

And as shown with a few tests, rolling backwards is expected behavior, if a little unsettling:

>> date('Y-m-d', strtotime('2012-03-00'))
string: '2012-02-29'
>> date('Y-m-d', strtotime('2012-02-00'))
string: '2012-01-31'
>> date('Y-m-d', strtotime('2012-01-00'))
string: '2011-12-31'
>> date('Y-m-d', strtotime('2012-00-00'))
string: '2011-11-30'

A: From php.net

<?php
function isValidDateTime($dateTime)
{
    if (preg_match("/^(\d{4})-(\d{2})-(\d{2}) ([01][0-9]|2[0-3]):([0-5][0-9]):([0-5][0-9])$/", $dateTime, $matches)) {
        if (checkdate($matches[2], $matches[3], $matches[1])) {
            return true;
        }
    }
    return false;
}
?>

A: echo date('Y-m-d', strtotime($date)); results in: "1999-11-30"

The result of strtotime is 943920000 - this is, roughly, the number of seconds between the Unix epoch (the base from which time is measured) and 1999-11-30.

There is a documented PHP bug report about mktime(), localtime(), and strtotime() all returning this odd value when you try a pre-epoch time (including "0000-00-00 00:00:00"). There's some debate on the linked thread as to whether this is actually a bug:

Since the time stamp is started from 1970, I don't think it supposed to work in anyways.

Below is a function that I use for converting dateTimes such as the above to a timestamp for comparisons, etc., which may be of some use to you, for dates after "0000-00-00 00:00:00":

/**
 * Converts strings of the format "YYYY-MM-DD HH:MM:SS" into php dates
 */
function convert_date_string($date_string) {
    list($date, $time) = explode(" ", $date_string);
    list($hours, $minutes, $seconds) = explode(":", $time);
    list($year, $month, $day) = explode("-", $date);
    return mktime($hours, $minutes, $seconds, $month, $day, $year);
}

A: Don't expect coherent results when you're out of range: cf. strtotime; cf. GNU Calendar-date-items.html

"For numeric months, the ISO 8601 format ‘year-month-day’ is allowed, where year is any positive number, month is a number between 01 and 12, and day is a number between 01 and 31. A leading zero must be present if a number is less than ten."

So '0000-00-00' gives weird results; that's logical!

"Additionally, not all platforms support negative timestamps, therefore your date range may be limited to no earlier than the Unix epoch. This means that e.g. %e, %T, %R and %D (there might be more) and dates prior to Jan 1, 1970 will not work on Windows, some Linux distributions, and a few other operating systems." cf. strftime

Use the checkdate function instead (more robust):

month: The month is between 1 and 12 inclusive.
day: The day is within the allowed number of days for the given month. Leap years are taken into consideration.
year: The year is between 1 and 32767 inclusive.

A: If you just want to handle a date conversion without the time for a MySQL date field, you can modify this great code as I did. On my version of PHP, without performing this function I get "0000-00-00" every time. Annoying.

function ConvertDateString($DateString) {
    list($year, $month, $day) = explode("-", $DateString);
    return date("Y-m-d", mktime(0, 0, 0, $month, $day, $year));
}

A: This version allows for the field to be empty, takes dates in mm/dd/yy or mm/dd/yyyy format, allows for single-digit hours, adds optional am/pm, and corrects some subtle flaws in the time match. It still allows some pathological times like '23:14 AM'.

function isValidDateTime($dateTime)
{
    if (trim($dateTime) == '') {
        return true;
    }

    if (preg_match('/^(\d{1,2})\/(\d{1,2})\/(\d{2,4})(\s+(([01]?[0-9])|(2[0-3]))(:[0-5][0-9]){0,2}(\s+(am|pm))?)?$/i', $dateTime, $matches)) {
        list($all,$mm,$dd,$year) = $matches;
        if ($year <= 99) {
            $year += 2000;
        }
        return checkdate($mm, $dd, $year);
    }

    return false;
}

A: I have just changed Martin's answer above so that it will validate any type of date and return it in the format you like. Just change the format by editing this line of the script: strftime("%d-%b-%Y", strtotime($dt));

<?php
echo is_date("13/04/10");

function is_date( $str )
{
    $flag = strpos($str, '/');
    if (intval($flag) <= 0) {
        $stamp = strtotime( $str );
    } else {
        list($d, $m, $y) = explode('/', $str);
        $stamp = strtotime("$d-$m-$y");
    }

    if (!is_numeric($stamp)) {
        return "not a date" ;
    }

    $month = date( 'n', $stamp ); // use n to get date in correct format
    $day   = date( 'd', $stamp );
    $year  = date( 'Y', $stamp );

    if (checkdate($month, $day, $year)) {
        $dt = "$year-$month-$day" ;
        return strftime("%d-%b-%Y", strtotime($dt));
    } else {
        return "not a date" ;
    }
}
?>

A: <?php
function is_valid_date($user_date=false, $valid_date = "1900-01-01")
{
    $user_date = date("Y-m-d H:i:s", strtotime($user_date));
    return strtotime($user_date) >= strtotime($valid_date) ? true : false;
}

echo is_valid_date("00-00-00") ? 1 : 0;  // returns 0
echo is_valid_date("3/5/2011") ? 1 : 0;  // returns 1

A: I have used the following code to validate dates coming from ExtJS applications.

function check_sql_date_format($date)
{
    $date = substr($date, 0, 10);
    list($year, $month, $day) = explode('-', $date);
    if (!is_numeric($year) || !is_numeric($month) || !is_numeric($day)) {
        return false;
    }
    return checkdate($month, $day, $year);
}

A: <?php
function is_date( $str )
{
    $stamp = strtotime( $str );

    if (!is_numeric($stamp)) {
        return FALSE;
    }

    $month = date( 'm', $stamp );
    $day   = date( 'd', $stamp );
    $year  = date( 'Y', $stamp );

    if (checkdate($month, $day, $year)) {
        return TRUE;
    }

    return FALSE;
}
?>
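On PHP 5.3+, a round-trip through DateTime::createFromFormat is a compact alternative to the regex/checkdate combinations above, since it rejects both malformed strings and impossible dates. A minimal sketch; the function name is my own:

<?php
function isValidDateString($s, $format = 'Y-m-d H:i:s')
{
    // createFromFormat() is lenient (it rolls 2012-02-30 over into March),
    // so re-format the parsed value and require an exact round-trip.
    $d = DateTime::createFromFormat($format, $s);
    return $d !== false && $d->format($format) === $s;
}

var_dump(isValidDateString('0000-00-00 00:00:00')); // bool(false)
var_dump(isValidDateString('2012-02-29 10:30:00')); // bool(true)
?>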
{ "language": "en", "url": "https://stackoverflow.com/questions/141315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: What's the difference between Phing and PHPUnderControl? We currently use a hand-rolled setup and configuration script and a hand-rolled continuous integration script to build and deploy our application. I am looking at formalizing this somewhat with a third party system designed for these purposes. I have looked into Phing before, and I get that it's basically like Ant. But my Ant experience is somewhat limited, so that doesn't help me much. (Most of the Java work I have done was just deployed as a jar file.) I have looked into CruiseControl before, and I understand that phpUnderControl is a plug-in for CC. But Phing says it also works with CC, so I am not clear on the overlap here. Do I need both Phing and phpUnderControl to work with CruiseControl, or are they mutually exclusive? What I need exactly is something that can:

*Check out source from SVN
*Install the database from an SQL file
*Generate some local configuration files from a series of templates and an ini file
*Run all of our unit tests (currently ST, but easy to convert to PHPUnit) and send an email to the dev team if any tests break (with a stack trace of course)
*Generate API documentation for the application and put it somewhere
*Run a test coverage report

Now, we have just about all of this in one form or another. But it'd be nice to have it all automated and bundled together in one process.

A: Phing is pretty much Ant written in PHP, whereas phpUnderControl adds support for PHP projects to CruiseControl and uses Phing or Ant on the backend to parse the build.xml file and run commands.

I just set up CruiseControl and phpUnderControl and it's been working great. It checks out my SVN and runs the code through phpDocumentor, PHP_CodeSniffer, and PHPUnit whenever we do a check-in. Since it's all based on the build.xml file, you can run just about any software you want through it.

A: I'm sure lots of people will say this by the time I've typed this but... I know it's not PHP, but we're finding Capistrano just the job for this kind of thing. It really is an excellent piece of software.

A: We've been using Phing, and the cost to set it up has been very low; it's really easy to learn even if you don't know Ant. I've had very bad experiences with CruiseControl (instability - going down randomly), so I like the simplicity of Phing. Plus, it's easily extensible using PHP (in case you have a custom task that they don't support out of the box).
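Since both tools revolve around a build.xml file, here is a minimal hypothetical sketch of a Phing build file covering two of the steps from the question (SVN checkout and PHPUnit). The project name, URLs, paths, and target names are made-up placeholders, not a tested configuration; check the Phing task reference for the exact task attributes:

<?xml version="1.0" encoding="UTF-8"?>
<project name="myapp" default="build">

    <!-- Check out source from SVN (assumes Phing's SvnCheckoutTask is available) -->
    <target name="checkout">
        <svncheckout svnpath="/usr/bin/svn"
                     repositoryurl="http://example.com/svn/myapp/trunk"
                     todir="build/src"/>
    </target>

    <!-- Run the PHPUnit test suite -->
    <target name="test" depends="checkout">
        <phpunit haltonfailure="true">
            <batchtest>
                <fileset dir="build/src/tests">
                    <include name="**/*Test.php"/>
                </fileset>
            </batchtest>
        </phpunit>
    </target>

    <target name="build" depends="test"/>
</project>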
{ "language": "en", "url": "https://stackoverflow.com/questions/141319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Perl - Win32 - How to do a non-blocking read of a filehandle from another process? I'm writing some server code that talks to a client process via STDIN. I'm trying to write a snippet of perl code that asynchronously receives responses from the client's STDOUT. The blocking version of the code might look like this:

sub _read_from_client
{
  my ($file_handle) = @_;

  while (my $line = <$file_handle>) {
    print STDOUT $line;
  }

  return;
}

Importantly, the snippet needs to work on the Win32 platform. There are many solutions for *nix platforms that I'm not interested in. I'm using ActivePerl 5.10.

A: This thread on Perlmonks suggests you can make a socket non-blocking on Windows in Perl this way:

ioctl($socket, 0x8004667e, 1);

More details and resources in that thread.

A: If you don't want to go the low-level route, you will have to look at other, more framework-based solutions.

You can use a thread to read from the input and have it stuff all the data it reads into a Thread::Queue, which you then handle in your main thread (a sketch of this option follows below).

You can look at POE, which implements an event-based framework, especially POE::Wheel::Run::Win32. Potentially, you can also steal the code from it to implement the non-blocking reads yourself.

You can look at Coro, which implements a cooperative multitasking system using coroutines. This is mostly similar to threads, except that you get userspace threads, not system threads.

You haven't stated how far up you want to go, but your choice is between sysread and a framework, or writing said framework yourself. The easiest route is just to use threads or to go through the code of POE::Wheel::Run::Win32.
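Here is a minimal sketch of the thread-plus-Thread::Queue option referenced above: a reader thread performs the blocking reads and enqueues lines, while the main thread polls with the non-blocking dequeue_nb. The CLIENT_OUT handle name and the shape of the main loop are placeholder assumptions:

use strict;
use warnings;
use threads;
use Thread::Queue;

# CLIENT_OUT stands in for the handle attached to the client's STDOUT.
my $queue = Thread::Queue->new();

# Reader thread: does the blocking <> reads and feeds the queue.
my $reader = threads->create(sub {
    while (my $line = <CLIENT_OUT>) {
        $queue->enqueue($line);
    }
});
$reader->detach();

# Main loop: dequeue_nb() returns immediately (undef when the queue is
# empty), so the server thread never blocks waiting on the client.
while (1) {
    while (defined(my $line = $queue->dequeue_nb())) {
        print STDOUT $line;
    }
    # ... do other server work between polls ...
    sleep 1;
}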
{ "language": "en", "url": "https://stackoverflow.com/questions/141332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Should I store entire objects, or pointers to objects in containers? Designing a new system from scratch. I'll be using the STL to store lists and maps of certain long-lived objects. Question: Should I ensure my objects have copy constructors and store copies of objects within my STL containers, or is it generally better to manage the life & scope myself and just store the pointers to those objects in my STL containers? I realize this is somewhat short on details, but I'm looking for the "theoretical" better answer if it exists, since I know both of these solutions are possible. Two very obvious disadvantages to playing with pointers: 1) I must manage allocation/deallocation of these objects myself in a scope beyond the STL. 2) I cannot create a temp object on the stack and add it to my containers. Is there anything else I'm missing?

A: Since people are chiming in on the efficiency of using pointers:

If you're considering a std::vector, and updates are few, and you often iterate over your collection, and it's a non-polymorphic type, then storing object "copies" will be more efficient since you'll get better locality of reference. OTOH, if updates are common, storing pointers will save the copy/relocation costs.

A: This really depends upon your situation.

If your objects are small, and doing a copy of the object is lightweight, then storing the data inside an STL container is straightforward and easier to manage in my opinion, because you don't have to worry about lifetime management.

If your objects are large, and having a default constructor doesn't make sense, or copies of objects are expensive, then storing with pointers is probably the way to go.

If you decide to use pointers to objects, take a look at the Boost Pointer Container Library. This boost library wraps all the STL containers for use with dynamically allocated objects.

Each pointer container (for example ptr_vector) takes ownership of an object when it is added to the container, and manages the lifetime of those objects for you. You also access all the elements in a ptr_ container by reference. This lets you do things like

class BigExpensive { ... };

// create a pointer vector (boost::ptr_vector, from
// <boost/ptr_container/ptr_vector.hpp>)
ptr_vector<BigExpensive> bigVector;
bigVector.push_back( new BigExpensive( "Lexus", 57700 ) );
bigVector.push_back( new BigExpensive( "House", 15000000 ) );

// get a reference to the first element
BigExpensive& expensiveItem = bigVector[0];
expensiveItem.sell();

These classes wrap the STL containers and work with all of the STL algorithms, which is really handy.

There are also facilities for transferring ownership of a pointer in the container to the caller (via the release function in most of the containers).

A: If you're storing polymorphic objects you always need to use a collection of base class pointers.

That is, if you plan on storing different derived types in your collection, you must store pointers or get eaten by the slicing daemon.

A: You seem to have a good grasp of the difference. If the objects are small and easy to copy, then by all means store them.

If not, I would think about storing smart pointers (not auto_ptr; a ref-counting smart pointer) to ones you allocate on the heap. Obviously, if you opt for smart pointers, then you can't store temp stack-allocated objects (as you have said).

@Torbjörn makes a good point about slicing.

A: Using pointers will be more efficient since the containers will be copying only pointers around instead of full objects.
There's some useful information here about STL containers and smart pointers: Why is it wrong to use std::auto_ptr<> with standard containers?

A: Sorry to jump in 3 years after the event, but a cautionary note here...

On my last big project, my central data structure was a set of fairly straightforward objects. About a year into the project, as the requirements evolved, I realised that the object actually needed to be polymorphic. It took a few weeks of difficult and nasty brain surgery to fix the data structure to be a set of base class pointers, and to handle all the collateral damage in object storage, casting, and so on. It took me a couple of months to convince myself that the new code was working. Incidentally, this made me think hard about how well-designed C++'s object model is.

On my current big project, my central data structure is a set of fairly straightforward objects. About a year into the project (which happens to be today), I realised that the object actually needs to be polymorphic. Back to the net, found this thread, and found Nick's link to the Boost pointer container library. This is exactly what I had to write last time to fix everything, so I'll give it a go this time around.

The moral, for me, anyway: if your spec isn't 100% cast in stone, go for pointers, and you may potentially save yourself a lot of work later.

A: If the objects are to be referred to elsewhere in the code, store them in a vector of boost::shared_ptr. This ensures that pointers to the objects will remain valid if you resize the vector. I.e.:

std::vector<boost::shared_ptr<protocol> > protocols;
...
connection c(protocols[0].get()); // pointer to protocol stays valid even if resized

If no one else stores pointers to the objects, or the list doesn't grow and shrink, just store plain old objects:

std::vector<protocol> protocols;
connection c(protocols[0]); // value-semantics, takes a copy of the protocol

A: Why not get the best of both worlds: use a container of smart pointers (such as boost::shared_ptr or std::shared_ptr). You don't have to manage the memory, and you don't have to deal with large copy operations.

A: Generally, storing the objects directly in the STL container is best, as it is simplest, most efficient, and easiest for using the object. If your object itself is non-copyable or is an abstract base type, you will need to store pointers (the easiest is to use shared_ptr).

A: This question has been bugging me for a while. I lean toward storing pointers, but I have some additional requirements (SWIG Lua wrappers) that might not apply to you.

The most important point in this post is to test it yourself, using your objects. I did this today to test the speed of calling a member function on a collection of 10 million objects, 500 times. The function updates x and y based on xdir and ydir (all float member variables).

I used a std::list to hold both types of objects, and I found that storing the object in the list is slightly faster than using a pointer. On the other hand, the performance was very close, so it comes down to how they will be used in your application.

For reference, with -O3 on my hardware the pointers took 41 seconds to complete and the raw objects took 30 seconds to complete.
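For readers who want to reproduce that kind of measurement on their own hardware, here is a minimal sketch; the Particle type, the element count, and the pass count are made-up stand-ins for the objects and numbers described above (scaled down so it runs quickly):

#include <chrono>
#include <iostream>
#include <list>
#include <memory>

struct Particle {
    float x = 0, y = 0, xdir = 1, ydir = 1;
    void update() { x += xdir; y += ydir; }
};

template <typename Fn>
static double timeIt(Fn&& fn) {
    auto start = std::chrono::steady_clock::now();
    fn();
    std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - start;
    return elapsed.count();
}

int main() {
    const std::size_t n = 1000000;  // scaled down from the 10M in the post
    const int passes = 50;

    std::list<Particle> byValue(n);
    std::list<std::unique_ptr<Particle>> byPointer;
    for (std::size_t i = 0; i < n; ++i)
        byPointer.push_back(std::make_unique<Particle>());

    double tValue = timeIt([&] {
        for (int p = 0; p < passes; ++p)
            for (auto& obj : byValue) obj.update();
    });
    double tPointer = timeIt([&] {
        for (int p = 0; p < passes; ++p)
            for (auto& ptr : byPointer) ptr->update();
    });

    std::cout << "by value:   " << tValue << " s\n"
              << "by pointer: " << tPointer << " s\n";
}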
{ "language": "en", "url": "https://stackoverflow.com/questions/141337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "167" }
Q: How to check if a directory exists in %PATH% How does one check if a directory is already present in the PATH environment variable? Here's a start. All I've managed to do with the code below, though, is echo the first directory in %PATH%. Since this is a FOR loop you'd think it would enumerate all the directories in %PATH%, but it only gets the first one. Is there a better way of doing this? Something like FIND or FINDSTR operating on the %PATH% variable? I'd just like to check if a directory exists in the list of directories in %PATH%, to avoid adding something that might already be there.

FOR /F "delims=;" %%P IN ("%PATH%") DO (
  @ECHO %%~P
)

A: Using for and delims, you cannot capture an arbitrary number of fields (as Adam pointed out as well), so you have to use a looping technique instead. The following command script will list each path in the PATH environment variable on a separate line:

@echo off
setlocal
if "%~1"=="" (
  set PATHQ=%PATH%
) else (
  set PATHQ=%~1
)

:WHILE
  if "%PATHQ%"=="" goto WEND
  for /F "delims=;" %%i in ("%PATHQ%") do echo %%i
  for /F "delims=; tokens=1,*" %%i in ("%PATHQ%") do set PATHQ=%%j
  goto WHILE

:WEND

It simulates a classical while…wend construct found in many programming languages. With this in place, you can use something like findstr to subsequently filter and look for a particular path. For example, if you saved the above script in a file called tidypath.cmd, then here is how you could pipe to findstr, looking for paths under the standard programs directory (using a case-insensitive match):

> tidypath | findstr /i "%ProgramFiles%"

A: This may work:

echo ;%PATH%; | find /C /I ";<string>;"

It should give you 0 if the string is not found and 1 or more if it is.

A: This will look for an exact but case-insensitive match, so mind any trailing backslashes etc.:

for %P in ("%path:;=";"%") do @if /i %P=="PATH_TO_CHECK" echo %P exists in PATH

or, in a batch file (e.g. checkpath.bat) which takes an argument:

@for %%P in ("%path:;=";"%") do @if /i %%P=="%~1" echo %%P exists in PATH

In the latter form, one could call e.g. checkpath "%ProgramFiles%" to see if the specified path already exists in PATH. Please note that this implementation assumes no semicolons or quotes are present inside a single path item.

A: Just to elaborate on Heyvoon's response (2015-06-08) using PowerShell, this simple PowerShell script should give you detail on %path%:

$env:Path -split ";" | % {"$(test-path $_); $_"}

It generates this kind of output, which you can verify independently:

True;C:\WINDOWS
True;C:\WINDOWS\system32
True;C:\WINDOWS\System32\Wbem
False;C:windows\System32\windowsPowerShell\v1.0\
False;C:\Program Files (x86)\Java\jre7\bin

To reassemble for updating Path:

$x = $null; foreach ($t in ($env:Path -split ";") ) {if (test-path $t) {$x += $t + ";"}}; $x

A: Another way to check if something is in the path is to execute some innocent executable that is not going to fail if it's there, and check the result. As an example, the following code snippet checks if Maven is in the path:

mvn --help > NUL 2> NUL
if errorlevel 1 goto mvnNotInPath

So I try to run mvn --help, ignore the output (I don't actually want to see the help if Maven is there) (> NUL), and also don't display the error message if Maven was not found (2> NUL).

A: If your question was "why doesn't this cmd script fragment work?" then the answer is that for /f iterates over lines. The delims option splits a line into fields, but you're only capturing the first field in %%P.
There is no way to capture an arbitrary number of fields with a for /f loop.

A: I took your implementation using the for loop and extended it into something that iterates through all elements of the path. Each iteration of the for loop removes the first element of the path (%p) from the entire path (held in %q and %r).

@echo off
SET MYPATHCOPY=%PATH%

:search
for /f "delims=; tokens=1,2*" %%p in ("%MYPATHCOPY%") do (
  @echo %%~p
  SET MYPATHCOPY=%%~q;%%~r
)

if "%MYPATHCOPY%"==";" goto done;
goto search;

:done

Sample output:

Z:\>path.bat
C:\Program Files\Microsoft DirectX SDK (November 2007)\Utilities\Bin\x86
c:\program files\imagemagick-6.3.4-q16
C:\WINDOWS\system32
C:\WINDOWS
C:\SFU\common\
c:\Program Files\Debugging Tools for Windows
C:\Program Files\Nmap

A: You can also use substring replacement to test for the presence of a substring. Here I remove quotes to create PATH_NQ, then I remove "c:\mydir" from PATH_NQ and compare it to the original to see if anything changed:

set PATH_NQ=%PATH:"=%
if not "%PATH_NQ%"=="%PATH_NQ:c:\mydir=%" goto already_in_path
set PATH=%PATH%;c:\mydir
:already_in_path

A: This version works fairly well. It simply checks whether the executable vim71 (Vim 7.1) is in the path, and prepends it if not.

@echo off
echo %PATH% | find /c /i "vim71" > nul
if not errorlevel 1 goto jump
PATH = C:\Program Files\Vim\vim71\;%PATH%

:jump

This demo is to illustrate the errorlevel logic:

@echo on
echo %PATH% | find /c /i "Windows"
if "%errorlevel%"=="0" echo Found Windows
echo %PATH% | find /c /i "Nonesuch"
if "%errorlevel%"=="0" echo Found Nonesuch

The logic is reversed in the vim71 code since errorlevel 1 is equivalent to errorlevel >= 1. It follows that errorlevel 0 would always evaluate true, so "not errorlevel 1" is used.

Postscript: Checking may not be necessary if you use SETLOCAL and ENDLOCAL to localise your environment settings, e.g.,

@echo off
setlocal
PATH = C:\Program Files\Vim\vim71\;%PATH%
rem your code here
endlocal

After ENDLOCAL you are back with your original path.

A: I've combined some of the above answers to come up with this, to ensure that a given path entry exists exactly as given, with as much brevity as possible and no junk echoes on the command line:

set myPath=<pathToEnsure | %1>
echo ;%PATH%; | find /C /I ";%myPath%;" >nul
if %ERRORLEVEL% NEQ 0 set PATH=%PATH%;%myPath%

A: After reading the answers here I think I can provide a new point of view: if the purpose of this question is to know if the path to a certain executable file exists in %PATH% and, if not, insert it (and this is the only reason to do that, I think), then it may be solved in a slightly different way: "How to check if the directory of a certain executable program exists in %PATH%"?

This question may be easily solved this way:

for %%p in (programname.exe) do set "progpath=%%~$PATH:p"
if not defined progpath (
   rem The path to programname.exe doesn't exist in the PATH variable, insert it:
   set "PATH=%PATH%;C:\path\to\programname"
)

If you don't know the extension of the executable file, just review all of them:

set "progpath="
for %%e in (%PATHEXT%) do (
   if not defined progpath (
      for %%p in (programname.%%e) do set "progpath=%%~$PATH:p"
   )
)

A: First I will point out a number of issues that make this problem difficult to solve perfectly. Then I will present the most bullet-proof solution I have been able to come up with. For this discussion I will use lower case path to represent a single folder path in the file system, and upper case PATH to represent the PATH environment variable.
From a practical standpoint, most people want to know if PATH contains the logical equivalent of a given path, not whether PATH contains an exact string match of a given path. This can be problematic because:

1) The trailing \ is optional in a path
Most paths work equally well both with and without the trailing \. The path logically points to the same location either way. The PATH frequently has a mixture of paths both with and without the trailing \. This is probably the most common practical issue when searching a PATH for a match.

There is one exception: the relative path C: (meaning the current working directory of drive C) is very different from C:\ (meaning the root directory of drive C).

2) Some paths have alternate short names
Any path that does not meet the old 8.3 standard has an alternate short form that does meet the standard. This is another PATH issue that I have seen with some frequency, particularly in business settings.

3) Windows accepts both / and \ as folder separators within a path.
This is not seen very often, but a path can be specified using / instead of \ and it will function just fine within PATH (as well as in many other Windows contexts).

4) Windows treats consecutive folder separators as one logical separator.
C:\FOLDER\\ and C:\FOLDER\ are equivalent. This actually helps in many contexts when dealing with a path, because a developer can generally append \ to a path without bothering to check if the trailing \ already exists. But this obviously can cause problems if trying to perform an exact string match.

Exceptions: Not only is C: different from C:\, but C:\ (a valid path) is different from C:\\ (an invalid path).

5) Windows trims trailing dots and spaces from file and directory names.
"C:\test. " is equivalent to "C:\test".

6) The current .\ and parent ..\ folder specifiers may appear within a path
Unlikely to be seen in real life, but something like C:\.\parent\child\..\.\child\ is equivalent to C:\parent\child.

7) A path can optionally be enclosed within double quotes.
A path is often enclosed in quotes to protect against special characters like <space> , ; ^ & =. Actually, any number of quotes can appear before, within, and/or after the path. They are ignored by Windows except for the purpose of protecting against special characters. The quotes are never required within PATH unless a path contains a ;, but the quotes may be present nevertheless.

8) A path may be fully qualified or relative.
A fully qualified path points to exactly one specific location within the file system. A relative path's location changes depending on the value of the current working volumes and directories. There are three primary flavors of relative paths:

D: is relative to the current working directory of volume D:
\myPath is relative to the current working volume (could be C:, D:, etc.)
myPath is relative to the current working volume and directory

It is perfectly legal to include a relative path within PATH. This is very common in the Unix world because Unix does not search the current directory by default, so a Unix PATH will often contain .\. But Windows does search the current directory by default, so relative paths are rare in a Windows PATH.

So in order to reliably check if PATH already contains a path, we need a way to convert any given path into a canonical (standard) form. The ~s modifier used by FOR variable and argument expansion is a simple method that addresses issues 1 - 6, and partially addresses issue 7.
The ~s modifier removes enclosing quotes, but preserves internal quotes. Issue 7 can be fully resolved by explicitly removing quotes from all paths prior to comparison. Note that if a path does not physically exist, then the ~s modifier will not append the \ to the path, nor will it convert the path into a valid 8.3 format.

The problem with ~s is that it converts relative paths into fully qualified paths. This is problematic for Issue 8 because a relative path should never match a fully qualified path. We can use FINDSTR regular expressions to classify a path as either fully qualified or relative. A normal fully qualified path must start with <letter>:<separator> but not <letter>:<separator><separator>, where <separator> is either \ or /. UNC paths are always fully qualified and must start with \\. When comparing fully qualified paths we use the ~s modifier. When comparing relative paths we use the raw strings. Finally, we never compare a fully qualified path to a relative path. This strategy provides a good practical solution for Issue 8. The only limitation is that two logically equivalent relative paths could be treated as not matching, but this is a minor concern because relative paths are rare in a Windows PATH.

There are some additional issues that complicate this problem:

9) Normal expansion is not reliable when dealing with a PATH that contains special characters.
Special characters do not need to be quoted within PATH, but they could be. So a PATH like C:\THIS & THAT;"C:\& THE OTHER THING" is perfectly valid, but it cannot be expanded safely using simple expansion because both "%PATH%" and %PATH% will fail.

10) The path delimiter is also valid within a path name
A ; is used to delimit paths within PATH, but ; can also be a valid character within a path, in which case the path must be quoted. This causes a parsing issue.

jeb solved both issues 9 and 10 at 'Pretty print' windows %PATH% variable - how to split on ';' in CMD shell

So we can combine the ~s modifier and path classification techniques along with my variation of jeb's PATH parser to get this nearly bullet-proof solution for checking if a given path already exists within PATH. The function can be included and called from within a batch file, or it can stand alone and be called as its own inPath.bat batch file. It looks like a lot of code, but over half of it is comments.

@echo off
:inPath pathVar
::
::  Tests if the path stored within variable pathVar exists within PATH.
::
::  The result is returned as the ERRORLEVEL:
::    0 if the pathVar path is found in PATH.
::    1 if the pathVar path is not found in PATH.
::    2 if pathVar is missing or undefined or if PATH is undefined.
::
::  If the pathVar path is fully qualified, then it is logically compared
::  to each fully qualified path within PATH. The path strings don't have
::  to match exactly, they just need to be logically equivalent.
::
::  If the pathVar path is relative, then it is strictly compared to each
::  relative path within PATH. Case differences and double quotes are
::  ignored, but otherwise the path strings must match exactly.
::
::------------------------------------------------------------------------
::
:: Error checking
if "%~1"=="" exit /b 2
if not defined %~1 exit /b 2
if not defined path exit /b 2
::
:: Prepare to safely parse PATH into individual paths
setlocal DisableDelayedExpansion
set "var=%path:"=""%"
set "var=%var:^=^^%"
set "var=%var:&=^&%"
set "var=%var:|=^|%"
set "var=%var:<=^<%"
set "var=%var:>=^>%"
set "var=%var:;=^;^;%"
set var=%var:""="%
set "var=%var:"=""Q%"
set "var=%var:;;="S"S%"
set "var=%var:^;^;=;%"
set "var=%var:""="%"
setlocal EnableDelayedExpansion
set "var=!var:"Q=!"
set "var=!var:"S"S=";"!"
::
:: Remove quotes from pathVar and abort if it becomes empty
set "new=!%~1:"=!"
if not defined new exit /b 2
::
:: Determine if pathVar is fully qualified
echo("!new!"|findstr /i /r /c:^"^^\"[a-zA-Z]:[\\/][^\\/]" ^
                           /c:^"^^\"[\\][\\]" >nul ^
  && set "abs=1" || set "abs=0"
::
:: For each path in PATH, check if path is fully qualified and then do
:: proper comparison with pathVar.
:: Exit with ERRORLEVEL 0 if a match is found.
:: Delayed expansion must be disabled when expanding FOR variables
:: just in case the value contains !
for %%A in ("!new!\") do for %%B in ("!var!") do (
  if "!!"=="" endlocal
  for %%C in ("%%~B\") do (
    echo(%%B|findstr /i /r /c:^"^^\"[a-zA-Z]:[\\/][^\\/]" ^
                           /c:^"^^\"[\\][\\]" >nul ^
      && (if %abs%==1 if /i "%%~sA"=="%%~sC" exit /b 0) ^
      || (if %abs%==0 if /i "%%~A"=="%%~C" exit /b 0)
  )
)
:: No match was found so exit with ERRORLEVEL 1
exit /b 1

The function can be used like so (assuming the batch file is named inPath.bat):

set test=c:\mypath
call inPath test && (echo found) || (echo not found)

Typically the reason for checking if a path exists within PATH is that you want to append the path if it isn't there. This is normally done simply by using something like path %path%;%newPath%. But Issue 9 demonstrates how this is not reliable.

Another issue is how to return the final PATH value across the ENDLOCAL barrier at the end of the function, especially if the function could be called with delayed expansion enabled or disabled. Any unescaped ! will corrupt the value if delayed expansion is enabled.

These problems are resolved using an amazing safe return technique that jeb invented here: http://www.dostips.com/forum/viewtopic.php?p=6930#p6930

@echo off
:addPath pathVar /B
::
::  Safely appends the path contained within variable pathVar to the end
::  of PATH if and only if the path does not already exist within PATH.
::
::  If the case insensitive /B option is specified, then the path is
::  inserted into the front (Beginning) of PATH instead.
::
::  If the pathVar path is fully qualified, then it is logically compared
::  to each fully qualified path within PATH. The path strings are
::  considered a match if they are logically equivalent.
::
::  If the pathVar path is relative, then it is strictly compared to each
::  relative path within PATH. Case differences and double quotes are
::  ignored, but otherwise the path strings must match exactly.
::
::  Before appending the pathVar path, all double quotes are stripped, and
::  then the path is enclosed in double quotes if and only if the path
::  contains at least one semicolon.
::
::  addPath aborts with ERRORLEVEL 2 if pathVar is missing or undefined
::  or if PATH is undefined.
::
::------------------------------------------------------------------------
::
:: Error checking
if "%~1"=="" exit /b 2
if not defined %~1 exit /b 2
if not defined path exit /b 2
::
:: Determine if function was called while delayed expansion was enabled
setlocal
set "NotDelayed=!"
::
:: Prepare to safely parse PATH into individual paths
setlocal DisableDelayedExpansion
set "var=%path:"=""%"
set "var=%var:^=^^%"
set "var=%var:&=^&%"
set "var=%var:|=^|%"
set "var=%var:<=^<%"
set "var=%var:>=^>%"
set "var=%var:;=^;^;%"
set var=%var:""="%
set "var=%var:"=""Q%"
set "var=%var:;;="S"S%"
set "var=%var:^;^;=;%"
set "var=%var:""="%"
setlocal EnableDelayedExpansion
set "var=!var:"Q=!"
set "var=!var:"S"S=";"!"
::
:: Remove quotes from pathVar and abort if it becomes empty
set "new=!%~1:"^=!"
if not defined new exit /b 2
::
:: Determine if pathVar is fully qualified
echo("!new!"|findstr /i /r /c:^"^^\"[a-zA-Z]:[\\/][^\\/]" ^
                           /c:^"^^\"[\\][\\]" >nul ^
  && set "abs=1" || set "abs=0"
::
:: For each path in PATH, check if path is fully qualified and then
:: do proper comparison with pathVar. Exit if a match is found.
:: Delayed expansion must be disabled when expanding FOR variables
:: just in case the value contains !
for %%A in ("!new!\") do for %%B in ("!var!") do (
  if "!!"=="" setlocal disableDelayedExpansion
  for %%C in ("%%~B\") do (
    echo(%%B|findstr /i /r /c:^"^^\"[a-zA-Z]:[\\/][^\\/]" ^
                           /c:^"^^\"[\\][\\]" >nul ^
      && (if %abs%==1 if /i "%%~sA"=="%%~sC" exit /b 0) ^
      || (if %abs%==0 if /i %%A==%%C exit /b 0)
  )
)
::
:: Build the modified PATH, enclosing the added path in quotes
:: only if it contains ;
setlocal enableDelayedExpansion
if "!new:;=!" neq "!new!" set new="!new!"
if /i "%~2"=="/B" (set "rtn=!new!;!path!") else set "rtn=!path!;!new!"
::
:: rtn now contains the modified PATH. We need to safely pass the
:: value across the ENDLOCAL barrier
::
:: Make rtn safe for assignment using normal expansion by replacing
:: % and " with not yet defined FOR variables
set "rtn=!rtn:%%=%%A!"
set "rtn=!rtn:"=%%B!"
::
:: Escape ^ and ! if function was called while delayed expansion was enabled.
:: The trailing ! in the second assignment is critical and must not be removed.
if not defined NotDelayed set "rtn=!rtn:^=^^^^!"
if not defined NotDelayed set "rtn=%rtn:!=^^^!%" !
::
:: Pass the rtn value across the ENDLOCAL barrier using FOR variables to
:: restore the % and " characters. Again the trailing ! is critical.
for /f "usebackq tokens=1,2" %%A in ('%%^ ^"') do (
  endlocal & endlocal & endlocal & endlocal & endlocal
  set "path=%rtn%" !
)
exit /b 0

A: Building on Randy's answer, you have to make sure a substring of the target isn't found:

if a%X%==a%PATH% echo %X% is in PATH
echo %PATH% | find /c /i ";%X%"
if not errorlevel 1 echo %X% is in PATH
echo %PATH% | find /c /i "%X%;"
if not errorlevel 1 echo %X% is in PATH

A: You mention that you want to avoid adding the directory to the search path if it already exists there. Is your intention to store the directory permanently in the path, or just temporarily for the batch file's sake?

If you wish to add (or remove) directories permanently to PATH, take a look at the Path Manager (pathman.exe) utility in the Windows Resource Kit Tools for administrative tasks, http://support.microsoft.com/kb/927229. With that you can add or remove components of both system and user paths, and it will handle anomalies such as duplicate entries.
A: Add the directory to PATH if it does not already exist:

set myPath=c:\mypath
For /F "Delims=" %%I In ('echo %PATH% ^| find /C /I "%myPath%"') Do set pathExists=%%I 2>Nul
If %pathExists%==0 (set PATH=%myPath%;%PATH%)

A: Just as an alternative:

* In the folder you are going to search the PATH variable for, create a temporary file with such an unusual name that you would never ever expect any other file on your computer to have.
* Use the standard batch scripting construct that lets you perform the search for a file by looking up a directory list defined by some environment variable (typically PATH).
* Check if the result of the search matches the path in question, and display the outcome.
* Delete the temporary file.

This might look like this (note the trailing backslash added to %mypath% in the comparison, since the ~dp modifier always returns a path ending in \):

@ECHO OFF
SET "mypath=D:\the\searched-for\path"
SET unusualname=nowthisissupposedtobesomeveryunusualfilename
ECHO.>"%mypath%\%unusualname%"
FOR %%f IN (%unusualname%) DO SET "foundpath=%%~dp$PATH:f"
ERASE "%mypath%\%unusualname%"
IF "%mypath%\" == "%foundpath%" (
    ECHO The dir exists in PATH
) ELSE (
    ECHO The dir DOES NOT exist in PATH
)

Known issues:

* The method can work only if the directory exists (which isn't always the case).
* Creating / deleting files in a directory affects its 'modified date/time' attribute (which may be an undesirable effect sometimes).
* Making up a globally unique file name in one's mind cannot be considered very reliable. Generating such a name is itself not a trivial task.

A: In general, this is for adding an EXE or DLL file to the path. As long as this file won't appear anywhere else:

@echo off
where /q <put filename here>
if %errorlevel% == 1 (
    setx PATH "%PATH%;<additional path stuff>"
) else (
    echo "already set path"
)

A: This is a variation of Kevin Edwards's answer using string replacement. The basic pattern is:

IF "%PATH:new_path=%" == "%PATH%" PATH=%PATH%;new_path

For example:

IF "%PATH:C:\Scripts=%" == "%PATH%" PATH=%PATH%;C:\Scripts

In a nutshell, we make a conditional test in which we attempt to remove/replace new_path in our PATH environment variable. If new_path doesn't exist, the condition succeeds and new_path will be appended to PATH for the first time. If new_path already exists, the condition fails and we will not add new_path a second time.
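One caveat with the replacement pattern above: it also matches substrings, so C:\Scripts would be treated as present when PATH contains only C:\Scripts2. A stricter sketch that anchors the entry between semicolons (the path name is hypothetical; this assumes PATH entries are unquoted and written without trailing backslashes):

rem Wrap PATH in semicolons so every entry, including the first and last, is ;-delimited
set "checkPath=;%PATH%;"
rem If removing ;C:\Scripts; changes nothing, the entry is absent, so append it
if "%checkPath:;C:\Scripts;=;%" == "%checkPath%" set "PATH=%PATH%;C:\Scripts"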
"Check File.ext" echo.Returns 0 if found, 1 if not or -1 if checkpath does not exist at all echo.If PathVar is not in command line it will be echoed with surrounding quotes echo.If PathVar is passed it will be set to d:\path\checkfile with no trailing \ echo.Then %%PathVar%% will be set to the fully qualified path to Checkfile echo.Note: %%PathVar%% variable set will not be surrounded with quotes echo.To view the path listing line by line use: PathCheck.CMD /L exit/b 1 :PathCheck if "%~1"=="" goto :PathCheck.CMD setlocal EnableDelayedExpansion set "PathVar=%~2" set "pth=" set "pcheck=%~1" if "%pcheck:~-1%" equ "\" ( if not exist %pcheck% endlocal&exit/b -1 set/a pth=1 ) for %%G in ("%path:;=" "%") do ( set "Pathfd=%%~G\" set "Pathfd=!Pathfd:\\=\!" if /i "%pcheck%" equ "/L" echo.!Pathfd! if defined pth ( if /i "%pcheck%" equ "!Pathfd!" endlocal&exit/b 0 ) else ( if exist "!Pathfd!%pcheck%" goto :CheckfileFound ) ) endlocal&exit/b 1 :CheckfileFound endlocal&( if not "%PathVar%"=="" ( call set "%~2=%Pathfd%%pcheck%" ) else (echo."%Pathfd%%pcheck%") exit/b 0 ) A: -contains worked for me $pathToCheck = "c:\some path\to\a\file.txt" $env:Path - split ';' -contains $pathToCheck To add the path when it does not exist yet I use $pathToCheck = "c:\some path\to\a\file.txt" if(!($env:Path -split ';' -contains $vboxPath)) { $documentsDir = [Environment]::GetFolderPath("MyDocuments") $profileFilePath = Join-Path $documentsDir "WindowsPowerShell/profile.ps1" Out-File -FilePath $profileFilePath -Append -Force -Encoding ascii -InputObject "`$env:Path += `";$pathToCheck`"" Invoke-Expression -command $profileFilePath } A: rem https://stackoverflow.com/a/59571160/2292993 rem Don't get mess with %PATH%, it is a concatenation of USER+SYSTEM, and will cause a lot of duplication in the result. for /f "usebackq tokens=2,*" %%A in (`reg query HKCU\Environment /v PATH`) do set userPATH=%%B rem userPATH should be %USERPROFILE%\AppData\Local\Microsoft\WindowsApps rem https://stackoverflow.com/questions/141344 for /f "delims=" %%A in ('echo ";%userPATH%;" ^| find /C /I ";%WINAPPS%;"') do set pathExists=%%A If %pathExists%==0 ( echo Inserting user path... setx PATH "%WINAPPS%; %userPATH%" )
{ "language": "en", "url": "https://stackoverflow.com/questions/141344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "72" }
Q: xsl scope help I have an XSL file that is grabbing variables from XML, and the templates don't seem to be able to see each other's variables. I know it is a scope issue; I just do not know what I am doing wrong.

<xsl:template match="one">
  <xsl:variable name="varOne" select="@count" />
</xsl:template>

<xsl:template match="two">
  <xsl:if test="$varOne = 'Y'">
    <xsl:value-of select="varTwo"/>
  </xsl:if>
</xsl:template>

This has been simplified for here. Any help is appreciated.

A: You may also solve some scoping issues by passing parameters...

<xsl:apply-templates select="two">
  <xsl:with-param name="varOne">
    <xsl:value-of select="one/@count"/>
  </xsl:with-param>
</xsl:apply-templates>

<xsl:template match="two">
  <xsl:param name="varOne"/>
  <xsl:if test="$varOne = 'Y'">
    <xsl:value-of select="varTwo"/>
  </xsl:if>
</xsl:template>

A: I'm fairly certain that variables are scoped, and therefore you can't declare a variable in one template and then use it in another. You're going to have to move your variable declaration out of the templates so that it's in a higher scope than both of them.

A: Remembering that xsl variables are immutable...

<!-- You may want to use an absolute path -->
<xsl:variable name="varOne" select="one/@count" />

<xsl:template match="one">
  <!-- do something -->
</xsl:template>

<xsl:template match="two">
  <xsl:if test="$varOne = 'Y'">
    <xsl:value-of select="varTwo"/>
  </xsl:if>
</xsl:template>

A: The scope of a variable in XSLT is its enclosing element. To make a variable visible to multiple elements, its declaration has to be at the same level as or higher than those elements.
{ "language": "en", "url": "https://stackoverflow.com/questions/141345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }