Q: Enforcing a strict timeout policy on a server-side web request
We will need to call out to a 3rd party to retrieve a value using REST; however, if we do not receive a response within 10ms, I want to use a default value and continue processing. I'm leaning towards using an asynchronous WebRequest to do this, but I was wondering if there was a trick to doing it using a synchronous request. Any advice?

A: If you are doing a request and waiting on it to return, I'd say stay synchronous - there's no reason to do an async request if you're not going to do anything or stay responsive while waiting. For a sync call:

WebRequest request = WebRequest.Create("http://something.somewhere/url");
WebResponse response = null;
request.Timeout = 10000; // 10 second timeout
try
{
    response = request.GetResponse();
}
catch (WebException e)
{
    if (e.Status == WebExceptionStatus.Timeout)
    {
        // something
    }
}

If doing async: you will have to call Abort() on the request object - you'll need to check the timeout yourself; there's no built-in way to enforce a hard timeout.

A: You could encapsulate your call to the 3rd party in a WebService. You could then call this WebService synchronously from your application - the web service reference has a simple Timeout property that you can set to 10 seconds or whatever. Your call to get the 3rd party data from your WebService will throw a WebException after the timeout period has elapsed. You catch it and use a default value instead.

EDIT: Philip's response above is better. RIF.
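The same fall-back-to-a-default-on-timeout pattern can be sketched in Python for illustration (the host, port, and DEFAULT_VALUE below are made-up placeholders, and a raw socket stands in for the REST call):

```python
import socket

DEFAULT_VALUE = "fallback"  # placeholder for the application's default value

def fetch_with_deadline(host, port, request_bytes, timeout_s=0.010):
    """Return the server's response, or DEFAULT_VALUE if it misses the deadline.

    A 10 ms budget (as in the question) is aggressive; note settimeout applies
    per blocking operation, so total time can slightly exceed the budget.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout_s) as s:
            s.settimeout(timeout_s)
            s.sendall(request_bytes)
            return s.recv(4096).decode("utf-8", "replace")
    except (socket.timeout, OSError):
        # Timed out or unreachable: continue processing with the default.
        return DEFAULT_VALUE

# Unreachable/slow host -> falls back to the default value.
print(fetch_with_deadline("10.255.255.1", 80, b"GET / HTTP/1.0\r\n\r\n", 0.05))
```

The key point, as in the C# answer, is that the timeout is enforced by the I/O layer and the caller simply catches the resulting exception.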
{ "language": "en", "url": "https://stackoverflow.com/questions/134917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Visual Studio Editor: Remove Structured IF/End If
Is there an easy way to remove an IF/End If structured pairing? Being able to do this in one keystroke would be nice (I also use Refactor! Pro). Right now this is what I do:

* Delete the IF line
* Watch Visual Studio reformat the code to line up correctly, taking into account that the IF is missing
* Navigate to the End If
* Delete the End If line

i.e. In the following example I want to change the code from

If Value = True Then
    DoSomething()
    DoSomething2()
End If

To

DoSomething()
DoSomething2()

A: While this is not a literal refactoring in the sense specified by Martin Fowler's book Refactoring, this is how I use ReSharper to achieve this goal:

* Move to / click on the line with the If statement
* Press Ctrl + Delete to delete the line
* Press Alt + Enter, and the option "remove braces" will be the first one offered
* Press Enter

Done. Not quite simple, but the keystrokes are short and not too complicated, and I don't have to spend/waste time with dumb arrow keys or the mouse to accomplish this type of code change. ReSharper supports VB.NET code as of 4.0, I believe.
{ "language": "en", "url": "https://stackoverflow.com/questions/134921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Display number with leading zeros
How do I display a leading zero for all numbers with less than two digits?

1 → 01
10 → 10
100 → 100

A: Or this:

print '{0:02d}'.format(1)

A: You can do this with f-strings.

import numpy as np
print(f'{np.random.choice([1, 124, 13566]):0>8}')

This will print with a constant length of 8, and pad the rest with leading zeros:

00000001
00000124
00013566

A: This is how I do it:

str(1).zfill(len(str(total)))

Basically zfill takes the number of leading zeros you want to add, so it's easy to take the biggest number, turn it into a string and get the length, like this:

Python 3.6.5 (default, May 11 2018, 04:00:52)
[GCC 8.1.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> total = 100
>>> print(str(1).zfill(len(str(total))))
001
>>> total = 1000
>>> print(str(1).zfill(len(str(total))))
0001
>>> total = 10000
>>> print(str(1).zfill(len(str(total))))
00001

A: x = [1, 10, 100]
for i in x:
    print '%02d' % i

results in:

01
10
100

Read more information about string formatting using % in the documentation.

A: Use a format string - http://docs.python.org/lib/typesseq-strings.html

For example:

python -c 'print "%(num)02d" % {"num":5}'

A: The Pythonic way to do this:

str(number).rjust(string_width, fill_char)

This way, the original string is returned unchanged if its length is greater than string_width. Example:

a = [1, 10, 100]
for num in a:
    print str(num).rjust(2, '0')

Results:

01
10
100

A: width = 5
num = 3
formatted = (width - len(str(num))) * "0" + str(num)
print formatted

A: In Python 2.6+ and 3.0+, you would use the format() string method:

for i in (1, 10, 100):
    print('{num:02d}'.format(num=i))

or using the built-in (for a single number):

print(format(i, '02d'))

See the PEP 3101 documentation for the new formatting functions.

A: Or another solution:

"{:0>2}".format(number)

A: Use:

'00'[len(str(i)):] + str(i)

Or with the math module (note: len(str(i)) equals int(math.log10(i)) + 1 for i >= 1):

import math
'00'[int(math.log10(i)) + 1:] + str(i)

A: All of these create the string "01":

>python -m timeit "'{:02d}'.format(1)"
1000000 loops, best of 5: 357 nsec per loop

>python -m timeit "'{0:0{1}d}'.format(1,2)"
500000 loops, best of 5: 607 nsec per loop

>python -m timeit "f'{1:02d}'"
1000000 loops, best of 5: 281 nsec per loop

>python -m timeit "f'{1:0{2}d}'"
500000 loops, best of 5: 423 nsec per loop

>python -m timeit "str(1).zfill(2)"
1000000 loops, best of 5: 271 nsec per loop

>python
Python 3.8.1 (tags/v3.8.1:1b293b6, Dec 18 2019, 23:11:46) [MSC v.1916 64 bit (AMD64)] on win32

A: You could also do:

'{:0>2}'.format(1)

which will return a string.

A: In Python 2 (and Python 3) you can do:

number = 1
print("%02d" % (number,))

Basically % is like printf or sprintf (see the docs). For Python 3+, the same behavior can also be achieved with format:

number = 1
print("{:02d}".format(number))

For Python 3.6+, the same behavior can be achieved with f-strings:

number = 1
print(f"{number:02d}")

A: print('{:02}'.format(1))
print('{:02}'.format(10))
print('{:02}'.format(100))

prints:

01
10
100

A: You can use str.zfill:

print(str(1).zfill(2))
print(str(10).zfill(2))
print(str(100).zfill(2))

prints:

01
10
100

A: In Python >= 3.6, you can do this succinctly with the new f-strings that were introduced, by using:

f'{val:02}'

which prints the variable with name val with a fill value of 0 and a width of 2. For your specific example you can do this nicely in a loop:

a, b, c = 1, 10, 100
for val in [a, b, c]:
    print(f'{val:02}')

which prints:

01
10
100

For more information on f-strings, take a look at PEP 498 where they were introduced.

A: This would be the Python way, although I would include the parameter for clarity - "{0:0>2}".format(number). If someone wants n leading zeros, they should note they can also do:

"{0:0>{1}}".format(number, nLeadingZeros + 1)

A: It's built into Python with string formatting:

f'{number:02d}'

A: If dealing with numbers that are either one or two digits:

'0'+str(number)[-2:]

or

'0{0}'.format(number)[-2:]
{ "language": "en", "url": "https://stackoverflow.com/questions/134934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1329" }
Q: Type double byte character into vbscript file
I need to convert → (&rarr;) to a symbol I can type into an ANSI VBScript file. I am writing a script that translates a select set of HTML codes to their actual double-byte symbols using a regex. Many languages accomplish this using "\0x8594;"... what is the equivalent in VBScript?

A: Answer was ChrW(8594)

A: ChrW(&H8594)

A: Note: Bob King's answer is correct for the information given. The problem is that alumb is mistaken about the meaning of a numeric character entity reference. → (single right arrow) is, as stated, also identified as &#8594;, but this is decimal and so is not equivalent to \x8594 in "many languages" (e.g. C++). This is why ChrW(&H8594) gave the "wrong" character. Hexadecimal character entity references are specified using "&#x" instead of "&#". Thus &#x8594; (薔) = \x8594 = ChrW(&H8594), while &#8594; (→) = ChrW(8594) = \x2192.
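The decimal-versus-hex distinction in the last answer can be cross-checked outside VBScript; a small Python sketch (Python used purely for illustration, the VBScript equivalents are in the answers above):

```python
# The HTML entity &#8594; uses a *decimal* code point: 8594 == 0x2192,
# which is U+2192 RIGHTWARDS ARROW -- the same character as VBScript ChrW(8594).
arrow = chr(8594)

# The entity &#x8594; uses a *hex* code point: 0x8594 is a CJK ideograph,
# the same character as VBScript ChrW(&H8594).
ideograph = chr(0x8594)

print(hex(8594))              # 0x2192 -> the arrow's Unicode code point
print(arrow == "\u2192")      # True
print(ideograph == "\u8594")  # True
```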
{ "language": "en", "url": "https://stackoverflow.com/questions/134937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Best practice for placement of display logic in a view in CakePHP
I have a CakePHP 1.2 application. I'm running into a case where I need to do some logic to render things correctly in the view. Example: I have a table called Types. There are flags for various types, so each row has one or more BOOL fields set. For one particular type, I store the contents of an RTF in another field. So when I display a listing of all types in the view, I want to create a link only for the types where I store RTF content; I don't want to show a link otherwise. Where is the best place for this logic? I want to use html->link to create the link, but I can't use that inside of my model. Should the logic go in the model, and if so, should I be building HTML in my model, e.g. the link?

A: I'd suggest keeping that logic in the view; building HTML in your model breaks the whole view/model separation. I don't think there's anything wrong with view logic - personally I tend to constrain it to basic conditionals. Ultimately I think the idea is figuring out where this logic best fits. In this case, the logic is, "if I have a certain type, I should provide a link to it", which is a view problem, not a model problem.

A: Build your own AppHtmlHelper which extends HtmlHelper and does the check for you.
{ "language": "en", "url": "https://stackoverflow.com/questions/134948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is a good alternative for SourceSafe on a USB Drive?
I like to keep my source files on a USB drive for the portability. To date I have been using Visual SourceSafe for this purpose. What is a good alternative to VSS that fulfils these criteria:

* Entire database can be held on a USB "pen" drive
* Code / documentation duplicated on local drives
* Does not require a central server
* Easy to backup and restore using standard backup tools
* Integrates with Visual Studio
* Has a small footprint
* Easy to clean the database and keep small
* Compatible with Windows XP, Vista and Vista x64

A good reference on setup would be good too.

A: I would use SVN (Subversion). You can use SVN in "file" mode (without using the network). Combine this with TortoiseSVN, which integrates with Explorer, and you have a nice little portable repository. For Visual Studio integration, there is the commercial ($49) VisualSVN (which I believe is the setup used to develop StackOverflow). Someone also mentioned AnkhSVN, which I haven't used, but some people find it less than satisfying.

A: Don't use SourceSafe. There are major problems with it. See this:

* Article1
* Article2

I'd recommend using Subversion instead. If you're using Windows, you can use TortoiseSVN. If you're working on Linux or other Unix variants, try RapidSVN.

A: Use Subversion. The FSFS style repository will work best, as older BDB ones can have issues when moved from computer to computer. With AnkhSVN you'll have full integration with Visual Studio (AnkhSVN 2.x is a source control plugin; older versions still do the job, though).

A: Bazaar does what you're asking for (in terms of working very well standalone), and there was a 2007 Summer of Code project to build a Visual Studio integration plugin which appears to have produced an at-least-partially-functional product. Bazaar (and other distributed tools, such as Git, Mercurial, Darcs and the like) are ideal because you can have your repository stored in multiple places (i.e. on your pen drive, but also copied up to a server on a regular basis), and make changes in one or the other branch. Let's say you leave your pen drive at home - you can build changes against the copy on a remote server, upload them via WebDAV, SFTP, etc., and be able to seamlessly merge them into changes done locally to the pen drive; non-distributed solutions such as Subversion don't have that capability.

A: There are two common free front-ends: AnkhSVN integrates into Visual Studio, and TortoiseSVN integrates with Explorer (my preference). There is also SlikSVN, a self-contained SVN server for Windows.

A: I'd recommend Subversion as well - you can find a hosting provider who offers SVN really cheap; this way your source code is always backed up and available. All you need to keep on your flash drive is an SVN client...
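The file-mode Subversion setup the answers describe takes only a couple of commands. As an illustrative setup fragment (paths are made up; assumes the Subversion command-line tools are installed - treat this as a transcript rather than a script to run verbatim):

```shell
# Create an FSFS-format repository directly on the USB drive.
# FSFS is the safer choice over BDB for a repository that moves between machines.
svnadmin create --fs-type fsfs /media/usbdrive/repo

# Check out a working copy onto the local disk via the file:// scheme.
# No server process is involved at any point.
svn checkout file:///media/usbdrive/repo ~/work/myproject

# Commits from the working copy go straight into the repository on the drive.
cd ~/work/myproject
svn commit -m "Initial import"
```

Because the repository is just a directory of files, backing it up is a plain copy, or `svnadmin dump /media/usbdrive/repo > backup.dump` for a portable archive.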
{ "language": "en", "url": "https://stackoverflow.com/questions/134950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do you perform address validation?
Is it even possible to perform address (physical, not e-mail) validation? It seems like the sheer number of address formats, even in the US alone, would make this a fairly difficult task. On the other hand, it seems like a task that would be necessary for several business requirements.

A: In the course of developing an in-house address verification service at a German company I used to work for, I've come across a number of ways to tackle this issue. I'll do my best to sum up my findings below:

Free, Open Source Software

Clearly, the first approach anyone would take is an open-source one (like openstreetmap.org), which is never a bad idea. But whether or not you can really put this to good and reliable use depends very much on how much you need to rely on the results. Addresses are an incredibly variable thing. Verifying U.S. addresses is not an easy task, but bearable; once you're going for Europe, though, especially the U.K. with their extensive Postal Code system, the open-source approach will simply lack data.

Web Services / APIs: Enterprise-Class Software

Money gets it done, obviously. But not every business or developer can spend ~$0.15 per address lookup (that's $150 for 1,000 API requests) - a very expensive business model the vast majority of address validation APIs have implemented.

What I ended up integrating: streetlayer API

Since I was not willing to take on the programmatic approach of verifying address data manually, I finally came to the conclusion that I was in need of an API with a price tag that would not make my boss want to fire me and still deliver solid and reliable international verification results. Long story short, I ended up integrating an API built by apilayer, called "streetlayer API". I was easily convinced by a simple JSON integration, surprisingly accurate validation results and their developer-friendly pricing. Also, 100 requests/month are entirely free. Hope this helps!
A: Here's a free and sort of "outside the box" way to do it. Not 100% perfect, but it should reject blatantly non-existent addresses. Submit the entire address to Google's geocoding web service. This service attempts to return the exact coordinates of the location you feed it, i.e. latitude and longitude. In my experience, if the address is invalid you will get a result of 602 from the service. There's definitely a possibility of false positives or false negatives, but used in conjunction with other consistency checks it could be useful. (Yahoo's geocoding web service, on the other hand, will return the coordinates of the center of the town if the town exists but the rest of the address is bogus. Potentially useful, as long as you pay close attention to the "precision" field in the result.)

A: There are a number of good answers in here, but most of them make the assumption that the user wants an "API" solution where they must write code to connect to a 3rd-party service and/or screen scrape the USPS. This is all well and good, but should be factored into the business requirements and costs associated with the implementation and then weighed against the desired benefits. Depending upon the business requirements and the way that the data is received into the system, a real-time address processing solution may be the best bet. If a real-time solution is required, you will want to consider the license agreement and technical limitations of the Google Maps/Bing/Yahoo APIs. They typically limit the number of calls you can make each day. The USPS web tools API is similar; in addition, they restrict how/why you can use their system and how you are allowed to use the data thereafter. At the same time, there are a handful of great service providers that can easily process a static list of addresses. Essentially, you give the service provider a CSV or Excel file, they clean it up and get it back to you. It's a one-time deal with no long-term commitment or obligation - usually.

Full disclosure: I'm the founder of SmartyStreets. We do address verification for addresses within the United States. We are easily able to CASS certify a list, and we also offer an address verification web service API. We have no hidden fees, contracts, or anything. You use our service until you no longer need it, and you can walk away. (Unlike cell phone companies that require a contract.)

A: USPS has an address cleaner online, which someone has screen-scraped into a poor man's web service. However, if you're doing this often enough, it'd be a better idea to apply for a USPS account and call their own web service.

A: I will refer you to my blog post - A lesson in address storage - where I go into some of the techniques and algorithms used in the process of address validation. My key thought is: "Don't be lazy with address storage, it will cause you nothing but headaches in the future!" Also, there is another StackOverflow question that asks this, entitled How should international geographic addresses be stored in a relational database.

A: For US-based address data, my company has used GeoStan. It has bindings for C and Java (and we created a Perl binding). Note that it is a commercial product and isn't cheap. It is quite fast, though (~300 addresses per second), and offers features like CASS certification (USPS bulk mail discount), DPV (Delivery Point Verification) flagging, and LON/LAT geocoding. There is a Perl module Geo::PostalAddress, but it uses heuristics and doesn't have the other features mentioned for GeoStan.

Edit: some have mentioned 'doing it yourself'. If you do decide to do this, a good source of information to start with is the US Census TIGER data set, which contains a lot of information about the US, including address information.

A: I have used the services of http://www.melissadata.com. Their "address object" works very well. It's pricey, yes.
But when you consider the costs of writing your own solution, the cost of dirty data in your application, returned mailers - lost sales, and the like - the costs can be justified.

A: As seen on reddit:

$address = urlencode('1600 Pennsylvania Avenue, Washington, DC');
$json = json_decode(file_get_contents("http://where.yahooapis.com/geocode?q=$address&flags=J"));
print_r($json);

A: The Fixaddress.com service is available, which provides the following:

1) Address validation.
2) Address correction.
3) Address spell correction.
4) Correction of phonetic mistakes in addresses.

Fixaddress.com uses USPS and TIGER data as reference data. For more detail, visit the link below:

http://www.fixaddress.com/

A: One area where address lookups have to be performed reliably is for VoIP E911 services. I know companies reliably using the following services for this:

Bandwidth.com 9-1-1 Access API MSAG Address Validation (MSAG = Master Street Address Guide)
https://www.bandwidth.com/9-1-1/

SmartyStreets US Street Address API
https://smartystreets.com/docs/cloud/us-street-api

A: There are companies that provide this service. Service bureaus that deal with mass mailing will scrub an entire mailing list so that it's in the proper format, which results in a discount on postage. The USPS sells databases of address information that can be used to develop custom solutions. They also have lists of approved vendors who provide this kind of software and service. There are some (but not many) packages that have APIs for hooking address validation into your software. However, you're right that it's a pretty nasty problem.

http://www.usps.com/ncsc/ziplookup/vendorslicensees.htm

A: As mentioned, there are many services out there. If you are looking to truly validate the entire address, then I highly recommend going with a web-service-type solution to ensure that changes can quickly be recognized by your application. In addition to the services listed above, webservicex.net has this US Address Validation service:

http://www.webservicex.net/WCF/ServiceDetails.aspx?SID=24

A: We have had success with Perfect Address. Their database has all the US street names and street number ranges. It also acts as a pretty decent parser for free-form address fields, if you are lucky enough to have that kind of data.

A: Validating that it is a valid address is one thing. But if you're trying to validate that a given person lives at a given address, your only almost-guarantee would be a test mail to the address, and even that is not certain if the person is organised or knows somebody at that address. Otherwise, people could just specify an arbitrary random address which they know exists, and it would mean nothing to you. The best you can do for immediate results is request the user send a photographed/scanned copy of the head of their bank statement or some other proof of recent residence, because at least then they have to work harder to forge it, and forged documents show up easily with a basic level of image forensic analysis.

A: There is no global solution. For any given country it is at best rather tricky. In the UK, the Post Office controls postal addresses, and can provide (at a cost) address information for validation purposes. Government agencies also keep an extensive list of addresses, and these are centrally collated in the NLPG (National Land and Property Gazetteer). Actually validating against these lists is very difficult. Most people don't even know exactly how their address is held by the Post Office. Some businesses don't even know what number they are on a particular street. Your best bet is to approach a company that specialises in this kind of thing.

A: Yahoo also has a Placemaker API. It is good only for locations, but it has a universal ID for all world locations. It looks like there is no standard ISO list.

A: You could also try SAP's Data Quality solutions, which are available both as a server platform for processing a large number of requests and as an embeddable SDK if you wanted to run it in-process with your application. We use it in our application and it's very robust and scalable.

A: NAICS.com is coming out with an API that will add all kinds of key business data, including street address. This would happen on the fly as your site's forms are processed. https://www.naics.com/business-intelligence-api/

A: You can try Pitney Bowes' "IdentifyAddress" API, available at https://identify.pitneybowes.com/. The service analyses and compares the input addresses against the known address databases around the world to output standardized detail. It corrects addresses, adds missing postal information and formats them using the format preferred by the applicable postal authority. It also uses additional address databases so it can provide enhanced detail, including address quality, type of address, transliteration (such as from Chinese Kanji to Latin characters) and whether an address is validated to the premise/house number, street, or city level of reference information. You will find a lot of samples and SDKs available on the site, and I found it extremely easy to integrate.

A: For US addresses you can require a valid state, and verify that the ZIP is valid. You could even check that the ZIP code is in the right state, but beyond that I don't think there are many tests you could run that wouldn't produce a lot of false negatives. What are you trying to do - prevent simple mistakes or enforce some kind of identity check?
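The state-plus-ZIP sanity check described in the last answer can be sketched in a few lines. A minimal illustration (the state list and ZIP pattern are format checks only, not a full validation solution, and cannot catch a ZIP that belongs to a different state):

```python
import re

# The 50 two-letter USPS state codes plus DC.
US_STATES = {
    "AL","AK","AZ","AR","CA","CO","CT","DE","DC","FL","GA","HI","ID","IL","IN",
    "IA","KS","KY","LA","ME","MD","MA","MI","MN","MS","MO","MT","NE","NV","NH",
    "NJ","NM","NY","NC","ND","OH","OK","OR","PA","RI","SC","SD","TN","TX","UT",
    "VT","VA","WA","WV","WI","WY",
}

# ZIP or ZIP+4 shape; this checks the format only, not that the ZIP exists.
ZIP_RE = re.compile(r"^\d{5}(-\d{4})?$")

def plausible_us_address(state: str, zip_code: str) -> bool:
    """Cheap sanity check: recognized state code and well-formed ZIP.

    Anything stronger (ZIP-to-state consistency, deliverability) needs a
    reference database or one of the verification services discussed above.
    """
    return state.upper() in US_STATES and bool(ZIP_RE.match(zip_code))

print(plausible_us_address("DC", "20500"))  # True
print(plausible_us_address("XX", "20500"))  # False
print(plausible_us_address("VA", "2050"))   # False
```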
{ "language": "en", "url": "https://stackoverflow.com/questions/134956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "95" }
Q: Get top results for each group (in Oracle)
How would I be able to get N results for several groups in an Oracle query? For example, given the following table:

|--------+------------+------------|
| emp_id | name       | occupation |
|--------+------------+------------|
|      1 | John Smith | Accountant |
|      2 | Jane Doe   | Engineer   |
|      3 | Jack Black | Funnyman   |
|--------+------------+------------|

There are many more rows with more occupations. I would like to get three employees (let's say) from each occupation. Is there a way to do this without using a subquery?

A: I don't have an Oracle instance handy right now, so I have not tested this:

select * from
  (select emp_id, name, occupation,
          rank() over (partition by occupation order by emp_id) rank
   from employee)
where rank <= 3

Here is a link on how rank works: http://www.psoug.org/reference/rank.html

A: Add RowNum to the rank:

select * from
  (select emp_id, name, occupation,
          rank() over (partition by occupation order by emp_id, RowNum) rank
   from employee)
where rank <= 3

A: This produces what you want, and it uses no vendor-specific SQL features like TOP N or RANK().

SELECT MAX(e.name) AS name, MAX(e.occupation) AS occupation
FROM emp e
  LEFT OUTER JOIN emp e2
    ON (e.occupation = e2.occupation AND e.emp_id >= e2.emp_id)
GROUP BY e.emp_id
HAVING COUNT(*) <= 3
ORDER BY occupation;

In this example it gives the three employees with the lowest emp_id values per occupation: the join counts, for each employee, the employees in the same occupation with an emp_id no greater than theirs, so keeping a count of at most 3 keeps the lowest three. You can change the attribute used in the inequality comparison to make it give the top employees by name, or whatever.

A: I'm not sure this is very efficient, but maybe a starting place?

select *
from people p1
  join people p2 on p1.occupation = p2.occupation
  join people p3 on p1.occupation = p3.occupation
             and p2.occupation = p3.occupation
where p1.emp_id != p2.emp_id
  and p1.emp_id != p3.emp_id
  and p2.emp_id != p3.emp_id

This should give you rows that contain 3 distinct employees all in the same occupation. Unfortunately, it will give you ALL combinations of those. Can anyone pare this down, please?

A: Tested this in SQL Server (and it uses a subquery):

select emp_id, name, occupation
from employees t1
where emp_id IN (select top 3 emp_id
                 from employees t2
                 where t2.occupation = t1.occupation)

Just do an ORDER BY in the subquery to suit your needs.
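For intuition, the partition-and-rank logic the accepted answer uses can be sketched outside the database. A minimal Python equivalent (illustrative data; RANK()'s tie handling is ignored, so this matches ROW_NUMBER()-style top-N):

```python
from itertools import groupby
from operator import itemgetter

employees = [
    (1, "John Smith", "Accountant"),
    (2, "Jane Doe", "Engineer"),
    (3, "Jack Black", "Funnyman"),
    (4, "Ann Lee", "Engineer"),
    (5, "Bo Diddley", "Engineer"),
    (6, "Cy Young", "Engineer"),
]

def top_n_per_group(rows, n=3):
    # Equivalent of PARTITION BY occupation ORDER BY emp_id ... rank <= n:
    # sort by (occupation, emp_id), then keep the first n rows per occupation.
    rows = sorted(rows, key=itemgetter(2, 0))
    result = []
    for _, group in groupby(rows, key=itemgetter(2)):
        result.extend(list(group)[:n])
    return result

for emp in top_n_per_group(employees):
    print(emp)  # only the three lowest emp_ids survive per occupation
```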
{ "language": "en", "url": "https://stackoverflow.com/questions/134958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Can you use multiple VaryByCustom parameters when caching a user control?
I've been trying this a couple of different ways, but it's not working for some reason. Is it even possible?

A: If you are overriding GetVaryByCustomString() in the Global.asax.cs file, you can pass in a semicolon-delimited list of values, which you then need to parse. There is one built-in value (Browser) which will be used if the attribute specified does not exist.

A: Yes. Separate them in your declaration by semicolons.

A: You can use multiple parameters by separating them with a semicolon, but you have to implement the logic of splitting them yourself. This means you can use any character as your separator, because you need to parse it yourself. You are probably overriding GetVaryByCustomString(HttpContext context, string custom) in your global.asax. The custom parameter will contain anything you pass using VaryByCustom, like this:

<%@ OutputCache Duration="86400" VaryByParam="none" VaryByCustom="custom1;custom2" %>

Extra note: base.GetVaryByCustomString doesn't implement any string-splitting capabilities and will only do something when Browser is passed as a value. Otherwise it will return null.
{ "language": "en", "url": "https://stackoverflow.com/questions/134962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Ruby - equivalent of Python __str__() method? In Ruby, is there the equivalent of the __str__() method that you can define on Python classes?

A: FWIW, inspect is probably more like __repr__() than __str__(). From the library reference:

__repr__(self)
Called by the repr() built-in function and by string conversions (reverse quotes) to compute the "official" string representation of an object. If at all possible, this should look like a valid Python expression that could be used to recreate an object with the same value (given an appropriate environment). If this is not possible, a string of the form "<...some useful description...>" should be returned. The return value must be a string object. If a class defines __repr__() but not __str__(), then __repr__() is also used when an "informal" string representation of instances of that class is required.

A: On the core classes it is typically 'inspect'. E.g.:

irb(main):001:0> puts "array is: #{[1,2,3].inspect}"
array is: [1, 2, 3]
=> nil
irb(main):002:0> puts "array is: #{[1,2,3]}"
array is: 123
=> nil
irb(main):003:0>

A: You could use to_s. http://briancarper.net/2006/09/26/ruby-to_s-vs-to_str/
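To make the parallel concrete, here is a small sketch (the class and attribute names are made up) of defining to_s, Ruby's closest analogue to Python's __str__, alongside inspect, the analogue of __repr__:

```ruby
class Point
  def initialize(x, y)
    @x = x
    @y = y
  end

  # to_s is the "informal" representation, used by puts and string
  # interpolation, much like Python's __str__.
  def to_s
    "(#{@x}, #{@y})"
  end

  # inspect is the "official" developer-facing representation, used by p
  # and irb, much like Python's __repr__.
  def inspect
    "Point.new(#{@x}, #{@y})"
  end
end

pt = Point.new(1, 2)
puts "point is: #{pt}"  # point is: (1, 2)
puts pt.inspect         # Point.new(1, 2)
```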
{ "language": "en", "url": "https://stackoverflow.com/questions/134969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I find the number of references that exist for a particular object? I have a bit of code that passes around a ton of objects, and I want to determine if there are any references still lingering around that I am unaware of, and where those references are located. Any idea of how I can do this? My understanding is that the watch window only allows me to see items available to the currently executing code block, and "Find All References" only helps if I add references to objects at compile time. Unless there is more to the watch window than I am aware of.

A: If you are talking about references in the code, just right-click on the object name and, in the drop-down menu, pick "Find All References"; the list of references will appear below in the output window.

EDIT: Since there was only a .NET tag, Visual Studio was assumed.

A: In an IDE like Eclipse or Visual Studio you can do it with the context menu.

A: A profiler will allow you to do this. CLR Profiler or ANTS Profiler are two examples.
{ "language": "en", "url": "https://stackoverflow.com/questions/134976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Secure only Login.aspx for a site

* Is it possible to secure only the Login.aspx page (and the postback) and not the whole site in IIS?
* We are looking to do this specifically with a SharePoint site running Forms Based Authentication against our Active Directory.
* Links to this will be helpful.

This is what we have done so far:

1. Setup SharePoint to use FBA against AD.
2. Moved the login page to Secure/Login.aspx
3. Set the appropriate login URL in web.config as https://..../Secure/Login.aspx

This is not working, and help is needed here. However, even if this works, how do we get the user back to http from https?

A: There's not a whole lot of point. If the only thing that's encrypted is the Login.aspx page, that would mean that someone could sniff all the traffic that was not sent through the login page. Which might prevent people from getting user:pass, but all your other data is exposed.

A: Besides all the data which is exposed, and the user's operations which can be changed en route, the user's session ID (or other authentication data) is sent in the clear. This means that an attacker can steal your cookie (...) and impersonate you to the system, even without getting your password. (If I remember correctly, SPSv.3 also supports a builtin password-changing module...) So I would say that this is not a great idea, unless you don't care about that system very much anyway... But then, why bother with authentication at all? Just make it anonymous?

A: I agree with AviD and Dan Williams that securing only the login page isn't a great idea, because it exposes other data after leaving the password page. However, you can require SSL for only the login.aspx page via the IIS Manager. If you navigate to the login.aspx page in IIS Manager (I believe it's under /_layouts), you can right-click on the individual file and select Properties. From there, go to the File Security tab and click on the Edit... button under Secure communications. There, you can check the Require secure channel (SSL) box, and SSL will be required for that page only. I'm not positive about getting the user back to http from there, but I believe its default behavior is to send you to the requested page if the login is successful. If not, I would think you could customize where the login page sends you on a successful login.
{ "language": "en", "url": "https://stackoverflow.com/questions/134988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Best way to check if server is reachable in .NET? I have created a timeclock application in C# that connects to a web service on our server in order to clock employees in/out. The application resides in the system tray and clocks users out if they shut down/suspend their machines or if they are idle for more than three hours, in which case it clocks them out at the time of last activity. My issue is that when a user brings his machine back up from a sleep state (which fires the SystemEvents.PowerModeChanged event), the application attempts to clock the employee back in, but the network connection isn't fully initialized at that time and the web-service call times out. An obvious solution, albeit a hack, would be to put a delay on the clock in, but this wouldn't necessarily fix the problem across the board. What I am looking to do is a sort of "relentless" clock in, where it will wait until it can see the server before it actually attempts to clock in. What is the best method to determine if a connection to a web service can be made? A: The best way is going to be to actually try to make the connection and catch the errors. You can ping the machine, but that will only tell you if the machine is running and on the network, which doesn't necessarily reflect whether the web service is running and available. A: When handling the event, put your connection code into a method that will loop until success, catching errors and retrying. Even a delay wouldn't be perfect, as it can take varying amounts of time for the network connection to be re-established depending on the individual system and the other applications running.
A: If the problem is latency in re-establishing the network connection, ping is a reasonable first check; it's like ringing the doorbell to see if anyone is home. If the ping succeeds, then try calling the web service, catching exceptions appropriately (I think both SocketException and SoapException can occur depending on readiness/responsiveness). A: Implement a queue where you post messages, and have a thread periodically try to flush the in-memory queue to the web service. A: Ping can be disabled even though the web service port is open. I wouldn't use this method...
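The loop-until-success advice above is language-agnostic. Here is a minimal Python sketch of the same pattern (the function name and parameters are invented for illustration; a C# version would catch WebException in the same place):

```python
import time

def retry_until_success(operation, max_attempts=10, delay_seconds=5.0,
                        retry_on=(Exception,)):
    """Call `operation` until it returns without raising.

    Retries up to `max_attempts` times, sleeping between attempts;
    re-raises the last error if every attempt fails.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except retry_on:
            if attempt == max_attempts:
                raise
            time.sleep(delay_seconds)
```

On PowerModeChanged you would hand the clock-in call to something like retry_until_success, so the application simply waits out the window where the network stack is still coming up instead of failing once and giving up.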
{ "language": "en", "url": "https://stackoverflow.com/questions/134992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to prevent blank xmlns attributes in output from .NET's XmlDocument? When generating XML from XmlDocument in .NET, a blank xmlns attribute appears the first time an element without an associated namespace is inserted; how can this be prevented? Example: XmlDocument xml = new XmlDocument(); xml.AppendChild(xml.CreateElement("root", "whatever:name-space-1.0")); xml.DocumentElement.AppendChild(xml.CreateElement("loner")); Console.WriteLine(xml.OuterXml); Output: <root xmlns="whatever:name-space-1.0"><loner xmlns="" /></root> Desired Output: <root xmlns="whatever:name-space-1.0"><loner /></root> Is there a solution applicable to the XmlDocument code, not something that occurs after converting the document to a string with OuterXml? My reasoning for doing this is to see if I can match the standard XML of a particular protocol using XmlDocument-generated XML. The blank xmlns attribute may not break or confuse a parser, but it's also not present in any usage that I've seen of this protocol. A: This is a variant of JeniT's answer (Thank you very very much btw!) XmlElement new_element = doc.CreateElement("Foo", doc.DocumentElement.NamespaceURI); This eliminates having to copy or repeat the namespace everywhere. A: Since root is in an unprefixed namespace, any child of root that wants to be un-namespaced has to be output like your example. The solution would be to prefix the root element like so: <w:root xmlns:w="whatever:name-space-1.0"> <loner/> </w:root> code: XmlDocument doc = new XmlDocument(); XmlElement root = doc.CreateElement( "w", "root", "whatever:name-space-1.0" ); doc.AppendChild( root ); root.AppendChild( doc.CreateElement( "loner" ) ); Console.WriteLine(doc.OuterXml); A: If the <loner> element in your sample XML didn't have the xmlns default namespace declaration on it, then it would be in the whatever:name-space-1.0 namespace rather than being in no namespace. 
If that's what you want, you need to create the element in that namespace: xml.CreateElement("loner", "whatever:name-space-1.0") If you want the <loner> element to be in no namespace, then the XML that's been produced is exactly what you need, and you shouldn't worry about the xmlns attribute that's been added automatically for you. A: Thanks to Jeremy Lew's answer and a bit more playing around, I figured out how to remove blank xmlns attributes: pass in the root node's namespace when creating any child node you want not to have a prefix on. Using a namespace without a prefix at the root means that you need to use that same namespace on child elements for them to also not have prefixes. Fixed Code: XmlDocument xml = new XmlDocument(); xml.AppendChild(xml.CreateElement("root", "whatever:name-space-1.0")); xml.DocumentElement.AppendChild(xml.CreateElement("loner", "whatever:name-space-1.0")); Console.WriteLine(xml.OuterXml); Thanks, everyone, for all your answers, which led me in the right direction! A: If possible, create a serialization class, then do: XmlSerializerNamespaces ns = new XmlSerializerNamespaces(); ns.Add("", ""); XmlSerializer serializer = new XmlSerializer(yourType); serializer.Serialize(xmlTextWriter, someObject, ns); It's safer, and you can control the namespaces with attributes if you really need more control. A: I've solved the problem by using the Factory Pattern. I created a factory for XElement objects. As a parameter for the factory's instantiation, I've specified an XNamespace object. So, every time an XElement is created by the factory, the namespace will be added automatically. Here is the code of the factory: internal class XElementFactory { private readonly XNamespace currentNs; public XElementFactory(XNamespace ns) { this.currentNs = ns; } internal XElement CreateXElement(String name, params object[] content) { return new XElement(currentNs + name, content); } } A: Yes, you can prevent the xmlns attribute from appearing on the XmlElement.
When the element is first created without a namespace, the output looks like this: <trkpt lat="30.53597" lon="-97.753324" xmlns=""> <ele>249.118774</ele> <time>2006-05-05T14:34:44Z</time> </trkpt> Change the code to pass the XML namespace, like this in C#: XmlElement bookElement = xdoc.CreateElement("trkpt", "http://www.topografix.com/GPX/1/1"); bookElement.SetAttribute("lat", "30.53597"); bookElement.SetAttribute("lon", "97.753324");
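The namespace mechanics at work here are not .NET-specific. As a side-by-side illustration (not part of the original answers), Python's standard-library ElementTree serializes a namespaced root with a prefix by default, which is exactly the prefixed-root workaround described above: the un-namespaced child then needs no xmlns="" declaration at all:

```python
import xml.etree.ElementTree as ET

# Root element in the "whatever:name-space-1.0" namespace,
# child element in no namespace at all.
root = ET.Element('{whatever:name-space-1.0}root')
ET.SubElement(root, 'loner')

out = ET.tostring(root, encoding='unicode')
print(out)
# Because the root is serialized with a prefix (ns0: by default),
# the serializer never has to emit xmlns="" on the child.
```

The takeaway matches the accepted reasoning: a blank xmlns only appears when an un-namespaced element sits inside a *default* (unprefixed) namespace.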
{ "language": "en", "url": "https://stackoverflow.com/questions/135000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "128" }
Q: How can I script a no-cost notification of batch file errors/returns? I have a process that stops a service and starts it again, and I want to be notified of the result when it runs. Here's the scenario: I have a text file output of an "sc" command. I want to send that file, but not as an attachment. Also, I want to see the initial status quickly in the subject of the email. Here's the 'servstop.txt' file contents: [SC] StartService FAILED 1058: The service cannot be started, either because it is disabled or because it has no enabled devices associated with it. I want the subject of the email to be "Alert Service Start: [SC] StartService FAILED 1058" and the body to contain the entire error message above. I will put my current method in an answer below using a program called blat to send me the result. A: Here's how I'm doing this. First I got blat (a public-domain SMTP mailer) and dropped it into the d:\blat directory. My Exchange server allows me to email without an id/password and just assumes that I am the person in the 'from' field of the blat command. @echo off sc start Alerter >servstop.txt SetLocal EnableDelayedExpansion set content= set subj= for /F "delims=" %%i in (servstop.txt) do set content=!content! %%i for /f "tokens=1 delims=:" %%s in ("%content%") do set subj=%%s d:\blat\blat.exe -body "%content%" -to my-email@foo.bar -f my-email@foo.bar -server smtp.foo.bar -s "Alert Service Start:%subj% " -log d:\blat\blat.log EndLocal A: Splunk looks promising. Haven't tried it though. Two blockquotes from the site about index and alert below. INDEX: With a variety of flexible input methods you can index logs, configurations, traps and alerts, messages, scripts, and code and performance data from all your applications, servers and network devices.
Monitor file systems for scripts and configuration changes, capture archive files, find and tail live application logs, connect to network ports to receive syslog, SNMP and other network-based instrumentation. And this is just where it starts. ALERT: Any search can be run on a schedule and trigger notifications or actions based on the search results. And because it works across different components and technologies, Splunk is the most flexible monitoring tool in your arsenal. Notifications can be sent via email, RSS or SNMP to other management consoles. Actions trigger scripts performing user described activities like restarting an application, server or network device.
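Returning to the blat-based script above: the same subject extraction ("everything before the first colon" of the sc output) and send can be sketched with only the Python standard library. The SMTP host and addresses below are placeholders, and `build_alert`/`send_alert` are invented names, not part of the original answer:

```python
import smtplib
from email.message import EmailMessage

def build_alert(sc_output):
    """Derive the subject the way the batch script does:
    everything before the first colon of the sc output."""
    body = " ".join(sc_output.split())  # collapse the multi-line sc output
    subject = "Alert Service Start: " + body.split(":", 1)[0]
    return subject, body

def send_alert(sc_output, smtp_host, sender, recipient):
    subject, body = build_alert(sc_output)
    msg = EmailMessage()
    msg["Subject"], msg["From"], msg["To"] = subject, sender, recipient
    msg.set_content(body)
    # Assumes a relay that accepts mail without authentication,
    # just as the blat setup above does.
    with smtplib.SMTP(smtp_host) as conn:
        conn.send_message(msg)
```

For example, send_alert(open("servstop.txt").read(), "smtp.foo.bar", "me@foo.bar", "me@foo.bar") would reproduce the blat call.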
{ "language": "en", "url": "https://stackoverflow.com/questions/135010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Advantages to Using Private Static Methods When creating a class that has internal private methods, usually to reduce code duplication, that don't require the use of any instance fields, are there performance or memory advantages to declaring the method as static? Example: foreach (XmlElement element in xmlDoc.DocumentElement.SelectNodes("sample")) { string first = GetInnerXml(element, ".//first"); string second = GetInnerXml(element, ".//second"); string third = GetInnerXml(element, ".//third"); } ... private static string GetInnerXml(XmlElement element, string nodeName) { return GetInnerXml(element, nodeName, null); } private static string GetInnerXml(XmlElement element, string nodeName, string defaultValue) { XmlNode node = element.SelectSingleNode(nodeName); return node == null ? defaultValue : node.InnerXml; } Is there any advantage to declaring the GetInnerXml() methods as static? No opinion responses please, I have an opinion. A: A call to a static method generates a call instruction in Microsoft intermediate language (MSIL), whereas a call to an instance method generates a callvirt instruction, which also checks for a null object reference. However, most of the time the performance difference between the two is not significant. Source: MSDN - https://learn.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2012/79b3xss3(v=vs.110) A: It'll be slightly quicker as there is no this parameter passed (although the performance cost of calling the method is probably considerably more than this saving). I'd say the best reason I can think of for private static methods is that it means you can't accidentally change the object (as there's no this pointer). A: This forces you to remember to also declare any class-scoped members the function uses as static, which should save the memory of creating those items for each instance. A: I very much prefer all private methods to be static unless they really can't be.
I would much prefer the following: public class MyClass { private readonly MyDependency _dependency; public MyClass(MyDependency dependency) { _dependency = dependency; } public int CalculateHardStuff() { var intermediate = StepOne(_dependency); return StepTwo(intermediate); } private static int StepOne(MyDependency dependency) { return dependency.GetFirst3Primes().Sum(); } private static int StepTwo(int intermediate) { return (intermediate + 5)/4; } } public class MyDependency { public IEnumerable<int> GetFirst3Primes() { yield return 2; yield return 3; yield return 5; } } over every method accessing the instance field. Why is this? Because as this process of calculating becomes more complex and the class ends up with 15 private helper methods, I REALLY want to be able to pull them out into a new class that encapsulates a subset of the steps in a semantically meaningful way. When MyClass gets more dependencies because we need logging and also need to notify a web service (please excuse the cliche examples), then it's really helpful to easily see what methods have which dependencies. Tools like R# let you extract a class from a set of private static methods in a few keystrokes. Try doing it when all private helper methods are tightly coupled to the instance field and you'll see it can be quite a headache. A: From the FxCop rule page on this: After you mark the methods as static, the compiler will emit non-virtual call sites to these members. Emitting non-virtual call sites will prevent a check at runtime for each call that ensures that the current object pointer is non-null. This can result in a measurable performance gain for performance-sensitive code. In some cases, the failure to access the current object instance represents a correctness issue. A: Yes, the compiler does not need to pass the implicit this pointer to static methods. Even if you don't use it in your instance method, it is still being passed.
A: When I'm writing a class, most methods fall into two categories: * *Methods that use/change the current instance's state. *Helper methods that don't use/change the current object's state, but help me compute values I need elsewhere. Static methods are useful, because just by looking at a method's signature, you know that calling it doesn't use or modify the current instance's state. Take this example: public class Library { private static Book findBook(List<Book> books, string title) { // code goes here } } If an instance of Library's state ever gets screwed up, and I'm trying to figure out why, I can rule out findBook as the culprit, just from its signature. I try to communicate as much as I can with a method or function's signature, and this is an excellent way to do that. A: As has already been stated, there are many advantages to static methods. However, keep in mind that they will live on the heap for the life of the application. I recently spent a day tracking down a memory leak in a Windows Service... the leak was caused by private static methods inside a class that implemented IDisposable and was consistently called from a using statement. Each time this class was created, memory was reserved on the heap for the static methods within the class; unfortunately, when the class was disposed of, the memory for the static methods was not released. This caused the memory footprint of this service to consume the available memory of the server within a couple of days with predictable results.
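The "rule it out from its signature" point carries over to other languages. A small Python analogue (the class and data are invented for illustration):

```python
class Library:
    def __init__(self, books):
        self._books = books

    def find(self, title):
        return self._find_book(self._books, title)

    @staticmethod
    def _find_book(books, title):
        # No `self` parameter: the signature alone guarantees this
        # helper can neither read nor mutate instance state.
        return next((b for b in books if b["title"] == title), None)
```

If a Library instance's state ever gets corrupted, _find_book can be ruled out as the culprit without even reading its body.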
{ "language": "en", "url": "https://stackoverflow.com/questions/135020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "247" }
Q: Python Library Path In Ruby, the library path is provided in $:; in Perl, it's in @INC. How do you get the list of paths that Python searches for modules when you do an import? A: I think you're looking for sys.path import sys print (sys.path) A: python -c "import sys; print('\n'.join(sys.path))" /usr/local/Cellar/python@3.9/3.9.10/Frameworks/Python.framework/Versions/3.9/lib/python39.zip /usr/local/Cellar/python@3.9/3.9.10/Frameworks/Python.framework/Versions/3.9/lib/python3.9 /usr/local/Cellar/python@3.9/3.9.10/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload /usr/local/lib/python3.9/site-packages A: You can also make additions to this path with the PYTHONPATH environment variable at runtime, in addition to: import sys sys.path.append('/home/user/python-libs') A: import sys sys.path
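One subtlety the answers above gloss over: sys.path.append() puts the directory at the end of the search path, so a same-named module in the standard library or site-packages wins any name clash. Inserting at the front reverses that priority, similar to how PYTHONPATH entries are searched before site-packages (the helper name here is invented):

```python
import sys

def prepend_library_path(path):
    """Put a directory at the front of the module search path so its
    modules shadow same-named modules found later on sys.path."""
    if path not in sys.path:
        sys.path.insert(0, path)

prepend_library_path("/home/user/python-libs")
assert sys.path[0] == "/home/user/python-libs"
```

The membership check also makes the call idempotent, so repeated calls don't pile up duplicate entries.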
{ "language": "en", "url": "https://stackoverflow.com/questions/135035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "81" }
Q: Should you always favor xrange() over range()? Why or why not? A: range() returns a list, xrange() returns an xrange object. xrange() is a bit faster, and a bit more memory efficient. But the gain is not very large. The extra memory used by a list is of course not just wasted; lists have more functionality (slice, repeat, insert, ...). Exact differences can be found in the documentation. There is no hard-and-fast rule; use what is needed. Python 3.0 is still in development, but IIRC range() will be very similar to xrange() of 2.X and list(range()) can be used to generate lists. A: I would just like to say that it REALLY isn't that difficult to get an xrange object with slice and indexing functionality. I have written some code that works pretty dang well and is just as fast as xrange for when it counts (iterations). from __future__ import division def read_xrange(xrange_object): # returns the xrange object's start, stop, and step start = xrange_object[0] if len(xrange_object) > 1: step = xrange_object[1] - xrange_object[0] else: step = 1 stop = xrange_object[-1] + step return start, stop, step class Xrange(object): ''' creates an xrange-like object that supports slicing and indexing.
ex: a = Xrange(20) a.index(10) will work Also a[:5] will return another Xrange object with the specified attributes Also allows for the conversion from an existing xrange object ''' def __init__(self, *inputs): # allow inputs of xrange objects if len(inputs) == 1: test, = inputs if type(test) == xrange: self.xrange = test self.start, self.stop, self.step = read_xrange(test) return # or create one from start, stop, step self.start, self.step = 0, None if len(inputs) == 1: self.stop, = inputs elif len(inputs) == 2: self.start, self.stop = inputs elif len(inputs) == 3: self.start, self.stop, self.step = inputs else: raise ValueError(inputs) self.xrange = xrange(self.start, self.stop, self.step) def __iter__(self): return iter(self.xrange) def __getitem__(self, item): if type(item) is int: if item < 0: item += len(self) return self.xrange[item] if type(item) is slice: # get the indexes, and then convert to the number start, stop, step = item.start, item.stop, item.step start = start if start != None else 0 # convert start = None to start = 0 if start < 0: start += start start = self[start] if start < 0: raise IndexError(item) step = (self.step if self.step != None else 1) * (step if step != None else 1) stop = stop if stop is not None else self.xrange[-1] if stop < 0: stop += stop stop = self[stop] stop = stop if stop > self.stop: raise IndexError if start < self.start: raise IndexError return Xrange(start, stop, step) def index(self, value): error = ValueError('object.index({0}): {0} not in object'.format(value)) index = (value - self.start)/self.step if index % 1 != 0: raise error index = int(index) try: self.xrange[index] except (IndexError, TypeError): raise error return index def __len__(self): return len(self.xrange) Honestly, I think the whole issue is kind of silly and xrange should do all of this anyway... A: For performance, especially when you're iterating over a large range, xrange() is usually better. 
However, there are still a few cases where you might prefer range(): * *In Python 3, range() does what xrange() used to do and xrange() does not exist. If you want to write code that will run on both Python 2 and Python 3, you can't use xrange(). *range() can actually be faster in some cases - e.g. if iterating over the same sequence multiple times. xrange() has to reconstruct the integer object every time, but range() will have real integer objects. (It will always perform worse in terms of memory however) *xrange() isn't usable in all cases where a real list is needed. For instance, it doesn't support slices, or any list methods. [Edit] There are a couple of posts mentioning how range() will be upgraded by the 2to3 tool. For the record, here's the output of running the tool on some sample usages of range() and xrange() RefactoringTool: Skipping implicit fixer: buffer RefactoringTool: Skipping implicit fixer: idioms RefactoringTool: Skipping implicit fixer: ws_comma --- range_test.py (original) +++ range_test.py (refactored) @@ -1,7 +1,7 @@ for x in range(20): - a=range(20) + a=list(range(20)) b=list(range(20)) c=[x for x in range(20)] d=(x for x in range(20)) - e=xrange(20) + e=range(20) As you can see, when used in a for loop or comprehension, or where already wrapped with list(), range is left unchanged. A: You should favour range() over xrange() only when you need an actual list. For instance, when you want to modify the list returned by range(), or when you wish to slice it. For iteration or even just normal indexing, xrange() will work fine (and usually much more efficiently). There is a point where range() is a bit faster than xrange() for very small lists, but depending on your hardware and various other details, the break-even can be at a result of length 1 or 2; not something to worry about. Prefer xrange(). A: Go with range for these reasons: 1) xrange will be going away in newer Python versions. This gives you easy future compatibility.
2) range will take on the efficiencies associated with xrange. A: A good example is given in the book Practical Python by Magnus Lie Hetland: >>> zip(range(5), xrange(100000000)) [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)] I wouldn’t recommend using range instead of xrange in the preceding example—although only the first five numbers are needed, range calculates all the numbers, and that may take a lot of time. With xrange, this isn’t a problem because it calculates only those numbers needed. Yes, I read @Brian's answer: In Python 3, range() is a generator anyway and xrange() does not exist. A: One other difference is that Python 2's implementation of xrange() can't support numbers bigger than C ints, so if you want to have a range using Python's built-in large number support, you have to use range(). Python 2.7.3 (default, Jul 13 2012, 22:29:01) [GCC 4.7.1] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> range(123456787676676767676676,123456787676676767676679) [123456787676676767676676L, 123456787676676767676677L, 123456787676676767676678L] >>> xrange(123456787676676767676676,123456787676676767676679) Traceback (most recent call last): File "<stdin>", line 1, in <module> OverflowError: Python int too large to convert to C long Python 3 does not have this problem: Python 3.2.3 (default, Jul 14 2012, 01:01:48) [GCC 4.7.1] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> range(123456787676676767676676,123456787676676767676679) range(123456787676676767676676, 123456787676676767676679) A: While xrange is faster than range in most circumstances, the difference in performance is pretty minimal. The little program below compares iterating over a range and an xrange: import timeit # Try various list sizes. for list_len in [1, 10, 100, 1000, 10000, 100000, 1000000]: # Time doing a range and an xrange.
rtime = timeit.timeit('a=0;\nfor n in range(%d): a += n'%list_len, number=1000) xrtime = timeit.timeit('a=0;\nfor n in xrange(%d): a += n'%list_len, number=1000) # Print the result print "Loop list of len %d: range=%.4f, xrange=%.4f"%(list_len, rtime, xrtime) The results below show that xrange is indeed faster, but not enough to sweat over. Loop list of len 1: range=0.0003, xrange=0.0003 Loop list of len 10: range=0.0013, xrange=0.0011 Loop list of len 100: range=0.0068, xrange=0.0034 Loop list of len 1000: range=0.0609, xrange=0.0438 Loop list of len 10000: range=0.5527, xrange=0.5266 Loop list of len 100000: range=10.1666, xrange=7.8481 Loop list of len 1000000: range=168.3425, xrange=155.8719 So by all means use xrange, but unless you're on constrained hardware, don't worry too much about it. A: Okay, everyone here has a different opinion as to the tradeoffs and advantages of xrange versus range. They're mostly correct: xrange is an iterator, and range fleshes out and creates an actual list. For the majority of cases, you won't really notice a difference between the two. (You can use map with range but not with xrange, but it uses up more memory.) What I think you really want to hear, however, is that the preferred choice is xrange. Since range in Python 3 is an iterator, the code conversion tool 2to3 will correctly convert all uses of xrange to range, and will throw out an error or warning for uses of range. If you want to be sure to easily convert your code in the future, you'll use xrange only, and list(xrange) when you're sure that you want a list. I learned this during the CPython sprint at PyCon this year (2008) in Chicago. A: * *range(): range(1, 10) returns a list of the numbers 1 through 9 and holds the whole list in memory. *xrange(): Like range(), but instead of returning a list, returns an object that generates the numbers in the range on demand. For looping, this is slightly faster than range() and more memory efficient.
The xrange() object acts like an iterator and generates the numbers on demand (lazy evaluation). In [1]: range(1,10) Out[1]: [1, 2, 3, 4, 5, 6, 7, 8, 9] In [2]: xrange(10) Out[2]: xrange(10) In [3]: print xrange.__doc__ Out[3]: xrange([start,] stop[, step]) -> xrange object In Python 3, range() does the same thing xrange() used to do, and xrange() no longer exists. range() can actually be faster in some scenarios, e.g. if you are iterating over the same sequence multiple times. xrange() has to reconstruct the integer object every time, but range() will have real integer objects. A: No, they both have their uses: Use xrange() when iterating, as it saves memory. Say: for x in xrange(1, one_zillion): rather than: for x in range(1, one_zillion): On the other hand, use range() if you actually want a list of numbers. multiples_of_seven = range(7,100,7) print "Multiples of seven < 100: ", multiples_of_seven A: xrange() is more efficient because instead of generating a list of objects, it just generates one object at a time. Instead of 100 integers, and all of their overhead, and the list to put them in, you just have one integer at a time. Faster generation, better memory use, more efficient code. Unless I specifically need a list for something, I always favor xrange()
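For readers on Python 3, where this question is settled: range() is the lazy object, and it also gained the indexing, slicing, and search support that the answers above note old xrange() lacked. A quick demonstration (Python 3):

```python
import sys

# range is lazy: the object's in-memory size is constant regardless
# of how many numbers it describes.
small, huge = range(10), range(10**12)
assert sys.getsizeof(small) == sys.getsizeof(huge)

# It also supports the list-like operations Python 2's xrange lacked:
r = range(0, 100, 7)
assert r[3] == 21                 # indexing
assert r.index(21) == 3           # searching
assert list(r[:3]) == [0, 7, 14]  # slicing returns another range
assert 21 in r                    # constant-time membership test
```

So the Python 2 advice "prefer xrange, wrap in list() when you need a list" translates directly to plain range() and list(range(...)) in Python 3.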
{ "language": "en", "url": "https://stackoverflow.com/questions/135041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "463" }
Q: Interview questions for Flash developers For an interview in a few, I'm not at all familiar with Flash development. What would you ask? A: Honestly, if you yourself aren't a Flash developer, I'd caution against interviewing a potential Flash developer, unless you're only doing it to get a sense of their character (as opposed to their skill level). Experience in working with the technology is going to give you a much more realistic perspective - blindly asking questions that other people told you are good ones to ask will get you into trouble. That said, if you can find someone you know to sit in on the interview with you and provide evaluation from a technical standpoint, here are a few things I'd remind them to ask: * *object oriented programming *loading external media *audio & video playback (also maybe volume control / mixing) *event listeners *transitions / animation *filter / sort algorithms *common UI elements: scrollbar, form elements, drop-down menu, rollover states, drag-and-drop, etc. Those are sort of generally useful areas of knowledge, but the true test of proficiency is a practical test - "write a class that meets these requirements", or "this code isn't working, find out why and fix it" are good ways to immediately gain insight into the candidate's work (and thought) process. Most importantly: even if the interviewee is short on specific knowledge or experience of the subjects you settle on, it's better to get someone who is a fast learner and will easily comprehend new concepts than to get someone who might know a lot now, but will resist learning new stuff. A: Flash can be used for a lot of different purposes. If I were interested in 2D animation, I could use Flash to draw characters, animate scenes, add audio, etc. and not know a lick of programming. On the flip side, I might need to use Flash to make an audio player widget for my website.
I'd need to know how to do a fair amount of programming (the name of Flash's programming language is ActionScript), but I might not know anything about vector drawing, design, or animation. What I'm getting at is: just because someone lists "Flash experience" on their resume doesn't mean they've used Flash to do anything close to what you want them to do. If you want a graphic designer, animator, artist, etc., the interview will be much different than if you want someone who can program. A: * *Explain what a MovieClip in AS2/F8 is. Then go to AS3 and explain the difference between Sprite and MovieClip. *How do you handle keyboard events in AS2? AS3? *What's the best way to embed Flash into HTML? Static and dynamic publishing. *How do you make a preloader? Both first-frame and separate. *Write a simple clickable button in AS2 and AS3. This list can go on, of course; these are just some starters. A: Do you know object-oriented programming? What does ActionScript 3 have that ActionScript 2 doesn't? A: Depending on what you need them for, I would ask what else they know in conjunction with Flash. If it's for anything on the web, 100% Flash should very rarely be the answer, and so it's important to have other skills in the tool belt. A: Please give me an example of a time when you used a Flash implementation, and later decided you should have used something else. What should you have used and why? A: Please give me an example of a time you implemented an interface in [select a technology] and later decided you should have used Flash. Explain why Flash would have been the better choice. A: What are the advantages and disadvantages of Flash over other web interface technologies? Give some examples of when you have used different technologies and why.
{ "language": "en", "url": "https://stackoverflow.com/questions/135047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Internet Explorer 8 and Internet Explorer 6 side by side Possible Duplicate: Running Internet Explorer 6, Internet Explorer 7, and Internet Explorer 8 on the same machine Is there a way to have Internet Explorer 8 and Internet Explorer 6 side by side without virtualizing? I used Multiple IEs, which works fine with Internet Explorer 7, but since I installed Internet Explorer 8 beta 2, Internet Explorer 6 started behaving oddly (that is, more than usual). A: Virtualization is the easiest way to achieve this. It has a higher overhead, but since IE has so many hooks into the OS, trying to install multiple versions of it is doomed to confusion and failure. A: A very light-weight (and just released) way to do this is to use Expression Web SuperPreview. It allows you to compare IE6 and IE7 (or IE6 and IE8+IE7-compatibility-mode) side-by-side. It's currently just a preview, but I've used it with good results. They're going to release a commercial version that enables side-by-side comparison of more browsers, but they say the IE-specific one will remain free forever. A: One more multiple, standalone IE option: IE Collection. A: I also use virtualisation. I've got Virtual PC 2007, which is a free download from here, on my machine and have downloaded the Internet Explorer Virtual PC images from Microsoft. You can get the images here. A: Either run it in a VM, wait until Multiple IEs gets IE8 added, or use http://browsershots.org/ which will take screenshots of your website from several different operating systems and browsers. A: I've written a step-by-step blog post showing how to run IE6, IE7 and IE8 as "virtual applications" on Windows 7 Ultimate. A: Microsoft does not support multiple versions of Internet Explorer on one operating system. The reason is that the operating system and Internet Explorer share certain DLLs. When you upgrade from Internet Explorer 6 to Internet Explorer 7 (or Internet Explorer 8) you're actually replacing some system DLLs.
This is the reason why you "get" Internet Explorer 6 when you uninstall Internet Explorer 7. Chris Wilson, Internet Explorer architect, addressed this issue in a blog post, Multiple IEs on one machine. Chris states that on-the-fly replacement of mshtml.dll might work for CSS rendering "...but it's not the same as having a full set of new Internet Explorer system DLLs installed" and would certainly not be considered a definitive solution. Only virtualization can provide the full DLL stack for definitive testing. Edit: On March 18, 2009, the Microsoft Expression Web team released SuperPreview, a free stand-alone application that allows cross-browser side-by-side and onionskin comparison between Internet Explorer 8, Internet Explorer 8 in Internet Explorer 7 compatibility mode, and Internet Explorer 6. Additional browsers and an on-demand service to render pages in real time on other operating systems are planned. Edit in response to Zac's comment: Thanks for the comment. Expression Web 3 (which will include SuperPreview) will allow comparison between any combination of Internet Explorer 6, Internet Explorer 7, Internet Explorer 8, and Firefox 3. This is according to Somasegar's blog entry Expression Web 3, posted on June 5, 2009. In the screenshot on his blog, you'll see Firefox 3 as the base browser (left side) and Internet Explorer 6 as the comparison browser. Any browser can be placed on either side of the comparison window. A: Try this: http://www.my-debugbar.com/wiki/IETester/HomePage LE: This isn't really fully compatible yet; there are a few minor issues, like crashes on JavaScript pop-ups, but I've found it quite reliable during development. At the end of everything, I just tested the web application against a real IE6 to make sure everything was fine. A: There's also IE7 standalone. A: What I do is use VMware with another OS with IE6. Not perfect, but it helps.
A: I use a utility called "Sandboxie" (free for personal use, $29 for commercial) to provide application sandboxing. One useful side-effect of this is that you can install applications (even OS-modifying ones such as IE) into the sandbox, and the parent OS is completely unaware (allowing you to have different versions of the parent OS's IE and the sandboxed IE - and both running simultaneously). The two scenarios I've used so far: * *Internet Explorer 7 in the parent OS, and uninstalled IE7 in the sandbox to make IE6 available *Internet Explorer 6 in the parent OS, and upgraded to IE8 in the sandbox Caveats: * *If you need more than one additional version of IE available simultaneously, then you will need to purchase the full version, as you can only have one version of IE in a sandbox, and the free version supports only one active sandbox at a time *Installing a version of IE into the sandbox can take a little trial and error - IE8 was particularly tricky. Some warnings that occur during installation (or uninstallation) are trivial, though.
{ "language": "en", "url": "https://stackoverflow.com/questions/135057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: #ifdef vs #if - which is better/safer as a method for enabling/disabling compilation of particular sections of code? This may be a matter of style, but there's a bit of a divide in our dev team and I wondered if anyone else had any ideas on the matter... Basically, we have some debug print statements which we turn off during normal development. Personally I prefer to do the following: //---- SomeSourceFile.cpp ---- #define DEBUG_ENABLED (0) ... SomeFunction() { int someVariable = 5; #if(DEBUG_ENABLED) printf("Debugging: someVariable == %d", someVariable); #endif } Some of the team prefer the following though: // #define DEBUG_ENABLED ... SomeFunction() { int someVariable = 5; #ifdef DEBUG_ENABLED printf("Debugging: someVariable == %d", someVariable); #endif } ...which of those methods sounds better to you and why? My feeling is that the first is safer because there is always something defined and there's no danger it could destroy other defines elsewhere. A: My initial reaction was #ifdef, of course, but I think #if actually has some significant advantages for this - here's why: First, you can use DEBUG_ENABLED in preprocessor and compiled tests. Example - Often, I want longer timeouts when debug is enabled, so using #if, I can write this DoSomethingSlowWithTimeout(DEBUG_ENABLED ? 5000 : 1000); ... instead of ... #ifdef DEBUG_MODE DoSomethingSlowWithTimeout(5000); #else DoSomethingSlowWithTimeout(1000); #endif Second, you're in a better position if you want to migrate from a #define to a global constant. #defines are usually frowned on by most C++ programmers. And third, you say you have a divide in your team. My guess is this means different members have already adopted different approaches, and you need to standardise. Ruling that #if is the preferred choice means that code using #ifdef will compile (and run) even when DEBUG_ENABLED is false. And it's much easier to track down and remove debug output that is produced when it shouldn't be than vice-versa.
Oh, and a minor readability point. You should be able to use true/false rather than 0/1 in your #define, and because the value is a single lexical token, it's the one time you don't need parentheses around it. #define DEBUG_ENABLED true instead of #define DEBUG_ENABLED (1) A: I myself prefer: #if defined(DEBUG_ENABLED) Since it makes code that looks for the opposite condition much easier to spot: #if !defined(DEBUG_ENABLED) vs. #ifndef DEBUG_ENABLED A: It's a matter of style. But I recommend a more concise way of doing this: #ifdef USE_DEBUG #define debug_print printf #else #define debug_print #endif debug_print("i=%d\n", i); You do this once, then always use debug_print() to either print or do nothing. (Yes, this will compile in both cases.) This way, your code won't be garbled with preprocessor directives. If you get the warning "expression has no effect" and want to get rid of it, here's an alternative: void dummy(const char*, ...) {} #ifdef USE_DEBUG #define debug_print printf #else #define debug_print dummy #endif debug_print("i=%d\n", i); A: They're both hideous. Instead, do this: #ifdef DEBUG #define D(x) do { x } while(0) #else #define D(x) do { } while(0) #endif Then whenever you need debug code, put it inside D();. And your program isn't polluted with hideous mazes of #ifdef. A: #if gives you the option of setting it to 0 to turn off the functionality, while still detecting that the switch is there. Personally I always #define DEBUG 1 so I can catch it with either an #if or #ifdef. A: #if and #define MY_MACRO (0) Using #if means that you created a "define" macro, i.e., something that will be searched for in the code and replaced by "(0)". This is the "macro hell" I hate to see in C++, because it pollutes the code with potential code modifications.
For example: #define MY_MACRO (0) int doSomething(int p_iValue) { return p_iValue + 1 ; } int main(int argc, char **argv) { int MY_MACRO = 25 ; doSomething(MY_MACRO) ; return 0; } gives the following error on g++: main.cpp|408|error: lvalue required as left operand of assignment| ||=== Build finished: 1 errors, 0 warnings ===| Only one error. Which means that your macro successfully interacted with your C++ code: The call to the function was successful. In this simple case, it is amusing. But my own experience with macros playing silently with my code is not full of joy and fulfillment, so... #ifdef and #define MY_MACRO Using #ifdef means you "define" something. Not that you give it a value. It is still polluting, but at least, it will be "replaced by nothing", and not seen by C++ code as a legitimate code statement. The same code above, with a simple define, is: #define MY_MACRO int doSomething(int p_iValue) { return p_iValue + 1 ; } int main(int argc, char **argv) { int MY_MACRO = 25 ; doSomething(MY_MACRO) ; return 0; } Gives the following errors: main.cpp||In function ‘int main(int, char**)’:| main.cpp|406|error: expected unqualified-id before ‘=’ token| main.cpp|399|error: too few arguments to function ‘int doSomething(int)’| main.cpp|407|error: at this point in file| ||=== Build finished: 3 errors, 0 warnings ===| So... Conclusion I'd rather live without macros in my code, but for multiple reasons (defining header guards, or debug macros), I can't. But at least, I like to make them the least interactive possible with my legitimate C++ code. Which means using #define without a value, using #ifdef and #ifndef (or even #if defined as suggested by Jim Buck), and most of all, giving them names so long and so alien that no one in his/her right mind will use them "by chance", and that in no way they will affect legitimate C++ code.
Post Scriptum Now, as I'm re-reading my post, I wonder if I should not try to find some value that won't ever be correct C++ to add to my define. Something like #define MY_MACRO @@@@@@@@@@@@@@@@@@ that could be used with #ifdef and #ifndef, but not let code compile if used inside a function... I tried this successfully on g++, and it gave the error: main.cpp|410|error: stray ‘@’ in program| Interesting. :-) A: #ifdef just checks if a token is defined, given #define FOO 0 then #ifdef FOO // is true #if FOO // is false, because it evaluates to "#if 0" A: That is not a matter of style at all. Also the question is unfortunately wrong. You cannot compare these preprocessor directives in the sense of better or safer. #ifdef macro means "if macro is defined" or "if macro exists". The value of macro does not matter here. It can be whatever. #if macro always compares to a value. In the above example it is the standard implicit comparison: #if macro != 0 An example of #if usage: #if CFLAG_EDITION == 0 return EDITION_FREE; #elif CFLAG_EDITION == 1 return EDITION_BASIC; #else return EDITION_PRO; #endif You can now either put the definition of CFLAG_EDITION in your code #define CFLAG_EDITION 1 or set the macro as a compiler flag. Also see here. A: We have had this same problem across multiple files and there is always the problem with people forgetting to include a "features flag" file (with a codebase of > 41,000 files, it is easy to do). If you had feature.h: #ifndef FEATURE_H #define FEATURE_H // turn on cool new feature #define COOL_FEATURE 1 #endif // FEATURE_H But then you forgot to include the header file in file.cpp: #if COOL_FEATURE // definitely awesome stuff here... #endif Then you have a problem: the compiler interprets COOL_FEATURE being undefined as a "false" in this case and fails to include the code. Yes, gcc does support a flag that causes an error for undefined macros...
but most 3rd party code either defines or does not define features, so this would not be that portable. We have adopted a portable way of correcting for this case as well as testing for a feature's state: function macros. If you changed the above feature.h to: #ifndef FEATURE_H #define FEATURE_H // turn on cool new feature #define COOL_FEATURE() 1 #endif // FEATURE_H But then you again forgot to include the header file in file.cpp: #if COOL_FEATURE() // definitely awesome stuff here... #endif The preprocessor would have errored out because of the use of an undefined function macro. A: The first seems clearer to me. It seems more natural to make it a flag as compared to defined/not defined. A: Both are exactly equivalent. In idiomatic use, #ifdef is used just to check for definedness (and what I'd use in your example), whereas #if is used in more complex expressions, such as #if defined(A) && !defined(B). A: There is a difference depending on how you specify a conditional define to the driver: diff <( echo | g++ -DA= -dM -E - ) <( echo | g++ -DA -dM -E - ) output: 344c344 < #define A --- > #define A 1 This means that -DA is a synonym for -DA=1, and if the value is omitted, it may lead to problems with #if A usage. A: For the purposes of performing conditional compilation, #if and #ifdef are almost the same, but not quite. If your conditional compilation depends on two symbols then #ifdef will not work as well. For example, suppose you have two conditional compilation symbols, PRO_VERSION and TRIAL_VERSION; you might have something like this: #if defined(PRO_VERSION) && !defined(TRIAL_VERSION) ... #else ... #endif Using #ifdef the above becomes much more complicated, especially getting the #else part to work. I work on code that uses conditional compilation extensively and we have a mixture of #if & #ifdef. We tend to use #ifdef/#ifndef for the simple case and #if whenever two or more symbols are being evaluated. A: I think it's entirely a question of style.
Neither really has an obvious advantage over the other. Consistency is more important than either particular choice, so I'd recommend that you get together with your team and pick one style, and stick to it. A: A little OT, but turning on/off logging with the preprocessor is definitely sub-optimal in C++. There are nice logging tools like Apache's log4cxx which are open-source and don't restrict how you distribute your application. They also allow you to change logging levels without recompilation, have very low overhead if you turn logging off, and give you the chance to turn logging off completely in production. A: I used to use #ifdef, but when I switched to Doxygen for documentation, I found that commented-out macros cannot be documented (or, at least, Doxygen produces a warning). This means I cannot document the feature-switch macros that are not currently enabled. Although it is possible to define the macros only for Doxygen, this means that the macros in the non-active portions of the code will be documented, too. I personally want to show the feature switches and otherwise only document what is currently selected. Furthermore, it makes the code quite messy if there are many macros that have to be defined only when Doxygen processes the file. Therefore, in this case, it is better to always define the macros and use #if. A: Alternatively, you can declare a global constant, and use the C++ if, instead of the preprocessor #if. The compiler should optimize the unused branches away for you, and your code will be cleaner. Here is what C++ Gotchas by Stephen C. Dewhurst says about using #if's. A: I've always used #ifdef and compiler flags to define it... A: I like #define DEBUG_ENABLED (0) when you might want multiple levels of debug. For example: #define DEBUG_RELEASE (0) #define DEBUG_ERROR (1) #define DEBUG_WARN (2) #define DEBUG_MEM (3) #ifndef DEBUG_LEVEL #define DEBUG_LEVEL (DEBUG_RELEASE) #endif //... //now not only #if (DEBUG_LEVEL) //... 
#endif //but also #if (DEBUG_LEVEL >= DEBUG_MEM) LOG("malloc'd %d bytes at %s:%d\n", size, __FILE__, __LINE__); #endif Makes it easier to debug memory leaks, without having all those log lines in your way while debugging other things. Also, the #ifndef around the define makes it easier to pick a specific debug level at the command line: make -DDEBUG_LEVEL=2, cmake -DDEBUG_LEVEL=2, etc. If not for this, I would give the advantage to #ifdef, because the compiler/make flag would be overridden by the one in the file. So you don't have to worry about changing back the header before doing the commit. A: As with many things, the answer depends. #ifdef is great for things that are guaranteed to be defined or not defined in a particular unit. Include guards for example. If the include file is present at least once, the symbol is guaranteed to be defined, otherwise not. However, some things don't have that guarantee. Think about the symbol HAS_FEATURE_X. How many states exist? * *Undefined *Defined *Defined with a value (say 0 or 1). So, if you're writing code, especially shared code, where some may #define HAS_FEATURE_X 0 to mean feature X isn't present and others may just not define it, you need to handle all those cases. #if !defined(HAS_FEATURE_X) || HAS_FEATURE_X == 1 Using just an #ifdef could allow for a subtle error where something is switched in (or out) unexpectedly because someone or some team has a convention of defining unused things to 0. In some ways, I like this #if approach because it means the programmer actively made a decision. Leaving something undefined is passive, and from an external point of view, it can sometimes be unclear whether that was intentional or an oversight.
{ "language": "en", "url": "https://stackoverflow.com/questions/135069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "126" }
Q: Why do Chrome, Firefox and IE all render fixed-width SELECT controls differently? I am losing hair on this one ... it seems that when I fix the width of an HTML SELECT control, it renders its width differently depending on the browser. Any idea how to standardize this without having to turn to multiple style sheets? Here is what I am working with: .combo { padding: 2px; width: 200px; } .text { padding: 2px; width: 200px; } This is my document type for the page: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> A: Try setting font-size on the selects as well; that can affect how they are rendered. Also consider the min-width and max-width properties. A: Form controls will always be less obedient to styling attempts, in particular selects and file inputs, so the only way to reliably style them cross-browser and with future-proofing in mind, is to replace them with JavaScript or Flash and mimic their functionality. A: input[type=text], select { border: solid 1px #c2c1c1; width: 150px; padding: 2px; } // then select { width: 156px; //needs to be input[type=text] width + (border and padding) } /* The input[type=text] width = width + padding + border The select width just equals width. The padding and border get rendered inside that width constraint. That's just the way SELECT rolls... */ A: Make sure you remove all default margins and padding, and define them explicitly. Make sure you're using a proper DOCTYPE and therefore rendering IE in Standards Mode. A: You may use a faked dropdown widget to replace the SELECT. A: Browsers tend to limit the amount you can style form controls with CSS, because form controls have a lot of complicated styling that varies between operating systems. CSS can’t describe that fully, so browsers put some of it off limits.
Eric Meyer wrote a good article on the subject: http://meyerweb.com/eric/thoughts/2007/05/15/formal-weirdness/ The best you can do is accept you don’t have complete control over the look of form fields, and experiment with whatever styling is really important. A: Try using Firebug or Chrome's "Inspect Element" feature (right click on the select control, click "inspect element") to see exactly what style properties are being inherited/rendered for that specific object. That should lead you in the right direction. A: I've tried all these suggestions ... and I finally have it so it looks good in IE and Firefox. Looks like there is something wrong with the padding on the SELECT control. If I increase the width of the SELECT by 2 pixels they now size correctly. .combo { padding: 2px; width: 206px; } .text { padding: 2px; width: 200px; } However, Chrome still does not show them the same size. A: Martinator is correct. Sounds like you're trying to control the width of various types of inputs or menus across browsers. You can directly select the object and specify the width. For example: select { width:350px; } Or you can do this with a textarea: textarea { width:350px; } Other types of inputs require the syntax Martinator mentions. So, for a text, input, or even file type input, you'd do this for each one: input[type=text] { width:350px; } input[type=input] { width:350px; } input[type=file] { width:350px; }
{ "language": "en", "url": "https://stackoverflow.com/questions/135076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How can you make a DataGridView scroll one item at a time using the mouse wheel? We'd like to override DataGridView's default behavior when using a mouse wheel with this control. By default, the DataGridView scrolls a number of rows equal to the SystemInformation.MouseWheelScrollLines setting. What we'd like to do is scroll just one item at a time. (We display images in the DataGridView, which are somewhat large. Because of this, scrolling three rows (a typical system setting) is too much, often causing the user to scroll to items they can't even see.) I've tried a couple of things already and haven't had much success so far. Here are some issues I've run into: * *You can subscribe to MouseWheel events but there's no way to mark the event as handled and do my own thing. *You can override OnMouseWheel but this never appears to be called. *You might be able to correct this in the base scrolling code but it sounds like a messy job since other types of scrolling (e.g. using the keyboard) come through the same pipeline. Anyone have a good suggestion? Here's the final code, using the wonderful answer given: /// <summary> /// Handle the mouse wheel manually due to the fact that we display /// images, which don't work well when you scroll by more than one /// item at a time. /// </summary> /// /// <param name="sender"> /// sender /// </param> /// <param name="e"> /// the mouse event /// </param> private void mImageDataGrid_MouseWheel(object sender, MouseEventArgs e) { // Hack alert! Through reflection, we know that the passed // in event argument is actually a handled mouse event argument, // allowing us to handle this event ourselves. // See http://tinyurl.com/54o7lc for more info. HandledMouseEventArgs handledE = (HandledMouseEventArgs) e; handledE.Handled = true; // Do the scrolling manually. Move just one row at a time. int rowIndex = mImageDataGrid.FirstDisplayedScrollingRowIndex; mImageDataGrid.FirstDisplayedScrollingRowIndex = e.Delta < 0 ?
Math.Min(rowIndex + 1, mImageDataGrid.RowCount - 1): Math.Max(rowIndex - 1, 0); } A: I just did a little scrounging and testing of my own. I used Reflector to investigate and discovered a couple of things. The MouseWheel event provides a MouseEventArgs parameter, but the OnMouseWheel() override in DataGridView casts it to HandledMouseEventArgs. This also works when handling the MouseWheel event. OnMouseWheel() does indeed get called, and it is in DataGridView's override that it uses SystemInformation.MouseWheelScrollLines. So: * *You could indeed handle the MouseWheel event, casting MouseEventArgs to HandledMouseEventArgs and setting Handled = true, then do what you want. *Subclass DataGridView, override OnMouseWheel() yourself, and try to recreate all the code I read here in Reflector except for replacing SystemInformation.MouseWheelScrollLines with 1. The latter would be a huge pain because it uses a number of private variables (including references to the ScrollBars) and you'd have to replace some with your own and get/set others using Reflection. A: I would subclass the DataGridView into my own custom control (you know, add a new Windows Forms --> Custom Control file and change the base class from Control to DataGridView). public partial class MyDataGridView : DataGridView Then override the WndProc method and substitute something like so: protected override void WndProc(ref Message m) { if (m.Msg == 0x20a) { int wheelDelta = ((int)m.WParam) >> 16; // 120 = UP 1 tick // -120 = DOWN 1 tick this.FirstDisplayedScrollingRowIndex -= (wheelDelta / 120); } else { base.WndProc(ref m); } } Of course, you'll have to check that you don't set FirstDisplayedScrollingRowIndex to a number outside of the range of your grid, etc. But this works quite well! Richard A: Overriding OnMouseWheel and not calling base.OnMouseWheel should work. Some wheel mice have special settings that you may need to set yourself for it to work properly.
See this post http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=126295&SiteID=1 A: UPDATE: Since I've now learned that the DataGridView has a MouseWheel event, I've added a second, simpler override. One way to accomplish this is to subclass the DataGridView and override the WndProc to add special handling of the WM_MOUSEWHEEL message. This example catches the mouse wheel movement and replaces it with a call to SendKeys.Send. (This is a little different than just scrolling, since it also selects the next/previous row of the DataGridView. But it works.) public class MyDataGridView : DataGridView { private const uint WM_MOUSEWHEEL = 0x20a; protected override void WndProc(ref Message m) { if (m.Msg == WM_MOUSEWHEEL) { var wheelDelta = ((int)m.WParam) >> 16; if (wheelDelta < 0) { SendKeys.Send("{DOWN}"); } if (wheelDelta > 0) { SendKeys.Send("{UP}"); } return; } base.WndProc(ref m); } } 2nd take (with the same caveats as mentioned above): public class MyDataGridView : DataGridView { protected override void OnMouseWheel(MouseEventArgs e) { if (e.Delta < 0) SendKeys.Send("{DOWN}"); else SendKeys.Send("{UP}"); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/135105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Java Developer meets Objective-C on Mac OS I developed in C++ many years ago, but these days I am primarily a Java software engineer. Given I own an iPhone, am ready to spring for a MacBook next month, and am generally interested in getting started with Mac OS development (using Objective C), I thought I would just put this question out there: What Next? More specifically, what books should I pick up, and are there any web resources that some folks could point me to? Some books that I am planning to purchase: * *Programming in Objective-C 2.0 *Cocoa(R) Programming for Mac OS X (3rd Edition) Anyone familiar with these titles? Finally, I would be very interested in a summary of what I should be prepared to expect, once I embark on this journey. As someone that develops in Java using IntelliJ IDEA, what are some key differences I will notice as I move over to writing ObjectiveC code in Xcode? What are the differences between Mac OS desktop development and iPhone development? Being used to Java garbage collection, what should I know about ObjectiveC garbage collection / memory management? Any other language-specific issues that anyone would like to point out? How about building UIs? Is it closer to Swing, building Visual C++ resource files that code interacts with, or is it more like some of the Borland IDEs that will generate code for GUIs? A: Having purchased both of the books in your question, I recommend Cocoa Programming for Mac OS X as a quick way to learn the language and the Cocoa framework, and it is probably the fastest way to start producing real applications in Cocoa. I highly recommend it. Programming in Objective-C 2.0 is a great reference book, but if you already know C, there's not much it's going to teach you that you can't pick up from the other book. However, if you ever need a list of all the reserved keywords in Objective-C, that's the book to go to.
All of the user interface can be generated programmatically, but you'll find it much easier to use Interface Builder, which comes with XCode, to lay out the user interface. You'll end up with a lot less code. With bindings, you can even eliminate code which isn't directly related to laying out the interface. The details are in the Cocoa Programming for Mac OS X book. The one big thing I miss from Java is the collection API. In Cocoa, you just get NSSet, NSArray, and NSDictionary, and there's no analog to the Comparable interface. These classes are also immutable, but have mutable versions such as NSMutableArray. I actually haven't played with the Garbage Collection in Objective-C 2.0. In previous versions of Objective-C, memory management was handled by the retain, release, and autorelease methods. Objects were created with a retain count of 1. Retaining incremented that count, releasing decremented it, and autoreleasing objects is a little more complicated. Again, the Cocoa Programming book explains it well. Garbage collection is an option, and if it's turned on, the retain, release and autorelease methods do nothing. However, if you are writing a library or framework to be used by others, you should program it as if garbage collection is turned off. That way applications can use it whether or not they have garbage collection turned on. As for Web resources, http://cocoadevcentral.com/ is a great site with beginner tutorials. The CocoaDev Wiki at http://www.cocoadev.com/ contains detailed information on a lot of topics, and you can usually find some useful information and people on the cocoa-dev mailing list (http://lists.apple.com/mailman/listinfo/cocoa-dev). iPhone development is a little different, and the details are restricted by an NDA.
However, if you get approved by Apple to get access to the iPhone developer center, Apple has provided some great video overviews of the differences, which point you to the documentation you need to make the jump from Mac OS X to iPhone OS X programming. A: Get Cocoa Programming for Mac OS X. Most of your questions will be answered by that book. You can also start reading Become an XCoder and Cocoa Dev Central. The iPhone SDK is still under NDA, so you won't be able to find any online resources about it except for the ones provided by Apple. Cocoa UI is based on MVC. You use Interface Builder to design your views and then bind them to your models and controllers. Objective C is a mixture of C and Smalltalk. A: Another option for you is jaiPhon, which allows you to write Java apps that get translated into iPhone-speak at build time. I don't know if it's available yet, or if it's commercial-ware or whatever, but it's interesting nonetheless. http://www.jaiphon.com/ A: Also check out this: http://www.xmlvm.org/overview/ It is a project that attempts to cross-compile programs written in a variety of source languages to a variety of target languages. One of the initial test cases was to write programs in Java and run them on an iPhone. Watching the video on the site is worthwhile. With that said, I haven't tried it. The project seems quite beta, and there isn't a lot of activity on their SourceForge site. A: Are you already familiar with the square brackets? Here's a brief explanation of their rationale from my point of view. I hope this helps too. A: I think you will feel rather naked when jumping from IntelliJ to Xcode. But this is just from a tool perspective. Bring extra clothes (TextMate+FScript)!
{ "language": "en", "url": "https://stackoverflow.com/questions/135112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How can I use an event to cause a method to run? So in my documentation it says: public event TreeViewPlusNodeCheckedEventHandler NodeChecked() You can use this event to cause a method to run whenever the check-box for a node is checked on the tree. So how do I add a method to my code-behind file that will run when a node is checked? The method I want to run is: protected void TOCNodeCheckedServer(object sender, TreeViewPlusNodeEventArgs args) { TreeViewPlusNode aNode = args.Node; if (!aNode.Checked) return; List<string> BaseLayers = new List<string>(); _arcTOCConfig.BaseDataLayers.CopyTo(BaseLayers); List<MapResourceItem> mapResources = new List<MapResourceItem>(); if (BaseLayers.Contains(aNode.Text)) { foreach (BaseDataLayerElement anEl in _arcTOCConfig.BaseDataLayers) { if (!aNode.Text.Equals(anEl.Name)) { if (aNode.TreeViewPlus.Nodes.FindByValue(anEl.Name).Checked) { aNode.TreeViewPlus.Nodes.FindByValue(anEl.Name).Checked = false; aNode.TreeViewPlus.Nodes.FindByValue(anEl.Name).Refresh(); MapResourceItem aMapResource = this.Map1.MapResourceManagerInstance.ResourceItems.Find(anEl.Name); aMapResource.DisplaySettings.Visible = false; this.Map1.RefreshResource(anEl.Name); mapResources.Add(aMapResource); this.Map1.MapResourceManagerInstance.ResourceItems.Remove(aMapResource); } else { MapResourceItem aMapResource = this.Map1.MapResourceManagerInstance.ResourceItems.Find(anEl.Name); mapResources.Add(aMapResource); this.Map1.MapResourceManagerInstance.ResourceItems.Remove(aMapResource); } } } foreach (MapResourceItem aMapResource in mapResources) { int count = this.Map1.MapResourceManagerInstance.ResourceItems.Count - 1; this.Map1.MapResourceManagerInstance.ResourceItems.Insert(count, aMapResource); this.Map1.MapResourceManagerInstance.CreateResource(aMapResource); } this.Map1.InitializeFunctionalities(); this.Map1.Refresh(); } } VS 2008, C#, .NET 3.5 A: You need to assign a delegate to the event and have it run the method you want.
Something like: TreeViewControl.NodeChecked += new TreeViewPlusNodeCheckedEventHandler(TOCNodeCheckedServer); A: Just add a handler to the event. myTreeView.NodeChecked += new TreeViewPlusNodeCheckedEventHandler(TOCNodeCheckedServer); or (because instantiating the TreeViewPlusNodeCheckedEventHandler isn't actually necessary) myTreeView.NodeChecked += TOCNodeCheckedServer; A: This is a standard case of registering a handler for an event: treeView.NodeChecked += TOCNodeCheckedServer; A: In your initialise method for the form, add TOCTree.NodeChecked += new TreeViewPlusNodeCheckedEventHandler (TOCNodeCheckedServer); This will tell your app to run TOCNodeCheckedServer when the TOCTree fires the NodeChecked event. There are loads of resources on the web explaining how this works. Check out http://www.csharphelp.com/archives/archive253.html as an example.
{ "language": "en", "url": "https://stackoverflow.com/questions/135121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Hosted CRM system with an API? I apologize if this is slightly off-topic. I'm hoping to find a software-as-a-service CRM system that can be easily integrated with our custom user management application. Fundamentally, we have our own user accounts and provide services to these registered users; frequently, we have email conversations with people that own these accounts - it would be great if our CRM interface would suddenly light up with the record of these conversations. Here's my dream solution, let me know if this is possible: - We have a "service" email alias; I'd want to add "track@GreatCrmVendor.com" to that alias so that all emails are CC'd to our CRM vendor. - In the admin UI for our app, I'd love to have access to the emails that the CRM vendor has captured for us - something like a REST-based web service call that asks "give me all email headers for customer with email X". Do you know of such a CRM vendor? Clarification: I know how to build such a catch-all email account, parse the emails, record them in the database and all... I just don't want to invest the development time in it; I'm hoping we can just integrate with a good off-the-shelf solution. Thanks! A: Salesforce.com has an extensive SOAP API, from what I understand. http://www.salesforce.com/developer/ A: http://www.advancedsps.com/?gclid=CPnlpuXL95UCFQRfagodDR554w A: Try this: link text It plays well with email - AND has an API if you want to do integration... The links to info on email integration are on the above page, as well as developer APIs << DISCLAIMER: not employed by them, just a happy customer >> A: There are several apps in the Salesforce AppExchange that claim to handle this: * *http://sites.force.com/appexchange/apex/listingDetail?listingId=a0N300000016YR4EAM *http://sites.force.com/appexchange/apex/listingDetail?listingId=a0N300000016Y7nEAE The salesforce.com API is also pretty straightforward to use. A: Sugar CRM has an API, and they offer a hosted service.
See: http://www.sugarcrm.com/wiki/index.php?title=SOAP_Documentation
{ "language": "en", "url": "https://stackoverflow.com/questions/135122", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I get Gmail-like file uploads for my web app? I want to add Gmail-like file upload functionality for one of my projects. Can anybody help me with this? My application is built in VB.NET. I would appreciate any kind of help or guidance. Thanks A: Here is a Gmail uploader clone. It is an exact clone of the Gmail uploader with some extra facilities: you can see the thumbnails of images after uploading, drag the thumbnails to change the order, and replace any thumbnail. It is done using jQuery. You can see the demo here. The source code is free to download in a single zip file. I hope you can easily remove some code and get the desired thing. You may leave comments on the ABCoder blog if you need further help. A: Check out SWFUpload, which is essentially a javascript api to flash's absolutely superior file upload handling capabilities. Best thing out there until the browsers finally catch up. From link: * *Upload multiple files at once by ctrl/shift-selecting in dialog *Javascript callbacks on all events *Get file information before upload starts *Style upload elements with XHTML and css *Display information while files are uploading using HTML *No page reloads necessary *Works on all platforms/browsers that have Flash support. *Degrades gracefully to normal HTML upload form if Flash or javascript is unavailable *Control filesize before upload starts *Only display chosen filetypes in dialog *Queue uploads, remove/add files before starting upload Demos ----- iframe upload ----- To start, you want to have an iframe on your page. This is meant for server communication. You'll hide it later, but for now, keep it visible. Give that iframe a name attribute, like "uploader" or something. Now, in your form, set the target to the iframe's name and the action to a script you have on the server that will accept a file upload (like a normal form with a file upload). Add a link inside that form with the text "Add File".
Set that link to run a javascript function which will add a new input to the form. This can be done via the DOM, but I would recommend a javascript library like jquery. Once the new file input is added to the form, set the blur event of that input to a javascript function that will submit the form and then check it periodically for output. Reading an iframe can be tricky, but it's possible. Have your file upload script output a "Done." or a filename or something when the upload is complete. Check it every second or so until there is content. Once you have content, kill your timer and replace the file input with the name of the file (or "File Uploaded") or whatever. Hide your iframe with css. A: From YUI! (Yahoo User Interface), https://yuilibrary.com/yui/docs/uploader/ * *Multiple file selection in a single "Open File" dialog. *File extension filters to facilitate the user's selection. *Progress tracking for file uploads. *A range of file metadata: filename, size, date created, date modified, and author. *A set of events dispatched on various aspects of the file upload process: file selection, upload progress, upload completion, etc. *Inclusion of additional data in the file upload POST request. *Faster file upload on broadband connections due to the modified SEND buffer size. *Same-page server response upon completion of the file upload. Totally Free A: For a non-flash solution, you can use NeatUpload. I used it on an extensive project last year with a no-flash requirement. It's very easy to integrate into existing solutions. I thought it was a breeze to work with. Easier, in my limited experience, than working with SWFUpload in ASP.NET. Probably because NeatUpload is built just for ASP.NET. A: Are you talking about an upload without a full page postback? Take a look at http://www.phpletter.com/Demo/AjaxFileUpload-Demo/, which creates a hidden iframe, copies the input control, and uses the iframe to perform the post to get the file on the server. 
If you're looking for the behavior where, when the user clicks "attach file", the file-browsing dialog automatically pops up, that can be done via Javascript but doesn't work in Firefox, which has the security precaution of requiring the user to invoke the "Browse" button directly (rather than programmatically through script). For the "automatic" upload, use Javascript to attach to the "change" event of the 'input' control so that the upload will start when a file has been selected. A: You may use a Flickr Uploader clone instead.
*When the timer triggers, it detaches the input element from the form, and adds it to a different form in a hidden iframe, placing a simple link in the main (visible) form. The hidden iframe is then submitted to upload the file. (It may also clone the input element, but I haven't tried whether this works.) A: You can use iFrames for this
{ "language": "en", "url": "https://stackoverflow.com/questions/135123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Should one prefer STL algorithms over hand-rolled loops? I seem to be seeing more 'for' loops over iterators in questions & answers here than I do for_each(), transform(), and the like. Scott Meyers suggests that STL algorithms are preferred, or at least he did in 2001. Of course, using them often means moving the loop body into a function or function object. Some may feel this is an unacceptable complication, while others may feel it better breaks down the problem. So... should STL algorithms be preferred over hand-rolled loops? A: std::for_each is the kind of code that made me curse the STL, years ago. I cannot say if it's better, but I prefer to have the code of my loop under the loop preamble. For me, it is a strong requirement. And the std::for_each construct won't allow me that (strangely enough, the foreach versions of Java or C# are cool, as far as I am concerned... So I guess it confirms that for me the locality of the loop body is very very important). So I'll use for_each only if there is already a readable/understandable algorithm usable with it. If not, no, I won't. But this is a matter of taste, I guess, as I should perhaps try harder to understand and learn to parse all these things... Note that the people at boost apparently felt somewhat the same way, for they wrote BOOST_FOREACH: #include <string> #include <iostream> #include <boost/foreach.hpp> int main() { std::string hello( "Hello, world!" ); BOOST_FOREACH( char ch, hello ) { std::cout << ch; } return 0; } See: http://www.boost.org/doc/libs/1_35_0/doc/html/foreach.html A: That's really the one thing that Scott Meyers got wrong. If there is an actual algorithm that matches what you need to do, then of course use the algorithm. But if all you need to do is loop through a collection and do something to each item, just do the normal loop instead of trying to separate code out into a different functor; that just ends up dicing code up into bits without any real gain.
There are some other options like boost::bind or boost::lambda, but those are really complex template metaprogramming things; they do not work very well with debugging and stepping through the code, so they should generally be avoided. As others have mentioned, this will all change when lambda expressions become a first class citizen. A: The for loop is imperative, the algorithms are declarative. When you write std::max_element, it’s obvious what you need; when you use a loop to achieve the same, it’s not necessarily so. Algorithms also can have a slight performance edge. For example, when traversing an std::deque, a specialized algorithm can avoid checking redundantly whether a given increment moves the pointer over a chunk boundary. However, complicated functor expressions quickly render algorithm invocations unreadable. If an explicit loop is more readable, use it. If an algorithm call can be expressed without ten-storey bind expressions, by all means prefer it. Readability is more important than performance here, because this kind of optimization is what Knuth so famously attributes to Hoare; you’ll be able to use another construct without trouble once you realize it’s a bottleneck. A: It depends: if the algorithm doesn't take a functor, then always use the std algorithm version. It's both simpler for you to write and clearer. For algorithms that take functors, generally no, until C++0x lambdas can be used. If the functor is small and the algorithm is complex (most aren't) then it may be better to still use the std algorithm. A: I'm a big fan of the STL algorithms in principle, but in practice it's just way too cumbersome. By the time you define your functor/predicate classes, a two-line for loop can turn into 40+ lines of code that is suddenly 10x harder to figure out. Thankfully, things are going to get a ton easier in C++0x with lambda functions, auto and new for syntax. Check out this C++0x Overview on Wikipedia.
A: It depends on: * *Whether high-performance is required *The readability of the loop *Whether the algorithm is complex If the loop isn't the bottleneck, and the algorithm is simple (like for_each), then for the current C++ standard, I'd prefer a hand-rolled loop for readability. (Locality of logic is key.) However, now that C++0x/C++11 is supported by some major compilers, I'd say use STL algorithms because they now allow lambda expressions — and thus the locality of the logic. A: I wouldn't use a hard and fast rule for it. There are many factors to consider, like often you perform that certain operation in your code, is just a loop or an "actual" algorithm, does the algorithm depend on a lot of context that you would have to transmit to your function? For example I wouldn't put something like for (int i = 0; i < some_vector.size(); i++) if (some_vector[i] == NULL) some_other_vector[i]++; into an algorithm because it would result in a lot more code percentage wise and I would have to deal with getting some_other_vector known to the algorithm somehow. There are a lot of other examples where using STL algorithms makes a lot of sense, but you need to decide on a case by case basis. A: I think the STL algorithm interface is sub-optimal and should be avoided because using the STL toolkit directly (for algorithms) might give a very small gain in performance, but will definitely cost readability, maintainability, and even a bit of writeability when you're learning how to use the tools. How much more efficient is a standard for loop over a vector: int weighted_sum = 0; for (int i = 0; i < a_vector.size(); ++i) { weighted_sum += (i + 1) * a_vector[i]; // Just writing something a little nontrivial. } than using a for_each construction, or trying to fit this into a call to accumulate? 
You could argue that the iteration process is less efficient, but a for_each also introduces a function call at each step (which might be mitigated by trying to inline the function, but remember that "inline" is only a suggestion to the compiler - it may ignore it). In any case, the difference is small. In my experience, over 90% of the code you write is not performance critical, but is coder-time critical. By keeping your STL loop all literally inline, it is very readable. There is less indirection to trip over, for yourself or future maintainers. If it's in your style guide, then you're saving some learning time for your coders (admit it, learning to properly use the STL the first time involves a few gotcha moments). This last bit is what I mean by a cost in writeability. Of course there are some special cases -- for example, you might actually want that for_each function separated to re-use in several other places. Or, it might be one of those few highly performance-critical sections. But these are special cases -- exceptions rather than the rule.
Don't jump to conclusions, though - it's at least as likely that the standards guys are guilty of post-decisional dissonance. A: I’m going to go against the grain here and advocate that using STL algorithms with functors makes code much easier to understand and maintain, but you have to do it right. You have to pay more attention to readability and clarity. Particularly, you have to get the naming right. But when you do, you can end up with cleaner, clearer code, and a paradigm shift into more powerful coding techniques. Let’s take an example. Here we have a group of children, and we want to set their “Foo Count” to some value. The standard for-loop, iterator approach is: for (vector<Child>::iterator iter = children.begin(); iter != children.end(); ++iter) { iter->setFooCount(n); } Which, yeah, it’s pretty clear, and definitely not bad code. You can figure it out with just a little bit of looking at it. But look at what we can do with an appropriate functor: for_each(children.begin(), children.end(), SetFooCount(n)); Wow, that says exactly what we need. You don’t have to figure it out; you immediately know that it’s setting the “Foo Count” of every child. (It would be even clearer if we didn’t need the .begin() / .end() nonsense, but you can’t have everything, and they didn’t consult me when making the STL.) Granted, you do need to define this magical functor, SetFooCount, but its definition is pretty boilerplate: class SetFooCount { public: SetFooCount(int n) : fooCount(n) {} void operator () (Child& child) { child.setFooCount(fooCount); } private: int fooCount; }; In total it’s more code, and you have to look at another place to find out exactly what SetFooCount is doing. But because we named it well, 99% of the time we don’t have to look at the code for SetFooCount. We assume it does what it says, and we only have to look at the for_each line. What I really like is that using the algorithms leads to a paradigm shift.
Instead of thinking of a list as a collection of objects, and doing things to every element of the list, you think of the list as a first class entity, and you operate directly on the list itself. The for-loop iterates through the list, calling a member function on each element to set the Foo Count. Instead, I am doing one command, which sets the Foo Count of every element in the list. It’s subtle, but when you look at the forest instead of the trees, you gain more power. So with a little thought and careful naming, we can use the STL algorithms to make cleaner, clearer code, and start thinking on a less granular level. A: I think a big factor is the developer's comfort level. It's probably true that using transform or for_each is the right thing to do, but it's not any more efficient, and handwritten loops aren't inherently dangerous. It might take half an hour for a developer to write a simple loop, versus half a day to get the syntax for transform or for_each right and move the provided code into a function or function object. And then other developers would need to know what was going on. A new developer would probably be best served by learning to use transform and for_each rather than handmade loops, since he would be able to use them consistently without error. For the rest of us for whom writing loops is second nature, it's probably best to stick with what we know, and get more familiar with the algorithms in our spare time. Put it this way -- if I told my boss I had spent the day converting handmade loops into for_each and transform calls, I doubt he'd be very pleased.
{ "language": "en", "url": "https://stackoverflow.com/questions/135129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: Semantic merge tool Background: In my job, we use SVN, C# and VisualStudio. Part of my task regularly involves global renames. Often I end up with a broken build after renaming something and then merging in changes. The question: is there a solution out there that can look at my changes, notice the global rename and then apply that to the edit that others have made while merging them in? Another way to get much the same effect would be some sort of refactor log and then apply that to the incoming edits. The tool need not be perfect; even if it just noted any references in their edits that referred to something that I have edited, that would be valuable. Edit I'm aware of VS's refactor tools. What I'm looking for is something that will allow me to, after I have refactored my working copy, apply the same refactorings to other people's edits that I now need to merge in. The ideal solution would be to make sure there are no outstanding edits when I do the refactoring, but that would prevent anyone else from getting anything done for the next week or more. (Because they would have to sync every half hour or so for the next week) A: There is a commercial tool for exactly that use case called Semantic Merge. They provide a 15-day free trial; open-source projects may use it for free (contact the support). The company behind Semantic Merge also has a Git client with integrated Semantic Merge which is currently beta (here, have some short intro videos). A: Keep renaming separate from other refactorings. Renames can generally be automated, and therefore making the changes is easy. You can even distribute scripts to allow other engineers with merge hell to perform the transformations on their files. There is no easy way to automate refactorings, so keep it simple. A rename should only take minutes and you should be able to check out and commit with minimal testing.
A: Assuming at least VS 2005 and the global rename is a variable/property/function, there is a Refactor - Rename right-click menu option you could use. By design it propagates the name change through your entire solution. A: Wouldn't it be possible to reduce the time needed for you to commit your changes? One or more weeks seems quite long between commits... A: I understood your problem. Unfortunately, I think there isn't an SVN script smart enough to do this job while syncing. Maybe your team working more appropriately with SVN could make this situation less painful. When you do an svn:update in your working copy and perform merging operations, it is good practice to rebuild the updated solution before committing the changes. Having an SVN script able to do this automatically when merging would be great, indeed.
{ "language": "en", "url": "https://stackoverflow.com/questions/135130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Passing a ref or pointer to a managed type as an argument in C++.net I'm really baffled by this - I know how to do this in VB, unmanaged C++ and C# but for some reason I can't accept a ref variable of a managed type in C++. I'm sure there's a simple answer, really - but here's the C# equivalent: myClass.myFunction(ref variableChangedByfunction); I've tried C++ pointers - no dice. I've tried ref keywords. No dice. I tried the [out] keyword. Didn't work. I can't find any documentation that clearly explains my problem, either. A: Turns out in the function declaration you need to use a % after the parameter name: bool Importer::GetBodyChunk(String^% BodyText, String^% ChunkText) And then you pass in the variable per usual. A: Use a ^ instead of a * A: Just to make it a little clearer: Parameters of reference types (e.g. System::String) have to be denoted with ^ in the newer C++/CLI syntax. This tells the compiler that the parameter is a handle to a GC object. If you need a tracking reference (like with ref or out in C#) you need to add % as well. And here comes a tip: I often find it helpful to use the .NET Reflector to look at existing assemblies and switch to C++ code style. This gives good insight into usage of attributes for interoperability between different .net languages.
{ "language": "en", "url": "https://stackoverflow.com/questions/135132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I discover if there are other devices on my local sub-net? I'm trying to confirm a user has unplugged my embedded device from a network before performing some maintenance. I'm considering "ping"ing all IP addresses on my sub-net, but that sounds crude. Is there a broadcast/ARP method that might work better? A: You can try a broadcast ping (this is from Linux): ping -b 255.255.255.255 Another option is to download Nmap and do a ping-scan. A: You could use nmap. It's still crude, but at least it's using a tool designed to do it so you don't have to spend time on it. A: If you can't get reliable link state information from your Ethernet device (which most chipsets should support these days, BTW...), sending an ARP request for each IP on your local subnet is a decent substitute. The overhead is minimal, and as soon as you get a single response, you can be sure you're still connected to a network. The only possible problem I see here is that if your device is on a /8 subnet, it can take quite a while to loop through all 16777216 possible IPs. So, you may want to consider some optimization, such as only sending ARP requests for your default gateway, as well as all IPs currently in your ARP table. A: If there's a peer you know you were connected to recently you could try pinging or arping that first. That could cut down on the traffic you're generating. A: You could also run tcpdump -n to see what's active on the network. A: Not receiving any responses to ICMP pings or ARP requests is not a 100% guarantee that there's no network connection. For instance, there might be devices on the network that are firewalled off. EDIT: Maybe you could access some lower-level information on your embedded device to check whether the network interface has its link up without actually sending any data. A: Is there any chance that your device supports UPnP or Bonjour?
Besides the low-level protocols, you should also have a look at these protocols, which support some kind of plug-and-play functionality. A UPnP device, for example, sends a message on the LAN before it is switched off (though this doesn't help if it is just removed by unplugging it...).
{ "language": "en", "url": "https://stackoverflow.com/questions/135149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What's the best way to remove tags from the end of a string? The .NET web system I'm working on allows the end user to input HTML formatted text in some situations. In some of those places, we want to leave all the tags, but strip off any trailing break tags (but leave any breaks inside the body of the text.) What's the best way to do this? (I can think of ways to do this, but I'm sure they're not the best.) A: Small change to bdukes' code, which should be faster as it doesn't backtrack. public static Regex regex = new Regex( @"(?:\<br[^>]*\>)*$", RegexOptions.IgnoreCase | RegexOptions.CultureInvariant | RegexOptions.IgnorePatternWhitespace | RegexOptions.Compiled ); regex.Replace(text, string.Empty); A: I'm sure this isn't the best way either, but it should work unless you have trailing spaces or something. while (myHtmlString.EndsWith("<br>")) { myHtmlString = myHtmlString.Substring(0, myHtmlString.Length - 4); } A: I'm trying to ignore the ambiguity in your original question, and read it literally. Here is an extension method that overloads TrimEnd to take a string. static class StringExtensions { public static string TrimEnd(this string s, string remove) { if (s.EndsWith(remove)) { return s.Substring(0, s.Length - remove.Length); } return s; } } Here are some tests to show that it works: Debug.Assert("abc".TrimEnd("<br>") == "abc"); Debug.Assert("abc<br>".TrimEnd("<br>") == "abc"); Debug.Assert("<br>abc".TrimEnd("<br>") == "<br>abc"); I want to point out that this solution is easier to read than regex, probably faster than regex (you should use a profiler, not speculation, if you're concerned about performance), and useful for removing other things from the ends of strings. regex becomes more appropriate if your problem is more general than you stated (e.g., if you want to remove <BR> and </BR> and deal with trailing spaces or whatever). A: You can use a regex to find and remove the text with the regex match set to anchor at the end of the string.
A: As @Mitch said, // using System.Text.RegularExpressions; /// <summary> /// Regular expression built for C# on: Thu, Sep 25, 2008, 02:01:36 PM /// Using Expresso Version: 2.1.2150, http://www.ultrapico.com /// /// A description of the regular expression: /// /// Match expression but don't capture it. [\<br\s*/?\>], any number of repetitions /// \<br\s*/?\> /// < /// br /// Whitespace, any number of repetitions /// /, zero or one repetitions /// > /// End of line or string /// /// /// </summary> public static Regex regex = new Regex( @"(?:\<br\s*/?\>)*$", RegexOptions.IgnoreCase | RegexOptions.CultureInvariant | RegexOptions.IgnorePatternWhitespace | RegexOptions.Compiled ); regex.Replace(text, string.Empty); A: You could also try (if the markup is likely to be a valid tree) something similar to: string s = "<markup><div>Text</div><br /><br /></markup>"; XmlDocument doc = new XmlDocument(); doc.LoadXml(s); Console.WriteLine(doc.InnerXml); XmlElement markup = doc["markup"]; int childCount = markup.ChildNodes.Count; for (int i = childCount -1; i >= 0; i--) { if (markup.ChildNodes[i].Name.ToLower() == "br") { markup.RemoveChild(markup.ChildNodes[i]); } else { break; } } Console.WriteLine("---"); Console.WriteLine(markup.InnerXml); Console.ReadKey(); The code above is a bit "scratch-pad" but if you cut and paste it into a Console application and run it, it does work :=) A: you can use RegEx or check if the trailing string is a break and remove it
{ "language": "en", "url": "https://stackoverflow.com/questions/135151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How can I call a long-running external program from Excel / VBA? What is the best way to run an external program from Excel? It might run for several minutes. What's the best practice about how to do this? Ideally, * *A modal dialog box that lets the user know that the process is executing. *If the executable fails, the user should receive a notification. *A timeout should be enforced. *A cancel button should be on the dialog box. But any best practices are welcome. I'm interested in solutions with calling either a .dll or an .exe. Preferably something that works with Excel '03 or earlier, but I'd love to hear a reason to move to a later version as well. A: You should check out these two Microsoft KB articles: How to launch a Win32 Application from Visual Basic and How To Use a 32-Bit Application to Determine When a Shelled Process Ends. They both quickly give you the framework to launch a process and then check on its completion. Each of the KB articles has some additional references that may be relevant. The latter knowledgebase article assumes that you want to wait for an infinite amount of time for your shelled process to end. You can modify the ret& = WaitForSingleObject(proc.hProcess, INFINITE) function call to return after some finite amount of time in milliseconds--replace INFINITE with a positive value representing milliseconds and wrap the whole thing in a Do While loop. The return value tells you if the process completed or the timer ran out. The return value will be zero if the process ended. If the return value is non-zero then the process is still running, but control is given back to your application. During this time while you have positive control of your application, you can determine whether to update some sort of UI status, check on cancellation, etc. Or you can loop around again and wait some more. There are even additional options if the program you are shelling to is something that you wrote.
You could hook into one of its windows and have the program post messages that you can attach to and use as status feedback. This is probably best left for a separate item if you need to consider it. You can use the process structure to get a return value from the called process. Your process does need to return a value for this to be useful. My general approach to this kind of need is to: * *give the user a non-modal status dialog at the start of the process with a cancel button, which when clicked will set a flag to be checked later. Providing the user with any status is most likely impossible, so giving them an elapsed time or one of those animated GIFs might be helpful in managing expectations. *Start the process *Wait for the process to complete, allowing cancellation check every 500ms *If the process is complete close the dialog and continue along. *If the process is not complete, see if the user hit cancel and if so send a close message to the process' window. This may not work and terminating the process might be required--careful if you do this. Your best bet may be to abandon the process if it won't close properly. You can also check for a timeout at this point and try to take the same path as if the user hit cancel. Hope this helps, Bill.
{ "language": "en", "url": "https://stackoverflow.com/questions/135158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is Google App Engine a worthy platform for a Lifestreaming app? I'm building a Lifestreaming app that will involve pulling down lots of feeds for lots of users, and running data-mining and machine-learning algorithms on the results. GAE's load-balanced and scalable hosting sounds like a good fit for a system that could eventually be moving around a LOT of data, but its lack of cron jobs is a nuisance. Would I be better off using Django on a co-loc and dealing with my own DB scaling? A: It might change when they offer paid plans, but as it stands, App Engine is not good for CPU-intensive apps. It is designed to scale to handle a large number of requests, not necessarily a large amount of calculation per request. I am running into this issue with fairly minor calculations, and I fear I may have to start looking elsewhere as my data set grows. A: While I cannot answer your question directly, my experience of building Microupdater (a news aggregator collecting a few hundred feeds on AppEngine) may give you a little insight. * *Fetching feeds. Fetching lots of feeds by cron jobs (the only solution until SDK 1.2.5) is neither efficient nor scalable, since cron puts a lower limit on job frequency (say 1 minute, so you could fetch at most 60 feeds hourly). With the latest SDK 1.2.5 there is an XMPP API, which I have not implemented yet. The most promising approach would be PubSubHubbub, where you offer a callback URL and the hub notifies you of new entries in real time. And there is a demo implementation on AppEngine that you can play around with. *Parsing feeds. You may already know that parsing feeds is CPU-intensive. I use Universal Feed Parser by Mark Pilgrim; when parsing a large feed (say a public Google Reader topic), AppEngine may fail to process all entries. My dashboard shows a lot of these CPU-limit warnings, though that may simply be because I have not optimized the code yet.
All told, AppEngine is not yet an ideal platform for a lifestreaming app, but that may change in the future. A: (This is obviously pretty old, responding just because it still comes up really high in related Google queries...) I just started using AppEngine and haven't been using it for tons of external requests. But I do know that the info above is probably a lot less valid now, and might not even still stand. They relaxed the limits quite a bit since September 08 - check Aral Balkan's blog for his initial complaint about the above, and later developments. A: If your app relies solely on Django, then App Engine is a good bet. However, if you ever need to add C-enhanced libraries, you're up a creek. App Engine doesn't support things like PIL or ReportLab, which use C to speed up processing times. I'm only mentioning this because you may want to use C to speed up some of your routines in the long run. If you decide to use a co-loc, check out WebFaction.com. They have great Django/Python support and they have no issue with you using the aforementioned libraries. A: Take a look at Slice Host: They sell Xen-based virtualized server instances starting at $20.00 / month... We're just like you. Sick of oversold, underperforming, ancient hosting companies. We took matters into our own hands. We built a hosting company for people who know their stuff. Give us a box, give us bandwidth, give us performance and we get to work. Fast machines, RAID-10 drives, Tier-1 bandwidth and root access. Managed with a customized Xen VPS backend to ensure that your resources are protected and guaranteed. It's great for starting a project on and scaling it out WITHOUT incurring the costs of a managed provider or colo. A: No. If you need to pull lots of things down, App Engine isn't going to work so well.
You can use it as a front end by putting your data in their store after doing your offline preprocessing, but you can't do much in the ~1 second you have per request without doing some really crazy things. Your app would likely be better off on your own hosting. A: Pulling feeds or doing calculations won't be a problem. But you'll soon have to pay for your account. App Engine includes Django, except you'll need to work with some adaptors for the model part. It will surely save you from maintenance headaches.
{ "language": "en", "url": "https://stackoverflow.com/questions/135169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Merging contacts in SQL table without creating duplicate entries I have a table that holds only two columns - a ListID and PersonID. When a person is merged with another in the system, I want to update all references from the "source" person to be references to the "destination" person. Ideally, I would like to call something simple like UPDATE MailingListSubscription SET PersonID = @DestPerson WHERE PersonID = @SourcePerson However, if the destination person already exists in this table with the same ListID as the source person, a duplicate entry will be made. How can I perform this action without creating duplicate entries? ((ListID, PersonID) is the primary key) EDIT: Multiple ListIDs are used. If SourcePerson is assigned to ListIDs 1, 2, and 3, and DestinationPerson is assigned to ListIDs 3 and 4, then the end result needs to have four rows - DestinationPerson assigned to ListIDs 1, 2, 3, and 4. A: --out with the bad DELETE FROM MailingListSubscription WHERE PersonId = @SourcePerson and ListID in (SELECT ListID FROM MailingListSubscription WHERE PersonID = @DestPerson) --update the rest (good) UPDATE MailingListSubscription SET PersonId = @DestPerson WHERE PersonId = @SourcePerson A: I have to agree with David B here. Remove all the older stuff that shouldn't be there and then do your update. A: Actually, I think you should go back and reconsider your database design, as you really shouldn't be in circumstances where you're changing the primary key for a record as you're proposing to do - it implies that the PersonID column is not actually a suitable primary key in the first place. My guess is your PersonID is exposed to your users, they've renumbered their database for some reason and you're syncing the change back in. This is generally a poor idea as it breaks audit trails and temporal consistency.
In these circumstances, it's generally better to use your own non-changing primary key - usually an identity - and set up the PersonID that the users see as an attribute of that. It's extra work but will give you additional consistency and robustness in the long run. A good rule of thumb is that the primary key of a record should not be exposed to the users where possible, and you should only do so after careful consideration. OK, I confess to breaking this myself on numerous occasions but it's worth striving for where you can :-) A: First you should subscribe DestPerson to all lists that SourcePerson is subscribed to and DestPerson isn't already subscribed to. Then delete all of SourcePerson's subscriptions. This will work with multiple ListIDs. Insert into MailingListSubscription ( ListID, PersonID ) Select ListID, @DestPerson From MailingListSubscription as t1 Where PersonID = @SourcePerson and Not Exists ( Select * From MailingListSubscription as t2 Where PersonID = @DestPerson and t1.ListID = t2.ListID ) Delete From MailingListSubscription Where PersonID = @SourcePerson
{ "language": "en", "url": "https://stackoverflow.com/questions/135173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: MS Access Data Access Limitations I have a project right now where I'd like to be able to pull rows out of an Access database that a 3rd party product uses to store its information. There will likely be a small number of users hitting this database at the same time my "export" process does, so I'm a little concerned about data integrity and concurrent access. Will I likely run into problems with my .NET import process (using LINQ/ADO.NET/?) when it is trying to pull data out of the MDB at the same time someone else is saving a row? How does Access's locking work? A: There should be no problem. Problems can occur only on concurrent write operations. MS Access's locking is based on file locks in the .ldb file. The locks occur only on pages, not on the complete file. Because the locks are in the .ldb file and not in the .mdb file, there are no problems with parallel reading. A: In previous workings with Access (back when I was using 2003 for things) the only thing I ran into was that occasionally a read would lock rows just above and below the current read. However, I believe this may have been an isolated issue with our application. A: When you open the database, do not attempt to open it in read-only mode (although you might think it makes sense). When you are the first user in, Access opens the .mdb file in read-only mode and does not create an .ldb, forcing all subsequent users into read-only mode as well.
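To make the read side concrete, here is a minimal C# sketch of connecting to a shared MDB via ADO.NET. The file path and table name are placeholders, and the key assumption is the Jet OLE DB "Mode" property, which asks for fully shared access so other users can keep reading and writing while the export runs:

```csharp
using System;
using System.Data.OleDb; // requires a reference to System.Data (or the System.Data.OleDb package)

class MdbExport
{
    // "Share Deny None" requests fully shared access from the Jet provider,
    // so concurrent readers and writers are not blocked by the export.
    public static string BuildConnectionString(string mdbPath)
    {
        return "Provider=Microsoft.Jet.OLEDB.4.0;" +
               "Data Source=" + mdbPath + ";" +
               "Mode=Share Deny None;";
    }

    static void Main()
    {
        string path = @"C:\data\thirdparty.mdb"; // placeholder path
        string connStr = BuildConnectionString(path);
        Console.WriteLine(connStr);

        if (!System.IO.File.Exists(path)) return; // nothing to read on this machine

        using (OleDbConnection conn = new OleDbConnection(connStr))
        {
            conn.Open();
            using (OleDbCommand cmd = new OleDbCommand("SELECT * FROM SomeTable", conn))
            using (OleDbDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader[0]); // first column of each row
                }
            }
        } // connection (and any shared page locks) released here
    }
}
```

Note the Jet provider is Windows-only and 32-bit, so an export process using it typically needs to be compiled for x86.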
{ "language": "en", "url": "https://stackoverflow.com/questions/135183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Reading and posting to web pages using C# I have a project at work the requires me to be able to enter information into a web page, read the next page I get redirected to and then take further action. A simplified real-world example would be something like going to google.com, entering "Coding tricks" as search criteria, and reading the resulting page. Small coding examples like the ones linked to at http://www.csharp-station.com/HowTo/HttpWebFetch.aspx tell how to read a web page, but not how to interact with it by submitting information into a form and continuing on to the next page. For the record, I'm not building a malicious and/or spam related product. So how do I go read web pages that require a few steps of normal browsing to reach first? A: You can programmatically create an Http request and retrieve the response: string uri = "http://www.google.com/search"; HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri); request.Method = "POST"; request.ContentType = "application/x-www-form-urlencoded"; // encode the data to POST: string postData = "q=searchterm&hl=en"; byte[] encodedData = new ASCIIEncoding().GetBytes(postData); request.ContentLength = encodedData.Length; Stream requestStream = request.GetRequestStream(); requestStream.Write(encodedData, 0, encodedData.Length); // send the request and get the response using (HttpWebResponse response = (HttpWebResponse)request.GetResponse()) { // Do something with the response stream. As an example, we'll // stream the response to the console via a 256 character buffer using (StreamReader reader = new StreamReader(response.GetResponseStream())) { Char[] buffer = new Char[256]; int count = reader.Read(buffer, 0, 256); while (count > 0) { Console.WriteLine(new String(buffer, 0, count)); count = reader.Read(buffer, 0, 256); } } // reader is disposed here } // response is disposed here Of course, this code will return an error since Google uses GET, not POST, for search queries. 
This method will work if you are dealing with specific web pages, as the URLs and POST data are all basically hard-coded. If you needed something that was a little more dynamic, you'd have to: * *Capture the page *Strip out the form *Create a POST string based on the form fields FWIW, I think something like Perl or Python might be better suited to that sort of task. edit: x-www-form-urlencoded A: You might try Selenium. Record the actions in Firefox using Selenium IDE, save the script in C# format, then play them back using the Selenium RC C# wrapper. As others have mentioned you could also use System.Net.HttpWebRequest or System.Net.WebClient. If this is a desktop application see also System.Windows.Forms.WebBrowser. Addendum: Similar to Selenium IDE and Selenium RC, which are Java-based, WatiN Test Recorder and WatiN are .NET-based. A: What you need to do is keep retrieving and analyzing the HTML source for each page in the chain. For each page, you need to figure out what the form submission will look like and send a request that will match it to get the next page in the chain. What I do is build a custom class that wraps System.Net.HttpWebRequest/HttpWebResponse, so retrieving pages is as simple as using System.Net.WebClient. However, my custom class also keeps the same cookie container across requests and makes it a little easier to send POST data, customize the user agent, etc. A: Depending on how the website works, you can either manipulate the URL to perform what you want, e.g. to search for the word "beatles" you could just open a request to google.com?q=beatles and then just read the results. Alternatively, if the website does not use querystring values (URL) to process page actions, then you will need to work on a web request which posts the required values to the website instead. Search in Google for working with WebRequest and WebResponse.
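For the "create a POST string based on the form fields" step, a small helper like this can do the encoding — the field names here are invented for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Text;
using System.Web; // for HttpUtility.UrlEncode

class PostDataBuilder
{
    // Turn a set of form fields into an x-www-form-urlencoded string,
    // suitable for writing into HttpWebRequest.GetRequestStream().
    public static string BuildPostData(IDictionary<string, string> fields)
    {
        StringBuilder sb = new StringBuilder();
        foreach (KeyValuePair<string, string> field in fields)
        {
            if (sb.Length > 0) sb.Append('&');
            sb.Append(HttpUtility.UrlEncode(field.Key));
            sb.Append('=');
            sb.Append(HttpUtility.UrlEncode(field.Value));
        }
        return sb.ToString();
    }

    static void Main()
    {
        var fields = new Dictionary<string, string>
        {
            { "q", "coding tricks" }, // hypothetical form fields
            { "hl", "en" }
        };
        // Produces: q=coding+tricks&hl=en
        Console.WriteLine(BuildPostData(fields));
    }
}
```

In practice you would populate the dictionary from the form fields you scraped out of the previous page, then POST the resulting string with the ContentType set to application/x-www-form-urlencoded as in the example above.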
{ "language": "en", "url": "https://stackoverflow.com/questions/135186", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Source Control on the IBM i (iSeries) On the web side we are working on getting source control. Now, I want to see what can be done for the iSeries side. What is your favorite source control application for iSeries and why? I am looking for low-cost if possible. A: The two most common source control packages for the iSeries are Turnover and Aldon. Neither are low cost but integrate well with the iSeries. I prefer Turnover. It flawlessly handles production installs to both a local and remote iSeries. A: Don't forget about MKS Implementer ;) A: If you're using the WebSphere Development Studio or Rational from a PC then any source control system that will play nicely with that is an option if you don't want to shell out for the native iSeries one. A: We use Aldon for our COBOL, CL, DDS code and it does a really good job. I don't know about the cost of it. There's a plug-in for the WebSphere Development Studio. Just about any source control option could handle archiving/versioning the source code, but Aldon excels at handling the compilation and deployment from dev to QA to production environments. It can keep different library lists for each, for example, and change them dynamically for compiling in different environments. It will even push code to other LPARs, if your dev and prod environments are not on the same LPAR.
{ "language": "en", "url": "https://stackoverflow.com/questions/135188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: What's the difference between JavaScript and JScript? I have always wondered what the heck the difference is between JScript and JavaScript. A: Just different names for what is really ECMAScript. John Resig has a good explanation. Here's the full version breakdown: * *IE 6-7 support JScript 5 (which is equivalent to ECMAScript 3, JavaScript 1.5) *IE 8 supports JScript 6 (which is equivalent to ECMAScript 3, JavaScript 1.5 - more bug fixes over JScript 5) *Firefox 1.0 supports JavaScript 1.5 (ECMAScript 3 equivalent) *Firefox 1.5 supports JavaScript 1.6 (1.5 + Array Extras + E4X + misc.) *Firefox 2.0 supports JavaScript 1.7 (1.6 + Generator + Iterators + let + misc.) *Firefox 3.0 supports JavaScript 1.8 (1.7 + Generator Expressions + Expression Closures + misc.) *The next version of Firefox will support JavaScript 1.9 (1.8 + To be determined) *Opera supports a language that is equivalent to ECMAScript 3 + Getters and Setters + misc. *Safari supports a language that is equivalent to ECMAScript 3 + Getters and Setters + misc. A: JScript is Microsoft's implementation of the ECMAScript specification. JavaScript is the Mozilla implementation of the specification. A: JavaScript, the language, came first, from Netscape. Microsoft reverse-engineered JavaScript and called it JScript to avoid trademark issues with Sun. (Netscape and Sun were partnered up at the time, so this was less of an issue.) The languages are identical; both are dialects of ECMAScript, the after-the-fact standard. Although the languages are identical, since JScript runs in Internet Explorer, it has access to different objects exposed by the browser (such as ActiveXObject) A: JScript is the Microsoft implementation of JavaScript A: According to this article: * *JavaScript is a scripting language developed by Netscape Communications designed for developing client and server Internet applications. Netscape Navigator is designed to interpret JavaScript embedded into Web pages.
JavaScript is independent of Sun Microsystems' Java language. *Microsoft JScript is an open implementation of Netscape's JavaScript. JScript is a high-performance scripting language designed to create active online content for the World Wide Web. JScript allows developers to link and automate a wide variety of objects in Web pages, including ActiveX controls and Java programs. Microsoft Internet Explorer is designed to interpret JScript embedded into Web pages. A: A long time ago, all browser vendors were making JavaScript engines for their browsers, and only they and God knew what was happening inside them. One beautiful day, Ecma International came and said: let's make engines based on a common standard, let's make something general to make life more easy and fun, and they made that standard. Since then, all browser vendors have made their JavaScript engines based on the ECMAScript core (standard). For example, Google Chrome uses the V8 engine, and it is open source. You can download it and see how the C++ program translates a JavaScript command like 'print' to machine code. Internet Explorer uses the JScript (Chakra) engine for its browser, others do likewise, and they all use the common core. A: As far as I can tell, two things: * *ActiveXObject constructor *The idiom f(x) = y, which is roughly equivalent to f[x] = y. A: There are some code differences to be aware of. A negative first parameter to substr is not supported, e.g. in JavaScript: "string".substr(-1) returns "g", whereas in JScript: "string".substr(-1) returns "string" It's possible to do "string"[0] to get "s" in JavaScript, but JScript doesn't support such a construct. (Actually, only modern browsers appear to support the "string"[0] construct.)
In fact the name "JavaScript" is often used to refer to ECMAScript or JScript. Microsoft uses the name JScript for its implementation to avoid trademark issues (JavaScript is a trademark of Oracle Corporation). A: Wikipedia has this to say about the differences. In general JScript is an ActiveX scripting language that is probably interpreted as JavaScript by non-IE browsers. A: JScript and JavaScript are totally different scripting languages. JavaScript runs in the browser, but JScript can use ActiveX objects and has almost total control over your operating system once you've run it: it can delete files, run or write files, download files from the web (via PowerShell), run cmd commands, etc. JScript is almost the same thing as VBScript, but has different syntax. A: JScript is Microsoft's equivalent of JavaScript. Java is an Oracle product and used to be a Sun product. Oracle bought Sun. JavaScript + Microsoft = JScript A: JScript is a .NET language similar to C#, with the same capabilities and access to all the .NET functions. JavaScript is run on the ASP Classic server. Use Classic ASP to run the same JavaScript that you have on the client (excluding HTML5 capabilities). I only have one set of code this way for most of my code. I run .ASPX JScript when I require Image and Binary File functions (among many others) that are not in Classic ASP. This code is unique to the server, but extremely powerful.
{ "language": "en", "url": "https://stackoverflow.com/questions/135203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "124" }
Q: Difference between ref and out parameters in .NET What is the difference between ref and out parameters in .NET? What are the situations where one can be more useful than the other? What would be a code snippet where one can be used and another can't? A: Ref and Out Parameters: The out and the ref parameters are used to return values in the same variable that you pass as an argument of a method. Both of these parameters are very useful when your method needs to return more than one value. You must assign a value to an out parameter in the callee's method body, otherwise the method won't compile. Ref Parameter : It has to be initialized before being passed to the method. The ref keyword on a method parameter causes the method to refer to the same variable that was passed as an input parameter. If you make any changes to the variable, they will be reflected in the caller's variable. int sampleData = 0; sampleMethod(ref sampleData); Ex of Ref Parameter public static void Main() { int i = 3; // Variable needs to be initialized sampleMethod(ref i ); } public static void sampleMethod(ref int sampleData) { sampleData++; } Out Parameter : It is not necessary to initialize it before passing it to the method. The out parameter can be used to return values in the same variable passed as a parameter of the method. Any changes made to the parameter will be reflected in the variable. int sampleData; sampleMethod(out sampleData); Ex of Out Parameter public static void Main() { int i, j; // Variable need not be initialized sampleMethod(out i, out j); } public static int sampleMethod(out int sampleData1, out int sampleData2) { sampleData1 = 10; sampleData2 = 20; return 0; } A: ref and out both allow the called method to modify a parameter. The difference between them is what happens before you make the call. * *ref means that the parameter has a value on it before going into the function. The called function can read and/or change the value at any time.
The parameter goes in, then comes out *out means that the parameter has no official value before going into the function. The called function must initialize it. The parameter only goes out Here's my favorite way to look at it: ref is to pass variables by reference. out is to declare a secondary return value for the function. It's like if you could write this: // This is not C# public (bool, string) GetWebThing(string name, ref Buffer paramBuffer); // This is C# public bool GetWebThing(string name, ref Buffer paramBuffer, out string actualUrl); Here's a more detailed list of the effects of each alternative: Before calling the method: ref: The caller must set the value of the parameter before passing it to the called method. out: The caller method is not required to set the value of the argument before calling the method. Most likely, you shouldn't. In fact, any current value is discarded. During the call: ref: The called method can read the argument at any time. out: The called method must initialize the parameter before reading it. Remoted calls: ref: The current value is marshalled to the remote call. Extra performance cost. out: Nothing is passed to the remote call. Faster. Technically speaking, you could always use ref in place of out, but out allows you to be more precise about the meaning of the argument, and sometimes it can be a lot more efficient. A: out: In C#, a method can return only one value. If you would like to return more than one value, you can use the out keyword. The out modifier returns a value by reference. The simplest answer is that the keyword "out" is used to get a value from the method. * *You don't need to initialize the value in the calling function. *You must assign the value in the called function, otherwise the compiler will report an error. ref: In C#, when you pass a value type such as int, float, double etc. as an argument to a method parameter, it is passed by value.
Therefore, if you modify the parameter value, it does not affect the argument in the method call. But if you mark the parameter with the "ref" keyword, changes will be reflected in the actual variable. * *You need to initialize the variable before you call the function. *It's not mandatory to assign any value to the ref parameter in the method. If you don't change the value, what is the need to mark it as "ref"? A: They're pretty much the same - the only difference is that a variable you pass as an out parameter doesn't need to be initialized, but a variable passed as a ref parameter has to be set to something. int x; Foo(out x); // OK int y; Foo(ref y); // Error: y should be initialized before calling the method Ref parameters are for data that might be modified; out parameters are for data that's an additional output for the function (e.g. int.TryParse) when the function is already using the return value for something. A: Ref parameters aren't required to be set in the function, whereas out parameters must be bound to a value before exiting the function. Variables passed as out may also be passed to a function without being initialized. A: out specifies that the parameter is an output parameter, i.e. it has no value until it is explicitly set by the method. ref specifies that the value is a reference that has a value, and whose value you can change inside the method. A: Example for OUT : The variable gets its value assigned inside the method. Later the same value is returned to the main method. namespace outreftry { class outref { static void Main(string[] args) { yyy a = new yyy(); // You can try giving int i = 100, but it is useless as that value is not passed into // the method. Only the variable goes into the method, gets its // value changed, and comes out.
int i; a.abc(out i); System.Console.WriteLine(i); } } class yyy { public void abc(out int i) { i = 10; } } } Output: 10 =============================================== Example for Ref : The variable should be initialized before going into the method. Later the same value, or a modified value, will be returned to the main method. namespace outreftry { class outref { static void Main(string[] args) { yyy a = new yyy(); int i = 0; a.abc(ref i); System.Console.WriteLine(i); } } class yyy { public void abc(ref int i) { System.Console.WriteLine(i); i = 10; } } } Output: 0 10 ================================= Hope it's clear now. A: Why does C# have both 'ref' and 'out'? The caller of a method which takes an out parameter is not required to assign to the variable passed as the out parameter prior to the call; however, the callee is required to assign to the out parameter before returning. In contrast, ref parameters are considered initially assigned by the caller. As such, the callee is not required to assign to the ref parameter before use. Ref parameters are passed both into and out of a method. So, out means out, while ref is for in and out. These correspond closely to the [out] and [in,out] parameters of COM interfaces, the advantages of out parameters being that callers need not pass a pre-allocated object in cases where it is not needed by the method being called - this avoids both the cost of allocation, and any cost that might be associated with marshaling (more likely with COM, but not uncommon in .NET). A: * *A ref variable needs to be initialized before passing it in. *An out variable needs to be set in your function implementation *out parameters can be thought of as additional return variables (not input) *ref parameters can be thought of as both input and output variables. A: out parameters are initialized by the method called, ref parameters are initialized before calling the method.
Therefore, out parameters are used when you just need to get a secondary return value; ref parameters are used to pass a value in and potentially return a change to that value (secondarily to the main return value). A: The ref keyword is used to pass values by reference. (This does not preclude the passed values being value types or reference types). Output parameters specified with the out keyword are for returning values from a method. One key difference in the code is that you must set the value of an output parameter within the method. This is not the case for ref parameters. For more details look at http://www.blackwasp.co.uk/CSharpMethodParameters.aspx A: An out parameter is a ref parameter with a special Out() attribute added. If a parameter to a C# method is declared as out, the compiler will require that the parameter be written before it can be read and before the method can return. If C# calls a method whose parameter includes an Out() attribute, the compiler will, for purposes of deciding whether to report "undefined variable" errors, pretend that the variable is written immediately before calling the method. Note that because other .NET languages do not attach the same meaning to the Out() attribute, it is possible that calling a routine with an out parameter will leave the variable in question unaffected. If a variable is used as an out parameter before it is definitely assigned, the C# compiler will generate code to ensure that it gets cleared at some point before it is used, but if such a variable leaves and re-enters scope, there's no guarantee that it will be cleared again. A: ref will probably choke on null since it presumably expects to be modifying an existing object. out expects null, since it's returning a new object. A: The article The out and ref parameter in C# has some good examples. The basic difference outlined is that out parameters don't need to be initialized when passed in, while ref parameters do.
A: out and ref are exactly the same with the exception that out variables don't have to be initialized before sending them into the abyss. I'm not that smart; I cribbed that from the MSDN library :). To be more explicit about their use, however, the meaning of the modifier is that if you change the reference of that variable in your code, out and ref will cause your calling variable to change reference as well. In the code below, the ceo variable will be a reference to the newGuy once it returns from the call to doStuff. If it weren't for ref (or out) the reference wouldn't be changed. private void newEmployee() { Person ceo = Person.FindCEO(); doStuff(ref ceo); } private void doStuff(ref Person employee) { Person newGuy = new Person(); employee = newGuy; } A: They are subtly different. An out parameter does not need to be initialized by the caller before being passed to the method. Therefore, any method with an out parameter * *Cannot read the parameter before assigning a value to it *Must assign a value to it before returning This is used for a method which needs to overwrite its argument regardless of its previous value. A ref parameter must be initialized by the caller before passing it to the method. Therefore, any method with a ref parameter * *Can inspect the value before assigning it *Can return the original value, untouched This is used for a method which must (e.g.) inspect its value and validate it or normalize it. A: out has gotten a new, more succinct syntax in C# 7 https://learn.microsoft.com/en-us/dotnet/articles/csharp/whats-new/csharp-7#more-expression-bodied-members and even more exciting are the C# 7 tuple enhancements, which are a more elegant choice than using ref and out IMHO.
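A quick sketch of what that newer syntax looks like — this assumes a C# 7 compiler (and on older frameworks, the System.ValueTuple package); the method and names are invented for illustration:

```csharp
using System;

class Demo
{
    // C# 7 tuple return: no out parameter needed for the secondary result.
    public static (bool ok, int value) TryParsePositive(string s)
    {
        if (int.TryParse(s, out int n) && n > 0) // C# 7 inline out-variable declaration
            return (true, n);
        return (false, 0);
    }

    static void Main()
    {
        var result = TryParsePositive("42");
        Console.WriteLine(result.ok);    // True
        Console.WriteLine(result.value); // 42
    }
}
```

The inline `out int n` removes the separate declaration line that every pre-C# 7 TryParse call needed, and the tuple return lets the caller get both results without any out parameter at all.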
{ "language": "en", "url": "https://stackoverflow.com/questions/135234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "412" }
Q: What's the best ASP.NET file type for a (non-SOAP, non-WSDL) web services project? Haven't done ASP.NET development since VS 2003, so I'd like to save some time and learn from others' mistakes. Writing a web services app, but not a WSDL/SOAP/etc. -- more like REST + XML. Which of the many "New Item" options (Web Form, Generic Handler, ASP.NET Handler, etc.) makes the most sense if I want to handle different HTTP verbs, through the same URI, separately? In a perfect world, I'd like the dispatching done declaratively in the code rather than via web.config -- but if I'm making life too hard that way, I'm open to change. A: If you're not using the built-in web services (.asmx), then you should probably use a generic handler (.ashx). A: This is an idea I've been playing with... use at your own risk; the code is in my "Sandbox" folder ;) I think I want to move away from using reflection to determine which method to run; it might be faster to register a delegate in a dictionary using the HttpVerb as a key. Anyway, this code is provided with no warranty, blah, blah, blah...
Verbs to use with REST Service public enum HttpVerb { GET, POST, PUT, DELETE } Attribute to mark methods on your service [AttributeUsage(AttributeTargets.Method, AllowMultiple=false, Inherited=false)] public class RestMethodAttribute: Attribute { private HttpVerb _verb; public RestMethodAttribute(HttpVerb verb) { _verb = verb; } public HttpVerb Verb { get { return _verb; } } } Base class for a Rest Service public class RestService: IHttpHandler { private readonly bool _isReusable = true; protected HttpContext _context; private IDictionary<HttpVerb, MethodInfo> _methods; public void ProcessRequest(HttpContext context) { _context = context; HttpVerb verb = (HttpVerb)Enum.Parse(typeof (HttpVerb), context.Request.HttpMethod); MethodInfo method = Methods[verb]; method.Invoke(this, null); } private IDictionary<HttpVerb, MethodInfo> Methods { get { if(_methods == null) { _methods = new Dictionary<HttpVerb, MethodInfo>(); BuildMethodsMap(); } return _methods; } } private void BuildMethodsMap() { Type serviceType = this.GetType(); MethodInfo[] methods = serviceType.GetMethods(BindingFlags.Instance | BindingFlags.DeclaredOnly | BindingFlags.Public); foreach (MethodInfo info in methods) { RestMethodAttribute[] attribs = info.GetCustomAttributes(typeof(RestMethodAttribute), false) as RestMethodAttribute[]; if(attribs == null || attribs.Length == 0) continue; HttpVerb verb = attribs[0].Verb; Methods.Add(verb, info); } } public bool IsReusable { get { return _isReusable; } } } Sample REST Service public class MyRestService: RestService { [RestMethod(HttpVerb.GET)] public void HelloWorld() { _context.Response.Write("Hello World"); _context.Response.End(); } } A: Probably MVC if you need REST. A: I agree with the .ashx; it gives you the most control. You could also go more complex and create a custom HTTP handler. That way you can intercept any extension you decide. Of course you could add an HTTP module and rewrite any request to a generic .ashx handler.
{ "language": "en", "url": "https://stackoverflow.com/questions/135243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Reading 32bit Packed Binary Data On 64bit System I'm attempting to write a Python C extension that reads packed binary data (it is stored as structs of structs) and then parses it out into Python objects. Everything works as expected on a 32 bit machine (the binary files are always written on 32bit architecture), but not on a 64 bit box. Is there a "preferred" way of doing this? It would be a lot of code to post but as an example: struct { WORD version; BOOL upgrade; time_t time1; time_t time2; } apparms; FILE *fp; fp = fopen(filePath, "r+b"); fread(&apparms, sizeof(apparms), 1, fp); return Py_BuildValue("{s:i,s:l,s:l}", "sysVersion",apparms.version, "powerFailTime", apparms.time1, "normKitExpDate", apparms.time2 ); Now on a 32 bit system this works great, but on a 64 bit my time_t sizes are different (32-bit vs 64-bit longs). Damn, you people are fast. Patrick, I originally started using the struct package but found it just way too slow for my needs. Plus I was looking for an excuse to write a Python extension. I know this is a stupid question but what types do I need to watch out for? Thanks. A: Explicitly specify that your data types (e.g. integers) are 32-bit. Otherwise if you have two integers next to each other when you read them they will be read as one 64-bit integer. When you are dealing with cross-platform issues, the two main things to watch out for are: * *Bitness. If your packed data is written with 32-bit ints, then all of your code must explicitly specify 32-bit ints when reading and writing. *Byte order. If you move your code from Intel chips to PPC or SPARC, your byte order will be wrong. You will have to import your data and then byte-flip it so that it matches up with the current architecture. Otherwise 12 (0x0000000C) will be read as 201326592 (0x0C000000). Hopefully this helps. A: The 'struct' module should be able to do this, although alignment of structs in the middle of the data is always an issue.
It's not very hard to get it right, however: find out (once) what boundary the structs-in-structs align to, then pad (manually, with the 'x' specifier) to that boundary. You can double-check your padding by comparing struct.calcsize() with your actual data. It's certainly easier than writing a C extension for it. In order to keep using Py_BuildValue() like that, you have two options. You can determine the size of time_t at compile time (in terms of fundamental types, so 'an int' or 'a long' or 'an ssize_t') and then use the right format character to Py_BuildValue -- 'i' for an int, 'l' for a long, 'n' for an ssize_t. Or you can use PyInt_FromSsize_t() manually, in which case the compiler does the upcasting for you, and then use the 'O' format character to pass the result to Py_BuildValue. A: You need to make sure you're using architecture-independent members for your struct. For instance an int may be 32 bits on one architecture and 64 bits on another. As others have suggested, use the int32_t style types instead. If your struct contains unaligned members, you may need to deal with padding added by the compiler too. Another common problem with cross-architecture data is endianness. Intel i386 architecture is little-endian, but if you're reading on a completely different machine (e.g. an Alpha or Sparc), you'll have to worry about this too. The Python struct module deals with both these situations, using the prefix passed as part of the format string. * *@ - Use native size, endianness and alignment. i = sizeof(int), l = sizeof(long) *= - Use native endianness, but standard sizes and alignment (i = 32 bits, l = 32 bits; use q for 64 bits) *< - Little-endian standard sizes/alignment *> - Big-endian standard sizes/alignment In general, if the data passes off your machine, you should nail down the endianness and the size/padding format to something specific, i.e. use "<" or ">" as your format. If you want to handle this in your C extension, you may need to add some code to handle it.
A: What's your code for reading the binary data? Make sure you're copying the data into properly-sized types like int32_t instead of just int. A: Why aren't you using the struct package?
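To make the struct-module suggestion concrete, here is a sketch of reading the question's record portably. The '<H2xiii' format is an assumption about the on-disk layout (32-bit MSVC with default packing: 2 padding bytes after the WORD, a 4-byte BOOL, and 32-bit time_t fields) -- verify it against your real files with struct.calcsize():

```python
import struct

# Assumed on-disk layout (32-bit writer, default packing):
# WORD version; 2 pad bytes; BOOL upgrade (int32); time_t time1, time2 (int32).
APPARMS = struct.Struct('<H2xiii')  # '<' pins little-endian, fixed field sizes

def parse_apparms(raw):
    """Unpack one apparms record into a dict, mirroring the Py_BuildValue call."""
    version, upgrade, time1, time2 = APPARMS.unpack(raw)
    return {'sysVersion': version,
            'upgrade': bool(upgrade),
            'powerFailTime': time1,
            'normKitExpDate': time2}

# Round-trip check against a synthetic record:
blob = struct.pack('<H2xiii', 3, 1, 1222000000, 1253536000)
assert APPARMS.size == 16  # same size on 32- and 64-bit hosts
print(parse_apparms(blob))
```

Because the format string pins both byte order and field sizes, the same code reads the 32-bit files identically on a 64-bit host.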
{ "language": "en", "url": "https://stackoverflow.com/questions/135246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to manipulate the LargeChange and SmallChange on a scrollbar to work I have a number of panels in a single window in a C# application and I created 2 scrollbars, one horizontal and one vertical. This is what I currently have: picture with 2 scrollbars http://www.simnet.is/elinnils52/scrollbar.jpg I have one variable, which is the total height all the items take & need. Here is my code on scroll change: for (int i = 0; i < this._splitMainView.Panel2.Controls.Count; i++) { this._splitMainView.Panel2.Controls[i].Location = new Point( 3 - _scrollBarX.Value, 3 + (132 + 6) * (i - 2) - _scrollBarY.Value); this._splitMainView.Panel2.Controls[i].Refresh(); } The scrollbar maximum is the total of all the containers' heights, the space in between, and a few pixels extra. As you can see from the picture, it doesn't look good. Even if the maximum in this case is only around 50 - 100 pixels it still looks like it's a thousand pages long. When I change the SmallChange and LargeChange, the scrollbar thumb itself lengthens but then it won't reach the maximum pixels. It will be able to get almost to the end (depending on the SmallChange and LargeChange values) and leave around 5 - 29 px left. And as everyone knows, seeing half is not good. Does anyone know how to overcome this obstacle or a better way to implement it? A: Why not just make the maximum value of the scrollbar the overflow (visible area height - panel height)? Then just set the top of the panel to the value of the scrollbar * -1. A: Is there a reason not to use the AutoScroll property?
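A note on the 5 - 29 px shortfall described above: with the WinForms ScrollBar, the highest value reachable through the UI is Maximum - LargeChange + 1, so the usual fix is to set Maximum to the overflow plus LargeChange - 1. A quick sketch of the arithmetic (in Python purely for illustration; the names are hypothetical helpers, not WinForms API):

```python
def reachable_max(maximum, large_change):
    """Highest value a WinForms-style scrollbar can reach via the UI."""
    return maximum - large_change + 1

def maximum_for(overflow, large_change):
    """Maximum to assign so the user can actually scroll to `overflow`."""
    return overflow + large_change - 1

overflow = 100      # content height minus visible height, in px
large_change = 30
m = maximum_for(overflow, large_change)
assert reachable_max(m, large_change) == overflow
print(m)  # 129
```

With Maximum set this way the thumb length still reflects LargeChange, but dragging it to the bottom now exposes the last pixel of content instead of stopping short.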
{ "language": "en", "url": "https://stackoverflow.com/questions/135254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to preserve stack space with good design? I'm programming in C for a RAM-limited embedded microcontroller with an RTOS. I regularly break my code into short functions, but every function call requires more stack memory. Every task needs its own stack, and this is one of the significant memory consumers in the project. Is there an alternative that keeps the code well organized and readable but still preserves memory? A: In the event you can spare a lot of main memory but have only a small shred of stack, I suggest evaluating static allocations. In C, all variables declared inside a function are "automatically managed", which means they're allocated on the stack. Qualifying the declarations as "static" stores them in main memory instead of on the stack. They basically behave like global variables but still allow you to avoid the bad habits that come with overusing globals. You can make a good case for declaring large, long-lived buffers/variables as static to reduce pressure on the stack. Beware that this doesn't work well/at all if your application is multithreaded or if you use recursion. A: Turn on optimization, specifically aggressive inlining. The compiler should be able to inline methods to minimize calls. Depending on the compiler and the optimization switches you use, marking some methods as inline may help (or it may be ignored). With GCC, try adding the "-finline-functions" (or -O3) flag and possibly the "-finline-limit=n" flag. A: Try to make the call stack flatter, so instead of a() calling b() which calls c() which calls d(), have a() call b(), c(), and d() itself. If a function is only referenced once, mark it inline (assuming your compiler supports this). A: There are 3 components to your stack usage: * *Function call return addresses *Function call parameters *Automatic (local) variables The key to minimizing your stack usage is to minimize parameter passing and automatic variables.
The space consumption of the actual function call itself is rather minimal. Parameters One way to address the parameter issue is to pass a structure (via pointer) instead of a large number of parameters. void foo(int a, int b, int c, int d) { ... bar(a, b); } do this instead: struct my_params { int a; int b; int c; int d; }; void foo(struct my_params* p) { ... bar(p); } This strategy is good if you pass down a lot of parameters. If the parameters are all different, then it might not work well for you. You would end up with a large structure being passed around that contains many different parameters. Automatic Variables (locals) This tends to be the biggest consumer of stack space. * *Arrays are the killer. Don't define arrays in your local functions! *Minimize the number of local variables. *Use the smallest type necessary. *If re-entrancy is not an issue, you can use module static variables. Keep in mind that if you're simply moving all your local variables from local scope to module scope, you have NOT saved any space. You traded stack space for data segment space. Some RTOSes support thread-local storage, which allocates "global" storage on a per-thread basis. This might allow you to have multiple independent global variables on a per-task basis, but this will make your code not as straightforward. A: One trick that I read somewhere in order to evaluate the stack requirements of the code in an embedded setup is to fill the stack space at the onset with a known pattern (0xDEAD being my favorite) and let the system run for a while. After a normal run, read the stack space and see how much of the stack space has not been replaced during the course of operation. Design so as to leave at least 150% of that, to tackle all obscure code paths that might not have been exercised. A: Can you replace some of your local variables by globals? Arrays in particular can eat up stack.
If the situation allows you to share some globals among functions, there is a chance you can reduce your memory footprint. The trade-off cost is increased complexity, and greater risk of unwanted side effects between functions, versus a possibly smaller memory footprint. What sort of variables do you have in your functions? What sizes and limits are we talking about? A: Depending on your compiler, and how aggressive your optimisation options are, you will have stack usage for every function call you make. So to start with you will probably need to limit the depth of your function calls. Some compilers do use jumps rather than calls for simple functions, which will reduce stack usage. Obviously you can do the same thing by using, say, an assembler macro to jump to your functions rather than a direct function call. As mentioned in other answers, inlining is one option available, although that does come at the cost of greater code size. The other area that eats stack is the local parameters. This area you do have some control over. Using (file-level) statics will avoid stack allocation at the cost of your static RAM allocation. Globals likewise. In (truly) extreme cases you can come up with a convention for functions that uses a fixed number of global variables as temporary storage in lieu of locals on the stack. The tricky bit is making sure that none of the functions that use the same globals ever get called at the same time (hence the convention). A: If you need to start preserving stack space you should either get a better compiler or more memory. Your software will typically grow (new features, ...), so if you have to start a project by thinking about how to preserve stack space it's doomed from the beginning. A: Yes, an RTOS can really eat up RAM for task stack usage. My experience is that as a new user of an RTOS, there's a tendency to use more tasks than necessary. For an embedded system using an RTOS, RAM can be a precious commodity.
To preserve RAM, for simple features it can still be effective to implement several features within one task, running in round-robin fashion with a cooperative multitasking design, thus reducing the total number of tasks. A: I think you may be imagining a problem which doesn't exist here. Most compilers don't actually do anything when they "allocate" automatic variables on the stack. The stack is allocated before "main()" is executed. When you call function b() from function a(), the address of the storage area immediately after the last variable used by a() is passed to b(). This becomes the start of b()'s stack; if b() then calls function c(), then c()'s stack starts after the last automatic variable defined by b(). Note that the stack memory is already there and allocated, that no initialisation takes place, and that the only processing involved is passing a stack pointer. The only time this becomes a problem would be where all three functions use large amounts of storage; the stack then has to accommodate the memory of all three functions. Try to keep functions which allocate large amounts of storage at the bottom of the call stack, i.e. don't call another function from them. Another trick for memory-constrained systems is to split off the memory-hogging parts of a function into separate self-contained functions.
{ "language": "en", "url": "https://stackoverflow.com/questions/135262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Can I run ASP.NET 2.0 and 3.5 code on the same website? Can I run ASP.NET 2.0 and 3.5 code on the same website? ...or, do I need to separate them by applications, and/or servers? A: As long as your server is running 3.5, you can run both. A: As far as IIS is concerned, 3.5 and 2.0 are the same. What you have to be careful about is not mixing 1.1 and 2.0 in the same app pool. A: .NET 3.5 is 2.0 with a few extra libraries. So the answer is yes, you can run them on the same web site. In fact you cannot even set a web application to run under 3.5. It just runs under 2.0. You can check the ASP.NET tab in the properties of an IIS site to see that there isn't even an option to run your application under 3.5. A: Yes you can, without issue. A: .NET 3.5 is an extension to the .NET 2.0 framework. After you upgrade to the .NET 3.5 framework you can run applications that use all of the .NET 2.0/3.0 and 3.5 frameworks. A: You can run code in .NET 2.0 and .NET 3.5 on the same server, but you must have at least one application pool per framework version. The only thing you have to watch is not to mix a 2.0 app and a 3.5 app in the same pool. Rationale: only one framework can be loaded for each process, and each application spawns its own process(es). A: ASP.NET 3.5 is still running on the .NET 2.0 CLR; if you go into IIS you'll see that you can only pick 2.0 or 1.1. So the answer is, yes... ASP.NET 3.5 is basically just extra assemblies in the GAC. .NET 3.5 was just modifications to the compilers themselves, and the libraries, not the CLR. A: You can run them both at the same time as long as .NET 3.5 is installed. A: I would just convert all of the code to 3.5, and it should work perfectly if you have 3.5 installed on the box. Also note that VS 2008 does multi-targeting, and a lot of the features that are new in 3.5 are actually features of the compiler, not the framework itself. So you can target the 2.0 framework and still get many of the new features of 3.5.
{ "language": "en", "url": "https://stackoverflow.com/questions/135269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How does any application (chrome, flash, etc) get a time resolution better than the system time resolution? This article on Microsoft's TechNet site supplies an exe that will calculate your Windows machine's minimum time resolution - this should be the smallest "tick" available to any application on that machine: http://technet.microsoft.com/en-us/sysinternals/bb897568.aspx The result of running this app on my current box is 15.625 ms. I have also run tests against Internet Explorer and gotten the exact same resolution from the Date() javascript function. What is confusing is that the SAME test I ran against IE gives back much finer resolution for Google's Chrome browser (resolution of 1ms) and for a flash movie running in IE (1ms). Can anyone explain how any application can get a clock resolution better than the machine's clock? If so, is there some way I can get a consistently better resolution in browsers other than Chrome (without using flash)? The first answer below leads to two other questions: * *How does a multi-media timer get times between system clock ticks? I imagine the system clock as an analog watch with a ticking hand, each tick being 15ms. How are times between ticks measured? *Are multimedia timers available to browsers, especially Internet Explorer? Can I access one with C# or Javascript without having to push software to a user's browser/machine? A: You can get down to 1 ms with multimedia timers and even further with QueryPerformanceCounter. See also GetLocalTime() API time resolution. EDIT: Partial answer to the new subquestion ... System time resolution was around 15 ms on the Windows 3 and 95 architecture. On NT (and successors) you can get better resolution from the hardware abstraction layer (HAL). The QueryPerformanceCounter counts elapsed time, not CPU cycles article, written by Raymond Chen, may give you some additional insights.
As for the browsers - I have no idea what they are using for timers and how they interact with the OS. A: Look at the timeBeginPeriod API. From MSDN: "This function affects a global Windows setting. Windows uses the lowest value (that is, highest resolution) requested by any process." http://msdn.microsoft.com/en-us/library/ms713413(VS.85).aspx (Markdown didn't like parens in the URL) See "Inside Windows NT High Resolution Timers" referenced from the link you posted. A: The APIC on the processor runs at bus speed and has a timer. They may be using that instead of the system time. (Or they might just be giving a bunch of precision that isn't there.) This description of the Local APIC mentions using it as a timer. (It may also be the case that there is some performance counter they are using. That actually seems more likely since a device driver would be needed to program the APIC.)
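For a rough cross-check from scripting land, you can probe a clock's observable granularity by polling it in a tight loop. This is a portable sketch; on Windows, Python's time.perf_counter is backed by QueryPerformanceCounter, while time.time uses the coarser system clock, so on an older box you would expect the first number to land near 0.015625 and the second to be orders of magnitude smaller:

```python
import time

def granularity(clock, samples=5000):
    """Smallest nonzero step observed while polling `clock` in a tight loop."""
    deltas = []
    last = clock()
    for _ in range(samples):
        now = clock()
        if now != last:          # clock advanced: record the step we saw
            deltas.append(now - last)
            last = now
    return min(deltas) if deltas else float('inf')

print('time.time        :', granularity(time.time))
print('time.perf_counter:', granularity(time.perf_counter))
```

The measured step is an upper bound on the true resolution (loop overhead inflates it), but it is plenty to tell a ~15.6 ms system tick apart from a high-resolution performance counter.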
{ "language": "en", "url": "https://stackoverflow.com/questions/135274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to programmatically associate a name like COM51 to a physical serial port in Microsoft Windows? How to programmatically associate a name like COM51 to a physical serial port in Microsoft Windows? To manually perform the operation I can follow these steps: Open Device Manager with devmgmt.msc. Double-click Ports (COM & LPT). Right-click the port I want, and then click Properties. On the Port Settings tab, if I want to change the COM port number (for example, from COM1 to COM51), I click the Advanced button, and then select the COM port number I need from the list. But, how can I do the job with a program? Is there an API to do the job? Thank you. A: I don't know of any API to achieve that, but you can edit the registry values under HKLM\Hardware\DEVICEMAP\SERIALCOMM A: ComDBClaimPort http://msdn.microsoft.com/en-us/library/ms800845.aspx That only does part of the job though.
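The read side of the registry approach can be sketched like this with Python's standard-library winreg module (the values under SERIALCOMM map device names to COMx names; actually renaming a port is more involved than editing this key, which is where the ComDBClaimPort route mentioned above comes in):

```python
import sys

def list_serial_ports():
    """Return (device, port-name) pairs from the SERIALCOMM registry key.

    Windows only -- on other platforms there is no such registry, so this
    simply returns an empty list.
    """
    if sys.platform != 'win32':
        return []
    import winreg
    ports = []
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                        r'HARDWARE\DEVICEMAP\SERIALCOMM') as key:
        i = 0
        while True:
            try:
                device, name, _type = winreg.EnumValue(key, i)
            except OSError:      # no more values under the key
                break
            ports.append((device, name))
            i += 1
    return ports

print(list_serial_ports())
```

On a Windows machine this prints entries like ('\\Device\\Serial0', 'COM1'), which is a convenient way to verify what the Device Manager steps above actually changed.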
{ "language": "en", "url": "https://stackoverflow.com/questions/135287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is MSVCRT under Windows like glibc (libc) under *nix? I frequently come across Windows programs that bundle in MSVCRT (or their more current equivalents) with the program executables. On a typical PC, I would find many copies of the same .DLL's. My understanding is that MSVCRT is the C runtime library, somewhat analogous to glibc/libc.so under *nix. Why do Windows programs have to bring along their C libraries with them, instead of just sharing the system-wide libc? Update: thanks to Shog9, I started to read about SxS, which has further opened up my eyes to the DLL linkage issues (DLL Hell) - the link is one useful intro to the issue... A: Programs are linked against a specific version of the runtime, and that required version is not guaranteed to exist on the target machine. Also, matching up versions used to be problematic. In the Windows world, it's very bad manners to expect your users to go out and find and install a separate library to use your application. You make sure any dependencies not part of the host system are included with your app. In the Linux world this isn't always as simple, since there's a much larger variation in how the host system might look. A: [I'm the current maintainer of the Native SxS technology at Microsoft] New versions of MSVCRT are released with new versions of Visual Studio, and reflect changes to the C++ toolset. So that programs compiled with versions of VS released after a particular version of Windows can continue to work downlevel (such as VS 2008 projects on Windows XP), the MSVCRT is redistributable, so it can be installed there. CRT installation drops the libraries into %windir%\winsxs\, which is a global system location, requiring administrator privileges to do so. Since some programs do not want to ship with an installer, or do not want the user to need administrator privileges on the machine to run their installer, they bundle the CRT directly in the same directory as the application, for private use.
So on a typical machine, you'll find many programs that have opted for this solution. A: There isn't really a "system-wide libc" in Windows. In *nix, there's generally one compiler, one linker, and with them a well-defined object file format, calling convention, and name mangling spec. This stuff usually comes with the OS. The compiler's semi-special status (plus an emphasis on portability across different *nixes) means that certain stuff can be expected to be there, and to be named and/or versioned in such a way that programs can easily find and use it. In Windows, things are more fragmented. A compiler doesn't come with the OS, so people need to get their own. Each compiler provides its own CRT, which may or may not have the same functions in it as MSVCRT. There's also no One True Spec on calling conventions or how names should appear in the libraries, so different compilers (with different ways of doing stuff) might have trouble finding functions in the library. BTW, the name should be a clue here; MSVCRT is short for "MicroSoft Visual C++ RunTime". It's not really a "system-wide" library in the same way that, say, kernel32 is -- it's just the runtime library used by MS's compilers, which they presumably used when building Windows. Other compilers could conceivably link against it, but (1) there might be licensing issues; and (2) the compilers would be tying their code to MS's -- meaning (2a) they'd no longer have any way to add to the runtime or fix bugs, short of hoping MS will fix them; and (2b) if MS decides to change what's in the RTL (which they can do at will, and probably have in each new version of VC++), or how the names appear, those other programs might break. A: Short answer? Because, up until SxS, MSVCRT was not reliably versioned! Can you imagine the madness that would result if programs compiled and tested against libc 5 would silently start using libc 6? That's the situation we were in for many years on Windows. 
Most of us would just as soon never again trust MS with keeping breaking changes out of a version
{ "language": "en", "url": "https://stackoverflow.com/questions/135296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: Sprite / Character animation in Silverlight (v2) We have a Silverlight 2 project (game) that will require a lot of character animation. Can anyone suggest a good way to do this? Currently we plan to build the art in Illustrator, imported to Silverlight via Mike Snow's plug-in, as this matches the skills our artists have. Is key framing the animations our only option here? And if it is, what's the best way to do it? Hundreds of individual png's, or is there some way in Silverlight to draw just a portion of a larger image? A: You can use the Clip property on the image itself or on a container for the image to display a specific piece of a larger image, like a sprite sheet. This may or may not be more performant than swapping pngs. Also you could use the ImageBrush on a Rectangle to show just what you want; this would probably be a bit more efficient than the Clip property. A: I just posted some code using Bill's suggestion regarding the Rectangle and ImageBrush. A: Silverlight at this time does not support bitmap effects, nor has any libraries to manipulate the images. Your option now is to use keyframe animations from one png to another. Now you can get at the raw bytes of an image. If you have your own image processing libraries you can compile them with the Silverlight dlls and then use the library in your Silverlight app.
{ "language": "en", "url": "https://stackoverflow.com/questions/135299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is there a way to programmatically import ICS into Google Calendar? I don't see any obvious way to import ICS files into Google Calendar from the API docs here: http://code.google.com/apis/calendar/developers_guide_protocol.html And I'd greatly prefer not to have to parse them myself just to send the appointments into GCal. I'm looking for a programmatic solution, not something like import plugins for Thunderbird, Outlook, etc. Third-party APIs to do the ICS parsing are acceptable, in any language. Any ideas? A: I have created a simple open source .NET utility to do just that, available at http://gcalicsimporter.codeplex.com/. A: You shouldn't have to parse an ICS just to import it into Google Calendar; it is capable of importing them directly... From the end-user's web view, it's as easy as clicking Import Calendar. From the API, I would look at the Adding New Subscriptions section. A: For my iCal2GCal app I'm using the Googlecalendar Ruby Gem to both parse .ics files and then add the events inside to a Googlecalendar. It might give you some ideas on how to go about it. You can check out the full source code.
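If you do end up parsing the .ics yourself before pushing events to the calendar API, the core of the format is line-oriented and not hard to pick apart with the standard library alone. This is a naive sketch -- a real importer should also unfold wrapped lines and handle time zones and escaping, which is where a dedicated iCalendar library earns its keep:

```python
def parse_events(ics_text):
    """Collect the property: value pairs of each VEVENT in an iCalendar blob."""
    events, current = [], None
    for line in ics_text.splitlines():
        line = line.strip()
        if line == 'BEGIN:VEVENT':
            current = {}
        elif line == 'END:VEVENT' and current is not None:
            events.append(current)
            current = None
        elif current is not None and ':' in line:
            key, _, value = line.partition(':')
            current[key.split(';')[0]] = value  # drop parameters like ;TZID=...
    return events

sample = """BEGIN:VCALENDAR
BEGIN:VEVENT
SUMMARY:Team standup
DTSTART;TZID=UTC:20080926T100000
END:VEVENT
END:VCALENDAR"""
print(parse_events(sample))
# -> [{'SUMMARY': 'Team standup', 'DTSTART': '20080926T100000'}]
```

From here, each dict maps cleanly onto whatever event-creation call your calendar client library exposes.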
{ "language": "en", "url": "https://stackoverflow.com/questions/135302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: How can I closely achieve ?: from C++/C# in Python? In C# I could easily write the following: string stringValue = string.IsNullOrEmpty( otherString ) ? defaultString : otherString; Is there a quick way of doing the same thing in Python or am I stuck with an 'if' statement? A: @Dan if otherString: stringValue = otherString else: stringValue = defaultString This type of code is longer and more expressive, but also more readable. Well yes, it's longer. Not so sure about "more expressive" and "more readable". At the very least, your claim is disputable. I would even go as far as saying it's downright wrong, for two reasons. First, your code emphasizes the decision-making (rather extremely). On the other hand, the conditional operator emphasizes something else, namely the value (resp. the assignment of said value). And this is exactly what the writer of this code wants. The decision-making is really rather a by-product of the code. The important part here is the assignment operation. Your code hides this assignment in a lot of syntactic noise: the branching. Your code is less expressive because it shifts the emphasis from the important part. Even then your code would probably trump some obscure ASCII art like ?:. An inline-if would be preferable. Personally, I don't like the variant introduced with Python 2.5 because it's backwards. I would prefer something that reads in the same flow (direction) as the C ternary operator but uses words instead of ASCII characters: C = if cond then A else B This wins hands down. C and C# unfortunately don't have such an expressive statement. But (and this is the second argument), the ternary conditional operator of C languages is so long established that it has become an idiom in itself. The ternary operator is as much part of the language as the "conventional" if statement. Because it's an idiom, anybody who knows the language immediately reads this code right.
Furthermore, it's an extremely short, concise way of expressing these semantics. In fact, it's the shortest imaginable way. It's extremely expressive because it doesn't obscure the essence with needless noise. Finally, Jeff Atwood has written the perfect conclusion to this: The best code is no code at all. A: In Python 2.5, there is A if C else B which behaves a lot like ?: in C. However, it's frowned upon for two reasons: readability, and the fact that there's usually a simpler way to approach the problem. For instance, in your case: stringValue = otherString or defaultString A: It's never a bad thing to write readable, expressive code. if otherString: stringValue = otherString else: stringValue = defaultString This type of code is longer and more expressive, but also more readable and less likely to get tripped over or mis-edited down the road. Don't be afraid to write expressively - readable code should be a goal, not a byproduct. A: There are a few duplicates of this question, e.g. * *Does Python have a ternary conditional operator? *What's the best way to replace the ternary operator in Python? In essence, in a general setting, pre-2.5 code should use this: (condExp and [thenExp] or [elseExp])[0] (given condExp, thenExp and elseExp are arbitrary expressions), as it avoids wrong results if thenExp evaluates to boolean False, while maintaining short-circuit evaluation. A: By the way, j0rd4n, you don't (please don't!) write code like this in C#. Apart from the fact that IsDefaultOrNull is actually called IsNullOrEmpty, this is pure code bloat. C# offers the coalesce operator for situations like these: string stringValue = otherString ?? defaultString; It's true that this only works if otherString is null (rather than empty) but if this can be ensured beforehand (and often it can) it makes the code much more readable. A: I also discovered that just using the "or" operator does pretty well.
For instance: finalString = get_override() or defaultString If get_override() returns "" or None, it will always use defaultString. A: Chapter 4 of diveintopython.net has the answer. It's called the and-or trick in Python. A: You can take advantage of the fact that logical expressions return their value, and not just true or false status. For example, you can always use: result = question and firstanswer or secondanswer With the caveat that it doesn't work like the ternary operator if firstanswer is false. This is because question is evaluated first; if it's true, firstanswer is returned, unless firstanswer is itself false, in which case secondanswer is returned instead, so this usage fails to act like the ternary operator. If you know the values, however, there is usually no problem. An example would be: result = choice == 7 and "Seven" or "Another Choice" A: If you used ruby, you could write stringValue = otherString.blank? ? defaultString : otherString; the built in blank? method means null or empty. Come over to the dark side...
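Tying the answers above together, here is a small runnable sketch of the three idioms discussed (the 2.5 conditional expression, the "or" fallback, and the and-or trick); the function names here are made up purely for illustration:

```python
# A sketch contrasting the idioms discussed above (Python 2.5+ syntax).

def pick(other_string, default_string):
    # Conditional expression: behaves like C#'s ?: with the operands reordered.
    return other_string if other_string else default_string

def pick_or(other_string, default_string):
    # The shorter "or" idiom: falls back whenever other_string is falsy
    # ("" or None), which is exactly the IsNullOrEmpty behavior wanted here.
    return other_string or default_string

def pick_and_or(question, first, second):
    # The pre-2.5 and-or trick; wrapping the operands in lists guards
    # against a falsy first answer, as noted in the answers above.
    return (question and [first] or [second])[0]

print(pick("", "default"))          # default
print(pick_or(None, "default"))     # default
print(pick_or("value", "default"))  # value
print(pick_and_or(True, "", "b"))   # "" , the list wrapping preserves falsy values
```

Note how the bare and-or version without the list wrapping would have returned "b" in the last case, which is the pitfall the duplicate-question answer warns about.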
{ "language": "en", "url": "https://stackoverflow.com/questions/135303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Which list implementation will be the fastest for one pass write, read, then destroy? What is the fastest list implementation (in Java) in a scenario where the list will be created one element at a time then at a later point be read one element at a time? The reads will be done with an iterator and then the list will be destroyed. I know that the Big O notation for get is O(1) and add is O(1) for an ArrayList, while LinkedList is O(n) for get and O(1) for add. Does the iterator behave with the same Big O notation? A: Iterating through a linked list is O(1) per element. The Big O runtime for each option is the same. Probably the ArrayList will be faster because of better memory locality, but you'd have to measure it to know for sure. Pick whatever makes the code clearest. A: Note that iterating through an instance of LinkedList can be O(n^2) if done naively. Specifically: List<Object> list = new LinkedList<Object>(); for (int i = 0; i < list.size(); i++) { list.get(i); } This is absolutely horrible in terms of efficiency due to the fact that the list must be traversed up to element i on each iteration. If you do use LinkedList, be sure to use either an Iterator or Java 5's enhanced for-loop: for (Object o : list) { // ... } The above code is O(n), since the list is traversed statefully in-place. To avoid all of the above hassle, just use ArrayList. It's not always the best choice (particularly for space efficiency), but it's usually a safe bet. A: There is a new List implementation called GlueList which is faster than all classic List implementations. Disclaimer: I am the author of this library A: You almost certainly want an ArrayList. Both adding and reading are "amortized constant time" (i.e. O(1)) as specified in the documentation (note that this is true even if the list has to increase its size - it's designed like that; see http://java.sun.com/j2se/1.5.0/docs/api/java/util/ArrayList.html ). 
If you know roughly the number of objects you will be storing then even the ArrayList size increase is eliminated. Adding to the end of a linked list is O(1), but the constant multiplier is larger than ArrayList (since you are usually creating a node object every time). Reading is virtually identical to the ArrayList if you are using an iterator. It's a good rule to always use the simplest structure you can, unless there is a good reason not to. Here there is no such reason. The exact quote from the documentation for ArrayList is: "The add operation runs in amortized constant time, that is, adding n elements requires O(n) time. All of the other operations run in linear time (roughly speaking). The constant factor is low compared to that for the LinkedList implementation." A: It depends largely on whether you know the maximum size of each list up front. If you do, use ArrayList; it will certainly be faster. Otherwise, you'll probably have to profile. While access to the ArrayList is O(1), creating it is not as simple, because of dynamic resizing. Another point to consider is that the space-time trade-off is not clear cut. Each Java object has quite a bit of overhead. While an ArrayList may waste some space on surplus slots, each slot is only 4 bytes (or 8 on a 64-bit JVM). Each element of a LinkedList is probably about 50 bytes (perhaps 100 in a 64-bit JVM). So you have to have quite a few wasted slots in an ArrayList before a LinkedList actually wins its presumed space advantage. Locality of reference is also a factor, and ArrayList is preferable there too. In practice, I almost always use ArrayList. A: I suggest benchmarking it. It's one thing reading the API, but until you try it for yourself, it'd be academic. Should be fairly easy to test, just make sure you do meaningful operations, or hotspot will out-smart you and optimise it all to a NO-OP :)
*Simplify the data down to a scalar data type, then use: int[] *Or even just use an array of whatever object you have: Object[] - John Gardner *Initialize the list to the full size: new ArrayList(123); Of course, as everyone else is mentioning, do performance testing, prove your new solution is an improvement. A: I have actually begun to think that any use of data structures with non-deterministic behavior, such as ArrayList or HashMap, should be avoided, so I would say only use ArrayList if you can bound its size; any unbounded list use LinkedList. That is because I mainly code systems with near real time requirements though. The main problem is that any memory allocation (which could happen randomly with any add operation) could also cause a garbage collection, and any garbage collection can cause you to miss a target. The larger the allocation, the more likely this is to occur, and this is also compounded if you are using CMS collector. CMS is non-compacting, so finding space for a new linked list node is generally going to be easier than finding space for a new 10,000 element array. The more rigorous your approach to coding, the closer you can come to real time with a stock JVM. But choosing only data structures with deterministic behavior is one of the first steps you would have to take.
{ "language": "en", "url": "https://stackoverflow.com/questions/135314", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: CherryPy server name tag When running a CherryPy app it will send a Server name tag something like CherryPy/version. Is it possible to rename/overwrite that from the app without modifying CherryPy so it will show something else? Maybe something like MyAppName/version (CherryPy/version) A: Actually, asking on IRC in their official channel, fumanchu gave me a cleaner way to do this (using the latest svn): import cherrypy from cherrypy import _cpwsgi_server class HelloWorld(object): def index(self): return "Hello World!" index.exposed = True serverTag = "MyApp/%s (CherryPy/%s)" % ("1.2.3", cherrypy.__version__) _cpwsgi_server.CPWSGIServer.environ['SERVER_SOFTWARE'] = serverTag cherrypy.config.update({'tools.response_headers.on': True, 'tools.response_headers.headers': [('Server', serverTag)]}) cherrypy.quickstart(HelloWorld()) A: This string appears to be set in the CherryPy Response class: def __init__(self): self.status = None self.header_list = None self._body = [] self.time = time.time() self.headers = http.HeaderMap() # Since we know all our keys are titled strings, we can # bypass HeaderMap.update and get a big speed boost. dict.update(self.headers, { "Content-Type": 'text/html', "Server": "CherryPy/" + cherrypy.__version__, "Date": http.HTTPDate(self.time), }) So when you're creating your Response object, you can update the "Server" header to display your desired string. From the CherryPy Response Object documentation: headers A dictionary containing the headers of the response. You may set values in this dict anytime before the finalize phase, after which CherryPy switches to using header_list ... EDIT: To avoid needing to make this change with every response object you create, one simple way to get around this is to wrap the Response object. 
For example, you can create your own Response object that inherits from CherryPy's Response and updates the headers key after initializing: class MyResponse(Response): def __init__(self): Response.__init__(self) dict.update(self.headers, { "Server": "MyServer/1.0", }) RespObject = MyResponse() print RespObject.headers["Server"] Then you can call your object for uses where you need to create a Response object, and it will always have the Server header set to your desired string. A: This can now be set on a per application basis in the config file/dict [/] response.headers.server = "CherryPy Dev01"
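For readers who want to see the wrapping pattern in isolation, the sketch below uses a hypothetical stand-in Response class rather than CherryPy's real one, since only the subclass-then-update idiom from the answer above is being illustrated (the version strings are invented):

```python
# A minimal, generic sketch of the wrapping approach described above.
# "Response" here is a stand-in for CherryPy's Response class, not the
# real one; only the subclass-and-update pattern is being illustrated.

class Response(object):
    def __init__(self):
        # Defaults, mirroring what CherryPy sets up in its own __init__.
        self.headers = {
            "Content-Type": "text/html",
            "Server": "CherryPy/3.0",
        }

class MyResponse(Response):
    def __init__(self):
        Response.__init__(self)
        # Overwrite only the Server header after the base init runs.
        self.headers.update({"Server": "MyApp/1.2.3 (CherryPy/3.0)"})

resp = MyResponse()
print(resp.headers["Server"])        # MyApp/1.2.3 (CherryPy/3.0)
print(resp.headers["Content-Type"])  # text/html
```

The other default headers survive untouched, which is the point of updating the dict after calling the base constructor instead of replacing it wholesale.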
{ "language": "en", "url": "https://stackoverflow.com/questions/135317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Mathematica Downvalue Lhs Does anybody know if there is a built-in function in Mathematica for getting the lhs of downvalue rules (without any holding)? I know how to write the code to do it, but it seems basic enough for a built-in. For example: a[1]=2; a[2]=3; BuiltInIDoNotKnowOf[a] returns {1,2} A: This seems to work; not sure how useful it is, though: a[1] = 2 a[2] = 3 a[3] = 5 a[6] = 8 Part[DownValues[a], All, 1, 1, 1] A: This is like keys() in Perl and Python and other languages that have built in support for hashes (aka dictionaries). As your example illustrates, Mathematica supports hashes without any special syntax. Just say a[1] = 2 and you have a hash. [1] To get the keys of a hash, I recommend adding this to your init.m or your personal utilities library: keys[f_] := DownValues[f][[All,1,1,1]] (* Keys of a hash/dictionary. *) (Or the following pure function version is supposedly slightly faster: keys = DownValues[#][[All,1,1,1]]&; (* Keys of a hash/dictionary. *) ) Either way, keys[a] now returns what you want. (You can get the values of the hash with a /@ keys[a].) If you want to allow for higher arity hashes, like a[1,2]=5; a[3,4]=6 then you can use this: SetAttributes[removeHead, {HoldAll}]; removeHead[h_[args___]] := {args} keys[f_] := removeHead @@@ DownValues[f][[All,1]] Which returns {{1,2}, {3,4}}. (In that case you can get the hash values with a @@@ keys[a].) Note that DownValues by default sorts the keys, which is probably not a good idea since at best it takes extra time. If you want the keys sorted you can just do Sort@keys[f]. So I would actually recommend this version: keys = DownValues[#,Sort->False][[All,1,1,1]]&; Interestingly, there is no mention of the Sort option in the DownValues documentation. I found out about it from an old post from Daniel Lichtblau of Wolfram Research. (I confirmed that it still works in the current version (7.0) of Mathematica.) 
Footnotes: [1] What's really handy is that you can mix and match that with function definitions. Like: fib[0] = 1; fib[1] = 1; fib[n_] := fib[n-1] + fib[n-2] You can then add memoization by changing that last line to fib[n_] := fib[n] = fib[n-1] + fib[n-2] which says to cache the answer for all subsequent calls.
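The Perl/Python hash analogy from this answer, including the memoization footnote, can be sketched with an ordinary Python dict (a loose analogy for illustration, not a translation of the Mathematica code):

```python
# The footnote's fib definition, sketched with an explicit Python dict:
# base cases are stored as plain hash entries, and each computed value is
# cached back into the same hash, just like fib[n_] := fib[n] = ... above.

fib = {0: 1, 1: 1}  # the a[1]=2 style "downvalues"

def fib_memo(n):
    if n not in fib:
        fib[n] = fib_memo(n - 1) + fib_memo(n - 2)
    return fib[n]

fib_memo(10)
print(sorted(fib.keys())[:5])  # [0, 1, 2, 3, 4] , the keys[] of the hash
print(fib[10])                 # 89
```

Here the dict's keys() plays the role the keys[f] utility plays for DownValues, and every call after the first is a plain hash lookup.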
{ "language": "en", "url": "https://stackoverflow.com/questions/135330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: LinqToSql and WCF Within an n-tier app that makes use of a WCF service to interact with the database, what is the best practice way of making use of LinqToSql classes throughout the app? I've seen it done a couple of different ways but they seemed like they burned a lot of hours creating extra interfaces, message classes, and the like which reduces the benefit you get from not having to write your data access code. Is there a good way to do it currently? Are we stuck waiting for the Entity Framework? A: LINQ to SQL isn't really suitable for use with a distributed app. The change tracking and lazy loading is part of the DataContext which is tied to the database so cannot travel across the wire. You can move L2S entities across the wire, modify them, move them back and update the database by reattaching them to the DataContext but that is pretty limited and you lose all concurrency checks as the old values are never kept around. BTW I believe the same is true for L2E. A: It is certainly not a good idea to pass the linq-to-sql object around to other parts of a distributed system. If you do that, you would couple your clients to the structure of the database, which is never a good idea. This was/is one of the major problems with DataSets by the way. It is better to create your own classes for the transfer of data object. Those classes, of course, would be implemented as DataContracts. In your service layer, you'd convert between the linq-to-sql objects and instances of the data carrier objects. It is tedious but it decouples the clients of the service from the database schema. It also has the advantage of giving you better control of the data that is passed around in your system.
{ "language": "en", "url": "https://stackoverflow.com/questions/135339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Most common examples of misuse of singleton class When should you NOT use a singleton class although it might be very tempting to do so? It would be very nice if we had a list of most common instances of 'singletonitis' that we should take care to avoid. A: Well singletons for the most part are just making things static anyway. So you're either in effect making data global, and we all know global variables are bad, or you're writing static methods and that's not very OO now is it? Here is a more detailed rant on why singletons are bad, by Steve Yegge. Basically you shouldn't use singletons in almost all cases, you can't really know that it's never going to be needed in more than one place. A: I know many have answered with "when you have more than one", etc. Since the original poster wanted a list of cases when you shouldn't use Singletons (rather than the top reason), I'll chime in with: Whenever you're using it because you're not allowed to use a global! The number of times I've had a junior engineer who has used a Singleton because they knew that I didn't accept globals in code-reviews. They often seem shocked when I point out that all they did was replace a global with a Singleton pattern and they still just have a global! A: Here is a rant by my friend Alex Miller... It does not exactly enumerate "when you should NOT use a singleton" but it is a comprehensive, excellent post and argues that one should only use a singleton in rare instances, if at all. A: I'm guilty of a big one a few years back (thankfully I've learned my lesson since then). What happened is that I came on board a desktop app project that had converted to .Net from VB6, and was a real mess. Things like 40-page (printed) functions and no real class structure. I built a class to encapsulate access to the database. Not a real data tier (yet), just a base class that a real data tier could use. Somewhere I got the bright idea to make this class a singleton. 
It worked okay for a year or so, and then we needed to build a web interface for the app as well. The singleton ended up being a huge bottleneck for the database, since all web users had to share the same connection. Again... lesson learned. Looking back, it probably actually was the right choice for a short while, since it forced the other developers to be more disciplined about using it and made them aware of scoping issues not previously a problem in the VB6 world. But I should have changed it back after a few weeks before we had too much built up around it. A: Do not use a singleton for something that might evolve into a multipliable resource. This probably sounds silly, but if you declare something a singleton you're making a very strong statement that it is absolutely unique. You're building code around it, more and more. And when you then find out after thousands of lines of code that it is not a singleton at all, you have a huge amount of work in front of you because all the other objects expect "the" sacred object of class WizBang to be a singleton. Typical example: "There is only one database connection this application has, thus it is a singleton." - Bad idea. You may want to have several connections in the future. Better create a pool of database connections and populate it with just one instance. Acts like a Singleton, but all other code will have growable code for accessing the pool. EDIT: I understand that theoretically you can extend a singleton into several objects. Yet there is no real life cycle (like pooling/unpooling) which means there is no real ownership of objects that have been handed out, i.e. the now multi-singleton would have to be stateless to be used simultaneously by different methods and threads. A: Singletons are virtually always a bad idea and generally useless/redundant since they are just a very limited simplification of a decent pattern. Look up how Dependency Injection works. 
It solves the same problems, but in a much more useful way--in fact, you find it applies to many more parts of your design. Although you can find DI libraries out there, you can also roll a basic one yourself, it's pretty easy. A: I try to have only one singleton - an inversion of control / service locator object. IService service = IoC.GetImplementationOf<IService>(); A: One of the things that tend to make it a nightmare is if it contains modifiable global state. I worked on a project where there were Singletons used all over the place for things that should have been solved in a completely different way (pass in strategies etc.) The "de-singletonification" was in some cases a major rewrite of parts of the system. I would argue that in most cases when people use a Singleton, it's just wrong b/c it looks nice in the first place, but turns into a problem especially in testing. A: Sometimes, you assume there will only be one of a thing, then you turn out to be wrong. Example, a database class. You assume you will only ever connect to your app's database. // Its our database! We'll never need another class Database { }; But wait! Your boss says, hook up to some other guy's database. Say you want to add phpbb to the website and would like to poke its database to integrate some of its functionality. Should we make a new singleton or another instance of database? Most people agree that a new instance of the same class is preferred, there is no code duplication. You'd rather have Database ourDb; Database otherDb; than copy-paste Database and make: // Copy-pasted from our home-grown database. class OtherGuysDatabase { }; The slippery slope here is that you might stop thinking about making new instances of classes and instead begin thinking it's OK to have one type for every instance. A: When you have multiple applications running in the same JVM. A singleton is a singleton across the entire JVM, not just a single application. 
Even if multiple threads or applications seem to be creating a new singleton object, they're all using the same one if they run in the same JVM. A: In the case of a connection (for instance), it makes sense that you wouldn't want to make the connection itself a singleton, you might need four connections, or you may need to destroy and recreate the connection a number of times. But why wouldn't you access all of your connections through a single interface (i.e. connection manager)?
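As a rough illustration of the dependency-injection alternative mentioned in the answers above, here is a minimal sketch (all class and method names are hypothetical); the point is that how many instances exist becomes the caller's decision rather than the class's:

```python
# A rough sketch of why injecting a dependency beats hard-wiring a
# singleton: both services below share the same code, but each can be
# handed its own connection (or a pooled one) later with no rewrite.
# All names here are hypothetical.

class DbConnection(object):
    def __init__(self, url):
        self.url = url

class ReportService(object):
    # The connection is injected, not fetched from a global singleton,
    # so "there can be only one" is a caller decision, not a class decision.
    def __init__(self, connection):
        self.connection = connection

    def describe(self):
        return "reporting via " + self.connection.url

desktop = ReportService(DbConnection("db://local"))
web = ReportService(DbConnection("db://pooled"))  # a second one, no rewrite needed
print(desktop.describe())  # reporting via db://local
print(web.describe())      # reporting via db://pooled
```

If it later turns out only one connection is ever wanted, the caller simply passes the same DbConnection to every service, which is the "acts like a singleton without being one" situation several answers describe.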
{ "language": "en", "url": "https://stackoverflow.com/questions/135347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Subversion and shared files across repositories/projects? I am migrating a client's SourceSafe repository (3 projects) to SVN and two of the projects share source files. (These projects are separate products - with different names and release versions, etc) Has SVN resolved this shortcoming? How do people generally handle this scenario? Of the options I know about/can think of * *Use the external or extern or whatever for SVN. I hear this is not a good option for various reasons *Create a new project (maybe called shared) that contains the source. The problem with this is that we still have to get that code (it is not a library) and import it into the project somehow. It can be shown to be the same problem as the one above and it introduces the overhead of an additional product/project. *Just check in the files in both repositories and cross-update them. This requires developers to know about the sharing and to remember to check in. I suppose I could write a script that checks all known shared files and updates them when needed. *Have one repository for the two projects that share. This leaves me with the problem of having to create a top level project/repository that contains the two and it is a problem for labeling. I do not really want to label the top pseudo project. (the tags, trunk and branch things are not exactly where I would want them.) I will probably go with the last option. Any other comments? A: I don't know what exactly you heard about svn:externals, but I think that's what you should use here. You can even specify your external to point to a stable branch or release tag of your shared source, or even point to two different tags from the two other projects that use it (you might need this if you fix some bug in the shared code ASAP for project A, but you don't have enough time to test it fully with project B). The worst thing you could do is your option 3 (cross-updating two versions of the same files). 
A: The safe way to do it is to have a third project that builds the shared source into a library that is used by the other two. The library project has its own SVN repository. You can even do this with shared files that are just headers, simply leave them in the lib project directory and add it to the list of includes. Having the same file in two projects, even with source control, is asking for trouble sooner or later! A: Just in case someone googled this question and wonders how to introduce external resources with TortoiseSVN, the documentation is here: Include a common sub-project External Items A: Use one repository, of course. There might be more shared code later on. What exactly is the problem with tagging? So you have an extra folder in the tag, but it doesn't cost you space. If you really don't want that extra folder, here's an idea for organizing the tree: * */ * *trunk * *prj1 *prj2 *shared *tags * *tag1 * *prj1 *shared That is, when you want to tag prj1, you create tag1, then copy prj1 and shared into it. A: In my experience it is problematic to have two independent projects referencing the same source files because I may add or change code in the shared file to meet the needs of one project only to break the other build. It seems that the trick here is to allow each project to advance independently, so however you set up your repositories you want to stabilize the shared code such that it only changes when you want it to. For example, if the shared code lives in two branches/folders of the same repository or if it is checked separately into two distinct repositories, or if it lives in a third repository all by itself, you want the step of upgrading that piece of code to be a manual one that does not have side effects, that you can isolate, debug, and bugfix against. 
I have factored this code out into a third repository and then I periodically migrate that code into my dependent project repositories as internal release and upgrade steps; My individual projects then have a revision that looks like "Upgraded to v4.3.345 of Shared.App.dll" which includes all changes needed to work against that version. If the shared code is part of the same repository as the two dependent projects, then have a separate copy of it in each project and use repository merges to propagate your changes. A: You don't mention what language or build tool you are using, but for Java projects, Maven may be worth looking into. One of its features is that it essentially extends Ant to allow you to pull in external dependencies. That relieves one of your concerns about having to create, maintain and label a meta-project. It also allows you to either pull from HEAD of the external project, or pull from a given tag, which relieves one of the concerns by a previous poster about sharing common files between projects and inadvertently causing breakage, because you can control when each project uses a newer version of the shared files independently. A: Using a trigger to update the files sounds like a bad idea. Somehow they are going to get out of sync. Really the solution with shared code is to extract it into a library and use that library in multiple solutions. That sounds to be outside of the scope of your project, though, so the last solution is your best bet. A: Have one repository for the two projects that share. This leaves me with the problem having to create a top level project/repository that contains the two and it is a problem for labeling. I do not really want to label the top pseudo project. (the tags, trunk and branch things are not exactly where I would want them.) Like others, I would surely do it the last way. What do you feel is wrong with having 'just' one repository? Repository is about admin rights. 
Other than that, I would let a single repo hold any number of projects.
{ "language": "en", "url": "https://stackoverflow.com/questions/135361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: Classic asp - When to use Response.flush? We have a painfully slow report. I added a Response.flush and it seems a great deal better. What are some of the caveats of using this method? A: If Response.Buffer is not set to true, then you'll get a run-time error. Also, if the Flush method is called on an ASP page, the server does not honor Keep-Alive requests for that page. You'll also want to look out if you're using a table-based design as it won't render in some browsers until the entire table is sent, meaning if you have 10,000 rows, the user would still need to wait for all 10,000 rows to transfer before they'd actually see them. A: Expanding Wayne's answer: if anything you do needs to set Response.Headers, you can't do it after any part of the Response has been Flushed. A: There are no problems with flushing the response like this. It is generally recommended for better performance to buffer the entire page and then flush it to the client, but for long running scripts it is often better to display some data to the client so the user sees something is happening. Do remember that flushing manually only has a proper effect when buffering the page from the start, otherwise IIS will flush automatically (stream the page to the client). You should avoid flushing too often as IIS will then have to use resources on flushing the page often instead of processing the script. I.e.: flush every 50 rows instead of every row. A: Response.flush could be useful to send the report's header to the browser, then display a "loading" message, then run your report process and flush the report, then execute a little piece of javascript to hide the "loading" message. This way you will let your users know that something is happening so they won't press STOP or BACK or just close the window as they may otherwise be tempted. Also, I've played a lot with which browsers render which tables and IE seems to be the only one that doesn't render a table unless the closing table tag is received. 
Which means that all rows could gradually appear in other browsers but not in IE.
{ "language": "en", "url": "https://stackoverflow.com/questions/135365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How to determine if a file is in ROM in Windows Mobile? Is there a way to programmatically determine if a given file path and name is stored in ROM (i.e. came installed on the device) or RAM (i.e. created after a hard reset)? A: Get the file attributes and check to see if FILE_ATTRIBUTE_ROMMODULE is set. A: There is no 100% of telling if a file is in rom or not... For most files though you check the file attributes for either "FILE_ATTRIBUTE_INROM" or "FILE_ATTRIBUTE_ROMMODULE". "FILE_ATTRIBUTE_INROM" - normal data files. "FILE_ATTRIBUTE_ROMMODULE" - executable code files (dll, exe's, etc) (these files are not like the normal executable files tho as they are "run in place", so they are like a run of code / data from memory). There are other files that are "in rom" but are not marked as such!! There is no real way of telling until you try to delete them, which you can't.
{ "language": "en", "url": "https://stackoverflow.com/questions/135368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Would performance suffer using autoload in php and searching for the class file? I've always struggled with how to best include classes into my php code. Pathing is usually an issue but a few minutes ago I found this question which dramatically helps that. Now I'm reading about __autoload and thinking that it could make the process of developing my applications much easier. The problem is I like to maintain folder structure to separate areas of functionality as opposed to throwing everything into a general /lib folder. So if I override autoload to do a deep search of a class folder including all subfolders, what performance hits can I expect? Obviously this will depend on scale, depth of the folder structure and number of classes, but generally I'm asking whether, on a medium-scale project, it will cause problems. A: __autoload is great, but the cost of stat()ing all the files in a recursive search function is expensive. You might want to look at building a tree of files to use for autoloading. In my framework, I consistently name files for their classes and use a map that is cached for the data. Check out http://trac.framewerk.org/cgi-bin/trac.fcgi/browser/trunk/index.php [dead link] starting at line 68 for an idea of how this can be done. Edit: And to more directly answer your question, without caching, you can expect a performance hit on a site with medium to heavy traffic. A: A common pattern (Pear, Zend Framework as examples...) is to make the classname reflect the path, so Db_Adapter_Mysql will be at /Db/Adapter/Mysql.php, from somewhere that's added to the include-path. 
A: There are 2 ways that you could easily do this, first of all, name your classes so that they'll define the structure of where to find them: function __autoload($classname) { try { if (class_exists($classname, false) OR interface_exists($classname, false)) { return; } $class = split('_', strtolower(strval($classname))); if (array_shift($class) != 'majyk') { throw new Exception('Autoloader tried to load a class that does not belong to us ( ' . $classname . ' )'); } switch (count($class)) { case 1: // Core Class - matches Majyk_Foo - include /core/class_foo.php $file = MAJYK_DIR . 'core/class_' . $class[0] . '.php'; break; case 2: // Subclass - matches Majyk_Foo_Bar - includes /foo/class_bar.php $file = MAJYK_DIR . $class[0] . '/class_' . $class[1] . '.php'; break; default: throw new Exception('Unknown Class Name ( ' . $classname .' )'); return false; } if (file_exists($file)) { require_once($file); if (!class_exists($classname, false) AND !interface_exists($classname, false)) { throw new Exception('Class cannot be found ( ' . $classname . ' )'); } } else { throw new Exception('Class File Cannot be found ( ' . str_replace(MAJYK_DIR, '', $file) . ' )'); } } catch (Exception $e) { // spl_autoload($classname); echo $e->getMessage(); } } Or, 2, use multiple autoloaders. PHP >=5.1.2 has the SPL library, which allows you to add multiple autoloaders. You add one for each path, and it'll find it on its way through. Or just add them to the include path and use the default spl_autoload(). An example: function autoload_foo($classname) { require_once('foo/' . $classname . '.php'); } function autoload_bar($classname) { require_once('bar/' . $classname . '.php'); } spl_autoload_register('autoload_foo'); spl_autoload_register('autoload_bar'); spl_autoload_register('spl_autoload'); // Default SPL Autoloader A: Autoload is a great PHP feature that helps you very much... The performance won't suffer if you use a smart taxonomy like: 1. 
every library stays in the "packages" folder 2. every class is located by replacing the "_" in the class name with the "/" and adding a ".php" at the end class = My_App_Smart_Object file = packages/My/App/Smart/Object.php The benefits of this approach (used by almost any framework) are also a smarter organization of your code :-) A: Hunting for files all over the place will make things slower (many more disk hits). Loading all of your classes in case you might need them will make things take more memory. Specifying which classes you need in every file is difficult to maintain (i.e. they don't get removed if they're no longer used). The real question is which of these is more important to you? They're all tradeoffs, in the end, so you have to pick one. It's arguable, though, that most of the overhead in the second and third options has to do with actually compiling the code. Using something like APC can significantly reduce the overhead of loading and compiling every class on every page load. Given the use of APC, I would likely take the approach of dividing up my code into modules (e.g. the web interface module, the database interaction module, etc.) and have each of those modules import all the classes for their module, plus classes from other modules they may need. It's a tradeoff between the last two, and I've found it works well enough for my needs. A: I tend to use a simple approach where __autoload() consults a hash mapping class names to relative paths, which is contained in a file that's regenerated using a simple script which itself performs the recursive search. This requires that the script be run when adding a new class file or restructuring the code base, but it also avoids "cleverness" in __autoload() which can lead to unnecessary stat() calls, and it has the advantage that I can easily move files around within my code base, knowing that all I need to do is run a single script to update the autoloader.
The script itself recursively inspects my includes/ directory, and assumes that any PHP file not named in a short list of exclusions (the autoloader itself, plus some other standard files I tend to have) contains a class of the same name. A: Zend Framework's approach is to do autoload based on the PEAR folder standard (Class_Foo maps to /Class/Foo.php); however, rather than using a set base path it uses the include_path. The problem with their approach is there's no way to check beforehand if a file exists, so the autoload will try to include a file that doesn't exist in any of the include_paths, error out, and never give any other autoload functions registered with spl_autoload_register a chance to include the file. So a slight deviation is to manually provide an array of base paths where the autoload can expect to find classes set up in the PEAR fashion and just loop over the base paths: <?php //... foreach( $paths as $path ) { if( file_exists($path . $classNameToFilePath) ) include $path . $classNameToFilePath; } //... ?> Granted, you'll kinda be searching, but for each autoload you'll only be doing at worst n searches, where n is the number of base paths you are checking. But if you find yourself still having to recursively scan directories, the question is not "Will autoload hurt my performance," the question should be "why am I tossing my class files around in a random structure?" Sticking to the PEAR structure will save you so many headaches, and even if you decide to go with manually doing your includes as opposed to autoload, there will be no guessing as to where the class files are located when you do your include statements.
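Several answers above converge on the same two ingredients: a deterministic class-name-to-path transform (the PEAR convention) and, optionally, a pregenerated class map that avoids filesystem stat() calls entirely. A minimal sketch of both, written in JavaScript purely for illustration (the base path and map entries are hypothetical):

```javascript
// Sketch of the PEAR-style convention discussed above: underscores become
// directory separators and ".php" is appended. Names here are hypothetical.
function classToPath(className, basePath = "") {
  return basePath + className.split("_").join("/") + ".php";
}

// A cached map (regenerated by a simple script whenever files move) trades a
// little build-time work for zero directory scanning at request time.
const classMap = {
  Db_Adapter_Mysql: "lib/Db/Adapter/Mysql.php", // hypothetical entry
};

function resolve(className) {
  return classMap[className] ?? null; // null -> let the next autoloader try
}

console.log(classToPath("My_App_Smart_Object", "packages/")); // "packages/My/App/Smart/Object.php"
console.log(resolve("Db_Adapter_Mysql"));                     // "lib/Db/Adapter/Mysql.php"
```

Either strategy keeps the per-class lookup constant-time, which is the point several answers make against recursively scanning subfolders on every miss.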
{ "language": "en", "url": "https://stackoverflow.com/questions/135373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How do I make a WPF data template fill the entire width of the listbox? I have a ListBox DataTemplate in WPF. I want one item to be tight against the left side of the ListBox and another item to be tight against the right side, but I can't figure out how to do this. So far I have a Grid with three columns, the left and right ones have content and the center is a placeholder with its width set to "*". Where am I going wrong? Here is the code: <DataTemplate x:Key="SmallCustomerListItem"> <Grid HorizontalAlignment="Stretch"> <Grid.RowDefinitions> <RowDefinition/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition/> <ColumnDefinition Width="*"/> <ColumnDefinition/> </Grid.ColumnDefinitions> <WrapPanel HorizontalAlignment="Stretch" Margin="0"> <!--Some content here--> <TextBlock Text="{Binding Path=LastName}" TextWrapping="Wrap" FontSize="24"/> <TextBlock Text=", " TextWrapping="Wrap" FontSize="24"/> <TextBlock Text="{Binding Path=FirstName}" TextWrapping="Wrap" FontSize="24"/> </WrapPanel> <ListBox ItemsSource="{Binding Path=PhoneNumbers}" Grid.Column="2" d:DesignWidth="100" d:DesignHeight="50" Margin="8,0" Background="Transparent" BorderBrush="Transparent" IsHitTestVisible="False" HorizontalAlignment="Stretch"/> </Grid> </DataTemplate> A: Ok, here's what you have: Column 0: WrapPanel Column 1: Nothing Column 2: ListBox It sounds like you want WrapPanel on the left edge, ListBox on the right edge, and space to take up what's left in the middle. Easiest way to do this is actually to use a DockPanel, not a Grid. <DockPanel> <WrapPanel DockPanel.Dock="Left"></WrapPanel> <ListBox DockPanel.Dock="Right"></ListBox> </DockPanel> This should leave empty space between the WrapPanel and the ListBox.
A: <Grid.Width> <Binding Path="ActualWidth" RelativeSource="{RelativeSource Mode=FindAncestor, AncestorType={x:Type ScrollContentPresenter}}" /> </Grid.Width> A: Extending Taeke's answer, setting the ScrollViewer.HorizontalScrollBarVisibility="Hidden" for a ListBox allows the child control to take the parent's width and not have the scroll bar show up. <ListBox Width="100" ScrollViewer.HorizontalScrollBarVisibility="Hidden"> <Label Content="{Binding Path=., Mode=OneWay}" HorizontalContentAlignment="Stretch" Height="30" Margin="-4,0,0,0" BorderThickness="0.5" BorderBrush="Black" FontFamily="Calibri" > <Label.Width> <Binding Path="Width" RelativeSource="{RelativeSource Mode=FindAncestor, AncestorType={x:Type ListBox}}" /> </Label.Width> </Label> </ListBox > A: I also had to set: HorizontalContentAlignment="Stretch" on the containing ListBox. A: The Grid should by default take up the whole width of the ListBox because the default ItemsPanel for it is a VirtualizingStackPanel. I'm assuming that you have not changed ListBox.ItemsPanel. Perhaps if you got rid of the middle ColumnDefinition (the others are default "*"), and put HorizontalAlignment="Left" on your WrapPanel and HorizontalAlignment="Right" on the ListBox for phone numbers. You may have to alter that ListBox a bit to get the phone numbers even more right-aligned, such as creating a DataTemplate for them. 
A: If you want to use a Grid, then you need to change your ColumnDefinitions to be: <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="*"/> <ColumnDefinition Width="Auto"/> </Grid.ColumnDefinitions> If you don't need to use a Grid, then you could use a DockPanel: <DockPanel> <WrapPanel DockPanel.Dock="Left"> <!--Some content here--> <TextBlock Text="{Binding Path=LastName}" TextWrapping="Wrap" FontSize="24"/> <TextBlock Text=", " TextWrapping="Wrap" FontSize="24"/> <TextBlock Text="{Binding Path=FirstName}" TextWrapping="Wrap" FontSize="24"/> </WrapPanel> <ListBox DockPanel.Dock="Right" ItemsSource="{Binding Path=PhoneNumbers}" Margin="8,0" Background="Transparent" BorderBrush="Transparent" IsHitTestVisible="False"/> <TextBlock /> </DockPanel> Notice the TextBlock at the end. Any control with no "DockPanel.Dock" defined will fill the remaining space. A: Taeke's answer works well, and as per vancutterromney's answer you can disable the horizontal scrollbar to get rid of the annoying size mismatch. However, if you do want the best of both worlds--to remove the scrollbar when it is not needed, but have it automatically enabled when the ListBox becomes too small, you can use the following converter: /// <summary> /// Value converter that adjusts the value of a double according to min and max limiting values, as well as an offset. These values are set by object configuration, handled in XAML resource definition. /// </summary> [ValueConversion(typeof(double), typeof(double))] public sealed class DoubleLimiterConverter : IValueConverter { /// <summary> /// Minimum value, if set. If not set, there is no minimum limit. /// </summary> public double? Min { get; set; } /// <summary> /// Maximum value, if set. If not set, there is no minimum limit. /// </summary> public double? Max { get; set; } /// <summary> /// Offset value to be applied after the limiting is done. 
/// </summary> public double Offset { get; set; } public static double _defaultFailureValue = 0; public object Convert(object value, Type targetType, object parameter, CultureInfo culture) { if (value == null || !(value is double)) return _defaultFailureValue; double dValue = (double)value; double minimum = Min.HasValue ? Min.Value : double.NegativeInfinity; double maximum = Max.HasValue ? Max.Value : double.PositiveInfinity; double retVal = dValue.LimitToRange(minimum, maximum) + Offset; return retVal; } public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture) { throw new NotImplementedException(); } } Then define it in XAML according to the desired max/min values, as well an offset to deal with that annoying 2-pixel size mismatch as mentioned in the other answers: <ListBox.Resources> <con:DoubleLimiterConverter x:Key="conDoubleLimiter" Min="450" Offset="-2"/> </ListBox.Resources> Then use the converter in the Width binding: <Grid.Width> <Binding Path="ActualWidth" RelativeSource="{RelativeSource Mode=FindAncestor, AncestorType={x:Type ScrollContentPresenter}}" Converter="{StaticResource conDoubleLimiter}" /> </Grid.Width> A: The method in Taeke's answer forces a horizontal scroll bar. This can be fixed by adding a converter to reduce the grid's width by the width of the vertical scrollbar control. 
using System; using System.Globalization; using System.Windows; using System.Windows.Data; using System.Windows.Markup; namespace Converters { public class ListBoxItemWidthConverter : MarkupExtension, IValueConverter { private static ListBoxItemWidthConverter _instance; #region IValueConverter Members public object Convert(object value, Type targetType, object parameter, CultureInfo culture) { return System.Convert.ToInt32(value) - SystemParameters.VerticalScrollBarWidth; } public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture) { throw new NotImplementedException(); } #endregion public override object ProvideValue(IServiceProvider serviceProvider) { return _instance ?? (_instance = new ListBoxItemWidthConverter()); } } } Add a namespace to the root node of your XAML. xmlns:converters="clr-namespace:Converters" And update the Grid width to use the converter. <Grid.Width> <Binding Path="ActualWidth" RelativeSource="{RelativeSource Mode=FindAncestor, AncestorType={x:Type ScrollContentPresenter}}" Converter="{converters:ListBoxItemWidthConverter}"/> </Grid.Width>
{ "language": "en", "url": "https://stackoverflow.com/questions/135375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "65" }
Q: How can I build Slickedit tags on command line? I want to be able to generate SlickEdit tags on the command line, as soon as I sync a project from revision control - so via a script. Any way to do this? A: See this topic http://community.slickedit.com/index.php?topic=253.0
{ "language": "en", "url": "https://stackoverflow.com/questions/135384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I design an ASP.NET website delaying the style/theme? I need to build a prototype for an intranet website, and I want to focus on usability (layout, navigability, etc) and leave the theme for later (I have very bad taste, so this will probably be done by someone else) I know about ASP.NET's capability of switching themes instantly, but how do I have to design the WebForms for it to be easy later? * *Should I create controls with a class attribute pointing to something that will exist in the future css? *Or do I simply create the controls without worrying about this and it'll be handled easily when we create the theme? A: If you're planning on using ASP.NET Themes, then you can design all of the controls generically and add Skins later on. Depending on the complexity of the design (meaning how many different styles you have for textboxes or gridviews, etc), this might not save you a lot of work, but it's one way to use the built-in .Net support for theming. Make sure you use a MasterPage so that all of your sub pages will have a common base, and give all of your elements good IDs, because you will still need to get your hands dirty with CSS to put it all together. Here's a link to a decent Themes & Skins tutorial. Knowing what you'll have to do in the future to add this in will make it easier to design for it now. A: I'd like to make the argument for ignoring themes altogether and using nothing but CSS. It's never been clear to me what value themes add; it's a very Microsoft-specific approach, and its output isn't always standards-compliant. By using CSS you will widen the pool of potential designers able to work on your project, and you will have a better chance of having a cross-browser and standards-compliant site. If someone else is going to be styling this later, I'd just make sure that you provide enough "hooks" for them to be able to design this. 
This means adding CSS classes to pretty much everything you do that will be styled similarly, and wrapping things in divs or spans with CSS classes where appropriate, for example <div class="ButtonContainer"> <asp:linkbutton runat="Server" cssclass="Button Save" command="Save" text="Save" /> <asp:linkbutton runat="Server" cssclass="Button Cancel" command="Cancel" text="Cancel" /> </div> If you don't have a solid understanding of CSS and you don't have in-house naming conventions or standard style sheets, though, you won't really know how to structure your divs and classes. In our shop, the way we handle this is the programmer just writes standard ASP.NET markup, and the designer goes through and adds divs, spans, and class names that work with the style sheet they will develop. A: Skins are probably the answer, but something about that irks me (maybe because I don't like being that vendor specific and I'd like my skills to be applicable to web languages and platforms other than .NET). I prefer to use CssClass attributes on all of my ASP.Net controls and carefully class everything else and to use as much semantic markup as possible. Then, when I get the controls set up and working, I use CSS to style everything (although I usually have to tweak the HTML and JavaScript stuff at this point). Another developer I work with prefers to do everything the Microsoft way with Skins and page directives and so on. It's probably the best way to do it, but it feels to me like it's mixing too much of the presentation into the middle and back-end of the application. A: Just wrap your controls in divs. That will make layout/styling much easier in the end
{ "language": "en", "url": "https://stackoverflow.com/questions/135412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: XPI signing linux no gui I'm trying to sign an XPI on linux (no gui) using the NSS cert db (cert8.db and key3.db) I copied from another server of mine, on which I can sign XPIs just fine. On the new box I can sign with a brand new test certificate ok, but when I try to use the old cert db, it complains with: signtool: PROBLEM signing data (Unknown issuer) Certutil lists the cert I'm trying to use with a * and the CA is present in there as well. Is this cert db transferable between computers like this? Or are there any other files I have to set up? TIA Mike A: I'm not sure if this is what you need, but here it is: http://www.mercille.org/snippets/xpiSigning.php A: If the certificate chain has an intermediate CA, that also needs to be there. NSS is rather picky when it comes to the chain and also needs the certs to have been marked as trusted.
{ "language": "en", "url": "https://stackoverflow.com/questions/135416", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Rails and Flex to build an RIA Any thoughts on using Flex to build an RIA for administering a complex rails app? We are starting to find it difficult using ajax to keep our admin section intuitive and easy for users to work with. A: You've got RoR guys working on this program and you've managed to develop a complex rails app that has enough subtleties that the admin section is difficult to use. The answer to this problem is not to use a different programming language to create a whole other kind of app to do the admin. It will help more to get assistance in simplifying and organizing your admin section. Work through some paper sketches to get a better idea of how to present this complexity and maybe reveal complexity as you go along. Complexity is often handled by using wizards or revealing suboptions as you go along. Spend some time with your users and watch them do their tasks. With more details I could edit this answer with more specifics. A: Try investigating this book: A: Flex is certainly worth considering in your scenario. Generally, Flex is a more mature development platform than AJAX is, so if your server-side data are exposed via some reasonable interface (web services, RESTful services, etc.), building a Flex front-end would make sense. It really depends on your needs - Flex vs. AJAX is an interesting topic on its own. A: If you want to use XML for communication then there isn't much you need to do on the rails side. But if you want to use an AMF gateway you will want to check out RubyAMF. But I agree with MattK: if you just want to redesign your admin section it's not worth adding in Flex. I think you just need to do some usability testing, take that feedback and refactor your interface. A: I would only consider using Flex in your situation if you already have Flex developers or if you could outsource that part of your project.
The Flex modules simply call web services (written in Ruby or whatever) so there is a very nice separation between the two parts of your project. Since the interface between the two parts is an easily-mockable web service, outsourcing works well. There should be plenty of web shops local to you who could handle the work. An admin site should take only two to three weeks to develop in Flex if the developers are knowledgeable.
{ "language": "en", "url": "https://stackoverflow.com/questions/135427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I use reflection to invoke a private method? There is a group of private methods in my class, and I need to call one dynamically based on an input value. Both the invoking code and the target methods are in the same instance. The code looks like this: MethodInfo dynMethod = this.GetType().GetMethod("Draw_" + itemType); dynMethod.Invoke(this, new object[] { methodParams }); In this case, GetMethod() will not return private methods. What BindingFlags do I need to supply to GetMethod() so that it can locate private methods? A: Are you absolutely sure this can't be done through inheritance? Reflection is the very last thing you should look at when solving a problem; it makes refactoring, understanding your code, and any automated analysis more difficult. It looks like you should just have DrawItem1, DrawItem2, etc. classes that override your dynMethod. A: BindingFlags.NonPublic will not return any results by itself. As it turns out, combining it with BindingFlags.Instance does the trick. MethodInfo dynMethod = this.GetType().GetMethod("Draw_" + itemType, BindingFlags.NonPublic | BindingFlags.Instance); A: Simply change your code to use the overloaded version of GetMethod that accepts BindingFlags: MethodInfo dynMethod = this.GetType().GetMethod("Draw_" + itemType, BindingFlags.NonPublic | BindingFlags.Instance); dynMethod.Invoke(this, new object[] { methodParams }); Here's the BindingFlags enumeration documentation.
A: And if you really want to get yourself in trouble, make it easier to execute by writing an extension method: static class AccessExtensions { public static object call(this object o, string methodName, params object[] args) { var mi = o.GetType ().GetMethod (methodName, System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Instance ); if (mi != null) { return mi.Invoke (o, args); } return null; } } And usage: class Counter { public int count { get; private set; } void incr(int value) { count += value; } } [Test] public void making_questionable_life_choices() { Counter c = new Counter (); c.call ("incr", 2); // "incr" is private ! c.call ("incr", 3); Assert.AreEqual (5, c.count); } A: Invokes any method despite its protection level on object instance. Enjoy! public static object InvokeMethod(object obj, string methodName, params object[] methodParams) { var methodParamTypes = methodParams?.Select(p => p.GetType()).ToArray() ?? new Type[] { }; var bindingFlags = BindingFlags.NonPublic | BindingFlags.Public | BindingFlags.Instance | BindingFlags.Static; MethodInfo method = null; var type = obj.GetType(); while (method == null && type != null) { method = type.GetMethod(methodName, bindingFlags, Type.DefaultBinder, methodParamTypes, null); type = type.BaseType; } return method?.Invoke(obj, methodParams); } A: Microsoft recently modified the reflection API rendering most of these answers obsolete. The following should work on modern platforms (including Xamarin.Forms and UWP): obj.GetType().GetTypeInfo().GetDeclaredMethod("MethodName").Invoke(obj, yourArgsHere); Or as an extension method: public static object InvokeMethod<T>(this T obj, string methodName, params object[] args) { var type = typeof(T); var method = type.GetTypeInfo().GetDeclaredMethod(methodName); return method.Invoke(obj, args); } Note: * *If the desired method is in a superclass of obj the T generic must be explicitly set to the type of the superclass. 
*If the method is asynchronous you can use await (Task) obj.InvokeMethod(…). A: I think you can pass it BindingFlags.NonPublic where you call the GetMethod method. A: Could you not just have a different Draw method for each type that you want to Draw? Then call the overloaded Draw method passing in the object of type itemType to be drawn. Your question does not make it clear whether itemType genuinely refers to objects of differing types. A: It should be noted that calling from a derived class can be problematic. Error prone: this.GetType().GetMethod("PrivateTestMethod", BindingFlags.Instance | BindingFlags.NonPublic) Correct: typeof(CurrentClass).GetMethod("PrivateTestMethod", BindingFlags.Instance | BindingFlags.NonPublic) A: Reflection especially on private members is wrong * *Reflection breaks type safety. You can try to invoke a method that doesn't exist (anymore), or with the wrong parameters, or with too many parameters, or not enough... or even in the wrong order (this one is my favourite :) ). By the way, the return type could change as well. *Reflection is slow. Reflection on private members breaks the encapsulation principle and thus exposes your code to the following: * *Increased complexity of your code because it has to handle the inner behavior of the classes. What is hidden should remain hidden. *Makes your code easy to break as it will compile but won't run if the method changed its name. *Makes the private code easy to break because if it is private it is not intended to be called that way. Maybe the private method expects some inner state before being called. What if I must do it anyway? There are some cases: when you depend on a third party, or you need some API that is not exposed, you have to do some reflection. Some also use it to test classes they own when they don't want to change the interface just to give tests access to the inner members.
If you do it, do it right * *Mitigate the easy to break: To mitigate the easy-to-break issue, the best approach is to detect any potential break with unit tests that run in a continuous integration build or similar. Of course, it means you always use the same assembly (which contains the private members). If you use dynamic loading and reflection, you are playing with fire, but you can always catch the Exception that the call may produce. * *Mitigate the slowness of reflection: In recent versions of the .Net Framework, CreateDelegate beats MethodInfo.Invoke by a factor of 50: // The following should be done once since this does some reflection var method = this.GetType().GetMethod("Draw_" + itemType, BindingFlags.NonPublic | BindingFlags.Instance); // Here we create a Func that targets the instance of type which has the // Draw_ItemType method var draw = (Func<TInput, TOutput[]>)method.CreateDelegate( typeof(Func<TInput, TOutput[]>), this); draw calls will be around 50x faster than MethodInfo.Invoke. Use draw as a standard Func like this: var res = draw(methodParams); Check this post of mine to see a benchmark of different method invocations A: Read this (supplementary) answer (that is sometimes the answer) to understand where this is going and why some people in this thread complain that "it is still not working". I wrote exactly the same code as one of the answers here, but I still had an issue. I placed a breakpoint on var mi = o.GetType().GetMethod(methodName, BindingFlags.NonPublic | BindingFlags.Instance ); It executed, but mi == null, and it continued behaving like this until I did a "re-build" on all projects involved. I was unit testing one assembly while the reflection method was sitting in a third assembly. It was totally confusing, but I used the Immediate Window to discover methods, and I found that a private method I tried to unit test had an old name (I renamed it).
This told me that the old assembly or PDB was still out there even though the unit test project built - for some reason, the project it tests didn't build. "Rebuild" worked
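The pattern the question is built around — routing a name like "Draw_" + itemType to a method at runtime — is independent of the C# reflection API itself. As an aside, in a dynamic language the same dispatch-by-name idea reduces to a plain property lookup; a JavaScript sketch with hypothetical method names:

```javascript
// Illustration of dispatch-by-name in a dynamic language (hypothetical names;
// this sketches the pattern, not the C# reflection API from the answers above).
class Renderer {
  drawCircle() { return "circle"; }
  drawSquare() { return "square"; }

  // Look the method up by name at runtime, guarding against unknown types —
  // the analogue of checking that GetMethod did not return null.
  draw(itemType) {
    const method = this["draw" + itemType];
    if (typeof method !== "function") {
      throw new Error("No draw method for " + itemType);
    }
    return method.call(this);
  }
}

const renderer = new Renderer();
console.log(renderer.draw("Circle")); // "circle"
console.log(renderer.draw("Square")); // "square"
```

The guard matters in both worlds: with reflection (or any by-name lookup), a renamed method compiles fine and only fails at the call site.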
{ "language": "en", "url": "https://stackoverflow.com/questions/135443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "374" }
Q: How can I detect if there is a floppy in a drive? I tried to use DriveInfo.IsReady, but it returns false if an unformatted floppy is in the drive. A: You can always try to read a sector from the floppy and see if it succeeds or not. I have no clue how to do it in .NET, but here is the C/C++ equivalent. SetLastError(0); HANDLE h = CreateFile("\\\\.\\A:", ...); if (!ReadFile(h, buf, 512, &bytes_read, 0)) { DWORD err = GetLastError(); } CreateFile, ReadFile A: Simply speaking: you can't. Floppy drives don't support that. A: what about DriveNotFoundException? I don't have a floppy drive in the computer I'm on currently, so I can't test it. This exception is thrown when the drive is unavailable, which is a condition that I believe would be met when the floppy drive is empty. A: Perhaps you can look at the disk management APIs... That should be able to tell you the capacity of the disk (whether formatted or not)... And if there's no capacity, there's no floppy inserted... A: Trap both DiscNotReady (For no disk in the drive), and write Exceptions (For invalid file system/not formatted). A: Jonas stuff worked: bool MyDll::Class1::HasFloppy( wchar_t driveLetter ) { wchar_t path[] = L"\\\\.\\A:"; path[ 4 ] = driveLetter; SetLastError( 0 ); HANDLE drive = CreateFile( path, //__in LPCTSTR lpFileName, GENERIC_READ, //__in DWORD dwDesiredAccess, 0, //__in DWORD dwShareMode, 0, //__in_opt LPSECURITY_ATTRIBUTES lpSecurityAttributes, OPEN_EXISTING, //__in DWORD dwCreationDisposition, 0, //__in DWORD dwFlagsAndAttributes, 0 //__in_opt HANDLE hTemplateFile ); DWORD bytes_read; char buf[ 512 ]; DWORD err( 0 ); if( !ReadFile( drive, buf, 512, &bytes_read, 0 ) ) err = GetLastError(); CloseHandle( drive ); return err != ERROR_NOT_READY; } A: If you insert an unformatted floppy disk in your floppy drive, the purpose would normally be to use that floppy drive with that floppy disk. The first step is then logically to format that floppy disk. 
So, if you detect a non-ready floppy drive, you could try to format the disk, and if that succeeds, your floppy drive should become ready with a newly formatted floppy in it. If the format of the unready floppy drive fails, then there is no floppy disk in it, or the floppy disk in it is faulty. Then you can show a message to insert a floppy disk in the drive.
{ "language": "en", "url": "https://stackoverflow.com/questions/135445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I check if an object has a specific property in JavaScript? How do I check if an object has a specific property in JavaScript? Consider: x = {'key': 1}; if ( x.hasOwnProperty('key') ) { //Do this } Is that the best way to do it? A: hasOwnProperty "can be used to determine whether an object has the specified property as a direct property of that object; unlike the in operator, this method does not check down the object's prototype chain." So most probably, from what your question suggests, you don't want to use hasOwnProperty, which determines whether the property exists attached directly to the object itself. If you want to determine if the property exists in the prototype chain, you may want to use it like: if (prop in object) { // Do something } A: if (x.key !== undefined) Armin Ronacher seems to have already beaten me to it, but: Object.prototype.hasOwnProperty = function(property) { return this[property] !== undefined; }; x = {'key': 1}; if (x.hasOwnProperty('key')) { alert('have key!'); } if (!x.hasOwnProperty('bar')) { alert('no bar!'); } A safer, but slower solution, as pointed out by Konrad Rudolph and Armin Ronacher, would be: Object.prototype.hasOwnProperty = function(property) { return typeof this[property] !== 'undefined'; }; A: You can use the following approaches- var obj = {a:1} console.log('a' in obj) // 1 console.log(obj.hasOwnProperty('a')) // 2 console.log(Boolean(obj.a)) // 3 The differences between these approaches are as follows- * *In the first and third approach we are not just searching in the object but in its prototype chain too. If the object does not have the property, but the property is present in its prototype chain, it is going to give true. var obj = { a: 2, __proto__ : {b: 2} } console.log('b' in obj) console.log(Boolean(obj.b)) *The second approach will check only for its own properties.
Example - var obj = { a: 2, __proto__ : {b: 2} } console.log(obj.hasOwnProperty('b')) *The difference between the first and the third is that if there is a property whose value is undefined, the third approach is going to give false while the first will give true. var obj = { b : undefined } console.log(Boolean(obj.b)) console.log('b' in obj); A: Given myObject object and "myKey" as key name: Object.keys(myObject).includes('myKey') or myObject.hasOwnProperty('myKey') or typeof myObject.myKey !== 'undefined' The last was widely used, but (as pointed out in other answers and comments) it could also match on keys derived from the Object prototype. A: Considering the following object in Javascript const x = {key: 1}; You can use the in operator to check if the property exists on an object: console.log("key" in x); You can also loop through all the properties of the object using a for-in loop, and then check for the specific property: for (const prop in x) { if (prop === "key") { //Do something } } You must consider if this object property is enumerable or not, because non-enumerable properties will not show up in a for-in loop. Also, if the enumerable property is shadowing a non-enumerable property of the prototype, it will not show up in Internet Explorer 8 and earlier. If you'd like a list of all instance properties, whether enumerable or not, you can use Object.getOwnPropertyNames(x); This will return an array of names of all properties that exist on an object. The Reflect API provides methods that can be used to interact with Javascript objects. The static Reflect.has() method works like the in operator as a function.
console.log(Reflect.has(x, 'key')); // expected output: true console.log(Reflect.has(x, 'key2')); // expected output: false console.log(Reflect.has(x, 'toString')); // expected output: true (inherited from Object.prototype) Finally, you can use the typeof operator to directly check the data type of the object property: if (typeof x.key === "undefined") { console.log("undefined"); } If the property does not exist on the object, it will return the string undefined. Else it will return the appropriate property type. However, note that this is not always a valid way of checking whether an object has a property or not, because you could have a property that is set to undefined, in which case the typeof x.key === "undefined" check would still be true (even though the key really is in the object). Similarly, you can check if a property exists by comparing directly to the undefined Javascript property if (x.key === undefined) { console.log("undefined"); } This should work unless key was specifically set to undefined on the x object A: Performance Today 2020.12.17 I performed tests on macOS High Sierra 10.13.6 on Chrome v87, Safari v13.1.2 and Firefox v83 for the chosen solutions. Results I compare only solutions A-F because they give a valid result for all the cases used in the snippet in the details section. 
For all browsers * *solution based on in (A) is fast or fastest *solution (E) is fastest for Chrome for big objects and fastest for Firefox for small arrays if the key does not exist *solution (F) is fastest (~ >10x than other solutions) for small arrays *solutions (D,E) are quite fast *solution based on lodash's has (B) is slowest Details I performed 4 test cases: * *when object has 10 fields and searched key exists - you can run it HERE *when object has 10 fields and searched key not exists - you can run it HERE *when object has 10000 fields and searched key exists - you can run it HERE *when object has 10000 fields and searched key not exists - you can run it HERE The snippet below presents the differences between solutions A B C D E F G H I J K // SO https://stackoverflow.com/q/135448/860099 // src: https://stackoverflow.com/a/14664748/860099 function A(x) { return 'key' in x } // src: https://stackoverflow.com/a/11315692/860099 function B(x) { return _.has(x, 'key') } // src: https://stackoverflow.com/a/40266120/860099 function C(x) { return Reflect.has( x, 'key') } // src: https://stackoverflow.com/q/135448/860099 function D(x) { return x.hasOwnProperty('key') } // src: https://stackoverflow.com/a/11315692/860099 function E(x) { return Object.prototype.hasOwnProperty.call(x, 'key') } // src: https://stackoverflow.com/a/136411/860099 function F(x) { function hasOwnProperty(obj, prop) { var proto = obj.__proto__ || obj.constructor.prototype; return (prop in obj) && (!(prop in proto) || proto[prop] !== obj[prop]); } return hasOwnProperty(x,'key') } // src: https://stackoverflow.com/a/135568/860099 function G(x) { return typeof(x.key) !== 'undefined' } // src: https://stackoverflow.com/a/22740939/860099 function H(x) { return x.key !== undefined } // src: https://stackoverflow.com/a/38332171/860099 function I(x) { return !!x.key } // src: https://stackoverflow.com/a/41184688/860099 function J(x) { return !!x['key'] } // src: https://stackoverflow.com/a/54196605/860099 function K(x) { 
return Boolean(x.key) } // -------------------- // TEST // -------------------- let x1 = {'key': 1}; let x2 = {'key': "1"}; let x3 = {'key': true}; let x4 = {'key': []}; let x5 = {'key': {}}; let x6 = {'key': ()=>{}}; let x7 = {'key': ''}; let x8 = {'key': 0}; let x9 = {'key': false}; let x10= {'key': undefined}; let x11= {'nokey': 1}; let b= x=> x ? 1:0; console.log(' 1 2 3 4 5 6 7 8 9 10 11'); [A,B,C,D,E,F,G,H,I,J,K ].map(f=> { console.log( `${f.name} ${b(f(x1))} ${b(f(x2))} ${b(f(x3))} ${b(f(x4))} ${b(f(x5))} ${b(f(x6))} ${b(f(x7))} ${b(f(x8))} ${b(f(x9))} ${b(f(x10))} ${b(f(x11))} ` )}) console.log('\nLegend: Columns (cases)'); console.log('1. key = 1 '); console.log('2. key = "1" '); console.log('3. key = true '); console.log('4. key = [] '); console.log('5. key = {} '); console.log('6. key = ()=>{} '); console.log('7. key = "" '); console.log('8. key = 0 '); console.log('9. key = false '); console.log('10. key = undefined '); console.log('11. no-key '); <script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.20/lodash.min.js" integrity="sha512-90vH1Z83AJY9DmlWa8WkjkV79yfS2n2Oxhsi2dZbIv0nC4E6m5AbH8Nh156kkM7JePmqD6tcZsfad1ueoaovww==" crossorigin="anonymous"> </script> This snippet only presents the functions used in the performance tests - it does not perform the tests itself! And here are example results for Chrome A: Now with ECMAScript 2022 we can use Object.hasOwn instead of hasOwnProperty (because hasOwnProperty has pitfalls) Object.hasOwn(obj, propKey) A: Let's cut through some confusion here. First, let's simplify by assuming hasOwnProperty already exists; this is true of the vast majority of current browsers in use. hasOwnProperty returns true if the attribute name that is passed to it has been added to the object. It is entirely independent of the actual value assigned to it, which may be exactly undefined. 
Hence: var o = {} o.x = undefined var a = o.hasOwnProperty('x') // a is true var b = o.x === undefined // b is also true However: var o = {} var a = o.hasOwnProperty('x') // a is now false var b = o.x === undefined // b is still true The problem is what happens when an object in the prototype chain has an attribute with the value of undefined? hasOwnProperty will be false for it, and so will !== undefined. Yet, for..in will still list it in the enumeration. The bottom line is there is no cross-browser way (since Internet Explorer doesn't expose __proto__) to determine that a specific identifier has not been attached to an object or anything in its prototype chain. 
then you can use: var foo = {}; foo.bar = "Yes, this is a proper value!"; if (!!foo.bar) { // member is set, do something } A: some easier and short options depending on the specific use case: * *to check if the property exists, regardless of value, use the in operator ("a" in b) *to check a property value from a variable, use bracket notation (obj[v]) *to check a property value as truthy, use optional chaining (?.) *to check a property value boolean, use double-not / bang-bang / (!!) *to set a default value for null / undefined check, use nullish coalescing operator (??) *to set a default value for falsey value check, use short-circuit logical OR operator (||) run the code snippet to see results: let obj1 = {prop:undefined}; console.log(1,"prop" in obj1); console.log(1,obj1?.prop); let obj2 = undefined; //console.log(2,"prop" in obj2); would throw because obj2 undefined console.log(2,"prop" in (obj2 ?? {})) console.log(2,obj2?.prop); let obj3 = {prop:false}; console.log(3,"prop" in obj3); console.log(3,!!obj3?.prop); let obj4 = {prop:null}; let look = "prop" console.log(4,"prop" in obj4); console.log(4,obj4?.[look]); let obj5 = {prop:true}; console.log(5,"prop" in obj5); console.log(5,obj5?.prop === true); let obj6 = {otherProp:true}; look = "otherProp" console.log(6,"prop" in obj6); console.log(6,obj6.look); //should have used bracket notation let obj7 = {prop:""}; console.log(7,"prop" in obj7); console.log(7,obj7?.prop || "empty"); I see very few instances where hasOwn is used properly, especially given its inheritance issues A: For testing simple objects, use: if (obj[x] !== undefined) If you don't know what object type it is, use: if (obj.hasOwnProperty(x)) All other options are slower... 
Details A performance evaluation of 100,000,000 cycles under Node.js of the five options suggested by others here: function hasKey1(x,obj) { return (x in obj); } function hasKey2(x,obj) { return (obj[x]); } function hasKey3(x,obj) { return (obj[x] !== undefined); } function hasKey4(x,obj) { return (typeof(obj[x]) !== 'undefined'); } function hasKey5(x,obj) { return (obj.hasOwnProperty(x)); } The evaluation tells us that unless we specifically want to check the object's prototype chain as well as the object itself, we should not use the common form: if (x in obj)... It is between 2 and 6 times slower depending on the use case hasKey1 execution time: 4.51 s hasKey2 execution time: 0.90 s hasKey3 execution time: 0.76 s hasKey4 execution time: 0.93 s hasKey5 execution time: 2.15 s Bottom line, if your obj is not necessarily a simple object and you wish to avoid checking the object's prototype chain and to ensure x is owned by obj directly, use if (obj.hasOwnProperty(x)).... Otherwise, when using a simple object and not being worried about the object's prototype chain, using if (typeof(obj[x]) !== 'undefined')... is the safest and fastest way. If you use a simple object as a hash table and never do anything kinky, I would use if (obj[x])... as I find it much more readable. A: You can use this (but read the warning below): var x = { 'key': 1 }; if ('key' in x) { console.log('has'); } But be warned: 'constructor' in x will return true even if x is an empty object - same for 'toString' in x, and many others. It's better to use Object.hasOwn(x, 'key'). A: An ECMAScript 6 solution with reflection. Create a wrapper like: /** Gets an argument from array or object. The possible outcome: - If the key exists the value is returned. - If no key exists the default value is returned. - If no default value is specified an empty string is returned. @param obj The object or array to be searched. @param key The name of the property or key. 
@param defVal Optional default value [default ""] @return The default value in case of an error else the found parameter. */ function getSafeReflectArg( obj, key, defVal) { "use strict"; var retVal = (typeof defVal === 'undefined' ? "" : defVal); if ( Reflect.has( obj, key) ) { return Reflect.get( obj, key); } return retVal; } // getSafeReflectArg A: There is a method, "hasOwnProperty", that exists on an object, but it's not recommended to call this method directly, because the object might sometimes be null, or a property named hasOwnProperty might exist on the object, like: { hasOwnProperty: false } So a better way would be: // Good var obj = {"bar": "here bar desc"} console.log(Object.prototype.hasOwnProperty.call(obj, "bar")); // Best const has = Object.prototype.hasOwnProperty; // Cache the lookup once, in module scope. console.log(has.call(obj, "bar")); A: Showing how to use this approach const object= {key1: 'data', key2: 'data2'}; Object.keys(object).includes('key1') //returns true We can use indexOf as well, I prefer includes A: Yes it is :) I think you can also do Object.prototype.hasOwnProperty.call(x, 'key') which should also work if x has a property called hasOwnProperty :) But that tests for own properties. If you want to check if it has a property that may also be inherited you can use typeof x.foo != 'undefined'. A: if(x.hasOwnProperty("key")){ // … } because if(x.key){ // … } fails if x.key is falsy (for example, x.key === ""). A: You can also use the ES6 Reflect object: x = {'key': 1}; Reflect.has( x, 'key'); // returns true Documentation on MDN for Reflect.has can be found here. The static Reflect.has() method works like the in operator as a function. A: Do not do this object.hasOwnProperty(key)). It's really bad because these methods may be shadowed by properties on the object in question - consider { hasOwnProperty: false } - or, the object may be a null object (Object.create(null)). 
The best way is to do Object.prototype.hasOwnProperty.call(object, key) or: const has = Object.prototype.hasOwnProperty; // Cache the lookup once, in module scope. console.log(has.call(object, key)); /* Or */ import has from 'has'; // https://www.npmjs.com/package/has console.log(has(object, key)); A: 2022 UPDATE Object.hasOwn() Object.hasOwn() is recommended over Object.hasOwnProperty() because it works for objects created using Object.create(null) and with objects that have overridden the inherited hasOwnProperty() method. While it is possible to workaround these problems by calling Object.prototype.hasOwnProperty() on an external object, Object.hasOwn() is more intuitive. Example const object1 = { prop: 'exists' }; console.log(Object.hasOwn(object1, 'prop')); // expected output: true Original answer I'm really confused by the answers that have been given - most of them are just outright incorrect. Of course you can have object properties that have undefined, null, or false values. So simply reducing the property check to typeof this[property] or, even worse, x.key will give you completely misleading results. It depends on what you're looking for. If you want to know if an object physically contains a property (and it is not coming from somewhere up on the prototype chain) then object.hasOwnProperty is the way to go. All modern browsers support it. (It was missing in older versions of Safari - 2.0.1 and older - but those versions of the browser are rarely used any more.) If what you're looking for is if an object has a property on it that is iterable (when you iterate over the properties of the object, it will appear) then doing: prop in object will give you your desired effect. 
Since using hasOwnProperty is probably what you want, and considering that you may want a fallback method, I present to you the following solution: var obj = { a: undefined, b: null, c: false }; // a, b, c all found for ( var prop in obj ) { document.writeln( "Object1: " + prop ); } function Class(){ this.a = undefined; this.b = null; this.c = false; } Class.prototype = { a: undefined, b: true, c: true, d: true, e: true }; var obj2 = new Class(); // a, b, c, d, e found for ( var prop in obj2 ) { document.writeln( "Object2: " + prop ); } function hasOwnProperty(obj, prop) { var proto = obj.__proto__ || obj.constructor.prototype; return (prop in obj) && (!(prop in proto) || proto[prop] !== obj[prop]); } if ( Object.prototype.hasOwnProperty ) { var hasOwnProperty = function(obj, prop) { return obj.hasOwnProperty(prop); } } // a, b, c found in modern browsers // b, c found in Safari 2.0.1 and older for ( var prop in obj2 ) { if ( hasOwnProperty(obj2, prop) ) { document.writeln( "Object2 w/ hasOwn: " + prop ); } } The above is a working, cross-browser, solution to hasOwnProperty(), with one caveat: It is unable to distinguish between cases where an identical property is on the prototype and on the instance - it just assumes that it's coming from the prototype. You could shift it to be more lenient or strict, based upon your situation, but at the very least this should be more helpful. A: Note: the following is nowadays largely obsolete thanks to strict mode, and hasOwnProperty. The correct solution is to use strict mode and to check for the presence of a property using obj.hasOwnProperty. This answer predates both these things, at least as widely implemented (yes, it is that old). Take the following as a historical note. Bear in mind that undefined is (unfortunately) not a reserved word in JavaScript if you’re not using strict mode. Therefore, someone (someone else, obviously) could have the grand idea of redefining it, breaking your code. 
A more robust method is therefore the following: if (typeof(x.attribute) !== 'undefined') On the flip side, this method is much more verbose and also slower. :-/ A common alternative is to ensure that undefined is actually undefined, e.g. by putting the code into a function which accepts an additional parameter, called undefined, that isn’t passed a value. To ensure that it’s not passed a value, you could just call it yourself immediately, e.g.: (function (undefined) { … your code … if (x.attribute !== undefined) … more code … })(); A: OK, it looks like I had the right answer unless you don't want inherited properties: if (x.hasOwnProperty('key')) Here are some other options to include inherited properties: if (x.key) // Quick and dirty, but it does the same thing as below. if (x.key !== undefined) A: Another relatively simple way is using Object.keys. This returns an array which means you get all of the features of an array. var noInfo = {}; var info = {something: 'data'}; Object.keys(noInfo).length //returns 0 or false Object.keys(info).length //returns 1 or true Although we are in a world with great browser support. Because this question is so old I thought I'd add this: This is safe to use as of JavaScript v1.8.5. A: JavaScript is now evolving and growing as it now has good and even efficient ways to check it. 
Here are some easy ways to check if an object has a particular property: * *Using hasOwnProperty() const hero = { name: 'Batman' }; hero.hasOwnProperty('name'); // => true hero.hasOwnProperty('realName'); // => false *Using keyword/operator in const hero = { name: 'Batman' }; 'name' in hero; // => true 'realName' in hero; // => false *Comparing with undefined keyword const hero = { name: 'Batman' }; hero.name; // => 'Batman' hero.realName; // => undefined // So consider this hero.realName == undefined // => true (which means the property does not exist in the object) hero.name == undefined // => false (which means the property exists in the object) For more information, check here. A: You need to use the method object.hasOwnProperty(property). It returns true if the object has the property and false if the object doesn't. A: The hasOwnProperty() method returns a boolean indicating whether the object has the specified property as its own property (as opposed to inheriting it). const object1 = {}; object1.property1 = 42; console.log(object1.hasOwnProperty('property1')); // expected output: true console.log(object1.hasOwnProperty('toString')); // expected output: false console.log(object1.hasOwnProperty('hasOwnProperty')); // expected output: false Know more A: Don't over-complicate things when you can do: var isProperty = (objectname.keyname || "") ? true : false; It is simple and clear for most cases... 
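Putting the main points of these answers together, here is a small runnable sketch contrasting the three checks (the object and key names are invented for illustration):

```javascript
// `own` is an own property deliberately set to undefined;
// `inherited` comes from the prototype.
const proto = { inherited: 1 };
const obj = Object.create(proto);
obj.own = undefined;

// `in` walks the prototype chain and ignores the value:
console.log('own' in obj);        // true
console.log('inherited' in obj);  // true

// hasOwnProperty only sees direct properties, value still ignored:
console.log(obj.hasOwnProperty('own'));        // true
console.log(obj.hasOwnProperty('inherited'));  // false

// Comparing with undefined confuses "absent" with "set to undefined":
console.log(obj.own !== undefined);        // false, although the key exists
console.log(obj.inherited !== undefined);  // true
```

Which check is "best" therefore depends on whether you care about inherited properties and whether undefined is a legitimate stored value in your data.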
A: A better approach for iterating on an object's own properties: If you want to iterate over an object's properties without using the hasOwnProperty() check, use the for(let key of Object.keys(stud)){} method: for(let key of Object.keys(stud)){ console.log(key); // will only log object's own properties } Full example comparing for-in with hasOwnProperty(): function Student() { this.name = "nitin"; } Student.prototype = { grade: 'A' } let stud = new Student(); // for-in approach for(let key in stud){ if(stud.hasOwnProperty(key)){ console.log(key); // only outputs "name" } } //Object.keys() approach for(let key of Object.keys(stud)){ console.log(key); } A: x?.key returns the value of x.key if it exists (1 here), otherwise undefined A: I had a hard time understanding the difference between hasOwn and hasOwnProperty even though there are 30 answers trying to explain this. Here's a runnable snippet, so you can see for yourself how it behaves. const object1 = { prop: 'exists' }; object1.property1 = 42; // the following as you might expect output true console.log(object1.hasOwnProperty('property1')); console.log(Object.hasOwn(object1,"prop")); console.log(Object.hasOwn(object1,"property1")); // the following might surprise you, they output false console.log(Object.hasOwnProperty(object1,"prop")); console.log(Object.hasOwnProperty(object1,"property1")); // the following as you rightfully expect output false console.log(object1.hasOwnProperty('toString')); console.log(Object.hasOwn(object1,"toString")); console.log(Object.hasOwnProperty(object1,"toString")); console.log(object1.hasOwnProperty('hasOwnProperty')); console.log(Object.hasOwn(object1,"hasOwnProperty")); console.log(Object.hasOwnProperty(object1,"hasOwnProperty"));
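As a closing sketch of the "safe reference" advice given in several answers above — caching Object.prototype.hasOwnProperty and calling it explicitly — here is a small helper (the name hasKey is made up for this example):

```javascript
// A safe "has" helper along the lines suggested above: it works for
// null-prototype objects and for objects that shadow hasOwnProperty.
const has = Object.prototype.hasOwnProperty; // cache the lookup once

function hasKey(obj, key) {
  return has.call(obj, key);
}

const plain = { key: 1 };
const tricky = { hasOwnProperty: false, key: 1 }; // shadows the method
const bare = Object.create(null);                 // no prototype at all
bare.key = 1;

console.log(hasKey(plain, 'key'));   // true
console.log(hasKey(tricky, 'key'));  // true (tricky.hasOwnProperty('key') would throw)
console.log(hasKey(bare, 'key'));    // true (bare.hasOwnProperty doesn't exist)
console.log(hasKey(plain, 'nope'));  // false
```

In environments that support it, Object.hasOwn(obj, key) does the same job without the manual caching.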
{ "language": "en", "url": "https://stackoverflow.com/questions/135448", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1760" }
Q: Can I use a List as a collection of method pointers? (C#) I want to create a list of methods to execute. Each method has the same signature. I thought about putting delegates in a generic collection, but I keep getting this error: 'method' is a 'variable' but is used like a 'method' In theory, here is what I would like to do: List<object> methodsToExecute; int Add(int x, int y) { return x+y; } int Subtract(int x, int y) { return x-y; } delegate int BinaryOp(int x, int y); methodsToExecute.add(new BinaryOp(add)); methodsToExecute.add(new BinaryOp(subtract)); foreach(object method in methodsToExecute) { method(1,2); } Any ideas on how to accomplish this? Thanks! A: Using .NET 3.0 (or 3.5?) you have generic delegates. Try this: List<Func<int, int, int>> methodsToExecute = new List<Func<int, int, int>>(); methodsToExecute.Add(Subtract); methodsToExecute[0](1,2); // equivalent to Subtract(1,2) A: List<Func<int, int, int>> n = new List<Func<int, int, int>>(); n.Add((x, y) => x + y); n.Add((x, y) => x - y); n.ForEach(f => f.Invoke(1, 2)); A: You need to cast the object in the list to a BinaryOp, or, better, use a more specific type parameter for the list: delegate int BinaryOp(int x, int y); List<BinaryOp> methodsToExecute = new List<BinaryOp>(); methodsToExecute.Add(Add); methodsToExecute.Add(Subtract); foreach(BinaryOp method in methodsToExecute) { method(1,2); } A: I like Khoth's implementation better but I think what is causing your compiler error is that you don't cast method to a BinaryOp before you try to invoke it. In your foreach loop it is merely an "object". Change your foreach to look like Khoth's and I think it would work. A: Whenever I have been tempted to do something like this, I have found that it is generally better to refactor your design to use the command pattern, especially since all of your methods have the same parameters. This way allows for far more flexibility. 
A: Have them all implement a common interface, say IExecutable, and then have a List<IExecutable> Also, using delegates: class Example { public delegate int AddDelegate(int x, int y); public List<AddDelegate> methods = new List<AddDelegate>(); int Execute() { int sum = 0; foreach(AddDelegate method in methods) { sum+=method.Invoke(1, 2); } return sum; } } A: Haven't tried it but using a List<Action<T>> type should be able to do it.
{ "language": "en", "url": "https://stackoverflow.com/questions/135451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Will pthread_detach manage my memory for me? Suppose I have the following code: while(TRUE) { pthread_t *thread = (pthread_t *) malloc(sizeof(pthread_t)); pthread_create(thread, NULL, someFunction, someArgument); pthread_detach(*thread); sleep(10); } Will the detached thread free the memory allocated by malloc, or is that something I now have to do? A: No. pthread_create() has no way of knowing that the thread pointer passed to it was dynamically allocated. pthreads doesn't use this value internally; it simply returns the new thread id to the caller. You don't need to dynamically allocate that value; you can pass the address of a local variable instead: pthread_t thread; pthread_create(&thread, NULL, someFunction, someArgument); A: You need to free the memory yourself. It would be preferable to simply allocate the pthread_t variable on the stack as opposed to the heap.
{ "language": "en", "url": "https://stackoverflow.com/questions/135458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What happens when you try to free() already freed memory in c? For example: char * myString = malloc(sizeof(char)*STRING_BUFFER_SIZE); free(myString); free(myString); Are there any adverse side effects of doing this? A: Answer summary: Yes, bad things can and probably will happen. To prevent this do: free(myString); myString = NULL; Note that all references to the memory must be set to NULL if others were created. Also, calling free() with a NULL results in no action. For more info see: man free A: Not so clever. Google for double free vulnerabilities. Set your pointer to NULL after freeing to avoid such bugs. A: It (potentially) makes demons fly out of your nose. A: Depending on which system you run it on, nothing will happen, the program will crash, memory will be corrupted, or any other number of interesting effects. A: Here's the chapter and verse. If the argument [to the free function] does not match a pointer earlier returned by the calloc, malloc, or realloc function, or if the space has been deallocated by a call to free or realloc, the behavior is undefined. (ISO 9899:1999 - Programming languages — C, Section 7.20.3.2) A: Always set a pointer to NULL after freeing it. It is safe to attempt to free a null pointer. It's worth writing your own free wrapper to do this automatically. A: Bad Things (TM) Really, I think it's undefined so anything at all including playing "Global Thermonuclear War" with NORAD's mainframe A: One of nothing, silent memory corruption, or segmentation fault. A: Don't do that. If the memory that got freed is re-allocated to something else between the calls to free, then things will get messed up. A: Yes, you can get a double free error that causes your program to crash. It has to do with malloc's internal data structures to keep track of allocated memory. A: It may crash your program, corrupt memory, or have other more subtle negative effects. After you delete memory, it is a good idea to set it to NULL (0). 
Trying to free a null pointer does nothing, and is guaranteed to be safe. The same holds true for delete in c++. A: In short: "Undefined Behavior". (Now, what that can include and why that is the case the others have already said. I just though it was worth mentioning the term here as it is quite common). A: The admittedly strange macro below is a useful drop-in replacement for wiping out a few classes of security vulnerabilities as well as aid debugging since accesses to free()'d regions are more likely to segfault instead of silently corrupting memory. #define my_free(x) do { free(x); x = NULL; } while (0) The do-while loop is to help surrounding code more easily digest the multiple-statements. e.g. if (done) my_free(x); A: Another interesting situation: char * myString = malloc(sizeof(char)*STRING_BUFFER_SIZE); char * yourString = myString; if (myString) { free(myString); myString = NULL; } // Now this one is safe, because we keep to the rule for // setting pointers to NULL after deletion ... if (myString) { free(myString); myString = NULL; } // But what about this one: if (yourString) { free(yourString); yourString = NULL; } //?!? :)
{ "language": "en", "url": "https://stackoverflow.com/questions/135474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: How do you measure the progress of a web service call? I have an ASP.NET web service which does some heavy lifting, like say, some file operations, or generating Excel Sheets from a bunch of crystal reports. I don't want to be blocked by calling this web service, so I want to make the web service call asynchronous. Also, I want to call this web service from a web page, and want some mechanism which will allow me to keep polling the server so that I can show some indicator of progress on the screen, like say, the number of files that have been processed. Please note that I do not want a notification on completion of the web method call; rather, I want a live progress status. How do I go about it? A: Write a separate method on the server that you can query by passing the ID of the job that has been scheduled and which returns an approximate value between 0-100 (or 0.0 and 1.0, or whatever) of how far along it is. E.g. in REST-style, you could make a GET request to http://yourserver.com/app/jobstatus/4133/ which would return a simple '52' as text/plain. Then you just have to query that every (second? two seconds? ten seconds?) to see how far along it is. How you actually accomplish the monitoring on the backend hugely depends on what your process is and how it works. A: I think XML web service is slow, so creating multiple methods and polling the progress will be extremely slow and will generate huge load on the server. I wouldn't do it in a production environment. I see the same (but smaller) problems with database polling. Try SOAP extensions instead. It implements an event-driven model. See Adding a Progress Bar to Your Web Service Client Application on MSDN. A: You can also use SoapExtensions to notify your client of the download/process progress. The server can then send events to the client. Nothing in the client has to be changed if you don't use it. Allows for something like this in your client: //... 
private localhost.MyWebServiceService _myWebService = new localhost.MyWebServiceService (); _myWebService.processDelegate += ProgressUpdate; _myWebService.CallHeavyMethod(); //... private void ProgressUpdate(object sender, ProgressEventArgs e) { double progress = ((double)e.ProcessedSize / (double)e.TotalSize) * 100.00; //Show Progress... } A: Have the initial "start report generation" web service call create a task in some task pool, and return the caller the ID of the task. Then, provide another method that returns the "percent done" for a given taskId. Provide a third method that returns the actual result for a completed task. A: Easiest way would be to have the Web Service update a field on a database with the progress of the call, and then create a Web Service that queries that field and returns the value. A: Make the web service return some sort of task ID or session ID. Make another web method to query with that ID, which returns the information needed (% completion, list of files, whatever). Poll this method at some interval from the client. Use a database to store the process information; if you do this in the memory of the web service, it will not scale well in a web farm environment, as it may happen that the task runs on another server than the one you are polling. EDIT: I just saw another similar answer, and a comment to it. The commenter is right - you can use an in-memory table to avoid disk operations, while still using a separate db server.
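The task-ID-plus-polling scheme from these answers can be sketched in JavaScript with an invented in-memory stand-in for the web service (startJob, getProgress and getResult are made-up names; a real page would call the actual service and poll on a timer):

```javascript
// Fake service standing in for the three web methods the answers describe:
// start a job, query percent-done by task id, fetch the finished result.
function makeFakeService(totalSteps) {
  let done = 0;
  return {
    startJob() { return 'task-1'; },           // returns a task id
    getProgress(id) {                          // 0-100, like the answers suggest
      done = Math.min(done + 1, totalSteps);
      return Math.round((done / totalSteps) * 100);
    },
    getResult(id) { return 'report.xlsx'; },   // fetched once progress hits 100
  };
}

function runWithPolling(service) {
  const id = service.startJob();
  const observed = [];
  let progress = 0;
  while (progress < 100) {                     // in a page this would be setInterval
    progress = service.getProgress(id);
    observed.push(progress);                   // e.g. update a progress bar here
  }
  return { result: service.getResult(id), observed };
}

const { result, observed } = runWithPolling(makeFakeService(4));
console.log(observed); // [25, 50, 75, 100]
console.log(result);   // report.xlsx
```

The loop is the only piece that changes in a real client: replace the synchronous while with a timer (or setInterval) that issues the status request, and stop polling once the progress method reports completion.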
{ "language": "en", "url": "https://stackoverflow.com/questions/135478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Sharepoint: Deploy Custom Lists and New Columns in lists I've created a custom list & also added a column in the Announcement List. Question is, how can I include those newly created items when I create a fresh Web Application (like a script, feature or something)? Additional Info: It's like when you're about to deploy from your development machine to a staging or production server. I'd like to have a script or something to update my production server to have the new column I've added to the Announcement List. Just like SQL Server's ALTER TABLE command to update a SQL Server Table. Is there an equivalent in Sharepoint Lists? TIA! A: Regarding the new custom list, this can be done using features. See How to: Create a Custom List Definition for more information. The Visual Studio Extensions for SharePoint (VS2005 / VS2008) will help you to extract the list definition if you've created it through the SharePoint UI. If you are fortunate enough to be using a custom site definition and don't have any webs created yet, you can set your site definition to create the custom list using feature stapling. If you are attempting to apply these changes to webs that already exist, you can still use a feature to define your custom list. It will just appear as a type of list that can be created. Then to have the custom list automatically created for existing webs or to modify existing lists such as the Announcements list, you can use a feature receiver. This allows you to run any custom code when the feature is activated. See the MSDN article Feature Events for more information. Alternatively, you could not use features at all as they can be difficult, time consuming and painful. In fact, this blog post has a good argument against the idea. You could try the tool mentioned on that page or other applications such as DocAve Content Manager and SharePoint Site Migration Manager. 
Microsoft provide 40 pre-built templates in the link below and the same technology is available to you. Links from this page should lead you to information showing you how you can create your own. Application Templates for Windows SharePoint Services 3.0 http://technet.microsoft.com/en-us/windowsserver/sharepoint/bb407286.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/135496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Using WPF, what is the best method to update the background for a custom button control? In WPF, we are creating custom controls that inherit from button with completely drawn-from-scratch xaml graphics. We have a border around the entire button xaml and we'd like to use that as the location for updating the background when MouseOver=True in a trigger. What we need to know is how do we update the background of the border in this button with a gradient when the mouse hovers over it? A: In your ControlTemplate, give the Border a Name and you can then reference that part of its visual tree in the triggers. Here's a very brief example of restyling a normal Button: <Style TargetType="{x:Type Button}"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type Button}"> <Border Name="customBorder" CornerRadius="5" BorderThickness="1" BorderBrush="Black" Background="{StaticResource normalButtonBG}"> <ContentPresenter HorizontalAlignment="Center" VerticalAlignment="Center" /> </Border> <ControlTemplate.Triggers> <Trigger Property="IsMouseOver" Value="True"> <Setter TargetName="customBorder" Property="Background" Value="{StaticResource hoverButtonBG}" /> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> </Style> If that doesn't help, we'll need to know more, probably seeing your own XAML. Your description doesn't make it very clear to me what your actual visual tree is. A: You would want to add a trigger like this... 
Make a style like this: <Style x:Key="ButtonTemplate" TargetType="{x:Type Button}"> <Setter Property="Foreground" Value="{StaticResource ButtonForeground}" /> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type Button}"> <Grid SnapsToDevicePixels="True" Margin="0,0,0,0"> <Border Height="20" x:Name="ButtonBorder" BorderBrush="{DynamicResource BlackBorderBrush}"> <TextBlock x:Name="button" TextWrapping="Wrap" Text="{Binding Path=Content, RelativeSource={RelativeSource TemplatedParent}}" SnapsToDevicePixels="True" Foreground="#FFFFFFFF" Margin="6,0,0,0" VerticalAlignment="Center"/> </Border> </Grid> <ControlTemplate.Triggers> <!-- Mouse over --> <Trigger Property="IsMouseOver" Value="True"> <Setter TargetName="ButtonBorder" Property="Background" Value="{DynamicResource ButtonBackgroundMouseOver}" /> <Setter TargetName="ButtonBorder" Property="BorderBrush" Value="{DynamicResource ButtonBorderMouseOver}" /> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> </Style> Then add some resources for the gradients, like this: <LinearGradientBrush x:Key="ButtonBackgroundMouseOver" EndPoint="0.5,1" StartPoint="0.5,0"> <GradientStop Color="#FF000000" Offset="0.432"/> <GradientStop Color="#FF808080" Offset="0.9"/> <GradientStop Color="#FF848484" Offset="0.044"/> <GradientStop Color="#FF787878" Offset="0.308"/> <GradientStop Color="#FF212121" Offset="0.676"/> </LinearGradientBrush> Please let me know if you need more help with this.
{ "language": "en", "url": "https://stackoverflow.com/questions/135518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: In Access 2003, problem with a memo field if and only if there is a filter on the PK and a sub query is joined I have a problem in a query in Access 2003 (SP3). I have a query that includes some tables and a sub query. The sub query and tables are all joined to a main table. The query uses some aggregate functions and there is a HAVING clause that filters the result on the primary key (PK). Under these conditions, a memo field of the main table is not displayed properly. Two garbage characters, never the same, are displayed instead of the content of the field. Now what is weird is that if I remove the HAVING clause, or if I use it to filter on something else other than the PK, the field is displayed correctly. If I remove the sub query from the query the field is also displayed correctly even if there is still a filter (HAVING clause) on the PK. Is this a bug in Access (I think it is)? If so, does someone know of a workaround for this bug? A: MS Access truncates Memo fields to 255 characters in GROUP BY queries (before Access 2000, they wouldn't work in them at all). However, to work around the apparent bug, try this: instead of MemoField, use Left([MemoField], 255)
{ "language": "en", "url": "https://stackoverflow.com/questions/135526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Will my index be used if all columns are not used? I have an index on columns A, B, C, D of table T. I have a query that pulls from T with A, B, C in the WHERE clause. Will the index be used or will a separate index be needed that only includes A, B, C? A: It depends! WHERE A like '%x%' and B = 1 and C = 1 // WHERE A = 1 OR B = 1 OR C = 1 // WHERE DateAdd(dd, 1, A) = '2008-01-01' AND B = 1 AND C = 1 These will not rely on the index, because the index is not useful. Click on "display estimated execution plan" to confirm potential index usage. A: In Oracle databases this is called a Composite Index (11g docs, but valid for earlier versions): Composite indexes can speed retrieval of data for SELECT statements in which the WHERE clause references all of the leading portion of the columns in the composite index. Therefore, the order of the columns used in the definition is important. In general, the most commonly accessed columns go first. So in your case, yes, the index would/could be used. This can be verified by using an explain plan. If MS SQL Server is different (and I suspect it might be), you'll need a new answer. Edit: I should also mention it will only consider the index for use; that does not necessarily mean it WILL use it. Edit2: Oracle 11g and later now has an option that will allow it to skip columns in an index, so a query on A, B and D might still use the index. A: The index will be used, yes. It's fairly smart about which indexes will produce a more optimal query plan, and it should have no trouble with that. As with this sort of thing, don't take my word for it - benchmark it. Create a table, fill it with representative data, query it, index it, and query it again. A: The fact that the index contains a column which is not used in your query will not prevent it from being used. That's not to say that it definitely will be used; it may be ignored for a different reason (perhaps because one or more other indexes are more useful).
As always, take a squizz at the estimated execution plan to see what is likely to happen. A: Start with the simple equals lookup (WHERE A=1 and B='Red' and C=287): yes, the index will (most likely) be used. The index will be used first to help the optimizer "guess" the number of rows that will match the selection, and then second, to actually access those rows. In response to David B's comment about the "like" predicate, SQL Server may still use the index; it depends on what you're selecting. For example, if you're selecting a count(*) then SQL Server would likely scan the index and count the hits that match the where clause, since the index is smaller and would require fewer IOs to scan. And it may decide to do that even if you're selecting some columns from the base table, depending on how selective SQL Server feels the index is. A: David B is right that you should check the execution plan to verify the index is being used. Will the index be used or will a separate index be needed that only includes A, B, C? To answer this last part of the question, which I think is the core underlying topic (as opposed to the immediate solution), there is almost never a reason to index a subset of your indexed columns. If your index is (A, B, C, D), a WHERE against (A, B, C) will most likely result in an index seek, which is the ideal situation -- the index includes all the information the engine needs to go directly to the result set. I believe this holds true for numeric types and for equality tests in string types, though it can break down with LIKE '%' patterns. On the other hand, if your WHERE only referenced D, you would most likely end up with an index scan, which would mean that the SQL engine would have to scan across all combinations of A, B, and C, and then check whether D met your criteria before deciding whether to add the row to the result set.
On a particularly large table, when I found myself having to do a lot of queries against column "D", I added an additional index for D only, and saw about 90% performance improvement. Edit: I should also recommend using the Database Engine Tuning Advisor in SQL Management Studio. It will tell you if your table isn't indexed ideally for the query you want to run. A: Generally speaking yes it will be; all modern databases are clever enough to do this. There are exceptions: for example, if the statistics on the table show that the volume of data in it is sufficiently small that a full table read will be more efficient, then the index will be discounted. But as a rule, you can rely on it where appropriate. Consequently, you can take advantage of this when designing your indexes. Say for example I have a table which contains A, B, C as key values and columns Y and Z containing data which I know will be retrieved often by the statements SELECT Y FROM table WHERE A = alpha and B = beta and C = gamma SELECT Z FROM table WHERE A = alpha and B = beta and C = gamma Then I will generally create an index on A,B,C,Y,Z - assuming that Y and Z are reasonably small fields. The reason for this is that I know the access pathway in the statements above will use the index, and as the data I want to retrieve is already in the index, no separate read of the block of data containing the table data itself will be needed. This strategy can dramatically speed up data retrieval in some circumstances. Of course, you pay for it in update costs and disk space, so you need to understand what your database is doing before applying it, but as in most databases reads dramatically outnumber writes, it's generally well worth the consideration.
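This leading-column behaviour can be demonstrated even with SQLite, via Python's standard library. SQLite is a different engine from SQL Server or Oracle, so treat this only as an illustration of the general principle (the table and index names here are made up):

```python
import sqlite3

# A table with a composite index on (a, b, c, d), plus an extra column (e)
# so that SELECT * cannot be satisfied by the index alone.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a, b, c, d, e)")
conn.execute("CREATE INDEX idx_abcd ON t (a, b, c, d)")

def plan(where):
    # EXPLAIN QUERY PLAN reports whether SQLite seeks the index or scans.
    rows = conn.execute(
        f"EXPLAIN QUERY PLAN SELECT * FROM t WHERE {where}").fetchall()
    return rows[0][-1]  # the human-readable detail column

print(plan("a = 1 AND b = 1 AND c = 1"))  # a SEARCH using idx_abcd
print(plan("d = 1"))                      # a SCAN: d is not a leading column
```

Filtering on the leading columns (a, b, c) produces an index search even though column d in the index goes unused; filtering only on the trailing column d falls back to a scan.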
Here are a couple of articles on reading an execution plan that you should find useful: http://www.sqlservercentral.com/articles/Administering/executionplans/1345/ http://www.codeproject.com/KB/database/sql-tuning-tutorial-1.aspx There's also a good article on seeks vs. scans that I'd recommend: http://blogs.msdn.com/craigfr/archive/2006/06/26/647852.aspx There are a log of good articles on Craig Freedman's blog, here's another one you should find useful. This article is about some of the factors SQL Server uses to determine which index to use... http://blogs.msdn.com/craigfr/archive/2006/07/13/664902.aspx Take care! Jeff
{ "language": "en", "url": "https://stackoverflow.com/questions/135530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: MS CRM Development Projects The shop that I am working part time with is new to Microsoft CRM. I just want to survey what projects developers have done to the system to extend its capabilities. A: I did some work with CRM 3.0. My work enhanced the program and turned it into a Document Management app, where you could scan and upload documents based on a case, contact, customer, vendor etc. The .NET SDK back then could have used a bit more work, but I hear with newer versions of CRM it has gotten better. CRM allows for attachments but not at all levels, more at the case level. A: I (and others) have implemented a LINQ query provider for the web service layer http://www.codeplex.com/LinqtoCRM. A: I can break the work I did into four sections: * *Tailoring - Simple field level changes. A lot of this is just making sure the fields and language suited the business I was developing for. *Customisation - More complex changes, generally needing JavaScript and maybe ASP.NET. Some examples would be to use an IFrame and pass values to it from a CRM form. The IFrame would then do interesting things like mapping, charting or give you buttons to do other things. For buttons I would often use JavaScript to replace the outerHTML in the HTML DOM of an IFRAME to show a button instead. *Integration - Using .NET to connect MSCRM to other systems. Connected it to Great Plains, Speech Server, SCOM (was called MOM back then), custom LOB systems etc... One interesting one I did was to develop an SSIS component that wrote into MSCRM via the web services. *Reporting - Building reports, in Reporting Services and in Excel. Excel made for great dashboards because of the dynamic update nature of it. I have a few (war) stories up at http://www.sadev.co.za/taxonomy/term/7 if you are interested. A: We extended 3.0 to send text messages rather than email. There are tools available (for 4.0 at least) to do that now, though.
We also extended it to push service appointments through to Exchange, as the workforce was mostly mobile and did not have the CRM client for Outlook. A: Two other major areas of development not yet mentioned are plugins and custom workflow assemblies. Plugins allow you to intercept many of the messages that CRM processes to insert custom logic. The possibilities this gives you are endless because you can intercept all kinds of messages including Create, Update, Delete, and even Read! Custom workflow assemblies are assemblies that you register with CRM that integrate with CRM's workflow UI. Often, they perform simple tasks that are useful when modifying form data. Other times, they are more complex, like sending status reports. Custom workflow assemblies are always listed beneath the native options. In the screenshot, everything below "Stop Workflow" was added via a custom workflow assembly.
{ "language": "en", "url": "https://stackoverflow.com/questions/135533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How would you easily validate a JSP option pick list? I need to validate this simple pick list: <select name="<%= key %>"> <option value="ETC" SELECTED>Select an option...</option> <option value="ONE">Lorem ipsum</option> <option value="TWO">dolor sit amet</option> </select> So the user would never submit the form with the, excuse the repetition, "Select an option..." option selected. In principle I'm allowed to use JavaScript but it'd be interesting to learn how to solve it within JSP too. A: You can never really satisfy the condition 'never submit a given value' because you don't have control over the client side. The user can always manipulate HTML to submit whatever they want. A good approach is to use JavaScript to do client-side validation, give the user quick feedback and catch 99%+ of the cases, then do a server-side validation of the submitted parameters to catch the minority that don't have JS enabled or who manipulate the HTML to submit non-expected values. Just remember that the client-side validation is optional, and is good for catching those 'common mistakes' in input, but the server-side validation is mandatory for all input whether or not any client-side checks have been done on the given input. A: Nowadays you usually don't validate in JSPs, because they only visualize whatever was processed earlier. So the only validation that you do "in JSPs" is usually JavaScript. For the rest (the real validation) I second what Pete answered: you have to do it server-side in whatever technique you are using there. When it's displayed in the JSP again, validation has hopefully long been done. As I said "nowadays": when JSP was a shiny new concept, a lot more was done inside the boundaries of a JSP and sometimes even forms were posted to JSPs. How to validate was even more nonstandard at that time. A: JSP is a "view" in the MVC pattern, and should therefore only be used to present the data to the user.
Any application logic, including validation logic, should be done server-side. If this JSP is intended to be a part of a large app, I would recommend using Spring MVC to set up your app, and writing a validator to validate the input. But even if we're not talking about some large application here, the validation should still be done server-side, as others before me have already noted.
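Whatever framework does the server-side work, the check itself boils down to a whitelist test. Here is a sketch in Python (the option values come from the question; the function name is made up, and in a real JSP app this logic would live in a servlet or validator class):

```python
ALLOWED = {"ONE", "TWO"}   # the real option values from the pick list
PLACEHOLDER = "ETC"        # the "Select an option..." sentinel value

def is_valid_choice(submitted):
    # Whitelist on the server: clients can POST anything, including the
    # placeholder or values that were never in the list at all.
    return submitted in ALLOWED

print(is_valid_choice("ONE"))   # True
print(is_valid_choice("ETC"))   # False: placeholder rejected
print(is_valid_choice("HAX"))   # False: never a real option
```

Note that checking "not the placeholder" alone is insufficient; only membership in the known-good set protects against manipulated submissions.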
{ "language": "en", "url": "https://stackoverflow.com/questions/135534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What are the benefits of OO programming? Will it help me write better code? I'm a PHPer, and am not writing object-oriented code. What are the advantages of OO over procedural code, and where can I learn how to apply these ideas to PHP? A: Objects help encapsulate complexity. For most PHP programming, it's impossible to write good, clean code for any reasonably complicated application. Writing OO PHP helps you put that code into its own box, isolating it from everything else. This has several benefits. * *As long as your object has clearly defined inputs and outputs, the way that the object does what it does doesn't matter at all - storing/retrieving data could go from flat file to XML to memcache to MySQL to Oracle, and you only ever have to concern yourself with one single object. *As long as your object has clearly defined inputs and outputs, you can completely replace it with another object that has the same inputs/outputs. Decide at runtime whether you want MySQL, Postgres, memcached, or HTTP POST/GET requests to a sketchy server in Indonesia. *OO makes unit testing easier. If you can define what a specific object should do (i.e what results it should give you for a given input) then you can easily write code to test thousands of values against that code and check the results, and you'll know instantly if something breaks. *The more of your code you 'hide' in objects, the less of it you have to see when you're using that functionality. I wrote a polling application once in PHP that handled all aspects of polling - database interaction, poll generation, voting, ranking, sorting, and displaying - and I only needed one line of code on my website (Poll::Display()) to implement the entirety of what the app could do - which made maintaining my homepage far easier. Keep one thing in mind - OO in PHP (even PHP5) isn't very good OO compared to a language like Python or Ruby. 
The everything-is-an-object model in Python is what made OO programming really click for me - and as a former PHP programmer (and doubly-certified Zend engineer), I strongly recommend exploring Python's OO if you want to understand what OO is all about. It will help you write better PHP code, at the very least. A: Yes, if you really get it. It helps you visualize how parts of a larger system can interact with each other. It's very useful at the design level. If you are just writing a few lines of code, the only benefit you will get is that it is generally a little easier to use a library broken into well-designed objects than just functions. To make good use of it, you also need to follow sound OO design practices: always encapsulating ALL your data, using many small classes, never large "catch-all" classes, having the class do your work for you instead of asking it for data and doing the work outside the class, etc. It's probably not going to help you much for a while, and possibly never if you are always doing small websites (I can't say this for sure, I don't really do PHP), but over time and on large projects it can be invaluable. A: One thing no one has mentioned is that OO code facilitates writing readable code: sherry.changePhoneNumber(); phoneCompany.assignNewPhoneNumberTo(sherry); sherry.receive(new PhoneNumber().withAreaCode("555").withNumber("194-2677")); I get a strange satisfaction from such aesthetics. A: The huge win for me is inheritance, or making an object that behaves almost exactly like another but with a few differences. Here's a real-world example from my office: We needed code to process TIFF files that a customer sent to us, convert them to a standard format, insert some information about the file into a database, then send a result email. I wrote this in a set of classes (in Python, but the idea is the same).
The "fetcher" class got emails from a POP3 mailbox and handed them to a "container" class which knew how to read attachments from an email. That class handed each image off to a "file object" class that did the necessary processing. Well, one day we got a customer who wanted to send PDF files. I subclassed the "TIFF file object" class and rewrote the "normalize" function to take a PDF as input instead, but left every other bit of code untouched. It worked the first time and I was pretty pleased. A change to our mailserver meant that I needed to fetch emails via IMAP. Again, I subclassed the "POP3 fetcher" so that it could speak IMAP. Problem solved. Another customer wanted to mail us CDs, so I subclassed the "email container" class with a "filesystem directory" class. Voila - done. In each of those cases, the new code was 95% similar to the old code. For example, the "TIFF file object" class has about 15 methods. The "PDF file object" class defines exactly one: the method that converts files in a specific format into our standard. All others it gets from its parent class. Now, you can definitely do the same kind of stuff procedurally, such as by writing: if fileobjecttype == 'TIFF': data = <snip 30 lines of code to read and convert a TIFF file> elif fileobjecttype == 'PDF': data = <another 45 lines to read a PDF> elif fileobjecttype == 'PNG': data = <yep, another one> The biggest difference is that I believe you can make OOP look much cleaner and more organized. My PDF class looks like: class PDFReader(GenericImageReader): def normalize(self): data = <45 lines to read a PDF> and that's it. You can tell at a glance that it only does one thing differently than the class it inherits from. It also forces you - or at least strongly encourages you - to make clean interfaces between the layers of your application. In my example, the PDFReader has no idea and doesn't care whether its image came from a POP3 mailbox or a CD-ROM. 
The POP3 fetcher knows absolutely nothing about attachments, since its job is merely getting emails and passing them along. In practice, this has allowed us to do some pretty amazing things with the absolute minimum amount of coding or redesign. OOP isn't magic, but it's a pretty fantastic way to keep your code organized. Even if you don't use it everywhere, it's still a skill that you really should develop. A: It doesn't help you automatically. You can write worse "OO" programs than structural programs, and vice versa. OOP is a tool which allows you to create more powerful abstractions. * *As with every powerful tool, you have to use it properly. *As with every powerful tool, it takes time to learn how to use it properly. *As with every powerful tool, you will make mistakes. *As with every powerful tool, you will have to practice a lot. *As with every powerful tool, you should read a lot about it, and read what other people think. Learn from others. *But, as with every powerful tool, there are people out there who misuse it. Learn to not learn bad practices from them. This is hard. A: There was a time, back when I first started programming, that I wrote user-oriented code. It worked great, but was hard to maintain. Then, I learned OO, and the code I wrote became easier to maintain, easier to share between projects, and life was good... for everyone except my users. Now, I know the true silver bullet of computer programming. I write OO code, but first I objectify my users. Treating people as objects may seem rude at first, but it makes everything much more elegant - you get to write all of your software to work with clearly-defined interfaces, and when a user sends an unexpected message you can merely ignore it, or, if marked with a flag signifying sufficient importance, throw an exception at them. Life, with OO, is good... A: I would argue that OOP suits those who think in 'objects', where an object consists of data as well as the functions that operate on that data.
* *If you tend to think of functions and the data they operate on as separate things, then you are a procedural programmer. *If you tend to think of functions and the data they operate on as being connected, then you are an object-oriented programmer. I'd caution against going out and learning about patterns. In order to do object-oriented programming well, you need to teach yourself to think like an object-oriented programmer. You'll need to get to the point where you understand and can name the benefits of: * *Encapsulation *Classes vs instances/objects *Inheritance and polymorphism It will help you to be a better programmer only in the sense that the more styles of programming a programmer knows, the more range in his repertoire for solving problems and writing elegant code. You cannot go off and write all your code object-oriented and automatically have good code, but if you really understand how OOP works, and you're not just copy-pasting some popular design patterns of the day, then you can write some pretty good code, especially when writing a large application. A: It seems everybody is responding to your question literally, i.e., the specific benefits/drawbacks of OO. You should learn OO, but not because OO has any specific magic that you need. The more general form is: * *Q: "Should I learn (OO, FP, concurrent, logic-based, event-driven, ...) programming?" *A: "Yes, learning a new paradigm is always useful, even if you don't use it directly every day." A: Objects help keep your code isolated between different sections, so that if you need to make a change to one section you can be confident it won't affect other sections: loose coupling. Then, when you've done that for a while, you'll start finding that objects you created for one app are also useful in others, and you start getting better code re-use as well. So the new app has part of the work already done, and is using time-tested code: software is built faster with fewer bugs. 
A: I would put it this way: If you write anything complex, you should encode the concepts you think in, rather than trying to think in concepts that are somehow native to the language you are using. This way you make less bugs. The formalization of those concepts is called design. Functional programming lets you define concepts that are associated with verbs, since each function is essentially a verb (e.g., print()). OO programming, on the other hand, lets you also define concepts associated with nouns. A: I'm a long-time procedural PHP programmer who occasionally dabbles in object oriented PHP. Joel's answer above is an excellent summary of the benefits. In my opinion, a subtle secondary benefit, is that it forces you to better define your requirements from the start. You have to understand the relationships between the objects and the methods that will be acting upon them. A good book to help with the transition is Peter Lavin's "Object-Oriented PHP". A: To elaborate on Joeri's answer a little: The International Organisation for Standardization defines encapsulation as, 'The property that the information contained in an object is accessible only through interactions at the interfaces supported by the object.' Thus, as some information is accessible via these interfaces, some information must be hidden and inaccessible within the object. The property such information exhibits is called information hiding, which Parnas defined by arguing that modules should be designed to hide both difficult decisions and decisions that are likely to change. Note that word: change. Information hiding concerns potential events, such as the changing of difficult design decisions in the future. Consider a class with two methods: method a() which is information hidden within the class, and method b() which is public and thus accessible directly by other classes. There is a certain probability that a future change to method a() will require changes in methods in other classes. 
There is also a certain probability that a future change to method b() will require changes in methods in other classes. The probability that such ripple changes will occur for method a(), however, will usually be lower than that for method b(), simply because method b() may be depended upon by more classes. This reduced probability of ripple impacts is a key benefit of encapsulation. Consider the maximum potential number of source code dependencies (MPE - the acronym is from graph theory) in any program. Extrapolating from the definitions above, we can say that, given two programs delivering identical functionality to users, the program with the lowest MPE is better encapsulated, and that statistically the more well-encapsulated program will be cheaper to maintain and develop, because the cost of the maximum potential change to it will be lower than the maximum potential change to the less well-encapsulated system. Consider, furthermore, a language with just methods and no classes and hence no means of information hiding methods from one another. Let's say our program has 1000 methods. What is the MPE of this program? Encapsulation theory tells us that, given a system of n public nodes, the MPE of this system is n(n-1). Thus the MPE of our 1000 public methods is 999,000. Now let's break that system into two classes, each having 500 methods. As we now have classes, we can choose to have some methods public and some methods private. This will be the case unless every method is actually dependent on every other method (which is unlikely). Let's say that 50 methods in each class are public. What would the MPE of the system be? Encapsulation theory tells us it's n((n/r) - 1 + (r-1)p), where r is the number of classes, and p is the number of public methods per class. This would give our two-class system an MPE of 549,000. Thus the maximum potential cost of a change in this two-class system is already substantially lower than that of the unencapsulated system.
Let's say you break your system into 3 classes, each having 333 methods (well, one will have 334), and again each with 50 public methods. What's the MPE? Using the above equation again, the MPE would be approximately 432,000. If the system is broken into 4 classes of 250 methods each, the MPE would be 399,000. It may seem that increasing the number of classes in our system will always decrease its MPE, but this is not so. Encapsulation theory shows that the number of classes into which the system should be decomposed to minimise MPE is r = sqrt(n/p), which for our system is sqrt(1000/50), or about 4. A system with 6 classes, for example, would have an MPE of approximately 415,667. The Principle of Burden takes two forms. The strong form states that the burden of transforming a collection of entities is a function of the number of entities transformed. The weak form states that the maximum potential burden of transforming a collection of entities is a function of the maximum potential number of entities transformed. In slightly more detail, the burden of creating or modifying any software system is a function of the number of program units created or modified. Program units that depend on a particular, modified program unit have a higher probability of being impacted than program units that do not depend on the modified program unit. The maximum potential burden a modified program unit can impose is the impacting of all program units that depend on it. Reducing the dependencies on a modified program unit therefore reduces the probability that its update will impact other program units and so reduces the maximum potential burden that that program unit can impose. Reducing the maximum potential number of dependencies between all program units in a system therefore reduces the probability that an impact to a particular program unit will cause updates to other program units, and thus reduces the maximum potential burden of all updates.
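The MPE arithmetic can be evaluated directly with a short script. The mpe function below is just the formula stated above (not part of any library); note that with r = 1 it degenerates to the unencapsulated n(n-1):

```python
from math import sqrt

def mpe(n, r, p):
    # n methods split over r classes, p public methods per class:
    # each method may depend on the (n/r - 1) others in its own class
    # plus the (r - 1) * p public methods of the other classes.
    return n * ((n / r) - 1 + (r - 1) * p)

n, p = 1000, 50
print(mpe(n, 1, p))             # 999000.0  (no encapsulation)
for r in (2, 3, 4, 6):
    print(r, round(mpe(n, r, p)))
print(sqrt(n / p))              # continuous optimum, about 4.47 classes
```

Running the loop shows the MPE falling as r grows toward the optimum and rising again past it, which is the non-monotonic behaviour described above.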
Thus, encapsulation is a foundation stone of object orientation: it helps us to reduce the maximum potential number of dependencies between all program units and to mitigate the weak form of the Principle of Burden. A: People will tell you various things about OOP, from various perspectives. But if you want to form your own opinion, rather than take someone else's, then I suggest reading Bertrand Meyer's "Object-Oriented Software Construction". Essentially, he takes non-OOP programming techniques, and analyses their basic flaws. He then derives an alternative technique which addresses those flaws. Put another way, he derives OOP from first principles. It's a marvellous piece of work, and very convincing. Read it, and you'll learn the why, when and what in a way that you can back up with reasoning. A: A large system, such as Wordpress, Drupal, or XOOPS, uses OOP concepts. You can see the benefits of their use there: code reuse, modularity, maintainability, and extensibility. You have the ability to modify parts of objects and have the change affect the entire application; no searching to replace every spot where you did some operation (and possibly missing one). You can reuse objects all over, saving an awful lot of copying and pasting. Patching a bug requires patching the one object, not 16 pages of code that all do the same thing. When you encapsulate the logic and "hide" the implementation, it's easier to use the objects, both for you 6 months from now when you've forgotten why you did something, and for the next guy or gal who uses your code. For example, all you do to loop through posts in Wordpress is call a function. I don't need to know how it works, I only need to know how to call it. OOP is really procedural code wrapped in object methods/functions. You do still need to know how to write decent linear code in order to implement the methods and functions of objects. It just makes it far easier to reuse, scale, fix, debug, and maintain your things. 
A: http://www.onlamp.com/pub/a/php/2005/07/28/oo_php.html A: I would say there are two primary benefits: * *Encapsulation: code in libraries that shouldn't be called from outside the library can be hidden, preventing misuse, and easing changes to the internal structure of the library while preserving the external interface. In a good OO design, changes are introduced more easily once the code is complete. *Abstraction: instead of dealing with arrays of arrays you're dealing with employees, departments, and so on. This means you can focus on the business logic, and write fewer lines of code. Fewer lines means fewer bugs. Reuse I wouldn't strictly qualify as an OO benefit, because in a purely procedural model smart library organization lets you reuse code as well. On the other hand, using lots of objects in PHP tends to decrease performance compared to a procedural model, because there is too much object construction overhead for every request. Finding a good balance between procedural-style and OO-style code is imperative. A: To learn OO in PHP I'd recommend trying some well-written OO PHP framework. You may want to look at Zend Framework. A: I am a PHP developer as well; although I do have a strong OOP background, I would say the best book on using OOP with PHP has to be PHP 5 Objects, Patterns, and Practice. A: One of the best things that I did with OOP in PHP is the class generator. Any given table involves almost the same SQL operations: * *insert *update *select *delete *exists (check if a row exists) *search *list Now I don't have to write all these SQL statements anymore, except on special conditions. Not only have I cut the coding above down to a minute, I've also saved time debugging. So whenever there are changes to the table structure, I simply regenerate the class. You should probably try the same; it works for me and my customers like it!
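To illustrate the class-generator idea in the last answer: given a table description, the per-table statements it lists can be stamped out mechanically. A Python sketch (the table and column names are made up, and a real generator would also emit the wrapper class around these strings):

```python
def crud_sql(table, columns, key="id"):
    # Stamp out the boilerplate statements listed above for one table.
    cols = ", ".join(columns)
    marks = ", ".join("?" for _ in columns)
    sets = ", ".join(f"{c} = ?" for c in columns)
    return {
        "insert": f"INSERT INTO {table} ({cols}) VALUES ({marks})",
        "update": f"UPDATE {table} SET {sets} WHERE {key} = ?",
        "select": f"SELECT {cols} FROM {table} WHERE {key} = ?",
        "delete": f"DELETE FROM {table} WHERE {key} = ?",
        "exists": f"SELECT 1 FROM {table} WHERE {key} = ? LIMIT 1",
        "list":   f"SELECT {cols} FROM {table}",
    }

print(crud_sql("customer", ["name", "email"])["insert"])
# INSERT INTO customer (name, email) VALUES (?, ?)
```

When the table structure changes, regenerating is one call with the new column list.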
{ "language": "en", "url": "https://stackoverflow.com/questions/135535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: Best place to Sort a List of Tasks I'm building a web application which is a process management app. Several different employee types will be shown a list of Tasks to do, and as each one completes a task it is moved on to the next employee to work on. The Task hierarchy is Batch > Load > Assembly > Part > Task. There are currently 8 rules for determining which Task should be worked on first for each Employee type. These rules apply to the size of the part, and also to how that part's completion will affect the Hierarchy, e.g. if part A is completed then it completes an entire Batch, whereas Part B will not, as there are still other parts remaining in its Batch to be completed. Anyway, that's the elevator pitch of how the system works. What I am trying to figure out is an efficient, fast and maintainable way of doing this, bearing in mind the rules might change and more rules may be added. Initially I was intending to let the DB (SQL 2005) do all the heavy lifting, but I have a worry that the more complicated rules will be difficult to implement in a DB. So an alternative is to pull out a list of Tasks into a middle tier, create a Collection of objects and then apply each of the rules to the Collection. I have no doubt that each rule could be translated to T-SQL in isolation, but ordering by up to 8 criteria depending on the task type feels like a lot of hassle. One benefit I can see with the middle tier approach is that I can create a more loosely restricted system where Task flow can be changed; that would be more difficult in the DB, I think. So what would you guys recommend? Is there a third alternative I haven't thought of? EDIT[1] Just to qualify this a bit more, the DB is not expected to change from what I initially develop it in. A: It is difficult to determine from the details in the question. 
However, putting your logic in a business logic (middle) layer will mean that your business rules can continue to use the same code no matter what the back-end database may be. At the moment you specify T-SQL but is it possible that in the future you will move to a non-SQL Server environment? A: What platform? .NET 3.5 introduces LinqToSQL that may make this a moot point. You could use a strategy pattern to select from/build an appropriate query based on the task type, then let LINQ do the translation to SQL for you. That way you can build the query in code, but still have it actually executed on the DB.
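One middle-tier shape for this, whatever the platform: express each rule as a key function and rely on a stable multi-key sort, so adding, removing, or reprioritising a rule is a one-line change to the list. A Python sketch (the Task fields and the two sample rules are invented for illustration, not the real eight):

```python
from dataclasses import dataclass

@dataclass
class Task:
    part_size: int
    completes_batch: bool
    employee_type: str

# One key function per business rule; position in the list is its priority.
RULES = {
    "assembler": [
        lambda t: not t.completes_batch,  # batch-completing parts first
        lambda t: -t.part_size,           # then biggest parts first
    ],
}

def order_tasks(tasks, employee_type):
    rules = RULES[employee_type]
    return sorted(tasks, key=lambda t: tuple(rule(t) for rule in rules))

tasks = [Task(3, False, "assembler"), Task(9, True, "assembler"),
         Task(5, True, "assembler")]
print([t.part_size for t in order_tasks(tasks, "assembler")])  # [9, 5, 3]
```

Because the rule list is plain data, it could itself be loaded per employee type from configuration, which addresses the "rules might change" worry.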
{ "language": "en", "url": "https://stackoverflow.com/questions/135539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I repair "upgraded" subversion working directories? It may sound stupid, but sometimes I run into version conflicts between two versions of subversion. I mount a directory on a development server with sshfs and then edit the code with my local Vim. For subversion stuff like updating, committing, etc., I ssh into the server and do it there. However, sometimes I mix up my shells and accidentally do an update or commit in my local shell in the mounted directory. Subversion exits with an error, which is fine. However, when I try to do the same thing on the development server in my ssh session, subversion says that the working directory has a wrong version. The subversion version on the server is older than the version on my notebook, so I guess my (newer) version somehow upgrades the working directories so they are incompatible with the old version on the development server. Sometimes deleting the .svn/lock files helps, but only if I do it right after I executed the subversion command on my notebook. When I execute the command on the development server afterwards, the lock files disappear and I don't see a way to rescue the checkout. This wouldn't be so bad if the repository wasn't that big, especially when I've made lots of changes and can't commit them. The only solution I see at the moment is to copy the files I changed somewhere, remove the checkout, do a complete new checkout and copy the files back. Is there a better solution to rescue a broken checkout and/or my changes? UPDATE The FAQ Mikael Sundberg linked contained the answer. I write it down here, because he doesn't explicitly mention it. 
There's a script that can downgrade upgraded working copies, when it's safe: http://svn.apache.org/repos/asf/subversion/trunk/tools/client-side/change-svn-wc-format.py A: Maybe this FAQ entry could be helpful: http://subversion.tigris.org/faq.html#working-copy-format-change A: I am not sure if I understand your problem correctly, but here are some tips: First, one tip -- did you try the SVN Clean command? The next one is probably useless for you :), but many people used to CVS don't realize that SVN has a feature like this, so I'll write it down. If you use Windows and the Tortoise SVN client, you can choose the Show Log command from the context menu (right mouse button). When the list of check-ins appears, choose the last good one and pick the revert-to-this-revision command from the context menu. Otherwise (not Windows nor Tortoise SVN), check the documentation for the command-line versions of the commands described above. The next tip could help you avoid mixing your shells. You've written that you use an SSH shell -- to avoid problems with mixed-up shell sessions I always use a different background color for each server (and account) I log into (e.g. white on black for development, yellow on navy for the testing server and yellow on green for the production server). This has saved me many times :).
{ "language": "en", "url": "https://stackoverflow.com/questions/135543", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I interpret error codes from FrontPage Extensions? Wrong answer was autoselected by the stupid bounty system. I'm using front page extensions to interact with SharePoint Services 3.0 as described here. In most samples I have seen, the client simply looks for particular English strings in the result and uses that to determine if an error has occurred. However, I am writing an application which may be widely deployed and put on non-English-language SharePoint servers, so I would like to use the returned error codes instead. Unfortunately, the documentation for the error codes is very poorly defined. It contains such gems as: Although many RPC protocol methods have unique error messages, most rely on a standard error message format to relay information if a method fails to complete properly. Hrm, what would be this "standard error message format"... and The status is the error code from FrontPage Server Extensions for the condition encountered. osstatus is the error code from the operating system. also sadly entertaining: In general, the codes are integer values and the messages are text strings that summarize the error. but nowhere is a table which describes the possible content of these errors to be found. It seems likely to me that the OS error code would be an HRESULT, but I have no idea what to look for in terms of potential sources for SharePoint error codes. My only clue is that status=589826 seems to indicate that a file already exists. Wrong answer was autoselected by the stupid bounty system. A: I guess it refers to this list of "standard" system error codes: http://msdn.microsoft.com/en-us/library/ms681381(VS.85).aspx
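One small trick when probing opaque codes like status=589826: many packed error codes split into a high and a low 16-bit word, which can hint at a category/detail pair. This is purely exploratory; nothing in the FrontPage documentation confirms this layout:

```python
# Split the mystery status into 16-bit words.
status = 589826
hi, lo = divmod(status, 1 << 16)
print(hex(status), hi, lo)  # 0x90002 9 2
```

Here 589826 is 0x90002, i.e. high word 9 and low word 2, which at least gives something concrete to search the headers for.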
{ "language": "en", "url": "https://stackoverflow.com/questions/135556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to change settings for a Visual Studio 2005 Deployment Project I have a project containing at least one DLL along with the executable output. I have added a Deployment Project to this solution, which asked me for a name and working directory upon creation like all projects. I named this "MyProduct_Installer" and have been able to modify all aspects of the installation process except for changing the name of the installer itself. Throughout the install process, the user sees messages like "Welcome to the MyProduct_Installer Installer." Even in the Add/Remove Programs list, this is the application's ill-conceived title. How do I change this setting? I have tried right-click/Properties, as well as all the View options. I couldn't find anything in the assembly information for the executable project, or the solution properties. I have tried right-clicking on the project in the Explorer to change the properties, but here is what I see: There is no setting here to change the project title. A: If you haven't found the answer to this yet, here it is. Visual Studio has 2 sets of properties for projects - one which you can access by selecting the project in Solution Explorer, then right-clicking and selecting 'Properties'. The 2nd set of properties is in the 'Properties' window which shows up below the Solution Explorer. This is the same property window which is displayed for any of the Form property settings or any other control settings. The 'Product Name' and other project properties for a 'Setup' project can be found in the second property window. Hope this helps. AC A: The easy way to get to the properties you are interested in is to use the F4 shortcut when the project is highlighted. As stated in previous posts, this is a very different list to the one you get by right-clicking and selecting Properties. A: If you mean a Setup project like for winforms, it's the ProductName property. 
In Studio, I just click on the project name in the Explorer and I get the property window typical of other projects, and it's right there. Other properties include the AddRemoveProgramsIcon, InstallAllUsers, and RemovePreviousVersions. A: I happened across this post when I was having trouble renaming the Product as well, in regard to ClickOnce publishing. After updating all the old names I couldn't get publishing to correct itself. In my case I found it by opening the project file, xyz.vbproj, in Notepad and updating the <ProductName>xyz wrong name</ProductName> element that was still wrong. It was the only place I could find to update it, since neither publishing nor any property window exposed this.
{ "language": "en", "url": "https://stackoverflow.com/questions/135573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How can I track the sales rank of an item on Amazon programmatically? I've seen several products that will track the sales rank of an item on Amazon. Does Amazon have any web-services published that I can use to get the sales rank of a particular item? I've looked through the AWS and didn't see anything of that nature. A: You should be able to determine the Sales Rank by querying for the SalesRank response group when doing an ItemLookup with the Amazon Associates Web Service. Example query: http://ecs.amazonaws.com/onca/xml? Service=AWSECommerceService& AWSAccessKeyId=[AWS Access Key ID]& Operation=ItemLookup& ItemId=0976925524& ResponseGroup=SalesRank& Version=2008-08-19 Response: <Item> <ASIN>0976925524</ASIN> <SalesRank>68</SalesRank> </Item> See the documentation here: http://docs.amazonwebservices.com/AWSECommerceService/2008-08-19/DG/index.html?RG_SalesRank.html A: Amazon have changed their API so it now requires a signature: http://developer.amazonwebservices.com/connect/ann.jspa?annID=483 so the above example no longer works from August 2009 onwards.
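For completeness, a small sketch of assembling that ItemLookup URL (the helper name is mine; per the last answer, requests made after August 2009 additionally need Timestamp and Signature parameters, which are omitted here):

```python
from urllib.parse import urlencode

def sales_rank_url(asin, access_key):
    # Mirrors the documented ItemLookup example above; signing is omitted.
    params = {
        "Service": "AWSECommerceService",
        "AWSAccessKeyId": access_key,
        "Operation": "ItemLookup",
        "ItemId": asin,
        "ResponseGroup": "SalesRank",
        "Version": "2008-08-19",
    }
    return "http://ecs.amazonaws.com/onca/xml?" + urlencode(params)

print(sales_rank_url("0976925524", "MY-ACCESS-KEY-ID"))
```

Fetching that URL and parsing the `<SalesRank>` element out of the XML response gives the number to record per item over time.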
{ "language": "en", "url": "https://stackoverflow.com/questions/135591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Reproducing the blocked exe "unblock" option in file properties in windows 2003 When I download my program from my website to my Windows 2003 machine, it has a block on it and you have to right-click on the exe, then Properties, then select the button "Unblock". I would like to add detection in my installer for when the file is blocked and hence doesn't have enough permissions. But I can't easily reproduce getting my exe into this state where it needs to be unblocked. How can I get the unblock to appear on my exe so I can test this functionality? A: This is done using NTFS File Streams. There is a stream named "Zone.Identifier" added to downloaded files. When IE7 downloads certain types of file, that stream contains: [ZoneTransfer] ZoneId=3 The simplest way to set it is to create a text file with those contents in it, and use more to add it to the alternate stream. Zone.Identifier.txt: [ZoneTransfer] ZoneId=3 Command: more Zone.Identifier.txt > file.exe:Zone.Identifier Then, the way for you to check it would be to try to open the Zone.Identifier stream and look for ZoneId=3, or simply assume that if the stream exists at all, your user will receive that warning. It's also important to note that this has nothing to do with permissions. Administrators see the same warning; it's to do entirely with the source and type of file. The entire stream goes away when users uncheck the "Always ask before opening this file" box and then click Run. A: There is a supported API for this, documented on MSDN. Search on MSDN for "Persistent Zone Identifier Object". Basically you CoCreateInstance with CLSID_PersistentZoneIdentifier and request an IPersistFile interface. You then call IPersistFile::Load with the name of the file in question. Next, QI for an IZoneIdentifier interface and use IZoneIdentifier::GetId to obtain the zone of the file. If there was no "mark of the web", you should get URLZONE_LOCAL_MACHINE. The ZoneId of 3 mentioned in the other reply is URLZONE_INTERNET. 
(The enumeration is called URLZONE and is also documented on MSDN, or see sdk\inc\urlmon.h.) You can remove or change the "mark of the web" by calling IZoneIdentifier::Remove or IZoneIdentifier::SetId and then call IPersistFile::Save. There are more details about all of this on MSDN. Good luck! A: Thanks for this it helped me a lot. You can make the process even easier if you create a batch file with the contents. echo [ZoneTransfer] > Zone.Identifier echo ZoneId=3 >> Zone.Identifier more Zone.Identifier > %1:Zone.Identifier This will generate the Zone.Identifier for you and mark the file accordingly. To run it just supply the file name e.g. if the file is called mark.bat mark.bat myfile.txt
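The same stream the batch file writes can also be produced from code. A Python sketch (the helper name is mine; alternate data streams only exist on NTFS, hence the Windows guard):

```python
import os

ZONE_TRANSFER = "[ZoneTransfer]\r\nZoneId=3\r\n"

def add_mark_of_the_web(path):
    # Writes the Zone.Identifier alternate data stream described above,
    # making the file look freshly downloaded from the Internet zone.
    if os.name != "nt":
        raise OSError("alternate data streams require NTFS/Windows")
    with open(path + ":Zone.Identifier", "w", newline="") as stream:
        stream.write(ZONE_TRANSFER)
```

Deleting the stream (or unchecking "Always ask before opening this file" and clicking Run) removes the mark again.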
{ "language": "en", "url": "https://stackoverflow.com/questions/135600", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Is there any documentation other than the official documentation? The official documentation seems to be light on explanations of 'how to understand the SproutCore way', e.g., why things are done the way they are, how to intuit what to do from where you're at, etc. A: [Update] The latest resource is the new wiki, which has information on how to get started with Version 1.0 Alpha. A: There is a GitHub wiki on SproutCore. It gives a good list of howtos and tutorials. http://github.com/sproutit/sproutcore/wikis It is supposed to be a fairly easy JavaScript framework to learn, but it has its moments.
{ "language": "en", "url": "https://stackoverflow.com/questions/135607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to build escrow style system I have a website where clients pay for a service but have to put the money up front via escrow. I'm very proficient with PHP (this is what the website is coded in); how would I go about implementing an escrow system that would work automatically? A: You need to break down your design into components and think about the process. Escrow works like this: * *Entity A deposits money into Escrow Account for Entity B. *Entity A and Entity B come to an agreement *The Escrow account is deposited to Entity B. Implementation-wise, I'd track escrow deposits like any transaction; I'd also track escrow payments the same way. You need to come up with a way to store "Work Orders" in a database. These would have two columns, "AcceptedByA" and "AcceptedByB", as well as the amount of money. The process would be: * *Entity A opens a workorder and pays $X into the Escrow Account *Entity B does whatever the workorder requires (setting AcceptedByB to true) *Entity A sets "AcceptedByA" to true. *A transaction runs to pay Entity B from the Escrow Account. This is extremely high level, but it's a rather simple idea. A: IMHO the question is too generic. Are you looking for a PHP-MySQL level design, like what tables etc., or are you looking for services that you can use? For simplification, let's assume you are using Paypal to receive and pay money. Buyer A needs to set up an escrow for Seller B. * *You charge A $x and save that amount in your Paypal account. *You create a logical record that tells you that $x has been received and is for B. You'll have to implement transactions (MySQL or layered in PHP) to make sure that both the above-mentioned steps happen. If for some reason creating the logical record fails, retry the second step n times, and if it still fails, then fail (refund) the first step. You'll have to adopt a similar approach when the user releases the escrow. A: There are many ways to tackle something like this. 
You can create a one-off solution for your own platform, or use marketplace payment solutions like PayPal's Braintree or another platform such as Stripe. Check those out; they give you the ability to make split payments and run an escrow.
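The workorder flow from the first answer can be sketched as a tiny state object. Names mirror the AcceptedByA/AcceptedByB columns it proposes; the actual payout call to the payment provider is deliberately left out:

```python
from dataclasses import dataclass

@dataclass
class WorkOrder:
    amount: float
    accepted_by_a: bool = False
    accepted_by_b: bool = False
    released: bool = False

    def try_release(self):
        # Pay Entity B only once both parties have signed off, and only once.
        if self.accepted_by_a and self.accepted_by_b and not self.released:
            self.released = True
            return self.amount
        return 0.0

order = WorkOrder(amount=250.0)
order.accepted_by_b = True   # B finished the work
print(order.try_release())   # 0.0 -- A has not accepted yet
order.accepted_by_a = True   # A signs off
print(order.try_release())   # 250.0 -- amount to transfer out of escrow
```

In the real system the `released` flip and the payout would sit inside one database transaction, per the second answer's advice about making both steps happen or neither.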
{ "language": "en", "url": "https://stackoverflow.com/questions/135621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What's the difference between a midlet and a corelet? It's my understanding that a corelet is a Motorola-ism, but does anyone know what the difference is? Do corelets have certain abilities that midlets don't? A: I believe a corelet is a midlet that has full read access to the phone's internal file system... There's probably something else, but I can't remember it... I'll see if I can find any further info. After searching around, it seems they basically add functionality to the phone, rather than run on top of it as a separate application.
{ "language": "en", "url": "https://stackoverflow.com/questions/135628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Class design vs. IDE: Are nonmember nonfriend functions really worth it? In the (otherwise) excellent book C++ Coding Standards, Item 44, titled "Prefer writing nonmember nonfriend functions", Sutter and Alexandrescu recommend that only functions that really need access to the members of a class be themselves members of that class. All other operations which can be written by using only member functions should not be part of the class. They should be nonmembers and nonfriends. The arguments are that: * *It promotes encapsulation, because there is less code that needs access to the internals of a class. *It makes writing function templates easier, because you don't have to guess each time whether some function is a member or not. *It keeps the class small, which in turn makes it easier to test and maintain. Although I see the value in these arguments, I see a huge drawback: my IDE can't help me find these functions! Whenever I have an object of some kind, and I want to see what operations are available on it, I can't just type "pMysteriousObject->" and get a list of member functions anymore. Keeping a clean design is in the end about making your programming life easier. But this would actually make mine much harder. So I'm wondering if it's really worth the trouble. How do you deal with that? A: Scott Meyers has a similar opinion to Sutter; see here. He also clearly states the following: "Based on his work with various string-like classes, Jack Reeves has observed that some functions just don't "feel" right when made non-members, even if they could be non-friend non-members. The "best" interface for a class can be found only by balancing many competing concerns, of which the degree of encapsulation is but one." If a function would be something that "just makes sense" to be a member function, make it one. Likewise, if it isn't really part of the main interface, and "just makes sense" to be a non-member, do that. 
One note is that with overloaded versions of e.g. operator==(), the syntax stays the same. So in this case you have no reason not to make it a non-member non-friend floating function declared in the same place as the class, unless it really needs access to private members (in my experience it rarely will). And even then you can define operator!=() as a non-member, in terms of operator==(). A: I don't think it would be wrong to say that between them, Sutter, Alexandrescu, and Meyers have done more for the quality of C++ than anybody else. One simple question they ask is: if a utility function has two independent classes as parameters, which class should "own" the member function? Another issue is that you can only add member functions where the class in question is under your control. Any helper functions that you write for std::string will have to be non-members, since you cannot re-open the class definition. For both of these examples, your IDE will provide incomplete information, and you will have to use the "old-fashioned way". Given that the most influential C++ experts in the world consider that non-member functions with a class parameter are part of the class's interface, this is more of an issue with your IDE than with the coding style. Your IDE will likely change in a release or two, and you may even be able to get them to add this feature. If you change your coding style to suit today's IDE, you may well find that you have bigger problems in the future with unextendable/unmaintainable code. A: I'm going to have to disagree with Sutter and Alexandrescu on this one. I think if the behavior of function foo() falls within the realm of class Bar's responsibilities, then foo() should be part of Bar. The fact that foo() doesn't need direct access to Bar's member data doesn't mean it isn't conceptually part of Bar. It can also mean that the code is well factored. 
It's not uncommon to have member functions which perform all their behavior via other member functions, and I don't see why it should be. I fully agree that peripherally-related functions should not be part of the class, but if something is core to the class's responsibilities, there's no reason it shouldn't be a member, regardless of whether it is directly mucking around with the member data. As for these specific points: It promotes encapsulation, because there is less code that needs access to the internals of a class. Indeed, the fewer functions that directly access the internals, the better. That means that having member functions do as much as possible via other member functions is a good thing. Splitting well-factored functions out of the class just leaves you with a half-class that requires a bunch of external functions to be useful. Pulling well-factored functions away from their classes also seems to discourage the writing of well-factored functions. It makes writing function templates easier, because you don't have to guess each time whether some function is a member or not. I don't understand this at all. If you pull a bunch of functions out of classes, you've thrust more responsibility onto function templates. They are forced to assume that even less functionality is provided by their class template arguments, unless we are going to assume that most functions pulled from their classes are going to be converted into templates (ugh). It keeps the class small, which in turn makes it easier to test and maintain. Um, sure. It also creates a lot of additional, external functions to test and maintain. I fail to see the value in this. A: It's true that external functions should not be part of the interface. In theory, your class should only contain the data and expose the interface for what it is intended, and not utilitarian functions. Adding utility functions to the interface just grows the class code base and makes it less maintainable. 
I currently maintain a class with around 50 public methods; that's just insane. Now, in reality, I agree that this is not easy to enforce. It's often easier to just add another method to your class, even more so if you are using an IDE that makes adding a new method to an existing class really simple. In order to keep my classes simple and still be able to centralize external functions, I often use a utility class that works with my class, or even namespaces. I start by creating the class that will wrap my data and expose the simplest possible interface. I then create a new class for every task I have to do with the class. Example: create a class Point, then add a class PointDrawer to draw it to a bitmap, PointSerializer to save it, etc. A: If you give them a common prefix, then maybe your IDE will help if you type ::prefix or namespace::prefix A: In many OOP languages non-friend non-class methods are third-class citizens that reside in an orphanage unconnected to anything. When I write a method, I like to pick good parents - a fitting class - where they have the best chances to feel welcome and help. A: I would have thought the IDE was actually helping you out. The IDE is hiding the protected functions from the list because they are not available to the public, just as the designer of the class intended. If you had been within the scope of the class and typed this-> then the protected functions would be displayed in the list.
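The Point/PointDrawer split described above translates to any language; a minimal Python rendering of the same idea (class names follow the answer's example):

```python
class Point:
    """Wraps the data and exposes the simplest possible interface."""
    def __init__(self, x, y):
        self.x, self.y = x, y

class PointSerializer:
    """One task, one class: saving stays out of Point itself."""
    @staticmethod
    def to_csv(point):
        return f"{point.x},{point.y}"

print(PointSerializer.to_csv(Point(3, 4)))  # 3,4
```

A PointDrawer would follow the same pattern, so Point's own interface never grows as new tasks appear.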
{ "language": "en", "url": "https://stackoverflow.com/questions/135634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: how to add a simple message to a php-gtk statusbar? I have a simple GtkStatusBar created with Glade, and want to add a simple message with the date of the last update of the database (just a simple and plain date). I don't want to use the whole stacks, contexts and all the other things that GTK developers put in the statusbar code. (I could simply use a label, but I would like to keep the drag bar.) I'm using Gtk 2.0.1 with PHP 5.2.6. A: try http://www.kksou.com/php-gtk2/articles/have-a-status-area-using-GtkStatusbar.php To display a message, we need to first get a context id with GtkStatusbar::get_context_id(), then display the message with GtkStatusbar::push().
{ "language": "en", "url": "https://stackoverflow.com/questions/135642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Difference between drop table and truncate table? I have some tables that I build as a part of my report rollup. I don't need them afterwards at all. Someone mentioned to truncate them as it would be faster. A: DROP TABLE deletes the table. TRUNCATE TABLE empties it, but leaves its structure for future data. A: DROP Table DROP TABLE [table_name]; The DROP command is used to remove a table from the database. It is a DDL command. All the rows, indexes and privileges of the table will also be removed. A DROP operation cannot be rolled back. DELETE Table DELETE FROM [table_name] WHERE [condition]; DELETE FROM [table_name]; The DELETE command is a DML command. It can be used to delete all the rows or some rows from the table based on the condition specified in the WHERE clause. It is executed using row locks; each row in the table is locked for deletion. It maintains the transaction log, so it is slower than TRUNCATE. DELETE operations can be rolled back. TRUNCATE Table TRUNCATE TABLE [table_name]; The TRUNCATE command removes all rows from a table. It won't log the deletion of each row; instead it logs the deallocation of the data pages of the table, which makes it faster than DELETE. It is executed using a table lock, and the whole table is locked while all records are removed. It is a DDL command. TRUNCATE operations cannot be rolled back. A: TRUNCATE TABLE keeps all of your old indexing and whatnot. DROP TABLE would, obviously, get rid of the table and require you to recreate it later. A: Drop gets rid of the table completely, removing the definition as well. Truncate empties the table but does not get rid of the definition. A: Truncating the table empties the table. Dropping the table deletes it entirely. Either one will be fast, but dropping it will likely be faster (depending on your database engine). If you don't need it anymore, drop it so it's not cluttering up your schema. A: DELETE TableA instead of TRUNCATE TableA? A common misconception is that they do the same thing. 
Not so. In fact, there are many differences between the two. DELETE is a logged operation on a per-row basis. This means that the deletion of each row gets logged before the row is physically deleted. You can DELETE any row that will not violate a constraint, while leaving the foreign key or any other constraint in place. TRUNCATE is also a logged operation, but in a different way. TRUNCATE logs the deallocation of the data pages in which the data exists. The deallocation of data pages means that your data rows still actually exist in the data pages, but the extents have been marked as empty for reuse. This is what makes TRUNCATE a faster operation to perform than DELETE. You cannot TRUNCATE a table that has any foreign key constraints. You will have to remove the constraints, TRUNCATE the table, and reapply the constraints. TRUNCATE will reset any identity columns to the default seed value. A: DROP and TRUNCATE do different things: TRUNCATE TABLE Removes all rows from a table without logging the individual row deletions. TRUNCATE TABLE is similar to the DELETE statement with no WHERE clause; however, TRUNCATE TABLE is faster and uses fewer system and transaction log resources. DROP TABLE Removes one or more table definitions and all data, indexes, triggers, constraints, and permission specifications for those tables. As far as speed is concerned the difference should be small. And anyway if you don't need the table structure at all, certainly use DROP. A: I think you mean the difference between DELETE TABLE and TRUNCATE TABLE. DROP TABLE removes the table from the database. DELETE without a condition deletes all rows. If there are triggers and references, these will be processed for every row. Also, an index will be modified if there is one. TRUNCATE TABLE sets the row count to zero without logging each row. That is why it is much faster than both of the others. 
A: Truncate removes all the rows, but not the table itself; it is essentially equivalent to deleting with no WHERE clause, but usually faster. A: In the SQL standard, DROP table removes the table and the table schema - TRUNCATE removes all rows. A: I have a correction for one of the statements above... "truncate cannot be rolled back" Truncate can be rolled back. There are some cases when you can't do a truncate or drop table, such as when you have a foreign key reference. For a task such as monthly reporting, I'd probably just drop the table once I didn't need it anymore. If I was doing this rollup reporting more often, then I'd probably keep the table instead and use truncate. Hope this helps, here's some more info that you should find useful... Please see the following article for more details: http://sqlblog.com/blogs/denis_gobo/archive/2007/06/13/1458.aspx Also, for more details on delete vs. truncate, see this article: http://www.sql-server-performance.com/faq/delete_truncate_difference_p1.aspx Thanks! Jeff A: TRUNCATE TABLE is functionally identical to a DELETE statement with no WHERE clause: both remove all rows in the table. But TRUNCATE TABLE is faster and uses fewer system and transaction log resources than DELETE. The DELETE statement removes rows one at a time and records an entry in the transaction log for each deleted row. TRUNCATE TABLE removes the data by deallocating the data pages used to store the table's data, and only the page deallocations are recorded in the transaction log. TRUNCATE TABLE removes all rows from a table, but the table structure and its columns, constraints, indexes and so on remain. The counter used by an identity for new rows is reset to the seed for the column. If you want to retain the identity counter, use DELETE instead. If you want to remove the table definition and its data, use the DROP TABLE statement. 
You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint; instead, use DELETE statement without a WHERE clause. Because TRUNCATE TABLE is not logged, it cannot activate a trigger. TRUNCATE TABLE may not be used on tables participating in an indexed view. From http://msdn.microsoft.com/en-us/library/aa260621(SQL.80).aspx A: None of these answers points out an important difference about these two operations. Drop table is an operation that can be rolled back. However, truncate cannot be rolled back ['TRUNCATE TABLE' can be rolled back as well]. In this way dropping a very large table can be very expensive if there are many rows, because they all have to be recorded in a temporary space in case you decide to roll it back. Usually, if I want to get rid of a large table, I will truncate it, then drop it. This way the data will be nixed without record, and the table can be dropped, and that drop will be very inexpensive because no data needs to be recorded. It is important to point out, though, that truncate just deletes data, leaving the table, while drop will, in fact, delete the data and the table itself. (assuming foreign keys don't preclude such an action) A: Deleting records from a table logs every deletion and executes delete triggers for the records deleted. Truncate is a more powerful command that empties a table without logging each row. SQL Server prevents you from truncating a table with foreign keys referencing it, because of the need to check the foreign keys on each row. Truncate is normally ultra-fast, ideal for cleaning out data from a temporary table. It does preserve the structure of the table for future use. If you actually want to remove the table definitions as well as the data, simply drop the tables. See this MSDN article for more info A: The answers here match up to the question, but I'm going to answer the question you didn't ask. "Should I use truncate or delete?" 
If you are removing all rows from a table, you'll typically want to truncate, since it's much, much faster. Why is it much faster? At least in the case of Oracle, it resets the high water mark. This is basically a dereferencing of the data and allows the db to reuse it for something else. A: DELETE VS TRUNCATE * *The DELETE statement removes rows one at a time and records an entry in the transaction log for each deleted row. TRUNCATE TABLE removes the data by deallocating the data pages used to store the table data and records only the page deallocations in the transaction log *We can use a WHERE clause in DELETE, but in TRUNCATE you cannot *When the DELETE statement is executed using a row lock, each row in the table is locked for deletion. TRUNCATE TABLE always locks the table and page but not each row *After a DELETE statement is executed, the table can still contain empty pages. If the delete operation does not use a table lock, the table (heap) will contain many empty pages. For indexes, the delete operation can leave empty pages behind, although these pages will be deallocated quickly by a background cleanup process *TRUNCATE TABLE removes all rows from a table, but the table structure and its columns, constraints, indexes, and so on remain *The DELETE statement doesn't RESEED an identity column, but the TRUNCATE statement RESEEDS the IDENTITY column *You cannot use TRUNCATE TABLE on tables that: * *Are referenced by a FOREIGN KEY constraint. (You can truncate a table that has a foreign key that references itself.) *Participate in an indexed view. 
*Are published by using transactional replication or merge replication *TRUNCATE TABLE cannot activate a trigger because the operation does not log individual row deletions A: Delete Statement The DELETE statement deletes table rows and returns the number of rows deleted from the table. In this statement, we use a WHERE clause to delete data from the table. * *The DELETE statement is slower than the TRUNCATE statement because it deletes records one by one Truncate Statement The TRUNCATE statement removes all the rows from the table. * *It is faster than the DELETE statement because it deletes all the records from the table *The TRUNCATE statement does not return the number of rows deleted from the table Drop Statement The DROP statement deletes all records as well as the structure of the table A: DROP drops the whole table and all its structure. TRUNCATE deletes all rows from the table; it is different from DELETE in that it also deletes the indexes of the rows. A: DELETE The DELETE command is used to remove rows from a table. A WHERE clause can be used to only remove some rows. If no WHERE condition is specified, all rows will be removed. After performing a DELETE operation you need to COMMIT or ROLLBACK the transaction to make the change permanent or to undo it. TRUNCATE TRUNCATE removes all rows from a table. The operation cannot be rolled back ... As such, TRUNCATE is faster and doesn't use as much undo space as a DELETE. From: http://www.orafaq.com/faq/difference_between_truncate_delete_and_drop_commands
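The two behaviors the answers keep contrasting (DELETE keeps the table and can be rolled back; DROP removes the definition entirely) can be sketched with Python's built-in sqlite3 module. This is illustrative only: SQLite has no TRUNCATE statement (DELETE with no WHERE clause plays that role there), and the table and column names are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE report (id INTEGER PRIMARY KEY, val TEXT)")
cur.executemany("INSERT INTO report (val) VALUES (?)", [("a",), ("b",), ("c",)])
conn.commit()

# DELETE with no WHERE clause empties the table but keeps its definition...
cur.execute("DELETE FROM report")
print(cur.execute("SELECT COUNT(*) FROM report").fetchone()[0])  # 0

# ...and, being a logged DML operation, it can be rolled back.
conn.rollback()
print(cur.execute("SELECT COUNT(*) FROM report").fetchone()[0])  # 3

# DROP removes the table definition itself; querying it now fails.
cur.execute("DROP TABLE report")
try:
    cur.execute("SELECT COUNT(*) FROM report")
except sqlite3.OperationalError as exc:
    print(exc)
```

Whether TRUNCATE itself is transactional varies by engine, as the answers above note: SQL Server can roll it back inside a transaction, while in Oracle it is DDL and commits implicitly.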
{ "language": "en", "url": "https://stackoverflow.com/questions/135653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "82" }
Q: How to identify unused CSS definitions from multiple CSS files in a project A bunch of CSS files were pulled in and now I'm trying to clean things up a bit. How can I efficiently identify unused CSS definitions in a whole project? A: I have just found this site – http://unused-css.com/ Looks good but I would need to thoroughly check its outputted 'clean' CSS before uploading it to any of my sites. Also, as with all these tools, I would need to check that it didn't strip IDs and classes that have no style but are used as JavaScript selectors. The below content is taken from http://unused-css.com/ so credit to them for recommending other solutions: Latish Sehgal has written a Windows application to find and remove unused CSS classes. I haven't tested it but from the description, you have to provide the path of your HTML files and one CSS file. The program will then list the unused CSS selectors. From the screenshot, it looks like there is no way to export this list or download a new clean CSS file. It also looks like the service is limited to one CSS file. If you have multiple files you want to clean, you have to clean them one by one. Dust-Me Selectors is a Firefox extension (for v1.5 or later) that finds unused CSS selectors. It extracts all the selectors from all the stylesheets on the page you're viewing, then analyzes that page to see which of those selectors are not used. The data is then stored so that when testing subsequent pages, selectors can be crossed off the list as they're encountered. This tool is supposed to be able to spider a whole website, but I unfortunately could not make it work. Also, I don't believe you can configure and download the CSS file with the styles removed. TopStyle is a Windows application including a bunch of tools to edit CSS. I haven't tested it much but it looks like it has the ability to remove unused CSS selectors. This software costs 80 USD. 
Liquidcity CSS cleaner is a PHP script that uses regular expressions to check the styles of one page. It will tell you the classes that aren't available in the HTML code. I haven't tested this solution. Deadweight is a CSS coverage tool. Given a set of stylesheets and a set of URLs, it determines which selectors are actually used and lists which can be "safely" deleted. This tool is a Ruby module and will only work with Rails websites. The unused selectors have to be manually removed from the CSS file. Helium CSS is a JavaScript tool for discovering unused CSS across many pages on a web site. You first have to add the JavaScript file to the page you want to test. Then, you have to call a helium function to start the cleaning. UnusedCSS.com is a web application with an easy-to-use interface. Type the URL of a site and you will get a list of CSS selectors. For each selector, a number indicates how many times a selector is used. This service has a few limitations. The @import statement is not supported. You can't configure and download the new clean CSS file. CSSESS is a bookmarklet that helps you find unused CSS selectors on any site. This tool is pretty easy to use but it won't let you configure and download clean CSS files. It will only list unused CSS selectors. A: Chrome Developer Tools has an Audits tab which can show unused CSS selectors. Run an audit, then, under Web Page Performance, see Remove unused CSS rules A: Google Chrome Developer Tools has (a currently experimental) feature called CSS Overview which will allow you to find unused CSS rules. To enable it, follow these steps: * *Open up DevTools (Command+Option+I on Mac; Control+Shift+I on Windows) *Head over to DevTool Settings (Function+F1 on Mac; F1 on Windows) *Click open the Experiments section *Enable the CSS Overview option
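For a rough idea of how the regular-expression-based tools above work, here is a minimal Python sketch. It only handles bare .class/#id selectors and plain class=/id= attributes, and, like the tools it imitates, it cannot see classes that JavaScript adds at runtime. The sample CSS and HTML are made up for the example.

```python
import re

def unused_simple_selectors(css_text, html_texts):
    """Crude sketch: report .class/#id selectors never referenced in the
    given HTML sources. Ignores combinators, attribute selectors, and
    anything generated by JavaScript at runtime."""
    selectors = set(re.findall(r'[.#][\w-]+', css_text))
    classes, ids = set(), set()
    for html in html_texts:
        for attr in re.findall(r'class\s*=\s*["\']([^"\']+)["\']', html):
            classes.update(attr.split())
        ids.update(re.findall(r'id\s*=\s*["\']([^"\']+)["\']', html))
    unused = []
    for sel in selectors:
        name = sel[1:]
        if sel.startswith('.') and name not in classes:
            unused.append(sel)
        elif sel.startswith('#') and name not in ids:
            unused.append(sel)
    return sorted(unused)

css = ".used { color: red } .gone { color: blue } #footer { margin: 0 }"
html = ['<div class="used">hi</div>']
print(unused_simple_selectors(css, html))  # ['#footer', '.gone']
```

Note the selector regex would also pick up hex colors like #fff inside rule bodies; a real tool needs an actual CSS parser, which is why the dedicated tools above are worth the install.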
{ "language": "en", "url": "https://stackoverflow.com/questions/135657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "423" }
Q: What are the Pros and Cons of using Global.asax? Let's collect some tips for evaluating the appropriate use of global.asax. A: I've used it before to catch errors at the application level, and to do something when a user's session expires. I also tend to use it a lot to provide static properties that read values from the web.config. I think it's OK for stuff like that, though I wouldn't put much more than that in there. A: It's simple to use if your session and application initialization code is very small and application-specific. Using an HttpModule is more useful if you want to reuse code, such as setting up rules for URL rewriting, redirects or auth. An HttpModule can cover everything a Global.asax file can. They can also be removed and added easily using your .config. A: * *Initializing ASP.NET MVC. :) *Custom user authentication. *Dependency injection, like extending the Ninject HttpApplication. A: It's a stiff drink; just be careful not to drink too much and you'll be ok. I use it for global error handling, and setting up routes in MVC. You don't want to be writing global page_init code in there though. If you stick to application-level events, and make most of the logic actually live in classes that just get called during those events, you will have no problem making use of the global constructs. A: It's a nice spot to grab session initiation, and even request initiation. Others have mentioned the error handling aspect, although be careful of exceptions thrown from non-ASP.NET threads (e.g. a threadpool or custom thread) as they'll bypass the global.asax handler. Personally I always have one; I think of it as simply part of the plumbing. A: I used to use Global.asax for things such as error handling etc.; however, I have since gone to using HttpModules to replace this, as I can copy them from one project to another without editing the global.asax. 
A: Con of using Global.asax compared to an HttpModule: You will be tempted to write code that's hard to reuse because it will be too tied to that particular application. A: Global.asax can inherit from your own class that inherits HttpApplication. This gives you more options, as well as letting you put the bulk of the code you might have in Global.asax into a class library. EDIT: Having your HttpApplication class (global.asax parent) in a separate class library can promote reusability too. Although I agree that using HttpModules is better suited for many tasks, this still has many uses; for one, cleaner code. A: I haven't really used Global.asax. I used its equivalent in classic ASP all the time, but that had mostly to do with certain configurations like database connection strings and such. The config in .NET makes a lot of these things a lot easier. But if you want to implement application and session level events, this is where you need to go. A: The Session_Start event in Global.asax is a wicked good place to initialize your session variables.
{ "language": "en", "url": "https://stackoverflow.com/questions/135661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How many bytes per element are there in a Python list (tuple)? For example, how much memory is required to store a list of one million (32-bit) integers? alist = range(1000000) # or list(range(1000000)) in Python 3.0 A: Addressing the "tuple" part of the question: the declaration of CPython's PyTuple in a typical build configuration boils down to this: struct PyTuple { size_t refcount; // tuple's reference count typeobject *type; // tuple type object size_t n_items; // number of items in tuple PyObject *items[1]; // contains space for n_items elements }; The size of a PyTuple instance is fixed during its construction and cannot be changed afterwards. The number of bytes occupied by PyTuple can be calculated as sizeof(size_t) x 2 + sizeof(void*) x (n_items + 1). This gives the shallow size of the tuple. To get the full size you also need to add the total number of bytes consumed by the object graph rooted in the PyTuple::items[] array. It's worth noting that tuple construction routines make sure that only a single instance of the empty tuple is ever created (a singleton). References: Python.h, object.h, tupleobject.h, tupleobject.c A: A new function, getsizeof(), takes a Python object and returns the amount of memory used by the object, measured in bytes. Built-in objects return correct results; third-party extensions may not, but can define a __sizeof__() method to return the object’s size. kveretennicov@nosignal:~/py/r26rc2$ ./python Python 2.6rc2 (r26rc2:66712, Sep 2 2008, 13:11:55) [GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2 >>> import sys >>> sys.getsizeof(range(1000000)) 4000032 >>> sys.getsizeof(tuple(range(1000000))) 4000024 Obviously returned numbers don't include memory consumed by contained objects (sys.getsizeof(1) == 12). A: "It depends." Python allocates space for lists in such a way as to achieve amortized constant time for appending elements to the list. In practice, what this means with the current implementation is... 
the list always has space allocated for a power-of-two number of elements. So range(1000000) will actually allocate a list big enough to hold 2^20 elements (~ 1.045 million). This is only the space required to store the list structure itself (which is an array of pointers to the Python objects for each element). A 32-bit system will require 4 bytes per element, a 64-bit system will use 8 bytes per element. Furthermore, you need space to store the actual elements. This varies widely. For small integers (-5 to 256 currently), no additional space is needed, but for larger numbers Python allocates a new object for each integer, which takes 10-100 bytes and tends to fragment memory. Bottom line: it's complicated and Python lists are not a good way to store large homogeneous data structures. For that, use the array module or, if you need to do vectorized math, use NumPy. PS- Tuples, unlike lists, are not designed to have elements progressively appended to them. I don't know how the allocator works, but don't even think about using it for large data structures :-) A: This is implementation specific, I'm pretty sure. Certainly it depends on the internal representation of integers - you can't assume they'll be stored as 32-bit since Python gives you arbitrarily large integers so perhaps small ints are stored more compactly. On my Python (2.5.1 on Fedora 9 on core 2 duo) the VmSize before allocation is 6896kB, after is 22684kB. After one more million element assignment, VmSize goes to 38340kB. This very grossly indicates around 16000kB for 1000000 integers, which is around 16 bytes per integer. That suggests a lot of overhead for the list. I'd take these numbers with a large pinch of salt. A: Useful links: How to get memory size/usage of python object Memory sizes of python objects? if you put data into dictionary, how do we calculate the data size? However they don't give a definitive answer. 
The way to go: * *Measure memory consumed by Python interpreter with/without the list (use OS tools). *Use a third-party extension module which defines some sort of sizeof(PyObject). Update: Recipe 546530: Size of Python objects (revised) import asizeof N = 1000000 print asizeof.asizeof(range(N)) / N # -> 20 (python 2.5, WinXP, 32-bit Linux) # -> 33 (64-bit Linux)
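To see the shallow per-element cost on your own interpreter, a quick sys.getsizeof loop works. Bear in mind the figures quoted in the answers above came from 32-bit, Python 2-era builds; on a typical 64-bit CPython 3 each list slot is an 8-byte pointer, plus header and over-allocation, and the int objects pointed to are counted separately.

```python
import sys

# Shallow size of the list object itself (header + pointer array);
# the int objects the list points to are NOT included.
for n in (0, 1, 10, 1000, 1_000_000):
    lst = list(range(n))
    size = sys.getsizeof(lst)
    per_element = round(size / n, 1) if n else "-"
    print(n, "elements:", size, "bytes, ~", per_element, "bytes/element")
```

For the deep size (list plus contained objects), walk the object graph as the asizeof recipe does, or use a maintained third-party package such as pympler.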
{ "language": "en", "url": "https://stackoverflow.com/questions/135664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Structure of projects in version control - .NET specific This post is similar to this previously asked question. I really want to set up my SVN repository in TTB format, but when creating a project in Visual Studio 2008 (ASP.NET/VB.NET), the structure created tends to be incompatible when considering the solution file, project files, folders for projects, multiple projects within solutions, etc. Does anyone have a script or procedure to take a newly created ASP.NET project and move it to a TTB format as painlessly as possible? Let me be more specific. Suppose I have a project that I'm creating called StackOverflowIsAwesome. I can put that into my local folder structure (let's say that it's c:\working). When I create it, VS creates c:\working\StackOverflowIsAwesome and a whole bunch of subfolders (bin, app_data, etc.). But I want my repository structure to look like... StackOverflowIsAwesome /trunk /bin /app_data /tags /branches So, is there a clean way to do this consistently or do I need to resort to constantly moving/modifying files and folders to make this work? A: We went with a very simplistic approach: File Structure: * *Solution Folder (contains solution file, build scripts, maybe more?) * *Project Folder *Project Folder 2 *References (contains shared assemblies for the solution). Then we just check the entire solution folder's contents into our repository. We use one repository for each solution. I'm not sure if this is the optimal way to organize the solution, but it works for us. Also, we branch at the highest level, not per project. A: Another way: StackOverflowIsAwesome /trunk /database /datafiles /documents /build /installer /lib /External_DAL (external that points to shared library) /utilities /vendor /src /StackOverFlowIsAwesome /StackOverFlowIsAwesome.csprj /bin /... /StackOverFlowIsAwesomeTests /StackOverFlowIsAwesomeTests.csprj /bin /... /branches /tags This would be for each project. 
Since we are using a build script, we don't need to store our solution file in SVN. A: If your TTB are common rather than per project, there is no problem with it. Or am I missing something? A: You can look at this previous post or this project. The project creates a .NET development tree structure (requires .NET 3.5). A: When dealing with multiple projects that make up a Visual Studio solution, it is difficult to decide how to structure things properly. One critical thing your structure needs to do is make it easy to retrieve all the files for a particular release. It is important to make this as easy as possible. In Subversion, copying a root folder over to the tags or branches folders is easier than remembering to repeat the same operation for X projects. Being able to work for extended periods outside of the main trunk is also important. You will have to consider that as well. You may find that your software has a number of components that naturally group together. You could do something like this: /tag /core_library /branch /main /business_logic /branch /main /report_library /branch /main /my_ui /branch /main There is no easy answer. What you do really depends on your specific project. If everything is still coming out a snarled mess, then perhaps you need to look at how your project is designed and see if that can be changed to improve understanding. A: For bigger projects we usually use this format here: /Project /trunk /lib/ # Binary imports here (not in svn) /src # Solution file here /Libraries # Library assemblies here /StackOverflowIsAwesome.Common /Products # Delivered products here /StackOverflowIsAwesome.Site /Projects # internal assemblies here /StackOverflowIsAwesome.Tests /branches /1.x /tags /StackOverflowIsAwesome-1.0 Depending on the actual project, non-source files (documents, etc.) have a directory under the trunk root, and extra development resources are under src. 
Independent projects are under their own /Project root, but in the same repository. A: I do it this way: * *Create the project in VS *Import everything in the project folder to repos/projectname/trunk *Add the repos/branches and repos/tags folders That gives me a repository structure like: projectname / trunk /bin /obj /Properties projectname.sln /tags /branches And I can just leave all of the files in their default places in the file system.
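As a starting point for the script the question asks about, here is a rough Python sketch that shuffles a freshly generated project folder into a trunk/tags/branches layout. The function name and the sample paths are invented for illustration; against a real repository you would follow this with svn import (or do the moves with svn mkdir and svn mv) so the history starts clean.

```python
import shutil
import tempfile
from pathlib import Path

def move_to_ttb(project_dir, repo_working_dir):
    """Move a freshly generated project folder's contents into a
    <project>/trunk layout (plus empty tags/ and branches/) under
    repo_working_dir. Illustrative sketch only."""
    project_dir = Path(project_dir)
    target = Path(repo_working_dir) / project_dir.name
    trunk = target / "trunk"
    trunk.mkdir(parents=True)
    (target / "tags").mkdir()
    (target / "branches").mkdir()
    for item in list(project_dir.iterdir()):  # bin, app_data, .sln, ...
        shutil.move(str(item), str(trunk / item.name))
    return target

# Demo on a throwaway directory tree mimicking what VS generates.
work = Path(tempfile.mkdtemp())
src = work / "StackOverflowIsAwesome"
(src / "bin").mkdir(parents=True)
(src / "app_data").mkdir()
(src / "StackOverflowIsAwesome.sln").write_text("")
target = move_to_ttb(src, work / "svn_import")
print(sorted(p.name for p in target.iterdir()))  # ['branches', 'tags', 'trunk']
print(sorted(p.name for p in (target / "trunk").iterdir()))
```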
{ "language": "en", "url": "https://stackoverflow.com/questions/135670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Optimal method to perform a best levenshtein match against Map in Java I have a map in Java. I would like to compare a source string against all items in the map and return the best match based on a levenshtein ratio algorithm. I am wondering what the optimal way to perform this check on every element in the list would be. Thanks, Matt A: You won't be able to get better than O(n) performance with a standard Map - just use the naive approach of testing them sequentially. There are far more efficient ways to do this, though. One of them is called a bk-tree. Basically, you construct an n-way tree, with edges determined by the levenshtein distance between the nodes. Then, you can make use of the triangle inequality to massively cut down the nodes you have to search. For short distances, it's very efficient. Here's a blog article I wrote some time ago, describing it in detail. With a little extra work, you can query it for nearest-neighbour, rather than repeatedly querying with distance 1, 2, etc. A: Since the levenshtein ratio depends both on the source and on the target, the values will change for each source string. Unless there is a high probability that the source string might be repeated on subsequent searches, just iterate over the map elements. If speed is truly an issue, make sure you are using the latest Java compilers and use optimization options. A: And of course, if you're not already doing so, then use an off-the-shelf optimised Levenshtein implementation, like that in commons-lang StringUtils. A: If iterating over all map elements is too costly, you could consider using k-gram indexes.
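To make the BK-tree idea concrete, here is a small sketch (written in Python for brevity; the same structure ports directly to Java, e.g. with a Map<Integer, Node> for the children). All names and the sample word list are made up for the example. The key line is the triangle-inequality pruning in search: a child hanging at edge distance e can only contain matches when |d - e| <= max_dist.

```python
def levenshtein(a, b):
    """Standard dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

class BKTree:
    """Nodes are (word, {edge_distance: child_node}) pairs."""
    def __init__(self, words):
        it = iter(words)
        self.root = (next(it), {})
        for w in it:
            self._add(w)

    def _add(self, w):
        node = self.root
        while True:
            word, children = node
            d = levenshtein(w, word)
            if d == 0:
                return  # already present
            if d in children:
                node = children[d]
            else:
                children[d] = (w, {})
                return

    def search(self, query, max_dist):
        results, stack = [], [self.root]
        while stack:
            word, children = stack.pop()
            d = levenshtein(query, word)
            if d <= max_dist:
                results.append((d, word))
            # Triangle inequality: only edges within max_dist of d can match.
            for edge, child in children.items():
                if d - max_dist <= edge <= d + max_dist:
                    stack.append(child)
        return sorted(results)

tree = BKTree(["book", "books", "cake", "boo", "cape"])
print(tree.search("bo", 2))  # [(1, 'boo'), (2, 'book')]
```

Since search returns (distance, word) pairs sorted by distance, the best levenshtein match for a query is simply the first element, if any.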
{ "language": "en", "url": "https://stackoverflow.com/questions/135679", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Setting environment variables on OS X What is the proper way to modify environment variables like PATH in OS X? I've looked on Google a little bit and found three different files to edit: * */etc/paths *~/.profile *~/.tcshrc I don't even have some of these files, and I'm pretty sure that .tcshrc is wrong, since OS X uses bash now. Where are these variables, especially PATH, defined? I'm running OS X v10.5 (Leopard). A: I think the OP is looking for a simple, Windows-like solution. Here you go: http://www.apple.com/downloads/macosx/system_disk_utilities/environmentvariablepreferencepane.html A: Feb 2022 (macOS 12+) Solutions here should work after a reboot or application restart. CLI Open your CLI of choice's config file. * *For bash open ~/.bash_profile *For zsh open ~/.zshrc add (or replace) export varName=varValue (if varValue has spaces in it, wrap it in ") Make sure to restart the command line app. GUI Complete the CLI step. Make sure the GUI app is closed. Open the GUI app from the command line. For example: open /Applications/Sourcetree.app (you can also alias this command in the .zshrc) Principles * *Mac does not have a configuration option that sets environment variables for all contexts. *Avoid changing anything outside your user profile. 
Doesn't work anymore * *launchctl config user varName varVal (MacOS 12.1+) *Editing /etc/launchd.conf *xml file with plist suffix A: Solution for both command line and GUI applications from a single source (works with Mac OS X v10.10 (Yosemite) and Mac OS X v10.11 (El Capitan)) Let's assume you have environment variable definitions in your ~/.bash_profile like in the following snippet: export JAVA_HOME="$(/usr/libexec/java_home -v 1.8)" export GOPATH="$HOME/go" export PATH="$PATH:/usr/local/opt/go/libexec/bin:$GOPATH/bin" export PATH="/usr/local/opt/coreutils/libexec/gnubin:$PATH" export MANPATH="/usr/local/opt/coreutils/libexec/gnuman:$MANPATH" We need a Launch Agent that will run on each login, and anytime on demand, to load these variables into the user session. We'll also need a shell script to parse these definitions and build the necessary commands to be executed by the agent. Create a file with a plist suffix (e.g. named osx-env-sync.plist) in the ~/Library/LaunchAgents/ directory with the following contents: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>Label</key> <string>osx-env-sync</string> <key>ProgramArguments</key> <array> <string>bash</string> <string>-l</string> <string>-c</string> <string> $HOME/.osx-env-sync.sh </string> </array> <key>RunAtLoad</key> <true/> </dict> </plist> The -l parameter is critical here; it's necessary for executing the shell script with a login shell so that ~/.bash_profile is sourced in the first place before this script is executed. Now, the shell script. 
Create it at ~/.osx-env-sync.sh with the following contents: grep export $HOME/.bash_profile | while IFS=' =' read ignoreexport envvar ignorevalue; do launchctl setenv "${envvar}" "${!envvar}"; done Make sure the shell script is executable: chmod +x ~/.osx-env-sync.sh Now, load the launch agent for the current session: launchctl load ~/Library/LaunchAgents/osx-env-sync.plist (Re)Launch a GUI application and verify that it can read the environment variables. The setup is persistent. It will survive restarts and relogins. After the initial setup (that you just did), if you want to reflect any changes in your ~/.bash_profile to your whole environment again, rerunning the launchctl load ... command won't perform what you want; instead you'll get a warning like the following: <$HOME>/Library/LaunchAgents/osx-env-sync.plist: Operation already in progress In order to reload your environment variables without going through the logout/login process, do the following: launchctl unload ~/Library/LaunchAgents/osx-env-sync.plist launchctl load ~/Library/LaunchAgents/osx-env-sync.plist Finally, make sure that you relaunch your already running applications (including Terminal.app) to make them aware of the changes. I've also pushed the code and explanations here to a GitHub project: osx-env-sync. I hope this is going to be the ultimate solution, at least for the latest versions of OS X (Yosemite & El Capitan). A: To be concise and clear about what each file is intended for: * *~/.profile is sourced every time Terminal.app is launched *~/.bashrc is where "traditionally" all the export statements for the Bash environment are set */etc/paths is the main file in Mac OS that contains the list of default paths for building the PATH environment variable for all users */etc/paths.d/ contains files that hold additional search paths Non-terminal programs don't inherit the system-wide PATH and MANPATH variables that your terminal does! 
To set environment for all processes launched by a specific user, thus making environment variables available to Mac OS X GUI applications, those variables must be defined in your ~/.MacOSX/environment.plist (Apple Technical Q&A QA1067) Use the following command line to synchronize your environment.plist with /etc/paths: defaults write $HOME/.MacOSX/environment PATH "$(tr '\n' ':' </etc/paths)" A: Bruno is right on track. I've done extensive research and if you want to set variables that are available in all GUI applications, your only option is /etc/launchd.conf. Please note that environment.plist does not work for applications launched via Spotlight. This is documented by Steve Sexton here. * *Open a terminal prompt *Type sudo vi /etc/launchd.conf (note: this file might not yet exist) *Put contents like the following into the file # Set environment variables here so they are available globally to all apps # (and Terminal), including those launched via Spotlight. # # After editing this file run the following command from the terminal to update # environment variables globally without needing to reboot. # NOTE: You will still need to restart the relevant application (including # Terminal) to pick up the changes! # grep -E "^setenv" /etc/launchd.conf | xargs -t -L 1 launchctl # # See http://www.digitaledgesw.com/node/31 # and http://stackoverflow.com/questions/135688/setting-environment-variables-in-os-x/ # # Note that you must hardcode the paths below, don't use environment variables. # You also need to surround multiple values in quotes, see MAVEN_OPTS example below. 
# setenv JAVA_VERSION 1.6 setenv JAVA_HOME /System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home setenv GROOVY_HOME /Applications/Dev/groovy setenv GRAILS_HOME /Applications/Dev/grails setenv NEXUS_HOME /Applications/Dev/nexus/nexus-webapp setenv JRUBY_HOME /Applications/Dev/jruby setenv ANT_HOME /Applications/Dev/apache-ant setenv ANT_OPTS -Xmx512M setenv MAVEN_OPTS "-Xmx1024M -XX:MaxPermSize=512m" setenv M2_HOME /Applications/Dev/apache-maven setenv JMETER_HOME /Applications/Dev/jakarta-jmeter *Save your changes in vi and reboot your Mac. Or use the grep/xargs command which is shown in the code comment above. *Prove that your variables are working by opening a Terminal window and typing export and you should see your new variables. These will also be available in IntelliJ IDEA and other GUI applications you launch via Spotlight. A: * *Do: vim ~/.bash_profile The file may not exist (if not, you can just create it). *Type in this and save the file: export PATH=$PATH:YOUR_PATH_HERE *Run source ~/.bash_profile A: The $PATH variable is also subject to path_helper, which in turn makes use of the /etc/paths file and files in /etc/paths.d. A more thorough description can be found in PATH and other environment issues in Leopard (2008-11) A: /etc/launchd.conf is not used in OS X v10.10 (Yosemite), OS X v10.11 (El Capitan), macOS v10.12 (Sierra), or macOS v10.13 (High Sierra). From the launchctl man page: /etc/launchd.conf file is no longer consulted for subcommands to run during early boot time; this functionality was removed for security considerations. The method described in this Ask Different answer works for me (after a reboot): applications launched from the Dock or from Spotlight inherit environment variables that I set in ~/Library/LaunchAgents/my.startup.plist. (In my case, I needed to set LANG, to en_US.UTF-8, for a Sublime Text plugin.) A: Just did this really easy and quick. 
First create a ~/.bash_profile from the terminal: touch .bash_profile then open -a TextEdit.app .bash_profile add export TOMCAT_HOME=/Library/Tomcat/Home save the document and you are done. A: All the magic on OS X comes down to using source on the file where you export your environment variables. For example: You can create a file like this: export bim=fooo export bom=bar Save this file as bimbom.env, and do source ./bimbom.env. Voilà, you've got your environment variables. Check them with: echo $bim A: There are essentially two problems to solve when dealing with environment variables in OS X. The first is when invoking programs from Spotlight (the magnifying glass icon on the right side of the Mac menu/status bar) and the second when invoking programs from the Dock. Invoking programs from a Terminal application/utility is trivial because it reads the environment from the standard shell locations (~/.profile, ~/.bash_profile, ~/.bashrc, etc.) When invoking programs from the Dock, use ~/.MacOSX/environment.plist, where the <dict> element contains a sequence of <key>KEY</key><string>theValue</string> elements. When invoking programs from Spotlight, ensure that launchd has been set up with all the key/value settings you require. To solve both problems simultaneously, I use a login item (set via the System Preferences tool) on my User account. The login item is a bash script that invokes an Emacs lisp function, although one can of course use their favorite scripting tool to accomplish the same thing. This approach has the added benefit that it works at any time and does not require a reboot, i.e. one can edit ~/.profile, run the login item in some shell and have the changes visible for newly invoked programs, from either the Dock or Spotlight.
Details: Login item: ~/bin/macosx-startup

#!/bin/bash
bash -l -c "/Applications/Emacs.app/Contents/MacOS/Emacs --batch -l ~/lib/emacs/elisp/macosx/environment-support.el -f generate-environment"

Emacs lisp function: ~/lib/emacs/elisp/macosx/environment-support.el

;;; Provide support for the environment on Mac OS X
(defun generate-environment ()
  "Dump the current environment into the ~/.MacOSX/environment.plist file."
  ;; The system environment is found in the global variable:
  ;; 'initial-environment' as a list of "KEY=VALUE" pairs.
  (let ((list initial-environment)
        pair start command key value)
    ;; clear out the current environment settings
    (find-file "~/.MacOSX/environment.plist")
    (goto-char (point-min))
    (setq start (search-forward "<dict>\n"))
    (search-forward "</dict>")
    (beginning-of-line)
    (delete-region start (point))
    (while list
      (setq pair (split-string (car list) "=")
            list (cdr list))
      (setq key (nth 0 pair)
            value (nth 1 pair))
      (insert "  <key>" key "</key>\n")
      (insert "  <string>" value "</string>\n")
      ;; Enable this variable in launchd
      (setq command (format "launchctl setenv %s \"%s\"" key value))
      (shell-command command))
    ;; Save the buffer.
    (save-buffer)))

NOTE: This solution is an amalgam of those coming before I added mine, particularly that offered by Matt Curtis, but I have deliberately tried to keep my ~/.bash_profile content platform-independent and put the setting of the launchd environment (a Mac-only facility) into a separate script. A: Don't expect ~/.launchd.conf to work The man page for launchctl says that it never worked: DEPRECATED AND REMOVED FUNCTIONALITY launchctl no longer has an interactive mode, nor does it accept commands from stdin. The /etc/launchd.conf file is no longer consulted for subcommands to run during early boot time; this functionality was removed for security considerations. While it was documented that $HOME/.launchd.conf would be consulted prior to setting up a user's session, this functionality was never implemented.
How to set the environment for new processes started by Spotlight (without needing to reboot) You can set the environment used by launchd (and, by extension, anything started from Spotlight) with launchctl setenv. For example, to set the path: launchctl setenv PATH /opt/local/bin:/opt/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin Or if you want to set up your path in .bashrc or similar, then have it mirrored in launchd: PATH=/opt/local/bin:/opt/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin; launchctl setenv PATH $PATH There's no need to reboot, though you will need to restart an app if you want it to pick up the changed environment. This includes any shells already running under Terminal.app, although if you're there you can set the environment more directly, e.g. with export PATH=/opt/local/bin:/opt/local/sbin:$PATH for bash or zsh. How to keep changes after a reboot New method (since 10.10 Yosemite) Use launchctl config user path /bin:/usr/bin:/mystuff. See man launchctl for more information. Previous method The launchctl man page quote at the top of this answer says the feature described here (reading /etc/launchd.conf at boot) was removed for security reasons, so ymmv. To keep changes after a reboot you can set the environment variables from /etc/launchd.conf, like so: setenv PATH /opt/local/bin:/opt/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin launchd.conf is executed automatically when you reboot. If you want these changes to take effect now, you should use this command to reprocess launchd.conf (thanks @mklement for the tip!): egrep -v '^\s*#' /etc/launchd.conf | launchctl You can find out more about launchctl and how it loads launchd.conf with the command man launchctl. A: For a single-user modification, use ~/.profile out of the files you listed. The following link explains when the different files are read by Bash.
http://telin.ugent.be/~slippens/drupal/bashrc_and_others If you want to set the environment variable for gui applications you need the ~/.MacOSX/environment.plist file A: Well, I'm unsure about the /etc/paths and ~/.MacOSX/environment.plist files. Those are new. But with Bash, you should know that .bashrc is executed with every new shell invocation and .bash_profile is only executed once at startup. I don't know how often this is with Mac OS X. I think the distinction has broken down with the window system launching everything. Personally, I eliminate the confusion by creating a .bashrc file with everything I need and then do: ln -s .bashrc .bash_profile A: One thing to note in addition to the approaches suggested is that, in OS X 10.5 (Leopard) at least, the variables set in launchd.conf will be merged with the settings made in .profile. I suppose this is likely to be valid for the settings in ~/.MacOSX/environment.plist too, but I haven't verified. A: Set up your PATH environment variable on Mac OS Open the Terminal program (this is in your Applications/Utilities folder by default). Run the following command touch ~/.bash_profile; open ~/.bash_profile This will open the file in the your default text editor. For Android SDK as example: You need to add the path to your Android SDK platform-tools and tools directory. In my example I will use "/Development/android-sdk-macosx" as the directory the SDK is installed in. Add the following line: export PATH=${PATH}:/Development/android-sdk-macosx/platform-tools:/Development/android-sdk-macosx/tools Save the file and quit the text editor. Execute your .bash_profile to update your PATH: source ~/.bash_profile Now every time you open the Terminal program your PATH will include the Android SDK. 
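One caveat with the export PATH=${PATH}:... line above: every re-source of .bash_profile appends the same directories again. A small sketch of the usual duplicate guard, written in Python for clarity (the helper name is made up; in a real .bash_profile you would express the same check in shell):

```python
def add_to_path(path, new_dir):
    """Append new_dir to a colon-separated path only if it is not already present."""
    entries = path.split(":") if path else []
    if new_dir in entries:
        return path  # already there, leave the path untouched
    return ":".join(entries + [new_dir])

p = "/usr/bin:/bin"
p = add_to_path(p, "/Development/android-sdk-macosx/platform-tools")
p = add_to_path(p, "/Development/android-sdk-macosx/platform-tools")  # second call is a no-op
print(p)
```

The equivalent shell idiom is a case "$PATH" in pattern test before appending; the point is the same either way: keep PATH idempotent under repeated sourcing.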
A: It's simple: Edit ~/.profile and put your variables in as follows: $ vim ~/.profile In the file, put: MY_ENV_VAR=value * *Save ( :wq ) *Restart the terminal (quit and open it again) *Make sure everything is fine: $ echo $MY_ENV_VAR $ value A: It's quite simple. Edit the .profile file (with vi, nano, Sublime Text, or another text editor). You can find it in the ~/ directory (your home directory) and set variables like this: export MY_VAR=[your value here] Example with Java home: export JAVA_HOME=/Library/Java/JavaVirtualMachines/current Save it and return to the terminal. You can reload it with: source .profile Or close and open your terminal window. A: Another free, open-source Mac OS X v10.8 (Mountain Lion) preference pane/environment.plist solution is EnvPane. EnvPane's source code is available on GitHub. EnvPane looks like it has comparable features to RCEnvironment; however, it seems it can update its stored variables instantly, i.e. without the need for a restart or login, which is welcome. As stated by the developer: EnvPane is a preference pane for Mac OS X 10.8 (Mountain Lion) that lets you set environment variables for all programs in both graphical and terminal sessions. Not only does it restore support for ~/.MacOSX/environment.plist in Mountain Lion, it also publishes your changes to the environment immediately, without the need to log out and back in. <SNIP> EnvPane includes (and automatically installs) a launchd agent that runs 1) early after login and 2) whenever the ~/.MacOSX/environment.plist changes. The agent reads ~/.MacOSX/environment.plist and exports the environment variables from that file to the current user's launchd instance via the same API that is used by launchctl setenv and launchctl unsetenv. Disclaimer: I am in no way related to the developer or his/her project. P.S. I like the name (sounds like 'Ends Pain'). A: There are two types of shells at play here.
* *Non-login: .bashrc is reloaded every time you start a new copy of Bash *Login: The .profile is loaded only when you either log in, or explicitly tell Bash to load it and use it as a login shell. It's important to understand here that with Bash, the file .bashrc is only read by a shell that's both interactive and non-login, and you will find that people often load .bashrc in .bash_profile to overcome this limitation. Now that you have the basic understanding, let's move on to how I would advise you to set it up. * *.profile: create if non-existing. Put your PATH setup in there. *.bashrc: create if non-existing. Put all your aliases and custom methods in there. *.bash_profile: create if non-existing. Put the following in there. .bash_profile:

#!/bin/bash
source ~/.profile # Get the PATH settings
source ~/.bashrc  # Get Aliases and Functions

A: Login Shells /etc/profile The shell first executes the commands in the file /etc/profile. A user working with root privileges can set up this file to establish systemwide default characteristics for users running Bash. .bash_profile .bash_login .profile Next the shell looks for ~/.bash_profile, ~/.bash_login, and ~/.profile (~/ is shorthand for your home directory), in that order, executing the commands in the first of these files it finds. You can put commands in one of these files to override the defaults set in /etc/profile. A shell running on a virtual terminal does not execute commands in these files. .bash_logout When you log out, bash executes commands in the ~/.bash_logout file. This file often holds commands that clean up after a session, such as those that remove temporary files. Interactive Nonlogin Shells /etc/bashrc Although not called by bash directly, many ~/.bashrc files call /etc/bashrc. This setup allows a user working with root privileges to establish systemwide default characteristics for nonlogin bash shells. .bashrc An interactive nonlogin shell executes commands in the ~/.bashrc file.
Typically a startup file for a login shell, such as .bash_profile, runs this file, so both login and nonlogin shells run the commands in .bashrc. Because commands in .bashrc may be executed many times, and because subshells inherit exported variables, it is a good idea to put commands that add to existing variables in the .bash_profile file. A: On Mountain Lion, all the /etc/paths and /etc/launchd.conf editing has no effect! Apple's Developer Forums say: "Change the Info.plist of the .app itself to contain an "LSEnvironment" dictionary with the environment variables you want. ~/.MacOSX/environment.plist is no longer supported." So I directly edited the application's Info.plist (right-click on "AppName.app" (in this case SourceTree) and then "Show package contents"). And I added a new key/dict pair called: <key>LSEnvironment</key> <dict> <key>PATH</key> <string>/Users/flori/.rvm/gems/ruby-1.9.3-p362/bin:/Users/flori/.rvm/gems/ruby-1.9.3-p362@global/bin:/Users/flori/.rvm/rubies/ruby-1.9.3-p326/bin:/Users/flori/.rvm/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:</string> </dict> (see: LaunchServicesKeys Documentation at Apple) Now the application (in my case SourceTree) uses the given path and works with Git 1.9.3 :-) PS: Of course you have to adjust the PATH entry to your specific path needs. A: Update (2017-08-04) As of (at least) macOS 10.12.6 (Sierra) this method seems to have stopped working for Apache httpd (for both the system and the user option of launchctl config). Other programs do not seem to be affected. It is conceivable that this is a bug in httpd. Original answer This concerns OS X 10.10+ (10.11+ specifically due to rootless mode where /usr/bin is no longer writeable). I've read in multiple places that using launchctl setenv PATH <new path> to set the PATH variable does not work due to a bug in OS X (which seems true from personal experience).
I found that there's another way the PATH can be set for applications not launched from the shell: sudo launchctl config user path <new path> This option is documented in the launchctl man page: config system | user parameter value Sets persistent configuration information for launchd(8) domains. Only the system domain and user domains may be configured. The location of the persistent storage is an implementation detail, and changes to that storage should only be made through this subcommand. A reboot is required for changes made through this subcommand to take effect. [...] path Sets the PATH environment variable for all services within the target domain to the string value. The string value should conform to the format outlined for the PATH environment variable in environ(7). Note that if a service specifies its own PATH, the service-specific environment variable will take precedence. NOTE: This facility cannot be used to set general environment variables for all services within the domain. It is intentionally scoped to the PATH environment variable and nothing else for security reasons. I have confirmed this to work with a GUI application started from Finder (which uses getenv to get PATH). Note that you only have to do this once and the change will be persistent through reboots. A: Sometimes all of the previous answers simply don't work. If you want to have access to a system variable (like M2_HOME) in Eclipse or in IntelliJ IDEA, the only thing that works for me in this case is: First (step 1) edit /etc/launchd.conf to contain a line like this: "setenv VAR value" and then (step 2) reboot. Simply modifying .bash_profile won't work because in OS X the applications are not started as in other Unixes; they don't inherit the parent's shell variables. All the other modifications won't work for a reason that is unknown to me. Maybe someone else can clarify about this.
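Since launchctl config user path persists across reboots and the man page above requires the value to follow the environ(7) PATH format, it can be worth sanity-checking the string before committing it. An illustrative validator (the function and its rules are my simplification, not part of launchctl):

```python
def valid_path_string(value):
    """Rough check that value looks like a PATH per environ(7):
    one or more absolute directories separated by colons, no empty entries."""
    entries = value.split(":")
    return bool(entries) and all(e.startswith("/") for e in entries)

print(valid_path_string("/opt/local/bin:/usr/bin:/bin"))
print(valid_path_string("/usr/bin::/bin"))   # empty entry
print(valid_path_string("relative/bin"))     # not absolute
```

A check like this before running the sudo command saves a reboot cycle spent discovering a malformed value.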
A: While the answers here aren't "wrong", I'll add another: never make environment variable changes in OS X that affect "all processes", or even, outside the shell, for all processes run interactively by a given user. In my experience, global changes to environment variables like PATH for all processes are even more likely to break things on OS X than on Windows. Reason being, lots of OS X applications and other software (including, perhaps especially, components of the OS itself) rely on UNIX command-line tools under the hood, and assume the behavior of the versions of these tools provided with the system, and don't necessarily use absolute paths when doing so (similar comments apply to dynamically-loaded libraries and DYLD_* environment variables). Consider, for instance, that the highest-rated answers to various Stack Overflow questions about replacing OS X-supplied versions of interpreters like Python and Ruby generally say "don't do this." OS X is really no different than other UNIX-like operating systems (e.g., Linux, FreeBSD, and Solaris) in this respect; the most likely reason Apple doesn't provide an easy way to do this is because it breaks things. To the extent Windows isn't as prone to these problems, it's due to two things: (1) Windows software doesn't tend to rely on command-line tools to the extent that UNIX software does, and (2) Microsoft has had such an extensive history of both "DLL hell" and security problems caused by changes that affect all processes that they've changed the behavior of dynamic loading in newer Windows versions to limit the impact of "global" configuration options like PATH. "Lame" or not, you'll have a far more stable system if you restrict such changes to smaller scopes. A: After chasing the Environment Variables preference pane and discovering that the link is broken and a search on Apple's site seems to indicate they've forgotten about it... I started back onto the trail of the elusive launchd process. 
On my system (Mac OS X 10.6.8) it appears that variables defined in environment.plist are being reliably exported to apps launched from Spotlight (via launchd). My trouble is that those vars are not being exported to new bash sessions in Terminal, i.e. I have the opposite problem from the one portrayed here. NOTE: environment.plist looks like JSON, not XML, as described previously. I was able to get Spotlight apps to see the vars by editing ~/.MacOSX/environment.plist, and I was able to force the same vars into a new Terminal session by adding the following to my .profile file: eval $(launchctl export) A: Any of the Bash startup files -- ~/.bashrc, ~/.bash_profile, ~/.profile. There's also some sort of weird file named ~/.MacOSX/environment.plist for environment variables in GUI applications. A: Here is a very simple way to do what you want. In my case, it was getting Gradle to work (for Android Studio). * *Open up Terminal. *Run the following command: sudo nano /etc/paths or sudo vim /etc/paths *Enter your password, when prompted. *Go to the bottom of the file, and enter the path you wish to add. *Hit Control + X to quit. *Enter 'Y' to save the modified buffer. *Open a new terminal window then type: echo $PATH You should see the new path appended to the end of the PATH. I got these details from this post: Add to the PATH on Mac OS X 10.8 Mountain Lion and up A: Up to and including OS X v10.7 (Lion) you can set them in: ~/.MacOSX/environment.plist See: * *https://developer.apple.com/legacy/library/qa/qa1067/_index.html *https://developer.apple.com/library/content/documentation/MacOSX/Conceptual/BPRuntimeConfig/Articles/EnvironmentVars.html For PATH in the Terminal, you should be able to set it in .bash_profile or .profile (you'll probably have to create it though) For OS X v10.8 (Mountain Lion) and beyond you need to use launchd and launchctl.
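The ~/.MacOSX/environment.plist mentioned in the last two answers is an ordinary property list mapping variable names to string values. A sketch of producing one with Python's standard plistlib (the variable values are examples; this file is only honored up to OS X v10.7):

```python
import plistlib

# Hypothetical variables to expose to GUI apps on pre-10.8 systems
env = {"JAVA_HOME": "/Library/Java/Home",
       "PATH": "/usr/local/bin:/usr/bin:/bin"}

# plistlib serializes the dict as an XML plist; writing these bytes to
# ~/.MacOSX/environment.plist would be the final (pre-10.8) step.
data = plistlib.dumps(env)
print(data.decode())

# Round-trip to confirm the structure survives serialization
assert plistlib.loads(data) == env
```

This is just the file format; on 10.8+ the file is ignored, which is why the launchd/launchctl answers above exist.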
A: Much like the answer Matt Curtis gave, I set environment variables via launchctl, but I wrap it in a function called export, so that whenever I export a variable as normal in my .bash_profile, it is also set by launchctl. Here is what I do: * *My .bash_profile consists solely of one line (this is just personal preference): source .bashrc *My .bashrc has this:

function export() {
    builtin export "$@"
    if [[ ${#@} -eq 1 && "${@//[^=]/}" ]]
    then
        launchctl setenv "${@%%=*}" "${@#*=}"
    elif [[ ! "${@//[^ ]/}" ]]
    then
        launchctl setenv "${@}" "${!@}"
    fi
}
export -f export

*The above will overload the Bash builtin "export" and will export everything normally (you'll notice I export "export" with it!), then properly set the variables for OS X app environments via launchctl, whether you use any of the following:

export LC_CTYPE=en_US.UTF-8
# ~$ launchctl getenv LC_CTYPE
# en_US.UTF-8
PATH="/usr/local/bin:${PATH}"
PATH="/usr/local/opt/coreutils/libexec/gnubin:${PATH}"
export PATH
# ~$ launchctl getenv PATH
# /usr/local/opt/coreutils/libexec/gnubin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
export CXX_FLAGS="-mmacosx-version-min=10.9"
# ~$ launchctl getenv CXX_FLAGS
# -mmacosx-version-min=10.9

*This way I don't have to send every variable to launchctl every time, and I can just have my .bash_profile / .bashrc set up the way I want. Open a terminal window, check the environment variables you're interested in with launchctl getenv myVar, change something in your .bash_profile/.bashrc, close the terminal window and re-open it, check the variable again with launchctl, and voilà, it's changed. *Again, like the other solutions for the post-Mountain Lion world, for any new environment variables to be available for apps, you need to launch or re-launch them after the change. A: For Bash, try adding your environment variables to the file /etc/profile to make them available for all users. No need to reboot, just start a new Terminal session.
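The overloaded export function shown a little earlier distinguishes a single NAME=value argument from a bare NAME whose current shell value should be mirrored. The same dispatch, sketched in Python purely for illustration (the real wrapper shells out to launchctl setenv):

```python
def classify_export(arg):
    """Mimic the wrapper's two cases: 'NAME=value' yields the pair to set;
    a bare 'NAME' means 'mirror the variable's current shell value'."""
    if "=" in arg:
        name, _, value = arg.partition("=")
        return ("set", name, value)
    return ("mirror", arg, None)

print(classify_export("LC_CTYPE=en_US.UTF-8"))
print(classify_export("PATH"))
```

In the bash version the same split is done with the "${@%%=*}" / "${@#*=}" parameter expansions, and the mirror case uses indirect expansion ("${!@}") to read the variable's current value.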
A: My personal practice is .bash_profile. I add paths there and append them to the PATH variable: GOPATH=/usr/local/go/bin/ MYSQLPATH=/usr/local/opt/mysql@5.6/bin PATH=$PATH:$GOPATH:$MYSQLPATH Then I can check an individual path with echo $GOPATH or echo $MYSQLPATH, or all of them with echo $PATH.
{ "language": "en", "url": "https://stackoverflow.com/questions/135688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "923" }
Q: Can you have too many stored procedures? Is there such a thing as too many stored procedures? I know there is not a limit to the number you can have, but are there any performance or architectural reasons not to create hundreds, or thousands? A: To me the biggest limitation with hundreds or thousands of stored procedures is maintainability. Even though that is not a direct performance hit, it should be a consideration. From an architectural standpoint, you have to plan not just for the initial development of the application, but for future changes and maintenance. That being said, you should design/create as many as your application requires. Although with Hibernate, NHibernate, or .NET LINQ I would try to keep as much stored procedure logic in the code as possible, and only put it in the database when speed is a factor. A: Yes you can. If you have more than zero, you have too many. A: Do you mean besides the fact that you have to maintain all of them when you make a change to the database? That's the big reason for me. It's better to have fewer ones that can handle many scenarios than hundreds that can only handle one. A: I would just mention that with all such things maintenance becomes an issue. Who knows what they all are and what purpose they serve? What happens years down the road when they are just artifacts of a legacy system that no one can recall the purpose of, but everyone is scared to mess with? I think the main thing is not the question of whether it is possible, but whether it should be done. That's just one thought anyway. A: I have just under 200 in a commercial SQL Server 2005 product of mine, which is likely to increase by about another 10-20 in the next few days for some new reports. Where possible I write 'subroutine' sprocs, so that anytime I find myself putting 3 or 4 identical statements together in more than a couple of sprocs, it's time to turn those few statements into a subroutine, if you see what I mean.
I don't tend to use the sprocs to perform all the business logic as such; I just prefer to have sprocs doing anything that could be seen as 'transactional' - so whereas my client code (in Delphi, but whatever) might do the odd insert or update itself, as soon as something requires a couple of things to be updated or inserted 'at once', it's time for a sproc. I have a simple, crude naming convention to assist in readability (and in maintenance!). The product's code name is 'Rachel', so we have: RP_whatever - general sprocs that update/insert data, or return small result sets RXP_whatever - subroutine sprocs that serve as 'functions', or workers to the RP_ type procs REP_whatever - sprocs that simply act almost as glorified views - they don't alter data, and they return potentially complex result sets for reporting purposes, etc. XX_whatever - internal test/development/maintenance sprocs that the client application (or the local dba) would not normally use The naming is arbitrary; it's just to help distinguish from the sp_ prefix that SQL Server uses. I guess if I found I had 400-500 sprocs in the database I might become concerned, but a few hundred isn't a problem at all as long as you have a system for identifying what kind of activity each sproc is responsible for. I'd rather chase down schema changes in a few hundred sprocs (where the SQL Server tools can help you find dependencies, etc.) than try to chase down schema changes in my high-level programming language. A: I suppose the deeper question here is: where else should all of that code go? Views can be used to provide convenient access to data through standard SQL. More powerful application languages and libraries can be used to generate SQL. Stored procedures tend to obscure the nature of the data and introduce an unnecessary layer of abstraction, complexity, and maintenance to a system.
Given the power of object relational mapping tools to generate basic SQL, any system that is overly dependent on stored procedures is going to be less efficient to develop. With ORM, there is simply a lot less code to write. A: Hell yes, you can have too many. I worked on a system a couple of years ago that had something like 10,000 procs. It was insane. The people who wrote the system didn't really know how to program, but they did know how to write badly structured procs, so they put almost all of the application logic in the procs. Some of the procs ran for thousands of lines. Managing the mess was a nightmare. Too many (and I can't draw you a specific line in the sand) is probably an indicator of poor design. Besides, as others have pointed out, there are better ways to achieve a granular database interface without resorting to massive numbers of procs. A: My question is why aren't you using some sort of middleware platform if you're talking about hundreds or thousands of stored procedures? Call me old-fashioned, but I thought the database should just hold the data and the program(s) should be the ones making the business logic decisions. A: In an Oracle database you can use Packages to group related procedures together. A few of those and your namespace would free up. A: Too much of any particular thing probably means you're doing it wrong. Too many config files, too many buttons on the screen ... I can't speak for other RDBMSs, but an Oracle application that uses PL/SQL should be logically bundling procedures and functions into packages to prevent cascading invalidations and manage code better. A: To me, using lots of stored procedures seems to lead you toward something equivalent to the PHP API: thousands of global functions which may have nothing to do with each other, all grouped together. The only way to relate them is to have some naming convention where you prefix each function with a module name, similar to the mysql_ functions in PHP.
I think this is very hard to maintain, and very hard to keep everything consistent. I think stored procedures work well for things that really need to take place on the server. A stored procedure for a simple select query, or even a select query with a join, is probably going too far. Only use stored procedures where you actually require advanced logic to be handled on the database server. A: As others have said, it really comes down to the management aspect. After a while, it turns into finding the proverbial needle in the haystack. That's even more so when there are poor naming standards in place... A: No, not that I know of. However, you should have a good naming convention for them. For example, starting them with usp_ is something that a lot of people like to do (faster than starting with sp_). Maybe your reporting sprocs should be usp_reporting_, and your business objects should be usp_business_, etc. As long as you can manage them, there shouldn't be a problem with having a lot of them. If they get too big you might want to split the database up into multiple databases. This might make more logical sense and can help with database size and such. A: If you have a lot of stored procedures, you'll find you are tying yourself to one database - some of them may not easily be transferred. Tying yourself to one database isn't good design. Moreover, if you have business logic both in the database and in a business layer, then maintenance becomes a problem. A: Yep, you can have toooooo many stored procedures. Naming conventions can really help with this. For instance, if you have a naming convention where you always have the table name and InsUpd or Get, it's pretty easy to find the right stored procedure when you need it. If you don't have some kind of standard naming convention it would be really easy to come up with two (or more) procedures that do almost exactly the same thing.
It would be easy to find a couple of the following without having a standardized naming convention... usp_GetCustomersOrder, CustomerOrderGet_sp, GetOrdersByCustomer_sp, spOrderDetailFetch, GetCustOrderInfo, etc. Another thing that starts happening when you have a lot of stored procedures is that you'll have some stored procedures that are never used and some that are just rarely used... If you don't have some way of tracking stored procedure usage you'll either end up with a lot of unused procedures... or worse, getting rid of one you think is never used and finding out afterwards that it's only used once a year or once a quarter. :( Jeff A: Not while you have sane naming conventions, up-to-date documentation and enough unit tests to keep them organized. A: My first big project was developed with an Object Relational Mapper that abstracted the database, so we did everything object-oriented and it was easier to maintain and fix bugs, and it was especially easy to make changes since all the data access and business logic was C# code. However, when it came to doing complex stuff the system felt slow, so we had to work around those things or re-architect the project, especially when the ORM had to do complex joins in the database. I'm now working at a different company that has a motto of "our applications are just frontends of the database", and it has its advantages and disadvantages. We have really fast applications since all of the data operations are done in stored procedures (this is using mainly SQL Server 2005), but when it comes to making a change or a fix to the software it's harder, because you have to change both the C# code and the SQL stored procedures, so it's like working twice. Contrary to when I used an ORM, you don't have refactoring or strongly typed objects in SQL; Management Studio and other tools help a lot, but normally you spend more time going this way.
So I would say that, depending on the needs of the project, if you are not going to keep really complex business data then maybe you could even avoid using stored procedures at all and use an ORM that makes developers' lives easier. If you are concerned about performance and need to squeeze out all of the resources then you should use stored procedures. Of course, the quantity depends upon the architecture that you design; for example, if you always have stored procedures for CRUD operations you would need one for insert, one for update, one for delete, one for select and one for list, since these are the most common operations. If you have 100 business objects, multiply those five by that and you get 500 stored procedures just to manage the most basic operations on your objects, so, yes, they could be too many. A: I think as long as you name them uspXXX and not spXXX the lookup by name will be direct, so there's no downside - though you might wonder about generalizing the procs if you have thousands... A: You have to put all that code somewhere. If you are going to have stored procedures at all (I personally do favour them, but many do not), then you might as well use them as much as you need to. At least all the database logic is in one place. The downside is that there isn't typically much structure to a bucket full of stored procs. Your call! :)
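The naming-convention chaos described above is easy to catch mechanically. As a rough sketch (the `usp_<Entity>_<Verb>` pattern here is just one hypothetical convention for illustration, not something the answers prescribe):

```python
import re

# One possible convention: usp_<Entity>_<Verb>, where the verb is one of the
# five CRUD-style operations discussed above. Illustrative only, not a standard.
PROC_NAME = re.compile(r"^usp_[A-Z][A-Za-z]*_(Insert|Update|Delete|Select|List)$")

def check_proc_names(names):
    """Return the procedure names that violate the convention."""
    return [n for n in names if not PROC_NAME.match(n)]

# The inconsistent names from the answer above all fail the check;
# only the one conforming name passes.
bad = check_proc_names([
    "usp_GetCustomersOrder", "CustomerOrderGet_sp", "GetOrdersByCustomer_sp",
    "spOrderDetailFetch", "usp_CustomerOrder_Select",
])
print(bad)
```

A check like this can run in a build step against the list of procedure names pulled from the database, so the "needle in the haystack" problem never gets a chance to grow.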
{ "language": "en", "url": "https://stackoverflow.com/questions/135701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: What are the different types of indexes, what are the benefits of each? What are the different types of indexes, what are the benefits of each? I heard of covering and clustered indexes, are there more? Where would you use them? A: OdeToCode has a good article covering the basic differences As it says in the article: Proper indexes are crucial for good performance in large databases. Sometimes you can make up for a poorly written query with a good index, but it can be hard to make up for poor indexing with even the best queries. Quite true, too... If you're just starting out with it, I'd focus on clustered and composite indexes, since they'll probably be what you use the most. A: I'll add a couple of index types BITMAP - when you have a very low number of different possible values, very fast and doesn't take up much space PARTITIONED - allows the index to be partitioned based on some property; usually advantageous on very large database objects for storage or performance reasons. FUNCTION/EXPRESSION indexes - used to pre-calculate some value based on the table and store it in the index; a very simple example might be an index based on lower() or a substring function. A: PostgreSQL allows partial indexes, where only rows that match a predicate are indexed. For instance, you might want to index the customer table for only those records which are active. This might look something like: create index i on customers (id, name, whatever) where is_active is true; If you index many columns, and you have many inactive customers, this can be a big win in terms of space (the index will be stored in fewer disk pages) and thus performance.
To hit the index you need to, at a minimum, specify the predicate: select name from customers where is_active is true; A: * *Unique - Guarantees unique values for the column (or set of columns) included in the index *Covering - Includes all of the columns that are used in a particular query (or set of queries), allowing the database to use only the index and not actually have to look at the table data to retrieve the results *Clustered - This is the way in which the actual data is ordered on the disk, which means if a query uses the clustered index for looking up the values, it does not have to take the additional step of looking up the actual table row for any data not included in the index. A: Conventional wisdom suggests that index choice should be based on cardinality. They'll say: for a low cardinality column like GENDER, use bitmap; for a high cardinality column like LAST_NAME, use b-tree. This is not the case with Oracle, where index choice should instead be based on the type of application (OLTP vs. OLAP). DML on tables with bitmap indexes can cause serious lock contention. On the other hand, the Oracle CBO can easily combine multiple bitmap indexes together, and bitmap indexes can be used to search for nulls. As a general rule: For an OLTP system with frequent DML and routine queries, use btree. For an OLAP system with infrequent DML and adhoc queries, use bitmap. I'm not sure if this applies to other databases, comments are welcome. The following articles discuss the subject further: * *Bitmap Index vs. B-tree Index: Which and When? *Understanding Bitmap Indexes A: Different database systems have different names for the same type of index, so be careful with this. For example, what SQL Server and Sybase call "clustered index" is called in Oracle an "index-organised table". A: I suggest you search the blogs of Jason Massie (http://statisticsio.com/) and Brent Ozar (http://www.brentozar.com/) for related info.
They have some posts about real-life scenarios that deal with indexes. A: Oracle has various combinations of b-tree, bitmap, partitioned and non-partitioned, reverse byte, bitmap join, and domain indexes. Here's a link to the 11gR1 documentation on the subject: http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/data_acc.htm#PFGRF004 A: * *Unique *Clustered *Non-clustered *Columnstore *Index with included columns *Index on computed column *Filtered *Spatial *XML *Full-text A: SQL Server 2008 has filtered indexes, similar to PostgreSQL's partial indexes. Both allow you to include in the index only rows matching specified criteria. The syntax is identical to PostgreSQL's: create index i on Customers(name) where is_alive = cast(1 as bit); A: To view the types of indexes and their meanings visit: https://msdn.microsoft.com/en-us/library/ms175049.aspx
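SQLite also supports partial indexes with the same WHERE syntax shown in the PostgreSQL and SQL Server answers above (assuming SQLite 3.8.0 or later), so the behavior is easy to try from Python. A minimal sketch - note that the predicate must appear in the query for the planner to consider the partial index:

```python
import sqlite3

# In-memory database with a partial index covering only active customers,
# mirroring the "create index ... where is_active" examples above.
con = sqlite3.connect(":memory:")
con.execute("create table customers (id integer, name text, is_active integer)")
con.execute("create index i_active on customers (name) where is_active = 1")
con.executemany("insert into customers values (?, ?, ?)",
                [(1, "alice", 1), (2, "bob", 0)])

# Including the predicate lets the planner use the partial index;
# the plan's detail column should mention i_active.
plan = con.execute(
    "explain query plan select name from customers "
    "where is_active = 1 and name = 'alice'").fetchall()
print(plan)
```

Dropping the `is_active = 1` predicate from the query makes the partial index unusable, and the plan falls back to a full table scan.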
{ "language": "en", "url": "https://stackoverflow.com/questions/135730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32" }
Q: Page down and page up in Emacs on Windows using the Windows key I am trying to learn Emacs and trying to find the best keyboard layout for me. One thing is really annoying me. I have added the following lines to .emacs (global-set-key "\C-y" 'scroll-up) (global-set-key "\M-y" 'scroll-down) When I hold Control and press y a few times, it will page down on every press of y. However, when I hold the Windows key (mapped as Meta) and press y a few times, it will only page up on the first press of y, and on all subsequent presses of y I get the ‘y’ character inserted in the buffer. Can the page up behave like page down? I want to hold Meta and keep pressing y to scroll multiple pages up. I am using GNU Emacs 23.0.60.1 (i386-mingw-nt5.1.2600) of 2008-05-12 on LENNART-69DE564 (patched). It is Emacs with the EmacsW32 patch. Is this a problem with this Emacs? A problem with the Meta key? I tried the original GNU Emacs (not patched) and it works OK with Alt. But my problem is not that I want to scroll without releasing any key. I release the y key and press it multiple times, but don't want to have to release the Meta key. The same problem is described here: http://groups.google.com/group/gnu.emacs.help/browse_thread/thread/f30f4b75a8b75b10 The problem is not that I have changed the key mapping. It looks like it is a bug in the EmacsW32 version. Here is another description of the problem: Unreleased Meta/Win modifier A: * *Use C-v and M-v. *Don't change C-y, M-y default bindings. A: Could this be a side effect of using the Windows key as Meta? I'm thinking this because in a non-Emacs situation, if you press and hold the Windows key and another key for a shortcut (Win+E for Explorer, Win+R for the Run dialog, etc.) the desired action only triggers once, not multiple times if you keep holding it down. I'd try reassigning Meta to Alt and see if the problem persists. If it doesn't, then I'm not sure what other option you have, since it's likely the OS only sending the Windows key press once to the app in focus.
A: You should use the patched EmacsW32 version if you want the Windows key as Meta. From the site about the patches: "Changes that make it possible to use the Windows keyboard keys as META in Emacs. Without this patch, key sequences like Win+E will always do what they do by default in Windows, i.e. in this case open up Windows Explorer. (This patch is not used by default, you have to turn it on.)"
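For reference, the stock GNU Emacs Windows port also has knobs for this; a minimal .emacs sketch (the variable and function names below are from the w32 port's documentation - treat the exact combination as something to verify against your build, since the question reports that the unpatched behavior still differs):

```elisp
;; Treat the left Windows key as Meta instead of passing it to Windows.
(setq w32-lwindow-modifier 'meta)
(setq w32-pass-lwindow-to-system nil)

;; Ask Windows not to grab combinations like Win+y for itself, so that
;; held-down Meta (Windows key) sequences keep reaching Emacs.
(w32-register-hot-key [M-y])
```

With the hot key registered, Windows stops intercepting that combination at the OS level, which is the usual cause of the "only works once" behavior described above.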
{ "language": "en", "url": "https://stackoverflow.com/questions/135734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }