Q: Tasklist replacement for Visual Studio I would like to use the task list in Visual Studio, but it really lacks almost any useful feature a task list should provide, so I use Todo-List externally to keep track of the things I need to get done. It would be nice to have it all in one place. Does anyone know of a good replacement add-on for the task list in Visual Studio? Thanks in advance! A: Assumed: Visual Studio 2008 + ReSharper. ReSharper -> Windows -> ToDo Explorer A: For semi-immediate programming tasks I use TODO comments in code and ReSharper for Visual Studio to view them. For longer-term tasks I use Team Foundation Server to record work items. For non-programming tasks I use Google Calendar. A: You can modify the task list in Visual Studio by clicking TOOLS --> OPTIONS --> ENVIRONMENT --> TASK LIST. In the Token List you can add more tokens specific to what you want to call your tasks. For example, I have an EDITING token set up, so in any module, class or method that I'm working on I just add the ' EDITING: (name of whatever method, etc.) comment, and I can quickly see where I left off and get back to it by double-clicking. Here are a few other tokens I find useful... If you would like more advanced project and code tracking, you should check out Visual Studio Online. It's free for up to 5 users. http://www.visualstudio.com/en-us/products/visual-studio-online-overview-vs A: I don't know of an add-on (I use Remember The Milk externally), but I think you are onto a good idea there. A: How about the FogBugz add-in for Visual Studio 2005 and 2008? This requires a FogBugz account hosted either locally or by Fog Creek. A free Student and Startup version is available. A: We use Team Foundation Server at work - it is a really superb product, but too expensive for smaller teams. Outside of work I'm looking to use Countersoft Gemini (http://countersoft.com/home.aspx), which has good VS integration and is competitive when looking at the hosted version with unlimited users.
{ "language": "en", "url": "https://stackoverflow.com/questions/94864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do I attach an .mdf to SQL 2005? Running sp_attach_single_file_db gives this error: The log scan number (10913:125:2) passed to log scan in database 'myDB' is not valid. Isn't it supposed to re-create the log file? How else would I be able to attach/repair that .mdf file? A: It depends on what mode your database was in when it was detached. It's possible there are uncommitted/unwritten transactions in that log file that are needed to attach the database; otherwise there would be data loss. Do you know what recovery mode you were in when it was detached? A: It worked when I attached another database (one with an .ldf log file), then the one in question, and then detached the first one again. Don't ask me why.
{ "language": "en", "url": "https://stackoverflow.com/questions/94866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Image Processing, In Python? I've recently come across a problem which requires at least a basic degree of image processing. Can I do this in Python, and if so, with what? A: There's also pycairo, which might be more suitable depending on your needs. A: There is actually a wonderful Python Imaging Library (PIL). It gives you the ability to alter existing images, including anti-aliasing capabilities, and create new images with text and such. You can also find a decent introductory tutorial in the PIL handbook provided on the aforementioned site. A: The best-known library is PIL. However if you are simply doing basic manipulation, you are probably better off with the Python bindings for ImageMagick, which will be a good deal more efficient than writing the transforms in Python. A: Depending on what you mean by "image processing", a better choice might be in the numpy based libraries: mahotas, scikits.image, or scipy.ndimage. All of these work based on numpy arrays, so you can mix and match functions from one library and another. I started the website http://pythonvision.org which has more information on these. A: If you are creating a custom image processing effect, you may find PythonPixels useful. http://halfhourhacks.blogspot.com/2008/03/pythonpixels.html It is intended for writing and experimenting with image processing. A: VIPS should be fast and uses multiple CPUs: https://github.com/libvips/libvips/wiki/Speed-and-memory-use A: You also have an approach to image processing based on "standard" scientific modules: SciPy has a whole package dedicated to image processing: scipy.ndimage. SciPy is in effect the standard general numerical calculations package; it is based on the de facto standard array-manipulation module NumPy: images can also be manipulated as arrays of numbers. As for image display, Matplotlib (also part of the "science trilogy") makes displaying images quite simple. SciPy is still actively maintained, so it's a good investment for the future. Furthermore, SciPy currently runs with Python 3 too, while the Python Imaging Library (PIL) does not. A: To complete the list: opencv http://opencv.willowgarage.com/documentation/python/index.html
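As a rough illustration of the NumPy/scipy.ndimage route mentioned above, here is a minimal sketch; the file name example.png and the filter parameters are placeholders, and it assumes Pillow (the maintained fork of PIL) and SciPy are installed:

    import numpy as np
    from PIL import Image
    from scipy import ndimage

    # Load an image as a grayscale float array
    img = np.asarray(Image.open("example.png").convert("L"), dtype=float)

    # Smooth it, then build a simple gradient-magnitude edge map
    blurred = ndimage.gaussian_filter(img, sigma=2)
    edges = np.hypot(ndimage.sobel(blurred, axis=0), ndimage.sobel(blurred, axis=1))

    # Rescale to 0-255 and save the result
    out = (255 * edges / edges.max()).astype(np.uint8)
    Image.fromarray(out).save("edges.png")

Because the image is just a NumPy array at this point, you can largely mix in routines from the other NumPy-based libraries the answers mention (mahotas, scikits.image, OpenCV's Python bindings) without converting between formats.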
{ "language": "en", "url": "https://stackoverflow.com/questions/94875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "81" }
Q: What is the best method of inter-process communication between Java and .NET 3.5? A third-party application reads some Java code from an XML file and runs it when a certain event happens. In Java, I want to tell a .NET 3.5 application, running on the same machine, that this event occurred. The total data transferred each time is probably a few characters. What is the best way of using Java to tell the .NET process that something happened? Java doesn't seem to support named pipes on Windows, .NET doesn't natively support memory mapping, and any solution involving web services or RMI is overkill. A: If you don't want the full overhead of RMI, you could do direct socket communication between the two by opening ports and talking to each other. You'd have to have some way for both processes to agree on which ports to use, how to handshake, etc., but it would be simpler than RMI. ETA: it looks like you can use named pipes from Java; you just can't create them. So if the .NET process creates the pipe, you could read/write to it with Java, which just sees it as a file.
{ "language": "en", "url": "https://stackoverflow.com/questions/94882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Generics vs. Array Lists The system I work on here was written before .NET 2.0 and didn't have the benefit of generics. It was eventually updated to 2.0, but none of the code was refactored due to time constraints. There are a number of places where the code uses ArrayLists etc. that store things as objects. From a performance perspective, how important is it to change the code to use generics? I know that, from a performance perspective, boxing and unboxing etc. are inefficient, but how much of a performance gain will there really be from changing it? Are generics something to use on a go-forward basis, or is there enough of a performance difference that a conscious effort should be made to update old code? A: Here are the results I got from a simple parse of a string from a 100KB file, run 100,000 times. The generic List(Of Char) took 612.293 seconds to go 100,000 times through the file. The ArrayList took 2,880.415 seconds to go 100,000 times through the file. This means that in this scenario (your mileage will vary) the generic List(Of Char) is 4.7 times faster. Here is the code I ran through 100,000 times:

Public Sub Run(ByVal strToProcess As String) Implements IPerfStub.Run
    Dim genList As New ArrayList
    For Each ch As Char In strToProcess.ToCharArray
        genList.Add(ch)
    Next
    Dim dummy As New System.Text.StringBuilder()
    For i As Integer = 0 To genList.Count - 1
        dummy.Append(genList(i))
    Next
End Sub

Public Sub Run(ByVal strToProcess As String) Implements IPerfStub.Run
    Dim genList As New List(Of Char)
    For Each ch As Char In strToProcess.ToCharArray
        genList.Add(ch)
    Next
    Dim dummy As New System.Text.StringBuilder()
    For i As Integer = 0 To genList.Count - 1
        dummy.Append(genList(i))
    Next
End Sub

A: The only way to know for sure is to profile your code using a tool like dotTrace. http://www.jetbrains.com/profiler/ It's possible that the boxing/unboxing is trivial in your particular application and wouldn't be worth refactoring. Going forward, you should still consider using generics due to the compile-time type safety. A: Generics, whether Java or .NET, should be used for design and type safety, not for performance. Autoboxing is different from generics (it is essentially implicit conversion between objects and primitives), and as you mentioned, you should NOT use it in place of a primitive if there is going to be a lot of arithmetic or other operations that will cause a performance hit from the repeated implicit object creation/destruction. Overall I would suggest using generics going forward, and only updating existing code if it needs to be cleaned up for type safety / design purposes, not performance. A: Technically the performance of generics is, as you say, better. However, unless performance is hugely important AND you've already optimised in other areas, you're likely to get MUCH better improvements by spending your time elsewhere. I would suggest:
* use generics going forward
* if you have solid unit tests, refactor to generics as you touch code
* spend other time doing refactoring/measurement that will significantly improve performance (database calls, changing data structures, etc.) rather than chasing a few milliseconds here and there
Of course there are reasons other than performance to change to generics:
* less error-prone, since you have compile-time checking of types
* more readable, since you don't need to cast all over the place and it's obvious what type is stored in a collection
* if you're using generics going forward, then it's cleaner to use them everywhere
A: It depends; the best answer is to profile your code and see.
I like AQTime, but a number of packages exist for this. In general, if an ArrayList is being used a LOT, it may be worth switching it to a generic version. Really though, it's most likely that you wouldn't even be able to measure the performance difference. Boxing and unboxing are extra steps, but modern computers are so fast that it makes almost no difference. As an ArrayList is really just a normal array with a nice wrapper, you would probably see much more performance gained from better data structure selection (ArrayList.Remove is O(n)!) than from the conversion to generics. Edit: Outlaw Programmer has a good point: you will still be boxing and unboxing with generics, it just happens implicitly. Removing all the code around checking for exceptions and nulls from casting and "is/as" keywords would help a bit though. A: The biggest gains you will find are in the maintenance phase. Generics are much easier to deal with and update, without having to deal with conversion and casting issues. If this is code that you continually visit, then by all means take the effort. If this is code that hasn't been touched in years, I wouldn't really bother. A: What does autoboxing/unboxing have to do with generics? This is just a type-safety issue. With a non-generic collection, you are required to explicitly cast back to an object's actual type. With generics, you can skip this step. I don't think there is a performance difference one way or the other. A: My old company actually considered this problem. The approach we took was: if it's easy to refactor, do it; if not (i.e. it will touch too many classes), leave it for a later time. It really depends on whether or not you have the time to do it, or whether there are more important items to be coding (i.e. features you should be implementing for clients). Then again, if you're not working on something for a client, go ahead and spend time refactoring. It'll improve the readability of the code for yourself. A: It depends on how much is out there in your code. If you bind or display large lists in the UI, you would probably see a great gain in performance. If your ArrayLists are just sprinkled about here and there, then it probably wouldn't be a big deal to just get it cleaned up, but it also wouldn't impact overall performance very much. If you are using a lot of ArrayLists throughout your code and it would be a big undertaking to replace them (something that may impact your schedules), then you could adopt an if-you-touch-it-change-it approach. The main thing, though, is that generics are a lot easier to read and are more stable across the app due to the strong typing you get from them. You'll see gains not just in performance, but in code maintainability and stability. If you can do it quickly, I'd say do it. If you can get buy-in from the Product Owner, I'd recommend getting it cleaned up. You'll love your code more afterward. A: If the entities in the ArrayLists are object types, you'll gain a little from not casting them to the correct type. If they're value types (structs or primitives like Int32), then the boxing/unboxing process adds a lot of overhead, and generic collections should be much faster. Here's an MSDN article on the subject A: Generics have much better performance, especially if you're using value types (int, bool, struct, etc.), where you'll see a noticeable performance gain.
* Using an ArrayList with value types causes boxing/unboxing, which, if done several hundred times, is substantially slower than using a generic List.
* When storing value types as objects, you'll use up to four times the memory per item. While this amount won't drain your RAM, the cache memory, which is smaller, will hold fewer items. That means that while iterating over a long collection there will be many more copies from main memory into the cache, which will slow your application. I wrote about it here. A: Using generics should also mean that your code will be simpler and easier to work with if you want to leverage things like LINQ in the later C# versions.
{ "language": "en", "url": "https://stackoverflow.com/questions/94884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Looking for a good summary of SQL 2005 partitioning I'm looking at this as a baseline explanation of SQL 2005 Enterprise partitioning. Is there a resource that goes deeper into the fine points and considerations of this issue? Some more examples would be useful too. My main scenario is a time-based partitioning system, with one partition holding the most-accessed data from the last X days. This partition will have to slide somehow (at least periodically) to keep it referring to the same number of days. A: Here's an excellent white paper on "SQL Server 2005 Partitioned Tables and Indexes" by Kimberly Tripp. http://www.sqlskills.com/resources/Whitepapers/Partitioning%20in%20SQL%20Server%202005%20Beta%20II.htm A: What about this: Partitioning Data for Query Performance - Where's the benefit? A: I'm not sure if this will help you (it depends how much data you are working with), but the whitepaper below discusses how to use staging tables and the SWITCH clause to alter partitions. http://www.microsoft.com/technet/prodtechnol/sql/bestpractice/loading_bulk_data_partitioned_table.mspx It's more about bulk loading data into partitions, but it might be worth a read. The example scenario explained at the recent SQL Summit in Sydney, Australia used a date-based partitioning scheme, which might be similar to your scenario. A: I've found that SQL Server 2005 Books Online normally has all the information I'm looking for. I found a good resource in the 2005 BOL on SQL Server 2005 partitioning: http://technet.microsoft.com/en-us/library/ms188706(SQL.90).aspx This link goes over designing partitioned tables and indexes: http://technet.microsoft.com/en-us/library/ms175533(SQL.90).aspx Here is a blog post that explains the sliding-window case you posted: http://blogs.msdn.com/menzos/archive/2008/06/30/table-partitioning-sliding-window-case.aspx A: This site may help you: http://highscalability.com/ Specific tags: http://highscalability.com/tags/shard http://highscalability.com/tags/sharding
{ "language": "en", "url": "https://stackoverflow.com/questions/94885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I return random numbers as a column in SQL Server 2005? I'm running a SQL query on SQL Server 2005, and in addition to 2 columns being queried from the database, I'd also like to return 1 column of random numbers along with them. I tried this: select column1, column2, floor(rand() * 10000) as column3 from table1 Which kinda works, but the problem is that this query returns the same random number on every row. It's a different number each time you run the query, but it doesn't vary from row to row. How can I do this and get a new random number for each row? A: I realize this is an older post... but you don't need a view. select column1, column2, ABS(CAST(CAST(NEWID() AS VARBINARY) AS int)) % 10000 as column3 from table1 A: WARNING Adam's answer involving the view is very inefficient and for very large sets can take out your database for quite a while, I would strongly recommend against using it on a regular basis or in situations where you need to populate large tables in production. Instead you could use this answer. Proof: CREATE VIEW vRandNumber AS SELECT RAND() as RandNumber go CREATE FUNCTION RandNumber() RETURNS float AS BEGIN RETURN (SELECT RandNumber FROM vRandNumber) END go create table bigtable(i int) go insert into bigtable select top 100000 1 from sysobjects a join sysobjects b on 1=1 go select cast(dbo.RandNumber() * 10000 as integer) as r into #t from bigtable -- CPU (1607) READS (204639) DURATION (1551) go select ABS(CAST(CAST(NEWID() AS VARBINARY) AS int)) % 10000 as r into #t1 from bigtable -- Runs 15 times faster - CPU (78) READS (809) DURATION (99) Profiler trace: alt text http://img519.imageshack.us/img519/8425/destroydbxu9.png This is proof that stuff is random enough for numbers between 0 to 9999 -- proof that stuff is random enough select avg(r) from #t -- 5004 select STDEV(r) from #t -- 2895.1999 select avg(r) from #t1 -- 4992 select STDEV(r) from #t1 -- 2881.44 select r,count(r) from #t group by r -- 10000 rows returned select r,count(r) from #t1 group by r -- 10000 row returned A: Adam's answer works really well, so I marked it as accepted. While I was waiting for an answer though, I also found this blog entry with a few other (slightly less random) methods. Kaboing's method was among them. http://blog.sqlauthority.com/2007/04/29/sql-server-random-number-generator-script-sql-query/ A: select RAND(CHECKSUM(NEWID())) A: You need to use a UDF first: CREATE VIEW vRandNumber AS SELECT RAND() as RandNumber second: CREATE FUNCTION RandNumber() RETURNS float AS BEGIN RETURN (SELECT RandNumber FROM vRandNumber) END test: SELECT dbo.RandNumber(), * FROM <table> Above borrowed from Jeff's SQL Server Blog A: For SQLServer, there are a couple of options. 1. A while loop to update an empty column with one random number at a time 2. A .net Assembly that contains a function that returns a random number A: Query select column1, column2, cast(new_id() as varchar(10)) as column3 from table1 A: You might like to consider generating a UUID instead of a random number using the newid function. These are guaranteed to be unique each time generated whereas there is a significant chance that some duplication will occur with a straightforward random number (and depending on what you're using it for could give you a phenominally hard to debug error at a later point) A: newid() i believe is very resource intensive. i recall trying that method on a table of a few million records and the performance wasn't nearly as good as rand(). 
A: According to my testing, the answer above never generates a value of 10000. This probably isn't much of a problem when you are generating a random number between 1 and 10000, but with the same algorithm between 1 and 5 it would be noticeable. Add 1 to your mod. A: This snippet seems to provide a reasonable substitute for rand() in that it returns a float between 0.0 and 1.0. It uses only the last 3 bytes provided by newid(), so the overall randomness may be slightly different from the convert-to-VARBINARY-then-INT-then-mod approach in the recommended answer. I have not had a chance to test relative performance, but it seems fast enough (and random enough) for my purposes.
SELECT CAST(SubString(CONVERT(binary(16), newid()), 14, 3) AS INT) / 16777216.0 AS R
A: I use C# for dealing with random numbers. It's much cleaner. I have a function I use to return a list of random numbers and a unique key, then I just join the uniqueKey on the row number. Because I use C#, I can easily specify a range within which the random numbers must fall. Here are the steps for making the function: http://www.sqlwithcindy.com/2013/04/elegant-random-number-list-in-sql-server.html Here is what my query ends up looking like:
SELECT rowNumber, name, randomNumber
FROM dbo.tvfRandomNumberList(1,10,100)
INNER JOIN (
    select ROW_NUMBER() over (order by int_id) as 'rowNumber', name
    from client
) as clients ON clients.rowNumber = uniqueKey
{ "language": "en", "url": "https://stackoverflow.com/questions/94906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: How can I load xml from a url instead of a file path in .NET? We currently have code like this: Dim xDoc = XDocument.Load(myXMLFilePath) The only way we know how to do it currently is by using a file path and impersonation (since this file is on a secured network path). I've looked at XDocument.Load on MSDN, but I don't see anything. A: I would suggest using a WebRequest to get a stream and load the stream into the document. A: That very documentation says that the file parameter is "A URI string that references the file to load into a new XDocument." Furthermore, I have code that does exactly that---uses XDocument.Load with a URI. A: //Sample XML <Product> <Name>Product1</Name> <Price>0.00</Price> </Product> //Reading XML XmlTextReader rdr = new XmlTextReader("http://your-url"); XDocument loaded = XDocument.Load(rdr); //View the loaded contents //Response.ClearHeaders(); //Response.ContentType = "text/xml;charset=UTF-8"; //Response.Write(loaded); //Response.End(); var data = from c in loaded.Descendants("Product") select new { name = c.Element("Name").Value, price = c.Element("Price").Value, }; foreach (var element in data) { //Do something here }
{ "language": "en", "url": "https://stackoverflow.com/questions/94912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Fetch one row per account id from list I have a table with game scores, allowing multiple rows per account id: scores (id, score, accountid). I want a list of the top 10 scorer ids and their scores. Can you provide an sql statement to select the top 10 scores, but only one score per account id? Thanks! A: select username, max(score) from usertable group by username order by max(score) desc limit 10; A: Try this: select top 10 username, max(score) from usertable group by username order by max(score) desc A: First limit the selection to the highest score for each account id. Then take the top ten scores. SELECT TOP 10 AccountId, Score FROM Scores s1 WHERE AccountId NOT IN (SELECT AccountId s2 FROM Scores WHERE s1.AccountId = s2.AccountId and s1.Score > s2.Score) ORDER BY Score DESC A: PostgreSQL has the DISTINCT ON clause, that works this way: SELECT DISTINCT ON (accountid) id, score, accountid FROM scoretable ORDER BY score DESC LIMIT 10; I don't think it's standard SQL though, so expect other databases to do it differently. A: SELECT accountid, MAX(score) as top_score FROM Scores GROUP BY accountid, ORDER BY top_score DESC LIMIT 0, 10 That should work fine in mysql. It's possible you may need to use 'ORDER BY MAX(score) DESC' instead of that order by - I don't have my SQL reference on hand. A: I believe that PostgreSQL (at least 8.3) will require that the DISTINCT ON expressions must match initial ORDER BY expressions. I.E. you can't use DISTINCT ON (accountid) when you have ORDER BY score DESC. To fix this, add it into the ORDER BY: SELECT DISTINCT ON (accountid) * FROM scoretable ORDER BY accountid, score DESC LIMIT 10; Using this method allows you to select all the columns in a table. It will only return 1 row per accountid even if there are duplicate 'max' values for score. This was useful for me, as I was not finding the maximum score (which is easy to do with the max() function) but for the most recent time a score was entered for an accountid.
{ "language": "en", "url": "https://stackoverflow.com/questions/94930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: ColdFusion mail queue stops processing Our CF server occasionally stops processing mail. This is problematic, as many of our clients depend on it. We found suggestions online that mention zero-byte files in the undeliverable folder, so I created a task that removes them every three minutes. However, the stoppage has occurred again. I am looking for suggestions for diagnosing and fixing this issue. * *CF 8 standard *Win2k3 Added: * *There are no errors in the mail log at the time the queue fails *We have not tried to run this without using the queue, due to the large amount of mail we send Added 2: * *It does not seem to be a problem with any of the files in the spool folder. When we restart the mail queue, they all seem to process correctly. Added 3: * *We are not using attachments. A: We have not tried to run this without using the queue, due to the large amount of mail we send Regardless, have you tried turning off spooling? I've seen mail get sent at a rate of 500-600 messages in a half second, and that's on kind of a crappy server. With the standard page timeout at 60 seconds, that would be ~72,000 emails you could send before the page would time out. Are you sending more than 72,000 at a time? An alternative I used before CFMail was this fast was to build a custom spooler. Instead of sending the emails on the fly, save them to a database table. Then setup a scheduled job to send a few hundred of the messages and reschedule itself for a few minutes later, until the table is empty. We scheduled the job to run once a day; and it can re-schedule itself to run again in a couple of minutes if the table isn't empty. Never had a problem with it. A: Have you tried just bypassing the queue altogether? (In CF Admin, under Mail Spool settings, uncheck "Spool mail messages for delivery.") A: I have the same problem sometimes and it isn't due to a zero byte file though that problem did crop up in the past. It seems like one or two files (the oldest ones in the folder) will keep the queue from processing. What I do is move all of the messages to a holding folder, restart the mail queue and copy the messages back in a chunk at a time in reverse chronological order, wait for them to go out and move some more over. The messages which were holding up the queue are put in a separate folder to be examined latter. You can probably programmatically do this by stopping the queue, moving the oldest file to another folder, then start the mail queue and see if sending begins successfully by checking folder file counts and dates. If removing the oldest file doesn't work, repeat the previous process until all of the offending mail files are moved and sending continues successfully. I hope the helps. A: We have actually an identical setup, 32bit CF8 on Win2K3. We employed Ben's solution about a year ago, and that certain has helped auto re-queue emails that get stuck. However recently for no particular reason one of our 7 web servers decided to get into this state with every email attempt. An exception occurred when setting up mail server parameters. This exception was caused by: coldfusion.mail.MailSessionException: An exception occurred when setting up mail server parameters.. Each of our web servers are identical clones of each other, so why it was only happening to that one is bizarre. Another item to note is that we had a script which reboot the machine in the middle of the night due to JRUN's memory management issues. The act of rebooting seemed to initiate the problem. 
A subsequent restarting of the CF service would then clear it, and the machine would be fine until it rebooted again. We found that the problem is related to the McAfee virus scanner, after updating it to exclude the c:\ColdFusion8 directory, the problem went away. Hope that helps. A: What we ended up doing: I wrote two scheduled tasks. The first checked to see if there were any messages in the queue folder older than n minues (currently set to 30). The second reset the queue every night during low usage. Unfortunately, we never really discovered why the queue would come off the rails, but it only seems to happen when we use Exchange -- other mail servers we've tried do not have this issue. Edit: I was asked to post my code, so here's the one to restart when old mail is found: <cfdirectory action="list" directory="c:\coldfusion8\mail\spool\" name="spool" sort="datelastmodified"> <cfset restart = 0> <cfif datediff('n', spool.datelastmodified, now()) gt 30> <cfset restart = 1> </cfif> <cfif restart> <cfset sFactory = CreateObject("java","coldfusion.server.ServiceFactory")> <cfset MailSpoolService = sFactory.mailSpoolService> <cfset MailSpoolService.stop()> <cfset MailSpoolService.start()> </cfif> A: There is a bug in Ben Doom's code. Thank you anyway ben, the code is great, and we use it now on one of our servers with CF8 installed, but: if directory (\spool) is empty, the code fails (error: Date value passed to date function DateDiff is unspecified or invalid.) That's because if the query object spool is empty (spool.recordcount EQ 0), the datediff function produces an error. we used this now: <!--- check if request for this page is local to prevent "webusers" to request this page over and over, only localhost (server) can get it e.g. by cf scheduled tasks---> <cfsetting requesttimeout="30000"> <cfset who = CGI.SERVER_NAME> <cfif find("localhost",who) LT 1> security restriction, access denied. <cfabort> </cfif> <!--- get spool directory info ---> <cfdirectory action="list" directory="C:\JRun4\servers\cfusion\cfusion-ear\cfusion-war\WEB-INF\cfusion\Mail\Spool\" name="spool" sort="datelastmodified"> <cfset restart = 0> <cfif spool.recordcount GT 0><!--- content there? ---> <cfif datediff('n', spool.datelastmodified, now()) gt 120> <cfset restart = 1> </cfif> </cfif> <cfif restart><!--- restart ---> <cfsavecontent variable="liste"> <cfdump var="#list#"> </cfsavecontent> <!--- info ---> <cfmail to="x@y.com" subject="cfmailqueue restarted by daemon" server="xxx" port="25" from="xxxx" username="xxxx" password="xxx" replyto="xxxx"> 1/2 action: ...try to restart. Send another mail if succeeded! #now()# Mails: #liste# </cfmail> <cfset sFactory = CreateObject("java","coldfusion.server.ServiceFactory")> <cfset MailSpoolService = sFactory.mailSpoolService> <cfset MailSpoolService.stop()> <cfset MailSpoolService.start()> <!--- info ---> <cfmail to="x@y.com" subject="cfmailqueue restarted by daemon" server="xxx" port="25" from="xxxx" username="xxxx" password="xxx" replyto="xxxx"> 2/2 action: ...succeeded! #now()# </cfmail> </cfif> A: There is/was an issue with the mail spooler and messages with attachments in CFMX 8 that was fixed with one of the Hotfixes. Version 8.0.1, at least, should have had that fixed.
{ "language": "en", "url": "https://stackoverflow.com/questions/94932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: What debug logging tools are available from Javascript? I'd like to create a "universal" debug logging function that inspects the JS namespace for well-known logging libraries. For example, currently, it supports Firebug's console.log: var console = window['console']; if (console && console.log) { console.log(message); } Obviously, this only works in Firefox if Firebug is installed/enabled (it'll also work on other browsers with Firebug Lite). Basically, I will be providing a JS library which I don't know what environment it will be pulled into, and I'd like to be able to figure out if there is a way to report debug output to the user. So, perhaps jQuery provides something - I'd check that jQuery is present and use it. Or maybe there are well-known IE plugins that work that I can sniff for. But it has to be fairly well-established and used machinery. I can't check for every obscure log function that people create. Please, only one library/technology per answer, so they can get voted ranked. Also, using alert() is a good short-term solution but breaks down if you want robust debug logging or if blocking the execution is a problem. A: You could try log4javascript. Disclosure: I wrote it. A: I personally use Firebug/Firebug Lite and on IE let Visual Studio do the debugging. None of these do any good when a visitor is using some insane browser though. You really need to get your client side javascript to log its errors to your server. Take a look at the power point presentation I've linked to below. It has some pretty neat ideas on how to get your javascript to log stuff on your server. Basically, you hook window.onerror and your try {} catch(){} blocks with a function that makes a request back to your server with useful debug info. I've just implemented such a process on my own web application. I've got every catch(){} block calling a function that sends a JSON encoded message back to the server, which in turn uses my existing logging infrastructure (in my case log4perl). The presentation I'm linking to also suggests loading an image in your javascript in including the errors as part of the GET request. The only problem is if you want to include stack traces (which IE doesn't generate for you at all), the request will be too large. Tracking ClientSide Errors, by Eric Pascarello PS: I wanted to add that I dont think it is a good idea to use any kind of library like jQuery for "hardcore" logging because maybe the reason for the error you are logging is jQuery or Firebug Lite! Maybe the error is that the browser (cough IE6) did some crazy loading order and is throwing some kind of Null Reference error because it was too stupid to load the library correctly. In my instance, I made sure all my javascript log code is in the <head> and not pulled in as a .js file. This way, I can be reasonably sure that no matter what kinds of curve balls the browser throws, odds are good I am able to log it. A: Firebug lite is a cross browser, lite version of Firefbug that'll at least give you console.log capabilities on most browsers. A: MochiKit has the following functions (included here with full namespace resolution): MochiKit.Logging.logDebug() // prefaces value with "DEBUG: " MochiKit.Logging.log() // prefaces value with "INFO: " MochiKit.Logging.logError() // prefaces value with "ERROR: " MochiKit.Logging.logFatal() // prefaces value with "FATAL: " MochiKit.Logging.logWarning() // prefaces value with "WARNING: " There is a lot more to the MochiKit.Logging namespace than this, but these are the basics. 
A: If you are already using jQuery, I can heartily recommend the jQuery Debug plugin (a.k.a., jquery.debug.js). See http://trainofthoughts.org/blog/2007/03/16/jquery-plugin-debug/. This plugin allows you to switch debug logging off or on via a global switch. Logging looks like this: $.log('My value is: ' + val); Output is sent to console.log under Firefox and is written to a div block inserted at the bottom of the page on other browsers. A: What about Firebug Lite (for those non-Firefox browsers)? I haven't used it much except when debugging Dojo code in IE. But it tries as best it can to put a Firebug console in IE, Safari, and Opera. Of course there is always the ever reliable 'alert (err_msg);' :D A: There is JQuery Logging, which looks promising. A: Myself, I am a firm believer in the following: alert('Some message/variables');
{ "language": "en", "url": "https://stackoverflow.com/questions/94934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What is the difference between range and xrange functions in Python 2.X? Apparently xrange is faster but I have no idea why it's faster (and no proof besides the anecdotal so far that it is faster) or what besides that is different about for i in range(0, 20): for i in xrange(0, 20): A: Some of the other answers mention that Python 3 eliminated 2.x's range and renamed 2.x's xrange to range. However, unless you're using 3.0 or 3.1 (which nobody should be), it's actually a somewhat different type. As the 3.1 docs say: Range objects have very little behavior: they only support indexing, iteration, and the len function. However, in 3.2+, range is a full sequence—it supports extended slices, and all of the methods of collections.abc.Sequence with the same semantics as a list.* And, at least in CPython and PyPy (the only two 3.2+ implementations that currently exist), it also has constant-time implementations of the index and count methods and the in operator (as long as you only pass it integers). This means writing 123456 in r is reasonable in 3.2+, while in 2.7 or 3.1 it would be a horrible idea. * The fact that issubclass(xrange, collections.Sequence) returns True in 2.6-2.7 and 3.0-3.1 is a bug that was fixed in 3.2 and not backported. A: In python 2.x range(x) returns a list, that is created in memory with x elements. >>> a = range(5) >>> a [0, 1, 2, 3, 4] xrange(x) returns an xrange object which is a generator obj which generates the numbers on demand. they are computed during for-loop(Lazy Evaluation). For looping, this is slightly faster than range() and more memory efficient. >>> b = xrange(5) >>> b xrange(5) A: xrange only stores the range params and generates the numbers on demand. However the C implementation of Python currently restricts its args to C longs: xrange(2**32-1, 2**32+1) # When long is 32 bits, OverflowError: Python int too large to convert to C long range(2**32-1, 2**32+1) # OK --> [4294967295L, 4294967296L] Note that in Python 3.0 there is only range and it behaves like the 2.x xrange but without the limitations on minimum and maximum end points. A: When testing range against xrange in a loop (I know I should use timeit, but this was swiftly hacked up from memory using a simple list comprehension example) I found the following: import time for x in range(1, 10): t = time.time() [v*10 for v in range(1, 10000)] print "range: %.4f" % ((time.time()-t)*100) t = time.time() [v*10 for v in xrange(1, 10000)] print "xrange: %.4f" % ((time.time()-t)*100) which gives: $python range_tests.py range: 0.4273 xrange: 0.3733 range: 0.3881 xrange: 0.3507 range: 0.3712 xrange: 0.3565 range: 0.4031 xrange: 0.3558 range: 0.3714 xrange: 0.3520 range: 0.3834 xrange: 0.3546 range: 0.3717 xrange: 0.3511 range: 0.3745 xrange: 0.3523 range: 0.3858 xrange: 0.3997 <- garbage collection? Or, using xrange in the for loop: range: 0.4172 xrange: 0.3701 range: 0.3840 xrange: 0.3547 range: 0.3830 xrange: 0.3862 <- garbage collection? range: 0.4019 xrange: 0.3532 range: 0.3738 xrange: 0.3726 range: 0.3762 xrange: 0.3533 range: 0.3710 xrange: 0.3509 range: 0.3738 xrange: 0.3512 range: 0.3703 xrange: 0.3509 Is my snippet testing properly? Any comments on the slower instance of xrange? Or a better example :-) A: xrange returns an iterator and only keeps one number in memory at a time. range keeps the entire list of numbers in memory. 
A: xrange() and range() in python works similarly as for the user , but the difference comes when we are talking about how the memory is allocated in using both the function. When we are using range() we allocate memory for all the variables it is generating, so it is not recommended to use with larger no. of variables to be generated. xrange() on the other hand generate only a particular value at a time and can only be used with the for loop to print all the values required. A: Do spend some time with the Library Reference. The more familiar you are with it, the faster you can find answers to questions like this. Especially important are the first few chapters about builtin objects and types. The advantage of the xrange type is that an xrange object will always take the same amount of memory, no matter the size of the range it represents. There are no consistent performance advantages. Another way to find quick information about a Python construct is the docstring and the help-function: print xrange.__doc__ # def doc(x): print x.__doc__ is super useful help(xrange) A: range generates the entire list and returns it. xrange does not -- it generates the numbers in the list on demand. A: range creates a list, so if you do range(1, 10000000) it creates a list in memory with 9999999 elements. xrange is a generator, so it is a sequence object is a that evaluates lazily. This is true, but in Python 3, range() will be implemented by the Python 2 xrange(). If you need to actually generate the list, you will need to do: list(range(1,100)) A: xrange uses an iterator (generates values on the fly), range returns a list. A: What? range returns a static list at runtime. xrange returns an object (which acts like a generator, although it's certainly not one) from which values are generated as and when required. When to use which? * *Use xrange if you want to generate a list for a gigantic range, say 1 billion, especially when you have a "memory sensitive system" like a cell phone. *Use range if you want to iterate over the list several times. PS: Python 3.x's range function == Python 2.x's xrange function. A: Everyone has explained it greatly. But I wanted it to see it for myself. I use python3. So, I opened the resource monitor (in Windows!), and first, executed the following command first: a=0 for i in range(1,100000): a=a+i and then checked the change in 'In Use' memory. It was insignificant. Then, I ran the following code: for i in list(range(1,100000)): a=a+i And it took a big chunk of the memory for use, instantly. And, I was convinced. You can try it for yourself. If you are using Python 2X, then replace 'range()' with 'xrange()' in the first code and 'list(range())' with 'range()'. A: From the help docs. Python 2.7.12 >>> print range.__doc__ range(stop) -> list of integers range(start, stop[, step]) -> list of integers Return a list containing an arithmetic progression of integers. range(i, j) returns [i, i+1, i+2, ..., j-1]; start (!) defaults to 0. When step is given, it specifies the increment (or decrement). For example, range(4) returns [0, 1, 2, 3]. The end point is omitted! These are exactly the valid indices for a list of 4 elements. >>> print xrange.__doc__ xrange(stop) -> xrange object xrange(start, stop[, step]) -> xrange object Like range(), but instead of returning a list, returns an object that generates the numbers in the range on demand. For looping, this is slightly faster than range() and more memory efficient. 
Python 3.5.2 >>> print(range.__doc__) range(stop) -> range object range(start, stop[, step]) -> range object Return an object that produces a sequence of integers from start (inclusive) to stop (exclusive) by step. range(i, j) produces i, i+1, i+2, ..., j-1. start defaults to 0, and stop is omitted! range(4) produces 0, 1, 2, 3. These are exactly the valid indices for a list of 4 elements. When step is given, it specifies the increment (or decrement). >>> print(xrange.__doc__) Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'xrange' is not defined Difference is apparent. In Python 2.x, range returns a list, xrange returns an xrange object which is iterable. In Python 3.x, range becomes xrange of Python 2.x, and xrange is removed. A: range() in Python 2.x This function is essentially the old range() function that was available in Python 2.x and returns an instance of a list object that contains the elements in the specified range. However, this implementation is too inefficient when it comes to initialise a list with a range of numbers. For example, for i in range(1000000) would be a very expensive command to execute, both in terms of memory and time usage as it requires the storage of this list into the memory. range() in Python 3.x and xrange() in Python 2.x Python 3.x introduced a newer implementation of range() (while the newer implementation was already available in Python 2.x through the xrange() function). The range() exploits a strategy known as lazy evaluation. Instead of creating a huge list of elements in range, the newer implementation introduces the class range, a lightweight object that represents the required elements in the given range, without storing them explicitly in memory (this might sound like generators but the concept of lazy evaluation is different). As an example, consider the following: # Python 2.x >>> a = range(10) >>> type(a) <type 'list'> >>> b = xrange(10) >>> type(b) <type 'xrange'> and # Python 3.x >>> a = range(10) >>> type(a) <class 'range'> A: The doc clearly reads : This function is very similar to range(), but returns an xrange object instead of a list. This is an opaque sequence type which yields the same values as the corresponding list, without actually storing them all simultaneously. The advantage of xrange() over range() is minimal (since xrange() still has to create the values when asked for them) except when a very large range is used on a memory-starved machine or when all of the range’s elements are never used (such as when the loop is usually terminated with break). A: You will find the advantage of xrange over range in this simple example: import timeit t1 = timeit.default_timer() a = 0 for i in xrange(1, 100000000): pass t2 = timeit.default_timer() print "time taken: ", (t2-t1) # 4.49153590202 seconds t1 = timeit.default_timer() a = 0 for i in range(1, 100000000): pass t2 = timeit.default_timer() print "time taken: ", (t2-t1) # 7.04547905922 seconds The above example doesn't reflect anything substantially better in case of xrange. Now look at the following case where range is really really slow, compared to xrange. 
import timeit t1 = timeit.default_timer() a = 0 for i in xrange(1, 100000000): if i == 10000: break t2 = timeit.default_timer() print "time taken: ", (t2-t1) # 0.000764846801758 seconds t1 = timeit.default_timer() a = 0 for i in range(1, 100000000): if i == 10000: break t2 = timeit.default_timer() print "time taken: ", (t2-t1) # 2.78506207466 seconds With range, it already creates a list from 0 to 100000000(time consuming), but xrange is a generator and it only generates numbers based on the need, that is, if the iteration continues. In Python-3, the implementation of the range functionality is same as that of xrange in Python-2, while they have done away with xrange in Python-3 Happy Coding!! A: range creates a list, so if you do range(1, 10000000) it creates a list in memory with 10000000 elements. xrange is a generator, so it evaluates lazily. This brings you two advantages: * *You can iterate longer lists without getting a MemoryError. *As it resolves each number lazily, if you stop iteration early, you won't waste time creating the whole list. A: Remember, use the timeit module to test which of small snippets of code is faster! $ python -m timeit 'for i in range(1000000):' ' pass' 10 loops, best of 3: 90.5 msec per loop $ python -m timeit 'for i in xrange(1000000):' ' pass' 10 loops, best of 3: 51.1 msec per loop Personally, I always use range(), unless I were dealing with really huge lists -- as you can see, time-wise, for a list of a million entries, the extra overhead is only 0.04 seconds. And as Corey points out, in Python 3.0 xrange() will go away and range() will give you nice iterator behavior anyway. A: It is for optimization reasons. range() will create a list of values from start to end (0 .. 20 in your example). This will become an expensive operation on very large ranges. xrange() on the other hand is much more optimised. it will only compute the next value when needed (via an xrange sequence object) and does not create a list of all values like range() does. A: In Python 2.x: * *range creates a list, so if you do range(1, 10000000) it creates a list in memory with 9999999 elements. *xrange is a sequence object that evaluates lazily. In Python 3: * *range does the equivalent of Python 2's xrange. To get the list, you have to explicitly use list(range(...)). *xrange no longer exists. A: range(): range(1, 10) returns a list from 1 to 10 numbers & hold whole list in memory. xrange(): Like range(), but instead of returning a list, returns an object that generates the numbers in the range on demand. For looping, this is lightly faster than range() and more memory efficient. xrange() object like an iterator and generates the numbers on demand.(Lazy Evaluation) In [1]: range(1,10) Out[1]: [1, 2, 3, 4, 5, 6, 7, 8, 9] In [2]: xrange(10) Out[2]: xrange(10) In [3]: print xrange.__doc__ xrange([start,] stop[, step]) -> xrange object A: range(x,y) returns a list of each number in between x and y if you use a for loop, then range is slower. In fact, range has a bigger Index range. range(x.y) will print out a list of all the numbers in between x and y xrange(x,y) returns xrange(x,y) but if you used a for loop, then xrange is faster. xrange has a smaller Index range. xrange will not only print out xrange(x,y) but it will still keep all the numbers that are in it. 
[In] range(1,10) [Out] [1, 2, 3, 4, 5, 6, 7, 8, 9] [In] xrange(1,10) [Out] xrange(1,10) If you use a for loop, then it would work [In] for i in range(1,10): print i [Out] 1 2 3 4 5 6 7 8 9 [In] for i in xrange(1,10): print i [Out] 1 2 3 4 5 6 7 8 9 There isn't much difference when using loops, though there is a difference when just printing it! A: On a requirement for scanning/printing of 0-N items , range and xrange works as follows. range() - creates a new list in the memory and takes the whole 0 to N items(totally N+1) and prints them. xrange() - creates a iterator instance that scans through the items and keeps only the current encountered item into the memory , hence utilising same amount of memory all the time. In case the required element is somewhat at the beginning of the list only then it saves a good amount of time and memory. A: Range returns a list while xrange returns an xrange object which takes the same memory irrespective of the range size,as in this case,only one element is generated and available per iteration whereas in case of using range, all the elements are generated at once and are available in the memory. A: The difference decreases for smaller arguments to range(..) / xrange(..): $ python -m timeit "for i in xrange(10111):" " for k in range(100):" " pass" 10 loops, best of 3: 59.4 msec per loop $ python -m timeit "for i in xrange(10111):" " for k in xrange(100):" " pass" 10 loops, best of 3: 46.9 msec per loop In this case xrange(100) is only about 20% more efficient. A: range :-range will populate everything at once.which means every number of the range will occupy the memory. xrange :-xrange is something like generator ,it will comes into picture when you want the range of numbers but you dont want them to be stored,like when you want to use in for loop.so memory efficient. A: Additionally, if do list(xrange(...)) will be equivalent to range(...). So list is slow. Also xrange really doesn't fully finish the sequence So that's why its not a list, it's a xrange object A: See this post to find difference between range and xrange: To quote: range returns exactly what you think: a list of consecutive integers, of a defined length beginning with 0. xrange, however, returns an "xrange object", which acts a great deal like an iterator
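To make the memory point above concrete, here is a small Python 2 sketch; the exact numbers are platform-dependent, and sys.getsizeof only counts the container object itself, not the int objects a list refers to:

    import sys

    r = range(1000000)     # builds the whole list up front
    x = xrange(1000000)    # small object that produces values on demand

    print sys.getsizeof(r)   # on the order of megabytes: one pointer slot per element
    print sys.getsizeof(x)   # a few dozen bytes, no matter how long the range is

    print r[123456], x[123456]   # both support indexing; only the list stores every element

In Python 3 the same experiment with range(1000000) behaves like the xrange case, since range is now a lazy sequence object.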
{ "language": "en", "url": "https://stackoverflow.com/questions/94935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "832" }
Q: Restarting ColdFusion mail queue We are currently experiencing intermittent mail queue stoppages. I'm seeking diagnostic help in another area. In the meantime, is there a way to restart the CF mail queue without restarting the service as a whole? CF8 standard Win2k3 Solution: We are now checking the age of the oldest file in the mail queue. When it exceeds a set age (currently 30 min) the mail queue is restarted. A: Yes there is.
<cfset sFactory = CreateObject("java","coldfusion.server.ServiceFactory")>
<cfset MailSpoolService = sFactory.mailSpoolService>
<cfset MailSpoolService.stop()>
<cfset MailSpoolService.start()>
{ "language": "en", "url": "https://stackoverflow.com/questions/94948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Python implementation of Parsec? I recently wrote a parser in Python using PLY (a Python reimplementation of yacc). When I was almost done with the parser, I discovered that the grammar I need to parse requires me to do some lookup during parsing to inform the lexer. Without a lookup to inform the lexer I cannot correctly parse the strings in the language. Given that I can control the state of the lexer from the grammar rules, I think I'll be solving my use case with a lookup table in the parser module, but it may become too difficult to maintain/test. So I want to know about some of the other options. In Haskell I would use Parsec, a library of parsing functions (known as combinators). Is there a Python implementation of Parsec? Or perhaps some other production-quality library full of parsing functionality so I can build a context-sensitive parser in Python? EDIT: All my attempts at context-free parsing have failed. For this reason, I don't expect ANTLR to be useful here. A: PySec is another monadic parser; I don't know much about it, but it's worth looking at here A: An option you may consider, if an LL parser is OK for you, is to give ANTLR a try; it can generate Python too (actually it is LL(*), as they call it, where * stands for the amount of lookahead it can cope with). A: Nothing prevents you from diverting your parser from the "context-free" path using PLY. You can pass information to the lexer during parsing, and in this way achieve full flexibility. I'm pretty sure you can parse anything you want with PLY this way. For a hands-on example, consider this parser for ANSI C written in Python with PLY. It solves the classic C typedef-identifier problem (which makes C's grammar not context-free) by populating a symbol table in the parser that is used in the lexer to resolve symbol names as either types or not. A: I believe that pyparsing is based on the same principles as Parsec. A: There's ANTLR, which is LL(*); there's PyParsing, which is more object-friendly and is sort of like a DSL; and then there's Parsing, which is like OCaml's Menhir. A: ANTLR is great and has the added benefit of working across multiple languages.
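To give a feel for the combinator style that pyparsing (mentioned above) offers in Python, here is a minimal sketch; the grammar itself (simple key=value pairs) is an invented example, not taken from the question:

    from pyparsing import Word, alphas, alphanums, nums, Suppress, Group, OneOrMore

    key = Word(alphas, alphanums + "_")          # an identifier-like token
    value = Word(nums) | Word(alphas)            # a number or a bare word
    pair = Group(key + Suppress("=") + value)    # drop the '=' from the parse results
    config = OneOrMore(pair)

    print(config.parseString("host = localhost port = 8080 debug = true"))
    # -> [['host', 'localhost'], ['port', '8080'], ['debug', 'true']]

As with Parsec, the pieces compose: key, value and pair are each parser objects you can reuse or combine, which is what makes the combinator approach convenient when you need context-sensitive tweaks.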
{ "language": "en", "url": "https://stackoverflow.com/questions/94952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: How do you prioritize multiple triggers on a table? I have a couple of triggers on a table that I want to keep separate, and I would like to prioritize them. I could have just one trigger and do the logic there, but I was wondering if there was an easier/more logical way of accomplishing this by having them fire in a pre-defined order? A: Use sp_settriggerorder. You can specify the first and last trigger to fire depending on the operation. sp_settriggerorder on MSDN From the above link: A. Setting the firing order for a DML trigger The following example specifies that trigger uSalesOrderHeader be the first trigger to fire after an UPDATE operation occurs on the Sales.SalesOrderHeader table.
USE AdventureWorks;
GO
sp_settriggerorder @triggername= 'Sales.uSalesOrderHeader', @order='First', @stmttype = 'UPDATE';
B. Setting the firing order for a DDL trigger The following example specifies that trigger ddlDatabaseTriggerLog be the first trigger to fire after an ALTER_TABLE event occurs in the AdventureWorks database.
USE AdventureWorks;
GO
sp_settriggerorder @triggername= 'ddlDatabaseTriggerLog', @order='First', @stmttype = 'ALTER_TABLE', @namespace = 'DATABASE';
A: See here. A: You can use sp_settriggerorder to define the order of each trigger on a table. However, I would argue that you'd be much better off having a single trigger that does multiple things. This is particularly so if the order is important, since that importance will not be very obvious if you have multiple triggers. Imagine someone trying to support the database months or years down the track. Of course there are likely to be cases where you need to have multiple triggers or it really is a better design, but I'd start by assuming you should have one and work from there. A: Remember, if you change the trigger order, someone else could come by later and rearrange it again. And where would you document what the trigger order should be so a maintenance developer knows not to mess with the order or things will break? If two trigger tasks definitely must be performed in a specific order, the only safe route is to put them in the same trigger.
{ "language": "en", "url": "https://stackoverflow.com/questions/94959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do you solve the 15-puzzle with A-Star or Dijkstra's Algorithm? I've read in one of my AI books that popular algorithms (A-Star, Dijkstra) for path-finding in simulation or games are also used to solve the well-known "15-puzzle". Can anyone give me some pointers on how I would reduce the 15-puzzle to a graph of nodes and edges so that I could apply one of these algorithms? If I were to treat each node in the graph as a game state then wouldn't that tree become quite large? Or is that just the way to do it? A: A good heuristic for A-Star with the 15 puzzle is the number of squares that are in the wrong location. Because you need at least 1 move per square that is out of place, the number of squares out of place is guaranteed to be less than or equal to the number of moves required to solve the puzzle, making it an appropriate heuristic for A-Star. A: A quick Google search turns up a couple papers that cover this in some detail: one on Parallel Combinatorial Search, and one on External-Memory Graph Search. General rule of thumb when it comes to algorithmic problems: someone has likely done it before you, and published their findings. A: This is an assignment for the 8-puzzle problem that covers using the A* algorithm in some detail, but is also fairly straightforward: http://www.cs.princeton.edu/courses/archive/spring09/cos226/assignments/8puzzle.html A: The graph-theoretic way to solve the problem is to imagine every configuration of the board as a vertex of the graph and then use a breadth-first search with pruning based on something like the Manhattan distance of the board to derive a shortest path from the starting configuration to the solution. One problem with this approach is that for any n x n board where n > 3 the game space becomes so large that it is not clear how you can efficiently mark the visited vertices. In other words there is no obvious way to assess if the current configuration of the board is identical to one that has previously been discovered through traversing some other path. Another problem is that the graph size grows so quickly with n (it's approximately (n^2)!) that it is just not suitable for a brute-force attack as the number of paths becomes computationally infeasible to traverse. This paper by Ian Parberry, A Real-Time Algorithm for the (n^2 − 1)-Puzzle, describes a simple greedy algorithm that iteratively arrives at a solution by completing the first row, then the first column, then the second row... It arrives at a solution almost immediately, however the solution is far from optimal; essentially it solves the problem the way a human would without leveraging any computational muscle. This problem is closely related to that of solving the Rubik's cube. The graph of all game states is too large to solve by brute force, but there is a fairly simple 7-step method that can be used to solve any cube in about 1 ~ 2 minutes by a dextrous human. This path is of course non-optimal. By learning to recognise patterns that define sequences of moves the speed can be brought down to 17 seconds. However, this feat by Jiri is somewhat superhuman! The method Parberry describes moves only one tile at a time; one imagines that the algorithm could be made better by employing Jiri's dexterity and moving multiple tiles at one time. This would not, as Parberry proves, reduce the path length from n^3, but it would reduce the coefficient of the leading term.
A: Remember that A* will search through the problem space proceeding down the most likely path to the goal as defined by your heuristic. Only in the worst case will it end up having to flood fill the entire problem space; this tends to happen when there is no actual solution to your problem. A: Just use the game tree. Remember that a tree is a special form of graph. In your case the children of each node will be the game positions reachable after you make one of the moves that is available at the current node. A: Here you go http://www.heyes-jones.com/astar.html A: Also, be mindful that with the A-Star algorithm, at least, you will need to figure out an admissible heuristic to determine whether a possible next step is closer to the finished route than another step. A: From my own experience solving the 8-puzzle: create nodes, keep track of each step taken, and get the Manhattan distance for each possible next step, taking/going to the one with the shortest distance; then update the nodes and continue until you reach the goal.
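To make the graph-plus-heuristic discussion above concrete, here is a rough A* sketch in Python that treats each board layout as a node and uses the Manhattan distance as the admissible heuristic (an illustrative sketch, not a tuned solver - hard 15-puzzle instances typically need IDA* and stronger heuristics such as pattern databases to finish in reasonable time):

import heapq

def manhattan(state, n):
    # Sum over tiles of the distance from each tile to its goal cell; 0 is the blank.
    dist = 0
    for idx, tile in enumerate(state):
        if tile != 0:
            goal = tile - 1
            dist += abs(idx // n - goal // n) + abs(idx % n - goal % n)
    return dist

def neighbors(state, n):
    # Yield every state reachable by sliding one adjacent tile into the blank.
    blank = state.index(0)
    r, c = divmod(blank, n)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < n and 0 <= nc < n:
            swap = nr * n + nc
            nxt = list(state)
            nxt[blank], nxt[swap] = nxt[swap], nxt[blank]
            yield tuple(nxt)

def astar(start, n):
    goal = tuple(range(1, n * n)) + (0,)
    open_heap = [(manhattan(start, n), 0, start, [start])]
    best_g = {start: 0}                          # best known cost per visited state
    while open_heap:
        f, g, state, path = heapq.heappop(open_heap)
        if state == goal:
            return path                          # list of states from start to goal
        for nxt in neighbors(state, n):
            ng = g + 1
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + manhattan(nxt, n), ng, nxt, path + [nxt]))
    return None                                  # unsolvable (half of all permutations are)

# 8-puzzle example (n=3); the 15-puzzle is the same code with n=4.
print(len(astar((1, 2, 3, 4, 5, 6, 0, 7, 8), 3)) - 1, "moves")

The best_g map is the "mark the visited vertices" part the earlier answer worries about; for n = 4 it can grow very large, which is exactly why the memory-efficient variants in the linked papers exist.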
{ "language": "en", "url": "https://stackoverflow.com/questions/94975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Why aren't variables declared in "try" in scope in "catch" or "finally"? In C# and in Java (and possibly other languages as well), variables declared in a "try" block are not in scope in the corresponding "catch" or "finally" blocks. For example, the following code does not compile: try { String s = "test"; // (more code...) } catch { Console.Out.WriteLine(s); //Java fans: think "System.out.println" here instead } In this code, a compile-time error occurs on the reference to s in the catch block, because s is only in scope in the try block. (In Java, the compile error is "s cannot be resolved"; in C#, it's "The name 's' does not exist in the current context".) The general solution to this issue seems to be to instead declare variables just before the try block, instead of within the try block: String s; try { s = "test"; // (more code...) } catch { Console.Out.WriteLine(s); //Java fans: think "System.out.println" here instead } However, at least to me, (1) this feels like a clunky solution, and (2) it results in the variables having a larger scope than the programmer intended (the entire remainder of the method, instead of only in the context of the try-catch-finally). My question is, what were/are the rationale(s) behind this language design decision (in Java, in C#, and/or in any other applicable languages)? A: In C++ at any rate, the scope of an automatic variable is limited by the curly braces that surround it. Why would anyone expect this to be different by plunking down a try keyword outside the curly braces? A: Like ravenspoint pointed out, everyone expects variables to be local to the block they are defined in. try introduces a block and so does catch. If you want variables local to both try and catch, try enclosing both in a block: // here is some code { string s = null; try { throw new Exception(":("); } catch (Exception e) { Debug.WriteLine(s); } } A: How could you be sure that you reached the declaration part in your catch block? What if the instantiation throws the exception? A: The simple answer is that C and most of the languages that have inherited its syntax are block scoped. That means that if a variable is defined in one block, i.e., inside { }, that is its scope. The exception, by the way, is JavaScript, which has a similar syntax, but is function scoped. In JavaScript, a variable declared in a try block is in scope in the catch block, and everywhere else in its containing function. A: According to the section titled "How to Throw and Catch Exceptions" in Lesson 2 of MCTS Self-Paced Training Kit (Exam 70-536): Microsoft® .NET Framework 2.0—Application Development Foundation, the reason is that the exception may have occurred before variable declarations in the try block (as others have noted already). Quote from page 25: "Notice that the StreamReader declaration was moved outside the Try block in the preceding example. This is necessary because the Finally block cannot access variables that are declared within the Try block. This makes sense because depending on where an exception occurred, variable declarations within the Try block might not yet have been executed." A: @burkhard has answered the question of why properly, but as a note I wanted to add that, while your recommended solution example is good 99.9999+% of the time, it is not good practice; it is far safer to either check for null before using something instantiated within the try block, or initialize the variable to something instead of just declaring it before the try block.
For example: string s = String.Empty; try { //do work } catch { //safely access s Console.WriteLine(s); } Or: string s = null; try { //do work } catch { if (!String.IsNullOrEmpty(s)) { //safely access s Console.WriteLine(s); } } This should provide scalability in the workaround, so that even when what you're doing in the try block is more complex than assigning a string, you should be able to safely access the data from your catch block. A: The answer, as everyone has pointed out, is pretty much "that's how blocks are defined". There are some proposals to make the code prettier. See ARM: try (FileReader in = makeReader(); FileWriter out = makeWriter()) { // code using in and out } catch(IOException e) { // ... } Closures are supposed to address this as well. with(FileReader in : makeReader()) with(FileWriter out : makeWriter()) { // code using in and out } UPDATE: ARM is implemented in Java 7. http://download.java.net/jdk7/docs/technotes/guides/language/try-with-resources.html A: Traditionally, in C-style languages, what happens inside the curly braces stays inside the curly braces. I think that having the lifetime of a variable stretch across scopes like that would be unintuitive to most programmers. You can achieve what you want by enclosing the try/catch/finally blocks inside another level of braces. e.g. ... code ... { string s = "test"; try { // more code } catch(...) { Console.Out.WriteLine(s); } } EDIT: I guess every rule does have an exception. The following is valid C++: int f() { return 0; } int main() { int y = 0; if (int x = f()) { cout << x; } else { cout << x; } } The scope of x is the conditional, the then clause and the else clause. A: Your solution is exactly what you should do. You can't be sure that your declaration was even reached in the try block, which would result in another exception in the catch block. It simply must work as separate scopes. try dim i as integer = 10 / 0 ''// Throw an exception dim s as string = "hi" catch ex as exception console.writeln(s) ''// Would throw another exception, if this was allowed to compile end try A: The variables are block level and restricted to that Try or Catch block. Similar to defining a variable in an if statement. Think of this situation. try { fileOpen("no real file Name"); String s = "GO TROJANS"; } catch (Exception) { print(s); } The String would never be declared, so it can't be depended upon. A: Because the try block and the catch block are 2 different blocks. In the following code, would you expect s defined in block A to be visible in block B? { // block A string s = "dude"; } { // block B Console.Out.WriteLine(s); // or printf or whatever } A: While in your example it is weird that it does not work, take this similar one: try { //Code 1 String s = "1|2"; //Code 2 } catch { Console.WriteLine(s.Split('|')[1]); } This would cause the catch to throw a null reference exception if Code 1 broke. Now while the semantics of try/catch are pretty well understood, this would be an annoying corner case, since s is defined with an initial value, so it should in theory never be null, but under shared semantics, it would be. Again this could in theory be fixed by only allowing separated definitions (String s; s = "1|2";), or some other set of conditions, but it is generally easier to just say no. Additionally, it allows the semantics of scope to be defined globally without exception, specifically, locals last as long as the {} they are defined in, in all cases. Minor point, but a point. 
Finally, in order to do what you want, you can add a set of brackets around the try catch. Gives you the scope you want, although it does come at the cost of a little readability, but not too much. { String s; try { s = "test"; //More code } catch { Console.WriteLine(s); } } A: The C# Spec (15.2) states "The scope of a local variable or constant declared in a block is the block." (in your first example the try block is the block where "s" is declared) A: In Python they are visible in the catch/finally blocks if the line declaring them didn't throw. A: Two things: * *Generally, Java has just 2 levels of scope: global and function. But, try/catch is an exception (no pun intended). When an exception is thrown and the exception object gets a variable assigned to it, that object variable is only available within the "catch" section and is destroyed as soon as the catch completes. *(And more importantly) You can't know where in the try block the exception was thrown. It may have been before your variable was declared. Therefore it is impossible to say what variables will be available for the catch/finally clause. Consider the following case, where scoping is as you suggested: try { throw new ArgumentException("some operation that throws an exception"); string s = "blah"; } catch (ArgumentException e) { Console.Out.WriteLine(s); } This clearly is a problem - when you reach the exception handler, s will not have been declared. Given that catches are meant to handle exceptional circumstances and finallys must execute, being safe and declaring this a problem at compile time is far better than at runtime. A: Everyone else has brought up the basics -- what happens in a block stays in a block. But in the case of .NET, it may be helpful to examine what the compiler thinks is happening. Take, for example, the following try/catch code (note that the StreamReader is declared, correctly, outside the blocks): static void TryCatchFinally() { StreamReader sr = null; try { sr = new StreamReader(path); Console.WriteLine(sr.ReadToEnd()); } catch (Exception ex) { Console.WriteLine(ex.ToString()); } finally { if (sr != null) { sr.Close(); } } } This will compile out to something similar to the following in MSIL: .method private hidebysig static void TryCatchFinallyDispose() cil managed { // Code size 53 (0x35) .maxstack 2 .locals init ([0] class [mscorlib]System.IO.StreamReader sr, [1] class [mscorlib]System.Exception ex) IL_0000: ldnull IL_0001: stloc.0 .try { .try { IL_0002: ldsfld string UsingTest.Class1::path IL_0007: newobj instance void [mscorlib]System.IO.StreamReader::.ctor(string) IL_000c: stloc.0 IL_000d: ldloc.0 IL_000e: callvirt instance string [mscorlib]System.IO.TextReader::ReadToEnd() IL_0013: call void [mscorlib]System.Console::WriteLine(string) IL_0018: leave.s IL_0028 } // end .try catch [mscorlib]System.Exception { IL_001a: stloc.1 IL_001b: ldloc.1 IL_001c: callvirt instance string [mscorlib]System.Exception::ToString() IL_0021: call void [mscorlib]System.Console::WriteLine(string) IL_0026: leave.s IL_0028 } // end handler IL_0028: leave.s IL_0034 } // end .try finally { IL_002a: ldloc.0 IL_002b: brfalse.s IL_0033 IL_002d: ldloc.0 IL_002e: callvirt instance void [mscorlib]System.IDisposable::Dispose() IL_0033: endfinally } // end handler IL_0034: ret } // end of method Class1::TryCatchFinallyDispose What do we see? MSIL respects the blocks -- they're intrinsically part of the underlying code generated when you compile your C#. 
The scope isn't just hard-set in the C# spec, it's in the CLR and CLS spec as well. The scope protects you, but you do occasionally have to work around it. Over time, you get used to it, and it begins to feel natural. Like everyone else said, what happens in a block stays in that block. You want to share something? You have to go outside the blocks ... A: In the specific example you've given, initialising s can't throw an exception. So you'd think that maybe its scope could be extended. But in general, initialiser expressions can throw exceptions. It wouldn't make sense for a variable whose initialiser threw an exception (or which was declared after another variable where that happened) to be in scope for catch/finally. Also, code readability would suffer. The rule in C (and languages which follow it, including C++, Java and C#) is simple: variable scopes follow blocks. If you want a variable to be in scope for try/catch/finally but nowhere else, then wrap the whole thing in another set of braces (a bare block) and declare the variable before the try. A: Part of the reason they are not in the same scope is because at any point of the try block, you can have thrown the exception. If they were in the same scope, it's a disaster in waiting, because depending on where the exception was thrown, it could be even more ambiguous. At least when it's declared outside of the try block, you know for sure what the variable at minimum could be when an exception is thrown: the value of the variable before the try block. A: When you declare a local variable it is placed on the stack (for some types the entire value of the object will be on the stack, for other types only a reference will be on the stack). When there is an exception inside a try block, the local variables within the block are freed, which means the stack is "unwound" back to the state it was in at the beginning of the try block. This is by design. It's how the try / catch is able to back out of all of the function calls within the block and puts your system back into a functional state. Without this mechanism you could never be sure of the state of anything when an exception occurs. Having your error handling code rely on externally declared variables which have their values changed inside the try block seems like bad design to me. What you are doing is essentially leaking resources intentionally in order to gain information (in this particular case it's not so bad because you are only leaking information, but imagine if it were some other resource? you're just making life harder on yourself in the future). I would suggest breaking up your try blocks into smaller chunks if you require more granularity in error handling. A: When you have a try catch, you should for the most part know what errors it might throw. These Exception classes normally tell everything you need about the exception. If not, you should make your own exception classes and pass that information along. That way, you will never need to get the variables from inside the try block, because the Exception is self-explanatory. So if you need to do this a lot, think about your design, and try to think whether there is some other way that you can either predict exceptions coming, or use the information coming from the exceptions, and then maybe rethrow your own exception with more information. A: As has been pointed out by other users, the curly braces define scope in pretty much every C style language that I know of. 
If it's a simple variable, then why do you care how long it will be in scope? It's not that big a deal. In C#, if it is a complex variable, you will want to implement IDisposable. You can then either use try/catch/finally and call obj.Dispose() in the finally block, or you can use the using keyword, which will automatically call Dispose at the end of the code section. A: What if the exception is thrown in some code which is above the declaration of the variable? That would mean the declaration itself never happened in this case. try { //doSomeWork // Exception is thrown in this line. String s; //doRestOfTheWork } catch (Exception) { //Use s;//Problem here } finally { //Use s;//Problem here } A: My thought would be that because something in the try block triggered the exception, its namespace contents cannot be trusted - i.e. referencing the String 's' in the catch block could cause the throw of yet another exception. A: Well if it doesn't throw a compile error, and you could declare it for the rest of the method, then there would be no way to declare it only within try scope. It's forcing you to be explicit as to where the variable is supposed to exist, and doesn't make assumptions. A: If we ignore the scoping-block issue for a moment, the compiler would have to work a lot harder in a situation that's not well defined. While this is not impossible, the scoping error also forces you, the author of the code, to realise the implication of the code you write (that the string s may be null in the catch block). If your code was legal, in the case of an OutOfMemory exception, s isn't even guaranteed to be allocated a memory slot: // won't compile! try { VeryLargeArray v = new VeryLargeArray(TOO_BIG_CONSTANT); // throws OutOfMemoryException string s = "Help"; } catch { Console.WriteLine(s); // whoops! } The CLR (and therefore compiler) also forces you to initialize variables before they are used. In the catch block presented it can't guarantee this. So we end up with the compiler having to do a lot of work, which in practice doesn't provide much benefit and would probably confuse people and lead them to ask why try/catch works differently. In addition to consistency, by not allowing anything fancy and adhering to the already established scoping semantics used throughout the language, the compiler and CLR are able to provide a greater guarantee of the state of a variable inside a catch block: that it exists and has been initialized. Note that the language designers have done a good job with other constructs like using and lock where the problem and scope are well defined, which allows you to write clearer code. e.g. the using keyword with IDisposable objects in: using(Writer writer = new Writer()) { writer.Write("Hello"); } is equivalent to: Writer writer = new Writer(); try { writer.Write("Hello"); } finally { if( writer != null) { ((IDisposable)writer).Dispose(); } } If your try/catch/finally is hard to understand, try refactoring or introducing another layer of indirection with an intermediate class that encapsulates the semantics of what you are trying to accomplish. Without seeing real code, it's hard to be more specific. A: Instead of a local variable, a public property could be declared; this also should avoid another potential error of an unassigned variable. public string S { get; set; } A: If the assignment operation fails, your catch statement will have a null reference back to the unassigned variable. 
A: C# 3.0: string html = new Func<string>(() => { string webpage = null; try { using(WebClient downloader = new WebClient()) { webpage = downloader.DownloadString(url); } } catch(WebException) { Console.WriteLine("Download failed."); } return webpage; })();
{ "language": "en", "url": "https://stackoverflow.com/questions/94977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "165" }
Q: How do you rate-limit an IO operation? Suppose you have a program which reads from a socket. How do you keep the download rate below a certain given threshold? A: Assuming a network transport, a TCP/IP-based one, packets are sent in response to ACK/NACK packets going the other way. By limiting the rate of packets acknowledging receipt of the incoming packets, you will in turn reduce the rate at which new packets are sent. It can be a bit imprecise, so it's possibly optimal to monitor the downstream rate and adjust the response rate adaptively until it falls inside a comfortable threshold. (This will happen really quickly, however; you send dozens of ACKs a second.) A: At the application layer (using a Berkeley socket style API) you just watch the clock, and read or write data at the rate you want to limit at. If you only read 10kbps on average, but the source is sending more than that, then eventually all the buffers between it and you will fill up. TCP/IP allows for this, and the protocol will arrange for the sender to slow down (at the application layer, probably all you need to know is that at the other end, blocking write calls will block, nonblocking writes will fail, and asynchronous writes won't complete, until you've read enough data to allow it). At the application layer you can only be approximate - you can't guarantee hard limits such as "no more than 10 kb will pass a given point in the network in any one second". But if you keep track of what you've received, you can get the average right in the long run. A: If you're reading from a socket, you have no control over the bandwidth used - you're reading the operating system's buffer of that socket, and nothing you say will make the person writing to the socket write less data (unless, of course, you've worked out a protocol for that). All that reading slowly would do is fill up the buffer, and cause an eventual stall on the network end - but you have no control of how or when this happens. If you really want to read only so much data at a time, you can do something like this: ReadFixedRate() { while(Data_Exists()) { t = GetTime(); ReadBlock(); while(t + delay > GetTime()) { Delay(); } } } A: It is like when limiting a game to a certain number of FPS. extern int FPS; .... timePerFrameinMS = 1000/FPS; while(1) { time = getMilliseconds(); DrawScene(); time = getMilliseconds()-time; if (time < timePerFrameinMS) { sleep(timePerFrameinMS - time); } } This way you make sure that the game refresh rate will be at most FPS. In the same manner DrawScene can be the function used to pump bytes into the socket stream. A: wget seems to manage it with the --limit-rate option. Here it is from the man page: Note that Wget implements the limiting by sleeping the appropriate amount of time after a network read that took less time than specified by the rate. Eventually this strategy causes the TCP transfer to slow down to approximately the specified rate. However, it may take some time for this balance to be achieved, so don't be surprised if limiting the rate doesn't work well with very small files. A: As others have said, the OS kernel is managing the traffic and you are simply reading a copy of the data out of kernel memory. To roughly limit the rate of just one application, you need to delay your reads of the data and allow incoming packets to buffer up in the kernel, which will eventually slow the acknowledgment of incoming packets and reduce the rate on that one socket. 
If you want to slow all traffic to the machine, you need to go and adjust the sizes of your incoming TCP buffers. In Linux, you would effect this change by altering the values in /proc/sys/net/ipv4/tcp_rmem (read memory buffer sizes) and other tcp_* files. A: To add to Branan's answer: If you voluntarily limit the read speed at the receiver end, eventually queues will fill up at both ends. Then the sender will either block in its send() call or return from the send() call with a sent_length less than the expected length passed on to the send() call. If the sender is not ready to deal with this case by sleeping and trying to resend what has not fit into OS buffers, you will end up having connection issues (the sender may detect this as an error) or losing data (the sender may unknowingly discard data that did not fit into OS buffers). A: Set small socket send and receive buffers, say 1k or 2k, such that the bandwidth*delay product = the buffer size. You may not be able to get it small enough over fast links.
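A minimal application-level sketch, in Python, of the "watch the clock and sleep" approach the answers above describe (the 10 KB/s cap and the names are arbitrary; as noted, this only shapes the average rate on one socket - the kernel's TCP flow control does the rest once its buffers fill up):

import socket, time

def read_rate_limited(sock, limit_bytes_per_sec=10 * 1024, chunk=4096):
    # Generator: read from a connected socket, sleeping so the average rate stays under the cap.
    total = 0
    start = time.monotonic()
    while True:
        data = sock.recv(chunk)
        if not data:
            break                                  # peer closed the connection
        total += len(data)
        expected = total / limit_bytes_per_sec     # how long 'total' bytes should have taken
        elapsed = time.monotonic() - start
        if expected > elapsed:                     # ahead of schedule: stall until back under the cap
            time.sleep(expected - elapsed)
        yield data

# usage sketch: for block in read_rate_limited(conn): handle(block)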
{ "language": "en", "url": "https://stackoverflow.com/questions/94997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Alternative to 'truss -p' instruction I am looking for a command in Unix that returns the status of a process(active, dead, sleeping, waiting for another process, etc.) is there any available? A shell script maybe? A: in linux, something like ps -p somepid --no-headers -o state should work, alternately you can look for the info in proc with grep ^State: /proc/somepid/status A: Try pflags <pid>, which will give you per-thread status information. Example: root@weetbix # pflags $$ 3384: bash data model = _ILP32 flags = ORPHAN|MSACCT|MSFORK /1: flags = ASLEEP waitid(0x7,0x0,0xffbfefc0,0xf) sigmask = 0x00020000,0x00000000 Also check out the manpage for pflags to see other useful tools like pstack, pfiles, pargs etc. A: Playing with ps options doesn't give you what you need?
{ "language": "en", "url": "https://stackoverflow.com/questions/94999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Simplest way to update a client-side javascript array variable during a ASP.NET AJAX postback in an UpdatePanel? If I want to inject a globally scoped array variable into a page's client-side javascript during a full page postback, I can use: this.Page.ClientScript.RegisterArrayDeclaration("WorkCalendar", "\"" + date.ToShortDateString() + "\""); to declare and populate a client-side javascript array on the page. Nice and simple. But I want to do the same from a async postback from an UpdatePanel. The closest I can figure so far is to create a .js file that just contains the var declaration, update the file during the async postback, and then use a ScriptManagerProxy.Scripts.Add to add the .js file to the page's global scope. Is there anything simpler? r iz doin it wrong? A: Use the static method System.Web.UI.ScriptManager.AddStartupScript() The script will run on all full and partial postbacks. A: Sam is correct. ScriptManager.RegisterStartupScript is the correct name of the function . It will run on all full and partial page updates. A: You could also update a hidden label inside the update panel which allows you to write out any javascript you like. I would suggest though using web services or even page methods to fetch the data you need instead of using update panels. Example: myLabel.Text = "...."; ... put your logic in this or you can add [WebMethod] to any public static page method and return data directly.
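A sketch of the RegisterStartupScript route described above, rebuilding the client-side array during the async postback (the WorkCalendar name and ToShortDateString call are from the question; GetWorkCalendarDates() is a hypothetical helper, and for anything beyond simple strings a JSON serializer is safer than hand-built concatenation):

// Inside the UpdatePanel's async postback handler (e.g. a button click).
var sb = new System.Text.StringBuilder("var WorkCalendar = [");
string sep = "";
foreach (DateTime date in GetWorkCalendarDates())   // hypothetical helper returning the dates to expose
{
    sb.Append(sep).Append("\"").Append(date.ToShortDateString()).Append("\"");
    sep = ",";
}
sb.Append("];");

// Registered this way, the script is emitted and run on full and partial (async) postbacks alike.
ScriptManager.RegisterStartupScript(this, this.GetType(), "WorkCalendar", sb.ToString(), true);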
{ "language": "en", "url": "https://stackoverflow.com/questions/95005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Explain the quantile() function in R I've been mystified by the R quantile function all day. I have an intuitive notion of how quantiles work, and an M.S. in stats, but boy oh boy, the documentation for it is confusing to me. From the docs: Q[i](p) = (1 - gamma) x[j] + gamma x[j+1], I'm with it so far. For a type i quantile, it's an interpolation between x[j] and x [j+1], based on some mysterious constant gamma where 1 <= i <= 9, (j-m)/n <= p < (j-m+1)/ n, x[j] is the jth order statistic, n is the sample size, and m is a constant determined by the sample quantile type. Here gamma depends on the fractional part of g = np+m-j. So, how calculate j? m? For the continuous sample quantile types (4 through 9), the sample quantiles can be obtained by linear interpolation between the kth order statistic and p(k): p(k) = (k - alpha) / (n - alpha - beta + 1), where α and β are constants determined by the type. Further, m = alpha + p(1 - alpha - beta), and gamma = g. Now I'm really lost. p, which was a constant before, is now apparently a function. So for Type 7 quantiles, the default... Type 7 p(k) = (k - 1) / (n - 1). In this case, p(k) = mode[F(x[k])]. This is used by S. Anyone want to help me out? In particular I'm confused by the notation of p being a function and a constant, what the heck m is, and now to calculate j for some particular p. I hope that based on the answers here, we can submit some revised documentation that better explains what is going on here. quantile.R source code or type: quantile.default A: You're understandably confused. That documentation is terrible. I had to go back to the paper its based on (Hyndman, R.J.; Fan, Y. (November 1996). "Sample Quantiles in Statistical Packages". American Statistician 50 (4): 361–365. doi:10.2307/2684934) to get an understanding. Let's start with the first problem. where 1 <= i <= 9, (j-m)/n <= p < (j-m+1)/ n, x[j] is the jth order statistic, n is the sample size, and m is a constant determined by the sample quantile type. Here gamma depends on the fractional part of g = np+m-j. The first part comes straight from the paper, but what the documentation writers omitted was that j = int(pn+m). This means Q[i](p) only depends on the two order statistics closest to being p fraction of the way through the (sorted) observations. (For those, like me, who are unfamiliar with the term, the "order statistics" of a series of observations is the sorted series.) Also, that last sentence is just wrong. It should read Here gamma depends on the fractional part of np+m, g = np+m-j As for m that's straightforward. m depends on which of the 9 algorithms was chosen. So just like Q[i] is the quantile function, m should be considered m[i]. For algorithms 1 and 2, m is 0, for 3, m is -1/2, and for the others, that's in the next part. For the continuous sample quantile types (4 through 9), the sample quantiles can be obtained by linear interpolation between the kth order statistic and p(k): p(k) = (k - alpha) / (n - alpha - beta + 1), where α and β are constants determined by the type. Further, m = alpha + p(1 - alpha - beta), and gamma = g. This is really confusing. What the documentation calls p(k) is not the same as the p from before. p(k) is the plotting position. In the paper, the authors write it as pk, which helps. Especially since in the expression for m, the p is the original p, and the m = alpha + p * (1 - alpha - beta). Conceptually, for algorithms 4-9, the points (pk, x[k]) are interpolated to get the solution (p, Q[i](p)). 
Each algorithm only differs in the algorithm for the pk. As for the last bit, R is just stating what S uses. The original paper gives a list of 6 "desirable properties for a sample quantile" function, and states a preference for #8 which satisfies all by 1. #5 satisfies all of them, but they don't like it on other grounds (it's more phenomenological than derived from principles). #2 is what non-stat geeks like myself would consider the quantiles and is what's described in wikipedia. BTW, in response to dreeves answer, Mathematica does things significantly differently. I think I understand the mapping. While Mathematica's is easier to understand, (a) it's easier to shoot yourself in the foot with nonsensical parameters, and (b) it can't do R's algorithm #2. (Here's Mathworld's Quantile page, which states Mathematica can't do #2, but gives a simpler generalization of all the other algorithms in terms of four parameters.) A: There are various ways of computing quantiles when you give it a vector, and don't have a known CDF. Consider the question of what to do when your observations don't fall on quantiles exactly. The "types" are just determining how to do that. So, the methods say, "use a linear interpolation between the k-th order statistic and p(k)". So, what's p(k)? One guy says, "well, I like to use k/n". Another guy says, "I like to use (k-1)/(n-1)" etc. Each of these methods have different properties that are better suited for one problem or another. The \alpha's and \beta's are just ways to parameterize the functions p. In one case, they're 1 and 1. In another case, they're 3/8 and -1/4. I don't think the p's are ever a constant in the documentation. They just don't always show the dependency explicitly. See what happens with the different types when you put in vectors like 1:5 and 1:6. (also note that even if your observations fall exactly on the quantiles, certain types will still use linear interpolation). A: I believe the R help documentation is clear after the revisions noted in @RobHyndman's comment, but I found it a bit overwhelming. I am posting this answer in case it helps someone move quickly through the options and their assumptions. To get a grip on quantile(x, probs=probs), I wanted to check out the source code. This too was trickier than I anticipated in R so I actually just grabbed it from a github repo that looked recent enough to run with. I was interested in the default (type 7) behavior, so I annotated that some, but didn't do the same for each option. You can see how the "type 7" method interpolates, step by step, both in the code and also I added a few lines to print some important values as it goes. quantile.default <-function(x, probs = seq(0, 1, 0.25), na.rm = FALSE, names = TRUE , type = 7, ...){ if(is.factor(x)) { #worry about non-numeric data if(!is.ordered(x) || ! type %in% c(1L, 3L)) stop("factors are not allowed") lx <- levels(x) } else lx <- NULL if (na.rm){ x <- x[!is.na(x)] } else if (anyNA(x)){ stop("missing values and NaN's not allowed if 'na.rm' is FALSE") } eps <- 100*.Machine$double.eps #this is to deal with rounding things sensibly if (any((p.ok <- !is.na(probs)) & (probs < -eps | probs > 1+eps))) stop("'probs' outside [0,1]") ##################################### # here is where terms really used in default type==7 situation get defined n <- length(x) #how many observations are in sample? 
if(na.p <- any(!p.ok)) { # set aside NA & NaN o.pr <- probs probs <- probs[p.ok] probs <- pmax(0, pmin(1, probs)) # allow for slight overshoot } np <- length(probs) #how many quantiles are you computing? if (n > 0 && np > 0) { #have positive observations and # quantiles to compute if(type == 7) { # be completely back-compatible index <- 1 + (n - 1) * probs #this gives the order statistic of the quantiles lo <- floor(index) #this is the observed order statistic just below each quantile hi <- ceiling(index) #above x <- sort(x, partial = unique(c(lo, hi))) #the partial thing is to reduce time to sort, #and it only guarantees that sorting is "right" at these order statistics, important for large vectors #ties are not broken and tied elements just stay in their original order qs <- x[lo] #the values associated with the "floor" order statistics i <- which(index > lo) #which of the order statistics for the quantiles do not land on an order statistic for an observed value #this is the difference between the order statistic and the available ranks, i think h <- (index - lo)[i] # > 0 by construction ## qs[i] <- qs[i] + .minus(x[hi[i]], x[lo[i]]) * (index[i] - lo[i]) ## qs[i] <- ifelse(h == 0, qs[i], (1 - h) * qs[i] + h * x[hi[i]]) qs[i] <- (1 - h) * qs[i] + h * x[hi[i]] # This is the interpolation step: assemble the estimated quantile by removing h*low and adding back in h*high. # h is the arithmetic difference between the desired order statistic amd the available ranks #interpolation only occurs if the desired order statistic is not observed, e.g. .5 quantile is the actual observed median if n is odd. # This means having a more extreme 99th observation doesn't matter when computing the .75 quantile ################################### # print all of these things cat("floor pos=", c(lo)) cat("\nceiling pos=", c(hi)) cat("\nfloor values= ", c(x[lo])) cat( "\nwhich floors not targets? ", c(i)) cat("\ninterpolate between ", c(x[lo[i]]), ";", c(x[hi[i]])) cat( "\nadjustment values= ", c(h)) cat("\nquantile estimates:") }else if (type <= 3){## Types 1, 2 and 3 are discontinuous sample qs. nppm <- if (type == 3){ n * probs - .5 # n * probs + m; m = -0.5 } else {n * probs} # m = 0 j <- floor(nppm) h <- switch(type, (nppm > j), # type 1 ((nppm > j) + 1)/2, # type 2 (nppm != j) | ((j %% 2L) == 1L)) # type 3 } else{ ## Types 4 through 9 are continuous sample qs. switch(type - 3, {a <- 0; b <- 1}, # type 4 a <- b <- 0.5, # type 5 a <- b <- 0, # type 6 a <- b <- 1, # type 7 (unused here) a <- b <- 1 / 3, # type 8 a <- b <- 3 / 8) # type 9 ## need to watch for rounding errors here fuzz <- 4 * .Machine$double.eps nppm <- a + probs * (n + 1 - a - b) # n*probs + m j <- floor(nppm + fuzz) # m = a + probs*(1 - a - b) h <- nppm - j if(any(sml <- abs(h) < fuzz)) h[sml] <- 0 x <- sort(x, partial = unique(c(1, j[j>0L & j<=n], (j+1)[j>0L & j<n], n)) ) x <- c(x[1L], x[1L], x, x[n], x[n]) ## h can be zero or one (types 1 to 3), and infinities matter #### qs <- (1 - h) * x[j + 2] + h * x[j + 3] ## also h*x might be invalid ... e.g. Dates and ordered factors qs <- x[j+2L] qs[h == 1] <- x[j+3L][h == 1] other <- (0 < h) & (h < 1) if(any(other)) qs[other] <- ((1-h)*x[j+2L] + h*x[j+3L])[other] } } else { qs <- rep(NA_real_, np)} if(is.character(lx)){ qs <- factor(qs, levels = seq_along(lx), labels = lx, ordered = TRUE)} if(names && np > 0L) { names(qs) <- format_perc(probs) } if(na.p) { # do this more elegantly (?!) 
o.pr[p.ok] <- qs names(o.pr) <- rep("", length(o.pr)) # suppress <NA> names names(o.pr)[p.ok] <- names(qs) o.pr } else qs } #################### # fake data x<-c(1,2,2,2,3,3,3,4,4,4,4,4,5,5,5,5,5,5,5,5,5,6,6,7,99) y<-c(1,2,2,2,3,3,3,4,4,4,4,4,5,5,5,5,5,5,5,5,5,6,6,7,9) z<-c(1,2,2,2,3,3,3,4,4,4,4,4,5,5,5,5,5,5,5,5,5,6,6,7) #quantiles "of interest" probs<-c(0.5, 0.75, 0.95, 0.975) # a tiny bit of illustrative behavior quantile.default(x,probs=probs, names=F) quantile.default(y,probs=probs, names=F) #only difference is .975 quantile since that is driven by highest 2 observations quantile.default(z,probs=probs, names=F) # This shifts everything b/c now none of the quantiles fall on an observation (and of course the distribution changed...)... but #.75 quantile is stil 5.0 b/c the observations just above and below the order statistic for that quantile are still 5. However, it got there for a different reason. #how does rescaling affect quantile estimates? sqrt(quantile.default(x^2, probs=probs, names=F)) exp(quantile.default(log(x), probs=probs, names=F))
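One quick way to convince yourself of the type 7 arithmetic discussed above is to compute a quantile by hand and compare it with quantile() (the sample vector and probability below are arbitrary, and this indexing shortcut assumes p is strictly between 0 and 1):

x <- sort(c(3, 1, 4, 1, 5, 9, 2, 6))   # arbitrary sample, sorted so x[j] is the jth order statistic
p <- 0.6
n <- length(x)

h <- (n - 1) * p + 1                   # type 7 index: j = floor(h), gamma = h - j
j <- floor(h)
g <- h - j
by_hand <- (1 - g) * x[j] + g * x[j + 1]

by_hand                                           # 4.2 for this sample
quantile(x, probs = p, type = 7, names = FALSE)   # should agree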
{ "language": "en", "url": "https://stackoverflow.com/questions/95007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "79" }
Q: Deleting rows in joined tables using ADO Now I have seen this question in another forum but it didn't have an acceptable answer. Suppose I have two tables, the Groups table and the Elements table. The tables have no defined relationships. The Elements table has an IdGroup field that refers to the IdGroup (PK) field of the Groups table. I use the following query through an ADO recordset to populate the tables' values into a datagrid: SELECT Elements.*, Groups.GroupName FROM Elements INNER JOIN Groups ON Elements.IdGroup = Groups.IdGroup From that grid I want to press Delete in order to delete an Element. Here is my problem. When I used DAO, the DAO Delete() function deleted only the record in the Elements table. This was the expected behavior. When I changed to ADO, the Delete() function deleted records in both tables, the element record and the group to which the element belonged! Is there any way to reproduce the DAO behavior in ADO without having to define relationships between the tables? Note: I know there are alternatives (executing DELETE queries could do the job). Just show me a way to do this in ADO, or say it cannot be done. A: Rewrite your query to: * *replace the INNER JOIN with a WHERE clause consisting of an EXISTS; *use a subquery in the SELECT clause to return the value of Groups.GroupName. Example: SELECT Elements.*, ( SELECT Groups.GroupName FROM Groups WHERE Elements.IdGroup = Groups.IdGroup ) FROM Elements WHERE EXISTS ( SELECT * FROM Groups WHERE Elements.IdGroup = Groups.IdGroup ); I've tested this using SQL Server 2008 with an ADO recordset set as the DataSource property of a Microsoft OLEDB Datagrid Control (MSDATGRD.OCX) then deleting the row via the grid (I assume you are doing something similar) and the row is indeed deleted from table Elements only (i.e. the row in Groups remains undeleted). Note the revised query may have a negative impact on performance when fetching rows.
{ "language": "en", "url": "https://stackoverflow.com/questions/95041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: learning drupal fast track: how to create a stackoverflow clone? I've started figuring out drupal, and so far most of the results are just ugly. Maybe I need to learn it on something similar to a real-life project. I thought — to reproduce this site's functionality might be a good learning project. But I need help. :) Without assuming this site is based on drupal (it most likely is not — too quick, I think) is there a way to build something similar in functionality (yes, slower, OK if not as fancy as this one, but close) with existing drupal modules and schemes (or with minimal tweaking)? Or is drupal not good enough for that? Or — is it too complicated a project for a student? Which existing modules and schemes might help to build something similar? (No competition is intended with stackoverflow.) A: Firstly, Drupal is by no means a slow system; actually it works quite well. Secondly, this has already been asked and answered here. By the way, Drupal has a medium learning curve, but once you learn how to use it you'll find it simple and you'll find it will satisfy almost everything you want to do with it. Its plugin system is just great and it's very SEO friendly (I don't get paid by Drupal I swear, I just happen to like it a lot). My website is made in drupal if you wanna take a look (it is in Spanish though). A: -Is there a way to build something similar [to stackoverflow] in functionality with existing drupal modules and schemes (or with minimal tweaking)? It's a good idea to try and learn a new technology by trying to make a real-world project. But if your intention is to actually learn drupal, then by trying to solve the problem with existing modules and "a minimal amount of tweaking", you might not learn very much! -Or is drupal not good enough for that? Drupal is certainly capable of the type of functionality implemented here, and much more. -Or — is it too complicated a project for a student? It depends on the student. Different people have different abilities. Your mileage may vary. A: Some suggestions that might possibly help: * *There's a Drupal distribution for it ... (not just a 'module'), i.e. ArrayShift. Quote from its project page: A question/answer site built to emulate the core functionality of sites on the StackExchange platform, such as: * *StackOverflow. *Drupal Answers. *There is a Drupal theme for it, i.e. the ArrayShift Theme. Here is a screenshot (from its project pages): It has been in a kind of unsupported status until recently, though the updated project page contains a roadmap to get it going again for D7 (and D8 later on). Disclosure: I'm the (new) maintainer of ArrayShift (and its related modules and theme), I hope this does not violate the site's policy on self-promotion.
{ "language": "en", "url": "https://stackoverflow.com/questions/95053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Java: Best Place to Begin Learning Basic Networking I am trying to write a simple networked chat program in Java. I have almost no networking experience. I was wondering what resources I should begin looking at (beside here of course). Sticking with the core Java API would be best for now. A: I recommend that you first learn networking fundamentals. If you have time read the Tanenbaum book, the greatest reference in networking. If you want a quick start, here is a road map: * *OSI layers *UDP and TCP/IP *Sockets *Broadcast and Multicast *Network security Then go with Java: Socket, ServerSocket, DatagramSocket, RMI, etc. A: NIO, or the traditional way with ServerSocket or Socket. See the java.net package. NIO docs here. A: I found a great tutorial on networking and Java from Sun's own website: http://download.oracle.com/javase/tutorial/networking/TOC.html The socket section even has you write a mini client / server chat demo. A: Sun's Java API and official tutorials are probably the best place to get your feet wet. A: It's much more straight-forward than you would think. Honestly I'd just start browsing through the javadocs for the nio package. They should even contain mini-tutorials and source code. Beyond that, java.sun.com should be littered with tutorials. If you don't understand sockets---well I could send you to a reference but it's easier to just tell you--sockets are a way for 2 programs to talk to each other. They are just a unique number that (when combined with your IP address) gives you a unique path to a program. So if I "listen" on port (socket) 1000, and then another program connects to port 1000, anything the connecting program sends, the listening program receives. Use a high port number (higher than, say, 5000) because there are many programs that assign their own port. This is how virtually everything on your computer communicates. You might want to read a really brief intro to socket communications if the API is still confusing. A: Here's a pretty basic, easy to read Java networking tutorial too: http://tutorials.jenkov.com/java-networking/index.html A: Google is your friend. Search for "java socket programming tutorial" or something like that and you'll get lots of results, including the one suggested by zxcv as well as these: http://www.javaworld.com/javaworld/jw-12-1996/jw-12-sockets.html http://www.cafeaulait.org/books/jnp/javanetexamples/index.html A: "Head First Java" is a great beginner's book and they do a tutorial on creating a simple chat program. http://oreilly.com/catalog/9780596004651/
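To get a feel for how little code the core API needs, here is a bare-bones blocking echo server/client sketch using java.net and java.io (single-threaded, no error handling, port 5050 chosen arbitrarily - a starting point rather than a chat program; a real chat needs a thread per client or NIO selectors, as other answers mention):

import java.io.*;
import java.net.*;

public class EchoDemo {
    // Run once with "server" as the argument, then again with no argument as the client.
    public static void main(String[] args) throws IOException {
        if (args.length > 0 && args[0].equals("server")) {
            try (ServerSocket listener = new ServerSocket(5050);
                 Socket conn = listener.accept();   // blocks until a client connects
                 BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
                 PrintWriter out = new PrintWriter(conn.getOutputStream(), true)) {
                String line;
                while ((line = in.readLine()) != null) {
                    out.println("echo: " + line);   // send each received line straight back
                }
            }
        } else {
            try (Socket conn = new Socket("localhost", 5050);
                 BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
                 PrintWriter out = new PrintWriter(conn.getOutputStream(), true);
                 BufferedReader console = new BufferedReader(new InputStreamReader(System.in))) {
                String line;
                while ((line = console.readLine()) != null) {
                    out.println(line);                    // send what the user typed
                    System.out.println(in.readLine());    // print the server's reply
                }
            }
        }
    }
}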
{ "language": "en", "url": "https://stackoverflow.com/questions/95055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How can I automate the process of deploying an InfoPath Form to SharePoint Server 2007? It is a real pain to deploy my InfoPath 2007 Forms from the designer into our development environment's SharePoint server. All of our forms require "full trust" since they include business logic written in C#. Here are the manual steps: 1) Run the "Publish Form" wizard in InfoPath, specifying the target site to publish to and location to save the xsn file. 2) De-activate the existing version of the form from the site collection features (if an older version exists). 3) Log into Central Admin on the development server. Navigate to Application Management -> Manage Form Templates and upload the xsn file. 4) Activate the form as a site collection feature. Does anyone have an idea how this can be automated? Maybe via stsadm? A: You can package InfoPath forms in SharePoint solutions (WSP files). These can be deployed by making use of STSADM. For more information: * *http://blogs.importchaos.com/alonsorobles/2008/06/04/creating-a-sharepoint-solution-for-an-infopath-form-template-deployment/#comments *http://www.crsw.com/mark/Lists/Posts/Post.aspx?ID=37 *http://blah.winsmarts.com/2008-8-Deploying_InfoPath_2007_Forms_to_Forms_Server_-and-ndash_Properly.aspx A: We can build our own service to deploy the InfoPath form in SharePoint Server. I have developed such a service to solve my problem, using the "STSADM" command to deploy the InfoPath form. You have to understand the STSADM syntax so that you can build the script to deploy the InfoPath form. Here I have summarized what I did; it may be useful for you as a starting point. I developed a web service that constructs the script using STSADM, saves it as a .bat file, and runs the batch file using the Process class available in C#. A: Another couple of options are: 1) After running the "publish form" wizard use a batch file with stsadm commands as per the following blogpost: http://sharenotes.wordpress.com/2008/03/18/using-stsadm-to-deploy-upgrade-update-infopath-forms-templates-with-managed-code-behind/ 2) Use the InfoPath Form Deployment Tool on Codeplex (or you can simply use the tool to generate the batch files): http://www.codeplex.com/InfoPathFormsInstall
{ "language": "en", "url": "https://stackoverflow.com/questions/95059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Stop Activerecord from loading Blob column How can I tell Activerecord to not load blob columns unless explicitly asked for? There are some pretty large blobs in my legacy DB that must be excluded for 'normal' Objects. A: I believe you can ask AR to load specific columns in your invocation to find: MyModel.find(id, :select => 'every, attribute, except, the, blobs') However, this would need to be updated as you add columns, so it's not ideal. I don't think there is any way to specifically exclude one column in rails (nor in a single SQL select). I guess you could write it like this: MyModel.find(id, :select => (MyModel.column_names - ['column_to_exclude']).join(', ')) Test these out before you take my word for it though. :) A: fd's answer is mostly right, but ActiveRecord doesn't currently accept an array as a :select argument, so you'll need to join the desired columns into a comma-delimited string, like so: desired_columns = (MyModel.column_names - ['column_to_exclude']).join(', ') MyModel.find(id, :select => desired_columns) A: I just ran into this using rail 3. Fortunately it wasn't that difficult to solve. I set a default_scope that removed the particular columns I didn't want from the result. For example, in the model I had there was an xml text field that could be quite long that wasn't used in most views. default_scope select((column_names - ['data']).map { |column_name| "`#{table_name}`.`#{column_name}`"}) You'll see from the solution that I had to map the columns to fully qualified versions so I could continue to use the model through relationships without ambiguities in attributes. Later where you do want to have the field just tack on another .select(:data) to have it included. A: A clean approach requiring NO CHANGES to the way you code else where in your app, i.e. no messing with :select options For whatever reason you need or choose to store blobs in databases. Yet, you do not wish to mix blob columns in the same table as your regular attributes. BinaryColumnTable helps you store ALL blobs in a separate table, managed transparently by an ActiveRecord model. Optionally, it helps you record the content-type of the blob. http://github.com/choonkeat/binary_column_table Usage is simple Member.create(:name => "Michael", :photo => IO.read("avatar.png")) #=> creates a record in "members" table, saving "Michael" into the "name" column #=> creates a record in "binary_columns" table, saving "avatar.png" binary into "content" column m = Member.last #=> only columns in "members" table is fetched (no blobs) m.name #=> "Michael" m.photo #=> binary content of the "avatar.png" file
{ "language": "en", "url": "https://stackoverflow.com/questions/95061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Gamepad code on OS X: Buh? I thought I was a decent programmer until I tried writing gamepad code for OS X. Now I feel deeply useless. Does anyone know of any code that I can legally use in my (non-free) game? Is it really this hard to talk to a gamepad on OS X? What am I missing? A: Check out the HID Manager, especially the new HID Manager APIs in Leopard. It's somewhat verbose, but the essence of it is that you can get callbacks when devices are attached and detached, and get callbacks when events from those devices are enqueued. If you're working with Cocoa, Dave Dribin has DDHidLib which provides a nicer Objective-C API atop the HID Manager, and runs on Tiger as well. A: Turns out the answer was Apple's HID_Utilities, which (somewhat) simplifies the job of talking to HID Manager. John Carmack really hit the nail on the head when he said that Apple don't care about games... A: The quickest way to get gamepad events on OSX is to use SDL, the game library. You don't have to use the whole library, you can just init the joystick subsystem and then poll or wait for SDL_JOYAXISMOTION and SDL_JOYBUTTONUP/DOWN events. SDL has an LGPL license, so you can dynamically link to it in your non-free game. Easy! A: No code, but communicating with gamepads and the like is pretty straightforward with the InputSprocket mechanism. What was the precise problem you had?
{ "language": "en", "url": "https://stackoverflow.com/questions/95071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Setting DataGridViewRow.Height slow I have noticed that setting row height in DataGridView control is slow. Is there a way to make it faster? A: What's caused similar layout delays for myself was related to the AutoSizeRowsMode and AutoSizeColumnsMode DataGridView1.AutoSizeRowsMode = None will likely fix it. Also try ColumnHeadersHeightSizeMode to None and AllowUserToResizeRows to False. A: If you can, try setting the height before you bind the control. If you can't do that, try making the control hidden before setting the height. A: This works in most cases but I'm not sure if this is what you are looking for... Try setting up the RowTemplate and use that to set the rows height. // my test to specify a size for a datagridview row dataGridView1.Columns.Add(new DataGridViewTextBoxColumn { Name = "ColumnNameGoesHere" }); dataGridView1.RowTemplate.Height = 50; for (var x = 0; x <= 10000; x++) { dataGridView1.Rows.Add(x.ToString()); } Here is also a nice page on Windows Forms Programming Best Practices for Scaling the Windows Forms DataGridView Control which you may find to be handy: http://msdn.microsoft.com/en-us/library/ha5xt0d9.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/95074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: storing state across postback What is the best way to store string data across postback. I need to store an ID and name for multiple entities. I was thinking of using a datatable in viewstate, but would that make viewstate grow too large? I can't use a database yet because I'll be inserting a record that those other records need to be related to. So I'll just be storing them temporarily until the user submits the form. A: You actually have a lot of options - the one you choose will entirely depend on the requirements of your own application. * *ViewState - you can add the data to the page's viewstate. The advantages of this is that the data will live only for the lifetime of the user being on the page and posting it back to the server. Another advantage of it over hidden fields is that it is harder for users to hack into it and alter your values (I believe, in fact, that you can encrypt your viewstate). The disadvantage, of course, lies in page sizes - everything you add to the view state is one more thing that gets dropped on a user's page and then gets posted back to the server. This makes it non-optimal for storing large amounts of data. *Cookies - You can toss the information back to the user in the form of cookies. In this case, you can declare how long the information will last - for the scope of the user having their browser open, or for a specific calendar time. The information will be available to any page of your application each time the user hits that page. The bad news is that you are limited in the amount of information you can store, and that users can very readily alter their own cookies. *Session - You're storing the user's information on your own server's memory (I'll leave aside the discussion of various types of session storage). In this case the information will live as long as your user's session lives, and will be available to all pages of your application. There is no risk of user's modifying those values directly, though session hijacking is a risk you may want to explore. The disadvantage, though, is that you are using precious server resources in this case - if your application has a large load, it may affect your scalability in the future. As I said - what you choose to do will entirely depend on the needs and requirements of your application. A: several ways (though not an exhaustive list): * *ViewState *hidden fields *session *query string *cookies A: ViewState is fine. If you are storing it across postbacks, a client-side solution is best. So, you'd be adding size somewhere--either in ViewState or hidden fields. If you want to do this server-side, you can use the Session, but remember to clean it up when you can. A: you could just store them to a cookie, this would allow you to access them from Javascript too. Alternatively you could store a simple string array to the view state. A lot depends on what and how much information you wish to store. A: When I have this scenario I create a structure for my fields that I stuff into Viewstate. I'm okay with having a small structure added into the page size and lifecycle considering the entire page's controls set is there already :) Furthermore it cleans up after itself after you're done with the page, so there's no worrying about filling your Session with crap. 
A: I concur with the accepted answer, but I would also add that if you only want to keep track of a simple key/value collection, you would be better off putting a generic Dictionary into either ViewState or Session: Dictionary<int, string> myValues = new Dictionary<int, string>(); myValues.Add(1, "Apple"); myValues.Add(2, "Pear");
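A small sketch of what that looks like round-tripped through ViewState (property and key names are invented; Dictionary<int, string> is serializable, so the page can carry it across postbacks at the cost of a larger hidden field):

private Dictionary<int, string> SelectedEntities
{
    get
    {
        var values = (Dictionary<int, string>)ViewState["SelectedEntities"];
        if (values == null)
        {
            values = new Dictionary<int, string>();
            ViewState["SelectedEntities"] = values;   // stored once, mutated thereafter
        }
        return values;
    }
}

protected void AddEntity_Click(object sender, EventArgs e)
{
    SelectedEntities[42] = "Apple";   // survives the next postback
}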
{ "language": "en", "url": "https://stackoverflow.com/questions/95077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: SQLite / Firebird embedded for numeric data I have an experiment streaming up 1Mb/s of numeric data which needs to be stored for later processing. It seems as easy to write directly into a database as to a CSV file, and I would then have the ability to easily retrieve subsets or ranges. I have experience of sqlite2 (when it only had text fields) and it seemed pretty much as fast as raw disk access. Any opinions on the best current in-process DBMS for this application? Sorry - should have added that this is C++, initially on Windows, but cross platform is nice. Ideally the DB binary file format should be cross platform. A: If you only need to read/write the data, without any checking or manipulation done in the database, then both should do it fine. Firebird's database file can be copied, as long as the system has the same endianness (i.e. you cannot copy the file between systems with Intel and PPC processors, but Intel-Intel is fine). However, if you ever need to do anything with the data beyond simple read/write, then go with Firebird, as it is a full SQL server with all the 'enterprise' features like triggers, views, stored procedures, temporary tables, etc. BTW, if you decide to give Firebird a try, I highly recommend you use the IBPP library to access it. It is a very thin C++ wrapper around Firebird's C API. It has about 10 classes that encapsulate everything and it's dead easy to use. A: If all you want to do is store the numbers and be able to easily do range queries, you can just take any standard tree data structure you have available in the STL and serialize it to disk. This may bite you in a cross-platform environment, especially if you are trying to go cross-architecture. As far as more flexible/people-friendly solutions go, sqlite3 is widely used, solid, stable, very nice all around. BerkeleyDB has a number of good features for which one would use it, but none of them apply in this scenario, imho. I'd say go with sqlite3 if you can accept the license agreement. -D A: Depends what language you are using. If it's C/C++, TCL, or PHP, SQLite is still among the best in the single-writer scenario. If you don't need SQL access, a Berkeley DB-style library might be slightly faster, like Sleepycat or gdbm. With multiple writers you could consider a separate client/server solution, but it doesn't sound like you need it. If you're using Java, HSQLDB or Derby (shipped with Sun's JVM under the "JavaDB" branding) seem to be the solutions of choice. A: I suspect that neither database will allow you to write data at such high speed. You can check this yourself to be sure. In my experience, SQLite failed to INSERT more than 1000 rows per second for a very simple table with a single integer primary key. In case of a performance problem, I would use CSV format to write the files, and later load their data into the database (SQLite or Firebird) for further processing. A: You may also want to consider a numeric data file format that is specifically geared towards storing these types of large data sets. For example: * HDF -- the most common and well supported in many languages with free libraries. I highly recommend this. * CDF -- a similar format used by NASA (but usable by anyone). * NetCDF -- another similar format (the latest version is actually a stripped-down HDF5). This link has some info about the differences between the above data set types: http://nssdc.gsfc.nasa.gov/cdf/html/FAQ.html
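On the insert-rate point above: the usual reason SQLite tops out around a thousand inserts per second is one synced transaction per INSERT; batching rows inside an explicit transaction with a prepared statement normally lifts that by orders of magnitude. A hedged C++ sketch (the file name, table layout and t/v sample vectors are made up; error checking omitted):

#include <sqlite3.h>
#include <vector>

void store_samples(const std::vector<double>& t, const std::vector<double>& v)
{
    sqlite3* db = nullptr;
    sqlite3_open("capture.db", &db);
    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS samples(t REAL, v REAL)", nullptr, nullptr, nullptr);

    sqlite3_stmt* ins = nullptr;
    sqlite3_prepare_v2(db, "INSERT INTO samples VALUES(?, ?)", -1, &ins, nullptr);

    sqlite3_exec(db, "BEGIN", nullptr, nullptr, nullptr);       // one transaction for the batch
    for (std::size_t i = 0; i < t.size(); ++i) {
        sqlite3_bind_double(ins, 1, t[i]);
        sqlite3_bind_double(ins, 2, v[i]);
        sqlite3_step(ins);     // executes the INSERT
        sqlite3_reset(ins);    // reuse the prepared statement
    }
    sqlite3_exec(db, "COMMIT", nullptr, nullptr, nullptr);

    sqlite3_finalize(ins);
    sqlite3_close(db);
}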
{ "language": "en", "url": "https://stackoverflow.com/questions/95087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to handle UTF-8 characters in sqlite2 to sqlite3 migration Trying the easy approach: sqlite2 mydb.db .dump | sqlite3 mydb-new.db I got this error: SQL error near line 84802: no such column: Ð In that line the script is this: INSERT INTO vehiculo VALUES(127548,'21K0065217',Ñ,'PA007808',65217,279,1989,3,468,'1998-07-30 00:00:00.000000','14/697/98-07',2,'',1); My guess is that the 'Ñ' without quotes is the problem. Any ideas? PS: I'm under Windows right now and I would like to use the command line so it can be automated (this process will be done on a daily basis by a server). A: Simply open the v2 database with the sqlite3 binary CLI, and then save it. The database file will be transparently migrated to v3. $ sqlite3 v2database.db sqlite> .quit $ Note: you may need to insert/delete a row before quitting to force an update. A: Well, nobody answered... In the end I ended up modifying my original script (the one that created the sqlite2 database in the first place) to create the database directly in sqlite3. I think that a big string-processing script (big because my databases are 800 MB and 200 MB each) could do the job, but generating the database directly was easier for me. A: Simply open the v2 database with the sqlite3 binary CLI, and then save it. The database file will be transparently migrated to v3. It doesn't work. $ sqlite3 db2 SQLite version 3.6.16 Enter ".help" for instructions Enter SQL statements terminated with a ";" sqlite> .tables Error: file is encrypted or is not a database sqlite> .q And the file is not changed (apparently sqlite3 couldn't read it). I think the original problem is a bug in sqlite2. A: I tried to do it without Windows' intervention: * calling sqlite2 on old.db and sending the dump directly to a file, * then calling sqlite3 on new.db and loading the dump directly from that file. Just in case Windows was messing with the characters on the command line. Same result.
{ "language": "en", "url": "https://stackoverflow.com/questions/95089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Why is Application.Restart() not reliable? Using the method Application.Restart() in C# should restart the current application: but it seems that this is not always working. Is there a reason for this Issue, can somebody tell me, why it doesn't work all the time? A: The only time I've run into this kind of issue is when in my main form I had a custom FormClosing event handler, that performed logic and canceled the event. EDIT: I have now run into another instance and based on your comments it possibly mirrors what you were experiencing. When running a single instance application, using a Mutex, I was calling Application.Restart() from a fairly embedded location, that had a lot of cleanup to do. So it seems the restart was launching a new instance before the previous instance was complete, so the Mutex was keeping the new instance from starting. A: In my program I have a mutex to ensure only one instance of the application running on a computer. This was causing the newly started application to not start because the mutex had not been release in a timely fashion. As a result I put a value into Properties.Settings that indicates that the application is restarting. Before calling Application.Restart() the Properties.Settings value is set to true. In Program.Main() I also added a check for that specific property.settings value so that when true it is reset to false and there is a Thread.Sleep(3000); In your program you may have the logic: if (ShouldRestartApp) { Properties.Settings.Default.IsRestarting = true; Properties.Settings.Default.Save(); Application.Restart(); } In Program.Main() [STAThread] static void Main() { Mutex runOnce = null; if (Properties.Settings.Default.IsRestarting) { Properties.Settings.Default.IsRestarting = false; Properties.Settings.Default.Save(); Thread.Sleep(3000); } try { runOnce = new Mutex(true, "SOME_MUTEX_NAME"); if (runOnce.WaitOne(TimeSpan.Zero)) { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Application.Run(new Form1()); } } finally { if (null != runOnce) runOnce.Close(); } } That's it. A: There could be a lot of reasons for this. It's not that the method doesn't work; rather, many times programmers forget that they've put something in their code that would stop the application from automatically shutting down, or starting up. Two examples: * *The Closing event on a form can stop an app's shutdown *If you're doing checking for an already-running process, the old one may not be closing fast enough to allow the new one to start up. Check your code for gotchas like that. If you're seeing this behaviour within a blank application, then that's more likely to be a problem with the actual function than your code. Check Microsoft's sourcecode of application restart. A: Try locking before dumping. Here's how I initiate a full app-dump. Might work for you, might not. Context.Application.Lock(); Context.Session.Abandon(); Context.Application.RemoveAll(); Context.Application.Restart(); Context.Application.UnLock(); A: In my case (NO single-instance), where Application.Restart(); didn't work, System.Diagnostics.Process.Start(Application.ExecutablePath); Application.Exit(); did the job! A: If the application was first launched from a network location and is unsigned (you get the warning dialog first), it won't restart and will only exit. 
A: Extension methods: public delegate void MethodDelegate<in TControl>(TControl value); public static void InvokeIfRequired<TControl>(this TControl control, MethodDelegate<TControl> action) where TControl : Control { if (control.InvokeRequired) { control.Invoke(action, control); } else { action(control); } } Class privates: private static bool _exiting; private static readonly object SynchObj = new object(); Working horse: public static void ApplicationRestart(params string[] commandLine) { lock (SynchObj) { if (Assembly.GetEntryAssembly() == null) { throw new NotSupportedException("RestartNotSupported"); } if (_exiting) return; _exiting = true; if (Environment.OSVersion.Version.Major < 6) return; bool cancelExit = true; try { foreach (Form f in Application.OpenForms.OfType<Form>().ToList()) { f.InvokeIfRequired(frm => { frm.FormClosing += (sender, args) => cancelExit = args.Cancel; frm.Close(); }); if (cancelExit) break; } if (cancelExit) return; Process.Start(new ProcessStartInfo { UseShellExecute = true, WorkingDirectory = Environment.CurrentDirectory, FileName = Application.ExecutablePath, Arguments = commandLine.Length > 0 ? string.Join(" ", commandLine) : string.Empty }); Application.Exit(); } finally { _exiting = false; } } } A: I Tried with below code and it is working fine static class Program { static Mutex _mutex = new Mutex(false, "MYAPP"); /// <summary> /// The main entry point for the application. /// </summary> [STAThread] static void Main() { if (!_mutex.WaitOne(1000, false)) { MessageBox.Show("Another instance is already running!!!", "Already Running", MessageBoxButtons.OK, MessageBoxIcon.Error); return; } Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Application.Run(new PrimaryForm()); _mutex.ReleaseMutex(); } } //the place where i am calling application restart used this code Application.Restart(); Application.ExitThread(); Reference Link https://www.codeproject.com/articles/25674/preventing-multiple-application-instances-when-usi A: Try this code: bool appNotRestarted = true; This code must also be in the function: if (appNotRestarted == true) { appNotRestarted = false; Application.Restart(); Application.ExitThread(); } A: Application.Restart(); Application.ExitThread(); this worked for me thanks. A: I have this very same issue with .Net 4.7 framework. The accepted answer was key for my success. I did had code in the FormClosing event that was taking some time and stopping the restart process. What I did was to put a sentinel like this: If JustCloseIT = False Then 'all closing code, like logging the session log-out to a database and all those goodies we all do. End If only then the Application.Restart() worked! A: I know this is an old thread, but I found a workaround. Hopefully this will help someone else in need. I needed a solution that would trigger the update sequence during a ClickOnce Application startup from code. Applicatoin.Restart did not do this. I wanted a way of being able to check for an update and then invoking the built in update manager so that I didn't have to write a custom one. 'VB Code Sample Dim strStart As String = System.Environment.GetFolderPath(Environment.SpecialFolder.StartMenu) & "\Programs\Folder\YourApplication.appref-ms" Application.Exit() Try Process.Start(strStart) Catch ex As Exception 'Do something with the exception End Try The only issue that I see with this workaround is that a user could delete the shortcut from the start menu. 
If that is a concern, you could write some code to copy the start menu link to some folder of your choosing, preferably in the ClickOnce application folder. This is important because the start menu icon for your application is not a .lnk or .exe; it is actually a .appref-ms link. See "ClickOnce .appref-ms more than a link to .application file?" - that link explains this in more detail. This code will work with ClickOnce SingleInstance applications.
{ "language": "en", "url": "https://stackoverflow.com/questions/95098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46" }
Q: Is there any built-in way to convert an integer to a string (of any base) in C#? Convert.ToString() only allows base values of 2, 8, 10, and 16 for some odd reason; is there some obscure way of providing any base between 2 and 16? A: Probably to eliminate someone typing a 7 instead of an 8, since the uses for arbitrary bases are few (but not non-existent). Here is an example method that can do arbitrary base conversions. You can use it if you like, no restrictions. string ConvertToBase(int value, int toBase) { if (toBase < 2 || toBase > 36) throw new ArgumentException("toBase"); if (value < 0) throw new ArgumentException("value"); if (value == 0) return "0"; // 0 would skip the while loop string AlphaCodes = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"; string retVal = ""; while (value > 0) { retVal = AlphaCodes[value % toBase] + retVal; value /= toBase; } return retVal; } Untested, but you should be able to figure it out from here. A: // untested -- public domain // if you do a lot of conversions, using StringBuilder will be // much, much more efficient with memory and time than using string // alone. string ToStringWithBase(int number, int toBase) { if (number == 0) return "0"; // handle corner case if (toBase < 2) return "ERROR: Base less than 2"; StringBuilder buffer = new StringBuilder(); bool negative = number < 0; if (negative) { number = -number; buffer.Append('-'); } int factor = 1; int runningTotal = number; while (number > 0) { number = number / toBase; factor *= toBase; } factor = factor / toBase; while (factor >= 1) { int remainder = (runningTotal / factor) % toBase; char digit = (char)('0' + remainder); if (remainder > 9) digit = (char)('A' + remainder - 10); buffer.Append(digit); factor = factor / toBase; } return buffer.ToString(); } A: string foo = Convert.ToString(myint, toBase); http://msdn.microsoft.com/en-us/library/14kwkz77.aspx EDIT: My bad, this will throw an argument exception unless you pass in one of the supported bases (2, 8, 10, and 16). You're probably SOL if you want to use a different base (but why???).
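A quick usage check of the ConvertToBase helper from the first answer, next to the built-in for comparison:

Console.WriteLine(ConvertToBase(255, 2));     // 11111111
Console.WriteLine(ConvertToBase(255, 16));    // FF
Console.WriteLine(ConvertToBase(255, 36));    // 73
Console.WriteLine(Convert.ToString(255, 16)); // "ff" - the built-in, but only bases 2, 8, 10 and 16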
{ "language": "en", "url": "https://stackoverflow.com/questions/95105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Vim shortcut for adding arguments to a function Is there a Vim shortcut for jumping to the argument list of the current function? I often find myself needing to mess with the argument list of a function, and it's kind of annoying to have to do ?def or ?function or 10k or what-have-you until I finally get to it, then /( or t( or 5e to get to the right position in the argument list, and so on. It would be great if I could just hit ,a for example and instantly get put into insert mode at the end/beginning of the argument list. Possible approaches: * *Folding *Tag support (ctags) Also, I'm using Python, so solutions based on curly braces unfortunately won't work. If no such shortcut exists, I'll just write one and post it here as an answer. :-) A: Disclaimer, I don't know Python, I assume a Python function can be identified by "function" or "def" from your question. Just change the regex in consequence. May be something like: :nnoremap <buffer> [m :call search('def\|function', 'b')<cr>f( ? NB: * *I have used search() in order to not mess up the search history ; searchpair() may be a better choice as it will only jump to the definition of the function we are within, instead of the previous function. *As this is intended to work with Python only, I use <buffer> in order to not mess up the key-binding in non-Python files; this mapping is best defined in a python ftplugin. HTH, A: map ,a ma[{F(a Hit ,a to go to the argument list, then `a to return to where you were when you invoked ,a. Caveat: [{ jumps back to the last unmatched { character, so if you're inside a loop or other control structure, it will take you to the beginning of that, instead. I don't know of a way to get to the beginning of the function in a fool-proof way. If you're consistent about your tabbing, you may be able to do something like this: map ,a ma?function :nohlf(a where, if you don't use a single tab before you define your functions, you'd change to appropriate value. A: The fool proof way of getting to the begining of a function is to use [[. So you use map ,a ma[[kf(a so it can take you to the function definition, search for the first occurance of "(" and then put you in the insert mode.
{ "language": "en", "url": "https://stackoverflow.com/questions/95106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I delay code execution in Visual Basic (VB6)? I have a long running process in VB6 that I want to finish before executing the next line of code. How can I do that? Built-in function? Can I control how long to wait? Trivial example: Call ExternalLongRunningProcess Call DoOtherStuff How do I delay 'DoOtherStuff'? A: How To Determine When a Shelled Process Has Terminated: * *Archive 1 *Archive 2 If you're calling an external process then you are, in effect, calling it asynchronously. Refer to the above MS Support document for how to wait until your external process is complete. A: While Nescio's answer (DoEvents) will work, it will cause your application to use 100% of one CPU. Sleep will make the UI unresponsive. What you need is a combination of the two, and the magic combination that seems to work best is: Private Declare Sub Sleep Lib "kernel32" (ByVal dwMilliseconds As Long) While IsStillWaitingForSomething() DoEvents DoEvents Sleep(55) Wend Why two DoEvents, and one sleep for 55 milliseconds? The sleep of 55 milliseconds is the smallest slice that VB6 can handle, and using two DoEvents is sometimes required in instances when super-responsiveness is needed (not by the API, but if you application is responding to outside events, SendMessage, Interupts, etc). A: VB.Net: I would use a WaitOne event handle. VB 6.0: I've seen a DoEvents Loop. Do If isSomeCheckCondition() Then Exit Do DoEvents Loop Finally, You could just sleep: Private Declare Sub Sleep Lib "kernel32" (ByVal dwMilliseconds As Long) Sleep 10000 A: If you want to write a sleep or wait without declaring sleep you can write up a loop that uses the systemtimer. This is what i use for testing/debugging when running the interpreter. This can be added while the interpreter is paused, if you'd need such a thing: Dim TimeStart as currency Dim TimeStop as currency Dim TimePassed as currency Dim TimeWait as currency 'use this block where you need a pause TimeWait = 0.5 'seconds TimeStart = Timer() TimePassed = 0 Do while TimePassed < TimeWait 'seconds TimeStop = timer() TimePassed = TimeStop - TimeStart doevents loop A: Break your code up into 2 processes. Run the first, then run your "long running process", then run the second process. A: Run your long-running process in the middle of your current process and wait for it to complete. A: I wish you could just add the .net framework system.dll or whatever to your project references so that you could just do this: Dim ALongTime As Integer = 2000 System.Threading.Thread.Sleep(ALongTime) ...every time. I have VB6, and VB.net 2008 on my machine, and its always difficult for me to switch between the very different IDE's. A: System.Threading.Thread.Sleep(500)
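For the "wait for a shelled process" route from the first answer, a hedged VB6 sketch along the lines of the old KB articles (the executable path is a placeholder; the declarations go in the form or a standard module):

Private Declare Function OpenProcess Lib "kernel32" (ByVal dwDesiredAccess As Long, ByVal bInheritHandle As Long, ByVal dwProcessId As Long) As Long
Private Declare Function WaitForSingleObject Lib "kernel32" (ByVal hHandle As Long, ByVal dwMilliseconds As Long) As Long
Private Declare Function CloseHandle Lib "kernel32" (ByVal hObject As Long) As Long

Private Const SYNCHRONIZE As Long = &H100000
Private Const INFINITE As Long = -1    ' i.e. &HFFFFFFFF

Private Sub RunAndWait()
    Dim pid As Long, hProc As Long
    pid = Shell("C:\Tools\ExternalLongRunningProcess.exe", vbNormalFocus)
    hProc = OpenProcess(SYNCHRONIZE, 0, pid)
    If hProc <> 0 Then
        WaitForSingleObject hProc, INFINITE   ' blocks until the shelled process exits
        CloseHandle hProc
    End If
    Call DoOtherStuff                          ' only runs after the external process finishes
End Sub

If you don't want the UI to freeze while waiting, wait in a loop with short timeouts (say 100 ms) and call DoEvents between waits, as the DoEvents/Sleep answer above describes.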
{ "language": "en", "url": "https://stackoverflow.com/questions/95112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Where is the best place to store user related data in asp.net? When a customer logs in to my site, I need to know their account id and their menu id. This lets me know what data they can see on a page and what menu they get. I don't want to have to read this data over and over. Should I store this in a session variable or customize the membership user and membership provider to contain this information? A: As already suggested, the profile system is super easy. http://msdn.microsoft.com/en-us/library/2y3fs9xs.aspx A: If you're going to use the profile provider, make sure to check out the "Optimize ASP.NET 2.0 Profile Provider" section of this article if you're running a high-traffic site: http://www.codeproject.com/KB/aspnet/10ASPNetPerformance.aspx A: The profile system would probably suit your needs. A: I've used MS Table Profile Provider which allows you to specify your own database table structre to store the data rather than the XML schema used in the default profile system. This has the added bonus of allowing you to write your own data access procedures for accessing common profile data.
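For reference, a hedged sketch of what the profile route looks like in web.config (the property names here are only examples; the provider and storage details depend on your membership setup):

<system.web>
  <profile>
    <properties>
      <add name="AccountId" type="System.Int32" defaultValue="0" />
      <add name="MenuId" type="System.Int32" defaultValue="0" />
    </properties>
  </profile>
</system.web>

In a Web Site project these surface as the strongly typed Profile.AccountId / Profile.MenuId; in a Web Application project you would read them via HttpContext.Profile.GetPropertyValue("AccountId").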
{ "language": "en", "url": "https://stackoverflow.com/questions/95120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How does jstl's sql tag work? I'm using the following code to query a database from my jsp, but I'd like to know more about what's happening behind the scenes. These are my two primary questions. Does the tag access the ResultSet directly, or is the query result being stored in a datastructure in memory? When is the connection closed? <%@ taglib prefix="sql" uri="http://java.sun.com/jsp/jstl/sql" %> <sql:query var="query" dataSource="${ds}" sql="${listQuery}"></sql:query> <c:forEach var="row" items="${query.rows}" begin="0"> ${row.data } ${row.more_data } </c:forEach> Note: I've always been against running queries in the jsp, but my result set is too large to store in memory between my action and my jsp. Using this tag library looks like the easiest solution. A: Observations based on the source for org.apache.taglibs.standard.tag.common.sql.QueryTagSupport The taglib traverses through the ResultSet and puts all of the data in arrays, Maps, and Lists. So, everything is loaded into memory before you even start looping. The connection is opened when the query start tag is encountered (doStartTag method). The results are retrieved when the query end tag is encountered (doEndTag method). The connection is closed in the doFinally method. It a nutshell, it is absolutely awful. A: The key thing here is this: javax.servlet.jsp.jstl.sql.Result That's what JSTL uses as the result of a SQL Query. If you look at the interface, it has this method: public java.util.SortedMap[] getRows() c:forEach "knows" about javax.servlet.jsp.jstl.sql.Result, since Result isn't anything else that forEach knows about (Collections, arrays, iterators, etc). So, all of that implies that the SQL query will suck the entire result set in to RAM. If you moved your query in to the JSP because you didn't want to load the entire result set in to a collection, then it doesn't look like the SQL tag will solve that problem for you. In truth you should look up Value List Pattern. But a "simple" solution to your problem would be to create a custom Iterator that "knows" about your ResultSet. This one wraps a result set and closes everything if it encounters an exception or if the result runs its course (like it would in a forEach). Kind of a special purpose thing. 
public class ResultSetIterator implements Iterator { Connection con; Statement s; ResultSet rs; Object curObject; boolean closed; public ResultSetIterator(Connection con, Statement s, ResultSet rs) { this.con = con; this.s = s; this.rs = rs; closed = false; } public boolean hasNext() { advance(); return curObject != null; } public Object next() { advance(); if (curObject == null) { throw new NoSuchElementException(); } else { Object result = curObject; curObject = null; return result; } } public void remove() { throw new UnsupportedOperationException("Not supported yet."); } private void advance() { if (closed) { curObject = null; return; } if (curObject == null) { try { if (rs.next()) { curObject = bindObject(rs); } } catch (SQLException ex) { shutDown(); throw new RuntimeException(ex); } } if (curObject == null) { // Still no object, must be at the end of the result set shutDown(); } } protected Object bindObject(ResultSet rs) throws SQLException { // Bind result set row to an object, replace or override this method String name = rs.getString(1); return name; } public void shutDown() { closed = true; try { rs.close(); } catch (SQLException ex) { // Ignored } try { s.close(); } catch (SQLException ex) { // Ignored } try { con.close(); } catch (SQLException ex) { // Ignored } } } This is, naturally, untested. But since JSTLs forEach can work with an Iterator, it's the simplest object you could really pass to it. This will prevent you from loading the entire result set in to memory. (As an interesting aside, it's notable how almost, but not quite, completely unlike Iterator a ResultSets behavior is.)
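A short usage sketch for the iterator above, assuming it is handed to the page in request scope (JSTL's c:forEach accepts a java.util.Iterator for items; note the iterator is consumed after one pass):

<%-- in a servlet/controller, or a scriptlet while prototyping --%>
<%
    request.setAttribute("rows", new ResultSetIterator(con, stmt, rs)); // con/stmt/rs obtained as usual
%>
<c:forEach var="row" items="${rows}">
    ${row}<br/>
</c:forEach>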
{ "language": "en", "url": "https://stackoverflow.com/questions/95134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Differences between MSIL and Java bytecode? I'm new to .Net and I'm trying to understand the basics first. What is the difference between MSIL and Java bytecode? A: First off let me say that I don't think that the subtle differences between Java bytecode and MSIL is something that should bother a novice .NET developer. They both serve the same purpose of defining an abstract target machine which is a layer above the physical machine being used in the end. MSIL and Java bytecode are very similar, in fact there is was a tool called Grasshopper which translates MSIL to Java bytecode, I was part of the development team for Grasshopper so I can share a bit of my (faded) knowledge. Please note that I stopped working on this around when .NET framework 2.0 came out so some of these things may not be true any more (if so please leave a comment and I'll correct it). * *.NET allows user defined types that have value semantics as apposed to the regular reference semantics (struct). *.NET supports unsigned types, this makes the instruction set a bit richer. *Java includes the exception specification of methods in the bytecode. Although exception specification is usually only enforced by the compiler, it may be enforced by the JVM if a class loader other than the default one is used. *.NET generics are expressed in IL while Java generics only use type erasure. *.NET attributes have no equivalent in Java (is this still true?). *.NET enums are not much more than wrappers around integer types while Java enums are pretty much fully fledged classes (thanks to Internet Friend for commenting). *.NET has out and ref parameters. There are other language differences but most of them are not expressed at the byte code level, for example, if memory serves, Java's non-static inner classes (which do not exist in .NET) are not a bytecode feature, the compiler generates an additional argument to the inner class's constructor and passes the outer object. The same is true for .NET lambda expressions. A: CIL (the proper name for MSIL) and Java bytecode are more the same than they are different. There are some important differences though: 1) CIL was designed from the beginning to serve as a target for multiple languages. As such, it supports a much richer type system including signed and unsigned types, value types, pointers, properties, delegates, events, generics, an object-system with a single root, and more. CIL supports features not required for the initial CLR languages (C# and VB.NET) such as global functions and tail-call optimizations. In comparision, Java bytecode was designed as a target for the Java language and reflects many of the constraints found in Java itself. It would be a lot harder to write C or Scheme using Java bytecode. 2) CIL was designed to integrate easily into native libraries and unmanaged code 3) Java bytecode was designed to be either interpreted or compiled while CIL was designed assuming JIT compilation only. That said, the initial implementation of Mono used an interpreter instead of a JIT. 4) CIL was designed (and specified) to have a human readable and writable assembly language form that maps directly to the bytecode form. I believe that Java bytecode was (as the name implies) meant to be only machine readable. Of course, Java bytecode is relatively easily decompiled back to the original Java and, as shown below, it can also be "disassembled". I should note that the JVM (most of them) is more highly optimized than the CLR (any of them). 
So, raw performance might be a reason to prefer targeting Java bytecode. This is an implementation detail though. Some people say that the Java bytecode was designed to be multi-platform while CIL was designed to be Windows only. This is not the case. There are some "Windows"isms in the .NET framework but there are none in CIL. As an example of point number 4) above, I wrote a toy Java to CIL compiler a while back. If you feed this compiler the following Java program: class Factorial{ public static void main(String[] a){ System.out.println(new Fac().ComputeFac(10)); } } class Fac { public int ComputeFac(int num){ int num_aux ; if (num < 1) num_aux = 1 ; else num_aux = num * (this.ComputeFac(num-1)) ; return num_aux ; } } my compiler will spit out the following CIL: .assembly extern mscorlib { } .assembly 'Factorial' { .ver 0:0:0:0 } .class private auto ansi beforefieldinit Factorial extends [mscorlib]System.Object { .method public static default void main (string[] a) cil managed { .entrypoint .maxstack 16 newobj instance void class Fac::'.ctor'() ldc.i4 3 callvirt instance int32 class Fac::ComputeFac (int32) call void class [mscorlib]System.Console::WriteLine(int32) ret } } .class private Fac extends [mscorlib]System.Object { .method public instance default void '.ctor' () cil managed { ldarg.0 call instance void object::'.ctor'() ret } .method public int32 ComputeFac(int32 num) cil managed { .locals init ( int32 num_aux ) ldarg num ldc.i4 1 clt brfalse L1 ldc.i4 1 stloc num_aux br L2 L1: ldarg num ldarg.0 ldarg num ldc.i4 1 sub callvirt instance int32 class Fac::ComputeFac (int32) mul stloc num_aux L2: ldloc num_aux ret } } This is a valid CIL program that can be fed into a CIL assembler like ilasm.exe to create an executable. As you can see, CIL is a fully human readable and writable language. You can easily create valid CIL programs in any text editor. You can also compile the Java program above with the javac compiler and then run the resulting class files through the javap "disassembler" to get the following: class Factorial extends java.lang.Object{ Factorial(); Code: 0: aload_0 1: invokespecial #1; //Method java/lang/Object."<init>":()V 4: return public static void main(java.lang.String[]); Code: 0: getstatic #2; //Field java/lang/System.out:Ljava/io/PrintStream; 3: new #3; //class Fac 6: dup 7: invokespecial #4; //Method Fac."<init>":()V 10: bipush 10 12: invokevirtual #5; //Method Fac.ComputeFac:(I)I 15: invokevirtual #6; //Method java/io/PrintStream.println:(I)V 18: return } class Fac extends java.lang.Object{ Fac(); Code: 0: aload_0 1: invokespecial #1; //Method java/lang/Object."<init>":()V 4: return public int ComputeFac(int); Code: 0: iload_1 1: iconst_1 2: if_icmpge 10 5: iconst_1 6: istore_2 7: goto 20 10: iload_1 11: aload_0 12: iload_1 13: iconst_1 14: isub 15: invokevirtual #2; //Method ComputeFac:(I)I 18: imul 19: istore_2 20: iload_2 21: ireturn } The javap output is not compilable (to my knowledge) but if you compare it to the CIL output above you can see that the two are very similar. A: They are essentially doing the same thing, MSIL is Microsoft's version of Java bytecode. The main differences internally are: * *Bytecode was developed for both compilation and interpretation, while MSIL was developed explicitly for JIT compilation *MSIL was developed to support multiple languages (C# and VB.NET, etc.) 
versus Bytecode being written for just Java, resulting in Bytecode being more similar to Java syntactically than IL is to any specific .NET language *MSIL has more explicit delineation between value and reference types A lot more information and a detailed comparison can be found in this article by K John Gough (postscript document) A: There are not that much differences. Both are intermediate formats of the code you wrote. When executed, the Virtual machines will execute the intermediate language managed that means that the Virtual Machine controls the variables and calls. There is even a language which I don't remeber right now that can run at .Net and Java the same way. Basicly, it's just another format for the same thing Edit: Found the language (besides Scala): It's FAN (http://www.fandev.org/), looks very interesting, but no time yet to evaluate A: CIL aka MSIL is intended to be human-readable. Java bytecode is not. Think of Java bytecode as being machine code for hardware that does not exist (but which JVMs emulate). CIL is more like assembly language - one step from machine code, while still being human-readable. A: Serge Lidin authored a decent book on the details of MSIL: Expert .NET 2.0 IL Assembler. I also was able to pick up MSIL quickly by looking at simple methods using .NET Reflector and Ildasm (Tutorial). The concepts between MSIL and Java bytecode are very similar. A: Agreed, the differences are minute enough to ingore as a beginner. If you want to learn .Net starting from the basics, I'd recommend looking at the Common Language Infrastructure, and the Common Type System. A: I think MSIL should not compare to Java bytecode, but "the instruction that comprise the Java bytecodes". There is no name of disassembled java bytecode. "Java Bytecode" should be an unofficial alias, as I cannot find its name in official document. The Java Class File Disassembler say Prints out disassembled code, i.e., the instructions that comprise the Java bytecodes, for each of the methods in the class. These are documented in the Java Virtual Machine Specification. Both "Java VM instructions" and "MSIL" are assembled into .NET bytecode and Java code, which are not human readable.
{ "language": "en", "url": "https://stackoverflow.com/questions/95163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "85" }
Q: Java Serialization with non serializable parts I have: class MyClass extends MyClass2 implements Serializable { //... } In MyClass2 is a property that is not serializable. How can I serialize (and de-serialize) this object? Correction: MyClass2 is, of course, not an interface but a class. A: You will need to implement writeObject() and readObject() and do manual serialization/deserialization of those fields. See the javadoc page for java.io.Serializable for details. Josh Bloch's Effective Java also has some good chapters on implementing robust and secure serialization. A: Depends why that member of MyClass2 isn't serializable. If there's some good reason why MyClass2 can't be represented in a serialized form, then chances are good the same reason applies to MyClass, since it's a subclass. It may be possible to write a custom serialized form for MyClass by implementing readObject and writeObject, in such a way that the state of the MyClass2 instance data in MyClass can be suitably recreated from the serialized data. This would be the way to go if MyClass2's API is fixed and you can't add Serializable. But first you should figure out why MyClass2 isn't serializable, and maybe change it. A: As someone else noted, chapter 11 of Josh Bloch's Effective Java is an indispensible resource on Java Serialization. A couple points from that chapter pertinent to your question: * *assuming you want to serialize the state of the non-serializable field in MyClass2, that field must be accessible to MyClass, either directly or through getters and setters. MyClass will have to implement custom serialization by providing readObject and writeObject methods. *the non-serializable field's Class must have an API to allow getting it's state (for writing to the object stream) and then instantiating a new instance with that state (when later reading from the object stream.) *per Item 74 of Effective Java, MyClass2 must have a no-arg constructor accessible to MyClass, otherwise it is impossible for MyClass to extend MyClass2 and implement Serializable. I've written a quick example below illustrating this. class MyClass extends MyClass2 implements Serializable{ public MyClass(int quantity) { setNonSerializableProperty(new NonSerializableClass(quantity)); } private void writeObject(java.io.ObjectOutputStream out) throws IOException{ // note, here we don't need out.defaultWriteObject(); because // MyClass has no other state to serialize out.writeInt(super.getNonSerializableProperty().getQuantity()); } private void readObject(java.io.ObjectInputStream in) throws IOException { // note, here we don't need in.defaultReadObject(); // because MyClass has no other state to deserialize super.setNonSerializableProperty(new NonSerializableClass(in.readInt())); } } /* this class must have no-arg constructor accessible to MyClass */ class MyClass2 { /* this property must be gettable/settable by MyClass. It cannot be final, therefore. */ private NonSerializableClass nonSerializableProperty; public void setNonSerializableProperty(NonSerializableClass nonSerializableProperty) { this.nonSerializableProperty = nonSerializableProperty; } public NonSerializableClass getNonSerializableProperty() { return nonSerializableProperty; } } class NonSerializableClass{ private final int quantity; public NonSerializableClass(int quantity){ this.quantity = quantity; } public int getQuantity() { return quantity; } } A: You can start by looking into the transient keyword, which marks fields as not part of the persistent state of an object. 
A: Several possibilities popped up, and I'll summarize them here: * implement writeObject() and readObject(), as sk suggested; * declare the property transient and it won't be serialized, as first stated by hank; * use XStream, as stated by boris-terzic; * use a Serial Proxy, as stated by tom-hawtin-tackline. A: MyClass2 is just an interface, so technically it has no properties, only methods. That being said, if you have instance variables that are themselves not serializable, the only way I know of to get around it is to declare those fields transient. ex: private transient Foo foo; When you declare a field transient it will be ignored during the serialization and deserialization process. Keep in mind that when you deserialize an object with a transient field, that field's value will always be its default (usually null). Note you can also override the readResolve() method of your class in order to initialize transient fields based on other system state. A: XStream is a great library for doing fast Java to XML serialization for any object, no matter if it is Serializable or not. Even if the XML target format doesn't suit you, you can use the source code to learn how to do it. A: A useful approach for serialising instances of non-serializable classes (or at least subclasses of them) is known as a Serial Proxy. Essentially you implement writeReplace to return an instance of a completely different serializable class which implements readResolve to return a copy of the original object. I wrote an example of serialising java.awt.BasicStroke on Usenet A: If possible, the non-serializable parts can be marked as transient: private transient SomeClass myClz; Otherwise you can use Kryo. Kryo is a fast and efficient object graph serialization framework for Java (e.g. Java serialization of java.awt.Color requires 170 bytes, Kryo only 4 bytes), which can also serialize non-serializable objects. Kryo can also perform automatic deep and shallow copying/cloning. This is direct copying from object to object, not object->bytes->object. Here is an example of how to use Kryo: Kryo kryo = new Kryo(); // #### Store to disk... Output output = new Output(new FileOutputStream("file.bin")); SomeClass someObject = ... kryo.writeObject(output, someObject); output.close(); // ### Restore from disk... Input input = new Input(new FileInputStream("file.bin")); SomeClass someObject = kryo.readObject(input, SomeClass.class); input.close(); Serialized objects can also be compressed by registering an exact serializer: kryo.register(SomeObject.class, new DeflateCompressor(new FieldSerializer(kryo, SomeObject.class))); A: If you can modify MyClass2, the easiest way to address this is to declare the property transient.
{ "language": "en", "url": "https://stackoverflow.com/questions/95181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "53" }
Q: How does one create an index on the date part of DATETIME field in MySql How do I create an index on the date part of DATETIME field? mysql> SHOW COLUMNS FROM transactionlist; +-------------------+------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------------+------------------+------+-----+---------+----------------+ | TransactionNumber | int(10) unsigned | NO | PRI | NULL | auto_increment | | WagerId | int(11) | YES | MUL | 0 | | | TranNum | int(11) | YES | MUL | 0 | | | TranDateTime | datetime | NO | | NULL | | | Amount | double | YES | | 0 | | | Action | smallint(6) | YES | | 0 | | | Uid | int(11) | YES | | 1 | | | AuthId | int(11) | YES | | 1 | | +-------------------+------------------+------+-----+---------+----------------+ 8 rows in set (0.00 sec) TranDateTime is used to save the date and time of a transaction as it happens My Table has over 1,000,000 records in it and the statement SELECT * FROM transactionlist where date(TranDateTime) = '2008-08-17' takes a long time. EDIT: Have a look at this blog post on "Why MySQL’s DATETIME can and should be avoided" A: If I remember correctly, that will run a whole table scan because you're passing the column through a function. MySQL will obediently run the function for each and every column, bypassing the index since the query optimizer can't really know the results of the function. What I would do is something like: SELECT * FROM transactionlist WHERE TranDateTime BETWEEN '2008-08-17' AND '2008-08-17 23:59:59.999999'; That should give you everything that happened on 2008-08-17. A: I don't know about the specifics of mySql, but what's the harm in just indexing the date field in its entirety? Then just search: select * from translist where TranDateTime > '2008-08-16 23:59:59' and TranDateTime < '2008-08-18 00:00:00' If the indexes are b-trees or something else that's reasonable, these should get found quickly. A: Valeriy Kravchuk on a feature request for this very issue on the MySQL site said to use this method. "In the meantime you can use character columns for storing DATETIME values as strings, with only first N characters being indexed. With some careful usage of triggers in MySQL 5 you can create a reasonably robust solution based on this idea." You could write a routine pretty easy to add this column, and then with triggers keep this column synced up. The index on this string column should be pretty quick. A: The one and good solution that is pretty good working is to use timestamp as time, rather than datetime. It is stored as INT and being indexed good enough. Personally i encountered such problem on transactions table, that has about million records and slowed down hard, finally i pointed out that this caused by bad indexed field (datetime). Now it runs very quick. A: I don't mean to sound cute, but a simple way would be to add a new column that only contained the date part and index on that. A: Another option (relevant for version 5.7.3 and above) is to create a generated/virtual column based on the datetime column, then index it. CREATE TABLE `table` ( `my_datetime` datetime NOT NULL, `my_date` varchar(12) GENERATED ALWAYS AS (DATE(`my_datetime`)) STORED, KEY `my_idx` (`my_date`) ) ENGINE=InnoDB; A: You can't create an index on just the date part. Is there a reason you have to? Even if you could create an index on just the date part, the optimiser would probably still not use it for the above query. 
I think you'll find that SELECT * FROM transactionlist WHERE TranDateTime BETWEEN '2008-08-17' AND '2008-08-18' is efficient and does what you want. A: I don't know about the specifics of MySQL, but what's the harm in just indexing the date field in its entirety? If you put a function around the column, the benefit of the index (B-tree, hash, ...) is gone, because to obtain the values the function must be called, and since the results cannot be known ahead of time, you have to do a full scan of the table. Maybe you mean something like computed (calculated?) indexes... but to date, I have only seen this in Intersystems Caché. I don't think there's a case for it in relational databases (AFAIK). A good solution, in my opinion, is the following (updated clintp example): SELECT * FROM translist WHERE TranDateTime >= '2008-08-17 00:00:00.0000' AND TranDateTime < '2008-08-18 00:00:00.0000' Whether you use 00:00:00.0000 or 00:00 makes, in my opinion, no difference (I've generally used it in this format). A: datetime LIKE 'something%' will not use the index either. Use this: WHERE datetime_field >= curdate(); That will use the index, and cover today 00:00:00 up to today 23:59:59. Done. A: What does 'explain' say? (run EXPLAIN SELECT * FROM transactionlist where date(TranDateTime) = '2008-08-17') If it's not using your index because of the date() function, a range query should run fast: SELECT * FROM transactionlist where TranDateTime >= '2008-08-17' AND TranDateTime < '2008-08-18' A: Rather than making an index based on a function (if that is even possible in MySQL), make your where clause do a range comparison. Something like: Where TranDateTime > '2008-08-17 00:00:00' and TranDateTime < '2008-08-17 23:59:59' This lets the DB use the index on TranDateTime (there is one, right?) to do the select. A: If modifying the table is an option, or you're writing a new one, consider storing date and time in separate columns with respective types. You get performance by having a much smaller key space, and reduced storage (compared to a date-only column derived from a datetime). This also makes it feasible to use them in compound keys, even before other columns. In OP's case: +-------------------+------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------------+------------------+------+-----+---------+----------------+ | TransactionNumber | int(10) unsigned | NO | PRI | NULL | auto_increment | | WagerId | int(11) | YES | MUL | 0 | | | TranNum | int(11) | YES | MUL | 0 | | | TranDate | date | NO | | NULL | | | TranTime | time | NO | | NULL | | | Amount | double | YES | | 0 | | | Action | smallint(6) | YES | | 0 | | | Uid | int(11) | YES | | 1 | | | AuthId | int(11) | YES | | 1 | | +-------------------+------------------+------+-----+---------+----------------+ A: Create a new field with just the date, convert(datetime, left(date_field, 10)), and then index that.
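Tying the answers together, the usual hedged recipe (assuming TranDateTime is not indexed yet) is an index on the full DATETIME column plus a half-open range in the WHERE clause:

-- one-off
ALTER TABLE transactionlist ADD INDEX idx_trandatetime (TranDateTime);

-- everything that happened on 2008-08-17, in a form the index can serve
SELECT *
FROM transactionlist
WHERE TranDateTime >= '2008-08-17'
  AND TranDateTime <  '2008-08-18';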
{ "language": "en", "url": "https://stackoverflow.com/questions/95183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "75" }
Q: CruiseControl + Starteam: not picking up all files Our CruiseControl system checks out from starteam. I've noticed that it is sometimes not checking out new versions of files, only added files. Does anyone know why this is? A: I cannot say why this happens, but for what it's worth, we avoid the problem entirely by having StarTeam delete all of the local files before checking-out. We get all of the files that way. We use the following StarTeam arguments in our NAnt script: delete-local -q -p &quot;${starteam_project_root}&quot; -is -filter &quot;N&quot; -cfgd &quot;${exec_time}&quot; Which translates to something like: delete-local -q -p "user:passwd@SERVER:49201/ProjectName/" -is -filter "N"-cfgd "09/18/2008 14:33:22" A: I recently ran into this same issue. The reason this happens is the same reason you don't see out of date files in the GUI or you have a file with unknown status, the status is not updated. So if you update the status on your files it will then be able to pick up those files that have changed from the source control. We accomplish this by adding a step to our configuration file. <prebuild> <exec> <executable>C:\Program Files\Borland\StarTeam Cross-Platform Client 2006 R2\stcmd.exe</executable > <buildArgs>update-status -nologo -is -q -p "username:password@192.168.0.1:49201/Code Project/Code Path" -fp "C:\projects\My Code Directory"</buildArgs> <buildTimeoutSeconds>0</buildTimeoutSeconds> </exec> </prebuild> A: This is a CI build, so I want to see the diffs on each build, cleaning out the build gives me a fresh build each time, and I don't know what is new. So its a known issue? A: If you are using the StarTeam Ant task, check to see what you have set for the includes and excludes parameters to make sure you are not unintentionally restricting what gets checked out. Also the forced and recursive parameters may be something to look at as well. You can see a full explanation of the checkout task here: http://nantcontrib.sourceforge.net/help/tasks/stcheckout.html
{ "language": "en", "url": "https://stackoverflow.com/questions/95192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How can you generate a Makefile from an Xcode project? I want to generate a Makefile from an existing Xcode project on the Mac. Specifically, an existing iPhone, Objective-C program on the Mac. I found PBToMake, but it looks like it is for Xcode 2.1, and when I tried using it, it did not work for an Xcode 3.1 project. A: GNUstep provides 'pbxbuild' to convert a .xcodeproj file into a GNUmakefile. UPDATE: pbxbuild is now deprecated A: It's not an automated way of generating a Makefile, but I found it at least gets a list of source files that are built for a target. I'm using Xcode 4.0.2 in this case. * Select the project top node * Select the target in the target list * Choose "Build Phases" * Open the "Compile Sources" section * Select one of the source files * Press Command-A to select all * Press Command-C to copy * Paste into your favorite text editor and regex away to restructure. The pasted output is FULL pathnames to each source file. A: Xcode does not support generating a Makefile from a project. If you just want to build your project outside of the IDE, check out the xcodebuild command-line tool. A: You could try mfg. It is template based and I think (I haven't played with it too much) you could get it to generate a suitable makefile.
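If the goal is simply "build from make" rather than a faithful conversion, a thin wrapper over xcodebuild is often enough. A hedged sketch (project, target and configuration names are placeholders; recipe lines must start with a tab):

PROJECT = MyApp.xcodeproj
TARGET  = MyApp
CONFIG  = Release

build:
	xcodebuild -project $(PROJECT) -target $(TARGET) -configuration $(CONFIG)

clean:
	xcodebuild -project $(PROJECT) -target $(TARGET) -configuration $(CONFIG) clean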
{ "language": "en", "url": "https://stackoverflow.com/questions/95211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Can jQuery read/write cookies to a browser? Simple example: I want to have some items on a page (like divs or table rows), and I want to let the user click on them to select them. That seems easy enough in jQuery. To save which items a user clicks on with no server-side post backs, I was thinking a cookie would be a simple way to get this done. * *Is this assumption that a cookie is OK in this case, correct? *If it is correct, does the jQuery API have some way to read/write cookie information that is nicer than the default JavaScript APIs? A: To answer your question, yes. The other have answered that part, but it also seems like you're asking if that's the best way to do it. It would probably depend on what you are doing. Typically you would have a user click what items they want to buy (ordering for example). Then they would hit a buy or checkout button. Then the form would send off to a page and process the result. You could do all of that with a cookie but I would find it to be more difficult. You may want to consider posting your second question in another topic. A: The default JavaScript "API" for setting a cookie is as easy as: document.cookie = 'mycookie=valueOfCookie;expires=DateHere;path=/' Use the jQuery cookie plugin like: $.cookie('mycookie', 'valueOfCookie') A: Take a look at the Cookie Plugin for jQuery. A: It seems the jQuery cookie plugin is not available for download. However, you can download the same jQuery cookie plugin with some improvements described in jQuery & Cookies (get/set/delete & a plugin). A: You can browse all the jQuery plugins tagged with "cookie" here: http://plugins.jquery.com/plugin-tags/cookies Plenty of options there. Check out the one called jQuery Storage, which takes advantage of HTML5's localStorage. If localStorage isn't available, it defaults to cookies. However, it doesn't allow you to set expiration. A: I have managed to write a script allowing the user to choose his/her language, using the cookie script from Klaus Hartl. It took me a few hours work, and I hope I can help others. A: You'll need the cookie plugin, which provides several additional signatures to the cookie function. $.cookie('cookie_name', 'cookie_value') stores a transient cookie (only exists within this session's scope, while $.cookie('cookie_name', 'cookie_value', 'cookie_expiration") creates a cookie that will last across sessions - see http://www.stilbuero.de/2006/09/17/cookie-plugin-for-jquery/ for more information on the JQuery cookie plugin. If you want to set cookies that are used for the entire site, you'll need to use JavaScript like this: document.cookie = "name=value; expires=date; domain=domain; path=path; secure" A: A new jQuery plugin for cookie retrieval and manipulation with binding for forms, etc: http://plugins.jquery.com/project/cookies
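Putting the pieces together for the original click-to-select scenario, a hedged sketch using the cookie plugin discussed above (the .selectable class, id-based markup and cookie name are made up):

var selected = {};
$.each(($.cookie('selectedItems') || '').split(','), function (i, id) {
    if (id) { selected[id] = true; $('#' + id).addClass('selected'); }  // restore earlier choices
});

$('.selectable').click(function () {
    var id = this.id, ids = [];
    if (selected[id]) { delete selected[id]; } else { selected[id] = true; }
    $(this).toggleClass('selected');
    for (var key in selected) { ids.push(key); }
    $.cookie('selectedItems', ids.join(','), { expires: 7, path: '/' });  // 7-day cookie, site-wide
});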
{ "language": "en", "url": "https://stackoverflow.com/questions/95213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "99" }
Q: T-SQL triggers firing a "Column name or number of supplied values does not match table definition" error Here's something I haven't been able to fix, and I've looked everywhere. Perhaps someone here will know! I have a table called dandb_raw, with three columns in particular: dunsId (PK), name, and searchName. I also have a trigger that acts on this table: SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER TRIGGER [dandb_raw_searchNames] ON [dandb_raw] FOR INSERT, UPDATE AS SET NOCOUNT ON select dunsId, name into #magic from inserted UPDATE dandb SET dandb.searchName = company_generateSearchName(dandb.name) FROM (select dunsId, name from #magic) i INNER JOIN dandb_raw dandb on i.dunsId = dandb.dunsId --Add new search matches SELECT c.companyId, dandb.dunsId INTO #newMatches FROM dandb_raw dandb INNER JOIN (select dunsId, name from #magic) a on a.dunsId = dandb.dunsId INNER JOIN companies c ON dandb.searchName = c.searchBrand --avoid url matches that are potentially wrong AND (lower(dandb.url) = lower(c.url) OR dandb.url = '' OR c.url = '' OR c.url is null) INSERT INTO #newMatches (companyId, dunsId) SELECT c.companyId, max(dandb.dunsId) dunsId FROM dandb_raw dandb INNER JOIN ( select case when charindex('/',url) <> 0 then left(url, charindex('/',url)-1) else url end urlMatch, * from companies ) c ON dandb.url = c.urlMatch where subsidiaryOf = 1 and isReported = 1 and dandb.url <> '' and c.companyId not in (select companyId from #newMatches) group by companyId having count(dandb.dunsId) = 1 UPDATE cd SET cd.dunsId = nm.dunsId FROM companies_dandb cd INNER JOIN #newMatches nm ON cd.companyId = nm.companyId GO The trigger causes inserts to fail: insert into [dandb_raw](dunsId, name) select 3442355, 'harper' union all select 34425355, 'har 466per' update [dandb_raw] set name ='grap6767e' With this error: Msg 213, Level 16, State 1, Procedure companies_contactInfo_updateTerritories, Line 20 Insert Error: Column name or number of supplied values does not match table definition. The most curious thing about this is that each of the individual statements in the trigger works on its own. It's almost as though inserted is a one-off table that infects temporary tables if you try to move inserted into one of them. So what causes the trigger to fail? How can it be stopped? A: I think David and Cervo combined have hit on the problem here. I'm pretty sure part of what was happening was that we were using #newMatches in multiple triggers. When one trigger changed some rows, it would fire another trigger, which would attempt to use the connection scoped #newMatches. As a result, it would try to, find the table already existed with a different schema, die, and produce the message above. One piece of evidence that would be in favor: Does inserted use a stack style scope (nested triggers have their own inserteds?) Still speculating though - at least things seem to be working now! A: What is companies_contactInfo_updateTerritories? The actual reference mentions procedure "companies_contactInfo_updateTerritories" but I do not see it in the code given. Also I do not see where it is being called. Unless it is from your application that is calling the SQL and hence irrelevant.... If you tested everything and it worked but now it doesn't work, then something must be different. One thing to consider is security. I noticed that you just call the table [dandb_raw] and not [dbo].[dandb_raw]. 
So if the user had a table of the same name [user].[dandb_raw], that table would be used to check the definitions instead of your table. Also, the trigger creates temp tables. But if some of the temp tables already existed for whatever reason but with different definitions, this may also be a problem. A: I don't see any obvious problem in the code. "SELECT .. INTO" is weak kung-fu. Try explicitly creating the temp table definition: CREATE TABLE #newMatches ( CompanyID int PRIMARY KEY, DunsID int ) When you're done with #newMatches, you should get rid of it so you can create it again later (temp tables are connection scoped!!) DROP TABLE #newMatches A: Trigger code (because it must run every time the data is updated) must be efficient and must account for multiple record inserts. You've succeeded at the second but not the first. You have made this overly complicated and have used things such as NOT IN statements that are usually less efficient than using a left join. Temp tables are unnecessary here (I would never consider using one in a trigger) as they add to the inefficiency of the trigger. There is no reason not to write From inserted i instead of FROM (select dunsId, name from #magic) i The first is likely to be faster and is simpler to read and maintain. Here: JOIN ( select case when charindex('/',url) <> 0 then left(url, charindex('/',url)-1) else url end urlMatch, * from companies ) c ON dandb.url = c.urlMatch You are selecting all the fields in the table even though you only appear to be using one. Why? You are also running that case statement on all the records in companies even though after you join you may not need all of them. Also, in general I would avoid using select * but especially in a trigger. Suppose you are inserting into another table and you used select * from some table joined to inserted or deleted. Adding a column to that table would cause the trigger to fail and stop all data changes until it was fixed. You've also used a function in the trigger. This could be painfully slow if you have a large insert. I suggest you test this by updating a large group of records and see what happens. All data changes do not happen just from the user interface, one record at a time. There will be times when one field is updated from an ad-hoc query in Management Studio (when all prices need to be adjusted by 10% as the simplest example that comes to mind). Your trigger needs to be able to handle those types of updates as well as the ones you are expecting. I would run a test case updating 100000 rows and see how much this trigger slows things down. Maybe this isn't really answering your problem, but the trigger is just so far from optimal, I had to say it.
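To illustrate the advice in the last answer, here is a sketch of just the first UPDATE from the original trigger rewritten to read the inserted pseudo-table directly, with no temp tables (assuming company_generateSearchName lives in the dbo schema; the remaining statements would be reworked the same way):

-- Sketch only: first statement of the trigger without the #magic temp table
ALTER TRIGGER [dandb_raw_searchNames] ON [dandb_raw]
FOR INSERT, UPDATE
AS
SET NOCOUNT ON

UPDATE dandb
SET    dandb.searchName = dbo.company_generateSearchName(dandb.name)
FROM   dandb_raw dandb
       INNER JOIN inserted i ON i.dunsId = dandb.dunsId

Joining straight to inserted also sidesteps the cross-trigger temp-table name collisions suspected in the accepted explanation above.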
{ "language": "en", "url": "https://stackoverflow.com/questions/95218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to evaluate Eclipse RCP for an in-house project? I have only a basic understanding of Eclipse RCP. I am about to start an in-house application for our technical support team, that will likely grow over time. The team is distributed across continents so I would like to be able to auto-update the application when new versions are made available. The application aims to capture knowledge from technical support incidents while making it easy to replay data fixes across clients. The things that make eclipse RCP look interesting are Eclipse Communication Framework (ECF) and Data Tools Platform (DTP). My constraints are: * *Small Team (basically just me for now :) *Have to manage it as a side project until its usefulness is proven I am basically looking for insights from other developers who have worked with Eclipse RCP or who know a better alternative. A: The best way to evaluate RCP is to create a small project ... I started with the tutorial here: http://www.ibm.com/developerworks/edu/os-dw-os-ecl-rcpapp.html and gradually created a less trivial application. Probably the best single resource I've found is the "Eclipse Rich Client Platform" book (which I initially borrowed from the local university library. The book's web site is here: http://eclipsercp.org/book/. The only downside to RCP is the size of the distributed program, but the automatic software update feature makes this much less painful and, if you modularize the application using plugins, the user doesn't have to download the entire application to receive updates to one plugin.
{ "language": "en", "url": "https://stackoverflow.com/questions/95221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: GLSL major mode for Emacs? I found this link http://artis.imag.fr/~Xavier.Decoret/resources/glsl-mode/, but there isn't a lot of description around it, aside that it's "simple". Ideally, I'd like an extension to CcMode that can do it, or at least a mode that can handle auto-styling and has similar shortcuts to CcMode. If there isn't one, any good elisp references to help me get started writing it myself would be greatly appreciated. EDIT: David's response prompted me to take a closer look at glsl-mode.el, and it is in fact based on cc-mode, so it's exactly what I was looking for in the first place. A: Based on GLSL mode, I wrote a similar one for HLSL which is used in Direct3D effect. Here it is. http://sourceforge.net/projects/hlslmode/files/hlsl-mode.el A: Add the following code to your ~/.emacs file. (autoload 'glsl-mode "glsl-mode" nil t) (add-to-list 'auto-mode-alist '("\\.vert\\'" . glsl-mode)) (add-to-list 'auto-mode-alist '("\\.frag\\'" . glsl-mode)) Put the file http://artis.imag.fr/~Xavier.Decoret/resources/glsl-mode/glsl-mode.el somewhere on your emacs path. You can eval (print load-path) in your scratch buffer to get the list of possible locations. If you don't have write access to any of those, you can append another location to load-paths by adding (setq load-path (cons "~/.emacs.d" load-path)) to your ~/.emacs file.
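For the HLSL mode mentioned in the first answer, the setup would presumably mirror the GLSL snippet; a sketch, assuming the file is saved on your load path as hlsl-mode.el and that .fx/.fxh/.hlsl are the extensions you use:

(autoload 'hlsl-mode "hlsl-mode" nil t)
(add-to-list 'auto-mode-alist '("\\.fx\\'" . hlsl-mode))
(add-to-list 'auto-mode-alist '("\\.fxh\\'" . hlsl-mode))
(add-to-list 'auto-mode-alist '("\\.hlsl\\'" . hlsl-mode))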
{ "language": "en", "url": "https://stackoverflow.com/questions/95222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: What's the best way to send a file over a network using C#? Can anyone point me to a tutorial on the best way to open a connection from client to server, read in a binary file and send its contents reliably across the network connection? Even better, is there an open source library that already does this that I could refer to? A: You should look into binary serialization and sending it over a TCP socket. Good explanation on different types of serialization: http://www.dotnetspider.com/resources/408-XML-serialization-Binary-serialization.aspx Good primer on TCP Client/Server in C#: http://www.codeproject.com/KB/IP/tcpclientserver.aspx A: This depends on what you mean by network - if you're copying on a local network you can just use the file copy operations inside System.IO. If you're wanting to send to remote servers I do this using web services. I compress byte arrays and send them over and decompress on the remote side. The byte array is super easy to write back to disk using streams. I know some people prefer base 64 strings instead of the byte[]. Not sure if it matters. A: I wouldn't use HTTP or FTP; for a single file it's too much overhead and too much to code, especially having a simple TCP server almost already made for you in C#. A: Sockets may be the best route if you're just having to do it over the network. If you use TCP, you get the reliability of communication but take an impact on speed. If you need higher performance, you could try using UDP instead. But the downside to UDP is that packet delivery and order is not guaranteed, so you would need to write all that plumbing yourself. If you are needing to transfer files over the web itself (programmatically, and if you can't use FTP), then a web service approach via MTOM might fit your needs. If you are building on top of Windows Server 2003 R2, Windows Vista, or Windows Server 2008 and doing internal network transfers, another option is to leverage the new Remote Differential Compression feature. This not only does a really good job at compressing a file to minimize network traffic, but is also used directly by DFS replication. Downside (as a .NET developer): it's a COM+ technology. A: How about using HTTP or FTP? They were sort of made for this. Alex A: Depending upon where you are sending the file to, you might want to take a look at WebClient.UploadFileAsync and WebClient.UploadFile.
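As a minimal sketch of the plain TCP option (no error handling; the host name, port and file path are placeholders), usable on .NET 2.0:

// Sketch only: stream a file to a listening server over TCP in fixed-size chunks.
using System.IO;
using System.Net.Sockets;

class FileSender
{
    static void Main()
    {
        using (TcpClient client = new TcpClient("server.example.com", 9000))
        using (NetworkStream net = client.GetStream())
        using (FileStream file = File.OpenRead(@"C:\temp\payload.bin"))
        {
            byte[] buffer = new byte[8192];
            int read;
            // TCP already gives ordered, reliable delivery, so a simple copy loop is enough here
            while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
            {
                net.Write(buffer, 0, read);
            }
        }
    }
}

Anything beyond what TCP itself guarantees (resuming, integrity checks, metadata) would need your own framing, which is roughly what the HTTP/FTP and web-service options above give you for free.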
{ "language": "en", "url": "https://stackoverflow.com/questions/95235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How to get a table of dates between x and y in sql server 2005 I just want a quick way (and preferably not using a while loop)of createing a table of every date between date @x and date @y so I can left outer join to some stats tables, some of which will have no records for certain days in between, allowing me to mark missing days with a 0 A: Strictly speaking this doesn't exactly answer your question, but its pretty neat. Assuming you can live with specifying the number of days after the start date, then using a Common Table Expression gives you: WITH numbers ( n ) AS ( SELECT 1 UNION ALL SELECT 1 + n FROM numbers WHERE n < 500 ) SELECT DATEADD(day,n-1,'2008/11/01') FROM numbers OPTION ( MAXRECURSION 500 ) A: I would create a Calendar table that just contained every date from a suitable start date until a suitable end date. This wouldn't take up much space in your database and would make these types of query child's play. select ... from Calendar left outer join ... where Calendar.Date >= @x and Calendar.Date <= @y A: You'll have to edit the LEFT JOIN statement below so that it labels your stats tables to fit your usecase. In the meantime, here's something inspired by BigJump's answer, written in TSQL. * *Objective: Return all gap days in the dataset idl_sourceTable, where a gap day is a day for which there are no corresponding records in idl_sourceTable. *Constraints: No loops Requirements: * *A table must be created which contains contiguous dates. *A startDate and endDate must be specifiable as input. *The result should allow detection of missing days from other tables whose records have a DATETIME field. -- Declare parameters based on [timeGenerated] of idl_sourceTable DECLARE @startDate DATE SET @startDate = ( SELECT CAST (MIN ([timeGenerated]) AS DATE) FROM idl_sourceTable ) DECLARE @endDate DATE SET @endDate = ( SELECT CAST (MAX ([timeGenerated]) AS DATE) FROM idl_sourceTable ) DECLARE @dateRange INT SET @dateRange = ( SELECT DATEDIFF (DAY, @startDate, @endDate) ) SELECT @startDate, @endDate, @dateRange; -- Create #tempDateTable containing dates delimited between the MIN and MAX timeGenerated of idl_sourceTable DROP TABLE IF EXISTS #tempDateTable; WITH numbers_CTE ( n ) AS ( SELECT 1 UNION ALL SELECT 1 + n FROM numbers_CTE WHERE n <= @dateRange ) SELECT DATEADD (DAY, n-1, @startDate) AS [date] INTO #tempDateTable FROM numbers_CTE OPTION ( MAXRECURSION 0 ) -- disables the default 100 recursion level for the CTE SELECT * FROM #tempDateTable -- Display dates which are not represented in idl_sourceTable SELECT basis.[date] AS [missingDays] FROM #tempDateTable basis LEFT JOIN ( SELECT DISTINCT CAST ( [timeGenerated] AS DATE ) AS [objectDate] FROM idl_sourceTable ) AS object ON object.[objectDate] = basis.[date] WHERE object.[objectDate] IS NULL A: I think that you might as well just do it in a while loop. I know it's ugly, but it's easy and it works. A: I was actually doing something similar a little while back, but I couldn't come up with a way that didn't use a loop. The best I got was a temp table, and then selecting the dates I wanted to join on into that. The blog bduke linked to is cute, although I think the temp table solution is perhaps a cleaner solution. A: I've found another table that stores every date (it's visitors to the website), so how about this... 
Declare @FromDate datetime, @ToDate datetime Declare @tmpDates table (StatsDate datetime) Set @FromDate = DateAdd(day,-30,GetDate()) Set @ToDate = GetDate() Insert Into @tmpDates (StatsDate) Select distinct CAST(FLOOR(CAST(visitDate AS DECIMAL(12, 5))) AS DATETIME) FROM tbl_visitorstats Where visitDate between @FromDate And @ToDate Order By CAST(FLOOR(CAST(visitDate AS DECIMAL(12, 5))) AS DATETIME) Select * FROM @tmpDates It does rely on the other table having an entry for every date I want, but it's 98% likely there'll be data for every day. A: A slight twist on the answer given as https://stackoverflow.com/a/95728/395440. Allows days to be specified and also calculates range up to the current date. DECLARE @startDate datetime SET @startDate = '2015/5/29'; WITH number ( n ) AS ( SELECT 1 UNION ALL SELECT 1 + n FROM number WHERE n < DATEDIFF(Day, @startDate, GETDATE()) ) SELECT DATEADD(day,n-1,@startDate) FROM number where datename(dw, DATEADD(day,n-1,@startDate)) in ('Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday') OPTION ( MAXRECURSION 500 ) A: Just: WHERE col > start-date AND col < end-date A: Just write the loop. Someone has to write a loop for this, be it you - or SQL Server. DECLARE @Dates TABLE ( TheDate datetime PRIMARY KEY ) DECLARE @StartDate datetime, @EndDate datetime SELECT @StartDate = '2000-01-01', @EndDate = '2010-01-01' DECLARE @LoopVar int, @LoopEnd int SELECT @LoopEnd = DateDiff(dd, @StartDate, @EndDate), @LoopVar = 0 WHILE @LoopVar <= @LoopEnd BEGIN INSERT INTO @Dates (TheDate) SELECT DateAdd(dd,@LoopVar,@StartDate) SET @LoopVar = @LoopVar + 1 END SELECT * FROM @Dates
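For completeness, a sketch of how the permanent Calendar table suggested in the second answer might be created and filled once (table name and date range are arbitrary); the one-off WHILE loop is acceptable here because it never runs at query time:

-- Sketch only: run once, then LEFT OUTER JOIN your stats tables to Calendar
CREATE TABLE Calendar ([Date] datetime NOT NULL PRIMARY KEY)

DECLARE @d datetime
SET @d = '2000-01-01'
WHILE @d <= '2020-12-31'
BEGIN
    INSERT INTO Calendar ([Date]) VALUES (@d)
    SET @d = DATEADD(day, 1, @d)
END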
{ "language": "en", "url": "https://stackoverflow.com/questions/95257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How do you create a parameterized query in MS Access 2003 and use other queries/forms to fill the parameters and obtain a resultset I'd like to be able to create a parameterized query in MS Access 2003 and feed the values of certain form elements to that query and then get the corresponding resultset back and do some basic calculations with them. I'm coming up short in figuring out how to get the parameters of the query to be populated by the form elements. If I have to use VBA, that's fine. A: References to the controls on the form can be used directly in Access queries, though it's important to define them as parameters (otherwise, results in recent versions of Access can be unpredictable where they were once reliable). For instance, if you want to filter a query by the LastName control on MyForm, you'd use this as your criteria: LastName = Forms!MyForm!LastName Then you'd define the form reference as a parameter. The resulting SQL might look something like this: PARAMETERS [[Forms]!MyForm![LastName]] Text ( 255 ); SELECT tblCustomers.* FROM tblCustomers WHERE tblCustomers.LastName=[Forms]![MyForm]![LastName]; I would, however, ask why you need to have a saved query for this purpose. What are you doing with the results? Displaying them in a form or report? If so, you can do this in the Recordsource of the form/report and leave your saved query untouched by the parameters, so it can be used in other contexts without popping up the prompts to fill out the parameters. On the other hand, if you're doing something in code, just write the SQL on the fly and use the literal value of the form control for constructing your WHERE clause. A: Here is a snippet of code. It updates a table using the parameter txtHospital: Set db = CurrentDb Set qdf = db.QueryDefs("AddHospital") qdf.Parameters!txtHospital = Trim(Me.HospName) qdf.ReturnsRecords = False qdf.Execute dbFailOnError intResult = qdf.RecordsAffected Here is a sample of the SQL: PARAMETERS txtHospital Text(255); INSERT INTO tblHospitals ( [Hospital] ) VALUES ( [txtHospital] ) A: There are three traditional ways to get around this issue: * *Name the parameter something cleaver so that the user will be prompted to enter the value when the query is run. *Reference field on a form (possibly hidden) *Build the query on the fly, and don't use parameters. I think it's just wrong to me that you would ave to inject something like [?enter ISO code of the country] or references to fields on your form like : [Forms]![MyForm]![LastName]. It means we can't re-use the same query in more than one place, with different fields supplying the data or have to rely on the user not to foul up the data entry when the query is run. As I recall, it may be hard to use the same value more than once with the user entered parameter. Typically I've chosen the last option an built the query on the fly, and updated the query object as needed. However, that's rife for an SQL injection attack (accidental or on purpose knowing my users), and it's just icky. So I did some digging and I found the following here (http://forums.devarticles.com/microsoft-access-development-49/pass-parameters-from-vba-to-query-62367.html): 'Ed. Start - for completion of the example dim qryStartDate as date dim qryEndDate as date qryStartDate = #2001-01-01# qryEndDate = #2010-01-01# 'Ed. 
End 'QUOTEING "stallyon": To pass parameters to a query in VBA ' is really quite simple: 'First we'll set some variables: Dim qdf As Querydef Dim rst As Recordset 'then we'll open up the query: Set qdf = CurrentDB.QueryDefs(qryname) 'Now we'll assign values to the query using the parameters option: qdf.Parameters(0) = qryStartDate qdf.Parameters(1) = qryEndDate 'Now we'll convert the querydef to a recordset and run it Set rst = qdf.OpenRecordset 'Run some code on the recordset 'Close all objects rst.Close qdf.Close Set rst = Nothing Set qdf = Nothing (I haven't tested it myself, just something I collected in my travels, because every once in a while I've wanted to do this to, but ended up using one of my previously mentioned kludges) Edit I finally had cause to use this. Here's the actual code. '... Dim qdf As DAO.QueryDef Dim prmOne As DAO.Parameter Dim prmTwo As DAO.Parameter Dim rst as recordset '... 'open up the query: Set qdf = db.QueryDefs("my_two_param_query") 'params called param_one and 'param_two 'link your DAP.Parameters to the query Set prmOne = qdf.Parameters!param_one Set prmTwo = qdf.Parameters!param_two 'set the values of the parameters prmOne = 1 prmTwo = 2 Set rst = qdf.OpenRecordset(dbOpenDynaset, _ dbSeeChanges) '... treat the recordset as normal 'make sure you clean up after your self Set rst = Nothing Set prmOne = Nothing Set prmTwo = Nothing Set qdf = Nothing A: Let's take an example. the parameterized query looks like that: Select Tbl_Country.* From Tbl_Country WHERE id_Country = _ [?enter ISO code of the country] and you'd like to be able to get this value (the [?enter ... country] one) from a form, where you have your controls and some data in it. Well... this might be possible, but it requires some code normalisation. One solution would be to have your form controls named after a certain logic, such as fid_Country for the control that will hold an id_Country value. Your can then have your query as a string: qr = "Select Tbl_Country.* From Tbl_Country WHERE id_Country = [fid_country]" Once you have entered all requested data in your form, press your "query" button. The logic will browse all controls and check if they are in the query, eventually replacing the parameter by the control's value: Dim ctl as Control For each ctl in Me.controls If instr(qr,"[" & ctl.name & "]") > 0 Then qr = replace(qr,"[" & ctl.name & "]",ctl.value) End if Next i Doing so, you will have a fully updated query, where parameters have been replaced by real data. Depending on the type of fid_country (string, GUID, date, etc), you could have to add some extra double quotes or not, to get a final query such as: qr = "Select Tbl_Country.* From Tbl_Country WHERE id_Country = ""GB""" Which is a fully Access compatible query you can use to open a recordset: Set rsQuery = currentDb.openRecordset(qr) I think you are done here. This subject is critical when your objective is to developp Access applications. You have to offer users a standard way to query data from their GUI, not only to launch queries, but also to filter continuous forms (just in the way Excel do it with its "autofilter" option) and manage reports parameters. Good luck! A: the easy method is here Microsoft 'setparameter' info page DoCmd.SetParameter "frontMthOffset", -3 DoCmd.SetParameter "endMthOffset", -2 DoCmd.OpenQuery "QryShowDifference_ValuesChangedBetweenSELECTEDMonths" where the SQL of the Access query includes [frontMthOffset] actually in the SQL. e.g. 
"select blah from mytable where dateoffset=[frontMthOffset]" It all just works!
{ "language": "en", "url": "https://stackoverflow.com/questions/95277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Log4Net: set Max backup files on RollingFileAppender with rolling Date I have the following configuration, but I have not able to find any documentation on how to set a maximum backup files on date rolling style. I know that you can do this with size rolling style by using the maxSizeRollBackups. <appender name="AppLogFileAppender" type="log4net.Appender.RollingFileAppender"> <file value="mylog.log" /> <appendToFile value="true" /> <lockingModel type="log4net.Appender.FileAppender+MinimalLock" /> <rollingStyle value="Date" /> <datePattern value=".yyMMdd.'log'" /> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%d %-5p %c - %m%n" /> </layout> </appender> A: You can't. from log4net SDK Reference RollingFileAppender Class CAUTION A maximum number of backup files when rolling on date/time boundaries is not supported. A: Even though its not supported, here is how I handled this situation: This is my configuration: <appender name="RollingLogFileAppender" type="log4net.Appender.RollingFileAppender"> <file value="C:\logs\LoggingTest\logfile.txt" /> <appendToFile value="true" /> <rollingStyle value="Composite" /> <datePattern value="yyyyMMdd" /> <maxSizeRollBackups value="10" /> <maximumFileSize value="1MB" /> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%date - %message%newline" /> </layout> </appender> On application start up I do: XmlConfigurator.Configure(); var date = DateTime.Now.AddDays(-10); var task = new LogFileCleanupTask(); task.CleanUp(date); using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.Text; using log4net; using log4net.Appender; using log4net.Config; public class LogFileCleanupTask { #region - Constructor - public LogFileCleanupTask() { } #endregion #region - Methods - /// <summary> /// Cleans up. Auto configures the cleanup based on the log4net configuration /// </summary> /// <param name="date">Anything prior will not be kept.</param> public void CleanUp(DateTime date) { string directory = string.Empty; string filePrefix = string.Empty; var repo = LogManager.GetAllRepositories().FirstOrDefault(); ; if (repo == null) throw new NotSupportedException("Log4Net has not been configured yet."); var app = repo.GetAppenders().Where(x => x.GetType() == typeof(RollingFileAppender)).FirstOrDefault(); if (app != null) { var appender = app as RollingFileAppender; directory = Path.GetDirectoryName(appender.File); filePrefix = Path.GetFileName(appender.File); CleanUp(directory, filePrefix, date); } } /// <summary> /// Cleans up. /// </summary> /// <param name="logDirectory">The log directory.</param> /// <param name="logPrefix">The log prefix. 
Example: logfile dont include the file extension.</param> /// <param name="date">Anything prior will not be kept.</param> public void CleanUp(string logDirectory, string logPrefix, DateTime date) { if (string.IsNullOrEmpty(logDirectory)) throw new ArgumentException("logDirectory is missing"); if (string.IsNullOrEmpty(logPrefix)) throw new ArgumentException("logPrefix is missing"); var dirInfo = new DirectoryInfo(logDirectory); if (!dirInfo.Exists) return; var fileInfos = dirInfo.GetFiles("{0}*.*".Sub(logPrefix)); if (fileInfos.Length == 0) return; foreach (var info in fileInfos) { if (info.CreationTime < date) { info.Delete(); } } } #endregion } The Sub Method is an Extension Method, it basically wraps string.format like so: /// <summary> /// Extension helper methods for strings /// </summary> [DebuggerStepThrough, DebuggerNonUserCode] public static class StringExtensions { /// <summary> /// Formats a string using the <paramref name="format"/> and <paramref name="args"/>. /// </summary> /// <param name="format">The format.</param> /// <param name="args">The args.</param> /// <returns>A string with the format placeholders replaced by the args.</returns> public static string Sub(this string format, params object[] args) { return string.Format(format, args); } } A: Not sure exactly what you need. Below is an extract from one of my lo4net.config files: <appender name="RollingFile" type="log4net.Appender.RollingFileAppender"> <param name="File" value="App_Data\log"/> <param name="DatePattern" value=".yyyy-MM-dd-tt&quot;.log&quot;"/> <param name="AppendToFile" value="true"/> <param name="RollingStyle" value="Date"/> <param name="StaticLogFileName" value="false"/> <param name="maxSizeRollBackups" value="60" /> <layout type="log4net.Layout.PatternLayout"> <param name="ConversionPattern" value="%r %d [%t] %-5p %c - %m%n"/> </layout> </appender> A: I recently came across this need when attempting to clean up log logs based on a maxAgeInDays configuration value passed into my service... As many have before me, I became exposed to the NTFS 'feature' Tunneling, which makes using FileInfo.CreationDate problematic (though I have since worked around this as well)... Since I had a pattern to go off of, I decided to just roll my own clean up method... My logger is configured programmatically, so I merely call the following after my logger setup has completed... //......................... //Log Config Stuff Above... log4net.Config.BasicConfigurator.Configure(fileAppender); if(logConfig.DaysToKeep > 0) CleanupLogs(logConfig.LogFilePath, logConfig.DaysToKeep); } static void CleanupLogs(string logPath, int maxAgeInDays) { if (File.Exists(logPath)) { var datePattern = "yyyy.MM.dd"; List<string> logPatternsToKeep = new List<string>(); for (var i = 0; i <= maxAgeInDays; i++) { logPatternsToKeep.Add(DateTime.Now.AddDays(-i).ToString(datePattern)); } FileInfo fi = new FileInfo(logPath); var logFiles = fi.Directory.GetFiles(fi.Name + "*") .Where(x => logPatternsToKeep.All(y => !x.Name.Contains(y) && x.Name != fi.Name)); foreach (var log in logFiles) { if (File.Exists(log.FullName)) File.Delete(log.FullName); } } } Probably not the prettiest approach, but working pretty well for our purposes... A: To limit the number of logs, do not include the year or month in the datepattern,e.g. datePattern value="_dd'.log'" This will create a new log each day, and it will get overwritten next month. A: I spent some time looking into this a few months ago. v1.2.10 doesn't support deleting older log files based on rolling by date. 
It is on the task list for the next release. I took the source code and added the functionality myself, and posted it for others if they are interested. The issue and the patch can be found at https://issues.apache.org/jira/browse/LOG4NET-27 . A: NLog, which is set up nearly the same way as Log4Net (& is actively maintained - even has support for .NET Core), supports rolling logs based on date. A: It's fairly easy to inherit from a log4net appender and add say your own override method which performs the clean up of files. I overrode OpenFile to do this. Here's an example of a custom log4net appender to get you started: https://stackoverflow.com/a/2385874/74585 A: Stopped worrying about a more complex x per date and just specified and arbitrary file count and just sort of threw this one together. Be careful with the [SecurityAction.Demand]. public string LogPath { get; set; } public int MaxFileCount { get; set; } = 10; private FileSystemWatcher _fileSystemWatcher; [PermissionSet(SecurityAction.Demand, Name = "FullTrust")] public async Task StartAsync() { await Task.Yield(); if (!Directory.Exists(LogPath)) { Directory.CreateDirectory(LogPath); } _fileSystemWatcher = new FileSystemWatcher { Filter = "*.*", Path = LogPath, EnableRaisingEvents = true, NotifyFilter = NotifyFilters.FileName | NotifyFilters.LastAccess | NotifyFilters.LastWrite | NotifyFilters.Security | NotifyFilters.Size }; _fileSystemWatcher.Created += OnCreated; } public async Task StopAsync() { await Task.Yield(); _fileSystemWatcher.Created -= OnCreated; // prevents a resource / memory leak. _fileSystemWatcher = null; // not using dispose allows us to re-start if necessary. } private void OnCreated(object sender, FileSystemEventArgs e) { var fileInfos = Directory .GetFiles(LogPath) .Select(filePath => new FileInfo(filePath)) .OrderBy(fileInfo => fileInfo.LastWriteTime) .ToArray(); if (fileInfos.Length <= MaxFileCount) { return; } // For every file (over MaxFileCount) delete, starting with the oldest file. for (var i = 0; i < fileInfos.Length - MaxFileCount; i++) { try { fileInfos[i].Delete(); } catch (Exception ex) { /* Handle */ } } }
{ "language": "en", "url": "https://stackoverflow.com/questions/95286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "78" }
Q: Legality of Encryption in Standard Libraries Some programming languages such as Java and C# include encryption packages in their standard libraries. Others such as Python and Ruby make you download third-party modules to do strong encryption. I assume that this is for legal reasons; perhaps Sun Microsystems has enough lawyers that they aren't afraid of getting sued, while Guido van Rossum feels more vulnerable. But what does the law actually say about this? At this point, would open source authors have anything to fear if they included strong encryption in their programming languages' standard libraries? If so, then why don't they? If not, then how do Sun and Microsoft get away with it. A: There are two issues: importation of encryption software, and exportation of encryption software. Some countries (China, Russia, Iran, Iraq, Myanmar, etc.) restrict the use of cryptography by their citizens. It is illegal to import encryption software to those countries. To enable unlimited encryption strength in the JDK, you have to download a new policy file. The software license there doesn't allow you to use the software if you're in a country that doesn't allow importation of encryption. This is called the "Unlimited Strength Jurisdiction Policy," and below I include part of its README.txt. Other countries, like the US, don't want to export encryption software to the Axis of Evil. So, it can be illegal to export encryption software to those countries. The US export restrictions have eased up considerably, probably in recognition of the futility of keeping encryption out of the hands of enemies, or possibly to encourage use of encryption that has been compromised by the NSA. But, they aren't gone altogether. I don't think the software can be licensed by terrorists. JCE for JDK 5.0 has been through the U.S. export review process. The JCE framework, along with the SunJCE provider that comes standard with it, is exportable. The JCE architecture allows flexible cryptographic strength to be configured via jurisdiction policy files. Due to the import restrictions of some countries, the jurisdiction policy files distributed with the JDK 5.0 software have built-in restrictions on available cryptographic strength. The jurisdiction policy files in this download bundle (the bundle including this README file) contain no restrictions on cryptographic strengths. This is appropriate for most countries. Framework vendors can create download bundles that include jurisdiction policy files that specify cryptographic restrictions appropriate for countries whose governments mandate restrictions. Users in those countries can download an appropriate bundle, and the JCE framework will enforce the specified restrictions. You are advised to consult your export/import control counsel or attorney to determine the exact requirements. A: In the US the important law is ITAR. A: Quick google turned up a Wikipedia article. http://en.wikipedia.org/wiki/Export_of_cryptography But as of now it seems like the "No need to reinvent the wheel" is correct. A: IANAL, But... Java and C# are closed-source, and thus have terms in the EULA that say more-or-less "It's not our fault if you use this somewhere you're not supposed to". They also have teams of lawyers to protect themselves and enforce that clause. Most open-source licenses do not have similar langauge, and even the ones that do, they don't have teams of lawyers on their side, as the OP said. 
Also, Python and PERL are older than Java and C#, from the days when it was illegal to export cryptographic software from the US. Not adding cryptography since the law was changed is perhaps simply a "consistency-is-good" decision.
{ "language": "en", "url": "https://stackoverflow.com/questions/95297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Looking for regex to extract email addresses from /etc/passwd Most of my users have email addresses associated with their profile in /etc/passwd. They are always in the 5th field, which I can grab, but they appear at different places within a comma-separated list in the 5th field. Can somebody give me a regex to grab just the email address (delimited by commas) from a line in this file? (I will be using grep and sed from a bash script) Sample lines from file: user1:x:1147:5005:User One,Department,,,email@domain.org:/home/directory:/bin/bash user2:x:1148:5002:User Two,Department2,email2@gmail.com,:/home/directory:/bin/bash A: What about: ,([^@]+@[^,:]+) Where the group contains the email address. [Updated based upon comment that address doesn't always get terminated by a comma] A: Actually, this looks like a perfect job for Awk. Now, like most people I will say "I'm no expert in Awk" before proceeding... awk -F : '{print $5}' /etc/passwd would get the 5th field where ':' is the field separator from /etc/passwd - it's probably the 5th field you are wanting. awk -F , '{print $1}' would get the 1st field from standard input where ',' was the delimiter so awk -F : '{print $5}' /etc/passwd | awk -F , '{print $1}' would get the first comma separated field (the Name field) from the fifth colon separated field (the field with all that kind of cruft in it!) in your /etc/passwd file. Adjust the print $1 to get the field with your emails in it. Doubtless there is a way to do this without the pipe in Awk. I use Awk for splitting out fields in things and not much else. I find it confusing, and that's from somebody that loves regular expressions... A: sed -r -e "s/^.*[,:]([^,:]+@[^,:]+).*$/\1/g" /etc/passwd Will do the trick A: Search for all email-valid-characters before and after the @ sign. Like: [-A-z0-9.]+@[-A-z0-9.]+ Greedy matching should pull in everything it can, and it'll stop at the commas or colons. Check which characters are valid in email addresses, though. I've left some out (like +) A: sed 's/,*:\/.*//;s/^.*://;s/.*,//' /etc/passwd A: [a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])? should catch most emails A: How about the standard RFC 2822: (?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|"(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21\x23-\x5b\x5d-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])*")@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\[(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?|[a-z0-9-]*[a-z0-9]:(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21-\x5a\x53-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])+)\]) Yep. That's it. :)
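Since the question mentions driving this from a bash script, one hedged way to combine the field-splitting and matching ideas above (this assumes GNU grep for -o, and the character class is deliberately loose rather than RFC-complete):

#!/bin/sh
# Print one address per line: take field 5 (GECOS), then pull out anything shaped like an email.
awk -F: '{ print $5 }' /etc/passwd | grep -oE '[[:alnum:]._%+-]+@[[:alnum:].-]+'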
{ "language": "en", "url": "https://stackoverflow.com/questions/95305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What tools and techniques do you use to fix browser memory leaks? I am trying to fix memory leaks in IE 7. Using Drip for investigations but it is not helping much when most dynamically generated DOM elements do not have unique ids. Tips? A: You should try the Javascript Memory Leak detector developed internally at Microsoft. A: Well, Your best bet is to understand what causes them, so you can look critically at your code, identify patterns that may cause a leak, and then avoid or refactor around them. Here's a couple of links to get you started, both very informative: * *http://www-128.ibm.com/developerworks/web/library/wa-memleak/ *http://msdn.microsoft.com/en-us/library/bb250448.aspx A: Just remember that memory leaks are really about you not cleaning up after yourself. All you need is a little organization. In the past, I have created my own proxy object for attaching events to DOM elements. It uses my javascript library's api to actually set and remove events. The proxy itself just keeps track of all of the references so that I can call a method on it to have it clean up all of my potential memory leaks. For my purposes, I was able to just call a single deconstructor on the page that would clean up the leaks for the entire page when the user was leaving the page. You may have to be more granular but the technique is the same.
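A rough sketch of the "proxy object" idea described in the last answer, written against plain DOM APIs (the registry name is made up); attachEvent/detachEvent are the legacy code paths that matter for IE 7:

// Sketch only: route every event binding through one registry so it can all be undone.
var EventRegistry = {
    _bindings: [],
    add: function (el, type, handler) {
        if (el.addEventListener) {
            el.addEventListener(type, handler, false);
        } else {
            el.attachEvent('on' + type, handler); // old IE
        }
        this._bindings.push({ el: el, type: type, handler: handler });
    },
    cleanup: function () {
        for (var i = 0; i < this._bindings.length; i++) {
            var b = this._bindings[i];
            if (b.el.removeEventListener) {
                b.el.removeEventListener(b.type, b.handler, false);
            } else {
                b.el.detachEvent('on' + b.type, b.handler);
            }
            b.el = null;      // drop DOM references
            b.handler = null; // drop closure references
        }
        this._bindings.length = 0;
    }
};

Calling EventRegistry.cleanup() from window.onunload breaks the DOM-to-script circular references that commonly leak in IE 6/7.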
{ "language": "en", "url": "https://stackoverflow.com/questions/95326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: RSS Statistics/Traffic Metrics I want to track how much traffic I'm getting on an RSS feed that is set up using .Net 2.0 & SQL Server. Is there an industry standard on what metrics I should use, for example, page hits? A: Feedburner analysis gives you statistics like: [screenshots of FeedBurner subscriber and feed-usage charts; source: blogperfume.com] A: I agree, FeedBurner is probably your best option, almost all large sites use it. A: Instead of tracking it yourself, I would encourage you to use a service like FeedBurner. Even if you don't want to use a 3rd party service it will give you an idea of what other people track for RSS feeds.
{ "language": "en", "url": "https://stackoverflow.com/questions/95346", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Visual Studio 2005. RC File includes I'm programming in C++ on Visual Studio 2005. My question deals with .rc files. You can manually place include directives like (#include "blah.h"), at the top of an .rc file. But that's bad news since the first time someone opens the .rc file in the resource editor, it gets overwritten. I know there is a place to make these defines so that they don't get trashed but I can't find it and googling hasn't helped. Anyone know? A: Add your #include to the file in the normal way, but also add it to one of the three "TEXTINCLUDE" sections in the file, like so: 2 TEXTINCLUDE BEGIN "#include ""windows.h""\r\n" "#include ""blah.h""\r\n" "\0" END Note the following details: * *Each line is contained in quotes *Use pairs of quotes, e.g., "" to place a quote character inline *End each line with \r\n *End the TEXTINCLUDE block with "\0" Statements placed in the "1 TEXTINCLUDE" block will be written to the beginning of the .rc file when the file is re-written by the resource editor. Statements placed in the 2 and 3 blocks follow, so you can guarantee relative include file order by using the appropriately numbered block. If your existing rc file does not already include TEXTINCLUDE blocks, use the new file wizard from the Solution Explorer pane to add a new rc file, then use that as a template. A: You want to Include Resources at Compile Time (MSDN). A: Within Visual Studio IDE, right-click on the .rc file (in the Resource View panel), and select "Resource includes" from the shortcut menu. When the dialog opens, use its "Compile-time directives" area to enter whatever you want to include in the .rc file. For example, if you want your 64-bit and 32-bit builds to use different icons, you could include the appropriate resource file for each build as follows: #ifdef WIN64 #include "Icons64.rc" #else #include "Icons32.rc" #endif It's worth noting that these defines are not set in the resource compiler by default, so for your 64-bit build make sure you add /DWIN64 to the rc build. A: All the gory details can be found in MFC Technote #35. -Ron A: I'm not completely sure what you're trying to do, but modifying the resource files manually probably isn't a good idea. I believe general practice for VC++ for globally-accessible values is to define them in stdafx.h (at least that's how I've seen it done), or to create something like a "globals.h" header file and include that wherever you need it. It really depends on what you're trying to accomplish though.
{ "language": "en", "url": "https://stackoverflow.com/questions/95361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How do I modify a ReadOnly Linq property I have a LINQ to SQL generated class with a readonly property: <Column(Name:="totalLogins", Storage:="_TotalLogins", DbType:="Int", UpdateCheck:=UpdateCheck.Never)> _ Public ReadOnly Property TotalLogins() As System.Nullable(Of Integer) Get Return Me._TotalLogins End Get End Property This prevents it from being changed externally, but I would like to update the property from within my class like below: Partial Public Class User ... Public Shared Sub Login(Username, Password) ValidateCredentials(UserName, Password) Dim dc As New MyDataContext() Dim user As User = (from u in dc.Users select u where u.UserName = Username).FirstOrDefault() user._TotalLogins += 1 dc.SubmitChanges() End Sub ... End Class But the call to user._TotalLogins += 1 is not being written to the database? Any thoughts on how to get LINQ to see my changes? A: Set the existing TotalLogins property as either private or protected and remove the readonly attribute. You may also want to rename it e.g. InternalTotalLogins. Then create a new property by hand in the partial class that exposes it publicly as a read-only property: Public ReadOnly Property TotalLogins() As System.Nullable(Of Integer) Get Return Me.InternalTotalLogins End Get End Property A: Make a second property that is protected or internal(?) <Column(Name:="totalLogins", Storage:="_TotalLogins", DbType:="Int", UpdateCheck:=UpdateCheck.Never)> _ Protected Property TotalLogins2() As System.Nullable(Of Integer) Get Return Me._TotalLogins End Get Set(ByVal value As System.Nullable(Of Integer)) Me._TotalLogins = value End Set End Property and then update that. I think it won't save readonly properties by default. And why should it anyway. I know it's a hack but hey, that's life. A: Before you change the field call SendPropertyChanging() and then after call SendPropertyChanged(""). This will make the Table entity tracking know something changed.
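Combining the rename advice with the last answer about the change tracker, a sketch of what the partial class might look like (the method name is made up; it assumes the usual SendPropertyChanging/SendPropertyChanged members that the LINQ to SQL designer generates on the entity):

' Sketch only: bump the counter from inside the entity, going through change notifications
Partial Public Class User
    Public Sub RecordLogin()
        Me.SendPropertyChanging()
        Me._TotalLogins = Me._TotalLogins.GetValueOrDefault() + 1
        Me.SendPropertyChanged("TotalLogins")
    End Sub
End Class

Login would then call user.RecordLogin() instead of touching the backing field directly, and SubmitChanges() should pick the change up.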
{ "language": "en", "url": "https://stackoverflow.com/questions/95364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Batch source-code aware spell check What is a tool or technique that can be used to perform spell checks upon a whole source code base and its associated resource files? The spell check should be source code aware meaning that it would stick to checking string literals in the code and not the code itself. Bonus points if the spell checker understands common resource file formats, for example text files containing name-value pairs (only check the values). Super-bonus points if you can tell it which parts of an XML DTD or Schema should be checked and which should be ignored. Many IDEs can do this for the file you are currently working with. The difference in what I am looking for is something that can operate upon a whole source code base at once. Something like a Findbugs or PMD type tool for mis-spellings would be ideal. A: As you mentioned, many IDEs have this functionality already, and one such IDE is Eclipse. However, unlike many other IDEs Eclipse is: A) open source B) designed to be programmable For instance, here's an article on using Eclipse's code formatting functionality from the command line: http://www.peterfriese.de/formatting-your-code-using-the-eclipse-code-formatter/ In theory, you should be able to do something similar with its spell-checking mechanism. I know this isn't exactly what you're looking for, and if there is a program for doing spell-checking in code then obviously that'd be better, but if not then Eclipse may be the next best thing. A: This seems a little old but seems to do a good job: Source Code Spell Checker
{ "language": "en", "url": "https://stackoverflow.com/questions/95378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Passing parameters to start as a console or GUI application? I have a console application that will be kicked off with a scheduler. If, for some reason, part of that file cannot be built, I need a GUI front end so we can run it the next day with specific input. Is there a way to pass parameters to the application entry point and start either the console application or the GUI application based on the arguments passed? A: It sounds like what you want is to either run as a console app or a windows app based on a commandline switch. If you look at the last message in this thread, Jeffrey Knight posted code to do what you are asking. However, note that many "hybrid" apps actually ship two different executables (look at Visual Studio: devenv.exe is the GUI, devenv.com is the console). Using a "hybrid" approach can sometimes lead to hard to track down issues. A: Go to your main method (Program.cs). You'll put your logic there, and determine what to do, and conditionally execute Application.Run(). A: I think Philip is right. Although I've been using the "hybrid" approach in a widely deployed commercial application without problems. I did have some issues with the "hybrid" code I started out with, so I ended up fixing them and re-releasing the solution. So feel free to take advantage of it. It's actually quite simple to use. The hybrid system is on google code and updates an old codeguru solution of this technique and provides the source code and working example binaries. A: Write the GUI output to a file that the console app checks when loading. This way your console app can do the repair operations and the normal operations in one scheduled operation. A: One solution to this would be to have the console app write the config file for a GUI app (WinForms is simplest). I like the Hybrid approach, the command line switch appears to be fluff. It could be simpler to have two applications using the same engine for common functionality. The way to think of it is the console app is for computers to use while the GUI App is for humans to use. Since the CLI App will execute first then it can communicate its data through the config file to the GUI App. One side benefit would be the interface to the processing engine would be more concise thus easier to maintain in the future. This would be the simplest, because the Config file mechanism is easily available and you do not have to write a bunch of formatting and parsing routines. If you don't want to use the Config mechanism, you could directly write JSON or XML Serialization to a file to easily transfer data also
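A sketch of the Program.cs branching described in the second answer (the /gui switch and MainForm are made-up names; whether Console output is actually visible depends on the project's output type, which is exactly the hybrid issue discussed above):

// Sketch only: one entry point that picks console-style or GUI behaviour from the arguments.
using System;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main(string[] args)
    {
        if (args.Length > 0 && args[0].Equals("/gui", StringComparison.OrdinalIgnoreCase))
        {
            // interactive re-run: let the user supply the missing input
            Application.EnableVisualStyles();
            Application.Run(new MainForm());
        }
        else
        {
            // scheduled, unattended run
            RunBatchJob();
        }
    }

    static void RunBatchJob()
    {
        Console.WriteLine("Running scheduled build...");
        // ... build the file here ...
    }
}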
{ "language": "en", "url": "https://stackoverflow.com/questions/95379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I force a Java subclass to define an Annotation? If a class defined an annotation, is it somehow possible to force its subclass to define the same annotation? For instance, we have a simple class/subclass pair that share the @Author @interface. What I'd like to do is force each further subclass to define the same @Author annotation, preventing a RuntimeException somewhere down the road. TestClass.java: import java.lang.annotation.*; @Retention(RetentionPolicy.RUNTIME) @interface Author { String name(); } @Author( name = "foo" ) public abstract class TestClass { public static String getInfo( Class<? extends TestClass> c ) { return c.getAnnotation( Author.class ).name(); } public static void main( String[] args ) { System.out.println( "The test class was written by " + getInfo( TestClass.class ) ); System.out.println( "The test subclass was written by " + getInfo( TestSubClass.class ) ); } } TestSubClass.java: @Author( name = "bar" ) public abstract class TestSubClass extends TestClass {} I know I can enumerate all annotations at runtime and check for the missing @Author, but I'd really like to do this at compile time, if possible. A: You can do that with JSR 269, at compile time. See : http://today.java.net/pub/a/today/2006/06/29/validate-java-ee-annotations-with-annotation-processors.html#pluggable-annotation-processing-api Edit 2020-09-20: Link is dead, archived version here : https://web.archive.org/web/20150516080739/http://today.java.net/pub/a/today/2006/06/29/validate-java-ee-annotations-with-annotation-processors.html A: I am quite sure that this is impossible to do at compile time. However, this is an obvious task for a "unit"-test. If you have conventions like this that you would like enforced, but which can be difficult or impossible to check with the compiler, "unit"-tests are a simple way to check them. Another possibility is to implement a custom rule in a static analyzer. There are many options here, too. (I put unit in scare-quotes, since this is really a test of conventions, rather than of a specific unit. But it should run together with your unit-tests). A: You could make an Annotation (e.g. @EnforceAuthor) with @Inherited on the superclass and use compiler annotations (since Java 1.6) to catch up at compile time. Then you have a reference to the subclass and can check if another Annotation (e.g. @Author)) is missing. This would allow to cancel compiling with an error message.
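A very rough sketch of the JSR 269 route from the first answer: an annotation processor that raises a compile-time error for any class in the compilation that extends TestClass but does not declare @Author (null checks are minimal, the default-package names follow the question, and the processor still has to be registered via META-INF/services/javax.annotation.processing.Processor):

import java.util.Set;
import javax.annotation.processing.*;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.AnnotationMirror;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;

@SupportedAnnotationTypes("*")
@SupportedSourceVersion(SourceVersion.RELEASE_6)
public class AuthorCheckProcessor extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        TypeElement base = processingEnv.getElementUtils().getTypeElement("TestClass");
        if (base == null) {
            return false; // base class not part of this compilation
        }
        for (Element e : roundEnv.getRootElements()) {
            if (!(e instanceof TypeElement)) {
                continue;
            }
            TypeElement type = (TypeElement) e;
            boolean isSubclass = processingEnv.getTypeUtils().isSubtype(type.asType(), base.asType());
            if (isSubclass && !hasAuthor(type)) {
                processingEnv.getMessager().printMessage(Diagnostic.Kind.ERROR,
                        type.getQualifiedName() + " must declare an @Author annotation", type);
            }
        }
        return false;
    }

    // checks only annotations declared directly on the class, which is what the question wants
    private boolean hasAuthor(TypeElement type) {
        for (AnnotationMirror m : type.getAnnotationMirrors()) {
            if (m.getAnnotationType().toString().equals("Author")) {
                return true;
            }
        }
        return false;
    }
}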
{ "language": "en", "url": "https://stackoverflow.com/questions/95389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: What are all the different ways to create an object in Java? Had a conversation with a coworker the other day about this. There's the obvious using a constructor, but what are the other ways there? A: Cloning and deserialization. A: There are various ways: * *Through Class.newInstance. *Through Constructor.newInstance. *Through deserialisation (uses the no-args constructor of the most derived non-serialisable base class). *Through Object.clone (does not call a constructor). *Through JNI (should call a constructor). *Through any other method that calls a new for you. *I guess you could describe class loading as creating new objects (such as interned Strings). *A literal array as part of the initialisation in a declaration (no constructor for arrays). *The array in a "varargs" (...) method call (no constructor for arrays). *Non-compile time constant string concatenation (happens to produce at least four objects, on a typical implementation). *Causing an exception to be created and thrown by the runtime. For instance throw null; or "".toCharArray()[0]. *Oh, and boxing of primitives (unless cached), of course. *JDK8 should have lambdas (essentially concise anonymous inner classes), which are implicitly converted to objects. *For completeness (and Paŭlo Ebermann), there's some syntax with the new keyword as well. A: Also you can use Object myObj = Class.forName("your.cClass").newInstance(); A: This should be noticed if you are new to java, every object has inherited from Object protected native Object clone() throws CloneNotSupportedException; A: Also, you can de-serialize data into an object. This doesn't go through the class Constructor ! UPDATED : Thanks Tom for pointing that out in your comment ! And Michael also experimented. It goes through the constructor of the most derived non-serializable superclass. And when that class has no no-args constructor, a InvalidClassException is thrown upon de-serialization. Please see Tom's answer for a complete treatment of all cases ;-) is there any other way of creating an object without using "new" keyword in java A: There is a type of object, which can't be constructed by normal instance creation mechanisms (calling constructors): Arrays. Arrays are created with A[] array = new A[len]; or A[] array = new A[] { value0, value1, value2 }; As Sean said in a comment, this is syntactically similar to a constructor call and internally it is not much more than allocation and zero-initializing (or initializing with explicit content, in the second case) a memory block, with some header to indicate the type and the length. When passing arguments to a varargs-method, an array is there created (and filled) implicitly, too. A fourth way would be A[] array = (A[]) Array.newInstance(A.class, len); Of course, cloning and deserializing works here, too. There are many methods in the Standard API which create arrays, but they all in fact are using one (or more) of these ways. A: Other ways if we are being exhaustive. * *On the Oracle JVM is Unsafe.allocateInstance() which creates an instance without calling a constructor. *Using byte code manipulation you can add code to anewarray, multianewarray, newarray or new. These can be added using libraries such as ASM or BCEL. A version of bcel is shipped with Oracle's Java. Again this doesn't call a constructor, but you can call a constructor as a seperate call. A: Reflection: someClass.newInstance(); A: Reflection will also do the job for you. 
SomeClass anObj = SomeClass.class.newInstance(); is another way to create a new instance of a class. In this case, you will also need to handle the exceptions that might get thrown. A: * *using the new operator (thus invoking a constructor) *using reflection clazz.newInstance() (which again invokes the constructor). Or by clazz.getConstructor(..).newInstance(..) (again using a constructor, but you can thus choose which one) To summarize the answer - one main way - by invoking the constructor of the object's class. Update: Another answer listed two ways that do not involve using a constructor - deseralization and cloning. A: There are FIVE different ways to create objects in Java: 1. Using `new` keyword: This is the most common way to create an object in Java. Almost 99% of objects are created in this way. MyObject object = new MyObject();//normal way 2. By Using Factory Method: ClassName ObgRef=ClassName.FactoryMethod(); Example: RunTime rt=Runtime.getRunTime();//Static Factory Method 3. By Using Cloning Concept: By using clone(), the clone() can be used to create a copy of an existing object. MyObjectName anotherObject = new MyObjectName(); MyObjectName object = anotherObjectName.clone();//cloning Object 4. Using `Class.forName()`: If we know the name of the class & if it has a public default constructor we can create an object in this way. MyObjectName object = (MyObjectNmae) Class.forName("PackageName.ClassName").newInstance(); Example: String st=(String)Class.forName("java.lang.String").newInstance(); 5. Using object deserialization: Object deserialization is nothing but creating an object from its serialized form. ObjectInputStreamName inStream = new ObjectInputStreamName(anInputStream ); MyObjectName object = (MyObjectNmae) inStream.readObject(); A: You can also clone existing object (if it implements Cloneable). Foo fooClone = fooOriginal.clone (); A: There are four different ways to create objects in java: A. Using new keyword This is the most common way to create an object in java. Almost 99% of objects are created in this way. MyObject object = new MyObject(); B. Using Class.forName() If we know the name of the class & if it has a public default constructor we can create an object in this way. MyObject object = (MyObject) Class.forName("subin.rnd.MyObject").newInstance(); C. Using clone() The clone() can be used to create a copy of an existing object. MyObject anotherObject = new MyObject(); MyObject object = (MyObject) anotherObject.clone(); D. Using object deserialization Object deserialization is nothing but creating an object from its serialized form. ObjectInputStream inStream = new ObjectInputStream(anInputStream ); MyObject object = (MyObject) inStream.readObject(); You can read them from here. A: Within the Java language, the only way to create an object is by calling its constructor, be it explicitly or implicitly. Using reflection results in a call to the constructor method, deserialization uses reflection to call the constructor, factory methods wrap the call to the constructor to abstract the actual construction and cloning is similarly a wrapped constructor call. A: Method 1 Using new keyword. This is the most common way to create an object in java. Almost 99% of objects are created in this way. Employee object = new Employee(); Method 2 Using Class.forName(). Class.forName() gives you the class object, which is useful for reflection. The methods that this object has are defined by Java, not by the programmer writing the class. They are the same for every class. 
Calling newInstance() on that gives you an instance of that class (i.e. callingClass.forName("ExampleClass").newInstance() it is equivalent to calling new ExampleClass()), on which you can call the methods that the class defines, access the visible fields etc. Employee object2 = (Employee) Class.forName(NewEmployee).newInstance(); Class.forName() will always use the ClassLoader of the caller, whereas ClassLoader.loadClass() can specify a different ClassLoader. I believe that Class.forName initializes the loaded class as well, whereas the ClassLoader.loadClass() approach doesn’t do that right away (it’s not initialized until it’s used for the first time). Another must read: Java: Thread State Introduction with Example Simple Java Enum Example Method 3 Using clone(). The clone() can be used to create a copy of an existing object. Employee secondObject = new Employee(); Employee object3 = (Employee) secondObject.clone(); Method 4 Using newInstance() method Object object4 = Employee.class.getClassLoader().loadClass(NewEmployee).newInstance(); Method 5 Using Object Deserialization. Object Deserialization is nothing but creating an object from its serialized form. // Create Object5 // create a new file with an ObjectOutputStream FileOutputStream out = new FileOutputStream(""); ObjectOutputStream oout = new ObjectOutputStream(out); // write something in the file oout.writeObject(object3); oout.flush(); // create an ObjectInputStream for the file we created before ObjectInputStream ois = new ObjectInputStream(new FileInputStream("crunchify.txt")); Employee object5 = (Employee) ois.readObject(); A: Yes, you can create objects using reflection. For example, String.class.newInstance() will give you a new empty String object. A: There are five different ways to create an object in Java, 1. Using new keyword → constructor get called Employee emp1 = new Employee(); 2. Using newInstance() method of Class → constructor get called Employee emp2 = (Employee) Class.forName("org.programming.mitra.exercises.Employee") .newInstance(); It can also be written as Employee emp2 = Employee.class.newInstance(); 3. Using newInstance() method of Constructor → constructor get called Constructor<Employee> constructor = Employee.class.getConstructor(); Employee emp3 = constructor.newInstance(); 4. Using clone() method → no constructor call Employee emp4 = (Employee) emp3.clone(); 5. Using deserialization → no constructor call ObjectInputStream in = new ObjectInputStream(new FileInputStream("data.obj")); Employee emp5 = (Employee) in.readObject(); First three methods new keyword and both newInstance() include a constructor call but later two clone and deserialization methods create objects without calling the constructor. All above methods have different bytecode associated with them, Read Different ways to create objects in Java with Example for examples and more detailed description e.g. bytecode conversion of all these methods. However one can argue that creating an array or string object is also a way of creating the object but these things are more specific to some classes only and handled directly by JVM, while we can create an object of any class by using these 5 ways. A: From an API user perspective, another alternative to constructors are static factory methods (like BigInteger.valueOf()), though for the API author (and technically "for real") the objects are still created using a constructor. A: We can also create the object in this way:- String s ="Hello"; Nobody has discuss it. 
A: We can create objects in 5 ways: * *by the new operator *by reflection (e.g. Class.forName() followed by Class.newInstance()) *by a factory method *by cloning *by the reflection API A: There is also ClassLoader.loadClass(string), but this is not often used. And if you want to be a total lawyer about it, arrays are technically objects because of an array's .length property, so initializing an array creates an object. A: Depends on exactly what you mean by create, but some other ones are: * *Clone method *Deserialization *Reflection (Class.newInstance()) *Reflection (Constructor object)
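To tie the mechanisms listed in these answers together, below is a minimal, self-contained Java sketch that exercises five of them against one tiny class. The Employee class, its field and the printed values are invented purely for illustration; this is a sketch of the general techniques, not code taken from any answer above.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.lang.reflect.Constructor;

public class CreationWays {

    // Tiny illustrative class: Serializable for the deserialization case,
    // Cloneable for the clone() case.
    static class Employee implements Serializable, Cloneable {
        private static final long serialVersionUID = 1L;
        int id = 42;

        @Override
        public Employee clone() throws CloneNotSupportedException {
            return (Employee) super.clone(); // field-by-field copy, no constructor call
        }
    }

    public static void main(String[] args) throws Exception {
        // 1. new keyword: the constructor runs.
        Employee e1 = new Employee();

        // 2. Class.newInstance(): the no-arg constructor runs
        //    (deprecated in newer JDKs in favour of option 3).
        Employee e2 = Employee.class.newInstance();

        // 3. Constructor.newInstance(): lets you pick a specific constructor.
        Constructor<Employee> ctor = Employee.class.getDeclaredConstructor();
        Employee e3 = ctor.newInstance();

        // 4. clone(): copies an existing object; Employee's constructor is not called.
        Employee e4 = e1.clone();

        // 5. Deserialization: rebuilds the object from bytes; Employee's constructor
        //    is not called (only the no-arg constructor of the first non-serializable
        //    superclass runs, here Object's).
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(e1);
        }
        Employee e5;
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            e5 = (Employee) in.readObject();
        }

        System.out.println(e1.id + " " + e2.id + " " + e3.id + " " + e4.id + " " + e5.id);
    }
}

As several answers note, only the first three routes run Employee's own constructor; clone() and deserialization bypass it.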
{ "language": "en", "url": "https://stackoverflow.com/questions/95419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "184" }
Q: Unit Testing in .NET: How to Mock Entity Data Provider Does anyone know whether there's a way to mock Entity Data Provider so Unit Tests don't hit the live data? I found this blog but it seems the project hasn't been released: http://www.chrisdoesdev.com/index.php/archives/62 Thanks A: Mattwar has a great article on his blog about mocking up LinqtoSql with reflection -- perhaps you can use that as a starting point? A: I would be interested to know this myself. I don't think that it's possible, because one of the things that got the Agile/Alt.Net community in a tizzy about the Entity Framework was this very problem of the lack of persistence ignorance.
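The answers here are specific to the ADO.NET Entity Framework, but the underlying technique - keeping unit tests away from the live data by hiding the data provider behind an interface you control and substituting an in-memory fake - is language-agnostic. Purely as a hedged illustration of that pattern (the Widget and WidgetRepository names are invented, and none of this is the Entity Framework API), here is what it can look like in Java:

import java.util.HashMap;
import java.util.Map;

// The production code depends on this interface, not on a concrete data provider.
interface WidgetRepository {
    void save(Widget w);
    Widget findById(int id);
}

class Widget {
    final int id;
    final String name;
    Widget(int id, String name) { this.id = id; this.name = name; }
}

// Business logic under test: no database anywhere in sight.
class WidgetService {
    private final WidgetRepository repo;
    WidgetService(WidgetRepository repo) { this.repo = repo; }

    String describe(int id) {
        Widget w = repo.findById(id);
        return w == null ? "unknown" : "Widget: " + w.name;
    }
}

// Hand-rolled in-memory fake used only by the unit tests.
class InMemoryWidgetRepository implements WidgetRepository {
    private final Map<Integer, Widget> store = new HashMap<>();
    public void save(Widget w) { store.put(w.id, w); }
    public Widget findById(int id) { return store.get(id); }
}

public class WidgetServiceTestDrive {
    public static void main(String[] args) {
        WidgetRepository fake = new InMemoryWidgetRepository();
        fake.save(new Widget(1, "sprocket"));

        WidgetService service = new WidgetService(fake);
        System.out.println(service.describe(1));   // Widget: sprocket
        System.out.println(service.describe(99));  // unknown
    }
}

In a real project the production implementation of the interface would delegate to the actual data provider, and only the tests would use the in-memory fake.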
{ "language": "en", "url": "https://stackoverflow.com/questions/95426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I open "Find Files" dialog from command-line in Windows XP to search a specific folder? I'd like to create a hotkey to search for files under a specific folder in Windows XP; I'm using AutoHotkey to create this shortcut. Problem is that I need to know a command-line statement to run in order to open the standard Windows "Find Files/Folders" dialog. I've googled for a while and haven't found any page indicating how to do this. I'm assuming that if I know the command-line statement for bringing up this prompt, it will allow me to pass in a parameter for what folder I want to be searching under. I know you can do this by right-clicking on a folder in XP, so I assume there's some way I could do it on the command line...? A: Use Locate32 This isn't the exact answer to your question, but you could use Locate32 instead of the Windows search facility. It has a whole suite of command-line options plus has the huge benefit of being an indexed search, which means the results will display instantaneously. It's a tool I can't be without on Windows. This is the command you would issue to search for all index.php files in D:\home: locate32.exe -r -p D:\home index.php where the -r switch makes Locate32 search immediately without user intervention (without it, the interface would launch and the fields would be populated, but you'd have to hit Enter to proceed with the search) and -p D:\home is the path to search. Using AutoHotKey, it's simple to assign the above command to a keyboard shortcut. There is also a fully command-line based version of Locate32 in the same package called locate.exe. This uses the same indexes as Locate32, but because it is completely CLI-based, can be used by scripting languages and other tools to take advantage of the blistering search performance it offers. A: F3 or Win+F is a hotkey that will launch Find Files. If you then do a search using the criteria you want, you can save the search using the File menu. This creates a .FND file. The FND file can be launched from the command line or from a hot key created with autohotkey. It is possible to edit the .FND file (binary) and change what it is searching for, but I would avoid doing that unless it's the only way you can accomplish what you want. I tried it and it worked fine. A: just execute this line! (WinKey+R, CmdPrompt, Shortcut, ShellExecute, WinExec, etc) search-ms:query=New%20Folder& Find all shortcuts in your desktop search-ms:query=*.lnk&crumb=folder:%userprofile%\Desktop& Find the text "exe" in the folder "C:\Program Files" search-ms:query=exe&crumb=location:C:\Program Files& Other exemples search-ms:query=microsoft& search-ms:query=vacation&subquery=mydepartment.search-ms& search-ms:query=seattle&crumb=kind:pics& search-ms:query=seattle&crumb=folder:C:\MyFolder& reference here http://msdn.microsoft.com/en-us/library/ff684385.aspx A: There is no way from command line to get Explorer to show the Search Files pane. But you can get over it with some VBScript. Try this 'ExplorerFind.vbs Dim objShell Set objShell = WScript.CreateObject("Shell.Application") objShell.FindFiles And compile it with cscript /nologo ExplorerFind.vbs A: from http://www.pcreview.co.uk/forums/thread-1468270.php @echo off echo CreateObject("Shell.Application").FindFiles >%temp%\myff.vbs cscript.exe //Nologo %temp%\myff.vbs del %temp%\myff.vbs A: Try "Launchy". For windows and linux. Awesome util. A: If you need just a hotkey then use Win+f. 
A: It's a little unclear whether the end result you want is the open "find" dialog, or if you're just looking for a command-line way to search an arbitrary directory. If the latter, there's FINDSTR (assuming you want to search the content of files and not their names): What are good grep tools for Windows? A: In addition to Ben Dunlap's answer: you could also use FINDSTR on the output of the DIR command (for instance in a FOR loop). This would search for filenames, not in files. A: Based on the answer by Vitim.us, from cmd all you need is explorer.exe "search-ms:query=*.exe&crumb=location:C:\Program Files&" Change the location and query as needed. A: Why don't you try bashing F3? :)
{ "language": "en", "url": "https://stackoverflow.com/questions/95432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Sniffing traffic between a Flex app and ColdFusion backend What is a good strategy for sniffing/tracing function calls between a Flex application and a ColdFusion-based backend running on ColdFusion server? I understand they use AMF protocol. I'm used to using Fiddler to sniff transactions between HTTP clients and servers, and it works great as long as you're using plain text or XML HTTP requests and responses (including those over SSL) but it isn't much help for binary protocols like AMF over HTTP. In my case, I do have access to the source code for the client and server, but I'm looking for an easy way to passively sniff traffic in any Flex + ColdFusion situation, without having to tweak anything on the server. A: Wireshark: sniffing the glue that holds the internet together http://www.wireshark.org/ A: http://www.charlesproxy.com/ Although not free, will decode AMF binary data and allows to trace SSL connections too. A: ServiceCapture is another option. It decodes the binary AMF for you, if I remember correctly. http://kevinlangdon.com/serviceCapture/ A: The simple and poor man's trick. Create one cfc to log calls to the different cfc's and pages as you need. Dump it all to a table. Filter and sort at will. I have done this in the past and it has worked great. It's like putting in little fish hooks anywhere you want to know. This would likely give you the most application relevant data. If you need an example let me know. A: Firebug with the Flashbug plugin will show all decoded AMF messages both to and from a Flash app. Works well over HTTPS too. https://addons.mozilla.org/en-us/firefox/addon/amf-explorer/. A: ditto for wireshark (the artist formerly known as Ethereal). you can sniff at every protocol layer, and stitch together traffic streams.
{ "language": "en", "url": "https://stackoverflow.com/questions/95442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: What's the canonical way to store arbitrary (possibly marked up) text in SQL? What do wikis/stackoverflow/etc. do when it comes to storing text? Is the text broken at newlines? Is it broken into fixed-length chunks? How do you best store arbitrarily long chunks of text? A: I guess if you need to offer the ability to store large chunks of text and you don't mind not being able to look into their content too much when querying, you can use CLobs. A: nvarchar(max) ftw. because over complicating simple things is bad, mmkay? A: This all depends on the RDBMS that you are using as well as the types of text that you are going to store. If the text is formatted into sizable chunks of data that mean something in and of themselves, like, say header/body, then you might want to break the data up into columns of these types. It may take multiple tables to use this method depending on the content that you are dealing with. I don't know how other RDBMS's handle it, but I know that that it's not a good idea to have more than one open ended column in each table (text or varchar(max)). So you will want to make sure that only one column has unlimited characters. A: Regarding PostgreSQL - use type TEXT or BYTEA. If you need to read random chunks you may consider large objects. A: If you need to worry about keeping things like formatting strings, quotes, and other "cruft" in the text, as code would likely have, then the special characters need to be completely escaped first - otherwise on submission the db, they might end up causing an invalid command to be issued. Most scripting languages have tools to do this built-in natively. A: I suspect StackOverflow is storing text in markdown format in arbitrarily-sized 'text' column. Maybe as UTF8 (but it might be UTF16 or something. I'm guessing it's SQL Server, which I don't know much about). As a general rule you want to store stuff in your database in the 'rawest' form possible. That is, do all your decoding, and possibly cleaning, but don't do anything else with it (for example, if it's Markdown, don't encode it to HTML, leave it in its original 'raw' format) A: I guess it depends on where you want to store the text, if you need things like transactions etc. Databases like SQL Server have a type that can store long text fields. In SQL Server 2005 this would primarily be nvarchar(max) for long unicode text strings. By using a database you can benefit from transactions and easy backup/restore assuming you are using the database for other things like StackOverflow.com does. The alternative is to store text in files on disk. This may be fairly simple to implement and can work in environments where a database is not available or overkill. Regards the format of the text that is stored in a database or file, it is probably very close to the input. If it's HTML then you would just push it through a function that would correctly escape it. Something to remember is that you probably want to be using unicode or UTF-8 from creation to storage and vice-versa. This will allow you to support additional languages. Any problem with this encoding mechanism will corrupt your text. Historically people may have defaulted to ASCII based on the assumption they were saving disk space etc. A: For SQL Server: Use a varchar(max) to store. I think the upper limit is 2 GB. Don't try to escape the text yourself. Pass the text through a parameterizing structure that will do the escapes properly for you. 
In .Net you'd add a parameter to a SqlCommand, or just use LinqToSQL (which then manages the SqlCommand for you).
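The parameterization advice above is the part that generalizes: let the data-access API handle quoting so the raw text, markup and all, is stored untouched. The answers are .NET-flavoured, so purely as a hedged sketch of the same idea on the JVM (the table, column and in-memory H2 URL are invented; it needs the H2 driver on the classpath, or substitute whatever JDBC URL you actually use), a parameterized insert looks roughly like this:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class StoreRawText {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection string; substitute your own database and driver.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo")) {
            try (PreparedStatement ddl = conn.prepareStatement(
                    "CREATE TABLE posts (id INT PRIMARY KEY, body CLOB)")) {
                ddl.execute();
            }

            String rawMarkdown = "He said \"hello\" -- *markdown*, quotes & all";

            // The placeholder keeps escaping out of our hands entirely.
            try (PreparedStatement insert = conn.prepareStatement(
                    "INSERT INTO posts (id, body) VALUES (?, ?)")) {
                insert.setInt(1, 1);
                insert.setString(2, rawMarkdown); // stored in its raw form
                insert.executeUpdate();
            }
        }
    }
}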
{ "language": "en", "url": "https://stackoverflow.com/questions/95459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Messaging solution for a serial hardware device I have a serial hardware device that I'd like to share with multiple applications, that may reside on different machines within or spanning multiple networks. A key requirement is that the system must support bi-directional communication, such that clients/serial device can exist behind firewalls and/or on different networks and still talk to each other (send and receive) through a central server. Another requirement of the system is that the clients must be able to determine if the gateway/serial device is offline/online. This serial device is capable of receiving and sending packets to a wireless network. The software that communicates with the serial device is written in Java, and I'd like to keep it a 100% Java solution, if possible. I am currently looking at XMPP, using Jive software's Openfire server and Smack API. With this solution, packets coming off the serial device are delivered to clients via XMPP. Similarly, any client application may send packets to the serial device, via the Smack API. Packets are just byte arrays, and limited is size to around 100 bytes, so they can be converted to hex strings and sent as text in the body of a message. The system should be tolerant of the clients/serial device going offline, meaning they will automatically reconnect when they are available again, but packets will be discarded if the client is offline. The packets must be sent and received in near real-time, so offline delivery is not desired. Security should be provided by messaging system and provided client API. I'd like to hear of other possible solutions. I thought of using JMS but it seems a bit too heavyweight and I'm not sure it will support the requirement of knowing if clients and/or the gateway/serial device is offline. A: You really need to provide a bit more detail... do the clients need guaranteed delivery? What about offline delivery? Is this part of a larger system? Do you need encryption? Security? If you want the smallest footprint possible, then should transmit data using SocketServer, Sockets, and serialization. But then you lose all of the advantages of the 3rd party solutions you mentioned, which typically include reliability, delivery guarantees, security, management, etc. I would personally use JMS, but that's because I'm familiar with it. There are a number of stand-alone servers that can be deployed out-of-the-box with virtually no configuration. They all provide for guaranteed delivery, some security, encryption, and a number of other easy-to-use features. Coding a JMS publisher or subscriber is pretty easy. Update: If you want the most ease in coding, then I would look at the third-party solutions. Looking at Smack/XMPP, the API seems to be a little easier than a JMS for the functionality you asked for. You still have to setup/configure a server, etc. The Smack API also has a lot of extra baggage that you don't need either, but its "concepts" are a little more intuitive since its all chat/IM concepts. I would still look at OpenJMS or ActiveMQ. I think knowing JMS will be more valuable in the future as compared to knowing XMPP. Take a look at their Getting Started documentation or the Sun Tutorial to see how much coding is involved. In JMS parlance, you will want an administered "Topic" and a "Queue" where the Serial Port App will receive and send messages respectively. All of your clients will open a subscription to the Topic and send their outbound messages to the Queue. 
When you send messages, their delivery mode should be non-persistent. A: Jini might fit the job. It works really well in distributed environments where multicast is available, but it also works in unicast, and is quite fast. Not only does it provide remote services, but also remote events, and distributed transactions if you need them. A downside is that it only works with Java. Where I work, Jini is used in an infrastructure with more than 1000 machines, with each machine providing remote services used to access the devices connected to the machine's serial ports. A: I ended up using XMPP via the Smack API. What led me to this decision was its native support for presence (is the client online/offline) and robust connection handling (it automatically reconnects if the underlying connection breaks). Another benefit of XMPP is that it's compatible with Google Talk, so I don't need to set up a server. Thanks for the suggestions. In case anyone is interested, I have released the code on Google Code: http://code.google.com/p/xbee-xmpp/
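Since the approach the asker settled on sends each packet as a hex string in the XMPP message body, the byte-array-to-text conversion is worth making concrete. The helper below is a minimal, dependency-free sketch of just that detail; it deliberately shows no Smack calls, since the exact connection and messaging API differs between Smack versions.

public final class HexCodec {

    private static final char[] DIGITS = "0123456789ABCDEF".toCharArray();

    private HexCodec() {}

    // Encode a packet (around 100 bytes in this application) as a hex string.
    public static String encode(byte[] packet) {
        StringBuilder sb = new StringBuilder(packet.length * 2);
        for (byte b : packet) {
            sb.append(DIGITS[(b >> 4) & 0x0F]).append(DIGITS[b & 0x0F]);
        }
        return sb.toString();
    }

    // Decode the message body back into the original byte array.
    public static byte[] decode(String hex) {
        if (hex.length() % 2 != 0) {
            throw new IllegalArgumentException("hex string must have even length");
        }
        byte[] packet = new byte[hex.length() / 2];
        for (int i = 0; i < packet.length; i++) {
            int hi = Character.digit(hex.charAt(2 * i), 16);
            int lo = Character.digit(hex.charAt(2 * i + 1), 16);
            if (hi < 0 || lo < 0) {
                throw new IllegalArgumentException("not a hex digit at index " + 2 * i);
            }
            packet[i] = (byte) ((hi << 4) | lo);
        }
        return packet;
    }

    public static void main(String[] args) {
        byte[] packet = {0x7E, 0x00, 0x0A, (byte) 0xFF};
        String body = encode(packet);
        System.out.println(body);                      // 7E000AFF
        System.out.println(decode(body).length == 4);  // true
    }
}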
{ "language": "en", "url": "https://stackoverflow.com/questions/95485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I convert a date/time to epoch time (unix time/seconds since 1970) in Perl? Given a date/time as an array of (year, month, day, hour, minute, second), how would you convert it to epoch time, i.e., the number of seconds since 1970-01-01 00:00:00 GMT? Bonus question: If given the date/time as a string, how would you first parse it into the (y,m,d,h,m,s) array? A: $ENV{TZ}="GMT"; POSIX::tzset(); $time = POSIX::mktime($s,$m,$h,$d,$mo-1,$y-1900); A: Get Date::Manip from CPAN, then: use Date::Manip; $string = '18-Sep-2008 20:09'; # or a wide range of other date formats $unix_time = UnixDate( ParseDate($string), "%s" ); edit: Date::Manip is big and slow, but very flexible in parsing, and it's pure perl. Use it if you're in a hurry when you're writing code, and you know you won't be in a hurry when you're running it. e.g. Use it to parse command line options once on start-up, but don't use it parsing large amounts of data on a busy web server. See the authors comments. (Thanks to the author of the first comment below) A: If you're using the DateTime module, you can call the epoch() method on a DateTime object, since that's what you think of as unix time. Using DateTimes allows you to convert fairly easily from epoch, to date objects. Alternativly, localtime and gmtime will convert an epoch into an array containing day month and year, and timelocal and timegm from the Time::Local module will do the opposite, converting an array of time elements (seconds, minutes, ..., days, months etc.) into an epoch. A: This is the simplest way to get unix time: use Time::Local; timelocal($second,$minute,$hour,$day,$month-1,$year); Note the reverse order of the arguments and that January is month 0. For many more options, see the DateTime module from CPAN. As for parsing, see the Date::Parse module from CPAN. If you really need to get fancy with date parsing, the Date::Manip may be helpful, though its own documentation warns you away from it since it carries a lot of baggage (it knows things like common business holidays, for example) and other solutions are much faster. If you happen to know something about the format of the date/times you'll be parsing then a simple regular expression may suffice but you're probably better off using an appropriate CPAN module. For example, if you know the dates will always be in YMDHMS order, use the CPAN module DateTime::Format::ISO8601. For my own reference, if nothing else, below is a function I use for an application where I know the dates will always be in YMDHMS order with all or part of the "HMS" part optional. It accepts any delimiters (eg, "2009-02-15" or "2009.02.15"). It returns the corresponding unix time (seconds since 1970-01-01 00:00:00 GMT) or -1 if it couldn't parse it (which means you better be sure you'll never legitimately need to parse the date 1969-12-31 23:59:59). It also presumes two-digit years XX up to "69" refer to "20XX", otherwise "19XX" (eg, "50-02-15" means 2050-02-15 but "75-02-15" means 1975-02-15). use Time::Local; sub parsedate { my($s) = @_; my($year, $month, $day, $hour, $minute, $second); if($s =~ m{^\s*(\d{1,4})\W*0*(\d{1,2})\W*0*(\d{1,2})\W*0* (\d{0,2})\W*0*(\d{0,2})\W*0*(\d{0,2})}x) { $year = $1; $month = $2; $day = $3; $hour = $4; $minute = $5; $second = $6; $hour |= 0; $minute |= 0; $second |= 0; # defaults. $year = ($year<100 ? ($year<70 ? 
2000+$year : 1900+$year) : $year); return timelocal($second,$minute,$hour,$day,$month-1,$year); } return -1; } A: My favorite datetime parser is DateTime::Format::ISO8601 Once you've got that working, you'll have a DateTime object, easily convertable to epoch seconds with epoch() A: Possibly one of the better examples of 'There's More Than One Way To Do It", with or without the help of CPAN. If you have control over what you get passed as a 'date/time', I'd suggest going the DateTime route, either by using a specific Date::Time::Format subclass, or using DateTime::Format::Strptime if there isn't one supporting your wacky date format (see the datetime FAQ for more details). In general, Date::Time is the way to go if you want to do anything serious with the result: few classes on CPAN are quite as anal-retentive and obsessively accurate. If you're expecting weird freeform stuff, throw it at Date::Parse's str2time() method, which'll get you a seconds-since-epoch value you can then have your wicked way with, without the overhead of Date::Manip. A: There are many Date manipulation modules on CPAN. My particular favourite is DateTime and you can use the strptime modules to parse dates in arbitrary formats. There are also many DateTime::Format modules on CPAN for handling specialised date formats, but strptime is the most generic. A: For further reference, a one liner that can be applied in, for example, !#/bin/sh scripts. EPOCH="`perl -e 'use Time::Local; print timelocal('${SEC}','${MIN}','${HOUR}','${DAY}','${MONTH}','${YEAR}'),\"\n\";'`" Just remember to avoid octal values! A: To parse a date, look at Date::Parse in CPAN. A: I know this is an old question, but thought I would offer another answer. Time::Piece is core as of Perl 5.9.5 This allows parsing of time in arbitrary formats via the strptime method. e.g.: my $t = Time::Piece->strptime("Sunday 3rd Nov, 1943", "%A %drd %b, %Y"); The useful part is - because it's an overloaded object, you can use it for numeric comparisons. e.g. if ( $t < time() ) { #do something } Or if you access it in a string context: print $t,"\n"; You get: Wed Nov 3 00:00:00 1943 There's a bunch of accessor methods that allow for some assorted other useful time based transforms. https://metacpan.org/pod/Time::Piece A: I'm using a very old O/S that I don't dare install libraries onto, so here's what I use; %MonthMatrix=("Jan",0,"Feb",31,"Mar",59,"Apr",90,"May",120,"Jun",151,"Jul",181,"Aug",212,"Sep",243,"Oct",273,"Nov",304,"Dec",334); $LeapYearCount=int($YearFourDigits/4); $EpochDayNumber=$MonthMatrix{$MonthThreeLetters}; if ($LeapYearCount==($YearFourDigits/4)) { if ($EpochDayNumber<32) { $EpochDayNumber--; }} $EpochDayNumber=($YearFourDigits-1970)*365+$LeapYearCount+$EpochDayNumber+$DayAsNumber-493; $TimeOfDaySeconds=($HourAsNumber*3600)+($MinutesAsNumber*60)+$SecondsAsNumber; $ActualEpochTime=($EpochDayNumber*86400)+$TimeOfDaySeconds; The input variables are; $MonthThreeLetters $DayAsNumber $YearFourDigits $HourAsNumber $MinutesAsNumber $SecondsAsNumber ...which should be self-explanatory. The input variables, of course, assume GMT (UTC). The output variable is "$ActualEpochTime". (Often, I only need $EpochDayNumber, so that's why that otherwise superfluous variable sits on its own.) I've used this formula for years with nary an error. 
A: Here is a quick example that uses the Perl module Time::Local use Time::Local; $number_of_seconds = timelocal(0,24,2, 26,3,2022); The arguments timelocal needs are: second, minute, hour, day, month, year A: If you're just looking for a command-line utility (i.e., not something that will get called from other functions), try out this script. It assumes the existence of GNU date (present on pretty much any Linux system): #! /usr/bin/perl -w use strict; $_ = (join ' ', @ARGV); $_ ||= <STDIN>; chomp; if (/^[\d.]+$/) { print scalar localtime $_; print "\n"; } else { exec "date -d '$_' +%s"; } Here's how it works: $ Time now 1221763842 $ Time yesterday 1221677444 $ Time 1221677444 Wed Sep 17 11:50:44 2008 $ Time '12:30pm jan 4 1987' 536790600 $ Time '9am 8 weeks ago' 1216915200 A: A filter converting any dates in various ISO-related formats (and who'd use anything else after reading the writings of the Mighty Kuhn?) on standard input to seconds-since-the-epoch time on standard output might serve to illustrate both parts: martind@whitewater:~$ cat `which isoToEpoch` #!/usr/bin/perl -w use strict; use Time::Piece; # sudo apt-get install libtime-piece-perl while (<>) { # date --iso=s: # 2007-02-15T18:25:42-0800 # Other matched formats: # 2007-02-15 13:50:29 (UTC-0800) # 2007-02-15 13:50:29 (UTC-08:00) s/(\d{4}-\d{2}-\d{2}([T ])\d{2}:\d{2}:\d{2})(?:\.\d+)? ?(?:\(UTC)?([+\-]\d{2})?:?00\)?/Time::Piece->strptime ($1, "%Y-%m-%d$2%H:%M:%S")->epoch - (defined ($3) ? $3 * 3600 : 0)/eg; print; } martind@whitewater:~$
{ "language": "en", "url": "https://stackoverflow.com/questions/95492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52" }
Q: Can this macro be converted to a function? While refactoring code and ridding myself of all those #defines that we're now taught to hate, I came across this beauty used to calculate the number of elements in a structure: #define STRUCTSIZE(s) (sizeof(s) / sizeof(*s)) Very useful as it is but can it be converted into an inline function or template? OK, ARRAYSIZE would be a better name but this is legacy code (no idea where it came from, it's at least 15 years old) so I pasted it 'as is'. A: KTC's solution is clean but it can't be used at compile-time and it is dependent on compiler optimization to prevent code-bloat and function call overhead. One can calculate array size with a compile-time-only metafunction with zero runtime cost. BCS was on the right track but that solution is incorrect. Here's my solution: // asize.hpp template < typename T > struct asize; // no implementation for all types... template < typename T, size_t N > struct asize< T[N] > { // ...except arrays static const size_t val = N; }; template< size_t N > struct count_type { char val[N]; }; template< typename T, size_t N > count_type< N > count( const T (&)[N] ) {} #define ASIZE( a ) ( sizeof( count( a ).val ) ) #define ASIZET( A ) ( asize< A >::val ) with test code (using Boost.StaticAssert to demonstrate compile-time-only usage): // asize_test.cpp #include <boost/static_assert.hpp> #include "asize.hpp" #define OLD_ASIZE( a ) ( sizeof( a ) / sizeof( *a ) ) typedef char C; typedef struct { int i; double d; } S; typedef C A[42]; typedef S B[42]; typedef C * PA; typedef S * PB; int main() { A a; B b; PA pa; PB pb; BOOST_STATIC_ASSERT( ASIZET( A ) == 42 ); BOOST_STATIC_ASSERT( ASIZET( B ) == 42 ); BOOST_STATIC_ASSERT( ASIZET( A ) == OLD_ASIZE( a ) ); BOOST_STATIC_ASSERT( ASIZET( B ) == OLD_ASIZE( b ) ); BOOST_STATIC_ASSERT( ASIZE( a ) == OLD_ASIZE( a ) ); BOOST_STATIC_ASSERT( ASIZE( b ) == OLD_ASIZE( b ) ); BOOST_STATIC_ASSERT( OLD_ASIZE( pa ) != 42 ); // logic error: pointer accepted BOOST_STATIC_ASSERT( OLD_ASIZE( pb ) != 42 ); // logic error: pointer accepted // BOOST_STATIC_ASSERT( ASIZE( pa ) != 42 ); // compile error: pointer rejected // BOOST_STATIC_ASSERT( ASIZE( pb ) != 42 ); // compile error: pointer rejected return 0; } This solution rejects non-array types at compile time so it will not get confused by pointers as the macro version does. A: None has so far proposed a portable way to get the size of an array when you only have an instance of an array and not its type. (typeof and _countof is not portable so can't be used.) I'd do it the following way: template<int n> struct char_array_wrapper{ char result[n]; }; template<typename T, int s> char_array_wrapper<s> the_type_of_the_variable_is_not_an_array(const T (&array)[s]){ } #define ARRAYSIZE_OF_VAR(v) sizeof(the_type_of_the_variable_is_not_an_array(v).result) #include <iostream> using namespace std; int main(){ int foo[42]; int*bar; cout<<ARRAYSIZE_OF_VAR(foo)<<endl; // cout<<ARRAYSIZE_OF_VAR(bar)<<endl; fails } * *It works when only the value is around. *It is portable and only uses std-C++. *It fails with a descriptiv error message. *It does not evaluate the value. (I can't think up of a situation where this would be a problem because array type can't be returned by a function, but better be safe than sorry.) *It returns the size as compiletime constant. I wrapped the construct into a macro to have some decent syntax. If you want to get rid of it your only option is to do the substitution manually. 
A: The macro has a very misleading name - the expression in the macro will return the number of elements in an array if an array's name is passed in as the macro parameter. For other types you'll get something more or less meaningless if the type is a pointer or you'll get a syntax error. Usually that macro is named something like NUM_ELEMENTS() or something to indicate its true usefulness. It's not possible to replace the macro with a function in C, but in C++ a template can be used. The version I use is based on code in Microsoft's winnt.h header (please let me know if posting this snippet goes beyond fair use): // // Return the number of elements in a statically sized array. // DWORD Buffer[100]; // RTL_NUMBER_OF(Buffer) == 100 // This is also popularly known as: NUMBER_OF, ARRSIZE, _countof, NELEM, etc. // #define RTL_NUMBER_OF_V1(A) (sizeof(A)/sizeof((A)[0])) #if defined(__cplusplus) && \ !defined(MIDL_PASS) && \ !defined(RC_INVOKED) && \ !defined(_PREFAST_) && \ (_MSC_FULL_VER >= 13009466) && \ !defined(SORTPP_PASS) // // RtlpNumberOf is a function that takes a reference to an array of N Ts. // // typedef T array_of_T[N]; // typedef array_of_T &reference_to_array_of_T; // // RtlpNumberOf returns a pointer to an array of N chars. // We could return a reference instead of a pointer but older compilers do not accept that. // // typedef char array_of_char[N]; // typedef array_of_char *pointer_to_array_of_char; // // sizeof(array_of_char) == N // sizeof(*pointer_to_array_of_char) == N // // pointer_to_array_of_char RtlpNumberOf(reference_to_array_of_T); // // We never even call RtlpNumberOf, we just take the size of dereferencing its return type. // We do not even implement RtlpNumberOf, we just decare it. // // Attempts to pass pointers instead of arrays to this macro result in compile time errors. // That is the point. // extern "C++" // templates cannot be declared to have 'C' linkage template <typename T, size_t N> char (*RtlpNumberOf( UNALIGNED T (&)[N] ))[N]; #define RTL_NUMBER_OF_V2(A) (sizeof(*RtlpNumberOf(A))) // // This does not work with: // // void Foo() // { // struct { int x; } y[2]; // RTL_NUMBER_OF_V2(y); // illegal use of anonymous local type in template instantiation // } // // You must instead do: // // struct Foo1 { int x; }; // // void Foo() // { // Foo1 y[2]; // RTL_NUMBER_OF_V2(y); // ok // } // // OR // // void Foo() // { // struct { int x; } y[2]; // RTL_NUMBER_OF_V1(y); // ok // } // // OR // // void Foo() // { // struct { int x; } y[2]; // _ARRAYSIZE(y); // ok // } // #else #define RTL_NUMBER_OF_V2(A) RTL_NUMBER_OF_V1(A) #endif #ifdef ENABLE_RTL_NUMBER_OF_V2 #define RTL_NUMBER_OF(A) RTL_NUMBER_OF_V2(A) #else #define RTL_NUMBER_OF(A) RTL_NUMBER_OF_V1(A) #endif // // ARRAYSIZE is more readable version of RTL_NUMBER_OF_V2, and uses // it regardless of ENABLE_RTL_NUMBER_OF_V2 // // _ARRAYSIZE is a version useful for anonymous types // #define ARRAYSIZE(A) RTL_NUMBER_OF_V2(A) #define _ARRAYSIZE(A) RTL_NUMBER_OF_V1(A) Also, Matthew Wilson's book "Imperfect C++" has a nice treatment of what's going on here (Section 14.3 - page 211-213 - Arrays and Pointers - dimensionof()). A: As been stated, the code actually work out the number of elements in an array, not struct. I would just write out the sizeof() division explicitly when I want it. If I were to make it a function, I would want to make it clear in its definition that it's expecting an array. 
template<typename T,int SIZE> inline size_t array_size(const T (&array)[SIZE]) { return SIZE; } The above is similar to xtofl's, except it guards against passing a pointer to it (that says point to a dynamically allocated array) and getting the wrong answer by mistake. EDIT: Simplified as per JohnMcG. EDIT: inline. Unfortunately, the above does not provide a compile time answer (even if the compiler does inline & optimize it to be a constant under the hood), so cannot be used as a compile time constant expression. i.e. It cannot be used as size to declare a static array. Under C++0x, this problem go away if one replaces the keyword inline by constexpr (constexpr is inline implicitly). constexpr size_t array_size(const T (&array)[SIZE]) jwfearn's solution work for compile time, but involve having a typedef which effectively "saved" the array size in the declaration of a new name. The array size is then worked out by initialising a constant via that new name. In such case, one may as well simply save the array size into a constant from the start. Martin York's posted solution also work under compile time, but involve using the non-standard typeof() operator. The work around to that is either wait for C++0x and use decltype (by which time one wouldn't actually need it for this problem as we'll have constexpr). Another alternative is to use Boost.Typeof, in which case we'll end up with #include <boost/typeof/typeof.hpp> template<typename T> struct ArraySize { private: static T x; public: enum { size = sizeof(T)/sizeof(*x)}; }; template<typename T> struct ArraySize<T*> {}; and is used by writing ArraySize<BOOST_TYPEOF(foo)>::size where foo is the name of an array. A: * *function, no template function, yes *template, I think so (but C++ *templates are not my thing) Edit: From Doug's code template <typename T> uint32_t StructSize() // This might get inlined to a constant at compile time { return sizeof(T)/sizeof(*T); } // or to get it at compile time for shure class StructSize<typename T> { enum { result = sizeof(T)/sizeof(*T) }; } I've been told that the 2nd one doesn't work. OTOH something like it should be workable, I just don't use C++ enough to fix it. A page on C++ (and D) templates for compile time stuff A: Your macro is misnamed, it should be called ARRAYSIZE. It is used to determine the number of elements in an array whos size is fixed at compile time. Here's a way it can work: char foo[ 128 ]; // In reality, you'd have some constant or constant expression as the array size. for( unsigned i = 0; i < STRUCTSIZE( foo ); ++i ) { } It's kind of brittle to use, because you can make this mistake: char* foo = new char[128]; for( unsigned i = 0; i < STRUCTSIZE( foo ); ++i ) { } You will now iterate for i = 0 to < 1 and tear your hair out. A: The type of a template function is inferred automatically, in contrast with that of a template class. You can use it even simpler: template< typename T > size_t structsize( const T& t ) { return sizeof( t ) / sizeof( *t ); } int ints[] = { 1,2,3 }; assert( structsize( ints ) == 3 ); But I do agree it doesn't work for structs: it works for arrays. So I would rather call it Arraysize :) A: Simplfying @KTC's, since we have the size of the array in the template argument: template<typename T, int SIZE> int arraySize(const T(&arr)[SIZE]) { return SIZE; } Disadvantage is you will have a copy of this in your binary for every Typename, Size combination. A: I prefer the enum method suggested by [BCS](in Can this macro be converted to a function?) 
This is because you can use it where the compiler is expecting a compile time constant. The current version of the language does not let you use functions results for compile time consts but I believe this coming in the next version of the compiler: The problem with this method is that it does not generate a compile time error when used with a class that has overloaded the '*' operator (see code below for details). Unfortunately the version supplied by 'BCS' does not quite compile as expected so here is my version: #include <iterator> #include <algorithm> #include <iostream> template<typename T> struct StructSize { private: static T x; public: enum { size = sizeof(T)/sizeof(*x)}; }; template<typename T> struct StructSize<T*> { /* Can only guarantee 1 item (maybe we should even disallow this situation) */ //public: enum { size = 1}; }; struct X { int operator *(); }; int main(int argc,char* argv[]) { int data[] = {1,2,3,4,5,6,7,8}; int copy[ StructSize<typeof(data)>::size]; std::copy(&data[0],&data[StructSize<typeof(data)>::size],&copy[0]); std::copy(&copy[0],&copy[StructSize<typeof(copy)>::size],std::ostream_iterator<int>(std::cout,",")); /* * For extra points we should make the following cause the compiler to generate an error message */ X bad1; X bad2[StructSize<typeof(bad1)>::size]; } A: Yes it can be made a template in C++ template <typename T> size_t getTypeSize() { return sizeof(T)/sizeof(*T); } to use: struct JibbaJabba { int int1; float f; }; int main() { cout << "sizeof JibbaJabba is " << getTypeSize<JibbaJabba>() << std::endl; return 0; } See BCS's post above or below about a cool way to do this with a class at compile time using some light template metaprogramming. A: I don't think that that really does work out the number of elements in a structure. If the structure is packed and you used things smaller than the pointer size (such as char on a 32-bit system) then your results are wrong. Also, if the struct contains a struct you are wrong too! A: xtofl has the right answer for finding an array size. No macro or template should be necessary for finding the size of a struct, since sizeof() should do nicely. I agree the preprocessor is evil, but there are occasions where it is the least evil of the alternatives. A: As JohnMcG's answer, but Disadvantage is you will have a copy of this in your binary for every Typename, Size combination. That's why you'd make it an inline template function. A: Answered in detail here: Array Size determination Part 1 and here: Array Size determination Part 2. A: Windows specific: There is the macro _countof() supplied by the CRT exactly for this purpose. A link to the doc at MSDN A: For C99-style variable-length arrays, it appears that the pure macro approach (sizeof(arr) / sizeof(arr[0])) is the only one that will work.
{ "language": "en", "url": "https://stackoverflow.com/questions/95500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How do I use JUnitPerf with JWebUnit and JUnit 4? I have a series of functional tests against a web application that correctly run, but each require the class level setup and teardown provided with the @BeforeClass and @AfterClass annotations, and hence require JUnit 4.0 or above. Now I want to perform load testing using a small number of these functional tests, which simulate a large number of users requesting the related page of the web application. In order for each user to have their own "simulated browser" in JWebUnit, I need to use a TestFactory in JUnitPerf to instantiate the class under test, but since JUnit 4 tests are annotated with @Test instead of being derived from TestCase, I'm getting a TestFactory must be constructed with a TestCase class exception. Is anyone successfully using JUnitPerf and its TestFactory with JUnit 4? And what is the secret sauce that lets it all work? A: You need a JUnit4 aware TestFactory. I've included one below. import junit.framework.JUnit4TestAdapter; import junit.framework.TestCase; import junit.framework.TestSuite; import com.clarkware.junitperf.TestFactory; class JUnit4TestFactory extends TestFactory { static class DummyTestCase extends TestCase { public void test() { } } private Class<?> junit4TestClass; public JUnit4TestFactory(Class<?> testClass) { super(DummyTestCase.class); this.junit4TestClass = testClass; } @Override protected TestSuite makeTestSuite() { JUnit4TestAdapter unit4TestAdapter = new JUnit4TestAdapter(this.junit4TestClass); TestSuite testSuite = new TestSuite("JUnit4TestFactory"); testSuite.addTest(unit4TestAdapter); return testSuite; } }
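For completeness, here is a rough sketch of how a factory like the one above is typically plugged into JUnitPerf. It assumes the stock LoadTest(Test, int users) constructor from com.clarkware.junitperf, and SearchPageTest is a made-up stand-in for one of the JUnit 4 + JWebUnit functional tests described in the question, so treat the details as assumptions rather than a verified recipe.

import junit.framework.Test;
import junit.framework.TestSuite;
import com.clarkware.junitperf.LoadTest;

// Hypothetical driver class: SearchPageTest stands in for one of the
// JUnit 4 + JWebUnit functional tests; JUnit4TestFactory is the class
// from the answer above, assumed to be in the same package.
public class SearchPageLoadTest {

    public static Test suite() {
        // The factory hands each simulated user its own test instance.
        Test factory = new JUnit4TestFactory(SearchPageTest.class);

        int users = 25; // concurrent simulated users
        Test loadTest = new LoadTest(factory, users);

        TestSuite suite = new TestSuite("Search page under load");
        suite.addTest(loadTest);
        return suite;
    }
}

The point of the factory, per the JUnitPerf design, is that each simulated user gets a fresh test instance - and therefore its own simulated JWebUnit browser - which is exactly the behaviour the question is after.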
{ "language": "en", "url": "https://stackoverflow.com/questions/95506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How to detect whether Vista UAC is enabled? I need my application to behave differently depending on whether Vista UAC is enabled or not. How can my application detect the state of UAC on the user's computer? A: This post has sample code in C# to test if UAC is on and if the current app has been given elevated rights. You can download the code and interpret as needed. Also linked there is a sample that shows the same in C++ http://www.itwriting.com/blog/198-c-code-to-detect-uac-elevation-on-vista.html The code in that post does not just read from the registry. If UAC is enabled, chances are you may not have rights to read that from the registry. A: You don't want to check if UAC is enabled; that doesn't tell you anything. I can be a standard user with UAC disabled. You want to check if the user is running with administrative privileges using CheckTokenMembership: ///This function tells us if we're running with administrative permissions. function IsUserAdmin: Boolean; var b: BOOL; AdministratorsGroup: PSID; begin { This function returns true if you are currently running with admin privileges. In Vista and later, if you are non-elevated, this function will return false (you are not running with administrative privileges). If you *are* running elevated, then IsUserAdmin will return true, as you are running with admin privileges. Windows provides this similar function in Shell32.IsUserAnAdmin. But the function is depricated, and this code is lifted from the docs for CheckTokenMembership: http://msdn.microsoft.com/en-us/library/aa376389.aspx } { Routine Description: This routine returns TRUE if the caller's process is a member of the Administrators local group. Caller is NOT expected to be impersonating anyone and is expected to be able to open its own process and process token. Arguments: None. Return Value: TRUE - Caller has Administrators local group. FALSE - Caller does not have Administrators local group. } b := AllocateAndInitializeSid( SECURITY_NT_AUTHORITY, 2, //2 sub-authorities SECURITY_BUILTIN_DOMAIN_RID, //sub-authority 0 DOMAIN_ALIAS_RID_ADMINS, //sub-authority 1 0, 0, 0, 0, 0, 0, //sub-authorities 2-7 not passed AdministratorsGroup); if (b) then begin if not CheckTokenMembership(0, AdministratorsGroup, b) then b := False; FreeSid(AdministratorsGroup); end; Result := b; end; A: You can do it be examining the DWORD value EnableLUA in the following registry key: HKLM/SOFTWARE/Microsoft/Windows/CurrentVersion/Policies/System If the value is 0 (or does not exist) then the UAC is OFF. If it's present and non-zero, then UAC is ON: BOOL IsUacEnabled( ) { LPCTSTR pszSubKey = _T("SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Policies\\System"); LPCTSTR pszValue = _T("EnableLUA"); DWORD dwType = 0; DWORD dwValue = 0; DWORD dwValueSize = sizeof( DWORD ); if ( ERROR_SUCCESS != SHGetValue( HKEY_LOCAL_MACHINE, pszSubKey, pszValueOn, &dwType, &dwValue, &dwValueSize) ) { return FALSE; } return dwValue != 0; } Note that if the user has changed the state of UAC but has not restarted the computer yet, this function will return an inconsistent result. A: Check for the registry value at HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System The EnableLUA value determines if UAC is active. A: This post is rather ancient, but I wanted to comment on the "why do you need to know" and "check token membership" bits. 
The fact is that Microsoft's very own documentation says that "If User Account Control has been turned off and a Standard user attempts to perform a task that requires elevation" we should provide an error instead of showing buttons and/or links with the UAC shield that attempt elevation. See http://msdn.microsoft.com/en-us/library/windows/desktop/aa511445.aspx towards the bottom for the details. How are we do to this without a way of checking whether UAC is enabled? Perhaps checking whether the user is running with admin privileges is the right thing to do in this instance, but who knows? The guidance that Microsoft gives is, at best, iffy, if not just downright confusing. A: This registry key should tell you: HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System Value EnableLUA (DWORD) 1 enabled / 0 or missing disabled But that assumes you have the rights to read it. Programmatically you can try to read the user's token and guess if it's an admin running with UAC enabled (see here). Not foolproof, but it may work. The issue here is more of a "why do you need to know" - it has bearing on the answer. Really, there is no API because from a OS behavior point of view, what matters is if the user is an administrator or not - how they choose to protect themselves as admin is their problem. A: For anyone else that finds this and is looking for a VBScript solution. Here is what I came up with to detect if UAC is enabled and if so relaunch my script with elevated privileges. Just put your code in the Body() function. I found there were problems with transportability between XP and Windows 7 if I wrote code to always launch elevated. Using this method I bypass the elevation if there is no UAC. Should also take into account 2008 and above server versions that have UAC enabled. On Error Resume Next UACPath = "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableLUA" Dim WshShell Set WshShell = CreateObject("wscript.Shell") UACValue = WshShell.RegRead(UACPath) If UACValue = 1 Then 'Run Elevated If WScript.Arguments.length =0 Then Set objShell = CreateObject("Shell.Application") 'Pass a bogus argument with leading blank space, say [ uac] objShell.ShellExecute "wscript.exe", Chr(34) & _ WScript.ScriptFullName & Chr(34) & " uac", "", "runas", 1 WScript.Quit Else Body() End If Else Body() End If Function Body() MsgBox "This is the body of the script" End Function A: AFAIK, UAC is apolicy setting on the local user or group. So you can read this property from within .Net. Sorry for not having more details but I hope this helps
{ "language": "en", "url": "https://stackoverflow.com/questions/95510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Why is my program asking for permission to run on Vista? I've just built a VS C++ 6.0 program using VS 2008. When I attempt to run or debug the application, Vista asks for permission. What is it about how the program is built that causes this? The program is being built and run from a subfolder of C:\Dev. This response made no sense to me as a solution to the problem. A: Possibility 1: Your program is marked as needing admin rights in its manifest. Possibility 2: Your program is called setup.exe or install.exe - such program names always cause administrator rights to be required. For a detailed explanation of those and other possibilities why you see this, check the Getting to Know User Account Control TechNet article. A: The MVP was talking about having your code and project run from your user folder, for example c:\users\yourname\appdata or something under that path. Do not disable UAC to fix this problem, otherwise your application will not run on another machine unless it too has UAC turned off. It is a very bad practice. Your application, in a perfect world, should request elevated permissions from the user. A: Thank you Suma. Your response is the best yet and helped me arrive at a solution. I have determined that the cause is explained by your first suggestion. Renaming the file to something not containing the word 'setup' did not help. Turned out I was mistaken. I have both VS 2005 and VS 2008 installed, and when I tried opening the old .dsw file, it was 2005 that was launched and offered to upgrade the project. 2005 apparently created a manifest with only one line with the tag "assembly". Once I upgraded the project using VS 2008, a more extensive manifest file was created. I confirmed that the manifest is being embedded in my program by checking the Manifest Tool...Input and Output...Embed Manifest setting. This new manifest includes the following data: <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3"> <security> <requestedPrivileges> <requestedExecutionLevel level="asInvoker" uiAccess="false"></requestedExecutionLevel> </requestedPrivileges> </security> </trustInfo> A: If you're not an admin, then you probably don't have permission to execute programs in C:\Dev.
{ "language": "en", "url": "https://stackoverflow.com/questions/95525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Subversion Error: "Working copy [directory] not locked" I am trying to merge a directory in subversion, but I get the following error when I do so: svn: Working copy '[directory name]' not locked. I tried deleting the working directory and doing a fresh update, but that did not solve the issue. I also did a cleanup on the directory. Does anyone know how to fix this? In this instance, the parent directory has the same name as the subdirectory. I don't know if this has anything to do with the error though. A: Check out this blog posting (Obscure "svn mv" problem solved)... I typically just remove the directory and grab fresh sources. A: I got this error using IDEA. I got around it by doing a TortoiseSVN cleanup in Windows Explorer; the equivalent cleanup in IDEA did not work. A: Are you by chance using TortoiseSVN and some other client (such as Subversive or the command-line client)? Sometimes Tortoise can unintentionally gum up other clients. I don't remember what exactly causes this to happen. A: Without seeing your exact directory setup it's hard to say what is happening. One reason for this error message could be that one part of your merge command refers to a directory that is not under version control. Can you post the exact merge command that triggers the error? A: Try doing a clean-up and then an update. If that does not work, please explain your issue in more detail.
{ "language": "en", "url": "https://stackoverflow.com/questions/95543", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Should I catch exceptions only to log them? Should I catch exceptions for logging purposes? public foo(..) { try { ... } catch (Exception ex) { Logger.Error(ex); throw; } } If I have this in place in each of my layers (DataAccess, Business and WebService) it means the exception is logged several times. Does it make sense to do so if my layers are in separate projects and only the public interfaces have try/catch in them? Why? Why not? Is there a different approach I could use? A: The general rule of thumb is that you only catch an exception if you can actually do something about it. So at the Business or Data layer, you would only catch the exception in situations like this: try { this.Persist(trans); } catch(Exception ex) { trans.Rollback(); throw; } My Business/Data Layer attempts to save the data - if an exception is generated, any transactions are rolled back and the exception is sent to the UI layer. At the UI layer, you can implement a common exception handler: Application.ThreadException += new ThreadExceptionEventHandler(Application_ThreadException); which then handles all exceptions. It might log the exception and then display a user-friendly response: static void Application_ThreadException(object sender, ThreadExceptionEventArgs e) { LogException(e.Exception); } static void LogException(Exception ex) { YYYExceptionHandling.HandleException(ex, YYYExceptionHandling.ExceptionPolicyType.YYY_Policy, YYYExceptionHandling.ExceptionPriority.Medium, "An error has occurred, please contact Administrator"); } In the actual UI code, you can catch individual exceptions if you are going to do something different - such as display a different friendly message or modify the screen, etc. Also, just as a reminder, always try to handle errors - for example divide by 0 - rather than throw an exception. A: Definitely not. You should find the correct place to handle the exception (actually do something, like catch-and-not-rethrow), and then log it. You can and should include the entire stack trace of course, but following your suggestion would litter the code with try-catch blocks. A: It's good practice to translate the exceptions. Don't just log them. If you want to know the specific reason an exception was thrown, throw specific exceptions: public void connect() throws ConnectionException { try { File conf = new File("blabla"); ... } catch (FileNotFoundException ex) { LOGGER.error("log message", ex); throw new ConnectionException("The configuration file was not found", ex); } } A: Unless you are going to change the exception, you should only log at the level where you are going to handle the error and not rethrow it. Otherwise your log just has a bunch of "noise", 3 or more of the same message logged, once at each layer. My best practice is: * *Only try/catch in public methods (in general; obviously if you are trapping for a specific error you would check for it there) *Only log in the UI layer right before suppressing the error and redirecting to an error page/form. A: Use your own exceptions to wrap built-in exceptions. This way you can distinguish between known and unknown errors when catching exceptions. This is useful if you have a method that calls other methods that are likely to throw exceptions, so you can react to both expected and unexpected failures. A: You may want to look up standard exception-handling styles, but my understanding is this: handle exceptions at the level where you can add extra detail to the exception, or at the level where you will present the exception to the user.
In your example you are doing nothing but catching the exception, logging it, and throwing it again. Why not just catch it at the highest level with one try/catch instead of inside every method, if all you are doing is logging it? I would only handle it at that tier if you were going to add some useful information to the exception before throwing it again - wrap the exception in a new exception you create that has useful information beyond the low-level exception text, which usually means little to anyone without some context. A: Sometimes you need to log data which is not available where the exception is handled. In that case, it is appropriate to log just to get that information out. For example (Java pseudocode): public void methodWithDynamicallyGeneratedSQL() throws SQLException { String sql = ...; // Generate some SQL try { ... // Try running the query } catch (SQLException ex) { // Don't bother to log the stack trace, that will // be printed when the exception is handled for real logger.error(ex.toString()+" For SQL: '"+sql+"'"); throw ex; // Handle the exception long after the SQL is gone } } This is similar to retroactive logging (my terminology), where you buffer a log of events but don't write them unless there's a trigger event, such as an exception being thrown. A: If you're required to log all exceptions, then it's a fantastic idea. That said, logging all exceptions without another reason isn't such a good idea. A: You may want to log at the highest level, which is usually your UI or web service code. Logging multiple times is sort of a waste. Also, you want to know the whole story when you are looking at the log. In one of our applications, all of our pages are derived from a BasePage object, and this object handles the exception handling and error logging. A: If that's the only thing it does, I think it is better to remove the try/catches from those classes and let the exception be raised to the class that is responsible for handling them. That way you get only one log entry per exception, giving you clearer logs, and you can even log the stack trace so you won't miss where the exception originated. A: My method is to log the exceptions only in the handler. The 'real' handler so to speak. Otherwise the log will be very hard to read and the code less structured. A: It depends on the Exception: if this actually should not happen, I definitely would log it. On the other hand: if you expect this Exception you should think about the design of the application. Either way: you should at least try to specify the Exception you want to rethrow, catch or log. public foo(..) { try { ... } catch (NullReferenceException ex) { DoSmth(ex); } catch (ArgumentException ex) { DoSmth(ex); } catch (Exception ex) { DoSmth(ex); } } A: You will want to log at a tier boundary. For example, if your business tier can be deployed on a physically separate machine in an n-tier application, then it makes sense to log and throw the error in this way. In this way you have a log of exceptions on the server and don't need to go poking around client machines to find out what happened. I use this pattern in business tiers of applications that use Remoting or ASMX web services. With WCF you can intercept and log an exception using an IErrorHandler attached to your ChannelDispatcher (another subject entirely) - so you don't need the try/catch/throw pattern. A: You need to develop a strategy for handling exceptions. I don't recommend the catch and rethrow.
In addition to the superfluous log entries, it makes the code harder to read. Consider writing to the log in the constructor for the exception. This reserves the try/catch for exceptions that you want to recover from, making the code easier to read. To deal with unexpected or unrecoverable exceptions, you may want a try/catch near the outermost layer of the program to log diagnostic information. BTW, if this is C++, your catch block is creating a copy of the exception object, which can be a potential source of additional problems. Try catching a reference to the exception type: catch (const Exception& ex) { ... } A: This Software Engineering Radio podcast is a very good reference for best practices in error handling. There are actually 2 lectures. A: It's bad practice in general, unless you need to log for very specific reasons. With respect to logging exceptions in general, they should be handled in a root exception handler.
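To make the "log once, at the top" advice concrete, here is a hedged C# sketch of a single application-wide handler (WinForms flavour; MainForm and Logger are placeholders for whatever form and logging library you actually use):

    using System;
    using System.Windows.Forms;

    static class Program
    {
        [STAThread]
        static void Main()
        {
            // Exceptions thrown on the UI thread surface here...
            Application.ThreadException += (sender, e) => LogOnce(e.Exception);
            // ...and exceptions from other threads surface here.
            AppDomain.CurrentDomain.UnhandledException +=
                (sender, e) => LogOnce(e.ExceptionObject as Exception);

            Application.Run(new MainForm());
        }

        static void LogOnce(Exception ex)
        {
            // The one place the exception (with its full stack trace) gets written out.
            Logger.Error(ex);
            MessageBox.Show("An error has occurred. Please contact the administrator.");
        }
    }

The lower layers then only catch where they can add value (rollback, wrap with context) and rethrow, so each failure ends up in the log exactly once.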
{ "language": "en", "url": "https://stackoverflow.com/questions/95547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: Overriding a MIME type in Rails I want to override the JSON MIME type ("application/json") in Rails to ("text/x-json"). I tried to register the MIME type again in mime_types.rb but that didn't work. Any suggestions? Thanks. A: Try: render :json => var_containing_my_json, :content_type => 'text/x-json' A: This should work (in an initializer, plugin, or some similar place): Mime.send(:remove_const, :JSON) Mime::Type.register "text/x-json", :json
{ "language": "en", "url": "https://stackoverflow.com/questions/95554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is there any benefit to using Cocoa's version of MVC with .NET? There's a diagram depicting the difference between traditional MVC and Cocoa MVC here: Cocoa Design Patterns: The Model-View-Controller Design Pattern Are there any benefits of doing it the "Cocoa" way in .NET using Visual Studio? A: There's no reason not to do it that way, if it makes more sense to you. Be aware that a lot of things in the Cocoa framework are the way they are due to higher-level design decisions, for example favoring composition and delegation over subclassing. If you want, you can design C# software that looks like Objective-C software, but people with no Cocoa experience will have to have it explained to them, because the loosely-coupled design will just seem "weird" to them. Oh, right - the advantages of that design include greater re-usability of the UI view and model classes (since they won't have any knowledge of each other), slightly simpler code in the view classes, and more of the "application logic" in a single place (the controller classes). A: A developer on the .Net developer's Journal has been writing about his transition and comparing .Net to Cocoa including using the Cocoa MVC style in .Net http://dotnetaddict.dotnetdevelopersjournal.com/tags/?/cocoa A: Isn't the "Cocoa version of MVC" the pattern used in ASP.NET MVC? So far, all the examples have pointed to communication b/w view and model through the controller with no direct interaction between V and M. Am I understanding this incorrectly? A: The main advantage to using Cocoa's "Mediating controllers" (subclasses of NSController) is that they implement a large part of the standard functionality required to mediate between a model and its view. Things like tracking the part of the model indicated by the view's selection, and transaction support (so that you can, for example, commit or discard a set of modifications to the view or model) are included, 'for free'. Using the NSController subclasses as the 'glue' code between model and view frees you as a developer to focus your efforts on the "Coordinating controller" functionality—the application-specific logic that resides in the controller layer. So, is it worth using this pattern in .Net? Getting a general coordinating controller to work correctly is not trivial (it took Apple a couple of releases to get it all right, for example). Things like tree controllers are particularly tricky. If you're only using this pattern in one or a few projects, it might not be worth the effort. On the other hand, I'm sure the community would appreciate a general controller framework ala Cocoa's NSController hierarchy.
{ "language": "en", "url": "https://stackoverflow.com/questions/95567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to determine the Schemas inside an Oracle Data Pump Export file * *I have an Oracle database backup file (.dmp) that was created with expdp. *The .dmp file was an export of an entire database. *I need to restore 1 of the schemas from within this dump file. *I don't know the names of the schemas inside this dump file. *To use impdp to import the data I need the name of the schema to load. So, I need to inspect the .dmp file and list all of the schemas in it, how do I do that? Update (2008-09-18 13:02) - More detailed information: The impdp command I'm currently using is: impdp user/password@database directory=DPUMP_DIR dumpfile=EXPORT.DMP logfile=IMPORT.LOG And the DPUMP_DIR is correctly configured. SQL> SELECT directory_path 2 FROM dba_directories 3 WHERE directory_name = 'DPUMP_DIR'; DIRECTORY_PATH ------------------------- D:\directory_path\dpump_dir\ And yes, the EXPORT.DMP file is in fact in that folder. The error message I get when I run the impdp command is: Connected to: Oracle Database 10g Enterprise Edition ... ORA-31655: no data or metadata objects selected for the job ORA-39154: Objects from foreign schemas have been removed from import This error message is mostly expected. I need the impdp command to be: impdp user/password@database directory=DPUMP_DIR dumpfile=EXPORT.DMP SCHEMAS=SOURCE_SCHEMA REMAP_SCHEMA=SOURCE_SCHEMA:MY_SCHEMA But to do that, I need the source schema. A: impdp exports the DDL of a dmp backup to a file if you use the SQLFILE parameter. For example, put this into a text file impdp '/ as sysdba' dumpfile=<your .dmp file> logfile=import_log.txt sqlfile=ddl_dump.txt Then check ddl_dump.txt for the tablespaces, users, and schemas in the backup. According to the documentation, this does not actually modify the database: The SQL is not actually executed, and the target system remains unchanged. A: Assuming that you do not have the log file from the expdp job that generated the file in the first place, the easiest option would probably be to use the SQLFILE parameter to have impdp generate a file of DDL (based on a full import). Then you can grab the schema names from that file. Not ideal, of course, since impdp has to read the entire dump file to extract the DDL and then again to get to the schema you're interested in, and you have to do a bit of text file searching for the various CREATE USER statements, but it should be doable. A: To run the impdp command to produce a sqlfile, you will need to run it as a user which has the DATAPUMP_IMP_FULL_DATABASE role. Or... run it as a low-privileged user and use the MASTER_ONLY=YES option, then inspect the master table. e.g. select value_t from SYS_IMPORT_TABLE_01 where name = 'CLIENT_COMMAND' and process_order = -59; col object_name for a30 col processing_status head STATUS for a6 col processing_state head STATE for a5 select distinct object_schema, object_name, object_type, object_tablespace, process_order, duplicate, processing_status, processing_state from sys_import_table_01 where process_order > 0 and object_name is not null order by object_schema, object_name / http://download.oracle.com/otndocs/products/database/enterprise_edition/utilities/pdf/oow2011_dp_mastering.pdf A: Step 1: Here is one simple example. You have to create a SQL file from the dump file using the SQLFILE option.
Step 2: Grep for CREATE USER in the generated SQL file (here tables.sql) Example here: $ impdp directory=exp_dir dumpfile=exp_user1_all_tab.dmp logfile=imp_exp_user1_tab sqlfile=tables.sql Import: Release 11.2.0.3.0 - Production on Fri Apr 26 08:29:06 2013 Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved. Username: / as sysdba Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA Job "SYS"."SYS_SQL_FILE_FULL_01" successfully completed at 08:29:12 $ grep "CREATE USER" tables.sql CREATE USER "USER1" IDENTIFIED BY VALUES 'S:270D559F9B97C05EA50F78507CD6EAC6AD63969E5E;BBE7786A5F9103' Lots of Data Pump options are explained here http://www.acehints.com/p/site-map.html A: You need to search for OWNER_NAME. cat -v dumpfile.dmp | grep -o '<OWNER_NAME>.*</OWNER_NAME>' | uniq -u cat -v turns the dumpfile into visible text. grep -o shows only the match, so we don't see really long lines. uniq -u removes duplicate lines so you see less output. This works pretty well, even on large dump files, and could be tweaked for usage in a script. A: My solution (similar to KyleLanser's answer) (on a Unix box): strings dumpfile.dmp | grep SCHEMA_LIST A: If you open the DMP file with an editor that can handle big files, you might be able to locate the areas where the schema names are mentioned. Just be sure not to change anything. It would be better if you opened a copy of the original dump. A: Update (2008-09-19 10:05) - Solution: My Solution: Social engineering, I dug real hard and found someone who knew the schema name. Technical Solution: Searching the .dmp file did yield the schema name. Once I knew the schema name, I searched the dump file and learned where to find it. Places the schema name was seen in the .dmp file: * *<OWNER_NAME>SOURCE_SCHEMA</OWNER_NAME> This was seen before each table name/definition. *SCHEMA_LIST 'SOURCE_SCHEMA' This was seen near the end of the .dmp. Interestingly enough, around the SCHEMA_LIST 'SOURCE_SCHEMA' section, it also had the command line used to create the dump, directories used, par files used, the Windows version it was run on, and export session settings (language, date formats). So, problem solved :) A: In my case, based on Aldur's and slafs' answers I came up with this expression that should tell you just the name of the original schema: cat -v file.dmp | grep 'SCHEMA_LIST' | uniq -u | grep -o -P '(?<=SCHEMAS\=).*(?=content)' Tested with a DMP file from Oracle version 19.8.
{ "language": "en", "url": "https://stackoverflow.com/questions/95578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41" }
Q: JQuery error option in $.ajax utility The documentation indicates that the error: option function will make available: the XHR instance, a status message string (in this case always error) and an optional exception object returned from the XHR instance (Book: JQuery in Action) Using the following (in the $.ajax call) I was able to determine I had a "parsererror" and a "timeout" (since I added the timeout: option) error error: function(request, error){} What are other things you evaluate in the error option? Do you include the optional exception object? EDIT: one of the answers indicates all the return errors...learning more about what is of value (for debugging) in the XHR instance and exception object would be helpful This is a complete $.ajax call: $.ajax({ type: "post", url: "http://myServer/cgi-bin/broker" , dataType: "text", data: { '_service' : 'myService', '_program' : 'myProgram', 'start' : start, 'end' : end }, beforeSend: function() { $("#loading").removeClass("hide"); }, timeout: 5000, error: function(request,error) { $("#loading").addClass("hide"); if (error == "timeout") { $("#error").append("The request timed out, please resubmit"); } else { $("#error").append("ERROR: " + error); } }, success: function(request) { $("#loading").addClass("hide"); var t = eval( "(" + request + ")" ) ; } // End success }); // End ajax method Thanks for the input A: I find the request more useful than the error. error:function(xhr,err){ alert("readyState: "+xhr.readyState+"\nstatus: "+xhr.status); alert("responseText: "+xhr.responseText); } xhr is the XMLHttpRequest. readyState values are 1:loading, 2:loaded, 3:interactive, 4:complete. status is the HTTP status number, i.e. 404: not found, 500: server error, 200: ok. responseText is the response from the server - this could be text or JSON from the web service, or HTML from the web server. A: This is an aside, but I think there's a bug in the code you submitted. The line: if (error = "timeout") { should have more equals signs in it: if (error == "timeout") { A: Looking at the jQuery source code, there are four returned statuses, in addition to success: * *timeout - when your specified timeout is exceeded *error - http error, like 404 *notmodified - when the requested resource was not modified since the last request *parsererror - when an xml/json response is bad A: The second argument that is passed to your error function will be either the string "timeout", "parsererror", "error" or "notmodified". The third will be the exception object. This object can be helpful for debugging. A: Are you sure that the response is correct? A parse error means that there is something wrong with the data being evaluated in the line var t = eval( "(" + request + ")" ) ;
{ "language": "en", "url": "https://stackoverflow.com/questions/95600", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: XNA File Load In XNA, how do I load in a texture or mesh from a file without using the content pipeline? A: The .FromFile method will not work on Xbox or Zune. You have two choices: * *Just use the content pipeline ... on Xbox or Zune (if you care about them), you can't have user-supplied content anyway, so it doesn't matter if you only use the content pipeline. *Write code to load the texture (using .SetData), or of course to parse the model file and load the appropriate vertex buffers, etc. A: For anyone interested in loading a model from a file, check out this tutorial: http://creators.xna.com/en-us/sample/winforms_series2 A: This is a Windows-only way to load a texture without loading it through the pipeline. As Cory stated above, all content must be compiled before loading it on the Xbox and Zune. Texture2D texture = Texture2D.FromFile(graphics.GraphicsDevice, @"Location of your Texture Here.png"); A: I believe Texture2D.FromFile(); is what you are looking for. It does not look like you can do this with a Model though. http://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.texture2d.fromfile.aspx A: If you really want to load an XNA Microsoft.Xna.Framework.Graphics.Model on PC without the content pipeline (e.g. for user-generated content), there is a way. I used SlimDX to load an X file and avoid the parsing code, then some reflection tricks to instantiate the Model (it is sealed and has a private constructor so wasn't meant to be extended or customised). See here: http://contenttracker.codeplex.com/SourceControl/changeset/view/20704#346981
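For the "build it yourself with SetData" route mentioned above, here is a minimal, hedged C# sketch that constructs a texture entirely in code (constructor overloads differ between XNA versions, so treat the exact signature as an assumption; the checkerboard pattern is just a stand-in for whatever pixel data you decode yourself):

    // Build a size x size texture and upload raw pixel data to it.
    Texture2D BuildCheckerTexture(GraphicsDevice device, int size)
    {
        Texture2D texture = new Texture2D(device, size, size);
        Color[] pixels = new Color[size * size];
        for (int y = 0; y < size; y++)
            for (int x = 0; x < size; x++)
                pixels[y * size + x] = ((x / 8 + y / 8) % 2 == 0) ? Color.White : Color.Black;
        texture.SetData(pixels);   // copy the pixel array into the texture resource
        return texture;
    }

The same idea scales up: decode the image file with whatever loader you have available, fill a Color[] from it, and hand that to SetData instead of going through the content pipeline.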
{ "language": "en", "url": "https://stackoverflow.com/questions/95606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Append Subject Header in Outlook (VBA) Basically, we have a rule set up to run a script when a code word is detected in the body of an incoming message. The script will append the current subject header with a word in front. For example, Before: "Test Message", After: "Dept - Test Message". Any ideas? A: Or if you need an entire script: Use the "run a script" rule action with the MailItem as the parameter. Sub RewriteSubject(MyMail As MailItem) Dim mailId As String Dim outlookNS As Outlook.NameSpace Dim myMailItem As Outlook.MailItem mailId = MyMail.EntryID Set outlookNS = Application.GetNamespace("MAPI") Set myMailItem = outlookNS.GetItemFromID(mailId) ' Do any detection here With myMailItem .Subject = "Dept - " & .Subject .Save End With Set myMailItem = Nothing Set outlookNS = Nothing End Sub A: Not tested: mailItem.Subject = "Dept - " & mailItem.Subject mailItem.Save A: Sub AppendSubject(MyMail As MailItem) Dim strID As String Dim mailNS As Outlook.NameSpace Dim mailItem As Outlook.MailItem strID = MyMail.EntryID Set mailNS = Application.GetNamespace("MAPI") Set mailItem = mailNS.GetItemFromID(strID) mailItem.Subject = "Dept - " & mailItem.Subject mailItem.Save Set mailItem = Nothing Set mailNS = Nothing End Sub Are we missing anything? EDIT: Doh! You already answered our question with a full script... Thanks!
{ "language": "en", "url": "https://stackoverflow.com/questions/95625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to build a 64-bit .NET DLL, with 64-bit COM interop? I need to build a managed DLL, targeted for x64, and expose it via x64 COM. I need a walk through, good article, etc... Interop is fairly straightforward, but when you talk about x64 on both sides, I can't find anything. A: Take a look at this discussion. And this.
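For a rough picture of what the managed side looks like, here is a hedged C# sketch of a COM-visible class (the GUIDs below are made-up placeholders - generate your own):

    using System;
    using System.Runtime.InteropServices;

    [ComVisible(true)]
    [Guid("11111111-2222-3333-4444-555555555555")]   // placeholder GUID
    public interface IGreeter
    {
        string Greet(string name);
    }

    [ComVisible(true)]
    [Guid("66666666-7777-8888-9999-000000000000")]   // placeholder GUID
    [ClassInterface(ClassInterfaceType.None)]
    public class Greeter : IGreeter
    {
        public string Greet(string name)
        {
            return "Hello, " + name;
        }
    }

Build it with Platform Target set to x64 (or AnyCPU), then register it with the 64-bit regasm - the one under %windir%\Microsoft.NET\Framework64\ - e.g. regasm MyLib.dll /codebase /tlb, so the registration and type library land in the 64-bit registry/COM view that a 64-bit client will look in.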
{ "language": "en", "url": "https://stackoverflow.com/questions/95628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Open a file with su/sudo inside Emacs Suppose I want to open a file in an existing Emacs session using su or sudo, without dropping down to a shell and doing sudoedit or sudo emacs. One way to do this is C-x C-f /sudo::/path/to/file but this requires an expensive round-trip through SSH. Is there a more direct way? [EDIT] @JBB is right. I want to be able to invoke su/sudo to save as well as open. It would be OK (but not ideal) to re-authorize when saving. What I'm looking for is variations of find-file and save-buffer that can be "piped" through su/sudo. A: The nice thing about Tramp is that you only pay for that round-trip to SSH when you open the first file. Sudo then caches your credentials, and Emacs saves a handle, so that subsequent sudo-opened files take much less time. I haven't found the extra time it takes to save burdening, either. It's fast enough, IMO. A: Tramp does not round-trip sudo via SSH, it uses a subshell. See the manual: https://www.gnu.org/software/tramp/#Inline-methods Therefore, I recommend that you stick with TRAMP. A: Your example doesn't start ssh at all, at least not with my version of TRAMP ("2.1.13-pre"). Both find-file and save-buffer work great. A: At least for saving, a sudo-save package was written exactly for that kind of problem. A: I recommend you to use advising commands. Put this function in your ~/.emacs (defadvice ido-find-file (after find-file-sudo activate) "Find file as root if necessary." (unless (and buffer-file-name (file-writable-p buffer-file-name)) (find-alternate-file (concat "/sudo:root@localhost:" buffer-file-name)))) A: If you use helm, helm-find-files supports opening a file as root with C-c r. A: Not really an answer to the original question, but here's a helper function to make doing the tramp/sudo route a bit easier: (defun sudo-find-file (file-name) "Like find file, but opens the file as root." (interactive "FSudo Find File: ") (let ((tramp-file-name (concat "/sudo::" (expand-file-name file-name)))) (find-file tramp-file-name))) A: (works only locally. Need to be updated to work correctly via tramp) A little bit extended Burton's answer: (defun sudo-find-file (file-name) "Like find file, but opens the file as root." (interactive "FSudo Find File: ") (let ((tramp-file-name (concat "/sudo::" (expand-file-name file-name)))) (find-file tramp-file-name))) (add-hook 'dired-mode-hook (lambda () ;; open current file as sudo (local-set-key (kbd "C-x <M-S-return>") (lambda() (interactive) (message "!!! SUDO opening %s" (dired-file-name-at-point)) (sudo-find-file (dired-file-name-at-point)) )) ) ) A: Ugh. Perhaps you could open a shell in Emacs and exec sudo emacs. The problem is that you presumably don't just want to open the file. You want to be able to save it later. Thus you need your root privs to persist, not just exist for opening the file. Sounds like you want Emacs to become your window manager. It's bloated enough without that. :) A: I find sudo edit function very useful for that. After opening a file, press s-e to have sudo access to edit/save the file.
{ "language": "en", "url": "https://stackoverflow.com/questions/95631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "187" }
Q: What does a just-in-time (JIT) compiler do? What does a JIT compiler specifically do as opposed to a non-JIT compiler? Can someone give a succinct and easy to understand description? A: JIT - Just in time: the name itself says it happens when it's needed (on demand). Typical scenario: The source code is completely converted into machine code. JIT scenario: The source code will be converted into an assembly-language-like structure [for example IL (intermediate language) for C#, bytecode for Java]. The intermediate code is converted into machine language only when the application needs it; that is, only the required code is converted to machine code. JIT vs Non-JIT comparison: * *In JIT not all the code is converted into machine code at first; only the part of the code that is necessary will be converted into machine code. Then, if a method or functionality that is called is not already in machine code, it too will be turned into machine code... it reduces the burden on the CPU. *As the machine code will be generated at run time, the JIT compiler will produce machine code that is optimised for the running machine's CPU architecture. JIT Examples: * *In Java the JIT is in the JVM (Java Virtual Machine) *In C# it is in the CLR (Common Language Runtime) *In Android it is in the DVM (Dalvik Virtual Machine), or ART (Android RunTime) in newer versions. A: After the byte code (which is architecture neutral) has been generated by the Java compiler, the execution will be handled by the JVM (in Java). The byte code will be loaded into the JVM by the loader and then each byte instruction is interpreted. When we need to call a method multiple times, we need to interpret the same code many times and this may take more time than is needed. So we have the JIT (just-in-time) compilers. When the bytecode has been loaded into the JVM (at run time), the whole code will be compiled rather than interpreted, thus saving time. JIT compilers work only during run time, so we do not have any binary output. A: A just-in-time compiler (JIT) is a piece of software which receives a non-executable input and returns the appropriate machine code to be executed. For example: Intermediate representation ---JIT---> Native machine code for the current CPU architecture; Java bytecode ---> machine code; JavaScript (run with V8) ---> machine code. The consequence of this is that for a certain CPU architecture the appropriate JIT compiler must be installed. Difference between compiler, interpreter, and JIT Although there can be exceptions, in general when we want to transform source code into machine code we can use: * *Compiler: Takes source code and returns an executable *Interpreter: Executes the program instruction by instruction. It takes an executable segment of the source code and turns that segment into machine instructions. This process is repeated until all source code is transformed into machine instructions and executed. *JIT: Many different implementations of a JIT are possible, however a JIT is usually a combination of a compiler and an interpreter. The JIT first turns intermediary data (e.g. Java bytecode) which it receives into machine language via interpretation. A JIT can often measure when a certain part of the code is executed often and will then compile this part for faster execution. A: A JIT compiler runs after the program has started and compiles the code (usually bytecode or some kind of VM instructions) on the fly (or just-in-time, as it's called) into a form that's usually faster, typically the host CPU's native instruction set.
A JIT has access to dynamic runtime information whereas a standard compiler doesn't, so it can make better optimizations like inlining functions that are used frequently. This is in contrast to a traditional compiler that compiles all the code to machine language before the program is first run. To paraphrase, conventional compilers build the whole program as an EXE file BEFORE the first time you run it. For newer style programs, an assembly is generated with pseudocode (p-code). Only AFTER you execute the program on the OS (e.g., by double-clicking on its icon) will the (JIT) compiler kick in and generate machine code (m-code) that the Intel-based processor or whatever will understand. A: Just In Time Compiler (JIT): It compiles the Java bytecodes into machine instructions of that specific CPU. For example, if we have a loop statement in our Java code: while(i<10){ // ... a=a+i; // ... } The above loop code runs 10 times if the value of i is 0. It is not necessary to compile the bytecode 10 times over, as the same instructions are going to execute 10 times. In that case, it is necessary to compile that code only once, and the values can change as many times as required. So, the Just In Time (JIT) Compiler keeps track of such statements and methods (as said above) and compiles such pieces of bytecode into machine code for better performance. Another similar example is a search for a pattern using a "regular expression" in a list of strings/sentences. The JIT compiler doesn't compile all the code to machine code. It compiles code that has a similar pattern at run time. See this Oracle documentation on Understand JIT to read more. A: You have code that is compiled into some IL (intermediate language). When you run your program, the computer doesn't understand this code. It only understands native code. So the JIT compiler compiles your IL into native code on the fly. It does this at the method level. A: I know this is an old thread, but runtime optimization is another important part of JIT compilation that doesn't seem to be discussed here. Basically, the JIT compiler can monitor the program as it runs to determine ways to improve execution. Then, it can make those changes on the fly - during runtime. Google JIT optimization (JavaWorld has a pretty good article about it.) A: As others have mentioned, JIT stands for Just-in-Time, which means that code gets compiled when it is needed, not before runtime. Just to add a point to the above discussion, the JVM maintains a count of how many times a function is executed. If this count exceeds a predefined limit, the JIT compiles the code into machine language which can be directly executed by the processor (unlike the normal case in which javac compiles the code into bytecode and then java - the interpreter - interprets this bytecode line by line, converts it into machine code and executes it). Also, the next time this function is called, the same compiled code is executed again, unlike normal interpretation in which the code is interpreted again line by line. This makes execution faster. A: The JIT compiler only compiles the byte-code to equivalent native code at first execution. Upon every successive execution, the JVM merely uses the already compiled native code to optimize performance. Without a JIT compiler, the JVM interpreter translates the byte-code line-by-line to make it appear as if a native application is being executed.
Source A: In the beginning, a compiler was responsible for turning a high-level language (defined as higher level than assembler) into object code (machine instructions), which would then be linked (by a linker) into an executable. At one point in the evolution of languages, compilers would compile a high-level language into pseudo-code, which would then be interpreted (by an interpreter) to run your program. This eliminated the object code and executables, and allowed these languages to be portable to multiple operating systems and hardware platforms. Pascal (which compiled to P-Code) was one of the first; Java and C# are more recent examples. Eventually the term P-Code was replaced with bytecode, since most of the pseudo-operations are a byte long. A Just-In-Time (JIT) compiler is a feature of the run-time interpreter that, instead of interpreting bytecode every time a method is invoked, will compile the bytecode into the machine code instructions of the running machine, and then invoke this object code instead. Ideally the efficiency of running object code will overcome the inefficiency of recompiling the program every time it runs. A: Just-in-time (JIT) compilation (also dynamic translation or run-time compilation) is a way of executing computer code that involves compilation during execution of a program – at run time – rather than prior to execution. JIT compilation is a combination of the two traditional approaches to translation to machine code – ahead-of-time compilation (AOT) and interpretation – and combines some advantages and drawbacks of both. JIT compilation combines the speed of compiled code with the flexibility of interpretation. Let's consider the JIT used in the JVM. For example, the HotSpot JVM JIT compilers generate dynamic optimizations. In other words, they make optimization decisions while the Java application is running and generate high-performing native machine instructions targeted for the underlying system architecture. When a method is chosen for compilation, the JVM feeds its bytecode to the Just-In-Time compiler (JIT). The JIT needs to understand the semantics and syntax of the bytecode before it can compile the method correctly. To help the JIT compiler analyze the method, its bytecode is first reformulated in an internal representation called trace trees, which resembles machine code more closely than bytecode. Analysis and optimizations are then performed on the trees of the method. At the end, the trees are translated into native code. A trace tree is a data structure that is used in the runtime compilation of programming code. Trace trees are used in a type of 'just in time compiler' that traces code executing during hotspots and compiles it. Refer to this. Refer: * *http://www.oracle.com/webfolder/technetwork/tutorials/obe/java/gc01/index.html *https://en.wikipedia.org/wiki/Just-in-time_compilation A: JIT stands for Just-in-Time, which means that code gets compiled when it is needed, not before runtime. This is beneficial because the compiler can generate code that is optimised for your particular machine. A static compiler, like your average C compiler, will compile all of the code into executable code on the developer's machine. Hence the compiler will perform optimisations based on some assumptions. It can compile more slowly and do more optimisations because it is not slowing execution of the program for the user. A: A non-JIT compiler takes source code and transforms it into machine specific byte code at compile time.
A JIT compiler takes machine-agnostic byte code that was generated at compile time and transforms it into machine-specific code at run time. The JIT compiler that Java uses is what allows a single binary to run on a multitude of platforms without modification. A: JIT stands for just-in-time compiler. A JIT is a program that turns Java bytecode into instructions that can be sent directly to the processor. Using the Java just-in-time compiler (really a second compiler) on the particular system platform compiles the bytecode into code for that particular system. Once the code has been re-compiled by the JIT compiler, it will usually run more quickly on the computer. The just-in-time compiler comes with the virtual machine and is used optionally. It compiles the bytecode into platform-specific executable code that is immediately executed. A: The Just In Time compiler, also known as the JIT compiler, is used for performance improvement in Java. It is enabled by default. It is compilation done at execution time rather than earlier. Java has popularized the use of the JIT compiler by including it in the JVM. A: JIT refers to the execution engine in a few JVM implementations, one that is faster but requires more memory: a just-in-time compiler. In this scheme, the bytecodes of a method are compiled to native machine code the first time the method is invoked. The native machine code for the method is then cached, so it can be re-used the next time that same method is invoked. A: The JVM actually performs compilation steps during runtime for performance reasons. This means that Java doesn't have a clean compile-execution separation. It first does a so-called static compilation from Java source code to bytecode. Then this bytecode is passed to the JVM for execution. But executing bytecode is slow, so the JVM measures how often the bytecode is run, and when it detects a "hotspot" of code that's run very frequently, it performs dynamic compilation from bytecode to machine code of the "hotspot" code (hotspot profiler). So effectively, today Java programs are run via machine-code execution.
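Since several answers point out that the first invocation is where the JIT cost is paid, here is a small C# sketch (numbers are machine-dependent and purely illustrative) that makes the effect visible by timing the first call against a later one:

    using System;
    using System.Diagnostics;

    class JitDemo
    {
        static double Work(int n)
        {
            double sum = 0;
            for (int i = 1; i <= n; i++) sum += Math.Sqrt(i);
            return sum;
        }

        static void Main()
        {
            Stopwatch sw = Stopwatch.StartNew();
            Work(1000);                    // first call: the IL for Work is JIT-compiled here
            sw.Stop();
            Console.WriteLine("First call:  " + sw.Elapsed);

            sw.Reset();
            sw.Start();
            Work(1000);                    // later calls reuse the already-compiled native code
            sw.Stop();
            Console.WriteLine("Second call: " + sw.Elapsed);
        }
    }

The gap you see on the first call is (mostly) the one-time translation from IL to native code; the same principle applies to Java bytecode in the JVM.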
{ "language": "en", "url": "https://stackoverflow.com/questions/95635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "631" }
Q: Does an application-wide exception handler make sense? Long story short, I have a substantial Python application that, among other things, does outcalls to "losetup", "mount", etc. on Linux. Essentially consuming system resources that must be released when complete. If my application crashes, I want to ensure these system resources are properly released. Does it make sense to do something like the following? def main(): # TODO: main application entry point pass def cleanup(): # TODO: release system resources here pass if __name__ == "__main__": try: main() except: cleanup() raise Is this something that is typically done? Is there a better way? Perhaps the destructor in a singleton class? A: A destructor (as in a __del__ method) is a bad idea, as these are not guaranteed to be called. The atexit module is a safer approach, although these will still not fire if the Python interpreter crashes (rather than the Python application), or if os._exit() is used, or the process is killed aggressively, or the machine reboots. (Of course, the last item isn't an issue in your case.) If your process is crash-prone (it uses fickle third-party extension modules, for instance) you may want to do the cleanup in a simple parent process for more isolation. If you aren't really worried, use the atexit module. A: An application-wide handler is fine. They are great for logging. Just make sure that the application-wide one is durable and is unlikely to crash itself. A: If you use classes, you should free the resources they allocate in their destructors instead, of course. Use the try: around the entire application only if you want to free resources that aren't already released by your classes' destructors. And instead of using a catch-all except:, you should use the following block: try: main() finally: cleanup() That will ensure cleanup in a more Pythonic way. A: I like top-level exception handlers in general (regardless of language). They're a great place to clean up resources that may not be immediately related to resources consumed inside the method that throws the exception. It's also a fantastic place to log those exceptions if you have such a framework in place. Top-level handlers will catch those bizarre exceptions you didn't plan on and let you correct them in the future; otherwise, you may never know about them at all. Just be careful that your top-level handler doesn't throw exceptions! A: That seems like a reasonable approach, and more straightforward and reliable than a destructor on a singleton class. You might also look at the "atexit" module. (Pronounced "at exit", not "a tex it" or something like that. I confused that for a long while.) A: Consider writing a context manager and using the with statement.
{ "language": "en", "url": "https://stackoverflow.com/questions/95642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: When should I consider changing thread priority I once was asked to increase thread priority to fix a problem. I refused, saying that changing it was dangerous and was not the root cause of the problem. My question is, under what circumstances should I consider changing the priority of threads? A: When you've made a list of the threads you're using and defined a priority order for them which makes sense in terms of the work they do. If you nudge threads up here and there in order to bodge your way out of a problem, eventually they'll all be high priority and you're back where you started. Don't assume you can fix a race condition with prioritisation when really it needs locking, because chances are you've only fixed it in friendly conditions. There may still be cases where it can fail, such as when the lower-priority thread has undergone priority inheritance because another high-priority thread is waiting on another lock it's holding. If you classify threads along the lines of "these threads fill the audio buffer", "these threads make my app responsive to system events", "these threads make my app responsive to the user", "these threads are getting on with some business and will report when they're good and ready", then the threads ought to be prioritised accordingly. Finally, it depends on the OS. If thread priority is completely secondary to process priority, then it shouldn't be "dangerous" to prioritise threads: the only thing you can starve of CPU is yourself. But if your high-priority threads run in preference to the normal-priority threads of other, unrelated applications, then you have a broader responsibility. You should only be raising priorities of threads which do small amounts of urgent work. The definition of "small" depends on what kind of device you're on - with a 3GHz multi-core processor you get away with a lot, but a mobile device might have pseudo real-time expectations that user-level apps can break. Keeping the audio buffer serviced is the canonical example of when to be high priority, though, since small under-runs usually cause nasty crackling. Long downloads (or other slow I/O) are the canonical example of when to be low priority, since there's no urgency processing this chunk of data if the next one won't be along for ages anyway. If you're ever writing a device driver you'll need to make more complex decisions about how to play nicely with others. A: Not many. The only time I've ever had to change thread priorities in a positive direction was with a user interface thread. UIs must be extremely snappy in order for the app to feel right, so a lot of times it is best to prioritize painting threads higher than others. For example, the Swing Event Dispatch Thread runs at priority 6 by default (1 higher than the default). I do push threads down in priority quite a bit. Again, this is usually to keep the UI responsive while some long-running background process does its thing. However, this also will sometimes apply to polling daemons and the like which I know that I don't want to be interfering with anything, regardless of how minimal the interference. A: Our app uses a background thread to download data and we didn't want that interfering with the UI thread on single-core machines, so we deliberately prioritized that lower. A: I think it depends on the direction you're looking at changing the priority. Normally you shouldn't ever increase thread priority unless you have a very good reason. 
Increasing thread priority can cause your app's thread to start taking away time from other applications, which probably isn't what the user wants. If your thread is using up a significant amount of CPU it can make the machine hard to use, as some standard UI threads may start to starve. I'd say the only time you should increase priority above normal is if the user explicitly told your app to do so, but even then you want to prevent "clueless" users from doing so. Maybe if your app doesn't use much CPU normally, but might have brief bursts of really important activity, then it could be OK to have an increased priority, as it wouldn't normally detract from the user's general experience. Decreasing priority is another matter. If your app is doing something that takes a LOT of CPU and runs for a long time, yet isn't critical, then lowering the priority can be good. By lowering the priority you allow the CPU to be used for other things when it's needed, which helps keep the system responding quickly. As long as the system is mostly idling other than your app you'll still get most of the CPU time, but won't take away from tasks that need it more than you. An example of this would be a thread that indexes the hard drive (think Google Desktop). A: I would say when your original design assumptions about the threads are no longer valid. Thread priority is mostly a design decision about what work is most important. So for some examples of when to reconsider: If you add a new feature that might require its own thread that becomes more important, then reconsider thread priorities. If some requirements change that force you to reconsider the priorities of the work you are doing, then reconsider. Or, if you do performance testing and realize that your "high priority work" as specified in your design does not get the required performance, then tweak priorities. Otherwise, it's often a hack.
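As a concrete illustration of the "lower the background work, leave everything else alone" advice, here is a hedged C# sketch (the indexing method is a stand-in for whatever long-running, non-critical work you have):

    using System.Threading;

    class BackgroundIndexer
    {
        public void Start()
        {
            Thread worker = new Thread(IndexDrive);
            worker.IsBackground = true;                    // don't keep the process alive just for this
            worker.Priority = ThreadPriority.BelowNormal;  // yield the CPU whenever anything else needs it
            worker.Start();
        }

        private void IndexDrive()
        {
            // long-running, non-urgent work goes here
        }
    }

Raising a priority (ThreadPriority.AboveNormal and up) is the rarer case and deserves the kind of justification described above - a small amount of genuinely urgent work, such as keeping an audio buffer filled.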
{ "language": "en", "url": "https://stackoverflow.com/questions/95649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How to use oAuth tokens I'm using a library to get an 'oAuth_token' and 'oAuth_token_secret'. If I make a request to a REST web service, how are those two keys leveraged to verify authentication? A: Providing a C# example is a little difficult because there are a number of variables, i.e. the signature method being used, additional parameters the service might be expecting, etc., which would affect the complexity of the example. I've developed an open source OAuth library for .Net and posted an article on beginning to use OAuth that might help to get you started - I tried to find a developers page / API specification for brightkite - but because it's a beta service I don't have access - so perhaps post me an invite to this service via my blog and I can have a go at developing an example brightkite client, at which point this answer can be revisited with some concrete example code useful to others.
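Roughly, in OAuth 1.0 the token is sent along with the request parameters and the token secret becomes part of the HMAC-SHA1 signing key. A hedged C# sketch of just the signing step (parameter normalisation is simplified here; check the OAuth 1.0 spec and your provider's documentation for the exact encoding and sorting rules):

    using System;
    using System.Security.Cryptography;
    using System.Text;

    static class OAuthSigner
    {
        public static string Sign(string httpMethod, string url, string normalizedParams,
                                  string consumerSecret, string tokenSecret)
        {
            // Signature base string: METHOD & encoded-URL & encoded-sorted-parameter-string
            string baseString = httpMethod.ToUpperInvariant()
                + "&" + Uri.EscapeDataString(url)
                + "&" + Uri.EscapeDataString(normalizedParams);

            // Signing key: consumer secret & token secret (the oauth_token_secret you were given)
            string key = Uri.EscapeDataString(consumerSecret) + "&" + Uri.EscapeDataString(tokenSecret);

            using (HMACSHA1 hmac = new HMACSHA1(Encoding.ASCII.GetBytes(key)))
            {
                byte[] hash = hmac.ComputeHash(Encoding.ASCII.GetBytes(baseString));
                return Convert.ToBase64String(hash);   // sent as the oauth_signature parameter
            }
        }
    }

The oauth_token itself goes over the wire in the clear (as a query or Authorization-header parameter) so the server knows which token secret to verify the signature against; the secret itself is never transmitted.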
{ "language": "en", "url": "https://stackoverflow.com/questions/95651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: comparison of ways to maintain state There are various ways to maintain user state in web development. These are the ones that I can think of right now: * *Query String *Cookies *Form Methods (Get and Post) *Viewstate (ASP.NET only I guess) *Session (InProc Web server) *Session (Dedicated web server) *Session (Database) *Local Persistence (Google Gears) (thanks Steve Moyer) etc. I know that each method has its own advantages and disadvantages, like cookies not being secure and QueryString having a length limit and being plain ugly to look at! ;) But, when designing a web application I am always confused as to what methods to use for what application or what methods to avoid. What I would like to know is what method(s) do you generally use and would recommend, or, more interestingly, which of these methods would you like to avoid in certain scenarios and why? A: Security is also an issue; values in the query string or form fields can be trivially changed by the user. User authentication should be saved either in an encrypted or tamper-evident cookie or in the server-side session. Keeping track of values passed in a form as a user completes a process, like a site sign-up, well, that can probably be kept in hidden form fields. The nice (and sometimes dangerous) thing, though, about the query string is that the state can be picked up by anyone who clicks on a link. As mentioned above, this is dangerous if it gives the user some authorization they shouldn't have. It's nice, though, for showing your friends something you found on the site. A: With the increasing use of Web 2.0, I think there are two important methods missing from your list: 8 AJAX applications - since the page doesn't reload and there is no page to page navigation, state isn't an issue (but persisting user data must use the asynchronous XML calls). 9 Local persistence - Browser-based applications can persist their user data and state to the local hard drive using libraries such as Google Gears. As for which one is best, I think they all have their place, but the Query String method is problematic for search engines. A: While this is a very complicated question to answer, I have a few quick-bite things I think about when considering implementing state. * *Query string state is only useful for the most basic tasks -- e.g., maintaining the position of a user within a wizard, perhaps, or providing a path to redirect the user to after they complete a given task (e.g., logging in). Otherwise, query string state is horribly insecure, difficult to implement, and in order to do it justice, it needs to be tied to some server-side state machine by containing a key to tie the client to the server's maintained state for that client. *Cookie state is more or less the same -- it's just fancier than query string state. But it's still totally maintained on the client side unless the data in the cookie is a key to tie the client to some server-side state machine. *Form method state is again similar -- it's useful for hiding fields that tie a given form to some bit of data on the back end (e.g., "this user is editing record #512, so the form will contain a hidden input with the value 512"). It's not useful for much else, and again, is just another implementation of the same idea behind query string and cookie state. *Session state (in any of the ways you describe) is great, since it's infinitely extensible and can handle anything your chosen programming language can handle.
The first caveat is that there needs to be a key in the client's hand to tie that client to its state being stored on the server; this is where most web frameworks provide either a cookie-based or query string-based key back to the client. (Almost every modern one uses cookies, but falls back on query strings if cookies aren't enabled.) The second caveat is that you need to put some thought into how you're storing your state... will you put it in a database? Does your web framework handle it entirely for you? Again, most modern web frameworks take the work out of this, and for me to go about implementing my own state machine, I need a very good reason... otherwise, I'm likely to create security holes and functionality breakage that's been hashed out over time in any of the mature frameworks. So I guess I can't really imagine not wanting to use session-based state for anything but the most trivial reason. A: Personally, since almost all of my web development is in PHP, I use PHP's session handlers. Sessions are the most flexible, in my experience: they're normally faster than db accesses, and the cookies they generate die when the browser closes (by default). A: Avoid InProc if you plan to host your website on a cheap-n-cheerful host like webhost4life. I've learnt the hard way that because their systems are oversubscribed, they recycle the applications very frequently, which causes your session to get lost. Very annoying. Their suggestion is to use StateServer, which is fine except you have to serialise/deserialise the session each postback. I love objects and my web app is full of them. I'm concerned about performance when switching to StateServer. I need to refactor to only put the stuff I really need in the session. Wish I'd known that before I started... Cheers, Rob. A: Be careful what state you store client side (query strings, form fields, cookies). Anything security-related should not be stored client-side, except maybe a session identifier if it is reasonably obscured and hard to guess. There are too many websites that have settings like "authenticated=true" and store those in a cookie or query string or hidden form field. It is trivial for a user to bypass something like that. Remember that ANY input coming from a client could have been tampered with and should not be trusted. A: Signed cookies linked to some sort of database store when you need to grab data. There's no reason to be storing data on the client side if you have a connected back-end; you're just looking for trouble if this is a public facing website. A: It's not so much a question of what to use & what to avoid, but when to use which. Each has particular circumstances when it is the best choice, and different circumstances when it's the worst. The deciding factor is generally the lifetime of the data. Session state lives longer than form fields, and so on.
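For the ASP.NET flavour specifically, a minimal hedged C# illustration of server-side session state keyed by the client's cookie (the control and key names here are placeholders):

    // In any page: store a value against this user's session. ASP.NET hands the browser an
    // ASP.NET_SessionId cookie; the data itself stays on the server (InProc, StateServer or
    // SQL Server, depending on the <sessionState> mode configured in web.config).
    protected void Page_Load(object sender, EventArgs e)
    {
        Session["LastVisited"] = DateTime.Now;
    }

    // Later, on another page, read it back for the same user.
    protected void ShowLastVisit()
    {
        object value = Session["LastVisited"];
        if (value != null)
        {
            lblLastVisit.Text = "Last visit: " + ((DateTime)value).ToString();
        }
    }

The same pattern - opaque key on the client, real data on the server - is what the PHP and database-backed session options above boil down to as well.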
{ "language": "en", "url": "https://stackoverflow.com/questions/95655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: .NET property generating "must declare a body because it is not marked abstract or extern" compilation error I have a .NET 3.5 (target framework) web application. I have some code that looks like this: public string LogPath { get; private set; } public string ErrorMsg { get; private set; } It's giving me this compilation error for these lines: "must declare a body because it is not marked abstract or extern." Any ideas? My understanding was that this style of property was valid as of .NET 3.0. Thanks! The problem turned out to be in my .sln file itself. Although I was changing the target version in my build options, in the .sln file, I found this: TargetFramework = "3.0" Changing that to "3.5" solved it. Thanks, guys! A: Your code is valid - it should work fine. Go into the property pages of your project and make sure that the "Target Framework" is .NET 3.0 or 3.5. A: The syntax is valid. And you can set different access modifiers. You aren't on an interface, are you? And the class these are in isn't abstract, is it? Also, it doesn't matter what version of the framework you target, because this is a compiler feature. VS2008 will implement the property with backing stores for you. A: add to web.config <system.codedom> <compilers> <compiler language="c#;cs;csharp" extension=".cs" type="Microsoft.CSharp.CSharpCodeProvider,System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" warningLevel="4"> <providerOption name="CompilerVersion" value="v3.5" /> <providerOption name="WarnAsError" value="false" /> </compiler> </compilers> </system.codedom> A: You are correct; that style is allowed. I'd look into the standard assemblies referenced. I'm not sure which you'd need to get that to compile, but I figure somehow you're pointing to the .NET v2.0 version of csc.exe. A: That error should not be coming from the code you posted. According to MSDN, you've done this right: http://msdn.microsoft.com/en-us/library/bb384054.aspx Hence I would recommend you re-check the error message, and where the compiler says the error is coming from. The text of the message you posted did not include a reference to properties, and there is a similar message for functions... Anything that is missing an implementation and not on an interface or marked abstract or extern can generate this error. The auto-property is a feature of the C# 3.0 language/compiler. If you are using VS 2008, it should work even if you are targeting .NET 2.0. I JUST tested it to make sure. A: This error can also happen if you are using CodeFile="MyControl.ascx.cs" in your MyControl.ascx instead of CodeBehind="MyControl.ascx.cs". In the case of CodeFile, the 2.0 compiler tries to recompile the page, even if you have a Web Project instead of a Web Site, and of course it fails. Changing the attribute name to CodeBehind fixed the problem in my case. A: Where do you define these properties? Directly in the as*x file or in the code-behind? (I don't think that can be a reason, but if the build target is .NET 3.5 I can't see anything else) A: This also happens on a raw web site project where there was no web.config generated. Although the solution file said 3.5, .NET needed the web.config to state it as well in order to recognize it. I ran debug, allowing it to create a web.config, and everything worked. So it is like the answer provided, but just make sure you have one. A: It is, as long as you put abstract in front, or implement the methods. 
public abstract string LogPath { get; protected set; }
public abstract string ErrorMsg { get; protected set; }
(An abstract property cannot have a private accessor, hence the protected setters.)
See http://forums.asp.net/t/1031651.aspx
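For illustration, this is roughly what the auto-implemented syntax in the question expands to, alongside the explicit form you would write by hand against a C# 2.0 compiler; the backing-field name here is arbitrary:
// C# 3.0 auto-implemented property -- the compiler generates the backing field.
public string LogPath { get; private set; }

// Hand-written equivalent that any C# 2.0 compiler accepts.
private string logPath;
public string LogPath
{
    get { return logPath; }
    private set { logPath = value; }
}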
{ "language": "en", "url": "https://stackoverflow.com/questions/95683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: VB.Net (or C#) 2008 Multi Threaded Import I am looking to build a multi-threaded text import facility (generally CSV into SQL Server 2005) and would like to do this in VB.NET but I am not against C#. I have VS 2008 trial and just don't know where to begin. Can anyone point me in the direction of where I can look at and play with the source of a VERY simple multi-threaded application for VS 2008? Thanks!
A: This is a great article: http://www.devx.com/DevX/10MinuteSolution/20365 In particular:
Dim t As Thread
t = New Thread(AddressOf Me.BackgroundProcess)
t.Start()

Private Sub BackgroundProcess()
    Dim i As Integer = 1
    Do While True
        ListBox1.Items.Add("Iterations: " + i)
        i += 1
        Thread.CurrentThread.Sleep(2000)
    Loop
End Sub
A: The referenced DevX article is from 2001 and .Net Framework 1.1, but today .Net Framework 2.0 provides the BackgroundWorker class. This is the recommended threading class if your application includes a foreground UI component. From MSDN Threads and Threading: If you need to run background threads that interact with the user interface, the .NET Framework version 2.0 provides a BackgroundWorker component that communicates using events, with cross-thread marshaling to the user-interface thread. This example from MSDN BackgroundWorker Class shows a background task, progress %, and cancel option. (The example is longer than the DevX sample, but has a lot more functionality.)
Imports System.ComponentModel

Partial Public Class Page
    Inherits UserControl

    Private bw As BackgroundWorker = New BackgroundWorker

    Public Sub New()
        InitializeComponent()

        bw.WorkerReportsProgress = True
        bw.WorkerSupportsCancellation = True
        AddHandler bw.DoWork, AddressOf bw_DoWork
        AddHandler bw.ProgressChanged, AddressOf bw_ProgressChanged
        AddHandler bw.RunWorkerCompleted, AddressOf bw_RunWorkerCompleted
    End Sub

    Private Sub buttonStart_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
        If Not bw.IsBusy = True Then
            bw.RunWorkerAsync()
        End If
    End Sub

    Private Sub buttonCancel_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
        If bw.WorkerSupportsCancellation = True Then
            bw.CancelAsync()
        End If
    End Sub

    Private Sub bw_DoWork(ByVal sender As Object, ByVal e As DoWorkEventArgs)
        Dim worker As BackgroundWorker = CType(sender, BackgroundWorker)

        For i = 1 To 10
            If bw.CancellationPending = True Then
                e.Cancel = True
                Exit For
            Else
                ' Perform a time consuming operation and report progress.
                System.Threading.Thread.Sleep(500)
                bw.ReportProgress(i * 10)
            End If
        Next
    End Sub

    Private Sub bw_RunWorkerCompleted(ByVal sender As Object, ByVal e As RunWorkerCompletedEventArgs)
        If e.Cancelled = True Then
            Me.tbProgress.Text = "Canceled!"
        ElseIf e.Error IsNot Nothing Then
            Me.tbProgress.Text = "Error: " & e.Error.Message
        Else
            Me.tbProgress.Text = "Done!"
        End If
    End Sub

    Private Sub bw_ProgressChanged(ByVal sender As Object, ByVal e As ProgressChangedEventArgs)
        Me.tbProgress.Text = e.ProgressPercentage.ToString() & "%"
    End Sub
End Class
A: About the best threading document I ever found was this http://www.albahari.com/threading/ If I may, the problem with simple examples is that they're often too simple. Once you get past the counting or sort-in-background demos you generally need to update the UI or similar, and there are some gotchas. Similarly you rarely have to deal with resource contention in simple examples, and having threads degrade gracefully when a resource isn't available (such as a Db connection) requires thought.
Conceptually you need to decide how you're going to distribute your work across the threads and how many do you want. There's overhead associated with managing threads and some mechanisms use a shared thread pool that could be subject to resource contention itself (for example, any time you run a program that simply displays an empty form, how many threads do you see under task manager). So for your case, you threads doing the actual uploading need to signal back if they've completed, if they've failed (and what the failure was). The controller needs to be able to deal with those and manage the start/stop processes and so on. Finally (almost), assuming that making something multithread will increase performance doesn't always hold true. If for example, you chop a file up into segments but it has to travel across a low speed link (ADSL say), you're constrained by external forces and no amount of threading trickery is going to get around that. The same can apply for database updates, web requests, anything invloving large amounts of disk i/o and so on. Despite all this, I'm not the prophet of doom. The references here are more than adequate to help you achieve what you want but be aware that one of the reasons threading seems complicated is because it can be :) If you want more control than the BackgroundWorker/Threadpool but don't want to do everything yourself there are at least two very good freebie threading libraries knocking around the place (Wintellect & PowerThreading) Cheers Simon
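To make the threading advice above concrete for the CSV-to-SQL-Server case in the question, here is a rough C# sketch of the import work running off the UI thread via the thread pool and handed to SqlBulkCopy. The connection string, target table and the naive comma-splitting (header row assumed, no quoted fields) are placeholder assumptions, not a production CSV parser:
using System.Data;
using System.Data.SqlClient;
using System.IO;
using System.Threading;

class CsvImporter
{
    // Placeholder values -- adjust for your environment.
    const string ConnectionString = "Data Source=.;Initial Catalog=MyDb;Integrated Security=True";
    const string TargetTable = "dbo.ImportedRows";

    public static void ImportAsync(string csvPath)
    {
        // Keep the UI thread free; a pool thread does the heavy lifting.
        ThreadPool.QueueUserWorkItem(delegate { Import(csvPath); });
    }

    static void Import(string csvPath)
    {
        // Naive CSV parse: assumes a header row and no quoted fields containing commas.
        DataTable table = new DataTable();
        using (StreamReader reader = new StreamReader(csvPath))
        {
            string[] headers = reader.ReadLine().Split(',');
            foreach (string header in headers)
                table.Columns.Add(header);

            string line;
            while ((line = reader.ReadLine()) != null)
                table.Rows.Add(line.Split(','));
        }

        using (SqlConnection connection = new SqlConnection(ConnectionString))
        using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connection))
        {
            connection.Open();
            bulkCopy.DestinationTableName = TargetTable;
            bulkCopy.WriteToServer(table);
        }
    }
}
If the UI needs progress or cancel support, the same Import routine can be driven from a BackgroundWorker, as in the MSDN example above, instead of the thread pool.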
{ "language": "en", "url": "https://stackoverflow.com/questions/95700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What causes Firefox to make a GET request after submitting a form via the POST method? What causes Firefox to follow a POST request with a GET request when submitting a form via the POST method? The GET request is sent to the same URL as the POST but without the request parameters. If you change the form method to GET, it results in two identical GET requests.
A: This is a bug in Firefox 3. It happens when the response to the POST contains an image tag with an empty source attribute, e.g. <img src=""/>
A: The URL POSTed to might be returning a redirect -- that would cause a GET. This is commonly done so that the page can be refreshed without reposting.
A: Probably there is some JavaScript involved. The form is submitted as a result of an onclick event in an anchor with: href="..." onclick="..form.submit()"
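To illustrate the redirect explanation above, a minimal Post/Redirect/Get sketch in ASP.NET code-behind (the page and the handling step are made up for the example):
using System;
using System.Web.UI;

public partial class SubmitPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (IsPostBack)
        {
            // ... handle the posted form values here ...

            // Redirect so the browser's follow-up request is a clean GET,
            // which also makes a refresh safe (no "resend form data?" prompt).
            Response.Redirect("Confirm.aspx");
        }
    }
}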
{ "language": "en", "url": "https://stackoverflow.com/questions/95715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What is the easiest way to convert from asp classic to asp.net? I am a .Net developer that has been tasked with upgrading a classic asp website to asp.net. The website is currently running on luck and bubble gum but there is not enough time or money to stop progress and do a full rewrite. Of course I will still need to be able to deliver new features while I am upgrading. What strategies should I use to make a smooth gradual change to asp.net? Should I convert to a single tier .net solution and then refactor to a proper multi-tier solution or should I design my business and data layers now? Should I go straight to 3.5 or is it easier to just get to 1.1 and upgrade to 2.0 or 3.5 after? A full conversion would probably take 3-5 months. There is also some existing 1.1 code, which is why I am considering using that as a jumping off point. A: Don't throw away your code! It's the single worst mistake you can make (on a large codebase). See Things You Should Never Do, Part 1. You've invested a lot of effort into that old code and worked out many bugs. Throwing it away is a classic developer mistake (and one I've done many times). It makes you feel "better", like a spring cleaning. But you don't need to buy a new apartment and all new furniture to outfit your house. You can work on one room at a time... and maybe some things just need a new paintjob. Hence, this is where refactoring comes in. For new functionality in your app, write it in C# and call it from your classic ASP. You'll be forced to be modular when you rewrite this new code. When you have time, refactor parts of your old code into C# as well, and work out the bugs as you go. Eventually, you'll have replaced your app with all new code. You could also write your own compiler. We wrote one for our classic ASP app a long time ago to allow us to output PHP. It's called Wasabi and I think it's the reason Jeff Atwood thought Joel Spolsky went off his rocker. Actually, maybe we should just ship it, and then you could use that. It allowed us to switch our entire codebase to .NET for the next release while only rewriting a very small portion of our source. It also caused a bunch of people to call us crazy, but writing a compiler is not that complicated, and it gave us a lot of flexibility. Also, if this is an internal only app, just leave it. Don't rewrite it - you are the only customer and if the requirement is you need to run it as classic asp, you can meet that requirement. A: Having been a longtime classic asp programmer, and now an ASP.NET dev, I would take the time and architect it properly in the 2.0 framework (3.5 if you want/need the features). My last job we had a large handful of very badly build classic asp apps that we were rebuilding, and the "nuke and pave" approach was the most successful. Use the existing classic app as your functional spec and wireframes, and build your tasks and tech specs off of that. A: How long would a complete conversion/rewrite take? It's also going to depend on how you've structured your original project. I can answer that you should just target v2.0 (3.5 if you want/need it's features) from the beginning. There's no need to subject yourself to 1.1 of the framework. A: Try these links Migrating from ASP Key Considerations Converting ASP to ASP.NET ASP to ASP.NET Migration Guide A: You may want to look at the new ASP.NET MVC framework. The level of flexibility is amazing and the coding style is slightly more akin to the ASP classic approach, albeit with much better separation of church and state. 
A: Take a look at Snitz Forums (www.snitz.com) - they are currently in ASP but the port to ASP.NET is almost complete. Both code bases are available for you to look at so you may get an idea of how it has been done there to help you. A: I would avoid going into .NET 1.1 since Microsoft is ending support for v 1.1 of the .NET Framework on 10/14/2008. The extended support runs through 10/8/2013 but is typically expensive to purchase. Any bugs or security holes will not be addressed and are your problem. http://support.microsoft.com/lifecycle/?LN=en-us&x=11&y=10&p1=1249 Paul A: easiest way to do it is to just jump in head first. get some asp.net books and dive into visual studio. Do the examples, play with it, build something fun for yourself. You'll learn by doing. A: I'm also working on a gradual migration from classic ASP to ASP.NET. Our first phase is migrating some common logic from an ASP include to a .NET assembly that is exposed to COM Interop so they can be called by both classic ASP and ASP.NET. I've written some tests using ASPUnit to verify the behavior after migration to the .NET assembly (with the added benefit of safer refactoring). Once the core logic is in .NET, we can begin creating new pages in ASP.NET and migrating individual ASP pages to ASP.NET at our own pace. I would recommend .NET 2.0 or 3.5 over 1.1. ASP.NET MVC looks like an attractive upgrade path.
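As a sketch of the COM Interop route described in the last answer (the class name, ProgId and method are hypothetical): once the assembly is registered with regasm, classic ASP reaches it through Server.CreateObject while the new ASP.NET pages reference the assembly directly.
using System.Runtime.InteropServices;

// Hypothetical shared business-logic class callable from both worlds:
//   classic ASP:  Set rules = Server.CreateObject("MyCompany.OrderRules")
//   ASP.NET:      reference the assembly and new up OrderRules directly
[ComVisible(true)]
[ProgId("MyCompany.OrderRules")]
public class OrderRules
{
    public double ApplyDiscount(double total, int customerLevel)
    {
        // One copy of the rule, shared by the old site and the new one.
        return customerLevel > 1 ? total * 0.9 : total;
    }
}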
{ "language": "en", "url": "https://stackoverflow.com/questions/95724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: How to convert floats to human-readable fractions? Let's say we have 0.33, we need to output 1/3. If we have 0.4, we need to output 2/5. The idea is to make it human-readable to make the user understand "x parts out of y" as a better way of understanding data. I know that percentages is a good substitute but I was wondering if there was a simple way to do this? A: A C# implementation /// <summary> /// Represents a rational number /// </summary> public struct Fraction { public int Numerator; public int Denominator; /// <summary> /// Constructor /// </summary> public Fraction(int numerator, int denominator) { this.Numerator = numerator; this.Denominator = denominator; } /// <summary> /// Approximates a fraction from the provided double /// </summary> public static Fraction Parse(double d) { return ApproximateFraction(d); } /// <summary> /// Returns this fraction expressed as a double, rounded to the specified number of decimal places. /// Returns double.NaN if denominator is zero /// </summary> public double ToDouble(int decimalPlaces) { if (this.Denominator == 0) return double.NaN; return System.Math.Round( Numerator / (double)Denominator, decimalPlaces ); } /// <summary> /// Approximates the provided value to a fraction. /// http://stackoverflow.com/questions/95727/how-to-convert-floats-to-human-readable-fractions /// </summary> private static Fraction ApproximateFraction(double value) { const double EPSILON = .000001d; int n = 1; // numerator int d = 1; // denominator double fraction = n / d; while (System.Math.Abs(fraction - value) > EPSILON) { if (fraction < value) { n++; } else { d++; n = (int)System.Math.Round(value * d); } fraction = n / (double)d; } return new Fraction(n, d); } } A: The Stern-Brocot Tree induces a fairly natural way to approximate real numbers by fractions with simple denominators. A: I have found David Eppstein's find rational approximation to given real number C code to be exactly what you are asking for. Its based on the theory of continued fractions and very fast and fairly compact. I have used versions of this customized for specific numerator and denominator limits. /* ** find rational approximation to given real number ** David Eppstein / UC Irvine / 8 Aug 1993 ** ** With corrections from Arno Formella, May 2008 ** ** usage: a.out r d ** r is real number to approx ** d is the maximum denominator allowed ** ** based on the theory of continued fractions ** if x = a1 + 1/(a2 + 1/(a3 + 1/(a4 + ...))) ** then best approximation is found by truncating this series ** (with some adjustments in the last term). ** ** Note the fraction can be recovered as the first column of the matrix ** ( a1 1 ) ( a2 1 ) ( a3 1 ) ... ** ( 1 0 ) ( 1 0 ) ( 1 0 ) ** Instead of keeping the sequence of continued fraction terms, ** we just keep the last partial product of these matrices. 
*/ #include <stdio.h> main(ac, av) int ac; char ** av; { double atof(); int atoi(); void exit(); long m[2][2]; double x, startx; long maxden; long ai; /* read command line arguments */ if (ac != 3) { fprintf(stderr, "usage: %s r d\n",av[0]); // AF: argument missing exit(1); } startx = x = atof(av[1]); maxden = atoi(av[2]); /* initialize matrix */ m[0][0] = m[1][1] = 1; m[0][1] = m[1][0] = 0; /* loop finding terms until denom gets too big */ while (m[1][0] * ( ai = (long)x ) + m[1][1] <= maxden) { long t; t = m[0][0] * ai + m[0][1]; m[0][1] = m[0][0]; m[0][0] = t; t = m[1][0] * ai + m[1][1]; m[1][1] = m[1][0]; m[1][0] = t; if(x==(double)ai) break; // AF: division by zero x = 1/(x - (double) ai); if(x>(double)0x7FFFFFFF) break; // AF: representation failure } /* now remaining x is between 0 and 1/ai */ /* approx as either 0 or 1/m where m is max that will fit in maxden */ /* first try zero */ printf("%ld/%ld, error = %e\n", m[0][0], m[1][0], startx - ((double) m[0][0] / (double) m[1][0])); /* now try other possibility */ ai = (maxden - m[1][1]) / m[1][0]; m[0][0] = m[0][0] * ai + m[0][1]; m[1][0] = m[1][0] * ai + m[1][1]; printf("%ld/%ld, error = %e\n", m[0][0], m[1][0], startx - ((double) m[0][0] / (double) m[1][0])); } A: Part of the problem is that so many fractions aren't actually easily construed as fractions. E.g. 0.33 isn't 1/3, it's 33/100. But if you remember your elementary school training, then there is a process of converting decimal values into fractions, however it's unlikely to give you what you want since most of the time decimal numbers aren't stored at 0.33, but 0.329999999999998 or some such. Do yourself a favor and don't bother with this, but if you need to then you can do the following: Multiply the original value by 10 until you remove the fractional part. Keep that number, and use it as the divisor. Then do a series of simplifications by looking for common denominators. So 0.4 would be 4/10. You would then look for common divisors starting with low values, probably prime numbers. Starting with 2, you would see if 2 divides both the numerator and denominator evenly by checking if the floor of division is the same as the division itself. floor(5/2) = 2 5/2 = 2.5 So 5 does not divide 2 evenly. So then you check the next number, say 3. You do this until you hit at or above the square root of the smaller number. After you do that then you need A: This is not an "algorithm", just a Python solution: http://docs.python.org/library/fractions.html >>> from fractions import Fraction >>> Fraction('3.1415926535897932').limit_denominator(1000) Fraction(355, 113) A: This algorithm by Ian Richards / John Kennedy not only returns nice fractions, it also performs very well in terms of speed. This is C# code as taken from this answer by me. It can handle all double values except special values like NaN and +/- infinity, which you'll have to add if needed. It returns a new Fraction(numerator, denominator). Replace by your own type. For more example values and a comparison with other algorithms, go here public Fraction RealToFraction(double value, double accuracy) { if (accuracy <= 0.0 || accuracy >= 1.0) { throw new ArgumentOutOfRangeException("accuracy", "Must be > 0 and < 1."); } int sign = Math.Sign(value); if (sign == -1) { value = Math.Abs(value); } // Accuracy is the maximum relative error; convert to absolute maxError double maxError = sign == 0 ? 
accuracy : value * accuracy; int n = (int) Math.Floor(value); value -= n; if (value < maxError) { return new Fraction(sign * n, 1); } if (1 - maxError < value) { return new Fraction(sign * (n + 1), 1); } double z = value; int previousDenominator = 0; int denominator = 1; int numerator; do { z = 1.0 / (z - (int) z); int temp = denominator; denominator = denominator * (int) z + previousDenominator; previousDenominator = temp; numerator = Convert.ToInt32(value * denominator); } while (Math.Abs(value - (double) numerator / denominator) > maxError && z != (int) z); return new Fraction((n * denominator + numerator) * sign, denominator); } Example values returned by this algorithm: Accuracy: 1.0E-3 | Richards Input | Result Error ======================| ============================= 3 | 3/1 0 0.999999 | 1/1 1.0E-6 1.000001 | 1/1 -1.0E-6 0.50 (1/2) | 1/2 0 0.33... (1/3) | 1/3 0 0.67... (2/3) | 2/3 0 0.25 (1/4) | 1/4 0 0.11... (1/9) | 1/9 0 0.09... (1/11) | 1/11 0 0.62... (307/499) | 8/13 2.5E-4 0.14... (33/229) | 16/111 2.7E-4 0.05... (33/683) | 10/207 -1.5E-4 0.18... (100/541) | 17/92 -3.3E-4 0.06... (33/541) | 5/82 -3.7E-4 0.1 | 1/10 0 0.2 | 1/5 0 0.3 | 3/10 0 0.4 | 2/5 0 0.5 | 1/2 0 0.6 | 3/5 0 0.7 | 7/10 0 0.8 | 4/5 0 0.9 | 9/10 0 0.01 | 1/100 0 0.001 | 1/1000 0 0.0001 | 1/10000 0 0.33333333333 | 1/3 1.0E-11 0.333 | 333/1000 0 0.7777 | 7/9 1.0E-4 0.11 | 10/91 -1.0E-3 0.1111 | 1/9 1.0E-4 3.14 | 22/7 9.1E-4 3.14... (pi) | 22/7 4.0E-4 2.72... (e) | 87/32 1.7E-4 0.7454545454545 | 38/51 -4.8E-4 0.01024801004 | 2/195 8.2E-4 0.99011 | 100/101 -1.1E-5 0.26... (5/19) | 5/19 0 0.61... (37/61) | 17/28 9.7E-4 | Accuracy: 1.0E-4 | Richards Input | Result Error ======================| ============================= 0.62... (307/499) | 299/486 -6.7E-6 0.05... (33/683) | 23/476 6.4E-5 0.06... (33/541) | 33/541 0 1E-05 | 1/99999 1.0E-5 0.7777 | 1109/1426 -1.8E-7 3.14... (pi) | 333/106 -2.6E-5 2.72... (e) | 193/71 1.0E-5 0.61... (37/61) | 37/61 0 A: "Let's say we have 0.33, we need to output "1/3". " What precision do you expect the "solution" to have? 0.33 is not equal to 1/3. How do you recognize a "good" (easy to read) answer? No matter what, a possible algorithm could be: If you expect to find a nearest fraction in a form X/Y where Y is less then 10, then you can loop though all 9 possible Ys, for each Y compute X, and then select the most accurate one. A: From Python 2.6 on there is the fractions module. (Quoting from the docs.) >>> from fractions import Fraction >>> Fraction('3.1415926535897932').limit_denominator(1000) Fraction(355, 113) >>> from math import pi, cos >>> Fraction.from_float(cos(pi/3)) Fraction(4503599627370497, 9007199254740992) >>> Fraction.from_float(cos(pi/3)).limit_denominator() Fraction(1, 2) A: You can do this in any programming language using the following steps: * *Multiply and Divide by 10^x where x is the power of 10 required to make sure that the number has no decimal places remaining. Example: Multiply 0.33 by 10^2 = 100 to make it 33 and divide it by the same to get 33/100 *Reduce the numerator and the denominator of the resulting fraction by factorization, till you can no longer obtain integers from the result. *The resulting reduced fraction should be your answer. Example: 0.2 =0.2 x 10^1/10^1 =2/10 =1/5 So, that can be read as '1 part out of 5' A: A built-in solution in R: library(MASS) fractions(0.666666666) ## [1] 2/3 This uses a continued fraction method and has optional cycles and max.denominator arguments for adjusting the precision. 
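The multiply-by-a-power-of-ten recipe a few answers up can be sketched in C# roughly as follows. It uses decimal rather than double to sidestep binary floating-point noise, it only handles terminating decimals exactly (0.33 gives 33/100, 0.2 reduces to 1/5), and the type and helper names are made up:
using System;

static class ExactFractions
{
    // Exact fraction for a terminating decimal, then reduced by the GCD.
    // Example: ToFraction(0.33m, out n, out d) leaves n = 33, d = 100.
    public static void ToFraction(decimal value, out long numerator, out long denominator)
    {
        denominator = 1;
        while (value != decimal.Truncate(value))
        {
            value *= 10;
            denominator *= 10;
        }
        numerator = (long)value;

        long gcd = Gcd(Math.Abs(numerator), denominator);
        numerator /= gcd;
        denominator /= gcd;
    }

    private static long Gcd(long a, long b)
    {
        while (b != 0)
        {
            long temp = b;
            b = a % b;
            a = temp;
        }
        return a;
    }
}
Unlike the approximation algorithms in this thread, this gives the exact fraction of the decimal as written, not the "nicest" nearby fraction such as 1/3 for 0.33.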
A: If the the output is to give a human reader a fast impression of the order of the result, it makes no sense return something like "113/211", so the output should limit itself to using one-digit numbers (and maybe 1/10 and 9/10). If so, you can observe that there are only 27 different fractions. Since the underlying math for generating the output will never change, a solution could be to simply hard-code a binary search tree, so that the function would perform at most log(27) ~= 4 3/4 comparisons. Here is a tested C version of the code char *userTextForDouble(double d, char *rval) { if (d == 0.0) return "0"; // TODO: negative numbers:if (d < 0.0)... if (d >= 1.0) sprintf(rval, "%.0f ", floor(d)); d = d-floor(d); // now only the fractional part is left if (d == 0.0) return rval; if( d < 0.47 ) { if( d < 0.25 ) { if( d < 0.16 ) { if( d < 0.12 ) // Note: fixed from .13 { if( d < 0.11 ) strcat(rval, "1/10"); // .1 else strcat(rval, "1/9"); // .1111.... } else // d >= .12 { if( d < 0.14 ) strcat(rval, "1/8"); // .125 else strcat(rval, "1/7"); // .1428... } } else // d >= .16 { if( d < 0.19 ) { strcat(rval, "1/6"); // .1666... } else // d > .19 { if( d < 0.22 ) strcat(rval, "1/5"); // .2 else strcat(rval, "2/9"); // .2222... } } } else // d >= .25 { if( d < 0.37 ) // Note: fixed from .38 { if( d < 0.28 ) // Note: fixed from .29 { strcat(rval, "1/4"); // .25 } else // d >=.28 { if( d < 0.31 ) strcat(rval, "2/7"); // .2857... else strcat(rval, "1/3"); // .3333... } } else // d >= .37 { if( d < 0.42 ) // Note: fixed from .43 { if( d < 0.40 ) strcat(rval, "3/8"); // .375 else strcat(rval, "2/5"); // .4 } else // d >= .42 { if( d < 0.44 ) strcat(rval, "3/7"); // .4285... else strcat(rval, "4/9"); // .4444... } } } } else { if( d < 0.71 ) { if( d < 0.60 ) { if( d < 0.55 ) // Note: fixed from .56 { strcat(rval, "1/2"); // .5 } else // d >= .55 { if( d < 0.57 ) strcat(rval, "5/9"); // .5555... else strcat(rval, "4/7"); // .5714 } } else // d >= .6 { if( d < 0.62 ) // Note: Fixed from .63 { strcat(rval, "3/5"); // .6 } else // d >= .62 { if( d < 0.66 ) strcat(rval, "5/8"); // .625 else strcat(rval, "2/3"); // .6666... } } } else { if( d < 0.80 ) { if( d < 0.74 ) { strcat(rval, "5/7"); // .7142... } else // d >= .74 { if(d < 0.77 ) // Note: fixed from .78 strcat(rval, "3/4"); // .75 else strcat(rval, "7/9"); // .7777... } } else // d >= .8 { if( d < 0.85 ) // Note: fixed from .86 { if( d < 0.83 ) strcat(rval, "4/5"); // .8 else strcat(rval, "5/6"); // .8333... } else // d >= .85 { if( d < 0.87 ) // Note: fixed from .88 { strcat(rval, "6/7"); // .8571 } else // d >= .87 { if( d < 0.88 ) // Note: fixed from .89 { strcat(rval, "7/8"); // .875 } else // d >= .88 { if( d < 0.90 ) strcat(rval, "8/9"); // .8888... else strcat(rval, "9/10"); // .9 } } } } } } return rval; } A: You'll have to figure out what level of error you're willing to accept. Not all decimal fractions will reduce to a simple fraction. I'd probably pick an easily-divisible number, like 60, and figure out how many 60ths is closest to the value, then simplify the fraction. A: One solution is to just store all numbers as rational numbers in the first place. There are libraries for rational number arithmetic (eg GMP). If using an OO language you may be able to just use a rational number class library to replace your number class. Finance programs, among others, would use such a solution to be able to make exact calculations and preserve precision that may be lost using a plain float. 
Of course it will be a lot slower so it may not be practical for you. Depends on how much calculations you need to do, and how important the precision is for you. a = rational(1); b = rational(3); c = a / b; print (c.asFraction) ---> "1/3" print (c.asFloat) ----> "0.333333" A: I think the best way to do this is to first convert your float value to an ascii representation. In C++ you could use ostringstream or in C, you could use sprintf. Here's how it would look in C++: ostringstream oss; float num; cin >> num; oss << num; string numStr = oss.str(); int i = numStr.length(), pow_ten = 0; while (i > 0) { if (numStr[i] == '.') break; pow_ten++; i--; } for (int j = 1; j < pow_ten; j++) { num *= 10.0; } cout << static_cast<int>(num) << "/" << pow(10, pow_ten - 1) << endl; A similar approach could be taken in straight C. Afterwards you would need to check that the fraction is in lowest terms. This algorithm will give a precise answer, i.e. 0.33 would output "33/100", not "1/3." However, 0.4 would give "4/10," which when reduced to lowest terms would be "2/5." This may not be as powerful as EppStein's solution, but I believe this is more straightforward. A: Let's say we have 0.33, we need to output "1/3". If we have "0.4", we need to output "2/5". It's wrong in common case, because of 1/3 = 0.3333333 = 0.(3) Moreover, it's impossible to find out from suggested above solutions is decimal can be converted to fraction with defined precision, because output is always fraction. BUT, i suggest my comprehensive function with many options based on idea of Infinite geometric series, specifically on formula: At first this function is trying to find period of fraction in string representation. After that described above formula is applied. Rational numbers code is borrowed from Stephen M. McKamey rational numbers implementation in C#. I hope there is not very hard to port my code on other languages. 
/// <summary> /// Convert decimal to fraction /// </summary> /// <param name="value">decimal value to convert</param> /// <param name="result">result fraction if conversation is succsess</param> /// <param name="decimalPlaces">precision of considereation frac part of value</param> /// <param name="trimZeroes">trim zeroes on the right part of the value or not</param> /// <param name="minPeriodRepeat">minimum period repeating</param> /// <param name="digitsForReal">precision for determination value to real if period has not been founded</param> /// <returns></returns> public static bool FromDecimal(decimal value, out Rational<T> result, int decimalPlaces = 28, bool trimZeroes = false, decimal minPeriodRepeat = 2, int digitsForReal = 9) { var valueStr = value.ToString("0.0000000000000000000000000000", CultureInfo.InvariantCulture); var strs = valueStr.Split('.'); long intPart = long.Parse(strs[0]); string fracPartTrimEnd = strs[1].TrimEnd(new char[] { '0' }); string fracPart; if (trimZeroes) { fracPart = fracPartTrimEnd; decimalPlaces = Math.Min(decimalPlaces, fracPart.Length); } else fracPart = strs[1]; result = new Rational<T>(); try { string periodPart; bool periodFound = false; int i; for (i = 0; i < fracPart.Length; i++) { if (fracPart[i] == '0' && i != 0) continue; for (int j = i + 1; j < fracPart.Length; j++) { periodPart = fracPart.Substring(i, j - i); periodFound = true; decimal periodRepeat = 1; decimal periodStep = 1.0m / periodPart.Length; var upperBound = Math.Min(fracPart.Length, decimalPlaces); int k; for (k = i + periodPart.Length; k < upperBound; k += 1) { if (periodPart[(k - i) % periodPart.Length] != fracPart[k]) { periodFound = false; break; } periodRepeat += periodStep; } if (!periodFound && upperBound - k <= periodPart.Length && periodPart[(upperBound - i) % periodPart.Length] > '5') { var ind = (k - i) % periodPart.Length; var regroupedPeriod = (periodPart.Substring(ind) + periodPart.Remove(ind)).Substring(0, upperBound - k); ulong periodTailPlusOne = ulong.Parse(regroupedPeriod) + 1; ulong fracTail = ulong.Parse(fracPart.Substring(k, regroupedPeriod.Length)); if (periodTailPlusOne == fracTail) periodFound = true; } if (periodFound && periodRepeat >= minPeriodRepeat) { result = FromDecimal(strs[0], fracPart.Substring(0, i), periodPart); break; } else periodFound = false; } if (periodFound) break; } if (!periodFound) { if (fracPartTrimEnd.Length >= digitsForReal) return false; else { result = new Rational<T>(long.Parse(strs[0]), 1, false); if (fracPartTrimEnd.Length != 0) result = new Rational<T>(ulong.Parse(fracPartTrimEnd), TenInPower(fracPartTrimEnd.Length)); return true; } } return true; } catch { return false; } } public static Rational<T> FromDecimal(string intPart, string fracPart, string periodPart) { Rational<T> firstFracPart; if (fracPart != null && fracPart.Length != 0) { ulong denominator = TenInPower(fracPart.Length); firstFracPart = new Rational<T>(ulong.Parse(fracPart), denominator); } else firstFracPart = new Rational<T>(0, 1, false); Rational<T> secondFracPart; if (periodPart != null && periodPart.Length != 0) secondFracPart = new Rational<T>(ulong.Parse(periodPart), TenInPower(fracPart.Length)) * new Rational<T>(1, Nines((ulong)periodPart.Length), false); else secondFracPart = new Rational<T>(0, 1, false); var result = firstFracPart + secondFracPart; if (intPart != null && intPart.Length != 0) { long intPartLong = long.Parse(intPart); result = new Rational<T>(intPartLong, 1, false) + (intPartLong == 0 ? 
1 : Math.Sign(intPartLong)) * result; } return result; } private static ulong TenInPower(int power) { ulong result = 1; for (int l = 0; l < power; l++) result *= 10; return result; } private static decimal TenInNegPower(int power) { decimal result = 1; for (int l = 0; l > power; l--) result /= 10.0m; return result; } private static ulong Nines(ulong power) { ulong result = 9; if (power >= 0) for (ulong l = 0; l < power - 1; l++) result = result * 10 + 9; return result; } There are some examples of usings: Rational<long>.FromDecimal(0.33333333m, out r, 8, false); // then r == 1 / 3; Rational<long>.FromDecimal(0.33333333m, out r, 9, false); // then r == 33333333 / 100000000; Your case with right part zero part trimming: Rational<long>.FromDecimal(0.33m, out r, 28, true); // then r == 1 / 3; Rational<long>.FromDecimal(0.33m, out r, 28, true); // then r == 33 / 100; Min period demostration: Rational<long>.FromDecimal(0.123412m, out r, 28, true, 1.5m)); // then r == 1234 / 9999; Rational<long>.FromDecimal(0.123412m, out r, 28, true, 1.6m)); // then r == 123412 / 1000000; because of minimu repeating of period is 0.1234123 in this case. Rounding at the end: Rational<long>.FromDecimal(0.8888888888888888888888888889m, out r)); // then r == 8 == 9; The most interesting case: Rational<long>.FromDecimal(0.12345678m, out r, 28, true, 2, 9); // then r == 12345678 / 100000000; Rational<long>.FromDecimal(0.12345678m, out r, 28, true, 2, 8); // Conversation failed, because of period has not been founded and there are too many digits in fraction part of input value. Rational<long>.FromDecimal(0.12121212121212121m, out r, 28, true, 2, 9)); // then r == 4 / 33; Despite of too many digits in input value, period has been founded. Thus it's possible to convert value to fraction. Other tests and code everyone can find in my MathFunctions library on github. A: Ruby already has a built in solution: 0.33.rationalize.to_s # => "33/100" 0.4.rationalize.to_s # => "2/5" In Rails, ActiveRecord numerical attributes can be converted too: product.size = 0.33 product.size.to_r.to_s # => "33/100" A: Answer in C++, assuming that you have a BigInt class, which can store unlimited-size integers. You can use unsigned long long instead, but it will only work for certain values. void GetRational(double val) { if (val == val+1) // Inf throw "Infinite Value"; if (val != val) // NaN throw "Undefined Value"; bool sign = false; BigInt enumerator = 0; BigInt denominator = 1; if (val < 0) { val = -val; sign = true; } while (val > 0) { unsigned int intVal = (unsigned int)val; val -= intVal; enumerator += intVal; val *= 2; enumerator *= 2; denominator *= 2; } BigInt gcd = GCD(enumerator,denominator); enumerator /= gcd; denominator /= gcd; Print(sign? "-":"+"); Print(enumerator); Print("/"); Print(denominator); // Or simply return {sign,enumerator,denominator} as you wish } BTW, GetRational(0.0) will return "+0/1", so you might wanna handle this case separately. P.S.: I've been using this code in my own 'RationalNum' class for several years, and it's been tested thoroughly. 
A: Here's a link explaining the math behind converting a decimal to a fraction: http://www.webmath.com/dec2fract.html And here's an example function for how to actually do it using VB (from www.freevbcode.com/ShowCode.asp?ID=582): Public Function Dec2Frac(ByVal f As Double) As String Dim df As Double Dim lUpperPart As Long Dim lLowerPart As Long lUpperPart = 1 lLowerPart = 1 df = lUpperPart / lLowerPart While (df <> f) If (df < f) Then lUpperPart = lUpperPart + 1 Else lLowerPart = lLowerPart + 1 lUpperPart = f * lLowerPart End If df = lUpperPart / lLowerPart Wend Dec2Frac = CStr(lUpperPart) & "/" & CStr(lLowerPart) End Function (From google searches: convert decimal to fraction, convert decimal to fraction code) A: You might want to read What Every Computer Scientist Should Know about Floating Point Arithmetic. You'll have to specify some precision by multiplying by a large number: 3.141592 * 1000000 = 3141592 then you can make a fraction: 3 + (141592 / 1000000) and reduce via GCD... 3 + (17699 / 125000) but there is no way to get the intended fraction out. You might want to always use fractions throughout your code instead --just remember to reduce fractions when you can to avoid overflow! A: Here are Perl and Javascript versions of the VB code suggested by devinmoore: Perl: sub dec2frac { my $d = shift; my $df = 1; my $top = 1; my $bot = 1; while ($df != $d) { if ($df < $d) { $top += 1; } else { $bot += 1; $top = int($d * $bot); } $df = $top / $bot; } return "$top/$bot"; } And the almost identical javascript: function dec2frac(d) { var df = 1; var top = 1; var bot = 1; while (df != d) { if (df < d) { top += 1; } else { bot += 1; top = parseInt(d * bot); } df = top / bot; } return top + '/' + bot; } //Put in your test number here: var floatNumber = 2.56; alert(floatNumber + " = " + dec2frac(floatNumber)); A: You are going to have two basic problems that will make this hard: 1) Floating point isn't an exact representation which means that if you have a fraction of "x/y" which results in a value of "z", your fraction algorithm may return a result other than "x/y". 2) There are infinity many more irrational numbers than rational. A rational number is one that can be represented as a fraction. Irrational being ones that can not. However, in a cheap sort of way, since floating point has limit accuracy, then you can always represent it as some form of faction. (I think...) 
A: Completed the above code and converted it to as3 public static function toFrac(f:Number) : String { if (f>1) { var parte1:int; var parte2:Number; var resultado:String; var loc:int = String(f).indexOf("."); parte2 = Number(String(f).slice(loc, String(f).length)); parte1 = int(String(f).slice(0,loc)); resultado = toFrac(parte2); parte1 *= int(resultado.slice(resultado.indexOf("/") + 1, resultado.length)) + int(resultado.slice(0, resultado.indexOf("/"))); resultado = String(parte1) + resultado.slice(resultado.indexOf("/"), resultado.length) return resultado; } if( f < 0.47 ) if( f < 0.25 ) if( f < 0.16 ) if( f < 0.13 ) if( f < 0.11 ) return "1/10"; else return "1/9"; else if( f < 0.14 ) return "1/8"; else return "1/7"; else if( f < 0.19 ) return "1/6"; else if( f < 0.22 ) return "1/5"; else return "2/9"; else if( f < 0.38 ) if( f < 0.29 ) return "1/4"; else if( f < 0.31 ) return "2/7"; else return "1/3"; else if( f < 0.43 ) if( f < 0.40 ) return "3/8"; else return "2/5"; else if( f < 0.44 ) return "3/7"; else return "4/9"; else if( f < 0.71 ) if( f < 0.60 ) if( f < 0.56 ) return "1/2"; else if( f < 0.57 ) return "5/9"; else return "4/7"; else if( f < 0.63 ) return "3/5"; else if( f < 0.66 ) return "5/8"; else return "2/3"; else if( f < 0.80 ) if( f < 0.74 ) return "5/7"; else if(f < 0.78 ) return "3/4"; else return "7/9"; else if( f < 0.86 ) if( f < 0.83 ) return "4/5"; else return "5/6"; else if( f < 0.88 ) return "6/7"; else if( f < 0.89 ) return "7/8"; else if( f < 0.90 ) return "8/9"; else return "9/10"; } A: Here is a quick and dirty implementation in javascript that uses a brute force approach. Not at all optimized, it works within a predefined range of fractions: http://jsfiddle.net/PdL23/1/ /* This should convert any decimals to a simplified fraction within the range specified by the two for loops. Haven't done any thorough testing, but it seems to work fine. I have set the bounds for numerator and denominator to 20, 20... but you can increase this if you want in the two for loops. Disclaimer: Its not at all optimized. (Feel free to create an improved version.) */ decimalToSimplifiedFraction = function(n) { for(num = 1; num < 20; num++) { // "num" is the potential numerator for(den = 1; den < 20; den++) { // "den" is the potential denominator var multiplyByInverse = (n * den ) / num; var roundingError = Math.round(multiplyByInverse) - multiplyByInverse; // Checking if we have found the inverse of the number, if((Math.round(multiplyByInverse) == 1) && (Math.abs(roundingError) < 0.01)) { return num + "/" + den; } } } }; //Put in your test number here. var floatNumber = 2.56; alert(floatNumber + " = " + decimalToSimplifiedFraction(floatNumber)); This is inspired by the approach used by JPS. A: As many people have stated you really can't convert a floating point back to a fraction (unless its extremely exact like .25). Of course you could create some type of look up for a large array of fractions and use some sort of fuzzy logic to produce the result you are looking for. Again this wouldn't be exact though and you would need to define a lower bounds of how large your want the denominator to go. .32 < x < .34 = 1/3 or something like that. A: Here is implementation for ruby http://github.com/valodzka/frac Math.frac(0.2, 100) # => (1/5) Math.frac(0.33, 10) # => (1/3) Math.frac(0.33, 100) # => (33/100) A: I came across an especially elegant Haskell solution making use of an anamorphism. It depends on the recursion-schemes package. 
{-# LANGUAGE AllowAmbiguousTypes #-} {-# LANGUAGE FlexibleContexts #-} import Control.Applicative (liftA2) import Control.Monad (ap) import Data.Functor.Foldable import Data.Ratio (Ratio, (%)) isInteger :: (RealFrac a) => a -> Bool isInteger = ((==) <*>) (realToFrac . floor) continuedFraction :: (RealFrac a) => a -> [Int] continuedFraction = liftA2 (:) floor (ana coalgebra) where coalgebra x | isInteger x = Nil | otherwise = Cons (floor alpha) alpha where alpha = 1 / (x - realToFrac (floor x)) collapseFraction :: (Integral a) => [Int] -> Ratio a collapseFraction [x] = fromIntegral x % 1 collapseFraction (x:xs) = (fromIntegral x % 1) + 1 / collapseFraction xs -- | Use the nth convergent to approximate x approximate :: (RealFrac a, Integral b) => a -> Int -> Ratio b approximate x n = collapseFraction $ take n (continuedFraction x) If you try this out in ghci, it really does work! λ:> approximate pi 2 22 % 7
{ "language": "en", "url": "https://stackoverflow.com/questions/95727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "112" }
Q: Why does an onclick property set with setAttribute fail to work in IE? Ran into this problem today, posting in case someone else has the same issue. var execBtn = document.createElement('input'); execBtn.setAttribute("type", "button"); execBtn.setAttribute("id", "execBtn"); execBtn.setAttribute("value", "Execute"); execBtn.setAttribute("onclick", "runCommand();"); Turns out to get IE to run an onclick on a dynamically generated element, we can't use setAttribute. Instead, we need to set the onclick property on the object with an anonymous function wrapping the code we want to run. execBtn.onclick = function() { runCommand() }; BAD IDEAS: You can do execBtn.setAttribute("onclick", function() { runCommand() }); but it will break in IE in non-standards mode according to @scunliffe. You can't do this at all execBtn.setAttribute("onclick", runCommand() ); because it executes immediately, and sets the result of runCommand() to be the onClick attribute value, nor can you do execBtn.setAttribute("onclick", runCommand); A: This is an amazing function for cross-browser compatible event binding. Got it from http://js.isite.net.au/snippets/addevent With it you can just do Events.addEvent(element, event, function); and be worry free! For example: (http://jsfiddle.net/Zxeka/) function hello() { alert('Hello'); } var button = document.createElement('input'); button.value = "Hello"; button.type = "button"; Events.addEvent(input_0, "click", hello); document.body.appendChild(button); Here's the function: // We create a function which is called immediately, // returning the actual function object. This allows us to // work in a separate scope and only return the functions // we require. var Events = (function() { // For DOM2-compliant browsers. function addEventW3C(el, ev, f) { // Since IE only supports bubbling, for // compatibility we can't use capturing here. return el.addEventListener(ev, f, false); } function removeEventW3C(el, ev, f) { el.removeEventListener(ev, f, false); } // The function as required by IE. function addEventIE(el, ev, f) { // This is to work around a bug in IE whereby the // current element doesn't get passed as context. // We pass it via closure instead and set it as the // context using call(). // This needs to be stored for removeEvent(). // We also store the original wrapped function as a // property, _w. ((el._evts = el._evts || [])[el._evts.length] = function(e) { return f.call(el, e); })._w = f; // We prepend "on" to the event name. return el.attachEvent("on" + ev, el._evts[el._evts.length - 1]); } function removeEventIE(el, ev, f) { for (var evts = el._evts || [], i = evts.length; i--; ) if (evts[i]._w === f) el.detachEvent("on" + ev, evts.splice(i, 1)[0]); } // A handler to call all events we've registered // on an element for legacy browsers. function addEventLegacyHandler(e) { var evts = this._evts[e.type]; for (var i = 0; i < evts.length; ++i) if (!evts[i].call(this, e || event)) return false; } // For older browsers. We basically reimplement // attachEvent(). function addEventLegacy(el, ev, f) { if (!el._evts) el._evts = {}; if (!el._evts[ev]) el._evts[ev] = []; el._evts[ev].push(f); return true; } function removeEventLegacy(el, ev, f) { // Loop through the handlers for this event type // and remove them if they match f. for (var evts = el._evts[ev] || [], i = evts.length; i--; ) if (evts[i] === f) evts.splice(i, 1); } // Select the appropriate functions based on what's // available on the window object and return them. return window.addEventListener ? 
{addEvent: addEventW3C, removeEvent: removeEventW3C} : window.attachEvent ? {addEvent: addEventIE, removeEvent: removeEventIE} : {addEvent: addEventLegacy, removeEvent: removeEventLegacy}; })(); If you don't want to use such a big function, this should work for almost all browsers, including IE: if (el.addEventListener) { el.addEventListener('click', function, false); } else if (el.attachEvent) { el.attachEvent('onclick', function); } In response to Craig's question. You're going to have to make a new element and copy over the attributes of the old element. This function should do the job: (source) function changeInputType(oldObject, oType) { var newObject = document.createElement('input'); newObject.type = oType; if(oldObject.size) newObject.size = oldObject.size; if(oldObject.value) newObject.value = oldObject.value; if(oldObject.name) newObject.name = oldObject.name; if(oldObject.id) newObject.id = oldObject.id; if(oldObject.className) newObject.className = oldObject.className; oldObject.parentNode.replaceChild(newObject,oldObject); return newObject; } A: to make this work in both FF and IE you must write both ways: button_element.setAttribute('onclick','doSomething();'); // for FF button_element.onclick = function() {doSomething();}; // for IE thanks to this post. UPDATE: This is to demonstrate that sometimes it is necessary to use setAttribute! This method works if you need to take the original onclick attribute from the HTML and add it to the onclick event, so that it doesn't get overridden: // get old onclick attribute var onclick = button_element.getAttribute("onclick"); // if onclick is not a function, it's not IE7, so use setAttribute if(typeof(onclick) != "function") { button_element.setAttribute('onclick','doSomething();' + onclick); // for FF,IE8,Chrome // if onclick is a function, use the IE7 method and call onclick() in the anonymous function } else { button_element.onclick = function() { doSomething(); onclick(); }; // for IE7 } A: Or you could use jQuery and avoid all those issues: var execBtn = $("<input>", { type: "button", id: "execBtn", value: "Execute" }) .click(runCommand); jQuery will take care of all the cross-browser issues as well. A: Actually, as far as I know, dynamically created inline event-handlers DO work perfectly within Internet Explorer 8 when created with the x.setAttribute() command; you just have to position them properly within your JavaScript code. I stumbled across the solution to your problem (and mine) here. When I moved all of my statements containing x.appendChild() to their correct positions (i.e., immediately following the last setAttribute command within their groups), I found that EVERY single setAttribute worked in IE8 as it was supposed to, including all form input attributes (including "name" and "type" attributes, as well as my "onclick" event-handlers). I found this quite remarkable, since all I got in IE before I did this was garbage rendered across the screen, and one error after another. In addition, I found that every setAttribute still worked within the other browsers as well, so if you just remember this simple coding-practice, you'll be good to go in most cases. However, this solution won't work if you have to change any attributes on the fly, since they cannot be changed in IE once their HTML element has been appended to the DOM; in this case, I would imagine that one would have to delete the element from the DOM, and then recreate it and its attributes (in the correct order, of course!) 
for them to work properly, and not throw any errors. A: Write the function inline, and the interpreter is smart enough to know you're writing a function. Do it like this, and it assumes it's just a string (which it technically is). A: function CheckBrowser(){ if(navigator.userAgent.match(/Android/i)!=null|| navigator.userAgent.match(/BlackBerry/i)!=null|| navigator.userAgent.match(/iPhone|iPad|iPod/i)!=null|| navigator.userAgent.match(/Nokia/i)!=null|| navigator.userAgent.match(/Opera M/i)!=null|| navigator.userAgent.match(/Chrome/i)!=null) { return 'OTHER'; }else{ return 'IE'; } } function AddButt(i){ var new_butt = document.createElement('input'); new_butt.setAttribute('type','button'); new_butt.setAttribute('value','Delete Item'); new_butt.setAttribute('id', 'answer_del_'+i); if(CheckBrowser()=='IE'){ new_butt.setAttribute("onclick", function() { DelElemAnswer(i) }); }else{ new_butt.setAttribute('onclick','javascript:DelElemAnswer('+i+');'); } } A: In some cases the examples listed here didn't work out for me in Internet Explorer. Since you have to set the property with a method like this (without brackets) HtmlElement.onclick = myMethod; it won't work if you have to pass an object-name or even parameters. For the Internet Explorer you should create a new object in runtime: HtmlElement.onclick = new Function('myMethod(' + someParameter + ')'); Works also on other browsers. A: works great! using both ways seem to be unnecessary now: execBtn.onclick = function() { runCommand() }; apparently works in every current browser. tested in current Firefox, IE, Safari, Opera, Chrome on Windows; Firefox and Epiphany on Ubuntu; not tested on Mac or mobile systems. * *Craig: I'd try "document.getElementById(ID).type='password'; *Has anyone checked the "AddEventListener" approach with different engines? A: There is a LARGE collection of attributes you can't set in IE using .setAttribute() which includes every inline event handler. See here for details: http://webbugtrack.blogspot.com/2007/08/bug-242-setattribute-doesnt-always-work.html A: Did you try: execBtn.setAttribute("onclick", function() { runCommand() }); A: Not relevant to the onclick issue, but also related: For html attributes whose name collide with javascript reserved words, an alternate name is chosen, eg. <div class=''>, but div.className, or <label for='...'>, but label.htmlFor. In reasonable browsers, this doesn't affect setAttribute. So in gecko and webkit you'd call div.setAttribute('class', 'foo'), but in IE you have to use the javascript property name instead, so div.setAttribute('className', 'foo'). A: Have you considered an event listener rather than setting the attribute? Among other things, it lets you pass parameters, which was a problem I ran into when trying to do this. You still have to do it twice for IE and Mozilla: function makeEvent(element, callback, param, event) { function local() { return callback(param); } if (element.addEventListener) { //Mozilla element.addEventListener(event,local,false); } else if (element.attachEvent) { //IE element.attachEvent("on"+event,local); } } makeEvent(execBtn, alert, "hey buddy, what's up?", "click"); Just let event be a name like "click" or "mouseover". A: I did this to get around it and move on, in my case I'm not using an 'input' element, instead I use an image, when I tried setting the "onclick" attribute for this image I experienced the same problem, so I tried wrapping the image with an "a" element and making the reference point to the function like this. 
var rowIndex = 1;
var linkDeleter = document.createElement('a');
linkDeleter.setAttribute('href', "javascript:function(" + rowIndex + ");");

var imgDeleter = document.createElement('img');
imgDeleter.setAttribute('alt', "Delete");
imgDeleter.setAttribute('src', "Imagenes/DeleteHS.png");
imgDeleter.setAttribute('border', "0");

linkDeleter.appendChild(imgDeleter);
{ "language": "en", "url": "https://stackoverflow.com/questions/95731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "53" }
Q: Error List Freaking Out in VS 2008 while in large Aspx Files I've been having this irritating issue lately. The site I'm currently working on has large aspx pages with tons of client-side JS code. While I'm typing, the Error List window keeps opening and closing at the bottom of VS (where I have the window docked). I mean constantly. This is beyond aggravating, as you can imagine. It's happening to a coworker as well. Does anybody else have this issue? Any solutions (other than smaller pages)? We've got 2008 Pro SP1. I've turned off every JS feature I can find, since with pages this large it slows VS to a crawl while it tries to parse them. I've tried closing the Error List completely but it just re-opens itself. Thanks in advance, Geoff
A:
* Try deleting the .user file in the project directory
* Use Add/Remove Programs to do a repair on VS2008
* If neither of those works, copy the markup to a new project and attempt to reproduce. If you can reproduce the issue, update this question with details.
A: I finally found the options that were causing the issue.
1. Options -> Text Editor -> Miscellaneous: un-check "Format HTML on paste"
2. Options -> Text Editor -> JScript -> Formatting: un-check the boxes under "Automatic Formatting"
{ "language": "en", "url": "https://stackoverflow.com/questions/95732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I generate a list of function dependencies in MATLAB? In order to distribute a function I've written that depends on other functions I've written, which have their own dependencies and so on, without distributing every m-file I have ever written, I need to figure out the full list of dependencies for a given m-file. Is there a built-in/freely downloadable way to do this? Specifically I am interested in solutions for MATLAB 7.4.0 (R2007a), but if there is a different way to do it in older versions, by all means please add them here.
A: For MATLAB 2015a and later you should preferably look at matlab.codetools.requiredFilesAndProducts (or doc matlab.codetools.requiredFilesAndProducts), because depfun is marked to be removed in a future release.
A: For newer releases of Matlab (e.g. 2007 or 2008) you could use the built-in functions:
* mlint
* dependency report
* coverage report
Another option is to use Matlab's profiler. The command is profile, and it can also be used to track dependencies. To use profile, you could do
>> profile on    % turn profiling on
>> foo;          % entry point to your matlab function or script
>> profile off   % turn profiling off
>> profview      % view the report
If the profiler is not available, then perhaps the following two functions are (for pre-MATLAB 2015a):
* depfun
* depdir
For example,
>> deps = depfun('foo');
gives a structure, deps, that contains all the dependencies of foo.m. From answers 2 and 3, newer versions of MATLAB (post 2015a) use matlab.codetools.requiredFilesAndProducts instead; see those answers.
EDIT: Caveats thanks to @Mike Katz's comments
* Remember that the Profiler will only show you files that were actually used in those runs, so if you don't go through every branch, you may have additional dependencies. The dependency report is a good tool, but only resolves static dependencies on the path and just for the files in a single directory.
* Depfun is more reliable but gives you every possible thing it can think of, and still misses LOADs and EVALs.
{ "language": "en", "url": "https://stackoverflow.com/questions/95760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: How can I catch AWT thread exceptions in Java? We'd like a trace in our application logs of these exceptions - by default Java just outputs them to the console.
A: A little addition to shemnon's answer: The first time an uncaught RuntimeException (or Error) occurs in the EDT, it looks for the property "sun.awt.exception.handler" and tries to load the class associated with that property. The EDT needs the handler class to have a default constructor, otherwise the EDT will not use it. If you need to bring a bit more dynamics into the handling story, you are forced to do this with static operations, because the class is instantiated by the EDT and therefore has no chance to access resources other than static ones. Here is the exception handler code from our Swing framework. It was written for Java 1.4 and it worked quite fine there:
public class AwtExceptionHandler {

    private static final Logger LOGGER = LoggerFactory.getLogger(AwtExceptionHandler.class);

    private static List exceptionHandlerList = new LinkedList();

    /**
     * WARNING: Don't change the signature of this method!
     */
    public void handle(Throwable throwable) {
        if (exceptionHandlerList.isEmpty()) {
            LOGGER.error("Uncaught Throwable detected", throwable);
        } else {
            delegate(new ExceptionEvent(throwable));
        }
    }

    private void delegate(ExceptionEvent event) {
        for (Iterator handlerIterator = exceptionHandlerList.iterator(); handlerIterator.hasNext();) {
            IExceptionHandler handler = (IExceptionHandler) handlerIterator.next();
            try {
                handler.handleException(event);
                if (event.isConsumed()) {
                    break;
                }
            } catch (Throwable e) {
                LOGGER.error("Error while running exception handler: " + handler, e);
            }
        }
    }

    public static void addErrorHandler(IExceptionHandler exceptionHandler) {
        exceptionHandlerList.add(exceptionHandler);
    }

    public static void removeErrorHandler(IExceptionHandler exceptionHandler) {
        exceptionHandlerList.remove(exceptionHandler);
    }
}
Hope it helps.
A: Since Java 7, you have to do it differently as the sun.awt.exception.handler hack does not work anymore. Here is the solution (from Uncaught AWT Exceptions in Java 7).
// Regular Exception
Thread.setDefaultUncaughtExceptionHandler(new ExceptionHandler());

// EDT Exception
SwingUtilities.invokeAndWait(new Runnable() {
    public void run() {
        // We are in the event dispatching thread
        Thread.currentThread().setUncaughtExceptionHandler(new ExceptionHandler());
    }
});
A: There is a distinction between uncaught exceptions in the EDT and outside the EDT. Another question has a solution for both, but if you want just the EDT portion chewed up...
class AWTExceptionHandler {

    public void handle(Throwable t) {
        try {
            // insert your exception handling code here
            // or do nothing to make it go away
        } catch (Throwable suppressed) {
            // don't let the exception get thrown out, will cause infinite looping!
        }
    }

    public static void registerExceptionHandler() {
        System.setProperty("sun.awt.exception.handler", AWTExceptionHandler.class.getName());
    }
}
A: There are two ways:
* Install a Thread.UncaughtExceptionHandler on the EDT
* Set a system property: System.setProperty("sun.awt.exception.handler", MyExceptionHandler.class.getName());
I don't know if the latter works on non-SUN JVMs.
-- Indeed, the first is not correct, it's only a mechanism for detecting a crashed thread.
{ "language": "en", "url": "https://stackoverflow.com/questions/95767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: How do I add FTP support to Eclipse? I'm using Eclipse PHP Development Tools. What would be the easiest way to access a file or maybe create a remote project through FTP, and maybe SSH and SFTP? A: Install the Aptana plugin to your Eclipse installation. It has built-in FTP support, and it works excellently. You can: * *Edit files directly from the FTP server *Perform file/folder management (copy, delete, move, rename, etc.) *Upload/download files to/from FTP server *Synchronize local files with FTP server. You can make several profiles (actually projects) for this so you won't have to re-enter the details over and over again. As a matter of fact the FTP support is so good I'm using Aptana (or Eclipse + Aptana) now for all my FTP needs. Plus I get syntax highlighting/whatever coding support there is. Granted, Eclipse is not the speediest app to launch, but it doesn't bug me so much. A: Have you checked RSE (Remote System Explorer)? I think it's pretty close to what you want to achieve. A blog post about it, with screenshots. A: I'm not sure if this works for you, but when I do small solo PHP projects with Eclipse, the first thing I set up is an Ant script for deploying the project to a remote testing environment. I code away locally, and whenever I want to test it, I just hit the shortcut which updates the remote site. Eclipse has good Ant support out of the box, and the scripts aren't hard to make (a minimal example is sketched at the end of these answers). A: SFTP Plug-in: http://www.jcraft.com/eclipse-sftp/ :) A: Eclipse natively supports FTP and SSH. Aptana is not necessary. Native FTP and SSH support in Eclipse is in the "Remote System Explorer End-User Runtime" Plugin. Install it through Eclipse itself. These instructions may vary slightly with your version of Eclipse: * *Go to 'Help' -> 'Install New Software' (in older Eclipses, this is called something a bit different) *In the 'Work with:' drop-down, select your version's plugin release site. Example: for Kepler, this is Kepler - http://download.eclipse.org/releases/kepler *In the filter field, type 'remote'. *Check the box next to 'Remote System Explorer End-User Runtime' *Click 'Next', and accept the terms. It should now download and install. *After install, Eclipse may want to restart. Using it, in Eclipse: * *Window -> Open Perspective -> (perhaps select 'Other') -> Remote System Explorer *File -> New -> Other -> Remote System Explorer (folder) -> Connection (or type Connection into the filter field) *Choose FTP from the 'Select Remote System Type' panel. *Fill in your FTP host info in the next panel (username and password come later). *In the Remote Systems panel, right-click the hostname and click 'connect'. *Enter username + password and you're good! *Well, not exactly 'good'. The RSE system is fairly unusual, but you're connected. *And you're one smart cookie! You'll figure out the rest. Edit: To change the default port, follow the instructions on this page: http://ikool.wordpress.com/2008/07/25/tips-to-access-ftpssh-on-different-ports-using-eclipse-rse/ A: As none of the other solutions mentioned satisfied me, I wrote a script that uses WinSCP to sync local directories in a project to an FTP(S)/SFTP/SCP server when Eclipse's autobuild feature is triggered. Obviously, this is a Windows-only solution. Maybe someone finds this useful: http://rays-blog.de/2012/05/05/94/use-winscp-to-upload-files-using-eclipses-autobuild-feature/
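If you go the Ant deployment route mentioned above, the build file can be as small as the sketch below (host, credentials and directories are placeholders; note that Ant's <ftp> task is an optional task, so the Apache Commons Net jar must be on Ant's classpath):

<project name="deploy" default="upload">
  <!-- Upload changed PHP files to the remote testing server. -->
  <target name="upload">
    <ftp server="ftp.example.com"
         userid="myuser"
         password="mypassword"
         remotedir="/public_html"
         passive="yes"
         depends="yes"> <!-- only transfer files newer than the remote copies -->
      <fileset dir="src" includes="**/*.php"/>
    </ftp>
  </target>
</project>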
{ "language": "en", "url": "https://stackoverflow.com/questions/95800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "93" }
Q: In Perl, how do I create a hash whose keys come from a given array? Let's say I have an array, and I know I'm going to be doing a lot of "Does the array contain X?" checks. The efficient way to do this is to turn that array into a hash, where the keys are the array's elements, and then you can just say if($hash{X}) { ... } Is there an easy way to do this array-to-hash conversion? Ideally, it should be versatile enough to take an anonymous array and return an anonymous hash. A: I always thought that foreach my $item (@array) { $hash{$item} = 1 } was at least nice and readable / maintainable. A: There is a presupposition here, that the most efficient way to do a lot of "Does the array contain X?" checks is to convert the array to a hash. Efficiency depends on the scarce resource, often time but sometimes space and sometimes programmer effort. You are at least doubling the memory consumed by keeping a list and a hash of the list around simultaneously. Plus you're writing more original code that you'll need to test, document, etc. As an alternative, look at the List::MoreUtils module, specifically the functions any(), none(), true() and false(). They all take a block as the conditional and a list as the argument, similar to map() and grep(): print "At least one value undefined" if any { !defined($_) } @list; I ran a quick test, loading in half of /usr/share/dict/words to an array (25000 words), then looking for eleven words selected from across the whole dictionary (every 5000th word) in the array, using both the array-to-hash method and the any() function from List::MoreUtils. On Perl 5.8.8 built from source, the array-to-hash method runs almost 1100x faster than the any() method (1300x faster under Ubuntu 6.06's packaged Perl 5.8.7.) That's not the full story, however: the array-to-hash conversion takes about 0.04 seconds, which in this case cuts the overall advantage of the array-to-hash method down to 1.5x-2x faster than the any() method. Still good, but not nearly as stellar. My gut feeling is that the array-to-hash method is going to beat any() in most cases, but I'd feel a whole lot better if I had some more solid metrics (lots of test cases, decent statistical analyses, maybe some big-O algorithmic analysis of each method, etc.) Depending on your needs, List::MoreUtils may be a better solution; it's certainly more flexible and requires less coding. Remember, premature optimization is a sin... :) A: In Perl 5.10, there's the close-to-magic ~~ operator: sub invite_in { my $vampires = [ qw(Angel Darla Spike Drusilla) ]; return ($_[0] ~~ $vampires) ? 0 : 1 ; } See here: http://dev.perl.org/perl5/news/2007/perl-5.10.0.html A: Also worth noting for completeness: my usual method for doing this with two same-length arrays, @keys and @vals, which you would prefer were a hash... my %hash = map { $keys[$_] => $vals[$_] } (0..@keys-1); A: @hash{@array} = (1) x @array; It's a hash slice, a list of values from the hash, so it gets the list-y @ in front. From the docs: If you're confused about why you use an '@' there on a hash slice instead of a '%', think of it like this. The type of bracket (square or curly) governs whether it's an array or a hash being looked at. On the other hand, the leading symbol ('$' or '@') on the array or hash indicates whether you are getting back a singular value (a scalar) or a plural one (a list). A: @hash{@keys} = undef; The syntax here where you are referring to the hash with an @ is a hash slice. We're basically saying $hash{$keys[0]} AND $hash{$keys[1]} AND $hash{$keys[2]} ...
is a list on the left hand side of the =, an lvalue, and we're assigning to that list, which actually goes into the hash and sets the values for all the named keys. In this case, I only specified one value, so that value goes into $hash{$keys[0]}, and the other hash entries all auto-vivify (come to life) with undefined values. [My original suggestion here was set the expression = 1, which would've set that one key to 1 and the others to undef. I changed it for consistency, but as we'll see below, the exact values do not matter.] When you realize that the lvalue, the expression on the left hand side of the =, is a list built out of the hash, then it'll start to make some sense why we're using that @. [Except I think this will change in Perl 6.] The idea here is that you are using the hash as a set. What matters is not the value I am assigning; it's just the existence of the keys. So what you want to do is not something like: if ($hash{$key} == 1) # then key is in the hash instead: if (exists $hash{$key}) # then key is in the set It's actually more efficient to just run an exists check than to bother with the value in the hash, although to me the important thing here is just the concept that you are representing a set just with the keys of the hash. Also, somebody pointed out that by using undef as the value here, we will consume less storage space than we would assigning a value. (And also generate less confusion, as the value does not matter, and my solution would assign a value only to the first element in the hash and leave the others undef, and some other solutions are turning cartwheels to build an array of values to go into the hash; completely wasted effort). A: Raldi's solution can be tightened up to this (the '=>' from the original is not necessary): my %hash = map { $_,1 } @array; This technique can also be used for turning text lists into hashes: my %hash = map { $_,1 } split(",",$line) Additionally if you have a line of values like this: "foo=1,bar=2,baz=3" you can do this: my %hash = map { split("=",$_) } split(",",$line); [EDIT to include] Another solution offered (which takes two lines) is: my %hash; #The values in %hash can only be accessed by doing exists($hash{$key}) #The assignment only works with '= undef;' and will not work properly with '= 1;' #if you do '= 1;' only the hash key of $array[0] will be set to 1; @hash{@array} = undef; A: Note that if typing if ( exists $hash{ key } ) isn’t too much work for you (which I prefer to use since the matter of interest is really the presence of a key rather than the truthiness of its value), then you can use the short and sweet @hash{@key} = (); A: %hash = map { $_ => 1 } @array; It's not as short as the "@hash{@array} = ..." solutions, but those ones require the hash and array to already be defined somewhere else, whereas this one can take an anonymous array and return an anonymous hash. What this does is take each element in the array and pair it up with a "1". When this list of (key, 1, key, 1, key 1) pairs get assigned to a hash, the odd-numbered ones become the hash's keys, and the even-numbered ones become the respective values. A: You could also use Perl6::Junction. use Perl6::Junction qw'any'; my @arr = ( 1, 2, 3 ); if( any(@arr) == 1 ){ ... } A: If you do a lot of set theoretic operations - you can also use Set::Scalar or similar module. Then $s = Set::Scalar->new( @array ) will build the Set for you - and you can query it with: $s->contains($m). 
A: You can place the code into a subroutine, if you don't want to pollute your namespace. my $hash_ref = sub{ my %hash; @hash{ @{[ qw'one two three' ]} } = undef; return \%hash; }->(); Or even better: sub keylist(@){ my %hash; @hash{@_} = undef; return \%hash; } my $hash_ref = keylist qw'one two three'; # or my @key_list = qw'one two three'; my $hash_ref = keylist @key_list; If you really wanted to pass an array reference: sub keylist(\@){ my %hash; @hash{ @{$_[0]} } = undef if @_; return \%hash; } my @key_list = qw'one two three'; my $hash_ref = keylist @key_list; A: #!/usr/bin/perl -w use strict; use Data::Dumper; my @a = qw(5 8 2 5 4 8 9); my @b = qw(7 6 5 4 3 2 1); my $h = {}; @{$h}{@a} = @b; print Dumper($h); gives (note repeated keys get the value at the greatest position in the array - i.e. 8->2 and not 6) $VAR1 = { '8' => '2', '4' => '3', '9' => '1', '2' => '5', '5' => '4' }; A: You might also want to check out Tie::IxHash, which implements ordered associative arrays. That would allow you to do both types of lookups (hash and index) on one copy of your data.
{ "language": "en", "url": "https://stackoverflow.com/questions/95820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "82" }
Q: Should I use haml or erb or erubis for potentially high traffic site? I have been playing with Haml recently and really like the way the resulting code looks to me...the developer. I'm also not too worried about a designer being able to consume or change it...we're a small team. That said, we're beginning work on a project we believe will generate quite a bit of traffic (who doesn't?). I'm concerned that there are things I just don't know about haml. Is there anything erb can do that haml can't? Does haml have a negative effect as a project grows? Are there other things that should be considered? And finally...how does Haml compare speedwise to erubis? I see that it supposedly beats erb and eruby now... Thanks! A: Haml rocks. I haven't seen any recent performance numbers but it is pretty close to erb these days. I think that it might be faster than erb if you turn on ugly mode (which prevents the pretty indentation). We're doing 2.8 million pageviews a day with Haml. There is a benchmarker checked into the Haml source tree: http://github.com/nex3/haml/tree/master/test Update November 2009 Nathan (Haml's main developer) published some Haml 2.2 benchmarks on his blog. You can see the exact numbers there but in short: * *Normal (pretty printing) mode = 2.8 times slower than ERB *Ugly mode (no pretty tabs added) = equal to ERB You can enable ugly mode by placing Haml::Template::options[:ugly] = true in an initializer or environment file. Note that ugly mode isn't really that ugly - the resulting HTML is actually much prettier than ERB - it's just not indented nicely. A: I think it's entirely a matter of personal preference and maintainability. For me, Haml makes the templates easier to read and understand, and the performance is very acceptable. In the end, the templating language is unlikely to be the place where you need to optimize -- more likely database queries, view or object caching, etc. However, in the case of ERb templates, you will get better performance essentially for free if you use erubis. A: If you use Rails, the performance difference between Haml and erubis is negligible: templates get compiled and cached after the first hit, anyway. Combine this with fragment and page caching and you can rest assured that views are not the performance bottleneck of your application. The question you should be asking yourself is: do you like writing Haml? Does it make you more productive? Then you can decide more easily. A: If you like how haml works from a coding point of view, don't worry about the performance of the templating engine too much. (Though, as you've pointed out, it's now fast.) It can definitely generate any output the other engines can. Generally, it's more profitable to put your energy into setting up caching than worrying about your templating engine when you're having performance problems. A: I love HAML since it is a good tool for easily writing structured HTML, and generally it is just a joy to use. But it has very little to do with choosing a tool based on the amount of traffic a site might have. If you are worried about traffic, you should worry about using caching properly. And then you need to apply the principles of general web-application performance - the result is that you will have super snappy responses to page loads, which is what a high-traffic website really needs.
A couple of presentations that show how to improve website performance can be found here: * *Michael Koziarski speaks about Rails Performance at 2008 Paris on Rails *Jeremy Kemper talks about Performance on Rails at RailsConf EU'08 And the best place that I know of to learn how to use Rails caching properly is: * *Rails Caching PeepCode screencast A: I would personally recommend using erubis in precompiled templates. Especially if there's no need for dynamic templating. Then your biggest slowdown will be limited by the speed at which Ruby can parse Ruby. I'd probably set up a small cron job that just monitors for changed source templates and recompiles them on change, and that you can turn off when not in use. Compile once, use many. Oh, and if you're really concerned about speed, Tenjin may be worth a look too (same creators as erubis): http://www.kuwata-lab.com/tenjin/rbtenjin-examples.html A: Well, Haml performance continues to improve with each release. Is it at an acceptable place at the current time? That's for you to decide (I'm inclined to say "Yes", but it's your choice based on your needs). If you like the templates and the readability they provide, then the performance drop (however negligible) should really be the final factor in your decision. One of the other tools you should consider using in conjunction with Haml is make_resourceful, another gem by the maintainer of Haml (Nathan Weizenbaum) that simplifies a lot of the RESTful things in a Rails app. If you have any further, more specific questions about Haml (and m_r), I'm sure Nathan would be more than happy to answer them. He can be reached via Jabber/XMPP and email. His contact information can be found here.
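Since several of the answers above boil down to "worry about caching, not the template engine", here is a minimal sketch of Rails fragment caching in a Haml view (the cache key and partial name are made up for illustration, and caching must be enabled in your environment configuration):

-# The sidebar fragment is rendered once, then served from the cache store.
- cache 'sidebar' do
  %ul#sidebar
    = render :partial => 'shared/sidebar_links'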
{ "language": "en", "url": "https://stackoverflow.com/questions/95821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: Replace a character with a string in LaTeX I'm looking for a way to do a substring replace on a string in LaTeX. What I'd like to do is build a command that I can call like this: \replace{File,New} and that would generate something like \textbf{File}$\rightarrow$\textbf{New} This is a simple example, but I'd like to be able to put formatting/structure in a single command rather than everywhere in the document. I know that I could build several commands that take increasing numbers of parameters, but I'm hoping that there is an easier way. Edit for clarification I'm looking for an equivalent of string.replace(",", "$\rightarrow$"), something that can take an arbitrary string, and replace a substring with another substring. So I could call the command with \replace{File}, \replace{File,New}, \replace{File,Options,User}, etc., wrap the words with bold formatting, and replace any commas with the right arrow command. Even if the "wrapping with bold" bit is too difficult (as I think it might be), just the replace part would be helpful. A: The general case is rather more tricky (when you're not using commas as separators), but the example you gave can be coded without too much trouble with some knowledge of the LaTeX internals. \documentclass[12pt]{article} \makeatletter \newcommand\formatnice[1]{% \let\@formatsep\@formatsepinit \@for\@ii:=#1\do{% \@formatsep \formatentry{\@ii}% }% } \def\@formatsepinit{\let\@formatsep\formatsep} \makeatother \newcommand\formatsep{,} \newcommand\formatentry[1]{#1} \begin{document} \formatnice{abc,def} \renewcommand\formatsep{\,$\rightarrow$\,} \renewcommand\formatentry[1]{\textbf{#1}} \formatnice{abc,def} \end{document} A: It looks like your "spaces" problem is from a bug in that package. If you surround the "\GetTokens" macro with, say, commas, then you'll see that the extra space is inserted by that macro. Yes, there are bugs in the tokenizer package. As I said on my blog, the bugfix is to use the following correcting code instead of just "\usepackage[trim]{tokenizer}" (note the \makeatletter/\makeatother pair, needed because the patched macro names contain @): \usepackage[trim]{tokenizer} \makeatletter \def\SH@GetTokens#1,#2\@empty{% \def\SH@token{#1}% \ifx\SH@trimtokens\SH@true% strip spaces if requested \TrimSpaces\SH@token% \fi% \SH@DefineCommand{\SH@FirstArgName}{\SH@token}% \SH@DefineCommand{\SH@SecondArgName}{#2}% } \def\SH@CheckTokenSep#1,#2\@empty{% \def\SH@CTSArgTwo{#2}% \ifx\SH@CTSArgTwo\@empty% \edef\SH@TokenValid{\SH@false}% \else% \edef\SH@TokenValid{\SH@true}% \fi% } \makeatother I will report this bugfix to the developer Sascha Herpers. A: Try the xstring package: \usepackage{xstring} […] \StrSubstitute{File,New}{,}{\(\rightarrow\)} A: There's a LaTeX package called tokenizer which may help you to do what you want. Here's a hack (but pure LaTeX, no internals) which gets close to what I think you want, but with some extraneous spaces I haven't quite been able to fix. Perhaps Will Robertson can advise further? Unlike his slightly more polished answer, I haven't parameterised the bits and pieces. Here goes: \usepackage{forloop} \usepackage{ifthen} \usepackage[trim]{tokenizer} ... \newcounter{rrCount} \newcommand{\replace}[1]{% \GetTokens{rrFirst}{rrRest}{#1,}% \textbf{\rrFirst}% \forloop{rrCount}{0}{\value{rrCount} < 100}{% \ifthenelse{\equal{\rrRest}{}}{% \setcounter{rrCount}{101}% }{% \GetTokens{rrFirst}{rrRest}{\rrRest}% $\rightarrow$\textbf{\rrFirst}% }% }% }% % ----------------------------------------------------------------- \replace{a1}\\ \replace{a2,b2}\\ \replace{a3,b3,c3}\\ A: OK, I withdraw this answer. Thanks for clarifying the question.
I suspect this may not be what you want, but here goes anyway: \newcommand{\replace}[2]{\textbf{#1}$\rightarrow$\textbf{#2}} \replace{File}{New} If this isn't what you're looking for, could you clarify the question, please?
{ "language": "en", "url": "https://stackoverflow.com/questions/95824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What is the most efficient way to handle the lifecycle of an object with COM interop? I have a Windows Workflow application that uses classes I've written for COM automation. I'm opening Word and Excel from my classes using COM. I'm currently implementing IDisposable in my COM helper and using Marshal.ReleaseComObject(). However, if my Workflow fails, the Dispose() method isn't being called and the Word or Excel handles stay open and my application hangs. The solution to this problem is pretty straightforward, but rather than just solve it, I'd like to learn something and gain insight into the right way to work with COM. I'm looking for the "best" or most efficient and safest way to handle the lifecycle of the classes that own the COM handles. Patterns, best practices, or sample code would be helpful. A: I cannot see what kind of failure you have that does not call the Dispose() method. I made a test with a sequential workflow that contains only a code activity which just throws an exception, and the Dispose() method of my workflow is called twice (this is because of the standard WorkflowTerminated event handler). Check the following code: Program.cs class Program { static void Main(string[] args) { using(WorkflowRuntime workflowRuntime = new WorkflowRuntime()) { AutoResetEvent waitHandle = new AutoResetEvent(false); workflowRuntime.WorkflowCompleted += delegate(object sender, WorkflowCompletedEventArgs e) { waitHandle.Set(); }; workflowRuntime.WorkflowTerminated += delegate(object sender, WorkflowTerminatedEventArgs e) { Console.WriteLine(e.Exception.Message); waitHandle.Set(); }; WorkflowInstance instance = workflowRuntime.CreateWorkflow(typeof(WorkflowConsoleApplication1.Workflow1)); instance.Start(); waitHandle.WaitOne(); } Console.ReadKey(); } } Workflow1.cs public sealed partial class Workflow1: SequentialWorkflowActivity { public Workflow1() { InitializeComponent(); this.codeActivity1.ExecuteCode += new System.EventHandler(this.codeActivity1_ExecuteCode); } [DebuggerStepThrough()] private void codeActivity1_ExecuteCode(object sender, EventArgs e) { Console.WriteLine("Throw ApplicationException."); throw new ApplicationException(); } protected override void Dispose(bool disposing) { if (disposing) { // Here you must free your resources // by calling your COM helper Dispose() method Console.WriteLine("Object disposed."); } } } Am I missing something? Concerning the lifecycle-related methods of an Activity (and consequently of a Workflow) object, please check this post: Activity "Lifetime" Methods. If you just want a generic article about disposing, check this. A: Basically, you should not rely on hand-written code to call Dispose() on your object at the end of the work. You probably have something like this right now: MyComHelper helper = new MyComHelper(); helper.DoStuffWithExcel(); helper.Dispose(); ... Instead, you need to use try blocks to catch any exception that might be triggered and call Dispose at that point.
This is the canonical way: MyComHelper helper = new MyComHelper(); try { helper.DoStuffWithExcel(); } finally { helper.Dispose(); } This is so common that C# has a special construct that generates the same exact code [see note] as shown above; this is what you should be doing most of the time (unless you have some special object construction semantics that make a manual pattern like the above easier to work with): using(MyComHelper helper = new MyComHelper()) { helper.DoStuffWithExcel(); } EDIT: NOTE: The actual code generated is a tiny bit more complicated than the second example above, because it also introduces a new local scope that makes the helper object unavailable after the using block. It's as if the second code block were surrounded by { }'s. That was omitted for clarity of the explanation.
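For reference, the COM helper itself might look roughly like the sketch below (the class and member names are invented for illustration, and creating Excel via its ProgID is just one option; adapt it to however your helper actually obtains its COM references):

using System;
using System.Reflection;
using System.Runtime.InteropServices;

public class MyComHelper : IDisposable
{
    private object excelApp; // late-bound Excel.Application COM object

    public MyComHelper()
    {
        Type excelType = Type.GetTypeFromProgID("Excel.Application");
        excelApp = Activator.CreateInstance(excelType);
    }

    public void DoStuffWithExcel()
    {
        // ... drive Excel here via reflection or a typed interop wrapper ...
    }

    public void Dispose()
    {
        if (excelApp != null)
        {
            // Ask Excel to quit, then release the runtime callable wrapper
            // so the Excel process can actually exit.
            excelApp.GetType().InvokeMember("Quit", BindingFlags.InvokeMethod, null, excelApp, new object[0]);
            Marshal.ReleaseComObject(excelApp);
            excelApp = null;
        }
    }
}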
{ "language": "en", "url": "https://stackoverflow.com/questions/95834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to determine internal name of table-valued variable in MS SQL Server 2005 The name of a temporary table such as #t1 can be determined using select @TableName = [Name] from tempdb.sys.tables where [Object_ID] = object_id('tempDB.dbo.#t1') How can I find the name of a table-valued variable, i.e. one declared by declare @t2 as table (a int) The purpose is to be able to get meta-information about the table, using something like select @Headers = dbo.Concatenate('[' + c.[Name] + ']') from sys.all_columns c inner join sys.tables t on c.object_id = t.object_id where t.name = @TableName although for temp tables you have to look in tempdb.sys.tables instead of sys.tables. Where do you look for table-valued variables? I realize now that I can't do what I wanted to do, which is write a generic function for formatting table-valued variables into HTML tables. For starters, in SQL Server 2005 you can't pass table-valued parameters: http://www.sqlteam.com/article/sql-server-2008-table-valued-parameters Moreover, in SQL Server 2008, the parameters have to be strongly typed, so you will always know the number and type of columns. A: Table variable metadata is viewable in tempdb.sys.tables too. This is easily verifiable from the below declare @t2 as table ( [38F055D8-25D9-4AA6-9571-F436FE] int) SELECT t.name, t.object_id FROM tempdb.sys.tables t JOIN tempdb.sys.columns c ON t.object_id = c.object_id WHERE c.name = '38F055D8-25D9-4AA6-9571-F436FE' Example Results name object_id ------------------------------ ----------- #4DB4832C 1303675692 But you will notice the object name is auto-generated and bears no relation to the variable name. If you do not have a guaranteed unique column name that you can use to filter on as above and the table variable has at least one row in it, you can (from SQL Server 2008 onwards) use %%physloc%% and DBCC PAGE to determine this information. Example below. DECLARE @t2 AS TABLE ( a INT) INSERT INTO @t2 VALUES (1) DECLARE @DynSQL NVARCHAR(100) SELECT TOP (1) @DynSQL = 'DBCC PAGE(2,' + CAST(file_id AS VARCHAR) + ',' + CAST( page_id AS VARCHAR) + ',1) WITH TABLERESULTS' FROM @t2 CROSS APPLY sys.fn_PhysLocCracker( %%physloc%% ) DECLARE @DBCCPage TABLE ( [ParentObject] [VARCHAR](100) NULL, [Object] [VARCHAR](100) NULL, [Field] [VARCHAR](100) NULL, [VALUE] [VARCHAR](100) NULL ) INSERT INTO @DBCCPage EXEC (@DynSQL) SELECT VALUE AS object_id, OBJECT_NAME(VALUE, 2) AS object_name FROM @DBCCPage WHERE Field = 'Metadata: ObjectId' A: From Books Online: A table variable behaves like a local variable. It has a well-defined scope, which is the function, stored procedure, or batch in which it is declared. Given this, there should be no need to look up this value at run-time because you have to know it at design-time. A: On the topic of passing arbitrary lists/arrays into a SQL Server 2005 function or sproc, the least hokey way I know is to use an XML variable. If desired, that XML variable can be a strongly typed XML type that is associated w/ an XML Schema. Given a list passed into a procedure/function as XML, you can extract that list into a table variable or temp table via "shredding". "To shred" XML means to transform in the opposite direction--from XML to rowset(s). (The FOR XML clause causes a rowset to XML transformation.)
In the user-defined table function CREATE FUNCTION [dbo].[udtShredXmlInputBondIdList] ( -- Add the parameters for the function here @xmlInputBondIdList xml ) RETURNS @tblResults TABLE ( -- Add the column definitions for the TABLE variable here BondId int ) AS BEGIN -- Should add a schema validation for @xmlInputIssuerIdList here --Place validation here -- Fill the table variable with the rows for your result set INSERT @tblResults SELECT nref.value('.', 'int') as BondId FROM @xmlInputBondIdList.nodes('//BondID') as R(nref) RETURN END if the @xmlInputBondIdList is an XML fragment of the expected structure like that immediately below and is invoked as follows DECLARE @xmlInputBondIdList xml SET @xmlInputBondIdList = '<XmlInputBondIdList> <BondID>8681</BondID> <BondID>8680</BondID> <BondID>8684</BondID> </XmlInputBondIdList> ' SELECT * FROM [CorporateBond].[dbo].[udtShredXmlInputBondIdList] (@xmlInputBondIdList) the result will be the rowset BondId 8681 8680 8684 A couple other examples can be found at http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=678284&SiteID=1 A: I don't believe you can, as table variables are created in memory not in tempdb.
{ "language": "en", "url": "https://stackoverflow.com/questions/95842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }