Q: Does anyone have a good example of controlling multiple Excel instances from a .Net app? We have an Excel 2002/XP based application that interacts with SQL 2000/5 to process fairly complex actuarial calculations. The application performs its function well, but it's difficult to manage. We're trying to create a "controller" application or service that can manage and monitor these various instances of Excel (start/stop/process commands etc.), but it's a bit of an Interop nightmare unfortunately. Does anyone have a good (i.e. working) example of doing something like this in VB.Net or C#?

A: Don't do it! We tried for weeks to get something like that to work and it simply does not behave as advertised. Don't even start - give up immediately! The only real option that you have is a heavy server-side MOSS-based implementation - Excel (Web) Services, or something like that. Windows-based COM Excel interop is pretty much dead and will be replaced by MOSS. The other option is to use SpreadsheetGear. It is actually a fantastic product:

* It is lightning fast
* The engine is separate from the UI, so you can use it to do Excel work server side (with no Office installed)
* Fairly cheap
* Has an API similar to the existing Excel COM API, so moving code across is relatively easy

It all depends on the formulas that you need in your spreadsheet. Have a look at the formula list for SpreadsheetGear and if there is a match, go for it.

A: Interop works fine except that you always end up with references to Excel objects that aren't released, so Excel instances won't close. The following KB article explains why: http://support.microsoft.com/default.aspx/kb/317109/EN-US/ You can avoid the problem if you have very carefully written code for a limited set of Interop scenarios, but in the general case it's very difficult to get it working reliably.
A: You might want to take a look at this product: http://www.spreadsheetgear.com/products/spreadsheetgear.net.aspx It's all managed code and direct .NET libraries. No Interop headaches. I haven't used it myself, but I've heard very good things from people in the finance world.

A: We have written a service that controls a single instance of Excel 2003. We never managed to get Excel instances to close cleanly, so we start one instance when the service is first accessed and use only that, serializing client requests.
{ "language": "en", "url": "https://stackoverflow.com/questions/87576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Silverlight DataGrid Control - How do I stop the sorting on a column? Continuing my problem from yesterday, the Silverlight datagrid I have from this issue is now causing stack overflow errors when sorting a column with a large amount of data (like the text column that contains a WHERE clause for a SQL statement). When you sort, it'll fire the SelectedIndexChanged event for the datagrid and then still try to sort. If you click the header again the stack overflow occurs. Does anyone have an idea on how to stop the sorting on this control for a column? All the other columns sort fine (but still fire that darn SelectedIndexChanged event), but if I could shut off sorting for the whereClause column it'd be perfect. Does anyone have a better idea of how to get this to work?

A: I'm only familiar with the WPF version of this datagrid, but try this:

<data:DataGridTextColumn CanUserSort="False" Header="First Name" Binding="{Binding FirstName}" />

Add the CanUserSort="False" attribute on each column you don't want sorted.

A: Give this a shot:

dataGridView1.Columns[*Numberofthecolumnyoudontwantsorted*].SortMode = DataGridViewColumnSortMode.NotSortable;

A: @BKimmel - It won't work since this is in Silverlight, and apparently that part of the grid column has not yet been implemented. In the XAML of the page it doesn't show up with the attribute for SortMode on the columns, and in the backend code it doesn't recognize it, as it isn't a web control, it's a Silverlight control. Thanks though. Anyone else?
{ "language": "en", "url": "https://stackoverflow.com/questions/87587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: textboxes in Datarepeater dynamically 'databound' I need to know if it is possible to dynamically bind a textbox residing within a DataRepeater to a 'dynamically' created BindingSource. I am using VB.Net. The database I am using is a MySQL database. I have to make the connection dynamically due to the fact that the database may not permanently reside on the same server.

[edit] OK, so it seems that I am a dolt when asking questions. The app that I am making is not web based. It is a simple (I hope) app that connects to a MySQL database and accesses a table so I can edit/view it. The current setup uses the Add Data Source wizard. I have successfully connected to the database dynamically using the MySQL connector DLL, but without the textboxes set at design time to a data source, I am unsure of how to 'link' them via the DataRepeater.

A: Your connection string should be defined in your Web.config, and if you move your database to a different server, it's just a matter of modifying the Web.config entry. As long as you keep the connection string name the same, the BindingSource object will pick up the new value from the config.

edit: In truth, the same concept should apply here as it does in the web app answer listed above. All of your data objects should be hard-coded, and it's just the connection string (which you'll have to either ask the user for, or push out as an update when the DB moves) which will get modified. For example, create an App.config file in your project. Have one of your configuration values be the connection string. This config value will be where you go to get the connection string whenever you need it. Then your wizard will be there to allow users to easily modify the connection.

A: Then look in App.config - the connection string should be there. If it is not, then you should put it there, as you can change this file at any time and not have to recompile your app.
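A minimal App.config sketch of the idea in the answers above. The connection string name (MyDb), server, and credentials are placeholders, not anything from the question - only the `connectionStrings` section layout itself is standard .NET configuration:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <connectionStrings>
    <!-- When the database moves, edit only the connectionString value here;
         application code looks the entry up by name ("MyDb") and never changes. -->
    <add name="MyDb"
         connectionString="Server=dbhost;Database=mydb;Uid=appuser;Pwd=secret;"
         providerName="MySql.Data.MySqlClient" />
  </connectionStrings>
</configuration>
```

In VB.Net you would then read it with something like ConfigurationManager.ConnectionStrings("MyDb").ConnectionString (from System.Configuration) and hand that string to whatever builds your dynamic BindingSource.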
{ "language": "en", "url": "https://stackoverflow.com/questions/87590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is the equivalent of Oracle's REF CURSOR in Postgresql when using JDBC? In Oracle I can declare a reference cursor...

TYPE t_spool IS REF CURSOR RETURN spool%ROWTYPE;

...and use it to pass a cursor as the return value...

FUNCTION end_spool
  RETURN t_spool
AS
  v_spool t_spool;
BEGIN
  COMMIT;
  OPEN v_spool FOR
    SELECT * FROM spool
    WHERE key = g_spool_key
    ORDER BY seq;
  RETURN v_spool;
END end_spool;

...and then capture it as a result set using JDBC...

private Connection conn;
private CallableStatement stmt;
private OracleResultSet rset;
[...clip...]
stmt = conn.prepareCall("{ ? = call " + call + "}");
stmt.registerOutParameter(1, OracleTypes.CURSOR);
stmt.execute();
rset = (OracleResultSet)stmt.getObject(1);

What is the equivalent in PostgreSQL?

A: Maybe this will help: http://jdbc.postgresql.org/documentation/83/callproc.html#callproc-resultset-setof I haven't really messed with that before :P
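For what it's worth, the PostgreSQL side of this is a function declared RETURNS refcursor, fetched through plain JDBC inside a transaction (a refcursor only lives for the duration of its transaction). A hedged sketch - the function name end_spool is carried over from the question, the server-side function is assumed to exist, and the helper below only builds the call string so the live-connection steps can be shown separately:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;

public class RefCursorCall {
    // Builds the JDBC escape syntax for calling a function that returns a refcursor.
    static String callSql(String function) {
        return "{ ? = call " + function + "() }";
    }

    // Sketch of the actual fetch (requires a live PostgreSQL connection):
    static ResultSet openSpool(Connection conn) throws SQLException {
        conn.setAutoCommit(false);                  // refcursors are transaction-scoped
        CallableStatement stmt = conn.prepareCall(callSql("end_spool"));
        stmt.registerOutParameter(1, Types.OTHER);  // the pgjdbc driver maps refcursor to OTHER
        stmt.execute();
        return (ResultSet) stmt.getObject(1);       // iterate like any ordinary ResultSet
    }
}
```

On the database side this assumes a PL/pgSQL function declared RETURNS refcursor that OPENs and RETURNs a cursor variable, as described in the linked pgjdbc documentation.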
{ "language": "en", "url": "https://stackoverflow.com/questions/87603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Automated integration testing a C++ app with a database I am introducing automated integration testing to a mature application that until now has only been manually tested. The app is Windows based and talks to a MySQL database. What is the best way (including details of any tools recommended) to keep tests independent of each other in terms of the database transactions that will occur? (Modifications to the app source for this particular purpose are not an option.)

A: How are you verifying the results? If you need to query the DB for results (and it sounds like you probably do), then I agree with Kris K, except I would endeavor to rebuild the DB after every test case, not just every suite. This helps avoid dangerously interacting tests. As for tools, I would recommend CppUnit. You aren't really doing unit tests, but it shouldn't matter, as the xUnit framework should give you the setup and teardown facilities you'll need to automatically set up your test fixture. Obviously this can result in slow-running tests, depending on your database size, population etc. You may be able to attach/detach databases rather than dropping/rebuilding. If you're interested in further research, check out XUnit Test Patterns. It's a fine book and a good website for this kind of thing. And thanks for automating :) Nick

A: You can dump/restore the database for each test suite, etc. Since you are automating this, it may be something in the setup/teardown functionality.

A: I used to restore the database in the SetUp function of the database-related unit test class. This way it was ensured that each test ran under the same conditions. You may consider preparing special database content for the tests, i.e. with less data than the current production version (to keep the restore times reasonable).

A: The best environment for such testing, I believe, is VMware or an equivalent. Set up your database, transaction log and so on, then record the whole lot - database as well as configuration.
Then to re-test, reload the image and database and kick off the tests. This still requires maintenance of the tests as the system changes, but at least the tests are repeatable, which is one of your greatest challenges in integration testing. For test automation, many people use Perl, but we've found that Perl programs grow like Topsy and become convoluted. The use of Python as a scripting language (we run C++ tests) is worthwhile if you're trying to build a series of structured tests.

A: As @Kris K. says, dumping and restoring the database between each test will probably be the way to go. Since you are looking at doing testing external to the app, I would look to build the testing framework in a language where you can take advantage of better testing tools. If you built the testing framework in Java you could take advantage of JUnit, and potentially even something like FitNesse. Don't think that just because the application under test is C++ you are stuck using C++ for your automated testing.

A: Please try AnyDbTest; I think it is the very tool you are looking for (www.anydbtest.com). Features:

1. Writing test cases with XML, not Java/C++/C#/VB code. No need for those expensive programming tools.
2. Supports all popular databases, such as Oracle/SQL Server/MySQL.
3. Many kinds of assertion supported, such as StrictEqual, SetEqual, IsSupersetOf, Overlaps, and RecordCountEqual etc. Plus, most assertions can be prefixed with the logical Not operator.
4. Allows using an Excel spreadsheet/XML as the source of the data for the tests. As you know, an Excel spreadsheet is easy to create/edit and maintain test data in.
5. Supports a sandbox test model: if a test is done in the sandbox, all database operations on each DB will be rolled back, meaning any changes will be undone.
6. Allows performing a data pump from one database/Excel into a target database in the testing initialization and finalization phases. This is an easy way to prepare the test data for testing.
7. Unique cross-database testing, which means the target and reference result sets can come from two databases, even where one is SQL Server and the other is Oracle.
8. Set-style comparison for recordsets. AnyDbTest will tell you the intersection, surplus or absence between the two record sets.
9. Sequential-style comparison for recordsets or scalar values. It means the two result sets will be compared in their original sequence.
10. Allows exporting the result set of a SQL statement into an XML/Excel file.
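If you do drive the tests from Java as one answer suggests, the per-test restore can live in a JUnit setUp/@Before method. A minimal sketch, not a definitive implementation: the database name, dump file, and use of the mysql command-line client are all assumptions, and buildRestoreCommand only assembles the command so it can be checked without a live server:

```java
import java.io.IOException;
import java.util.List;

public class DbFixture {
    // Assembles the command that reloads a known-good dump before each test.
    static List<String> buildRestoreCommand(String database, String dumpFile) {
        return List.of("mysql", database, "-e", "source " + dumpFile);
    }

    // Called from a JUnit setUp/@Before method: restore, then run the test.
    static void restore(String database, String dumpFile)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder(buildRestoreCommand(database, dumpFile))
                .inheritIO()
                .start();
        if (p.waitFor() != 0) {
            throw new IllegalStateException("database restore failed");
        }
    }
}
```

The key design point, matching the answers above, is that every test case starts from the same baseline dump, so tests cannot interact through leftover rows.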
{ "language": "en", "url": "https://stackoverflow.com/questions/87610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Win32 ShellExecute and a UNC Path I want to fire up a Flash presentation inside PowerPoint 2007. I am calling the Win32 ShellExecute() routine. When I run this from a location whose path is a UNC path (\myserver\myfolder\sample.ppt) it does not work. The ShellExecute routine expects 6 arguments, one of which is the path to run it from. I've tried to set this parameter to C:\ as well as using ActivePresentation.Path (which is a UNC path). Neither works.

A: I initially tried this approach, but found it caused problems:

* When the presentation was used from a laptop not connected to the network.
* If the user did not have access to the UNC.
* If the Flash file was renamed, moved or deleted.

I found a better approach was to embed the file into the PowerPoint file. It can be done as follows using Office XP / PowerPoint:

From the 'View' menu select 'Toolbars' and tick the 'Control Toolbox'. On the 'Control Toolbox' toolbar click on the 'More Controls' icon. A list of controls will be displayed. Scroll down until you find the 'Shockwave Flash Object' and then click on it. This should change your cursor to a crosshair. Move to the area on the slide where you want to insert the 'Shockwave Flash Object'. Left click, hold and drag to create a box of the required size. Next right click on the control you have just inserted and select 'Properties'. Set the following properties:

* Autoload = True
* EmbedMovie = True
* Enabled = True
* Loop = True
* Playing = True
* Visible = True
* Movie = c:\flash.swf (change this to the location of your .swf file)

Close the 'Properties' control. Save the file. Close the file. Reopen the file. The .swf file should start playing automatically when you reach the slide during the slide show. I found it useful to include controls (pause/play, timeline) in the .swf file.

A: UNC paths start with a double back-slash. Are you doing that, or was that just a typo in the question?

A: I've resorted to mapping a drive to the UNC path.
The command line looks something like subst A: "\\ServerName\SomeDirectory"

A: Well, now it seems to work just fine with the UNC path.
{ "language": "en", "url": "https://stackoverflow.com/questions/87612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I map XML to C# objects I have an XML that I want to load into objects, manipulate those objects (set values, read values) and then save the XMLs back. It is important for me to have the XML in the structure (XSD) that I created. One way to do that is to write my own serializer, but is there built-in support for it, or open source in C#, that I can use?

A: LINQ to XML is very powerful if you're using .NET 3.5; LINQ to XSD may be useful to you too!

A: You can generate serializable C# classes from a schema (XSD) using xsd.exe:

xsd.exe dependency1.xsd dependency2.xsd schema.xsd /out:outputDir

If the schema has dependencies (included/imported schemas), they must all be included on the same command line.

A: Use the xsd.exe command line program that comes with Visual Studio to create class files that you can use in your project/solution, and the System.Xml.Serialization namespace (specifically, the XmlSerializer class) to serialize/deserialize those classes to and from disk.

A: using System.Xml.Serialization; - this namespace has all the attributes you'll need if you want to map your XML to any random object. Alternatively you can use the xsd.exe tool:

xsd file.xsd {/classes | /dataset} [/element:element] [/language:language] [/namespace:namespace] [/outputdir:directory] [URI:uri]

which will take your XSD files and create C# or VB.NET classes out of them. http://msdn.microsoft.com/en-us/library/x6c1kb0s(VS.71).aspx

A: This code (C# .NET 1.0 onwards) works quite well to serialize most objects to XML.
(And back.) It does not work for objects containing ArrayLists; if possible, stick to using only arrays.

using System;
using System.IO;
using System.Text;
using System.Xml.Serialization;

public static string Serialize(object objectToSerialize)
{
    MemoryStream mem = new MemoryStream();
    XmlSerializer ser = new XmlSerializer(objectToSerialize.GetType());
    ser.Serialize(mem, objectToSerialize);
    ASCIIEncoding ascii = new ASCIIEncoding();
    return ascii.GetString(mem.ToArray());
}

public static object Deserialize(Type typeToDeserialize, string xmlString)
{
    byte[] bytes = Encoding.UTF8.GetBytes(xmlString);
    MemoryStream mem = new MemoryStream(bytes);
    XmlSerializer ser = new XmlSerializer(typeToDeserialize);
    return ser.Deserialize(mem);
}

A: xsd.exe from Microsoft has a lot of bugs :| Try this open source pearl http://xsd2code.codeplex.com/

A: We have created a framework which can auto-generate C# classes out of your XML. It's a visual item template to which you pass your XML, and the classes are generated automatically in your project. Using these classes you can create/read/write your XML. Check this link for the framework and Visual C# item template: click here

A: I agree, xsd.exe is really crap... But they made another version that hardly anyone knows about. It's called the XSD Object Generator. It's the next version and has way more options. It generates files from XSD and works fantastically. If you have a schema generator like XMLSpy, create an XSD from your XML and use this tool. I have created very complex classes using this tool. Then create partial classes for extra properties/methods etc.; when you update your schema you just regenerate your classes, and any edits persist in your partial classes. http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=7075

A: I'll bet NetDataContractSerializer can do what you want.
{ "language": "en", "url": "https://stackoverflow.com/questions/87621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: How to use stored procedures within a DTS data transformation task? I have a DTS package with a data transformation task (data pump). I'd like to source the data with the results of a stored procedure that takes parameters, but DTS won't preview the result set and can't define the columns in the data transformation task. Has anyone gotten this to work? Caveat: The stored procedure uses two temp tables (and cleans them up, of course)

A: Enter some valid values for the stored procedure parameters so it runs and returns some data (or even no data; you just need the columns). Then you should be able to do the mapping, etc. Then do a disconnected edit and change to the actual parameter values (I assume you are getting them from a global variable).

DECLARE @param1 DataType1
DECLARE @param2 DataType2
SET @param1 = global variable
SET @param2 = global variable (I forget the exact syntax)
--EXEC procedure @param1, @param2
EXEC dbo.proc value1, value2

Basically you run it like this so the procedure returns results. Do the mapping, then in a disconnected edit comment out the second EXEC and uncomment the first EXEC, and it should work. Basically you just need to make the procedure run and spit out results. Even if you get no rows back, it will still map the columns correctly. I don't have access to our production system (or even database) to create DTS packages. So I create them in a dummy database and replace the stored procedure with something that returns the same columns the production one would, but no rows of data. Then after the mapping is done I move it to the production box with the real procedure and it works. This works great if you keep track of the database via scripts. You can just run the script to build an empty shell procedure and, when done, run the script to put back the true procedure.

A: You would need to actually load them into a table; then you can use a SQL task to move it from that table into the permanent location if you must make a translation.
However, I have found that when working with a stored procedure to source the data, it is almost just as fast and easy to move it to its destination at the same time!

A: Nope - I could only use stored procedures with DTS by having them save their state in scrap tables.
{ "language": "en", "url": "https://stackoverflow.com/questions/87647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What is your preferred method for moving directory structures around in Subversion? I have recently run into an issue where I wanted to add a folder to the directory structure of my project that would become the new 'root' directory for the previously housed files. I've been getting help in a related thread but I wanted to put out a more open ended question to see what a best practice might be. Essentially, my situation was that I was working on development and realized that I wanted to have a resources directory that would not be part of the main development thrust but would still be versioned (to hold mockups and such). So I wanted to add a resources directory and an implementation directory, the implementation directory being the new root directory. How would you go about moving all of the previous directory structure into the implementation directory? A: You can do it pretty easily if you use some GUI for SVN. Personally I love TortoiseSVN for when I'm working in Windows. You just open up the "Repository Browser", right-click on some folder, and choose "Move...". Or, you have the option of doing it straight from within Windows Explorer, drag the files/folders you want to move with the RIGHT mouse button, when you drop them in their new location you'll get a menu, one of the options is "Move in SVN". A: Moves in subversion are done by removing the old files and adding the new ones, so there's nothing special to do. The series of 'svn mv' commands in a loop recommended in the other question should probably work just fine.
{ "language": "en", "url": "https://stackoverflow.com/questions/87666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What's the best way to manipulate Dates and Timestamps in Java? Every time I need to work with dates and/or timestamps in Java I always feel like I'm doing something wrong and spend endless hours trying to find a better way of working with the APIs without having to code my own Date and Time utility classes. Here's a couple of annoying things I just ran into:

* 0-based months. I realize that best practice is to use Calendar.SEPTEMBER instead of 8, but it's annoying that 8 represents September and not August.
* Getting a date without a timestamp. I always need a utility that zeros out the timestamp portion of the date.
* I know there are other issues I've had in the past, but I can't recall them. Feel free to add more in your responses.

So, my question is... What third party APIs do you use to simplify Java's usage of Date and Time manipulation, if any? Any thoughts on using Joda? Has anyone looked closer at the JSR-310 Date and Time API?

A: The Apache Commons Lang project has a DateUtils class that performs helpful Date operations. I use DateUtils.truncate() a lot, which will "zero out" parts of the Date for you (helpful if you want your Date object to, say, represent a date and not include any time information). Each method works for both Date and Calendar objects too. http://commons.apache.org/lang/

A: I've been using Joda exclusively for three years now and would definitely recommend it - it has the whole area covered with an interface that 'does what it says'. Joda can look complex when you start, as e.g. it has concepts of periods, durations and intervals which look sort of similar, but you can start off simply by substituting org.joda.time.DateTime (or org.joda.time.DateMidnight) for java.util.Date in your code, and sticking with the many useful methods that those classes contain, before understanding the other areas.

A: java.time - Java 8 and later now include the java.time framework. Inspired by Joda-Time, defined by JSR 310, extended by the ThreeTen-Extra project.
See the Tutorial. This framework supplants the old java.util.Date/.Calendar classes. Conversion methods let you convert back and forth to work with old code not yet updated for the java.time types. The core classes are:

* Instant - A moment on the timeline, always in UTC.
* ZoneId - A time zone. The subclass ZoneOffset includes a constant for UTC.
* ZonedDateTime = Instant + ZoneId - Represents a moment on the timeline adjusted into a specific time zone.

This framework solves the couple of problems you listed.

0-based months: Month numbers are 1-12 in java.time. Even better, an enum (Month) provides an object instance for each month of the year. So you need not depend on "magic" numbers in your code like 9 or 10.

if ( theMonth.equals ( Month.OCTOBER ) ) { …

Furthermore, that enum includes some handy utility methods such as getting a month's localized name. If not yet familiar with Java enums, read the Tutorial and study up. They are surprisingly handy and powerful.

A date without a time: The LocalDate class represents a date-only value, without time-of-day, without time zone.

LocalDate localDate = LocalDate.parse( "2015-01-02" );

Note that determining a date requires a time zone. A new day dawns earlier in Paris than in Montréal, where it is still 'yesterday'. The ZoneId class represents a time zone.

LocalDate today = LocalDate.now( ZoneId.of( "America/Montreal" ) );

Similarly, there is a LocalTime class for a time-of-day not yet tied to a date or time zone.

About java.time: The java.time framework is built into Java 8 and later. These classes supplant the troublesome old legacy date-time classes such as java.util.Date, Calendar, and SimpleDateFormat. The Joda-Time project, now in maintenance mode, advises migration to the java.time classes. To learn more, see the Oracle Tutorial. And search Stack Overflow for many examples and explanations. The specification is JSR 310.

Where to obtain the java.time classes?

* Java SE 8, Java SE 9, and later: Built-in. Part of the standard Java API with a bundled implementation. Java 9 adds some minor features and fixes.
* Java SE 6 and Java SE 7: Much of the java.time functionality is back-ported to Java 6 & 7 in ThreeTen-Backport.
* Android: The ThreeTenABP project adapts ThreeTen-Backport (mentioned above) for Android specifically. See How to use ThreeTenABP….

The ThreeTen-Extra project extends java.time with additional classes. This project is a proving ground for possible future additions to java.time. You may find some useful classes here such as Interval, YearWeek, YearQuarter, and more.

A: This post has a good discussion comparing the Java Date/Time API vs Joda. I personally just use GregorianCalendar and SimpleDateFormat any time I need to manipulate dates/times in Java. I've never really had any problems in using the Java API and find it quite easy to use, so I have not really looked into any alternatives.

A: I'm using GregorianCalendar - always and everywhere. Plain java.util.Date is too complex, yeah. So, my advice is - use GC, it's simple.

A: It's the same in JavaScript. Someone must have been smoking something when they decided it was a good idea to let 2008 mean the year 2008, 31 mean the 31st day in the month, and - this is the best part - 11 mean the 12th month. On the other hand, they got it right on two out of three.

A: The thing that always gets me with Java is the date time library. I've never used Joda, just briefly looked at it; it looks like a pretty good implementation, and if I understand JSR-310 correctly it's taking knowledge from Joda and eventually having it included in Java SE. Quite often for past projects I've wrapped the Java date time objects, which in itself was quite a task, then used the wrappers for date manipulation.

A: Date APIs are very difficult to design, especially if they have to deal with localization. Try to roll your own and see; it's worth doing at least once.
The fact that Joda was able to do such a good job is a real credit to its developers. To answer your question, I've heard nothing but good things about that library, though I have never played around with it myself.

A: A lot of programmers begin by using Date, which has numerous deprecated overloaded constructors (making it difficult to use), but once you figure out GregorianCalendar it becomes a little bit easier to manage. The example here is pretty helpful: http://java.sun.com/j2se/1.4.2/docs/api/java/util/GregorianCalendar.html

A: It's really simple to write your own date API which sits on top of the raw Java classes, Date and Calendar. Basically both Date and Calendar suffer from the fact that they are trying to cram two concepts into one class:

* Date (i.e. year-month-day)
* Instant (i.e. currentTimeMillis)

When you understand this, it will just revolutionize how you handle date-like concepts in your code. Things will be simpler, clearer, better. In every sense! For me, Joda is over-complicated, at least for the overwhelming majority of purposes, and I particularly don't like the fact that they have gone against standard Java forms, one example being how they parse and format dates. Stephen Colebourne, the guy behind Joda, is the spec lead of JSR-310, and this suffers from the same problems imho (I've followed and contributed to the discussions for the last few years). Do it yourself; it's easy. Just fill in the following classes: MyDate (wrapping year-month-day), Month (an enum), TimeOfDay (hour-min-sec-millis), DayOfWeek (enum), Instant (wrapping a long). Always consider time zones when converting between Instants and Dates. If this seems daunting, you can use Calendar and SimpleDateFormat under the hood. You'll do this in a day and never regret it.
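A self-contained sketch of the two pain points from the question, using the java.time classes described above (available since Java 8; class and method names below are just for illustration):

```java
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.Month;

public class DateBasics {
    // java.time months are 1-based, unlike Calendar: Month.SEPTEMBER really is 9.
    static int septemberNumber() {
        return LocalDate.of(2008, Month.SEPTEMBER, 17).getMonthValue();
    }

    // "Zeroing out" the time portion is just a conversion to a LocalDate.
    static LocalDate dateOnly(LocalDateTime timestamp) {
        return timestamp.toLocalDate();
    }

    public static void main(String[] args) {
        System.out.println(septemberNumber());                                // 9
        System.out.println(dateOnly(LocalDateTime.of(2008, 9, 17, 14, 30))); // 2008-09-17
    }
}
```

No third-party utility class is needed for either case; the date-only and moment-in-time concepts live in separate types, which is exactly the split the last answer argues for.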
{ "language": "en", "url": "https://stackoverflow.com/questions/87676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Advice on handling large data volumes So I have a "large" number of "very large" ASCII files of numerical data (gigabytes altogether), and my program will need to process the entirety of it sequentially at least once. Any advice on storing/loading the data? I've thought of converting the files to binary to make them smaller and faster to load. Should I load everything into memory all at once? If not, what's a good way of loading the data partially? What are some Java-relevant efficiency tips?

A: So then what if the processing requires jumping around in the data for multiple files and multiple buffers? Is constant opening and closing of binary files going to become expensive? I'm a big fan of 'memory-mapped I/O', aka 'direct byte buffers'. In Java they are called mapped byte buffers and are part of java.nio. (Basically, this mechanism uses the OS's virtual memory paging system to 'map' your files and present them programmatically as byte buffers. The OS will manage moving the bytes to/from disk and memory auto-magically and very quickly.) I suggest this approach because a) it works for me, and b) it will let you focus on your algorithm and let the JVM, OS and hardware deal with the performance optimization. All too frequently, they know what is best more so than us lowly programmers. ;) How would you use MBBs in your context? Just create an MBB for each of your files and read them as you see fit. You will only need to store your results. BTW: How much data are you dealing with, in GB? If it is more than 3-4 GB, then this won't work for you on a 32-bit machine, as the MBB implementation is dependent on the addressable memory space of the platform architecture. A 64-bit machine & OS will take you to 1 TB or 128 TB of mappable data. If you are thinking about performance, then get to know Kirk Pepperdine (a somewhat famous Java performance guru).
He is involved with a website, www.JavaPerformanceTuning.com, that has some more MBB details: NIO Performance Tips and other Java performance related things.

A: You might want to have a look at the entries in the Wide Finder Project (do a Google search for "wide finder" java). The Wide Finder involves reading over lots of lines in log files, so look at the Java implementations and see what worked and didn't work there.

A: You could convert to binary, but then you have an extra copy of the data if you need to keep the original around. It may be practical to build some kind of index on top of your original ASCII data, so that if you need to go through the data again you can do it faster on subsequent runs. To answer your questions in order:

Should I load everything into memory all at once? Not if you don't have to. For some files you may be able to, but if you're just processing sequentially, just do some kind of buffered read through the files one by one, storing whatever you need along the way.

If not, what's a good way of loading the data partially? BufferedReader etc. is simplest, although you could look deeper into FileChannel etc. to use memory-mapped I/O to go through windows of the data at a time.

What are some Java-relevant efficiency tips? That really depends on what you're doing with the data itself!

A: Without any additional insight into what kind of processing is going on, here are some general thoughts from when I have done similar work.

* Write a prototype of your application (maybe even "one to throw away") that performs some arbitrary operation on your data set. See how fast it goes. If the simplest, most naive thing you can think of is acceptably fast, no worries!
* If the naive approach does not work, consider pre-processing the data so that subsequent runs will run in an acceptable length of time. You mention having to "jump around" in the data set quite a bit. Is there any way to pre-process that out?
Or, one pre-processing step can be to generate even more data - index data - that provides byte-accurate location information about critical, necessary sections of your data set. Then, your main processing run can utilize this information to jump straight to the necessary data. So, to summarize, my approach would be to try something simple right now and see what the performance looks like. Maybe it will be fine. Otherwise, look into processing the data in multiple steps, saving the most expensive operations for infrequent pre-processing. Don't "load everything into memory". Just perform file accesses and let the operating system's disk page cache decide when you get to actually pull things directly out of memory. A: This depends a lot on the data in the file. Big mainframes have been doing sequential data processing for a long time but they don't normally use random access for the data. They just pull it in a line at a time and process that much before continuing. For random access it is often best to build objects with caching wrappers which know where in the file the data they need to construct is. When needed they read that data in and construct themselves. This way when memory is tight you can just start killing stuff off without worrying too much about not being able to get it back later. A: You really haven't given us enough info to help you. Do you need to load each file in its entirety in order to process it? Or can you process it line by line? Loading an entire file at a time is likely to result in poor performance even for files that aren't terribly large. Your best bet is to define a buffer size that works for you and read/process the data a buffer at a time. A: I've found Informatica to be an exceptionally useful data processing tool. The good news is that the more recent versions even allow Java transformations. If you're dealing with terabytes of data, it might be time to pony up for the best-of-breed ETL tools.
I'm assuming you want to do something with the results of the processing here, like store them somewhere. A: If your numerical data is regularly sampled and you need to do random access, consider storing it in a quadtree. A: I strongly recommend leveraging regular expressions and looking into the "new" I/O (nio) package for faster input. Then it should go as quickly as you can realistically expect gigabytes of data to go. A: If at all possible, get the data into a database. Then you can leverage all the indexing, caching, memory pinning, and other functionality available to you there. A: If you need to access the data more than once, load it into a database. Most databases have some sort of bulk loading utility. If the data can all fit in memory, and you don't need to keep it around or access it that often, you can probably write something simple in Perl or your favorite scripting language.
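To make the mapped-buffer suggestion concrete in a language-neutral way, here is a minimal sketch of the pattern using Python's mmap module (the Java equivalent is FileChannel.map(), which yields a MappedByteBuffer); the packed-doubles file layout is an assumption for illustration only:

```python
import mmap
import os
import struct
import tempfile

# Stand-in for the "converted to binary" data set: a file of 8-byte doubles.
values = [float(i) for i in range(1000)]
fd, path = tempfile.mkstemp(suffix=".bin")
with os.fdopen(fd, "wb") as f:
    f.write(struct.pack(f"{len(values)}d", *values))

# Map the file instead of read()-ing it; the OS pages bytes in and out on
# demand, so sequential scans and "jumping around" are both just indexing.
with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
    total = 0.0
    for offset in range(0, len(mm), 8):
        (x,) = struct.unpack_from("d", mm, offset)
        total += x
    (mid,) = struct.unpack_from("d", mm, 500 * 8)  # random access to item 500

os.remove(path)
print(total, mid)  # 499500.0 500.0
```

The same structure carries over to Java: open a FileChannel, map it, and pull primitives out of the buffer at whatever offsets your algorithm needs.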
{ "language": "en", "url": "https://stackoverflow.com/questions/87679", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Testing running condition of a Windows app I have several applications that are part of a suite of tools that various developers at our studio use. These applications are mainly command line apps that open a DOS cmd shell. These apps in turn start up a GUI application that tracks output and status (via sockets) of these command line apps. The command line apps can be started when the user is logged in, when their workstation is locked (they fire off a batch file and then immediately lock their workstation), and when they are logged out (via a scheduled task). The problems that I have are with the last two cases. If any of these apps fire off when the user is locked or logged out, these commands will spawn the GUI window which tracks the output/status. That's fine, but say the user has their workstation locked -- when they unlock their workstation, the GUI isn't visible. It's running in the task list, but it's not visible. The next time these users run some of our command line apps, the GUI doesn't get launched (because it's already running), but because it's not visible on the desktop, users don't see any output. What I'm looking for is a way to tell from my command line apps if they are running behind a locked workstation or when a user is logged out (via scheduled task) -- basically whether they are running without a user's desktop visible. If I can tell that, then I can simply not start up our GUI and can prevent a lot of problems. These apps that I need to test are C/C++ Windows applications. I hope that this makes sense. A: I found the programmatic answer that I was looking for. It has to do with window stations. Apparently anything running on the desktop will run on a station with a particular name. Anything that isn't on the desktop (i.e. a process started by the task scheduler when logged off or on a locked workstation) will get started with a different station name.
Example code: char nameBuffer[256]; DWORD lenNeeded = 0; HWINSTA dHandle = GetProcessWindowStation(); if ( GetUserObjectInformation(dHandle, UOI_NAME, nameBuffer, sizeof(nameBuffer), &lenNeeded) ) { if ( stricmp(nameBuffer, "winsta0") != 0 ) { // when we get here, we are not running on the real desktop return false; } } If you get inside the inner 'if' statement, then your process is not on the desktop, but running "somewhere else". I looked at the nameBuffer value when not running from the desktop and the names don't mean much, but they are not WinSta0. Link to the docs here. A: You might be able to use SENS (System Event Notification Services). I've never used it myself, but I'm almost positive it will do what you want: give you notifications for events like logon, logoff, screen saver, etc. I know that's pretty vague, but hopefully it will get you started. A quick Google search turned up this, among others: http://discoveringdotnet.alexeyev.org/2008/02/sens-events.html A: I have successfully used this approach to detect whether the desktop is locked on Windows: bool isDesktopLocked = false; HDESK inputDesktop = OpenInputDesktop(0, FALSE, DESKTOP_CREATEMENU | DESKTOP_CREATEWINDOW | DESKTOP_ENUMERATE | DESKTOP_SWITCHDESKTOP | DESKTOP_WRITEOBJECTS | DESKTOP_READOBJECTS); if (NULL == inputDesktop) { isDesktopLocked = true; } else { CloseDesktop(inputDesktop); }
{ "language": "en", "url": "https://stackoverflow.com/questions/87689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to enter JavaScript into a wiki page? How can I, as the wiki admin, enter scripting (JavaScript) into a SharePoint wiki page? I would like to enter a title and, when it is clicked, display a small explanation under it. I have usually done that with JavaScript; any other ideas? A: Assuming you're the administrator of the wiki and are willing to display this on mouseover instead of on click, you don't need JavaScript at all -- you can use straight CSS. Here's an example of the styles and markup: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <title>Test</title> <style type="text/css"> h1 { padding-bottom: .5em; position: relative; } h1 span { font-weight: normal; font-size: small; position: absolute; bottom: 0; display: none; } h1:hover span { display: block; } </style> </head> <body> <h1>Here is the title! <span>Here is a little explanation</span> </h1> <p>Here is some page content</p> </body> </html> With some more involved styles, your tooltip box can look as nice as you'd like. A: If the wiki authors are wise, there's probably no way to do this. The problem with user-contributed JavaScript is that it opens the door for all forms of evil-doers to grab data from the unsuspecting. Let's suppose evil-me posts a script on a public web site: i = new Image(); i.src = 'http://evilme.com/store_cookie_data?c=' + document.cookie; Now I will receive the cookie information of each visitor to the page, posted to a log on my server. And that's just the tip of the iceberg. A: It completely depends on the specific wiki software you are using. The way I've seen work is to host a js file somewhere else and then include it with a script tag with a src attribute. If they don't allow that, maybe they allow an IFRAME that you can set to a page that includes the script. Using the second technique, you won't be allowed to access the host page's DOM. A: I like the CSS answer.
When you can use CSS instead of JavaScript, it results in simpler markup. Another thing to look into is the Community Kit for SharePoint Enhanced Wiki Edition on CodePlex. You can download the source code and add in your own features. Or you can suggest this as a new feature in the forum. A: Use a Content Editor Web Part From the ribbon, select Insert and choose Web Part. From the menu, go to Media and Content and choose Content Editor. From the newly created webpart's dropdown menu, choose Edit Web Part. From the right webpart settings menu, expand Appearance and set Chrome Type to None. Click Click Here to Add Content on the webpart and locate Edit HTML Source in the ribbon. You can use HTML in this area without it being sanitized away. Note: If you plan on adding behaviours to lots of page elements, you may want to upload .js files to Style Library and include only the <script src="..."> tag in the Content Editor. You may also want to look into using a custom master or page layout. Source: https://cobwwweb.com/how-to-run-javascript-on-sharepoint-pages A: That sounds like a security risk. It seems it's possible for the wiki admin to install scripts; see Wikipedia's user scripts. A: You should not be able to add JavaScript code on any public wiki. If you are hosting it yourself, then you need to name the specific wiki system so someone can help you modify the settings - if at all possible for that system. A: If you're talking about using JavaScript as part of a web page on this site, you can't. Or any other public wiki for that matter - it's a security risk. If you're talking about posting a code sample, click on the '101010' button above the text box. A: It would of course depend on the wiki engine you're talking about. Most likely though, as sblundy says, it would be a security risk to allow free usage of JavaScript on the wiki page. A: Add title="text here" to any tag and it should show the text when hovering over it.
Internet Explorer shows the text when you use the alt="text here" attribute, although that is not according to the standards. I tested it just now, and you can add <h2 title="some explanation here">headline</h2> on any system based on MediaWiki (the software Wikipedia uses).
{ "language": "en", "url": "https://stackoverflow.com/questions/87692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: I'm using Wincrypt for Diffie-Hellman-- can I export the shared secret in plain text? OK-- thanks to Mike, I was able to get Wincrypt to generate a Diffie-Hellman keypair. I figured out how to export the public key, and how to import the other party's public key. According to the docs, upon import of the other party's public key, the shared secret has been computed. Great. I now need to get hold of that shared secret, but I don't think it's possible. Simply calling CryptExportKey with a type of PLAINTEXTKEYBLOB fails unless I call CryptSetKeyParam to change the algorithm id from CALG_AGREEDKEY_ANY to something... else. But I don't want something else, I want the shared secret. The API, however, seems designed to discourage this. Any ideas out there? I should note that the problem here is that I'm only writing one side of an implementation of WiFi Protected Setup. So the protocol is defined for me, and the other party is not giving me HCRYPTKEYs. A: This looks like what you need... from: http://msdn.microsoft.com/en-us/library/aa381969(VS.85).aspx To import a Diffie-Hellman public key and calculate the secret session key * *Call the CryptAcquireContext function to get a handle to the Microsoft Diffie-Hellman Cryptographic Provider. *Create a Diffie-Hellman key by calling the CryptGenKey function to create a new key, or by calling the CryptGetUserKey function to retrieve an existing key. *To import the Diffie-Hellman public key into the CSP, call the CryptImportKey function, passing a pointer to the public key BLOB in the pbData parameter, the length of the BLOB in the dwDataLen parameter, and the handle to the Diffie-Hellman key in the hPubKey parameter. This causes the calculation, (Y^X) mod P, to be performed, thus creating the shared, secret key and completing the key exchange. This function call returns a handle to the new, secret, session key in the hKey parameter. *At this point, the imported Diffie-Hellman key is of type CALG_AGREEDKEY_ANY.
Before the key can be used, it must be converted into a session key type. This is accomplished by calling the CryptSetKeyParam function with dwParam set to KP_ALGID and with pbData set to a pointer to an ALG_ID value that represents a session key, such as CALG_RC4. The key must be converted before using the shared key in the CryptEncrypt or CryptDecrypt function. Calls made to either of these functions prior to converting the key type will fail. *The secret session key is now ready to be used for encryption or decryption. *When the key is no longer needed, destroy the key handle by calling the CryptDestroyKey function.
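The agreement step those docs describe — each side computing (Y^X) mod P — is ordinary Diffie-Hellman arithmetic, which can be illustrated outside CryptoAPI with toy numbers (a sketch only; a real exchange uses a large prime modulus, and the CSP keeps the private exponents for you):

```python
# Toy Diffie-Hellman showing the (Y^X) mod P agreement that CryptImportKey
# performs internally. Numbers are illustrative only; a real exchange uses
# a large prime modulus.
P = 23   # shared prime modulus
G = 5    # shared generator

x_a = 6   # party A's private exponent
x_b = 15  # party B's private exponent

y_a = pow(G, x_a, P)  # A's public value (sent to B)
y_b = pow(G, x_b, P)  # B's public value (sent to A)

# Each side raises the *other* side's public value to its own private
# exponent; both arrive at the same shared secret.
secret_a = pow(y_b, x_a, P)
secret_b = pow(y_a, x_b, P)
print(secret_a, secret_b)  # 2 2
```

This also shows why the API hides the agreed value behind a key handle: the shared secret is meant to be consumed as keying material, not exported.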
{ "language": "en", "url": "https://stackoverflow.com/questions/87694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Programmatically stream audio in Cocoa on the Mac How do I go about programmatically creating audio streams using Cocoa on the Mac -- to make, say, a white-noise generator using the core frameworks on Mac OS X in a Cocoa app? A: One way is using the CoreAudio DefaultOutputUnit. You can configure it with parameters such as output sampling rate, resolution, and output sample format. Then you can programmatically create a raw sound wave and provide this to the output unit. Take a look at this example on your machine at /Developer/Examples/CoreAudio/SimpleSDK/DefaultOutputUnit/ which uses the default output unit to play a programmatically rendered sine wave. Using that as a starting point, you can write a routine to render anything else to output. The /Developer/Examples/CoreAudio/ location also contains tons of other Core Audio examples. A: Look at Audio Queue Services.
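Independent of the output path (a DefaultOutputUnit render callback versus a file), generating the white noise itself is just filling a buffer with random samples. A framework-neutral sketch, here using Python's stdlib wave module to write one second of noise (sample rate and amplitude are arbitrary choices for the example):

```python
import random
import struct
import wave

SAMPLE_RATE = 44100   # arbitrary choices for the sketch
DURATION_S = 1
AMPLITUDE = 0.5       # leave headroom below full scale

# White noise is just uniformly random samples; a CoreAudio render callback
# would fill its buffer the same way instead of writing a file.
rng = random.Random(42)
frames = bytearray()
for _ in range(SAMPLE_RATE * DURATION_S):
    sample = rng.uniform(-AMPLITUDE, AMPLITUDE)
    frames += struct.pack("<h", int(sample * 32767))  # 16-bit PCM

with wave.open("noise.wav", "wb") as w:
    w.setnchannels(1)        # mono
    w.setsampwidth(2)        # 2 bytes = 16 bits
    w.setframerate(SAMPLE_RATE)
    w.writeframes(bytes(frames))
```

In the CoreAudio version, the loop body becomes the render callback that fills each requested buffer on demand.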
{ "language": "en", "url": "https://stackoverflow.com/questions/87695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do you deal with NULL values in columns of type boolean in MS Access? I was wondering if there is a better way to cope with MS Access' inability to handle NULL for boolean values other than changing the column data type to integer. A: I think you must use a number, and so, it seems, does Allen Browne, Access MVP. A: Not that I've found :( I haven't programmed Access in awhile, but what I remember involves quite a lot of IsNull checks. A: I think it depends on how you want your app/solution to interpret said NULLs in your data. Do you want to simply "ignore" them in a report... i.e. have them print out as blank spaces or newlines? In that case you can use the handy IsNull function along with the "immediate if" IIf() in either the SQL builder or a column in the regular Access query designer as follows: IIF(IsNull(BooleanColumnName), NewLine/BlankSpace/Whatever, BooleanColumnName) On the other hand, if you want to consider the NULLs as "False" values, you had better update the column and just change them with something like: UPDATE table SET BooleanColumnName = FALSE WHERE BooleanColumnName IS NULL
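The two strategies above — coalescing NULL to FALSE at query time versus rewriting the column in place — can be demonstrated in any SQL engine. A sketch in SQLite via Python (table and column names are invented; SQLite, like Access, stores booleans as integers):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT, active INTEGER)")  # 'boolean' column
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("a", 1), ("b", 0), ("c", None)])

# Strategy 1: treat NULL as FALSE at query time
# (IIF(IsNull(...)) in Access; COALESCE/IFNULL here).
rows = conn.execute(
    "SELECT name, COALESCE(active, 0) FROM t ORDER BY name").fetchall()
print(rows)  # [('a', 1), ('b', 0), ('c', 0)]

# Strategy 2: rewrite the NULLs in place, as in the UPDATE statement above.
conn.execute("UPDATE t SET active = 0 WHERE active IS NULL")
nulls_left = conn.execute(
    "SELECT COUNT(*) FROM t WHERE active IS NULL").fetchone()[0]
print(nulls_left)  # 0
```

Strategy 1 leaves the stored data untouched (NULL still means "never set"); strategy 2 destroys that distinction but makes every later query simpler.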
{ "language": "en", "url": "https://stackoverflow.com/questions/87712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Suggestions for human editable data file format/parsing library For example, right now I have a roll-my-own solution that uses data files that include blocks like: PlayerCharacter Fighter Hitpoints 25 Strength 10 StartPosition (0, 0, 0) Art Model BigBuffGuy Footprint LargeFootprint end InventoryItem Sword InventoryItem Shield InventoryItem HealthPotion end * *human editable (w/ minimal junk characters, ideally) *resilient to errors (fewest 'wow i can't parse anything useful anymore' style errors, and thus i've lost all of the data in the rest of the file) - but still able to identify and report them, of course. My example the only complete failure case is missing 'end's. *nested structure style data *array/list style data *customizable foundation types *fast Are there any well known solutions that meet/exceed these requirements? A: Yaml is a good solution and very close to what you have. Search for it. A: I second the YAML suggestion. It's extremely easy to edit, very forgiving of mistakes and widely supported (especially among the dynamic languages). A: I'd say the most common choices are: * *JSON (offical site) - very flexible, though the punctuation can take a bit for people to get used to *INI - super simple to use, but a bit limited in data-types *XML - pretty flexible, common, but way too verbose sometimes A: You could try JSON available at: http://www.json.org/ It was designed for javascript and web usage initially. But it's pretty clean, and supported in many languages. A: Lua was designed to be a programming language where the syntax lets you easily use it as a markup language as well, so that you include data files as if they were code. Many computer games use it for their scripting, such as World of Warcraft due to its speed and ease of use. However it's originally designed and maintained for the energy industry so there's a serious background. Scheme with its S-expressions is also a very nice but different-looking syntax for data. 
Finally, you've got XML, which has the benefit that most entry-level developers know it. You can also roll your own well-defined and efficient parser with a nice development suite such as ANTLR. A: I would suggest JSON. * *Just as readable/editable as YAML *If you happen to use it for the Web, then it can be eval()'ed into JavaScript objects *Probably as cross-language as YAML
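As a rough illustration, the PlayerCharacter block from the question could map to JSON like this (the field layout is a guess, not a recommendation), with the bonus that a parse error is reported with a position instead of silently losing the rest of the file:

```python
import json

# Hypothetical JSON rendering of the PlayerCharacter block from the question.
fighter_json = """
{
  "PlayerCharacter": {
    "name": "Fighter",
    "Hitpoints": 25,
    "Strength": 10,
    "StartPosition": [0, 0, 0],
    "Art": {"Model": "BigBuffGuy", "Footprint": "LargeFootprint"},
    "Inventory": ["Sword", "Shield", "HealthPotion"]
  }
}
"""

data = json.loads(fighter_json)  # nested structure + lists come for free
pc = data["PlayerCharacter"]
print(pc["Hitpoints"], pc["Inventory"])

# A truncated file (the "missing end" failure case) raises with a location:
err = None
try:
    json.loads(fighter_json[:-3])  # chop off the closing brace
except json.JSONDecodeError as exc:
    err = exc
print("parse error:", err is not None)
```

YAML would look nearly identical minus the braces and quotes; the trade-off is that JSON parsing is in every standard library while YAML usually needs a third-party package.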
{ "language": "en", "url": "https://stackoverflow.com/questions/87713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do you calculate the axis-aligned bounding box of an ellipse? If the major axis of the ellipse is vertical or horizontal, it's easy to calculate the bounding box, but what about when the ellipse is rotated? The only way I can think of so far is to calculate all the points around the perimeter and find the max/min x and y values. It seems like there should be a simpler way. If there's a function (in the mathematical sense) that describes an ellipse at an arbitrary angle, then I could use its derivative to find points where the slope is zero or undefined, but I can't seem to find one. Edit: to clarify, I need the axis-aligned bounding box, i.e. it should not be rotated with the ellipse, but stay aligned with the x axis, so transforming the bounding box won't work. A: This is relatively simple but a bit hard to explain since you haven't given us the way you represent your ellipse. There are so many ways to do it. Anyway, the general principle goes like this: You can't calculate the axis-aligned bounding box directly. You can however calculate the extrema of the ellipse in x and y as points in 2D space. For this it's sufficient to take the equations x(t) = ellipse_equation(t) and y(t) = ellipse_equation(t). Get the first-order derivative of each and solve for its roots. Since we're dealing with ellipses that are based on trigonometry, that's straightforward. You should end up with an equation that gets the roots via atan, acos or asin. Hint: To check your code, try it with an unrotated ellipse: you should get roots at 0, Pi/2, Pi and 3*Pi/2. Do that for each axis (x and y). You will get at most four roots (less if your ellipse is degenerate, e.g. one of the radii is zero). Evaluate the positions at the roots and you get all extreme points of the ellipse. Now you're almost there. Getting the bounding box of the ellipse is as simple as scanning these four points for xmin, xmax, ymin and ymax.
Btw - if you have problems finding the equation of your ellipse: try to reduce it to the case that you have an axis-aligned ellipse with a center, two radii and a rotation angle around the center. If you do so, the equations become: // the ellipse unrotated: temp_x(t) = radius.x * cos(t); temp_y(t) = radius.y * sin(t); // the ellipse with rotation applied: x(t) = temp_x(t) * cos(angle) - temp_y(t) * sin(angle) + center.x; y(t) = temp_x(t) * sin(angle) + temp_y(t) * cos(angle) + center.y; A: You could try using the parametrized equations for an ellipse rotated at an arbitrary angle: x = h + a*cos(t)*cos(phi) - b*sin(t)*sin(phi) [1] y = k + b*sin(t)*cos(phi) + a*cos(t)*sin(phi) [2] ...where the ellipse has centre (h,k), semimajor axis a and semiminor axis b, and is rotated through angle phi. You can then differentiate and solve for gradient = 0: 0 = dx/dt = -a*sin(t)*cos(phi) - b*cos(t)*sin(phi) => tan(t) = -b*tan(phi)/a [3] which should give you many solutions for t (two of which you are interested in); plug those back into [1] to get your max and min x. Repeat for [2]: 0 = dy/dt = b*cos(t)*cos(phi) - a*sin(t)*sin(phi) => tan(t) = b*cot(phi)/a [4] Let's try an example: Consider an ellipse at (0,0) with a=2, b=1, rotated by PI/4: [1] => x = 2*cos(t)*cos(PI/4) - sin(t)*sin(PI/4) [3] => tan(t) = -tan(PI/4)/2 = -1/2 => t = -0.4636 + n*PI We are interested in t = -0.4636 and t = -3.6052 So we get: x = 2*cos(-0.4636)*cos(PI/4) - sin(-0.4636)*sin(PI/4) = 1.5811 and x = 2*cos(-3.6052)*cos(PI/4) - sin(-3.6052)*sin(PI/4) = -1.5811 A: Brilliant, Johan Nilsson.
I have transcribed your code to C# - ellipseAngle is now in degrees: private static RectangleF EllipseBoundingBox(int ellipseCenterX, int ellipseCenterY, int ellipseRadiusX, int ellipseRadiusY, double ellipseAngle) { double angle = ellipseAngle * Math.PI / 180; double a = ellipseRadiusX * Math.Cos(angle); double b = ellipseRadiusY * Math.Sin(angle); double c = ellipseRadiusX * Math.Sin(angle); double d = ellipseRadiusY * Math.Cos(angle); double width = Math.Sqrt(Math.Pow(a, 2) + Math.Pow(b, 2)) * 2; double height = Math.Sqrt(Math.Pow(c, 2) + Math.Pow(d, 2)) * 2; var x = ellipseCenterX - width * 0.5; var y = ellipseCenterY - height * 0.5; return new Rectangle((int)x, (int)y, (int)width, (int)height); } A: I think the most useful formula is this one. An ellipse rotated by an angle phi about the origin has the parametric equations x(t) = h + a*cos(t)*cos(phi) - b*sin(t)*sin(phi), y(t) = k + b*sin(t)*cos(phi) + a*cos(t)*sin(phi), where (h,k) is the center, a and b are the semimajor and semiminor axes, and t varies from -pi to pi. From that, you should be able to derive for which t dx/dt or dy/dt goes to 0. A: I found a simple formula at http://www.iquilezles.org/www/articles/ellipses/ellipses.htm (and ignored the z axis). I implemented it roughly like this: num ux = ellipse.r1 * cos(ellipse.phi); num uy = ellipse.r1 * sin(ellipse.phi); num vx = ellipse.r2 * cos(ellipse.phi+PI/2); num vy = ellipse.r2 * sin(ellipse.phi+PI/2); num bbox_halfwidth = sqrt(ux*ux + vx*vx); num bbox_halfheight = sqrt(uy*uy + vy*vy); Point bbox_ul_corner = new Point(ellipse.center.x - bbox_halfwidth, ellipse.center.y - bbox_halfheight); Point bbox_br_corner = new Point(ellipse.center.x + bbox_halfwidth, ellipse.center.y + bbox_halfheight); A: Here is the formula for the case if the ellipse is given by its foci and eccentricity (for the case where it is given by axis lengths, center and angle, see e. g. the answer by user1789690).
Namely, if the foci are (x0, y0) and (x1, y1) and the eccentricity is e, then bbox_halfwidth = sqrt(k2*dx2 + (k2-1)*dy2)/2 bbox_halfheight = sqrt((k2-1)*dx2 + k2*dy2)/2 where dx = x1-x0 dy = y1-y0 dx2 = dx*dx dy2 = dy*dy k2 = 1.0/(e*e) I derived the formulas from the answer by user1789690 and Johan Nilsson. A: If you work with OpenCV/C++ and use the cv::fitEllipse(..) function, you may need the bounding rect of the ellipse. Here I made a solution using Mike's answer: // tau = 2 * pi, see tau manifest const double TAU = 2 * std::acos(-1); cv::Rect calcEllipseBoundingBox(const cv::RotatedRect &anEllipse) { if (std::fmod(std::abs(anEllipse.angle), 90.0) <= 0.01) { return anEllipse.boundingRect(); } double phi = anEllipse.angle * TAU / 360; double major = anEllipse.size.width / 2.0; double minor = anEllipse.size.height / 2.0; if (minor > major) { std::swap(minor, major); phi += TAU / 4; } double cosPhi = std::cos(phi), sinPhi = std::sin(phi); double tanPhi = sinPhi / cosPhi; double tx = std::atan(-minor * tanPhi / major); cv::Vec2d eqx{ major * cosPhi, - minor * sinPhi }; double x1 = eqx.dot({ std::cos(tx), std::sin(tx) }); double x2 = eqx.dot({ std::cos(tx + TAU / 2), std::sin(tx + TAU / 2) }); double ty = std::atan(minor / (major * tanPhi)); cv::Vec2d eqy{ major * sinPhi, minor * cosPhi }; double y1 = eqy.dot({ std::cos(ty), std::sin(ty) }); double y2 = eqy.dot({ std::cos(ty + TAU / 2), std::sin(ty + TAU / 2) }); cv::Rect_<float> bb{ cv::Point2f(std::min(x1, x2), std::min(y1, y2)), cv::Point2f(std::max(x1, x2), std::max(y1, y2)) }; return bb + anEllipse.center; } A: Here's a TypeScript function based on the above answers.
export function getRotatedEllipseBounds( x: number, y: number, rx: number, ry: number, rotation: number ) { const c = Math.cos(rotation) const s = Math.sin(rotation) const w = Math.hypot(rx * c, ry * s) const h = Math.hypot(rx * s, ry * c) return { minX: x + rx - w, minY: y + ry - h, maxX: x + rx + w, maxY: y + ry + h, width: w * 2, height: h * 2, } } A: This code is based on the code user1789690 contributed above, but implemented in Delphi. I have tested this and as far as I can tell it works perfectly. I spent an entire day searching for an algorithm or some code, tested some that didn't work, and I was very happy to finally find the code above. I hope someone finds this useful. This code will calculate the bounding box of a rotated ellipse. The bounding box is axis aligned and NOT rotated with the ellipse. The radiuses are for the ellipse before it was rotated. type TSingleRect = record X: Single; Y: Single; Width: Single; Height: Single; end; function GetBoundingBoxForRotatedEllipse(EllipseCenterX, EllipseCenterY, EllipseRadiusX, EllipseRadiusY, EllipseAngle: Single): TSingleRect; var a: Single; b: Single; c: Single; d: Single; begin a := EllipseRadiusX * Cos(EllipseAngle); b := EllipseRadiusY * Sin(EllipseAngle); c := EllipseRadiusX * Sin(EllipseAngle); d := EllipseRadiusY * Cos(EllipseAngle); Result.Width := Hypot(a, b) * 2; Result.Height := Hypot(c, d) * 2; Result.X := EllipseCenterX - Result.Width * 0.5; Result.Y := EllipseCenterY - Result.Height * 0.5; end; A: This is my function for finding tight fit rectangle to ellipse with arbitrary orientation I have opencv rect and point for implementation: cg - center of the ellipse size - major, minor axis of ellipse angle - orientation of ellipse cv::Rect ellipse_bounding_box(const cv::Point2f &cg, const cv::Size2f &size, const float angle) { float a = size.width / 2; float b = size.height / 2; cv::Point pts[4]; float phi = angle * (CV_PI / 180); float tan_angle = tan(phi); float t = atan((-b*tan_angle) / a); 
float x = cg.x + a*cos(t)*cos(phi) - b*sin(t)*sin(phi); float y = cg.y + b*sin(t)*cos(phi) + a*cos(t)*sin(phi); pts[0] = cv::Point(cvRound(x), cvRound(y)); t = atan((b*(1 / tan(phi))) / a); x = cg.x + a*cos(t)*cos(phi) - b*sin(t)*sin(phi); y = cg.y + b*sin(t)*cos(phi) + a*cos(t)*sin(phi); pts[1] = cv::Point(cvRound(x), cvRound(y)); phi += CV_PI; tan_angle = tan(phi); t = atan((-b*tan_angle) / a); x = cg.x + a*cos(t)*cos(phi) - b*sin(t)*sin(phi); y = cg.y + b*sin(t)*cos(phi) + a*cos(t)*sin(phi); pts[2] = cv::Point(cvRound(x), cvRound(y)); t = atan((b*(1 / tan(phi))) / a); x = cg.x + a*cos(t)*cos(phi) - b*sin(t)*sin(phi); y = cg.y + b*sin(t)*cos(phi) + a*cos(t)*sin(phi); pts[3] = cv::Point(cvRound(x), cvRound(y)); long left = 0xfffffff, top = 0xfffffff, right = 0, bottom = 0; for (int i = 0; i < 4; i++) { left = left < pts[i].x ? left : pts[i].x; top = top < pts[i].y ? top : pts[i].y; right = right > pts[i].x ? right : pts[i].x; bottom = bottom > pts[i].y ? bottom : pts[i].y; } cv::Rect fit_rect(left, top, (right - left) + 1, (bottom - top) + 1); return fit_rect; } A: Here's a simple example of bounding box around rotated ellipse in javascript: https://jsfiddle.net/rkn61mjL/1/ The idea is pretty simple and doesn't require complex calculations and solving gradients: * *calculate a simple bounding box of non-rotated ellipse: let p1 = [centerX - radiusX, centerY - radiusY]; let p2 = [centerX + radiusX, centerY - radiusY]; let p3 = [centerX + radiusX, centerY + radiusY]; let p4 = [centerX - radiusX, centerY + radiusY]; *rotate all of the four points around the center of the ellipse: p1 = [(p1[0]-centerX) * Math.cos(radians) - (p1[1]-centerY) * Math.sin(radians) + centerX, (p1[0]-centerX) * Math.sin(radians) + (p1[1]-centerY) * Math.cos(radians) + centerY]; p2 = [(p2[0]-centerX) * Math.cos(radians) - (p2[1]-centerY) * Math.sin(radians) + centerX, (p2[0]-centerX) * Math.sin(radians) + (p2[1]-centerY) * Math.cos(radians) + centerY]; p3 = [(p3[0]-centerX) * Math.cos(radians) 
- (p3[1]-centerY) * Math.sin(radians) + centerX, (p3[0]-centerX) * Math.sin(radians) + (p3[1]-centerY) * Math.cos(radians) + centerY]; p4 = [(p4[0]-centerX) * Math.cos(radians) - (p4[1]-centerY) * Math.sin(radians) + centerX, (p4[0]-centerX) * Math.sin(radians) + (p4[1]-centerY) * Math.cos(radians) + centerY]; A: Here is another version of Pranay Soni's code, implemented in js (codepen). I hope someone will find it useful /** * @param {Number} rotation * @param {Number} majorAxis * @param {Number} minorAxis * @param {Point} pivot {x: number, y: number} * @returns {Object} */ export function getElipseBoundingLines(rotation, majorAxis, minorAxis, pivot) { const {cos, sin, tan, atan, round, min, max, PI} = Math; let phi = rotation / 180 * PI; if(phi === 0) phi = 0.00001; // major axis let a = majorAxis; //minor axis let b = minorAxis; const getX = (pivot, phi, t) => { return round(pivot.x + a * cos(t) * cos(phi) - b * sin(t) * sin(phi)) } const getY = (pivot, phi, t) => { return round(pivot.y + b * sin(t) * cos(phi) + a * cos(t) * sin(phi)) } const X = [], Y = []; let t = atan(-b * tan(phi) / a); X.push(getX(pivot, phi, t)); Y.push(getY(pivot, phi, t)); t = atan(b * (1 / tan(phi)) / a); X.push(getX(pivot, phi, t)); Y.push(getY(pivot, phi, t)); phi += PI; t = atan(-b * tan(phi) / a); X.push(getX(pivot, phi, t)); Y.push(getY(pivot, phi, t)); t = atan(b * (1 / tan(phi)) / a); X.push(getX(pivot, phi, t)); Y.push(getY(pivot, phi, t)); const left = min(...X); const right = max(...X); const top = min(...Y); const bottom = max(...Y); return {left, top, right, bottom}; } A: The general method is to find the zeroes of the derivative of the parametric form of the ellipse along the X and Y axes. The positions of those zeroes give the edge points along the vertical and horizontal directions (where the derivative is zero).
// compute point on ellipse from angle around ellipse (theta) function arc(theta, cx, cy, rx, ry, alpha) { // theta is angle in radians around arc // alpha is angle of rotation of ellipse in radians var cos = Math.cos(alpha), sin = Math.sin(alpha), x = rx*Math.cos(theta), y = ry*Math.sin(theta); return { x: cx + cos*x - sin*y, y: cy + sin*x + cos*y }; } function bb_ellipse(cx, cy, rx, ry, alpha) { var tan = Math.tan(alpha), p1, p2, p3, p4, theta, xmin, ymin, xmax, ymax ; // find min/max from zeroes of directional derivative along x and y // along x axis theta = Math.atan2(-ry*tan, rx); // get point for this theta p1 = arc(theta, cx, cy, rx, ry, alpha); // get anti-symmetric point p2 = arc(theta + Math.PI, cx, cy, rx, ry, alpha); // along y axis theta = Math.atan2(ry, rx*tan); // get point for this theta p3 = arc(theta, cx, cy, rx, ry, alpha); // get anti-symmetric point p4 = arc(theta + Math.PI, cx, cy, rx, ry, alpha); // compute min/max values ymin = Math.min(p3.y, p4.y) xmin = Math.min(p1.x, p2.x); ymax = Math.max(p3.y, p4.y); xmax = Math.max(p1.x, p2.x); // return bounding box vertices return [ {x: xmin, y: ymin}, {x: xmax, y: ymin}, {x: xmax, y: ymax}, {x: xmin, y: ymax} ]; } var cx = 120, cy = 120, rx = 100, ry = 40, alpha = -45; function ellipse(cx, cy, rx, ry, alpha) { // create an ellipse const ellipse = document.createElementNS('http://www.w3.org/2000/svg', 'ellipse'); ellipse.setAttribute('stroke', 'black'); ellipse.setAttribute('fill', 'none'); ellipse.setAttribute('cx', cx); ellipse.setAttribute('cy', cy); ellipse.setAttribute('rx', rx); ellipse.setAttribute('ry', ry); ellipse.setAttribute('transform', 'rotate('+alpha+' '+cx+' '+cy+')'); document.getElementById('svg').appendChild(ellipse); // create the bounding box const bb = bb_ellipse(cx, cy, rx, ry, /*angle in radians*/ alpha*Math.PI/180); const polygon = document.createElementNS('http://www.w3.org/2000/svg', 'polygon'); polygon.setAttribute('stroke', 'red'); polygon.setAttribute('fill', 'none'); 
polygon.setAttribute('points', bb.map(p => String(p.x) + ' ' + String(p.y)).join(' ')); document.getElementById('svg').appendChild(polygon); } ellipse(cx, cy, rx, ry, alpha); <svg xmlns="http://www.w3.org/2000/svg" id="svg" style="position:relative;width:240px;height:240px" viewBox="0 0 240 240"></svg> You may be interested in my computational geometry library Geometrize (in JavaScript) which constructs and renders many 2D curves and shapes, along with bounding boxes, convex hulls and intersection points.
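The zero-derivative construction above also collapses to a closed form: the box's half-width is sqrt(rx²·cos²α + ry²·sin²α) and its half-height is sqrt(rx²·sin²α + ry²·cos²α), since x(t) and y(t) are each of the form A·cos(t) + B·sin(t). A small Python sketch (mine, not from any answer above) that cross-checks the closed form against brute-force sampling of the ellipse:

```python
import math

def ellipse_bbox(cx, cy, rx, ry, alpha):
    """Axis-aligned bounding box of an ellipse rotated by alpha (radians).

    Closed form from setting the parametric derivatives to zero, as in
    the zero-derivative answer above: amplitude of A*cos(t) + B*sin(t)
    is sqrt(A*A + B*B).
    """
    half_w = math.sqrt((rx * math.cos(alpha)) ** 2 + (ry * math.sin(alpha)) ** 2)
    half_h = math.sqrt((rx * math.sin(alpha)) ** 2 + (ry * math.cos(alpha)) ** 2)
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)

def ellipse_bbox_sampled(cx, cy, rx, ry, alpha, n=20000):
    """Brute-force check: sample points around the ellipse, take min/max."""
    xs, ys = [], []
    for i in range(n):
        t = 2 * math.pi * i / n
        x = rx * math.cos(t)
        y = ry * math.sin(t)
        xs.append(cx + x * math.cos(alpha) - y * math.sin(alpha))
        ys.append(cy + x * math.sin(alpha) + y * math.cos(alpha))
    return (min(xs), min(ys), max(xs), max(ys))
```

Both routines agree to within sampling error, which makes this a cheap way to validate a port of the JavaScript above to another language.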
{ "language": "en", "url": "https://stackoverflow.com/questions/87734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37" }
Q: How do you transfer or export SQL Server 2005 data to Excel I have a simple SQL 'Select' query, and I'd like to dump the results into an Excel file. I'm only able to save as .csv and converting to .xls creates some super ugly output. In any case, as far as I can tell (using Google) this doesn't seem to be so straightforward. Any help would be greatly appreciated. A: SSIS is a no-brainer for doing stuff like this and is very straightforward (and this is just the kind of thing it is for). * *Right-click the database in SQL Management Studio *Go to Tasks and then Export Data; you'll then see an easy-to-use wizard. *Your database will be the source, you can enter your SQL query *Choose Excel as the target *Run it at end of wizard If you wanted, you could save the SSIS package as well (there's an option at the end of the wizard) so that you can do it on a schedule or something (and even open and modify to add more functionality if needed). A: If you are looking for ad-hoc items rather than something that you would put into SSIS, then from within SSMS simply highlight the results grid, copy, then paste into Excel; it isn't elegant, but it works. Then you can save as native .xls rather than .csv A: Here's a video that will show you, step-by-step, how to export data to Excel. It's a great solution for 'one-off' problems where you need to export to Excel: Ad-Hoc Reporting A: Use "External data" from Excel. It can use an ODBC connection to fetch data from an external source: Data/Get External Data/New Database Query That way, even if the data in the database changes, you can easily refresh. A: It's a LOT easier just to do it from within Excel! Open Excel Data>Import/Export Data>Import Data Next to file name click "New Source" Button On Welcome to the Data Connection Wizard, choose Microsoft SQL Server. Click Next. Enter Server Name and Credentials. From the drop-down, choose whichever database holds the table you need. Select your table then Next..... 
Enter a Description if you'd like and click Finish. When you're done and back in Excel, just click "OK" Easy. A: I've found an easy way to export query results from SQL Server Management Studio 2005 to Excel. 1) Select menu item Query -> Query Options. 2) Set check box in Results -> Grid -> Include column headers when copying or saving the results. After that, when you Select All and Copy the query results, you can paste them to Excel, and the column headers will be present. A: Create the Excel data source and insert the values, insert into OPENROWSET('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database=D:\testing.xls;', 'SELECT * FROM [SheetName$]') select * from SQLServerTable More information is available here http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=49926 A: See this This is by far the best post for exporting to Excel from SQL: http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=49926 To quote from user madhivanan, Apart from using DTS and Export wizard, we can also use this query to export data from SQL Server 2000 to Excel Create an Excel file named testing with the same headers as the table columns and use these queries 1 Export data to existing EXCEL file from SQL Server table insert into OPENROWSET('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database=D:\testing.xls;', 'SELECT * FROM [SheetName$]') select * from SQLServerTable 2 Export data from Excel to new SQL Server table select * into SQLServerTable FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database=D:\testing.xls;HDR=YES', 'SELECT * FROM [Sheet1$]') 3 Export data from Excel to existing SQL Server table Insert into SQLServerTable Select * FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database=D:\testing.xls;HDR=YES', 'SELECT * FROM [SheetName$]') 4 If you don't want to create an EXCEL file in advance and want to export data to it, use EXEC sp_makewebtask @outputfile = 'd:\testing.xls', @query = 'Select * from Database_name..SQLServerTable', @colheaders =1, 
@FixedFont=0,@lastupdated=0,@resultstitle='Testing details' (Now you can find the file with data in tabular format) 5 To export data to a new EXCEL file with headings (column names), create the following procedure create procedure proc_generate_excel_with_columns ( @db_name varchar(100), @table_name varchar(100), @file_name varchar(100) ) as --Generate column names as a recordset declare @columns varchar(8000), @sql varchar(8000), @data_file varchar(100) select @columns=coalesce(@columns+',','')+column_name+' as '+column_name from information_schema.columns where table_name=@table_name select @columns=''''''+replace(replace(@columns,' as ',''''' as '),',',',''''') --Create a dummy file to have actual data select @data_file=substring(@file_name,1,len(@file_name)-charindex('\',reverse(@file_name)))+'\data_file.xls' --Generate column names in the passed EXCEL file set @sql='exec master..xp_cmdshell ''bcp " select * from (select '+@columns+') as t" queryout "'+@file_name+'" -c''' exec(@sql) --Generate data in the dummy file set @sql='exec master..xp_cmdshell ''bcp "select * from '+@db_name+'..'+@table_name+'" queryout "'+@data_file+'" -c''' exec(@sql) --Copy dummy file to passed EXCEL file set @sql= 'exec master..xp_cmdshell ''type '+@data_file+' >> "'+@file_name+'"''' exec(@sql) --Delete dummy file set @sql= 'exec master..xp_cmdshell ''del '+@data_file+'''' exec(@sql) After creating the procedure, execute it by supplying database name, table name and file path: EXEC proc_generate_excel_with_columns 'your dbname', 'your table name','your file path' It's a whopping 29 pages but that is because others show various other ways as well as people asking questions just like this one on how to do it. Follow that thread entirely and look at the various questions people have asked and how they are solved. I picked up quite a bit of knowledge just skimming it and have used portions of it to get expected results. 
To update single cells Another member there, Peter Larson, posts the following: I think one thing is missing here. It is great to be able to Export and Import to Excel files, but how about updating single cells? Or a range of cells? This is the principle of how you manage that update OPENROWSET('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database=c:\test.xls;hdr=no', 'SELECT * FROM [Sheet1$b7:b7]') set f1 = -99 You can also add formulas to Excel using this: update OPENROWSET('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database=c:\test.xls;hdr=no', 'SELECT * FROM [Sheet1$b7:b7]') set f1 = '=a7+c7' Exporting with column names using T-SQL Member Mladen Prajdic also has a blog entry on how to do this here References: www.sqlteam.com (btw this is an excellent blog / forum for anyone looking to get more out of SQL Server). A: You could always use ADO to write the results out to the worksheet cells from a recordset object A: A handy tool Convert SQL to Excel converts SQL table or SQL query result to Excel file without programming. Main Features - Convert/export a SQL Table to Excel file - Convert/export multiple tables (multiple query results) to multiple Excel worksheets. - Allow flexible TSQL query which can have multiple SELECT statements or other complex query statements. Best regards, Alex A: There exist several tools to export/import from SQL Server to Excel. Google is your friend :-) We use DbTransfer (which is one of those which can export a complete Database to an Excel file also) here: http://www.dbtransfer.de/Products/DbTransfer. We have used the openrowset feature of SQL Server before, but I was never happy with it, because it's not very easy to use and lacks features and speed... A: Try the 'Import and Export Data (32-bit)' tool. Available after installing MS SQL Management Studio Express 2012. With this tool it's very easy to select a database, a table or to insert your own SQL query and choose a destination (an MS Excel file, for example). 
A: You can right-click on a grid of results in SQL Server and choose Save As CSV. You can then import this into Excel. Excel gives you an import wizard; ensure you select comma delimited. It worked fine for me when I needed to import 50k+ records into Excel. A: Check this. Query -> Query Options. Results -> Grid -> Include column headers when copying or saving the results
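If you want to script the copy-with-headers approach rather than doing it by hand, the cursor's `description` attribute gives you the column names. A hedged Python sketch - using the stdlib sqlite3 module purely as a stand-in for a SQL Server connection (a pyodbc connection exposes the same cursor protocol) - that dumps any query to CSV, which Excel opens directly:

```python
import csv
import io
import sqlite3

def query_to_csv(conn, sql, out):
    """Write a query's result set to `out` as CSV, column headers first."""
    cur = conn.execute(sql)
    writer = csv.writer(out)
    writer.writerow(col[0] for col in cur.description)  # header row
    writer.writerows(cur)                               # data rows

# Demo with an in-memory database standing in for SQL Server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, "acme"), (2, "bob")])
buf = io.StringIO()
query_to_csv(conn, "SELECT id, customer FROM orders ORDER BY id", buf)
```

For a real .xls/.xlsx you would still need a third-party library (the stdlib has no Excel writer), but a headered CSV reproduces what the copy/paste answers above give you.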
{ "language": "en", "url": "https://stackoverflow.com/questions/87735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: How do you determine what SQL Tables have an identity column programmatically I want to create a list, in T-SQL, of the identity columns in SQL Server 2005 and their corresponding tables. Results would be something like: TableName, ColumnName A: In SQL 2005: select object_name(object_id), name from sys.columns where is_identity = 1 A: sys.columns.is_identity = 1 e.g., select o.name, c.name from sys.objects o inner join sys.columns c on o.object_id = c.object_id where c.is_identity = 1 A: Another way (for 2000 / 2005/2012/2014): IF ((SELECT OBJECTPROPERTY( OBJECT_ID(N'table_name_here'), 'TableHasIdentity')) = 1) PRINT 'Yes' ELSE PRINT 'No' NOTE: table_name_here should be schema.table, unless the schema is dbo. A: This query seems to do the trick: SELECT sys.objects.name AS table_name, sys.columns.name AS column_name FROM sys.columns JOIN sys.objects ON sys.columns.object_id=sys.objects.object_id WHERE sys.columns.is_identity=1 AND sys.objects.type in (N'U') A: List of tables without Identity column based on Guillermo's answer: SELECT DISTINCT TABLE_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE (TABLE_SCHEMA = 'dbo') AND (OBJECTPROPERTY(OBJECT_ID(TABLE_NAME), 'TableHasIdentity') = 0) ORDER BY TABLE_NAME A: here's a working version for MSSQL 2000. I've modified the 2005 code found here: http://sqlfool.com/2011/01/identity-columns-are-you-nearing-the-limits/ /* Define how close we are to the value limit before we start throwing up the red flag. The higher the value, the closer to the limit. */ DECLARE @threshold DECIMAL(3,2); SET @threshold = .85; /* Create a temp table */ CREATE TABLE #identityStatus ( database_name VARCHAR(128) , table_name VARCHAR(128) , column_name VARCHAR(128) , data_type VARCHAR(128) , last_value BIGINT , max_value BIGINT ); DECLARE @dbname sysname; DECLARE @sql nvarchar(4000); -- Use a cursor to iterate through the databases since in 2000 there's no sp_MSForEachDB command... 
DECLARE c cursor FAST_FORWARD FOR SELECT name FROM master.dbo.sysdatabases WHERE name NOT IN('master', 'model', 'msdb', 'tempdb'); OPEN c; FETCH NEXT FROM c INTO @dbname; WHILE @@FETCH_STATUS = 0 BEGIN SET @sql = N'Use [' + @dbname + ']; Insert Into #identityStatus Select ''' + @dbname + ''' As [database_name] , Object_Name(id.id) As [table_name] , id.name As [column_name] , t.name As [data_type] , IDENT_CURRENT(Object_Name(id.id)) As [last_value] , Case When t.name = ''tinyint'' Then 255 When t.name = ''smallint'' Then 32767 When t.name = ''int'' Then 2147483647 When t.name = ''bigint'' Then 9223372036854775807 End As [max_value] From syscolumns As id Join systypes As t On id.xtype = t.xtype Where id.colstat&1 = 1 -- this identifies the identity columns (as far as I know) '; EXECUTE sp_executesql @sql; FETCH NEXT FROM c INTO @dbname; END CLOSE c; DEALLOCATE c; /* Retrieve our results and format it all prettily */ SELECT database_name , table_name , column_name , data_type , last_value , CASE WHEN last_value < 0 THEN 100 ELSE (1 - CAST(last_value AS FLOAT(4)) / max_value) * 100 END AS [percentLeft] , CASE WHEN CAST(last_value AS FLOAT(4)) / max_value >= @threshold THEN 'warning: approaching max limit' ELSE 'okay' END AS [id_status] FROM #identityStatus ORDER BY percentLeft; /* Clean up after ourselves */ DROP TABLE #identityStatus; A: The following query work for me: select TABLE_NAME tabla,COLUMN_NAME columna from INFORMATION_SCHEMA.COLUMNS where COLUMNPROPERTY(object_id(TABLE_SCHEMA+'.'+TABLE_NAME), COLUMN_NAME, 'IsIdentity') = 1 order by TABLE_NAME A: Another potential way to do this for SQL Server, which has less reliance on the system tables (which are subject to change, version to version) is to use the INFORMATION_SCHEMA views: select COLUMN_NAME, TABLE_NAME from INFORMATION_SCHEMA.COLUMNS where COLUMNPROPERTY(object_id(TABLE_SCHEMA+'.'+TABLE_NAME), COLUMN_NAME, 'IsIdentity') = 1 order by TABLE_NAME A: I think this works for SQL 2000: SELECT CASE WHEN 
C.autoval IS NOT NULL THEN 'Identity' ELSE 'Not Identity' END FROM sysobjects O INNER JOIN syscolumns C ON O.id = C.id WHERE O.NAME = @TableName AND C.NAME = @ColumnName A: This worked for me using SQL Server 2008: USE <database_name>; GO SELECT SCHEMA_NAME(schema_id) AS schema_name , t.name AS table_name , c.name AS column_name FROM sys.tables AS t JOIN sys.identity_columns c ON t.object_id = c.object_id ORDER BY schema_name, table_name; GO A: Use this: DECLARE @Table_Name VARCHAR(100) DECLARE @Column_Name VARCHAR(100) SET @Table_Name = '' SET @Column_Name = '' SELECT RowNumber = ROW_NUMBER() OVER ( PARTITION BY T.[Name] ORDER BY T.[Name], C.column_id ) , SCHEMA_NAME(T.schema_id) AS SchemaName , T.[Name] AS Table_Name , C.[Name] AS Field_Name , sysType.name , C.max_length , C.is_nullable , C.is_identity , C.scale , C.precision FROM Sys.Tables AS T LEFT JOIN Sys.Columns AS C ON ( T.[Object_Id] = C.[Object_Id] ) LEFT JOIN sys.types AS sysType ON ( C.user_type_id = sysType.user_type_id ) WHERE ( Type = 'U' ) AND ( C.Name LIKE '%' + @Column_Name + '%' ) AND ( T.Name LIKE '%' + @Table_Name + '%' ) ORDER BY T.[Name] , C.column_id A: This worked for SQL Server 2005, 2008, and 2012. I found that the sys.identity_columns did not contain all my tables with identity columns. SELECT a.name AS TableName, b.name AS IdentityColumn FROM sys.sysobjects a JOIN sys.syscolumns b ON a.id = b.id WHERE is_identity = 1 ORDER BY name; Looking at the documentation page, the status column can also be utilized. Also you can add the four-part identifier and it will work across different servers. 
SELECT a.name AS TableName, b.name AS IdentityColumn FROM [YOUR_SERVER_NAME].[YOUR_DB_NAME].sys.sysobjects a JOIN [YOUR_SERVER_NAME].[YOUR_DB_NAME].sys.syscolumns b ON a.id = b.id WHERE is_identity = 1 ORDER BY name; Source: https://msdn.microsoft.com/en-us/library/ms186816.aspx A: For some reason SQL Server reports some identity columns in different places, so the code that works for me is the following: select TABLE_NAME tabla,COLUMN_NAME columna from INFORMATION_SCHEMA.COLUMNS where COLUMNPROPERTY(object_id(TABLE_SCHEMA+'.'+TABLE_NAME), COLUMN_NAME, 'IsIdentity') = 1 union all select o.name tabla, c.name columna from sys.objects o inner join sys.columns c on o.object_id = c.object_id where c.is_identity = 1 A: Get all columns with Identity. Modern version for MSSQL 2017+. Scoped to a specific database: SELECT [COLUMN_NAME] , [TABLE_NAME] , [TABLE_CATALOG] FROM [INFORMATION_SCHEMA].[COLUMNS] WHERE COLUMNPROPERTY(OBJECT_ID(CONCAT_WS('.' ,[TABLE_CATALOG] ,[TABLE_SCHEMA] ,[TABLE_NAME])) ,[COLUMN_NAME] ,'IsIdentity') = 1 ORDER BY [TABLE_NAME]
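The same "ask the catalog" idea ports to other engines, though the catalog differs. As a hedged illustration only - SQLite has no identity columns, and its closest analogue is INTEGER PRIMARY KEY AUTOINCREMENT, recorded in each table's stored DDL - here is a stdlib Python sketch of the equivalent check:

```python
import sqlite3

def autoincrement_tables(conn):
    """Return tables whose stored DDL declares AUTOINCREMENT --
    SQLite's rough analogue of the TableHasIdentity property above."""
    rows = conn.execute(
        "SELECT name, sql FROM sqlite_master WHERE type = 'table'"
    )
    return sorted(name for name, sql in rows
                  if sql and "AUTOINCREMENT" in sql.upper())

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (id INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT)")
conn.execute("CREATE TABLE b (id INTEGER PRIMARY KEY, v TEXT)")
```

The point is the pattern, not the string match: every engine keeps this fact somewhere queryable, whether that's sys.columns.is_identity, COLUMNPROPERTY, or the raw DDL.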
{ "language": "en", "url": "https://stackoverflow.com/questions/87747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "112" }
Q: Resizing an Image without losing any quality How can I resize an image, with the image quality unaffected? A: I believe what you're looking to do is "Resize/Resample" your images. Here is a good site that gives instructions and provides a utility class (that I also happen to use): http://www.codeproject.com/KB/GDI-plus/imgresizoutperfgdiplus.aspx A: Unless you're doing vector graphics, there's no way to resize an image without potentially losing some image quality. A: You can't resize an image without losing some quality, simply because you are reducing the number of pixels. Don't reduce the size client-side, because browsers don't do a good job of resizing images. What you can do is programmatically change the size before you render it, or as a user uploads it. Here is an article that explains one way to do this in C#: http://www.codeproject.com/KB/GDI-plus/imageresize.aspx A: Unless you resize up, you cannot do this with raster graphics. What you can do with good filtering and smoothing is to resize without losing any noticeable quality. You can also alter the DPI metadata of the image (assuming it has some) which will keep exactly the same pixel count, but will alter how image editors think of it in 'real-world' measurements. And just to cover all bases, if you really meant just the file size of the image and not the actual image dimensions, I suggest you look at a lossless encoding of the image data. My suggestion for this would be to resave the image as a .png file (I tend to use Paint as a free transcoder for images in Windows. 
Load image in paint, save as in the new format) A: private static Image resizeImage(Image imgToResize, Size size) { int sourceWidth = imgToResize.Width; int sourceHeight = imgToResize.Height; float nPercent = 0; float nPercentW = 0; float nPercentH = 0; nPercentW = ((float)size.Width / (float)sourceWidth); nPercentH = ((float)size.Height / (float)sourceHeight); if (nPercentH < nPercentW) nPercent = nPercentH; else nPercent = nPercentW; int destWidth = (int)(sourceWidth * nPercent); int destHeight = (int)(sourceHeight * nPercent); Bitmap b = new Bitmap(destWidth, destHeight); Graphics g = Graphics.FromImage((Image)b); g.InterpolationMode = InterpolationMode.HighQualityBicubic; g.DrawImage(imgToResize, 0, 0, destWidth, destHeight); g.Dispose(); return (Image)b; } from here A: As rcar says, you can't without losing some quality, the best you can do in c# is: Bitmap newImage = new Bitmap(newWidth, newHeight); using (Graphics gr = Graphics.FromImage(newImage)) { gr.SmoothingMode = SmoothingMode.HighQuality; gr.InterpolationMode = InterpolationMode.HighQualityBicubic; gr.PixelOffsetMode = PixelOffsetMode.HighQuality; gr.DrawImage(srcImage, new Rectangle(0, 0, newWidth, newHeight)); } A: See if you like the image resizing quality of this open source ASP.NET module. There's a live demo, so you can mess around with it yourself. It yields results that are (to me) impossible to distinguish from Photoshop output. It also has similar file sizes - MS did a good job on their JPEG encoder. 
A: Here you can also find watermark code in this class: public class ImageProcessor { public Bitmap Resize(Bitmap image, int newWidth, int newHeight, string message) { try { Bitmap newImage = new Bitmap(newWidth, Calculations(image.Width, image.Height, newWidth)); using (Graphics gr = Graphics.FromImage(newImage)) { gr.SmoothingMode = SmoothingMode.AntiAlias; gr.InterpolationMode = InterpolationMode.HighQualityBicubic; gr.PixelOffsetMode = PixelOffsetMode.HighQuality; gr.DrawImage(image, new Rectangle(0, 0, newImage.Width, newImage.Height)); var myBrush = new SolidBrush(Color.FromArgb(70, 205, 205, 205)); double diagonal = Math.Sqrt(newImage.Width * newImage.Width + newImage.Height * newImage.Height); Rectangle containerBox = new Rectangle(); containerBox.X = (int)(diagonal / 10); float messageLength = (float)(diagonal / message.Length * 1); containerBox.Y = -(int)(messageLength / 1.6); Font stringFont = new Font("verdana", messageLength); StringFormat sf = new StringFormat(); float slope = (float)(Math.Atan2(newImage.Height, newImage.Width) * 180 / Math.PI); gr.RotateTransform(slope); gr.DrawString(message, stringFont, myBrush, containerBox, sf); return newImage; } } catch (Exception exc) { throw exc; } } public int Calculations(decimal w1, decimal h1, int newWidth) { decimal height = 0; decimal ratio = 0; if (newWidth < w1) { ratio = w1 / newWidth; height = h1 / ratio; return height.To<int>(); } if (w1 < newWidth) { ratio = newWidth / w1; height = h1 * ratio; return height.To<int>(); } return height.To<int>(); } } A: There is something out there called content-aware resizing; I don't know if you will be able to use it, but it's worth looking at, that's for sure. A nice video demo (Enlarging appears towards the middle) http://www.youtube.com/watch?v=vIFCV2spKtg Here there could be some code. http://www.semanticmetadata.net/2007/08/30/content-aware-image-resizing-gpl-implementation/ Was that overkill? 
Maybe there are some easy filters you can apply to an enlarged image to blur the pixels a bit, you could look into that. A: Are you resizing larger, or smaller? By a small % or by a larger factor like 2x, 3x? What do you mean by quality for your application? And what type of images - photographs, hard-edged line drawings, or what? Writing your own low-level pixel grinding code or trying to do it as much as possible with existing libraries (.net or whatever)? There is a large body of knowledge on this topic. The key concept is interpolation. Browsing recommendations: * http://www.all-in-one.ee/~dersch/interpolator/interpolator.html * http://www.cambridgeincolour.com/tutorials/image-interpolation.htm * for C#: https://secure.codeproject.com/KB/GDI-plus/imageprocessing4.aspx?display=PrintAll&fid=3657&df=90&mpp=25&noise=3&sort=Position&view=Quick&fr=26&select=629945 * this is java-specific but might be educational - http://today.java.net/pub/a/today/2007/04/03/perils-of-image-getscaledinstance.html A: Here is a forum thread that provides a C# image resizing code sample. You could use one of the GD library binders to do resampling in C#.
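Whatever filtering you settle on, the dimension arithmetic is the same everywhere: scale by the smaller of the two ratios so the result fits the target box without distorting the aspect ratio. A small Python restatement of the calculation the C# resizeImage answer above performs (the resampling itself would still need an imaging library such as GDI+ or Pillow - this is only the size math):

```python
def fit_within(src_w, src_h, box_w, box_h):
    """Largest size that fits inside (box_w, box_h) while preserving the
    source aspect ratio -- the same nPercent logic as the C# snippet."""
    if src_w <= 0 or src_h <= 0:
        raise ValueError("source dimensions must be positive")
    scale = min(box_w / src_w, box_h / src_h)  # smaller ratio wins
    return int(src_w * scale), int(src_h * scale)
```

For example, a 320x240 source fitted into a 512x384 box scales by 1.6 on both axes, which is exactly the upscale the question's ffmpeg-free C# path produces.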
{ "language": "en", "url": "https://stackoverflow.com/questions/87753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "112" }
Q: How to backup LIF formatted disk? I have several old 3.5in floppy disks that I would like to back up. My attempts to create an image of the disks have failed. I tried using the UNIX utility dd_rescue, but when the kernel tries to open (/dev/fd0) I get a kernel error, floppy0: probe failed... I would like an image because some of the floppies are using the LIF file system format. Does anyone have any ideas as to what I should do? HP (now Agilent) made some tools that could read and write files on a LIF-formatted disk. I could use these tools to copy and convert the files to the local disk but not without possibly losing some data in the process. In other words, converting from LIF to some other format and back to LIF will lose some information. I just want to back up the raw bytes on the disk and not be concerned with the type of file system. A: I think you'll find the best resource here. Also, if you're going to use raw dd, LIF format has 77 cylinders vs 80 for a normal floppy. A: dd if=/dev/floppy0 of=animage.bin conv=noerror
{ "language": "en", "url": "https://stackoverflow.com/questions/87758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: Why do some conversions from wmv to flv with ffmpeg fail? I've been smashing my head with this for a while. I have 2 completely identical .wmv files encoded with the wmv3 codec. I put them both through ffmpeg with the following command: /usr/bin/ffmpeg -i file.wmv -ar 44100 -ab 64k -qscale 9 -s 512x384 -f flv file.flv One file converts just fine, and gives me the following output: FFmpeg version SVN-r11070, Copyright (c) 2000-2007 Fabrice Bellard, et al. configuration: --prefix=/usr --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --extra-cflags=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic --enable-liba52 --enable-libfaac --enable-libfaad --enable-libgsm --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libxvid --enable-libx264 --enable-pp --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-optimizations --disable-strip libavutil version: 49.5.0 libavcodec version: 51.48.0 libavformat version: 51.19.0 built on Jun 25 2008 09:17:38, gcc: 4.1.2 20070925 (Red Hat 4.1.2-33) Seems stream 1 codec frame rate differs from container frame rate: 1000.00 (1000/1) -> 29.97 (30000/1001) Input #0, asf, from 'ok.wmv': Duration: 00:14:22.3, start: 3.000000, bitrate: 467 kb/s Stream #0.0: Audio: wmav2, 44100 Hz, stereo, 64 kb/s Stream #0.1: Video: wmv3, yuv420p, 320x240 [PAR 0:1 DAR 0:1], 400 kb/s, 29.97 tb(r) Output #0, flv, to 'ok.flv': Stream #0.0: Video: flv, yuv420p, 512x384 [PAR 0:1 DAR 0:1], q=2-31, 200 kb/s, 29.97 tb(c) Stream #0.1: Audio: libmp3lame, 44100 Hz, stereo, 64 kb/s Stream mapping: Stream #0.1 -> #0.0 Stream #0.0 -> #0.1 Press [q] to stop encoding frame=25846 fps=132 q=9.0 Lsize= 88486kB time=862.4 bitrate= 840.5kbits/s video:80827kB audio:6738kB global headers:0kB muxing overhead 1.050642% While the other file fails: FFmpeg version SVN-r11070, Copyright (c) 2000-2007 Fabrice 
Bellard, et al. configuration: --prefix=/usr --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --extra-cflags=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic --enable-liba52 --enable-libfaac --enable-libfaad --enable-libgsm --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libxvid --enable-libx264 --enable-pp --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-optimizations --disable-strip libavutil version: 49.5.0 libavcodec version: 51.48.0 libavformat version: 51.19.0 built on Jun 25 2008 09:17:38, gcc: 4.1.2 20070925 (Red Hat 4.1.2-33) [wmv3 @ 0x3700940d20]Extra data: 8 bits left, value: 0 Seems stream 1 codec frame rate differs from container frame rate: 1000.00 (1000/1) -> 25.00 (25/1) Input #0, asf, from 'bad3.wmv': Duration: 00:06:34.9, start: 4.000000, bitrate: 1666 kb/s Stream #0.0: Audio: 0x0162, 48000 Hz, stereo, 256 kb/s Stream #0.1: Video: wmv3, yuv420p, 512x384 [PAR 0:1 DAR 0:1], 1395 kb/s, 25.00 tb(r) File 'ok.flv' already exists. Overwrite ? [y/N] y Output #0, flv, to 'ok.flv': Stream #0.0: Video: flv, yuv420p, 512x384 [PAR 0:1 DAR 0:1], q=2-31, 200 kb/s, 25.00 tb(c) Stream #0.1: Audio: libmp3lame, 48000 Hz, stereo, 64 kb/s Stream mapping: Stream #0.1 -> #0.0 Stream #0.0 -> #0.1 Unsupported codec (id=0) for input stream #0.0 The only difference I see is with the input audio codec Working: Stream #0.0: Audio: wmav2, 44100 Hz, stereo, 64 kb/s Not working: Stream #0.0: Audio: 0x0162, 48000 Hz, stereo, 64 kb/s Any ideas? A: It is in fact the audio format that causes the trouble. Audio formats are identified by their TwoCC (0x0162 here). You can look up the different TwoCCs here: http://wiki.multimedia.cx/index.php?title=TwoCC and you'll find: 0x0162 Windows Media Audio Professional V9 This codec isn't supported yet by ffmpeg and mencoder as far as I know. 
You can search Google for "ffmpeg audio 0x0162" and check for yourself. A: Well the obvious answer is that the audio is encoded differently in the second wmv file, so they are not completely identical. You could try forcing it to use a specific audio codec for the 'bad' wmv, and see if that works. Perhaps it's just having trouble picking the right codec? However it seems more likely that the 'bad' wmv has some sort of audio codec that's not supported by ffmpeg. Also try the usual stuff: make sure you've upgraded to the latest version, check out any development versions that may contain bugfixes, etc. A: Or, alternatively, use mencoder
{ "language": "en", "url": "https://stackoverflow.com/questions/87760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to connect to Oracle database? How do you connect to Oracle using PHP on MAC OS X? A: I would think OCI would be the way to go. PHP has a module for it. A: The PDO abstraction layer can be used to connect to, and perform actions on, an Oracle DB. Here's an article on how to use PDO with Oracle from the Oracle website. It's also possible to use OCI. The Oracle PHP Development Centre will have lots more useful information on using Oracle and PHP together. A: For instantclient on osx 10.6 64bit do the following: download the instant client librarys and sdk, stuff it all in a folder. Make sure you get the 64 bit library if you are on a 64 bit machine, 32bit won't work! - test with sqlplus first create this if it does not exist sudo vi /etc/launchd.conf and add to following to the file(with your own path!) setenv DYLD_LIBRARY_PATH /usr/oracle_instantClient64 You probaby need to restart your system at this point for launchd to pass the path to apache to pick up the path, or see if restarting launchd works, though i have a feeling that will restart your system anyway! You should add "extension=oci8.so" to php.ini sudo vi /etc/php.ini if that file does not exist copy php.ini.default sudo cp /etc/php.ini.default /etc/php.ini then add the above extension, there is a section with lots of extensions further down the file, put it there somewhere oci requires a library symlink so do sudo ln -s $DYLD_LIBRARY_PATH/libclntsh.dylib.10.1 $DYLD_LIBRARY_PATH/libclntsh.dylib Also theres some wierd hardcoded library link in the oracle binaries so fix that mkdir -p /b/227/rdbms/ Its only looking for the oracle libraries so link it back ln -s /usr/oracle_instantClient64/ /b/227/rdbms/lib now install oci8 from pear repository. If you have installed snow leopard osx 10.6 without upgrading you may have problems with pear and pecl. If so you will need to install pear first. 
see: https://discussions.apple.com/thread/2602597?start=0&tstart=0 sudo pecl install oci8 HINT: don't use autodetect; specify the instantclient path when it asks you: instantclient,/usr/oracle_instantClient64 restart Apache sudo apachectl graceful test by navigating to the URL in a browser, or you can call the file directly on the command line php index.php that's it use the following as a test file: <?php $dbHost = "localhostOrDatabaseURL"; $dbHostPort="1521"; $dbServiceName = "servicename"; $usr = "username"; $pswd = "password"; $dbConnStr = "(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP) (HOST=".$dbHost.")(PORT=".$dbHostPort.")) (CONNECT_DATA=(SERVICE_NAME=".$dbServiceName.")))"; if(!$dbConn = oci_connect($usr,$pswd,$dbConnStr)){ $err = oci_error(); trigger_error('Could not establish a connection: ' . $err['message'], E_USER_ERROR); }; $strSQL = "SELECT SYSDATE FROM DUAL"; $stmt = oci_parse($dbConn,$strSQL); if ( ! oci_execute($stmt) ){ $err = oci_error($stmt); trigger_error('Query failed: ' . $err['message'], E_USER_ERROR); }; while(oci_fetch($stmt)){ $rslt = oci_result($stmt, 1); print "<h3>query returned: ".$rslt."</h3>"; } ?> A: I don't know the Mac specifically, nor PHP, but you usually need to install the Oracle Client tools (Instant Client). http://www.oracle.com/technology/tech/oci/instantclient/index.html Once installed you modify the TNSNAMES.ORA file to point to the server and instance name of the Oracle database. Then you can use the PHP "database connection" stuff (sorry) to create a connection and run your SQL statements. Use the SQL*PLUS client to check the connection works: ie. c:> SQLPLUS CONNECT scott/tiger@mydatabase If the TNSNAMES.ORA is correct you should get a connection, or at least "username/password incorrect", which proves you got communication with the Oracle instance. If you get TNS-12521 (?) errors then your TNSNAMES.ORA is incorrect. 
A: Connecting to an Oracle database should be no problem with the OCI interface, using "oci_connect()" for example. Further examples are here: http://php.net/manual/en/oci8.setup.php But I do not understand what the remark MAC OS X means - are you running Apache locally? Hope this helps, Bastian
{ "language": "en", "url": "https://stackoverflow.com/questions/87769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How can I multiply two 64-bit numbers using x86 assembly language? How would I go about...

* multiplying two 64-bit numbers
* multiplying two 16-digit hexadecimal numbers

...using assembly language? I'm only allowed to use registers %eax, %ebx, %ecx, %edx, and the stack.

EDIT: Oh, I'm using AT&T syntax on the x86.
EDIT2: Not allowed to decompile into assembly...

A: If this were x86-64:

function(x, y, *lower, *higher)
movq %rx, %rax    # Store x into %rax
mulq %y           # Multiplies %y with %rax
                  # mulq stores the high and low halves into rax and rdx.
movq %rax, (%r8)  # Move the low half into &lower
movq %rdx, (%r9)  # Move the high half into &higher

A: Since you're on x86 you need 4 mull instructions. Split the 64-bit quantities into two 32-bit words and multiply the low words into the lowest and 2nd-lowest words of the result; then multiply both pairs of low and high words from the different numbers (they go to the 2nd and 3rd lowest words of the result) and finally both high words into the two highest words of the result. Add them all together, not forgetting to deal with carry. You didn't specify the memory layout of the inputs and outputs, so it's impossible to write sample code.

A: This code assumes you want x86 (not x64) code, that you probably only want a 64-bit product, and that you don't care about overflow or signed numbers. (A signed version is similar.)

MUL64_MEMORY:
    mov edi, val1high
    mov esi, val1low
    mov ecx, val2high
    mov ebx, val2low
MUL64_EDIESI_ECXEBX:
    mov eax, edi
    mul ebx
    xchg eax, ebx  ; partial product top 32 bits
    mul esi
    xchg esi, eax  ; partial product lower 32 bits
    add ebx, edx
    mul ecx
    add ebx, eax   ; final upper 32 bits
    ; answer here in EBX:ESI

This doesn't honor the exact register constraints of the OP, but the result fits entirely in the registers offered by the x86. (This code is untested, but I think it's right.)

[Note: I transferred this answer (mine) from another question that got closed, because NONE of the other "answers" here directly answered the question].
A: Use what should probably be your course textbook, Randall Hyde's "The Art of Assembly Language". See 4.2.4 - Extended Precision Multiplication Although an 8x8, 16x16, or 32x32 multiply is usually sufficient, there are times when you may want to multiply larger values together. You will use the x86 single operand MUL and IMUL instructions for extended precision multiplication .. Probably the most important thing to remember when performing an extended precision multiplication is that you must also perform a multiple precision addition at the same time. Adding up all the partial products requires several additions that will produce the result. The following listing demonstrates the proper way to multiply two 64 bit values on a 32 bit processor .. (See the link for full assembly listing and illustrations.) A: It depends what language you are using. From what I remember from learning MIPS assembly, there is a Move From High command and a Move From Lo command, or mflo and mfhi. mfhi stores the top 64bits while mflo stores the lower 64bits of the total number. A: ah assembly, been awhile since i've used it. so i'm assuming the real problem here is that the microcontroller (what i used to write code for in assembly anyways) you're working on doesn't have 64 bit registers? if that's the case, you're going to have the break the numbers you're working with apart and perform multiple multiplications with the pieces. this sounds like it's a homework assignment from the way you've worded it, so i'm not gonna spell it out much further :P A: Just do normal long multiplication, as if you were multiplying a pair of 2-digit numbers, except each "digit" is really a 32-bit integer. 
If you're multiplying two numbers at addresses X and Y and storing the result in Z, then what you want to do (in pseudocode) is:

Z[0..7] = X[0..3] * Y[0..3]
Z[4..7] += X[0..3] * Y[4..7] + X[4..7] * Y[0..3]

(The low product fills all eight bytes of Z; the two cross products contribute only to the upper four.) Note that we're discarding the upper 64 bits of the full result (since a 64-bit number times a 64-bit number is a 128-bit number). Also note that this assumes little-endian. Also, be careful about a signed versus an unsigned multiply.

A: Find a C compiler that supports 64-bit (GCC does, IIRC), compile a program that does just that, then get the disassembly. GCC can spit it out on its own, and you can get it out of the object file with the right tools.

OTOH there is a 32b x 32b = 64b op on x86:

a:b * c:d = e:f
// goes to
e:f = b*d;
x:y = a*d;
e += x;
x:y = b*c;
e += x;
// everything else overflows

(untested) Edit: Unsigned only

A: If you want 128 mode try this...

__uint128_t AES::XMULTX(__uint128_t TA, __uint128_t TB)
{
    union {
        __uint128_t WHOLE;
        struct {
            unsigned long long int LWORDS[2];
        } SPLIT;
    } KEY;
    register unsigned long long int __XRBX, __XRCX, __XRSI, __XRDI;
    __uint128_t RESULT;

    KEY.WHOLE = TA;
    __XRSI = KEY.SPLIT.LWORDS[0];
    __XRDI = KEY.SPLIT.LWORDS[1];
    KEY.WHOLE = TB;
    __XRBX = KEY.SPLIT.LWORDS[0];
    __XRCX = KEY.SPLIT.LWORDS[1];

    __asm__ __volatile__(
        "movq %2, %%rsi \n\t"
        "movq %3, %%rdi \n\t"
        "movq %4, %%rbx \n\t"
        "movq %5, %%rcx \n\t"
        "movq %%rdi, %%rax \n\t"
        "mulq %%rbx \n\t"
        "xchgq %%rbx, %%rax \n\t"
        "mulq %%rsi \n\t"
        "xchgq %%rax, %%rsi \n\t"
        "addq %%rdx, %%rbx \n\t"
        "mulq %%rcx \n\t"
        "addq %%rax, %%rbx \n\t"
        "movq %%rsi, %0 \n\t"
        "movq %%rbx, %1 \n\t"
        : "=m" (__XRSI), "=m" (__XRBX)
        : "m" (__XRSI), "m" (__XRDI), "m" (__XRBX), "m" (__XRCX)
        : "rax", "rbx", "rcx", "rdx", "rsi", "rdi"
    );

    KEY.SPLIT.LWORDS[0] = __XRSI;
    KEY.SPLIT.LWORDS[1] = __XRBX;
    RESULT = KEY.WHOLE;
    return RESULT;
}

A: If you want 128-bit multiplication then this should work; this is in AT&T format.
__uint128_t FASTMUL128(const __uint128_t TA, const __uint128_t TB)
{
    union {
        __uint128_t WHOLE;
        struct {
            unsigned long long int LWORDS[2];
        } SPLIT;
    } KEY;
    register unsigned long long int __RAX, __RDX, __RSI, __RDI;
    __uint128_t RESULT;

    KEY.WHOLE = TA;
    __RAX = KEY.SPLIT.LWORDS[0];
    __RDX = KEY.SPLIT.LWORDS[1];
    KEY.WHOLE = TB;
    __RSI = KEY.SPLIT.LWORDS[0];
    __RDI = KEY.SPLIT.LWORDS[1];

    __asm__ __volatile__(
        "movq %0, %%rax \n\t"
        "movq %1, %%rdx \n\t"
        "movq %2, %%rsi \n\t"
        "movq %3, %%rdi \n\t"
        "movq %%rsi, %%rbx \n\t"
        "movq %%rdi, %%rcx \n\t"
        "movq %%rax, %%rsi \n\t"
        "movq %%rdx, %%rdi \n\t"
        "xorq %%rax, %%rax \n\t"
        "xorq %%rdx, %%rdx \n\t"
        "movq %%rdi, %%rax \n\t"
        "mulq %%rbx \n\t"
        "xchgq %%rbx, %%rax \n\t"
        "mulq %%rsi \n\t"
        "xchgq %%rax, %%rsi \n\t"
        "addq %%rdx, %%rbx \n\t"
        "mulq %%rcx \n\t"
        "addq %%rax, %%rbx \n\t"
        "movq %%rsi, %%rax \n\t"
        "movq %%rbx, %%rdx \n\t"
        "movq %%rax, %0 \n\t"
        "movq %%rdx, %1 \n\t"
        "movq %%rsi, %2 \n\t"
        "movq %%rdi, %3 \n\t"
        : "=m"(__RAX), "=m"(__RDX), "=m"(__RSI), "=m"(__RDI)
        : "m"(__RAX), "m"(__RDX), "m"(__RSI), "m"(__RDI)
        : "rax", "rbx", "rcx", "rdx", "rsi", "rdi"
    );

    KEY.SPLIT.LWORDS[0] = __RAX;
    KEY.SPLIT.LWORDS[1] = __RDX;
    RESULT = KEY.WHOLE;
    return RESULT;
}

A: I'm betting you're a student, so see if you can make this work: Do it word by word, and use bit shifts. Think up the most efficient solution. Beware of the sign bit.
{ "language": "en", "url": "https://stackoverflow.com/questions/87771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: C++ unit testing framework I use the Boost Test framework for my C++ code but there are two problems with it that are probably common to all C++ test frameworks: * *There is no way to create automatic test stubs (by extracting public functions from selected classes for example). *You cannot run a single test - you have to run the entire 'suite' of tests (unless you create lots of different test projects I guess). Does anyone know of a better testing framework or am I forever to be jealous of the test tools available to Java/.NET developers? A: I'm a big fan of UnitTest++, it's very lightweight, but does the job. You can run single tests there easily. A: I've just pushed my own framework, CATCH, out there. It's still under development but I believe it already surpasses most other frameworks. Different people have different criteria but I've tried to cover most ground without too many trade-offs. Take a look at my linked blog entry for a taster. My top five features are: * *Header only *Auto registration of function and method based tests *Decomposes standard C++ expressions into LHS and RHS (so you don't need a whole family of assert macros). *Support for nested sections within a function based fixture *Name tests using natural language - function/ method names are generated It doesn't do generation of stubs - but that's a fairly specialised area. I think Isolator++ is the first tool to truly pull that off. Note that Mocking/ stubbing frameworks are usually independent of unit testing frameworks. CATCH works particularly well with mock objects as test state is not passed around by context. It also has Objective-C bindings. [update] Just happened back across this answer of mine from a few years ago. Thanks for all the great comments! Obviously Catch has developed on a lot in that time. It now has support for BDD style testing (given/ when/ then), tags, now in a single header, and loads of internal improvements and refinements (e.g. 
richer command line, clear and expressive output etc). A more up-to-date blog post is here. A: Great question! A few years ago I looked around forever for something worth using and came up short. I was looking for something that was very lightweight and did not require me to link in some libraries... you know something I could get up and running in minutes. However, I persisted and ended up running across cxxtest. From the website: * *Doesn't require RTTI *Doesn't require member template functions *Doesn't require exception handling *Doesn't require any external libraries (including memory management, file/console I/O, graphics libraries) *Is distributed entirely as a set of header files (and a python script). Wow... super simple! Include a header file, derive from the Test class and you're off and running. We've been using this for the past four years and I've still yet to find anything that I'm more pleased with. A: Try WinUnit. It sounds excellent, and is recommended by John Robbins. A: I like the Boost unit test framework, principally because it is very lightweight. * *I never heard of a unit-test framework that would generate stubs. I am generally quite unconvinced by code generation, if only because it gets obsolete very quickly. Maybe it becomes useful when you have a large number of classes? *A proponent of Test Driven Development would probably say that it is fundamental that you run the whole test suite every time, as to make sure that you have not introduced a regression. If running all the tests take too much time, maybe your tests are too big, or make too many calls to CPU intensive functions that should be mocked out? If it remains a problem, a thin wrapper around the boost unit-tests should allow you to pick your tests, and would probably be quicker than learn another framework and port all your tests. A: Take a look at the Google C++ Testing Framework. It's used by Google for all of their in-house C++ projects, so it must be pretty good. 
http://googletesting.blogspot.com/2008/07/announcing-new-google-c-testing.html
http://code.google.com/p/googletest

A: Boost.Test does allow you to run a test case by name. Or a test suite. Or several of them. Boost.Test does NOT insist on implementing main, though it does make it easy to do so. Boost.Test does NOT need to be used as a library; it has a single-header variant.

A: I just responded to a very similar question. I ended up using Noel Llopis' UnitTest++. I liked it more than boost::test because it didn't insist on implementing the main program of the test harness with a macro - it can plug into whatever executable you create. It does suffer from the same encumbrance as boost::test in that it requires a library to be linked in. I've used CxxTest, and it does come closer than anything else in C++-land to automatically generating tests (though it requires Perl to be part of your build system to do this). C++ just does not provide the reflection hooks that the .NET languages and Java do. The MsTest tools in Visual Studio Team System - Developer's Edition will auto-generate test stubs of unmanaged C++, but the methods have to be exported from a DLL to do this, so it does not work with static libraries. Other test frameworks in the .NET world may have this ability too, but I'm not familiar with any of those. So right now we use UnitTest++ for unmanaged C++ and I'm currently deciding between MsTest and NUnit for the managed libraries.

A: http://groups.google.com/group/googletestframework, but it's pretty new

A: I'm using tut-framework

A: Aeryn is another framework worth looking at

A: Visual Studio has a built-in unit testing framework. This is a great link for setting up a test project for a Win32 console application: http://codeketchup.blogspot.ie/2012/12/unit-test-for-unmanaged-c-in-visual.html If you are working on a static library project it is much easier to set up. As others have pointed out, external testing frameworks like GTest and Boost have extra features.
A: CppUnit was the original homage to JUnit.

A: Eclipse/JUnit is a solid package for Java, and there are C++ extensions/equivalents for both. It can do what you're talking about. Of course, you'd have to change IDEs...

A: I too am a fan of UnitTest++. The snag is that the source distribution contains almost 40 separate files. This is absurd. Managing the source control and build tasks for a simple project is dominated by looking after all these unit testing files. I have modified UnitTest++ so that it can be integrated with a project by adding one .h and one .cpp file. This I have dubbed "cutest". Details are at http://ravenspoint.com/blog/index.php?entry=entry080704-063557 It does not automatically generate test stubs, as asked for in the original question. I cannot help thinking that such a feature would be more trouble than it is worth, generating vast amounts of useless code "testing" accessor functions.

A: I would imagine automatically stubbing out test functions would be more a function of scripts for the framework (or of the development environment in question). Supposedly CodeGear's C++Builder applications will quickly generate test code for user functions.

A: Andrew Marlow's Fructose library is worth checking out: http://fructose.sourceforge.net/ I recall his documents containing a fairly detailed analysis and comparison of other offerings at the time he wrote Fructose, but I can't find a URL direct to that document.

A: I'm trying out Igloo, also a header-only C++ test suite; even its two included dependencies are header-only. So it's pretty straightforward and simple. Besides the included example on GitHub, there are examples and more details at the main site, igloo-testing.org. I'll update this later as I get more experience with it and other frameworks.
{ "language": "en", "url": "https://stackoverflow.com/questions/87794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "61" }
Q: How to prevent flickering in ListView when updating a single ListViewItem's text? All I want is to update a ListViewItem's text without seeing any flickering. This is my code for updating (called several times):

listView.BeginUpdate();
listViewItem.SubItems[0].Text = state.ToString();    // update the state
listViewItem.SubItems[1].Text = progress.ToString(); // update the progress
listView.EndUpdate();

I've seen some solutions that involve overriding the component's WndProc():

protected override void WndProc(ref Message m)
{
    if (m.Msg == (int)WM.WM_ERASEBKGND)
    {
        m.Msg = (int)IntPtr.Zero;
    }
    base.WndProc(ref m);
}

They say it solves the problem, but in my case it didn't. I believe this is because I'm using icons on every item.

A: The accepted answer works, but is quite lengthy, and deriving from the control (as mentioned in the other answers) just to enable double buffering is also a bit overdone. But fortunately we have reflection, and we can also call internal methods if we like to (but be sure you know what you are doing!). By encapsulating this approach in an extension method, we get a quite short class:

public static class ControlExtensions
{
    public static void DoubleBuffering(this Control control, bool enable)
    {
        var method = typeof(Control).GetMethod("SetStyle", BindingFlags.Instance | BindingFlags.NonPublic);
        method.Invoke(control, new object[] { ControlStyles.OptimizedDoubleBuffer, enable });
    }
}

Which can easily be called within our code:

InitializeComponent();
myListView.DoubleBuffering(true); // after the InitializeComponent();

And all flickering is gone.
Update I stumbled on this question and due to this fact, the extension method should (maybe) better be: public static void DoubleBuffered(this Control control, bool enable) { var doubleBufferPropertyInfo = control.GetType().GetProperty("DoubleBuffered", BindingFlags.Instance | BindingFlags.NonPublic); doubleBufferPropertyInfo.SetValue(control, enable, null); } A: To end this question, here is a helper class that should be called when the form is loading for each ListView or any other ListView's derived control in your form. Thanks to "Brian Gillespie" for giving the solution. public enum ListViewExtendedStyles { /// <summary> /// LVS_EX_GRIDLINES /// </summary> GridLines = 0x00000001, /// <summary> /// LVS_EX_SUBITEMIMAGES /// </summary> SubItemImages = 0x00000002, /// <summary> /// LVS_EX_CHECKBOXES /// </summary> CheckBoxes = 0x00000004, /// <summary> /// LVS_EX_TRACKSELECT /// </summary> TrackSelect = 0x00000008, /// <summary> /// LVS_EX_HEADERDRAGDROP /// </summary> HeaderDragDrop = 0x00000010, /// <summary> /// LVS_EX_FULLROWSELECT /// </summary> FullRowSelect = 0x00000020, /// <summary> /// LVS_EX_ONECLICKACTIVATE /// </summary> OneClickActivate = 0x00000040, /// <summary> /// LVS_EX_TWOCLICKACTIVATE /// </summary> TwoClickActivate = 0x00000080, /// <summary> /// LVS_EX_FLATSB /// </summary> FlatsB = 0x00000100, /// <summary> /// LVS_EX_REGIONAL /// </summary> Regional = 0x00000200, /// <summary> /// LVS_EX_INFOTIP /// </summary> InfoTip = 0x00000400, /// <summary> /// LVS_EX_UNDERLINEHOT /// </summary> UnderlineHot = 0x00000800, /// <summary> /// LVS_EX_UNDERLINECOLD /// </summary> UnderlineCold = 0x00001000, /// <summary> /// LVS_EX_MULTIWORKAREAS /// </summary> MultilWorkAreas = 0x00002000, /// <summary> /// LVS_EX_LABELTIP /// </summary> LabelTip = 0x00004000, /// <summary> /// LVS_EX_BORDERSELECT /// </summary> BorderSelect = 0x00008000, /// <summary> /// LVS_EX_DOUBLEBUFFER /// </summary> DoubleBuffer = 0x00010000, /// <summary> /// LVS_EX_HIDELABELS 
/// </summary> HideLabels = 0x00020000, /// <summary> /// LVS_EX_SINGLEROW /// </summary> SingleRow = 0x00040000, /// <summary> /// LVS_EX_SNAPTOGRID /// </summary> SnapToGrid = 0x00080000, /// <summary> /// LVS_EX_SIMPLESELECT /// </summary> SimpleSelect = 0x00100000 } public enum ListViewMessages { First = 0x1000, SetExtendedStyle = (First + 54), GetExtendedStyle = (First + 55), } /// <summary> /// Contains helper methods to change extended styles on ListView, including enabling double buffering. /// Based on Giovanni Montrone's article on <see cref="http://www.codeproject.com/KB/list/listviewxp.aspx"/> /// </summary> public class ListViewHelper { private ListViewHelper() { } [DllImport("user32.dll", CharSet = CharSet.Auto)] private static extern int SendMessage(IntPtr handle, int messg, int wparam, int lparam); public static void SetExtendedStyle(Control control, ListViewExtendedStyles exStyle) { ListViewExtendedStyles styles; styles = (ListViewExtendedStyles)SendMessage(control.Handle, (int)ListViewMessages.GetExtendedStyle, 0, 0); styles |= exStyle; SendMessage(control.Handle, (int)ListViewMessages.SetExtendedStyle, 0, (int)styles); } public static void EnableDoubleBuffer(Control control) { ListViewExtendedStyles styles; // read current style styles = (ListViewExtendedStyles)SendMessage(control.Handle, (int)ListViewMessages.GetExtendedStyle, 0, 0); // enable double buffer and border select styles |= ListViewExtendedStyles.DoubleBuffer | ListViewExtendedStyles.BorderSelect; // write new style SendMessage(control.Handle, (int)ListViewMessages.SetExtendedStyle, 0, (int)styles); } public static void DisableDoubleBuffer(Control control) { ListViewExtendedStyles styles; // read current style styles = (ListViewExtendedStyles)SendMessage(control.Handle, (int)ListViewMessages.GetExtendedStyle, 0, 0); // disable double buffer and border select styles -= styles & ListViewExtendedStyles.DoubleBuffer; styles -= styles & ListViewExtendedStyles.BorderSelect; // write new 
style SendMessage(control.Handle, (int)ListViewMessages.SetExtendedStyle, 0, (int)styles); } } A: I know this question is quite old, but because this is one of the first search results on Google I wanted to share my fix. The only way i could remove flickering 100% was to combine the answer from Oliver (extension class with double-buffering) and using the BeignUpdate() and EndUpdate() methods. Neither of those on their own could fix flickering for me. Granted, I use a very complex list, that I need to push into the list and also need to update it almost every sec. A: The ListView in CommonControls 6 (XP or newer) supports double buffering. Fortunately, .NET wraps the newest CommonControls on the system. To enable double buffering, send the appropriate Windows message to the ListView control. Here are the details: http://www.codeproject.com/KB/list/listviewxp.aspx A: In .NET Winforms 2.0 there exist a protected property called DoubleBuffered. By inheriting from ListView, then one can set this protected property to true. This will enable double buffering without needing to call SendMessage. Setting the DoubleBuffered property is the same as setting the following style: listview.SetStyle(ControlStyles.OptimizedDoubleBuffer | ControlStyles.AllPaintingInWmPaint, true); http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=94096 A: this will help: class DoubleBufferedListView : System.Windows.Forms.ListView { public DoubleBufferedListView() :base() { this.DoubleBuffered = true; } } A: If you only want to update the text, simply set the changed SubItem's text directly rather than updating the entire ListViewItem (you've not said how you're doing your updates). The override you show is equivalent to simply overriding OnPaintBackground, which would be a "more correct" managed way to do that task, and it's not going to help for a single item. If you still have problems, we'll need clarification on what you've actually tried. 
A: This is a shot in the dark, but you could try double buffering the control:

SetStyle(ControlStyles.AllPaintingInWmPaint | ControlStyles.UserPaint | ControlStyles.DoubleBuffer, true);

A: Call the BeginUpdate() method on the ListView before setting any of the list view items, and then only call EndUpdate() after all of the items have been added. That will stop the flicker.

A: A simple solution is this:

yourlistview.BeginUpdate()
// Do your updates, adding and removing items from the list
yourlistview.EndUpdate()
{ "language": "en", "url": "https://stackoverflow.com/questions/87795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51" }
Q: When do transactions become more of a burden than a benefit? Transactional programming is, in this day and age, a staple in modern development. Concurrency and fault-tolerance are critical to an application's longevity and, rightly so, transactional logic has become easy to implement. As applications grow, though, it seems that transactional code tends to become more and more burdensome on the scalability of the application, and when you bridge into distributed transactions and mirrored data sets the issues start to become very complicated.

I'm curious what seems to be the point, in data size or application complexity, where transactions frequently start becoming the source of issues (causing timeouts, deadlocks, performance issues in mission-critical code, etc.) which are more bothersome to fix, troubleshoot or work around than designing a data model that is more fault-tolerant in itself, or using other means to ensure data integrity. Also, what design patterns serve to minimize these impacts or make standard transactional logic obsolete or a non-issue?

-- EDIT: We've got some answers of reasonable quality so far, but I think I'll post an answer myself to bring up some of the things I've heard about to try to inspire some additional creativity; most of the responses I'm getting are pessimistic views of the problem. Another important note is that not all deadlocks are a result of poorly coded procedures; sometimes there are mission-critical operations that depend on similar resources in different orders, or complex joins in different queries that step on each other; this is an issue that can sometimes seem unavoidable, but I've been a part of reworking workflows to facilitate an execution order that is less likely to cause one.

A: I think no design pattern can solve this issue in itself. Good database design, good stored procedure programming and especially learning how to keep your transactions short will ease most of the problems.
There is no 100% guaranteed method of not having problems though. In basically every case I've seen in my career though, deadlocks and slowdowns were solved by fixing the stored procedures: * *making sure all tables are accessed in order prevents deadlocks *fixing indexes and statistics makes everything faster (hence diminishes the chance of deadlock) *sometimes there was no real need of transactions, it just "looked" like it *sometimes transactions could be eliminated by making multiple statement stored procedures in single statement ones. A: The use of shared resources is wrong in the long run. Because by reusing an existing environment you are creating more and more possibilities. Just review the busy beavers :) The way Erlang goes is the right way to produce fault-tolerant and easily verifiable systems. But transactional memory is essential for many applications in widespread use. If you consult a bank with its millions of customers for example you can't just copy the data for the sake of efficiency. I think monads are a cool concept to handle the difficult concept of changing state. A: If you are talking 'cloud computing' here, the answer would be to localize each transaction to the place where it happens in the cloud. There is no need for the entire cloud to be consistent, as that would kill performance (as you noted). Simply, keep track of what is changed and where and handle multiple small transactions as changes propagate through the system. The situation where user A updates record R and user B at the other end of cloud does not see it (yet) is the same as the one when user A didn't do the change yet in the current strict-transactional environment. This could lead to a discrepancy in an update-heavy system, so systems should be architectured to work with updates as less as possible - moving things to aggregation of data and pulling out the aggregates once the exact figure is critical (i.e. 
moving the requirement for consistency from write-time to critical-read-time). Well, just my POV. It's hard to conceive of a system that is application-agnostic in this case.

A: Try to make changes at the database level in the least number of possible instructions. The general rule is to lock a resource for the least possible time. Using T-SQL, PL/SQL, Java on Oracle or any similar approach, you can reduce the time that each transaction locks a shared resource. In fact, transactions in the database are optimized with row-level locks, multi-versioning, and other kinds of intelligent techniques. If you can make the transaction at the database you save the network latency, apart from other layers like ODBC/JDBC/OLEDB.

Sometimes the programmer tries to obtain the good things of a database (it is transactional, parallel, distributed) but keeps a cache of the data. Then they need to manually add some of the database features.

A: One approach I've heard of is to make a versioned insert-only model where no updates ever occur. During selects the version is used to select only the latest rows. One downside I know of with this approach is that the database can get rather large very quickly.

I also know that some solutions, such as FogBugz, don't use enforced foreign keys, which I believe would also help mitigate some of these problems, because the SQL query plan can lock linked tables during selects or updates even if no data is changing in them, and if it's a highly contended table that gets locked it can increase the chance of deadlock or timeout.

I don't know much about these approaches though, since I've never used them, so I assume there are pros and cons to each that I'm not aware of, as well as some other techniques I've never heard about.

I've also been looking into some of the material from Carlo Pescio's recent post, which I've not had enough time to do justice to, unfortunately, but the material seems very interesting.
{ "language": "en", "url": "https://stackoverflow.com/questions/87796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Why does Rails cache view files when hosted on VM and codebase on Samba share I have the following setup: * *Code on my local machine (OS X) shared as a Samba share *A Ubuntu VM running within Parallels, mounts the share Running Rails 2.1 (either via Mongrel, WEBrick or passenger) in development mode, if I make changes to my views they don't update without me having to kick the server. I've tried switching to an NFS share instead but I get the same problem. I would assume it was some sort of Samba cache issue but autotest picks up the changes to files instantly. Note: * *This is not render caching or template caching and config.action_view.cache_template_loading is not defined in the development config. *Checking out the codebase direct to the VM doesn't display the same issue (but I'd prefer not to do this) *Editing the view file direct on the VM does not resolve this issue. *Touching the view file after alterations does cause the changes to appear in the browser. *I also noticed that the clock in the VM was an hour fast, changing that to the correct time made no difference. A: I had the exact same problem while developing on andLinux. My andLinux's clock was about three hours ahead of the host Windows, and setting the correct time (actually, a minute or so behind) has solved the problem. A: Actually, setting the correct date & time in the VM does seem to have solved the problem (after I restarted mongrel) -- going to do a little more digging.
{ "language": "en", "url": "https://stackoverflow.com/questions/87802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Castle Windsor: How to specify a constructor parameter from code? Say I have the following class:

MyComponent : IMyComponent
{
    public MyComponent(int start_at) { ... }
}

I can register an instance of it with Castle Windsor via XML as follows:

<component id="sample" service="NS.IMyComponent, WindsorSample" type="NS.MyComponent, WindsorSample">
  <parameters>
    <start_at>1</start_at>
  </parameters>
</component>

How would I go about doing the exact same thing but in code? (Notice the constructor parameter.)

A: Try this:

int start_at = 1;
container.Register(
    Component.For<IMyComponent>()
             .ImplementedBy<MyComponent>()
             .DependsOn(Dependency.OnValue("start_at", start_at)));

A: Edit: Used the code from the answer below with the Fluent Interface :)

namespace WindsorSample
{
    using Castle.MicroKernel.Registration;
    using Castle.Windsor;
    using NUnit.Framework;
    using NUnit.Framework.SyntaxHelpers;

    public class MyComponent : IMyComponent
    {
        public MyComponent(int start_at)
        {
            this.Value = start_at;
        }

        public int Value { get; private set; }
    }

    public interface IMyComponent
    {
        int Value { get; }
    }

    [TestFixture]
    public class ConcreteImplFixture
    {
        [Test]
        void ResolvingConcreteImplShouldInitialiseValue()
        {
            IWindsorContainer container = new WindsorContainer();
            container.Register(
                Component.For<IMyComponent>()
                         .ImplementedBy<MyComponent>()
                         .Parameters(Parameter.ForKey("start_at").Eq("1")));
            Assert.That(container.Resolve<IMyComponent>().Value, Is.EqualTo(1));
        }
    }
}

A: Have you considered using Binsor to configure your container? Rather than verbose and clumsy XML, you can configure Windsor using a Boo-based DSL. Here's what your config will look like:

component IMyComponent, MyComponent:
    start_at = 1

The advantage is that you have a malleable config file but avoid the problems with XML. Also you don't have to recompile to change your config, as you would if you configured the container in code.
There are also plenty of helper methods that enable zero-friction configuration: for type in Assembly.Load("MyApp").GetTypes(): continue unless type.NameSpace == "MyApp.Services" continue if type.IsInterface or type.IsAbstract or type.GetInterfaces().Length == 0 component type.GetInterfaces()[0], type You can get started with it here. A: You need to pass in an IDictionary when you ask the container for the instance. You'd use this Resolve overload of the IWindsorContainer: T Resolve<T>(IDictionary arguments) or the non-generic one: object Resolve(Type service, IDictionary arguments) So, for example: (assuming container is an IWindsorContainer) IDictionary values = new Dictionary<string, object>(); values["start_at"] = 1; container.Resolve<IMyComponent>(values); Note that the keys in the dictionary are case sensitive. A: You could use a configuration class to read the app.config. Then register that and get Windsor to use it for its dependency. Ideally my MyConfiguration would use an interface. public class MyConfiguration { public long CacheSize { get; } public MyConfiguration() { CacheSize = long.Parse(ConfigurationManager.AppSettings["cachesize"]); } } container.Register(Component.For<MyConfiguration>().ImplementedBy<MyConfiguration>()); container.Register(Component.For<MostRecentlyUsedSet<long>>() .ImplementedBy<MostRecentlyUsedSet<long>>() .DependsOn(Dependency.OnValue("size", container.Resolve<MyConfiguration>().CacheSize)) .LifestyleSingleton()); A: You can use the AddComponentWithProperties method of the IWindsorContainer interface to register a service with extended properties. Below is a 'working' sample of doing this with an NUnit Unit Test.
namespace WindsorSample { using System.Collections; using Castle.Windsor; using NUnit.Framework; using NUnit.Framework.SyntaxHelpers; public class MyComponent : IMyComponent { public MyComponent(int start_at) { this.Value = start_at; } public int Value { get; private set; } } public interface IMyComponent { int Value { get; } } [TestFixture] public class ConcreteImplFixture { [Test] public void ResolvingConcreteImplShouldInitialiseValue() { IWindsorContainer container = new WindsorContainer(); IDictionary parameters = new Hashtable {{"start_at", 1}}; container.AddComponentWithProperties("concrete", typeof(IMyComponent), typeof(MyComponent), parameters); IMyComponent resolvedComp = container.Resolve<IMyComponent>(); Assert.That(resolvedComp.Value, Is.EqualTo(1)); } } }
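The IDictionary overload above works by matching the dictionary keys, by name, against the component's constructor parameters. The following toy container (a Python sketch for illustration only, not Castle Windsor code) implements the same matching rule:

```python
import inspect

class Container:
    """Toy resolver: keyword arguments win over registered defaults."""
    def __init__(self):
        self._registrations = {}

    def register(self, service, impl, **defaults):
        self._registrations[service] = (impl, defaults)

    def resolve(self, service, **overrides):
        impl, defaults = self._registrations[service]
        supplied = {**defaults, **overrides}
        # keep only arguments the constructor actually declares,
        # matched by name (and, as in Windsor, names are case sensitive)
        params = inspect.signature(impl).parameters
        return impl(**{k: v for k, v in supplied.items() if k in params})

class MyComponent:
    def __init__(self, start_at):
        self.value = start_at

container = Container()
container.register("IMyComponent", MyComponent, start_at=1)
print(container.resolve("IMyComponent").value)              # 1
print(container.resolve("IMyComponent", start_at=5).value)  # 5
```

The second resolve shows why the dictionary overload is handy: the registered default still applies everywhere else, but an individual Resolve call can supply its own value.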
{ "language": "en", "url": "https://stackoverflow.com/questions/87812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: passing or reading .net cookie in php page I am trying to find a way to read, in a PHP page, a cookie that I generated in a .NET web application. I want users to log in once and then be able to view both .NET and PHP pages; until the cookie expires they should not need to log in again. However, the .NET and PHP web applications are on different servers. How can I do this? A: You mention that: but both .net and php web applications are on different servers Are both applications running under the same domain name? (ie: www.mydomain.com) or are they on different domains? If they're on the same domain, then you can do what you're trying to do in PHP by using the $_COOKIE variable. Just get the cookie's value by $myCookie = $_COOKIE["cookie_name"]; Then you can do whatever you want with the value of $myCookie. But if they're on different domains (ie: foo.mydomain.com and bar.mydomain.com), you cannot access the cookie from both sites. The web browser will only send a cookie to pages on the domain that set the cookie. However, if you originally set the cookie with only the top-level domain (mydomain.com), then sub-domains (anything.mydomain.com) should be able to read the cookie. A: Are the two servers within the same domain? If so, you should set the cookie scope to the domain rather than the FQDN; then both machines will be able to read it. Response.Cookies["domain"].Domain = "contoso.com"; would allow contoso.com, www.contoso.com, hotnakedhamsters.contoso.com etc to access it. A: Any cookie given to a browser will be readable by the server processing the request; cookies are language agnostic. Try $_COOKIE in PHP. A: As long as the site is the same (i.e. www.example.com) then cookies are platform agnostic. As Todd Kennedy said, try the superglobal $_COOKIE. If your site is different, though, you won't be able to read the cookies; they are supposed to be site-specific to prevent this type of cross-site access.
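The parent-domain scoping described above is carried entirely by the Set-Cookie header, so it works across stacks. A quick sketch with Python's standard http.cookies module, used here only as a neutral way to show the header (in .NET this is Response.Cookies[...].Domain, in PHP the $domain argument of setcookie):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["auth"] = "abc123"
# Scope the cookie to the parent domain so that both foo.mydomain.com and
# bar.mydomain.com receive it with every request.
cookie["auth"]["domain"] = ".mydomain.com"
cookie["auth"]["path"] = "/"

# Emits the header value, e.g. name, value, Path and Domain attributes
print(cookie["auth"].OutputString())
```

Whichever server emits this header, the browser will then send the cookie to every host under mydomain.com, which is what lets the PHP side read a cookie the .NET side set.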
{ "language": "en", "url": "https://stackoverflow.com/questions/87818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: SQL: IF clause within WHERE clause Is it possible to use an IF clause within a WHERE clause in MS SQL? Example: WHERE IF IsNumeric(@OrderNumber) = 1 OrderNumber = @OrderNumber ELSE OrderNumber LIKE '%' + @OrderNumber + '%' A: Use a CASE statement instead of IF. A: You don't need an IF statement at all. WHERE (IsNumeric(@OrderNumber) = 1 AND OrderNumber = @OrderNumber) OR (IsNumeric(@OrderNumber) = 0 AND OrderNumber LIKE '%' + @OrderNumber + '%') A: You want the CASE statement WHERE OrderNumber LIKE CASE WHEN IsNumeric(@OrderNumber) = 1 THEN @OrderNumber ELSE '%' + @OrderNumber END A: To clarify some of the logical equivalence solutions: an if statement if (a) then b is logically equivalent to (!a || b) It's the first line of the "Logical equivalences involving conditional statements" section of the Logical equivalence Wikipedia article. To include the else, all you would do is add another conditional: if (a) then b; if (!a) then c; which is logically equivalent to (!a || b) && (a || c) So using the OP as an example: IF IsNumeric(@OrderNumber) = 1 OrderNumber = @OrderNumber ELSE OrderNumber LIKE '%' + @OrderNumber + '%' the logical equivalent would be: (IsNumeric(@OrderNumber) <> 1 OR OrderNumber = @OrderNumber) AND (IsNumeric(@OrderNumber) = 1 OR OrderNumber LIKE '%' + @OrderNumber + '%') A: I think that where...like/=...case...then... can work with Booleans. I am using T-SQL. Scenario: Let's say you want to get Person-30's hobbies if bool is false, and Person-42's hobbies if bool is true. (According to some, hobby-lookups comprise over 90% of business computation cycles, so pay close attn.).
CREATE PROCEDURE sp_Case @bool bit AS SELECT Person.Hobbies FROM Person WHERE Person.ID = case @bool when 0 then 30 when 1 then 42 end; A: -- an example of using a stored procedure to select users filtered by country and site CREATE PROCEDURE GetUsers @CountryId int = null, @SiteId int = null AS BEGIN SELECT * FROM Users WHERE CountryId = CASE WHEN ISNUMERIC(@CountryId) = 1 THEN @CountryId ELSE CountryId END AND SiteId = CASE WHEN ISNUMERIC(@SiteId) = 1 THEN @SiteId ELSE SiteId END END -- take from the input countryId AND/OR siteId if they exist, else don't filter A: Use a CASE statement UPDATE: The previous syntax (as pointed out by a few people) doesn't work. You can use CASE as follows: WHERE OrderNumber LIKE CASE WHEN IsNumeric(@OrderNumber) = 1 THEN @OrderNumber ELSE '%' + @OrderNumber END Or you can use an IF statement like @N. J. Reed points out. A: WHERE (IsNumeric(@OrderNumber) <> 1 OR OrderNumber = @OrderNumber) AND (IsNumeric(@OrderNumber) = 1 OR OrderNumber LIKE '%' + @OrderNumber + '%') A: There isn't a good way to do this in SQL. Some approaches I have seen: 1) Use CASE combined with boolean operators: WHERE OrderNumber = CASE WHEN (IsNumeric(@OrderNumber) = 1) THEN CONVERT(INT, @OrderNumber) ELSE -9999 -- Some numeric value that just cannot exist in the column END OR FirstName LIKE CASE WHEN (IsNumeric(@OrderNumber) = 0) THEN '%' + @OrderNumber ELSE '' END 2) Use IFs outside the SELECT IF (IsNumeric(@OrderNumber)) = 1 BEGIN SELECT * FROM Table WHERE @OrderNumber = OrderNumber END ELSE BEGIN SELECT * FROM Table WHERE OrderNumber LIKE '%' + @OrderNumber END 3) Using a long string, compose your SQL statement conditionally, and then use EXEC The 3rd approach is hideous, but it's almost the only thing that works if you have a number of variable conditions like that.
A: You should be able to do this without any IF or CASE WHERE (IsNumeric(@OrderNumber) = 1 AND CAST(OrderNumber AS VARCHAR) = CAST(@OrderNumber AS VARCHAR)) OR (IsNumeric(@OrderNumber) = 0 AND OrderNumber LIKE ('%' + @OrderNumber)) Depending on the flavour of SQL you may need to tweak the casts on the order number to an INT or VARCHAR depending on whether implicit casts are supported. This is a very common technique in a WHERE clause. If you want to apply some "IF" logic in the WHERE clause all you need to do is add the extra condition with a boolean AND to the section where it needs to be applied. A: A CASE statement is always a better option than IF. WHERE vfl.CreatedDate >= CASE WHEN @FromDate IS NULL THEN vfl.CreatedDate ELSE @FromDate END AND vfl.CreatedDate <= CASE WHEN @ToDate IS NULL THEN vfl.CreatedDate ELSE @ToDate END A: WHERE OrderNumber LIKE CASE WHEN IsNumeric(@OrderNumber) = 1 THEN @OrderNumber ELSE '%' + @OrderNumber END The inline CASE condition will work properly. A: In SQL Server I had the same problem: I wanted to apply an AND condition only when the parameter was false, and when it was true I had to show rows with both values, so I used it this way: (T.IsPublic = @ShowPublic or @ShowPublic = 1) A: The following example executes a query as part of the Boolean expression and then executes slightly different statement blocks based on the result of the Boolean expression. Each statement block starts with BEGIN and completes with END. USE AdventureWorks2012; GO DECLARE @AvgWeight decimal(8,2), @BikeCount int IF (SELECT COUNT(*) FROM Production.Product WHERE Name LIKE 'Touring-3000%' ) > 5 BEGIN SET @BikeCount = (SELECT COUNT(*) FROM Production.Product WHERE Name LIKE 'Touring-3000%'); SET @AvgWeight = (SELECT AVG(Weight) FROM Production.Product WHERE Name LIKE 'Touring-3000%'); PRINT 'There are ' + CAST(@BikeCount AS varchar(3)) + ' Touring-3000 bikes.'
PRINT 'The average weight of the top 5 Touring-3000 bikes is ' + CAST(@AvgWeight AS varchar(8)) + '.'; END ELSE BEGIN SET @AvgWeight = (SELECT AVG(Weight) FROM Production.Product WHERE Name LIKE 'Touring-3000%' ); PRINT 'Average weight of the Touring-3000 bikes is ' + CAST(@AvgWeight AS varchar(8)) + '.' ; END ; GO Using nested IF...ELSE statements The following example shows how an IF … ELSE statement can be nested inside another. Set the @Number variable to 5, 50, and 500 to test each statement. DECLARE @Number int SET @Number = 50 IF @Number > 100 PRINT 'The number is large.' ELSE BEGIN IF @Number < 10 PRINT 'The number is small' ELSE PRINT 'The number is medium' END ; GO A: USE AdventureWorks2012; GO IF (SELECT COUNT(*) FROM Production.Product WHERE Name LIKE 'Touring-3000%' ) > 5 PRINT 'There are more than 5 Touring-3000 bicycles.' ELSE PRINT 'There are 5 or less Touring-3000 bicycles.' ; GO A: If @LstTransDt is Null begin Set @OpenQty=0 end else begin Select @OpenQty=IsNull(Sum(ClosingQty),0) From ProductAndDepotWiseMonitoring Where Pcd=@PCd And PtpCd=@PTpCd And TransDt=@LstTransDt end See if this helps.
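The boolean rewrite used in several answers, (a AND b) OR (NOT a AND c), is portable enough to demonstrate anywhere. Below is a sketch using Python's stdlib sqlite3; since SQLite has no ISNUMERIC, a GLOB digit test stands in for it, and string concatenation is || rather than +. The shape of the WHERE clause is the point:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (OrderNumber TEXT)")
conn.executemany("INSERT INTO Orders VALUES (?)",
                 [("123",), ("ABC-45",), ("9",)])

def find(term):
    # (numeric AND exact match) OR (not numeric AND substring match)
    sql = """
        SELECT OrderNumber FROM Orders
        WHERE (:t NOT GLOB '*[^0-9]*' AND OrderNumber = :t)
           OR (:t GLOB '*[^0-9]*' AND OrderNumber LIKE '%' || :t || '%')
    """
    return [row[0] for row in conn.execute(sql, {"t": term})]

print(find("123"))  # numeric input: exact match only -> ['123']
print(find("BC"))   # non-numeric input: substring match -> ['ABC-45']
```

Note how a numeric search term does not fall through to the LIKE branch, so '9' never matches '123' by substring; that is exactly the behavior the original IF pseudo-code was after.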
{ "language": "en", "url": "https://stackoverflow.com/questions/87821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "244" }
Q: How do I use my own compiler with Nant? Nant seems very compiler-centric - which I guess is because it's considered a .NET development system. But I know it can be done! I've seen it. The platform we're building on has its own compiler and doesn't use 'cl.exe' for C++. We're building a C++ app on a different platform and would like to override the default with our own compiler. Can anyone point me at a way to do that or at least how to set up a target of my own that will use our target platform's compiler? A: Here is one I did for Delphi. Each 'arg' is a separate param with a value defined elsewhere. The target is called with the params set up before calling it. <target name="build.application"> <exec program="dcc32" basedir="${Delphi.Bin}" workingdir="${Application.Folder}" verbose="true"> <arg value="${Application.Compiler.Directive}" /> <arg value="-Q" /> <arg value="/B" /> <arg value="/E${Application.Output.Folder}" /> <arg value="/U${Application.Lib.Folder};${Application.Search.Folder}" /> <arg value="${Application.Folder}\${Delphi.Project}" /> </exec> </target> A: You need to write your own task. This is a nice reference. A: Initially, use the <exec> task to run an executable, passing in any required information as parameters and/or environment variables. For future use, you could also investigate writing your own task. I know with standard Ant this is done with the <taskdef> task and a Java class. I'm not sure of the Nant equivalent unfortunately. A: You could also use the <exec> task.
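Putting the <exec> suggestions together, a target for a proprietary C++ compiler has the same shape as the Delphi example above. Everything below is hypothetical: the program name plat-cc, its switches, and the property names are placeholders for your platform's toolchain:

```xml
<target name="build.native" description="Compile with the platform's own compiler">
  <exec program="plat-cc" workingdir="${src.dir}" failonerror="true" verbose="true">
    <arg value="-O2" />
    <arg value="-I${include.dir}" />
    <arg value="-o" />
    <arg value="${build.dir}/app" />
    <arg value="${src.dir}/main.cpp" />
  </exec>
</target>
```

Setting failonerror="true" makes the build stop when the compiler returns a non-zero exit code, which is usually what you want from a compile step.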
{ "language": "en", "url": "https://stackoverflow.com/questions/87831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: SVN checkout question I am about to move to SVN as my RCS of choice (after many years using CVS) and have a basic question... I have a number of shared projects - code that I want to use with lots of different projects. Is it possible to 'link' these shared folders to the projects that need them, so checking out a project will also check out the shared code? For example, suppose my repository looks like this: root --project1 --project2 --shared --smtp When I check out project1, I also want to check out shared and smtp. Back in my CVS days I would have used a Unix symbolic link in one of the project folders, but as my new SVN repository won't necessarily be hosted on a Unix box, I can't do the same. A: You are looking for the "svn:externals" property. See this section of svnbook: Go to the directory of the project that you want to use the shared project in, and set a property on that directory named "svn:externals". This property contains the name of the directory which will contain the external repository, and can have some other options so that you always get the same revision. Example (from svnbook, which is an EXCELLENT reference for svn questions): $ svn propget svn:externals calc third-party/sounds http://svn.example.com/repos/sounds third-party/skins -r148 http://svn.example.com/skinproj third-party/skins/toolkit -r21 http://svn.example.com/skin-maker In this example, third-party/sounds would be checked out from http://svn.example.com/repos/sounds. The -rNNN pins the checkout to a revision so that if you're doing more development on that, you can make sure your other projects don't randomly break. Generally, instead of doing this revision thing, I point the external at a tag which holds a stable version. A: SVN Externals are what you want to do. The SVN book explains it in great detail here. That's one thing I love about SVN, the wonderful documentation. A: This is what svn "Externals" are for. A: Yes, have a look at SVN Externals. A: Yes, this mechanism is called "externals".
See the book. A: SVN has a feature called "externals" which basically works the same way a symbolic UNIX link works (you point a directory in one place in your repository at another directory elsewhere in your repository). For further info on how to set up externals, see this: http://svnbook.red-bean.com/en/1.0/ch07s03.html Hope that helps.
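For the repository layout in the question, setting the property might look like this (the URLs and the pre-1.5 "dir URL" property format are illustrative; adjust to your server):

```
# Each line of the property value is "<dir> <URL>":
svn propset svn:externals "shared  http://svn.example.com/repos/root/shared
smtp    http://svn.example.com/repos/root/smtp" project1

svn commit -m "Pull in shared code via externals" project1
svn update project1   # now also fetches project1/shared and project1/smtp
```

After the property is committed, every fresh checkout of project1 brings the shared folders down automatically.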
{ "language": "en", "url": "https://stackoverflow.com/questions/87849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Anyone have sample code for a UserControl with pager controls to be used in a GridView's PagerTemplate? I've got several Gridviews in my application in which I use a custom PagerTemplate. I'd like to turn this custom template into a UserControl so that I don't need to replicate the same logic in multiple pages. I'm pretty sure that such a thing is possible, but I'm unsure of how exactly to wire the UserControl to the Gridview's events, and what interfaces my control may need to implement. I'm using ASP 2.0 frameworks. Has anyone done something like this? And if so, do you have any sample code for your usercontrol? A: Dave Anderson, a co-worker of mine, wrote this server control that could help you get started. Note that we're targeting .NET 3.5. [AspNetHostingPermission( SecurityAction.Demand, Level = AspNetHostingPermissionLevel.Minimal), AspNetHostingPermission(SecurityAction.InheritanceDemand, Level = AspNetHostingPermissionLevel.Minimal), DefaultProperty("Text"), ToolboxData("<{0}:Pager runat=\"server\"> </{0}:Pager>"), Designer(typeof(ServerControls.Design.PagerDesigner)) ] public class Pager : WebControl, INamingContainer { #region Private Constants private const string Command_First = "First"; private const string Command_Prev = "Prev"; private const string Command_Next = "Next"; private const string Command_Last = "Last"; #endregion #region Private members private Control PageableNamingContainer; private PropertyInfo PageCountInfo; private PropertyInfo PageIndexInfo; private DropDownList ddlCurrentPage; private Label lblPageCount; private Button btnFirst; private Button btnPrevious; private Button btnNext; private Button btnLast; #endregion #region Private Properties private int PageCount { get { int Result; if (InsideDataPager) Result = (int)Math.Ceiling((decimal)(TotalRowCount / PageSize)) + 1; else Result = (int)PageCountInfo.GetValue(PageableNamingContainer, null); return Result; } } private int PageIndex { get { int Result; if (InsideDataPager) 
Result = (int)Math.Floor((decimal)(StartRowIndex / PageSize)); else Result = (int)PageIndexInfo.GetValue(PageableNamingContainer, null); return Result; } } private int StartRowIndex { get { if (InsideDataPager) return MyDataPager.StartRowIndex; else throw new Exception("DataPager functionality requires DataPager."); } } private int TotalRowCount { get { if (InsideDataPager) return MyDataPager.TotalRowCount; else throw new Exception("DataPager functionality requires DataPager."); } } private int PageSize { get { if (InsideDataPager) return MyDataPager.PageSize; else throw new Exception("DataPager functionality requires DataPager."); } } private bool InsideDataPager { get { return ViewState["InsideDataPager"] == null ? false : (bool)ViewState["InsideDataPager"]; } set { ViewState["InsideDataPager"] = value; } } #region DataPager-Specific properties private DataPager MyDataPager { get { if (InsideDataPager) return (DataPager)PageableNamingContainer; else throw new Exception("DataPager functionality requires DataPager."); } } private int PrevPageStartIndex { get { return StartRowIndex >= PageSize ? StartRowIndex - PageSize : 0; } } private int NextPageStartIndex { get { return StartRowIndex + PageSize >= TotalRowCount ? 
LastPageStartIndex : StartRowIndex + PageSize; } } private int LastPageStartIndex { get { return (PageCount-1) * PageSize; } } #endregion #endregion #region Public Properties [ Category("Behavior"), DefaultValue(""), Description("The stylesheet class to use for the buttons") ] public bool HideInactiveButtons { get; set; } [ Category("Behavior"), DefaultValue("true"), Description("Indicates whether the controls will invoke validation routines") ] public bool CausesValidation { get; set; } [ Category("Appearance"), DefaultValue(""), Description("The stylesheet class to use for the buttons") ] public string ButtonCssClass { get; set; } [ Category("Appearance"), DefaultValue("<<"), Description("The text to be shown on the button that navigates to the First page") ] public string FirstText { get; set; } [ Category("Appearance"), DefaultValue("<"), Description("The text to be shown on the button that navigates to the Previous page") ] public string PreviousText { get; set; } [ Category("Appearance"), DefaultValue(">"), Description("The text to be shown on the button that navigates to the Next page") ] public string NextText { get; set; } [ Category("Appearance"), DefaultValue(">>"), Description("The text to be shown on the button that navigates to the Last page") ] public string LastText { get; set; } #endregion #region Overridden properties public override ControlCollection Controls { get { EnsureChildControls(); return base.Controls; } } #endregion #region Overridden methods/events protected override void OnLoad(EventArgs e) { base.OnLoad(e); if (!GetPageInfo(NamingContainer)) throw new Exception("Unable to locate the Pageable Container."); } protected override void OnPreRender(EventArgs e) { base.OnPreRender(e); if (PageableNamingContainer != null) { EnsureChildControls(); ddlCurrentPage.Items.Clear(); for (int i = 0; i < PageCount; i++) ddlCurrentPage.Items.Add(new ListItem((i + 1).ToString(), (i + 1).ToString())); lblPageCount.Text = PageCount.ToString(); if 
(HideInactiveButtons) { btnFirst.Visible = btnPrevious.Visible = (PageIndex > 0); btnLast.Visible = btnNext.Visible = (PageIndex < (PageCount - 1)); } else { btnFirst.Enabled = btnPrevious.Enabled = (PageIndex > 0); btnLast.Enabled = btnNext.Enabled = (PageIndex < (PageCount - 1)); } ddlCurrentPage.SelectedIndex = PageIndex; } else ddlCurrentPage.SelectedIndex = 0; } protected override bool OnBubbleEvent(object source, EventArgs args) { // We handle all our events inside this class when // we are inside a DataPager return InsideDataPager; } #endregion #region Event delegate protected void PagerEvent(object sender, EventArgs e) { if (InsideDataPager) { int NewStartingIndex; if (sender.GetType() == typeof(Button)) { string arg = ((Button)sender).CommandArgument.ToString(); switch (arg) { case Command_Prev: NewStartingIndex = PrevPageStartIndex; break; case Command_Next: NewStartingIndex = NextPageStartIndex; break; case Command_Last: NewStartingIndex = LastPageStartIndex; break; case Command_First: default: NewStartingIndex = 0; break; } } else { NewStartingIndex = Math.Min(((DropDownList)sender).SelectedIndex * PageSize, LastPageStartIndex); } MyDataPager.SetPageProperties(NewStartingIndex, MyDataPager.MaximumRows, true); } else { CommandEventArgs ea = new CommandEventArgs("Page", ((DropDownList)sender).SelectedValue); RaiseBubbleEvent(this, ea); } } #endregion #region GetPageableContainer private bool GetPageInfo(Control namingContainer) { if (namingContainer == null || namingContainer.GetType() == typeof(Page)) throw new Exception(this.GetType().ToString() + " must be used in a pageable container like a GridView."); /* * NOTE: If we are inside a DataPager, this will be * our first-level NamingContainer, so there * will never be any reflection in that case. 
*/ if (namingContainer.GetType() == typeof(DataPagerFieldItem)) { InsideDataPager = true; PageableNamingContainer = ((DataPagerFieldItem)namingContainer).Pager; return true; } PageCountInfo = namingContainer.GetType().GetProperty("PageCount"); PageIndexInfo = namingContainer.GetType().GetProperty("PageIndex"); if (PageCountInfo == null || PageIndexInfo == null) return GetPageInfo(namingContainer.NamingContainer); else { PageableNamingContainer = namingContainer; return true; } } #endregion #region Control generation protected override void CreateChildControls() { Controls.Clear(); Controls.Add(BuildControlTable()); } private Table BuildControlTable() { Table ControlTable = new Table(); ControlTable.CssClass = CssClass; TableRow tr = new TableRow(); TableCell td = new TableCell(); td.Text = "Page"; tr.Cells.Add(td); td = new TableCell(); ddlCurrentPage = new DropDownList(); ddlCurrentPage.ID = "ddlCurrentPage"; ddlCurrentPage.AutoPostBack = true; ddlCurrentPage.SelectedIndexChanged += PagerEvent; ddlCurrentPage.CausesValidation = CausesValidation; td.Controls.Add(ddlCurrentPage); tr.Cells.Add(td); td = new TableCell(); td.Text = "of"; tr.Cells.Add(td); td = new TableCell(); lblPageCount = new Label(); td.Controls.Add(lblPageCount); tr.Cells.Add(td); AddButton(tr, ref btnFirst, string.IsNullOrEmpty(FirstText) ? "<<" : FirstText, Command_First); AddButton(tr, ref btnPrevious, string.IsNullOrEmpty(PreviousText) ? "<" : PreviousText, Command_Prev); AddButton(tr, ref btnNext, string.IsNullOrEmpty(NextText) ? ">" : NextText, Command_Next); AddButton(tr, ref btnLast, string.IsNullOrEmpty(LastText) ? 
">>" : LastText, Command_Last); ControlTable.Rows.Add(tr); return ControlTable; } private void AddButton(TableRow row, ref Button button, string text, string argument) { button = new Button(); button.Text = text; button.CssClass = ButtonCssClass; button.CommandName = "Page"; button.CommandArgument = argument; button.CausesValidation = CausesValidation; if (InsideDataPager) button.Click += PagerEvent; TableCell td = new TableCell(); td.Controls.Add(button); row.Cells.Add(td); } #endregion }
{ "language": "en", "url": "https://stackoverflow.com/questions/87853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Using DateAdd in umbraco xslt to display next year's date I'm trying to display the date for a year from now in an xslt file using umbraco like so: <xsl:variable name="now" select="umbraco.library:CurrentDate()"/> <xsl:value-of select="umbraco.library:DateAdd($now, 'year', 1)"/> The value-of tag outputs today's date. How can I get the DateAdd to add a year to the current date? A: The constant 'year' is wrong. It expects just 'y'. <xsl:variable name="now" select="umbraco.library:CurrentDate()"/> <xsl:value-of select="umbraco.library:DateAdd($now, 'y', 1)"/>
{ "language": "en", "url": "https://stackoverflow.com/questions/87870", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How best to make the selected date of an ASP.NET Calendar control available to JavaScript? How best to make the selected date of an ASP.NET Calendar control available to JavaScript? Most controls are pretty simple, but the calendar requires more than just a simple document.getElementById().value. A: When you click on a date in the calendar, ASP.NET does a postback; you could always put the SelectedDate value of the calendar control into a hidden field on the page during the OnLoad event of the page or the SelectionChanged event of the Calendar control. A: This might help you. It uses YUI, but you can probably port some of that functionality over to another library or custom code it. It should get you started though. http://www.codeproject.com/KB/aspnet/aspnet-yahoouicalendar.aspx A: You might find the MS AJAX Calendar control extender useful; you can get the date just by document.getElementById('<%= DateTextBox.ClientID%>').value; DateTextBox is an asp:TextBox control that will be extended with the AJAX calendar. A: I'm using Page.ClientScript.RegisterClientScriptBlock() to put a small script on the page that just declares a variable with the desired value. I was hoping for something a little less... clunky.
{ "language": "en", "url": "https://stackoverflow.com/questions/87871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Building a Table Dependency Graph With A Recursive Query I am trying to build a dependency graph of tables based on the foreign keys between them. This graph needs to start with an arbitrary table name as its root. I could, given a table name, look up the tables that reference it using the all_constraints view, then look up the tables that reference them, and so on, but this would be horribly inefficient. I wrote a recursive query that does this for all tables, but when I add: START WITH Table_Name=:tablename it doesn't return the entire tree. A: select parent, child, level from ( select parent_table.table_name parent, child_table.table_name child from user_tables parent_table, user_constraints parent_constraint, user_constraints child_constraint, user_tables child_table where parent_table.table_name = parent_constraint.table_name and parent_constraint.constraint_type IN( 'P', 'U' ) and child_constraint.r_constraint_name = parent_constraint.constraint_name and child_constraint.constraint_type = 'R' and child_table.table_name = child_constraint.table_name and child_table.table_name != parent_table.table_name ) start with parent = 'DEPT' connect by prior child = parent should work (replace the table name, of course) assuming that everything is in the same schema. Use the DBA_ versions of the data dictionary tables and conditions for the OWNER and R_OWNER columns if you need to handle cross-schema dependencies. On further reflection, this does not account for self-referential constraints (i.e. a constraint on the EMP table that the MGR column references the EMPNO column) either, so you'd have to modify the code to handle that case if you need to deal with self-referential constraints. For testing purposes, I added a few new tables to the SCOTT schema that also reference the DEPT table (including a grandchild dependency) SQL> create table dept_child2 ( 2 deptno number references dept( deptno ) 3 ); Table created.
SQL> create table dept_child3 ( 2 dept_child3_no number primary key, 3 deptno number references dept( deptno ) 4 ); Table created. SQL> create table dept_grandchild ( 2 dept_child3_no number references dept_child3( dept_child3_no ) 3 ); Table created. and verified that the query returned the expected output SQL> ed Wrote file afiedt.buf 1 select parent, child, level from ( 2 select parent_table.table_name parent, child_table.table_name child 3 from user_tables parent_table, 4 user_constraints parent_constraint, 5 user_constraints child_constraint, 6 user_tables child_table 7 where parent_table.table_name = parent_constraint.table_name 8 and parent_constraint.constraint_type IN( 'P', 'U' ) 9 and child_constraint.r_constraint_name = parent_constraint.constraint_name 10 and child_constraint.constraint_type = 'R' 11 and child_table.table_name = child_constraint.table_name 12 and child_table.table_name != parent_table.table_name 13 ) 14 start with parent = 'DEPT' 15* connect by prior child = parent SQL> / PARENT CHILD LEVEL ------------------------------ ------------------------------ ---------- DEPT DEPT_CHILD3 1 DEPT_CHILD3 DEPT_GRANDCHILD 2 DEPT DEPT_CHILD2 1 DEPT EMP 1 A: The simplest way to do this is to copy all the FK info into a simple, 2-column (parent, child) table, and then use the following algorithm: while (rows left in that table) list = rows where table name exists in child but not in parent print list remove list from rows That's all. Basically, you first print and remove all the nodes that don't depend on anything. Once that is done, some other nodes become free and you can repeat the process. P.S. Make sure you don't insert self-referencing tables in the initial list (child=parent).
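The peeling algorithm just described is a topological sort, and once the (parent, child) pairs are out of the data dictionary it fits in a few lines. A Python sketch, seeded with the SCOTT tables from the earlier answer:

```python
# (parent, child) foreign-key pairs -- in Oracle these would come from the
# user_constraints self-join shown in the first answer.
edges = [
    ("DEPT", "EMP"),
    ("DEPT", "DEPT_CHILD2"),
    ("DEPT", "DEPT_CHILD3"),
    ("DEPT_CHILD3", "DEPT_GRANDCHILD"),
]

remaining = [e for e in edges if e[0] != e[1]]  # drop self-references up front
waves = []
while remaining:
    parents = {p for p, _ in remaining}
    # tables that appear as a child but never as a parent depend on
    # nothing still in the list, so they can be peeled off together
    leaves = sorted({c for _, c in remaining if c not in parents})
    waves.append(leaves)
    remaining = [(p, c) for p, c in remaining if c not in leaves]

for wave in waves:
    print(wave)
# ['DEPT_CHILD2', 'DEPT_GRANDCHILD', 'EMP']
# ['DEPT_CHILD3']
```

Each printed wave is a set of tables with no remaining dependents, so the waves come out in a safe drop order (reverse them for a safe create order).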
{ "language": "en", "url": "https://stackoverflow.com/questions/87877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Control which columns become primary keys with Microsoft Access ODBC link to Oracle When you create a Microsoft Access 2003 link to an Oracle table using Oracle's ODBC driver, you are sometimes asked to state which columns are the primary key(s). I would like to know how to change that initial assignment, or even how to get Access/ODBC to forget the assignment. In my limited testing I wonder if the assignment isn't cached by the ODBC driver itself. The columns I initially chose are not correct. Update: I never did get a full answer on this one; deleting the links and then restoring them didn't work. I think it's an obscure bug. I've moved on and haven't had to worry about this oddity since. A: You must delete the link to the table and create a new one. When a table is linked, all the connection info about the table's path, structure (including primary key), permissions, passwords and statistics is stored in the Access db. If any of those items change in the linked table, refreshing links won't automatically update it on the Access side because Access continues to use the previously stored info. You must delete or drop the linked table and recreate the link, storing the current connection information. Don't know for sure if this next bit also applies to ODBC linked tables, but I suspect it does. For Jet tables, it's a good idea to periodically delete all links and recreate them to improve query performance, because if a linked table's statistics are made on a table with few records, once that table is filled with many more records, new statistics will tell Jet's optimizer whether using indexes or a full table scan would be the better course of action when running a query. A: Is it not possible to delete the link and then relink?
{ "language": "en", "url": "https://stackoverflow.com/questions/87883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Java jdb remote debugging command line tool Anyone have any experience using this? If so, is it worthwhile?

A: I just used jdb for the first time yesterday and am really pleased with the results. You see, I program in Eclipse on my laptop, then deploy to a VM to make sure the whole shebang still works. Very occasionally, I'll have to work on something that gets executed standalone, as a command line. These things sometimes need debugging. This has always been a problem, because I don't want to go to the trouble of installing Eclipse on the VM (it's slow enough already!), yet I don't know of an easy way to get it to connect to my command-line-running class before it finishes running. jdb to the rescue! It works a treat - small and functional, almost to the point where it is bare... this forces you to apply your mind more than you apply the tool (like I said here). Make sure to print out the reference (Solaris, Windows, Java 1.5 - I think they're all about the same, really) and have your source code open and browsable on your second screen. I hope you have a second screen, or you'll be alt-tabbing a lot.

A: Assume your program is started by the following command:

java -Xdebug -Xrunjdwp:transport=dt_socket,server=y,address=<port> <class>

You can attach to this process with jdb:

jdb -attach <port>

In some cases you need to use the following command:

jdb -sourcepath \.src -connect com.sun.jdi.SocketAttach:hostname=localhost,port=<port>

A: JDB is incredibly difficult to use. Placing System.outs or using an IDE debugger will produce better results. And for the more interesting features (e.g. tracking threads, heap size, etc.), you can get the information graphically with the JConsole tool.
{ "language": "en", "url": "https://stackoverflow.com/questions/87885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Game Engine Scripting Languages I am trying to build out a useful 3D game engine out of the Ogre3D rendering engine for mocking up some of the ideas I have come up with, and have come to a bit of a crossroads. There are a number of scripting languages that are available, and I was wondering if there were one or two that were vetted and had a proper following. LUA and Squirrel seem to be the more vetted, but I'm open to any and all. Optimally it would be best if there were a compiled form of the language for distribution and ease of loading.

A: One interesting option is Stackless Python. This was used in the EVE Online game.

A: The syntax is a matter of taste; Lua is like JavaScript but with curly braces replaced by Pascal-like keywords. It has the nice syntactic feature that semicolons are never required but whitespace is still not significant, so you can even remove all line breaks and have it still work. As someone who started with C, I'd say Python is the one with esoteric syntax compared to all the other languages. LuaJIT is also around 10 times as fast as Python, and the Lua interpreter is much, much smaller (150 KB, or around 15k lines of C which you can actually read through and understand). You can let the user script your game without having to embed a massive language. On the other hand, if you rip the parser part out of Lua it becomes even smaller.

A: The Python/C API manual is longer than the whole Lua manual (including the Lua/C API). Another reason for Lua is the built-in support for coroutines (co-operative multitasking within the one OS thread). It allows one to have 1000s of seemingly individual scripts running very fast alongside each other. Like one script per monster/weapon or so. (Why do people write Lua in upper case so much on SO? It's "Lua" (see here).)

A: One more vote for Lua. Small, fast, easy to integrate, and, importantly for modern consoles, you can easily control its memory operations.
A: I'd go with Lua since writing bindings is extremely easy, the license is very friendly (MIT), and existing libraries also tend to be under said license. Scheme is also nice and easy to bind, which is why it was chosen for the Gimp image editor, for example. But Lua is simply great. World of Warcraft uses it, as a very high-profile example. LuaJIT gives you native-compiled performance. It's less than an order of magnitude from pure C.

A: I wouldn't recommend LUA; it has a peculiar syntax, so it takes some time to get used to. Depending on who will be doing the scripting, this may not be a problem, but I would try to use something fairly accessible. I would probably choose Python. It normally compiles to bytecode, so you would need to embed the interpreter. However, if you must, you can use PyPy to, for example, translate the code to C, and then compile it.

A: Embedding the interpreter is no issue. I am more interested in features and performance at this point in time. LUA and Squirrel are both interpreted, which is nice because one of the games I am building out is to include modifiable code, which has an editor in game. I would love to hear about Python, as I have seen its use within the Battlefield series, I believe.

A: Python is also nice because it has actual OGRE bindings, just in case you need to modify something lower-level on the fly. I don't know of any equivalent bindings for Lua.

A: Since it's a C++ library, I would suggest either JavaScript or Squirrel, the latter being my personal favorite of the two for being even closer to C++, in particular in how it handles tables/structs and classes. It would be the easiest to get used to for a C++ coder because of all the similarities. However, if you go with JavaScript and find an HTML5 version of Ogre3D, you should be able to port your game code directly into the web version with minimal (if any) changes necessary.
Both of these are a good choice, and both have their pros and cons, but both would definitely be the easiest to learn since you're likely already working in C++. If you're working with Java, the same may hold true, and if it's Game Maker, you wouldn't need either one unless you're trying to make an executable that people wouldn't need Game Maker itself to run, in which case, good luck finding an extension to run either of these.
{ "language": "en", "url": "https://stackoverflow.com/questions/87889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What is the status of POSIX asynchronous I/O (AIO)? There are pages scattered around the web that describe POSIX AIO facilities in varying amounts of detail. None of them are terribly recent. It's not clear what, exactly, they're describing. For example, the "official" (?) web site for Linux kernel asynchronous I/O support here says that sockets don't work, but the "aio.h" manual pages on my Ubuntu 8.04.1 workstation all seem to imply that it works for arbitrary file descriptors. Then there's another project that seems to work at the library layer with even less documentation. I'd like to know:

* *What is the purpose of POSIX AIO? Given that the most obvious example of an implementation I can find says it doesn't support sockets, the whole thing seems weird to me. Is it just for async disk I/O? If so, why the hyper-general API? If not, why is disk I/O the first thing that got attacked? *Where are there example complete POSIX AIO programs that I can look at? *Does anyone actually use it, for real? *What platforms support POSIX AIO? What parts of it do they support? Does anyone really support the implied "any I/O to any FD" that <aio.h> seems to promise?

The other multiplexing mechanisms available to me are perfectly good, but the random fragments of information floating around out there have made me curious.

A: Doing socket I/O efficiently has been solved with kqueue, epoll, IO completion ports and the like. Doing asynchronous file I/O is a sort of latecomer (apart from Windows' overlapped I/O and Solaris' early support for POSIX AIO). If you're looking to do socket I/O, you're probably better off using one of the above mechanisms. The main purpose of AIO is hence to solve the problem of asynchronous disk I/O. This is most likely why Mac OS X only supports AIO for regular files, and not sockets (since kqueue does that so much better anyway). Write operations are typically cached by the kernel and flushed out at a later time.
For instance, when the read head of the drive happens to pass by the location where the block is to be written. However, for read operations, if you want the kernel to prioritize and order your reads, AIO is really the only option. Here's why the kernel can (theoretically) do that better than any user-level application:

* *The kernel sees all disk I/O, not just your application's disk jobs, and can order them at a global level *The kernel (may) know where the disk read head is, and can pick the read jobs you pass on to it in optimal order, to move the head the shortest distance *The kernel can take advantage of native command queuing to optimize your read operations further *You may be able to issue more read operations per system call using lio_listio() than with readv(), especially if your reads are not (logically) contiguous, saving a tiny bit of system call overhead. *Your program might be slightly simpler with AIO since you don't need an extra thread to block in a read or write call.

That said, POSIX AIO has a quite awkward interface, for instance:

* *The only efficient and well-supported means of event callbacks are via signals, which makes it hard to use in a library, since it means using signal numbers from the process-global signal namespace. If your OS doesn't support realtime signals, it also means you have to loop through all your outstanding requests to figure out which one actually finished (this is the case for Mac OS X, for instance, not Linux). Catching signals in a multi-threaded environment also makes for some tricky restrictions. You typically cannot react to the event inside the signal handler, but have to raise a signal, write to a pipe or use signalfd() (on Linux). *aio_suspend() has the same issues as select() does; it doesn't scale very well with the number of jobs. *lio_listio(), as implemented, has a fairly limited number of jobs you can pass in, and it's not trivial to find this limit in a portable way.
You have to call sysconf(_SC_AIO_LISTIO_MAX), which may fail, in which case you can use the AIO_LISTIO_MAX define, which is not necessarily defined, but then you can use 2, which is defined as guaranteed to be supported. As for real-world applications using POSIX AIO, you could take a look at lighttpd (lighty), which also posted a performance measurement when introducing support. Most POSIX platforms support POSIX AIO by now (Linux, BSD, Solaris, AIX, Tru64). Windows supports it via its overlapped file I/O. My understanding is that only Solaris, Windows and Linux truly support async file I/O all the way down to the driver, whereas the other OSes emulate the async I/O with kernel threads. Linux being the exception, its POSIX AIO implementation in glibc emulates async operations with user-level threads, whereas its native async I/O interface (io_submit() etc.) is truly asynchronous all the way down to the driver, assuming the driver supports it. I believe it's fairly common among OSes to not support POSIX AIO for any fd, but to restrict it to regular files.

A: Network I/O is not a priority for AIO because everyone writing POSIX network servers uses an event-based, non-blocking approach. The old-style Java "billions of blocking threads" approach sucks horribly. Disk write I/O is already buffered and disk read I/O can be prefetched into a buffer using functions like posix_fadvise. That leaves direct, unbuffered disk I/O as the only useful purpose for AIO. Direct, unbuffered I/O is only really useful for transactional databases, and those tend to write their own threads or processes to manage their disk I/O. So, in the end that leaves POSIX AIO in the position of not serving any useful purpose. Don't use it.
A: There is aio_write, implemented in glibc; the first call of the aio_read or aio_write function spawns a number of user-mode threads, aio_write or aio_read posts requests to such a thread, the thread does pread/pwrite, and when it is finished the answer is posted back to the blocked calling thread. There is also 'real' AIO, supported at the kernel level (you need libaio for that; see the io_submit call http://linux.die.net/man/2/io_submit); you also need O_DIRECT for that (which may not be supported by all file systems, but the major ones do support it). See here:

http://lse.sourceforge.net/io/aio.html

http://linux.die.net/man/2/io_submit

Difference between POSIX AIO and libaio on Linux?

A: A libtorrent developer provides a report on this: http://blog.libtorrent.org/2012/10/asynchronous-disk-io/
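One of the answers above notes that read prefetching via posix_fadvise is often a simpler alternative to AIO. As a small illustration (in Python rather than C, since its os module exposes the same call on Linux; the temporary file is just a stand-in for real data):

```python
import os
import tempfile

# Create a throwaway file to advise on; any regular file descriptor works.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 65536)
    path = f.name

fd = os.open(path, os.O_RDONLY)
try:
    if hasattr(os, "posix_fadvise"):  # Linux and some other POSIX systems
        # Hint the kernel to prefetch the first 64 KiB into the page cache,
        # so the later read() is likely to be served from memory.
        os.posix_fadvise(fd, 0, 65536, os.POSIX_FADV_WILLNEED)
    data = os.read(fd, 65536)  # ideally already cached by now
finally:
    os.close(fd)
    os.unlink(path)
```

The advise call is purely a hint: it returns immediately and the program stays correct even if the kernel ignores it, which is what makes it so much easier to use than a full AIO setup.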
{ "language": "en", "url": "https://stackoverflow.com/questions/87892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "99" }
Q: Regex to match name1.name2[.name3] I am trying to validate user IDs matching the example: smith.jack or smith.jack.s In other words, any number of non-whitespace characters (except dot), followed by exactly one dot, followed by any number of non-whitespace characters (except dot), optionally followed by exactly one dot followed by any number of non-whitespace characters (except dot). I have come up with several variations that work fine except for allowing consecutive dots! For example, the following regex

^([\S][^.]*[.]{1}[\S][^.]*|[\S][^.]*[.]{1}[\S][^.]*[.]{1}[\S][^.]*)$

matches "smith.jack" and "smith.jack.s" but also matches "smith..jack" and "smith..jack.s"! My gosh, it even likes a dot as a first character. It seems like it would be so simple to code, but it isn't. I am using .NET, btw. Frustrating.

A: Does this help?

/^[^\s\.]+(?:\.[^\s\.]+)*$/

or, in extended format, with comments (Ruby-style):

/
  ^          # start of line
  [^\s\.]+   # one or more non-space non-dot
  (?:        # non-capturing group
    \.       # dot something
    [^\s\.]+ # one or more non-space non-dot
  )*         # zero or more times
  $          # end of line
/x

You're not clear on how many times you can have dot-something, but you can replace the * with {1,3} or something to specify how many repetitions are allowed. I should probably make it clear that the slashes are the literal regex delimiter in Ruby (and Perl and JS, etc.).

A: You are using the * duplication, which allows for 0 iterations of the given component. You should be using plus, and putting the final .[^.]+ into a group followed by ? to represent the possibility of an extra set. Might not have the perfect syntax, but something similar to the following should work:

^[^.\s]+[.][^.\s]+([.][^.\s]+)?$

Or in simple terms: any non-zero number of non-whitespace non-dot characters, followed by a dot, followed by any non-zero number of non-whitespace non-dot characters, optionally followed by a dot, followed by any non-zero number of non-whitespace non-dot characters.
A: ^([^.\s]+)\.([^.\s]+)(?:\.([^.\s]+))?$

A: I'm not familiar with .NET's regexes. This will do what you want in Perl:

/^\w+\.\w+(?:\.\w+)?$/

If .NET doesn't support the non-capturing (?:xxx) syntax, use this instead:

/^\w+\.\w+(\.\w+)?$/

Note: I'm assuming that when you say "non-whitespace, non-dot" you really mean "word characters."

A: I realise this has already been solved, but I find RegexPal extremely helpful for prototyping regexes. The site has a load of simple explanations of the basics and lets you see what matches as you adjust the expression.

A: [^\s.]+\.[^\s.]+(\.[^\s.]+)? BTW, what you asked for allows "." and ".."

A: I think you'd benefit from using +, which means "1 or more", instead of *, meaning "any number including zero".

A: (^.)+|(([^.]+)[.]([^.]+))+ But this would match x.y.z.a.b.c and, from your description, I am not sure if this is sufficiently restrictive. BTW: feel free to modify if I made a silly mistake (I haven't used .NET, but have done plenty of regexes)

A: [^.\s]+\.[^.\s]+(\.([^\s.]+?)? has an unmatched paren. If corrected to [^.\s]+\.[^.\s]+(\.([^\s.]+?))? it is still too liberal. It matches a.b. as well as a.b.c.d. and .a.b If corrected to [^.\s]+\.[^.\s]+(\.([^\s.]+?)?) it doesn't match a.b

A: ^([^.\W]+)\.?([^.\W]+)\.?([^.\W]+)$ This should capture as described, group the parts of the id and stop duplicate periods

A: I took a slightly different approach. I figured you really just wanted a string of non-space characters followed by only one dot, but that dot is optional (for the last entry). Then you wanted this repeated:

^([^\s\.]+\.?)+$

Right now, this means you have to have at least one string of characters, e.g. 'smith', to match. You could, of course, limit it to only allow one to three repetitions with

^([^\s\.]+\.?){1,3}$

I hope that helps.

A: RegexBuddy is a good (non-free) tool for regex stuff
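A quick way to sanity-check any of the candidates in this thread is to run them against the accept/reject cases from the question. Python's re is used here rather than .NET, but the pattern syntax is identical for this subset:

```python
import re

# One of the proposed patterns: two dot-separated parts, optional third.
pattern = re.compile(r"^[^.\s]+\.[^.\s]+(?:\.[^.\s]+)?$")

accepted = ["smith.jack", "smith.jack.s"]
rejected = ["smith", "smith..jack", ".jack", "smith.jack.", "a.b.c.d", "sm ith.jack"]

assert all(pattern.match(s) for s in accepted)
assert not any(pattern.match(s) for s in rejected)
```

The anchors matter: without ^ and $ the pattern happily matches a valid-looking substring inside an invalid ID, which is exactly the "too liberal" failure one of the answers points out.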
{ "language": "en", "url": "https://stackoverflow.com/questions/87902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: ColdFusion Template Request count optimization In ColdFusion, under Request Tuning in the administrator, how do I determine what is an optimal number (or at least a good guess) for the Maximum Number of Simultaneous Template Requests? Environment: CF8 Standard, IIS 6, Win2k3, SQL2k5 on a separate box

A: The way to find the right number of requests is load testing - that is, measuring changes in throughput under load as you vary the request number. Any significant change would require retesting. But I suspect most folks are going to baulk at that amount of work. I think a good rule of thumb is about 8 threads per CPU (core). In terms of efficiency, the lower the thread count (up to a point), the less swapping will be going on as the CPU processes your requests. If your pages execute very quickly, then a lower number of requests is optimal. If you have longer-running requests, and especially if you have requests that are waiting on third parties (like a database), then increasing the number of working threads will improve your throughput. That is, if your CPU is not tied up processing stuff, you can afford to have more simultaneous requests working on the tasks at hand. Although it's a little bit dated, many of the principles on request tuning in Grant Straker's book on CF Performance & Troubleshooting are still worthwhile.

A: I would say at least 8 per core, not per CPU. And I think 8 is a little low given modern CPU cores; I would say at least 12.
{ "language": "en", "url": "https://stackoverflow.com/questions/87904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Setting Page Title from a SWF Is it possible to set the title of a page when it's simply a loaded SWF?

A: This is how I would do it:

ExternalInterface.call("document.title = 'Hello World'");

Or, more generalized:

function setPageTitle( newTitle : String ) : void {
    var jsCode : String = "function( title ) { document.title = title; }";
    ExternalInterface.call(jsCode, newTitle);
}

A: Sure. This should fix you up:

getURL('javascript:var x = (document.getElementsByTagName("head")[0].getElementsByTagName("title")[0].firstChild.nodeValue = "This is a test!");');

Just replace "This is a test!" with your new title.

A: I would think you would be able to do it. You would have to access the JavaScript DOM. A couple of links that may steer you down the correct path:

http://homepage.ntlworld.com/kayseycarvey/document2.html

http://www.permadi.com/tutorial/flashjscommand/

A: You could use SWFAddress; it has a setTitle method. Plus, you get the added benefit of being able to modify the URL for deep linking.

EDIT: This won't work if the SWF is loaded directly in the browser, only if it is embedded in HTML.

A: I came up against the same problem of setting the title of my page. I made a lot of effort, from downloading ASP Flash controls to SWFObject... Finally my team lead came up with the solution: open a popup page, and in that page use an IFrame to load the SWF file. So there are two pages: the outer one is under our control, so just set its title; the inner one is the IFrame, which is just another page, so load the SWF file there directly by setting src="file path".
{ "language": "en", "url": "https://stackoverflow.com/questions/87909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Tool/framework for automated web app testing in Google Chrome browser? Is an open-source/commercial tool/framework available for automated web app testing in the Google Chrome browser on Windows XP / Vista? (An alpha/beta tool is also OK.) Thanks

A: Selenium supports Chrome pretty much out of the box because it works by injecting JavaScript into the web page. http://selenium-rc.openqa.org/ WebDriver has an early version of a Chrome driver. http://code.google.com/p/webdriver/ Both are open source and work on Windows.

A: I found a tool called QA Agent (http://qaagent.com). This is a web-based IDE where you can develop your tests using jQuery and JavaScript. Currently it supports only Chrome, so it may be a good choice for you. And of course it is free.

A: For those who are not developers, you could try FRET for automated web testing with Chrome. It's still in beta, and even though it states that no programming is needed, a basic understanding of HTML / CSS etc. is recommended.
{ "language": "en", "url": "https://stackoverflow.com/questions/87911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Attribute & Reflection libraries for C++? Most mature C++ projects seem to have their own reflection and attribute system, i.e. for defining attributes which can be accessed by string and are automatically serializable. At least, many C++ projects I participated in seemed to reinvent the wheel. Do you know any good open source libraries for C++ which support reflection and attribute containers, specifically:

* *Defining RTTI and attributes via macros *Accessing RTTI and attributes via code *Automatic serialisation of attributes *Listening to attribute modifications (e.g. OnValueChanged)

A: You could have a look at the two tools below. I've never used either of them, so I can't tell you how (im)practical they are. XRTTI: Xrtti is a tool and accompanying C++ library which extends the standard runtime type system of C++ to provide a much richer set of reflection information about classes, and methods to manipulate these classes and their members. OpenC++: OpenC++ is a C++ frontend library (lexer+parser+DOM/MOP) and source-to-source translator. OpenC++ enables development of C++ language tools, extensions, domain-specific compiler optimizations and runtime metaobject protocols.

A: I looked at these things for quite a while, but they tend to be very heavy-handed. They might prevent you from using inheritance, or having strange constructors, etc. In the end they ended up being too much of a burden instead of a convenience. This approach for exposing members that I now use is quite lightweight and lets you explore a class for serialization, or for setting all fields called "x" to 0, for example. It's also statically determined, so it is very, very fast. No layers of library code or code-gen to worry about messing with the build process. It generalises to hierarchies of nested types. Set your editor up with some macros to automate writing some of these things.
#include <iostream>

struct point
{
    int x;
    int y;

    // add this to your classes
    template <typename Visitor>
    void visit(Visitor v)
    {
        v->visit(x, "x");
        v->visit(y, "y");
    }
};

/** Outputs any type to standard output in key=value format */
struct stdout_visitor
{
    template <typename T>
    void visit(T& rhs) // non-const, so visitors may also mutate fields
    {
        rhs.visit(this);
    }

    template <typename Scalar>
    void visit(const Scalar& s, const char* name)
    {
        std::cout << name << " = " << s << " ";
    }
};

A: This is a notorious weakness of the C++ language in general, because the things that would need to be standardized to make reflection implementations portable and worthwhile aren't standard. Calling conventions, object layouts, and symbol mangling come to mind, but there are others as well. The lack of direction from the standard means that compiler implementers will do some things differently, which means that very few people have the motivation to write a portable reflection library, which means that people who need reflection re-invent the wheel, but only just enough for what they need. This happens ad infinitum, and here we are.

A: Looked at this for a while too. The current easiest solution seems to be BOOST_FUSION_ADAPT_STRUCT. Practically, once you have a library/header you only need to add your struct fields into the BOOST_FUSION_ADAPT_STRUCT() macro, as the last segment of the code shows. Yes, it has the restrictions many other people have mentioned. And it does not support listeners directly. The other promising solutions I looked into are:

* *CAMP and XRTTI/gccxml; however, both seem to be a hurdle, bringing an external tool dependency into your project. *Years ago I used perl c2ph/pstruct to dump the meta info from the output of gcc -gstabs; that is less intrusive, but needs more work, though it worked perfectly for me.

Regarding the boost/__cxa approach, once you figure out all the small details, adding/changing structs or fields is simple to maintain.
We currently use it to build a custom types binding layer on top of D-Bus, to serialize the API and hide the transport/RPC details for a managed object service subsystem.

A: Not a general one, but Qt supports this via a meta compiler, and it is GPL. My understanding from talking to the Qt people was that this isn't possible with pure C++, hence the need for the moc.

A: There is a new project providing reflection in C++ using a totally different approach: CAMP. https://github.com/tegesoft/camp CAMP doesn't use a precompiler; the classes/properties/functions/... are declared manually using a syntax similar to boost.python or luabind. Of course, people can use a precompiler like gccxml or open-c++ to generate this declaration if they prefer. It's based on pure C++ and boost headers only, and thanks to the power of template meta-programming it supports any kind of bindable entity (inheritance and strange constructors are not a problem, for example). It is distributed under the MIT licence (previously LGPL).

A: This is what you get when C++ meets reflection: whatever you choose, it'll probably have horrible macros, hard-to-debug code or weird build steps. I've seen one system automatically generate the serialisation code from DevStudio's PDB file. Seriously though, for small projects it'll be easier to write save/load functions (or use streaming operators). In fact, that might hold for big projects too - it's obvious what's going on, and you'd usually need to change code anyway if the structure changes.

A: An automatic introspection/reflection toolkit: it uses a meta compiler like Qt's and adds meta information directly into object files. Intuitive and easy to use. No external dependencies. It even allows you to automatically reflect std::string and then use it in scripts. Please visit IDK
{ "language": "en", "url": "https://stackoverflow.com/questions/87932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: JavaScript and why capital letters sometimes work and sometimes don't In Notepad++, I was writing a JavaScript file and something didn't work: an alert had to be shown when a button was clicked, but it wasn't working. I had used the auto-complete plugin provided with Notepad++, which presented me with onClick. When I changed the capital C to a small c, it did work. So first of all, when looking at the functions in the auto-completion, I noticed a lot of functions using capitals. But when you change getElementById to getelementbyid, you also get an error, and to make matters worse, my handbook from school writes all the stuff with capital letters but the solutions are all done in small letters. So what is it with JavaScript and its selective nature towards which functions can have capital letters in them and which can't?

A: A few objects in IE aren't always case-sensitive, including some/most/all ActiveX - which is why both XHR.onReadyStateChange and XHR.onreadystatechange would work fine in IE5 or IE6, but only the latter would work with the native XMLHttpRequest object in IE7, FF, etc. But here is a quick reference for "standard" API casing:

* *UPPERCASE - Constants (generally symbolic, since const isn't globally supported) *Capitalized - Classes/Object functions *lowercase - Events *camelCase - everything else

No 100% guarantees. But, majority-wise, this is accurate.

A: JavaScript is ALWAYS case-sensitive; HTML is not. It sounds as though you are talking about whether HTML attributes (e.g. onclick) are or are not case-sensitive. The answer is that the attributes are not case-sensitive, but the way that we access them through the DOM is. So, you can do this:

<div id='divYo' onClick="alert('yo!');">Say Yo</div> // Upper-case 'C'

or:

<div id='divYo' onclick="alert('yo!');">Say Yo</div> // Lower-case 'C'

but through the DOM you must use the correct case.
So this works:

document.getElementById('divYo').onclick = function() { alert('yo!'); }; // Lower-case 'C'

but you cannot do this:

document.getElementById('divYo').onClick = function() { alert('yo!'); }; // Upper-case 'C'

EDIT: CMS makes a great point that most DOM methods and properties are in camelCase. The one exception that comes to mind is event handler properties, and these are generally accepted to be the wrong way to attach to events anyway. Prefer using addEventListener, as in:

document.getElementById('divYo').addEventListener('click', modifyText, false);

A: JavaScript API methods are almost all called with lowerCamelCase names, and JavaScript is case-sensitive.

A: JavaScript should always be case-sensitive, but I've seen cases in Internet Explorer where it tolerates all upper case for some function names but not others. I think it is limited to functions that also exist in Visual Basic, as there is some odd inbreeding between the interpreters. Clearly this behavior should be avoided, unless of course your intention is to make code that only runs in one browser :)
{ "language": "en", "url": "https://stackoverflow.com/questions/87934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: endian-ness of new Macs - are all PC platforms the same now? Does the change of Macs over to Intel chips mean we are done with the bit twiddling on numbers in binary resources for cross-platform data distributions? Is that the last of this problem, or are there some other platforms I'm not aware of?

A: You seem to forget that endianness transcends processor architectures. There are plenty of algorithms and protocols that demand a particular byte order. For example, I spent two weeks trying to get an MD5 hashing algorithm to work, only to realize that I had assumed network byte order (big endian) while Ronald Rivest had assumed (without stating so in the RFC) that the implementor would use little endian byte order.

A: Well, actually there are plenty of big endian CPUs left over. Actually, the PPC is not dead. You are aware that the Xbox 360 uses PPC CPUs (and it is a good example that these CPUs are not as bad as their reputation - the Xbox 360 is anything but slow). Okay, this one may not count as a PC. But does a server count as a PC? There are still plenty of servers using Sun's UltraSPARC CPUs, which are generally big endian, though the latest models can be either big or little endian. There are many CPUs that can be either one or the other (e.g. ARM, still used in many devices like mobile phones and the like), as supporting both adds the greatest flexibility for the hardware and for the software vendors. Even the IA-64 CPUs (the Itanium, which was intended to replace x86 before AMD invented x86-64; it was true 64-bit and could only emulate 32-bit, unlike x86-64, which can be both) are among the CPUs that can be switched to big endian. CPUs that can be both are called bi-endian. Actually, if you ignore Intel (and compatible CPUs) for a second, most CPUs on the market are either big endian or at least bi-endian, though most of these are not used in any consumer PCs as far as I know. However, I see no endian problem as many programmers do.
Every modern CPU can swap endianness in hardware. In fact, if you wrote a program on a little-endian Intel CPU that swapped the endianness of every integer read from memory, and again when writing back to memory, it might cause a performance penalty as small as 5%; and in practice you only need to swap endianness for data coming into and going out of your application, as within your application the endianness is constant, of course. Also note: almost all network protocols I know specify the byte order to be big endian, TCP/IP being the most familiar family. So if you work on the lower network layers, you will always have to keep swapping bytes. A: I was wondering the same thing: since Macs are now Intel, is the endian issue dead? Nope. Aside from certain supercomputers (which, let's face it, we lay-folk will never have to deal with), there is still one major area where big-endian order is used: network protocols, particularly the Internet Protocol (as in the "IP" of TCP/IP). A: This is certainly not the last of this problem, particularly if you are writing for embedded systems, including Pocket PCs, etc. MIPS, ARM, and other architectures can be bi-endian, selecting their endianness on system start-up. If you're writing code that depends on byte ordering, you need to care about endianness. Don't expect this "problem" to go away anytime soon. A: Pesky x86s dirtying up my memory registers with their segment pointers! ;) I believe you don't need to flip words between PCs and Macs anymore, assuming you're eschewing backwards compatibility with PowerPC. A: Now, more than ever, a person's main computer is less likely to be a desktop computer running a general-purpose operating system. Although that is still quite common, many other folks are using smartphones or UMPC devices that are purpose-built, e.g. for browsing the web. These platforms do not necessarily have x86 CPUs.
More often, especially with smartphone devices, they are using an ARM core, which is bi-endian (it can run in either byte order). A: Define PC - what do you consider a PC? I am currently typing this from a Linux distribution running on an ARM9 processor, which can be set to either endianness, but the default is big endian. Little endian is used by Intel, AMD, and VIA (x86-compatible). Endianness won't go away any time soon; any time you transmit anything over the network, you have to make sure that it is in the right byte order, since the byte order specified by the Internet Protocol is big endian. See the Wikipedia article on Endianness for more information.
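The byte-order behaviour these answers describe is easy to demonstrate. Here is a small Python sketch (the value is arbitrary) using the standard struct module, whose format prefixes ">", "<", and "=" select big-endian, little-endian, and native order respectively:

```python
import struct
import sys

value = 0x12345678

big = struct.pack(">I", value)     # big-endian / network order: high byte first
little = struct.pack("<I", value)  # little-endian order (x86): low byte first
native = struct.pack("=I", value)  # whatever byte order the host CPU uses

assert big == b"\x12\x34\x56\x78"
assert little == b"\x78\x56\x34\x12"
assert big == little[::-1]         # swapping endianness just reverses the bytes

# sys.byteorder reports the host convention, so portable code can decide
# whether data read in network (big-endian) order needs swapping.
assert native == (little if sys.byteorder == "little" else big)
```

Always serializing with ">" and deserializing with ">" on every platform is exactly the "swap only at the boundaries" advice above: inside the program the integers are plain values, and only the serialized bytes carry a byte order.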
{ "language": "en", "url": "https://stackoverflow.com/questions/87935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: How do you overcome the svn 'out of date' error? I've been attempting to move a directory structure from one location to another in Subversion, but I get an Item '*' is out of date commit error. I have the latest version checked out (so far as I can tell). svn st -u turns up no differences other than the mv commands. A: I sometimes get this with TortoiseSVN on Windows. The solution for me is to svn update the directory, even though there are no revisions to download or update. It does something to the metadata, which magically fixes it. A: Thank you. That just resolved it for me. svn update --force /path to filename/ If your recent file in the local directory is the same, there are no prompts. If the file is different, it prompts for tf, mf, etc. Choosing mf (mine full) ensures nothing is overwritten, and I could commit when done. Jay CompuMatter A: I managed to solve it by hitting the update button. A: After trying all the obvious things, and some of the other suggestions here, with no luck whatsoever, a Google search led to this link (link not working anymore) - Subversion says: Your file or directory is probably out-of-date. In a nutshell, the trick is to go to the .svn directory (in the directory that contains the offending file) and delete the "all-wcprops" file. Worked for me when nothing else did. A: Like @Alexander-Klyubin suggests, do the move in the repository. It's also going to be much faster, especially if you have a large amount of data to move, because you won't have to transfer all that data over the network again. svn mv https://username@server/svn/old/ https://username@server/svn/new/ should work just fine. A: I believe this problem comes from the .svn metadata. It's incorrect in either the old parent, the new parent, or both. I would try reverting back to your starting point. Use an export to get a clean copy of the folder. Move the clean copy to the new location, and use an add and delete to do the move.
That's manually doing what SVN does, but it might work. A: I've found that this works for me: svn update svn resolved <dir> svn commit A: Remove your file or path (before executing the command, make a backup of your changes): sudo rm -r /path/to/dir/ Afterwards: svn up, then commit or delete. A: Are you sure you've checked out the head and not a lower revision? Also, have you done an update to make sure you've got the latest version? There's a discussion about this on http://svn.haxx.se/users/archive-2007-01/0170.shtml. A: Perform the move directly in the repository. A: There is at least one other cause of the "out of date" error message. In my case the problem was .svn/dir-props, which was created by running "svn propset svn:ignore -F .gitignore ." for the first time. Deleting .svn/dir-props seems like a bad idea and can cause other errors, so it may be best to use "svn propdel" to clean up the errant "svn propset". # Normal state, works fine. > svn commit -m"bump" Sending eac_cpf.xsl Transmitting file data . Committed revision 509. # Set a property, but forget to commit. > svn propset svn:ignore -F .gitignore . property 'svn:ignore' set on '.' # Edit a file. Should have committed before the edit. > svn commit -m"bump" Sending . svn: Commit failed (details follow): svn: File or directory '.' is out of date; try updating svn: resource out of date; try updating # Delete the property. > svn propdel svn:ignore . property 'svn:ignore' deleted from '.'. # Now the commit works fine. > svn commit -m"bump" Sending eac_cpf.xsl Transmitting file data . Committed revision 510. A: If you're using the GitHub svn bridge, it is likely because something changed on GitHub's side of things. The solution is simple: run svn switch, which lets it properly find itself, then update and everything will work.
Just run the following from the root of your checkout: svn info | grep Relative svn switch path_from_previous_command svn update or svn switch `svn info | grep Relative | sed 's_.*: __'` svn update The basis for this solution comes from Lee Preimesberger's blog. A: A "Clean Up" will get you back on track. Right-click on the svn folder and click 'Clean Up'; do this if you get that error. A: I tried updating the local copy and reverting the item in question, and still got the 'out of date' error. This worked for some reason: svn update --force /path/to/dir/or/file A: I just had the same problem in several folders, and this is what I did to commit: 1) In the "Team Synchronize" perspective, right-click on the folder > Override and Update 2) Delete the folder again 3) Commit and be happy A: Are you moving it using svn mv, or just mv? I think using just mv may cause this issue. A: I moved the dir to my local machine for safe-keeping, then svn deleted the stupid directory, then committed. When I tried to add the folder from my local machine it STILL threw the error (SVN move did the same thing when I tried to rename the folder). So I reverted, then I did a mkdir DIRNAME, added, and committed. Then I added the contents in and committed, and it worked. A: I randomly received this error after deleting a few directories, each containing some files. I deleted the directories through NetBeans and realised that it didn't actually delete them. It seemed to just delete everything inside the directories and remove the reference to the directory within NetBeans. They did still exist on the filesystem, though. Make sure they're deleted from the filesystem and try the commit again. A: I once solved a similar issue by simply checking out a new working copy and replacing the .svn directory throwing the commit errors with the newly checked-out one.
The reason in my case was that after a repository corruption and restore from a backup, the working copy was pointing towards a revision that didn't exist in the restored repository. That also produced "item out of date" errors. Updating the working copy before commit didn't solve this, but replacing the .svn as described above did. A: In my case, only deleting the local version and checking out a fresh copy solved it. A: I did this and it worked for me: 1. Take a back-up of your file. You can simply copy your code to a text file. 2. Right-click the file you want to commit >> Team >> Show History. 3. In the "Show History" panel you will see all the revisions of that file. Right-click on the latest revision of the file >> Get Revision: it will override your local changes. 4. Now, merge your code in the latest file with the back-up file (step #1). 5. Synchronise and commit the newly merged file. A: Upgrade your server and client to Subversion 1.9. If the out-of-date error randomly occurs on commit when it normally should not, it may indicate that you are using an outdated and unsupported Subversion 1.7 or older client or server. You should upgrade the server and clients in order to solve the problem. See the relevant Subversion 1.9 Release Notes entry: "Out of date" errors when committing over HTTPv1. A: The error occurs because you didn't update that particular file; update first, and only then can you commit the file. A: I tried everything except changing .svn directly. Nothing helped, so here's my solution. In Eclipse > Window > Show View > History, I saw that the file was not at the newest revision, although I did multiple svn "Override & Update" / "Revert" / delete-file-and-checkout rounds. So I went Package Explorer > Right-click on file > Replace with > Latest from Repository. Another look in the History view showed that the file was now at the latest revision. A: This happened when I updated a branch of an earlier release with files from the trunk.
I used Windows Explorer to copy folders from my trunk checkout folder, and pasted them into my Eclipse view of the release branch checkout folder. Now, Windows Explorer was configured not to show "hidden" files starting with ".", so I was oblivious to all the incorrect .svn files being pasted into my release branch checkout folder. Doh! My solution was to blow away the damaged Eclipse project, check it out again, and then copy the new files in more carefully. I also changed Windows to show "hidden" files. A: I got this error when trying to commit some files, only it was a file/folder that didn't exist in my working copy. I REALLY didn't want to go through the hassle of moving the files and re-checking out; in the end, I ended up editing the .svn/entries file and removing the offending directory reference. A: I just got this error. What I recommend is that you first check on your server whether the original file is there. Sometimes the changes aren't made in your local folder. If this is your situation, just delete your folder and check out again. A: To solve it, I needed to revert the problem file and update my working copy; later I modified the file again, and after these steps the error didn't happen anymore. A: Just do svn up on the command line, or if you are on Windows, select the svn update option. Once this is done, it will allow you to take further actions like committing. A: I just got this while I was trying to commit from a trunk directory. Doing svn update from the trunk directory did not solve the error; however, doing svn update from the parent directory (where the .svn directory belongs) did solve the error. My guess about what happened (one use case among others; there may be multiple reasons for this "svn: E160024: resource out of date; try updating"): alongside trunk, there was a branches directory. I pulled a branches/branch-1 into master from GitHub.
Doing svn update from the parent directory (that is, the root of my working copy) instead of trunk seems to have done something in branches in addition to trunk. When I tried to commit again, there was no error. However, as I said above, this is one case among probably many others. Side note: unlike what someone suggested, I don't believe it's a good idea to play around manually in the .svn directory. A: It is easier to do this: 1) Copy your modified code into a notepad. 2) Next, update the file. 3) Copy the code from the notepad back into the updated file. 4) Commit in svn.
{ "language": "en", "url": "https://stackoverflow.com/questions/87950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "347" }
Q: If you could recommend only one blog on software testing, which one would it be? I found a question here about blogs on software development, but I would like to know which blogs on software testing this community reads. If you just have to recommend more than one blog, post each one in a separate answer, so others can vote on a specific blog. :) Thanks! Edit: I am not interested in sites that aggregate other blogs, because as @Alan said (in his answer) there are both good and not so good blogs there. A: My blog, of course, is quite interesting - but will not be to everyone. TestingReflections is nice because it aggregates a bunch of random test blogs, but the problem is that it aggregates the bad with the good. Many of the posts that make it to the site don't have much use. It also depends on what you're looking for - are you looking for a blog on testing philosophy, one about functional test techniques, something about writing automated tests, something all-encompassing, or something different? A: Abakas. It's written by my boss, but I'd recommend it even if that weren't the case. A: Google Testing Blog A: James Bach at http://www.satisfice.com/blog/ . Very good. A: For the Russian-speaking community it is definitely: http://it4business.ru/forum/index.php?showforum=192 But I think everybody knows that :)
{ "language": "en", "url": "https://stackoverflow.com/questions/87957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: C#: Is Implicit Arraylist assignment possible? I'd like to populate an ArrayList by specifying a list of values just like I would an integer array, but am unsure of how to do so without repeated calls to the "Add" method. For example, I want to assign { 1, 2, 3, "string1", "string2" } to an ArrayList. I know for other arrays you can make the assignment like: int[] IntArray = {1,2,3}; Is there a similar way to do this for an ArrayList? I tried the AddRange method but the curly brace method doesn't implement the ICollection interface. A: ArrayList has a constructor which accepts ICollection, which is implemented by the Array class. object[] myArray = new object[] {1,2,3,"string1","string2"}; ArrayList myArrayList = new ArrayList(myArray); A: Depending on the version of C# you are using, you have different options. C# 3.0 has collection initializers, detailed at Scott Gu's blog. Here is an example of your problem: ArrayList list = new ArrayList {1,2,3}; And if you are initializing a collection object, most have constructors that take similar arguments to AddRange, although again, as you mentioned, this may not be an option. A: (kind of answering my own question but...) The closest thing I've found to what I want is to make use of the ArrayList.Adapter method: object[] values = { 1, 2, 3, "string1", "string2" }; ArrayList AL = new ArrayList(); AL = ArrayList.Adapter(values); //or during initialization ArrayList AL2 = ArrayList.Adapter(values); This is sufficient for what I need, but I was hoping it could be done in one line without creating the temporary array as someone else had suggested. A: Your comments imply you chose ArrayList because it was the first component you found. Assuming you are simply looking for a list of integers, this is probably the best way of doing that. List<int> list = new List<int>{1,2,3};
List<int> list = new List<int>(new int[] {1, 2, 3}); The int[] initializer format may not be valid in older versions; you may have to specify the number of items in the array. A: I assume you're not using C# 3.0, which has collection initializers. If you're not bothered about the overhead of creating a temp array, you could do it like this in 1.1/2.0: ArrayList list = new ArrayList(new object[] { 1, 2, 3, "string1", "string2"});
{ "language": "en", "url": "https://stackoverflow.com/questions/87970", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Xcode 3.1.1 and static libraries I'm an experienced VS.NET user and trying to get up and running on Xcode 3.1.1. Here's what I'm trying to accomplish: I'd like a static library ("Lib") to have its own xcodeproj file. I'd like an executable application ("App") that makes use of Lib to reference Lib's xcodeproj file so that changes to Lib cause App to relink. Ideally, I'd like to be able to edit Lib's source files inside App's Xcode workspace so I don't have to task around all the time to make changes. I figured out from the online help that I can simply drag the static lib xcodeproj into my app's project and it gets the reference. I see that once my static lib xcodeproj is in my app's project, I can simply drag it to the App's target and it understands that App depends on Lib. This seems like the right path, but things aren't quite working the way I'd like yet. Here are my questions: * *It seems that simply having App depend on Lib doesn't cause App to link with Lib. It seems that I have to explicitly drag libLib.a from the Lib folder into App's "Link Binary With Libraries" build stage. In VS.NET, simply specifying the project as a solution dependency adds it to the link line. I just want to make sure I'm not missing anything. *When I have App open in Xcode and I drag Lib.xcodeproj into it, I don't get any of Lib's source files there. I only get libLib.a under the "Lib.xcodeproj" folder. In VS.NET, I can edit Lib's source files right there and rebuild it, etc... but with this approach in Xcode, changes to Lib.cpp don't cause Lib to rebuild when I rebuild App. Ideally, I'd get all of Lib's source files and targets to show up when I drag Lib.xcodeproj into App. Is there any way of doing this? Thanks in advance for any responses! A: You're correct that making target A depend upon target B (whether within the same project or across projects) does not cause target A to link against target B.
You need to specify them distinctly; this is because they're separate concepts, and you might have dependencies between targets that you don't want to link to each other — for example, a command-line tool that gets built by target C and is used as part of the build process for target A. Also, you're correct that referencing project B from within project A will not let you see project B's source code in project A's window. That's because Xcode does not have the same "workspace" model that Visual Studio and Eclipse do; you above alluded to the existence of "a workspace containing project A" but Xcode doesn't really have any such thing, just a window representing project A. A: Open the App project. Right-click on the App target and choose "Get Info." Then go to the "General" tab and find "Direct Dependencies." Click the ( + ) (plus sign) button to add a direct dependency. The Lib.xcodeproj should appear among a list of possibilities for you. Choose the Lib target from that list. That should ensure that the Lib project builds (or rebuilds) when you build the App target. (Editing my own post now. I realize I said nothing about point number 2 in the question. I am actually still thinking about number 2. I am not sure if that is possible or not.) A: I am also a fairly new user of Xcode. Most of what I know I learned from an Xcode book by James Bucanek (ISBN 047175479x). It is an older book that was written for/with Xcode 2.2, but I find that pretty much all of it still applies for me today, and I currently use Xcode 3.1. You can probably find a cheap used copy if you are interested. A: I'm also a novice with Xcode 3.1. I just played with the issues you mentioned and found that there is no problem regarding your second question. Whatever application you use to edit the dependent library's source code, your main project will rebuild the dependent target. I checked it by: * *Editing the source file of the library your app depends on with a notepad application.
*Selecting the dependent library project reference, right-clicking, and choosing 'Open With Finder', then selecting the wanted source file and editing it. Everything worked well.
{ "language": "en", "url": "https://stackoverflow.com/questions/87979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Sql Server 2005 error handling - inner exception In C# you can get the original error and trace the execution path (stack trace) using the inner exception that is passed up. I would like to know how this can be achieved using the error handling try/catch in sql server 2005 when an error occurs in a stored procedure nested 2 or 3 levels deep. I am hoping that functions like ERROR_MESSAGE(), ERROR_LINE(), ERROR_PROCEDURE(), ERROR_SEVERITY() can be easily passed up the line so that the top level stored proc can access them. A: The best way to handle this is using OUTPUT parameters and XML. The sample code below will demonstrate how and you can modify what you do with the XML in the TopProcedure to better handle your response to the error. USE tempdb go CREATE PROCEDURE SubProcedure @RandomNumber int, @XMLErrors XML OUTPUT AS BEGIN BEGIN TRY IF @RandomNumber > 50 RaisError('Bad number set!',16,1) else select @RandomNumber END TRY BEGIN CATCH SET @XMLErrors = (SELECT * FROM (SELECT ERROR_MESSAGE() ErrorMessage, ERROR_LINE() ErrorLine, ERROR_PROCEDURE() ErrorProcedure, ERROR_SEVERITY() ErrorSeverity) a FOR XML AUTO, ELEMENTS, ROOT('root')) END CATCH END go CREATE PROCEDURE TopProcedure @RandomNumber int AS BEGIN declare @XMLErrors XML exec SubProcedure @RandomNumber, @XMLErrors OUTPUT IF @XMLErrors IS NOT NULL select @XMLErrors END go exec TopProcedure 25 go exec TopProcedure 55 go DROP PROCEDURE TopProcedure GO DROP PROCEDURE SubProcedure GO The initial call to TopProcedure will return 25. The second will return an XML block that looks like this: <root> <a> <ErrorMessage>Bad number set!</ErrorMessage> <ErrorLine>6</ErrorLine> <ErrorProcedure>SubProcedure</ErrorProcedure> <ErrorSeverity>16</ErrorSeverity> </a> </root> Enjoy A: One way you could do this would be to create an in memory table and insert rows into it when you catch an exception. 
You would then re-raise the exception, and the next function up the chain would have a chance to handle the exception or also log it to the in-memory table. It's nasty, but unfortunately there doesn't seem to be a way to get the T-SQL call stack :(
{ "language": "en", "url": "https://stackoverflow.com/questions/87986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Voice Recognition Software For Developers Well, the docs finally said it: I need to take it easy on my wrist for a few months. Being that I'm a .NET developer, this could end my livelihood for a little while, something I'm not anxious to do. That said, are there any good hands-free options for developers? Anyone had success using any of the speech recognition software out there? POSTSCRIPT: I've recovered my arm again to the point where two-handed programming isn't a problem. Dragon NaturallySpeaking worked well enough, but was slower - not like the keyboard, where I was programming faster than I thought. A: Check out Using Python to Code by Voice. A: Another idea is to find another good developer to pair program with. It worked really well for me. I get to rest my hands without necessarily slowing down, and end up producing better quality code - or at least not having to review as much of it. A: For all Linux folks, I'd like to share some links. Let's begin with Simon - open source speech recognition software: * *Simon listens - non-profit organization for research and apprenticeship *simon: Open-Source Speech Recognition - related blog *HTK speech recognition toolkit - engine used internally *Open-Source Large Vocabulary CSR Engine Julius *Notes on Slashdot about Simon features and some others: * *Gnome Voice Control *https://wiki.ubuntu.com/SpeechRecognition *http://en.wikipedia.org/wiki/Speech_recognition_in_Linux *VoiceCode and * *Related Stack Overflow question about text to speech recognition tools for Linux. A: I know I am a little bit off-topic here, and know nothing about voice recognition software; however, you might find it useful to investigate changing your keyboard to the Dvorak layout, which I have heard is a lot kinder on the wrists. http://en.wikipedia.org/wiki/Dvorak_Simplified_Keyboard A: I tried Dragon a couple of years ago and it was a nightmare of mish-mashed words and phrases - not recommended.
I understand that it was the best thing going at that point in time, so I'm not optimistic. As a fellow sufferer, my recommendations would be: * *Find a job that demands as little OT as possible *Try a variety of keyboards. In my experience, working on a laptop full-time worked best. *Start a program of low-moderate stress weight lifting. A: As to the wrist issue, I learned to use the mouse with both hands some 10 years back. It's surprisingly easy, and relieves the tension substantially. Currently, I'm using a laptop, and pressing the touchpad button is straining my thumb. Be careful. These problems can last way longer than one would think. P.S. You might add a tag 'ergonomics' or something - the title can be read as being about developing for voice recognition. A: As mentioned above, Dragon NaturallySpeaking is the best speech recognition software out there; however, Microsoft Speech Recognition isn't far behind and comes bundled with Vista. Vocola has recently been ported to MSR, and has a .NET integration feature. A few tips: * *Learning to dictate takes some time. Just because you can speak doesn't mean you know how to use speech recognition software. *Getting proficient with a mix of SR and keyboard/mouse is much easier than full hands-free operation. *Use CodeRush or equivalent to type less. A: The gold standard for programming by voice is VoiceCode. If I remember correctly, it supports C++ and Python. A: It's out there, and it works... There are quite a few speech recognition programs out there, of which Dragon NaturallySpeaking is, I think, one of the most widely used ones. I've used it myself, and have been impressed with its quality. That being a couple of years ago, I guess things have improved even further by now. ...but it ain't easy... Even though it works amazingly well, I won't say it's an easy solution. It takes time to train the program, and even then, it'll make mistakes.
It's painfully slow compared to typing, so I had to keep saying to myself "Don't grab the keyboard, don't grab the keyboard, ..." (after which I'd grab the keyboard anyway). I myself tend to mumble a bit, which didn't make things much better, either ;-). Especially the first weeks can be frustrating. You can even get voice-related problems if you strain your voice too much. ...especially for programmers! All in all, it's certainly a workable solution for people writing normal text/prose. As a programmer, you're in a completely different realm, for which there are no real solutions. Things might have changed by now, but I'd be surprised if they have. What's the problem? Most SR software is built to recognize normal language. Programmers write very cryptic stuff, and it's hard, if not impossible, to find software that does the conversion between normal language and code. For example, how would you dictate: if (somevar == 'a') { print('You pressed a!'); } Using the commands in your average SR program, this is a huge pain: "if space left paren equal sign equal sign apostrophe spell a apostrophe ...". And I'm not even talking about navigating your code. Ever noticed how much you're using the keyboard while programming, and how different that usage is from how a 'normal' user uses the keyboard? How to make the best of it Thus far, I've only worked with Dragon NaturallySpeaking (DNS), so I can only speak for that product. There are some interesting add-ons and websites targeted at people like programmers: * *Vocola is an unofficial plugin that allows you to easily add your own commands to DNS. I found it essential, basically. You'll also be able to find command sets written by other programmers, e.g. for navigating code. It's based on a software package written in Python, so there are also some more advanced and fancy packages around. Also check out Vocola's Resources page.
(Warning: when I used it, there were some problems with installing Vocola; check out the newsgroup below for info!) *SpeechComputing.com is a forum/newsgroup with lots of interesting discussions. A good place to start. Closing remarks It seems that the best solution to this problem is, really: * *Find ways around actual coding. *Try to recover. I'm somewhat reluctant to recommend this book, but it seems to work amazingly well for people with RSI/carpal tunnel and other chronic pain issues: J.E. Sarno, The Mindbody Prescription. I'm working with it right now, and I think it's definitely worth reading. A: I started using my left hand for the mouse. This not only helped me a bit, but allowed me to use my right hand more freely; if you write a lot of stuff down while you code, this helps you a lot - you can scroll and write at the same time. When my problems began I put a water bag under my wrist, and I loved it! The bag I had was perfect: it was long, and I put it in front of the keyboard so I could rest my wrists there... until one day I stepped on it... A: Dragon NaturallySpeaking Preferred and Vocola. AutoHotkey to automate as much as possible. Not easy to program, though. I tried; almost impossible. Check out John Sarno's Healing Back Pain. It made me better. I'm back to programming all day! A: Another bit off-topic here: I've found that keyboards split into two parts and other special keyboards help; just check out Kinesis. I collected info about such hardware at Diigo: * *http://www.diigo.com/user/wierzowiecki/keyboard%20ergonomics *http://www.diigo.com/user/wierzowiecki/keyboard%20rsi What about direct links: * *http://www.fentek-ind.com/ergo.htm - different stuff *http://www.kinesis-ergo.com/contoured.htm - Kinesis Advantage Contoured Keyboards (Programmable!)
*http://www.kinesis-ergo.com/foot.htm - foot switches that can be connected to the keyboard (for example, for window switching) *http://www.kinesis-ergo.com/freestyle.htm - So far, I've found that even using two normal keyboards is better than one, so I think the split Freestyle might work as well *http://www.typematrix.com/ - another solution *http://www.maltron.com/ - for typing with one hand, sometimes needed when one hand suffers more than the other *http://www.keybowl.com/ - this looks interesting as well *I believe there are many other interesting solutions One more thing! Remember to take breaks for exercises. Regular exercises (for example, a small exercise every half an hour, a different one each time) really make things better! * *http://www.workrave.org/ - this will remind you about breaks ;) A: I dictate VB.NET and T-SQL using Dragon NaturallySpeaking 10 Professional. VB.NET is inherently closer to a "spoken" language, but I don't see any reason why it couldn't work for C# or others. I start with a completely empty vocabulary, and build it from scratch to suit my needs (which is why I use the professional version). Here are the basic steps (this assumes you have already created and trained a user): * *Create a new vocabulary based on "Base General - Empty Dictation". *Don't have it scan your documents or email. *Add lists of keywords with pronunciation specific to your programming language (Dim, ByVal\by-val, etc.). *Create a .txt document that contains all of your code minus comments. *Harvest words from this document and add them with pronunciations. *Use the document to train the vocabulary's language model. I'll write up something with more detail when I get a chance, if anyone is interested. Edit: Here's how to dictate SQL code. The word list created here can be included in other vocabularies if you are a database developer. A: Scott Hanselman uses voice recognition quite a bit. A: I used Dragon Dictate in 1996 for the same reason as you.
It was slow going, but better than not working. I found it easier to write code by filling a 4x8 whiteboard with code and then getting someone else to type it in. Then I used DD to debug. And while you're at it, you might look at Deborah Quilter's books about RSI. They're very informative. A: I can't find a link to one (I did look) but there are keyboards with only 5 keys, allowing you to type with one hand; I assume that you only have one bad wrist. If I find a link I'll try to message you.
{ "language": "en", "url": "https://stackoverflow.com/questions/87999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: Unit testing a method that can have random behaviour I ran across this situation this afternoon, so I thought I'd ask what you guys do. We have a randomized password generator for user password resets and while fixing a problem with it, I decided to move the routine into my (slowly growing) test harness. I want to test that passwords generated conform to the rules we've set out, but of course the results of the function will be randomized (or, well, pseudo-randomized). What would you guys do in the unit test? Generate a bunch of passwords, check they all pass and consider that good enough? A: A unit test should do the same thing every time that it runs, otherwise you may run into a situation where the unit test only fails occasionally, and that could be a real pain to debug. Try seeding your pseudo-randomizer with the same seed every time (in the test, that is--not in production code). That way your test will generate the same set of inputs every time. If you can't control the seed and there is no way to prevent the function you are testing from being randomized, then I guess you are stuck with an unpredictable unit test. :( A: The function is a hypothesis that for all inputs, the output conforms to the specifications. The unit test is an attempt to falsify that hypothesis. So yes, the best you can do in this case is to generate a large amount of outputs. If they all pass your specification, then you can be reasonably sure that your function works as specified. Consider putting the random number generator outside this function and passing a random number to it, making the function deterministic, instead of having it access the random number generator directly. This way, you can generate a large number of random inputs in your test harness, pass them all to your function, and test the outputs. If one fails, record what that value is so that you have a documented test case. 
A: In addition to testing a few to make sure that they pass, I'd write a test to make sure that passwords that break the rules fail. Is there anything in the codebase that's checking the passwords generated to make sure they're random enough? If not, I may look at creating the logic to check the generated passwords, testing that, and then you can state that the random password generator is working (as "bad" ones won't get out). Once you've got that logic you can probably write an integration-type test that would generate boatloads of passwords and pass them through the logic, at which point you'd get an idea of how "good" your random password generator is. A: Well, considering they are random, there is no real way to make sure, but testing 100,000 passwords should clear most doubts :) A: You can also look into mutation testing (Jester for Java, Heckle for Ruby) A: You could seed your random number generator with a constant value in order to get non-random results and test those results. A: I'm assuming that the user-entered passwords conform to the same restrictions as the randomly generated ones. So you probably want to have a set of static passwords for checking known conditions, and then you'll have a loop that does the dynamic password checks. The size of the loop isn't too important, but it should be large enough that you get that warm fuzzy feeling from your generator, but not so large that your tests take forever to run. If anything crops up over time, you can add those cases to your static list. In the long run though, a weak password isn't going to break your program, and password security falls in the hands of the user. So your priority would be to make sure that the dynamic generation and strength-check doesn't break the system. 
A: Without knowing what your rules are it's hard to say for sure, but assuming they are something like "the password must be at least 8 characters with at least one upper case letter, one lower case letter, one number and one special character" then it's impossible even with brute force to check sufficient quantities of generated passwords to prove the algorithm is correct (as that would require somewhere over 8^70 = 1.63x10^63 checks depending on how many special characters you designate for use, which would take a very, very long time to complete). Ultimately all you can do is test as many passwords as is feasible, and if any break the rules then you know the algorithm is incorrect. Probably the best thing to do is leave it running overnight, and if all is well in the morning you're likely to be OK. If you want to be doubly sure in production, then implement an outer function that calls the password generation function in a loop and checks it against the rules. If it fails then log an error indicating this (so you know you need to fix it) and generate another password. Continue until you get one that meets the rules. A: In my humble opinion you do not want a test that sometimes passes and sometimes fails. Some people may even consider that this kind of test is not a unit test. But the main idea is to be sure that the function is OK when you see the green bar. With this principle in mind you may try to execute it a reasonable number of times so that the chance of a false pass is almost zero. However, one single failure of the test will force you to make more extensive tests apart from debugging the failure. A: Either use a fixed random seed or make it reproducible (i.e.: derive it from the current day) A: Firstly, use a seed for your PRNG. Your input is no longer random and gets rid of the problem of unpredictable output - i.e. now your unit test is deterministic. 
This doesn't however solve the problem of testing the implementation, but here is an example of how a typical method that relies upon randomness can be tested. Imagine we've implemented a function that takes a collection of red and blue marbles and picks one at random, but a weighting can be assigned to the probability, i.e. weights of 2 and 1 would mean red marbles are twice as likely to be picked as blue marbles. We can test this by setting the weight of one choice to zero and verifying that in all cases (in practice, for a large amount of test input) we always get e.g. blue marbles. Reversing the weights should then give the opposite result (all red marbles). This doesn't guarantee our function is behaving as intended (if we pass in an equal number of red and blue marbles and have equal weights do we always get a 50/50 distribution over a large number of trials?) but in practice it is often sufficient.
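Putting the two suggestions together - seed the PRNG and inject it from outside - a minimal sketch in Java (the generator and its rules here are invented for illustration, not the poster's actual code):

```java
import java.util.Random;

public class PasswordGeneratorTest {

    // Hypothetical generator: the Random is injected, so a test can pass a
    // seeded instance and get the same "random" password on every run.
    static String generatePassword(Random rng, int length) {
        String alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                        + "abcdefghijklmnopqrstuvwxyz"
                        + "0123456789!@#$%";
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(alphabet.charAt(rng.nextInt(alphabet.length())));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Deterministic: the same seed must always produce the same password.
        String first = generatePassword(new Random(42L), 10);
        String second = generatePassword(new Random(42L), 10);
        if (!first.equals(second)) {
            throw new AssertionError("seeded runs differ");
        }

        // And we can still hammer the rules with many different seeds.
        for (int seed = 0; seed < 1000; seed++) {
            String p = generatePassword(new Random(seed), 10);
            if (p.length() != 10) {
                throw new AssertionError("rule violated for seed " + seed);
            }
        }
        System.out.println("all checks passed");
    }
}
```

This also helps with documenting failures: when a rule check fails, record the seed, and the exact failing case can be replayed later.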
{ "language": "en", "url": "https://stackoverflow.com/questions/88007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Make apache automatically strip off the www.? For various reasons, such as cookies, SEO, and to keep things simple, I would like to make apache automatically redirect any requests for http://www.foobar.com/anything to http://foobar.com/anything. The best I could come up with is a mod_rewrite-based monstrosity, is there some easy simple way to tell it "Redirect all requests for domain ABC to XYZ"? PS: I found this somewhat related question, but it's for IIS and does the opposite of what I want. Also it's still complex. A: It's as easy as: <VirtualHost 10.0.0.1:80> ServerName www.example.com Redirect permanent / http://example.com/ </VirtualHost> Adapt host names and IPs as needed :) A: simpler and easier to copy from site to site: RewriteCond %{HTTP_HOST} ^www\.(.+)$ RewriteRule ^(.*)$ http://%1/$1 [R=301,L] A: Pretty simple if you use mod_rewrite, as we all do ;) This is part of the .htaccess from my live website: RewriteEngine on # Catches www.infinite-labs.net and redirects to the # same page on infinite-labs.net to normalize things. RewriteCond %{HTTP_HOST} ^www\.infinite-labs\.net$ RewriteRule ^(.*)$ http://infinite-labs.net/$1 [R=301,L] A: Use an .htaccess file with some mod_rewrite rules: RewriteEngine On RewriteRule ^www.SERVERNAME(.*) http://SERVERNAME$1 [L,QSA] I'm not sure I got the syntax right with the $1 there, but it's well documented. L marks this as the last rule to be processed, and QSA means Query String Append. A: Since you mentioned using mod_rewrite, I'd suggest a simple rule in your .htaccess - doesn't seem monstrous to me :) RewriteCond %{HTTP_HOST} ^www\.foobar\.com$ [NC] RewriteRule ^(.*)$ http://foobar.com/$1 [L,R=301] A: RewriteEngine On RewriteCond %{HTTP_HOST} ^www.domain.com$ [NC] RewriteRule ^(.*)$ http://domain.com/$1 [R=301,L] That should do the trick.
{ "language": "en", "url": "https://stackoverflow.com/questions/88011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: C#: Import/Export Settings into/from a File What's the best way to import/export app internal settings into a file from within an app? I have the Settings.settings file, winform UI tied to the settings file, and I want to import/export settings, similar to Visual Studio's Import/Export Settings feature. A: If you are using the Settings.settings file, it's saving to the config file. By calling YourNamespace.Properties.Settings.Default.Save() after updating your settings, they will be saved to the config files. However, I have no idea what you mean by "multiple sets of settings." If the settings are user settings, each user will have their own set of settings. If you are having multiple sets of settings for a single user, you probably should not use the .settings files; instead you'll want to use a database. A: You can use a DataSet, which you bind to the form. And you can save/restore it. A: You could just use sections, or are you breaking out to other files for a specific reason? A: A tried and tested way I have used is to design a settings container class. This container class can have sub-classes for different types of setting categories. It works well since you reference your "settings" via property name and therefore if something changes in future, you will get compile time errors. It is also expandable, since you can always create new settings by adding more properties to your individual setting classes and assign default values to the private variable of a property that will be used should that specific setting not exist in an older version of your application. Once the new container is saved, the new settings will be persisted as well. Another advantage is the obvious human / computer readability of XML which is nice for settings. To save, serialize the container object to XML data, then write the data to file. To load, read the data from file and deserialize back into your settings container class. 
To serialize via standard C# code:
public static string SerializeToXMLString(object ObjectToSerialize)
{
    System.IO.MemoryStream mem = new System.IO.MemoryStream();
    System.Xml.Serialization.XmlSerializer ser = new System.Xml.Serialization.XmlSerializer(ObjectToSerialize.GetType());
    ser.Serialize(mem, ObjectToSerialize);
    return System.Text.Encoding.UTF8.GetString(mem.ToArray());
}
To deserialize via standard C# code:
public static object DeSerializeFromXMLString(System.Type TypeToDeserialize, string xmlString)
{
    byte[] bytes = System.Text.Encoding.UTF8.GetBytes(xmlString);
    System.IO.MemoryStream mem = new System.IO.MemoryStream(bytes);
    System.Xml.Serialization.XmlSerializer ser = new System.Xml.Serialization.XmlSerializer(TypeToDeserialize);
    return ser.Deserialize(mem);
}
One last nice thing about a serializable settings class is that, because it is an object, you can use IntelliSense to quickly navigate to a particular setting. Note: After you have instantiated your settings container class, you should make it a static property of another static managing class (you can call it SettingsManager if you want). This managing class allows you to access your settings from anywhere in your application (since it's static) and you can also have static functions to handle the loading and saving of the class.
{ "language": "en", "url": "https://stackoverflow.com/questions/88030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Does the unmodifiable wrapper for java collections make them thread safe? I need to make an ArrayList of ArrayLists thread safe. I also cannot have the client making changes to the collection. Will the unmodifiable wrapper make it thread safe or do I need two wrappers on the collection? A: On a related topic - I've seen several replies suggesting using synchronized collection in order to achieve thread safety. Using synchronized version of a collection doesn't make it "thread safe" - although each operation (insert, count etc.) is protected by mutex when combining two operations there is no guarantee that they would execute atomically. For example the following code is not thread safe (even with a synchronized queue): if(queue.Count > 0) { queue.Add(...); } A: The unmodifiable wrapper only prevents changes to the structure of the list that it applies to. If this list contains other lists and you have threads trying to modify these nested lists, then you are not protected against concurrent modification risks. A: It depends. The wrapper will only prevent changes to the collection it wraps, not to the objects in the collection. If you have an ArrayList of ArrayLists, the global List as well as each of its element Lists need to be wrapped separately, and you may also have to do something for the contents of those lists. Finally, you have to make sure that the original list objects are not changed, since the wrapper only prevents changes through the wrapper reference, not to the original object. You do NOT need the synchronized wrapper in this case. A: From looking at the Collections source, it looks like Unmodifiable does not make it synchronized. static class UnmodifiableSet<E> extends UnmodifiableCollection<E> implements Set<E>, Serializable; static class UnmodifiableCollection<E> implements Collection<E>, Serializable; the synchronized class wrappers have a mutex object in them to do the synchronized parts, so looks like you need to use both to get both. 
Or roll your own! A: I believe that because the UnmodifiableList wrapper stores the ArrayList to a final field, any read methods on the wrapper will see the list as it was when the wrapper was constructed as long as the list isn't modified after the wrapper is created, and as long as the mutable ArrayLists inside the wrapper aren't modified (which the wrapper can't protect against). A: An immutable object is by definition thread safe (assuming no-one retains references to the original collections), so synchronization is not necessary. Wrapping the outer ArrayList using Collections.unmodifiableList() prevents the client from changing its contents (and thus makes it thread safe), but the inner ArrayLists are still mutable. Wrapping the inner ArrayLists using Collections.unmodifiableList() too prevents the client from changing their contents (and thus makes them thread safe), which is what you need. Let us know if this solution causes problems (overhead, memory usage etc); other solutions may be applicable to your problem. :) EDIT: Of course, if the lists are modified they are NOT thread safe. I assumed no further edits were to be made. A: It will be thread-safe if the unmodifiable view is safely published, and the modifiable original is never ever modified (including all objects recursively contained in the collection!) after publication of the unmodifiable view. If you want to keep modifying the original, then you can either create a defensive copy of the object graph of your collection and return an unmodifiable view of that, or use an inherently thread-safe list to begin with, and return an unmodifiable view of that. You cannot return an unmodifiableList(synchronizedList(theList)) if you still intend to access theList unsynchronized afterwards; if mutable state is shared between multiple threads, then all threads must synchronize on the same locks when they access that state. A: This is necessary if: * *There is still a reference to the original modifiable list. 
*The list will possibly be accessed through an iterator. If you intend to read from the ArrayList by index only, you could assume this is thread-safe. When in doubt, choose the synchronized wrapper. A: Not sure if I understood what you are trying to do, but I'd say the answer in most cases is "No". If you set up an ArrayList of ArrayLists and both the outer and inner lists can never be changed after creation (and during creation only one thread will have access to both the inner and outer lists), they are probably made thread safe by a wrapper (if both the outer and inner lists are wrapped in such a way that modifying them is impossible). All read-only operations on ArrayLists are most likely thread-safe. However, Sun does not guarantee them to be thread-safe (also not for read-only operations), so even though it might work right now, it could break in the future (if Sun creates some internal caching of data for quicker access, for example).
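To tie the answers above together: wrapping only the outer list still leaves the inner ArrayLists modifiable, so each one needs its own wrapper. A sketch (note this gives read-only protection through the returned references only - it is not synchronization):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class UnmodifiableNested {

    // Wrap the outer list and each inner list. Callers holding only the
    // returned reference cannot modify the structure at either level.
    static List<List<String>> deepUnmodifiable(List<List<String>> lists) {
        List<List<String>> wrapped = new ArrayList<>();
        for (List<String> inner : lists) {
            wrapped.add(Collections.unmodifiableList(inner));
        }
        return Collections.unmodifiableList(wrapped);
    }

    public static void main(String[] args) {
        List<List<String>> original = new ArrayList<>();
        List<String> inner = new ArrayList<>();
        inner.add("a");
        original.add(inner);

        List<List<String>> view = deepUnmodifiable(original);
        try {
            view.get(0).add("b"); // attempt to modify an inner list via the view
            throw new AssertionError("inner list was modifiable");
        } catch (UnsupportedOperationException expected) {
            System.out.println("both levels are read-only through the view");
        }
        // Caveat from the answers above: anyone still holding 'original' or
        // 'inner' can mutate them, and such writes are not thread-safe.
    }
}
```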
{ "language": "en", "url": "https://stackoverflow.com/questions/88036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Can a .msi file install itself (presumably via a Custom Action)? I want to construct an MSI which, in its installation process, will deploy itself along with its contained Files/Components, to the TargetDir. So MyApp.msi contains MyApp.exe and MyAppBootstrapperEmpty.exe (with no resources) in its File Table. The user launches a MyAppBootstrapperPackaged.exe (containing MyApp.msi as a resource, obtained from the internet somewhere, or email or otherwise). MyAppBootstrapperPackaged.exe extracts MyApp.msi to a temp folder and executes it via msiexec.exe. After the msiexec.exe process completes, I want MyApp.msi and MyAppBootstrapperEmpty.exe (AND MyApp.exe) in the %ProgramFiles%\MyApp folder, so MyApp.exe can be assured access to MyApp.msi when it runs (for creating the below-mentioned packaged content). MyAppBootstrapper*.exe could try and copy MyApp.msi to the %ProgramFiles%\MyApp folder, but would need elevation to do so, and would not allow for its removal via the Windows Installer uninstall process (from Add/Remove Programs or otherwise), which should be preserved. Obviously (I think it's obvious - am I wrong?) I can't include the MSI as a file in my Media/CAB (chicken and egg scenario), so I believe it would have to be done via a Custom Action before the install process, adding the original MSI to the MSI DB's Media/CAB and the appropriate entry in the File table on the fly. Can this be done and if so how? Think of a content distribution model where content files are only ever to be distributed together with the App. Content is produced by the end user via the App at run time, and packaged into a distributable EXE which includes both the App and the content. MyApp's installer must remain an MSI, but may be executed by a Bootstrapper EXE. The installed MyApp.exe must have access to both MyApp.msi and MyAppBootstrapperEmpty.exe. The distributable EXE is to be "assembled" at runtime by the App from the base (empty) MyAppBootstrapperEmpty.exe, which is also installed by the MSI, and the content created by the end-user. 
The EXE's resource MSI must be the same as that used to install the App which is doing the runtime packaging. WIX is not to be installed with MyApp. There can be no network dependencies at run-/packaging- time (i.e. can't do the packaging via a Webservice - must be done locally). I am familiar with (and using) Custom Actions (managed and unmanaged, via DTF and otherwise). A: Add an uncompressed medium to your wxs like this: <Media Id='2'/> And then create a component with a File element like this: <File Source='/path/to/myinstaller.msi' Compressed='no' DiskId='2' /> This will make the installer look for a file called "myinstaller.msi" on the installation medium, in the same folder as the msi that is being installed. The source path above should point to a dummy file; it is only there to appease WiX. Edit: The following sample test.wxs demonstrates that it works. It produces a test.msi file which installs itself to c:\program files\test. Note that you need to put a dummy test.msi file in the same folder as test.wxs to appease WiX. 
<?xml version='1.0' encoding='utf-8'?> <Wix xmlns='http://schemas.microsoft.com/wix/2006/wi'> <Product Name='ProductName' Id='*' Language='1033' Version='0.0.1' Manufacturer='ManufacturerName' > <Package Keywords='Installer' Description='Installer which installs itself' Manufacturer='ManufactererName' InstallerVersion='100' Languages='1033' Compressed='yes' SummaryCodepage='1252'/> <Media Id='1' Cabinet='test.cab' EmbedCab='yes'/> <Media Id='2' /> <Directory Id='TARGETDIR' Name="SourceDir"> <Directory Id='ProgramFilesFolder'> <Directory Id='TestFolder' Name='Test' > <Component Id="InstallMyself"> <File Source="./test.msi" Compressed="no" DiskId="2" /> </Component> </Directory> </Directory> </Directory> <Feature Id='Complete' Display='expand' Level='1' Title='Copy msi file to program files folder' Description='Test'> <ComponentRef Id="InstallMyself" /> </Feature> </Product> </Wix> A: Having one .MSI package launch another .MSI package from "within" itself is called a nested install, and it's bad juju (see Rule 20). Windows Installer has some global data that it uses to manage the current install, and it doesn't handle well multiple installs at the same time. For the same reason, if you start one install and then try to start another while the first is still in progress, you'll usually see a pop-up to the effect of "another install in progress, please wait until it's done". You can have a program, usually called a bootstrapper (I think that's what you're referring to) which is itself not an install package, but which contains an install package (such as an .MSI or an .EXE) as a resource, possibly compressed. The action of the bootstrapper program is to extract/expand the resource to a file, commonly in a %TEMP% directory, then either launch the extracted .EXE or run MSIEXEC on the extracted .MSI. The bootstrapper can contain multiple resources and extract+install them one by one, if you need to install prerequisites before the main package. 
Or you can ship multiple packages as separate files, and have the bootstrapper execute/install them directly from the distribution media one by one, or copy them down to the target machine and run the series of installs from there, or... WiX itself does not get installed, no. It's a tool with which .MSI packages can be built. The WiX project has on its wishlist a generic bootstrapper program, but it hasn't been implemented yet. There are other bootstrappers available, e.g. this one. You won't need a custom action -- in fact, since the bootstrapper isn't itself a Windows Installer installation package, "custom action" has no meaning to it. And, if you're familiar enough with CAs to know about managed/unmanaged/DTF, then you know enough to avoid custom actions whenever you can. (grin) A: I think it's much easier for your bootstrapper to extract the MSI file to some predefined location rather than to the temp folder. For example, to C:\Documents and Settings\All Users\Application Data\My Company\My Product Install Cache. After installation finishes, the bootstrapper would leave the MSI file sitting there. If at some stage the user decides to reinstall your product, Windows Installer will be able to locate the source MSI file. Also, add the path to this file to the RemoveFile table so that it gets deleted on uninstall. You can use the RemoveFile element in WiX for that. A: So if I understand, then I think I would have the app create a transform (MST) that has the content files and apply that to the base MSI. I'm still not convinced that I understand though. :) A: I'd configure the MSI cache path to a known location. Then at runtime, if you need to "edit" the MSI, use VBScript or similar. But still, I ask WHY!?! A: I am also working on a way to deploy multiple MSI files. I have a bootstrapper.exe program that bundles the MSI files and runs them one at a time. This solves my problem for most cases. The case it does not solve is GPO (Group Policy Object) distribution of the install. 
GPO requires a dot-msi file to run an install. To do this here's what I did which almost solved the problem (but not quite). I put the dot-msi files in the file table of an installer and put my bootstrapper in the binary table and run it from a custom action inserted after InstallFinalize in the InstallExecuteSequence. Of course the bootstrapper won't be able to run other MSI's because the top level MSI holds the _MSIExecute mutex. It was pretty easy to get a little further. I made the bootstrapper return control to the top level installer and continute. And then I added a WaitForSingleObject call to wait for the top level install to finish, and the bootstrapper can then continue to finish the install. My problem is that the GPO installation happens at boot time and the top level install completes before the sub installers are done and GPO reboots the machine. The toplevel install also returns a success status when the install may actually fail later on. I'm still looking for a way to block the top level install from completing until after the bootstrapper completes.
{ "language": "en", "url": "https://stackoverflow.com/questions/88078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: What's the best method to enable or disable a feature in a .net desktop application It can be either at compile time or at run-time using a config file. Is there a more elegant way than simple (and many) if statements? I am especially targeting sets of UI controls that come with a particular feature. A: Unless your program must squeeze out 100% performance, do it with a config file. It will keep your code cleaner. If one option changes many parts of code, don't write many conditionals, write one conditional that picks which class you delegate to. For instance if a preference picks TCP versus UDP, have your conditional instantiate a TcpProvider or UdpProvider which the rest of your code uses with minimal muss or fuss. A: Compiler directives aren't as flexible, but they are appropriate in some circumstances. For instance, by default when you compile in DEBUG mode in VS.NET, there is a 'DEBUG' symbol defined...so you can do void SomeMethod() { #if DEBUG //do something here #else //do something else #endif } this will result in only one of those blocks being compiled depending on whether the DEBUG symbol is defined. Also, you can define additional symbols in Project Properties -> Build -> Conditional compilation symbols. Or, via the command line compiler using the /define: switch
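The delegation idea from the first answer - one conditional that picks an implementation, instead of many scattered if statements - can be sketched like this (shown in Java, but it maps directly to C# interfaces; the provider names are the hypothetical ones from that answer):

```java
interface TransportProvider {
    String send(String msg);
}

class TcpProvider implements TransportProvider {
    public String send(String msg) { return "tcp:" + msg; }
}

class UdpProvider implements TransportProvider {
    public String send(String msg) { return "udp:" + msg; }
}

public class FeatureToggle {

    // One conditional, evaluated once at startup from the config setting;
    // the rest of the code only talks to the interface.
    static TransportProvider fromConfig(String setting) {
        return "udp".equalsIgnoreCase(setting) ? new UdpProvider()
                                               : new TcpProvider();
    }

    public static void main(String[] args) {
        TransportProvider p = fromConfig("udp");
        System.out.println(p.send("hello")); // udp:hello
    }
}
```

The same shape works for feature-gated UI: a factory reads the config once and hands back either the full control set or the reduced one.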
{ "language": "en", "url": "https://stackoverflow.com/questions/88082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: MOSS 2007 SSL error when configuring Search Settings We’re getting the following error message when we click on “Search Settings” for a Shared Services Provider: “Authentication failed because the remote party has closed the transport stream.” This is a new server environment with two web front ends, one database server, and one index server, all running Windows 2003 x64. Does anyone have any thoughts related to if this could be related to 64-bit, or what could be causing the error. Here are the full details from ULS: 09/17/2008 16:30:34.13 w3wp.exe (0x0E84) 0x030C Search Server Common MS Search Administration 86x4 High Configuring the Search Application web service Url to 'https://mushni-sptwb04q:56738/Shared%20Services%20Portal/Search/SearchAdmin.asmx'. 09/17/2008 16:30:34.14 w3wp.exe (0x0E84) 0x030C Search Server Common MS Search Administration 86ze High Exception caught in Search Admin web-service proxy (client). System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a send. ---> System.IO.IOException: Authentication failed because the remote party has closed the transport stream. at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult) at System.Threading.ExecutionContext.runTryCode(Object userData) at System.Runtime.Co... 
09/17/2008 16:30:34.14* w3wp.exe (0x0E84) 0x030C Search Server Common MS Search Administration 86ze High ...mpilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Net.TlsStream.ProcessAuthentication(LazyAsyncResult result) at System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size) at System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size) at System.Net.ConnectStream.WriteHeaders(Boolean async) --- End of inner exception stack trace --- at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request) at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request) at System.Web.Services.Protocols.SoapHt... 09/17/2008 16:30:34.14* w3wp.exe (0x0E84) 0x030C Search Server Common MS Search Administration 86ze High ...tpClientProtocol.Invoke(String methodName, Object[] parameters) at Microsoft.Office.Server.Search.Administration.SearchWebServiceProxy.RunWithSoapExceptionHandling[T](String methodName, Object[] parameters) A: I guess you find this exception in the index server, right? Are you able to browse to 'https://mushni-sptwb04q:56738/Shared%20Services%20Portal/Search/SearchAdmin.asmx' from the index server? It seems like SSL is not properly provisioned on the front-end servers. This might solve your issue: * *Remove the SSL certificate of the front-end servers *Remove the index server from the farm *Move the search and index roles to one of the front-ends *Join the index server back to the farm *Add the index/search roles to the index server *Apply the SSL certificate (you can generate them using SelfSSL) to both front-ends A: Be careful with SelfSSL, its better to use Use SSLDiag. SelfSSL has a bug where if you use it to assign certificates to multiple sites on the same box, only the last site will work. 
You can run SslDiag from the command line like so: ssldiag /selfssl /V:999 /N:CN=<hostname> /S:<siteId> Use Metabase Explorer to find the site id. A: Could be an SSL issue. Do have a look into the profile settings; do you get any error when accessing the User Profiles settings for that same SSP? A: I'm having the same problem. The "Office Server Web Services" (henceforth OSWS) site is available through HTTP on my app server, but not via HTTPS. It doesn't matter where I try to hit the HTTPS URL from, it just flat-out fails (read: no HTTP error code). However, I have come up with some more information. When the app server was joined to the farm, it gave OSWS a different site identifier than exists in the rest of the farm. I tried changing the site identifier, but that didn't work. I've also tried installing the IIS diagnostics toolkit. That pointed me towards the certificate that MOSS installed when the machine was joined to the farm. The line of interest is this one: #WARNING: AcquireCredentialsHandle failed with error -2146893043(0x8009030d) Unfortunately, it looks like Microsoft has embedded some information in the certificate that would prevent me from using SelfSSL or similar tools. Here's the subject (suitably scrubbed): CN={hostname},L=951338967,OU=SharePoint,O=Microsoft The "L" parameter matches the original (and incorrect) site identifier that the site was given and not the one that matches the rest of the farm. My next step is to see if I can generate something that looks appropriate and install it with winhttpcertcfg.exe A: We are also running x64 windows and moss 2007 with .net 3.5 sp1, same issues. I suspect this is the culprit. A: To resolve this issue download the IIS6 resource kit and run the following command Selfssl /s:(IIS ID of the Office Server Web Services site) /v:9999 Cheers, -Ivan
{ "language": "en", "url": "https://stackoverflow.com/questions/88087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How many game updates per second? What update rate should I run my fixed-rate game logic at? I've used 60 updates per second in the past, but that's hard because it's not a whole number of milliseconds per update (16.666666). My current game uses 100, but that seems like overkill for most things. A: I used to maintain a Quake3 mod and this was a constant source of user-questions. Q3 uses 20 'ticks per second' by default - the graphics subsystem interpolates so you get smooth motion on the screen. I initially thought this was way low, but it turns out to be fine, and there really aren't many games at all with faster action than q3. I'd personally go with the "good enough for John Carmack, good enough for me" approach. A: I like 50 for fixed rate pc games. I can't really tell the difference between 50 and 60 (and if you are making a game where that matters you should probably be at 100). You'll notice the question is 'fixed-rate game logic' and not 'draw loop'. For clarity, the code will look something like: while(1) { while(CurrentTime() >= lastUpdate + TICK_LENGTH) { UpdateGame(); lastUpdate += TICK_LENGTH; } Draw(); } The question is what should TICK_LENGTH be? A: None of the above. For the smoothest gameplay possible, your game should be time-based, not frame-locked. Frame-locking works for simple games where you can tweak the logic and lock down the framerate. It doesn't do so well with modern 3D titles where the framerate jumps all over the board and the screen may not be VSynced. All you need to do is figure out how fast an object should be going (i.e. virtual units per second), compute the amount of time since the last frame, scale the number of virtual units to match the amount of time that has passed, then add those values to your object's position. Voila! Time-based movement.
A: Bear in mind that unless your code is measured down to the cycle, not every iteration of the game loop will take the same number of milliseconds to complete - so 16.6666 not being a whole number is not really an issue, as you will need to time and compensate anyway. Besides, it's not 16.6666 updates per second, but the average number of milliseconds your game loop should be targeting. A: Such variables are generally best found via the guess and check strategy. Implement your game logic in such a way that is refresh agnostic (say, for instance, exposing the ms/update as a variable, and using it in any calculations), then play around with the refresh until it works, and then keep it there. As a short term solution, if you want an even update rate but don't care about the evenness of the updates per second, 15ms is close to 60 updates/sec. If you care about both, 20ms (50 updates/sec) is probably the closest you are going to get. In either case, I would simply treat time as a double (or a long with high resolution), and provide the rate to your game as a variable, rather than hard coding it. A: The ideal is to run at the same refresh-rate as the monitor. That way your visuals and the game updates don't go in and out of phase with each other. The fact that each frame doesn't last an integral number of milliseconds shouldn't matter to you; why is that a problem? A: I usually use 30 or 33. It's often enough for the user to feel the flow and rare enough not to hog the CPU too much. A: Normally I don't limit the FPS of the game, instead I change all my logic to take the time elapsed from the last frame as input. As far as fixed-rate goes, unless you need a high rate for any reason, you should use something like 25/30. That should be enough of a rate, and will make your game a little lighter on CPU usage. A: Your engine should both "tick" (update) and draw at 60fps with vertical sync (vsync).
This refresh rate is enough to provide: * *low input lag for a feeling of responsiveness, *and smooth motion even when the player and scene are moving rapidly. Both the game physics and the renderer should be able to drop frames if they need to, but optimize your game to run as close to this 60hz standard as possible. Also, some subsystems like AI can tick closer to 10-20fps, and make sure your physics are interpolated on a frame-to-frame time delta, like this: http://gafferongames.com/game-physics/fix-your-timestep/
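The accumulator pattern from the linked fix-your-timestep article can be sketched roughly as follows. This is an illustrative Python sketch, not code from the question: the function names, the 20 ms tick, and the list-of-frame-times driver are all invented for the example.

```python
# Fixed-timestep logic with per-frame rendering interpolation (sketch).
TICK = 0.02  # seconds per logic update (50 updates/sec); pick your own rate

def run(frame_times, update, draw, tick=TICK):
    """Advance fixed-size logic ticks, rendering once per frame.

    frame_times: wall-clock timestamps of successive frames.
    update: called exactly once per elapsed logic tick.
    draw: called once per frame with an interpolation alpha in [0, 1),
          used to blend between the previous and current logic states.
    """
    last = frame_times[0]
    accumulator = 0.0
    for now in frame_times[1:]:
        accumulator += now - last
        last = now
        # Run as many whole ticks as have elapsed since the last frame...
        while accumulator >= tick:
            update()
            accumulator -= tick
        # ...then draw, interpolating by the leftover fraction of a tick.
        draw(accumulator / tick)
```

In a real game the `for` loop over `frame_times` would be the render loop querying a high-resolution clock each iteration; the key property is that `update()` always advances the simulation by exactly one tick, no matter how the frame rate jitters.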
{ "language": "en", "url": "https://stackoverflow.com/questions/88093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do you guarantee the ASPNET user gets assigned the correct default directory rights? I seem to make this mistake every time I set up a new development box. Is there a way to make sure you don't have to manually assign rights for the ASPNET user? I usually install .Net then IIS, then Visual Studio but it seems I still have to manually assign rights to the ASPNET user to get everything running correctly. Is my install order wrong? A: Install IIS, then .NET. The .NET installation will automatically register the needed things with IIS. If you install .NET first, run this: %windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i to run the registration parts, and %windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -ga userA to set up the security rights for userA A: If you install IIS first and then .Net, it'll be OK. In your scenario - use Aspnet_regiis.exe -ga user (not available for .Net < 2.0)
{ "language": "en", "url": "https://stackoverflow.com/questions/88094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Do you know a Bulked/Batched Flows Library for C# I am working on a project with peak performance requirements, so we need to bulk (batch?) several operations (for example persisting the data to a database) for efficiency. However, I want our code to maintain an easy to understand flow, like: input = Read(); parsed = Parse(input); if (parsed.Count > 10) { status = Persist(parsed); ReportSuccess(status); return; } ReportFailure(); The feature I'm looking for here is automatically have Persist() happen in bulks (and ergo asynchronously), but behave to its user as if it's synchronous (user should block until the bulk action completes). I want the implementor to be able to implement Persist(ICollection). I looked into flow-based programming, with which I am not highly familiar. I saw one library for fbp in C# here, and played a bit with Microsoft's Workflow Foundation, but my impression is that both are overkill for what I need. What would you use to implement a bulked flow behavior? Note that I would like to get code that is exactly like what I wrote (simple to understand & debug), so solutions that involve yield or configuration in order to connect flows to one another are inadequate for my purpose. Also, chaining is not what I'm looking for - I don't want to first build a chain and then run it, I want code that looks as if it is a simple flow ("Do A, Do B, if C then do D"). A: Common problem - instead of calling Persist I usually load up commands (or smt along those lines) into a Persistor class then after the loop is finished I call Persistor.Persist to persist the batch. Just a few pointers - If you're generating sql the commands you add to the persistor can represent your queries somehow (with built-in objects, custom objects or just query strings). If you're calling stored procedures you can use the commands to append stuff to a piece of xml that will be passed down to the SP when you call the persist method.
hope it helps - Pretty sure there's a pattern for this but dunno the name :) A: I don't know if this is what you need, because it's sqlserver based, but have you tried taking a look at SSIS and/or DTS? A: One simple thing that you can do is to create a MemoryBuffer that you push the messages to; it simply adds them to a list and returns. This MemoryBuffer has a System.Timers.Timer which gets invoked periodically and does the "actual" updates. One such implementation can be found in a Syslog Server (C#) at http://www.fantail.net.nz/wordpress/?p=5 in which the syslog messages get logged to a SQL Server periodically in a batch. This approach might not be good if the info being pushed to the database is important, as if something goes wrong, you will lose the messages in the MemoryBuffer. A: How about using the BackgroundWorker class to persist each item asynchronously on a separate thread? For example: using System; using System.Collections; using System.Collections.Generic; using System.ComponentModel; using System.Threading; class PersistenceManager { public void Persist(ICollection persistable) { // initialize a list of background workers var backgroundWorkers = new List<BackgroundWorker>(); // launch each persistable item in a background worker on a separate thread foreach (var persistableItem in persistable) { var worker = new BackgroundWorker(); worker.DoWork += new DoWorkEventHandler(worker_DoWork); backgroundWorkers.Add(worker); worker.RunWorkerAsync(persistableItem); } // wait for all the workers to finish while (true) { // sleep a little bit to give the workers a chance to finish Thread.Sleep(100); // continue looping until all workers are done processing if (backgroundWorkers.Exists(w => w.IsBusy)) continue; break; } // dispose all the workers foreach (var w in backgroundWorkers) w.Dispose(); } void worker_DoWork(object sender, DoWorkEventArgs e) { var persistableItem = e.Argument; // TODO: add logic here to save the persistableItem to the database } }
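The "Persistor" idea from the first answer — callers add items one at a time, but the actual persistence happens in bulk — can be sketched language-independently. This is an illustrative Python sketch, not the answerer's code; the class name, the `bulk_persist` callback (standing in for a real multi-row database call), and the threshold of 10 are all invented:

```python
# Sketch of a batching persistor: buffer items, flush synchronously in bulk.
class BatchingPersistor:
    def __init__(self, bulk_persist, batch_size=10):
        self._bulk_persist = bulk_persist  # e.g. one multi-row INSERT
        self._batch_size = batch_size
        self._buffer = []

    def add(self, item):
        """Buffer an item; flush synchronously once the batch is full,
        so the caller blocks until the bulk action completes."""
        self._buffer.append(item)
        if len(self._buffer) >= self._batch_size:
            self.flush()

    def flush(self):
        """Persist everything buffered so far in one bulk operation."""
        if self._buffer:
            self._bulk_persist(list(self._buffer))
            self._buffer.clear()
```

Unlike the timer-based MemoryBuffer above, nothing here happens on a background thread, so a crash can lose at most one partial batch and the calling code keeps its simple "Do A, Do B" shape; the caller just has to remember a final `flush()` for the leftover items.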
{ "language": "en", "url": "https://stackoverflow.com/questions/88096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I use NAnt to compile WPF controls I have a WPF project and I'm trying to setup a NAnt build script for it. The problem is that when it tries to compile the WPF controls, the .g.cs files are not being generated as they are when building from within Visual Studio. I'm using the csc build task. From my reading it seems that when Visual Studio builds, it performs a pre-build step that generates the .g.cs files. Is it possible to do this via NAnt? I found this post about WPF, .g.cs and baml: http://stuff.seans.com/2008/07/13/hello-wpf-world-part-2-why-xaml/ Any ideas? A: You might want to try using the msbuild task
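One way to act on that answer without taking a dependency on the NAntContrib `<msbuild>` task is to shell out to MSBuild from NAnt with a plain `<exec>` task; MSBuild runs the same markup-compilation targets Visual Studio does, so the .g.cs files get generated. A rough sketch — the target name, project file, and MSBuild path are placeholders, not taken from the question:

```xml
<!-- Hypothetical NAnt target: delegate the WPF project build to MSBuild
     so the XAML markup compiler generates the .g.cs files first.
     Adjust the MSBuild path and project name for your machine. -->
<target name="build-wpf">
  <exec program="C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe">
    <arg value="MyWpfProject.csproj" />
    <arg value="/p:Configuration=Release" />
  </exec>
</target>
```

The rest of the NAnt script can keep using `csc` for non-WPF assemblies; only the projects containing XAML need to go through MSBuild.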
{ "language": "en", "url": "https://stackoverflow.com/questions/88121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Will a VS2008 setup project update Net 3.5 SP1? I just started using the WPF WebBrowser that is included in Net 3.5 SP1. I built my setup project (which I have been using prior to moving to 3.5 SP1) and installed it on a test machine but the WebBrowser was not available. What must I do to be sure that the setup.exe/msi combination checks for and installs SP1? A: Open the properties of the Setup Project, then click on the Prerequisites button. Then check the prerequisites to install. Then you can define how the user gets the pre-reqs. Here is a link to framework version information and an excerpt from Scott Hanselman's blog: Online/Download Experience The best way to get a user with reasonable Internet connectivity up on the 3.5 SP1 .NET Framework is with the 2.7 Meg "bootstrapper." This will detect what they need and only download what they need. The worst-case scenario for an x86 machine is around 60 megs, as seen in the table above. What's the "Client Profile?" The Client Profile is an even smaller install option for .NET 3.5 SP1 on XP. It's a small 277k bootstrapper. When it's run on a Windows XP SP2 machine with no .NET Framework installed, it will download a 28 meg payload and give you a client-specific subset of .NET 3.5. If the Client Profile bootstrapper is run on a machine with any version of .NET on it, it'll act the same as the 3.5 SP1 web installer and detect what it needs to download, then go get it. There's more details in the Client Profile Deployment Guide. http://www.hanselman.com/blog/CommentView.aspx?guid=af453d70-64b3-417e-9492-d115f929195d A: On my way to answering my own question. Double-clicking on the Microsoft .net Framework in the Detected dependencies one can choose the version. Now the question is which is appropriate, 3.5.30729 or 3.5 SP1 Client? EDIT: 3.5.30729 works. Any ideas of the difference between the two? EDIT: Double-clicking on the .net Framework above shows .NET Framework as a Launch condition.
This is where I changed the version. (I'd add a screenshot, but I don't have one at a URL, only on my desktop.) A: In the setup project, add some launch conditions. This page shows you exactly how: http://jelle.druyts.net/2005/04/09/CheckingForNET11ServicePack1InAnMSI.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/88136", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to get all file attributes including author, title, mp3 tags, etc, in one sweep I would like to write all meta data (including advanced summary properties) for my files in a windows folder to a csv file. Is there a way to collect all the attributes? I see mp3 files have a different set of attributes compared to jpg files. (c#) This can also be a script (vb, perl) Update: by looking at libextractor (thank you) I can see this can be achieved by writing different plugins for different types of files. I gather this meta data is not a simple collection... A: In Perl, you can use MP3::Tag or MP3::Info A: If you can cope w/ VB.Net: http://www.codeproject.com/KB/vb/mp3id3v1.aspx If you can cope w/ C++/.Net: http://www.codeproject.com/KB/audio-video/mp3fileinfo.aspx For either (assuming the C++ is compiled to .Net), you can use Reflector to disassemble the binary and convert it to C#. Check w/ the respective authors about their licenses first (usually Code Project articles are under an open license like CPOL). A: In a library? Try libextractor if your software is GPL. A: Ok, after the clarification edits, I would suggest looking at the introspection available in .Net. I will warn you however that I think you will get more satisfying results if you forgo introspection and define the specific properties that you want for the file types that you expect to see. Since scripting is valid, then if this were my problem to solve I would use Powershell since the .net introspection is baked in. A: It may not be worth it to add all of the data from a jpeg file (exif data). I would hand pick what attributes I wanted from those files.
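Since scripting is acceptable to the asker, a starting point before reaching for per-format tag libraries is to collect the filesystem attributes every file shares and write them to CSV; format-specific metadata (ID3, EXIF) would then be layered on per extension, as libextractor's plugins do. A hedged Python sketch — the chosen field names and the helper names are invented for illustration:

```python
import csv
import os
import time

def common_attributes(path):
    """Return the attributes every file shares; per-format tags (ID3,
    EXIF, summary properties) would need a plugin per file type."""
    st = os.stat(path)
    return {
        "name": os.path.basename(path),
        "extension": os.path.splitext(path)[1].lower(),
        "size_bytes": st.st_size,
        "modified": time.strftime("%Y-%m-%d %H:%M:%S",
                                  time.localtime(st.st_mtime)),
    }

def write_csv(paths, out):
    """Write one CSV row of common attributes per file to the stream out."""
    fields = ["name", "extension", "size_bytes", "modified"]
    writer = csv.DictWriter(out, fieldnames=fields)
    writer.writeheader()
    for p in paths:
        writer.writerow(common_attributes(p))
```

A per-extension dispatch (e.g. call an MP3 tag reader for `.mp3`, an EXIF reader for `.jpg`, and merge the extra columns) would complete the "one sweep" the question asks for.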
{ "language": "en", "url": "https://stackoverflow.com/questions/88181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Any other tools/plugins like VisualAssist that will change my life (MSVS)? I was introduced to VisualAssist a few years ago and for me there's no going back. Are there any other tools I'm missing out on? A: If you're a vim user, ViEmu is indispensable. It's a plugin available for Visual Studio (SQL Server and Office as well, although it's sold separately) that transforms the editor into Vim. Another plugin by the same company is Codekana. In its current incarnation, it spruces up code structure considerably, and makes reading code much more pleasurable. Based on several chats with the author, he's planning on growing it into other areas as well. A: BeyondCompare : Life-changing folder & file diff with many installable extensions for additional file types. Don't know what I'd do without it. A: There's a few things that get installed on every computer I use for development: * *ExamDiff is the best light-weight diff program I've found. *Tortoise SVN is the best version control client *Perforce is a way to make your life worse when your company inflicts it upon you. A: Just after installing VisualAssist I go after WinMerge, which also significantly simplified my life. A: I tried Resharper for a while. It was great but too expensive for my taste and I could not get my employer to purchase it when the trial expired. You might take a look. A: I own all of these tools and use them on a regular basis. * *Resharper *CodeRush/Refactor Pro! *NDepend *Gallio/MbUnit
{ "language": "en", "url": "https://stackoverflow.com/questions/88190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: When using an ORM, how to safely send loaded entities across the tiers When a system has N tiers, and when using an ORM, how do you send loaded entities across the tiers? Do you use DTO? When DTO are not used and the entities are directly sent, how do you protect against the uninitialized lazy loaded relationship errors? Note: this is not a "should N tiers be used?" question. I assume that the system already has N tiers. A: Well I don't know if there is a better way, but when we use Hibernate we just turn lazy loading off so that it loads everything. It obviously costs more to do this, but I wasn't sure how to get away from the lazy loading methods that Hibernate would create. If a Container has sets of data that are not used often then they will not be loaded and it is up to the requesting UI Form to call it and send it for update. (We built update classes to pass all the information together) In the case of UI Forms that loaded lots of Containers we just make special classes and fill in what we need for them. They are sort of read-only containers that aren't used for persistence. There may be better ways... but I am learning :) A: I'm just trying to find my way with ORMs. It's an appealing concept. Like you I don't want other tiers in the application to know that the ORM exists. What I'm looking at currently is using interfaces that I design and using partial classes (a C#/.net thing, without partial classes I guess I'd write a wrapper) to add the implementation of the interface onto the types that are generated by the ORM. As far as lazy loading / deferred execution goes, that also should be invisible to the application. It's a nice service for the ORM to provide and I'm happy that it does but my application should not need to know or care about it. So if the ORM doesn't hide that from you then again I'd look at a wrapper that took care of this so that the application does not need to know or care.
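One way to make the DTO option from the question concrete: copy exactly the fields the remote tier needs into a plain object while the ORM session is still open, so no lazy-loaded relationship can be touched after serialization. A minimal sketch — the entity shape, the DTO fields, and the stand-in exception are all invented for illustration, not tied to any particular ORM:

```python
from dataclasses import dataclass

class LazyLoadError(Exception):
    """Stand-in for an ORM's 'session is closed' / uninitialized-proxy error."""

@dataclass(frozen=True)
class CustomerDTO:
    # Only what the presentation tier needs; no live ORM relationships,
    # so nothing can trigger a lazy load on the far side of the wire.
    id: int
    name: str
    order_count: int

def to_dto(customer):
    """Build the DTO while the session is open; relationships are summarized
    here, not carried across the tier boundary."""
    return CustomerDTO(
        id=customer.id,
        name=customer.name,
        order_count=len(customer.orders),  # forces the load now, safely
    )
```

The trade-off matches the answers above: the DTO costs a mapping layer, but in exchange the upper tiers never see the ORM, its proxies, or its lazy-loading behavior.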
{ "language": "en", "url": "https://stackoverflow.com/questions/88192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Setting Environment Variables for Mercurial Hook I am trying to call a shell script that sets a bunch of environment variables on our server from a mercurial hook. The shell script gets called fine when a new changegroup comes in, but the environment variables aren't carrying over past the call to the shell script. My hgrc file on the repository looks like this: [hooks] changegroup = shell_script changegroup.env = env I can see the output of the shell script, and then the output of the env command, but the env command doesn't include the new environment variables set by the shell script. I have verified that the shell script works fine when run by itself but when run in the context of the mercurial hook it does not properly set the environment. A: Shell scripts can't modify their environment. http://tldp.org/LDP/abs/html/gotchas.html A script may not export variables back to its parent process, the shell, or to the environment. Just as we learned in biology, a child process can inherit from a parent, but not vice versa $ cat > eg.sh export FOO="bar"; ^D $ bash eg.sh $ echo $FOO; $ also, the problem is greater, as you have multiple calls of bash bash 1 -> hg -> bash 2 ( shell script ) -> bash 3 ( env call ) it would be like thinking I could set a variable in one php script and then magically get it with another simply by running one after the other.
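The parent/child point in the answer is easy to demonstrate outside Mercurial: a child process can change only its own environment, never its parent's. A small Python illustration (the variable name is made up, and the second half shows the usual workaround — have the child print the values so the parent can read them from stdout):

```python
import os
import subprocess
import sys

# A child process (standing in for the shell_script hook) sets a variable...
subprocess.run(
    [sys.executable, "-c",
     "import os; os.environ['HG_HOOK_DEMO_FOO'] = 'bar'"],
    check=True,
)
# ...but exports never flow upward: the parent's environment is untouched.
assert "HG_HOOK_DEMO_FOO" not in os.environ

# The workable alternative: the child prints the values and the parent
# parses them itself (or the child writes them to a file the parent reads).
out = subprocess.run(
    [sys.executable, "-c", "print('HG_HOOK_DEMO_FOO=bar')"],
    check=True, capture_output=True, text=True,
).stdout.strip()
```

For the hook in the question, that means the shell script should emit the variables (or write them somewhere) rather than export them, and whatever consumes them must read that output.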
{ "language": "en", "url": "https://stackoverflow.com/questions/88194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you convert your office to build automation? The title should say it all, then I can solidify 2 more ticks on the Joel test. I've implemented build automation using a makefile and a python script already and I understand the basics and the options. But how can I, the new guy who reads the blogs, convince my cohort of its inherent efficacy? A: Ask for forgiveness, instead of permission. Get it working in private (which it looks like you have) and then demonstrate its advantages. One thing that always gets people is using CruiseControl's Tray utility - people love it when they can see, through their system tray, that the build succeeded. (this is assuming you're in a Windows environment, that CruiseControl will work with your existing systems, etc.) NOTE: If asking for forgiveness instead of permission will result in instant termination, you might not want to do the above. You might also want to look for work somewhere else. Your mileage may vary. A: Implement build lights ... we did something similar with lava lamps and it was a huge hit. For added bonus marks give every developer a red light over their desk and have the red light come on when the build breaks. A: Grab an old spare computer & put it in the corner of your office. Set it up to build your project. Write a small script that does: * *Get latest version of all files. *If there was a file change, build *Notify you if there's a failure. When you catch a broken build, compassionately get it fixed. Consider adding a step to run unit tests, too. If you can avoid scolding people for their mistakes, pretty soon people will be impressed with how reliable the build has been since you arrived. Build from there. The trick is to spend very little of your time to generate a lot of value for your team, without pissing anyone off. A: Set up an autobuilder.
Once you have it building and running the tests automatically, it won't matter if you convince other people to save their own time :) If you're using git for version control, here's an autobuilder that automatically finds the exact checkin that started causing the tests to fail: http://github.com/apenwarr/gitbuilder/ A: I would take a spare box, install a continuous integration server (Hudson or CruiseControl in the Java world) and set up a job that builds your application each time someone checks in some code. You can either try to convince your coworker or just wait until someone breaks the build. In the latter case, just send the following email: to: all developers Guys, I've just noticed that I can't build our software using the latest version because of the following error: ... If you want to be notified by our continuous build system (attached is the mail I received when it failed to build our application), just let me know. Usually it doesn't take that long until everyone is on the list A: I would set up the automated build as a nightly process such that every night it grabs the most recent code revision, builds it, and generates a report. Now you will know first thing every morning whether or not the build is broken, and if it is, you can notify the team. If broken builds are much of a problem on your project, people will probably start coming to you first to find out if it is safe to sync to the latest code, since you will be the person who tends to know on any given day whether or not the build is broken (by the way, an automated suite of unit tests helps a great deal with this as well). With any luck, people will start to realize that your nightly build is a useful thing to have, and you'll be able to just set up your daily build report as an email that goes out.
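The "get latest, build if changed, notify on failure" loop that several answers describe is only a few lines of logic. A hedged Python sketch, with the version-control, compiler, and e-mail calls left as injected stand-ins rather than real commands:

```python
def poll_once(get_latest_rev, last_built_rev, build, notify_failure):
    """One pass of a minimal autobuilder.

    get_latest_rev, build, and notify_failure are stand-ins for real VCS,
    compiler, and e-mail calls; build(rev) returns True on success.
    Returns the revision now considered handled, so the caller can loop:
        while True: last = poll_once(...); time.sleep(60)
    """
    rev = get_latest_rev()
    if rev == last_built_rev:
        return last_built_rev      # nothing changed; sleep and retry later
    if not build(rev):
        notify_failure(rev)        # e.g. send the "build is broken" e-mail
    return rev
```

A continuous integration server like Hudson or CruiseControl is essentially this loop plus history, reporting, and a web UI, which is why starting with the spare-box script is such an easy sell.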
A: James Shore has two great links: For hardware http://jamesshore.com/Blog/Continuous-Integration-on-a-Dollar-a-Day.html For "Humanware" http://jamesshore.com/Change-Diary/ (The history of how he did it. The read is long, but changing an organization is harder.) A: When the build is needed by the team on a regular basis, it's pretty easy. You appoint a team member (rotated periodically) to do the build. If the build process is complicated enough, the team will on its own come up with a way of at least partially automating the build. In the worst case, you'll have to automate the build yourself, but no-one will be against the automation. A: Demonstration is the best, and really the only way to change anyone's mind who is resistant to doing things differently. Here we showed how useful automated builds are by having the ability for QA to grab a green light build straight from the build server and install it and test without any direction from the developers. They are able to continue working, and they know that it at least passes its unit tests. It helped integrate testing and development, reducing the time bugs were in the system.
{ "language": "en", "url": "https://stackoverflow.com/questions/88211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Finding differences between versions of a Java class file I am working with a large Java web application from a commercial vendor. I've received a patch from the vendor in the form of a new .class file that is supposed to resolve an issue we're having with the software. In the past, applying patches from this vendor has caused new and completely unrelated problems to arise, so I want to understand the change being made even before applying it to a test instance. I've got the two .class files side by side, the one extracted from the currently running version and the updated one from the vendor. JAD and JReversePro both decompile and disassemble (respectively) the two versions to the same output. However, the .class files are different sizes and I see differences in the output of od -x, so they're definitely not identical. What other steps could I take to determine the difference between the two files? Conclusion: Thanks for the great responses. Since javap -c output is also identical for the two class files, I am going to conclude that Davr's right and the vendor sent me a placebo. While I'm accepting Davr's answer for that reason, it was Chris Marshall and John Meagher who turned me on to javap, so thanks to all three of you. A: It's possible that they just compiled it with a new version of the java compiler, or with different optimization settings etc, so that the functionality is the same, and the code is the same, but the output bytecode is slightly different. A: If you are looking for API level differences the javap tool can be a big help. It will output the method signatures and those can be output to plain text files and compared using normal diff tools. A: You could try using a diff tool (such as SourceGear's free DiffMerge tool) on the decompiled sources. That should pick up the file differences, although it will likely pick up "insignificant" differences, for example if variables have been named differently in the two versions.
http://www.sourcegear.com/diffmerge/ A: You can use javap (in $JDK_HOME/bin) to decompile java .class files. It will tell you (for example) the class file version among other things
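To turn the javap suggestion into a concrete comparison, disassemble both versions (`javap -c -p Old.class` and `javap -c -p New.class`) and diff the resulting text. The diffing half can be sketched with Python's difflib; the javap-style disassembly strings below are invented stand-ins for real output:

```python
import difflib

def bytecode_diff(old_text, new_text):
    """Unified diff of two javap -c dumps. An empty result means the
    disassembled bytecode is identical even if the raw .class bytes
    differ (e.g. a different compiler wrote different constant-pool
    ordering or debug info, as in the accepted answer's 'placebo' case)."""
    return "".join(
        difflib.unified_diff(
            old_text.splitlines(keepends=True),
            new_text.splitlines(keepends=True),
            fromfile="old.javap", tofile="new.javap",
        )
    )

# Invented javap-style output standing in for real disassembly:
old = "public int getCount();\n  Code:\n    0: iconst_1\n    1: ireturn\n"
new = "public int getCount();\n  Code:\n    0: iconst_2\n    1: ireturn\n"
```

Running the real pipeline would just replace the two literal strings with the captured stdout of the two javap invocations.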
{ "language": "en", "url": "https://stackoverflow.com/questions/88216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Stored Procs - Best way to pass messages back to user application I'd like to know what people think about using RAISERROR in stored procedures to pass back user messages (i.e. business related messages, not error messages) to the application. Some of the senior developers in my firm have been using this method and catching the SqlException in our C# code to pick up the messages and display them to the user. I am not happy with this method and would like to know how other people deal with these types of user messages from stored procs. A: I've done this, but it was usually to pass along business "error" messages, essentially a data configuration had to be in place that couldn't be enforced with standard FK constraints for whatever reason. If they are actually "errors", I don't have much of a problem with it. If it's inserting a record and using RAISERROR to throw ("You have successfully registered for XYZ!"), then you've got a problem. If that was the case, I'd probably come up with a team/department/company development standard for using out parameters. A: Using RAISERROR like this is really not a good idea. It's just like using Exceptions as flow control logic, which is generally frowned upon. Why not use an OUT parameter instead? That's exactly what they are for. I can't think of a database or a client API that doesn't support OUT parameters. A: Make your stored procedure return 2 sets of data. The first can contain the actual returned data, then the second can return a text message. Your app code can then use the data where it needs to, then display whatever message comes back. A: Is it bad form to answer a question this old? Anywho... For your everyday status messages, this would be a Bad Thing and I agree with pretty much every answer above. I have however seen this used quite effectively for showing progress during long batches. See Getting feedback / progress from batches and stored procedures by Jens K for an example.
You've got to have a pretty hardcore reason for doing it, but when you need it, you need it and it is awesome. A: Exceptions should be thrown only when the situation is exceptional and you do not have handling logic. Also, raising and catching an exception is expensive. Avoid using exceptions unless * *The situation is exceptional *You do not have handling logic for the exceptional situation Use output parameters or return multiple resultsets to pass info from the stored procedures. A: I would try to avoid getting my stored procs to return Business Related messages because by definition these kinds of messages probably ought to be handled/generated in a Business Logic tier. Exceptions should be exceptional (infrequent). I would use RAISERROR only for errors (like hey I tried to import all this data and one of the rows had goofy data so I rolled back the transaction). You also need to be very careful with the severity of the error raised; this can have a huge effect on how the error propagates and what happens to your connection. Try using a return value or an output variable if this isn't enough. A: I guess if you don't mind messing with checking columns and such you could return something different based on what happened. If everything is fine, return the data as normal. If something isn't fine, return a result with a column named Error that describes what was bad. Check the column names for this column before you process data and act accordingly. Off the top of my head if you really object to RAISERROR. A: I've used raiserror to return from the depths of multiple nested stored procedures, but the final layer of stored procedure always catches the exception prior to being raised to the calling language (in our case Java via JDBC). The stored procedure catch in the outer most layer is transformed into an XML message to be transported to the JDBC call and the root element, by our convention, must contain a feedback attribute.
The value of the feedback attribute always has a decorator of either ok, alert, or error. Ok means go on, nothing to see here. Alert means go on, but show the rest of the feedback to the user. Error means punt, call the help desk. A: I'd use the RETURN value from the stored procedure, like this: CREATE PROCEDURE checkReturnValue AS BEGIN DECLARE @err AS INT SET @err = 0 IF (rand() < 0.5) BEGIN SET @err = 1 END SELECT * FROM table PRINT @err RETURN @err END Check the RETURN value in your application calling the stored procedure. A: Only if the "business" error messages are database error messages in the sense that the database constraints which have been put in place to satisfy basic low-level business requirements at the database level are being violated. It should not be used for high-level business logic, which should be avoided in the database layer. The database layer is always the slowest to change, so only very slowly changing and unchanging business logic should be there. So maybe yes for message about order for an inactive/disabled customer, but not for an order for a customer who has a balance in 90 days. The first rule may be permanent, the second is likely to be configurable, subject to whims of the business on a monthly basis. A: We raise errors when errors occur, and we return status information in output variables or return values. A: You should be using SQL stored procedure output parameters. From http://msdn.microsoft.com/en-us/library/ms378108.aspx: CREATE PROCEDURE GetImmediateManager @employeeID INT, @msg varchar(50) OUTPUT AS BEGIN SELECT ManagerID FROM HumanResources.Employee WHERE EmployeeID = @employeeID SELECT @msg = 'here is my message' END A: My blasphemous 2 cents: Text-based messages work really well on the web, like, HTTP for example. It's easy to create, send, debug systems where the messaging is done in human-readable text. So, maybe the same thing with the messaging between your SQL Server layer and the layer above it. 
Using text as part of the messaging might make your development more agile and your debugging easier. Maybe the senior developers are being pragmatic, and you should be open to setting aside pre-conceived notions of correctness. Debate the design choice on its own merits, not based on notions of correctness. (There is fashion in software development too.) A: If it can't be checked/caught earlier it might be difficult to do anything else. I've had to write a RAISERROR in a procedure I wrote and used as a constraint on inserting/updating a table, because it was the 'last stop' for the data and I had to check it there. I think in general, if you are getting errors back from the DB... it's a pain in the butt and harder to give 'nice' feedback to the user without a lot of effort, but sometimes you just don't know until you insert/update :P
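The consensus above — reserve exceptions for truly exceptional failures, and report expected business conditions through return values or output parameters — can be sketched application-side. This is a hedged illustration only (Python; the function and status names are invented for the sketch, not taken from any of the answers):

```python
class DatabaseDownError(Exception):
    """Truly exceptional: there is no sensible handling logic at the call site."""

def place_order(customer, is_active_customer=True, db_available=True):
    # Expected business condition: report it as a status, mirroring a stored
    # procedure's RETURN value or OUTPUT parameter, rather than raising.
    if not is_active_customer:
        return {"ok": False, "message": "Customer account is inactive"}
    # Infrastructure failure: exceptional, so raise.
    if not db_available:
        raise DatabaseDownError("cannot reach the database")
    return {"ok": True, "message": "Order placed"}

result = place_order("acme", is_active_customer=False)
if not result["ok"]:
    print("Business rule violated:", result["message"])
```

The caller checks the status field the same way the answers suggest checking a stored procedure's RETURN value before processing data.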
{ "language": "en", "url": "https://stackoverflow.com/questions/88222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Page index is not working Help me... my page index is not working in Visual Studio. My page load is as follows: protected void Page_Load(object sender, EventArgs e) { if (!Page.IsPostBack) { CustomerView.DataSource = Customer.GetAll(); CustomerView.DataBind(); } } protected void CustomerView_PageIndexChanging(object sender, System.Web.UI.WebControls.GridViewPageEventArgs e) { int newPageNumber = e.NewPageIndex + 1; CustomerView.DataSource = Customer.GetAll(); CustomerView.DataBind(); } What am I doing wrong? My page index is not working. A: Try this. I think you have to set the GridView's PageIndex property manually. protected void CustomerView_PageIndexChanging(object sender, System.Web.UI.WebControls.GridViewPageEventArgs e) { CustomerView.PageIndex = e.NewPageIndex; CustomerView.DataSource = Customer.GetAll(); CustomerView.DataBind(); }
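The bug in the question is language-agnostic: on a page-change event, the grid's current page index must be stored before rebinding, or the control keeps rendering the old page. A rough, framework-free illustration of the same idea (Python; the class and method names are invented for this sketch, not ASP.NET APIs):

```python
def page_of(items, page_index, page_size):
    """Return the slice of `items` for a zero-based page index."""
    start = page_index * page_size
    return items[start:start + page_size]

class Grid:
    def __init__(self, page_size):
        self.page_size = page_size
        self.page_index = 0  # plays the role of GridView.PageIndex

    def on_page_index_changing(self, items, new_page_index):
        # The fix from the accepted answer: store the new index first...
        self.page_index = new_page_index
        # ...then rebind the data for the now-current page.
        return page_of(items, self.page_index, self.page_size)

grid = Grid(page_size=3)
rows = list(range(10))
assert grid.on_page_index_changing(rows, 2) == [6, 7, 8]
```

Without the assignment to `page_index`, rebinding would keep slicing from the stale index — which is exactly why the original handler always showed the first page.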
{ "language": "en", "url": "https://stackoverflow.com/questions/88231", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Dealing with "java.lang.OutOfMemoryError: PermGen space" error Recently I ran into this error in my web application: java.lang.OutOfMemoryError: PermGen space It's a typical Hibernate/JPA + IceFaces/JSF application running on Tomcat 6 and JDK 1.6. Apparently this can occur after redeploying an application a few times. What causes it and what can be done to avoid it? How do I fix the problem? A: * *Open tomcat7w from Tomcat's bin directory or type Monitor Tomcat in the start menu (a tabbed window opens with various service information). *In the Java Options text area append this line: -XX:MaxPermSize=128m *Set Initial Memory Pool to 1024 (optional). *Set Maximum Memory Pool to 1024 (optional). *Click Ok. *Restart the Tomcat service. A: The PermGen space error occurs when the code needs more permanent-generation space than the JVM has provided. The best solution for this problem on UNIX operating systems is to change some configuration in the bash file. The following steps solve the problem. Run the command gedit .bashrc in a terminal. Create the JAVA_OPTS variable with the following value: export JAVA_OPTS="-XX:PermSize=256m -XX:MaxPermSize=512m" Save the bash file. Run the command exec bash in the terminal. Restart the server. I hope this approach will work on your problem. If you use a Java version lower than 8 this issue occurs sometimes, but if you use Java 8 the problem never occurs. A: A common mistake people make is thinking that heap space and PermGen space are the same, which is not at all true. You could have a lot of space remaining in the heap but still run out of memory in PermGen. A common cause of OutOfMemory in PermGen is the ClassLoader. Whenever a class is loaded into the JVM, all its metadata, along with its Classloader, is kept in the PermGen area, and they will be garbage collected when the Classloader which loaded them is ready for garbage collection.
If the Classloader has a memory leak, then all classes loaded by it will remain in memory and cause a PermGen OutOfMemory once you repeat it a couple of times. The classic example is java.lang.OutOfMemoryError: PermGen space in Tomcat. Now there are two ways to solve this: 1. Find the cause of the memory leak, or check whether there is any memory leak at all. 2. Increase the size of the PermGen space by using the JVM params -XX:MaxPermSize and -XX:PermSize. You can also check 2 Solutions of Java.lang.OutOfMemoryError in Java for more details. A: The solution was to add these flags to the JVM command line when Tomcat is started: -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled You can do that by shutting down the Tomcat service, then going into the Tomcat/bin directory and running tomcat6w.exe. Under the "Java" tab, add the arguments to the "Java Options" box. Click "OK" and then restart the service. If you get an error that the specified service does not exist as an installed service you should run: tomcat6w //ES//servicename where servicename is the name of the server as viewed in services.msc Source: orx's comment on Eric's Agile Answers. A: Also, if you are using log4j in your webapp, check this paragraph in the log4j documentation. It seems that if you are using PropertyConfigurator.configureAndWatch("log4j.properties"), you cause memory leaks when you undeploy your webapp. A: Increasing the Permanent Generation size or tweaking GC parameters will NOT help if you have a real memory leak. If your application, or some 3rd party library it uses, leaks class loaders, the only real and permanent solution is to find this leak and fix it. There are a number of tools that can help you; one of the recent ones is Plumbr, which has just released a new version with the required capabilities. A: Use the command line parameter -XX:MaxPermSize=128m for a Sun JVM (obviously substituting 128 for whatever size you need).
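The classloader-leak mechanism described above has a close analogue in any garbage-collected language: an object reachable from a "static" root can never be collected, no matter how much memory you grant the process. A small Python illustration of why merely raising -XX:MaxPermSize only postpones the error when such a root exists (the registry class is hypothetical, for the sketch only):

```python
import gc
import weakref

class Registry:
    # Class-level ("static") storage: anything kept here stays reachable
    # for the life of the process, like a static field pinning a ClassLoader.
    instances = []

class Plugin:
    pass

p = Plugin()
tracker = weakref.ref(p)
Registry.instances.append(p)

del p
gc.collect()
# Still alive: the static list is a GC root holding the object.
assert tracker() is not None

Registry.instances.clear()  # the "find and fix the leak" step
gc.collect()
assert tracker() is None  # now collectable
```

This mirrors option 1 above: until the static reference is cleared, no amount of extra space fixes the accumulation.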
A: JRockit resolved this for me as well; however, I noticed that the servlet restart times were much worse, so while it was better in production, it was kind of a drag in development. A: I have a combination of Hibernate+Eclipse RCP; I tried using -XX:MaxPermSize=512m and -XX:PermSize=512m and it seems to be working for me. A: Set -XX:PermSize=64m -XX:MaxPermSize=128m. Later on you may also try increasing MaxPermSize. Hope it'll work. The same works for me. Setting only MaxPermSize didn't work for me. A: I tried several answers and the only thing that finally did the job was this configuration for the compiler plugin in the pom: <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>2.3.2</version> <configuration> <fork>true</fork> <meminitial>128m</meminitial> <maxmem>512m</maxmem> <source>1.6</source> <target>1.6</target> <!-- prevent PermGen space out of memory exception --> <!-- <argLine>-Xmx512m -XX:MaxPermSize=512m</argLine> --> </configuration> </plugin> hope this one helps. A: Try -XX:MaxPermSize=256m and if it persists, try -XX:MaxPermSize=512m A: The configuration of the memory depends on the nature of your app. What are you doing? What's the number of transactions processed? How much data are you loading? Etc. Probably you could profile your app and start cleaning up some modules from your app. Apparently this can occur after redeploying an application a few times. Tomcat has hot deploy but it consumes memory. Try restarting your container once in a while. Also, you will need to know the amount of memory needed to run in production mode; this seems like a good time for that research. A: They say that the latest rev of Tomcat (6.0.28 or 6.0.29) handles the task of redeploying servlets much better. A: I ran into exactly the same problem, but unfortunately none of the suggested solutions really worked for me. The problem did not happen during deployment, and I wasn't doing any hot deployments either.
In my case the problem occurred every time at the same point during the execution of my web-application, while connecting (via hibernate) to the database. This link (also mentioned earlier) did provide enough insights to resolve the problem. Moving the jdbc-(mysql)-driver out of the WEB-INF and into the jre/lib/ext/ folder seems to have solved the problem. This is not the ideal solution, since upgrading to a newer JRE would require you to reinstall the driver. Another candidate that could cause similar problems is log4j, so you might want to move that one as well. A: The first step in such a case is to check whether the GC is allowed to unload classes from PermGen. The standard JVM is rather conservative in this regard – classes are born to live forever. So once loaded, classes stay in memory even if no code is using them anymore. This can become a problem when the application creates lots of classes dynamically and the generated classes are not needed for longer periods. In such a case, allowing the JVM to unload class definitions can be helpful. This can be achieved by adding just one configuration parameter to your startup scripts: -XX:+CMSClassUnloadingEnabled By default this is set to false, so to enable it you need to explicitly set the following option in the Java options. If you enable CMSClassUnloadingEnabled, the GC will sweep PermGen too and remove classes which are no longer used. Keep in mind that this option will work only when UseConcMarkSweepGC is also enabled using the below option. So when running ParallelGC or, God forbid, Serial GC, make sure you have set your GC to CMS by specifying: -XX:+UseConcMarkSweepGC A: Assigning Tomcat more memory is NOT the proper solution. The correct solution is to do a cleanup after the context is destroyed and recreated (the hot deploy). The solution is to stop the memory leaks. If your Tomcat/webapp server is telling you that it failed to unregister JDBC drivers, then unregister them. This will stop the memory leaks.
You can create a ServletContextListener and configure it in your web.xml. Here is a sample ServletContextListener: import java.sql.Driver; import java.sql.DriverManager; import java.sql.SQLException; import java.util.Enumeration; import javax.servlet.ServletContextEvent; import javax.servlet.ServletContextListener; import org.apache.log4j.Logger; import com.mysql.jdbc.AbandonedConnectionCleanupThread; /** * * @author alejandro.tkachuk / calculistik.com * */ public class AppContextListener implements ServletContextListener { private static final Logger logger = Logger.getLogger(AppContextListener.class); @Override public void contextInitialized(ServletContextEvent arg0) { logger.info("AppContextListener started"); } @Override public void contextDestroyed(ServletContextEvent arg0) { logger.info("AppContextListener destroyed"); // manually unregister the JDBC drivers Enumeration<Driver> drivers = DriverManager.getDrivers(); while (drivers.hasMoreElements()) { Driver driver = drivers.nextElement(); try { DriverManager.deregisterDriver(driver); logger.info(String.format("Unregistering jdbc driver: %s", driver)); } catch (SQLException e) { logger.info(String.format("Error unregistering driver %s", driver), e); } } // manually shutdown clean up threads try { AbandonedConnectionCleanupThread.shutdown(); logger.info("Shutting down AbandonedConnectionCleanupThread"); } catch (InterruptedException e) { logger.warn("SEVERE problem shutting down AbandonedConnectionCleanupThread: ", e); e.printStackTrace(); } } } And here you configure it in your web.xml: <listener> <listener-class> com.calculistik.mediweb.context.AppContextListener </listener-class> </listener> A: I added -XX: MaxPermSize = 128m (you can experiment which works best) to VM Arguments as I'm using eclipse ide. In most of JVM, default PermSize is around 64MB which runs out of memory if there are too many classes or huge number of Strings in the project. For eclipse, it is also described at answer. 
STEP 1: Double-click on the Tomcat server in the Servers tab. STEP 2: Open the launch configuration and add -XX:MaxPermSize=128m to the end of the existing VM arguments. A: You'd better try -XX:MaxPermSize=128M rather than -XX:MaxPermGen=128M. I cannot tell the precise use of this memory pool, but it has to do with the number of classes loaded into the JVM. (Thus enabling class unloading for Tomcat can resolve the problem.) If your application generates and compiles classes on the run it is more likely to need a memory pool bigger than the default.
I'd also like to say that personally I hate this code, and that nobody should be using this as a "quick fix" if the existing code can be changed to use proper shutdown and cleanup methods. The only time this should be used is if there's an external library your code is dependent on (In my case, it was a RADIUS client) that doesn't provide a means to clean up its own static references. Anyway, on with the code. This should be called at the point where the application is undeploying - such as a servlet's destroy method or (the better approach) a ServletContextListener's contextDestroyed method. //Get a list of all classes loaded by the current webapp classloader WebappClassLoader classLoader = (WebappClassLoader) getClass().getClassLoader(); Field classLoaderClassesField = null; Class clazz = WebappClassLoader.class; while (classLoaderClassesField == null && clazz != null) { try { classLoaderClassesField = clazz.getDeclaredField("classes"); } catch (Exception exception) { //do nothing } clazz = clazz.getSuperclass(); } classLoaderClassesField.setAccessible(true); List classes = new ArrayList((Vector)classLoaderClassesField.get(classLoader)); for (Object o : classes) { Class c = (Class)o; //Make sure you identify only the packages that are holding references to the classloader. //Allowing this code to clear all static references will result in all sorts //of horrible things (like java segfaulting). if (c.getName().startsWith("com.whatever")) { //Kill any static references within all these classes. for (Field f : c.getDeclaredFields()) { if (Modifier.isStatic(f.getModifiers()) && !Modifier.isFinal(f.getModifiers()) && !f.getType().isPrimitive()) { try { f.setAccessible(true); f.set(null, null); } catch (Exception exception) { //Log the exception } } } } } classes.clear(); A: The java.lang.OutOfMemoryError: PermGen space message indicates that the Permanent Generation’s area in memory is exhausted. Any Java applications is allowed to use a limited amount of memory. 
The exact amount of memory your particular application can use is specified during application startup. Java memory is separated into different regions. Metaspace: A new memory space is born. The JDK 8 HotSpot JVM now uses native memory for the representation of class metadata; this is called Metaspace, similar to the Oracle JRockit and IBM JVMs. The good news is that it means no more java.lang.OutOfMemoryError: PermGen space problems, and no need for you to tune and monitor this memory space anymore when using Java 8 or higher. A: "They" are wrong because I'm running 6.0.29 and have the same problem even after setting all of the options. As Tim Howland said above, these options only put off the inevitable. They allow me to redeploy 3 times before hitting the error instead of every time I redeploy. A: In case you are getting this in the Eclipse IDE, even after setting the parameters --launcher.XXMaxPermSize, -XX:MaxPermSize, etc., and you are still getting the same error, it is most likely that Eclipse is using a buggy version of the JRE which was installed by some third-party application and set as the default. These buggy versions do not pick up the PermSize parameters, so no matter what you set, you still keep getting these memory errors. So, in your eclipse.ini add the following parameters: -vm <path to the right JRE directory>/<name of javaw executable> Also make sure you set the default JRE in the Eclipse preferences to the correct version of Java. A: The only way that worked for me was with the JRockit JVM. I have MyEclipse 8.6. The JVM's heap stores all the objects generated by a running Java program. Java uses the new operator to create objects, and memory for new objects is allocated on the heap at run time. Garbage collection is the mechanism of automatically freeing up the memory contained by the objects that are no longer referenced by the program. A: I was having a similar issue.
Mine is a JDK 7 + Maven 3.0.2 + Struts 2.0 + Google Guice dependency-injection based project. Whenever I tried running the mvn clean package command, it showed the following error and a "BUILD FAILURE" occurred: org.apache.maven.surefire.util.SurefireReflectionException: java.lang.reflect.InvocationTargetException; nested exception is java.lang.reflect.InvocationTargetException: null java.lang.reflect.InvocationTargetException Caused by: java.lang.OutOfMemoryError: PermGen space I tried all the above useful tips and tricks but unfortunately none worked for me. What worked for me is described step by step below: * *Go to your pom.xml *Search for <artifactId>maven-surefire-plugin</artifactId> *Add a new <configuration> element and then an <argLine> sub-element, in which you pass -Xmx512m -XX:MaxPermSize=256m, as shown below: <configuration> <argLine>-Xmx512m -XX:MaxPermSize=256m</argLine> </configuration> Hope it helps, happy programming :) A: Alternatively, you can switch to JRockit, which handles PermGen differently than Sun's JVM. It generally has better performance as well. http://www.oracle.com/technetwork/middleware/jrockit/overview/index.html A: 1) Increasing the PermGen Memory Size The first thing one can do is to make the size of the permanent generation heap space bigger. This cannot be done with the usual -Xms (set initial heap size) and -Xmx (set maximum heap size) JVM arguments, since, as mentioned, the permanent generation heap space is entirely separate from the regular Java heap space, and these arguments set the space for the regular Java heap space. However, there are similar arguments which can be used (at least with the Sun/OpenJDK JVMs) to make the size of the permanent generation heap bigger: -XX:MaxPermSize=128m The default is 64m.
2) Enable Sweeping Another way to take care of that for good is to allow classes to be unloaded so your PermGen never runs out: -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled Stuff like that worked magic for me in the past. One thing though, there’s a significant performance trade-off in using those, since permgen sweeps will make like an extra 2 requests for every request you make, or something along those lines. You’ll need to balance your use with the tradeoffs. You can find the details of this error at: http://faisalbhagat.blogspot.com/2014/09/java-outofmemoryerror-permgen.html A: App server PermGen errors that happen after multiple deployments are most likely caused by references held by the container into your old apps' classloaders. For example, using a custom log level class will cause references to be held by the app server's classloader. You can detect these inter-classloader leaks by using modern (JDK6+) JVM analysis tools such as jmap and jhat to look at which classes continue to be held in your app, and redesigning or eliminating their use. Usual suspects are databases, loggers, and other base-framework-level libraries. See Classloader leaks: the dreaded "java.lang.OutOfMemoryError: PermGen space" exception, and especially its followup post. A: I had the problem we are talking about here. My scenario was Eclipse Helios + Tomcat + JSF, and what I was doing was deploying a simple application to Tomcat. I was seeing the same problem, and solved it as follows. In Eclipse, go to the Servers tab and double-click on the registered server (in my case Tomcat 7.0); this opens the server's general registration information. In the "General Information" section, click on the link "Open launch configuration"; this opens the server's launch options. In the Arguments tab, add these two entries at the end of the VM arguments: -XX:MaxPermSize=512m -XX:PermSize=512m and you're done. A: The simplest answer these days is to use Java 8.
It no longer reserves memory exclusively for PermGen space, allowing the PermGen memory to co-mingle with the regular memory pool. Keep in mind that you will have to remove all non-standard -XXPermGen...=... JVM startup parameters if you don't want Java 8 to complain that they don't do anything. A: You can also solve this problem by doing a: rm -rf <tomcat-dir>/work/* <tomcat-dir>/temp/* Clearing out the work and temp directories makes Tomcat do a clean startup. A: If anyone is struggling with the same error in NetBeans, here is how I fixed it. In NetBeans: go to the Services tab --> right-click on the server --> choose Properties --> go to the Platform tab --> inside VM options type -Xms1024m In my case, I have given -Xms4096m A: To whoever is getting this same problem in IntelliJ when trying to debug a JBoss application: I just added -XX:MaxPermSize=128m to the VM options in Run/Debug Configurations. You can increase it to 256m to be more sure it will work. A: Increase Tomcat memory: go to C:\Program Files\Apache Software Foundation\Tomcat 9.0\bin (or wherever you installed Tomcat) and run tomcat9w (or whichever version you use). Then change the Initial Memory Pool from 128 to 1024, and also change the Maximum Memory Pool to 1024 or more as you want.
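Many of the fixes in this thread boil down to appending or replacing a single -XX flag in an options string such as JAVA_OPTS or CATALINA_OPTS. A small, hypothetical helper sketch (not part of any tool mentioned above) shows the replace-not-duplicate rule you want when scripting this:

```python
def with_option(opts, flag, value=None):
    """Return `opts` with `flag` set, replacing any earlier occurrence.

    `flag` is matched by prefix, so "-XX:MaxPermSize" also matches an
    existing "-XX:MaxPermSize=64m" entry.
    """
    wanted = flag if value is None else "%s=%s" % (flag, value)
    kept = [p for p in opts.split() if not p.startswith(flag)]
    kept.append(wanted)
    return " ".join(kept)

opts = "-Xmx512m -XX:MaxPermSize=64m"
assert with_option(opts, "-XX:MaxPermSize", "256m") == "-Xmx512m -XX:MaxPermSize=256m"
assert with_option("-Xmx512m", "-XX:+CMSClassUnloadingEnabled") == "-Xmx512m -XX:+CMSClassUnloadingEnabled"
```

Replacing rather than appending matters because with duplicated JVM flags it is easy to end up unsure which value actually took effect.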
{ "language": "en", "url": "https://stackoverflow.com/questions/88235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1241" }
Q: Files not extracted from .jar file when run I have updated my ant build.xml file to include a new file and a new folder. After creating the .jar I check if they exist in the jar by 'unzip\extract', and they are there. But when executing the .jar neither the folder or the file gets extracted. Am I missing a step? A: Look into getResourceAsStream. It'll keep you from having to extract the files from the jar file. Unless that's your goal. A: Your application should be able to use the file directly from within the jar, no need for extracting it. Or do you mean something else? A: Are you doing something specific to extract the jar file? I ask because normally jar files are not extracted when executing them. If you run "java -jar myJar.jar" or "java -cp myJar.jar com.example.MyMainClass" the jar files that is referenced will not be extracted. Java will load your classes and resources directly from the jar file without extracting it. A: If you wrap your application up using One-JAR, you can specify an attribute in the Manifest file to extract files that you want (See the One-Jar-Expand manifest attribute). As a bonus, you will also be able to wrap any dependent libraries along with your code, creating a single distributable jar.
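The answers' point — read resources directly from the archive instead of extracting them, which is what Java's getResourceAsStream does — can be demonstrated with Python's stdlib zipfile module (the member name and contents here are invented for the demo):

```python
import io
import zipfile

# Build a small archive in memory, standing in for a .jar.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("config/app.properties", "greeting=hello\n")

# Read the member directly from the archive -- nothing is extracted to disk.
with zipfile.ZipFile(buf) as zf:
    with zf.open("config/app.properties") as fh:
        text = fh.read().decode("utf-8")

assert text == "greeting=hello\n"
```

Just as with a jar, the consumer never needs the files on disk; it streams them out of the archive on demand.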
{ "language": "en", "url": "https://stackoverflow.com/questions/88243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Using the same App_Code classes across websites Let's say you have a solution with two website projects, Website A and Website B. Now inside Website A's App_Code folder, there is a Class X defined in a ClassX.cs file. What do you do if Website B also needs access to ClassX.cs? Is there any way to share this file across App_Code folders? Assume that moving the file to a common library is out of the question. A: Please please don't use these unholy website projects. Use Web Application projects instead, pack your shared classes into a library project and reference it from all your Web Applications. A: Pack your shared classes into a Library (a DLL) and from each site right-click on add reference and select the library that you have created. A: With the restriction of "Assume that moving the file to a common library is out of the question." the only way you could do this is to use NTFS junction points to essentially create a symlink to have the same .cs file in both folders. This is a terrible option though (for versioning reasons)...moving it to a common library is the best option. Here's the Wikipedia entry on NTFS junction points http://en.wikipedia.org/wiki/NTFS_junction_point and here's a tool for creating them http://technet.microsoft.com/en-us/sysinternals/bb896768.aspx A: I don't believe that there is a way without moving ClassX into a new code library project. .NET requires all an assembly's dependencies to exist in the same folder as the assembly itself, or in the GAC, to be automatically detected. You could try loading the assembly manually via the Reflection classes, although it's a bit hacky. The best solution, if you have the time available and the inclination to undertake it, would be to go with JRoppert's solution of moving it to a web application project. You could then use web references (which work about as nicely as regular references inside VS) to refer to ClassX. HTH
{ "language": "en", "url": "https://stackoverflow.com/questions/88252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you configure Django for simple development and deployment? I tend to use SQLite when doing Django development, but on a live server something more robust is often needed (MySQL/PostgreSQL, for example). Invariably, there are other changes to make to the Django settings as well: different logging locations / intensities, media paths, etc. How do you manage all these changes to make deployment a simple, automated process? A: Update: django-configurations has been released which is probably a better option for most people than doing it manually. If you would prefer to do things manually, my earlier answer still applies: I have multiple settings files. * *settings_local.py - host-specific configuration, such as database name, file paths, etc. *settings_development.py - configuration used for development, e.g. DEBUG = True. *settings_production.py - configuration used for production, e.g. SERVER_EMAIL. I tie these all together with a settings.py file that firstly imports settings_local.py, and then one of the other two. It decides which to load by two settings inside settings_local.py - DEVELOPMENT_HOSTS and PRODUCTION_HOSTS. settings.py calls platform.node() to find the hostname of the machine it is running on, and then looks for that hostname in the lists, and loads the second settings file depending on which list it finds the hostname in. That way, the only thing you really need to worry about is keeping the settings_local.py file up to date with the host-specific configuration, and everything else is handled automatically. Check out an example here. A: The most simplistic way I found was: 1) use the default settings.py for local development and 2) create a production-settings.py starting with: import os from settings import * And then just override the settings that differ in production: DEBUG = False TEMPLATE_DEBUG = DEBUG DATABASES = { 'default': { .... 
} } A: Personally, I use a single settings.py for the project; I just have it look up the hostname it's on (my development machines have hostnames that start with "gabriel"), so I just have this: import socket if socket.gethostname().startswith('gabriel'): LIVEHOST = False else: LIVEHOST = True then in other parts I have things like: if LIVEHOST: DEBUG = False PREPEND_WWW = True MEDIA_URL = 'http://static1.grsites.com/' else: DEBUG = True PREPEND_WWW = False MEDIA_URL = 'http://localhost:8000/static/' and so on. A little bit less readable, but it works fine and saves having to juggle multiple settings files. A: At the end of settings.py I have the following: try: from settings_local import * except ImportError: pass This way if I want to override default settings I just need to put settings_local.py right next to settings.py. A: Somewhat related, for the issue of deploying Django itself with multiple databases, you may want to take a look at Djangostack. You can download a completely free installer that allows you to install Apache, Python, Django, etc. As part of the installation process we allow you to select which database you want to use (MySQL, SQLite, PostgreSQL). We use the installers extensively when automating deployments internally (they can be run in unattended mode). A: I have two files: settings_base.py, which contains common/default settings and which is checked into source control. Each deployment has a separate settings.py, which executes from settings_base import * at the beginning and then overrides as needed. A: In addition to the multiple settings files mentioned by Jim, I also tend to place two settings into my settings.py file at the top: BASE_DIR and BASE_URL, set to the path of the code and the URL to the base of the site; all other settings are modified to append themselves to these. BASE_DIR = "/home/sean/myapp/" e.g. MEDIA_ROOT = "%smedia/" % BASE_DIR So when moving the project I only have to edit these settings and not search the whole file.
I would also recommend looking at fabric and Capistrano (a Ruby tool, but it can be used to deploy Django applications), which facilitate automation of remote deployment. A: I have my settings.py file in an external directory. That way, it doesn't get checked into source control, or over-written by a deploy. I put this in the settings.py file under my Django project, along with any default settings: import sys import os.path def _load_settings(path): print "Loading configuration from %s" % (path) if os.path.exists(path): settings = {} # execfile can't modify globals directly, so we will load them manually execfile(path, globals(), settings) for setting in settings: globals()[setting] = settings[setting] _load_settings("/usr/local/conf/local_settings.py") Note: This is very dangerous if you can't trust local_settings.py. A: Well, I use this configuration: At the end of settings.py: #settings.py try: from locale_settings import * except ImportError: pass And in locale_settings.py: #locale_settings.py class Settings(object): def __init__(self): import settings self.settings = settings def __getattr__(self, name): return getattr(self.settings, name) settings = Settings() INSTALLED_APPS = settings.INSTALLED_APPS + ( 'gunicorn',) # Delete duplicate settings maybe not needed, but I prefer to do it. del settings del Settings A: So many complicated answers! Every settings.py file comes with: BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) I use that directory to set the DEBUG variable like this (replace with the directory where your dev code is): DEBUG=False if(BASE_DIR=="/path/to/my/dev/dir"): DEBUG = True Then, every time the settings.py file is moved, DEBUG will be False and it's your production environment. Every time you need different settings than the ones in your dev environment just use: if(DEBUG): #Debug setting else: #Release setting A: Why make things so complicated? I come into Django from a PHP/Laravel background.
I use .env and you can easily configure it. Install the package django-environ.

Now, in the folder where you have settings.py, create a file .env (make sure to put this file in .gitignore). In the .env file, put the env variables: the debug state, secret key, mail credentials, etc. A snapshot of an example .env:

SECRET_KEY="django-insecure-zy%)s5$=aql=#ox54lzfjyyx!&uv1-q0kp^54p(^251&_df75i"
DB_NAME=bugfree
DB_USER=postgres
DB_PASSWORD=koushik
DB_PORT=5433
DB_HOST=localhost
APP_DEBUG=True
# everything is a string here

In the settings, make sure to instantiate it using this:

import environ
env = environ.Env()
environ.Env.read_env()

Now you can import values from the .env file and put them wherever you want. Some examples in settings.py:

SECRET_KEY = env('SECRET_KEY')
DEBUG = bool(env('APP_DEBUG', False))

You can also supply a default value, like this:

env('DB_NAME', 'default value here')

TIP: You can create another file, .env.example, in the same folder where you have the .env file, containing a template of .env, and commit the .example file. It helps future devs easily see which env variables exist. .env.example would be something like this:

SECRET_KEY=VALUE_HERE
DB_NAME=VALUE_HERE
DB_USER=VALUE_HERE
DB_PASSWORD=VALUE_HERE
DB_PORT=VALUE_HERE
DB_HOST=VALUE_HERE
EMAIL_HOST=VALUE_HERE
EMAIL_PORT=VALUE_HERE
EMAIL_HOST_USER=VALUE_HERE
EMAIL_HOST_PASSWORD=VALUE_HERE
DEFAULT_FROM_EMAIL=VALUE_HERE

A: I think it depends on the size of the site as to whether you need to step up from using SQLite. I've successfully used SQLite on several smaller live sites and it runs great.

A: I use the environment:

if os.environ.get('WEB_MODE', None) == 'production':
    from settings_production import *
else:
    from settings_dev import *

I believe this is a much better approach, because eventually you need special settings for your test environment, and you can easily add that to this condition.

A: This is an older post, but I think if I add this useful library it will simplify things.
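For completeness: django-environ is a real third-party package, but for plain KEY=VALUE pairs the core of what it does can be sketched with the standard library alone. This is a deliberate simplification (it ignores quoting edge cases, `export` prefixes, and multiline values), and the function names are mine:

```python
import os

def read_env(path, environ=os.environ):
    """Minimal .env reader: KEY=VALUE lines, '#' comments skipped,
    one layer of surrounding quotes stripped. Real parsers do far more."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#') or '=' not in line:
                continue
            key, _, value = line.partition('=')
            # setdefault so real environment variables win over the file
            environ.setdefault(key.strip(), value.strip().strip('"\''))

def env_bool(name, default=False, environ=os.environ):
    """Everything in a .env file is a string, so booleans need parsing."""
    return environ.get(name, str(default)).lower() in ('true', '1', 'yes')
```

The `env_bool` helper matters because `bool(env('APP_DEBUG', False))` in the answer above is only safe if the library already coerced the value; `bool("False")` in plain Python is `True`.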
Use django-configurations.

Quickstart:

pip install django-configurations

Then subclass the included configurations.Configuration class in your project's settings.py, or in any other module you're using to store the settings constants, e.g.:

# mysite/settings.py
from configurations import Configuration

class Dev(Configuration):
    DEBUG = True

Set the DJANGO_CONFIGURATION environment variable to the name of the class you just created, e.g. in ~/.bashrc:

export DJANGO_CONFIGURATION=Dev

and the DJANGO_SETTINGS_MODULE environment variable to the module import path as usual, e.g. in bash:

export DJANGO_SETTINGS_MODULE=mysite.settings

Alternatively, supply the --configuration option when using Django management commands, along the lines of Django's default --settings command line option, e.g.:

python manage.py runserver --settings=mysite.settings --configuration=Dev

To enable Django to use your configuration, you now have to modify your manage.py or wsgi.py script to use django-configurations' versions of the appropriate starter functions; e.g., a typical manage.py using django-configurations would look like this:

#!/usr/bin/env python
import os
import sys

if __name__ == "__main__":
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')
    os.environ.setdefault('DJANGO_CONFIGURATION', 'Dev')
    from configurations.management import execute_from_command_line
    execute_from_command_line(sys.argv)

Notice that we don't use the common tool django.core.management.execute_from_command_line, but instead configurations.management.execute_from_command_line. The same applies to your wsgi.py file, e.g.:

import os
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')
os.environ.setdefault('DJANGO_CONFIGURATION', 'Dev')
from configurations.wsgi import get_wsgi_application
application = get_wsgi_application()

Here we don't use the default django.core.wsgi.get_wsgi_application function, but instead configurations.wsgi.get_wsgi_application. That's it!
You can now use your project with manage.py and your favorite WSGI-enabled server.

A: In fact, you should probably consider having the same (or almost the same) configuration for your development and production environments. Otherwise, situations like "Hey, it works on my machine" will happen from time to time. So, in order to automate your deployment and eliminate those WOMM issues, just use Docker.
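The class-selection mechanism that django-configurations relies on (see the django-configurations answer above) is easy to picture in isolation: a registry of settings classes, one of which is chosen by the DJANGO_CONFIGURATION environment variable. A rough stdlib-only sketch — the class names follow that answer, while the registry/dispatch shape is my own assumption of the simplest possible implementation:

```python
import os

class Base:
    DEBUG = False
    ALLOWED_HOSTS = []

class Dev(Base):
    DEBUG = True

class Prod(Base):
    ALLOWED_HOSTS = ['example.com']  # illustrative value, not from the answer

# Hypothetical registry; django-configurations discovers classes itself.
CONFIGURATIONS = {'Dev': Dev, 'Prod': Prod}

def active_configuration(environ=os.environ):
    """Pick the settings class named by DJANGO_CONFIGURATION (default Dev)."""
    return CONFIGURATIONS[environ.get('DJANGO_CONFIGURATION', 'Dev')]
```

The point of the class-based style is visible even in this toy version: shared settings live on `Base`, and each environment overrides only what differs.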
{ "language": "en", "url": "https://stackoverflow.com/questions/88259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "117" }
Q: How do you get selenium to recognize that a page loaded?

In certain unknown situations, Selenium does not detect that a page has loaded when using the open method. I am using the Java API. For example (this code will not produce the error; I don't know of an externally visible page that will):

Selenium browser = new DefaultSelenium("localhost", 4444, "*firefox", "http://www.google.com");
browser.start();
browser.open("http://www.google.com/webhp?hl=en");
browser.type("q", "hello world");

When the error occurs, the call to 'open' times out, even though you can clearly see that the page has loaded successfully before the timeout occurs. Increasing the timeout does not help. The call to 'type' never occurs, and no progress is made. How do you get Selenium to recognize that the page has loaded when this error occurs?

A: I faced this problem quite recently. All JS-based solutions didn't quite fit the ICEFaces 2.x + Selenium 2.x/WebDriver combination I have. What I did, and what worked for me, is the following. In the corner of the screen, there's a connection activity indicator:

<ice:outputConnectionStatus id="connectStat" showPopupOnDisconnect="true"/>

In my Java unit test, I wait until its 'idle' image comes back again:

private void waitForAjax() throws InterruptedException {
    for (int second = 0;; second++) {
        if (second >= 60) fail("timeout");
        try {
            if ("visibility: visible;".equals(
                    selenium.getAttribute("top_right_form:connectStat:connection-idle@style"))) {
                break;
            }
        } catch (Exception e) {
        }
        Thread.sleep(1000);
    }
}

You can disable rendering of this indicator in the production build if showing it on the page is unnecessary, or use empty 1x1 GIFs as its images. Works 100% (with popups, pushed messages, etc.) and relieves you from the hell of specifying waitForElement(...) for each element separately. Hope this helps someone.

A: Maybe this will help you....
Consider the following method in a class called Functions.java:

public static void waitForPageLoaded(WebDriver driver) {
    ExpectedCondition<Boolean> expectation = new ExpectedCondition<Boolean>() {
        public Boolean apply(WebDriver driver) {
            return ((JavascriptExecutor) driver).executeScript("return document.readyState").equals("complete");
        }
    };
    WebDriverWait wait = new WebDriverWait(driver, 30);
    try {
        wait.until(expectation);
    } catch (Throwable error) {
        Assert.assertFalse(true, "Timeout waiting for Page Load Request to complete.");
    }
}

And you can call this method from your test. Since it is a static method, you can call it directly with the class name:

public class Test {
    WebDriver driver;

    @Test
    public void testing() {
        driver = new FirefoxDriver();
        driver.get("http://www.gmail.com");
        Functions.waitForPageLoaded(driver);
    }
}

A: When I do Selenium testing, I wait to see if a certain element is visible (waitForVisible), then I do my action. I usually try to use an element after the one I'm typing in.

A: Using 'openAndWait' in place of 'open' will do the trick. From the website: many Actions can be called with the "AndWait" suffix, e.g. "clickAndWait". This suffix tells Selenium that the action will cause the browser to make a call to the server, and that Selenium should wait for a new page to load.

A: Enabling the 'multiWindow' feature solved the issue, though I am not clear why.

SeleniumServer(int port, boolean slowResources, boolean multiWindow)
SeleniumServer server = new SeleniumServer(4444, false, true);

Any clarification would be helpful.

A: I've run into similar issues when using Selenium to test an application with iframes. Basically, it seemed that once the primary page (the page containing the iframes) was loaded, Selenium was unable to determine when the iframe content had finished loading. From looking at the source for the link you're trying to load, it looks like there's some JavaScript that's creating additional page elements once the page has loaded.
I can't be sure, but it's possible that this is what's causing the problem, since it seems similar to the situation I've encountered above. Do you get the same sort of errors loading a static page (i.e., something with straight HTML)? If you're unable to get a better answer, try the Selenium forums; they're usually quite active and the Selenium devs do respond to good questions: http://clearspace.openqa.org/community/selenium_remote_control

Also, if you haven't already tried it, add a call to browser.waitForPageToLoad("15000") after the call to open. I've found that doing this after every page transition makes my tests a little more solid, even though it shouldn't technically be required. (When Selenium detects that the page actually has loaded, it continues, so the actual timeout value isn't really a concern.)

A: Not a perfect solution, but I am using this method:

$t1 = time(); // current timestamp
$this->selenium->waitForPageToLoad(30);
$t2 = time();
if ($t2 - $t1 >= 28) {
    // page was not loaded
}

So it is, in effect, checking whether the wait consumed (nearly) the whole timeout; if it did, the page was not loaded.

A: If your page has no AJAX, try polling for the footer of the page (I use JUnit's fail(""); you may use System.err.println() instead). Note that findElement throws NoSuchElementException rather than returning null, so the lookup has to be wrapped:

element.click();
int timeout = 120; // one loop = 0.5 sec, so this waits up to one minute
WebElement myFooter = null;
for (int i = 0; i < timeout; i++) {
    try {
        myFooter = driver.findElement(By.id("footer"));
    } catch (NoSuchElementException e) {
        // footer not rendered yet; keep polling
    }
    if (myFooter != null) {
        break;
    }
    Thread.sleep(500);
}
if (myFooter == null) {
    fail("ERROR! PAGE TIMEOUT");
}

A: Another idea is to modify the AJAX API to leave a marker after AJAX actions. After the AJAX action has finished, before returning, set an invisible field to TRUE; Selenium will find it and read it as a green light.

In the HTML:

<input type='hidden' id="greenlight">

In Selenium:

if (driver.findElement(By.id("greenlight")).getAttribute("value").equals("TRUE")) {
    // do something after page loading
}
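The answers above all share one shape: poll a condition until it holds or a deadline passes. That skeleton can be written without any Selenium dependency at all; the class and method names here are mine, and in a real test the lambda would wrap a WebDriver/Selenium check such as the document.readyState probe shown earlier:

```java
import java.util.function.BooleanSupplier;

// Minimal poll-until-true helper; the Selenium answers above are all
// instances of this loop with different conditions plugged in.
public class WaitUtil {
    public static boolean waitUntil(BooleanSupplier condition,
                                    long timeoutMillis,
                                    long pollMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;          // condition met: page "loaded"
            }
            Thread.sleep(pollMillis); // back off before re-checking
        }
        return false;                 // timed out
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Stand-in condition that flips true after ~200 ms,
        // playing the role of document.readyState == "complete".
        boolean ok = waitUntil(() -> System.currentTimeMillis() - start > 200, 5000, 50);
        System.out.println(ok);
    }
}
```

Returning a boolean instead of failing inside the helper keeps the policy decision (fail the test, retry, log) at the call site, which is the main design difference from the fail-inside-the-loop versions above.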
{ "language": "en", "url": "https://stackoverflow.com/questions/88269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: In ColdFusion 8, can you declare a function as private using cfscript?

Normally you create a function using cfscript like:

<cfscript>
    function foo() {
        return "bar";
    }
</cfscript>

Is there a way to declare this as a private function, available only to other methods inside the same CFC? I know you can do it with tags:

<cffunction name="foo" access="private">
    <cfreturn "bar">
</cffunction>

But I don't want to have to rewrite this large function that's already written in cfscript.

A: Not in ColdFusion 8. It was added in CF9, though. You don't need to rewrite the whole function; you can do this:

<cffunction name="foo" returntype="string" output="false" access="private">
    <cfscript>
        return "bar";
    </cfscript>
</cffunction>

If you have access to CF9, the new syntax is:

private string function foo() output="false" {
    return "bar";
}
{ "language": "en", "url": "https://stackoverflow.com/questions/88274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Example/How-To create calendar appointment using Exchange 03 Webdav and PHP? I've been trying to figure this out for about two weeks. I'm able to create email items in people's folders, read the folders, all that stuff but for the life of me I can not get anything to work with the calendars. I can provide examples of the XML I'm sending to WebDav but hoping someone out there has done this and has an example? A: I did this in a Java program a few years back, and the way I did it was to PUT a VCALENDAR document into the folder. One quirk is that the VCALENDAR had to be enclosed within an RFC822 message. It's a bizarre combination of WebDAV, email, and iCAL/VCAL, but it worked at the time on Exchange 2003 hosted at Link2Exchange. I'm sure there is an easier way, but this is what worked for me. Below I show a tcpdump packet trace of what happened. You should probably use ngrep/tcpdump on your own Outlook/Entourage client to see what it does. Note that "Cal2" is the name of my test calendar folder. You'd use "Calendar" for the main calendar folder. T 10.0.1.95:59741 -> 66.211.136.9:80 [AP] PUT /exchange/yourname.domainname.com/Cal2/CC1.1163646061548.0.eml HTTP/1.1. translate: f. Content-Type: message/rfc822. Pragma: no-cache. Accept: */*. Cache-Control: no-cache. Authorization: Basic NOYOUCANTSEEMYPASSWORDYOUBASTARDS. User-Agent: Jakarta Commons-HttpClient/2.0final. Host: e1.exmx.net. Cookie: sessionid=29486b50-d398-4f76-9604-8421950c7dcd:0x0. Content-Length: 478. Expect: 100-continue. . T 66.211.136.9:80 -> 10.0.1.95:59741 [AP] HTTP/1.1 100 Continue. . T 10.0.1.95:59741 -> 66.211.136.9:80 [AP] content-class: urn:content-classes:appointment. Content-Type: text/calendar;. .method=REQUEST;. .charset="utf-8". Content-Transfer-Encoding: 8bit. . BEGIN:VCALENDAR. BEGIN:VEVENT. UID:E1+1382+1014+495066799@I1+1382+1014+6+495066799. SUMMARY:Voice Architecture Leads Meeting. PRIORITY:5. LOCATION:x44444 pc:6879. DTSTART:20061122T193000Z. DTEND:20061122T203000Z. 
DTSTAMP:20061110T074856Z. DESCRIPTION:this is a description. SUMMARY:this is a summary. END:VEVENT. END:VCALENDAR. T 66.211.136.9:80 -> 10.0.1.95:59741 [AP] HTTP/1.1 201 Created. Date: Thu, 16 Nov 2006 03:00:16 GMT. Server: Microsoft-IIS/6.0. X-Powered-By: ASP.NET. MS-Exchange-Permanent-URL: http://e1.exmx.net/exchange/yourname.yourdomain.com/-FlatUrlSpace-/122cda661de1da48936f9 44bda4dde6e-3af8a8/122cda661de1da48936f944bda4dde6e-3f3383. Location: http://e1.exmx.net/exchange/yourname.yourdomain.com/Cal2/CC1.1163646061548.0.eml. Repl-UID: <rid:122cda661de1da48936f944bda4dde6e0000003f3383>. Content-Type: text/html. Content-Length: 110. Allow: OPTIONS, TRACE, GET, HEAD, DELETE, PUT, COPY, MOVE, PROPFIND, PROPPATCH, SEARCH, SUBSCRIBE, UNSUBSCRIBE, PO LL, BDELETE, BCOPY, BMOVE, BPROPPATCH, BPROPFIND, LOCK, UNLOCK. ResourceTag: <rt:122cda661de1da48936f944bda4dde6e0000003f3383122cda661de1da48936f944bda4dde6e0000003f4671>. GetETag: "122cda661de1da48936f944bda4dde6e0000003f4671". MS-WebStorage: 6.5.7638. Cache-Control: no-cache. . T 66.211.136.9:80 -> 10.0.1.95:59741 [AP] <body><h1>/exchange/yourname.yourdomain.com/Cal2/CC1.1163646061548.0.eml was created successfully</h1></body>. 
You can verify that it worked using something like Cadaver to query the object's properties via WebDAV like so: dav:/exchange/yourname@yourdomain.com/Cal2/> propget CC1.1163646061548.0.eml Fetching properties for `CC1.1163646061548.0.eml': textdescription = this is a description contentclass = urn:content-classes:appointment supportedlock = <lockentry><locktype><transaction><groupoperation></groupoperation></transaction></locktype><locks cope><local></local></lockscope></lockentry> permanenturl = http://e1.exmx.net/exchange/yourname@yourdomain.com/-FlatUrlSpace-/122cda661de1da48936f944bda4dde6e- 3af8a8/122cda661de1da48936f944bda4dde6e-3f3383 getcontenttype = message/rfc822 id = AQEAAAAAOvioAQAAAAA/M4MAAAAA mid = -8992774761696198655 uid = E1+1382+1014+495066799@I1+1382+1014+6+495066799 isfolder = 0 resourcetype = method = PUBLISH getetag = "122cda661de1da48936f944bda4dde6e0000003f4671" lockdiscovery = outlookmessageclass = IPM.Appointment creationdate = 2006-11-16T03:00:16.549Z outlookmessageclass = IPM.Appointment creationdate = 2006-11-16T03:00:16.549Z ntsecuritydescriptor = CAAEAAAAAAABAC+MMAAAAEwAAAAAAAAAFAAAAAIAHAABAAAAARAUAL8PHwABAQAAAAAABQcAAAABBQAAAAAABRUAAAC nkePD6LEa8iIT/+gqDAAAAQUAAAAAAAUVAAAAp5Hjw+ixGvIiE//oAQIAAA== dtstamp = 2006-11-10T07:48:56.000Z lastmodified = 2006-11-16T03:00:16.565Z dtstart = 2006-11-22T19:30:00.000Z location = x44444 pc:6879 duration = 3600 htmldescription = <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN"> <HTML> <HEAD> <META NAME="Generator" CONTENT="MS Exchange Server version 6.5.7638.1"> <TITLE>this is a summary</TITLE> </HEAD> <BODY> <!-- Converted from text/plain format --> <P><FONT SIZE=2>this is a description</FONT> </P> </BODY> </HTML> ishidden = 0 parentname = http://e1.exmx.net/exchange/yourname@yourdomain.com/Cal2/ meetingstatus = TENTATIVE subject = this is a summary getcontentlength = 631 normalizedsubject = this is a summary isstructureddocument = 0 repl-uid = rid:122cda661de1da48936f944bda4dde6e0000003f3383 
timezoneid = 0 displayname = CC1.1163646061548.0.eml href = http://e1.exmx.net/exchange/yourname@yourdomain.com/Cal2/CC1.1163646061548.0.eml nomodifyexceptions = 1 patternend = 2006-11-22T20:30:00.000Z isreadonly = 0 instancetype = 0 uid = AQEAAAAAPzODAAAAAAAAAAAAAAAA getlastmodified = 2006-11-16T03:00:16.565Z created = 2006-11-16T03:00:16.549Z sensitivity = 0 dtend = 2006-11-22T20:30:00.000Z hasattachment = 0 iscollection = 0 read = 1 resourcetag = rt:122cda661de1da48936f944bda4dde6e0000003f3383122cda661de1da48936f944bda4dde6e0000003f4671 patternstart = 2006-11-22T19:30:00.000Z priority = 0 sequence = 0 A: have a look at this http://golemlab.wordpress.com/2009/09/13/php-owa-2003-calendar-fun/
{ "language": "en", "url": "https://stackoverflow.com/questions/88276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Time Synchronization Ubuntu Server Under Parallels

I've installed Ubuntu Server (8.04) into Parallels and found that the system time/clock ran fast, to the extent that it would gain hours over time.

A: What about using an NTP service to keep it sync'd?

A: You might just want to install ntpd; it works well enough on real servers, and should also do the trick on virtual ones. Another possibility is to check whether Parallels has a configuration option like "Sync guest clock to host clock".
{ "language": "en", "url": "https://stackoverflow.com/questions/88289", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: macro support in F#

After reading Practical Common Lisp I finally understood what the big deal about macros was, and I have been looking for a language on the .NET platform that supports them. There are a few Lisp dialects for .NET, but from what I have been able to gather, all are either very beta or abandoned. Recently my interest has been sparked by Clojure, but it's for the Java platform, and while one probably could use IKVM, it doesn't feel very integrated, especially when you want to do stuff like WPF. Recently I have been hearing whispers about F#. I tried to look through the documentation to see if I could find anything about macro support, but haven't found it. So does anyone know? Thanks :)

A: Nope. No macros for F#.

A: Have you looked at Boo? While Boo doesn't have macros, it has an open compiler pipeline, which is a good alternative to macros for syntactic metaprogramming.

[EDIT] As noted in the comments, Boo does have macros now.

A: I thought I should point out that there is now a pretty active .NET/Mono port of Clojure. Clojure supports Lisp-style macros, as is noted in the question. As others have said, macros are not supported in F# at this point (late 2010).

A: "Recently I have been hearing whisper about F#, I tried to look at the documentation if I could find anything about macro support, but haven't found it. So does anyone know?"

F# does not support macros, and it is unlikely that it ever will.

A: How about using F# quotations?
http://tomasp.net/blog/fsquotations.aspx

A: Nemerle, at http://nemerle.org/, is a .NET language (also supporting Mono) that supports a lot of the functional programming paradigm while staying visually close to C#. It has extensive macro support.

A: Well, F# is based on OCaml, and OCaml has a rather extensive macro system. Given the syntactic and semantic similarities of F# and OCaml, you may be able to port the OCaml macro system over to F#. Other than stealing OCaml's macro system, I'm unaware of a canned macro system for F#.

A: That may be the other way around from what you want, but do you know about RDNZL? It's a foreign-function interface (FFI) that lets you call .NET libraries from your Lisp code.

They are most probably much less mature than any Common Lisp or Scheme implementation, but there are Lisp dialects for .NET: L# and DotLisp.

A: There are two actively developed Lisps for .NET:

* IronScheme - a DLR-based Scheme implementation
* Xronos - a DLR-based port of Clojure

A: I'm currently investigating the possibilities of meta-programming in F#. If we define macros as text templates which expand into code, then there are two obvious approaches:

* T4 templates. There is an implementation for F#: https://github.com/kerams/Templatus
* I've seen somewhere the compilation of F# from strings into a separate assembly, followed by loading of that assembly.
{ "language": "en", "url": "https://stackoverflow.com/questions/88302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: How can I add cookies to Seaside responses without redirecting?

I'm making a small web application in Seaside. I have a login component, and after the user logs in I want to send along a cookie when the next component renders itself. Is there a way to get at the object handling the response so I can add something to the headers it will output? I'm trying to avoid using WASession>>redirectWithCookies, since it seems pretty kludgey to redirect only because I want to set a cookie. Is there another way that already exists to add a cookie that will go out on the next response?

A: There is currently no built-in way to add cookies during the action/callback phase of request processing. This is most likely a defect and is noted in this issue: http://code.google.com/p/seaside/issues/detail?id=48

This is currently slated to be fixed for Seaside 2.9, but I don't know if it will even be backported to 2.8 or not. Keep in mind that there is already (by default) a redirection between the action and rendering phases to prevent a refresh from re-triggering the callbacks, so in the grand scheme of things, one more redirect in this case isn't so bad. If you still want to dig further, have a look at WARenderContinuation>>handleRequest:. That's where callback processing is triggered and the redirect or rendering phase begun.

Edited to add: The issue has now been fixed, and (in the latest development code) you can now properly add cookies to the current response at any time. Simply access the response object in the current request context and add the cookie. For example, you might do something like:

self requestContext response addCookie: aCookie

This is unlikely to be backported to Seaside 2.8, as it required a fairly major shift in the way responses are handled.

A: I've just looked into this in depth, and the answer seems to be no.
Specifically, there's no way to get at the response from the WARenderCanvas or anything it can access (it holds onto the WARenderingContext, which holds onto the WAHtmlStreamDocument, which holds onto the response's stream but not the response itself). I think it would be reasonable to give the context access to the current response, precisely to be able to set headers on it, but you asked if there was already a way, so: no. That said, Seaside does a lot of extra redirecting, and it doesn't seem to have much impact on the user experience, so maybe the thing to do is to stop worrying about it seeming kludgey and go with the flow of the API that's already there :)
{ "language": "en", "url": "https://stackoverflow.com/questions/88306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to generate a random string in Ruby

I'm currently generating an 8-character pseudo-random uppercase string for "A" .. "Z":

value = ""; 8.times{value << (65 + rand(25)).chr}

but it doesn't look clean, and it can't be passed as an argument since it isn't a single statement. To get a mixed-case string "a" .. "z" plus "A" .. "Z", I changed it to:

value = ""; 8.times{value << ((rand(2)==1?65:97) + rand(25)).chr}

but it looks like trash. Does anyone have a better method?

A: Why not use SecureRandom?

require 'securerandom'
random_string = SecureRandom.hex
# outputs: 5b5cd0da3121fc53b4bc84d0c8af2e81 (i.e. 32 chars of 0..9, a..f)

SecureRandom also has methods for:

* base64
* random_bytes
* random_number

see: http://ruby-doc.org/stdlib-1.9.2/libdoc/securerandom/rdoc/SecureRandom.html

A: Another method I like to use:

rand(2**256).to_s(36)[0..7]

Add ljust if you are really paranoid about the correct string length:

rand(2**256).to_s(36).ljust(8,'a')[0..7]

A: Here is one simple snippet for a random password of length 8:

rand_password = ('0'..'z').to_a.shuffle.first(8).join

A: Be aware: rand is predictable for an attacker and therefore probably insecure. You should definitely use SecureRandom if this is for generating passwords. I use something like this:

length = 10
characters = ('A'..'Z').to_a + ('a'..'z').to_a + ('0'..'9').to_a

password = SecureRandom.random_bytes(length).each_char.map do |char|
  characters[(char.ord % characters.length)]
end.join

A: SecureRandom.base64(15).tr('+/=lIO0', 'pqrsxyz')

Something from Devise.

A: I think this is a nice balance of conciseness, clarity and ease of modification.
characters = ('a'..'z').to_a + ('A'..'Z').to_a
# Prior to 1.9, use .choice, not .sample
(0..8).map{characters.sample}.join

Easily modified. For example, including digits:

characters = ('a'..'z').to_a + ('A'..'Z').to_a + (0..9).to_a

Uppercase hexadecimal:

characters = ('A'..'F').to_a + (0..9).to_a

For a truly impressive array of characters:

characters = (32..126).to_a.pack('U*').chars.to_a

A: Just adding my cents here...

def random_string(length = 8)
  rand(32**length).to_s(32)
end

A: Generate a random 8-letter string (e.g. NVAYXHGR):

[*('A'..'Z')].sample(8).join

Generate a random 8-character string (e.g. 3PH4SWF2), excluding 0/1/I/O; requires Ruby 1.9:

([*('A'..'Z'),*('0'..'9')]-%w(0 1 I O)).sample(8).join

A: You can use String#random from the Facets of Ruby gem facets. It basically does this:

class String
  def self.random(len=32, character_set = ["A".."Z", "a".."z", "0".."9"])
    characters = character_set.map { |i| i.to_a }.flatten
    characters_len = characters.length
    (0...len).map{ characters[rand(characters_len)] }.join
  end
end

A: This solution needs an external dependency, but seems prettier than others:

* Install gem faker
* Faker::Lorem.characters(10) # => "ang9cbhoa8"

A: I was doing something like this recently to generate an 8-byte random string from 62 characters. The characters were 0-9, a-z, A-Z. I had an array of them and was looping 8 times, picking a random value out of the array. This was inside a Rails app.

str = ''
8.times {|i| str << ARRAY_OF_POSSIBLE_VALUES[rand(SIZE_OF_ARRAY_OF_POSSIBLE_VALUES)] }

The weird thing is that I got a good number of duplicates, although randomly this should pretty much never happen: 62^8 is huge, but out of 1200 or so codes in the db I had a good number of duplicates. I noticed them happening on hour boundaries of each other. In other words, I might see a dupe at 12:12:23 and 2:12:22 or something like that... not sure if time is the issue or not. This code was in the before-create of an ActiveRecord object.
Before the record was created, this code would run and generate the 'unique' code. Entries in the DB were always produced reliably, but the code (str in the above line) was being duplicated much too often. I created a script to run through 100000 iterations of the line above with a small delay, so it would take 3-4 hours, hoping to see some kind of repeat pattern on an hourly basis, but saw nothing. I have no idea why this was happening in my Rails app.

A: Given:

chars = [*('a'..'z'),*('0'..'9')].flatten

A single expression, can be passed as an argument, allows duplicate characters (where n=8 in your case):

Array.new(n) { chars.sample }.join

A: My favorite is:

(:A..:Z).to_a.shuffle[0,8].join

Note that shuffle requires Ruby >= 1.9.

A: I can't remember where I found this, but it seems like the best and the least process-intensive to me:

def random_string(length=10)
  chars = 'abcdefghjkmnpqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ0123456789'
  password = ''
  length.times { password << chars[rand(chars.size)] }
  password
end

A: We've been using this in our code:

class String
  def self.random(length=10)
    ('a'..'z').sort_by {rand}[0,length].join
  end
end

The maximum length supported is 25 (we're only using it with the default anyway, so that hasn't been a problem). Someone mentioned that 'a'..'z' is suboptimal if you want to completely avoid generating offensive words. One of the ideas we had was removing vowels, but you still end up with WTFBBQ etc.

A: With this method you can pass in an arbitrary length. It defaults to 6:

def generate_random_string(length=6)
  string = ""
  chars = ("A".."Z").to_a
  length.times do
    string << chars[rand(chars.length-1)]
  end
  string
end

A: I like Radar's answer best, so far, I think.
I'd tweak it a bit, like this:

CHARS = ('a'..'z').to_a + ('A'..'Z').to_a

def rand_string(length=8)
  s = ''
  length.times { s << CHARS[rand(CHARS.length)] }
  s
end

A: ''.tap {|v| 4.times { v << ('a'..'z').to_a.sample} }

A: Two solutions for a random string consisting of 3 ranges:

(('a'..'z').to_a + ('A'..'Z').to_a + (0..9).to_a).sample(8).join

([*(48..57),*(65..90),*(97..122)]).sample(8).collect(&:chr)*""

One character from each range: if you need at least one character from each range, such as creating a random password that has one uppercase letter, one lowercase letter and one digit, you can do something like this:

(('a'..'z').to_a.sample(8) + ('A'..'Z').to_a.sample(8) + (0..9).to_a.sample(8)).shuffle.join
#=> "Kc5zOGtM0H796QgPp8u2Sxo1"

A: My 2 cents:

def token(length=16)
  chars = [*('A'..'Z'), *('a'..'z'), *(0..9)]
  (0..length).map {chars.sample}.join
end

A: I just wrote a small gem, random_token, to generate random tokens for most use cases. Enjoy ~ https://github.com/sibevin/random_token

A: Another trick that works with Ruby 1.8+ and is fast:

>> require "openssl"
>> OpenSSL::Random.random_bytes(20).unpack('H*').join
=> "2f3ff53dd712ba2303a573d9f9a8c1dbc1942d28"

It gets you a random hex string. In a similar way you should be able to generate a base64 string ('M*').

A: require 'securerandom'
SecureRandom.urlsafe_base64(9)

A: I use this for generating random URL-friendly strings with a guaranteed maximum length:

string_length = 8
rand(36**string_length).to_s(36)

It generates random strings of lowercase a-z and 0-9. It's not very customizable, but it's short and clean.
A: If you want a string of specified length, use:

require 'securerandom'
randomstring = SecureRandom.hex(n)

It will generate a random string of length 2n containing 0-9 and a-f.

A: Try this out:

def rand_name(len=9)
  ary = [('0'..'9').to_a, ('a'..'z').to_a, ('A'..'Z').to_a]
  name = ''
  len.times do
    name << ary.choice.choice
  end
  name
end

I love the answers in this thread; they have been very helpful indeed! But if I may say so, none of them satisfies my eyes; maybe it's the rand() method. It just doesn't seem right to me, since we've got the Array#choice method for that matter.

A: Here is another method:

* It uses the secure random number generator instead of rand()
* Can be used in URLs and file names
* Contains uppercase characters, lowercase characters and numbers
* Has an option not to include the ambiguous characters I0l01

Needs require "securerandom":

def secure_random_string(length = 32, non_ambiguous = false)
  characters = ('a'..'z').to_a + ('A'..'Z').to_a + ('0'..'9').to_a
  %w{I O l 0 1}.each{ |ambiguous_character| characters.delete ambiguous_character } if non_ambiguous
  (0...length).map{ characters[ActiveSupport::SecureRandom.random_number(characters.size)] }.join
end

A: If you are on UNIX and you still must use Ruby 1.8 (no SecureRandom) without Rails, you can also use this:

random_string = `openssl rand -base64 24`

Note this spawns a new shell; this is very slow and can only be recommended for scripts.

A: This solution generates a string of easily readable characters for activation codes; I didn't want people confusing 8 with B, 1 with I, 0 with O, L with 1, etc.

# Generates a random string from a set of easily readable characters
def generate_activation_code(size = 6)
  charset = %w{ 2 3 4 6 7 9 A C D E F G H J K M N P Q R T V W X Y Z}
  (0...size).map{ charset.to_a[rand(charset.size)] }.join
end

A: Array.new(n){[*"0".."9"].sample}.join, where n=8 in your case.

Generalized: Array.new(n){[*"A".."Z", *"0".."9"].sample}.join, etc.

From: "Generate pseudo random string A-Z, 0-9".
A: Others have mentioned something similar, but this uses the URL safe function. require 'securerandom' p SecureRandom.urlsafe_base64(5) #=> "UtM7aa8" p SecureRandom.urlsafe_base64 #=> "UZLdOkzop70Ddx-IJR0ABg" p SecureRandom.urlsafe_base64(nil, true) #=> "i0XQ-7gglIsHGV2_BNPrdQ==" The result may contain A-Z, a-z, 0-9, “-” and “_”. “=” is also used if padding is true. A: Since Ruby 2.5, it's really easy with SecureRandom.alphanumeric: len = 8 SecureRandom.alphanumeric(len) => "larHSsgL" It generates random strings containing A-Z, a-z and 0-9 and therefore should be applicable in most use-cases. And they are generated randomly secure, which might be a benefit, too. This is a benchmark to compare it with the solution having the most upvotes: require 'benchmark' require 'securerandom' len = 10 n = 100_000 Benchmark.bm(12) do |x| x.report('SecureRandom') { n.times { SecureRandom.alphanumeric(len) } } x.report('rand') do o = [('a'..'z'), ('A'..'Z'), (0..9)].map(&:to_a).flatten n.times { (0...len).map { o[rand(o.length)] }.join } end end user system total real SecureRandom 0.429442 0.002746 0.432188 ( 0.432705) rand 0.306650 0.000716 0.307366 ( 0.307745) So the rand solution only takes about 3/4 of the time of SecureRandom. That might matter if you generate a lot of strings, but if you just create some random string from time to time I'd always go with the more secure implementation since it is also easier to call and more explicit. 
A: Here is simple one-line code for a random string of length 8: random_string = ('0'..'z').to_a.shuffle.first(8).join You can also use it for a random password of length 8: random_password = ('0'..'z').to_a.shuffle.first(8).join A: require 'sha1' srand seed = "--#{rand(10000)}--#{Time.now}--" Digest::SHA1.hexdigest(seed)[0,8] A: Ruby 1.9+: ALPHABET = ('a'..'z').to_a #=> ["a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z"] 10.times.map { ALPHABET.sample }.join #=> "stkbssowre" # or 10.times.inject('') { |s| s + ALPHABET.sample } #=> "fdgvacnxhc" A: (0...8).map { (65 + rand(26)).chr }.join I spend too much time golfing. (0...50).map { ('a'..'z').to_a[rand(26)] }.join And a last one that's even more confusing, but more flexible and wastes fewer cycles: o = [('a'..'z'), ('A'..'Z')].map(&:to_a).flatten string = (0...50).map { o[rand(o.length)] }.join If you want to generate some random text then use the following: 50.times.map { (0...(rand(10))).map { ('a'..'z').to_a[rand(26)] }.join }.join(" ") this code generates a string of 50 random words, each less than 10 characters long, joined with spaces A: This is based on a few other answers, but it adds a bit more complexity: def random_password specials = ((32..47).to_a + (58..64).to_a + (91..96).to_a + (123..126).to_a).pack('U*').chars.to_a numbers = (0..9).to_a alpha = ('a'..'z').to_a + ('A'..'Z').to_a %w{i I l L 1 O o 0}.each{ |ambiguous_character| alpha.delete ambiguous_character } characters = (alpha + specials + numbers) password = Random.new.rand(8..18).times.map{characters.sample} password << specials.sample unless password.join =~ Regexp.new(Regexp.escape(specials.join)) password << numbers.sample unless password.join =~ Regexp.new(Regexp.escape(numbers.join)) password.shuffle.join end Essentially it ensures a password that is 8 - 20 characters in length, and which contains at least one number and one special character.
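Two caveats with the ('0'..'z') one-liner a few answers up, worth knowing before using it for passwords: the range spans the ASCII codes from '0' to 'z', so it contains punctuation sitting between the digit, uppercase and lowercase blocks, and shuffle.first(8) draws without replacement, so a character can never repeat. A quick sketch that makes both visible:

```ruby
chars = ('0'..'z').to_a

# besides alphanumerics, the range contains punctuation
# such as ':', '@' and '[' from between the character blocks
punctuation = chars.reject { |c| c =~ /[0-9A-Za-z]/ }

# shuffle.first(8) never picks the same character twice,
# which shrinks the space of possible strings
sample = chars.shuffle.first(8).join
```

Whether the punctuation and the no-repeats behaviour are acceptable depends on where the string ends up (URLs, filenames, password policies).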
A: alphabet = ('a'..'z').to_a string = '' 10.times { string << alphabet[rand(alphabet.length)] } A: For devise secure_validatable you can use this (0...8).map { ([65, 97].sample + rand(26)).chr }.push(rand(99)).join A: Here is an improvement of @Travis R's answer: def random_string(length=5) chars = 'abdefghjkmnpqrstuvwxyzABDEFGHJKLMNPQRSTUVWXYZ' numbers = '0123456789' random_s = '' (length/2).times { random_s << numbers[rand(numbers.size)] } (length - random_s.length).times { random_s << chars[rand(chars.size)] } random_s.split('').shuffle.join end In @Travis R's answer, chars and numbers were together, so sometimes random_string could return only numbers or only characters. With this improvement at least half of random_string will be characters and the rest are numbers. Just in case you need a random string with numbers and characters A: To make your first into one statement: (0...8).collect { |n| value << (65 + rand(25)).chr }.join() A: `pwgen 8 1`.chomp A: Create an empty string, or a prefix if required: myStr = "OID-" Use this code to populate the string with random numbers: begin; n = ((rand * 43) + 47).ceil; myStr << n.chr if !(58..64).include?(n); end while(myStr.length < 12) Notes: ((rand * 43) + 47).ceil It will generate random numbers from 48-91 (0,1,2..Y,Z) !(58..64).include?(n) It is used to skip special characters (as I am not interested to include them) while(myStr.length < 12) It will generate a string 12 characters long in total, including the prefix. Sample Output: "OID-XZ2J32XM" A: Here's a solution that is flexible and allows dups: class String # generate a random string of length n using current string as the source of characters def random(n) return "" if n <= 0 (chars * (n / length + 1)).shuffle[0..n-1].join end end Example: "ATCG".random(8) => "CGTGAAGA" You can also allow a certain character to appear more frequently: "AAAAATCG".random(10) => "CTGAAAAAGC" Explanation: The method above takes the chars of a given string and generates a big enough array.
It then shuffles it, takes the first n items, then joins them. A: Array.new(8).inject(""){|r|r<<('0'..'z').to_a.shuffle[0]} # 57 (1..8).inject(""){|r|r<<('0'..'z').to_a.shuffle[0]} # 51 e="";8.times{e<<('0'..'z').to_a.shuffle[0]};e # 45 (1..8).map{('0'..'z').to_a.shuffle[0]}.join # 43 (1..8).map{rand(49..122).chr}.join # 34 A: a='';8.times{a<<[*'a'..'z'].sample};p a or 8.times.collect{[*'a'..'z'].sample}.join A: Use the 'SafeRandom' gem (GithubLink). It provides the easiest way to generate random values, compatible with Rails 2, Rails 3, Rails 4 and Rails 5. A: The following worked well for me def generate_random_password(min_length, max_length) length = SecureRandom.random_number(max_length - min_length) + min_length character_sets = [ ('a'..'z').to_a, ('A'..'Z').to_a, ('0'..'9').to_a, "~!@^&*()_-+=[]|:;<,>.?".split('') ] retval = [] # # Add one character from each set # character_sets.each do |character_set| character = character_set[SecureRandom.random_number(character_set.count)] retval.push character end # # Fill the rest of the password with a random character from a random set # i = character_sets.count - 1 while i < length character_set = character_sets[SecureRandom.random_number(character_sets.count)] character = character_set[SecureRandom.random_number(character_set.count)] retval.push character i += 1 end retval.shuffle.join end A: I don't know ruby, so I can't give you the exact syntax, but I would set a constant string with the list of acceptable characters, then use the substring operator to pick a random character out of it. The advantage here is that if the string is supposed to be user-enterable, then you can exclude easily confused characters like l and 1 and i, 0 and O, 5 and S, etc. A: This is almost as ugly but perhaps a step in the right direction? (1..8).map{|i| ('a'..'z').to_a[rand(26)]}.join A: In Ruby 1.8.7 one can use Array's choice method, which returns a random element from the array; in later versions it was renamed to sample
{ "language": "en", "url": "https://stackoverflow.com/questions/88311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "823" }
Q: How do I unit test an __init__() method of a python class with assertRaises()? I have a class: class MyClass: def __init__(self, foo): if foo != 1: raise Error("foo is not equal to 1!") and a unit test that is supposed to make sure the incorrect arg passed to the constructor properly raises an error: def testInsufficientArgs(self): foo = 0 self.assertRaises((Error), myClass = MyClass(Error, foo)) But I get... NameError: global name 'Error' is not defined Why? Where should I be defining this Error object? I thought it was built-in as a default exception type, no? A: How about this: class MyClass: def __init__(self, foo): if foo != 1: raise Exception("foo is not equal to 1!") import unittest class Tests(unittest.TestCase): def testSufficientArgs(self): foo = 1 MyClass(foo) def testInsufficientArgs(self): foo = 2 self.assertRaises(Exception, MyClass, foo) if __name__ == '__main__': unittest.main() A: 'Error' in this example could be any exception object. I think perhaps you have read a code example that used it as a metasyntatic placeholder to mean, "The Appropriate Exception Class". The baseclass of all exceptions is called 'Exception', and most of its subclasses are descriptive names of the type of error involved, such as 'OSError', 'ValueError', 'NameError', 'TypeError'. In this case, the appropriate error is 'ValueError' (the value of foo was wrong, therefore a ValueError). I would recommend replacing 'Error' with 'ValueError' in your script. 
Here is a complete version of the code you are trying to write, I'm duplicating everything because you have a weird keyword argument in your original example that you seem to be conflating with an assignment, and I'm using the 'failUnless' function name because that's the non-aliased name of the function: class MyClass: def __init__(self, foo): if foo != 1: raise ValueError("foo is not equal to 1!") import unittest class TestFoo(unittest.TestCase): def testInsufficientArgs(self): foo = 0 self.failUnlessRaises(ValueError, MyClass, foo) if __name__ == '__main__': unittest.main() The output is: . ---------------------------------------------------------------------- Ran 1 test in 0.007s OK There is a flaw in the unit testing library 'unittest' that other unit testing frameworks fix. You'll note that it is impossible to gain access to the exception object from the calling context. If you want to fix this, you'll have to redefine that method in a subclass of UnitTest: This is an example of it in use: class TestFoo(unittest.TestCase): def failUnlessRaises(self, excClass, callableObj, *args, **kwargs): try: callableObj(*args, **kwargs) except excClass, excObj: return excObj # Actually return the exception object else: if hasattr(excClass,'__name__'): excName = excClass.__name__ else: excName = str(excClass) raise self.failureException, "%s not raised" % excName def testInsufficientArgs(self): foo = 0 excObj = self.failUnlessRaises(ValueError, MyClass, foo) self.failUnlessEqual(excObj[0], 'foo is not equal to 1!') I have copied the failUnlessRaises function from unittest.py from python2.5 and modified it slightly. A: I think you're thinking of Exceptions. Replace the word Error in your description with Exception and you should be good to go :-)
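For what it's worth, the unittest "flaw" described above was later fixed upstream: since Python 2.7 (and in all of Python 3), assertRaises works as a context manager that hands you the exception object, so no TestCase subclass is needed. A sketch using the question's class with ValueError, as the answer recommends:

```python
import unittest

class MyClass:
    def __init__(self, foo):
        if foo != 1:
            raise ValueError("foo is not equal to 1!")

class TestFoo(unittest.TestCase):
    def test_insufficient_args(self):
        # The context manager form exposes the raised exception on the
        # context object, replacing the subclassing trick shown above.
        with self.assertRaises(ValueError) as cm:
            MyClass(0)
        self.assertEqual(str(cm.exception), "foo is not equal to 1!")
```

On modern Python this is the idiomatic way to assert both that the exception is raised and what its message says.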
{ "language": "en", "url": "https://stackoverflow.com/questions/88325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: Does elmah handle caught exceptions as well Does ELMAH log exceptions even when they do not bubble up to the application? I'd like to pop up a message when an exception occurs and still log the exception. Currently I've been putting everything in try catch blocks and spitting out messages, but this gets tedious. A: A filter is the cleanest way to handle this problem. Check this solution here https://stackoverflow.com/a/5936867/965935 A: ELMAH has been updated to support a new feature called Signaling. This allows you to handle exceptions how you want, while still logging them to ELMAH. try { int i = 5; int j = 0; i = i / j; //Throws exception } catch (Exception ex) { MyPersonalHandlingCode(ex); ErrorSignal.FromCurrentContext().Raise(ex); //ELMAH Signaling } Re-throwing exceptions can be a bad practice as it makes it difficult to trace the flow of an application. Using Signaling is a much better approach if you intend to handle the error in some fashion and simply want to document it. Please check out this excellent guide by DotNetSlackers on ELMAH
{ "language": "en", "url": "https://stackoverflow.com/questions/88326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "55" }
Q: Which is better: shipping a buggy feature or not shipping the feature at all? This is a bit of a philosophical question. I am adding a small feature to my software which I assume will be used by most users but only maybe 10% of the times they use the software. In other words, the software has been fine without it for 3 months, but 4 or 5 users have asked for it, and I agree that it should be there. The problem is that, due to limitations of the platform I'm working with (and possibly limitations of my brain), "the best I can do" still has some non-critical but noticeable bugs - let's say the feature as coded is usable but "a bit wonky" in some cases. What to do? Is a feature that's 90% there really "better than nothing"? I know I'll get some bug reports which I won't be able to fix: what do I tell customers about those? Should I live with unanswered feature requests or unanswered bug reports? A: There will always be unanswered feature requests and bug reports. Ship it, but include a readme with "known issues" and workarounds when possible. A: You need to think of this from your user's perspective - which will cause less frustration? Buggy code is usually more frustrating than missing features. A: Perfectionists may answer "don't do it". Business people may answer "do it". I guess where the balance is is up to you. I would be swaying towards putting the feature in there if the bugs are non-critical. Most users don't see your software the same way you do. You're a craftsman/artist, which means you're more critical than regular people. Is there any way that you can get a beta version to the 4-5 people who requested the feature? Then, once you get their feedback, it may be clear which decision to make.
A: Precisely document the wonkiness and ship it. Make sure a user is likely to see and understand your documentation of the wonkiness. You could even discuss the decision with users who have requested the feature: do some market research. Just because you can't fix it now, doesn't mean you won't be able to in the future. Things change. A: Label what you have now as a 'beta version' and send it out to those people who have asked for it. Get their feedback on how well it works, fix whatever they complain about, and you should then be ready to roll it out to larger groups of users. A: Ship early, ship often, constant refactoring. What I mean is, don't let it stop you from shipping, but don't give up on fixing the problems either. An inability to resolve wonkiness is a sign of problems in your code base. Spend more time refactoring than adding features. A: I guess it depends on your standards. For me, buggy code is not production ready and so shouldn't be shipped. Could you have a beta version with a known issues list so users know what to expect under certain conditions? They get the benefit of using the new features but also know that it's not perfect (use that their own risk). This may keep those 4 or 5 customers that requested the feature happy for a while which gives you more time to fix the bugs (if possible) and release to production later for the masses. Just some thoughts depending on your situation. A: Depends. On the bugs, their severity and how much effort you think it will take to fix them. On the deadline and how much you think you can stretch it. On the rest of the code and how much the client can do with it. A: I would not expect coders to deliver known problems into test let alone to release to a customer. Mind you, I believe in zero tolerance of bugs. Interestingly I find that it is usually developers/ testers who are keenest to remove all bugs - it is often the project manager and/ or customer who are willing to accept bugs. 
If you must release the code, then document every feature/ bug that you are aware of, and commit to fixing each one. Why don't you post more information about the limitations of the platform you are working on, and perhaps some of the clever folk here can help get your bug list down. A: If the demand is for a feature NOW, rather than a feature that works. You may have to ship. In this situation though: * *Make sure you document the bug(s) and consequences (both to the user and other developers). *Be sure to add the bug(s) to your bug tracking database. *If you write unit tests (I hope so), make sure that tests are written which highlight the bugs, before you ship. This will mean that when you come to fix the bugs in the future, you know where and what they are, without having to remember. *Schedule the work to fix the bugs ASAP. You do fix bugs before writing new code, don't you? A: If bugs can cause death or can lose users' files then don't ship it. If bugs can cause the application to crash itself then ship it with a warning (a readme or whatever). If crashes might cause the application to corrupt the users' files that they were in the middle of editing with this exact application, then display a warning each time they start up the application, and remind them to backup their files first. If bugs can cause BSODs then be very careful about who you ship it to. A: If it doesn't break anything else, why not ship it? It sounds like you have a good relationship with your customers, so those who want the feature will be happy to get it even if it's not all the way there, and those who don't want it won't care. Plus you'll get lots of feedback to improve it in the next release! A: The important question you need to answer is if your feature will solve a real business need given the design you've come up with. 
Then it's only a matter of making the implementation match the design - making the "bugs" non-bugs by defining them as not part of the intended behaviour of the feature (which should be covered by the design). This boils down to a very real choice of paths: is a bug something that doesn't work properly, that wasn't part of the intended behaviour and design? Or is it a bug only if it doesn't work in accordance with the intended behaviour? I am a firm believer in the latter; bugs are the things that do not work the way they were intended to work. The implementation should capture the design, that should capture the business need. If the implementation is used to address a different business need that wasn't covered by the design, it is the design that is at fault, not the implementation; thus it is not a bug. The former attitude is by far the most common amongst programmers in my experience. It is also the way the user views software issues. From a software development perspective, however, it is not a good idea to adopt this view, because it leads you to fix bugs that are not bugs, but design flaws, instead of redesigning the solution to the business need. A: Coming from someone who has to install buggy software for their users - don't ship it with that feature enabled. It doesn't matter if you document it, the end users will forget about that bug the first time they hit it, and that bug will become critical to them not being able to do their job.
{ "language": "en", "url": "https://stackoverflow.com/questions/88339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How to get Intellisense on error-marked code in Visual Studio 2005? When I try to compile code on VS 2005 and it fails, the line which causes the error gets underlined blue, and mouse-hovering over it displays the error message. Fine, but you can't see object types or whatever, because Intellisense will show the error message, and not object info. In this image, I wanted to see what type DateTime.Subtract() returns, but VS insists on showing the error message: (screenshot: http://img502.imageshack.us/img502/6962/vs2005errordl7.png) Does anyone know how to get the error message out of the way, once you've got enough of it? A: Cut the first part of the line ("DateTime duracao = ") into the clipboard, then you should be able to hover over "Subtract" and see the return type. Not ideal, but I find myself doing it all the time! A: Select "Build|Clean Solution" - this cleans up intermediate files and other things. More importantly, it also clears the list of error messages, restoring normal behaviour of Intellisense. A: ctrl-space inside the parens A: I just found that the equivalent to mouse hovering is View -> IntelliSense -> Quick Info. If the solution does not arise here, I'll just use the shortcut Ctrl+K, Ctrl+I. A: Since duracao is a DateTime, and the error message is 'cannot convert Timespan to DateTime' - you can already see the subtract function is returning a Timespan
{ "language": "en", "url": "https://stackoverflow.com/questions/88343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is the best C# to VB.net converter? While searching the interweb for a solution for my VB.net problems I often find helpful articles on a specific topic, but the code is C#. That is no big problem but it cost some time to convert it to VB manually. There are some sites that offer code converters from C# to VB and vice versa, but to fix all the flaws after the code-conversion is nearly as time-consuming as doing it by myself in the first place. Till now I am using http://labs.developerfusion.co.uk/convert/csharp-to-vb.aspx Do you know something better? A: I am using a free Visual Studio 2012 plug-in named Language Convert It works perfectly on 2010/2012, unfortunately isn't working at VS 2013 yet. The conversion is not 100% accurate, but it is definitely very helpful, to launch for the first time it is a bit tricky, check before the image below : A: Last I checked, SharpDevelop has one and it is open source too. A: You can load your DLL or EXE into Redgate's (formerly Lutz Roeder's) .Net Reflector, select your method and then the desired language from the language combo. The code of the selected method will be displayed in the selected language. I hope this helps. A: You can try this one converter. There is functionality for C# to VB and VB to C#. Hope this helps. A: Telerik has a good converter that is based on SharpDevelop that has worked pretty well over the years, though it has not been updated in years (due to it being based on SharpDevelop). I've recently come across a roslyn based converter as well. I don't know how well it works or how well maintained it is, but as it's open source you can always fork it and update it as needed. A: Carlos Aguilar Mares has had an online converter for about 40 forevers - Code Translator but I would agree that Reflector is the better answer. A: While not answering your question, I will say that I have been in a similar position. 
I realised that code samples in C# were awkward when I was really starting out in .NET, but a few weeks into my first project (after I'd grown more familiar with the .NET framework and VB.NET itself), I found that it was interesting and sometimes beneficial to have to reverse-engineer the C# code. Not just in terms of syntax, but also learning about the subtle differences in approach - it's useful to be open-minded in this respect. I'm sticking with VB.NET as I learn more and more about the framework, but before long I'll dip my toe into C# with the intention of becoming 'multi-lingual'. A: Currently I use a plugin for VS2005 that I found on CodeProject (http://www.codeproject.com/KB/cs/Code_convert_add-in.aspx); it uses an external service (http://www.carlosag.net/Tools/CodeTranslator/) to perform translation. Occasionally, when I'm offline, I use a converter tool (http://www.kamalpatel.net/ConvertCSharp2VB.aspx). A: If you cannot find a good converter, you could always compile the C# code and use the disassembler in Reflector to see Visual Basic code. Some of the variable names will change. A: I currently use these two most often: http://converter.telerik.com/ http://www.carlosag.net/tools/codetranslator/ But have also had some success with these others: http://converter.atomproject.net/ http://www.dotnetspider.com/convert/Csharp-To-Vb.aspx http://www.developerfusion.com/tools/convert/csharp-to-vb/ A: SharpDevelop has a built-in translator between C# and VB.NET. It's not perfect though (e.g.
the optional values in VB.NET don't have an equivalent in C#, so the signature of the converted method must be edited), but you can save some time, as you are performing all operations inside an IDE and not a webpage (copy C# code, paste, hit button, copy VB.NET code, paste on IDE :P ) A: I think the best thing to do is learn enough of the other language so that you can rewrite by hand; there are some quite difficult differences in certain aspects that I'm not sure a converter would handle very well. For example, compare my translation from C# to VB of the following: public class FileSystemEventSubscription : EventSubscription { private FileSystemWatcher fileSystemWatcher; public FileSystemEventSubscription(IComparable queueName, Guid workflowInstanceId, FileSystemWatcher fileSystemWatcher) : base(queueName, workflowInstanceId) { this.fileSystemWatcher = fileSystemWatcher; } becomes Public Class FileSystemEventSubscription Inherits EventSubscription Private myFileSystemWatcher As FileSystemWatcher Public Sub New(ByVal QueueName As IComparable, ByVal WorkflowInstanceID As Guid, ByVal Watcher As FileSystemWatcher) MyBase.New(QueueName, WorkflowInstanceID) Me.myFileSystemWatcher = Watcher End Sub The C# is from the Custom Activity Framework sample, and I'm afraid I've lost the link to it. But it contains some nasty looking inheritance (from a VB point of view). A: The one at http://www.developerfusion.com/tools/convert/csharp-to-vb/ (new url) now supports .NET 3.5 syntax (thanks to the #develop guys once again), and will automatically copy the results to your clipboard :)
{ "language": "en", "url": "https://stackoverflow.com/questions/88359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43" }
Q: Is there an elegant way to compare a checkbox and a textbox using ASP.NET validators? I have an Asp.Net repeater, which contains a textbox and a checkbox. I need to add client-side validation that verifies that when the checkbox is checked, the textbox can only accept a value of zero or blank. I would like to use one or more of Asp.Net's validator controls to accomplish this, to provide a consistent display for client side errors (server-side errors are handled by another subsystem). The Asp:CompareValidator doesn't seem to be flexible enough to perform this kind of complex comparison, so I'm left looking at the Asp:CustomValidator. The problem I'm running into is that there doesn't seem to be any way to pass custom information into the validation function. This is an issue because the ClientIds of the checkbox and the textbox are unknown to me at runtime (as they're part of a Repeater). So... My options seem to be: * *Pass the textbox and checkbox to the CustomValidator somehow (doesn't seem to be possible). *Find the TextBox through JavaScript based on the arguments passed in by the CustomValidator. Is this even possible, what with the ClientId being ambiguous? *Forget validation entirely, and emit custom JavaScript (allowing me to pass both ClientIds to a custom function). Any ideas on what might be a better way of implementing this? A: I think the best way would be to inherit BaseValidator in a new class, and pass those IDs to your control as attributes. You should be able to resolve the IDs within your validator, without knowing the full client side ID that is generated at runtime. You should get the data validating on the server first, and on the client second. A: Can you not put the CustomValidator inside the repeater? 
If not, you can create it dynamically when the repeater is bound and use FindControl(): protected void MyDataBound(object sender, RepeaterItemEventArgs e) { CheckBox cb = (CheckBox)e.Item.FindControl("myCheckboxName"); TextBox tb = (TextBox)e.Item.FindControl("myTextBox"); } ...or something like that. I did the code off the top of my head.
{ "language": "en", "url": "https://stackoverflow.com/questions/88361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: SQLServer tempDB growing infinitely We have several "production environments" (three servers each, with the same version of our system. Each one has a SQL Server database as the production database). In one of these environments the tempdb transaction log starts to grow fast and infinitely; we can't find out why. Same version of OS, SQL Server and application. No changes in the environment. Does someone know how to figure out what's happening or how to fix this? A: You might be in Full recovery model mode - if you are doing regular backups you can change this to simple and it will reduce the size of the log after the backup. Here is some more info. A: Have you tried running Profiler? This will allow you to view all of the running queries on the server. This may give you some insight into what is creating items in tempdb. A: Your best bet is to fire up SQL Server Profiler and see what's going on. Look for high values in the "Writes" column or Spool operators, these are both likely to cause high temp usage. If it is only the transaction log growing then try this, open transactions prevent the log from being shrunk down as it goes. This should be run in tempdb: DBCC OPENTRAN A: OK, I think this question is the same as mine: tempdb grows fast. The common reason is that a programmer creates a procedure that uses temporary tables. When we create these tables, or perform other operations like triggers or DBCC commands, they all use tempdb. When temporary tables are created, SQL Server allocates space for them in pages like GAM, SGAM or IAM, but SQL Server must ensure physical consistency, so only one session can do this at a time while the other objects wait. That is what causes tempdb to grow fast. I found the solution from MS, roughly like this, hope it can help you: 1. Create data files for tempdb; the number should be the same as the number of CPUs, e.g. if your host has 16 CPUs, you need to create 16 data files for tempdb, and every file must be the same size. 2. You need to monitor these files to make sure they are not full.
3.if these files space not enough big, that will auto grow, you need to put others the same size. my english is not good, and if you are cant solve it, use the procedure sp_helpfile , check it. and paste the result at here. when i was in singapore, i find this situation. good luck.
{ "language": "en", "url": "https://stackoverflow.com/questions/88381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Where do I start learning about GUI programming? I have a pretty good knowledge of programming languages like C/C++, Java, Python. But they were all mostly learnt in a college / high school class room setting where the best user interface was a numbered menu. You know, the standard data structures, implementation of various algorithms, file handling and the like. What I want to do now is to get into GUI programming. I am not sure if I am asking the right way, but I am looking at using the WIMP paradigm (windows icons menus pointers). I want to place buttons and forms. Event-driven programming, I believe, is the right term, where my application waits till the user clicks something, types something etc. Given my background, where would be a good place to start? I am looking at the following requirements - 1> Preferably cross platform. 2> Lots of documentation, tutorials, and if possible sample code that I can learn off of 3> A good GUI builder tool, where I can drag / drop stuff the way I want them to be displayed. Any ideas or suggestions to get me started? A: I'll try the book About Face: The Essentials of User Interface Design; it's centered on design practices for UI as well as designing with the user's goals in mind, that is, what the user wants to accomplish, trying to get you away from "developer GUI design". It also reviews some history of GUI design from Microsoft, Apple and other companies. Things like defaults for Mac OS X (where the accept and cancel buttons are usually located, etc.) as well as the whys beneath that. I'll also look up the Office 2007 UI Design Guidelines from Microsoft as it's probably "gonna be a thing".
It's free for open-source projects to use, provided you're using the GPL. A: There are a great many language- and UI-framework-specific resources available for people interested in building application UIs. However, before delving into specific technologies, there is much to be learned about Human-Computer Interaction and how it applies to user interface design. Some references to look at:

* http://www.useit.com/
* The Design of Everyday Things (book)
* http://worrydream.com/MagicInk (takes a while to load but very worthwhile)

After researching what makes a good UI, it is time to explore how:

* Mozilla XULRunner
* If you decide to use Java Swing, I strongly recommend the relative layout manager

Of course there are many options out there, including Qt, FLTK and SWT. A: I was thinking exactly the same thing recently. Qt seems like a good cross-platform GUI framework, and Python seems like a good language to work in. So PyQt is my (uneducated) suggestion. It does include a drag-and-drop GUI design tool. A: Take a look at Glade and Gtk. Both are really easy to use. Glade is the GUI builder, and Gtk the toolkit. It's both cross platform and cross language. You can load the Glade files in almost any language. Here is a Glade/Gtk tutorial A: I would look into C# .NET development and its WinForms API. It's much easier to program GUI desktop apps for Windows with that than with the Win32 API. You can always ease into Win32 API stuff later, if it's still relevant. For cross-platform solutions, look into Gtk+, perhaps PyGtk. Another good one is wxWidgets. If you want to get really funky, check out Shoes for Ruby. A: Many years ago, I made the quickest progress in this area using Visual Basic. I assume it's still dead easy to pick up and the code/run/debug cycle is insanely productive and you'll learn a lotta useful stuff fast. Tons of documentation, and all the other goodies you want... 
A: Seeing as you already have knowledge of Java, you should check out the Swing API; it provides a pretty powerful set of packages that can be used to create complex GUIs. Moreover, it's cross platform, there's tons of documentation, and it can be used with the NetBeans IDE. A: Java's Swing API is cross platform and relatively simple, and NetBeans is a good GUI builder. A: NetBeans is cross platform, and while it is centered towards Java developers, you can easily install addons to work with C/C++, Ruby, etc. I use it for developing Swing GUI programs because it has a very simple interface for simple drag-and-drop GUI creation. There is a lot of good documentation on developing with Java Swing, and I'm sure there is plenty of documentation on using NetBeans also.
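Several answers above recommend Java Swing, so here is a minimal sketch of the event-driven model the asker describes (the application registers a callback and waits; the toolkit invokes it on each click). The ClickCounter class and the simulated events are invented for illustration; in a real Swing app a javax.swing.JButton would deliver the events.

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

// Minimal sketch of event-driven programming: the listener is the
// "wait until the user clicks something" part of a WIMP application.
public class ClickCounter {
    private int clicks = 0;

    // The listener is plain logic, so it can be exercised without a display.
    final ActionListener onClick = e -> clicks++;

    int clicks() { return clicks; }

    public static void main(String[] args) {
        ClickCounter c = new ClickCounter();
        // In a real app you would attach the listener to a widget, e.g.:
        //   JButton b = new JButton("Press me");
        //   b.addActionListener(c.onClick);
        // Here we simulate two click events directly:
        c.onClick.actionPerformed(new ActionEvent(c, ActionEvent.ACTION_PERFORMED, "click"));
        c.onClick.actionPerformed(new ActionEvent(c, ActionEvent.ACTION_PERFORMED, "click"));
        System.out.println(c.clicks()); // prints 2
    }
}
```

The same pattern (register a listener, let the toolkit drive) carries over to Qt signals/slots, Gtk callbacks, and WinForms event handlers.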
{ "language": "en", "url": "https://stackoverflow.com/questions/88382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: How do I duplicate a whole line in Emacs? I saw this same question for VIM and it has been something that I myself wanted to know how to do for Emacs. In ReSharper I use CTRL-D for this action. What is the least number of commands to perform this in Emacs? A: Install duplicate-thing from MELPA: M-x package-install RET duplicate-thing, and add this keybinding to your init file:

(global-set-key (kbd "M-c") 'duplicate-thing)

A: Place cursor on line, if not at beginning do a CTRL-A, then: CTRL-K CTRL-K CTRL-Y CTRL-Y A: My version of a function to duplicate a line that works nicely with undo and doesn't mess with the cursor position. It was the result of a discussion in gnu.emacs.sources from November 1997.

(defun duplicate-line (arg)
  "Duplicate current line, leaving point in lower line."
  (interactive "*p")
  ;; save the point for undo
  (setq buffer-undo-list (cons (point) buffer-undo-list))
  ;; local variables for start and end of line
  (let ((bol (save-excursion (beginning-of-line) (point)))
        eol)
    (save-excursion
      ;; don't use forward-line for this, because you would have
      ;; to check whether you are at the end of the buffer
      (end-of-line)
      (setq eol (point))
      ;; store the line and disable the recording of undo information
      (let ((line (buffer-substring bol eol))
            (buffer-undo-list t)
            (count arg))
        ;; insert the line arg times
        (while (> count 0)
          (newline)  ;; because there is no newline in 'line'
          (insert line)
          (setq count (1- count))))
      ;; create the undo information
      (setq buffer-undo-list (cons (cons eol (point)) buffer-undo-list))))
  ;; put the point in the lowest line and return
  (next-line arg))

Then you can define CTRL-D to call this function:

(global-set-key (kbd "C-d") 'duplicate-line)

A: Instead of kill-line (C-k) as in C-a C-k C-k C-y C-y, use the kill-whole-line command: C-S-Backspace C-y C-y. The advantages over C-k include that it does not matter where point is on the line (unlike C-k, which requires being at the start of the line) and it also kills the newline (again
something C-k does not do). A: I have copy-from-above-command bound to a key and use that. It's provided with XEmacs, but I don't know about GNU Emacs.

`copy-from-above-command' is an interactive compiled Lisp function
-- loaded from "/usr/share/xemacs/21.4.15/lisp/misc.elc"
(copy-from-above-command &optional ARG)

Documentation:
Copy characters from previous nonblank line, starting just above point.
Copy ARG characters, but not past the end of that line.
If no argument given, copy the entire rest of the line.
The characters copied are inserted in the buffer before point.

A: I don't quite remember how line duplication works anywhere else, but as a former SciTE user I liked one thing about the SciTE way: it doesn't touch the cursor position! So all the recipes above weren't good enough for me; here's my hippie version:

(defun duplicate-line ()
  "Clone line at cursor, leaving the latter intact."
  (interactive)
  (save-excursion
    (let ((kill-read-only-ok t)
          deactivate-mark)
      (toggle-read-only 1)
      (kill-whole-line)
      (toggle-read-only 0)
      (yank))))

Note that nothing actually gets killed in the process, leaving marks and the current selection intact. BTW, why are you guys so fond of jerking the cursor around when there's this nice'n'clean kill-whole-line thingy (C-S-backspace)? A: Something you might want to have in your .emacs is

(setq kill-whole-line t)

which basically kills the entire line plus the newline whenever you invoke kill-line (i.e. via C-k). Then, without extra code, you can just do C-a C-k C-y C-y to duplicate the line. It breaks down to:

C-a  go to beginning of line
C-k  kill-line (i.e. cut the line into clipboard)
C-y  yank (i.e. paste); the first time you get the killed line back; the second time gives the duplicated line

But if you use this often then maybe a dedicated key binding might be a better idea, but the advantage of just using C-a C-k C-y C-y is you can duplicate the line elsewhere, instead of just below the current line. 
A: I wrote my own version of duplicate-line, because I don't want to screw up the kill ring.

(defun jr-duplicate-line ()
  "EASY"
  (interactive)
  (save-excursion
    (let ((line-text (buffer-substring-no-properties
                      (line-beginning-position)
                      (line-end-position))))
      (move-end-of-line 1)
      (newline)
      (insert line-text))))
(global-set-key "\C-cd" 'jr-duplicate-line)

A: There is a package called Avy. It has the command avy-copy-line. When you use that command, every line in your window gets a letter combination. Then you just have to type the combination and you get that line. This also works for a region; then you just have to type two combinations. A: Here's yet another function for doing this. My version doesn't touch the kill ring, and the cursor ends up on the new line where it was on the original. It will duplicate the region if it's active (transient mark mode), or default to duplicating the line otherwise. It will also make multiple copies if given a prefix arg, and comment out the original line if given a negative prefix arg (this is useful for testing a different version of a command/statement while keeping the old one).

(defun duplicate-line-or-region (&optional n)
  "Duplicate current line, or region if active.
With argument N, make N copies.
With negative N, comment out original line and use the absolute value."
  (interactive "*p")
  (let ((use-region (use-region-p)))
    (save-excursion
      (let ((text (if use-region        ;Get region if active, otherwise line
                      (buffer-substring (region-beginning) (region-end))
                    (prog1 (thing-at-point 'line)
                      (end-of-line)
                      (if (< 0 (forward-line 1)) ;Go to beginning of next line, or make a new one
                          (newline))))))
        (dotimes (i (abs (or n 1)))     ;Insert N times, or once if not specified
          (insert text))))
    (if use-region nil                  ;Only if we're working with a line (not a region)
      (let ((pos (- (point) (line-beginning-position)))) ;Save column
        (if (> 0 n)                     ;Comment out original with negative arg
            (comment-region (line-beginning-position) (line-end-position)))
        (forward-line 1)
        (forward-char pos)))))

I have it bound to C-c d:

(global-set-key [?\C-c ?d] 'duplicate-line-or-region)

This should never be re-assigned by a mode or anything, because C-c followed by a single (unmodified) letter is reserved for user bindings. A: Because I don't know, I'll start this round of golf with a slowball: ctrl-k, y, y A: C-a C-k C-k C-y C-y A: The defaults are horrible for this. However, you can extend Emacs to work like SlickEdit and TextMate, that is, copy/cut the current line when no text is selected:

(transient-mark-mode t)

(defadvice kill-ring-save (before slick-copy activate compile)
  "When called interactively with no active region, copy a single line instead."
  (interactive
   (if mark-active
       (list (region-beginning) (region-end))
     (message "Copied line")
     (list (line-beginning-position) (line-beginning-position 2)))))

(defadvice kill-region (before slick-cut activate compile)
  "When called interactively with no active region, kill a single line instead."
  (interactive
   (if mark-active
       (list (region-beginning) (region-end))
     (list (line-beginning-position) (line-beginning-position 2)))))

Place the above in .emacs. Then, to copy a line, M-w. To delete a line, C-w. To duplicate a line, C-a M-w C-y C-y C-y .... 
A: I liked FraGGod's version, except for two things: (1) it doesn't check whether the buffer is already read-only with (interactive "*"), and (2) it fails on the last line of the buffer if that last line is empty (as you cannot kill the line in that case), leaving your buffer read-only. I made the following changes to resolve that:

(defun duplicate-line ()
  "Clone line at cursor, leaving the latter intact."
  (interactive "*")
  (save-excursion
    ;; The last line of the buffer cannot be killed
    ;; if it is empty. Instead, simply add a new line.
    (if (and (eobp) (bolp))
        (newline)
      ;; Otherwise kill the whole line, and yank it back.
      (let ((kill-read-only-ok t)
            deactivate-mark)
        (toggle-read-only 1)
        (kill-whole-line)
        (toggle-read-only 0)
        (yank)))))

A: With recent Emacs, you can use M-w anywhere in the line to copy it. So it becomes: M-w C-a RET C-y A: When called interactively with no active region, COPY (M-w) a single line instead:

(defadvice kill-ring-save (before slick-copy activate compile)
  "When called interactively with no active region, COPY a single line instead."
  (interactive
   (if mark-active
       (list (region-beginning) (region-end))
     (message "Copied line")
     (list (line-beginning-position) (line-beginning-position 2)))))

When called interactively with no active region, KILL (C-w) a single line instead:

(defadvice kill-region (before slick-cut activate compile)
  "When called interactively with no active region, KILL a single line instead."
  (interactive
   (if mark-active
       (list (region-beginning) (region-end))
     (message "Killed line")
     (list (line-beginning-position) (line-beginning-position 2)))))

Also, on a related note:

(defun move-line-up ()
  "Move the current line up."
  (interactive)
  (transpose-lines 1)
  (forward-line -2)
  (indent-according-to-mode))

(defun move-line-down ()
  "Move the current line down."
  (interactive)
  (forward-line 1)
  (transpose-lines 1)
  (forward-line -1)
  (indent-according-to-mode))

(global-set-key [(meta shift up)] 'move-line-up)
(global-set-key [(meta shift down)] 'move-line-down)

A: I saw very complex solutions, anyway...

(defun duplicate-line ()
  "Duplicate current line"
  (interactive)
  (kill-whole-line)
  (yank)
  (yank))

(global-set-key (kbd "C-x M-d") 'duplicate-line)

A: This functionality should match up with JetBrains' implementation in terms of duplicating both by line or region, and then leaving the point and/or active region as expected: Just a wrapper around the interactive form:

(defun wrx/duplicate-line-or-region (beg end)
  "Implements functionality of JetBrains' `Command-d' shortcut for `duplicate-line'.
BEG & END correspond to point & mark, smaller first.
`use-region-p' explained:
http://emacs.stackexchange.com/questions/12334/elisp-for-applying-command-to-only-the-selected-region#answer-12335"
  (interactive "r")
  (if (use-region-p)
      (wrx/duplicate-region-in-buffer beg end)
    (wrx/duplicate-line-in-buffer)))

Which calls this,

(defun wrx/duplicate-region-in-buffer (beg end)
  "copy and duplicate context of current active region
|------------------------+----------------------------|
| before                 | after                      |
|------------------------+----------------------------|
| first <MARK>line here  | first line here            |
| second item<POINT> now | second item<MARK>line here |
|                        | second item<POINT> now     |
|------------------------+----------------------------|
TODO: Acts funky when point < mark"
  (set-mark-command nil)
  (insert (buffer-substring beg end))
  (setq deactivate-mark nil))

Or this

(defun wrx/duplicate-line-in-buffer ()
  "Duplicate current line, maintaining column position.
|--------------------------+--------------------------|
| before                   | after                    |
|--------------------------+--------------------------|
| lorem ipsum<POINT> dolor | lorem ipsum dolor        |
|                          | lorem ipsum<POINT> dolor |
|--------------------------+--------------------------|
TODO: Save history for `Cmd-Z'
Context: http://stackoverflow.com/questions/88399/how-do-i-duplicate-a-whole-line-in-emacs#answer-551053"
  (setq columns-over (current-column))
  (save-excursion
    (kill-whole-line)
    (yank)
    (yank))
  (let (v)
    (dotimes (n columns-over v)
      (right-char)
      (setq v (cons n v))))
  (next-line))

And then I have this bound to meta+shift+d:

(global-set-key (kbd "M-D") 'wrx/duplicate-line-or-region)

A: Nathan's addition to your .emacs file is the way to go, but it could be simplified slightly by replacing

(open-line 1)
(next-line 1)

with

(newline)

yielding

(defun duplicate-line ()
  (interactive)
  (move-beginning-of-line 1)
  (kill-line)
  (yank)
  (newline)
  (yank))

(global-set-key (kbd "C-d") 'duplicate-line)

A: ctrl-k, ctrl-k, (position to new location), ctrl-y. Add a ctrl-a if you're not starting at the beginning of the line. And the 2nd ctrl-k is to grab the newline character. It can be removed if you just want the text. A: @[Kevin Conner]: Pretty close, so far as I know. The only other thing to consider is turning on kill-whole-line to include the newline in the C-k. A: Here's a function for duplicating the current line. With prefix arguments, it will duplicate the line multiple times. E.g., C-3 C-S-o will duplicate the current line three times. Doesn't change the kill ring. 
(defun duplicate-lines (arg)
  (interactive "P")
  (let* ((arg (if arg arg 1))
         (beg (save-excursion (beginning-of-line) (point)))
         (end (save-excursion (end-of-line) (point)))
         (line (buffer-substring-no-properties beg end)))
    (save-excursion
      (end-of-line)
      (open-line arg)
      (setq num 0)
      (while (< num arg)
        (setq num (1+ num))
        (forward-line 1)
        (insert line)))))

(global-set-key (kbd "C-S-o") 'duplicate-lines)

A: If you're using Spacemacs, you can simply use duplicate-line-or-region, bound to: SPC x l d A: There's a package called 'move-dup' on MELPA that can help you with that. Disclaimer: I'm the author of that package. A: I use C-a C-SPACE C-n M-w C-y, which breaks down to:

* C-a: move cursor to start of line
* C-SPACE: begin a selection ("set mark")
* C-n: move cursor to next line
* M-w: copy region
* C-y: paste ("yank")

The aforementioned C-a C-k C-k C-y C-y amounts to the same thing (TMTOWTDI):

* C-a: move cursor to start of line
* C-k: cut ("kill") the line
* C-k: cut the newline
* C-y: paste ("yank") (we're back at square one)
* C-y: paste again (now we've got two copies of the line)

These are both embarrassingly verbose compared to C-d in your editor, but in Emacs there's always a customization. C-d is bound to delete-char by default, so how about C-c C-d? Just add the following to your .emacs:

(global-set-key "\C-c\C-d" "\C-a\C- \C-n\M-w\C-y")

(@Nathan's elisp version is probably preferable, because it won't break if any of the key bindings are changed.) Beware: some Emacs modes may reclaim C-c C-d to do something else. A: In addition to the previous answers, you can also define your own function to duplicate a line. For example, putting the following in your .emacs file will make C-d duplicate the current line.

(defun duplicate-line ()
  (interactive)
  (move-beginning-of-line 1)
  (kill-line)
  (yank)
  (open-line 1)
  (next-line 1)
  (yank))

(global-set-key (kbd "C-d") 'duplicate-line)

A: I wrote one for my preference.

(defun duplicate-line ()
  "Duplicate current line."
  (interactive)
  (let ((text (buffer-substring-no-properties
               (point-at-bol)
               (point-at-eol)))
        (cur-col (current-column)))
    (end-of-line)
    (insert "\n" text)
    (beginning-of-line)
    (right-char cur-col)))

(global-set-key (kbd "C-c d l") 'duplicate-line)

But I found this would have some problems when the current line contains multi-byte characters (e.g. CJK characters). If you encounter this issue, try this instead:

(defun duplicate-line ()
  "Duplicate current line."
  (interactive)
  (let* ((text (buffer-substring-no-properties
                (point-at-bol)
                (point-at-eol)))
         (cur-col (length (buffer-substring-no-properties
                           (point-at-bol)
                           (point)))))
    (end-of-line)
    (insert "\n" text)
    (beginning-of-line)
    (right-char cur-col)))

(global-set-key (kbd "C-c d l") 'duplicate-line)

A: I cannot believe all these complicated solutions. This is two keystrokes:

<C-S-backspace> runs the command kill-whole-line
C-/ runs the command undo

So <C-S-backspace> C-/ to "copy" a whole line (kill and undo). You can, of course, combine this with numeric and negative args to kill multiple lines either forward or backward. A: As mentioned in other answers, binding key strokes to Lisp code is a better idea than binding them to other key strokes. With @mw's answer, the code duplicates the line and moves the mark to the end of the new line. This modification keeps the mark position at the same column on the new line:

(defun duplicate-line ()
  (interactive)
  (let ((col (current-column)))
    (move-beginning-of-line 1)
    (kill-line)
    (yank)
    (newline)
    (yank)
    (move-to-column col)))

A: With prefix arguments, and what is (I hope) intuitive behaviour:

(defun duplicate-line (&optional arg)
  "Duplicate it. With prefix ARG, duplicate ARG times."
  (interactive "p")
  (next-line
   (save-excursion
     (let ((beg (line-beginning-position))
           (end (line-end-position)))
       (copy-region-as-kill beg end)
       (dotimes (num arg arg)
         (end-of-line)
         (newline)
         (yank))))))

The cursor will remain on the last line. 
Alternatively, you might want to specify a prefix to duplicate the next few lines at once:

(defun duplicate-line (&optional arg)
  "Duplicate it. With prefix ARG, duplicate ARG times."
  (interactive "p")
  (save-excursion
    (let ((beg (line-beginning-position))
          (end (progn (forward-line (1- arg)) (line-end-position))))
      (copy-region-as-kill beg end)
      (end-of-line)
      (newline)
      (yank)))
  (next-line arg))

I find myself using both often, using a wrapper function to switch the behavior of the prefix argument. And a keybinding:

(global-set-key (kbd "C-S-d") 'duplicate-line)

A:

;; http://www.emacswiki.org/emacs/WholeLineOrRegion#toc2
;; cut, copy, yank
(defadvice kill-ring-save (around slick-copy activate)
  "When called interactively with no active region, copy a single line instead."
  (if (or (use-region-p) (not (called-interactively-p)))
      ad-do-it
    (kill-new (buffer-substring (line-beginning-position)
                                (line-beginning-position 2))
              nil '(yank-line))
    (message "Copied line")))

(defadvice kill-region (around slick-copy activate)
  "When called interactively with no active region, kill a single line instead."
  (if (or (use-region-p) (not (called-interactively-p)))
      ad-do-it
    (kill-new (filter-buffer-substring (line-beginning-position)
                                       (line-beginning-position 2) t)
              nil '(yank-line))))

(defun yank-line (string)
  "Insert STRING above the current line."
  (beginning-of-line)
  (unless (= (elt string (1- (length string))) ?\n)
    (save-excursion (insert "\n")))
  (insert string))

(global-set-key (kbd "<f2>") 'kill-region)    ; cut.
(global-set-key (kbd "<f3>") 'kill-ring-save) ; copy.
(global-set-key (kbd "<f4>") 'yank)           ; paste.

Add the elisp above to your init.el, and you get the cut/copy-whole-line function; then you can use F3 F4 to duplicate a line. A: The simplest way is Chris Conway's method: C-a C-SPACE C-n M-w C-y. That's the default way mandated by Emacs. In my opinion, it's better to use the standard. I am always careful about customizing one's own key-bindings in Emacs. 
Emacs is already powerful enough; I think we should try our best to adapt to its own key-bindings. Though it's a bit lengthy, when you are used to it you can do it fast, and you will find it fun! A: This feels more natural, with respect to the selected answer by Chris Conway:

(global-set-key "\C-c\C-d" "\C-a\C- \C-n\M-w\C-y\C-p\C-e")

This allows you to duplicate a line multiple times by simply repeating the \C-c\C-d key strokes. A: Well, I've usually used: Ctrl-Space (set the mark), move to end of line, Ctrl-K (kill line), Ctrl-Y * 2 (yank the line back). There may be a much better way though :P
{ "language": "en", "url": "https://stackoverflow.com/questions/88399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "174" }
Q: IIS error CS0433: name collision In our application we've run into an error numerous times where we get error CS0433, which complains about a name collision in two separate DLLs. This is an ASP.NET app developed in C# using WebForms. It always complained about a TimeLog page. Anyone have advice for resolving this error? A: I found a link on MSDN that describes this error. To summarize, a naming conflict can happen between the file name of a page (TimeLogTab.aspx) and the class in the code-behind (public class TimeLogTab). The link recommends renaming one of them. I changed my class to Time_LogTab and the error went away. A: The error can happen intermittently: I'm using "Publish Web Site" for a VS 2005 Web Application Project with "Delete all existing files prior to publish" and then XCOPY-deploy to the target IIS folder (which won't delete existing files there). Today I ran into that error for the first time (no new .ascx/.aspx files in weeks), but simply recompiling and redeploying the same project solved the problem. The only difference: for the 2nd time, I hit the page causing the problem first. Now I'm wondering whether the exact click order really matters, or rather whether an arbitrary unlucky click order can effectively crash an ASP.NET site?
{ "language": "en", "url": "https://stackoverflow.com/questions/88403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: In AIML, what's the XSD-valid way to use the set element? In file Atomic.aiml, part of the annotated ALICE AIML files, there are a lot of categories like this:

<category>
  <pattern>ANSWER MY QUESTION</pattern>
  <template>
    Please try asking <set name="it">your question</set> another way.
  </template>
</category>

This code isn't valid according to the AIML XSD; the validator says that "No character data is allowed in content model" (regarding the "your question" character data inside the set element). If I delete "your question" the error disappears, but then "it" wouldn't be defined correctly. How do I fix the code above so it passes validation? A: Which validator are you using? The following complete file validates according to Xerces:

<aiml xmlns="http://alicebot.org/2001/AIML-1.0.1" version="1.0.1">
  <category>
    <pattern>ANSWER MY QUESTION</pattern>
    <template>
      Please try asking <set name="it">your question</set> another way.
    </template>
  </category>
</aiml>
{ "language": "en", "url": "https://stackoverflow.com/questions/88417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I detect if caps lock is toggled in Swing? I'm trying to build a better username/password field for my workplace and would like to be able to complain when they have their caps lock on. Is this possible? And if so, I'd like to have it detected before the client types their first letter. Is there a non-platform-specific way to do this? A: Try this, from java.awt.Toolkit; it returns a boolean:

Toolkit.getDefaultToolkit().getLockingKeyState(KeyEvent.VK_CAPS_LOCK)

A: Here is some info on the class: http://java.sun.com/j2se/1.5.0/docs/api/java/awt/Toolkit.html#getLockingKeyState(int) A: In addition to Nick's answer, to react to this condition before the user presses a key, you can listen to the focus event of your text entry component and test the caps lock as the component receives focus.
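Putting the two suggestions together, a sketch might look like the following. The class and message names are invented for illustration; note that Toolkit documents that getLockingKeyState can throw UnsupportedOperationException on platforms that cannot report locking-key state.

```java
import java.awt.Toolkit;
import java.awt.event.KeyEvent;

// Sketch combining the answers above: query the toolkit for the Caps Lock
// state, and show a warning as the password field gains focus.
public class CapsLockWarning {

    // Pure formatting logic, kept separate from AWT so it is easy to test.
    static String warningFor(boolean capsLockOn) {
        return capsLockOn ? "Caps Lock is on" : "";
    }

    // May throw UnsupportedOperationException where the host platform
    // cannot report locking-key state.
    static boolean isCapsLockOn() {
        return Toolkit.getDefaultToolkit()
                      .getLockingKeyState(KeyEvent.VK_CAPS_LOCK);
    }

    public static void main(String[] args) {
        // In a real form you would do the check in a FocusListener, e.g.:
        //   passwordField.addFocusListener(new FocusAdapter() {
        //       public void focusGained(FocusEvent e) {
        //           warningLabel.setText(warningFor(isCapsLockOn()));
        //       }
        //   });
        System.out.println(warningFor(true)); // prints: Caps Lock is on
    }
}
```

Wrapping the Toolkit call this way also gives you one place to catch the UnsupportedOperationException and fall back to, say, a KeyListener that inspects the case of typed characters.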
{ "language": "en", "url": "https://stackoverflow.com/questions/88434", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Project in Ruby I've been coding a lot of web stuff all my life, Rails lately. And I can always find a website to code, but I'm kind of bored with it. I've been taking a lot of courses on Java and C lately, so I've become a bit interested in desktop application programming. Problem: I can't for the life of me think of a thing to code for the desktop. I just can't think of anything I can code that isn't already out there for download. So what do I do? I need some project suggestions that I can set as a goal. A: I would say you should roam through GitHub or some other open-source site and find an existing young or old project that you can contribute to. Maybe there is something that is barely off the ground, or maybe there is a mature project that could use some improvement. A: I find that to complete a project, it needs to be something I am passionate about. I feel you need to find your own project, I'm afraid. There is always the Netflix Prize though! A: I would write a ray tracer. Oops, sorry... you're looking for an original idea. :) Ray tracers are still cool, though, and easy to get started on. Maybe you'll get an idea for a game while you're working on it. A: Visit shoooes.net for a UI toolkit that's easy and fun, and then the-shoebox.org to see the kinds of things people are doing with it. A: If you could make a Ruby ANSI (and xbin, and idf, and adf...) editor, I would love you. Because that means you would have written ANSI parsing routines that I can hope you release to the open-source community. ... but that is a selfish answer. Oh, and a cross-platform editor would be nice as well (although TundraDraw somewhat takes care of that).
{ "language": "en", "url": "https://stackoverflow.com/questions/88438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Flex Framework - How to tell if user is using cached framework? I have a public-facing application deployed with Flex. I want to switch to using the cached framework (.swz) but need to know whether, for my user base, this is an effective solution or not (most users will only visit the site once, and it's just not worth it). What I want to do is track whether or not a user has loaded the .swz/.swf file during that session, or if they are using a cached version they had previously downloaded from me or another site. If, say, 80% of users are downloading the framework .swz, then I may as well embed the cut-down framework. But if 60% of users already have the framework, I'd rather allow that cached version to be used. The best solution I have now is to look at the web server log and count the .swz file downloads vs. the number of times my main application .swf file is loaded. This is clumsy and a pain, and I haven't even been able to go to the effort of doing it yet. I cannot seem to find anything indicating which .swz or .swf files are loaded. I'd like to track against the current user session if I can determine this. A: My advice is to use the cached framework regardless of your user base. The fact is, you won't be alone in doing so, and it will only be a matter of time before it pays off (even if it is on return visits). A: This is probably not the solution you want, but just to help you with the log parsing, you can use this to get counts for each from the logs (assuming you're on a Linux server):

grep -c \.swz web_log_dir/*
grep -c \.swf web_log_dir/*
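The two grep counts can also be rolled into a small program that reports the ratio the asker wants. This is only a sketch: the file name app.swf and the sample log lines are placeholders, not from the original question; substitute your real application file name and feed it real access-log lines.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the log-counting approach described above:
// count .swz downloads (framework cache misses) against main-app loads.
public class SwzRatio {

    static double cacheMissRatio(List<String> logLines, String swfName) {
        long swz = logLines.stream().filter(l -> l.contains(".swz")).count();
        long swf = logLines.stream().filter(l -> l.contains(swfName)).count();
        return swf == 0 ? 0.0 : (double) swz / swf;
    }

    public static void main(String[] args) {
        List<String> sample = Arrays.asList(
            "GET /app.swf HTTP/1.1 200",
            "GET /framework_3.0.0.477.swz HTTP/1.1 200", // first-time visitor
            "GET /app.swf HTTP/1.1 200");                // cached visitor
        System.out.println(cacheMissRatio(sample, "app.swf")); // prints 0.5
    }
}
```

A high ratio (most app loads accompanied by a .swz download) would argue for embedding the cut-down framework; a low one argues for keeping the cached .swz.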
{ "language": "en", "url": "https://stackoverflow.com/questions/88448", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }