Q: How to get scientific results from non-experimental data (datamining?) * *I want to obtain maximum performance out of a process with many variables, many of which cannot be controlled. *I cannot run thousands of experiments, so it'd be nice if I could run hundreds of experiments and * *vary many controllable parameters *collect data on many parameters indicating performance *'correct,' as much as possible, for those parameters I couldn't control *Tease out the 'best' values for those things I can control, and start all over again It feels like this would be called data mining, where you're going through tons of data which doesn't immediately appear to relate, but does show correlation after some effort. So... Where do I start looking at algorithms, concepts, theory of this sort of thing? Even related terms for purposes of search would be useful. Background: I like to do ultra-marathon cycling, and keep logs of each ride. I'd like to keep more data, and after hundreds of rides be able to pull out information about how I perform. However, everything varies - routes, environment (temp, pres., hum., sun load, wind, precip., etc), fuel, attitude, weight, water load, etc, etc, etc. I can control a few things, but running the same route 20 times to test out a new fuel regime would just be depressing, and take years to perform all the experiments that I'd like to do. I can, however, record all these things and more(telemetry on bicycle FTW). A: It sounds like you want to do some regression analysis. You certainly have plenty of data! Regression analysis is an extremely common modeling technique in statistics and science. (It could be argued that statistics is the art and science of regression analysis.) There are many statistics packages out there to do the computation you'll need. (I'd recommend one, but I'm years out of date.) Data mining has gotten a bad name because far too often people assume correlation equals causation. I found that a good technique is to start with variables you know have an influence and build a statistical model around them first. So you know that wind, weight and climb have an influence on how fast you can travel and statistical software can take your dataset and calculate what the correlation between those factors are. That will give you a statistical model or linear equation: speed = x*weight + y*wind + z*climb + constant When you explore new variables, you will be able to see if the model is improved or not by comparing a goodness of fit metric like R-squared. So you might check if temperature or time of day adds anything to the model. You may want to apply a transformation to you data. For instance, you might find that you perform better on colder days. But really cold days and really hot days might hurt performance. In that case, you could assign temperatures to bins or segments: < 0°C; 0°C to 40°C; > 40°C, or some such. The key is to transform the data in a way that matches a rational model of what is going on in the real world, not just the data itself. In case someone thinks this is not a programming related topic, notice that you can use these same techniques to analyze system performance. A: With that many variables you have too many dimensions and you may want to look at Principal Component Analysis. It takes some of the "art" out of regression analysis and lets the data speak for itself. Some software to do that sort of analysis is shown at the bottom of the link. 
A: I have used the Perl module Statistics::Regression for somewhat similar problems in the past. Be warned, however, that regression analysis is definitely an art. As the warning in the Perl module says, it won't make sense to you if you haven't learned the appropriate math.
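For anyone who wants to try the Statistics::Regression route mentioned above, here is a minimal Perl sketch of the speed = weight + wind + climb model from the earlier answer. The ride-log file name and column order are invented for the example, and the method names follow the module's documented interface (new/include/print/theta/rsq) -- check its docs before relying on them.

    use strict;
    use warnings;
    use Statistics::Regression;

    # Model from the answer above: speed = x*weight + y*wind + z*climb + constant.
    my $reg = Statistics::Regression->new(
        "ride speed model",
        [ "const", "weight", "wind", "climb" ],
    );

    # rides.csv layout assumed here: speed,weight,wind,climb per line.
    open my $fh, '<', 'rides.csv' or die "rides.csv: $!";
    while ( my $line = <$fh> ) {
        chomp $line;
        my ( $speed, $weight, $wind, $climb ) = split /,/, $line;
        # The leading 1.0 regressor gives the model its intercept (the "constant").
        $reg->include( $speed, [ 1.0, $weight, $wind, $climb ] );
    }
    close $fh;

    $reg->print;                    # coefficients, standard errors, R^2, ...
    printf "R^2 = %.3f\n", $reg->rsq;

Comparing rsq before and after adding a candidate variable (temperature, time of day, and so on) is the "does the model improve" check described above.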
{ "language": "en", "url": "https://stackoverflow.com/questions/105996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I get the value of the jdbc.batch_size property at runtime for a Web application using Spring MVC and Hibernate? According to what I have found so far, I can use the following code: LocalSessionFactoryBean sessionFactory = (LocalSessionFactoryBean)super.getApplicationContext().getBean("&sessionFactory"); System.out.println(sessionFactory.getConfiguration().buildSettings().getJdbcBatchSize()); but then I get a Hibernate Exception: org.hibernate.HibernateException: No local DataSource found for configuration - dataSource property must be set on LocalSessionFactoryBean Can somebody shed some light? A: On the versions of Hibernate that I've checked, getConfiguration is not a public method of SessionFactory. In a few desperate cases, I've cast a Session or SessionFactory into its underlying implementation to get at some values that weren't publicly available. In this case that would be: ((SessionFactoryImplementor)sessionFactory).getSettings().getJdbcBatchSize() Of course, that's dangerous because it could break if they change the implementation. I usually only do this for optimizations that I can live without and then wrap the whole thing in a try/catch Throwable block just to make sure it won't hurt anything if it fails. A better idea might be to set the value yourself when you initialize Hibernate so you already know what it is from the beginning. A: Try the following (I can't test it since I don't use Spring): System.out.println(sessionFactory.getConfiguration().getProperty("hibernate.jdbc.batch_size"))
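A small defensive helper along the lines of the first answer (do the cast, but wrap it so an internal API change cannot break anything important) might look like the following. The class name and the fallback value are placeholders; it assumes the Hibernate 3.x SessionFactoryImplementor/Settings API referred to above.

    import org.hibernate.SessionFactory;
    import org.hibernate.engine.SessionFactoryImplementor;

    public final class HibernateSettingsHelper {

        private static final int DEFAULT_BATCH_SIZE = 20; // placeholder fallback

        /**
         * Reads the configured JDBC batch size from a live SessionFactory,
         * falling back to a default if the internal API changes or the cast fails.
         */
        public static int getJdbcBatchSize(SessionFactory sessionFactory) {
            try {
                return ((SessionFactoryImplementor) sessionFactory)
                        .getSettings()
                        .getJdbcBatchSize();
            } catch (Throwable t) {
                // Internal API: treat any failure as "unknown" and use the default.
                return DEFAULT_BATCH_SIZE;
            }
        }
    }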
{ "language": "en", "url": "https://stackoverflow.com/questions/105998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Which SharePoint 2007 features are not available to Office 2003 users? I have been tasked with coming up with a compatibility guide for SharePoint 2007 comparing Office 2003 and Office 2007. Does anyone know where to find such a list? I have been searching for awhile but I cannot seem to find a comprehensive list. Thanks :) A: There is an entire MS white paper on Office integration with SharePoint: http://download.microsoft.com/download/5/d/c/5dcfc15a-c31e-4a14-93cf-b44bce3e447e/Microsoft%20Office%20and%20SharePoint%20Integration%20White%20Paper.doc A: This post might be helpful: http://www.sharepointusecases.com/index.php/2008/08/office-2003-and-sharepoint-2007-comparision
{ "language": "en", "url": "https://stackoverflow.com/questions/106000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Parameterized SQL Columns? I have some code which utilizes parameterized queries to prevent against injection, but I also need to be able to dynamically construct the query regardless of the structure of the table. What is the proper way to do this? Here's an example, say I have a table with columns Name, Address, Telephone. I have a web page where I run Show Columns and populate a select drop-down with them as options. Next, I have a textbox called Search. This textbox is used as the parameter. Currently my code looks something like this: result = pquery('SELECT * FROM contacts WHERE `' + escape(column) + '`=?', search); I get an icky feeling from it though. The reason I'm using parameterized queries is to avoid using escape. Also, escape is likely not designed for escaping column names. How can I make sure this works the way I intend? Edit: The reason I require dynamic queries is that the schema is user-configurable, and I will not be around to fix anything hard-coded. A: Instead of passing the column names, just pass an identifier that you code will translate to a column name using a hardcoded table. This means you don't need to worry about malicious data being passed, since all the data is either translated legally, or is known to be invalid. Psudoish code: @columns = qw/Name Address Telephone/; if ($columns[$param]) { $query = "select * from contacts where $columns[$param] = ?"; } else { die "Invalid column!"; } run_sql($query, $search); A: The trick is to be confident in your escaping and validating routines. I use my own SQL escape function that is overloaded for literals of different types. Nowhere do I insert expressions (as opposed to quoted literal values) directly from user input. Still, it can be done, I recommend a separate — and strict — function for validating the column name. Allow it to accept only a single identifier, something like /^\w[\w\d_]*$/ You'll have to rely on assumptions you can make about your own column names. A: I use ADO.NET and the use of SQL Commands and SQLParameters to those commands which take care of the Escape problem. So if you are in a Microsoft-tool environment as well, I can say that I use this very sucesfully to build dynamic SQL and yet protect my parameters best of luck A: Make the column based on the results of another query to a table that enumerates the possible schema values. In that second query you can hardcode the select to the column name that is used to define the schema. if no rows are returned then the entered column is invalid. A: In standard SQL, you enclose delimited identifiers in double quotes. This means that: SELECT * FROM "SomeTable" WHERE "SomeColumn" = ? will select from a table called SomeTable with the shown capitalization (not a case-converted version of the name), and will apply a condition to a column called SomeColumn with the shown capitalization. Of itself, that's not very helpful, but...if you can apply the escape() technique with double quotes to the names entered via your web form, then you can build up your query reasonably confidently. Of course, you said you wanted to avoid using escape - and indeed you don't have to use it on the parameters where you provide the ? place-holders. But where you are putting user-provided data into the query, you need to protect yourself from malicious people. Different DBMS have different ways of providing delimited identifiers. MS SQL Server, for instance, seems to use square brackets [SomeTable] instead of double quotes. 
A: Column names in some databases can contain spaces, which means you'd have to quote the column name, but if your database contains no such columns, just run the column name through a regular expression or some sort of check before splicing it into the SQL: if ( $column !~ /^\w+$/ ) { die "Bad column name [$column]"; }
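Here is the whitelist idea from the first answer as a runnable Perl/DBI sketch. The table and column names come from the question; the DSN and credentials are placeholders.

    use strict;
    use warnings;
    use DBI;

    # The only column names that will ever be spliced into SQL; everything else dies.
    my %allowed_columns = map { $_ => 1 } qw(Name Address Telephone);

    sub search_contacts {
        my ( $dbh, $column, $search ) = @_;
        die "Invalid column: $column" unless $allowed_columns{$column};

        # The column name comes from our own whitelist; the value stays a bind parameter.
        my $sth = $dbh->prepare("SELECT * FROM contacts WHERE `$column` = ?");
        $sth->execute($search);
        return $sth->fetchall_arrayref( {} );    # array of hashrefs
    }

    my $dbh  = DBI->connect( 'DBI:mysql:database=mydb', 'user', 'password',
        { RaiseError => 1 } );
    my $rows = search_contacts( $dbh, 'Telephone', '555-0100' );

Since the question says the schema is user-configurable, %allowed_columns would in practice be loaded from the schema-definition table suggested in another answer rather than hardcoded.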
{ "language": "en", "url": "https://stackoverflow.com/questions/106001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How do I call a .NET assembly from C/C++? Suppose I am writing an application in C++ and C#. I want to write the low level parts in C++ and write the high level logic in C#. How can I load a .NET assembly from my C++ program and start calling methods and accessing the properties of my C# classes? A: I would definitely investigate C++/CLI for this and avoid COM and all the registration hassles that tends to produce. What is the motivation for using C++? If it is simply style then you might find you can write everything in C++/CLI. If it is performance then calling back and forth between managed C++ and unmanaged code is relatively straight forward. But it is never going to be transparent. You can't pass a managed pointer to unmanaged code first without pinning it so that the garbage collector won't move it, and of course unmanaged code won't know about your managed types. But managed (C++) code can know about your unmanaged types. One other thing to note is that C++/CLI assemblies that include unmanaged code will be architecture specific. You will need separates builds for x86 and x64 (and IA64). A: You should really look into C++/CLI. It makes tasks like this nearly trivial. Otherwise, you'll have to generate COM wrappers around the C# code and have your C++ app call the COM wrappers. A: [Guid("123565C4-C5FA-4512-A560-1D47F9FDFA20")] public interface IConfig { [DispId(1)] string Destination{ get; } [DispId(2)] void Unserialize(); [DispId(3)] void Serialize(); } [ComVisible(true)] [Guid("12AC8095-BD27-4de8-A30B-991940666927")] [ClassInterface(ClassInterfaceType.None)] public sealed class Config : IConfig { public Config() { } public string Destination { get { return ""; } } public void Serialize() { } public void Unserialize() { } } After that, you need to regasm your assembly. Regasm will add the necessary registry entries to allow your .NET component to be see as a COM Component. After, you can call your .NET Component in C++ in the same way as any other COM component. A: If you can have both managed and unmanaged code in your process, you can create a C++ class with virtual functions. Implement the class with mixed mode C++/CLI. Inject the implementation to your C++ code, so that the (high-level) implementation can be called from your (low-level) C++ code. A: You can wrap the .NET component in a COM component - which is quite easy with the .NET tools - and call it via COM. A: If the low level parts in in C++ then typically you call that from the C# code passing in the values that are needed. This should work in the standard way that you're probably accustomed to. You'll need to read up on marshalling for example. You could look at this blog to get some concrete details. A: Create your .NET assembly as normal, but be sure to mark the class with the ClassInterface(ClassInterfaceType.AutoDual) and be sure an assembly info SetAssemblyAtribute to ComVisible( true ). Then, create the COM wrapper with REGASM: regasm mydll.dll /tlb:mydll.tbl /codebase f:_code\ClassLibraryForCom be sure to use the /codebase directive -- it is necessary if you aren't going to give the assembly a strong name. rp A: Since C# can import C++ standard exports, it might be easier to load up your C++ dll inside of a C# application instead of using COM from C++. See documentation for System.Runtime.InteropServices.DllImport. 
Also, here is a complete list of the types of Interop that you can do between managed and unmanaged code: http://blogs.msdn.com/deeptanshuv/archive/2005/06/26/432870.aspx In a nutshell: (a) Using COM-Interop (b) Using imports/pinvoke (explicit method calls) (c) IJW and MC++ apps : MC++ & IJW apps can freely call back and forth to each other. (d) Hosting. This is rare, but the CLR can be hosted by an unmanaged app which means that the runtime invokes a bunch of hosting callbacks. A: I found this link to embedding Mono: http://www.mono-project.com/Embedding_Mono It provides what seems to be a pretty straightforward interface for interacting with assemblies. This could be an attractive option, especially if you want to be cross-platform
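As a concrete illustration of the C++/CLI route recommended in the earlier answers, the mixed-mode bridge can be as small as this. The assembly, namespace, class and method names are invented for the example; only the mechanics (#using, gcnew, marshal_as) are the point.

    // highlevel_bridge.cpp -- compile with /clr and make the C# assembly findable,
    // e.g. cl /clr highlevel_bridge.cpp (exact flags depend on your build setup).
    #using <HighLevelLogic.dll>            // hypothetical C# assembly

    #include <msclr/marshal_cppstd.h>
    #include <string>

    using namespace System;

    // Plain C++ entry point that the unmanaged parts of the program can call;
    // inside it we are free to use managed types from the C# assembly.
    std::string DescribeOrder(int orderId)
    {
        HighLevel::OrderLogic^ logic = gcnew HighLevel::OrderLogic();  // hypothetical class
        String^ text = logic->Describe(orderId);                       // hypothetical method

        // Marshal System::String back into something unmanaged callers understand.
        return msclr::interop::marshal_as<std::string>(text);
    }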
{ "language": "en", "url": "https://stackoverflow.com/questions/106033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: How do I Yield to the UI thread to update the UI while doing batch processing in a WinForm app? I have a WinForms app written in C# with .NET 3.5. It runs a lengthy batch process. I want the app to update status of what the batch process is doing. What is the best way to update the UI? A: The quick and dirty way is using Application.DoEvents() But this can cause problems with the order events are handled. So it's not recommended The problem is probably not that you have to yield to the ui thread but that you do the processing on the ui thread blocking it from handling messages. You can use the backgroundworker component to do the batch processing on a different thread without blocking the UI thread. A: Run the lengthy process on a background thread. The background worker class is an easy way of doing this - it provides simple support for sending progress updates and completion events for which the event handlers are called on the correct thread for you. This keeps the code clean and concise. To display the updates, progress bars or status bar text are two of the most common approaches. The key thing to remember is if you are doing things on a background thread, you must switch to the UI thread in order to update windows controls etc. A: To beef out what people are saying about DoEvents, here's a description of what can happen. Say you have some form with data on it and your long running event is saving it to the database or generating a report based on it. You start saving or generating the report, and then periodically you call DoEvents so that the screen keeps painting. Unfortunately the screen isn't just painting, it will also react to user actions. This is because DoEvents stops what you're doing now to process all the windows messages waiting to be processed by your Winforms app. These messages include requests to redraw, as well as any user typing, clicking, etc. So for example, while you're saving the data, the user can do things like making the app show a modal dialog box that's completely unrelated to the long running task (eg Help->About). Now you're reacting to new user actions inside the already running long running task. DoEvents will return when all the events that were waiting when you called it are finished, and then your long running task will continue. What if the user doesn't close the modal dialog? Your long running task waits forever until this dialog is closed. If you're committing to a database and holding a transaction, now you're holding a transaction open while the user is having a coffee. Either your transaction times out and you lose your persistence work, or the transaction doesn't time out and you potentially deadlock other users of the DB. What's happening here is that Application.DoEvents makes your code reentrant. See the wikipedia definition here. Note some points from the top of the article, that for code to be reentrant, it: * *Must hold no static (or global) non-constant data. *Must work only on the data provided to it by the caller. *Must not rely on locks to singleton resources. *Must not call non-reentrant computer programs or routines. It's very unlikely that long running code in a WinForms app is working only on data passed to the method by the caller, doesn't hold static data, holds no locks, and calls only other reentrant methods. As many people here are saying, DoEvents can lead to some very weird scenarios in code. 
The bugs it can lead to can be very hard to diagnose, and your user is not likely to tell you "Oh, this might have happened because I clicked this unrelated button while I was waiting for it to save". A: Use Backgroundworker, and if you are also trying to update the GUI thread by handling the ProgressChanged event(like, for a ProgressBar), be sure to also set WorkerReportsProgress=true, or the thread that is reporting progress will die the first time it tries to call ReportProgress... an exception is thrown, but you might not see it unless you have 'when thrown' enabled, and the output will just show that the thread exited. A: The BackgroundWorker sounds like the object you want. A: Use the backgroundworker component to run your batch processing in a seperate thread, this will then not impact on the UI thread. A: I want to restate what my previous commenters noted: please avoid DoEvents() whenever possible, as this is almost always a form of "hack" and causes maintenance nightmares. If you go the BackgroundWorker road (which I suggest), you'll have to deal with cross-threading calls to the UI if you want to call any methods or properties of Controls, as these are thread-affine and must be called only from the thread they were created on. Use Control.Invoke() and/or Control.BeginInvoke() as appropriate. A: If you are running in a background/worker thread, you can call Control.Invoke on one of your UI controls to run a delegate in the UI thread. Control.Invoke is synchronous (Waits until the delegate returns). If you don't want to wait you use .BeginInvoke() to only queue the command. The returnvalue of .BeginInvoke() allows you to check if the method completed or to wait until it completed. A: Application.DoEvents() or possibly run the batch on a separate thread? A: DoEvents() was what I was looking for but I've also voted up the backgroundworker answers because that looks like a good solution that I will investigate some more.
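Pulling the BackgroundWorker suggestions together, a minimal wiring looks like the sketch below. Control names such as progressBar1, statusLabel and startButton are assumed designer-style names, and DoOneBatchStep stands in for the real batch work.

    using System.ComponentModel;
    using System.Windows.Forms;

    public partial class BatchForm : Form
    {
        private readonly BackgroundWorker worker = new BackgroundWorker();

        public BatchForm()
        {
            InitializeComponent();
            worker.WorkerReportsProgress = true;               // required for ReportProgress
            worker.DoWork += Worker_DoWork;                    // runs on a thread-pool thread
            worker.ProgressChanged += Worker_ProgressChanged;  // raised on the UI thread
            worker.RunWorkerCompleted += Worker_RunWorkerCompleted;
        }

        private void startButton_Click(object sender, System.EventArgs e)
        {
            if (!worker.IsBusy)
                worker.RunWorkerAsync();
        }

        private void Worker_DoWork(object sender, DoWorkEventArgs e)
        {
            for (int i = 0; i < 100; i++)
            {
                DoOneBatchStep(i);                 // the lengthy work -- never touch controls here
                worker.ReportProgress(i + 1);
            }
        }

        private void Worker_ProgressChanged(object sender, ProgressChangedEventArgs e)
        {
            // Safe to touch controls: this handler runs on the UI thread.
            progressBar1.Value = e.ProgressPercentage;
            statusLabel.Text = "Processed " + e.ProgressPercentage + "%";
        }

        private void Worker_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
        {
            statusLabel.Text = e.Error == null ? "Done" : "Failed: " + e.Error.Message;
        }

        private void DoOneBatchStep(int i) { /* placeholder for one unit of batch work */ }
    }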
{ "language": "en", "url": "https://stackoverflow.com/questions/106036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How can I debug an exe with some switch flags from the command prompt? For example, from the command prompt I need to launch the exe with some switch flags under a debugger. How do I do it? This is a C/C++ exe built with VS2005 that I need to debug. I pass some flags to this exe to perform some stuff. A: You'll need to give more information about your development environment to get a specific answer. For example, with a C# project in Visual Studio, you can right-click the project->Properties and then fill out the "Command line arguments" field in the "Debug" tab. A: I think I have it working. Right-click the project->Properties and then fill out the "Command line arguments" field in the "Debug" tab. bkane's solution worked. thx.
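If you would rather stay on the command prompt (which is what the question asked), Visual Studio can also be launched against an arbitrary exe together with its switches, assuming your devenv supports the /debugexe switch; the exe name and flags below are placeholders.

    rem Opens Visual Studio with myapp.exe as the debuggee; F10/F5 then starts it under the debugger.
    devenv /debugexe myapp.exe /flag1 /flag2 value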
{ "language": "en", "url": "https://stackoverflow.com/questions/106038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Best approach to web service powered by a daemon I am relatively new to web services and am wondering what the standard "best approach" is. Basically, the way things work is I need to have a task running in the background constantly. The web service will connect to the daemon and return with an appropriate response. Currently, the communication is over unix domain sockets (Linux is the expected server platform). Is this the "right" way to do this? Or is there a more proper way to have a background task that your web-server is based on? A: That's pretty much the best practice. You may be familiar with this pattern from other web applications: The daemon is frequently a database. :)
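For what it is worth, the daemon side of a setup like this can be sketched in a few lines of Perl; the socket path and the one-line request/response protocol are made up for the example, and the web service would connect to the same path once per request.

    use strict;
    use warnings;
    use Socket qw(SOCK_STREAM);
    use IO::Socket::UNIX;

    my $path = '/tmp/myapp-daemon.sock';    # invented path for the example
    unlink $path;

    my $server = IO::Socket::UNIX->new(
        Type   => SOCK_STREAM,
        Local  => $path,
        Listen => 5,
    ) or die "listen on $path: $!";

    # The task that runs constantly would live alongside this loop (or in another
    # thread/process); here we only answer status requests from the web service.
    while ( my $client = $server->accept ) {
        my $request = <$client>;
        print {$client} "status: ok\n";      # replace with a real response
        close $client;
    }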
{ "language": "en", "url": "https://stackoverflow.com/questions/106045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I make a batch file to act like a simple grep using Perl? I already know the obvious answer to this question: "just download <insert favorite windows grep or grep-like tool here>". However, I work in an environment with strict controls by the local IT staff as to what we're allowed to have on our computers. Suffice it to say: I have access to Perl on Windows XP. Here's a quick Perl script I came up with that does what I want, but I haven't figured up how to set up a batch file such that I can either pipe a command output into it, or pass a file (or list of files?) as an argument after the "expression to grep": perl -n -e "print $_ if (m![expression]!);" [filename] How do I write a batch script that I can do something like, for example: dir | grep.bat mypattern grep.bat mypattern myfile.txt EDIT: Even though I marked another "answer", I wanted to give kudos to Ray Hayes answer, as it is really the "Windows Way" to do it, even if another answer is technically closer to what I wanted. A: I wrote this a while back: @rem = '--*-Perl-*-- @echo off perl -x -S %0 %* goto endofperl @rem -- BEGIN PERL -- '; #!d:/Perl/bin/perl.exe -w #line 10 use strict; #use Test::Setup; use Getopt::Long; Getopt::Long::Configure ("bundling"); my $ignore_case = 0; my $number_line = 0; my $invert_results = 0; my $verbose = 0; my $result = GetOptions( 'i|ignore_case' => \$ignore_case, 'n|number' => \$number_line, 'v|invert' => \$invert_results, 'verbose' => \$verbose, ); my $regex = shift; if ( $ignore_case ) { $regex = "(?i:$regex)"; } $regex = qr/$regex/; print "\$regex=$regex\n"; if ( $verbose ) { print "Verbose: Ignoring case.\n" if $ignore_case; print "Verbose: Printing file name and line number.\n" if $number_line; print "Verbose: Inverting result set.\n" if $invert_results; print "\n"; } @ARGV = map { glob "$_" } @ARGV; while ( <> ) { my $matches = m/$regex/; next unless $matches ^ $invert_results; print "$ARGV\:$.:" if $number_line; print; } __END__ :endofperl A: First, turn it into a real script instead of a one-liner: use strict; use warnings; my $pattern = shift or die "Usage: $0 <pattern> [files|-]\n"; while (<>) { print if /$pattern/ } Then turn it into a batch file using pl2bat: pl2bat mygrep.pl This will create "mygrep.bat". For a full-featured grep (and many other Unix applications) written completely in Perl, see the Perl Power Tools project. While the Perl Power Tools are good if you can only run Perl, I generally prefer the set of GnuWin32 tools. They don't require installation. (You don't need administrative privileges, just a directory you can write to.) A: Most of the power of grep is already available on your machine in the Windows application FindStr.exe which is part of all Windows 2000, XP and Vista machines! It offers RegExpr etc. Far easier than a batch file which in turn calls Perl! c:\>FindStr /? Searches for strings in files. FINDSTR [/B] [/E] [/L] [/R] [/S] [/I] [/X] [/V] [/N] [/M] [/O] [/P] [/F:file] [/C:string] [/G:file] [/D:dir list] [/A:color attributes] [/OFF[LINE]] strings [[drive:][path]filename[ ...]] /B Matches pattern if at the beginning of a line. /E Matches pattern if at the end of a line. /L Uses search strings literally. /R Uses search strings as regular expressions. /S Searches for matching files in the current directory and all subdirectories. /I Specifies that the search is not to be case-sensitive. /X Prints lines that match exactly. /V Prints only lines that do not contain a match. /N Prints the line number before each line that matches. 
/M Prints only the filename if a file contains a match. /O Prints character offset before each matching line. /P Skip files with non-printable characters. /OFF[LINE] Do not skip files with offline attribute set. /A:attr Specifies color attribute with two hex digits. See "color /?" /F:file Reads file list from the specified file(/ stands for console). /C:string Uses specified string as a literal search string. /G:file Gets search strings from the specified file(/ stands for console). /D:dir Search a semicolon delimited list of directories strings Text to be searched for. [drive:][path]filename Specifies a file or files to search. Use spaces to separate multiple search strings unless the argument is prefixed with /C. For example, 'FINDSTR "hello there" x.y' searches for "hello" or "there" in file x.y. 'FINDSTR /C:"hello there" x.y' searches for "hello there" in file x.y. Regular expression quick reference: . Wildcard: any character * Repeat: zero or more occurances of previous character or class ^ Line position: beginning of line $ Line position: end of line [class] Character class: any one character in set [^class] Inverse class: any one character not in set [x-y] Range: any characters within the specified range \x Escape: literal use of metacharacter x \<xyz Word position: beginning of word xyz\> Word position: end of word A: Download and install ack. It's a superior replacement to grep and - thanks to Perl's magic dual mode .BAT / Perl script magic - it'll work on the command line for you. A: You need to do something like this: @echo off perl -x -S script.pl %1 The "%1" will pass the argument to the Perl script. Save it as a .bat file, and you're good to go. A: I agree with Axeman and Mr. Hayes about using a better tool for the job. That said, you could try something like this in your batch file to run your custom script against a file wildcard expression: @echo off for /f "usebackq delims==" %%f in (`dir /w /b %2`) do ( perl -n -e "print $_ if (m!%1!);" "%%f" REM or something like: myperlscript.pl %1 "%%f" ) In this way, you can do things like "grep mypattern myfile.txt", "grep mypattern .", "grep mypattern *.doc", etc.
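A tiny variant of the Perl approach that also covers the dir | grep.bat mypattern usage from the question: the pattern travels through an environment variable so cmd.exe quoting does not fight with the regex, and the (deliberately simple) loop collects any file arguments.

    @echo off
    rem grep.bat  --  usage:  grep.bat pattern [file ...]   or   some-command | grep.bat pattern
    setlocal
    set "GREP_PATTERN=%~1"
    shift

    rem Collect remaining arguments as file names; with none, Perl reads stdin (the pipe case).
    set FILES=
    :collect
    if "%~1"=="" goto run
    set FILES=%FILES% "%~1"
    shift
    goto collect

    :run
    perl -ne "print if m/$ENV{GREP_PATTERN}/" %FILES%
    endlocal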
{ "language": "en", "url": "https://stackoverflow.com/questions/106053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: QDrag destroyed while dragging I have a Windows/Linux Qt 4.3 application that uses drag and drop in a QTreeView. I have two very similar applications which use the same set of Qt libraries. Drag and drop works in both on Linux but in only in one on Windows. In the application that does not work the QDrag object gets deleted as soon as the mouse is moved. It is deleted by a DeferredDelete event from the event queue which is still processed in Qt during a drag. I do not know how to see what is causing the QDrag object to get deleted prematurely. I can not figure out a good way to debug this problem. I have compared the source and cannot find anything obvious. I have tried using the code from one of the applications in the other application. Any suggestions? Update: The reason the QDrag operation failed is because COM was not initialized successfully so the call to DoDragDrop in QDrag::exec returned immediately. QApplication tried to initialize COM by calling OleInitialize in qt_init but it failed with the error "Cannot change thread mode after it is set". The interesting thing is that this happens even when OleInitialize is the first thing done in main so the thread mode is getting set initially by some external dependency. One of the differences between the applications that work on Windows is that the one that fails also contains .NET code so maybe that is the problem. Solved: This problem is a COM/CLR interop issue. The CLR sets the apartment state to MTA when it initializes and then when Qt attempts to initialize COM it fails. This problem and an old solution are discussed by Adam Nathan in Gotcha with STAThreadAttribute and Managed C++. In Visual Studio 2005 you can set the /CLRTHREADATTRIBUTE:STA compiler option in Configuration Properties > Linker > Advanced to set the threading attribute to STA without needing to create a new entry point. A: I have no idea what can cause this, but I would try to find out by subclassing QDrag, overwrite deleteLater() (well, reimplement it, but as it's a slot, it will get called anyway), use this instead of a QDrag and put a breakpoint in deleteLater().
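If someone else hits the same symptom and wants to confirm it is this COM apartment clash, checking the OleInitialize result at the top of main makes the failure visible; RPC_E_CHANGED_MODE is the "cannot change thread mode" case described above. A minimal sketch (link against ole32.lib):

    #include <windows.h>
    #include <ole2.h>
    #include <cstdio>

    int main(int argc, char *argv[])
    {
        HRESULT hr = OleInitialize(NULL);
        if (hr == RPC_E_CHANGED_MODE)
        {
            // Something (here, the CLR) already put this thread into the MTA,
            // so QDrag::exec / DoDragDrop will fail later on.
            std::fprintf(stderr, "COM already initialized as MTA on this thread\n");
        }

        // ... construct QApplication and the rest of the program as usual ...

        if (SUCCEEDED(hr))
            OleUninitialize();
        return 0;
    }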
{ "language": "en", "url": "https://stackoverflow.com/questions/106056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Practical example of Lisp's flexibility? Someone is trying to sell Lisp to me, as a super powerful language that can do everything ever, and then some. Is there a practical code example of Lisp's power?(Preferably alongside equivalent logic coded in a regular language.) A: The thing that I like most about Lisp (and Smalltalk) systems, is that they feel alive. You can easily probe & modify Lisp systems while they are running. If this sounds mysterious, start Emacs, and type some Lisp code. Type C-M-x and voilà! You just changed Emacs from within Emacs. You can go on and redefine all Emacs functions while it is running. Another thing is that the code = list equivalence make the frontier between code and data very thin. And thanks to macros, it is very easy to extend the language and make quick DSLs. For instance, it is possible to code a basic HTML builder with which the code is very close to the produced HTML output: (html (head (title "The Title")) (body (h1 "The Headline" :class "headline") (p "Some text here" :id "content"))) => <html> <head> <title>The title</title> </head> <body> <h1 class="headline">The Headline</h1> <p id="contents">Some text here</p> </body> </html> In the Lisp code, auto indentation make the code look like the output, except there aren't any closing tags. A: I like macros. Here's code to stuff away attributes for people from LDAP. I just happened to have that code lying around and fiigured it'd be useful for others. Some people are confused over a supposed runtime penalty of macros, so I've added an attempt at clarifying things at the end. In The Beginning, There Was Duplication (defun ldap-users () (let ((people (make-hash-table :test 'equal))) (ldap:dosearch (ent (ldap:search *ldap* "(&(telephonenumber=*) (cn=*))")) (let ((mail (car (ldap:attr-value ent 'mail))) (uid (car (ldap:attr-value ent 'uid))) (name (car (ldap:attr-value ent 'cn))) (phonenumber (car (ldap:attr-value ent 'telephonenumber)))) (setf (gethash uid people) (list mail name phonenumber)))) people)) You can think of a "let binding" as a local variable, that disappears outside the LET form. Notice the form of the bindings -- they are very similar, differing only in the attribute of the LDAP entity and the name ("local variable") to bind the value to. Useful, but a bit verbose and contains duplication. On the Quest for Beauty Now, wouldn't it be nice if we didn't have to have all that duplication? A common idiom is is WITH-... macros, that binds values based on an expression that you can grab the values from. Let's introduce our own macro that works like that, WITH-LDAP-ATTRS, and replace it in our original code. (defun ldap-users () (let ((people (make-hash-table :test 'equal))) ; equal so strings compare equal! (ldap:dosearch (ent (ldap:search *ldap* "(&(telephonenumber=*) (cn=*))")) (with-ldap-attrs (mail uid name phonenumber) ent (setf (gethash uid people) (list mail name phonenumber)))) people)) Did you see how a bunch of lines suddenly disappeared, and was replaced with just one single line? How to do this? Using macros, of course -- code that writes code! Macros in Lisp is a totally different animal than the ones you can find in C/C++ through the use of the pre-processor: here, you can run real Lisp code (not the #define fluff in cpp) that generates Lisp code, before the other code is compiled. Macros can use any real Lisp code, i.e., ordinary functions. Essentially no limits. Getting Rid of Ugly So, let's see how this was done. To replace one attribute, we define a function. 
(defun ldap-attr (entity attr) `(,attr (car (ldap:attr-value ,entity ',attr)))) The backquote syntax looks a bit hairy, but what it does is easy. When you call LDAP-ATTRS, it'll spit out a list that contains the value of attr (that's the comma), followed by car ("first element in the list" (cons pair, actually), and there is in fact a function called first you can use, too), which receives the first value in the list returned by ldap:attr-value. Because this isn't code we want to run when we compile the code (getting the attribute values is what we want to do when we run the program), we don't add a comma before the call. Anyway. Moving along, to the rest of the macro. (defmacro with-ldap-attrs (attrs ent &rest body) `(let ,(loop for attr in attrs collecting `,(ldap-attr ent attr)) ,@body)) The ,@-syntax is to put the contents of a list somewhere, instead of the actual list. Result You can easily verify that this will give you the right thing. Macros are often written this way: you start off with code you want to make simpler (the output), what you want to write instead (the input), and then you start molding the macro until your input gives the correct output. The function macroexpand-1 will tell you if your macro is correct: (macroexpand-1 '(with-ldap-attrs (mail phonenumber) ent (format t "~a with ~a" mail phonenumber))) evaluates to (let ((mail (car (trivial-ldap:attr-value ent 'mail))) (phonenumber (car (trivial-ldap:attr-value ent 'phonenumber)))) (format t "~a with ~a" mail phonenumber)) If you compare the LET-bindings of the expanded macro with the code in the beginning, you'll find that it is in the same form! Compile-time vs Runtime: Macros vs Functions A macro is code that is run at compile-time, with the added twist that they can call any ordinary function or macro as they please! It's not much more than a fancy filter, taking some arguments, applying some transformations and then feeding the compiler the resulting s-exps. Basically, it lets you write your code in verbs that can be found in the problem domain, instead of low-level primitives from the language! As a silly example, consider the following (if when wasn't already a built-in):: (defmacro my-when (test &rest body) `(if ,test (progn ,@body))) if is a built-in primitive that will only let you execute one form in the branches, and if you want to have more than one, well, you need to use progn:: ;; one form (if (numberp 1) (print "yay, a number")) ;; two forms (if (numberp 1) (progn (assert-world-is-sane t) (print "phew!")))) With our new friend, my-when, we could both a) use the more appropriate verb if we don't have a false branch, and b) add an implicit sequencing operator, i.e. progn:: (my-when (numberp 1) (assert-world-is-sane t) (print "phew!")) The compiled code will never contain my-when, though, because in the first pass, all macros are expanded so there is no runtime penalty involved! Lisp> (macroexpand-1 '(my-when (numberp 1) (print "yay!"))) (if (numberp 1) (progn (print "yay!"))) Note that macroexpand-1 only does one level of expansions; it's possible (most likely, in fact!) that the expansion continues further down. However, eventually you'll hit the compiler-specific implementation details which are often not very interesting. But continuing expanding the result will eventually either get you more details, or just your input s-exp back. Hope that clarifies things. Macros is a powerful tool, and one of the features in Lisp I like. 
A: See how you can extend Common Lisp with XML templating: cl-quasi-quote XML example, project page, (babel:octets-to-string (with-output-to-sequence (*html-stream*) <div (constantAttribute 42 someJavaScript `js-inline(print (+ 40 2)) runtimeAttribute ,(concatenate 'string "&foo" "&bar")) <someRandomElement <someOther>>>)) => "<div constantAttribute=\"42\" someJavaScript=\"javascript: print((40 + 2))\" runtimeAttribute=\"&amp;foo&amp;bar\"> <someRandomElement> <someOther/> </someRandomElement> </div>" This is basically the same thing as Lisp's backtick reader (which is for list quasi quoting), but it also works for various other things like XML (installed on a special <> syntax), JavaScript (installed on `js-inline), etc. To make it clear, this is implemented in a user library! And it compiles the static XML, JavaScript, etc. parts into UTF-8 encoded literal byte arrays that are ready to be written to the network stream. With a simple , (comma) you can get back to lisp and interleave runtime generated data into the literal byte arrays. This is not for the faint of heart, but this is what the library compiles the above into: (progn (write-sequence #(60 100 105 118 32 99 111 110 115 116 97 110 116 65 116 116 114 105 98 117 116 101 61 34 52 50 34 32 115 111 109 101 74 97 118 97 83 99 114 105 112 116 61 34 106 97 118 97 115 99 114 105 112 116 58 32 112 114 105 110 116 40 40 52 48 32 43 32 50 41 41 34 32 114 117 110 116 105 109 101 65 116 116 114 105 98 117 116 101 61 34) *html-stream*) (write-quasi-quoted-binary (let ((*transformation* #<quasi-quoted-string-to-quasi-quoted-binary {1006321441}>)) (transform-quasi-quoted-string-to-quasi-quoted-binary (let ((*transformation* #<quasi-quoted-xml-to-quasi-quoted-string {1006326E51}>)) (locally (declare (sb-ext:muffle-conditions sb-ext:compiler-note)) (let ((it (concatenate 'string "runtime calculated: " "&foo" "&bar"))) (if it (transform-quasi-quoted-xml-to-quasi-quoted-string/attribute-value it) nil)))))) *html-stream*) (write-sequence #(34 62 10 32 32 60 115 111 109 101 82 97 110 100 111 109 69 108 101 109 101 110 116 62 10 32 32 32 32 60 115 111 109 101 79 116 104 101 114 47 62 10 32 32 60 47 115 111 109 101 82 97 110 100 111 109 69 108 101 109 101 110 116 62 10 60 47 100 105 118 62 10) *html-stream*) +void+) For reference, the two big byte vectors in the above look like this when converted to string: "<div constantAttribute=\"42\" someJavaScript=\"javascript: print((40 + 2))\" runtimeAttribute=\"" And the second one: "\"> <someRandomElement> <someOther/> </someRandomElement> </div>" And it combines well with other Lisp structures like macros and functions. now, compare this to JSPs... A: One thing I like is the fact that I can upgrade code "run-time" without losing application state. It's a thing only useful in some cases, but when it is useful, having it already there (or, for only a minimal cost during development) is MUCH cheaper than having to implement it from scratch. Especially since this comes at "no to almost no" cost. A: I was an AI student at MIT in the 1970s. Like every other student, I thought language was paramount. Nevertheless, Lisp was the primary language. These are some things I still think it is pretty good for: * *Symbolic math. It is easy and instructive to write symbolic differentiation of an expression, and algebraic simplification. I still do those, even though I do them in C-whatever. *Theorem proving. Every now & then I go on a temporary AI binge, like trying to prove that insertion sort is correct. 
For that I need to do symbolic manipulation, and I usually fall back on Lisp. *Little domain-specific-languages. I know Lisp isn't really practical, but if I want to try out a little DSL without having to get all wrapped up in parsing, etc., Lisp macros make it easy. *Little play algorithms like minimax game tree search can be done in like three lines. *Want to try lambda calculus? It's easy in Lisp. Mainly what Lisp does for me is mental exercise. Then I can carry that over into more practical languages. P.S. Speaking of lambda calculus, what also started in the 1970s, in that same AI millieu, was that OO started invading everybody's brain, and somehow, interest in what it is seems to have crowded out much interest in what it is good for. I.e. work on machine learning, natural language, vision, problem solving, all sort of went to the back of the room while classes, messages, types, polymorphism, etc. went to the front. A: I like this macro example from http://common-lisp.net/cgi-bin/viewcvs.cgi/cl-selenium/?root=cl-selenium It's a Common Lisp binding to Selenium (a web browser test framework), but instead of mapping every method, it reads Selenium's own API definition XML document at compile time and generates the mapping code using macros. You can see the generated API here: common-lisp.net/project/cl-selenium/api/selenium-package/index.html This is essentially driving macros with external data, which happens to be an XML document in this case, but could have been as complex is reading from a database or network. This is the power of having the entire Lisp environment available to you at compile time. A: Have you taken a look at this explanation of why macros are powerful and flexible? No examples in other languages though, sorry, but it might sell you on macros. A: @Mark, While there is some truth to what you are saying, I believe it is not always as straight forward. Programmers and people in general don't always take the time to evaluate all the possibilities and decide to switch languages. Often It's the managers that decide, or the schools that teach the first languages ... and programmers never have the need to invest enough amount of time to get to a certain level were they can decide this language saves me more time than that language. Plus you have to admit that languages that have the backing of huge commercial entities such as Microsoft or Sun will always have an advantage in the market compared to languages without such backing. In order to answer the original question, Paul Graham tries to give an example here even though I admit it is not necessarily as practical as I would like :-) A: One specific thing that impressed me is the ability to write your own object-oriented programming extension, if you happen not to like the included CLOS. One of them is in Garnet, and one in Paul Graham's On Lisp. There's also a package called Screamer that allows nondeterministic programming (which I haven't evaluated). Any language that allows you to change it to support different programming paradigms has to be flexible. A: You might find this post by Eric Normand helpful. He describes how as a codebase grows, Lisp helps by letting you build the language up to your application. While this often takes extra effort early on, it gives you a big advantage later. A: The simple fact that it's a multi-paradigm language makes it very very flexible. A: The best example I can think of that is widely available is the book by Paul Graham, On Lisp. 
The full PDF can be downloaded from the link I just gave. You could also try Practical Common Lisp (also fully available on the web). I have a lot of unpractical examples. I once wrote a program in about 40 lines of lisp which could parse itself, treat its source as a lisp list, do a tree traversal of the list and build an expression that evaluated to WALDO if the waldo identifier existed in the source or evaluate to nil if waldo was not present. The returned expression was constructed by adding calls to car/cdr to the original source that was parsed. I have no idea how to do this in other languages in 40 lines of code. Perhaps perl can do it in even fewer lines. A: You may find this article helpful: http://www.defmacro.org/ramblings/lisp.html That said, it's very, very hard to give short, practical examples of Lisp's power because it really shines only in non-trivial code. When your project grows to a certain size, you will appreciate Lisp's abstraction facilities and be glad that you've been using them. Reasonably short code samples, on the other hand, will never give you a satisfying demonstration of what makes Lisp great because other languages' predefined abbreviations will look more attractive in small examples than Lisp's flexibility in managing domain-specific abstractions. A: There are plenty of killer features in Lisp, but macros is one I love particularily, because there's not really a barrier anymore between what the language defines and what I define. For example, Common Lisp doesn't have a while construct. I once implemented it in my head, while walking. It's straightforward and clean: (defmacro while (condition &body body) `(if ,condition (progn ,@body (do nil ((not ,condition)) ,@body)))) Et voilà! You just extended the Common Lisp language with a new fundamental construct. You can now do: (let ((foo 5)) (while (not (zerop (decf foo))) (format t "still not zero: ~a~%" foo))) Which would print: still not zero: 4 still not zero: 3 still not zero: 2 still not zero: 1 Doing that in any non-Lisp language is left as an exercise for the reader... A: Actually, a good practical example is the Lisp LOOP Macro. http://www.ai.sri.com/pkarp/loop.html The LOOP macro is simply that -- a Lisp macro. Yet it basically defines a mini looping DSL (Domain Specific Language). When you browse through that little tutorial, you can see (even as a novice) that it's difficult to know what part of the code is part of the Loop macro, and which is "normal" Lisp. And that's one of the key components of Lisps expressiveness, that the new code really can't be distinguished from the system. While in, say, Java, you may not (at a glance) be able to know what part of a program comes from the standard Java library versus your own code, or even a 3rd party library, you DO know what part of the code is the Java language rather than simply method calls on classes. Granted, it's ALL the "Java language", but as programmer, you are limited to only expressing your application as a combination of classes and methods (and now, annotations). Whereas in Lisp, literally everything is up for grabs. Consider the Common SQL interface to connect Common Lisp to SQL. Here, http://clsql.b9.com/manual/loop-tuples.html, they show how the CL Loop macro is extended to make the SQL binding a "first class citizen". You can also observe constructs such as "[select [first-name] [last-name] :from [employee] :order-by [last-name]]". This is part of the CL-SQL package and implemented as a "reader macro". 
See, in Lisp, not only can you make macros to create new constructs, like data structures, control structures, etc. But you can even change the syntax of the language through a reader macro. Here, they're using a reader macro (in the case, the '[' symbol) to drop in to a SQL mode to make SQL work like embedded SQL, rather than as just raw strings like in many other languages. As application developers, our task is to convert our processes and constructs in to a form that the processor can understand. That means we, inevitably, have to "talk down" to the computer language, since it "doesn't understand" us. Common Lisp is one of the few environments where we can not only build our application from the top down, but where we can lift the language and environment up to meet us half way. We can code at both ends. Mind, as elegant as this can be, it's no panacea. Obviously there are other factors that influence language and environment choice. But it's certainly worth learning and playing with. I think learning Lisp is a great way to advance your programming, even in other languages. A: I like Common Lisp Object System (CLOS) and multimethods. Most, if not all, object-oriented programming languages have the basic notions of classes and methods. The following snippet in Python defines the classes PeelingTool and Vegetable (something similar to the Visitor pattern): class PeelingTool: """I'm used to peel things. Mostly fruit, but anything peelable goes.""" def peel(self, veggie): veggie.get_peeled(self) class Veggie: """I'm a defenseless Veggie. I obey the get_peeled protocol used by the PeelingTool""" def get_peeled(self, tool): pass class FingerTool(PeelingTool): ... class KnifeTool(PeelingTool): ... class Banana(Veggie): def get_peeled(self, tool): if type(tool) == FingerTool: self.hold_and_peel(tool) elif type(tool) == KnifeTool: self.cut_in_half(tool) You put the peel method in the PeelingTool and have the Banana accept it. But, it must belong to the PeelingTool class, so it can only be used if you have an instance of the PeelingTool class. The Common Lisp Object System version: (defclass peeling-tool () ()) (defclass knife-tool (peeling-tool) ()) (defclass finger-tool (peeling-tool) ()) (defclass veggie () ()) (defclass banana (veggie) ()) (defgeneric peel (veggie tool) (:documentation "I peel veggies, or actually anything that wants to be peeled")) ;; It might be possible to peel any object using any tool, ;; but I have no idea how. Left as an exercise for the reader (defmethod peel (veggie tool) ...) ;; Bananas are easy to peel with our fingers! (defmethod peel ((veggie banana) (tool finger-tool)) (with-hands (left-hand right-hand) *me* (hold-object left-hand banana) (peel-with-fingers right-hand tool banana))) ;; Slightly different using a knife (defmethod peel ((veggie banana) (tool knife-tool)) (with-hands (left-hand right-hand) *me* (hold-object left-hand banana) (cut-in-half tool banana))) Anything can be written in any language that's Turing complete; the difference between the languages is how many hoops you have to jump through to get the equivalent result. A powerful languages like Common Lisp, with functionality such as macros and the CLOS, allows you to achieve results fast and easy without jumping through so many hoops that you either settle for a subpar solution, or find yourself becoming a kangaroo. 
A: I found this article quite interesting: Programming Language Comparison: Lisp vs C++ The author of the article, Brandon Corfman, writes about a study that compares solutions in Java, C++ and Lisp to a programming problem, and then writes his own solution in C++. The benchmark solution is Peter Norvig's 45 lines of Lisp (written in 2 hours). Corfman finds that it is difficult to reduce his solution to less than 142 lines of C++/STL. His analysis of why, is an interesting read. A: John Ousterhout made this interesting observation regarding Lisp in 1994: Language designers love to argue about why this language or that language must be better or worse a priori, but none of these arguments really matter a lot. Ultimately all language issues get settled when users vote with their feet. If [a language] makes people more productive then they will use it; when some other language comes along that is better (or if it is here already), then people will switch to that language. This is The Law, and it is good. The Law says to me that Scheme (or any other Lisp dialect) is probably not the "right" language: too many people have voted with their feet over the last 30 years. http://www.vanderburg.org/OldPages/Tcl/war/0009.html
{ "language": "en", "url": "https://stackoverflow.com/questions/106058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "71" }
Q: regular expression to replace two (or more) consecutive characters by only one? In java, which regular expression can be used to replace these, for example: before: aaabbb after: ab before: 14442345 after: 142345 thanks! A: "14442345".replaceAll("(.)\\1+", "$1"); A: In perl s/(.)\1+/$1/g; Does the trick, I assume if java has perl compatible regexps it should work too. Edit: Here is what it means s { (.) # match any charater ( and capture it ) \1 # if it is followed by itself + # One or more times }{$1}gx; # And replace the whole things by the first captured character (with g modifier to replace all occurences) Edit: As others have pointed out, the syntax in Java would become original.replaceAll("(.)\\1+", "$1"); remember to escape the \1 A: originalString.replaceAll( "(.)\\1+", "$1" ); A: String a = "aaabbb"; String b = a.replaceAll("(.)\\1+", "$1"); System.out.println("'" + a + "' -> '" + b + "'"); A: match pattern (in Java/languages where \ must be escaped): (.)\\1+ or (in languages where you can use strings which don't treat \ as escape character) (.)\1+ replacement: $1 A: in TextEdit (assuming posix expressions) find: [a]+[b]+ replace with: ab A: In Perl: tr/a-z0-9//s; Example: $ perl -E'@a = (aaabbb, 14442345); for(@a) { tr/a-z0-9//s; say }' ab 142345 If Java has no tr analog then: s/(.)\1+/$1/sg; #NOTE: `s` modifier. It takes into account consecutive newlines. Example: $ perl -E'@a = (aaabbb, 14442345); for(@a) { s/(.)\1+/$1/sg; say }' ab 142345 A: Sugared with a Java 7 : Named Groups static String cleanDuplicates(@NonNull final String val) { assert val != null; return val.replaceAll("(?<dup>.)\\k<dup>+","${dup}"); }
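A small, self-contained version of the accepted Java answer; precompiling the pattern is worthwhile if the replacement runs in a loop.

    import java.util.regex.Pattern;

    public class Squeeze {

        // (.)\1+  --  any character followed by one or more copies of itself.
        private static final Pattern RUNS = Pattern.compile("(.)\\1+");

        /** Collapses every run of identical characters down to a single character. */
        static String squeeze(String input) {
            return RUNS.matcher(input).replaceAll("$1");
        }

        public static void main(String[] args) {
            System.out.println(squeeze("aaabbb"));   // ab
            System.out.println(squeeze("14442345")); // 142345
        }
    }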
{ "language": "en", "url": "https://stackoverflow.com/questions/106067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: How to create an AxHost solely in code [C#] I'm using a COM wrapper to interact with Windows Media Player. It uses an AxHost to somehow wrap the player; for me it's all just magic under the hood^^ The AxHost.AttachInterfaces looks like this: protected override void AttachInterfaces() { try { //Get the IOleObject for Windows Media Player. IOleObject oleObject = this.GetOcx() as IOleObject; //Set the Client Site for the WMP control. oleObject.SetClientSite(this as IOleClientSite); Player = this.GetOcx() as WMPLib.WindowsMediaPlayer; ... Everything is working fine as long as I host this AxHost in a Windows Forms control. But I can't hook up the events in a constructor. This, for example, doesn't work: public WMPMediaRating() { var remote = new WMPRemote.RemotedWindowsMediaPlayer(); _WMP = remote.Player; _WMP.MediaChange += new _WMPOCXEvents_MediaChangeEventHandler(_WMP_MediaChange); } remote.Player is always null and the program crashes with a NullReferenceException. The code in AttachInterfaces() is somehow only executed after the Form has been drawn, or after everything else is done. I tried calling AttachInterfaces() by hand, but that didn't work either because GetOcx() returns nothing. So how can I instantiate my AxHost-inherited control without Windows Forms, to use it for example in a console application? A: FYI: nobody stops you from using a hidden window in your console application. You'll not be able to host the media player in a windowless application - it requires hosting. If you want to play some music you can use the Media Graphs to create a graph that renders (plays) your music file - it'll not require any extra hosting.
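Following the hidden-window suggestion, one approach people try is to park the AxHost on a Form that is never shown, run the designer-style Begin/EndInit sequence, and force handle creation so the OCX (and therefore AttachInterfaces) actually runs before Player is read. This is an untested sketch -- only RemotedWindowsMediaPlayer and Player come from the question, everything else is assumption.

    using System;
    using System.ComponentModel;
    using System.Windows.Forms;

    static class HiddenWmpHost
    {
        public static WMPLib.WindowsMediaPlayer CreatePlayer()
        {
            // A form that exists only to give the ActiveX control a parent window.
            var hiddenForm = new Form { ShowInTaskbar = false, Opacity = 0 };

            var remote = new WMPRemote.RemotedWindowsMediaPlayer();

            // The same Begin/EndInit + Controls.Add sequence the WinForms designer emits;
            // without it the OCX may never be instantiated and Player stays null.
            ((ISupportInitialize)remote).BeginInit();
            hiddenForm.Controls.Add(remote);
            ((ISupportInitialize)remote).EndInit();

            // Touching Handle forces handle creation, which is what normally triggers
            // AttachInterfaces() once the form is shown.
            IntPtr force = remote.Handle;

            return remote.Player;
        }
    }

The console application would also need a message loop (Application.Run or similar) for the player's events to be delivered.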
{ "language": "en", "url": "https://stackoverflow.com/questions/106081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: In MATLAB, can a class method act as a uicontrol callback without being public? In MATLAB 2008a, is there a way to allow a class method to act as a uicontrol callback function without having to make the method public? Conceptually, the method should not be public because it should never be called by a user of the class. It should only be called as a result of a UI event triggering a callback. However, if I set the method's access to private or protected, the callback doesn't work. My class is derived from hgsetget and is defined using the 2008a classdef syntax. The uicontrol code looks something like: methods (Access = public) function this = MyClass(args) this.someClassProperty = uicontrol(property1, value1, ... , 'Callback', ... {@(src, event)myCallbackMethod(this, src, event)}); % the rest of the class constructor code end end The callback code looks like: methods (Access = private) % This doesn't work because it's private % It works just fine if I make it public instead, but that's wrong conceptually. function myCallbackMethod(this, src, event) % do something end end A: Storing the function handle of the callback as a private property seems to workaround the problem. Try this: classdef MyClass properties handle; end properties (Access=private) callback; end methods function this = MyClass(args) this.callback = @myCallbackMethod; this.handle = uicontrol('Callback', ... {@(src, event)myCallbackMethod(this, src, event)}); end end methods (Access = private) function myCallbackMethod(this, src, event) disp('Hello world!'); end end end
{ "language": "en", "url": "https://stackoverflow.com/questions/106086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: What's the Meaning of and Fix for the Error 'The Controls Collection cannot...' Using ASP.NET 2.0, I have a web app where I am trying to use JavaScript to make one tab in a tab-container the active tab. The recommendations have been based on: var mX=document.getElementById('<%= tc1.ClientID%>') $find('<%= tc1.ClientID%>').set_activeTabIndex(1); Which both produce the error: The Controls collection cannot be modified because the control contains code blocks (i.e. <% ... %>). I've tried moving the code out of the head tag and into the body tag; same error. I've also tried the alternative <%# tc1.ClientID%>, as in: var mX = document.getElementById('<%# tc1.ClientID %>') mX.ActiveTabIndex="2"; Generates a null error - code above is rendered in the html as: var mX = document.getElementById('') mX.ActiveTabIndex="2"; Can anyone explain in plain(er) language what this means and what the solution is? A: This is a fix to this obscure error if you're just beginning to use ASP.NET MVC. I leave this note here in case another person just like me is having this problem and I really want to save that guy from wasting hours of his time due to this lame error: CHANGE THE LOCATION OF THE <SCRIPT> tags!!! Most likely you wanted to put some javascript on the Site.Master page but when you did it, you did it below the tags when you included jQuery. No sir! Please include the <script> tag with the javascript code out of the <head> and into the <body> . Good luck! A: I've actually run into that before. Here's an explanation: http://west-wind.com/WebLog/posts/6148.aspx For example, if your markup looks like: <asp:Panel id="whatever" runat="server"> <script type="text/javascript"> var mX=document.getElementById('<%= tc1.ClientID%>'); //and so on... </script> </asp:Panel> And if you try to programatically add a control to that Panel it'll fail with the error you're getting. One solution is to put your Javascript somewhere else in the page. Another way (although a hack) is this: <asp:Panel id="whatever" runat="server"> <asp:PlaceHolder id="dontCare" runat="server"> <script type="text/javascript"> var mX=document.getElementById('<%= tc1.ClientID%>'); //and so on... </script> </asp:PlaceHolder> </asp:Panel> Now the <%= ... %> part is inside the PlaceHolder, not directly inside the Panel. Adding controls in your C# or VB code to the Panel should now work (although adding controls to the PlaceHolder would fail.) EDIT: Yeah, I tried using <%# ... %> instead too, but that's only for inside a DataBound control. For example, that would work if it was in the middle of a DataGrid and I called it's DataBind() method this PostBack. A: It looks and sounds like the code snippets are not themselves offensive, but some other code that was modifying the controls collection is now upset about them. Can you tell where in your program the error is actually occuring? By the way, the <%# %> is not appropriate here — it's only for data-bound controls. A: You can't have code blocks mixed with controls, instead store the value in javascript and assign it that way... something like this: <div runat="server" id="rawr"> <span id="myspan">HI</span> </div> </form> <script type="text/javascript"> var obj = '<%= DateTime.Now.ToShortDateString() %>'; var ele = document.getElementById("myspan"); ele.innerHTML = obj; </script> A: move the javascript from head.. and make sure that you added scriptmanager to your default.aspx while using ajaxextenders..
{ "language": "en", "url": "https://stackoverflow.com/questions/106095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Popup Window similar to Modal Window I need to open a popup window to cross-domain content. I should not allow the user either to access or close the parent window until I am done with the child window. The main issue with a modal window is that it stops any asynchronous process running on the main window. For example, timers and auto refresh won't work in the parent window. Is there a clean way to do this? Thanks in advance A: How about instead of popping up an actual window, you just open a pseudo-window... that is, a div with some borders, make it draggable if you want, and place a large semi-transparent div that covers the rest of the page and blocks it from being clicked on. Basically do something like how Lightbox works A: You could use a fake window built via JavaScript. Several widget libraries have support for this. For example, see ExtJS, which also supports modal windows but it might be overkill for your application. For jQuery, browse through the plugins, like this one A: I think Telerik has a control for this if you are working on ASP.NET. It uses a div in its implementation as @Davr suggested. Modal windows are a bad option anyhow as they are not supported on all browsers. A: In addition to what Davr and thoriann said, you will likely need to make an Ajax call to grab the content. Since Ajax calls via the browser enforce the same-domain policy, you will need to make an Ajax call to your OWN server, which in turn will need to make an xmlhttp server-to-server request to grab the content from the third-party server. A: I feel the above answers won't fit for the following reasons: JasonS's solution - the application is developed on J2EE technologies. The others' solutions - some of the URLs launched in the child window will communicate with the parent window through standard APIs. If I am using a div or other built-in plug-in windows, then those JavaScript APIs will fail. A: Check out the jQuery plugin "BlockUI". When BlockUI is called the parent window is not accessible. You can do what you want on the modal then call "UnblockUI" to close the popup and give the parent control again. Pete
{ "language": "en", "url": "https://stackoverflow.com/questions/106112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: GCC - "expected unqualified-id before ')' token" Please bear with me, I'm just learning C++. I'm trying to write my header file (for class) and I'm running into an odd error. cards.h:21: error: expected unqualified-id before ')' token cards.h:22: error: expected `)' before "str" cards.h:23: error: expected `)' before "r" What does "expected unqualified-id before ')' token" mean? And what am I doing wrong? Edit: Sorry, I didn't post the entire code. /* Card header file [Author] */ // NOTE: Lanugage Docs here http://www.cplusplus.com/doc/tutorial/ #define Card #define Hand #define AppError #include <string> using namespace std; // TODO: Docs here class Card { // line 17 public: enum Suit {Club, Diamond, Spade, Heart}; enum Rank {Two, Three, Four, Five, Six, Seven, Eight, Nine, Ten, Jack, Queen, King, Ace}; Card(); // line 22 Card(string str); Card(Rank r, Suit s); Edit: I'm just trying to compile the header file by itself using "g++ file.h". Edit: Closed question. My code is working now. Thanks everyone! Edit: Reopened question after reading Etiquette: Closing your posts A: (edited for updated question) Remove the #define statements, they're mangling the file. Were you trying to implement an include guard? That would be something like this: #ifndef CARD_H #define CARD_H class Card ... ... #endif old answer: It means that string is not defined in the current line. Try std::string. A: Just my two cents, but I guess you used the pre-compiled header #define Card #define Hand #define AppError as if you wanted to tell the compiler "Hey, the classes Card, Hand and AppError are defined elsewhere" (i.e. forward-declarations). Even if we ignore the fact macros are a pain for the exact reasons your code did not compile (as John Millikin put it, mangling your file), perhaps what you wanted to write was something like: class Card ; class Hand ; class AppError ; Which are forward-declarations of those classes. A: Your issue is your #define. You did #define Card, so now everywhere Card is seen as a token, it will be replaced. Usually a #define Token with no additional token, as in #define Token Replace will use the value 1. Remove the #define Card, it's making line 22 read: 1(); or ();, which is causing the complaint. A: Remove the #define Card.
{ "language": "en", "url": "https://stackoverflow.com/questions/106117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Tools to enable translation of A-Box ontology data Does anyone know of any tools capable of defining a declarative mapping from T-Box structures from one ontology to another, which when executed can effect translation of A-Box instance data from one ontology's form to another's? I have recently written such a tool to meet my needs, but I was wondering if I reinvented the wheel. A: There is no such tool that I know of. Generally, you simply copy the tbox and abox definitions from one ontology to another, and write a transform tool. I think this is the first ontology question I've seen on this site. I hope more people use the tag. A: I've used SPARQL CONSTRUCT queries where I query on one model and construct new statements with properties from a different namespace. I save these and then load them into the target model. I have found this to be flexible but there are many places where I'd like to do additional processing on literals. I have also wanted to use SWRL rules.
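To make the SPARQL CONSTRUCT idea above concrete, here is a minimal Java sketch using Apache Jena. Note that Jena itself, the file name, the namespaces, and the class/property names are my own assumptions for illustration, not something stated in the thread: each CONSTRUCT template rewrites one T-Box structure of the source ontology into the corresponding structure of the target ontology, producing translated A-Box statements.

import org.apache.jena.query.Query;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QueryFactory;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.riot.Lang;
import org.apache.jena.riot.RDFDataMgr;

public class AboxTranslator {
    public static void main(String[] args) {
        // Hypothetical input file holding instance data expressed in the source ontology.
        Model source = RDFDataMgr.loadModel("source-abox.ttl");

        // One declarative mapping rule: src:Human/src:hasName becomes dst:Person/dst:name.
        String mapping =
            "PREFIX src: <http://example.org/ontologyA#> " +
            "PREFIX dst: <http://example.org/ontologyB#> " +
            "CONSTRUCT { ?x a dst:Person ; dst:name ?n } " +
            "WHERE     { ?x a src:Human  ; src:hasName ?n }";

        Query query = QueryFactory.create(mapping);
        try (QueryExecution exec = QueryExecutionFactory.create(query, source)) {
            // execConstruct builds a new model containing only the translated statements.
            Model translated = exec.execConstruct();
            RDFDataMgr.write(System.out, translated, Lang.TURTLE);
        }
    }
}

In practice you would load the translated model into the target store, and handle literal transformations with SPARQL functions or post-processing, as the answer notes.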
{ "language": "en", "url": "https://stackoverflow.com/questions/106134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Where do you put your CSS Margins? When you want to add whitespace between HTML elements (using CSS), to which element do you attach it? I'm regularly in situations along these lines: <body> <h1>This is the heading</h1> <p>This is a paragraph</p> <h1>Here's another heading</h1> <div>This is a footer</div> </body> Now, say I wanted 1em of space between each of these elements, but none above the first h1 or below the last div. To which elements would I attach it? Obviously, there's no real technical difference between this: h1, p { margin-bottom: 1em; } ...and this... div { margin-top: 1em; } p { margin-top: 1em; margin-bottom: 1em } What I'm interested is secondary factors: * *Consistency *Applicability to all situations *Ease / Simplicity *Ease of making changes For example: in this particular scenario, I'd say that the first solution is better than the second, as it's simpler; you're only attaching a margin-bottom to two elements in a single property definition. However, I'm looking for a more general-purpose solution. Every time I do CSS work, I get the feeling that there's a good rule of thumb to apply... but I'm not sure what it is. Does anyone have a good argument? A: I tend to use a bottom margin on elements when I want them to have space before the next element, and then to use a ".last" class in the css to remove the margin from the last element. <body> <h1>This is the heading</h1> <p>This is a paragraph</p> <h1>Here's another heading</h1> <div class="last">This is a footer</div> </body> div { margin-bottom: 1em; } p { margin-bottom: 1em; } h1 { margin-bottom: 1em; } .last {margin-bottom: 0; } In your example though, this probably isn't that applicable, as a footer div would most likely have it's own class and specific styling. Still the ".last" approach I used works for me when I have several identical elements one after the other (paragraphs and what-not). Of course, I cherry-picked the technique from the "Elements" CSS framework. A: Using advanced CSS 2 selectors, another solution would be possible that does not rely on a class last to introduce layout info into the HTML. The solution uses the adjacent selectors. Unfortunately, MSIE 6 doesn't support it yet so reluctance to use it is understandable. h1 { margin: 0 0 1em; } div, p, p + h1, div + h1 { margin: 1em 0 0; } This way, the first h1 won't have a top margin while any h1 that immediately follows a paragraph or a box has a top margin. A: If you want some space around an element, give it a margin. That means, in this case, don't just give the <h1> a bottom margin, but give <p> a top margin. Remember, when two elements are vertically adjacent and they don't have a border or padding between them, their margins collapse. That means that only the larger of the two margins is considered - they don't add together. So this: h1, p { margin: 1em; } <h1>...</h1> <p>...</p> ...would result in a 1em space between the heading and the paragraph. A: This going to be driven partly by the specifics of what you're designing for, but there's a sort of rough heirarchy to these things in, say, a typical blog index: * *You're going to have one footer on a page. *You're going to have one header per entry. *You're going to have n paragraphs per entry. Establish whitespace for your paragraphs knowing that they're going to sometimes occur in sequence -- you need to worry about how they look as a series. From there, adjust your headers to deal with boundaries between entries. 
Finally, adjust your footer/body spacing to make sure the bottom of the page looks decent. It's a thumbnail sketch. How you ultimately end up assigning your padding is entirely up to you, but if you approach it from an bottom-up perspective you'll likely see less surprises as you tweak first the most common/plentiful elements and then later the less common ones. A: The point that Jim is making is the key. Margins collapse between elements, they are not additive. If what you want is to ensure that there is a 1em margin above and below paragraphs and headings and that there is a 1em margin below the header and above the footer, then your css should reflect that. Given this markup (I've added a header and placed ids on the header/footer): <body> <div id="header"></div> <h1>This is the heading</h1> <p>This is a paragraph</p> <h1>Here's another heading</h1> <div id="footer">This is a footer</div> </body> You should use this css: #header { margin-bottom: 1em; } #footer { margin-top: 1em; } h1, p { margin: 1em 0; } Now the order of your elements doesn't matter. If you use two consecutive headings, or start the page with a paragraph instead of a heading it will still render the way that you indended. A: I'm a relative newbie, but my own solution to the thing I think both you and I came up against (changing margins for one tag may sort out spacing in one part of a site, only to disturb a previously good spacing elsewhere?) is now to allocate an equal margin top and bottom for all tags. You might well want more space above and below an H1 than for an H3 or a p, but in general I think pages look good with equal spaces above and below any given piece of text, so this solution works well for me and meets your 'simple rule of thumb' spec too, hopefully! A: This is how it should be done: body > * { margin-bottom: 1em; } body > *:last-child { margin-bottom: 0; } Now you don't have to worry about what element is first and what element is last, and you don't have to always place a special class on your last element. The only time this won't "work" is when the last child is one that is not rendered. In this situation you might consider applying margin-bottom:0; using a class on your last visible child. A: I tend to agree with you that the first option is better. It's generally what I like to do. However, there is an argument to be made that you should specify a margin for each one of those elements and zero it out if you don't want to apply it since browsers all handle margins differently. The <p> (and the <h1> tag too I think) will usually have default margins given by the browser if none are specified. A: I've just used first-child and last child. So for example in plain CSS: h1{ margin-top:10px; margin-bottom:10px; } h1:first-child{ margin-top:0px; } p{ margin:10px; } p:first-child{ margin-top:0px; } p:last-child{ margin-bottom:0px; } This is a basic example, but you can apply this to more elements and structure is nicer if using SASS, LESS etc :)
{ "language": "en", "url": "https://stackoverflow.com/questions/106137", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: ClickOnce Online-Only Application as a TS RemoteApp I've attempted just about everything to get our ClickOnce VB.NET app to run under Terminal Services as a RemoteApp. I have a batch file that runs the .application file for the app. This works fine via RDP desktop session on the terminal server. As a TS RemoteApp, however, well... not so much. I get a quick flash of command prompt (the batch file) on the client system and then... nothing... Same goes for having it point to the .application file directly (without using a batch file) or even copying the publication locally and having it point to that. I found a technet.microsoft.com discussion about a similar issue, but there's no resolution to it listed. For anyone who has run into this before and got it working, what did you have to do? We currently use RemoteApp's for everything else on that server, so I'm hoping to stick with that if possible. The current workaround is to build and run an MSI-based installer for the app on our terminal server whenever we publish via OneClick out to the network, but this can be quite a pain at times and is easy to forget to do. Since the app works fine via Terminal Services when run in full desktop mode but not during RemoteApp, I don't think it's anything specific to Terminal Server permissions so much as ClickOnce requiring something that isn't available when running as a RemoteApp. A: The Key to getting it to work is to use Windows Explorer "C:\windows\explorer.exe". This process is the base process when you login to a full session. If you setup the RemoteApp to use Windows Explorer and the command line argument of the path to the .application file for the ClickOnce application then it will work when launched as a remote application. Windows Explorer will flash for a second when it starts, but it will disappear then the ClickOnce application will launch. A: Why does it have to be a ClickOnce application? I would consider just deploying the exe file and assemblies. I know it only half a solution, but if the application does not change much, it might be a good solution. A: I believe your problem is related to the fact that ClickOnce needs to store it's data in a special user folder called the ClickOnce application cache. Apparently because of how Terminal Services sets up user folders ClickOnce can't access this in TerminalServices mode. See this link for more information. http://msdn.microsoft.com/en-us/library/267k390a(VS.80).aspx There may not be a way to do it :( A: Can you launch the .exe directly? It's buried under your profile in \AppData\Local\Apps\2.0[obfuscated folders], but you should be able to find it. That will skip the built-in update process, but if it can be launched that way you could then write code to do a manual update after the application starts. A: Faced the same problem this morning and got it resolved by copying the clickonce app's directory from the user settings folder to somewhere like c:\MyApp\ - I know its nasty and not very ideal.. but good enough for me! A: We recently ran across this issue and decided to post a bug report on this issue to the Visual Studio development team. Feel free to comment on the bug report. It has to be a bug in ClickOnce caused by some changes in Server 2008. 
https://connect.microsoft.com/VisualStudio/feedback/details/653362/net-clickonce-deployment-not-working-as-remoteapp-or-citrix-xenapp-on-server-2008-server-2008-r2 We also have a discussion on the MSDN forums covering this issue: http://social.msdn.microsoft.com/Forums/en-US/winformssetup/thread/7f41667d-287a-4157-be71-d408751358d9/#92a7e5d9-22b6-44ba-9346-ef87a3b85edc A: Try using RegMon and FileMon when starting the app - You may be able to track it down to a file and/or registry permission issue. A: Also maybe check the event logs to see if anything's getting logged when the process fails.
{ "language": "en", "url": "https://stackoverflow.com/questions/106164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: VB.NET - How to get the instance used by a With statement in the immediate window VB.NET has a very handy "with" statement, but it also lets you use it on an unnamed variable, like this: With New FancyClass() .Level = "SuperSpiffy" .Style = Slimming .Execute() End With Is there a way to get at the "hidden" instance, so I can view its properties in the Immediate window? I doubt I'll get it in the watch windows, so immediate is fine. If you try to access the instance the same way (say, when .Execute() throws an exception) from the Immediate window, you get an error: ? .Style 'With' contexts and statements are not valid in debug windows. Is there any trick that can be used to get this, or do I have to convert the code to another style? If With functioned more like a Using statement, (e.g. "With v = New FancyClass()") this wouldn't pose a problem. I know how With is working, what alternatives exist, what the compiler does, etc. I just want to know if this is possible. A: What's wrong with defining a variable on one line and using it in a with-statement on the next? I realise it keeps the variable alive longer but is that so appalling? Dim x = new SomethingOrOther() With x .DoSomething() End With x = Nothing ' for the memory conscious Two extra lines wont kill you =) Edit: If you're just looking for a yes/no, I'd have to say: No. A: As answered, the simple answer is "no". But isn't another way to do it: instead of declaring and then cleaning up the variable is to use the "Using". Using fc as new FancyClass() With fc .Level = "SuperSpiffy" .Style = Slimming .Execute() End With End Using Then you can use fc in the immediate window and don't have to remember to write a fc=nothing line. Just some more thoughts on it ;) A: I hope there really isn't a way to get at it, since the easy answer is "no", and I haven't found a way yet either. Either way, nothing said so far really has a rationale for being "no", just that no one has =) It's just one of those things you figure the vb debugger team would have put in, considering how classic "with" is =) Anyway, I know all about usings and Idisposable, I know how to fix the code, as some would call it, but I might not always want to. As for Using, I don't like implementing IDisposable on my classes just to gain a bit of sugar. What we really need is a "With var = New FancyClass()", but that might just be confusing! A: You're creating a variable either way - in the first case (your example) the compiler is creating an implicit variable that you aren't allowed to really get to, and the in the second case (another answer, by Oli) you'd be creating the variable explicitly. If you create it explicitly you can use it in the immediate window, and you can explicitly destroy it when you're through with it (I'm one of the memory conscious, I guess!), instead of leaving those clean up details to the magic processes. I don't think there is any way to get at an implicit variable in the immediate window. (and I don't trust the magic processes, either. I never use multiple-dot notation or implicit variables for this reason)
{ "language": "en", "url": "https://stackoverflow.com/questions/106175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Regular expression to match DNS hostname or IP Address? Does anyone have a regular expression handy that will match any legal DNS hostname or IP address? It's easy to write one that works 95% of the time, but I'm hoping to get something that's well tested to exactly match the latest RFC specs for DNS hostnames. A: The hostname regex of smink does not observe the limitation on the length of individual labels within a hostname. Each label within a valid hostname may be no more than 63 octets long. ValidHostnameRegex="^([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]{0,61}[a-zA-Z0-9])\ (\.([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]{0,61}[a-zA-Z0-9]))*$" Note that the backslash at the end of the first line (above) is Unix shell syntax for splitting the long line. It's not a part of the regular expression itself. Here's just the regular expression alone on a single line: ^([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]{0,61}[a-zA-Z0-9])(\.([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]{0,61}[a-zA-Z0-9]))*$ You should also check separately that the total length of the hostname must not exceed 255 characters. For more information, please consult RFC-952 and RFC-1123. A: I don't seem to be able to edit the top post, so I'll add my answer here. For hostname - easy answer, on egrep example here -- http: //www.linuxinsight.com/how_to_grep_for_ip_addresses_using_the_gnu_egrep_utility.html egrep '([[:digit:]]{1,3}\.){3}[[:digit:]]{1,3}' Though the case doesn't account for values like 0 in the fist octet, and values greater than 254 (ip addres) or 255 (netmask). Maybe an additional if statement would help. As for legal dns hostname, provided that you are checking for internet hostnames only (and not intranet), I wrote the following snipped, a mix of shell/php but it should be applicable as any regular expression. first go to ietf website, download and parse a list of legal level 1 domain names: tld=$(curl -s http://data.iana.org/TLD/tlds-alpha-by-domain.txt | sed 1d | cut -f1 -d'-' | tr '\n' '|' | sed 's/\(.*\)./\1/') echo "($tld)" That should give you a nice piece of re code that checks for legality of top domain name, like .com .org or .ca Then add first part of the expression according to guidelines found here -- http: //www.domainit.com/support/faq.mhtml?category=Domain_FAQ&question=9 (any alphanumeric combination and '-' symbol, dash should not be in the beginning or end of an octet. (([a-z0-9]+|([a-z0-9]+[-]+[a-z0-9]+))[.])+ Then put it all together (PHP preg_match example): $pattern = '/^(([a-z0-9]+|([a-z0-9]+[-]+[a-z0-9]+))[.])+(AC|AD|AE|AERO|AF|AG|AI|AL|AM|AN|AO|AQ|AR|ARPA|AS|ASIA|AT|AU|AW|AX|AZ|BA|BB|BD|BE|BF|BG|BH|BI|BIZ|BJ|BM|BN|BO|BR|BS|BT|BV|BW|BY|BZ|CA|CAT|CC|CD|CF|CG|CH|CI|CK|CL|CM|CN|CO|COM|COOP|CR|CU|CV|CX|CY|CZ|DE|DJ|DK|DM|DO|DZ|EC|EDU|EE|EG|ER|ES|ET|EU|FI|FJ|FK|FM|FO|FR|GA|GB|GD|GE|GF|GG|GH|GI|GL|GM|GN|GOV|GP|GQ|GR|GS|GT|GU|GW|GY|HK|HM|HN|HR|HT|HU|ID|IE|IL|IM|IN|INFO|INT|IO|IQ|IR|IS|IT|JE|JM|JO|JOBS|JP|KE|KG|KH|KI|KM|KN|KP|KR|KW|KY|KZ|LA|LB|LC|LI|LK|LR|LS|LT|LU|LV|LY|MA|MC|MD|ME|MG|MH|MIL|MK|ML|MM|MN|MO|MOBI|MP|MQ|MR|MS|MT|MU|MUSEUM|MV|MW|MX|MY|MZ|NA|NAME|NC|NE|NET|NF|NG|NI|NL|NO|NP|NR|NU|NZ|OM|ORG|PA|PE|PF|PG|PH|PK|PL|PM|PN|PR|PRO|PS|PT|PW|PY|QA|RE|RO|RS|RU|RW|SA|SB|SC|SD|SE|SG|SH|SI|SJ|SK|SL|SM|SN|SO|SR|ST|SU|SV|SY|SZ|TC|TD|TEL|TF|TG|TH|TJ|TK|TL|TM|TN|TO|TP|TR|TRAVEL|TT|TV|TW|TZ|UA|UG|UK|US|UY|UZ|VA|VC|VE|VG|VI|VN|VU|WF|WS|XN|XN|XN|XN|XN|XN|XN|XN|XN|XN|XN|YE|YT|YU|ZA|ZM|ZW)[.]?$/i'; if (preg_match, $pattern, $matching_string){ ... 
do stuff } You may also want to add an if statement to check that string that you checking is shorter than 256 characters -- http://www.ops.ietf.org/lists/namedroppers/namedroppers.2003/msg00964.html A: You can use the following regular expressions separately or by combining them in a joint OR expression. ValidIpAddressRegex = "^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$"; ValidHostnameRegex = "^(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z0-9]|[A-Za-z0-9][A-Za-z0-9\-]*[A-Za-z0-9])$"; ValidIpAddressRegex matches valid IP addresses and ValidHostnameRegex valid host names. Depending on the language you use \ could have to be escaped with \. ValidHostnameRegex is valid as per RFC 1123. Originally, RFC 952 specified that hostname segments could not start with a digit. http://en.wikipedia.org/wiki/Hostname The original specification of hostnames in RFC 952, mandated that labels could not start with a digit or with a hyphen, and must not end with a hyphen. However, a subsequent specification (RFC 1123) permitted hostname labels to start with digits. Valid952HostnameRegex = "^(([a-zA-Z]|[a-zA-Z][a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z]|[A-Za-z][A-Za-z0-9\-]*[A-Za-z0-9])$"; A: It's worth noting that there are libraries for most languages that do this for you, often built into the standard library. And those libraries are likely to get updated a lot more often than code that you copied off a Stack Overflow answer four years ago and forgot about. And of course they'll also generally parse the address into some usable form, rather than just giving you a match with a bunch of groups. For example, detecting and parsing IPv4 in (POSIX) C: #include <arpa/inet.h> #include <stdio.h> int main(int argc, char *argv[]) { for (int i=1; i!=argc; ++i) { struct in_addr addr = {0}; printf("%s: ", argv[i]); if (inet_pton(AF_INET, argv[i], &addr) != 1) printf("invalid\n"); else printf("%u\n", addr.s_addr); } return 0; } Obviously, such functions won't work if you're trying to, e.g., find all valid addresses in a chat message—but even there, it may be easier to use a simple but overzealous regex to find potential matches, and then use the library to parse them. For example, in Python: >>> import ipaddress >>> import re >>> msg = "My address is 192.168.0.42; 192.168.0.420 is not an address" >>> for maybeip in re.findall(r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}', msg): ... try: ... print(ipaddress.ip_address(maybeip)) ... except ValueError: ... pass A: To match a valid IP address use the following regex: (25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3} instead of: ([01]?[0-9][0-9]?|2[0-4][0-9]|25[0-5])(\.([01]?[0-9][0-9]?|2[0-4][0-9]|25[0-5])){3} Explanation Many regex engine match the first possibility in the OR sequence. For instance, try the following regex: 10.48.0.200 Test Test the difference between good vs bad A: def isValidHostname(hostname): if len(hostname) > 255: return False if hostname[-1:] == ".": hostname = hostname[:-1] # strip exactly one dot from the right, # if present allowed = re.compile("(?!-)[A-Z\d-]{1,63}(?<!-)$", re.IGNORECASE) return all(allowed.match(x) for x in hostname.split(".")) A: I think this is the best Ip validation regex. please check it once!!! 
^(([01]?[0-9]?[0-9]|2([0-4][0-9]|5[0-5]))\.){3}([01]?[0-9]?[0-9]|2([0-4][0-9]|5[0-5]))$ A: /^(?:[a-zA-Z0-9]+|[a-zA-Z0-9][-a-zA-Z0-9]+[a-zA-Z0-9])(?:\.[a-zA-Z0-9]+|[a-zA-Z0-9][-a-zA-Z0-9]+[a-zA-Z0-9])?$/ A: "^((\\d{1,2}|1\\d{2}|2[0-4]\\d|25[0-5])\.){3}(\\d{1,2}|1\\d{2}|2[0-4]\\d|25[0-5])$" A: This works for valid IP addresses: regex = '^([0-9]|[1-9][0-9]|[1][0-9][0-9]|[2][0-5][0-5])[.]([0-9]|[1-9][0-9]|[1][0-9][0-9]|[2][0-5][0-5])[.]([0-9]|[1-9][0-9]|[1][0-9][0-9]|[2][0-5][0-5])[.]([0-9]|[1-9][0-9]|[1][0-9][0-9]|[2][0-5][0-5])$' A: >>> my_hostname = "testhostn.ame" >>> print bool(re.match("^(([a-zA-Z]|[a-zA-Z][a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z]|[A-Za-z][A-Za-z0-9\-]*[A-Za-z0-9])$", my_hostname)) True >>> my_hostname = "testhostn....ame" >>> print bool(re.match("^(([a-zA-Z]|[a-zA-Z][a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z]|[A-Za-z][A-Za-z0-9\-]*[A-Za-z0-9])$", my_hostname)) False >>> my_hostname = "testhostn.A.ame" >>> print bool(re.match("^(([a-zA-Z]|[a-zA-Z][a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z]|[A-Za-z][A-Za-z0-9\-]*[A-Za-z0-9])$", my_hostname)) True A: The new Network framework has failable initializers for struct IPv4Address and struct IPv6Address which handle the IP address portion very easily. Doing this in IPv6 with a regex is tough with all the shortening rules. Unfortunately I don't have an elegant answer for hostname. Note that Network framework is recent, so it may force you to compile for recent OS versions. import Network let tests = ["192.168.4.4","fkjhwojfw","192.168.4.4.4","2620:3","2620::33"] for test in tests { if let _ = IPv4Address(test) { debugPrint("\(test) is valid ipv4 address") } else if let _ = IPv6Address(test) { debugPrint("\(test) is valid ipv6 address") } else { debugPrint("\(test) is not a valid IP address") } } output: "192.168.4.4 is valid ipv4 address" "fkjhwojfw is not a valid IP address" "192.168.4.4.4 is not a valid IP address" "2620:3 is not a valid IP address" "2620::33 is valid ipv6 address" A: Here is a regex that I used in Ant to obtain a proxy host IP or hostname out of ANT_OPTS. This was used to obtain the proxy IP so that I could run an Ant "isreachable" test before configuring a proxy for a forked JVM. ^.*-Dhttp\.proxyHost=(\w{1,}\.\w{1,}\.\w{1,}\.*\w{0,})\s.*$ A: I found this works pretty well for IP addresses. It validates like the top answer but it also makes sure the ip is isolated so no text or more numbers/decimals are after or before the ip. (?<!\S)(?:(?:\d|[1-9]\d|1\d\d|2[0-4]\d|25[0-5])\b|.\b){7}(?!\S) A: AddressRegex = "^(ftp|http|https):\/\/([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}:[0-9]{1,5})$"; HostnameRegex = /^(ftp|http|https):\/\/([a-z0-9]+\.)?[a-z0-9][a-z0-9-]*((\.[a-z]{2,6})|(\.[a-z]{2,6})(\.[a-z]{2,6}))$/i this re are used only for for this type validation work only if http://www.kk.com http://www.kk.co.in not works for http://www.kk.com/ http://www.kk.co.in.kk http://www.kk.com/dfas http://www.kk.co.in/ A: try this: ((2[0-4]\d|25[0-5]|[01]?\d\d?)\.){3}(2[0-4]\d|25[0-5]|[01]?\d\d?) it works in my case. A: Regarding IP addresses, it appears that there is some debate on whether to include leading zeros. It was once the common practice and is generally accepted, so I would argue that they should be flagged as valid regardless of the current preference. There is also some ambiguity over whether text before and after the string should be validated and, again, I think it should. 1.2.3.4 is a valid IP but 1.2.3.4.5 is not and neither the 1.2.3.4 portion nor the 2.3.4.5 portion should result in a match. 
Some of the concerns can be handled with this expression: grep -E '(^|[^[:alnum:]+)(([0-1]?[0-9]{1,2}|2[0-4][0-9]|25[0-5])\.){3}([0-1]?[0-9]{1,2}|2[0-4][0-9]|25[0-5])([^[:alnum:]]|$)' The unfortunate part here is the fact that the regex portion that validates an octet is repeated as is true in many offered solutions. Although this is better than for instances of the pattern, the repetition can be eliminated entirely if subroutines are supported in the regex being used. The next example enables those functions with the -P switch of grep and also takes advantage of lookahead and lookbehind functionality. (The function name I selected is 'o' for octet. I could have used 'octet' as the name but wanted to be terse.) grep -P '(?<![\d\w\.])(?<o>([0-1]?[0-9]{1,2}|2[0-4][0-9]|25[0-5]))(\.\g<o>){3}(?![\d\w\.])' The handling of the dot might actually create a false negatives if IP addresses are in a file with text in the form of sentences since the a period could follow without it being part of the dotted notation. A variant of the above would fix that: grep -P '(?<![\d\w\.])(?<x>([0-1]?[0-9]{1,2}|2[0-4][0-9]|25[0-5]))(\.\g<x>){3}(?!([\d\w]|\.\d))' A: There's a further nuance here that's missing. It's true that a HOSTNAME should match, basically, what's been given above. What's missing is that a REFERENCE TO a hostname can be the same, plus an optional period on the end. For example, with a trailing period, ping foo.bar.svc.cluster.local. will ping that hostname only, and not attempt any DNS search options in resolv.conf. tldr - If you provide an input box to receive a hostname, what's entered does not actually need to be a valid hostname. A: Checking for host names like... mywebsite.co.in, thangaraj.name, 18thangaraj.in, thangaraj106.in etc., [a-z\d+].*?\\.\w{2,4}$ A: I thought about this simple regex matching pattern for IP address matching \d+[.]\d+[.]\d+[.]\d+ A: how about this? ([0-9]{1,3}\.){3}[0-9]{1,3} A: on php: filter_var(gethostbyname($dns), FILTER_VALIDATE_IP) == true ? 'ip' : 'not ip'
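If you happen to be doing the validation in Java, here is a small sketch that wires together the expressions quoted earlier in this thread (the patterns are copied from the answers above and escaped for Java string literals; the 255-character total-length test is the separate check one answer mentions, and the class name is only illustrative):

import java.util.regex.Pattern;

public class HostOrIpCheck {
    // IPv4 pattern from the answers above.
    private static final Pattern IPV4 = Pattern.compile(
        "^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}"
        + "([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$");

    // Hostname pattern with the 63-octet-per-label limit, also from above.
    private static final Pattern HOSTNAME = Pattern.compile(
        "^([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\\-]{0,61}[a-zA-Z0-9])"
        + "(\\.([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\\-]{0,61}[a-zA-Z0-9]))*$");

    static boolean isIpOrHostname(String s) {
        // The total length must not exceed 255 characters; checked separately.
        if (s == null || s.isEmpty() || s.length() > 255) {
            return false;
        }
        return IPV4.matcher(s).matches() || HOSTNAME.matcher(s).matches();
    }

    public static void main(String[] args) {
        System.out.println(isIpOrHostname("192.168.0.1"));   // true
        System.out.println(isIpOrHostname("example.com"));   // true
        System.out.println(isIpOrHostname("-bad-.example"));  // false (label starts with '-')
    }
}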
{ "language": "en", "url": "https://stackoverflow.com/questions/106179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "400" }
Q: How can I run an OpenGL application installed on a Linux machine from my Windows machine? In the spirit of being helpful, this is a problem I had and solved, so I will answer the question here. Problem I have: An application that has to be installed on Red Hat or SuSE enterprise. It has huge system requirements and requires OpenGL. It is part of a suite of tools that need to operate together on one machine. This application is used for a time-intensive task in terms of man hours. I don't want to sit in the server room working on this application. So, the question came up... how do I run this application from a remote Windows machine? I'll outline my solution. Feel free to comment on alternatives. This solution should work for simpler environments as well. My case is somewhat extreme. A: If you want the OpenGL rendering to be performed on your local machine, using a Windows X server like Xming is a good solution. However, if you want rendering to be done on the remote end with just images sent to the local machine, you want a specialized VNC system that can handle remote OpenGL rendering, like VirtualGL. A: You could also use VNC (like a cross-platform remote desktop). X is more efficient since it only sends draw commands rather than pixels, but if you are using OpenGL it is likely that most of the data is a rendered image anyway. Another big advantage of VNC is that you can start the program locally on the server and then connect to it with VNC, drop the connection, reconnect from another machine, etc. without disturbing the main running program. A: Solution I installed two pieces of software: PuTTY XMing-mesa The mesa part is important. PuTTY configuration Connection->Seconds Between Keepalives: 30 Connection->Enable TCP Keepalives: Yes Connection->SSH->X11->Enable X11 forwarding: Yes Connection->SSH->X11->X display location: localhost:0:0 Launching Run Xming, which will simply start a process and put an icon in your system tray. Launch PuTTY, pointing to your Linux box, with the above configuration. Run the program. Hopefully, success! A: For OpenGL, running an X server is definitely a better solution. Just make sure the application is developed to be networked. It should NOT use immediate mode for rendering and textures should be RARELY transferred. Why is an X server a better solution in this case (as opposed to VNC)? Because you get acceleration on the workstation, while the VNC'ed solution is usually not even accelerated on the mainframe. So as long as data is buffered on the X server (using vertex arrays, vertex buffer objects, texture objects, etc.) you should get much higher speed than using VNC, especially with complex scenes, since VNC has to analyze, transfer and decode them as pixels.
But if your application needs version 1.4, for example Qt5, the X server from Cygwin works for free; to run it, use these commands: [On the server] sudo vi /etc/ssh/sshd_config Add: X11Forwarding yes X11DisplayOffset 10 X11UseLocalHost no AllowTcpForwarding yes TCPKeepAlive yes ClientAliveInterval 30 ClientAliveCountMax 10000 sudo vi ~/.bashrc Add: export DISPLAY=ip_from_remote:0 Now restart the ssh server [On the client side] Install Cygwin64 (with the X package); after that run this command: d:\cygwin64\bin\run.exe --quote /usr/bin/bash.exe -l -c "cd; /usr/bin/xinit /etc/X11/xinit/startxwinrc -- /usr/bin/XWin :0 -ac -multiwindow -listen tcp" Now execute the ssh client: d:\cygwin64\bin\mintty.exe -i /Cygwin-Terminal.ico -e /usr/bin/ssh -Y user_name@ip_from_server
{ "language": "en", "url": "https://stackoverflow.com/questions/106201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Fastest way to remove non-numeric characters from a VARCHAR in SQL Server I'm writing an import utility that is using phone numbers as a unique key within the import. I need to check that the phone number does not already exist in my DB. The problem is that phone numbers in the DB could have things like dashes and parenthesis and possibly other things. I wrote a function to remove these things, the problem is that it is slow and with thousands of records in my DB and thousands of records to import at once, this process can be unacceptably slow. I've already made the phone number column an index. I tried using the script from this post: T-SQL trim &nbsp (and other non-alphanumeric characters) But that didn't speed it up any. Is there a faster way to remove non-numeric characters? Something that can perform well when 10,000 to 100,000 records have to be compared. Whatever is done needs to perform fast. Update Given what people responded with, I think I'm going to have to clean the fields before I run the import utility. To answer the question of what I'm writing the import utility in, it is a C# app. I'm comparing BIGINT to BIGINT now, with no need to alter DB data and I'm still taking a performance hit with a very small set of data (about 2000 records). Could comparing BIGINT to BIGINT be slowing things down? I've optimized the code side of my app as much as I can (removed regexes, removed unneccessary DB calls). Although I can't isolate SQL as the source of the problem anymore, I still feel like it is. A: Simple function: CREATE FUNCTION [dbo].[RemoveAlphaCharacters](@InputString VARCHAR(1000)) RETURNS VARCHAR(1000) AS BEGIN WHILE PATINDEX('%[^0-9]%',@InputString)>0 SET @InputString = STUFF(@InputString,PATINDEX('%[^0-9]%',@InputString),1,'') RETURN @InputString END GO A: create function dbo.RemoveNonNumericChar(@str varchar(500)) returns varchar(500) begin declare @startingIndex int set @startingIndex=0 while 1=1 begin set @startingIndex= patindex('%[^0-9]%',@str) if @startingIndex <> 0 begin set @str = replace(@str,substring(@str,@startingIndex,1),'') end else break; end return @str end go select dbo.RemoveNonNumericChar('aisdfhoiqwei352345234@#$%^$@345345%^@#$^') A: replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(string,'a',''),'b',''),'c',''),'d',''),'e',''),'f',''),'g',''),'h',''),'i',''),'j',''),'k',''),'l',''),'m',''),'n',''),'o',''),'p',''),'q',''),'r',''),'s',''),'t',''),'u',''),'v',''),'w',''),'x',''),'y',''),'z',''),'A',''),'B',''),'C',''),'D',''),'E',''),'F',''),'G',''),'H',''),'I',''),'J',''),'K',''),'L',''),'M',''),'N',''),'O',''),'P',''),'Q',''),'R',''),'S',''),'T',''),'U',''),'V',''),'W',''),'X',''),'Y',''),'Z','')*1 AS string, :) A: In case you didn't want to create a function, or you needed just a single inline call in T-SQL, you could try: set @Phone = REPLACE(REPLACE(REPLACE(REPLACE(@Phone,'(',''),' ',''),'-',''),')','') Of course this is specific to removing phone number formatting, not a generic remove all special characters from string function. A: From SQL Server 2017 the native TRANSLATE function is available. 
If you have a known list of all characters to remove then you can simply use the following (to first convert all bad characters to a single known bad character and then to strip that specific character out with a REPLACE) DECLARE @BadCharacters VARCHAR(256) = 'abcdefghijklmnopqrstuvwxyz'; SELECT REPLACE( TRANSLATE(YourColumn, @BadCharacters, REPLICATE(LEFT(@BadCharacters,1),LEN(@BadCharacters))), LEFT(@BadCharacters,1), '') FROM @YourTable If the list of possible "bad" characters is too extensive to enumerate all in advance then you can use a double TRANSLATE DECLARE @CharactersToKeep VARCHAR(30) = '0123456789', @ExampleBadCharacter CHAR(1) = CHAR(26); SELECT REPLACE(TRANSLATE(YourColumn, bad_chars, REPLICATE(@ExampleBadCharacter, LEN(bad_chars + 'X') - 1)), @ExampleBadCharacter, '') FROM @YourTable CROSS APPLY (SELECT REPLACE( TRANSLATE(YourColumn, @CharactersToKeep, REPLICATE(LEFT(@CharactersToKeep, 1), LEN(@CharactersToKeep))), LEFT(@CharactersToKeep, 1), '')) ca(bad_chars) A: I may misunderstand, but you've got two sets of data to remove the strings from one for current data in the database and then a new set whenever you import. For updating the existing records, I would just use SQL, that only has to happen once. However, SQL isn't optimized for this sort of operation, since you said you are writing an import utility, I would do those updates in the context of the import utility itself, not in SQL. This would be much better performance wise. What are you writing the utility in? Also, I may be completely misunderstanding the process, so I apologize if off-base. Edit: For the initial update, if you are using SQL Server 2005, you could try a CLR function. Here's a quick one using regex. Not sure how the performance would compare, I've never used this myself except for a quick test right now. using System; using System.Data; using System.Text.RegularExpressions; using System.Data.SqlClient; using System.Data.SqlTypes; using Microsoft.SqlServer.Server; public partial class UserDefinedFunctions { [Microsoft.SqlServer.Server.SqlFunction] public static SqlString StripNonNumeric(SqlString input) { Regex regEx = new Regex(@"\D"); return regEx.Replace(input.Value, ""); } }; After this is deployed, to update you could just use: UPDATE table SET phoneNumber = dbo.StripNonNumeric(phoneNumber) A: I saw this solution with T-SQL code and PATINDEX. I like it :-) CREATE Function [fnRemoveNonNumericCharacters](@strText VARCHAR(1000)) RETURNS VARCHAR(1000) AS BEGIN WHILE PATINDEX('%[^0-9]%', @strText) > 0 BEGIN SET @strText = STUFF(@strText, PATINDEX('%[^0-9]%', @strText), 1, '') END RETURN @strText END A: can you remove them in a nightly process, storing them in a separate field, then do an update on changed records right before you run the process? Or on the insert/update, store the "numeric" format, to reference later. A trigger would be an easy way to do it. A: Working with varchars is fundamentally slow and inefficient compared to working with numerics, for obvious reasons. The functions you link to in the original post will indeed be quite slow, as they loop through each character in the string to determine whether or not it's a number. Do that for thousands of records and the process is bound to be slow. This is the perfect job for Regular Expressions, but they're not natively supported in SQL Server. 
You can add support using a CLR function, but it's hard to say how slow this will be without trying it I would definitely expect it to be significantly faster than looping through each character of each phone number, however! Once you get the phone numbers formatted in your database so that they're only numbers, you could switch to a numeric type in SQL which would yield lightning-fast comparisons against other numeric types. You might find that, depending on how fast your new data is coming in, doing the trimming and conversion to numeric on the database side is plenty fast enough once what you're comparing to is properly formatted, but if possible, you would be better off writing an import utility in a .NET language that would take care of these formatting issues before hitting the database. Either way though, you're going to have a big problem regarding optional formatting. Even if your numbers are guaranteed to be only North American in origin, some people will put the 1 in front of a fully area-code qualified phone number and others will not, which will cause the potential for multiple entries of the same phone number. Furthermore, depending on what your data represents, some people will be using their home phone number which might have several people living there, so a unique constraint on it would only allow one database member per household. Some would use their work number and have the same problem, and some would or wouldn't include the extension which would cause artificial uniqueness potential again. All of that may or may not impact you, depending on your particular data and usages, but it's important to keep in mind! A: I would try Scott's CLR function first but add a WHERE clause to reduce the number of records updated. UPDATE table SET phoneNumber = dbo.StripNonNumeric(phoneNumber) WHERE phonenumber like '%[^0-9]%' If you know that the great majority of your records have non-numeric characters it might not help though. A: I know it is late to the game, but here is a function that I created for T-SQL that quickly removes non-numeric characters. Of note, I have a schema "String" that I put utility functions for strings into... CREATE FUNCTION String.ComparablePhone( @string nvarchar(32) ) RETURNS bigint AS BEGIN DECLARE @out bigint; -- 1. table of unique characters to be kept DECLARE @keepers table ( chr nchar(1) not null primary key ); INSERT INTO @keepers ( chr ) VALUES (N'0'),(N'1'),(N'2'),(N'3'),(N'4'),(N'5'),(N'6'),(N'7'),(N'8'),(N'9'); -- 2. Identify the characters in the string to remove WITH found ( id, position ) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (n1+n10) DESC), -- since we are using stuff, for the position to continue to be accurate, start from the greatest position and work towards the smallest (n1+n10) FROM (SELECT 0 AS n1 UNION SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9) AS d1, (SELECT 0 AS n10 UNION SELECT 10 UNION SELECT 20 UNION SELECT 30) AS d10 WHERE (n1+n10) BETWEEN 1 AND len(@string) AND substring(@string, (n1+n10), 1) NOT IN (SELECT chr FROM @keepers) ) -- 3. Use stuff to snuff out the identified characters SELECT @string = stuff( @string, position, 1, '' ) FROM found ORDER BY id ASC; -- important to process the removals in order, see ROW_NUMBER() above -- 4. 
Try and convert the results to a bigint IF len(@string) = 0 RETURN NULL; -- an empty string converts to 0 RETURN convert(bigint,@string); END Then to use it to compare for inserting, something like this; INSERT INTO Contacts ( phone, first_name, last_name ) SELECT i.phone, i.first_name, i.last_name FROM Imported AS i LEFT JOIN Contacts AS c ON String.ComparablePhone(c.phone) = String.ComparablePhone(i.phone) WHERE c.phone IS NULL -- Exclude those that already exist A: I'd use an Inline Function from performance perspective, see below: Note that symbols like '+','-' etc will not be removed CREATE FUNCTION [dbo].[UDF_RemoveNumericStringsFromString] ( @str varchar(100) ) RETURNS TABLE AS RETURN WITH Tally (n) as ( -- 100 rows SELECT TOP (Len(@Str)) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM (VALUES (0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) a(n) CROSS JOIN (VALUES(0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) b(n) ) SELECT OutStr = STUFF( (SELECT SUBSTRING(@Str, n,1) st FROM Tally WHERE ISNUMERIC(SUBSTRING(@Str, n,1)) = 1 FOR XML PATH(''),type).value('.', 'varchar(100)'),1,0,'') GO /*Use it*/ SELECT OutStr FROM dbo.UDF_RemoveNumericStringsFromString('fjkfhk759734977fwe9794t23') /*Result set 759734977979423 */ You can define it with more than 100 characters... A: "Although I can't isolate SQL as the source of the problem anymore, I still feel like it is." Fire up SQL Profiler and take a look. Take the resulting queries and check their execution plans to make sure that index is being used. A: Thousands of records against thousands of records is not normally a problem. I've used SSIS to import millions of records with de-duping like this. I would clean up the database to remove the non-numeric characters in the first place and keep them out. A: Looking for a super simple solution: SUBSTRING([Phone], CHARINDEX('(', [Phone], 1)+1, 3) + SUBSTRING([Phone], CHARINDEX(')', [Phone], 1)+1, 3) + SUBSTRING([Phone], CHARINDEX('-', [Phone], 1)+1, 4) AS Phone A: I would recommend enforcing a strict format for phone numbers in the database. I use the following format. (Assuming US phone numbers) Database: 5555555555x555 Display: (555) 555-5555 ext 555 Input: 10 digits or more digits embedded in any string. (Regex replacing removes all non-numeric characters)
{ "language": "en", "url": "https://stackoverflow.com/questions/106206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "70" }
Q: What does OSGi solve? I've read on Wikipedia and other sites about OSGi, but I don't really see the big picture. It says that it's a component-based platform, and that you can reload modules at runtime. Also the "practical example" given everywhere is the Eclipse Plugin Framework. My questions are: * *What is the clear and simple definition of OSGi? *What common problems does it solve? By "common problems" I mean problems we face everyday, like "What can OSGi do for making our jobs more efficient/fun/simple?" A: I've found the following benefits from OSGi: * *Each plugin is a versioned artifact that has its own classloader. *Each plugin depends on both specific jars that it contains and also other specific versioned plug-ins. *Because of the versioning and isolated classloaders, different versions of the same artifact can be loaded at the same time. If one component of your application relies on one version of a plug-in and another depends on another version, they both can be loaded at the same time. With this, you can structure your application as a set of versioned plugin artifacts that are loaded on demand. Each plugin is a standalone component. Just as Maven helps you structure your build so it is repeatable and defined by a set of specific versions of artifacts it is created by, OSGi helps you do this at runtime. A: OSGi makes your code throw NoClassDefFoundError and ClassNotFoundException for no apparent reason (most probably because you forgot to export a package in OSGi configuration file); since it has ClassLoaders it can make your class com.example.Foo fail to be cast to com.example.Foo since it's actually two different classes loaded by two different classloaders. It can make your Eclipse boot into an OSGi console after installing an Eclipse plugin. For me, OSGi only added complexity (because it added one more mental model for me to grok), added annoyances because of exceptions; I never really needed the dynamicity it "offers". It was intrusive since it required OSGi bundle configuration for all modules; it was definitely not simple (in a larger project). Because of my bad experience, I tend to stay away from that monster, thank you very much. I'd rather suffer from jar dependency hell, since that's way way more easily understandable than the classloader hell OSGi introduces. A: I don't care too much about the hotplugability of OSGi modules (at least currently). It's more the enforced modularity. Not having millions of "public" classes available on the classpath at any time protects well from circular dependencies: You have to really think about your public interfaces - not just in terms of the java language construct "public", but in terms of your library/module: What (exactly) are the components, that you want to make available for others? What (exactly) are the interfaces (of other modules) you really need to implement your functionality? It's nice, that hotplug comes with it, but I'd rather restart my usual applications than testing all combinations of hotplugability... A: I am yet to be a "fan" of OSGi... I have been working with an enterprise application at Fortune 100 companies. Recently, the product we use has "upgraded" to an OSGi implementation. starting local cba deployment... [2/18/14 8:47:23:727 EST] 00000347 CheckForOasis finally deployed and "the following bundles will be quiesced and then restarted" [2/18/14 9:38:33:108 EST] 00000143 AriesApplicat I CWSAI0054I: As part of an update operation for application 51 minutes... each time code changes... 
The previous version (non-OSGi) would deploy in less than 5 minutes on older development machines. on a machine with 16 gig ram and 40 free gig disk and Intel i5-3437U 1.9 GHz CPU The "benefit" of this upgrade was sold as improving (production) deployments - an activity that we do about 4 times a year with maybe 2-4 small fix deployments a year. Adding 45 minutes per day to 15 people (QA and developers) I can't imagine ever being justified. In big enterprise applications, if your application is a core application, then changing it is, rightly so (small changes have potential for far reaching impacts - must be communicated and planned with consumers all over the enterprise), a monumental activity - wrong architecture for OSGi. If your application is not an enterprise application - i.e. each consumer can have their own tailored module likely hitting their own silo of data in their own silo'd database and running on a server that hosts many applications, then maybe look at OSGi. At least, that is my experience thus far. A: If a Java based application requires adding or removing modules (extending the base functionality of application), without shutting down the JVM, OSGI can be employed. Usually if the cost of shutting down JVM is more, just to update or to enhance functionality. Examples: * *Eclipse: Provides platform for plugins to install, uninstall, update and inter-depend. *AEM: WCM application, where functionality change will be business driven, which can not afford down times for maintenance. Note: Spring framework stopped supporting OSGI spring bundles, considering it as unnecessary complexity for transaction based applications or for some point in these lines. I personally do not consider OSGI unless it is absolutely necessary, in something big like building a platform. A: I've been doing work with OSGi almost 8 or so years and I have to say that you should consider OSGi only if you have a business need to update, remove, install or replace a component on runtime. This also means that you should have a modular mindset and understanding what modularity means. There's some arguments that OSGi is lightweight - yes, that is true but there are also some other frameworks that are lightweight and easier to maintain and develop. Same goes to secure java blah blah. OSGi requires a solid architecture to be used correctly and it's quite easy to make OSGi-system that could just as easily be a standalone-runnable-jar without any OSGi being involved. A: * *You can, analogically speaking, change the motor of your car without turning it off. *You can customize complex systems for the customers. See the power of Eclipse. *You can reuse entire components. Better than just objects. *You use a stable platform to develop component based Applications. The benefits of this are huge. *You can build Components with the black box concept. Other components don't need to know about hidden interfaces, them see just the published interfaces. *You can use in the same system several equal components, but in different releases, without compromise the application. OSGi solves the Jar Hell problem. *With OSGi you develop thinking to architect systems with CBD There are a lot of benefits (I reminded just these now), available for everyone who uses Java. 
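As a rough illustration of the "published interface, hidden internals" point in the answer above, here is a minimal Java sketch of an OSGi bundle activator. The Greeter interface, its implementation, and the class names are made up for the example; in a real bundle only the interface's package would be listed in the manifest's Export-Package, while the implementation package stays internal.

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

// Published interface: the only thing other bundles ever see.
interface Greeter {
    String greet(String name);
}

// Internal implementation: not exported, so it is free to change between releases.
class GreeterImpl implements Greeter {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

public class Activator implements BundleActivator {
    private ServiceRegistration<Greeter> registration;

    public void start(BundleContext context) {
        // Register the service under its interface; consumers look it up through
        // the service registry and never touch GreeterImpl directly.
        registration = context.registerService(Greeter.class, new GreeterImpl(), null);
    }

    public void stop(BundleContext context) {
        // The framework would unregister this automatically when the bundle stops,
        // but doing it explicitly keeps the lifecycle obvious.
        registration.unregister();
    }
}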
A: The OSGi provides following benefit: ■ A portable and secure execution environment based on Java ■ A service management system, which can be used to register and share services across bundles and decouple service providers from service consumers ■ A dynamic module system, which can be used to dynamically install and uninstall Java modules, which OSGi calls bundles ■ A lightweight and scalable solution A: edited for clarity. OSGi page gave a better simple answer than mine A simple answer: An OSGi Service Platform provides a standardized, component-oriented computing environment for cooperating networked services. This architecture significantly reduces the overall complexity of building, maintaining and deploying applications. The OSGi Service Platform provides the functions to change the composition dynamically on the device of a variety of networks, without requiring a restarts. In a single application structure, say the Eclipse IDE, it's not a big deal to restart when you install a new plugin. Using the OSGi implementation completely, you should be able to add plugins at runtime, get the new functionality, but not have to restart eclipse at all. Again, not a big deal for every day, small application use. But, when you start to look at multi-computer, distributed application frameworks, that's where it starts to get interesting. When you have to have 100% uptime for critical systems, the capability to hotswap components or add new functionality at runtime is useful. Granted, there are capabilities for doing this now for the most part, but OSGi is trying to bundle everything into a nice little framework with common interfaces. Does OSGi solve common problems, I'm not sure about that. I mean, it can, but the overhead may not be worth it for simpler problems. But it's something to consider when you are starting to deal with larger, networked, applications. A: A Few Things that drive me nuts on OSGi: 1) The implentations and their context loaders have a lot of quirks to them, and can be somewhat async (We use felix inside of confluence). Compared to a pure spring (no DM) where [main] is pretty much running through everything sync. 2)Classes are not equal after a hot load. Say, for instance you have a tangosol cache layer on hibernate. It is filled with Fork.class, outside of the OSGi scope. You hotload a new jar, and Fork has not changed. Class[Fork] != Class[Fork]. It also appears during serialization, for the same underlying causes. 3)Clustering. You can work around these things, but it is a major major pain, and makes your architecture look flawed. And to those of you advertising the hotplugging.. OSGi's #1 Client? Eclipse. What does Eclipse do after loading the bundle? It restarts. A: what benefits does OSGi's component system provide you? Well, Here is quite a list: Reduced Complexity - Developing with OSGi technology means developing bundles: the OSGi components. Bundles are modules. They hide their internals from other bundles and communicate through well defined services. Hiding internals means more freedom to change later. This not only reduces the number of bugs, it also makes bundles simpler to develop because correctly sized bundles implement a piece of functionality through well defined interfaces. There is an interesting blog that describes what OSGi technology did for their development process. Reuse - The OSGi component model makes it very easy to use many third party components in an application. 
An increasing number of open source projects provide their JARs ready made for OSGi. However, commercial libraries are also becoming available as ready made bundles. Real World - The OSGi framework is dynamic. It can update bundles on the fly and services can come and go. Developers used to more traditional Java see this as a very problematic feature and fail to see the advantage. However, it turns out that the real world is highly dynamic and having dynamic services that can come and go makes the services a perfect match for many real world scenarios. For example, a service could model a device in the network. If the device is detected, the service is registered. If the device goes away, the service is unregistered. There are a surprising number of real world scenarios that match this dynamic service model. Applications can therefore reuse the powerful primitives of the service registry (register, get, list with an expressive filter language, and waiting for services to appear and disappear) in their own domain. This not only saves writing code, it also provides global visibility, debugging tools, and more functionality than would have implemented for a dedicated solution. Writing code in such a dynamic environment sounds like a nightmare, but fortunately, there are support classes and frameworks that take most, if not all, of the pain out of it. Easy Deployment - The OSGi technology is not just a standard for components. It also specifies how components are installed and managed. This API has been used by many bundles to provide a management agent. This management agent can be as simple as a command shell, a TR-69 management protocol driver, OMA DM protocol driver, a cloud computing interface for Amazon's EC2, or an IBM Tivoli management system. The standardized management API makes it very easy to integrate OSGi technology in existing and future systems. Dynamic Updates - The OSGi component model is a dynamic model. Bundles can be installed, started, stopped, updated, and uninstalled without bringing down the whole system. Many Java developers do not believe this can be done reliably and therefore initially do not use this in production. However, after using this in development for some time, most start to realize that it actually works and significantly reduces deployment times. Adaptive - The OSGi component model is designed from the ground up to allow the mixing and matching of components. This requires that the dependencies of components need to be specified and it requires components to live in an environment where their optional dependencies are not always available. The OSGi service registry is a dynamic registry where bundles can register, get, and listen to services. This dynamic service model allows bundles to find out what capabilities are available on the system and adapt the functionality they can provide. This makes code more flexible and resilient to changes. Transparency - Bundles and services are first class citizens in the OSGi environment. The management API provides access to the internal state of a bundle as well as how it is connected to other bundles. For example, most frameworks provide a command shell that shows this internal state. Parts of the applications can be stopped to debug a certain problem, or diagnostic bundles can be brought in. Instead of staring at millions of lines of logging output and long reboot times, OSGi applications can often be debugged with a live command shell. Versioning - OSGi technology solves JAR hell. 
JAR hell is the problem that library A works with library B;version=2, but library C can only work with B;version=3. In standard Java, you're out of luck. In the OSGi environment, all bundles are carefully versioned and only bundles that can collaborate are wired together in the same class space. This allows both bundle A and C to function with their own library. Though it is not advised to design systems with this versioning issue, it can be a life saver in some cases. Simple - The OSGi API is surprisingly simple. The core API is only one package and less than 30 classes/interfaces. This core API is sufficient to write bundles, install them, start, stop, update, and uninstall them and includes all listener and security classes. There are very few APIs that provide so much functionality for so little API. Small - The OSGi Release 4 Framework can be implemented in about a 300KB JAR file. This is a small overhead for the amount of functionality that is added to an application by including OSGi. OSGi therefore runs on a large range of devices: from very small, to small, to mainframes. It only asks for a minimal Java VM to run and adds very little on top of it. Fast - One of the primary responsibilities of the OSGi framework is loading the classes from bundles. In traditional Java, the JARs are completely visible and placed on a linear list. Searching a class requires searching through this (often very long, 150 is not uncommon) list. In contrast, OSGi pre-wires bundles and knows for each bundle exactly which bundle provides the class. This lack of searching is a significant speed up factor at startup. Lazy - Lazy in software is good and the OSGi technology has many mechanisms in place to do things only when they are really needed. For example, bundles can be started eagerly, but they can also be configured to only start when other bundles are using them. Services can be registered, but only created when they are used. The specifications have been optimized several times to allow for these kind of lazy scenarios that can save tremendous runtime costs. Secure - Java has a very powerful fine grained security model at the bottom but it has turned out very hard to configure in practice. The result is that most secure Java applications are running with a binary choice: no security or very limited capabilities. The OSGi security model leverages the fine grained security model but improves the usability (as well as hardening the original model) by having the bundle developer specify the requested security details in an easily audited form while the operator of the environment remains fully in charge. Overall, OSGi likely provides one of the most secure application environments that is still usable short of hardware protected computing platforms. Non Intrusive - Applications (bundles) in an OSGi environment are left to their own. They can use virtually any facility of the VM without the OSGi restricting them. Best practice in OSGi is to write Plain Old Java Objects and for this reason, there is no special interface required for OSGi services, even a Java String object can act as an OSGi service. This strategy makes application code easier to port to another environment. Runs Everywhere - Well, that depends. The original goal of Java was to run anywhere. Obviously, it is not possible to run all code everywhere because the capabilities of the Java VMs differ. A VM in a mobile phone will likely not support the same libraries as an IBM mainframe running a banking application. 
There are two issues to take care of. First, the OSGi APIs should not use classes that are not available on all environments. Second, a bundle should not start if it contains code that is not available in the execution environment. Both of these issues have been taken care of in the OSGi specifications. Source : www.osgi.org/Technology/WhyOSGi A: It is also being used to bring additional portability to middleware and applications on the mobile side, which is available for WinMo, Symbian and Android, for example. As soon as integration with device features occurs, things can get fragmented. A: At the very least, OSGi makes you THINK about modularity, code reuse, versioning and in general the plumbing of a project. A: Others have already outlined the benefits in detail; here are the practical use cases where I have either seen or used OSGi. * *In one of our applications, we have an event-based flow, and the flow is defined in plugins based on the OSGi platform, so if tomorrow some client wants a different/additional flow, he just has to deploy one more plugin, configure it from our console and he is done. *It is used for deploying different store connectors. For example, suppose we already have an Oracle DB connector and tomorrow MongoDB is required to be connected; then write a new connector, deploy it, configure the details through the console and again you are done. Deployment of connectors is handled by the OSGi plugin framework. A: There is already a quite convincing statement on its official site, which I may quote: The key reason OSGi technology is so successful is that it provides a very mature component system that actually works in a surprising number of environments. The OSGi component system is actually used to build highly complex applications like IDEs (Eclipse), application servers (GlassFish, IBM Websphere, Oracle/BEA Weblogic, Jonas, JBoss), application frameworks (Spring, Guice), industrial automation, residential gateways, phones, and so much more. As for the benefits to developers? DEVELOPERS: OSGi reduces complexity by providing a modular architecture for today's large-scale distributed systems as well as small, embedded applications. Building systems from in-house and off-the-shelf modules significantly reduces complexity and thus development and maintenance expenses. The OSGi programming model realizes the promise of component-based systems. Please check the details in Benefits of Using OSGi.
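Several of the answers above stress that services can appear and disappear while the application is running. As a rough sketch of what consuming such a dynamic service looks like, the snippet below uses the standard org.osgi.util.tracker.ServiceTracker (generic signatures again assume an R4.3+ framework); the Greeter interface is the same hypothetical service as in the earlier sketch and would normally be imported from the provider's API bundle.

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.util.tracker.ServiceTracker;

public class ConsumerActivator implements BundleActivator {
    private ServiceTracker<Greeter, Greeter> tracker;

    public void start(BundleContext context) {
        tracker = new ServiceTracker<>(context, Greeter.class, null);
        tracker.open();

        // getService() returns whatever provider happens to be registered right now,
        // or null if none is active - the consumer has to cope with both cases.
        Greeter greeter = tracker.getService();
        System.out.println(greeter != null ? greeter.greet("OSGi") : "no Greeter available yet");
    }

    public void stop(BundleContext context) {
        tracker.close();
    }
}

// The hypothetical service interface, normally imported from the provider's API bundle:
interface Greeter {
    String greet(String name);
}

If a new Greeter provider bundle is installed later, the tracker picks it up without the consumer being restarted, which is the dynamic-update behaviour described in the answers above.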
{ "language": "en", "url": "https://stackoverflow.com/questions/106222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "294" }
Q: lsof survival guide lsof is an increadibly powerful command-line utility for unix systems. It lists open files, displaying information about them. And since most everything is a file on unix systems, lsof can give sysadmins a ton of useful diagnostic data. What are some of the most common and useful ways of using lsof, and which command-line switches are used for that? A: lsof -i :port will tell you what programs are listening on a specific port. A: lsof -i will provide a list of open network sockets. The -n option will prevent DNS lookups, which is useful when your network connection is slow or unreliable. A: lsof +D /some/directory Will display recursively all the files opened in a directory. +d for just the top-level. This is useful when you have high wait% for IO, correlated to use on a particular FS and want to see which processes are chewing up your io. A: See what files a running application or daemon has open: lsof -p pid Where pid is the process ID of the application or daemon. A: To show all networking related to a given port: lsof -iTCP -i :port lsof -i :22 To show connections to a specific host, use @host lsof -i@192.168.1.5 Show connections based on the host and the port using @host:port lsof -i@192.168.1.5:22 grepping for LISTEN shows what ports your system is waiting for connections on: lsof -i| grep LISTEN Show what a given user has open using -u: lsof -u daniel See what files and network connections a command is using with -c lsof -c syslog-ng The -p switch lets you see what a given process ID has open, which is good for learning more about unknown processes: lsof -p 10075 The -t option returns just a PID lsof -t -c Mail Using the -t and -c options together you can HUP processes kill -HUP $(lsof -t -c sshd) You can also use the -t with -u to kill everything a user has open kill -9 $(lsof -t -u daniel) A: lsof +f -- /mountpoint lists the processes using files on the mount mounted at /mountpoint. Particularly useful for finding which process(es) are using a mounted USB stick or CD/DVD.
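Combining the switches shown in these answers is where lsof gets really handy in day-to-day work. For example, a common chore is freeing a port that a crashed process is still holding; the one-liner below only uses the -t and -i switches already described (port 8080 is just an example, and try a plain kill before reaching for -9):

kill -9 $(lsof -t -i :8080)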
{ "language": "en", "url": "https://stackoverflow.com/questions/106234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "108" }
Q: Calculate the Hilbert value of a point for use in a Hilbert R-Tree? I have an application where a Hilbert R-Tree (wikipedia) (citeseer) would seem to be an appropriate data structure. Specifically, it requires reasonably fast spatial queries over a data set that will experience a lot of updates. However, as far as I can see, none of the descriptions of the algorithms for this data structure even mention how to actually calculate the requisite Hilbert Value; which is the distance along a Hilbert Curve to the point. So any suggestions for how to go about calculating this? A: Fun question! I did a bit of googling, and the good news is, I've found an implementation of Hilbert Value. The potentially bad news is, it's in Haskell... http://www.serpentine.com/blog/2007/01/11/two-dimensional-spatial-hashing-with-space-filling-curves/ It also proposes a Lebesgue distance metric you might be able to compute more easily. A: Below is my java code adapted from C code in the paper "Encoding and decoding the Hilbert order" by Xian Lu and Gunther Schrack, published in Software: Practice and Experience Vol. 26 pp 1335-46 (1996). Hope this helps. Improvements welcome ! Michael /** * Find the Hilbert order (=vertex index) for the given grid cell * coordinates. * @param x cell column (from 0) * @param y cell row (from 0) * @param r resolution of Hilbert curve (grid will have Math.pow(2,r) * rows and cols) * @return Hilbert order */ public static int encode(int x, int y, int r) { int mask = (1 << r) - 1; int hodd = 0; int heven = x ^ y; int notx = ~x & mask; int noty = ~y & mask; int temp = notx ^ y; int v0 = 0, v1 = 0; for (int k = 1; k < r; k++) { v1 = ((v1 & heven) | ((v0 ^ noty) & temp)) >> 1; v0 = ((v0 & (v1 ^ notx)) | (~v0 & (v1 ^ noty))) >> 1; } hodd = (~v0 & (v1 ^ x)) | (v0 & (v1 ^ noty)); return interleaveBits(hodd, heven); } /** * Interleave the bits from two input integer values * @param odd integer holding bit values for odd bit positions * @param even integer holding bit values for even bit positions * @return the integer that results from interleaving the input bits * * @todo: I'm sure there's a more elegant way of doing this ! */ private static int interleaveBits(int odd, int even) { int val = 0; // Replaced this line with the improved code provided by Tuska // int n = Math.max(Integer.highestOneBit(odd), Integer.highestOneBit(even)); int max = Math.max(odd, even); int n = 0; while (max > 0) { n++; max >>= 1; } for (int i = 0; i < n; i++) { int bitMask = 1 << i; int a = (even & bitMask) > 0 ? (1 << (2*i)) : 0; int b = (odd & bitMask) > 0 ? (1 << (2*i+1)) : 0; val += a + b; } return val; } A: See uzaygezen. A: The code and java code above are fine for 2D data points. But for higher dimensions you may need to look at Jonathan Lawder's paper: J.K.Lawder. Calculation of Mappings Between One and n-dimensional Values Using the Hilbert Space-filling Curve. A: I figured out a slightly more efficient way to interleave bits. It can be found at the Stanford Graphics Website. I included a version that I created that can interleave two 32 bit integers into one 64 bit long. 
public static long spreadBits32(int y) { long[] B = new long[] { 0x5555555555555555L, 0x3333333333333333L, 0x0f0f0f0f0f0f0f0fL, 0x00ff00ff00ff00ffL, 0x0000ffff0000ffffL, 0x00000000ffffffffL }; int[] S = new int[] { 1, 2, 4, 8, 16, 32 }; long x = y; x = (x | (x << S[5])) & B[5]; x = (x | (x << S[4])) & B[4]; x = (x | (x << S[3])) & B[3]; x = (x | (x << S[2])) & B[2]; x = (x | (x << S[1])) & B[1]; x = (x | (x << S[0])) & B[0]; return x; } public static long interleave64(int x, int y) { return spreadBits32(x) | (spreadBits32(y) << 1); } Obviously, the B and S local variables should be class constants but it was left this way for simplicity. A: Michael, thanks for your Java code! I tested it and it seems to work fine, but I noticed that the bit-interleaving function overflows at recursion level 7 (at least in my tests, but I used long values), because the "n"-value is calculated using highestOneBit()-function, which returns the value and not the position of the highest one bit; so the loop does unnecessarily many interleavings. I just changed it to the following snippet, and after that it worked fine. int max = Math.max(odd, even); int n = 0; while (max > 0) { n++; max >>= 1; } A: If you need a spatial index with fast delete/insert capabilities, have a look at the PH-tree. It partly based on quadtrees but faster and more space efficient. Internally it uses a Z-curve which has slightly worse spatial properties than an H-curve but is much easier to calculate. Paper: http://www.globis.ethz.ch/script/publication/download?docid=699 Java implementation: http://globis.ethz.ch/files/2014/11/ph-tree-2014-11-10.zip Another option is the X-tree, which is also available here: https://code.google.com/p/xxl/ A: Suggestion: A good simple efficient data structure for spatial queries is a multidimensional binary tree. In a traditional binary tree, there is one "discriminant"; the value that's used to determine whether you take the left branch or the right branch. This can be considered to be the one-dimensional case. In a multidimensional binary tree, you have multiple discriminants; consecutive levels use different discriminants. For example, for two dimensional spacial data, you could use the X and Y coordinates as discriminants. Consecutive levels would use X, Y, X, Y... For spatial queries (for example finding all nodes within a rectangle) you do a depth-first search of the tree starting at the root, and you use the discriminant at each level to avoid searching down branches that contain no nodes in the given rectangle. This allows you to potentially cut the search space in half at each level, making it very efficient for finding small regions in a massive data set. (BTW, this data structure is also useful for partial-match queries, i.e. queries that omit one or more discriminants. You just search down both branches at levels with an omitted discriminant.) A good paper on this data structure: http://portal.acm.org/citation.cfm?id=361007 This article has good diagrams and algorithm descriptions: http://en.wikipedia.org/wiki/Kd-tree
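If you only need the 2-D case and want something self-contained, the iterative xy-to-distance algorithm from Wikipedia's Hilbert curve article is easy to port; below is a compact Java sketch of it. Here order is the curve order, so the grid is 2^order by 2^order (order up to 31 so the result fits in a long). Its orientation convention may differ from the paper-based encode() method earlier in this thread, so don't mix values produced by the two.

// Map grid cell (x, y) to its distance along a Hilbert curve of the given order.
// Port of the iterative xy2d algorithm described on Wikipedia's Hilbert curve page.
public final class HilbertIndex {

    public static long xy2d(int order, int x, int y) {
        long d = 0;
        for (int s = 1 << (order - 1); s > 0; s >>= 1) {
            int rx = (x & s) > 0 ? 1 : 0;
            int ry = (y & s) > 0 ? 1 : 0;
            d += (long) s * s * ((3 * rx) ^ ry);

            // Rotate/flip the quadrant so the next iteration sees a canonical sub-curve.
            if (ry == 0) {
                if (rx == 1) {
                    x = s - 1 - x;
                    y = s - 1 - y;
                }
                int t = x;
                x = y;
                y = t;
            }
        }
        return d;
    }

    public static void main(String[] args) {
        // The four cells of an order-1 curve come out as 0..3 in curve order.
        System.out.println(xy2d(1, 0, 0)); // 0
        System.out.println(xy2d(1, 0, 1)); // 1
        System.out.println(xy2d(1, 1, 1)); // 2
        System.out.println(xy2d(1, 1, 0)); // 3
    }
}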
{ "language": "en", "url": "https://stackoverflow.com/questions/106237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: How do I synchronize the address book in my app using MAPI? The system I'm working on contains an address book. I am looking for sample code that will synchronize addresses with the current user's address book through MAPI. I need two-way sync. If you know of any open-source library with easy-to-use functions for this, I'd be glad to hear about it. If you know of a library that is not open-source, well, that is fine too. The best would be a library whose license will allow me to use it in our own solution. And if you, god forbid, know of a library that will make it easy for me to publish my address book as a MAPI provider - well, then I'm dying to hear about it! Using an external address book and ditching our own is not an option that would serve our customers. A good, working code sample using vanilla MAPI is of course also acceptable. ;-) A: Zarafa just released their 100% MAPI compatible groupware suite as GPL. Maybe that's useful for you? EDIT: The link is slashdotted. More info here.
{ "language": "en", "url": "https://stackoverflow.com/questions/106243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I write a custom validation method with parameters for my ActiveRecord model? In my model I have: validate :my_custom_validation def my_custom_validation errors.add_to_base("error message") if condition.exists? end I would like to add some parameters to my custom validation like so: validate :my_custom_validation, :parameter1 => x, :parameter2 => y How do I write the my_custom_validation method to account for parameters? A: Validators usually have an array parameter indicating, first, the fields to validate and lastly (if it exists) a hash with the options. In your example: :my_custom_validation, parameter1: x, parameter2: y :my_custom_validation would be a field name, while parameter1: x, parameter2: y would be a hash: { parameter1: x, parameter2: y } Therefore, you'd do something like: def my_custom_validation(*attr) options = attr.pop if attr.last.is_a? Hash # do something with options errors.add_to_base("error message") if condition.exists? end A: You can just do something like this: def validate errors.add('That particular field', 'can not be the value you presented') if !self.field_to_check.blank? && self.field_to_check == 'I AM COOL' end No need to reference it, as I believe the validate method is processed (if it exists) after any validates_uniqueness_of -like validations. Added: More information in the Rails API docs here. A: You should also be able to use a Ruby lambda to help with method-based validation of your model attributes (x, y) like below: validate -> { my_custom_validation(parameter1: x, parameter2: y) }
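Tying the last two answers together, here is a minimal sketch of what the lambda approach can look like in a model. The Order class, the quantity attribute, the :limit option and the message are all hypothetical; errors.add_to_base is the old Rails 2 API used in the question (on Rails 3+ you would call errors.add(:base, ...) instead), and the lambda form assumes your Rails version accepts a Proc for validate, as the last answer suggests.

class Order < ActiveRecord::Base
  # Wrap the custom check in a lambda so options can be passed to it.
  validate -> { quantity_below(:limit => 10) }

  private

  def quantity_below(options)
    if quantity.present? && quantity > options[:limit]
      errors.add_to_base("quantity must be at most #{options[:limit]}")
    end
  end
end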
{ "language": "en", "url": "https://stackoverflow.com/questions/106251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: How to get a distinct list of words used in all Field Records using MS SQL? If I have a table field named 'description', what would be the SQL (using MS SQL) to get a list of records of all distinct words used in this field. For example: If the table contains the following for the 'description' field: Record1 "The dog jumped over the fence." Record2 "The giant tripped on the fence." ... The SQL record output would be: "The","giant","dog","jumped","tripped","on","over","fence" A: I do not think you can do this with a SELECT. The best chance is to write a user defined function that returns a table with all the words and then do SELECT DISTINCT on it. Disclaimer: Function dbo.Split is from http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=50648 CREATE TABLE test ( id int identity(1, 1) not null, description varchar(50) not null ) INSERT INTO test VALUES('The dog jumped over the fence') INSERT INTO test VALUES('The giant tripped on the fence') CREATE FUNCTION dbo.Split ( @RowData nvarchar(2000), @SplitOn nvarchar(5) ) RETURNS @RtnValue table ( Id int identity(1,1), Data nvarchar(100) ) AS BEGIN Declare @Cnt int Set @Cnt = 1 While (Charindex(@SplitOn,@RowData)>0) Begin Insert Into @RtnValue (data) Select Data = ltrim(rtrim(Substring(@RowData,1,Charindex(@SplitOn,@RowData)-1))) Set @RowData = Substring(@RowData,Charindex(@SplitOn,@RowData)+1,len(@RowData)) Set @Cnt = @Cnt + 1 End Insert Into @RtnValue (data) Select Data = ltrim(rtrim(@RowData)) Return END CREATE FUNCTION dbo.SplitAll(@SplitOn nvarchar(5)) RETURNS @RtnValue table ( Id int identity(1,1), Data nvarchar(100) ) AS BEGIN DECLARE My_Cursor CURSOR FOR SELECT Description FROM dbo.test DECLARE @description varchar(50) OPEN My_Cursor FETCH NEXT FROM My_Cursor INTO @description WHILE @@FETCH_STATUS = 0 BEGIN INSERT INTO @RtnValue SELECT Data FROM dbo.Split(@description, @SplitOn) FETCH NEXT FROM My_Cursor INTO @description END CLOSE My_Cursor DEALLOCATE My_Cursor RETURN END SELECT DISTINCT Data FROM dbo.SplitAll(N' ') A: I just had a similar problem and tried using SQL CLR to solve it. Might be handy to someone using System; using System.Data; using System.Data.SqlClient; using System.Data.SqlTypes; using Microsoft.SqlServer.Server; using System.Collections; using System.Collections.Generic; public partial class UserDefinedFunctions { private class SplitStrings : IEnumerable { private List<string> splits; public SplitStrings(string toSplit, string splitOn) { splits = new List<string>(); // nothing, return empty list if (string.IsNullOrEmpty(toSplit)) { return; } // return one word if (string.IsNullOrEmpty(splitOn)) { splits.Add(toSplit); return; } splits.AddRange( toSplit.Split(new string[] { splitOn }, StringSplitOptions.RemoveEmptyEntries) ); } #region IEnumerable Members public IEnumerator GetEnumerator() { return splits.GetEnumerator(); } #endregion } [Microsoft.SqlServer.Server.SqlFunction(FillRowMethodName = "readRow", TableDefinition = "word nvarchar(255)")] public static IEnumerable fnc_clr_split_string(string toSplit, string splitOn) { return new SplitStrings(toSplit, splitOn); } public static void readRow(object inWord, out SqlString word) { string w = (string)inWord; if (string.IsNullOrEmpty(w)) { word = string.Empty; return; } if (w.Length > 255) { w = w.Substring(0, 254); } word = w; } }; A: It is not the fastest approach but might be used by somebody for a small amount of data: declare @tmp table(descr varchar(400)) insert into @tmp select 'The dog jumped over the fence.' union select 'The giant tripped on the fence.' 
/* the actual doing starts here */ update @tmp set descr = replace(descr, '.', '') -- get rid of dots at the ends of sentences. declare @xml xml set @xml = '<c>' + replace( (select ' ' + descr from @tmp for xml path('') ), ' ', '</c><c>') + '</c>' ;with allWords as ( select section.Cols.value('.', 'varchar(250)') words from @xml.nodes('/c') section(Cols) ) select words from allWords where ltrim(rtrim(words)) <> '' group by words A: In SQL on its own it would probably need to be a big stored procedure, but if you read all the records out to the scripting language of your choice, you can easily loop over them and split each out into arrays/hashes. A: It'd be a messy stored procedure with a temp table and a SELECT DISTINCT at the end. If you had the words already as records, you would use SELECT DISTINCT [WordsField] from [owner].[tablename]
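For what it's worth, on SQL Server 2016 and later the built-in STRING_SPLIT function (available at compatibility level 130 or higher) removes the need for a hand-rolled split function entirely. A sketch against the same test table used in the first answer:

-- distinct words across all rows, dots stripped as in the examples above
SELECT DISTINCT s.value AS word
FROM test
CROSS APPLY STRING_SPLIT(REPLACE(description, '.', ''), ' ') AS s
WHERE LTRIM(RTRIM(s.value)) <> '';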
{ "language": "en", "url": "https://stackoverflow.com/questions/106275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What is the most stable, least intrusive way to track web traffic between two sites? I need to track traffic between a specific set of web sites. I would then store the number of clicks in a database table with the fields fromSite, toSite, day, noOfClicks. The complete URLs are unimportant - only the web site identity is needed. I've ruled out redirects since I don't want my server to be a single point of failure. I want the links to work even if the tracking application or server is down or overloaded. Another goal is to minimize the work each participating site has to do in order for the tracking to work. What would be the best way to solve this problem? A: The best way is to use an analytics program like Google Analytics, and to review the reports for each domain. A: Do you have access to the logs on all of the sites in question? If so, you should be able to extract that data from the log files (Referer header). A: You could place an onclick event onto each link that goes between the sites. The onclick would then call some JavaScript which could do a server callback to register the click.
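As a rough sketch of that last suggestion: the snippet below fires a non-blocking image request at a hypothetical tracking endpoint when the link is clicked, so the link keeps working even if the tracking server is down, which matches the no-single-point-of-failure requirement. The tracker.example host and the query parameter are made up; on modern browsers navigator.sendBeacon would be the more reliable way to send the same ping.

<!-- fromSite is implied by the page serving this markup; toSite goes in the query string -->
<a href="http://other-site.example/page"
   onclick="(new Image()).src = 'https://tracker.example/track?to=other-site.example';">
  Partner site
</a>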
{ "language": "en", "url": "https://stackoverflow.com/questions/106285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is causing a stack overflow? You may think that this is a coincidence that the topic of my question is similar to the name of the forum but I actually got here by googling the term "stack overflow". I use the OPNET network simulator in which I program using C. I think I am having a problem with big array sizes. It seems that I am hitting some sort of memory allocation limitation. It may have to do with OPNET, Windows, my laptop memory or most likely C language. The problem is caused when I try to use nested arrays with a total number of elements coming to several thousand integers. I think I am exceeding an overall memory allocation limit and I am wondering if there is a way to increase this cap. Here's the exact problem description: I basically have a routing table. Let's call it routing_tbl[n], meaning I am supporting 30 nodes (routers). Now, for each node in this table, I keep info. about many (hundreds) available paths, in an array called paths[p]. Again, for each path in this array, I keep the list of nodes that belong to it in an array called hops[h]. So, I am using at least nph integers worth of memory but this table contains other information as well. In the same function, I am also using another nested array that consumes almost 40,000 integers as well. As soon as I run my simulation, it quits complaining about stack overflow. It works when I reduce the total size of the routing table. What do you think causes the problem and how can it be solved? Much appreciated Ali A: Somehow you are using a lot of stack. Possible causes include that you're creating the routing table on the stack, you're passing it on the stack, or else you're generating lots of calls (eg by recursively processing the whole thing). In the first two cases you should create it on the heap and pass around a pointer to it. In the third case you'll need to rewrite your algorithm in an iterative form. A: It may help if you post some code. Edit the question to include the problem function and the error. Meanwhile, here's a very generic answer: The two principal causes of a stack overflow are 1) a recursive function, or 2) the allocation of a large number of local variables. Recursion if your function calls itself, like this: int recurse(int number) { return (recurse(number)); } Since local variables and function arguments are stored on the stack, then it will in fill the stack and cause a stack overflow. Large local variables If you try to allocate a large array of local variables then you can overflow the stack in one easy go. A function like this may cause the issue: void hugeStack (void) { unsigned long long reallyBig[100000000][1000000000]; ... } There is quite a detailed answer to this similar question. A: Stack overflows can happen in C when the number of embedded recursive calls is too high. Perhaps you are calling a function from itself too many times? This error may also be due to allocating too much memory in static declarations. You can switch to dynamic allocations through malloc() to fix this type of problem. Is there a reason why you cannot use the debugger on this program? A: It depends on where you have declared the variable. A local variable (i.e. one declared on the stack is limited by the maximum frame size) This is a limit of the compiler you are using (and can usually be adjusted with compiler flags). A dynamically allocated object (i.e. one that is on the heap) is limited by the amount of available memory. 
This is a property of the OS (and can technically be larger than the physical memory if you have a smart OS). A: Many operating systems dynamically expand the stack as you use more of it. When you start writing to a memory address that's just beyond the stack, the OS assumes your stack has just grown a bit more and allocates it an extra page (usually 4096 bytes on x86 - exactly 1024 ints). The problem is, on the x86 (and some other architectures) the stack grows downwards but C arrays grow upwards. This means if you access the start of a large array, you'll be accessing memory that's more than a page away from the edge of the stack. If you initialise your array to 0 starting from the end of the array (that's right, make a for loop to do it), the errors might go away. If they do, this is indeed the problem. You might be able to find some OS API functions to force stack allocation, or compiler pragmas/flags. I'm not sure about how this can be done portably, except of course for using malloc() and free()! A: You are unlikely to run into a stack overflow with unthreaded compiled C unless you do something particularly egregious like have runaway recursion or a cosmic memory leak. However, your simulator probably has a threading package which will impose stack size limits. When you start a new thread it will allocate a chunk of memory for the stack for that thread. Likely, there is a parameter you can set somewhere that establishes the default stack size, or there may be a way to grow the stack dynamically. For example, pthreads has a function pthread_attr_setstacksize() which you call prior to starting a new thread to set its size. Your simulator may or may not be using pthreads. Consult your simulator reference documentation.
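The practical fix most of these answers point to is moving the big arrays off the stack and onto the heap. A minimal sketch in C, using made-up sizes and a made-up struct layout for the nested routing table described in the question:

#include <stdio.h>
#include <stdlib.h>

#define NODES 30
#define PATHS 500
#define HOPS  50

/* Hypothetical layout for the nested routing table from the question. */
struct path {
    int hop_count;
    int hops[HOPS];
};

struct node_entry {
    int path_count;
    struct path paths[PATHS];
};

int main(void)
{
    /* struct node_entry routing_tbl[NODES]; declared here would live on the stack:
       30 * 500 * 51 ints is roughly 3 MB, which easily blows a 1 MB thread stack. */
    struct node_entry *routing_tbl = calloc(NODES, sizeof *routing_tbl); /* heap instead */
    if (routing_tbl == NULL) {
        fprintf(stderr, "out of memory\n");
        return 1;
    }

    routing_tbl[0].path_count = 1;
    routing_tbl[0].paths[0].hop_count = 2;
    routing_tbl[0].paths[0].hops[0] = 5;
    routing_tbl[0].paths[0].hops[1] = 7;

    free(routing_tbl);
    return 0;
}

The same rule of thumb applies wherever the simulator calls your code: anything beyond a few kilobytes is safer behind a pointer obtained from malloc()/calloc() than as a local array.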
{ "language": "en", "url": "https://stackoverflow.com/questions/106298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do I plan an enterprise level web application? I'm at a point in my freelance career where I've developed several web applications for small to medium sized businesses that support things such as project management, booking/reservations, and email management. I like the work but find that eventually my applications get to a point where the overhead for maintenance is very high. I look back at code I wrote 6 months ago and find I have to spend a while just relearning how I originally coded it before I can make a fix or feature additions. I do try to practice using frameworks (I've used Zend Framework before, and am considering Django for my next project). What techniques or strategies do you use to plan out an application that is capable of handling a lot of users without breaking and still keeping the code clean enough to maintain easily? If anyone has any books or articles they could recommend, that would be greatly appreciated as well. A: Although there are certainly good articles on that topic, none of them is a substitute for real-world experience. Maintainability is nothing you can plan straight ahead, except on very small projects. It is something you need to take care of during the whole project. In fact, creating loads of classes and infrastructure code in advance can produce code which is even harder to understand than naive spaghetti code. So my advice is to clean up your existing projects, by continuously refactoring them. Look at the parts which were a pain to change, and strive for simpler solutions that are easier to understand and to adjust. If the code is even too bad for that, consider rewriting it from scratch. Don't start new projects and expect them to succeed, just because you read some more articles or used a new framework. Instead, identify the failures of your existing projects and fix their specific problems. Whenever you need to change your code, ask yourself how to restructure it to support similar changes in the future. This is what you need to do anyway, because there will be similar changes in the future. By doing those refactorings you'll stumble across various specific questions you can ask and read articles about. That way you'll learn more than by just asking general questions and reading general articles about maintenance and frameworks. Start cleaning up your code today. Don't defer it to your future projects. (The same is true for documentation. Everyone's first docs were very bad. After several months they turn out to be too verbose and filled with unimportant stuff. So complement the documentation with solutions to the problems you really had, because chances are good that next year you'll be confronted with a similar problem. Those experiences will improve your writing style more than any "how to write good" style guide.) A: I'd honestly recommend looking at Martin Fowler's Patterns of Enterprise Application Architecture. It discusses a lot of ways to make your application more organized and maintainable. In addition, I would recommend using unit testing to give you better comprehension of your code. Kent Beck's book on Test Driven Development is a great resource for learning how to address change to your code through unit tests. A: To improve the maintainability you could: * *If you are the sole developer then adopt a coding style and stick to it. That will give you confidence later when navigating through your own code about things you could have possibly done and the things that you absolutely wouldn't.
Being confident where to look and what to look for and what not to look for will save you a lot of time. *Always take time to bring documentation up to date. Include the task in the development plan; include that time in the plan as part of any change or new feature. *Keep documentation balanced: some high level diagrams, meaningful comments. The best comments tell what cannot be read from the code itself, like business reasons or "whys" behind certain chunks of code. *Include in the plan the effort to keep code structure, folder names, namespaces, object, variable and routine names up to date and reflective of what they actually do. This will go a long way in improving maintainability. Always call a spade a "spade". Avoid large chunks of code, structure it by means available within your language of choice, give chunks meaningful names. *Low coupling and high coherency. Make sure you are up to date with techniques of achieving these: design by contract, dependency injection, aspects, design patterns etc. *From a task management point of view you should estimate more time and charge a higher rate for non-continuous pieces of work. Do not hesitate to make the customer aware that you need extra time to do small non-continuous changes spread over time as opposed to bigger continuous projects and ongoing maintenance, since the administration and analysis overhead is greater (you need to manage and analyse each change including its impact on the existing system separately). One benefit your customer is going to get is greater life expectancy of the system. The other is accurate documentation that will preserve their option to seek someone else's help should they decide to do so. Both protect customer investment and are strong selling points. *Use source control if you don't do that already. *Keep a detailed log of everything done for the customer plus any important communication (a simple computer or paper based CMS). Refresh your memory before each assignment. *Keep a log of issues left open, ideas, suggestions per customer; again refresh your memory before beginning an assignment. *Plan ahead how the post-implementation support is going to be conducted, and discuss it with the customer. Make your systems easy to maintain. Plan for parameterisation, monitoring tools, in-built sanity checks. Sell post-implementation support to the customer as part of the initial contract. *Expand by hiring, even if you need someone just to provide that post-implementation support and do the admin bits. Recommended reading: * *"Code Complete" by Steve McConnell *Anything on design patterns is worth including in the list of recommended reading. A: The most important advice I can give, having helped grow an old web application into an extremely highly available, high-demand web application, is to encapsulate everything - in particular * *Use good MVC principles and frameworks to separate your view layer from your business logic and data model. *Use a robust persistence layer to not couple your business logic to your data model *Plan for statelessness and asynchronous behaviour. Here is an excellent article on how eBay tackles these problems http://www.infoq.com/articles/ebay-scalability-best-practices A: * *Use a framework / MVC system. The more organised and centralized your code is the better. *Try using Memcache. PHP has a built in extension for it, it takes about ten minutes to set up and another twenty to put in your application. You can cache whatever you want to it - I cache all my database records in it - for every application. It does wonders.
*I would recommend using a source control system such as Subversion if you aren't already. A: You should consider maybe using SharePoint. It's an environment that is already designed to do all you have mentioned, and has many other features you maybe haven't thought about (but maybe you will need in the future :-) ) Here's some information from the official site. There are 2 different SharePoint environments you can use: Windows Sharepoint Services (WSS) or Microsoft Office Sharepoint Server (MOSS). WSS is free and ships with Windows Server 2003, while MOSS isn't free, but has much more features and covers almost all you enterprise's needs.
{ "language": "en", "url": "https://stackoverflow.com/questions/106299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How can I best connect Seam and GWT in a stateful web application? We have a web application that was implemented using GWT. What it presents is fetched from a JBoss/Seam server using the remoting mechanism, and this works fine. However, the application is now extended to support sessions and users. The Seam GWT service doesn't seem to provide a way to let me log in such that Seam can return restricted data back to the GWT application, and so it looks to me like I will have to wrap the GWT application in facelets. It is not obvious to me that a login using the Seam session mechanism will help me get correct data into the GWT application however, so my question is whether I will be lucky and it will just work, or if I need to do some client side magic, server side magic, or if my perception of missing login functionality in the Seam GWT service actually is wrong. Bonus points to anyone that can provide me with a complete example showing something similar. A: It turns out that things are "just working" as I hoped. By using Seam's Identity and login mechanism, I can access the currently logged-in user via Identity.instance().getUsername(); in the service code that gets requests from the GWT portion of the application. I tried to put a @Restrict annotation on the service, but this did not appear to work; however, this is not something that is needed as long as I can provide results to the GWT application based on the logged-in user. A: How about this complete GWT app on google code -- http://code.google.com/p/tocollege-net/ ?
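For anyone landing here, a rough sketch of what the server side of that accepted answer can look like. This assumes Seam 2's GWT remoting, where the matching method on the GWT service interface carries Seam's @WebRemote annotation and the implementing component is looked up by name; the component name, method names and return type are made up, and only Identity.instance().getUsername() is taken directly from the answer above.

import java.util.Collections;
import java.util.List;

import org.jboss.seam.ScopeType;
import org.jboss.seam.annotations.Name;
import org.jboss.seam.annotations.Scope;
import org.jboss.seam.security.Identity;

// Hypothetical Seam component backing the GWT remote service.
@Name("addressBookService")
@Scope(ScopeType.SESSION)
public class AddressBookService {

    public List<String> getMyEntries() {
        String user = Identity.instance().getUsername(); // the currently logged-in user
        return loadEntriesFor(user);                     // only return this user's data
    }

    private List<String> loadEntriesFor(String user) {
        // placeholder for the real data access
        return Collections.emptyList();
    }
}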
{ "language": "en", "url": "https://stackoverflow.com/questions/106310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: SQLPlus settings to generate tab-separated data file Anyone have a good set of sqlplus configuration directives to help transform a given sql query into nicely tab separated output for pulling into a spreadsheet or further processing? A: As Justin pointed out in his link, using the set colsep function SQLPlus command saves typing a separator for each column. But for tab-delimited, set colsep Chr(9) won't work. For UNIX or LINUX, use set colsep ' ' with the space between the single-quotes being a typed tab. For Windows, use these settings: col TAB# new_value TAB NOPRINT select chr(9) TAB# from dual; set colsep "&TAB" select * from table; A: One particular script that I have stolen on more than one occasion comes from an AskTom thread on extracting data to a flat file. If I needed a quick and dirty flat file out of SQL*Plus. I would tend to prefer the DUMP_CSV function Tom posted earlier on that thread for any sort of ongoing process, though. A: I got a stupid solution. It worked very well. Solution SELECT column1 || CHR(9) || column2 || CHR(9) || column3 ... ... FROM table principle behind Actually, it's just a string concatenation. CHR(9) -> '\t' column1 || CHR(9) || column2 -> concat(column1, '\t', column2) A: Tab characters are invisible, but, if you type the following:- set colsep Z but instead of the Z, press the TAB key one your keyboard, followed by enter, it works. SQLPlus understands that the next character after the space (the invisible tab) is the colsep. I've placed all my settings into a file named /cli.sql so I just do this:- @/cli.sql to load them all. set serveroutput on SET NEWPAGE NONE set feedback off set echo off set feedback off set heading off set colsep set pagesize 0 SET UNDERLINE OFF set pagesize 50000 set linesize 32767 connect use/password (BEWARE - there is an invisible tab after an invisible space on the end of the line:) set colsep Enjoy! A: Check out the Oracle documentation: * *Formatting SQLPlus Reports *Generating HTML Reports from SQLPlus You can generate a tab in Oracle by using the tab's ASCII value 9 and the chr function: select chr(9) from dual;
{ "language": "en", "url": "https://stackoverflow.com/questions/106323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Is there a case where delegate syntax is preferred over lambda expression for anonymous methods? With the advent of new features like lambda expressions (inline code), does it mean we dont have to use delegates or anonymous methods anymore? In almost all the samples I have seen, it is for rewriting using the new syntax. Any place where we still have to use delegates and lambda expressions won't work? A: lambda is shortcut for anonymous delegate, but you will always be using delegates. the delegate specifies the methods signature. you can just do this: delegate(int i) { Console.WriteLine(i.ToString()) } can be replaced with f => Console.WriteLine(f.ToString()) A: Lambda expression is not (and was not meant to be) a silver bullet that would replace (hide) delegates. It is great with small local things like: List<string> names = GetNames(); names.ForEach(Console.WriteLine); * *it makes code more readable thus simple to understand. *It makes code shorter thus less work for us ;) On the other hand it is very simple to misuse them. Long or/and complex lambda expressions are tending to be: * *Hard to understand for new developers *Less object oriented *Much harder to read So “does it mean we don’t have to use delegates or anonymous methods anymore?” No – use Lambda expression where you win time/readability otherwise consider using delegates. A: One not so big advantage for the older delegate syntax is that you need not specify the parameters if you dont use it in the method body. From msdn There is one case in which an anonymous method provides functionality not found in lambda expressions. Anonymous methods enable you to omit the parameter list. This means that an anonymous method can be converted to delegates with a variety of signatures. This is not possible with lambda expressions. For example you can do: Action<int> a = delegate { }; //takes 1 argument, but not specified on the RHS While this fails: Action<int> a = => { }; //omitted parameter, doesnt compile. This technique mostly comes handy when writing event-handlers, like: button.onClicked += delegate { Console.WriteLine("clicked"); }; This is not a strong advantage. It's better to adopt the newer syntax always imho. A: Delegate have two meanings in C#. The keyword delegate can be used to define a function signature type. This is usually used when defininge the signature of higher-order functions, i.e. functions that take other functions as arguments. This use of delegate is still relevant. The delegate keyword can also be used to define an inline anonymous function. In the case where the function is just a single expression, the lambda syntax is a simpler alternative. A: Lambda expressions are just "syntactic sugar", the compiler will generate appropriate delegates for you. You can investigate this by using Lutz Roeder's Reflector. A: Lamda's are just syntactic sugar for delegates, they are not just inline, you can do the following: s.Find(a => { if (a.StartsWith("H")) return a.Equals("HI"); else return !a.Equals("FOO"); }); And delegates are still used when defining events, or when you have lots of arguments and want to actually strongly type the method being called. A: Yes there are places where directly using anonymous delegates and lambda expressions won't work. If a method takes an untyped Delegate then the compiler doesn't know what to resolve the anonymous delegate/lambda expression to and you will get a compiler error. 
public static void Invoke(Delegate d) { d.DynamicInvoke(); } static void Main(string[] args) { // fails Invoke(() => Console.WriteLine("Test")); // works Invoke(new Action(() => Console.WriteLine("Test"))); Console.ReadKey(); } The failing line of code will get the compiler error "Cannot convert lambda expression to type 'System.Delegate' because it is not a delegate type".
{ "language": "en", "url": "https://stackoverflow.com/questions/106324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How do you programmatically identify the number of references to a method with C# I've recently inherited C# console application that is in need of some pruning and clean up. Long story short, the app consists of a single class containing over 110,000 lines of code. Yup, over 110,000 lines in a single class. And, of course, the app is core to our business, running 'round the clock updating data used on a dynamic website. Although I'm told my predecessor was "a really good programmer", it obvious he was not at all into OOP (or version control). Anyway... while familiarizing myself with the code I've found plenty of methods that are declared, but never referenced. It looks as if copy/paste was used to version the code, for example say I have a method called getSomethingImportant(), chances are there is another method called getSomethingImortant_July2007() (the pattern is functionName_[datestamp] in most cases). It looks like when the programmer was asked to make a change to getSomethingImportant() he would copy/paste then rename to getSomethingImortant_Date, make changes to getSomethingImortant_Date, then change any method calls in the code to the new method name, leaving the old method in the code but never referenced. I'd like to write a simple console app that crawls through the one huge class and returns a list of all methods with the number of times each method was referenced. By my estimates there are well over 1000 methods, so doing this by hand would take a while. Are there classes within the .NET framework that I can use to examine this code? Or any other usefull tools that may help identify methods that are declared but never referenced? (Side question: Has anyone else ever seen a C# app like this, one reeeealy big class? It's more or less one huge procedural process, I know this is the first I've seen, at least of this size.) A: To complete the Romain Verdier answer, lets dig a bit into what NDepend can bring to you here. (Disclaimer: I am a developer of the NDepend team) NDepend lets query your .NET code with some LINQ queries. Knowing which methods call and is called by which others, is as simple as writing the following LINQ query: from m in Application.Methods select new { m, m.MethodsCalled, m.MethodsCallingMe } The result of this query is presented in a way that makes easy to browse callers and callees (and its 100% integrated into Visual Studio). There are many other NDepend capabilities that can help you. For example you can right click a method in Visual Studio > NDepend > Select methods... > that are using me (directly or indirectly) ... The following code query is generated... from m in Methods let depth0 = m.DepthOfIsUsing("NUnit.Framework.Constraints.ConstraintExpression.Property(String)") where depth0 >= 0 orderby depth0 select new { m, depth0 } ... which matches direct and indirect callers, with the depth of calls (1 means direct caller, 2 means caller of direct callers and so on). And then by clicking the button Export to Graph, you get a call graph of your pivot method (of course it could be the other way around, i.e method called directly or indirectly by a particular pivot method). A: Download the free trial of Resharper. Use the Resharper->Search->Find Usages in File (Ctrl-Shift-F7) to have all usages highlighted. Also, a count will appear in the status bar. If you want to search across multiple files, you can do that too using Ctrl-Alt-F7. 
If you don't like that, do text search for the function name in Visual Studio (Ctrl-Shift-F), this should tell you how many occurrences were found in the solution, and where they are. A: You could try to use NDepend if you just need to extract some stats about your class. Note that this tool relies on Mono.Cecil internally to inspect assemblies. A: I don't think you want to write this yourself - just buy NDepend and use its Code Query Language A: The Analyzer window in Reflector can show you where a method is called (Used By). Sounds like it would take a very long time to get the information that way though. You might look at the API that Reflector provides for writing add-ins and see if you can get the grunt work of the analysis that way. I would expect that the source code for the code metrics add-in could tell you a bit about how to get information about methods from the reflector API. Edit: Also the code model viewer add-in for Reflector could help too. It's a good way to explore the Reflector API. A: There is no easy tool to do that in .NET framework itself. However I don't think you really need a list of unused methods at once. As I see it, you'll just go through the code and for each method you'll check if it's unused and then delete it if so. I'd use Visual Studio "Find References" command to do that. Alternatively you can use Resharper with its "Analize" window. Or you can just use Visual Studio code analysis tool to find all unused private methods. A: FXCop has a rule that will identify unused private methods. So you could mark all the methods private and have it generate a list. FXCop also has a language if you wanted to get fancier http://www.binarycoder.net/fxcop/ A: If you don't want to shell out for NDepend, since it sounds like there is just a single class in a single assembly - comment out the methods and compile. If it compiles, delete them - you aren't going to have any inheritance issues, virtual methods or anything like that. I know it sounds primitive, but sometimes refactoring is just grunt work like this. This is kind of assuming you have unit tests you run after each build until you've got the code cleaned up (Red/Green/Refactor). A: I don't know of anything that's built to handle this specific case, but you could use Mono.Cecil. Reflect the assemblies and then count references in the IL. Shouldn't be too tough. A: Try having the compiler emit assembler files, as in x86 instructions, not .NET assemblies. Why? Because it's much easier to parse assembler code than it is C# code or .NET assemblies. For instance, a function/method declaration looks something like this: .string "w+" .text .type create_secure_tmpfile, @function create_secure_tmpfile: pushl %ebp movl %esp, %ebp subl $24, %esp movl $-1, -8(%ebp) subl $4, %esp and function/method references will look something like this: subl $12, %esp pushl 24(%ebp) call create_secure_tmpfile addl $16, %esp movl 20(%ebp), %edx movl %eax, (%edx) When you see "create_secure_tmpfile:" you know you have a function/method declaration, and when you see "call create_secure_tmpfile" you know you have a function/method reference. This may be good enough for your purposes, but if not it's just a few more steps before you can generate a very cute call-tree for your entire application.
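Since one of the answers mentions Mono.Cecil without showing it, here is a rough sketch of counting call sites per method with it as a small console app, which fits the original goal. The API calls (AssemblyDefinition.ReadAssembly, MethodDefinition.Body.Instructions and so on) are from recent Mono.Cecil releases available on NuGet; the assembly path is a placeholder, and counting only call/callvirt instructions is a deliberate simplification.

using System;
using System.Collections.Generic;
using System.Linq;
using Mono.Cecil;
using Mono.Cecil.Cil;

class MethodReferenceCounter
{
    static void Main()
    {
        var assembly = AssemblyDefinition.ReadAssembly(@"C:\path\to\YourApp.exe"); // placeholder path
        var counts = new Dictionary<string, int>();

        // Every method defined in the assembly starts with a count of zero.
        foreach (var type in assembly.MainModule.Types)
            foreach (var method in type.Methods)
                counts[method.FullName] = 0;

        // Walk the IL of every method body and count call/callvirt targets we know about.
        foreach (var type in assembly.MainModule.Types)
        {
            foreach (var method in type.Methods.Where(m => m.HasBody))
            {
                foreach (var instruction in method.Body.Instructions)
                {
                    if (instruction.OpCode == OpCodes.Call || instruction.OpCode == OpCodes.Callvirt)
                    {
                        var target = instruction.Operand as MethodReference;
                        if (target != null && counts.ContainsKey(target.FullName))
                            counts[target.FullName]++;
                    }
                }
            }
        }

        // Methods listed first (count 0) are declared but never referenced.
        foreach (var pair in counts.OrderBy(p => p.Value))
            Console.WriteLine("{0,5}  {1}", pair.Value, pair.Key);
    }
}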
{ "language": "en", "url": "https://stackoverflow.com/questions/106329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do I find out what type each object is in a ArrayList? I have a ArrayList made up of different elements imported from a db, made up of strings, numbers, doubles and ints. Is there a way to use a reflection type technique to find out what each type of data each element holds? FYI: The reason that there is so many types of data is that this is a piece of java code being written to be implemented with different DB's. A: You can use the getClass() method, or you can use instanceof. For example for (Object obj : list) { if (obj instanceof String) { ... } } or for (Object obj : list) { if (obj.getClass().equals(String.class)) { ... } } Note that instanceof will match subclasses. For instance, of C is a subclass of A, then the following will be true: C c = new C(); assert c instanceof A; However, the following will be false: C c = new C(); assert !c.getClass().equals(A.class) A: In Java just use the instanceof operator. This will also take care of subclasses. ArrayList<Object> listOfObjects = new ArrayList<Object>(); for(Object obj: listOfObjects){ if(obj instanceof String){ }else if(obj instanceof Integer){ }etc... } A: import java.util.ArrayList; /** * @author potter * */ public class storeAny { /** * @param args */ public static void main(String[] args) { // TODO Auto-generated method stub ArrayList<Object> anyTy=new ArrayList<Object>(); anyTy.add(new Integer(1)); anyTy.add(new String("Jesus")); anyTy.add(new Double(12.88)); anyTy.add(new Double(12.89)); anyTy.add(new Double(12.84)); anyTy.add(new Double(12.82)); for (Object o : anyTy) { if(o instanceof String){ System.out.println(o.toString()); } else if(o instanceof Integer) { System.out.println(o.toString()); } else if(o instanceof Double) { System.out.println(o.toString()); } } } } A: for (Object object : list) { System.out.println(object.getClass().getName()); } A: Just call .getClass() on each Object in a loop. Unfortunately, Java doesn't have map(). :) A: Instanceof works if you don't depend on specific classes, but also keep in mind that you can have nulls in the list, so obj.getClass() will fail, but instanceof always returns false on null. A: Since Java 8 mixedArrayList.forEach((o) -> { String type = o.getClass().getSimpleName(); switch (type) { case "String": // treat as a String break; case "Integer": // treat as an int break; case "Double": // treat as a double break; ... default: // whatever } }); A: instead of using object.getClass().getName() you can use object.getClass().getSimpleName(), because it returns a simple class name without a package name included. for instance, Object[] intArray = { 1 }; for (Object object : intArray) { System.out.println(object.getClass().getName()); System.out.println(object.getClass().getSimpleName()); } gives, java.lang.Integer Integer A: You almost never want you use something like: Object o = ... if (o.getClass().equals(Foo.class)) { ... } because you aren't accounting for possible subclasses. You really want to use Class#isAssignableFrom: Object o = ... if (Foo.class.isAssignableFrom(o)) { ... } A: In C#:Fixed with recommendation from Mike ArrayList list = ...; // List<object> list = ...; foreach (object o in list) { if (o is int) { HandleInt((int)o); } else if (o is string) { HandleString((string)o); } ... } In Java: ArrayList<Object> list = ...; for (Object o : list) { if (o instanceof Integer)) { handleInt((Integer o).intValue()); } else if (o instanceof String)) { handleString((String)o); } ... 
} A: You say "this is a piece of java code being written", from which I infer that there is still a chance that you could design it a different way. Having an ArrayList is like having a collection of stuff. Rather than forcing an instanceof or getClass check every time you take an object from the list, why not design the system so that you get the type of the object when you retrieve it from the DB, and store it in a collection of the appropriate type of object? Or, you could use one of the many data access libraries that exist to do this for you. A: If you expect the data to be numeric in some form, and all you are interested in doing is converting each element to a numeric value, I would suggest: for (Object o : list) { double d = Double.parseDouble(o.toString()); }
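On a recent JDK, pattern matching for instanceof (standard since Java 16) removes the cast boilerplate shown above; this is only a hedged variant of the same idea, not something proposed in the answers:

import java.util.List;

public class TypeSniffer {
    static void describe(List<Object> values) {
        for (Object value : values) {
            // The binding variable is in scope only where the test has succeeded.
            if (value instanceof String s) {
                System.out.println("String of length " + s.length());
            } else if (value instanceof Integer i) {
                System.out.println("int value " + i);
            } else if (value instanceof Double d) {
                System.out.println("double value " + d);
            } else if (value == null) {
                System.out.println("null entry"); // instanceof is always false for null
            } else {
                System.out.println("something else: " + value.getClass().getName());
            }
        }
    }

    public static void main(String[] args) {
        describe(List.of("Jesus", 1, 12.88)); // List.of rejects nulls; use an ArrayList if you need them
    }
}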
{ "language": "en", "url": "https://stackoverflow.com/questions/106336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "88" }
Q: Secure a DLL file with a license file What is the best way to secure the use/loading of a DLL with a license file? A: A couple of things you might want to consider: Checksum the DLL. Using a cryptographic hash function, you can store the hash inside the license file or inside the DLL. This gives you a way to verify that the original DLL file has not been tampered with and that the license file belongs to this DLL. A few simple byte-swapping techniques can quickly take your hash function off the beaten track (and thus make it harder to reproduce). Don't store your hash as a string; split it into unsigned shorts in different places. As Larry said, a MAC address is fairly common. There are lots of examples of how to get that on The Code Project, but be aware it's easy to fake these days. My suggestion would be to use private/public keys for license generation. In short, attacks will either be binary (modifying the instructions of your DLL file), so protect against that, or aimed at key generation, so make each license specific to the user, the machine, and even the install. A: You can check for a license inside of DllMain() and die if it's not found. A: It also depends on how your license algorithm works. I'd suggest you look into using something like a Diffie–Hellman key exchange (or even RSA) to generate some sort of public/private key that can be passed to your users, based on some information. (In one case where I wrote the license code on contract for a company, they used a MAC address and some other data, hashed it, and encrypted the hash; if the registration number was correct, that gave them the "key value".) This ensures that the key file can't be moved (or given) to another machine, thus 'stealing' the software. If you want to dig deeper and avoid hackers, that's a whole 'nother topic....
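To make the public/private key suggestion concrete, here is a minimal sketch of my own (the key XML, file names and failure behaviour are all assumptions, and a determined attacker can still patch the check out of the binary): a helper the assembly can call during initialisation, which verifies a detached signature over the license file against a public key embedded in the assembly.

// Hedged sketch only: verify that the license file was signed with the vendor's private key.
using System;
using System.IO;
using System.Security.Cryptography;

public static class LicenseCheck
{
    // Public half of the vendor's RSA key pair, embedded in the assembly (placeholder value).
    private const string PublicKeyXml = "<RSAKeyValue>...</RSAKeyValue>";

    public static void EnsureLicensed(string licensePath, string signaturePath)
    {
        byte[] licenseBytes = File.ReadAllBytes(licensePath);
        byte[] signature = File.ReadAllBytes(signaturePath);

        using (var rsa = RSA.Create())
        {
            rsa.FromXmlString(PublicKeyXml);
            bool ok = rsa.VerifyData(licenseBytes, signature,
                                     HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
            if (!ok)
                throw new UnauthorizedAccessException("Missing or invalid license file.");
        }
    }
}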
{ "language": "en", "url": "https://stackoverflow.com/questions/106347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: What tools do you use to monitor a web service? From basic things likes page views per second to more advanced stuff like cpu or memory usage. Any ideas? A: I think someone has asked the same type of question before here? Though I'm not too sure how helpful it is. For CPU usage, etc, I would try RRDTool, or maybe something like Cacti. A: Web service or web site? Since you mention page views: I believe you mean web site. Google Analytics will probably give you everything you need to track usage statistics and best of all is free under most circumstances. You might also want to monitor site up-time and have something to send email alerts if the site is down for some reason. We've used Nagios in the past and it works just fine. A: I've been using monit (http://www.tildeslash.com/monit/) for years. It monitors CPU and memory usage as well as downtime for apache/mysql/etc... you can also configure it to notify you of outages and automatically restart services in real time. I also use munin for reporting: http://munin.projects.linpro.no/ If you want reports on pageviews and whatnot, AWStats is the best I've used. A: I use Nagios for general machine monitoring on Linux and I pretty much rely on Google Analytics for website reporting - I know that's not for everyone since some folks have privacy concerns about giving all their site data to Google. Both are free and easy to install (Nagios is generally available through apt-get and Analytics is a pretty easy install on a site). Nagios, however, can be a bear to configure. A: I cast my vote for monit as well. The nice thing about is that it understands apache-status info and can notify/take actions when say 80% of max num of apache workers are in "busy" state. but you need something else for hardware and general monitoring, something SNMP-aware, like zennos or zabbix A: Munin and Cacti provide very nice interfaces and pre-built scripts for rrdtool. They can also monitor multiple servers and send out warnings and alerts through naigos.
{ "language": "en", "url": "https://stackoverflow.com/questions/106358", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Add non-ASCII file names to zip in Java What is the best way to add non-ASCII file names to a zip file using Java, in such a way that the files can be properly read in both Windows and Linux? Here is one attempt, adapted from https://truezip.dev.java.net/tutorial-6.html#Example, which works in Windows Vista but fails in Ubuntu Hardy. In Hardy the file name is shown as abc-ЖДФ.txt in file-roller. import java.io.IOException; import java.io.PrintStream; import de.schlichtherle.io.File; import de.schlichtherle.io.FileOutputStream; public class Main { public static void main(final String[] args) throws IOException { try { PrintStream ps = new PrintStream(new FileOutputStream( "outer.zip/abc-åäö.txt")); try { ps.println("The characters åäö works here though."); } finally { ps.close(); } } finally { File.umount(); } } } Unlike java.util.zip, truezip allows specifying zip file encoding. Here's another sample, this time explicitly specifiying the encoding. Neither IBM437, UTF-8 nor ISO-8859-1 works in Linux. IBM437 works in Windows. import java.io.IOException; import de.schlichtherle.io.FileOutputStream; import de.schlichtherle.util.zip.ZipEntry; import de.schlichtherle.util.zip.ZipOutputStream; public class Main { public static void main(final String[] args) throws IOException { for (String encoding : new String[] { "IBM437", "UTF-8", "ISO-8859-1" }) { ZipOutputStream zipOutput = new ZipOutputStream( new FileOutputStream(encoding + "-example.zip"), encoding); ZipEntry entry = new ZipEntry("abc-åäö.txt"); zipOutput.putNextEntry(entry); zipOutput.closeEntry(); zipOutput.close(); } } } A: In Zip files, according to the spec owned by PKWare, the encoding of file names and file comments is IBM437. In 2007 PKWare extended the spec to also allow UTF-8. This says nothing about the encoding of the files contained within the zip. Only the encoding of the filenames. I think all tools and libraries (Java and non Java) support IBM437 (which is a superset of ASCII), and fewer tools and libraries support UTF-8. Some tools and libs support other code pages. For example if you zip something using WinRar on a computer running in Shanghai, you will get the Big5 code page. This is not "allowed" by the zip spec but it happens anyway. The DotNetZip library for .NET does Unicode, but of course that doesn't help you if you are using Java! Using the Java built-in support for ZIP, you will always get IBM437. If you want an archive with something other than IBM437, then use a third party library, or create a JAR. A: Miracles indeed happen, and Sun/Oracle did really fix the long-living bug/rfe: Now it's possible to set up filename encodings upon creating the zip file/stream (requires Java 7). A: You can still use the Apache Commons implementation of the zip stream : http://commons.apache.org/compress/apidocs/org/apache/commons/compress/archivers/zip/ZipArchiveOutputStream.html#setEncoding%28java.lang.String%29 Calling setEncoding("UTF-8") on your stream should be enough. A: From a quick look at the TrueZIP manual - they recommend the JAR format: It uses UTF-8 for file name encoding and comments - unlike ZIP, which only uses IBM437. This probably means that the API is using the java.util.zip package for its implementation; that documentation states that it is still using a ZIP format from 1996. Unicode support wasn't added to the PKWARE .ZIP File Format Specification until 2006. A: The encoding for the File-Entries in ZIP is originally specified as IBM Code Page 437. 
Many characters used in other languages are impossible to store that way. The PKWARE specification acknowledges the problem and adds a flag bit, but that is a later addition (from 2007, thanks to Cheeso for clearing that up, see comments). If that bit is set, the filename entry has to be encoded in UTF-8. This extension is described in 'APPENDIX D - Language Encoding (EFS)', at the end of the linked document. For Java it is a known bug that you get into trouble with non-ASCII characters. See bug #4244499 and the high number of related bugs. As a workaround, my colleague URL-encoded the filenames before storing them in the ZIP and decoded them after reading. If you control both storing and reading, that may work for you. EDIT: At the bug report someone suggests using the ZipOutputStream from Apache Ant as a workaround. That implementation allows the encoding to be specified. A: Did it actually fail, or was it just a font issue? (e.g. the font having different glyphs for those char codes) I've seen similar issues in Windows where rendering "broke" because the font didn't support the charset but the data was actually intact and correct. A: Non-ASCII file names are not reliable across ZIP implementations and are best avoided. There is no provision for storing a charset setting in ZIP files; clients tend to guess with 'the current system codepage', which is unlikely to be what you want. Many combinations of client and codepage can result in inaccessible files. Sorry!
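To make the Java 7 fix mentioned above concrete, here is a small hedged example (file names invented) using the charset-aware ZipOutputStream constructor added to java.util.zip in JDK 7:

import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class Utf8ZipExample {
    public static void main(String[] args) throws IOException {
        OutputStream out = Files.newOutputStream(Paths.get("utf8-example.zip"));
        // Since Java 7 the entry-name encoding can be chosen explicitly instead of
        // being stuck with the platform default.
        try (ZipOutputStream zip = new ZipOutputStream(out, StandardCharsets.UTF_8)) {
            zip.putNextEntry(new ZipEntry("abc-åäö.txt"));
            zip.write("The characters åäö survive in the content as well.".getBytes(StandardCharsets.UTF_8));
            zip.closeEntry();
        }
    }
}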
{ "language": "en", "url": "https://stackoverflow.com/questions/106367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Getting multiple UI threads in Windows Forms I'm trying to figure out a way to make user controls run in their own UI threads. Is this possible? I'm trying to prevent a module-based application from crashing due to a single module. Any thoughts? A: That's not possible. However, with some non-trivial code, you can have different windows running in separate threads. Each window will have its own message loop. Update: Another way you could think of is to write your controls in a special way. You can handle all events in your controls by creating a new thread that will run all the logic. A: Unfortunately all UI controls run on the same UI thread. Therefore any code running on this thread that could potentially lead to a hang situation would need to be coded with some sort of timeout logic. DateTime startTime = DateTime.Now; while(DateTime.Now.Subtract(startTime).TotalSeconds < 30) { //do something } Otherwise, as Orlangur stated earlier, all event handler code would need to be run in separate threads. You would still however need to monitor these threads to determine if they've been running too long and shut them down. As such you might as well implement the type of logic above as it would be a lot less work and more maintainable. A: I suppose it's not a matter of the program crashing. Exceptions can be caught of course, but the issue is in hanging controls. For the sake of this situation, here's an example: public void Button1_Click(object sender, EventArgs args) { while(true) {} } If this code were to run in a control, an exception wouldn't throw, but it would hang. I'm trying to determine a way to catch this and remove the control module from the application. A: Running controls in different threads should be possible. A little hacking and windows overrides and it should be doable. I am thinking you can create a GUI control in another thread, then move it to a common window (main gui thread) with the win api SetParent. SetParent can be used to "hijack" other windows, so you should be able to grab the controls this way. But of course there might be focus issues and other issues, but might be doable. I used that once to put my own button onto MS Messenger.
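For the separate-windows-on-separate-threads idea above, the usual pattern is to give each top-level Form its own STA thread and message loop. This is a hedged sketch rather than production code; the helper name is invented and the usual cross-thread access rules still apply.

// Hedged sketch: run one top-level Form per thread, each with its own message pump,
// so a frozen module window cannot block the main UI thread's loop.
using System;
using System.Threading;
using System.Windows.Forms;

static class ModuleHost
{
    public static Thread StartOnOwnThread(Func<Form> createForm)
    {
        var thread = new Thread(() =>
        {
            // Application.Run starts a message loop that ends when the form closes.
            Application.Run(createForm());
        });
        thread.SetApartmentState(ApartmentState.STA); // Windows Forms requires STA
        thread.IsBackground = true;
        thread.Start();
        return thread;
    }
}

// Usage (illustrative): ModuleHost.StartOnOwnThread(() => new ModuleWindow());
// Controls created on different threads must not be re-parented into one window,
// and any cross-thread UI access still needs Control.Invoke/BeginInvoke.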
{ "language": "en", "url": "https://stackoverflow.com/questions/106378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: C# - Can publicly inherited methods be hidden (e.g. made private to derived class) Suppose I have BaseClass with public methods A and B, and I create DerivedClass through inheritance. e.g. public DerivedClass : BaseClass {} Now I want to develop a method C in DerivedClass that uses A and B. Is there a way I can override methods A and B to be private in DerivedClass so that only method C is exposed to someone who wants to use my DerivedClass? A: It's not possible, why? In C#, it is forced upon you that if you inherit public methods, you must make them public. Otherwise they expect you not to derive from the class in the first place. Instead of using the is-a relationship, you would have to use the has-a relationship. The language designers don't allow this on purpose so that you use inheritance more properly. For example one might accidentally confuse a class Car to derive from a class Engine to get it's functionality. But an Engine is functionality that is used by the car. So you would want to use the has-a relationship. The user of the Car does not want to have access to the interface of the Engine. And the Car itself should not confuse the Engine's methods with it's own. Nor Car's future derivations. So they don't allow it to protect you from bad inheritance hierarchies. What should you do instead? Instead you should implement interfaces. This leaves you free to have functionality using the has-a relationship. Other languages: In C++ you simply specify a modifier before the base class of private, public or protected. This makes all members of the base that were public to that specified access level. It seems silly to me that you can't do the same in C#. The restructured code: interface I { void C(); } class BaseClass { public void A() { MessageBox.Show("A"); } public void B() { MessageBox.Show("B"); } } class Derived : I { public void C() { b.A(); b.B(); } private BaseClass b; } I understand the names of the above classes are a little moot :) Other suggestions: Others have suggested to make A() and B() public and throw exceptions. But this doesn't make a friendly class for people to use and it doesn't really make sense. A: That sounds like a bad idea. Liskov would not be impressed. If you don't want consumers of DerivedClass to be able to access methods DeriveClass.A() and DerivedClass.B() I would suggest that DerivedClass should implement some public interface IWhateverMethodCIsAbout and the consumers of DerivedClass should actually be talking to IWhateverMethodCIsAbout and know nothing about the implementation of BaseClass or DerivedClass at all. A: What you need is composition not inheritance. class Plane { public Fly() { .. } public string GetPilot() {...} } Now if you need a special kind of Plane, such as one that has PairOfWings = 2 but otherwise does everything a plane can.. You inherit plane. By this you declare that your derivation meets the contract of the base class and can be substituted without blinking wherever a base class is expected. e.g. LogFlight(Plane) would continue to work with a BiPlane instance. However if you just need the Fly behavior for a new Bird you want to create and are not willing to support the complete base class contract, you compose instead. In this case, refactor the behavior of methods to reuse into a new type Flight. Now create and hold references to this class in both Plane and Bird. You don't inherit because the Bird does not support the complete base class contract... ( e.g. it cannot provide GetPilot() ). 
For the same reason, you cannot reduce the visibility of base class methods when you override.. you can override and make a base private method public in the derivation but not vice versa. e.g. In this example, if I derive a type of Plane "BadPlane" and then override and "Hide" GetPilot() - make it private; a client method LogFlight(Plane p) will work for most Planes but will blow up for "BadPlane" if the implementation of LogFlight happens to need/call GetPilot(). Since all derivations of a base class are expected to be 'substitutable' wherever a base class param is expected, this has to be disallowed. A: @Brian R. Bondy pointed me to an interesting article on Hiding through inheritance and the new keyword. http://msdn.microsoft.com/en-us/library/aa691135(VS.71).aspx So as workaround I would suggest: class BaseClass { public void A() { Console.WriteLine("BaseClass.A"); } public void B() { Console.WriteLine("BaseClass.B"); } } class DerivedClass : BaseClass { new public void A() { throw new NotSupportedException(); } new public void B() { throw new NotSupportedException(); } public void C() { base.A(); base.B(); } } This way code like this will throw a NotSupportedException: DerivedClass d = new DerivedClass(); d.A(); A: When you, for instance, try to inherit from a List<object>, and you want to hide the direct Add(object _ob) member: // the only way to hide [Obsolete("This is not supported in this class.", true)] public new void Add(object _ob) { throw NotImplementedException("Don't use!!"); } It's not really the most preferable solution, but it does the job. Intellisense still accepts, but at compile time you get an error: error CS0619: 'TestConsole.TestClass.Add(TestConsole.TestObject)' is obsolete: 'This is not supported in this class.' A: The only way to do this that I know of is to use a Has-A relationship and only implement the functions you want to expose. A: Hiding is a pretty slippery slope. The main issues, IMO, are: * *It's dependent upon the design-time declaration type of the instance, meaning if you do something like BaseClass obj = new SubClass(), then call obj.A(), hiding is defeated. BaseClass.A() will be executed. *Hiding can very easily obscure behavior (or behavior changes) in the base type. This is obviously less of a concern when you own both sides of the equation, or if calling 'base.xxx' is part of your sub-member. *If you actually do own both sides of the base/sub-class equation, then you should be able to devise a more manageable solution than institutionalized hiding/shadowing. A: I would say that if you have a codebase that you are wanting to do this with, it is not the best designed code base. It's typically a sign of a class in one level of the heirarchy needing a certain public signature while another class derived from that class doesn't need it. An upcoming coding paradigm is called "Composition over Inheritance." This plays directly off of the principles of object-oriented development (especially the Single Responsibility Principle and Open/Closed Principle). Unfortunately, the way a lot of us developers were taught object-orientation, we have formed a habit of immediately thinking about inheritance instead of composition. We tend to have larger classes that have many different responsibilities simply because they might be contained with the same "Real World" object. This can lead to class hierarchies that are 5+ levels deep. 
An unfortunate side-effect that developers don't normally think about when dealing with inheritance is that inheritance forms one of the strongest forms of dependencies that you can ever introduce into your code. Your derived class is now strongly dependant on the class it was inherited from. This can make your code more brittle in the long run and lead to confounding problems where changing a certain behavior in a base class breaks derived classes in obscure ways. One way to break your code up is through interfaces like mentioned in another answer. This is a smart thing to do anyways as you want a class's external dependencies to bind to abstractions, not concrete/derived types. This allows you to change the implementation without changing the interface, all without effecting a line of code in your dependent class. I would much rather than maintain a system with hundreds/thousands/even more classes that are all small and loosely-coupled, than deal with a system that makes heavy use of polymorphism/inheritance and has fewer classes that are more tightly coupled. Perhaps the best resource out there on object-oriented development is Robert C. Martin's book, Agile Software Development, Principles, Patterns, and Practices. A: If they're defined public in the original class, you cannot override them to be private in your derived class. However, you could make the public method throw an exception and implement your own private function. Edit: Jorge Ferreira is correct. A: While the answer to the question is "no", there is one tip I wish to point out for others arriving here (given that the OP was sort of alluding to assembly access by 3rd parties). When others reference an assembly, Visual Studio should be honoring the following attribute so it will not show in intellisense (hidden, but can STILL be called, so beware): [System.ComponentModel.EditorBrowsable(System.ComponentModel.EditorBrowsableState.Never)] If you had no other choice, you should be able to use new on a method that hides a base type method, return => throw new NotSupportedException();, and combine it with the attribute above. Another trick depends on NOT inheriting from a base class if possible, where the base has a corresponding interface (such as IList<T> for List<T>). Implementing interfaces "explicitly" will also hide those methods from intellisense on the class type. For example: public class GoodForNothing: IDisposable { void IDisposable.Dispose() { ... } } In the case of var obj = new GoodForNothing(), the Dispose() method will not be available on obj. However, it WILL be available to anyone who explicitly type-casts obj to IDisposable. In addition, you could also wrap a base type instead of inheriting from it, then hide some methods: public class MyList<T> : IList<T> { List<T> _Items = new List<T>(); public T this[int index] => _Items[index]; public int Count => _Items.Count; public void Add(T item) => _Items.Add(item); [System.ComponentModel.EditorBrowsable(System.ComponentModel.EditorBrowsableState.Never)] void ICollection<T>.Clear() => throw new InvalidOperationException("No you may not!"); // (hidden) /*...etc...*/ }
{ "language": "en", "url": "https://stackoverflow.com/questions/106383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "70" }
Q: Is it possible to detect 32 bit vs 64 bit in a bash script? I am writing a bash script to deal with some installations in an automated way... I have the possibility of getting one such program in 32 or 64 bit binary... is it possible to detect the machine architecture from bash so I can select the correct binary? This will be for Ubuntu machines. A: MACHINE_TYPE=`uname -m` if [ ${MACHINE_TYPE} == 'x86_64' ]; then # 64-bit stuff here else # 32-bit stuff here fi A: slot8(msd):/opt # uname -a Linux slot8a 2.6.21_mvlcge500-electra #1 SMP PREEMPT Wed Jun 18 16:29:33 \ EDT 2008 ppc64 GNU/Linux Remember, there are other CPU architectures than Intel/AMD... A: getconf LONG_BIT seems to do the trick as well, which makes it even easier to check this since this returns simply the integer instead of some complicated expression. if [ `getconf LONG_BIT` = "64" ] then echo "I'm 64-bit" else echo "I'm 32-bit" fi A: Does uname -a give you anything you can use? I don't have a 64-bit machine to test on. Note from Mike Stone: This works, though specifically uname -m Will give "x86_64" for 64 bit, and something else for other 32 bit types (in my 32 bit VM, it's "i686"). A: You could do something like this: if $(uname -a | grep 'x86_64'); then echo "I'm 64-bit" else echo "I'm 32-bit" fi A: Be careful, in a chrooted 32-bit env, the uname is still answering like the 64-bit host system. getconf LONG_BIT works fine. file /bin/cp or any well-known executable or library should do the trick if you don't have getconf (but you can store programs you can't use, and maybe there are not at this place). A: You can use , the follow script (i extract this from officially script of "ioquake3") : for example archs=`uname -m` case "$archs" in i?86) archs=i386 ;; x86_64) archs="x86_64 i386" ;; ppc64) archs="ppc64 ppc" ;; esac for arch in $archs; do test -x ./ioquake3.$arch || continue exec ./ioquake3.$arch "$@" done ================================================================================== I'm making a script to detect the "Architecture", this is my simple code (I am using it with wine , for my Windows Games , under Linux , by each game , i use diferrent version of WineHQ, downloaded from "PlayOnLinux" site. 
# First Obtain "kernel" name KERNEL=$(uname -s) if [ $KERNEL = "Darwin" ]; then KERNEL=mac elif [ $Nucleo = "Linux" ]; then KERNEL=linux elif [ $Nucleo = "FreeBSD" ]; then KERNEL=linux else echo "Unsupported OS" fi # Second get the right Arquitecture ARCH=$(uname -m) if [ $ARCH = "i386" ]; then PATH="$PWD/wine/$KERNEL/x86/bin:$PATH" export WINESERVER="$PWD/wine/$KERNEL/x86/bin/wineserver" export WINELOADER="$PWD/wine/$KERNEL/x86/bin/wine" export WINEPREFIX="$PWD/wine/data" export WINEDEBUG=-all:$WINEDEBUG ARCH="32 Bits" elif [ $ARCH = "i486" ]; then PATH="$PWD/wine/$KERNEL/x86/bin:$PATH" export WINESERVER="$PWD/wine/$KERNEL/x86/bin/wineserver" export WINELOADER="$PWD/wine/$KERNEL/x86/bin/wine" export WINEPREFIX="$PWD/wine/data" export WINEDEBUG=-all:$WINEDEBUG ARCH="32 Bits" elif [ $ARCH = "i586" ]; then PATH="$PWD/wine/$KERNEL/x86/bin:$PATH" export WINESERVER="$PWD/wine/$KERNEL/x86/bin/wineserver" export WINELOADER="$PWD/wine/$Nucleo/x86/bin/wine" export WINEPREFIX="$PWD/wine/data" export WINEDEBUG=-all:$WINEDEBUG ARCH="32 Bits" elif [ $ARCH = "i686" ]; then PATH="$PWD/wine/$KERNEL/x86/bin:$PATH" export WINESERVER="$PWD/wine/$KERNEL/x86/bin/wineserver" export WINELOADER="$PWD/wine/$KERNEL/x86/bin/wine" export WINEPREFIX="$PWD/wine/data" export WINEDEBUG=-all:$WINEDEBUG ARCH="32 Bits" elif [ $ARCH = "x86_64" ]; then export WINESERVER="$PWD/wine/$KERNEL/x86_64/bin/wineserver" export WINELOADER="$PWD/wine/$KERNEL/x86_64/bin/wine" export WINEPREFIX="$PWD/wine/data" export WINEDEBUG=-all:$WINEDEBUG ARCH="64 Bits" else echo "Unsoportted Architecture" fi ================================================================================== Now i use this in my bash scripts , because works better in any distro . # Get the Kernel Name Kernel=$(uname -s) case "$Kernel" in Linux) Kernel="linux" ;; Darwin) Kernel="mac" ;; FreeBSD) Kernel="freebsd" ;; * ) echo "Your Operating System -> ITS NOT SUPPORTED" ;; esac echo echo "Operating System Kernel : $Kernel" echo # Get the machine Architecture Architecture=$(uname -m) case "$Architecture" in x86) Architecture="x86" ;; ia64) Architecture="ia64" ;; i?86) Architecture="x86" ;; amd64) Architecture="amd64" ;; x86_64) Architecture="x86_64" ;; sparc64) Architecture="sparc64" ;; * ) echo "Your Architecture '$Architecture' -> ITS NOT SUPPORTED." ;; esac echo echo "Operating System Architecture : $Architecture" echo A: Yes, uname -a should do the trick. see: http://www.stata.com/support/faqs/win/64bit.html.
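Pulling the pieces above together, a short hedged variant that prefers getconf (which also behaves inside a 32-bit chroot) and falls back to uname might look like this; the list of 64-bit machine names is illustrative, not exhaustive:

#!/bin/sh
# Hedged sketch combining the approaches above.
if command -v getconf >/dev/null 2>&1 && [ "$(getconf LONG_BIT)" = "64" ]; then
    BITS=64
else
    case "$(uname -m)" in
        x86_64|amd64|ia64|ppc64|sparc64) BITS=64 ;;
        *)                               BITS=32 ;;
    esac
fi
echo "Detected a ${BITS}-bit userland"
# Pick the matching binary to install (paths are placeholders):
# cp "myapp-${BITS}bit" /usr/local/bin/myapp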
{ "language": "en", "url": "https://stackoverflow.com/questions/106387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "74" }
Q: Flash toggle snapping long, long ago in a galaxy far far away, You used to be able to toggle snapping in Flash with the ctrl key. Let's say you were dragging the end of a line very close to another object. You could very easily hold the ctrl key down to shut off the snapping and get it in there nice and close without the snap. At some point, Macromedia removed this functionality. I'm wondering if that single-key-toggle-snapping functionality has gone somewhere else within the app or do I have to click through the menus every time? A: I don't think there is, I've scoured the docs and could not find anything about it. The weird thing is also that you can't set a keyboard shortcut for toggling it. The option is there, but it's grayed out for some reason. The best I could manage was setting a shortcut for the object-snapping (since that's what i use the most) and just toggling that. Also, you don't need to go through the menus, the magnet button on the toolbar toggles snapping too.
{ "language": "en", "url": "https://stackoverflow.com/questions/106395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Accessing an image in the project's resources? How do I access an image at run time that I have added to the project's resources? I would like to be able to do something like this: if (value) { picBox1.Image = Resources.imageA; } else { picBox2.Image = Resources.imageB; } A: something.Image = Namespace.ProjectName.Properties.Resources.ImageName;
{ "language": "en", "url": "https://stackoverflow.com/questions/106396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Fetch top X users, plus a specific user (if they're not in the top X) I have a list of ranked users, and would like to select the top 50. I also want to make sure one particular user is in this result set, even if they aren't in the top 50. Is there a sensible way to do this in a single mysql query? Or should I just check the results for the particular user and fetch him separately, if necessary? Thanks! A: If I understand correctly, you could do: select * from users order by rank desc limit 0, 49 union select * from users where user = x This way you get the 49 top users plus your particular user. A: Regardless of whether a single, fancy SQL query could be made, the most maintainable code would probably be two queries: select user from users where id = "fred"; select user from users where id != "fred" order by rank limit 49; Of course "fred" (or whomever) would usually be replaced by a placeholder but the specifics depend on the environment. A: declare @topUsers table( userId int primary key, username varchar(25) ) insert into @topUsers select top 50 userId, userName from Users order by rank desc insert into @topUsers select userID, userName from Users where userID = 1234 --userID of special user select * from @topUsers A: The simplest solution depends on your requirements, and what your database supports. If you don't mind the possibility of having duplicate results, then a simple union (as Mariano Conti demonstrated) is fine. Otherwise, you could do something like select distinct <columnlist> from (select * from users order by rank desc limit 0, 49 union select * from users where user = x) if your database supports it.
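A hedged MySQL-flavoured variant of the union idea above, which also avoids a duplicate row when the pinned user is already in the top 50 (table and column names are assumed):

-- Top 50 by rank, always including user 1234; UNION without ALL removes the
-- duplicate if that user is already in the top 50, so the result is 50 or 51 rows.
-- Parentheses let each branch keep its own ORDER BY / LIMIT; backticks around
-- `rank` because it became a reserved word in MySQL 8.
(SELECT user_id, user_name, `rank`
   FROM users
  ORDER BY `rank` DESC
  LIMIT 50)
UNION
(SELECT user_id, user_name, `rank`
   FROM users
  WHERE user_id = 1234)
ORDER BY `rank` DESC;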
{ "language": "en", "url": "https://stackoverflow.com/questions/106400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Validate an incoming SOAP request to the WSDL in PHP The built-in PHP extension for SOAP doesn't validate everything in the incoming SOAP request against the XML Schema in the WSDL. It does check for the existence of basic entities, but when you have something complicated like simpleType restrictions the extension pretty much ignores their existence. What is the best way to validate the SOAP request against XML Schema contained in the WSDL? A: Besides the native PHP5 SOAP libs, I can also tell you that neither the PEAR nor Zend SOAP libs will do schema validation of messages at present. (I don't know of any PHP SOAP implementation that does, unfortunately.) What I would do is load the XML message into a DOMDocument object and use DOMDocument's methods to validate against the schema. A: Been digging around on this matter a view hours. Neither the native PHP SoapServer nore the NuSOAP Library does any Validation. PHP SoapServer simply makes a type cast. For Example if you define <xsd:element name="SomeParameter" type="xsd:boolean" /> and submit <get:SomeParameter>dfgdfg</get:SomeParameter> you'll get the php Type boolean (true) NuSOAP simply casts everthing to string although it recognizes simple types: from the nuSOAP debug log: nusoap_xmlschema: processing typed element SomeParameter of type http://www.w3.org/2001/XMLSchema:boolean So the best way is joelhardi solution to validate yourself or use some xml Parser like XERCES A: Typically one doesn't validate against the WSDL. If the WSDL is designed properly there should be an underlying xml schema (XSD) to validate the body of the request against. Your XML parser should be able to do this. The rest is up to how you implement the web service and which SOAP engine you are using. I am not directly familiar with the PHP engine. For WSDL/interface level "validation" I usually do something like this: * *Does the body of the request match a known request type and is it valid (by XSD)? *Does the message make sense in this context and can i map it to an operation/handler? 
*If so, start processing it *Otherwise: error A: Using native SoapServer PHP is a little bit tricky but is possible too: function validate(string $xmlEnvelope, string $wsdl) : ?array{ libxml_use_internal_errors(true); //extracting schema from WSDL $xml = new DOMDocument(); $wsdl_string = file_get_contents($wsdl); //extracting namespaces from WSDL $outer = new SimpleXMLElement($wsdl_string); $wsdl_namespaces = $outer->getDocNamespaces(); //extracting the schema tag inside WSDL $xml->loadXML($wsdl_string); $xpath = new DOMXPath($xml); $xpath->registerNamespace('xsd', 'http://www.w3.org/2001/XMLSchema'); $schemaNode = $xpath->evaluate('//xsd:schema'); $schemaXML = ""; foreach ($schemaNode as $node) { //add namespaces from WSDL to schema foreach($wsdl_namespaces as $prefix => $ns){ $node->setAttribute("xmlns:$prefix", $ns); } $schemaXML .= simplexml_import_dom($node) ->asXML(); } //capturing de XML envelope $xml = new DOMDocument(); $xml->loadXML($xmlEnvelope); //extracting namespaces from soap Envelope $outer = new SimpleXMLElement($xmlEnvelope); $envelope_namespaces = $outer->getDocNamespaces(); $xpath = new DOMXPath($xml); $xpath->registerNamespace('soapEnv', 'http://schemas.xmlsoap.org/soap/envelope/'); $envelopeBody = $xpath->evaluate('//soapEnv:Body/*[1]'); $envelopeBodyXML = ""; foreach ($envelopeBody as $node) { //add namespaces from envelope to the body content foreach($envelope_namespaces as $prefix => $ns){ $node->setAttribute("xmlns:$prefix", $ns); } $envelopeBodyXML .= simplexml_import_dom($node) ->asXML(); } $doc = new DOMDocument(); $doc->loadXML($envelopeBodyXML); // load xml $is_valid_xml = $doc->schemaValidateSource($schemaXML); // path to xsd file return libxml_get_errors(); } and inside your SoapServer function implementation: function myFunction($param) { $xmlEnvelope = file_get_contents("php://input"); $errors = validate($xmlEnvelope, $wsdl); } A: Some time ago I've create a proof of concept web service with PHP using NuSOAP. I don't know if it validates the input, but I would assume it does. A: I was not able to find any simple way to perform the validation and in the end had validation code in the business logic.
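As a minimal illustration of the DOMDocument idea suggested earlier in this thread (the XSD file name and the decision to validate only the first Body child are assumptions), the validation step could look roughly like this:

<?php
// Hedged sketch: validate the first child of the SOAP Body against a standalone XSD.
libxml_use_internal_errors(true);

$envelope = file_get_contents('php://input');

$dom = new DOMDocument();
$dom->loadXML($envelope);

$xpath = new DOMXPath($dom);
$xpath->registerNamespace('soap', 'http://schemas.xmlsoap.org/soap/envelope/');
$bodyChildren = $xpath->query('//soap:Body/*[1]');

if ($bodyChildren->length === 1) {
    // Re-wrap the payload in its own document so schemaValidate sees a proper root.
    $payload = new DOMDocument();
    $payload->appendChild($payload->importNode($bodyChildren->item(0), true));

    if (!$payload->schemaValidate('service.xsd')) {
        foreach (libxml_get_errors() as $error) {
            error_log(trim($error->message));
        }
        // Return a SOAP fault to the caller here.
    }
}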
{ "language": "en", "url": "https://stackoverflow.com/questions/106401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Can Spring Parse and Inject Properties Files? I already know how to: Load properties files into my Spring configuration using: <context:property-placeholder location="aaa/bbb/ccc/stuff.properties"/> Build properties objects on the fly using: <props><prop key="abc">some value</prop></props> But what I cant do, and would be really useful, is to have Spring load a properties file and then build the matching properties object. I could then inject this into a bean in the normal way. I've searched for this elsewhere without success. Any ideas? A: Take a look at util:properties <util:properties id="myProperties" location="classpath:com/foo/my.properties"/> Then, to inject the Properties into your Spring-managed Bean, it's as simple as this: @Resource(name = "myProperties") private Properties myProperties;
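If you prefer plain XML wiring to the @Resource annotation, the same util:properties bean can be handed to any other bean as an ordinary dependency. A hedged sketch (bean and property names are invented, and the util namespace must be declared on the beans element):

<util:properties id="myProperties" location="classpath:com/foo/my.properties"/>

<bean id="mailService" class="com.foo.MailService">
    <!-- Setter injection: MailService needs a setSettings(java.util.Properties) method -->
    <property name="settings" ref="myProperties"/>
</bean>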
{ "language": "en", "url": "https://stackoverflow.com/questions/106402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: What is the recommended version of GNU autotools? We maintain a RPM based software distribution at work so that we have a common set of software across all the platforms that we support. As a result we have to build a lot of third party software, and frequently find situations where we need to run autoconf/automake/libtoolize/etc to get it to build on Solaris or another platform. I've had very mixed results with this. It seems that these tools are fairly brittle and frequently the files only work with the version of autoconf/automake/etc that they were originally written for. Ideally I'd like to only have to support one version of the GNU autotools, but I get the impression that I'm really going to end up having to have a copy of every version lying around. Is this unusual, or do other people have the same problems? Is there a subset of the versions of autotools that will cover all cases? A: It is true that autotools can be brittle and version specific. But remember that you only need to use these tools on your development machines. Deploying the project to the target machine doesn't require that any of the tools are installed on the target. Even test machines don't need any of the tools. They are really only need to run when the dependencies change, such as adding additional files or libraries to the project. We have been using these tools for inhouse projects for many years, and haven't come across a better solution. If you are in a Unix world, don't underestimate the benefit of having a system where configure; make; make install just works. A: Your experiences are not unusual. Autotools is brittle like that, specially for complex projects. There seems to be no alternative for having a lot of versions of it around, sadly.
{ "language": "en", "url": "https://stackoverflow.com/questions/106405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is there a good general method for debugging C++ macros? In general, I occasionally have a chain of nested macros with a few preprocessor conditional elements in their definitions. These can be painful to debug since it's hard to directly see the actual code being executed. A while ago I vaguely remember finding a compiler (gcc) flag to expand them, but I had trouble getting this to work in practice. A: For MSVC users, you can right-click on the file/project, view the settings and change the file properties to output preprocessed source (which typically ends up in the obj directory). A: This might not be applicable in your situation, but macros really do hamper debugging and often are overused and avoidable. Can you replace them with inline functions or otherwise get rid of them altogether? A: You should probably start moving away from macros and start using inline functions and templates. Macros are an old tool, and only sometimes the right one. As a last resort, remember printf is your friend (and actually printf isn't that bad a friend when you're doing multithreaded stuff). A: gcc -E will output the preprocessed source to stdout. A: Debug the disassembly with the symbols loaded. A: gcc -save-temps will write out a .i (or .ii file for C++) which is the output of the C preprocessor, before it gets handed to the compiler. This can often be enlightening. A: GCC and compatible compilers use the -E option to output the preprocessed source to standard out. gcc -E foo.cpp Sun Studio also supports this flag: CC -E foo.cpp But even better is -xdumpmacros. You can find more information in Sun's docs.
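To make the gcc options above concrete, a few hedged command lines (file names invented) that help when chasing a macro expansion:

# Preprocess only; -P drops the #line markers, -C keeps comments for orientation.
g++ -E -P -C foo.cpp > foo.ii

# Keep the intermediate .ii file around while still compiling normally.
g++ -save-temps -c foo.cpp

# Dump every macro definition the compiler and the included headers leave in scope.
g++ -dM -E foo.cpp | sort | less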
{ "language": "en", "url": "https://stackoverflow.com/questions/106412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How to refer to the path to an assembly in the GAC within registry entries added by a Windows Installer package? I have a .NET assembly that contains classes to be registered as ServicedComponent through EnterpriseServices (COM+) and invoked through COM RPC by a third-party application. Therefore, I need to both add it to the GAC and add a registry entry under HKEY_CLASSES_ROOT\CLSID\{clsid}\CodeBase with the path to the assembly DLL within the GAC folder. (I can't rely on regsvcs to do it, because this is a 32-bit assembly --- it relies on 32-bit third-party components --- and the third-party application I referred to before cannot see classes in Wow6432Node) So the question is: Are paths to assemblies to be created in the GAC, or at least the path to the GAC folder itself, available in Windows Installer as properties that can be used in values of registry keys etc.? A: If you have a component per file, which you should anyway, the KeyPath of the component points to the location where the file gets installed (in this case the GAC). You can use the component key as a token in the value field of the entry in the Registry table in your MSI. Assuming you have an assembly with a File key in the File table of "assmb.dll" and its corresponding component, also "assmb.dll". You can set the value field in the Registry table to register your assembly to [$assmb.dll], and it will get resolved to the install location of the assembly. If this directory is the GAC, it will be resolved to the location of the GAC. You can find more information about Formatted fields in an MSI here.
{ "language": "en", "url": "https://stackoverflow.com/questions/106414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What's a good source code search engine? The codebase I work on is huge, and grepping it takes about 20 minutes. I'm looking for a good web-based source code search engine.. something like an intranet version of koders.com. The only thing I've found is Krugle Enterprise Edition, which doesn't post its prices... and if you have to ask, you can't afford it. I'd really prefer a plain old search engine, without a lot of other bells and whistles. The source is mostly ASP.NET/C# and Javascript. A: I recommend OpenGrok. There are some other engines, here's a quick review of them. A: 20 minutes is outrageous! I'm working with a million+ line source code base these days and grepping takes a few seconds at most (I use ack). Our home directories are stored on a file server and mounted over NFS, and to speed up grepping we do that while logged in to the file server. I'm not sure how long it takes over NFS, but it's certainly longer. We also do source control operations while logged in to the file server, for the same performance reasons. A: On Linux I use the GNU ID Utils These have similar functions to grep but work from an index so they are incredibly fast. You run mkid to create an index and then one of the other utilities such as "gid" which is the ID Tools version of grep to grep across the index. I have a cron job that runs mkid occasionally. The ID tools work on Windows as well, either with cygwin or as a standard windows program A: Lxr works great on big code bases, as proved with the linux kernel. I think it's only for C (you didn't specify the languages used). A: If you have that much source code, you may need to put a bit of time into setting up a search engine to index it. I would recommend Lucene - its free, its fast, it is is pretty easy to set up a powerful index on any content for anyone with programming experience. http://lucene.apache.org/ A: Since you're saying 'grepping' I imagine you're not disinterested in command-line solutions. A tool like ctags will index and search C# and JavaScript codebases (among many others). What's very neat about ctags is that it can be combined with vim with either the taglist plugin to allow source code browsing or with vim omnicomplete to enable code completion. A: I've used cs2project for a while, it's an open source c# code search engine based on Lucene.NET. Unfortunately it's no longer being developed. A: I have used OpenGrok before and was quite happy with it. Another alternative is: Gonzui http://gonzui.sourceforge.net/screenshots.html (source: sourceforge.net) A: See our SD Source Code Search Engine. Language aware and handles many languages (C, C++, C#, Java, ObjectiveC, PHP, VB.net, VB6, Ada, Fortran, COBOL, ...). Takes 2.8 seconds to search across the Linux Kernal (7.3 million lines, 18000+ files). Because it is language aware, it can ignore langauge elements irrelevant to your search (e.g., ignore comments, formatting and whitespace if you are only interested in an identifier or an expression). It can search inside identifiers, strings and comments. It has a full regular-expression string search option if you really want to do that. It has been used for systems of 10s of millions of lines of code, and in one case we know about, a system with over a million files. A: I had a similar problem. I work for a software company where the project involves c#, c++, asp.net, db scripts and even vb6 source code (yeah it is a headache compiling multiple vb6 projects when there is no concept of solution like in later version of visual studio...) 
I have been using Visual Studio 2010 but had to use a 3rd-party text editor to search in db scripts and vb6 source code. I did some research and found KodeEx (http://kodeex.com) and have been happy with it. It is an index-based source code search tool. You don't have to build anything (unlike what other people suggested you do with Lucene; Lucene is a nice open source project, by the way =) ). Just install it and let it index your projects. After that it usually returns results within a few seconds. A: Perhaps you should invest some time and/or money in an editor or IDE that supports symbol tagging. You only need to make one pass through the entire source tree to tag it, and thereafter the editor uses an index search or map lookup to find the symbol definition or references. Some examples of editors or IDEs that support tagging are Eclipse, Visual Studio, and SlickEdit. Some IDEs might call the feature Symbol Browser or something similar.
{ "language": "en", "url": "https://stackoverflow.com/questions/106419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Load external JS from bookmarklet? How can I load an external JavaScript file using a bookmarklet? This would overcome the URL length limitations of IE and generally keep things cleaner. A: 2015 Update Content security policy will prevent this from working in many sites now. For example, the code below won't work on Facebook. 2008 answer Use a bookmarklet that creates a script tag which includes your external JS. As a sample: javascript:(function(){document.body.appendChild(document.createElement('script')).src='** your external file URL here **';})(); A: Firefox and perhaps others support multiline bookmarklets, no need for one liner. When you paste in the code it just replaces newlines with spaces. javascript: var q = document.createElement('script'); q.src = 'http://svnpenn.github.io/bm/yt.js'; document.body.appendChild(q); void 0; Example A: If I can add method tested in FF & Chrome (for readibility split to multiple lines): javascript:var r = new XMLHttpRequest(); r.open("GET", "https://...my.js", true); r.onloadend = function (oEvent) { new Function(r.responseText)(); /* now you can use your code */ }; r.send(); undefined A: It is no longer recommended to do this as CSP on most website will make it fail. But if you still want to use it: 2022 example (() => { const main = () => { // write your code here alert($('body')[0].innerHTML) } const scriptEle = document.createElement('script') scriptEle.onload = main scriptEle.src = 'https://cdn.jsdelivr.net/npm/jquery@3.6.1/dist/jquery.min.js' document.body.appendChild(scriptEle) })(); A: I always prefer to use a popular open source project loadjs it is cross browser tested and has more functionality/comfort features. So the code will look like this: loadjs=function(){function e(e,n){var t,r,i,c=[],o=(e=e.push?e:[e]).length,f=o;for(t=function(e,t){t.length&&c.push(e),--f||n(c)};o--;)r=e[o],(i=s[r])?t(r,i):(u[r]=u[r]||[]).push(t)}function n(e,n){if(e){var t=u[e];if(s[e]=n,t)for(;t.length;)t[0](e,n),t.splice(0,1)}}function t(e,n,r,i){var o,s,u=document,f=r.async,a=(r.numRetries||0)+1,h=r.before||c;i=i||0,/(^css!|\.css$)/.test(e)?(o=!0,(s=u.createElement("link")).rel="stylesheet",s.href=e.replace(/^css!/,"")):((s=u.createElement("script")).src=e,s.async=void 0===f||f),s.onload=s.onerror=s.onbeforeload=function(c){var u=c.type[0];if(o&&"hideFocus"in s)try{s.sheet.cssText.length||(u="e")}catch(e){u="e"}if("e"==u&&(i+=1)<a)return t(e,n,r,i);n(e,u,c.defaultPrevented)},!1!==h(e,s)&&u.head.appendChild(s)}function r(e,n,r){var i,c,o=(e=e.push?e:[e]).length,s=o,u=[];for(i=function(e,t,r){if("e"==t&&u.push(e),"b"==t){if(!r)return;u.push(e)}--o||n(u)},c=0;c<s;c++)t(e[c],i,r)}function i(e,t,i){var s,u;if(t&&t.trim&&(s=t),u=(s?i:t)||{},s){if(s in o)throw"LoadJS";o[s]=!0}r(e,function(e){e.length?(u.error||c)(e):(u.success||c)(),n(s,e)},u)}var c=function(){},o={},s={},u={};return i.ready=function(n,t){return e(n,function(e){e.length?(t.error||c)(e):(t.success||c)()}),i},i.done=function(e){n(e,[])},i.reset=function(){o={},s={},u={}},i.isDefined=function(e){return e in o},i}(); loadjs('//path/external/js', { success: function () { console.log('something to run after the script was loaded'); });
{ "language": "en", "url": "https://stackoverflow.com/questions/106425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "75" }
Q: What makes Ometa special? Ometa is "a new object-oriented language for pattern matching." I've encountered pattern matching in languages like Oz tools to parse grammars like Lexx/Yacc or Pyparsing before. Despite looking at example code, reading discussions, and talking to a friend, I still am not able to get a real understanding of what makes Ometa special (or at least, why some people think it is). Any explanation? A: It's a metalanguage, from what I can tell. You can create new language constructs, and create DSLs; but the most compelling thing is that you can subclass from existing parsers to extend a language. That's what I can remember about it, anyway. I found this to be interesting: http://www.moserware.com/2008/06/ometa-who-what-when-where-why.html A: Also, most important to me, the Squeak port of Ometa allows for left-recursive rules. From its PEG heritage it gets backtracking and unlimited lookahead. Memoization of previous parse results allows for linear parse times (nearly all the time (*)). Higher-order productions allow one to easily refactor a grammar. This paper - Packrat Parsers Can Support Left Recursion - explains the left recursive properties. (*) Section 5 of the paper explains that one can suffer superlinear parse times, but this problem doesn't manifest in practical grammars.
{ "language": "en", "url": "https://stackoverflow.com/questions/106431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: EJB3 Transaction Propagation I have a stateless bean something like: @Stateless public class MyStatelessBean implements MyStatelessLocal, MyStatelessRemote { @PersistenceContext(unitName="myPC") private EntityManager mgr; @TransationAttribute(TransactionAttributeType.SUPPORTED) public void processObjects(List<Object> objs) { // this method just processes the data; no need for a transaction for(Object obj : objs) { this.process(obj); } } @TransationAttribute(TransactionAttributeType.REQUIRES_NEW) public void process(Object obj) { // do some work with obj that must be in the scope of a transaction this.mgr.merge(obj); // ... this.mgr.merge(obj); // ... this.mgr.flush(); } } The typically usage then is the client would call processObjects(...), which doesn't actually interact with the entity manager. It does what it needs to do and calls process(...) individually for each object to process. The duration of process(...) is relatively short, but processObjects(...) could take a very long time to run through everything. Therefore I don't want it to maintain an open transaction. I do need the individual process(...) operations to operate within their own transaction. This should be a new transaction for every call. Lastly I'd like to keep the option open for the client to call process(...) directly. I've tried a number of different transaction types: never, not supported, supported (on processObjects) and required, requires new (on process) but I get TransactionRequiredException every time merge() is called. I've been able to make it work by splitting up the methods into two different beans: @Stateless @TransationAttribute(TransactionAttributeType.NOT_SUPPORTED) public class MyStatelessBean1 implements MyStatelessLocal1, MyStatelessRemote1 { @EJB private MyStatelessBean2 myBean2; public void processObjects(List<Object> objs) { // this method just processes the data; no need for a transaction for(Object obj : objs) { this.myBean2.process(obj); } } } @Stateless public class MyStatelessBean2 implements MyStatelessLocal2, MyStatelessRemote2 { @PersistenceContext(unitName="myPC") private EntityManager mgr; @TransationAttribute(TransactionAttributeType.REQUIRES_NEW) public void process(Object obj) { // do some work with obj that must be in the scope of a transaction this.mgr.merge(obj); // ... this.mgr.merge(obj); // ... this.mgr.flush(); } } but I'm still curious if it's possible to accomplish this in one class. It looks to me like the transaction manager only operates at the bean level, even when individual methods are given more specific annotations. So if I mark one method in a way to prevent the transaction from starting calling other methods within that same instance will also not create a transaction, no matter how they're marked? I'm using JBoss Application Server 4.2.1.GA, but non-specific answers are welcome / preferred. A: Matt, the question you ask is a pretty classic one, I think the self-reference solution by Herval/Pascal is neat. There is a more general solution not mentioned here. This is a case for EJB "user" transactions. Since you are in a session bean you can get the user transaction from the session context. 
Here's how your code will look with user transactions: // supposing processObjects defined on MyStatelessRemote1 and process defined on MyStatelessLocal1 @Stateless @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED) public class MyStatelessBean1 implements MyStatelessLocal1, MyStatelessRemote1 { @Resource private SessionContext ctx; @EJB private MyStatelessLocal1 myBean2; public void processObjects(List<Object> objs) { // this method just processes the data; no need for a transaction for(Object obj : objs) { this.myBean2.process(obj); } } public void process(Object obj) { UserTransaction tx = ctx.getUserTransaction(); tx.begin(); // do some work with obj that must be in the scope of a transaction this.mgr.merge(obj); // ... this.mgr.merge(obj); // ... this.mgr.flush(); tx.commit(); } } A: Another way to do it is actually having both methods on the same bean - and having an @EJB reference to itself! Something like this: // supposing processObjects defined on MyStatelessRemote1 and process defined on MyStatelessLocal1 @Stateless @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED) public class MyStatelessBean1 implements MyStatelessLocal1, MyStatelessRemote1 { @EJB private MyStatelessLocal1 myBean2; public void processObjects(List<Object> objs) { // this method just processes the data; no need for a transaction for(Object obj : objs) { this.myBean2.process(obj); } } @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW) public void process(Object obj) { // do some work with obj that must be in the scope of a transaction this.mgr.merge(obj); // ... this.mgr.merge(obj); // ... this.mgr.flush(); } } This way you actually 'force' the process() method to be accessed via the EJB stack of proxies, therefore putting the @TransactionAttribute into effect - and still keeping only one class. Phew! A: I think the thing is each bean is wrapped in a proxy that controls the transactional behaviour. When you call from one bean to another, you're going via that bean's proxy and the transaction behaviour can be changed by the proxy. But when a bean calls a method on itself with a different transaction attribute, the call doesn't go via the proxy, so the behaviour doesn't change. A: Matt, for what it's worth I've come to exactly the same conclusion as you. TransactionAttributeTypes are only taken into consideration when crossing bean boundaries. When calling methods within the same bean, TransactionAttributeTypes have no effect, no matter what types are put on the methods. As far as I can see there is nothing in the EJB Persistence Spec that specifies what the behaviour should be under these circumstances. I've also experienced this in JBoss. I'll also give it a try in Glassfish and let you know the results. A: In case someone comes across this one day: to avoid circular dependencies (allowing self reference for example) in JBoss use the annotation 'IgnoreDependency' for example: @IgnoreDependency @EJB MySelf myselfRef; A: I haven't tried it yet (I'm about to), but an alternative to injecting a self-reference via the @EJB annotation is the SessionContext.getBusinessObject() method. This would be another way to avoid the possibility of a circular reference blowing things up on you - although at least for stateless beans injection does seem to work. I'm working on a large system in which both techniques are employed (presumably by different developers), but I'm not sure which is the "correct" way to do it.
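For illustration, here is a minimal sketch of that SessionContext.getBusinessObject() variant, reusing the bean and interface names from the question. Treat it as a hedged sketch rather than a tested drop-in, since the exact business interfaces and persistence wiring depend on your application:

import java.util.List;
import javax.annotation.Resource;
import javax.ejb.SessionContext;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
public class MyStatelessBean1 implements MyStatelessLocal1 {

    @PersistenceContext(unitName = "myPC")
    private EntityManager mgr;

    @Resource
    private SessionContext ctx;

    public void processObjects(List<Object> objs) {
        // Ask the container for a proxied reference to this same bean so the
        // call below goes through the EJB proxy stack and the transaction
        // attribute on process() is honoured.
        MyStatelessLocal1 self = ctx.getBusinessObject(MyStatelessLocal1.class);
        for (Object obj : objs) {
            self.process(obj);
        }
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void process(Object obj) {
        // Runs in its own transaction because the call arrived via the proxy.
        this.mgr.merge(obj);
        this.mgr.flush();
    }
}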
A: I think it has to do with the @TransactionAttribute(TransactionAttributeType.Never) on the method processObjects. TransactionAttributeType.Never http://docs.sun.com/app/docs/doc/819-3669/6n5sg7cm3?a=view If the client is running within a transaction and invokes the enterprise bean's method, the container throws a RemoteException. If the client is not associated with a transaction, the container does not start a new transaction before running the method. I assume that you are calling the method processObjects from the client code. Because your client is probably not associated with a transaction, the method call with TransactionAttributeType.Never is happy in the first place. Then you call the process method from processObjects, which, although it has the TransactionAttributeType.Required annotation, was not a business method call through the container, so the transaction policy is not enforced. When you call merge you get the exception because you are still not associated with a transaction. Try using TransactionAttributeType.Required for both bean methods to see if it does the trick. A: I had these circular dependency issues which Kevin mentioned. However, the proposed annotation @IgnoreDependency is a JBoss-specific annotation and there is no counterpart in e.g. Glassfish. Since it does not work with a default EJB reference, I felt a bit uncomfortable with this solution. Therefore, I gave bluecarbon's solution a chance, thus starting the inner transaction "by hand". Besides this, I see no solution but to implement the inner process() in another bean, which is also ugly because we simply don't want to disturb our class model for such technical details.
{ "language": "en", "url": "https://stackoverflow.com/questions/106437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: Requiring users to update .NET I'm working on some production software, using C# on the .NET framework. I really would like to be able to use LINQ on the project. I believe it requires .NET version 3.5 (correct me if I'm wrong). This application is a commercial software app, required to run on a client's work PC. Is it reasonable to assume they have .NET 3.5, or assume that they won't mind upgrading to the latest version? I just wanted to feel out what the consensus was as far as mandating framework upgrades to run apps. A: I would say that it isn't safe to assume they have .NET 3.5. Where as it is very, very unlikely they will have any problems when upgrading, changing anything always carries a risk. I know I wouldn't mind upgrading, but I am a developer. I think it's one of those things that could go either way, they either won't think twice about it and just upgrade, or they might make an issue out of it. I think it would depend on your customers, 'low-tech' clients may think twice as they may not fully understand it, which would make them nervous. A: To use LINQ, as you have said, you need to have .NET 3.5. Just to confirm this, the Wikipedia page for LINQ says: Language Integrated Query (LINQ, pronounced "link") is a Microsoft .NET Framework component that adds native data querying capabilities to .NET languages using a syntax reminiscent of SQL. Many of the concepts that LINQ has introduced were originally tested in Microsoft's Cω research project. LINQ was released as a part of .NET Framework 3.5 on November 19, 2007. Due to the fact that machines may have some of the previous versions of .NET already installed, you may find that this site, Smallest Dot NET by Scott Hanselman (Microsoft employee) is useful. It works out the smallest updates you need to get up to date (currently 3.5 SP1). As for whether it is reasonable to expect it on the client's machine, I guess it depends upon what you're creating. My feelings are: Small low cost applications = PERHAPS NOT YET A tiny application sold at low cost, perhaps targeting 3.5 is a little early and likely to reduce the size of your audience because of the annoyance factor. Large commercial applications, with installers = YES If it is a large commercial application (your baseline specifications are already WInXP or newer running on .NET 2.0), I don't think the customer would care. Put the redistributable on the installer disk! Remember that adopting any new technology should be done for a number of reasons. What is your need to use LINQ, is it something that would be tough to replicate? If LINQ gives you functionality you really need, your costs and timetable are likely to benefit from selecting it. Your company gain by being able to sell the product for less or increase their margins. One final option, as pointed out by Nescio, if all you need is Linq to Objects (eg. you don't need Linq to SQL or Linq to XML) then LinqBridge may be an option. A: Since .NET Framework itself is distributed for free, people are rarely against upgrading it. However there may be problems with system administrator availability or problems with installation. A: Check out: LinqBridge A: Talk to your V.P. of Sales. Seriously. If 3.5 is bleeding edge (I honestly don't know), then odds are he/she will not like the idea very much. If it is a couple of years old, then they'll be more accepting. Being a product that forces upgrades of third party SW is not an insurmountable shortcoming, but it doesn't help. 
A: It depends on your target audience and the importance of your app. Generally speaking at this point you probably can't assume that your audience already has .NET 3.5. Installing it can take quite a while, and can be quite tedious if they don't already have the other prerequisites to .NET 3.5. So unless it's a fairly comprehensive and/or important piece of enterprise software, I would strongly advise against it. A: You should read this Hanselman's entry: http://www.hanselman.com/blog/SmallestDotNetOnTheSizeOfTheNETFramework.aspx It's really interesting if it comes to installing and thus minimalizing installation size of .NET framework. It should be somehow an answer to your question. A: So long as you know that you don't need to support Windows 2000 or any older versions of Windows then requiring the latest and greatest framework version doesn't feel too onerous. Some less fortunate developers are stuck with older framework versions because they need to support older OS versions. A: .Net 3.5 is not yet auto updated on Windows PC, I would not bet on a standard customer having it "as is". Notice you may have to decide if you go for .Net3.5 SP1, since there is a small DataSet backward incompatibility between 3.5 and 3.5SP1 (and maybe some others I did not see). If your client is a big company you may want to consider that they are often very conservative (My clients are still XP/IE6 and sometime even W2K/IE6). A: Beware Windows 2000 is not supported on any frameworks above 2.0. So you're application would then only support the following operating systems: * *Microsoft Windows XP *Microsoft Windows Server 2003 *Windows Vista *Windows Server 2008 Good Luck!
{ "language": "en", "url": "https://stackoverflow.com/questions/106439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Enabling editing of primary key field in ASP.NET Dynamic Data / LINQ to SQL If you have a table with a compound primary key that is composed of a foreign key and other table columns, how do you get ASP.NET Dynamic Data to allow the non-foreign primary key table columns to be editable? A: LINQ to SQL does not support changing the primary key of an entity even without using Dynamic Data. A: Compound or composite foreign keys are not well supported in the current version. I ran into the same problem when building a test project. For a parent-child relationship with a single-column foreign key, dynamic data allowed me to edit records in the child table using drop downs. For a parent-child relationship with a compound primary key, dynamic data only allowed me to edit one of the foreign keys, without a drop down. I tried both Linq to SQL and Data Entities. A: A primary key just has to be unique, and that doesn't necessarily mean it has to be automatically generated. Nor does it mean it can't be changed. It's conceivable that a human might be coming up with the primary key, in which case the pk field needs an input. A: A primary key represents the identity of an entity. It is assumed that primary key fields are never changed. Your question suggests that you might be using primary keys incorrectly.
{ "language": "en", "url": "https://stackoverflow.com/questions/106444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: using JTables with netbeans 6.1 aka Matisse Before you answer: Yes I have read the jtable tutorial over at Sun. No, it did not help me. Yes, I am a dolt. Please don't answer with a reference to that document. What I am specifically interested in is how to dynamically add rows and columns to my Jtable via the Netbeans IDE. I already have an object that contains a hashmap with my data. I can't figure out where or what object I should be passing that object to. Thanks for your time! I have a vector that contains a series (of length l) of objects (each one corresponding to a row). How do I get that vector object to display on the JTable? A: A JTable uses a TableModel to hold its data. Your hash/vector of data will need to be adapted to be used; you can write a TableModel implementation, using the hash/vector as backing data, or, if you won't be dynamically updating the hash/vector and needing it to show automatically, you can simply copy everything into an instance of DefaultTableModel, and use that. If you do use an adapter, and dynamically update the hash/vector, remember that all updates must be done in the event dispatch thread. :-) A: Just to illustrate, the following are examples of how to use the DefaultTableModel to show your data from HashMaps and Vectors. The following is an example of dumping data from a HashMap onto a DefaultTableModel which is used as the TableModel of a JTable. import java.util.*; import javax.swing.*; import javax.swing.table.*; public class JTableExample extends JFrame { private void makeGUI() { this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); // HashMap with some data. HashMap<String, String> map = new HashMap<String, String>(); map.put("key1", "value1"); map.put("key2", "value2"); // Create a DefaultTableModel, which will be used as the // model for the JTable. DefaultTableModel model = new DefaultTableModel(); // Populate the model with data from HashMap. model.setColumnIdentifiers(new String[] {"key", "value"}); for (String key : map.keySet()) model.addRow(new Object[] {key, map.get(key)}); // Make a JTable, using the DefaultTableModel we just made // as its model. JTable table = new JTable(model); this.getContentPane().add(table); this.setSize(200,200); this.setLocation(200,200); this.validate(); this.setVisible(true); } public static void main(String[] args) { new JTableExample().makeGUI(); } } For using a Vector to include a column of data into a JTable: import java.util.*; import javax.swing.*; import javax.swing.table.*; public class JTableExample extends JFrame { private void makeGUI() { this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); // Vector with data. Vector<String> v = new Vector<String>(); v.add("first"); v.add("second"); // Create a DefaultTableModel, which will be used as the // model for the JTable. DefaultTableModel model = new DefaultTableModel(); // Add a column of data from Vector into the model. model.addColumn("data", v); // Make a JTable, using the DefaultTableModel we just made // as its model. 
JTable table = new JTable(model); this.getContentPane().add(table); this.setSize(200,200); this.setLocation(200,200); this.validate(); this.setVisible(true); } public static void main(String[] args) { new JTableExample().makeGUI(); } } I have to admit that the column names don't appear when using the above examples (I usually use the DefaultTableModel's setDataVector method), so if anyone has any suggestions on how to make the column names appear, please do :) A: To add to my previous answer, for what it's worth, I've actually written a table model that uses (essentially) an ArrayList<Row> as backing data, where Row is a HashMap<String, Object>, mapping column names to values. The whole thing is about 1500 lines of code, although my code may be overkill for your purposes, and you probably don't have to write nearly as much code. All the best! A: Just an addition to coobird's post; to get the header to appear, I did this: import java.util.*; import javax.swing.*; import javax.swing.table.*; public class JTableExample extends JFrame { private void makeGUI() { this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); // HashMap with some data. HashMap<String, String> map = new HashMap<String, String>(); map.put("key1", "value1"); map.put("key2", "value2"); // Create a DefaultTableModel, which will be used as the // model for the JTable. DefaultTableModel model = new DefaultTableModel(); // Populate the model with data from HashMap. model.setColumnIdentifiers(new String[] {"key", "value"}); for (String key : map.keySet()) model.addRow(new Object[] {key, map.get(key)}); // Make a JTable, using the DefaultTableModel we just made // as its model. JTable table = new JTable(model); this.getContentPane().add(new JScrollPane(table)); this.setSize(200,200); this.setLocation(200,200); this.validate(); this.setVisible(true); } public static void main(String[] args) { new JTableExample().makeGUI(); } } By the way, your post was very helpful for me coobird, you have no idea how thankful I am!
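As a hedged follow-up to the question's scenario of a Vector holding one object per row: the sketch below shows one way to feed such a Vector into a DefaultTableModel. The RowObject class and its fields are hypothetical stand-ins for whatever the real row objects look like, so adjust the column identifiers and the addRow() call accordingly.

import java.util.Vector;
import javax.swing.JFrame;
import javax.swing.JScrollPane;
import javax.swing.JTable;
import javax.swing.table.DefaultTableModel;

public class VectorRowsExample {
    // Hypothetical row object; replace with the class actually stored in the Vector.
    static class RowObject {
        String name;
        int count;
        RowObject(String name, int count) { this.name = name; this.count = count; }
    }

    static JTable buildTable(Vector<RowObject> rows) {
        DefaultTableModel model = new DefaultTableModel();
        model.setColumnIdentifiers(new String[] {"name", "count"});
        // One model row per object in the Vector.
        for (RowObject row : rows) {
            model.addRow(new Object[] {row.name, row.count});
        }
        return new JTable(model);
    }

    public static void main(String[] args) {
        Vector<RowObject> rows = new Vector<RowObject>();
        rows.add(new RowObject("first", 1));
        rows.add(new RowObject("second", 2));

        JFrame frame = new JFrame("Vector rows");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        // Wrapping the table in a JScrollPane is what makes the header visible,
        // as noted in the answer just above.
        frame.getContentPane().add(new JScrollPane(buildTable(rows)));
        frame.setSize(200, 200);
        frame.setVisible(true);
    }
}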
{ "language": "en", "url": "https://stackoverflow.com/questions/106446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I redirect Tornado / VXWorks shell output? I've been working on an embedded C/C++ project recently using the shell in Tornado 2 as a way of debugging what's going on in our kit. The only problem with this approach is that it's a complicated system and as a result, has a fair bit of output. Tornado 'helpfully' scrolls the window every time some new information arrives which means that if you spot an error, it disappears out of site too quickly to see. Each time you scroll up to look, the system adds more information, so the only way to view it is to disconnect the hardware. I'd love to know if anyone has a way of redirecting the output from Tornado? I was hoping there might be a way to log it all from a small python app so that I can apply filters to the incoming information. I've tried connecting into the Tornado process, but the window with the information isn't a standard CEditCtrl so extracting the text that way was a dead end. Any ideas anyone? [Edit] I should have mentioned that we're only running Tornado 2.1.0 and upgrading to a more recent version is beyond my control. [Edit2] The window in question in Tornado is an 'AfxFrameOrView42' according to WinID. A: here is another potential way: -> saveFd = open("myfile.txt",0x102, 0777 ) -> oldFd = ioGlobalStdGet(1) -> ioGlobalStdSet(1, saveFd) -> runmytest() ... -> ioGlobalStdSet(1, oldFd) this will redirect all stdout activity to the file you opened. You might have to play around with the file name of the open to make it write on the host (e.g. use "host:/myfile.txt" or something like this) A: The host shell has a recording capability built in. There are 3 environment variables available (in 6.x - not available in 5.x): RECORD (on/off) : Controls recording of the shell RECORD_TYPE (input/output/all): Determines what you will be recording RECORD_FILE : Filename to save things to. you use the ?shConfig command to configure the shell environment variable. ?shConfig by itself displays the variables. Here is how I set mine up: -> ?shConfig ... RECORD = off RECORD_FILE = C:/test.txt RECORD_TYPE = output ... -> ?shConfig RECORD_TYPE all -> ?shConfig RECORD_FILE myData.txt -> ?shConfig RECORD on Started recording commands in 'myData.txt'. A: I am making the assumption that you are using the host shell to perform this. If you are running a test by launching it from the shell like "runTest()", you can use the redirection operator (>) to send the output of that function to a text file on your host machine. > runTest() > mytestResults.txt This will save any output that runTest generates to the file mytestResults.txt If you would like to capture everything on the screen all the time, I will have to dig more into this. A: rlogin vxWorks-target | tee redirected-output.txt
{ "language": "en", "url": "https://stackoverflow.com/questions/106453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Changing the default settings for a console application I would prefer that a console app would default to multithreaded debug, warning level 4, build browse information, and no resource folder. Does anyone know of any technique that would allow me to create a console app, with my desired options, without manually setting them? A: Yes, you can do that. What you want is to create your own project template. You can then select that template from the New Project wizard. I wasn't able to locate documentation on how to create a project template in Visual Studio 6, but this MSDN article explains the procedure for Visual Studio 2005. Hopefully you will find those instructions to be sufficiently similar. A: I have concluded this is impossible. There is support for custom appwizards for Windows projects, but not console projects. This is where I did research. http://www.codeproject.com/KB/cpp/genwiz.aspx?fid=15478&df=90&mpp=25&noise=3&sort=Position&view=Quick&select=1266895 http://msdn.microsoft.com/en-us/library/ms950410.aspx http://msdn.microsoft.com/en-us/library/aa300499(VS.60).aspx The custom appwizard will accept Windows projects as a base for the template, but not console projects. A message dialog appears that claims that the base project selected is not a C++ project. A: This turns out to be fairly easy to do. Create a new console project in your workspace and name it 0_console. Set its characteristics the way you want them to be (warning level 4, ...). Get out of MSVC, and use Windows Explorer to copy the project directory. Paste it in at the same directory level as the 0_console project. Rename it to be whatever you want the new project to be. Go into that directory, edit the .dsp file, and replace the 0_console values with the new name. Save that, go into MSVC, and simply insert the project into the workspace.
{ "language": "en", "url": "https://stackoverflow.com/questions/106470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is the best way to debug stored procedures (and write sprocs that are easier to debug)? What are good methodologies for creating sprocs that reduce the pain of debugging? And what tools are out there for debugging stored procedures? Perhaps most importantly, what are indications to look out for that errors are happening in a sproc and not in the code? I hope I'm not all over the board too terribly bad here. Votes for answers to any of the above. Thanks. For what it's worth, I work in a .NET environment, SQL servers. A: One technique I use in stored procedures to make them easier to debug (without IDE or debuggers) for SQL Server 2005 procedures: I add an input parameter named @Debug = 0 (defaulted to 0 = off) at the end of the parameter list for the procedure. I then add if (@Debug = 1) print '...'; statements in the code at key junctures to display any useful internal values etc. Yes, it's "old school" and debuggers that let you "walk the code" are great - but this works for anyone from any SQL tool (including anyone debugging without your same IDE). Ron A: Another technique I use for both simple log output and debugging is to create a table variable at the top of the procedure: --************************************************************************** -- Create a log table variable to store messages to be returned to the -- calling application. --************************************************************************** declare @log as table ( msg varchar(MAX) ); then insert into @log values ('Inserted a new DVO Order into IRMA, order id: [' + convert(varchar(10), @@IDENTITY ) + ']'); etc. then ... select msg from @log; end at the end of the procedure - this depends on how well the calling application logs output from your procedure call, but the app I wrote logs it all. :-) A: I would strongly suggest that you take a look at the built in tooling in SQL management studio. i have written a pretty detailed blog post about it here: http://www.diaryofaninja.com/blog/2010/11/23/debugging-sql-queries-function-amp-stored-procedures-with-sql-management-studio basically the gist of it is that you enter you sql query to execute your stored procedure, and instead of pressing F5 or hitting the exclamation, you hit the play button and use F10 and F11 to step through and step into your stored procs. very very handy - and no one seems to use it. A: TSQLUnit This is a unit testing framework for SQL Server. Not exactly a classic debugging tool but it does allow you to write unit tests for your stored procedures which can help tremendously in identifying bugs and to validate expected behaviors. For example, If you have a buggy stored proc then you can write some unit tests to understand how it is failing. Also, if you make a change to your SQL code you can validate that your changes did not break anything else or at least tell you where a problem lies. If something is hard to test then it might be a good indication that your stored proc might be doing too much and could benefit if it were be broken up into more focus and targeted procs. These procs should then become relatively easier to debug and maintain in the long run. A: I have noticed a lot of suggestions on using different environments and techniques to debug SQL procs, but no one has mentioned DBFit. If you are not familiar with Fit and FitNesse then do yourself a favor and look them up. 
Using these three tools you can quickly build yourself an entire suite of acceptance tests that will give you peace of mind knowing you can refactor with impunity. DBFit is simply a series of Fit Fixtures that can be used to exercise a database. Using Fitness you can write as many permutations of calls onto your stored proc as you want to create tests for. This isn't debugging per se, but you would be amazed at how quickly you can pinpoint a problem once you have an entire battery of tests against a single stored proc. A failing test will lead you directly to the problem and give you the exact context with which it failed so there is no guess work. On top of it all, you can now refactor your stored procs without fear because you will simply have to re-run the tests to ensure you didn't break anything. A: For tools, you can use Visual Studio to debug SP. If the stored proc has long logic, you can refactor it, create separate stored proc, and call it from your main stored proc. This will help to narrow down your testing also, and ease you to find which part of the queries is wrong. A: This may be a personal preference, but I find it extremely difficult to read SQL queries that are all slapped onto one long line. I prefer the following indentation style: SELECT [Fields] FROM Table WHERE x = x This simple practice has helped me out a lot when writing stored procedures for a brand new database schema. By breaking up the statements onto many lines it becomes easier to identify the culprit of a bug in your query. In SQL Server Management Studio, for example, the line number of the exception is given, so you can target problematic code much quicker. Be easy on your fellow developers...don't cram 800 characters of a SQL query onto one line. You'll thank yourself later if a database field name or datatype changes and nobody emails you. A: A couple of patterns I have seen successfully used are 'diagnostic' or 'test' modes and logging. test or diagnostic modes are useful when you are doing dynamic SQL execution. Make sure you can see what you are going to execute. If you have areas where you need (or should) be checking for errors consider logging to a table with enough details so you can diagnose what is going on. A: You can use Sql server debugging, but I've found that to be a pain in anything but the most direct of situations (debugging on a local server, etc). I've yet to find something better than print statements, so I'll be monitoring this thread with interest. A: Here's some advice that was reiterated to me today - if you're adding a join to an important query on the production database, make sure it's safe when there is a null field in the joining table. LEFT JOIN I broke an important page for 20 minutes before we figured out that it was my small, rushed stored procedure change. And make sure you test your procedures when you make a change. To do this, I like to put a simple test query in the comments of the procedure. Obvisouly, I failed to do this today :-( /************************************ MyProcName Test: ----- exec MyProcName @myParam *************************************/ A: This may not be the answer you are looking for but if you are already in a .Net environment LINQtoSQL has greatly reduced the amount of stored procs I write/use/need to debug. The difficulty of debugging SQL is one of the reasons programming business logic in LINQ is my new preferred practice . 
A: SQL Server 2008 Management Studio's integrated debugger made step-wise debugging a cinch (compared to the judo required to figure out how to get VS2005 + SQL to debug). A: Similar to Ron's logging, we call a logging proc from all other stored procedures to assist in getting tracing on all calls. A common BatchId is used throughout to allow tracing for a certain batch run. It's possibly not the most performant process but it does help greatly in tracking down faults. It's also pretty simple to compile summary reports to email admins, i.e. Select * from LogEvent where BatchId = 'blah' Sample Call EXEC LogEvent @Source='MyProc', @Type='Start' , @Comment='Processed rows',@Value=50, @BatchId = @batchNum Main Proc CREATE PROCEDURE [dbo].[LogEvent] @Source varchar(50), @Type varchar(50), @Comment varchar(400), @Value decimal = null, @BatchId varchar(255) = 'BLANK' AS IF @BatchId = 'BLANK' SET @BatchId = NEWID() INSERT INTO dbo.Log (Source, EventTime, [Type], Comment, [Value], BatchId) VALUES (@Source, GETDATE(), @Type, @Comment, @Value, @BatchId) Moving forward it would be nice to leverage the CLR and look at calling something like Log4Net via SQL. As our application code uses Log4Net it would be advantageous to integrate the SQL side of processes into the same infrastructure.
{ "language": "en", "url": "https://stackoverflow.com/questions/106472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Is there a way to debug Velocity templates in the traditional code debugging sense? We make heavy use of Velocity in our web application. While it is easy to debug the Java side of things and ensure the Velocity Context is populated correctly, it would be extremely valuable to be able to step through the parsing of the VTL on the merge step, set breakpoints, etc. Are there any tools or IDEs/IDE plugins that would make this kind of thing possible with VTL (Velocity Template Language)? A: I have not found any yet. The closest I can get is to hack a logging framework to print out the information that you want. What you do is: * *create a class with a logging method that returns a boolean value. *Inject an instance of it into the Velocity context. *From inside the Velocity template you can call the logging method with #if($logger.log($data)) #end (see the sketch after the answers below). A: There might be, but what I've found works is putting everything into a special map that is then put into the context. Thus you can echo the entire contents of this special map to the screen while rendering (without having to know the keys)... thus indicating the exact value of any given item in the context at any point. It isn't foolproof, but VTL seems to be for "quick n dirty" stuff only. A: There is no step-through, nor some kind of built-in "print variables". This is something that bothers me too, but using Velocity was a decision that was made before I joined our project...
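For illustration, here is a minimal sketch of the context-logger idea from the first answer above. The class name TemplateLogger and the "logger" context key are hypothetical, and where you register the helper depends on how your application builds its VelocityContext:

import org.apache.velocity.VelocityContext;

public class TemplateLogger {
    // Returns false so that "#if($logger.log($something)) #end" logs the value
    // without rendering anything into the output; returning true would also
    // work here because the #if body is empty.
    public boolean log(Object data) {
        System.out.println("[VTL] " + data); // or delegate to your real logging framework
        return false;
    }

    // Assumed wiring: call this wherever the context for the merge step is built.
    public static VelocityContext addTo(VelocityContext context) {
        context.put("logger", new TemplateLogger());
        return context;
    }
}

// In a template, you could then write, for example:
// #if($logger.log($customer)) #end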
{ "language": "en", "url": "https://stackoverflow.com/questions/106473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Possible to monitor processes another process launches via WMI? I have a setup executable that I need to install. When I run it, it launches a msi to do the actual install and then dies immediately. The side effect of this is it will return control back to any console you call it from before the install finishes. Depending on what machine I run it on, it can take from three to ten minutes so having the calling script sleep is undesirable. I would launch the msi directly but it complains about missing components. I have a WSH script that uses WMI to start a process and then watch until it's pid is no longer running. Is there some way to determine the pid of the MSI the initial executable is executing, and then watch for that pid to end using WMI? Is the launching process information even associated with a process? A: Would doing a WMI lookup of processes that have the initial setup as the parent process do the trick? For example, if I launch an MSI from a command prompt with process id 4000, I can execute the following command line to find information about msiexec process: c:\>wmic PROCESS WHERE ParentProcessId=4000 GET CommandLine, ProcessId CommandLine ProcessId "C:\Windows\System32\msiexec.exe" /i "C:\blahblahblah.msi" 2752 That may be one way to find the information you need. Here is a demo of looking up that information in vbs: Set objWMIService = GetObject("winmgmts:{impersonationLevel=impersonate}!\\.\root\cimv2") Set colProcesses = objWMIService.ExecQuery("select * from Win32_Process where ParentProcessId = 4000") For Each objProcess in colProcesses Wscript.Echo "Process ID: " & objProcess.ProcessId Next I hope this helps. A: If you're using a .NET language (you can do it in Win32, but waaaay easier in .NET) you can enumerate all the Processes in the system (after your initial call to Setup.exe completes) and find all the processes which parent's PID equal to the PID of the Setup.exe - and then monitor all those processes. When they will complete - setup is complete. Make sure that they don't spawn any more child processes as well. A: This should do it. $p1 = [diagnostics.process]::start($pathToExecutable) # this way we know the PID of the initial exe $p2 = get-wmiobject win32_process -filter "ParentProcessId = $($p1.Id)" # using Jim Olsen's tip (get-process -id $p2.ProcessId).WaitForExit() # voila--no messy sleeping Unfortunately, the .NET object doesn't have a ParentProcessId property, and the WMI object doesn't have the WaitForExit() method, so we have to go back and forth. Props to Jeffrey Snover (always) for this article.
{ "language": "en", "url": "https://stackoverflow.com/questions/106476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Javascript with embedded Ruby: How to safely assign a ruby value to a javascript variable I have this line in a javascript block in a page: res = foo('<%= @ruby_var %>'); What is the best way to handle the case where @ruby_var has a single-quote in it? Else it will break the JavaScript code. A: Rails has method specifically dedicated to this task found in ActionView::Helpers::JavaScriptHelper called escape_javascript. In your example, you would use the following: res = foo('<%= escape_javascript @ruby_var %>'); Or better yet, use the j shortcut: res = foo('<%= j @ruby_var %>'); A: @ruby_var.gsub(/[']/, '\\\\\'') That will escape the single quote with an apostrophe, keeping your Javascript safe! Also, if you're in Rails, there are a bunch of Javascript-specific tools. A: Could you just put the string in a double-quote? res = foo("<%= @ruby_var %>"); A: You can also use inspect assuming you know it'll be a single quote: res = foo(<%= @ruby_var.inspect %>); A: I think I'd use a ruby JSON library on @ruby_var to get proper js syntax for the string and get rid of the '', fex.: res = foo(<%= @ruby_var.to_json %>) (after require "json"'ing, not entirely sure how to do that in the page or if the above syntax is correct as I havn't used that templating language) (on the other hand, if JSON ever changed to be incompatible with js that'd break, but since a decent amount of code uses eval() to eval json I doubt that'd happen anytime soon) A: I don't work with embedded Ruby too much. But how about using p (which invokes inspect) instead of <%= which might be doing something like print or puts. p always prints the string as if it were code wrapped in double quotes: >> p "String ' \" String" "String ' \" String" # => nil >> p 'alpha " \' alpha' "alpha \" ' alpha" # => nil A: You may want to use the following first property, to get rid of the " from your string and then you can go ahead and use your json function. res = foo('<%= @ruby_var %>.first');
{ "language": "en", "url": "https://stackoverflow.com/questions/106481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: sorting hashes/arrays in awk Is there an easy way to do any of the following things in awk? * *Sorting an array/hash by its data *Sorting a hash by its string key A: Here's someone else's answer to a very similar problem: http://www.computing.net/answers/unix/urgent-help-with-sorting-in-awk/4442.html Which is supposed to be something like this: gawk 'BEGIN {c=1} { array[c] = sprintf ("%s %s", $2, $1); c++ } END { asort(array); for (x=1;x<c;x++) { print array[x] } }' Note that I used 'gawk'. If you want built-in sorting, use gawk. That example takes a 'space-separated' input of key value pairs and sorts them based on the second value (of course it prints them out in value/key format, but you see what I'm doing there.) In order to do that to an array extant in gawk, you'd use something similar. If using awk or mawk, you'll have to use one of the many sort functions available in man pages to accomplish the sort. From the gawk manpage: All arrays in AWK are associative, i.e. indexed by string values. The special operator in may be used in an if or while statement to see if an array has an index consisting of a particular value. if (val in array) print array[val] If the array has multiple subscripts, use (i, j) in array.
{ "language": "en", "url": "https://stackoverflow.com/questions/106484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is a smart pointer and when should I use one? A: http://en.wikipedia.org/wiki/Smart_pointer In computer science, a smart pointer is an abstract data type that simulates a pointer while providing additional features, such as automatic garbage collection or bounds checking. These additional features are intended to reduce bugs caused by the misuse of pointers while retaining efficiency. Smart pointers typically keep track of the objects that point to them for the purpose of memory management. The misuse of pointers is a major source of bugs: the constant allocation, deallocation and referencing that must be performed by a program written using pointers makes it very likely that some memory leaks will occur. Smart pointers try to prevent memory leaks by making the resource deallocation automatic: when the pointer to an object (or the last in a series of pointers) is destroyed, for example because it goes out of scope, the pointed object is destroyed too. A: A smart pointer is a class, a wrapper around a normal pointer. Unlike a normal pointer, a smart pointer's life cycle is based on a reference count (how many times the smart pointer object is assigned). So whenever a smart pointer is assigned to another one, the internal reference count is incremented, and whenever the object goes out of scope, the reference count is decremented. An automatic pointer, though it looks similar, is totally different from a smart pointer. It is a convenient class that deallocates the resource whenever an automatic pointer object goes out of variable scope. To some extent, it makes a pointer (to dynamically allocated memory) work similarly to a stack variable (statically allocated at compile time). A: What is a smart pointer? Long version, in principle: https://web.stanford.edu/class/archive/cs/cs106l/cs106l.1192/lectures/lecture15/15_RAII.pdf A modern C++ idiom: RAII: Resource Acquisition Is Initialization. ● When you initialize an object, it should already have acquired any resources it needs (in the constructor). ● When an object goes out of scope, it should release every resource it is using (using the destructor). Key point: ● There should never be a half-ready or half-dead object. ● When an object is created, it should be in a ready state. ● When an object goes out of scope, it should release its resources. ● The user shouldn't have to do anything more. Raw pointers violate RAII: they require the user to delete manually when the pointers go out of scope. The RAII solution is: have a smart pointer class that ● Allocates the memory when initialized ● Frees the memory when the destructor is called ● Allows access to the underlying pointer For a smart pointer that needs to be copied and shared, use shared_ptr: ● it uses additional memory to store the reference count, which is shared between copies. ● the count is incremented on copy and decremented in the destructor. ● the owned object is deleted when the reference count reaches 0, and the memory that stores the reference count is deleted as well. For a smart pointer that does not own the raw pointer, use weak_ptr: ● it does not change the reference count. shared_ptr usage: correct way: std::shared_ptr<T> t1 = std::make_shared<T>(TArgs); std::shared_ptr<T> t2 = std::shared_ptr<T>(new T(Targs)); wrong way: T* pt = new T(TArgs); // never expose the raw pointer shared_ptr<T> t1 = shared_ptr<T>(pt); shared_ptr<T> t2 = shared_ptr<T>(pt); Always avoid using raw pointers. For the scenarios where you have to use a raw pointer: https://stackoverflow.com/a/19432062/2482283 For a raw pointer that is never nullptr, use a reference instead:
do not use T*, use T&. For an optional reference which may be nullptr, use a raw pointer, which then means: T* pt; is an optional reference that may be nullptr, does not own the pointee (the raw pointer is managed by someone else), and the function only knows that the caller guarantees it has not been released at this point. A: Definitions provided by Chris, Sergdev and Llyod are correct. I prefer a simpler definition though, just to keep my life simple: A smart pointer is simply a class that overloads the -> and * operators. Which means that your object semantically looks like a pointer but you can make it do way cooler things, including reference counting, automatic destruction etc. shared_ptr and auto_ptr are sufficient in most cases, but come along with their own set of small idiosyncrasies. A: Here's a simple answer for these days of modern C++ (C++11 and later): * *"What is a smart pointer?" It's a type whose values can be used like pointers, but which provides the additional feature of automatic memory management: When a smart pointer is no longer in use, the memory it points to is deallocated (see also the more detailed definition on Wikipedia). *"When should I use one?" In code which involves tracking the ownership of a piece of memory, allocating or de-allocating; the smart pointer often saves you the need to do these things explicitly. *"But which smart pointer should I use in which of those cases?" * *Use std::unique_ptr when you want your object to live just as long as a single owning reference to it lives. For example, use it for a pointer to memory which gets allocated on entering some scope and de-allocated on exiting the scope. *Use std::shared_ptr when you do want to refer to your object from multiple places - and do not want your object to be de-allocated until all these references are themselves gone. *Use std::weak_ptr when you do want to refer to your object from multiple places - for those references for which it's ok to ignore and deallocate (so they'll just note the object is gone when you try to dereference). *There is a proposal to add hazard pointers to C++26, but for now you don't have them. *Don't use the boost:: smart pointers or std::auto_ptr except in special cases which you can read up on if you must. *"Hey, I didn't ask which one to use!" Ah, but you really wanted to, admit it. *"So when should I use regular pointers then?" Mostly in code that is oblivious to memory ownership. This would typically be in functions which get a pointer from someplace else and do not allocate nor de-allocate, and do not store a copy of the pointer which outlasts their execution.
i.e., when you are just using the data, but you want it to survive the function where you are referencing it. *... the smart pointer isn't itself going to be destroyed at some point. You don't want it to sit in memory that never gets destroyed (such as in an object that is dynamically allocated but won't be explicitly deleted). *... two smart pointers might point to the same data. (There are, however, even smarter pointers that will handle that... that is called reference counting.) See also: * *garbage collection. *This stack overflow question regarding data ownership A: A smart pointer is an object that acts like a pointer, but additionally provides control on construction, destruction, copying, moving and dereferencing. One can implement one's own smart pointer, but many libraries also provide smart pointer implementations each with different advantages and drawbacks. For example, Boost provides the following smart pointer implementations: * *shared_ptr<T> is a pointer to T using a reference count to determine when the object is no longer needed. *scoped_ptr<T> is a pointer automatically deleted when it goes out of scope. No assignment is possible. *intrusive_ptr<T> is another reference counting pointer. It provides better performance than shared_ptr, but requires the type T to provide its own reference counting mechanism. *weak_ptr<T> is a weak pointer, working in conjunction with shared_ptr to avoid circular references. *shared_array<T> is like shared_ptr, but for arrays of T. *scoped_array<T> is like scoped_ptr, but for arrays of T. These are just one linear descriptions of each and can be used as per need, for further detail and examples one can look at the documentation of Boost. Additionally, the C++ standard library provides three smart pointers; std::unique_ptr for unique ownership, std::shared_ptr for shared ownership and std::weak_ptr. std::auto_ptr existed in C++03 but is now deprecated. A: UPDATE This answer is rather old, and so describes what was 'good' at the time, which was smart pointers provided by the Boost library. Since C++11, the standard library has provided sufficient smart pointers types, and so you should favour the use of std::unique_ptr, std::shared_ptr and std::weak_ptr. There was also std::auto_ptr. It was very much like a scoped pointer, except that it also had the "special" dangerous ability to be copied — which also unexpectedly transfers ownership. It was deprecated in C++11 and removed in C++17, so you shouldn't use it. std::auto_ptr<MyObject> p1 (new MyObject()); std::auto_ptr<MyObject> p2 = p1; // Copy and transfer ownership. // p1 gets set to empty! p2->DoSomething(); // Works. p1->DoSomething(); // Oh oh. Hopefully raises some NULL pointer exception. OLD ANSWER A smart pointer is a class that wraps a 'raw' (or 'bare') C++ pointer, to manage the lifetime of the object being pointed to. There is no single smart pointer type, but all of them try to abstract a raw pointer in a practical way. Smart pointers should be preferred over raw pointers. If you feel you need to use pointers (first consider if you really do), you would normally want to use a smart pointer as this can alleviate many of the problems with raw pointers, mainly forgetting to delete the object and leaking memory. With raw pointers, the programmer has to explicitly destroy the object when it is no longer useful. // Need to create the object to achieve some goal MyObject* ptr = new MyObject(); ptr->DoSomething(); // Use the object in some way delete ptr; // Destroy the object. 
Done with it. // Wait, what if DoSomething() raises an exception...? A smart pointer by comparison defines a policy as to when the object is destroyed. You still have to create the object, but you no longer have to worry about destroying it. SomeSmartPtr<MyObject> ptr(new MyObject()); ptr->DoSomething(); // Use the object in some way. // Destruction of the object happens, depending // on the policy the smart pointer class uses. // Destruction would happen even if DoSomething() // raises an exception The simplest policy in use involves the scope of the smart pointer wrapper object, such as implemented by boost::scoped_ptr or std::unique_ptr. void f() { { std::unique_ptr<MyObject> ptr(new MyObject()); ptr->DoSomethingUseful(); } // ptr goes out of scope -- // the MyObject is automatically destroyed. // ptr->Oops(); // Compile error: "ptr" not defined // since it is no longer in scope. } Note that std::unique_ptr instances cannot be copied. This prevents the pointer from being deleted multiple times (incorrectly). You can, however, pass references to it around to other functions you call. std::unique_ptrs are useful when you want to tie the lifetime of the object to a particular block of code, or if you embedded it as member data inside another object, the lifetime of that other object. The object exists until the containing block of code is exited, or until the containing object is itself destroyed. A more complex smart pointer policy involves reference counting the pointer. This does allow the pointer to be copied. When the last "reference" to the object is destroyed, the object is deleted. This policy is implemented by boost::shared_ptr and std::shared_ptr. void f() { typedef std::shared_ptr<MyObject> MyObjectPtr; // nice short alias MyObjectPtr p1; // Empty { MyObjectPtr p2(new MyObject()); // There is now one "reference" to the created object p1 = p2; // Copy the pointer. // There are now two references to the object. } // p2 is destroyed, leaving one reference to the object. } // p1 is destroyed, leaving a reference count of zero. // The object is deleted. Reference counted pointers are very useful when the lifetime of your object is much more complicated, and is not tied directly to a particular section of code or to another object. There is one drawback to reference counted pointers — the possibility of creating a dangling reference: // Create the smart pointer on the heap MyObjectPtr* pp = new MyObjectPtr(new MyObject()) // Hmm, we forgot to destroy the smart pointer, // because of that, the object is never destroyed! Another possibility is creating circular references: struct Owner { std::shared_ptr<Owner> other; }; std::shared_ptr<Owner> p1 (new Owner()); std::shared_ptr<Owner> p2 (new Owner()); p1->other = p2; // p1 references p2 p2->other = p1; // p2 references p1 // Oops, the reference count of of p1 and p2 never goes to zero! // The objects are never destroyed! To work around this problem, both Boost and C++11 have defined a weak_ptr to define a weak (uncounted) reference to a shared_ptr. A: Most kinds of smart pointers handle disposing of the pointer-to object for you. It's very handy because you don't have to think about disposing of objects manually anymore. The most commonly-used smart pointers are std::tr1::shared_ptr (or boost::shared_ptr), and, less commonly, std::auto_ptr. I recommend regular use of shared_ptr. 
shared_ptr is very versatile and deals with a large variety of disposal scenarios, including cases where objects need to be "passed across DLL boundaries" (the common nightmare case if different libcs are used between your code and the DLLs). A: Smart Pointers are those where you don't have to worry about Memory De-Allocation, Resource Sharing and Transfer. You can very well use these pointer in the similar way as any allocation works in Java. In java Garbage Collector does the trick, while in Smart Pointers, the trick is done by Destructors. A: The existing answers are good but don't cover what to do when a smart pointer is not the (complete) answer to the problem you are trying to solve. Among other things (explained well in other answers) using a smart pointer is a possible solution to How do we use a abstract class as a function return type? which has been marked as a duplicate of this question. However, the first question to ask if tempted to specify an abstract (or in fact, any) base class as a return type in C++ is "what do you really mean?". There is a good discussion (with further references) of idiomatic object oriented programming in C++ (and how this is different to other languages) in the documentation of the boost pointer container library. In summary, in C++ you have to think about ownership. Which smart pointers help you with, but are not the only solution, or always a complete solution (they don't give you polymorphic copy) and are not always a solution you want to expose in your interface (and a function return sounds an awful lot like an interface). It might be sufficient to return a reference, for example. But in all of these cases (smart pointer, pointer container or simply returning a reference) you have changed the return from a value to some form of reference. If you really needed copy you may need to add more boilerplate "idiom" or move beyond idiomatic (or otherwise) OOP in C++ to more generic polymorphism using libraries like Adobe Poly or Boost.TypeErasure. A: Here is the Link for similar answers : http://sickprogrammersarea.blogspot.in/2014/03/technical-interview-questions-on-c_6.html A smart pointer is an object that acts, looks and feels like a normal pointer but offers more functionality. In C++, smart pointers are implemented as template classes that encapsulate a pointer and override standard pointer operators. They have a number of advantages over regular pointers. They are guaranteed to be initialized as either null pointers or pointers to a heap object. Indirection through a null pointer is checked. No delete is ever necessary. Objects are automatically freed when the last pointer to them has gone away. One significant problem with these smart pointers is that unlike regular pointers, they don't respect inheritance. Smart pointers are unattractive for polymorphic code. Given below is an example for the implementation of smart pointers. Example: template <class X> class smart_pointer { public: smart_pointer(); // makes a null pointer smart_pointer(const X& x) // makes pointer to copy of x X& operator *( ); const X& operator*( ) const; X* operator->() const; smart_pointer(const smart_pointer <X> &); const smart_pointer <X> & operator =(const smart_pointer<X>&); ~smart_pointer(); private: //... }; This class implement a smart pointer to an object of type X. The object itself is located on the heap. 
Here is how to use it: smart_pointer <employee> p= employee("Harris",1333); Like other overloaded operators, p will behave like a regular pointer, cout<<*p; p->raise_salary(0.5); A: UPDATE: This answer is outdated concerning C++ types which were used in the past. std::auto_ptr is deprecated and removed in new standards. Instead of boost::shared_ptr the std::shared_ptr should be used which is part of the standard. The links to the concepts behind the rationale of smart pointers still mostly relevant. Modern C++ has the following smart pointer types and doesn't require boost smart pointers: * *std::shared_ptr *std::weak_ptr *std::unique_ptr There is also 2-nd edition of the book mentioned in the answer: C++ Templates: The Complete Guide 2nd Edition by David Vandevoorde Nicolai, M. Josuttis, Douglas Gregor OLD ANSWER: A smart pointer is a pointer-like type with some additional functionality, e.g. automatic memory deallocation, reference counting etc. A small intro is available on the page Smart Pointers - What, Why, Which?. One of the simple smart-pointer types is std::auto_ptr (chapter 20.4.5 of C++ standard), which allows one to deallocate memory automatically when it out of scope and which is more robust than simple pointer usage when exceptions are thrown, although less flexible. Another convenient type is boost::shared_ptr which implements reference counting and automatically deallocates memory when no references to the object remains. This helps avoiding memory leaks and is easy to use to implement RAII. The subject is covered in depth in book "C++ Templates: The Complete Guide" by David Vandevoorde, Nicolai M. Josuttis, chapter Chapter 20. Smart Pointers. Some topics covered: * *Protecting Against Exceptions *Holders, (note, std::auto_ptr is implementation of such type of smart pointer) *Resource Acquisition Is Initialization (This is frequently used for exception-safe resource management in C++) *Holder Limitations *Reference Counting *Concurrent Counter Access *Destruction and Deallocation A: Let T be a class in this tutorial Pointers in C++ can be divided into 3 types : 1) Raw pointers : T a; T * _ptr = &a; They hold a memory address to a location in memory. Use with caution , as programs become complex hard to keep track. Pointers with const data or address { Read backwards } T a ; const T * ptr1 = &a ; T const * ptr1 = &a ; Pointer to a data type T which is a const. Meaning you cannot change the data type using the pointer. ie *ptr1 = 19 ; will not work. But you can move the pointer. ie ptr1++ , ptr1-- ; etc will work. Read backwards : pointer to type T which is const T * const ptr2 ; A const pointer to a data type T . Meaning you cannot move the pointer but you can change the value pointed to by the pointer. ie *ptr2 = 19 will work but ptr2++ ; ptr2-- etc will not work. Read backwards : const pointer to a type T const T * const ptr3 ; A const pointer to a const data type T . Meaning you cannot either move the pointer nor can you change the data type pointer to be the pointer. ie . ptr3-- ; ptr3++ ; *ptr3 = 19; will not work 3) Smart Pointers : { #include <memory> } Shared Pointer: T a ; //shared_ptr<T> shptr(new T) ; not recommended but works shared_ptr<T> shptr = make_shared<T>(); // faster + exception safe std::cout << shptr.use_count() ; // 1 // gives the number of " things " pointing to it. 
T * temp = shptr.get(); // gives a pointer to object // shared_pointer used like a regular pointer to call member functions shptr->memFn(); (*shptr).memFn(); // shptr.reset() ; // frees the object pointed to by the ptr shptr = nullptr ; // frees the object shptr = make_shared<T>() ; // frees the original object and points to new object Implemented using reference counting to keep track of how many " things " point to the object pointed to by the pointer. When this count goes to 0 , the object is automatically deleted , i.e. the object is deleted when all the shared_ptr instances pointing to it have gone out of scope. This gets rid of the headache of having to delete objects which you have allocated using new. Weak Pointer : Helps deal with the cyclic references which arise when using shared pointers. If you have two objects pointed to by two shared pointers and each holds an internal shared pointer to the other, there will be a cyclic reference and the objects will not be deleted when the shared pointers go out of scope. To solve this , change the internal member from a shared_ptr to a weak_ptr. Note : To access the element pointed to by a weak pointer use lock() , which returns a shared_ptr. T a ; shared_ptr<T> shr = make_shared<T>() ; weak_ptr<T> wk = shr ; // initialize a weak_ptr from a shared_ptr wk.lock()->memFn() ; // use lock to get a shared_ptr // ^^^ Undefined behaviour (lock() returns an empty shared_ptr) if the object has already gone out of scope if(!wk.expired()) wk.lock()->memFn() ; // Check if shared ptr has gone out of scope before access See : When is std::weak_ptr useful? Unique Pointer : Lightweight smart pointer with exclusive ownership. Use when a pointer points to a unique object without sharing the object between pointers. unique_ptr<T> uptr(new T); uptr->memFn(); //T * ptr = uptr.release(); // uptr becomes null and object is pointed to by ptr uptr.reset() ; // deletes the object pointed to by uptr To change the object pointed to by the unique ptr , use move semantics unique_ptr<T> uptr1(new T); unique_ptr<T> uptr2(new T); uptr2 = std::move(uptr1); // object pointed by uptr2 is deleted and // object pointed by uptr1 is pointed to by uptr2 // uptr1 becomes null References : They can essentially be thought of as const pointers, i.e. a pointer which is const and cannot be reseated, with better syntax. See : What are the differences between a pointer variable and a reference variable in C++? r-value reference : reference to a temporary object l-value reference : reference to an object whose address can be obtained const reference : reference to a data type which is const and cannot be modified Reference : https://www.youtube.com/channel/UCEOGtxYTB6vo6MQ-WQ9W_nQ Thanks to Andre for pointing out this question.
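To tie the pieces above together, here is a minimal, self-contained C++14 sketch (the Widget type and its hello() method are invented for illustration) showing exclusive ownership with std::unique_ptr, reference-counted sharing with std::shared_ptr, and non-owning observation with std::weak_ptr. Note in passing that weak_ptr::lock() hands back a shared_ptr, which is empty when the object has already been destroyed.
#include <iostream>
#include <memory>

struct Widget {
    void hello() const { std::cout << "hello\n"; }
};

int main() {
    // Exclusive ownership: the Widget is destroyed when uptr goes out of scope.
    std::unique_ptr<Widget> uptr = std::make_unique<Widget>();
    uptr->hello();

    // Shared ownership via reference counting.
    std::shared_ptr<Widget> sp1 = std::make_shared<Widget>();
    std::shared_ptr<Widget> sp2 = sp1;                    // count is now 2
    std::cout << sp1.use_count() << "\n";                 // prints 2

    // Non-owning observer: does not keep the object alive.
    std::weak_ptr<Widget> wp = sp1;
    if (std::shared_ptr<Widget> locked = wp.lock())       // empty if expired
        locked->hello();

    sp1.reset();
    sp2.reset();                                          // Widget destroyed here
    std::cout << std::boolalpha << wp.expired() << "\n";  // prints true
    return 0;
}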
{ "language": "en", "url": "https://stackoverflow.com/questions/106508", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2119" }
Q: Disable button on form submission I have a button that I would like to disable when the form submits to prevent the user submitting multiple times. I have tried naively disabling the button with javascript onclick but then if a client side validation that fails the button remains disabled. How do I disable the button when the form successfully submits not just when the user clicks? This is an ASP.NET form so I would like to hook in nicely with the asp.net ajax page lifecycle if possible. A: The following function is useful without needing the disabling part which tends to be unreliable. Just use "return check_submit();" as part of the onclick handler of the submit buttons. There should also be a hidden field to hold the form_submitted initial value of 0; <input type="hidden" name="form_submitted" value="0"> function check_submit (){ if (document.Form1.form_submitted.value == 1){ alert("Don't submit twice. Please wait."); return false; } else{ document.Form1.form_submitted.value = 1; return true; } return false; } A: Disable the button at the very end of your submit handler. If the validation fails, it should return false before that. However, the JavaScript approach is not something that can be relied upon, so you should have something to detect duplicates on the server as well. A: if the validation is successful, then disable the button. if it's not, then don't. function validate(form) { // perform validation here if (isValid) { form.mySubmitButton.disabled = true; return true; } else { return false; } } <form onsubmit="return validate(this);">...</form> A: I'm not a huge fan of writing all that javascript in the code-behind. Here is what my final solution looks like. Button: <asp:Button ID="btnSubmit" runat="server" Text="Submit" OnClick="btnSubmit_Click" OnClientClick="doSubmit(this)" /> Javascript: <script type="text/javascript"><!-- function doSubmit(btnSubmit) { if (typeof(Page_ClientValidate) == 'function' && Page_ClientValidate() == false) { return false; } btnSubmit.disabled = 'disabled'; btnSubmit.value = 'Processing. This may take several minutes...'; <%= ClientScript.GetPostBackEventReference(btnSubmit, string.Empty) %>; } //--> </script> A: Give this a whirl: using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using System.Threading; public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { // Identify button as a "disabled-when-clicked" button... WebHelpers.DisableButtonOnClick( buttonTest, "showPleaseWait" ); } protected void buttonTest_Click( object sender, EventArgs e ) { // Emulate a server-side process to demo the disabled button during // postback. Thread.Sleep( 5000 ); } } using System; using System.Web; using System.Web.UI.WebControls; using System.Text; public class WebHelpers { // // Disable button with no secondary JavaScript function call. // public static void DisableButtonOnClick( Button ButtonControl ) { DisableButtonOnClick( ButtonControl, string.Empty ); } // // Disable button with a JavaScript function call. // public static void DisableButtonOnClick( Button ButtonControl, string ClientFunction ) { StringBuilder sb = new StringBuilder( 128 ); // If the page has ASP.NET validators on it, this code ensures the // page validates before continuing. sb.Append( "if ( typeof( Page_ClientValidate ) == 'function' ) { " ); sb.Append( "if ( ! Page_ClientValidate() ) { return false; } } " ); // Disable this button. 
sb.Append( "this.disabled = true;" ); // If a secondary JavaScript function has been provided, and if it can be found, // call it. Note the name of the JavaScript function to call should be passed without // parens. if ( ! String.IsNullOrEmpty( ClientFunction ) ) { sb.AppendFormat( "if ( typeof( {0} ) == 'function' ) {{ {0}() }};", ClientFunction ); } // GetPostBackEventReference() obtains a reference to a client-side script function // that causes the server to post back to the page (ie this causes the server-side part // of the "click" to be performed). sb.Append( ButtonControl.Page.ClientScript.GetPostBackEventReference( ButtonControl ) + ";" ); // Add the JavaScript created a code to be executed when the button is clicked. ButtonControl.Attributes.Add( "onclick", sb.ToString() ); } } A: Set the visibility on the button to 'none'; btnSubmit.Attributes("onClick") = document.getElementById('btnName').style.display = 'none'; Not only does it prevent the double submission, but it is a clear indicator to the user that you don't want the button pressed more than once. A: Not sure if this will help, but there's onsubmit event in form. You can use this event whenever the form submit (from any button or controls). For reference: http://www.htmlcodetutorial.com/forms/_FORM_onSubmit.html A: A solution will be to set a hidden field when the button is clicked, with the number 1. On the button click handler first thing is to check that number if it is something other than 1 just return out of the function. A: You may also be able to take advantage of the onsubmit() javascript event that is available on forms. This event fires when the form is actually submit and shouldn't trap until after the validation is complete. A: This is an easier but similar method than what rp has suggested: function submit(button) { Page_ClientValidate(); if(Page_IsValid) { button.disabled = true; } } <asp:Button runat="server" ID="btnSubmit" OnClick="btnSubmit_OnClick" OnClientClick="submit(this)" Text="Submit Me" /> A: Just heard about the "DisableOnSubmit" property of an <asp:Button>, like so: <asp:Button ID="submit" runat="server" Text="Save" OnClick="yourClickEvent" DisableOnSubmit="true" /> When rendered, the button's onclick attribute looks like so: onclick="this.disabled=true; setTimeout('enableBack()', 3000); WebForm_DoPostBackWithOptions(new WebForm_PostBackOptions('yourControlsName', '', true, '', '', false, true)) And the "enableBack()' javascript function looks like this: function enableBack() { document.getElementById('yourControlsName').disabled=false; } So when the button is clicked, it becomes disabled for 3 seconds. If the form posts successfully then you never see the button re-enable. If, however, any validators fail then the button becomes enabled again after 3 seconds. All this just by setting an attribute on the button--no javascript code needs to be written by hand. A: Note that rp's approach will double submit your form if you are using buttons with UseSubmitBehavior="false". I use the following variation of rp's code: public static void DisableButtonOnClick(Button button, string clientFunction) { // If the page has ASP.NET validators on it, this code ensures the // page validates before continuing. string script = "if (typeof(Page_ClientValidate) == 'function') { " + "if (!Page_ClientValidate()) { return false; } } "; // disable the button script += "this.disabled = true; "; // If a secondary JavaScript function has been provided, and if it can be found, call it. 
// Note the name of the JavaScript function to call should be passed without parens. if (!string.IsNullOrEmpty(clientFunction)) script += string.Format("if (typeof({0}) == 'function') {{ {0}() }} ", clientFunction); // only need to post back if button is using submit behaviour if (button.UseSubmitBehavior) script += button.Page.GetPostBackEventReference(button) + "; "; button.Attributes.Add("onclick", script); } A: The correct (as far as user-friendliness is concerned, at least) way would be to disable the button using the OnClientClick attribute, perform the client-side validation, and then use the result of that to continue or re-enable the button. Of course, you should ALSO write server-side code for this, as you cannot rely on the validation even being carried out due to a lack, or particular implementation, of JavaScript. However, if you rely on the server controlling the button's enabled / disabled state, then you basically have no way of blocking the user submitting the form multiple times anyway. For this reason you should have some kind of logic to detect multiple submissions from the same user in a short time period (identical values from the same Session, for example). A: one of my solution is as follow: add the script in the page_load of your aspx file HtmlGenericControl includeMyJava = new HtmlGenericControl("script"); includeMyJava.Attributes.Add("type", "text/javascript"); includeMyJava.InnerHtml = "\nfunction dsbButton(button) {"; includeMyJava.InnerHtml += "\nPage_ClientValidate();"; includeMyJava.InnerHtml += "\nif(Page_IsValid)"; includeMyJava.InnerHtml += "\n{"; includeMyJava.InnerHtml += "\nbutton.disabled = true;"; includeMyJava.InnerHtml += "}"; includeMyJava.InnerHtml += "\n}"; this.Page.Header.Controls.Add(includeMyJava); and then set your aspx button parameters as follow: <asp:Button ID="send" runat="server" UseSubmitBehavior="false" OnClientClick="dsbButton(this);" Text="Send" OnClick="send_Click" /> Note that "onClientClick" helps to disable to button and "UseSubmitBehaviour" disables the traditional submitting behaviour of page and allows asp.net to render the submit behaviour upon user script. good luck -Waqas Aslam A: So simply disabling the button via javascript is not a cross-browser compatible option. Chrome will not submit the form if you just use OnClientClick="this.disabled=true;" Below is a solution that I have tested in Firefox 9, Internet Explorer 9, and Chrome 16: <script type="text/javascript"> var buttonToDisable; function disableButton(sender) { buttonToDisable=sender; setTimeout('if(Page_IsValid==true)buttonToDisable.disabled=true;', 10); } </script> Then register 'disableButton' with the click event of your form submission button, one way being: <asp:Button runat="server" ID="btnSubmit" Text="Submit" OnClientClick="disableButton(this);" /> Worth noting that this gets around your issue of the button being disabled if client side validation fails. Also requires no server side processing. 
A: Building on @rp.'s answer, I modified it to invoke the custom function and either submit and disable on success or "halt" on error: public static void DisableButtonOnClick(Button ButtonControl, string ClientFunction) { StringBuilder sb = new StringBuilder(128); if (!String.IsNullOrEmpty(ClientFunction)) { sb.AppendFormat("if (typeof({0}) == 'function') {{ if ({0}()) {{ {1}; this.disabled=true; return true; }} else {{ return false; }} }};", ClientFunction, ButtonControl.Page.ClientScript.GetPostBackEventReference(ButtonControl, null)); } else { sb.Append("return true;"); } ButtonControl.Attributes.Add("onclick", sb.ToString()); } A: Came across rp's code in a legacy app of ours which was struggling with some crazy behaviour. Tracked it down to a strange combination of event firing - when DisableButtonOnClick() was being used on an asp button inside an UpdatePanel, the POST would be sent twice (once by the doPostBack added by DisableButtonOnClick(), and once by the UpdatePanel). However, this only happened with some browsers (early versions of Edge, but not recent ones, and IE11 did this, Chrome and FireFox did not (at least the versions I tested with)). I presume Chrome and newer versions of Edge are dealing with this scenario internally in some way. Tracking the issue with F12 devtools in IE - the two POSTs happen so closely together that the first one gets immediately ABORTED, but under some conditions (network latency, user machine load, etc) the request does get through to the server before the browser can abort. So this results in a seemingly random double-post coming from button presses throughout the system, and it was a pain to trace back. The fix is to add a "return false;" after the doPostBack to prevent the UpdatePanel from getting involved when older browsers are in play. TLDR - beware of this code on buttons in updatepanels. It's a good approach and nice method but has a potential issue in my (likely edge) case. ps - I would have commented on rp's post but I don't have the rep. Thought it might be useful for future travelers.
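For completeness, outside of WebForms the same "disable only when the post is really going ahead" idea can be expressed with the form's submit handler in plain HTML/JavaScript. This is only a sketch (the validate() body and control names are made up), and note the caveat in the comment: a disabled submit button is not included in the posted data, which is exactly why the ASP.NET answers above re-issue the postback with GetPostBackEventReference.
<form id="orderForm" onsubmit="return onOrderSubmit(this);">
    <input type="text" name="quantity" />
    <input type="submit" id="btnSave" value="Save" />
</form>

<script type="text/javascript">
function onOrderSubmit(form) {
    if (!validate(form)) {
        return false;   // validation failed: block the post, leave the button enabled
    }
    // Validation passed, so the post is definitely happening; disabling now
    // blocks a second click. Beware: a disabled submit button's name/value
    // pair is not posted, which matters if the server keys off it.
    document.getElementById('btnSave').disabled = true;
    return true;
}

function validate(form) {
    // stand-in for real client-side validation
    return form.quantity.value !== '';
}
</script>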
{ "language": "en", "url": "https://stackoverflow.com/questions/106509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Adding HTML to my RSS/Atom feed in Rails The default Rails XML builder escapes all HTML, so something like: atom_feed do |feed| @stories.each do |story| feed.entry story do |entry| entry.title story.title entry.content "<b>foo</b>" end end end will produce the literal text <b>foo</b> in the feed instead of a bold foo. Is there any way to instruct the XML builder not to escape the XML? A: entry.content "type" => "html" do entry.cdata!(post.content) end A: It turns out you need to do entry.content "<b>foo</b>", :type => "html" although wrapping it in a CDATA section stops it working. A: http://builder.rubyforge.org/classes/Builder/XmlMarkup.html The special XML characters <, >, and & are converted to &lt;, &gt; and &amp; automatically. Use the << operation to insert text without modification.
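Putting the accepted fix into a fuller sketch (story.body and the feed title are assumed attributes, following the shape of the Rails atom_feed helper documentation): declaring the content as HTML makes feed readers render the markup instead of displaying the tags literally.
atom_feed do |feed|
  feed.title "My stories"
  feed.updated @stories.first.created_at unless @stories.empty?

  @stories.each do |story|
    feed.entry story do |entry|
      entry.title story.title
      entry.content story.body, :type => "html"   # readers now render the markup
    end
  end
end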
{ "language": "en", "url": "https://stackoverflow.com/questions/106534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Why do I need SEM_PRIORITY_Q when using a VxWorks inversion-safe mutex? In VxWorks, I am creating a mutex with the SEM_INVERSION_SAFE option, to protect against the priority inversion problem. The manual says that I must also use the SEM_PRIORITY_Q option. Why is that? A: When creating a mutex semaphore in VxWorks, you have two options for dealing with multiple tasks queued (waiting) on the semaphore: FIFO order, or highest-priority task first. When you use the SEM_INVERSION_SAFE option, the task holding the mutex will be bumped up to the same priority as the highest-priority task waiting for the semaphore. If you were to use a FIFO queue for the semaphore, the kernel would have to traverse the queue of tasks waiting for the mutex to find the one with the highest priority. This operation is not deterministic, as the time to traverse the queue changes as the number of tasks queued changes. When you use the SEM_PRIORITY_Q option, the kernel simply has to look at the task at the head of the queue, as it is the highest priority. This is a constant-time operation.
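A minimal C sketch of what the manual is asking for (function and flag names are from semLib as I recall them; the priority-queue option appears as SEM_Q_PRIORITY in the headers, and semMCreate is documented to reject SEM_INVERSION_SAFE combined with a FIFO queue for exactly the reason given above):
#include <vxWorks.h>
#include <semLib.h>

static SEM_ID resourceMutex;

STATUS resourceInit (void)
    {
    /* Priority-ordered pend queue plus priority inheritance. */
    resourceMutex = semMCreate (SEM_Q_PRIORITY | SEM_INVERSION_SAFE);
    if (resourceMutex == NULL)
        return (ERROR);
    return (OK);
    }

void resourceUse (void)
    {
    semTake (resourceMutex, WAIT_FOREVER);
    /* ... touch the shared resource ... */
    semGive (resourceMutex);
    }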
{ "language": "en", "url": "https://stackoverflow.com/questions/106540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Rsync on Windows - Socket operation on non-socket I get the following error when trying to run the latest Cygwin version of rsync in Windows XP SP2. The error occurs for attempts at both local syncs (that is: source and destination on the local harddisk only) and remote syncs (using "-e ssh" from the openssh package). Any advice on how to fix/workaround it? bash-3.2$ rsync -a dir1 dir2 rsync: Failed to dup/close: Socket operation on non-socket (108) rsync error: error in IPC code (code 14) at /home/lapo/packaging/tmp/rsync-2.6.9/pipe.c(143) [receiver=2.6.9] rsync: read error: Connection reset by peer (104) rsync error: error in IPC code (code 14) at /home/lapo/packaging/tmp/rsync-2.6.9/io.c(604) [sender=2.6.9] A: You probably have something blocking rsync. In my case it's NOD32 antivirus. You can check this by running rsync in 'gdb' as follows: $ gdb --args /usr/bin/rsync -a somedir/ anotherdir GNU gdb 6.8.0.20080328-cvs (cygwin-special) ..... (no debugging symbols found) (gdb) run note the "run" command after gdb has started. You will see some output like this: Starting program: /usr/bin/rsync -a somedir/ anotherdir ..... (no debugging symbols found) warning: NOD32 protected [MSAFD Tcpip [TCP/IP]] warning: NOD32 protected [MSAFD Tcpip [UDP/IP]] warning: NOD32 protected [MSAFD Tcpip [RAW/IP]] warning: NOD32 protected [RSVP UDP Service Provider] warning: NOD32 protected [RSVP TCP Service Provider] (no debugging symbols found) (no debugging symbols found) ---Type <return> to continue, or q <return> to quit--- (no debugging symbols found) [New thread 1508.0x720] [New thread 1508.0xeb0] [New thread 1508.0x54c] rsync: Failed to dup/close: Socket operation on non-socket (108) rsync error: error in IPC code (code 14) at /home/lapo/packaging/rsync-3.0.4-1/src/rsync-3.0.4/pipe.c(147) [receiver=3.0.4] So you will have to add rsync to your exclude list in that virus scanner (NOD32): c:\cygwin\bin\rsync.exe A: Be aware that a long-standing pipe implementation bug in Cygwin causes rsync to hang if it's used through an SSH connection. As of Cygwin v. 1.7 it seems that the only reliable way to transfer lots of data with rsync is to connect to an rsync daemon using the rsync protocol. DeltaCopy is just a pretty wrapper around this method. Some users apparently have had success on top of SSH pushing data from Windows to Unix instead of pulling from Windows on the Unix side. In our experience that's unreliable too, though. Google for cygwin, rsync, ssh and pipe/hang/stall and you'll find more information about this problem. A: Not really an answer to your question, but I've found Delta Copy to be a much better option than messing around with Cygwin. It connects to regular rsync daemons too. A: I have found this to be a winsock error. I confirmed the problem starts with the installation of the ATT Communcations Manager (version 6.12.0046.0) for the Sierra Wireless Aircard (875U). Uninstall the Communications Manager and the rsync error goes away. A: After following @akaihola ’s advice I found this blog post with the solution to the same problem. I post the solution here, but the credits go to Marc Abramowitz cygrunsrv --install "rsyncd" --path /usr/bin/rsync --args "--daemon --no-detach" --desc "Starts a rsync daemon for accepting incoming rsync connections" --disp "Rsync Daemon" --type auto Of course you need Cygwin with rsync.
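For anyone following the last answer, here is roughly what the daemon side and a client call look like once rsyncd is running as a Cygwin service (the module name, paths and host are placeholders):
# /etc/rsyncd.conf on the Windows/Cygwin box
use chroot = false
log file = /var/log/rsyncd.log

[backup]
    path = /cygdrive/c/backup
    read only = false

# From another machine, the double-colon syntax talks the rsync protocol
# directly (TCP 873), so no ssh pipe is involved:
rsync -av somedir/ winbox::backup/somedir/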
{ "language": "en", "url": "https://stackoverflow.com/questions/106544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Performance of SOAP vs. XML-RPC or REST The arguments about the simplicity of solutions using XML-RPC or REST are easy to understand and hard to argue with. I have often also heard arguments that the increased overhead of SOAP may significantly impact used bandwidth and possibly even latency. I would like to see the results of a test that quantifies the impact. Any one know a good source for such information? A: SOAP and any other protocol which uses XML generally bloats your messages quite a bit - this may or may not be a problem depending on the context. Something like JSON would be more compact and maybe faster to serialise / deserialise - but don't use it exclusively for that reason. Do whatever you feel makes sense at the time and change it if it's a problem. Anything which uses HTTP typically (Unless it's reusing a HTTP 1.1 keepalive connection, which many implementations do not) starts up a new TCP connection for each request; this is quite bad, especially over high latency links. HTTPS is much worse. If you have a lot of short requests from one sender to one receiver, think about how you can take this overhead out. Using HTTP for any kind of RPC (whether SOAP or something else) is always going to incur that overhead. Other RPC protocols generally allow you to keep a connection open. A: The main impact in speed of SOAP vs. REST has not to do with wire speed, but with cachability. REST suggests using the web's semantics instead of trying to tunnel over it via XML, so RESTful web services are generally designed to correctly use cache headers, so they work well with the web's standard infrastructure like caching proxies and even local browser caches. Also, using the web's semantics means that things like ETags and automatic zip compression are well understood ways to increase efficiency. ..and now you say you want benchmarks. Well, with Google's help, I found one guy whose testing shows REST to be 4-6x faster than SOAP and another paper that also favors REST. A: There are a few studies which have been done regarding this which you might find informative. Please see the following: * *SO: Rest vs Soap Performance *Q&A: SOAP and REST 101 *Updated SQL Server Data Services: Northwind REST and SOAP Uploads There is also an (somewhat out of date) interesting performance conversation about the topic at the MSDN Forums. In short - most of these sources seem to agree that SOAP and REST are roughly the same performance for general-purpose data. Some results, however, seem to indicate that with binary data, REST may actually be less performant. See the links in the forum I linked for more detail on this. A: Expanding on "pjz" 's answer. If you're getting a lot of INFORMATION(get* type of calls) based SOAP operations, currently there is no way you can cache them. But if you were to implement these same operations using REST, there is a possibility that the data(depends on your business context) can be cached, as mentioned above. Because SOAP uses POST for its operations, it cannot cache the information at the server side. A: SOAP is definitely slower. Payloads are significantly larger which are slower to assemble, transport, parse, validate and process. 
A: I don't know of any answer to the benchmarking question, however, what I do know about the SOAP format is yes, it does have overhead, but that overhead does not increase per request: if you have one element sent to the web service, you have overhead + one element construction, and if you have 1000 elements sent to the web service, you have overhead + 1000 element construction. The overhead occurs as the XML request is formatted for the particular operation, but every individual argument element in the request is formatted the same. If you stick to repeatable, short bursts of data (say, 500 elements), the speed should be acceptible. A: REST as a protocol does not define any form of message envelope, while SOAP does have this standard. Therefor, its somewhat simplistic to try and compare the two, they are apples to oranges. That said, a SOAP envelope (minus the data) is only a few k, so there shouldn't be any noticeable difference in speed provided you are retrieving a serialized object via both SOAP and REST. A: I guess the main question here is how compares RPC with SOAP. they both serve the same approach of communication abstraction by having stub objects you operate with and primitive/complex data types you get back without really knowing how this all is handled underneath. I would always prefer (JSON-)RPC because * *it's lightweight *there are many great implementations for all programming languages out there *it's simple to learn/use/create *it's fast (especially with JSON) although there are reasons you should use SOAP, i.e. if you need naming parameters instead of relying on their correct order some more details you get from this stackoverflow question
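To make the cacheability point above concrete, here is an illustrative pair of wire exchanges (hosts and payloads are made up). A RESTful GET can be revalidated by any intermediate cache with a conditional request:
GET /customers/42 HTTP/1.1
Host: api.example.com
If-None-Match: "a1b2c3"

HTTP/1.1 304 Not Modified
ETag: "a1b2c3"
Cache-Control: max-age=300
versus the same lookup over SOAP, which every cache has to treat as an opaque POST and pass through untouched:
POST /CustomerService HTTP/1.1
Host: api.example.com
Content-Type: text/xml; charset=utf-8
SOAPAction: "urn:GetCustomer"

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetCustomer><id>42</id></GetCustomer>
  </soap:Body>
</soap:Envelope>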
{ "language": "en", "url": "https://stackoverflow.com/questions/106546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: How do I get USB disk drive inserted and removed events in a service without hearing my hard drive being accessed repeatedly on some O/S's? I use this code in my Windows Service to be notified of USB disk drives being inserted and removed: WqlEventQuery query = new WqlEventQuery("__InstanceOperationEvent", "TargetInstance ISA 'Win32_LogicalDisk' AND TargetInstance.DriveType=2"); query.WithinInterval = TimeSpan.FromSeconds(1); _deviceWatcher = new ManagementEventWatcher(query); _deviceWatcher.EventArrived += new EventArrivedEventHandler(OnDeviceEventArrived); _deviceWatcher.Start(); It works on XP and Vista, but on XP I can hear the very noticeable sound of the hard drive being accessed every second. Is there another WMI query that will give me the events without the sound effect? A: Not sure if this applies to your case but we've been using RegisterDeviceNotification in our C# code (which I can't post here) to detect when USB devices are plugged in. There's a handful of native functions you have to import but it generally works well. Easiest to make it work in C++ first and then see what you have to move up into C#. There's some stuff on koders Code search that appears to be a whole C# device management module that might help: http://www.koders.com/csharp/fidEF5C6B3E2F46BE9AAFC93DB75515DEFC46DB4101.aspx A: Try looking for the InstanceCreationEvent, which will signal the creation of a new Win32_LogicalDisk instance. Right now you're querying for instance operations, not creations. You should know that the query interval on those events is pretty long - it's possible to pop a USB in and out faster that you'll detect. A: try this using System; using System.Management; namespace MonitorDrives { class Program { public enum EventType { Inserted = 2, Removed = 3 } static void Main(string[] args) { ManagementEventWatcher watcher = new ManagementEventWatcher(); WqlEventQuery query = new WqlEventQuery("SELECT * FROM Win32_VolumeChangeEvent WHERE EventType = 2 or EventType = 3"); watcher.EventArrived += (s, e) => { string driveName = e.NewEvent.Properties["DriveName"].Value.ToString(); EventType eventType = (EventType)(Convert.ToInt16(e.NewEvent.Properties["EventType"].Value)); string eventName = Enum.GetName(typeof(EventType), eventType); Console.WriteLine("{0}: {1} {2}", DateTime.Now, driveName, eventName); }; watcher.Query = query; watcher.Start(); Console.ReadKey(); } } }
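One of the answers suggests watching __InstanceCreationEvent but does not show the query; a sketch of that variant is below (the handler body is illustrative). Note this is still a polled WQL query with a WITHIN interval, so on its own it will not cure the periodic disk access; the Win32_VolumeChangeEvent example above is an extrinsic event and needs no polling interval.
// Separate creation (insert) watcher for removable logical disks; a matching
// __InstanceDeletionEvent query covers removals.
WqlEventQuery insertQuery = new WqlEventQuery(
    "__InstanceCreationEvent",
    TimeSpan.FromSeconds(2),
    "TargetInstance ISA 'Win32_LogicalDisk' AND TargetInstance.DriveType = 2");

ManagementEventWatcher insertWatcher = new ManagementEventWatcher(insertQuery);
insertWatcher.EventArrived += (s, e) =>
{
    ManagementBaseObject disk = (ManagementBaseObject)e.NewEvent["TargetInstance"];
    Console.WriteLine("Inserted: " + disk["DeviceID"]);
};
insertWatcher.Start();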
{ "language": "en", "url": "https://stackoverflow.com/questions/106554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to find the amount of physical memory occupied by a hash in Perl? I have a Perl script where I maintain a very simple cache using a hash table. I would like to clear the hash once it occupies more than n bytes, to avoid Perl (32-bit) running out of memory and crashing. I can do a check on the number of keys-value pairs: if (scalar keys %cache > $maxSize) { %cache = (); } But is it possible to check the actual memory occupied by the hash? A: You can install Devel::Size to find out the memory taken by any construct in Perl. However do be aware that it will take a large amount of intermediate memory, so I would not use it against a large data structure. I would certainly not do it if you think you may be about to run out of memory. BTW there are a number of good modules on CPAN to do caching in memory and otherwise. Rather than roll your own I would suggest using one of them instead. For instance try Tie::Cache::LRU for an in-memory cache that will only go up to a specified number of keys. A: Devel::Size is the answer to your question. (Note that Devel::Size will temporarily allocate a significant amount of memory when processing a large data structure, so it's not really well suited to this purpose.) However, Cache::SizeAwareMemoryCache and Tie::Cache already implement what you're looking for (with somewhat different interfaces), and could save you from reinventing the wheel. Memoize is a module that makes it simple to cache the return value from a function. It doesn't implement a size-based cache limit, but it should be possible to use Tie::Cache as a backend for Memoize. A: You can use Devel::Size to determine the memory used, but you can't generally give return memory to the OS. It sounds like you're just trying to clear and reuse, though, which should work fine. If the cache is for a function, consider using the Memoize module instead of maintaining the cache yourself. It supports cache expiration (via Memoize::Expire) so you can limit the size of the cache without destroying it entirely. A: You're looking for Devel::Size NAME Devel::Size - Perl extension for finding the memory usage of Perl variables SYNOPSIS use Devel::Size qw(size total_size); my $size = size("A string"); my @foo = (1, 2, 3, 4, 5); my $other_size = size(\@foo); my $foo = {a => [1, 2, 3], b => {a => [1, 3, 4]} }; my $total_size = total_size($foo); A: Cache::Memory use Cache::Memory; my $cache = Cache::Memory->new( namespace => 'MyNamespace', default_expires => '600 sec' ); my $size = $cache->size() my $limit = $cache->size_limit(); A: If you're worrying about managing the amount of memory that Perl is using, you should probably look at an alternative approach. Why do you need that much in RAM all at once? Should you be using some sort of persistence system? A: As others have said, caching is not a wheel you need to re-invent, there's plenty of simple caching solutions on CPAN which will do the job nicely for you. Cache::SizeAwareMemoryCache can be told the maximum size you want it to use, then you can leave it to care about the cache for you.
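A small sketch of the two approaches discussed above; the 10 MB budget and the 500-entry cap are arbitrary numbers to adjust for your machine.
use strict;
use warnings;
use Devel::Size qw(total_size);

my %cache;
my $max_bytes = 10 * 1024 * 1024;    # rough budget

sub cache_put {
    my ($key, $value) = @_;
    $cache{$key} = $value;
    # total_size() walks the whole structure (keys and values), and the walk
    # itself needs memory, so avoid calling it when already close to the limit.
    %cache = () if total_size(\%cache) > $max_bytes;
}

# Or let a tied hash cap the number of entries and evict least-recently-used:
use Tie::Cache::LRU;
tie my %lru_cache, 'Tie::Cache::LRU', 500;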
{ "language": "en", "url": "https://stackoverflow.com/questions/106555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Does the tee command always wait for EOF? I'd like to log the output of a command to stdout as well as to a log file. I've got Cygwin installed and I'm trying to use the tee command to accomplish this. devenv mysolution.sln /build myproject "Release|Win32" | tee build.log Trouble is that tee seems to insist on waiting for the end of file before outputting anything to either stdout or the log file. This takes away the point of it all, which is to have a log file for future reference, but also some stdout logging so I can easily see the build progress. tee's options appear to be limited to --append, --ignore-interrupts, --help, and --version. So is there another method to get to what I'm trying to do? A: You can output to the file and tail -f the file. devenv mysolution.sln /build myproject "Release|Win32" > build.log & tail -f build.log A: Write your own! (The point here is that the autoflush ($|) setting is turned on, so every line seen is flushed straight away. This may perhaps be what the real tee command lacked.) #!/usr/bin/perl -w use strict; use IO::File; $| = 1; my @fhs = map IO::File->new(">$_"), @ARGV; while (my $line = <STDIN>) { print $line; $_->print($line) for @fhs; } $_->close for @fhs; You can call the script anything you want. I call it perlmilktee! :-P A: tee seems to insist on waiting for the end of file before outputting anything to either stdout or the log file. This should definitely not be happening - it would render tee nearly useless. Here's a simple test that I wrote that puts this to the test, and it's definitely not waiting for eof. $ cat test #!/bin/sh echo "hello" sleep 5 echo "goodbye" $ ./test | tee test.log hello <pause> goodbye
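If tee seems to sit on its output, the buffering is usually on the producer's side: many programs switch from line buffering to block buffering as soon as stdout is a pipe. A quick check, plus one possible workaround (stdbuf ships with newer GNU coreutils and only influences programs that use C stdio, so it may not help a native Windows executable such as devenv):
# tee itself streams: "one" prints immediately, "two" three seconds later.
(echo one; sleep 3; echo two) | tee out.log

# Ask a stdio-based producer for line-buffered output before piping into tee.
stdbuf -oL ./some_build_tool | tee build.log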
{ "language": "en", "url": "https://stackoverflow.com/questions/106563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What's a good alternative to the included Terminal program on OS X? My keyboard acts flaky when I use a terminal session on OS X (another question?), so using the command line is often frustrating. Other native applications seem fine and don't suffer the same problem. Is there another terminal application that might work better for me? A: As the name ssh reveals: it is meant to be a shell application :) So there will be no gui or what so ever. If you are just pissed using the terminal ... maybe try iTerm 2. Works pretty well ... A: ZOC would be a (shareware) alternative. GUI, tabbed ... overall more along the lines what you see as terminal apps under Windows. A: Not really an alternative, but you may simply need to set your character set to the correct setting to get it to not be flaky. For instance if the trouble you are seeing is that your deletes aren't deleting, etc. Sam A: If you just want to browse and manipulate files, Cyberduck is able to connect to SSH sessions. When I connect to my server, I still use the Terminal to type commands but use Cyberduck with TextMate to edit the configuration files because using vi is just plain wrong and might actually blow up the universe. A: JellyfiSSH does a nice job of managing SSH connections, setting up tunnels, etc.
{ "language": "en", "url": "https://stackoverflow.com/questions/106572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Porting Windows software to Embedded/Realtime Operating Systems I have an existing codebase targeting a Windows environment and with an eye to the future, would like to make this as cross platform as possible. I've had some sucess with standard Linux distributions by using cross platform libraries but would like to extend this to Realtime and or embedded operating systems. Would it be possible to port the majority of the codebase to such systems, or would it require reimplentations targeted to that environment? If parts need to be recreated, does development for these systems require a different type of design approach? Some vendors supply their own IDE's for development, are these a necessity or can we or is it possible to standardise on a GNU toolchain type build process? A potential pothole could be differences in IPC handling but without further exposure it is difficult to get a handle on the specifics. NB although Windows based presently, there is not particularly heavy use of the Win32 API (mainly COM) or Windows types. Thanks edit:: the codebase is C\C++ A: If the app is mostly C and posix then it isn't too hard. Embedded platforms today can mean an almost full copy of XP or Linux running on a compact flash card. For the gui both QT and WX have embedded versions which draw the widgets directly. A: If you are using the windows COM interface (I assume you're not talking about serial port here, but the Common Object Model), your code might need to be abstracted away from that. As you talk about IPC, then obviously this is a multi-tasking/multi-processing type code base. With that being the case, you will have to somehow come up with a way to deal with the environment difference. First of all, you will need some kind of RTOS since your application is multi-tasking. As you did a port to Linux, you might want to look into using a version of real-time Linux. This would minimize the number of ports you would have to do. If you don't want to use Linux as your embedded platform, make your code POSIX compliant (Linux is) and make sure that the RTOS you choose support POSIX. This way, the port to Linux and the embedded platform would be mostly the same. Bottom line, COM will be your albatros. Since you don't mention the use of a GUI, we won't address that can of worms :) A: The most important step is to separate all OS dependence functions from the project logic. After you do that, you will see immediately how much code you have to port for migration to new OS, and you will be able to start porting nicely. A: Depends on the capabilities of your embedded platform. If it's an 8-bit, you've got a hard road ahead but if it's 32 bit with decent RAM and such, there are a lot of open source cross-platform libraries available. I used DirectFB for my last embedded GUI app, it was lightweight and OK but not cross-platform. Next time I think I'll try out wxWidgets. A: I don't like using GNU dev tools on Windows since MS Dev Studio is sooo much nicer than any GNU tools but recently I've been playing with Wascana Desktop Developer which is based on Eclipse and GCC and it shows promise. A: If you're in the position of specifying which realtime/embedded OS you use, have you considered Windows CE?
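The "separate all OS dependence functions from the project logic" advice above usually ends up as a thin wrapper layer along these lines (names are illustrative; the wrapper would grow matching calls for threads, queues and timers, and the Win32 file is just one of the per-target implementations chosen by the build system):
/* os_abstraction.h - the only header application code includes for OS services */
#ifndef OS_ABSTRACTION_H
#define OS_ABSTRACTION_H

typedef struct os_mutex os_mutex;            /* opaque; layout differs per platform */

os_mutex *os_mutex_create(void);
void      os_mutex_lock(os_mutex *m);
void      os_mutex_unlock(os_mutex *m);
void      os_mutex_destroy(os_mutex *m);

#endif

/* os_win32.c - Win32 implementation; os_posix.c or os_vxworks.c mirror it */
#include "os_abstraction.h"
#include <windows.h>
#include <stdlib.h>

struct os_mutex { CRITICAL_SECTION cs; };

os_mutex *os_mutex_create(void)
{
    os_mutex *m = malloc(sizeof *m);
    if (m != NULL)
        InitializeCriticalSection(&m->cs);
    return m;
}

void os_mutex_lock(os_mutex *m)    { EnterCriticalSection(&m->cs); }
void os_mutex_unlock(os_mutex *m)  { LeaveCriticalSection(&m->cs); }

void os_mutex_destroy(os_mutex *m)
{
    DeleteCriticalSection(&m->cs);
    free(m);
}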
{ "language": "en", "url": "https://stackoverflow.com/questions/106581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What are the principles guiding your exception handling policy? There is a lot of relativity involved in working with exceptions. Beyond low level APIs where exceptions cover errors raised from hardware and the OS there is a shady area where the programmer decides what constitutes an exception and what is a normal condition. How do you decide when to use exceptions? Do you have a consistent policy regarding exceptions? A: This blog entry from Eric Lippert, a Senior Software Design Engineer at Microsoft, sums up an excellent and brief set of exception strategy guidelines. In short: * *Fatal: Terrible errors that indicate your process is totally unrecoverable. Clean up whatever resources you can, but don't catch them. If you're writing code that has the ability to detect such a situation, by all means, throw. Example : Out of memory exception. *Boneheaded: Relatively simple errors that indicate your process can't operate on whatever data it's being handed, but would continue on normally if whatever situation caused the error is simply ignored. These are better known as bugs. Don't throw or catch them, but instead prevent them from happening, usually by passing errors or other meaningful indicators of failure that can be handled by your methods. Example: Null argument exception. *Vexing: Relatively simple errors that code you don't own is throwing at you. You must catch all of these and deal with them, usually in the same way as you would deal with a Boneheaded exception of your own. Please don't throw them right back out again. Example: Format exception from C#'s Int32.Parse() method *Exogenous: Relatively straightforward errors that look a lot like Vexing (from other people's code) or even Boneheaded (from your code) situations, but must be thrown because reality dictates that the code that's throwing them really has no idea how to recover, but the caller probably will. Go ahead and throw these, but when your code receives them from elsewhere, catch them and deal with them. Example: File not found exception. Of the four, the exogenous ones are the ones that you have to think about most to get right. An exception indicating a file is not found is appropriate to throw for an IO library method, in that the method almost certainly will not know what to do should the file not be found, especially given that the situation can occur at any time and that there is no way to detect whether or not the situation is transient. Throwing such an exception would not be appropriate for application-level code, though, because that application can get information from the user on how to proceed. A: * *Never throw exceptions from destructors. *Maintain some basic level of exception guarantees about the state of the object. *Do not use exceptions to communicate errors which can be done using an error code unless it is a truly exception error and you might want the upper layers to know about it. *Do not throw exceptions if you can help it. It slows everything down. *Do not just catch(...) and do nothing. Catch exceptions you know about or specific exceptions. At the very least log what happened. *When in the exception world use RAII because nothing is safe anymore. *Shipping code should not have suppressed exceptions at least with regards to memory. *When throwing exceptions package as much information as is possible along with it so that the upper layers have enough information to debug them. *Know about flags that can cause libraries like STL to throw exceptions instead of exhibiting unknown behaviour (e.g. 
invalid iterators / vector subscript overflow). *Catch references instead of copies of the exception object. *Take special care with reference-counted objects like COM and wrap them in reference-counted pointers when working with code that could throw exceptions. *If a piece of code throws an exception more than 2% of the time consider making it an error code for performance's sake. *Consider not throwing exceptions from undecorated dll exports / C interfaces because some compilers optimize by assuming C code does not throw exceptions. *If all that you do for handling exceptions is something akin to the code below, then don't use exception handling at all. You do not need it. main() { try { /* all code */ } catch(...) {} } A: Exceptions should not be used as a method of passing information internally between methods inside your object; locally you should use error codes and defensive programming. Exceptions are designed to pass control from a point where an error is detected to a place (higher up the stack) where the error can be handled, presumably because the local code does not have enough context to correct the problem and something higher up the stack will have more context and thus be able to better organize a recovery. When considering exceptions (in C++ at least) you should consider the exception guarantees that your API makes. The minimum level of guarantee should be the Basic guarantee though you should strive (where appropriate) to provide the strong guarantee. In cases where you use no external dependencies from a particular API you may even try to provide the no-throw guarantee. N.B. Do not confuse exception guarantees with exception specifications. Exception Guarantees: No Guarantee: There is no guarantee about the state of the object after an exception escapes a method. In these situations the object should no longer be used. Basic Guarantee: In nearly all situations this should be the minimum guarantee a method provides. This guarantees the object's state is well defined and can still be consistently used. Strong Guarantee: (aka Transactional Guarantee) This guarantees that the method will complete successfully, or an exception will be thrown and the object's state will not change. No Throw Guarantee: The method guarantees that no exceptions are allowed to propagate out of the method. All destructors should make this guarantee. N.B. If an exception escapes a destructor while another exception is already propagating, the application will terminate. A: Exceptions are expensive in processing time, so they should only be thrown when something happens that really shouldn't happen in your app. Sometimes you can predict what kind of things might happen and code to recover from them, in which case it is appropriate to throw and catch an exception, log and recover, then continue. Otherwise they should just be used to handle the unexpected and exit gracefully, while capturing as much information as possible to help with debugging. I'm a .NET developer, and for catch and throw, my approach is: * *Only try/catch in public methods (in general; obviously if you are trapping for a specific error you would check for it there) *Only log in the UI layer right before suppressing the error and redirecting to an error page/form. A: The context this answer is given in is the Java language. For normal errors that may pop up, we handle those directly (such as returning immediately if something is null, empty, etc). We only use an actual exception for exceptional situations. However, we do not throw checked exceptions, ever.
We subclass RuntimeException for our own specific exceptions, catching them where applicable directly, and as for exceptions that are thrown by other libraries, the JDK API, etc, we do try/catch internally and either log the exception (if something happened that really shouldn't have and you have no way of recovering like a file not found exception for a batch job) or we wrap the exception in a RuntimeException and then throw it. On the outside of the code, we rely on an exception handler to eventually catch that RuntimeException, be that the JVM or the web container. The reason this is done is that it avoids creating forced try/catch blocks everywhere where you might have four instances of calling a method but only one can actually handle the exception. This seems to be the rule, not the (no pun intended...ouch) exception, so if that fourth one can handle it, it can still catch it and examine the root cause of the exception to get the actual exception that occurred (without worrying about the RuntimeException wrapper). A: I think there's usually a good way to determine exceptions based on access to resources, integrity of data and the validity of data. Access Exceptions * *Creating or connecting to any kind of connection (remote, local). * *Occurs in: Databases, Remoting *Reasons: Non-existent, Already in Use or Unavailable, Insufficient/Invalid Credentials *Opening, Reading or Writing to any kind of resource * *Occurs in: File I/O, Database *Reasons: Locked, Unavailable, Insufficient/Invalid credentials Integrity of Data * *There could be many cases where the integrity of the data matters * *What it references, what it contains... *Look for resources on the methods or code that requires a set of criteria for the data to be clean and in a valid format. *Example: Trying to parse a string with the value 'bleh' to a number. Validity of Data * *Is this the correct data provided? (It's in the right format, but it might not be the correct set of parameters for a given situation) * *Occurs in: Database Queries, Transactions, Web Services *Example: Submitting a row to a database and violating a constraint Obviously there are other cases, but these are usually the ones I try to abide by where its needed. A: I believe that the best way to use exceptions depends on which computer language you are using. For instance Java has a much more solid implementation of exceptions than C++. If you are using C++, I recommend that you at least try to read what Bjarne Stroustrup (the inventor of C++) has to say about exception safety. Refer to appendix E of his book "The C++ programming language". He spends 34 pages trying to explain how to work with exceptions in a safe way. If you do understand his advice then that should be all that you need to know. A: Others may have to correct/clarify this, but there's a strategy called (I believe) "contract-driven development", where you explicitly document in your public interface what the expected preconditions are for each method, and the guaranteed post-conditions. Then, when implementing the method, any error which prevents you from meeting the post-conditions in the contract should result in a thrown exception. Failure to meet preconditions is considered a program error and should cause the program to abort. I'm not sure if contract-driven development speaks to the question of catching exceptions, but in general you should only catch exceptions that you expect and can reasonably recover from. 
For instance, most code cannot meaningfully recover from an Out Of Memory exception, so there is no point in catching it. On the other hand, if you are trying to open a file for writing, you can (and should) handle the case that the file is exclusively locked by another process, or the case that the file has been deleted (even if you checked its existence before trying to open it). As noted by another commenter, you should also avoid using exceptions to handle expected conditions that can be expected and avoided. For instance, in the .NET framework, int.TryParse is preferable to int.Parse with a try/catch, especially when used in a loop or such. A: this article from bea(now oracle) is a good exposition on how to go about it : http://www.oracle.com/technology/pub/articles/dev2arch/2006/11/effective-exceptions.html. It kinda assumes Java but you should be able to use it for other environments as well. A: As a C++ developer, my own policy is not to throw exceptions from what I consider to be public apis to my classes/modules (in fact, a requirement with COM). However, I use exceptions extensively in private class implementation. For example, working with ATL: HRESULT Foo() { HRESULT hr = S_OK; try { // Avoid a whole lot of nested ifs and return code // checking - internal stuff just throws. DoStuff(); DoMoreStuff(); // etc. } catch ( CAtlException& e ) { hr = e; } return hr; } void DoSomething() { // If something goes wrong, AtlThrow( E_FAILED or E_WHATEVER ); } A: My policy on exception handling can be found at: http://henko.net/imperfection/exception-handling-policy-throwing-exception/. (Hope it's not against the rules to promote a web site, but it is a bit too much information to paste here.) A: Aren't exceptions raised by the language environment in accordance with the spec. of the language being used if indeed it does have the concept of exceptions? I'm thinking of "divide by zero" in Java, or CONSTRAINT_ERROR in Ada vs. nothing at all in C. How can a programmer "decide" to use exceptions after selecting a programming language that has exceptions defined within its makeup? Edit: Or rather than "using" exceptions, do you mean when to have a cohesive and consistent policy about "handling" exceptions? Edit2: You might like to check out the free chapter from Steven Dewhurst's book "C++ Gotchas", specifically Gotcha 64 and Gotcha 65. Though it is focused on C++, the lessons involved are useful in other languages.
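As a concrete illustration of the "strong guarantee" discussed in the C++ answers above, the usual shape is to do all the work on a copy and commit with a no-throw swap; the AddressBook class below is invented for the example.
#include <stdexcept>
#include <string>
#include <vector>

class AddressBook {
public:
    // Strong guarantee: either every entry is imported, or an exception
    // escapes and the object is left exactly as it was.
    void import_entries(const std::vector<std::string>& incoming)
    {
        std::vector<std::string> staged(entries_);             // work on a copy (may throw)
        staged.insert(staged.end(), incoming.begin(), incoming.end());
        validate(staged);                                       // may throw
        entries_.swap(staged);                                  // no-throw commit
    }

private:
    static void validate(const std::vector<std::string>& entries)
    {
        for (std::vector<std::string>::const_iterator it = entries.begin();
             it != entries.end(); ++it)
        {
            if (it->empty())
                throw std::invalid_argument("empty address book entry");
        }
    }

    std::vector<std::string> entries_;
};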
{ "language": "en", "url": "https://stackoverflow.com/questions/106586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: What is the volatile keyword useful for? At work today, I came across the volatile keyword in Java. Not being very familiar with it, I found this explanation. Given the detail in which that article explains the keyword in question, do you ever use it or could you ever see a case in which you could use this keyword in the correct manner? A: Assume that a thread modifies the value of a shared variable, if you didn't use volatile modifier for that variable. When other threads want to read this variable's value, they don't see the updated value because they read the variable's value from the CPU's cache instead of RAM memory. This problem also known as Visibility Problem. By declaring the shared variable volatile, all writes to the counter variable will be written back to main memory immediately. Also, all reads of the counter variable will be read directly from main memory. public class SharedObject { public volatile int sharedVariable = 0; } With non-volatile variables there are no guarantees about when the Java Virtual Machine (JVM) reads data from main memory into CPU caches, or writes data from CPU caches to main memory. This can cause several problems which I will explain in the following sections. Example: Imagine a situation in which two or more threads have access to a shared object which contains a counter variable declared like this: public class SharedObject { public int counter = 0; } Imagine too, that only Thread 1 increments the counter variable, but both Thread 1 and Thread 2 may read the counter variable from time to time. If the counter variable is not declared volatile there is no guarantee about when the value of the counter variable is written from the CPU cache back to main memory. This means, that the counter variable value in the CPU cache may not be the same as in main memory. This situation is illustrated here: The problem with threads not seeing the latest value of a variable because it has not yet been written back to main memory by another thread, is called a "visibility" problem. The updates of one thread are not visible to other threads. A: volatile has semantics for memory visibility. Basically, the value of a volatile field becomes visible to all readers (other threads in particular) after a write operation completes on it. Without volatile, readers could see some non-updated value. To answer your question: Yes, I use a volatile variable to control whether some code continues a loop. The loop tests the volatile value and continues if it is true. The condition can be set to false by calling a "stop" method. The loop sees false and terminates when it tests the value after the stop method completes execution. The book "Java Concurrency in Practice," which I highly recommend, gives a good explanation of volatile. This book is written by the same person who wrote the IBM article that is referenced in the question (in fact, he cites his book at the bottom of that article). My use of volatile is what his article calls the "pattern 1 status flag." If you want to learn more about how volatile works under the hood, read up on the Java memory model. If you want to go beyond that level, check out a good computer architecture book like Hennessy & Patterson and read about cache coherence and cache consistency. A: volatile is very useful to stop threads. Not that you should be writing your own threads, Java 1.6 has a lot of nice thread pools. But if you are sure you need a thread, you'll need to know how to stop it. 
The pattern I use for threads is: public class Foo extends Thread { private volatile boolean close = false; public void run() { while(!close) { // do work } } public void close() { close = true; // interrupt here if needed } } In the above code segment, the thread reading close in the while loop is different from the one that calls close(). Without volatile, the thread running the loop may never see the change to close. Notice how there's no need for synchronization A: You'll need to use 'volatile' keyword, or 'synchronized' and any other concurrency control tools and techniques you might have at your disposal if you are developing a multithreaded application. Example of such application is desktop apps. If you are developing an application that would be deployed to application server (Tomcat, JBoss AS, Glassfish, etc) you don't have to handle concurrency control yourself as it already addressed by the application server. In fact, if I remembered correctly the Java EE standard prohibit any concurrency control in servlets and EJBs, since it is part of the 'infrastructure' layer which you supposed to be freed from handling it. You only do concurrency control in such app if you're implementing singleton objects. This even already addressed if you knit your components using frameworkd like Spring. So, in most cases of Java development where the application is a web application and using IoC framework like Spring or EJB, you wouldn't need to use 'volatile'. A: volatile only guarantees that all threads, even themselves, are incrementing. For example: a counter sees the same face of the variable at the same time. It is not used instead of synchronized or atomic or other stuff, it completely makes the reads synchronized. Please do not compare it with other java keywords. As the example shows below volatile variable operations are also atomic they fail or succeed at once. package io.netty.example.telnet; import java.util.ArrayList; import java.util.List; public class Main { public static volatile int a = 0; public static void main(String args[]) throws InterruptedException{ List<Thread> list = new ArrayList<Thread>(); for(int i = 0 ; i<11 ;i++){ list.add(new Pojo()); } for (Thread thread : list) { thread.start(); } Thread.sleep(20000); System.out.println(a); } } class Pojo extends Thread{ int a = 10001; public void run() { while(a-->0){ try { Thread.sleep(1); } catch (InterruptedException e) { e.printStackTrace(); } Main.a++; System.out.println("a = "+Main.a); } } } Even you put volatile or not results will always differ. But if you use AtomicInteger as below results will be always same. This is same with synchronized also. package io.netty.example.telnet; import java.util.ArrayList; import java.util.List; import java.util.concurrent.atomic.AtomicInteger; public class Main { public static volatile AtomicInteger a = new AtomicInteger(0); public static void main(String args[]) throws InterruptedException{ List<Thread> list = new ArrayList<Thread>(); for(int i = 0 ; i<11 ;i++){ list.add(new Pojo()); } for (Thread thread : list) { thread.start(); } Thread.sleep(20000); System.out.println(a.get()); } } class Pojo extends Thread{ int a = 10001; public void run() { while(a-->0){ try { Thread.sleep(1); } catch (InterruptedException e) { e.printStackTrace(); } Main.a.incrementAndGet(); System.out.println("a = "+Main.a); } } } A: While I see many good Theoretical explanations in the answers mentioned here, I am adding a practical example with an explanation here: 1. 
CODE RUN WITHOUT VOLATILE USE public class VisibilityDemonstration { private static int sCount = 0; public static void main(String[] args) { new Consumer().start(); try { Thread.sleep(100); } catch (InterruptedException e) { return; } new Producer().start(); } static class Consumer extends Thread { @Override public void run() { int localValue = -1; while (true) { if (localValue != sCount) { System.out.println("Consumer: detected count change " + sCount); localValue = sCount; } if (sCount >= 5) { break; } } System.out.println("Consumer: terminating"); } } static class Producer extends Thread { @Override public void run() { while (sCount < 5) { int localValue = sCount; localValue++; System.out.println("Producer: incrementing count to " + localValue); sCount = localValue; try { Thread.sleep(1000); } catch (InterruptedException e) { return; } } System.out.println("Producer: terminating"); } } } In the above code, there are two threads - Producer and Consumer. The producer thread iterates over the loop 5 times (with a sleep of 1000 milliSecond or 1 Sec) in between. In every iteration, the producer thread increases the value of sCount variable by 1. So, the producer changes the value of sCount from 0 to 5 in all iterations The consumer thread is in a constant loop and print whenever the value of sCount changes until the value reaches 5 where it ends. Both the loops are started at the same time. So both the producer and consumer should print the value of sCount 5 times. OUTPUT Consumer: detected count change 0 Producer: incrementing count to 1 Producer: incrementing count to 2 Producer: incrementing count to 3 Producer: incrementing count to 4 Producer: incrementing count to 5 Producer: terminating ANALYSIS In the above program, when the producer thread updates the value of sCount, it does update the value of the variable in the main memory(memory from where every thread is going to initially read the value of variable). But the consumer thread reads the value of sCount only the first time from this main memory and then caches the value of that variable inside its own memory. So, even if the value of original sCount in main memory has been updated by the producer thread, the consumer thread is reading from its cached value which is not updated. This is called VISIBILITY PROBLEM . 2. CODE RUN WITH VOLATILE USE In the above code, replace the line of code where sCount is declared by the following : private volatile static int sCount = 0; OUTPUT Consumer: detected count change 0 Producer: incrementing count to 1 Consumer: detected count change 1 Producer: incrementing count to 2 Consumer: detected count change 2 Producer: incrementing count to 3 Consumer: detected count change 3 Producer: incrementing count to 4 Consumer: detected count change 4 Producer: incrementing count to 5 Consumer: detected count change 5 Consumer: terminating Producer: terminating ANALYSIS When we declare a variable volatile, it means that all reads and all writes to this variable or from this variable will go directly into the main memory. The values of these variables will never be cached. As the value of the sCount variable is never cached by any thread, the consumer always reads the original value of sCount from the main memory(where it is being updated by producer thread). So, In this case the output is correct where both the threads prints the different values of sCount 5 times. In this way, the volatile keyword solves the VISIBILITY PROBLEM . 
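As a companion to the demonstration above, the sketch below (invented class and field names, not part of the original example) shows the other half of what volatile buys you: a write to a volatile field also publishes the plain writes made before it, because of the happens-before edge between the volatile write and a later volatile read.

public class SafePublicationSketch {
    static int payload;                    // plain, non-volatile field
    static volatile boolean ready = false; // volatile flag

    public static void main(String[] args) {
        new Thread() {
            public void run() {
                payload = 42;   // ordinary write...
                ready = true;   // ...published by the volatile write
            }
        }.start();

        while (!ready) {
            // spin until the volatile read sees the flag
        }
        // The volatile write/read pair creates a happens-before edge,
        // so payload is guaranteed to be 42 here even though it is not volatile itself.
        System.out.println(payload);
    }
}

Without volatile on ready, both the loop termination and the value of payload would be up for grabs.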
A: A variable declared with volatile keyword, has two main qualities which make it special. * *If we have a volatile variable, it cannot be cached into the computer's(microprocessor) cache memory by any thread. Access always happened from main memory. *If there is a write operation going on a volatile variable, and suddenly a read operation is requested, it is guaranteed that the write operation will be finished prior to the read operation. Two above qualities deduce that * *All the threads reading a volatile variable will definitely read the latest value. Because no cached value can pollute it. And also the read request will be granted only after the completion of the current write operation. And on the other hand, * *If we further investigate the #2 that I have mentioned, we can see that volatile keyword is an ideal way to maintain a shared variable which has 'n' number of reader threads and only one writer thread to access it. Once we add the volatile keyword, it is done. No any other overhead about thread safety. Conversly, We can't make use of volatile keyword solely, to satisfy a shared variable which has more than one writer thread accessing it. A: Absolutely, yes. (And not just in Java, but also in C#.) There are times when you need to get or set a value that is guaranteed to be an atomic operation on your given platform, an int or boolean, for example, but do not require the overhead of thread locking. The volatile keyword allows you to ensure that when you read the value that you get the current value and not a cached value that was just made obsolete by a write on another thread. A: Yes, I use it quite a lot - it can be very useful for multi-threaded code. The article you pointed to is a good one. Though there are two important things to bear in mind: * *You should only use volatile if you completely understand what it does and how it differs to synchronized. In many situations volatile appears, on the surface, to be a simpler more performant alternative to synchronized, when often a better understanding of volatile would make clear that synchronized is the only option that would work. *volatile doesn't actually work in a lot of older JVMs, although synchronized does. I remember seeing a document that referenced the various levels of support in different JVMs but unfortunately I can't find it now. Definitely look into it if you're using Java pre 1.5 or if you don't have control over the JVMs that your program will be running on. A: Every thread accessing a volatile field will read its current value before continuing, instead of (potentially) using a cached value. Only member variable can be volatile or transient. A: One common example for using volatile is to use a volatile boolean variable as a flag to terminate a thread. If you've started a thread, and you want to be able to safely interrupt it from a different thread, you can have the thread periodically check a flag. To stop it, set the flag to true. By making the flag volatile, you can ensure that the thread that is checking it will see it has been set the next time it checks it without having to even use a synchronized block. A: There are two different uses of volatile keyword. * *Prevents JVM from reading values from register (assume as cache), and forces its value to be read from memory. *Reduces the risk of memory in-consistency errors. Prevents JVM from reading values in register, and forces its value to be read from memory. 
A busy flag is used to prevent a thread from continuing while the device is busy and the flag is not protected by a lock: while (busy) { /* do something else */ } The testing thread will continue when another thread turns off the busy flag: busy = 0; However, since busy is accessed frequently in the testing thread, the JVM may optimize the test by placing the value of busy in a register, then test the contents of the register without reading the value of busy in memory before every test. The testing thread would never see busy change and the other thread would only change the value of busy in memory, resulting in deadlock. Declaring the busy flag as volatile forces its value to be read before each test. Reduces the risk of memory consistency errors. Using volatile variables reduces the risk of memory consistency errors, because any write to a volatile variable establishes a "happens-before" relationship with subsequent reads of that same variable. This means that changes to a volatile variable are always visible to other threads. The technique of reading, writing without memory consistency errors is called atomic action. An atomic action is one that effectively happens all at once. An atomic action cannot stop in the middle: it either happens completely, or it doesn't happen at all. No side effects of an atomic action are visible until the action is complete. Below are actions you can specify that are atomic: * *Reads and writes are atomic for reference variables and for most primitive variables (all types except long and double). *Reads and writes are atomic for all variables declared volatile (including long and double variables). Cheers! A: “… the volatile modifier guarantees that any thread that reads a field will see the most recently written value.” - Josh Bloch If you are thinking about using volatile, read up on the package java.util.concurrent which deals with atomic behaviour. The Wikipedia post on a Singleton Pattern shows volatile in use. A: Volatile does following. 1> Read and write of volatile variables by different threads are always from memory, not from thread's own cache or cpu register. So each thread always deals with the latest value. 2> When 2 different threads work with same instance or static variables in heap, one may see other's actions as out of order. See jeremy manson's blog on this. But volatile helps here. Following fully running code shows how a number of threads can execute in predefined order and print outputs without using synchronized keyword. thread 0 prints 0 thread 1 prints 1 thread 2 prints 2 thread 3 prints 3 thread 0 prints 0 thread 1 prints 1 thread 2 prints 2 thread 3 prints 3 thread 0 prints 0 thread 1 prints 1 thread 2 prints 2 thread 3 prints 3 To achieve this we may use the following full fledged running code. 
public class Solution { static volatile int counter = 0; static int print = 0; public static void main(String[] args) { // TODO Auto-generated method stub Thread[] ths = new Thread[4]; for (int i = 0; i < ths.length; i++) { ths[i] = new Thread(new MyRunnable(i, ths.length)); ths[i].start(); } } static class MyRunnable implements Runnable { final int thID; final int total; public MyRunnable(int id, int total) { thID = id; this.total = total; } @Override public void run() { // TODO Auto-generated method stub while (true) { if (thID == counter) { System.out.println("thread " + thID + " prints " + print); print++; if (print == total) print = 0; counter++; if (counter == total) counter = 0; } else { try { Thread.sleep(30); } catch (InterruptedException e) { // log it } } } } } } The following github link has a readme, which gives proper explanation. https://github.com/sankar4git/volatile_thread_ordering A: Volatile(vɒlətʌɪl): Easily evaporated at normal temperatures Important point about volatile: * *Synchronization in Java is possible by using Java keywords synchronized and volatile and locks. *In Java, we can not have synchronized variable. Using synchronized keyword with a variable is illegal and will result in compilation error. Instead of using the synchronized variable in Java, you can use the java volatile variable, which will instruct JVM threads to read the value of volatile variable from main memory and don’t cache it locally. *If a variable is not shared between multiple threads then there is no need to use the volatile keyword. source Example usage of volatile: public class Singleton { private static volatile Singleton _instance; // volatile variable public static Singleton getInstance() { if (_instance == null) { synchronized (Singleton.class) { if (_instance == null) _instance = new Singleton(); } } return _instance; } } We are creating instance lazily at the time the first request comes. If we do not make the _instance variable volatile then the Thread which is creating the instance of Singleton is not able to communicate to the other thread. So if Thread A is creating Singleton instance and just after creation, the CPU corrupts etc, all other threads will not be able to see the value of _instance as not null and they will believe it is still assigned null. Why does this happen? Because reader threads are not doing any locking and until the writer thread comes out of a synchronized block, the memory will not be synchronized and value of _instance will not be updated in main memory. With the Volatile keyword in Java, this is handled by Java itself and such updates will be visible by all reader threads. Conclusion: volatile keyword is also used to communicate the content of memory between threads. Example usage of without volatile: public class Singleton { private static Singleton _instance; //without volatile variable public static Singleton getInstance() { if (_instance == null) { synchronized(Singleton.class) { if (_instance == null) _instance = new Singleton(); } } return _instance; } } The code above is not thread-safe. Although it checks the value of instance once again within the synchronized block (for performance reasons), the JIT compiler can rearrange the bytecode in a way that the reference to the instance is set before the constructor has finished its execution. This means the method getInstance() returns an object that may not have been initialized completely. To make the code thread-safe, the keyword volatile can be used since Java 5 for the instance variable. 
Variables that are marked as volatile become visible to other threads only once the constructor of the object has finished its execution completely. Source volatile usage in Java: The fail-fast iterators are typically implemented using a volatile counter on the list object. * *When the list is updated, the counter is incremented. *When an Iterator is created, the current value of the counter is embedded in the Iterator object. *When an Iterator operation is performed, the method compares the two counter values and throws a ConcurrentModificationException if they are different. The implementation of fail-safe iterators is typically light-weight. They typically rely on properties of the specific list implementation's data structures. There is no general pattern. A: No one has mentioned the treatment of read and write operations for the long and double variable types. Reads and writes are atomic operations for reference variables and for most primitive variables, except for the long and double types, which must be declared volatile for their reads and writes to be atomic. @link A: Yes, volatile must be used whenever you want a mutable variable to be accessed by multiple threads. It is not a very common use case because typically you need to perform more than a single atomic operation (e.g. check the variable's state before modifying it), in which case you would use a synchronized block instead. A: Volatile volatile -> synchronized[About] volatile tells the programmer that the value will always be up to date. The problem is that the value can be stored in different kinds of hardware memory. For example, it can be in CPU registers, the CPU cache, or RAM... CPU registers and the CPU cache belong to a single CPU and cannot share data, unlike RAM, which comes to the rescue in a multithreaded environment. The volatile keyword says that a variable will be read and written from/to RAM directly. It has some computational cost. Java 5 extended volatile by supporting happens-before[About]: a write to a volatile field happens-before every subsequent read of that field, so a read that comes after the write will see it. The volatile keyword does not cure a race condition[About]; to solve that, use the synchronized keyword[About]. As a result it is only safe when one thread writes and the others just read the volatile value. A: In my opinion, two important scenarios other than stopping a thread in which the volatile keyword is used are: * *Double-checked locking mechanism. Used often in the Singleton design pattern. In this, the singleton object needs to be declared volatile. *Spurious wakeups. A thread may sometimes wake up from a wait call even if no notify call has been issued. This behavior is called a spurious wakeup. It can be countered by using a condition variable (boolean flag). Put the wait() call in a while loop that runs as long as the flag is true. So if the thread wakes up from the wait call for any reason other than notify/notifyAll, it sees that the flag is still true and calls wait again. Prior to calling notify, set this flag to false so the waiting thread can proceed. In this case the boolean flag is declared as volatile. A: From the Oracle documentation page, the need for volatile variables arises to fix memory consistency issues: Using volatile variables reduces the risk of memory consistency errors, because any write to a volatile variable establishes a happens-before relationship with subsequent reads of that same variable. This means that changes to a volatile variable are always visible to other threads.
It also means that when a thread reads a volatile variable, it sees not just the latest change to the volatile, but also the side effects of the code that led up to the change. As explained in Peter Parker's answer, in the absence of the volatile modifier, each thread may keep its own cached copy of the variable. By making the variable volatile, these memory consistency issues are fixed. Have a look at the Jenkov tutorial page for better understanding. Have a look at this related SE question for some more details on volatile and its use cases: Difference between volatile and synchronized in Java One practical use case: You have many threads which need to print the current time in a particular format, for example: java.text.SimpleDateFormat("HH-mm-ss"). You can have one class which formats the current time with that SimpleDateFormat and updates the variable every second. All other threads can simply use this volatile variable to print the current time in log files. A: Volatile variables are light-weight synchronization. When visibility of the latest data among all threads is required and atomicity can be sacrificed, volatile variables should be preferred. Reads of volatile variables always return the most recent write done by any thread, since they are neither cached in registers nor in caches that other processors cannot see. Volatile is lock-free. I use volatile when the scenario meets the criteria mentioned above. A: A volatile variable is basically used for instant updates: once it is updated, the change is flushed to main memory immediately, so it becomes visible to all worker threads right away. A: If you have a multithreaded system and these multiple threads work on some shared data, those threads will load the data into their own caches. If we do not lock the resource, a change made in one thread is not guaranteed to be visible to another thread. With a locking mechanism, we serialize read/write access to the data source. If one thread modifies the data source, that data will be stored in the main memory instead of in its cache. When other threads need this data, they will read it from the main memory. This increases the latency dramatically. To reduce the latency, we declare variables as volatile. It means that whenever the value of the variable is modified on any of the processors, the other threads will be forced to read the fresh value. There is still some delay, but it is cheaper than full locking. A: The volatile keyword, when used with a variable, will make sure that threads reading this variable see the same value. Now if you have multiple threads reading and writing to a variable, making the variable volatile will not be enough and data will be corrupted. Imagine the threads have read the same value, but each one has made some changes (say incremented a counter); when writing back to memory, data integrity is violated. That is why it is necessary to synchronize access to the variable (different ways are possible). If the changes are done by one thread and the others need only read this value, volatile will be suitable. A: Below is a very simple piece of code to demonstrate the requirement of volatile for a variable which is used to control thread execution from another thread (this is one scenario where volatile is required). // Code to prove the importance of 'volatile' when the state of one thread is being mutated from another thread. // Try running this class with and without 'volatile' for the 'state' property of the Task class.
public class VolatileTest { public static void main(String[] a) throws Exception { Task task = new Task(); new Thread(task).start(); Thread.sleep(500); long stoppedOn = System.nanoTime(); task.stop(); // -----> do this to stop the thread System.out.println("Stopping on: " + stoppedOn); } } class Task implements Runnable { // Try running with and without 'volatile' here private volatile boolean state = true; private int i = 0; public void stop() { state = false; } @Override public void run() { while(state) { i++; } System.out.println(i + "> Stopped on: " + System.nanoTime()); } } When volatile is not used: you'll never see 'Stopped on: xxx' message even after 'Stopping on: xxx', and the program continues to run. Stopping on: 1895303906650500 When volatile used: you'll see the 'Stopped on: xxx' immediately. Stopping on: 1895285647980000 324565439> Stopped on: 1895285648087300 Demo: https://repl.it/repls/SilverAgonizingObjectcode
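One scenario mentioned earlier, the spurious-wakeup condition flag, is easy to get wrong, so here is a minimal sketch with invented names. Note that the synchronized blocks here already guarantee visibility; the volatile modifier is kept only to mirror the wording of that answer.

public class Latch {
    private final Object lock = new Object();
    private volatile boolean released = false;   // the condition flag

    public void await() throws InterruptedException {
        synchronized (lock) {
            while (!released) {   // loop guards against spurious wakeups
                lock.wait();
            }
        }
    }

    public void release() {
        synchronized (lock) {
            released = true;      // change the condition before notifying
            lock.notifyAll();
        }
    }
}

A thread that wakes up without the flag having changed simply re-tests the condition and goes back to waiting.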
{ "language": "en", "url": "https://stackoverflow.com/questions/106591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "788" }
Q: Why are fixnums in Emacs only 29 bits? And why don't they change it? Edit: The reason ask is because I'm new to emacs and I would like to use Emacs as a "programmer calculator". So, I can manipulate 32-bit & 64-bit integers and have them behave as they would on the native machine. A: The remaining 3 bits are used as flags by the Lisp interpreter. (You can get bigger integers by compiling Emacs for a 64-bit machine.) A: Others have commented on why fixnums are only 29 bits wide. But if you want a programmer's calculator, check out calc. It offers arbitrary-precision integers, matrix operations, unit conversions, graphics via gnuplot, statistical functions, financial functions, scientific functions, RPN and algebraic notation, formula simplification... and it's already part of Emacs, so to get started, visit the Info node for "calc" and start at the tutorial. A: The other three bits are used as a tag of the type of object. This used to be so prevalent that a number of CPU architectures included at least some support for tagged integers in their instruction sets: Sparc, Alpha, Burroughs, and the K-Machine for example. Nowadays we let the Lisp runtime deal with tags, without additional hardware support. I'd recommend reading the first link, about Sparc, if you want to get a quick overview of the history. A: Emacs-Lisp is a dynamically-typed language. This means that you need type tags at runtime. If you wanted to work with numbers, you would therefore normally have to pack them into some kind of tagged container that you can point to (i.e. “box” them), as there is no way of distinguishing a pointer from a machine integer at runtime without some kind of tagging scheme. For efficiency reasons, most Lisp implementations do therefore not use raw pointers but what I think is called descriptors. These descriptors are usually a single machine word that may represent a pointer, an unboxed number (a so-called fixnum), or one of various other hard-coded data structures (it's often worth encoding NIL and cons cells specially, too, for example). Now, obviously, if you add the type tag, you don't have the full 32 bits left for the number, so you're left with 26 bits as in MIT Scheme or 29 bits as in Emacs or any other number of bits that you didn't use up for tagging. Some implementations of various dynamic languages reserve multiple tags for fixnums so that they can give you 30-bit or even 31-bit fixnums. SBCL is one implementation of Common Lisp that does this. I don't think the complication that this causes is worth it for Emacs, though. How often do you need fast 30-bit fixnum arithmetic as opposed to 29-bit fixnum arithmetic in a text editor that doesn't even compile its Lisp code into machine code (or does it? I don't remember, actually)? Are you writing a distributed.net client in Emacs-Lisp? Better switch to Common Lisp, then! ;) A: In many Lisp implementations, some of the bits in a word are used for a tag. This lets things like the garbage collector know what is a pointer and what isn't without having to guess. Why do you care how big an Elisp fixnum is? You can open gigantic files as it is. A: I use the Common Lisp interpreter CLISP as a programmer's calculator. Common Lisp has the sanest number handling that I've seen in any programming language; most notably, it has integers of arbitrary size, i.e. bignums, as well as rational numbers. It also has input in arbitrary number bases and bitwise functions for bignums. If you want to calculate from within Emacs, you can run CLISP in an M-x shell. 
As a bonus, the syntax is almost exactly the same as what you would use in Emacs Lisp. A: That is only true for 32 bit architectures, and can be changed based on build options. The other bits are used for tagging the basic data structures. You can use a 64-bit build which has larger integers, and there are packages for arbitrarily large integer arithmetic. Or, you're just asking a rhetorical question trying to sound angry and important...
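If you only want to know what range your own build gives you before reaching for calc or CLISP, evaluating a couple of forms in the *scratch* buffer is enough; most-positive-fixnum and most-negative-fixnum are standard Emacs Lisp constants, and the helper below is just an illustration:

;; Largest and smallest fixnums on this build
;; (e.g. 268435455 on a 29-bit build, far larger on a 64-bit build):
most-positive-fixnum
most-negative-fixnum

;; Older Emacsen overflow silently, so check before trusting plain arithmetic:
(defun fits-in-fixnum-p (n)
  (and (<= n most-positive-fixnum)
       (>= n most-negative-fixnum)))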
{ "language": "en", "url": "https://stackoverflow.com/questions/106597", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What is the most flexible serialization for .NET objects, yet simple to implement? I would like to serialize and deserialize objects without having to worry about the entire class graph. Flexibility is key. I would like to be able to serialize any object passed to me without complete attributes needed throughout the entire object graph. That means that Binary Serialization is not an option as it only works with the other .NET Platforms. I would also like something readable by a person, and thus decipherable by a management program and other interpreters. I've found problems using the DataContract, JSON, and XML Serializers. * *Most of these errors seem to center around Serialization of Lists/Dictionaries (i.e. XML Serializable Generic Dictionary). *"Add any types not known statically to the list of known types - for example, by using the KnownTypeAttribute attribute or by adding them to the list of known types passed to DataContractSerializer." Please base your answers on actual experiences and not theory or reading of an article. A: Have you considered serializing to JSON instead of XML? Json.NET has a really powerful and flexible serializer that has no problems with Hashtables/generic dictionaries and doesn't require any particular attributes. I know because I wrote it :) It gives you heaps of control through various options on the serializer and it allows you to override how a type is serialized by creating a JsonConverter for it. In my opinion JSON is more human readable than XML and Json.NET gives the option to write nicely formatted JSON. Finally the project is open source so you can step into the code and make modifications if you need to. A: If I recall it works something like this with a property: [XmlArray("Foo")] [XmlArrayItem("Bar")] public List<BarClass> FooBars { get; set; } If you serialized this you'd get something like: <Foo> <Bar /> <Bar /> </Foo> Of course, I should probably defer to the experts. Here's more info from MS: http://msdn.microsoft.com/en-us/library/system.xml.serialization.xmlarrayitemattribute.aspx Let me know if that works out for you. A: From your requirements it sounds like Xml Serialization is best. What sort of problems do you have with collections when serializing? If you're referring to not knowing what attributes to use on a List or something similar, you might try the XmlArray attribute on your property. You can definitely serialize a collection. A: The IntermediateSerializer in the XNA Framework is pretty damn cool. You can find a bunch of tutorials on using it at http://blogs.msdn.com/shawnhar A: SOAP Serialization worked well for me, even for objects not marked with [Serializable] A: You'll have problems with collection serialization if objects in the collection contain any reference to other objects in the same collection. If any type of dual-pointing exists, you end up creating a multi-map that cannot be serialized. On every problem I've ever had serializing a custom collection, it was always because of some added functionality that I needed that worked fine as part of a "typical" client-server application, and then failed miserably as part of a consumer-provider-server application. A: Put all the classes you want to serialize into a separate assembly, and then use the sgen tool to generate a serialization assembly to serialize to XML. Use XML attributes to control serialization. 
If you need to customize the serialization assembly (and you will need that to support classes that aren't IXmlSerializable and classes that contain abstract nodes), then instruct sgen to dump the source code into a separate file and then add it to your solution. Then you can modify it as necessary. http://msdn.microsoft.com/en-us/library/bk3w6240(VS.80).aspx FWIW, I've managed to serialize the entire AdsML Framework (over 400 classes) using this technique. It did require a lot of manual customization, but there's no getting around that if you consider the size of the framework. (I used a separate tool to go from XSD to C#) A: Perhaps a more efficient route would be to serialize using the BinaryFormatter As copied from http://blog.paranoidferret.com/index.php/2007/04/27/csharp-tutorial-serialize-objects-to-a-file/ using System.IO; using System.Runtime.Serialization; using System.Runtime.Serialization.Formatters.Binary; public class Serializer { public Serializer() { } public void SerializeObject(string filename, ObjectToSerialize objectToSerialize) { Stream stream = File.Open(filename, FileMode.Create); BinaryFormatter bFormatter = new BinaryFormatter(); bFormatter.Serialize(stream, objectToSerialize); stream.Close(); } public ObjectToSerialize DeSerializeObject(string filename) { ObjectToSerialize objectToSerialize; Stream stream = File.Open(filename, FileMode.Open); BinaryFormatter bFormatter = new BinaryFormatter(); objectToSerialize = (ObjectToSerialize)bFormatter.Deserialize(stream); stream.Close(); return objectToSerialize; } } A: I agree that the DataContract-based serialization methods (to JSON, XML, etc) is a bit more complex than I'd like. If you're trying to get JSON check out http://msdn.microsoft.com/en-us/library/system.web.script.serialization.javascriptserializer.aspx It's part of the MS AJAX extensions. Admittedly it is flagged as Obsolete in .NET 3.5 but ScottGu mentions in his blog comment here (http://weblogs.asp.net/scottgu/archive/2007/10/01/tip-trick-building-a-tojson-extension-method-using-net-3-5.aspx#4301973) that he's not sure why and it should be supported for a bit longer. A: The simplest thing to do is mark your objects with the Serializable attribute and then use a binary formatter to handle the serialization. The entire class graph shouldn't be a problem provided that any contained objects are also marked as Serializable. A: For interoperability we have always used Xml Serialisation and made sure our class was designed from the ground up to do it correctly. We create an XSD schema document and generate a set of classes from that using XSD.exe. This generates partial classes so we then create a set of corresponding partial classes to add the extra methods we want to help us populate the classes and use them in our application (as they are focused on serialising and deserialising and are a bit difficut to use sometimes). A: You should use the NetDataContractSerializer. It covers any kind of object graph and supports generics, lists, polymorphism (the KnownType attribute is not needed here), recursion and etc. The only drawback is that you have to mark all you classes with [Serializable] / [DataContract] attributes, but experience shows that you have to do some sort of manual fine-tuning anyway since not all members should be persisted. Also it serializes into an Xml, though its readability is questionable. We had the same requirements as yours and chose this solution.
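To make the accepted suggestion concrete, here is a small Json.NET round-trip sketch; the Order type and its property names are invented, and the JsonConvert calls shown are the library's basic public API:

using System;
using System.Collections.Generic;
using Newtonsoft.Json;

public class Order
{
    public string Customer { get; set; }
    public Dictionary<string, int> Items { get; set; }
}

public static class Program
{
    public static void Main()
    {
        var order = new Order
        {
            Customer = "Acme",
            Items = new Dictionary<string, int> { { "Widget", 3 }, { "Gadget", 1 } }
        };

        // No [Serializable] or [DataContract] attributes needed; dictionaries just work.
        string json = JsonConvert.SerializeObject(order, Formatting.Indented);

        Order roundTripped = JsonConvert.DeserializeObject<Order>(json);
        Console.WriteLine(roundTripped.Items["Widget"]); // 3
    }
}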
{ "language": "en", "url": "https://stackoverflow.com/questions/106599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: Algorithm to blend gradient filled corners in image I need to put an alpha blended gradient border around an image. My problem is in blending the corners so they are smooth where the horizontal and vertical gradients meet. I believe there is a standard algorithm that solves this problem. I think I even encountered it in school many years ago. But I have been unsuccessful in finding any reference to one in several web searches. (I have implemented a radial fill pattern in the corner, but the transition is still not smooth enough.) My questions: * *If there is a standard algorithm for this problem, what is the name of it, and even better, how is it implemented? *Forgoing any standard algorithm, what's the best way to determine the desired pixel value to produce a smooth gradient in the corners? (Make a smooth transition from the vertical gradient to the horizontal gradient.) EDIT: So imagine I have an image I will insert on top of a larger image. The larger image is solid black and the smaller image is solid white. Before I insert it, I want to blend the smaller image into the larger one by setting the alpha value on the smaller image to create a transparent "border" around it so it "fades" into the larger image. Done correctly, I should have a smooth gradient from black to white, and I do everywhere except the corners and the inside edge. At the edge of the gradient border near the center of the image, the value would be 255 (not transparent). As the border approaches the outside edge, the alpha value approaches 0. In the corners of the image where the vert & horiz borders meet, you end up with what amounts to a diagonal line. I want to eliminate that line and have a smooth transition. What I need is an algorithm that determines the alpha value (0 - 255) for each pixel that overlaps in the corner of an image as the horizontal and vertical edges meet. A: Presumably you're multiplying the two gradients where they overlap, right? Dunno about a standard algorithm. But if you use a sigmoid shaped gradient instead of a linear one, that should eliminate the visible edge where the two overlap. A simple sigmoid function is smoothstep(t) = t*t*(3 - 2*t) where 0 <= t <= 1 A: If you don't need it to be resizable, then you can just use a simple alpha map. However, I once used a simple Gaussian fade, with the mean at the location where I wanted the last fully-opaque pixels to be. If that makes sense.
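To make the multiply-the-two-gradients suggestion concrete, here is a small sketch of an alpha computation using smoothstep; the function and parameter names are mine, and border is the width of the faded edge in pixels:

def smoothstep(t):
    # clamp to [0, 1], then apply the cubic ease
    t = max(0.0, min(1.0, t))
    return t * t * (3 - 2 * t)

def border_alpha(x, y, width, height, border):
    """Alpha (0-255) for pixel (x, y): 255 in the interior, fading to 0 at the edges."""
    fx = smoothstep(min(x, width - 1 - x) / float(border))
    fy = smoothstep(min(y, height - 1 - y) / float(border))
    return int(round(255 * fx * fy))

Because the horizontal and vertical factors are eased before being multiplied, the product has no visible crease along the corner diagonal.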
{ "language": "en", "url": "https://stackoverflow.com/questions/106609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Can Seam 2.0.2sp1 apps run on Tomcat 5.5.9 with JBoss Embedded? I'm trying to run the Tomcat with JBoss Embedded jpa booking example. I run the build and deploy the war. I then get the following error: ERROR [catalina.core.ContainerBase.[Catalina].[localhost].[/jboss-seam-jpa]] Error configuring application listener of class com.sun.faces.config.ConfigureListener java.lang.NoClassDefFoundError: javax/el/CompositeELResolver at java.lang.Class.getDeclaredConstructors0(Native Method) at java.lang.Class.privateGetDeclaredConstructors(Class.java:2357) at java.lang.Class.getConstructor0(Class.java:2671) at java.lang.Class.newInstance0(Class.java:321) at java.lang.Class.newInstance(Class.java:303) at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3618) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4104 I find this class exists in el-api.jar which is not in the classpath. So I add el-api.jar to the WEB-INF/lib directory. I then get the following error: INFO: JSF1048: PostConstruct/PreDestroy annotations present. ManagedBeans methods marked with these annotations will have said annotations processed. Sep 19, 2008 5:37:50 PM com.sun.faces.config.ConfigureListener installExpressionFactory SEVERE: Error Instantiating ExpressionFactory java.lang.ClassNotFoundException: com.sun.el.ExpressionFactoryImpl at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1332) at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1181) at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:319) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:164) at com.sun.faces.config.ConfigureListener.installExpressionFactory(ConfigureListener.java:1521) This library appears to be in el-ri.jar or JSP 2.1 jar. Am I doing something wrong? Is there a place that explains how to run seam applications on tomcat 5.5.x? Any help is greatly appreciated! A: I got this to work. I ran ant tomcat55 under the seam/examples/jpa example. This included the el-.jars needed. I then ran 'ant clean' and 'ant jboss-embeded' and manually copied in all of the el-.jars from the tomcat55 make. This got past my problem above. Now I'm able to start tomcat 5.5.9 with embedded JBoss. I can run the booking example now with no problems. A: have you looked at the docs, there's also some pretty good info on the forums at www.seamframework.org and also the old forums at www.jboss.org.
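When chasing this kind of NoClassDefFoundError/ClassNotFoundException, it can help to check which jar (if any) actually ships the missing class; a throwaway shell loop like the one below works, with the webapp path adjusted to your own deployment:

# Which jar, if any, in the web app provides the class from the stack trace?
for jar in webapps/jboss-seam-jpa/WEB-INF/lib/*.jar; do
  if unzip -l "$jar" | grep -q 'com/sun/el/ExpressionFactoryImpl.class'; then
    echo "found in $jar"
  fi
done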
{ "language": "en", "url": "https://stackoverflow.com/questions/106622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I give java.util.Random a specific seed in thirdparty classes? I have a Java program that loads thirdparty class files (classes I did not write) and executes them. These classes often use java.util.Random, which by default generates random starting seed values every time it gets instantiated. For reasons of reproducability, I want to give these classes the same starting seed every time, changing it only at my discretion. Here are some of the obvious solutions, and why they don't work: * *Use a different Random class in the thirdparty classfiles. The problem here is I only load the class files, and cannot modify the source. *Use a custom classloader to load our own Random class instead of the JVM's version. This approach will not work because Java does not allow classloaders to override classes in the java package. *Swap out the rt.jar's java.util.Random implementation for our own, or putting files into trusted locations for the JVM. These approaches require the user of the application messing with the JVM install on their machine, and are no good. *Adding a custom java.util.Random class to the bootclasspath. While this would technically work, for this particular application, it is impractical because this application is intended for end users to run from an IDE. I want to make running the app convenient for users, which means forcing them to set their bootclasspath is a pain. I can't hide this in a script, because it's intended to be run from an IDE like Eclipse (for easy debugging.) So how can I do this? A: Your option 2 will actually work, with the following directions. You will need ( as anjab said ) to change the bootstrap class path . In the command line of the program you need to add the following: java -Xbootclasspath/p:C:\your\random_impl.jar YourProgram Assuming you're on Windown machine or the path for that matter in any OS. That option adds the classes in jar files before the rt.jar are loaded. So your Random will be loaded before the rt.jar Random class does. The usage is displayed by typing : java -X It displays all the X(tra) features de JVM has. It may by not available on other VM implementations such as JRockit or other but it is there on Sun JVM. -Xbootclasspath/p: prepend in front of bootstrap class path I've use this approach in an application where the default ORB class should be replaced with others ORB implementation. ORB class is part of the Java Core and never had any problem. Good luck. A: Consider modifying the third party libraries to have them use a seen for their Random instances. Though you do not have the source code, you can probably edit the bytecode to do it. One helpful toolkit for doing such is ASM. A: You could use AOP to intercept the calls to Random and twiddle the arg to what you want. Sam A: Although you may not change the classloader trivially for "java.x" and "sun.x" packages, there is a way to reckon class loading (and install a "after class was bytecoded and loaded" listener) of theses classes, so you could set something like the seed after loading the classes from these packages. Hint: Use reflection. Anyway, as long as I don't have further informations what exactly you want to achieve, it's pretty hard to help you here. P.S.: Be aware that "static {}" - blocks may hinder you messing around with seeds, again. A: "Use a custom classloader to load our own Random class instead of the JVM's version. This approach will not work because Java does not allow classloaders to override classes in the java package." 
how about changing the bootclasspath to use your custom Random class? BR, ~A A: Yes, option 2 works: I created two classes for testing purposes, named ThirdPartyClass.java and Random.java, created a jar from ThirdPartyClass.class: jar -cvf tpc.jar ThirdPartyClass.class created a jar from Random.class: jar -cvf rt123.jar Random.class and after that executed the following command: java -Xbootclasspath/p:tpc.jar:rt123.jar -cp . -verbose ThirdPartyClass The output is: seed value for ThirdPartyClass-> 1 Source code of ThirdPartyClass.java:

import java.util.Random;

public class ThirdPartyClass {

    ThirdPartyClass(long seed) {
        System.out.println("seed value for ThirdPartyClass-> " + seed);
    }

    public static void main(String[] args) {
        ThirdPartyClass tpc = new ThirdPartyClass(new Random().nextLong());
    }
}

Source code of Random.java:

package java.util;

import java.io.Serializable;

public class Random extends Object implements Serializable {

    public Random() {
    }

    public Random(long seed) {
    }

    public long nextLong() {
        return 1;
    }
}

Thanks Mahaveer Prasad Mali
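Building on that stub, a replacement that keeps deterministic but realistic output could reimplement the LCG documented in the java.util.Random Javadoc and pin the no-arg constructor to a configurable seed. This is only a sketch: the replay.seed property name is made up, and a real drop-in would also need nextInt(int), nextGaussian(), nextBytes(), the serialization fields, and (on newer JDKs) the stream methods.

package java.util;

public class Random implements java.io.Serializable {
    private static final long MULTIPLIER = 0x5DEECE66DL;   // constants from the
    private static final long INCREMENT  = 0xBL;           // documented LCG
    private static final long MASK       = (1L << 48) - 1;

    private long seed;

    public Random() {
        // Ignore "random" seeding: use a fixed, externally controlled seed instead.
        this(Long.getLong("replay.seed", 42L));
    }

    public Random(long seed) {
        setSeed(seed);
    }

    public synchronized void setSeed(long seed) {
        this.seed = (seed ^ MULTIPLIER) & MASK;
    }

    protected int next(int bits) {
        seed = (seed * MULTIPLIER + INCREMENT) & MASK;
        return (int) (seed >>> (48 - bits));
    }

    public int nextInt()         { return next(32); }
    public long nextLong()       { return ((long) next(32) << 32) + next(32); }
    public boolean nextBoolean() { return next(1) != 0; }
    public double nextDouble()   { return (((long) next(26) << 27) + next(27)) / (double) (1L << 53); }
}

Put on the bootclasspath with -Xbootclasspath/p: as shown above, every run then replays the same sequence until you change -Dreplay.seed.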
{ "language": "en", "url": "https://stackoverflow.com/questions/106623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Memory Management in Objective-C Possible Duplicates: Learn Obj-C Memory Management Where are the best explanations of memory management for iPhone? I come from a C/C++ background and the dynamic nature of Objective-C is somewhat foreign to me, is there a good resource anyone can point me to for some basic memory management techniques in Objective-C? ex. retaining, releasing, autoreleasing For instance, is it completely illegal to use a pointer to an Objective-C object and treat it as an array? Are you forced to use NSArray and NSMutableArray for data structures? I know these are pretty newbie questions, thanks for any help you can offer me. A: If it's an array, feel free to iterate with a pointer. Regular arrays are still governed by C. If it's a NSArray, read the NSArray docs. If they say to do it a particular way, do it that way. When writing for OS X, do it by the book. A: Objective-C is just a superset of C. Anything you can do in C is valid in Objective-C. A: You can certainly use arrays and do your own memory management. The biggest component is that if you're creating anything that's an NSObject subclass, and you create it with a [XXX alloc] method, or if you get it from another copy with [xxx copy], then you've got the responsibility to match that with an associated release. If get a variable from anywhere and intend to keep it around for more than the immediate usage that you're executing through, then make sure you invoke a [... retain] on it. The link http://developer.apple.com/documentation/Cocoa/Conceptual/MemoryMgmt/MemoryMgmt.html has all the details, and is definitely the first place to read. A: Here are the rules: * *If you create an object by calling alloc or copy, you own it and must release it when you're done. *If you didn't create an object, but want it to ensure it sticks around before control returns to the run loop (or, to keep things simple, your method returns), send it a retain message and then release it later when you're done. *If you create an object and want to return it from your method, you are obligated to release it, but you don't want to destroy it before the caller gets a chance to see it. So you send it autorelease instead, which puts it in the Autorelease Pool, which is emptied once control gets back to the program's event loop. If nobody else retains the object, it will be deallocated. Regarding arrays, you are free to do something like this: NSObject *threeObjects[3]; threeObjects[0] = @"a string"; threeObjects[1] = [NSNumber numberWithInt:2]; threeObjects[2] = someOtherObject; Reasons to use NSArray anyway: * *NSArray will take care of retaining objects as you add them and releasing them as you remove them, whereas in a plain C array you will have to do that yourself. *If you are passing an array as a parameter, an NSArray can report the count of objects it contains, with a plain C array you'll need to pass a count along too. *Mixing square bracket meanings on one line feels weird: [threeObjects[0] length] A: Here you go: Application memory management is the process of allocating memory during your program’s runtime, using it, and freeing it when you are done with it. A well-written program uses as little memory as possible. In Objective-C, it can also be seen as a way of distributing ownership of limited memory resources among many pieces of data and code. 
When you have finished working through this guide, you will have the knowledge you need to manage your application’s memory by explicitly managing the life cycle of objects and freeing them when they are no longer needed. Although memory management is typically considered at the level of an individual object, your goal is actually to manage object graphs. You want to make sure that you have no more objects in memory than you actually need... A: It is generally not useful to repeat the basic rules of memory management, since almost invariably you make a mistake or describe them incompletely -- as is the case in the answers provided by 'heckj' and 'benzado'... The fundamental rules of memory management are provided in Apple's documentation in Memory Management Rules. Apropos of the answer from 'www.stray-bits.com': stating that objects returned from "non-owning" methods are "autoreleased" is also at best misleading. You should typically not think in terms of whether or not something is "autoreleased", but simply consider the memory management rules and determine whether by those conventions you own the returned objet. If you do, you need to relinquish ownership... One counter-example (to thinking in terms of autoreleased objects) is when you're considering performance issues related to methods such as stringWithFormat:. Since you typically(1) don't have direct control over the lifetime of these objects, they can persist for a comparatively long time and unnecessarily increase the memory footprint of your application. Whilst on the desktop this may be of little consequence, on more constrained platforms this can be a significant issue. It is therefore considered best practice on all platforms to use the alloc/init pattern, and on more constrained platforms, where possible you are strongly discouraged from using any methods that would lead to autoreleased objects. (1) You can take control by using your own local autorelease pools. For more on this, see Apple's Memory Management Programming Guide. A: Something to be aware of if you use an C-style array to store objects and you decide to use garbage collection is you'll need to allocate that memory with NSAllocateCollectable(sizeof(id)*size, NSScannedOption) and tag that variable as __strong. This way the collector knows that it holds objects and will treat objects stored there as roots during that variables lifetime. A: For instance, is it completely illegal to use a pointer to an Objective C object and treat it as an array? If it's not an array, then yes. Are you forced to use NSArray and NSMutableArray for data structures? No. You can use C arrays, and you should be able to use C++ STL vectors (although I don't use C++, so I don't know specifics of how). But there's no reason not to use NS{,Mutable}Array. Fear not the Cocoa frameworks, for they are your friend. And don't forget the other collection types, such as NS{,Mutable}Set and NS{,Mutable}Dictionary. A: As another fellow newbie, I found the stanford iOS lectures to be very useful: http://itunes.apple.com/itunes-u/developing-apps-for-ios-hd/id395605774 It's good because it shows the concepts in action with demos, and I generally find someone speaking to me absorbs better than just reading. I definitely think it's one of those topics you have to learn and relearn through different sources though....just to hammer it into your head. 
A: It's probably also useful to note that for class messages like NSString + (NSString *)stringWithFormat: (basically, helper messages that allocate an object for you rather than requiring you to allocate the object yourself), the resulting object is auto-released unless you explicitly retain it.
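As a worked illustration of those ownership rules (alloc/copy gives you ownership, convenience constructors hand back autoreleased objects), here is a small manual-retain-count sketch; the class and method names are invented, and it deliberately predates ARC to match the rest of this thread:

#import <Foundation/Foundation.h>

@interface Ride : NSObject {
    NSString *name;
}
- (id)initWithName:(NSString *)aName;
+ (Ride *)rideWithName:(NSString *)aName;   // convenience constructor
@end

@implementation Ride

- (id)initWithName:(NSString *)aName {
    if ((self = [super init])) {
        name = [aName copy];      // we created it with copy, so we own it
    }
    return self;
}

+ (Ride *)rideWithName:(NSString *)aName {
    // The caller did not alloc/copy/retain this, so hand it back autoreleased.
    return [[[Ride alloc] initWithName:aName] autorelease];
}

- (void)dealloc {
    [name release];               // balance the copy in init
    [super dealloc];
}

@end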
{ "language": "en", "url": "https://stackoverflow.com/questions/106627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Why unicode() uses str() on my object only with no encoding given? I start by creating a string variable with some non-ascii utf-8 encoded data on it: >>> text = 'á' >>> text '\xc3\xa1' >>> text.decode('utf-8') u'\xe1' Using unicode() on it raises errors... >>> unicode(text) Traceback (most recent call last): File "<stdin>", line 1, in <module> UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0: ordinal not in range(128) ...but if I know the encoding I can use it as second parameter: >>> unicode(text, 'utf-8') u'\xe1' >>> unicode(text, 'utf-8') == text.decode('utf-8') True Now if I have a class that returns this text in the __str__() method: >>> class ReturnsEncoded(object): ... def __str__(self): ... return text ... >>> r = ReturnsEncoded() >>> str(r) '\xc3\xa1' unicode(r) seems to use str() on it, since it raises the same error as unicode(text) above: >>> unicode(r) Traceback (most recent call last): File "<stdin>", line 1, in <module> UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0: ordinal not in range(128) Until now everything is as planned! But as no one would ever expect, unicode(r, 'utf-8') won't even try: >>> unicode(r, 'utf-8') Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: coercing to Unicode: need string or buffer, ReturnsEncoded found Why? Why this inconsistent behavior? Is it a bug? is it intended? Very awkward. A: The behaviour does seem confusing, but intensional. I reproduce here the entirety of the unicode documentation from the Python Built-In Functions documentation (for version 2.5.2, as I write this): unicode([object[, encoding [, errors]]]) Return the Unicode string version of object using one of the following modes: If encoding and/or errors are given, unicode() will decode the object which can either be an 8-bit string or a character buffer using the codec for encoding. The encoding parameter is a string giving the name of an encoding; if the encoding is not known, LookupError is raised. Error handling is done according to errors; this specifies the treatment of characters which are invalid in the input encoding. If errors is 'strict' (the default), a ValueError is raised on errors, while a value of 'ignore' causes errors to be silently ignored, and a value of 'replace' causes the official Unicode replacement character, U+FFFD, to be used to replace input characters which cannot be decoded. See also the codecs module. If no optional parameters are given, unicode() will mimic the behaviour of str() except that it returns Unicode strings instead of 8-bit strings. More precisely, if object is a Unicode string or subclass it will return that Unicode string without any additional decoding applied. For objects which provide a __unicode__() method, it will call this method without arguments to create a Unicode string. For all other objects, the 8-bit string version or representation is requested and then converted to a Unicode string using the codec for the default encoding in 'strict' mode. New in version 2.0. Changed in version 2.2: Support for __unicode__() added. So, when you call unicode(r, 'utf-8'), it requires an 8-bit string or a character buffer as the first argument, so it coerces your object using the __str__() method, and attempts to decode that using the utf-8 codec. 
Without the utf-8, the unicode() function looks for a __unicode__() method on your object, and not finding it, calls the __str__() method, as you suggested, attempting to use the default codec to convert to unicode. A: unicode does not guess the encoding of your text. If your object can print itself as unicode, define the __unicode__() method that returns a Unicode string. The secret is that unicode(r) is not actually calling __str__() itself. Instead, it's looking for a __unicode__() method. The default implementation of __unicode__() will call __str__() and then attempt to decode it using the ascii charset. When you pass the encoding, unicode() expects the first object to be something that can be decoded -- that is, an instance of basestring. Behavior is weird because it tries to decode as ascii if I don't pass 'utf-8'. But if I pass 'utf-8' it gives a different error... That's because when you specify "utf-8", it treats the first parameter as a string-like object to be decoded. Without it, it treats the parameter as an object to be coerced to unicode. I do not understand the confusion. If you know that the object's text attribute will always be UTF-8 encoded, just define __unicode__() and then everything will work fine.
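A minimal Python 2 sketch of that fix, reusing the names from the question (in Python 3 the problem disappears because str is already Unicode):

# -*- coding: utf-8 -*-
# Give the class a __unicode__ so unicode(obj) works without guessing an encoding.
text = '\xc3\xa1'          # UTF-8 encoded byte string

class ReturnsEncoded(object):
    def __str__(self):
        return text                      # bytes, for legacy callers
    def __unicode__(self):
        return text.decode('utf-8')      # unicode, decoded once, explicitly

r = ReturnsEncoded()
print repr(unicode(r))     # u'\xe1'
print repr(str(r))         # '\xc3\xa1'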
{ "language": "en", "url": "https://stackoverflow.com/questions/106630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Is it safe to use the Administrator Tasks in Central Administration? Strange question, but: Sharepoint 2007 greets you with the Administrator Tasks on the Central Administration after installation. I just wonder if this list is "safe" to be used for my own Administration Tasks? The reason why i'm asking is because I found that Sharepoint uses a lot of "black magic" and unlogical behaviour and breaks rather easily, so I do not want risk breaking anything if i'm entering my own tasks into the task list. A: I wouldn't use that list as it seems to be specially modified with various extra fields and I wouldn't want to misuse those. It may just pay you to create your own administrative tasks list. A: That list is extensible. You can get a reference to that list via the object model: SPAdministrationWebApplication.Local.AdministrativeTasks I understand your concern, but in this case, you are free to add to this list as you wish.
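If you do decide to add your own entries, a rough sketch of doing it through the object model (building on the property mentioned above) could look like this; the Title text is arbitrary, and any columns beyond Title should be checked against the actual list definition before you rely on them:

using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

class AddAdminTask
{
    static void Main()
    {
        // The Administrator Tasks list exposed by Central Administration
        SPList tasks = SPAdministrationWebApplication.Local.AdministrativeTasks;

        SPListItem item = tasks.Items.Add();
        item["Title"] = "Review weekly backup job";
        item.Update();
    }
}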
{ "language": "en", "url": "https://stackoverflow.com/questions/106632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Web-page template where content takes full height of view-port if has 1 line minus footer I am looking for a CSS-based web page template where the main content div occupies the full height of the view port (minus header and footer heights) when its content has few lines. The footer should be at the bottom of the viewport, rather than right below content, where it's more in the middle of the viewport. Content area needs to expand vertically to be joined with the top of footer. If the content requires more space than the viewport, then footer can be at the bottom of the web page (instead of the bottom of view-port) as standard web design. A link to a specific link or sample code appreciated. Don't mention a template site and tell me to do a search there. Must work at least in IE 6 and FF. If JavaScript is required, it's OK as long as if browser doesn't support JS, it defaults to putting the footer at the bottom of the content area without breaking the layout. Sketch for case #1: -------------- <----- header area | | -------------| | small content| | | view-port | | | | -------------| | footer area | | -------------- <----- all other cases: -------------- <----- header area | | -------------| | big content | | | view-port | | | | | | | | | <---- | -------------| footer area | -------------- A: Example here: http://www.rossdmartin.com/aitp/index.htm More in-depth resources: * *http://www.themaninblue.com/experiment/footerStickAlt/ *http://ryanfait.com/sticky-footer/ A: Look for "Footer Stick Alt"... there was a long blog write up on how to make this work. Done by Cameron Adams a.k.a. "The Man in Blue".
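For reference, a minimal version of the footerStickAlt / sticky-footer technique linked above looks roughly like this; the 60px footer height and the element ids are placeholders to adapt to your own layout, and the doubled height rule is the usual IE6 workaround (IE6 treats height as min-height):

<!-- markup -->
<div id="wrapper">
    <div id="header">header area</div>
    <div id="content">small or big content</div>
    <div class="push"></div>
</div>
<div id="footer">footer area</div>

<style type="text/css">
html, body { height: 100%; margin: 0; }
#wrapper {
    min-height: 100%;
    height: auto !important;   /* modern browsers */
    height: 100%;              /* IE6: behaves like min-height */
    margin-bottom: -60px;      /* reserve space for the footer */
}
.push, #footer { height: 60px; }  /* .push keeps content clear of the footer */
</style>

With short content the footer sits at the bottom of the viewport; with long content it follows the content, which matches both sketches in the question.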
{ "language": "en", "url": "https://stackoverflow.com/questions/106646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Simultaneously monitoring multiple log files (over ssh) on Windows? I've used poderosa(a .NET terminal app) to monitor logs on multiple linux/solaris servers. This application is NOT getting currently maintained and I've had several problems with it. I'm wondering what other users do to simultaneously monitor several logs in real-time(as in tail -f logfile). I would like to be able to tab/cascade several ssh tails. A: You could just ssh to one server, and use mutitail from there to tail the logs on all the other servers. A: Ssh to one of the server, run screen on it. You can then split the screen into multiple windows, and each one of them do ssh serverX tail -f /path/to/log/file An incidental advantage to this method is that you don't have to restart the tails each time you connect - instead, you can just reattach to the running screen session. A: You could use Putty Connection Manager to add tabs to PuTTy. Then SSH into the machine twice and tab back and forth. Tutorial on Setting it Up A: From bash you can (save in ~/.bashrc or something): function create-follower () { local _NAME=$1; local _USER=$2; local _HOST=$3; local _PATH=$4; if ! [ "${_NAME}" ]\ || ! [ "${_USER}" ]\ || ! [ "${_HOST}" ]\ || ! [ "${_PATH}" ] ; then { echo "Cannot create log follower." ; echo; echo "Usage: create-follower NAME USER HOST LOG-FILE"; } >&2; return 1 ; fi ; eval "function ${_NAME}(){ ssh ${_USER}@${_HOST} tail -f \"${_PATH}\" & }" } function activate-followers () { if (( $# < 1 )) ; then { echo "You must specify at least one follower to use" ; echo ; echo "Usage:" ; echo " activate-followers follower1 [follower2 ... followerN]"; } >&2; return 1 ; fi ; for FOLLOW in "${@}" ; do ${FOLLOW} ; done ; wait; } function stop-followers () { if [ "$(jobs)" ] ; then kill -9 $(jobs | perl -pe 's/\[([0-9]+)\].*/%$1/') ; fi ; } And then from your shell, define the logs you want to follow: [dsm@localhost:~]$ create-follower test1 user1 localhost /tmp/log-1.txt [dsm@localhost:~]$ create-follower test2 user2 otherhost /tmp/log-2.txt [dsm@localhost:~]$ create-follower test2 user3 remotebox /tmp/log-3.txt Now, activate the followers: [dsm@localhost:~]$ activate-followers test1 test2 test3 To get out of the function use CTRL+C, and to stop the backgrounded processes use: [dsm@localhost:~]$ stop-followers NOTE 1: This assumes public key authentication has been set up for your boxes. NOTE 2: You will have to kill all the jobs that are left running after quitting the activate-followers function. You may want to do this manually as the function provided does a brute force kill on ALL backgrounded jobs NOTE 3: This assumes a working unix-like environment, which you can get by installing cygwin Who says you can't do lisp in shellscript ;-) A: You can checkout in'side log. A Java tool I created, able to read local and distant log files using SSH. It is fairly simple to use. Some more explanations: https://github.com/pschweitz/insidelog/wiki Just download the version corresponding to your operating system, or the native jar release executable within your Java Runtime (requires java 8_40 or higher): https://github.com/pschweitz/insidelog/releases You can find a complete documentation (embedded with and in Github's page as well) A: Two options that pop into my mind first. Choose your favorite SSH app (putty, ssh in cygwin, etc) and log into the machine. 1. SSH for each log (lots of windows open on your machine or tabs depending on your app) 2. SSH once and use screen. 
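To make the multitail and screen suggestions above concrete, here is a hedged sketch; the hostnames, usernames and log paths are placeholders, and key-based SSH authentication is assumed:

# multitail: one local window per remote tail
multitail -l 'ssh user@web1 tail -f /var/log/app.log' \
          -l 'ssh user@web2 tail -f /var/log/app.log'

# screen: one window per host in a detached session named "logs";
# reattach later with `screen -r logs`
screen -dmS logs ssh user@web1 tail -f /var/log/app.log
screen -S logs -X screen ssh user@web2 tail -f /var/log/app.log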
A: If you actually needed to see both logs at the same time, and tabs were out of the question, you could install a Perl script called LogResolveMerge.pl. It will merge two logs together and dump the output to STDOUT. However, it will be resource-intensive, and if your intention is to tail -f the logs, it likely won't be too effective. A: You should be able to do this using Fabric, as documented in https://www.markhneedham.com/blog/2013/01/15/fabric-tailing-log-files-on-multiple-machines/ : fab -P --linewise -H host1,host2,host3 -- tail -f /path/to/logfile
{ "language": "en", "url": "https://stackoverflow.com/questions/106668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to create a new type of entity in Microsoft Robotics Studio 2.0? What I'm trying to do with MRS is to teach myself some basic AI; what I want to do is to make a rocket entity, with things such as vectored exhaust and staging. Anyone have an idea on how to make an entity that can fly? Or do I just need to constantly apply a force upwards? A: Hey TraumaPony, your question looked lonely :) I took a look at an MSDN article about MRS 2.0 here I believe you'll actually need to create a Rocket entity of some kind and then a Thruster entity that it can use. In the article they were able to reuse a DifferentialDrive entity to propel their bot forward. I hope that helps. I'm more or less shooting in the dark since no one else has tried to help ya out yet. Cheers! :) A: I'm just starting with MRS myself - but I think you are on the right track, you need to create a rocket engine entity that you can apply a thrust force to. See Simulation Tutorial 2 - Compose Entities with Simulation Services for an example of creating an entity. You can apply force with Simulation.Physics.PhysicsEntity.ApplyForce(). I think you'd do that in your entity's Update() method. But it depends on whether ApplyForce is actually applying an impulse (a force for that frame only) or really adding a persistent force. I'm assuming it's the former, since I see no way of unapplying it. In that case, Update() is probably the right place. If it is persistent, you only need to do it when thrust levels change. You'll also need to create a Service that partners with the Entity so that you can interact with your rocket, for instance to fire or vector it. There's an example of Service Creation in the same article.
{ "language": "en", "url": "https://stackoverflow.com/questions/106684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How to get PHPMyAdmin to show MySQL warnings? I use PHPMyAdmin for convenience in updating a remote database. But it doesn't show warnings, by default, which recently got me into some embarrassing trouble where I was updating a SET field with string not on its list and not noticing the problem. I'm using 2.11.9.1 (Dreamhost's default install). On the PHPMyAdmin wiki it lists "Display warnings" as a feature of version 2.9.0 and even "Display all warnings" as a feature of 2.10.2 -- but how do I actually turn this on? The documentation isn't great. A: I don't believe Dreamhost gives you access to the configuration file for their installation of phpMyAdmin. However, you can easily make your own installation of phpMyAdmin by downloading the source from their website and just untarring it to the directory you want to access it at (your-domain.com/phpma for example). Then, follow the website's instructions for editing your config file (which should include enabling warnings like you've asked). A: I was just looking for the same thing. When I ran INSERTs using the standard phpMyAdmin 'insert' form, rows would get inserted but a red bar would appear stating any warnings. But when I did a bulk insert, no warnings would appear and a green bar appeared instead just saying the number of rows affected (giving you the impression that it had all gone successfully, when in fact it may not have). I found I had to send the SHOW WARNINGS command manually. For example, when running this query, I put both statements into the phpMyAdmin SQL box. INSERT INTO test2 SELECT * FROM test1; SHOW WARNINGS; This gave a list of warnings like the following... Level Code Message Warning 1265 Data truncated for column 'a' at row 1 Warning 1265 Data truncated for column 'a' at row 3 Warning 1265 Data truncated for column 'b' at row 3 Warning 1366 Incorrect integer value: 'x' for column 'b' at row... Things to note: * *You cannot run the SHOW WARNINGS command later, it will appear empty. It must be in the box with your initial query when you click "Go". This is because MySQL only holds the warnings for the last query you ran. Every time you click a link or button phpMyAdmin runs all sorts of other queries on the DB and so your previous warnings get lost. *phpMyAdmin does NOT support showing multiple results from a custom query. So doing this as one SQL script does NOT work... (as of version 3.4.10.1) INSERT INTO test2 VALUES ('my text', 'something else'); SHOW WARNINGS; # you won't see the warnings from here INSERT INTO test2 VALUES ('my text', 'something else'); SHOW WARNINGS; Although the method above will not work in phpMyAdmin, it SHOULD work fine in the MySQL command line client. So use that if you need to. If you do have multiple inserts and want to show all warnings, you've got to chain them together as a single INSERT statement. For example: INSERT INTO test2 VALUES ('my text', 'something else'), ('my text', 'something else'); SHOW WARNINGS; A: I could be mistaken but if I remember correctly you need to have access to the phpMyAdmin config file to enable it. A: follow the website's instructions for editing your config file (which should include enabling warnings like you've asked). Well yes, it should. But I don't see it in the config file and I don't see it in the page you linked to. I've already looked for information in the obvious places, believe me.
{ "language": "en", "url": "https://stackoverflow.com/questions/106685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: in Rails : Retrieve user input from a form that isn't associated with a Model, use the result within the controller Here's a simplified version of what I'm trying to do : * *Before any other actions are performed, present the user with a form to retrieve a string. *Input the string, and then redirect to the default controller action (e.g. index). The string only needs to exist, no other validations are necessary. *The string must be available (as an instance variable?) to all the actions in this controller. I'm very new with Rails, but this doesn't seem like it ought to be exceedingly hard, so I'm feeling kind of dumb. What I've tried : I have a before_filter redirecting to a private method that looks like def check_string if @string return true else get_string end end the get_string method looks like def get_string if params[:string] respond_to do |format| format.html {redirect_to(accounts_url)} # authenticate.html.erb end end respond_to do |format| format.html {render :action =>"get_string"} # get_string.html.erb end end This fails because i have two render or redirect calls in the same action. I can take out that first respond_to, of course, but what happens is that the controller gets trapped in the get_string method. I can more or less see why that's happening, but I don't know how to fix it and break out. I need to be able to show one form (View), get and then do something with the input string, and then proceed as normal. The get_string.html.erb file looks like <h1>Enter a string</h1> <% form_tag('/accounts/get_string') do %> <%= password_field_tag(:string, params[:string])%> <%= submit_tag('Ok')%> <% end %> I'll be thankful for any help! EDIT Thanks for the replies... @Laurie Young : You are right, I was misunderstanding. For some reason I had it in my head that the instance of any given controller invoked by a user would persist throughout their session, and that some of the Rails magic was in tracking objects associated with each user session. I can see why that doesn't make a whole lot of sense in retrospect, and why my attempt to use an instance variable (which I'd thought would persist) won't work. Thanks to you as well :) A: Part of the problem is that you aren't setting @string. You don't really need the before_filter for this at all, and should just be able to use: def get_string @string = params[:string] || session[:string] respond_to do |format| if @string format.html {redirect_to(accounts_url)} # authenticate.html.erb else format.html {render :action =>"get_string"} # get_string.html.erb end end end If you want the @string variable to be available for all actions, you will need to store it in the session. A: It looks like me like your missing a rails concept. Every single page the user sees is a different request. I might have missunderstood what you are trying to do. 
But it seems to me you want the user to see two pages, * *In the first page they set a string variable *In the second page they see a page that is somehow dependent on the variable set The best way to do this would be to have a before_filter that checks for the existence of the variable and, if it's not set, redirects them to the form, and otherwise continues class MyController < ApplicationController before_filter :require_string def require_string return true if @string #return early if called multiple times in one request if params['string'] or session['string'] #depending on if you set it as a URL or session var @string = (params['string'] or session['string']) return true end #We now know that string is not set redirect_to string_setting_url and return false #the return false prevents any further processing in this request end end This is the basic idea behind how plugins like RestfulAuthentication work. In that case "string" is a login token (the user ID, I think), and it is stored in the session. If you take a look at the login_required method in authenticated_system.rb from RestfulAuthentication, it does basically this, though with a few more error checks and other things added in
{ "language": "en", "url": "https://stackoverflow.com/questions/106711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to make sure a font exists before using it with .NET I have a VB.NET Windows Forms project that at one point paints text directly to onto the form at runtime. Before I paint with the font though, I want to make sure that the font and font-size exists on the user's machine. If they don't, I'll try a few other similar fonts, eventually defaulting with Arial or something. What's the best way to test and validate a font on a user's computer? A: From an MSDN article titled "How To: Enumerate Installed Fonts", I found this code: InstalledFontCollection installedFontCollection = new InstalledFontCollection(); // Get the array of FontFamily objects. FontFamily[] fontFamilies = installedFontCollection.Families; A: Here is one solution, in c#: public partial class Form1 : Form { public Form1() { SetFontFinal(); InitializeComponent(); } /// <summary> /// This method attempts to set the font in the form to Cambria, which /// will only work in some scenarios. If Cambria is not available, it will /// fall back to Times New Roman, so the font is good on almost all systems. /// </summary> private void SetFontFinal() { string fontName = "Cambria"; Font testFont = new Font(fontName, 16.0f, FontStyle.Regular, GraphicsUnit.Pixel); if (testFont.Name == fontName) { // The font exists, so use it. this.Font = testFont; } else { // The font we tested doesn't exist, so fallback to Times. this.Font = new Font("Times New Roman", 16.0f, FontStyle.Regular, GraphicsUnit.Pixel); } } } And here is one method in VB: Public Function FontExists(FontName As String) As Boolean Dim oFont As New StdFont Dim bAns As Boolean oFont.Name = FontName bAns = StrComp(FontName, oFont.Name, vbTextCompare) = 0 FontExists = bAns End Function A: See also this same question that results in this code: private bool IsFontInstalled(string fontName) { using (var testFont = new Font(fontName, 8)) { return 0 == string.Compare( fontName, testFont.Name, StringComparison.InvariantCultureIgnoreCase); } } A: Arial Bold Italic is unlikely to be a font. It's a subclass of the Arial family. Try keeping it simple and test for 'Arial'.
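If you prefer an explicit lookup rather than relying on the font-substitution behaviour shown above, a small helper built on the InstalledFontCollection enumeration from the first answer might look like this sketch (requires .NET 3.5 for LINQ; the class and method names are made up):

using System;
using System.Drawing.Text;
using System.Linq;

static class FontChecker
{
    // Returns true if a font family with the given name is installed
    public static bool IsFontInstalled(string familyName)
    {
        using (var fonts = new InstalledFontCollection())
        {
            return fonts.Families.Any(f =>
                string.Equals(f.Name, familyName, StringComparison.OrdinalIgnoreCase));
        }
    }
}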
{ "language": "en", "url": "https://stackoverflow.com/questions/106712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Anybody got an Actionscript 3 Youtube API / wrapper? Google / Youtube has an API for Youtube for javascript and Actionscript 2. Unfortunately this API is not compatible with Actionscript 3 without a wrapper - which google does not themselves provide. Has anybody got an actionscript 3 wrapper they can make available? A: as3youtubelib A: as3youtubelib is only for the API itself - for searching videos etc. I wanted a wrapper for the youtube video player itself. http://code.google.com/p/youtubechromelesswrapper-as3/ is a new API that allows you to embed in Flash/Flex and uses the Javascript API (via ExternalInterface) to control the movie within Flash. It doesn't have any control buttons, but has the YouTube logo. A: FINALLY! YouTube's official API changed from AS2 to AS3 http://code.google.com/apis/youtube/flash_api_reference.html Note: This documentation was updated in October 2009 to explain how to use AS3 rather than ActionScript 2.0 (AS2). A: or this: http://code.google.com/apis/youtube/articles/tubeloc.html A: Or you can use this wrapper which doesn't have volume issues and multiple instances problems as the one on YouTube: http://www.ovidiudiac.ro/blog/?p=70 A: Try this: http://code.google.com/apis/youtube/flash_api_reference.html A: Yes there is an AS3 WRAPPER HERE http://flashden.net/item/as3-youtube-wrapper-full-xml-site/21685
{ "language": "en", "url": "https://stackoverflow.com/questions/106721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I bundle a Python application including dependencies into a Windows Installer package for distribution? I need to package my Python application, its dependencies, and Python itself into a single MSI installer for distribution to users. The end result should desirably be: * *Python is installed in the standard location *the package and its dependencies are installed in a separate directory (possibly site-packages) *the installation directory should contain the Python files uncompressed, and a standalone executable is not required A: My company uses the free InnoSetup tool. It is a moderately complex program that has tons of flexibility for building installers for Windows. I believe that it creates .exe and not .msi files, however. InnoSetup is not Python-specific, but we have created an installer for one of our products that installs Python along with dependencies to locations specified by the user at install time. A: I've had much better results with dependencies and custom folder structures using PyInstaller, and it lets you find and specify hidden imports and hooks for larger dependencies like numpy and scipy. Also a PITA, though. A: Kind of a dup of this question about how to make a Python script into an executable. It boils down to: py2exe on Windows, Freeze on Linux, and py2app on Mac. A: py2exe will make Windows executables with Python bundled in. A: I use PyInstaller (the svn version) to create a stand-alone version of my program that includes Python and all the dependencies. It takes a little fiddling to get it to work right and include everything (as does py2exe and other similar programs, see this question), but then it works very well. You then need to create an installer. NSIS works great for that and is free, but it creates .exe files, not .msi. If .msi is not necessary, I highly recommend it. Otherwise check out the answers to this question for other options. A: py2exe is the best way to do this. It's a bit of a PITA to use, but the end result works very well. A: OK, I have used py2exe before and it works perfectly except for one thing: the executables it produces only run on Windows machines. I then learned about Jython, which turns a Python script into a .jar file, which, as you know, is executable on any machine that has Java installed (a sufficiently recent version, anyway). Which is great because Unix, Windows, and OS X (most of the time) run Java. That means it's executable on all of those machines, as long as they run Java. No need for "py2mac + py2exe + freeze" just to run on all operating systems. Just Jython. For more information on how it works and how you can use it, click here: http://www.jython.org/
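For reference, the py2exe route mentioned in several answers boils down to a small distutils build script along these lines; the script name is a placeholder, and the resulting dist folder still needs to be wrapped in an installer (InnoSetup, NSIS, or an MSI tool) as described above:

# setup.py -- minimal py2exe build script; run with:  python setup.py py2exe
from distutils.core import setup
import py2exe  # registers the "py2exe" command with distutils

setup(
    console=['myapp.py'],   # use windows=['myapp.py'] instead for a GUI app
)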
{ "language": "en", "url": "https://stackoverflow.com/questions/106725", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "61" }
Q: Most efficient way to programmatically determine if a web domain is available? I am writing some code to determine whether a network domain is registered. For example, I want to check if "Google123.com" is available. There are at least two ways I can do this programmatically, but I'm not sure which is more valid: * *A) In linux/cygwin, I can launch the whois command to see if a domain is in use. *B) In linux/windows, i can launch nslookup command to see if a domain is resolvable. My questions are: * *Is nslookup really a valid way to check for registration? Does every registered domain have a name server and show up in DNS? *What is the fastest way to make this registration check? Any tips or other ways to do it? A: nslookup hits your dns server that's in your system settings. It can be behind the times or not have any dns entry. I would think the best way would be to have a tcp connection to whois.internic.net port 43 (the whois port), pass the name you want to check, and then you should get a response letting you know. If it doesn't exist, you'll get a response like No match for "domainyourchecking.com" A: There are rumours that some of the web sites out there that allow you to search for domains are actually fronts for domain speculators who will buy up the domain as soon as you search for it, and then try to sell it for you. I've never encountered such a scam, but you might want to try a few garbage domain searches on a new site before searching for your dream domain name. A: The problem with whois is that there is no consistent response from different tld's. So if you are only looking for .com or some other specific tld, you're fine. If you start looking at the various ccTlds or other gTlds you may find a lot of special casing in your logic trying to figure out what "available" means in the data returned by the whois command. Whois always returns success to the shell, even when the domain is available. :( A: In regards to #1, no. There is no requirement that registered domains actually have DNS. A: system("whois $domainname"); A: This will give you a quick yes/no, but if you think it is free, and you want it, try to register it, you may find it is already taken.
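A minimal Python 3 sketch of the raw WHOIS query described above; the server and domain are examples, and since different TLDs use different registries and response formats, the "No match" check is only meaningful for the .com/.net registry:

import socket

def whois(domain, server="whois.internic.net", port=43):
    """Send a WHOIS query over TCP port 43 and return the raw response text."""
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall((domain + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("ascii", errors="replace")

response = whois("google123.com")
print("available" if "No match for" in response else "probably registered")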
{ "language": "en", "url": "https://stackoverflow.com/questions/106734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: I want my C# Windows Service to automatically update itself Is there a framework that can be used to enable a C# Windows Service to automatically check for a newer version and upgrade itself? I can certainly write code to accomplish this, but I am looking for a framework that has already been implemented and (most importantly) tested. [edit] Here is a link to a similar question with links to modern projects that help accomplish this: Auto-update library for .NET? A: I've had good experiences with Google Omaha for updating Windows Services. It can generally be used to update any Windows application, but there is a tutorial for using it to update a Windows Service. In comparison to other update frameworks, Omaha is more complex, but also the most powerful. This is because Google (and now also Microsoft) use it to update their browsers on billions of PCs. However, once it's set up and you understand the basics, it is a very nice experience. A: I'm not aware of any frameworks that facilitate solutions to this specific problem. What you could try, though, is separating the business logic of the service from the actual service code into different assemblies. Have the service assembly check for updates to the business logic assembly at regular intervals, copy it from a remote source if necessary, unload the old BL assembly (and perhaps delete it), and then dynamically load the new version (unloading the old assembly is not a trivial task). A: Another possible solution is to have a separate service that runs, stops the other one if there is an update, and then updates the service. You can't have a service update itself because the .dll that is running will not stop. Separating the business logic layer would be a good option. You could also rewrite the main service to run under reflection by a master or control service. This is similar to separating the business logic, and it would just require stopping a thread and then starting it again. I know of no framework that does this. I have done this myself, but that is not a public framework. A: I've been using WyBuild to update my applications (including Windows services) and it's pretty awesome. Really easy to use, and really easy to integrate with existing applications. It's a pretty great automatic-updating framework... http://wyday.com/wybuild/help/automatic-updates/windows-services-console-apps.php http://wyday.com/wybuild/help/silent-update-windows-service.php Note that it is a paid framework (the licence is per developer; a free trial is included) A: The only way to unload types is to destroy the AppDomain. To do this would require separation of your hosting layer from your executing service code - this is pretty complex (sort of like doing keyhole surgery). It may be easier to either a) run a batch task or b) detect updates in the service and then launch a separate process that stops the service, updates assemblies etc., then restarts it. If you're interested in the former, the MSDN patterns and practices folks wrote an app updater block that you can adapt to your service. https://web.archive.org/web/20080506103749/http://msdn.microsoft.com/en-us/library/ms978574.aspx A: In case anyone else is searching for this; I found this link interesting. I have not implemented this solution, but it looks like it might work for me http://www.eggheadcafe.com/articles/20041204.asp A: You can use NetSparkle; it supports .NET Core 3+ (.NET 5+) and .NET Framework 4.5.2+. It comes with a built-in UI or without any UI at all. You can handle all the events by yourself.
It works with an appcast.xml (and also has a utility to generate that), is platform-independent and just works out of the box. Just work through the documentation on their repository and check the example app ;-) A: I was looking into NetSparkle as others have suggested and came across these alternatives. AutoUpdater.NET is my pick due to ease of use and feature set. * *https://github.com/ravibpatel/AutoUpdater.NET *https://github.com/Squirrel/Squirrel.Windows *https://github.com/vslavik/winsparkle A: Could you clarify your question a bit? I'm a bit confused, because as far as I know, you can always overwrite the DLLs the service uses. The copy and restart of the service can easily be made part of your build process.
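A hedged sketch of the "separate updater" pattern described in one of the answers above: a small console program that stops the target service, copies the new binaries over, and starts it again. The service name and paths are placeholders, and real code would add error handling plus the logic that downloads the update in the first place.

using System;
using System.IO;
using System.ServiceProcess;

class ServiceUpdater
{
    static void Main()
    {
        const string serviceName = "MyWindowsService";             // placeholder name
        const string stagingDir  = @"C:\Updates\MyWindowsService"; // where the new build was downloaded
        const string installDir  = @"C:\Program Files\MyWindowsService";

        using (var sc = new ServiceController(serviceName))
        {
            sc.Stop();
            sc.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromSeconds(30));

            // Overwrite the service binaries while the process is not running
            foreach (string file in Directory.GetFiles(stagingDir))
            {
                File.Copy(file, Path.Combine(installDir, Path.GetFileName(file)), true);
            }

            sc.Start();
            sc.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
        }
    }
}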
{ "language": "en", "url": "https://stackoverflow.com/questions/106765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: Unit Testing File Modifications A common task in programs I've been working on lately is modifying a text file in some way. (Hey, I'm on Linux. Everything's a file. And I do large-scale system admin.) But the file the code modifies may not exist on my desktop box. And I probably don't want to modify it if it IS on my desktop. I've read about unit testing in Dive Into Python, and it's pretty clear what I want to do when testing an app that converts decimal to Roman Numerals (the example in DintoP). The testing is nicely self-contained. You don't need to verify that the program PRINTS the right thing, you just need to verify that the functions are returning the right output to a given input. In my case, however, we need to test that the program is modifying its environment correctly. Here's what I've come up with: 1) Create the "original" file in a standard location, perhaps /tmp. 2) Run the function that modifies the file, passing it the path to the file in /tmp. 3) Verify that the file in /tmp was changed correctly; pass/fail unit test accordingly. This seems kludgy to me. (Gets even kludgier if you want to verify that backup copies of the file are created properly, etc.) Has anyone come up with a better way? A: You have two levels of testing. * *Filtering and Modifying content. These are "low-level" operations that don't really require physical file I/O. These are the tests, decision-making, alternatives, etc. The "Logic" of the application. *File system operations. Create, copy, rename, delete, backup. Sorry, but those are proper file system operations that -- well -- require a proper file system for testing. For this kind of testing, we often use a "Mock" object. You can design a "FileSystemOperations" class that embodies the various file system operations. You test this to be sure it does basic read, write, copy, rename, etc. There's no real logic in this. Just methods that invoke file system operations. You can then create a MockFileSystem which dummies out the various operations. You can use this Mock object to test your other classes. In some cases, all of your file system operations are in the os module. If that's the case, you can create a MockOS module with mock version of the operations you actually use. Put your MockOS module on the PYTHONPATH and you can conceal the real OS module. For production operations you use your well-tested "Logic" classes plus your FileSystemOperations class (or the real OS module.) A: For later readers who just want a way to test that code writing to files is working correctly, here is a "fake_open" that patches the open builtin of a module to use StringIO. fake_open returns a dict of opened files which can be examined in a unit test or doctest, all without needing a real file-system. def fake_open(module): """Patch module's `open` builtin so that it returns StringIOs instead of creating real files, which is useful for testing. Returns a dict that maps opened file names to StringIO objects.""" from contextlib import closing from StringIO import StringIO streams = {} def fakeopen(filename,mode): stream = StringIO() stream.close = lambda: None streams[filename] = stream return closing(stream) module.open = fakeopen return streams A: When I touch files in my code, I tend to prefer to mock the actual reading and writing of the file... so then I can give my classes exact contents I want in the test, and then assert that the test is writing back the contents I expect. I've done this in Java, and I imagine it is quite simple in Python... 
but it may require designing your classes/functions in such a way that it is EASY to mock the use of an actual file. For this, you can try passing in streams and then just pass in a simple string input/output stream which won't write to a file, or have a function that does the actual "write this string to a file" or "read this string from a file", and then replace that function in your tests. A: You're talking about testing too much at once. If you start trying to attack a testing problem by saying "Let's verify that it modifies its environment correctly", you're doomed to failure. Environments have dozens, maybe even millions of potential variations. Instead, look at the pieces ("units") of your program. For example, are you going to have a function that determines where the files are that have to be written? What are the inputs to that function? Perhaps an environment variable, perhaps some values read from a config file? Test that function, and don't actually do anything that modifies the filesystem. Don't pass it "realistic" values, pass it values that are easy to verify against. Make a temporary directory, populate it with files in your test's setUp method. Then test the code that writes the files. Just make sure it's writing the right contents file contents. Don't even write to a real filesystem! You don't need to make "fake" file objects for this, just use Python's handy StringIO modules; they're "real" implementations of the "file" interface, they're just not the ones that your program is actually going to be writing to. Ultimately you will have to test the final, everything-is-actually-hooked-up-for-real top-level function that passes the real environment variable and the real config file and puts everything together. But don't worry about that to get started. For one thing, you will start picking up tricks as you write individual tests for smaller functions and creating test mocks, fakes, and stubs will become second nature to you. For another: even if you can't quite figure out how to test that one function call, you will have a very high level of confidence that everything which it is calling works perfectly. Also, you'll notice that test-driven development forces you to make your APIs clearer and more flexible. For example: it's much easier to test something that calls an open() method on an object that came from somewhere abstract, than to test something that calls os.open on a string that you pass it. The open method is flexible; it can be faked, it can be implemented differently, but a string is a string and os.open doesn't give you any leeway to catch what methods are called on it. You can also build testing tools to make repetitive tasks easy. For example, twisted provides facilities for creating temporary files for testing built right into its testing tool. It's not uncommon for testing tools or larger projects with their own test libraries to have functionality like this. A: I think you are on the right track. Depending on what you need to do chroot may help you set up an environment for your scrpits that 'looks' real, but isn't. If that doesn't work then you could write your scripts to take a 'root' path as an argument. In a production run the root path is just /. For testing you create a shadow environment under /tmp/test and then run your scripts with a root path of /tmp/test. 
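A minimal sketch of the temporary-directory setUp/tearDown approach described above, so nothing outside the scratch directory is ever touched; the append_line function stands in for whatever file-modifying code you are actually testing:

import os
import shutil
import tempfile
import unittest

def append_line(path, line):
    """Stand-in for the code under test: appends a line to a text file."""
    with open(path, "a") as f:
        f.write(line + "\n")

class AppendLineTest(unittest.TestCase):
    def setUp(self):
        # fresh scratch directory for every test
        self.tmpdir = tempfile.mkdtemp()
        self.path = os.path.join(self.tmpdir, "sample.conf")
        with open(self.path, "w") as f:
            f.write("original\n")

    def tearDown(self):
        shutil.rmtree(self.tmpdir)

    def test_appends_line_and_keeps_existing_content(self):
        append_line(self.path, "added")
        with open(self.path) as f:
            self.assertEqual(f.read(), "original\nadded\n")

if __name__ == "__main__":
    unittest.main()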
A: You might want to set up the test so that it runs inside a chroot jail, so you have all the environment the test needs, even if paths and file locations are hardcoded in the code [not really a good practice, but sometimes one gets the file locations from other places...] and then check the results via the exit code.
{ "language": "en", "url": "https://stackoverflow.com/questions/106766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Standard concise way to copy a file in Java? It has always bothered me that the only way to copy a file in Java involves opening streams, declaring a buffer, reading in one file, looping through it, and writing it out to the other steam. The web is littered with similar, yet still slightly different implementations of this type of solution. Is there a better way that stays within the bounds of the Java language (meaning does not involve exec-ing OS specific commands)? Perhaps in some reliable open source utility package, that would at least obscure this underlying implementation and provide a one line solution? A: * *These methods are performance-engineered (they integrate with operating system native I/O). *These methods work with files, directories and links. *Each of the options supplied may be left out - they are optional. The utility class package com.yourcompany.nio; class Files { static int copyRecursive(Path source, Path target, boolean prompt, CopyOptions options...) { CopyVisitor copyVisitor = new CopyVisitor(source, target, options).copy(); EnumSet<FileVisitOption> fileVisitOpts; if (Arrays.toList(options).contains(java.nio.file.LinkOption.NOFOLLOW_LINKS) { fileVisitOpts = EnumSet.noneOf(FileVisitOption.class) } else { fileVisitOpts = EnumSet.of(FileVisitOption.FOLLOW_LINKS); } Files.walkFileTree(source[i], fileVisitOpts, Integer.MAX_VALUE, copyVisitor); } private class CopyVisitor implements FileVisitor<Path> { final Path source; final Path target; final CopyOptions[] options; CopyVisitor(Path source, Path target, CopyOptions options...) { this.source = source; this.target = target; this.options = options; }; @Override FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) { // before visiting entries in a directory we copy the directory // (okay if directory already exists). 
Path newdir = target.resolve(source.relativize(dir)); try { Files.copy(dir, newdir, options); } catch (FileAlreadyExistsException x) { // ignore } catch (IOException x) { System.err.format("Unable to create: %s: %s%n", newdir, x); return SKIP_SUBTREE; } return CONTINUE; } @Override public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) { Path newfile= target.resolve(source.relativize(file)); try { Files.copy(file, newfile, options); } catch (IOException x) { System.err.format("Unable to copy: %s: %s%n", source, x); } return CONTINUE; } @Override public FileVisitResult postVisitDirectory(Path dir, IOException exc) { // fix up modification time of directory when done if (exc == null && Arrays.toList(options).contains(COPY_ATTRIBUTES)) { Path newdir = target.resolve(source.relativize(dir)); try { FileTime time = Files.getLastModifiedTime(dir); Files.setLastModifiedTime(newdir, time); } catch (IOException x) { System.err.format("Unable to copy all attributes to: %s: %s%n", newdir, x); } } return CONTINUE; } @Override public FileVisitResult visitFileFailed(Path file, IOException exc) { if (exc instanceof FileSystemLoopException) { System.err.println("cycle detected: " + file); } else { System.err.format("Unable to copy: %s: %s%n", file, exc); } return CONTINUE; } } Copying a directory or file long bytes = java.nio.file.Files.copy( new java.io.File("<filepath1>").toPath(), new java.io.File("<filepath2>").toPath(), java.nio.file.StandardCopyOption.REPLACE_EXISTING, java.nio.file.StandardCopyOption.COPY_ATTRIBUTES, java.nio.file.LinkOption.NOFOLLOW_LINKS); Moving a directory or file long bytes = java.nio.file.Files.move( new java.io.File("<filepath1>").toPath(), new java.io.File("<filepath2>").toPath(), java.nio.file.StandardCopyOption.ATOMIC_MOVE, java.nio.file.StandardCopyOption.REPLACE_EXISTING); Copying a directory or file recursively long bytes = com.yourcompany.nio.Files.copyRecursive( new java.io.File("<filepath1>").toPath(), new java.io.File("<filepath2>").toPath(), java.nio.file.StandardCopyOption.REPLACE_EXISTING, java.nio.file.StandardCopyOption.COPY_ATTRIBUTES java.nio.file.LinkOption.NOFOLLOW_LINKS ); A: Three possible problems with the above code: * *If getChannel throws an exception, you might leak an open stream. *For large files, you might be trying to transfer more at once than the OS can handle. *You are ignoring the return value of transferFrom, so it might be copying just part of the file. This is why org.apache.tools.ant.util.ResourceUtils.copyResource is so complicated. Also note that while transferFrom is OK, transferTo breaks on JDK 1.4 on Linux (see Bug ID:5056395) – Jesse Glick Jan A: If you are in a web application which already uses Spring and if you do not want to include Apache Commons IO for simple file copying, you can use FileCopyUtils of the Spring framework. A: public static void copyFile(File src, File dst) throws IOException { long p = 0, dp, size; FileChannel in = null, out = null; try { if (!dst.exists()) dst.createNewFile(); in = new FileInputStream(src).getChannel(); out = new FileOutputStream(dst).getChannel(); size = in.size(); while ((dp = out.transferFrom(in, p, size)) > 0) { p += dp; } } finally { try { if (out != null) out.close(); } finally { if (in != null) in.close(); } } } A: Here is three ways that you can easily copy files with single line of code! 
Java7: java.nio.file.Files#copy private static void copyFileUsingJava7Files(File source, File dest) throws IOException { Files.copy(source.toPath(), dest.toPath()); } Appache Commons IO: FileUtils#copyFile private static void copyFileUsingApacheCommonsIO(File source, File dest) throws IOException { FileUtils.copyFile(source, dest); } Guava : Files#copy private static void copyFileUsingGuava(File source,File dest) throws IOException{ Files.copy(source,dest); } A: In Java 7 it is easy... File src = new File("original.txt"); File target = new File("copy.txt"); Files.copy(src.toPath(), target.toPath(), StandardCopyOption.REPLACE_EXISTING); A: NIO copy with a buffer is the fastest according to my test. See the working code below from a test project of mine at https://github.com/mhisoft/fastcopy import java.io.Closeable; import java.io.File; import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.IOException; import java.nio.ByteBuffer; import java.nio.channels.FileChannel; import java.text.DecimalFormat; public class test { private static final int BUFFER = 4096*16; static final DecimalFormat df = new DecimalFormat("#,###.##"); public static void nioBufferCopy(final File source, final File target ) { FileChannel in = null; FileChannel out = null; double size=0; long overallT1 = System.currentTimeMillis(); try { in = new FileInputStream(source).getChannel(); out = new FileOutputStream(target).getChannel(); size = in.size(); double size2InKB = size / 1024 ; ByteBuffer buffer = ByteBuffer.allocateDirect(BUFFER); while (in.read(buffer) != -1) { buffer.flip(); while(buffer.hasRemaining()){ out.write(buffer); } buffer.clear(); } long overallT2 = System.currentTimeMillis(); System.out.println(String.format("Copied %s KB in %s millisecs", df.format(size2InKB), (overallT2 - overallT1))); } catch (IOException e) { e.printStackTrace(); } finally { close(in); close(out); } } private static void close(Closeable closable) { if (closable != null) { try { closable.close(); } catch (IOException e) { if (FastCopy.debug) e.printStackTrace(); } } } } A: I would avoid the use of a mega api like apache commons. This is a simplistic operation and its built into the JDK in the new NIO package. It was kind of already linked to in a previous answer, but the key method in the NIO api are the new functions "transferTo" and "transferFrom". http://java.sun.com/javase/6/docs/api/java/nio/channels/FileChannel.html#transferTo(long,%20long,%20java.nio.channels.WritableByteChannel) One of the linked articles shows a great way on how to integrate this function into your code, using the transferFrom: public static void copyFile(File sourceFile, File destFile) throws IOException { if(!destFile.exists()) { destFile.createNewFile(); } FileChannel source = null; FileChannel destination = null; try { source = new FileInputStream(sourceFile).getChannel(); destination = new FileOutputStream(destFile).getChannel(); destination.transferFrom(source, 0, source.size()); } finally { if(source != null) { source.close(); } if(destination != null) { destination.close(); } } } Learning NIO can be a little tricky, so you might want to just trust in this mechanic before going off and trying to learn NIO overnight. From personal experience it can be a very hard thing to learn if you don't have the experience and were introduced to IO via the java.io streams. A: To copy a file and save it to your destination path you can use the method below. 
public void copy(File src, File dst) throws IOException { InputStream in = new FileInputStream(src); try { OutputStream out = new FileOutputStream(dst); try { // Transfer bytes from in to out byte[] buf = new byte[1024]; int len; while ((len = in.read(buf)) > 0) { out.write(buf, 0, len); } } finally { out.close(); } } finally { in.close(); } } A: As toolkit mentions above, Apache Commons IO is the way to go, specifically FileUtils.copyFile(); it handles all the heavy lifting for you. And as a postscript, note that recent versions of FileUtils (such as the 2.0.1 release) have added the use of NIO for copying files; NIO can significantly increase file-copying performance, in a large part because the NIO routines defer copying directly to the OS/filesystem rather than handle it by reading and writing bytes through the Java layer. So if you're looking for performance, it might be worth checking that you are using a recent version of FileUtils. A: Note that all of these mechanisms only copy the contents of the file, not the metadata such as permissions. So if you were to copy or move an executable .sh file on linux the new file would not be executable. In order to truly a copy or move a file, ie to get the same result as copying from a command line, you actually need to use a native tool. Either a shell script or JNI. Apparently, this might be fixed in java 7 - http://today.java.net/pub/a/today/2008/07/03/jsr-203-new-file-apis.html. Fingers crossed! A: Google's Guava library also has a copy method: public static void copy(File from, File to) throws IOException Copies all the bytes from one file to another. Warning: If to represents an existing file, that file will be overwritten with the contents of from. If to and from refer to the same file, the contents of that file will be deleted. Parameters:from - the source fileto - the destination file Throws: IOException - if an I/O error occurs IllegalArgumentException - if from.equals(to) A: Fast and work with all the versions of Java also Android: private void copy(final File f1, final File f2) throws IOException { f2.createNewFile(); final RandomAccessFile file1 = new RandomAccessFile(f1, "r"); final RandomAccessFile file2 = new RandomAccessFile(f2, "rw"); file2.getChannel().write(file1.getChannel().map(FileChannel.MapMode.READ_ONLY, 0, f1.length())); file1.close(); file2.close(); } A: Now with Java 7, you can use the following try-with-resource syntax: public static void copyFile( File from, File to ) throws IOException { if ( !to.exists() ) { to.createNewFile(); } try ( FileChannel in = new FileInputStream( from ).getChannel(); FileChannel out = new FileOutputStream( to ).getChannel() ) { out.transferFrom( in, 0, in.size() ); } } Or, better yet, this can also be accomplished using the new Files class introduced in Java 7: public static void copyFile( File from, File to ) throws IOException { Files.copy( from.toPath(), to.toPath() ); } Pretty snazzy, eh? A: Available as standard in Java 7, path.copyTo: http://openjdk.java.net/projects/nio/javadoc/java/nio/file/Path.html http://java.sun.com/docs/books/tutorial/essential/io/copy.html I can't believe it took them so long to standardise something so common and simple as file copying :( A: A little late to the party, but here is a comparison of the time taken to copy a file using various file copy methods. I looped in through the methods for 10 times and took an average. 
File transfer using IO streams seem to be the worst candidate: Here are the methods: private static long fileCopyUsingFileStreams(File fileToCopy, File newFile) throws IOException { FileInputStream input = new FileInputStream(fileToCopy); FileOutputStream output = new FileOutputStream(newFile); byte[] buf = new byte[1024]; int bytesRead; long start = System.currentTimeMillis(); while ((bytesRead = input.read(buf)) > 0) { output.write(buf, 0, bytesRead); } long end = System.currentTimeMillis(); input.close(); output.close(); return (end-start); } private static long fileCopyUsingNIOChannelClass(File fileToCopy, File newFile) throws IOException { FileInputStream inputStream = new FileInputStream(fileToCopy); FileChannel inChannel = inputStream.getChannel(); FileOutputStream outputStream = new FileOutputStream(newFile); FileChannel outChannel = outputStream.getChannel(); long start = System.currentTimeMillis(); inChannel.transferTo(0, fileToCopy.length(), outChannel); long end = System.currentTimeMillis(); inputStream.close(); outputStream.close(); return (end-start); } private static long fileCopyUsingApacheCommons(File fileToCopy, File newFile) throws IOException { long start = System.currentTimeMillis(); FileUtils.copyFile(fileToCopy, newFile); long end = System.currentTimeMillis(); return (end-start); } private static long fileCopyUsingNIOFilesClass(File fileToCopy, File newFile) throws IOException { Path source = Paths.get(fileToCopy.getPath()); Path destination = Paths.get(newFile.getPath()); long start = System.currentTimeMillis(); Files.copy(source, destination, StandardCopyOption.REPLACE_EXISTING); long end = System.currentTimeMillis(); return (end-start); } The only drawback what I can see while using NIO channel class is that I still can't seem to find a way to show intermediate file copy progress.
{ "language": "en", "url": "https://stackoverflow.com/questions/106770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "430" }
Q: What do you use to protect your .NET code from reverse engineering? For a while we were using a tool called CodeVeil. I'm just wondering if there are better alternatives out there. Edit: Before more people misunderstand the question, I'm aware that a determined cracker would probably be able to defeat any of these tools. I'm not too concerned about them though. These tools are just meant to stop the "casual cracker", and to stop people from stealing our company's IP. If they're good enough to get past a decent tool, they probably aren't interested in stealing our crappy code :-P A: I've had a lot of success with Xenocode Postbuild. The tool can obfuscate .NET assemblies, protect agaist Reflector disassembly, combine .NET assemblies into a single executable ("virtualization") and even compile .NET applications to standalone executables that do not need .NET runtime installed. A: I remain unconvinced by the value of these tools. None of the technology solutions prevent reverse engineering any better than legal guards such as licences, trademarks, patents, copyrights etc... .NET really is large transparent source movement. It's much better that instead you frame terms of use around your IP such as licencing and copyright. A: Compiling your .NET application results in output assemblies that contain a great deal of meta information. This information makes it very easy to reconstruct something very close to the original code. An excellent free tool called .NET Reflector can be used to do exactly that and is a popular way to examine how the base class libraries work. Download and use that tool to view reconstructed C#/VB.NET versions of assembly contents. If you're a commerical organization then you do not want people to find it easy to look at your expensive to produce code. A popular method is to use Obfuscation to scramble the contents in a way that does not alter how it runs but does make it hard to understand. Obfuscation uses techniques such as renaming variables and methods. Working out the purpose of methods 'a1', 'a2', 'a3' is much harder than the original 'GetName', 'UpdateInterestRate' and 'SetNewPassword'. So using obfuscation makes it much harder for people to understand what you code is doing and the algorithms it uses. It does not however make it impossible. In the same way C++ code can still be understood by an assembler expert who is willing to spent time working through your binary, an MSIL expert can eventually work out your obfuscated code. But it increases the barrier to the point where few will bother trying. A: Honestly, there isn't a lot you can do besides some obfuscation with tools like you mentioned. .NET is just a step above scripting languages, except the script commands are binary and are called IL. That's a little over simplification, but it's not too far off reality. Any good program written using Reflection can be used to reverse engineer .NET applications, or if you have enough knowledge, a good hex editor. A: Sorry to resurrect an old post, but I think Eziriz's .NET Reactor works brilliantly. In fact I use it myself for all my .net apps and apparently there is no existing tool out there that can decompile a program protected with .net reactor. More details can be found on there info page, http://www.eziriz.com/dotnet_reactor.htm. Test it out with the trial version and .net reflector and you can see for yourself. A: There are several popular tools for obfuscation, including Dotfuscation, which has a "light" version that ships with Visual Studio 2005 and 2008. 
They have a Pro version that does more than just variable and function name renaming. However, the code is still viewable, it is just scrambled a bit to make it harder to read and grok the logic flow of the software. Another technique is to use other programs that will encrypt the program, and decrypt it at runtime. However, this is not a perfect solution either. In fact, there is no perfect solution that I am aware of that will prevent a determined engineer from reverse engineering the software, if enough time and effort is applied to it. What it really comes down to is determining the level of protection that will make it sufficiently difficult to dissuade the casual hacker, and make it as expensive to reverse engineer as you can, so at least the reverse engineering comes at a cost in either time or money, or ideally, both. The more expensive the reverse engineering costs, the fewer number of individuals that will be willing to put in the effort. And that is the big point to obfuscation. Some think that using a compiler like the C++ compiler that compiles to native code will prevent this sort of reverse engineering, but it doesn't. A good disassembler will allow even pure binary executables to be reverse engineered, so therefore, a perfect solution does not exist. If the computer can read it and execute it, then the memory the computer is using can be scanned and tracked, bypassing all attempts to encrypt, obfuscate, or any other means of keeping your code out of the hands of a determined engineer. A: DISCLAIMER: I don't work for RedGate the makers of SmartAssembly. I'm just a very happy customer who found a good, affordable solution. The choice is very simple, choose SmartAssembly! Don't waste your time or money with the other obfuscators in the marketplace. I spent more money in terms of non-billable hours evaluating competing products. They all had fatal flaws and were next to impossible to debug. SmartAssembly is an easy-to-use, well documented, polished application with excellent support. Post a question on their forum and expect an answer reasonably fast by the actual developers. SmartAssembly is more than an obsfuscator. It has a slew of features, including a built-in, highly customizable crash report generator that your customers can automatically email to you. You can view these reports on either your own server or on red-gates servers. I can't tell you how useful this is when you're beta testing or releasing the product to customers. It also generates debugger files so you can debug any post-release issues you may encounter with your obsfucated product. If you are delivering a commercial application, it makes sense to spend the money on a decent obsfuscator. A bad choice here can compromise your intellectual property or worse lead you to days of gruesome debugging. What would this cost in comparison to what SmartAssembly costs? A: I've heard that Obfusticator is good; it's used on .Net Reflector. A: Another is Crypto Obfuscator - its more affordable than some others, and has various obfuscation and protection methods to hinder the causal and not-so-casual hackers.
{ "language": "en", "url": "https://stackoverflow.com/questions/106794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }