Q: Ruby RegEx problem text.gsub[^\W-], '') fails I'm trying to learn RegEx in Ruby, based on what I'm reading in "The Rails Way". But even this simple example has me stumped. I can't tell if it is a typo or not: text.gsub(/\s/, "-").gsub([^\W-], '').downcase It seems to me that this would replace all spaces with -, then anywhere a string starts with a non letter or number followed by a dash, replace that with ''. But, using irb, it fails first on ^: syntax error, unexpected '^', expecting ']' If I take out the ^, it fails again on the W.

A: >> text = "I love spaces" => "I love spaces" >> text.gsub(/\s/, "-").gsub(/[^\W-]/, '').downcase => "--" Missing // Although this makes a little more sense :-) >> text.gsub(/\s/, "-").gsub(/([^\W-])/, '\1').downcase => "i-love-spaces" And this is probably what is meant >> text.gsub(/\s/, "-").gsub(/[^\w-]/, '').downcase => "i-love-spaces" \W means "a non-word character", \w means "a word character". The // create a Regexp object: /[^\W-]/.class => Regexp

A: Step 1: Add this to your bookmarks. Whenever I need to look up regexes, it's my first stop. Step 2: Let's walk through your code. text.gsub(/\s/, "-") You're calling the gsub function and giving it 2 parameters. The first parameter is /\s/, which is ruby for "create a new regexp containing \s" (the // are like special "" for regexes). The second parameter is the string "-". This will therefore replace all whitespace characters with hyphens. So far, so good. .gsub([^\W-], '').downcase Next you call gsub again, passing it 2 parameters. The first parameter is [^\W-]. Because we didn't quote it in forward-slashes, ruby will literally try to run that code: [] creates an array, then it tries to put ^\W- into the array, which is not valid code, so it breaks. Changing it to /[^\W-]/ gives us a valid regex. Looking at the regex, the [^ ] says 'match any character not in this group'. The group contains \W (which means non-word character) and -, so /[^\W-]/ matches anything that is neither a non-word character nor a hyphen; in other words, it matches the word characters. As the second thing you pass to gsub is an empty string, this version strips the word characters out, which is why the book almost certainly meant the lower-case version: /[^\w-]/ matches the non-word characters other than hyphen, and stripping those leaves "i-love-spaces". .downcase Which just converts the string to lower case. Hope this helps :-)

A: You forgot the slashes. It should be /[^\W-]/

A: Well, .gsub(/[^\W-]/,'') says: replace anything that is neither a non-word character nor a - with nothing, i.e. it strips the word characters. You probably want >> text.gsub(/\s/, "-").gsub(/[^\w-]/, '').downcase => "i-love-spaces" Lower case \w (\W is just the opposite)

A: The slashes are to say that the thing between them is a regular expression, much like quotes say the thing between them is a string.
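Putting the corrected pieces together, the whole chain can be wrapped in a small helper. The method name slugify is ours, not from the book; this is just a sketch of the corrected expression:

```ruby
# Hypothetical helper wrapping the corrected chain from the answers above.
def slugify(text)
  text.gsub(/\s/, "-")     # replace each whitespace character with a hyphen
      .gsub(/[^\w-]/, "")  # strip anything that is not a word character or hyphen
      .downcase
end

puts slugify("I love spaces")   # => "i-love-spaces"
puts slugify("Hello, World!")   # => "hello-world"
```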
{ "language": "en", "url": "https://stackoverflow.com/questions/138785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do I add a separator to a JComboBox in Java? I have a JComboBox and would like to have a separator in the list of elements. How do I do this in Java? A sample scenario where this would come in handy is when making a combobox for font-family-selection; similar to the font-family-selection-control in Word and Excel. In this case I would like to show the most-used-fonts at the top, then a separator and finally all font-families below the separator in alphabetical order. Can anyone help me with how to do this or is this not possible in Java?

A: There is a pretty short tutorial with an example that shows how to use a custom ListCellRenderer on java2s http://www.java2s.com/Code/Java/Swing-Components/BlockComboBoxExample.htm Basically it involves inserting a known placeholder in your list model and when you detect the placeholder in the ListCellRenderer you return an instance of 'new JSeparator(JSeparator.HORIZONTAL)'

A: By the time I wrote and tested the code below, you probably got a lot of better answers... I don't mind, as I enjoyed the experiment/learning (still a bit green on the Swing front). [EDIT] Three years later, I am a bit less green, and I took into account the valid remarks of bobndrew. I have no problem with the key navigation, which just works (perhaps it was a JVM version issue?). I improved the renderer to show the highlight, though. And I use better demo code. The accepted answer is probably better (more standard); mine is probably more flexible if you want a custom separator... The base idea is to use a renderer for the items of the combo box. For most items, it is a simple JLabel with the text of the item. For the last recent/most used item, I decorate the JLabel with a custom border drawing a line on its bottom.
import java.awt.*; import javax.swing.*; @SuppressWarnings("serial") public class TwoPartsComboBox extends JComboBox { private int m_lastFirstPartIndex; public TwoPartsComboBox(String[] itemsFirstPart, String[] itemsSecondPart) { super(itemsFirstPart); m_lastFirstPartIndex = itemsFirstPart.length - 1; for (int i = 0; i < itemsSecondPart.length; i++) { insertItemAt(itemsSecondPart[i], i); } setRenderer(new JLRenderer()); } protected class JLRenderer extends JLabel implements ListCellRenderer { private JLabel m_lastFirstPart; public JLRenderer() { m_lastFirstPart = new JLabel(); m_lastFirstPart.setBorder(new BottomLineBorder()); // m_lastFirstPart.setBorder(new BottomLineBorder(10, Color.BLUE)); } @Override public Component getListCellRendererComponent( JList list, Object value, int index, boolean isSelected, boolean cellHasFocus) { if (value == null) { value = "Select an option"; } JLabel label = this; if (index == m_lastFirstPartIndex) { label = m_lastFirstPart; } label.setText(value.toString()); label.setBackground(isSelected ? list.getSelectionBackground() : list.getBackground()); label.setForeground(isSelected ? list.getSelectionForeground() : list.getForeground()); label.setOpaque(true); return label; } } } Separator class, can be thick, with custom color, etc. import java.awt.*; import javax.swing.border.AbstractBorder; /** * Draws a line at the bottom only. * Useful for making a separator in combo box, for example. 
*/ @SuppressWarnings("serial") class BottomLineBorder extends AbstractBorder { private int m_thickness; private Color m_color; BottomLineBorder() { this(1, Color.BLACK); } BottomLineBorder(Color color) { this(1, color); } BottomLineBorder(int thickness, Color color) { m_thickness = thickness; m_color = color; } @Override public void paintBorder(Component c, Graphics g, int x, int y, int width, int height) { Graphics copy = g.create(); if (copy != null) { try { copy.translate(x, y); copy.setColor(m_color); copy.fillRect(0, height - m_thickness, width - 1, height - 1); } finally { copy.dispose(); } } } @Override public boolean isBorderOpaque() { return true; } @Override public Insets getBorderInsets(Component c) { return new Insets(0, 0, m_thickness, 0); } @Override public Insets getBorderInsets(Component c, Insets i) { i.left = i.top = i.right = 0; i.bottom = m_thickness; return i; } } Test class: import java.awt.*; import java.awt.event.*; import javax.swing.*; @SuppressWarnings("serial") public class TwoPartsComboBoxDemo extends JFrame { private TwoPartsComboBox m_combo; public TwoPartsComboBoxDemo() { Container cont = getContentPane(); cont.setLayout(new FlowLayout()); cont.add(new JLabel("Data: ")) ; String[] itemsRecent = new String[] { "ichi", "ni", "san" }; String[] itemsOther = new String[] { "one", "two", "three" }; m_combo = new TwoPartsComboBox(itemsRecent, itemsOther); m_combo.setSelectedIndex(-1); cont.add(m_combo); m_combo.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent ae) { String si = (String) m_combo.getSelectedItem(); System.out.println(si == null ? "No item selected" : si.toString()); } }); // Reference, to check we have similar behavior to standard combo JComboBox combo = new JComboBox(itemsRecent); cont.add(combo); } /** * Start the demo. 
* * @param args the command line arguments */ public static void main(String[] args) { // turn bold fonts off in metal UIManager.put("swing.boldMetal", Boolean.FALSE); SwingUtilities.invokeLater(new Runnable() { public void run() { JFrame demoFrame = new TwoPartsComboBoxDemo(); demoFrame.setTitle("Test GUI"); demoFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); demoFrame.setSize(400, 100); demoFrame.setVisible(true); } }); } } A: You can use a custom ListCellRenderer which would draw the separator items differently. See docs and a small tutorial. A: Try adding this renderer. Just supply a list of index values that you want the separator to be above. private class SeperatorComboRenderer extends DefaultListCellRenderer { private final float SEPARATOR_THICKNESS = 1.0f; private final float SPACE_TOP = 2.0f; private final float SPACE_BOTTOM = 2.0f; private final Color SEPARATOR_COLOR = Color.DARK_GRAY; private final List<Integer> marks; private boolean mark; private boolean top; public SeperatorComboRenderer(List<Integer> marks) { this.marks = marks; } @Override public Component getListCellRendererComponent(JList list, Object object, int index, boolean isSelected, boolean hasFocus) { super.getListCellRendererComponent(list, object, index, isSelected, hasFocus); top = false; mark = false; marks.forEach((idx) -> { if(index - 1 == idx) top = true; if(index == idx) mark = true; }); return this; } @Override protected void paintComponent(Graphics g) { if(mark) g.translate(0, (int)(SEPARATOR_THICKNESS + SPACE_BOTTOM)); Graphics2D g2 = (Graphics2D)g; super.paintComponent(g); if(mark) { g2.setColor(SEPARATOR_COLOR); g2.setStroke(new BasicStroke(SEPARATOR_THICKNESS)); g2.drawLine(0, 0, getWidth(), 0); } } @Override public Dimension getPreferredSize() { Dimension pf = super.getPreferredSize(); double height = pf.getHeight(); if(top) height += SPACE_TOP; else if(mark) height += SEPARATOR_THICKNESS + SPACE_BOTTOM; return new Dimension((int)pf.getWidth(), (int)height); } }
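The accepted answer's placeholder idea can be reduced to a model-level sketch: put a sentinel object between the two groups, and have the renderer return new JSeparator(JSeparator.HORIZONTAL) whenever it sees the sentinel. The class name and sentinel value below are invented for illustration, and the renderer is omitted so the sketch stays runnable without a display:

```java
import javax.swing.DefaultComboBoxModel;

public class SeparatorModelSketch {
    // Sentinel marking where the renderer should draw a separator
    // (the renderer, not shown, would return a JSeparator for this value).
    public static final String SEPARATOR = "---";

    // Most-used fonts first, then the sentinel, then everything else.
    public static DefaultComboBoxModel<String> buildModel(String[] mostUsed, String[] others) {
        DefaultComboBoxModel<String> model = new DefaultComboBoxModel<>();
        for (String s : mostUsed) model.addElement(s);
        model.addElement(SEPARATOR);
        for (String s : others) model.addElement(s);
        return model;
    }

    public static void main(String[] args) {
        DefaultComboBoxModel<String> m = buildModel(
                new String[] { "Arial", "Verdana" },
                new String[] { "Courier", "Times" });
        System.out.println(m.getSize());        // 5
        System.out.println(m.getElementAt(2));  // ---
    }
}
```

A selection listener would also need to skip the sentinel (for example, re-select the previous item when the separator is chosen), which is the other half of what the full renderer-based answers above handle.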
{ "language": "en", "url": "https://stackoverflow.com/questions/138793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: batch find file extension If I am iterating over each file using: @echo off FOR %%f IN (*.*) DO ( echo %%f ) how could I print the extension of each file? I tried assigning %%f to a temporary variable, and then using the code: echo "%t:~-3%" to print it, but with no success.

A: This works, although it's not blindingly fast: @echo off for %%f in (*.*) do call :procfile %%f goto :eof :procfile set fname=%1 set ename= :loop1 if "%fname%"=="" ( set ename= goto :exit1 ) if not "%fname:~-1%"=="." ( set ename=%fname:~-1%%ename% set fname=%fname:~0,-1% goto :loop1 ) :exit1 echo.%ename% goto :eof

A: The FOR command has several built-in modifiers that allow you to transform file names. Try the following: @echo off for %%i in (*.*) do echo "%%~xi" For further details, use help for to get a complete list of the modifiers - there are quite a few!

A: Sam's answer is definitely the easiest for what you want. But I wanted to add: Don't set a variable inside the ()'s of a for and expect to use it right away, unless you have previously issued setlocal ENABLEDELAYEDEXPANSION and you are using ! instead of % to wrap the variable name. For instance, @echo off setlocal ENABLEDELAYEDEXPANSION FOR %%f IN (*.*) DO ( set t=%%f echo !t:~-3! ) Check out set /? for more info. The other alternative is to call a subroutine to do the set, like Pax shows.
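For comparison, the same last-extension extraction in a POSIX shell uses parameter expansion rather than a FOR modifier. This is the Unix-side equivalent of %%~xi, not batch syntax:

```shell
# POSIX equivalent of batch's "%%~xi": strip everything up to the last dot.
for f in report.txt archive.tar.gz noext; do
    case "$f" in
        *.*) ext=".${f##*.}" ;;  # keep the leading dot, like %%~xi does
        *)   ext="" ;;           # file has no extension at all
    esac
    printf '%s -> %s\n' "$f" "$ext"
done
```

Like %%~xi, this yields only the final extension (.gz for archive.tar.gz), and an empty string when there is no dot in the name.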
{ "language": "en", "url": "https://stackoverflow.com/questions/138819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: How to write a cross-platform program? Greetings, I want to write a small cross-platform utility program with GUI in it. What language/GUI-library should I stick to? Is it possible whatsoever? This is gonna be a small program, so I don't want to make people download JVM or .NET Framework. Is it possible to develop it natively? Update 1. By "natively" I mean that the end result will be native code without intermediate layers like Java Virtual Machine or .NET Common Language Runtime Update 2. A FREE solution is preferable ;)

A: The problem is: If you want to have a GUI but you do not want to ask the user to download an external API, framework or virtual machine to run it in, be it TCL/TK, Java or QT etc., then you get lost pretty fast. The reason is: You would have to rebuild all the (GUI) functionality those APIs, frameworks and virtual machines provide you with to be platform independent. And that's a whole lot of work to do... On the other side: The Java virtual machine is installed on nearly any operating system from scratch, so why not give this one a shot?

A: You want to develop a cross-platform program natively? Uh...I don't think that'll work, mainly because that phrase is a paradox. If you write native code, it by its very nature will only run on the platform you programmed it for. ;-) That's what the Frameworks are all about. So what you should do instead is use a very slim framework if your program is going to be so small. itsmatt's idea of Qt is a possibility.

A: WxWindows? Oh, it's called WxWidgets now: http://www.wxwidgets.org/

A: wxWidgets has bindings to all sorts of languages - python for instance, if your app is small enough.

A: Lazarus is great. GTK2 on Linux, win32/64 on Windows, WINCE on euh, Wince. It even uses Carbon on Mac (working on COCOA).
Also easy to sell to your boss (the code is Delphi compatible)

A: How about Python using Qt or Wx and then using PythonToExe to make a 'distributable'? Thought will have to be given to the development to ensure that no native functionality is used (i.e. registry etc.) Also things like line breaks in text files will have different escape characters, so they will need to be handled.

A: Which OS's do you have in mind when you say cross-platform? As Epaga correctly points out, native and cross-platform are mutually exclusive. You can either write multiple versions that run natively on multiple platforms, or you need to use some cross-platform framework. In the case of the cross-platform framework approach, there will always be extra installs required. For example, many here suggest using Python and one of its frameworks. This would necessitate instructing people to install python - and potentially the framework - first. If you are aiming at Windows and OS X (and are prepared to experiment with alpha-release code for Linux if support for that OS is required), I'd highly recommend you take a look at using Adobe AIR for cross-platform GUI applications.

A: I agree with Georgi, Java is the way to go. With a bit of work, you can make your desktop application work as a Java applet too (so that users do not need to actively download anything at all). See http://www.geogebra.org as an example of an application which runs smoothly as a cross-platform Java application AND has a simple port to a web applet. Two other advantages to using Java are:
* They have extensive libraries for building the UI, including UI component builders.
* The Java runtime framework is generally updated automatically for the user.
One disadvantage:
* The version of Java installed on your end user's computer may not be totally compatible with your application, requiring you to code to the lowest likely denominator.
I don't know any details about targetting Linux, but for any cross-platform development I've done between Win32 and OS X its been a dream. http://www.realbasic.com Edit: Generates native executables. There is a small cost - $100. A: Have you looked at Qt? A: Flash? It's installed pretty much everywhere. A: If you know C or C++ the first cross platform GUI framework I can think of are: * *QT (C++, proprietary but free with the LGPL licensing) *wxWidgets (C++, the most complete and stable but also huge) *FLTK (C++) *FOX (C++) *IUP (C, simpler and cleaner than the ones above) If you know Pascal, you can try freepascal+Lazarus. I've never used it, though. A: If it "HAS" to be Desktop use Qt. Nothing beats it right now. However personally I gave up on desktop and any UI based project I do is normally Browser/Server based. You can easily write a little custom server that listens to some port so the program can run locally with no need for your users to install Apache or have access to the net. I have a small Lua, Python and C++ framework I made for that purpose (Want to add Javascript for the backend with V8 :) A: If you're going to look at Qt and WxWidgets, don't forget to also check out GTK+ ! A: I agree with David Wees and Georgi, Java is cross-platformness par excellence. You literally write once and run everywhere. With no need of compiling your code for each target OS or bitness, no worries about linking against anything, etc. Only thing is, as you pointed out, that a JRE must be installed, but it's quick and straightforward to do even for novice end-users (it's a matter of clicking "Next>" a few times in the installer). And with Java Web Start deployment gets even easier: the user just clicks the launch button on a webpage and the application runs (if the proper JVM is installed according to what specified in the JNLP descriptor) or the user gets redirected to the Java download page (if no suitable JVM is found).
{ "language": "en", "url": "https://stackoverflow.com/questions/138831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How do you parse an HTML string for image tags to get at the SRC information? Currently I use .Net WebBrowser.Document.Images() to do this. It requires the WebBrowser to load the document. It's messy and takes up resources. According to this question XPath is better than a regex at this. Anyone know how to do this in C#?

A: If your input string is valid XHTML you can treat it as xml, load it into an xmldocument, and do XPath magic :) But that's not always the case. Otherwise you can try this function, which will return all image links from HtmlSource: public List<Uri> FetchLinksFromSource(string htmlSource) { List<Uri> links = new List<Uri>(); string regexImgSrc = @"<img[^>]*?src\s*=\s*[""']?([^'"" >]+?)[ '""][^>]*?>"; MatchCollection matchesImgSrc = Regex.Matches(htmlSource, regexImgSrc, RegexOptions.IgnoreCase | RegexOptions.Singleline); foreach (Match m in matchesImgSrc) { string href = m.Groups[1].Value; links.Add(new Uri(href)); } return links; } And you can use it like this: HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://www.example.com"); request.Credentials = System.Net.CredentialCache.DefaultCredentials; HttpWebResponse response = (HttpWebResponse)request.GetResponse(); if (response.StatusCode == HttpStatusCode.OK) { using(StreamReader sr = new StreamReader(response.GetResponseStream())) { List<Uri> links = FetchLinksFromSource(sr.ReadToEnd()); } }

A: If all you need is images I would just use a regular expression. Something like this should do the trick: Regex rg = new Regex(@"<img.*?src=""(.*?)""", RegexOptions.IgnoreCase);

A: The big issue with any HTML parsing is the "well formed" part. You've seen the crap HTML out there - how much of it is really well formed? I needed to do something similar - parse out all links in a document and (in my case) update them with a rewritten link. I found the Html Agility Pack over on CodePlex. It rocks (and handles malformed HTML).
Here's a snippet for iterating over links in a document: HtmlDocument doc = new HtmlDocument(); doc.Load(@"C:\Sample.HTM"); HtmlNodeCollection linkNodes = doc.DocumentNode.SelectNodes("//a/@href"); Content match = null; // Run only if there are links in the document. if (linkNodes != null) { foreach (HtmlNode linkNode in linkNodes) { HtmlAttribute attrib = linkNode.Attributes["href"]; // Do whatever else you need here } } Original Blog Post A: If it's valid xhtml, you could do this: XmlDocument doc = new XmlDocument(); doc.LoadXml(html); XmlNodeList results = doc.SelectNodes("//img/@src");
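The Html Agility Pack approach generalizes to "use a forgiving HTML parser rather than a regex". For readers outside .NET, the same img/src extraction can be sketched with Python's stdlib html.parser; this is an illustration of the parse-don't-regex point, not the C# API:

```python
from html.parser import HTMLParser

class ImgSrcParser(HTMLParser):
    """Collects the src attribute of every <img> tag, even in sloppy HTML."""
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        # Tag and attribute names arrive lowercased, so <IMG SRC=...> works too.
        if tag == "img":
            for name, value in attrs:
                if name == "src" and value:
                    self.srcs.append(value)

parser = ImgSrcParser()
parser.feed('<p>x<img src="a.png"><IMG SRC=b.jpg ></p>')
print(parser.srcs)  # ['a.png', 'b.jpg']
```

Unlike the regex answers above, the parser copes with unquoted attributes, odd spacing and unclosed tags without any pattern tweaking.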
{ "language": "en", "url": "https://stackoverflow.com/questions/138839", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: How do I test a django database schema? I want to write tests that can show whether or not the database is in sync with my models.py file. Actually I have already written them, only to find out that django creates a new database each time the tests are run, based on the models.py file. Is there any way I can make the models.py test use the existing database schema? The one that's in mysql/postgresql, and not the one that's in /myapp/models.py? I don't care about the data that's in the database, I only care about its schema, i.e. I want my tests to notice if a table in the database has fewer fields than the schema in my models.py file. I'm using the unittest framework (actually the django extension to it) if this has any relevance. thanks

A: What we did was override the default test_runner so that it wouldn't create a new database to test against. This way, it runs the test against whatever our current local database looks like. But be very careful if you use this method, because any changes to data you make in your tests will be permanent. I made sure that all our tests restore any changes back to their original state, and keep our pristine version of our database on the server and backed up.
So to do this you need to copy the run_test method from django.test.simple to a location in your project -- I put mine in myproject/test/test_runner.py. Then make the following changes to that method:
# change
old_name = settings.DATABASE_NAME
from django.db import connection
connection.creation.create_test_db(verbosity, autoclobber=not interactive)
result = unittest.TextTestRunner(verbosity=verbosity).run(suite)
connection.creation.destroy_test_db(old_name, verbosity)
# to:
result = unittest.TextTestRunner(verbosity=verbosity).run(suite)
Make sure to do all the necessary imports at the top and then in your settings file set the setting: TEST_RUNNER = 'myproject.test.test_runner.run_tests' Now when you run ./manage.py test Django will run the tests against the current state of your database rather than creating a new version based on your current model definitions. Another thing you can do is create a copy of your database locally, and then do a check in your new run_test() method like this: if settings.DATABASE_NAME != 'my_test_db': sys.exit("You cannot run tests using the %s database. Please switch DATABASE_NAME to my_test_db in settings.py" % settings.DATABASE_NAME) That way there's no danger of running tests against your main database.
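The underlying idea, comparing the columns your code expects against the columns the live database actually has, can be illustrated without Django using plain sqlite3. The table and column names here are made up for the sketch; with MySQL/PostgreSQL you would query information_schema instead of PRAGMA:

```python
import sqlite3

def missing_columns(conn, table, expected_columns):
    """Return expected columns that the live table does not actually have."""
    rows = conn.execute(f"PRAGMA table_info({table})").fetchall()
    actual = {row[1] for row in rows}  # row[1] is the column name
    return sorted(set(expected_columns) - actual)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myapp_author (id INTEGER PRIMARY KEY, name TEXT)")

# The model expects an 'email' field that the real schema lacks:
print(missing_columns(conn, "myapp_author", ["id", "name", "email"]))  # ['email']
```

A schema test then simply asserts that the missing-column list is empty for every model/table pair.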
{ "language": "en", "url": "https://stackoverflow.com/questions/138851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Webservice toolkits in Java that can interface with WCF We've got some problems with an external company trying to integrate with a WCF service we expose, and they are a Java shop. I was wondering if there is more than one toolkit that they can try to solve their issues, and I would like a list to suggest to them, but I'm not familiar with the Java world at all. Essentially they've got some memory leak (apparently!) but they are very sketchy on the details.

A: Microsoft and Sun worked together to ensure that their latest web services toolkits worked with each other. Sun's java implementation is Metro.

A: Are they using Axis, if you are presenting a standard webservice to them? Or are you presenting a custom REST service that they have had to do more manual coding for (HTTPClient, XML generators/parsers, etc)?

A: You need to ensure you're using a normal web service binding and nothing else. I admit, though, that over the years getting java and .net web services to play nicely is no mean feat. A memory leak, by the way, would have nothing to do with calling your web service and everything to do with the way the external company is managing their memory. You're not executing anything on their servers after all :-)

A: I've done this successfully with Axis. It was actually pretty straightforward, using the Eclipse plugins.
{ "language": "en", "url": "https://stackoverflow.com/questions/138877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: IE6 freezes due to *server* configuration Our web site (running Rails) freezes IE6 nearly every time. The same code, deployed on a different server, does not freeze IE6. Where and how should we start tracking this down? A: You need to determine the difference between them, so I'd start out with the following: curl -D first.headers -o first.body http://first.example.com curl -D second.headers -o second.body http://second.example.com diff -u first.headers second.headers diff -u first.body second.body A: * *Might be a communication problem. Try wireshark against the server that freezes and the server that doesn't freeze. Compare the results to see if there is a difference. *Narrow down the problem. Start cutting out code until IE6 doesn't freeze. Then you might be able to figure out exactly what is causing the problem. A: I've been having this problem today on an AJAX-heavy site. I think I've narrowed the problem down to the server having GZIP compression turned on. When the GZIP was turned off on our server, IE6 loaded the page without freezing at all. When GZIP is turned on, IE6 freezes/crashes completely. I also noticed that images were being served with GZIP from our server, so I disabled that for images and this solved the problem with IE6 freezing/crashing. Now the server uses GZIP only for .js, .html, and JSON. A: Try both in IE6 on different machines, preferably with as few addons as possible such as spyware blockers or Google Toolbars... A: Use Firefox with Firebug to compare the HTTP Headers in the Request and Response from both servers. A: You can also try : http://projects.nikhilk.net/WebDevHelper/Default.aspx That installs in IE and may help you in troubleshooting network issues and such. You may be able to see exactly when and where it freezes in the request/response by using its tracing features. A: Is the freezing happening on your development server or your production server? 
Whether your development server locks up IE6 or not isn't that big of a deal, but if your production server fails to kill IE6 you might have a problem! :-P

A: Perhaps some more info that will help you. We had the same problem and also narrowed it down to the GZIP compression. The key was that we had gzip compression on for our ScriptResources, which also deliver the javascripts used by the controls in our .NET page. Apparently there is a bug in IE6 that causes it to freeze; we believe that the browser receives the files and parses them before unpacking them, which causes the freeze. For now we have turned off the gzip compression, but as we have a large number of files provided through the ScriptsResource manager we need a different solution.
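The workaround described above, compress text resources but never images, boils down to a content-type check in whatever layer decides whether to gzip a response. A sketch of that decision (function and set names are ours, and the exact MIME list is the part you would tune):

```python
# Decide whether a response should be gzip-compressed, per the workaround
# above: text-like resources only, never images (already compressed).
COMPRESSIBLE_TYPES = {
    "text/html", "text/css", "application/json",
    "application/javascript",
}

def should_gzip(content_type):
    # Content-Type may carry parameters, e.g. "text/html; charset=utf-8",
    # so compare only the bare MIME type.
    mime = content_type.split(";")[0].strip().lower()
    return mime in COMPRESSIBLE_TYPES

print(should_gzip("text/html; charset=utf-8"))  # True
print(should_gzip("image/png"))                 # False
```

In the IE6 case reported here, removing image/* (and, for the ScriptResource bug, .js) from the compressible set was what stopped the freezes.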
{ "language": "en", "url": "https://stackoverflow.com/questions/138880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: When should I use Inline vs. External Javascript? I would like to know when I should include external scripts or write them inline with the html code, in terms of performance and ease of maintenance. What is the general practice for this? Real-world-scenario - I have several html pages that need client-side form validation. For this I use a jQuery plugin that I include on all these pages. But the question is, do I:
* write the bits of code that configure this script inline?
* include all bits in one file that's shared among all these html pages?
* include each bit in a separate external file, one for each html page?
Thanks.

A: Actually, there's a pretty solid case to use inline javascript. If the js is small enough (one-liner), I tend to prefer the javascript inline because of two factors:
* Locality. There's no need to navigate an external file to validate the behaviour of some javascript.
* AJAX. If you're refreshing some section of the page via AJAX, you may lose all of your DOM handlers (onclick, etc) for that section, depending on how you bound them. For example, using jQuery you can either use the live or delegate methods to circumvent this, but I find that if the js is small enough it is preferable to just put it inline.

A: Another reason why you should always use external scripts is for easier transition to Content Security Policy (CSP). CSP defaults forbid all inline script, making your site more resistant to XSS attacks.

A: I would take a look at the required code and divide it into as many separate files as needed. Every js file would only hold one "logical set" of functions etc., e.g. one file for all login related functions. Then during site development on each html page you only include those that are needed. When you go live with your site you can optimize by combining every js file a page needs into one file.
A: The only defense I can offer for inline javascript is that when using strongly typed views with .net MVC you can refer to c# variables mid javascript, which I've found useful.

A: Maintainability is definitely a reason to keep them external, but if the configuration is a one-liner (or in general shorter than the HTTP overhead you would get for making those files external) it's performance-wise better to keep them inline. Always remember that each HTTP request generates some overhead in terms of execution time and traffic. Naturally this all becomes irrelevant the moment your code is longer than a couple of lines and is not really specific to one single page. The moment you want to be able to reuse that code, make it external. If you don't, look at its size and decide then.

A: Three considerations:
* How much code do you need (sometimes libraries are a first-class consumer)?
* Specificity: is this code only functional in the context of this specific document or element?
* All code inside the document tends to make it longer and thus slower. Besides that, SEO considerations make it obvious that you should minimize internal scripting.

A: On the point of keeping JavaScript external: ASP.NET 3.5SP1 recently introduced functionality to create a Composite script resource (merge a bunch of js files into one). Another benefit to this is that when Webserver compression is turned on, downloading one slightly larger file will have a better compression ratio than many smaller files (also less http overhead, roundtrips etc...). I guess this saves on the initial page load, then browser caching kicks in as mentioned above. ASP.NET aside, this screencast explains the benefits in more detail: http://www.asp.net/learn/3.5-SP1/video-296.aspx

A: If you only care about performance, most of the advice in this thread is flat out wrong, and is becoming more and more wrong in the SPA era, where we can assume that the page is useless without the JS code.
I've spent countless hours optimizing SPA page load times, and verifying these results with different browsers. Across the board the performance increase by re-orchestrating your html, can be quite dramatic. To get the best performance, you have to think of pages as two-stage rockets. These two stages roughly correspond to <head> and <body> phases, but think of them instead as <static> and <dynamic>. The static portion is basically a string constant which you shove down the response pipe as fast as you possibly can. This can be a little tricky if you use a lot of middleware that sets cookies (these need to be set before sending http content), but in principle it's just flushing the response buffer, hopefully before jumping into some templating code (razor, php, etc) on the server. This may sound difficult, but then I'm just explaining it wrong, because it's near trivial. As you may have guessed, this static portion should contain all javascript inlined and minified. It would look something like <!DOCTYPE html> <html> <head> <script>/*...inlined jquery, angular, your code*/</script> <style>/* ditto css */</style> </head> <body> <!-- inline all your templates, if applicable --> <script type='template-mime' id='1'></script> <script type='template-mime' id='2'></script> <script type='template-mime' id='3'></script> Since it costs you next to nothing to send this portion down the wire, you can expect that the client will start receiving this somewhere around 5ms + latency after connecting to your server. Assuming the server is reasonably close this latency could be between 20ms to 60ms. Browsers will start processing this section as soon as they get it, and the processing time will normally dominate transfer time by factor 20 or more, which is now your amortized window for server-side processing of the <dynamic> portion. 
It takes about 50ms for the browser (chrome, rest maybe 20% slower) to process inline jquery + signalr + angular + ng animate + ng touch + ng routes + lodash. That's pretty amazing in and of itself. Most web apps have less code than all those popular libraries put together, but let's say you have just as much, so we would win latency+100ms of processing on the client (this latency win comes from the second transfer chunk). By the time the second chunk arrives, we've processed all js code and templates and we can start executing dom transforms. You may object that this method is orthogonal to the inlining concept, but it isn't. If you, instead of inlining, link to cdns or your own servers the browser would have to open another connection(s) and delay execution. Since this execution is basically free (as the server side is talking to the database) it must be clear that all of these jumps would cost more than doing no jumps at all. If there were a browser quirk that said external js executes faster we could measure which factor dominates. My measurements indicate that extra requests kill performance at this stage. I work a lot with optimization of SPA apps. It's common for people to think that data volume is a big deal, while in truth latency, and execution often dominate. The minified libraries I listed add up to 300kb of data, and that's just 68 kb gzipped, or 200ms download on a 2mbit 3g/4g phone, which is exactly the latency it would take on the same phone to check IF it had the same data in its cache already, even if it was proxy cached, because the mobile latency tax (phone-to-tower-latency) still applies. Meanwhile, desktop connections that have lower first-hop latency typically have higher bandwidth anyway. In short, right now (2014), it's best to inline all scripts, styles and templates. 
EDIT (MAY 2016) As JS applications continue to grow, and some of my payloads now stack up to 3+ megabytes of minified code, it's becoming obvious that at the very least common libraries should no longer be inlined. A: Externalizing JavaScript is one of the Yahoo performance rules: http://developer.yahoo.com/performance/rules.html#external While the hard-and-fast rule that you should always externalize scripts will generally be a good bet, in some cases you may want to inline some of the scripts and styles. You should however only inline things that you know will improve performance (because you've measured this). A: External scripts are also easier to debug using Firebug. I like to unit test my JavaScript and having it all external helps. I hate seeing JavaScript in PHP code and HTML; it looks like a big mess to me. A: Another hidden benefit of external scripts is that you can easily run them through a syntax checker like jslint. That can save you from a lot of heartbreaking, hard-to-find IE6 bugs. A: I think the page-specific, short-script case is the only defensible case for inline script. A: At the time this answer was originally posted (2008), the rule was simple: all script should be external, both for maintenance and performance. (Why performance? Because if the code is separate, it can be cached more easily by browsers.) JavaScript doesn't belong in the HTML code, and if it contains special characters (such as <, >) it even creates problems. Nowadays, web scalability has changed. Reducing the number of requests has become a valid consideration due to the latency of making multiple HTTP requests. This makes the answer more complex: in most cases, having JavaScript external is still recommended. But for certain cases, especially very small pieces of code, inlining them into the site's HTML makes sense. A: In your scenario it sounds like writing the external stuff in one file shared among the pages would be good for you. I agree with everything said above.
A: During early prototyping keep your code inline for the benefit of fast iteration, but be sure to make it all external by the time you reach production. I'd even dare to say that if you can't place all your Javascript externally, then you have a bad design under your hands, and you should refactor your data and scripts. A: Google has included load times in its page ranking measurements; if you inline a lot, it will take longer for the spiders to crawl through your page, and this may influence your page ranking if you have too much included. In any case, different strategies may have an influence on your ranking. A: Well, I think that you should use inline scripts when making single-page websites, as the scripts will not need to be shared across multiple pages. A: Internal JS pros: It's easier to manage & debug. You can see what's happening. Internal JS cons: People can change it around, which really can annoy you. External JS pros: No changing around. You can look more professional (or at least that's what I think). External JS cons: Harder to manage. It's hard to know what's going on. A: Always try to use external JS, as inline JS is always difficult to maintain. Moreover, it is professionally expected that you use external JS, since the majority of developers recommend using JS externally. I myself use external JS.
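Several answers above lean on the same performance claim: one concatenated file (with compression turned on) beats serving the same scripts as many small files. That claim is easy to sanity-check with a quick sketch; the three "script files" below are invented placeholder strings, not real libraries:

```python
import gzip

# Three hypothetical script files with the kind of repetition real JS has.
scripts = [
    b"function addClass(el, cls) { el.className += ' ' + cls; }\n" * 20,
    b"function removeClass(el, cls) { el.className = el.className.replace(cls, ''); }\n" * 20,
    b"function toggleClass(el, cls) { /* ... */ }\n" * 20,
]

# Compress each file separately (three HTTP responses)...
separate = sum(len(gzip.compress(s)) for s in scripts)

# ...versus compressing one concatenated file (one HTTP response).
combined = len(gzip.compress(b"".join(scripts)))

print(combined < separate)
```

The combined stream tends to win for two reasons: one set of gzip headers instead of three, and the compressor can reuse matches across file boundaries.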
{ "language": "en", "url": "https://stackoverflow.com/questions/138884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "132" }
Q: Visual Studio debugger slows down in in-line code Since I upgraded to Visual Studio 2008 from vs2005, I have found a very annoying behaviour when debugging large projects. If I attempt to step into inline code, the debugger appears to lock up for tens of seconds. Each time that I step inside such a function, there is a similar pause. Has anyone experienced this and is anyone aware of a work around? Postscript: After learning that MS had a service pack for vs2008 and needing to get it because of other compiling issues, the problem that I was encountering with the debugger was resolved. A: I used to get this - I think it's a bug with the 'Autos' debug window: http://social.msdn.microsoft.com/Forums/en-US/vsdebug/thread/eabc58b1-51b2-49ce-b710-15e2bf7e7516/ A: I get delays like this when debugging ASP.NET apps and it seems to happen when a symbol(pdb) file is getting accessed in the background. The larger the library, the longer the wait. My delay is at most about 10 seconds, but it does seem to happen with symbols that have already been accessed. I do get a lot of 1-3 second waits when i try to step over items that cause VS to give me the "Step into Specific" message (http://blogesh.wordpress.com/category/visual-studio-2008/ #3). Perhaps this may be causing a real blow up for you. A: For what it's worth, this issue appears to be addressed in the visual studio 2008 service pack 1. A: As a workaround, you could use something like this for debugging purposes: #ifdef _DEBUG #define INLINE #else #define INLINE inline #endif For extra neatness you can place the functions in a separate .inc file that is included either in the header or the cpp file depending on build type.
{ "language": "en", "url": "https://stackoverflow.com/questions/138917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I fix routing errors from rails in production mode? If I try to access some random string in the URL of my rails app, such as /asdfasdifjasdfkj then I am seeing a rails error message Routing Error No route matches "/asdfasdifjasdfkj" with {:method=>:get} Even though I am in production mode. Clearly I don't want any real users to see this, and would prefer a 404 page. Anyone know what's going wrong and how I fix it? A: To get a 404 you need to run the server in the production environment and use an external IP address rather than the local/loopback IP address in the URL. You can also force the controller to treat all requests as non-local, so the public error pages (including the 404 page) are shown: def local_request? return false end
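The override above is Rails-specific, but the underlying idea is framework-agnostic: in production, an unmatched path should map to a plain 404 response instead of a diagnostic error page. A toy dispatcher sketch (the routes and return shapes are made up for illustration):

```python
# Hypothetical route table: path -> handler.
ROUTES = {"/": lambda: (200, "home"), "/users": lambda: (200, "user list")}

def dispatch(path, production=True):
    handler = ROUTES.get(path)
    if handler is not None:
        return handler()
    if production:
        # What real users should see for a random path.
        return (404, "Not Found")
    # Developer-facing diagnostics, analogous to Rails' routing error page.
    return (500, f"Routing Error: no route matches {path!r}")

print(dispatch("/asdfasdifjasdfkj"))  # a plain 404 in production
```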
{ "language": "en", "url": "https://stackoverflow.com/questions/138928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What are the relative merits of CSV, JSON and XML for a REST API? We're currently planning a new API for an application and debating the various data formats we should use for interchange. There's a fairly intense discussion going on about the relative merits of CSV, JSON and XML. Basically, the crux of the argument is whether we should support CSV at all because of the lack of recursion (i.e. having a document which has multiple authors and multiple references would require multiple API calls to obtain all the information). We're interested in any experiences you may have had when working with information from Web APIs, and in things we can do to make life easier for the developers working with our API. Our decision: We've decided to provide XML and JSON due to the difficulty of recursion in CSV needing multiple calls for a single logical operation. JSON doesn't have a parser in Qt and Protocol Buffers doesn't seem to have a non-alpha PHP implementation, so they are out for the moment too but will probably be supported eventually. A: XML can be a bit heavyweight at times. JSON is quite nice, though, has good language support, and JSON data can be translated directly to native objects on many platforms. A: Advantages: * *XML - Lots of libraries, devs are familiar with it, XSLT, can be easily validated by both client and server (XSD, DTD), hierarchical data *JSON - easily interpreted on client side, compact notation, hierarchical data *CSV - Opens in Excel(?) Disadvantages: * *XML - Bloated, harder to interpret in JavaScript than JSON *JSON - If used improperly can pose a security hole (don't use eval), not all languages have libraries to interpret it. *CSV - Does not support hierarchical data, you'd be the only one doing it, and it's actually much harder than most devs think to parse valid CSV files (CSV values can contain new lines as long as they are between quotes, etc). Given the above, I wouldn't even bother supporting CSV. 
The client can generate it from either XML or JSON if it's really needed. A: CSV has so many problems as a complex data model that I wouldn't use it. XML is very flexible and easy to program with - clients will have no problem coding XML generators and parsers, and you can even provide sample parsers using SAX. Have you checked out Google's network data format? It's called Protocol Buffers. I don't know if it is useful for a REST service, however, as it skips that whole HTTP layer too. A: CSV is right out. JSON is a more compact object notation than XML, so if you're looking for high volumes it has the advantage. XML has wider market penetration (I love that phrase) and is supported by all programming languages and their core frameworks. JSON is getting there (if not already there). Personally, I like the brackets. I would bet more devs are comfortable working with XML data than with JSON. A: I don't have any experience with JSON. CSV works up to a point, when your data is very tabular and evenly structured. XML can become unwieldy very quickly, especially if you don't have a tool that creates the bindings to your objects automatically. I have not tried this either, but Google's Protocol Buffers look really good: a simple format that creates automatic bindings to C++, Java and Python and implements serialisation and deserialisation of the created objects. A: Aside from what Allain Lalonde already said, one additional advantage of CSV is that it tends to be more compact than XML or even JSON. So, if your data is strictly tabular, with a completely flat hierarchy, CSV may be a correct choice. Additional disadvantages of CSV are that it may use different delimiters and decimal separators, depending on which tool (and even country!) generated it.
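Two of the points above are easy to demonstrate concretely: valid CSV is trickier to parse than naive line-splitting suggests (quoted values may contain the delimiter and even newlines), while JSON carries the nested authors/references that CSV would need extra calls for. A sketch using Python's standard library; the document fields are invented:

```python
import csv
import io
import json

# A single CSV record whose quoted value spans two lines and contains a comma.
raw = 'id,title\n1,"REST APIs,\na survey"\n'
rows = list(csv.reader(io.StringIO(raw)))
print(rows)  # a naive split on newlines would have produced three "rows"

# The recursion CSV lacks: one JSON document carries authors and references.
doc = {"title": "REST APIs", "authors": ["Alice", "Bob"],
       "references": [{"title": "HTTP/1.1"}, {"title": "XML 1.0"}]}
print(json.loads(json.dumps(doc)) == doc)  # round-trips as one payload
```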
{ "language": "en", "url": "https://stackoverflow.com/questions/138929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: How do I obtain CPU cycle count in Win32? In Win32, is there any way to get a unique CPU cycle count or something similar that would be uniform for multiple processes/languages/systems/etc.? I'm creating some log files, but have to produce multiple logfiles because we're hosting the .NET runtime, and I'd like to avoid calling from one to the other to log. As such, I was thinking I'd just produce two files, combine them, and then sort them, to get a coherent timeline involving cross-world calls. However, GetTickCount does not increase for every call, so that's not reliable. Is there a better number, so that I get the calls in the right order when sorting? Edit: Thanks to @Greg who put me on the track to QueryPerformanceCounter, which did the trick. A: You can use the RDTSC CPU instruction (assuming x86). This instruction gives the CPU cycle counter, but be aware that it will increase very quickly to its maximum value, and then reset to 0. As the Wikipedia article mentions, you might be better off using the QueryPerformanceCounter function. A: System.Diagnostics.Stopwatch.GetTimestamp() returns the number of CPU cycles since a time origin (maybe when the computer starts, but I'm not sure), and I've never seen it fail to increase between 2 calls. The CPU cycles will be specific to each computer, so you can't use them to merge log files between 2 computers. A: RDTSC output may depend on the current core's clock frequency, which for modern CPUs is neither constant nor, in a multicore machine, consistent. Use the system time, and if dealing with feeds from multiple systems use an NTP time source. You can get reliable, consistent time readings that way; if the overhead is too much for your purposes, using the HPET to work out time elapsed since the last known reliable time reading is better than using the HPET alone. A: Here's an interesting article that says not to use RDTSC, but to instead use QueryPerformanceCounter. 
Conclusion: Using regular old timeGetTime() to do timing is not reliable on many Windows-based operating systems because the granularity of the system timer can be as high as 10-15 milliseconds, meaning that timeGetTime() is only accurate to 10-15 milliseconds. [Note that the high granularities occur on NT-based operating systems like Windows NT, 2000, and XP. Windows 95 and 98 tend to have much better granularity, around 1-5 ms.] However, if you call timeBeginPeriod(1) at the beginning of your program (and timeEndPeriod(1) at the end), timeGetTime() will usually become accurate to 1-2 milliseconds, and will provide you with extremely accurate timing information. Sleep() behaves similarly; the length of time that Sleep() actually sleeps for goes hand-in-hand with the granularity of timeGetTime(), so after calling timeBeginPeriod(1) once, Sleep(1) will actually sleep for 1-2 milliseconds, Sleep(2) for 2-3, and so on (instead of sleeping in increments as high as 10-15 ms). For higher precision timing (sub-millisecond accuracy), you'll probably want to avoid using the assembly mnemonic RDTSC because it is hard to calibrate; instead, use QueryPerformanceFrequency and QueryPerformanceCounter, which are accurate to less than 10 microseconds (0.00001 seconds). For simple timing, both timeGetTime and QueryPerformanceCounter work well, and QueryPerformanceCounter is obviously more accurate. However, if you need to do any kind of "timed pauses" (such as those necessary for framerate limiting), you need to be careful of sitting in a loop calling QueryPerformanceCounter, waiting for it to reach a certain value; this will eat up 100% of your processor. Instead, consider a hybrid scheme, where you call Sleep(1) (don't forget timeBeginPeriod(1) first!) whenever you need to pass more than 1 ms of time, and then only enter the QueryPerformanceCounter 100%-busy loop to finish off the last < 1/1000th of a second of the delay you need. 
This will give you ultra-accurate delays (accurate to 10 microseconds), with very minimal CPU usage. See the code above. A: Use GetTickCount and add another counter as you merge the log files. It won't give you a perfect sequence between the different log files, but it will at least keep all logs from each file in the correct order.
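The last answer's trick — a coarse timestamp plus a per-process sequence counter, so entries with equal timestamps still sort in emission order — can be sketched cross-platform like this (Python's monotonic clock stands in for GetTickCount/QueryPerformanceCounter here, and the log messages are invented):

```python
import itertools
import time

_seq = itertools.count()

def log_entry(message):
    # (coarse timestamp, tie-breaking counter, payload): sorting these tuples
    # keeps entries with identical timestamps in emission order.
    return (time.monotonic_ns(), next(_seq), message)

# Two "log files" produced by different parts of the host process.
a = [log_entry("native: call begin"), log_entry("native: call end")]
b = [log_entry(".NET: callback fired")]

merged = sorted(a + b)  # one coherent timeline across both logs
print([m[2] for m in merged])
```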
{ "language": "en", "url": "https://stackoverflow.com/questions/138932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How to get UTF-8 working in Java webapps? I need to get UTF-8 working in my Java webapp (servlets + JSP, no framework used) to support äöå etc. for regular Finnish text and Cyrillic alphabets like ЦжФ for special cases. My setup is the following: * *Development environment: Windows XP *Production environment: Debian Database used: MySQL 5.x Users mainly use Firefox2, but also Opera 9.x, FF3, IE7 and Google Chrome are used to access the site. How to achieve this? A: Answering myself as the FAQ of this site encourages it. This works for me: Mostly the characters äåö are not problematic, as the default character set used by browsers and Tomcat/Java for webapps is latin1, i.e. ISO-8859-1, which "understands" those characters. Getting UTF-8 working under Java+Tomcat+Linux/Windows+MySQL requires the following: Configuring Tomcat's server.xml It's necessary to configure the connector to use UTF-8 to encode URL (GET request) parameters: <Connector port="8080" maxHttpHeaderSize="8192" maxThreads="150" minSpareThreads="25" maxSpareThreads="75" enableLookups="false" redirectPort="8443" acceptCount="100" connectionTimeout="20000" disableUploadTimeout="true" compression="on" compressionMinSize="128" noCompressionUserAgents="gozilla, traviata" compressableMimeType="text/html,text/xml,text/plain,text/css,text/javascript,application/x-javascript,application/javascript" URIEncoding="UTF-8" /> The key part is URIEncoding="UTF-8" in the above example. It guarantees that Tomcat handles all incoming GET parameters as UTF-8 encoded. As a result, when the user writes the following to the address bar of the browser: https://localhost:8443/ID/Users?action=search&name=*ж* the character ж is handled as UTF-8 and is encoded (usually by the browser before even getting to the server) as %D0%B6. POST requests are not affected by this. CharsetFilter Then it's time to force the Java webapp to handle all requests and responses as UTF-8 encoded. 
This requires that we define a character set filter like the following: package fi.foo.filters; import javax.servlet.*; import java.io.IOException; public class CharsetFilter implements Filter { private String encoding; public void init(FilterConfig config) throws ServletException { encoding = config.getInitParameter("requestEncoding"); if (encoding == null) encoding = "UTF-8"; } public void doFilter(ServletRequest request, ServletResponse response, FilterChain next) throws IOException, ServletException { // Respect the client-specified character encoding // (see HTTP specification section 3.4.1) if (null == request.getCharacterEncoding()) { request.setCharacterEncoding(encoding); } // Set the default response content type and encoding response.setContentType("text/html; charset=UTF-8"); response.setCharacterEncoding("UTF-8"); next.doFilter(request, response); } public void destroy() { } } This filter makes sure that if the browser hasn't set the encoding used in the request, that it's set to UTF-8. The other thing done by this filter is to set the default response encoding ie. the encoding in which the returned html/whatever is. The alternative is to set the response encoding etc. in each controller of the application. 
This filter has to be added to the web.xml or the deployment descriptor of the webapp: <!--CharsetFilter start--> <filter> <filter-name>CharsetFilter</filter-name> <filter-class>fi.foo.filters.CharsetFilter</filter-class> <init-param> <param-name>requestEncoding</param-name> <param-value>UTF-8</param-value> </init-param> </filter> <filter-mapping> <filter-name>CharsetFilter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> The instructions for making this filter are found at the tomcat wiki (http://wiki.apache.org/tomcat/Tomcat/UTF-8) JSP page encoding In your web.xml, add the following: <jsp-config> <jsp-property-group> <url-pattern>*.jsp</url-pattern> <page-encoding>UTF-8</page-encoding> </jsp-property-group> </jsp-config> Alternatively, all JSP-pages of the webapp would need to have the following at the top of them: <%@page pageEncoding="UTF-8" contentType="text/html; charset=UTF-8"%> If some kind of a layout with different JSP-fragments is used, then this is needed in all of them. HTML-meta tags JSP page encoding tells the JVM to handle the characters in the JSP page in the correct encoding. Then it's time to tell the browser in which encoding the html page is: This is done with the following at the top of each xhtml page produced by the webapp: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="fi"> <head> <meta http-equiv='Content-Type' content='text/html; charset=UTF-8' /> ... JDBC-connection When using a db, it has to be defined that the connection uses UTF-8 encoding. 
This is done in context.xml or wherever the JDBC connection is defined, as follows: <Resource name="jdbc/AppDB" auth="Container" type="javax.sql.DataSource" maxActive="20" maxIdle="10" maxWait="10000" username="foo" password="bar" driverClassName="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost:3306/ID_development?useEncoding=true&amp;characterEncoding=UTF-8" /> MySQL database and tables The used database must use UTF-8 encoding. This is achieved by creating the database with the following: CREATE DATABASE `ID_development` /*!40100 DEFAULT CHARACTER SET utf8 COLLATE utf8_swedish_ci */; Then, all of the tables need to be in UTF-8 also: CREATE TABLE `Users` ( `id` int(10) unsigned NOT NULL auto_increment, `name` varchar(30) collate utf8_swedish_ci default NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_swedish_ci ROW_FORMAT=DYNAMIC; The key part being CHARSET=utf8. MySQL server configuration The MySQL server has to be configured also. Typically this is done in Windows by modifying the my.ini file and in Linux by configuring the my.cnf file. In those files it should be defined that all clients connected to the server use utf8 as the default character set and that the default charset used by the server is also utf8. [client] port=3306 default-character-set=utf8 [mysql] default-character-set=utf8 MySQL procedures and functions These also need to have the character set defined. For example: DELIMITER $$ DROP FUNCTION IF EXISTS `pathToNode` $$ CREATE FUNCTION `pathToNode` (ryhma_id INT) RETURNS TEXT CHARACTER SET utf8 READS SQL DATA BEGIN DECLARE path VARCHAR(255) CHARACTER SET utf8; SET path = NULL; ... 
RETURN path; END $$ DELIMITER ; GET requests: latin1 and UTF-8 If and when it's defined in Tomcat's server.xml that GET request parameters are encoded in UTF-8, the following GET requests are handled properly: https://localhost:8443/ID/Users?action=search&name=Petteri https://localhost:8443/ID/Users?action=search&name=ж Because ASCII characters are encoded in the same way both with latin1 and UTF-8, the string "Petteri" is handled correctly. The Cyrillic character ж is not understood at all in latin1. Because Tomcat is instructed to handle request parameters as UTF-8 it encodes that character correctly as %D0%B6. If and when browsers are instructed to read the pages in UTF-8 encoding (with request headers and the html meta tag), at least Firefox 2/3 and other browsers from this period all encode the character themselves as %D0%B6. The end result is that all users with the name "Petteri" are found and also all users with the name "ж" are found. But what about äåö? The HTTP specification defines that by default URLs are encoded as latin1. This results in firefox2, firefox3 etc. encoding the following https://localhost:8443/ID/Users?action=search&name=*Päivi* into the encoded version https://localhost:8443/ID/Users?action=search&name=*P%E4ivi* In latin1 the character ä is encoded as %E4, even though the page/request/everything is defined to use UTF-8. The UTF-8 encoded version of ä is %C3%A4 The result of this is that it's quite impossible for the webapp to correctly handle the request parameters from GET requests, as some characters are encoded in latin1 and others in UTF-8. 
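The latin1-vs-UTF-8 split described above can be reproduced outside the Java stack; Python's urllib percent-encodes with whichever charset you ask for, so it shows exactly the two encodings of ä the answer mentions. This is an illustration only, not part of the Tomcat setup:

```python
from urllib.parse import quote, unquote

# The same character, percent-encoded under the two charsets discussed above.
print(quote("ä", encoding="latin-1"))   # %E4 — what the browser puts in the URL
print(quote("ä", encoding="utf-8"))     # %C3%A4
print(quote("ж", encoding="utf-8"))     # %D0%B6

# ж cannot be expressed in latin-1 at all, which is why only UTF-8
# handles both the Finnish and the Cyrillic alphabets.
print(unquote("%D0%B6", encoding="utf-8"))  # ж
```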
Notice: POST requests do work, as browsers encode all request parameters from forms completely in UTF-8 if the page is defined as being UTF-8. Stuff to read A very big thank you to the writers of the following for giving the answers to my problem: * * http://tagunov.tripod.com/i18n/i18n.html * http://wiki.apache.org/tomcat/Tomcat/UTF-8 * http://java.sun.com/developer/technicalArticles/Intl/HTTPCharset/ * http://dev.mysql.com/doc/refman/5.0/en/charset-syntax.html * http://cagan327.blogspot.com/2006/05/utf-8-encoding-fix-tomcat-jsp-etc.html * http://cagan327.blogspot.com/2006/05/utf-8-encoding-fix-for-mysql-tomcat.html * http://jeppesn.dk/utf-8.html * http://www.nabble.com/request-parameters-mishandle-utf-8-encoding-td18720039.html * http://www.utoronto.ca/webdocs/HTMLdocs/NewHTML/iso_table.html * http://www.utf8-chartable.de/ Important Note MySQL supports the Basic Multilingual Plane using 3-byte UTF-8 characters. If you need to go outside of that (certain alphabets require more than 3 bytes of UTF-8), then you either need to use a flavor of the VARBINARY column type or use the utf8mb4 character set (which requires MySQL 5.5.3 or later). Just be aware that using the utf8 character set in MySQL won't work 100% of the time. Tomcat with Apache One more thing: If you are using the Apache + Tomcat + mod_JK connector then you also need to make the following changes: * *Add URIEncoding="UTF-8" into the Tomcat server.xml file for the 8009 connector; it is used by the mod_JK connector. <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" URIEncoding="UTF-8"/> *Go to your Apache folder, i.e. /etc/httpd/conf, and add AddDefaultCharset utf-8 in the httpd.conf file. Note: First check whether it already exists. If it exists, you may update it with this line. You can also add this line at the bottom. A: I also want to add that, from here, this part solved my UTF problem: runtime.encoding=<encoding> A: I think you summed it up quite well in your own answer. In the process of UTF-8-ing(?) 
from end to end you might also want to make sure Java itself is using UTF-8. Use -Dfile.encoding=utf-8 as a parameter to the JVM (can be configured in catalina.bat). A: To add to kosoant's answer, if you are using Spring, rather than writing your own Servlet filter, you can use the class org.springframework.web.filter.CharacterEncodingFilter they provide, configuring it like the following in your web.xml: <filter> <filter-name>encoding-filter</filter-name> <filter-class>org.springframework.web.filter.CharacterEncodingFilter</filter-class> <init-param> <param-name>encoding</param-name> <param-value>UTF-8</param-value> </init-param> <init-param> <param-name>forceEncoding</param-name> <param-value>FALSE</param-value> </init-param> </filter> <filter-mapping> <filter-name>encoding-filter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> A: This is for Greek encoding in MySQL tables when we want to access them using Java: Use the following connection setup in your JBoss connection pool (mysql-ds.xml): <connection-url>jdbc:mysql://192.168.10.123:3308/mydatabase</connection-url> <driver-class>com.mysql.jdbc.Driver</driver-class> <user-name>nts</user-name> <password>xaxaxa!</password> <connection-property name="useUnicode">true</connection-property> <connection-property name="characterEncoding">greek</connection-property> If you don't want to put this in a JNDI connection pool, you can configure it as a JDBC URL as the next line illustrates: jdbc:mysql://192.168.10.123:3308/mydatabase?characterEncoding=greek For me and Nick, so we never forget it and waste time anymore..... A: Nice detailed answer. Just wanted to add one more thing which will definitely help others to see the UTF-8 encoding on URLs in action. Follow the steps below to enable UTF-8 encoding of URLs in Firefox. * *Type "about:config" in the address bar. *Use the filter input to search for the "network.standard-url.encode-query-utf8" property. 
*The above property will be false by default; turn it to TRUE. *Restart the browser. UTF-8 encoding on URLs works by default in IE6/7/8 and Chrome. A: Previous responses didn't work for my problem. It happened only in production, with Tomcat and Apache mod_proxy_ajp. The POST body lost non-ASCII chars, which arrived as ? The problem finally was with the JVM defaultCharset (US-ASCII in a default installation: Charset dfset = Charset.defaultCharset();), so the solution was to run the Tomcat server with a modifier so the JVM runs with UTF-8 as the default charset: JAVA_OPTS="$JAVA_OPTS -Dfile.encoding=UTF-8" (add this line to catalina.sh and service tomcat restart) Maybe you must also change the Linux system variables (edit ~/.bashrc and ~/.profile for a permanent change, see https://perlgeek.de/en/article/set-up-a-clean-utf8-environment) export LC_ALL=en_US.UTF-8 export LANG=en_US.UTF-8 export LANGUAGE=en_US.UTF-8 A: I had a similar problem, but with the filenames of files I was compressing with Apache Commons. So, I resolved it with this command: convmv --notest -f cp1252 -t utf8 * -r It works very well for me. Hope it helps anyone ;) A: For my case of displaying Unicode characters from message bundles, I don't need to apply the "JSP page encoding" section to display Unicode on my jsp page. All I need is the "CharsetFilter" section. A: One other point that hasn't been mentioned relates to Java Servlets working with Ajax. I have situations where a web page picks up UTF-8 text from the user and sends it to a JavaScript file, which includes it in a URI sent to the Servlet. The Servlet queries a database, captures the result and returns it as XML to the JavaScript file, which formats it and inserts the formatted response into the original web page. In one web app I was following an early Ajax book's instructions for wrapping up the JavaScript in constructing the URI. The example in the book used the escape() method, which I discovered (the hard way) is wrong. For UTF-8 you must use encodeURIComponent(). 
Few people seem to roll their own Ajax these days, but I thought I might as well add this. A: About the CharsetFilter mentioned in @kosoant's answer.... There is a built-in filter in Tomcat's web.xml (located at conf/web.xml). The filter is named setCharacterEncodingFilter and is commented out by default. You can uncomment this (please remember to uncomment its filter-mapping too). Also there is no need to set jsp-config in your web.xml (I have tested it for Tomcat 7+). A: Sometimes you can solve the problem through the MySQL Administrator wizard. In Startup variables > Advanced > set Def. char Set:utf8 Maybe this config needs a MySQL restart. A: Faced the same issue on Spring MVC 5 + Tomcat 9 + JSP. After long research, came to an elegant solution (no need for filters and no need for changes in the Tomcat server.xml (starting from the 8.0.0-RC3 version)): * *In the WebMvcConfigurer implementation set the default encoding for messageSource (for reading data from message source files in the UTF-8 encoding). @Configuration @EnableWebMvc @ComponentScan("{package.with.components}") public class WebApplicationContextConfig implements WebMvcConfigurer { @Bean public MessageSource messageSource() { final ResourceBundleMessageSource messageSource = new ResourceBundleMessageSource(); messageSource.setBasenames("messages"); messageSource.setDefaultEncoding("UTF-8"); return messageSource; } /* other beans and methods */ } *In the DispatcherServletInitializer implementation @Override the onStartup method and set the request and resource character encoding in it. 
public class DispatcherServletInitializer extends AbstractAnnotationConfigDispatcherServletInitializer { @Override public void onStartup(final ServletContext servletContext) throws ServletException { // https://wiki.apache.org/tomcat/FAQ/CharacterEncoding servletContext.setRequestCharacterEncoding("UTF-8"); servletContext.setResponseCharacterEncoding("UTF-8"); super.onStartup(servletContext); } /* servlet mappings, root and web application configs, other methods */ } *Save all message source and view files in UTF-8 encoding. *Add <%@ page contentType="text/html;charset=UTF-8" %> or <%@ page pageEncoding="UTF-8" %> in each *.jsp file or add jsp-config descriptor to web.xml <?xml version="1.0" encoding="UTF-8"?> <web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://java.sun.com/xml/ns/javaee" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd" id="WebApp_ID" version="3.0"> <display-name>AppName</display-name> <jsp-config> <jsp-property-group> <url-pattern>*.jsp</url-pattern> <page-encoding>UTF-8</page-encoding> </jsp-property-group> </jsp-config> </web-app> A: In case you have specified in connection pool (mysql-ds.xml), in your Java code you can open the connection as follows: DriverManager.registerDriver(new com.mysql.jdbc.Driver()); Connection conn = DriverManager.getConnection( "jdbc:mysql://192.168.1.12:3308/mydb?characterEncoding=greek", "Myuser", "mypass");
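One more illustration, independent of Tomcat/MySQL: the classic symptom of getting any of the layers above wrong is UTF-8 bytes being re-decoded as latin1. The round-trip is easy to reproduce:

```python
text = "äöå ЦжФ"

# What a misconfigured layer does: UTF-8 bytes read back as latin1...
mojibake = text.encode("utf-8").decode("latin-1")
print(mojibake)  # the familiar Ã¤Ã¶Ã¥-style garbage

# ...and, as long as no bytes were dropped, the damage is reversible,
# which is a quick way to diagnose which layer mangled the text.
print(mojibake.encode("latin-1").decode("utf-8") == text)
```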
{ "language": "en", "url": "https://stackoverflow.com/questions/138948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "375" }
Q: Is there an implementation for Delphi:TClientDataSet in C++ for MVS? I want to migrate from Embarcadero Delphi to Visual Studio, but without a TClientDataset class it is very difficult. This class represents an in-memory dataset. I can't find any class like TClientDataset. Can anyone help me find something like this please? A: The .NET couple System.Dataset and System.Datatable are very different beasts from the TClientDataset. Filtering and binding are done on another class (Dataview), dotNET DataGrid hides this a little. Extract method is the nearest a datatable provides in termes of filtering (it returns an array of pointers to DataRows). Grouping is not so powerful as in TClientDataset, as also indexing is poorer. (As in dotNet 1.1) There's no record cursor on DataTable, so the positioning is on the visual controls - it takes 10 lines of codes just to get the actual record out of a DataGrid. So the easiness of positioning the cursor on grid and get the value of the field of the dataset does not exist. A: Visual studio has DataSet and DataTable classes which are very close to what a TClientDataSet is in Delphi. See http://msdn.microsoft.com/en-us/library/system.data.dataset.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/138952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: .htm or .html extension - which one is correct and what is different? When I save a file with an .htm or .html extension, which one is correct and what is different? A: Neither is wrong, it's a matter of preference. Traditionally, MS software uses htm by default, and *nix prefers html. As oded pointed out below, the .htm tradition was carried over from win 3.xx, where file extensions were limited to three characters. A: Personally I prefer the .html but as other have said both will work. Just make sure you only use one. Never both on the same site! link to mypage.html is not the same as link to mypage.htm A: Also notice that as part of a URI, the file extension doesn't play any role. In fact, it isn't even a file extension, it just looks like one. The type of the resource identified by a URI is not encoded in its name. Instead, it is decided by the Content-Type HTTP header field. It's completely legitimate (but perhaps a bit stupid) to deliver a bitmap picture as myimage.html and conversely, to deliver an HTML page as index.png. This is also the reason why it is argued that file extensions shouldn't be part of URIs at all. Sir Tim Berners-Lee elaborates on this in Hypertext Style: Cool URIs Don't Change. A: Mainly, the number of characters is different. ".htm" smells of Microsoft operating systems where the file system historically limited file name extensions (the part of the file name after the dot) to 3 characters. ".html" smells of Un*x operating systems that did not have this limitation and that were used for all the serious internet work at the time. Pragmatically, the two are equivalent. The difference is cultural. ".html" is regarded by some as more correct. The same people tend to look down at Microsoft operating systems and regard ".htm" as unsightly reminder of their limitations. A: They are completely interchangeable. 
If I understand the history properly then in the beginning the correct extension was .html but when Windows 95 came along it could only cope with 3 character extensions. So .html is correct according to some standard or other but in practice it doesn't matter (most of the time...have just done a quick google search and found the following) There is one area of concern though, most host servers will require your default starting page to be named as "index.html" and not as "index.htm" A: I use .htm. Less typing I guess. Or perhaps it's my windows-bias. A: When you save the file locally, the difference doesn't matter - your local system will likely treat the two file extensions as interchangeable for loading by your browser. The reason for it is that historically Windows-based systems used 3 letter extensions (htm) and Unix-based systems the 4 letters (html). On a server-side, there may be some differences when it comes to serving default filenames: The one situation in which there may be a difference between the two extensions is that of a server's default filenames. When a URL that does not specify a filename is requested from a server, such as http://www.domain.dom/dirname/, the server returns a file from the requested URL that matches a default filename. Examples of common default filenames include "index.html," "index.htm," "default.html," "default.htm," etc. However, an administrator can make the server's default filename anything he/she so desires. Note that servers are often configured with more then one default filename. So if you have any level of control over your server's default filenames, then this shouldn't be an issue. A: Both are correct back in the past file extensions had to be a maximum of 3 characters long. http://en.wikipedia.org/wiki/Filename_extension A: Personally I prefer .html, since the name is "Hypertext markup language". 
.htm was used because certain legacy versions of windows could not have more than 3 characters in the file name extension A: Both are working as same,but For the technical and non technical reference please find out here, http://www.sightspecific.com/~mosh/www_faq/ext.html
{ "language": "en", "url": "https://stackoverflow.com/questions/138953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "62" }
Q: What free screen design tools are available I am looking for a free tool to quickly create a screen design in a workshop with a customer (for a web application). The focus of the tool should be on a functional definition of screens and not on the design of them. Do you have any suggestions for an appropriate tool? A: Balsamiq * *Online *Quick and easy to use *Drawings have a hand drawn feel so you can avoid design details and focus on layout and function *You can save (XML) your drawing to open up later *Can save as drawing an image A: I've found mockupscreens to be pretty good. The designs look exactly like works in progress and keeps clients from getting distracted A: Pencil is pretty good: Pencil is built for the purpose of providing a free and open-source GUI prototyping tool that people can easily install and use to create mockups in popular desktop platforms. A: Balsamiq's Mockups worked very well for us. The product actually delivered more than it promised. Everyone to whom we showed our Mockups wanted to know about the development tool we used. Even though, as another commenter observed, the Mockups intentionally look like works in progress, the functionality of our Mockups was such that those to whom they were showed started confusing the Mockups with functioning software--we had to keep on reminded people that the Mockups are only sophisticated specifications. Additionally, the Balsamiq folks who make the product are some of the nicest people with whom I've ever worked. Though the retail price of the product is not free (sometimes free is not really economical), its cost is very reasonable and it will more than pay for itself by producing a solid, semi-functional, easily modified prototype very quickly. A: Any WYSIWYG tool should do the job. I would recommend something like Kompozer to get the initial job done, providing you are not too worried about design, and will therefore start more or less from scratch with the actual product. 
A: I find that using a PowerPoint like app can do wonders. You design different aspects and make new slides. Use the notes to record details. Google Presentations is free and would be fine for what you want. For some money there is http://www.axure.com/ which is a much more robust "prototyping" tool. A: You can check out video http://malaysia.video.yahoo.com/watch/919824/3643540 about paper prototyping :)
{ "language": "en", "url": "https://stackoverflow.com/questions/138979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How to test if a file is a directory in a batch script? Is there any way to find out if a file is a directory? I have the file name in a variable. In Perl I can do this: if(-d $var) { print "it's a directory\n" } A: Recently failed with different approaches from the above. Quite sure they worked in the past, maybe related to dfs here. Now using the files attributes and cut first char @echo off SETLOCAL ENABLEEXTENSIONS set ATTR=%~a1 set DIRATTR=%ATTR:~0,1% if /I "%DIRATTR%"=="d" echo %1 is a folder :EOF A: You can do it like so: IF EXIST %VAR%\NUL ECHO It's a directory However, this only works for directories without spaces in their names. When you add quotes round the variable to handle the spaces it will stop working. To handle directories with spaces, convert the filename to short 8.3 format as follows: FOR %%i IN (%VAR%) DO IF EXIST %%~si\NUL ECHO It's a directory The %%~si converts %%i to an 8.3 filename. To see all the other tricks you can perform with FOR variables enter HELP FOR at a command prompt. (Note - the example given above is in the format to work in a batch file. To get it work on the command line, replace the %% with % in both places.) A: This works perfectly if exist "%~1\" echo Directory we need to use %~1 to remove quotes from %1, and add a backslash at end. Then put thw whole into qutes again. A: A variation of @batchman61's approach (checking the Directory attribute). This time I use an external 'find' command. (Oh, and note the && trick. This is to avoid the long boring IF ERRORLEVEL syntax.) @ECHO OFF SETLOCAL EnableExtensions ECHO.%~a1 | find "d" >NUL 2>NUL && ( ECHO %1 is a directory ) Outputs yes on: * *Directories. *Directory symbolic links or junctions. *Broken directory symbolic links or junctions. (Doesn't try to resolve links.) *Directories which you have no read permission on (e.g. "C:\System Volume Information") A: CD returns an EXIT_FAILURE when the specified directory does not exist. 
And you got conditional processing symbols, so you could do like the below for this. SET cd_backup=%cd% (CD "%~1" && CD %cd_backup%) || GOTO Error :Error CD %cd_backup% A: The NUL technique seems to only work on 8.3 compliant file names. (In other words, `D:\Documents and Settings` is "bad" and `D:\DOCUME~1` is "good") I think there is some difficulty using the "NUL" tecnique when there are SPACES in the directory name, such as "Documents and Settings." I am using Windows XP service pack 2 and launching the cmd prompt from %SystemRoot%\system32\cmd.exe Here are some examples of what DID NOT work and what DOES WORK for me: (These are all demonstrations done "live" at an interactive prompt. I figure that you should get things to work there before trying to debug them in a script.) This DID NOT work: D:\Documents and Settings>if exist "D:\Documents and Settings\NUL" echo yes This DID NOT work: D:\Documents and Settings>if exist D:\Documents and Settings\NUL echo yes This DOES work (for me): D:\Documents and Settings>cd .. D:\>REM get the short 8.3 name for the file D:\>dir /x Volume in drive D has no label. Volume Serial Number is 34BE-F9C9 Directory of D:\ 09/25/2008 05:09 PM <DIR> 2008 09/25/2008 05:14 PM <DIR> 200809~1.25 2008.09.25 09/23/2008 03:44 PM <DIR> BOOST_~3 boost_repo_working_copy 09/02/2008 02:13 PM 486,128 CHROME~1.EXE ChromeSetup.exe 02/14/2008 12:32 PM <DIR> cygwin [[Look right here !!!! 
]] 09/25/2008 08:34 AM <DIR> DOCUME~1 Documents and Settings 09/11/2008 01:57 PM 0 EMPTY_~1.TXT empty_testcopy_file.txt 01/21/2008 06:58 PM <DIR> NATION~1 National Instruments Downloads 10/12/2007 11:25 AM <DIR> NVIDIA 05/13/2008 09:42 AM <DIR> Office10 09/19/2008 11:08 AM <DIR> PROGRA~1 Program Files 12/02/1999 02:54 PM 24,576 setx.exe 09/15/2008 11:19 AM <DIR> TEMP 02/14/2008 12:26 PM <DIR> tmp 01/21/2008 07:05 PM <DIR> VXIPNP 09/23/2008 12:15 PM <DIR> WINDOWS 02/21/2008 03:49 PM <DIR> wx28 02/29/2008 01:47 PM <DIR> WXWIDG~2 wxWidgets 3 File(s) 510,704 bytes 20 Dir(s) 238,250,901,504 bytes free D:\>REM now use the \NUL test with the 8.3 name D:\>if exist d:\docume~1\NUL echo yes yes This works, but it's sort of silly, because the dot already implies i am in a directory: D:\Documents and Settings>if exist .\NUL echo yes A: I use this: if not [%1] == [] ( pushd %~dpn1 2> nul if errorlevel == 1 pushd %~dp1 ) A: This works and also handles paths with spaces in them: dir "%DIR%" > NUL 2>&1 if not errorlevel 1 ( echo Directory exists. ) else ( echo Directory does not exist. ) Probably not the most efficient but easier to read than the other solutions in my opinion. A: A very simple way is to check if the child exists. If a child does not have any child, the exist command will return false. IF EXIST %1\. ( echo %1 is a folder ) else ( echo %1 is a file ) You may have some false negative if you don't have sufficient access right (I have not tested it). A: If you can cd into it, it's a directory: set cwd=%cd% cd /D "%1" 2> nul @IF %errorlevel%==0 GOTO end cd /D "%~dp1" @echo This is a file. @goto end2 :end @echo This is a directory :end2 @REM restore prior directory @cd %cwd% A: Further to my previous offering, I find this also works: if exist %1\ echo Directory No quotes around %1 are needed because the caller will supply them. 
This saves one entire keystroke over my answer of a year ago ;-) A: Here's a script that uses FOR to build a fully qualified path, and then pushd to test whether the path is a directory. Notice how it works for paths with spaces, as well as network paths. @echo off if [%1]==[] goto usage for /f "delims=" %%i in ("%~1") do set MYPATH="%%~fi" pushd %MYPATH% 2>nul if errorlevel 1 goto notdir goto isdir :notdir echo not a directory goto exit :isdir popd echo is a directory goto exit :usage echo Usage: %0 DIRECTORY_TO_TEST :exit Sample output with the above saved as "isdir.bat": C:\>isdir c:\Windows\system32 is a directory C:\>isdir c:\Windows\system32\wow32.dll not a directory C:\>isdir c:\notadir not a directory C:\>isdir "C:\Documents and Settings" is a directory C:\>isdir \ is a directory C:\>isdir \\ninja\SharedDocs\cpu-z is a directory C:\>isdir \\ninja\SharedDocs\cpu-z\cpuz.ini not a directory A: This works: if exist %1\* echo Directory Works with directory names that contains spaces: C:\>if exist "c:\Program Files\*" echo Directory Directory Note that the quotes are necessary if the directory contains spaces: C:\>if exist c:\Program Files\* echo Directory Can also be expressed as: C:\>SET D="C:\Program Files" C:\>if exist %D%\* echo Directory Directory This is safe to try at home, kids! A: Based on this article titled "How can a batch file test existence of a directory" it's "not entirely reliable". BUT I just tested this: @echo off IF EXIST %1\NUL goto print ECHO not dir pause exit :print ECHO It's a directory pause and it seems to work A: Here's my solution: REM make sure ERRORLEVEL is 0 TYPE NUL REM try to PUSHD into the path (store current dir and switch to another one) PUSHD "insert path here..." >NUL 2>&1 REM if ERRORLEVEL is still 0, it's most definitely a directory IF %ERRORLEVEL% EQU 0 command... 
REM if needed/wanted, go back to previous directory POPD A: I would like to post my own function script about this subject hope to be useful for someone one day. @pushd %~dp1 @if not exist "%~nx1" ( popd exit /b 0 ) else ( if exist "%~nx1\*" ( popd exit /b 1 ) else ( popd exit /b 3 ) ) This batch script checks if file/folder is exist and if it is a file or a folder. Usage: script.bat "PATH" Exit code(s): 0: file/folder doesn't exist. 1: exists, and it is a folder. 3: exists, and it is a file. A: One issue with using %%~si\NUL method is that there is the chance that it guesses wrong. Its possible to have a filename shorten to the wrong file. I don't think %%~si resolves the 8.3 filename, but guesses it, but using string manipulation to shorten the filepath. I believe if you have similar file paths it may not work. An alternative method: dir /AD %F% 2>&1 | findstr /C:"Not Found">NUL:&&(goto IsFile)||(goto IsDir) :IsFile echo %F% is a file goto done :IsDir echo %F% is a directory goto done :done You can replace (goto IsFile)||(goto IsDir) with other batch commands: (echo Is a File)||(echo is a Directory) A: Under Windows 7 and XP, I can't get it to tell files vs. dirs on mapped drives. The following script: @echo off if exist c:\temp\data.csv echo data.csv is a file if exist c:\temp\data.csv\ echo data.csv is a directory if exist c:\temp\data.csv\nul echo data.csv is a directory if exist k:\temp\nonexistent.txt echo nonexistent.txt is a file if exist k:\temp\something.txt echo something.txt is a file if exist k:\temp\something.txt\ echo something.txt is a directory if exist k:\temp\something.txt\nul echo something.txt is a directory produces: data.csv is a file something.txt is a file something.txt is a directory something.txt is a directory So beware if your script might be fed a mapped or UNC path. The pushd solution below seems to be the most foolproof. 
A: This is the code that I use in my BATCH files ``` @echo off set param=%~1 set tempfile=__temp__.txt dir /b/ad > %tempfile% set isfolder=false for /f "delims=" %%i in (temp.txt) do if /i "%%i"=="%param%" set isfolder=true del %tempfile% echo %isfolder% if %isfolder%==true echo %param% is a directory ``` A: Here is my solution after many tests with if exist, pushd, dir /AD, etc... @echo off cd /d C:\ for /f "delims=" %%I in ('dir /a /ogn /b') do ( call :isdir "%%I" if errorlevel 1 (echo F: %%~fI) else echo D: %%~fI ) cmd/k :isdir echo.%~a1 | findstr /b "d" >nul exit /b %errorlevel% :: Errorlevel :: 0 = folder :: 1 = file or item not found * *It works with files that have no extension *It works with folders named folder.ext *It works with UNC path *It works with double-quoted full path or with just the dirname or filename only. *It works even if you don't have read permissions *It works with Directory Links (Junctions). *It works with files whose path contains a Directory Link. A: If your objective is to only process directories then this will be useful. This is taken from the https://ss64.com/nt/for_d.html Example... List every subfolder, below the folder C:\Work\ that has a name starting with "User": CD \Work FOR /D /r %%G in ("User*") DO Echo We found FOR /D or FOR /D /R @echo off cd /d "C:\your directory here" for /d /r %%A in ("*") do echo We found a folder: %%~nxA pause Remove /r to only go one folder deep. The /r switch is recursive and undocumented in the command below. The for /d help taken from command for /? FOR /D %variable IN (set) DO command [command-parameters] If set contains wildcards, then specifies to match against directory names instead of file names. A: I was looking for this recently as well, and had stumbled upon a solution which has worked for me, but I do not know of any limitations it has (as I have yet to discover them). I believe this answer is similar in nature to TechGuy's answer above, but I want to add another level of viability. 
Either way, I have had great success expanding the argument into a full fledged file path, and I believe you have to use setlocal enableextensions for this to work properly. Using below I can tell if a file is a directory, or opposite. A lot of this depends on what the user is actually needing. If you prefer to work with a construct searching for errorlevel vs && and || in your work you can of course do so. Sometimes an if construct for errorlevel can give you a little more flexibility since you do not have to use a GOTO command which can sometimes break your environment conditions. @Echo Off setlocal enableextensions Dir /b /a:D "%~f1" && Echo Arg1 is a Folder || Echo Arg1 is NOT a Folder Dir /b /a:-D "%~f1" && Echo Arg1 is a File || Echo Arg1 is NOT a File pause Using this you could simply drag and drop your file(s) onto the tool you are building to parse them out. Conversely, if you are using other means to comb your file structure and you already have the file and are not dragging/dropping them onto the batch file, you could implement this: @Echo Off setlocal enableextensions Dir /b /s "C:\SomeFolderIAmCombing\*" >"%~dp0SomeFiletogoThroughlater.txt" For /f "Usebackq Delims=" %%a in ("%~dp0SomeFiletogoThroughlater.txt") do ( Call:DetectDir "%%a" ) REM Do some stuff after parsing through Files/Directories if needed. REM GOTO:EOF below is used to skip all the subroutines below. REM Using ' CALL:DetectDir "%%a" ' with the for loop keeps the for REM loop environment running in the background while still parsing the given file REM in a clean environment where GOTO and other commmands do not need Variable Expansion. GOTO:EOF :DetectDir [File or Folder being checked] REM Checks if Arg1 is a Directory. If yes, go to Dir coding. If not, go to File coding. Dir /b /a:D "%~f1" && Echo Arg1 is a Folder & GOTO:IsDir || Echo Arg1 is NOT a Folder & GOTO:IsFile REM Checks if Arg1 is NOT a Directory. If Yes, go to File coding. 
If not, go to Dir coding Dir /b /a:-D "%~f1" && Echo Arg1 is a File & GOTO:IsFile || Echo Arg1 is NOT a File & GOTO:IsDir :IsDir REM Do your stuff to the Folder GOTO:EOF :IsFile REM do your stuff to the File GOTO:EOF A: Can't we just test with this : IF [%~x1] == [] ECHO Directory It seems to work for me.
{ "language": "en", "url": "https://stackoverflow.com/questions/138981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "91" }
Q: Is there a performance difference between inc(i) and i := i + 1 in Delphi? I have a procedure with a lot of i := i +1; in it and I think inc(i); looks a lot better. Is there a performance difference or does the function call just get inlined by the compiler? I know this probably doesn't matter at all to my app, I'm just curious. EDIT: I did some gauging of the performance and found the difference to be very small, in fact as small as 5.1222741794670901427682121946224e-8! So it really doesn't matter. And optimization options really didn't change the outcome much. Thanks for all tips and suggestions! A: Modern compilers optimize the code. inc(i) and i:= i+1; are pretty much the same. Use whichever you prefer. Edit: As Jim McKeeth corrected: with Overflow Checking there is a difference. Inc does not do a range checking. A: It all depends on the type of "i". In Delphi, one normally declares loop-variables as "i: Integer", but it could as well be "i: PChar" which resolves to PAnsiChar on everything below Delphi 2009 and FPC (I'm guessing here), and to PWideChar on Delphi 2009 and Delphi.NET (also guessing). Since Delphi 2009 can do pointer-math, Inc(i) can also be done on typed-pointers (if they are defined with POINTER_MATH turned on). For example: type PSomeRecord = ^RSomeRecord; RSomeRecord = record Value1: Integer; Value2: Double; end; var i: PSomeRecord; procedure Test; begin Inc(i); // This line increases i with SizeOf(RSomeRecord) bytes, thanks to POINTER_MATH ! end; As the other anwsers already said : It's relativly easy to see what the compiler made of your code by opening up : Views > Debug Windows > CPU Windows > Disassembly Note, that compiler options like OPTIMIZATION, OVERFLOW_CHECKS and RANGE_CHECKS might influence the final result, so you should take care to have the settings according to your preference. 
A tip on this : In every unit, $INCLUDE a file that steers the compiler options, this way, you won't loose settings when your .bdsproj or .dproj is somehow damaged. (Look at the sourcecode of the JCL for a good example on this) A: You can verify it in the CPU window while debugging. The generated CPU instructions are the same for both cases. I agree Inc(I); looks better although this may be subjective. Correction: I just found this in the documentation for Inc: "On some platforms, Inc may generate optimized code, especially useful in tight loops." So it's probably advisable to stick to Inc. A: There is a huge difference if Overflow Checking is turned on. Basically Inc does not do overflow checking. Do as was suggested and use the disassembly window to see the difference when you have those compiler options turned on (it is different for each). If those options are turned off, then there is no difference. Rule of thumb, use Inc when you don't care about a range checking failure (since you won't get an exception!). A: You could always write both pieces of code (in separate procedures), put a breakpoint in the code and compare the assembler in the CPU window. In general, I'd use inc(i) wherever it's obviously being used only as a loop/index of some sort, and + 1 wherever the 1 would make the code easier to maintain (ie, it might conceivable change to another integer in the future) or just more readable from an algorithm/spec point of view. A: "On some platforms, Inc may generate optimized code, especially useful in tight loops." For optimized compiler such as Delphi it doesn't care. That is about old compilers (e.g. Turbo Pascal)
{ "language": "en", "url": "https://stackoverflow.com/questions/138994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: How to output HTML from JSP <%! ... %> block? I just started learning JSP technology, and came across a wall. How do you output HTML from a method in <%! ... %> JSP declaration block? This doesn't work: <%! void someOutput() { out.println("Some Output"); } %> ... <% someOutput(); %> Server says there's no “out”. U: I do know how to rewrite code with this method returning a string, but is there a way to do this inside <%! void () { } %> ? Though it may be non-optimal, it's still interesting. A: A simple alternative would be the following: <%! String myVariable = "Test"; pageContext.setAttribute("myVariable", myVariable); %> <c:out value="myVariable"/> <h1>${myVariable}</h1> The you could simply use the variable in any way within the jsp code A: You can't use the 'out' variable (nor any of the other "predeclared" scriptlet variables) inside directives. The JSP page gets translated by your webserver into a Java servlet. Inside tomcats, for instance, everything inside scriptlets (which start "<%"), along with all the static HTML, gets translated into one giant Java method which writes your page, line by line, to a JspWriter instance called "out". This is why you can use the "out" parameter directly in scriptlets. Directives, on the other hand (which start with "<%!") get translated as separate Java methods. As an example, a very simple page (let's call it foo.jsp): <html> <head/> <body> <%! 
String someOutput() { return "Some output"; } %> <% someOutput(); %> </body> </html> would end up looking something like this (with a lot of the detail ignored for clarity): public final class foo_jsp { // This is where the request comes in public void _jspService(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException { // JspWriter instance is gotten from a factory // This is why you can use 'out' directly in scriptlets JspWriter out = ...; // Snip out.write("<html>"); out.write("<head/>"); out.write("<body>"); out.write(someOutput()); // i.e. write the results of the method call out.write("</body>"); out.write("</html>"); } // Directive gets translated as separate method - note // there is no 'out' variable declared in scope private String someOutput() { return "Some output"; } } A: You can do something like this: <%! String myMethod(String input) { return "test " + input; } %> <%= myMethod("1 2 3") %> This will output test 1 2 3 to the page. A: too late to answer it but this help others <%! public void printChild(Categories cat, HttpServletResponse res ){ try{ if(cat.getCategoriesSet().size() >0){ res.getWriter().write("") ; } }catch(Exception exp){ } } %> A: <%! private void myFunc(String Bits, javax.servlet.jsp.JspWriter myOut) { try{ myOut.println("<div>"+Bits+"</div>"); } catch(Exception eek) { } } %> ... <% myFunc("more difficult than it should be",out); %> Try this, it worked for me! A: I suppose this would help: <%! String someOutput() { return "Some Output"; } %> ... <%= someOutput() %> Anyway, it isn't a good idea to have code in a view. A: All you need to do is pass the JspWriter object into your method as a parameter i.e. void someOutput(JspWriter stream) Then call it via: <% someOutput(out) %> The writer object is a local variable inside _jspService so you need to pass it into your utility method. The same would apply for all the other built in references (e.g. request, response, session). 
A great way to see whats going on is to use Tomcat as your server and drill down into the 'work' directory for the '.java' file generated from your 'jsp' page. Alternatively in weblogic you can use the 'weblogic.jspc' page compiler to view the Java that will be generated when the page is requested. A: You can do something like this: <% out.print("<p>Hey!</p>"); out.print("<p>How are you?</p>"); %>
{ "language": "en", "url": "https://stackoverflow.com/questions/138999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39" }
Q: DIV with overflow:auto and a 100% wide table I hope someone might be able to help me here. I've tried to simplify my example as best I can. I have an absolutely positioned DIV, which for this example I've made fill the browser window. This div has the overflow:auto attribute to provide scroll bars when the content is too big for the DIV to display. Within the DIV I have a table to present some data, and it's width is 100%. When the content becomes too large vertically, I expect the vertical scroll bar to appear and the table to shrink horizontally slightly to accommodate the scroll bar. However in IE7 what happens is the horizontal scroll bar also appears, despite there still being enough space horizontally for all the content in the div. This is IE specific - firefox works perfectly. Full source below. Any help greatly appreciated. Tony <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Table sizing bug?</title> <style> #maxsize { position: absolute; left: 5px; right: 5px; top: 5px; bottom: 5px; border: 5px solid silver; overflow: auto; } </style> </head> <body> <form id="form1" runat="server"> <div id="maxsize"> <p>This will be fine until such time as the vertical size forces a vertical scroll bar. At this point I'd expect the table to re-size to now take into account of the new vertical scroll bar. Instead, IE7 keeps the table the full size and introduces a horizontal scroll bar. 
</p> <table width="100%" cellspacing="0" cellpadding="0" border="1"> <tbody> <tr> <td>A</td> <td>B</td> <td>C</td> <td>D</td> <td>E</td> <td>F</td> <td>G</td> <td>H</td> <td>I</td> <td>J</td> <td>K</td> <td>L</td> <td>M</td> <td>N</td> <td>O</td> <td>P</td> <td>Q</td> <td>R</td> </tr> </tbody> </table> <p>Resize the browser window vertically so this content doesn't fit any more</p> <p>Hello</p><p>Hello</p><p>Hello</p><p>Hello</p><p>Hello</p> <p>Hello</p><p>Hello</p><p>Hello</p><p>Hello</p><p>Hello</p> </div> </form> </body> </html> added 03/16/10... thought it might be interesting to point out that GWT's source code points to this question in a comment... http://www.google.com/codesearch/p?hl=en#MTQ26449crI/com/google/gwt/user/client/ui/ScrollPanel.java&q=%22hack%20to%20account%20for%20the%22%20scrollpanel&sa=N&cd=1&ct=rc&l=48 A: Eran Galperin's solution fails to account for the fact that simply turning off horizontal scrolling will still allow the table to underlap the vertical scrollbar. I assume this is because IE is calculating the meaning of "100%" before deciding that it needs a vertical scrollbar, then failing to re-adjust for the remaining horizontal space available. cetnar's solution above nails it, though: <div style="zoom: 1; overflow: auto;"> <div id="myDiv" style="zoom: 1;"> <table style="width: 100%"> ... </table> </div> </div> This works properly on IE6 and 7 in my tests. From what I can tell, the "" hack doesn't appear to actually be necessary on IE6. A: Change: overflow: auto; to: overflow-y:hidden; overflow-x:auto; A: Okay, this one plagued me for a LONG time. I have made far too many designs that have extra padding on the right, allowing for IEs complete disregard for their own scrollbar. The answer is: nest two divs, give them both hasLayout, set the inner one to overflow. <!-- zoom: 1 is a proprietary IE property. 
It doesn't really do anything here, except give hasLayout --> <div style="zoom: 1;"> <div style="zoom: 1; overflow: auto"> <table style="width: 100%"... ... </table> </div> </div> http://www.satzansatz.de/cssd/onhavinglayout.html Go there to read more about having layout A: I had a problem with an excessive horizontal scroll bar in IE7. I've used D Carter's solution, slightly changed: <div style="zoom: 1; overflow: auto;"> <div id="myDiv" style="zoom: 1;"> <table style="width: 100%"... ... </table> </div> </div> To make it work in IE browsers older than 7 you need to add: <!--[if lt IE 7]><style> #myDiv { overflow: auto; } </style><![endif]--> A: This is reported fixed in GWT trunk. A: If it's the body tag that insists on having the horizontal scroll (I guess because I have child elements set to 100%) you can add this to your CSS to fix the problem in IE7 (or 8 compatibility mode): html{overflow-x:hidden;} A: This looks like it should fix your problem, as long as you are not opposed to conditional statements. Fixing IE overflow A: Unfortunately, this is a quirk of IE. There's no way using pure XHTML and CSS to get it to work the same as Firefox. You could do it using JavaScript to detect the size of the window and set the width of the table dynamically. I can add more detail on that if you really wanted to go that route.
{ "language": "en", "url": "https://stackoverflow.com/questions/139000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: PyQt - QScrollBar Dear Stackoverflow, can you show me an example of how to use a QScrollBar? Thanks. A: >>> import sys >>> from PyQt4 import QtCore, QtGui >>> app = QtGui.QApplication(sys.argv) >>> sb = QtGui.QScrollBar() >>> sb.setMinimum(0) >>> sb.setMaximum(100) >>> def on_slider_moved(value): print "new slider position: %i" % (value, ) >>> sb.connect(sb, QtCore.SIGNAL("sliderMoved(int)"), on_slider_moved) >>> sb.show() >>> app.exec_() Now, when you move the slider (you might have to resize the window), you'll see the slider position printed to the terminal as you drag the handle. A: It will come down to you using the QScrollArea; it is a widget that implements showing something that is larger than the available space. You will not need to use QScrollBar directly. I don't have a PyQt example but there is a C++ example in the Qt distribution; it is called the "Image Viewer". The object hierarchy will still be the same A: In the PyQt source code distribution, look at the file: examples/widgets/sliders.pyw Or there is a minimal example here (I guess I shouldn't copy paste because of potential copyright issues)
{ "language": "en", "url": "https://stackoverflow.com/questions/139005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to resolve a .lnk in c# I need to find out the file/directory name that a .lnk is pointing to using c#. What is the simplest way to do this? Thanks. A: I wrote this for video browser, it works really well #region Signitures imported from http://pinvoke.net [DllImport("shfolder.dll", CharSet = CharSet.Auto)] internal static extern int SHGetFolderPath(IntPtr hwndOwner, int nFolder, IntPtr hToken, int dwFlags, StringBuilder lpszPath); [Flags()] enum SLGP_FLAGS { /// <summary>Retrieves the standard short (8.3 format) file name</summary> SLGP_SHORTPATH = 0x1, /// <summary>Retrieves the Universal Naming Convention (UNC) path name of the file</summary> SLGP_UNCPRIORITY = 0x2, /// <summary>Retrieves the raw path name. A raw path is something that might not exist and may include environment variables that need to be expanded</summary> SLGP_RAWPATH = 0x4 } [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)] struct WIN32_FIND_DATAW { public uint dwFileAttributes; public long ftCreationTime; public long ftLastAccessTime; public long ftLastWriteTime; public uint nFileSizeHigh; public uint nFileSizeLow; public uint dwReserved0; public uint dwReserved1; [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 260)] public string cFileName; [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 14)] public string cAlternateFileName; } [Flags()] enum SLR_FLAGS { /// <summary> /// Do not display a dialog box if the link cannot be resolved. When SLR_NO_UI is set, /// the high-order word of fFlags can be set to a time-out value that specifies the /// maximum amount of time to be spent resolving the link. The function returns if the /// link cannot be resolved within the time-out duration. If the high-order word is set /// to zero, the time-out duration will be set to the default value of 3,000 milliseconds /// (3 seconds). To specify a value, set the high word of fFlags to the desired time-out /// duration, in milliseconds. 
/// </summary> SLR_NO_UI = 0x1, /// <summary>Obsolete and no longer used</summary> SLR_ANY_MATCH = 0x2, /// <summary>If the link object has changed, update its path and list of identifiers. /// If SLR_UPDATE is set, you do not need to call IPersistFile::IsDirty to determine /// whether or not the link object has changed.</summary> SLR_UPDATE = 0x4, /// <summary>Do not update the link information</summary> SLR_NOUPDATE = 0x8, /// <summary>Do not execute the search heuristics</summary> SLR_NOSEARCH = 0x10, /// <summary>Do not use distributed link tracking</summary> SLR_NOTRACK = 0x20, /// <summary>Disable distributed link tracking. By default, distributed link tracking tracks /// removable media across multiple devices based on the volume name. It also uses the /// Universal Naming Convention (UNC) path to track remote file systems whose drive letter /// has changed. Setting SLR_NOLINKINFO disables both types of tracking.</summary> SLR_NOLINKINFO = 0x40, /// <summary>Call the Microsoft Windows Installer</summary> SLR_INVOKE_MSI = 0x80 } /// <summary>The IShellLink interface allows Shell links to be created, modified, and resolved</summary> [ComImport(), InterfaceType(ComInterfaceType.InterfaceIsIUnknown), Guid("000214F9-0000-0000-C000-000000000046")] interface IShellLinkW { /// <summary>Retrieves the path and file name of a Shell link object</summary> void GetPath([Out(), MarshalAs(UnmanagedType.LPWStr)] StringBuilder pszFile, int cchMaxPath, out WIN32_FIND_DATAW pfd, SLGP_FLAGS fFlags); /// <summary>Retrieves the list of item identifiers for a Shell link object</summary> void GetIDList(out IntPtr ppidl); /// <summary>Sets the pointer to an item identifier list (PIDL) for a Shell link object.</summary> void SetIDList(IntPtr pidl); /// <summary>Retrieves the description string for a Shell link object</summary> void GetDescription([Out(), MarshalAs(UnmanagedType.LPWStr)] StringBuilder pszName, int cchMaxName); /// <summary>Sets the description for a Shell link object. 
The description can be any application-defined string</summary> void SetDescription([MarshalAs(UnmanagedType.LPWStr)] string pszName); /// <summary>Retrieves the name of the working directory for a Shell link object</summary> void GetWorkingDirectory([Out(), MarshalAs(UnmanagedType.LPWStr)] StringBuilder pszDir, int cchMaxPath); /// <summary>Sets the name of the working directory for a Shell link object</summary> void SetWorkingDirectory([MarshalAs(UnmanagedType.LPWStr)] string pszDir); /// <summary>Retrieves the command-line arguments associated with a Shell link object</summary> void GetArguments([Out(), MarshalAs(UnmanagedType.LPWStr)] StringBuilder pszArgs, int cchMaxPath); /// <summary>Sets the command-line arguments for a Shell link object</summary> void SetArguments([MarshalAs(UnmanagedType.LPWStr)] string pszArgs); /// <summary>Retrieves the hot key for a Shell link object</summary> void GetHotkey(out short pwHotkey); /// <summary>Sets a hot key for a Shell link object</summary> void SetHotkey(short wHotkey); /// <summary>Retrieves the show command for a Shell link object</summary> void GetShowCmd(out int piShowCmd); /// <summary>Sets the show command for a Shell link object. 
The show command sets the initial show state of the window.</summary> void SetShowCmd(int iShowCmd); /// <summary>Retrieves the location (path and index) of the icon for a Shell link object</summary> void GetIconLocation([Out(), MarshalAs(UnmanagedType.LPWStr)] StringBuilder pszIconPath, int cchIconPath, out int piIcon); /// <summary>Sets the location (path and index) of the icon for a Shell link object</summary> void SetIconLocation([MarshalAs(UnmanagedType.LPWStr)] string pszIconPath, int iIcon); /// <summary>Sets the relative path to the Shell link object</summary> void SetRelativePath([MarshalAs(UnmanagedType.LPWStr)] string pszPathRel, int dwReserved); /// <summary>Attempts to find the target of a Shell link, even if it has been moved or renamed</summary> void Resolve(IntPtr hwnd, SLR_FLAGS fFlags); /// <summary>Sets the path and file name of a Shell link object</summary> void SetPath([MarshalAs(UnmanagedType.LPWStr)] string pszFile); } [ComImport, Guid("0000010c-0000-0000-c000-000000000046"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)] public interface IPersist { [PreserveSig] void GetClassID(out Guid pClassID); } [ComImport, Guid("0000010b-0000-0000-C000-000000000046"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)] public interface IPersistFile : IPersist { new void GetClassID(out Guid pClassID); [PreserveSig] int IsDirty(); [PreserveSig] void Load([In, MarshalAs(UnmanagedType.LPWStr)] string pszFileName, uint dwMode); [PreserveSig] void Save([In, MarshalAs(UnmanagedType.LPWStr)] string pszFileName, [In, MarshalAs(UnmanagedType.Bool)] bool fRemember); [PreserveSig] void SaveCompleted([In, MarshalAs(UnmanagedType.LPWStr)] string pszFileName); [PreserveSig] void GetCurFile([In, MarshalAs(UnmanagedType.LPWStr)] string ppszFileName); } const uint STGM_READ = 0; const int MAX_PATH = 260; // CLSID_ShellLink from ShlGuid.h [ ComImport(), Guid("00021401-0000-0000-C000-000000000046") ] public class ShellLink { } #endregion public static string 
ResolveShortcut(string filename) { ShellLink link = new ShellLink(); ((IPersistFile)link).Load(filename, STGM_READ); // TODO: if I can get hold of the hwnd call resolve first. This handles moved and renamed files. // ((IShellLinkW)link).Resolve(hwnd, 0) StringBuilder sb = new StringBuilder(MAX_PATH); WIN32_FIND_DATAW data = new WIN32_FIND_DATAW(); ((IShellLinkW)link).GetPath(sb, sb.Capacity, out data, 0); return sb.ToString(); } A: Adding to what Kev said... If you are using csc.exe instead of Visual Studio, to add a reference to the Windows Script Host Object Model, you have to: * *Use the tlbimp.exe tool to create a managed assembly: tlbimp.exe c:\windows\system32\wshom.ocx /out:IWshRuntimeLibrary.dll *Reference the .dll using the /r switch in csc.exe: csc.exe Lnk.cs /r:IWshRuntimeLibrary.dll A: This may help: http://www.neowin.net/forum/index.php?s=3ad7f1ffb995ba84999376f574e9250f&showtopic=658928&st=0&p=589667108&#entry589667108 In essence... Add reference to Windows Script Host Object Model in COM tab of Add Reference dialogue. IWshRuntimeLibrary.IWshShell shell = new IWshRuntimeLibrary.WshShell(); IWshRuntimeLibrary.IWshShortcut shortcut = (IWshRuntimeLibrary.IWshShortcut)shell.CreateShortcut(link); Console.WriteLine(shortcut.TargetPath); A: Or simply test mydir and, if it does not exist, mydir.lnk with File.Exists(). Works for a file.
{ "language": "en", "url": "https://stackoverflow.com/questions/139010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Tweak the brightness/gamma of the whole scene in OpenGL Does anyone know how I can achieve the following effect in OpenGL: * *Change the brightness of the rendered scene *Or implementing a Gamma setting in OpenGL I have tried by changing the ambient parameter of the light and the type of light (directional and omnidirectional) but the result was not uniform. TIA. Thanks for your help, some additional information: * I can't use any windows specifics API. * The gamma setting should not affect the whole window as I must have different gamma for different views. A: On win32 you can use SetDeviceGammaRamp to adjust the overall brightness / gamma. However, this affects the entire display so it's not a good idea unless your app is fullscreen. The portable alternative is to either draw the entire scene brighter or dimmer (which is a hassle), or to slap a fullscreen alpha-blended quad over the whole scene to brighten or darken it as desired. Neither of these approaches can affect the gamma-curve, only the overall brightness; to adjust the gamma you need grab the entire scene into a texture and then render it back to the screen via a pixel-shader that runs each texel through a gamma function. Ok, having read the updated question, what you need is a quad with blending set up to darken or brighten everything underneath it. Eg. if( brightness > 1 ) { glBlendFunc( GL_DEST_COLOR, GL_ONE ); glColor3f( brightness-1, brightness-1, brightness-1 ); } else { glBlendFunc( GL_ZERO, GL_SRC_COLOR ); glColor3f( brightness, brightness, brightness ); } glEnable( GL_BLEND ); draw_quad(); A: http://www.gamedev.net/community/forums/topic.asp?topic_id=435400 might be an answer to your question otherwise you could probably implement a gamma correction as a pixel shader
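The full-screen quad trick above can only scale brightness linearly; true gamma is a power curve applied per channel, which is why the answer says it needs a pixel shader or a ramp. A minimal sketch of that curve in Python (illustrative only — in a real renderer this math lives in the shader or is baked into a lookup table/gamma ramp once):

```python
def apply_gamma(value, gamma):
    """Map a normalized colour channel (0.0 to 1.0) through a gamma curve.

    gamma > 1.0 brightens the mid-tones, gamma < 1.0 darkens them,
    and gamma == 1.0 leaves the channel unchanged; pure black and
    pure white stay fixed either way.
    """
    return value ** (1.0 / gamma)

# A pixel shader would evaluate this per texel; a gamma ramp (the kind
# SetDeviceGammaRamp installs) precomputes it for all 256 input levels.
ramp = [round(255 * apply_gamma(i / 255.0, 2.2)) for i in range(256)]
```

Note that this is exactly why adjusting the light's ambient term can't give a uniform result: gamma is non-linear in the output value, not an offset to the lighting.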
{ "language": "en", "url": "https://stackoverflow.com/questions/139012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How can I do a full-text search of PDF files from Perl? I have a bunch of PDF files and my Perl program needs to do a full-text search of them to return which ones contain a specific string. To date I have been using this: my @search_results = `grep -i -l \"$string\" *.pdf`; where $string is the text to look for. However this fails for most pdf's because the file format is obviously not ASCII. What can I do that's easiest? Clarification: There are about 300 pdf's whose name I do not know in advance. PDF::Core is probably overkill. I am trying to get pdftotext and grep to play nice with each other given I don't know the names of the pdf's, I can't find the right syntax yet. Final solution using Adam Bellaire's suggestion below: @search_results = `for i in \$( ls ); do pdftotext \$i - | grep --label="\$i" -i -l "$search_string"; done`; A: My library, CAM::PDF, has support for extracting text, but it's an inherently hard problem given the graphical orientation of PDF syntax. So, the output is sometimes gibberish. CAM::PDF bundles a getpdftext.pl program, or you can invoke the functionality like so: my $doc = CAM::PDF->new($filename) || die "$CAM::PDF::errstr\n"; for my $pagenum (1 .. $doc->numPages()) { my $text = $doc->getPageText($pagenum); print $text; } A: I second Adam Bellaire solution. I used pdftotext utility to create full-text index of my ebook library. It's somewhat slow but does its job. As for full-text, try PLucene or KinoSearch to store full-text index. A: You may want to look at PDF::Core. A: The PerlMonks thread here talks about this problem. It seems that for your situation, it might be simplest to get pdftotext (the command line tool), then you can do something like: my @search_results = `pdftotext myfile.pdf - | grep -i -l \"$string\"`; A: The easiest fulltext index/seach I've used is mysql. You just insert into the table with the appropriate index on it. 
You need to spend some time working out the relative weightings for fields (a match in the title might score higher than a match in the body), but this is all possible, albeit with some hairy sql. Plucene is deprecated (there hasn't been any active work on it in the last two years afaik) in favour of KinoSearch. KinoSearch grew, in part, out of understanding the architectural limitations of Plucene. If you have ~300 pdfs, then once you've extracted the text from the PDF (assuming the PDF has text and not just images of text ;) and depending on your query volumes you may find grep is sufficient. However, I'd strongly suggest the mysql/kinosearch route as they have covered a lot of ground (stemming, stopwords, term weighting, token parsing) that you don't benefit from getting bogged down with. KinoSearch is probably faster than the mysql route, but the mysql route gives you more widely used standard software/tools/developer-experience. And you get the ability to use the power of sql to augment your freetext search queries. So unless you're talking HUGE data-sets and insane query volumes, my money would be on mysql. A: You could try Lucene (the Perl port is called Plucene). The searches are incredibly fast and I know that PDFBox already knows how to index PDF files with Lucene. PDFBox is Java, but chances are there is something very similar somewhere in CPAN. Even if you can't find something that already adds PDF files to a Lucene index it shouldn't be more than a few lines of code to do it yourself. Lucene will give you quite a few more searching options than simply looking for a string in a file. There's also a very quick and dirty way. Text in a PDF file is actually stored as plain text. If you open a PDF in a text editor or use 'strings' you can see the text in there. The binary junk is usually embedded fonts, images, etc.
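The "quick and dirty" idea in the last answer — text is only visible to grep or strings when the PDF happens to store it uncompressed — can be sketched like this (Python used here purely to illustrate the strings-style scan; it is not a substitute for pdftotext on compressed PDFs):

```python
import re

def raw_text_contains(path, needle):
    """Crude strings(1)-style search: pull printable ASCII runs out of
    the raw bytes and look for the needle in them.

    This only finds text that is stored uncompressed in the file,
    which is exactly why it is 'quick and dirty' compared to the
    pdftotext + grep pipeline in the accepted solution.
    """
    with open(path, "rb") as f:
        data = f.read()
    runs = re.findall(rb"[\x20-\x7e]{4,}", data)  # printable runs >= 4 chars
    return needle.lower().encode("ascii") in b" ".join(runs).lower()
```

On a compressed or encoded PDF this returns nothing useful, which is the same reason the original `grep -l *.pdf` failed.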
{ "language": "en", "url": "https://stackoverflow.com/questions/139015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How to program a full-screen mode in Java? I'd like my application to have a full-screen mode. What is the easiest way to do this, do I need a third party library for this or is there something in the JDK that already offers this? A: JFrame setUndecorated(true) method A: Try the Full-Screen Exclusive Mode API. It was introduced in the JDK in release 1.4. Some of the features include: * *Full-Screen Exclusive Mode - allows you to suspend the windowing system so that drawing can be done directly to the screen. *Display Mode - composed of the size (width and height of the monitor, in pixels), bit depth (number of bits per pixel), and refresh rate (how frequently the monitor updates itself). *Passive vs. Active Rendering - painting while on the main event loop using the paint method is passive, whereas rendering in your own thread is active. *Double Buffering and Page Flipping - Smoother drawing means better perceived performance and a much better user experience. *BufferStrategy and BufferCapabilities - classes that allow you to draw to surfaces and components without having to know the number of buffers used or the technique used to display them, and help you determine the capabilities of your graphics device. There are several full-screen exclusive mode examples in the linked tutorial. A: I've done this using JOGL when having a full screen OpenGL user interface for a game. It's quite easy. I believe that the capability was added to Java with version 5 as well, but it's so long ago that I've forgotten how to do it (edit: see answer above for how). A: Use this code: JFrame frame = new JFrame(); // set properties frame.setSize(Toolkit.getDefaultToolkit().getScreenSize()); frame.setUndecorated(true); frame.setVisible(true); Make sure setUndecorated() comes before setVisible() or it won't work. A: It really depends on what you're using to display your interface, i.e. AWT/Spring or OpenGL etc. Java has a full screen exclusive mode API - see this tutorial from Sun.
{ "language": "en", "url": "https://stackoverflow.com/questions/139025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How do I configure mongrel's returned mime types How do I configure the content-types returned from mongrel? Specifically I want it to return some JavaScript files as application/x-javascript to try and reproduce a bug I am seeing on a remote server A: I don't know if this is exactly the answer that you are looking for but I found this by doing a quick google search. http://mongrel.rubyforge.org/wiki/HOWTO It states that you can provide a yaml file with mime-types.
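For what it's worth, that yaml file is (from memory — the HOWTO linked above is the authority, and the exact format may differ) just a hash of file extension to content type, passed to mongrel_rails with the -m switch. A hypothetical sketch:

```yaml
---
.js: application/x-javascript
.css: text/css
.html: text/html
```

Started with something like mongrel_rails start -m mime.yaml, this would make .js files come back as application/x-javascript, reproducing the behaviour seen on the remote server.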
{ "language": "en", "url": "https://stackoverflow.com/questions/139027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Manager classes In a recent project I have nearly completed we used an architecture that as its top layer of interaction from the web/services layers uses XXXManager classes. For example, there is a Windows service that runs on a scheduled basis that imports data from several diverse data sources into our system. Within this service several "Manager" classes are called i.e. CPImportScheduleManager, CPImportProcessManager etc. Now these Manager classes do a lot more than just pass methods up the chain for use in the web/service layers. For example my UserManager.Register() method doesn't just persist the user via lower level assemblies but it also sends a WAP push to the user and determines the mobile handset used etc. It has been suggested to me that this type of architecture is a common means of trying to get OOP to fit into a procedural model. I can see their point here, but what I am wondering is with this top level set of classes any web/service layer can simply call the same common method without having to rewrite code. Thus if I wanted to write a web service that at some point registered a user I could again just call the UserManager.Register() method without having to rewrite all the logic again. I never have been the best person to explain myself, but if my ramblings make sense please feel free to advise on your alternatives. Cheers, Chris. A: Managers are a commonly overused naming strategy for services that handle the workflow or complex tasks for a given set of entities. However, if it gets the job done, then it is not necessarily a bad thing. The important question I would have is what is going on underneath the managers? If they simply coordinate the workflow of a process, such as registering a user, then they are controllers (MVC) with a different name. 
However, if they contain a lot of business logic (by this I mean conditional logic depending on the state of an entity or set of entities) then I would take a careful look to see if you can make this logic explicit by making it its own class or moving it to the class with the proper responsibility. Update: From the sound of it, you have the right idea in general. You have a set of classes that handle the coordination of your business processes that do not care whether they are being used by a WebService, a Webform, or whatever. Then you are saying you want to add your WebService layer on top of this and utilize these classes. This is a Good Thing(tm). A: Sounds like your motivation for that design was good - reuse of code. In some ways it is similar in motivation to Martin Fowler's Service Layer. However, you may be piling too many responsibilities into these Manager classes. Perhaps separate infrastructural concerns (WAP push, user persistence) from domain concerns (user registration). This Separation of Concerns would increase reusability further and also improve maintainability.
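The controller-vs-business-logic distinction in the first answer can be sketched in a few lines (illustrative Python with invented names, not the poster's actual classes): the manager only sequences the workflow, while persistence and the WAP push live behind their own explicitly named collaborators.

```python
class UserRepository:
    """Persistence concern (the 'lower level assemblies')."""
    def __init__(self):
        self.saved = []

    def save(self, user):
        self.saved.append(user)


class WapPushService:
    """Infrastructure concern, kept separate from registration itself."""
    def __init__(self):
        self.pushed = []

    def push(self, user):
        self.pushed.append(user)


class UserManager:
    """Coordinates the workflow; owns no conditional business rules."""
    def __init__(self, repository, notifier):
        self.repository = repository
        self.notifier = notifier

    def register(self, user):
        self.repository.save(user)  # persist the user
        self.notifier.push(user)    # notify the handset
        return user
```

Because the manager only sequences steps, a web service and a scheduled Windows service can both call register() and reuse the same logic, while each concern stays independently testable.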
{ "language": "en", "url": "https://stackoverflow.com/questions/139034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Easiest way to decrypt PGP-encrypted files from VBA (MS Access) I need to write code that picks up PGP-encrypted files from an FTP location and processes them. The files will be encrypted with my public key (not that I have one yet). Obviously, I need a PGP library that I can use from within Microsoft Access. Can you recommend one that is easy to use? I'm looking for something that doesn't require a huge amount of PKI knowledge. Ideally, something that will easily generate the one-off private/public key pair, and then have a simple routine for decryption. A: A command line solution is good. If your database is an internal application, not to be redistributed, I can recommend Gnu Privacy Guard. This command-line based tool will allow you to do anything that you need to with regard to the OpenPGP standard. Within Access, you can use the Shell() command in a Macro like this: Public Sub DecryptFile(ByVal FileName As String) Dim strCommand As String strCommand = "C:\Program Files\GNU\GnuPG\gpg.exe " _ & "--batch --passphrase ""My PassPhrase that I used"" " & FileName Shell strCommand, vbNormalFocus End Sub This will run the command-line tool to decrypt the file. This syntax uses a plaintext version of your secret passphrase. This is not the most secure solution, but is acceptable if your database is internal and only used by trusted personnel. GnuPG supports other techniques to secure the passphrase. A: PGP has a commandline option for decrypting files. We have a batchfile that does the decryption, passing in the filename to be decrypted: Batch file: "C:\Program Files\Network Associates\PGPNT\pgp" +FORCE %1 -z *password* We then call that from a VBS: Command = "decrypt.bat """ & FolderName & FileName & """" 'Executes the command script. Set objShell = WScript.CreateObject ("WSCript.shell") Command = "cmd /c " & Command objShell.run Command, 1, True Hope that points you in a useful direction.
The easiest, and quickest way to do this is to download and/or purchase PGP. They have an SDK that you can use to access it from anything you want. I'd have to go back and see if I had to write a COM wrapper, or if they already had one. (I wrote this SMTP server about 10 years ago). Anyways, don't get discouraged. About 5 years ago, I wrote an entire PGP based application (based on the openPGP RFC) in C++, but the catch was, I was NOT allowed to use any existing libraries. So I had to write all that stuff myself. And, I used GPG, OpenPGP, and PGP for testing, etc.... So, I could even provide help for you on how to decode this stuff in VBA. It's not impossible, (it may be slow as hell, but not impossible), and I'm NOT one to "shell out and run cmdline stuff" to do work like this for you, as it will open you up to some SERIOUS security risks (as hurcane's suggestion, for example, will cause your passphrase to be displayed to tools like ProcExp). The first step is learning how PKE works, etc. Then, the steps you need to do to get what you want. This is something I'd be interested in helping with since I'm always one to write code that everyone says can't be done. :) Plus, I own the source code of the app I wrote, because of mergers, closures, etc... It was originally written for the Oil and Gas industry, so I know it's secure. That's not to say I don't have ANY security flaws in the code, but I think it's stable. I know I have an issue with my Chinese Remainder Theorem code.. For some reason when I use that short-cut, I can't decode the data correctly, but if I use the RSA "long way" it works... Now, this application was never fully finished, so I don't support things like DSA Key-pairs, but I do support RSA key pairs, with SHA1, MD5, using IDEA, AES, (I THINK my 3DES code does not work correctly, but I may have fixed that since). I didn't implement compression yet, etc... But, I'd love a reason to go back and work on this code again. 
I /COULD/ make you a COM object that you could call from VBA passing the original Base64 data in, along with the Base64 key data, (or a pointer to a key file on disk), and a passphrase to decode files.... Think about it... Let me know.. Over the years, I have collected vbScript code for doing things like MD5, SHA1, IDEA, and other crypto routines, but I didn't write them. Hell, you could probably just interface with Microsoft's CryptoAPI, and break each action down to its core parts and still get it to work. (You will not find a Microsoft CryptoAPI call like "DecryptPGP()"... It'd all have to be done in chunks). Lemme know if I can help. A: You can use OpenPGPBlackbox (ActiveX edition) for this A: I would look for a command line encrypter / decrypter and just call the exe from within your Access application, with the right parameters. There is no PGP encrypter / decrypter in VBA that I know of. A: I am not familiar with VBA for Access, but I think that the best solution (perhaps easiest) would be to run an external command-line PGP utility. A: There is a DLL you can call directly from your VBA application without having to spawn an external program: CryptoCX. PGP also has a DLL you can call.
{ "language": "en", "url": "https://stackoverflow.com/questions/139046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Securing an assembly so that it can't be used by a third party I have written an assembly I don't want other people to be able to use. My assembly is signed with a strong name key file, but how do I secure the code so that only my other assemblies signed with the same key can call the members in this assembly? A: You need to add StrongNameIdentityPermissionAttributeattributes to demand security eg [assembly:StrongNameIdentityPermissionAttribute(SecurityAction.RequestMinimum, PublicKey="00240000048000009400000006020000002400005253413100040000010001005" + "38a4a19382e9429cf516dcf1399facdccca092a06442efaf9ecaca33457be26ee0073c6bde5" + "1fe0873666a62459581669b510ae1e84bef6bcb1aff7957237279d8b7e0e25b71ad39df3684" + "5b7db60382c8eb73f289823578d33c09e48d0d2f90ed4541e1438008142ef714bfe604c41a4" + "957a4f6e6ab36b9715ec57625904c6")] see this msdn page for more info or do it in code see this example A: I would suggest that you use the LicenseProvider attribute for securing use to your assembly. More information on the exact usage is available here on MSDN A: There are a few options, none very effective as they merely will make things a tiny bit difficult, but would not prevent a committed user to work around any restriction: On every one of your entry points, you can call Assembly.GetCallingAssembly() and compare the result with a list of assemblies that are allowed to call into your library, and throw an exception otherwise. You could use a tool like ilmerge to merge your assemblies into your main application, and flag all of the internals as private. Combine with an obfuscator to make the results slightly better protected. But securing an assembly is as solid as securing a computer where the attacker has physical access to it: there is very little that you can do to protect the contents once physical access is granted. 
A: Not sure if this is the best option, but you could make all the "public" classes in your assembly internal, and then use the [InternalsVisibleTo] assembly level attribute to explicitly specify your other signed assemblies. [assembly: InternalsVisibleTo('MyAssembly,Version=1.0.0.1, Culture=neutral,PublicKeyToken=..."); Here are the MSDN docs on the attribute. A: At first I thought you could make your members/classes in your signed assembly private and apply the assembly-level InternalsVisibleTo attribute to your other assemblies. I'm guessing that reflection will let you crack through that though. Maybe the StrongNameIdentityPermission is what you are looking for. A: Sounds to me like an impossible problem. You can't trust your environment. That's a fundamental computing principle, and the reason for public/private key encryption.
{ "language": "en", "url": "https://stackoverflow.com/questions/139053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Subversive connectors not working with newest Ganymede update I'm using Subversive plugin in Ganymede, but after today's update it stopped working - it just doesn't see any valid svn connectors (I've already been using 1.2.0 dev version of SVNKit, instead of a stable one, because Subversive / Ganymede could not handle it; now it can't handle even the dev one). Any ideas how to make it work? Are subversive guys releasing a new version of their plugin / connectors soon? A: I had a similar problem right after the update. It turned out that I had been getting the connectors (the base connector and both the SVNKit and JavaHL connectors) from the Polarion site that had "ganymede" in the URL. Instead, I should have been using the general URL. Checking my current configuration, you should be using this update URL: http://www.polarion.org/projects/subversive/download/eclipse/2.0/update-site/ The one I had been using, that should be deprecated if you are using it, is: http://www.polarion.org/projects/subversive/download/eclipse/2.0/ganymede-site/ Note the difference. Once I changed that, I was able to download the 2.0.3 versions of the connectors, and Subversion again worked for me. A: I'm using Subclipse in Ganymede successfully, maybe could you switch? I do recall having problems with SvnKit also, I'm using the JavaHL client.
{ "language": "en", "url": "https://stackoverflow.com/questions/139055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to pretty print XML from Java? I have a Java String that contains XML, with no line feeds or indentations. I would like to turn it into a String with nicely formatted XML. How do I do this? String unformattedXml = "<tag><nested>hello</nested></tag>"; String formattedXml = new [UnknownClass]().format(unformattedXml); Note: My input is a String. My output is a String. (Basic) mock result: <?xml version="1.0" encoding="UTF-8"?> <root> <tag> <nested>hello</nested> </tag> </root> A: Using scala: import xml._ val xml = XML.loadString("<tag><nested>hello</nested></tag>") val formatted = new PrettyPrinter(150, 2).format(xml) println(formatted) You can do this in Java too, if you depend on the scala-library.jar. It looks like this: import scala.xml.*; public class FormatXML { public static void main(String[] args) { String unformattedXml = "<tag><nested>hello</nested></tag>"; PrettyPrinter pp = new PrettyPrinter(150, 3); String formatted = pp.format(XML.loadString(unformattedXml), TopScope$.MODULE$); System.out.println(formatted); } } The PrettyPrinter object is constructed with two ints, the first being max line length and the second being the indentation step. A: Just for future reference, here's a solution that worked for me (thanks to a comment that @George Hawkins posted in one of the answers): DOMImplementationRegistry registry = DOMImplementationRegistry.newInstance(); DOMImplementationLS impl = (DOMImplementationLS) registry.getDOMImplementation("LS"); LSSerializer writer = impl.createLSSerializer(); writer.getDomConfig().setParameter("format-pretty-print", Boolean.TRUE); LSOutput output = impl.createLSOutput(); ByteArrayOutputStream out = new ByteArrayOutputStream(); output.setByteStream(out); writer.write(document, output); String xmlStr = new String(out.toByteArray()); A: slightly improved version from milosmns... 
public static String getPrettyXml(String xml) { if (xml == null || xml.trim().length() == 0) return ""; int stack = 0; StringBuilder pretty = new StringBuilder(); String[] rows = xml.trim().replaceAll(">", ">\n").replaceAll("<", "\n<").split("\n"); for (int i = 0; i < rows.length; i++) { if (rows[i] == null || rows[i].trim().length() == 0) continue; String row = rows[i].trim(); if (row.startsWith("<?")) { pretty.append(row + "\n"); } else if (row.startsWith("</")) { String indent = repeatString(--stack); pretty.append(indent + row + "\n"); } else if (row.startsWith("<") && row.endsWith("/>") == false) { String indent = repeatString(stack++); pretty.append(indent + row + "\n"); if (row.endsWith("]]>")) stack--; } else { String indent = repeatString(stack); pretty.append(indent + row + "\n"); } } return pretty.toString().trim(); } private static String repeatString(int stack) { StringBuilder indent = new StringBuilder(); for (int i = 0; i < stack; i++) { indent.append(" "); } return indent.toString(); } A: None of the above solutions worked for me; then I found this http://myshittycode.com/2014/02/10/java-properly-indenting-xml-string/ The trick is to first remove the whitespace-only text nodes with XPath: String xml = "<root>" + "\n " + "\n<name>Coco Puff</name>" + "\n <total>10</total> </root>"; try { Document document = DocumentBuilderFactory.newInstance() .newDocumentBuilder() .parse(new InputSource(new ByteArrayInputStream(xml.getBytes("utf-8")))); XPath xPath = XPathFactory.newInstance().newXPath(); NodeList nodeList = (NodeList) xPath.evaluate("//text()[normalize-space()='']", document, XPathConstants.NODESET); for (int i = 0; i < nodeList.getLength(); ++i) { Node node = nodeList.item(i); node.getParentNode().removeChild(node); } Transformer transformer = TransformerFactory.newInstance().newTransformer(); transformer.setOutputProperty(OutputKeys.ENCODING, "UTF-8"); transformer.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes"); transformer.setOutputProperty(OutputKeys.INDENT, "yes");
transformer.setOutputProperty("{http://xml.apache.org/xslt}indent-amount", "4"); StringWriter stringWriter = new StringWriter(); StreamResult streamResult = new StreamResult(stringWriter); transformer.transform(new DOMSource(document), streamResult); System.out.println(stringWriter.toString()); } catch (Exception e) { e.printStackTrace(); } A: The code below works perfectly: import javax.xml.transform.OutputKeys; import javax.xml.transform.Source; import javax.xml.transform.Transformer; import javax.xml.transform.TransformerFactory; import javax.xml.transform.stream.StreamResult; import javax.xml.transform.stream.StreamSource; String formattedXml1 = prettyFormat("<root><child>aaa</child><child/></root>"); public static String prettyFormat(String input) { return prettyFormat(input, "2"); } public static String prettyFormat(String input, String indent) { Source xmlInput = new StreamSource(new StringReader(input)); StringWriter stringWriter = new StringWriter(); try { TransformerFactory transformerFactory = TransformerFactory.newInstance(); Transformer transformer = transformerFactory.newTransformer(); transformer.setOutputProperty(OutputKeys.INDENT, "yes"); transformer.setOutputProperty("{http://xml.apache.org/xslt}indent-amount", indent); transformer.transform(xmlInput, new StreamResult(stringWriter)); String pretty = stringWriter.toString(); pretty = pretty.replace("\r\n", "\n"); return pretty; } catch (Exception e) { throw new RuntimeException(e); } } A: I mixed all of them into one small program. It reads from an XML file and prints it out. Just put your own file path in place of xyz.
public static void main(String[] args) throws Exception { DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); dbf.setValidating(false); DocumentBuilder db = dbf.newDocumentBuilder(); Document doc = db.parse(new FileInputStream(new File("C:/Users/xyz.xml"))); prettyPrint(doc); } private static String prettyPrint(Document document) throws TransformerException { TransformerFactory transformerFactory = TransformerFactory .newInstance(); Transformer transformer = transformerFactory.newTransformer(); transformer.setOutputProperty(OutputKeys.INDENT, "yes"); transformer.setOutputProperty("{http://xml.apache.org/xslt}indent-amount", "2"); transformer.setOutputProperty(OutputKeys.ENCODING, "UTF-8"); transformer.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "no"); DOMSource source = new DOMSource(document); StringWriter strWriter = new StringWriter(); StreamResult result = new StreamResult(strWriter);transformer.transform(source, result); System.out.println(strWriter.getBuffer().toString()); return strWriter.getBuffer().toString(); } A: Just to note that top rated answer requires the use of xerces. If you don't want to add this external dependency then you can simply use the standard jdk libraries (which actually are built using xerces internally). N.B. 
There was a bug with jdk version 1.5 (see http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6296446) but it is resolved now. (Note: if an error occurs, this will return the original text.) package com.test; import java.io.ByteArrayInputStream; import java.io.ByteArrayOutputStream; import javax.xml.transform.OutputKeys; import javax.xml.transform.Source; import javax.xml.transform.Transformer; import javax.xml.transform.sax.SAXSource; import javax.xml.transform.sax.SAXTransformerFactory; import javax.xml.transform.stream.StreamResult; import org.xml.sax.InputSource; public class XmlTest { public static void main(String[] args) { XmlTest t = new XmlTest(); System.out.println(t.formatXml("<a><b><c/><d>text D</d><e value='0'/></b></a>")); } public String formatXml(String xml){ try{ Transformer serializer = SAXTransformerFactory.newInstance().newTransformer(); serializer.setOutputProperty(OutputKeys.INDENT, "yes"); //serializer.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes"); serializer.setOutputProperty("{http://xml.apache.org/xslt}indent-amount", "2"); //serializer.setOutputProperty("{http://xml.customer.org/xslt}indent-amount", "2"); Source xmlSource = new SAXSource(new InputSource(new ByteArrayInputStream(xml.getBytes()))); StreamResult res = new StreamResult(new ByteArrayOutputStream()); serializer.transform(xmlSource, res); return new String(((ByteArrayOutputStream)res.getOutputStream()).toByteArray()); }catch(Exception e){ //TODO log error return xml; } } } A: If you're sure that you have valid XML, this one is simple and avoids XML DOM trees.
It may have some bugs; do comment if you see anything. public String prettyPrint(String xml) { if (xml == null || xml.trim().length() == 0) return ""; int stack = 0; StringBuilder pretty = new StringBuilder(); String[] rows = xml.trim().replaceAll(">", ">\n").replaceAll("<", "\n<").split("\n"); for (int i = 0; i < rows.length; i++) { if (rows[i] == null || rows[i].trim().length() == 0) continue; String row = rows[i].trim(); if (row.startsWith("<?")) { // xml version tag pretty.append(row + "\n"); } else if (row.startsWith("</")) { // closing tag String indent = repeatString(" ", --stack); pretty.append(indent + row + "\n"); } else if (row.startsWith("<")) { // starting tag String indent = repeatString(" ", stack++); pretty.append(indent + row + "\n"); } else { // tag data String indent = repeatString(" ", stack); pretty.append(indent + row + "\n"); } } return pretty.toString().trim(); } A: Just another solution which works for us import java.io.StringWriter; import org.dom4j.DocumentHelper; import org.dom4j.io.OutputFormat; import org.dom4j.io.XMLWriter; /** * Pretty Print XML String * * @param xml * @return */ public static String prettyPrintXml(String xml) { final StringWriter sw; try { final OutputFormat format = OutputFormat.createPrettyPrint(); final org.dom4j.Document document = DocumentHelper.parseText(xml); sw = new StringWriter(); final XMLWriter writer = new XMLWriter(sw, format); writer.write(document); } catch (Exception e) { throw new RuntimeException("Error pretty printing xml:\n" + xml, e); } return sw.toString(); } A: Using jdom2 : http://www.jdom.org/ import java.io.StringReader; import org.jdom2.input.SAXBuilder; import org.jdom2.output.Format; import org.jdom2.output.XMLOutputter; String prettyXml = new XMLOutputter(Format.getPrettyFormat()).
outputString(new SAXBuilder().build(new StringReader(uglyXml))); A: I've pretty printed in the past using the org.dom4j.io.OutputFormat.createPrettyPrint() method public String prettyPrint(final String xml){ if (StringUtils.isBlank(xml)) { throw new RuntimeException("xml was null or blank in prettyPrint()"); } final StringWriter sw; try { final OutputFormat format = OutputFormat.createPrettyPrint(); final org.dom4j.Document document = DocumentHelper.parseText(xml); sw = new StringWriter(); final XMLWriter writer = new XMLWriter(sw, format); writer.write(document); } catch (Exception e) { throw new RuntimeException("Error pretty printing xml:\n" + xml, e); } return sw.toString(); } A: Transformer transformer = TransformerFactory.newInstance().newTransformer(); transformer.setOutputProperty(OutputKeys.INDENT, "yes"); transformer.setOutputProperty("{http://xml.apache.org/xslt}indent-amount", "2"); // initialize StreamResult with File object to save to file StreamResult result = new StreamResult(new StringWriter()); DOMSource source = new DOMSource(doc); transformer.transform(source, result); String xmlString = result.getWriter().toString(); System.out.println(xmlString); Note: Results may vary depending on the Java version. Search for workarounds specific to your platform. 
A: As an alternative to the answers from max, codeskraps, David Easley and milosmns, have a look at my lightweight, high-performance pretty-printer library: xml-formatter // construct lightweight, threadsafe, instance PrettyPrinter prettyPrinter = PrettyPrinterBuilder.newPrettyPrinter().build(); StringBuilder buffer = new StringBuilder(); String xml = ..; // also works with char[] or Reader if(prettyPrinter.process(xml, buffer)) { // valid XML, print buffer } else { // invalid XML, print xml } Sometimes, like when running mocked SOAP services directly from file, it is good to have a pretty-printer which also handles already pretty-printed XML: PrettyPrinter prettyPrinter = PrettyPrinterBuilder.newPrettyPrinter().ignoreWhitespace().build(); As some have commented, pretty-printing is just a way of presenting XML in a more human-readable form - whitespace strictly does not belong in your XML data. The library is intended for pretty-printing for logging purposes, and also includes functions for filtering (subtree removal / anonymization) and pretty-printing of XML in CDATA and Text nodes. 
A: I had the same problem and I'm having great success with JTidy (http://jtidy.sourceforge.net/index.html) Example: Tidy t = new Tidy(); t.setIndentContent(true); Document d = t.parseDOM(new ByteArrayInputStream("HTML goes here".getBytes()), null); OutputStream out = new ByteArrayOutputStream(); t.pprint(d, out); String html = out.toString(); A: I always use the below function: public static String prettyPrintXml(String xmlStringToBeFormatted) { String formattedXmlString = null; try { DocumentBuilderFactory documentBuilderFactory = DocumentBuilderFactory.newInstance(); documentBuilderFactory.setValidating(true); DocumentBuilder documentBuilder = documentBuilderFactory.newDocumentBuilder(); InputSource inputSource = new InputSource(new StringReader(xmlStringToBeFormatted)); Document document = documentBuilder.parse(inputSource); Transformer transformer = TransformerFactory.newInstance().newTransformer(); transformer.setOutputProperty(OutputKeys.INDENT, "yes"); transformer.setOutputProperty("{http://xml.apache.org/xslt}indent-amount", "2"); StreamResult streamResult = new StreamResult(new StringWriter()); DOMSource dOMSource = new DOMSource(document); transformer.transform(dOMSource, streamResult); formattedXmlString = streamResult.getWriter().toString().trim(); } catch (Exception ex) { StringWriter sw = new StringWriter(); ex.printStackTrace(new PrintWriter(sw)); System.err.println(sw.toString()); } return formattedXmlString; } A: Here's a way of doing it using dom4j: Imports: import org.dom4j.Document; import org.dom4j.DocumentHelper; import org.dom4j.io.OutputFormat; import org.dom4j.io.XMLWriter; Code: String xml = "<your xml='here'/>"; Document doc = DocumentHelper.parseText(xml); StringWriter sw = new StringWriter(); OutputFormat format = OutputFormat.createPrettyPrint(); XMLWriter xw = new XMLWriter(sw, format); xw.write(doc); String result = sw.toString(); A: Since you are starting with a String, you can convert to a DOM object (e.g.
Node) before you use the Transformer. However, if you know your XML string is valid, and you don't want to incur the memory overhead of parsing a string into a DOM, then running a transform over the DOM to get a string back - you could just do some old fashioned character by character parsing. Insert a newline and spaces after every </...>, keep an indent counter (to determine the number of spaces) that you increment for every <...> and decrement for every </...> you see. Disclaimer - I did a cut/paste/text edit of the functions below, so they may not compile as is. public static final Element createDOM(String strXML) throws ParserConfigurationException, SAXException, IOException { DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); dbf.setValidating(true); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource sourceXML = new InputSource(new StringReader(strXML)); Document xmlDoc = db.parse(sourceXML); Element e = xmlDoc.getDocumentElement(); e.normalize(); return e; } public static final void prettyPrint(Node xml, OutputStream out) throws TransformerConfigurationException, TransformerFactoryConfigurationError, TransformerException { Transformer tf = TransformerFactory.newInstance().newTransformer(); tf.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes"); tf.setOutputProperty(OutputKeys.ENCODING, "UTF-8"); tf.setOutputProperty(OutputKeys.INDENT, "yes"); tf.transform(new DOMSource(xml), new StreamResult(out)); } A: Kevin Hakanson said: "However, if you know your XML string is valid, and you don't want to incur the memory overhead of parsing a string into a DOM, then running a transform over the DOM to get a string back - you could just do some old fashioned character by character parsing. Insert a newline and spaces after every </...>, keep an indent counter (to determine the number of spaces) that you increment for every <...> and decrement for every </...> you see." Agreed.
Such an approach is much faster and has far fewer dependencies. Example solution: /** * XML utils, including formatting. */ public class XmlUtils { private static XmlFormatter formatter = new XmlFormatter(2, 80); public static String formatXml(String s) { return formatter.format(s, 0); } public static String formatXml(String s, int initialIndent) { return formatter.format(s, initialIndent); } private static class XmlFormatter { private int indentNumChars; private int lineLength; private boolean singleLine; public XmlFormatter(int indentNumChars, int lineLength) { this.indentNumChars = indentNumChars; this.lineLength = lineLength; } public synchronized String format(String s, int initialIndent) { int indent = initialIndent; StringBuilder sb = new StringBuilder(); for (int i = 0; i < s.length(); i++) { char currentChar = s.charAt(i); if (currentChar == '<') { char nextChar = s.charAt(i + 1); if (nextChar == '/') indent -= indentNumChars; if (!singleLine) // Don't indent before closing element if we're creating opening and closing elements on a single line. sb.append(buildWhitespace(indent)); if (nextChar != '?' && nextChar != '!' && nextChar != '/') indent += indentNumChars; singleLine = false; // Reset flag. } sb.append(currentChar); if (currentChar == '>') { if (s.charAt(i - 1) == '/') { indent -= indentNumChars; sb.append("\n"); } else { int nextStartElementPos = s.indexOf('<', i); if (nextStartElementPos > i + 1) { String textBetweenElements = s.substring(i + 1, nextStartElementPos); // If the space between elements is solely newlines, let them through to preserve additional newlines in source document. if (textBetweenElements.replaceAll("\n", "").length() == 0) { sb.append(textBetweenElements + "\n"); } // Put tags and text on a single line if the text is short. else if (textBetweenElements.length() <= lineLength * 0.5) { sb.append(textBetweenElements); singleLine = true; } // For larger amounts of text, wrap lines to a maximum line length. 
else { sb.append("\n" + lineWrap(textBetweenElements, lineLength, indent, null) + "\n"); } i = nextStartElementPos - 1; } else { sb.append("\n"); } } } } return sb.toString(); } } private static String buildWhitespace(int numChars) { StringBuilder sb = new StringBuilder(); for (int i = 0; i < numChars; i++) sb.append(" "); return sb.toString(); } /** * Wraps the supplied text to the specified line length. * @lineLength the maximum length of each line in the returned string (not including indent if specified). * @indent optional number of whitespace characters to prepend to each line before the text. * @linePrefix optional string to append to the indent (before the text). * @returns the supplied text wrapped so that no line exceeds the specified line length + indent, optionally with * indent and prefix applied to each line. */ private static String lineWrap(String s, int lineLength, Integer indent, String linePrefix) { if (s == null) return null; StringBuilder sb = new StringBuilder(); int lineStartPos = 0; int lineEndPos; boolean firstLine = true; while(lineStartPos < s.length()) { if (!firstLine) sb.append("\n"); else firstLine = false; if (lineStartPos + lineLength > s.length()) lineEndPos = s.length() - 1; else { lineEndPos = lineStartPos + lineLength - 1; while (lineEndPos > lineStartPos && (s.charAt(lineEndPos) != ' ' && s.charAt(lineEndPos) != '\t')) lineEndPos--; } sb.append(buildWhitespace(indent)); if (linePrefix != null) sb.append(linePrefix); sb.append(s.substring(lineStartPos, lineEndPos + 1)); lineStartPos = lineEndPos + 1; } return sb.toString(); } // other utils removed for brevity } A: a simpler solution based on this answer: public static String prettyFormat(String input, int indent) { try { Source xmlInput = new StreamSource(new StringReader(input)); StringWriter stringWriter = new StringWriter(); StreamResult xmlOutput = new StreamResult(stringWriter); TransformerFactory transformerFactory = TransformerFactory.newInstance(); 
transformerFactory.setAttribute("indent-number", indent); transformerFactory.setAttribute(XMLConstants.ACCESS_EXTERNAL_DTD, ""); transformerFactory.setAttribute(XMLConstants.ACCESS_EXTERNAL_STYLESHEET, ""); Transformer transformer = transformerFactory.newTransformer(); transformer.setOutputProperty(OutputKeys.INDENT, "yes"); transformer.transform(xmlInput, xmlOutput); return xmlOutput.getWriter().toString(); } catch (Exception e) { throw new RuntimeException(e); // simple exception handling, please review it } } public static String prettyFormat(String input) { return prettyFormat(input, 2); } testcase: prettyFormat("<root><child>aaa</child><child/></root>"); returns: <?xml version="1.0" encoding="UTF-8"?> <root> <child>aaa</child> <child/> </root> A: Here's an answer to my own question. I combined the answers from the various results to write a class that pretty prints XML. No guarantees on how it responds with invalid XML or large documents. package ecb.sdw.pretty; import org.apache.xml.serialize.OutputFormat; import org.apache.xml.serialize.XMLSerializer; import org.w3c.dom.Document; import org.xml.sax.InputSource; import org.xml.sax.SAXException; import javax.xml.parsers.DocumentBuilder; import javax.xml.parsers.DocumentBuilderFactory; import javax.xml.parsers.ParserConfigurationException; import java.io.IOException; import java.io.StringReader; import java.io.StringWriter; import java.io.Writer; /** * Pretty-prints xml, supplied as a string. * <p/> * eg.
* <code> * String formattedXml = new XmlFormatter().format("<tag><nested>hello</nested></tag>"); * </code> */ public class XmlFormatter { public XmlFormatter() { } public String format(String unformattedXml) { try { final Document document = parseXmlFile(unformattedXml); OutputFormat format = new OutputFormat(document); format.setLineWidth(65); format.setIndenting(true); format.setIndent(2); Writer out = new StringWriter(); XMLSerializer serializer = new XMLSerializer(out, format); serializer.serialize(document); return out.toString(); } catch (IOException e) { throw new RuntimeException(e); } } private Document parseXmlFile(String in) { try { DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(new StringReader(in)); return db.parse(is); } catch (ParserConfigurationException e) { throw new RuntimeException(e); } catch (SAXException e) { throw new RuntimeException(e); } catch (IOException e) { throw new RuntimeException(e); } } public static void main(String[] args) { String unformattedXml = "<?xml version=\"1.0\" encoding=\"UTF-8\"?><QueryMessage\n" + " xmlns=\"http://www.SDMX.org/resources/SDMXML/schemas/v2_0/message\"\n" + " xmlns:query=\"http://www.SDMX.org/resources/SDMXML/schemas/v2_0/query\">\n" + " <Query>\n" + " <query:CategorySchemeWhere>\n" + " \t\t\t\t\t <query:AgencyID>ECB\n\n\n\n</query:AgencyID>\n" + " </query:CategorySchemeWhere>\n" + " </Query>\n\n\n\n\n" + "</QueryMessage>"; System.out.println(new XmlFormatter().format(unformattedXml)); } } A: If using a 3rd party XML library is ok, you can get away with something significantly simpler than what the currently highest-voted answers suggest. It was stated that both input and output should be Strings, so here's a utility method that does just that, implemented with the XOM library: import nu.xom.*; import java.io.*; [...] 
public static String format(String xml) throws ParsingException, IOException { ByteArrayOutputStream out = new ByteArrayOutputStream(); Serializer serializer = new Serializer(out); serializer.setIndent(4); // or whatever you like serializer.write(new Builder().build(xml, "")); return out.toString("UTF-8"); } I tested that it works, and the results do not depend on your JRE version or anything like that. To see how to customise the output format to your liking, take a look at the Serializer API. This actually came out longer than I thought - some extra lines were needed because Serializer wants an OutputStream to write to. But note that there's very little code for actual XML twiddling here. (This answer is part of my evaluation of XOM, which was suggested as one option in my question about the best Java XML library to replace dom4j. For the record, with dom4j you could achieve this with similar ease using XMLWriter and OutputFormat. Edit: ...as demonstrated in mlo55's answer.) A: Hmmm... faced something like this and it is a known bug ... just add this OutputProperty .. transformer.setOutputProperty(OutputPropertiesFactory.S_KEY_INDENT_AMOUNT, "8"); Hope this helps ... A: Now it's 2012 and Java can do more than it used to with XML, I'd like to add an alternative to my accepted answer. This has no dependencies outside of Java 6. import org.w3c.dom.Node; import org.w3c.dom.bootstrap.DOMImplementationRegistry; import org.w3c.dom.ls.DOMImplementationLS; import org.w3c.dom.ls.LSSerializer; import org.xml.sax.InputSource; import javax.xml.parsers.DocumentBuilderFactory; import java.io.StringReader; /** * Pretty-prints xml, supplied as a string. * <p/> * eg. 
* <code> * String formattedXml = new XmlFormatter().format("<tag><nested>hello</nested></tag>"); * </code> */ public class XmlFormatter { public String format(String xml) { try { final InputSource src = new InputSource(new StringReader(xml)); final Node document = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(src).getDocumentElement(); final Boolean keepDeclaration = Boolean.valueOf(xml.startsWith("<?xml")); //May need this: System.setProperty(DOMImplementationRegistry.PROPERTY,"com.sun.org.apache.xerces.internal.dom.DOMImplementationSourceImpl"); final DOMImplementationRegistry registry = DOMImplementationRegistry.newInstance(); final DOMImplementationLS impl = (DOMImplementationLS) registry.getDOMImplementation("LS"); final LSSerializer writer = impl.createLSSerializer(); writer.getDomConfig().setParameter("format-pretty-print", Boolean.TRUE); // Set this to true if the output needs to be beautified. writer.getDomConfig().setParameter("xml-declaration", keepDeclaration); // Set this to true if the declaration is needed to be outputted. return writer.writeToString(document); } catch (Exception e) { throw new RuntimeException(e); } } public static void main(String[] args) { String unformattedXml = "<?xml version=\"1.0\" encoding=\"UTF-8\"?><QueryMessage\n" + " xmlns=\"http://www.SDMX.org/resources/SDMXML/schemas/v2_0/message\"\n" + " xmlns:query=\"http://www.SDMX.org/resources/SDMXML/schemas/v2_0/query\">\n" + " <Query>\n" + " <query:CategorySchemeWhere>\n" + " \t\t\t\t\t <query:AgencyID>ECB\n\n\n\n</query:AgencyID>\n" + " </query:CategorySchemeWhere>\n" + " </Query>\n\n\n\n\n" + "</QueryMessage>"; System.out.println(new XmlFormatter().format(unformattedXml)); } } A: Regarding comment that "you must first build a DOM tree": No, you need not and should not do that. Instead, create a StreamSource (new StreamSource(new StringReader(str)), and feed that to the identity transformer mentioned. That'll use SAX parser, and result will be much faster. 
Building an intermediate tree is pure overhead for this case. Otherwise the top-ranked answer is good. A: There is a very nice command line XML utility called xmlstarlet(http://xmlstar.sourceforge.net/) that can do a lot of things which a lot of people use. You could execute this program programmatically using Runtime.exec and then read in the formatted output file. It has more options and better error reporting than a few lines of Java code can provide. download xmlstarlet : http://sourceforge.net/project/showfiles.php?group_id=66612&package_id=64589 A: I have found that in Java 1.6.0_32 the normal method to pretty print an XML string (using a Transformer with a null or identity xslt) does not behave as I would like if tags are merely separated by whitespace, as opposed to having no separating text. I tried using <xsl:strip-space elements="*"/> in my template to no avail. The simplest solution I found was to strip the space the way I wanted using a SAXSource and XML filter. Since my solution was for logging I also extended this to work with incomplete XML fragments. Note the normal method seems to work fine if you use a DOMSource but I did not want to use this because of the incompleteness and memory overhead. public static class WhitespaceIgnoreFilter extends XMLFilterImpl { @Override public void ignorableWhitespace(char[] arg0, int arg1, int arg2) throws SAXException { //Ignore it then... 
} @Override public void characters( char[] ch, int start, int length) throws SAXException { if (!new String(ch, start, length).trim().equals("")) super.characters(ch, start, length); } } public static String prettyXML(String logMsg, boolean allowBadlyFormedFragments) throws SAXException, IOException, TransformerException { TransformerFactory transFactory = TransformerFactory.newInstance(); transFactory.setAttribute("indent-number", new Integer(2)); Transformer transformer = transFactory.newTransformer(); transformer.setOutputProperty(OutputKeys.INDENT, "yes"); transformer.setOutputProperty("{http://xml.apache.org/xslt}indent-amount", "4"); StringWriter out = new StringWriter(); XMLReader masterParser = SAXHelper.getSAXParser(true); XMLFilter parser = new WhitespaceIgnoreFilter(); parser.setParent(masterParser); if(allowBadlyFormedFragments) { transformer.setErrorListener(new ErrorListener() { @Override public void warning(TransformerException exception) throws TransformerException { } @Override public void fatalError(TransformerException exception) throws TransformerException { } @Override public void error(TransformerException exception) throws TransformerException { } }); } try { transformer.transform(new SAXSource(parser, new InputSource(new StringReader(logMsg))), new StreamResult(out)); } catch (TransformerException e) { if(e.getCause() != null && e.getCause() instanceof SAXParseException) { if(!allowBadlyFormedFragments || !"XML document structures must start and end within the same entity.".equals(e.getCause().getMessage())) { throw e; } } else { throw e; } } out.flush(); return out.toString(); } A: For those searching for a quick and dirty solution - which doesn't need the XML to be 100% valid. e.g. 
in case of REST / SOAP logging (you never know what the others send ;-)) I found and advanced a code snipped I found online which I think is still missing here as a valid possible approach: public static String prettyPrintXMLAsString(String xmlString) { /* Remove new lines */ final String LINE_BREAK = "\n"; xmlString = xmlString.replaceAll(LINE_BREAK, ""); StringBuffer prettyPrintXml = new StringBuffer(); /* Group the xml tags */ Pattern pattern = Pattern.compile("(<[^/][^>]+>)?([^<]*)(</[^>]+>)?(<[^/][^>]+/>)?"); Matcher matcher = pattern.matcher(xmlString); int tabCount = 0; while (matcher.find()) { String str1 = (null == matcher.group(1) || "null".equals(matcher.group())) ? "" : matcher.group(1); String str2 = (null == matcher.group(2) || "null".equals(matcher.group())) ? "" : matcher.group(2); String str3 = (null == matcher.group(3) || "null".equals(matcher.group())) ? "" : matcher.group(3); String str4 = (null == matcher.group(4) || "null".equals(matcher.group())) ? "" : matcher.group(4); if (matcher.group() != null && !matcher.group().trim().equals("")) { printTabs(tabCount, prettyPrintXml); if (!str1.equals("") && str3.equals("")) { ++tabCount; } if (str1.equals("") && !str3.equals("")) { --tabCount; prettyPrintXml.deleteCharAt(prettyPrintXml.length() - 1); } prettyPrintXml.append(str1); prettyPrintXml.append(str2); prettyPrintXml.append(str3); if (!str4.equals("")) { prettyPrintXml.append(LINE_BREAK); printTabs(tabCount, prettyPrintXml); prettyPrintXml.append(str4); } prettyPrintXml.append(LINE_BREAK); } } return prettyPrintXml.toString(); } private static void printTabs(int count, StringBuffer stringBuffer) { for (int i = 0; i < count; i++) { stringBuffer.append("\t"); } } public static void main(String[] args) { String x = new String( "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"><soap:Body><soap:Fault><faultcode>soap:Client</faultcode><faultstring>INVALID_MESSAGE</faultstring><detail><ns3:XcbSoapFault xmlns=\"\" 
xmlns:ns3=\"http://www.someapp.eu/xcb/types/xcb/v1\"><CauseCode>20007</CauseCode><CauseText>INVALID_MESSAGE</CauseText><DebugInfo>Problems creating SAAJ object model</DebugInfo></ns3:XcbSoapFault></detail></soap:Fault></soap:Body></soap:Envelope>"); System.out.println(prettyPrintXMLAsString(x)); } here is the output: <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> <soap:Body> <soap:Fault> <faultcode>soap:Client</faultcode> <faultstring>INVALID_MESSAGE</faultstring> <detail> <ns3:XcbSoapFault xmlns="" xmlns:ns3="http://www.someapp.eu/xcb/types/xcb/v1"> <CauseCode>20007</CauseCode> <CauseText>INVALID_MESSAGE</CauseText> <DebugInfo>Problems creating SAAJ object model</DebugInfo> </ns3:XcbSoapFault> </detail> </soap:Fault> </soap:Body> </soap:Envelope> A: The solutions I have found here for Java 1.6+ do not reformat the code if it is already formatted. The one that worked for me (and re-formatted already formatted code) was the following. import org.apache.xml.security.c14n.CanonicalizationException; import org.apache.xml.security.c14n.Canonicalizer; import org.apache.xml.security.c14n.InvalidCanonicalizerException; import org.w3c.dom.Element; import org.w3c.dom.bootstrap.DOMImplementationRegistry; import org.w3c.dom.ls.DOMImplementationLS; import org.w3c.dom.ls.LSSerializer; import org.xml.sax.InputSource; import org.xml.sax.SAXException; import javax.xml.parsers.DocumentBuilderFactory; import javax.xml.parsers.ParserConfigurationException; import javax.xml.transform.TransformerException; import java.io.IOException; import java.io.StringReader; public class XmlUtils { public static String toCanonicalXml(String xml) throws InvalidCanonicalizerException, ParserConfigurationException, SAXException, CanonicalizationException, IOException { Canonicalizer canon = Canonicalizer.getInstance(Canonicalizer.ALGO_ID_C14N_OMIT_COMMENTS); byte canonXmlBytes[] = canon.canonicalize(xml.getBytes()); return new String(canonXmlBytes); } public static String 
prettyFormat(String input) throws TransformerException, ParserConfigurationException, IOException, SAXException, InstantiationException, IllegalAccessException, ClassNotFoundException { InputSource src = new InputSource(new StringReader(input)); Element document = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(src).getDocumentElement(); Boolean keepDeclaration = input.startsWith("<?xml"); DOMImplementationRegistry registry = DOMImplementationRegistry.newInstance(); DOMImplementationLS impl = (DOMImplementationLS) registry.getDOMImplementation("LS"); LSSerializer writer = impl.createLSSerializer(); writer.getDomConfig().setParameter("format-pretty-print", Boolean.TRUE); writer.getDomConfig().setParameter("xml-declaration", keepDeclaration); return writer.writeToString(document); } } It is a good tool to use in your unit tests for full-string xml comparison. private void assertXMLEqual(String expected, String actual) throws ParserConfigurationException, IOException, SAXException, CanonicalizationException, InvalidCanonicalizerException, TransformerException, IllegalAccessException, ClassNotFoundException, InstantiationException { String canonicalExpected = prettyFormat(toCanonicalXml(expected)); String canonicalActual = prettyFormat(toCanonicalXml(actual)); assertEquals(canonicalExpected, canonicalActual); } A: I saw one answer using Scala, so here is another one in Groovy, just in case someone finds it interesting. The default indentation is 2 steps, XmlNodePrinter constructor can be passed another value as well. 
def xml = "<tag><nested>hello</nested></tag>" def stringWriter = new StringWriter() def node = new XmlParser().parseText(xml); new XmlNodePrinter(new PrintWriter(stringWriter)).print(node) println stringWriter.toString() Usage from Java if groovy jar is in classpath String xml = "<tag><nested>hello</nested></tag>"; StringWriter stringWriter = new StringWriter(); Node node = new XmlParser().parseText(xml); new XmlNodePrinter(new PrintWriter(stringWriter)).print(node); System.out.println(stringWriter.toString()); A: Underscore-java has static method U.formatXml(string). Live example import com.github.underscore.U; public class MyClass { public static void main(String args[]) { String xml = "<tag><nested>hello</nested></tag>"; System.out.println(U.formatXml("<?xml version=\"1.0\" encoding=\"UTF-8\"?><root>" + xml + "</root>")); } } Output: <?xml version="1.0" encoding="UTF-8"?> <root> <tag> <nested>hello</nested> </tag> </root> A: In case you do not need indentation that much but a few line breaks, it could be sufficient to simply regex... String leastPrettifiedXml = uglyXml.replaceAll("><", ">\n<"); The code is nice, not the result because of missing indentation. (For solutions with indentation, see other answers.) A: Try this: try { TransformerFactory transFactory = TransformerFactory.newInstance(); Transformer transformer = null; transformer = transFactory.newTransformer(); StringWriter buffer = new StringWriter(); transformer.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes"); transformer.transform(new DOMSource(element), new StreamResult(buffer)); String str = buffer.toString(); System.out.println("XML INSIDE IS #########################################"+str); return element; } catch (TransformerConfigurationException e) { e.printStackTrace(); } catch (TransformerException e) { e.printStackTrace(); } A: I should have looked for this page first before coming up with my own solution! Anyway, mine uses Java recursion to parse the xml page. 
This code is totally self-contained and does not rely on third party libraries. Also .. it uses recursion! // you call this method passing in the xml text public static void prettyPrint(String text){ prettyPrint(text, 0); } // "index" corresponds to the number of levels of nesting and/or the number of tabs to print before printing the tag public static void prettyPrint(String xmlText, int index){ boolean foundTagStart = false; StringBuilder tagChars = new StringBuilder(); String startTag = ""; String endTag = ""; String[] chars = xmlText.split(""); // find the next start tag for(String ch : chars){ if(ch.equalsIgnoreCase("<")){ tagChars.append(ch); foundTagStart = true; } else if(ch.equalsIgnoreCase(">") && foundTagStart){ startTag = tagChars.append(ch).toString(); String tempTag = startTag; endTag = (tempTag.contains("\"") ? (tempTag.split(" ")[0] + ">") : tempTag).replace("<", "</"); // <startTag attr1=1 attr2=2> => </startTag> break; } else if(foundTagStart){ tagChars.append(ch); } } // once start and end tag are calculated, print start tag, then content, then end tag if(foundTagStart){ int startIndex = xmlText.indexOf(startTag); int endIndex = xmlText.indexOf(endTag); // handle if matching tags NOT found if((startIndex < 0) || (endIndex < 0)){ if(startIndex < 0) { // no start tag found return; } else { // start tag found, no end tag found (handles single tags aka "<mytag/>" or "<?xml ...>") printTabs(index); System.out.println(startTag); // move on to the next tag // NOTE: "index" (not index+1) because next tag is on same level as this one prettyPrint(xmlText.substring(startIndex+startTag.length(), xmlText.length()), index); return; } // handle when matching tags found } else { String content = xmlText.substring(startIndex+startTag.length(), endIndex); boolean isTagContainsTags = content.contains("<"); // content contains tags printTabs(index); if(isTagContainsTags){ // ie: <tag1><tag2>stuff</tag2></tag1> System.out.println(startTag); prettyPrint(content, 
index+1); // "index+1" because "content" is nested printTabs(index); } else { System.out.print(startTag); // ie: <tag1>stuff</tag1> or <tag1></tag1> System.out.print(content); } System.out.println(endTag); int nextIndex = endIndex + endTag.length(); if(xmlText.length() > nextIndex){ // if there are more tags on this level, continue prettyPrint(xmlText.substring(nextIndex, xmlText.length()), index); } } } else { System.out.print(xmlText); } } private static void printTabs(int counter){ while(counter-- > 0){ System.out.print("\t"); } } A: I was trying to achieve something similar, but without any external dependency. The application was already using DOM to format just for logging the XMLs! Here is my sample snippet public void formatXML(final String unformattedXML) { final int length = unformattedXML.length(); final int indentSpace = 3; final StringBuilder newString = new StringBuilder(length + length / 10); final char space = ' '; int i = 0; int indentCount = 0; char currentChar = unformattedXML.charAt(i++); char previousChar = currentChar; boolean nodeStarted = true; newString.append(currentChar); for (; i < length - 1;) { currentChar = unformattedXML.charAt(i++); if(((int) currentChar < 33) && !nodeStarted) { continue; } switch (currentChar) { case '<': if ('>' == previousChar && '/' != unformattedXML.charAt(i - 1) && '/' != unformattedXML.charAt(i) && '!' != unformattedXML.charAt(i)) { indentCount++; } newString.append(System.lineSeparator()); for (int j = indentCount * indentSpace; j > 0; j--) { newString.append(space); } newString.append(currentChar); nodeStarted = true; break; case '>': newString.append(currentChar); nodeStarted = false; break; case '/': if ('<' == previousChar || '>' == unformattedXML.charAt(i)) { indentCount--; } newString.append(currentChar); break; default: newString.append(currentChar); } previousChar = currentChar; } newString.append(unformattedXML.charAt(length - 1)); System.out.println(newString.toString()); }
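Most of the dependency-free answers above build on the same javax.xml.transform core. For reference, that core recipe reduces to a few self-contained JDK-only lines; note that the indent-amount key is an implementation detail of the Xalan-derived transformer bundled with the JDK, not a standard, so treat it as an assumption rather than a guarantee:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class MinimalPrettyPrint {

    // Re-indents well-formed XML using only the JDK's built-in transformer.
    static String pretty(String xml) {
        try {
            Transformer t = TransformerFactory.newInstance().newTransformer();
            t.setOutputProperty(OutputKeys.INDENT, "yes");
            t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
            // Non-standard key understood by the Xalan-derived transformer
            // shipped with the JDK; other implementations may ignore it.
            t.setOutputProperty("{http://xml.apache.org/xslt}indent-amount", "2");
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(xml)),
                        new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(pretty("<tag><nested>hello</nested></tag>"));
    }
}
```

Unlike the regex- and string-scanning approaches, this only accepts well-formed XML, but it needs no third-party jars.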
{ "language": "en", "url": "https://stackoverflow.com/questions/139076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "493" }
Q: JFrame.setDefaultLookAndFeelDecorated(true); When I use the setDefaultLookAndFeelDecorated(true) method in Java, why does the frame appear full-screen when I maximize it? And how can I disable the full-screen mode in this case? A: Setting setDefaultLookAndFeelDecorated to true causes the decorations to be handled by the look and feel; this means that a System look-and-feel on both Windows and Mac (I have no Linux at hand now) retains the borders you would expect of a native window, e.g. staying clear of the taskbar in Windows. When using the Cross Platform look-and-feel, a.k.a. Metal, which is the default on Windows, the Windows version will take over the entire screen, making it look like a full-screen window. On Mac, the OS refuses to give away its own titlebar, and draws a complete Metal frame (including the title bar) in a Mac-native window. So, in short, if you want to make sure the taskbar gets respected, use the Windows system look-and-feel on Windows. You can set it by using something like UIManager.setLookAndFeel((LookAndFeel) Class.forName(UIManager.getSystemLookAndFeelClassName()).newInstance()); A: If you don't want your JFrame to be maximize-able then call .setResizable(false); on it.
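To make this concrete, here is a small self-contained sketch (class and method names are mine, not from the original answers) that applies the platform's native look-and-feel before any frame is created; the window-creation part is guarded so the lookup itself can run even on a headless machine:

```java
import java.awt.GraphicsEnvironment;
import javax.swing.JFrame;
import javax.swing.UIManager;

public class LafDemo {
    // The class name of the platform's native look-and-feel
    // (WindowsLookAndFeel on Windows, AquaLookAndFeel on Mac, ...).
    static String systemLaf() {
        return UIManager.getSystemLookAndFeelClassName();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("System look-and-feel: " + systemLaf());

        if (!GraphicsEnvironment.isHeadless()) {
            // Install the native look-and-feel *before* creating frames,
            // so maximizing respects platform conventions like the taskbar.
            UIManager.setLookAndFeel(systemLaf());
            // With the native LAF, let the OS draw the window decorations.
            JFrame.setDefaultLookAndFeelDecorated(false);
            JFrame frame = new JFrame("Demo");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setSize(400, 300);
            frame.setVisible(true);
        }
    }
}
```

UIManager.setLookAndFeel(String) is the simpler overload; the reflection-based call quoted in the answer achieves the same thing.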
{ "language": "en", "url": "https://stackoverflow.com/questions/139088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: GetExitCodeProcess() returns 128 I have a DLL that's loaded into a 3rd party parent process as an extension. From this DLL I instantiate external processes (my own) using the CreateProcess API. This works great in 99.999% of the cases, but sometimes it suddenly fails and stops working permanently (maybe a restart of the parent process would solve this but this is undesirable and I don't want to recommend that until I solve the problem.) The failure is symptomized by the external process not being invoked any more, even though CreateProcess() doesn't report an error, and by GetExitCodeProcess() returning 128. Here's the simplified version of what I'm doing: STARTUPINFO si; ZeroMemory(&si, sizeof(si)); si.cb = sizeof(si); si.dwFlags = STARTF_USESHOWWINDOW; si.wShowWindow = SW_HIDE; PROCESS_INFORMATION pi; ZeroMemory(&pi, sizeof(pi)); if(!CreateProcess( NULL, // No module name (use command line). "<my command line>", NULL, // Process handle not inheritable. NULL, // Thread handle not inheritable. FALSE, // Set handle inheritance to FALSE. CREATE_SUSPENDED, // Create suspended. NULL, // Use parent's environment block. NULL, // Use parent's starting directory. &si, // Pointer to STARTUPINFO structure. &pi)) // Pointer to PROCESS_INFORMATION structure. { // Handle error. } else { // Do something. // Resume the external process thread. DWORD resumeThreadResult = ResumeThread(pi.hThread); // ResumeThread() returns 1 which is OK // (it means that the thread was suspended but then restarted) // Wait for the external process to finish. DWORD waitForSingleObjectResult = WaitForSingleObject(pi.hProcess, INFINITE); // WaitForSingleObject() returns 0 which is OK. // Get the exit code of the external process. DWORD exitCode; if(!GetExitCodeProcess(pi.hProcess, &exitCode)) { // Handle error. 
} else { // There is no error but exitCode is 128, a value that // doesn't exist in the external process (and even if it // existed it doesn't matter as it isn't being invoked any more) // Error code 128 is ERROR_WAIT_NO_CHILDREN which would make some // sense *if* GetExitCodeProcess() returned FALSE and then I were to // get ERROR_WAIT_NO_CHILDREN with GetLastError() } // PROCESS_INFORMATION handles for process and thread are closed. } The external process can be manually invoked from Windows Explorer or the command line and it starts just fine on its own. Invoked like that, it creates a log file and logs some information about itself before doing any real work. But invoked as described above, this logging information doesn't appear at all, so I'm assuming that the main thread of the external process never enters main() (I'm testing that assumption now.) There is at least one thing I could do to try to circumvent the problem (not start the thread suspended) but I would first like to understand the root of the failure. Does anyone have any idea what could cause this and how to fix it? A: Quoting from the MSDN article on GetExitCodeProcess: The following termination statuses can be returned if the process has terminated: * *The exit value specified in the ExitProcess or TerminateProcess function *The return value from the main or WinMain function of the process *The exception value for an unhandled exception that caused the process to terminate Given the scenario you described, I think the most likely cause is the third: an unhandled exception. Have a look at the source of the processes you create. A: Have a look at Desktop Heap memory. Essentially the desktop heap issue comes down to exhausted resources (eg starting too many processes). When your app runs out of these resources, one of the symptoms is that you won't be able to start a new process, and the call to CreateProcess will fail with code 128. Note that the context you run in also has some effect. 
For example, running as a service, you will run out of desktop heap much faster than if you're testing your code in a console app. This post has a lot of good information about desktop heap. Microsoft Support also has some useful information. A: There are two issues that I could think of from your code sample: 1. Get your usage of the first two parameters to the CreateProcess call working first. Hard-code the paths and invoke notepad.exe and see if that comes up. Keep tweaking this until you have Notepad running. 2. Contrary to your comment, if you have passed the current-directory parameter for the new process as NULL, it will use the current working directory of the calling process to start the new process from, and not the parent's starting directory. I assume that your external process executable cannot start properly due to DLL dependencies that cannot be resolved in the new path. PS: In the debugger, watch @err,hr, which will tell you the explanation for the last error code.
{ "language": "en", "url": "https://stackoverflow.com/questions/139090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I retrieve latest from database using NHibernate after an update? Here is the scenario: I have a winforms application using NHibernate. When launched, I populate a DataGridView with the results of a NHibernate query. This part works fine. If I update a record in that list and flush the session, the update takes in the database. Upon closing the form after the update, I call a method to retrieve a list of objects to populate the DataGridView again to pick up the change and also get any other changes that may have occurred by somebody else. The problem is that the record that got updated, NHibernate doesn't reflect the change in the list it gives me. When I insert or delete a record, everything works fine. It is just when I update, that I get this behavior. I narrowed it down to NHibernate with their caching mechanism. I cannot figure out a way to make NHibernate retrieve from the database instead of using the cache after an update occurs. I posted on the NHibernate forums, but the suggestions they gave me didn't work. I stated this and nobody replied back. I am not going to state what I have tried in case I didn't do it right. If you answer with something that I tried exactly, I will state it in the comments of your answer. This is the code that I use to retrieve the list: public IList<WorkOrder> FindBy(string fromDate, string toDate) { IQuery query = _currentSession.CreateQuery("from WorkOrder wo where wo.Date >= ? and wo.Date <= ?"); query.SetParameter(0, fromDate); query.SetParameter(1, toDate); return query.List<WorkOrder>(); } The session is passed to the class when it is constructed. I can post my mapping file also, but I am not sure if there is anything wrong with it, since everything else works. Anybody seen this before? This is the first project that I have used NHibernate, thanks for the help. A: After your update, Evict the object from the first level cache. Session.Update(obj); Session.Evict(obj); You may want to commit and/or flush first. 
A: What about Refresh? See section 9.2, "Loading an object", of the docs: sess.Save(cat); sess.Flush(); // force the SQL INSERT sess.Refresh(cat); // re-read the state (after the trigger executes)
{ "language": "en", "url": "https://stackoverflow.com/questions/139115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Javascript Iframe innerHTML Does anyone know how to get the HTML out of an IFRAME I have tried several different ways: document.getElementById('iframe01').contentDocument.body.innerHTML document.frames['iframe01'].document.body.innerHTML document.getElementById('iframe01').contentWindow.document.body.innerHTML etc A: If you take a look at JQuery, you can do something like: <iframe id="my_iframe" ...></iframe> $('#my_iframe').contents().find('html').html(); This is assuming that your iframe parent and child reside on the same server, due to the Same Origin Policy in Javascript. A: Conroy's answer was right. In the case you need only stuff from body tag, just use: $('#my_iframe').contents().find('body').html(); A: I think this is what you want: window.frames['iframe01'].document.body.innerHTML EDIT: I have it on good authority that this won't work in Chrome and Firefox although it works perfectly in IE, which is where I tested it. In retrospect, that was a big mistake This will work: window.frames[0].document.body.innerHTML I understand that this isn't exactly what was asked but don't want to delete the answer because I think it has a place. I like @ravz's jquery answer below. A: You can use the contentDocument or contentWindow property for that purpose. Here is the sample code. function gethtml() { const x = document.getElementById("myframe") const y = x.contentWindow || x.contentDocument const z = y.document ? y.document : y alert(z.body.innerHTML) } here, myframe is the id of your iframe. Note: You can't extract the content out of an iframe from a src outside you domain. A: Having something like the following would work. 
<iframe id = "testframe" onload = populateIframe(this.id);></iframe> // The following function should be inside a script tag function populateIframe(id) { var text = "This is a Test" var iframe = document.getElementById(id); var doc; if(iframe.contentDocument) { doc = iframe.contentDocument; } else { doc = iframe.contentWindow.document; } doc.body.innerHTML = text; } A: Don't forget that you can not cross domains because of security. So if this is the case, you should use JSON. A: This solution works same as iFrame. I have created a PHP script that can get all the contents from the other website, and most important part is you can easily apply your custom jQuery to that external content. Please refer to the following script that can get all the contents from the other website and then you can apply your cusom jQuery/JS as well. This content can be used anywhere, inside any element or any page. <div id='myframe'> <?php /* Use below function to display final HTML inside this div */ //Display Frame echo displayFrame(); ?> </div> <?php /* Function to display frame from another domain */ function displayFrame() { $webUrl = 'http://[external-web-domain.com]/'; //Get HTML from the URL $content = file_get_contents($webUrl); //Add custom JS to returned HTML content $customJS = " <script> /* Here I am writing a sample jQuery to hide the navigation menu You can write your own jQuery for this content */ //Hide Navigation bar jQuery(\".navbar\").hide(); </script>"; //Append Custom JS with HTML $html = $content . $customJS; //Return customized HTML return $html; } A: document.getElementById('iframe01').outerHTML A: You can get the source from another domain if you install the ForceCORS filter on Firefox. When you turn on this filter, it will bypass the security feature in the browser and your script will work even if you try to read another webpage. For example, you could open FoxNews.com in an iframe and then read its source. 
The reason modern web browsers deny this ability by default is that if the other domain includes a piece of JavaScript and you're reading that and displaying it on your page, it could contain malicious code and pose a security threat. So, whenever you're displaying data from another domain on your page, you must beware of this real threat and implement a way to filter out all JavaScript code from your text before you display it. Remember, when a supposed piece of raw text contains some code enclosed within script tags, the tags won't show up when you display it on your page, but the code will still run! So, realize this is a threat. http://www-jo.se/f.pfleger/forcecors A: You can get the HTML out of an iframe using this code: iframe = document.getElementById('frame'); innerHtml = iframe.contentDocument.documentElement.innerHTML
{ "language": "en", "url": "https://stackoverflow.com/questions/139118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: web-inf and jsp page directives I have a number of JSP files under the WEB-INF folder. Inside my web.xml I specify an error page for 404 and 403 and java.lang.Exception. Do I need to include a page directive for each of my JSPs, or will they automatically get forwarded to the exception handling page because they are under WEB-INF? If this is true, does this mean that JSPs which are not placed under WEB-INF do need to have the page directive added in order to forward them to the exception handling page? Thank you, I'm just trying to understand the consequences of WEB-INF. A: You just need to have whatever error page you would like to use in your app available with all the other JSPs. So in the following example you would just need to have the error pages in the root of the context path (where all of the other JSPs are). Anytime the webapp receives a 404 or 403 error it will try to display one of these pages. <error-page> <error-code>404</error-code> <location>/404Error.jsp</location> </error-page> <error-page> <error-code>403</error-code> <location>/403Error.jsp</location> </error-page> Just make sure 404Error.jsp and 403Error.jsp contain: <%@ page isErrorPage="true" %> if you are actually using JSPs for error pages (instead of just static HTML). A: OK, so just to clarify: my JSPs don't need to be in the WEB-INF folder in order for my web descriptor to pick up the exception and forward to the error page.
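The web.xml snippet above covers the HTTP status codes; the java.lang.Exception case mentioned in the question uses an exception-type element instead of error-code. A sketch (error.jsp is a placeholder name, not from the original thread):

```xml
<error-page>
    <exception-type>java.lang.Exception</exception-type>
    <location>/error.jsp</location>
</error-page>
```

Because the container reaches the location with an internal forward rather than a client redirect, the location may also point under /WEB-INF/ (e.g. /WEB-INF/error.jsp) even though browsers can never request such a path directly, which is exactly the property of WEB-INF the question is asking about.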
{ "language": "en", "url": "https://stackoverflow.com/questions/139131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What's a good way to organize external JavaScript file(s)? In an ASP.NET web application with a lot of HTML pages, a lot of inline JavaScript functions are accumulating. What is a good plan for organizing them into external files? Most of the functions are particular to the page for which they are written, but a few are relevant to the entire application. A single file could get quite large. With C#, etc., I usually divide the files at least into one containing the general functions and classes, so that I can use the same file for other applications, and one for functions and classes particular to this application. I don't think that a large file would be good for performance in a web application, however. What is the thinking in this regard? A: You probably want each page to have its page-specific JavaScript in one place, and then all the shared JavaScript in a large file. If you SRC the large file, then your users' browsers will cache the JavaScript code on the first load, and the file size won't be an issue. If you're particularly worried about it, you can pack/minify your JavaScript source into a "distributable" form and save a few kilobytes. A: Single file is large but is cached. Too many small files mean more requests to the server. It's a balancing act. Use tools like Firebug and YSlow to measure your performance and figure out what is best for your application. A: There is some per-request overhead, so in total you will improve performance by combining it all into a single file. It may, however, slow down load times on the first page a user visits, and it may result in useless traffic if some user never require certain parts of your js. The first of these problems isn't quite as problematic, though. If you have something like a signup page that everyone visits first and spends some time on (filling out a form, etc.), the page will be displayed once the html has been loaded and the js can load in the background while the user is busy with the form anyway. 
I would organize the js into different files during development, i. e. one for general stuff and one per model, then combine them into a single file in the build process. You should also do compression at this point. A: UPDATE: I explain this a bit more in depth in a blog post. Assuming you mean .aspx pages when you indicate "HTML pages," here is what I do: Let's say I have a page named foo.aspx and I have JavaScript specific to it. I name the .js file foo.aspx.js. Then I use something like this in a base page class (i.e. all of my pages inherit from this class): protected override void OnLoad(EventArgs e) { base.OnLoad(e); string possiblePageSpecificJavaScriptFile = string.Format("{0}.js", this.TemplateControl.AppRelativeVirtualPath); if (File.Exists(Server.MapPath(possiblePageSpecificJavaScriptFile)) == true) { string absolutePath = possiblePageSpecificJavaScriptFile.Replace("~", Request.ApplicationPath); absolutePath = string.Format("/{0}", absolutePath.TrimStart('/')); Page.ClientScript.RegisterClientScriptInclude(absolutePath, absolutePath); } } So, for each page in my application, this will look for a *.aspx.js file that matches the name of the page (in our example, foo.aspx.js) and place, within the rendered page, a script tag referencing it. (The code after the base.OnLoad(e); would best be extracted, I am simply trying to keep this as short as possible!) To complete this, I have a registry hack that will cause any *.aspx.js files to collapse underneath the *.aspx page in the solution explorer of Visual Studio (i.e. it will hide underneath the page, just like the *.aspx.cs file does). Depending on the version of Visual Studio you are using, the registry hack is different. 
Here are a couple that I use with Windows XP (I don't know if they differ for Vista because I don't use Vista) - copy each one into a text file and rename it with a .reg extension, then execute the file: Visual Studio 2005 Windows Registry Editor Version 5.00 [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\8.0\Projects\{E24C65DC-7377-472b-9ABA-BC803B73C61A}\RelatedFiles\.aspx\.js] @="" Visual Studio 2008 Windows Registry Editor Version 5.00 [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\9.0\Projects\{E24C65DC-7377-472b-9ABA-BC803B73C61A}\RelatedFiles\.aspx\.js] @="" You will probably need to reboot your machine before these take effect. Also, the nesting will only take place for newly-added .js files; any that you have which are already named *.aspx.js can be nested by either re-adding them to the project or manually modifying the .csproj file's XML. Anyway, that is how I do things and it really helps to keep things organized. For JavaScript files containing commonly-used JavaScript, I keep those in a root-level folder called JavaScript and also have some code in my base page class that adds those references. That should be simple enough to figure out. Hope this helps someone. A: It also depends on the life of a user session. If a user is likely to go to multiple pages and spend a long time on the site, a single large file can be worth the initial load seeing as it's cached. If it's more likely the user will come from Google and just hit a single page, then it would be better to just have individual files per page. A: Use "namespacing" together with a folder-structure: (image: folder structure of namespaced .js files, http://www.roosteronacid.com/js.jpg) All you have to do is include Base.js, since that file sets up all the namespaces. And the .js file(s) (the classes) you want to use on a given page. 
As far as page-specific scripts go, I normally name the script according to the ASPX/HTML pages: Default.aspx Default.aspx.js A: I would recommend that if you split your JS into separate files, you do not use lots of script tags to include them, as that will kill page-load performance. Instead, use server-side includes to inline them before they leave the server.
{ "language": "en", "url": "https://stackoverflow.com/questions/139142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Experiences with OpenLaszlo? In a related question, I asked about Web Development. I came across something called OpenLaszlo yesterday and thought it looked interesting for doing some website development. The site has a bunch of good information on it and they've got some nice tutorials and such, but being a total novice (as far as web development goes), I'm wondering whether anyone here would recommend this. As I stated in my other question, this is a new world for me and there are a lot of directions I could go. Can you compare/contrast this and other web development you've done? Obviously, this is somewhat subjective, but I haven't heard much about it on SO and I'm hoping to get some opinions on this. A: I worked on a website for about a year in which the entire UI was developed in Laszlo. I've also developed AJAX applications using JS frameworks such as JQuery, Prototype and Scriptaculous. In my experience, the total effort required is considerably less when using Laszlo, and the class-based object model helps to keep your code better organised than when using JS frameworks. My only complaints about Laszlo were that: * *It "breaks the browser" in terms of support for the back/forward/refresh buttons. This problem also exists with AJAX, but most JS libraries seem to have found a workaround. *No support for internationalization, though none of the JS libraries are any better in my experience *Relatively small user base/community compared to competitors such as GWT, JQuery, etc. All in all, I thought OpenLaszlo was a pretty good solution for creating rich web-based user interfaces, and has a number of very novel features, e.g. ability to deploy on multiple runtimes (Flash, DHTML, etc.) without requiring any code changes. Also, I should mention that I haven't used it for almost a year, so it's likely that some progress has been made in recent times on the issues I mentioned above. Update 5 years since I posted this answer, things have changed considerably. 
In case anyone is in any doubt: don't use Laszlo, the project is completely moribund. A: I used OpenLaszlo to develop a few blog widgets for some friends of mine (about a year ago) and it was easy enough to get something basic working, and it looked OK. But if I had to do it again, I would probably use Flex. I think you can make a more polished-looking application in a lot less time using Flex than with Laszlo. A: You definitely can write a Flash app quickly with OpenLaszlo. There are a lot of similarities to developing for Silverlight. One OpenLaszlo lameness is that it uses a lame variation of JavaScript similar to ActionScript. Takes a little getting used to, if you are used to the latest features. Also, the final Flash file that you end up with is very large (file size) compared to what you can do with other tools. A: One benefit of OpenLaszlo is the possibility of DHTML output. But for me the mix of XML and JavaScript in the same source file was somewhat confusing.
{ "language": "en", "url": "https://stackoverflow.com/questions/139150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What is the best way to prevent highlighting of text when clicking on its containing div in javascript? I am building a menu in HTML/CSS/JS and I need a way to prevent the text in the menu from being highlighted when double-clicked on. I need a way to pass the ids of several divs into a function and have highlighting turned off within them. So when the user accidentally (or on purpose) double clicks on the menu, the menu shows its sub-elements but its text does not highlight. There are a number of scripts out there floating around on the web, but many seem outdated. What's the best way? A: You could use this CSS to simply hide the selection color (not supported by IE):

#id::-moz-selection { background: transparent; }
#id::selection { background: transparent; }

A: In Gecko- and WebKit-based browsers (Mozilla, Firefox, Camino, Safari, Google Chrome) you can use this:

div.noSelect {
    -moz-user-select: none;   /* mozilla browsers */
    -khtml-user-select: none; /* webkit browsers */
}

For IE there is no CSS option, but you can capture the ondragstart event and return false; Update Browser support for this property has expanded since 2008.

div.noSelect {
    -webkit-user-select: none; /* Safari */
    -ms-user-select: none;     /* IE 10 and IE 11 */
    user-select: none;         /* Standard syntax */
}

https://www.w3schools.com/csSref/css3_pr_user-select.php

A: You could:
*Give it ("it" being your text) an onclick event
*First click sets a variable to the current time
*Second click checks to see if it is within x ms of that stored time (so a double click slower than, for example, 500ms doesn't register as a double click)
*If it is a double click, do something to the page like adding hidden HTML, doing document.focus(). You'll have to experiment with these as some might cause unwanted scrolling.

A: Hope this is what you are looking for.
<script type="text/javascript">
function clearSelection() {
    var sel;
    if (document.selection && document.selection.empty) {
        document.selection.empty();
    } else if (window.getSelection) {
        sel = window.getSelection();
        if (sel && sel.removeAllRanges)
            sel.removeAllRanges();
    }
}
</script>

<div ondblclick="clearSelection()">Some text goes here.</div>
{ "language": "en", "url": "https://stackoverflow.com/questions/139157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How to list all functions in a module? I have a Python module installed on my system and I'd like to be able to see what functions/classes/methods are available in it. I want to call the help function on each one. In Ruby I can do something like ClassName.methods to get a list of all the methods available on that class. Is there something similar in Python? e.g. something like:

from somemodule import foo
print(foo.methods)  # or whatever is the correct method to call

A:

import types
import yourmodule

print([getattr(yourmodule, a) for a in dir(yourmodule)
       if isinstance(getattr(yourmodule, a), types.FunctionType)])

A: None of these answers will work if you are unable to import said Python file without import errors. This was the case for me when I was inspecting a file which comes from a large code base with a lot of dependencies. The following will process the file as text, search for all function definitions that start with "def", and print their names and line numbers.

import re

pattern = re.compile(r"def (.*)\(")
for i, line in enumerate(open('Example.py')):
    for match in re.finditer(pattern, line):
        print('%s: %s' % (i + 1, match.groups()[0]))

A: For completeness' sake, I'd like to point out that sometimes you may want to parse code instead of importing it. An import will execute top-level expressions, and that could be a problem. For example, I'm letting users select entry point functions for packages being made with zipapp. Using import and inspect risks running astray code, leading to crashes, help messages being printed out, GUI dialogs popping up and so on.
Instead I use the ast module to list all the top-level functions:

import ast
import sys

def top_level_functions(body):
    return (f for f in body if isinstance(f, ast.FunctionDef))

def parse_ast(filename):
    with open(filename, "rt") as file:
        return ast.parse(file.read(), filename=filename)

if __name__ == "__main__":
    for filename in sys.argv[1:]:
        print(filename)
        tree = parse_ast(filename)
        for func in top_level_functions(tree.body):
            print("  %s" % func.name)

Putting this code in list.py and using itself as input, I get:

$ python list.py list.py
list.py
  top_level_functions
  parse_ast

Of course, navigating an AST can be tricky sometimes, even for a relatively simple language like Python, because the AST is quite low-level. But if you have a simple and clear use case, it's both doable and safe. Though, a downside is that you can't detect functions that are generated at runtime, like foo = lambda x,y: x*y.
def _inspect_tasks():
    import inspect
    return {
        f[0].replace('task_', ''): f[1]
        for f in inspect.getmembers(sys.modules['__main__'], inspect.isfunction)
        if f[0].startswith('task_')
    }

Example Output:

{
    'install': <function task_install at 0x105695940>,
    'dev': <function task_dev at 0x105695b80>,
    'test': <function task_test at 0x105695af0>
}

Longer Version I wanted the names of the methods to define CLI task names without having to repeat myself. ./tasks.py:

#!/usr/bin/env python3
import sys
from subprocess import run

def _inspect_tasks():
    import inspect
    return {
        f[0].replace('task_', ''): f[1]
        for f in inspect.getmembers(sys.modules['__main__'], inspect.isfunction)
        if f[0].startswith('task_')
    }

def _cmd(command, args):
    return run(command.split(" ") + args)

def task_install(args):
    return _cmd("python3 -m pip install -r requirements.txt -r requirements-dev.txt --upgrade", args)

def task_test(args):
    return _cmd("python3 -m pytest", args)

def task_dev(args):
    return _cmd("uvicorn api.v1:app", args)

if __name__ == "__main__":
    tasks = _inspect_tasks()
    if len(sys.argv) >= 2 and sys.argv[1] in tasks.keys():
        tasks[sys.argv[1]](sys.argv[2:])
    else:
        print(f"Must provide a task from the following: {list(tasks.keys())}")

Example no arguments:

λ ./tasks.py
Must provide a task from the following: ['install', 'dev', 'test']

Example running test with extra arguments:

λ ./tasks.py test -qq
s.ssss.sF..Fs.sssFsss..ssssFssFs....s.s

You get the point.
As my projects get more and more involved, it's going to be easier to keep a script up to date than to keep the README up to date and I can abstract it down to just:

./tasks.py install
./tasks.py dev
./tasks.py test
./tasks.py publish
./tasks.py logs

A: For code that you do not wish to evaluate, I recommend an AST-based approach (like csl's answer), e.g.:

import ast

source = open(<filepath_to_parse>).read()
functions = [f.name for f in ast.parse(source).body
             if isinstance(f, ast.FunctionDef)]

For everything else, the inspect module is correct:

import inspect

import <module_to_inspect> as module
functions = inspect.getmembers(module, inspect.isfunction)

This gives a list of 2-tuples in the form [(<name:str>, <value:function>), ...]. The simple answer above is hinted at in various responses and comments, but not called out explicitly. A: Use vars(module) then filter out anything that isn't a function using inspect.isfunction:

import inspect
import my_module

# vars() returns the module's __dict__, so iterate over its items()
# to get (name, object) pairs.
my_module_functions = [f for _, f in vars(my_module).items()
                       if inspect.isfunction(f)]

The advantage of vars over dir or inspect.getmembers is that it returns the functions in the order they were defined instead of sorted alphabetically. Also, this will include functions that are imported by my_module; if you want to filter those out to get only functions that are defined in my_module, see my question Get all defined functions in Python module. A: You can use the following method to list all the functions in your module from the shell:

import module
module.*?

A: Besides dir(module) or help(module) mentioned in previous answers, you can also try: - Open ipython - import module_name - type module_name, press tab. It'll open a small window listing all the functions in the python module. It looks very neat.
Here is a snippet listing all functions of the hashlib module (C:\Program Files\Anaconda2):

C:\Users\lenovo>ipython
Python 2.7.12 |Anaconda 4.2.0 (64-bit)| (default, Jun 29 2016, 11:07:13) [MSC v.1500 64 bit (AMD64)]
Type "copyright", "credits" or "license" for more information.

IPython 5.1.0 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: import hashlib

In [2]: hashlib.
hashlib.algorithms            hashlib.new           hashlib.sha256
hashlib.algorithms_available  hashlib.pbkdf2_hmac   hashlib.sha384
hashlib.algorithms_guaranteed hashlib.sha1          hashlib.sha512
hashlib.md5                   hashlib.sha224

A:

import sys
from inspect import getmembers, isfunction

fcn_list = [o[0] for o in getmembers(sys.modules[__name__], isfunction)]

A: This will do the trick:

dir(module)

However, if you find it annoying to read the returned list, just use the following loop to get one name per line.

for i in dir(module):
    print(i)

A: Use the inspect module:

from inspect import getmembers, isfunction

from somemodule import foo
print(getmembers(foo, isfunction))

Also see the pydoc module, the help() function in the interactive interpreter and the pydoc command-line tool which generates the documentation you are after. You can just give them the class you wish to see the documentation of. They can also generate, for instance, HTML output and write it to disk. A: Once you've imported the module, you can just do:

help(modulename)

... To get the docs on all the functions at once, interactively. Or you can use:

dir(modulename)

... To simply list the names of all the functions and variables defined in the module. A: dir(module) is the standard way when using a script or the standard interpreter, as mentioned in most answers. However with an interactive python shell like IPython you can use tab-completion to get an overview of all objects defined in the module.
This is much more convenient than using a script and print to see what is defined in the module.
*module.<tab> will show you all objects defined in the module (functions, classes and so on)
*module.ClassX.<tab> will show you the methods and attributes of a class
*module.function_xy? or module.ClassX.method_xy? will show you the docstring of that function / method
*module.function_x?? or module.SomeClass.method_xy?? will show you the source code of the function / method.

A:

r = globals()
sep = '\n' + 100*'*' + '\n'  # To make it clean to read.
for k in list(r.keys()):
    try:
        if str(type(r[k])).count('function'):
            print(sep + k + ' : \n' + str(r[k].__doc__))
    except Exception as e:
        print(e)

Output :

******************************************************************************************
GetNumberOfWordsInTextFile :
Computes and returns the number of words in a text file
:param path_: the path of the file to analyze
:return: the number of words in the file
******************************************************************************************
write_in :
Writes the data (2nd arg) to a txt file (path as 1st arg) in 'a' (append) mode
:param path_: the path of the text file
:param data_: the list of data to write, or a block of text directly
:return: None
******************************************************************************************
write_in_as_w :
Writes the data (2nd arg) to a txt file (path as 1st arg) in 'w' (write) mode
:param path_: the path of the text file
:param data_: the list of data to write, or a block of text directly
:return: None

A: The Python documentation provides the perfect solution for this, which uses the built-in function dir. You can just use dir(module_name) and then it will return a list of the functions within that module.
For example, dir(time) will return

['_STRUCT_TM_ITEMS', '__doc__', '__loader__', '__name__', '__package__', '__spec__', 'altzone', 'asctime', 'ctime', 'daylight', 'get_clock_info', 'gmtime', 'localtime', 'mktime', 'monotonic', 'monotonic_ns', 'perf_counter', 'perf_counter_ns', 'process_time', 'process_time_ns', 'sleep', 'strftime', 'strptime', 'struct_time', 'time', 'time_ns', 'timezone', 'tzname', 'tzset']

which is the list of functions the 'time' module contains. A: For global functions dir() is the command to use (as mentioned in most of these answers), however this lists both public functions and non-public functions together. For example, running:

>>> import re
>>> dir(re)

returns functions/classes like:

'__all__', '_MAXCACHE', '_alphanum_bytes', '_alphanum_str', '_pattern_type', '_pickle', '_subx'

some of which are not generally meant for general programming use (but by the module itself, except in the case of DunderAliases like __doc__, __file__, etc.). For this reason it may not be useful to list them with the public ones (this is how Python knows what to get when using from module import *). __all__ could be used to solve this problem; it returns a list of all the public functions and classes in a module (those that do not start with underscores - _). See Can someone explain __all__ in Python? for the use of __all__. Here is an example:

>>> import re
>>> re.__all__
['match', 'fullmatch', 'search', 'sub', 'subn', 'split', 'findall', 'finditer', 'compile', 'purge', 'template', 'escape', 'error', 'A', 'I', 'L', 'M', 'S', 'X', 'U', 'ASCII', 'IGNORECASE', 'LOCALE', 'MULTILINE', 'DOTALL', 'VERBOSE', 'UNICODE']
>>>

All the functions and classes with underscores have been removed, leaving only those that are defined as public and can therefore be used via import *. Note that __all__ is not always defined. If it is not included then an AttributeError is raised.
A case of this is with the ast module:

>>> import ast
>>> ast.__all__
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'ast' has no attribute '__all__'
>>>

A: Use inspect.getmembers to get all the variables/classes/functions etc. in a module, and pass in inspect.isfunction as the predicate to get just the functions:

from inspect import getmembers, isfunction

from my_project import my_module

functions_list = getmembers(my_module, isfunction)

getmembers returns a list of tuples (object_name, object) sorted alphabetically by name. You can replace isfunction with any of the other isXXX functions in the inspect module. A: This will append all the functions that are defined in your_module to a list.

result = []
for i in dir(your_module):
    if type(getattr(your_module, i)).__name__ == "function":
        result.append(getattr(your_module, i))

A: If you want to get the list of all the functions defined in the current file, you can do it that way:

# Get this script's name (use os.path.splitext rather than
# rstrip(".py"), which strips characters, not a suffix).
import os
script_name = os.path.splitext(os.path.basename(__file__))[0]

# Import it from its path so that you can use it as a Python object.
import importlib.util
spec = importlib.util.spec_from_file_location(script_name, __file__)
x = importlib.util.module_from_spec(spec)
spec.loader.exec_module(x)

# List the functions defined in it.
from inspect import getmembers, isfunction
list_of_functions = getmembers(x, isfunction)

As an application example, I use that for calling all the functions defined in my unit testing scripts. This is a combination of codes adapted from the answers of Thomas Wouters and adrian here, and from Sebastian Rittau on a different question.
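Pulling the getmembers answers together, here is a small sketch (my own composite, not from any single answer above; the helper name local_functions is made up) that filters on each function's __module__ attribute, so that only functions actually defined in the inspected module are returned. It uses the stdlib json module as an arbitrary demo target:

```python
import inspect
import json  # stdlib module used purely as a demo target


def local_functions(module):
    """Return names of functions defined in `module` itself,
    skipping functions it merely imports from elsewhere."""
    return [
        name
        for name, obj in inspect.getmembers(module, inspect.isfunction)
        if obj.__module__ == module.__name__
    ]


print(local_functions(json))  # includes 'dump', 'dumps', 'load', 'loads'
```

One caveat: a function's __module__ reflects where the function object was created, so decorated or re-exported functions can report a surprising module; treat this as a heuristic rather than a guarantee.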
{ "language": "en", "url": "https://stackoverflow.com/questions/139180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "557" }
Q: Can I protect against SQL injection by escaping single-quote and surrounding user input with single-quotes? I realize that parameterized SQL queries are the optimal way to sanitize user input when building queries that contain user input, but I'm wondering what is wrong with taking user input and escaping any single quotes and surrounding the whole string with single quotes. Here's the code:

sSanitizedInput = "'" & Replace(sInput, "'", "''") & "'"

Any single-quote the user enters is replaced with double single-quotes, which eliminates the user's ability to end the string, so anything else they may type, such as semicolons, percent signs, etc., will all be part of the string and not actually executed as part of the command. We are using Microsoft SQL Server 2000, for which I believe the single-quote is the only string delimiter and the only way to escape the string delimiter, so there is no way to execute anything the user types in. I don't see any way to launch an SQL injection attack against this, but I realize that if this were as bulletproof as it seems to me someone else would have thought of it already and it would be common practice. What's wrong with this code? Is there a way to get an SQL injection attack past this sanitization technique? Sample user input that exploits this technique would be very helpful. UPDATE: I still don't know of any way to effectively launch a SQL injection attack against this code. A few people suggested that a backslash would escape one single-quote and leave the other to end the string so that the rest of the string would be executed as part of the SQL command, and I realize that this method would work to inject SQL into a MySQL database, but in SQL Server 2000 the only way (that I've been able to find) to escape a single-quote is with another single-quote; backslashes won't do it.
And unless there is a way to stop the escaping of the single-quote, none of the rest of the user input will be executed because it will all be taken as one contiguous string. I understand that there are better ways to sanitize input, but I'm really more interested in learning why the method I provided above won't work. If anyone knows of any specific way to mount a SQL injection attack against this sanitization method I would love to see it. A: First of all, it's just bad practice. Input validation is always necessary, but it's also always iffy. Worse yet, blacklist validation is always problematic; it's much better to explicitly and strictly define what values/formats you accept. Admittedly, this is not always possible - but to some extent it must always be done. Some research papers on the subject:
*http://www.imperva.com/docs/WP_SQL_Injection_Protection_LK.pdf
*http://www.it-docs.net/ddata/4954.pdf (Disclosure, this last one was mine ;) )
*https://www.owasp.org/images/d/d4/OWASP_IL_2007_SQL_Smuggling.pdf (based on the previous paper, which is no longer available)

Point is, any blacklist you do (and too-permissive whitelists) can be bypassed. The last link to my paper shows situations where even quote escaping can be bypassed. Even if these situations do not apply to you, it's still a bad idea. Moreover, unless your app is trivially small, you're going to have to deal with maintenance, and maybe a certain amount of governance: how do you ensure that it's done right, everywhere all the time? The proper way to do it:
*Whitelist validation: type, length, format or accepted values
*If you want to blacklist, go right ahead. Quote escaping is good, but within context of the other mitigations.
*Use Command and Parameter objects, to preparse and validate
*Call parameterized queries only.
*Better yet, use Stored Procedures exclusively.
*Avoid using dynamic SQL, and don't use string concatenation to build queries.
*If using SPs, you can also limit permissions in the database to executing the needed SPs only, and not access tables directly.
*you can also easily verify that the entire codebase only accesses the DB through SPs...

A: Input sanitation is not something you want to half-ass. Use your whole ass. Use regular expressions on text fields. TryCast your numerics to the proper numeric type, and report a validation error if it doesn't work. It is very easy to search for attack patterns in your input, such as ' --. Assume all input from the user is hostile. A: It's a bad idea anyway as you seem to know. What about something like escaping the quote in the string like this: \' Your replace would result in: \'' If the backslash escapes the first quote, then the second quote has ended the string. A: Simple answer: It will work sometimes, but not all the time. You want to use white-list validation on everything you do, but I realize that's not always possible, so you're forced to go with the best-guess blacklist. Likewise, you want to use parameterized stored procs in everything, but once again, that's not always possible, so you're forced to use sp_execute with parameters. There are ways around any usable blacklist you can come up with (and some whitelists too). A decent writeup is here: http://www.owasp.org/index.php/Top_10_2007-A2 If you need to do this as a quick fix to give you time to get a real one in place, do it. But don't think you're safe. A: There are two ways to do it, no exceptions, to be safe from SQL injections: prepared statements or parameterized stored procedures. A: Okay, this response will relate to the update of the question: "If anyone knows of any specific way to mount a SQL injection attack against this sanitization method I would love to see it."
Now, besides the MySQL backslash escaping - and taking into account that we're actually talking about MSSQL, there are actually 3 possible ways of still SQL injecting your code:

sSanitizedInput = "'" & Replace(sInput, "'", "''") & "'"

Take into account that these will not all be valid at all times, and are very dependent on your actual code around it:
*Second-order SQL Injection - if an SQL query is rebuilt based upon data retrieved from the database after escaping, the data is concatenated unescaped and may be indirectly SQL-injected. See
*String truncation - (a bit more complicated) - Scenario is you have two fields, say a username and password, and the SQL concatenates both of them. And both fields (or just the first) have a hard limit on length. For instance, the username is limited to 20 characters. Say you have this code:

username = left(Replace(sInput, "'", "''"), 20)

Then what you get - is the username, escaped, and then trimmed to 20 characters. The problem here - I'll stick my quote in the 20th character (e.g. after 19 a's), and your escaping quote will be trimmed (in the 21st character). Then the SQL

sSQL = "select * from USERS where username = '" + username + "' and password = '" + password + "'"

combined with the aforementioned malformed username will result in the password already being outside the quotes, and will just contain the payload directly.
*Unicode Smuggling - In certain situations, it is possible to pass a high-level unicode character that looks like a quote, but isn't - until it gets to the database, where suddenly it is. Since it isn't a quote when you validate it, it will go through easily... See my previous response for more details, and link to original research.

A: If you have parameterised queries available you should be using them at all times. All it takes is for one query to slip through the net and your DB is at risk. A: Patrick, are you adding single quotes around ALL input, even numeric input?
If you have numeric input, but are not putting the single quotes around it, then you have an exposure. A: Yeah, that should work right up until someone runs SET QUOTED_IDENTIFIER OFF and uses a double quote on you. Edit: It isn't as simple as not allowing the malicious user to turn off quoted identifiers: The SQL Server Native Client ODBC driver and SQL Server Native Client OLE DB Provider for SQL Server automatically set QUOTED_IDENTIFIER to ON when connecting. This can be configured in ODBC data sources, in ODBC connection attributes, or OLE DB connection properties. The default for SET QUOTED_IDENTIFIER is OFF for connections from DB-Library applications. When a stored procedure is created, the SET QUOTED_IDENTIFIER and SET ANSI_NULLS settings are captured and used for subsequent invocations of that stored procedure. SET QUOTED_IDENTIFIER also corresponds to the QUOTED_IDENTIFIER setting of ALTER DATABASE. SET QUOTED_IDENTIFIER is set at parse time. Setting at parse time means that if the SET statement is present in the batch or stored procedure, it takes effect, regardless of whether code execution actually reaches that point; and the SET statement takes effect before any statements are executed. There are a lot of ways QUOTED_IDENTIFIER could be off without you necessarily knowing it. Admittedly - this isn't the smoking gun exploit you're looking for, but it's a pretty big attack surface. Of course, if you also escaped double quotes - then we're back where we started. ;) A: Your defence would fail if:
*the query is expecting a number rather than a string
*there were any other way to represent a single quotation mark, including:
*an escape sequence such as \039
*a unicode character (in the latter case, it would have to be something which were expanded only after you've done your replace)

A: In a nutshell: Never do query escaping yourself. You're bound to get something wrong.
Instead, use parameterized queries, or if you can't do that for some reason, use an existing library that does this for you. There's no reason to be doing it yourself. A: I realize this is a long time after the question was asked, but .. One way to launch an attack on the 'quote the argument' procedure is with string truncation. According to MSDN, in SQL Server 2000 SP4 (and SQL Server 2005 SP1), a too-long string will be quietly truncated. When you quote a string, the string increases in size. Every apostrophe is repeated. This can then be used to push parts of the SQL outside the buffer. So you could effectively trim away parts of a where clause. This would probably be mostly useful in a 'user admin' page scenario where you could abuse the 'update' statement to not do all the checks it was supposed to do. So if you decide to quote all the arguments, make sure you know what goes on with the string sizes and see to it that you don't run into truncation. I would recommend going with parameters. Always. Just wish I could enforce that in the database. And as a side effect, you are more likely to get better cache hits because more of the statements look the same. (This was certainly true on Oracle 8) A: I've used this technique when dealing with 'advanced search' functionality, where building a query from scratch was the only viable answer. (Example: allow the user to search for products based on an unlimited set of constraints on product attributes, displaying columns and their permitted values as GUI controls to reduce the learning threshold for users.) In itself it is safe AFAIK. As another answerer pointed out, however, you may also need to deal with backslash escaping (albeit not when passing the query to SQL Server using ADO or ADO.NET, at least -- can't vouch for all databases or technologies). The snag is that you really have to be certain which strings contain user input (always potentially malicious), and which strings are valid SQL queries.
One of the traps is if you use values from the database -- were those values originally user-supplied? If so, they must also be escaped. My answer is to try to sanitize as late as possible (but no later!), when constructing the SQL query. However, in most cases, parameter binding is the way to go -- it's just simpler. A: What ugly code all that sanitisation of user input would be! Then the clunky StringBuilder for the SQL statement. The prepared statement method results in much cleaner code, and the SQL Injection benefits are a really nice addition. Also, why reinvent the wheel? A: Rather than changing a single quote to (what looks like) two single quotes, why not just change it to an apostrophe, a quote, or remove it entirely? Either way, it's a bit of a kludge... especially when you legitimately have things (like names) which may use single quotes... NOTE: Your method also assumes everyone working on your app always remembers to sanitize input before it hits the database, which probably isn't realistic most of the time. A: I'm not sure about your case, but I just encountered a case in MySQL where Replace(value, "'", "''") not only can't prevent SQL injection, but actually causes the injection. If an input ends with \', it's OK without the replace, but when the trailing ' is doubled, the \ before the end-of-string quote causes a SQL error.
  *SQL statements are problematic to validate
*you either use nvarchar/nchar columns (and prefix string literals with N) OR limit values going into varchar/char columns to ASCII characters only (e.g. throw an exception when creating the SQL statement)
  *this way you will be avoiding automatic apostrophe conversion from CHAR(700) to CHAR(39) (and maybe other similar Unicode hacks)
*you always validate value length to fit the actual column length (throw an exception if longer)
  *there was a known defect in SQL Server allowing one to bypass the SQL error thrown on truncation (leading to silent truncation)
*you ensure that SET QUOTED_IDENTIFIER is always ON
  *beware, it takes effect at parse time, i.e. even in inaccessible sections of code

Complying with these 4 points, you should be safe. If you violate any of them, a way for SQL injection opens. A: It might work, but it seems a little hokey to me. I'd recommend verifying that each string is valid by testing it against a regular expression instead. A: While you might find a solution that works for strings, for numerical predicates you need to also make sure they're only passing in numbers (a simple check is: can it be parsed as int/double/decimal?). It's a lot of extra work.
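To make the consensus concrete, here is a minimal sketch of the parameterized alternative using Python's stdlib sqlite3 driver. The users table and the ? placeholder style are illustrative only; SQL Server drivers apply the same idea with their own placeholder syntax:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "s3cret"))

# The classic payload. Because it is bound as a parameter, the driver
# treats it purely as data: it can never terminate the string literal
# or alter the statement's structure, so no escaping code is needed.
evil = "' OR '1'='1"
rows = conn.execute("SELECT * FROM users WHERE username = ?", (evil,)).fetchall()
print(rows)  # [] -- the payload matched no username
```

Note that only values can be bound this way; table and column names still have to come from a trusted whitelist.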
{ "language": "en", "url": "https://stackoverflow.com/questions/139199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "152" }
Q: Repeater, ListView, DataList, DataGrid, GridView ... Which to choose? So many different controls to choose from! What are best practices for determining which control to use for displaying data in ASP.NET? A: It all comes down to how you want to lay out your data. If you need to control the layout (like tables versus CSS versus whatever), then use a Repeater or ListView. Between the two, ListView gives you a lot more events and built-in commands for editing, selecting, and inserting, plus paging and grouping functionality. A Repeater is extremely simple: it repeats a layout with the data. Since you're building the layout by hand, ListView and Repeater require more code. GridView is an updated DataGrid, so there is hardly any reason to use DataGrid. GridView works really well when hooked up to standard ASP.NET datasources, but restricts you to a tabular layout with lots of layout rules. GridView requires less code since you're using a built-in layout. A: Indeed! I've blogged on the differences between the ASP.NET 4.0 data tools. Basically, gridviews are the most powerful way to present tabular information, whereas ListView controls are for more complicated displays of repeated data. If I were giving advice to an ASP.NET newbie, I'd tell them to learn gridviews inside out and ignore the other controls to begin with. A: Everyone else hit it: It Depends. Now for some specific guidance (expanding upon WebDude's excellent answer above) ... Does your design fit into a natural spreadsheet or grid view of the data? GridView. Do you need to display a list or other formatted view of data, possibly with headers and footers, and probably with specific controls and/or formatting for each record of data? (E.g., customized links, possibly LinkButtons, or specific edit controls?) Does this display specifically not fit naturally into a spreadsheet or grid view? ListView If you meet all the criteria of ListView, but your data would naturally fit in a grid, you may consider DataList.
I go for Repeater when I just need some basic data iterated with some custom design bits, no headers, no footers, nice and clean. A: Markup View Declaring the following sample code is possible for all 3 (ListView, DataList, Repeater) <asp:ListView runat="server" OnItemCommand="Unnamed1_ItemCommand"> <ItemTemplate> <%# Eval("Name")%> </ItemTemplate> </asp:ListView> In the following lists you can see the available templates and options for each of them and see the differences for yourself ListView (note the edit, group, insert, layout) * *AlternatingItemTemplate *EditItemTemplate *EmptyDataTemplate *EmptyItemTemplate *GroupSeparatorTemplate *GroupTemplate *InsertItemTemplate *ItemSeparatorTemplate *ItemTemplate *LayoutTemplate *SelectedItemTemplate DataList (note the Style pairs) * *AlternatingItemStyle *AlternatingItemTemplate *EditItemStyle *EditItemTemplate *FooterStyle *FooterTemplate *HeaderStyle *HeaderTemplate *ItemStyle *ItemTemplate *SelectedItemStyle *SelectedItemTemplate *SeparatorStyle *SeparatorTemplate Repeater * *AlternatingItemTemplate *FooterTemplate *HeaderTemplate *ItemTemplate *SeparatorTemplate Code View (advanced view) CompositeDataBoundControl: look at the following class hierarchy (and related controls). These controls host other ASP.NET controls in their templates to display bound data to the user. Some descriptions for better clarification The ListView Control The ListView control also uses templates for the display of data. However, it supports many additional templates that allow for more scenarios when working with your data. These templates include the LayoutTemplate, GroupTemplate, and ItemSeparatorTemplate. The ListView control (unlike DataList and Repeater) also implicitly supports the ability to edit, insert, and delete data by using a data source control. You can define individual templates for each of these scenarios. The DataList Control The DataList control works like the Repeater control.
It repeats data for each row in your data set, and it displays this data according to your defined template. However, it lays out the data defined in the template within various HTML structures. This includes options for horizontal or vertical layout, and it also allows you to set how the data should be repeated, as flow or table layout. The DataList control does not automatically use a data source control to edit data. Instead, it provides command events in which you can write your own code for these scenarios. To enable these events, you add a Button control to one of the templates and set the button’s CommandName property to the edit, delete, update, or cancel keyword. The appropriate event is then raised by the DataList control. The Repeater Control The Repeater control also uses templates to define custom binding. However, it does not show data as individual records. Instead, it repeats the data rows as you specify in your template. This allows you to create a single row of data and have it repeat across your page. The Repeater control is a read-only template. That is, it supports only the ItemTemplate. It does not implicitly support editing, insertion, and deletion. You should consider one of the other controls if you need this functionality, otherwise you will have to code this yourself for the Repeater control. The above Descriptions are from MCTS Exam 70-515 Web Applications Development with Microsoft.NET Framework 4 book. DataGrid is not even mentioned in this book and is replaced by popular GridViews and answered nicely by other users A: It's really about what you trying to achieve * *Gridview - Limited in design, works like an html table. More in built functionality like edit/update, page, sort. Lots of overhead. *DataGrid - Old version of the Gridview. A gridview is a super datagrid. *Datalist - more customisable version of the Gridview. Also has some overhead. More manual work as you have to design it yourself. *ListView - the new Datalist :). 
Almost a hybrid of the DataList and GridView, where you can use paging and built-in GridView-like functionality, but have the freedom of design. One of the newer controls in this family. *Repeater - Very lightweight. No built-in functionality like headers or footers. Has the least overhead.
{ "language": "en", "url": "https://stackoverflow.com/questions/139207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "113" }
Q: Do you databind your object fields to your form controls? Or do you populate your form controls manually by a method? Is either considered a best practice? A: Well, it depends. I have tended to use databinding wherever I could - it is darn convenient, but on occasion, I'll populate them manually. Particularly, I find it useful with controls like the DataGridView to use databinding. It makes filtering quite simple. A: It really depends on what you are trying to achieve. Databinding is simple and powerful, but if you need more control or some kind of side effect, you can manually populate the control from a method. Personally, I start with databinding first, then change it later if necessary. A: Generally, if data binding business or DAL objects is possible, I would use it. The old axiom holds true: The most error-free and reliable line of code is often the one you didn't have to write. (Bear in mind, however, that you need to know exactly how that data binding occurs, what its overhead is, and you have to be able to trust the framework and your source objects to be error-free!) You would, as others have mentioned, manually populate if you needed specific functionality not brought to bear directly by binding, or if there is an issue with data binding business/DAL objects (as occasionally happens with certain 3rd-party controls).
{ "language": "en", "url": "https://stackoverflow.com/questions/139209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Adding Cookie to ZSI Posts I've added cookie support to SOAPpy by overriding HTTPTransport. I need functionality beyond that of SOAPpy, so I was planning on moving to ZSI, but I can't figure out how to put the Cookies on the ZSI posts made to the service. Without these cookies, the server will think it is an unauthorized request and it will fail. How can I add cookies from a Python CookieJar to ZSI requests? A: If you read the _Binding class in client.py of ZSI you can see that it has a variable cookies, which is an instance of Cookie.SimpleCookie. Following the ZSI example and the Cookie example, this is how it should work: b = Binding(url='/cgi-bin/simple-test', tracefile=fp) b.cookies['foo'] = 'bar' A: Additionally, the Binding class also allows any header to be added. So I figured out that I can just add a "Cookie" header for each cookie I need to add. This worked well for the code generated by wsdl2py, just adding the cookies right after the binding is formed in the SOAP client class. Adding a parameter to the generated class to take in the cookies as a dictionary is easy, and then they can be iterated through and added.
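To make the "Cookie" header idea from the last answer concrete, here is a small sketch. Only the helper below is real, runnable code; the ZSI lines are shown as comments because the exact method for attaching a header depends on your generated client (the names there are assumptions, not ZSI's documented API):

```python
from http.cookiejar import CookieJar  # 'cookielib' in the Python 2 era of ZSI

def cookie_header_value(jar):
    """Flatten a CookieJar (or any iterable of cookie-like objects with
    .name and .value attributes) into a single 'Cookie:' header value,
    e.g. 'session=abc123; token=xyz'."""
    return "; ".join("%s=%s" % (c.name, c.value) for c in jar)

# Hypothetical usage with a ZSI binding (adapt to your wsdl2py output):
#   b = Binding(url='/cgi-bin/simple-test')
#   b.AddHeader('Cookie', cookie_header_value(jar))
```

Because the helper only looks at name and value, it works equally well with a real CookieJar or with cookies you build by hand.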
{ "language": "en", "url": "https://stackoverflow.com/questions/139212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Redundant code constructs The most egregiously redundant code construct I often see involves using the code sequence if (condition) return true; else return false; instead of simply writing return (condition); I've seen this beginner error in all sorts of languages: from Pascal and C to PHP and Java. What other such constructs would you flag in a code review? A: Using comments instead of source control: -Commenting out or renaming functions instead of deleting them and trusting that source control can get them back for you if needed. -Adding comments like "RWF Change" instead of just making the change and letting source control assign the blame. A: Somewhere I’ve spotted this thing, which I find to be the pinnacle of boolean redundancy: return (test == 1)? ((test == 0) ? 0 : 1) : ((test == 0) ? 0 : 1); :-) A: Declaring separately from assignment in languages other than C: int foo; foo = GetFoo(); A: Redundant code is not in itself an error. But if you're really trying to save every character return (condition); is redundant too. You can write: return condition; A: Returning uselessly at the end: // stuff return; } A: void myfunction() { if(condition) { // Do some stuff if(othercond) { // Do more stuff } } } instead of void myfunction() { if(!condition) return; // Do some stuff if(!othercond) return; // Do more stuff } A: I once had a guy who repeatedly did this: bool a; bool b; ... if (a == true) b = true; else b = false; A: Using .ToString() on a string A: Putting an exit statement as the first statement in a function to disable the execution of that function, instead of one of the following options: * *Completely removing the function *Commenting the function body *Keeping the function but deleting all the code Using exit as the first statement makes it very hard to spot; you can easily read over it.
A: Fear of null (this also can lead to serious problems): if (name != null) person.Name = name; Redundant if's (not using else): if (!IsPostback) { // do something } if (IsPostback) { // do something else } Redundant checks (Split never returns null): string[] words = sentence.Split(' '); if (words != null) More on checks (the second check is redundant if you are going to loop) if (myArray != null && myArray.Length > 0) foreach (string s in myArray) And my favorite for ASP.NET: Scattered DataBinds all over the code in order to make the page render. A: Copy paste redundancy: if (x > 0) { // a lot of code to calculate z y = x + z; } else { // a lot of code to calculate z y = x - z; } instead of if (x > 0) y = x + CalcZ(x); else y = x - CalcZ(x); or even better (or more obfuscated) y = x + (x > 0 ? 1 : -1) * CalcZ(x) A: Allocating elements on the heap instead of the stack. { char *buff = malloc(1024); /* ... */ free(buff); } instead of { char buff[1024]; /* ... */ } or { struct foo *x = (struct foo *)malloc(sizeof(struct foo)); x->a = ...; bar(x); free(x); } instead of { struct foo x; x.a = ...; bar(&x); } A: The most common redundant code construct I see is code that is never called from anywhere in the program. The other is design patterns used where there is no point in using them. For example, writing "new BobFactory().createBob()" everywhere, instead of just writing "new Bob()". Deleting unused and unnecessary code can massively improve the quality of the system and the team's ability to maintain it. The benefits are often startling to teams who have never considered deleting unnecessary code from their system. I once performed a code review by sitting with a team and deleting over half the code in their project without changing the functionality of their system. I thought they'd be offended but they frequently asked me back for design advice and feedback after that.
A: if (foo == true) { do stuff } I keep telling the developer that does that that it should be if ((foo == true) == true) { do stuff } but he hasn't gotten the hint yet. A: if (condition == true) { ... } instead of if (condition) { ... } Edit: or even worse and turning around the conditional test: if (condition == false) { ... } which is easily read as if (condition) then ... A: I often run into the following: function foo() { if ( something ) { return; } else { do_something(); } } But it doesn't help telling them that the else is useless here. It has to be either function foo() { if ( something ) { return; } do_something(); } or - depending on the length of checks that are done before do_something(): function foo() { if ( !something ) { do_something(); } } A: From nightmarish code reviews..... char s[100]; followed by memset(s,0,100); followed by s[strlen(s)] = 0; with lots of nasty if (strcmp(s, "1") == 0) littered about the code. A: Using an array when you want set behavior. You need to check everything to make sure it's not in the array before you insert it, which makes your code longer and slower. A: Redundant .ToString() invocations: const int foo = 5; Console.WriteLine("Number of Items: " + foo.ToString()); Unnecessary string formatting: const int foo = 5; Console.WriteLine("Number of Items: {0}", foo);
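Most of the snippets above are variations on restating a boolean the language has already computed. A compact before/after, written in Python here purely for brevity (the point is language-neutral):

```python
def is_adult_verbose(age):
    # Redundant: spells out the True/False the comparison already produced.
    if age >= 18:
        return True
    else:
        return False

def is_adult(age):
    # The comparison itself is the answer.
    return age >= 18

# Both behave identically across the boundary cases.
for age in (0, 17, 18, 99):
    assert is_adult_verbose(age) == is_adult(age)
print("equivalent")
```

The same collapse applies to `b = (a == true)` and `if (condition == true)`: the comparison is already the boolean you wanted.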
{ "language": "en", "url": "https://stackoverflow.com/questions/139214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: What do you do with a developer who does not test his code? One of our developers is continually writing code and putting it into version control without testing it. The quality of our code is suffering as a result. Besides getting rid of the developer, how can I solve this problem? EDIT I have talked to him about it a number of times and even given him a written warning A: Tell the developer you would like to see a change in their practices within 2 weeks or you will begin your company's disciplinary procedure. Offer as much help and assistance as you can, but if you can't change this person, he's not right for your company. A: Using Cruise Control or a similar tool, you can make checkins automatically trigger a build and unit tests. You would still need to ensure that there are unit tests for any new functionality he adds, which you can do by looking at his checkins. However, this is a human problem, so a technical solution can only go so far. A: Why not just talk to him? He probably won't actually bite you. A: * *Make him "babysit" the build, and become the build manager. This will give him less time to develop code (thus increasing everyone's performance) and teach him why a good build is so necessary. *Enforce test cases - code cannot be submitted without unit test cases. Modify the build system so that if the test cases don't compile and run correctly, or don't exist, then the entire task checkin is denied. -Adam A: Publish stats on test code coverage per developer; this would be after talking to him. A: Here are some ideas from a sea shanty. Intro What shall we do with a drunken sailor, (3×) Early in the morning? Chorus Wey–hey and up she rises, (3×) Early in the morning! Verses Stick him in a bag and beat him senseless, (3×) Early in the morning! Put him in the longboat till he’s sober, (3×) Early in the morning! etc. Replace "drunken sailor" with a "sloppy developer".
A: Depending on the type of version control system you are using you could set up check-in policies that force the code to pass certain requirements before it can be checked in. If you are using a system like Team Foundation Server it gives you the ability to specify code-coverage and unit testing requirements for check-ins. A: You know, this is a perfect opportunity to avoid singling him out (though I agree you need to talk with him) and implement a Test-first process in-house. If the rules aren't clear and the expectations aren't known to all, I've found that what you describe isn't all that uncommon. I find that doing the test-first development scheme works well for me and improves the code quality. A: If you can do code reviews -- that's a perfect place to catch it. We require reviews prior to merging to iteration trunk, so typically everything is caught then. A: They may be overly focused on speed rather than quality. This can tempt some people into rushing through issues to clear their list and see what comes back in bug reports later. To rectify this balance: * *assign only a couple of items at a time in your issue tracking system, *code review and test anything they have "completed" as soon as possible so it will be back with them immediately if there are any problems *talk to them about your expectations about how long an item will take to do properly A: If you systematically perform code reviews before allowing a developer to commit the code, well, your problem is mostly solved. But this doesn't seem to be your case, so this is what I recommend: * *Talk to the developer. Discuss the consequences for others in the team. Most developers want to be recognized by their peers, so this might be enough. Also point out it is much easier to fix bugs in the code that's fresh in your mind than weeks-old code. This part makes sense if you have some form of code ownership in place.
*If this doesn't work after some time, try to put in place a policy that will make committing buggy code unpleasant for the author. One popular way is to make the person who broke the build responsible for the chores of creating the next one. If your build process is fully automated, look for another menial task to take care of instead. This approach has the added benefit of not pinpointing anyone in particular, making it more acceptable for everybody. *Use disciplinary measures. Depending on the size of your team and of your company, those can take many forms. *Fire the developer. There is a cost associated with keeping bad apples. When you get this far, the developer doesn't care about his fellow developers, and you've got a people problem on your hands already. If the work environment becomes poisoned, you might lose far more - productivity-wise and people-wise - than this single bad developer. A: Peer programming is another possibility. If he is with another skilled developer on the team who does meet quality standards and knows procedure, then this has a few benefits: * *With an experienced developer over his shoulder he will learn what is expected of him and see the difference between his code and code that meets expectations *The other developer can enforce a test-first policy: not allowing code to be written until tests have been written for it *Similarly, the other developer can verify that the code is up to standard before it is checked in, reducing the number of bad check-ins All of this of course requires the company and developers to be receptive to this process, which they may not be. A: It seems that people have come up with a lot of imaginative and devious answers to this problem. But the fact is that this isn't a game. Devising elaborate peer pressure systems to "name and shame" him is not going to get to the root of the problem, i.e. why is he not writing tests? I think you should be direct.
I know you say that you've talked to him, but have you tried to find out why he isn't writing tests? Clearly at this point he knows that he should be, so surely there must be some reason why he isn't doing what he's been told to do. Is it laziness? Procrastination? Programmers are famous for their egos and strong opinions - perhaps he's convinced for some reason that testing is a waste of time, or that his code is always perfect and doesn't need testing. If he's an immature programmer, he might not fully understand the implications of his actions. If he's "too mature" he might be too set in his ways. Whatever the reason, address it. If it does come down to a matter of opinion, you need to make him understand that he needs to set his own personal opinion aside and just follow the rules. Make it clear that if he can't be trusted to follow the rules then he will be replaced. If he still doesn't, do just that. One last thing - document all of your discussions along with any problems that occur as a result of his changes. If it comes to the worst you may be forced to justify your decisions, in which case, having documentary evidence will surely be invaluable. A: Stick him on his own development branch, and only bring his stuff into the trunk when you know it's thoroughly tested. This might be a place where a distributed source control management tool like GIT or Mercurial would excel. Although with the increased branching/merging support in SVN, you might not have too much trouble managing it. EDIT This is only if you can't get rid of him or get him to change his ways. If you simply can't get this behaviour to stop (by changing or firing), then the best you can do is buffer the rest of the team from the bad effects of his coding. A: If you are at a place where you can affect the policies, make some changes. Do code reviews before check ins and make testing part of the development cycle. A: It seems pretty simple. Make it a requirement and if he can't do it, replace him. 
Why would you keep him? A: I usually don't advocate this unless all else fails... Sometimes, a publicly-displayed chart of bug-count-by-developer can apply enough peer pressure to get favorable results. A: Try the Carrot, make it a fun game. E.g. the Continuous Integration Game plugin for Hudson http://wiki.hudson-ci.org/display/HUDSON/The+Continuous+Integration+Game+plugin A: As a developer who rarely tests his own code, I can tell you the one thing that's made me slowly shift my behavior... Visibility If the environment allows pushing code out, waiting for users to find problems, and then essentially asking "How about now?" after making a change to the code, there's no real incentive to test your own stuff. Code reviews and collaboration encourage you to work towards making a quality product much more than if you were just delivering 'Widget X' while your coworkers work on 'Widget Y' and 'Widget Z' The more visible your work is, the more likely you are to care about how well it works. A: Code review. Stick all of your devs in a room every Monday morning and ask them to bring their most proud code-based accomplishment from the previous week along with them to the meeting. Let them take the spotlight and get excited about explaining what they did. Have them bring copies of the code so other devs can see what they're talking about. We started this process a few months ago, and it's astonishing to see the amount of subconscious quality checks that take place. After all, if the devs are simply asked to talk about what they're most excited about, they'll be totally stoked to show people their code. Then, other devs will see the quality errors and publicly discuss why they're wrong and how the code should really be written instead. If this doesn't get your dev to write quality code, he's probably not a good fit for your team. A: Make it part of his Annual Review objectives. If he doesn't achieve it, no pay rise.
Sometimes though you do just have to accept that someone is just not right for your team/environment. It should be a last resort and can be tough to handle, but if you have exhausted all other options it may be the best thing in the long run. A: Put your developers on branches of your code, based on some logic like, per feature, per bug fix, per dev team, whatever. Then bad check-ins are isolated to those branches. When it comes time to do a build, merge to a testing branch, find problems, resolve, and then merge your release back to a main branch. Or remove commit rights for that developer and have them send their code to a younger developer for review and testing before it can be committed. That might motivate a change in procedure. A: You could put together a report with errors found in the code with the name of the programmer that was responsible for that piece of software. If he's a reasonable person, discuss the report with him. If he cares about his "reputation", publish the report regularly and make it available to all his peers. If he only listens to the "authority", do the report and escalate the issue to his manager. Anyway, I've seen often that when people are made aware of how bad they seem from outside, they change their behaviour. Hey this reminds me of something I read on xkcd :) A: Are you referring to writing automated unit tests or manual unit testing prior to check-in? If your shop does not write automated tests then his checking in of code that does not work is reckless. Is it impacting the team? Do you have a formalized QA department? If you are all creating automated unit tests then I would suggest that part of your code review process include the unit tests as well. It will become obvious that the code is not acceptable per your standards during your review. Your question is rather broad but I hope I provided some direction. I would agree with Phil that the first step is to individually talk to him and explain the importance of quality.
Poor quality can often be linked to the culture of the team, department and company. A: Make executed test cases one of the deliverables before something is considered "done." If you don't have executed test cases, then the work is not complete, and if the deadline passes before you have the documented test case execution, then he has not delivered on time, and the consequences would be the same as if he had not completed the development. If your company's culture would not allow for this, and it values speed over accuracy, then that's probably the root of the problem, and the developer is simply responding to the incentives that are in place -- he is being rewarded for doing a lot of things half-assed rather than fewer things correctly. A: Make the person clean latrines. Worked in the Army. And if you work in a group with individuals who eat a lot of Indian food, it won't take long for them to fall in line. But that's just me... A: Every time a developer checks something in that does not compile, put some money in a jar. You'll think twice before checking in then. A: Unfortunately, if you have already spoken to him many times and given him written warnings, I would say it is about time to eliminate him from the team. A: You might find some helpful answers here: How to make junior programmers write tests? A: If you've genuinely talked to him and you've given him all the support and training he needs to understand why this is a big deal then I'd look at getting rid of him (how that works depends on where in the world you are). I know you said you wanted something aside from firing him but sometimes there aren't "nice" solutions to a problem. Without being harsh, programmers always talk about leaving companies who don't take software development seriously. If this is reasonable then why should a company put up with a developer who has been given every reasonable chance but still clearly isn't taking software development seriously?
A: I'd be tempted to suggest elaborating a bit on what you've tried and what results you got as this may have changed a bit but here are my initial suggestions: * *Is it any tests or comprehensive tests? Some may code blindly and do zero tests, but this is rather rare, IME. Usually there are some tests done but not enough to cover most of the cases that would be comprehensive testing. *Group dynamics may help. I'd assume he is part of a team and that the team's view may be of some help here. In a way this is trying to get peer pressure, which is usually a bad thing, but sometimes it can be used in good ways. *How well spelled out were the warnings? In a way this can seem childish but there is a chance that what you think of as testing may not be the same as his. Do you want nUnit tests, an Excel spreadsheet, logs from his computer, or something else as proof of the existence and use of tests? From what you've described there isn't anything to confirm that he did understand what you meant, was going to use tests and provide evidence of doing so. *Check-in policy question. Some places, such as my current workplace, encourage committing often, which can mean that one does commit code without tests. Is there a known, accepted and well-followed policy where you are? That's another aspect here. A: If you have automated builds set up, then make sure that the failure notifications are both as obvious and annoying as possible. Wallboards or audio notifications in the development common areas are a good start. Then, once a build is broken, make sure that no one checks in code until the offender has fixed the problem. Granted, this will only catch it when his code breaks the build, but the peer pressure for him to continually be spotlighted for this will be an incentive for most. In the event that this does not help, take the next disciplinary actions available through your human resources department.
You have already talked to him, you have already given a written notice - find out what the next steps are. A developer who goes his own way is either a visionary or not a team player - and I have never personally had the pleasure of working with a visionary in that regard. A: Have you tried talking to them about it? Might not be a bad first step. It might also give you some clues as to what step 2 should be if the problem continues. A: Test your spelling! I think you meant "their code". * *Talk to him. Let him know it's an issue. *Have a group meeting to discuss code quality. *If it's still bad, force him to have his code checked before he can check it in. *If he doesn't get the hint by then, you'll have to let him go. A: NCover + CruiseControl: send out automatic reports, and then one can prove that code coverage goes down as he checks in. A: Code reviews and unit tests. Having been (like many people) the guy who checks in a trivial change and breaks things, I can tell you that unit tests remove any excuse for not testing, if they are set up so you can run the whole panoply quickly, and they help identify who broke the code (assuming a decent VCS). Of course, with informal code reviews, I've checked in trivial code that has been reviewed by a senior (and competent) colleague, and still broken the codebase. A: If talking to him didn't work, and you can't fire him, then he's either lazy or unreasonable. If you can't take the high road and reason with the guy, hit him where it hurts and start docking his pay. Or if you really want to punish him, make him maintain the code. A: Introduce code coverage tools and produce an automated report from your build server of all the code not covered by unit tests. His name will be bottom of the board. The board should be printed and stuck somewhere every week where everyone can see it. Stop giving him anything new to do until his coverage is at 85%. Give the guy at the top of the board the most interesting jobs.
Tie your next written warning to a certain code coverage requirement - then you have clear dismissal reasons should he fail. A: Tell him he will be reassigned to the quality team where he will be doing only documentation. That has worked for me more than once for the teams that I was leading... and if that doesn't work, find somebody else to test his code! ..wait, that's lame... oh yeah... Fire him!!! A: It depends. Does his code work? Is he the most productive or least productive member of your team? Is his code buggier than others'? How valuable are his/her contributions? If he is a stellar performer who produces high quality code, then who cares. If on the other hand he/she is producing bug-ridden code, then sit that person down, speak with them, lay out the consequences, and they either get on board or they don't. A: You mention talking to the developer, but I'm curious whether you asked them about their testing procedure. If they come from another company then they might be used to writing all of their code, checking it in, then testing, and then checking in the final version of the code. They may be viewing the check-ins as just another way to save their work. However, if they have been with your company for a significant period of time (say at least six months) then they should be used to how you do things, and this wouldn't be a very valid excuse for much longer. A: Simple. Make it the dev's responsibility to verify bugs that are reported and fix the ones that reproduce. Don't let the person work on new features. If the person has half a brain, they will quickly develop a certain level of frustration with bone-headed bugs caused by not unit testing code. Additionally, the overall skill of the person will likely grow significantly. A: If even code reviews of his code don't work, maybe give him a task to review some other "wonderful" ( ;) ) code, from which he might relate to his own problems, and ask him to compare his code with that awful piece of code.
Usually, the problem with such people is self-realization, so no matter how you try to make him understand the problems with his own code, until and unless he himself realizes them, it's not gonna work. This is of course if you don't have the option to fire him and in fact want to groom him. A: It sounds like you've made it clear to him that this is important to you, the company and the team. I think you need to find out what is behind his behaviour - is he just not hearing what you're saying? Maybe you need to find another way to say it. Maybe he's not convinced - there could be any number of reasons for that - find the fundamental reason and deal with that. A: If talking doesn't work, put a policy in place where code checked in without accompanying tests is simply backed out of the repository. After they have to rewrite their code a couple of times, they may get the message. A: I would suggest (as others): * *code review, *pair programming, *SCM commit policy. A: You can tell your version control system that this user has no permission to upload anything, so he must ask someone to do it for him. That should teach him. A: Establish an agreement within the team about what will be tested, how it should be tested, and when it should be tested (before check-in, before it pushes, before it gets merged to trunk). Then, when a check-in doesn't meet the set of standards the team agreed code should meet, simply roll it back and ask the developer to fix it. Rolling check-ins back is an incredibly effective way both to preserve the quality of the codebase in the face of poor-quality check-ins, and to signal lightly to people that their code doesn't meet the standards set by the team. The nice part about rollbacks is that it's really easy to check the code back in - just roll back the rollback, fix whatever the issue is, and then check the change in again. I would be careful to do it in a very objective way that doesn't single anyone out.
This means applying it to the whole team, not just your problem member, and focusing on making it about the quality of the code and having the code that gets checked in meet the standards the team set with each other, rather than on punishment. A: You said he doesn't test his code. Does this mean he doesn't create unit tests? Or that he doesn't test his code AT ALL? If he doesn't test his code at all, then this is a fundamental problem with his development. Testing the code you write is part of the job. A developer who does not test his code is not acceptable and is slowing down your project. Testing is part of the job description of a developer (even the so-called stars). If, however, he is testing his code but is not creating the 'correct' number of automated unit tests, then this is a different problem, which needs a different solution. As others have said, you need to find out why, and fix it. Code reviews are a good way to find these problems. But it sounds like you already know the problems.
{ "language": "en", "url": "https://stackoverflow.com/questions/139228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: What is a good technique or exercise when learning a new language? When you are learning a new language, is there a particularly good/effective exercise to help get the hang of it? And why? EDIT: Preferably looking for things that are more complicated than 'Hello World'. A: Other than hello world, I try to port one of my existing programs to the new language. This will challenge me to learn some good old techniques in the new language and help me build a new library of classes or helpers. A: Larry O'Brien had a great series of blogs titled '15 Exercises to Know a Programming Language': Part 1, Part 2, Part 3. See Larry's blog for the details. Part 1. Calculations * *Write a program that takes as its first argument one of the words 'sum,' 'product,' 'mean,' or 'sqrt' and for further arguments a series of numbers. The program applies the appropriate function to the series. *Write a program that calculates a Haar wavelet on an array of numbers. *Write a program that takes as its argument the name of a bitmapped image. Apply the Haar wavelet to the pixel values. Save the results to a file. *Using the output file of the previous exercise, write a GUI program that reconstitutes the original bitmap (N.B.: the Haar wavelet is lossless). *Write a GUI program that deals with bitmap images. Part 2. Data Structures *Write a class (or module, or what-have-you: please map OOP terminology into whatever paradigm is appropriate) that only stores objects of the same type as the first object placed in it and raises an exception if a non-compatible type is added. *Using the language's idioms, implement a tree-based data structure (splay, AVL, or red-black). *Create a new type that uses a custom comparator (i.e., overrides "Equals"). Place more of these objects than can fit in memory into the data structure created above, as well as into the standard libraries' structures. Compare the performance of the standard libraries with your own implementation.
*Implement an iterator for your data structure. Consider multithreading issues. *Write a multithreaded application that uses your data structure, comparable types, and iterators to implement the type-specific storage functionality as described in Exercise 6. How do you deal with concurrent inserts and traversals? Part 3. Libraries *Write a program that outputs the current date and time to a Web page as a reversed ISO 8601-formatted value (i.e.: "2006-06-16T13:15:30Z" becomes "Z03:51:31T61-60-6002"). Create an XML interface (either POX or WS-*) to the same. *Write a client-side program that can both scrape the above Web page and consume the XML return, and redisplays the date in a different format. *Write a daemon program that monitors an email account. When a strongly-encoded email arrives that decrypts to a valid ISO 8601 time, the program sets the system time to that value. *Write a program that connects to your mail client, performs a statistical analysis of its contents (see A Plan for Spam) and stores the results in a database. *Using the previous exercise, write a spam filter, including moving messages within your mail client. If you can do all these things in 2 languages, I'm sure google has a job for you. A: 'hello world!' I really do think this is a good place to start. It's basic and only takes a few seconds, but you make sure your compiler is running and you have everything in place. Once you have that done you can keep going. Add a variable, print to a database, print to a file. Make sure you know how to leave comments. This could all take a matter of 5 minutes. But it's important stuff. A: Connect to data somehow, whether it be a database, file or other... A: Red-Black tree. A: I usually don't do very well with it unless I have a "real" project to apply it to. Even made-up ones get boring fast. In fact, I find it helpful to throw yourself into the middle of a bigger project and make small changes to something that already works.
YMMV A: My equivalent of a hello world is to do the following: * *Retrieve multiple inputs (i.e., params from the command line, text boxes on a GUI) *Manipulate that input (i.e., do math on numbers and manipulate text) *On a GUI, use a list box. *Read and write files. I feel that after doing the above I get a good feel for the language and a good introduction to the IDE, and to how easy (or really how difficult) it is to work with the language and the environment it runs in. After that, if I want to go further, I will use the language in a real project that I need to do (probably a utility of some kind). A: I usually do the following (in the order presented): * *Print a pyramid with height provided by the user (checks basic I/O, conditionals and loops) *Write a class hierarchy with polymorphism etc... (checks OO concepts) *Convert decimals to roman numerals (checks enums and basic data structures) *Write a linked-list implementation (checks memory allocation/deallocation) *Write clones of JUnit and JMock (checks reflection/metaprogramming) *Write a console-based chat system (checks basic networking) *Modify (6) to support group chat via multicasting (checks advanced networking) *Write a GUI for (7) (checks GUI library) After that it's on to a real project... A: Personally I like to make a simple echo server and client to get the hang of network programming in that language. A: Ray tracer. A: I like to learn a new language by doing a "real" task (for "personal" use). My first Java program was a client for an online multiplayer game (that I then released into the public domain). My first VB.NET program was a front-end for my digital video recorder. My first VHDL "program" was a 64x32 LED array controller. A: Often I'll implement the k-means clustering algorithm. A: Drag-and-drop image gallery. When I was cutting my teeth on Win32 and MFC, this was one of my first projects. Pretty quickly I ported all my code into ActiveX controls. Then I rewrote the thing in Java.
For kicks, I rewrote it again in pure Javascript. When I broke into .NET, I rewrote it again in C#. Last but not least, I used it as an exercise for learning Objective-C and UIKit. Why? It's a visually appealing toy, for one thing. It's nice to get instant gratification from your code, I think, and working with images is one of the most gratifying things I can think of. A: Console-based Tetris A: I like games for learning programming because the business rules are carefully delineated. The first three programs I write in a new language are Ro-Sham-Bo, Blackjack, and Video Poker. A: Pick a task (or tasks) that you already understand. That way you limit the amount of "new stuff" you need to assimilate. A: I think, for me, learning by porting existing code (for example, from another platform) is always a challenge and fun. Just simple demos, board games, etc. A: Mandelbrot set.
{ "language": "en", "url": "https://stackoverflow.com/questions/139239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Relative path in T-SQL? How do I get a relative path in T-SQL? Say, for example, a .sql file is located in the folder D:\temp, and I want to get the path of the file hello.txt in the folder D:\temp\App_Data. How do I use a relative path reference? Let's say I am executing the .sql file inside SQL Server Management Studio. A: The .sql file is just.... a file. It doesn't have any sense of its own location. It's the thing that executes it (which you didn't specify) that would have a sense of its location, and of the file's location. I notice that you mentioned an App_Data folder, so I guess that ASP.NET is involved. If you want to use relative paths in your web app, see MapPath http://msdn.microsoft.com/en-us/library/system.web.httpserverutility.mappath.aspx A: The server is executing the T-SQL. It doesn't know where the client loaded the file from. You'll have to have the path embedded within the script. DECLARE @RelDir varchar(1000) SET @RelDir = 'D:\temp\' ... Perhaps you can programmatically place the path into the SET command within the .sql script file, or perhaps you can use sqlcmd and pass the relative directory in as a variable. A: When T-SQL is executing, it is running in a batch on the server, not on the client machine running Management Studio (or any other SQL client). The client just sends the text contents of the .sql file to the server to be executed. So, unless that file is located on the database server, I highly doubt you're going to be able to interact with it from a SQL script. A: I had a similar problem, and solved it using sqlcmd variables in conjunction with the %CD% pseudo-variable. It took a bit of trial and error to combine all the pieces, but eventually I got it all working. This example expects the script.sql file to be in the same directory as runscript.bat.
runscript.bat
sqlcmd -S .\SQLINSTANCE -v FullScriptDir="%CD%" -i script.sql -b
script.sql
BULK INSERT [dbo].[ValuesFromCSV]
FROM '$(FullScriptDir)\values.csv'
with
(
fieldterminator = ',',
rowterminator = '\n'
)
go
A: The T-SQL script is first preprocessed by Query Analyzer, SSMS or sqlcmd on the client side. These programs are aware of the file's location and could easily handle relative paths, similar to Oracle sqlplus. Obviously this is just a design decision from Microsoft and, I dare say, a rather stupid one. A: I tried the method from mateuscb's comments, but I could not get it to work - I do not know why. I managed after several tests. It works with the script below:
runscript.bat
@set FullScriptDir=%CD%
sqlcmd -S .\SQLINSTANCE -i script.sql
script.sql
BULK INSERT [dbo].[ValuesFromCSV]
FROM '$(FullScriptDir)\values.csv'
with
(
fieldterminator = ',',
rowterminator = '\n'
)
go
Just for your information, for further discussion. A: Well, it's not a Microsoft thing, first off... it's an industry-standard thing. Second, your solution for running T-SQL with a relative path is to use a batch script or something to inject your path statement, i.e.:
@echo OFF
SETLOCAL DisableDelayedExpansion
FOR /F "usebackq delims=" %%a in (`"findstr /n ^^ t-SQL.SQL"`) do (
set "var=%%a"
SETLOCAL EnableDelayedExpansion
set "var=!var:*:=!"
set RunLocation=%~dp0
echo(%~dp0!var! > newsql.sql
ENDLOCAL
)
sqlcmd newsql.sql
or something like that anyway
{ "language": "en", "url": "https://stackoverflow.com/questions/139245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Writing XML files using XmlTextWriter with ISO-8859-1 encoding I'm having a problem writing Norwegian characters into an XML file using C#. I have a string variable containing some Norwegian text (with letters like æøå). I'm writing the XML using an XmlTextWriter, writing the contents to a MemoryStream like this:
MemoryStream stream = new MemoryStream();
XmlTextWriter xmlTextWriter = new XmlTextWriter(stream, Encoding.GetEncoding("ISO-8859-1"));
xmlTextWriter.Formatting = Formatting.Indented;
xmlTextWriter.WriteStartDocument(); //Start doc
Then I add my Norwegian text like this:
xmlTextWriter.WriteCData(myNorwegianText);
Then I write the file to disk like this:
FileStream myFile = new FileStream(myPath, FileMode.Create);
StreamWriter sw = new StreamWriter(myFile);
stream.Position = 0;
StreamReader sr = new StreamReader(stream);
string content = sr.ReadToEnd();
sw.Write(content);
sw.Flush();
myFile.Flush();
myFile.Close();
Now the problem is that in the file on disk, all the Norwegian characters look funny. I'm probably doing the above in some stupid way. Any suggestions on how to fix it? A: You need to set the encoding every time you write a string or read binary data as a string.
Encoding encoding = Encoding.GetEncoding("ISO-8859-1");
FileStream myFile = new FileStream(myPath, FileMode.Create);
StreamWriter sw = new StreamWriter(myFile, encoding);
stream.Position = 0;
StreamReader sr = new StreamReader(stream, encoding);
string content = sr.ReadToEnd();
sw.Write(content);
sw.Flush();
myFile.Flush();
myFile.Close();
A: As mentioned in the above answers, the biggest issue here is the encoding, which is being defaulted because it is unspecified. When you do not specify an encoding for this kind of conversion, the default of UTF-8 is used - which may or may not match your scenario. You are also converting the data needlessly by pushing it into a MemoryStream and then out into a FileStream.
If your original data is not UTF-8, what will happen here is that the first transition into the MemoryStream will attempt to decode using the default encoding of UTF-8 - and corrupt your data as a result. When you then write out to the FileStream, which also uses UTF-8 as its encoding by default, you simply persist that corruption into the file. In order to fix the issue, you likely need to specify an encoding for your Stream objects. You can actually skip the MemoryStream process entirely, too - which will be faster and more efficient. Your updated code might look something more like:
FileStream fs = new FileStream(myPath, FileMode.Create);
XmlTextWriter xmlTextWriter = new XmlTextWriter(fs, Encoding.GetEncoding("ISO-8859-1"));
xmlTextWriter.Formatting = Formatting.Indented;
xmlTextWriter.WriteStartDocument(); //Start doc
xmlTextWriter.WriteCData(myNorwegianText);
StreamWriter sw = new StreamWriter(fs);
fs.Position = 0;
StreamReader sr = new StreamReader(fs);
string content = sr.ReadToEnd();
sw.Write(content);
sw.Flush();
fs.Flush();
fs.Close();
A: Which encoding do you use for displaying the result file? If it is not ISO-8859-1, the file will not display correctly. Is there a reason to use this specific encoding, instead of, for example, UTF-8? A: Why are you writing the XML first to a MemoryStream and then writing that to the actual file stream? That's pretty inefficient. If you write directly to the FileStream it should work. If you still want to do the double write, for whatever reason, do one of two things. Either * *Make sure that the StreamReader and StreamWriter objects you use all use the same encoding as the one you used with the XmlWriter (not just the StreamWriter, like someone else suggested), or *Don't use StreamReader/StreamWriter. Instead just copy the stream at the byte level using a simple byte[] and Stream.Read/Write. This is going to be, btw, a lot more efficient anyway.
A: Both your StreamWriter and your StreamReader are using UTF-8, because you're not specifying the encoding. That's why things are getting corrupted. As tomasr said, using a FileStream to start with would be simpler - but MemoryStream also has the handy "WriteTo" method, which lets you copy it to a FileStream very easily. I hope you've got a using statement in your real code, by the way - you don't want to leave your file handle open if something goes wrong while you're writing to it. Jon A: After investigating, this is what worked best for me:
var doc = new XDocument(new XDeclaration("1.0", "ISO-8859-1", ""));
using (XmlWriter writer = doc.CreateWriter()){
writer.WriteStartDocument();
writer.WriteStartElement("Root");
writer.WriteElementString("Foo", "value");
writer.WriteEndElement();
writer.WriteEndDocument();
}
doc.Save("dte.xml");
{ "language": "en", "url": "https://stackoverflow.com/questions/139260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: How to create a file with a given size in Linux? For testing purposes I have to generate a file of a certain size (to test an upload limit). What is a command to create a file of a certain size on Linux? A: dd if=/dev/zero of=my_file.txt count=12345 A: Use fallocate if you don't want to wait for the disk. Example: fallocate -l 100G BigFile Usage:
Usage: fallocate [options] <filename>
Preallocate space to, or deallocate space from, a file.
Options:
-c, --collapse-range remove a range from the file
-d, --dig-holes detect zeroes and replace with holes
-i, --insert-range insert a hole at range, shifting existing data
-l, --length <num> length for range operations, in bytes
-n, --keep-size maintain the apparent size of the file
-o, --offset <num> offset for range operations, in bytes
-p, --punch-hole replace a range with a hole (implies -n)
-z, --zero-range zero and ensure allocation of a range
-x, --posix use posix_fallocate(3) instead of fallocate(2)
-v, --verbose verbose mode
-h, --help display this help
-V, --version display version
A: This will generate a 4 MB text file with random characters in the current directory, named "4mb.txt". You can change the parameters to generate different sizes and names. base64 /dev/urandom | head -c 4000000 > 4mb.txt A: There are lots of answers, but none explained nicely what else can be done. Looking into the man pages for dd, it is possible to specify the size of a file more precisely.
This is going to create /tmp/zero_big_data_file.bin filled with zeros, with a size of 20 megabytes:
dd if=/dev/zero of=/tmp/zero_big_data_file.bin bs=1M count=20
This is going to create /tmp/zero_1000bytes_data_file.bin filled with zeros, with a size of 1000 bytes:
dd if=/dev/zero of=/tmp/zero_1000bytes_data_file.bin bs=1kB count=1
or
dd if=/dev/zero of=/tmp/zero_1000bytes_data_file.bin bs=1000 count=1
* *In all examples, bs is the block size, and count is the number of blocks *BLOCKS and BYTES may be followed by the following multiplicative suffixes: c=1, w=2, b=512, kB=1000, K=1024, MB=1000*1000, M=1024*1024, xM=M, GB=1000*1000*1000, G=1024*1024*1024, and so on for T, P, E, Z, Y. A: Just to follow up on Tom's post, you can use dd to create sparse files as well:
dd if=/dev/zero of=the_file bs=1 count=0 seek=12345
This will create a file with a "hole" in it on most unixes - the data won't actually be written to disk, or take up any space, until something other than zero is written into it. A: Use this command:
dd if=$INPUT-FILE of=$OUTPUT-FILE bs=$BLOCK-SIZE count=$NUM-BLOCKS
To create a big (empty) file, set $INPUT-FILE=/dev/zero. The total size of the file will be $BLOCK-SIZE * $NUM-BLOCKS. The new file created will be $OUTPUT-FILE. A: As a shell command:
< /dev/zero head -c 1048576 > output
A: On OSX (and Solaris, apparently), the mkfile command is available as well:
mkfile 10g big_file
This makes a 10 GB file named "big_file". Found this approach here. A: For small files:
dd if=/dev/zero of=upload_test bs=file_size count=1
where file_size is the size of your test file in bytes. For big files:
dd if=/dev/zero of=upload_test bs=1M count=size_in_megabytes
A: Please, modern is easier, and faster. On Linux, (pick one)
truncate -s 10G foo
fallocate -l 5G bar
It needs to be stated that truncate, on a file system supporting sparse files, will create a sparse file, while fallocate will not.
A sparse file is one where the allocation units that make up the file are not actually allocated until used. The metadata for the file will still take up some space, but likely nowhere near the actual size of the file. You should consult resources about sparse files for more information, as there are advantages and disadvantages to this type of file. A non-sparse file has its blocks (allocation units) allocated ahead of time, which means the space is reserved as far as the file system is concerned. Also, neither fallocate nor truncate will set the contents of the file to a specified value like dd does; instead the contents of a file allocated with fallocate or truncate may be whatever value existed in the allocated units at creation, and this behavior may or may not be desired. dd is the slowest because it actually writes the value or chunk of data through the entire file stream as specified by its command line options. This behavior could potentially differ depending on the file system used and the conformance of that file system to any standard or specification. Therefore it is advised that proper research be done to ensure that the appropriate method is used. A: You can do it programmatically:
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdlib.h>

int main() {
    int fd = creat("/tmp/foo.txt", 0644);
    ftruncate(fd, SIZE_IN_BYTES);
    close(fd);
    return 0;
}
This approach is especially useful to subsequently mmap the file into memory. Use the following command to check that the file has the correct size:
# du -B1 --apparent-size /tmp/foo.txt
Be careful:
# du /tmp/foo.txt
will probably print 0, because the file is allocated as a sparse file if your filesystem supports it. See also: man 2 open and man 2 truncate A: Some of these answers have you using /dev/zero for the source of your data.
If you're testing network upload speeds, this may not be the best idea: if your application does any compression, a file full of zeros compresses really well. Using this command to generate the file
dd if=/dev/zero of=upload_test bs=10000 count=1
I could compress upload_test down to about 200 bytes. So you could put yourself in a situation where you think you're uploading a 10KB file when it is actually much less. What I suggest is using /dev/urandom instead of /dev/zero. I couldn't compress the output of /dev/urandom very much at all. A: You could do:
[dsm@localhost:~]$ perl -e 'print "\0" x 100' > filename.ext
where you replace 100 with the number of bytes you want written. A: Kindly run the command below to quickly create larger files of a certain size in Linux:
for i in {1..10};do fallocate -l 2G filename$i;done
Explanation: the above command will create 10 files of 2 GB each in just a few seconds.
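To make the sparse-vs-allocated distinction above concrete, here is a small sketch (file names are arbitrary) that creates the same apparent size two ways and compares disk usage with GNU stat. truncate produces a sparse file, while dd actually writes the zeros:

```shell
# Create a 1 MiB file two ways and compare apparent size vs. blocks used.
dd if=/dev/zero of=dd_file bs=1024 count=1024 2>/dev/null   # writes real zeros
truncate -s 1M trunc_file                                    # sparse: no data written

# Apparent size is identical for both...
stat -c '%n %s bytes' dd_file trunc_file

# ...but the sparse file occupies (almost) no disk blocks.
stat -c '%n %b blocks' dd_file trunc_file

rm -f dd_file trunc_file
```

On most Linux filesystems the second stat shows far fewer blocks for trunc_file, which is exactly the trade-off discussed above.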
{ "language": "en", "url": "https://stackoverflow.com/questions/139261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "201" }
Q: Auto update for WinForms application When creating an auto-updating feature for a .NET WinForms application, how does it update the DLLs without affecting the currently running application? Since the application is running during the update process, won't there be a lock on the DLLs (because those DLLs will have to be overwritten during the update)? A: You'll have to shut down your application and restart it, as other people have already commented. I wrote some open-source code to do just that in a transparent mode - including an external update application to do the actual cold update. See http://www.code972.com/blog/2010/08/nappupdate-application-auto-update-framework-for-dotnet/ The code is at http://github.com/synhershko/NAppUpdate (Licensed under the Apache 2.0 license) A: I have a separate 'launcher' application that checks for updates via a web service. If there are updates, it downloads them and then executes my application, which is in a separate assembly. The other alternatives are using things like ClickOnce, or downloading the files to a separate area and restarting the app, as someone else mentioned. Be warned about ClickOnce, though - it's not as flexible as it sounds. And if you deploy to a system that requires elevating your program to a higher security level to run, you might run into problems if you don't have a certificate for your app installed. I found it very difficult to get straight answers on the Internet to things like certificate management when it comes to ClickOnce. If you have a complex app, you may want to just roll your own updater, which is what I ended up having to do. A: If you publish via ClickOnce, all of that tends to be handled for you. It has its own pros and cons, but it's usually easier than trying to code it all yourself. Both Wikipedia and 15seconds have decent info on using ClickOnce, how it works, etc. As others have stated, ClickOnce isn't as flexible as rolling your own solution, but it is a LOT less complicated.
It has a small learning curve at first, but with pretty much everything bundled into Visual Studio and the use of wizards, it usually doesn't take long to stumble onto a working solution. As deployments get more complex (i.e. beyond just having prerequisites or application code that needs updating) and you need to do a lot of post-install or pre-install tasks, there are things like WiX, which give you somewhat of a hybrid solution between Windows Installer and ClickOnce, with the cost of flexibility being a much steeper learning curve. The only reason I try to avoid custom installers is that you end up spending way too much time trying to get them just right to handle a bunch of different "what if" scenarios... A: Usually you would download the new files into a separate area. Then shut down and restart, and at startup you look for and use the new files if found. Always keep a last known working version on the side so that the user can revert to something that definitely works if the download causes problems. ClickOnce is a good technology from Microsoft that does this for you, and you can use it directly from Visual Studio 2008. A: These days Windows can do such updates automatically for you with AppInstaller if your app is packaged in the MSIX package. It downloads the new version of the app into another folder inside ProgramFiles\WindowsApps, then when a user runs the app via the Start menu, the system knows what folder it should use. The previous version gets deleted when not in use. If you want to know how to package your app this way, I collected my findings in this answer.
{ "language": "en", "url": "https://stackoverflow.com/questions/139266", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Get the time of tomorrow xx:xx What is an efficient way to get a certain time for the next day in Java? Let's say I want the long for tomorrow 03:30:00. Setting Calendar fields and Date formatting are obvious. Better or smarter ideas? Thanks for sharing them! Okami A: I'm curious to hear what other people have to say about this one. My own experience is that taking shortcuts (i.e., "better or smarter ideas") with Dates almost always lands you in trouble. Heck, just using java.util.Date is asking for trouble. Added: Many have recommended Joda Time in other Date-related threads. A: I take the brute force approach
// make it now
Calendar dateCal = Calendar.getInstance();
// make it tomorrow
dateCal.add(Calendar.DAY_OF_YEAR, 1);
// Now set it to the time you want
dateCal.set(Calendar.HOUR_OF_DAY, hours);
dateCal.set(Calendar.MINUTE, minutes);
dateCal.set(Calendar.SECOND, seconds);
dateCal.set(Calendar.MILLISECOND, 0);
return dateCal.getTime();
A: I would consider using the predefined API the smart way to do this. A: Not sure why you wouldn't just use the Calendar object? It's easy and maintainable. I agree about not using Date; pretty much everything useful about it is now deprecated. :(
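If you stay with the Calendar approach, the brute-force snippet above can be wrapped in a small self-contained method that returns the long the question asks for. This is only a sketch of the same idea; the method and class names are mine, not from any standard API:

```java
import java.util.Calendar;

public class TomorrowAt {
    // Returns epoch milliseconds for tomorrow at the given wall-clock time.
    static long tomorrowAt(int hours, int minutes, int seconds) {
        Calendar cal = Calendar.getInstance();
        cal.add(Calendar.DAY_OF_YEAR, 1);        // move to tomorrow
        cal.set(Calendar.HOUR_OF_DAY, hours);    // then pin the time of day
        cal.set(Calendar.MINUTE, minutes);
        cal.set(Calendar.SECOND, seconds);
        cal.set(Calendar.MILLISECOND, 0);
        return cal.getTimeInMillis();
    }

    public static void main(String[] args) {
        // Tomorrow at 03:30:00, as a long
        System.out.println(tomorrowAt(3, 30, 0));
    }
}
```

Note that Calendar.getInstance() uses the default time zone, so the result depends on where the code runs; that may or may not be what you want.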
{ "language": "en", "url": "https://stackoverflow.com/questions/139288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Can I run a script when I commit to subversion? I'd like to run a script that builds the documentation for my php project. It is basically just using wget to run phpdoc. A: An alternative to using SVN hooks would be to use a continuous integration engine. Personally, I'm a fan of Hudson. CruiseControl is the classic, but there are a plethora of others. Why use a continuous integration engine? In general, they're more powerful, feature-rich and portable than simply using SVN hooks (what if you want to switch to using Mercurial, Git, etc.?). A: You might want to check out Phing for a complete build scripting tool. You can manage commits, documentation and other build-related activities in one place. A: Here's a fairly extensive tutorial on SVN hooks A: (Answering my own question; I just thought others would like to know as well). Yes, and TortoiseSVN supports it too. The word you are looking for is 'hooks'. For TortoiseSVN, open Settings and 'Hook Scripts'. Click 'Add...' and choose post_commit_hook (for running after the commit is done). Then add whatever script you are running and the working path of the script. I used a batch file and called wget (there is a Windows version ported; google it). To get wget to store the log from phpdoc in one specific path, you must specify the full path, else the log will be stored in the current folder from where you committed, so my batch file looks like this:
SET BUILDLOG=%~dp0%build_log.html
rem %~dp0 returns the full working path *of this script*
SET PHPDOCURL=http://localhost/PHPDocumentor/docbuilder
SET PHPDOCCONFIG=yourconfigfile
wget -O %BUILDLOG% "%PHPDOCURL%/builder.php?setting_useconfig=%PHPDOCCONFIG%&setting_output=HTML%3ASmarty%3Adefault&ConverterSetting=HTML%3ASmarty%3Adefault&setting_title=Generated+Documentation&setting_defaultpackagename=default&setting_defaultcategoryname=default&interface=web&dataform=true"
Now, whenever you commit, the batch script will be called.
You could of course also use php as a command line tool, but I haven't looked into that with phpdoc - I just took the path of least resistance on this one.
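For non-TortoiseSVN (server-side) setups, the same effect comes from a post-commit hook in the repository's hooks directory: Subversion runs it with the repository path and new revision number as arguments. Here is a minimal Python sketch of such a hook; the builder URL, config name, and query parameters are placeholders mirroring the wget example above, not a documented API:

```python
import sys
from urllib.parse import urlencode

# Placeholder builder URL and config name -- substitute your own setup.
DOC_BUILDER = "http://localhost/PHPDocumentor/docbuilder/builder.php"
DOC_CONFIG = "yourconfigfile"

def build_doc_url(base_url, config):
    # Assemble the web-builder URL; the parameter names mirror the wget
    # example above and should be treated as assumptions, not a spec.
    params = {"setting_useconfig": config, "interface": "web", "dataform": "true"}
    return base_url + "?" + urlencode(params)

def main(argv):
    # Subversion invokes the hook as: post-commit <repo-path> <revision>
    repo, revision = argv[1], argv[2]
    url = build_doc_url(DOC_BUILDER, DOC_CONFIG)
    print("commit r%s in %s -> fetching %s" % (revision, repo, url))
    # urllib.request.urlopen(url)  # uncomment to actually trigger the build

if __name__ == "__main__" and len(sys.argv) >= 3:
    main(sys.argv)
```

Drop it in hooks/post-commit (or call it from there), make it executable, and every commit triggers the documentation build.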
{ "language": "en", "url": "https://stackoverflow.com/questions/139315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Setting all values in a std::map How do you set all the values in a std::map to the same value, without using a loop iterating over each value?

A: I encountered the same problem but found that the range returned by boost::adaptors::values is mutable, so it can be used with normal algorithms such as std::fill.

#include <boost/range/adaptor/map.hpp>

auto my_values = boost::adaptors::values(my_map);
std::fill(my_values.begin(), my_values.end(), 123);

A: The boost::assign library has all sorts of neat stuff to help out initializing the contents of a container. My thought was that this could be used to avoid explicitly iterating through the map. Unfortunately, maps are curious beasts, difficult to initialize because the keys must be unique. The bottom line is that a simple for loop is probably the best way to initialize a map. It may not be super elegant, but it gets the job done and is immediately comprehensible by anyone with any acquaintance with the STL.

map<int, string> myMap;
for (int k = 0; k < 1000; k++)
    myMap.insert(pair<int, string>(k, string("")));

The rest of this post describes the journey I took to reach the above conclusion. boost::assign makes it simple to assign a small number of values to a map:

map<string, int> m;
insert(m)("Bar", 1)("Foo", 2);

or

map<int, int> next = map_list_of(1,2)(2,3)(3,4)(4,5)(5,6);

In your case, where you want to initialize the entire map with the same value, there are the utilities repeat and repeat_fun. Something like this should work with a multimap (untested code snippet):

pair<int, string> init(0, string(""));
multimap<int, string> myMap = repeat(1000, init);

As Konrad Rudolph has pointed out, you cannot initialize a map with the same exact value, because the keys must be unique. This makes life much more complex (fun?).
Something like this, perhaps:

map<int, string> myMap;

struct nextkey {
    int start;
    nextkey(int s) : start(s) {}
    pair<int, string> operator()() {
        return pair<int, string>(start++, string(""));
    }
};

myMap = repeat_fun(1000, nextkey(0));

Now this is getting so complex that I think a simple iteration IS the way to go:

map<int, string> myMap;
for (int k = 0; k < 1000; k++)
    myMap.insert(pair<int, string>(k, string("")));

A: Using a loop is by far the simplest method. In fact, it's a one-liner (C++17):

for (auto& [_, v] : mymap) v = value;

Unfortunately C++ algorithm support for associative containers isn't great pre-C++20. As a consequence, we can't directly use std::fill. To use it anyway (pre-C++20), we need to write adapters; in the case of std::fill, an iterator adapter. Here's a minimally viable (but not really conforming) implementation to illustrate how much effort this is. I do not advise using it as-is. Use a library (such as Boost.Iterator) for a more general, production-strength implementation.

template <typename M>
struct value_iter : std::iterator<std::bidirectional_iterator_tag, typename M::mapped_type> {
    using base_type = std::iterator<std::bidirectional_iterator_tag, typename M::mapped_type>;
    using underlying = typename M::iterator;
    using typename base_type::value_type;
    using typename base_type::reference;

    value_iter(underlying i) : i(i) {}

    value_iter& operator++() { ++i; return *this; }
    value_iter operator++(int) { auto copy = *this; i++; return copy; }
    reference operator*() { return i->second; }

    bool operator==(value_iter other) const { return i == other.i; }
    bool operator!=(value_iter other) const { return i != other.i; }

private:
    underlying i;
};

template <typename M> auto value_begin(M& map) { return value_iter<M>(map.begin()); }
template <typename M> auto value_end(M& map) { return value_iter<M>(map.end()); }

With this, we can use std::fill:

std::fill(value_begin(mymap), value_end(mymap), value);
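For comparison, and to show how small the underlying operation is (iterate keys, overwrite values), here is the same fill in Python. This is purely illustrative and not an answer to the C++ question:

```python
# Fill every value of a mapping with one value: the Python analogue
# of the C++ loop and the std::fill-with-adapter approaches above.
my_map = {"a": 1, "b": 2, "c": 3}

# In-place, like `for (auto& [_, v] : mymap) v = value;`:
for key in my_map:
    my_map[key] = 123

# Rebuilt in one expression (the std::fill flavour):
filled = {key: 123 for key in my_map}

# dict.fromkeys constructs a fresh map with a constant value:
fresh = dict.fromkeys(["a", "b", "c"], 123)

assert my_map == filled == fresh == {"a": 123, "b": 123, "c": 123}
```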
{ "language": "en", "url": "https://stackoverflow.com/questions/139325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Can I create a BIDS project in VS2008 with SQL 2005? From what I can find online about using VS2008 to create a integration/reporting services project it appears I need to have SQL 2008. Does anyone know of a work-around that would allow me to use VS2008 with SQL 2005? A: Installing SQL Server 2005 client/workstation tools should install BIDS 2005. A: Your license for SQL2008 should allow you to downgrade and use BIDS2005. I've had VS2005 and VS2008 both installed on the same machine concurrently.
{ "language": "en", "url": "https://stackoverflow.com/questions/139342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Inno Setup: Capture control events in wizard page In a user defined wizard page, is there a way to capture change or focus events of the controls? I want to provide immediate feedback on user input in some dropdowns (e.g. a message box).

A: Took me some time to work it out, but after being pointed in the right direction by Otherside, I finally got it (works for version 5.2):

[Code]
var
  MyCustomPage: TWizardPage;

procedure MyEditField_OnChange(Sender: TObject);
begin
  MsgBox('TEST', mbError, MB_OK);
end;

function MyCustomPage_Create(PreviousPageId: Integer): Integer;
var
  MyEditField: TEdit;
begin
  MyCustomPage := CreateCustomPage(PreviousPageId, 'Caption', 'Description');
  MyEditField := TEdit.Create(MyCustomPage);
  MyEditField.OnChange := @MyEditField_OnChange;
end;

A: Since the scripting in Inno Setup is loosely based on Delphi, the controls should have some events like OnEnter (= control got focus) and OnExit (= control lost focus). You can assign procedures to these events, something like this:

ComboBox.OnExit := ComboBoxExit;

procedure ComboBoxExit(Sender: TObject);
begin
end;

I don't have access to Inno Setup right now, so you will need to look up the available events and parameters for the procedures.
{ "language": "en", "url": "https://stackoverflow.com/questions/139358", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Handling TDD interface changes I've begun to use TDD. As mentioned in an earlier question, the biggest difficulty is handling interface changes. How do you reduce the impact on your test cases as requirements change?

A: Changing an interface requires updating code that uses that interface. Test code isn't any different from non-test code in this respect. It's unavoidable that tests for that interface will need to change. Often when an interface changes you find that "too many" tests break, i.e. tests for largely unrelated functionality turn out to depend on that interface. That can be a sign that your tests are overly broad and need refactoring. There are many possible ways this can happen, but here's an example that hopefully shows the general idea as well as a particular case. For instance, if the way to construct an Account object has changed, and this requires updating all or most of your tests for your Order class, something is wrong. Most of your Order unit tests probably don't care about how an account is made, so refactor tests like this:

def test_add_item_to_order(self):
    acct = Account('Joe', 'Bloggs')
    shipping_addr = Address('123 Elm St', 'etc', 'etc')
    order = Order(acct, shipping_addr)
    item = OrderItem('Purple Widget')
    order.addItem(item)
    self.assertEquals([item], order.items)

to this:

def make_order(self):
    acct = Account('Joe', 'Bloggs')
    shipping_addr = Address('123 Elm St', 'etc', 'etc')
    return Order(acct, shipping_addr)

def make_order_item(self):
    return OrderItem('Purple Widget')

def test_add_item_to_order(self):
    order = self.make_order()
    item = self.make_order_item()
    order.addItem(item)
    self.assertEquals([item], order.items)

This particular pattern is a Creation Method. An advantage here is that your test methods for Order are insulated from how Accounts and Addresses are created; if those interfaces change you only have one place to change, rather than every single test that happens to use Accounts and Addresses.

In short: tests are code too, and like all code, sometimes they need refactoring.

A: I think this is one of the reasons for the trendy argument that interfaces are used too much. However, I disagree. When requirements change, so should your tests. Right? I mean, if the criteria for which you've written the test are no longer valid, then you should rewrite or eliminate that test. I hope this helps, but I think I may have misunderstood your question.

A: There will be an impact. You just have to accept that changing the interface will require time to change the associated test cases first. There is no way around this. However, when you consider the time you save by not trying to find an elusive bug in this interface later, and not fixing that bug during the release week, it is totally worth it.

A: In TDD, your tests aren't tests. They are executable specifications. IOW: they are an executable encoding of your requirements. Always keep that in mind. Now, suddenly it becomes obvious: if your requirements change, the tests must change! That's the whole point of TDD! If you were doing waterfall, you would have to change your specification document. In TDD, you have to do the same, except that your specification isn't written in Word, it's written in xUnit.

A: "What should we do to protect our code and tests from requirements dependency? It seems nothing. Every time the requirements change we must change our code and tests. But maybe we can simplify our work? Yes, we can. And the key principle is: encapsulation of code that might be changed." http://dmitry-nikolaev.blogspot.com/2009/05/atch-your-changes.html

A: You write the tests before you write the code for the new interface.

A: If you are following the Test First approach, there should in theory be no impact of interface changes on your test code.
After all, when you need to change an interface, you'd first change the test case(s) to match the requirements and then go ahead and change your interfaces/implementation until the tests pass. A: When interfaces change, you should expect tests to break. If too many tests break, this means that your system is too tightly coupled and too many things depend on that interface. You should expect a few tests to break, but not a lot. Having tests break is a good thing, any change in your code should break tests. A: If requirements change then your tests should be the first thing to change, rather than the interface. I would start by modifying the interface design in the first appropriate test, updating the interface to pass the newly-breaking test. Once the interface is updated to pass the test, you should see other tests break (as they will be using the outdated interface). It should be a matter of updating the remaining failing tests with the new interface design to get them passing again. Updating the interface in a test driven manner will ensure that the changes are actually necessary, and are testable.
{ "language": "en", "url": "https://stackoverflow.com/questions/139365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: how is MalformedURLException thrown in Java I mean, how does Java decide which protocols are available? I run some code from inside Eclipse, and it works just fine. Then I run the same code from outside Eclipse, and I get "unknown protocol" MalformedURLException. Probably it has to do with the code base, or something? Any hints would be helpful. Thanks!

A: The work of resolving the protocol is done by the URLStreamHandler; handlers are stored in URL.handlers, keyed by protocol in lowercase. The handler, in turn, is created by the URLStreamHandlerFactory at URL.factory. Maybe Eclipse is monkeying with that? Some of the URL constructors take stream handlers, and you can set the factory with URL.setURLStreamHandlerFactory. Here's a web post about developing protocol handlers.

A: The standard Java way of defining protocol handlers is described here: http://java.sun.com/developer/onlineTraining/protocolhandlers/ This relies on the protocol handler class being available on the boot (?) classloader. That doesn't work well with OSGi (and thus Eclipse). OSGi provides a wrapper around this mechanism to allow bundles/plugins to contribute protocol handlers. See: http://www.osgi.org/javadoc/r4v41/org/osgi/service/url/URLStreamHandlerService.html Eclipse also provides its own protocol: bundle-resource (iirc), which definitely won't work outside of Eclipse.

A: Probably a classpath issue. If you are using a protocol that depends on some library (jar) you included, and then exported a JAR from Eclipse, the JAR files you included in your project are probably not being found by the running code outside of Eclipse. You need a manifest file in your jar that will point to the libraries that are needed.
{ "language": "en", "url": "https://stackoverflow.com/questions/139368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: #defines in linker scripts For an embedded system I need to place a few data structures at fixed addresses, so that a separate control CPU can access them at a known location. I'm using linker scripts for the embedded target to accomplish this, plus #defines of those same addresses for the control CPU. It bothers me that these address constants are therefore defined in two places, the linker script and a header file. I'd like to have just one. The best solution I've come up with so far is to have the Makefile run cpp on the linker script, allowing it to #include the same header. Is there a better way to accomplish this? Is there some little-known option to ld, or a naming convention for the linker script, which will automatically run it through cpp?

A: This isn't quite the solution you are looking for, but one option is to utilize the build system to configure these values. Create a config.h.in and a target.ld.in which act as templates, and have the build system produce a config.h with the correct defines and a target.ld with the correct addresses for the target you are building. We use CMake for our embedded systems and it supports this kind of thing. GNU autoconf does too, but I never really liked it personally.

A: You could use the embedded-C specific construct @ to place an object anywhere in the address space:

static struct SOMESTRUCT somestruct @ 0x40000000;
extern int someextint @ 0x3ffffffc;
char somebuffer[77] @ 0x80000000;

Assuming a 32-bit MCU.
{ "language": "en", "url": "https://stackoverflow.com/questions/139373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Recommend an Open Source .NET Statistics Library I need to calculate averages, standard deviations, medians etc. for a bunch of numerical data. Is there a good open source .NET library I can use? I have found NMath, but it is not free and may be overkill for my needs.

A: How about http://ilnumerics.net/ or http://numerics.mathdotnet.com/ (merge from http://www.codeplex.com/dnAnalytics)?

A: I found this on the CodeProject website. It looks like a good C# class for handling most of the basic statistical functions: http://www.codeproject.com/KB/cs/csstatistics.aspx

A: Have a look at MathNet; it is not specifically for statistics, but there might be useful functionality for what you want.

A: Apache Commons Math, run through IKVM.

A: I decided it was quicker to write my own class that just did what I needed. Here's the code...

/// <summary>
/// Very basic statistical analysis routines
/// </summary>
public class Statistics
{
    List<double> numbers;
    public double Sum { get; private set; }
    public double Min { get; private set; }
    public double Max { get; private set; }
    double sumOfSquares;

    public Statistics()
    {
        numbers = new List<double>();
    }

    public int Count { get { return numbers.Count; } }

    public void Add(double number)
    {
        if (Count == 0)
        {
            Min = Max = number;
        }
        numbers.Add(number);
        Sum += number;
        sumOfSquares += number * number;
        Min = Math.Min(Min, number);
        Max = Math.Max(Max, number);
    }

    public double Average { get { return Sum / Count; } }

    public double StandardDeviation
    {
        get { return Math.Sqrt(sumOfSquares / Count - (Average * Average)); }
    }

    /// <summary>
    /// A simplistic implementation of Median.
    /// Returns the middle number if there is an odd number of elements (correct).
    /// Returns the number after the midpoint if there is an even number of elements.
    /// Sorts the list on every call, so should be optimised for performance if planning
    /// to call lots of times.
    /// </summary>
    public double Median
    {
        get
        {
            if (numbers.Count == 0)
                throw new InvalidOperationException("Can't calculate the median with no data");
            numbers.Sort();
            int middleIndex = Count / 2;
            return numbers[middleIndex];
        }
    }
}

A: You have to be careful. There are several ways to compute standard deviation that would give the same answer if floating point arithmetic were perfect. They're all accurate for some data sets, but some are far better than others under some circumstances. The method I've seen proposed here is the one that is most likely to give bad answers. I used it myself until it crashed on me. See Comparing three methods of computing standard deviation.

A: AForge.NET has an AForge.Math namespace providing some basic statistics functions: Histogram, mean, median, stddev, entropy.

A: If you just need to do some one-off number crunching, a spreadsheet is far and away your best tool. It's trivial to spit out a simple CSV file from C#, which you can then load up in Excel (or whatever):

class Program
{
    static void Main(string[] args)
    {
        using (StreamWriter sw = new StreamWriter("output.csv", false, Encoding.ASCII))
        {
            WriteCsvLine(sw, new List<string>() { "Name", "Length", "LastWrite" });
            DirectoryInfo di = new DirectoryInfo(".");
            foreach (FileInfo fi in di.GetFiles("*.mp3", SearchOption.AllDirectories))
            {
                List<string> columns = new List<string>();
                columns.Add(fi.Name.Replace(",", "<comma>"));
                columns.Add(fi.Length.ToString());
                columns.Add(fi.LastWriteTime.Ticks.ToString());
                WriteCsvLine(sw, columns);
            }
        }
    }

    static void WriteCsvLine(StreamWriter sw, List<string> columns)
    {
        sw.WriteLine(string.Join(",", columns.ToArray()));
    }
}

Then you can just 'start excel output.csv' and use functions like "=MEDIAN(B:B)", "=AVERAGE(B:B)", "=STDEV(B:B)". You get charts, histograms (if you install the analysis pack), etc. The above doesn't handle everything; generalized CSV files are more complex than you might think. But it's "good enough" for much of the analysis I do.
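The numerical warning in the "You have to be careful" answer is easy to demonstrate. The Statistics class computes E[x^2] - E[x]^2, which cancels catastrophically when the mean is large relative to the spread; Welford's one-pass algorithm avoids the cancellation. A Python sketch of the comparison (the sample data is made up):

```python
import math

def naive_stddev(xs):
    # Same formula as the C# StandardDeviation above: E[x^2] - E[x]^2.
    n = len(xs)
    mean = sum(xs) / n
    var = sum(x * x for x in xs) / n - mean * mean
    # Rounding can push var below zero here -- that is exactly the bug.
    return math.sqrt(max(var, 0.0))

def welford_stddev(xs):
    # Welford's one-pass algorithm: numerically stable running update.
    mean, m2 = 0.0, 0.0
    for i, x in enumerate(xs, start=1):
        delta = x - mean
        mean += delta / i
        m2 += delta * (x - mean)
    return math.sqrt(m2 / len(xs))

small = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # true population stddev: 2.0
big = [x + 1e9 for x in (4.0, 7.0, 13.0, 16.0)]   # same small spread, huge mean

# Both agree on well-behaved data; only Welford stays accurate on `big`.
print(naive_stddev(small), welford_stddev(small))
print(naive_stddev(big), welford_stddev(big))
```

On the shifted data, the naive formula subtracts two numbers around 1e18 whose true difference is only 22.5, so most or all of the significant digits are lost; the same one-pass refactoring applies directly to the C# class above.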
{ "language": "en", "url": "https://stackoverflow.com/questions/139384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Least favourite design pattern OK, I'm not looking for anti-patterns - I'm looking for things that aren't really patterns, or perhaps patterns that have been abused. My personal least favourite is the "Helper" pattern. E.g. I need to create a SQL query, so call SQLQueryHelper. This needs to process some strings, so it in turn calls StringHelper. Etc., etc. See - this isn't really a design pattern at all... [edit] Don't you people who vote this down think you should add a constructive remark?

A: I think that Design Patterns should not be used blindly, implementing them simply because it's cool: they have a well-specified CONTEXT, and using them when appropriate MAY help, but in any other case they're just a waste of time, when not a hindrance to the correct functioning of the system.

A: My least favourite is the "put 'Helper' or 'Manager' on to the end of the class name" pattern. EDIT: And I've proved that I'm a lame programmer by, as Chris pointed out, forgetting "Util".

A: Singleton. It's a global variable in disguise and difficult to mock/stub for unit testing. Service Locator is better; Dependency Injection / Inversion of Control better still. The majority of references on the Wikipedia article are about why it is evil.

A: Strategy. The reason being that I suspect most people are taught to implement it using a class and a method. Consider the following Haskell code:

ascending = sortBy compare somelist
descending = sortBy (flip compare) somelist
pairsBySecondComponent = sortBy (comparing snd) somelist

That's the Strategy pattern in action: compare, (flip compare) and (comparing snd) are the concrete strategies in this case (they're plain old functions), and the function signature a -> a -> Ordering is the strategy "interface". The brevity of this illustrates that design patterns don't have to be so heavyweight or bulky. The way you want to implement Strategy in Java (interface, classes) is not a good way.
It's a way that works around Java giving you the wrong abstractions for the job you need to do. This should not be considered normal or acceptable. For that reason, assuming my assumption about the way it's taught is correct, I don't like the Strategy pattern very much. There are some other patterns which are also specific instances of the general "Function Pointer" pattern. I don't like them very much either, for very much the same reasons.

A: 'Manager' classes. E.g. DataManager, BusinessLogicManager, WidgetManager. What the **** does 'Manager' mean? Be more specific! If your WidgetManager has so many Widget responsibilities that there is no more specific name, then break it down. This is a conversation I have had too many times with myself when looking at old code.

A: MVP. It's MVC but broken. Oh no but wait, developing an application IS completely different from following good practice such as "It's just a view".

Update: I reference "It's just a view", which is from the book The Pragmatic Programmer. My main issue is that almost every single MVP implementation has the view holding onto the presenter and telling the presenter to do things. This is conceptually backwards. The UI should not have a dependency on the logic. It is "just a view". The logic is the primary reason for the application; how that logic is displayed is a secondary concern. I could use one WinForm, or I could use many. Hell, I could pipe the whole thing out into ASCII text, or create the "view" by sending charges down a wire attached to an artist who renders the view via the medium of interpretive dance. Practically speaking this premise does have some viable uses. Some of the controllers I've written in the past have MANY views that are externally exposed and can be pushed into the UI as the application sees fit. Consider a live feed of data. I could present this as stats, as line graphs, as pie charts. Perhaps all at the same time!

Then the view holding onto the controller looks kinda silly; the parent is the controller and the children are the views. A traditional (form holds presenter) MVP implementation has other consequences. One being that your UI now has a dependency on code that performs the logic; this means it will also require references to everything that logic needs (services etc.). The way to fix this is to pass in an interface (again, most MVP implementations I see have the form creating the presenter, but hey). Then it becomes a workable model, although I've never been a fan of passing in args to a form constructor. At the end of the day it feels like people are twisting things around attempting to justify a model that is broken. I am of the personal belief that the MVP pattern purely exists as an attempt to rationalise how Visual Studio Windows Forms applications work. They start with a "Form First" mentality: "Oh hai, here is your form, now go and draggy drop your controls and stuff logic into the UI". Anyone with experience with any apps that are beyond a util appreciates that this way of working does not scale. I see MVP as a way of making this scale, but it just feels like an architectural band-aid around the broken "form first" development model the IDE promotes. I argue that MVC is just MVC; MVP is a bastardisation of the pattern. In fact the whole definition of MVC is kinda backwards. The important part of it is separation of concerns: the UI, the logic, the data and/or services you're consuming. Keep these separate. You don't implement MVC to do this; you do this, and by doing so you end up with a form of MVC. MVP doesn't fit into this, because you don't end up with MVP by starting from separation of concerns; you end up with MVP if you're stuck in "Form First" land and feel you should be doing things a bit more MVCish. That's my take on it anyway....
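The Strategy-as-plain-function point made with Haskell earlier in this thread carries over to any language with first-class functions. A Python sketch of the same three sort strategies, passed as ordinary values:

```python
from operator import itemgetter

somelist = [(3, "c"), (1, "b"), (2, "a")]

# Each "strategy" is an ordinary function or callable passed as a value;
# no interface declaration, no one-method class.
ascending = sorted(somelist)
descending = sorted(somelist, reverse=True)
by_second = sorted(somelist, key=itemgetter(1))  # like `comparing snd`

assert ascending == [(1, "b"), (2, "a"), (3, "c")]
assert descending == [(3, "c"), (2, "a"), (1, "b")]
assert by_second == [(2, "a"), (1, "b"), (3, "c")]
```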
{ "language": "en", "url": "https://stackoverflow.com/questions/139387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Should I use Qt Jambi in Java? Is it a good idea for me to use Qt Jambi in Java as a toolkit? I see that Qt Jambi is hard to learn, and Swing is easier than Qt Jambi, but I think that Qt Jambi is more powerful.

A: If you think being familiar with Qt would be useful in the future, when you might want to develop in C++ (or change some Qt-based software), I think it would be nice.

A: Qt Jambi is much simpler and easier to use than Swing and SWT. I think the only drawback of Qt Jambi is that there is not much documentation about it, and forums answering questions about Qt Jambi are rare (this is the only one I found :D). Hope this helps.

A: Two years ago, I started a Java desktop application and used Swing as a GUI framework. Up to that point, I had experience with C++/MFC (shudder) and C++/Qt (very nice). After trying to get along with Swing for a while (including reading lots of tutorials and even a book) I came to the following conclusion: Swing is much more difficult and clumsy than Qt for three reasons:

* A lot of simple stuff requires more code than it should.
* Some things that Qt brings for free are almost impossible to achieve in a reasonable amount of time.
* Swing doesn't bring a WYSIWYG GUI editor, and I could not find a free one that comes close to Qt's Designer.

I then threw away the Swing GUI, switched to Qt Jambi and was really impressed by it. One weekend later I had a nice Qt GUI and lived happily ever after.
{ "language": "en", "url": "https://stackoverflow.com/questions/139389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: WikiRank Search I am working on a new search concept that I am calling WikiRank. An extension will be installed in the Firefox browser. This extension will watch the user's activity and send each visited URL and its visit time; the user can also give a rating to the material. So, in a nutshell, there will not be any change to the user interface: the user will use whatever search engine he likes, but we will collect all of the searched websites. Soon I will have data such as how many hours the world has spent on a particular idea, together with a combined user rank. This database will be useful for getting better search results. How can I make this website? Is the idea good or unique? How can I prevent bots or AI agents from giving false ranks? Maybe one captcha at the end of submission of all links?

A: It seems that you're re-inventing the wheel... again. Did you check any prior art (there is plenty, obviously) on what you're planning to do? From your description it does not sound like a 'new' concept to me at all.

A: Mainly I want to collect page visit time per URL per user. The user can also send an optional rank (1-10) for the content of the page.
{ "language": "en", "url": "https://stackoverflow.com/questions/139394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is the best solution? Using WF StateMachine to follow user states on a web application I was looking at WF and was wondering if it would be best to use the StateMachine workflow to follow a user's state, i.e. user is anonymous, user is known but not authenticated, user is known and authenticated. Would this be a good way to go about solving this regular issue?

A: No, I think that it is completely inappropriate. Please see these questions for more info:

* What are your experiences with Windows Workflow Foundation?
* When to use Windows Workflow Foundation?

A: In my experience, WF is far too heavyweight for any such use - it is too difficult to do simple things with it. It is useless for this scenario. I'd certainly be interested in opposite experiences, though - has anyone successfully used WF on a small scale in a simple project? Workflows and state machines are integral parts of any logical business domain, but I have never seen a straightforward implementation of WF or any other framework for it.

A: You might be interested in my SO answer regarding Stateless, a lightweight .NET state machine. I have used this instead of WF and have implemented it in a web environment.
{ "language": "en", "url": "https://stackoverflow.com/questions/139406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Log Post Parameters sent to a website Something I have always been curious about: is there a tool or utility that will allow me to log POST parameters sent to a website? Not a personal website - any site on the web. The reason is that I want to be able to develop a .NET application without adding the overhead of creating a WebBrowser object and then using the DOM to automate tasks on a website. I also want to use it to test the security of my localhost development server, as I don't have VS Team System.

A: Check out Fiddler (http://www.fiddler2.com/fiddler2/) - it's quite a good debugging proxy, which allows for deep inspection and modification of the traffic between your browser and the websites.
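If a full proxy like Fiddler is more than you need, the core idea (sit between client and site, record request bodies) can be sketched with Python's standard library. This is a toy request handler that logs POST parameters, not a production proxy:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

def parse_post_body(raw: bytes) -> dict:
    # Decode a urlencoded POST body into {name: [values]}.
    return parse_qs(raw.decode("utf-8"))

class LoggingHandler(BaseHTTPRequestHandler):
    # Logs the path and form fields of every POST it receives.
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        params = parse_post_body(self.rfile.read(length))
        print("POST", self.path, params)
        self.send_response(200)
        self.end_headers()

# To run it (then point a form or test client at localhost:8080):
# HTTPServer(("localhost", 8080), LoggingHandler).serve_forever()
```

This only sees traffic sent to it directly; to observe traffic to arbitrary sites you still need a real intercepting proxy such as Fiddler, with the browser configured to use it.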
{ "language": "en", "url": "https://stackoverflow.com/questions/139409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is a Gantt Chart larger than a single page ever useful? I've worked on a few projects managed through the use of a Gantt chart. Some of these have had a massive number of tasks, and the project manager spends all their time wrestling with MS Project instead of making good choices. I can see the point if there are a number of separate teams working towards something (e.g. legal, IT, marketing) to manage a project overall. Has anyone participated in a software development project that has used a Gantt chart with any success?

A: Micromanaging software development projects using MS Project is one of the more stupid things someone can do, especially in an agile environment. Too many things take 1/10th or 10x the time that you predicted, too many things overrun, and too many project planning meetings eat up useful work time. In addition, being a slave to the Gantt chart is a very common thing you see, especially with project managers that come from different disciplines. However, they are useful for ensuring that actions (get account with XYZ set up, get compliance to check wording on website, etc.) are completed by certain deadlines. Coarse-grained deadlines for programming tasks are fine as well. All in my opinion; I'm certain that there are people who have had successful results from micromanaging programming teams.

A: We always use Gantt charts in project planning. They are always useful - after everything is said and done, the Gantt chart is one of the best tools to visualize your project. It is, however, a tool. If the tool is used properly it is effective. If it is not, it can be counter-effective. You need to know how to plan your project properly. You need to understand what should be included in the list of tasks and how. For example, for an IT project it is almost always useless going down to the level of individual assignments (create a table for storing user data).
Keep it on the story level (allow users to log in), assign the whole team to it, and the planning will become much easier. Later on you can always go down to the individual task level, and you can create a separate project plan to handle the assignments for a separate task.

A: Yes, I have seen it successful. In the cases where it was successful they used a hierarchical approach. Rather than having a single massive Gantt chart with hundreds of tasks, there was a master chart for the whole project, with top-level goals. Then there were separate charts for the accomplishment of the sub-goals. Although this limits flexibility in one way (you can't automatically balance resources across sub-goal teams), it seems to match better the way humans operate well: in small or mid-sized teams.

A: I am not a Project Manager Professional, but I run development projects for complex program analysis tools. I drew Gantt charts of page size back in the late 70s. They never had enough detail, so I gave it up. Gantt charts with 5 tasks are useless. Starting in the 90s, I have used MS Project on some 10 real projects of 6-12 months in length, with tasks of roughly 1 week size and anywhere between 50-250 tasks organized into hierarchies related to software architecture elements, with teams of 5-10 people. Such plans print out as a grid of 3 by 5 full pages, and we tended to stick this to a wall where it could be seen. These are wonderful for planning purposes, because they force you to detail out the major activities on a project, write down descriptions, sequence and prioritize. The team can see the tasks, see which ones are theirs, and you can review it with the team members so that everyone can provide useful feedback about durations and task ordering. What they were NOT useful for was seriously tracking project progress. Sun Tzu tells us no plan survives the battlefield intact, and Gantt charts are no exception.
It is true that with care, one could revise the plan carefully every few weeks and mark progress made. A "real Project Manager" might have done that. We were careful enough so that the original plan tasks held up pretty well through about half the project, and by that time people pretty well understood the problem and additional re-planning occurred, but informally rather than with MS Project. I have also used MS Project to plan many similar tasks for serious time and estimation purposes. This has the unfortunate side effect of producing realistic estimates with most of the costs visible. It's amazing how realistic estimates kill project proposals. Industry seems to want bad underestimates to start projects; it's no wonder so many overrun time and budget. I have a love-hate relationship with MS Project itself. It takes task descriptions, task precedence, and resource assignments. But I cannot say, "I prefer this task to complete first over that task", which would act as an optional task precedence, and I cannot assign a resource partly to one task and partly to another and get any sensible schedule. But for complex project estimation, I don't see how you can live without this. The agile people will tell you you can't plan; I don't know who they have as customers. I've never found a customer that was willing to let me work without a plan or a dollar/time budget he would hold me to. A: And is a Gantt chart smaller than a single page ever useful? That little information could easily sit on a whiteboard or Post-it or whatever you have always in sight. There's not really any reason why you should start to wrestle with any Gantt tool when you can specify the necessary information in a minute with a pencil on whatever paper you have. A: I have worked on one project where a Gantt chart/MS Project file was used to successfully manage the project. The project information was maintained by a non-developer manager who met with the team individually to obtain status updates.
This system seemed to work fairly well, and the Gantt chart gave the entire team a quick look at status. And in talking with a friend of mine who works at a company that uses this approach, it seems to work very well for their teams. On other projects I have worked on where the development lead is expected to maintain the chart, it has not been successful. The lead usually spends extra time trying to wrestle with MS Project. And if the culture focuses on punishing schedule delays and not on resolving the problems, then the Gantt chart can easily be manipulated to show a project on schedule until the delivery date. In those cases the Gantt chart becomes an extra piece of work that provides no value to the project. I think the key point is to have a person outside of the development team who updates the MS Project file. And the Gantt chart should be viewed as a tool to use for communication about project status, possible schedule delays, and planning for resource needs. With these items in place, the Gantt chart can be helpful. A: I've found Gantt charts to be useful for planning a project's timeline and allowing for X days of vacation, slippage, etc. They're also great for making sure all resources are 100% allocated throughout the entire project. When actually working on the project, as both a developer and team leader, I've found it best to work in short iterations with clear tasks defined for the whole team. As things slip, change, or people are added/removed from the project, it's nice to be able to adjust the Gantt chart and see the outcome of the changes on the project. A: I've worked on a couple of projects where we used Gantt charts. Yes, they were useful, and yes, they were larger than a page. What we did was to cut and paste (literally, with scissors and glue) the chart into a single big chart and put it on the wall.
McConnell in his excellent Software Project Survival Guide recommends having something that every member of a team can look at to get a rough idea of whether they are on track, and this was it for us. A: I wholeheartedly agree with the comments about Gantt charts not suiting agile development - where we don't have a clear understanding of the details of the implementation at the outset. On the other hand, I can't help but fondly remember a painful weekend I spent putting together a Gantt chart for a project I was managing where the technology and requirements were very well-understood and the schedule was critical. We had the entrance wall to our cubicle section partially covered with this Gantt chart (5 A4 pages wide), and having it was extremely useful for making sure we were working to the critical path - getting the things done that needed to be done right now - and also made it possible for me to report to the project board with detailed reports on how the project was progressing against the schedule. The usefulness of Gantt charts definitely depends on the context, but I'd say that if you know your requirements, and particularly if you have a lot of importance attached to your schedule, they can be incredibly useful. A: I'm not a big fan of Gantt charts, especially ones created in MS Project - so much page space is taken for so little information, and at worst (like most graphs) information is distorted or hidden. If a Gantt chart helps the team review what is required, who is assigned to what task, what tasks are slipping, where the risks are - then it's useful - however - most Gantt charts are developed at the start of the project and then never seen or used again. So, back to the original question - does size matter????
to quote from a recent book - if it's stupid and works, then it's not stupid A: A Gantt chart is only useful if it has sufficient details for a project manager to see what the status of the project is, and to have dependencies and resources properly accounted for. You can always stick several sheets together with sticky tape. The discipline of analyzing the project - breaking it down into phases, steps, deliverables, determining the resource requirements - is probably even more important than printing out a pretty chart. Many of my non-trivial development projects have benefited from the sensible use of project planning tools. A: Yes, I have seen it successful. In the cases where it was successful they used a hierarchical approach. ... Absolutely. I would also suggest it works best when the hierarchy also maps to the team hierarchy, eg Project Manager creates the high level chart, Team Leads manage the middle level charts, with Developers maybe managing their own Gantt charts, but more likely using something like JIRA, as this provides a single point of focus from the developer's perspective, and is usually more palatable (ie it doesn't look like a plan, so doesn't scare them ;) ) A: The best use I have had from Gantt charts in MS Project is for capacity planning in the embryonic stages of a project. You can do broad-brush scheduling and what-ifs, moving stuff around, moving people, adding in various milestones, etc. This can give you confidence that you are developing a reasonable plan. But then using it for day-to-day tracking of the myriad little tasks that people actually have to do is, as has been mentioned, asking for trouble. What I have used to great effect for managing timescales at this micro level is Fogbugz's EBS stuff. You have to work at it, but it really does help in keeping a handle on things. To summarise by repetition - Gantt is great for developing a software plan, but not for the day-to-day detailed update of it.
As to the page question, most of the plans I've done that have worked best (i.e. communicated the plan to people at an understandable level of detail) can be made to fit into an A3 printout. Occasionally a couple of A3 pages. But A4 is simply too small in most cases in my experience.
{ "language": "en", "url": "https://stackoverflow.com/questions/139411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Which Coding convention to follow for PHP? Should I stick with Sun's Java code conventions for PHP code? A: You have a number of options: Zend: http://framework.zend.com/manual/en/coding-standard.html Pear: http://pear.php.net/manual/en/standards.php Wordpress: http://codex.wordpress.org/WordPress_Coding_Standards But like prakash suggests, Zend is a good choice. A: You should be following one of the PSR standards for PHP approved by the Framework Interop Group * *PSR-0 - Aims to provide a standard file, class and namespace convention. *PSR-1 - Aims to ensure a high level of technical interoperability between shared PHP code. *PSR-2 - Provides a Coding Style Guide for projects looking to standardise their code. *PSR-3 - Describes a common interface for logging libraries; the LoggerInterface exposes eight methods to write logs to the eight RFC 5424 levels. *PSR-4 - Describes a specification for autoloading classes from file paths. It is fully interoperable, and can be used in addition to any other autoloading specification, including PSR-0. A: For PHP, I'd suggest following Zend's suggestions. As you might know, Zend is the most widely used framework!
For instance, Zend Studio has the ZF coding guidelines built-in, so it's just a click to format code to that convention. A: If you are in a business, follow the business's code convention. If it's for a personal project you can get the specific language specification (if you do Java then Java, if you do PHP then PHP). If it's your personal project you can change a few things if you desire... If you do an open source project, you should go see what's already in place. A: As Gordon says, the Zend and PEAR standards are the effective industry standard. However, the company's code quite possibly pre-dates these, so depending on the size of the code base there may be little value in investing the time to make the move to one of these. (That said, if they ever want to use static code analysis tools you could possibly use this as an impetus to seriously consider moving to Zend, etc.) However, being realistic, as long as they have a sensible standard that they stick to there's no real issue here - you'll find yourself adjusting how you "see" the code accordingly. A: There are pros and cons to any coding style. I spend a lot of time working with code from many sources doing integrations, so I sometimes end up seeing many different styles in a single day (different naming conventions, brace placement, tabs vs. spaces, etc.) As far as I'm concerned, the most important thing if you are working with existing code is to follow the style of the code that you are editing. If you don't, you make things harder for anyone following after you. If you are writing new code, then you should have freedom to do it the way that makes you most efficient. I find that company coding guidelines are often far too detailed and end up being forgotten after a few years and a bit of churn in the software team ;-) A: There are many different coding conventions out there. Have a look at what other people use (read some example code and see how easy it is to understand what is being done) and take your pick.
The important part is to choose one and stick to it. A: Coding styles vary between groups and it isn't a one size fits all type of thing. The most important thing is having a standard that's followed consistently and not going overboard. Too many rules can be just as bad as not enough. I used to prefer the K&R style (the second one). After having to adjust to the Allman style (your preference), I now feel that it makes code more readable and have changed my preference. This Wikipedia article is a decent place to start. It also includes a link to the PEAR Coding Standards, among others.
{ "language": "en", "url": "https://stackoverflow.com/questions/139427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: Is PHP international? Can you create websites with Chinese characters in PHP? UPDATE: Perhaps I should have said - is it straightforward? Because some languages like Java make it extremely easy. Perhaps localisation in PHP isn't as easy as in Java??? A: Yes on both counts. Read this guide on building Chinese websites in PHP. A: Yes, but you need to know what you are doing... Read this article (search for PHP's Problem with Character Encoding) for starters. A: Sure. http://www.onlamp.com/pub/a/php/2002/11/28/php_i18n.html
{ "language": "en", "url": "https://stackoverflow.com/questions/139466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I capture the result of var_dump to a string? I'd like to capture the output of var_dump to a string. The PHP documentation says: As with anything that outputs its result directly to the browser, the output-control functions can be used to capture the output of this function, and save it in a string (for example). What would be an example of how that might work? print_r() isn't a valid possibility, because it's not going to give me the information that I need. A: Try var_export You may want to check out var_export — while it doesn't provide the same output as var_dump, it does provide a second $return parameter which will cause it to return its output rather than print it: $debug = var_export($my_var, true); Why? I prefer this one-liner to using ob_start and ob_get_clean(). I also find that the output is a little easier to read, since it's just PHP code. The difference between var_dump and var_export is that var_export returns a "parsable string representation of a variable" while var_dump simply dumps information about a variable. What this means in practice is that var_export gives you valid PHP code (but may not give you quite as much information about the variable, especially if you're working with resources).
Demo:

$demo = array(
    "bool"     => false,
    "int"      => 1,
    "float"    => 3.14,
    "string"   => "hello world",
    "array"    => array(),
    "object"   => new stdClass(),
    "resource" => tmpfile(),
    "null"     => null,
);

// var_export -- nice, one-liner
$debug_export = var_export($demo, true);

// var_dump
ob_start();
var_dump($demo);
$debug_dump = ob_get_clean();

// print_r -- included for completeness, though not recommended
$debug_printr = print_r($demo, true);

The difference in output: var_export ($debug_export in above example):

array (
  'bool' => false,
  'int' => 1,
  'float' => 3.1400000000000001,
  'string' => 'hello world',
  'array' => array ( ),
  'object' => stdClass::__set_state(array( )),
  'resource' => NULL, // Note that this resource pointer is now NULL
  'null' => NULL,
)

var_dump ($debug_dump in above example):

array(8) {
  ["bool"]=> bool(false)
  ["int"]=> int(1)
  ["float"]=> float(3.14)
  ["string"]=> string(11) "hello world"
  ["array"]=> array(0) { }
  ["object"]=> object(stdClass)#1 (0) { }
  ["resource"]=> resource(4) of type (stream)
  ["null"]=> NULL
}

print_r ($debug_printr in above example):

Array
(
    [bool] =>
    [int] => 1
    [float] => 3.14
    [string] => hello world
    [array] => Array ( )
    [object] => stdClass Object ( )
    [resource] => Resource id #4
    [null] =>
)

Caveat: var_export does not handle circular references. If you're trying to dump a variable with circular references, calling var_export will result in a PHP warning:

$circular = array();
$circular['self'] =& $circular;
var_export($circular);

Results in:

Warning: var_export does not handle circular references in example.php on line 3
array (
  'self' => array ( 'self' => NULL, ),
)

Both var_dump and print_r, on the other hand, will output the string *RECURSION* when encountering circular references.
A: You could also do this: $dump = print_r($variable, true); A: Use output buffering:

<?php
ob_start();
var_dump($someVar);
$result = ob_get_clean();
?>

A: If you want to have a look at a variable's contents during runtime, consider using a real debugger like Xdebug. That way you don't need to mess up your source code, and you can use a debugger even while normal users visit your application. They won't notice. A: Here is the complete solution as a function:

function varDumpToString ($var) {
    ob_start();
    var_dump($var);
    return ob_get_clean();
}

A: If you are using PHP>=7.0.0:

function return_var_dump(...$args): string {
    ob_start();
    try {
        var_dump(...$args);
        return ob_get_clean();
    } catch (\Throwable $ex) {
        // PHP8 ArgumentCountError for 0 arguments, probably..
        // in php<8 this was just a warning
        ob_end_clean();
        throw $ex;
    }
}

or if you are using PHP >=5.3.0:

function return_var_dump(){
    ob_start();
    call_user_func_array('var_dump', func_get_args());
    return ob_get_clean();
}

or if you are using PHP<5.3.0 (this function is actually compatible all the way back to PHP4):

function return_var_dump(){
    $args = func_get_args(); // For <5.3.0 support ...
    ob_start();
    call_user_func_array('var_dump', $args);
    return ob_get_clean();
}

(prior to 5.3.0 there was a bug with func_get_args if used directly as an argument for another function call, so you had to put it in a variable and use the variable, instead of using it directly as an argument..) A: This may be a bit off topic. I was looking for a way to write this kind of information to the Docker log of my PHP-FPM container and came up with the snippet below. I'm sure this can be used by Docker PHP-FPM users. fwrite(fopen('php://stdout', 'w'), var_export($object, true)); A: Also echo json_encode($dataobject); might be helpful A: You may also try to use the serialize() function. Sometimes it is very useful for debugging purposes.
A: From the PHP manual: This function displays structured information about one or more expressions that includes its type and value. So, here is the real return version of PHP's var_dump(), which actually accepts a variable-length argument list:

function var_dump_str() {
    $argc = func_num_args();
    $argv = func_get_args();
    if ($argc > 0) {
        ob_start();
        call_user_func_array('var_dump', $argv);
        $result = ob_get_contents();
        ob_end_clean();
        return $result;
    }
    return '';
}

A: I really like var_dump()'s verbose output and wasn't satisfied with var_export()'s or print_r()'s output because it didn't give as much information (e.g. data type missing, length missing). To write secure and predictable code, sometimes it's useful to differentiate between an empty string and a null. Or between a 1 and a true. Or between a null and a false. So I want my data type in the output. Although helpful, I didn't find a clean and simple solution in the existing responses for converting the colored output of var_dump() into a human-readable string without the HTML tags, while keeping all the details from var_dump(). Note that if you have a colored var_dump(), it means that you have Xdebug installed, which overrides PHP's default var_dump() to add HTML colors. For that reason, I created this slight variation giving exactly what I need:

function dbg_var_dump($var) {
    ob_start();
    var_dump($var);
    $result = ob_get_clean();
    return strip_tags(strtr($result, ['=&gt;' => '=>']));
}

This returns the nice string below:

array (size=6)
  'functioncall' => string 'add-time-property' (length=17)
  'listingid' => string '57' (length=2)
  'weekday' => string '0' (length=1)
  'starttime' => string '00:00' (length=5)
  'endtime' => string '00:00' (length=5)
  'price' => string '' (length=0)

Hope it helps someone.
Object or Array: var_dump('<pre>'.json_encode($var).'</pre>);' A: From http://htmlexplorer.com/2015/01/assign-output-var_dump-print_r-php-variable.html: var_dump and print_r functions can only output directly to browser. So the output of these functions can only retrieved by using output control functions of php. Below method may be useful to save the output. function assignVarDumpValueToString($object) { ob_start(); var_dump($object); $result = ob_get_clean(); return $result; } ob_get_clean() can only clear last data entered to internal buffer. So ob_get_contents method will be useful if you have multiple entries. From the same source as above: function varDumpToErrorLog( $var=null ){ ob_start(); // start reading the internal buffer var_dump( $var); $grabbed_information = ob_get_contents(); // assigning the internal buffer contents to variable ob_end_clean(); // clearing the internal buffer. error_log( $grabbed_information); // saving the information to error_log }
{ "language": "en", "url": "https://stackoverflow.com/questions/139474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "666" }
Q: How universally is C99 supported? How universally is the C99 standard supported in today's compilers? I understand that not even GCC fully supports it. Is this right? Which features of C99 are supported more than others, i.e. which can I use to be quite sure that most compilers will understand me? A: For gcc, there is a table with all supported features. It seems that the biggest thing missing is variable-length arrays. Most of the other missing features are library issues rather than language features. A: The IBM C compiler has C99 support when invoked as c99, but not when invoked as cc or xlc. A: Look at C99 support status for GNU for details on which features are supported currently. Sun Studio is purported to support the entire C99 spec. I have never used it, so I can't confirm. I don't believe the Microsoft compiler supports the C99 spec in its entirety. They are much more focused on C++ at the moment. A: Microsoft appear to be tracking C++ standards, but have no support for C99. (They may cherry-pick some features, but could be said to be cherry-picking C++0x where there's an overlap.) As of Visual Studio .NET 2003, new projects have the 'Compile C code as C++ (/TP)' option enabled by default. A: Clang (the LLVM-based C and C++ compiler) has pretty good C99 support. I think the only thing it does not support is the floating-point pragmas. A: If you want to write portable C code, then I'd suggest you write in C89 (the old ANSI C standard). This standard is supported by most compilers. The Intel C Compiler has very good C99 support and it produces fast binaries. (Thanks 0x69!) MSVC supports some new features, and Microsoft plan to broaden support in future versions. GCC supports some of the new things in C99. They created a table about the status of C99 features. Probably the most usable feature of C99 is the variable-length array, and GCC supports it now. Clang (LLVM's C frontend) supports most features except floating-point pragmas.
Wikipedia seems to have a nice summary of C99 support in compilers. A: Someone mentioned the Intel compiler has C99 support. There is also the Comeau C/C++ compiler, which fully supports C99. These are the only ones I'm aware of. C99 features that I do not use because they are not well supported include: * *variable-length arrays *macros with a variable number of parameters. C99 features that I regularly use that seem to be pretty well supported (except by Microsoft): * *stdint.h *snprintf() - MS has a non-standard _snprintf() that has serious limitations: it does not always null-terminate the buffer, and it does not indicate how big the buffer should be. To work around Microsoft's non-support, I use a public domain stdint.h from MinGW (that I modified to also work on VC6) and a nearly public domain snprintf() from Holger Weiss. Items that are not supported by Microsoft, but that I will still use on other compilers depending on the project, include: * *mixed declarations and code *inline functions *_Pragma() - this makes pragmas much more usable
{ "language": "en", "url": "https://stackoverflow.com/questions/139479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46" }
Q: What are some good methods to hinder screen scrapers from grabbing specific pieces of content off my site? Pretty sure this question counts as blasphemy to most web 2.0 proponents, but I do think there are times when you could possibly not want pieces of your site being easily ripped off into someone else's arbitrary web aggregator. At least enough so they'd need to be arsed to do it by hand if they really wanted it. My idea was to make a script that positioned text nodes by absolute coordinates in the order they'd appear normally within their respective paragraphs, but then stored those text nodes in a random, jumbled-up order in the DOM. Of course, getting a system like that to work properly (proper text wrap, alignment, styling, etc.) seems almost akin to writing my own document renderer from scratch. I was also thinking of combining that with a CAPTCHA-like thing to muss up the text in subtle ways so as to hinder screen scrapers that could simply look at snapshots and discern letters or whatnot. But that's probably overthinking it. Hmm. Has anyone yet devised any good methods for doing something like this? A: Consider that everything that the scraper can't read, search engines can't read either. With that being said, you could inject content into your document via JavaScript after the page has loaded. A: Please don't use absolute positioning to reassemble a scrambled page. This won't work for mobile devices, screen readers for the visually impaired, and search engines. Please don't add a CAPTCHA. It will just drive people away before they ever see your site. Any solution you come up with will be anti-web. The Internet is about sharing, and you have to take the bad with the good. If you must do something, you might want to just use Flash. I haven't seen link farmers grabbing Flash content, yet. But for all the reasons stated in the first paragraph, Flash is anti-web.
A: Your ideas would probably break any screen-readers as well, so you should check accessibility requirements/legislation before messing up ordering. A: I've seen a TV guide that decrypts using JavaScript on the client side. It wouldn't stop a determined scraper but would stop most casual scripting. All the textual TV entries are similar ps10825('4VUknMERbnt0OAP3klgpmjs....abd26') where ps10825 is simply a function that calls their decrypt function with a key of ps10825. Obviously the key is generated each time. In this case I think it's quite adequate to stop 99% of people using Greasemonkey or even wget scripts to download their TV guide without seeing all of their adverts. A: To understand this it is best to attempt to scrape a few sites. I have scraped some pretty challenging sites like banking sites. I've seen many attempts at making scraping difficult (e.g. encryption, cookies, etc.). At the end of the day, the best defense is unpredictable markup. Scrapers rely most heavily on being able to find "patterns" in the markup. The moment the pattern changes, the scraping logic fails. Scrapers are notoriously brittle and often break down easily. My suggestion: randomly inject non-visible markup into your code, in particular around content that is likely to be interesting. Do anything you can think of to make your markup look different to a scraper each time it is invoked. A: Render all your text in SVG using something like ImageMagick. A: Alexa.com does some wacky stuff to prevent scraping. Go here and look at the traffic rank number http://www.alexa.com/data/details/traffic_details/teenormous.com A: Few of these techniques will stop the determined. Alexa-style garbage-HTML/CSS-masking is easy to get around (just parse the CSS); AJAX/JavaScript-DOM-insertion is easy to get around as well, although form authenticity tokens make this harder.
I've found providing an official API to be the best deterrent :) Barring that, rendering text into an image is a good way to stop the casual scraper (but also still doable) YouTube also uses javascript obfuscation that makes AJAX reverse engineering more difficult A: Just load all your HTML via AJAX calls and the HTML will not "appear" to be in the DOM to most screen scrapers.
{ "language": "en", "url": "https://stackoverflow.com/questions/139482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Websphere 6.1 - Configuring Security When I try to configure security through the admin console of Websphere, it just hangs. It's at the last step of the 4 steps below * *Specify extent of protection *Select user repository *Configure user repository *Summary Here are the extracts from my console:

[26/09/08 13:50:56:539 IST] 0000001f ServletWrappe I SRVE0242I: [isclite] [/ibm/console] [/com.ibm.ws.console.security/EnableSecurity.jsp]: Initialization successful.
[26/09/08 13:50:58:616 IST] 0000001f ServletWrappe I SRVE0242I: [isclite] [/ibm/console] [/com.ibm.ws.console.security/SelectRegistry.jsp]: Initialization successful.
[26/09/08 13:51:00:126 IST] 0000001f ServletWrappe I SRVE0242I: [isclite] [/ibm/console] [/com.ibm.ws.console.security/ConfigureRegistry.jsp]: Initialization successful.
[26/09/08 13:51:00:126 IST] 0000001f ServletWrappe I SRVE0242I: [isclite] [/ibm/console] [/com.ibm.ws.console.security/LocalRegistry.jsp]: Initialization successful.
[26/09/08 13:51:36:202 IST] 0000001f ServletWrappe I SRVE0242I: [isclite] [/ibm/console] [/com.ibm.ws.console.security/ConfirmEnableSecurity.jsp]: Initialization successful.
[26/09/08 13:52:20:255 IST] 0000001f UserRegistryI A SECJ0136I: Custom Registry:com.ibm.ws.security.registry.nt.NTLocalDomainRegistryImpl has been initialized
[26/09/08 13:52:21:025 IST] 0000001f UserRegistryI A SECJ0136I: Custom Registry:com.ibm.ws.security.registry.nt.NTLocalDomainRegistryImpl has been initialized
[26/09/08 14:04:03:127 IST] 00000019 ThreadMonitor W WSVR0605W: Thread "WebContainer : 2" (0000001f) has been active for 746076 milliseconds and may be hung. There is/are 1 thread(s) in total in the server that may be hung.

Any idea what could be up here? Thanks Damien A: It sounds to me like your LDAP server is not responding in a timely manner. The fact that you have a hung thread rather than a network error indicates to me that you are successfully communicating with LDAP.
If you are attaching this to the Active Directory-backed LDAP, it could be overtaxed. We have used ADAM servers in the past to get around slow LDAP response time. However, our response times were around 3-5 seconds instead of 12+ minutes. This could also be a deadlock. I have seen hung threads as a result of implementing XD extensions where the nodes deadlocked their communication (but these were not WebContainer threads). To fix this issue, we ditched XD extensions in favor of a NetScaler setup.
{ "language": "en", "url": "https://stackoverflow.com/questions/139483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Connecting input _and_ output of two commands in shell/bash I have two (UNIX) programs A and B that read and write from stdin/stdout. My first problem is how to connect the stdout of A to the stdin of B and the stdout of B to the stdin of A. I.e., something like A | B but a bidirectional pipe. I suspect I could solve this by using exec to redirect but I could not get it to work. The programs are interactive, so a temporary file would not work. The second problem is that I would like to duplicate each direction and pipe a duplicate via a logging program to stdout so that I can see the (text-line based) traffic that passes between the programs. Here I may get away with tee >(...) if I can solve the first problem. Both these problems seem like they should have well-known solutions, but I have not been able to find anything. I would prefer a POSIX shell solution, or at least something that works in bash on Cygwin. Thanks to your answers I came up with the following solution. The A/B commands use nc to listen on two ports. The logging program uses sed (with -u for unbuffered processing).

bash-3.2$ fifodir=$(mktemp -d)
bash-3.2$ mkfifo "$fifodir/echoAtoB"
bash-3.2$ mkfifo "$fifodir/echoBtoA"
bash-3.2$ sed -u 's/^/A->B: /' "$fifodir/echoAtoB" &
bash-3.2$ sed -u 's/^/B->A: /' "$fifodir/echoBtoA" &
bash-3.2$ mkfifo "$fifodir/loopback"
bash-3.2$ nc -l -p 47002 < "$fifodir/loopback" \
    | tee "$fifodir/echoAtoB" \
    | nc -l -p 47001 \
    | tee "$fifodir/echoBtoA" > "$fifodir/loopback"

This listens for connections on ports 47001 and 47002 and echoes all traffic to standard output. In shell 2 do:

bash-3.2$ nc localhost 47001

In shell 3 do:

bash-3.2$ nc localhost 47002

Now lines entered in shell 2 will be written to shell 3 and vice versa, and the traffic logged to shell 1, something like:

B->A: input to port 47001
A->B: input to port 47002

The above has been tested on Cygwin. Update: The script above stopped working after a few days(!). Apparently it can deadlock.
Some of the suggestions in the answers may be more reliable. A: http://bisqwit.iki.fi/source/twinpipe.html A: You could probably get away with named pipes: mkfifo pipe gawk '$1' < pipe | gawk '$1' > pipe A: You can use Expect. Expect is a tool for automating interactive applications such as telnet, ftp, passwd, fsck, rlogin, tip, etc. You could use the following code (taken from the Exploring Expect book) as a starting point - it connects the output of proc1 to the input of proc2 and vice versa, as you requested: #!/usr/bin/expect -f spawn proc1 set proc1 $spawn_id spawn proc2 interact -u $proc1 A: I spent a lot of time on this, gave it up, and at last decided to use ksh (the Korn shell), which allows this. cmd1 |& cmd2 >&p <&p where |& is a (pipe) operator to start a co-process and &p is the file descriptor of that co-process. A: I had this problem at one point, and I threw together this simple C program. #include <stdio.h> #include <unistd.h> #define PERROR_AND_DIE(_x_) {perror(_x_); _exit(1);} int main(int argc, char **argv) { int fd0[2]; int fd1[2]; if ( argc != 3 ) { fprintf(stdout, "Usage %s: \"[command 1]\" \"[command 2]\"\n", argv[0]); _exit(1); } if ( pipe(fd0) || pipe(fd1) ) PERROR_AND_DIE("pipe") pid_t id = fork(); if ( id == -1 ) PERROR_AND_DIE("fork"); if ( id ) { if ( -1 == close(0) ) PERROR_AND_DIE("P1: close 0"); if ( -1 == dup2(fd0[0], 0) ) PERROR_AND_DIE("P1: dup 0"); //Read my STDIN from this pipe if ( -1 == close(1) ) PERROR_AND_DIE("P1: close 1"); if ( -1 == dup2(fd1[1], 1) ) PERROR_AND_DIE("P1: dup 1"); //Write my STDOUT here execl("/bin/sh", "/bin/sh", "-c", argv[1], NULL); PERROR_AND_DIE("P1: exec") } if ( -1 == close(0) ) PERROR_AND_DIE("P2: close 0"); if ( -1 == dup2(fd1[0], 0) ) PERROR_AND_DIE("P2: dup 0"); if ( -1 == close(1) ) PERROR_AND_DIE("P2: close 1"); if ( -1 == dup2(fd0[1], 1) ) PERROR_AND_DIE("P2: dup 1"); execl("/bin/sh", "/bin/sh", "-c", argv[2], NULL); PERROR_AND_DIE("P2: exec") } A: How about a named pipe?
# mkfifo foo # A < foo | B > foo # rm foo For your second part I believe tee is the correct answer. So it becomes: # A < foo | tee logfile | B > foo A: This question is similar to one I asked before. The solutions proposed by others were to use named pipes, but I suspect you don't have them in cygwin. Currently I'm sticking to my own (attempt at a) solution, but it requires /dev/fd/0 which you probably also don't have. Although I don't really like the passing-command-lines-as-strings aspect of twinpipe (mentioned by JeeBee (139495)), it might be your only option in cygwin. A: I'd suggest "coproc": #! /bin/bash # initiator needs argument if [ $# -gt 0 ]; then a=$1 echo "Question $a" else read a fi if [ $# -gt 0 ]; then read a echo "$a" >&2 else echo "Answer to $a is ..." fi exit 0 Then see this session: $ coproc ./dialog $ ./dialog test < /dev/fd/${COPROC[0]} > /dev/fd/${COPROC[1]} Answer to Question test is ...
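The cross-connection that the C answer above builds with pipe() and dup2() can also be sketched with Python's standard library. This is a hypothetical helper for illustration, not any of the posters' actual tools:

```python
import os
import subprocess

def cross_connect(cmd_a, cmd_b):
    """Start cmd_a and cmd_b with A's stdout wired to B's stdin
    and B's stdout wired to A's stdin (a bidirectional pipe)."""
    a_to_b_read, a_to_b_write = os.pipe()  # carries A -> B traffic
    b_to_a_read, b_to_a_write = os.pipe()  # carries B -> A traffic
    proc_a = subprocess.Popen(cmd_a, stdin=b_to_a_read, stdout=a_to_b_write)
    proc_b = subprocess.Popen(cmd_b, stdin=a_to_b_read, stdout=b_to_a_write)
    # The children hold their own copies of the descriptors; close the
    # parent's copies so each side sees EOF when its peer exits.
    for fd in (a_to_b_read, a_to_b_write, b_to_a_read, b_to_a_write):
        os.close(fd)
    return proc_a, proc_b
```

Logging each direction would mean inserting a tee-like relay (process or thread) on each pipe instead of wiring the descriptors directly. The deadlock caveat from the question applies here too: if both ends block writing at once with full pipe buffers, the pair hangs.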
{ "language": "en", "url": "https://stackoverflow.com/questions/139484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: ASP.NET Themes samples/starter kits I was wondering if there was somewhere I could get some starter kit / theme sample for ASP.NET. I am not a designer, but I need to build a prototype for a project, and if I do it myself it'll certainly be awful. Do you know where I could find that (ASP.NET specific)? A: Check http://asp.net. There are quite a few starter kits and sample projects there. (http://www.asp.net/community/projects/) A: Do you mean templates for a website? If so, that is nothing to do with ASP.NET. There are loads of places to get decent free website templates including: * *http://www.templateworld.com/free_templates.html *http://www.styleshout.com/ *http://www.freecsstemplates.org/ If you are after ASP.NET code snippets & samples, try: * *http://www.freevbcode.com/listcode.asp?Category=16 *http://www.asp101.com/ What specifically is your project concerned with? A: What you're looking for is here. Asp.net specific design templates from msdn. A: You can check the latest Employee Info Starter Kit ASP.NET project template, which utilizes some great css and jQuery frameworks and is integrated with ASP.NET server controls. A: I advise checking ASP.NET Iteration Zero: http://aspnetzero.com/ It's a professional starter kit based on a strong framework and UI. It includes role and membership management and much more. You can create a demo to see it in action easily.
{ "language": "en", "url": "https://stackoverflow.com/questions/139487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Namespace scope in SOAP / XML Is this valid SOAP / XML? <?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <soap:Body> <CreateRoute xmlns="urn:Routs"> <aRoute> <name>ToTheTop</name> <grade xsi:type="FrencGrade"> <gradeNumber>7</gradeNumber> <gradeModifier>a</gradeModifier> </grade> </aRoute> </CreateRoute> </soap:Body> </soap:Envelope> And if it is: in what namespace does FrenchGrade belong? Is it in the urn:Routs namespace? A: Yes that's correct. By doing: <CreateRoute xmlns="urn:Routs"> ...you're changing the default namespace to urn:Routs. This means that all unprefixed child elements will exist in this new namespace. Unless of course: * *you explicitly add new elements using a different prefix *you create a new child element and change its default namespace, in which case its children will be in that new namespace
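The scoping described in the answer can be verified mechanically. Here is a small sketch (not part of the original question) using Python's standard xml.etree to show what namespace a parser actually assigns to the unprefixed elements:

```python
import xml.etree.ElementTree as ET

SOAP = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <soap:Body>
    <CreateRoute xmlns="urn:Routs">
      <aRoute>
        <name>ToTheTop</name>
        <grade xsi:type="FrencGrade">
          <gradeNumber>7</gradeNumber>
          <gradeModifier>a</gradeModifier>
        </grade>
      </aRoute>
    </CreateRoute>
  </soap:Body>
</soap:Envelope>"""

root = ET.fromstring(SOAP)
# ElementTree expands every element name into {namespace}localname, so the
# default namespace declared on CreateRoute shows up on each unprefixed
# descendant:
grade = root.find(".//{urn:Routs}grade")
print(grade.tag)  # -> {urn:Routs}grade
# Attribute *values* such as xsi:type are plain strings to the XML parser;
# resolving "FrencGrade" against the in-scope default namespace is
# schema-level (QName) interpretation, not something the parser does for you.
print(grade.get("{http://www.w3.org/2001/XMLSchema-instance}type"))  # -> FrencGrade
```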
{ "language": "en", "url": "https://stackoverflow.com/questions/139509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to code the partial extensions that Linq to SQL autogenerates? I made a class from Linq to SQL Classes with VS 2008 SP1, Framework 3.5 SP1, and in this case I extended the partial method: partial void UpdateMyTable(MyTable instance){ // Business logic // Validation rules, etc. } My problem is that when I execute db.SubmitChanges(), it executes UpdateMyTable and performs the validations, but it doesn't update; I get this error: [Exception: Deliver] System.Data.Linq.ChangeProcessor.SendOnValidate(MetaType type, TrackedObject item, ChangeAction changeAction) +197 System.Data.Linq.ChangeProcessor.ValidateAll(IEnumerable`1 list) +255 System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode) +76 System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode) +331 System.Data.Linq.DataContext.SubmitChanges() +19 A: * *If you provide this method, you must perform the update in the method. http://msdn.microsoft.com/en-us/library/bb882671.aspx * *If you implement the Insert, Update and Delete methods in your partial class, the LINQ to SQL runtime will call them instead of its own default methods when SubmitChanges is called. Try MiTabla.OnValidate A: If you want to implement this method but not do the update yourself, make the method call ExecuteDynamicUpdate(instance); Likewise ExecuteDynamicDelete and ExecuteDynamicInsert for DeleteMyTable and InsertMyTable respectively.
{ "language": "en", "url": "https://stackoverflow.com/questions/139513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: User or account or person or what? Here on Stack Overflow, you're a "user." On 43things.com you're a "person." On other sites, you're an "account." And then some web apps skip the usage of this kind of signifier, and it's just http://webapp.com/yourusername Do you think that these signifiers imply anything at all? Do you prefer one over the other? In building an application, I often come to this step in the process and stumble on whether to call users of the application a "user" or a "person" or an "account." I'm sure there are other examples, but these are the ones I come across most often. I'm curious what others think when building the user management functions of their applications. I think most default to using "user," but do you put any thought into why? A: Person implies that there is a 1:1 correspondence with a real human being. Account doesn't necessarily imply this (e.g. service accounts), and neither does User, strictly speaking. For example, here on SO there is a "Community" user who is obviously not a real person. It wouldn't make sense to call this the "Community person". A: This semantic is contextual. In a community site, you are often a 'member'; on a paid service you have an 'account'. 'User' is the generic default. You should choose a moniker that best describes the role of the 'user' in your application. A: I prefer to be a user. Account is also quite a standard name for the thing. Person seems cumbersome to me. I am a person anyway; registering at a given service does not change it. I wouldn't argue about subtle differences between the names. Use what is most common, most standard. This will be more user-friendly, since fewer surprises are more friendly. A: I'm not sure user and account are interchangeable. For example, I could be a "user" on StackOverflow without having an "account". Though if I had an account, I would have more facilities as a user.
A: Depends on your target audience and what kind of application you are building: * *For community websites, persons would be my first choice. *For a developer community site (like this one), definitely user ;-) *For banking applications, account seems the most logical choice etc... A: "Account" implies there could be several users for it. Using just webapp.com/yourusername is appropriate if the user is the central part of your application, i.e. a social network like facebook. I'd use "user" for real users, people that can actually log in etc., and "person" if you're just managing people, like a search engine for people. Edit: In the grand scale of the universe, it really doesn't matter. A: While they all mean roughly the same to us techies, regular "users" like to be considered "members" rather than accounts/users. It's a friendlier face. If you're public facing, call them members to give them the warm fuzzy feeling that non-techs seem to crave. :) A: I had a similar problem when designing a small site for members of a sporting team. I had two types of accounts that I needed to track - someone could apply to be a member of the team (i.e. to be a player), or they could just apply to have a login for the site, so they could keep track of the players (i.e. without being a player themselves). In the end I decided to call them "logins" and "players". So someone can create a "login" for themselves, then after they log in they can create "players". Each "login" could then be linked to multiple "players". As has been said several times, what you call it should make sense to your particular audience - and this will usually be what is the most common usage out there in internet-land.
{ "language": "en", "url": "https://stackoverflow.com/questions/139521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to programmatically run an Xpand workflow on a model in a second workbench? I have an Xtext/Xpand (oAW 4.3, Eclipse 3.4) generator plug-in, which I run together with the editor plug-in in a second workbench. There, I'd like to run Xpand workflows programmatically on the model file I create. If I set the model file using the absolute path of the IFile I have, e.g. with: String dslFile = file.getLocation().makeAbsolute().toOSString(); Or if I use a file URI retrieved with: String dslFile = file.getLocationURI().toString(); The file is not found: org.eclipse.emf.ecore.resource.Resource$IOWrappedException: Resource '/absolute/path/to/my/existing/dsl.file' does not exist. at org.openarchitectureware.xtext.parser.impl.AbstractParserComponent.invokeInternal(AbstractParserComponent.java:55) To what value should I set the model file attribute (dslFile) in the map I hand to the WorkflowRunner: Map properties = new HashMap(); properties.put("modelFile", dslFile); I also tried leaving the properties empty and referencing the model file relative to the workflow file (inside the workflow file), but that yields a FileNotFoundException. Running all of this in a normal app (not in a second workbench) works fine. A: 2 important things for people looking in here... the original poster used an IFile for the "file.get...." call, and the correct syntax for paths is "file:/c:/myOSbla". A: I found help at the openArchitectureWare forum.
Basically using properties.put("modelFile", file.getLocation().makeAbsolute().toOSString()); works, but you need to specify looking it up via URI in the workflow you are calling: <component class="org.eclipse.mwe.emf.Reader"> <uri value='${modelFile}'/> <modelSlot value='theModel'/> </component> A: This is a sample application Launcher.java (sitting in the default package): import gnu.getopt.Getopt; import gnu.getopt.LongOpt; import java.io.File; import java.io.IOException; import org.eclipse.emf.mwe2.launch.runtime.Mwe2Launcher; public class Launcher implements Runnable { // // main program // public static void main(final String[] args) { new Launcher(args).run(); } // // private final fields // private static final String defaultModelDir = "src/main/resources/model"; private static final String defaultTargetDir = "target/generated/pageflow-maven-plugin/java"; private static final String defaultFileEncoding = "UTF-8"; private static final LongOpt[] longopts = new LongOpt[] { new LongOpt("baseDir", LongOpt.REQUIRED_ARGUMENT, new StringBuffer(), 'b'), new LongOpt("modelDir", LongOpt.REQUIRED_ARGUMENT, new StringBuffer(), 'm'), new LongOpt("targetDir", LongOpt.REQUIRED_ARGUMENT, new StringBuffer(), 't'), new LongOpt("encoding", LongOpt.REQUIRED_ARGUMENT, new StringBuffer(), 'e'), new LongOpt("help", LongOpt.NO_ARGUMENT, null, 'h'), new LongOpt("verbose", LongOpt.NO_ARGUMENT, null, 'v'), }; private final String[] args; // // public constructors // public Launcher(final String[] args) { this.args = args; } public void run() { final String cwd = System.getProperty("user.dir"); String baseDir = cwd; String modelDir = defaultModelDir; String targetDir = defaultTargetDir; String encoding = defaultFileEncoding; boolean verbose = false; final StringBuffer sb = new StringBuffer(); final Getopt g = new Getopt("pageflow-dsl-generator", this.args, "b:m:t:e:hv;", longopts); g.setOpterr(false); // We'll do our own error handling int c; while ((c = g.getopt()) != -1) switch (c) { 
case 'b': baseDir = g.getOptarg(); break; case 'm': modelDir = g.getOptarg(); break; case 't': targetDir = g.getOptarg(); break; case 'e': encoding = g.getOptarg(); break; case 'h': printUsage(); System.exit(0); break; case 'v': verbose = true; break; case '?': default: System.out.println("The option '" + (char) g.getOptopt() + "' is not valid"); printUsage(); System.exit(1); break; } String absoluteModelDir; String absoluteTargetDir; try { absoluteModelDir = checkDir(baseDir, modelDir, false, true); absoluteTargetDir = checkDir(baseDir, targetDir, true, true); } catch (final IOException e) { throw new RuntimeException(e.getMessage(), e.getCause()); } if (verbose) { System.err.println(String.format("modeldir = %s", absoluteModelDir)); System.err.println(String.format("targetdir = %s", absoluteTargetDir)); System.err.println(String.format("encoding = %s", encoding)); } Mwe2Launcher.main( new String[] { "workflow.PageflowGenerator", "-p", "modelDir=".concat(absoluteModelDir), "-p", "targetDir=".concat(absoluteTargetDir), "-p", "fileEncoding=".concat(encoding) }); } private void printUsage() { System.err.println("Syntax: [-b <baseDir>] [-m <modelDir>] [-t <targetDir>] [-e <encoding>] [-h] [-v]"); System.err.println("Options:"); System.err.println(" -b, --baseDir project home directory, e.g: /home/workspace/myapp"); System.err.println(" -m, --modelDir default is: ".concat(defaultModelDir)); System.err.println(" -t, --targetDir default is: ".concat(defaultTargetDir)); System.err.println(" -e, --encoding default is: ".concat(defaultFileEncoding)); System.err.println(" -h, --help this help text"); System.err.println(" -v, --verbose verbose mode"); } private String checkDir(final String basedir, final String dir, final boolean create, final boolean fail) throws IOException { final StringBuilder sb = new StringBuilder(); sb.append(basedir).append('/').append(dir); final File f = new File(sb.toString()).getCanonicalFile(); final String absolutePath = f.getAbsolutePath(); if 
(create) { if (f.isDirectory()) return absolutePath; if (f.mkdirs()) return absolutePath; } else { if (f.isDirectory()) return absolutePath; } if (!fail) return null; throw new IOException(String.format("Failed to locate or create directory %s", absolutePath)); } private String checkFile(final String basedir, final String file, final boolean fail) throws IOException { final StringBuilder sb = new StringBuilder(); sb.append(basedir).append('/').append(file); final File f = new File(sb.toString()).getCanonicalFile(); final String absolutePath = f.getAbsolutePath(); if (f.isFile()) return absolutePath; if (!fail) return null; throw new IOException(String.format("Failed to find or locate directory %s", absolutePath)); } } ... and this is its pom.xml: <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.vaadin</groupId> <artifactId>pageflow-dsl-generator</artifactId> <version>0.1.0-SNAPSHOT</version> <build> <sourceDirectory>src</sourceDirectory> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <configuration> <archive> <index>true</index> <manifest> <mainClass>Launcher</mainClass> </manifest> </archive> </configuration> </plugin> </plugins> </build> <dependencies> <dependency> <groupId>urbanophile</groupId> <artifactId>java-getopt</artifactId> <version>1.0.9</version> </dependency> </dependencies> </project> Unfortunately, this pom.xml is not intended to package it (not yet, at least). For instructions regarding packaging, have a look at link text Have fun :) Richard Gomes
{ "language": "en", "url": "https://stackoverflow.com/questions/139525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Classloader issues - How to determine which library versions (jar-files) are loaded I've just solved another *I-thought-I-was-using-this-version-of-a-library-but-apparently-my-app-server-has-already-loaded-an-older-version-of-this-library-*issue (sigh). Does anybody know a good way to verify (or monitor) whether your application has access to all the appropriate jar-files, or loaded class-versions? Thanks in advance! [P.S. A very good reason to start using the OSGi module architecture in my view!] Update: This article helped as well! It gave me insight into which classes JBoss' classloader loaded by writing it to a log file. A: If you've got appropriate version info in a jar manifest, there are methods to retrieve and test the version. No need to manually read the manifest. java.lang.Package.getImplementationVersion() and getSpecificationVersion() and isCompatibleWith() sound like they'd do what you're looking for. You can get the Package with this.getClass().getPackage() among other ways. The javadoc for java.lang.Package doesn't give the specific manifest attribute names for these attributes. A quick google search turned it up at http://java.sun.com/docs/books/tutorial/deployment/jar/packageman.html A: In the current version of Java, library versioning is a rather woolly term that relies on the JAR being packaged correctly with a useful manifest. Even then, it's a lot of work for the running application to gather this information together in a useful way. The JVM runtime gives you no help whatsoever. I think your best bet is to enforce this at build time, using dependency management tools like Ivy or Maven to fetch the correct versions of everything. Interestingly, Java 7 will likely include a proper module versioning framework for precisely this sort of thing. Not that that helps you right at this moment. A: I don't think there is a good way to check that. And I'm not sure you want to do that.
What you need to do is get familiar with your app-server's class loading architecture, and understand how that works. A simplified explanation of how it works is: an EJB or a web-app will first look for a class or a resource in libraries declared in its own module (ejb-jar or war). If the class is not found there, the class loader forwards the request to its parent class loader, which is either a declared dependency (usually an ejb) or the application class loader, which is responsible for loading libraries and resources declared in the ear package. If the class or the resource is still not found, the request is forwarded to the app server, which will look in its own classpath. This being said, you should remember that a Java EE module (web-app, ejb) will always load classes from a jar that is in the nearest scope. For example if you package log4j v1 in the war file, log4j v2 at ear level and you put log4j v3 in your app-server's classpath, the module will use the jar in its own module. Take that away and it'll use the one at ear level. Take that out and it will use the one in the app-server's classpath. Things get more tricky when you have complex dependencies between modules. Best is to put application-global libraries at ear level. A: If you happen to be using JBoss, there is an MBean (the class loader repository iirc) where you can ask for all classloaders that have loaded a certain class. If all else fails, there's always java -verbose:class which will print the location of the jar for every class file that is being loaded. A: There must be a better way than the way I do it, but I tend to do this in a very manual way. * *Every Jar must have its version number in the file name (if it doesn't change its name). *Each application has its own classpath. *There must be a reason to start using an updated Jar (new version). Don't just change because it is available, change because it gives you functionality that you need.
*Each release must include all Jars that are needed. *I keep a Version class that knows the list of Jars it needs (this is coded into the source file) and can be checked at runtime against the list of Jars in the classpath. As I said, it is manual, but it works.
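As a complement to java -verbose:class and the java.lang.Package methods mentioned above, the manifest version attributes can also be inventoried without starting a JVM at all, since a jar is just a zip archive. A hypothetical sketch in Python (not part of any answer above):

```python
import zipfile

def jar_versions(jar_paths):
    """Map each jar file to the version attributes found in its manifest."""
    report = {}
    for path in jar_paths:
        with zipfile.ZipFile(path) as jar:
            try:
                manifest = jar.read("META-INF/MANIFEST.MF").decode("utf-8")
            except KeyError:
                manifest = ""  # jar carries no manifest at all
        attrs = {}
        for line in manifest.splitlines():
            # Real manifests wrap long values onto lines starting with a
            # space; this simple sketch skips those continuation lines.
            if ":" in line and not line.startswith(" "):
                key, _, value = line.partition(":")
                attrs[key.strip()] = value.strip()
        report[path] = {
            "implementation": attrs.get("Implementation-Version"),
            "specification": attrs.get("Specification-Version"),
        }
    return report
```

Running this over every jar on a classpath gives a quick side-by-side view of which versions are actually deployed, which is exactly the manual check the last answer describes.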
{ "language": "en", "url": "https://stackoverflow.com/questions/139534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: How do I upgrade from Drupal 5 to 6? I'm running Drupal 5 on my website and want to upgrade to V6. I've not got any obscure or unsupported modules running. What do I do though? I can't seem to find any step-by-step upgrade methods. Do I just have to overwrite all the files and then re-run the installer again? A: Drupal: Upgrading from 5.x to 6.x It's a video though. I have no idea what's up with all these video tutorials. Does anybody like them? Can't I get the same information in a quarter the time in text? Is the web now for illiterates only? Edit: There's some text here
{ "language": "en", "url": "https://stackoverflow.com/questions/139537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What's the best way to create interactive application prototypes? The question should be interpreted from a general point of view and not targeted solely at web apps or desktop apps. I have been looking around to find a simple and easy way of creating interactive prototypes for web applications. I'd like to use a technique that allows simple UI creation and especially UI recreation and modification in further iterations. Filling the UI with mockup data should be very simple. The technique may require a simple form of programming, e.g. to specify a drag and drop behaviour from UI element A to UI element B. One tool I currently use is the Adobe Flex Builder. The included GUI designer is very good and I have acquired some skills with AS3 so far. The problem is adding data to the UI. It always results in me programming code for checking and parsing of XML tree structures, and mainly debugging this section of the prototype. Too cumbersome! Another tool many people use is PowerPoint, which involves a really cumbersome way of creating a GUI by drawing every part of an interaction in a separate slide. No way! I would be much faster with paper prototypes. Other (better!) free-form drawing tools are also part of this category (I'm a happy heavyweight Inkscape user) but prototyping and mockups are obviously not their main purpose. The UI-stencil palette for Visio makes it a bit better than the drawing competition. The main competitors in rapid prototyping as far as I know are: * *iRise *Axure *Serena and others? *Visio *PowerPoint, Illustrator, Inkscape or any other free-form drawing tool *paper prototyping *IDEs with good GUI builders (such as Flex Builder Designer and Netbeans Matisse) My opinion is that real GUI builders are a good starting point. What are your current approaches? Please outline your process and the pros and cons as an answer here.
A: Real GUI builders are: * *Much slower *Only programmers can use them (try to explain to an analyst how to populate a table in VB) *They don't let you annotate your mockups on the fly *Don't have skins (e.g. black & white) to create screens which can't be mistaken for an "almost done" application While specialized mockup tools are usually: * *Communication oriented *Can print or export your mockups (together with your notes) to PDF/HTML/Word etc *Better ones have some variant of "master screens" so you can derive hundreds of mockups from only a handful of main application screens (you quickly get to quite a lot of mockups when you try to discuss real scenarios with your client) *Fast enough so you can prototype in real time in a meeting Almost a decade ago I got frustrated by all of the above and created my own tool: MockupScreens. It became pretty popular quickly :-) And here is the most complete list of such specialized tools I know of. Many of those are free: http://c2.com/cgi/wiki?GuiPrototypingTools A: Quick and dirty paper prototyping: PowerPoint (see: PowerPoint Prototyping Toolkit) -Great for easily putting together prototypes that can be presented. The slide nature can also serve as a substitute for mock interaction. Downside is lack of standardization. Not for disciplined projects. Disciplined paper prototyping: Visio -Standardized and full featured, but more cumbersome Interactive prototyping: Visual Studio -Very quick interaction building using drag-n-drop and events. Can be data driven. You can even build a prototype 'base' as a starter kit. Only downside is the temptation to actually make it THE production application. ;) A: There's also Balsamiq. I kind of like it, but usually grow tired of these things quite fast. I end up using either pen & paper or OS X's Interface Builder, which isn't more difficult to use than all these prototyping tools. A: If you're talking about mock-ups/wireframes (i.e. static pictures) Visio is the tool of choice.
Most software you mentioned is either above the level of the normal business user (i.e. you'll need a specialist to do the mock-ups as opposed to the business users helping you) or is not created for the purpose of mock-ups. If you need a dynamic prototype then there are plenty of options, and everything depends on the type of skills you have available in the team. For example I have a guy who is very strong in HTML. It would be much easier for him to create an HTML page from scratch in Notepad than to try to do the same thing with Flash in a WYSIWYG tool. Some other people have good Flash skills and could employ them etc. A: Expression Blend (http://www.microsoft.com/expression/products/overview.aspx?key=blend) can be used to create quick mockups in XAML. You can store data for the mockup as inline XML in the XAML, or you can quickly convert it to a WPF/Silverlight application and build basic business logic behind your mockup using Visual C# Express (http://www.microsoft.com/express/vcsharp/) or Visual Studio 2008.
{ "language": "en", "url": "https://stackoverflow.com/questions/139555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Creating an online catalogue using Drupal, what are the best modules/techniques? I have a large collection of retro games consoles and computers, and I want to create some sort of catalogue to keep track of them using Drupal. I could do it as a series of pages in Drupal, but would rather have some more structured method. It'd be great if I could somehow define a record consisting of certain fields (manufacturer, model, serial number, etc) and have a form to fill in, and then have the display part automatically taken care of. From looking at various Drupal modules I get the feeling I can do this, but I can't work out which modules to use. I got somewhat lost looking at the CCK module. A: Look harder at the CCK module; it's exactly what you want. You can define records and then assign taxonomies and views to make it all work; you just need your own creativity. CCK is THE module for doing this kind of stuff. Also, this link may be helpful for pre-made modules: http://drupal.org/search/node/type%3Aproject_project+catalog A: You'll want CCK, yes, but you'll also want the Views module most likely, in order to more easily control how and what data from your CCK-based nodes show up at certain times. Panels might be nice too... These three are the triumvirate of must-haves for Drupal. A: CCK and Views are the easy way to do this, but really not the best. I recommend building your own module. There is going to be a much greater learning curve, but your code will be easy to move from server to server, run faster, be more customizable, and you'll end up with a much better understanding of how Drupal works. You won't have to wait for Views and CCK to be ported to a fresh Drupal version, you'll be able to contribute your module, etc, etc.
{ "language": "en", "url": "https://stackoverflow.com/questions/139568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I programmatically change ASP.NET ajax AccordionPane with javascript? I've got an asp.net ajax style AccordionPane control that I am trying to get/set based on some user interactions. However it seems not let me do this with javascript: function navPanelMove() { var aPane = $get('ctl00_Accordion1_AccordionExtender_ClientState'); openPaneID = aPane.get_SelectedIndex(); // doesn't work } A: You'll need to use $find('behaviorId') You want the AjaxControlToolkit.AccordionBehavior object, not the DOM elements
{ "language": "en", "url": "https://stackoverflow.com/questions/139578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Balanced Distribution Algorithm I'm working on some code for a loosely coupled cluster. To achieve optimal performance during jobs, I have the cluster remap its data each time a child enters or exits. This will eventually be made optional, but for now it performs its data balancing by default. My balancing is basically just making sure that each child never has more than the average number of files per machine, plus one. The plus one is for the remainder if the division isn't clean. And since the remainder will always be less than the number of children [except the 0 case, but we can exclude that], children after a balancing will have at most avg + 1. Everything seems fine, until I realized my algorithm is O(n!). Go down the list of children, find out the avg, remainder, who has too many and who has too few. For each child in the too-many list, go through the list, send to each child who has too few. Is there a better solution to this? I feel there must be. Edit: Here is some pseudocode to show how I derived O(n!): foreach ( child in children ) { if ( child.dataLoad > avg + 1 ) { foreach ( child2 in children ) { if ( child != child2 && child2.dataLoad < avg ) { sendLoad(child, child2) } } } } Edit: O(n^2). Foreach n, n => n*n => n^2. I guess I didn't have enough coffee this morning! ;) In the future I'd like to move to a more flexible and resilient distribution method [weights and heuristics], but for now, a uniform distribution of data works. A: @zvrba: You do not even have to sort the list. When traversing the list the second time, just move all items with less than the average workload to the end of the list (you can keep a pointer to the last item at your first traversal). The order does not have to be perfect; it just changes when the iterators have to be augmented or decreased in your last step.
See previous answer The last step would look something like: In the second step keep a pointer to the first item with less than average workload in child2 (to avoid needing a doubly linked list). for each child in list { if child2 == nil then assert("Error in logic"); while child.workload > avg + 1 { sendwork(child, child2, min(avg + 1 - child2.workload, child.workload - (avg + 1))) if child2.workload == avg + 1 then child2 = child2.next; } } A: I think that your analysis is incorrect: * *walking through the list to find out the average is O(n) *making lists of children with too many or too few data chunks is also O(n) *moving data is proportional to the amount of data How did you arrive at O(n!)? You can sort the list [O(n lg n) in the number of children], so that at the front you have children with too much work, and at the end children with too little work. Then traverse the list from both ends simultaneously: one iterator points to a child with excess data, the other to a child with a lack of data. Transfer data, and move either one iterator forward, or the other backward. A: You may want to try a completely different approach, such as consistent hashing. See here for a relatively easy introduction to the topic: http://www8.org/w8-papers/2a-webserver/caching/paper2.html (There are deeper papers available as well, starting with Karger et al) I have created a working implementation of consistent hashing in Erlang that you can examine if you wish: http://distributerl.googlecode.com/svn/trunk/chash.erl A: The code you have posted has complexity O(n^2). Still, it is possible to do it in linear time as malach has observed, where n is the number of items in the children list. Consider: the inner loop has n iterations, and it is executed at most n times. n*n = n^2.
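The two-pointer redistribution described in the answers above is easy to get wrong off by one, so here is a sketch of it in Python (illustrative only; function and variable names are hypothetical). It computes each child's target load (avg or avg + 1, so exactly the remainder's worth of children carry the extra unit), then walks a "givers" list and a "takers" list with two pointers:

```python
def rebalance(loads):
    """Return (new_loads, transfers) so every child ends at avg or avg + 1.

    Assumes a non-empty list of non-negative integer loads.
    """
    n = len(loads)
    avg, rem = divmod(sum(loads), n)
    # Exactly `rem` children end at avg + 1; the rest end at avg.
    targets = [avg + 1 if i < rem else avg for i in range(n)]
    loads = list(loads)  # work on a copy
    givers = [i for i in range(n) if loads[i] > targets[i]]
    takers = [i for i in range(n) if loads[i] < targets[i]]
    transfers = []       # (from_child, to_child, amount)
    gi = ti = 0
    while gi < len(givers) and ti < len(takers):
        g, t = givers[gi], takers[ti]
        # Move the smaller of the giver's surplus and the taker's deficit.
        amount = min(loads[g] - targets[g], targets[t] - loads[t])
        loads[g] -= amount
        loads[t] += amount
        transfers.append((g, t, amount))
        if loads[g] == targets[g]:
            gi += 1
        if loads[t] == targets[t]:
            ti += 1
    return loads, transfers
```

Each pointer only moves forward, so the transfer loop is O(n) on top of the two O(n) scans; the actual cost of moving files is of course still proportional to the amount of data sent.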
{ "language": "en", "url": "https://stackoverflow.com/questions/139580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Wise to run MS Velocity on my development machine? I've never developed a web application that uses distributed memory. Is it common practice to run a tool such as Microsoft Velocity on my local machine as I develop, should I run Velocity on another server as I develop, or should I just develop as normal (default session & cache) and use Velocity only after I've deployed to our development server? We are running into a lot of memory issues in our production web application so we are researching into splitting our servers into a farm. A: I'm looking at using Velocity on a project as well. What I've done thus far is to write a common caching interface and a simple implementation that utilizes the standard ASP.NET caching system. This way I can program against that interface and later plug in the Velocity caching via a concrete implementation of the interface. You can accomplish this more easily using a dependency injection framework such as Unity or Structure Map. As for where to use Velocity, I'd be sure to try it out in a development environment before going live. If you have a limited number of physical machines, use Virtual PC to set up some virtual servers and install the caching framework onto them. A: Ahh that is some good feedback. I was thinking just the same thing about writing a common caching interface so I can switch out the default caching with Velocity without any code changes. Based on an article by Stephen Walther, he appeared to be installing Velocity on his local development machine. So, that sounds like a good place to start. In his article I was pleased to see that switching out the Session in the web server required no code changes... it was seamless ;) I saw an interesting article on Velocity's blog this morning about installing multiple velocity instances on the same server. That way you don't necessarily have to use Virtual PCs. I hope your project goes well.
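The "common caching interface" idea from the first answer is language-agnostic. Here is a minimal sketch of it in Python (all names are hypothetical, and this is not Velocity's actual API): application code depends only on the abstract cache, so the default in-process implementation can later be swapped for a distributed one without touching call sites.

```python
from abc import ABC, abstractmethod

class CacheProvider(ABC):
    """The narrow surface the application codes against."""
    @abstractmethod
    def get(self, key): ...
    @abstractmethod
    def set(self, key, value): ...

class InMemoryCache(CacheProvider):
    """Stand-in for the default in-process cache; a distributed
    (e.g. Velocity-backed) implementation would subclass the same way."""
    def __init__(self):
        self._store = {}
    def get(self, key):
        return self._store.get(key)
    def set(self, key, value):
        self._store[key] = value

def get_report(cache, report_id):
    # Application code never names the concrete cache, so swapping the
    # backing store requires no changes here.
    report = cache.get(report_id)
    if report is None:
        report = {"id": report_id}   # stand-in for an expensive load
        cache.set(report_id, report)
    return report
```

With a dependency injection container (Unity or StructureMap, as the answer suggests), the concrete implementation is chosen in one place at startup.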
{ "language": "en", "url": "https://stackoverflow.com/questions/139583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is the best way to clone/deep copy a .NET generic Dictionary? I've got a generic dictionary Dictionary<string, T> that I would like to essentially make a Clone() of... any suggestions? A: This works fine for me // assuming this fills the List List<Dictionary<string, string>> obj = this.getData(); List<Dictionary<string, string>> objCopy = new List<Dictionary<string, string>>(obj); As Tomer Wolberg describes in the comments, this does not work if the value type is a mutable class. A: You could always use serialization. You could serialize the object then deserialize it. That will give you a deep copy of the Dictionary and all the items inside of it. Now you can create a deep copy of any object that is marked as [Serializable] without writing any special code. Here are two methods that will use Binary Serialization. If you use these methods you simply call object deepcopy = FromBinary(ToBinary(yourDictionary)); public Byte[] ToBinary() { MemoryStream ms = null; Byte[] byteArray = null; try { BinaryFormatter serializer = new BinaryFormatter(); ms = new MemoryStream(); serializer.Serialize(ms, this); byteArray = ms.ToArray(); } catch (Exception unexpected) { Trace.Fail(unexpected.Message); throw; } finally { if (ms != null) ms.Close(); } return byteArray; } public object FromBinary(Byte[] buffer) { MemoryStream ms = null; object deserializedObject = null; try { BinaryFormatter serializer = new BinaryFormatter(); ms = new MemoryStream(); ms.Write(buffer, 0, buffer.Length); ms.Position = 0; deserializedObject = serializer.Deserialize(ms); } finally { if (ms != null) ms.Close(); } return deserializedObject; } A: The best way for me is this: Dictionary<int, int> copy = new Dictionary<int, int>(yourListOrDictionary); A: The Binary Serialization method works fine, but in my tests it turned out to be 10x slower than a non-serialization implementation of clone. 
Tested it on Dictionary<string , List<double>> A: That's what helped me, when I was trying to deep copy a Dictionary < string, string > Dictionary<string, string> dict2 = new Dictionary<string, string>(dict); Good luck A: (Note: although the cloning version is potentially useful, for a simple shallow copy the constructor I mention in the other post is a better option.) How deep do you want the copy to be, and what version of .NET are you using? I suspect that a LINQ call to ToDictionary, specifying both the key and element selector, will be the easiest way to go if you're using .NET 3.5. For instance, if you don't mind the value being a shallow clone: var newDictionary = oldDictionary.ToDictionary(entry => entry.Key, entry => entry.Value); If you've already constrained T to implement ICloneable: var newDictionary = oldDictionary.ToDictionary(entry => entry.Key, entry => (T) entry.Value.Clone()); (Those are untested, but should work.) A: Okay, the .NET 2.0 answers: If you don't need to clone the values, you can use the constructor overload to Dictionary which takes an existing IDictionary. (You can specify the comparer as the existing dictionary's comparer, too.) If you do need to clone the values, you can use something like this: public static Dictionary<TKey, TValue> CloneDictionaryCloningValues<TKey, TValue> (Dictionary<TKey, TValue> original) where TValue : ICloneable { Dictionary<TKey, TValue> ret = new Dictionary<TKey, TValue>(original.Count, original.Comparer); foreach (KeyValuePair<TKey, TValue> entry in original) { ret.Add(entry.Key, (TValue) entry.Value.Clone()); } return ret; } That relies on TValue.Clone() being a suitably deep clone as well, of course. A: For .NET 2.0 you could implement a class which inherits from Dictionary and implements ICloneable. 
public class CloneableDictionary<TKey, TValue> : Dictionary<TKey, TValue> where TValue : ICloneable { public IDictionary<TKey, TValue> Clone() { CloneableDictionary<TKey, TValue> clone = new CloneableDictionary<TKey, TValue>(); foreach (KeyValuePair<TKey, TValue> pair in this) { clone.Add(pair.Key, (TValue)pair.Value.Clone()); } return clone; } } You can then clone the dictionary simply by calling the Clone method. Of course this implementation requires that the value type of the dictionary implements ICloneable, but otherwise a generic implementation isn't practical at all. A: Dictionary<string, int> dictionary = new Dictionary<string, int>(); Dictionary<string, int> copy = new Dictionary<string, int>(dictionary); A: Try this if key/values are ICloneable: public static Dictionary<K,V> CloneDictionary<K,V>(Dictionary<K,V> dict) where K : ICloneable where V : ICloneable { Dictionary<K, V> newDict = null; if (dict != null) { // If the key and value are value types, just use copy constructor. if (((typeof(K).IsValueType || typeof(K) == typeof(string)) && (typeof(V).IsValueType) || typeof(V) == typeof(string))) { newDict = new Dictionary<K, V>(dict); } else // prepare to clone key or value or both { newDict = new Dictionary<K, V>(); foreach (KeyValuePair<K, V> kvp in dict) { K key; if (typeof(K).IsValueType || typeof(K) == typeof(string)) { key = kvp.Key; } else { key = (K)kvp.Key.Clone(); } V value; if (typeof(V).IsValueType || typeof(V) == typeof(string)) { value = kvp.Value; } else { value = (V)kvp.Value.Clone(); } newDict[key] = value; } } } return newDict; } A: In the case you have a Dictionary of "object" and object can be anything like (double, int, ... or ComplexClass): Dictionary<string, object> dictSrc { get; set; } public class ComplexClass : ICloneable { private Point3D ...; private Vector3D ....; [...] 
public object Clone() { return (ComplexClass)this.MemberwiseClone(); } } dictSrc["toto"] = new ComplexClass() dictSrc["tata"] = 12.3 ... dictDest = dictSrc.ToDictionary(entry => entry.Key, entry => ((entry.Value is ICloneable) ? (entry.Value as ICloneable).Clone() : entry.Value) ); A: Here is some real "true deep copying" without knowing the type, using a recursive walk; good as a starting point. It handles nested types and almost any tricky type, I think. I have not added nested array handling yet, but you can modify it as you choose. Dictionary<string, Dictionary<string, dynamic>> buildInfoDict = new Dictionary<string, Dictionary<string, dynamic>>() { {"tag",new Dictionary<string,dynamic>(){ { "attrName", "tag" }, { "isCss", "False" }, { "turnedOn","True" }, { "tag",null } } }, {"id",new Dictionary<string,dynamic>(){ { "attrName", "id" }, { "isCss", "False" }, { "turnedOn","True" }, { "id",null } } }, {"width",new Dictionary<string,dynamic>(){ { "attrName", "width" }, { "isCss", "True" }, { "turnedOn","True" }, { "width","20%" } } }, {"height",new Dictionary<string,dynamic>(){ { "attrName", "height" }, { "isCss", "True" }, { "turnedOn","True" }, { "height","20%" } } }, {"text",new Dictionary<string,dynamic>(){ { "attrName", null }, { "isCss", "False" }, { "turnedOn","True" }, { "text","" } } }, {"href",new Dictionary<string,dynamic>(){ { "attrName", null }, { "isCss", "False" }, { "flags", "removeAttrIfTurnedOff" }, { "turnedOn","True" }, { "href","about:blank" } } } }; var cln = clone(buildInfoDict); public static dynamic clone(dynamic obj) { dynamic cloneObj = null; if (IsAssignableFrom(obj, typeof(IDictionary))) { cloneObj = Activator.CreateInstance(obj.GetType()); foreach (var key in obj.Keys) { cloneObj[key] = clone(obj[key]); } } else if (IsNumber(obj) || obj.GetType() == typeof(string)) { cloneObj = obj; } else { Debugger.Break(); } return cloneObj; } public static bool IsAssignableFrom(this object obj, 
Type ObjType = null, Type ListType = null, bool HandleBaseTypes = false) { if (ObjType == null) { ObjType = obj.GetType(); } bool Res; do { Res = (ObjType.IsGenericType && ObjType.GetGenericTypeDefinition().IsAssignableFrom(ListType)) || (ListType == null && ObjType.IsAssignableFrom(obj.GetType())); ObjType = ObjType.BaseType; } while ((!Res && ObjType != null) && HandleBaseTypes && ObjType != typeof(object)); return Res; } public static bool IsNumber(this object value) { return value is sbyte || value is byte || value is short || value is ushort || value is int || value is uint || value is long || value is ulong || value is float || value is double || value is decimal; } A: Here is another way to clone a dictionary, assuming you know to do the "right" thing as far as handling whatever is hiding behind the "T" (a.k.a. "object") in your specific circumstances. internal static Dictionary<string, object> Clone(Dictionary<string, object> dictIn) { Dictionary<string, object> dictOut = new Dictionary<string, object>(); IDictionaryEnumerator enumMyDictionary = dictIn.GetEnumerator(); while (enumMyDictionary.MoveNext()) { string strKey = (string)enumMyDictionary.Key; object oValue = enumMyDictionary.Value; dictOut.Add(strKey, oValue); } return dictOut; } A: I would evaluate whether T is a value or reference type. In the case T is a value type I would use the constructor of Dictionary, and in the case T is a reference type I would require T to implement ICloneable and call Clone() on each value. That gives private static IDictionary<string, T> Copy<T>(this IDictionary<string, T> dict) where T : ICloneable { if (typeof(T).IsValueType) { return new Dictionary<string, T>(dict); } else { var copy = new Dictionary<string, T>(); foreach (var pair in dict) { copy[pair.Key] = (T)pair.Value.Clone(); } return copy; } }
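The serialize-then-deserialize trick from the BinaryFormatter answer above is language-agnostic. For comparison, here is the same idea sketched in Python (pickle standing in for BinaryFormatter; names are illustrative):

```python
import copy
import pickle

def deep_copy_via_serialization(obj):
    # Round-tripping through a serializer yields a fully independent
    # object graph, including nested mutable values.
    return pickle.loads(pickle.dumps(obj))

original = {"a": [1, 2], "b": [3]}
clone = deep_copy_via_serialization(original)
clone["a"].append(99)            # the original is unaffected

# copy.deepcopy does the same job without serialization overhead,
# which mirrors the "10x slower" observation earlier in the thread.
clone2 = copy.deepcopy(original)
```

As in .NET, the serialization route only works when everything in the dictionary is serializable, and a purpose-built clone is usually faster.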
{ "language": "en", "url": "https://stackoverflow.com/questions/139592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "264" }
Q: ProcessStartInfo hanging on "WaitForExit"? Why? I have the following code: info = new System.Diagnostics.ProcessStartInfo("TheProgram.exe", String.Join(" ", args)); info.CreateNoWindow = true; info.WindowStyle = System.Diagnostics.ProcessWindowStyle.Hidden; info.RedirectStandardOutput = true; info.UseShellExecute = false; System.Diagnostics.Process p = System.Diagnostics.Process.Start(info); p.WaitForExit(); Console.WriteLine(p.StandardOutput.ReadToEnd()); //need the StandardOutput contents I know that the output from the process I am starting is around 7MB long. Running it in the Windows console works fine. Unfortunately programmatically this hangs indefinitely at WaitForExit. Note also this code does NOT hang for smaller outputs (like 3KB). Is it possible that the internal StandardOutput in ProcessStartInfo can't buffer 7MB? If so, what should I do instead? If not, what am I doing wrong? A: We have this issue as well (or a variant). Try the following: 1) Add a timeout to p.WaitForExit(nnnn); where nnnn is in milliseconds. 2) Put the ReadToEnd call before the WaitForExit call. This is what we've seen MS recommend. A: Credit to EM0 for https://stackoverflow.com/a/17600012/4151626 The other solutions (including EM0's) still deadlocked for my application, due to internal timeouts and the use of both StandardOutput and StandardError by the spawned application. 
Here is what worked for me: Process p = new Process() { StartInfo = new ProcessStartInfo() { FileName = exe, Arguments = args, UseShellExecute = false, RedirectStandardOutput = true, RedirectStandardError = true } }; p.Start(); string cv_error = null; Thread et = new Thread(() => { cv_error = p.StandardError.ReadToEnd(); }); et.Start(); string cv_out = null; Thread ot = new Thread(() => { cv_out = p.StandardOutput.ReadToEnd(); }); ot.Start(); p.WaitForExit(); ot.Join(); et.Join(); Edit: added initialization of StartInfo to code sample A: The problem is that if you redirect StandardOutput and/or StandardError the internal buffer can become full. Whatever order you use, there can be a problem: * *If you wait for the process to exit before reading StandardOutput the process can block trying to write to it, so the process never ends. *If you read from StandardOutput using ReadToEnd then your process can block if the process never closes StandardOutput (for example if it never terminates, or if it is blocked writing to StandardError). The solution is to use asynchronous reads to ensure that the buffer doesn't get full. To avoid any deadlocks and collect up all output from both StandardOutput and StandardError you can do this: EDIT: See answers below for how to avoid an ObjectDisposedException if the timeout occurs. 
using (Process process = new Process()) { process.StartInfo.FileName = filename; process.StartInfo.Arguments = arguments; process.StartInfo.UseShellExecute = false; process.StartInfo.RedirectStandardOutput = true; process.StartInfo.RedirectStandardError = true; StringBuilder output = new StringBuilder(); StringBuilder error = new StringBuilder(); using (AutoResetEvent outputWaitHandle = new AutoResetEvent(false)) using (AutoResetEvent errorWaitHandle = new AutoResetEvent(false)) { process.OutputDataReceived += (sender, e) => { if (e.Data == null) { outputWaitHandle.Set(); } else { output.AppendLine(e.Data); } }; process.ErrorDataReceived += (sender, e) => { if (e.Data == null) { errorWaitHandle.Set(); } else { error.AppendLine(e.Data); } }; process.Start(); process.BeginOutputReadLine(); process.BeginErrorReadLine(); if (process.WaitForExit(timeout) && outputWaitHandle.WaitOne(timeout) && errorWaitHandle.WaitOne(timeout)) { // Process completed. Check process.ExitCode here. } else { // Timed out. } } } A: I solved it this way: Process proc = new Process(); proc.StartInfo.FileName = batchFile; proc.StartInfo.UseShellExecute = false; proc.StartInfo.CreateNoWindow = true; proc.StartInfo.RedirectStandardError = true; proc.StartInfo.RedirectStandardInput = true; proc.StartInfo.RedirectStandardOutput = true; proc.StartInfo.WindowStyle = ProcessWindowStyle.Hidden; proc.Start(); StreamWriter streamWriter = proc.StandardInput; StreamReader outputReader = proc.StandardOutput; StreamReader errorReader = proc.StandardError; while (!outputReader.EndOfStream) { string text = outputReader.ReadLine(); streamWriter.WriteLine(text); } while (!errorReader.EndOfStream) { string text = errorReader.ReadLine(); streamWriter.WriteLine(text); } streamWriter.Close(); proc.WaitForExit(); I redirected both input, output and error and handled reading from output and error streams. 
This solution works for SDK 7-8.1, both for Windows 7 and Windows 8 A: I tried to make a class that would solve your problem using asynchronous stream reads, taking into account Mark Byers', Rob's, and stevejay's answers. Doing so I realised that there is a bug related to asynchronous process output stream reads. I reported that bug at Microsoft: https://connect.microsoft.com/VisualStudio/feedback/details/3119134 Summary: You can't do that: process.BeginOutputReadLine(); process.Start(); You will receive System.InvalidOperationException : StandardOut has not been redirected or the process hasn't started yet. ============================================================================================================================ Then you have to start the asynchronous output read after the process is started: process.Start(); process.BeginOutputReadLine(); Doing so creates a race condition, because the output stream can receive data before you set it to asynchronous: process.Start(); // Here the operating system could give the cpu to another thread, // for example the newly created process, which could start writing to the output // immediately, before the next line executes. // That creates a race condition. process.BeginOutputReadLine(); ============================================================================================================================ Then some people could say that you just have to read the stream before you set it to asynchronous. But the same problem occurs. There will be a race condition between the synchronous read and setting the stream into asynchronous mode. ============================================================================================================================ There is no way to achieve a safe asynchronous read of the output stream of a process in the way "Process" and "ProcessStartInfo" have been designed. You are probably better off using asynchronous reads as suggested by other users for your case. 
But you should be aware that you could miss some information due to race condition. A: This is a more modern awaitable, Task Parallel Library (TPL) based solution for .NET 4.5 and above. Usage Example try { var exitCode = await StartProcess( "dotnet", "--version", @"C:\", 10000, Console.Out, Console.Out); Console.WriteLine($"Process Exited with Exit Code {exitCode}!"); } catch (TaskCanceledException) { Console.WriteLine("Process Timed Out!"); } Implementation public static async Task<int> StartProcess( string filename, string arguments, string workingDirectory= null, int? timeout = null, TextWriter outputTextWriter = null, TextWriter errorTextWriter = null) { using (var process = new Process() { StartInfo = new ProcessStartInfo() { CreateNoWindow = true, Arguments = arguments, FileName = filename, RedirectStandardOutput = outputTextWriter != null, RedirectStandardError = errorTextWriter != null, UseShellExecute = false, WorkingDirectory = workingDirectory } }) { var cancellationTokenSource = timeout.HasValue ? new CancellationTokenSource(timeout.Value) : new CancellationTokenSource(); process.Start(); var tasks = new List<Task>(3) { process.WaitForExitAsync(cancellationTokenSource.Token) }; if (outputTextWriter != null) { tasks.Add(ReadAsync( x => { process.OutputDataReceived += x; process.BeginOutputReadLine(); }, x => process.OutputDataReceived -= x, outputTextWriter, cancellationTokenSource.Token)); } if (errorTextWriter != null) { tasks.Add(ReadAsync( x => { process.ErrorDataReceived += x; process.BeginErrorReadLine(); }, x => process.ErrorDataReceived -= x, errorTextWriter, cancellationTokenSource.Token)); } await Task.WhenAll(tasks); return process.ExitCode; } } /// <summary> /// Waits asynchronously for the process to exit. /// </summary> /// <param name="process">The process to wait for cancellation.</param> /// <param name="cancellationToken">A cancellation token. 
If invoked, the task will return /// immediately as cancelled.</param> /// <returns>A Task representing waiting for the process to end.</returns> public static Task WaitForExitAsync( this Process process, CancellationToken cancellationToken = default(CancellationToken)) { process.EnableRaisingEvents = true; var taskCompletionSource = new TaskCompletionSource<object>(); EventHandler handler = null; handler = (sender, args) => { process.Exited -= handler; taskCompletionSource.TrySetResult(null); }; process.Exited += handler; if (cancellationToken != default(CancellationToken)) { cancellationToken.Register( () => { process.Exited -= handler; taskCompletionSource.TrySetCanceled(); }); } return taskCompletionSource.Task; } /// <summary> /// Reads the data from the specified data recieved event and writes it to the /// <paramref name="textWriter"/>. /// </summary> /// <param name="addHandler">Adds the event handler.</param> /// <param name="removeHandler">Removes the event handler.</param> /// <param name="textWriter">The text writer.</param> /// <param name="cancellationToken">The cancellation token.</param> /// <returns>A task representing the asynchronous operation.</returns> public static Task ReadAsync( this Action<DataReceivedEventHandler> addHandler, Action<DataReceivedEventHandler> removeHandler, TextWriter textWriter, CancellationToken cancellationToken = default(CancellationToken)) { var taskCompletionSource = new TaskCompletionSource<object>(); DataReceivedEventHandler handler = null; handler = new DataReceivedEventHandler( (sender, e) => { if (e.Data == null) { removeHandler(handler); taskCompletionSource.TrySetResult(null); } else { textWriter.WriteLine(e.Data); } }); addHandler(handler); if (cancellationToken != default(CancellationToken)) { cancellationToken.Register( () => { removeHandler(handler); taskCompletionSource.TrySetCanceled(); }); } return taskCompletionSource.Task; } A: Mark Byers' answer is excellent, but I would just add the following: The 
OutputDataReceived and ErrorDataReceived delegates need to be removed before the outputWaitHandle and errorWaitHandle get disposed. If the process continues to output data after the timeout has been exceeded and then terminates, the outputWaitHandle and errorWaitHandle variables will be accessed after being disposed. (FYI I had to add this caveat as an answer as I couldn't comment on his post.) A: I think that with async it is possible to have a more elegant solution without deadlocks, even when using both standardOutput and standardError: using (Process process = new Process()) { process.StartInfo.FileName = filename; process.StartInfo.Arguments = arguments; process.StartInfo.UseShellExecute = false; process.StartInfo.RedirectStandardOutput = true; process.StartInfo.RedirectStandardError = true; process.Start(); var tStandardOutput = process.StandardOutput.ReadToEndAsync(); var tStandardError = process.StandardError.ReadToEndAsync(); if (process.WaitForExit(timeout)) { string output = await tStandardOutput; string errors = await tStandardError; // Process completed. Check process.ExitCode here. } else { // Timed out. } } It is based on Mark Byers' answer. If you are not in an async method, you can use string output = tStandardOutput.Result; instead of await A: I've read many of the answers and made my own. Not sure this one will fix every case, but it works in my environment. I'm just not using WaitForExit and use WaitHandle.WaitAll on both output & error end signals. I will be glad if someone sees possible problems with it. Or if it helps someone. For me it's better because it does not use timeouts. 
private static int DoProcess(string workingDir, string fileName, string arguments) { int exitCode; using (var process = new Process { StartInfo = { WorkingDirectory = workingDir, WindowStyle = ProcessWindowStyle.Hidden, CreateNoWindow = true, UseShellExecute = false, FileName = fileName, Arguments = arguments, RedirectStandardError = true, RedirectStandardOutput = true }, EnableRaisingEvents = true }) { using (var outputWaitHandle = new AutoResetEvent(false)) using (var errorWaitHandle = new AutoResetEvent(false)) { process.OutputDataReceived += (sender, args) => { // ReSharper disable once AccessToDisposedClosure if (args.Data != null) Debug.Log(args.Data); else outputWaitHandle.Set(); }; process.ErrorDataReceived += (sender, args) => { // ReSharper disable once AccessToDisposedClosure if (args.Data != null) Debug.LogError(args.Data); else errorWaitHandle.Set(); }; process.Start(); process.BeginOutputReadLine(); process.BeginErrorReadLine(); WaitHandle.WaitAll(new WaitHandle[] { outputWaitHandle, errorWaitHandle }); exitCode = process.ExitCode; } } return exitCode; } A: The problem with unhandled ObjectDisposedException happens when the process is timed out. In such case the other parts of the condition: if (process.WaitForExit(timeout) && outputWaitHandle.WaitOne(timeout) && errorWaitHandle.WaitOne(timeout)) are not executed. 
I resolved this problem in the following way: using (AutoResetEvent outputWaitHandle = new AutoResetEvent(false)) using (AutoResetEvent errorWaitHandle = new AutoResetEvent(false)) { using (Process process = new Process()) { // preparing ProcessStartInfo try { process.OutputDataReceived += (sender, e) => { if (e.Data == null) { outputWaitHandle.Set(); } else { outputBuilder.AppendLine(e.Data); } }; process.ErrorDataReceived += (sender, e) => { if (e.Data == null) { errorWaitHandle.Set(); } else { errorBuilder.AppendLine(e.Data); } }; process.Start(); process.BeginOutputReadLine(); process.BeginErrorReadLine(); if (process.WaitForExit(timeout)) { exitCode = process.ExitCode; } else { // timed out } output = outputBuilder.ToString(); } finally { outputWaitHandle.WaitOne(timeout); errorWaitHandle.WaitOne(timeout); } } } A: The documentation for Process.StandardOutput says to read before you wait, otherwise you can deadlock; snippet copied below: // Start the child process. Process p = new Process(); // Redirect the output stream of the child process. p.StartInfo.UseShellExecute = false; p.StartInfo.RedirectStandardOutput = true; p.StartInfo.FileName = "Write500Lines.exe"; p.Start(); // Do not wait for the child process to exit before // reading to the end of its redirected stream. // p.WaitForExit(); // Read the output stream first and then wait. string output = p.StandardOutput.ReadToEnd(); p.WaitForExit(); A: Rob answered it and saved me a few more hours of trials. Read the output/error buffer before waiting: // Read the output stream first and then wait. 
string output = p.StandardOutput.ReadToEnd(); p.WaitForExit(); A: I think this is a simpler and better approach (we don't need AutoResetEvent): public static string GGSCIShell(string Path, string Command) { using (Process process = new Process()) { process.StartInfo.WorkingDirectory = Path; process.StartInfo.FileName = Path + @"\ggsci.exe"; process.StartInfo.CreateNoWindow = true; process.StartInfo.RedirectStandardOutput = true; process.StartInfo.RedirectStandardInput = true; process.StartInfo.UseShellExecute = false; StringBuilder output = new StringBuilder(); process.OutputDataReceived += (sender, e) => { if (e.Data != null) { output.AppendLine(e.Data); } }; process.Start(); process.StandardInput.WriteLine(Command); process.BeginOutputReadLine(); int timeoutParts = 10; int timeoutPart = (int)TIMEOUT / timeoutParts; do { Thread.Sleep(500); // sometimes half a second is enough to empty the output buffer (therefore "exit" will be accepted without "timeoutPart" waiting) process.StandardInput.WriteLine("exit"); timeoutParts--; } while (!process.WaitForExit(timeoutPart) && timeoutParts > 0); if (timeoutParts <= 0) { output.AppendLine("------ GGSCIShell TIMEOUT: " + TIMEOUT + "ms ------"); } string result = output.ToString(); return result; } } A: None of the answers above did the job for me. Rob's solution hangs and Mark Byers' solution gets the disposed exception. (I tried the "solutions" of the other answers.) 
So I decided to suggest another solution: public void GetProcessOutputWithTimeout(Process process, int timeoutSec, CancellationToken token, out string output, out int exitCode) { string outputLocal = ""; int localExitCode = -1; var task = System.Threading.Tasks.Task.Factory.StartNew(() => { outputLocal = process.StandardOutput.ReadToEnd(); process.WaitForExit(); localExitCode = process.ExitCode; }, token); if (task.Wait(timeoutSec, token)) { output = outputLocal; exitCode = localExitCode; } else { exitCode = -1; output = ""; } } using (var process = new Process()) { process.StartInfo = ...; process.Start(); string outputUnicode; int exitCode; GetProcessOutputWithTimeout(process, PROCESS_TIMEOUT, out outputUnicode, out exitCode); } This code is debugged and works perfectly. A: Introduction The currently accepted answer doesn't work (throws an exception) and there are too many workarounds but no complete code. This is obviously wasting lots of people's time because this is a popular question. Combining Mark Byers' answer and Karol Tyl's answer I wrote the full code based on how I want to use the Process.Start method. Usage I have used it to create a progress dialog around git commands. This is how I've used it: private bool Run(string fullCommand) { Error = ""; int timeout = 5000; var result = ProcessNoBS.Start( filename: @"C:\Program Files\Git\cmd\git.exe", arguments: fullCommand, timeoutInMs: timeout, workingDir: @"C:\test"); if (result.hasTimedOut) { Error = String.Format("Timeout ({0} sec)", timeout/1000); return false; } if (result.ExitCode != 0) { Error = (String.IsNullOrWhiteSpace(result.stderr)) ? result.stdout : result.stderr; return false; } return true; } In theory you can also combine stdout and stderr, but I haven't tested that. Code public struct ProcessResult { public string stdout; public string stderr; public bool hasTimedOut; private int? 
exitCode; public ProcessResult(bool hasTimedOut = true) { this.hasTimedOut = hasTimedOut; stdout = null; stderr = null; exitCode = null; } public int ExitCode { get { if (hasTimedOut) throw new InvalidOperationException( "There was no exit code - process has timed out."); return (int)exitCode; } set { exitCode = value; } } } public class ProcessNoBS { public static ProcessResult Start(string filename, string arguments, string workingDir = null, int timeoutInMs = 5000, bool combineStdoutAndStderr = false) { using (AutoResetEvent outputWaitHandle = new AutoResetEvent(false)) using (AutoResetEvent errorWaitHandle = new AutoResetEvent(false)) { using (var process = new Process()) { var info = new ProcessStartInfo(); info.CreateNoWindow = true; info.FileName = filename; info.Arguments = arguments; info.UseShellExecute = false; info.RedirectStandardOutput = true; info.RedirectStandardError = true; if (workingDir != null) info.WorkingDirectory = workingDir; process.StartInfo = info; StringBuilder stdout = new StringBuilder(); StringBuilder stderr = combineStdoutAndStderr ? 
stdout : new StringBuilder(); var result = new ProcessResult(); try { process.OutputDataReceived += (sender, e) => { if (e.Data == null) outputWaitHandle.Set(); else stdout.AppendLine(e.Data); }; process.ErrorDataReceived += (sender, e) => { if (e.Data == null) errorWaitHandle.Set(); else stderr.AppendLine(e.Data); }; process.Start(); process.BeginOutputReadLine(); process.BeginErrorReadLine(); if (process.WaitForExit(timeoutInMs)) result.ExitCode = process.ExitCode; // else process has timed out // but that's already default ProcessResult result.stdout = stdout.ToString(); if (combineStdoutAndStderr) result.stderr = null; else result.stderr = stderr.ToString(); return result; } finally { outputWaitHandle.WaitOne(timeoutInMs); errorWaitHandle.WaitOne(timeoutInMs); } } } } } A: I know that this is supper old but, after reading this whole page none of the solutions was working for me, although I didn't try Muhammad Rehan as the code was a little hard to follow, although I guess he was on the right track. When I say it didn't work that's not entirely true, sometimes it would work fine, I guess it is something to do with the length of the output before an EOF mark. Anyway, the solution that worked for me was to use different threads to read the StandardOutput and StandardError and write the messages. 
StreamWriter sw = null;
var queue = new ConcurrentQueue<string>();
var flushTask = new System.Timers.Timer(50);
flushTask.Elapsed += (s, e) =>
{
    while (!queue.IsEmpty)
    {
        string line = null;
        if (queue.TryDequeue(out line))
            sw.WriteLine(line);
    }
    sw.FlushAsync();
};
flushTask.Start();

using (var process = new Process())
{
    try
    {
        process.StartInfo.FileName = @"...";
        process.StartInfo.Arguments = $"...";
        process.StartInfo.UseShellExecute = false;
        process.StartInfo.RedirectStandardOutput = true;
        process.StartInfo.RedirectStandardError = true;
        process.Start();

        var outputRead = Task.Run(() =>
        {
            while (!process.StandardOutput.EndOfStream)
            {
                queue.Enqueue(process.StandardOutput.ReadLine());
            }
        });
        var errorRead = Task.Run(() =>
        {
            while (!process.StandardError.EndOfStream)
            {
                queue.Enqueue(process.StandardError.ReadLine());
            }
        });

        var timeout = new TimeSpan(hours: 0, minutes: 10, seconds: 0);
        if (Task.WaitAll(new[] { outputRead, errorRead }, timeout) &&
            process.WaitForExit((int)timeout.TotalMilliseconds))
        {
            if (process.ExitCode != 0)
            {
                throw new Exception($"Failed run... blah blah");
            }
        }
        else
        {
            throw new Exception($"process timed out after waiting {timeout}");
        }
    }
    catch (Exception e)
    {
        throw new Exception($"Failed to successfully run the process.....", e);
    }
}

Hope this helps someone who thought this could be so hard!

A: After reading all the posts here, I settled on the consolidated solution of Marko Avlijaš. However, it did not solve all of my issues.

In our environment we have a Windows Service which is scheduled to run hundreds of different .bat, .cmd, .exe, etc. files which have accumulated over the years and were written by many different people in different styles. We have no control over the writing of the programs & scripts; we are just responsible for scheduling, running, and reporting on success/failure.

So I tried pretty much all of the suggestions here with different levels of success.
Marko's answer was almost perfect, but when run as a service it didn't always capture stdout. I never got to the bottom of why not.

The only solution we found that works in ALL our cases is this: http://csharptest.net/319/using-the-processrunner-class/index.html

A: Workaround I ended up using to avoid all the complexity:

var outputFile = Path.GetTempFileName();
info = new System.Diagnostics.ProcessStartInfo("TheProgram.exe",
    String.Join(" ", args) + " > " + outputFile + " 2>&1");
info.CreateNoWindow = true;
info.WindowStyle = System.Diagnostics.ProcessWindowStyle.Hidden;
info.UseShellExecute = false;

System.Diagnostics.Process p = System.Diagnostics.Process.Start(info);
p.WaitForExit();

Console.WriteLine(File.ReadAllText(outputFile)); // need the StandardOutput contents

So I create a temp file, redirect both the output and error to it by using > outputFile 2>&1, and then just read the file after the process has finished. The other solutions are fine for scenarios where you want to do other stuff with the output, but for simple stuff this avoids a lot of complexity.

A: In my case I had an error, so I just waited in vain for normal output. I switched the order from this:

string result = process.StandardOutput.ReadToEnd();
string error = process.StandardError.ReadToEnd();

To this:

string error = process.StandardError.ReadToEnd();
string result = "";
if (string.IsNullOrEmpty(error))
    result = process.StandardOutput.ReadToEnd();

A: I was having the same issue, but the reason was different. It would happen under Windows 8, but not under Windows 7. The following line seems to have caused the problem:

pProcess.StartInfo.UseShellExecute = False

The solution was to NOT disable UseShellExecute. I now received a shell popup window, which is unwanted, but much better than the program waiting for nothing in particular to happen.
So I added the following work-around for that:

pProcess.StartInfo.WindowStyle = ProcessWindowStyle.Hidden

Now the only thing bothering me is why this is happening under Windows 8 in the first place.

A: This post may be outdated, but I found out that the main cause why it usually hangs is the buffer filling up for the redirected standard output (or standard error). As the output data or the error data grows large, it causes a hang, as it is still processing for an indefinite duration. So to resolve this issue:

p.StartInfo.RedirectStandardOutput = False
p.StartInfo.RedirectStandardError = False

A: Let us call the sample code posted here the redirector and the other program the redirected. If it were me then I would probably write a test redirected program that can be used to duplicate the problem.

So I did. For test data I used the ECMA-334 C# Language Specification PDF; it is about 5 MB. The following is the important part of that.

StreamReader stream = null;
try
{
    stream = new StreamReader(Path);
}
catch (Exception ex)
{
    Console.Error.WriteLine("Input open error: " + ex.Message);
    return;
}
Console.SetIn(stream);

int datasize = 0;
try
{
    string record = Console.ReadLine();
    while (record != null)
    {
        datasize += record.Length + 2;
        Console.WriteLine(record); // echo the current record, then read the next
        record = Console.ReadLine();
    }
}
catch (Exception ex)
{
    Console.Error.WriteLine($"Error: {ex.Message}");
    return;
}

The datasize value does not match the actual file size, but that does not matter. It is not clear if a PDF file always uses both CR and LF at the end of lines, but that does not matter for this. You can use any other large text file to test with.

Using that, the sample redirector code hangs when I write the large amount of data but not when I write a small amount. I tried very hard to somehow trace the execution of that code and I could not.
I commented out the lines of the redirector program that disabled creation of a console for the redirected program, to try to get a separate console window, but I could not. Then I found How to start a console app in a new window, the parent's window, or no window. So apparently we cannot (easily) have a separate console when one console program starts another console program without ShellExecute, and since ShellExecute does not support redirection, we must share a console, even if we specify no window for the other process.

I assume that if the redirected program fills up a buffer somewhere then it must wait for the data to be read, and if at that point no data is read by the redirector, then it is a deadlock. The solution is to not use ReadToEnd and to read the data while the data is being written, but it is not necessary to use asynchronous reads. The solution can be quite simple. The following works for me with the 5 MB PDF:

ProcessStartInfo info = new ProcessStartInfo(TheProgram);
info.CreateNoWindow = true;
info.WindowStyle = System.Diagnostics.ProcessWindowStyle.Hidden;
info.RedirectStandardOutput = true;
info.UseShellExecute = false;

Process p = Process.Start(info);

string record = p.StandardOutput.ReadLine();
while (record != null)
{
    Console.WriteLine(record);
    record = p.StandardOutput.ReadLine();
}
p.WaitForExit();

Another possibility is to use a GUI program to do the redirection. The preceding code works in a WPF application with obvious modifications.
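The same deadlock-avoidance ideas discussed in this thread (drain the pipes while the child is still writing, and bound the wait with a timeout) can be sketched in Python's standard library, where subprocess.communicate reads both pipes concurrently for you. This is a hedged analogue, not a translation of any answer above:

```python
import subprocess
import sys

def run_with_timeout(args, timeout_s):
    """Return (stdout, stderr, exit_code); exit_code is None on timeout."""
    proc = subprocess.Popen(args, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, text=True)
    try:
        # communicate() drains stdout and stderr concurrently, so a child
        # that writes more than one OS pipe buffer of data cannot deadlock us
        # the way a blocking ReadToEnd-after-WaitForExit pattern can.
        out, err = proc.communicate(timeout=timeout_s)
        return out, err, proc.returncode
    except subprocess.TimeoutExpired:
        proc.kill()           # reap the child so it doesn't linger
        proc.communicate()
        return "", "", None

# A child that prints ~100k lines - far more than an OS pipe buffer holds.
out, err, code = run_with_timeout(
    [sys.executable, "-c", "for i in range(100000): print(i)"], 30)
print(len(out.splitlines()), code)
```

The same caveat from the C# answers applies: if you read only one of two redirected pipes yourself, the unread one can still fill and block the child.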
{ "language": "en", "url": "https://stackoverflow.com/questions/139593", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "218" }
Q: Where to find package names and versions for RedHat? How can I find out whether a specific RedHat release (RHEL4, RHEL5...) contains a certain package (or a certain version of a package)? For Debian and Ubuntu, there's packages.debian.org and packages.ubuntu.com; is there a similar web site for RedHat? Note: I don't want to have to install all the releases just to check some package version :-)

A: For starters: http://distrowatch.com/table.php?distribution=redhat has a list of the important versions, but certainly does not list all package versions.

A: Apparently Red Hat does have such an online package list for its RHEL distros, but it's only accessible for paying Red Hat customers :-( Btw. similar question (for CentOS): https://serverfault.com/questions/239205/official-online-rpm-package-browser-search-for-centos

A: The best you can do, as it seems, is to check the FTP directories of the source packages:

* RHEL4
* RHEL5
* RHEL6
* RHEL7

Keep in mind that RedHat has the habit of patching software to hell and back, so the version number might not have too much in common with the actual release, especially for the kernel.

A: Since RedHat and CENTOS are very similar, you may be able to get a decent list from looking at the packages in the CENTOS repository for the corresponding version. Go to one of the following sites, go to your RedHat / CENTOS version, and locate the packages. The CENTOS versions coincide with the RedHat versions.

http://vault.centos.org/
http://mirrors.centos.org/

For example, for an i386 system with RedHat 6.3 (or CENTOS 6.3), check out the following site: http://vault.centos.org/6.3/os/i386/Packages/

I am not guaranteeing it will be a 1 to 1 mapping, but it should be very close.

A: If you have your yum repositories set, you can use "yum search <package-name>" to see if it exists, or "yum list" to see all available packages.
A: Looks like Fedora is now working on an online list of packages (PackageDB - see http://fedoraproject.org/wiki/Infrastructure/PackageDatabase). The current version is available at https://admin.fedoraproject.org/pkgdb/ . It doesn't seem to know about all RHEL4/RHEL5 packages, though. A: Note: by "browsing" an rsync:// mirror instead of an ftp:// mirror you can conveniently filter files using wildcards instead of having to wait minutes to download the massive and comprehensive list of all packages.
{ "language": "en", "url": "https://stackoverflow.com/questions/139605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: What is the difference between myCustomer.GetType() and typeof(Customer) in C#? I've seen both done in some code I'm maintaining, but don't know the difference. Is there one? Let me add that myCustomer is an instance of Customer.

A: For the first, you need an actual instance (i.e. myCustomer); for the second you don't.

A: typeof(foo) is resolved at compile time. foo.GetType() happens at runtime. typeof(foo) also refers directly to the exact type foo, so doing this would fail:

public class foo { }
public class bar : foo { }

bar myBar = new bar();

// Would fail, even though bar is a child of foo.
if (myBar.GetType() == typeof(foo))

// However this would work
if (myBar is foo)

A: GetType() is used to find the actual type of an object reference at run-time. This can be different from the type of the variable that references the object, because of inheritance. typeof() creates a Type literal that is of the exact type specified and is determined at compile-time.

A: typeof is executed at compile time while GetType at runtime. That's what is so different about these two methods. That's why, when you deal with a type hierarchy, you can find out the exact type name of an object simply by calling GetType.

public Type WhoAreYou(Base b)
{
    return b.GetType();
}

A: Yes, there is a difference if you have an inherited type from Customer.

class VipCustomer : Customer
{
    .....
}

static void Main()
{
    Customer c = new VipCustomer();
    c.GetType(); // returns typeof(VipCustomer)
}

A: The result of both is exactly the same in your case: a System.Type instance describing Customer. The only real difference here is that when you want to obtain the type from an instance of your class, you use GetType. If you don't have an instance, but you know the type name (and just need the actual System.Type to inspect or compare to), you would use typeof.
Important difference

EDIT: Let me add that the call to GetType gets resolved at runtime, while typeof is resolved at compile time.

A: The typeof operator takes a type as a parameter. It is resolved at compile time. The GetType method is invoked on an object and is resolved at run time. The first is used when you need to use a known Type, the second is to get the type of an object when you don't know what it is.

class BaseClass { }
class DerivedClass : BaseClass { }

class FinalClass
{
    static void RevealType(BaseClass baseCla)
    {
        Console.WriteLine(typeof(BaseClass)); // compile time
        Console.WriteLine(baseCla.GetType()); // run time
    }

    static void Main(string[] str)
    {
        RevealType(new BaseClass());
        Console.ReadLine();
    }
}
// ********* By Praveen Kumar Srivastava
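The compile-time vs. run-time distinction above is C#-specific, but the inheritance pitfall it warns about — the exact type of an instance differing from its declared base — can be illustrated with Python's type() and isinstance() (hypothetical class names, sketched by analogy rather than taken from the answers):

```python
class Customer:
    pass

class VipCustomer(Customer):
    pass

c = Customer()
vip = VipCustomer()

# type(obj) plays the role of obj.GetType(): the exact run-time type.
assert type(vip) is VipCustomer
assert type(vip) is not Customer   # exact-type comparison fails for a subclass

# isinstance plays the role of the C# 'is' operator: it respects inheritance.
assert isinstance(vip, Customer)

print(type(c).__name__, type(vip).__name__)
```

As in the C# answers, use the exact-type check only when a subclass genuinely must not match; otherwise prefer the inheritance-aware check.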
{ "language": "en", "url": "https://stackoverflow.com/questions/139607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "75" }
Q: Does the CAutoPtr class implement reference counting? Modern ATL/MFC applications now have access to a new shared pointer class called CAutoPtr, and associated containers (CAutoPtrArray, CAutoPtrList, etc.). Does the CAutoPtr class implement reference counting?

A: Having checked the CAutoPtr source, no, reference counting is not supported. Use boost::shared_ptr instead if this ability is required.

A: The documentation is at http://msdn.microsoft.com/en-us/library/txda4x5t(VS.80).aspx

From reading this, it looks like it tries to provide the same functionality as std::auto_ptr, i.e. it uses ownership semantics. Only one CAutoPtr object holds the pointer, and assignment transfers ownership from one CAutoPtr object to another.
{ "language": "en", "url": "https://stackoverflow.com/questions/139622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Delayed jump to a new web page How do I cause the page to make the user jump to a new web page after X seconds? If possible I'd like to use HTML, but a niggly feeling tells me it'll have to be JavaScript. So far I have the following, but it has no time delay:

<body onload="document.location='newPage.html'">

A: If you are going the JS route just use

setTimeout("window.location.href = 'newPage.html';", 5000);

A: A meta refresh is ugly but will work. The following will go to the new url after 5 seconds:

<meta http-equiv="refresh" content="5;url=http://example.com/"/>

http://en.wikipedia.org/wiki/Meta_refresh

A: Put this in the head:

<meta http-equiv="refresh" content="5;url=newPage.html">

This will redirect after 5 seconds. Use 0 to redirect on load.

A: You can use good ole' META REFRESH, no JS required, although those are (I think) deprecated.

A: The Meta Refresh is the way to go, but here is the JavaScript solution:

<body onload="setTimeout('window.location = \'newpage.html\'', 5000)">

More details can be found here.

A: The JavaScript method, without invoking eval in the setTimeout:

<body onload="setTimeout(function(){window.location.href='newpage.html'}, 5000)">
{ "language": "en", "url": "https://stackoverflow.com/questions/139623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What's the difference between TRUNCATE and DELETE in SQL What's the difference between TRUNCATE and DELETE in SQL? If your answer is platform specific, please indicate that.

A: Yes, DELETE is slower, TRUNCATE is faster. Why?

DELETE must read the records, check constraints, update the block, update indexes, and generate redo/undo. All of that takes time. TRUNCATE simply adjusts a pointer in the database for the table (the High Water Mark) and poof! the data is gone. This is Oracle specific, AFAIK.

A: DROP

The DROP command removes a table from the database. All the table's rows, indexes and privileges will also be removed. No DML triggers will be fired. The operation cannot be rolled back.

TRUNCATE

TRUNCATE removes all rows from a table. The operation cannot be rolled back and no triggers will be fired. As such, TRUNCATE is faster and doesn't use as much undo space as a DELETE. A table-level lock will be added when truncating.

DELETE

The DELETE command is used to remove rows from a table. A WHERE clause can be used to only remove some rows. If no WHERE condition is specified, all rows will be removed. After performing a DELETE operation you need to COMMIT or ROLLBACK the transaction to make the change permanent or to undo it. Note that this operation will cause all DELETE triggers on the table to fire. A row-level lock will be added when deleting.

From: http://www.orafaq.com/faq/difference_between_truncate_delete_and_drop_commands

A: If you accidentally removed all the data from a table using Delete/Truncate, you can recover: restore the last backup and run the transaction log up to the time just before the Delete/Truncate happened.

The related information below is from a blog post:

While working on databases, we use Delete and Truncate without knowing the differences between them. In this article we will discuss the difference between Delete and Truncate in Sql.

Delete:

* Delete is a DML command.
* The Delete statement is executed using a row lock; each row in the table is locked for deletion.
* We can specify filters in the where clause.
* It deletes only the specified data if a where condition exists.
* Delete activates a trigger because the operations are logged individually.
* Slower than Truncate because it keeps logs.

Truncate:

* Truncate is a DDL command.
* Truncate table always locks the table and page but not each row, as it removes all the data.
* Cannot use a Where condition.
* It removes all the data.
* Truncate table cannot activate a trigger because the operation does not log individual row deletions.
* Faster performance-wise, because it doesn't keep any logs.

Note: Delete and Truncate can both be rolled back when used within a Transaction. Once the transaction is committed, we cannot roll back the Truncate command, but we can still roll back the Delete command from log files, as delete writes records to the log file in case a rollback is needed in the future.

If you have a foreign key constraint referring to the table you are trying to truncate, this won't work even if the referring table has no data in it. This is because the foreign key checking is done with DDL rather than DML. This can be worked around by temporarily disabling the foreign key constraint(s) on the table.

Delete is a logged operation, so the deletion of each row gets logged in the transaction log, which makes it slow. Truncate also deletes all the rows in a table, but it doesn't log the deletion of each row; instead it logs the deallocation of the table's data pages, which makes it faster.
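The transactional DELETE behavior described in the note above can be demonstrated with Python's built-in sqlite3 module. (SQLite has no TRUNCATE statement at all, so this sketch only illustrates the DELETE side: a delete issued inside a transaction is fully reversible until COMMIT.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO customers (name) VALUES (?)",
                 [("alice",), ("bob",), ("carol",)])
conn.commit()

# DELETE inside a transaction can be rolled back before COMMIT.
conn.execute("DELETE FROM customers WHERE name = 'bob'")
assert conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0] == 2

conn.rollback()  # undo the delete
count = conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
print(count)     # all three rows are back
```

This relies on sqlite3's default behavior of implicitly opening a transaction before a DML statement; with autocommit enabled the rollback would have nothing to undo.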
But a DELETE can be used to remove the rows not only from a Table but also from a VIEW or the result of an OPENROWSET or OPENQUERY subject to provider capabilities. • FROM Clause : With DELETE you can also delete rows from one table/view/rowset_function_limited based on rows from another table by using another FROM clause. In that FROM clause you can also write normal JOIN conditions. Actually you can create a DELETE statement from a SELECT statement that doesn’t contain any aggregate functions by replacing SELECT with DELETE and removing column names. With TRUNCATE you can’t do that. • WHERE : A TRUNCATE cannot have WHERE Conditions, but a DELETE can. That means with TRUNCATE you can’t delete a specific row or specific group of rows. TRUNCATE TABLE is similar to the DELETE statement with no WHERE clause. • Performance : TRUNCATE TABLE is faster and uses fewer system and transaction log resources. And one of the reason is locks used by either statements. The DELETE statement is executed using a row lock, each row in the table is locked for deletion. TRUNCATE TABLE always locks the table and page but not each row. • Transaction log : DELETE statement removes rows one at a time and makes individual entries in the transaction log for each row. TRUNCATE TABLE removes the data by deallocating the data pages used to store the table data and records only the page deallocations in the transaction log. • Pages : After a DELETE statement is executed, the table can still contain empty pages. TRUNCATE removes the data by deallocating the data pages used to store the table data. • Trigger : TRUNCATE does not activate the delete triggers on the table. So you must be very careful while using TRUNCATE. One should never use a TRUNCATE if delete Trigger is defined on the table to do some automatic cleanup or logging action when rows are deleted. 
• Identity Column: With TRUNCATE, if the table contains an identity column, the counter for that column is reset to the seed value defined for the column. If no seed was defined, the default value 1 is used. DELETE doesn't reset the identity counter. So if you want to retain the identity counter, use DELETE instead.

• Replication: DELETE can be used against a table used in transactional replication or merge replication, while TRUNCATE cannot be used against tables involved in transactional replication or merge replication.

• Rollback: A DELETE statement can be rolled back. TRUNCATE can also be rolled back provided it is enclosed in a TRANSACTION block and the session is not closed. Once the session is closed you won't be able to roll back TRUNCATE.

• Restrictions: The DELETE statement may fail if it violates a trigger or tries to remove a row referenced by data in another table with a FOREIGN KEY constraint. If the DELETE removes multiple rows, and any one of the removed rows violates a trigger or constraint, the statement is canceled, an error is returned, and no rows are removed. And if DELETE is used against a View, that View must be an updatable view. TRUNCATE cannot be used against a table used in an indexed view. TRUNCATE cannot be used against a table referenced by a FOREIGN KEY constraint, unless the table has a foreign key that references itself.

A: In SQL Server 2005 I believe that you can roll back a truncate.

A: DELETE

The DELETE command is used to remove rows from a table. A WHERE clause can be used to only remove some rows. If no WHERE condition is specified, all rows will be removed. After performing a DELETE operation you need to COMMIT or ROLLBACK the transaction to make the change permanent or to undo it. Note that this operation will cause all DELETE triggers on the table to fire.

TRUNCATE

TRUNCATE removes all rows from a table. The operation cannot be rolled back and no triggers will be fired.
As such, TRUNCATE is faster and doesn't use as much undo space as a DELETE.

DROP

The DROP command removes a table from the database. All the table's rows, indexes and privileges will also be removed. No DML triggers will be fired. The operation cannot be rolled back. DROP and TRUNCATE are DDL commands, whereas DELETE is a DML command. Therefore DELETE operations can be rolled back (undone), while DROP and TRUNCATE operations cannot be rolled back.

From: http://www.orafaq.com/faq/difference_between_truncate_delete_and_drop_commands

A: Here's a list of differences. I've highlighted Oracle-specific features, and hopefully the community can add in other vendors' specific differences also. Differences that are common to most vendors can go directly below the headings, with vendor-specific differences highlighted below.

General Overview

If you want to quickly delete all of the rows from a table, and you're really sure that you want to do it, and you do not have foreign keys against the table, then a TRUNCATE is probably going to be faster than a DELETE. Various system-specific issues have to be considered, as detailed below.

Statement type

Delete is DML, Truncate is DDL (What is DDL and DML?)

Commit and Rollback

Variable by vendor.

SQL*Server: Truncate can be rolled back.
PostgreSQL: Truncate can be rolled back.
Oracle: Because a TRUNCATE is DDL it involves two commits, one before and one after the statement execution. Truncate can therefore not be rolled back, and a failure in the truncate process will have issued a commit anyway. However, see Flashback below.

Space reclamation

Delete does not recover space, Truncate recovers space.

Oracle: If you use the REUSE STORAGE clause then the data segments are not de-allocated, which can be marginally more efficient if the table is to be reloaded with data. The high water mark is reset.

Row scope

Delete can be used to remove all rows or only a subset of rows. Truncate removes all rows.

Oracle: When a table is partitioned, the individual partitions can be truncated in isolation, thus a partial removal of all the table's data is possible.

Object types

Delete can be applied to tables and tables inside a cluster. Truncate applies only to tables or the entire cluster. (May be Oracle specific)

Data Object Identity

Oracle: Delete does not affect the data object id, but truncate assigns a new data object id unless there has never been an insert against the table since its creation. Even a single insert that is rolled back will cause a new data object id to be assigned upon truncation.

Flashback (Oracle)

Flashback works across deletes, but a truncate prevents flashback to states prior to the operation. However, from 11gR2 the FLASHBACK ARCHIVE feature allows this, except in Express Edition.

Use of FLASHBACK in Oracle: http://docs.oracle.com/cd/E11882_01/appdev.112/e41502/adfns_flashback.htm#ADFNS638

Privileges

Variable.

Oracle: Delete can be granted on a table to another user or role, but truncate cannot be without using a DROP ANY TABLE grant.

Redo/Undo

Delete generates a small amount of redo and a large amount of undo. Truncate generates a negligible amount of each.

Indexes

Oracle: A truncate operation renders unusable indexes usable again. Delete does not.

Foreign Keys

A truncate cannot be applied when an enabled foreign key references the table. Treatment with delete depends on the configuration of the foreign keys.

Table Locking

Oracle: Truncate requires an exclusive table lock, delete requires a shared table lock. Hence disabling table locks is a way of preventing truncate operations on a table.

Triggers

DML triggers do not fire on a truncate.

Oracle: DDL triggers are available.

Remote Execution

Oracle: Truncate cannot be issued over a database link.

Identity Columns

SQL*Server: Truncate resets the sequence for IDENTITY column types, delete does not.

Result set

In most implementations, a DELETE statement can return to the client the rows that were deleted. e.g.
in an Oracle PL/SQL subprogram you could:

DELETE FROM employees_temp
WHERE employee_id = 299
RETURNING first_name, last_name
INTO emp_first_name, emp_last_name;

A: TRUNCATE can be rolled back if wrapped in a transaction. Please see the two references below and test yourself:

http://blog.sqlauthority.com/2007/12/26/sql-server-truncate-cant-be-rolled-back-using-log-files-after-transaction-session-is-closed/

http://sqlblog.com/blogs/kalen_delaney/archive/2010/10/12/tsql-tuesday-11-rolling-back-truncate-table.aspx

TRUNCATE vs. DELETE is one of the infamous questions during SQL interviews. Just make sure you explain it properly to the interviewer, or it might cost you the job. The problem is that not many are aware of this, so most likely they will consider the answer as wrong if you tell them that YES, Truncate can be rolled back.

A: One further difference of the two operations is that if the table contains an identity column, the counter for that column is reset to 1 (or to the seed value defined for the column) under TRUNCATE. DELETE does not have this effect.

A: All good answers, to which I must add: Since TRUNCATE TABLE is a DDL (Data Definition Language) command, not a DML (Data Manipulation Language) command, the Delete Triggers do not run.

A: Summary of Delete Vs Truncate in SQL Server. For the complete article follow this link: http://codaffection.com/sql-server-article/delete-vs-truncate-in-sql-server/

Taken from dotnet mob article: Delete Vs Truncate in SQL Server

A: The difference between truncate and delete is listed below:

+----------------------------------------+----------------------------------------------+
| Truncate                               | Delete                                       |
+----------------------------------------+----------------------------------------------+
| We can't Rollback after performing     | We can Rollback after delete.                |
| Truncate.                              |                                              |
|                                        |                                              |
| Example:                               | Example:                                     |
| BEGIN TRAN                             | BEGIN TRAN                                   |
| TRUNCATE TABLE tranTest                | DELETE FROM tranTest                         |
| SELECT * FROM tranTest                 | SELECT * FROM tranTest                       |
| ROLLBACK                               | ROLLBACK                                    |
| SELECT * FROM tranTest                 | SELECT * FROM tranTest                       |
+----------------------------------------+----------------------------------------------+
| Truncate resets the identity of        | Delete does not reset the identity of        |
| the table.                             | the table.                                   |
+----------------------------------------+----------------------------------------------+
| It locks the entire table.             | It locks the table row.                      |
+----------------------------------------+----------------------------------------------+
| It's a DDL (Data Definition            | It's a DML (Data Manipulation                |
| Language) command.                     | Language) command.                           |
+----------------------------------------+----------------------------------------------+
| We can't use a WHERE clause with it.   | We can use WHERE to filter data to delete.   |
+----------------------------------------+----------------------------------------------+
| Triggers are not fired on truncate.    | Triggers are fired.                          |
+----------------------------------------+----------------------------------------------+
| Syntax:                                | Syntax:                                      |
| 1) TRUNCATE TABLE table_name           | 1) DELETE FROM table_name                    |
|                                        | 2) DELETE FROM table_name WHERE              |
|                                        |    example_column_id IN (1,2,3)              |
+----------------------------------------+----------------------------------------------+

A: With SQL Server or MySQL, if there is a PK with auto increment, truncate will reset the counter.

A: A small correction to the original answer - delete also generates significant amounts of redo (as undo is itself protected by redo). This can be seen from autotrace output:

SQL> delete from t1;

10918 rows deleted.

Elapsed: 00:00:00.58

Execution Plan
----------------------------------------------------------
   0      DELETE STATEMENT Optimizer=FIRST_ROWS (Cost=43 Card=1)
   1    0   DELETE OF 'T1'
   2    1     TABLE ACCESS (FULL) OF 'T1' (TABLE) (Cost=43 Card=1)

Statistics
----------------------------------------------------------
         30  recursive calls
      12118  db block gets
        213  consistent gets
        142  physical reads
    3975328  redo size
        441  bytes sent via SQL*Net to client
        537  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      10918  rows processed

A: DELETE

* DELETE is a DML command
* DELETE can be rolled back
* Delete = only delete - so it can be rolled back
* In DELETE you can write conditions using the WHERE clause
* Syntax – Delete from [Table] where [Condition]

TRUNCATE

* TRUNCATE is a DDL command
* You can't roll back a TRUNCATE; TRUNCATE removes the records permanently
* Truncate = Delete + Commit - so we can't roll back
* You can't use conditions (WHERE clause) in TRUNCATE
* Syntax – Truncate table [Table]

For more details visit http://www.zilckh.com/what-is-the-difference-between-truncate-and-delete/

A: "Truncate doesn't log anything" is correct. I'd go further: Truncate is not executed in the context of a transaction. The speed advantage of truncate over delete should be obvious. That advantage ranges from trivial to enormous, depending on your situation. However, I've seen truncate unintentionally break referential integrity, and violate other constraints. The power that you gain by modifying data outside a transaction has to be balanced against the responsibility that you inherit when you walk the tightrope without a net.

A: TRUNCATE is a DDL statement whereas DELETE is a DML statement. Below are the differences between the two:

* As TRUNCATE is a DDL (Data Definition Language) statement, it does not require a commit to make the changes permanent. And this is the reason why rows deleted by truncate cannot be rolled back.
On the other hand DELETE is a DML (Data Manipulation Language) statement and hence requires an explicit commit to make its effect permanent.

*TRUNCATE always removes all the rows from a table, leaving the table empty and the table structure intact, whereas DELETE may remove rows conditionally if a WHERE clause is used.

*The rows deleted by a TRUNCATE TABLE statement cannot be restored, and you cannot specify a WHERE clause in the TRUNCATE statement.

*TRUNCATE statements do not fire triggers, as opposed to the ON DELETE trigger fired by a DELETE statement.

Here is a very good link relevant to the topic.

A: The biggest difference is that truncate is a non-logged operation while delete is logged. Simply put, it means that in case of a database crash, you cannot recover the data operated upon by truncate, but with delete you can. More details here

A: DELETE Statement: This command deletes only the rows from the table based on the condition given in the where clause, or deletes all the rows from the table if no condition is specified. But it does not free the space containing the table.

The Syntax of a SQL DELETE statement is:

DELETE FROM table_name [WHERE condition];

TRUNCATE statement: This command is used to delete all the rows from the table and free the space containing the table.

A: Here is a summary of some important differences between these sql commands:

sql truncate command:

1) It is a DDL (Data Definition Language) command, therefore commands such as COMMIT and ROLLBACK do not apply to this command (the exceptions here are PostgreSQL and MSSQL, whose implementation of the TRUNCATE command allows the command to be used in a transaction)

2) You cannot undo the operation of deleting records, it occurs automatically and is irreversible (except for the above exceptions - provided, however, that the operation is included in the TRANSACTION block and the session is not closed). In case of Oracle - includes two implicit commits, one before and one after the statement is executed.
Therefore, the command cannot be withdrawn, while a runtime error will result in a commit anyway

3) Deletes all records from the table; records cannot be limited to deletion. For Oracle, when the table is split per partition, individual partitions can be truncated (TRUNCATE) in isolation, making it possible to partially remove all data from the table

4) Frees up the space occupied by the data in the table (in the TABLESPACE - on disk). For Oracle - if you use the REUSE STORAGE clause, the data segments will not be rolled back, i.e. you will keep space from the deleted rows allocated to the table, which can be a bit more efficient if the table is to be reloaded with data. The high water mark will be reset

5) TRUNCATE works much faster than DELETE

6) Oracle Flashback in the case of TRUNCATE prevents going back to pre-operation states

7) Oracle - TRUNCATE cannot be granted (GRANT) without using DROP ANY TABLE

8) The TRUNCATE operation makes unusable indexes usable again

9) TRUNCATE cannot be used when an enabled foreign key refers to another table; then you can:

*execute the command: DROP CONSTRAINT, then TRUNCATE, and then recreate it through CREATE CONSTRAINT, or
*execute the command: SET FOREIGN_KEY_CHECKS = 0; then TRUNCATE, then: SET FOREIGN_KEY_CHECKS = 1;

10) TRUNCATE requires an exclusive table lock; therefore, turning off the exclusive table lock is a way to prevent the TRUNCATE operation on the table

11) DML triggers do not fire after executing TRUNCATE (so be very careful in this case; you should not use TRUNCATE if a delete trigger is defined on the table to perform an automatic table cleanup or a logging action after row deletion).
On Oracle, DDL triggers are fired

12) Oracle - TRUNCATE cannot be used in the case of: database link

13) TRUNCATE does not return the number of records deleted

14) Transaction log - one log entry indicating page deallocation (removes data by releasing the allocation of the data pages used for storing table data and writes only the page deallocations to the transaction log) - faster execution than DELETE. TRUNCATE only needs to adjust the pointer in the database to the table (High Water Mark) and the data is immediately deleted, therefore it uses less system resources and transaction log space

15) Performance (acquired lock) - table and page lock - does not degrade performance during execution

16) TRUNCATE cannot be used on tables involved in transactional replication or merge replication

sql delete command:

1) It is a DML (Data Manipulation Language) command, therefore the following commands apply to it: COMMIT and ROLLBACK

2) You can undo the operation of removing records by using the ROLLBACK command

3) Deletes all or some records from the table; you can limit the records to be deleted by using the WHERE clause

4) Does not free the space occupied by the data in the table (in the TABLESPACE - on the disk)

5) DELETE works much slower than TRUNCATE

6) Oracle Flashback works for DELETE

7) Oracle - For DELETE, you can use the GRANT command

8) The DELETE operation does not make unusable indexes usable again

9) DELETE, in case an enabled foreign key refers to another table, can (or cannot) be applied depending on the foreign key configuration; if not, please:

*execute the command: DROP CONSTRAINT, then DELETE, and then recreate it through CREATE CONSTRAINT, or
*execute the command: SET FOREIGN_KEY_CHECKS = 0; then DELETE, then: SET FOREIGN_KEY_CHECKS = 1;

10) DELETE requires a shared table lock

11) Triggers fire

12) DELETE can be used in the case of: database link

13) DELETE returns the number of records deleted

14) Transaction log - for each deleted record (deletes rows one at a time and
records an entry in the transaction log for each deleted row) - slower execution than TRUNCATE. The table may still contain blank pages after executing the DELETE statement. DELETE needs to read records, check constraints, update blocks, update indexes, and generate redo / undo. All of this takes time, hence it takes much longer than TRUNCATE

15) Performance (acquired lock) - record lock - reduces performance during execution - each record in the table is locked for deletion

16) DELETE can be used on a table used in transactional replication or merge replication

A: In short, truncate doesn't log anything (so is much faster but can't be undone) whereas delete is logged (and can be part of a larger transaction, will rollback etc). If you have data that you don't want in a table in dev, it is normally better to truncate as you don't run the risk of filling up the transaction log

A: A big reason it is handy is when you need to refresh the data in a multi-million row table, but don't want to rebuild it. "Delete *" would take forever, whereas the performance impact of Truncate would be negligible.

A: Can't do DDL over a dblink.

A: I'd comment on matthieu's post, but I don't have the rep yet... In MySQL, the auto increment counter gets reset with truncate, but not with delete.

A: It is not that truncate does not log anything in SQL Server. truncate does not log the deleted rows, but it does log the deallocation of the data pages for the table on which you fired TRUNCATE. And the truncated records can be rolled back if we define a transaction at the beginning, so we can recover the truncated records after rolling it back. But we cannot recover truncated records from a transaction log backup after the truncating transaction has been committed.

A: Truncate can also be rolled back. Here is the example:

begin Tran
delete from Employee
select * from Employee
Rollback
select * from Employee

A: Truncate and Delete in SQL are two commands which are used to remove or delete data from a table.
Though quite basic in nature, both SQL commands can create a lot of trouble until you are familiar with the details before using them. An incorrect choice of command can result in either a very slow process or can even blow up the log segment, if too much data needs to be removed and the log segment is not big enough. That's why it's critical to know when to use the truncate and delete commands in SQL, but before using these you should be aware of the differences between Truncate and Delete, and based upon them, we should be able to find out when DELETE is the better option for removing data or when TRUNCATE should be used to purge tables. For reference, check the link here

A: By issuing a TRUNCATE TABLE statement, you are instructing SQL Server to delete every record within a table, without any logging or transaction processing taking place.

A: A DELETE statement can have a WHERE clause to delete specific records, whereas a TRUNCATE statement does not require any and wipes the entire table. Importantly, the DELETE statement logs the deleted data whereas the TRUNCATE statement does not.

A: One more difference specific to Microsoft SQL Server is that with delete you can use the output statement to track what records have been deleted, e.g.:

delete from [SomeTable]
output deleted.Id, deleted.Name

You cannot do this with truncate.

A: The Truncate command is used to re-initialize the table; it is a DDL command which deletes all the rows of the table. Whereas DELETE is a DML command which is used to delete a row or set of rows according to some condition; if a condition is not specified then this command will delete all the rows from the table.

A: TRUNCATE is fast, DELETE is slow. Although, TRUNCATE has no accountability.
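The transactional point made repeatedly in these answers is easy to verify yourself. Here is a minimal sketch using Python's built-in sqlite3 module (SQLite has no TRUNCATE statement, so only the DELETE half of the comparison is shown; the tranTest table name is just borrowed from the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tranTest (id INTEGER)")
conn.executemany("INSERT INTO tranTest VALUES (?)", [(1,), (2,), (3,)])
conn.commit()

# DELETE is a logged DML operation: inside a transaction it can be rolled back.
conn.execute("DELETE FROM tranTest")
assert conn.execute("SELECT COUNT(*) FROM tranTest").fetchone()[0] == 0

# ROLLBACK undoes the delete; the rows come back.
conn.rollback()
print(conn.execute("SELECT COUNT(*) FROM tranTest").fetchone()[0])  # 3
```

A TRUNCATE in an engine that treats it as non-logged DDL (classic Oracle behaviour) would not survive this rollback, which is exactly the difference the answers describe.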
{ "language": "en", "url": "https://stackoverflow.com/questions/139630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "373" }
Q: .NET Architectural issue: 2 Web Services, how do I change which one is used at run time? I am working with Reporting Services and Sharepoint. I have an application that leverages Reporting Services; however, a client would like our application integrated into Sharepoint. Currently we are tightly coupled to the ReportService.asmx webservice which exposes various methods for performing operations. Reporting Services has something called "Sharepoint Integration mode"; when enabled, the report server works differently and Sharepoint is used to manage the reports. Sharepoint adds a new web service called ReportService2006.asmx which is almost exactly the same. Now our application uses a web reference to the ReportService and uses various objects exposed by the service. ReportService2006 has exactly the same objects but they are obviously in a different namespace, e.g. I have 2 web references - 1 to each service - so there is an object MyApplication.ReportService.CatalogItem and another MyApplication.ReportService2006.CatalogItem. I've tried to use dependency injection to abstract the Service out of our application, coupled with a factory pattern to determine which implementation of my interface to instantiate. Here's my interface. I've simplified it to include only the calls I need for this application.
using System;
using NetworkUserEncrypt.ReportService;

namespace MyApplication.Service {
    public interface IReportingService {
        CatalogItem CreateDataSource(string DataSource, string Parent, bool Overwrite, DataSourceDefinition Definition, Property[] Properties);
        void DeleteItem(string Item);
        DataSourceDefinition GetDataSourceContents(string DataSource);
        byte[] GetReportDefinition(string Report);
        CatalogItem[] ListChildren(string Item);
    }
}

So I have 2 implementations of this, each instantiating a different web service, e.g.:

namespace MyApp.Service.Implementation {
    class ReportingServiceImpl : IReportingService {
        ReportingService _service = null;

        public ReportingServiceImpl() {
            _service = new ReportingService();  // assign the field; don't shadow it with a local
        }

        /* SNIP */
    }
}

and

namespace MyApp.Service.Implementation {
    class ReportingService2006Impl : IReportingService {
        ReportingService2006 _service = null;

        public ReportingService2006Impl() {
            _service = new ReportingService2006();  // assign the field; don't shadow it with a local
        }

        /* SNIP */
    }
}

So the plan is I can inject these into my ServiceWrapper at run time. However - if you'll notice, the interface is tied to the ReportService and some of the methods return objects that are from the web reference, e.g. CatalogItem. Thus my project won't build, because my implementation for ReportService2006 is referencing the CatalogItem from a different namespace. Any ideas? Am I going totally the wrong direction with this?

A: I think you are headed in the right direction for this situation; it's just going to take a fair amount more work to drive it home. I would create some proxy classes that can wrap both versions of the classes using reflection or dynamic methods. I've also seen people use the proxy classes from the remoting namespace to intercept method calls at runtime and direct them to the right place; that way you could create the dynamic methods on demand instead of hand coding them. All you really need for that is an interface that matches the object's interface.
A: Either add the reference it needs or build wrappers for CatalogItem and the rest of the specific classes. I'd build the wrappers; the interface should be able to stand on its own without referencing any particular implementation.

A: If the Web Services reside in different namespaces, then there is no trivial solution (e.g. something as simple as changing the URL). You seem to be on the right track with abstraction though. If you're feeling adventurous, you can modify the generated Web Service classes yourself (the "reference.cs" files), and then manually add them to your project. First create a common interface, then modify the first lines in the file like such:

public partial class MyWebService : SoapHttpClientProtocol, IMyWebService

Then you use this to call the code:

IMyWebService webService = new MyWebService(); // Or you can use a Factory

A: In VS2008, if I try to add a ServiceReference to a webservice, I see an advanced button. When clicking it, there is an option to "re-use types".

A: The most robust solution is to create a CatalogItem interface, create wrappers for each of your web services, and hide the whole thing behind a factory. The factory will contain the logic for calling the "correct" web service, and the client code will have to be changed to use the interface, but it is a change for the better. WCF does solve most of these issues with Service Contracts, and if my earlier advice proves to be too unmanageable you could consider migrating towards a WCF solution.
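The wrapper-plus-factory advice above can be sketched in a few lines. The following Python mock-up only illustrates the shape of the solution - the class and method names are invented stand-ins, not the real Reporting Services proxy API:

```python
# Hypothetical stand-ins for the two generated proxy classes, which expose
# identical members but live in different namespaces.
class ReportService:
    def list_children(self, item):
        return [("native", item)]

class ReportService2006:
    def list_children(self, item):
        return [("sharepoint", item)]

class ReportingServiceWrapper:
    """Common interface: callers only ever see this type, never the proxies."""
    def __init__(self, impl):
        self._impl = impl

    def list_children(self, item):
        # Each proxy returns its own CatalogItem type; a real wrapper would
        # copy the fields into one shared CatalogItem class at this point.
        return self._impl.list_children(item)

def make_service(sharepoint_mode):
    """Factory: chooses the implementation at run time."""
    impl = ReportService2006() if sharepoint_mode else ReportService()
    return ReportingServiceWrapper(impl)

svc = make_service(sharepoint_mode=True)
print(svc.list_children("/Reports"))  # [('sharepoint', '/Reports')]
```

The client code depends only on the wrapper, so switching between native and Sharepoint integration mode becomes a single factory argument.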
{ "language": "en", "url": "https://stackoverflow.com/questions/139639", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: When writing XML, is it better to hand write it, or to use a generator such as simpleXML in PHP? I have normally hand written xml like this: <tag><?= $value ?></tag> Having found tools such as simpleXML, should I be using those instead? What's the advantage of doing it using a tool like that?

A: If you're dealing with a small bit of XML, there's little harm in doing it by hand (as long as you can avoid typos). However, with larger documents you're frequently better off using an editor, which can validate your doc against the schema and protect against typos.

A: You could use the DOM extension, which can be quite cumbersome to code against. My personal opinion is that the most effective way to write XML documents from the ground up is the XMLWriter extension that comes with PHP and is enabled by default in recent versions.

$w=new XMLWriter();
$w->openMemory();
$w->startDocument('1.0','UTF-8');
$w->startElement("root");
$w->writeAttribute("ah", "OK");
$w->text('Wow, it works!');
$w->endElement();
echo htmlentities($w->outputMemory(true));

A: using a good XML generator will greatly reduce potential errors due to fat-fingering, lapse of attention, or whatever other human frailty. there are several different levels of machine assistance to choose from, however:

*at the very least, use a programmer's text editor that does syntax highlighting and auto-indentation. just noticing that your text is a different color than you expect, or not lining up the way you expect, can tip you off to a typo you might otherwise have missed.

*better yet, take a step back and write the XML as a data structure of whatever language you prefer, then convert that data structure to XML. Perl gives you modules such as the lightweight XML::Simple for small jobs or the heftier XML::Generator; using XML::Simple is just a matter of arranging your content into a standard Perl hash of hashes and running it through the appropriate method.
-steve

A: Producing XML via any sort of string manipulation opens the door for bugs to get into your code. The extremely simple example you posted, for instance, won't produce well-formed XML if $value contains an ampersand. There aren't a lot of edge cases in XML, but there are enough that it's a waste of time to write your own code to handle them. (And if you don't handle them, your code will unexpectedly fail someday. Nobody wants that.) Any good XML tool will automatically handle those cases.

A: Good XML tools will ensure that the resulting XML file properly validates against the DTD you are using. Good XML tools also save a bunch of repetitive typing of tags.

A: Use the generator. The advantage of using a generator is you have consistent markup and don't run the risk of fat-fingering a bracket or quote, or forgetting to encode something. This is crucial because these mistakes will not be found until runtime, unless you have significant tests to ensure otherwise.

A: Hand writing isn't always the best practice, because in large XML you can write wrong tags and it can be difficult to find the reason for an error. So I suggest using XML parsers to create XML files.

A: Speed may be an issue... handwritten can be a lot faster.

A: The XML tools in eclipse are really useful too. Just create a new xml schema and document, and you can easily use most of the graphical tools. I do like to point out that a prior understanding of how schemas work will be of use.

A: Always use a tool of some kind. XML can be very complex. I know that the PHP guys are used to working with hackey little stuff, but it's a huge code smell in the .NET world if someone doesn't use System.XML for creating XML.
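The ampersand point above is easy to demonstrate with any generator. Here is a small sketch using Python's standard xml.etree module rather than PHP, purely to show the automatic escaping a generator buys you:

```python
import xml.etree.ElementTree as ET

value = "Fish & Chips <special>"  # would produce malformed XML if pasted raw

tag = ET.Element("tag")
tag.text = value
xml_out = ET.tostring(tag, encoding="unicode")
print(xml_out)  # <tag>Fish &amp; Chips &lt;special&gt;</tag>

# Round-tripping shows nothing was lost in the escaping.
assert ET.fromstring(xml_out).text == value
```

Hand-written `<tag><?= $value ?></tag>` templating gives you the raw ampersand and angle brackets, and the document breaks the first time a parser sees it; the generator handles the edge case for free.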
{ "language": "en", "url": "https://stackoverflow.com/questions/139650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Convert Pixels to Points I have a need to convert Pixels to Points in C#. I've seen some complicated explanations about the topic, but can't seem to locate a simple formula. Let's assume a standard 96dpi, how do I calculate this conversion?

A: WPF converts points to pixels with the System.Windows.FontSizeConverter. The FontSizeConverter uses the System.Windows.LengthConverter. The LengthConverter uses the factor 1.333333333333333333 to convert from points (p) to pixels (x):

x = p * 1.3333333333333333

A: Try this if your code lies in a form:

Graphics g = this.CreateGraphics();
points = pixels * 72 / g.DpiX;
g.Dispose();

A: System.Drawing.Graphics has DpiX and DpiY properties. DpiX is pixels per inch horizontally. DpiY is pixels per inch vertically. Use those to convert from points (72 points per inch) to pixels.

Ex: 14 horizontal points = (14 * DpiX) / 72 pixels

A: Surely this whole question should be: "How do I obtain the horizontal and vertical PPI (Pixels Per Inch) of the monitor?" There are 72 points in an inch (by definition, a "point" is defined as 1/72nd of an inch, likewise a "pica" is defined as 1/72nd of a foot). With these two bits of information you can convert from px to pt and back very easily.

A: Actually it must be points = pixels * 96 / 72

A: points = (pixels / 96) * 72 on a standard XP/Vista/7 machine (factory defaults)
points = (pixels / 72) * 72 on a standard Mac running OSX (factory defaults)

Windows runs as default at 96dpi (display)
Macs run as default at 72dpi (display)

72 POSTSCRIPT Points = 1 inch
12 POSTSCRIPT Points = 1 POSTSCRIPT Pica
6 POSTSCRIPT Picas = 72 Points = 1 inch
1 point = 1⁄72 inches = 25.4⁄72 mm = 0.3527 mm

DPI = Dots Per Inch
PPI = Pixels Per Inch
LPI = Lines Per Inch

More info if using em as a measure:

16px = 1em (default for normal text)
8em = 16px * 8
Pixels/16 = em

A: Assuming 96dpi is a huge mistake. Even if the assumption is right, there's also an option to scale fonts.
So a font set for 10pts may actually be shown as if it's 12.5pt (125%).

A: Starting with the given:

*There are 72 points in an inch (that is what a point is, 1/72 of an inch)
*on a system set for 150dpi, there are 150 pixels per inch.
*1 in = 72pt = 150px (for 150dpi setting)

If you want to find points (pt) based on pixels (px):

 72 pt   x pt
------ = ----   (1) for 150dpi system
150 px   y px

Rearranging:

x = (y/150) * 72   (2) for 150dpi system

so:

points = (pixels / 150) * 72   (3) for 150dpi system

A: There are 72 points per inch; if it is sufficient to assume 96 pixels per inch, the formula is rather simple:

points = pixels * 72 / 96

There is a way to get the configured pixels per inch of your display in Windows using GetDeviceCaps. Microsoft has a guide called "Developing DPI-Aware Applications", look for the section "Creating DPI-Aware Fonts". The W3C has defined the pixel measurement px as exactly 1/96th of 1in regardless of the actual resolution of your display, so the above formula should be good for all web work.

A: This works:

int pixels = (int)((dp) * Resources.System.DisplayMetrics.Density + 0.5f);

A: Using wxPython on Mac to get the correct DPI as follows:

from wx import ScreenDC
from wx import Size

size: Size = ScreenDC().GetPPI()
print(f'x-DPI: {size.GetWidth()} y-DPI: {size.GetHeight()}')

This yields:

x-DPI: 72 y-DPI: 72

Thus, the formula is:

points: int = (pixelNumber * 72) // 72

A: Row heights converted into points and pixels (my own formula). Here is an example with a manual entry of 213.67 points in the Row Height field:

213.67  Manual Entry
  0.45  Add 0.45
214.12  Subtotal
213.75  Round to a multiple of 0.75
213.00  Subtract 0.75 provides manual entry converted by Excel
284.00  Divide by 0.75 gives the number of pixels of height

Here the manual entry of 213.67 points gives 284 pixels. Here the manual entry of 213.68 points gives 285 pixels. (Why 0.45? I do not know but it works.)
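Pulling the arithmetic from the answers above into one place, here is a small display-agnostic sketch (it assumes you already know the DPI, e.g. the 96dpi Windows default, rather than hard-coding it):

```python
POINTS_PER_INCH = 72  # a point is defined as 1/72 of an inch

def pixels_to_points(pixels, dpi=96.0):
    """Convert a pixel length to points at the given pixels-per-inch."""
    return pixels * POINTS_PER_INCH / dpi

def points_to_pixels(points, dpi=96.0):
    """Convert a point length to pixels at the given pixels-per-inch."""
    return points * dpi / POINTS_PER_INCH

print(pixels_to_points(96))        # 72.0  (one inch of pixels -> 72pt)
print(points_to_pixels(12))        # 16.0  (a 12pt font -> 16px at 96dpi)
print(points_to_pixels(12, 72.0))  # 12.0  (points and pixels coincide at 72dpi)
```

The `dpi` argument is where a real application would plug in `Graphics.DpiX`, `GetDeviceCaps`, or `ScreenDC().GetPPI()` instead of assuming 96.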
{ "language": "en", "url": "https://stackoverflow.com/questions/139655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "135" }
Q: How to deal with .NET TabPage controls all ending up in one Form class? I'd like to structure a Form with a TabControl but I'd like to avoid having every control on each TabPage end up being a member of the Form I'm adding the TabControl to. So far I've identified these options, please comment or suggest alternatives:

1) Write a UserControl for each TabPage
2) Leave only the Control on the main Form but turn the control variable public and cut-and-paste all actual code into separate classes
3) Forgo the Form designer and do everything at runtime
4) Derive from TabPage (not sure if this works and what the design-time implications are)

Thanks all, Andrew

A: In any complex WinForms application, you will probably run into the problem of too many controls on a form. Not that you'll run into a hard limit, but rather you'll run into a pain point -- such as you're describing. In most scenarios, for me, your option #1 -- a user control for each tab page -- is the least painful approach. It allows you to encapsulate logical breakdowns of the controls in their own way, scoping them appropriately. The downside to this is that you will likely end up exposing a ton of properties on your user control. The way out of that problem is fairly simple, though: Use a custom class to represent the data which is "bound" to said control, and then expose a single property for the bound instance of the class. You'll have better architecture overall, you'll be more maintainable, and as an added bonus, you won't go nuts trying to get the form to work. :)

EDIT: I should note that you may also end up needing to expose some custom events from the user controls which represent your tab pages. Essentially, if there's a control on the tab whose event is needed by the parent form, you'll have to create an event and raise it so that the parent form knows about it. This isn't terribly difficult, but can add significant LOCs to the user controls.
A: Option 1 is the best, as it allows you to use the designer for laying out the contents of the UserControl and also makes it easy for different developers to work on different UserControl instances at the same time. Option 2 is a bad idea because if you want to change the layout the designer will generate some new code and your cut-paste will have to be corrected by hand. Option 3 is going to be 10 times more work than using the designer to organize the layout. Option 4 has no benefit over the UserControl, but you need to make some changes to the TabPage class in order to allow it to work as a design surface. So I would stick with option 1.
{ "language": "en", "url": "https://stackoverflow.com/questions/139665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Why would the Win32 OleGetClipboard() function return CLIPBRD_E_CANT_OPEN? Under what circumstances will the Win32 API function OleGetClipboard() fail and return CLIPBRD_E_CANT_OPEN?

More background: I am assisting with a Firefox bug fix. Details here: bug 444800 - cannot retrieve image data from clipboard in lossless format. In the automated test that I helped write, we see that OleGetClipboard() sometimes fails and returns CLIPBRD_E_CANT_OPEN. That is unexpected, and the Firefox code to pull image data off the Windows clipboard depends on that call succeeding.

A: The documentation says that OleGetClipboard can fail with this error code if OpenClipboard fails. In turn, if you read that documentation, it says: "OpenClipboard fails if another window has the clipboard open." It's an exclusive resource: only one window can have the clipboard open at a time. Basically, if you can't do it, wait a little while and try again.

A: Is your test running over Terminal Services? See CLIPBRD_E_CANT_OPEN error when setting the Clipboard from .NET.

A: From what I see in MSDN, it seems to imply that the problem originates with whoever tried to actually put the data in the clipboard, i.e. the source of the data. If their call to OleSetClipboard() failed, for whatever reason, then you won't be able to extract stuff out. I would take a look at how the data is being put into the clipboard, and see if there's a test case that performs this (copying the data to the clipboard), and then causes the problem you're talking about.
{ "language": "en", "url": "https://stackoverflow.com/questions/139668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Data in a table with carriage return? In SQL SERVER Is it possible to store data with carriage returns in a table and then retrieve it back again with the carriage returns? Eg:

insert into table values ('test1
test2
test3
test4');

When I retrieve it, I get the message in a single line:

test1 test2 test3 test4

The carriage return is treated as a single character. Is there a way to get the carriage returns back, or is that just the way it's going to be stored? Thanks for the help guys!!!

Edit: I should have explained this before. I get the data from the web development (asp .net) and I just insert it into the table. I might not be doing any data manipulation... just insert. I return the data to the app development (C++) and maybe some data or report viewer. I don't want to manipulate the data.

A: INSERT INTO table values('test1' + CHAR(10) + 'test2' + CHAR(10) + 'test3' + CHAR(10) + 'test4')

This should do it. To see the effect, switch the query result window to plain text output.

Regards

A: IIRC, using chr(13) + chr(10) should work.

insert into table values ('test1' + chr(13) + chr(10) + 'test2');

A: You can store carriage returns in the database. The problem here is that you are using SQL Server Management Studio to display the results of your query. You probably have it configured to show the results in a grid. Change the configuration of SSMS to show results as text and you will see the carriage returns.

Right click in the query window -> Results To -> Results To Text

Run your query again.

A: Can you please clarify how you retrieve the data back from the database? What tool do you use? The data probably contains the carriage returns, but they're not displayed if you get the results in a grid (try the results in text option)

A: You might need to put in a "\n" instead of a literal carriage return.

A: The carriage return is stored as is. The problem here is that your sql client is not understanding it.
If you did a raw dump of this data you'll see that the carriage returns are there in the data. I use DBArtisan at work and it seems to work fine. However isql seems to have the same problem that you reported.

A: Is this result in your HTML or in Query Analyzer? If it's in HTML, have a look at the source code and it might appear correct there, in which case you'd have to replace the crlf characters with <br /> tags. I'm also thinking that there used to be attributes you could add to an HTML textarea to force it to send carriage returns in certain ways -- soft or hard? I haven't looked that up, perhaps someone could do that. But SQL Server does save the two characters in my experience. In fact I did exactly as you described here a few days ago using SQL 2005 and each line break has two unprintable characters.

A: I am using SQLite to store a multiline textbox and got something like that when retrieving stored data and showing it on any object (textboxes, labels, etc.). Anytime I copied data into NotePad/WordPad or similar, I could see that the carriage returns were stored; they simply weren't shown in the ASP page. Found the answer here: http://www.mikesdotnetting.com/Article/20/How-to-retain-carriage-returns-or-line-breaks-in-an-ASP.NET-web-page hope that helps.

My code example:

C#:

protected void Page_Load(object sender, EventArgs e)
{
    String str = Request.QueryString["idNoticia"];
    this.dsNewsDetails.FilterExpression = "idNoticia=" + str;
}

ASPX:

<asp:Label ID="BodyLabel" runat="server" style="font-size: medium"
    Text='<%# Eval("body").ToString().Replace(Environment.NewLine,"<br/>") %>'
    Width="100%" />

Note: as mentioned in the link provided, Environment.NewLine works both for C# and VB

A: If you switch the output to plain text you can see the data on different lines. To switch output go to Tools > Options > Query Results and set the default destination to: text. You can also try hitting Ctrl+T before executing the query. Hope it helps.
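A quick way to convince yourself that the line breaks really are stored is a minimal round trip. This sketch uses SQLite from Python (a different engine than SQL Server, and SQLite's char() function in place of T-SQL's CHAR(), but the storage behaviour is the same point the answers make):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (msg TEXT)")

# Build the value with explicit CR/LF characters, mirroring the
# CHAR(13)/CHAR(10) trick from the answers above.
conn.execute("INSERT INTO t VALUES ('test1' || char(13) || char(10) || 'test2')")

(msg,) = conn.execute("SELECT msg FROM t").fetchone()
assert msg == "test1\r\ntest2"  # the CR+LF came back intact
print(repr(msg))                # 'test1\r\ntest2'
```

If a client shows this value on one line, that is the client's rendering (like the SSMS grid), not a loss of data in the table.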
{ "language": "en", "url": "https://stackoverflow.com/questions/139670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: tool for reading glassfish logs? I'm dealing with huge glassfish log files (in windows, eek!) and well ... Wordpad isn't cutting it. Are there any tools out there that can handle these log files in a more intelligent manner? Functionality that would be welcome:

*View all lines of a certain log level (info, warning, severe)
*Show logs between two timestamps
*Occurrence counter (this exception was thrown 99 times between time x and time y)

A: On Windows I'd still go perl or awk. Download and install cygwin, then use awk or whatever you are familiar with. awk has the time functions needed for filtering, and features such as getline for log file navigation.

Ex: Exception occurrence count - all time

$ awk '/^java.*:\W/ {print $1}' server.log* |sort|uniq -c|sort -nr
     60 javax.ejb.EJBException:
     45 java.rmi.ServerException:
      2 javax.persistence.PersistenceException:
      2 javax.ejb.ObjectNotFoundException:
      1 java.lang.Error:

A: try UltraEdit (paid) or Notepad++ (free)

A: Try the MS LogParser tool: http://www.microsoft.com/downloads/details.aspx?FamilyID=890cd06b-abf8-4c25-91b2-f8d975cf8c07&displaylang=en

Basically turns your flat log file into a "database" you can run SQL-like queries on. You can even output in grids, charts and graphs.

A: http://sourceforge.net/project/screenshots.php?group_id=212019

A: I use Excel for parsing log files. If you use tab-delimited log files this can work great. The filtering and sorting features of Excel lend themselves well to logfile analysis.
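The three requested features (level filter, time window, occurrence count) also fit in a few lines of Python. This sketch assumes the default GlassFish record prefix of the form [#|timestamp|LEVEL|... - an assumption you should check against your own server.log before relying on it:

```python
import re
from collections import Counter
from datetime import datetime

# Assumed record shape, based on the default GlassFish log format:
# [#|2008-09-26T14:03:02.123+0200|SEVERE|sun-appserver|...|message|#]
RECORD = re.compile(r"\[#\|(?P<ts>[^|]+)\|(?P<level>[^|]+)\|")

def filter_records(lines, level=None, start=None, end=None):
    """Yield (timestamp, level) pairs matching a level and/or time window."""
    for line in lines:
        m = RECORD.match(line)
        if not m:
            continue  # continuation lines of multi-line records
        ts = datetime.strptime(m.group("ts")[:19], "%Y-%m-%dT%H:%M:%S")
        if level and m.group("level") != level:
            continue
        if (start and ts < start) or (end and ts > end):
            continue
        yield ts, m.group("level")

sample = [
    "[#|2008-09-26T10:00:00.000+0200|INFO|server|ok|#]",
    "[#|2008-09-26T10:05:00.000+0200|SEVERE|server|boom|#]",
    "[#|2008-09-26T11:00:00.000+0200|SEVERE|server|boom again|#]",
]
counts = Counter(lvl for _, lvl in filter_records(sample, level="SEVERE"))
print(counts["SEVERE"])  # 2
```

Pointed at the real files with `open("server.log")` instead of the sample list, this covers the level view, the timestamp window, and the occurrence counter in one pass.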
{ "language": "en", "url": "https://stackoverflow.com/questions/139683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }