Q: Change value of attribute on an XML object in AS3

Is there an easy way to do this? Or do I have to parse the file and do some search/replacing on my own? The ideal would be something like:

var myXML:XML = ???; // ... load xml data into the XML object
myXML.someAttribute = newValue;

A: The problem is with some attributes, like @class. Imagine you want to create HTML source and want to create a div tag, so the code would be:

var myDiv:XML = <div>test</div>;
myDiv.@class = "myClass"; // I want to set it here, because it can vary

But this is not compilable and throws an error (at least in Flex Builder), because class is a reserved word. In that case you can use this instead:

myDiv.@['class'] = "myClass";

A: Attributes are accessible in AS3 using the @ prefix. For example:

var myXML:XML = <test name="something"></test>;
trace(myXML.@name);
myXML.@name = "new";
trace(myXML.@name);

Output:

something
new
{ "language": "en", "url": "https://stackoverflow.com/questions/91305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Attaching additional javadoc in IntelliJ IDEA

When I use quick documentation lookup (Ctrl+Q) on J2EE classes or annotations in IDEA, I only get an empty javadoc. It only contains the basics like the class name. How do I add the javadoc to the libs IDEA provides itself?

A: You can attach javadoc to any library you have configured in your module or project. Open the Project Structure window (File -> Project Structure), select "Modules" and pick the module that has the dependency you want to configure. Then select the "Dependencies" tab, select the dependency that's missing the javadoc and click "Edit". In the window that shows up you see two buttons, "Add" and "Specify Javadoc URL". If you have the javadoc in a jar file, select the former; if you want to point to a web site that contains the javadoc, select the latter. That's it.

A: If using Maven: right-click on your pom.xml -> "Maven" -> "Download Sources and Documentation". To avoid this in the future: "Preferences" -> "Build, Execution, Deployment" -> "Build Tools" -> "Maven" -> "Importing" -> check the "Automatically download Sources and Documentation" boxes. Credit to Yacov Schondorf and Stephen Boesch from the IntelliJ user forums.

A: What about documentation for an extension API? I had added the Java 3D API to my JDK 1.6 and used it successfully; the next step was to get the javadoc. Forget the links on java.sun.com (it is a mess) and go directly to java.net to get all the stuff (the API, the javadoc, even the source for every package). Then go to your JDK settings, select the documentation path and add the javadoc zip, go to the source tab and add the source zip, and you're done. Actually, only the source is needed to enjoy javadoc and the uncompiled classes.

A: Right-click on the Maven pom.xml -> "Maven" -> "Download Documentation", wait a moment, and you'll have it.
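For completeness: the same download can be triggered outside the IDE with the standard maven-dependency-plugin goals (a sketch, assuming a normal Maven project; run next to the pom.xml):

mvn dependency:sources
mvn dependency:resolve -Dclassifier=javadoc

IDEA should then pick the downloaded source and javadoc jars up from the local repository on the next Maven re-import.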
{ "language": "en", "url": "https://stackoverflow.com/questions/91307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "68" }
Q: How to extend default timeout period in flash application?

I have an application written in Flash (actually it is written in Haxe and runs under SHWX, but it doesn't matter here). I have a pretty complex task that consumes a lot of CPU power and sometimes executes for more than 15 seconds. If that happens, I get an error saying 'A script has executed for longer than the default timeout period of 15 seconds.' and everything crashes. I know I can use continuations to stop this from happening, but is there a way to extend that 'default timeout period'? It's a developer tool, responsiveness doesn't really matter.

A: Another way is to link a swfmill-based swf via the -swf-lib switch and set the ScriptLimits tag there; haxe will then re-use it.

A: In CS3+ you simply set the "Script time limit" property of the swf at publish time - it's in the Flash tab of the publish settings.

A: When you test your application, be aware of the scriptTimeLimit property. If an application takes too long to initialize, Flash Player warns users that a script is causing Flash Player to run slowly and prompts the user to abort the application. If this is the situation, you can set the scriptTimeLimit property of the tag to a longer time so that the Flex application has enough time to initialize. However, the default value of the scriptTimeLimit property is 60 seconds, which is also the maximum, so you can only increase the value if you have previously set it to a lower value. You rarely need to change this value.

Source: http://livedocs.adobe.com/flex/3/html/help.html?content=performance_05.html

A: I'm not sure if there is something more native to get this done, but there seems to be a command that hacks the SWF to add a ScriptLimits tag to extend the timeout period.

A: I suggest breaking your function into smaller chunks and spreading them over multiple frames. This way you can display a progress animation and the Flash application won't become unresponsive. For example, if you have to loop over 1000 items, you process 100 in one frame, then another hundred in the next frame, and so on until you have processed them all. I wouldn't recommend hacking your swf.
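To make the last suggestion concrete, here is a minimal AS3-style sketch of chunking work across frames; buildWorkQueue, processItem and updateProgressBar are hypothetical placeholders for your real work:

import flash.events.Event;

var items:Array = buildWorkQueue();   // hypothetical: the items to process
var index:int = 0;
const CHUNK:int = 100;                // items handled per frame; tune as needed

function onEnterFrame(e:Event):void {
    var end:int = Math.min(index + CHUNK, items.length);
    for (; index < end; index++) {
        processItem(items[index]);    // hypothetical per-item work
    }
    updateProgressBar(index / items.length);  // hypothetical progress animation
    if (index >= items.length) {
        stage.removeEventListener(Event.ENTER_FRAME, onEnterFrame);
    }
}
stage.addEventListener(Event.ENTER_FRAME, onEnterFrame);

Because control returns to the player every frame, the script-timeout timer never accumulates 15 seconds of continuous execution and the UI stays responsive.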
{ "language": "en", "url": "https://stackoverflow.com/questions/91318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Can Visual Studio 2008 work with Team System 2005?

I would like to upgrade my team from VS2005 to VS2008 without touching the version of Team Server, which is 2005. Is that possible? And if so, how do I tell VS to recognize TFS? Currently in my VS2008 options menu, I don't have any source control to choose from.

A: VS 2008 works fine with TFS 2005. There are a couple of exceptions in the Team Build area (which changed massively between 2005 and 2008), but otherwise you will be able to do everything you need to do from the Visual Studio 2008 client. You need to ensure that you have the 2008 version of Team Explorer installed to add TFS functionality into Visual Studio. The 2005 version only installs into Visual Studio 2005. To download the 2008 version, see the following link:

http://www.microsoft.com/downloads/details.aspx?familyid=0ed12659-3d41-4420-bbb0-a46e51bfca86

Note that if you have previously applied SP1 of Visual Studio 2008, then you will need to run it again after installing Team Explorer. For what it is worth, I would encourage you to upgrade to TFS 2008 on the server side as soon as you can. TFS 2008 works fine with clients connecting from Visual Studio 2005 machines, it has some significant performance improvements, and the Team Build functionality is much improved.

A: Yes, you can... (We're doing that here too.)

* Tools -> Connect To Team Foundation Server
* "Add..."
* Enter IP / hostname

A: Yes, that works fine. If you install the Team Foundation Client from the TFS 2008 DVD on your VS machine, you can connect to both TFS 2005 and TFS 2008 servers. If you don't have access to a TFS 2008 DVD (note that the trial should be fine), installing the 2005 client on VS 2008 should also work, but I've never personally tried that.

A: Do you have the Team Foundation Client installed? If you have the Team version, it should be residing in the TFC folder on your installation DVD. (I don't know why it isn't an option in the installer.) It is also possible to download the TFC from Microsoft (for free); there is an SP1 version on Microsoft Downloads.
{ "language": "en", "url": "https://stackoverflow.com/questions/91344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Multiple forms on ASP.NET page

Coming from a Classic ASP background, I'm used to multiple forms on a page, but this is clearly limited in an ASP.NET page. However, I have a situation where I have a form that gathers input from the user, saves the data to a DB, and afterwards I want to render (and tweak the values of) a special form that posts to the PayPal website. If the PayPal form's field values were static there would be no problem, but since I want to manipulate the form server-side (to tweak the qty, desc, price fields etc.) this is a problem. I was considering redirecting to a different page after writing to the DB, and I suspect this would work fairly well, but it's a bit of extra effort that may be unnecessary. It has also been suggested to me that I could programmatically render a different form, depending on where in the cycle I am. That is, use a placeholder, and on Page_Load I would add a DB form (complete with child controls) initially, and the PayPal form after a postback. This scenario has got to be a common one for you guys, so I'm looking for opinions, advice and any relevant code samples if you have a preferred approach. I know I can get by, but this project is a learning vehicle, so I want to adopt what passes for best practice.

A: You can have multiple forms; it's just that only one form may have the runat="server" attribute. There are a bunch of answers to getting PayPal to work, but as this is a learning vehicle, that may be cheating. In all honesty, I'd look at the full-blown PayPal API rather than use the somewhat simplistic form method (with the added advantage that it should stretch your learning more). Otherwise yes, set up an HTML form outside of the server-side form, add a literal into it, and write the bits and pieces in there.

A: A basic approach is to use two panels on the page - one for the first form and another for the second. You can change the Visible property of these panels depending on which form you want to display (during Page_Load or any time before rendering); see the sketch after the next answer.

A: Maybe I'm misunderstanding the question, but can't you simply have two divs inside the form, where only one is visible at any time? For example:

<form id="Form1" method="post" runat="server">
    <div id="getUserInput" runat="server" visible="true">
        <asp:button id="btnSubmitFirst" />
    </div>
    <div id="doSubmissionToPaypal" runat="server" visible="false">
        <asp:button id="btnSubmitSecond" />
    </div>
</form>

(The divs need runat="server" so that their visibility can be changed in code-behind.) Then, in btnSubmitFirst_Click:

doSubmissionToPaypal.Visible = true;
getUserInput.Visible = false;

Something along those lines?

A: A workaround for the PayPal part is to use the PayPal integration code, as PayPal is not always the most friendly to integrate. The hard work is basically done for you.
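To make the panel approach concrete, a minimal C# code-behind sketch; the control IDs and SaveToDatabase are hypothetical, and asp:Panel is used because it renders as a div and exposes Visible server-side:

<%-- markup --%>
<asp:Panel ID="pnlInput" runat="server">
    <%-- input fields plus btnSubmitFirst --%>
</asp:Panel>
<asp:Panel ID="pnlPayPal" runat="server" Visible="false">
    <%-- PayPal fields whose values are set in code-behind --%>
</asp:Panel>

// code-behind
protected void btnSubmitFirst_Click(object sender, EventArgs e)
{
    SaveToDatabase();          // hypothetical: persist the user's input
    pnlInput.Visible = false;  // hide the data-entry form
    pnlPayPal.Visible = true;  // reveal the PayPal form with tweaked qty/desc/price
}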
{ "language": "en", "url": "https://stackoverflow.com/questions/91350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: XMP Library for Ruby

Can anyone recommend an open source Ruby library for adding XMP metadata to JPEG images?

A: MiniExiftool, which is just a wrapper around the Exiftool command-line app, is the only open-source one I know of. There's a commercial library called Chilkat, but I do not have experience with it, being that it is commercial.
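For a feel of the API, a hedged sketch of what writing metadata with the mini_exiftool gem looks like (tag names are passed through to the exiftool binary, so the exact XMP mapping depends on exiftool's tag tables):

require 'mini_exiftool'

photo = MiniExiftool.new('photo.jpg')
photo.title   = 'Sunset over the bay'   # exiftool maps this to an XMP/IPTC title tag
photo.creator = 'Jane Doe'
photo.save                              # shells out to the exiftool command-line app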
{ "language": "en", "url": "https://stackoverflow.com/questions/91352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: GnuPG: "decryption failed: secret key not available" error from gpg on Windows

Environment: HP laptop with Windows XP SP2

I had created some encrypted files using GnuPG (gpg) for Windows. Yesterday, my hard disk failed, so I had to reimage it. I have now reinstalled gpg and regenerated my keys using the same passphrase as before. But I am now unable to decrypt the files. I get the following error:

C:\sureshr>gpg -a c:\sureshr\work\passwords.gpg
gpg: encrypted with 1024-bit ELG-E key, ID 279AB302, created 2008-07-21
     "Suresh Ramaswamy (AAA) BBB"
gpg: decryption failed: secret key not available

C:\sureshr>gpg --list-keys
C:/Documents and Settings/sureshr/Application Data/gnupg\pubring.gpg
--------------------------------------------------------------------
pub   1024D/80059241 2008-07-21
uid                  Suresh Ramaswamy (AAA) BBB
sub   1024g/279AB302 2008-07-21

AAA = gpg comment
BBB = my email address

I am sure that I am using the correct passphrase. What exactly does this error mean? How do I tell gpg where to find my secret key?

Thanks, Suresh

A: Yes, your secret key appears to be missing. Without it, you will not be able to decrypt the files. Do you have the key backed up somewhere? Re-creating the keys, whether you use the same passphrase or not, will not work. Each key pair is unique.

A: workmad3's answer is apparently out of date, at least for current gpg, as --allow-secret-key-import is now obsolete and does nothing. What happened to me was that I failed to export properly. Just doing gpg --export is not adequate, as it only exports the public keys. When exporting keys, you have to do:

gpg --export-secret-keys > keyfile

A: One more cause for the "secret key not available" message: GPG version mismatch. Practical example: I had been using GPG v1.4. Switching packaging systems, the MacPorts-supplied gpg was removed, which revealed another gpg binary in the path, this one version 2.0. For decryption, it was unable to locate the secret key and gave this very error. For encryption, it complained about an unusable public key. However, gpg -k and -K both listed valid keys, which was the cause of major confusion.

A: You need to import not only your secret key, but also the corresponding public key, or you'll get this error.

A: When reimporting your keys from the old keyring, you need to specify:

gpg --allow-secret-key-import --import <keyring>

Otherwise it will only import the public keys, not the private keys.

A: The resolution to this problem for me was to notify the sender that they had not used the public key I sent them, but rather someone else's. You can see which key they used in the error output; tell them to use the correct one.
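The underlying lesson: a passphrase only unlocks a secret key, it cannot regenerate it, so the secret key itself must be backed up. A minimal sketch using standard gpg options (KEYID stands for your own key ID, e.g. 80059241):

:: Back up both halves of the key pair (do this before the disk dies):
gpg --armor --export KEYID > public.asc
gpg --armor --export-secret-keys KEYID > secret.asc

:: Restore on the rebuilt machine:
gpg --import public.asc
gpg --import secret.asc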
{ "language": "en", "url": "https://stackoverflow.com/questions/91355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: What is the value of the href attribute in the openid.server link tag if Technorati OpenID is hosted at my site?

I want to log in to Stack Overflow with Technorati OpenID hosted at my site. https://stackoverflow.com/users/login has some basic information. I understood that I should change

<link rel="openid.delegate" href="http://yourname.x.com" />

to

<link rel="openid.delegate" href="http://technorati.com/people/technorati/USERNAME/" />

but if I change

<link rel="openid.server" href="http://x.com/server" />

to

<link rel="openid.server" href="http://technorati.com/server" />

or

<link rel="openid.server" href="http://technorati.com/" />

it does not work.

A: From http://blog.blogupp.com/2008/06/get-openid-fied-and-discover-new-web.html:

<link rel="openid.server" href="http://technorati.com/openid/" />

A: A general way to find out the answer to this question is to load the page you want to delegate to (http://technorati.com/people/technorati/USERNAME in this case), look at the source, and find the server tag used there. If there are openid2 tags, you should copy those as well.
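Putting the pieces together, the delegation block in your page's <head> would look something like this. The openid.server URL comes from the answer above; the openid2.* lines are the OpenID 2.0 equivalents and assume Technorati advertises the same endpoint for 2.0, so verify them against the source of your Technorati profile page:

<link rel="openid.server" href="http://technorati.com/openid/" />
<link rel="openid.delegate" href="http://technorati.com/people/technorati/USERNAME/" />
<link rel="openid2.provider" href="http://technorati.com/openid/" />
<link rel="openid2.local_id" href="http://technorati.com/people/technorati/USERNAME/" />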
{ "language": "en", "url": "https://stackoverflow.com/questions/91357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to calculate the sum of values in a tree using SQL

I need to sum the points earned by a tree of users, per level. Level 1 is the sum of the points of the users one level below the user. Level 2 is the Level 1 points of the users two levels below the user, etc. The calculation happens once a month on a non-production server, so no worries about performance. What would the SQL look like to do it? If you're confused, don't worry, I am as well!

User table:

ID  ParentID  Points
1   0         230
2   1         150
3   0         80
4   1         110
5   4         54
6   4         342

Tree:

        0
       / \
      1   3
     / \
    2   4
       / \
      5   6

Output should be:

ID  Points  Level1    Level2
1   230     150+110   150+110+54+342
2   150
3   80
4   110     54+342
5   54
6   342

SQL Server syntax and functions preferably...

A: If you were using Oracle DBMS, that would be pretty straightforward, since Oracle supports tree queries with the CONNECT BY/START WITH syntax. For SQL Server, I think you might find Common Table Expressions useful.

A: Trees don't work well with SQL. If you have very (very very) few write accesses, you could change the tree implementation to use nested sets, which would make this query incredibly easy. Example (if I'm not mistaken):

SELECT SUM(points) FROM users WHERE left > x AND right < y

However, any change to the tree requires touching a massive number of rows. It's probably better to just do the recursion in your client.

A: I would say: create a stored procedure; it probably has the best performance. Or, if you have a maximum number of levels, you could create subqueries, but they will have very poor performance.

(Or you could get MS SQL Server 2008 and use the new hierarchy functions... ;) )

A: If you are working with trees stored in a relational database, I'd suggest looking at "nested sets" or "modified preorder tree traversal". The SQL will be as simple as this:

SELECT id, SUM(value) AS value FROM table WHERE left > left_value_of_your_node AND right < right_value_of_your_node;

...and do this for every node you are interested in. Maybe this will help you: http://www.dbazine.com/oracle/or-articles/tropashko4 - or use Google.

A: SQL in general, like others said, does not handle such relations well. Typically, a surrogate 'relations' table is needed (id, parent_id, unique key on (id, parent_id)), where:

* every time you add a record in 'table', you:

INSERT INTO relations (id, parent_id) VALUES ([current_id], [current_id]);
INSERT INTO relations (id, parent_id) VALUES ([current_id], [current_parent_id]);
INSERT INTO relations (id, parent_id) SELECT [current_id], parent_id FROM relations WHERE id = [current_parent_id];

* have logic to avoid cycles
* make sure that updates and deletions on 'relations' are handled with stored procedures

Given that table, you want:

SELECT rel.parent_id, SUM(tbl.points)
FROM table tbl
INNER JOIN relations rel ON tbl.id = rel.id
WHERE rel.parent_id <> 0
GROUP BY rel.parent_id;

A: Ok, this gives you the results you are looking for, but there are no guarantees that I didn't miss something. Consider it a starting point.
I used SQL 2005 to do this; SQL 2000 does not support CTEs.

WITH Parent (id, GrandParentId, parentId, Points, Level1Points, Level2Points)
AS
(
    -- Find root
    SELECT id, 0 AS GrandParentId, ParentId, Points, 0 AS Level1Points, 0 AS Level2Points
    FROM tblPoints ptr
    WHERE ptr.ParentId = 0
    UNION ALL
    (
        -- Level2 Points
        SELECT pa.GrandParentId AS Id, NULL AS GrandParentId, NULL AS ParentId, 0 AS Points, 0 AS Level1Points, pa.Points AS Level2Points
        FROM tblPoints pt
        JOIN Parent pa ON pa.GrandParentId = pt.Id
        UNION ALL
        -- Level1 Points
        SELECT pt.ParentId AS Id, NULL AS GrandParentId, NULL AS ParentId, 0 AS Points, pt.Points AS Level1Points, 0 AS Level2Points
        FROM tblPoints pt
        JOIN Parent pa ON pa.Id = pt.ParentId AND pa.ParentId IS NOT NULL
        UNION ALL
        -- Points
        SELECT pt.id, pa.ParentId AS GrandParentId, pt.ParentId, pt.Points, 0 AS Level1Points, 0 AS Level2Points
        FROM tblPoints pt
        JOIN Parent pa ON pa.Id = pt.ParentId AND pa.ParentId IS NOT NULL
    )
)
SELECT id, SUM(Points) AS Points, SUM(Level1Points) AS Level1Points,
       CASE WHEN SUM(Level2Points) > 0 THEN SUM(Level1Points) + SUM(Level2Points) ELSE 0 END AS Level2Points
FROM Parent
GROUP BY id
ORDER BY id

A: You have a couple of options:

* Use a cursor and a recursive user-defined function call (it's quite slow)
* Create a cache table and update it on INSERT using a trigger (it's the fastest solution but could be problematic if you have lots of updates to the main table)
* Do a client-side recursive calculation (preferable if you don't have too many records)

A: You can write a simple recursive function to do the job. My MSSQL is a little bit rusty, but it would look like this:

CREATE FUNCTION CALC (@node integer)
RETURNS integer
AS
BEGIN
    DECLARE @total integer;
    SELECT @total = node_value FROM yourtable WHERE node_id = @node;

    DECLARE @children TABLE (value integer);
    INSERT INTO @children SELECT dbo.CALC(node_id) FROM yourtable WHERE parent_id = @node;

    SELECT @total = @total + ISNULL(SUM(value), 0) FROM @children;
    RETURN @total;
END

A: The following table:

Id   ParentId
1    NULL
11   1
12   1
110  11
111  11
112  11
120  12
121  12
122  12
123  12
124  12

And the following Amount table:

Id   Val
110  500
111  50
112  5
120  3000
121  30000
122  300000

Only the leaves (the last level) have a value defined. The SQL query to get the data looks like:

;WITH Data (Id, Val) AS
(
    SELECT t.Id, SUM(v.val) AS Val
    FROM dbo.TestTable t
    JOIN dbo.Amount v ON t.Id = v.Id
    GROUP BY t.Id
)
SELECT cd.Id, ISNULL(SUM(cd.Val), 0) AS Amount
FROM
(
    -- level 3
    SELECT t.Id, d.val
    FROM TestTable t
    LEFT JOIN Data d ON d.id = t.Id
    UNION
    -- level 2
    SELECT t.parentId AS Id, SUM(y.Val)
    FROM TestTable t
    LEFT JOIN Data y ON y.id = t.Id
    WHERE t.parentId IS NOT NULL
    GROUP BY t.parentId
    UNION
    -- level 1
    SELECT t.parentId AS Id, SUM(y.Val)
    FROM TestTable t
    JOIN TestTable c ON c.parentId = t.Id
    LEFT JOIN Data y ON y.id = c.Id
    WHERE t.parentId IS NOT NULL
    GROUP BY t.parentId
) AS cd
GROUP BY id

This results in the output:

Id   Amount
1    333555
11   555
12   333000
110  500
111  50
112  5
120  3000
121  30000
122  300000
123  0
124  0

I hope this helps.
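For reference, a more compact shape of the same idea on SQL Server 2005+: one recursive CTE that pairs every node with each of its descendants, after which a single GROUP BY yields the whole-subtree total (the question's Level2 column). This is a sketch against the question's User table, here called Users:

WITH Descendants (RootId, Id) AS
(
    SELECT ID AS RootId, ID AS Id FROM Users      -- every node starts as its own descendant
    UNION ALL
    SELECT d.RootId, u.ID
    FROM Users u
    JOIN Descendants d ON u.ParentID = d.Id       -- walk down one level at a time
)
SELECT d.RootId AS ID,
       SUM(u.Points) - r.Points AS DescendantPoints  -- subtract the node's own points
FROM Descendants d
JOIN Users u ON u.ID = d.Id
JOIN Users r ON r.ID = d.RootId
GROUP BY d.RootId, r.Points
ORDER BY d.RootId;

For the sample data this returns 656 for user 1 (150+110+54+342) and 396 for user 4 (54+342). Splitting the result per level (Level1 vs Level2) additionally needs a depth column, which the CTE answer above tracks explicitly.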
{ "language": "en", "url": "https://stackoverflow.com/questions/91360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How to escape braces (curly brackets) in a format string in .NET

How can braces be escaped when using String.Format? For example:

String val = "1,2,3";
String.Format(" foo {{0}}", val);

This example doesn't throw an exception, but it outputs the string foo {0}. Is there a way to escape the braces?

A: Almost there! The escape sequence for a brace is {{ or }}, so for your example you would use:

string t = "1, 2, 3";
string v = String.Format(" foo {{{0}}}", t);

A: [TestMethod]
public void BraceEscapingTest()
{
    var result = String.Format("Foo {{0}}", "1,2,3"); //"1,2,3" is not parsed
    Assert.AreEqual("Foo {0}", result);

    result = String.Format("Foo {{{0}}}", "1,2,3");
    Assert.AreEqual("Foo {1,2,3}", result);

    result = String.Format("Foo {0} {{bar}}", "1,2,3");
    Assert.AreEqual("Foo 1,2,3 {bar}", result);

    result = String.Format("{{{0:N}}}", 24); //24 is not parsed, see @Guru Kara's answer
    Assert.AreEqual("{N}", result);

    result = String.Format("{0}{1:N}{2}", "{", 24, "}");
    Assert.AreEqual("{24.00}", result);

    result = String.Format("{{{0}}}", 24.ToString("N"));
    Assert.AreEqual("{24.00}", result);
}

A: Or you can use C# string interpolation like this (a feature available in C# 6.0):

var value = "1, 2, 3";
var output = $" foo {{{value}}}";

A: My objective: I needed to assign the value "{CR}{LF}" to a string variable delimiter. C# code:

string delimiter = "{{CR}}{{LF}}";

Note: to escape special characters you normally use \. For an opening curly bracket {, use one extra, like {{. For a closing curly bracket }, use one extra, }}.

A: You can use double opening brackets and double closing brackets, which will show only one bracket on your page.

A: Yes, to output { in String.Format you have to escape it like this: {{

So the following will output "foo {1,2,3}":

String val = "1,2,3";
String.Format(" foo {{{0}}}", val);

But you have to know about a design bug in C#: going by the above logic, you would assume this code prints {24.00}:

int i = 24;
string str = String.Format("{{{0:N}}}", i); // Gives '{N}' instead of {24.00}

But it prints {N}. This is because of the way C# parses escape sequences and format characters. To get the desired value in the above case, you have to use this instead:

String.Format("{0}{1:N}{2}", "{", i, "}") // Evaluates to {24.00}

Reference articles:

* String.Format gotcha
* String Formatting FAQ

A: You can also use it like this:

var outVal = $" foo {"{"}{inVal}{"}"} --- {"{"}Also Like This{"}"}";

A: I came here in search of how to build JSON strings ad hoc (without serializing a class/object) in C#. In other words, how to escape braces and quotes while using interpolated strings and "verbatim string literals" (double-quoted strings with the @ prefix), like...

var json = $@"{{""name"":""{name}""}}";

A: Escaping curly brackets AND using string interpolation makes for an interesting challenge. You need doubled braces for each layer, so that they survive both the string-interpolation parsing and the String.Format parsing:

string localVar = "dynamic";
string templateString = $@"<h2>{{0}}</h2><div>this is my {localVar} template using a {{{{custom tag}}}}</div>";
string result = string.Format(templateString, "String Interpolation");
// OUTPUT: <h2>String Interpolation</h2><div>this is my dynamic template using a {custom tag}</div>

A: For you to output foo {1, 2, 3} you have to do something like:

string t = "1, 2, 3";
string v = String.Format(" foo {{{0}}}", t);

To output a { you use {{ and to output a } you use }}.
Or now, you can also use C# string interpolation like this (a feature available in C# 6.0):

var inVal = "1, 2, 3";
var outVal = $" foo {{{inVal}}}";
// The output will be: foo {1, 2, 3}

A: Escaping brackets with string interpolation $"" - it is a new feature in C# 6.0:

var inVal = "1, 2, 3";
var outVal = $" foo {{{inVal}}}";
// The output will be: foo {1, 2, 3}
{ "language": "en", "url": "https://stackoverflow.com/questions/91362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1150" }
Q: Mercury Quick Test Pro and virtual machines: works from one client machine but not another

I have a virtual machine (VMware) with Mercury Quick Test Professional 9.2 installed. I have a script to test an application, written in VB.NET using the Infragistics library. If I access this virtual machine from my laptop (using Remote Desktop), everything works fine: the script completes without a problem. My laptop runs XP, with the Windows Classic theme. If I access this virtual machine from another machine (using Remote Desktop), the script starts fine but stops halfway through, with no error message from QTP, nothing. This machine also runs XP, with the Windows Classic theme. One difference between the two setups is the size of the screen: the laptop is 1920x1280, the other machine 1280x1024. The step where the script stops involves checking a checkbox within an UltraWinGrid. The checkbox itself is displayed and is on the screen in both cases. Has anyone had this problem before, or have any idea why the behaviour is different between the two machines? Thanks.

A: OK, I've found the problem. In fact, the script was failing silently because that's what the person who wrote the script told it to do. It couldn't validate something which was off screen, so the script failed. The problem was the QTP definition of 'off screen'. I have two screens attached to my laptop: the laptop's own screen (1920x1200) and another screen (1280x1024). I connect to the VM for QTP using Remote Desktop, and it uses the settings of the laptop's screen. This means that when I launch my QTP script and move it to the other screen, it doesn't fit, so the window is no longer maximized, the object is partially off screen, and so it can't be found. The fix is simple: in Remote Desktop, use the Display tab and set the size of the screen to 1280x1024, and QTP doesn't have any more problems. Voilà.

A: If you are not using Expert Mode, and/or are allowing QTP to do most of the work of creating your repository objects, then yes, it is referencing everything by pixels. I create all of my repository objects by hand, viewing the source (in the case of automated web-application testing) and using the Object Spy for assistance where needed. I make a point of not having any positioning information as part of my object definitions, for the very reason you are running into. For the parts of my web app that interacted with Windows (opening a file to upload, etc.), the Object Spy was essential for the trial and error necessary to create a unique identifier for a repository object. But it can be done.

Ex1: File Browse dialog
text = "Choose file"
nativeclass = #32770 (apparently some Windows voodoo for a file open dialog?)

Ex2: Filename textbox in the Browse dialog
nativeclass = "Edit"
attached text = "File &name:" (more Windows voodoo? It wouldn't work for me without the "&")

Ex3: Open button in the dialog
text = "&Open"
object class = "Button"

Good luck!

A: Point of clarification: you mentioned that QTP stops with no error message. Does that also mean that the test results log file has no error message? If the log has any information, that may be helpful in diagnosing the problem. Could you share the lines of code at the point where the script fails? Also, Remote Desktop will resize the desktop on the remote machine. Although QTP scripts are not inherently coordinate-based, individual statements can be coordinate-based relative to an object. The resolution could be an issue in that regard.
For example, imagine you had a line like Button.Click(5, 150) recorded on a higher-resolution machine. If you attempted to play it back on a lower-resolution machine, and the 150 is out of bounds of the object at the lower resolution, it could cause an issue.

A: QTP does not use screen coordinates except as a last resort. If the objects are identified as high-level objects (SwfTable in this case) you should be OK; if however QTP doesn't recognise the object, it falls back to WinObject and screen coordinates. If you're using Infragistics, then you should know that they extend QTP's support with their TestAdvantage product, which will probably solve your issue.

Edit: @MatthieuF said: "In fact, we use the Infragistics plugin for QTP, and we still have the problem." Can you give me an example of a line that fails?

A: A few things. You should be able to debug on the VM easily: just wait for the script to stop, go into your object repository, and see if it can identify the object. If not, use Object Spy to figure out which properties differ between the OSes. If there is a difference, you can always set that property to a regular expression and have it check for both possibilities. Assuming that isn't the issue: we've run into problems using Remote Desktop with QTP when the remote window is closed or minimized. For us, it was an issue where the clipboard cannot be changed when an RDP window isn't visible, but there could be other surprises when using QTP that way.
{ "language": "en", "url": "https://stackoverflow.com/questions/91364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How to trace JavaScript events like onclick and onblur?

Is there a way to debug or trace every JavaScript event in Internet Explorer 7? I have a bug that prevents scrolling after text-selecting, and I have no idea which event or action creates the bug. I really want to see which events are being triggered when I move the mouse, for example. It's too much work to rewire the source, and I kind of hoped there was something like a sniffer which shows me all the events that are triggered.

A: You might want to try Visual Studio 2008 and its feature to debug JavaScript code. If the problem is not specific to Internet Explorer 7 but also occurs in Firefox, then another good way to debug JavaScript code is Firefox with the Firebug add-on, which has a JavaScript debugger. You can then also put console.log statements in the JavaScript code, whose output you can see in the Console window in Firebug, instead of using alerts, which sometimes mess up the event chain.

A: @[nickf] - I'm pretty sure document.all is an Internet Explorer-specific extension. You need to attach an event handler; there's no way to just 'watch' the events. A framework like jQuery or the Microsoft Ajax library will easily give you methods to add the event handlers. jQuery is nice because of its selector framework. Then I use Firebug (a Firefox extension) and put in a breakpoint. I find Firebug a lot easier to set up and tear down than Visual Studio 2008.

A: Loop through all elements on the page which have an onXYZ function defined and then add the trace to them:

var allElements = document.all; // IE-specific; the idea is the same with getElementsByTagName('*')
for (var i = 0; i < allElements.length; i++) {
    if (typeof allElements[i].onblur == "function") {
        (function (el, oldFunc) { // closure so each element keeps its own original handler
            el.onblur = function () {
                alert("onblur called");
                oldFunc();
            };
        })(allElements[i], allElements[i].onblur);
    }
}

A: Borkdude said: "You might want to try Visual Studio 2008 and its feature to debug JavaScript code." I've been hacking around event handling multiple times, and in my opinion, although classical stepping debuggers are useful to track long code runs, they're not good at tracking events. Imagine listening to mouse-move events and breaking into another application on each event... So in this case, I'd strongly advise logging.

"If the problem is not specific to Internet Explorer 7 but also occurs in Firefox, then another good way to debug JavaScript code is Firefox and the Firebug add-on which has a JavaScript debugger."

And there's also Firebug Lite for Internet Explorer. I didn't have a chance to use it, but it exists. :-) The downside is that it isn't a fully-fledged debugger, but it has a window.console object, which is exactly what you need.

A: It's basic, but you could stick alerts or document.write calls in when you trigger something.

A: I am not sure of the exact code (it has been a while since I wrote complex JavaScript code), but you could enumerate through all of the controls on the form and attach an event that outputs something when the event is triggered. You could even use anonymous functions to wrap the necessary information for identifying which event was triggered.

A: The obvious way would be to set up some alerts for various events, something like:

element.onclick = function () { alert('Click event'); }

Otherwise you have the less intrusive option of inserting your alerts into the DOM somewhere. But seriously consider using a library like jQuery to implement your functionality. Lots of the cross-browser issues are solved problems, and you don't need to solve them again.
I am not sure exactly of the functionality you are trying to achieve, but there are most probably plenty of scrolling and selecting plugins for jQuery you could use.

A: One thing I like to do is create a bind function in JavaScript (like what you can find in the Prototype library) specifically for events, so that it passes the "event" object along to the bound function. Now, if you were to do this, you could simply throw in a trace call that will be invoked for every handler that uses it, and then remove it when it's not needed. One place. Easy. However, regardless of how you get the trace statement to be called, you still want to see it. The best strategy is to have a separate pane or window handling the trace calls. Dojo Toolkit has a built-in console that runs in Internet Explorer, and there are other similar things out there. The classic way of doing it is to create a new window and document.write to it.

* I recommend attaching a date-time to each trace. It has helped me considerably in the past.
* Debugging and alerts usually won't help you, because they interrupt the normal event flow.

A: Matt Berseth has something that may be the kind of thing you're looking for in "Debugging ASP.NET AJAX Applications with the Trace Console AjaxControlToolkit Control". It's based on the Yahoo YUI logger, YUI 2: Logger.

A: My suggestion is: use Firefox together with Firebug and its built-in debug/trace objects. They are a charm.
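As a concrete sketch of the 'wrap everything and log' approach for IE7: attachEvent is the IE-proprietary listener API, and window.status is used here as a crude trace output that doesn't disturb the event flow the way alert() does (names like makeLogger are just illustrative):

var events = ['onclick', 'onblur', 'onfocus', 'onmousedown', 'onmouseup', 'onscroll'];
var all = document.getElementsByTagName('*');

function makeLogger(name, el) {
    return function () {
        // write a trace without stealing focus or blocking the event chain
        window.status = name + ' on <' + el.tagName + '> at ' + new Date().getTime();
    };
}

for (var i = 0; i < all.length; i++) {
    for (var j = 0; j < events.length; j++) {
        all[i].attachEvent(events[j], makeLogger(events[j], all[i]));
    }
}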
{ "language": "en", "url": "https://stackoverflow.com/questions/91367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Checking from shell script if a directory contains files

From a shell script, how do I check if a directory contains files? Something similar to this:

if [ -e /some/dir/* ]; then echo "huzzah"; fi;

but which works if the directory contains one or several files (the above only works with exactly 0 or 1 files).

A: The solutions so far use ls. Here's an all-bash solution:

#!/bin/bash
shopt -s nullglob dotglob # To include hidden files
files=(/some/dir/*)
if [ ${#files[@]} -gt 0 ]; then echo "huzzah"; fi

A: DIR="/some/dir"
if [ "$(ls -A $DIR)" ]; then
    echo 'There is something alive in here'
fi

A: Could you compare the output of this?

ls -A /some/dir | wc -l

A: How about the following:

if find /some/dir/ -maxdepth 0 -empty | read v; then echo "Empty dir"; fi

This way there is no need for generating a complete listing of the contents of the directory. The read is both to discard the output and to make the expression evaluate to true only when something is read (i.e. /some/dir/ is found empty by find).

A: This may be a really late response, but here is a solution that works. This line only recognizes the existence of files! It will not give you a false positive if directories exist.

if find /path/to/check/* -maxdepth 0 -type f | read
then echo "Files Exist"
fi

A: # Checks whether a directory contains any nonhidden files.
#
# usage: if isempty "$HOME"; then echo "Welcome home"; fi
#
isempty() {
    for _ief in $1/*; do
        if [ -e "$_ief" ]; then
            return 1
        fi
    done
    return 0
}

Some implementation notes:

* The for loop avoids a call to an external ls process. It still reads all the directory entries once. This can only be optimized away by writing a C program that uses readdir() explicitly.
* The test -e inside the loop catches the case of an empty directory, in which case the variable _ief would be assigned the value "somedir/*". Only if that file exists will the function return "nonempty".
* This function will work in all POSIX implementations. But be aware that the Solaris /bin/sh doesn't fall into that category: its test implementation doesn't support the -e flag.

A: This tells me if the directory is empty and, if it's not, the number of files it contains:

directory="/some/dir"
number_of_files=$(ls -A $directory | wc -l)

if [ "$number_of_files" == "0" ]; then
    echo "directory $directory is empty"
else
    echo "directory $directory contains $number_of_files files"
fi

A: ZSH

I know the question was marked for bash, but, just for reference, for zsh users:

Test for non-empty directory

To check if foo is non-empty:

$ for i in foo(NF) ; do ... ; done

where, if foo is non-empty, the code in the for block will be executed.

Test for empty directory

To check if foo is empty:

$ for i in foo(N/^F) ; do ... ; done

where, if foo is empty, the code in the for block will be executed.

Notes

We did not need to quote the directory foo above, but we can do so if we need to:

$ for i in 'some directory!'(NF) ; do ... ; done

We can also test more than one object, even if it is not a directory:

$ mkdir X # empty directory
$ touch f # regular file
$ for i in X(N/^F) f(N/^F) ; do echo $i ; done # echo empty directories
X

Anything that is not a directory will just be ignored.
Extras

Since we are globbing, we can use any glob (or brace expansion):

$ mkdir X X1 X2 Y Y1 Y2 Z
$ touch Xf # create regular file
$ touch X1/f # directory X1 is not empty
$ touch Y1/.f # directory Y1 is not empty
$ ls -F # list all objects
X/ X1/ X2/ Xf Y/ Y1/ Y2/ Z/
$ for i in {X,Y}*(N/^F); do printf "$i "; done; echo # print empty directories
X X2 Y Y2

We can also examine objects that are placed in an array. With the directories as above, for example:

$ ls -F # list all objects
X/ X1/ X2/ Xf Y/ Y1/ Y2/ Z/
$ arr=(*) # place objects into array "arr"
$ for i in ${^arr}(N/^F); do printf "$i "; done; echo
X X2 Y Y2 Z

Thus, we can test objects that may already be set in an array parameter.

Note that the code in the for block is, obviously, executed on every directory in turn. If this is not desirable, you can simply populate an array parameter and then operate on that parameter:

$ for i in *(NF) ; do full_directories+=($i) ; done
$ do_something $full_directories

Explanation

For zsh users there is the (F) glob qualifier (see man zshexpn), which matches "full" (non-empty) directories:

$ mkdir X Y
$ touch Y/.f # Y is now not empty
$ touch f # create a regular file
$ ls -dF * # list everything in the current directory
f X/ Y/
$ ls -dF *(F) # will list only "full" directories
Y/

The qualifier (F) lists objects that match: is a directory AND is not empty. So, (^F) matches: not a directory OR is empty. Thus, (^F) alone would also list regular files, for example. Thus, as explained on the zshexpn man page, we also need the (/) glob qualifier, which lists only directories:

$ mkdir X Y Z
$ touch X/f Y/.f # directories X and Y now not empty
$ for i in *(/^F) ; do echo $i ; done
Z

Thus, to check if a given directory is empty, you can therefore run:

$ mkdir X
$ for i in X(/^F) ; do echo $i ; done ; echo "finished"
X
finished

and just to be sure that a non-empty directory would not be captured:

$ mkdir Y
$ touch Y/.f
$ for i in Y(/^F) ; do echo $i ; done ; echo "finished"
zsh: no matches found: Y(/^F)
finished

Oops! Since Y is not empty, zsh finds no matches for (/^F) ("directories that are empty") and thus spits out an error message saying that no matches for the glob were found. We therefore need to suppress these possible error messages with the (N) glob qualifier:

$ mkdir Y
$ touch Y/.f
$ for i in Y(N/^F) ; do echo $i ; done ; echo "finished"
finished

Thus, for empty directories we need the qualifier (N/^F), which you can read as: "don't warn me about failures, directories that are not full". Similarly, for non-empty directories we need the qualifier (NF), which we can likewise read as: "don't warn me about failures, full directories".

A: Try:

if [ ! -z `ls /some/dir/*` ]; then echo "huzzah"; fi

A: dir_is_empty() {
    [ "${1##*/}" = "*" ]
}

if dir_is_empty /some/dir/* ; then
    echo "huzzah"
fi

Assuming you don't have a file named * in /any/dir/you/check, this should work on bash, dash, posh, busybox sh and zsh, but (for zsh) it requires unsetopt nomatch. Performance should be comparable to any ls that uses * (glob); I guess it will be slow on directories with many nodes (my /usr/bin with 3000+ files was not that slow), and it will use at least enough memory to allocate all the dir/file names (and more), as they are all passed (resolved) to the function as arguments; some shells probably have limits on the number or length of arguments. A portable, fast, O(1), zero-resources way to check if a directory is empty would be nice to have.
Update

The version above doesn't account for hidden files/dirs. In case more testing is required, consider the is_empty from Rich's sh (POSIX shell) tricks:

is_empty () (
    cd "$1"
    set -- .[!.]* ; test -f "$1" && return 1
    set -- ..?* ; test -f "$1" && return 1
    set -- * ; test -f "$1" && return 1
    return 0
)

But, instead, I'm thinking about something like this:

dir_is_empty() {
    [ "$(find "$1" -name "?*" | dd bs=$((${#1}+3)) count=1 2>/dev/null)" = "$1" ]
}

There is some concern about trailing-slash differences between the argument and the find output when the dir is empty, and about trailing newlines (but this should be easy to handle). Sadly, on my busybox sh this shows what is probably a bug in the find -> dd pipe, with the output truncated randomly (if I used cat, the output was always the same; it seems to be dd with the count argument).

A: Taking a hint (or several) from olibre's answer, I like a Bash function:

function isEmptyDir {
    [ -d $1 -a -n "$( find $1 -prune -empty 2>/dev/null )" ]
}

Because while it creates one subshell, it's as close to an O(1) solution as I can imagine, and giving it a name makes it readable. I can then write:

if isEmptyDir somedir
then
    echo somedir is an empty directory
else
    echo somedir does not exist, is not a dir, is unreadable, or is not empty
fi

As for O(1), there are outlier cases: if a large directory has had all or all but the last entry deleted, "find" may have to read the whole thing to determine whether it's empty. I believe that expected performance is O(1), but the worst case is linear in the directory size. I have not measured this.

A: Three best tricks

shopt -s nullglob dotglob; f=your/dir/*; ((${#f}))

This trick is 100% bash and invokes (spawns) a sub-shell. The idea is from Bruno De Fraine, improved by teambob's comment.

files=$(shopt -s nullglob dotglob; echo your/dir/*)
if (( ${#files} ))
then
    echo "contains files"
else
    echo "empty (or does not exist or is a file)"
fi

Note: no difference between an empty directory and a non-existing one (and even when the provided path is a file).

There is a similar alternative and more details (and more examples) in the 'official' FAQ for the #bash IRC channel:

if (shopt -s nullglob dotglob; f=(*); ((${#f[@]})))
then
    echo "contains files"
else
    echo "empty (or does not exist, or is a file)"
fi

[ -n "$(ls -A your/dir)" ]

This trick is inspired by nixCraft's article posted in 2007. Add 2>/dev/null to suppress the error "No such file or directory". See also Andrew Taylor's answer (2008) and gr8can8dian's answer (2011).

if [ -n "$(ls -A your/dir 2>/dev/null)" ]
then
    echo "contains files (or is a file)"
else
    echo "empty (or does not exist)"
fi

or the one-line bashism version:

[[ $(ls -A your/dir) ]] && echo "contains files" || echo "empty"

Note: ls returns $?=2 when the directory does not exist. But there is no difference between a file and an empty directory.

[ -n "$(find your/dir -prune -empty)" ]

This last trick is inspired by gravstar's answer, where -maxdepth 0 is replaced by -prune, and improved by phils's comment.
if [ -n "$(find your/dir -prune -empty 2>/dev/null)" ]
then
    echo "empty (directory or file)"
else
    echo "contains files (or does not exist)"
fi

A variation using -type d:

if [ -n "$(find your/dir -prune -empty -type d 2>/dev/null)" ]
then
    echo "empty directory"
else
    echo "contains files (or does not exist or is not a directory)"
fi

Explanation:

* find -prune is similar to find -maxdepth 0, using fewer characters
* find -empty prints the empty directories and files
* find -type d prints directories only

Note: You could also replace [ -n "$(find your/dir -prune -empty)" ] with just the shortened version below:

if [ `find your/dir -prune -empty 2>/dev/null` ]
then
    echo "empty (directory or file)"
else
    echo "contains files (or does not exist)"
fi

This last code works in most cases, but be aware that malicious paths could express a command...

A: Take care with directories with a lot of files! It could take some time to evaluate the ls command. IMO the best solution is the one that uses:

find /some/dir/ -maxdepth 0 -empty

A: # Works on hidden files, directories and regular files
### isEmpty()
# This function takes one parameter:
# $1 is the directory to check
# Echoes "huzzah" if the directory has files
function isEmpty(){
    if [ "$(ls -A $1)" ]; then
        echo "huzzah"
    else
        echo "has no files"
    fi
}

A: I am surprised the wooledge guide on empty directories hasn't been mentioned. This guide, and all of wooledge really, is a must-read for shell-type questions.

Of note from that page:

Never try to parse ls output. Even ls -A solutions can break (e.g. on HP-UX, if you are root, ls -A does the exact opposite of what it does if you're not root -- and no, I can't make up something that incredibly stupid).

In fact, one may wish to avoid the direct question altogether. Usually people want to know whether a directory is empty because they want to do something involving the files therein. Look to the larger question. For example, one of these find-based examples may be an appropriate solution:

# Bourne
find "$somedir" -type f -exec echo Found unexpected file {} \;
find "$somedir" -maxdepth 0 -empty -exec echo {} is empty. \; # GNU/BSD
find "$somedir" -type d -empty -exec cp /my/configfile {} \; # GNU/BSD

Most commonly, all that's really needed is something like this:

# Bourne
for f in ./*.mpg; do
    test -f "$f" || continue
    mympgviewer "$f"
done

In other words, the person asking the question may have thought an explicit empty-directory test was needed to avoid an error message like mympgviewer: ./*.mpg: No such file or directory, when in fact no such test is required.

A: Small variation of Bruno's answer:

files=$(ls -1 /some/dir | wc -l)
if [ $files -gt 0 ]
then
    echo "Contains files"
else
    echo "Empty"
fi

It works for me.

A: With some workaround I could find a simple way to find out whether there are files in a directory. This can be extended with more grep commands to check specifically for .xml or .txt files etc. Ex: ls /some/dir | grep xml | wc -l | grep -w "0"

#!/bin/bash
if ([ $(ls /some/dir | wc -l | grep -w "0") ])
then
    echo 'No files'
else
    echo 'Found files'
fi

A: if [[ -s somedir ]]; then
    echo "Files present"
fi

In my testing with bash 5.0.17, [[ -s somedir ]] will return true if somedir has any children. The same is true of [ -s somedir ]. Note that this will also return true if there are hidden files or subdirectories. It may also be filesystem-dependent.

A: It really feels like there should be an option to test for an empty directory.
I'll leave that editorial comment as a suggestion to the maintainers of the test command, but the counterpart exists for empty files. In the trivial use case that brought me here, I'm not worried about looping through a huge number of files, nor am I worried about .files. I was hoping to find the aforementioned "missing" operand to test. C'est la guerre. In the example below, directory empty is empty and full has files.

$ for f in empty/*; do test -e $f; done
$ echo $?
1
$ for f in full/*; do test -e $f; done
$ echo $?
0

Or, shorter and uglier still, but again only for relatively trivial use cases:

$ echo empty/* | grep \*
$ echo $?
1
$ echo full/* | grep \*
$ echo $?
0

A: So far I haven't seen an answer that uses grep, which I think would give a simpler answer (with not too many weird symbols!). Here is how I would check if any files exist in the directory using the Bourne shell. This returns the number of files in a directory:

ls -l <directory> | egrep -c "^-"

You can fill in the directory path where <directory> is written. The first half of the pipe ensures that the first character of output is "-" for each file; egrep then counts the number of lines that start with that symbol using a regular expression. Now all you have to do is store the number you obtain and compare it, using backquotes like:

#!/bin/sh
fileNum=`ls -l <directory> | egrep -c "^-"`
if [ $fileNum == x ]
then
    #do what you want to do
fi

x is a variable of your choice.

A: Mixing prune things and the previous answers, I got to:

find "$some_dir" -prune -empty -type d | read && echo empty || echo "not empty"

which works for paths with spaces too.

A: Simple answer with bash:

if [[ $(ls /some/dir/) ]]; then echo "huzzah"; fi;

A: I would go for find:

if [ -z "$(find $dir -maxdepth 1 -type f)" ]; then
    echo "$dir has NO files"
else
    echo "$dir has files"
fi

This checks the output of looking for just files in the directory, without going through the subdirectories. Then it checks the output using the -z option, taken from man test:

-z STRING
    the length of STRING is zero

See some outcomes:

$ mkdir aaa
$ dir="aaa"

Empty dir:

$ [ -z "$(find aaa/ -maxdepth 1 -type f)" ] && echo "empty"
empty

Just dirs in it:

$ mkdir aaa/bbb
$ [ -z "$(find aaa/ -maxdepth 1 -type f)" ] && echo "empty"
empty

A file in the directory:

$ touch aaa/myfile
$ [ -z "$(find aaa/ -maxdepth 1 -type f)" ] && echo "empty"
$ rm aaa/myfile

A file in a subdirectory:

$ touch aaa/bbb/another_file
$ [ -z "$(find aaa/ -maxdepth 1 -type f)" ] && echo "empty"
empty

A: In another thread (How to test if a directory is empty with find) I proposed this:

[ "$(cd $dir;echo *)" = "*" ] && echo empty || echo non-empty

with the rationale that $dir does exist, because the question is "Checking from shell script if a directory contains files", and that * even on a big dir is not that big; on my system /usr/bin/* is just 12Kb.

Update: Thanx @hh skladby, the fixed one:

[ "$(cd $dir;echo .* *)" = ". .. *" ] && echo empty || echo non-empty

A: Without calling utils like ls, find, etc.; POSIX-safe, i.e. not dependent on your Bash / xyz shell / ls / etc. version:

dir="/some/dir"
[ "$(echo $dir/*)x" != "$dir/*x" ] || [ "$(echo $dir/.[^.]*)x" != "$dir/.[^.]*x" ] || echo "empty dir"

The idea:

* echo * lists non-dot files
* echo .[^.]* lists dot files except "." and ".."
* if echo finds no matches, it returns the search expression, i.e. here * or .[^.]* - which both are not real strings and have to be concatenated with e.g.
a letter to coerce a string
* || alternates the possibilities in a short circuit: there is at least one non-dot file or dir, OR at least one dot file or dir, OR the directory is empty. On the execution level: "if the first possibility fails, try the next one; if this fails, try the next one"; here technically Bash "tries to execute" echo "empty dir", so put your action for empty dirs there (e.g. exit).

Checked with symlinks, yet to check with more exotic possible file types.

A: I dislike the ls -A solutions posted. Most likely you wish to test if the directory is empty because you don't wish to delete it. The following does that. If, however, you just wish to log an empty file, surely deleting and recreating it is quicker than listing possibly infinite files?

if ! rmdir ${target}
then
    echo "not empty"
else
    echo "empty"
    mkdir ${target}
fi

A: This works well for me (when the dir exists):

some_dir="/some/dir with whitespace & other characters/"
if find "`echo "$some_dir"`" -maxdepth 0 -empty | read v; then echo "Empty dir"; fi

With a full check:

if [ -d "$some_dir" ]; then
    if find "`echo "$some_dir"`" -maxdepth 0 -empty | read v; then
        echo "Empty dir"
    else
        echo "Dir is NOT empty"
    fi
fi

A: if ls /some/dir/* >/dev/null 2>&1 ; then echo "huzzah"; fi;

A: To test a specific target directory:

if [ -d $target_dir ]; then
    ls_contents=$(ls -1 $target_dir | xargs);
    if [ ! -z "$ls_contents" -a "$ls_contents" != "" ]; then
        echo "is not empty";
    else
        echo "is empty";
    fi;
else
    echo "directory does not exist";
fi;

A: Try the find command. Specify the directory hardcoded or as an argument, then let find search all files inside the directory and check whether the result of find is null, echoing the data of find:

#!/bin/bash
_DIR="/home/user/test/"
#_DIR=$1
_FIND=$(find $_DIR -type f )
if [ -n "$_FIND" ]
then
    echo -e "$_DIR contains files or subdirs with files \n\n "
    echo "$_FIND"
else
    echo "empty (or does not exist)"
fi
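Pulling the recurring advice together, one defensive sketch that handles hidden files and paths with spaces (note: -mindepth/-maxdepth and -quit are GNU/BSD find extensions, not strict POSIX, so adjust on other systems):

#!/bin/sh
# Exit status 0 if $1 is an existing directory with at least one entry
# (hidden entries included), 1 if it is empty, 2 if it is not a directory.
dir_has_files() {
    [ -d "$1" ] || return 2
    [ -n "$(find "$1" -mindepth 1 -maxdepth 1 -print -quit)" ]
}

if dir_has_files "/some/dir"; then
    echo "huzzah"
fi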
{ "language": "en", "url": "https://stackoverflow.com/questions/91368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "146" }
Q: Unit testing for C++ code - Tools and methodology

I'm working on a large C++ system that has been in development for a few years now. As part of an effort to improve the quality of the existing code, we have engaged in a large, long-term refactoring project. Do you know a good tool that can help me write unit tests in C++? Maybe something similar to JUnit or NUnit? Can anyone give some good advice on the methodology of writing unit tests for modules that were written without unit testing in mind?

A: CxxTest is a light, easy-to-use and cross-platform JUnit/CppUnit/xUnit-like framework for C++.

A: Applying unit tests to legacy code was the very reason Working Effectively with Legacy Code was written. Michael Feathers is the author - as mentioned in other answers, he was involved in the creation of both CppUnit and CppUnitLite.

A: CppUnit is the way. See the links below:

http://cppunit.sourceforge.net/cppunit-wiki
http://en.wikipedia.org/wiki/CppUnit

A: UnitTest++, small & simple.

A: I am currently looking for a unit test and mock framework that can be used at our company for a long-lived code-base. As you know, the list of unit testing frameworks for C++ is long, so I applied some filters to reduce it to a handful that could be looked at more closely. The first filter criterion was that it must be free. The second criterion was project activity. I also looked for mocking frameworks, because you need one if you want to write unit tests. I came up with the following list, (approximately) sorted by activity, highest activity at the top:

* GoogleTest / GoogleMock: Many contributors, and used by Google itself. It will probably be around for some time and receive updates. For my private code-base I will switch to this combination, in hopes of jumping on the fastest train.
* Boost.Test + Turtle: Not updated that often, but the testing framework is a part of Boost, so it should be maintained. Turtle, on the other hand, is maintained mainly by one guy, but it has recent activity, so it is not dead. I gained almost all my testing experience with this combination, because we already used the Boost library at my previous job and I currently use it for my private code.
* CppUTest: Provides testing and mocking. This project has been active from 2008 to 2015 and has quite a lot of recent activity. This find was a little surprise, because a lot of projects with significantly less activity come up more often when searching the web (like CppUnit, which had its last update in 2013). I have not looked deeper into it, so I can't say anything about the details. Edit (16.12.2015): I recently tried this out and found the framework to be a little clumsy and "C-stylish", especially when using the mock classes. Also, it seemed to have a smaller variety of assertions than other frameworks. I think its main strength is that it can be used with pure C projects.
* QTest: The test library that ships with the Qt framework. Maintenance should be guaranteed for some time, but I use it rather as a supporting library, because the test registration is IMO clumsier than in other frameworks. As far as I understand it, it forces you to have one test exe per test fixture. But the test helper functions can be of good use when testing Qt GUI code. It has no mocks.
* Catch: It has recent activity but is mainly developed by one guy. The nice thing about this framework is the alternative fixture approach that lets you write reusable fixture code in the test itself.
It also lets you set test names as strings which is nice when you tend to write whole sentences as test names. I whish this style would be ripped of and put into googleTest ;-) Mock Frameworks The number of mock frameworks is much smaller then the number of test frameworks but here are the ones that I found to have recent activity. * *Hippomock: Active from 2008 unitl now but only with low intensity. *FakeIt: Active from 2013 unitl now but more or less developed by one guy. Conclusion If your code-base is in for the long run, choose between between BoostTest + Turtle and GoogleTest + GoogleMock. I think those two will have long term maintenance. If you only have a short lived code-base you could try out Catch which has a nice syntax. Then you would need to additionally choose a mocking framework. If you work with Visual Studio you can download test-runner adaptors for BoostTest and GoogleTest, that will allow you to run the tests with the test runner GUI that is integrated into VS. A: Google recently released their own library for unit testing C++ apps, called Google Test. Project on Google Code A: Check out an excellent comparison between several available suites. The author of that article later developed UnitTest++. What I particularly like about it (apart from the fact that it handles exceptions etc. well) is that there is a very limited amount of 'administration' around the test cases and test fixtures definition. A: See also the answers to the closely related question "choosing a c++ unit testing tool/framework", here A: There also is TUT, Template-Unit-Test, a template-based framework. It's syntax is awkward (some called it template-abusing), but its main advantage is that is it all contained in a single header file. You'll find an example of unit-test written with TUT here. A: Boost has a Testing library which contains support for unit testing. It might be worth checking out. A: Noel Llopis of Games From Within is the author of Exploring the C++ Unit Testing Framework Jungle, a comprehensive (but now dated) evaluation of the various C++ Unit Testing frameworks, as well as a book on game programming. He used CppUnitLite for quite a while, fixing various things, but eventually joined forces with another unit test library author, and produced UnitTest++. We use UnitTest++ here, and I like it a lot, so far. It has (to me) the exact right balance of power with a small footprint. I've used homegrown solutions, CxxTest (which requires Perl), and boost::test. When I implemented unit testing here at my current job it pretty much came down to UnitTest++ vs boost::test. I really like most boost libraries I have used, but IMHO, boost::test is a little too heavy-handed. I especially did not like that it requires you (AFAIK) to implement the main program of the test harness using a boost::test macro. I know that it is not "pure" TDD, but sometimes we need a way to run tests from withing a GUI application, for example when a special test flag is passed in on the command line, and boost::test cannot support this type of scenario. UnitTest++ was the simplest test framework to set up and use that I have encountered in my (limited) experience. A: I've tried CPPunit and it's not very user friendly. The only alternative I know is using C++.NET to wrap your C++ classes and writing unit tests with one of .NET unit testing frameworks (NUnit, MBUnit etc.) A: CppUTest is an excellent, light-weight framework for C and C++ unit-testing. 
A: I'm using the excellent Boost.Test library in conjunction with a much less known but oh-so-awesome Turtle library : a mock object library based on boost. As a code example speaks better than words, imagine you would like to test a calculator object which works on a view interface (that is Turtle's introductory example) : // declares a 'mock_view' class implementing 'view' MOCK_BASE_CLASS( mock_view, view ) { // implements the 'display' method from 'view' (taking 1 argument) MOCK_METHOD( display, 1 ) }; BOOST_AUTO_TEST_CASE( zero_plus_zero_is_zero ) { mock_view v; calculator c( v ); // expects the 'display' method to be called once with a parameter value equal to 0 MOCK_EXPECT( v, display ).once().with( 0 ); c.add( 0, 0 ); } See how easy and verbose it is do declare expectation on the mock object ? Obviously, test is failed if expectations are not met. A: I've just pushed my own framework, CATCH, out there. It's still under development but I believe it already surpasses most other frameworks. Different people have different criteria but I've tried to cover most ground without too many trade-offs. Take a look at my linked blog entry for a taster. My top five features are: * *Header only *Auto registration of function and method based tests *Decomposes standard C++ expressions into LHS and RHS (so you don't need a whole family of assert macros). *Support for nested sections within a function based fixture *Name tests using natural language - function/ method names are generated It also has Objective-C bindings. A: Michael Feathers of ObjectMentor was instrumental in the development of both CppUnit and CppUnitLite. He now recommends CppUnitLite A: Have a look at CUnitWin32. It's written for MS Visual C. It includes an example. A: Have a look at cfix (http://www.cfix-testing.org), it's specialized for Windows C/C++ development and supports both user mode and kernel mode unit testing. A: If you are on Visual Studio 2008 SP1, I would highly recommend using MSTest for writing the unit tests. I then use Google mock for writing the mocks. The integration with the IDE is ideal and allows and doesn't carry the overhead of CPPunit in terms of editing three places for the addition of one test. A: I think VisualAssert is doing a great job in VS integration. It lets you run and debug the tests from VS and you don't need to create an executable in order to run the tests. A: Check out fructose: http://sourceforge.net/projects/fructose/ It's a very simple framework, containing only header files and thus easy portable. A: I'm using MS Test with Typemock Isolator++. Give it a try!
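Since the long comparison above recommends GoogleTest without showing any code, here is a minimal sketch of what a test looks like (a hypothetical example, assuming gtest is installed and linked; the function under test is made up):

#include <gtest/gtest.h>

int Add(int a, int b) { return a + b; }  // code under test (example only)

TEST(AddTest, HandlesPositiveNumbers)
{
    EXPECT_EQ(5, Add(2, 3));   // EXPECT_* records a failure but keeps running
}

TEST(AddTest, HandlesNegativeNumbers)
{
    ASSERT_EQ(-1, Add(2, -3)); // ASSERT_* aborts this test on failure
}

int main(int argc, char **argv)
{
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();    // returns non-zero if any test failed
}

Linking against gtest_main instead would let you drop the main function entirely.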
{ "language": "en", "url": "https://stackoverflow.com/questions/91384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "139" }
Q: Best way to debug an ODBC driver on Windows What is the best way to debug a custom ODBC driver on Windows? A former member of our team wrote the driver, so we have the source available. How do you attach a debugger to the driver? Or is it easier to just add "trace prints" to the driver to see what is going on?

A: The best solution I found so far is a combination of trace prints and breakpoints (int 3) compiled into the driver: trace prints for general debug information, and the breakpoints for pieces of the code where I need to more thoroughly investigate the inner state of the driver.

A: You can debug any ODBC driver by activating the logging for it via the Control Panel. Just go to the driver's properties, activate the logging and set the target log file - and then set up another program to read from it interactively, so you can see what's going on.

A: As far as I know, ODBC drivers are just DLLs which implement a specific set of functions. So if you have the sources available, you can use Visual Studio to debug it. Here is an article which seems to be something in the right direction: Debugging DLL Projects in Visual Studio 2005.

A: In addition to Visual Studio, you could use Wireshark to see what the ODBC driver is sending to the DB.
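To complement the Control Panel suggestion above, tracing can also be switched on programmatically through the driver manager. A rough C sketch (the trace file path is an arbitrary example, and error checking is omitted):

#include <windows.h>
#include <sql.h>
#include <sqlext.h>

int main(void)
{
    SQLHENV env;
    SQLHDBC dbc;

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    /* The driver manager logs every ODBC call and return code to this file. */
    SQLSetConnectAttr(dbc, SQL_ATTR_TRACEFILE,
                      (SQLPOINTER)"C:\\temp\\odbc_trace.log", SQL_NTS);
    SQLSetConnectAttr(dbc, SQL_ATTR_TRACE, (SQLPOINTER)SQL_OPT_TRACE_ON, 0);

    /* ... connect with SQLDriverConnect and exercise the custom driver ... */

    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}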
{ "language": "en", "url": "https://stackoverflow.com/questions/91398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to represent date and/or time information in JSON? JSON text (RFC 4627) has an unambiguous representation of objects, arrays, strings, numbers, Boolean values (literally true or false) and null. However, it has nothing defined for representing time information like date and time of day, which is very common in applications. What are the current methods in use to represent time in JSON given the constraints and grammar laid out in RFC 4627? Note to respondents: The purpose of this question is to document the various methods known to be in circulation along with examples and relative pros and cons (ideally from field experience).

A: ISO 8601 seems like a natural choice, but if you'd like to parse it with JavaScript running in a browser, you will need to use a library, because browser support for the parts of the JavaScript Date object that can parse ISO 8601 dates is inconsistent, even in relatively new browsers. Another problem with ISO 8601 is that it is a large, rich standard, and the date/time libraries support only part of it, so you will have to pick a subset of ISO 8601 that is supported by the libraries you use. Instead, I represent times as the number of milliseconds since 1970-01-01T00:00Z. This is understood by the constructor for the Date object in much older browsers, at least going back to IE7 (which is the oldest I have tested).

A: There is no date literal, so use what's easiest for you. For most people, that's either a string of the UTC output or a long integer of the UTC-centered timecode. Read this for a bit more background: http://msdn.microsoft.com/en-us/library/bb299886.aspx

A: The only representation that I have seen in use (though, admittedly, my experience is limited to Dojo) is ISO 8601, which works nicely and represents just about anything you could possibly think of. For examples, you can visit the link above. Pros:
* Represents pretty much anything you could possibly throw at it, including timespans (i.e. 3 days, 2 hours).
Cons:
* Umm... I don't know, actually. Other than perhaps it might take a bit of getting used to? It's certainly easy enough to parse, if there aren't built-in functions to parse it already.

A: I recommend using the RFC 3339 format, which is nice and simple, and understood by an increasing number of languages, libraries, and tools. Unfortunately, RFC 3339, Unix epoch time, and JavaScript millisecond time are all still not quite accurate, since none of them account for leap seconds! At some point we're all going to have to revisit time representations yet again. Maybe the next time we can be done with it.

A: Sorry to comment on such an old question, but in the intervening years more solutions have turned up. Representing date and/or time information in JSON is a special case of the more general problem of representing complex types and complex data structures in JSON. Part of what makes the problem tricky is that if you represent complex types like timestamps as JSON objects, then you need to have a way of expressing associative arrays and objects, which happen to look like your JSON object representation of a timestamp, as some other marked-up object. Google's protocol buffers have a JSON mapping which has the notion of a timestamp type, with defined semantics. MongoDB's BSON has an Extended JSON which says { "$date": "2017-05-17T23:09:14.000000Z" }. Both can also express way more complex structures in addition to datetime.
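To make the two main options above concrete, here is a small JavaScript sketch of both encodings (illustrative only):

// 1. ISO 8601 / RFC 3339 string: human readable, sorts lexicographically.
var asIso = JSON.stringify({ created: new Date().toISOString() });
// e.g. {"created":"2008-09-18T10:42:00.000Z"}

// 2. Milliseconds since 1970-01-01T00:00Z: trivially parsed everywhere.
var asMillis = JSON.stringify({ created: new Date().getTime() });
// e.g. {"created":1221734520000}

// Reading them back:
var fromIso = new Date(JSON.parse(asIso).created);       // needs ISO parsing support
var fromMillis = new Date(JSON.parse(asMillis).created); // works in old browsers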
{ "language": "en", "url": "https://stackoverflow.com/questions/91413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Export variable from C++ static library I have a static library written in C++ and I have a structure describing data format, i.e.

struct Format{
  long fmtId;
  long dataChunkSize;
  long headerSize;
  Format(long, long, long);
  bool operator==(Format const & other) const;
};

Some data formats are widely used, like {fmtId=0, dataChunkSize=128, headerSize=0} and {fmtId=0, dataChunkSize=256, headerSize=0} Some data structure classes receive a format in the constructor. I'd like to have some sort of shortcuts for those widely used formats, like a couple of global Format members gFmt128, gFmt256 that I can pass by reference. I instantiate them in a .cpp file like Format gFmt128(0, 128, 0); and in the .h there is extern Format gFmt128; also, I declare Format const & Format::Fmt128(){return gFmt128;} and try to use it in the main module. But if I try and do it in the main module that uses the lib, the linker complains about unresolved external gFmt128. How can I make my library 'export' those global vars, so I can use them from other modules?

A: Don't use the static keyword on global declarations. Here is an article explaining the visibility of variables with/without static. static gives globals internal linkage, that is, they are only visible in the translation unit they are declared in.

A: Are they defined in the .cpp file as well? Roughly, it should look like:

struct Format
{
    [...]
    static Format gFmt128;
};

// Format.cpp
Format Format::gFmt128 = { 0, 128, 0 };

A: You need to declare your Format objects as extern, not static.

A: Morhveus, I tried this out too. My linker rather says it has the gFmt128 symbol already defined. This is indeed the behaviour I would expect: the compiler adds the function body to both the library and the client object, since it's defined in the include file. The only way I get unresolved externals is by
* not adding the static library to the objects-to-be-linked
* not defining the symbol gFmt128 in the static library's source file
I'm puzzled... How come we see something different? Can you explain what happens?
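Pulling the answers together, a minimal sketch of the working pattern looks like this (the file layout is an example):

// format.h -- shipped with the static library
struct Format
{
    long fmtId;
    long dataChunkSize;
    long headerSize;
    Format(long, long, long);
    bool operator==(Format const & other) const;
};

extern Format gFmt128; // declaration only; note: no 'static' here
extern Format gFmt256;

// format.cpp -- compiled into the static library; the one definition
#include "format.h"

Format gFmt128(0, 128, 0);
Format gFmt256(0, 256, 0);

Any module that includes format.h and links against the library can then take a reference to gFmt128 directly.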
{ "language": "en", "url": "https://stackoverflow.com/questions/91420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: javascript message box I want to display an error message on my asp.net application. This message is a warning message, and this is the way I did it: CmdCalcInvoke.Attributes["onclick"] = "return confirm('Are you sure you want to calculate the certification? WARNING: If the quarter has not finished, all the partners status will change')"; The code above works fine. CmdCalcInvoke is an HtmlInputButton. This is the message that the message box displays: Are you sure you want to calculate the certification? WARNING: If the quarter has not finished, all the partners status will change What I want to do is display this message, but highlight the word WARNING by making it bold, or displaying the word in red. Can this be done? I can't remember seeing a message box with these characteristics, but I thought I would ask just in case. Any suggestions will be welcome.

A: You can if you don't use the default alert boxes. Try using a javascript modal window, which is just normal div markup whose styling you can control. Look at BlockUI for jQuery (there are loads of others).

A: You can try something like: http://weblogs.asp.net/johnkatsiotis/archive/2008/09/14/asp-net-messagebox-server-and-client.aspx

A: The modal dialog control I use is:
http://foohack.com/tests/vertical-align/dialog.html
http://foohack.com/2007/11/css-modal-dialog-that-works-right/
I find it works well across all browsers. I've hacked it around to work well with ASP.NET, and that was pretty easy.

A: It isn't possible to apply formatting to a standard dialogue box. However, if you really want to format it, you could flash up the message in HTML, either next to the button or as an absolutely positioned div, which you could format with CSS.

A: Else, use something like a Lightbox-based solution like Thickbox?
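As a rough illustration of the styled-div approach the answers describe (the ids, styling and wiring are all hypothetical):

<div id="confirmBox" style="display:none; position:absolute; top:30%; left:30%;
     background:#fff; border:1px solid #888; padding:1em; z-index:100;">
  Are you sure you want to calculate the certification?
  <strong style="color:red;">WARNING:</strong>
  If the quarter has not finished, all the partners status will change.
  <br/>
  <button onclick="answer(true)">OK</button>
  <button onclick="answer(false)">Cancel</button>
</div>

<script type="text/javascript">
  function showConfirm() {
    document.getElementById('confirmBox').style.display = 'block';
    return false; // cancel the postback until the user answers
  }
  function answer(yes) {
    document.getElementById('confirmBox').style.display = 'none';
    if (yes) {
      // trigger the real postback here, e.g. via __doPostBack
    }
  }
</script>

On the server side you would then wire it up with CmdCalcInvoke.Attributes["onclick"] = "return showConfirm();".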
{ "language": "en", "url": "https://stackoverflow.com/questions/91434", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Amazon S3 standalone stub server I seem to recall reading about an Amazon S3-compatible test server that you could run on your own server for unit tests or whatever. However, I've just exhausted my patience looking for this with both Google and AWS. Does such a thing exist? If not, I think I'll write one. Note: I'm asking about Amazon S3 (the storage system) rather than Amazon EC2 (cloud computing).

A: Are you thinking of Park Place? FYI, its old home page is offline now.

A: I think moto (https://github.com/spulec/moto) is the perfect tool for your unit tests. Moto mocks all accesses to S3, SQS, etc. and can be used in any programming language using their web server. It is trivial to set up, lightweight and fast. From moto's README: Imagine you have the following code that you want to test:

import boto
from boto.s3.key import Key

class MyModel(object):
    def __init__(self, name, value):
        self.name = name
        self.value = value

    def save(self):
        conn = boto.connect_s3()
        bucket = conn.get_bucket('mybucket')
        k = Key(bucket)
        k.key = self.name
        k.set_contents_from_string(self.value)

Take a minute to think how you would have tested that in the past. Now see how you could test it with Moto:

import boto
from moto import mock_s3
from mymodule import MyModel

@mock_s3
def test_my_model_save():
    model_instance = MyModel('steve', 'is awesome')
    model_instance.save()
    conn = boto.connect_s3()
    assert conn.get_bucket('mybucket').get_key('steve').get_contents_as_string() == 'is awesome'

A: Park Place has moved to GitHub: http://github.com/technoweenie/parkplace

A: Eucalyptus http://eucalyptus.cs.ucsb.edu/ EUCALYPTUS - Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems - is an open-source software infrastructure for implementing "cloud computing" on clusters. The current interface to EUCALYPTUS is compatible with Amazon's EC2 interface, but the infrastructure is designed to support multiple client-side interfaces. Note that, according to the documentation, Eucalyptus includes a reimplementation not only of the EC2 interface but also of the S3 storage system. That storage component is called Walrus. (http://open.eucalyptus.com/wiki/EucalyptusUserGuide_v1.5.2)

A: Fake S3 appears to be an up-to-date reimplementation of S3, specifically designed for use in testing.

A: We ran into the problem of testing our S3-based code locally and actually implemented a small Java server which emulates the S3 object API. As it might be useful to others, we set up a GitHub repo along with a small website: http://s3ninja.net - all open source under the MIT license. It's quite small and simple and can be set up in minutes. (Being a SIRIUS-based application, startup on a moderate server takes less than a second.)

A: Amazon uses Xen, so you can probably just run your AMI in your own Xen installation. I'd just fire up an instance and run the tests there, though. It doesn't cost much and you should usually be fine with developing locally and infrequently testing it on their system.
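For the standalone servers mentioned above (Fake S3, s3ninja, Park Place), old-style boto code can be pointed at the local endpoint instead of the real service. A hypothetical sketch (host and port are examples):

import boto
from boto.s3.connection import OrdinaryCallingFormat

conn = boto.connect_s3(
    aws_access_key_id='dummy',
    aws_secret_access_key='dummy',
    host='localhost',
    port=4567,
    is_secure=False,
    calling_format=OrdinaryCallingFormat(),  # path-style URLs, no DNS tricks
)

bucket = conn.create_bucket('test-bucket')
key = bucket.new_key('hello.txt')
key.set_contents_from_string('hello, stub')
assert key.get_contents_as_string() == 'hello, stub'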
{ "language": "en", "url": "https://stackoverflow.com/questions/91443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Examples of Hierarchical-Model-View-Controller (HMVC)? I'm interested in the Presentation-Abstraction-Control (PAC, aka Hierarchical-Model-View-Controller, HMVC) architectural pattern for constructing complex user interfaces (GUI or web) and was wondering if anyone was aware of any examples in the wild where I could read the code? I'm aware of the JavaWorld article and associated letters cited in the Presentation-Abstraction-Control Wikipedia article.

A: In the PHP world, I'm aware of a few methods that might qualify as HMVC. They all allow calling a controller and displaying the results from within a view. The calls can be nested infinitely, creating widgets within widgets.
* Zend Framework: Action View Helper
* CodeIgniter: 3rd party Modular Extensions - HMVC
* Kohana: 3rd party Component
Edit: Kohana 3 now natively supports HMVC

A: I wrote an HMVC framework a while back for J2EE and FreeMarker: http://www.neocoders.com/portal/projects/jandal and recently another one for Javascript: http://www.neocoders.com/portal/projects/subo These are fairly 'experimental', but might be of some academic use. cheers, Lindsay

A: It's my understanding that the Cairngorm framework for Adobe Flex is one example of an HMVC implementation. It's open source, so you can find out more information and download the code at Adobe's website.

A: The APF web framework - http://adventure-php-framework.org/Page/001-Home - has used HMVC for many years, and has a very experienced and engaged developer. Only the small community is a little discouraging.

A: I wrote an HMVC framework in PHP called Alloy: http://alloyframework.org/ It's pretty lightweight and has a modular structure.
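To make the Kohana 3 mention concrete, an HMVC sub-request there looks roughly like this (the URI and controller are hypothetical):

<?php
// Inside any view or controller action: dispatch a full internal request.
$widget = Request::factory('widgets/latest_news')->execute()->body();

// 'widgets/latest_news' routes to its own controller, which runs its own
// model/view cycle and returns rendered HTML; this nests to any depth.
echo $widget;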
{ "language": "en", "url": "https://stackoverflow.com/questions/91478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How to change default order of Group By clause in mysql By default, data grouped by the GROUP BY clause is ordered ascending. How do you change it to descending?

A: Add DESC to the GROUP BY clause, e.g.: GROUP BY myDate DESC

A: As the MySQL documentation says, SELECT * FROM foo GROUP BY bar is equivalent to SELECT * FROM foo GROUP BY bar ORDER BY bar The default behaviour cannot be changed, but you can use SELECT * FROM foo GROUP BY bar ORDER BY bar DESC without experiencing any speed penalties, as the sorting will be performed on the grouped field anyway. By the way, when sorting is not important you can get a (small) speed-up by using ORDER BY NULL.

A: You should use derived tables in your SQL. For example, if you want to pick the most recent row for a specific customer, you might attempt: select * from activities group by id_customer order by creation_date but it doesn't work. Try instead: SELECT * FROM ( select * from activities order by creation_date desc ) sorted_list GROUP BY id_customer

A: ORDER BY foo DESC?
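A quick worked example with an assumed table, showing an explicit ORDER BY winning over GROUP BY's default ordering:

CREATE TABLE orders (
  id          INT PRIMARY KEY,
  id_customer INT,
  amount      DECIMAL(10,2)
);

SELECT id_customer, SUM(amount) AS total
FROM orders
GROUP BY id_customer
ORDER BY total DESC;  -- the explicit ORDER BY overrides the default sort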
{ "language": "en", "url": "https://stackoverflow.com/questions/91479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: source for eclipse plugin.xml page I would like to know where I can find the code which Eclipse uses to display the forms in the plugin.xml file. In particular I am looking for the form layout used in the extensions tab of the plugin.xml editor.

A: Unfortunately, Eclipse's plugin search doesn't work for referenced plugins. To do these searches I created a workspace that contains all the plugins from my Eclipse install as source folders. I just open the workspace and perform my plugin search there. Just open the search dialog and choose plugin.

A: You can import the Eclipse plugins into your workspace by using Import -> Plug-ins and Fragments from the package explorer. Then use the following options:
* Select from all plug-ins and fragments found at the specified location
* Import from the target platform with source folders (last option)
Import the org.eclipse.pde.ui plugin. The code you seek is in org.eclipse.pde.internal.ui.editor.plugin More specifically, the ExtensionPointsPage class
{ "language": "en", "url": "https://stackoverflow.com/questions/91480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you get an embedded Jetty webserver to dump its interim Java code for JSPs I keep running into this problem when debugging JSP pages in OpenNMS. The Jetty wiki talks about keepGenerated (http://docs.codehaus.org/display/JETTY/KeepGenerated) in webdefault.xml, but it seems unclear how this works in embedded setups.

A: I know this is ages old, but I haven't found the answer anywhere else on the internet and it doesn't seem as though this has gotten any easier. Hopefully this will help someone: Extract webdefault.xml from the jetty-version.jar; mine was in: C:\Documents and Settings\JB.m2\repository\org\mortbay\jetty\jetty\6.1.22\jetty-6.1.22.jar inside the org/mortbay/jetty/webapp/webdefault.xml file. Put the webdefault.xml into your project directory. Edit the webdefault.xml and add the following init-param:

<servlet id="jsp">
   ....
   <init-param>
       <param-name>keepgenerated</param-name>
       <param-value>true</param-value>
   </init-param>

Add the following into your Maven pom.xml config:

<plugin>
    <groupId>org.mortbay.jetty</groupId>
    <artifactId>maven-jetty-plugin</artifactId>
    <configuration>
        <webDefaultXml>webdefault.xml</webDefaultXml>
    </configuration>
</plugin>

When you run the mvn jetty:run Maven goal, the JSP code is kept in target\work\jsp\org\apache\jsp\WEB_002dINF\jsp

A: If you are using Jetty 6 you can use the following code:

String webApp = "./web/myapp"; // Location of the jsp files
String contextPath = "/myapp";
WebAppContext webAppContext = new WebAppContext(webApp, contextPath);
ServletHandler servletHandler = webAppContext.getServletHandler();
ServletHolder holder = new ServletHolder(JspServlet.class);
servletHandler.addServletWithMapping(holder, "*.jsp");
holder.setInitOrder(0);
holder.setInitParameter("compiler", "modern");
holder.setInitParameter("fork", "false");
File dir = new File("./web/compiled/" + webApp);
dir.mkdirs();
holder.setInitParameter("scratchdir", dir.getAbsolutePath());

A: It is dumped already. For example, if you have a file called index.jsp, a file will be created called index_jsp.java. Just search for something like that in the work directory.
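For the embedded (Jetty 6) example above, the same keepgenerated switch from webdefault.xml can presumably be set on the ServletHolder directly; a hypothetical one-line addition to that setup:

// Ask Jasper to keep the generated .java sources in the scratch directory
// instead of deleting them after compilation.
holder.setInitParameter("keepgenerated", "true");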
{ "language": "en", "url": "https://stackoverflow.com/questions/91487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to wrap an existing memory buffer as a DC for GDI I have a memory buffer corresponding to my screen resolution (1280x800 at 24-bits-per-pixel) that contains my screen contents at 24bpp. I want to convert this to 8-bpp (i.e. the halftone color palette in Windows). I currently do this:
1. Use CreateDIBSection to allocate a new 1280x800 24-bpp buffer and access it as a DC, as well as a plain memory buffer
2. Use memcpy to copy from my original buffer to this new buffer from step 1
3. Use BitBlt to let GDI perform the color conversion
I want to avoid the extra memcpy of step 2. To do this, I can think of two approaches:
a. Wrap my original mem buf in a DC to perform BitBlt directly from it
b. Write my own 24-bpp to 8-bpp color conversion. I can't find any info on how Windows implements this halftone color conversion. Besides, even if I find out, I won't be using the accelerated features of GDI that BitBlt has access to.
So how do I do either (a) or (b)? thanks!

A: OK, to address the two parts of the problem.
1. The following code shows how to get at the pixels inside of a bitmap, change them and put them back into the bitmap. You could always generate a dummy bitmap of the correct size and format, open it up, copy over your data, and you then have a bitmap object with your data:

private void LockUnlockBitsExample(PaintEventArgs e)
{
    // Create a new bitmap.
    Bitmap bmp = new Bitmap("c:\\fakePhoto.jpg");

    // Lock the bitmap's bits.
    Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    System.Drawing.Imaging.BitmapData bmpData =
        bmp.LockBits(rect, System.Drawing.Imaging.ImageLockMode.ReadWrite,
        bmp.PixelFormat);

    // Get the address of the first line.
    IntPtr ptr = bmpData.Scan0;

    // Declare an array to hold the bytes of the bitmap.
    int bytes = bmpData.Stride * bmp.Height;
    byte[] rgbValues = new byte[bytes];

    // Copy the RGB values into the array.
    System.Runtime.InteropServices.Marshal.Copy(ptr, rgbValues, 0, bytes);

    // Set every third value to 255. A 24bpp bitmap will look red.
    for (int counter = 2; counter < rgbValues.Length; counter += 3)
        rgbValues[counter] = 255;

    // Copy the RGB values back to the bitmap
    System.Runtime.InteropServices.Marshal.Copy(rgbValues, 0, ptr, bytes);

    // Unlock the bits.
    bmp.UnlockBits(bmpData);

    // Draw the modified image.
    e.Graphics.DrawImage(bmp, 0, 150);
}

2. To convert the contents to 8bpp you'll want to use the System.Drawing.Imaging.ColorMatrix class. I don't have the correct matrix values for halftone to hand, but this example (which adjusts alpha), with some adjustment of the values, should give you an idea of the effect:

Graphics g = e.Graphics;
Bitmap bmp = new Bitmap("sample.jpg");
g.FillRectangle(Brushes.White, this.ClientRectangle);

// Create a color matrix
// The value 0.6 in row 4, column 4 specifies the alpha value
float[][] matrixItems = {
    new float[] {1, 0, 0, 0, 0},
    new float[] {0, 1, 0, 0, 0},
    new float[] {0, 0, 1, 0, 0},
    new float[] {0, 0, 0, 0.6f, 0},
    new float[] {0, 0, 0, 0, 1}};
ColorMatrix colorMatrix = new ColorMatrix(matrixItems);

// Create an ImageAttributes object and set its color matrix
ImageAttributes imageAtt = new ImageAttributes();
imageAtt.SetColorMatrix(colorMatrix, ColorMatrixFlag.Default,
    ColorAdjustType.Bitmap);

// Now draw the semitransparent bitmap image.
g.DrawImage(bmp, this.ClientRectangle, 0.0f, 0.0f, bmp.Width, bmp.Height,
    GraphicsUnit.Pixel, imageAtt);

imageAtt.Dispose();

I shall try and update later with the matrix values for halftone; it's likely to be lots of 0.5 or 0.333 values in there!

A: Use CreateDIBitmap rather than CreateDIBSection.
A: If you want to eliminate the copy (step 2), just use CreateDIBSection to create your original memory buffer in the first place. Then you can create a compatible DC for that bitmap and use it as the source for the BitBlt operation. I.e. there is no need to copy the memory from a "plain memory" buffer to a CreateDIBSection bitmap prior to blitting if you use a CreateDIBSection bitmap instead of a "plain memory" buffer in the first place. After all, a buffer allocated using CreateDIBSection is essentially just a "plain memory" buffer that is compatible with CreateCompatibleDC, which is what you are looking for.

A: How did you get the screen contents into this 24bpp memory buffer in the first place? The obvious route to avoiding a needless memcpy is to subvert the original screengrab by creating the 24bpp DIBSection first, and passing it to the screengrab function as the destination buffer. If that's not possible, you can still try to coerce GDI into doing the heavy lifting by creating a BITMAPINFOHEADER describing the format of the memory buffer and just calling StretchDIBits to blit it onto your 8bpp DIBSection.
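Putting the last two answers together, a rough Win32 sketch of the no-memcpy path (untested; creating the 8bpp halftone-palette DIB section is elided, and error handling is omitted):

#include <windows.h>

HBITMAP Create24bppDib(int w, int h, void **bits)
{
    BITMAPINFO bmi = {0};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = w;
    bmi.bmiHeader.biHeight      = -h;  /* negative height = top-down rows */
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 24;
    bmi.bmiHeader.biCompression = BI_RGB;
    /* grab the screen straight into *bits: no intermediate buffer needed */
    return CreateDIBSection(NULL, &bmi, DIB_RGB_COLORS, bits, NULL, 0);
}

void ConvertTo8bpp(HBITMAP hbm24, HBITMAP hbm8, int w, int h)
{
    HDC src = CreateCompatibleDC(NULL);
    HDC dst = CreateCompatibleDC(NULL);
    HGDIOBJ oldSrc = SelectObject(src, hbm24);
    HGDIOBJ oldDst = SelectObject(dst, hbm8); /* 8bpp DIB section + palette */

    /* GDI performs the 24bpp -> 8bpp color conversion during the blit */
    BitBlt(dst, 0, 0, w, h, src, 0, 0, SRCCOPY);

    SelectObject(src, oldSrc);
    SelectObject(dst, oldDst);
    DeleteDC(src);
    DeleteDC(dst);
}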
{ "language": "en", "url": "https://stackoverflow.com/questions/91511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: jQuery attribute selectors: How to query for an attribute with a custom namespace Suppose I have a simple XHTML document that uses a custom namespace for attributes: <html xmlns="..." xmlns:custom="http://www.example.com/ns"> ... <div class="foo" custom:attr="bla"/> ... </html> How do I match each element that has a certain custom attribute using jQuery? Using $("div[custom:attr]") does not work. (Tried with Firefox only, so far.)

A: The syntax for matching by attribute is: $("div[customattr=bla]") matches div customattr="bla" $("[customattr]") matches all tags with the attribute "customattr" With namespaced attributes like 'custom:attr' it's not working. Here you can find a good overview.

A: jQuery does not support custom namespaces directly, but you can find the divs you are looking for by using the filter function.

// find all divs that have custom:attr
$('div').filter(function() {
    return $(this).attr('custom:attr');
}).each(function() {
    // matched a div with custom:attr
    $(this).html('I was found.');
});

A: You should use $('div').attr('custom:attr').

A: Here is an implementation of a custom selector that works for me.

// Custom jQuery selector to select on custom namespaced attributes
$.expr[':'].nsAttr = function(obj, index, meta, stack) {

    // if the parameter isn't a string, the selector is invalid,
    // so always return false.
    if ( typeof meta[3] != 'string' )
        return false;

    // if the parameter doesn't have an '=' character in it,
    // assume it is an attribute name with no value,
    // and match all elements that have only that attribute name.
    if ( meta[3].indexOf('=') == -1 )
    {
        var val = $(obj).attr(meta[3]);
        return (typeof val !== 'undefined' && val !== false);
    }
    // if the parameter does contain an '=' character,
    // we should only match elements that have an attribute
    // with a matching name and value.
    else
    {
        // split the parameter into name/value pairs
        var arr = meta[3].split('=', 2);
        var attrName = arr[0];
        var attrValue = arr[1];

        // if the current object has an attribute matching the specified
        // name & value, include it in our selection.
        return ( $(obj).attr(attrName) == attrValue );
    }
};

Example usage:

// Show all divs where the custom attribute matches both name and value.
$('div:nsAttr(MyNameSpace:customAttr=someValue)').show();

// Show all divs that have the custom attribute, regardless of its value.
$('div:nsAttr(MyNameSpace:customAttr)').show();

A: This works in some conditions: $("div[custom\\:attr]") However, for a more advanced method, see this XML Namespace jQuery plug-in
{ "language": "en", "url": "https://stackoverflow.com/questions/91518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: How can drawString method be used for writing diagonal way I am using C# 2005. I want to write a string diagonally on an image. By default C# provides options to write horizontally or vertically; how do we write diagonally? Thanks

A: You can use the RotateTransform and TranslateTransform that are available on the Graphics class. Because DrawString is GDI+, the transforms affect the drawing. So use something like this...

g.RotateTransform(45f);
g.DrawString("My String"...);
g.RotateTransform(-45f);

Don't forget to reverse the change though!

A: Do a Graphics.RotateTransform before the DrawString call. Don't forget to reverse the change afterwards, as Phil Wright points out.

A: You can use this function.

void DrawDiagonalString(Graphics G, string S, Font F, Brush B, PointF P, int Angle)
{
    SizeF MySize = G.MeasureString(S, F);
    G.TranslateTransform(P.X + MySize.Width / 2, P.Y + MySize.Height / 2);
    G.RotateTransform(Angle);
    G.DrawString(S, F, B, new PointF(-MySize.Width / 2, -MySize.Height / 2));
    G.RotateTransform(-Angle);
    G.TranslateTransform(-P.X - MySize.Width / 2, -P.Y - MySize.Height / 2);
}

A: You're right, it can be done that way, but the text will always be written from top to bottom, and I'm not sure you can change it from bottom to top. Cheers

A: There is another way to draw text vertically which is built into C#. There is no need for an explicit graphics transformation. You can use the StringFormat class. Here is sample code which draws a text vertically:

StringFormat sf = new StringFormat();
sf.FormatFlags = StringFormatFlags.DirectionVertical;
e.Graphics.DrawString("My String", this.Font, Brushes.Black, PointF.Empty, sf);
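A hypothetical usage of the helper function above, from inside a form's Paint handler:

protected override void OnPaint(PaintEventArgs e)
{
    base.OnPaint(e);
    using (Font f = new Font("Arial", 14))
    {
        // 45 degrees clockwise, anchored around (20, 20)
        DrawDiagonalString(e.Graphics, "Diagonal text", f, Brushes.Red,
                           new PointF(20, 20), 45);
    }
}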
{ "language": "en", "url": "https://stackoverflow.com/questions/91521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Streaming audio with Flash / Actionscript 3 - Slow playback problem I've written a simple Flash player for a Shoutcast stream. At first it seemed to work reliably; however, about 5% of the time users experience slow playback where the stream plays at roughly half of normal speed. All files being streamed are MP3, encoded at 128kbps/44.1kHz, the same settings as used in the Shoutcast config files, so the issue is not caused by mismatched bit rates as suggested on a number of forums I have read. Has anyone else encountered this problem and possibly found a solution? Regards, Alan EDIT: A sample player can be found at http://radionations.com/utils/players/pulse.swf There is no graphical display as the player is designed to run in the background. The problem only occurs a small proportion of the time, and only when the player is being loaded in the browser. It does not occur mid-stream. The player has been tested on a number of different machines running Windows XP, Vista, Ubuntu, and MacOS X. Various different hardware configurations are involved. The problem occurs across all of these test platforms, so I am inclined to believe it is not an issue with problematic / buggy audio drivers. I have encountered the problem both with and without other applications using the audio device. EDIT: I'm surprised I still haven't found a solution to this problem. So I've decided to come back to it now in the hopes that somebody might know something. Any help is greatly appreciated. Thanks, Alan

A: This is a Flash Player bug, unfortunately. It seems like the only reliable solution is to roll it back to AS2. https://bugs.adobe.com/jira/browse/FP-173

A: I believe that the slow playing is caused by audio driver problems. Can you give a link to the player?

A: I have encountered the slow playing problem in your player, about 25% of the time when I reloaded it, but only if another application using the audio device is running when the stream starts playing, as far as I can tell after a quick look. Maybe you should test this situation on multiple computers. I would guess that it's an audio driver problem; I'm using XP 64 and my audio drivers are beta.

A: I made a very simple player that streams from your server and it also had that problem. Very intriguing... I then made it start the sound a little bit later, after it loaded 100K, and it seems to be working. I don't have time to test it more right now though. You can get it here http://rromania.ro/sc/sc.rar

A: It seems to work when, after the complete event is dispatched, you wait 6 seconds so it downloads enough data to run at normal speed. I tried it a bunch of times; no slow play yet.

A: I've been successfully using: s.load(new URLRequest(mp3), new SoundLoaderContext(3000)); // 3000 is a 3 sec buffer time I can't be 100% sure, but since I've been buffering it I haven't noticed any slow play after many reloads. Hope this helps.
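Summing up the buffering workaround the last answers describe, an ActionScript 3 sketch (the stream URL and buffer length are placeholders):

var sound:Sound = new Sound();

// Ask the player to buffer 5 seconds of audio before playback starts.
var context:SoundLoaderContext = new SoundLoaderContext(5000, true);
sound.load(new URLRequest("http://example.com:8000/;stream.mp3"), context);
var channel:SoundChannel = sound.play();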
{ "language": "en", "url": "https://stackoverflow.com/questions/91525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Switch over PropertyType How can I make this work?

switch(property.PropertyType){
    case typeof(Boolean):
        //doStuff
        break;
    case typeof(String):
        //doOtherStuff
        break;
    default: break;
}

I don't want to use the name, since string comparison for types is just awful and can be subject to change.

A:

System.Type propertyType = typeof(Boolean);
System.TypeCode typeCode = Type.GetTypeCode(propertyType);
switch (typeCode)
{
    case TypeCode.Boolean:
        //doStuff
        break;
    case TypeCode.String:
        //doOtherStuff
        break;
    default: break;
}

You can use a hybrid approach for TypeCode.Object, where you dynamically test with typeof. This is very fast because for the first part - the switch - the compiler can decide based on a lookup table.

A: You can't. What you can do is create a mapping between types and delegates using a dictionary:

var TypeMapping = new Dictionary<Type, Action<string>>(){
    {typeof(string), (x)=>Console.WriteLine("string")},
    {typeof(bool), (x)=>Console.WriteLine("bool")}
};

string s = "my string";
TypeMapping[s.GetType()]("foo");
TypeMapping[true.GetType()]("true");

A: I think what you are looking for here is a good map. Using delegates and a generic IDictionary you can do what you want. Try something like this:

private delegate object MyDelegate();
private IDictionary<Type, MyDelegate> functionMap = new Dictionary<Type, MyDelegate>();

public void Init()
{
    functionMap.Add(typeof(String), someFunction);
    functionMap.Add(typeof(Boolean), someOtherFunction);
}

public T doStuff<T>(Type someType)
{
    return (T)functionMap[someType]();
}

A: C# 7.0 will support switching on types as part of a bigger pattern matching feature. This example is taken from the .NET blog post that announces the new features:

switch(shape)
{
    case Circle c:
        WriteLine($"circle with radius {c.Radius}");
        break;
    case Rectangle s when (s.Length == s.Height):
        WriteLine($"{s.Length} x {s.Height} square");
        break;
    case Rectangle r:
        WriteLine($"{r.Length} x {r.Height} rectangle");
        break;
    default:
        WriteLine("<unknown shape>");
        break;
    case null:
        throw new ArgumentNullException(nameof(shape));
}

A: Do not worry about using strings within a switch, because if you have several the compiler will automatically convert it into a hash lookup, giving decent performance despite it looking pretty awful. The problem of type strings changing can be solved by making it into an explicit hash lookup yourself and populating the contents of the hash in a static constructor. That way the hash is populated with the correct strings at runtime, so they remain correct.

A: You can't do this with switch in C#, as the case labels have to be constant. What is wrong with:

if(property.PropertyType == typeof(bool))
{
    //dostuff;
}
else if (property.PropertyType == typeof(string))
{
    //do other stuff;
}

A: I recently had to do something similar, and using switch wasn't an option. Doing an == on typeof(x) is fine, but a more elegant way might be to do something like this:

if(property.PropertyType is bool){
    //dostuff;
} else if (property.PropertyType is string){
    //do other stuff;
}

But I'm not certain that you can use the "is" keyword in this way; I think it only works for objects...

A: About the string matching: it was one of the requirements in the question not to do it through string matching. The dictionary is an approach I will use when I put this entire serialization algorithm in its own library. For now I will first try the TypeCode approach, as my case only uses basic types.
If that doesn't work I will go back to the swarm of if/elses :S Before people ask me why I want my own serialization: 1) .NET XML serialization doesn't serialize properties without setters 2) the serialization has to comply with some legacy rules

A: Just use the normal if/else if/else pattern:

if (property.PropertyType == typeof(Boolean))
{
}
else if (property.PropertyType == typeof(String))
{
}
else if (...)
{
}

A: I personally prefer the Dictionary<Type, other> approach the most... I can even provide you another example: http://www.timvw.be/presenting-namevaluecollectionhelper/ In case you insist on writing a switch-case statement, you could use the Type name. Note that case labels must be compile-time constants, so you have to spell the full names out as string literals:

switch (blah.PropertyType.FullName)
{
    case "System.Int32":   // i.e. typeof(int).FullName
        break;
    case "System.String":  // i.e. typeof(string).FullName
        break;
}
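Since the question is driven by serialization, here is a small sketch tying the dictionary-dispatch idea to reflection over properties (the types and names are examples):

using System;
using System.Collections.Generic;
using System.Reflection;

static class PropertyDumper
{
    static readonly Dictionary<Type, Action<object>> handlers =
        new Dictionary<Type, Action<object>>
    {
        { typeof(bool),   v => Console.WriteLine("bool: " + v) },
        { typeof(string), v => Console.WriteLine("string: " + v) },
    };

    public static void Dump(object obj)
    {
        foreach (PropertyInfo p in obj.GetType().GetProperties())
        {
            Action<object> handler;
            if (handlers.TryGetValue(p.PropertyType, out handler))
                handler(p.GetValue(obj, null)); // null = no indexer arguments
        }
    }
}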
{ "language": "en", "url": "https://stackoverflow.com/questions/91563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38" }
Q: crti.o file missing I'm building a project using a GNU toolchain and everything works fine until I get to linking it, where the linker complains that it is missing/can't find crti.o. This is not one of my object files; it seems to be related to libc, but I can't understand why it would need this crti.o. Wouldn't it use a library file, e.g. libc.a? I'm cross compiling for the ARM platform. I have the file in the toolchain, but how do I get the linker to include it? crti.o is on one of the 'libraries' search paths, but should it look for a .o file on the library path? Is the search path the same for gcc and ld?

A: I had the same issue while cross-compiling. crti.o was in <sysroot>/usr/lib64 but the linker would not find it. It turns out that creating an empty directory <sysroot>/usr/lib fixed the issue. It seems that the linker would search for the path <sysroot>/usr/lib first, and only if it exists would it even consider <sysroot>/usr/lib64. Is this a bug in the linker? Or is this behaviour documented somewhere?

A: In my case (Linux Mint 18.0/Ubuntu 16.04), I have no crti.o at all: $ find /usr/ -name crti* I find nothing, so I install the developer package: sudo apt-get install libc6-dev If you find some libs, read here.

A: crti.o is the bootstrap object file, generally quite small. It's usually statically linked into your binary. It should be found in /usr/lib. If you're running a binary distribution, they tend to put all the developer stuff into -dev packages (e.g. libc6-dev), as it's not needed to run compiled programs, just to build them. You're not cross-compiling, are you? If you're cross-compiling, it's usually a problem with gcc's search path not matching where your crti.o is. It should have been built when the toolchain was. The first thing to check is gcc -print-search-dirs and see if crti.o is in any of those paths. The linking is actually done by ld, but it has its paths passed down to it by gcc. Probably the quickest way to find out what's going on is to compile a helloworld.c program and strace it to see what is getting passed to ld, and see what's going on.
strace -v -o log -f -e trace=open,fork,execve gcc hello.c -o test

Open the log file and search for crti.o. As you can see, on my non-cross compiler:

10616 execve("/usr/bin/ld", ["/usr/bin/ld", "--eh-frame-hdr", "-m", "elf_x86_64", "--hash-style=both", "-dynamic-linker", "/lib64/ld-linux-x86-64.so.2", "-o", "test", "/usr/lib/gcc/x86_64-linux-gnu/4."..., "/usr/lib/gcc/x86_64-linux-gnu/4."..., "/usr/lib/gcc/x86_64-linux-gnu/4."..., "-L/usr/lib/gcc/x86_64-linux-gnu/"..., "-L/usr/lib/gcc/x86_64-linux-gnu/"..., "-L/usr/lib/gcc/x86_64-linux-gnu/"..., "-L/lib/../lib", "-L/usr/lib/../lib", "-L/usr/lib/gcc/x86_64-linux-gnu/"..., "/tmp/cc4rFJWD.o", "-lgcc", "--as-needed", "-lgcc_s", "--no-as-needed", "-lc", "-lgcc", "--as-needed", "-lgcc_s", "--no-as-needed", "/usr/lib/gcc/x86_64-linux-gnu/4."..., "/usr/lib/gcc/x86_64-linux-gnu/4."...], "COLLECT_GCC=gcc", "COLLECT_GCC_OPTIONS='-o' 'test' "..., "COMPILER_PATH=/usr/lib/gcc/x86_6"..., "LIBRARY_PATH=/usr/lib/gcc/x86_64"..., "COLLECT_NO_DEMANGLE="]) = 0
10616 open("/etc/ld.so.cache", O_RDONLY) = 3
10616 open("/usr/lib/libbfd-2.18.0.20080103.so", O_RDONLY) = 3
10616 open("/lib/libc.so.6", O_RDONLY) = 3
10616 open("test", O_RDWR|O_CREAT|O_TRUNC, 0666) = 3
10616 open("/usr/lib/gcc/x86_64-linux-gnu/4.2.3/../../../../lib/crt1.o", O_RDONLY) = 4
10616 open("/usr/lib/gcc/x86_64-linux-gnu/4.2.3/../../../../lib/crti.o", O_RDONLY) = 5
10616 open("/usr/lib/gcc/x86_64-linux-gnu/4.2.3/crtbegin.o", O_RDONLY) = 6
10616 open("/tmp/cc4rFJWD.o", O_RDONLY) = 7

If you see a bunch of attempts to open(...crti.o) = -1 ENOENT, ld is getting confused and you want to see where the path it's opening came from...

A: OK, I had to reinstall the toolchain so that the missing files were included. It seems strange, since it should have found it on the gcc path. The main problem, I guess, was that I had 15 or so different crti.o files on my computer and wasn't pointing to the correct one. Still doesn't make sense, but it works now :-) Thanks for your help :-)

A: I had a similar problem with a badly set-up cross-compiler. I got around it like so: /home/rob/compiler/usr/bin/arm-linux-gcc --sysroot=/home/rob/compiler hello.c This assumes /lib, /usr/include and so on exist in the location pointed to by the sysroot option. This is probably not how things are supposed to be done, but it got me out of trouble when I needed to compile a simple C file.

A: If you are cross-compiling, add the sysroot option to LDFLAGS: export LDFLAGS="--sysroot=${SDKTARGETSYSROOT} -L${SDKTARGETSYSROOT}/lib -L${SDKTARGETSYSROOT}/usr/lib -L${SDKTARGETSYSROOT}/usr/lib/arm-poky-linux-gnueabi/5.3.0"

A: I get the same kind of issue on a default Ubuntu 8.04 install. I had to get the libc developer headers/files manually for it to work.

A: This solved it for me (cross compiling pjsip for ARM): export LDFLAGS='--sysroot=/home/me/<path-to-my-sysroot-parent>/sysroot'
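A few shell one-liners that often narrow this down (paths are examples; gcc's -B flag tells it where to look for the crt*.o startup files when they exist but are off the default search path):

find "$SYSROOT" -name 'crti.o'
arm-linux-gcc -print-search-dirs | tr ':' '\n'
arm-linux-gcc -B"$SYSROOT/usr/lib" --sysroot="$SYSROOT" hello.c -o hello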
{ "language": "en", "url": "https://stackoverflow.com/questions/91576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: What do you need from a test harness? I'm one of the people involved in the Test Anything Protocol (TAP) IETF group (if interested, feel free to join the mailing list). Many programming languages are starting to adopt TAP as their primary testing protocol and they want more from it than what we currently offer. As a result, we'd like to get feedback from people who have a background in xUnit, TestNG or any other testing framework/methodology. Basically, aside from a simple pass/fail, what information do you need from a test harness? Just to give you some examples:
* Filename and line number (if applicable)
* Start and end time
* Diagnostic output such as the difference between what you got and what you expected.
And so on ...

A: Most definitely all things from your list for each individual item:
* Filename
* Line number
* Namespace/class/function name
* Test coverage
* Start time and end time
* And/or total time (this would be more useful for me than the top two items)
* Diagnostic output such as the difference between what you got and what you expected.
Off the top of my head not much else, but for a group of tests I would like to know
* group name
* total execution time

A: It must be very, very easy to write a test, and equally easy to run them. That, to me, is the single most important feature of a testing harness. If someone has to fire up a GUI or jump through a bunch of hoops to write a test, they won't use it.

A: An arbitrary set of tags - so I can mark a test as, for example, "integration, UI, admin". (you knew I was going to ask for this, didn't you :-)

A: To what you said I'd add:
* Method/function/class name
* Coverage counting tool, with exceptions (do not count these methods)
* Result of N last runs available
* Mandate that ways to easily parse test results must exist

A: Any sort of diagnostic output - especially on failure - is critical. If a test fails, you don't want to always have to rerun the test under a debugger to see what happened - there should be some clues in the output. I also like to see a before and after snapshot of critical system variables like memory or hard disk space available, as those can provide great clues as well. Finally, if you're using random seeds for any of the tests, write the seed out to the logfile so that the test can be reproduced if necessary.

A: I'd like the ability to concatenate and nest TAP streams.

A: A unique id (uuid, md5sum) to be able to identify an individual test -- say, for use when inserting test results in a database, or identifying them in a bug tracker to make it possible for QA to rerun an individual test. This would also make it possible to trace an individual test's behavior from build to build through the entire lifecycle of multiple revisions of a product. This could eventually allow larger-scale correlations between 'historic' events (new hire, product release, hardware upgrades) and the profile(s) of tests that fail as a result of such events. I'm also thinking that TAP should be emitted through a dedicated side channel rather than mixed in with stdout. I'm not sure this is within the scope of the protocol definition.
A: I use TAP as the output protocol for a set of simple C++ test methods, and have seen the following shortcomings:
* test steps cannot be put into groups (there's only the grouping into several test scripts; but for running all tests in our software, I need at least one more level of grouping, so that a single test step would be identified by something like "DB connection" -> "Reconnection Test" -> "test step #3")
* seeing differences between expected and actual output is useful; I either print the diff to stderr (as a comment) or actually launch a graphical diff tool
* the protocol and tools must be really language-independent. For example, so far I only know of the Perl "prove" tool for running tests, which is limited to running Perl scripts
In the end, the test output must be suitable as a basis for easily generating an HTML report file which lists succeeded tests very concisely, gives detailed output for failed tests, and makes it possible to quickly jump into the IDE to the failing test line.

A:
* optional ASCII coloured output: green for good, yellow for pending, red for errors
* the idea of things being pending
* a summary at the end of the test report with commands that will re-run the individual tests where
  * something went wrong
  * something in the test was pending

A: Extension idea for TAP:

1..4
ok 1 - yay
not ok 2 - boo
ok 3 - yay #json:{...}
ok 4 - see my json

Ability to attach a #json comment...
- can be safely ignored by existing code
- well-defined tags can be easily reserved at testanything.org
- easy to produce, parse and read complex types
- yaml is a pain
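For what it's worth, emitting TAP from C++ needs almost no machinery; a toy sketch of a producer:

#include <iostream>
#include <string>

static int test_no = 0;

void ok(bool passed, const std::string& name)
{
    ++test_no;
    std::cout << (passed ? "ok " : "not ok ") << test_no
              << " - " << name << '\n';
    if (!passed)
        std::cout << "# diagnostic details go here as a TAP comment\n";
}

int main()
{
    std::cout << "1..2\n";  // the plan: two tests follow
    ok(1 + 1 == 2, "addition works");
    ok(std::string("a") + "b" == "ab", "string concatenation works");
    return 0;
}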
{ "language": "en", "url": "https://stackoverflow.com/questions/91585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Easiest cross platform widget toolkit? What is the easiest cross platform widget toolkit? I'm looking for one that minimally covers Windows, OSX, and Linux with a C or C++ interface.

A: I really like Qt. I have been working with it in several projects now. Although the project I am currently working on will be released for Windows only, some of our developers code under Mac OS X using gcc, and using different compilers and environments is an extra benefit for locating errors & bugs. I forgot to mention that Qt has really good documentation, including lots of practical examples that help for a quick start.

A: I've used both wxWidgets and Qt professionally. Both are certainly capable of meeting your goals. Which one is easiest is hard to say; you don't tell us whether you're looking for easy to use, or easy to learn. Qt is easier for big programs. wxWidgets is easier to learn. This is for a large part due to the signal/slot mechanism in Qt, which is a good but non-intuitive architecture for large applications. Both libraries are actually so good that I'd recommend them for non-cross-platform programming too.

A: Are we talking GUI widgets? If so, I can suggest 3:
FLTK: http://www.fltk.org/
GTK: http://www.gtk.org/
QT: http://trolltech.com/products/qt/

A: As with the other posters, I strongly recommend looking at C++ toolkits. GTK will work on Windows and the Mac OS, but will only give you truly good results on Linux. And even some of the GTK maintainers are inventing their own object-oriented C dialect to avoid writing GUIs against the native GTK API. As for C++, it depends on what you want. Ease of development? Native GUIs on every platform? Commercial support? If you want native-looking GUIs on Win32 and Linux (and something semi-reasonable on the Mac), one excellent choice is wxWidgets. Here's a longer article with real-world wxWidgets experiences. The Mac port has improved substantially since 2002, when that article was written, but it still has some soft spots.

A: I don't know of any I've personally used with a C API, but wxWidgets is C++. It runs on Windows, Linux, and Mac OS X. And if you're looking for easy, wxPython is a Python wrapper around wxWidgets and it is pretty easy to use.

A: The easiest to write a new program in would be the one you're most familiar with. The easiest to use, test or distribute would probably be the most cross-platform, most distributed or the most supported one, so GTK+/wx/Qt/Tk? Note that C itself isn't a particularly easy language, especially with the growing object-oriented approach to GUIs. The easiest one to cook up a prototype in a scripting language and then convert to a compiled one might be any toolkit with a scripting language binding (PyGTK, wxPython, etc.). That being said, of the "big" ones, only GTK+ and Tk have C bindings. wxWidgets, Qt and FLTK were all written in C++ and don't have any C bindings as far as I know. I suggest you look into learning C++ and then comparing the available options. Coding in C++ might feel like coding in a scripting language, with great conveniences such as automatic pointers, utility classes and overloaded operators, non-invasive garbage collectors and easy-to-inherit parent classes all brought to your fingertips by the language itself and your widget toolkit. Then my personal suggestion would be wxWidgets: quite easy to use, better documented than gtkmm and "freer" than Qt.
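For scale, here is what a minimal cross-platform program looks like in Qt, one of the toolkits discussed above (illustrative only; it builds the same way on Windows, OS X and Linux):

#include <QApplication>
#include <QLabel>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    QLabel label("Hello, cross-platform world!");
    label.show();      // a native-looking window on each platform
    return app.exec(); // enter the event loop
}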
{ "language": "en", "url": "https://stackoverflow.com/questions/91616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Generating classes automatically from unit tests? I am looking for a tool that can take a unit test, like

IPerson p = new Person();
p.Name = "Sklivvz";
Assert.AreEqual("Sklivvz", p.Name);

and generate, automatically, the corresponding stub class and interface

interface IPerson // inferred from IPerson p = new Person();
{
    string Name
    {
        get; // inferred from Assert.AreEqual("Sklivvz", p.Name);
        set; // inferred from p.Name = "Sklivvz";
    }
}

class Person: IPerson // inferred from IPerson p = new Person();
{
    private string name; // inferred from p.Name = "Sklivvz";
    public string Name // inferred from p.Name = "Sklivvz";
    {
        get
        {
            return name; // inferred from Assert.AreEqual("Sklivvz", p.Name);
        }
        set
        {
            name = value; // inferred from p.Name = "Sklivvz";
        }
    }
    public Person() // inferred from IPerson p = new Person();
    {
    }
}

I know ReSharper and Visual Studio do some of these, but I need a complete tool -- command line or whatnot -- that automatically infers what needs to be done. If there is no such tool, how would you write it (e.g. extending ReSharper, from scratch, using which libraries)?

A: What you appear to need is a parser for your language (Java), and a name and type resolver. ("Symbol table builder"). After parsing the source text, a compiler usually has a name resolver, which tries to record the definition of names and their corresponding types, and a type checker, which verifies that each expression has a valid type. Normally the name/type resolver complains when it can't find a definition. What you want it to do is to find the "undefined" thing that is causing the problem, and infer a type for it. For IPerson p = new Person(); the name resolver knows that "Person" and "IPerson" aren't defined. If it were Foo p = new Bar(); there would be no clue that you wanted an interface, just that Foo is some kind of abstract parent of Bar (e.g., a class or an interface). So the decision as to which it is must be known to the tool ("whenever you find such a construct, assume Foo is an interface ..."). You could use a heuristic: IFoo and Foo means IFoo should be an interface, and somewhere somebody has to define Foo as a class realizing that interface. Once the tool has made this decision, it would need to update its symbol tables so that it can move on to other statements: For p.Name = "Sklivvz"; given that p must be an interface (by the previous inference), then Name must be a field member, and it appears its type is String from the assignment. With that, the statement: Assert.AreEqual("Sklivvz", p.Name); names and types resolve without further issue. The content of the IFoo and Foo entities is sort of up to you; you didn't have to use get and set but that's personal taste. This won't work so well when you have multiple entities in the same statement: x = p.a + p.b; We know a and b are likely fields, but you can't guess what numeric type, if indeed they are numeric, or if they are strings (this is legal for strings in Java, dunno about C#). For C++ you don't even know what "+" means; it might be an operator on the Bar class. So what you have to do is collect constraints, e.g., "a is some indefinite number or string", etc., and as the tool collects evidence, it narrows the set of possible constraints. (This works like those word problems: "Joe has seven sons. Jeff is taller than Sam. Harry can't hide behind Sam. ... who is Jeff's twin?" where you have to collect the evidence and remove the impossibilities). You also have to worry about the case where you end up with a contradiction.
You could rule out the p.a+p.b case, but then you can't write your unit tests with impunity. There are standard constraint solvers out there if you want impunity. (What a concept). OK, we have the ideas, now, can this be done in a practical way? The first part of this requires a parser and a bendable name and type resolver. You need a constraint solver or at least a "defined value flows to undefined value" operation (trivial constraint solver). Our DMS Software Reengineering Toolkit with its Java Front End could probably do this. DMS is a tool builder's tool, for people that want to build tools that process computer languages in arbitrary ways. (Think of "computing with program fragments rather than numbers"). DMS provides general purpose parsing machinery, and can build a tree for whatever front end it is given (e.g., Java, and there's a C# front end). The reason I chose Java is that our Java front end has all that name and type resolution machinery, and it is provided in source form so it can be bent. If you stuck to the trivial constraint solver, you could probably bend the Java name resolver to figure out the types. DMS will let you assemble trees that correspond to code fragments, and coalesce them into larger ones; as your tool collected facts for the symbol table, it could build the primitive trees. Somewhere, you have to decide you are done. How many unit tests does the tool have to see before it knows the entire interface? (I guess it eats all the ones you provide?). Once complete, it assembles the fragments for the various members and builds an AST for an interface; DMS can use its prettyprinter to convert that AST back into source code like you've shown. I suggest Java here because our Java front end has name and type resolution. Our C# front end does not. This is a "mere" matter of ambition; somebody has to write one, but that's quite a lot of work (at least it was for Java and I can't imagine C# is really different). But the idea works fine in principle using DMS. You could do this with some other infrastructure that gave you access to a parser and a bendable name and type resolver. That might not be so easy to get for C#; I suspect MS may give you a parser, and access to name and type resolution, but not any way to change that. Maybe Mono is the answer? You still need a way to generate code fragments and assemble them. You might try to do this by string hacking; my (long) experience with gluing program bits together is that if you do it with strings you eventually make a mess of it. You really want pieces that represent code fragments of known type, that can only be combined in ways the grammar allows; DMS does that, thus no mess.
A: It's amazing how no one really gave anything towards what you were asking. I don't know the answer, but I will give my thoughts on it. If I were to attempt to write something like this myself I would probably see about a ReSharper plugin. The reason I say that is because, as you stated, ReSharper can do it, but in individual steps. So I would write something that went line by line and applied the appropriate ReSharper creation methods chained together. Now by no means do I even know how to do this, as I have never built anything for ReSharper, but that is what I would try to do. It makes logical sense that it could be done. And if you do write up some code, PLEASE post it, as I could find that useful as well, being able to generate the entire skeleton in one step. Very useful.
A: If you plan to write your own implementation I would definitely suggest that you take a look at the NVelocity (C#) or Velocity (Java) template engines. I have used these in a code generator before and have found that they make the job a whole lot easier.
A: It's doable - at least in theory. What I would do is use something like csparser to parse the unit test (you cannot compile it, unfortunately) and then take it from there. The only problem I can see is that what you are doing is wrong in terms of methodology - it makes more sense to generate unit tests from entity classes (indeed, Visual Studio does precisely this) than doing it the other way around.
A: I think a real solution to this problem would be a very specialized parser. Since that's not so easy to do, I have a cheaper idea. Unfortunately, you'd have to change the way you write your tests (namely, just the creation of the object):

dynamic p = someFactory.Create("MyNamespace.Person");
p.Name = "Sklivvz";
Assert.AreEqual("Sklivvz", p.Name);

A factory object would be used. If it can find the named object, it will create it and return it (this is the normal test execution). If it doesn't find it, it will create a recording proxy (a DynamicObject) that will record all calls and at the end (maybe on tear down) could emit class files (maybe based on some templates) that reflect what it "saw" being called. Some disadvantages that I see:

* Need to run the code in "two" modes, which is annoying.
* In order for the proxy to "see" and record calls, they must be executed; so code in a catch block, for example, has to run.
* You have to change the way you create your object under test.
* You have to use dynamic; you'll lose compile-time safety in subsequent runs and it has a performance hit.

The only advantage that I see is that it's a lot cheaper to create than a specialized parser.
A: I like CodeRush from DevExpress. They have a huge customizable templating engine. And, best for me, there are no dialog boxes. They also have functionality to create methods, interfaces and classes from interfaces that do not exist.
A: Try looking at Pex, a Microsoft project on unit testing which is still under research: research.microsoft.com/en-us/projects/Pex/
A: I think what you are looking for is a fuzzing tool kit (https://en.wikipedia.org/wiki/Fuzz_testing). Although I've never used it, you might give Randoop.NET a chance to generate 'unit tests': http://randoop.codeplex.com/
A: I find that whenever I need a code generation tool like this, I am probably writing code that could be made a little bit more generic so I only need to write it once. In your example, those getters and setters don't seem to be adding any value to the code - in fact, it is really just asserting that the getter/setter mechanism in C# works. I would refrain from writing (or even using) such a tool before understanding what the motivations for writing these kinds of tests are. BTW, you might want to have a look at NBehave?
A: I use Rhino Mocks for this, when I just need a simple stub. http://www.ayende.com/wiki/Rhino+Mocks+-+Stubs.ashx
A: Visual Studio ships with some features that can be helpful for you here: Generate Method Stub. When you write a call to a method that doesn't exist, you'll get a little smart tag on the method name, which you can use to generate a method stub based on the parameters you're passing. If you're a keyboard person (I am), then right after typing the close parenthesis, you can do:
* Ctrl-. (to open the smart tag)
* ENTER (to generate the stub)
* F12 (go to definition, to take you to the new method)

The smart tag only appears if the IDE thinks there isn't a method that matches. If you want to generate when the smart tag isn't up, you can go to Edit->Intellisense->Generate Method Stub.
Snippets. Small code templates that make it easy to generate bits of common code. Some are simple (try "if[TAB][TAB]"). Some are complex ('switch' will generate cases for an enum). You can also write your own. For your case, try "class" and "prop". See also "How to change "Generate Method Stub" to throw NotImplementedException in VS?" for information on snippets in the context of GMS.
autoprops. Remember that properties can be much simpler:

public string Name { get; set; }

create class. In Solution Explorer, right-click on the project name or a subfolder, select Add->Class. Type the name of your new class. Hit ENTER. You'll get a class declaration in the right namespace, etc.
Implement interface. When you want a class to implement an interface, write the interface name part, activate the smart tag, and select either option to generate stubs for the interface members.
These aren't quite the 100% automated solution you're looking for, but I think it's a good mitigation.
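To make the recording-proxy idea above concrete, here is a minimal sketch (not the poster's actual code; it assumes C# 4 / .NET 4 or later for DynamicObject, and the class and member names are invented):

using System.Collections.Generic;
using System.Dynamic;

// Records every property set/get it sees, so a generator could later
// emit a matching class and interface from the Recorded dictionary.
public class RecordingProxy : DynamicObject
{
    public readonly Dictionary<string, object> Recorded =
        new Dictionary<string, object>();

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        Recorded[binder.Name] = value;   // remember the member and a sample value
        return true;
    }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        // return whatever was set, so Assert.AreEqual still passes in recording mode
        return Recorded.TryGetValue(binder.Name, out result);
    }
}

The factory described in that answer could return dynamic p = new RecordingProxy(); when the named type is missing, and dump Recorded to a template at tear-down.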
{ "language": "en", "url": "https://stackoverflow.com/questions/91617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Is there any point for interfaces in dynamic languages? In static languages like Java you need interfaces because otherwise the type system just won't let you do certain things. But in dynamic languages like PHP and Python you just take advantage of duck-typing. PHP supports interfaces. Ruby and Python don't have them. So you can clearly live happily without them. I've been mostly doing my work in PHP and have never really made use of the ability to define interfaces. When I need a set of classes to implement certain common interface, then I just describe it in documentation. So, what do you think? Aren't you better off without using interfaces in dynamic languages at all? A: Interfaces actually add some degree of dynamic lang-like flexibility to static languages that have them, like Java. They offer a way to query an object for which contracts it implements at runtime. That concept ports well into dynamic languages. Depending on your definition of the word "dynamic", of course, that even includes Objective-C, which makes use of Protocols pretty extensively in Cocoa. In Ruby you can ask whether an object responds to a given method name. But that's a pretty weak guarantee that it's going to do what you want, especially given how few words get used over and over, that the full method signature isn't taken into account, etc. In Ruby I might ask object.respond_to? :sync So, yeah, it has a method named "sync", whatever that means. In Objective-C I might ask something similar, i.e. "does this look/walk/quack like something that synchronizes?": [myObject respondsToSelector:@selector(sync)] Even better, at the cost of some verbosity, I can ask something more specific, i.e. "does this look/walk/quack like something that synchronizes to MobileMe?": [myObject respondsToSelector:@selector(sync:withMobileMeAccount:)] That's duck typing down to the species level. But to really ask an object whether it is promising to implement synchronization to MobileMe... [receiver conformsToProtocol:@protocol(MobileMeSynchronization)] Of course, you could implement protocols by just checking for the presence of a series of selectors that you consider the definition of a protocol/duck, and if they are specific enough. At which point the protocol is just an abbreviation for a big hunk of ugly responds_to? queries, and some very useful syntactic sugar for the compiler/IDE to use. Interfaces/protocols are another dimension of object metadata that can be used to implement dynamic behavior in the handling of those objects. In Java the compiler just happens to demand that sort of thing for normal method invocation. But even dynamic languages like Ruby, Python, Perl, etc. implement a notion of type that goes beyond just "what methods an object responds to". Hence the class keyword. Javascript is the only really commonly used language without that concept. If you've got classes, then interfaces make sense, too. It's admittedly more useful for more complicated libraries or class hierarchies than in most application code, but I think the concept is useful in any language. Also, somebody else mentioned mixins. Ruby mixins are a way to share code -- e.g., they relate to the implementation of a class. Interfaces/protocols are about the interface of a class or object. They can actually complement each other. You might have an interface which specifies a behavior, and one or more mixins which help an object to implement that behavior. 
Of course, I can't think of any languages which really have both as distinct first-class language features. In those with mixins, including the mixin usually implies the interface it implements.
A: If you do not have high security constraints (so nobody will access your data in a way you don't want) and you have good documentation or well trained coders (so they don't need the interpreter / compiler to tell them what to do), then no, it's useless. For most medium size projects, duck typing is all you need.
A: I was under the impression that Python doesn't have interfaces. As far as I'm aware, in Python you can't enforce a method to be implemented at compilation time, precisely because it is a dynamic language. There are interface libraries for Python but I haven't used any of them. Python also has mixins, so you could create an Interface class by defining a mixin and having pass for every method implementation, but that's not really giving you much value.
A: I think use of interfaces is determined more by how many people will be using your library. If it's just you, or a small team, then documentation and convention will be fine and requiring interfaces will be an impediment. If it's a public library then interfaces are much more useful because they constrain people to provide the right methods rather than just hint. So interfaces are definitely a valuable feature for writing public libraries and I suppose that lack (or at least de-emphasis) is one of the many reasons why dynamic languages are used more for apps and strongly-typed languages are used for big libraries.
A: Rene, please read my answer to the "Best Practices for Architecting Large Systems in a Dynamic Language" question here on StackOverflow. I discuss some benefits of giving away the freedom of dynamic languages to save development effort and to ease introducing new programmers to the project. Interfaces, when used properly, greatly contribute to writing reliable software.
A: In a language like PHP where a method call that doesn't exist results in a fatal error and takes the whole application down, then yes, interfaces make sense. In a language like Python where you can catch and handle invalid method calls, it doesn't.
A: Python 3000 will have Abstract Base Classes. Well worth a read.
A: I think of it more as a level of convenience. If you have a function which takes a "file-like" object and only calls a read() method on it, then it's inconvenient - even limiting - to force the user to implement some sort of File interface. It's just as easy to check if the object has a read method. But if your function expects a large set of methods, it's easier to check if the object supports an interface than to check for support of each individual method.
A: Yes, there is a point. If you don't explicitly use interfaces, your code still uses the object as though it implemented certain methods; it's just unclear what the unspoken interface is. If you define a function to accept an interface (in PHP, say) then it'll fail earlier, and the problem will be with the caller, not with the method doing the work. Generally failing earlier is a good rule of thumb to follow.
A: One use of the Java "interface" is to allow strongly-typed mixins in Java. You mix the proper superclass, plus any additional methods implemented to support the interface. Python has multiple inheritance, so it doesn't really need the interface contrivance to allow methods from multiple superclasses.
I, however, like some of the benefits of strong typing -- primarily, I'm a fan of early error detection. I try to use an "interface-like" abstract superclass definition.

class InterfaceLikeThing( object ):
    def __init__( self, arg ):
        self.attr= None
        self.otherAttr= arg
    def aMethod( self ):
        raise NotImplementedError
    def anotherMethod( self ):
        return NotImplemented

This formalizes the interface -- in a way. It doesn't provide absolute evidence for a subclass matching the expectations. However, if a subclass fails to implement a required method, my unit tests will fail with an obvious NotImplemented return value or NotImplementedError exception.
A: Well, first of all, it's right that Ruby does not have Interface as is, but it has mixins, which somehow take the best of both interfaces and abstract classes from other languages. The main goal of an interface is to ensure that your object SHALL implement ALL the methods present in the interface itself. Of course, interfaces are never mandatory; even in Java you could imagine working only with classes and using reflection to call methods when you don't know which kind of object you're manipulating, but it is error prone and should be discouraged in many ways.
A: Well, it would certainly be easier to check if a given object supported an entire interface, instead of just not crashing when you call the one or two methods you use in the initial method, for instance to add an object to an internal list. Duck typing has some of the benefits of interfaces, that is, ease of use everywhere, but the detection mechanism is still missing.
A: It's like saying you don't need explicit types in a dynamically-typed language. Why don't you make everything a "var" and document their types elsewhere? It's a restriction imposed on a programmer, by a programmer. It makes it harder for you to shoot yourself in the foot; gives you less room for error.
A: As a PHP programmer, the way I see it, an Interface is basically used as a contract. It lets you say that everything which uses this interface MUST implement a given set of functions. I dunno if that's all that useful, but I found it a bit of a stumbling block when trying to understand what Interfaces were all about.
A: If you felt you had to, you could implement a kind of interface with a function that compares an object's methods/attributes to a given signature. Here's a very basic example:

file_interface = ('read', 'readline', 'seek')

class InterfaceException(Exception):
    pass

def implements_interface(obj, interface):
    d = dir(obj)
    for item in interface:
        if item not in d:
            raise InterfaceException("%s not implemented." % item)
    return True

>>> import StringIO
>>> s = StringIO.StringIO()
>>> implements_interface(s, file_interface)
True
>>>
>>> fp = open('/tmp/123456.temp', 'a')
>>> implements_interface(fp, file_interface)
True
>>> fp.close()
>>>
>>> d = {}
>>> implements_interface(d, file_interface)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 4, in implements_interface
__main__.InterfaceException: read not implemented.

Of course, that doesn't guarantee very much.
A: In addition to the other answers I just want to point out that Javascript has an instanceof keyword that will return true if the given instance is anywhere in a given object's prototype chain. This means that if you use your "interface object" in the prototype chain for your "implementation objects" (both are just plain objects to JS) then you can use instanceof to determine if it "implements" it.
This does not help the enforcement aspect, but it does help in the polymorphism aspect - which is one common use for interfaces. MDN instanceof Reference A: Stop trying to write Java in a dynamic language.
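To illustrate the instanceof approach from the answer above, a minimal sketch (the constructor and member names are invented):

// The "interface" is just a constructor whose prototype carries the contract
function Syncable() {}
Syncable.prototype.sync = function () { throw new Error("not implemented"); };

function MobileMeStore() {}
MobileMeStore.prototype = new Syncable();   // put Syncable in the prototype chain
MobileMeStore.prototype.sync = function () { /* real work */ };

var store = new MobileMeStore();
console.log(store instanceof Syncable);     // true -- it "implements" Syncable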
{ "language": "en", "url": "https://stackoverflow.com/questions/91618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: JBoss 4.2.3 and EJB 2 support Does anyone know if the JBoss 4.2.3 release that is compiled for Java 6 still supports EJB 2? I'm having issues where it can't cast a class to a certain interface; I never had this problem before and the code hasn't changed.
A: EJB2-style beans should still work in the 4.2 release. What interface do you want to cast to? Maybe that particular interface was renamed or moved. You should try not to use container-specific classes.
{ "language": "en", "url": "https://stackoverflow.com/questions/91627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: MS SQL Server 2005 - How to Auto Increment a field (not primary key) I would like to automatically increment a field named 'incrementID' anytime any field in any row within the table named 'tb_users' is updated. Currently I am doing it via the SQL update statement, i.e.

UPDATE tb_users SET name = @name, incrementID = incrementID + 1 ..... WHERE id = @id;

I'm wondering how I can do this automatically, for example, by changing the way SQL Server treats the field - kind of like the increment setting of 'Identity'. Before I update a row, I wish to check whether the incrementID of the object to be updated is different to the incrementID of the row in the db.
A: Columns in the table can have an Identity Specification set. Simply expand the node in the property window and fill in the details (Is Identity, Increment, Seed). The IDENTITYCOL keyword can be used for operations on Identity Specifications.
A: You could use a trigger for this (if I've read you correctly and you want the value incremented each time you update the row).
A: If you just need to know that it changed, rather than specifically that this is a later version or how many changes there have been, consider using a rowversion column.
A: This trigger should do the trick:

create trigger update_increment on tb_users
for update
as
if not update(incrementID)
    update tb_users
    set incrementID = tb_users.incrementID + 1
    from inserted
    where tb_users.id = inserted.id
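For the rowversion suggestion above, a minimal sketch (the column name is arbitrary, and on SQL Server 2005 the type keyword is spelled timestamp rather than rowversion). The engine bumps the value on every update, so you can compare it instead of maintaining a counter yourself:

ALTER TABLE tb_users ADD rv rowversion;

-- optimistic check: only update if the row hasn't changed since it was read
UPDATE tb_users
SET name = @name
WHERE id = @id AND rv = @originalRv;
-- if @@ROWCOUNT = 0, someone else changed the row first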
{ "language": "en", "url": "https://stackoverflow.com/questions/91628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: JavaScript RegEx Elements Match I'm trying to match elements with a name that is 'container1$container2$chkChecked', using a regex of '.+\$chkChecked', but I'm not getting the matches I expect when the element name is as described. What am I doing wrong?
A: Try string.match( /[$]chkChecked$/ ). Alternatively, you could try string.match( /[^a-zA-Z0-9]chkChecked/ ). (Also, make sure you're using // around your regex, otherwise you might be matching using string literals. Not obvious though without a larger code snippet.)
A: My guess, by your use of quotes, is you did something like

re = new RegExp('.+\$chkChecked');

which won't work, because JS takes advantage of the \ in its string interpretation as an escape, so it never makes it into the regex interpreter. Instead you want

re = new RegExp('.+\\$chkChecked');

A: There are two levels of escaping: one when your code is first parsed (e.g. in case you want to include a ' inside the string), and the second in the regexp engine. So you need two \s before the $ to make the regexp engine not treat it as a special character.
A: It looks like it should work. There's a good Javascript Regex Tester that also says it matches.
A: Steven Noble: "which won't work because js takes advantage of the \ in its string interpretation as an escape so it never makes it into the regex interpreter" - I intended to use \ as an escape because I'm really looking for a $ in the element name.
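Putting the answers together, a quick sketch you can run in a browser console (the element name is taken from the question):

var name = 'container1$container2$chkChecked';

// Regex literal: one backslash is enough
console.log(/.+\$chkChecked/.test(name));               // true

// String form: the backslash must be doubled to survive string parsing
console.log(new RegExp('.+\\$chkChecked').test(name));  // true
console.log(new RegExp('.+\$chkChecked').test(name));   // false: '\$' is just '$', leaving an end anchor mid-pattern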
{ "language": "en", "url": "https://stackoverflow.com/questions/91629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: PostSharp - IL weaving - thoughts I am considering using the PostSharp framework to ease the burden of application method logging. It basically allows me to adorn methods with a logging attribute and at compile time injects the logging code needed into the IL. I like this solution as it keeps the noise out of the design-time code environment. Any thoughts, experiences or better alternatives?
A: I apply logging with AOP using Castle Windsor DynamicProxies. I was already using Castle for its IoC container, so using it for AOP was the path of least resistance for me. If you want more info let me know, I'm in the process of tidying the code up for releasing it as a blog post.
Edit Ok, here's the basic interceptor code, fairly basic but it does everything I need. There are two interceptors, one logs everything and the other allows you to define method names to allow for more fine-grained logging. This solution is fairly dependent on Castle Windsor.
Abstract base class

namespace Tools.CastleWindsor.Interceptors
{
    using System;
    using System.Text;
    using Castle.Core.Interceptor;
    using Castle.Core.Logging;

    public abstract class AbstractLoggingInterceptor : IInterceptor
    {
        protected readonly ILoggerFactory logFactory;

        protected AbstractLoggingInterceptor(ILoggerFactory logFactory)
        {
            this.logFactory = logFactory;
        }

        public virtual void Intercept(IInvocation invocation)
        {
            ILogger logger = logFactory.Create(invocation.TargetType);
            try
            {
                StringBuilder sb = null;
                if (logger.IsDebugEnabled)
                {
                    sb = new StringBuilder(invocation.TargetType.FullName).AppendFormat(".{0}(", invocation.Method);
                    for (int i = 0; i < invocation.Arguments.Length; i++)
                    {
                        if (i > 0)
                            sb.Append(", ");
                        sb.Append(invocation.Arguments[i]);
                    }
                    sb.Append(")");
                    logger.Debug(sb.ToString());
                }

                invocation.Proceed();

                if (logger.IsDebugEnabled && invocation.ReturnValue != null)
                {
                    logger.Debug("Result of " + sb + " is: " + invocation.ReturnValue);
                }
            }
            catch (Exception e)
            {
                logger.Error(string.Empty, e);
                throw;
            }
        }
    }
}

Full logging implementation

namespace Tools.CastleWindsor.Interceptors
{
    using Castle.Core.Logging;

    public class LoggingInterceptor : AbstractLoggingInterceptor
    {
        public LoggingInterceptor(ILoggerFactory logFactory) : base(logFactory)
        {
        }
    }
}

Method logging

namespace Tools.CastleWindsor.Interceptors
{
    using Castle.Core.Interceptor;
    using Castle.Core.Logging;
    using System.Linq;

    public class MethodLoggingInterceptor : AbstractLoggingInterceptor
    {
        private readonly string[] methodNames;

        public MethodLoggingInterceptor(string[] methodNames, ILoggerFactory logFactory) : base(logFactory)
        {
            this.methodNames = methodNames;
        }

        public override void Intercept(IInvocation invocation)
        {
            if ( methodNames.Contains(invocation.Method.Name) )
                base.Intercept(invocation);
        }
    }
}

A: +1 on PostSharp. Have been using it for several things (including some attempts at adding preconditions and postconditions to C# code) and don't know how I'd make it without it...
A: It depends to an extent on how long you'll be developing and supporting the project for. Sure, IL weaving is a nice technology, but what happens if the IL and/or assembly metadata format changes again (as it did between 1.1 and 2.0) and those changes make the tool incompatible with the new format. If you depend on the tool then it prevents you from upgrading your technology until the tool supports it. With no guarantees in place about this (or even that development will continue, though it does seem likely) then I'd be very wary about using it on a long term project. Short term, no problem though.
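For completeness, a sketch of wiring the interceptor above straight through Castle DynamicProxy, bypassing the container (the IOrderService/OrderService names are invented; ConsoleFactory is one of Castle's ILoggerFactory implementations, and in the blog-post setup this would instead be registered with the Windsor container):

var generator = new ProxyGenerator();
var interceptor = new LoggingInterceptor(new ConsoleFactory());

// Every call through the proxy is logged on entry; results and
// exceptions are logged on the way out by the base interceptor.
IOrderService service = generator.CreateInterfaceProxyWithTarget<IOrderService>(
    new OrderService(), interceptor);

service.PlaceOrder(42);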
{ "language": "en", "url": "https://stackoverflow.com/questions/91635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: NI CVI with Python I'd like to integrate a Python IDLE-esque command prompt interface into an existing NI-CVI (LabWindows) application. I've tried to follow the Python.org discussions but seem to get lost in the details. Is there a resource out there for dummies like me?
A: Here is some Python sample code calling a CVI library. There are DAQmx Python bindings too.
{ "language": "en", "url": "https://stackoverflow.com/questions/91666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the best way to add users to multiple groups in a database? In an application where users can belong to multiple groups, I'm currently storing their groups in a column called groups as a binary. Every four bytes is a 32 bit integer which is the GroupID. However, this means that to enumerate all the users in a group I have to programmatically select all users, and manually find out if they contain that group. Another method was to use a unicode string, where each character is the integer denoting a group, and this makes searching easy, but is a bit of a fudge. Another method is to create a separate table, linking users to groups. One column called UserID and another called GroupID. Which of these ways would be the best to do it? Or is there a better way?
A: I'd definitely go for the separate table - certainly the best relational view of data. If you have indexes on both UserID and GroupID you have a quick way of getting users per group and groups per user.
A: You have a many-to-many relationship between users and groups. This calls for a separate table to combine users with groups:

User: (UserId[PrimaryKey], UserName etc.)
Group: (GroupId[PrimaryKey], GroupName etc.)
UserInGroup: (UserId[ForeignKey], GroupId[ForeignKey])

To find all users in a given group, you just say:

select * from User
join UserInGroup on User.UserId = UserInGroup.UserId
where GroupId = <the GroupId you want>

Rule of thumb: If you feel like you need to encode multiple values in the same field, you probably need a foreign key to a separate table. Your tricks with byte-blocks or Unicode chars are just clever tricks to encode multiple values in one field. Database design should not use clever tricks - save that for application code ;-)
A: The more standard, usable and comprehensible way is the join table. It's easily supported by many ORMs, in addition to being reasonably performant for most cases. Only resort to "clever" ways if you have a reason to, say a million users and having to answer that question every half a second.
A: I would make 3 tables: users, groups and usersgroups, where usersgroups is used as a cross-reference table to link users and groups. In the usersgroups table I would add userId and groupId columns and make them the primary key. BTW, what naming conventions are there for those xref tables?
A: It depends what you're trying to do, but if your database supports it, you might consider using roles. The advantage of this is that the database provides security around roles, and you don't have to create any tables.
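A minimal sketch of the join-table schema described above (table/column names and types are assumed):

CREATE TABLE Users (
    UserId   INT         NOT NULL PRIMARY KEY,
    UserName VARCHAR(50) NOT NULL
);

CREATE TABLE Groups (
    GroupId   INT         NOT NULL PRIMARY KEY,
    GroupName VARCHAR(50) NOT NULL
);

CREATE TABLE UserInGroup (
    UserId  INT NOT NULL REFERENCES Users (UserId),
    GroupId INT NOT NULL REFERENCES Groups (GroupId),
    PRIMARY KEY (UserId, GroupId)  -- one row per membership, no duplicates
);

The composite primary key doubles as the index for users-per-group lookups; an extra index on (GroupId, UserId) covers the reverse direction.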
{ "language": "en", "url": "https://stackoverflow.com/questions/91672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How can I specify the local address on a java.net.URLConnection? My Tomcat instance is listening to multiple IP addresses, but I want to control which source IP address is used when opening a URLConnection. How can I specify this?
A: This should do the trick:

URL url = new URL(yourUrlHere);
Proxy proxy = new Proxy(Proxy.Type.DIRECT,
    new InetSocketAddress(
        InetAddress.getByAddress(
            new byte[]{your, ip, interface, here}),
        yourTcpPortHere));
URLConnection conn = url.openConnection(proxy);

And you are done. Don't forget to handle exceptions nicely and of course change the values to suit your scenario. Ah, and I omitted the import statements.
A: Using the Apache commons HttpClient I have also found the following to work (removed try/catch for clarity):

HostConfiguration hostConfiguration = new HostConfiguration();
byte b[] = new byte[4];
b[0] = new Integer(192).byteValue();
b[1] = new Integer(168).byteValue();
b[2] = new Integer(1).byteValue();
b[3] = new Integer(11).byteValue();
hostConfiguration.setLocalAddress(InetAddress.getByAddress(b));

HttpClient client = new HttpClient();
client.setHostConfiguration(hostConfiguration);

GetMethod method = new GetMethod("http://remoteserver/");
method.getParams().setParameter(HttpMethodParams.RETRY_HANDLER,
    new DefaultHttpMethodRetryHandler(3, false));

int statusCode = client.executeMethod(method);
if (statusCode != HttpStatus.SC_OK) {
    System.err.println("Method failed: " + method.getStatusLine());
}
byte[] responseBody = method.getResponseBody();
System.out.println(new String(responseBody));

However, I still wonder what would happen if the gateway of the IP is down (192.168.1.11 in this case). Will the next gateway be tried or will it fail?
A: The obvious portable way would be to set a Proxy in URL.openConnection. The proxy can be on the local host; you can then write a very simple proxy that binds the local address of the client socket. If you can't modify the source where the URL is connected, you can replace the URLStreamHandler either when calling the URL constructor or globally through URL.setURLStreamHandlerFactory. The URLStreamHandler can then delegate to the default http/https handler, modifying the openConnection call. A more extreme method would be to completely replace the handler (perhaps extending the implementation in your JRE). Alternatively, alternative (open source) http clients are available.
A: Manually setting the socket works fine ...

private HttpsURLConnection openConnection(URL src, URL dest, SSLContext sslContext) throws IOException, ProtocolException {
    HttpsURLConnection connection = (HttpsURLConnection) dest.openConnection();
    HttpsHostNameVerifier httpsHostNameVerifier = new HttpsHostNameVerifier();
    connection.setHostnameVerifier(httpsHostNameVerifier);
    connection.setConnectTimeout(CONNECT_TIMEOUT);
    connection.setReadTimeout(READ_TIMEOUT);
    connection.setRequestMethod(POST_METHOD);
    connection.setRequestProperty(CONTENT_TYPE, SoapConstants.CONTENT_TYPE_HEADER);
    connection.setDoOutput(true);
    connection.setDoInput(true);
    connection.setSSLSocketFactory(sslContext.getSocketFactory());

    if ( src != null ) {
        InetAddress inetAddress = InetAddress.getByName(src.getHost());
        int destPort = dest.getPort();
        if ( destPort <= 0 )
            destPort = SERVER_HTTPS_PORT;
        int srcPort = src.getPort();
        if ( srcPort <= 0 )
            srcPort = CLIENT_HTTPS_PORT;
        connectionSocket = connection.getSSLSocketFactory().createSocket(dest.getHost(), destPort, inetAddress, srcPort);
    }

    connection.connect();
    return connection;
}
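As a side note, the byte-array construction in the HttpClient answer above can be replaced with InetAddress.getByName. A sketch assuming the same Commons HttpClient 3.x API and a hypothetical local interface address:

HostConfiguration hostConfiguration = new HostConfiguration();
hostConfiguration.setLocalAddress(InetAddress.getByName("192.168.1.11")); // hypothetical local interface

HttpClient client = new HttpClient();
client.setHostConfiguration(hostConfiguration);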
{ "language": "en", "url": "https://stackoverflow.com/questions/91678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How do you implement unit-testing in large scale C++ projects? I believe strongly in using unit-tests as part of building large multi-platform applications. We currently are planning on having our unit-tests within a separate project. This has the benefit of keeping our code base clean. I think, however, that this would separate the test code from the implementation of the unit. What do you think of this approach and are there any tools like JUnit for C++ applications?
A: You should separate your base code into a shared (dynamic) library and then write the major part of your unit tests for this library. Two years ago (2008) I was involved in the large LSB Infrastructure project deployed by The Linux Foundation. One of the aims of this project was to write unit tests for 40,000 functions from the Linux core libraries. In the context of this project we created the AZOV technology and the basic tool named API Sanity Autotest in order to automatically generate all the tests. You may try to use this tool to generate unit tests for your base library (or libraries).
A: There are many unit test frameworks for C++. CppUnit is certainly not the one I would choose (at least in its stable version 1.x, as it lacks many tests, and requires a lot of redundant lines of code). So far, my preferred framework is CxxTest, and I plan on evaluating Fructose some day. Anyway, there are a few "papers" that evaluate C++ unit test frameworks:

* Exploring the C++ Unit Testing Framework Jungle, by Noel Llopis
* an article in Overload Journal #78

A: I think you're on the right path with unit testing and it's a great plan to improve the reliability of your product. Unit testing is not, though, going to solve all your problems when converting your application to different platforms or even different operating systems. The reason for this is the process unit testing goes through to uncover bugs in your application. It simply throws as many inputs as imaginable into your system and waits for a result on the other end. It's like getting a monkey to constantly pound at the keyboard and observing the results (beta testers). To take it to the next step, with good unit testing you need to focus on the internal design of your application. The best approach I found was to use a design pattern or design process called "contract programming" or "Design by Contract". The other book that is very helpful for building reliability into your core design was Debugging the Development Process: Practical Strategies for Staying Focused, Hitting Ship Dates, and Building Solid Teams. In our development team, we looked very closely at what we consider to be a programmer error, developer error, design error and how we could use both unit testing and DbC, along with the advice of Debugging the Development Process, to build reliability into our software package.
A: I use UnitTest++. The tests are in a separate project but the actual tests are intertwined with the actual code. They exist in a folder under the section under test, i.e.:

MyProject\src\ <- source of the actual app
MyProject\src\tests <- the source of the tests

If you have nested folders (and who doesn't) then they too will have their own \tests subdirectory.
A: That's a reasonable approach. I've had very good results both with UnitTest++ and Boost.Test. I've looked at CppUnit, but to me, it felt more like a translation of the JUnit stuff than something aimed at C++. Update: These days I prefer using Catch. I found it to be effective and simple to use.
A: CppUnit is a direct equivalent of JUnit for C++ applications: http://cppunit.sourceforge.net/cppunit-wiki Personally, I created the unit tests in a different project, and created a separate build configuration which built all the unit tests and dependent source code. In some cases I wanted to test private member functions of a class, so I made the test class a friend class to the object to be tested, but hid the friend declarations when building in "non-test" configurations through preprocessor declarations. I ended up doing these coding gymnastics as I was integrating tests into legacy code, however. If you are starting out with the purpose of unit testing, a better design may make this simpler.
A: You can create a unit test project for each library in your source tree in a subdirectory of that library. You end up with a test driver application for each library, which makes it easier to run a single suite of tests. By putting them in a subdirectory, it keeps your code base clean, but also keeps the tests close to the code. Scripts can easily be written to run all of the test suites in your source tree and collect the results. I've been using a customized version of the original CppUnit for years with great success, but there are other alternatives now. GoogleTest looks interesting.
A: CxxTest is also worth a look as a lightweight, easy to use, cross-platform JUnit/CppUnit/xUnit-like framework for C++. We find it very straightforward to add and develop tests. Aeryn is another C++ testing framework worth looking at.
A: TUT (http://tut-framework.sourceforge.net/) is very simple: header-only, no macros. It can generate XML results.
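For anyone comparing the frameworks mentioned, a minimal UnitTest++ example (the header path may vary by install, and the tested function is made up):

#include "UnitTest++.h"  // sometimes <UnitTest++/UnitTest++.h>

int Add(int a, int b) { return a + b; }  // toy function under test

TEST(AddSumsItsArguments)
{
    CHECK_EQUAL(5, Add(2, 3));
    CHECK(Add(-1, 1) == 0);
}

int main()
{
    return UnitTest::RunAllTests();  // returns the number of failed tests
}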
{ "language": "en", "url": "https://stackoverflow.com/questions/91683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43" }
Q: What are the differences between a clustered and a non-clustered index? What are the differences between a clustered and a non-clustered index? A: Clustered indexes physically order the data on the disk. This means no extra data is needed for the index, but there can be only one clustered index (obviously). Accessing data using a clustered index is fastest. All other indexes must be non-clustered. A non-clustered index has a duplicate of the data from the indexed columns kept ordered together with pointers to the actual data rows (pointers to the clustered index if there is one). This means that accessing data through a non-clustered index has to go through an extra layer of indirection. However if you select only the data that's available in the indexed columns you can get the data back directly from the duplicated index data (that's why it's a good idea to SELECT only the columns that you need and not use *) A: A clustered index actually describes the order in which records are physically stored on the disk, hence the reason you can only have one. A Non-Clustered Index defines a logical order that does not match the physical order on disk. A: An indexed database has two parts: a set of physical records, which are arranged in some arbitrary order, and a set of indexes which identify the sequence in which records should be read to yield a result sorted by some criterion. If there is no correlation between the physical arrangement and the index, then reading out all the records in order may require making lots of independent single-record read operations. Because a database may be able to read dozens of consecutive records in less time than it would take to read two non-consecutive records, performance may be improved if records which are consecutive in the index are also stored consecutively on disk. Specifying that an index is clustered will cause the database to make some effort (different databases differ as to how much) to arrange things so that groups of records which are consecutive in the index will be consecutive on disk. For example, if one were to start with an empty non-clustered database and add 10,000 records in random sequence, the records would likely be added at the end in the order they were added. Reading out the database in order by the index would require 10,000 one-record reads. If one were to use a clustered database, however, the system might check when adding each record whether the previous record was stored by itself; if it found that to be the case, it might write that record with the new one at the end of the database. It could then look at the physical record before the slots where the moved records used to reside and see if the record that followed that was stored by itself. If it found that to be the case, it could move that record to that spot. Using this sort of approach would cause many records to be grouped together in pairs, thus potentially nearly doubling sequential read speed. In reality, clustered databases use more sophisticated algorithms than this. A key thing to note, though, is that there is a tradeoff between the time required to update the database and the time required to read it sequentially. Maintaining a clustered database will significantly increase the amount of work required to add, remove, or update records in any way that would affect the sorting sequence. If the database will be read sequentially much more often than it will be updated, clustering can be a big win. 
If it will be updated often but seldom read out in sequence, clustering can be a big performance drain, especially if the sequence in which items are added to the database is independent of their sort order with regard to the clustered index.
A: A clustered index is essentially a sorted copy of the data in the indexed columns. The main advantage of a clustered index is that when your query (seek) locates the data in the index then no additional IO is needed to retrieve that data. The overhead of maintaining a clustered index, especially in a frequently updated table, can lead to poor performance and for that reason it may be preferable to create a non-clustered index.
A: You might have gone through the theory in the above posts:
* The clustered index, as we can see, points directly to the record, i.e. access is direct, so a search takes less time. Additionally, it does not take any extra memory/space to store the index.
* A non-clustered index points indirectly, via the clustered index, to the actual record; due to this indirection it takes somewhat more time to access. It also needs its own memory/space to store the index.
A: Clustered indexes are stored physically on the table. This means they are the fastest and you can only have one clustered index per table. Non-clustered indexes are stored separately, and you can have as many as you want. The best option is to set your clustered index on the most used unique column, usually the PK. You should always have a well selected clustered index in your tables, unless a very compelling reason--can't think of a single one, but hey, it may be out there--for not doing so comes up.
A: Clustered Index
* There can be only one clustered index for a table.
* Usually made on the primary key.
* The leaf nodes of a clustered index contain the data pages.
Non-Clustered Index
* There can be only 249 non-clustered indexes for a table (up to SQL Server 2005; later versions support up to 999 non-clustered indexes).
* Usually made on any key.
* The leaf nodes of a nonclustered index do not consist of the data pages. Instead, the leaf nodes contain index rows.
A: Clustered Index
* Only one per table
* Faster to read than non-clustered as data is physically stored in index order
Non-Clustered Index
* Can be used many times per table
* Quicker for insert and update operations than a clustered index
Both types of index will improve performance when selecting data with fields that use the index but will slow down update and insert operations. Because of the slower insert and update, clustered indexes should be set on a field that is normally incremental, i.e. Id or Timestamp. SQL Server will normally only use an index if its selectivity is above 95%.
A: Clustered Index
* Only one clustered index can be there in a table
* Sorts the records and stores them physically according to the order
* Data retrieval is faster than non-clustered indexes
* Does not need extra space to store the logical structure
Non-Clustered Index
* There can be any number of non-clustered indexes in a table
* Does not affect the physical order. Creates a logical order for data rows and uses pointers to the physical data files
* Data insertion/update is faster than with a clustered index
* Uses extra space to store the logical structure
Apart from these differences you have to know that when a table is non-clustered (when the table doesn't have a clustered index) the data files are unordered and it uses a heap data structure.
A: Pros: Clustered indexes work great for ranges (e.g.
select * from my_table where my_key between @min and @max). In some conditions, the DBMS will not have to do work to sort if you use an ORDER BY clause. Cons: Clustered indexes can slow down inserts because the physical layout of the records has to be modified as records are put in, if the new keys are not in sequential order.
A: Clustered basically means that the data is in that physical order in the table. This is why you can have only one per table. Unclustered means it's "only" a logical order.
A: // Copied from MSDN, the second point of non-clustered index is not clearly mentioned in the other answers.
Clustered
* Clustered indexes sort and store the data rows in the table or view based on their key values. These are the columns included in the index definition. There can be only one clustered index per table, because the data rows themselves can be stored in only one order.
* The only time the data rows in a table are stored in sorted order is when the table contains a clustered index. When a table has a clustered index, the table is called a clustered table. If a table has no clustered index, its data rows are stored in an unordered structure called a heap.
Nonclustered
* Nonclustered indexes have a structure separate from the data rows. A nonclustered index contains the nonclustered index key values and each key value entry has a pointer to the data row that contains the key value.
* The pointer from an index row in a nonclustered index to a data row is called a row locator. The structure of the row locator depends on whether the data pages are stored in a heap or a clustered table. For a heap, a row locator is a pointer to the row. For a clustered table, the row locator is the clustered index key.
A: Clustered Indexes
* Clustered indexes are faster for retrieval and slower for insertion and update.
* A table can have only one clustered index.
* Don't require extra space to store the logical structure.
* Determine the order in which the data is stored on the disk.
Non-Clustered Indexes
* Non-clustered indexes are slower in retrieving data and faster in insertion and update.
* A table can have multiple non-clustered indexes.
* Require extra space to store the logical structure.
* Have no effect on the order in which the data is stored on the disk.
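To make the terminology concrete, a T-SQL sketch using an invented table:

CREATE TABLE Orders (
    OrderId    INT      NOT NULL,
    CustomerId INT      NOT NULL,
    OrderDate  DATETIME NOT NULL
);

-- one per table: rows are physically ordered by OrderId
CREATE CLUSTERED INDEX CIX_Orders_OrderId ON Orders (OrderId);

-- as many as you like: a separate structure pointing back at the rows
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId ON Orders (CustomerId);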
{ "language": "en", "url": "https://stackoverflow.com/questions/91688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "305" }
Q: JSTL/JSP EL (Expression Language) in a non JSP (standalone) context Can anyone recommend a framework for templating/formatting messages in a standalone application along the lines of the JSP EL (Expression Language)? I would expect to be able to instantiate an object of some sort, give it a template along the lines of

Dear ${customer.firstName}. Your order will be dispatched on ${order.estimatedDispatchDate}

provide it with a context which would include a value dictionary of parameter objects (in this case an object of type Customer with a name 'customer', say, and an object of type Order with a name 'order'). I know there are many template frameworks out there - many of which work outside the web application context, but I do not see this as needing a big heavyweight templating framework. Just a better version of the basic MessageFormat functionality Java already provides. For example, I can accomplish the above with java.text.MessageFormat by using a template (or a 'pattern' as they call it) such as

Dear {0}. Your order will be dispatched on {1,date,EEE dd MMM yyyy}

and I can pass it an Object array, in my calling Java program

new Object[] { customer.getFirstName(), order.getEstimatedDispatchDate() };

However, in this usage, the code and the pattern are intimately linked. While I could put the pattern in a resource properties file, the code and the pattern need to know intimate details about each other. With an EL-like system, the contract between the code and the pattern would be at a much higher level (e.g. customer and order, rather than customer.firstName and order.estimatedDispatchDate), making it easier to change the structure, order and contents of the message without changing any code.
A: StringTemplate is a more lightweight alternative to Velocity and Freemarker.
A: I would recommend looking into Apache Velocity. It is quite simple and lightweight. We are currently using it for our e-mail templates, and it works very well.
A: The idea of using EL itself outside of Java EE was advocated by Ed Burns and discussed on The Server Side. Tomcat's implementation ships in a separate JAR but I don't know if it can be used outside the server.
A: You can use Casper, which is very similar to JSP and easy to use: Casper
A: I would go for the Spring Expression Language: http://docs.spring.io/spring/docs/current/spring-framework-reference/html/expressions.html A few examples which demonstrate the power (the first two are from the documentation):

int year = (Integer) parser.parseExpression("Birthdate.Year + 1900").getValue(context);
String city = (String) parser.parseExpression("placeOfBirth.City").getValue(context);
// weekday is a String, e.g. "Mon", time is an int, e.g. 1400 or 900
{"Thu", "Fri"}.contains(weekday) and time matches '\d{4}'

Expressions can also use object properties:

public class Data {
    private String name;
    // getter and setter omitted
}

Data data = new Data();
data.setName("John Doe");

ExpressionParser p = new SpelExpressionParser();
Expression e = p.parseExpression("name == 'John Doe'");
Boolean r = (Boolean) e.getValue(data); // will return true

e = p.parseExpression("'Hello ' + name + ', how are you ?'");
String text = e.getValue(data, String.class); // text will be "Hello John Doe, how are you ?"

A: You can just use the Universal Expression Language itself. You need an implementation (but there are a few to choose from). After that, you need to implement three classes: ELResolver, FunctionMapper and VariableMapper. This blog post describes how to do it: Java: using EL outside J2EE.
A: You might want to look at OGNL, which is the kind of library you are after. OGNL can be reasonably powerful, and is the expression language used in the WebWork web framework.
A: Re: Jasper and JUEL being built for 1.5: And then I discovered RetroTranslator (http://retrotranslator.sourceforge.net/). Once retrotranslated, EL and Jasper work like a charm.
A: Freemarker would do exactly what you need. This is a template engine with a syntax very similar to JSP: http://freemarker.org/
A: Aah. Whereas with MessageFormat, I can do

Dear {0}. Your order will be dispatched on {1,date,EEE dd MMM yyyy}

where parameter #1 is a Date object and it gets formatted according to the pattern, there is no equivalent in EL. In JSP, I would have used, perhaps, a format tag. In this standalone example, I am going to have to format the Date as a String in my code prior to evaluating the expression.
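Tying this back to the original template, a sketch using the JUEL implementation of the Universal EL (class names are per JUEL's documentation; the Customer and Order beans are assumed to exist with the usual getters):

import javax.el.ExpressionFactory;
import javax.el.ValueExpression;
import de.odysseus.el.ExpressionFactoryImpl;
import de.odysseus.el.util.SimpleContext;

ExpressionFactory factory = new ExpressionFactoryImpl();
SimpleContext context = new SimpleContext();

// register the "value dictionary" of parameter objects
context.setVariable("customer",
    factory.createValueExpression(customer, Customer.class));
context.setVariable("order",
    factory.createValueExpression(order, Order.class));

ValueExpression expr = factory.createValueExpression(context,
    "Dear ${customer.firstName}. Your order will be dispatched on " +
    "${order.estimatedDispatchDate}", String.class);
String message = (String) expr.getValue(context);

As the last answer notes, the date would render via toString here, so it would still need formatting beforehand.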
{ "language": "en", "url": "https://stackoverflow.com/questions/91692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: PHP equivalent of Perl's 'use strict' (to require variables to be initialized before use) Python's convention is that variables are created by first assignment, and trying to read their value before one has been assigned raises an exception. PHP by contrast implicitly creates a variable when it is read, with a null value. This means it is easy to do this in PHP:

function mymodule_important_calculation() {
    $result = /* ... long and complex calculation ... */;
    return $resukt;
}

This function always returns null, and if null is a valid value for the function then the bug might go undetected for some time. The Python equivalent would complain that the variable resukt is being used before it is assigned. So... is there a way to configure PHP to be stricter with variable assignments?
A: There is no way to make it fail as far as I know, but with E_NOTICE in the error_reporting settings you can make it throw a warning (well, a notice :-) but still a string you can search for).
A: PHP doesn't do much forward checking of things at parse time. The best you can do is crank up the warning level to report your mistakes, but by the time you get an E_NOTICE, it's too late, and it's not possible to force E_NOTICEs to occur in advance yet. A lot of people are toting the "error_reporting E_STRICT" flag, but it's still a retroactive warning, and won't protect you from bad code mistakes like you posted. This gem turned up on the php-dev mailing-list this week and I think it's just the tool you want. It's more a lint-checker, but it adds scope to the current lint checking PHP does. PHP-Initialized Google Project There's the hope that with a bit of attention we can get this behaviour implemented in PHP itself. So put your 2-cents on the PHP mailing list / bug system / feature requests and see if we can encourage its integration.
A: Check out error reporting, http://php.net/manual/en/function.error-reporting.php What you want is probably E_STRICT. Just bear in mind that PHP has no namespaces, and error reporting becomes global. Kind of sucks to be you if you use a 3rd party library from developers that did not have error reporting switched on.
A: I'm pretty sure that it generates an error if the variable wasn't previously declared. If your installation isn't showing such errors, check the error_reporting() level in your php.ini file.
A: You can try to play with the error reporting level as indicated here: http://us3.php.net/error_reporting but I'm not sure it mentions the usage of uninitialized variables, even with E_STRICT.
A: There is something similar: in PHP you can change the error reporting level. It's a best practice to set it to maximum in a dev environment. To do so, add this in your php.ini:

error_reporting = E_ALL

Or you can just add this at the top of the file you are working on:

error_reporting(E_ALL);

This won't prevent your code from running but the lack of variable assignments will display a very clear error message in your browser.
A: If you use the "Analyze Code" feature on files, or your project, in Zend Studio it will warn you about any uninitialized variables (this actually helped find a ton of misspelled variables lurking in seldom used portions of the code just waiting to cause very difficult to detect errors). Perhaps someone could add that functionality to the PHP lint function (php -l), which currently only checks for syntax errors.
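If notices are not loud enough, one approach (a sketch, assuming PHP 5.1+ for ErrorException) is to promote them to exceptions with a custom error handler, which makes the bug in the question's example fail hard:

<?php
error_reporting(E_ALL);

// Turn every reported error/warning/notice into an exception
function strict_handler($errno, $errstr, $errfile, $errline) {
    throw new ErrorException($errstr, 0, $errno, $errfile, $errline);
}
set_error_handler('strict_handler');

function mymodule_important_calculation() {
    $result = 42;    // stand-in for the long calculation
    return $resukt;  // typo: now throws ErrorException instead of silently returning null
}

mymodule_important_calculation();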
{ "language": "en", "url": "https://stackoverflow.com/questions/91699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: End-to-End application testing from a user's standpoint I am looking for a good way to consistently test my web applications from the end user's point of view. I have all kinds of ways to check to make sure my code is working behind the scenes. I can't count the number of times that I make a change to a piece of code, test it and it works fine and then deploy it only to have it blow up somewhere else weeks later. I have by that time forgotten the change I made that caused it to blow up. I need something that I can run every time I make a change to assure me I did not break something somewhere else. It needs to be able to input correct and incorrect entries so that client-side validation can be tested also. Thank you, Scott and the Dev Team
A: I think you need to investigate Selenium. We use it to do automated UI testing throughout our solution, and it is cross-browser and cross-platform. You can use the Selenium IDE to record a walkthrough of your web application, and then you can either run it in the browser, or export it to various languages such as C#, and run it using NUnit. I find this is the easiest approach because I can create the basic walkthrough and then modify the code to use inputs from a file/database in order to create multiple scenarios.
A: Have you seen this technique using Fitnesse and Selenium? Can't vouch for how easy it is to set up; we've looked at Selenium a little and one of our test analysts is keen to integrate something like FIT/Fitnesse into our automated testing but we're not there yet.
A: Have a look at Seleno, which abstracts Selenium / browser interaction into C# Page Objects which represent the pages of your site. More details in this answer.
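To give a feel for the record-and-export workflow, here is roughly what a Selenium RC test exported to C#/NUnit looks like (a sketch; the URLs, locators and the asserted message are invented):

using NUnit.Framework;
using Selenium;

[TestFixture]
public class LoginPageTests
{
    private ISelenium selenium;

    [SetUp]
    public void Start()
    {
        // host/port of the Selenium RC server, browser, base URL
        selenium = new DefaultSelenium("localhost", 4444, "*firefox",
                                       "http://localhost/myapp/");
        selenium.Start();
    }

    [Test]
    public void RejectsInvalidEmail()
    {
        selenium.Open("/login");
        selenium.Type("id=email", "not-an-email");  // deliberately incorrect entry
        selenium.Click("id=submit");
        Assert.IsTrue(selenium.IsTextPresent("Please enter a valid email"));
    }

    [TearDown]
    public void Stop()
    {
        selenium.Stop();
    }
}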
{ "language": "en", "url": "https://stackoverflow.com/questions/91703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: When people talk about scaling a website with 'shards', what do they mean? I have heard the 'shard' technique mentioned several times with regard to solving scaling problems for large websites. What is this 'shard' technique and why is it so good? A: Karl Seguin has a good blog post about sharding. From the post: Sharding is the separation of your data across multiple servers. How you separate your data is up to you, but generally it's done on some fundamental identifier. A: In brief, imagine separating your users_tbl across several servers. So Users 1-5000 are on Server 1, Users 5001-10000 on Server 2, etc. If your data model is sufficiently abstract in code, it's often not a huge change in code. Of course this approach becomes difficult if all your queries are similar to "SELECT COUNT(*) FROM users_tbl GROUP BY userType", but when your WHERE clause is "WHERE userid = 5" then it makes more sense. A: As 'sharding' is part of the architecture principles for large websites, you may be interested in listening to 'eBay's Architecture Principles with Randy Shoup' here.
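A: To make the users_tbl example above concrete, here is a minimal sketch of range-based shard routing in C#. The ranges and connection strings are made up for illustration; real systems usually keep this mapping in configuration or a lookup service:

using System;

public static class ShardRouter
{
    // Map a contiguous range of user ids to the server holding those rows.
    private static readonly (int Min, int Max, string ConnectionString)[] Shards =
    {
        (1,     5000,  "Server=shard1;Database=Users"),
        (5001, 10000,  "Server=shard2;Database=Users"),
    };

    public static string ConnectionStringFor(int userId)
    {
        foreach (var shard in Shards)
            if (userId >= shard.Min && userId <= shard.Max)
                return shard.ConnectionString;
        throw new ArgumentOutOfRangeException(nameof(userId));
    }
}

// Usage: open a connection to ShardRouter.ConnectionStringFor(5)
// and run "WHERE userid = 5" against that single shard.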
{ "language": "en", "url": "https://stackoverflow.com/questions/91710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: C++ strings without the STL I've not used C++ very much in the past, and have recently been doing a lot of C#, and I'm really struggling to get back into the basics of C++ again. This is particularly tricky as work mandates that none of the most handy C++ constructs can be used, so all strings must be char *'s, and there is no provision for STL lists. What I'm currently trying to do is to create a list of strings, something which would take me no time at all using STL or in C#. Basically I want to have a function such as: char **registeredNames = new char*[numberOfNames]; Then, RegisterName(const char * const name, const int length) { //loop to see if name already registered snipped if(notFound) { registeredNames[lastIndex++] = name; } } or, if it was C#... if(!registeredNames.Contains(name)) { registeredNames.Add(name); } and I realize that it doesn't work. I know the const nature of the passed variables (a const pointer and a const string) makes it rather difficult, but my basic problem is that I've always avoided this situation in the past by using STL lists etc. so I've never had to work around it! A: There are legitimate reasons that STL might be avoided. When working in fixed environments where memory or speed is a premium, it's sometimes difficult to tell what is going on under the hood with STL. Yes, you can write your own memory allocators, and yes, speed generally isn't a problem, but there are differences between STL implementations across platforms, and those differences might be subtle and potentially buggy. Memory is perhaps my biggest concern when thinking about using it. Memory is precious, and how we use it needs to be tightly controlled. Unless you've been down this road, this concept might not make sense, but it's true. We do allow for STL usage in tools (outside of game code), but it's prohibited inside of the actual game. One other related problem is code size. I am slightly unsure of how much STL can contribute to executable size, but we've seen marked increases in code size when using STL. Even if your executable is "only" 2M bigger, that's 2M less RAM for something else for your game. STL is nice for sure. But it can be abused by programmers who don't know what they are doing. It's not intentional, but it can provide nasty surprises when you don't want to see them (again, memory bloat and performance issues) I'm sure that you are close with your solution. int i; for ( i = 0; i < lastIndex; i++ ) { if ( !strcmp(registeredNames[i], name) ) { break; // name was found } } if ( i == lastIndex ) { // name was not found in the registeredNames list registeredNames[lastIndex++] = strdup(name); } You might not want to use strdup. That's simply an example of how to store the name given your example. You might want to decide whether you allocate space for the new name yourself, or use some other memory construct that might already be available in your app. And please, don't write a string class. I have held up string classes as perhaps the prime example of how not to re-engineer a basic C construct in C++. Yes, the string class can hide lots of nifty details from you, but its memory usage patterns are terrible, and those don't fit well into a console (i.e. ps3 or 360, etc) environment. About 8 years ago we did the same thing. 200000+ memory allocations before we hit the main menu. Memory was terribly fragmented and we couldn't get the rest of the game to fit in the fixed environment. We wound up ripping it out.
Class design is great for some things, but this isn't one of them. This is an opinion, but it's based on real world experience. A: You'll probably need to use strcmp to see if the string is already stored: for (int index = 0; index < lastIndex; index++) { if (strcmp(registeredNames[index], name) == 0) { return; // Already registered } } Then if you really need to store a copy of the string, then you'll need to allocate a buffer and copy the characters over. char* nameCopy = (char*)malloc(length+1); strcpy(nameCopy, name); registeredNames[lastIndex++] = nameCopy; You didn't mention whether your input is NULL terminated - if not, then extra care is needed, and strcmp/strcpy won't be suitable. A: If portability is an issue, you may want to check out STLport. A: Why can't you use the STL? Anyway, I would suggest that you implement a simple string class and list templates of your own. That way you can use the same techniques as you normally would and keep the pointer and memory management confined to those classes. If you mimic the STL, it would be even better. A: If you really can't use the STL (and I regret believing that was true when I was in the games industry) then can you not create your own string class? The most basic of string classes would allocate memory on construction and assignment, and handle the delete in the destructor. Later you could add further functionality as you need it. Totally portable, and very easy to write and unit test. A: Edit: I guess I misunderstood your question. There is no constness problem in this code I'm aware of. I'm doing this from my head but it should be about right: static int lastIndex = 0; static const char **registeredNames = new const char*[numberOfNames]; void RegisterName(const char * const name) { bool found = false; //loop to see if name already registered snipped for (int i = 0; i < lastIndex; i++) { if (strcmp(name, registeredNames[i]) == 0) { found = true; break; } } if (!found) { registeredNames[lastIndex++] = name; } } A: Working with char* requires you to work with C functions. In your case, what you really need is to copy the strings around. To help you, you have the strndup function. Then you'll have to write something like: void RegisterName(const char* name) { // loop to see if name already registered snipped if(notFound) { registeredNames[lastIndex++] = strndup(name, MAX_STRING_LENGTH); } } This code supposes your array is big enough. Of course, the very best would be to properly implement your own string and array and list, ... or to convince your boss the STL is not evil anymore! A: Using: const char **registeredNames = new const char * [numberOfNames]; will allow you to assign a const char * const to an element of the array. Just out of curiosity, why does "work mandate that none of the most handy C++ constructs can be used"? A: I can understand why you can't use STL - most do bloat your code terribly. However there are implementations for games programmers by games programmers - RDESTL is one such library. A: If you are not worried about conventions and just want to get the job done, use realloc. I do this sort of thing for lists all of the time, it goes something like this: T** list = 0; unsigned int length = 0; T* AddItem(T Item) { list = (T**)realloc(list, sizeof(T*)*(length+1)); if(!list) return 0; list[length] = new T(Item); ++length; return list[length - 1]; } void CleanupList() { for(unsigned int i = 0; i < length; ++i) { delete list[i]; } free(list); } There is more you can do, e.g.
only realloc each time the list size doubles, functions for removing items from the list by index or by checking equality, make a template class for handling lists, etc... (I have one I wrote ages ago and always use myself... but sadly I am at work and can't just copy-paste it here). To be perfectly honest though, this will probably not outperform the STL equivalent, although it may equal its performance if you do a ton of work or have an especially poor implementation of STL. Annoyingly C++ is without an operator renew/resize to replace realloc, which would be very useful. Oh, and apologies if my code is error-ridden, I just pulled it out from memory. A: All the approaches suggested are valid; my point is: if the way C# does it is appealing, replicate it. Create your own classes/interfaces to present the same abstraction, i.e. a simple linked list class with methods Contains and Add; using the sample code provided by other answers, this should be relatively simple. One of the great things about C++ is that generally you can make it look and act the way you want; if another language has a great implementation of something, you can usually reproduce it. A: const correctness is still const correctness regardless of whether you use the STL or not. I believe what you are looking for is to make registeredNames a const char ** so that the assignment to registeredNames[i] (which is a const char *) works. Moreover, is this really what you want to be doing? It seems like making a copy of the string is probably more appropriate. What's more, you shouldn't be thinking about storing this in a list given the operation you are doing on it; a set would be better. A: I have used this String class for years. http://www.robertnz.net/string.htm It provides practically all the features of the STL string but is implemented as a true class, not a template, and does not use STL. A: This is a clear case of you get to roll your own. And do the same for a vector class. * *Do it with test-first programming. *Keep it simple. Avoid reference counting the string buffer if you are in an MT environment.
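A: Pulling the fragments above together, here is a minimal self-contained sketch of the register-if-absent idea. The fixed capacity, the names and the lack of error handling are purely illustrative:

#include <cstring>
#include <cstdlib>

const int kMaxNames = 100; // illustrative fixed capacity
static const char* registeredNames[kMaxNames];
static int lastIndex = 0;

// Stores a copy of name unless an equal string is already registered.
void RegisterName(const char* const name)
{
    for (int i = 0; i < lastIndex; ++i)
        if (std::strcmp(registeredNames[i], name) == 0)
            return; // already registered

    if (lastIndex < kMaxNames)
    {
        char* copy = static_cast<char*>(std::malloc(std::strlen(name) + 1));
        std::strcpy(copy, name);
        registeredNames[lastIndex++] = copy;
    }
}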
{ "language": "en", "url": "https://stackoverflow.com/questions/91715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Cross browser issue (iframe sizing) I am displaying pages from an external site (that I own) in an iframe in one of my pages. It's all fine except when viewed in Opera with the browser window size reduced (not widescreen), when the iframe shrinks and squashes the content. It works in widescreen (maximize browser window), and is OK in IE7, Firefox, Chrome and Safari in maximize and reduced window size. I have set the frame dimensions in the HTML and have nested the iframe in a div which is larger than the iframe via the CSS. Is this a bug peculiar to Opera, or is there something I can do about it? A: We had a similar issue with iframe sizing on our web app main page, although in IE6. The solution was to trap the window.onresize event and call a JavaScript function to appropriately size the iframe. content is the name of the iframe we want sized. Also note that we are using ASP.Net AJAX's $get, which translates to document.getElementById(); isDefined, HEADER_HEIGHT and CONTENT_PADDING are app-specific helpers and constants. window.onresize=resizeContentFrame; resizeContentFrame(); function resizeContentFrame() { setFrameHeight($get('content')); } function setFrameHeight(f) { if(isDefined(f)) { var h=document.documentElement.scrollHeight; h-=(HEADER_HEIGHT+CONTENT_PADDING+5); f.style.height=h+'px'; } }
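A: For anyone without ASP.NET AJAX, the same resize idea in plain JavaScript looks roughly like this; the element id "content" and the 120px header offset are placeholders for your own layout:

// Resize the iframe to fill the viewport below a fixed-height header.
function resizeContentFrame() {
    var frame = document.getElementById('content');
    if (frame) {
        var viewportHeight = document.documentElement.clientHeight;
        frame.style.height = (viewportHeight - 120) + 'px';
    }
}
window.onresize = resizeContentFrame;
resizeContentFrame();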
{ "language": "en", "url": "https://stackoverflow.com/questions/91721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: JIRA - Updating summary from post function How do you update a summary field's value from a post function in JIRA? A: There is an inbuilt post function that allows you to change the summary, though only to a hard-coded value. If you want to modify the current summary, you will need to create a custom post function as mentioned. If you have a commercial license, you should have access to the JIRA source. Check out the code in: src/java/com/atlassian/jira/workflow/function/issue/UpdateIssueFieldFunction.java A: Ok, I'll try to explain... For simplicity, suppose we have a post function and we want the summary value changed to "foobar": public class SomePostFunction implements FunctionProvider { public void execute(Map transientVars, Map args, PropertySet ps) throws WorkflowException { String newValue = "foobar"; // TODO update summary so its value becomes newValue } } A: How about something like: transientVars.get("issue").setSummary(newValue); Anyway, you should take a look here: http://confluence.atlassian.com/display/JIRA/Upgrading+Workflow+Plugins+for+JIRA+3.2
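A: Putting the two answers above together, a rough sketch of such a post function might look like this. Hedged: the cast to MutableIssue and the exact imports follow the JIRA 3.x plugin API as best I recall, so treat them as assumptions to verify against your JIRA version:

import java.util.Map;
import com.atlassian.jira.issue.MutableIssue;
import com.opensymphony.module.propertyset.PropertySet;
import com.opensymphony.workflow.FunctionProvider;
import com.opensymphony.workflow.WorkflowException;

public class UpdateSummaryPostFunction implements FunctionProvider {
    public void execute(Map transientVars, Map args, PropertySet ps)
            throws WorkflowException {
        // The workflow engine places the issue being transitioned in transientVars.
        MutableIssue issue = (MutableIssue) transientVars.get("issue");
        issue.setSummary("foobar"); // the new summary value
        // Depending on the JIRA version the change may still need persisting;
        // see UpdateIssueFieldFunction in the JIRA source for the full pattern.
    }
}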
{ "language": "en", "url": "https://stackoverflow.com/questions/91731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to transform a date-string in classic ASP I'm a little blockheaded right now… I have a date string in European format dd.mm.yyyy and need to transform it to mm.dd.yyyy with classic ASP. Any quick ideas? A: If it's always in that format you could use Split (note the argument order: the string first, then the delimiter): d = Split(dateString, ".") s = d(1) & "." & d(0) & "." & d(2) This would allow for dates like 1.2.99 as well. A: Dim arrParts() As String Dim theDate As Date arrParts = Split(strOldFormat, ".") theDate = DateTime.DateSerial(arrParts(2), arrParts(1), arrParts(0)) strNewFormat = Format(theDate, "mm.dd.yyyy") A: OK, I just found a solution myself: payment_date = MID(payment_date,4,3) & LEFT(payment_date,3) & MID(payment_date,7) A: This is a way to do it with a built-in sanity check for dates: Dim OldString, NewString OldString = "31.12.2008" Dim myRegExp Set myRegExp = New RegExp myRegExp.Global = True myRegExp.Pattern = "(0[1-9]|[12][0-9]|3[01])[- /.](0[1-9]|1[012])[- /.]((19|20)[0-9]{2})" If myRegExp.Test(OldString) Then NewString = myRegExp.Replace(OldString, "$2.$1.$3") Else ' A date of for instance 32 December would end up here NewString = "Invalid date" End If A: I have my own date manipulation functions which I use in all my apps, but it was originally based on this sample: http://www.adopenstatic.com/resources/code/formatdate.asp A: function MyDateFormat(mydate) 'format: YYYYMMDDHHMMSS MyDateFormat = year(mydate) & right("0" & month(mydate),2) & _ right("0" & day(mydate),2) & right("0" & hour(mydate),2) & _ right("0" & minute(mydate),2) & right("0" & second(mydate),2) end function response.write(MyDateFormat(Now)) shows: 20200623102805
{ "language": "en", "url": "https://stackoverflow.com/questions/91734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: DataGridViewComboBoxColumn adding different items to each row I am building a table using the DataGridView where a user can select items from a dropdown in each cell. To simplify the problem, let's say I have 1 column. I am using the DataGridViewComboBoxColumn in the designer. I am trying to support having each row in that column have a different list of items to choose from. Is this possible? A: private void dataGridView1_CellClick(object sender, DataGridViewCellEventArgs e) { if (e.ColumnIndex == DataGridViewComboBoxColumnNumber) { setCellComboBoxItems(myDataGridView, e.RowIndex, e.ColumnIndex, someObj); } } A: Yes. This can be done using the DataGridViewComboBoxCell. Here is an example method to add the items to just one cell, rather than the whole column. private void setCellComboBoxItems(DataGridView dataGrid, int rowIndex, int colIndex, object[] itemsToAdd) { DataGridViewComboBoxCell dgvcbc = (DataGridViewComboBoxCell) dataGrid.Rows[rowIndex].Cells[colIndex]; // You might pass a boolean to determine whether to clear or not. dgvcbc.Items.Clear(); foreach (object itemToAdd in itemsToAdd) { dgvcbc.Items.Add(itemToAdd); } } A: Just in case anyone finds this thread, this is my solution in VB 2008. The advantage this offers is that it allows you to assign an ID to each value in the combobox. Private Sub FillGroups() Try 'Create Connection and SQLCommand here. Conn.Open() Dim dr As SqlDataReader = cm.ExecuteReader dgvGroups.Rows.Clear() Dim PreviousGroup As String = "" Dim l As New List(Of Groups) While dr.Read Dim g As New Groups g.RegionID = CheckInt(dr("cg_id")) g.RegionName = CheckString(dr("cg_name")) g.GroupID = CheckInt(dr("vg_id")) g.GroupName = CheckString(dr("vg_name")) l.Add(g) End While dr.Close() Conn.Close() For Each a In (From r In l Select r.RegionName, r.RegionID).Distinct Dim RegionID As Integer = a.RegionID 'Doing it this way avoids a warning dgvGroups.Rows.Add(New Object() {a.RegionID, a.RegionName}) Dim c As DataGridViewComboBoxCell = CType(dgvGroups.Rows(dgvGroups.RowCount - 1).Cells(colGroup.Index), DataGridViewComboBoxCell) c.DataSource = (From g In l Where g.RegionID = RegionID Select g.GroupID, g.GroupName).ToArray c.DisplayMember = "GroupName" c.ValueMember = "GroupID" Next Catch ex As Exception End Try End Sub Private Class Groups Private _RegionID As Integer Public Property RegionID() As Integer Get Return _RegionID End Get Set(ByVal value As Integer) _RegionID = value End Set End Property Private _RegionName As String Public Property RegionName() As String Get Return _RegionName End Get Set(ByVal value As String) _RegionName = value End Set End Property Private _GroupName As String Public Property GroupName() As String Get Return _GroupName End Get Set(ByVal value As String) _GroupName = value End Set End Property Private _GroupID As Integer Public Property GroupID() As Integer Get Return _GroupID End Get Set(ByVal value As Integer) _GroupID = value End Set End Property End Class A: This is an example with a grid view that has two combobox columns: when the first combobox column's selected index changes, the second combobox column for that row is loaded with data from two different columns in the database.
private void dataGridView1_CellEndEdit(object sender, DataGridViewCellEventArgs e) { if (dataGridView1.Rows[e.RowIndex].Cells[0].Value != null && dataGridView1.CurrentCell.ColumnIndex == 0) { SqlConnection conn = new SqlConnection("data source=.;initial catalog=pharmacy;integrated security=true"); SqlCommand cmd = new SqlCommand("select [drugTypeParent],[drugTypeChild] from [drugs] where [drugName]='" + dataGridView1.Rows[e.RowIndex].Cells[0].Value.ToString() + "'", conn); conn.Open(); SqlDataReader dr = cmd.ExecuteReader(); while (dr.Read()) { object[] o = new object[] { dr[0].ToString(),dr[1].ToString() }; DataGridViewComboBoxCell dgvcbc = (DataGridViewComboBoxCell)dataGridView1.Rows[e.RowIndex].Cells[1]; dgvcbc.Items.Clear(); foreach (object itemToAdd in o) { dgvcbc.Items.Add(itemToAdd); } } dr.Close(); conn.Close(); } } A: Setting the combobox cell right after setting the DataSource doesn't work; it has to be done after the binding operations have completed. I chose CellBeginEdit. Example of empty dropdowns: dgv1.datasource = datatable1; dgv1.columns.add ( "cbxcol" , typeof(string) ); // different source for each comboboxcell in rows var dict_rowInd_cbxDs = new Dictionary<int, object>(); dict_rowInd_cbxDs[1] = new List<string>(){"en" , "us"}; dict_rowInd_cbxDs[2] = new List<string>(){ "car", "bike"}; // !!!!!! setting the combobox cell right after creating it doesn't work here foreach( row in dgv.Rows.asEnumerable() ) { var cell = res_tn.dgv.CurrentCell as DataGridViewComboBoxCell; cell.DataSource = dict_rowInd_cbxDs[res_tn.dgv.CurrentCell.RowIndex]; } working example: dgv1.datasource = datatable1; dgv1.columns.add ( "cbxcol" , typeof(string) ); // different source for each comboboxcell in rows var dict_rowInd_cbxDs = new Dictionary<int, object>(); dict_rowInd_cbxDs[1] = new List<string>(){"en" , "us"}; dict_rowInd_cbxDs[2] = new List<string>(){ "car", "bike"}; // combobox cell DataSource assignment must be done after BindingComplete (not tested) or CellBeginEdit (tested by me) res_tn.dgv.CellBeginEdit += (s1, e1) => { if (res_tn.dgv.CurrentCell is DataGridViewComboBoxCell) { if (dict_rowInd_cbxDs.ContainsKey(res_tn.dgv.CurrentCell.RowIndex)) { var cll = res_tn.dgv.CurrentCell as DataGridViewComboBoxCell; cll.DataSource = dict_rowInd_cbxDs[res_tn.dgv.CurrentCell.RowIndex]; // required if it is List<MyCustomClass> // cll.DisplayMember = "ColName"; // cll.ValueMember = "This"; } } }; A: 2023 .Net 7 I lost an hour on this. As mentioned in some post above, filling combobox items must be done after data binding, so this worked for me: dataGridServices.DataBindingComplete += DataGridServices_DataBindingComplete; dataGridServices.DataError += (sender,e) => { }; // required otherwise DataGridView will complain that "DataGridViewComboBoxCell value is not valid" private void DataGridServices_DataBindingComplete(object? sender, DataGridViewBindingCompleteEventArgs e) { int nCol = dataGridServices.Columns[nameof(PackageService.CheckMethod)].Index; int nRow = 0; foreach (var packageService in AllServices) SetCellComboBoxItems(nRow++, nCol, packageService.MethodsWithoutParameters); } public void SetCellComboBoxItems(int rowIndex, int colIndex, IEnumerable<string> items) { DataGridViewComboBoxCell cell = (DataGridViewComboBoxCell)Grid.Rows[rowIndex].Cells[colIndex]; cell.MaxDropDownItems = 100; // not sure if useful.
Default is 8, max is 100 cell.Items.Clear(); cell.Items.AddRange(items.ToArray()); } A: //Populate the Datatable with the Lookup lists private DataTable typeDataTable(DataGridView dataGridView, Lookup<string, Element> type_Lookup, Dictionary<Element, string> type_dictionary, string strNewStyle, string strOldStyle, string strID, string strCount) { int row = 0; DataTable dt = new DataTable(); dt.Columns.Add(strOldStyle, typeof(string)); dt.Columns.Add(strID, typeof(string)); dt.Columns.Add(strCount, typeof(int)); dt.Columns.Add("combobox", typeof(DataGridViewComboBoxCell)); //Add All Doc Types to ComboBoxes DataGridViewComboBoxCell CmBx = new DataGridViewComboBoxCell(); CmBx.DataSource = new BindingSource(type_dictionary, null); CmBx.DisplayMember = "Value"; CmBx.ValueMember = "Key"; //Add Style Comboboxes DataGridViewComboBoxColumn Data_CmBx_Col = new DataGridViewComboBoxColumn(); Data_CmBx_Col.HeaderText = strNewStyle; dataGridView.Columns.Add(addDataGrdViewComboBox(Data_CmBx_Col, type_dictionary)); setCellComboBoxItems(dataGridView, 1, 3, CmBx); //Add style Rows foreach (IGrouping<string, Element> StyleGroup in type_Lookup) { row++; //Iterate through each group in the Igrouping //Add Style Rows dt.Rows.Add(StyleGroup.Key, row, StyleGroup.Count().ToString()); } return dt; } private void setCellComboBoxItems(DataGridView dataGrid, int rowIndex, int colIndex, DataGridViewComboBoxCell CmBx) { DataGridViewComboBoxCell dgvcbc = (DataGridViewComboBoxCell)dataGrid.Rows[rowIndex].Cells[colIndex]; // You might pass a boolean to determine whether to clear or not. dgvcbc.Items.Clear(); foreach (object itemToAdd in CmBx.Items) { dgvcbc.Items.Add(itemToAdd); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/91745", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Background color of a ListBox item (Windows Forms) How can I set the background color of a specific item in a System.Windows.Forms.ListBox? I would like to be able to set multiple ones if possible. A: Thanks for the answer by Grad van Horck. It guided me in the correct direction. To support text (not just background color), here is my fully working code: //global brushes with ordinary/selected colors private SolidBrush reportsForegroundBrushSelected = new SolidBrush(Color.White); private SolidBrush reportsForegroundBrush = new SolidBrush(Color.Black); private SolidBrush reportsBackgroundBrushSelected = new SolidBrush(Color.FromKnownColor(KnownColor.Highlight)); private SolidBrush reportsBackgroundBrush1 = new SolidBrush(Color.White); private SolidBrush reportsBackgroundBrush2 = new SolidBrush(Color.Gray); //custom method to draw the items, don't forget to set DrawMode of the ListBox to OwnerDrawFixed private void lbReports_DrawItem(object sender, DrawItemEventArgs e) { e.DrawBackground(); bool selected = ((e.State & DrawItemState.Selected) == DrawItemState.Selected); int index = e.Index; if (index >= 0 && index < lbReports.Items.Count) { string text = lbReports.Items[index].ToString(); Graphics g = e.Graphics; //background: SolidBrush backgroundBrush; if (selected) backgroundBrush = reportsBackgroundBrushSelected; else if ((index % 2) == 0) backgroundBrush = reportsBackgroundBrush1; else backgroundBrush = reportsBackgroundBrush2; g.FillRectangle(backgroundBrush, e.Bounds); //text: SolidBrush foregroundBrush = (selected) ? reportsForegroundBrushSelected : reportsForegroundBrush; g.DrawString(text, e.Font, foregroundBrush, lbReports.GetItemRectangle(index).Location); } e.DrawFocusRectangle(); } The above adds to the given code and will show the proper text plus highlight the selected item. A: Probably the only way to accomplish that is to draw the items yourself. Set the DrawMode to OwnerDrawFixed and code something like this on the DrawItem event: private void listBox_DrawItem(object sender, DrawItemEventArgs e) { e.DrawBackground(); Graphics g = e.Graphics; g.FillRectangle(new SolidBrush(Color.Silver), e.Bounds); // Print text e.DrawFocusRectangle(); } The second option would be using a ListView, although it has a different style of implementation (not really data-bound, but more flexible in the way of columns). A: // Set the background to a predefined colour MyListBox.BackColor = Color.Red; // Or build a colour from components. Note that Color is immutable, so // its R/G/B properties cannot be assigned individually: MyListBox.BackColor = Color.FromArgb(255, 0, 0); If what you mean by setting multiple background colors is setting a different background color for each item, this isn't possible with a ListBox, but it is with a ListView, with something like: // Set the background of the first item in the list MyListView.Items[0].BackColor = Color.Red; A: public MainForm() { InitializeComponent(); this.listbox1.DrawItem += new DrawItemEventHandler(this.listbox1_DrawItem); } private void listbox1_DrawItem(object sender, System.Windows.Forms.DrawItemEventArgs e) { e.DrawBackground(); Brush myBrush = Brushes.Black; var item = listbox1.Items[e.Index]; if(e.Index % 2 == 0) { e.Graphics.FillRectangle(new SolidBrush(Color.Gold), e.Bounds); } e.Graphics.DrawString(((ListBox)sender).Items[e.Index].ToString(), e.Font, myBrush,e.Bounds, StringFormat.GenericDefault); e.DrawFocusRectangle(); }
{ "language": "en", "url": "https://stackoverflow.com/questions/91747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "53" }
Q: How do I sort an ASP.NET DataGrid by the length of a field? I have a DataGrid where each column has a SortExpression. I would like the sort expression to be the equivalent of "ORDER BY LEN(myField)". I have tried SortExpression="LEN(myField)" but this throws an exception as it is not valid syntax. Any ideas? A: What about having the query return the length already, but not showing that column, only using it as your original column's SortExpression? I don't think that your idea is supported by default. A: Depending on your SQL flavor the following could work: SELECT ColumnA as FieldA , ColumnB as FieldB , LEN(ColumnA) as FieldL FROM TableName ORDER BY FieldL And then do SortExpression="FieldL" A: The SortExpression parameter specifies the name of the column to sort, followed by "ASC" or "DESC" to control the order. You could change the DataType property of the column to specify a user-defined type whose comparer function compares string lengths. It won't be a trivial task. A: Using Linq, you could write your query like: query.OrderBy(column => column.MyField.Length); A: Hmmm. Had some time to test. I was able to get SortExpression="Description.Length" to work. Is this 1.1, 2.0 or 3.5?
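A: Another client-side option: the DataColumn.Expression syntax supports a LEN() function, so you can add a computed length column to the DataTable you bind and point the SortExpression at that. A small sketch; the grid, column names and the GetData() helper are placeholders:

using System.Data;

DataTable dt = GetData(); // however the grid's data is loaded
// Computed column: its value is always LEN(myField) for that row.
dt.Columns.Add("myFieldLength", typeof(int), "LEN(myField)");
myDataGrid.DataSource = dt;
myDataGrid.DataBind();

// In the markup, sort the visible column on the computed one:
// <asp:BoundColumn DataField="myField" SortExpression="myFieldLength" />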
{ "language": "en", "url": "https://stackoverflow.com/questions/91766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to remove all event handlers from an event To create a new event handler on a control you can do this c.Click += new EventHandler(mainFormButton_Click); or this c.Click += mainFormButton_Click; and to remove an event handler you can do this c.Click -= mainFormButton_Click; But how do you remove all event handlers from an event? A: From Removing All Event Handlers: Directly no, in large part because you cannot simply set the event to null. Indirectly, you could make the actual event private and create a property around it that tracks all of the delegates being added to or removed from it. Take the following: List<EventHandler> delegates = new List<EventHandler>(); private event EventHandler MyRealEvent; public event EventHandler MyEvent { add { MyRealEvent += value; delegates.Add(value); } remove { MyRealEvent -= value; delegates.Remove(value); } } public void RemoveAllEvents() { foreach(EventHandler eh in delegates) { MyRealEvent -= eh; } delegates.Clear(); } A: The accepted answer is not complete. It doesn't work for events declared as {add; remove;} Here is working code: public static void ClearEventInvocations(this object obj, string eventName) { var fi = obj.GetType().GetEventField(eventName); if (fi == null) return; fi.SetValue(obj, null); } private static FieldInfo GetEventField(this Type type, string eventName) { FieldInfo field = null; while (type != null) { /* Find events defined as field */ field = type.GetField(eventName, BindingFlags.Static | BindingFlags.Instance | BindingFlags.NonPublic); if (field != null && (field.FieldType == typeof(MulticastDelegate) || field.FieldType.IsSubclassOf(typeof(MulticastDelegate)))) break; /* Find events defined as property { add; remove; } */ field = type.GetField("EVENT_" + eventName.ToUpper(), BindingFlags.Static | BindingFlags.Instance | BindingFlags.NonPublic); if (field != null) break; type = type.BaseType; } return field; } A: It doesn't do any harm to delete a non-existing event handler. So if you know what handlers there might be, you can simply delete all of them. I just had a similar case. This may help in some cases. Like: // Add handlers... if (something) { c.Click += DoesSomething; } else { c.Click += DoesSomethingElse; } // Remove handlers... c.Click -= DoesSomething; c.Click -= DoesSomethingElse; A: If you reaallly have to do this... it'll take reflection and quite some time. Event handlers are managed in an event-to-delegate map inside a control. You would need to * *Reflect and obtain this map in the control instance. *Iterate for each event, get the delegate * *each delegate in turn could be a chained series of event handlers. So call obControl.RemoveHandler(event, handler) In short, a lot of work. It is possible in theory... I never tried something like this. See if you can have better control/discipline over the subscribe-unsubscribe phase for the control. A: I just found How to suspend events when setting a property of a WinForms control.
It will remove all events from a control: namespace CMessWin05 { public class EventSuppressor { Control _source; EventHandlerList _sourceEventHandlerList; FieldInfo _headFI; Dictionary<object, Delegate[]> _handlers; PropertyInfo _sourceEventsInfo; Type _eventHandlerListType; Type _sourceType; public EventSuppressor(Control control) { if (control == null) throw new ArgumentNullException("control", "An instance of a control must be provided."); _source = control; _sourceType = _source.GetType(); _sourceEventsInfo = _sourceType.GetProperty("Events", BindingFlags.Instance | BindingFlags.NonPublic); _sourceEventHandlerList = (EventHandlerList)_sourceEventsInfo.GetValue(_source, null); _eventHandlerListType = _sourceEventHandlerList.GetType(); _headFI = _eventHandlerListType.GetField("head", BindingFlags.Instance | BindingFlags.NonPublic); } private void BuildList() { _handlers = new Dictionary<object, Delegate[]>(); object head = _headFI.GetValue(_sourceEventHandlerList); if (head != null) { Type listEntryType = head.GetType(); FieldInfo delegateFI = listEntryType.GetField("handler", BindingFlags.Instance | BindingFlags.NonPublic); FieldInfo keyFI = listEntryType.GetField("key", BindingFlags.Instance | BindingFlags.NonPublic); FieldInfo nextFI = listEntryType.GetField("next", BindingFlags.Instance | BindingFlags.NonPublic); BuildListWalk(head, delegateFI, keyFI, nextFI); } } private void BuildListWalk(object entry, FieldInfo delegateFI, FieldInfo keyFI, FieldInfo nextFI) { if (entry != null) { Delegate dele = (Delegate)delegateFI.GetValue(entry); object key = keyFI.GetValue(entry); object next = nextFI.GetValue(entry); Delegate[] listeners = dele.GetInvocationList(); if(listeners != null && listeners.Length > 0) _handlers.Add(key, listeners); if (next != null) { BuildListWalk(next, delegateFI, keyFI, nextFI); } } } public void Resume() { if (_handlers == null) throw new ApplicationException("Events have not been suppressed."); foreach (KeyValuePair<object, Delegate[]> pair in _handlers) { for (int x = 0; x < pair.Value.Length; x++) _sourceEventHandlerList.AddHandler(pair.Key, pair.Value[x]); } _handlers = null; } public void Suppress() { if (_handlers != null) throw new ApplicationException("Events are already being suppressed."); BuildList(); foreach (KeyValuePair<object, Delegate[]> pair in _handlers) { for (int x = pair.Value.Length - 1; x >= 0; x--) _sourceEventHandlerList.RemoveHandler(pair.Key, pair.Value[x]); } } } } A: I didn't like any of the complete solutions shown here, so I did a mix and tested it now; it worked for any event handler: public class MyMain { public void MyMethod() { AnotherClass.TheEventHandler += DoSomething; } private void DoSomething(object sender, EventArgs e) { Debug.WriteLine("I did something"); AnotherClass.ClearAllDelegatesOfTheEventHandler(); } } public static class AnotherClass { public static event EventHandler TheEventHandler; public static void ClearAllDelegatesOfTheEventHandler() { foreach (Delegate d in TheEventHandler.GetInvocationList()) { TheEventHandler -= (EventHandler)d; } } } Easy! Thanks to Stephen Punak. I used it because I use a generic local method to remove the delegates and the local method was called after different cases, when different delegates are set. A: This page helped me a lot. The code I got from here was meant to remove a click event from a button. I need to remove double-click events from some panels and click events from some buttons. So I made a control extension, which will remove all event handlers for a certain event.
using System; using System.Collections.Generic; using System.ComponentModel; using System.Drawing; using System.Windows.Forms; using System.Reflection; public static class EventExtension { public static void RemoveEvents<T>(this T target, string eventName) where T:Control { if (ReferenceEquals(target, null)) throw new NullReferenceException("Argument \"target\" may not be null."); FieldInfo fieldInfo = typeof(Control).GetField(eventName, BindingFlags.Static | BindingFlags.NonPublic); if (ReferenceEquals(fieldInfo, null)) throw new ArgumentException( string.Concat("The control ", typeof(T).Name, " does not have a property with the name \"", eventName, "\""), nameof(eventName)); object eventInstance = fieldInfo.GetValue(target); PropertyInfo propInfo = typeof(T).GetProperty("Events", BindingFlags.NonPublic | BindingFlags.Instance); EventHandlerList list = (EventHandlerList)propInfo.GetValue(target, null); list.RemoveHandler(eventInstance, list[eventInstance]); } } Now, the usage of this extension. If you need to remove click events from a button, Button button = new Button(); button.RemoveEvents(nameof(button.EventClick)); If you need to remove double-click events from a panel, Panel panel = new Panel(); panel.RemoveEvents(nameof(panel.EventDoubleClick)); I am not an expert in C#, so if there are any bugs please forgive me and kindly let me know about it. A: Stephen is right. It is very easy: public event EventHandler<Cles_graph_doivent_etre_redessines> les_graph_doivent_etre_redessines; public void remove_event() { if (this.les_graph_doivent_etre_redessines != null) { foreach (EventHandler<Cles_graph_doivent_etre_redessines> F_les_graph_doivent_etre_redessines in this.les_graph_doivent_etre_redessines.GetInvocationList()) { this.les_graph_doivent_etre_redessines -= F_les_graph_doivent_etre_redessines; } } } A: I found a solution on the MSDN forums. The sample code below will remove all Click events from button1. public partial class Form1 : Form { public Form1() { InitializeComponent(); button1.Click += button1_Click; button1.Click += button1_Click2; button2.Click += button2_Click; } private void button1_Click(object sender, EventArgs e) => MessageBox.Show("Hello"); private void button1_Click2(object sender, EventArgs e) => MessageBox.Show("World"); private void button2_Click(object sender, EventArgs e) => RemoveClickEvent(button1); private void RemoveClickEvent(Button b) { FieldInfo f1 = typeof(Control).GetField("EventClick", BindingFlags.Static | BindingFlags.NonPublic); object obj = f1.GetValue(b); PropertyInfo pi = b.GetType().GetProperty("Events", BindingFlags.NonPublic | BindingFlags.Instance); EventHandlerList list = (EventHandlerList)pi.GetValue(b, null); list.RemoveHandler(obj, list[obj]); } } A: You guys are making this WAY too hard on yourselves. It's this easy: void OnFormClosing(object sender, FormClosingEventArgs e) { foreach(Delegate d in FindClicked.GetInvocationList()) { FindClicked -= (FindClickedHandler)d; } } A: I'm actually using this method and it works perfectly. I was 'inspired' by the code written by Aeonhack here. Public Event MyEvent() Protected Overrides Sub Dispose(ByVal disposing As Boolean) If MyEventEvent IsNot Nothing Then For Each d In MyEventEvent.GetInvocationList ' If this throws an exception, try using .ToArray RemoveHandler MyEvent, d Next End If End Sub ~MyClass() { if (MyEventEvent != null) { foreach (var d in MyEventEvent.GetInvocationList()) { MyEventEvent -= (MyEvent)d; } } } The field MyEventEvent is hidden, but it does exist.
Debugging, you can see how d.target is the object actually handling the event, and d.method its method. You only have to remove it. It works great. No more objects not being GC'ed because of the event handlers. A: Wow. I found this solution, but nothing worked like I wanted. But this is so good: EventHandlerList listaEventos; private void btnDetach_Click(object sender, EventArgs e) { listaEventos = DetachEvents(comboBox1); } private void btnAttach_Click(object sender, EventArgs e) { AttachEvents(comboBox1, listaEventos); } public EventHandlerList DetachEvents(Component obj) { object objNew = obj.GetType().GetConstructor(new Type[] { }).Invoke(new object[] { }); PropertyInfo propEvents = obj.GetType().GetProperty("Events", BindingFlags.NonPublic | BindingFlags.Instance); EventHandlerList eventHandlerList_obj = (EventHandlerList)propEvents.GetValue(obj, null); EventHandlerList eventHandlerList_objNew = (EventHandlerList)propEvents.GetValue(objNew, null); eventHandlerList_objNew.AddHandlers(eventHandlerList_obj); eventHandlerList_obj.Dispose(); return eventHandlerList_objNew; } public void AttachEvents(Component obj, EventHandlerList eventos) { PropertyInfo propEvents = obj.GetType().GetProperty("Events", BindingFlags.NonPublic | BindingFlags.Instance); EventHandlerList eventHandlerList_obj = (EventHandlerList)propEvents.GetValue(obj, null); eventHandlerList_obj.AddHandlers(eventos); } A: A bit late to the party, but I used this link that worked perfectly well for me: https://www.codeproject.com/Articles/103542/Removing-Event-Handlers-using-Reflection The beauty of this code is that it works for all, WFP, Forms, Xamarin Forms. I used it for Xamarin. Note that you need this way of using Reflection only if you don't own this event (e.g. a library code that crashes on some event that you don't care about). Here is my slightly modified code: static Dictionary<Type, List<FieldInfo>> dicEventFieldInfos = new Dictionary<Type, List<FieldInfo>>(); static BindingFlags AllBindings { get { return BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.Static; } } static void BuildEventFields(Type t, List<FieldInfo> lst) { foreach (EventInfo ei in t.GetEvents(AllBindings)) { Type dt = ei.DeclaringType; FieldInfo fi = dt.GetField(ei.Name, AllBindings); if (fi != null) lst.Add(fi); } } static List<FieldInfo> GetTypeEventFields(Type t) { if (dicEventFieldInfos.ContainsKey(t)) return dicEventFieldInfos[t]; List<FieldInfo> lst = new List<FieldInfo>(); BuildEventFields(t, lst); dicEventFieldInfos.Add(t, lst); return lst; } static EventHandlerList GetStaticEventHandlerList(Type t, object obj) { MethodInfo mi = t.GetMethod("get_Events", AllBindings); return (EventHandlerList)mi.Invoke(obj, new object[] { }); } public static void RemoveEventHandler(object obj, string EventName = "") { if (obj == null) return; Type t = obj.GetType(); List<FieldInfo> event_fields = GetTypeEventFields(t); EventHandlerList static_event_handlers = null; foreach (FieldInfo fi in event_fields) { if (EventName != "" && string.Compare(EventName, fi.Name, true) != 0) continue; var eventName = fi.Name; // After hours and hours of research and trial and error, it turns out that // STATIC Events have to be treated differently from INSTANCE Events... 
if (fi.IsStatic) { // STATIC EVENT if (static_event_handlers == null) static_event_handlers = GetStaticEventHandlerList(t, obj); object idx = fi.GetValue(obj); Delegate eh = static_event_handlers[idx]; if (eh == null) continue; Delegate[] dels = eh.GetInvocationList(); if (dels == null) continue; EventInfo ei = t.GetEvent(eventName, AllBindings); foreach (Delegate del in dels) ei.RemoveEventHandler(obj, del); } else { // INSTANCE EVENT EventInfo ei = t.GetEvent(eventName, AllBindings); if (ei != null) { object val = fi.GetValue(obj); Delegate mdel = (val as Delegate); if (mdel != null) { foreach (Delegate del in mdel.GetInvocationList()) { ei.RemoveEventHandler(obj, del); } } } } } } Example usage: RemoveEventHandler(obj, "Focused"); A: Sometimes we have to work with third-party controls and we need to build these awkward solutions. Based on @Anoop Muraleedharan's answer, I created this solution with type inference and ToolStripItem support public static void RemoveItemEvents<T>(this T target, string eventName) where T : ToolStripItem { RemoveObjectEvents<T>(target, eventName); } public static void RemoveControlEvents<T>(this T target, string eventName) where T : Control { RemoveObjectEvents<T>(target, eventName); } private static void RemoveObjectEvents<T>(T target, string Event) where T : class { var typeOfT = typeof(T); var fieldInfo = typeOfT.BaseType.GetField( Event, BindingFlags.Static | BindingFlags.NonPublic); var propertyValue = fieldInfo.GetValue(target); var propertyInfo = typeOfT.GetProperty( "Events", BindingFlags.NonPublic | BindingFlags.Instance); var eventHandlerList = (EventHandlerList)propertyInfo.GetValue(target, null); eventHandlerList.RemoveHandler(propertyValue, eventHandlerList[propertyValue]); } And you can use it like this var toolStripButton = new ToolStripButton(); toolStripButton.RemoveItemEvents("EventClick"); var button = new Button(); button.RemoveControlEvents("EventClick"); A: This removes all handlers for a button: save.RemoveEvents(); public static class EventExtension { public static void RemoveEvents<T>(this T target) where T : Control { var propInfo = typeof(T).GetProperty("Events", BindingFlags.NonPublic | BindingFlags.Instance); var list = (EventHandlerList)propInfo.GetValue(target, null); list.Dispose(); } } A: I found this answer and it almost fit my needs. Thanks to SwDevMan81 for the class. I have modified it to allow suppression and resumption of individual methods, and I thought I'd post it here. // This class allows you to selectively suppress event handlers for controls. You instantiate // the suppressor object with the control, and after that you can use it to suppress all events // or a single event. If you try to suppress an event which has already been suppressed // it will be ignored. Same with resuming; you can resume all events which were suppressed, // or a single one. If you try to resume an un-suppressed event handler, it will be ignored.
//cEventSuppressor _supButton1 = null; //private cEventSuppressor SupButton1 { // get { // if (_supButton1 == null) { // _supButton1 = new cEventSuppressor(this.button1); // } // return _supButton1; // } //} //private void button1_Click(object sender, EventArgs e) { // MessageBox.Show("Clicked!"); //} //private void button2_Click(object sender, EventArgs e) { // SupButton1.Suppress("button1_Click"); //} //private void button3_Click(object sender, EventArgs e) { // SupButton1.Resume("button1_Click"); //} using System; using System.Collections.Generic; using System.Text; using System.Reflection; using System.Windows.Forms; using System.ComponentModel; namespace Crystal.Utilities { public class cEventSuppressor { Control _source; EventHandlerList _sourceEventHandlerList; FieldInfo _headFI; Dictionary<object, Delegate[]> suppressedHandlers = new Dictionary<object, Delegate[]>(); PropertyInfo _sourceEventsInfo; Type _eventHandlerListType; Type _sourceType; public cEventSuppressor(Control control) { if (control == null) throw new ArgumentNullException("control", "An instance of a control must be provided."); _source = control; _sourceType = _source.GetType(); _sourceEventsInfo = _sourceType.GetProperty("Events", BindingFlags.Instance | BindingFlags.NonPublic); _sourceEventHandlerList = (EventHandlerList)_sourceEventsInfo.GetValue(_source, null); _eventHandlerListType = _sourceEventHandlerList.GetType(); _headFI = _eventHandlerListType.GetField("head", BindingFlags.Instance | BindingFlags.NonPublic); } private Dictionary<object, Delegate[]> BuildList() { Dictionary<object, Delegate[]> retval = new Dictionary<object, Delegate[]>(); object head = _headFI.GetValue(_sourceEventHandlerList); if (head != null) { Type listEntryType = head.GetType(); FieldInfo delegateFI = listEntryType.GetField("handler", BindingFlags.Instance | BindingFlags.NonPublic); FieldInfo keyFI = listEntryType.GetField("key", BindingFlags.Instance | BindingFlags.NonPublic); FieldInfo nextFI = listEntryType.GetField("next", BindingFlags.Instance | BindingFlags.NonPublic); retval = BuildListWalk(retval, head, delegateFI, keyFI, nextFI); } return retval; } private Dictionary<object, Delegate[]> BuildListWalk(Dictionary<object, Delegate[]> dict, object entry, FieldInfo delegateFI, FieldInfo keyFI, FieldInfo nextFI) { if (entry != null) { Delegate dele = (Delegate)delegateFI.GetValue(entry); object key = keyFI.GetValue(entry); object next = nextFI.GetValue(entry); if (dele != null) { Delegate[] listeners = dele.GetInvocationList(); if (listeners != null && listeners.Length > 0) { dict.Add(key, listeners); } } if (next != null) { dict = BuildListWalk(dict, next, delegateFI, keyFI, nextFI); } } return dict; } public void Resume() { } public void Resume(string pMethodName) { //if (_handlers == null) // throw new ApplicationException("Events have not been suppressed."); Dictionary<object, Delegate[]> toRemove = new Dictionary<object, Delegate[]>(); // goes through all handlers which have been suppressed. 
If we are resuming, // all handlers, or if we find the matching handler, add it back to the // control's event handlers foreach (KeyValuePair<object, Delegate[]> pair in suppressedHandlers) { for (int x = 0; x < pair.Value.Length; x++) { string methodName = pair.Value[x].Method.Name; if (pMethodName == null || methodName.Equals(pMethodName)) { _sourceEventHandlerList.AddHandler(pair.Key, pair.Value[x]); toRemove.Add(pair.Key, pair.Value); } } } // remove all un-suppressed handlers from the list of suppressed handlers foreach (KeyValuePair<object, Delegate[]> pair in toRemove) { for (int x = 0; x < pair.Value.Length; x++) { suppressedHandlers.Remove(pair.Key); } } //_handlers = null; } public void Suppress() { Suppress(null); } public void Suppress(string pMethodName) { //if (_handlers != null) // throw new ApplicationException("Events are already being suppressed."); Dictionary<object, Delegate[]> dict = BuildList(); foreach (KeyValuePair<object, Delegate[]> pair in dict) { for (int x = pair.Value.Length - 1; x >= 0; x--) { //MethodInfo mi = pair.Value[x].Method; //string s1 = mi.Name; // name of the method //object o = pair.Value[x].Target; // can use this to invoke method pair.Value[x].DynamicInvoke string methodName = pair.Value[x].Method.Name; if (pMethodName == null || methodName.Equals(pMethodName)) { _sourceEventHandlerList.RemoveHandler(pair.Key, pair.Value[x]); suppressedHandlers.Add(pair.Key, pair.Value); } } } } } } A: Well, here's another solution to remove an associated event (if you already have a method for handling the events for the control): EventDescriptor ed = TypeDescriptor.GetEvents(this.button1).Find("MouseDown",true); Delegate del = Delegate.CreateDelegate(typeof(EventHandler), this, "button1_MouseDownClicked"); if(ed!=null) ed.RemoveEventHandler(this.button1, del); A: This is not an answer to the OP, but I thought I'd post this here in case it can help others. /// <summary> /// Method to remove a (single) SocketAsyncEventArgs.Completed event handler. This is /// partially based on information found here: http://stackoverflow.com/a/91853/253938 /// /// But note that this may not be a good idea, being very .Net implementation-dependent. Note /// in particular use of "m_Completed" instead of "Completed". /// </summary> private static void RemoveCompletedEventHandler(SocketAsyncEventArgs eventArgs) { FieldInfo fieldInfo = typeof(SocketAsyncEventArgs).GetField("m_Completed", BindingFlags.Instance | BindingFlags.NonPublic); eventArgs.Completed -= (EventHandler<SocketAsyncEventArgs>)fieldInfo.GetValue(eventArgs); }
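A: One simple case the reflection-heavy answers skip over: if you own the class that declares the event, no reflection is needed at all. Inside the declaring class a field-like event is just a delegate field, so you can drop every subscriber by assigning null. A minimal sketch (class and member names are illustrative):

using System;

public class Publisher
{
    public event EventHandler SomethingHappened;

    // Legal only inside the declaring class: here the event behaves
    // as an ordinary delegate field, so null drops all subscribers.
    public void ClearAllSubscribers()
    {
        SomethingHappened = null;
    }

    public void Raise()
    {
        EventHandler handler = SomethingHappened;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}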
{ "language": "en", "url": "https://stackoverflow.com/questions/91778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "426" }
Q: How can I delete duplicate rows in a table I have a table with say 3 columns. There's no primary key so there can be duplicate rows. I need to just keep one and delete the others. Any idea how to do this in SQL Server? A: Add an identity column to act as a surrogate primary key, and use this to identify two of the three rows to be deleted. I would consider leaving the identity column in place afterwards, or if this is some kind of link table, create a compound primary key on the other columns. A: The following example works as well when your PK is just a subset of all table columns. (Note: I like the approach with inserting another surrogate id column more. But maybe this solution comes in handy as well.) First find the duplicate rows: SELECT col1, col2, count(*) FROM t1 GROUP BY col1, col2 HAVING count(*) > 1 If there are only a few, you can delete them manually: set rowcount 1 delete from t1 where col1=1 and col2=1 set rowcount 0 The value of "rowcount" should be n-1, where n is the number of duplicate rows. In this example there are 2 duplicates, therefore rowcount is 1. If there are several duplicated keys, you have to do this for each of them. If you have many duplicates, then copy every key once into another table: SELECT col1, col2, col3=count(*) INTO holdkey FROM t1 GROUP BY col1, col2 HAVING count(*) > 1 Then copy the rows with those keys, eliminating the duplicates: SELECT DISTINCT t1.* INTO holddups FROM t1, holdkey WHERE t1.col1 = holdkey.col1 AND t1.col2 = holdkey.col2 holddups should now have unique keys. Check that the following returns no rows: SELECT col1, col2, count(*) FROM holddups GROUP BY col1, col2 Delete the duplicates from the original table: DELETE t1 FROM t1, holdkey WHERE t1.col1 = holdkey.col1 AND t1.col2 = holdkey.col2 Finally, reinsert the de-duplicated rows: INSERT t1 SELECT * FROM holddups BTW, and for completeness: In Oracle there is a hidden field you could use (rowid): DELETE FROM our_table WHERE rowid not in (SELECT MIN(rowid) FROM our_table GROUP BY column1, column2, column3...); see: Microsoft Knowledge Site A: Here's the method I used when I asked this question - DELETE MyTable FROM MyTable LEFT OUTER JOIN ( SELECT MIN(RowId) as RowId, Col1, Col2, Col3 FROM MyTable GROUP BY Col1, Col2, Col3 ) as KeepRows ON MyTable.RowId = KeepRows.RowId WHERE KeepRows.RowId IS NULL A: This is a way to do it with Common Table Expressions, CTE. It involves no loops, no new columns or anything and won't cause any unwanted triggers to fire (due to deletes+inserts). Inspired by this article. CREATE TABLE #temp (i INT) INSERT INTO #temp VALUES (1) INSERT INTO #temp VALUES (1) INSERT INTO #temp VALUES (2) INSERT INTO #temp VALUES (3) INSERT INTO #temp VALUES (3) INSERT INTO #temp VALUES (4) SELECT * FROM #temp ; WITH [#temp+rowid] AS (SELECT ROW_NUMBER() OVER (ORDER BY i ASC) AS ROWID, * FROM #temp) DELETE FROM [#temp+rowid] WHERE rowid IN (SELECT MIN(rowid) FROM [#temp+rowid] GROUP BY i HAVING COUNT(*) > 1) SELECT * FROM #temp DROP TABLE #temp A: I'd SELECT DISTINCT the rows and throw them into a temporary table, then drop the source table and copy back the data from the temp. EDIT: now with code snippet! INSERT INTO TABLE_2 SELECT DISTINCT * FROM TABLE_1 GO DELETE FROM TABLE_1 GO INSERT INTO TABLE_1 SELECT * FROM TABLE_2 GO A: This is a tough situation to be in. Without knowing your particular situation (table size etc) I think that your best shot is to add an identity column, populate it and then delete according to it.
You may remove the column later, but I would suggest that you keep it, as it is really a good thing to have in the table. A: After you clean up the current mess, you could add a primary key that includes all the fields in the table. That will keep you from getting into the mess again. Of course this solution could very well break existing code. That will have to be handled as well. A: Can you add a primary key identity field to the table? A: Manrico Corazzi - I specialize in Oracle, not MS SQL, so you'll have to tell me if this is possible as a performance boost: * *Leave the same as your first step - insert distinct values into TABLE2 from TABLE1. *Drop TABLE1. (Drop should be faster than delete I assume, much as truncate is faster than delete). *Rename TABLE2 as TABLE1 (saves you time, as you're renaming an object rather than copying data from one table to another). A: Here's another way, with test data create table #table1 (colWithDupes1 int, colWithDupes2 int) insert into #table1 (colWithDupes1, colWithDupes2) Select 1, 2 union all Select 1, 2 union all Select 2, 2 union all Select 3, 4 union all Select 3, 4 union all Select 3, 4 union all Select 4, 2 union all Select 4, 2 select * from #table1 set rowcount 1 select 1 while @@rowcount > 0 delete #table1 where 1 < (select count(*) from #table1 a2 where #table1.colWithDupes1 = a2.colWithDupes1 and #table1.colWithDupes2 = a2.colWithDupes2 ) set rowcount 0 select * from #table1 A: How about: select distinct * into #t from duplicates_tbl truncate table duplicates_tbl insert duplicates_tbl select * from #t drop table #t A: What about this solution? First you execute the following query: select 'set rowcount ' + convert(varchar,COUNT(*)-1) + ' delete from MyTable where field=''' + field +'''' + ' set rowcount 0' from mytable group by field having COUNT(*)>1 And then you just have to execute the returned result set set rowcount 3 delete from Mytable where field='foo' set rowcount 0 .... .... set rowcount 5 delete from Mytable where field='bar' set rowcount 0 I've handled the case when you've got only one column, but it's pretty easy to adapt the same approach to more than one column. Let me know if you want me to post the code. A: I'm not sure if this works with DELETE statements, but this is a way to find duplicate rows: SELECT * FROM myTable t1, myTable t2 WHERE t1.field = t2.field AND t1.id > t2.id I'm not sure if you can just change the "SELECT" to a "DELETE" (someone wanna let me know?), but even if you can't, you could just make it into a subquery.
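A: For completeness, on SQL Server 2005 and later you can combine ROW_NUMBER() with PARTITION BY and delete through the CTE, which handles any number of duplicates per group in a single statement. A sketch with placeholder table and column names:

-- Number the rows inside each group of identical (col1, col2, col3)
-- values, then delete everything but the first row of each group.
WITH numbered AS (
    SELECT ROW_NUMBER() OVER (
               PARTITION BY col1, col2, col3
               ORDER BY col1
           ) AS rn
    FROM MyTable
)
DELETE FROM numbered
WHERE rn > 1;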
{ "language": "en", "url": "https://stackoverflow.com/questions/91784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Grep and Sed Equivalent for XML Command Line Processing When doing shell scripting, typically data will be in files of single line records like csv. It's really simple to handle this data with grep and sed. But I have to deal with XML often, so I'd really like a way to script access to that XML data via the command line. What are the best tools? A: If you're looking for a solution on Windows, Powershell has built-in functionality for reading and writing XML. test.xml: <root> <one>I like applesauce</one> <two>You sure bet I do!</two> </root> Powershell script: # load XML file into local variable and cast as XML type. $doc = [xml](Get-Content ./test.xml) $doc.root.one #echoes "I like applesauce" $doc.root.one = "Who doesn't like applesauce?" #replace inner text of <one> node # create new node... $newNode = $doc.CreateElement("three") $newNode.set_InnerText("And don't you forget it!") # ...and position it in the hierarchy $doc.root.AppendChild($newNode) # write results to disk $doc.save("./testNew.xml") testNew.xml: <root> <one>Who doesn't like applesauce?</one> <two>You sure bet I do!</two> <three>And don't you forget it!</three> </root> Source: https://serverfault.com/questions/26976/update-xml-from-the-command-line-windows A: There're also xmlsed & xmlgrep of the NetBSD xmltools! http://blog.huoc.org/xmltools-not-dead.html A: Depends on exactly what you want to do. XSLT may be the way to go, but there is a learning curve. Try xsltproc and note that you can pass in parameters. A: There's also saxon-lint from the command line with the ability to use XPath 3.0/XQuery 3.0. (Other command-line tools use XPath 1.0). EXAMPLES: http/html: $ saxon-lint --html --xpath 'count(//a)' http://stackoverflow.com/q/91791 328 xml: $ saxon-lint --xpath '//a[@class="x"]' file.xml A: D. Bohdan maintains an open-source GitHub repo that keeps a list of command-line tools for structured text; there is a section for XML/HTML tools: https://github.com/dbohdan/structured-text-tools#xml-html A: Some promising tools: * *nokogiri: parsing HTML/XML DOMs in Ruby using XPath & CSS selectors *hpricot: deprecated *fxgrep: Uses its own XPath-like syntax to query documents. Written in SML, so installation may be difficult. *LT XML: XML toolkit derived from SGML tools, including sggrep, sgsort, xmlnorm and others. Uses its own query syntax. The documentation is very formal. Written in C. LT XML 2 claims support of XPath, XInclude and other W3C standards. *xmlgrep2: simple and powerful searching with XPath. Written in Perl using XML::LibXML and libxml2. *XQSharp: Supports XQuery, the extension to XPath. Written for the .NET Framework. *xml-coreutils: Laird Breyer's toolkit equivalent to GNU coreutils. Discussed in an interesting essay on what the ideal toolkit should include. *xmldiff: Simple tool for comparing two xml files. *xmltk: doesn't seem to have a package in Debian, Ubuntu, Fedora, or MacPorts, hasn't had a release since 2007, and uses non-portable build automation. xml-coreutils seems the best documented and most UNIX-oriented. A: XQuery might be a good solution. It is (relatively) easy to learn and is a W3C standard. I would recommend XQSharp for a command-line processor. A: I first used xmlstarlet and am still using it. When the query gets tough and I need XPath 2 and XQuery feature support, I turn to xidel http://www.videlibri.de/xidel.html A: There is also the xml2 and 2xml pair. It will allow usual string editing tools to process XML. Example.
q.xml: <?xml version="1.0"?> <foo> text more text <textnode>ddd</textnode><textnode a="bv">dsss</textnode> <![CDATA[ asfdasdsa <foo> sdfsdfdsf <bar> ]]> </foo> xml2 < q.xml /foo= /foo= text /foo= more text /foo= /foo/textnode=ddd /foo/textnode /foo/textnode/@a=bv /foo/textnode=dsss /foo= /foo= asfdasdsa <foo> sdfsdfdsf <bar> /foo= xml2 < q.xml | grep textnode | sed 's!/foo!/bar/baz!' | 2xml <bar><baz><textnode>ddd</textnode><textnode a="bv">dsss</textnode></baz></bar> P.S. There are also html2 / 2html. A: To Joseph Holsten's excellent list, I add the xpath command-line script which comes with Perl library XML::XPath. A great way to extract information from XML files: xpath -q -e '/entry[@xml:lang="fr"]' *xml A: You can use xmllint: xmllint --xpath //title books.xml Should be bundled with most distros, and is also bundled with Cygwin. $ xmllint --version xmllint: using libxml version 20900 See: $ xmllint Usage : xmllint [options] XMLfiles ... Parse the XML files and output the result of the parsing --version : display the version of the XML library used --debug : dump a debug tree of the in-memory document ... --schematron schema : do validation against a schematron --sax1: use the old SAX1 interfaces for processing --sax: do not build a tree but work just at the SAX level --oldxml10: use XML-1.0 parsing rules before the 5th edition --xpath expr: evaluate the XPath expression, inply --noout A: I've found xmlstarlet to be pretty good at this sort of thing. http://xmlstar.sourceforge.net/ Should be available in most distro repositories, too. An introductory tutorial is here: http://www.ibm.com/developerworks/library/x-starlet.html A: Grep Equivalent You can define a bash function, say "xp" ("xpath") that wraps some python3 code. To use it you need to install python3 and python-lxml. Benefits: * *regex matching which you lack in e.g. xmllint. *Use as a filter (in a pipe) on the commandline It's easy and powerful to use like this: xmldoc=$(cat <<EOF <?xml version="1.0" encoding="utf-8"?> <job xmlns="http://www.sample.com/">programming</job> EOF ) selection='//*[namespace-uri()="http://www.sample.com/" and local-name()="job" and re:test(.,"^pro.*ing$")]/text()' echo "$xmldoc" | xp "$selection" # prints programming xp() looks something like this: xp() { local selection="$1"; local xmldoc; if ! [[ -t 0 ]]; then read -rd '' xmldoc; else xmldoc="$2"; fi; python3 <(printf '%b' "from lxml.html import tostring\nfrom lxml import etree\nfrom sys import stdin\nregexpNS = \"http://exslt.org/regular-expressions\"\ntree = etree.parse(stdin)\nfor e in tree.xpath('""$selection""', namespaces={'re':regexpNS}):\n if isinstance(e, str):\n print(e)\n else:\n print(tostring(e).decode('UTF-8'))") <<< "$xmldoc" } Sed Equivalent Consider using xq which gives you the full power of the jq "programming language". 
If you have python-pip installed, you can install xq with pip install yq. In the example below we replace "Keep Accounts" with "Keep Accounts 2":

xmldoc=$(cat <<'EOF'
<resources>
    <string name="app_name">Keep Accounts</string>
    <string name="login">"login"</string>
    <string name="login_password">"password:"</string>
    <string name="login_account_hint">input to login</string>
    <string name="login_password_hint">input your password</string>
    <string name="login_fail">login failed</string>
</resources>
EOF
)

echo "$xmldoc" | xq '.resources.string = ([.resources.string[]|select(."#text" == "Keep Accounts") ."#text" = "Keep Accounts 2"])' -x

A: JEdit has a plugin called "XQuery" which provides querying functionality for XML documents. Not quite the command line, but it works!
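Since several answers in this thread recommend xmlstarlet, it is worth spelling out its direct sed equivalent, the ed subcommand. A minimal sketch (the file and element names here are hypothetical):

# update the text of every <price> element to 9.99
xmlstarlet ed -u '//price' -v '9.99' books.xml
# delete every <draft> element, editing the file in place
xmlstarlet ed -L -d '//draft' books.xml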
{ "language": "en", "url": "https://stackoverflow.com/questions/91791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "149" }
Q: Order SharePoint search results by more columns I'm using a FullTextSqlQuery in SharePoint 2007 (MOSS) and need to order the results by two columns:

SELECT WorkId FROM SCOPE() ORDER BY Author ASC, Rank DESC

However, it seems that only the first column from ORDER BY is taken into account when returning results. In this case the results are ordered correctly by Author, but not by Rank. If I change the order, the results will be ordered by Rank, but not by Author. I had to resort to my own sorting of the results, which I don't like very much. Does anybody have a solution to this?
Edit: Unfortunately it also doesn't accept expressions in the ORDER BY clause (SharePoint throws an exception). My guess is that even if the query looks like legitimate SQL, it is parsed somehow before being served to the SQL server. I tried to catch the query with SQL Profiler, but to no avail.
Edit 2: In the end I used ordering by a single column (Author in my case, since it's the most important) and did the second ordering in code on the TOP N of the results. It works well enough for the project, but leaves a bad feeling of kludgy code.
A: Microsoft finally posted a knowledge base article about this issue: "When using RANK in the ORDER BY clause of a SharePoint Search query, no other properties should be used" http://support.microsoft.com/kb/970830
Symptom: When using RANK in the ORDER BY clause of a SharePoint Search query, only the first ORDER BY column is used in the results.
Cause: RANK is a special property that is ranked in the full text index and hence cannot be used with other managed properties.
Resolution: Do not use multiple properties in conjunction with the RANK property.
A: Rank is a special column in MOSS FullTextSqlQuery that gives a numeric value to the rank of each result. That value will be different for each query, and is relative to the other results for that particular query. Because of this, rank should have a unique value for each result, and sorting by rank then author would be the same as just sorting by rank. I would try sorting on another column instead of rank to see if results come back as you expect; if so, your trouble could be related to the way MOSS is ranking the results, which will vary for each unique query. Also, you are right: the query looks like SQL, but it is not the query actually passed to the SQL server. It is special Microsoft Enterprise Search SQL Query syntax.
A: I, too, am experiencing the same problem with FullTextSqlQuery and MOSS 2007, where only the first column in a multi-column "ORDER BY" is respected. I entered this topic in the MSDN Forums for SharePoint Search, but have not received any replies: http://social.msdn.microsoft.com/Forums/en-US/sharepointsearch/thread/489b4f29-4155-4c3b-b493-b2fad687ee56
A: I have no experience in SharePoint, but if it is the case that only one ORDER BY clause is being honored, I would change it to an expression rather than a column. Assuming "Rank" is a numeric column with a maximum value of 10, the following may work:

SELECT WorkId FROM SCOPE() ORDER BY AUTHOR + (10 - Rank) ASC
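If you end up doing the secondary sort in code, as described in Edit 2, ADO.NET can handle the two-column ordering for you once the results are in a DataTable. A minimal sketch (GetSearchResults is a hypothetical helper that executes the FullTextSqlQuery and copies the result set into a DataTable with Author and Rank columns):

DataTable results = GetSearchResults();            // hypothetical: run query, fill a DataTable
results.DefaultView.Sort = "Author ASC, Rank DESC"; // in-memory, two-column sort
foreach (DataRowView row in results.DefaultView)
{
    // rows now come back in the desired order
}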
{ "language": "en", "url": "https://stackoverflow.com/questions/91800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What database privileges does a Wordpress Blog really need? I am setting up a few Wordpress blog sites. I have created a user in mysql that wordpress will use to access its database. The docs say to give this user all privileges on the database. Does it really need full privileges? I expect not, so does anyone know the min set of privileges that it really needs?
A: I'm no Wordpress expert, but I would recommend it does actually have all privileges apart from GRANT. It will need to be able to create tables and insert/update etc. Several plugins use their own tables, which they create on the fly if they do not exist.
A: I grant:
*ALTER
*CREATE
*CREATE TEMPORARY TABLES
*DELETE
*DROP
*INDEX
*INSERT
*LOCK TABLES
*SELECT
*UPDATE
Hope that helps anyone else that looks into this.
A: grant select, insert, delete, update, create, drop, alter on myblog
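Putting the privilege list above into a complete statement, a sketch (the database name, user name, host and password are hypothetical; on MySQL 5.x the user can be created in the same statement with IDENTIFIED BY, while newer versions require a separate CREATE USER):

GRANT ALTER, CREATE, CREATE TEMPORARY TABLES, DELETE, DROP, INDEX,
      INSERT, LOCK TABLES, SELECT, UPDATE
ON wordpress.* TO 'wpuser'@'localhost' IDENTIFIED BY 'secret';
FLUSH PRIVILEGES;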
{ "language": "en", "url": "https://stackoverflow.com/questions/91805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Is there a pretty printer for python data? Working with python interactively, it's sometimes necessary to display a result which is some arbitrarily complex data structure (like lists with embedded lists, etc.) The default way to display them is just one massive linear dump which just wraps over and over and you have to parse carefully to read it. Is there something that will take any python object and display it in a more rational manner? e.g.

[0, 1,
 [a, b, c],
 2, 3, 4]

instead of:

[0, 1, [a, b, c], 2, 3, 4]

I know that's not a very good example, but I think you get the idea.
A: In addition to pprint.pprint, pprint.pformat is really useful for making readable __repr__s. My complex __repr__s usually look like so:

def __repr__(self):
    from pprint import pformat
    return "<ClassName %s>" % pformat({"attrs": self.attrs,
                                       "that_i": self.that_i,
                                       "care_about": self.care_about})

A:

from pprint import pprint
a = [0, 1, ['a', 'b', 'c'], 2, 3, 4]
pprint(a)

Note that for a short list like my example, pprint will in fact print it all on one line. However, for more complex structures it does a pretty good job of pretty printing data.
A: Another good option is to use IPython, which is an interactive environment with a lot of extra features, including automatic pretty printing, tab-completion of methods, easy shell access, and a lot more. It's also very easy to install. IPython tutorial
A: Sometimes YAML can be good for this.

import yaml
a = [0, 1, ['a', 'b', 'c'], 2, 3, 4]
print yaml.dump(a)

Produces:

- 0
- 1
- [a, b, c]
- 2
- 3
- 4
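pprint also takes formatting parameters that help with deeply nested data; a small sketch (the data here is made up):

from pprint import pprint
nested = [0, [1, [2, [3, [4]]]], {'key': range(20)}]
pprint(nested, width=40)   # wrap output at 40 columns instead of the default 80
pprint(nested, depth=2)    # collapse anything nested deeper than 2 levels into '...'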
{ "language": "en", "url": "https://stackoverflow.com/questions/91810", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: What's the use/meaning of the @ character in variable names in C#? I discovered that you can start your variable name with a '@' character in C#. In my C# project I was using a web service (I added a web reference to my project) that was written in Java. One of the interface objects defined in the WSDL had a member variable with the name "params". Obviously this is a reserved word in C#, so you can't have a class with a member variable with the name "params". The proxy object that was generated contained a property that looked like this:

public ArrayList @params
{
    get { return this.paramsField; }
    set { this.paramsField = value; }
}

I searched through the VS 2008 C# documentation but couldn't find anything about it. Also searching Google didn't give me any useful answers. So what is the exact meaning or use of the '@' character in a variable/property name?
A: It just lets you use a reserved word as a variable name. Not recommended IMHO (except in cases like you have).
A: It simply allows you to use reserved words as variable names. I wanted a var called event the other day. I was going to go with _event instead, but my colleague reminded me that I could just call it @event instead.
A: In C# the at (@) character is used to denote literals that explicitly do not adhere to the relevant rules in the language spec. Specifically, it can be used for variable names that clash with reserved keywords (e.g. you can't use params but you can use @params instead, same with out/ref/any other keyword in the language specification). Additionally it can be used for unescaped string literals; this is particularly relevant with path constants, e.g. instead of path = "c:\\temp\\somefile.txt" you can write path = @"c:\temp\somefile.txt". It's also really useful for regular expressions.
A: Straight from the C# Language Specification, Identifiers (C#):
The prefix "@" enables the use of keywords as identifiers, which is useful when interfacing with other programming languages. The character @ is not actually part of the identifier, so the identifier might be seen in other languages as a normal identifier, without the prefix. An identifier with an @ prefix is called a verbatim identifier.
A: Another use case is extension methods. The first, special parameter can be distinguished to denote its real meaning with the @this name. An example:

public static TValue GetValueOrDefault<TKey, TValue>(
    this IDictionary<TKey, TValue> @this,
    TKey key,
    TValue defaultValue)
{
    if (!@this.ContainsKey(key))
    {
        return defaultValue;
    }
    return @this[key];
}

A: Unlike Perl's sigils, an @ prefix before a variable name in C# has no meaning. If x is a variable, @x is another name for the same variable.

> string x = "abc";
> Object.ReferenceEquals(x, @x).Dump();
True

But the @ prefix does have a use, as you've discovered: you can use it to clarify variable names that C# would otherwise reject as illegal.

> string string;
Identifier expected; 'string' is a keyword
> string @string;

A: The @ symbol allows you to use reserved keywords for a variable name, like @int, @string, @double etc. For example:

string @public = "Reserved Keyword used for me and its fine";

The above code works fine, but the code below will not:

string public = "This will not compile";

A: If we use a keyword as the name for an identifier, we get a compiler error: "identifier expected, 'Identifier Name' is a keyword". To overcome this error, prefix the identifier with "@". Such identifiers are verbatim identifiers.
The character @ is not actually part of the identifier, so the identifier might be seen in other languages as a normal identifier, without the prefix.
A: You can use it to use reserved keywords as variable names, like:

int @int = 3;

The compiler ignores the @ and compiles the variable as int. It is not a common practice to use it, though.
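Tying this back to the generated proxy in the question, calling code has to use the verbatim identifier in exactly the same way. A sketch (the proxy type name is hypothetical; ArrayList comes from System.Collections):

PersonProxy p = new PersonProxy();
p.@params = new ArrayList();   // '@' lets us assign to the keyword-named property
p.@params.Add("value");
ArrayList copy = p.@params;    // reads work the same way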
{ "language": "en", "url": "https://stackoverflow.com/questions/91817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "332" }
Q: Google App Engine: how can I programmatically access the properties of my Model class? I have a model class:

class Person(db.Model):
    first_name = db.StringProperty(required=True)
    last_name = db.StringProperty(required=True)

I have an instance of this class in p, and string s contains the value 'first_name'. I would like to do something like:

print p[s]

and

p[s] = new_value

Both of which result in a TypeError. Does anybody know how I can achieve what I would like?
A: If the model class is sufficiently intelligent, it should recognize the standard Python ways of doing this. Try:

getattr(p, s)
setattr(p, s, new_value)

There is also hasattr available.
A: With much thanks to Jim, the exact solution I was looking for is:

p.properties()[s].get_value_for_datastore(p)

To all the other respondents, thank you for your help. I also would have expected the Model class to implement the python standard way of doing this, but for whatever reason, it doesn't.
A:

getattr(p, s)
setattr(p, s, new_value)

A: Try:

p.model_properties()[s].get_value_for_datastore(p)

See the documentation.
A:

p.first_name = "New first name"
p.put()

or

p = Person(first_name="Firsty", last_name="Lasty")
p.put()
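Combining the answers above, you can also walk every property of an instance generically; a small sketch built on the same Person model:

for name, prop in Person.properties().items():
    print name, '=', prop.get_value_for_datastore(p)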
{ "language": "en", "url": "https://stackoverflow.com/questions/91821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: FitNesse for Delphi 2006 / Delphi 2007 / Delphi 2009 Is there a version of FitNesse that works on Delphi 2006/2007/2009? If so, where can I find it? Are there any other programs like FitNesse that work on Delphi 2006?
A: FitNesse has support for Delphi. See the FitServers page at fitnesse.org.
A: EDIT: The Delphi fit server at delphixtreme now works. The code is saved here.
{ "language": "en", "url": "https://stackoverflow.com/questions/91826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Detecting Web.Config Authentication Mode Say I have the following web.config:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <system.web>
        <authentication mode="Windows"></authentication>
    </system.web>
</configuration>

Using ASP.NET C#, how can I detect the Mode value of the Authentication tag?
A: You can also get the authentication mode by using the static ConfigurationManager class to get the section, and then the enum AuthenticationMode:

AuthenticationMode authMode = ((AuthenticationSection) ConfigurationManager.GetSection("system.web/authentication")).Mode;

The difference between WebConfigurationManager and ConfigurationManager
If you want to retrieve the name of the constant in the specified enumeration, you can do this by using the Enum.GetName(Type, Object) method:

Enum.GetName(typeof(AuthenticationMode), authMode); // e.g. "Windows"

A: Try Context.User.Identity.AuthenticationType
Go for PB's answer, folks.
A: The Mode property from the authentication section: AuthenticationSection.Mode Property (System.Web.Configuration). And you can even modify it:

// Get the current Mode property.
AuthenticationMode currentMode = authenticationSection.Mode;
// Set the Mode property to Windows.
authenticationSection.Mode = AuthenticationMode.Windows;

This article describes how to get a reference to the AuthenticationSection.
A: Import the System.Web.Configuration namespace and do something like:

var configuration = WebConfigurationManager.OpenWebConfiguration("/");
var authenticationSection = (AuthenticationSection)configuration.GetSection("system.web/authentication");
if (authenticationSection.Mode == AuthenticationMode.Forms)
{
    //do something
}

A: In ASP.Net Core you can use this:

public Startup(IHostingEnvironment env, IConfiguration config)
{
    var enabledAuthTypes = config["IIS_HTTPAUTH"].Split(';').Where(l => !String.IsNullOrWhiteSpace(l)).ToList();
}

A: Use an XPath query //configuration/system.web/authentication[mode]?

protected void Page_Load(object sender, EventArgs e)
{
    XmlDocument config = new XmlDocument();
    config.Load(AppDomain.CurrentDomain.SetupInformation.ConfigurationFile);
    XmlNode node = config.SelectSingleNode("//configuration/system.web/authentication");
    this.Label1.Text = node.Attributes["mode"].Value;
}
{ "language": "en", "url": "https://stackoverflow.com/questions/91831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: D-Editor with debugging I've been programming a while in D (http://www.digitalmars.com/d/) now. I prefer it to Java because it is faster. However, I have not found an editor that supports code completion and debugging (Step-Over, Step-Into, Breakpoints, ...). Do you have any suggestions? P.S.: gdb did not work.
A: Descent, the Eclipse plugin, should support both (if you have a D-supporting debugger installed). I have to admit I haven't tried it in a long time though, and when I did, debugging did not work, using gdb. See also this question. Personally I use Vim, which currently provides neither completion nor debugging, although I know a completion engine was started once.
A: I suggest you try the excellent Code::Blocks IDE. It has very good support for D (it even automatically recognizes DMD and/or GDC D compilers). Another alternative has already been mentioned above: Descent. I haven't used Descent because whenever I tried it I had problems and at some point I gave up (this does not mean it is bad, it means I am just too lazy to figure out what the problems were). C::B uses GDB, so I guess (not sure, did not try) you can use a patched GDB to debug your code.
A: Under Linux I use Eclipse (+Descent) or gEdit as IDE and use gdb as debugger.
A: I use Descent as well. I don't use its debugger bit, but that is because I'm editing on a Windows desktop and building/running/debugging on a Linux server.
{ "language": "en", "url": "https://stackoverflow.com/questions/91834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Multi tenant architecture and NHibernate Could anyone explain to me, finally, what is the best strategy to implement transparent and fluent support for multi-tenant functionality in an NHibernate-powered domain model? I'm looking for a way to keep the domain logic as isolated as possible from the multi-tenant stuff, like filtering by TenantID etc.
A: The simplest approach is to use different databases for each client. Implementing multi-tenanting in this manner allows you to effectively write a single-tenant application and only worry about the multi-tenanting at the point where you create / retrieve the session. I haven't delved deep into the details as yet (I need to do something similar in a few months), but I think the easiest way to manage which database a session is connected to is via a custom ISessionFactory implementation that can determine which connection to use (based on an external aspect such as the host portion of the request url). I have seen at least one post around the net somewhere discussing this, but I cannot find the link at this time. If you are using Castle Windsor, have a look at the NHibernate integration facility. This supports the concept of multiple (named) session factories, which would allow you to have a session factory per client. The integration facility provides an ISessionManager interface which allows you to open a session on a named session factory (as well as providing per-request session semantics for web applications). Anything requiring access to the session could simply take an ISession constructor parameter, and you could create a factory that takes an ISessionManager as a constructor parameter. Your factory could then open a session on the appropriate named session factory by inspecting the request to determine which one should be used.
A: I have also been digging into it for my next project recently. You can implement a custom IConnectionProvider and register it in the configuration with "connection.provider". I suggest you derive from DriverConnectionProvider and override ConnectionString rather than implement a completely custom one. It can be something like this one:

public class ContextualConnectionProvider : DriverConnectionProvider
{
    protected override string ConnectionString
    {
        get { return GetCurrentTenantDatabaseConnectionStringInternally(); }
    }

    public override void Configure(IDictionary<string, string> settings)
    {
        ConfigureDriver(settings);
    }
}

Hope this helps.
A: There are a variety of ways to accomplish it, but the issues of multi-tenancy go deeper than just the data model. I hate to be plugging a product, but check out SaaSGrid by the company I work at, Apprenda. We're a cloud operating system that allows you to write single-tenant SOA apps (feel free to use NHibernate for data access) that automatically injects multi-tenancy into your app. When you publish your app, you can do things like choose a data model (isolated database or shared) and SaaSGrid will deploy accordingly, and your app will run without any code changes: just write code as if it were for a single tenant!
A: I've blogged an approach for multi-tenancy here. The approach is not ideal for all situations; however, it does allow you to largely forget about multi-tenancy issues without having to use a 3rd party product.
A: Ayende has some good blog posts about building multi-tenancy apps. How NHibernate is used for it would depend on the type of multi-tenancy you are going for.
A: Using a shared schema approach requires you to intercept and decorate all of your queries with additional information to restrict the results. NHibernate provides interceptors to do this, and event listeners are also available from NHibernate 2.0 Alpha 1. See http://elegantcode.com/2008/05/15/implementing-nhibernate-interceptors/ and http://www.codinginstinct.com/2008/04/nhibernate-20-events-and-listeners.html for discussions on these. Also have a look at Ayende's Rhino Security component, as he does a lot of work in this to modify queries with additional restrictions based on security descriptors. You can browse the source at https://rhino-tools.svn.sourceforge.net/svnroot/rhino-tools/trunk/security
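A minimal sketch of the session-factory-per-tenant idea from the first answer (tenant resolution and the connection string lookup are hypothetical, and locking is omitted for brevity):

public class TenantSessionSource
{
    private readonly IDictionary<string, ISessionFactory> factories =
        new Dictionary<string, ISessionFactory>();

    public ISession OpenSession(string tenantId)
    {
        ISessionFactory factory;
        if (!factories.TryGetValue(tenantId, out factory))
        {
            // NHibernate.Cfg.Configuration: same mappings, per-tenant connection string
            Configuration cfg = new Configuration().Configure();
            cfg.SetProperty("connection.connection_string", GetConnectionStringFor(tenantId));
            factory = cfg.BuildSessionFactory();
            factories[tenantId] = factory;
        }
        return factory.OpenSession();
    }

    private string GetConnectionStringFor(string tenantId)
    {
        // hypothetical lookup, e.g. from a master database or a config file
        return ConfigurationManager.ConnectionStrings[tenantId].ConnectionString;
    }
}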
{ "language": "en", "url": "https://stackoverflow.com/questions/91840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to get all datatype sizes and function stack footprint sizes in a C/C++ project? I have a large inherited C/C++ project. Are there any good tools or techniques to produce a report on the "sizeof" of all the datatypes, and a breakdown of the stack footprints of each function in such a project?
A: I'm curious to know why you want to do this, but that's merely a curiosity. Determining the sizeof for every class used should be simple, unless they've been templated, in which case you'd have to check every instantiation, also. Likewise, determining the per-call sizeof on a function is simple: it's a sizeof on each passed parameter plus some function overhead. To determine the full memory usage of the whole program, if it's not all statically defined, couldn't be done without a runtime profiler. Writing a shell script that would collect all the class names into a file would be pretty simple. That file could be constructed as a .cpp file that was a series of calls to sizeof on each class. If the file also #included each header file, it could be compiled and run to get an output of the memory footprint of just the classes. Likewise, culling all of the function definitions to see when they're not using reference or pointer arguments (i.e. copying the entire class instance onto the stack) should be pretty straightforward. All this goes to say that I know of no existing tool, but writing one shouldn't be difficult.
A: I'm not aware of any tools, but if you're working under MSVC you can use the DIA SDK to extract size information from .PDB files. Sadly, this won't work for stack footprints IIRC.
A: I'm not sure if the concept of the stack footprint actually exists with modern compilers. That is to say, I think that determining the amount of stack space used depends on the branches taken, which in turn depends on input parameters, and in general requires solving the halting problem.
A: I am looking for the same information about stack footprint for functions, and I don't believe what warren said is true. Yes, part of what impacts the stack in a function is the parameters, but I've also found that every local variable in a function, regardless of the scoping of said variable, is used to determine the amount of stack space to reserve for the function. In the particular poor code example I am working with, there are >200 local class instances, each guarded by if (blah-blah) clauses, but the stack space reserved is modified by these guarded local variables. I know what I need is to be able to read the function prologue for each method to determine the amount of space being reserved for the function, now how would I do that....?
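A sketch of the generated-file approach described in the first answer: a script collects class names from the headers and emits a .cpp like the one below, which is then compiled and run (the umbrella header and class names are hypothetical):

// sizes.cpp -- generated, e.g., by grepping 'class X' out of the project headers
#include <iostream>
#include "project_headers.h" // hypothetical header pulling in all project types

int main()
{
    std::cout << "Foo: " << sizeof(Foo) << '\n';
    std::cout << "Bar: " << sizeof(Bar) << '\n';
    // ... one line per discovered type
    return 0;
}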
{ "language": "en", "url": "https://stackoverflow.com/questions/91849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Software Design Description Practise How many people actually write an SDD document before writing a single line of code? How do you handle large CSCIs? What standard do you use for SDD content? What tailoring have you done?
A: I certainly have, historically and on recent projects. Years ago I worked in organisations where templates were everything. Then I worked other places where the templates were looser or non-existent or didn't fit the projects I was working on. Now the content of the software design is pretty much governed by what I need to describe to get the idea across to the audience. "Before writing a single line of code" there wouldn't be a lot of detail. The documents I produce before I start coding are meant to get the idea of what we need to build across to the affected teams and senior management, so they introduce high-level architecture, functionality, technologies, risks and scope. Those last two are really important. The rest is to show other teams where you need to interface with them and to leave managers with a lingering notion that cool stuff is happening.
A: Most big software companies have their own practices. For example, Motorola has detailed documentation for every aspect of the software development process. There are standard templates for each type of document. Having strict standards makes it possible to effectively maintain a huge number of documents and integrate them with different tools. Each document obtains a tracking number from a special document-tracking system. They even have a system (last time I saw it, it was at an early stage of development) for automatic requirements tracking: you can say which line of code relates to a given requirement/design guideline.
A: I would suppose that most people who write SDD documents and use terminology like CSCI have to be using a specific software development methodology and most likely are working for some serious government customer. They usually tend to take their preparations quite seriously and the documents are ready and approved before any development starts. In an Agile process the development and the design document could be developed in parallel. It means that there will be plenty of refactoring to be done, but it usually delivers very good results in the end. In more formal processes (like RUP) a SAD document is mostly created during the elaboration/prototyping phase based on the team's research.
{ "language": "en", "url": "https://stackoverflow.com/questions/91851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Are indexes on temporary tables deleted when the table is deleted? Would the following SQL also remove the index, or does it have to be removed separately?

CREATE TABLE #Tbl (field int)
CREATE NONCLUSTERED INDEX idx ON #Tbl (field)
DROP TABLE #Tbl

A: It will be removed automatically, as there is nothing left to index. Think of it as a child object in this respect.
A: Yes, they are. You can search the MSSQL help for the CREATE INDEX article, where it is said: "Indexes can be created on a temporary table. When the table is dropped or the session ends, all indexes and triggers are dropped."
A: The drop table will remove the index. DROP INDEX takes the index name and the table name; in this case it would be

DROP INDEX idx ON #Tbl

which can be called if you want to drop the index but leave the table.
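If you want to see this for yourself, you can list the temporary table's indexes from tempdb before and after the drop; a sketch using SQL Server 2005+ catalog views:

CREATE TABLE #Tbl (field int)
CREATE NONCLUSTERED INDEX idx ON #Tbl (field)

SELECT name FROM tempdb.sys.indexes
WHERE object_id = OBJECT_ID('tempdb..#Tbl')   -- lists idx

DROP TABLE #Tbl
-- running the same SELECT now finds no object: the index went with the table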
{ "language": "en", "url": "https://stackoverflow.com/questions/91856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Renaming/Mapping Cygwin Folders Can I safely rename the cygdrive folder? Also, I would like to add other folders at root and map them to folders on windows in the same way as /cygdrive/c maps to my C drive. Is that possible?
A: Yes, you can. See The Cygwin Mount Table in Cygwin's documentation. I have my documents folder mounted as /doc. These mounts end up in the registry and are retained across reboots etc.
A: I wouldn't rename cygdrive as I don't know what that would do, but you can map other directories at root to various windows directories using the mount command.
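For example, to expose a Windows folder at a new root-level path (the paths here are hypothetical; depending on your Cygwin version the mapping is persisted in the registry or in /etc/fstab):

mount "C:/Users/me/Documents" /doc
mount    # with no arguments, lists the current mount points, including the /cygdrive entries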
{ "language": "en", "url": "https://stackoverflow.com/questions/91857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Reading quicken data files Looking for an open source library, for C++, Java, C# or Python, for reading the data from Quicken .qdf files. @Swati: Quicken .qif format is for transfer only and is not kept up to date by the application like the .qdf file is.
A: QDF is proprietary and not really meant for reading by anything other than Quicken, probably for a reason, as it is messy. I would recommend finding a way to export the qdf into an OFX (Open Financial Exchange) or qif file. I have done some financial and QuickBooks automation and I did something similar. The problem is that if you don't export to an exchange format, each version differs and strange things happen for many conditions; since they aren't documented (QDF), it becomes a bad situation for the programmer. OFX is what allows online banking, brokerages and apps like mint.com to securely get financial data. It is a standard and consistent. Finding a way to do this is much better if at all possible.
A: http://www.west-wind.com/Weblog/posts/10491.aspx
And I know one other blog where the author was developing a parser for qfx/qif... lemme look it up... googling hasn't helped yet :(
Update: Found one more: http://blogs.msdn.com/lucabol/archive/2007/08/31/parsing-qif-quicken-files-in-c.aspx
A: Check out http://qif.codeplex.com/ You may want to check the license before use. Thanks
{ "language": "en", "url": "https://stackoverflow.com/questions/91890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Use grep to find content in files and move them if they match I'm using grep to generate a list of files I need to move:

grep -L -r 'Subject: \[SPAM\]' .

How can I pass this list to the mv command and move the files somewhere else?
A: If you want to find and move files that do not match your pattern (move files that don't contain 'Subject \[SPAM\]' in this example) use:

grep -L -Z -r 'Subject: \[SPAM\]' . | xargs -0 -I{} mv {} DIR

The -Z means output with zeros (\0) after the filenames (so spaces are not used as delimiters). xargs -0 means interpret \0 to be the delimiter. The -L means find files that do not match the pattern. Replace -L with -l if you want to move files that match your pattern. Then -I{} mv {} DIR means replace {} with the filenames, so you get mv filenames DIR.
A: This is what helped me:

grep -lir 'spam' ./ | xargs mv -t ../spam

Of course, I was already in the required folder (that's why ./) and moved them to a neighboring folder. But you can change them to any paths. I don't know why the accepted answer didn't work. Also I didn't have spaces and special characters in filenames; maybe it will not work then. Stolen here: Grep command to find files containing text string and move them
A:

mv `grep -L -r 'Subject: \[SPAM\]' .` <directory_path>

Assuming that the grep you wrote returns the file paths you're expecting.
A: This alternative works where xargs is not available:

grep -L -r 'Subject: \[SPAM\]' . | while read f; do mv "$f" out; done

A: Maybe this will work:

mv $(grep -l 'Subject: \[SPAM\]' | awk -F ':' '{print $1}') your_file

A: This is what I use in Fedora Core 12:

grep -l 'Subject: \[SPAM\]' | xargs -I '{}' mv '{}' DIR

A: There are several ways, but here is a slow but failsafe one:

IFS=$'\n'   # set the field separator to line break
for mail in $(grep -L -r 'Subject: \[SPAM\]' .); do
    mv "$mail" your_dir
done
IFS=' '     # restore FS

A: Works perfectly for me: move files which contain the text with the word MYSTRINGTOSEARCH to directory MYDIR.

find . -type f -exec grep -il 'MYSTRINGTOSEARCH' {} \; -exec mv {} MYDIR/ \;

I hope this helps.
A: You can pass the result to the next command by using

grep ... | xargs mv {} destination

Check man xargs for more info.
{ "language": "en", "url": "https://stackoverflow.com/questions/91899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "64" }
Q: Adding urgent priority to mail sent from mailto link I want to add a mailto link on our web page. I want to add an urgent priority to this mail.
A: mailto links just don't support this feature, sorry. However, you could use a specific subject and filter it in your inbox:

<a href="mailto:webmaster@website.com?subject=Urgent">Send an email</a>

A: You can get your priority, but probably not that way. Most mail clients honor subject= and body= in the query string of a mailto: link. Some mail clients treat multiple body= attributes as different lines; others only use the last body. Getting to your point, though: I don't think most clients will let you set priority, and it only takes one client that won't do it to make your system unreliable. The easiest approach is to use mail filters to set priority on inbound mail. The filters should set the priority based on the subject lines, which you can reliably control. If your mail system's filters can't set priority, try sorting to different mail folders.
A: You can't do this with a mailto: link, but you could create a server-side contact form that sends the e-mail out with the proper headers.
A: I guess if such a feature exists, it's browser-specific. From w3's website:
User agents may support MAILTO URL extensions that are not yet Internet standards (e.g., appending subject information to a URL with the syntax "?Subject=my%20subject" where any space characters are replaced by "%20"). Some user agents also support "?Cc=email-address".
A: It can't be done.
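A minimal sketch of the server-side approach in Python: the script behind a contact form sends the message itself and sets the de facto priority headers (the SMTP host and addresses are hypothetical, and how the headers are honored varies by mail client):

import smtplib
from email.mime.text import MIMEText

msg = MIMEText("Please call me back as soon as possible.")
msg['Subject'] = 'Urgent'
msg['From'] = 'webform@example.com'
msg['To'] = 'webmaster@example.com'
msg['X-Priority'] = '1'      # 1 = highest; recognized by Outlook, Thunderbird and others
msg['Importance'] = 'high'   # a second, widely used convention

smtplib.SMTP('localhost').sendmail(msg['From'], [msg['To']], msg.as_string())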
{ "language": "en", "url": "https://stackoverflow.com/questions/91905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How can we share individual rules between .drl files in JBoss Rules? We are using JBoss Rules (a.k.a. Drools) and have several .drl files that each contain several rules. Is there a way to avoid duplication between files, so that we can define common rules that are available to more than one .drl file? Unfortunately, there does not seem to be any kind of include or module facility.
A: There is no way of including rules from another .drl file from within a .drl file. You can, however, add two .drl files to the same ruleBase and they will work as if they were in the same file.

PackageBuilder builder = new PackageBuilder();
builder.addPackageFromDrl( new InputStreamReader( getClass().getResourceAsStream( "common.drl" ) ) );
builder.addPackageFromDrl( new InputStreamReader( getClass().getResourceAsStream( "rules1.drl" ) ) );

RuleBase ruleBase = RuleBaseFactory.newRuleBase();
ruleBase.addPackage( builder.getPackage() );
{ "language": "en", "url": "https://stackoverflow.com/questions/91917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How does the Strategy Pattern work? How does it work, what is it used for and when should one use it?
A: Let's explain the strategy pattern the easy way: You have a class Car() with a method run(), so you use it this way in a pseudo language:

mycar = new Car()
mycar.run()

Now, you may want to change the run() behavior on the fly, while the program is executing. For example, you might want to simulate a motor failure or the use of a "boost" button in a video game. There are several ways to do this simulation: using conditional statements and a flag variable is one way. The strategy pattern is another: it delegates the behavior of the run() method to another class:

Class Car()
{
    this.motor = new Motor(this)
    // passing "this" is important for the motor so it knows what it is running

    method run()
    {
        this.motor.run()
    }

    method changeMotor(motor)
    {
        this.motor = motor
    }
}

If you want to change the car's behavior, you can just change the motor. (Easier in a program than in real life, right? ;-) ) It's very useful if you have a lot of complex states: you can change and maintain them much more easily.
A: A closely related pattern is the Delegate pattern; in both cases, some of the work is passed to some other component. If I understand correctly, the difference between these patterns is this (and please correct me if I'm wrong):
*In the Delegate pattern, the delegate is instantiated by the enclosing (delegating) class; this allows for code reuse by composition rather than inheritance. The enclosing class may be aware of the delegate's concrete type, e.g. if it invokes its constructor itself (as opposed to using a factory).
*In the Strategy pattern, the component that executes the strategy is a dependency provided to the enclosing (using) component via its constructor or a setter (according to your religion). The using component is totally unaware of what strategy is in use; the strategy is always invoked via an interface.
Anyone know any other differences?
A: Directly from the Strategy Pattern Wikipedia article:
The strategy pattern is useful for situations where it is necessary to dynamically swap the algorithms used in an application. The strategy pattern is intended to provide a means to define a family of algorithms, encapsulate each one as an object, and make them interchangeable. The strategy pattern lets the algorithms vary independently from clients that use them.
A: To add to the already magnificent answers: the strategy pattern has a strong similarity to passing a function (or functions) to another function. In the strategy pattern this is done by wrapping said function in an object followed by passing the object. Some languages can pass functions directly, so they don't need the pattern at all. But other languages can't pass functions but can pass objects; the pattern then applies. Especially in Java-like languages, you will find that the type zoo of the language is pretty small and that your only way to extend it is by creating objects. Hence most solutions to problems are to come up with a pattern: a way to compose objects to achieve a specific goal. Languages with richer type zoos often have simpler ways of going about the problems, but richer types also mean you have to spend more time learning the type system. Languages with dynamic typing discipline often get a sneaky way around the problem as well.
A: Problem
The strategy pattern is used to solve problems that might (or it is foreseen might) be implemented or solved by different strategies, and that possess a clearly defined interface for such cases. Each strategy is perfectly valid on its own, with some of the strategies being preferable in certain situations that allow the application to switch between them during runtime.
Code Example

namespace StrategyPatterns
{
    // Interface definition for a Sort algorithm
    public interface ISort
    {
        void Sort(List<string> list);
    }

    // QuickSort implementation
    public class CQuickSorter : ISort
    {
        public void Sort(List<string> list)
        {
            // Here will be the actual implementation
        }
    }

    // BubbleSort implementation
    public class CBubbleSort : ISort
    {
        public void Sort(List<string> list)
        {
            // The actual implementation of the sort
        }
    }

    // MergeSort implementation
    public class CMergeSort : ISort
    {
        public void Sort(List<string> list)
        {
            // Again the real implementation comes here
        }
    }

    public class Context
    {
        private ISort sorter;

        public Context(ISort sorter)
        {
            // We pass to the context the strategy to use
            this.sorter = sorter;
        }

        public ISort Sorter
        {
            get { return sorter; }
        }
    }

    public class MainClass
    {
        static void Main()
        {
            List<string> myList = new List<string>();
            myList.Add("Hello world");
            myList.Add("Another item");
            myList.Add("Item item");

            // Sort using the QuickSort strategy
            Context cn = new Context(new CQuickSorter());
            cn.Sorter.Sort(myList);

            myList.Add("This one goes for the mergesort");
            // Sort using the merge sort strategy
            cn = new Context(new CMergeSort());
            cn.Sorter.Sort(myList);
        }
    }
}

A:
*What is a Strategy? A strategy is a plan of action designed to achieve a specific goal;
*"Define a family of algorithms, encapsulate each one, and make them interchangeable. Strategy lets the algorithm vary independently from clients that use it." (Gang of Four);
*Specifies a set of classes, each representing a potential behaviour. Switching between those classes changes the application behaviour (the Strategy);
*This behaviour can be selected at runtime (using polymorphism) or design time;
*Capture the abstraction in an interface, bury implementation details in derived classes;
*An alternative to the Strategy is to change the application behaviour by using conditional logic (BAD);
*Using this pattern makes it easier to add or remove specific behaviour, without having to recode and retest all or parts of the application;
*Good uses:
  *When we have a set of similar algorithms and need to switch between them in different parts of the application. With the Strategy pattern it is possible to avoid ifs and ease maintenance;
  *When we want to add new methods to a superclass that don't necessarily make sense to every subclass. Instead of using an interface in the traditional way, adding the new method, we use an instance variable that is a subclass of the new Functionality interface. This is known as Composition: instead of inheriting an ability through inheritance, the class is composed with objects with the right ability;
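Illustrating the "passing a function" point above in C#: a delegate can stand in for the strategy interface, so each strategy is just a value chosen at runtime. A sketch (useShortFirst is a hypothetical flag):

// each delegate value is one interchangeable 'strategy'
Comparison<string> byLength = (a, b) => a.Length.CompareTo(b.Length);
Comparison<string> alphabetic = (a, b) => string.Compare(a, b);

List<string> items = new List<string> { "pear", "fig", "banana" };
items.Sort(useShortFirst ? byLength : alphabetic);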
{ "language": "en", "url": "https://stackoverflow.com/questions/91932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "65" }
Q: Performance when checking for duplicates I've been working on a project where I need to iterate through a collection of data and remove entries where the "primary key" is duplicated. I have tried using a List<int> and a Dictionary<int, bool>. With the dictionary I found slightly better performance, even though I never need the Boolean tagged with each entry. My expectation is that this is because a List allows for indexed access and a Dictionary does not. What I was wondering is: is there a better solution to this problem? I do not need to access the entries again, I only need to track what "primary keys" I have seen and make sure I only perform addition work on entries that have a new primary key. I'm using C# and .NET 2.0. And I have no control over fixing the input data to remove the duplicates from the source (unfortunately!). And so you can have a feel for scaling: overall I'm checking for duplicates about 1,000,000 times in the application, but in subsets of no more than about 64,000 that need to be unique.
A: They have added the HashSet class in .NET 3.5. But I guess it will be on par with the Dictionary. If you have fewer than, say, 100 elements, a List will probably perform better.
A: Edit: Never mind my comment. I thought you were talking about C++. I have no idea if my post is relevant in the C# world.
A hash-table could be a tad faster. Binary trees (that's what is used in the dictionary) tend to be relatively slow because of the way the memory gets accessed. This is especially true if your tree becomes very large. However, before you change your data-structure, have you tried to use a custom pool allocator for your dictionary? I bet the time is not spent traversing the tree itself but in the millions of allocations and deallocations the dictionary will do for you. You may see a factor 10 speed-boost just plugging a simple pool allocator into the dictionary template. Afaik boost has a component that can be directly used. Another option: if you know only 64,000 entries in your integers exist, you can write those to a file and create a perfect hash function for it. That way you can just use the hash function to map your integers into the 0 to 64,000 range and index a bit-array. Probably the fastest way, but less flexible. You have to redo your perfect hash function (can be done automatically) each time your set of integers changes.
A: I don't really get what you are asking. Firstly, it is just the opposite of what you say: the dictionary has indexed access (it is a hash table) while the List hasn't. If you already have the data in a dictionary then all keys are unique, and there can be no duplicates. I suspect you have the data stored in another data type and you're storing it into the dictionary. If that's the case, inserting the data will work with two dictionaries:

foreach (int key in keys)
{
    if (MyDataDict.ContainsKey(key))
    {
        if (!MyDuplicatesDict.ContainsKey(key))
            MyDuplicatesDict.Add(key, true);   // seen before: record as a duplicate
    }
    else
        MyDataDict.Add(key, true);             // first occurrence
}

A: If you are checking for uniqueness of integers, and the range of integers is constrained enough, then you could just use an array. For better packing you could implement a bitmap data structure (basically an array, but each int in the array represents 32 ints in the key space by using 1 bit per key). That way if your maximum number is 1,000,000 you only need ~30.5KB of memory for the data structure. Performance of a bitmap would be O(1) (per check), which is hard to beat.
A: There was a question a while back on removing duplicates from an array.
For the purpose of the question, performance wasn't much of a consideration, but you might want to take a look at the answers as they might give you some ideas. Also, I might be off base here, but if you are trying to remove duplicates from the array then a LINQ command like Enumerable.Distinct might give you better performance than something that you write yourself. As it turns out, there is a way to get LINQ working on .NET 2.0, so this might be a route worth investigating.
A: If you're going to use a List, use the BinarySearch:

// initialize to a size if you know your set size
List<int> FoundKeys = new List<int>( 64000 );
Dictionary<int,int> FoundDuplicates = new Dictionary<int,int>();

foreach ( int Key in MyKeys )
{
    // this is an O(log N) operation
    int index = FoundKeys.BinarySearch( Key );
    if ( index < 0 )
    {
        // if the Key is not in our list,
        // index is the two's complement of the next value that is in the list,
        // i.e. the position it should occupy, and we maintain sorted-ness!
        FoundKeys.Insert( ~index, Key );
    }
    else
    {
        if ( FoundDuplicates.ContainsKey( Key ) )
        {
            FoundDuplicates[Key]++;
        }
        else
        {
            FoundDuplicates.Add( Key, 1 );
        }
    }
}

You can also use this for any type for which you can define an IComparer by using an overload: BinarySearch( T item, IComparer<T> );
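For reference, the HashSet<T> approach mentioned in the first answer boils down to this (a .NET 3.5 sketch; ProcessNewEntry is a hypothetical stand-in for the per-new-key work):

HashSet<int> seen = new HashSet<int>();
foreach (int key in keys)
{
    if (seen.Add(key))          // Add returns false if the key was already present
    {
        ProcessNewEntry(key);   // only reached for first occurrences
    }
}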
{ "language": "en", "url": "https://stackoverflow.com/questions/91933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Windows API spying/hijacking techniques I'm interested in using API spying/hijacking to implement some core features of a project I'm working on. It's been mentioned in this question as well, but that wasn't really on topic, so I figured it'd be better with a question of its own. I'd like to gather as much information as possible on this: different techniques/libraries (MS Detours, IAT patching) or other suggestions. Also, it'd be especially interesting to know if someone has any real production experience of using such techniques: can they be made stable enough for production code, or is this strictly a technique for research? Does it work properly over multiple versions of windows? How bug-prone is it? Personal experiences and external links both appreciated.
A: I implemented syringe.dll (L-GPL) instead of MS Detours (we did not like the license requirements or the huge payment for x64 support). It works fantastically well; I ported it from Win32 to Win64, and we have been using it in our off-the-shelf commercial applications for around 2 years now. We use it for very simple reasons really: to provide a presentation framework for re-packing and re-branding the same compiled application as many different products. We do general filtering and replacement for strings, general resources, toolbars, and menus. Being L-GPL'd, we supply the source, copyright etc, and only dynamically link to the library.
A: Hooking standard WinAPI functions is relatively safe since they're not going to change much in the near future, if at all, since Microsoft does its best to keep the WinAPI backwards compatible between versions. Standard WinAPI hooking, I'd say, is generally stable and safe. Hooking anything else, as in the target program's internals, is a different story. Regardless of the target program, the hooking itself is usually a solid practice. The weakest link of the process is usually finding the correct spot, and hanging on to it. The smallest change in the application can and will change the addresses of functions, not to mention dynamic libraries and so forth. In gamehacking, where hooking is standard practice, this has been defeated to some degree with "sigscanning", a technique first developed by LanceVorgin on the somewhat infamous MPC boards. It works by scanning the executable image for the static parts of a function, the actual instruction bytes that won't change unless the function's action is modified. Sigscanning is obviously better than using static address tables, but it will also fail eventually, when the target application is changed enough. An example implementation of sigscanning in C++ can be found here.
A: I've been using standard IAT hooking techniques for a few years now and it works well; it has been nice and stable and ported to x64 with no problems. The main problems I've had have been more to do with how I inject the hooks in the first place: it took a fair while to work out how best to suspend managed processes at the 'right' point in their start up so that injection was reliable and early enough for me. My injector uses the Win32 debug API, and whilst this made it easy to suspend unmanaged processes, it took a bit of trial and error to get managed processes suspended at an appropriate time.
My uses for IAT have mostly been for writing test tools. I have a deadlock detection program which is detailed here: http://www.lenholgate.com/blog/2006/04/deadlock-detection-tool-updates.html, a GetTickCount() controlling program which is available for download from here: http://www.lenholgate.com/blog/2006/04/tickshifter-v02.html, and a time-shifting application which is still under development.
A: Something a lot of people forget is that Windows DLLs are compiled as hot-patchable images (MSDN). Hot-patching is the best way to do WinAPI detours, as it is clean and simple, and preserves the original function, meaning no inline assembly needs to be used, only slightly adjusted function pointers. A small hot-patching tutorial can be found here.
{ "language": "en", "url": "https://stackoverflow.com/questions/91935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How do I use groovy to search+replace in XML? How do I use groovy to search+replace in XML? I need something as short/easy as possible, since I'll be giving this code to the testers for their SoapUI scripting. More specifically, how do I turn: <root><data></data></root> into: <root><data>value</data></root> A: Some of the stuff you can do with an XSLT you can also do with some form of 'search & replace'. It all depends on how complex your problem is and how 'generic' you want to implement the solution. To make your own example slightly more generic: xml.replaceFirst("<Mobiltlf>[^<]*</Mobiltlf>", '<Mobiltlf>32165487</Mobiltlf>') The solution you choose is up to you. In my own experience (for very simple problems) using simple string lookups is faster than using regular expressions which is again faster than using a fullblown XSLT transformation (makes sense actually). A: After some frenzied coding i saw the light and did like this import org.custommonkey.xmlunit.Diff import org.custommonkey.xmlunit.XMLUnit def input = '''<root><data></data></root>''' def expectedResult = '''<root><data>value</data></root>''' def xml = new XmlParser().parseText(input) def p = xml.'**'.data p.each{it.value="value"} def writer = new StringWriter() new XmlNodePrinter(new PrintWriter(writer)).print(xml) def result = writer.toString() XMLUnit.setIgnoreWhitespace(true) def xmlDiff = new Diff(result, expectedResult) assert xmlDiff.identical() Unfortunately this will not preserve the comments and metadata etc, from the original xml document, so i'll have to find another way A: I did some some testing with DOMCategory and it's almost working. I can make the replace happen, but some infopath related comments disappear. I'm using a method like this: def rtv = { xml, tag, value -> def doc = DOMBuilder.parse(new StringReader(xml)) def root = doc.documentElement use(DOMCategory) { root.'**'."$tag".each{it.value=value} } return DOMUtil.serialize(root) } on a source like this: <?xml version="1.0" encoding="utf-8"?> <?mso-infoPathSolution name="urn:schemas-microsoft-com:office:infopath:FA_Ansoegning:http---ementor-dk-application-2007-06-22-" href="manifest.xsf" solutionVersion="1.0.0.14" productVersion="12.0.0" PIVersion="1.0.0.0" ?> <?mso-application progid="InfoPath.Document" versionProgid="InfoPath.Document.2"?> <application:FA_Ansoegning xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:application="http://corp.dk/application/2007/06/22/" xmlns:xd="http://schemas.microsoft.com/office/infopath/2003" xmlns:my="http://schemas.microsoft.com/office/infopath/2003/myXSD/200 8-04-14T14:31:48"> <Mobiltlf></Mobiltlf> <E-mail-adresse></E-mail-adresse> </application:FA_Ansoegning> The only thing missing from the result are the <?mso- lines from the result. Anyone with an idea for that? A: That's the best answer so far and it gives the right result, so I'm going to accept the answer :) However, it's a little too large for me. I think i had better explain that the alternative is: xml.replace("<Mobiltlf></Mobiltlf>", <Mobiltlf>32165487</Mobiltlf>") But that's not very xml'y so I thought i'd look for an alternative. Also, I can't be sure that the first tag is empty all the time. 
A: To retain the attributes just modify your little program like this (I've included a sample source to test it): def input = """ <?xml version="1.0" encoding="utf-8"?> <?mso-infoPathSolution name="urn:schemas-microsoft-com:office:infopath:FA_Ansoegning:http---ementor-dk-application-2007-06-22-" href="manifest.xsf" solutionVersion="1.0.0.14" productVersion="12.0.0" PIVersion="1.0.0.0" ?> <?mso-application progid="InfoPath.Document" versionProgid="InfoPath.Document.2"?> <application:FA_Ansoegning xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:application="http://ementor.dk/application/2007/06/22/" xmlns:xd="http://schemas.microsoft.com/office/infopath/2003" xmlns:my="http://schemas.microsoft.com/office/infopath/2003/myXSD/200 8-04-14T14:31:48"> <Mobiltlf type="national" anotherattribute="value"></Mobiltlf> <E-mail-adresse attr="whatever"></E-mail-adresse> </application:FA_Ansoegning> """.trim() def rtv = { xmlSource, tagName, newValue -> regex = "(<$tagName[^>]*>)([^<]*)(</$tagName>)" replacement = "\$1${newValue}\$3" xmlSource = xmlSource.replaceAll(regex, replacement) return xmlSource } input = rtv( input, "Mobiltlf", "32165487" ) input = rtv( input, "E-mail-adresse", "bob@email.com" ) println input Running this script produces: <?xml version="1.0" encoding="utf-8"?> <?mso-infoPathSolution name="urn:schemas-microsoft-com:office:infopath:FA_Ansoegning:http---ementor-dk-application-2007-06-22-" href="manifest.xsf" solutionVersion="1.0.0.14" productVersion="12.0.0" PIVersion="1.0.0.0" ?> <?mso-application progid="InfoPath.Document" versionProgid="InfoPath.Document.2"?> <application:FA_Ansoegning xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:application="http://ementor.dk/application/2007/06/22/" xmlns:xd="http://schemas.microsoft.com/office/infopath/2003" xmlns:my="http://schemas.microsoft.com/office/infopath/2003/myXSD/200 8-04-14T14:31:48"> <Mobiltlf type="national" anotherattribute="value">32165487</Mobiltlf> <E-mail-adresse attr="whatever">bob@email.com</E-mail-adresse> </application:FA_Ansoegning> Note that the matching regexp now contains 3 capturing groups: (1) the start tag (including attributes), (2) whatever is the 'old' content of your tag and (3) the end tag. The replacement string refers to these captured groups via the $i syntax (with backslashes to escape them in the GString). Just a tip: regular expressions are very powerful animals, it's really worthwile to become familiar with them ;-) . A: Three "official" groovy ways of updating XML are described on page http://groovy.codehaus.org/Processing+XML, section "Updating XML". Of that three it seems only DOMCategory way preserves XML comments etc. A: To me the actual copy & search & replace seems like the perfect job for an XSLT stylesheet. In an XSLT you have no problem at all to just copy everything (including the items you're having problems with) and simply insert your data where it is required. You can pass the specific value of your data in via an XSL parameter or you can dynamically modify the stylesheet itself (if you include as a string in your Groovy program). Calling this XSLT to transform your document(s) from within Groovy is very simple. 
I quickly cobbled the following Groovy script together (but I have no doubt it can be written even more simply/compactly):

import javax.xml.transform.TransformerFactory
import javax.xml.transform.stream.StreamResult
import javax.xml.transform.stream.StreamSource

def xml = """
<?xml version="1.0" encoding="utf-8"?>
<?mso-infoPathSolution name="urn:schemas-microsoft-com:office:infopath:FA_Ansoegning:http---ementor-dk-application-2007-06-22-" href="manifest.xsf" solutionVersion="1.0.0.14" productVersion="12.0.0" PIVersion="1.0.0.0" ?>
<?mso-application progid="InfoPath.Document" versionProgid="InfoPath.Document.2"?>
<application:FA_Ansoegning xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:application="http://ementor.dk/application/2007/06/22/" xmlns:xd="http://schemas.microsoft.com/office/infopath/2003" xmlns:my="http://schemas.microsoft.com/office/infopath/2003/myXSD/2008-04-14T14:31:48">
<Mobiltlf></Mobiltlf>
<E-mail-adresse></E-mail-adresse>
</application:FA_Ansoegning>
""".trim()

def xslt = """
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
    <xsl:param name="mobil" select="'***dummy***'"/>
    <xsl:param name="email" select="'***dummy***'"/>
    <xsl:template match="@*|node()">
        <xsl:copy>
            <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
    </xsl:template>
    <xsl:template match="Mobiltlf">
        <xsl:copy>
            <xsl:value-of select="\$mobil"/>
        </xsl:copy>
    </xsl:template>
    <xsl:template match="E-mail-adresse">
        <xsl:copy>
            <xsl:value-of select="\$email"/>
        </xsl:copy>
    </xsl:template>
</xsl:stylesheet>
""".trim()

def factory = TransformerFactory.newInstance()
def transformer = factory.newTransformer(new StreamSource(new StringReader(xslt)))
transformer.setParameter('mobil', '1234567890')
transformer.setParameter('email', 'john.doe@foobar.com')
transformer.transform(new StreamSource(new StringReader(xml)), new StreamResult(System.out))

Running this script produces:

<?xml version="1.0" encoding="UTF-8"?><?mso-infoPathSolution name="urn:schemas-microsoft-com:office:infopath:FA_Ansoegning:http---ementor-dk-application-2007-06-22-" href="manifest.xsf" solutionVersion="1.0.0.14" productVersion="12.0.0" PIVersion="1.0.0.0" ?>
<?mso-application progid="InfoPath.Document" versionProgid="InfoPath.Document.2"?>
<application:FA_Ansoegning xmlns:application="http://ementor.dk/application/2007/06/22/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xd="http://schemas.microsoft.com/office/infopath/2003" xmlns:my="http://schemas.microsoft.com/office/infopath/2003/myXSD/2008-04-14T14:31:48">
<Mobiltlf>1234567890</Mobiltlf>
<E-mail-adresse>john.doe@foobar.com</E-mail-adresse>
</application:FA_Ansoegning>

A: Brilliant! Thank you very much for your assistance :) That solves my problem in a much cleaner and easier way. It's ended up looking like this:

def rtv = { xmlSource, tagName, newValue ->
    regex = "<$tagName>[^<]*</$tagName>"
    replacement = "<$tagName>${newValue}</$tagName>"
    xmlSource = xmlSource.replaceAll(regex, replacement)
    return xmlSource
}

input = rtv( input, "Mobiltlf", "32165487" )
input = rtv( input, "E-mail-adresse", "bob@email.com" )
println input

Since I'm giving this to our testers for use in their testing tool SoapUI, I've tried to "wrap" it, to make it easier for them to copy and paste. This is good enough for my purpose, but it would be perfect if we could add one more "twist". Let's say the input had this in it...

<Mobiltlf type="national" anotherattribute="value"></Mobiltlf>

...and we wanted to retain those two attributes even though we replaced the value. 
Is there a way to use a regexp for that too? A: Check this: http://today.java.net/pub/a/today/2004/08/12/groovyxml.html?page=2
{ "language": "en", "url": "https://stackoverflow.com/questions/91957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to mock object construction? Is there a way to mock object construction using JMock in Java? For example, if I have a method as such:

public Object createObject(String objectType) {
    if (objectType.equals("Integer")) {
        return new Integer(0);
    } else if (objectType.equals("String")) {
        return new String();
    }
    return null;
}

...is there a way to mock out the expectation of the object construction in a test method? I'd like to be able to place expectations that certain constructors are being called, rather than having an extra bit of code to check the type (as it won't always be as convoluted and simple as my example). So instead of:

assertTrue(a.createObject("Integer") instanceof Integer);

I could have an expectation of the certain constructor being called. Just to make it a bit cleaner, and express what is actually being tested in a more readable way. Please excuse the simple example; the actual problem I'm working on is a bit more complicated, but having the expectation would simplify it. For a bit more background: I have a simple factory method, which creates wrapper objects. The objects being wrapped can require parameters which are difficult to obtain in a test class (it's pre-existing code), so it is difficult to construct them. Perhaps closer to what I'm actually looking for is: is there a way to mock an entire class (using CGLib) in one fell swoop, without specifying every method to stub out? The mock is being wrapped in a constructor, so obviously methods can be called on it. Is JMock capable of dynamically mocking out each method? My guess is no, as that would be pretty complicated. But knowing I'm barking up the wrong tree is valuable too :-)

A: The only thing I can think of is to have the create method on a factory object, which you would then mock. But in terms of mocking a constructor call, no. Mock objects presuppose the existence of the object, whereas a constructor presupposes that the object doesn't exist. At least in Java, where allocation and initialization happen together.

A: jmockit can do this. See my answer in https://stackoverflow.com/questions/22697#93675

A: Alas, I think I'm guilty of asking the wrong question. The simple factory I was trying to test looked something like:

public Wrapper wrapObject(Object toWrap) {
    if (toWrap instanceof ClassA) {
        return new Wrapper((ClassA) toWrap);
    } else if (toWrap instanceof ClassB) {
        return new Wrapper((ClassB) toWrap);
    }
    // etc
    else {
        return null;
    }
}

I was asking how to find out if "new ClassAWrapper( )" was called, because the object toWrap was hard to obtain in an isolated test. And the wrapper (if it can even be called that) is kind of weird, as it uses the same class to wrap different objects, just using different constructors[1]. I suspect that if I had asked the question a bit better, I would have quickly received the answer: "You should mock Object toWrap to match the instances you're testing for in different test methods, and inspect the resulting Wrapper object to find that the correct type is returned... and hope you're lucky enough that you don't have to mock out the world to create the different instances ;-)" I now have an okay solution to the immediate problem, thanks!

A: Are you familiar with Dependency Injection? If not, then you certainly would benefit from learning about that concept. 
I guess the good old "Inversion of Control Containers and the Dependency Injection pattern" article by Martin Fowler will serve as a good introduction. With Dependency Injection (DI), you would have a DI container object that is able to create all kinds of classes for you. Then your object would make use of the DI container to instantiate classes, and you would mock the DI container to test that the class creates instances of the expected classes.

A: Dependency Injection or Inversion of Control. Alternatively, use the Abstract Factory design pattern for all the objects that you create. When you are in unit-test mode, inject a testing factory which will tell you what you are creating, then include the assertion code in the testing factory to check the results (inversion of control). To keep your code as clean as possible, create an internal protected interface, and implement the interface (your factory) with the production code as an internal class. Add a static variable of your interface type, initialized to your default factory. Add a static setter for the factory and you are done. In your test code (which must be in the same package, otherwise the internal interface must be public), create an anonymous or internal class with the assertion code and the test code. Then in your test, initialize the target class, assign (inject) the test factory, and run the methods of your target class.

A: I hope there is none. Mocks are supposed to mock interfaces, which have no constructors... just methods. Something seems to be amiss in your approach to testing here. Any reason why you need to test that explicit constructors are being called? Asserting the type of the returned object seems okay for testing factory implementations. Treat createObject as a black box... examine what it returns but don't micromanage how it does it. No one likes that :) Update on the Update: Ouch! Desperate measures for desperate times, eh? I'd be surprised if JMock allows that... as I said, it works on interfaces... not concrete types. So
* Either try and expend some effort on getting those pesky input objects 'instantiable' under the test harness. Go bottom-up in your approach.
* If that is infeasible, manually test it out with breakpoints (I know it sucks). Then stick a "Touch it at your own risk" comment in a visible zone in the source file and move ahead. Fight another day.
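To make the factory-seam idea from the Abstract Factory answer above concrete, here is a minimal Java sketch. Every name in it (Wrapper, WrapperFactory, WrapperCreator) is hypothetical and not taken from the original code:

class Wrapper {
    private final Object wrapped;
    Wrapper(Object wrapped) { this.wrapped = wrapped; }
}

class WrapperCreator {
    // The seam: production code creates wrappers only through this interface.
    interface WrapperFactory {
        Wrapper create(Object toWrap);
    }

    // Default implementation invokes the real constructor.
    private static WrapperFactory factory = new WrapperFactory() {
        public Wrapper create(Object toWrap) {
            return new Wrapper(toWrap);
        }
    };

    // Package-visible setter lets a test in the same package inject its own factory.
    static void setFactory(WrapperFactory f) {
        factory = f;
    }

    public Wrapper wrapObject(Object toWrap) {
        return factory.create(toWrap);
    }
}

In a test you would install an anonymous WrapperFactory that records or asserts on its argument before delegating to the real constructor, call wrapObject, and restore the default factory afterwards. The construction becomes observable without mocking any constructor.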
{ "language": "en", "url": "https://stackoverflow.com/questions/91981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How do I hotkey directly to File Search tab in Eclipse. When I use CTRL+H I end up on the Java Search tab. I would very much like a shortcut to go directly to File Search instead. Is that possible? See image here for what I'm talking about:

A: Another option is to open the search dialog (Ctrl+H), then click Customize and hide the Java and Task search tabs; next time you do Ctrl+H, File Search will be the only one showing, thus it will be selected by default.

A: I've run into this problem before, too. I tried following the advice in the question response given by @Martin to rebind Ctrl+H to "File Search" in Window | Preferences | General | Keys, but for some reason, I don't have a "File Search" entry in the Command column. (I'm running Eclipse 3.3 currently; maybe the "File Search" entry was added in a subsequent release?) Update: As Martin pointed out in a comment on this answer, I didn't have the "Include unbound commands" checkbox checked in the Preferences | Keys dialog, which is why "File Search" wasn't showing up for me. I now have Ctrl+H bound to "File Search", as Martin suggested in his answer on this page, and it works great. Thanks Martin! I ended up working around the original problem by bringing up the Search dialog with Ctrl+H, then clicking the Customize button on the dialog, which brings up a "Search Page Selection" dialog that allows you to hide or show tabs on the Search dialog. I hid the tabs other than "File Search," which causes "File Search" to be activated by default on future uses of Ctrl+H.

A: I actually think the best (and easiest) way is to simply open the search dialog (Ctrl+H), hit Customize, and then select the checkbox for "Remember last page used." Then tab over to File Search once. So long as that is the last search tab you used, it will always open there. The advantage of this is that you don't lose easy access to the other tabs, should you actually need them! (working in Eclipse Kepler).

A: You can just define a key binding that opens the file search:
* Go to Preferences > General > Keys
* Type "file search" in the search box. (If there are no results, and you have a really old Eclipse version, select the Include Unbound Commands check box.)
* Put the caret into the Binding text box and press the key combination you want to use.
You can either re-use the CTRL+H binding (delete the other binding in that case) or define another one (e.g. CTRL+SHIFT+H). To delete the other binding, search for "Open Search Dialog" and click on Unbind Command. Other solution: You could press CTRL+3 in your editor, type in "file s", and press Enter. The next time you press CTRL+3, "File Search" is at the top.

A: I learnt to use a "pseudo-hotkey" ALT+A F (works also as ALT+A ALT+F), which resolves to: "Menu Se[a]rch → [F]ile..." and has the advantage of being always present, without need for reconfiguration.

A: As far as I know, the search window tab depends on the open file you're on when calling the search function. So, for example, if you're on a web.xml file, it will open the "plug-in search" instead of the "java-search". Edit: there is a way to force the default open tab, by assigning a shortcut to the "File Search" action in the "Keys" preference panel.
A: I would like to provide a workaround here: you can 'remember last used page' to avoid opening it over and over again. A: UPDATE: user @muescha, in the comments underneath the question, just pointed out to me that I accidentally answered the wrong question! Nevertheless, it is still a valuable answer (just not to this question), so I'm leaving it. My answer answers the question: How do I use a hotkey directly to search for a File in Eclipse? Ctrl + Shift + R works fantastically! Use asterisks (*) for wildcards. It is very similar to the Ctrl + P fuzzy search in Sublime Text 3. Sample searches using the Ctrl + Shift + R "Open Resource" search in Eclipse: rea *.txt *32*f1*c *3*1*c*h Notice if you just put an asterisk * between every character in the search string it works just like Sublime Text 3's Ctrl + P "fuzzy search"! Beautiful! Side note: you can also use the Search --> File menu dialog to search for files.
{ "language": "en", "url": "https://stackoverflow.com/questions/91984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "116" }
Q: LINQ to SQL: How to write a 'Like' select? I've got the following SQL:

select * from transaction_log where stoptime like '%2008%'

How do I write this in LINQ to SQL syntax?

A: If you want to use the literal method, it's like this:

var query = from l in transaction_log
            where SqlMethods.Like(l.stoptime, "%2008%")
            select l;

Another option is:

var query = from l in transaction_log
            where l.stoptime.Contains("2008")
            select l;

If it's a DateTime:

var query = from l in transaction_log
            where l.stoptime.Year == 2008
            select l;

That method is in the System.Data.Linq.SqlClient namespace.

A: from x in context.Table where x.stoptime.Contains("2008") select x

A: If the stoptime data type is string, you can use the .Contains() function, and also .StartsWith() and .EndsWith().

A: If you use the Contains method then you are doing a LIKE '%somestring%'. If you use the StartsWith method then it is the same as 'somestring%'. Finally, EndsWith is the same as using '%somestring'. To summarize, Contains will find any pattern in the string, but StartsWith and EndsWith will help you find matches at the beginning and end of the word.

A: The really interesting point is that .NET creates queries like "SELECT * FROM table WHERE name LIKE '%test%'" when you use "from x in context.Table where x.name.Contains("test") select x", which is quite impressive.

A: Thanks--good answers. This is, in fact, a DateTime type; I had to typecast "stoptime" as:

var query = from p in dbTransSummary.Transaction_Logs
            where ((DateTime) p.StopTime).Year == dtRollUpDate.Year
            select p;

Minor point. It works great!
{ "language": "en", "url": "https://stackoverflow.com/questions/91986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: relational operator expression order This is probably a silly question, but curiosity has gotten the better of me. I've been seeing code lately that seems to "reverse" the order of expressions for relational operators, e.g.:

if (0 == someVariable)

As opposed to what I normally see/write:

if (someVariable == 0)

To me, the second method seems more readable and intuitive, so I'm wondering if there's some reason I'm seeing the first method? Logically, both statements evaluate to the same result, so is it just a matter of personal preference how they're written?

A: I understand that this is personal preference, although by putting the variable second you can ensure that you don't accidentally assign the constant to the variable, which used to concern C developers. This is probably why you are seeing it in C#, as developers switch language.

A: Order does not matter; however, the former implies that it's the zero you're checking. Convention dictates the use of the latter.

A: The main reason in C and C++ is that it is easy to type

if (someVariable = 0) { ... }

which always fails and also sets someVariable to 0. I personally prefer the variable-first style because it reads more naturally, and just hope I don't forget to use == not =. Many C and C++ compilers will issue a warning if you assign a constant inside an if. Java and C# avoid this problem by forbidding non-boolean expressions in if clauses. Python avoids this problem by making assignment a statement, not an expression.

A: The first method exists as a way to remind yourself not to do assignments in an IF statement, which could have disastrous consequences in some languages (C/C++). In C# you'll only get bitten by this if you're setting booleans. Potentially fatal C code:

if (succeeded = TRUE) {
    // I could be in trouble here if 'succeeded' was FALSE
}

In C/C++, any variable is susceptible to this problem of VAR = CONSTANT when you intended VAR == CONSTANT. So, it is often the custom to reorder your IF statement to receive a compile error if you flub this up:

if (TRUE = succeeded) {
    // This will fail to compile, and I'll fix my mistake
}

In C#, only booleans are susceptible to this, as only boolean expressions are valid in an if statement.

if (myInteger = 9) {
    // this will fail to compile
}

So, in the C# world it isn't necessary to adopt the CONSTANT == VAR style, unless you're comfortable with doing so.

A: The constant-first format is a left-over from C syntax, where, if you inadvertently left out one of the equals signs, you did an assignment instead of a comparison. However, you can of course not assign to a numeric literal, so if you wrote it constant-first, you would get a compiler error, and not a bug. In C#, however, you cannot inadvertently do this, so it doesn't really matter.

A: In addition to equality I often come across code like if (0 > number) or if (NULL != pointer), where there isn't even any danger of making a mistake in C/C++! It's one of those situations where a well-intentioned teaching technique has turned into a plain bad habit.
{ "language": "en", "url": "https://stackoverflow.com/questions/91994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is the real difference between Pointers and References? AKA - What's this obsession with pointers? Having only really used modern, object oriented languages like ActionScript, Java and C#, I don't really understand the importance of pointers and what you use them for. What am I missing out on here?

A: It's all just indirection: the ability to not deal with data, but say "I'll direct you to some data, over there". You have the same concept in Java and C#, but only in reference format. The key differences are that references are effectively immutable signposts - they always point to something. This is useful, and easy to understand, but less flexible than the C pointer model. C pointers are signposts that you can happily rewrite. You know that the string you're looking for is next door to the string being pointed at? Well, just slightly alter the signpost. This couples well with C's "close to the bone, low level knowledge required" approach. We know that a char* foo consists of a set of characters beginning at the location pointed to by the foo signpost. If we also know that the string is at least 10 characters long, we can change the signpost to (foo + 5) to point at the same string, but starting half the length in. This flexibility is useful when you know what you're doing, and death if you don't (where "know" is more than just "know the language", it's "know the exact state of the program"). Get it wrong, and your signpost is directing you off the edge of a cliff. References don't let you fiddle, so you're much more confident that you can follow them without risk (especially when coupled with rules like "A referenced object will never disappear", as in most garbage collected languages).

A: Since you have been programming in object-oriented languages, let me put it this way. You have Object A instantiate Object B, and you pass it as a method parameter to Object C. Object C modifies some values in Object B. When you are back in Object A's code, you can see the changed value in Object B. Why is this so? Because you passed a reference to Object B into Object C, not another copy of Object B. So Object A and Object C both hold references to the same Object B in memory. Changes from one place can be seen in another. This is called By Reference. Now, if you use primitive types instead, like int or float, and pass them as method parameters, changes in Object C cannot be seen by Object A, because Object A merely passed a copy of its own variable instead of a reference to it. This is called By Value. You probably already knew that. Coming back to the C language, Function A passes some variables to Function B. These function parameters are natively copies, By Value. In order for Function B to manipulate the copy belonging to Function A, Function A must pass a pointer to the variable, so that it becomes a pass By Reference. "Hey, here's the memory address of my integer variable. Put the new value at that address location and I will pick it up later." Note the concept is similar but not 100% analogous. Pointers can do a lot more than just passing "by reference". Pointers allow functions to manipulate arbitrary locations of memory, setting them to whatever value is required. Pointers are also used to point to new addresses of execution code to dynamically execute arbitrary logic, not just data variables. Pointers may even point to other pointers (a double pointer). That is powerful, but also pretty easy to introduce hard-to-detect bugs and security vulnerabilities. 
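To make the By Value / By Pointer distinction above concrete, here is a minimal C sketch (purely illustrative; the function names are made up):

#include <stdio.h>

/* Receives a copy of the caller's int: changes stay local. */
void set_by_value(int n) {
    n = 42;
}

/* Receives the address of the caller's int: changes are visible to the caller. */
void set_by_pointer(int *n) {
    *n = 42;
}

int main(void) {
    int x = 0;
    set_by_value(x);
    printf("%d\n", x);   /* still 0 */
    set_by_pointer(&x);
    printf("%d\n", x);   /* now 42 */
    return 0;
}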
A: I use pointers and references heavily in my day to day work... in managed code (C#, Java) and unmanaged (C++, C). I learned about how to deal with pointers and what they are by the master himself... Binky! Nothing else needs to be said ;) The difference between a pointer and a reference is this. A pointer is an address to some block of memory. It can be rewritten, or in other words, reassigned to some other block of memory. A reference is simply a renaming of some object. It can only be assigned once! Once it is assigned to an object, it cannot be assigned to another. A reference is not an address, it is another name for the variable. Check out the C++ FAQ for more on this. Link1 Link2

A: If you haven't seen pointers before, you're surely missing out on this mini-gem:

void strcpy(char *dest, char *src)
{
    while(*dest++ = *src++);
}

A: Historically, what made programming possible was the realization that memory locations could hold computer instructions, not just data. Pointers arose from the realization that memory locations could also hold the address of other memory locations, thus giving us indirection. Without pointers (at a low level) most complicated data structures would be impossible. No linked lists, binary trees or hash tables. No pass by reference, only by value. Since pointers can point to code, without them we would also have no virtual functions or function lookup tables.

A: You're missing out on a lot! Understanding how the computer works on lower levels is very useful in several situations. C and assembler will do that for you. Basically a pointer lets you write stuff to any point in the computer's memory. On more primitive hardware/OSes or in embedded systems this actually might do something useful. Say, turn the blinkenlichts on and off again. Of course this doesn't work on modern systems. The operating system is the Lord and Master of main memory. If you try to access a wrong memory location, your process will pay for its hubris with its life. In C, pointers are the way of passing references to data. When you call a function, you don't want to copy a million bits onto the stack. Instead you just tell where the data resides in main memory. In other words, you give a pointer to the data. To some extent that is what happens even with Java. You pass references to objects, not the objects themselves. Remember, ultimately every object is a set of bits in the computer's main memory.

A: I'm currently waist-deep in designing some high level enterprise software in which chunks of data (stored in an SQL database, in this case) are referenced by 1 or more other entities. If a chunk of data remains when no more entities reference it, we're wasting storage. If a reference points to data that's not present, that's a big problem too. There's a strong analogy to be made between our issues and those of memory management in a language that uses pointers. It's tremendously useful to be able to talk to my colleagues in terms of that analogy. Not deleting unreferenced data is a "memory leak". A reference that goes nowhere is a "dangling pointer". We can choose explicit "frees", or we can implement "garbage collection" using "reference counting". So here, understanding low-level memory management is helping design high-level applications. In Java you're using pointers all the time. Most variables are pointers to objects - which is why:

StringBuffer x = new StringBuffer("Hello");
StringBuffer y = x;
x.append(" boys");
System.out.println(y);

... prints "Hello boys" and not "Hello". 
The only difference in C is that it's common to add and subtract from pointers - and if you get the logic wrong you can end up messing with data you shouldn't be touching.

A: Strings are fundamental to C (and other related languages). When programming in C, you must manage your memory. You don't just say "okay, I'll need a bunch of strings"; you need to think about the data structure. How much memory do you need? When will you allocate it? When will you free it? Let's say you want 10 strings, each with no more than 80 characters. Okay, each string is an array of characters (81 characters - you mustn't forget the null or you'll be sorry!) and then each string is itself in an array. The final result will be a multidimensional array something like

char dict[10][81];

Note, incidentally, that dict isn't a "string" or an "array", or a "char". It's a pointer. When you try to print one of those strings, all you're doing is passing the address of a single character; C assumes that if it just starts printing characters it will eventually hit a null. And it assumes that if you are at the start of one string, and you jump forward 81 bytes, you'll be at the start of the next string. And, in fact, taking your pointer and adding 81 bytes to it is the only possible way to jump to the next string. So, why are pointers important? Because you can't do anything without them. You can't even do something simple like print out a bunch of strings; you certainly can't do anything interesting like implement linked lists, or hashes, or queues, or trees, or a file system, or some memory management code, or a kernel or... whatever. You NEED to understand them because C just hands you a block of memory and lets you do the rest, and doing anything with a block of raw memory requires pointers. Also, many people suggest that the ability to understand pointers correlates highly with programming skill. Joel has made this argument, among others. For example:

"Now, I freely admit that programming with pointers is not needed in 90% of the code written today, and in fact, it's downright dangerous in production code. OK. That's fine. And functional programming is just not used much in practice. Agreed. But it's still important for some of the most exciting programming jobs. Without pointers, for example, you'd never be able to work on the Linux kernel. You can't understand a line of code in Linux, or, indeed, any operating system, without really understanding pointers."

From here. Excellent article.

A: To be honest, most seasoned developers will have a laugh (hopefully friendly) if you don't know pointers. At my previous job we had two new hires last year (just graduated) that didn't know about pointers, and that alone was the topic of conversation with them for about a week. No one could believe how someone could graduate without knowing pointers...

A: References in C++ are fundamentally different from references in Java or .NET languages; .NET languages have special types called "byrefs" which behave much like C++ "references". A C++ reference or .NET byref (I'll use the latter term, to distinguish from .NET references) is a special type which doesn't hold a variable, but rather holds information sufficient to identify a variable (or something that can behave as one, such as an array slot) held elsewhere. Byrefs are generally only used as function parameters/arguments, and are intended to be ephemeral. 
Code which passes a byref to a function guarantees that the variable which is identified thereby will exist at least until that function returns, and functions generally guarantee not to keep any copy of a byref after they return (note that in C++ the latter restriction is not enforced). Thus, byrefs cannot outlive the variables identified thereby. In Java and .NET languages, a reference is a type that identifies a heap object; each heap object has an associated class, and code in the heap object's class can access data stored in the object. Heap objects may grant outside code limited or full access to the data stored therein, and/or allow outside code to call certain methods within their class. Using a reference to calling a method of its class will cause that reference to be made available to that method, which may then use it to access data (even private data) within the heap object. What makes references special in Java and .NET languages is that they maintain, as an absolute invariant, that every non-null reference will continue to identify the same heap object as long as that reference exists. Once no reference to a heap object exists anywhere in the universe, the heap object will simply cease to exist, but there is no way a heap object can cease to exist while any reference to it exists, nor is there any way for a "normal" reference to a heap object to spontaneously become anything other than a reference to that object. Both Java and .NET do have special "weak reference" types, but even they uphold the invariant. If no non-weak references to an object exist anywhere in the universe, then any existing weak references will be invalidated; once that occurs, there won't be any references to the object and it can thus be invalidated. Pointers, like both C++ references and Java/.NET references, identify objects, but unlike the aforementioned types of references they can outlive the objects they identify. If the object identified by a pointer ceases to exist but the pointer itself does not, any attempt to use the pointer will result in Undefined Behavior. If a pointer isn't known either to be null or to identify an object that presently exists, there's no standard-defined way to do anything with that pointer other than overwrite it with something else. It's perfectly legitimate for a pointer to continue to exist after the object identified thereby has ceased to do so, provided that nothing ever uses the pointer, but it's necessary that something outside the pointer indicate whether or not it's safe to use because there's no way to ask the pointer itself. The key difference between pointers and references (of either type) is that references can always be asked if they are valid (they'll either be valid or identifiable as null), and if observed to be valid they will remain so as long as they exist. Pointers cannot be asked if they are valid, and the system will do nothing to ensure that pointers don't become invalid, nor allow pointers that become invalid to be recognized as such. A: Pointers are for directly manipulating the contents of memory. It's up to you whether you think this is a good thing to do, but it's the basis of how anything gets done in C or assembler. High-level languages hide pointers behind the scenes: for example a reference in Java is implemented as a pointer in almost any JVM you'll come across, which is why it's called NullPointerException rather than NullReferenceException. 
But it doesn't let the programmer directly access the memory address it points to, and it can't be modified to take a value other than the address of an object of the correct type. So it doesn't offer the same power (and responsibility) that pointers in low-level languages do. [Edit: this is an answer to the question 'what's this obsession with pointers?'. All I've compared is assembler/C-style pointers with Java references. The question title has since changed: had I set out to answer the new question I might have mentioned references in languages other than Java]

A: This is like asking, “what's this obsession with CPU instructions? Do I miss out on something by not sprinkling x86 MOV instructions all over the place?” You just need pointers when programming on a low level. In most higher-level programming language implementations, pointers are used just as extensively as in C, but hidden from the user by the compiler. So... Don't worry. You're using pointers already -- and without the dangers of doing so incorrectly, too. :)

A: I see pointers as a manual transmission in a car. If you learn to drive with a car that has an automatic transmission, that won't make you a bad driver. And you can still do most everything that drivers who learned on a manual transmission can do. There will just be a hole in your knowledge of driving. If you had to drive a manual you'd probably be in trouble. Sure, it is easy to understand the basic concept of it, but once you have to do a hill start, you're screwed. But, there is still a place for manual transmissions. For instance, race car drivers need to be able to shift to get the car to respond in the most optimal way to the current racing conditions. Having a manual transmission is very important to their success. This is very similar to programming right now. There is a need for C/C++ development on some software. Some examples are high-end 3D games and low level embedded software - things where speed is a critical part of the software's purpose, and a lower level language that allows you closer access to the actual data that needs to be processed is key to that performance. However, for most programmers this is not the case, and not knowing pointers is not crippling. I do believe everybody can benefit from learning about C and pointers, though - and manual transmissions too.

A: For a long time I didn't understand pointers, but I understood array addressing. So I'd usually put together some storage area for objects in an array, and then use an index to that array as the 'pointer' concept.

SomeObject store[100];
int a_ptr = 20;
SomeObject A = store[a_ptr];

One problem with this approach is that after I modified 'A', I'd have to reassign it to the 'store' array in order for the changes to be permanent:

store[a_ptr] = A;

Behind the scenes, the programming language was doing several copy operations. Most of the time this didn't affect performance. It mostly made the code error-prone and repetitive. After I learned to understand pointers, I moved away from implementing the array addressing approach. The analogy is still pretty valid. Just consider that the 'store' array is managed by the programming language's run-time.

SomeObject A;
SomeObject* a_ptr = &A;
// Any changes to a_ptr's contents hereafter will affect
// the one-true-object that it addresses. No need to reassign.

Nowadays, I only use pointers when I can't legitimately copy an object. 
There are a bunch of reasons why this might be the case:
* To avoid an expensive object-copy operation for the sake of performance.
* Some other factor doesn't permit an object-copy operation.
* You want a function call to have side-effects on an object (don't pass the object, pass the pointer thereto).
* In some languages - if you want to return more than one value from a function (though generally avoided).

A: Pointers are the most pragmatic way of representing indirection in lower-level programming languages.

A: Pointers are important! They "point" to a memory address, and many internal structures are represented as pointers; e.g., an array of strings is actually a list of pointers to pointers! Pointers can also be used for updating variables passed to functions.

A: You need them if you want to generate "objects" at runtime without preallocating memory on the stack.

A: Parameter efficiency - passing a pointer (int - 4 bytes) as opposed to copying a whole (arbitrarily large) object. Java classes are passed via reference (basically a pointer) too, by the way; it's just that in Java that's hidden from the programmer.

A: Programming in languages like C and C++, you are much closer to the "metal". Pointers hold a memory location where your variables, data, functions etc. live. You can pass a pointer around instead of passing by value (copying your variables and data). There are two things that are difficult with pointers:
* Pointers on pointers, addressing, etc. can get very cryptic. It leads to errors, and it is hard to read.
* Memory that pointers point to is often allocated from the heap, which means you are responsible for releasing that memory. The bigger your application gets, the harder it is to keep up with this requirement, and you end up with memory leaks that are hard to track down.
You could compare pointer behavior to how Java objects are passed around, with the exception that in Java you do not have to worry about freeing the memory, as this is handled by garbage collection. This way you get the good things about pointers but do not have to deal with the negatives. You can still get memory leaks in Java of course, if you do not de-reference your objects, but that is a different matter.

A: Also, just something to note: you can use pointers in C# (as opposed to normal references) by marking a block of code as unsafe. Then you can run around changing memory addresses directly and do pointer arithmetic and all that fun stuff. It's great for very fast image manipulation (the only place I personally have used it). As far as I know, Java and ActionScript don't support unsafe code and pointers.

A: I am always distressed by the focus on such things as pointers or references in high-level languages. It's really useful to think at a higher level of abstraction in terms of the behavior of objects (or even just functions) as opposed to thinking in terms of "let me see, if I send the address of this thing to there, then that thing will return me a pointer to something else". Consider even a simple swap function. If you have

void swap(int & a, int & b)

or

procedure Swap(var a, b : integer)

then interpret these to mean that the values can be changed. The fact that this is being implemented by passing the addresses of the variables is just a distraction from the purpose. Same with objects --- don't think of object identifiers as pointers or references to "stuff". Instead, just think of them as, well, OBJECTS, to which you can send messages. 
Even in primitive languages like C++, you can go a lot further a lot faster by thinking (and writing) at as high a level as possible.

A: Write more than 2 lines of C or C++ and you'll find out. They are "pointers" to the memory location of a variable. It is like passing a variable by reference, kinda.
{ "language": "en", "url": "https://stackoverflow.com/questions/92001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: Programmatically Make Bound Column Invisible I'm trying to make a data-bound column invisible after data binding, because it won't exist before data binding. However, the DataGrid.Columns collection indicates a count of 0, making it seem as if the automatically generated columns don't belong to the collection. How can I make a column that is automatically generated during binding invisible?

A: You have to add code to the line-item rendering code and set the visibility of that column to false. Even though it's bound, the event will be fired for each record and you can manipulate the output.

A: The only way to do this I know of, since it's created on the fly, is to hide the cell. Here's an example you can adapt:

protected void GridView_RowCreated(object sender, GridViewRowEventArgs e)
{
    e.Row.Cells[1].Visible = false;
}

A: If I understand your scenario correctly, you'll probably want to set its Visible property during the DataBound event.

A: @Nick Craver ("GridView_RowCreated"): Nick, I'm not using a GridView. It's ItemCreated ;-)
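For the ASP.NET DataGrid case mentioned in that last reply, a minimal sketch of the same cell-hiding idea using the ItemCreated event; the column index 1 is just an example:

protected void DataGrid1_ItemCreated(object sender, DataGridItemEventArgs e)
{
    // ItemCreated fires for every row type (header, items, footer),
    // so hiding the same cell index in each row hides the whole column.
    e.Item.Cells[1].Visible = false;
}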
{ "language": "en", "url": "https://stackoverflow.com/questions/92004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I determine if a random string sounds like English? I have an algorithm that generates strings based on a list of input words. How do I separate only the strings that sound like English words? i.e. discard RDLO while keeping LORD. EDIT: To clarify, they do not need to be actual words in the dictionary. They just need to sound like English. For example KEAL would be accepted.

A: It's quite easy to generate English-sounding words using a Markov chain. Going backwards is more of a challenge, however. What's the acceptable margin of error for the results? You could always have a list of common letter pairs, triples, etc., and grade them based on that.

A: You could approach this by tokenizing a candidate string into bigrams—pairs of adjacent letters—and checking each bigram against a table of English bigram frequencies.
* Simple: if any bigram is sufficiently low on the frequency table (or outright absent), reject the string as implausible. (String contains a "QZ" bigram? Reject!)
* Less simple: calculate the overall plausibility of the whole string in terms of, say, a product of the frequencies of each bigram divided by the mean frequency of a valid English string of that length. This would allow you to both (a) accept a string with an odd low-frequency bigram among otherwise high-frequency bigrams, and (b) reject a string with several individual low-but-not-quite-below-the-threshold bigrams.
Either of those would require some tuning of the threshold(s), the second technique more so than the first. Doing the same thing with trigrams would likely be more robust, though it'll also likely lead to a somewhat more strict set of "valid" strings. Whether that's a win or not depends on your application. Bigram and trigram tables based on existing research corpora may be available for free or purchase (I didn't find any freely available but only did a cursory google so far), but you can calculate a bigram or trigram table yourself from any good-sized corpus of English text. Just crank through each word as a token and tally up each bigram—you might handle this as a hash with a given bigram as the key and an incremented integer counter as the value. English morphology and English phonetics are (famously!) less than isometric, so this technique might well generate strings that "look" English but present troublesome pronunciations. This is another argument for trigrams rather than bigrams—the weirdness produced by analysis of sounds that use several letters in sequence to produce a given phoneme will be reduced if the n-gram spans the whole sound. (Think "plough" or "tsunami", for example.)

A: I'd be tempted to run the soundex algorithm over a dictionary of English words and cache the results, then soundex your candidate string and match against the cache. Depending on performance requirements, you could work out a distance algorithm for soundex codes and accept strings within a certain tolerance. Soundex is very easy to implement - see Wikipedia for a description of the algorithm. 
An example implementation of what you want to do would be:

def soundex(name, len=4):
    digits = '01230120022455012623010202'
    sndx = ''
    fc = ''
    for c in name.upper():
        if c.isalpha():
            if not fc:
                fc = c
            d = digits[ord(c)-ord('A')]
            if not sndx or (d != sndx[-1]):
                sndx += d
    sndx = fc + sndx[1:]
    sndx = sndx.replace('0','')
    return (sndx + (len * '0'))[:len]

real_words = load_english_dictionary()
soundex_cache = [ soundex(word) for word in real_words ]

if soundex(candidate) in soundex_cache:
    print "keep"
else:
    print "discard"

Obviously you'll need to provide an implementation of load_english_dictionary. EDIT: Your example of "KEAL" will be fine, since it has the same soundex code (K400) as "KEEL". You may need to log rejected words and manually verify them if you want to get an idea of the failure rate.

A: You should research "pronounceable" password generators, since they're trying to accomplish the same task. A Perl solution would be Crypt::PassGen, which you can train with a dictionary (so you could train it for various languages if you need to). It walks through the dictionary and collects statistics on 1, 2, and 3-letter sequences, then builds new "words" based on relative frequencies.

A: You can build a Markov chain from a huge English text. Afterwards you can feed words into the Markov chain and check how high the probability is that the word is English. See here: http://en.wikipedia.org/wiki/Markov_chain At the bottom of the page you can see the Markov text generator. What you want is exactly the reverse of it. In a nutshell: the Markov chain stores, for each character, the probabilities of which character will follow. You can extend this idea to two or three characters if you have enough memory.

A: Metaphone and Double Metaphone are similar to SOUNDEX, except they may be tuned more toward your goal than SOUNDEX. They're designed to "hash" words based on their phonetic "sound", and are good at doing this for the English language (but not so much other languages and proper names). One thing to keep in mind with all three algorithms is that they're extremely sensitive to the first letter of your word. For example, if you're trying to figure out if KEAL is English-sounding, you won't find a match to REAL because the initial letters are different.

A: The easy way with Bayesian filters (Python example from http://sebsauvage.net/python/snyppets/#bayesian):

from reverend.thomas import Bayes
guesser = Bayes()
guesser.train('french','La souris est rentrée dans son trou.')
guesser.train('english','my tailor is rich.')
guesser.train('french','Je ne sais pas si je viendrai demain.')
guesser.train('english','I do not plan to update my website soon.')

>>> print guesser.guess('Jumping out of cliffs it not a good idea.')
[('english', 0.99990000000000001), ('french', 9.9999999999988987e-005)]

>>> print guesser.guess('Demain il fera très probablement chaud.')
[('french', 0.99990000000000001), ('english', 9.9999999999988987e-005)]

A: Do they have to be real English words, or just strings that look like they could be English words? If they just need to look like possible English words you could do some statistical analysis on some real English texts and work out which combinations of letters occur frequently. Once you've done that you can throw out strings that are too improbable, although some of them may be real words. Or you could just use a dictionary and reject words that aren't in it (with some allowances for plurals and other variations). 
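As a rough illustration of that letter-frequency idea (and of the bigram scoring suggested earlier in this thread), here is a minimal Python sketch; the corpus file name and the cutoff you would pick are placeholders, not part of any answer above:

from collections import defaultdict
import math

ALPHABET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'

def build_bigram_log_probs(corpus_text):
    """Count letter bigrams in a training corpus and turn them into log-probabilities."""
    counts = defaultdict(int)
    total = 0
    words = ''.join(c if c.isalpha() else ' ' for c in corpus_text.upper()).split()
    for word in words:
        for a, b in zip(word, word[1:]):
            counts[a + b] += 1
            total += 1
    # Laplace smoothing so unseen bigrams get a small, non-zero probability.
    vocab = len(ALPHABET) ** 2
    return {a + b: math.log((counts[a + b] + 1.0) / (total + vocab))
            for a in ALPHABET for b in ALPHABET}

def plausibility(word, log_probs):
    """Average per-bigram log-probability, so short and long words are comparable."""
    bigrams = [word[i:i+2].upper() for i in range(len(word) - 1)]
    if not bigrams:
        return float('-inf')
    return sum(log_probs[bg] for bg in bigrams) / len(bigrams)

# Hypothetical usage: train on any large English text file, then choose a cutoff
# empirically by scoring known-good and known-bad samples.
log_probs = build_bigram_log_probs(open('english_corpus.txt').read())
for candidate in ['LORD', 'RDLO', 'KEAL']:
    print(candidate, plausibility(candidate, log_probs))

Higher (less negative) scores mean more English-like; RDLO should score well below LORD and KEAL on any reasonable corpus.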
A: You could compare them to a dictionary (freely available on the internet), but that may be costly in terms of CPU usage. Other than that, I don't know of any other programmatic way to do it.

A: That sounds like quite an involved task! Off the top of my head, a consonant phoneme needs a vowel either before or after it. Determining what a phoneme is will be quite hard though! You'll probably need to manually write out a list of them. For example, "TR" is OK but not "TD", etc.

A: I would probably evaluate each word using a SOUNDEX algorithm against a database of English words. If you're doing this on a SQL Server it should be pretty easy to set up a database containing a list of most English words (using a freely available dictionary), and MSSQL Server has SOUNDEX implemented as an available search algorithm. Obviously you can implement this yourself if you want, in any language - but it might be quite a task. This way you'd get an evaluation of how much each word sounds like an existing English word, if any, and you could set up some limits for how low you'd want to accept results. You'd probably want to consider how to combine results for multiple words, and you would probably tweak the acceptance limits based on testing.

A: I'd suggest looking at the phi test and index of coincidence. http://www.threaded.com/cryptography2.htm

A: I'd suggest a few simple rules, and standard pairs and triplets would be good. For example, English-sounding words tend to follow the pattern of vowel-consonant-vowel, apart from some diphthongs and standard consonant pairs (e.g. th, ie and ei, oo, tr). With a system like that you should strip out almost all words that don't sound like they could be English. You'd find on closer inspection that you will probably strip out a lot of words that do sound like English as well, but you can then start adding rules that allow for a wider range of words and 'train' your algorithm manually. You won't remove all false negatives (e.g. I don't think you could manage to come up with a rule to include 'rhythm' without explicitly coding in that rhythm is a word) but it will provide a method of filtering. I'm also assuming that you want strings that could be English words (they sound reasonable when pronounced) rather than strings that are definitely words with an English meaning.
{ "language": "en", "url": "https://stackoverflow.com/questions/92006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Programmatically setting the record pointer in a C# DataGridView How do I programmatically set the record pointer in a C# DataGridView? I've tried "DataGridView.Rows[DesiredRowIndex].Selected=true;", and that does not work. All it does is highlight that row within the grid; it does not move the record pointer to that row.

A: To change the active row for the datagrid you need to set the CurrentCell property of the datagrid to a non-hidden, non-disabled, non-header cell on the row that you have selected. You'd do this like:

dataGridView1.CurrentCell = this.dataGridView1[YourColumn, YourRow];

Make sure that the cell matches the above criteria. Further information can be found at: http://msdn.microsoft.com/en-us/library/yc4fsbf5.aspx

A: Try setting the focus of the DataGrid first. Something like this:

dataGridView1.Focus();
dataGridView1.CurrentCell = this.dataGridView1[YourColumn, YourRow];

This worked in my case, hope it helps you as well.
{ "language": "en", "url": "https://stackoverflow.com/questions/92008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to display only one validation error message with MyFaces Trinidad? For a registration form I have something simple like:

<tr:panelLabelAndMessage label="Zip/City" showRequired="true">
    <tr:inputText id="zip" value="#{data['registration'].zipCode}" contentStyle="width:36px" simple="true" required="true" />
    <tr:inputText id="city" value="#{data['registration'].city}" contentStyle="width:133px" simple="true" required="true" />
</tr:panelLabelAndMessage>
<tr:message for="zip" />
<tr:message for="city" />

When including the last two lines, I get two messages on validation error. When omitting the last two lines, a JavaScript alert shows up, which is not what I want. Is there a solution to show only one validation-failed message somehow? Thanks a lot!

A: Problem is, the fields must be laid out horizontally. It's a no-go to put the ZIP field and city anywhere but next to each other in one line. At least for me. A co-worker has pointed me to set a facelets variable inside the first tr:message and to put a rendered attribute on the second one that reacts to this variable. Haven't got the time to try, nor found the right command for setting a variable yet. Will post results as soon as possible.

A: I know this won't be ideal, but if you remove the panelLabelAndMessage tag and just use the label attribute on the inputText tag, that should remove the extra error message.
{ "language": "en", "url": "https://stackoverflow.com/questions/92027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: .NET UML generation from code? Is there anything out there for .NET that can generate UML diagrams from code? Preferably an add-in for Visual Studio. Starting work on a mature project that has little architectural documentation can be painful at first. Eventually you get the ins and outs of the code, but something to help see how the code all fits together from the get-go would be wonderful.

A: In Visual Studio 2005/8 you can right-click on a class and then select View in Class Diagram, which will create a new class diagram containing the selected class and any related classes.

A: If you have Visio and select Project -> Visio UML -> Reverse engineer you will get a UML diagram of the project. Sparx Systems has made a product called "Enterprise Architect" that should be able to do the trick as well.

A: If you generate UML class diagrams for a big project, the result is going to be quite chaotic. Sometimes I use the class diagrams in Visual Studio. I manually add the classes I think deserve some extra explanation. The diagrams are not UML, but it is close enough. They are always up-to-date, and you can change the diagram and the code is updated automatically. To convey the bigger picture of a design I use these UML stencils and draw the diagram by hand. For my points to come across to the people I am communicating with, I find it best to omit irrelevant details, so we can focus on what I think is important. No automatic UML generation tool can figure out which irrelevant details to omit.

A: Visual Studio 2010 Ultimate supports UML class, sequence, component, use case, and activity diagrams. It also supports creating sequence diagrams, dependency graphs, and layer diagrams from code. Regarding your question about generating UML diagrams from code, there's a response here in the VS Architecture & Modeling tools forum: Is it possible to reverse engineer C# code into an UML Class Diagram? Other tools include Architecture Explorer, which lets you browse and explore your solution. For more info, see the following links: To download the RC release, visit Microsoft Visual Studio 2010 Ultimate RC. To see the RC documentation, see Modeling the Application. To discuss these tools, visit the Visual Studio 2010 Architectural Discovery & Modeling Tools forum.

A: Enterprise Architect does this and has an add-in for Visual Studio. It will also do sequence diagrams, which can be very useful.

A: The class diagram doesn't always work. I often find it won't display the classes for some reason. Pen & pencil, or talking with people who work on the project, is what I have to rely on.
{ "language": "en", "url": "https://stackoverflow.com/questions/92029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Set selected item on a DataGridViewComboboxColumn I have a datagridview with a DataGridViewComboboxColumn column with 3 values: "Small", "Medium", "Large". I get back the user's default, which in this case is "Medium". I want to show a dropdown cell in the datagridview but default the value to "Medium". I would do this in a regular combobox by setting SelectedIndex or just setting the Text property of the combo box.

A: When you get into the DataGridView it is probably best to get into databinding. This will take care of all of the selected-index stuff you are talking about. However, if you want to get in there by yourself, DataGridView.Rows[rowindex].Cells[columnindex].Value will let you get and set the value associated with the DataGridViewComboBoxColumn. Just make sure you supply the correct rowindex and columnindex, along with setting the value to the correct type (the same type as the ValueMember property of the DataGridViewComboBoxColumn).

A: 
DataGridViewComboBoxColumn ColumnPage = new DataGridViewComboBoxColumn();
ColumnPage.DefaultCellStyle.NullValue = "Medium";

A: Are you retrieving the user data and attempting to set values in the DataGridView manually, or have you actually bound the DataGridView to a data source? Because if you've bound the grid to a data source, you should just need to set the DataPropertyName on the column to be the string name of the object property:

[DataGridViewComboboxColumnName].DataPropertyName = "PropertyNameToBindTo";

Or do you mean you want it to default to Medium for a new row?

A: To accomplish this task you should do something like this:

// Set the column name to which you want to bind.
this.dataGridViewStudentInformation.Columns[ColumnIndex].DataPropertyName = dataGridViewStudentInformation.Columns[2].Name;

And set the default value in the database as Medium.
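A minimal C# sketch of the direct (non-databound) approach described in the first answer; the column and row here are made up for illustration:

// Build the combo column with the three choices.
var sizeColumn = new DataGridViewComboBoxColumn();
sizeColumn.Name = "Size";
sizeColumn.Items.AddRange("Small", "Medium", "Large");
dataGridView1.Columns.Add(sizeColumn);

// Add a row, then set the cell's Value to the user's default.
// The value must match one of the Items (or a ValueMember value when databound).
int rowIndex = dataGridView1.Rows.Add();
dataGridView1.Rows[rowIndex].Cells["Size"].Value = "Medium";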
{ "language": "en", "url": "https://stackoverflow.com/questions/92035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: "Attach to Process" in Visual Studio 2005 I installed Visual Studio 2005 (with SP1) and made the default settings required for C++. Now I open a solution and run the exe. Under the "Tools" menu I go and select "Attach to Process" and I attach it to the exe I just ran. I put breakpoints in several places in the code (these breakpoints look enabled) and these are the places where the breakpoints should definitely be hit. But for some reason, my breakpoints are not hit. PS: All PDBs are present in the correct location. Is there any setting I am missing? A: Perhaps it is attaching to "the wrong kind" of code. In the "Attach to Process" dialog, there is a setting that allows you to select the kind of code you want to debug. Try clicking the "Select" button next to the "Attach to" text box and checking only the relevant code type. http://img204.imageshack.us/img204/3017/capture5ct4.png Most of the time, leaving the "automatically determine the type of code to debug" setting on works for me. However, in some cases, the debugger is not able to understand that I want to attach to managed code (if I have launched my application from a batch file, for example), and when that happens, the above solution works for me. A: Are you in Debug mode? I've had this problem when I was trying to do it in Release mode. It doesn't complain, it just doesn't hit the breakpoints. A: Use the Modules view to see if your exe/dll is loaded, and if not, to specify where to load the PDB from.
{ "language": "en", "url": "https://stackoverflow.com/questions/92039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is there a simple tool to convert mysql to postgresql syntax? I've tried the tools listed here, some with more success than others, but none gave me valid postgres syntax I could use (tinyint errors etc.) A: After some time on Google I found this post. * *Install the mysql2psql gem using [sudo] gem install mysql2psql. *Create a config file by running mysql2psql. You'll see an error, but a mysql2psql.yml file should have been created. *Edit mysql2psql.yml *Run mysql2psql again to migrate your data. Tip: Set force_truncate to true in your mysql2psql.yml config file if you want the postgresql database to be cleared before migrating your data. A: Install pgloader on Debian or Ubuntu: sudo apt install pgloader Log in as the postgres user and create a database sudo su postgres createdb -O user db_migrated Transfer data from the mysql database to postgresql pgloader mysql://user@localhost/db postgresql:///db_migrated Check also Dimitri Fontaine's rewrite of pgloader from Python to Common Lisp, done so that he could implement real threading. Installation on other platforms * *To install pgloader on Windows, you can use the Windows Subsystem for Linux. *To install pgloader on Mac, you can use: brew install --HEAD pgloader. A: There's a mysqldump option which makes it output PostgreSQL code: mysqldump --compatible=postgresql ... But that doesn't work too well. Instead, please see the mysql-to-postgres tool as described in Linus Oleander's answer. A: Try this one, it works like a charm! http://www.sqlines.com/online A: I've used py-mysql2pgsql. After installation it needs only a simple configuration file in yml format (source, destination), e.g.: # if a socket is specified we will use that # if tcp is chosen you can use compression mysql: hostname: localhost port: 3306 socket: /tmp/mysql.sock username: mysql2psql password: database: mysql2psql_test compress: false destination: # if file is given, output goes to file, else postgres file: postgres: hostname: localhost port: 5432 username: mysql2psql password: database: mysql2psql_test Usage: > py-mysql2pgsql -h usage: py-mysql2pgsql [-h] [-v] [-f FILE] Tool for migrating/converting data from mysql to postgresql. optional arguments: -h, --help show this help message and exit -v, --verbose Show progress of data migration. -f FILE, --file FILE Location of configuration file (default: mysql2pgsql.yml). If none exists at that path, one will be created for you. More on its home page https://github.com/philipsoutham/py-mysql2pgsql. A: Have a look at PG Foundry; extra utilities for Postgres tend to live there. I believe that the tool you're looking for does exist, though. A: There is one piece of pay software listed on this postgresql page: http://www.postgresql.org/download/products/1 and this is on pgFoundry: http://pgfoundry.org/projects/mysql2pgsql/ A: This page lists the syntax differences, but I haven't found a simple working query converter yet. Using an ORM package instead of raw SQL could prevent these issues. I'm currently hacking up a converter for a legacy codebase: function mysql2pgsql($mysql){ // MySQL's LIMIT takes (offset, row_count), so the count comes first in Postgres return preg_replace("/limit (\d+), *(\d+)/i", "limit $2 offset $1", preg_replace("/as '([^']+)'/i", 'as "$1"', $mysql)); // Note: limit needs an ORDER BY } For CREATE statements, SQLines converts most of them online. I still had to edit the mysqldump afterwards, though: "mediumtext" -> "text", "^LOCK.*" -> "", "^UNLOCK.*" -> "", "`" -> '"', "'" -> "''" in 'data', "0000-00-00" -> "2000-01-01", deduplicate constraint names, " CHARACTER SET utf8 " -> " ". "int(10)" -> "int" was missed in the last table, so pass that part of the mysqldump through http://www.sqlines.com/online again. A: You will most likely never get a tool for such a task which would do all of the job for you. Be prepared to do some refactoring work yourself.
{ "language": "en", "url": "https://stackoverflow.com/questions/92043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: session variable mixup in ASP.NET? Is it possible for ASP.NET to mix up which user is associated with which session variable on the server? Are session variables immutably tied to the original user that created them across time, space & dimension? A: It depends on your session provider; if you have overridden the session key generation in a way that is no longer unique, then multiple users may be accessing the same session. What behavior are you seeing? And are you sure there's no static in play with the variables you are talking about? A: To answer your original question: Sessions are keyed to an id that is placed in a cookie. This id is generated using some random-number crypto routines. It is not guaranteed to be unique, but it is highly unlikely that it will ever be duplicated in the span of the life of a session, even if your sessions run for full work days. It would probably take years for a really popular site to even generate a duplicate key (no stats or facts to back that up). Having said all that, it doesn't appear that your problem is with session values getting mixed up. The first thing that I would start to look at is connection pooling. ADO pools connections by default, but if you request a connection with a username/password that is not in the pool it should give you a new connection. Hint: that may be a performance bottleneck in the future if your site is very large. It has been a while since I worked with SQL Server; in Oracle there is a call that can be made to switch the identity of the user. I would be surprised if there was no equivalent in SQL Server. You might try connecting to your DB with a generic username/password and then executing that identity switch call before you hand back the connection to the rest of your code. A: While anything is possible... no, unless you are storing session state in SQL Server or some other out-of-process storage and then messing with it. A: The session is bound to a user cookie; the chances of that messing up in a normal scenario are very low. However, there could be issues if using distributed session state. A: It's not possible. Sessions are tied to the creator. Do you want them to mix up, or do you have a case where they look mixed up? A: More information: I've got an app that takes the userid/password from the login page and stores it in a session variable. I plop it into my connection string for making calls to SQL Server. When a table gets updated, we're using 'system_user' in the database to identify the 'last updated by' user. We're seeing some odd behavior in which the user we're expecting to be listed is incorrect, and it's showing someone else. A: Can you pop in the debugger and see if the correct value is indeed being passed on that connection string? It would quickly help you identify which side the problem is on. Also make sure that none of the connection code has static properties for connection or user, or one user may have their connection replaced with that of the most recent user before the update fires off. A: My guess is that you're re-using a static field on a class to hold the connection string. Those static fields are re-used across multiple IIS requests, so you're probably only ever seeing the most recently logged-in user in the 'last updated by' (see the sketch below). BTW, unless you have a REALLY good reason for doing so, you shouldn't be connecting to the DB like this. You're preventing yourself from using connection pooling, which is going to hurt performance under high loads.
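To make the static-field pitfall concrete, here is a hedged C# sketch (all names are invented for illustration; and, as noted above, building per-user connection strings like this defeats connection pooling):

using System;
using System.Data.SqlClient;
using System.Web.UI;

public partial class UpdatePage : Page
{
    // Anti-pattern: a static field is shared by every request in the worker
    // process, so the most recent login silently overwrites it for everyone.
    private static string sharedConnectionString;

    protected void Save_Click(object sender, EventArgs e)
    {
        // Safer: build the connection string per request from this user's
        // session, so system_user in the database reflects the actual login.
        string connStr = string.Format(
            "Server=dbserver;Database=AppDb;User Id={0};Password={1};",
            Session["UserId"], Session["Password"]);

        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            // ... run the UPDATE here; 'last updated by' will now be correct
        }
    }
}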
{ "language": "en", "url": "https://stackoverflow.com/questions/92052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: GUI Testing tools and feedback I am working on the issue of testing my GUI and I am not entirely sure of the best approach here. My GUI is built using a traditional MVC framework, so I am easily able to test the logic parts of the GUI without bringing up the GUI itself. However, when it comes to testing the functionality of the GUI, I am not really sure if I should worry about individually testing GUI components or if I should mainly just focus on functional testing of the system. It is a pretty complex system in which testing the GUI frequently involves sending a message to the server and then observing the response on the GUI. My initial thoughts are that functional testing is the way to go here since I need a whole system running to really test the UI. Comments on this issue would be appreciated. A: Other GUI-testing tools I can offer are: ThoughtWorks White, PyWinAuto, AutoIt, AutoHotKey. One thing to keep in mind when trying to automate GUIs is that the only way you can do that is to build the GUI with automation in mind. Early on in the project, crush devs who think their GUIs should not support testability, and happily expose all the hooks that can help with automation as your testing needs require. A: You have (at least) 2 issues - the complexity of the environment (the server) and the complexity of the GUI. There are many tools for automating GUI testing. All of them are more or less fragile and require pretty much constant maintenance in the face of changing layout. There is benefit to be gained from using them, but it's a long-term benefit. The environment, on the other hand, is an area that can be tamed. If your application is architected using the Dependency Injection/Inversion technique (where you 'inject' the server component into the application), then you can use a 'mock' of the relevant server interfaces to enable you to script test cases (a sketch of this follows at the end of the thread). Combining these two techniques will allow you to automate GUI testing. A: Mercury QuickTest Pro, Borland SilkTest, and Ranorex Recorder are some GUI testing tools. A: If your application is web-based you can write tests using tools like WatiN or Selenium. If your application is Windows .NET based, you could try White. A: My advice: forget traditional GUI testing. It's too expensive. Coding the tests takes a lot of time, and the tools aren't really stable, so you will get unreliable test results. The coupling between the code and the test is very strong, and you'll spend a lot of time on maintenance.
The new trend is to ignore the GUI tests. See the ModelViewPresenter pattern from Fowler as a guideline. A: The clearest way I can say this is: Don't waste your time writing automated GUI tests. Especially when you're working with an MVC app - in your case, when you send a message to the server, you can make sure the right message number comes back and be done. You can add some additional cases - or another test completely - to make sure that the GUI is converting the message IDs into the right strings, but you just need to run that test once. A: We do incorporate GUI testing in our project, and it has its side effects. The developers, however, have one critical design principle: Keep the GUI layer as thin as possible! That means no logic in the GUI classes. Separate this into presentation models responsible for input validation etc. For testing on a Unix machine we use the Xvfb server as the DISPLAY when running the tests. A: Try the hallway usability test. It's cheap and useful: go to the nearest hallway, grab the first person that passes, make them sit at your computer and use your software. Watch over their shoulder; you will see what they try to do, what frustrates them, and so on. Do this a few times and notice the patterns. A: What you're looking for is "acceptance testing." How you do it depends on the frameworks you're using, what type of application you are creating and in what language. If you google your particular technology and the above phrase, you should find some tools you can use. A: Don't miss the 'U' in 'GUI'. I mean: if what you're trying to test is that it all works right and works as it was planned to work, then you may follow Seb Rose's answer. But please don't forget that a USER interface has to be made with USERS in mind, and not ANY user but the TARGET USER the application was made for. So, after you are sure everything works as it has to, put every single view/screen/form in a test with a team made of users representing every group of different users that may use your application: advanced users, administrators, MS Office users, low computer profile users, high computer profile users... and then get the critiques of every user, make a mix, retouch your GUI if necessary, and go back to GUI user testing again. A: I've found WinTask to be a very good way to do GUI testing. Provided you don't constantly change the way the OS refers to each element of the UI, WinTask addresses the UI elements by name, so even if the layout changes, the UI elements can still be pressed / tweaked / selected. A: For SIMPLE web-based GUI testing, try iMacros (a simple Firefox plug-in; it has a cool feature to send the entire test to another person). Note that SIMPLE was spelled with initials...
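A bare-bones illustration of the dependency-injection/mock suggestion above (all interface and class names here are invented for the sketch, not taken from any real framework):

// The GUI code depends on an abstraction of the server...
public interface IOrderServer
{
    string Submit(string message);
}

// ...so a test can inject a canned stand-in instead of the real backend.
public class FakeOrderServer : IOrderServer
{
    public string Submit(string message)
    {
        return "ACK:" + message; // deterministic response for the test
    }
}

// The presenter receives the server through its constructor, so a test
// can drive the GUI logic without a live server on the other end.
public class OrderPresenter
{
    private readonly IOrderServer server;

    public OrderPresenter(IOrderServer server)
    {
        this.server = server;
    }

    public string Send(string message)
    {
        return server.Submit(message);
    }
}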
{ "language": "en", "url": "https://stackoverflow.com/questions/92059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Smallest Unicode encodings for different languages? What are the typical average bytes-per-character rates for different Unicode encodings in different languages? E.g. if I wanted the smallest number of bytes to encode some English text, then on average UTF-8 would be 1 byte per character and UTF-16 would be 2, so I'd pick UTF-8. If I wanted some Korean text, then UTF-16 might average about 2 per character but UTF-8 might average about 3 (I don't know, I'm just making up some illustrative numbers here). Which encodings yield the smallest storage requirements for different languages and character sets? A: For any given language, your bytes-per-character rates are fairly constant, because most languages are allocated to contiguous code pages. The big exception is accented Latin characters, which are allocated higher in the code space than the unaccented forms. I don't have hard numbers for these. For languages with contiguous character allocation, there is a table with detailed numbers for various languages on Wikipedia. In general, UTF-8 works well for most small character sets (except the ones allocated on high code pages), and UTF-16 is great for two-byte character sets. If you need denser compression, you may also want to look at Unicode Technical Note 14, which compares some special-purpose encodings designed to reduce data size for a variety of languages. But these techniques aren't especially common. A: UTF-8 is best for any character set where characters are primarily below U+0800. Otherwise, UTF-16. That is, UTF-8 for Latin, Greek, Cyrillic, Hebrew and Arabic, and a few others. In languages other than Latin, characters will take up the same space as they would in UTF-16, but you'll save bytes on punctuation and spacing. A: If you're really worried about string/character size, have you thought about compressing them? That would automatically reduce the string to its 'minimal' encoding. It's a layer of headache, especially if you want to do it in memory, and there are plenty of cases in which it wouldn't buy you anything, but encodings, especially, tend to be too general-purpose for the level of compactness you seem to be aiming for. A: In UTF-16, all the languages that matter (i.e. anything but Klingon, Elvish and other strange things) will be encoded into 2-byte chars. So the question is to find the languages whose glyphs will be 2-byte or 1-byte characters. In the Wikipedia page on UTF-8: http://en.wikipedia.org/wiki/Utf-8 We see that a character with a Unicode index of 0x0800 or more will be at least 3 bytes long in UTF-8. Knowing that, you just need to look at the code charts on unicode: http://www.unicode.org/charts/ for the languages that comply with your requirements. :-) Now, note that, depending on the framework you're using, the choice could well be not yours to make: * *On the Windows API, Unicode is handled by wchar_t chars, and is UTF-16 *On Linux, Unicode is handled by char, and is UTF-8 *Java is internally UTF-16, as are most compliant XML parsers *I was told (at some tech meeting I was not interested in... sorry...) that UTF-8 was the encoding of choice for databases. So, pick your poison... :-) A: I don't know exact figures, but for Japanese, Shift_JIS averages fewer bytes per character than UTF-8, and so does EUC-JP, since they're optimised for Japanese text. However, they don't cover the same space of code points as Unicode, so they might not be correct answers to your question.
UTF-16 is better than UTF-8 for Japanese characters (2 bytes per char as opposed to 3), but worse than UTF-8 if there are a lot of 7-bit chars. It depends on the context - technical text is more likely to contain a lot of chars in the 1-byte range. A classical Japanese text might not have any. Note that for transport, the encoding doesn't matter much if you can zip (gzip, bz2) the data. Code points for an alphabet in Unicode are close together, so you'd expect common prefixes with very short representations in the compressed data. UTF-8 is usually good for representation in memory, since it's often more compact than UTF-32 or UTF-16, and is compatible with functions on char* which 'expect' ASCII or ISO-8859-1 NUL-terminated strings. It's useless if you need random access to characters by index, though. If you don't care about non-BMP characters, UCS-2 is always 2 bytes per character and so offers random access. But that depends on what you mean by 'Unicode'. A: UTF-8 There is a very good article about Unicode on JoelOnSoftware: The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
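If you want to check the averages for your own text, here is a small hedged sketch in C# (the sample strings are arbitrary):

using System;
using System.Text;

class EncodingSizes
{
    static void Main()
    {
        string[] samples = { "Hello, world", "Ελληνικά", "안녕하세요" };
        foreach (string s in samples)
        {
            // Encoding.Unicode is UTF-16 (little-endian)
            Console.WriteLine("{0}: {1} chars, UTF-8 = {2} bytes, UTF-16 = {3} bytes",
                s, s.Length,
                Encoding.UTF8.GetByteCount(s),
                Encoding.Unicode.GetByteCount(s));
        }
    }
}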
{ "language": "en", "url": "https://stackoverflow.com/questions/92073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to use the node-set function in a platform-independent way? I'm writing an XSLT file which I want to use under Linux and Windows. In this file I use the node-set function, which is declared in different namespaces for MSXML and xsltproc ("urn:schemas-microsoft-com:xslt" and "http://exslt.org/common" respectively). Is there any platform-independent way of using node-set? A: You can use the function function-available() to determine which function you should use: <xsl:choose> <xsl:when test="function-available('exslt:node-set')"> <xsl:apply-templates select="exslt:node-set($nodelist)" /> </xsl:when> <xsl:when test="function-available('msxsl:node-set')"> <xsl:apply-templates select="msxsl:node-set($nodelist)" /> </xsl:when> <!-- etc --> </xsl:choose> You can even wrap this logic in a named template and call it with the nodeset as a parameter (a sketch follows at the end of this thread). A: Yes, there is a good and universal solution. EXSLT's function common:node-set() can be implemented as an inline Javascript function and is thus available with any browser that supports Javascript (practically all major browsers without exception). This technique was first discovered by Julian Reschke and, after he published it on the xsl-list, was publicized by David Carlisle. On the blog of David Carlisle there is also a link to a test page that shows whether the common:node-set() function thus implemented works with the browser of your choice. To summarize: * *First go here and read the explanation. *Then try the test page. In particular, verify that it works with IE (that means with MSXML). *Finally, use the code. Do enjoy! A: EXSLT is "supposed to be" a platform-independent set of XSLT extensions, but only so far as various XSLT processors choose to implement them. There's some evidence that MSXML actually does support exsl:node-set(), but I don't know for sure. There is an old article discussing an implementation of EXSLT on top of MSXML. Otherwise, I think function-available() is your friend :) A: Firefox 3 implements node-set (as part of the EXSLT 2.0 namespace improvements) in its client-side XSLT processing. Maybe not quite the answer you were looking for - but it could be, depending on the context of your problem. ;-) A: If there is no particular reason to use the MSXML implementation of node-set on Windows, you could use the EXSLT one everywhere by including the implementation downloaded from http://exslt.org with your stylesheet; the EXSLT howto describes the needed steps. You can use either the "Extension namespaces" way or the "Named templates" way.
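A sketch of the named-template wrapper mentioned in the first answer (the template and parameter names are made up; both extension namespaces must be declared on the stylesheet element so that function-available() can resolve the prefixes):

<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:exslt="http://exslt.org/common"
    xmlns:msxsl="urn:schemas-microsoft-com:xslt">

  <xsl:template name="apply-to-nodeset">
    <xsl:param name="nodelist"/>
    <xsl:choose>
      <xsl:when test="function-available('exslt:node-set')">
        <xsl:apply-templates select="exslt:node-set($nodelist)"/>
      </xsl:when>
      <xsl:when test="function-available('msxsl:node-set')">
        <xsl:apply-templates select="msxsl:node-set($nodelist)"/>
      </xsl:when>
    </xsl:choose>
  </xsl:template>

</xsl:stylesheet>

You would invoke it with <xsl:call-template name="apply-to-nodeset"> plus an <xsl:with-param name="nodelist" select="$yourResultTree"/>, where $yourResultTree is whatever result-tree-fragment variable you need converted.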
{ "language": "en", "url": "https://stackoverflow.com/questions/92076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Borland C++ Builder 6 always compiles all files Why does C++ Builder 6 always compile all files? I make some changes to one file but BCB 6 compiles all files when I start the app. Any idea? I use Windows XP SP2. A: Are your source files and binary objects located on the same machine? If not, it sounds like you have a network time sync issue. If they are, it's most likely a header file issue: either the compiler include files have a modified date some time in the future, or your application is dependent on some header file that changes during compilation, say from a COM import. EDIT: Check the settings: VS has a flag to always recompile, and this might be true for BCB too; if set, then unset it. Another possibility is that pre-compiled headers are misconfigured to regenerate on every source file. I am not familiar enough with BCB 6 to give a more precise answer. A: Try this plugin for the BCB compiler: Bcc32Pch IDE Plugin A: Have you made all or many of your files dependent on a particular module? Any files that are dependent on a particular module will be rebuilt any time the module class structure (contained in the .h file) is modified. If, for example, you have a data module that is accessed by many other modules, you will see a rebuild of all dependent modules each time the data module's class structure is modified. A: There is a pragma in Borland which controls how much code is recompiled. In past years I have managed (in some projects) to get only my changed sources compiled. I don't know if this works in newer versions of Borland. Borland 6 has the pragma "hdrstop". This is only active if the project option "Pre-Compiled headers" is NOT "none". Years ago I had a very slow computer, and I accelerated the compilation time from hours to minutes with the following trick: all cpps got this as their first line #include "all.h" #pragma hdrstop The default was an include of "vcl.h". "all.h" includes all headers which are needed in all! units. Every unit will skip recompiling all sources which depend on headers before #pragma hdrstop. Example: Unit1.h #include <string> Unit1.cpp #include "all.h" #pragma hdrstop #include "Unit1.h" Unit2.h #include <vcl> Unit2.cpp #include "all.h" #pragma hdrstop #include "Unit2.h" all.h #include <string> #include <vcl> Important: *Don't use all.h in header files. *You can add all the includes which are used in the project to this header, like <string> or <vcl>. *All sources which depend on the "pre-compiled headers" will not be compiled again! *Generation of the precompiled headers will be slow! So only add headers to all.h which will not be changed often, like system headers or your headers which are already finished. *Compilation can fail: sometimes the order of the includes produces a "deadlock" for the compilation. If that happens, deactivate "pre-compiled headers". Most problems will be solved if you write your C++ like in Java: every class gets its own files (cpp and h). *The filename in the project option "Pre-Compiled headers" gives the base name of the real precompiled files. A unit can share a precompiled file with another unit if it has (exactly) the same includes before "#pragma hdrstop". Best performance is reached if you have only one such file with a numeric postfix. Example for more than one precompiled header: Unit1.h #include <string> Unit1.cpp #include "all.h" #pragma hdrstop #include "Unit1.h" Unit2.h #include <vcl> Unit2.cpp #include <vcl> //!!! this produces a second version of the precompiled file #pragma hdrstop #include "Unit2.h" all.h #include <string> #include <vcl> A: Make sure you are using the "make" command and not the "build" command, unless it is required. Making a project with the Borland tools has always seemed to have that issue -- that it doesn't necessarily notice which files have changed and starts to compile everything. Look at the Pre-Compiled Headers options, which may help speed things up. When Borland/CodeGear switched to the MSBuild system, starting in C++Builder 2007, compilations became much faster and more efficient.
{ "language": "en", "url": "https://stackoverflow.com/questions/92079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Add a column with a default value to an existing table in SQL Server How can I add a column with a default value to an existing table in SQL Server 2000 / SQL Server 2005? A: If you want to add multiple columns you can do it this way, for example: ALTER TABLE YourTable ADD Column1 INT NOT NULL DEFAULT 0, Column2 INT NOT NULL DEFAULT 1, Column3 VARCHAR(50) DEFAULT 'Hello' GO A: If the default is Null, then: * *In SQL Server, open the tree of the targeted table *Right-click "Columns" ==> New Column *Type the column Name, select the Type, and check the Allow Nulls checkbox *From the Menu Bar, click Save Done! A: You can use this query: ALTER TABLE tableName ADD ColumnName datatype DEFAULT DefaultValue; A: To add a column to an existing database table with a default value, we can use: ALTER TABLE [dbo.table_name] ADD [Column_Name] BIT NOT NULL Default ( 0 ) Here is another way to add a column to an existing database table with a default value. A much more thorough SQL script to add a column with a default value is below, including checking if the column exists before adding it, and also checking the constraint and dropping it if there is one. This script also names the constraint, so we can have a nice naming convention (I like DF_); if not, SQL will give us a constraint with a name which has a randomly generated number, so it's nice to be able to name the constraint too. ------------------------------------------------------------------------- -- Drop COLUMN -- Name of Column: Column_EmployeeName -- Name of Table: table_Emplyee -------------------------------------------------------------------------- IF EXISTS ( SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'table_Emplyee' AND COLUMN_NAME = 'Column_EmployeeName' ) BEGIN IF EXISTS ( SELECT 1 FROM sys.default_constraints WHERE object_id = OBJECT_ID('[dbo].[DF_table_Emplyee_Column_EmployeeName]') AND parent_object_id = OBJECT_ID('[dbo].[table_Emplyee]') ) BEGIN ------ DROP Constraint ALTER TABLE [dbo].[table_Emplyee] DROP CONSTRAINT [DF_table_Emplyee_Column_EmployeeName] PRINT '[DF_table_Emplyee_Column_EmployeeName] was dropped' END -- ----- DROP Column ----------------------------------------------------------------- ALTER TABLE [dbo].table_Emplyee DROP COLUMN Column_EmployeeName PRINT 'Column Column_EmployeeName in table_Emplyee table was dropped' END -------------------------------------------------------------------------- -- ADD COLUMN Column_EmployeeName IN table_Emplyee table -------------------------------------------------------------------------- IF NOT EXISTS ( SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'table_Emplyee' AND COLUMN_NAME = 'Column_EmployeeName' ) BEGIN ----- ADD Column & Constraint ALTER TABLE dbo.table_Emplyee ADD Column_EmployeeName BIT NOT NULL CONSTRAINT [DF_table_Emplyee_Column_EmployeeName] DEFAULT (0) PRINT 'Column [DF_table_Emplyee_Column_EmployeeName] in table_Emplyee table was Added' PRINT 'Constraint [DF_table_Emplyee_Column_EmployeeName] was Added' END GO These are two ways to add a column to an existing database table with a default value. A: Use: ALTER TABLE {TABLENAME} ADD {COLUMNNAME} {TYPE} {NULL|NOT NULL} CONSTRAINT {CONSTRAINT_NAME} DEFAULT {DEFAULT_VALUE} Reference: ALTER TABLE (Transact-SQL) (MSDN) A: Step 1: first you have to alter the table to add a field: ALTER TABLE table_name ADD field_name data_type Step 2: create the default: USE data_base_name; GO CREATE DEFAULT default_name AS 'default_value'; Step 3: then you have to execute this procedure: EXEC sp_bindefault 'default_name', 'schema_name.table_name.field_name' Example: USE master; GO EXEC sp_bindefault 'today', 'HumanResources.Employee.HireDate'; A: --Adding New Column with Default Value ALTER TABLE TABLENAME ADD COLUMNNAME DATATYPE NULL|NOT NULL DEFAULT (DEFAULT_VALUE) OR --Adding CONSTRAINT And Set Default Value on Column ALTER TABLE TABLENAME ADD CONSTRAINT [CONSTRAINT_Name] DEFAULT (DEFAULT_VALUE) FOR [COLUMNNAME] A: You can do this with T-SQL in the following way: ALTER TABLE {TABLENAME} ADD {COLUMNNAME} {TYPE} {NULL|NOT NULL} CONSTRAINT {CONSTRAINT_NAME} DEFAULT {DEFAULT_VALUE} You can also use SQL Server Management Studio, by right-clicking the table in the Design menu and setting the default value on the table there. And furthermore, if you want to add the same column (if it does not exist) to all tables in a database, then use: USE AdventureWorks; EXEC sp_msforeachtable 'PRINT ''ALTER TABLE ? ADD Date_Created DATETIME DEFAULT GETDATE();''' ; A: In SQL Server 2008 R2, I go to design mode - in a test database - and add my two columns using the designer and make the settings with the GUI; then the infamous right-click gives the option "Generate Change Script"! Bang, up pops a little window with, you guessed it, the properly formatted, guaranteed-to-work change script. Hit the easy button. A: Alternatively, you can add a default without having to explicitly name the constraint: ALTER TABLE [schema].[tablename] ADD DEFAULT ((0)) FOR [columnname] If you have an issue with existing default constraints when creating this constraint, then they can be removed by: alter table [schema].[tablename] drop constraint [constraintname] A: This can be done in the SSMS GUI as well. I show a default date below, but the default value can be whatever, of course. *Put your table in design view (right-click on the table in Object Explorer->Design) *Add a column to the table (or click on the column you want to update if it already exists) *In Column Properties below, enter (getdate()) or 'abc' or 0 or whatever value you want in the Default Value or Binding field: A: SQL Server + Alter Table + Add Column + Default Value uniqueidentifier... ALTER TABLE [TABLENAME] ADD MyNewColumn INT not null default 0 GO A: ALTER TABLE ADD ColumnName {Column_Type} Constraint The MSDN article ALTER TABLE (Transact-SQL) has all of the alter table syntax. A: Syntax: ALTER TABLE {TABLENAME} ADD {COLUMNNAME} {TYPE} {NULL|NOT NULL} CONSTRAINT {CONSTRAINT_NAME} DEFAULT {DEFAULT_VALUE} WITH VALUES Example: ALTER TABLE SomeTable ADD SomeCol Bit NULL --Or NOT NULL. CONSTRAINT D_SomeTable_SomeCol --When omitted, a Default-Constraint name is autogenerated. DEFAULT (0)--Optional Default-Constraint. WITH VALUES --Add if Column is Nullable and you want the Default Value for Existing Records. Notes: Optional Constraint Name: If you leave out CONSTRAINT D_SomeTable_SomeCol then SQL Server will autogenerate a Default-Constraint with a funny name like: DF__SomeTa__SomeC__4FB7FEF6 Optional With-Values Statement: The WITH VALUES is only needed when your Column is Nullable and you want the Default Value used for Existing Records.
If your Column is NOT NULL, then it will automatically use the Default Value for all Existing Records, whether you specify WITH VALUES or not. How Inserts work with a Default-Constraint: If you insert a Record into SomeTable and do not specify SomeCol's value, then it will Default to 0. If you insert a Record and specify SomeCol's value as NULL (and your column allows nulls), then the Default-Constraint will not be used and NULL will be inserted as the Value. Notes were based on everyone's great feedback below. Special thanks to: @Yatrix, @WalterStabosz, @YahooSerious, and @StackMan for their comments. A: ALTER TABLE Table1 ADD Col3 INT NOT NULL DEFAULT(0) A: In SQL Server, you can use the template below: ALTER TABLE {tablename} ADD {columnname} {datatype} DEFAULT {default_value} For example, to add a new column [Column1] of data type int with default value = 1 to an existing table [Table1], you can use the query below: ALTER TABLE [Table1] ADD [Column1] INT DEFAULT 1 A: OFFLINE and ONLINE pertain to how an ALTER TABLE is performed on NDB Cluster tables. NDB Cluster supports online ALTER TABLE operations using the ALGORITHM=INPLACE syntax in MySQL NDB Cluster 7.3 and later. NDB Cluster also supports an older syntax specific to NDB that uses the ONLINE and OFFLINE keywords. These keywords are deprecated beginning with MySQL NDB Cluster 7.3; they continue to be supported in MySQL NDB Cluster 7.4 but are subject to removal in a future version of NDB Cluster. IGNORE pertains to how the ALTER statement will deal with duplicate values in a column that has a newly added UNIQUE constraint. If IGNORE is not specified, the ALTER will fail and not be applied. If IGNORE is specified, the first row of all duplicate rows is kept, the rest deleted, and the ALTER applied. The ALTER_SPECIFICATION would be what you are changing: what column or index you are adding, dropping or modifying, or what constraints you are applying to the column. ALTER [ONLINE | OFFLINE] [IGNORE] TABLE tbl_name alter_specification [, alter_specification] ... alter_specification: ... ADD [COLUMN] (col_name column_definition,...) ... Eg: ALTER TABLE table1 ADD COLUMN foo INT DEFAULT 0; A: Example: ALTER TABLE [Employees] ADD Seniority int not null default 0 GO A: ALTER TABLE dataset.tablename ADD column_current_ind integer DEFAULT 0 A: When adding a nullable column, WITH VALUES will ensure that the specific DEFAULT value is applied to existing rows: ALTER TABLE table ADD column BIT -- Demonstration with NULL-able column added CONSTRAINT Constraint_name DEFAULT 0 WITH VALUES A: Example: ALTER TABLE tes ADD ssd NUMBER DEFAULT '0'; A: First create a table with the name student: CREATE TABLE STUDENT (STUDENT_ID INT NOT NULL) Add one column to it: ALTER TABLE STUDENT ADD STUDENT_NAME INT NOT NULL DEFAULT(0) SELECT * FROM STUDENT The table is created and a column is added to an existing table with a default value. A: This is for SQL Server: ALTER TABLE TableName ADD ColumnName (type) -- NULL OR NOT NULL DEFAULT (default value) WITH VALUES Example: ALTER TABLE Activities ADD status int NOT NULL DEFAULT (0) WITH VALUES If you want to add constraints then: ALTER TABLE Table_1 ADD row3 int NOT NULL CONSTRAINT CONSTRAINT_NAME DEFAULT (0) WITH VALUES A: This has a lot of answers, but I feel the need to add this extended method. This seems a lot longer, but it is extremely useful if you're adding a NOT NULL field to a table with millions of rows in an active database.
ALTER TABLE {schemaName}.{tableName} ADD {columnName} {datatype} NULL CONSTRAINT {constraintName} DEFAULT {DefaultValue} UPDATE {schemaName}.{tableName} SET {columnName} = {DefaultValue} WHERE {columnName} IS NULL ALTER TABLE {schemaName}.{tableName} ALTER COLUMN {columnName} {datatype} NOT NULL What this will do is add the column as a nullable field with the default value, update all fields to the default value (or you can assign more meaningful values), and finally change the column to be NOT NULL. The reason for this is that if you update a large-scale table and add a new NOT NULL field, it has to write to every single row and thereby will lock out the entire table as it adds the column and then writes all the values. This method adds the nullable column, which by itself operates a lot faster, then fills in the data before setting the NOT NULL status. I've found that doing the entire thing in one statement will lock out one of our more active tables for 4-8 minutes, and quite often I have killed the process. With this method, each part usually takes only a few seconds and causes minimal locking. Additionally, if you have a table in the area of billions of rows it may be worth batching the update like so: WHILE 1=1 BEGIN UPDATE TOP (1000000) {schemaName}.{tableName} SET {columnName} = {DefaultValue} WHERE {columnName} IS NULL IF @@ROWCOUNT < 1000000 BREAK; END A: Try this ALTER TABLE Product ADD ProductID INT NOT NULL DEFAULT(1) GO A: SQL Server + Alter Table + Add Column + Default Value uniqueidentifier ALTER TABLE Product ADD ReferenceID uniqueidentifier not null default (cast(cast(0 as binary) as uniqueidentifier)) A: --Adding Value with Default Value ALTER TABLE TestTable ADD ThirdCol INT NOT NULL DEFAULT(0) GO A: IF NOT EXISTS ( SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME ='TABLENAME' AND COLUMN_NAME = 'COLUMNNAME' ) BEGIN ALTER TABLE TABLENAME ADD COLUMNNAME NVARCHAR(MAX) NOT NULL DEFAULT ('') END A: Add a new column to a table: ALTER TABLE [table] ADD Column1 Datatype For example, ALTER TABLE [test] ADD ID Int If the user wants to make it auto-incremented then: ALTER TABLE [test] ADD ID Int IDENTITY(1,1) NOT NULL A: ALTER TABLE <table name> ADD <new column name> <data type> NOT NULL GO ALTER TABLE <table name> ADD CONSTRAINT <constraint name> DEFAULT <default value> FOR <new column name> GO A: Try the query below: ALTER TABLE MyTable ADD MyNewColumn DataType DEFAULT DefaultValue This will add a new column into the table. A: ALTER TABLE MYTABLE ADD MYNEWCOLUMN VARCHAR(200) DEFAULT 'SNUGGLES' A: This can be done by the code below. CREATE TABLE TestTable (FirstCol INT NOT NULL) GO ------------------------------ -- Option 1 ------------------------------ -- Adding New Column ALTER TABLE TestTable ADD SecondCol INT GO -- Updating it with Default UPDATE TestTable SET SecondCol = 0 GO -- Alter ALTER TABLE TestTable ALTER COLUMN SecondCol INT NOT NULL GO A: There are 2 different ways to address this problem. Both add a default value, but each gives a totally different meaning to the problem statement here. Let's start with creating some sample data.
Create Sample Data CREATE TABLE ExistingTable (ID INT) GO INSERT INTO ExistingTable (ID) VALUES (1), (2), (3) GO SELECT * FROM ExistingTable 1. Add Columns with Default Value for Future Inserts ALTER TABLE ExistingTable ADD ColWithDefault VARCHAR(10) DEFAULT 'Hi' GO So now that we have added a default column, when we insert a new record it will default its value to 'Hi' if a value is not provided: INSERT INTO ExistingTable(ID) VALUES (4) GO Select * from ExistingTable GO Well, this addresses our problem of having a default value, but here is a catch: what if we want the default value in all the rows, not just future inserts? For this we have Method 2. 2. Add Column with Default Value for ALL Inserts ALTER TABLE ExistingTable ADD DefaultColWithVal VARCHAR(10) DEFAULT 'DefaultAll' WITH VALUES GO Select * from ExistingTable GO The script above adds a new column with a default value applied in every possible scenario. Hope it adds value to the question asked. Thanks. A: The most basic version, with two lines only: ALTER TABLE MyTable ADD MyNewColumn INT NOT NULL DEFAULT 0 A: ALTER TABLE Protocols ADD ProtocolTypeID int NOT NULL DEFAULT(1) GO The inclusion of the DEFAULT fills the column in existing rows with the default value, so the NOT NULL constraint is not violated. A: Well, I now have a modification to my previous answer. I have noticed that none of the answers mentioned IF NOT EXISTS. So I am going to provide a new solution, as I have faced some problems altering the table. IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.columns WHERE table_name = 'TaskSheet' AND column_name = 'IsBilledToClient') BEGIN ALTER TABLE dbo.TaskSheet ADD IsBilledToClient bit NOT NULL DEFAULT ((1)) END GO Here TaskSheet is the particular table name, IsBilledToClient is the new column which you are going to insert, and 1 is the default value. That is the value the existing rows will get in the new column; it will be set there automatically. However, you can change it as you wish with respect to the column type; I have used BIT, so I put in a default value of 1. I suggest the above system because I have faced a problem. So what is the problem? The problem is, if the IsBilledToClient column does exist in the table, then if you execute only the portion of the code given below you will see an error in the SQL Server query builder. But if it does not exist, then there will be no error the first time you execute it. ALTER TABLE {TABLENAME} ADD {COLUMNNAME} {TYPE} {NULL|NOT NULL} CONSTRAINT {CONSTRAINT_NAME} DEFAULT {DEFAULT_VALUE} [WITH VALUES] A: ALTER TABLE <YOUR_TABLENAME> ADD <YOUR_COLUMNNAME> <DATATYPE> <NULL|NOT NULL> ADD CONSTRAINT <CONSTRAINT_NAME> ----OPTIONAL DEFAULT <DEFAULT_VALUE> If you do not give a constraint name then SQL Server uses a default name for it. Example: ALTER TABLE TEMP_TABLENAME ADD COLUMN1 NUMERIC(10,0) NOT NULL ADD CONSTRAINT ABCDE ----OPTIONAL DEFAULT (0) A: Beware when the column you are adding has a NOT NULL constraint, yet does not have a DEFAULT constraint (value). The ALTER TABLE statement will fail in that case if the table has any rows in it. The solution is to either remove the NOT NULL constraint from the new column, or provide a DEFAULT constraint for it. A: Use: -- Add a column with a default DateTime -- to capture when each record is added.
ALTER TABLE myTableName ADD RecordAddedDate SMALLDATETIME NULL DEFAULT (GETDATE()) GO A: ALTER TABLE tbl_table ADD int_column int NOT NULL DEFAULT(0) With this query you can add a column of datatype integer with default value 0. A: Right-click on the table name and click on Design, click under the last column name and enter the Column Name, Data Type, and Allow Nulls. Then at the bottom of the page set a default value or binding: something like '1' for a string or 1 for an int. A: SYNTAX: ALTER TABLE {TABLENAME} ADD {COLUMNNAME} {TYPE} {NULL|NOT NULL} CONSTRAINT {CONSTRAINT_NAME} DEFAULT {DEFAULT_VALUE} WITH VALUES EXAMPLE: ALTER TABLE Admin_Master ADD Can_View_Password BIT NULL CONSTRAINT DF_Admin_Master_Can_View_Password DEFAULT (1) WITH VALUES A: For Oracle Toad users: ALTER TABLE YOUR_SCHEMA.YOUR_TABLENAME ADD YOUR_COLUMNNAME VARCHAR2(100 CHAR); COMMIT;
{ "language": "en", "url": "https://stackoverflow.com/questions/92082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3178" }
Q: Removing leading zeroes from a field in a SQL statement I am working on a SQL query that reads from a SQL Server database to produce an extract file. One of the requirements is to remove the leading zeroes from a particular field, which is a simple VARCHAR(10) field. So, for example, if the field contains '00001A', the SELECT statement needs to return the data as '1A'. Is there a way in SQL to easily remove the leading zeroes in this way? I know there is an RTRIM function, but this seems only to remove spaces. A: You can use this: SELECT REPLACE(LTRIM(REPLACE('000010A', '0', ' ')),' ', '0') A: I had the same need and used this: select case when left(column,1) = '0' then right(column, (len(column)-1)) else column end A: select substring(substring('B10000N0Z', patindex('%[0]%','B10000N0Z'), 20), patindex('%[^0]%',substring('B10000N0Z', patindex('%[0]%','B10000N0Z'), 20)), 20) returns N0Z; that is, it will get rid of leading zeroes and anything that comes before them. A: select replace(ltrim(replace(ColumnName,'0',' ')),' ','0') A: If you want the query to return a 0 instead of a string of zeroes, or any other value for that matter, you can turn this into a case statement like this: select CASE WHEN ColumnName = substring(ColumnName, patindex('%[^0]%',ColumnName), 10) THEN '0' ELSE substring(ColumnName, patindex('%[^0]%',ColumnName), 10) END A: select substring(ColumnName, patindex('%[^0]%',ColumnName), 10) A: In case you want to remove the leading zeros from a string with an unknown size, you may consider using the STUFF command. Here is an example of how it would work: SELECT ISNULL(STUFF(ColumnName ,1 ,patindex('%[^0]%',ColumnName)-1 ,'') ,REPLACE(ColumnName,'0','') ) See the various scenarios it covers in the fiddle: https://dbfiddle.uk/?rdbms=sqlserver_2012&fiddle=14c2dca84aa28f2a7a1fac59c9412d48 A: You can try this - it takes special care to only remove leading zeroes if needed: DECLARE @LeadingZeros VARCHAR(10) ='-000987000' SET @LeadingZeros = CASE WHEN PATINDEX('%-0', @LeadingZeros) = 1 THEN @LeadingZeros ELSE CAST(CAST(@LeadingZeros AS INT) AS VARCHAR(10)) END SELECT @LeadingZeros Or you can simply call CAST(CAST(@LeadingZeros AS INT) AS VARCHAR(10)) A: Here is a SQL scalar-valued function that removes leading zeros from a string: SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO -- ============================================= -- Author: Vikas Patel -- Create date: 01/31/2019 -- Description: Remove leading zeros from string -- ============================================= CREATE FUNCTION dbo.funRemoveLeadingZeros ( -- Add the parameters for the function here @Input varchar(max) ) RETURNS varchar(max) AS BEGIN -- Declare the return variable here DECLARE @Result varchar(max) -- Add the T-SQL statements to compute the return value here SET @Result = @Input WHILE LEFT(@Result, 1) = '0' BEGIN SET @Result = SUBSTRING(@Result, 2, LEN(@Result) - 1) END -- Return the result of the function RETURN @Result END GO A: select ltrim('000045', '0') from dual; LTRIM ----- 45 This should do (note that this two-argument LTRIM and "from dual" are Oracle syntax). A: I borrowed from ideas above. This is neither fast nor elegant, but it is accurate.
CASE WHEN left(column, 3) = '000' THEN right(column, (len(column)-3)) WHEN left(column, 2) = '00' THEN right(column, (len(column)-2)) WHEN left(column, 1) = '0' THEN right(column, (len(column)-1)) ELSE column END A: select CASE WHEN TRY_CONVERT(bigint,Mtrl_Nbr) = 0 THEN '' ELSE substring(Mtrl_Nbr, patindex('%[^0]%',Mtrl_Nbr), 18) END A: You can try this: SELECT REPLACE(columnname,'0','') FROM table A: To remove the leading 0 from the month, the following statement will definitely work: SELECT replace(left(Convert(nvarchar,GETDATE(),101),2),'0','')+RIGHT(Convert(nvarchar,GETDATE(),101),8) Just replace GETDATE() with the date field of your table. A: To remove leading zeros, you can multiply the number column by 1, e.g.: Select (ColumnName * 1)
{ "language": "en", "url": "https://stackoverflow.com/questions/92093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "109" }
Q: Is it possible to set code behind a resource dictionary in WPF for event handling? Is it possible to set code behind a resource dictionary in WPF? For example, in a UserControl you declare a button in XAML. The event handling code for the button click is done in the code file behind the control. If I were to create a data template with a button, how can I write the event handler code for its button click within the resource dictionary? A: I disagree with "ageektrapped"... using the method of a partial class is not a good practice. What would be the purpose of separating the dictionary from the page then? From a code-behind, you can access an x:Name element by using: Button myButton = this.GetTemplateChild("ButtonName") as Button; if(myButton != null){ ... } You can do this in the OnApplyTemplate method if you want to hook up to controls when your custom control loads. OnApplyTemplate needs to be overridden to do this. This is a common practice and allows your style to stay disconnected from the control. (The style should not depend on the control, but the control should depend on having a style.) A: Gishu - whilst this might seem to be a "generally not to be encouraged" practice, here is one reason you might want to do it: The standard behaviour for text boxes when they get focus is for the caret to be placed at the same position it was at when the control lost focus. If you would prefer that, throughout your application, the whole content of a textbox is highlighted when the user tabs to it, then adding a simple handler in the resource dictionary would do the trick. Any other case where you want the default user interaction behaviour to be different from the out-of-the-box behaviour seems like a good candidate for a code-behind in a resource dictionary. I totally agree that anything which is application-functionality specific ought not to be in the code-behind of a resource dictionary. A: I think what you're asking is that you want a code-behind file for a ResourceDictionary. You can totally do this! In fact, you do it the same way as for a Window: Say you have a ResourceDictionary called MyResourceDictionary. In your MyResourceDictionary.xaml file, put the x:Class attribute in the root element, like so: <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" x:Class="MyCompany.MyProject.MyResourceDictionary" x:ClassModifier="public"> Then, create a code-behind file called MyResourceDictionary.xaml.cs with the following declaration: namespace MyCompany.MyProject { partial class MyResourceDictionary : ResourceDictionary { public MyResourceDictionary() { InitializeComponent(); } ... // event handlers ahead... } } And you're done. You can put whatever you wish in the code-behind: methods, properties and event handlers.
== Update for Windows 10 apps == And just in case you are playing with UWP, there is one more thing to be aware of: <Application x:Class="SampleProject.App" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:rd="using:MyCompany.MyProject"> <!-- no need for x:ClassModifier="public" in the header above --> <Application.Resources> <ResourceDictionary> <ResourceDictionary.MergedDictionaries> <!-- This will NOT work --> <!-- <ResourceDictionary Source="/MyResourceDictionary.xaml" />--> <!-- Create an instance of your custom dictionary instead of the above source reference --> <rd:MyResourceDictionary /> </ResourceDictionary.MergedDictionaries> </ResourceDictionary> </Application.Resources> </Application> A: Adding on... these days, with the advent of {x:Bind ...}, if you want to put your DataTemplate into a shared ResourceDictionary file, you are required to give that file a code-behind. A: XAML is for constructing object graphs, not containing code. A data template is used to indicate how a custom user object is to be rendered on screen (e.g. if it is a listbox item); behavior is not part of a data template's area of expertise. Redraw the solution...
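Tying this back to the original data-template question, here is a minimal hedged sketch of a button declared in the dictionary and handled in its code-behind (the names are illustrative):

<!-- In MyResourceDictionary.xaml -->
<DataTemplate x:Key="PersonTemplate">
    <Button Content="{Binding Name}" Click="PersonButton_Click" />
</DataTemplate>

// In MyResourceDictionary.xaml.cs (inside the partial class shown above)
private void PersonButton_Click(object sender, System.Windows.RoutedEventArgs e)
{
    // sender is the Button instance from whichever item raised the event;
    // its DataContext is the data item this template instance renders.
    var button = (System.Windows.Controls.Button)sender;
    object item = button.DataContext;
}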
{ "language": "en", "url": "https://stackoverflow.com/questions/92100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "156" }
Q: MySQL slow query log - how slow is slow? What do you find is the optimal setting for the MySQL slow query log parameter, and why? A: Whatever time /you/ feel is unacceptably slow for a query on your systems. It depends on the kind of queries you run and the kind of system; a query taking several seconds might not matter if it's some back-end reporting system doing complex data-mining where a delay doesn't matter, but might be completely unacceptable on a user-facing system which is expected to return results promptly. A: Set it to whatever you like. The only problem is that in a stock MySQL, it can only be set in increments of 1 second, which is too coarse for some people. Most heavily used production servers execute far too many queries to log them all. The slow log is a way of filtering the log so that we can see the ones which take a long time (most queries are likely to be executed almost instantly). It's a bit of a blunt instrument. Set it to 1 sec if you like; you're probably not going to run out of disc space or create a performance problem by doing that. It's really about the risk of enabling the slow log - don't do it if you feel it's likely to cause further disc or performance problems. Of course you could enable the slow log on a non-production server and put simulated load through, but that is never quite the same. A: Peter Zaitsev posted a nice article about using the slow query log. One thing he notes as important is to also consider how often a certain query is used. Reports run once a day are not important to be fast. But something that is run very often might be a problem even if it takes half a second. And you can't detect that without the microslow patch. A: I recommend these three lines log_slow_queries set-variable = long_query_time=1 log-queries-not-using-indexes The first and second will log any query over a second. As others have pointed out, a one-second query is pretty far gone if you are shooting for a high transaction rate on your website, but I find that it turns up some real WTFs: queries that should be fast, but for whatever combination of data they were run against, were not. The last will log any query that does not use an index. Unless you're doing data warehousing, any common query should have the best index you can find, so pay attention to its output. Although it's certainly not for production, this last option log = /var/log/mysql/mysql.log will log all queries, which can be useful if you are trying to tune a specific page or action. A: Not only is it a blunt instrument as far as resolution is concerned, but it is also MySQL-instance wide, so that if you have different databases with differing performance requirements you're kind of out of luck. Obviously there are ways around that, but it's important to keep that in mind when setting your slow log setting. Aside from the performance requirements of your application, another factor to consider is what you're trying to log. Are you using the log to catch queries that would threaten the stability of your db instance (ones that cause deadlocks or Cartesian joins, for instance) or queries that affect the performance for specific users and that might require a little tuning? That will influence where you set your threshold.
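Whatever threshold you pick, the mysqldumpslow script that ships with MySQL is a handy way to digest the result. A hedged example (the log path is illustrative and depends on your configuration):

# Summarize the slow log, sorted by total query time, top 10 query patterns
mysqldumpslow -s t -t 10 /var/log/mysql/mysql-slow.log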
{ "language": "en", "url": "https://stackoverflow.com/questions/92103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: How do you get through the inevitable motivational "slump" near the end of projects? When working on a project, after the interesting parts are coded, my motivation is severely diminished. What do you do to get over this problem?
A: Don't leave all the "boring" bits to the end: make sure that each component works, with regression tests and documentation, as early as possible in the project. That said, the last few weeks are still going to involve chasing down the really elusive bugs, dealing with last-second requirements changes, finalising the documentation, and generally getting the damn thing out of the door. My approach is just to suck it up: put your head down and know that the sooner it's done, the sooner you can start on all the lower-priority, more interesting things that have been queued behind the current release. You can't completely avoid last-minute requirements/docs changes other than by arranging for your customers to all be on holiday just before release. Or get yourself into a dominating position like Apple and Google, so that customers have no prior knowledge of releases. You "should" chase elusive bugs (by which I mean the ones so hard to reproduce that you don't have a consistent test case) early, because you cannot estimate how long they will take to fix. But in practice some proportion of them will become less elusive as the project goes on, or turn out to be side effects of another known issue, so on average you save time by giving them a limited chance to do so. The downside is that towards the end there will be a few left; if there are more than about two, though, you've done it wrong. Taking a short "break" after a major deadline to do whatever you find most fun is a good way to avoid burnout in the long run. Even if you end up throwing most of it away because you skipped some difficult planning, you'll have made yourself more productive.
A: Use test-driven development. A failing test is always a strong motivation.
A: Let some testers loose on it. Nothing is more motivating than seeing people use your interesting bits and finding obvious improvements.
A: Repeat to myself: my code doesn't exist until it is checked in. Or, if you're not using version control, "until it is published" or "until it is launched." You could also use fear and say that if YOU don't finish and launch it, someone else will.
A: Usually I try to tell myself that getting things to work in the real world is just as interesting, because that is where your code earns its credit and gets improved by discovered bugs and feature requests.
A: Don't do all the interesting parts first. I motivate myself to do the boring code by always leaving a decent bit until last and being strict about completing the boring section first.
A: "If YOU don't finish and launch it, someone else will." I've told myself that one before. Sometimes, however, it's good to take a break for a couple of hours and then come back to it; then you are not as burned out on it as you were.
A: I try to push the concept of bug days/evenings. Set a target of bugs/issues to address, and when you hit that number everyone gets to go out for (paid-for!) pizza and beers. It keeps the morale of the team up and acts as a focus in an otherwise boring period. You can also add prizes/kudos for the best piece of refactoring or performance improvement, etc.
A: I agree it is tough. The only thing that keeps me going is to keep in mind the feeling I would have after seeing it complete/shipped/in the hands of customers.
A: My motivation is just to Get It Done. Like onebyone said, you just have to hunker down and do it. It's all a matter of priorities: the quicker the priorities are out of the way, the sooner you can get back to the interesting stuff.
A:
* Try to see if you can take a very short break for a day or two and come back more refreshed.
* Don't leave the boring bits to the end.
* Test it yourself.
* Make sure your diet/exercise/sleep levels don't slip.
* Tell the others that you are feeling a bit down; can you swap areas of work for a day?
A: In general, when you've done 90% of the work it seems almost finished, but you still have to do the last 90% :-) Always keep that in mind, and you'll see there's still a long way to go until it's working.
A: I'm happy doing the creative, fun bits of programming. But after that, I think about making the user happy.
{ "language": "en", "url": "https://stackoverflow.com/questions/92113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }