Q: What's a good f/oss GDI+ (System.Drawing) based graphing and charting component for .NET? I want to create basic line, bar, pie and area charts for .NET 3.5 WinForms and I'm interested in finding a free, mature, open-source .NET (preferably C# based) project to help me accomplish that. I would consider a WPF based project, however I'm more comfortable in GDI+ so I'd rather it used System.Drawing and/or GDI interop as its base technology. Thanks! A: ZedGraph. 'Nuff said. A: This one's quite nice: zed graph A: Take a look at ZedGraph: http://zedgraph.org/wiki/index.php?title=Main_Page
{ "language": "en", "url": "https://stackoverflow.com/questions/99869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Selenium Critique I just wanted some opinions from people that have run Selenium (http://selenium.openqa.org) I have had a lot of experience with WaTiN and even wrote a recording suite for it. I had it producing some well-structured code but being only maintained by me it seems my company all but abandoned it. If you have run selenium have you had a lot of success? I will be using .NET 3.5, does Selenium work well with it? Is the code produced clean or simply a list of all the interaction? (http://blogs.conchango.com/richardgriffin/archive/2006/11/14/Testing-Design-Pattern-for-using-WATiR_2F00_N.aspx) How well does the distributed testing suite fair? Any other gripes or compliments on the system would be greatly appreciated! A: I'm using Selenium Remote Control in order to test ASP.Net apps (which is what I'm assuming you'll be targetting as well), and it works great. If you've never used Selenium, watch some of the screencasts for using Selenium IDE. This will give you a good idea of how 'Selenium' works. The IDE is a firefox plugin that basically lets you develop quick record-and-play tests as you go. For larger test suites or for writing really maintainable tests though, I'd recommend Selenium Remote Control. (The IDE is terrific if you're just getting a start though.) Selenium Remote Control lets you use your favourite language and unit testing framework to drive a web browser in order to execute your tests. If you're most comfortable with C#/NUnit, you can write your tests that way and use all the NUnit goodies that you like. (For example, the Test-Driven.net plugin). Also, since your tests are written in a high level language, you're able to do things like inherit from a particular test class which you can use to make your actual test method code much cleaner. (Or at least thats the way I write my tests. It lets me test complex scenarios which keeping my test method line-count at a reasonable number.) You mention distributed testing. Unfortunately I haven't found a way to use the Selenium Grid project with NUnit. Selenium Grid allows you to execute your test suite over a number different machines and browser instances. So rather than running through say 200 test methods one after another (ie, serially), you could spread the load out over say four Grid instances (ie, running in four different browsers instances at a time) on a single machine or multiple machines depending on how distributed you want to get. If you write your tests in Java or PHP though, you might have better luck. I'm expecting this to be available via NUnit with the release of NUnit2.5 which will include pNUnit for parallel testing. If you have any further questions about selenium, just clarify your original question and I'll be happy to try and help you out. (Selenium is just one of those tools that I use everyday so I enjoy helping to get new people started with it..) A: I started out with Selenium IDE and Selenium Core. Those are definitely good tools to get you started. But they're not very powerful, since you can only use Selenese, Selenium's HTML-based command-by-command language. Now I use Selenium Remote Control with the Ruby driver, which allows me to utilize what Ruby offers. I test many environments: Windows 2000, XP, Vista, Mac 10.4/10.5, and for each of those that apply, Safari 2/3, Firefox 2/3, Internet Explorer 6/7. 
Selenium claims to be compatible with all those OS's and browsers, though I'm having problems currently with Internet Explorer (my first question on StackOverflow is about that, actually). But I don't know of any other tools that are this powerful and works with so many platforms. The biggest problem I've had with Selenium is the DOM parsing. JavaScript's childNodes is unreliable because Safari/Firefox ignore whitespace & comment nodes, while Internet Explorer doesn't. XPath in Internet Explorer is 10-20 times slower than in SF/FF. innerHTML isn't always reliable in IE. A: If you are using Selenium IDE to generate code, then you just get a list of every action that selenium will execute. To me, Selenium IDE is a good way to start or do a fast "try and see" test. But, when you think about maintainability and more readable code, you must write your own code. A good way to achieve good selenium code is to use the Page Object Pattern in a way that the code represents your navigation flow. Here is a good example that I see in Coding Dojo Floripa (from Brazil): public class GoogleTest { private Selenium selenium; @Before public void setUp() throws Exception { selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://www.google.com/webhp?hl=en"); selenium.start(); } @Test public void codingDojoShouldBeInFirstPageOfResults() { GoogleHomePage home = new GoogleHomePage(selenium); GoogleSearchResults searchResults = home.searchFor("coding dojo"); String firstEntry = searchResults.getResult(0); assertEquals("Coding Dojo Wiki: FrontPage", firstEntry); } @After public void tearDown() throws Exception { selenium.stop(); } } public class GoogleHomePage { private final Selenium selenium; public GoogleHomePage(Selenium selenium) { this.selenium = selenium; this.selenium.open("http://www.google.com/webhp?hl=en"); if (!"Google".equals(selenium.getTitle())) { throw new IllegalStateException("Not the Google Home Page"); } } public GoogleSearchResults searchFor(String string) { selenium.type("q", string); selenium.click("btnG"); selenium.waitForPageToLoad("5000"); return new GoogleSearchResults(string, selenium); } } public class GoogleSearchResults { private final Selenium selenium; public GoogleSearchResults(String string, Selenium selenium) { this.selenium = selenium; if (!(string + " - Google Search").equals(selenium.getTitle())) { throw new IllegalStateException( "This is not the Google Results Page"); } } public String getResult(int i) { String nameXPath = "xpath=id('res')/div[1]/div[" + (i + 1) + "]/h2/a"; return selenium.getText(nameXPath); } } Hope that Helps A: Selenium is pretty decent tool but there are couple things to watch out: * *Selenium IDE and Selenium core do not share 100% same functionality. For example right clicking is supported by IDE but current core release does not have it. However, using a newer version from their repository solves that. *In case of ext js, gwt etc make sure you have proper IDs for your display elements instead of automatically generated (random) ones. *Maintaining test cases. I have seen cases where a lot of effort was put on Selenium tests and good coverage. Later on tests started to fail as the person creating them was busy with other tasks and no-one else wanted to touch them. But this was issue with management, not Selenium. A: I'm a big fan of Selenium. One major gotcha that's good to know about ahead of time, though, is that Selenium IDE has a lot of trouble with pop-up windows. 
These problems don't persist in Selenium RC, but it can make development a bit of a headache.
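To make the Remote Control workflow described above concrete, here is a minimal page-object-style sketch using the legacy Python RC client (the old selenium module, not WebDriver). It assumes an RC server is listening on localhost:4444 and mirrors the Java example above, so the class name, locators and search term are illustrative only.

from selenium import selenium  # legacy Selenium RC client, not WebDriver


class GoogleHomePage(object):
    """Page object wrapping the Google home page, as in the Java example."""

    def __init__(self, sel):
        self.sel = sel
        self.sel.open("http://www.google.com/webhp?hl=en")

    def search_for(self, terms):
        # Locators copied from the Java sample; adjust them for the real page.
        self.sel.type("q", terms)
        self.sel.click("btnG")
        self.sel.wait_for_page_to_load("5000")
        return self.sel.get_title()


if __name__ == "__main__":
    sel = selenium("localhost", 4444, "*firefox", "http://www.google.com/")
    sel.start()
    try:
        print(GoogleHomePage(sel).search_for("coding dojo"))
    finally:
        sel.stop()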
{ "language": "en", "url": "https://stackoverflow.com/questions/99876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Generating a unique machine id I need to write a function that generates an id that is unique for a given machine running a Windows OS. Currently, I'm using WMI to query various hardware parameters and concatenate them together and hash them to derive the unique id. My question is, what are the suggested parameters I should use? Currently, I'm using a combination of bios\cpu\disk data to generate the unique id. And am using the first result if multiple results are there for each metric. However, I ran into an issue where a machine that dual boots into 2 different Windows OS generates different site codes on each OS, which should ideally not happen. For reference, these are the metrics I'm currently using: Win32_Processor:UniqueID,ProcessorID,Name,Manufacturer,MaxClockSpeed Win32_BIOS:Manufacturer Win32_BIOS:SMBIOSBIOSVersion,IdentificationCode,SerialNumber,ReleaseDate,Version Win32_DiskDrive:Model, Manufacturer, Signature, TotalHeads Win32_BaseBoard:Model, Manufacturer, Name, SerialNumber Win32_VideoController:DriverVersion, Name A: I had the same problem and after a little research I decided the best would be to read MachineGuid in registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography, as @Agnus suggested. It is generated during OS installation and won't change unless you make another fresh OS install. Depending on the OS version it may contain the network adapter MAC address embedded (plus some other numbers, including random), or a pseudorandom number, the later for newer OS versions (after XP SP2, I believe, but not sure). If it's a pseudorandom theoretically it can be forged - if two machines have the same initial state, including real time clock. In practice, this will be rare, but be aware if you expect it to be a base for security that can be attacked by hardcore hackers. Of course a registry entry can also be easily changed by anyone to forge a machine GUID, but what I found is that this would disrupt normal operation of so many components of Windows that in most cases no regular user would do it (again, watch out for hardcore hackers). A: With our licensing tool we consider the following components * *MAC Address *CPU (Not the serial number, but the actual CPU profile like stepping and model) *System Drive Serial Number (Not Volume Label) *Memory *CD-ROM model & vendor *Video Card model & vendor *IDE Controller *SCSI Controller However, rather than just hashing the components and creating a pass/fail system, we create a comparable fingerprint that can be used to determine how different two machine profiles are. If the difference rating is above a specified tolerance then ask the user to activate again. We've found over the last 8 years in use with hundreds of thousands of end-user installs that this combination works well to provide a reliably unique machine id - even for virtual machines and cloned OS installs. A: What about just using the UniqueID of the processor? A: Parse the SMBIOS yourself and hash it to an arbitrary length. See the PDF specification for all SMBIOS structures available. To query the SMBIOS info from Windows you could use EnumSystemFirmwareEntries, EnumSystemFirmwareTables and GetSystemFirmwareTable. IIRC, the "unique id" from the CPUID instruction is deprecated from P3 and newer. A: You should look into using the MAC address on the network card (if it exists). Those are usually unique but can be fabricated. 
I've used software that generates its license file based on your network adapter MAC address, so it's considered a fairly reliable way to distinguish between computers. A: I hate to be the guy who says, "you're just doing it wrong" (I always hate that guy ;) but... Does it have to be repeatably generated for the unique machine? Could you just assign the identifier or do a public/private key? Maybe if you could generate and store the value, you could access it from both OS installs on the same disk? You've probably explored these options and they doesn't work for you, but if not, it's something to consider. If it's not a matter of user trust, you could just use MAC addresses. A: In my program I first check for Terminal Server and use the WTSClientHardwareId. Else the MAC address of the local PC should be adequate. If you really want to use the list of properties you provided leave out things like Name and DriverVersion, Clockspeed, etc. since it's possibly OS dependent. Try outputting the same info on both operating systems and leave out that which differs between. A: For one of my applications, I either use the computer name if it is non-domain computer, or the domain machine account SID for domain computers. Mark Russinovich talks about it in this blog post, Machine SID: The final case where SID duplication would be an issue is if a distributed application used machine SIDs to uniquely identify computers. No Microsoft software does so and using the machine SID in that way doesn’t work just for the fact that all DC’s have the same machine SID. Software that relies on unique computer identities either uses computer names or computer Domain SIDs (the SID of the computer accounts in the Domain). You can access the domain machine account SID via LDAP or System.DirectoryServices. A: There is a library available for getting hardware specific informations: Hardware serial number extractor (CPU, RAM, HDD, BIOS) A: Maybe cheating a little, but the MAC Address of a machines Ethernet adapter rarely changes without the motherboard changing these days. A: Can you pull some kind of manufacturer serial number or service tag? Our shop is a Dell shop, so we use the service tag which is unique to each machine to identify them. I know it can be queried from the BIOS, at least in Linux, but I don't know offhand how to do it in Windows. A: I had an additional constraint, I was using .net express so I couldn't use the standard hardware query mechanism. So I decided to use power shell to do the query. The full code looks like this: Private Function GetUUID() As String Dim GetDiskUUID As String = "get-wmiobject Win32_ComputerSystemProduct | Select-Object -ExpandProperty UUID" Dim X As String = "" Dim oProcess As New Process() Dim oStartInfo As New ProcessStartInfo("powershell.exe", GetDiskUUID) oStartInfo.UseShellExecute = False oStartInfo.RedirectStandardInput = True oStartInfo.RedirectStandardOutput = True oStartInfo.CreateNoWindow = True oProcess.StartInfo = oStartInfo oProcess.Start() oProcess.WaitForExit() X = oProcess.StandardOutput.ReadToEnd Return X.Trim() End Function A: Look up CPUID for one option. There might be some issues with multi-CPU systems. A: Try this one, it gives a unique hard disk ID: Port of DiskId32 for Delphi 7-2010.
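As a concrete illustration of the registry-based approach mentioned above, here is a minimal sketch (Python 3 on Windows, using the standard winreg module) that reads MachineGuid; treating that key as present and stable is an assumption, not a guarantee.

import winreg  # Windows-only standard library module


def read_machine_guid():
    # MachineGuid lives under HKLM\SOFTWARE\Microsoft\Cryptography and is
    # written at OS install time; KEY_WOW64_64KEY avoids registry redirection
    # when running a 32-bit interpreter on 64-bit Windows.
    access = winreg.KEY_READ | winreg.KEY_WOW64_64KEY
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                        r"SOFTWARE\Microsoft\Cryptography", 0, access) as key:
        value, _ = winreg.QueryValueEx(key, "MachineGuid")
    return value


if __name__ == "__main__":
    print(read_machine_guid())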
{ "language": "en", "url": "https://stackoverflow.com/questions/99880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "109" }
Q: Reading a COBOL DAT file I have been given a set of COBOL DAT, IDX and KEY files and I need to read the data in them and export it into Access, XLS, CSV, etc. I do not know the version or vendor of the COBOL code, as I only have the Windows executable that created the files. I have tried the Easysoft and Parkway ODBC drivers but I have not been successful in reading the data from the files. I do not have access to the source code as the company that was distributing this product shut down. A: I have successfully read some of the DAT files using http://www.cobolproducts.com/datafile just now, which I came to know about through another forum. Most probably I will work with them to help me read the rest of the files that I am having an issue with. A: A few possibilities. 1/ See if you can find the names of the people that worked for the company. They may be helpful. 2/ Open the DAT file in a text editor. The data may be decodable from that. If the basic format can be discerned, quick'n'dirty code can be written to extract it. 3/ Open up the executable in an editor; there may be strings in there that indicate which compiler was used, then you can search for info on its file formats. If it's a DOS application, there's a good chance it was either Microsoft or Fujitsu COBOL. 4/ Consider placing job requests on work sites like Elance or RentACoder; I don't think there's a cost if the work can't be done successfully. 5/ Hire someone to examine it and advise on the likelihood of recovery. 6/ Get a screen dump of the record contents for every active record and re-construct it from that. Some of these are pretty hard, so your mileage may vary. Good luck. A: I have only read COBOL DAT files with an FD. When I do not have the FD, I open the file in a text editor and try to guess the columns, and try again until I have it working. The big problem with this approach is when the DAT file has COMP columns, which can be any kind of COMP type, but with a little patience I could get this done. I had tried Parkway ODBC, but without success. A: For anyone going through this journey, I found this on SourceForge: Cobol and RPG data reader and converter http://sourceforge.net/projects/cobol2j/ I'm about to try it; it sounds kind of promising.
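For the "open it in a text editor and guess the columns" approach described above, a small sketch like the following can help: it slices the file at a guessed record length so field boundaries line up vertically. The file name and record length are pure assumptions to be varied by hand; COMP (binary/packed) fields will still show up as dots.

def dump_records(path, record_length, max_records=20):
    # Print the first few records one per row; printable bytes as-is,
    # everything else (binary COMP fields, fillers) as '.'.
    with open(path, "rb") as f:
        data = f.read(record_length * max_records)
    for offset in range(0, len(data), record_length):
        record = data[offset:offset + record_length]
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in record)
        print("%08d  %s" % (offset, text))


# Hypothetical usage -- try several lengths until the columns line up:
# dump_records("CUSTOMER.DAT", record_length=120)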
{ "language": "en", "url": "https://stackoverflow.com/questions/99897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Count image similarity on GPU [OpenGL/OcclusionQuery] OpenGL. Let's say I've drawn one image and then the second one using XOR. Now I've got black buffer with non-black pixels somewhere, I've read that I can use shaders to count black [ rgb(0,0,0) ] pixels ON GPU? I've also read that it has to do something with OcclusionQuery. http://oss.sgi.com/projects/ogl-sample/registry/ARB/occlusion_query.txt Is it possible and how? [any programming language] If you've got other idea on how to find similarity via OpenGL/GPU - that would be great too. A: I'm not sure how you do the XOR bit (at least it should be slow; I don't think any of current GPUs accelerate that), but here's my idea: * *have two input images *turn on occlusion query. *draw the two images to the screen (i.e. full screen quad with two textures set up), with a fragment shader that computes abs(texel1-texel2), and kills the pixel (discard in GLSL) if the pixels are the same (difference is zero or below some threshold). Easiest is probably just using a GLSL fragment shader, and there you just read two textures, compute abs() of the difference and discard the pixel. Very basic GLSL knowledge is enough here. *get number of pixels that passed the query. For pixels that are the same, the query won't pass (pixels will be discarded by the shader), and for pixels that are different, the query will pass. At first I though of a more complex approach that involves depth buffer, but then realized that just killing pixels should be enough. Here's my original though (but the above one is simpler and more efficient): * *have two input images *clear screen and depth buffer *draw the two images to the screen (i.e. full screen quad with two textures set up), with a fragment shader that computes abs(texel1-texel2), and kills the pixel (discard in GLSL) if the pixels are different. Draw the quad so that it's depth buffer value is something close to near plane. *after this step, depth buffer will contain small depth values for pixels that are the same, and large (far plane) depth values for pixels that are different. *turn on occlusion query, and draw another full screen quad with depth closer than far plane, but larger than the previous quad. *get number of pixels that passed the query. For pixels that are the same, the query won't pass (depth buffer is already closer), and for pixels that are different, the query will pass. You'd use SAMPLES_PASSED_ARB to get this. There's an occlusion query example at CodeSampler.com to get your started. Of course all this requires GPU with occlusion query support. Most GPUs since 2002 or so do support that, with exception of some low-end ones (in particular, Intel 915 (aka GMA 900) and Intel 945 (aka GMA 950)).
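The occlusion-query technique itself lives in GLSL and the GL API, but a CPU-side reference like the sketch below (plain Python over two equal-sized raw RGB buffers, e.g. glReadPixels output) is handy for sanity-checking the count the query should report; the threshold parameter is an assumption mirroring the shader's discard test.

def count_different_pixels(rgb_a, rgb_b, threshold=0):
    # rgb_a and rgb_b are bytes objects of identical length, 3 bytes per pixel.
    assert len(rgb_a) == len(rgb_b) and len(rgb_a) % 3 == 0
    different = 0
    for i in range(0, len(rgb_a), 3):
        delta = sum(abs(rgb_a[i + c] - rgb_b[i + c]) for c in range(3))
        if delta > threshold:
            different += 1  # this pixel would survive the shader's discard
    return different


# count_different_pixels(buffer_from_image_1, buffer_from_image_2)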
{ "language": "en", "url": "https://stackoverflow.com/questions/99906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Minimal latency objects pooling technique in multithread application * *In the application we have something about 30 types of objects that are created repeatedly. *Some of them have long life (hours) some have short (milliseconds). *Objects could be created in one thread and destroyed in another. Does anybody have any clue what could be good pooling technique in the sense of minimal creation/destruction latency, low lock contention and reasonable memory utilization? Append 1. 1.1. Object pool/memory allocations for one type usually is not related to another type (see 1.3 for an exception) 1.2. Memory allocation is performed for only one type (class) at time, usually for several objects at time. 1.3. If a type aggregates another type using pointer (for some reason) these types allocated together in the one continuous piece of memory. Append 2. 2.1. Using a collection with access serialization per type is known to be worse than new/delete. 2.2. Application is used on different platforms/compilers and cannot use compiler/platform specific tricks. Append 3. It becomes obvious that the fastest (with lowest latency) implementation should organize object pooling as star-like factories network. Where the central factory is global for other thread specific factories. Regular object provision/recycling is more effective to do in a thread specific factory while the central factory could be used for object balancing between threads. 3.1. What is the most effective way to organize communications between the central factory and thread specific factories? A: I assume you have profile and measured your code after doing all that creation and verified that create/destroy is actually causing an issue. Else this is what you should do first. If you still want to do the object pooling, as a first step, you should ensure your objects are stateless coz, that would be the prerequisite for reusing an object. Similarly you should ensure the members of the object and the object itself has no issue with being used from a different threads other than the one which created it. (COM STA objects / window handles etc) If you use windows and COM, one way to use system provided pooling would be to write Free Threaded objects and enable object pooling, which will make the COM+ run time (earlier known as MTS) do this for you. If you use some other platform like Java perhaps you could use application servers that define interfaces that your objects should implement and the COM+ server could do the pooling for you. or you could roll your own code. But you should try to find if there is pattern for this and if yes use that instead of what follows below If you need to roll your own code, create a dynamically growable collection which tracks the objects already created. Use a vector preferrably for the collection since you would only be adding to the collection and it would be easy to traverse it searching for a free object. (assuming you do not delete objects in pool). Change the collection type according to your delete policies (vector of pointers/references to objects if you are using C++ so that delete and recreate an object at the same location) Each object should be tracked using a flag which can be read in a volatile manner and changed using an interlock function to mark it as being used/ not used. If all objects are used, you need to create a new object and add it to the collection. Before adding, you can acquire a lock (critical section), mark the new object as being used and exit the lock. 
Measure and proceed - probably if you implemented the above collection as a class you could easily create different collections for different object types so as to reduce lock contention from threads that do different work. Finally you could implement an overloaded class factory interface that can create all kinds of pooled objects and knows which collection holds which class You could then optimize on this design from there. Hope that helps. A: To minimize construct/destruct latency, you need fully constructed objects at hand, so you will eliminate the new/ctor/dtor/delete time. These "free" objects can be kept in a list so you just pop/push the element at the end. You may lock the object pools (one for each type) one by one. It is a bit more efficient than a system-wide lock, but does not have the overhead of a by-object locking. A: If you haven't looked at tcmalloc, you might want to take a look. Basing your implementation off of its concepts might be a good start. Key points: * *Determine a set of size classes. (Each allocation will be fulfilled by using an entry from an equal or greater sized allocation.) *Use one size-class per page. (All instances in a page are the same size.) *Use per-thread freelists to avoid atomic operations on every alloc/dealloc *When a per-thread freelist is too large, move some of the instances back to the central freelist. Try to move back allocations from the same page. *When a per-thread freelist is empty, take some from the central freelist. Try to take contiguous entries. *Important: You probably know this, but make sure your design will minimize false sharing. Additional things you can do that tcmalloc can't: * *Try to enable locality of reference by using finer-grained allocation pools. For example, if a few thousand objects will be accessed together, then it is best if they are close together in memory. (To minimize cache missed and TLB faults.) If you allocate these instances from their own threadcache, then they should have fairly good locality. *If you know in advance which instances will be long-lived and which will not, then allocate them from separate thread caches. If you do not know, then periodically copy the old instances using a threadcache for allocation and update old references to the new instances. A: If you have some guess of the preferred size of the pool you can create fixed size pool using stack structure using array (the fastest possible solution). Then you need to implement four phases of object life time hard initialization (and memory allocation), soft initialization, soft cleanup and hard cleanup (and memory release). Now in pseudo code: Object* ObjectPool::AcquireObject() { Object* object = 0; lock( _stackLock ); if( _stackIndex ) object = _stack[ --_stackIndex ]; unlock( _stackLock ); if( !object ) object = HardInit(); SoftInit( object ); } void ObjectPool::ReleaseObject(Object* object) { SoftCleanup( object ); lock( _stackLock ); if( _stackIndex < _maxSize ) { object = _stack[ _stackIndex++ ]; unlock( _stackLock ); } else { unlock( _stack ); HardCleanup( object ); } } HardInit/HardCleanup method performs full object initialization and destruction and they are executed only if the pool is empty or if the freed object cannot fit the pool because it is full. SoftIniti performs soft initialization of objects, it initializes only those aspect of objects that can be changed since it was released. 
SoftCleanup method free resources used by the object which should be freed as fast as possible or those resources which can become invalid during the time its owner resides in the pool. As you can see locking is minimal, only two lines of code (or only few instructions). These four methods can be implemented in separate (template) classes so you can implement fine tuned operations per object type or usage. Also you may consider using smart pointers to automatically return object to its pool when it is no longer needed. A: Have you tried the hoard allocator? It provides better performance than the default allocator on many systems. A: Why do you have multiple threads destroying objects they did not create? It's a simple way to handle object lifetime, but the costs can vary widely depending on use. Anyways, if you haven't started implementing this yet, at the very least you can put the create/destroy functionality behind an interface so that you can test/change/optimize this at a later date when you have more information about what your system actually does.
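As a structural sketch of the "star-like factories network" from Append 3 - per-thread free lists on the fast path, one locked central pool for rebalancing - here is a minimal Python version; the pooled type and limits are placeholders, and Python obviously will not give C++-level latency, but the shape of the design carries over.

import threading


class StarPool(object):
    def __init__(self, factory, local_limit=64):
        self._factory = factory            # "hard init": builds a brand-new object
        self._local_limit = local_limit
        self._central = []                 # shared overflow pool
        self._lock = threading.Lock()      # guards only the central pool
        self._tls = threading.local()

    def _local_free(self):
        if not hasattr(self._tls, "free"):
            self._tls.free = []
        return self._tls.free

    def acquire(self):
        free = self._local_free()
        if free:
            return free.pop()              # fast path: thread-private, no lock
        with self._lock:                   # slow path: rebalance from the centre
            if self._central:
                return self._central.pop()
        return self._factory()

    def release(self, obj):
        free = self._local_free()
        if len(free) < self._local_limit:
            free.append(obj)               # fast path
        else:
            with self._lock:
                self._central.append(obj)  # spill excess back to the centre


pool = StarPool(dict)                      # dict stands in for a real pooled type
item = pool.acquire()
pool.release(item)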
{ "language": "en", "url": "https://stackoverflow.com/questions/99907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: VB.Net "There is a naming violation" Error with open ldap for creating user I am trying to use directory services to add a directory entry to an openldap server. The examples I have seen look pretty simple, but I keep getting the error "There is an naming violation". What does this message mean? How do I resolve it? I have included the code, ldif file used to create the person container. Public Function Ldap_Store_Manual_Registration(ByVal userName As String, ByVal firstMiddleName As String, ByVal lastName As String, ByVal password As String) Dim entry As DirectoryEntry = OpenLDAPconnection() 'OpenLDAPconnection() is DirectoryEntry(domainName, userId, password, AuthenticationTypes.SecureSocketsLayer) ) Dim newUser As DirectoryEntry newUser = entry.Children.Add("ou=alumni", "organizationalUnit") 'also try with newUser = entry.Children.Add("ou=alumni,o=xxxx", "organizationalUnit") , also not working SetADProperty(newUser, "objectClass", "organizationalPerson") SetADProperty(newUser, "objectClass", "person") SetADProperty(newUser, "cn", userName) SetADProperty(newUser, "sn", userName) newUser.CommitChanges() End Function Public Shared Sub SetADProperty(ByVal de As DirectoryEntry, _ ByVal pName As String, ByVal pValue As String) 'First make sure the property value isnt "nothing" If Not pValue Is Nothing Then 'Check to see if the DirectoryEntry contains this property already If de.Properties.Contains(pName) Then 'The DE contains this property 'Update the properties value de.Properties(pName)(0) = pValue Else 'Property doesnt exist 'Add the property and set it's value de.Properties(pName).Add(pValue) End If End If End Sub The ldif file: version: 1 dn: cn=test3,ou=alumni,o=unimelb objectClass: organizationalPerson objectClass: person objectClass: top cn: test3 sn: test3 A: Maybe you need to include this? SetADProperty(newUser, "objectClass", "top") Also, check what the required fields for organizationalPerson and person are...you might be missing one. A: Try: Dim entry As New DirectoryEntry("LDAP://ou=alumni", etc.) newUser = entry.Children.Add("cn=" + userName, "user")
{ "language": "en", "url": "https://stackoverflow.com/questions/99912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I chain my own JavaScript into the client side without losing PostBack functionality So I was reading these Asp.Net interview questions at Scott Hanselman's blog and I came across this question. Can anyone shed some light of what he's talking about. A: <asp:LinkButton ID="lbEdit" CssClass="button" OnClientClick="javascript:alert('do something')" onclick="OnEdit" runat="server">Edit</asp:LinkButton> The OnClientClick attribute means you can add some JavaScript without losing PostBack functionality would be my answer in the interview. A: Explain how PostBacks work Postbacks are an abstraction on top of web protocols which emulate stateful behavior over a stateless protocol. , on both the client-side On the client side, postbacks are achieved by javascript calls and hidden fields which store the state of all controls on the page. and server-side. The server side goes through a life cycle of events, part of that life cycle is the hydration of the viewstate to maintain the state of all the controls on the page, and the raising of events based on the paramaters that were passed into the __doPostBack call on the client side How do I chain my own JavaScript into the client side without losing PostBack functionality? Depends on what is required. The easiest way that works 99% of the time is to use asp:hiddenfield to communicate between client and server side. For edge cases, you want to get into Exenders and manipulating viewstate/controlstate/clientstate in javascript through the MS ajax APIs. This is pretty painful with a huge learning curve and a lot of gotchas, generally using hidden fields and manually calling __doPostBack is enough That is how I would answer that bullet point. For more information on __doPostBack, a quick google will give you plenty of results (for the lazy, this is the first hit http://aspalliance.com/895) A: Do you mean something like this: in your code behind: protected string GetPostBack() { return ClientScript.GetPostBackEventReference(this, null); } and in your aspx: <a href="javascript:<%=GetPostBack() %>">Click here to postback</a> A: I think what he's asking here is how you wire up javascript functions to work hand in hand with your ASP.NET postback functionality. i.e. How can I trigger a control's event using my own JavaScript? The ASP.NET class library contains a ClientScript class - Found in the System.Web.UI.Page class - which enables you to programmatically add JavaScript to your ASP.NET page. This contains a method called GetPostBackEventReference which will generate the __doPostBack script ASP.NET utilises to trigger events wired up to your web controls. Hope that makes sense
{ "language": "en", "url": "https://stackoverflow.com/questions/99917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: OO Javascript : Definitive explanation of variable scope Can someone provide an explanation of variable scope in JS as it applies to objects, functions and closures? A: Variables not declared with var are global in scope. Functions introduce a scope, but note that if blocks and other blocks do not introduce a scope. I could also see much information about this by Googling Javascript scope. That's really what I would recommend. http://www.digital-web.com/articles/scope_in_javascript/ A: Global variables Every variable in Javascript is a named attribute of an object. For example:- var x = 1; x is added to the global object. The global object is provided by the script context and may already have a set of attributes. For example in a browser the global object is window. An equivalent to the above line in a browser would be:- window.x = 1; Local variables Now what if we change this to:- function fn() { var x = 1; } When fn is called a new object is created called the execution context also referred to as the scope (I use these terms interchangeably). x is added as an attribute to this scope object. Hence each call to fn will get its own instance of a scope object and therefore its own instance of the x attribute attached to that scope object. Closure Now lets take this further:- function fnSequence() { var x = 1; return function() { return x++; } } var fn1 = fnSequence(); var fn2 = fnSequence(); WScript.Echo(fn1()) WScript.Echo(fn2()) WScript.Echo(fn1()) WScript.Echo(fn2()) WScript.Echo(fn1()) WScript.Echo(fn1()) WScript.Echo(fn2()) WScript.Echo(fn2()) Note: Replace WScript.Echo with whatever writes to stdout in your context. The sequence you should get is :- 1 1 2 2 3 4 3 4 So what has happened here? We have fnSequence which initialises a variable x to 1 and returns an anonymous function which will return the value of x and then increment it. When this function is first executed a scope object is created and an attribute x is added to that scope object with the value of 1. Also created in the same execution object is an anonymous function. Each function object will have a scope attribute which points to the execution context in which it is created. This creates what is know as a scope chain which we will come to later. A reference to this function is returned by fnSequence and stored in fn1. Note that fn1 is now pointing at the anonymous function and that the anonymous function has a scope attribute pointing at a scope object that still has an x attribute attached. This is known as closure where the contents of an execution context is still reachable after the function it was created for has completed execution. Now this same sequence happens when assigning to fn2. fn2 will be pointing at a different anonymous function that was created in a different execution context that was create when fnSequence was called this second time. Scope Chain What happens when the function held by fn1 is executed the first time? A new execution context is created for the execution of the anonymous function. A return value is to be found from the identifier x. The function's scope object is inspected for an x attribute but none is found. This is where the scope chain comes in. Having failed to find x in the current execution context JavaScript takes the object held by the function's scope attribute and looks for x there. It finds it since the functions scope was created inside an execution of fnSequence, retrieves its value and increments it. Hence 1 is output and the x in this scope is incremented to 2. 
Now when fn2 is executed it is ultimately attached to a different execution context whose x attribute is still 1. Hence executing fn2 also results in 1. As you can see fn1 and fn2 each generate their own independent sequence of numbers. A: Functions introduce a scope. You can declare functions inside other functions, thereby creating a nested scope. The inner scope can access the outer scope, but the outer can not access the inner scope. Variables are bound to a scope, using the var keyword. All variables are implicitly bound to the top-level scope. So if you omit the var keyword, you are implicitly referring to a variable bound to the top level. In a browser, the top level is the window object. Note that window is it self a variable, so window == window.window
{ "language": "en", "url": "https://stackoverflow.com/questions/99927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: General Binary Data Viewer for Windows Vista I'm looking for recommendations for a good program for 32-bit Windows Vista that will load any arbitrary binary file and display textual information or graphical visualization relevant to identifying what actual data the bits are supposed to represent. Is ther anything better than a hex editor for this kind of thing? One thing I'd like to do is say, look at the non-visible data in a Spore PNG file to get a clue as to what's actually being stored in there. Right now I'm using WordPad and all I get is something that looks like this: ‰PNG IHDR ¢ /Qã!$D4"Ž‚îvÚ°‰ÅØÃ ïjÃÞÉ_{!…‡ú 9¥Ý´îÁ6 ‰ms ^ I guess what I'm looking for is a souped up hex editor that acts more like an Excel for bits so I can slice and dice statistical patterns to get a better idea of what the bits might be doing. A: I like xvi32, although it seems similar to the above - I've found it to be fairly fast even for big files. A: What you probably want is a hex editor. The PSPad text editor has a pretty good hex-editing mode. A: Try HxD: HxD is a carefully designed and fast hex editor which, additionally to raw disk editing and modifying of main memory (RAM), handles files of any size. The easy to use interface offers features such as searching and replacing, exporting, checksums/digests, insertion of byte patterns, a file shredder, concatenation or splitting of files, statistics and much more. A: I use HHD's free HexEditor, it's Free! A: I use frhed and vim (with its convert to hex mode but that can be slow for big files). A: Do you mean something that detects a set of known fileformats and knows how to display them? Otherwise, hex editor (for example PSPad contains one) is the best thing that you can wish for. It's just bits which can mean anything. A: I use Total Commander's built-in Lister (file viewer). It can show data as text, hex, it can show images. Then there are many plugins that range from code editors / viewers, image viewers etc. Plugins I use are * *imagine for viewing images *fileinfo for displaying info of executables *HTML viewer *Syn2 - code viewer / editor with syntax highlighting There are a lot of plugins listed on author's web page, another good source is www.totalcmd.net A: I've used Hex Workshop before, it has a "Find Strings" option in the Tools menu. Not free but works great.
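For the specific Spore PNG case above, going one small step beyond a hex editor helps: the sketch below (plain Python, assuming a well-formed PNG) walks the chunk table so any non-standard chunks - where extra data would be hiding - show up by name and size.

import struct


def list_png_chunks(path):
    with open(path, "rb") as f:
        assert f.read(8) == b"\x89PNG\r\n\x1a\n", "not a PNG file"
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            length, ctype = struct.unpack(">I4s", header)
            print("%-4s %10d bytes" % (ctype.decode("latin-1"), length))
            f.seek(length + 4, 1)   # skip the chunk data and its CRC
            if ctype == b"IEND":
                break


# list_png_chunks("creature.png")   # hypothetical Spore creature file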
{ "language": "en", "url": "https://stackoverflow.com/questions/99934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What tools do you use to implement SOA/Messaging? NServiceBus and MassTransit are two tools that can be used to implement messaging with MSMQ and other message queues. I find that once you start using messaging to have applications talk to each other, you don't really want to go back to the old RPC style. My question is, what other tools are out there? What tools do you use? A: Apache ActiveMQ is probably the most popular and powerful open source message broker out there with the most active open source community behind it as well as commercial support, training and tooling if you need it. One of the more interesting aspects of ActiveMQ is its wide support for a large number of different language bindings and transport protocols A: WebSphere Message Broker is IBM's flagship ESB which runs ontop of MQ. They also produce WebSphere ESB which is a slightly lighter offering which specialises in ESB across web services. A: We use WCF services for synchronous message based operations, and nServiceBus for anything asynchronous. A: Rogue Wave is very popular [ http://roguewave.com/products/hydra/ ] So are IBM's Websphere offerings [ http://en.wikipedia.org/wiki/Mqseries ] A: WCF is extremely powerful and should be looked into by anyone in the .NET space starting up a message based system. I would recommend against BizTalk unless you can make a lot of use out of it's adapters (ie. you have a lot of old systems to communicate with). Nuedesic makes a great WCF based ESB, Neuron, if you are willing to pay a bit. A: I use IBM software stack because it has the widest set of features (pub/sub, async, sync) and platform support (60+ combination of platform, languages) and also a great set of free tools provided by IBM For Operations, I use use the linear log rotation IBM WebSphere MQ supportpac For development and testing, I like RFHUTIL to generate fake cobol, java, MS objects, other binary and text objects and SOAPUI to invoke HTTP web services. If I need to invoke MQ based web services, I go back to RFHUtil. Of course Websphere MQ Explorer for admin. A: We use the old WebSphere Message Broker 6.1 (now IBM Integration Bus) that is fast and reliable once you are acquainted.
{ "language": "en", "url": "https://stackoverflow.com/questions/99980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Self validating binaries? My question is pretty straightforward: You are an executable file that outputs "Access granted" or "Access denied" and evil persons try to understand your algorithm or patch your innards in order to make you say "Access granted" all the time. After this introduction, you might be heavily wondering what I am doing. Is he going to crack Diablo3 once it is out? I can pacify your worries, I am not one of those crackers. My goal is crackmes. Crackmes can be found on - for example - www.crackmes.de. A crackme is a little executable that (most of the time) contains a little algorithm to verify a serial and output "Access granted" or "Access denied" depending on the serial. The goal is to make this executable output "Access granted" all the time. The methods you are allowed to use might be restricted by the author - no patching, no disassembling - or involve anything you can do with a binary, objdump and a hex editor. Cracking crackmes is one part of the fun, definitely; however, as a programmer, I am wondering how you can create crackmes that are difficult. Basically, I think the crackme consists of two major parts: a certain serial verification and the surrounding code. Making the serial verification hard to track just using assembly is very possible; for example, I have the idea to take the serial as an input for a simulated microprocessor that must end up in a certain state in order to get the serial accepted. On the other hand, one might grow cheap and learn more about cryptographically strong ways to secure this part. Thus, making this hard enough that the attacker is forced to try to patch the executable should not be that hard. However, the more difficult part is securing the binary. Let us assume a perfectly secure serial verification that cannot be reversed somehow (of course I know it can be reversed; in doubt, you rip parts out of the binary you try to crack and throw random serials at it until it accepts). How can we prevent an attacker from just overriding jumps in the binary in order to make our binary accept anything? I have been searching on this topic a bit, but most results on binary security, self-verifying binaries and such things end up in articles that try to prevent attacks on an operating system using compromised binaries, by signing certain binaries and validating those signatures with the kernel. My thoughts currently consist of: * *checking explicit locations in the binary to be jumps. *checksumming parts of the binary and comparing checksums computed at runtime with those. *having positive and negative runtime checks for your functions in the code, with side effects on the serial verification. :) Are you able to think of more ways to annoy a possible attacker longer? (Of course, you cannot keep him away forever; at some point, all checks will be broken, unless you managed to break a checksum generator by being able to embed the correct checksum for a program in the program itself, hehe.) A: You're getting into "anti-reversing techniques", and it's an art, basically. Worse is that even if you stomp newbies, there are "anti-anti-reversing plugins" for Olly and IDA Pro that they can download and bypass much of your countermeasures. Countermeasures include debugger detection by trapping debugger APIs, or detecting 'single stepping'. You can insert code that, after detecting a debugger break-in, continues to function but starts acting up at random times much later in the program. It's really a cat and mouse game and the crackers have a significant upper hand.
Check out... http://www.openrce.org/reference_library/anti_reversing - Some of what's out there. http://www.amazon.com/Reversing-Secrets-Engineering-Eldad-Eilam/dp/0764574817/ - This book has really good anti-reversing info and steps through the techniques. Great place to start if you're getting into reversing in general. A: I believe these things are generally more trouble than they're worth. You spend a lot of effort writing code to protect your binary. The bad guys spend less effort cracking it (they're generally more experienced than you) and then release the crack so everyone can bypass your protection. The only people you'll annoy are those honest ones who are inconvenienced by your protection. Just view piracy as a cost of business - the incremental cost of pirated software is zero if you ensure all support is done only for paying customers. A: There's TPM technology: TPM on Wikipedia. It allows you to store the cryptographic checksums of a binary on a special chip, which could act as one-way verification. Note: TPM has sort of a bad rap because it could be used for DRM. But to experts in the field, that's sort of unfair, and there's even an open-TPM group allowing Linux users to control exactly how their TPM chip is used. A: One of the strongest solutions to this problem is Trusted Computing. Basically you would encrypt the application and transmit the decryption key to a special chip (the Trusted Platform Module). The chip would only decrypt the application once it has verified that the computer is in a "trusted" state: no memory viewers/editors, no debuggers, etc. Basically, you would need special hardware just to be able to view the decrypted program code. A: So, you want to write a program that accepts a key at the beginning and stores it in memory, subsequently retrieving it from disc. If it's the correct key, the software works. If it's the wrong key, the software crashes. The goal is that it's hard for pirates to generate a working key, and it's hard to patch the program to work with an unlicensed key. This can actually be achieved without special hardware. Consider our genetic code. It works based on the physics of this universe. We try to hack it, create drugs, etc., and we fail miserably, usually creating tons of undesirable side effects, because we haven't yet fully reverse engineered the complex "world" in which the genetic "code" evolved to operate. Basically, if you're running everything on a common processor (a common "world"), which everyone has access to, then it's virtually impossible to write such secure code, as demonstrated by current software being so easily cracked. To achieve security in software, you essentially would have to write your own sufficiently complex platform, which others would have to completely and thoroughly reverse engineer in order to modify the behavior of your code without unpredictable side effects. Once your platform is reverse engineered, however, you'd be back to square one. The catch is, your platform is probably going to run on common hardware, which makes your platform easier to reverse engineer, which in turn makes your code a bit easier to reverse engineer. Of course, that may just mean the bar is raised a bit for the level of complexity required of your platform to be sufficiently difficult to reverse engineer. What would a sufficiently complex software platform look like?
For example, perhaps after every 6 addition operations, the 7th addition returns the result multiplied by PI divided by the square root of the log of the modulus 5 of the difference of the total number of subtract and multiply operations performed since system initialization. The platform would have to keep track of those numbers independently, as would the code itself, in order to decode correct results. So, your code would be written based on knowledge of the complex underlying behavior of a platform you engineered. Yes, it would eat processor cycles, but someone would have to reverse engineer that little surprise behavior and re-engineer it into any new code to have it behave properly. Furthermore, your own code would be difficult to change once written, because it would collapse into irreducible complexity, with each line depending on everything that happened prior. Of course, there would be much more complexity in a sufficiently secure platform, but the point is that someone would have reverse engineer your platform before they could reverse engineer and modify your code, without debilitating side-effects. A: Great article on copy protection and protecting the protection Keeping the Pirates at Bay: Implementing Crack Protection for Spyro: Year of the Dragon The most interesting idea mentioned in there that hasn't yet been mentioned is cascading failures - you have checksums that modify a single byte that causes another checksum to fail. Eventually one of the checksums causes the system to crash or do something strange. This makes attempts to pirate your program seem unstable and makes the cause occur a long way from the crash.
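As a toy, language-level illustration of the "checksum parts of the binary and compare at runtime" idea from the question - not a real protection; a native scheme would hash machine code in the mapped image, scatter many interacting checks and fail late, as the Spyro article above describes - here is a Python sketch. EXPECTED_DIGEST is a placeholder to be filled in after the build, and check_serial stands in for the real verification routine.

import hashlib
import sys


def check_serial(serial):
    return serial == "AAAA-1234"          # stand-in for the real verification


EXPECTED_DIGEST = "0" * 64                # placeholder, computed after the build


def verify_self():
    # Hash the verification routine's bytecode and bail out if it was patched.
    digest = hashlib.sha256(check_serial.__code__.co_code).hexdigest()
    if digest != EXPECTED_DIGEST:
        sys.exit("Access denied")         # better: fail much later and subtly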
{ "language": "en", "url": "https://stackoverflow.com/questions/99999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: MVC or event-driven component-oriented web frameworks? This question intends to be technology-agnostic. Which kind of web framework do you prefer, and when: Pure MVC or event-driven component-oriented? Just to make the point in "technology-agnosticism", here I name a few MVC vs. component web frameworks, in diverse technologies / languages: * *Struts vs. Java Server Faces / Tapestry *The new ASP.NET MVC vs. "classic" ASP.NET *Cake PHP vs. PRADO A: I'm a php dev by day; however, I strongly prefer Wicket and/or Lift, especially the latter. The problem with Prado seems to be that the controller is tied to the page, rather than the logical controls on the page, otherwise, it still seems better than most other options in PHP land. I think all boils down to reusability, unless you have components that are backed by controllers, you can't separate the display all that well from the backing control logic. MVC as implmented by all these 'MVC' frameworks seems to suck, you get a logical page with a tonne of controls and you have to handle all those on page controllers, wow, thanks, now I have MVC / n, where n is the number of controls. Most 'MVC' systems that I've seen so far, have been a mish-mash of brain-dead tag libraries, contorting request response into a single controller that has to be aware of everything on the page. xhtml templates with js, and css wonderfully separated. Along with a few classes backing those components, and all of a sudden you're not busy wondering how complex pages are going to work, or if you want to take piece x, and drop it somewhere else. A: Right now, the 'new hotness' trend is towards the MVC approach. I personally prefer the conventions of MVC frameworks, as a lot of the scut work that takes up valuable development time is done away with. That being said, the constraints tend to be fairly rigid, and a more traditional component-based approach might be needed in certain situations. All in all, it's a right tool for the job sort of choice. A: The technology used is usually not matter of choice and especially in a big company you don't have many options. If I were able to choose a technology, in Java I would pick Wicket. I have been using Spring MVC and it is good, but Wicket has a neat features that Spring MVC has not: server-side state management and encapsulation, rich component model, no unnecessary XML mapping files - just pure Java and HTML. A: I'm primarily an ASP.Net developer, but I find MVC is a better way of creating functionally complex websites (typically Line-of-Business type sites) since it allows for better separation of business logic and rules from the markup used to display data to the end-user. For quick and dirty sites (typically with a direct connection to the database) or richer interfaces, the "event-driven component-oriented" model is more effective. A: Personally I would say MVC is the way to go for web sites. You have a lot more control over the HTML and CSS and at the same time the controller pattern works very well with HTTP. Event driven web programming is great for small sites or for people who are not that clued up with HTML and CSS and more low-level concepts. A: I loosely follow these guidelines: * *Web Forms/SQLDataSource- Quick and dirty app for internal use to show reporting or some other such data. *MVC- Simple to complex business logic for a core product. *MVC/REST Web Services/jQuery- HTML/Whatever type of client RIA's (when user experience reigns supreme). 
*Flash/Flex RIA- Useful when an extremely rich client is needed (think multimedia manipulation here). There are a lot of gaps in this list of course but that just represents how complicated a question it is.
{ "language": "en", "url": "https://stackoverflow.com/questions/100001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What are metaclasses in Python? What are metaclasses? What are they used for? A: Role of a metaclass' __call__() method when creating a class instance If you've done Python programming for more than a few months you'll eventually stumble upon code that looks like this: # define a class class SomeClass(object): # ... # some definition here ... # ... # create an instance of it instance = SomeClass() # then call the object as if it's a function result = instance('foo', 'bar') The latter is possible when you implement the __call__() magic method on the class. class SomeClass(object): # ... # some definition here ... # ... def __call__(self, foo, bar): return bar + foo The __call__() method is invoked when an instance of a class is used as a callable. But as we've seen from previous answers a class itself is an instance of a metaclass, so when we use the class as a callable (i.e. when we create an instance of it) we're actually calling its metaclass' __call__() method. At this point most Python programmers are a bit confused because they've been told that when creating an instance like this instance = SomeClass() you're calling its __init__() method. Some who've dug a bit deeper know that before __init__() there's __new__(). Well, today another layer of truth is being revealed, before __new__() there's the metaclass' __call__(). Let's study the method call chain from specifically the perspective of creating an instance of a class. This is a metaclass that logs exactly the moment before an instance is created and the moment it's about to return it. class Meta_1(type): def __call__(cls): print "Meta_1.__call__() before creating an instance of ", cls instance = super(Meta_1, cls).__call__() print "Meta_1.__call__() about to return instance." return instance This is a class that uses that metaclass class Class_1(object): __metaclass__ = Meta_1 def __new__(cls): print "Class_1.__new__() before creating an instance." instance = super(Class_1, cls).__new__(cls) print "Class_1.__new__() about to return instance." return instance def __init__(self): print "entering Class_1.__init__() for instance initialization." super(Class_1,self).__init__() print "exiting Class_1.__init__()." And now let's create an instance of Class_1 instance = Class_1() # Meta_1.__call__() before creating an instance of <class '__main__.Class_1'>. # Class_1.__new__() before creating an instance. # Class_1.__new__() about to return instance. # entering Class_1.__init__() for instance initialization. # exiting Class_1.__init__(). # Meta_1.__call__() about to return instance. Observe that the code above doesn't actually do anything more than logging the tasks. Each method delegates the actual work to its parent's implementation, thus keeping the default behavior. Since type is Meta_1's parent class (type being the default parent metaclass) and considering the ordering sequence of the output above, we now have a clue as to what would be the pseudo implementation of type.__call__(): class type: def __call__(cls, *args, **kwarg): # ... maybe a few things done to cls here # then we call __new__() on the class to create an instance instance = cls.__new__(cls, *args, **kwargs) # ... maybe a few things done to the instance here # then we initialize the instance with its __init__() method instance.__init__(*args, **kwargs) # ... maybe a few more things done to instance here # then we return it return instance We can see that the metaclass' __call__() method is the one that's called first. 
It then delegates creation of the instance to the class's __new__() method and initialization to the instance's __init__(). It's also the one that ultimately returns the instance. From the above it stems that the metaclass' __call__() is also given the opportunity to decide whether or not a call to Class_1.__new__() or Class_1.__init__() will eventually be made. Over the course of its execution it could actually return an object that hasn't been touched by either of these methods. Take for example this approach to the singleton pattern: class Meta_2(type): singletons = {} def __call__(cls, *args, **kwargs): if cls in Meta_2.singletons: # we return the only instance and skip a call to __new__() # and __init__() print ("{} singleton returning from Meta_2.__call__(), " "skipping creation of new instance.".format(cls)) return Meta_2.singletons[cls] # else if the singleton isn't present we proceed as usual print "Meta_2.__call__() before creating an instance." instance = super(Meta_2, cls).__call__(*args, **kwargs) Meta_2.singletons[cls] = instance print "Meta_2.__call__() returning new instance." return instance class Class_2(object): __metaclass__ = Meta_2 def __new__(cls, *args, **kwargs): print "Class_2.__new__() before creating instance." instance = super(Class_2, cls).__new__(cls) print "Class_2.__new__() returning instance." return instance def __init__(self, *args, **kwargs): print "entering Class_2.__init__() for initialization." super(Class_2, self).__init__() print "exiting Class_2.__init__()." Let's observe what happens when repeatedly trying to create an object of type Class_2 a = Class_2() # Meta_2.__call__() before creating an instance. # Class_2.__new__() before creating instance. # Class_2.__new__() returning instance. # entering Class_2.__init__() for initialization. # exiting Class_2.__init__(). # Meta_2.__call__() returning new instance. b = Class_2() # <class '__main__.Class_2'> singleton returning from Meta_2.__call__(), skipping creation of new instance. c = Class_2() # <class '__main__.Class_2'> singleton returning from Meta_2.__call__(), skipping creation of new instance. a is b is c # True A: Classes as objects Before understanding metaclasses, it helps to understand Python classes more deeply. Python has a very peculiar idea of what classes are, which it borrows from the Smalltalk language. In most languages, classes are just pieces of code that describe how to produce an object. That is somewhat true in Python too: >>> class ObjectCreator(object): ... pass >>> my_object = ObjectCreator() >>> print(my_object) <__main__.ObjectCreator object at 0x8974f2c> But classes are more than that in Python. Classes are objects too. Yes, objects. When a Python script runs, every line of code is executed from top to bottom. When the Python interpreter encounters the class keyword, Python creates an object out of the "description" of the class that follows. Thus, the following instruction >>> class ObjectCreator(object): ... pass ...creates an object with the name ObjectCreator! This object (the class) is itself capable of creating objects (called instances). But still, it's an object. 
Therefore, like all objects: * *you can assign it to a variable1 JustAnotherVariable = ObjectCreator *you can attach attributes to it ObjectCreator.class_attribute = 'foo' *you can pass it as a function parameter print(ObjectCreator) 1 Note that merely assigning it to another variable doesn't change the class's __name__, i.e., >>> print(JustAnotherVariable) <class '__main__.ObjectCreator'> >>> print(JustAnotherVariable()) <__main__.ObjectCreator object at 0x8997b4c> Creating classes dynamically Since classes are objects, you can create them on the fly, like any object. First, you can create a class in a function using class: >>> def choose_class(name): ... if name == 'foo': ... class Foo(object): ... pass ... return Foo # return the class, not an instance ... else: ... class Bar(object): ... pass ... return Bar ... >>> MyClass = choose_class('foo') >>> print(MyClass) # the function returns a class, not an instance <class '__main__.Foo'> >>> print(MyClass()) # you can create an object from this class <__main__.Foo object at 0x89c6d4c> But it's not so dynamic, since you still have to write the whole class yourself. Since classes are objects, they must be generated by something. When you use the class keyword, Python creates this object automatically. But as with most things in Python, it gives you a way to do it manually. Remember the function type? The good old function that lets you know what type an object is: >>> print(type(1)) <type 'int'> >>> print(type("1")) <type 'str'> >>> print(type(ObjectCreator)) <type 'type'> >>> print(type(ObjectCreator())) <class '__main__.ObjectCreator'> Well, type has also a completely different ability: it can create classes on the fly. type can take the description of a class as parameters, and return a class. (I know, it's silly that the same function can have two completely different uses according to the parameters you pass to it. It's an issue due to backward compatibility in Python) type works this way: type(name, bases, attrs) Where: * *name: name of the class *bases: tuple of the parent class (for inheritance, can be empty) *attrs: dictionary containing attributes names and values e.g.: >>> class MyShinyClass(object): ... pass can be created manually this way: >>> MyShinyClass = type('MyShinyClass', (), {}) # returns a class object >>> print(MyShinyClass) <class '__main__.MyShinyClass'> >>> print(MyShinyClass()) # create an instance with the class <__main__.MyShinyClass object at 0x8997cec> You'll notice that we use MyShinyClass as the name of the class and as the variable to hold the class reference. They can be different, but there is no reason to complicate things. type accepts a dictionary to define the attributes of the class. So: >>> class Foo(object): ... bar = True Can be translated to: >>> Foo = type('Foo', (), {'bar':True}) And used as a normal class: >>> print(Foo) <class '__main__.Foo'> >>> print(Foo.bar) True >>> f = Foo() >>> print(f) <__main__.Foo object at 0x8a9b84c> >>> print(f.bar) True And of course, you can inherit from it, so: >>> class FooChild(Foo): ... pass would be: >>> FooChild = type('FooChild', (Foo,), {}) >>> print(FooChild) <class '__main__.FooChild'> >>> print(FooChild.bar) # bar is inherited from Foo True Eventually, you'll want to add methods to your class. Just define a function with the proper signature and assign it as an attribute. >>> def echo_bar(self): ... print(self.bar) ... 
>>> FooChild = type('FooChild', (Foo,), {'echo_bar': echo_bar}) >>> hasattr(Foo, 'echo_bar') False >>> hasattr(FooChild, 'echo_bar') True >>> my_foo = FooChild() >>> my_foo.echo_bar() True And you can add even more methods after you dynamically create the class, just like adding methods to a normally created class object. >>> def echo_bar_more(self): ... print('yet another method') ... >>> FooChild.echo_bar_more = echo_bar_more >>> hasattr(FooChild, 'echo_bar_more') True You see where we are going: in Python, classes are objects, and you can create a class on the fly, dynamically. This is what Python does when you use the keyword class, and it does so by using a metaclass. What are metaclasses (finally) Metaclasses are the 'stuff' that creates classes. You define classes in order to create objects, right? But we learned that Python classes are objects. Well, metaclasses are what create these objects. They are the classes' classes, you can picture them this way: MyClass = MetaClass() my_object = MyClass() You've seen that type lets you do something like this: MyClass = type('MyClass', (), {}) It's because the function type is in fact a metaclass. type is the metaclass Python uses to create all classes behind the scenes. Now you wonder "why the heck is it written in lowercase, and not Type?" Well, I guess it's a matter of consistency with str, the class that creates strings objects, and int the class that creates integer objects. type is just the class that creates class objects. You see that by checking the __class__ attribute. Everything, and I mean everything, is an object in Python. That includes integers, strings, functions and classes. All of them are objects. And all of them have been created from a class: >>> age = 35 >>> age.__class__ <type 'int'> >>> name = 'bob' >>> name.__class__ <type 'str'> >>> def foo(): pass >>> foo.__class__ <type 'function'> >>> class Bar(object): pass >>> b = Bar() >>> b.__class__ <class '__main__.Bar'> Now, what is the __class__ of any __class__ ? >>> age.__class__.__class__ <type 'type'> >>> name.__class__.__class__ <type 'type'> >>> foo.__class__.__class__ <type 'type'> >>> b.__class__.__class__ <type 'type'> So, a metaclass is just the stuff that creates class objects. You can call it a 'class factory' if you wish. type is the built-in metaclass Python uses, but of course, you can create your own metaclass. The __metaclass__ attribute In Python 2, you can add a __metaclass__ attribute when you write a class (see next section for the Python 3 syntax): class Foo(object): __metaclass__ = something... [...] If you do so, Python will use the metaclass to create the class Foo. Careful, it's tricky. You write class Foo(object) first, but the class object Foo is not created in memory yet. Python will look for __metaclass__ in the class definition. If it finds it, it will use it to create the object class Foo. If it doesn't, it will use type to create the class. Read that several times. When you do: class Foo(Bar): pass Python does the following: Is there a __metaclass__ attribute in Foo? If yes, create in-memory a class object (I said a class object, stay with me here), with the name Foo by using what is in __metaclass__. If Python can't find __metaclass__, it will look for a __metaclass__ at the MODULE level, and try to do the same (but only for classes that don't inherit anything, basically old-style classes). 
Then if it can't find any __metaclass__ at all, it will use the Bar's (the first parent) own metaclass (which might be the default type) to create the class object. Be careful here that the __metaclass__ attribute will not be inherited, the metaclass of the parent (Bar.__class__) will be. If Bar used a __metaclass__ attribute that created Bar with type() (and not type.__new__()), the subclasses will not inherit that behavior. Now the big question is, what can you put in __metaclass__? The answer is something that can create a class. And what can create a class? type, or anything that subclasses or uses it. Metaclasses in Python 3 The syntax to set the metaclass has been changed in Python 3: class Foo(object, metaclass=something): ... i.e. the __metaclass__ attribute is no longer used, in favor of a keyword argument in the list of base classes. The behavior of metaclasses however stays largely the same. One thing added to metaclasses in Python 3 is that you can also pass attributes as keyword-arguments into a metaclass, like so: class Foo(object, metaclass=something, kwarg1=value1, kwarg2=value2): ... Read the section below for how Python handles this. Custom metaclasses The main purpose of a metaclass is to change the class automatically, when it's created. You usually do this for APIs, where you want to create classes matching the current context. Imagine a stupid example, where you decide that all classes in your module should have their attributes written in uppercase. There are several ways to do this, but one way is to set __metaclass__ at the module level. This way, all classes of this module will be created using this metaclass, and we just have to tell the metaclass to turn all attributes to uppercase. Luckily, __metaclass__ can actually be any callable, it doesn't need to be a formal class (I know, something with 'class' in its name doesn't need to be a class, go figure... but it's helpful). So we will start with a simple example, by using a function. # the metaclass will automatically get passed the same argument # that you usually pass to `type` def upper_attr(future_class_name, future_class_parents, future_class_attrs): """ Return a class object, with the list of its attribute turned into uppercase. """ # pick up any attribute that doesn't start with '__' and uppercase it uppercase_attrs = { attr if attr.startswith("__") else attr.upper(): v for attr, v in future_class_attrs.items() } # let `type` do the class creation return type(future_class_name, future_class_parents, uppercase_attrs) __metaclass__ = upper_attr # this will affect all classes in the module class Foo(): # global __metaclass__ won't work with "object" though # but we can define __metaclass__ here instead to affect only this class # and this will work with "object" children bar = 'bip' Let's check: >>> hasattr(Foo, 'bar') False >>> hasattr(Foo, 'BAR') True >>> Foo.BAR 'bip' Now, let's do exactly the same, but using a real class for a metaclass: # remember that `type` is actually a class like `str` and `int` # so you can inherit from it class UpperAttrMetaclass(type): # __new__ is the method called before __init__ # it's the method that creates the object and returns it # while __init__ just initializes the object passed as parameter # you rarely use __new__, except when you want to control how the object # is created. 
# here the created object is the class, and we want to customize it # so we override __new__ # you can do some stuff in __init__ too if you wish # some advanced use involves overriding __call__ as well, but we won't # see this def __new__(upperattr_metaclass, future_class_name, future_class_parents, future_class_attrs): uppercase_attrs = { attr if attr.startswith("__") else attr.upper(): v for attr, v in future_class_attrs.items() } return type(future_class_name, future_class_parents, uppercase_attrs) Let's rewrite the above, but with shorter and more realistic variable names now that we know what they mean: class UpperAttrMetaclass(type): def __new__(cls, clsname, bases, attrs): uppercase_attrs = { attr if attr.startswith("__") else attr.upper(): v for attr, v in attrs.items() } return type(clsname, bases, uppercase_attrs) You may have noticed the extra argument cls. There is nothing special about it: __new__ always receives the class it's defined in, as the first parameter. Just like you have self for ordinary methods which receive the instance as the first parameter, or the defining class for class methods. But this is not proper OOP. We are calling type directly and we aren't overriding or calling the parent's __new__. Let's do that instead: class UpperAttrMetaclass(type): def __new__(cls, clsname, bases, attrs): uppercase_attrs = { attr if attr.startswith("__") else attr.upper(): v for attr, v in attrs.items() } return type.__new__(cls, clsname, bases, uppercase_attrs) We can make it even cleaner by using super, which will ease inheritance (because yes, you can have metaclasses, inheriting from metaclasses, inheriting from type): class UpperAttrMetaclass(type): def __new__(cls, clsname, bases, attrs): uppercase_attrs = { attr if attr.startswith("__") else attr.upper(): v for attr, v in attrs.items() } # Python 2 requires passing arguments to super: return super(UpperAttrMetaclass, cls).__new__( cls, clsname, bases, uppercase_attrs) # Python 3 can use no-arg super() which infers them: return super().__new__(cls, clsname, bases, uppercase_attrs) Oh, and in Python 3 if you do this call with keyword arguments, like this: class Foo(object, metaclass=MyMetaclass, kwarg1=value1): ... It translates to this in the metaclass to use it: class MyMetaclass(type): def __new__(cls, clsname, bases, dct, kwargs1=default): ... That's it. There is really nothing more about metaclasses. The reason behind the complexity of the code using metaclasses is not because of metaclasses, it's because you usually use metaclasses to do twisted stuff relying on introspection, manipulating inheritance, vars such as __dict__, etc. Indeed, metaclasses are especially useful to do black magic, and therefore complicated stuff. But by themselves, they are simple: * *intercept a class creation *modify the class *return the modified class Why would you use metaclasses classes instead of functions? Since __metaclass__ can accept any callable, why would you use a class since it's obviously more complicated? There are several reasons to do so: * *The intention is clear. When you read UpperAttrMetaclass(type), you know what's going to follow *You can use OOP. Metaclass can inherit from metaclass, override parent methods. Metaclasses can even use metaclasses. *Subclasses of a class will be instances of its metaclass if you specified a metaclass-class, but not with a metaclass-function. *You can structure your code better. You never use metaclasses for something as trivial as the above example. 
It's usually for something complicated. Having the ability to make several methods and group them in one class is very useful to make the code easier to read. *You can hook on __new__, __init__ and __call__. Which will allow you to do different stuff, Even if usually you can do it all in __new__, some people are just more comfortable using __init__. *These are called metaclasses, damn it! It must mean something! Why would you use metaclasses? Now the big question. Why would you use some obscure error-prone feature? Well, usually you don't: Metaclasses are deeper magic that 99% of users should never worry about it. If you wonder whether you need them, you don't (the people who actually need them know with certainty that they need them, and don't need an explanation about why). Python Guru Tim Peters The main use case for a metaclass is creating an API. A typical example of this is the Django ORM. It allows you to define something like this: class Person(models.Model): name = models.CharField(max_length=30) age = models.IntegerField() But if you do this: person = Person(name='bob', age='35') print(person.age) It won't return an IntegerField object. It will return an int, and can even take it directly from the database. This is possible because models.Model defines __metaclass__ and it uses some magic that will turn the Person you just defined with simple statements into a complex hook to a database field. Django makes something complex look simple by exposing a simple API and using metaclasses, recreating code from this API to do the real job behind the scenes. The last word First, you know that classes are objects that can create instances. Well, in fact, classes are themselves instances. Of metaclasses. >>> class Foo(object): pass >>> id(Foo) 142630324 Everything is an object in Python, and they are all either instance of classes or instances of metaclasses. Except for type. type is actually its own metaclass. This is not something you could reproduce in pure Python, and is done by cheating a little bit at the implementation level. Secondly, metaclasses are complicated. You may not want to use them for very simple class alterations. You can change classes by using two different techniques: * *monkey patching *class decorators 99% of the time you need class alteration, you are better off using these. But 98% of the time, you don't need class alteration at all. A: A metaclass is a class that tells how (some) other class should be created. This is a case where I saw metaclass as a solution to my problem: I had a really complicated problem, that probably could have been solved differently, but I chose to solve it using a metaclass. Because of the complexity, it is one of the few modules I have written where the comments in the module surpass the amount of code that has been written. Here it is... #!/usr/bin/env python # Copyright (C) 2013-2014 Craig Phillips. All rights reserved. # This requires some explaining. The point of this metaclass excercise is to # create a static abstract class that is in one way or another, dormant until # queried. I experimented with creating a singlton on import, but that did # not quite behave how I wanted it to. See now here, we are creating a class # called GsyncOptions, that on import, will do nothing except state that its # class creator is GsyncOptionsType. This means, docopt doesn't parse any # of the help document, nor does it start processing command line options. # So importing this module becomes really efficient. 
The complicated bit # comes from requiring the GsyncOptions class to be static. By that, I mean # any property on it, may or may not exist, since they are not statically # defined; so I can't simply just define the class with a whole bunch of # properties that are @property @staticmethods. # # So here's how it works: # # Executing 'from libgsync.options import GsyncOptions' does nothing more # than load up this module, define the Type and the Class and import them # into the callers namespace. Simple. # # Invoking 'GsyncOptions.debug' for the first time, or any other property # causes the __metaclass__ __getattr__ method to be called, since the class # is not instantiated as a class instance yet. The __getattr__ method on # the type then initialises the class (GsyncOptions) via the __initialiseClass # method. This is the first and only time the class will actually have its # dictionary statically populated. The docopt module is invoked to parse the # usage document and generate command line options from it. These are then # paired with their defaults and what's in sys.argv. After all that, we # setup some dynamic properties that could not be defined by their name in # the usage, before everything is then transplanted onto the actual class # object (or static class GsyncOptions). # # Another piece of magic, is to allow command line options to be set in # in their native form and be translated into argparse style properties. # # Finally, the GsyncListOptions class is actually where the options are # stored. This only acts as a mechanism for storing options as lists, to # allow aggregation of duplicate options or options that can be specified # multiple times. The __getattr__ call hides this by default, returning the # last item in a property's list. However, if the entire list is required, # calling the 'list()' method on the GsyncOptions class, returns a reference # to the GsyncListOptions class, which contains all of the same properties # but as lists and without the duplication of having them as both lists and # static singlton values. # # So this actually means that GsyncOptions is actually a static proxy class... # # ...And all this is neatly hidden within a closure for safe keeping. def GetGsyncOptionsType(): class GsyncListOptions(object): __initialised = False class GsyncOptionsType(type): def __initialiseClass(cls): if GsyncListOptions._GsyncListOptions__initialised: return from docopt import docopt from libgsync.options import doc from libgsync import __version__ options = docopt( doc.__doc__ % __version__, version = __version__, options_first = True ) paths = options.pop('<path>', None) setattr(cls, "destination_path", paths.pop() if paths else None) setattr(cls, "source_paths", paths) setattr(cls, "options", options) for k, v in options.iteritems(): setattr(cls, k, v) GsyncListOptions._GsyncListOptions__initialised = True def list(cls): return GsyncListOptions def __getattr__(cls, name): cls.__initialiseClass() return getattr(GsyncListOptions, name)[-1] def __setattr__(cls, name, value): # Substitut option names: --an-option-name for an_option_name import re name = re.sub(r'^__', "", re.sub(r'-', "_", name)) listvalue = [] # Ensure value is converted to a list type for GsyncListOptions if isinstance(value, list): if value: listvalue = [] + value else: listvalue = [ None ] else: listvalue = [ value ] type.__setattr__(GsyncListOptions, name, listvalue) # Cleanup this module to prevent tinkering. 
import sys module = sys.modules[__name__] del module.__dict__['GetGsyncOptionsType'] return GsyncOptionsType # Our singlton abstract proxy class. class GsyncOptions(object): __metaclass__ = GetGsyncOptionsType() A: The tl;dr version The type(obj) function gets you the type of an object. The type() of a class is its metaclass. To use a metaclass: class Foo(object): __metaclass__ = MyMetaClass type is its own metaclass. The class of a class is a metaclass-- the body of a class is the arguments passed to the metaclass that is used to construct the class. Here you can read about how to use metaclasses to customize class construction. A: type is actually a metaclass -- a class that creates another classes. Most metaclass are the subclasses of type. The metaclass receives the new class as its first argument and provide access to class object with details as mentioned below: >>> class MetaClass(type): ... def __init__(cls, name, bases, attrs): ... print ('class name: %s' %name ) ... print ('Defining class %s' %cls) ... print('Bases %s: ' %bases) ... print('Attributes') ... for (name, value) in attrs.items(): ... print ('%s :%r' %(name, value)) ... >>> class NewClass(object, metaclass=MetaClass): ... get_choch='dairy' ... class name: NewClass Bases <class 'object'>: Defining class <class 'NewClass'> get_choch :'dairy' __module__ :'builtins' __qualname__ :'NewClass' Note: Notice that the class was not instantiated at any time; the simple act of creating the class triggered execution of the metaclass. A: Note, this answer is for Python 2.x as it was written in 2008, metaclasses are slightly different in 3.x. Metaclasses are the secret sauce that make 'class' work. The default metaclass for a new style object is called 'type'. class type(object) | type(object) -> the object's type | type(name, bases, dict) -> a new type Metaclasses take 3 args. 'name', 'bases' and 'dict' Here is where the secret starts. Look for where name, bases and the dict come from in this example class definition. class ThisIsTheName(Bases, Are, Here): All_the_code_here def doesIs(create, a): dict Lets define a metaclass that will demonstrate how 'class:' calls it. def test_metaclass(name, bases, dict): print 'The Class Name is', name print 'The Class Bases are', bases print 'The dict has', len(dict), 'elems, the keys are', dict.keys() return "yellow" class TestName(object, None, int, 1): __metaclass__ = test_metaclass foo = 1 def baz(self, arr): pass print 'TestName = ', repr(TestName) # output => The Class Name is TestName The Class Bases are (<type 'object'>, None, <type 'int'>, 1) The dict has 4 elems, the keys are ['baz', '__module__', 'foo', '__metaclass__'] TestName = 'yellow' And now, an example that actually means something, this will automatically make the variables in the list "attributes" set on the class, and set to None. def init_attributes(name, bases, dict): if 'attributes' in dict: for attr in dict['attributes']: dict[attr] = None return type(name, bases, dict) class Initialised(object): __metaclass__ = init_attributes attributes = ['foo', 'bar', 'baz'] print 'foo =>', Initialised.foo # output=> foo => None Note that the magic behaviour that Initialised gains by having the metaclass init_attributes is not passed onto a subclass of Initialised. Here is an even more concrete example, showing how you can subclass 'type' to make a metaclass that performs an action when the class is created. 
This is quite tricky: class MetaSingleton(type): instance = None def __call__(cls, *args, **kw): if cls.instance is None: cls.instance = super(MetaSingleton, cls).__call__(*args, **kw) return cls.instance class Foo(object): __metaclass__ = MetaSingleton a = Foo() b = Foo() assert a is b A: Python classes are themselves objects - that is, instances - of their metaclass. The default metaclass is applied when you define classes as: class foo: ... Metaclasses are used to apply some rule to an entire set of classes. For example, suppose you're building an ORM to access a database, and you want records from each table to be of a class mapped to that table (based on fields, business rules, etc.); a possible use of a metaclass is, for instance, connection pool logic, which is shared by all record classes from all tables. Another use is logic to support foreign keys, which involves multiple classes of records. When you define a metaclass, you subclass type and can override the following magic methods to insert your logic. class somemeta(type): def __new__(mcs, name, bases, clsdict): """ mcs: is the base metaclass, in this case type. name: name of the new class, as provided by the user. bases: tuple of base classes clsdict: a dictionary containing all methods and attributes defined on the class you must return a class object by invoking the __new__ constructor on the base metaclass. ie: return type.__new__(mcs, name, bases, clsdict). in the following case: class foo(baseclass): __metaclass__ = somemeta an_attr = 12 def bar(self): ... @classmethod def foo(cls): ... arguments would be : ( somemeta, "foo", (baseclass, baseofbase,..., object), {"an_attr":12, "bar": <function>, "foo": <bound class method>} ) you can modify any of these values before passing on to type """ return type.__new__(mcs, name, bases, clsdict) def __init__(self, name, bases, clsdict): """ called after the type has been created. unlike in standard classes, the __init__ method cannot modify the instance (cls) - and should be used for class validation. """ pass @classmethod def __prepare__(mcs, name, bases): """ returns a dict or something that can be used as a namespace. the type will then attach methods and attributes from the class definition to it. call order : somemeta.__new__ -> type.__new__ -> type.__init__ -> somemeta.__init__ """ return dict() def mymethod(cls): """ works like a classmethod, but for class objects. Also, my method will not be visible to instances of cls. """ pass Anyhow, __new__ and __init__ are the most commonly used hooks. Metaclassing is powerful, and the above is nowhere near an exhaustive list of its uses. A: The type() function can return the type of an object or create a new type. For example, we can create a Hi class with the type() function and do not need to write it out with class Hi(object): def func(self, name='mike'): print('Hi, %s.' % name) Hi = type('Hi', (object,), dict(hi=func)) h = Hi() h.hi() Hi, mike. type(Hi) type type(h) __main__.Hi In addition to using type() to create classes dynamically, you can control the creation behavior of a class by using a metaclass. According to the Python object model, a class is an object, so a class must be an instance of some other class. By default, a Python class is an instance of the type class. That is, type is the metaclass of most built-in classes and the metaclass of user-defined classes.
class ListMetaclass(type): def __new__(cls, name, bases, attrs): attrs['add'] = lambda self, value: self.append(value) return type.__new__(cls, name, bases, attrs) class CustomList(list, metaclass=ListMetaclass): pass lst = CustomList() lst.add('custom_list_1') lst.add('custom_list_2') lst ['custom_list_1', 'custom_list_2'] Magic will take effect when we passed keyword arguments in metaclass, it indicates the Python interpreter to create the CustomList through ListMetaclass. new (), at this point, we can modify the class definition, for example, and add a new method and then return the revised definition. A: A metaclass is the class of a class. A class defines how an instance of the class (i.e. an object) behaves while a metaclass defines how a class behaves. A class is an instance of a metaclass. While in Python you can use arbitrary callables for metaclasses (like Jerub shows), the better approach is to make it an actual class itself. type is the usual metaclass in Python. type is itself a class, and it is its own type. You won't be able to recreate something like type purely in Python, but Python cheats a little. To create your own metaclass in Python you really just want to subclass type. A metaclass is most commonly used as a class-factory. When you create an object by calling the class, Python creates a new class (when it executes the 'class' statement) by calling the metaclass. Combined with the normal __init__ and __new__ methods, metaclasses therefore allow you to do 'extra things' when creating a class, like registering the new class with some registry or replace the class with something else entirely. When the class statement is executed, Python first executes the body of the class statement as a normal block of code. The resulting namespace (a dict) holds the attributes of the class-to-be. The metaclass is determined by looking at the baseclasses of the class-to-be (metaclasses are inherited), at the __metaclass__ attribute of the class-to-be (if any) or the __metaclass__ global variable. The metaclass is then called with the name, bases and attributes of the class to instantiate it. However, metaclasses actually define the type of a class, not just a factory for it, so you can do much more with them. You can, for instance, define normal methods on the metaclass. These metaclass-methods are like classmethods in that they can be called on the class without an instance, but they are also not like classmethods in that they cannot be called on an instance of the class. type.__subclasses__() is an example of a method on the type metaclass. You can also define the normal 'magic' methods, like __add__, __iter__ and __getattr__, to implement or change how the class behaves. Here's an aggregated example of the bits and pieces: def make_hook(f): """Decorator to turn 'foo' method into '__foo__'""" f.is_hook = 1 return f class MyType(type): def __new__(mcls, name, bases, attrs): if name.startswith('None'): return None # Go over attributes and see if they should be renamed. newattrs = {} for attrname, attrvalue in attrs.iteritems(): if getattr(attrvalue, 'is_hook', 0): newattrs['__%s__' % attrname] = attrvalue else: newattrs[attrname] = attrvalue return super(MyType, mcls).__new__(mcls, name, bases, newattrs) def __init__(self, name, bases, attrs): super(MyType, self).__init__(name, bases, attrs) # classregistry.register(self, self.interfaces) print "Would register class %s now." 
% self def __add__(self, other): class AutoClass(self, other): pass return AutoClass # Alternatively, to autogenerate the classname as well as the class: # return type(self.__name__ + other.__name__, (self, other), {}) def unregister(self): # classregistry.unregister(self) print "Would unregister class %s now." % self class MyObject: __metaclass__ = MyType class NoneSample(MyObject): pass # Will print "NoneType None" print type(NoneSample), repr(NoneSample) class Example(MyObject): def __init__(self, value): self.value = value @make_hook def add(self, other): return self.__class__(self.value + other.value) # Will unregister the class Example.unregister() inst = Example(10) # Will fail with an AttributeError #inst.unregister() print inst + inst class Sibling(MyObject): pass ExampleSibling = Example + Sibling # ExampleSibling is now a subclass of both Example and Sibling (with no # content of its own) although it will believe it's called 'AutoClass' print ExampleSibling print ExampleSibling.__mro__ A: In addition to the published answers I can say that a metaclass defines the behaviour for a class. So, you can explicitly set your metaclass. Whenever Python gets a keyword class then it starts searching for the metaclass. If it's not found – the default metaclass type is used to create the class's object. Using the __metaclass__ attribute, you can set metaclass of your class: class MyClass: __metaclass__ = type # write here other method # write here one more method print(MyClass.__metaclass__) It'll produce the output like this: class 'type' And, of course, you can create your own metaclass to define the behaviour of any class that are created using your class. For doing that, your default metaclass type class must be inherited as this is the main metaclass: class MyMetaClass(type): __metaclass__ = type # you can write here any behaviour you want class MyTestClass: __metaclass__ = MyMetaClass Obj = MyTestClass() print(Obj.__metaclass__) print(MyMetaClass.__metaclass__) The output will be: class '__main__.MyMetaClass' class 'type' A: Others have explained how metaclasses work and how they fit into the Python type system. Here's an example of what they can be used for. In a testing framework I wrote, I wanted to keep track of the order in which classes were defined, so that I could later instantiate them in this order. I found it easiest to do this using a metaclass. class MyMeta(type): counter = 0 def __init__(cls, name, bases, dic): type.__init__(cls, name, bases, dic) cls._order = MyMeta.counter MyMeta.counter += 1 class MyType(object): # Python 2 __metaclass__ = MyMeta class MyType(metaclass=MyMeta): # Python 3 pass Anything that's a subclass of MyType then gets a class attribute _order that records the order in which the classes were defined. A: Note that in python 3.6 a new dunder method __init_subclass__(cls, **kwargs) was introduced to replace a lot of common use cases for metaclasses. Is is called when a subclass of the defining class is created. See python docs. A: look this: Python 3.10.0rc2 (tags/v3.10.0rc2:839d789, Sep 7 2021, 18:51:45) [MSC v.1929 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> class Object: ... pass ... >>> class Meta(type): ... test = 'Worked!!!' ... def __repr__(self): ... return 'This is "Meta" metaclass' ... >>> class ObjectWithMetaClass(metaclass=Meta): ... pass ... 
>>> Object or type(Object()) <class '__main__.Object'> >>> ObjectWithMetaClass or type(ObjectWithMetaClass()) This is "Meta" metaclass >>> Object.test AttributeError: ... >>> ObjectWithMetaClass.test 'Worked!!!' >>> type(Object) <class 'type'> >>> type(ObjectWithMetaClass) <class '__main__.Meta'> >>> type(type(ObjectWithMetaClass)) <class 'type'> >>> Object.__bases__ (<class 'object'>,) >>> ObjectWithMetaClass.__bases__ (<class 'object'>,) >>> type(ObjectWithMetaClass).__bases__ (<class 'type'>,) >>> Object.__mro__ (<class '__main__.Object'>, <class 'object'>) >>> ObjectWithMetaClass.__mro__ (This is "Meta" metaclass, <class 'object'>) >>> In other words, when an object was not created (type of object), we looking MetaClass. A: One use for metaclasses is adding new properties and methods to an instance automatically. For example, if you look at Django models, their definition looks a bit confusing. It looks as if you are only defining class properties: class Person(models.Model): first_name = models.CharField(max_length=30) last_name = models.CharField(max_length=30) However, at runtime the Person objects are filled with all sorts of useful methods. See the source for some amazing metaclassery. A: Here's another example of what it can be used for: * *You can use the metaclass to change the function of its instance (the class). class MetaMemberControl(type): __slots__ = () @classmethod def __prepare__(mcs, f_cls_name, f_cls_parents, # f_cls means: future class meta_args=None, meta_options=None): # meta_args and meta_options is not necessarily needed, just so you know. f_cls_attr = dict() if not "do something or if you want to define your cool stuff of dict...": return dict(make_your_special_dict=None) else: return f_cls_attr def __new__(mcs, f_cls_name, f_cls_parents, f_cls_attr, meta_args=None, meta_options=None): original_getattr = f_cls_attr.get('__getattribute__') original_setattr = f_cls_attr.get('__setattr__') def init_getattr(self, item): if not item.startswith('_'): # you can set break points at here alias_name = '_' + item if alias_name in f_cls_attr['__slots__']: item = alias_name if original_getattr is not None: return original_getattr(self, item) else: return super(eval(f_cls_name), self).__getattribute__(item) def init_setattr(self, key, value): if not key.startswith('_') and ('_' + key) in f_cls_attr['__slots__']: raise AttributeError(f"you can't modify private members:_{key}") if original_setattr is not None: original_setattr(self, key, value) else: super(eval(f_cls_name), self).__setattr__(key, value) f_cls_attr['__getattribute__'] = init_getattr f_cls_attr['__setattr__'] = init_setattr cls = super().__new__(mcs, f_cls_name, f_cls_parents, f_cls_attr) return cls class Human(metaclass=MetaMemberControl): __slots__ = ('_age', '_name') def __init__(self, name, age): self._name = name self._age = age def __getattribute__(self, item): """ is just for IDE recognize. """ return super().__getattribute__(item) """ with MetaMemberControl then you don't have to write as following @property def name(self): return self._name @property def age(self): return self._age """ def test_demo(): human = Human('Carson', 27) # human.age = 18 # you can't modify private members:_age <-- this is defined by yourself. # human.k = 18 # 'Human' object has no attribute 'k' <-- system error. age1 = human._age # It's OK, although the IDE will show some warnings. (Access to a protected member _age of a class) age2 = human.age # It's OK! 
see below: """ if you do not define `__getattribute__` at the class of Human, the IDE will show you: Unresolved attribute reference 'age' for class 'Human' but it's ok on running since the MetaMemberControl will help you. """ if __name__ == '__main__': test_demo() The metaclass is powerful, there are many things (such as monkey magic) you can do with it, but be careful this may only be known to you. A: The top answer is correct. But readers may be coming here searching answers about similarly named inner classes. They are present in popular libraries, such as Django and WTForms. As DavidW points out in the comments beneath this answer, these are library-specific features and are not to be confused with the advanced, unrelated Python language feature with a similar name. Rather, these are namespaces within classes' dicts. They are constructed using inner classes for sake of readability. In this example special field, abstract is visibly separate from fields of Author model. from django.db import models class Author(models.Model): name = models.CharField(max_length=50) email = models.EmailField() class Meta: abstract = True Another example is from the documentation for WTForms: from wtforms.form import Form from wtforms.csrf.session import SessionCSRF from wtforms.fields import StringField class MyBaseForm(Form): class Meta: csrf = True csrf_class = SessionCSRF name = StringField("name") This syntax does not get special treatment in the python programming language. Meta is not a keyword here, and does not trigger metaclass behavior. Rather, third-party library code in packages like Django and WTForms reads this property in the constructors of certain classes, and elsewhere. The presence of these declarations modifies the behavior of the classes that have these declarations. For example, WTForms reads self.Meta.csrf to determine if the form needs a csrf field. A: In object-oriented programming, a metaclass is a class whose instances are classes. Just as an ordinary class defines the behavior of certain objects, a metaclass defines the behavior of certain class and their instances The term metaclass simply means something used to create classes. In other words, it is the class of a class. The metaclass is used to create the class so like the object being an instance of a class, a class is an instance of a metaclass. In python classes are also considered objects. A: A class, in Python, is an object, and just like any other object, it is an instance of "something". This "something" is what is termed as a Metaclass. This metaclass is a special type of class that creates other class's objects. Hence, metaclass is responsible for making new classes. This allows the programmer to customize the way classes are generated. To create a metaclass, overriding of new() and init() methods is usually done. new() can be overridden to change the way objects are created, while init() can be overridden to change the way of initializing the object. Metaclass can be created by a number of ways. One of the ways is to use type() function. type() function, when called with 3 parameters, creates a metaclass. The parameters are :- * *Class Name *Tuple having base classes inherited by class *A dictionary having all class methods and class variables Another way of creating a metaclass comprises of 'metaclass' keyword. Define the metaclass as a simple class. 
In the parameters of inherited class, pass metaclass=metaclass_name Metaclass can be specifically used in the following situations :- * *when a particular effect has to be applied to all the subclasses *Automatic change of class (on creation) is required *By API developers A: I think the ONLamp introduction to metaclass programming is well written and gives a really good introduction to the topic despite being several years old already. http://www.onlamp.com/pub/a/python/2003/04/17/metaclasses.html (archived at https://web.archive.org/web/20080206005253/http://www.onlamp.com/pub/a/python/2003/04/17/metaclasses.html) In short: A class is a blueprint for the creation of an instance, a metaclass is a blueprint for the creation of a class. It can be easily seen that in Python classes need to be first-class objects too to enable this behavior. I've never written one myself, but I think one of the nicest uses of metaclasses can be seen in the Django framework. The model classes use a metaclass approach to enable a declarative style of writing new models or form classes. While the metaclass is creating the class, all members get the possibility to customize the class itself. * *Creating a new model *The metaclass enabling this The thing that's left to say is: If you don't know what metaclasses are, the probability that you will not need them is 99%. A: What are metaclasses? What do you use them for? TLDR: A metaclass instantiates and defines behavior for a class just like a class instantiates and defines behavior for an instance. Pseudocode: >>> Class(...) instance The above should look familiar. Well, where does Class come from? It's an instance of a metaclass (also pseudocode): >>> Metaclass(...) Class In real code, we can pass the default metaclass, type, everything we need to instantiate a class and we get a class: >>> type('Foo', (object,), {}) # requires a name, bases, and a namespace <class '__main__.Foo'> Putting it differently * *A class is to an instance as a metaclass is to a class. When we instantiate an object, we get an instance: >>> object() # instantiation of class <object object at 0x7f9069b4e0b0> # instance Likewise, when we define a class explicitly with the default metaclass, type, we instantiate it: >>> type('Object', (object,), {}) # instantiation of metaclass <class '__main__.Object'> # instance *Put another way, a class is an instance of a metaclass: >>> isinstance(object, type) True *Put a third way, a metaclass is a class's class. >>> type(object) == type True >>> object.__class__ <class 'type'> When you write a class definition and Python executes it, it uses a metaclass to instantiate the class object (which will, in turn, be used to instantiate instances of that class). Just as we can use class definitions to change how custom object instances behave, we can use a metaclass class definition to change the way a class object behaves. What can they be used for? From the docs: The potential uses for metaclasses are boundless. Some ideas that have been explored include logging, interface checking, automatic delegation, automatic property creation, proxies, frameworks, and automatic resource locking/synchronization. Nevertheless, it is usually encouraged for users to avoid using metaclasses unless absolutely necessary. You use a metaclass every time you create a class: When you write a class definition, for example, like this, class Foo(object): 'demo' You instantiate a class object. 
>>> Foo <class '__main__.Foo'> >>> isinstance(Foo, type), isinstance(Foo, object) (True, True) It is the same as functionally calling type with the appropriate arguments and assigning the result to a variable of that name: name = 'Foo' bases = (object,) namespace = {'__doc__': 'demo'} Foo = type(name, bases, namespace) Note, some things automatically get added to the __dict__, i.e., the namespace: >>> Foo.__dict__ dict_proxy({'__dict__': <attribute '__dict__' of 'Foo' objects>, '__module__': '__main__', '__weakref__': <attribute '__weakref__' of 'Foo' objects>, '__doc__': 'demo'}) The metaclass of the object we created, in both cases, is type. (A side-note on the contents of the class __dict__: __module__ is there because classes must know where they are defined, and __dict__ and __weakref__ are there because we don't define __slots__ - if we define __slots__ we'll save a bit of space in the instances, as we can disallow __dict__ and __weakref__ by excluding them. For example: >>> Baz = type('Bar', (object,), {'__doc__': 'demo', '__slots__': ()}) >>> Baz.__dict__ mappingproxy({'__doc__': 'demo', '__slots__': (), '__module__': '__main__'}) ... but I digress.) We can extend type just like any other class definition: Here's the default __repr__ of classes: >>> Foo <class '__main__.Foo'> One of the most valuable things we can do by default in writing a Python object is to provide it with a good __repr__. When we call help(repr) we learn that there's a good test for a __repr__ that also requires a test for equality - obj == eval(repr(obj)). The following simple implementation of __repr__ and __eq__ for class instances of our type class provides us with a demonstration that may improve on the default __repr__ of classes: class Type(type): def __repr__(cls): """ >>> Baz Type('Baz', (Foo, Bar,), {'__module__': '__main__', '__doc__': None}) >>> eval(repr(Baz)) Type('Baz', (Foo, Bar,), {'__module__': '__main__', '__doc__': None}) """ metaname = type(cls).__name__ name = cls.__name__ parents = ', '.join(b.__name__ for b in cls.__bases__) if parents: parents += ',' namespace = ', '.join(': '.join( (repr(k), repr(v) if not isinstance(v, type) else v.__name__)) for k, v in cls.__dict__.items()) return '{0}(\'{1}\', ({2}), {{{3}}})'.format(metaname, name, parents, namespace) def __eq__(cls, other): """ >>> Baz == eval(repr(Baz)) True """ return (cls.__name__, cls.__bases__, cls.__dict__) == ( other.__name__, other.__bases__, other.__dict__) So now when we create an object with this metaclass, the __repr__ echoed on the command line provides a much less ugly sight than the default: >>> class Bar(object): pass >>> Baz = Type('Baz', (Foo, Bar,), {'__module__': '__main__', '__doc__': None}) >>> Baz Type('Baz', (Foo, Bar,), {'__module__': '__main__', '__doc__': None}) With a nice __repr__ defined for the class instance, we have a stronger ability to debug our code. However, much further checking with eval(repr(Class)) is unlikely (as functions would be rather impossible to eval from their default __repr__'s). An expected usage: __prepare__ a namespace If, for example, we want to know in what order a class's methods are created in, we could provide an ordered dict as the namespace of the class. 
We would do this with __prepare__ which returns the namespace dict for the class if it is implemented in Python 3: from collections import OrderedDict class OrderedType(Type): @classmethod def __prepare__(metacls, name, bases, **kwargs): return OrderedDict() def __new__(cls, name, bases, namespace, **kwargs): result = Type.__new__(cls, name, bases, dict(namespace)) result.members = tuple(namespace) return result And usage: class OrderedMethodsObject(object, metaclass=OrderedType): def method1(self): pass def method2(self): pass def method3(self): pass def method4(self): pass And now we have a record of the order in which these methods (and other class attributes) were created: >>> OrderedMethodsObject.members ('__module__', '__qualname__', 'method1', 'method2', 'method3', 'method4') Note, this example was adapted from the documentation - the new enum in the standard library does this. So what we did was instantiate a metaclass by creating a class. We can also treat the metaclass as we would any other class. It has a method resolution order: >>> inspect.getmro(OrderedType) (<class '__main__.OrderedType'>, <class '__main__.Type'>, <class 'type'>, <class 'object'>) And it has approximately the correct repr (which we can no longer eval unless we can find a way to represent our functions.): >>> OrderedMethodsObject OrderedType('OrderedMethodsObject', (object,), {'method1': <function OrderedMethodsObject.method1 at 0x0000000002DB01E0>, 'members': ('__module__', '__qualname__', 'method1', 'method2', 'method3', 'method4'), 'method3': <function OrderedMet hodsObject.method3 at 0x0000000002DB02F0>, 'method2': <function OrderedMethodsObject.method2 at 0x0000000002DB0268>, '__module__': '__main__', '__weakref__': <attribute '__weakref__' of 'OrderedMethodsObject' objects>, '__doc__': None, '__d ict__': <attribute '__dict__' of 'OrderedMethodsObject' objects>, 'method4': <function OrderedMethodsObject.method4 at 0x0000000002DB0378>}) A: Python 3 update There are (at this point) two key methods in a metaclass: * *__prepare__, and *__new__ __prepare__ lets you supply a custom mapping (such as an OrderedDict) to be used as the namespace while the class is being created. You must return an instance of whatever namespace you choose. If you don't implement __prepare__ a normal dict is used. __new__ is responsible for the actual creation/modification of the final class. A bare-bones, do-nothing-extra metaclass would like: class Meta(type): def __prepare__(metaclass, cls, bases): return dict() def __new__(metacls, cls, bases, clsdict): return super().__new__(metacls, cls, bases, clsdict) A simple example: Say you want some simple validation code to run on your attributes -- like it must always be an int or a str. Without a metaclass, your class would look something like: class Person: weight = ValidateType('weight', int) age = ValidateType('age', int) name = ValidateType('name', str) As you can see, you have to repeat the name of the attribute twice. This makes typos possible along with irritating bugs. 
A simple metaclass can address that problem: class Person(metaclass=Validator): weight = ValidateType(int) age = ValidateType(int) name = ValidateType(str) This is what the metaclass would look like (not using __prepare__ since it is not needed): class Validator(type): def __new__(metacls, cls, bases, clsdict): # search clsdict looking for ValidateType descriptors for name, attr in clsdict.items(): if isinstance(attr, ValidateType): attr.name = name attr.attr = '_' + name # create final class and return it return super().__new__(metacls, cls, bases, clsdict) A sample run of: p = Person() p.weight = 9 print(p.weight) p.weight = '9' produces: 9 Traceback (most recent call last): File "simple_meta.py", line 36, in <module> p.weight = '9' File "simple_meta.py", line 24, in __set__ (self.name, self.type, value)) TypeError: weight must be of type(s) <class 'int'> (got '9') Note: This example is simple enough that it could also have been accomplished with a class decorator, but presumably an actual metaclass would be doing much more. The 'ValidateType' class for reference: class ValidateType: def __init__(self, type): self.name = None # will be set by metaclass self.attr = None # will be set by metaclass self.type = type def __get__(self, inst, cls): if inst is None: return self else: return inst.__dict__[self.attr] def __set__(self, inst, value): if not isinstance(value, self.type): raise TypeError('%s must be of type(s) %s (got %r)' % (self.name, self.type, value)) else: inst.__dict__[self.attr] = value A: I saw an interesting use case for metaclasses in a package called classutilities. It checks that all class variables are in uppercase format (it is convenient to have unified logic for configuration classes), and checks that there are no instance-level methods in the class. Another interesting example for metaclasses was deactivation of unit tests based on complex conditions (checking the values of multiple environment variables). A: In Python, a metaclass is a class whose instances are classes; it determines how those classes behave. A class is an instance of a metaclass, just as an ordinary class specifies how its instances will behave. Since metaclasses are in charge of class generation, you can write your own custom metaclasses to change how classes are created, by performing additional actions or injecting code. Custom metaclasses aren't always important, but they can be. A: I want to add a little on why type.__new__() is used over type(). First, take a look at these classes: In [1]: class MyMeta(type): ...: def __new__(cls, cls_name, bases, attrs): ...: print(cls, cls_name, bases, attrs) ...: return super().__new__(cls, cls_name, bases, attrs) ...: In [2]: class AClass(metaclass=MyMeta): ...: pass ...: <class '__main__.MyMeta'> AClass () {'__module__': '__main__', '__qualname__': 'AClass'} In [3]: class BClass: ...: pass ...: In [4]: AClass.__class__ Out[4]: __main__.MyMeta In [5]: BClass.__class__ Out[5]: type In [6]: class SubAClass(AClass): ...: pass ...: <class '__main__.MyMeta'> SubAClass (<class '__main__.AClass'>,) {'__module__': '__main__', '__qualname__': 'SubAClass'} When we were trying to create SubAClass, a subclass of AClass, Python would take a look at the metaclass we designated to be used to create SubAClass, and in this case, we did not pass a metaclass for SubAClass, so Python got None for the metaclass.
and Python would try to pick up the first base class of SubAClass, which is AClass, has a metaclass, which is MyMeta, then it would call MyMeta.__new__ to create SubAClass for simplicity, we can say * *type.__new__ just assigned our metaclass to the parent.__class__(AClass.__class__). and when a subclass was defined(SubAClass) by class statement, Python would just use the parent.__class__ to create that subclass unless we've manually passed a metaclass to our subclass *apparently, if you called type() instead of type.__new__, the class would have type as its __class__ attr. that means AClass would be equivalent of BClass, both of them have type as their __class__ attr how the searching of metaclass works in C code? it works pretty much like what we've just mentioned the function builtin___build_class__ would be called when you defined a class and code is just so straightforward static PyObject * builtin___build_class__(PyObject *self, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames){ if (meta == NULL) { /* if there are no bases, use type: */ if (PyTuple_GET_SIZE(bases) == 0) { meta = (PyObject *) (&PyType_Type); } /* else get the type of the first base */ else { PyObject *base0 = PyTuple_GET_ITEM(bases, 0); meta = (PyObject *)Py_TYPE(base0); } Py_INCREF(meta); isclass = 1; /* meta is really a class */ } PyObject *margs[3] = {name, bases, ns}; cls = PyObject_VectorcallDict(meta, margs, 3, mkw); } basically, meta = (PyObject *)Py_TYPE(base0); is everything we want to know it can be translated to be meta = MyMeta = AClass.__class__ = Py_TYPE(AClass) A: In Python or in any other language we have a type for every variable or object we declare. For getting type of anything(variable,object,etc.) in Python we can use type() function. Bypassing the metaclass keyword in the class definition we can customize the class creation process. class meta(type): pass class baseclass(metaclass=meta): # This is Mestaclass pass class derivedclass(baseclass): pass print(type(meta)) print(type(baseclass)) print(type(derivedclass)) When defining a new class if no metaclass is defined the default type metaclass is used. If a given metaclass is not the object(instance) of type(), in that situation it is used directly as a metaclass.
{ "language": "en", "url": "https://stackoverflow.com/questions/100003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7144" }
Q: Logging Application Block - Logging the caller When logging with Log4Net it's very easy to put the class that called the log into the log file. I've found in the past that this makes it very easy to trace through the code and see the flow through the classes. In Log4Net I use the %logger property in the conversion pattern like so: <conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline" /> And this gives me the output I want: 2008-09-19 15:40:26,906 [3132] ERROR Log4NetTechDemo.Tester [(null)] - Failed method You can see from the output that the class that has called the log is Log4NetTechDemo.Tester, so I can trace the error back to that class quite easily. In the Logging Application Block I cannot figure out how to do this with a simple log call. Does anyone know how it can be done? If so, an example or steps to do so would be very helpful. A: Add the calling method to the LogEntry's ExtendedProperties dictionary; assuming you haven't removed the ExtendedProperties tokens from the formatter template, of course. Put something like this in a logging wrapper: public void LogSomething(string msg) { LogEntry le = new LogEntry { Message = msg }; le.ExtendedProperties.Add("Called from", new StackFrame(1).GetMethod().ReflectedType); Logger.Write(le); } Calling this produces something like this at the end of the log: Extended Properties: Called from - LAB_Demo.Tester A: We haven't found an easy way without hitting the StackTrace. If it's an exception, we just grab from that: StackTrace trace = new StackTrace(ex, true); StackFrame frame = trace.GetFrame(0); For chatty items, we just write the string. We have a snippet that's able to grab the class name on insertion. We also declare the const string with the class name. Not pretty, but it's the best we've found. I hope someone else has a better answer in this thread :)
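Building on the wrapper shown above, here is a slightly fuller sketch along the same lines (it assumes the Enterprise Library Logging Application Block's LogEntry/Logger types used in that answer; the "CallingType" and "CallingMethod" property names are ones I picked for illustration, and you would need matching tokens in your formatter template):

using System.Diagnostics;
using System.Runtime.CompilerServices;
using Microsoft.Practices.EnterpriseLibrary.Logging;

public static class Log
{
    // NoInlining keeps the JIT from inlining this wrapper; if it were inlined,
    // StackFrame(1) could point past the real caller in optimized builds.
    [MethodImpl(MethodImplOptions.NoInlining)]
    public static void Write(string message)
    {
        var caller = new StackFrame(1).GetMethod(); // frame 1 = the method that called Write()
        var entry = new LogEntry { Message = message };
        entry.ExtendedProperties.Add("CallingType", caller.DeclaringType);
        entry.ExtendedProperties.Add("CallingMethod", caller.Name);
        Logger.Write(entry);
    }
}

As the second answer notes, walking the stack is not free, so this is better suited to error-level logging than to chatty trace output.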
{ "language": "en", "url": "https://stackoverflow.com/questions/100007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Facebook RSS application How can I make a Facebook RSS application that autoupdates from the provided RSS feeds? Of course doing this is trivial for canvas applications, but I need this for showing on the Facebook Page. All the RSS apps I've taken a look at either don't update or don't work on Facebook Pages. Especially now that infinite session keys are deprecated (and maybe even forbidden). A: I specifically said I do not need this for canvas applications (as that is trivial to do), but on Facebook Pages! This is done with profile.setFBML and data published that way does reside on the Facebook servers. A: * *Infinite session keys do not exist anymore *Every feed is unique. But even if it weren't, can I just stuff the fbml.refreshRefUrl in a cron job and will it work without a session (because I cannot get an infinite session)? Or maybe I first need to request the offline_access extended permission? Is there any way without using cron jobs? A: You have two options. * *Convert your user session (when the user accesses your app manually) to infinite session, then periodically update the profile information for a user. There is some information on how to do this (and what API calls you can make without sessions) here. *Create a new "handle" (see fb:ref) for each unique feed and update that handle whenever the feed changes. Handles are key-value pairs that are associated with your app, that you can include inline through FBML. This allows you to do a single call to the API that will update all users subscribed to a given feed. The second option is probably the best in the long run.
{ "language": "en", "url": "https://stackoverflow.com/questions/100038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Regular expressions in C# for file name validation What is a good regular expression that can validate a text string to make sure it is a valid Windows filename? (AKA not have \/:*?"<>| characters). I'd like to use it like the following: // Return true if string is invalid. if (Regex.IsMatch(szFileName, "<your regex string>")) { // Tell user to reformat their filename. } A: This isn't as simple as just checking whether the file name contains any of System.IO.Path.GetInvalidFileNameChars (as mentioned in a couple of other answers already). For example what if somebody enters a name that contains no invalid chars but is 300 characters long (i.e. greater than MAX_PATH) - this won't work with any of the .NET file APIs, and only has limited support in the rest of windows using the \?\ path syntax. You need context as to how long the rest of the path is to determine how long the file name can be. You can find more information about this type of thing here. Ultimately all your checks can reliably do is prove that a file name is not valid, or give you a reasonable estimate as to whether it is valid. It's virtually impossible to prove that the file name is valid without actually trying to use it. (And even then you have issues like what if it already exists? It may be a valid file name, but is it valid in your scenario to have a duplicate name?) A: As answered already, GetInvalidFileNameChars should do it for you, and you don't even need the overhead of regular expressions: if (proposedFilename.IndexOfAny(System.IO.Path.GetInvalidFileNameChars()) != -1) { MessageBox.Show("The filename is invalid"); return; } A: Why not using the System.IO.FileInfo class, together with the DirectoryInfo class you have a set of usefull methods. A: Path.GetInvalidFileNameChars - Is not a good way. Try this: if(@"C:\A.txt".IndexOfAny(System.IO.Path.GetInvalidFileNameChars()) != -1) { MessageBox.Show("The filename is invalid"); return; }
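If a regular expression is still preferred over IndexOfAny, one option (a sketch, not the only way) is to build the character class from the same Path.GetInvalidFileNameChars() list used above, rather than hard-coding \/:*?"<>| by hand:

using System.IO;
using System.Text.RegularExpressions;

// A character class containing every char the framework reports as invalid in a file name.
string pattern = "[" + Regex.Escape(new string(Path.GetInvalidFileNameChars())) + "]";

// Return true if string is invalid.
if (Regex.IsMatch(szFileName, pattern))
{
    // Tell user to reformat their filename.
}

Keep the caveats above in mind: this only rejects invalid characters. It says nothing about overall path length (MAX_PATH), reserved device names such as CON or PRN, or whether the file already exists, so at best it proves a name is invalid rather than proving it valid.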
{ "language": "en", "url": "https://stackoverflow.com/questions/100045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: I need to join two lists, sort them and remove duplicates. Is there a better way to do this? I have two unsorted lists and I need to produce another list which is sorted and where all the elements are unique. The elements can occur multiple times in both lists and they are originally unsorted. My function looks like this: (defun merge-lists (list-a list-b sort-fn) "Merges two lists of (x, y) coordinates sorting them and removing dupes" (let ((prev nil)) (remove-if (lambda (point) (let ((ret-val (equal point prev))) (setf prev point) ret-val)) (sort (merge 'list list-a list-b sort-fn) ;' sort-fn)))) Is there a better way to achieve the same? Sample call: [CL]> (merge-lists '(9 8 4 8 9 7 2) '(1 7 3 9 2 6) #'>) ==> (9 8 7 6 4 3 2 1) A: Our neighbourhood friendly Lisp guru pointed out the remove-duplicates function. He also provided the following snippet: (defun merge-lists (list-a list-b sort-fn test-fn) (sort (remove-duplicates (append list-a list-b) :test test-fn) sort-fn)) A: I think I would first sort the two lists separately and then merge them with a function that also skips over duplicates. This should be a bit faster as it requires one less traversal of both lists. P.S.: I doubt it can be done much faster as you basically always need at least one sort and one merge. Perhaps you can combine both in one function, but I wouldn't be surprised if that doesn't make a (big) difference. A: If the lists are sorted before you merge them, they can be merged, duplicate-removed and sorted at the same time. If they are sorted AND duplicate-free, then the merge/sort/duplicate-remove function becomes really trivial. In fact, it might be better to change your insert function so that it performs a sorted insertion that checks for duplicates. Then you always have sorted lists that are free of duplicates, and merging them is a trivial matter. Then again, you might prefer to have a fast insert function at the cost of sorting/removing duplicates later on. A: Wouldn't the remove-duplicates function operate better if the sort was applied before the remove-duplicates? A: As Antti pointed out, you probably want to leverage REMOVE-DUPLICATES and SORT, though I'd probably use a keyword (or optional argument) for the test function: (defun merge-lists (list-1 list-2 sort-fn &key (test #'eql)) ...) or (defun merge-lists (list-1 list-2 sort-fn &optional (test #'eql) ...) This way, you won't have to specify the test function (used by REMOVE-DUPLICATES to test for "is these considered duplicates"), unless EQL is not good enough. A: Sounds like you need to be using Sets.
{ "language": "en", "url": "https://stackoverflow.com/questions/100048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: WCF - Overhead of throwing FaultExceptions within your service I posted a question about using Messages versus Fault Exceptions to communicate business rules between services. I was under the impression it carried overhead to throw this exception over the wire, but considering it's just a message that get serialized and deserialized, they were in fact one and the same. But this got me thinking about throwing exceptions in general or more specifically throwing FaultExceptions. Now within my service, if i use throw new FaultException to communicate a simple business rule like "Your account has not been activated", What overhead does this now carry? Is it the same overhead as throwing regular exceptions in .NET? or does WCF service handle these more efficiently with the use of Fault Contracts. So in my user example, which is the optimal/preferred way to write my service method option a public void AuthenticateUser() { throw new FaultException("Your account has not been activated"); } option b public AutheticateDto AutheticateUser() { return new AutheticateDto() { Success = false, Message = "Your account has not been activated"}; } A: Well... In general you shouldn't be throwing exceptions for expected conditions, or anything you expect to happen regularly. They are massively slower than doing normal methods. E.g., if you expect a file open to fail, don't throw a that exception up to your caller, pass the back a failure code, or provide a "CanOpenFile" method to do the test. True, the message text itself isn't much, but a real exception is thrown and handled (possibly more expensively because of IIS), and then real exception is again thrown on the client when the fault is deserialized. So, double hit. Honestly, if it is a low volume of calls, then you probably won't take any noticeable hit, but is not a good idea anyway. Who wants to put business logic in a catch block :) Microsoft : Exceptions And Performance, & Alternatives Developer Fusion: Performance, with example A: It's just like a normal exception, and uses the same wrapping code as a normal exception would to marshal into a fault, including unwinding the stack. Like exceptions SOAP faults shouldn't, to my mind, be used for program flow, but to indicate errors.
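If option (a) is chosen, one middle ground worth sketching is a typed fault: on the server it is still a thrown exception, with the cost described above, but what crosses the wire is just a serialized detail object, so the client can branch on it much like option (b)'s DTO instead of parsing a message string. The type and member names below (AccountFault, IAccountService, Reason) are made up for illustration:

using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class AccountFault
{
    [DataMember]
    public string Reason { get; set; }
}

[ServiceContract]
public interface IAccountService
{
    [OperationContract]
    [FaultContract(typeof(AccountFault))]
    void AuthenticateUser();
}

public class AccountService : IAccountService
{
    public void AuthenticateUser()
    {
        // Still an exception server-side, but the detail travels as a plain data contract.
        throw new FaultException<AccountFault>(
            new AccountFault { Reason = "Your account has not been activated" },
            new FaultReason("Account not activated"));
    }
}

public static class ClientExample
{
    // Client side; creating the channel/proxy is omitted here.
    public static void Call(IAccountService proxy)
    {
        try
        {
            proxy.AuthenticateUser();
        }
        catch (FaultException<AccountFault> fault)
        {
            Console.WriteLine(fault.Detail.Reason);
        }
    }
}

Whether this is worth it over option (b) comes back to the point made above: if "not activated" is an expected, regular outcome, returning a result object avoids the exception machinery entirely.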
{ "language": "en", "url": "https://stackoverflow.com/questions/100053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do you test your web UI to see if it renders uniformly across different browsers? Tools like Selenium are good for testing user interactions on the web UI. However, I was curious what are people approaches for strictly testing and verifying that web pages are rendered correctly across a set of browsers? Is this even possible? A: May I recommend browsershots where you can submit pages and have them rendered out in a variety of browsers with various things set on or off such as Flash and JavaScript. At the end of the day you will still want to install FF, IE6-8, Opera and Safari/Chrome for testing manually. Also, if you've got a friend with a Mac (or a PC if you're using a Mac) get them to test in Safari too as I've personally found differences in the way both of them render the same page. I'd also recommend that you develop mainly in Firefox and regularly check it in IE6 as you work. IE6 is the one that will mostly screw up so if it's working in both it's more likely to be working in all. When you find rendering weirdness try and fix it in your markup and CSS first before resorting to CSS hacks as they can lead to 'interesting' problems later or in other browsers. A: There is only a handful of browsers you need to test, as some share a common rendering engine (Gecko or Webkit). Without explaining which or why, here's the current wisdom (2009): * *Build your site using Firefox or Opera (on any platform). BTW Opera uses its own Presto engine; *Test in whichever of the above you didn't use. *Validate the (X)HTML and CSS (important!). *Test it in >=IE7 and note the glitches, if any. *Use conditional comments in separate stylesheets for each version IE - never use CSS hacks as they'll go out of date. *Test in IE <7 if you like and do the same, or use conditional comments to ask users (politely) to upgrade their version of IE. *Test in Safari (Webkit). *Don't test in Chrome, you already have by proxy (Webkit)! *Don't test in IE for Mac - the share is too low and it's no longer updated. Finally, try enlarging the text in Firefox, Opera, IE and Safari. Opera also has a hand-held emulation mode for mobiles. You will have now covered (theatrical guess) 99.9% of browser setups. If you're on OS X or Linux, you can run Windows in a virtual environment like Parallels or Wine. Apparently Wine also has a Windows binary, but I couldn't find it. Caution: you'll need to be sure that your virtual environment allows IE to read conditonal comments. In practice, I find that if a site has valid code and works in Firefox, Safari and Opera, it'll probably be okay in IE7 up. The only HTML/CSS gotcha is IE's 'haslayout' handling. If you don't have the browsers, BrowserStack is an excellent online testing service. Finally, if you're using Javascript, you'll need to go through a similar process, problem being that as a rapidly developing area, newer versions of some browsers handle Javascript in increasingly effective ways, so functions in older versions might break or fail quietly. A: If you just want to see if layout is correct, just submit your website to BrowserShots.org and visit later to see the screenshots. If you want to test the functionality (JavaScript, etc.) then you'll need to test manually. A: Manually? I do not see an alternative if you want strict testing. Just install as many different browsers as possible and test in all of them. Of course this includes different versions of most popular browsers, and you need to check on Windows, Linux and Macintosh. 
A: Previously I used VMs for the different versions of IE, but I have since found some newer tools for testing layout and UI as well with this tool (link); for Firefox, use the Firebug extension. These tools are for manual testing.
{ "language": "en", "url": "https://stackoverflow.com/questions/100058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: LINQ to SQL insert-if-non-existent I'd like to know if there's an easier way to insert a record if it doesn't already exist in a table. I'm still trying to build my LINQ to SQL skills. Here's what I've got, but it seems like there should be an easier way. public static TEntity InsertIfNotExists<TEntity> ( DataContext db, Table<TEntity> table, Func<TEntity,bool> where, TEntity record ) where TEntity : class { TEntity existing = table.SingleOrDefault<TEntity>(where); if (existing != null) { return existing; } else { table.InsertOnSubmit(record); // Can't use table.Context.SubmitChanges() // 'cause it's read-only db.SubmitChanges(); } return record; } A: Agree with marxidad's answer, but see note 1. Note 1: IMHO, it is not wise to call db.SubmitChanges() in a helper method, because you may break the context transaction. This means that if you call the InsertIfNotExists<TEntity> in the middle of a complex update of several entities you are saving the changes not at once but in steps. Note 2: The InsertIfNotExists<TEntity> method is a very generic method that works for any scenario. If you want to just discriminate the entities that have loaded from the database from the entities that have been created from the code, you can utilize the partial method OnLoaded of the Entity class like this: public partial class MyEntity { public bool IsLoaded { get; private set; } partial void OnLoaded() { IsLoaded = true; } } Given that (and note 1), then InsertIfNotExists functionality is reduced to the following: if (!record.IsLoaded) db.InsertOnSubmit(record); A: Small modification for Mark's answer: If you only care about checking if the entity exists by its primary key, Marke's answer can be used like this: public static void InsertIfNotExists<TEntity> (this Table<TEntity> table , TEntity entity ) where TEntity : class { if (!table.Contains(entity)) { table.InsertOnSubmit(entity); } } A: public static void InsertIfNotExists<TEntity> (this Table<TEntity> table, TEntity entity, Expression<Func<TEntity,bool>> predicate) where TEntity : class { if (!table.Any(predicate)) { table.InsertOnSubmit(record); table.Context.SubmitChanges(); } } table.InsertIfNotExists(entity, e=>e.BooleanProperty); A: As others have pointed out, the if (!Any()) { InsertOnSubmit(); } solutions all have a race condition. If you go that route, when you call SubmitChanges, you have to take into account that either a) a SqlException could be raised for a duplicate insert, or b) you could have duplicate records in the table. Fortunately, we can use the database to avoid the race condition by enforcing uniqueness. The following code assumes that there is a primary key or unique constraint on the table to prevent the insertion of duplicate records. using (var db = new DataContext()) { // Add the new (possibly duplicate) record to the data context here. try { db.SubmitChanges(); } catch (SqlException ex) { const int violationOfPrimaryKeyContraint = 2627; const int violationOfUniqueConstraint = 2601; var duplicateRecordExceptionNumbers = new [] { violationOfPrimaryKeyContraint, violationOfUniqueConstraint }; if (!duplicateRecordExceptionNumbers.Contains(ex.Number)) { throw; } } } Now... things get a fair bit more complicated if you have to perform the insert in a batch transaction with other database updates.
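Pulling the answers above together, one possible shape (a sketch, not the only way) is to keep the helper down to InsertOnSubmit only - per Note 1, the caller still owns SubmitChanges and the unit of work - and to handle the duplicate-key race at submit time. MyDataContext, Customers, Email, email and newCustomer below are placeholder names:

using System;
using System.Data.Linq;
using System.Data.SqlClient;
using System.Linq;
using System.Linq.Expressions;

public static class TableExtensions
{
    // Only queues the insert; SubmitChanges stays with the caller.
    public static void InsertOnSubmitIfNotExists<TEntity>(
        this Table<TEntity> table,
        Expression<Func<TEntity, bool>> predicate,
        TEntity entity) where TEntity : class
    {
        if (!table.Any(predicate))
        {
            table.InsertOnSubmit(entity);
        }
    }
}

// Usage:
using (var db = new MyDataContext())
{
    db.Customers.InsertOnSubmitIfNotExists(c => c.Email == email, newCustomer);
    try
    {
        db.SubmitChanges();
    }
    catch (SqlException ex)
    {
        // 2627/2601 = primary key / unique constraint violation: another writer
        // inserted the same row between our check and our submit, so treat it
        // as "already exists" rather than an error.
        if (ex.Number != 2627 && ex.Number != 2601)
        {
            throw;
        }
    }
}

This still assumes the table has a primary key or unique constraint backing the predicate, as the last answer points out.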
{ "language": "en", "url": "https://stackoverflow.com/questions/100068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Why can't I delete a file in %ProgramFiles% from a Unit Test via Resharper's Test Runner Unit Test? I am trying to write a test which, in it's fixtures Setup, it backs up a file and deletes the original, runs the test without the original present, then in the teardown, restores the original from the backup. The file is located in my %ProgramFiles% folder. I get an UnauthorizedAccessException on the fileInfo.Delete() statement. I have no problem deleting this file from another test project on the same machine that is not running from the Resharper Test Runner. I can't move the file to somewhere else - it's ssapi.dll, an installed dll for Visual SourceSafe. (Yes, I'm doing something invasive in a Unit Test.) It's the same user (me) for both ways -- I checked it via Task Manager. My user account is a member of the local Administrators group. What other factors are there which determine my "Authorization" to do something with a file? RESOLVED: Though it doesn't answer my original question (which I'd still like to know the answer to), I have found a workaround for my testing purposes, using the System.Security.Permissions framewok, doing a Demand for FileIOPermissionAccess.Read in the app (non-test) code which requires the file (for an Interop call), and a Deny for the same in the test of that code which requires a scenario that that file is not there. This should work for now (and I love having learned a bit about the System.Security.Permissions namespace)! A: Not really a solution, but I'd consider fixing this problem from a different angle. You could perhaps consider changing the directory to %AppData% (you might need to make this change for you main application also). It might solve your problem and also will see you well when you move to Vista, since UAC could stop you (or the application user) from using the %ProgramFiles% directory. A: It is possible that ReSharper is running its Test Runner as a separate process, and that separate process is not using your Windows identity but, instead, another one with lower privileges. You might be able to verify this opening Task Manager and checking Show processes from all users. A: You can probably fix this by giving your user account full access to that folder. Navigate to the folder in windows explorer. Right click on the folder and select properties. Select the security tab, then the Edit button, and add full control for yourself. Yes - I suppose it's a potential security issue, but you have to change the files in that directory, and you seem to know what you're doing, so it should work. A: You could activate auditing for the file, and check the error message in the event log. Note that you have to turn on auditing in two places, once under Local Security Policy/Local Policies/Audit Policy and once on the file itself. This would not solve the problem, but would at least help diagnose the problem. A: Are you running Vista or Server 2008 with UAC turned on? If yes, this might be the cause - the test runner process might not be in "elevated" mode.
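For anyone wanting to reproduce the workaround from the RESOLVED note, here is a rough sketch using the .NET Framework's Code Access Security types (the ssapiPath parameter is a placeholder and must be an absolute path; note that Deny only affects code further down the current call stack, and CAS Deny is deprecated from .NET 4 onwards):

using System;
using System.Security;
using System.Security.Permissions;

public static class SsapiGuard
{
    // Production code: call this before the interop call that needs ssapi.dll.
    public static void EnsureSsapiReadable(string ssapiPath)
    {
        var read = new FileIOPermission(FileIOPermissionAccess.Read, ssapiPath);
        read.Demand(); // throws SecurityException if read access has been denied
    }

    // Test code: simulate "file not available" without touching %ProgramFiles%.
    public static void RunWithSsapiDenied(string ssapiPath, Action codeUnderTest)
    {
        var read = new FileIOPermission(FileIOPermissionAccess.Read, ssapiPath);
        read.Deny(); // read access is denied for everything called below this frame
        try
        {
            codeUnderTest(); // its Demand() should now throw SecurityException
        }
        finally
        {
            CodeAccessPermission.RevertDeny();
        }
    }
}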
{ "language": "en", "url": "https://stackoverflow.com/questions/100070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Winsock - 10038 Error - Win2K3 Server - baffling behaviour Attempt to send a message through a socket failed with WinSock error 10038. After around 40 seconds, messages are received successfully from the same socket and subsequently the send() is also succeeding in the same socket. This behaviour has been witnessed in Windows Server 2003. Is this any known behaviour with WinSock and Windows Server 2003? A: Winsock error 10038 means "An operation was attempted on something that is not a socket". Little trick to find info about error codes (usefull for all sorts of windows error codes): * *Open a command prompt *Type "net helpmsg 10038" What language is your application written in? If it's C/C++, could it be that you are using an invalid socket handle? A: Thanks so much to a_mole for the idea of checking for layered winsock providers. We are having problems with some of our PC's and TimesTen DB. When we try to setup and ODBC Client DSN, we get a 10038 error. On examining the netsh output from the affected PC's, we found that they have Embassy Trust Suite by Wave Systems installed. Evidently Dell pre-installed this on some of their PC's. Anyway, on uninstalling this software, the problem has been solved. Thanks again Lou A: Not a platform issue, I can guarantee that. Most likely, whatever variable you are using to access the socket handle is not thread-safe and is being used in the send() call before the actual socket is created. Another possible cause is the presence of layered winsock providers. "netsh winsock show" at a cmd prompt will show you the installed providers and you can try removing any non-microsoft ones.
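A small aside to the "net helpmsg" tip above: if any of the code involved is .NET, the same text is available programmatically, which can be handy when logging raw Winsock error codes (a sketch; 10038 is WSAENOTSOCK):

using System;
using System.ComponentModel;
using System.Net.Sockets;

class WinsockErrorDemo
{
    static void Main()
    {
        // Same message that "net helpmsg 10038" prints:
        Console.WriteLine(new Win32Exception(10038).Message);

        // The managed socket API exposes the same code as an enum value:
        Console.WriteLine((SocketError)10038);                // NotSocket
        Console.WriteLine(new SocketException(10038).Message);
    }
}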
{ "language": "en", "url": "https://stackoverflow.com/questions/100074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What's a good threadsafe singleton generic template pattern in C# I have the following C# singleton pattern, is there any way of improving it? public class Singleton<T> where T : class, new() { private static object _syncobj = new object(); private static volatile T _instance = null; public static T Instance { get { if (_instance == null) { lock (_syncobj) { if (_instance == null) { _instance = new T(); } } } return _instance; } } public Singleton() { } } Preferred usage example: class Foo : Singleton<Foo> { } Related: An obvious singleton implementation for .NET? A: Courtesy of Judith Bishop, http://patterns.cs.up.ac.za/ This singleton pattern implementation ensures lazy initialisation. // Singleton PatternJudith Bishop Nov 2007 // Generic version public class Singleton<T> where T : class, new() { Singleton() { } class SingletonCreator { static SingletonCreator() { } // Private object instantiated with private constructor internal static readonly T instance = new T(); } public static T UniqueInstance { get { return SingletonCreator.instance; } } } A: This is my point using .NET 4 public class Singleton<T> where T : class, new() { Singleton (){} private static readonly Lazy<T> instance = new Lazy<T>(()=> new T()); public static T Instance { get { return instance.Value; } } } A: I don't think that you really want to "burn your base class" so that you can save 2 lines of code. You don't really need a base class to implement singleton. Whenever you need a singleton, just do this: class MyConcreteClass { #region Singleton Implementation public static readonly Instance = new MyConcreteClass(); private MyConcreteClass(){} #endregion /// ... } A: More details on this answer on a different thread : How to implement a singleton in C#? However the thread doesn't use generic. A: public sealed class Singleton { private static readonly Singleton instance = new Singleton(); private Singleton(){} public static Singleton Instance { get { return instance; } } } There's no ambiguity in .NET around initialization order; but this raises threading issues. A: :/ The generic "singleton" pattern by Judith Bishop seems kinda flawed, its always possible to create several instances of type T as the constructor must be public to use it in this "pattern". In my opinion it has absolutely nothing to do with singleton, its just a kind of factory, which always returns the same object, but doesn't make it singleton... as long as there can be more than one instance of a class it can't be a singleton. Any reason this pattern is top-rated? public sealed class Singleton { private static readonly Singleton _instance = new Singleton(); private Singleton() { } public static Singleton Instance { get { return _instance; } } } Static initializers are considered thread-safe.. I don't know but you shouldn't use idioms of singleton at all, if you wrap my code above its not more than 3 lines... and inheriting from a singleton doesn't make any sense either. A: According to Jon Skeet in Implementing the Singleton Pattern in C# the code you posted is actually considered as bad code, because it appears broken when checked against the ECMA CLI standard. Also watch out: everytime you instantiate your object with a new type of T, it becomes another instance; it doesn't get reflected in your original singleton. A: This code won't compile, you need "class" constraint on T. 
Also, this code requires public constructor on target class, which is not good for singleton, because you can't control at compile time that you obtain (single) instance only via Instance property (or field). If you don't have any other static members except Instance, you are ok to go with just this: class Foo { public static readonly Instance = new Foo(); private Foo() {} static Foo() {} } It is thread safe (guaranteed by CLR) and lazy (instance is created with first access to type). For more discussion about BeforeFieldInit and why we need static constructor here, see https://csharpindepth.com/articles/BeforeFieldInit. If you want to have other public static members on type, but create object only on access to Instance, you may create nested type, like in https://csharpindepth.com/articles/Singleton A: I was looking for a better Singleton pattern and liked this one. So ported it to VB.NET, can be useful for others: Public MustInherit Class Singleton(Of T As {Class, New}) Public Sub New() End Sub Private Class SingletonCreator Shared Sub New() End Sub Friend Shared ReadOnly Instance As New T End Class Public Shared ReadOnly Property Instance() As T Get Return SingletonCreator.Instance End Get End Property End Class A: As requested, cross posting from my original answer to another question. My version uses Reflection, works with non-public constructors in the derived class, is threadsafe (obviously) with lazy instantiation (according to the article I found linked below): public class SingletonBase<T> where T : class { static SingletonBase() { } public static readonly T Instance = typeof(T).InvokeMember(typeof(T).Name, BindingFlags.CreateInstance | BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic, null, null, null) as T; } I picked this up a few years ago, not sure how much is mine, but googling on the code might find the original source of the technique if it wasn't me. This is the oldest source of the code that I can find that was not me posting it. A: Try this generic Singleton class implementing the Singleton design pattern in a thread safe and lazy way (thx to wcell). public abstract class Singleton<T> where T : class { /// <summary> /// Returns the singleton instance. /// </summary> public static T Instance { get { return SingletonAllocator.instance; } } internal static class SingletonAllocator { internal static T instance; static SingletonAllocator() { CreateInstance(typeof(T)); } public static T CreateInstance(Type type) { ConstructorInfo[] ctorsPublic = type.GetConstructors( BindingFlags.Instance | BindingFlags.Public); if (ctorsPublic.Length > 0) throw new Exception( type.FullName + " has one or more public constructors so the property cannot be enforced."); ConstructorInfo ctorNonPublic = type.GetConstructor( BindingFlags.Instance | BindingFlags.NonPublic, null, new Type[0], new ParameterModifier[0]); if (ctorNonPublic == null) { throw new Exception( type.FullName + " doesn't have a private/protected constructor so the property cannot be enforced."); } try { return instance = (T)ctorNonPublic.Invoke(new object[0]); } catch (Exception e) { throw new Exception( "The Singleton couldnt be constructed, check if " + type.FullName + " has a default constructor", e); } } } } A: The Double-Check Locking [Lea99] idiom provided by Microsoft here is amazingly similar to your provided code, unfortunately, this fails the ECMA CLI standard for a puritan view of thread-safe code and may not work correctly in all situations. 
In a multi-threaded program, different threads could try to instantiate a class simultaneously. For this reason, a Singleton implementation that relies on an if statement to check whether the instance is null will not be thread-safe. Don't write code like that! A simple, yet effective means of creating a thread-safe singleton is to use a nested class to instantiate it. The following is an example of a lazy instantiation singleton: public sealed class Singleton { private Singleton() { } public static Singleton Instance { get { return SingletonCreator.instance; } } private class SingletonCreator { static SingletonCreator() { } internal static readonly Singleton instance = new Singleton(); } } Usage: Singleton s1 = Singleton.Instance; Singleton s2 = Singleton.Instance; if (s1.Equals(s2)) { Console.WriteLine("Thread-Safe Singleton objects are the same"); } Generic Solution: public class Singleton<T> where T : class, new() { private Singleton() { } public static T Instance { get { return SingletonCreator.instance; } } private class SingletonCreator { static SingletonCreator() { } internal static readonly T instance = new T(); } } Usage: class TestClass { } Singleton s1 = Singleton<TestClass>.Instance; Singleton s2 = Singleton<TestClass>.Instance; if (s1.Equals(s2)) { Console.WriteLine("Thread-Safe Generic Singleton objects are the same"); } Lastly, here is a somewhat releated and usefull suggestion - to help avoid deadlocks that can be caused by using the lock keyword, consider adding the following attribute to help protect code in only public static methods: using System.Runtime.CompilerServices; [MethodImpl (MethodImplOptions.Synchronized)] public static void MySynchronizedMethod() { } References: * *C# Cookbook (O'Reilly), Jay Hilyard & Stephen Teilhet *C# 3.0 Design Patterns (O'Reilly), Judith Bishop *CSharp-Online.Net - Singleton design pattern: Thread-safe Singleton A: I quite liked your original answer - the only thing missing (according to the link posted by blowdart) is to make the _instance variable volatile, to make sure it has actually been set in the lock. I actually use blowdarts solution when I have to use a singleton, but I dont have any need to late-instantiate etc. A: My contribution for ensuring on demand creation of instance data: /// <summary>Abstract base class for thread-safe singleton objects</summary> /// <typeparam name="T">Instance type</typeparam> public abstract class SingletonOnDemand<T> { private static object __SYNC = new object(); private static volatile bool _IsInstanceCreated = false; private static T _Instance = default(T); /// <summary>Instance data</summary> public static T Instance { get { if (!_IsInstanceCreated) lock (__SYNC) if (!_IsInstanceCreated) _Instance = Activator.CreateInstance<T>(); return _Instance; } } } A: pff... again... 
:) My contribution for ensuring on demand creation of instance data: /// <summary>Abstract base class for thread-safe singleton objects</summary> /// <typeparam name="T">Instance type</typeparam> public abstract class SingletonOnDemand<T> { private static object __SYNC = new object(); private static volatile bool _IsInstanceCreated = false; private static T _Instance = default(T); /// <summary>Instance data</summary> public static T Instance { get { if (!_IsInstanceCreated) lock (__SYNC) if (!_IsInstanceCreated) { _Instance = Activator.CreateInstance<T>(); _IsInstanceCreated = true; } return _Instance; } } } A: public static class LazyGlobal<T> where T : new() { public static T Instance { get { return TType.Instance; } } private static class TType { public static readonly T Instance = new T(); } } // user code: { LazyGlobal<Foo>.Instance.Bar(); } Or: public delegate T Func<T>(); public static class CustomGlobalActivator<T> { public static Func<T> CreateInstance { get; set; } } public static class LazyGlobal<T> { public static T Instance { get { return TType.Instance; } } private static class TType { public static readonly T Instance = CustomGlobalActivator<T>.CreateInstance(); } } { // setup code: // CustomGlobalActivator<Foo>.CreateInstance = () => new Foo(instanceOf_SL_or_IoC.DoSomeMagicReturning<FooDependencies>()); CustomGlobalActivator<Foo>.CreateInstance = () => instanceOf_SL_or_IoC.PleaseResolve<Foo>(); // ... // user code: LazyGlobal<Foo>.Instance.Bar(); } A: Saw one a while ago which uses reflection to access a private (or public) default constructor: public static class Singleton<T> { private static object lockVar = new object(); private static bool made; private static T _singleton = default(T); /// <summary> /// Get The Singleton /// </summary> public static T Get { get { if (!made) { lock (lockVar) { if (!made) { ConstructorInfo cInfo = typeof(T).GetConstructor(BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic, null, new Type[0], null); if (cInfo != null) _singleton = (T)cInfo.Invoke(new object[0]); else throw new ArgumentException("Type Does Not Have A Default Constructor."); made = true; } } } return _singleton; } } } A: I submit this to the group. It seems to be thread-safe, generic and follows the pattern. You can inherit from it. This is cobbled together from what others have said. 
public class Singleton<T> where T : class { class SingletonCreator { static SingletonCreator() { } internal static readonly T Instance = typeof(T).InvokeMember(typeof(T).Name, BindingFlags.CreateInstance | BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic, null, null, null) as T; } public static T Instance { get { return SingletonCreator.Instance; } } } Intended implementation: public class Foo: Singleton<Foo> { private Foo() { } } Then: Foo.Instance.SomeMethod(); A: As in wikipedia: the singleton pattern is a design pattern that restricts the instantiation of a class to one object I beleave that there is no guaranteed way to do it using generics, if you have restricted the instantiation of the singleton itself, how to restrict the instantiation of the main class, I think it is not possible to do that, and implementing this simple pattern is not that hard, take this way using the static constructor and private set: public class MyClass { private MyClass() { } static MyClass() { Instance = new MyClass(); } public static MyClass Instance { get; private set; } } OR: public class MyClass { private MyClass() { } static MyClass() { Instance = new MyClass(); } private static MyClass instance; public static MyClass Instance { get { return instance; } private set { instance = value; } } } A: This works for me: public static class Singleton<T> { private static readonly object Sync = new object(); public static T GetSingleton(ref T singletonMember, Func<T> initializer) { if (singletonMember == null) { lock (Sync) { if (singletonMember == null) singletonMember = initializer(); } } return singletonMember; } } Usage: private static MyType _current; public static MyType Current = Singleton<MyType>.GetSingleton(ref _current, () => new MyType()); Consume the singleton: MyType.Current. ... A: No Matter which exmaple you choose, always check for concurrency using Parallel.For! ( loop in which iterations may run in parallel) put in Singleton C'tor : private Singleton () { Console.WriteLine("usage of the Singleton for the first time"); } put in Main : Parallel.For(0, 10, index => { Thread tt = new Thread(new ThreadStart(Singleton.Instance.SomePrintMethod)); tt.Start(); }); A: In many solutions today, people use service lifetime of singleton with dependency injection, as .NET offers this out of the box. If you still want to create a generic singleton pattern in your code where you might also consider initializing the type T to a initialized singleton object, 'settable once' and thread safe, here is a possible way to do it. public sealed class Singleton<T> where T : class, new() { private static Lazy<T> InstanceProxy { get { if (_instanceObj?.IsValueCreated != true) { _instanceObj = new Lazy<T>(() => new T()); } return _instanceObj; } } private static Lazy<T>? _instanceObj; public static T Instance { get { return InstanceProxy.Value; } } public static void Init(Lazy<T> instance) { if (_instanceObj?.IsValueCreated == true) { throw new ArgumentException($"A Singleton for the type <T> is already set"); } _instanceObj = instance ?? throw new ArgumentNullException(nameof(instance)); } private Singleton() { } } The class is sealed and with a private constructor, it accepts types which are classes and must offer a public parameterless constructor 'new'. It uses the Lazy to achieve built in thread safety. You can init also the type T Singleton object for convenience. It is only allowed if a Singleton is not first set. 
Obviously, you should only init a Singleton of type T early on in your program, such as when the application or service / API starts up. The code will throw an ArgumentException if the Init method is called twice or more times for the type T. You can use it like this : Some model class : public class Aeroplane { public string? Model { get; set; } public string? Manufacturer { get; set; } public int YearBuilt { get; set; } public int PassengerCount { get; set; } } Usage sample : var aeroplane = new Aeroplane { Manufacturer = "Boeing", Model = "747", PassengerCount = 350, YearBuilt = 2005 }; var aeroPlane3 = Singleton<Aeroplane>.Instance; var aeroPlane4 = Singleton<Aeroplane>.Instance; Console.WriteLine($"Aeroplane3 and aeroplane4 is same object? {Object.ReferenceEquals(aeroPlane3, aeroPlane4)}"); Outputs 'true'. Trying to re-init type T Singleton to another object fails : var aeroplane2 = new Aeroplane { Manufacturer = "Sopwith Aviation Company", Model = "Sophwith Camel", PassengerCount = 1, YearBuilt = 1917 }; Singleton<Aeroplane>.Init(new Lazy<Aeroplane>(aeroplane2)); You can of course just access the Singleton with initing it - it will call the default public constructor. Possible you could have a way of setting a custom constructor here instead of passing an object as a sort of 'factory pattern'. var aeroplaneDefaultInstantiated = Singleton<Aeroplane>.Instance; Default instantiation - calls the parameterless public constructor of type T. A: You don't need all that, C# already has a good singleton pattern built-in. static class Foo If you need anything more interesting than that, chances are your new singleton is going to be just different enough that your generic pattern is going to be useless. EDIT: By "anything more interesting" I'm including inheritance. If you can inherit from a singleton, it isn't a singleton any more.
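As the answer above about dependency injection notes, newer .NET code often sidesteps hand-rolled singletons entirely and lets the container own the lifetime. A minimal sketch, assuming the Microsoft.Extensions.DependencyInjection package (the IClock/SystemClock types are only examples):

using System;
using Microsoft.Extensions.DependencyInjection;

public interface IClock
{
    DateTime UtcNow { get; }
}

public class SystemClock : IClock
{
    public DateTime UtcNow { get { return DateTime.UtcNow; } }
}

public static class CompositionRoot
{
    public static void Demo()
    {
        var services = new ServiceCollection();
        services.AddSingleton<IClock, SystemClock>(); // one instance per container

        using (var provider = services.BuildServiceProvider())
        {
            var first = provider.GetRequiredService<IClock>();
            var second = provider.GetRequiredService<IClock>();
            Console.WriteLine(ReferenceEquals(first, second)); // True
        }
    }
}

The "singleton-ness" then lives in the registration rather than in the class itself, which also keeps the type inheritable and testable.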
{ "language": "en", "url": "https://stackoverflow.com/questions/100081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: What do you consider the best CMS in Java Which CMS are you using in Java and what is your experience with it (in terms of extensibility, usage comfort, framework API, memory usage, performance etc.). I am looking for suggestions. Specifically any one that supports a search engine(probably lucene or similar). A: We are using opencms. I haven't tried to extend it beyond changing page templates (writing JSPs) but where usage is concerned it gets the job done, albeit the feeling you get is that you fight the system all the way through. Memory consumption on the JVM running opencms is right now 161 Mb, the JVM running since January 2008. This is on a low traffic site serving around 6000 hits per month with an average traffic of 1800 Mb per month. A: It very much depends on your requirements. For instance, Apache Lenya is very complete, but that also makes it large and more complicated. If you don't require most of their functionality, you may be better of with a smaller cms. A: the part "what do you use" is easy to answer, but as "Confusion" already said - the rest depends upon your needs: We're starting to use liferay, which is basically a portal server coming with cms portlets. In terms of extensibility: It uses the portlet api usage comfort: Well... it didn't hinder us using it. framework API: Having the portlet api as the api, this was more appealing than (e.g.) OpenCMS which has its own API. memory usage: No hard knowledge yet, but for our needs we don't expect bad things from any cms available. performance: Same as Memory. If you want to know, what you should use, please ask more specific questions. If you are interested in a list of systems, please refer to http://en.wikipedia.org/wiki/List_of_Content_Management_Systems or http://en.wikipedia.org/wiki/Content_management_framework and filter out the java ones. A: I used Magnolia and found it very clean and customizable. A: take a look at alfresco and day.com A: Here is the comparison. Jahia is also worth checking.
{ "language": "en", "url": "https://stackoverflow.com/questions/100089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to have silverlight get its data from MySQL I've written a small hello world test app in Silverlight which i want to host on a Linux/Apache2 server. I want the data to come from MySQL (or some other linux compatible db) so that I can databind to things in the db. I've managed to get it working by using the MySQL Connector/.NET: MySqlConnection conn = new MySqlConnection("Server=the.server.com;Database=theDb;User=myUser;Password=myPassword;"); conn.Open(); MySqlCommand command = new MySqlCommand("SELECT * FROM test;", conn); using (MySqlDataReader reader = command.ExecuteReader()) { StringBuilder sb = new StringBuilder(); while (reader.Read()) { sb.AppendLine(reader.GetString("myColumn")); } this.txtResults.Text = sb.ToString(); } This works fine if I give the published ClickOnce app full trust (or at least SocketPermission) and run it locally. I want this to run on the server and I can't get it to work, always ending up with permission exception (SocketPermission is not allowed). The database is hosted on the same server as the silverlight app if that makes any difference. EDIT Ok, I now understand why it's a bad idea to have db credentials in the client app (obviously). How do people do this then? How do you secure the proxy web service so that it relays data to and from the client/db in a secure way? Are there any examples out there on the web? Surely, I cannot be the first person who'd like to use a database to power a silverlight application? A: While the "official" answer is to use WCF to push a service to Silverlight, I kind of figure that anyone using MySQL would probably not be using a complete ASP.NET solution. My solution was to build a PHP webservice (like Rob suggested) to interact with the MySQL database and have the Silverlight access it in a RESTful manner. Here is beginning of a three part tutorial for using Silverlight to access a MySQL database through a PHP web service: PHP, MySQL and Silverlight: The Complete Tutorial A: The easiest way to do what you want (having read through your edits now :)) will be to expose services that can be consumed. The pattern that Microsoft is REALLY pushing right now is to expose WCF services, but the truth is that your Silverlight client can use WCF to consume a lot of different types of services. What may be easiest for you to do right now would be to use a .NET service on a web server or maybe a PHP REST service, and then point your Silverlight app at that service. By doing so, you're protecting your database not only from people snooping through it, but more importantly, you're restricting what people can do to your database. If your data is supposed to be read-only, and your service's contract only allows reading operations, you're set. Alternatively, your service may negotiate sessions with credentials, again, set up through WCF. WCF can be a client-only, server-only, or client-server connector platform. What you choose will affect the code you write, but it's all going to be independent of your database. Your code can be structured such that it's a one-to-one mapping to your database table, or it can be far more abstract (you can set up classes that represent full logical views if you choose). A: Silverlight does not have any capability to directly access database servers. What you can do is to expose your database operations through web services (ASMX or WCF, even non-.NET!) and use Silverlight to access those services. A: I just got this working; ASP.NET4 site with Silverlight4 content on Linux Ubuntu 10 / Apache2 server. 
Content is developed using Visual Studio 2010. VS2008 should work fine too. Server: * *Setup a Linux server with Apache2 and MySQL, there are tons of guides on this. * *Make sure MySQL is accessible from the development PC and optionally from the Internet. See here for details: Causes of Access-Denied Errors. *Setup the database table structures and add some content for testing later. In our example we assume you have the table 'persons' with the column 'name'. *Since Silverlight is a client-side technology you are pretty much good-to-go and can host the application with a simple HTML page. *A web service is required between Silverlight and MySQL. Microsoft's WCF RIA is one flavor, but requires .NET. On the plus-side, you get to host ASP.NET4 pages as well. Here is a thorough guide to setting it up: Setting up Mono 2.8 with Asp.Net 4.0 and MVC2 on Ubuntu with MySql Membership Visual Studio: * *Install latest MySQL Connector/Net and restart VS *Add your MySQL database as data source * *Open Server Explorer -> Add data connection -> Select 'MySQL Database' *Fill in credentials and test connection Setting up the site with MySQL access: Here is a guide I found helpful: Step By Step Guide to WCF RIA enabled SL4 application with Entity Framework * *Create or open a Silverlight project. * *The server-side project is typically named 'ProjectName.Web' *The client-side project is typically named 'ProjectName' *Add 'ADO.NET Entity Data Model' to the server project. This will be a model of your database structure. * *Select 'Generate from database' *Choose the MySQL database connection you created *Select the tables you want to access *Build your solution now before proceeding. *Add 'Domain Service Class' to the server project, f.ex. 'FooDomain'. This will make the database entities available to the client-side Silverlight code. * *In 'Available DataContext/ObjectContext classes:' select the Entity Framework model you created in the previous step. *Check the entities you want to access and check 'Enable editing' where appropriate *Check 'Generate associated classes for metadata' *Build your solution again to generate 'FooDomainContext', based on 'FooDomain' in server project. Testing: Let's get data from MySQL into Silverlight. Assuming there is a table named 'persons' with column name 'name', we can bind a list box to show the names of the persons. First add a Silverlight page, let's say 'Home'. In Home.xaml add: <ListBox x:Name="TestList" Width="100" /> In Home.xaml.cs file add: public partial class Home : Page { public Home() { InitializeComponent(); Loaded += Home_Loaded; } void Home_Loaded(object sender, RoutedEventArgs e) { var context = new FooDomainContext(); var query = context.Load(context.GetPersonsQuery()); TestList.ItemsSource = query.Entities; TestList.DisplayMemberPath = "name"; } } Here we assume you named your domain service 'FooDomain', and this would generate the 'FooDomainContext' class used. Hopefully, if all is set up properly, you will now see a list of person names when running your Silverlight project. Edit: ASP.NET is not optional, but required for the WCF RIA web service used in my example. A: Having DB connections directly to the server from the client side is usually a bad idea. I don't know how easy it is to decompile a Silverlight app, but I would guess it's possible in some way. Then you're basically giving away your DB credentials to your users. A: You can get data from MySQL by using Web Services. 
Walkthrough: Step 1: Create Web Services Step 2: Add Service Reference to Silverlight Step 1: Create Web Services Add a new Silverlight project. Create a new Web Service. Right click on the web project > Add > New Item Select "Web Service". Initial code of a new Web Service. using System; using System.Collections.Generic; using System.Web; using System.Web.Services; namespace SilverlightApplication1.Web { /// <summary> /// Summary description for WebService1 /// </summary> [WebService(Namespace = "http://tempuri.org/")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] [System.ComponentModel.ToolboxItem(false)] public class WebService1 : System.Web.Services.WebService { [WebMethod] public string HelloWorld() { return "Hello World"; } } } In order for the Web Service able to connect to MySQL, we need to add a reference of MySql.Data.DLL into the web project and add the Using statement at top of the Web Service class: using MySql.Data.MySqlClient; HelloWorld() is an initial sample method created by Visual Studio. You may want to delete it as it is not needed. I'm going to create 2 simple method to demonstrate how Web Services are used to communicate between SilverLight and MySQL. First method: ExecuteScalar() This method is simple. Get a single object from MySQL. public string ExecuteScalar(string sql) { try { string result = ""; using (MySqlConnection conn = new MySqlConnection(constr)) { using (MySqlCommand cmd = new MySqlCommand()) { conn.Open(); cmd.Connection = conn; cmd.CommandText = sql; result = cmd.ExecuteScalar() + ""; conn.Close(); } } return result; } catch (Exception ex) { return ex.Message; } } Second method: ExecuteNonQuery() For single SQL execution. Example of SQL type: INSERT, UPDATE, DELETE. public string ExecuteNonQuery(string sql) { try { long i = 0; using (MySqlConnection conn = new MySqlConnection(constr)) { using (MySqlCommand cmd = new MySqlCommand()) { conn.Open(); cmd.Connection = conn; cmd.CommandText = sql; i = cmd.ExecuteNonQuery(); conn.Close(); } } return i + " row(s) affected by the last command, no resultset returned."; } catch (Exception ex) { return ex.Message; } } This is how the Web Service looks like after adding the two methods above: using System; using System.Collections.Generic; using System.Web; using System.Web.Services; using MySql.Data.MySqlClient; namespace SilverlightApplication1.Web { [WebService(Namespace = "http://tempuri.org/")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] [System.ComponentModel.ToolboxItem(false)] public class WebService1 : System.Web.Services.WebService { string constr = "server=localhost;user=root;pwd=1234;database=test;"; [WebMethod] public string ExecuteScalar(string sql) { try { string result = ""; using (MySqlConnection conn = new MySqlConnection(constr)) { using (MySqlCommand cmd = new MySqlCommand()) { conn.Open(); cmd.Connection = conn; cmd.CommandText = sql; result = cmd.ExecuteScalar() + ""; conn.Close(); } } return result; } catch (Exception ex) { return ex.Message; } } [WebMethod] public string ExecuteNonQuery(string sql) { try { long i = 0; using (MySqlConnection conn = new MySqlConnection(constr)) { using (MySqlCommand cmd = new MySqlCommand()) { conn.Open(); cmd.Connection = conn; cmd.CommandText = sql; i = cmd.ExecuteNonQuery(); conn.Close(); } } return i + " row(s) affected by the last command, no resultset returned."; } catch (Exception ex) { return ex.Message; } } } } You will notice that an attribute of [WebMethod] is added to the methods. 
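One caveat before moving on: ExecuteScalar and ExecuteNonQuery above accept a raw SQL string straight from the Silverlight client, which is exactly the exposure the earlier answers warn about - anyone who can reach the service can run arbitrary SQL. In a real service you would expose narrow, parameterized methods instead. A sketch, reusing the walkthrough's persons/name table and connection string, and assuming the table also has an integer id column:

[WebMethod]
public string GetPersonName(int personId)
{
    try
    {
        using (MySqlConnection conn = new MySqlConnection(constr))
        {
            using (MySqlCommand cmd = new MySqlCommand(
                "SELECT name FROM persons WHERE id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", personId); // parameterized, never concatenated
                conn.Open();
                return cmd.ExecuteScalar() + "";
            }
        }
    }
    catch (Exception ex)
    {
        return ex.Message;
    }
}

The rest of the walkthrough (access policy files, adding the service reference) applies unchanged.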
Rebuild the project and let the Web Service be ready for next step. Web Service Access Permission Please note that, by default, Web Service only allow those Silverlight that is hosted at the same domain with the Web Service to access. If the Silverlight application is hosted on another website/domain, Web Service will deny the communication. Therefore we have to configure the permission for the Web Service to be accessed by Silverlight which is hosted at different domain. You have to create two additional files: clientaccesspolicy.xml and crossdomain.xml. These files has to be put at the root of the domain where the Web Services are hosted. Example: http://www.mywebsite.com/clientaccesspolicy.xml and http://www.mywebsite.com/crossdomain.xml clientaccesspolicy.xml <?xml version="1.0" encoding="utf-8"?> <access-policy> <cross-domain-access> <policy> <allow-from http-request-headers="SOAPAction"> <domain uri="*"/> </allow-from> <grant-to> <resource path="/" include-subpaths="true"/> </grant-to> </policy> </cross-domain-access> </access-policy> If you only want to allow the Web Service to be accessed by specific domain (example: www.myanotherwebsite.com), you can add it within . Example: <?xml version="1.0" encoding="utf-8"?> <access-policy> <cross-domain-access> <policy> <allow-from http-request-headers="SOAPAction"> <domain uri="http://www.myanotherwebsite.com"/> </allow-from> <grant-to> <resource path="/" include-subpaths="true"/> </grant-to> </policy> </cross-domain-access> </access-policy> crossdomain.xml <?xml version="1.0" encoding="utf-8" ?> <!DOCTYPE cross-domain-policy SYSTEM "http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd"> <cross-domain-policy> <allow-http-request-headers-from domain="*" headers="SOAPAction,Content-Type"/> </cross-domain-policy> To understand more about this, please read: Making a Service Available Across Domain Boundaries (MSDN) Step 2: Add Service Reference to Silverlight Add a Service Reference to Silverlight. Type the address of the Web Service and press [Go]. Example of address: http://www.mywebsite.com/MyCoolWebService.asmx Change the Namespace to your favor, and press [OK]. Visual Studio will analyze the Web Service, do the data binding and create a class. Before continue coding, let's us see what methods that we can use in the new created class. Right click the new class and select [View in Object Browser]. The class that we are going to use is WebService1SoapClient (in this example). The naming is based on the Service name. If we name our service class as MyCoolWebService, then MyCoolWebServiceSoapClient will be chosen as the name of the class in Silverlight. At the right panel, two methods and two events are highlighted. Those are the methods used to call the Web Services. Lets create a simple Silverlight application by adding a Textbox and two Buttons. In this example, user will key in SQL query directly into the Textbox. Button of [ExecuteScalar] will send the SQL to the Web Service and retrieve data back. (SELECT, SHOW, etc.) Button of [ExecuteNonQuery] will send the SQL to the Web Service for execution only. (INSERT, UPDATE, DELETE, etc.) 
This is the initial code-behind of MainPage.xaml: using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Shapes; namespace SilverlightApplication1 { public partial class MainPage : UserControl { public MainPage() { InitializeComponent(); } private void btExecuteScalar_Click(object sender, RoutedEventArgs e) { } private void btExecuteNonQuery_Click(object sender, RoutedEventArgs e) { } } } Now, this is what we are going to do here: * *Declare the service client as a field at class level: ServiceReference1.WebService1SoapClient *Handle the completed events of the two service methods. *Call the service in the button click events. *Display the service result: MessageBox.Show() public partial class MainPage : UserControl { ServiceReference1.WebService1SoapClient myService; public MainPage() { InitializeComponent(); myService = new ServiceReference1.WebService1SoapClient(); myService.ExecuteScalarCompleted += myService_ExecuteScalarCompleted; myService.ExecuteNonQueryCompleted += myService_ExecuteNonQueryCompleted; } void myService_ExecuteNonQueryCompleted(object sender, ServiceReference1.ExecuteNonQueryCompletedEventArgs e) { MessageBox.Show(e.Result); } void myService_ExecuteScalarCompleted(object sender, ServiceReference1.ExecuteScalarCompletedEventArgs e) { MessageBox.Show(e.Result); } private void btExecuteScalar_Click(object sender, RoutedEventArgs e) { myService.ExecuteScalarAsync(textBox1.Text); } private void btExecuteNonQuery_Click(object sender, RoutedEventArgs e) { myService.ExecuteNonQueryAsync(textBox1.Text); } } Press [F5], run and test the Silverlight application. Together with your creativity, I believe you can do something more than this for now. :) If you have made any changes to the Web Service, for example added new web methods, you have to update the Service Reference in Silverlight to re-bind the services. You might want to update the Web Service address if you uploaded the files to a different web host. Happy coding. Read More: * *Original Post - Connecting MySQL From SilverLight With Web Services - CodeProject.com (written by me) *Access a Web Service from a Silverlight Application *HOW TO: Write a Simple Web Service by Using Visual C# .NET *How to: Build a Service for Silverlight Clients
{ "language": "en", "url": "https://stackoverflow.com/questions/100104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Obfuscation Puzzle: Can you figure out what this Perl function does? sub foo {[$#{$_[!$||$|]}*@{$_[!!$_^!$_]}?@{$_[!$..!!$.]}[$_[@--@+]% @{$_[$==~/(?=)//!$`]}..$#{$_[$??!!$?:!$?]},($)?!$):!!$))..$_[$--$-]%@{ $_[$]/$]]}-(!!$++!$+)]:@{$_[!!$^^^!$^^]}]} update: I thought the word "puzzle" would imply this, but: I know what it does - I wrote it. If the puzzle doesn't interest you, please don't waste any time on it. A: I found this command helpful, when working on my other answer. perl -MO=Concise,foo,-terse,-compact obpuz.pl > obpuz.out B::Concise A: It takes two arrayrefs and returns a new arrayref with the contents of the second array rearranged such that the second part comes before the first part, split at a point based on the memory location of the first array. When the second array is empty or contains one item, just returns a copy of the second array. Equivalent to the following: sub foo { my ($list1, $list2) = @_; my @output; if (@$list2 > 0) { my $split = $list1 % @$list2; @output = @$list2[$split .. $#$list2, 0 .. ($split - 1)]; } else { @output = @$list2; } return \@output; } $list1 % @$list2 essentially picks a random place to split the array, based on $list which evaluates to the memory address of $list when evaluated in a numeric context. The original mostly uses a lot of tautologies involving punctuation variables to obfuscate. e.g. * *!$| | $| is always 1 *@- - @+ is always 0 Updated to note that perltidy was very helpful deciphering here, but it choked on !!$^^^!$^^, which it reformats to !!$^ ^ ^ !$^ ^, which is invalid Perl; it should be !!$^^ ^ !$^^. This might be the cause of RWendi's compile error. A: Here is how you figure out how to de-obfuscate this subroutine. Sorry for the length First let's tidy up the code, and add useful comments. sub foo { [ ( # ($#{$_[1]}) $#{ $_[ ! ( $| | $| ) # $OUTPUT_AUTOFLUSH === $| # $| is usually 0 # ! ( $| | $| ) # ! ( 0 | 0 ) # ! ( 0 ) # 1 ] } * # @{$_[1]} @{ $_[ !!$_ ^ !$_ # !! 1 ^ ! 1 # ! 0 ^ 0 # 1 ^ 0 # 1 # !! 0 ^ ! 0 # ! 1 ^ 1 # 0 ^ 1 # 1 ] } ) ? # @{$_[1]} @{ $_[ !$. . !!$. # $INPUT_LINE_NUMBER === $. # $. starts at 1 # !$. . !!$. # ! 1 . !! 1 # 0 . ! 0 # 0 . 1 # 01 ] } [ # $_[0] $_[ # @LAST_MATCH_START - @LAST_MATCH_END # 0 @- - @+ ] % # @{$_[1]} @{ $_[ $= =~ /(?=)/ / !$` #( fix highlighting )`/ # $= is usually 60 # /(?=)/ will match, returns 1 # $` will be '' # 1 / ! '' # 1 / ! 0 # 1 / 1 # 1 ] } .. # $#{$_[1]} $#{ $_[ $? ? !!$? : !$? # $CHILD_ERROR === $? # $? ? !!$? : !$? # 0 ? !! 0 : ! 0 # 0 ? 0 : 1 # 1 # 1 ? !! 1 : ! 1 # 1 ? 1 : 0 # 1 ] } , # ( 0 ) ( $) ? !$) : !!$) # $EFFECTIVE_GROUP_ID === $) # $) ? !$) : !!$) # 0 ? ! 0 : !! 0 # 0 ? 1 : 0 # 0 # 1 ? ! 1 : !! 1 # 1 ? 0 : 1 # 0 ) .. # $_[0] $_[ $- - $- # 0 # $LAST_PAREN_MATCH = $- # 1 - 1 == 0 # 5 - 5 == 0 ] % # @{$_[1]} @{ $_[ $] / $] # $] === The version + patchlevel / 1000 of the Perl interpreter. # 1 / 1 == 1 # 5 / 5 == 1 ] } - # ( 1 ) ( !!$+ + !$+ # !! 1 + ! 1 # ! 0 + 0 # 1 + 0 # 1 ) ] : # @{$_[1]} @{ $_[ !!$^^ ^ !$^^ # !! 1 ^ ! 1 # ! 0 ^ 0 # 1 ^ 0 # 1 # !! 0 ^ ! 0 # ! 1 ^ 1 # 0 ^ 1 # 1 ] } ] } Now let's remove some of the obfuscation. sub foo{ [ ( $#{$_[1]} * @{$_[1]} ) ? @{$_[1]}[ ( $_[0] % @{$_[1]} ) .. $#{$_[1]} , 0 .. ( $_[0] % @{$_[1]} - 1 ) ] : @{$_[1]} ] } Now that we have some idea of what is going on, let's name the variables. sub foo{ my( $item_0, $arr_1 ) = @_; my $len_1 = @$arr_1; [ # This essentially just checks that the length of $arr_1 is greater than 1 ( ( $len_1 -1 ) * $len_1 ) # ( ( $len_1 -1 ) * $len_1 ) # ( ( 5 -1 ) * 5 ) # 4 * 5 # 20 # 20 ? 
1 : 0 == 1 # ( ( $len_1 -1 ) * $len_1 ) # ( ( 2 -1 ) * 2 ) # 1 * 2 # 2 # 2 ? 1 : 0 == 1 # ( ( $len_1 -1 ) * $len_1 ) # ( ( 1 -1 ) * 1 ) # 0 * 1 # 0 # 0 ? 1 : 0 == 0 # ( ( $len_1 -1 ) * $len_1 ) # ( ( 0 -1 ) * 0 ) # -1 * 0 # 0 # 0 ? 1 : 0 == 0 ? @{$arr_1}[ ( $item_0 % $len_1 ) .. ( $len_1 -1 ), 0 .. ( $item_0 % $len_1 - 1 ) ] : # If we get here, @$arr_1 is either empty or has only one element @$arr_1 ] } Let's refactor the code to make it a little bit more readable. sub foo{ my( $item_0, $arr_1 ) = @_; my $len_1 = @$arr_1; if( $len_1 > 1 ){ return [ @{$arr_1}[ ( $item_0 % $len_1 ) .. ( $len_1 -1 ), 0 .. ( $item_0 % $len_1 - 1 ) ] ]; }elsif( $len_1 ){ return [ @$arr_1 ]; }else{ return []; } }
{ "language": "en", "url": "https://stackoverflow.com/questions/100106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Causes of getting a java.lang.VerifyError I'm investigating the following java.lang.VerifyError java.lang.VerifyError: (class: be/post/ehr/wfm/application/serviceorganization/report/DisplayReportServlet, method: getMonthData signature: (IILjava/util/Collection;Ljava/util/Collection;Ljava/util/HashMap;Ljava/util/Collection;Ljava/util/Locale;Lorg/apache/struts/util/MessageRe˜̴MtÌ´MÚw€mçw€mp:”MŒŒ at java.lang.Class.getDeclaredConstructors0(Native Method) at java.lang.Class.privateGetDeclaredConstructors(Class.java:2357) at java.lang.Class.getConstructor0(Class.java:2671) It occurs when the jboss server in which the servlet is deployed is started. It is compiled with jdk-1.5.0_11 and I tried to recompile it with jdk-1.5.0_15 without succes. That is the compilation runs fine but when deployed, the java.lang.VerifyError occurs. When I changed the method name and got the following error: java.lang.VerifyError: (class: be/post/ehr/wfm/application/serviceorganization/report/DisplayReportServlet, method: getMD signature: (IILjava/util/Collection;Lj ava/util/Collection;Ljava/util/HashMap;Ljava/util/Collection;Ljava/util/Locale;Lorg/apache/struts/util/MessageResources┬á├ÿ├àN|├ÿ├àN├Üw┬Çm├ºw┬ÇmX#├ûM|X├öM at java.lang.Class.getDeclaredConstructors0(Native Method) at java.lang.Class.privateGetDeclaredConstructors(Class.java:2357 at java.lang.Class.getConstructor0(Class.java:2671) at java.lang.Class.newInstance0(Class.java:321) at java.lang.Class.newInstance(Class.java:303) You can see that more of the method signature is shown. The actual method signature is private PgasePdfTable getMonthData(int month, int year, Collection dayTypes, Collection calendarDays, HashMap bcSpecialDays, Collection activityPeriods, Locale locale, MessageResources resources) throws Exception { I already tried looking at it with javap and that gives the method signature as it should be. When my other colleagues check out the code, compile it and deploy it, they have the same problem. When the build server picks up the code and deploys it on development or testing environments (HPUX), the same error occurs. Also an automated testing machine running Ubuntu shows the same error during server startup. The rest of the application runs okay, only that one servlet is out of order. Any ideas where to look would be helpful. A: One thing you might try is using -Xverify:all which will verify bytecode on load and sometimes gives helpful error messages if the bytecode is invalid. A: VerifyError means that the class file contains bytecode that is syntactically correct but violates some semantic restriction e.g. a jump target that crosses method boundaries. Basically, a VerifyError can only occur when there is a compiler bug, or when the class file gets corrupted in some other way (e.g. through faulty RAM or a failing HD). Try compiling with a different JDK version and on a different machine. A: I fixed this error on Android by making the project I was importing a library, as described here http://developer.android.com/tools/projects/projects-eclipse.html#SettingUpLibraryProject Previously, I was just referencing the project (not making it a library) and I was getting this strange VerifyError. Hope it helps someone. A: In my case my Android project depends on another Java project compiled for Java 7. 
java.lang.VerifyError disappeared after I changed the Compiler Compliance Level of that Java project to 6.0 Later I found out that this is a Dalvik issue: https://groups.google.com/forum/?fromgroups#!topic/android-developers/sKsMTZ42pwE A: I was getting this problem due to pack200 mangling a class file. A bit of searching turned this java bug up. Basically, setting --effort=4 caused the problem to go away. Using java 1.5.0_17 (though it cropped up in every single variant of java 1.5 I tried it in). A: I have fixed a similar java.lang.VerifyError issue by replacing catch (MagickException e) with catch (Exception e) where MagickException was defined in a library project (on which my project has a dependency). After that I got a java.lang.NoClassDefFoundError about a class from the same library (fixed according to https://stackoverflow.com/a/9898820/755804 ). A: This can happen on Android when you're trying to load a library that was compiled against Oracle's JDK. Here is the problem for Ning Async HTTP client. A: java.lang.VerifyErrors are the worst. You would get this error if the bytecode size of your method exceeds the 64kb limit; but you would probably have noticed that. Are you 100% sure this class isn't present in the classpath elsewhere in your application, maybe in another jar? Also, from your stacktrace, is the character encoding of the source file (utf-8?) correct? A: java.lang.VerifyError can be the result when you have compiled against a different library than you are using at runtime. For example, this happened to me when trying to run a program that was compiled against Xerces 1, but Xerces 2 was found on the classpath. The required classes (in the org.apache.* namespace) were found at runtime, so ClassNotFoundException was not the result. There had been changes to the classes and methods, so that the method signatures found at runtime did not match what was there at compile-time. Normally, the compiler will flag problems where method signatures do not match. The JVM will verify the bytecode again when the class is loaded, and throws VerifyError when the bytecode is trying to do something that should not be allowed -- e.g. calling a method that returns String and then storing that return value in a field that holds a List. A: In my case I had to remove this block: compileOptions { sourceCompatibility JavaVersion.VERSION_1_7 targetCompatibility JavaVersion.VERSION_1_7 } It was showing an error near the Fragment.showDialog() method call. A: Minimal example that generates the error One simple possibility is to use Jasmin, or to manually edit the bytecode with a binary file editor. Let's create a void method without a return instruction (generated by the return; statement in Java), which the JVMS says is illegal. In Jasmin we could write: .class public Main .super java/lang/Object .method public static main([Ljava/lang/String;)V aload_0 ; Just so that we won't get another verify error for empty code. .end method We then assemble it with Jasmin (java -jar jasmin.jar Main.j) and javap -v Main says that we have compiled: public static void main(java.lang.String[]); descriptor: ([Ljava/lang/String;)V flags: ACC_PUBLIC, ACC_STATIC Code: stack=1, locals=1, args_size=1 0: aload_0 so really there is no return instruction. 
Now if we try to run java Main we get: Error: A JNI error has occurred, please check your installation and try again Exception in thread "main" java.lang.VerifyError: (class: NoReturn, method: main signature: ([Ljava/lang/String;)V) Falling off the end of the code at java.lang.Class.getDeclaredMethods0(Native Method) at java.lang.Class.privateGetDeclaredMethods(Class.java:2701) at java.lang.Class.privateGetMethodRecursive(Class.java:3048) at java.lang.Class.getMethod0(Class.java:3018) at java.lang.Class.getMethod(Class.java:1784) at sun.launcher.LauncherHelper.validateMainClass(LauncherHelper.java:544) at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:526) This error can never happen in Java normally, since the Java compiler adds an implicit return to void methods for us. This is why we don't need to add a return to our main methods. You can check this with javap. JVMS VerifyError happens when you try to run certain types of illegal class file as specified by JVMS 7 chapter 4.5 The JVMS says that when Java loads a file, it must run a series of checks to see that the class file is OK before running it. Such errors cannot be generated on a single compile and run cycle of Java code, because JVMS 7 4.10 says: Even though a compiler for the Java programming language must only produce class files that satisfy all the static and structural constraints [... ] So to see a minimal failure example, we will need to generate the class file without javac. A: As Kevin Panko said, it's mostly because of a library change. So in some cases a "clean" of the project (directory) followed by a build does the trick. A: This page may give you some hints - http://www.zanthan.com/itymbi/archives/000337.html There may be a subtle bug in the body of that method that javac fails to spot. Difficult to diagnose unless you post the whole method here. You could start by declaring as many variables as possible as final... that would have caught the bug mentioned on the zanthan site, and is often a good practice anyways. A: Well in my case, my project A had a dependency on another, say X (A was using some of the classes defined in X). So when I added X as a reference project in the build path of A, I got this error. However when I removed X as the referenced project and included X's jar as one of the libraries, the problem was solved. A: Check for multiple versions of the same jar file on your classpath. For example, I had opennlp-tools-1.3.0.jar and opennlp-tools-1.5.3.jar on my classpath and got this error. The solution was to delete opennlp-tools-1.3.0.jar. A: Another reason for this error can be the combination of AspectJ <= 1.6.11 with JRE > 6. See Eclipse Bug 353467 and Kieker ticket 307 for details. This is especially true when everything works fine on JRE 6 and moving to JRE7 breaks things. A: CGLIB < 2.2 with JRE > 6 could trigger similar errors, see "Should I upgrade to CGLIB 3.0?" and some commentary at Spring SPR-9669. This is especially true when everything works fine on JRE 6 and simply switching to JRE7 breaks things. A: It could also happen when you have a lot of module imports with maven. There will be two or more classes having exactly the same name (same qualified name). This error results from a difference of interpretation between compile time and runtime. A: If you are migrating to Java 7 or already using Java 7, this error can generally be seen. 
I faced the above errors and struggled a lot to find out the root cause; I would suggest trying to add the "-XX:-UseSplitVerifier" JVM argument when running your application. A: After updating Gradle in Android Studio 3.6.1, crashes happened on API 19 in the release build. There was a Glide library error. The solution is to rewrite proguard-rules.txt. Also downgrading Gradle works (classpath 'com.android.tools.build:gradle:3.5.3'), but it is an outdated solution, don't use it. A: Though the reason mentioned by Kevin is correct, I would definitely check the points below before moving to something else: * *Check the cglibs in my classpath. *Check the hibernate versions in my classpath. Chances are good that having multiple or conflicting versions of any of the above could cause unexpected issues like the one in question. A: java.lang.VerifyError means your compiled bytecode is referring to something that Android cannot find. This VerifyError hit me only on KitKat 4.4 and lower versions, not on higher ones, even though I ran the same build on both devices. When I used an older version of the Jackson JSON parser it showed java.lang.VerifyError: compile 'com.fasterxml.jackson.core:jackson-databind:2.2.+' compile 'com.fasterxml.jackson.core:jackson-core:2.2.+' compile 'com.fasterxml.jackson.core:jackson-annotations:2.2.+' Then I changed the dependency from version 2.2 to the latest 2.7 without the core library, and it worked, which means the methods and other contents of core have been migrated into the latest version of databind 2.7. This fixed my issue. compile 'com.fasterxml.jackson.core:jackson-annotations:2.7.0-rc3' compile 'com.fasterxml.jackson.core:jackson-databind:2.7.0-rc3' A: Please remove any unused jar files and try to run; that worked for me. I had added a jcommons jar file and also another jcommons.1.0.14 jar file, so removing the duplicate jcommons got it working for me. A: In the module descriptor under {Wildfly-home}\modules\system\layers\base\org\picketbox\main, add the following to the dependencies: <module name="sun.jdk"/> A: In my case, I was getting the verify error with the stack trace below: jasperreports-server-cp-6.4.0-bin\buildomatic\build.xml:61: The following error occurred while executing this line: TIB_js-jrs-cp_6.4.0_bin\jasperreports-server-cp-6.4.0-bin\buildomatic\bin\setup.xml:320: java.lang.VerifyError: (class: org/apache/commons/codec/binary/Base64OutputStream, method: <init> signature: (Ljava/io/OutputStream;ZI[B)V) Incompatible argument to function at com.jaspersoft.jasperserver.crypto.KeystoreManager.createKeystore(KeystoreManager.java:257) at com.jaspersoft.jasperserver.crypto.KeystoreManager.init(KeystoreManager.java:224) at com.jaspersoft.buildomatic.crypto.KeystoreTask.execute(KeystoreTask.java:64) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292) at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292) at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at 
org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:435) at org.apache.tools.ant.helper.ProjectHelper2.parse(ProjectHelper2.java:169) at org.apache.tools.ant.taskdefs.ImportTask.importResource(ImportTask.java:222) at org.apache.tools.ant.taskdefs.ImportTask.execute(ImportTask.java:163) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:435) at org.apache.tools.ant.helper.ProjectHelper2.parse(ProjectHelper2.java:180) at org.apache.tools.ant.ProjectHelper.configureProject(ProjectHelper.java:93) at org.apache.tools.ant.Main.runBuild(Main.java:826) at org.apache.tools.ant.Main.startAnt(Main.java:235) at org.apache.tools.ant.launch.Launcher.run(Launcher.java:280) at org.apache.tools.ant.launch.Launcher.main(Launcher.java:109) I got it resolved by removing classpath entry for commons-codec-1.3.jar, there was a mismatch in version of this jar with the one comes with Jasper.
{ "language": "en", "url": "https://stackoverflow.com/questions/100107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "206" }
Q: Application wide keyboard shortcut - Java Swing I would like to create an application wide keyboard shortcut for a Java Swing application. Looping over all components and adding the shortcut on each, has focus related side effects, and seems like a brute force solution. Anyone has a cleaner solution? A: When you have a menu, you can add global keyboard shortcuts to menu items: JMenuItem item = new JMenuItem(action); KeyStroke key = KeyStroke.getKeyStroke( KeyEvent.VK_R, KeyEvent.CTRL_DOWN_MASK); item.setAccelerator(key); menu.add(item); A: For each window, use JComponent.registerKeyboardAction with a condition of WHEN_IN_FOCUSED_WINDOW. Alternatively use: JComponent.getInputMap(WHEN_IN_FOCUSED_WINDOW).put(keyStroke, command); JComponent.getActionMap().put(command,action); as described in the registerKeyboardAction API docs. A: A little simplified example: import java.awt.KeyboardFocusManager; import java.awt.KeyEventDispatcher; import java.awt.event.KeyEvent; KeyboardFocusManager keyManager; keyManager = KeyboardFocusManager.getCurrentKeyboardFocusManager(); keyManager.addKeyEventDispatcher(new KeyEventDispatcher() { @Override public boolean dispatchKeyEvent(KeyEvent e) { if (e.getID() == KeyEvent.KEY_PRESSED && e.getKeyCode() == 27) { System.out.println("Esc"); return true; } return false; } }); A: Install a custom KeyEventDispatcher. The KeyboardFocusManager class is also a good place for this functionality. KeyEventDispatcher A: For people wondering (like me) how to use KeyEventDispatcher, here is an example that I put together. It uses a HashMap for storing all global actions, because I don't like large if (key == ..) then .. else if (key == ..) then .. else if (key ==..) .. constructs. /** map containing all global actions */ private HashMap<KeyStroke, Action> actionMap = new HashMap<KeyStroke, Action>(); /** call this somewhere in your GUI construction */ private void setup() { KeyStroke key1 = KeyStroke.getKeyStroke(KeyEvent.VK_A, KeyEvent.CTRL_DOWN_MASK); actionMap.put(key1, new AbstractAction("action1") { @Override public void actionPerformed(ActionEvent e) { System.out.println("Ctrl-A pressed: " + e); } }); // add more actions.. KeyboardFocusManager kfm = KeyboardFocusManager.getCurrentKeyboardFocusManager(); kfm.addKeyEventDispatcher( new KeyEventDispatcher() { @Override public boolean dispatchKeyEvent(KeyEvent e) { KeyStroke keyStroke = KeyStroke.getKeyStrokeForEvent(e); if ( actionMap.containsKey(keyStroke) ) { final Action a = actionMap.get(keyStroke); final ActionEvent ae = new ActionEvent(e.getSource(), e.getID(), null ); SwingUtilities.invokeLater( new Runnable() { @Override public void run() { a.actionPerformed(ae); } } ); return true; } return false; } }); } The use of SwingUtils.invokeLater() is maybe not necessary, but it is probably a good idea not to block the global event loop. A: Use the following piece of code ActionListener a=new ActionListener(){ public void actionPerformed(ActionEvent ae) { // your code } }; getRootPane().registerKeyboardAction(a,KeyStroke.getKeyStroke("ctrl D"),JComponent.WHEN_IN_FOCUSED_WINDOW); Replace "ctrl D" with the shortcut you want.
{ "language": "en", "url": "https://stackoverflow.com/questions/100123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: Is Eclipse 3.4 (Ganymede) memory usage significantly higher than 3.2? I was happily using Eclipse 3.2 (or as happy as one can be using Eclipse) when for a forgotten reason I decided to upgrade to 3.4. I'm primarily using PyDev, Aptana, and Subclipse, very little Java development. I've noticed 3.4 tends to really give my laptop a hernia compared to 3.2 (vista, core2duo, 2G). Is memory usage on 3.4 actually higher than on 3.2, and if so is there a way to reduce it? EDIT: I tried disabling plugins (I didn't have much enabled anyway) and used the jvm monitor; the latter was interesting but I couldn't figure out how to use the info in any practical way. I'm still not able to reduce its memory footprint. I've also noticed every once in a while Eclipse just hangs for ~30 seconds, then magically comes back. A: Yes, memory usage can get really high and you might run into problems with your JVM, as the default setting is a bit too low. Consider using these startup parameters when running eclipse: -vmargs -XX:MaxPermSize=1024M -Xms256M -Xmx1024M A: With those options, I manage to limit the memory used to 700 MB (which is quite high, but still workable with my 2 GB) -vmargs -Xms128m -Xmx384m -Xss2m -XX:PermSize=128m -XX:MaxPermSize=128m -XX:CompileThreshold=5 -XX:+UseParallelGC -Dcom.sun.management.jmxremote Also consider launching C:\[jdk1.6.0_0x path]\bin\jconsole.exe and choosing 'Connection / New connection / 'eclipse' to monitor the memory used by eclipse (which is why I use '-Dcom.sun.management.jmxremote') Other options are available here. A: The more plugins you have, the more memory Eclipse will consume. 3.4 includes more plugins by default than 3.3, and so on, and so on, as more and more developers clamor for features to be included. Go to Window->Show View, and start typing "plug in", and one of the options will be the Plug In Registry. Open that view, and click on the arrow to show active plugins only. These are the plugins actually loaded into memory. My Eclipse 3.3 currently has 89 out of 445 or so plugins loaded. You can then selectively start disabling plugins from the Help menu, once you see which ones you won't use (right now, for instance, I'm not using Mylyn, but I hope to in the future). A: To add to my previous answer and to your recent update: Eclipse just hangs for ~30 seconds, then magically comes back. That is usually a sign of a failed network access with a timeout (and the associated 'freeze' while the application is waiting for said timeout). Try typing 'net use' in a DOS prompt and check if you have network paths declared there; some of them you could get rid of ('net use /D aUselessPath'). To be sure, check also the shares that you declare (net share). Since you are on Vista, also try deactivating SuperFetch and see if you still experience those freezes (both for eclipse and Firefox). Open a CMD prompt with administrative privileges and enter "net stop superfetch" to stop the SuperFetch service. It is not a good long-term solution though, just a quick check to make. SuperFetch should be kept on, and will actually restart on your next reboot, since the service is set to start automatically at each Windows session. Again, this is just to see if there is any connection between that service and your freezes.
{ "language": "en", "url": "https://stackoverflow.com/questions/100161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is there a way to check if there are symbolic links pointing to a directory? I have a folder on my server to which I had a number of symbolic links pointing. I've since created a new folder and I want to change all those symbolic links to point to the new folder. I'd considered replacing the original folder with a symlink to the new folder, but it seems that if I continued with that practice it could get very messy very fast. What I've been doing is manually changing the symlinks to point to the new folder, but I may have missed a couple. Is there a way to check if there are any symlinks pointing to a particular folder? A: I'd use the find command. find . -lname /particular/folder That will recursively search the current directory for symlinks to /particular/folder. Note that it will only find absolute symlinks. A similar command can be used to search for all symlinks pointing at objects called "folder": find . -lname '*folder' From there you would need to weed out any false positives. A: You can audit symlinks with the symlinks program written by Mark Lord -- it will scan an entire filesystem, normalize symlink paths to absolute form and print them to stdout. A: There isn't really any direct way to check for such symlinks. Consider that you might have a filesystem that isn't mounted all the time (eg. an external USB drive), which could contain symlinks to another volume on the system. You could do something with: for a in `find / -type l`; do echo "$a -> `readlink $a`"; done | grep destfolder I note that FreeBSD's find does not support the -lname option, which is why I ended up with the above. A: find . -type l -printf '%p -> %l\n' A: Apart from looking at all other folders if there are links pointing to the original folder, I don't think it is possible. If it is, I would be interested. A: find / -lname 'fullyqualifiedpathoffile' A: For hardlinks, you can get the inode of your directory with one of the "ls" options (-i, I think). Then a find with -inum will locate all common hardlinks. For softlinks, you may have to do an ls -l on all files looking for the text after "->" and normalizing it to make sure it's an absolute path. A: find /foldername -type l -exec ls -lad {} \; A: To any programmers looking here (cmdline tool questions probably should instead go to unix.stackexchange.com nowadays): You should know that the Linux/BSD function fts_open() gives you an easy-to-use iterator for traversing all sub directory contents while also detecting such symlink recursions. Most command line tools use this function to handle this case for them. Those that don't often have trouble with symlink recursions because doing this "by hand" is difficult (any anyone being aware of it should just use the above function instead).
{ "language": "en", "url": "https://stackoverflow.com/questions/100170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "74" }
Q: What tool/format do you use for writing your specifications? I would like to know what kind of tool you use for writing your specifications. I think it's essential to use a tool that supports some kind of plain text format so that one can control the specification with a source control system like SVN. For the specification, as for the code, it's important to have a history of all changes. At present we write our specification in an XML format. TeX would also be an alternative, but it's hard for people who have never worked with it. So let me know what kind of tools or formats you use for specifications. A: DocBook edited with XXE, translated to pdf with xslt when needed to be sent to clients. Best change ever, so much easier to write, so much easier to merge, and when it's converted it doesn't look so godawfully unprofessional as MSWord. Plus the structured document style is already there, unlike bloody word which you have to fight with to get working. A: We used TeX (MikTeX) and it was perfect because: * *plain text - edit in Vim/Notepad - just everywhere *powerful formatting using predefined macros one of us wrote *one-click generation to PDF The only problem was to get diagrams (from ArgoUML) in. At another project I saw Word templates being used - awful stuff directed from above. I'd consider using something like a wiki/forum on the intranet. Imagine using GoogleDocs - there is versioning, it's online.. but not applicable for commercial development. A: At work a lot of our documents go under Sharepoint or some other document system that really slows down the "release" of a document. This means there are copies of the documents all over the place and getting someone to properly release something is a headache. Due to this I normally received specs in PowerPoint or on scrap paper. So I put up a wiki (MediaWiki) at work that we now keep all project specs in. This allows them to be viewable by anyone in the company and editable by our development group. Sometimes a developer will ask the boss for a clarification as they pass by or whatever, and the developer can update the spec themselves, which I think is a huge advantage. Also, when people update a spec with new information, using the history it is very easy to see what the most recent changes were - meaning I can see what was happening before and what needs to happen now, which I think is a huge advantage. I still keep a spec that was scribbled on some notebook paper up on my wall as a reminder. A: I've come to use DocBook for all such things. It's easy, flexible, and will generate html, tex (and thus pdf), etc. A: Microsoft Word. I know it doesn't meet your requirements but in every job I've had I've used Microsoft Word for the specifications. You can, and I have, put Word documents in a source control system - the only thing you lose is the ability to diff between documents. Although I do vaguely remember reading somewhere that there are diff tools for Word that can be used. A: At work we use a wiki because they are great for collaboration, but Microsoft Word will work. You can actually diff two different versions of a Word document using Word itself - it uses the "track changes" feature to show the differences. (If you don't believe me, try diffing two versions of a Word document using TortoiseSVN.) For long documents, I actually prefer Word over the wiki because it is well suited to editing long documents and business folks are more comfortable working with Word documents.
{ "language": "en", "url": "https://stackoverflow.com/questions/100174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Best practice for application-icon in windows So far I had "designed" my app-icon in the Visual-Studio-Editor. 16-Colors, 4kB. Now someone created a more sophisticated and up-to-date app-icon for me, which results in a filesize of about 250kB. Problem: A user reported, that win2000 is complaining, because it is not able to digest this amount of data for an icon. Question: What can be regarded as best practice for application-icons. In detail: which resolutions and which color-depth-variations should be contained in an icon? A: I've always tried to stick to the following set of sizes to get a reasonable icon on most systems. * *16 x 16 in 16 colours *16 x 16 in XP Style (true colour with alpha channel info) *32 x 32 in 256 colours *32 x 32 in XP style *48 x 48 in XP style *64 x 64 in XP style This produces an icon of about 35KB in size and seems to work on systems from win95/98 all the way up to Vista. I still develop on a Win2000 machine and these work just fine. A: icoFX is a free icon editor which I just found. It seems to work nicely - you just check the boxes for the formats you want "slaved" to your big 256x256 icon which is the one you edit. Searching stack overflow for icoFX - others agree. A: In general I wouldn't care about Windows 2000 any more as even Microsoft has begun to stop support for it. For Windows XP this article on MSDN might help you. A: I'd say best practise would be to follow the icon example Microsoft is setting with XP and Vista icons. It's so rare to see anything less than 256 colour icons these days, that when I do see them I think the program is quaint and outdated. Perhaps best bet is to wait for Microsoft to add SVG icon support; perhaps in Windows 7, if we're lucky?
{ "language": "en", "url": "https://stackoverflow.com/questions/100177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Adding a classic ASP webapplication to VS 2008 Is there a way to add an existing classic ASP webapp into a solution in VS? The application is around 4000 files large and currently maintained outside Visual Studio. A: The best way to do this IMO is: In VS 2008 File | Open | Web Site... In the dialog ensure File System is selected. Navigate to the physical folder that represents the root of your Web app. Click Open. Job done. A: You can't do this with Visual Studio 2008 out of the box, but you can if you install Service Pack 1 - see Scott Guthrie's blog post for more info. Edit: To clarify, whilst you can create a project out of the box, there is no intellisense or debugging unless you install SP1. A: There's no real point though, is there, as Classic ASP files aren't compiled. I keep just opening them individually as needed from File Explorer. Although you could add them all to a VBProj or CSProj so you get them all showing up in your Solution Explorer, and could use features like Project -> Deploy.
{ "language": "en", "url": "https://stackoverflow.com/questions/100187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: .NET List Concat vs AddRange What is the difference between the AddRange and Concat functions on a generic List? Is one recommended over the other? A: The big difference is that AddRange mutates the list against which it is called, whereas Concat creates a new List. Hence they have different uses. Also, Concat is an extension method that applies to any IEnumerable<T> and returns an IEnumerable<T>; you need a .ToList() to result in a new List. If you want to extend the content of an existing list, use AddRange. If you are creating a new list from two IEnumerable<T> sources, then use Concat with .ToList. This has the quality that it does not mutate either of the sources. If you only ever need to enumerate the contents of two Lists (or any other IEnumerable), then simply use Concat each time; this has the advantage of not actually allocating new memory to hold the unified list. A: They have totally different semantics. AddRange modifies the list by adding the other items to it. Concat returns a new sequence containing the list and the other items, without modifying the list. Choose whichever one has the semantics you want. A: I found this interesting article talking about the difference between these 2 structures and comparing their performance... The main idea is that AddRange is much faster when it comes to large collections. Here is the Link. Hope this helps,
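To make the difference described above concrete, here is a small self-contained C# sketch; the class and variable names are only illustrative.
using System;
using System.Collections.Generic;
using System.Linq;

class AddRangeVsConcatDemo
{
    static void Main()
    {
        var first = new List<int> { 1, 2, 3 };
        var second = new List<int> { 4, 5 };

        // AddRange mutates 'first' in place and returns void.
        first.AddRange(second);
        Console.WriteLine(first.Count);     // 5 -> first is now 1,2,3,4,5

        // Concat is a LINQ extension on IEnumerable<T>; it leaves both
        // sources untouched and returns a lazy sequence. Call ToList()
        // only when you actually need a new List<T>.
        var third = new List<int> { 1, 2, 3 };
        List<int> combined = third.Concat(second).ToList();
        Console.WriteLine(third.Count);     // 3 -> third is unchanged
        Console.WriteLine(combined.Count);  // 5 -> new list holding 1,2,3,4,5
    }
}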
{ "language": "en", "url": "https://stackoverflow.com/questions/100196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "114" }
Q: How can I do offline reasoning with Pellet? I have an OWL ontology and I am using Pellet to do reasoning over it. Like most ontologies it starts by including various standard ontologies: <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:xsd="http://www.w3.org/2001/XMLSchema#" xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#" xmlns:owl="http://www.w3.org/2002/07/owl#"> I know that some reasoners have these standard ontologies 'built-in', but Pellet doesn't. Is there any way I can continue to use Pellet when I am offline & can't access them? (Or if their URL goes offline, like dublincore.org did last week for routine maintenance) A: Pellet recognizes all of these namespaces when loading and should not attempt to dereference the URIs. If it does, it suggests the application using Pellet is doing something incorrectly. You may find more help on the pellet-users mailing list. A: A generalized solution to this problem -- access to ontologies w/out public Web access -- is described in Local Ontology Repositories with Pellet. Enjoy. A: Make local copies of the four files and replace the remote URLs with local URIs (i.e. file://... or serve them from your own box: http://localhost...).
{ "language": "en", "url": "https://stackoverflow.com/questions/100209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the standard way to add N seconds to datetime.time in Python? Given a datetime.time value in Python, is there a standard way to add an integer number of seconds to it, so that 11:34:59 + 3 = 11:35:02, for example? These obvious ideas don't work: >>> datetime.time(11, 34, 59) + 3 TypeError: unsupported operand type(s) for +: 'datetime.time' and 'int' >>> datetime.time(11, 34, 59) + datetime.timedelta(0, 3) TypeError: unsupported operand type(s) for +: 'datetime.time' and 'datetime.timedelta' >>> datetime.time(11, 34, 59) + datetime.time(0, 0, 3) TypeError: unsupported operand type(s) for +: 'datetime.time' and 'datetime.time' In the end I have written functions like this: def add_secs_to_time(timeval, secs_to_add): secs = timeval.hour * 3600 + timeval.minute * 60 + timeval.second secs += secs_to_add return datetime.time(secs // 3600, (secs % 3600) // 60, secs % 60) I can't help thinking that I'm missing an easier way to do this though. Related * *python time + timedelta equivalent A: If it's worth adding another file / dependency to your project, I've just written a tiny little class that extends datetime.time with the ability to do arithmetic. When you go past midnight, it wraps around zero. Now, "What time will it be, 24 hours from now" has a lot of corner cases, including daylight savings time, leap seconds, historical timezone changes, and so on. But sometimes you really do need the simple case, and that's what this will do. Your example would be written: >>> import datetime >>> import nptime >>> nptime.nptime(11, 34, 59) + datetime.timedelta(0, 3) nptime(11, 35, 2) nptime inherits from datetime.time, so any of those methods should be usable, too. It's available from PyPi as nptime ("non-pedantic time"), or on GitHub: https://github.com/tgs/nptime A: As others here have stated, you can just use full datetime objects throughout: from datetime import datetime, date, time, timedelta sometime = time(8,00) # 8am later = (datetime.combine(date.today(), sometime) + timedelta(seconds=3)).time() However, I think it's worth explaining why full datetime objects are required. Consider what would happen if I added 2 hours to 11pm. What's the correct behavior? An exception, because you can't have a time larger than 11:59pm? Should it wrap back around? Different programmers will expect different things, so whichever result they picked would surprise a lot of people. Worse yet, programmers would write code that worked just fine when they tested it initially, and then have it break later by doing something unexpected. This is very bad, which is why you're not allowed to add timedelta objects to time objects. A: You can use full datetime variables with timedelta, and by providing a dummy date then using time to just get the time value. For example: import datetime a = datetime.datetime(100,1,1,11,34,59) b = a + datetime.timedelta(0,3) # days, seconds, then other fields. print(a.time()) print(b.time()) results in the two values, three seconds apart: 11:34:59 11:35:02 You could also opt for the more readable b = a + datetime.timedelta(seconds=3) if you're so inclined. 
If you're after a function that can do this, you can look into using addSecs below: import datetime def addSecs(tm, secs): fulldate = datetime.datetime(100, 1, 1, tm.hour, tm.minute, tm.second) fulldate = fulldate + datetime.timedelta(seconds=secs) return fulldate.time() a = datetime.datetime.now().time() b = addSecs(a, 300) print(a) print(b) This outputs: 09:11:55.775695 09:16:55 A: For completeness' sake, here's the way to do it with arrow (better dates and times for Python): sometime = arrow.now() abitlater = sometime.shift(seconds=3) A: In a real world environment it's never a good idea to work solely with time, always use datetime, even better utc, to avoid conflicts like overnight, daylight saving, different timezones between user and server etc. So I'd recommend this approach: import datetime as dt _now = dt.datetime.now() # or dt.datetime.now(dt.timezone.utc) _in_5_sec = _now + dt.timedelta(seconds=5) # get '14:39:57': _in_5_sec.strftime('%H:%M:%S') A: One little thing, might add clarity to override the default value for seconds >>> b = a + datetime.timedelta(seconds=3000) >>> b datetime.datetime(1, 1, 1, 12, 24, 59) A: You cannot simply add number to datetime because it's unclear what unit is used: seconds, hours, weeks... There is timedelta class for manipulations with date and time. datetime minus datetime gives timedelta, datetime plus timedelta gives datetime, two datetime objects cannot be added although two timedelta can. Create timedelta object with how many seconds you want to add and add it to datetime object: >>> from datetime import datetime, timedelta >>> t = datetime.now() + timedelta(seconds=3000) >>> print(t) datetime.datetime(2018, 1, 17, 21, 47, 13, 90244) There is same concept in C++: std::chrono::duration. A: Thanks to @Pax Diablo, @bvmou and @Arachnid for the suggestion of using full datetimes throughout. If I have to accept datetime.time objects from an external source, then this seems to be an alternative add_secs_to_time() function: def add_secs_to_time(timeval, secs_to_add): dummy_date = datetime.date(1, 1, 1) full_datetime = datetime.datetime.combine(dummy_date, timeval) added_datetime = full_datetime + datetime.timedelta(seconds=secs_to_add) return added_datetime.time() This verbose code can be compressed to this one-liner: (datetime.datetime.combine(datetime.date(1, 1, 1), timeval) + datetime.timedelta(seconds=secs_to_add)).time() but I think I'd want to wrap that up in a function for code clarity anyway. A: If you don't already have a timedelta object, another possibility would be to just initialize a new time object instead with the attributes of the old one and add values where needed: new_time:time = time( hour=curr_time.hour + n_hours, minute=curr_time.minute + n_minutes, seconds=curr_time.second + n_seconds ) Admittedly this only works if you make a few assumptions about your values, since overflow is not handled here. But I just thought it was worth to keep this in mind as it can save a line or two A: Try adding a datetime.datetime to a datetime.timedelta. If you only want the time portion, you can call the time() method on the resultant datetime.datetime object to get it. A: Old question, but I figured I'd throw in a function that handles timezones. The key parts are passing the datetime.time object's tzinfo attribute into combine, and then using timetz() instead of time() on the resulting dummy datetime. This answer partly inspired by the other answers here. 
def add_timedelta_to_time(t, td): """Add a timedelta object to a time object using a dummy datetime. :param t: datetime.time object. :param td: datetime.timedelta object. :returns: datetime.time object, representing the result of t + td. NOTE: Using a gigantic td may result in an overflow. You've been warned. """ # Create a dummy date object. dummy_date = date(year=100, month=1, day=1) # Combine the dummy date with the given time. dummy_datetime = datetime.combine(date=dummy_date, time=t, tzinfo=t.tzinfo) # Add the timedelta to the dummy datetime. new_datetime = dummy_datetime + td # Return the resulting time, including timezone information. return new_datetime.timetz() And here's a really simple test case class (using built-in unittest): import unittest from datetime import datetime, timezone, timedelta, time class AddTimedeltaToTimeTestCase(unittest.TestCase): """Test add_timedelta_to_time.""" def test_wraps(self): t = time(hour=23, minute=59) td = timedelta(minutes=2) t_expected = time(hour=0, minute=1) t_actual = add_timedelta_to_time(t=t, td=td) self.assertEqual(t_expected, t_actual) def test_tz(self): t = time(hour=4, minute=16, tzinfo=timezone.utc) td = timedelta(hours=10, minutes=4) t_expected = time(hour=14, minute=20, tzinfo=timezone.utc) t_actual = add_timedelta_to_time(t=t, td=td) self.assertEqual(t_expected, t_actual) if __name__ == '__main__': unittest.main()
{ "language": "en", "url": "https://stackoverflow.com/questions/100210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "497" }
Q: How to retrieve params from GET HTTP method using javax.ws.rs.* and Glassfish? I just installed Glassfish V2 on my local machine just to play around with it. I was wondering if there is a way to retrieve a param passed in by the GET HTTP method. For instance, http://localhost:8080/HelloWorld/resources/helloWorld?name=ABC How do I retrieve the "name" param in my Java code? A: Like this: @Path("/helloWorld") @Consumes({"application/xml", "application/json"}) @Produces({"application/xml", "application/json"}) @Singleton public class MyService { @GET public String getRequest(@QueryParam("name") String name) { return "Name was " + name; } } A: By putting: @Context private UriInfo context; in your HelloWorld class, can you access the context.getQueryParameters() ; method to get a map of parameters? http://docs.sun.com/app/docs/doc/820-4867/ggrby?a=view Seems to suggest you can :)
{ "language": "en", "url": "https://stackoverflow.com/questions/100211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Start seleniumRC from Fitnesse I'm trying to integrate running Fitnesse tests from MSBuild im my nightly build on TFS. In an attempt to make it self contained I would like to start the seleniumRC server only when it's needed from fitness. I've seen that there is a "Command Line Fixture" but it's written in java can I use that? A: I think you might be able to. You can call any process easily in MSBuild using the task. However, the problem with doing this is that the exec task will wait for the Selinium process to finish before continuing, which is not the bahaviour you want. You want to run the process, keep it running during your build and then tear it down as your build finishes. Therefore, I think you are probably going to need to create a custom MSBuild task to do this. See the following post for an example of a tasks that someone has created that will run asynchronously returning control back to the build script: http://blog.eleutian.com/2007/03/01/AsyncExecMsBuildTask.aspx And for an example of calling a Java program from MSBuild (but in this case synchronously) take a look at my task that calls Ant from MSBuild here http://teamprise.com/products/build/ As part of your MSBuild task, you will want to output the process id that you created to an output property so that at the end of your build script you can call another custom MSBuild task that kills the process. It can do this by looking for the process id passed in as a variable in MSBuild and then call Process.Kill method i.e. Process process = Process.GetProcessById(ProcessId); process.Kill(); That said, you would need to be careful to ensure that your kill task was always executed in MSBuild by making sure it was included during error paths etc in the build. You could probably make things a bit more resilient by making the selenium RC starter task look for other seleniumRC processes and killing them before starting a new one - that way if a process didn't get closed properly for some reason, it would only run until the next build. Anyway - my answer sounds like a lot of work so hopefully someone else will come up with an easier way. You might be able to create the seleniumRC process in the test suite start up of the FitNesse tests and kill it in the suite tear down, or you might be able to write a custom task that extends your FitNesse runner tasks and fires up seleiniumRC asynronously before running the test process and then kills it afterwards. Good luck, Martin. A: Thanks for your replies! This is how I've done so far. I made a fit fixture (very simple) that starts a process with the supplied command line, in my case startSelenium.bat. The fixture returns the ProcessID so I can store that in my fitnesse context and close that session later. I can now make a SuiteSetUp page in my fitnesse test that looks like this. |RunCommandFixture| |Commandline|RunCommand?| |C:\Projects...\startSeleniumRC.bat|>>seleniumprocess| and a SuiteTearDown like this |RunCommandFixture| |ProcessID|StopCommand?| |< That works for me. No selenium RC starts by request from my fitnesse test. A: What about writing a simple .NET app that does a Process.Start("selenumRC commandline") which gets run by your build script? If you aren't too far down the Selenium route; might I suggest that you look at similar .NET browser automation tools; specifically WatiN or ArtOfTest. The "stacks" in these are completely .NET, so getting them running on different machines is much easier.
{ "language": "en", "url": "https://stackoverflow.com/questions/100216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Tools for finding unused function declarations? Whilst refactoring some old code I realised that a particular header file was full of function declarations for functions long since removed from the .cpp file. Does anyone know of a tool that could find (and strip) these automatically? A: You could if possible make a test.cpp file to call them all, the linker will flag the ones that have no code as unresolved, this way your test code only need compile and not worry about actually running. A: PC-lint can be tunned for dedicated purpose: I tested the following code against for your question: void foo(int ); int main() { return 0; } lint.bat test_unused.cpp and got the following result: ============================================================ --- Module: test_unused.cpp (C++) --- Wrap-up for Module: test_unused.cpp Info 752: local declarator 'foo(int)' (line 2, file test_unused.cpp) not referenced test_unused.cpp(2) : Info 830: Location cited in prior message ============================================================ So you can pass the warning number 752 for your puropse: lint.bat -"e*" +e752 test_unused.cpp -e"*" will remove all the warnings and +e752 will turn on this specific one A: If you index to code with Doxygen you can see from where is each function referenced. However, you would have to browse through each class (1 HTML page per class) and scan for those that don't have anything pointing to them. Alternatively, you could use ctags to generate list of all functions in the code, and then use objdump or some similar tool to get list of all function in .o files - and then compare those lists. However, this can be problematic due to name mangling. A: I don't think there is such thing because some functions not having a body in the actual source tree might be defined in some external library. This can only be done by creating a script which makes a list of declared functions in a header and verifies if they are sometimes called. A: I have a C++ ftplugin for vim that is able is check and report unmatched functions -- vimmers, the ftplugin suite is not yet straightforward to install. The ftplugin is based on ctags results (hence its heuristic could be easily adapted to other environments), sometimes there are false positives in the case of inline functions. HTH, A: In addition Doxygen (@Milan Babuskov), you can see if there are warnings for this in your compiler. E.g. gcc has -Wunused-function for static functions; -fdump-ipa-cgraph. A: I've heard good things about PC-Lint, but I imagine it's probably overkill for your needs.
{ "language": "en", "url": "https://stackoverflow.com/questions/100221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: XML schema construct for "any one or more of these elements but must be at least one" I'm trying to set up part of a schema that's like a "Sequence" where all child elements are optional, but at least one of the elements must be present, and there could be more than one of them. I tried doing the following, but XMLSpy complains that "The content model contains the elements <element name="DateConstant"> and <element name="DateConstant"> which cannot be uniquely determined.": <xs:choice> <xs:sequence> <xs:element name="DateConstant"/> <xs:element name="TimeConstant"/> </xs:sequence> <xs:element name="DateConstant"/> <xs:element name="TimeConstant"/> </xs:choice> Can this be done (and if so, how)? Some clarification: I only want to allow one of each element of the same name. There can be one "DateConstant" and/or one "TimeConstant", but not two of either. Gizmo's answer matches my requirements, but it's impractical for a larger number of elements. Hurst's answer allows two or more elements of the same name, which I don't want. A: Try this: <xs:choice> <xs:sequence> <xs:element name="Elem1" /> <xs:element name="Elem2" minOccurs="0" /> <xs:element name="Elem3" minOccurs="0" /> </xs:sequence> <xs:sequence> <xs:element name="Elem2" /> <xs:element name="Elem3" minOccurs="0" /> </xs:sequence> <xs:element name="Elem3" /> </xs:choice> Doing so, you force a choice: either the first element (and then the rest is optional), or the second element (and the rest is optional), or the third element. This should do what you want, I hope. Of course, you could place the sub-sequences into groups, to avoid duplicating an element in each sequence if you realise you missed one. A: @hurst, Unfortunately you have failed to understand the original question. Placing minOccurs="1" on the choice is satisfied automatically when ALL the elements it contains as options have minOccurs="0". Thus you have failed to account for the "at least one" required by the original poster, because nothing correctly enforces at least one occurrence when both elements are completely optional. So far I am unable to find a solution to this, as minOccurs/maxOccurs both relate to the group in which they are defined and DO NOT relate to an overall number of nodes. Nor can you use the choice element to define the same-named element more than once, or it becomes "ambiguous". I have seen some examples use references instead of elements of a specific type, but I believe this fails the Microsoft XSD parser. <xs:choice minOccurs="1" maxOccurs="1"> <xs:sequence minOccurs="1" maxOccurs="1"> <xs:element name="Elem1" minOccurs="1" maxOccurs="1" /> <xs:element name="Elem2" minOccurs="0" maxOccurs="1" /> </xs:sequence> <xs:sequence> <xs:element name="Elem2" minOccurs="1" maxOccurs="1" /> </xs:sequence> </xs:choice> Here you can see that either you have the first sequence (which MUST have Elem1 but may have Elem2 optionally), or you have the second sequence (which MUST have Elem2). Hence you now have "any one or more" of these 2 elements. Of course this gets exponentially more complex the more options you have, as you need to provide additional choices for all possible combinations.
A: According to the technical article on MSDN titled Understanding XML Schema at http://msdn.microsoft.com/en-us/library/aa468557.aspx#understandxsd_topic5 you can take advantage of constraints such as minOccurs on the choice definition (compositor) itself: "Using occurrence constraints on a compositor applies to the entire group as a whole" (See the more sophisticated example that uses nested complex types and the AuthorType example) You stated your requirement as "at least one of the elements must be present, and there could be more than one of them". Thus, I propose you try the following: <xs:choice minOccurs="1" maxOccurs="unbounded"> <xs:element name="DateConstant" type="..."/> <xs:element name="TimeConstant" type="..."/> </xs:choice>
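If you want to sanity-check whichever of these content models you settle on, a quick way is to validate a couple of sample instance documents in code. The harness below is my own sketch, not from the thread: it assumes the schema is saved as constants.xsd with a root element named Constants, and uses the standard System.Xml validation API.

using System;
using System.Xml;
using System.Xml.Schema;

class SchemaCheck
{
    static void Main()
    {
        XmlReaderSettings settings = new XmlReaderSettings();
        settings.ValidationType = ValidationType.Schema;
        settings.Schemas.Add(null, "constants.xsd"); // null: take targetNamespace from the schema
        settings.ValidationEventHandler += delegate(object sender, ValidationEventArgs e)
        {
            Console.WriteLine("{0}: {1}", e.Severity, e.Message);
        };

        // Should be reported as invalid if the schema really requires at least one child.
        Validate("<Constants/>", settings);
        // Should pass: one of each element.
        Validate("<Constants><DateConstant>2008-09-18</DateConstant>" +
                 "<TimeConstant>12:00:00</TimeConstant></Constants>", settings);
    }

    static void Validate(string xml, XmlReaderSettings settings)
    {
        using (XmlReader reader = XmlReader.Create(new System.IO.StringReader(xml), settings))
        {
            while (reader.Read()) { } // validation errors surface through the event handler
        }
        Console.WriteLine("Finished validating: " + xml);
    }
}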
{ "language": "en", "url": "https://stackoverflow.com/questions/100228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Looking for a simple standalone persistent dictionary implementation in C# For an open source project I am looking for a good, simple implementation of a Dictionary that is backed by a file. Meaning, if an application crashes or restarts the dictionary will keep its state. I would like it to update the underlying file every time the dictionary is touched. (Add a value or remove a value). A FileWatcher is not required but it could be useful. class PersistentDictionary<T,V> : IDictionary<T,V> { public PersistentDictionary(string filename) { } } Requirements: * *Open Source, with no dependency on native code (no sqlite) *Ideally a very short and simple implementation *When setting or clearing a value it should not re-write the entire underlying file, instead it should seek to the position in the file and update the value. Similar Questions * *Persistent Binary Tree / Hash table in .Net *Disk backed dictionary/cache for c# *PersistentDictionary<Key,Value> A: one way is to use the Extensible Storage Engine built into windoows to store your stuff. It's a native win database that supports indexing, transactions etc... A: * *bplustreedotnet The bplusdotnet package is a library of cross compatible data structure implementations in C#, java, and Python which are useful for applications which need to store and retrieve persistent information. The bplusdotnet data structures make it easy to store string keys associated with values permanently. *ESENT Managed Interface Not 100% managed code but it's worth mentioning it as unmanaged library itself is already part of every windows XP/2003/Vista/7 box ESENT is an embeddable database storage engine (ISAM) which is part of Windows. It provides reliable, transacted, concurrent, high-performance data storage with row-level locking, write-ahead logging and snapshot isolation. This is a managed wrapper for the ESENT Win32 API. *Akavache *Akavache is an asynchronous, persistent key-value cache created for writing native desktop and mobile applications in C#. Think of it like memcached for desktop apps. - The C5 Generic Collection Library C5 provides functionality and data structures not provided by the standard .Net System.Collections.Generic namespace, such as persistent tree data structures, heap based priority queues, hash indexed array lists and linked lists, and events on collection changes. A: I was working on porting EHCache to .NET. Take a look at the project http://sourceforge.net/projects/thecache/ Persistent caching is core functionality that is already implemented. All main Unit Tests are passing. I got a bit stuck on distributed caching, but you do not need that part. A: Let me analyze this: * *Retrieve information by key *Persistant storage *Do not want to write back the whole file when 1 value changes *Should survive crashes I think you want a database. Edit: I think you are searching for the wrong thing. Search for a database that fits your requirements. And change some of your requirements, because I think it will be difficult to meet them all. A: Sounds cool, but how will you get around changes to the stored value (if it was a reference type) itself? If its immutable then all is well but if not you're kinda stuffed :-) If you're not dealing with immutable values, I would suspect a better approach would be to handle persistence at the value level and to just rebuild the dictionary as necessary. 
(edited to add a clarification) A: I think your issue is likely to be that last point: When setting or clearing a value it should not re-write the entire underlying file, instead it should seek to the position in the file and update the value. This is exactly what a DB does - you're basically describing a simple file-based table structure. We can illustrate the problem by looking at strings. Strings in memory are flexible things - you don't need to know the length of a string in C# when you declare its type. In data storage strings and everything else are fixed sizes. Your saved dictionary on disk is just a collection of bytes, in order. If you replace a value in the middle it either has to be exactly the same size or you will have to rewrite every byte that comes after it. This is why most databases restrict text and blob fields to fixed sizes. New features like varchar(max)/varbinary(max) in SQL Server 2005+ are actually clever workarounds: the row only stores a pointer to the real data. You can't use the fixed sizes with your example because it's generic - you don't know what type you're going to be storing so you can't pad the values out to a maximum size. You could do: class PersistentDictionary<T,V> : Dictionary<T,V> where V:struct ...as value types don't vary in storage size, although you would have to be careful with your implementation to save the right amount of storage for each type. However your model wouldn't be very performant - if you look at how SQL Server and Oracle deal with table changes they don't change the values like this. Instead they flag the old record as a ghost, and add a new record with the new value. Old ghosted records are cleaned up later when the DB is less busy. I think you're trying to reinvent the wheel: * *If you're dealing with large amounts of data then you really need to check out using a full-blown DB. MySQL or SQLite are both good, but you're not going to find a good, simple, open-source and lightweight implementation. *If you aren't dealing with loads of data then I'd go for whole-file serialisation, and there are already plenty of good suggestions here on how to do that. A: I wrote up an implementation myself based on a very similar (I think identical) requirement I had on another project a while ago. When I did it, one thing I realised was that most of the time you'll be doing writes; you only rarely do a read, when the program crashes or when it's closed. So the idea is to make the writes as fast as possible. What I did was make a very simple class which would just write a log of all the operations (additions and deletions) to the dictionary as things occurred. So after a while you get a lot of repetition between keys. Because of that, once the object detects a certain amount of repetition, it'll clear the log and rewrite it so each key and its value only appears once. Unfortunately, you can't subclass Dictionary because you can't override anything in it. This is my simple implementation; I haven't tested it, sorry, but I thought you might want the idea. Feel free to use it and change it as much as you like.
using System.Collections.Generic;
using System.IO;

class PersistentDictManager {
    const int SaveAllThreshold = 1000;

    public PersistentDictManager(string logpath) {
        this.LogPath = logpath;
        this.mydictionary = new Dictionary<string, string>();
        this.LoadData();
    }

    public string LogPath { get; private set; }

    public string this[string key] {
        get { return this.mydictionary[key]; }
        set {
            string existingvalue;
            if(!this.mydictionary.TryGetValue(key, out existingvalue)) { existingvalue = null; }
            if(string.Equals(value, existingvalue)) { return; }
            this.mydictionary[key] = value; // store in the in-memory dictionary
            if(existingvalue != null) { // was an update (not a create)
                if(this.IncrementSaveAll()) { return; } // because we're going to repeat a key in the log
            }
            this.LogStore(key, value);
        }
    }

    public void Remove(string key) {
        if(!this.mydictionary.Remove(key)) { return; }
        if(this.IncrementSaveAll()) { return; } // because we're going to repeat a key in the log
        this.LogDelete(key);
    }

    private void CreateWriter() {
        if(this.writer == null) {
            // Append so existing log entries are preserved; the file is created if it is missing.
            this.writer = new BinaryWriter(File.Open(this.LogPath, FileMode.Append));
        }
    }

    private bool IncrementSaveAll() {
        ++this.saveallcount;
        if(this.saveallcount >= PersistentDictManager.SaveAllThreshold) {
            this.SaveAllData();
            return true;
        }
        else {
            return false;
        }
    }

    private void LoadData() {
        try {
            using(BinaryReader reader = new BinaryReader(File.Open(LogPath, FileMode.Open))) {
                while(reader.PeekChar() != -1) {
                    string key = reader.ReadString();
                    bool isdeleted = reader.ReadBoolean();
                    if(isdeleted) { this.mydictionary.Remove(key); }
                    else {
                        string value = reader.ReadString();
                        this.mydictionary[key] = value;
                    }
                }
            }
        }
        catch(FileNotFoundException) { }
    }

    private void LogDelete(string key) {
        this.CreateWriter();
        this.writer.Write(key);
        this.writer.Write(true); // yes, key was deleted
    }

    private void LogStore(string key, string value) {
        this.CreateWriter();
        this.writer.Write(key);
        this.writer.Write(false); // no, key was not deleted
        this.writer.Write(value);
    }

    private void SaveAllData() {
        if(this.writer != null) {
            this.writer.Close();
            this.writer = null;
        }
        using(BinaryWriter writer = new BinaryWriter(File.Open(this.LogPath, FileMode.Create))) {
            foreach(KeyValuePair<string, string> kv in this.mydictionary) {
                writer.Write(kv.Key);
                writer.Write(false); // is-not-deleted flag
                writer.Write(kv.Value);
            }
        }
        this.saveallcount = 0; // the log is compact again, so start counting repetitions from scratch
    }

    private readonly Dictionary<string, string> mydictionary;
    private int saveallcount = 0;
    private BinaryWriter writer = null;
}

A: Check this blog out: http://ayende.com/Blog/archive/2009/01/17/rhino.dht-ndash-persistent-amp-distributed-storage.aspx Looks to be exactly what you are looking for. A: Just use serialization. Look at the BinaryFormatter class. A: I don't know of anything to solve your problem. It will need to be a fixed-size structure, so that you can meet the requirement of being able to rewrite records without rewriting the entire file. This means normal strings are out. A: Like Douglas said, you need to know the fixed size of your types (both T and V). Also, variable-length instances in the object graph referenced by any of those instances are out. Still, implementing a dictionary backed by a file is quite simple and you can use the BinaryWriter class to write the types to disk, after inheriting or encapsulating the Dictionary<TKey, TValue> class. A: Consider a memory mapped file. I'm not sure if there is direct support in .NET, but you could pinvoke the Win32 calls. A: I haven't actually used it, but this project apparently provides an mmap()-like implementation in C#: Mmap A: I'd recommend SQL Server Express or another database. * *It's free.
*It integrates very well with C#, including LINQ. *It's faster than a homemade solution. *It's more reliable than a homemade solution. *It's way more powerful than a simple disk-based data structure, so it'll be easy to do more in the future. *SQL is an industry standard, so other developers will understand your program more easily, and you'll have a skill that is useful in the future. A: I am not much of a programmer, but wouldn't creating a really simple XML format to store your data do the trick? <dico> <dicEntry index="x"> <key>MyKey</key> <val type="string">My val</val> </dicEntry> ... </dico> From there, you load the XML file DOM and fill up your dictionary as you like, XmlDocument xdocDico = new XmlDocument(); string sXMLfile; public loadDico(string sXMLfile, [other args...]) { xdocDico.load(sXMLfile); // Gather whatever you need and load it into your dico } public flushDicInXML(string sXMLfile, dictionary dicWhatever) { // Dump the dic in the XML doc & save } public updateXMLDOM(index, key, value) { // Update a specific value of the XML DOM based on index or key } Then whenever you want, you can update the DOM and save it on disk. xdocDico.save(sXMLfile); If you can afford to keep the DOM in memory performance-wise, it's pretty easy to deal with. Depending on your requirements, you may not even need the dictionary at all.
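For what it's worth, here is a small, runnable take on that XML idea. It is only a sketch of my own (the class and file layout are made up), it rewrites the whole file on every save rather than updating in place, and it ignores the thread-safety concerns a real implementation would need.

using System.Collections.Generic;
using System.Xml;

// My own sketch of the XML-backed dictionary idea above; not production code.
class XmlBackedDictionary
{
    private readonly Dictionary<string, string> data = new Dictionary<string, string>();
    private readonly string path;

    public XmlBackedDictionary(string path)
    {
        this.path = path;
        if (System.IO.File.Exists(path))
        {
            XmlDocument doc = new XmlDocument();
            doc.Load(path);
            foreach (XmlElement entry in doc.SelectNodes("/dico/dicEntry"))
            {
                data[entry["key"].InnerText] = entry["val"].InnerText;
            }
        }
    }

    public string this[string key]
    {
        get { return data[key]; }
        set { data[key] = value; Flush(); } // naive: rewrites the whole file on every change
    }

    public void Flush()
    {
        XmlDocument doc = new XmlDocument();
        XmlElement root = doc.CreateElement("dico");
        doc.AppendChild(root);
        foreach (KeyValuePair<string, string> kv in data)
        {
            XmlElement entry = doc.CreateElement("dicEntry");
            XmlElement key = doc.CreateElement("key");
            key.InnerText = kv.Key;
            XmlElement val = doc.CreateElement("val");
            val.InnerText = kv.Value;
            entry.AppendChild(key);
            entry.AppendChild(val);
            root.AppendChild(entry);
        }
        doc.Save(path);
    }
}

Note that this deliberately ignores the original poster's requirement of not rewriting the whole file; like the answer above says, it is really only reasonable when the dictionary is small.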
{ "language": "en", "url": "https://stackoverflow.com/questions/100235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: C# String ASCII representation How can I insert ASCII special characters (e.g. with the ASCII value 0x01) into a string? I ask because I am using the following: str.Replace( "<TAG1>", Convert.ToChar(0x01).ToString() ); and I feel that there must be a better way than this. Any Ideas? Update: Also If I use this methodology, do I need to worry about unicode & ASCII clashing? A: I believe you can use \uXXXX to insert specified codes into your string. ETA: I just tested it and it works. :-) using System; class Uxxxx { public static void Main() { Console.WriteLine("\u20AC"); } } A: Also If I use this methodology, do I need to worry about unicode & ASCII clashing? Your first problem will be your tags clashing with ASCII. Once you get to TAG10, you will clash with 0x0A: line feed. If you ensure that you will never get more than nine tags, you should be safe. Unicode-encoding (or rather: UTF8) is identical to ASCII-encoding when the byte-values are between 0 and 127. They only differ when the top-bit is set. A: and I feel that there must be a better way than this. Any Ideas? It looks as if you're trying to manipulate a binary chunk using textual tools. If you want to insert the byte 0x01, for example, you're not manipulating text anymore, since you don't care what that byte might represent, and since it looks like you don't even care which encoding you'll be outputting. A better way would be to treat the thing you're manipulating as a binary chunk of data, which would let you insert bits and bytes easily, without using brittle workarounds and worrying about side effects.
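To make the two approaches above concrete, here is a tiny sketch of my own (not from the answers) showing the escape-sequence form next to the Convert.ToChar form; both produce the same single-character string.

using System;

class ControlCharDemo
{
    static void Main()
    {
        // Escape-sequence form: \u0001 is the character with code point 0x01.
        string viaEscape = "\u0001";

        // Runtime form, equivalent to the str.Replace call in the question.
        string viaConvert = Convert.ToChar(0x01).ToString();

        // A plain cast works too and avoids the Convert call.
        string viaCast = ((char)0x01).ToString();

        Console.WriteLine(viaEscape == viaConvert); // True
        Console.WriteLine(viaEscape == viaCast);    // True
        Console.WriteLine((int)viaEscape[0]);       // 1

        // Replacing a tag with the control character, as in the question:
        string tagged = "foo<TAG1>bar";
        string replaced = tagged.Replace("<TAG1>", "\u0001");
        Console.WriteLine(replaced.Length);         // 7: 'f','o','o',0x01,'b','a','r'
    }
}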
{ "language": "en", "url": "https://stackoverflow.com/questions/100236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I create a "select folder OR file dialog window" in REALbasic? You can use SelectFolder() to get a folder or GetOpenFolderitem(filter as string) to get files but can you select either a folder or file? ( or for that matter selecting multiple files ) A: The MonkeyBread plugin allows this in the OpenDialogMBS class. http://www.monkeybreadsoftware.net/pluginhelp/navigation-opendialogmbs.shtml OpenDialogMBS.AllowFolderSelection as Boolean property, Navigation, MBS Util Plugin (OpenDialog), class OpenDialogMBS, Plugin version: 7.5, Mac OS X: Works, Windows: Does nothing, Linux x86: Does nothing, Feedback. Function: Whether folders can be selected. Example: dim o as OpenDialogMBS dim i,c as integer dim f as FolderItem o=new OpenDialogMBS o.ShowHiddenFiles=true o.PromptText="Select one or more files/folders:" o.MultipleSelection=false o.ActionButtonLabel="Open files/folders" o.CancelButtonLabel="no, thanks." o.WindowTitle="This is a window title." o.ClientName="Client Name?" o.AllowFolderSelection=true o.ShowDialog c=o.FileCount if c>0 then for i=0 to c-1 f=o.Files(i) FileList.List.AddRow f.AbsolutePath next end if Notes: Default is false. Setting this to true on Windows or Linux has no effect there. (Read and Write property) A: It's not possible via any of the built-in APIs. There might be a plugin to do it, but I don't think there's OS support for it. A: A bit late, but it's been included in recent versions. I'll put it here in case someone stumbles like me in this question: RealBasic Multiple Selection: OpenDialog.MultiSelect A: Assuming you're using .Net I think you'll need to create your own control (or buy one).
{ "language": "en", "url": "https://stackoverflow.com/questions/100242", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Java Frameworks War: Spring and Hibernate My developers are waging a civil war. In one camp, they've embraced Hibernate and Spring. In the other camp, they've denounced frameworks - they're considering Hibernate though. The question is: Are there any nasty surprises, weaknesses or pit-falls that newbie Hibernate-Spring converts are likely to stumble on? PS: We've a DAO library that's not very sophisticated. I doubt that it has Hibernate's richness, but it's reaching some sort of maturity (i.e. it's not been changed in the last few projects it's included). A: If you have a fairly complex database, Hibernate may not be for you. At work we have a fairly complex database with lots of data, and Hibernate doesn't really work for us. We've started using iBATIS instead. However, I know a lot of development shops who use Hibernate successfully - and it does do a lot of grunt work for you - so it's worth considering. Spring is a good tool if you know how to use it properly. I would say that frameworks are definitely a good thing - like others have pointed out, you don't want to reinvent the wheel. Spring contains a lot of modules which will mean you won't have to write so much code. Don't succumb to the "Not Invented Here" syndrome! A: This is one thing (I could remember) that I fell into when I was in my Hibernate days. When you delete (several) child objects from a collection (in a parent entity) and then add new entities to the same collection in one transaction without flushing in the middle, Hibernate will do "insert" before "delete". If the child table has a unique constraint in one of its columns, and you are expecting that you would not violate it since you have already deleted some data before (just like I was), then get ready to be frustrated. Hibernate forum suggests: *It was a DB design flaw, redesign; *flush (or commit if you will) in between the deletes and inserts; I couldn't do both, and end up tweaking the Hibernate source and recompiling. It was only 1 line of code. But the effort to find that one line was equal to approximately 27 cups of coffee and 3 sleepless nights. This is just one example of problems and quirks you might end up when using Hibernate with no real expert on your team (expert: someone with adequate knowledge about the philosophy and internal working of Hibernate). Your problem, solution, litre of coffee, and sleepless night count may vary. But you get the idea. A: Lazy loading is the big gotcha in MVC applications that use Hibernate for their persistence framework. You load the object in the controller and pass it to the JSP view. Some or all of the members of the class are proxied and everything blows up because you Hibernate session was closed when the controller completed. You will need to read the Open Session in View article to understand the problem and get a solution. If you are using Spring the this blog article describes the Spring solution to the open session in view issue. A: I haven't worked much with Java but I did work in large groups of Java developers. The impression I've got was that Spring is OK. But everybody was upset at Hibernate. Half the team if asked "If you could change one thing, what would you change?" and they'd say "Get rid of Hibernate.". When I started to learn Hibernate it struck me at amazingly complex, but I didn't learn enough (thankfully I've moved along) to know if the complexity was justified or not (maybe it was require to solve some complex problems). 
The team got rid of Spring in favor of Guice, but that was more like a political change, at least from my point of view and other developers I've talked to. A: I have always found Hibernate to be a bit complex and hard to learn. But as JPA (Java Persistence API) and EJB (Enterprise Java Beans) 3.0 has existed for a while things have gotten a lot easier, I much prefer annotating my classes to create mappings via JavaDoc or XML. Check out the support in Hibernate. The added bonus is that it is possible (but not effortless) to change the database framework later on if needed. I have used OpenJPA with great results. Lately I have been using JCR (Java Content Repository) more and more. I love the way that my modules can share a single data storage and that I can let the structure and properties evolve. I find it a lot easier working with nodes and properties rather that mapping my objects to a database. A good implementation is Jackrabbit. As for Spring, it has a lot of features I like, but the amount of XML needed to configure means I will never use it. Instead I utilize Guice and absolutely love it. To roundup, I would show your doubting developers how Hibernate will make their life easier. As for Spring I would seriously check if Guice is a viable alternative and then try to show how Spring/Guice makes development better and easier. A: They've denounced frameworks? That's nuts. If you don't use an off-the-shelf framework, then you create your own. It's still a framework. A: I've done a lot of Spring/Hibernate development. Over time the way people used both in combination has changed a bit. The original HibernateTemplate approach has proved to be difficult to debug since it swallows and wraps otherwise useful exceptions; talk to the Hiberante API directly! Please keep looking at the generated SQL (configure your development logging to show SQL). Having an abstraction layer to the database doesn't mean you don't have to think in SQL anymore; you won't get good performance if you otherwise. Consider the project. I've choosen iBatis over Hibernate on several occasions where we had stringent performance requirements, complex legacy schemas or good DBa's capable of writing excellent SQL. A: I've used Hibernate a number of times in the past. Each time I've run into edge cases where determining the syntax devolved into a scavenger hunt through the documentation, Google, and old versions. It is a powerful tool but poorly documented (last I looked). As for Spring, just about every job I've interviewed for or looked at in the past few years involved Spring, it's really become the de-facto standard for Java/web. Using it will help your developers be more marketable in the future, and it'll help you as you'll have a large pool of people who'll understand your application. Writing your own framework is tempting, educational, and fun. Not so great on results. A: As for Hibernate: a very good tool for application which deals with a rapidly changing database schema, a large amount of tables, do lots of simple CRUD operations. Reports with complex queries involved are rather less well handled. But in these case I prefer mixing in JDBC or native queries. So, for a short answer: I do think time spent learning Hibernate is a good investment (they say it is compliant with EJB3.0 and JPA standards, also, but that didn't come into the equation when I evaluated it for my personal use). As for Spring... see The Bile Blog :) Remember: frameworks are not silver bullets, but you should not reinvent the wheel either. 
A: Hibernate has quirks to be sure but that is because the problem it is trying to solve is complex. Every time someone complains about Hibernate I remind them of all of the boring DAO code that they would have to maintain if they weren't using it. A few tips: * *Hibernate is no substitute for a good database design. Hibernate schemas are OK but you will have to tweak them occasionally *Eventually you are going to have to understand how Hibernate lazy loads classes and how that affects things. Hibernate modifies the Java bytecode and you will need to delve into the depths sooner or later if only to explain why object links are null. *Use annotations if you can. *Take the time to learn the Hibernate performance tuning techniques, it will save you in the long run. A: I find it really helps to use well-known frameworks such as Hibernate because it fits your code into a specific mold, or a way of thinking. Meaning, since you're using Hibernate, you write code a certain way, and most if not all developers who know Hibernate will be able to follow your line of thinking quite easily. There's a downside to this, of course. Before you become a hot shot Hibernate developer, you're going to find that you're trying to fit a square into a circular hole. You KNOW what you want to do, and how you were supposed to do it before Hibernate came into the picture, but finding the Hibernate way of doing it may take... quite a bit of time. Still, for companies that frequently hire consultants (who need to understand a lot of source code in a short amount of time) or where the developers sign on and quit frequently, or where you just don't want to bet that your key developers will stay forever and never change jobs -- Hibernate and other standard frameworks are a pretty good idea I think. /Ace A: Spring and Hibernate are frameworks that are tricky to master. It may not be a good idea to use them in projects with tight deadlines while you're still trying to figure out the frameworks. The benefits of the frameworks is basically to try to provide a platform to allow for consistent codes to be products. From experience, you'd be well advised to have developers experienced with the frameworks setting in place best practices. Depending on the design of your application and/or database, there are also quirks that you'll need to circumvent to ensure that the frameworks do not hinder performance. A: I have to agree with many posts on this one. I've used both, extensively, in a variety of settings. If I could undo a design decision it would be to have used Hibernate. We actually budgeted a release in one of our products to swap Hibernate for iBatis and Spring-JDBC for a best-of-all-worlds approach. I can have a new developer get up to speed using Spring-JDBC, Spring-MVC, Spring-Ioc, and iBatis faster than if I just tasked them with Hibernate. Hibernate is just too complicated for this KISS developer. And heaven help you with hibernate if your DBA sees the generated SQL the database sees and sends you back with optimized versions. A: In my opinion, the biggest advantage of Spring is that it encourages and enables better development practices, in particular loose coupling, testing, and more interfaces. Hibernate without Spring can be really painful, but the two together are very useful. Retrofitting an existing project to any framework is going to be painful, but the refactoring process often has serious benefits for long-term maintainability. A: The top answer mentions that Hibernate is poorly documented. 
I agree that the online reference manual could be more complete. However, a book written by Hibernate's authors, 'Java persistence with Hibernate' is a must-read for every Hibernate user and very complete. A: Frameworks are not evil. even the Java SDK is a framework. What they probably fight is framework proliferation. You shouldn't bring a framework to a project just for the kick of it, it should bring consistent value in a reasonable time. Every framework requires a learning curve, but should reward you with increased productivity and features later on. If you struggle with code that is hard to debug because of inconsistent database usage, complicated cache mechanisms, or a myriad of other reasons. Hibernate will add great value. apart from the learning curve (which took about 1 month of practical work for me) there weren't any pitfalls, provided you have someone around to explain the basics for you. A: @slim - I am with you again this morning. It sounds like a classic case of Not Invented Here Syndrome. If they aren't keen on spring, they should consider other options rather than rolling their own framework (whether they acknowledge doing it or not). Guice comes to mind as an possibility. Also picocontainer. There are others out there, depending on what you need. A: Spring and Hibernate definitely make life easier. Getting started with them might be a little time-consuming at the beginning, but you'll certainly benefit from it later. Now the XML is being replaced by annotations, you don't need to type hundreds of lines of XML either. You may want to consider AppFuse to reduce your learning-curve: generate an application, study and adapt it, and off you go.
{ "language": "en", "url": "https://stackoverflow.com/questions/100243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: Test planning/documentation/management tools I'm looking for a good, preferably free, test planning and documentation tool. Ideally something that will keep track of which tests have been run against which software version, with reporting ability. There's a whole bunch of tools listed here, but are there any others, and which ones have you had the best experience with? (You do run tests, right?) UPDATE 2008-01-29 So far TestLink and Fitnesse have been mentioned. A related question also yielded a link to the ReadySet project, an open collection of software planning documentation templates. I have used TestLink and found it okayish, but I cannot say I enjoyed using it. Has anyone had any experience with Fitnesse? Or are there any other free tools out there that you have used and found satisfactory? A: You should definitely try out Klaros-Testmanagement http://www.klaros-testmanagement.com which has a free, unrestricted Community Edition. A: Yes, we do run tests, but not nearly as many as I'd like! I highly recommend TestLink - the list of tools that you linked to shows that it's had more downloads than all of the other tools put together. A: I've used QualityCenter/TestDirector for a long time. I'm now using TestLink and I must say that I prefer QualityCenter/TestDirector by far, even if it is based on some buggy ActiveX control. QualityCenter/TestDirector is easier to use and the interface is much better. TestLink and QualityCenter/TestDirector are mainly for manual test cases (however, you can use Quick Test Pro on QualityCenter/TestDirector to automate your tests). Fitnesse is another kind of tool in my mind: basically, you write your test case on a wiki and link that to a JUnit test. Other tools like that are GreenPepper, Concordion, etc. A: PractiTest is a very good option. Not free but very affordable - http://www.practitest.com A: A little late on this one, but I would have to suggest you try TestLodge for your manual test management. It works in a similar way to TestLink, but it has a more professional interface and is something that we also allow our clients to use for their UAT phase. A: We use Quality Center / TestDirector stuff. It's expensive as far as I know, and it's not that great. A: I've heard good things about Fitnesse but I don't know how good its test tracking is. I know I just recently saw a slick-looking test tracker for Trac or something, but I can't find it now... A: Quality Center. It's expensive, but it is the best. A: I'm with Patrick - good ol' office tools :) I just write mine in Microsoft Word. This is the structure I developed: Writing a System Test Plan. A: One thought, and perhaps not a good one, would be to have every test submit a ticket to your ticketing system when it's run, indicating the test name, build version, date, and test results. That would make the results searchable later on. A: We use a home-grown Access database. This database keeps track of our requirements, test cases, test plan and test runs. We're able to produce an up-to-date RVTM, keep track of progress against the plan, and assign tasks to testers. We integrated it with Outlook, so each tester is assigned a task from the plan by the QA lead. When they're complete, they just tick it off in Outlook and it updates the database. For our small team of testers it works nicely, and we're free to customize it however we want.
{ "language": "en", "url": "https://stackoverflow.com/questions/100246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Reading a PNG image file in .Net 2.0 I'm using C# in .Net 2.0, and I want to read in a PNG image file and check for the first row and first column that has non-transparent pixels. What assembly and/or class should I use? A: The Bitmap class from the System.Drawing.dll assembly: Bitmap bitmap = new Bitmap(@"C:\image.png"); Color clr = bitmap.GetPixel(0, 0); A: Of course I googled already and found the PngBitmapDecoder class, but it doesn't seem to be available in .Net 2.0? http://msdn.microsoft.com/en-us/library/system.windows.media.imaging.pngbitmapdecoder.aspx The above link mentions it's in the PresentationCore assembly, which I don't seem to have included with .Net 2.0 A: Well, the Bitmap class can read a PNG file and access pixels. Can it see transparent pixels? PNG supports transparency while BMP does not. But still, it works. Bitmap bitmap = new Bitmap("icn_loading_animated3a.png"); pictureBox1.Image = bitmap; Color pixel5by10 = bitmap.GetPixel(5, 10); The code above reads my little picture and then reads a transparent pixel. The Color class has RGBA values, and the pixel I read was indeed recognized as transparent.
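Building on the GetPixel approach in the answers, here is a small sketch of my own (file name assumed) that finds the first row and first column containing a non-transparent pixel by checking the alpha channel. GetPixel is slow for large images; LockBits would be the faster route, but this keeps the idea simple.

using System;
using System.Drawing;

class FirstOpaquePixel
{
    static void Main()
    {
        using (Bitmap bitmap = new Bitmap(@"C:\image.png"))
        {
            int firstRow = -1;
            int firstColumn = -1;

            // First row (top to bottom) that contains any non-transparent pixel.
            for (int y = 0; y < bitmap.Height && firstRow < 0; y++)
                for (int x = 0; x < bitmap.Width; x++)
                    if (bitmap.GetPixel(x, y).A != 0) { firstRow = y; break; }

            // First column (left to right) that contains any non-transparent pixel.
            for (int x = 0; x < bitmap.Width && firstColumn < 0; x++)
                for (int y = 0; y < bitmap.Height; y++)
                    if (bitmap.GetPixel(x, y).A != 0) { firstColumn = x; break; }

            Console.WriteLine("First non-transparent row: {0}, column: {1}",
                              firstRow, firstColumn);
        }
    }
}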
{ "language": "en", "url": "https://stackoverflow.com/questions/100247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Where can I get good answers to my Perl-related questions? AFAIK one of the objectives of Stack Overflow is to make sure anyone can come here and find good answers to her Perl related questions. Certainly beginners would ask what is the best online source to learn Perl but others might just want to ask a question. Probably the friendliest place is the Monastery of Perl Monks. It is a web site with a rating system similar to but more simple than Stack Overflow. You can find lots of good answers there and if you don't find an answer you can always ask. The other big resource would be the mailing list of your local Perl Mongers group. Where do you go when you are looking for an answer to a Perl related question? A: I've never asked questions, but that's probably because I tend to write simple scripts rather than applications. Have always found the Perldoc site to be a good way to work out how to do stuff - and I normally learn a bit more whilst looking. A: Sometimes, it might be worthwhile to try to get help via IRC. Quite some time ago, I found the #perl channel on the freenode network to be pretty friendly and helpful. As always it's important that you have exhausted the normal means of solving your problem: Read the documentation, search the web, etc. However, I'll also take this opportunity to mention where you should not go asking for help: The #perl channel on MagNet aka irc.perl.org. It's a channel where people just hang out and talk about essentially anything but help requests. However, on that network, there's quite a few channels particularly about certain Perl-related projects. The people who lurk in those may well be the primary authors of the relevant piece of software. Those channel's aren't help desks either, but if you have a very challenging and interesting problem, you might be able to get them interested enough to help you. Just make sure you do your homework first and be prepared to get involved yourself. A: For documentation on Perl builtins and standard modules, perldoc.perl.org is an web version of the Perl docs with pretty colors and such. I use a keyword bookmark, pd for this. For finding modules, search.cpan.org is the place to go; for this I use the keyword pm. When you have a question that requires humans to answer, Perl Monks is my preferred place, though Stack Overflow seems to have attracted a good crowd already. A: Usenet is pretty good too comp.lang.perl.misc, comp.lang.perl.modules and comp.lang.perl.moderated are good places to ask questions IMHO. A: The Official Perl 5 Wiki is another great resource with lots of info and links. (Also see the bottom of the wiki home page for the latest headlines from the Planet Perl feed aggregator. It's useful to look at, because it sometimes suggests questions that you didn't know that you should be asking.) Incidentally, an ambitious stackoverflow Perl fan could also add a new section to the Perl 5 wiki pointing to questions answered on stackoverflow (and perhaps vice versa). A: I favor use.perl.org over perlmonks. I'm not sure why. It's a smaller community, maybe the signal to noise ratio is higher for me. Incidentally, I get good answers there to any question, not just Perl questions. I ask Java questions, Linux questions, sometimes even cultural questions, and there's always someone there who knows. :) A: I am surprised that no one has mentioned the Perl Beginners mailing list. 
A: It's worth noting that http://perlmonks.org, in addition to the fora, has the Chatterbox, where simple questions can be answered immediately in conversation with other users. It requires setting up an account and logging in before using the Chatterbox, though. A: I like IRC, try #perl on irc.perl.org or irc.freenode.net, or maybe #perlhelp on irc.efnet.nl. Lots and lots of very clever, helpful people always willing to discuss perl-related issues. Maybe I'll see you there :) A: Thus far, I've been pretty content with the quality of Perl answers I've seen here. Many of the most experienced Perl programmers I know from conferences, Perlmonks, use.perl.org, etc. seem to be present here and answering questions seriously and clearly. In cases where an answer has been wrong or simply bad in a sense of promoting bad practice, those answers have been quickly identified, voted down and/or commented-upon. I'm a great fan of Perlmonks, but it's a different sort of site than this one. Besides being specific to Perl, it also has separate areas set aside for reviewing modules, posting code snippets, reviewing books, etc. A: Best place: here. Each time I asked, I got correct answers, in less than 20 minutes. Faster that anywhere else. A: I don't have a specific site, but tend to just google the main keywords of what I am looking for. There are many sites out there, however, I have got the best responses here for very specific stuff. A: Have a big AIM/Jabber list filled with knowledgeable Perl people you're friends with. A: I talk to my imaginary friends on #catalyst, #perl and other channels on irc.perl.org. [edit] Bearing in mind that due to the limitations of non face to face communication with people you don't really know, you need to be simultaneously respectful of people whom it might superficially look like are being very rude to you. It pays to be thick skinned on IRC. A: I would say Stackoverflow
{ "language": "en", "url": "https://stackoverflow.com/questions/100248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Is there an OpenFileOrFolderDialog object in .NET? Is it possible to use the OpenFileDialog class select a file OR folder? It appears only to allow the selection of a file, if you select a folder and then choose open it will navigate to that folder. If the OpenFileDialog can not be used for this is there another object I should be using? EDIT: The scenario is that I have a tool that can upload one...many files or folders. I need to be able to provide a dialog like the OpenFileDialog that allows a user to select a file, folder, or a combination of. I know about the FolderBrowseDialog and that is not the answer in this case. A: This is the solution I have been looking for, this article provides code and describes how to create a dialog that meets my needs. CodeProject: Full Implementation of IShellBrowser A: Yes, you can use OpenFileDialog to select a folder. Here is an article in CodeProject that demonstrated a way to do it (http://www.codeproject.com/KB/dialog/OpenFileOrFolderDialog.aspx). A: In my experience in .NET, I would have to say no, sorry for the negative and short answer, but I really don't think there is A: If you have time, you can create your own pretty easily by using the System.Windows.Forms.TreeView Class. Each node can have a checkbox, so If you populate the treeview (onexpand) you can let the user select the files/directories he wants to upload. This should get you started on populating the treeview with directories, the job to also add files in the treeview should not be that hard: http://www.java2s.com/Tutorial/VB/0280__GUI-Applications/FileTreeview.htm A: No: the OpenFileDialog is just for opening files. Anyway there is a FolderBrowserDialog you can use for that. [Edit] Answered too fast: the Edit from the questioner was afterwards. A: I'd suggest taking a look at the Ookii Dialogs libraries which have an implementation of a folder browser dialog for Windows Forms and WPF respectively: Ookii.Dialogs.Wpf https://github.com/augustoproiete/ookii-dialogs-wpf Ookii.Dialogs.WinForms https://github.com/augustoproiete/ookii-dialogs-winforms
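If you go the roll-your-own route suggested above (a TreeView with checkboxes), a minimal starting point could look like the sketch below. It is my own illustration, not the linked tutorial's code: it lazily populates folders on expand and adds files alongside them, so the user can tick any mix of files and folders.

using System;
using System.IO;
using System.Windows.Forms;

// My own sketch of a check-box TreeView for picking files and/or folders.
public class FileFolderPicker : Form
{
    private readonly TreeView tree = new TreeView();

    public FileFolderPicker()
    {
        tree.Dock = DockStyle.Fill;
        tree.CheckBoxes = true;
        tree.BeforeExpand += OnBeforeExpand;
        Controls.Add(tree);

        foreach (DriveInfo drive in DriveInfo.GetDrives())
        {
            TreeNode node = new TreeNode(drive.Name);
            node.Tag = drive.Name;
            node.Nodes.Add("...");  // placeholder so the expand glyph shows
            tree.Nodes.Add(node);
        }
    }

    private void OnBeforeExpand(object sender, TreeViewCancelEventArgs e)
    {
        TreeNode node = e.Node;
        if (node.Nodes.Count != 1 || node.Nodes[0].Text != "...") return; // already populated

        node.Nodes.Clear();
        string path = (string)node.Tag;
        try
        {
            foreach (string dir in Directory.GetDirectories(path))
            {
                TreeNode child = new TreeNode(Path.GetFileName(dir));
                child.Tag = dir;
                child.Nodes.Add("...");
                node.Nodes.Add(child);
            }
            foreach (string file in Directory.GetFiles(path))
            {
                TreeNode child = new TreeNode(Path.GetFileName(file));
                child.Tag = file;
                node.Nodes.Add(child);
            }
        }
        catch (UnauthorizedAccessException) { /* skip folders we cannot read */ }
    }
}

Walking tree.Nodes for checked items afterwards gives you the list of selected paths to upload.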
{ "language": "en", "url": "https://stackoverflow.com/questions/100264", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Connecting to IMDB Has anyone done this before? It would seem to me that there should be a webservice but I can't find one. I am writing an application for personal use that would just show basic info from IMDB. A: The only "API" the IMDb publishes is a set of plain-text data files containing formatted lists of actors, directors, movies, etc. You would likely need to write your own parser unless somebody has released one for your language. Try Google searches like "imdb api" and "imdb parser". A screen scraper might be useful, but they specifically prohibit scrapers in their terms of use. A: Here is my own solution using RegEx: private const string UglyMovieRegex = "(?<=5>|3>)(Cast|Director:|Fun\\sStuff|Genre:|Plot:|Runtime:|Tagline:|Writers:)" + "|href=\"[\\w\\d/]+?(Genres|name|character)/([\\w]+?)/\".*?>([.\\-\\s\\w]+)</a>" + "|(?<=h\\d>)([.\\w\\s'\\-\"]+)(?=<a\\sc|</d|\\|)"; Regex MovieData = new Regex (UglyMovieRegex, RegexOptions.Compiled | RegexOptions.Multiline | RegexOptions.Singleline ); A: Though this was posted over two years ago, here is some simple Python code:
import urllib2
movie_id = raw_input('Enter the ID of the movie: ')
json = urllib2.urlopen('http://imdbapi.com/?i=' + movie_id + '&r=json')
print json.read()
Save it as imdb.py and then run it in a shell or terminal or whatever. If you want XML data, just replace json with xml. Please note that this uses the imdbapi.com website to return a JSON result; visit that website to view more options. A: IMDB prohibits scrapers, and changes the page layout every once in a while, so parsing HTML is an option, but be prepared to adjust your code 2-3 times a year (been there, done that, given up). They do have a fee-based service giving full access to the data, but you'll also need to explain what it is for, and convince them you are not building a competitive website (I had a link to that, but it seems to have changed and I can't find it now). A: Another alternative is to run the IMDB database on your local machine. Java Movie Database imports the IMDB database files, converts them and provides a locally-accessible copy of IMDB. IMDB has some functionality which Java Movie Database does not have and vice versa, but if what you're looking for is quick access to all the data it might be worth giving this a try. A: Now there is an (undocumented) API like http://www.imdb.com/xml/find?json=1&q=Harry+Potter. See Does IMDB provide an API? A: The libraries for IMDb seem quite unreliable at present and highly inefficient. I really wish IMDb would just create a webservice. After a bit of searching I found a reasonable alternative to IMDb. It provides all the basic information such as overview, year, ratings, posters, trailers etc.: The Movie Database (TMDb). It provides a webservice with wrappers for several languages and seems reliable so far. The search results have been, for me, more accurate as well. A: There is no webservice available. But there are enough HTML scrapers written in every language to suit your needs! I've used the .NET 3.5 Imdb Services open-source project in a few personal projects. One minute of Google results: * *Perl: IMDB-Film *Ruby: libimdb-ruby *Python: IMDbPY A: TRYNT Heavy Technologies provides (for free) a web service for retrieving basic IMDb data -- check out their site at http://www.trynt.com/trynt-movie-imdb-api/. They also have a separate service for Television data. A: There is at least one unofficial IMDb API called IMDb8.
It has about 31 endpoints including * *actors/list-born-today *actors/get-awards-summary *title/get-plots *title/get-top-crew etc. Like any other API it is very straightforward to use. I used this API for building a fun trivia project. You can find a tutorial on how to get started here.
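For a quick .NET experiment along the lines of the undocumented find URL quoted earlier in this thread, something like the sketch below would fetch the raw JSON with WebClient. This is my own illustration, not an official client: the endpoint is undocumented, may be rate-limited or removed at any time, and the JSON shape is not guaranteed, so treat the response as an opaque string and parse defensively.

using System;
using System.Net;
using System.Web; // HttpUtility.UrlEncode; add a reference to System.Web

class ImdbFindDemo
{
    static void Main()
    {
        string query = "Harry Potter";
        // Undocumented endpoint mentioned above; it can change or disappear.
        string url = "http://www.imdb.com/xml/find?json=1&q=" + HttpUtility.UrlEncode(query);

        using (WebClient client = new WebClient())
        {
            // Some endpoints behave differently without a browser-like User-Agent.
            client.Headers[HttpRequestHeader.UserAgent] = "Mozilla/5.0";
            string json = client.DownloadString(url);
            Console.WriteLine(json); // raw JSON; feed it to your favourite JSON parser
        }
    }
}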
{ "language": "en", "url": "https://stackoverflow.com/questions/100280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: How do I check if the scanner is plugged in (C#, .NET TWAIN) I'm using the .NET TWAIN code from http://www.codeproject.com/KB/dotnet/twaindotnet.aspx?msg=1007385#xx1007385xx in my application. When I try to scan an image when the scanner is not plugged in, the application freezes. How can I check if the device is plugged in, using the TWAIN driver? A: I started of with the same source code that you downloaded from CodeProject, but moved most of the code in MainFrame.cs that initiates the scanning to a Scanner class. In order to check for scan errors I call the following method in stead of calling Twain.Acquire directly: enum AcquireResult { OK = 0, InitFailed = 1, DeviceIDFailed = 2, CapabilityFailed = 3, UserInterfaceError = 4 } private void StartScan() { if (!_msgFilter) { _parent.Enabled = false; _msgFilter = true; Application.AddMessageFilter(this); } AcquireResult ar = _twain.Acquire(); if (ar != AcquireResult.OK) { EndingScan(); switch (ar) { case AcquireResult.CapabilityFailed: throw new Exception("Scanner capability setup failed"); case AcquireResult.DeviceIDFailed: throw new Exception("Unable to determine device identity"); case AcquireResult.InitFailed: throw new Exception("Scanner initialisation failed"); case AcquireResult.UserInterfaceError: throw new Exception("Error with the Twain user interface"); default: throw new Exception("Document scanning failed"); } } } I usually initiate the scan event on a seperate thread in order for the app not to freeze while scanning is in progress. A: Maybe I'm taking the question too literally, but using the TWAIN API, it is not possible to check if a device is plugged in i.e. connected and powered on. The TWAIN standard does define a capability for this purpose called CAP_DEVICEONLINE, but this feature is so poorly conceived and so few drivers implement it correctly that it is useless in practice. The closest you can get is this: Open the device (MSG_OPENDS): Almost all drivers will check for device-ready when they are opened, and will display an error dialog to the user. There is no TWAIN mechanism for suppressing or detecting this dialog Some drivers will allow the user to correct the problem and continue, in which case you (your app) will never know there was a problem. Some drivers will allow the user to cancel, in which case the MSG_OPENDS operation will fail, probably returning TWRC_CANCEL but maybe TWRC_FAILURE A few TWAIN drivers will open without error even though the device is off-line. Such a driver may return FALSE to a query of CAP_DEVICEONLINE. Such a driver will probably do the device-online check when you enable the device with MSG_ENABLEDS, and then if the device is not on-line, you get the error dialog to the user, and so on as above. Aside and IMPO: WIA is 'more modern' but also much less comprehensive for scanning than TWAIN, and in my experience unusable for multipage scanning from a document feeder. WIA's designers and maintainers seem not to understand or care about scanners other than low-end consumer flatbeds. It's good for cameras. A: just add this code on your TwainCommand (cmd) case TwainCommand.Null: { EndingScan(); tw.CloseSrc(); Msgbox("There is no device or the scannning has been cancelled."); break; } this will appear if the systems detect no device or the scanning has been cancelled. A: You can check in the registry. In: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{6bdd1fc6-810f-11d0-bec7-08002be2092f} each scanner that's ever been detected is enumerated there in the subkeys. 
Starting with 0000, go through and check if the CreateFileName value is blank or has data. If it has data, it's a connected scanner; if it's blank, it's not connected. A: I tried doing this but it doesn't work well with TWAIN; maybe try WIA, or maybe try this: on the button that runs the scanner, set timer1.Interval = 30000; in your TwainCommand switch (cmd) add { case TwainCommand.TransferReady: { .......... } default: { timer1.Start(); break; } } and on the timer tick event { EndingScan(); tw.CloseSrc(); }
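The registry approach above can be sketched in C# roughly like this. It is my own illustration: the class GUID is the one quoted in that answer, and whether CreateFileName reliably reflects a powered-on device varies by driver, so treat the result as a hint rather than a guarantee.

using System;
using Microsoft.Win32;

class ScannerRegistryCheck
{
    // Still-image device class key quoted in the answer above.
    const string StiClassKey =
        @"SYSTEM\CurrentControlSet\Control\Class\{6bdd1fc6-810f-11d0-bec7-08002be2092f}";

    static bool AnyScannerConnected()
    {
        using (RegistryKey classKey = Registry.LocalMachine.OpenSubKey(StiClassKey))
        {
            if (classKey == null) return false;

            foreach (string subKeyName in classKey.GetSubKeyNames()) // "0000", "0001", ...
            {
                using (RegistryKey deviceKey = classKey.OpenSubKey(subKeyName))
                {
                    if (deviceKey == null) continue;
                    string createFileName = deviceKey.GetValue("CreateFileName") as string;
                    if (!string.IsNullOrEmpty(createFileName))
                        return true; // non-blank value: device appears to be connected
                }
            }
        }
        return false;
    }

    static void Main()
    {
        Console.WriteLine(AnyScannerConnected()
            ? "At least one scanner appears to be connected."
            : "No connected scanner found in the registry.");
    }
}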
{ "language": "en", "url": "https://stackoverflow.com/questions/100284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Mashups and SharePoint Can somebody in SO please provide me with a list of resources about Enterprise Mashups and technologies related to the SharePoint platform? Update (as per the suggestion of @Spoon16 in the comments): The mashup application may typically retrieve a list of contacts from a SharePoint site and display the address of a selected contact person on a map (again maybe Google maps). A: There are a number of different ways to pull external information into SharePoint. For mashing up SharePoint data in external applications: * *Web Services; great web services coverage for the underlying API will allow you to build external mashups (like the one you mention in your question comment). Specifically take a look at the Lists Service. For mashing up external data sources inside of SharePoint: * *Business Data Catalog; when you have the enterprise version of Microsoft Office SharePoint Portal Server you can use the Business Data Catalog to interact with a very wide variety of external datasources in a read/write fashion. Works with relational databases and web services. *Enterprise Search; the indexing capabilities provided by SharePoint's Enterprise Search technology are extensive *RSS Web Part; allows you to consume and apply an XSLT transform to any RSS feed and output the result on any SharePoint page *Page Viewer Web Part; allows an iframe to be embedded on any page, providing an easy mechanism for integrating external applications into the SharePoint environment SharePoint has an extensive development framework that enables you to leverage the full capabilities of the .NET framework to make your wildest mashup dreams come true. You can even add additional services to SharePoint that expose the underlying data in custom ways (not covered by the out-of-the-box web services) if you like. A: It is also possible to customize a List View in order to render a map using Google Maps. Solution description: a Custom List is used for storing geographical locations and visualizing them on a map. The Custom List is based on a Generic List with a Custom Content Type and a View to render the map. The Map List View is implemented using a custom XSLT style sheet and a JavaScript rendering control for the map. For implementation details please see the post "Bringing Map functionality into SharePoint 2010: Rendering Map List View". Usage: for this solution we only need to create a List instance for storing contacts and populate it. (The original post includes screenshots of the form for saving geographical locations and of the Map List View.)
{ "language": "en", "url": "https://stackoverflow.com/questions/100290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Speed up loop using multithreading in C# (Question) Imagine I have an function which goes through one million/billion strings and checks smth in them. f.ex: foreach (String item in ListOfStrings) { result.add(CalculateSmth(item)); } it consumes lot's of time, because CalculateSmth is very time consuming function. I want to ask: how to integrate multithreading in this kinda process? f.ex: I want to fire-up 5 threads and each of them returns some results, and thats goes-on till the list has items. Maybe anyone can show some examples or articles.. Forgot to mention I need it in .NET 2.0 A: You need to split up the work you want to do in parallel. Here is an example of how you can split the work in two: List<string> work = (some list with lots of strings) // Split the work in two List<string> odd = new List<string>(); List<string> even = new List<string>(); for (int i = 0; i < work.Count; i++) { if (i % 2 == 0) { even.Add(work[i]); } else { odd.Add(work[i]); } } // Set up to worker delegates List<Foo> oddResult = new List<Foo>(); Action oddWork = delegate { foreach (string item in odd) oddResult.Add(CalculateSmth(item)); }; List<Foo> evenResult = new List<Foo>(); Action evenWork = delegate { foreach (string item in even) evenResult.Add(CalculateSmth(item)); }; // Run two delegates asynchronously IAsyncResult evenHandle = evenWork.BeginInvoke(null, null); IAsyncResult oddHandle = oddWork.BeginInvoke(null, null); // Wait for both to finish evenWork.EndInvoke(evenHandle); oddWork.EndInvoke(oddHandle); // Merge the results from the two jobs List<Foo> allResults = new List<Foo>(); allResults.AddRange(oddResult); allResults.AddRange(evenResult); return allResults; A: The first question you must answer is whether you should be using threading If your function CalculateSmth() is basically CPU-bound, i.e. heavy in CPU-usage and basically no I/O-usage, then I have a hard time seeing the point of using threads, since the threads will be competing over the same resource, in this case the CPU. If your CalculateSmth() is using both CPU and I/O, then it might be a point in using threading. I totally agree with the comment to my answer. I made a erroneous assumption that we were talking about a single CPU with one core, but these days we have multi-core CPUs, my bad. A: You could try the Parallel extensions (part of .NET 4.0) These allow you to write something like: Parallel.Foreach (ListOfStrings, (item) => result.add(CalculateSmth(item)); ); Of course result.add would need to be thread safe. A: The Parallel extensions is cool, but this can also be done just by using the threadpool like this: using System.Collections.Generic; using System.Threading; namespace noocyte.Threading { class CalcState { public CalcState(ManualResetEvent reset, string input) { Reset = reset; Input = input; } public ManualResetEvent Reset { get; private set; } public string Input { get; set; } } class CalculateMT { List<string> result = new List<string>(); List<ManualResetEvent> events = new List<ManualResetEvent>(); private void Calc() { List<string> aList = new List<string>(); aList.Add("test"); foreach (var item in aList) { CalcState cs = new CalcState(new ManualResetEvent(false), item); events.Add(cs.Reset); ThreadPool.QueueUserWorkItem(new WaitCallback(Calculate), cs); } WaitHandle.WaitAll(events.ToArray()); } private void Calculate(object s) { CalcState cs = s as CalcState; cs.Reset.Set(); result.Add(cs.Input); } } } A: Note that concurrency doesn't magically give you more resource. 
You need to establish what is slowing CalculateSmth down. For example, if it's CPU-bound (and you're on a single core) then the same number of CPU ticks will go to the code, whether you execute them sequentially or in parallel. Plus you'd get some overhead from managing the threads. The same argument applies to other constraints (e.g. I/O). You'll only get performance gains from this if CalculateSmth leaves resources free during its execution that could be used by another instance. That's not uncommon. For example, if the task involves IO followed by some CPU stuff, then process 1 could be doing the CPU stuff while process 2 is doing the IO. As mats points out, a chain of producer-consumer units can achieve this, if you have the infrastructure. A: Not that I have any good articles here right now, but what you want to do is something along the lines of Producer-Consumer with a Threadpool (a rough sketch follows below). The producer loops through and creates tasks (which in this case could be to just queue up the items in a List or Stack). The consumers are, say, five threads that read one item off the stack, consume it by calculating it, and then store it elsewhere. This way the multithreading is limited to just those five threads, and they will all have work to do up until the stack is empty. Things to think about: * *Put protection on the input and output list, such as a mutex. *If the order is important, make sure that the output order is maintained. One example could be to store them in a SortedList or something like that. *Make sure that CalculateSmth is thread safe, and that it doesn't use any global state.
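Here is that rough sketch: a minimal C# producer-consumer version that only needs .NET 2.0 (plain threads and a Queue, no Parallel extensions). CalculateSmth is a stand-in taken from the question, the class and member names are invented for the example, and the result order is not preserved, which matters if outputs must line up with inputs.

using System;
using System.Collections.Generic;
using System.Threading;

class ParallelCalculator
{
    private readonly Queue<string> queue = new Queue<string>();
    private readonly List<string> results = new List<string>();
    private readonly object sync = new object();

    public List<string> Run(IEnumerable<string> items, int workerCount)
    {
        // Producer side: fill the work queue up front (could also be done lazily).
        foreach (string item in items)
            queue.Enqueue(item);

        // Consumer side: a fixed number of worker threads drain the queue.
        List<Thread> workers = new List<Thread>();
        for (int i = 0; i < workerCount; i++)
        {
            Thread t = new Thread(Worker);
            workers.Add(t);
            t.Start();
        }
        foreach (Thread t in workers)
            t.Join();   // wait until every worker has finished

        return results;
    }

    private void Worker()
    {
        while (true)
        {
            string item;
            lock (sync)
            {
                if (queue.Count == 0)
                    return;             // queue drained, this worker exits
                item = queue.Dequeue();
            }

            string value = CalculateSmth(item);   // the expensive call runs outside the lock

            lock (sync)
            {
                results.Add(value);     // the shared output list needs the same protection
            }
        }
    }

    private static string CalculateSmth(string item)
    {
        return item.ToUpper();          // stand-in for the real, time-consuming work
    }
}

Usage would be something like new ParallelCalculator().Run(ListOfStrings, 5). If order matters, store (index, value) pairs instead and sort at the end, as the SortedList suggestion above implies.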
{ "language": "en", "url": "https://stackoverflow.com/questions/100291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: How's the ActionScript2 -> ActionScript3 learning curve? I knew ActionScript and ActionScript2 inside out, but I've been away from Flash for a couple of years. What's the magnitude of becoming fluent in ActionScript3 and the new Flash functionality? From Colin Moock's blog, I heard that some of the fundamental movieclip methods have changed... A: You've probably already seen the as2 -> as3 migration doc? Sure, some syntax has changed, but if you know as2 well, writing as3 won't be a problem at all. Some weird things may come up in the beginning with the syntax, but that's just checking the documentation for the new way of doing it. If you're hacking yourself through as1 & as2, as3 may cause some headaches since it's much stricter (doesn't allow you to do stuff you shouldn't do anyway) ;) You'll probably be fine with as3 in less than a week. A: I would say it depends on the level of your AS2 (and general OOP) knowledge. If you're used to object-oriented programming and strong typing, the learning curve shouldn't really be that steep. I was brought up as a Java programmer and find that the new concepts in AS3 are for the most part easy to grasp and that the API is a lot more consistent and makes more sense than in AS2. A: Actually AS3 is much better.. more like C# or Java, with a consistent API, naming, packages. It is a pleasure to use AS3, while using AS2 is often hell. And that's the problem. If you are used to AS2 with its quirks, hacks needed here and there.. fast and dirty ways.. then AS3 isn't simple to get used to. But in the long run it is really worth it. And anyway.. AS2 is the old one.. the dead one. A: ActionScript 3 is indeed far different in many ways, but it is important to realize that you are merely memorizing built-in packages, classes, properties, and methods, similar to learning prior versions. Some of the larger hurdles to get over are the display list and events (event flow; for example, bubbling). Much of the language has been changed to the developer's advantage, such as a unified way of loading dynamic assets with the Loader class for display objects or the URLLoader class for loading data such as XML and CSS, or calling a PHP script. Once you feel confident with some of these new aspects of the language you can begin extending prior classes or creating new ones. ActionScript 3 may have a steep learning curve, but the opposite side of the hill is almost equally steep! After you have your eye-opening "OH, I GET IT!" moment, it is an addictive and thrilling ride. The possibilities become seemingly limitless and soon you're developing whatever comes to mind! I suggest that anyone who wants to learn proper techniques, conventions, and workflow head to http://www.gotoandlearn.com where Lee Brimelow does an excellent job displaying leading-edge techniques and effects. Lee also authors http://theflashblog.com which I personally check daily. A: Antti's spot on with the link to the migration doc. Colin Moock also starts a discussion about the similarities and differences between AS2 and AS3 and calls on Adobe and the Community to sort them. In the latter article, he brings up 10 solid WTFs about the move to AS3, explaining each problem and then including "What Should Adobe Do" and "What Should We Do" sections for each: * *The removal of on()/onClipEvent() from Flash CS3 makes creating simple interactivity hard. *Getting rid of loaded .swf files is hard. *Casting DisplayObject.parent makes controlling parent movie clips hard. *The removal of getURL() makes linking hard. 
*The removal of loadMovie() makes loading .swf files and images hard. *ActionScript 3.0's additional errors make coding cumbersome. *Referring to library symbols dynamically is unintuitive. *Adding custom functionality to manually created text fields, to all movie clips, or to all buttons is cumbersome. *The removal of duplicateMovieClip() makes cloning a MovieClip instance (really) hard.
{ "language": "en", "url": "https://stackoverflow.com/questions/100295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I analyze Python code to identify problematic areas? I have a large source repository split across multiple projects. I would like to produce a report about the health of the source code, identifying problem areas that need to be addressed. Specifically, I'd like to call out routines with a high cyclomatic complexity, identify repetition, and perhaps run some lint-like static analysis to spot suspicious (and thus likely erroneous) constructs. How might I go about constructing such a report? A: Thanks to Pydev, you can integrate pylint into the Eclipse IDE really easily and get a code report each time you save a modified file. A: Use flake8, which provides pep8, pyflakes, and cyclomatic complexity analysis in one tool. A: There is a tool called CloneDigger that helps you find similar code snippets. A: For measuring cyclomatic complexity, there's a nice tool available at traceback.org. The page also gives a good overview of how to interpret the results. +1 for pylint. It is great at verifying adherence to coding standards (be it PEP8 or your own organization's variant), which can in the end help to reduce cyclomatic complexity. A: For checking cyclomatic complexity, there is of course the mccabe package. Installation: $ pip install --upgrade mccabe Usage: $ python -m mccabe --min=6 path/to/myfile.py Note the threshold of 6 above. Per this answer, scores >5 probably should be simplified. Sample output with --min=3: 68:1: 'Fetcher.fetch' 3 48:1: 'Fetcher._read_dom_tag' 3 103:1: 'main' 3 It can optionally also be used via pylint-mccabe or pytest-mccabe, etc. A: For cyclomatic complexity you can use radon: https://github.com/rubik/radon (Use pip to install it: pip install radon) Additionally, it also has these features: * *raw metrics (these include SLOC, comment lines, blank lines, &c.) *Halstead metrics (all of them) *Maintainability Index (the one used in Visual Studio) A: For static analysis there are pylint and pychecker. Personally I use pylint as it seems to be more comprehensive than pychecker. For cyclomatic complexity you can try this Perl program, or this article which introduces a Python program to do the same. A: Pycana works like a charm when you need to understand a new project! PyCAna (Python Code Analyzer) is a fancy name for a simple code analyzer for Python that creates a class diagram after executing your code. See how it works: http://pycana.sourceforge.net/
{ "language": "en", "url": "https://stackoverflow.com/questions/100298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "101" }
Q: Why does a call to BeginPaint() always generate a WM_NCPAINT message? I'm facing a problem with the Win32 API. I have a program that, when it handles WM_PAINT messages, calls BeginPaint to clip the region and validate the update region, but the BeginPaint function is always generating a WM_NCPAINT message with the same update region, even if the touched part that needs repainting is only inside the client region. Does anyone have any clue why this is happening? It's on child windows with the WS_CHILD style. A: The MSDN entry for WM_PAINT says: The function may also send the WM_NCPAINT message to the window procedure if the window frame must be painted and send the WM_ERASEBKGND message if the window background must be erased. I'm trying to figure out why it is always sent even if the border isn't touched. I tested that by opening a small Notepad window inside the control and minimizing it. It doesn't touch the borders of the control, just the inside, and BeginPaint() still generates a WM_NCPAINT. A: I guess the WM_NCPAINT message is always sent with the assumption that the border needs to be repainted as well! A: What happens if you call SetWindowPos and pass SWP_DEFERERASE as an argument for the uFlags parameter? This should prevent generation of the WM_SYNCPAINT message, which would indirectly cause the WM_NCPAINT message to be sent.
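As a side note, if you want to confirm exactly when WM_NCPAINT arrives relative to WM_PAINT on a child window, logging the messages from the window procedure makes the ordering obvious. The sketch below is C#/WinForms rather than the original Win32 C code, purely as an illustration under that assumption; the message constants are the standard Win32 values and the control name is invented.

using System;
using System.Diagnostics;
using System.Windows.Forms;

// Hypothetical child control that logs paint-related messages so you can see
// whether WM_NCPAINT really accompanies every WM_PAINT / BeginPaint cycle.
class PaintLoggingPanel : Panel
{
    const int WM_PAINT      = 0x000F;
    const int WM_ERASEBKGND = 0x0014;
    const int WM_NCPAINT    = 0x0085;

    protected override void WndProc(ref Message m)
    {
        switch (m.Msg)
        {
            case WM_PAINT:      Debug.WriteLine("WM_PAINT");      break;
            case WM_ERASEBKGND: Debug.WriteLine("WM_ERASEBKGND"); break;
            case WM_NCPAINT:    Debug.WriteLine("WM_NCPAINT");    break;
        }
        base.WndProc(ref m);   // let the default handling (BeginPaint etc.) run as usual
    }
}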
{ "language": "en", "url": "https://stackoverflow.com/questions/100304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: XSL code coverage tool Are there any tools that can tell me what percentage of an XSL document actually gets executed during tests? UPDATE I could not find anything better than Oxygen's XSL debugger and profiler, so I'm accepting Mladen's answer. A: Not sure about code coverage itself, but you can find an XML debugger and profiler from Oxygen which might help you out. A: This didn't exist back when this question was asked, but now there is ONE option for finding code coverage of XSLT documents: http://code.google.com/p/cakupan/ I'll admit that I haven't used it yet, as I'm still gathering information right now, but as far as I'm aware, this is IT. A: If anyone is still interested, Saxon has a performance analysis feature that gives you a breakdown of each template and the number of times it is used (which is great for optimisation).
{ "language": "en", "url": "https://stackoverflow.com/questions/100324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Groovy: Correct Syntax for XMLSlurper to find elements with a given attribute Given a HTML file with the structure html -> body -> a bunch of divs what is the correct groovy statement to find all of the divs with a non blank tags attribute? The following is not working: def nodes = html.body.div.findAll { it.@tags != null } because it finds all the nodes. A: Try the following (Groovy 1.5.6): def doc = """ <html> <body> <div tags="1">test1</div> <div>test2</div> <div tags="">test3</div> <div tags="4">test4</div> </body> </html> """ def html = new XmlSlurper().parseText( doc) html.body.div.findAll { it.@tags.text()}.each { div -> println div.text() } This outputs: test1 test4
{ "language": "en", "url": "https://stackoverflow.com/questions/100325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Automatically creating a VMWare image I want to be able to create a VMWare image, by specifying the components that should go into it, preferably in a script, and then get VMWare, or some third process, to build the machine from the specs. So I want to be able to say eg. OS - Windows 2003, Apps - Visual Studio etc, and then it builds the machine automatically from the description. I know that you can create a template from an existing machine, and use that, this is going one step higher, and building the template from a set of specifications. Any ideas? A: You can use an opensource Unattended.
{ "language": "en", "url": "https://stackoverflow.com/questions/100327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Best way to store event times in (My)SQL database I'm trying to decide on the best way to store event times in a MySQL database. These should be as flexible as possible and be able to represent "single events" (starts at a certain time, does not necessarily need an end time), "all day" and "multi day" events, repeating events, repeating all day events, possibly "3rd Saturday of the month" type events etc. Please suggest some tried and proven database schemes. A: Table: Events * *StartTime (dateTime) *EndTime (dateTime) null for no end time *RepeatUnit (int) null = noRepeat, 1 = hour, 2 = day, 3 = week, 4 = dayOfMonth, 5 = month, 6 = year *NthDayOfMonth (int) *RepeatMultiple (int) eg, set RepeatUnit to 3, and this to 2 for every fortnight *Id - if required, StartTime might be suitable for you to uniquely identify an event. *Name (string) - name given to the event, if required This might help. It would require a decent amount of code to interpret when the repeats are. Parts of the time fields that are at lower resolutions than the repeat unit would have to be ignored. Doing the 3rd saturday of the month woudln't be easy either... the NthDayOfMonth info would be required just for doing this kind of functionality. The database schema required for this is simple in comparison with the code required to work out where repeats fall. A: I worked on a planner application which loosely follows the iCalendar standard (to record events). You may want to read RFC 2445 or this schema published by Apple Inc. icalendar schema to see if they are relevant to the problem. My database schema (recurring/whole-day event was not considered at the time) event (event_id, # primary key dtstart, dtend, summary, categories, class, priority, summary, transp, created, calendar_id, # foreign key status, organizer_id, # foreign key comment, last_modified, location, uid); the foreign key calendar_id in the previous table refers this calendar(calendar_id, # primary key name); while organizer_id refers this (with other properties like common name etc. missing) organizer(organizer_id, # primary key name); Another documentation that you may find more readable is located here hope this helps A: You need two tables. One for storing the repeating events (table repeatevent) and one for storing the events (table event). Simple entries are only stored in the event table. Repeating entries are stored in the repeatevent table and all single entries for the repeating event are also stored in the event table. This means that everytime you enter a repeating entry, you have to enter all the single resulting entries. You can do this by using triggers, or as part of your business logic. The advantage of this approach is, that querying events is simple. They are all in the event table. Without the storage of repeating events in the event table, you would have complex SQL or business logic that would make your system slow. create table repeatevent ( id int not null auto_increment, type int, // 0: daily, 1:weekly, 2: monthly, .... 
starttime datetime not null, // starttime of the first event of the repetition endtime datetime, // endtime of the first event of the repetition allday int, // 0: no, 1: yes until datetime, // endtime of the last event of the repetition description varchar(30) ) create table event ( id int not null auto_increment, repeatevent int null references repeatevent, // filled if created as part of a repeating event starttime datetime not null, endtime datetime, allday int, description varchar(30) ) A: Same way the cron does it? Recording both start and end time that way. A: Use datetime and MySQL's built-in NOW() function. Create the record when the process starts, and update your column that tracks the end time when the process ends.
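As the first answer points out, the hard part with a schema like this is the code that works out where the repeats actually fall. A rough C# sketch of that expansion step is below; the unit codes follow the RepeatUnit convention suggested above (1 = hour, 2 = day, 3 = week, 5 = month, 6 = year), the field names mirror that answer, and the "Nth day of month" case (4) is deliberately left out because it needs its own calendar logic.

using System;
using System.Collections.Generic;

class RepeatingEvent
{
    public DateTime Start;      // start of the first occurrence
    public DateTime Until;      // end of the repetition
    public int RepeatUnit;      // 1 = hour, 2 = day, 3 = week, 5 = month, 6 = year
    public int RepeatMultiple;  // e.g. unit = week, multiple = 2 -> fortnightly

    // Expand the repetition into concrete occurrence start times.
    public IEnumerable<DateTime> Occurrences()
    {
        DateTime current = Start;
        while (current <= Until)
        {
            yield return current;
            switch (RepeatUnit)
            {
                case 1: current = current.AddHours(RepeatMultiple); break;
                case 2: current = current.AddDays(RepeatMultiple); break;
                case 3: current = current.AddDays(7 * RepeatMultiple); break;
                case 5: current = current.AddMonths(RepeatMultiple); break;
                case 6: current = current.AddYears(RepeatMultiple); break;
                default: yield break;   // unknown or unhandled unit: stop rather than loop forever
            }
        }
    }
}

This also illustrates why the second answer materialises the individual occurrences into the event table: doing the expansion once at insert time keeps queries simple, at the cost of generating the rows up front.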
{ "language": "en", "url": "https://stackoverflow.com/questions/100332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Per-process CPU usage on Win95 / Win98 / WinME How can you programmatically measure per-process (or better, per-thread) CPU usage under windows 95, windows 98 and windows ME? If it requires the DDK, where can you obtain that? Please note the Win9x requirement. It's easy on NT. EDIT: I tried installing the Win95/98 version of WMI, but Win32_Process.KernelModeTime and Win32_Process.UserModeTime return Null (as do most Win32_Process properties under win9x). A: It seems Performance Data Helper should be possible to install on Win9x architecture. Using this you should be able to get the times spent. Link which hopefully will help you or at least give you some starting point: [python-win32] Monitoring CPU Usage A: Take a look at Writing a performance monitor and if you need it the Win98 DDK is available here.
{ "language": "en", "url": "https://stackoverflow.com/questions/100333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Java: Prevent NPE in MetalFileChooserUI$IndentIcon.getIconWidth? on Windows systems. I get the following NPE with the FileChooser. It is a known bug that is not fixed by sun yet. http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6342301 Does somebody know a workaround to prevent this exception? Thanks. André Exception in thread "AWT-EventQueue-2" java.lang.NullPointerException at javax.swing.plaf.metal.MetalFileChooserUI$IndentIcon.getIconWidth(Unknown Source) at javax.swing.SwingUtilities.layoutCompoundLabelImpl(Unknown Source) at javax.swing.SwingUtilities.layoutCompoundLabel(Unknown Source) at javax.swing.plaf.basic.BasicLabelUI.layoutCL(Unknown Source) at javax.swing.plaf.basic.BasicLabelUI.getPreferredSize(Unknown Source) at javax.swing.JComponent.getPreferredSize(Unknown Source) at javax.swing.plaf.basic.BasicListUI.updateLayoutState(Unknown Source) at javax.swing.plaf.basic.BasicListUI.maybeUpdateLayoutState(Unknown Source) at javax.swing.plaf.basic.BasicListUI$Handler.valueChanged(Unknown Source) at javax.swing.DefaultListSelectionModel.fireValueChanged(Unknown Source) at javax.swing.DefaultListSelectionModel.fireValueChanged(Unknown Source) at javax.swing.DefaultListSelectionModel.fireValueChanged(Unknown Source) at javax.swing.DefaultListSelectionModel.changeSelection(Unknown Source) at javax.swing.DefaultListSelectionModel.changeSelection(Unknown Source) at javax.swing.DefaultListSelectionModel.setSelectionInterval(Unknown Source) at javax.swing.JList.setSelectedIndex(Unknown Source) at javax.swing.plaf.basic.BasicComboPopup.setListSelection(Unknown Source) at javax.swing.plaf.basic.BasicComboPopup.access$300(Unknown Source) at javax.swing.plaf.basic.BasicComboPopup$Handler.itemStateChanged(Unknown Source) at javax.swing.JComboBox.fireItemStateChanged(Unknown Source) at javax.swing.JComboBox.selectedItemChanged(Unknown Source) at javax.swing.JComboBox.contentsChanged(Unknown Source) A: In the bug report that you linked to, they also mention a workaround. It seems to come down to calling the methods in a specific order. Have you tried that? A DESCRIPTION OF THE PROBLEM : There appears to be an undocumented bad intereaction between explicitely setting the UI and removing all file filters, even temporarily. If the latter is done before setting the ui, trying to display a file dialog will throw an exception but not if the ui was set prior to messing with the filters. Maybe it is possible to make the code more robust against this or to include a warning in the docs? STEPS TO FOLLOW TO REPRODUCE THE PROBLEM : Run the attached program: it will not bomb. Then move the setUI line to the bottom of the constructor and try again: it will. A: So, now with registered account :) The problem with these steps in the mentioned link is, that the look and feel and therefor the UI is set globaly in our software. So the UI is set before I'm able to manipulate the file filters. Edit: Missunderstood the code for reproduction. The exampled works as mentioned. Thanks. A: It looks like the workaround description says you should try to set the UI before manipulating the filters. Does this not work? If that doesn't work, is it possible to create an instance of your manipulated FileFilters at the same point that you are setting your UI?
{ "language": "en", "url": "https://stackoverflow.com/questions/100343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Looking for C# HTML parser Possible Duplicate: What is the best way to parse html in C#? I would like to extract the structure of the HTML document - so the tags are more important than the content. Ideally, it would also be able to cope reasonably well with badly-formed HTML. Anyone know of a reliable and efficient parser?
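For what it's worth, one commonly used open-source option for this is the HTML Agility Pack, which is tolerant of badly-formed markup. The sketch below just walks the parsed tree and prints the tag structure while ignoring text content; it is an illustration of that library's API as I understand it, not a recommendation taken from the original question.

using System;
using HtmlAgilityPack;

class TagStructureDump
{
    static void Main()
    {
        string html = "<html><body><div><p>text</p><br></div></body></html>";

        HtmlDocument doc = new HtmlDocument();
        doc.LoadHtml(html);   // does not throw on malformed HTML; it repairs what it can

        Dump(doc.DocumentNode, 0);
    }

    // Recursively print element names, indented by depth, ignoring text nodes.
    static void Dump(HtmlNode node, int depth)
    {
        if (node.NodeType == HtmlNodeType.Element)
            Console.WriteLine(new string(' ', depth * 2) + node.Name);

        foreach (HtmlNode child in node.ChildNodes)
            Dump(child, depth + 1);
    }
}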
{ "language": "en", "url": "https://stackoverflow.com/questions/100358", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "112" }
Q: Disable Cakephp's Auto Model "feature" In cake 1.2 there is a feature that allows the developer to not have to create models, but rather have cake do the detective work at run time and create the model for you. This process happens each time and is neat, but in my case very hazardous. I read about this somewhere and now I'm experiencing the bad side of this. I've created a plugin with all the files and everything appeared to be just great. That is, until I tried to use some of the model's associations and functions. Then cake claims that this model I've created doesn't exist. I've narrowed it down to cake using this auto model feature instead of throwing an error! So I have no idea what's wrong! Does anybody know how to disable this auto model feature? It's a good thought, but I can't seem to find where I've gone wrong with my plugin and an error would be very helpful! A: There's always the possibility to actually create the model file and set var $useTable = false. If this is not what you're asking for and the model and its associations actually do exist, but Cake seems to be unable to find them, you'll have to triple check the names of all models and their class names in both the actual model definition and in the association definitions. AFAIK you can't disable the auto modelling. A: Cake 1.2 It's a hack and it's ugly because you need to edit core Cake files, but this is how I do it: \cake\libs\class_registry.php : line 127ish if (App::import($type, $plugin . $class)) { ${$class} =& new $class($options); } elseif ($type === 'Model') { /* Print out whatever debug info we have then exit */ pr($objects); die("unable to find class $type, $plugin$class"); /* We don't want to base this on the app model */ ${$class} =& new AppModel($options); } Cake 2 Costa recommends changing $strict to true in the init function on line 95 of Cake\Utility\ClassRegistry.php. See the Cake API docs for init: ClassRegistry.php - init function A: Use var $useTable = false; in your model definition. A: Delete all cached files (all files under app/tmp, keep the folders). In most cases where models seem to be acting in unexpected ways, often not including changes you've made, it is because Cake is using an old cached version of the model. A: Uh...where do we start. First, as Alexander suggested, clear your app cache. If you still get the same behaviour, there is probably something wrong with the class and/or file names. Remember the rules. For a controller: * classname: BlastsController * filename: blasts_controller.php For a model: * classname: Blast * filename: blast.php Don't forget to handle the irregular inflections properly.
{ "language": "en", "url": "https://stackoverflow.com/questions/100365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to do picture overlay in HTML (something like marker on top of google map)? Does anyone know how to overlay pictures, i.e. make them appear on top of each other, in HTML? The effect would be something like a marker/icon appearing on a Google Map, where the user can specify the coordinates at which the second picture appears on the first picture. Thanks. A: You can use <div> containers to separate content into multiple layers. To do this, the div containers have to be positioned absolutely and marked with a z-index. For instance: <div style="position: absolute; z-index:100">This is in background</div> <div style="position: absolute; z-index:5000">This is in foreground</div> Of course, the content can also contain images, etc. A: Use a DIV tag and CSS absolute positioning, with the z-index property. http://www.w3.org/TR/CSS2/visuren.html A: css layers.
{ "language": "en", "url": "https://stackoverflow.com/questions/100376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Tabs and spaces conversion I would like to automatically convert between tabs and spaces for indentation when I commit/update code to/from our repository. I have found the AnyEdit plugin for eclipse, which can convert directories of files. Not bad for a start, but does anybody have more expierience on how to handle this? Or maybe know of an Ant script or something else? A: A bit overkill, and only something to attempt with certain repository products that can handle it, but a hook script to call indent or astyle could do the trick. It'll format everyone's code the same way for every file, depending how you write the hook script, and it'd have to be pre-commit of course. A: * *Make sure you have your editor set to use spaces instead of tabs. *Select all text and hit CMD + I on mac or CTRL + I on windows. A: Why not just use the code formatter and/or cleanup function? It has settings that take care of that stuff for you. You can even have it run automatically on save. Edit: As Peter Perháč points out in the comments, this only answers half the question. I don't have any practical experience, but you could try the Maven Eclipse Format Plugin to format from a Maven build. Unfortunately, that's Maven only, and I know of no light-weight command line formatter. But if you happen to use Maven, you can bind the format goal to the proper phase, and if you set Eclipse to auto-build, it would format on update. Depending on the SCM tool (git, svn, etc), you could also create a hook that runs the build (but it might be a bit too heavy-weight for that). A: I use the AnyEdit plugin to auto-convert tabs to spaces on the save of a file. I also configure the base text editor (from which pretty much all the others derive) to insert spaces instead of tabs. This sounds redundant, but what it does is ensure that I don't insert any tabs, and any file that I edit that already has tabs will be converted as soon as I save it. Tabs have no place in source code. If someone else looks at the file with their tab-stops set to a different value, they lose most alignment/formatting anyway. (Of course, if you have Makefiles that you edit directly, you'll want to make sure their tabs are retained. But in my projects, if make is used at all the Makefile is derived from a different source, such as a Makefile.PL in Perl.) A: You may lose alignment/formatting by using tabs instead of spaces if and only if the tabs are not at the beginning of the line. Never use tabs insides lines, always use tabs at the front of lines. This allows you to use your editor to adjust to your desired indent level without impacting your co-workers view of the file. Challenge: Find an example where tabs at the front of the line loses alignment. A: I use Kedit for just this thing. It also natively converts text files from Macintosh, UNIX and MS-Dos. Since it's an older editor, I use one of it's scripts to handle unicode files. You might also want to look at some of the other smart editors. A: I use Eclipse for Java EE developer 4.6.0 Neon. I use http://marketplace.eclipse.org/content/anyedit-tools
{ "language": "en", "url": "https://stackoverflow.com/questions/100388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32" }
Q: Hiding a queryString in an ASP.NET Webapplication I have two webapplication, one is a simple authenticationsite which can authenticate the logged in user and redirects him then to another site. Therefore I have to pass ther userId (GUID) to the second application. Currently this is done via the URL but i would like to hide this id. Has anybody an idea how to do this properly? [EDIT]: I can't use the Session because of the ApplicationBoundaries (2 different Servers) A: This sounds like a tricky situation. There are however several options you can use but it all depends on what your application does. Let's call WebApp1 your authenticate site, and WebApp2 your distination site once authenticated. Can WebApp2 not call WebApp1 behind the scenes? (Services) THe problem with passing this Guid between applications is it's going through clear text, and considering it's a user id, if anyone manages to intercept this they will have access to WebApp2 for life. Whether you pass it in a querystring or form variable, it's still vulnerable. If you can't use WebApp2 to query WebApp1, you should consider WebApp1 creating a temporary Guid that expires. That would be much safer long term, but as it's clear text is still susceptible to attack. The 2 web apps will also need access to the same data store. Ultimately, i think the AUthentication Site should be a service which WebApp2 can consume. Users should login through WebApp2, which will call WebApp1 securely for authentication. WebApp2 can then manage it's own session. A: If you can't use cookies because it's cross domain then encrypt it, with a nonce. Setup a shared secret/key between the two servers; send the encrypted GUID and nonce combination to the second server. Unencrypt, check the nonce hasn't already been used (to stop reply attacks), then use the unencrypted GUID. If you want to be extra tricky have a web service on app1 where it can check the nonce was actually issued (at this point you're heading towards WSTrust and a single sign-on solution, which generally solve what you're trying to do) Even with cookies, as they're easily edited/faked, you should have some form of checking. A: You have two ASP.NET web applications, and one application does nothing but authenticate a user? this sounds like a job for.... Web Services! Create a new web service on the authentication app (They are the .asmx extension), and add a single method that takes in the user and password etc, and returns authentication info. Then import the WSDL on your 2nd app, and call the 1st app like it was a method. It will simplify your code, and fix your issue. An Example: AuthenticateUserService.asmx goes on the Authentication app: using System; using System.Web; using System.Web.Services; using System.Web.Services.Protocols; [WebService(Namespace = "http://tempuri.org/")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] public class AuthenticateUserService : System.Web.Services.WebService { [WebMethod] public bool AuthenticateUser(string username, string passhash) { // Fake authentication for the example return (username == "jon" && passhash == "SomeHashedValueOfFoobar"); } } Once this is setup, fire up your main app, and right click the project and click "Add Web Reference". Enter the url to the asmx on the authentication app, and Visual Studio will discover it and create a proxy class. 
Once that is done, we can call that method like it was a local method in our main app: protected void Page_Load(object sender, EventArgs e) { // Now we can easily authenticate user in our code AuthenticateUserService authenticationProxy = new AuthenticateUserService(); bool isUserAuthenticated = authenticationProxy.AuthenticateUser("jon", SomeHashMethod("foobar")); } So, what does this really do? It eliminates the client from the authentication process. Your current process: * *Client Enters credentials to AppA *AppA redirects the client to AppB *AppB redirects the client back to AppA if the credentials match. Is replaced with a server side SOAP call between AppA and AppB. Now its like this: * *Client enters credentials in AppA *AppA asks AppB if they are good *AppA serves proper content to the client. A: Pass the GUID through a session, best way. http://www.w3schools.com/ASP/asp_sessions.asp OR, since it's 2 different servers, pass the information by POST method: http://www.w3schools.com/aspnet/aspnet_forms.asp The other possibility is to store the session state in a database on the local server, and remotely access that database from the other server to see if the user has successfully logged in and within session timelimit. With that in mind, you can do the entire authentication remotely as well. Remotely connect to the local database from the remote server and check the login credentials from there...that way you will be able to store the session and/or cookie on the remote server. I would recommend AGAINST the hidden field proposition, as it completely counteracts what you are trying to do! You are trying to hide the GUID in the URL but posting the same information in your HTML code! This is not the way to do it. Best choice is the database option, or if not possible, then use HTTP POST. A: Use session variables or HTTP POST instead of HTTP GET. A: If the servers have a common domain name, you can use a cookie. EDIT: Cookies will just hide the ID visually, it is still accessible. Same with hidden fields or using POST rather than GET. So if the ID is confidental and you want to avoid to send it over the network unencrypted, you need a different approach. A solution could be to encrypt the ID on the auth server with a key which is shared by the servers. Another solution could be to generate a random GUID on the auth server, and then let the auth server directly inform the other server (over SSL) which ID the GUID corresponds to. A: Instead of passing it via a query string you should create a hidden form field with its value and then post to your 2nd page, which can then grab the posted value and it will be hidden from the user. A: go for session mangement or use a HTTP Post as said in the above post.
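To flesh out the "encrypt it, with a nonce" suggestion from the earlier answer, here is a hedged C# sketch of what the sending side could look like. The class and parameter names are invented for the example, the cipher choice (Rijndael/AES with a key shared between the two servers, e.g. via protected config) is an assumption, and a real implementation should also append an HMAC so the receiving application can detect tampering.

using System;
using System.Security.Cryptography;
using System.Text;

// Hypothetical helper for the shared-key approach described above.
static class UserIdToken
{
    // Builds an opaque, URL-safe token containing the user id, a one-time nonce
    // and a timestamp, so the receiving app can reject replays and stale tokens.
    public static string Protect(Guid userId, byte[] sharedKey)
    {
        string payload = userId + "|" + Guid.NewGuid() + "|" + DateTime.UtcNow.Ticks;

        using (SymmetricAlgorithm aes = new RijndaelManaged())
        {
            aes.Key = sharedKey;   // e.g. a 32-byte key both servers know
            aes.GenerateIV();

            using (ICryptoTransform encryptor = aes.CreateEncryptor())
            {
                byte[] plain = Encoding.UTF8.GetBytes(payload);
                byte[] cipher = encryptor.TransformFinalBlock(plain, 0, plain.Length);

                // Prepend the IV so the other side can decrypt, then make it URL-safe.
                byte[] token = new byte[aes.IV.Length + cipher.Length];
                Buffer.BlockCopy(aes.IV, 0, token, 0, aes.IV.Length);
                Buffer.BlockCopy(cipher, 0, token, aes.IV.Length, cipher.Length);
                return Uri.EscapeDataString(Convert.ToBase64String(token));
            }
        }
    }
}

The receiving application reverses the steps with the same key, splits the payload on '|', checks that the timestamp is recent, and remembers recently seen nonces to block replays.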
{ "language": "en", "url": "https://stackoverflow.com/questions/100411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: open source syntax highlighter tool? I'm looking for an open source, cross platform (Windows & Linux at least) command line tool to take some code (C++, but multiple languages would be sweet), and spit out a valid XHTML representation of that code, with syntax highlighting included. Ideally the XHTML should just wrap the code with <span> and <div> tags with different classes so I can supply the CSS code and change the colouration, but that's an optional extra. Does anyone know of such an application? A: I can recommend Pygments. It's easy to work with and supports a lot of languages. It does what you want, i.e., it wraps the code in <span> tags: from pygments import highlight from pygments.lexers import PythonLexer from pygments.formatters import HtmlFormatter code = 'print "Hello World"' print highlight(code, PythonLexer(), HtmlFormatter()) gives <div class="highlight"> <pre><span class="k">print</span> <span class="s">&quot;Hello World&quot;</span></pre> </div> and you can then use one of the supplied style sheets or make your own. You can also call it via its pygmentize script. The script can format the output in different ways: HTML, LaTeX, ANSI color terminal output. A: Vim can save any code it highlights to "colored" HTML (it runs on several platforms). There is GNU highlight too. And tons of others. A: There is a very good one, driven by XML, fast and open source: http://sourceforge.net/projects/colorer/ A: I don't recall if GeSHi has a command-line program but even if it doesn't, it shouldn't be hard to whip one up. It does a great job of taking code and generating pretty, coloured HTML/XHTML, even with line numbers (or every X line numbers, even) and other helpful features. A: Enscript looks like what you are asking for: * *spits HTML (or PS, or RTF) from ASCII files *It includes features for `pretty-printing' (language-sensitive code highlighting) in several programming languages. A: Not sure how helpful this will be, but my team uses doxygen to produce documentation, which happens to provide color syntax highlighting on our code views as a side bonus. Never really needed it, but it does it. A: If you're ok with using Ruby, you want coderay. A: I'll add my own one to the list; it colors C# but could be adapted for C, C++ and Java. It produces the inline styles by default and a pre tag. The source is there in C#, you'll need to grab mono/monodevelop and compile it as a console app, so it's not shrink wrapped in that respect.
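If you do end up whipping up your own (as the GeSHi answer suggests), the span-wrapping idea the question describes is small at its core. Below is a toy C# console sketch under that assumption: keyword-only highlighting, HTML-escaped output, CSS class names invented for the example. It is nowhere near the quality of the tools listed above, but it shows the shape of the output.

using System;
using System.IO;
using System.Text.RegularExpressions;

class MiniHighlighter
{
    // A deliberately tiny keyword list; a real tool would use a per-language lexer.
    static readonly string[] Keywords = { "int", "for", "if", "else", "return", "void", "class" };

    static string HtmlEscape(string s)
    {
        return s.Replace("&", "&amp;").Replace("<", "&lt;").Replace(">", "&gt;");
    }

    static void Main(string[] args)
    {
        string code = File.ReadAllText(args[0]);
        string escaped = HtmlEscape(code);

        // Wrap every keyword in a span; colours come from user-supplied CSS.
        string pattern = @"\b(" + string.Join("|", Keywords) + @")\b";
        string highlighted = Regex.Replace(escaped, pattern, "<span class=\"kw\">$1</span>");

        Console.WriteLine("<div class=\"code\"><pre>" + highlighted + "</pre></div>");
    }
}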
{ "language": "en", "url": "https://stackoverflow.com/questions/100415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Force default value when adding column to table - SQL Server In SQL Server 2000/2005, Is it possible to force the default value to be written to already existing rows when adding a new column to a table without using NOT NULL on the new column? A: I understand your question, but you are saying that for future records, NULL (unknown, indeterminate or whatever your semantics are) is acceptable (but if it is left off in an insert, there will be a default), but that for all the existing data, you are going to go ahead and assign it the default. I would have to look hard at this situation and ask why you are even going to allow NULLs in future records at all - given none of the historical records will have it, and there is a default in place for future records. A: You need two statements. First create the column with not null. Then change the not null constraint to nullable alter table mytable add mycolumn varchar(10) not null default ('a value') alter table mytable alter column mycolumn varchar(10) null A: I doubt it. http://msdn.microsoft.com/en-us/library/ms190273(SQL.90).aspx The approach recommended by Microsoft is as follows (taken from the url above) UPDATE MyTable SET NullCol = N'some_value' WHERE NullCol IS NULL ALTER TABLE MyTable ALTER COLUMN NullCOl NVARCHAR(20) NOT NULL A: ALTER TABLE {TABLENAME} ADD {COLUMNNAME} {TYPE} {NULL|NOT NULL} CONSTRAINT {CONSTRAINT_NAME} DEFAULT {DEFAULT_VALUE} [**WITH VALUES]** WITH VALUES can be used to store the default value in the new column for each existing row in the table. more detail on MSDN link . https://msdn.microsoft.com/en-in/library/ms190273.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/100416", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Hidden Features of Visual Studio (2005-2010)? Visual Studio is such a massively big product that even after years of working with it I sometimes stumble upon a new/better way to do things or things I didn't even know were possible. For instance- * *Crtl + R, Ctrl + W to show white spaces. Essential for editing Python build scripts. *Under "HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\8.0\Text Editor" Create a String called Guides with the value "RGB(255,0,0), 80" to have a red line at column 80 in the text editor. What other hidden features have you stumbled upon? A: Tracepoints! Put a breakpoint on a line of code. Bring up the Breakpoints Window and right click on the new breakpoint. Select 'When Hit...'. By ticking the 'Print a message' check box Visual Studio will print out a message to the Debug Output every time the line of code is executed, rather than (or as well as) breaking on it. You can also get it to execute a macro as it passes the line. A: You can drag code to the ToolBox. Try it! A: I'm not sure if it's "hidden", but not many people know about it -- pseudoregisters. Comes very handy when debugging, I've @ERR, hr in my watch window all the time. A: Ctrl-Minus, Ctrl-Plus, navigates back and forward where you've been recently (only open files, though). A: Here's something I learned (for C#): You can move the cursor to the opening curly brace from the closing curly brace by pressing Control + ]. I learned this on an SO topic that's a dupe of this one: “Hidden Secrets” of the Visual Studio .NET debugger? A: I don't use it often, but I do love: ctrl-alt + mouse select To select in a rectangular block, to 'block' boundaries. As noted in comments, alt + mouse select Does just a plain rectangular block. A: CTRL + Shift + U -> Uppercase highlighted section. CTRL + U -> Lowercase the highlighted section Great for getting my SQL Statements looking just right when putting them into string queries. Also useful for code you've found online where EVERYTHING IS IN CAPS. A: To display any chunk of data as an n-byte "array", use the following syntax in Visual Studio's QuickWatch window: variable, n For example, to view a variable named foo as a 256-byte array, enter the following expression in the QuickWatch window: foo, 256 This is particularly useful when viewing strings that aren't null-terminated or data that's only accessible via a pointer. You can use Visual Studio's Memory window to achieve a similar result, but using the QuickWatch window is often more convenient for a quick check. A: Middle Mouse Button Click on the editor tab closes the tab. A: Click an identifier (class name, variable, etc) then hit F12 for "Go To Definition". I'm always amazed how many people I watch code use the slower right-click -> "Go To Definition" method. EDIT: Then you can use Ctrl+- to jump back to where you were. A: CTRL-D then type ">of " then file name. If the standard toolbar is up crtl-d put you in find combobox and there is now a dropdown with files in your solution that match the start of the filename you typed. Pick one and it will open it. This alternative to the open filedialog is awesome for big solutions with lots of directories. A: Ctrl + Delete deletes the whole word (forward) Ctrl + Backspace deletes the whole word (backward) The following is well known but am I wrong saying it hasn't been listed yet ? Ctrl + Shift + Space inside the parentheses of a method call gives you the parameter info. A: Drag-drop text selections to the Watch window while in the debugger. 
A: .NET debugger allows you to give objects identifiers, and to refer them via those identifiers later during the session. To do so, you right-click on the variable (or expression) referencing the object in Autos/Locals/Watch window, or in the tooltip, and select "Create Object ID". IDs are sequential integer numbers, starting from 1, and suffixed by "#" - e.g 1# will be the first ID you create. After the ID is created, if the object is associated with a given ID, it is displayed in parentheses. You can use 1# to reference the object by ID anywhere you can normally use expressions - in Watch window, in condition of a conditional breakpoint, and so on. It's most handy when you want to set a breakpoint on a method of some particular object only - if you can first track the object creation, or some other place where this particular object is referenced, you just create the ID for it, and then set a new breakpoint with condition such as this==1#. A: CTRL+SHIFT+V will cycle through your clipboard, Visual Studio keeps a history of copies. A: Sara Ford covers lots of lovely tips: http://blogs.msdn.com/saraford/archive/tags/Visual+Studio+2008+Tip+of+the+Day/default.aspx But some of my favourites are Code Snippets, Ctrl + . to add a using <Namespace> or generate a method stub. I can't live without that. Check out a great list in the Visual Studio 2008 C# Keybinding poster: http://www.microsoft.com/downloadS/details.aspx?familyid=E5F902A8-5BB5-4CC6-907E-472809749973&displaylang=en A: I accidentally found this one just now. When you are anywhere on a line and press Ctrl + Enter, it will insert a new line above the current line and move the cursor there. Also, if you press Ctrl + Shift + Enter, it will insert a new line below the current line and move the cursor there (similar to End, Enter) A: During debugging, Select an identifier or expressing and drag it to the watch window. Beats having to write it from scratch :) A: CTRL-K, CTRL-D Reformat Document! This is under the VB keybindings, not sure about C# A: How many times do you debug an array in a quickwatch or a watch window and only have visual studio show you the first element? Add ",N" to the end of the definition to make studio show you the next N items as well. IE "this->m_myArray" becomes "this->m_array,5". A: Incremental search: While having a source document open hit (CTRL + I) and type the word you are searching for you can hit (CTRL + I) again to see words matching your input. A: * *The memory windows, very useful if you're doing low level stuff. *Control + K , Control + F - Format selection - great for quickly making code neat *Regions, some love them, some hate them, most don't even know they exist *Changing variables in debug windows during execution *Tracepoints *Conditional break points *Hold down Alt and drag for 'rectangular' selection. *Control+B for a breakpoint, to break at function *Control+I for incremental search, F3 to iterate A: Press the F8 key to cycle through search results. (Shift+F8 for reverse direction) Hit F12 to go to definition of variable. Shift + alt + arrow keys = Block select! A: * *Ctrl-K, Ctrl-C to comment a block of text with // at the start *Ctrl-K, Ctrl-U to uncomment a block of text with // at the start Can't live without it! :) A: In the watch window, you can view the current exception even if you have no variable to hold it by adding a watch on $exception A: Ever want to look for a function in your current viewed file but there are too many member to browse? Need a filter? 
Then, the Navigate box is what you need. You activate it by Ctrl-, (comma). A: You can use the following codes in the watch window. @err - display last error @err,hr - display last error as an HRESULT @exception - display current exception A: * *Ctrl-K, Ctrl-C to comment a block of text with // at the start *Ctrl-K, Ctrl-U to uncomment a block of text with // at the start Can't live without it! :) A: Shift+Alt+F10 brings up the built in refactoring menu. Great for adding method stubs from interfaces, and adding Using statements automatically for specific classes. A: There is an article about this. It seems to be a lengthy collection. A: You can drag down the little gray box above the vertical scrollbar to split the window into two views of the same file, which can be scrolled independently - great if you're comparing two parts of the same file. A: View, Other Windows, Object Test Bench The object test bench can be used to execute code at design-time. You can right-click on a type in Class View, click Create Instance, and select a constructor. You can then supply values for its parameters, if any, and the instance will show up in the Object Test Bench. You can also call static methods by right-clicking a type and clicking Invoke Static Method. In the Object Test Bench, you can right-click on an object to call methods, and you can hover over it and see its structure (like you can when debugging). You can also assign to and interact with these variables in the Immediate window, also at design time. This feature can be useful when writing a library. Please note that to use this, your solution must be compile first. A: Dynamic XSLT Intellisense A very little known fact is that Visual Studio 2008 does support real XSLT intellisense - not a static XSLT schema-based one, but real dynamic intellisense enabling autocompletion of template names, modes, parameter/variable names, attribute set names, namespace prefixes etc. For all versions of VS I like Ctrl + Shift + V for copying data in clipboard cycle. A: I don't know how 'hidden' this is, but some newew people may not know about coniditonal breakpoints. Set a breakpoint, then right click it, and choose Condition, then enter an expression like: (b == 0) And it will only fire when that is true. Very useful when trying to debug a certain stage of a loop. A: The existence of the Resharper add-in. It makes working with Visual Stupidio less of a pain :) It's not really a hidden feature, but worth mention nonetheless as it comes with tons of these tricks and hotkeys. A: I'm surprised no one has mentioned this yet. I find the ability record and play back a series of actions very, very helpful sometimes. Like if I'm applying some repetitive action to a few lines in a text file. For example Ctrl+Shift+R (start recording macro) perform a series of keystrokes Ctrl+Shift+R (stop recording macro) later.... Ctrl+Shift+P (play back keystrokes) This approach is ideal for a short, one time manipulations. If it's something more involved or needed more than once, I'll write a script. A: Pseudovariables in the debugger: http://msdn.microsoft.com/en-us/library/ms164891.aspx $exception: avoids the need to give your exceptions names (and cause variable not referenced warnings). $user: tells you which user is running the application...sometimes useful when trying to diagnose permission issues. A: Close all documents other than the one your on by right clicking the doc's tab and selecting "Close All But This." You can do this in many other IDEs and browsers as well. 
Not a big feature but I find that I use it 10+ times a day. This feature was hidden from me for many years. I should map it to a keyboard shortcut :p A: Ctrl+Tab - switch between open tabs/windows in Visual Studio 2005 & 2008. Kind of like Alt+Tab in Windows, brings up a little box just for the currently open VS files. Here's a sample screenshot: alt text http://lh3.ggpht.com/_FWrysR9YI18/TFOGxnX9ShI/AAAAAAAAAQI/a-ByCRMmrpw/ctrltab.gif A: Stopping the debugger from stepping into trivial functions. When you’re stepping through code in the debugger, you can spend a lot of time stepping in and out of functions you’re not particularly interested in, with names such as GetID(), or std::vector<>(), to pick a C++ example. You can use the registry to make the debugger ignore these. For Visual Studio 2005, you have to go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio \8.0\NativeDE\StepOver and add string values containing regular expressions for each function or set of functions you wish to exclude; e.g. std::vector.*::.* TextBox::GetID You can also override these for individual exceptions. For instance, suppose you did want to step into the vector class’s destructor: std::vector.*::\~.*=StepInto You can find details for other versions of Visual Studio at http://blogs.msdn.com/andypennell/archive/2004/02/06/69004.aspx A: Ctrl-F10: run to cursor during debugging. Took me ages to find this, and I use it all the time; Ctrl-E, Ctrl-D: apply standard formatting (which you can define). A: TAB key feature. * *If you know snippet key name, write and click double Tab. for example: Write foreach and then click tab key twice to foreach (object var in collection_to_loop) { } 2. If you write any event, write here Button btn = new Button(); btn.Click += and then click tab key twice to private void Form1_Load(object sender, EventArgs e) { Button btn = new Button(); btn.Click += new EventHandler(btn_Click); } void btn_Click(object sender, EventArgs e) { throw new Exception("The method or operation is not implemented."); } btn_Click function write automatically *in XAML Editor, Write any event. for example: MouseLeftButtonDown then click tab MouseLeftButtonDown="" then click tab again MouseLeftButtonDown="Button_MouseLeftButtonDown" in the code section Button_MouseLeftButtonDown method created. A: Sara Ford has this market cornered. http://blogs.msdn.com/saraford/default.aspx More Visual Studio tips and tricks than you can shake a stick at. Some others: * *The Visual Studio 2005 and 2008 3-month trial editions are fully-functional, and can be used indefinitely (forever) by setting the system clock back prior to opening VS. Then, when VS is opened, set the system clock forward again so your datetimes aren't screwed up. *But that's really piracy and I can't recommend it, especially when anybody with a .edu address can get a fully-functional Pro version of VS2008 through Microsoft Dreamspark. *You can use Visual Studio to open 3rd-party executables, and browse embedded resources (dialogs, string tables, images, etc) stored within. *Debugging visualizers are not exactly a "hidden" feature but they are somewhat neglected, and super-useful, since in addition to using the provided visualizers you can roll your own for specific data sets. *Debugger's "Set Instruction Pointer" or "Set Next Statement" command. *Conditional breakpoints (as KiwiBastard noted). *You can use Quickwatch etc. to evaluate not only the value of a variable, but runtime expressions around that variable. 
A: T4 (Text Template Transformation Toolkit). T4 is a code generator built right into Visual Studio A: The most important feature I can't live without is Visual Studio 2008. :P A: The Debugger :-) Beats Notepad by miles. A: I always map control + alt + f4 to documents.CloseAllWindows in options>environment>keyboard. Is somewhat more intuitive than using the mouse. A: I think the ability to right click on a Stored Procedure in Server Explorer and debug.. A: Copy-paste from a Watch window of an object's expanded properties in the debugger into Excel will perserve the tabular format and persist the data after the debug session is over. A: Here is the Macro source for my aspx/aspx.cs flipper. It works in 2005, but it may have issues in 08.. I'm not sure... This was taken from my other cpp/h flipper, so there might be some clean up needed to make it the best it could be. I'm not paid to write Macros, so I have to blast though them as quickly as possible when I need one. Sub OpenASPOrCS() 'DESCRIPTION: Open .aspx file if in .cs file, open .cs file if in .aspx file On Error Resume Next ' Get current doc path Dim FullName FullName = LCase(ActiveDocument.FullName) If FullName = "" Then MsgBox("Error, not a .cs or asp file!") Exit Sub End If ' Get current doc name Dim DocName DocName = ActiveDocument.Name Dim IsCSFile IsCSFile = False Dim fn Dim dn If (Right(FullName, 3) = ".cs") Then fn = Left(FullName, Len(FullName) - 3) dn = Left(DocName, Len(DocName) - 3) IsCSFile = True ElseIf ((Right(FullName, 5) = ".aspx") Or (Right(FullName, 5) = ".ascx")) Then fn = FullName + ".cs" dn = DocName + ".cs" Else MsgBox("Error, not a .cs, or an asp file!") Exit Sub End If Dim doc As EnvDTE.Documents DTE.ItemOperations.OpenFile(fn) doc.DTE.ItemOperations.OpenFile(fn) If Err.Number = 0 Then Exit Sub End If ' First check to see if the file is already open and activate it For Each doc In DTE.Documents() If doc.Name = dn Then doc.Active = True Exit Sub End If Next End Sub A: Ctrl+L deletes the current selected line. This is an awesome time saver (if used responsibly of course!!!) A: Ctrl-M + Ctrl-L Toggle Collapse All - Expand All A: Ctrl-T swaps the last two letters. For example, "swithc" -> "switch". A: Ctrl+Shift+L deletes the current line (without cutting it to the clipboard) A: View, Code Definition Window. The Code Definition Window shows the definition of the currently selected identifier (If it's in your solution, it'll show your sourced; otherwise, it'll extract metadata, like right-click, Go To Definition) A: I see that lot of us are posting shortcuts. I have printed this poster, it's very helpful to learn those shortcuts - nowadays I look very rarely at the poster 'cause I've learned most of them :) Link for VS posters: http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=c15d210d-a926-46a8-a586-31f8a2e576fe My favourites are Refactoring ones (CTRL-R + Something) A: One that I only just discovered. When dealing with COM it's possible to lookup a brief message from the cryptic hexadecimal error number using a tool called errlook.exe. The useful tool is located in your VS\Common7\Tools directory. A: There is this blog on MSDN thats got some nice tips and tricks http://blogs.msdn.com/saraford/archive/tags/Visual+Studio+2008+Tip+of+the+Day/default.aspx A: The features I like the most are * *Bookmarks feature. You can add/remove bookmarks in code(kinda like breakpoints), and you can navigate directly between them by using next/previous bookmark. 
Very useful if you are making changes in two places at once, and want to swap between the two frequently. *The comment/uncomment feature. Ctrl+E , Ctrl+C/U for C# settings. *The increase/decrease indent of a line. (Only available for VC by default. To assign for C#, go to tools -> Options -> General -> Keyboard and change the Edit.IncreaseLineIndent/Edit.DecreaseLineIndent for TextEditor) PS: I want to know how to navigate to the members drop down list (just below the tabs list) by the keyboard. A: Custom IntelliSense dropdown height, for example displaying 50 items instead of the default which is IMO ridiculously small (8). (To do that, just resize the dropdown next time you see it, and Visual Studio will remember the size you selected next time it opens a dropdown.) A: Discovered today: Ctrl + . Brings up the context menu for refactoring (then one that's accessible via the underlined last letter of a class/method/property you've just renamed - mouse over for menu or "Ctrl" + ".") A: A lot of people don't know or use the debugger to it's fullest - I.E. just use it to stop code, but right click on the red circle and there are a lot more options such as break on condition, run code on break. Also you can change variable values at runtime using the debugger which is a great feature - saves rerunning code to fix a silly logic error etc. A: Line transpose, Shift-Alt-T Swaps two line (current and next) and moves cursor to the next line. I'm lovin it. I've even written a macro which changed again position by one line, executed line transpose and changed line position again so it all looking like I swapping current line with previous (Reverse line transpose). Word transpose, Shift-Ctrl-T A: Make a selection with ALT pressed - selects a square of text instead of whole lines. A: When developing C++, Ctrl-F7 compiles the current file only. A: To auto-sync current file with Solution Explorer. So don't have to look where the file lives in the project structure Tools -> Options -> Projects and Solutions -> "Track Active Item in Solution Explorer" Edit: If this gets too annoying for you then you can use Dan Vanderboom's macro to invoke this feature on demand through a keystroke. (Note: Taken from the comment below by Jerry). A: Document Outline in the FormsDesigner (CTRL + ALT + T) Fast control renaming, ordering and more! A: Not exactly a hidden feature, but one thing I've done is add a "Start Without Debugging" button next to my "Start With Debugging" button. Just click the down arrow at the right end of the toolbar. Then select "Add or Remove buttons". Then Customize. In the commands tab select the Debug category. Find the Start Without Debugging command and drag it to where you want it on the toolbar. A: My best feature is one I had to make myself.. It's a cpp/h flipper. If you are looking at the .h file, and hit this macro, (or its keyboard shortcut), it will open the cpp file, and vice-versa. I can provide the source if anyone wants it. A: Enable Intellisense in Skin Files * *Go to Tools->Options menu. *Pick Text Editor -> File Extesion fom a tree at the left part of Options dialog. *Type skin in Extesion text box. *Select User Control Editor from Editor dropdown. *Click Add and then Ok to close dialog and re-open your skin files. A: The Open button in the File Open dialog has a little down arrrow next to it. Click that and you get the "Open With" option which includes the Binary Editor. 
As a systems-type guy, I find it quite valuable, but most of my colleagues hadn't known about it until I showed them. A: Re: Stopping the debugger from stepping into trivial functions. In C#, you can also add an attribute [DebuggerStepThrough] (using System.Diagnostics) to a method. This causes the debugger to, ironically, not step through the method. A: Reference tag of Visual Studio 2008 for JavaScript IntelliSense is a brand new hidden feature. Especially jQuery IntelliSense is a devastating! A: CTRL-G for jumping to a specific line number. Saves a few seconds when you've got a line number in a large code file. A: I wanted to talk about comment (Ctrl + k, Ctrl + c) and uncomment (Ctrl + k, Ctrl + u) shortcuts but a Bratt (:p) already mentioned them. How about the Ctrl + k, Ctrl + d shortcut, very convenient to format markup (ASP.NET, HTML) and JavaScript code! A: I don't know how unknown most people consider them to be, but I don't think that a lot of people use snippets. I discovered them a while back and then found that they were customizable by editing the xml in the Visual Studio Program Files directory. They make it super easy to add a lot of code quickly. Also, to save time when using snippets make sure you hit tab twice and not try to do everything through the right click menu. A: Mouse Left Click resets your cursor to the position your pointer is currently hovering. Very useful for navigating through Visual Studio. A: * *Vertical split of the window using "New Window" and "New Vertical Tab Group" combination. There is only horizontal split in VS by default, but trick with window duplication allows to use vertical split too. * *Vertical selection is good (it accessible with keyboard too: Alt+Shift+[Ctrl]+Arrows). But sometimes I need to use Vertical Copy/Cut and Paste. VS is smart enough to handle this correctly. *There are also very useful features: Go Next/Prev Scope (Alt+Down/Up), Go to Implementation (Alt+G), but they are a part of the Visual Assist X plug-in. A: In addition to all others said like: * *Ctrl + K + D *Ctrl + K + U *Ctrl + M + L *Ctrl + M + O Selecting when you hold "Alt". Hiting F12 on the instead of right click and choose "Go To Definition". * *Ctrl + K + C for comment. *Ctrl + K + U for uncommenting. Today if found something new: In WebFroms in Design mode, go to Tools menu and choose "Generate Local Resources". It's really handy for making multilingual web applications. A: How about Ctrl + C to copy the current line to the clipboard without doing any range selection. This is sooooo... simple and useful. A: Ctrl + Shift + F brings up "search solution" dialog and lists all the results in a nice navigable way, rather than visiting each result. Not only it's easier to use, it's also useful because it doesn't tamper with your search scope defaults you use with regular search. A: I'm sure everyone knows this, it's not just VS, you can do it almost everywhere. If you press Ctrl + left arrow/right arrow you will go to the next/last word word. You can also Ctrl + Shift + left/right arrow to select whole words at a time. A: Navigating around the references of a symbol in VS 2010: 1. Place your cursor at the symbol to high light all references 2. Ctrl - Alt - Up/Down to navigate backward/toward reference. ^_^ A: Set next statement by right-clicking code view during debugging or just dragging the yellow arrow around. 
This is really useful to debug again a part of the code you have recently stepped over, or maybe change the content of some variable and trying to execute a set of statements again. A: Here's an old blog article on some of the hidden debugger features in the expression evaluators. A: * Print the shortcuts from http://www.microsoft.com/downloads/details.aspx?FamilyID=6bb41456-9378-4746-b502-b4c5f7182203&DisplayLang=en">the Microsoft page and put them next to you. Try to learn a new one every day. You'll find all shortcuts already mentioned here + lots more. Some very useful contain formatting a code block, commenting, navigate between pages,... * Get Resharper, it's a plugin which whill greatly increase your efficiency. If you use Resharper, you can find a list with shortcuts. A: I updated my code flipper, I posted earlier. I added support for ASP Controls. Larry A: Vertical selection with Ctrl-Left Click is pretty useful sometimes... A: Shift + Delete to cut whatever line the cursor is on. I use this all the time to delete whole lines of code. A: I just wanted to copy that code without the comments. So, the trick is to simply press the Alt button, and then highlight the rectangle you like.(e. g. below). protected void GridView1_RowCommand(object sender, GridViewCommandEventArgs e) { //if (e.CommandName == "sel") //{ // lblCat.Text = e.CommandArgument.ToString(); //} } In the above code if I want to select : e.CommandName == "sel" lblCat.Text = e.Comman Then I press ALt key and select the rectangle and no need to uncomment the lines. Check this out. A: Just found out back and forward buttons on my mouse moves back or forward one document. Think I was wrong about this one. Only happens when searched for stuff. A: Ever want to see all the implementations of one interface member? Use "Call Hierarchy"! A: Task List Tokens Configured task list tokens are retrieved later while opening task list window and select user comments option, this will display all user comments that contains configured tokens. This will be so useful if you try to retrieve TODO comments for example. To use it; Tools --> Options --> Environment --> Task List, add required tokens. A: A few that I know or haven't seen posted here. * *Crtl + Space encourage Intellisense to complete a word. *Customize toolbox - Right click on toolbox, that brings up popupmenu > Choose items > Check/Uncheck boxes > Ok. *Start Visual Studio without splash page. Windows + R then type devenv /nosplash and press Enter. A: I use it every time I open a file. And that's why I just hate regions. Collapse to definition Ctrl+M+O A: Break on the line where exception occurs If you want to break on the line where Exception has occured then you can use CTRL + ALT + E and select the check box against CLR under Thrown Column. This will work even if the exception is handled by the user. P.S: I tried posting the screenshot but not able to do it since new users aren't allowed to post images. Sorry ! A: Here are a few which I didn't see listed yet: * *Quickly find selected text: When text is selected hit Ctrl + F3 and then subsequently F3 to quickly find that text in a given file *Close multiple files: When you have many windows open and you want to clear only some of them (as apposed to 'close all but this etc.) Go to Window -> Windows... a dialog pops up and now you can select the windows you want to close *Navigate to a particular file: When your solution has many files it can take a while to find a file in the solution explorer. No problem! 
Select your solution and start typing the name of the file and you are kindly directed to your file! A: * *Ctrl + Z is Undo obviously, but will also Undo auto formatting applied by studio. Very useful when copying/pasting hardcoded tables that are spaced for readability. When you paste Studio will apply formatting and nothing lines up any more. A quick Ctrl-Z restores your nice alignment. A: Visual Assist, in general, while a bit OT for this question, is a great app and really helps with the day-to-day running of visual studio. Their open-any-file and find-any-symbol windows are particularly awesome. A: After having read through all these marvelous (and some repetitive) posts, I have some to add that I don't think I saw: CTRL+Z = undo CTRL+Y = redo ;-) Also, don't forget to modify the keyboard shortcuts! Tools > Options > Environment > Keyboard LOTS of goodies! I have F9 == stepinto, f10 == step over and f11 == step out. VERY useful. Another not cited that I use somewhat often (although most people probably have a toolbar with this button): f6 == Build Solution. Enjoy!
{ "language": "en", "url": "https://stackoverflow.com/questions/100420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "182" }
Q: How can I use multiple sitemap files without multiple root nodes I'm using an ASP.NET menu control. I'd like the menu to look like this, where links 1 through 10 are in one sitemap file and links 11 through 20 in another.
root
--link 1
(...)
--link 10
--link 11
(...)
--link 20
However, a sitemap file MUST have a root node, which I cannot seem to suppress. Any thoughts? -Edoode A: You can suppress the root node by doing the following:
SiteMapDataSource ds = new SiteMapDataSource();
ds.SiteMapProvider = "providername";
ds.ShowStartingNode = false;
TreeView1.DataSource = ds;
TreeView1.DataBind();
I use this method to hide the root node for tree views. A: Is there any reason that you can't add a dummy root node and then subclass the ASP.NET menu control to ignore your dummy "root" node? You should be able to tell your SiteMapProvider to use different site maps for the menu. The other question I have is: what's the purpose of having multiple sitemap files? I'm sure you have a valid reason for this, but knowing what's going on would make it easier to understand and come up with a better solution. That being said, I would come up with a homegrown menu system. You could use jQuery and the superfish plugin on the front end and use C# to read your sitemap files on the back end to build the menu structure.
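For illustration, one way to wire this up is to register each sitemap file as its own provider in web.config and feed them to separate data sources that both hide their starting node. This is only a sketch; the provider names, file names and control IDs are placeholders, not from the original posts.
<!-- web.config (inside <system.web>): one XmlSiteMapProvider per sitemap file -->
<siteMap defaultProvider="MainSiteMap">
  <providers>
    <add name="MainSiteMap" type="System.Web.XmlSiteMapProvider" siteMapFile="~/Main.sitemap" />
    <add name="ExtraSiteMap" type="System.Web.XmlSiteMapProvider" siteMapFile="~/Extra.sitemap" />
  </providers>
</siteMap>
<!-- page markup: both data sources suppress their root nodes -->
<asp:SiteMapDataSource ID="dsMain" runat="server" SiteMapProvider="MainSiteMap" ShowStartingNode="false" />
<asp:SiteMapDataSource ID="dsExtra" runat="server" SiteMapProvider="ExtraSiteMap" ShowStartingNode="false" />
<asp:Menu ID="menuMain" runat="server" DataSourceID="dsMain" />
<asp:Menu ID="menuExtra" runat="server" DataSourceID="dsExtra" />
Note this gives two sibling menus rather than one merged tree; merging everything under a single root would still need the dummy-root or custom-provider approach described above.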
{ "language": "en", "url": "https://stackoverflow.com/questions/100435", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to set breakpoints on future shared libraries with a command flag I'm trying to automate a gdb session using the --command flag. I'm trying to set a breakpoint on a function in a shared library (the Unix equivalent of a DLL) . My cmds.gdb looks like this: set args /home/shlomi/conf/bugs/kde/font-break.txt b IA__FcFontMatch r However, I'm getting the following: shlomi:~/progs/bugs-external/kde/font-breaking$ gdb --command=cmds.gdb... GNU gdb 6.8-2mdv2009.0 (Mandriva Linux release 2009.0) Copyright (C) 2008 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "i586-mandriva-linux-gnu"... (no debugging symbols found) Function "IA__FcFontMatch" not defined. Make breakpoint pending on future shared library load? (y or [n]) [answered N; input not from terminal] So it doesn't set the breakpoint after all. How can I make it default to answer "y" to set breakpoints on pending future shared library load? I recall that I was able to do something, but cannot recall what. A: With no symbols. objdump -t /lib/libacl.so SYMBOL TABLE: no symbols objdump -T /lib/libacl.so ... 00002bd0 g DF .text 000000d0 ACL_1.0 acl_delete_entry ... (gdb) break 0x0002bd0 (gdb) x/20i acl_delete_entry 0x2bd0 <acl_delete_entry>: stwu r1,-32(r1) 0x2bd4 <acl_delete_entry+4>: mflr r0 0x2bd8 <acl_delete_entry+8>: stw r29,20(r1) 0x2bdc <acl_delete_entry+12>: stw r30,24(r1) 0x2be0 <acl_delete_entry+16>: mr r29,r4 0x2be4 <acl_delete_entry+20>: li r4,28972 A: Replying to myself, I'd like to give the answer that someone gave me on IRC: (gdb) apropos pending actions -- Specify the actions to be taken at a tracepoint set breakpoint -- Breakpoint specific settings set breakpoint pending -- Set debugger's behavior regarding pending breakpoints show breakpoint -- Breakpoint specific settings show breakpoint pending -- Show debugger's behavior regarding pending breakpoints And so set breakpoint pending on does the trick; it is used in cmds.gdb like e.g. set breakpoint pending on break <source file name>:<line number> A: OT: In terminal it would look like this to debug Caja in one line: gdb -ex "set breakpoint pending on" -ex "break gdk_x_error" -ex run --args caja --sync
{ "language": "en", "url": "https://stackoverflow.com/questions/100444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "105" }
Q: Why is the DataSourceSelectArguments sealed? Does anybody know the logic behind making DataSourceSelectArguments sealed? I've implemented a custom DataSource (and related classes) for some custom business objects and custom WebControls. When thinking about filters (like in a grid) I discovered that the DataSourceSelectArguments is sealed. Surely, I'm missing something. (Maybe the logic is related to the fact that it is nonsense to ask the DB again just for filtering? Just a guess.) A: Sorry for the delay, I was on holidays. :) The problem is that a DataBoundControl such as ListView has a SortExpression property, but not a FilterExpression. It is fine to implement a sortable grid/list with a ListView by means of an IButtonControl WebControl that fires a PostBack and a Command event. Then you use the SortExpression or the Sort method and pass a sort expression that will fill the DataSourceSelectArguments.SortExpression and pass it to the DataSource, which can construct the appropriate SQL statement (in my case) to retrieve the data from the DB. This allows for separation between the data and the WebControl that displays it, IMHO. Following this pattern I was about to implement a filter by filling an extra parameter object in my DataSourceSelectArguments with the requested filter, and I would have called Sort, which would have passed this arguments object to the DataSource, where I would have constructed the appropriate select clause. I've finally solved it by "coding" the filter information in the SortExpression, but I find it ugly (for the name, in the first place: sort != filter), and I was wondering if there's a more appropriate way of doing this or if I'm missing something that is more subtle. Edit: Maybe a better approach would be to override ListView's PerformSelect method and ask my own implementation of the DataSourceView if it can filter, then call a special ExecuteSelect method that accepts a special DataSourceSelectArguments with a filter object. Taking care not to do anything that will break when someone uses the custom ListView with a non-enhanced DataSourceView, of course. A: My guess is because the class is a dumb data transfer object merely used to pass arguments to a method. This class itself doesn't have any operations defined on it, so what sort of polymorphism would you expect? For example, the existing methods will only know about the properties of this class, which are all settable, so there's no need to override the properties. If you added new properties, they would get ignored. For your own method, can you create your own Arguments class that just happens to have all the same properties?
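To make that last suggestion concrete, a custom arguments class plus a filter-aware select method could look roughly like this. All of the names below are invented for illustration; this is not part of the framework API.
// Hypothetical stand-in for DataSourceSelectArguments that also carries a filter.
public class FilteredSelectArguments
{
    public string SortExpression { get; set; }
    public string FilterExpression { get; set; }
    public int StartRowIndex { get; set; }
    public int MaximumRows { get; set; }
}
// A custom view exposes an extra select overload that understands the filter;
// the enhanced ListView would call this from its overridden PerformSelect.
public class MyDataSourceView // : DataSourceView in a real implementation
{
    public System.Data.DataTable Select(FilteredSelectArguments args)
    {
        // Build the WHERE clause from args.FilterExpression and the ORDER BY
        // from args.SortExpression, then execute the query here.
        throw new System.NotImplementedException();
    }
}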
{ "language": "en", "url": "https://stackoverflow.com/questions/100454", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Alternatives to Live/GamerServices for XNA projects? Using the GamerServices component for XNA to access Xbox/GfW Live for networking purposes requires developers and players each to have a US$100/year subscription to Microsoft's Creators Club. That's not much of an issue for Xbox360 XNA projects as you need the subscription anyway to be able to put your game on the 360. But for PC games using XNA, requiring developers and players to put that much up each year is pretty crazy just for the access to a player's gamer card. Are there any solutions for XNA games that provide similar benefits to GamerServices? Or are developers pretty much restricted to building their own networking functionality if they don't want to subject their players (and themselves) to that $100/head hit? A: Perhaps you could try Lidgren A: Please note that games for windows live is now free: http://www.engadget.com/2008/07/22/games-for-windows-live-now-free/ Since using the Live APIs is your only option on xbox and zune, it makes it a pretty compelling option since your only issue was the cost on windows :-) Especially considering the fact that once game studio 3.0 launches, you'll be able to sell your games on xbox live's new community games section Edit, upon further investigation, it turns out that the games for windows live stuff is kind of half-baked. The gamerservices library doesn't seem to be included in the redistributable bits. So unless you want to break the EULA, your player would have to install gamestudio. That being said, I do still believe that it's free nonetheless, if not inconvenient. A: Well, you can use sockets, obviously, and using sockets you can create a seperate, dedicated server app, which you can't do with Live (as far as I know). You could also try SteamWorks; I haven't heard of anyone trying that, however.
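As a rough illustration of the plain-sockets route suggested in the last answer (not the Live/GamerServices API, and not the Lidgren API either), a minimal blocking TCP exchange in C# might look like this; the port and messages are arbitrary:
using System.IO;
using System.Net;
using System.Net.Sockets;

static class SocketSketch
{
    // Server: accept one client and echo a single line back.
    public static void RunServer()
    {
        TcpListener listener = new TcpListener(IPAddress.Any, 9050);
        listener.Start();
        using (TcpClient client = listener.AcceptTcpClient())
        {
            StreamReader reader = new StreamReader(client.GetStream());
            StreamWriter writer = new StreamWriter(client.GetStream()) { AutoFlush = true };
            writer.WriteLine("echo: " + reader.ReadLine());
        }
        listener.Stop();
    }

    // Client: connect, send one line, read the reply.
    public static string RunClient()
    {
        using (TcpClient client = new TcpClient("localhost", 9050))
        {
            StreamWriter writer = new StreamWriter(client.GetStream()) { AutoFlush = true };
            StreamReader reader = new StreamReader(client.GetStream());
            writer.WriteLine("hello");
            return reader.ReadLine();
        }
    }
}
A real game would use UDP or asynchronous sockets, but the point is only that nothing here depends on a Creators Club subscription.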
{ "language": "en", "url": "https://stackoverflow.com/questions/100459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: SSO in webpages I would like to know what's the best technique to do single sign-on in web sites. It means that if you enter more than one website, you will be authenticated in a "transparent" way. I'm using PHP + curl to set the cookies for the different sites, but I really would like to know if a standard way to do it exists. A: For a standard, secure way you need: * *an authentication server *an authentication filter on each site that needs SSO The mechanism is a little bit complex; it involves HTTP redirects and secure authentication tickets. You will find a detailed explanation on the CAS website (a popular Java SSO server). I recommend reading the page "CAS Java Client Gateway Example", especially the sequence diagram at the bottom of the page. A: The best way is to use image tags which pull an image stream from your external sites. So if you're at www.some-site.com and you want to also be signed into www.some-partner-site.com you have this displayed after logging in: Because you're using an image, it forces the browser to "pull in" the contents of that URL. I've recently built a solution which does it with ASP.NET, but we also have a PHP-based partner site. Which image is displayed is irrelevant; really you should not display any image at all (hence the 1x1 size). A: You could also take a look at OpenId. This is the same mechanism used for logging into stackoverflow and features a "global" single sign-on. I believe there are PHP libraries available to integrate with it. You could also take a look at this question.
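The image-tag answer above appears to have lost its example markup in transit. The idea is roughly the following; the partner-site endpoint and token parameter are made-up names, and in practice the token would be a signed, expiring value rather than plain text.
<!-- rendered by www.some-site.com right after a successful login -->
<img src="https://www.some-partner-site.com/sso/signin?token=OPAQUE_SIGNED_TOKEN"
     width="1" height="1" alt="" />
When the browser fetches this "image", it sends the request to the partner site, which can validate the token and set its own session cookie, so the user is already signed in when they later visit that site directly.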
{ "language": "en", "url": "https://stackoverflow.com/questions/100460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Neural Network example in .NET Any good tutorial with source that will demonstrate how to develop a neural network (step by step, for dummies ;-))? A: Here is an online course on C# neural network programming. http://www.heatonresearch.com/course/intro-neural-nets-cs A: Here is a good example: BrainNet 1 - A Neural Network Project - With Illustration And Code - Learn Neural Network Programming Step By Step And Develop a Simple Handwriting Detection System that will demonstrate some practical uses of neural network programming. A: There's a really good article on CodeProject: Image Recognition with Neural Networks. A: An interesting tutorial is available here. Hopefully it will act as an introduction for you. A: You can have a look at http://generation5.org/articles.asp?Action=List&Topic=Neural+Networks which contains a lot of articles about various types of neural networks, targeting both beginner and advanced levels.
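If you want to see the mechanics in a dozen lines before working through the tutorials above, a single perceptron learning the AND function is about the smallest possible example. This sketch is purely illustrative and is not taken from any of the linked resources.
using System;

class PerceptronDemo
{
    static void Main()
    {
        double[,] inputs = { { 0, 0 }, { 0, 1 }, { 1, 0 }, { 1, 1 } };
        double[] targets = { 0, 0, 0, 1 };              // logical AND
        double w1 = 0, w2 = 0, bias = 0, rate = 0.1;

        for (int epoch = 0; epoch < 100; epoch++)        // a few passes over the data
        {
            for (int i = 0; i < 4; i++)
            {
                double output = (inputs[i, 0] * w1 + inputs[i, 1] * w2 + bias) > 0 ? 1 : 0;
                double error = targets[i] - output;      // perceptron learning rule
                w1 += rate * error * inputs[i, 0];
                w2 += rate * error * inputs[i, 1];
                bias += rate * error;
            }
        }

        for (int i = 0; i < 4; i++)
        {
            int result = (inputs[i, 0] * w1 + inputs[i, 1] * w2 + bias) > 0 ? 1 : 0;
            Console.WriteLine("{0} AND {1} -> {2}", inputs[i, 0], inputs[i, 1], result);
        }
    }
}
Real networks add hidden layers, a smooth activation function and backpropagation, which is exactly what the linked tutorials cover.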
{ "language": "en", "url": "https://stackoverflow.com/questions/100469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: How to pause / resume any external process under Windows? I am looking for different ways to pause and resume programmatically a particular process via its process ID under Windows XP. The Process suspend/resume tool does it with SuspendThread / ResumeThread but warns about multi-threaded programs and deadlock problems. PsSuspend looks okay, but I wonder if it does anything special about deadlocks or uses another method? Preferred languages: C++ / Python A: If you "debug the debugger" (for instance, using logger.exe to trace all API calls made by windbg.exe), it appears that the debugger uses SuspendThread()/ResumeThread() to suspend all of the threads in the process being debugged. PsSuspend may use a different way of suspending processes (I'm not sure), but it is still possible to hang other processes: if the process you're suspending is holding a shared synchronization object that is needed by another process, you may block that other process from making any progress. If both programs are well-written, they should recover when you resume the one that you suspended, but not all programs are well-written. And if this causes your program that is doing the suspending to hang, then you have a deadlock. A: I'm not sure if this does the job, but with Process Explorer from MS Sysinternals you can suspend a process. It's been said here: https://superuser.com/a/155263 and I found it there too. A: Read here; you also have psutil for Python, which you can use like this: >>> import psutil >>> pid = 7012 >>> p = psutil.Process(pid) >>> p.suspend() >>> p.resume() A: I tested http://www.codeproject.com/KB/threads/pausep.aspx on a few programs: it works fine. PsSuspend and Pausep are two valid options. A: So, after I found out about UniversalPauseButton, Googling for this ("windows SIGSTOP"), getting this question as the first search result (thanks Ilia K., your comment did its job), and reading the answers, I went back to check out the code. Apparently, it uses the undocumented NT kernel and Win32 APIs _NtSuspendProcess, _NtResumeProcess and _HungWindowFromGhostWindow. PsSuspend, the utility you mentioned and linked to, probably uses these APIs; I couldn't verify this, the source code isn't supplied, only executables and a EULA. You can probably figure that out by disassembling the binary, but it's against the EULA. So, to answer your specific question, check out UniversalPauseButton's main.cpp; basically you call _NtSuspendProcess(ProcessHandle) and _NtResumeProcess(ProcessHandle), ProcessHandle being the handle of the process you want to pause or resume. A: I think there is a good reason why there is no SuspendProcess() function in Windows. Having such a function opens the door for an unstable system. You shall not suspend a process unless you created that process yourself. If you wrote that process yourself, you could use an event (see ::SetEvent() etc. in MSDN) or another kind of messaging to trigger a pause command in the process.
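To make the SuspendThread/ResumeThread approach discussed above concrete, here is a small C++ sketch that walks the system thread list with the ToolHelp API and suspends (or resumes) every thread owned by a given PID. It is illustrative only and carries exactly the deadlock caveats mentioned in the answers.
#include <windows.h>
#include <tlhelp32.h>

// Suspend (suspend == true) or resume (suspend == false) all threads of 'pid'.
bool SetProcessSuspended(DWORD pid, bool suspend)
{
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    if (snap == INVALID_HANDLE_VALUE)
        return false;

    THREADENTRY32 te;
    te.dwSize = sizeof(te);
    for (BOOL ok = Thread32First(snap, &te); ok; ok = Thread32Next(snap, &te))
    {
        if (te.th32OwnerProcessID != pid)
            continue;
        HANDLE thread = OpenThread(THREAD_SUSPEND_RESUME, FALSE, te.th32ThreadID);
        if (thread == NULL)
            continue;
        if (suspend)
            SuspendThread(thread);
        else
            ResumeThread(thread);
        CloseHandle(thread);
    }
    CloseHandle(snap);
    return true;
}
The undocumented _NtSuspendProcess route mentioned above does the same thing in a single call inside the kernel, but this per-thread loop only uses documented APIs.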
{ "language": "en", "url": "https://stackoverflow.com/questions/100480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How do you bind in xaml to a dynamic xpath? I have a list box that displays items based on an XPath query. This XPath query changes depending on the user's selection elsewhere in the GUI. The XPath always refers to the same document. At the moment, I use some C# code-behind to change the binding of the control to a new XPath expression. I'd like instead to bind in XAML to an XPath, then change the value of that XPath as required. How would I do that? A: I think that you're trying to overcomplicate the problem. But have you thought about allocating the XPath to a dynamic resource: <.... ={Binding XPath={DynamicResource:res resource-name}} ... /> The best place to read about data binding is Beatriz's blog: http://www.beacosta.com/blog/
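One concrete way to get this effect, assuming the list box is fed by an XmlDataProvider, is to keep the provider in resources and change its XPath property when the selection elsewhere changes; the binding refreshes without touching the ListBox itself. The resource key and XPath strings below are placeholders.
<!-- XAML: the provider (and its current XPath) lives in resources -->
<Window.Resources>
    <XmlDataProvider x:Key="itemsProvider" Source="Data.xml" XPath="/Catalog/Items/Item" />
</Window.Resources>
<ListBox ItemsSource="{Binding Source={StaticResource itemsProvider}}" />
// Code-behind (C#): swap the XPath when the user's selection changes elsewhere.
XmlDataProvider provider = (XmlDataProvider)FindResource("itemsProvider");
provider.XPath = "/Catalog/Items/Item[@Category='Books']";   // hypothetical new query
provider.Refresh();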
{ "language": "en", "url": "https://stackoverflow.com/questions/100500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What is the SQL command to return the field names of a table? Say I have a table called myTable. What is the SQL command to return all of the field names of this table? If the answer is database specific then I need SQL Server right now but would be interested in seeing the solution for other database systems as well. A: MySQL 3 and 4 (and 5): desc tablename which is an alias for show fields from tablename SQL Server (from 2000) and MySQL 5: select COLUMN_NAME from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = 'tablename' Completing the answer: like people below have said, in SQL Server you can also use the stored procedure sp_help exec sp_help 'tablename' A: You can use the provided system views to do this: eg select * from INFORMATION_SCHEMA.COLUMNS where table_name = '[table name]' alternatively, you can use the system proc sp_help eg sp_help '[table name]' A: For those looking for an answer in Oracle: SELECT column_name FROM user_tab_columns WHERE table_name = 'TABLENAME' A: PostgreSQL understands the select column_name from information_schema.columns where table_name = 'myTable' syntax. If you're working in the psql shell, you can also use \d myTable for a description (columns, and their datatypes and constraints) A: Just for completeness, since MySQL and Postgres have already been mentioned: With SQLite, use "pragma table_info()" sqlite> pragma table_info('table_name'); cid name type notnull dflt_value pk ---------- ---------- ---------- ---------- ---------- ---------- 0 id integer 99 1 1 name 0 0 A: This is also MySQL Specific: show fields from [tablename]; this doesnt just show the table names but it also pulls out all the info about the fields. A: In Sybase SQL Anywhere, the columns and table information are stored separately, so you need a join: select c.column_name from systabcol c key join systab t on t.table_id=c.table_id where t.table_name='tablename' A: SQL-92 standard defines INFORMATION_SCHEMA which conforming rdbms's like MS SQL Server support. The following works for MS SQL Server 2000/2005/2008 and MySql 5 and above select COLUMN_NAME from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = 'myTable' MS SQl Server Specific: exec sp_help 'myTable' This solution returns several result sets within which is the information you desire, where as the former gives you exactly what you want. Also just for completeness you can query the sys tables directly. This is not recommended as the schema can change between versions of SQL Server and INFORMATION_SCHEMA is a layer of abstraction above these tables. But here it is anyway for SQL Server 2000 select [name] from dbo.syscolumns where id = object_id(N'[dbo].[myTable]') A: If you just want the column names, then select COLUMN_NAME from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = 'tablename' On MS SQL Server, for more information on the table such as the types of the columns, use sp_help 'tablename' A: MySQL is the same: select COLUMN_NAME from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = 'tablename' A: For IBM DB2 (will double check this on Monday to be sure.) SELECT TABNAME,COLNAME from SYSCAT.COLUMNS where TABNAME='MYTABLE' A: MySQL describe tablename A: select COLUMN_NAME1,COLUMN_NAME2 from SCHEMA_NAME.TABLE_NAME where TABLE_NAME.COLUMN_NAME = 'COLUMN_NAME1';
{ "language": "en", "url": "https://stackoverflow.com/questions/100504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: What are the quickest and easiest ways to ensure existing web pages display well on mobile platforms? The pages in question contain a lot of javascript and CSS. How well are these supported by mobile platforms generally? Is there a browser emulator (or equivalent tool) to assist testing? A: Opera has an option to view pages as through a mobile device. I've found it useful in the past. A: I can tell you that Apple's Mobile Safari on the iPhone renders Stack Overflow perfectly, which I find rather amazing. This is a site for programmers, not average users, so we accepted a lot of JavaScript dependencies. I do wish more mobile devices had browsers as powerful as Mobile Safari. I hear good things about Opera Mini as well. A: One example: The standard BlackBerry browser on my BlackBerry 8130 (Pearl) seems to ignore both CSS and JavaScript when loading my home page. I also installed Opera Mobile on this device, which renders the CSS but not my jQuery hover effects. It does understand some jQuery - for example, I have a form validation that does a show() of error messages if validation fails. That works in Opera, although without the animation effect. The safest thing to do for mobile browsers is to design pages that degrade gracefully without JS or CSS. It's up to you whether that's worth the effort or not. In a few years, hopefully the only rendering differences will be the screen size limits of the phones. A: You can install Opera Mini on an emulator like the Java WTK and test mobile rendering on a PC. One drawback is that Opera Mini still works through a proxy, so debugging local files/sites won't work - you have to upload your site to a world-accessible server. Just google it. A: It depends entirely on the phone. If you want to support every single device out there, don't even bother with CSS or JavaScript since neither will work (or will do something completely non-standard) on 99% of devices. If you are only targeting high-end devices, like the iPhone or the latest Series 60 Nokias, you should be able to get away with limited JS and CSS. Some browser emulators that I know of: * *Openwave. *Nokia tools There are many more manufacturers that simply do not have any tools at all (I dare you to try and find a developer site for LG) so you need to get access to the physical handsets if you want to be sure the site appears as it should. DeviceAnywhere is a superb tool if you have the cash. It was extremely laggy the last time I used it about a year and a half ago. Plus it is pure Java so is a dog on any machine. But it is arguably the single best mobile development tool available and, believe you me, I've tried a lot. A: BlackBerry devices with OS 4.5 or older will not handle Javascript or CSS very well, if at all. Devices with OS 4.6 and higher (Bold, Pearl Flip, Storm, etc..) come with a new rendering engine which has much better support for Javascript, DOM, and CSS. It's not perfect but it should render most pages quite well. You can download the BlackBerry simulator for these devices from their developer website and try it out. Since it runs the same code as on the actual device it's an excellent representation of what you can expect to see on-device.
{ "language": "en", "url": "https://stackoverflow.com/questions/100519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Interface type method parameter implementation Is it possible to do something like this: interface IDBBase { DataTable getDataTableSql(DataTable curTable,IDbCommand cmd); ... } class DBBase : IDBBase { public DataTable getDataTableSql(DataTable curTable, SqlCommand cmd) { ... } } I want to use the interface to implement different providers (MS-SQL, Oracle, ...); in it there are some signatures to be implemented in the corresponding classes that implement it. I also tried like this: class genClass<typeOj> { typeOj instOj; public genClass(typeOj o) { instOj=o; } public typeOj getType() { return instOj; } ... } interface IDBBase { DataTable getDataTableSql(DataTable curTable,genClass<idcommand> cmd); ... } class DBBase : IDBBase { public DataTable getDataTableSql(DataTable curTable, genClass<SqlCommand> cmd) { ... } } A: No, it's not possible. A method must have the same signature as the one declared in the interface. However you can use type parameter constraints: interface IDBClass<T> where T:IDbCommand { void Test(T cmd); } class DBClass:IDBClass<SqlCommand> { public void Test(SqlCommand cmd) { } } A: Covariance and contravariance are not widely supported as of C# 3.0, except for assigning method groups to delegates. You can emulate it a bit by using private interface implementation and calling a public method with more specific parameters: class DBBase : IDBBase { DataTable IDBBase.getDataTableSql(DataTable curTable, IDbCommand cmd) { return getDataTableSql(curTable, (SqlCommand)cmd); // of course you should do some type checks } public DataTable getDataTableSql(DataTable curTable, SqlCommand cmd) { ... } } A: Try compiling it. The compiler will report an error if DBBase doesn't implement IDBBase. A: No, it's not possible. I tried compiling this: interface Interface1 { } class Class1 : Interface1 {} interface Interface2 { void Foo(Interface1 i1);} class Class2 : Interface2 {void Foo(Class1 c1) {}} And I got this error: 'Class2' does not implement interface member 'Interface2.Foo(Interface1)'
{ "language": "en", "url": "https://stackoverflow.com/questions/100533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I set specific environment variables when debugging in Visual Studio? On a class library project, I set the "Start Action" on the Debug tab of the project properties to "Start external program" (NUnit in this case). I want to set an environment variable in the environment this program is started in. How do I do that? (Is it even possible?) EDIT: It's an environment variable that influences all .NET applications (COMplus_Version, it sets the runtime version) so setting it system wide really isn't an option. As a workaround I just forced NUnit to start in right .NET version (2.0) by setting it in nunit.exe.config, though unfortunately this also means all my .NET 1.1 unit tests are now also run in .NET 2.0. I should probably just make a copy of the executable so it can have its own configuration file... (I am keeping the question open (not accepting an answer) in case someone does happen to find out how (it might be useful for other purposes too after all...)) A: In Visual Studio 2008 and Visual Studio 2005 at least, you can specify changes to environment variables in the project settings. Open your project. Go to Project -> Properties... Under Configuration Properties -> Debugging, edit the 'Environment' value to set environment variables. For example, if you want to add the directory "c:\foo\bin" to the path when debugging your application, set the 'Environment' value to "PATH=%PATH%;c:\foo\bin". A: Visual Studio 2003 doesn't seem to allow you to set environment variables for debugging. What I do in C/C++ is use _putenv() in main() and set any variables. Usually I surround it with a #if defined DEBUG_MODE / #endif to make sure only certain builds have it. _putenv("MYANSWER=42"); I believe you can do the same thing with C# using os.putenv(), i.e. os.putenv('MYANSWER', '42'); These will set the envrironment variable for that shell process only, and as such is an ephemeral setting, which is what you are looking for. By the way, its good to use process explorer (http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx), which is a sysinternals tool. You can see what a given process' copy of the environment variables is, so you can validate that what you set is what you got. A: Starting with NUnit 2.5 you can use /framework switch e.g.: nunit-console myassembly.dll /framework:net-1.1 This is from NUnit's help pages. A: In Visual Studio for Mac and C# you can use: Environment.SetEnvironmentVariable("<Variable_name>", "<Value>"); But you will need the following namespace using System.Collections; you can check the full list of variables with this: foreach (DictionaryEntry de in Environment.GetEnvironmentVariables()) Console.WriteLine(" {0} = {1}", de.Key, de.Value); A: In Visual Studio 2019 right-click your project, choose Properties. In the project properties window, select the Debug tab. Then, under Environment variables change the value of your environment from Development to Production or other environments. For .Net Core and .Net 5 the property is called ASPNETCORE_ENVIRONMENT. A: If you are using VS 2019, Go to Project-> Properties->Debug. check here Add key and value for your variables. Then it is done. Check launchSettings.json in properties folder you should see your variable there. A: Set up a batch file which you can invoke. Pass the path the batch file, and have the batch file set the environment variable and then invoke NUnit. A: If you can't use bat files to set up your environment, then your only likely option is to set up a system wide environment variable. 
You can find these by doing * *Right click "My Computer" *Select properties *Select the "advanced" tab *Click the "environment variables" button *In the "System variables" section, add the new environment variable that you desire *"Ok" all the way out to accept your changes I don't know if you'd have to restart Visual Studio, but it seems unlikely. HTH A: As environments are inherited from the parent process, you could write an add-in for Visual Studio that modifies its environment variables before you perform the start. I am not sure how easy that would be to put into your process. A: In Visual Studio 2022, go to Solution Explorer and right-click the project file. Then, click on the Debug link at the left side. Then, click on "Open debug and launch profiles UI". Then, you can add new variables into the field in the Environment Variables section. A: In VS 2022 for .NET 5 and 6 you can set environment variables under properties of project -> Debug -> under General click on 'Open debug launch profiles UI' and scroll down to 'Environment variables' A: I prefer to keep all such definitions in the make files, i.e. in the .*proj or .props - because these are under SCM. I avoid the VS GUI property dialogs. A lot of the config you write there goes into some .user, .suo or so, which is usually not under SCM. E.g. in case of environment variables you could write (using a text editor) something like the following in your .vcxproj: <PropertyGroup> <LocalDebuggerEnvironment Condition="'$(Configuration)'=='Debug'"> ANSWER=42 RUNTIME_DIR="$(g_runtime_dir)" COLOR=octarin </LocalDebuggerEnvironment> </PropertyGroup> NOTE that you can use MSBuild Conditions and other build properties to define the environment variables. NOTE: this works for me with VS2013 and VS2019. I think it is the same for other VS + MSBuild versions. A: You can set it at Property > Configuration Properties > Debugging > Environment
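For the VS 2019/2022 answers above, the values end up in Properties/launchSettings.json (for .NET Core / .NET 5+ style projects), which you can also edit by hand. A minimal example; the profile name is a placeholder and the COMplus_Version value is only illustrative of the variable from the original question:
{
  "profiles": {
    "MyApp": {
      "commandName": "Project",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        "COMplus_Version": "v2.0.50727"
      }
    }
  }
}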
{ "language": "en", "url": "https://stackoverflow.com/questions/100543", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "86" }
Q: Which (third-party) debug visualizers for Visual Studio 2005/2008 do you use? I guess this topic is often overlooked, but it is rather useful when debugging your code. Just today I stumbled across this simple yet effective visualizer that visualizes images (it's on a German blog, but I guess the code content is self-explanatory): link text I would like to know which debug visualizers you use in your daily work with VS2005/2008. A: I use Mole. Mole was designed to not only allow the developer to view objects or data, but to also allow the developer to drill into properties of those objects and then edit them. Mole allows unlimited drilling into objects and sub-objects. A: Also check out Xml Visualizer v.2 (http://codeplex.com/XmlVisualizer) A: There was a sample on an MSDN blog for the Microsoft.Xna.Framework.Matrix, I think it was. I later made my own, but it was still good. A: Since I do a lot with Graphics and GDI, I found the Graphic Debug Visualizer invaluable. The Bitmap Visualizer it is based on is good too, however I had to recompile it for Visual Studio 2008 (and change the references to the various Visual Studio extension DLLs).
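If you would rather write one than install one, the basic VS 2005/2008 visualizer shape is quite small. This is a bare-bones sketch; the class name, target type and dialog are arbitrary, and the compiled assembly has to be dropped into the Visualizers folder (e.g. My Documents\Visual Studio 2008\Visualizers) before the debugger picks it up.
using System.Windows.Forms;
using Microsoft.VisualStudio.DebuggerVisualizers;

[assembly: System.Diagnostics.DebuggerVisualizer(
    typeof(MyVisualizers.StringVisualizer),
    Target = typeof(string),
    Description = "My string visualizer")]

namespace MyVisualizers
{
    public class StringVisualizer : DialogDebuggerVisualizer
    {
        protected override void Show(IDialogVisualizerService windowService,
                                     IVisualizerObjectProvider objectProvider)
        {
            // GetObject() deserializes the debuggee-side value into the debugger process.
            string value = (string)objectProvider.GetObject();
            MessageBox.Show(value, "String visualizer");
        }
    }
}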
{ "language": "en", "url": "https://stackoverflow.com/questions/100548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How can I perform a Google search on different Google datacentres? I would like to get up-to-date information on Google's index of a website, and it seems that results vary depending on which datacentre happens to process your search query. A: You can use one of the many tools designed specifically for this. One of them is Google Datacenter Search at iWebTool. (It's really all about getting a list of data center IPs and sending the same GET variable to them as to google.com.)
{ "language": "en", "url": "https://stackoverflow.com/questions/100557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What's the best automation or scripting tool to automate repetitive tasks with applications? I realize you can script Microsoft Office apps, but I'm looking for something more general that I can apply to other apps, such as Adobe Acrobat, web browsers and other apps with no scripting ability. I've used AutoIt but it's a bit clunky, especially when trying to debug why a script fails or stalls. Also, getting the timing of actions, such as clicking a button, or selecting a menu item correctly can be a pain. Are there build tools that could be used for this purpose? A: I recommend AutoHotKey. Its syntax is not pretty, but most of the times you don't have to concoct your own scripts, because its community is so large and well organized. Even if you do, the documentation is extensive and good and the forums will answer your questions quickly. The developer is active and responsive, which means that bugs get fixed quickly, and new features are being considered and added. Since I began using AHK I don't imagine doing without it - it allows to make life on Windows simpler in so many ways. You can also employ the COM interface from Python and other scripting languages. It is more complex, but you get to use a more powerful language. A: I love AutoHotkey (small k...) too, but beside its odd syntax, it has the same lack of debugging tools... Basically, that's "show msgbox alerts, send strings to file or debugview, trace". Which is OK for most cases, since you rarely write long and complex applications with these tools. In both tools, and probably all macro softwares, "timing of actions" will be hard to get anyway, because events are asynchronous: most of the time, you don't wait a given time, but you wait for a window to appear. Hoping it is the right one! There are other automation tools, like Ranorex (I didn't tested it), you can even use some scripting language (Lua, Python) with a library to send messages (WM_XXX) and another to call WinAPI... But tools like AutoIt and AutoHotkey have the advantage of having been extensively tested, so they can handle a large number of behaviors/issues (like waiting for clipboard data to be available, etc.). A: Might be a bit much for your needs but AutoMate is very robust and easy to use. Doesn't require any scripting skills as most tasks can be constructed via drag and drop http://www.networkautomation.com/sales/scripting/
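For a flavour of the AutoHotkey syntax the answers are describing, a tiny v1-style script might look like this; the hotkey, window titles and text are arbitrary:
; Win+N: launch Notepad, wait for its window, then type a line into it.
#n::
Run, notepad.exe
WinWaitActive, Untitled - Notepad
Send, Automated by AutoHotkey{Enter}
return
As the second answer notes, the WinWaitActive line is what deals with the "timing of actions" problem: the script blocks until the target window actually becomes active instead of sleeping for a guessed interval.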
{ "language": "en", "url": "https://stackoverflow.com/questions/100592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Best resources for converting C/C++ dll headers to Delphi? A rather comprehensive site explaining the difficulties and solutions involved in using a dll written in c/c++ and the conversion of the .h header file to delphi/pascal was posted to a mailing list I was on recently, so I thought I'd share it, and invite others to post other useful resources for this, whether they be links, conversion tools, or book/paper titles. One resource per answer please, so we'll end up with the most popular/best resources bubbling to the top. A: Article at Rudy's Delphi Corner A: Also, CodeGear hosts a rudimentary translation tool called CToPas (written by Ural Gunaydin). A: I would like to highlight the Jedi Api Library, it is the Delphi translation of the Windows SDK headers. Might save you a lot of work if you need to translate headers from the SDK and is of course a good sample of conversions! A: Since FreePascal is aimed at Delphi compatibiltiy among other things, i think H2Pas may be helpful too. https://www.freepascal.org/tools/h2pas.var A: Over at Rudy's Delphi Corner, he has an excellent article about the pitfalls of converting C/C++ to Delphi. In my opinion, this is essential information when attempting this task. Here is the description: This article is meant for everyone who needs to translate C/C++ headers to Delphi. I want to share some of the pitfalls you can encounter when converting from C or C++. This article is not a tutorial, just a discussion of frequently encountered problem cases. It is meant for the beginner as well as for the more experienced translator of C and C++. He also wrote a "Conversion Helper Package" that installs into the Delphi IDE which aids in converting C/C++ code to Delphi: (source: rvelthuis.de) His other relevant articles on this topic include: * *Using C++ Objects in Delphi *Using C object files in Delphi A: HeadConv from DrBob is used quite a lot too, although I concur with Graza that manual translation is best. A: use this option so the byte alignment is the same as C/C++ and then you don't need to add padding bytes in structs. {$MINENUMSIZE 4}
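As a tiny illustration of what a finished translation tends to look like (the function, record and DLL names here are invented, not taken from any of the linked resources):
// C header:
//   typedef struct { int id; char name[32]; } WIDGET;
//   int __stdcall GetWidget(int id, WIDGET *widget);

// Delphi translation (fragment of a unit's interface section):
type
  TWidget = record
    id: Integer;
    name: array[0..31] of AnsiChar;
  end;
  PWidget = ^TWidget;

function GetWidget(id: Integer; widget: PWidget): Integer; stdcall;
  external 'widgets.dll' name 'GetWidget';
The usual traps from the articles above show up even in something this small: calling convention (stdcall vs cdecl), char vs AnsiChar/WideChar, and record alignment, which is where directives like the {$MINENUMSIZE} one mentioned above come in.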
{ "language": "en", "url": "https://stackoverflow.com/questions/100596", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Can you turn off Visual Studio 2008 Query formatting? Is it possible to turn off the query re-formatting that happens when you edit a query for a database in Visual Studio? (i.e. right-click a data source and select New Query) This is happening when we are writing SQL queries against a SQL Compact 3.5 database. It's rather irritating when your carefully indented and formatted query is munged into Visual Studio's formatting (which is illegible!). I cannot find any setting in the options dialog. A: I use the Add New "Sql Script" instead of the Query to prevent re-formatting but still keep the syntax highlighting. A: Just to close this one off, as far as I can tell, there is no way to do this.
{ "language": "en", "url": "https://stackoverflow.com/questions/100614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I get the mac address or IPAddress from a cisco callmanager? Is there any way to retrieve either the mac address or the IP address of phones on a Cisco server using Callmanager version 6 via the axl with VB.net? The server can retrieve the IP address itself to use the phone and it's not in the database information retrieved from the server. A: You can download the WSDL file from your CallManager server. Then with the included methods, you can make a call to getPhone and it's pretty simple from there. If you want to do it in a more manual way, you can use the method from this blog post: http://blog.crowe.co.nz/Archive/2008/October (See the section "Getting the IP Addresses of Phones from a Cisco Call Manager v6.x using C#")
{ "language": "en", "url": "https://stackoverflow.com/questions/100620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Flash should open window in new tab, but instead it opens a new pop up on Mac Using target="_blank" in navigateToURL, Firefox on Windows opens the page in a new tab, but Firefox on Mac opens a 'popup'. How can I make the window open in a new tab in Firefox on Mac as well? A: Check your Firefox preferences >> Tabs >> New windows should be opened in (a new window | a new tab). Do you have different settings for your Firefox on Windows and on your Mac? A: That is a browser preference, not an ActionScript property. A: This is most likely a bug in the browser and/or plug-in. My suggestion would be to try telling JavaScript to open the window, using ExternalInterface. This may be more likely to trigger a pop-up blocker though.
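A sketch of that ExternalInterface suggestion in ActionScript 3; whether the result is a tab or a window is still ultimately decided by the browser's own settings and pop-up blocker:
import flash.external.ExternalInterface;
import flash.net.URLRequest;
import flash.net.navigateToURL;

function openLink(url:String):void {
    if (ExternalInterface.available) {
        // Ask the hosting page's JavaScript to open the URL.
        ExternalInterface.call("window.open", url, "_blank");
    } else {
        // Fall back to the plain Flash call.
        navigateToURL(new URLRequest(url), "_blank");
    }
}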
{ "language": "en", "url": "https://stackoverflow.com/questions/100622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Python on Windows - how to wait for multiple child processes? How to wait for multiple child processes in Python on Windows, without active wait (polling)? Something like this almost works for me: proc1 = subprocess.Popen(['python','mytest.py']) proc2 = subprocess.Popen(['python','mytest.py']) proc1.wait() print "1 finished" proc2.wait() print "2 finished" The problem is that when proc2 finishes before proc1, the parent process will still wait for proc1. On Unix one would use waitpid(0) in a loop to get the child processes' return codes as they finish - how to achieve something like this in Python on Windows? A: Building on zseil's answer, you can do this with a mix of subprocess and win32 API calls. I used straight ctypes, because my Python doesn't happen to have win32api installed. I'm just spawning sleep.exe from MSYS here as an example, but clearly you could spawn any process you like. I use OpenProcess() to get a HANDLE from the process' PID, and then WaitForMultipleObjects to wait for any process to finish. import ctypes, subprocess from random import randint SYNCHRONIZE=0x00100000 INFINITE = -1 numprocs = 5 handles = {} for i in xrange(numprocs): sleeptime = randint(5,10) p = subprocess.Popen([r"c:\msys\1.0\bin\sleep.exe", str(sleeptime)], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=False) h = ctypes.windll.kernel32.OpenProcess(SYNCHRONIZE, False, p.pid) handles[h] = p.pid print "Spawned Process %d" % p.pid while len(handles) > 0: print "Waiting for %d children..." % len(handles) arrtype = ctypes.c_long * len(handles) handle_array = arrtype(*handles.keys()) ret = ctypes.windll.kernel32.WaitForMultipleObjects(len(handle_array), handle_array, False, INFINITE) h = handle_array[ret] ctypes.windll.kernel32.CloseHandle(h) print "Process %d done" % handles[h] del handles[h] print "All done!" A: Twisted has an asynchronous process-spawning API which works on Windows. There are actually several different implementations, many of which are not so great, but you can switch between them without changing your code. A: Twisted on Windows will perform an active wait under the covers. If you don't want to use threads, you will have to use the win32 API to avoid polling. Something like this: import win32process import win32event # Note: CreateProcess() args are somewhat cryptic, look them up on MSDN proc1, thread1, pid1, tid1 = win32process.CreateProcess(...) proc2, thread2, pid2, tid2 = win32process.CreateProcess(...) thread1.close() thread2.close() processes = {proc1: "proc1", proc2: "proc2"} while processes: handles = processes.keys() # Note: WaitForMultipleObjects() supports at most 64 processes at a time index = win32event.WaitForMultipleObjects(handles, False, win32event.INFINITE) finished = handles[index] exitcode = win32process.GetExitCodeProcess(finished) procname = processes.pop(finished) finished.close() print "Subprocess %s finished with exit code %d" % (procname, exitcode) A: You can use psutil: >>> import subprocess >>> import psutil >>> >>> proc1 = subprocess.Popen(['python','mytest.py']) >>> proc2 = subprocess.Popen(['python','mytest.py']) >>> ls = [psutil.Process(proc1.pid), psutil.Process(proc2.pid)] >>> >>> gone, alive = psutil.wait_procs(ls, timeout=3) 'gone' and 'alive' are lists indicating which processes are gone and which ones are still alive. Optionally you can specify a callback which gets invoked every time one of the watched processes terminates: >>> def on_terminate(proc): ... print "%s terminated" % proc ... 
>>> gone, alive = psutil.wait_procs(ls, timeout=3, callback=on_terminate) A: It might seem overkill, but, here it goes:
import Queue, thread, subprocess

results = Queue.Queue()

def process_waiter(popen, description, que):
    try:
        popen.wait()
    finally:
        que.put((description, popen.returncode))

process_count = 0

proc1 = subprocess.Popen(['python', 'mytest.py'])
thread.start_new_thread(process_waiter, (proc1, "1 finished", results))
process_count += 1

proc2 = subprocess.Popen(['python', 'mytest.py'])
thread.start_new_thread(process_waiter, (proc2, "2 finished", results))
process_count += 1

# etc

while process_count > 0:
    description, rc = results.get()
    print "job", description, "ended with rc =", rc
    process_count -= 1
A: you can use psutil
import psutil

with psutil.Popen(["python", "mytest.py"]) as proc1, psutil.Popen(["python", "mytest.py"]) as proc2:
    gone, alive = psutil.wait_procs([proc1, proc2], timeout=3)
'gone' and 'alive' are lists indicating which processes are gone and which ones are still alive. Optionally you can specify a callback which gets invoked every time one of the watched processes terminates:
def on_terminate(proc):
    print "%s terminated" % proc

gone, alive = psutil.wait_procs(ls, timeout=3, callback=on_terminate)
{ "language": "en", "url": "https://stackoverflow.com/questions/100624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Licensing Websites - How does it work? I've been looking at several sites that offer a form of "linkware" license where you get the website for free but need to keep all links to the developers site in place. Purchasing a license key and adding it to the site (either in a database or some form of config file) removes these links. I was wondering if anyone has had any experience of running a system like this, specifically how do you generate and check the license keys? I'm thinking of applying a similar model to something I'm working on so any examples in "Classic" ASP would be most appreciated. A: Generally licences work using a public-key system. Your licence string is simply some info (perhaps with info on which domain name this licence is valid for, for example), signed by your private key. The web app contains the public key, which is used to check the validity of the signature. I'm sure there are other ways, but this seems to be one of the more robust ones that I know of. :-) I haven't coded anything in ASP, so I have no examples for you, sorry.
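To illustrate the public-key scheme described above in .NET terms (not Classic ASP, and with the key handling stripped right down): the vendor signs the licence data with a private RSA key it never ships, and the site only needs the public key to check it.
using System;
using System.Security.Cryptography;
using System.Text;

class LicenseSketch
{
    static void Main()
    {
        byte[] data = Encoding.UTF8.GetBytes("domain=example.com;expires=2010-01-01");

        using (RSACryptoServiceProvider vendorKey = new RSACryptoServiceProvider(2048))
        {
            // Vendor side: sign the licence data with the private key.
            byte[] signature = vendorKey.SignData(data, new SHA1CryptoServiceProvider());
            string publicKeyXml = vendorKey.ToXmlString(false);   // public part only

            // Customer side: verify using nothing but the public key.
            using (RSACryptoServiceProvider siteKey = new RSACryptoServiceProvider())
            {
                siteKey.FromXmlString(publicKeyXml);
                bool valid = siteKey.VerifyData(data, new SHA1CryptoServiceProvider(), signature);
                Console.WriteLine(valid ? "licence ok - hide the links" : "licence invalid - keep the links");
            }
        }
    }
}
In the linkware scenario, "data" would be whatever the licence covers (the licensed domain, an expiry date), the signature would be the licence key you sell, and the template would only remove its back-links when the check succeeds.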
{ "language": "en", "url": "https://stackoverflow.com/questions/100629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: "MySQL server has gone away" with Ruby on Rails After our Ruby on Rails application has run for a while, it starts throwing 500s with "MySQL server has gone away". Often this happens overnight. It's started doing this recently, with no obvious change in our server configuration. Mysql::Error: MySQL server has gone away: SELECT * FROM `widgets` Restarting the mongrels (not the MySQL server) fixes it. How can we fix this? A: Try ActiveRecord::Base.connection.verify! in Ruby on Rails 4. Verify pings the server and reconnects if it is not connected. A: Ruby on Rails 2.3 has a reconnect option for your database connection: production: # Your settings reconnect: true See: * *Ruby on Rails 2.3 Release Notes, sub section 4.8 Reconnecting MySQL Connections. *MySQL auto-reconnect revisited Good luck! A: I had this problem when sending really large statements to MySQL. MySQL limits the size of statements and will close the connection if you go over the limit. set global max_allowed_packet = 1048576; # 2^20 bytes (1 MB) was enough in my case A: As the other contributors to this thread have said, it is most likely that MySQL server has closed the connection to your Ruby on Rails application because of inactivity. The default timeout is 28800 seconds, or 8 hours. set-variable = wait_timeout=86400 Adding this line to your /etc/my.cnf will raise the timeout to 24 hours http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#option_mysqld_wait_timeout. Although the documentation doesn't indicate it, a value of 0 may disable the timeout completely, but you would need to experiment as this is just speculation. There are however three other situations that I know of that can generate that error. The first is the MySQL server being restarted. This will obviously drop all the connections, but as the MySQL client is passive, and this won't be noticed till you do the next query. The second condition is if someone kills your query from the MySQL command line, and this also drops the connection, because it could leave the client in an undefined state. The last is if your MySQL server restarts itself due to a fatal internal error. That is, if you are doing a simple query against a table and instantly see 'MySQL has gone away', I'd take a close look at your server's logs to check for hardware error, or database corruption. A: This is probably caused by the persistent connections to MySQL going away (time out is likely if it's happening over night) and Ruby on Rails is failing to restore the connection, which it should be doing by default: In the file vendor/rails/actionpack/lib/action_controller/dispatcher.rb is the code: if defined?(ActiveRecord) before_dispatch { ActiveRecord::Base.verify_active_connections! } to_prepare(:activerecord_instantiate_observers) {ActiveRecord::Base.instantiate_observers } end The method verify_active_connections! performs several actions, one of which is to recreate any expired connections. The most likely cause of this error is that this is because a monkey patch has redefined the dispatcher to not call verify_active_connections!, or verify_active_connections! has been changed, etc. A: First, determine the max_connections in MySQL: show variables like "max_connections"; You need to make sure that the number of connections you're making in your Ruby on Rails application is less than the maximum allowed number of connections. Note that extra connections can be coming from your cron jobs, delayed_job processes (each would have the same pool size in your database.yml), etc. 
Monitor the SQL connections as you go through your application, run processes, etc. by doing the following in MySQL:
show status where variable_name = 'Threads_connected';
You might want to consider closing connections after a Thread finishes execution, as database connections do not get closed automatically (I think this is less of an issue with Ruby on Rails 4 applications because of the connection Reaper):
Thread.new do
  begin
    # Thread work here
  ensure
    begin
      if (ActiveRecord::Base.connection && ActiveRecord::Base.connection.active?)
        ActiveRecord::Base.connection.close
      end
    rescue
    end
  end
end
A: The connection to the MySQL server is probably timing out. You should be able to increase the timeout in MySQL, but for a proper fix, have your code check that the database connection is still alive, and re-connect if it's not.
A: Do you monitor the number of open MySQL connections or threads? What are your mysql.ini settings for max_connections?
mysql> show status;
Look at Connections, Max_used_connections, Threads_connected, and Threads_created. You may need to increase the limits in your MySQL configuration, or perhaps rails is not closing the connection properly*.
Note: I've only used Ruby on Rails briefly... The MySQL documentation for server status is at http://dev.mysql.com/doc/refman/5.0/en/server-status-variables.html.
A: Using reconnect: true in the database.yml will cause the database connection to be re-established AFTER the ActiveRecord::StatementInvalid error is raised (as Dave Cheney mentioned). Unfortunately adding a retry on the database operation seemed necessary to guard against the connection timeout:
begin
  do_some_active_record_operation
rescue ActiveRecord::StatementInvalid => e
  Rails.logger.debug("Got statement invalid #{e.message} ... trying again")
  # Second attempt, now that the db connection is re-established
  do_some_active_record_operation
end
A: I had this problem in a Ruby on Rails 3 application, using the mysql2 gem. I copied out the offending query and tried running it in MySQL directly, and I got the same error, "MySQL server has gone away.". The query in question was very, very large: an insert of over 1 MB. The field I was trying to insert into was a TEXT column, whose maximum size is 64 KB. Rather than throwing an error, the connection went away. I increased the size of the field and got the same thing, so I'm still not sure what the exact issue was. The point is that it was in the database due to some strange query. Anyway!
A: Something else to check is that your Unicorn config is correct. See the before_fork and after_fork handling of the ActiveRecord connection here: https://gist.github.com/nebiros/2776085#file-unicorn-rb
A: While forking in Rails: for anyone running into this while forking in Rails, try clearing the existing connections before forking and then establishing a new connection for each fork, like this:
# Clear existing connections before forking to ensure they do not get inherited.
::ActiveRecord::Base.clear_all_connections!
fork do
  # Establish a new connection for each fork.
  ::ActiveRecord::Base.establish_connection
  # The rest of the code for each fork...
end
See this StackOverflow answer here: https://stackoverflow.com/a/8915353/293280
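Putting the reconnect and retry suggestions above together, here is a minimal Ruby sketch; the with_mysql_retry helper and the Widget model are invented for the example, and verify! behaviour differs slightly between Rails versions, so treat it as an illustration rather than a drop-in fix:
# Retry a block once after re-establishing the ActiveRecord connection.
# with_mysql_retry and Widget are made-up names for this example;
# connection.verify! pings MySQL and reconnects if the connection has dropped.
def with_mysql_retry
  attempts = 0
  begin
    yield
  rescue ActiveRecord::StatementInvalid => e
    raise if attempts > 0 || e.message !~ /server has gone away/
    attempts += 1
    ActiveRecord::Base.connection.verify!
    retry
  end
end

with_mysql_retry { Widget.first }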
{ "language": "en", "url": "https://stackoverflow.com/questions/100631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: Why do you not declare several variables of the same type on the same line?
Why is it bad practice to declare variables on one line? e.g.
private String var1, var2, var3
instead of:
private String var1;
private String var2;
private String var3;
A: In C/C++, you also have the problem that the * used to indicate a pointer type only applies to the directly following identifier. So a rather common mistake of inexperienced developers is to write
int* var1, var2, var3;
expecting all three variables to be of type 'int pointer', whereas for the compiler this reads as
int* var1;
int var2;
int var3;
making only var1 a pointer.
A: With separate lines, you have the opportunity to add a comment on each line describing the use of the variable (if it isn't clear from its name).
A: Because in some languages, var2 and var3 in your example would not be strings, they would be variants (untyped).
A: Why is that bad practice? I don't think it is, as long as your code is still readable.
//not much use
int i, j, k;
//better
int counter, childCounter, percentComplete;
A: To be honest I am not against it. I think that it's perfectly feasible to group similar variables on the same line, e.g.
float fMin, fMax;
however I steer clear when the variables are unrelated, e.g.
int iBalance, iColor;
A: Relevance. Just because two variables are of type String does not mean they are closely related to each other. If the two (or more) variables are closely related by function, rather than variable type, then maybe they could be declared together. i.e. only if it makes sense for a reader of your program to see the two variables together should they actually be placed together
A: Here are my reasons: * *Readability, easier to spot if you know there's only one on each line *Version control, less intra-line changes, more single-line additions, changes, or deletions, easier to merge from one branch to another
A: In my opinion, the main goal of having each variable on a separate line would be to facilitate the job of Version Control tools. If several variables are on the same line you risk having conflicts for unrelated modifications by different developers.
A: What about a case such as:
public static final int NORTH = 0, EAST = 1, SOUTH = 2, WEST = 3;
Is that considered bad practice as well? I would consider that okay as it counters some of the points previously made: * *they would all definitely be the same type (in my statically typed Java-world) *comments can be added for each *if you have to change the type for one, you probably have to do it for all, and all four can be done in one change So in an (albeit smelly code) example, are there reasons you wouldn't do that?
A: In C++:
int * i, j;
i is of type int *, j is of type int. The distinction is too easily missed. Besides, having them on one line each makes it easier to add some comments later.
A: I think that there are various reasons, but they all boil down to the fact that the first form is just less readable and more prone to failure because a single line is doing more than one thing. And all that for no real gain, and don't you tell me you find two lines of saved space a real gain. It's a similar thing to what happens when you have
if ((foo = some_function()) == 0) {
  //do something
}
Of course this example is much worse than yours.
A: Agree with edg, and also because it is more readable and easier to maintain to have each variable on a separate line.
You immediately see the type, scope and other modifiers, and when you change a modifier it applies only to the variable you want - that avoids errors.
A: * *to be more apparent to you when using Version Control tools (covered by Michel) *to be more readable to you when you have the simplest overflow/underflow or compile error and your eyes fail to point out the obvious *to defend the opposite (i.e. multi-variable single-line declaration) has fewer pros ("code textual vertical visibility" being a singleton)
A: It is bad practice mostly when you can and want to initialize variables on the declaration. An example where this might not be so bad is:
string a, b;
if (Foo())
{
  a = "Something";
  b = "Something else";
}
else
{
  a = "Some other thing";
  b = "Out of examples";
}
A: Generally it is, for the version control and commenting reasons discussed by others, and I'd apply that in 95% of all cases. However, there are circumstances where it does make sense, for example if I'm coding graphics and I want a couple of variables to represent texture coordinates (always referenced by convention as s and t), then declaring them as
int s, t; // texture coordinates
IMHO enhances code readability, both by shortening the code and by making it explicit that these two variables belong together (of course some would argue for using a single point class variable in this case).
A: While attempting this question https://www.interviewbit.com/problems/remove-element-from-array/ I found that type 1 (int i,j;) gives Memory Limit Exceeded, while type 2 (int i; int j;) works perfectly fine.
Type 1: Gives Memory Limit Exceeded
int removeElement (int* A, int n1, int B) {
    int k=0, i;
    for(i=0;i<n1;i++)
        if(A[i]!=B) {
            A[k]=A[i];
            k++;
        }
    return k;
}
Whereas type 2 works perfectly fine:
int removeElement (int* A, int n1, int B) {
    int k=0;
    int i;
    for(i=0;i<n1;i++)
        if(A[i]!=B) {
            A[k]=A[i];
            k++;
        }
    return k;
}
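To make the C/C++ pointer point above concrete, a small sketch in plain C (nothing here is project-specific):
/* One declarator per line leaves no room for the classic pointer mistake,
   and gives each variable space for its own comment. */
int *first;   /* pointer to int */
int *second;  /* pointer to int */
int *third;   /* pointer to int */

/* The equivalent single-line form: the * must be repeated for every
   declarator, otherwise only the first variable is a pointer. */
int *p1, *p2, *p3;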
{ "language": "en", "url": "https://stackoverflow.com/questions/100633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Calculate Cyclomatic Complexity for Javascript
Are there any tools available for calculating Cyclomatic Complexity in Javascript? I've found it a very helpful metric in the past while working on server side code, and would like to be able to use it for the client side Javascript I write.
A: Since cyclomatic complexity is evaluated by counting keywords such as "if", "switch", "while", "for", "break", etc., any tool that works with C will do the job, like SourceMonitor: http://www.campwoodsw.com/sourcemonitor.html Actually, in JavaScript the more you try to modularize your code, the more you will slow it down, so take it with a grain of salt ;)
EDIT: I really can't understand what's going on with this answer. I get another downvote, when in my answer I name a good tool for calculating cyclomatic complexity in JavaScript, and this one in particular works very well. As for the second assertion, mine is a comment that comes from experience: I never say don't modularize your js code, I only say to pay attention when doing it, because often there is a tradeoff with speed, and when I talk of speed I mean that two different slowdowns can happen: at download time and at execution time (and on slow devices like PDAs/smartphones this is important). Tools like this often drive developers into writing more code while chasing the smallest possible index, but in js more code unfortunately means that slowdowns can happen, so overuse of these tools is bad. Surely these tools can give you hints about where your code can be improved, but you have to master how to use the tool and not blindly rely on it. So if you downvote me again, please write a comment in which you explain why you do so; the discussion can only benefit from this. Thank you, and sorry for the vent.
A: The new version of http://jshint.com is out and has a very good cyclomatic complexity calculator.
A: You can use the ccm tool from the ARCHIVE of blunck.info or the github repo jonasblunck/ccm. It supports JavaScript, C/C++ and C#. It's free, and runs on Windows (it can be run on Linux and Mac OS X as well - using the Mono framework).
A: There's now also Yardstick: https://github.com/calmh/yardstick It tries to calculate cyclomatic complexity for idiomatic Javascript, handling more cases than for example jscheckstyle.
A: I helped write a tool to perform software complexity analysis on JavaScript projects: complexity-report It reports a bunch of different complexity metrics: lines of code, number of parameters, cyclomatic complexity, cyclomatic density, Halstead complexity measures, the maintainability index, first-order density, change cost and core size. It is released under the MIT license and built using Node.js and the Esprima JavaScript parser. It can be installed via npm, like so:
npm i -g complexity-report
A: For completeness in the answers, I was looking for the same tool some time ago and didn't find anything that worked well for visualization, so I wrote plato. Sample reports for: * *jquery *grunt *marionettejs It uses phil's complexity-report (mentioned above) and also aggregates data from jshint (and eventually, others).
A: JSHint recently added support for calculating code metrics. You can set maximum values for: * *maxparams - the number of formal parameters allowed *maxdepth - how deeply nested code blocks should be *maxstatements - the number of statements allowed per function *maxcomplexity - the maximum cyclomatic complexity
Examples
Maximum number of formal parameters allowed per function
/*jshint maxparams:3 */
function login(request, onSuccess) {
  // ...
} // JSHint: Too many parameters per function (4). function logout(request, isManual, whereAmI, onSuccess) { // ... } Maximum number of nested code blocks allowed per function /*jshint maxdepth:2 */ function main(meaning) { var day = true; if (meaning === 42) { while (day) { shuffle(); if (tired) { // JSHint: Blocks are nested too deeply (3). sleep(); } } } } Maximum number of statements allowed per function /*jshint maxstatements:4 */ function main() { var i = 0; var j = 0; // Function declarations count as one statement. Their bodies // don't get taken into account for the outer function. function inner() { var i2 = 1; var j2 = 1; return i2 + j2; } j = i + j; return j; // JSHint: Too many statements per function. (5) }
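The answers above list maxcomplexity but don't show it in use, so here is a sketch in the same style as the other JSHint examples (the exact wording and placement of the warning may differ between JSHint versions):
/*jshint maxcomplexity:3 */
function classify(n) {
    // Each if adds one to the cyclomatic complexity (the base complexity is 1).
    if (n < 0) { return "negative"; }
    if (n === 0) { return "zero"; }
    if (n < 10) { return "small"; }
    if (n < 100) { return "medium"; } // JSHint warns here: complexity is 5, above the limit of 3.
    return "large";
}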
{ "language": "en", "url": "https://stackoverflow.com/questions/100645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "55" }
Q: Why do some wav files play in my c# directsound app but some don't?
I've got a c# application that plays simple wav files through directsound. With the test data I had, the code worked fine. However when I used real-world data, it produced a very unhelpful error on creation of the secondary buffer: "ArgumentException: Value does not fall within the expected range."
The test wavs had a 512kbps bit rate, 16bit audio sample size, and 32kHz audio sample rate. The new wavs are 1152kbps, 24bit and 48kHz respectively. How can I get directsound to cope with these larger values, or if not, how can I programmatically detect these values before attempting to play the file?
It's managed DirectX v9.00.1126 I'm using, and I've included some sample code below:
using DS = Microsoft.DirectX.DirectSound;
...
DS.Device device = new DS.Device();
device.SetCooperativeLevel(this, CooperativeLevel.Normal);
...
BufferDescription bufferDesc = new BufferDescription();
bufferDesc.ControlEffects = false;
...
try
{
    SecondaryBuffer sound = new SecondaryBuffer(path, bufferDesc, device);
    sound.Play(0, BufferPlayFlags.Default);
}
...
Additional info: the real-world wav files won't play in windows media player either, telling me a codec is needed to play the file, while they play fine in winamp.
Additional info 2: Comparing the bytes of the working test data and the bad real-world data, I can see that past the RIFF chunk, the bad data has a "bext" chunk, which the internet informs me is metadata associated with the broadcast audio extension, while the test data goes straight into a fmt chunk. There /is/ a fmt chunk in the bad data, so I don't know if it's badly formed or if the loaders should be looking further for fmt data. I can see if I can get some information on this rogue bext chunk from the people supplying me the data - if they can remove it my code may still work.
A: Not all soundcards support 24 bit sample playback, and even when they do, they often have to be exclusively opened in that mode. There is a similar issue with sample rates. Your soundcard may be operating at 44.1kHz, in which case 48kHz needs to be resampled to be played. I have written an open source .NET audio library called NAudio which will allow you to find out what sample rate and bit depth a given WAV file has. It also offers alternative ways of playing back audio (e.g. through the Wav... APIs), and the ability to resample files using the DMO resampler object.
A: In addition to the sampling issue, WAV is just a container format and the audio could be compressed in any of a myriad of audio formats (just like AVI is a container for video). So you could use a tool like GSpot to find out if your WAV is encoded in a non-standard format and install the codec. Winamp has more codecs installed by default than WMP, which would explain why Winamp plays it and WMP doesn't.
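On the "detect these values before attempting to play the file" part, below is a rough C# sketch of walking the RIFF chunks yourself, so that a leading "bext" chunk is simply skipped. It is illustrative only, has no error handling for malformed files, and the same information is available more robustly from a library such as NAudio:
using System.IO;
using System.Text;

static class WavFormatReader
{
    // Returns false if no "fmt " chunk was found.
    public static bool TryReadFormat(string path, out int channels, out int sampleRate, out int bitsPerSample)
    {
        channels = sampleRate = bitsPerSample = 0;
        using (BinaryReader reader = new BinaryReader(File.OpenRead(path)))
        {
            reader.ReadBytes(12); // "RIFF", overall size, "WAVE"
            while (reader.BaseStream.Position + 8 <= reader.BaseStream.Length)
            {
                string chunkId = Encoding.ASCII.GetString(reader.ReadBytes(4));
                int chunkSize = reader.ReadInt32();
                if (chunkId == "fmt ")
                {
                    reader.ReadInt16();                 // format tag (1 = PCM)
                    channels = reader.ReadInt16();
                    sampleRate = reader.ReadInt32();
                    reader.ReadInt32();                 // average bytes per second
                    reader.ReadInt16();                 // block align
                    bitsPerSample = reader.ReadInt16();
                    return true;
                }
                // Skip chunks we don't care about, e.g. "bext"; chunks are word-aligned.
                reader.BaseStream.Seek(chunkSize + (chunkSize & 1), SeekOrigin.Current);
            }
        }
        return false;
    }
}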
{ "language": "en", "url": "https://stackoverflow.com/questions/100654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Calculate Average Lines of Code per Method for Javascript Are there any tools available for calculating the average number of lines of code per method? I want to know the average size of each method, not just the total number of lines in the project. The per method count will allow me to measure how simple each method is. This will be calculated as part of the build process, and displayed on a dashboard. The idea being that we can see if the average size of each method is increasing. And this will flag the possibility that code complexity is increasing and we may need to think about refactoring. A: short fast and dirty : grep for ";", count the number of lines, this will give you an estimate of the number of statements. A: Do a recursive "for(i in this)" search through your project, and if the object (i) is a function, call "test.toString().split("\n").length". This counts the number of newlines in the function. If it is not a function, but an object, call this function in that object. Also count the number of functions you find, and then divide the total number of newlines by the total number of functions, and then you have the average. Edit function calculateMethodSize(obj){ var fcount = 0; var fsize = 0; for(i in obj){ if(obj[i] instanceof Function){ fcount++; fsize += obj[i].toString().split(";\n").length; }else if(obj[i] instanceof Object){ var ret = calculateMethodSize(obj[i]); fcount += ret.fcount; fsize += ret.fsize; } } return {fsize:fsize, fcount:fcount}; } var data = calculateMethodSize(this); var average = data.fsize / data.fcount; Be careful running this code though. If you run it with this, as I have done, then you might get a stack overflow (I did). A: I am not sure if it does that, but searching, after your previous post, what is cyclomatic complexity, I went to the related Wikipedia page which pointed to Code Analyzer. There they say: When counting for HTML or JSP files, it will count LoC correctly for javascript and vbscript code embedded within the <script> tag. I don't know if this count is dispatched per method, but it might be worth taking a look (it is a free tool). A: Define lines as either "\n" or ";", You could try a simple algorithm like the following: FOR each line in a javascript file (or chunk of text) IF the line starts with "function " THEN PUSH the first left-curly brace you find onto a stack WHILE the stack is non-empty PUSH any left-curly braces in the current line POP any left-curly braces when you encounter a right-curly brace Increment your line-count by 1 Increment your line counter (as mentioned in the FOR loop above) END WHILE Store your total lines for this function ELSE //ignore the line because it's probably a global var or blank END IF END FOR I don't know of a tool that can do this automatically. But, it would be fun to try to make one yourself. A: you probably want to clue in other metrics as well. any way you count the lines, just make sure it does not croak in face of functions defined without the "function" keyword or curly braces. real-world example: var negate = bind1st(compose, not); (here negate is a function built from functions bind1st, compose and not)
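The brace-counting pseudocode above can be turned into a small JavaScript sketch that works on source text rather than live objects. It only recognises declarations that use the function keyword, ignores strings and comments, and counts nested functions as part of their parent, so treat the result as an estimate:
function averageLinesPerFunction(source) {
    var lines = source.split("\n");
    var functionCount = 0, totalLines = 0, depth = 0, inFunction = false, sawBrace = false;
    for (var i = 0; i < lines.length; i++) {
        var line = lines[i];
        if (!inFunction && /\bfunction\b/.test(line)) {
            // Start of a new outermost function declaration or expression.
            inFunction = true;
            sawBrace = false;
            depth = 0;
            functionCount++;
        }
        if (inFunction) {
            totalLines++;
            var opens = (line.match(/{/g) || []).length;
            var closes = (line.match(/}/g) || []).length;
            if (opens > 0) { sawBrace = true; }
            depth += opens - closes;
            if (sawBrace && depth <= 0) { inFunction = false; } // body closed
        }
    }
    return functionCount === 0 ? 0 : totalLines / functionCount;
}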
{ "language": "en", "url": "https://stackoverflow.com/questions/100661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: ::SendMessage( )
LRESULT result = ::SendMessage(hWnd, s_MaxGetTaskInterface, (WPARAM)&pUnkReturn, 0);
The value of result after the call is 0. I expect it to return with a valid value in pUnkReturn, but it returns with a NULL value.
Necessary information before this call:
const UINT CMotionUtils::s_MaxGetTaskInterface = RegisterWindowMessage(_T("NI:Max:GetTaskInterface"));
The value of s_MaxGetTaskInterface I get here is 49896. The value of hWnd is also proper. I checked that with Spy++ (a Visual Studio tool). The Microsoft Spy++ Messages window shows me the following for this window:
<00001> 009F067C S message:0xC2E8 [Registered:"NI:Max:GetTaskInterface"]wParam:0224C2D0 lParam:00000000
<00002> 009F067C S message:0xC2E8 [Registered:"NI:Max:GetTaskInterface"]lResult:00000000
Please help me to get a valid address stored in pUnkReturn after the call.
A: I think the & in &pUnkReturn is needed, based on the Hungarian prefix. I expect pUnkReturn to have type IUnknown*. The message receiver will provide the IUnknown*. The address where it will store that IUnknown* is an IUnknown**. Hence, this code passes in &pUnkReturn and the message receiver writes to *(IUnknown**)wParam.
A: Is the destination hWnd in the same process? If not, you won't be able to pass (or return) a pointer through the message. Note that Windows implements marshalling for built-in messages.
A: When I Googled for NI:Max:GetTaskInterface I couldn't find anything. In general, how a window will handle a given message depends entirely on the window concerned. Does the window (specified by hWnd) even support the NI:Max:GetTaskInterface message?
A: You're going to have to provide more information - what is "GetTaskInterface" (Google provides no results)? SendMessage will return whatever value is returned from the WndProc that handles the message "s_MaxGetTaskInterface". If it's not handled, you will get zero back and your pointer will still be NULL.
A: You'll need to tell us what pUnkReturn is and how it's defined. You'll also need to tell us what the handler for s_MaxGetTaskInterface is expecting. If you expect the handler to populate whatever is pointed to by pUnkReturn, then you'll need to call SendMessage with (WPARAM)pUnkReturn; however, if the handler returns a pointer, then call it as you're doing now.
A: The problem is not with how you are calling SendMessage(). The problem is in your implementation of the message handler for the "NI:Max:GetTaskInterface" registered message. The value that SendMessage() returns is the same as the value that is returned from your message handler. If you need pUnkReturn to be an out-val, then your message handler must populate it. Let's see the code for your message handler.
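For reference, this is the kind of thing the receiving window has to do before the call can come back with anything useful. It is a sketch of the mechanism only: the real "NI:Max:GetTaskInterface" handler is internal to the target application, and GetTaskInterfaceSomehow() is an invented placeholder. Also remember that a raw pointer passed this way is only meaningful if sender and receiver live in the same process:
#include <windows.h>
#include <tchar.h>
#include <unknwn.h>

// Invented placeholder for however the application exposes its interface.
IUnknown* GetTaskInterfaceSomehow();

static const UINT s_MaxGetTaskInterface =
    RegisterWindowMessage(_T("NI:Max:GetTaskInterface"));

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == s_MaxGetTaskInterface && wParam != 0)
    {
        IUnknown** ppUnk = reinterpret_cast<IUnknown**>(wParam);
        *ppUnk = GetTaskInterfaceSomehow(); // handler must AddRef before handing the pointer out
        return 1;                           // non-zero result tells the sender it was handled
    }
    return ::DefWindowProc(hWnd, msg, wParam, lParam);
}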
{ "language": "en", "url": "https://stackoverflow.com/questions/100678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Menu Control: How to Make Whole Item Clickable, not Just Text I have a problem that confuses my users, being that although an item is highlighted (by the hover style) when the user mouses over it, they have to mouse over the actual item text, sometimes quite small compared to the item. Is there a way to make the whole item clickable? A: Add some padding to the A element? Or if it's in a menu contained within a block-level element, make the A display as block too: a { display: block; width: 100%; } A: If you only have the text in the <a> ... </a>, that's the only part that can be clicked on. Move your graphics, etc. inside the link.
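A slightly fuller sketch of the display:block approach; the markup and the .menu class name are invented for the example, but this is the usual pattern for list-based menus:
<ul class="menu">
  <li><a href="/reports">Reports</a></li>
  <li><a href="/settings">Settings</a></li>
</ul>

.menu li a {
    display: block;      /* the link fills the whole li, not just the text */
    padding: 6px 12px;   /* padding is part of the clickable area */
    text-decoration: none;
}
.menu li a:hover {
    background-color: #ddd;
}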
{ "language": "en", "url": "https://stackoverflow.com/questions/100689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Moving to ASP.NET - VB or C#?
We have a large ASP classic code base but I'm looking to do future development in ASP.NET (and potentially in the future port across what we have). The natural choice of language seems to be VB (existing code is VBScript) but am I being too hasty? Does the choice of language, in the long run, even make a difference?
A: When I moved from ASP3 to ASP.NET I finally chose to use C# instead of VB.NET. I wrote Basics for years (MS-Basic, GW-Basic, Quick-Basic, Visual Basic, VBScript) and it was very difficult to use VB.NET because I always tried to write things as I used to do with VB6 or VBScript. So C# was a better choice for me: I was not slowed by the weight of my habits. Also, C# was a new language (at the time) while the new VB.NET keywords gave it a Cobol smell :)
A: Also, don't forget that you can mix and match within a project!
A: I would choose the language with the syntax you are most comfortable with, as there is very little difference in functionality, although some say there is something of a culture difference between VB and C# developers. Another option is to learn both; the biggest obstacle in moving to .NET, in my opinion, is learning the huge .NET framework, while the actual language syntax you should be able to pick up fairly quickly.
A: The important thing is to learn .NET, not a particular language, because choosing C# or VB changes just the syntax. The logic behind them in the .NET environment is the same.
A: It's learning .Net (the framework) that takes the time. The specific language doesn't matter and is more a matter of taste. I'd normally recommend trying a few play projects in both languages (and any others that may take your fancy), and then deciding which one you're more comfortable with (or have more success with!)
A: First of all, don't port something for the sake of it. If it's working fine and there's little to be gained from porting it, leave it alone, and continue doing bugfixes and small enhancements in VBScript. If there's sufficient reason to move things across, then do it at that point, but you're much better off becoming familiar with the .Net version of web applications first rather than trying to learn that on the fly if you do the port upfront. It's much easier to learn the new concepts like PostBack and ViewState in a 'clean' environment. There's also less scope to break out of your old way of thinking as you port stuff (just making it work any old how rather than redesigning where needed). Ultimately there's not a great deal of difference in the long run; it's mostly perceptual and a matter of personal taste, but I'd propose learning C# first because its lack of familiarity will emphasise the fact that you're learning something new. Hopefully this will help you learn to do things the natural .Net way rather than the largely procedural VB(script) way. You're trying to unlearn as well as learn. Familiarity with some keywords will work against you. Echoing other benefits throughout some of the posts, and adding a few of my own: * *Future potential earnings: C# developers generally are worth more than VB.Net developers because of the perceptual difference. *Most of the open source .Net code out there is C#, and the quality of it generally (though not always) tends to be higher. *There are more Q&A and examples out there on the internet in C# than VB.Net. At the time I post this there are 1572 posts tagged with C# and only 185 with VB.Net right here on Stack Overflow. *C# OO keywords are reasonably standard, which makes it easier to read code in other OO languages.
VB.Net goes off and renames things for no good reason, e.g. abstract (C#, C++, Java and more...) vs MustInherit (VB.Net only). *It's generally accepted that once you're familiar with both languages it's a lot easier to visually parse C# code than VB.Net code *You won't get 'looked down' on by C#'ers and can help poke fun at the VB.Net people (if you so desire - the culture exists...) Once you've initially learned it in C#, it should be much easier to roll across to VB.Net than it would be the other way around. Invest in your own future.
A: You can do the same things in both C# and VB.NET. In the end it'd be a choice of which is easier to port to. Keep in mind that you shouldn't (IMHO) port ASP directly to ASP.NET, since you'd lose many of the features in .NET; this would include a fair amount of rewrite anyway. So I'd go with C#, since I think it's less verbose and easier to write and read.
A: .NET is really about the framework and the base class library. VB or C# - as Chris mentioned, there is really just a syntactical difference. Since you've done VBScript, migration to VB.NET is more natural - you can learn the new syntax, then move onto the Framework and BCL; when you are familiar with those you can learn the C# syntax as well - being bilingual is always good. 2sontek: There are also features in VB that aren't supported in C# - let's not go there. 80% (or more) of developers use only 20% of the features and the differences are very minor - they certainly don't fit into the 20%.
A: I develop in both but C# would be my preference. You can do the majority of things in both languages and I certainly wouldn't rewrite any VB projects in C#. However, with C#:
New language features generally come to C# first, e.g. lambda expressions in LINQ.
You can write code in C#.
C# will force you to be neater.
C# developers get paid on average more, and most larger projects will only be C#.
C# has similar syntax and function names to JavaScript.
There is some snobbery in the .NET community towards VB developers.
A: Not much of a difference, but there is some difference (mostly syntactical in nature). If your old site is VBScript, it's probably easier to go with VB.NET.
A: There are a few features in C# 3.5 that aren't supported in Visual Basic, but language choice is always a preference of the team and what you guys feel most productive in.
A: From experience it is easiest to go with where your skillset currently lies. If this is in VB6/VBScript etc. I would first look towards VB.NET. It also depends on any project timescales and delivery dates. You will be up to speed quicker in VB.NET. If you have time on your hands I would seriously consider C#, if only to give you a stronger foothold in the development market when you come to look for pastures new.
A: You may also want to look into C#. VB.Net and C# are both great for creating web applications; it really depends on the needs you will have. For example C# is great with events and VB.Net is great with XML. I'd say go with whatever your developers feel more comfortable with.
A: I was looking at a site recently that lists average UK wages based on technologies. Can't find the link now (typical!) but I remember that the average C# wage was around £5k higher than the average VB.NET wage. If you're not going to learn both (and there's really no reason not to, as the only real difference between them is syntax) I'd go for C#.
A: .NET or C#, it's up to you how you will solve the problem and what makes you more creative.
But I think that sooner or later VB.NET will be phased out and C# will be the only language that can be used in ASP.NET. I started by learning the .NET concepts, and then I changed my language to C# for ASP.NET MVC, so I suggest you go with C# because it will help you more in the future.
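Since several answers above say the difference is "just syntax", here is a tiny side-by-side sketch of the same class in both languages; the Widget class is invented purely for illustration, and the VB.NET auto-implemented property shown requires VB 2010 or later:
// C#
public abstract class Widget
{
    public string Name { get; set; }
    public abstract decimal Price();
}

/* The VB.NET equivalent:
Public MustInherit Class Widget
    Public Property Name As String
    Public MustOverride Function Price() As Decimal
End Class
*/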
{ "language": "en", "url": "https://stackoverflow.com/questions/100691", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Location of WSDL.exe
I recently heard of WSDL.Exe, but I am not sure where to find this program. Does anyone know where I can find or download it?
A: If you have Windows 10 and VS2019, and the .NET Framework 4.8, below you can see the location of WSDL.exe on your PC:
C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.8 Tools
A: C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\wsdl.exe (via this question: Where can I find WSDL.exe?)
A: If you have Windows 10 and VS2015, below you can see the location of WSDL.exe on your PC:
C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.7 Tools
A: In case anyone using VS 2008 (.NET 3.5) is also looking for the wsdl.exe, I found it here:
C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin\wsdl.exe
A: And on my Windows 7 machine it is here:
C:\Program Files (x86)\Microsoft SDKs\Windows\v8.1A\bin\NETFX 4.5.1 Tools
Note that the file wsdl.exe is portable, in that you can copy it to another windows machine and it works. I have not tried to see if the 4.5 exe will work on a machine that only has .NET 2.0, but this would be interesting to know.
A: C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.6.1 Tools
A: You can find what you want in windows by looking with the windows dir command; use the administrator account to make it easy:
c:>dir wsdl.exe /s
A: You'll get it as part of a Visual Studio install (if you included the SDK), or in a standalone SDK install. It'll live somewhere like C:\program files\Microsoft Visual Studio 8\SDK\v2.0\Bin If you don't already have it, you can download the .NET SDKs from http://msdn.microsoft.com/en-us/netframework/aa569263.aspx
A: It is included with .NET (not sure if only in the SDK).
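Once you've located it, typical usage looks like the following; run it from a Visual Studio or SDK command prompt so wsdl.exe is on the PATH, and note that the URL, namespace and output file name here are placeholders:
wsdl.exe /language:CS /namespace:MyCompany.ServiceProxies /out:MyServiceProxy.cs http://example.com/MyService.asmx?WSDL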
{ "language": "en", "url": "https://stackoverflow.com/questions/100705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }