Q: Can a web service return a stream?

I've been writing a little application that will let people upload & download files to me. I've added a web service to this application to provide the upload/download functionality that way, but I'm not too sure how well my implementation is going to cope with large files. At the moment the definitions of the upload & download methods look like this (written using Apache CXF):

boolean uploadFile(@WebParam(name = "username") String username, @WebParam(name = "password") String password, @WebParam(name = "filename") String filename, @WebParam(name = "fileContents") byte[] fileContents) throws UploadException, LoginException;

byte[] downloadFile(@WebParam(name = "username") String username, @WebParam(name = "password") String password, @WebParam(name = "filename") String filename) throws DownloadException, LoginException;

So the file gets uploaded and downloaded as a byte array. But if I have a file of some stupid size (e.g. 1GB), surely this will try to put all that information into memory and crash my service. So my question is - is it possible to return some kind of stream instead? I would imagine this isn't going to be terribly OS independent though. Although I know the theory behind web services, the practical side is something that I still need to pick up a bit of information on. Cheers for any input, Lee

A: Stephen Denne has a Metro implementation that satisfies your requirement. My answer is provided below, after a short explanation as to why that is the case. Most web service implementations that are built using HTTP as the message protocol are REST compliant, in that they only allow simple send-receive patterns and nothing more. This greatly improves interoperability, as all the various platforms can understand this simple architecture (for instance a Java web service talking to a .NET web service). If you want to maintain this you could provide chunking:

boolean uploadFile(String username, String password, String fileName, int currentChunk, int totalChunks, byte[] chunk);

This would require some footwork in cases where you don't get the chunks in the right order (or you can just require that the chunks come in the right order), but it would probably be pretty easy to implement.

A: When you use a standardized web service, the sender and receiver rely on the integrity of the XML data sent from one to the other. This means that a web service request and answer are only complete when the last tag has been sent. With this in mind, a web service cannot be treated as a stream. This is logical, because standardized web services rely on HTTP, which is "stateless" - that is to say, it works like "open connection ... send request ... receive data ... close request". The connection will be closed at the end anyway, so something like streaming is not intended to be used here, nor in the layers above HTTP (like web services). So sorry, but as far as I can see there is no possibility for streaming in web services. Even worse: depending on the implementation/configuration of a web service, byte[] data may be encoded as Base64 rather than wrapped in a CDATA tag, and the request might get even more bloated. P.S.: Yup, as others wrote, "chunking" is possible. But this is no streaming as such ;-) - anyway, it may help you.

A: Yes, it is possible with Metro. See the Large Attachments example, which looks like it does what you want. JAX-WS RI provides support for sending and receiving large attachments in a streaming fashion:
* Use MTOM and DataHandler in the programming model.
* Cast the DataHandler to StreamingDataHandler and use its methods.
* Make sure you call StreamingDataHandler.close() and also close the StreamingDataHandler.readOnce() stream.
* Enable HTTP chunking on the client side.

A: For WCF I think it's possible to define a member on a message as a stream and set the binding appropriately - I've seen this work with WCF talking to a Java web service. You need to set transferMode="StreamedResponse" in the httpTransport configuration and use mtomMessageEncoding (you need to use a custom binding section in the config). I think one limitation is that you can only have a single message body member if you want to stream (which kind of makes sense).

A: Apache CXF supports sending and receiving streams.

A: I hate to break it to those of you who think a streaming web service is not possible, but in reality, all HTTP requests are stream based. Every browser doing a GET to a web site is stream based. Every call to a web service is stream based. Yes, all. We don't notice this at the level where we are implementing services or pages because lower levels of the architecture are dealing with this for us - but it is being done. Have you ever noticed in a browser that sometimes it can take a while to fetch a page - the browser just keeps cranking away showing the hourglass? That is because the browser is waiting on a stream. Streams are the reason mime/types have to be sent before the actual data - it's all just a byte stream to the browser; it wouldn't be able to identify a photo if you didn't tell it what it was first. It's also why you have to pass the size of a binary before sending - the browser won't be able to tell where the image stops and the page picks up again. It's all just a stream of bytes to the client. If you want to prove this for yourself, just get hold of the output stream at any point in the processing of a request and close() it. You will blow up everything. The browser will immediately stop showing the hourglass, and will display a "cannot find" or "connection reset at server" or some other such message. That a lot of people don't know that all of this stuff is stream based shows just how much stuff has been layered on top of it. Some would say too much stuff - I am one of those. Good luck and happy development - relax those shoulders!

A: One way to do it is to add an uploadFileChunk(byte[] chunkData, int size, int offset, int totalSize) method (or something like that) that uploads parts of the file, and the server writes them to disk.

A: Keep in mind that a web service request basically boils down to a single HTTP POST. If you look at the output of a .ASMX file in .NET, it shows you exactly what the POST request and response will look like. Chunking, as mentioned by @Guvante, is going to be the closest thing to what you want. I suppose you could implement your own web client code to handle the TCP/IP and stream things into your application, but that would be complex to say the least.

A: I think using a simple servlet for this task would be a much easier approach, or is there any reason you can not use a servlet? For instance you could use the Commons open source library.

A: The RMIIO library for Java provides for handing a RemoteInputStream across RMI - we only needed RMI, though you should be able to adapt the code to work over other types of RMI. This may be of help to you - especially if you can have a small application on the user side.
The library was developed with the express purpose of being able to limit the size of the data pushed to the server, to avoid exactly the type of situation you describe - effectively a DoS attack by filling up RAM or disk. With the RMIIO library, the server side gets to decide how much data it is willing to pull, whereas with HTTP PUTs and POSTs, the client gets to make that decision, including the rate at which it pushes.

A: Yes, a web service can do streaming. I created a web service using Apache Axis2 and MTOM to support rendering PDF documents from XML. Since the resulting files could be quite large, streaming was important because we didn't want to keep it all in memory. Take a look at Oracle's documentation on streaming SOAP attachments. Alternatively, you can do it yourself, and Tomcat will create the Chunked headers. This is an example of a Spring controller function that streams:

@RequestMapping(value = "/stream")
public void hellostreamer(HttpServletRequest request, HttpServletResponse response) throws CopyStreamException, IOException {
    response.setContentType("text/xml");
    OutputStreamWriter writer = new OutputStreamWriter(response.getOutputStream());
    writer.write("this is streaming");
    writer.close();
}

A: It's actually not that hard to "handle the TCP/IP and stream things into your application". Try this...

class MyServlet extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        response.getOutputStream().println("Hello World!");
    }
}

And that is all there is to it. You have, in the above code, responded to an HTTP GET request sent from a browser, and returned to that browser the text "Hello World!". Keep in mind that "Hello World!" is not valid HTML, so you may end up with an error on the browser, but that really is all there is to it. Good luck in your development! Rodney
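To make the chunking idea suggested above concrete, here is a minimal JAX-WS-style sketch; the class name, the target path, and the RandomAccessFile-based assembly are invented for illustration (authentication is omitted for brevity), and the offset-based writes let chunks arrive in any order:

import java.io.IOException;
import java.io.RandomAccessFile;
import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;

@WebService
public class ChunkedUploadService {
    @WebMethod
    public boolean uploadFileChunk(@WebParam(name = "filename") String filename,
                                   @WebParam(name = "offset") long offset,
                                   @WebParam(name = "totalSize") long totalSize,
                                   @WebParam(name = "chunk") byte[] chunk) {
        try (RandomAccessFile file = new RandomAccessFile("/tmp/" + filename, "rw")) {
            file.setLength(totalSize); // pre-allocate once; harmless on repeat calls
            file.seek(offset);         // position at this chunk's slot
            file.write(chunk);
            return true;
        } catch (IOException e) {
            return false;
        }
    }
}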
{ "language": "en", "url": "https://stackoverflow.com/questions/132590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: How to achieve const-correctness in C#?

Possible Duplicate: “const correctness” in C#

I have programmed C++ for many years but am fairly new to C#. While learning C# I found that the use of the const keyword is much more limited than in C++. AFAIK, there is, for example, no way to declare arguments to a function const. I feel uncomfortable with the idea that I may make inadvertent changes to my function arguments (which may be complex data structures) that I can only detect by testing. How do you deal with this situation?

A: There is an excellent blog post about this issue by Stan Lippman: A question of const

A: If it matters, I use immutable objects. Or, at a minimum, I enforce it with logic in my property setters.
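As a sketch of the immutable-object approach from the last answer (the Point type is an invented example):

public sealed class Point
{
    private readonly int x; // readonly fields can only be set in the constructor
    private readonly int y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    public int X { get { return x; } } // no setters, so a callee cannot
    public int Y { get { return y; } } // mutate the caller's argument

    // "Mutating" operations return a new instance instead of changing this one.
    public Point WithX(int newX) { return new Point(newX, y); }
}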
{ "language": "en", "url": "https://stackoverflow.com/questions/132592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Maven dependency exclusion for War file, but inclusion for tests

I have a Maven POM file for a web service. For one of the dependencies I have to specify several exclusions for jar files that are already kept at a higher level in the web application server (accessible to all web applications, not just this particular one). One example of such an exclusion is the JAR containing my JDBC driver. Example (with fictional details):

<dependency>
  <groupId>mygroup</groupId>
  <artifactId>myartifact</artifactId>
  <version>1.0.0</version>
  <exclusions>
    <!-- The jdbc driver causes hot-deployment issues -->
    <exclusion>
      <groupId>db.drivers</groupId>
      <artifactId>jdbc</artifactId>
    </exclusion>
  </exclusions>
</dependency>

The problem I am encountering is that I need the JDBC driver for my tests. My tests currently fail since they cannot load the JDBC driver. How can I configure the POM so that the excluded parts are accessible to my tests, but do not get included in my WAR file?

Update: I cannot make changes to the POM for mygroup.myartifact since it is being depended on by many other projects, and this exclusion requirement is unique to my project.

Update 2: It seems I did a poor job of phrasing this question. Lars's solution below is perfect for one exclusion (as the example shows); however, in my real scenario I have multiple exclusions, and adding additional dependencies for each seems smelly. The solution that seems to work is to set the scope of the shown dependency to compile and then create a second dependency on the same artifact (mygroup.myartifact) with no exclusions and the scope set to test. Since Lars both answered my poorly phrased question correctly and led me in the direction of the actual solution, I will mark his reply as the answer.

A: Use the "scope" tag inside your dependency:

<scope>test</scope>

http://maven.apache.org/pom.html#Dependencies

edit: if I understand your configuration correctly, the scope=test that you need to add should be added in the mygroup.myartifact POM. That way you can test that artifact with the jdbc jar included, but when other POMs want to include mygroup.myartifact, they don't get jdbc included as a transitive dependency.

second edit: OK, if you don't control the POM you want to include - do an exclusion like you have already done, and then add jdbc as a new dependency, with scope=test.
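Spelling out that last suggestion (the driver's version number is a placeholder):

<dependency>
  <groupId>mygroup</groupId>
  <artifactId>myartifact</artifactId>
  <version>1.0.0</version>
  <exclusions>
    <!-- still excluded from the WAR -->
    <exclusion>
      <groupId>db.drivers</groupId>
      <artifactId>jdbc</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<!-- Re-add the driver for tests only; test-scoped dependencies
     are on the test classpath but never packaged into the WAR. -->
<dependency>
  <groupId>db.drivers</groupId>
  <artifactId>jdbc</artifactId>
  <version>1.0</version>
  <scope>test</scope>
</dependency>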
{ "language": "en", "url": "https://stackoverflow.com/questions/132597", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Exception handling using an HttpModule

We're reviewing the exception handling of one of the company's systems and found a couple of interesting things. Most of the code blocks (if not all of them) are inside a try/catch block, and inside the catch block a new BaseApplicationException is being thrown - which seems to be coming from the Enterprise Libraries. I'm in a bit of trouble here, as I don't see the benefit of doing this (throwing another exception any time one occurs). One of the developers who's been using the system for a while said it's because that class is in charge of publishing the exception (sending emails and stuff like that), but he wasn't too sure about it. After spending some time going through the code, I'm quite confident to say that all it does is collect information about the environment and then publish it.

My question is: Is it reasonable to wrap all the code inside try { } catch { } blocks and then throw a new exception? And if it is, why? What's the benefit? My personal opinion is that it would be much easier to use an HttpModule, sign up for the Error event of the Application, and do what's necessary inside the module. If we'd go down this road, would we miss something? Any drawbacks? Your opinion's much appreciated.

A: If I am reading the question correctly, I would say that implementing a try/catch which intercepts exceptions (you don't mention - is it catching all exceptions, or just a specific one?) and throws a different exception is generally a bad thing.

Disadvantages: At the very least you will lose stack trace information - the stack you will see will only extend to the method in which the new exception is thrown - so you potentially lose some good debug info here. If you are catching Exception, you are running the risk of masking critical exceptions, like OutOfMemory or StackOverflow, with a less critical exception, and thus leaving the process running where perhaps it should have been torn down.

Possible Advantages: In some very specific cases you could take an exception which doesn't have much debug value (like some exceptions coming back from a database) and wrap it with an exception which adds more context, e.g. the id of the object you were dealing with. However, in almost all cases this is a bad smell and should be used with caution.

Generally you should only catch an exception when there is something realistic that you can do in that location - i.e. recovering, rolling back, going to plan B, etc. If there is nothing you can do about it, just allow it to pass up the chain. You should only catch and throw a new exception if there is specific and useful data available in that location which can augment the original exception and hence aid debugging.

A: Never[1] catch (Exception ex). Period.[2] There is no way you can handle all the different kinds of errors that you may catch. Never[3] catch an Exception-derived type if you can't handle it or provide additional information (to be used by subsequent exception handlers). Displaying an error message is not the same as handling the error. A couple of reasons for this, off the top of my head:

* Catching and rethrowing is expensive
* You'll end up losing the stack trace
* You'll have a low signal-to-noise ratio in your code

If you know how to handle a specific exception (and reset the application to its pre-error state), catch it. (That's why it's called exception handling.) To handle exceptions that are not caught, listen for the appropriate events.
When doing WinForms, you'll need to listen for System.AppDomain.CurrentDomain.UnhandledException, and - if you're doing threading - System.Windows.Forms.Application.ThreadException. For web apps, there are similar mechanisms (System.Web.HttpApplication.Error).

As for wrapping framework exceptions in your application (non-)specific exceptions (i.e. throw new MyBaseException(ex);): utterly pointless, and a bad smell.[4]

Edit:
[1] Never is a very harsh word, especially when it comes to engineering, as @Chris pointed out in the comments. I'll admit to being high on principles when I first wrote this answer.
[2], [3] See [1].
[4] If you don't bring anything new to the table, I still stand by this. If you have caught Exception ex as part of a method that you know could fail in any number of ways, I believe that the current method should reflect that in its signature. And as you know, exceptions are not part of the method signature.

A: I'm from the school of thought where try/catch blocks should be used and exceptions not rethrown. If you have executing code which is likely to error, then it should be handled, logged, and something returned. Rethrowing the exception only serves to re-log it later in the application life cycle. Here's an interesting post on how to use an HttpModule to handle exceptions: http://blogs.msdn.com/rahulso/archive/2008/07/13/how-to-use-httpmodules-to-troubleshoot-your-asp-net-application.aspx and http://blogs.msdn.com/rahulso/archive/2008/07/18/asp-net-how-to-write-error-messages-into-a-text-file-using-a-simple-httpmodule.aspx

A: Check out ELMAH. It does what you're talking about. Very well.

When I create libraries I try to always provide a reduced number of exceptions for callers to handle. For example, think of a Repository component that connects to a SQL database. There are TONS of exceptions, from SQL client exceptions to invalid cast exceptions, that can theoretically be thrown. Many of these are clearly documented and can be accounted for at compile time. So, I catch as many of them as I can, place them in a single exception type, say a RepositoryException, and let that exception roll up the call stack. The original exception is retained, so the original exception can be diagnosed. But my callers only need to worry about handling a single exception type rather than litter their code with tons of different catch blocks. There are, of course, some issues with this. Most notably, if the caller can handle some of these exceptions, they have to root around in the RepositoryException and then switch on the type of the inner exception to handle it. It's less clean than having a single catch block for a single exception type. I don't think that's much of an issue, however.

A: Sounds like the exception that is thrown should not have been implemented as an exception. Anyway, I would say that since this BaseApplicationException is a general all-purpose exception, it would be good to throw exceptions that are more context-specific. So when you are trying to retrieve an entity from a database, you might want an EntityNotFoundException. This way when you are debugging you do not have to search through inner exceptions and stack traces to find the real issue. If this BaseApplicationException is collecting information on the exception (like keeping track of the inner exception) then this should not be a problem. I would use the HttpModule only when I could not get any closer to where the exceptions are actually happening in code.
You do not really want an HttpModule OnError event that is a giant switch statement depending on BaseApplicationException's error information. To conclude, it is worth it to throw different exceptions when you can give more specific exceptions that tell you the root of the problem right off the bat.

A: From my experience: catch the exception, add the error to the Server (?) object. This will allow .NET to do whatever it needs to do, then display your exception.
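For reference, a minimal sketch of the HttpModule approach the question proposes (module and logger names are invented; the publish call is a stand-in for whatever the Enterprise Library publisher does):

using System;
using System.Web;

public class ErrorLoggingModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        // Fires for any unhandled exception in the pipeline.
        application.Error += OnError;
    }

    private void OnError(object sender, EventArgs e)
    {
        HttpApplication application = (HttpApplication)sender;
        Exception ex = application.Server.GetLastError();
        // Publish with the stack trace intact - no catch/rethrow needed.
        // ExceptionPublisher.Publish(ex); // hypothetical publishing call
    }

    public void Dispose() { }
}

The module would then be registered in Web.config under <httpModules> (or under <system.webServer>/<modules> in IIS7 integrated mode).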
{ "language": "en", "url": "https://stackoverflow.com/questions/132607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Show a ContextMenuStrip without it showing in the taskbar

I have found that when I execute the Show() method of a ContextMenuStrip (a right-click menu), if the position is outside that of the form it belongs to, it also shows up in the taskbar. I am trying to create a right-click menu for when clicking on the NotifyIcon, but as the menu hovers above the system tray and not inside the form (as the form can be minimised when right clicking), it shows up on the taskbar for some odd reason. Here is my code currently:

private: System::Void notifyIcon1_MouseClick(System::Object^ sender, System::Windows::Forms::MouseEventArgs^ e) {
    if(e->Button == System::Windows::Forms::MouseButtons::Right) {
        this->sysTrayMenu->Show(Cursor->Position);
    }
}

What other options do I need to set so it doesn't show up as a blank entry on the taskbar?

A: Try assigning your menu to the ContextMenuStrip property of NotifyIcon rather than showing it in the mouse click handler.

A: The best and right way, without Reflection, is:

{
    UnsafeNativeMethods.SetForegroundWindow(new HandleRef(notifyIcon.ContextMenuStrip, notifyIcon.ContextMenuStrip.Handle));
    notifyIcon.ContextMenuStrip.Show(Cursor.Position);
}

where UnsafeNativeMethods.SetForegroundWindow is:

public static class UnsafeNativeMethods
{
    [DllImport("user32.dll", CharSet = CharSet.Auto, ExactSpelling = true)]
    public static extern bool SetForegroundWindow(HandleRef hWnd);
}

A: Let's assume that you have two context menus: ContextMenuLeft and ContextMenuRight. By default, from the NotifyIcon properties, you have already assigned one of them. Before handling the left-button click, just swap them, show the context menu, and then swap them back:

NotifyIcon.ContextMenuStrip = ContextMenuLeft; // assign the other one
MethodInfo mi = typeof(NotifyIcon).GetMethod("ShowContextMenu", BindingFlags.Instance | BindingFlags.NonPublic);
mi.Invoke(NotifyIcon, null);
NotifyIcon.ContextMenuStrip = ContextMenuRight; // switch back to the default one

Hope this helps.

A: The problem I have is that my menu is available from both a double middle-click and the notification icon. When right-clicking the notification icon, there is no taskbar button, but when I manually Show(Cursor.Position), it shows a taskbar button.
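The first answer's fix, spelled out in the question's C++/CLI (assign once, e.g. during form initialization, and drop the manual Show call):

// The icon then shows the menu on right-click by itself, positioned by the
// shell, and no blank taskbar entry appears.
this->notifyIcon1->ContextMenuStrip = this->sysTrayMenu;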
{ "language": "en", "url": "https://stackoverflow.com/questions/132612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How do you retrieve a list of logged-in/connected users in .NET?

Here's the scenario: You have a Windows server that users remotely connect to via RDP. You want your program (which runs as a service) to know who is currently connected. This may or may not include an interactive console session. Please note that this is not the same as just retrieving the current interactive user. I'm guessing that there is some sort of API access to Terminal Services to get this info?

A: OK, one solution to my own question. You can use WMI to retrieve a list of running processes. You can also look at the owners of these processes. If you look at the owners of "explorer.exe" (and remove the duplicates) you should end up with a list of logged-in users.

A: Here's my take on the issue:

using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;

namespace EnumerateRDUsers
{
    class Program
    {
        [DllImport("wtsapi32.dll")]
        static extern IntPtr WTSOpenServer([MarshalAs(UnmanagedType.LPStr)] string pServerName);

        [DllImport("wtsapi32.dll")]
        static extern void WTSCloseServer(IntPtr hServer);

        [DllImport("wtsapi32.dll")]
        static extern Int32 WTSEnumerateSessions(
            IntPtr hServer,
            [MarshalAs(UnmanagedType.U4)] Int32 Reserved,
            [MarshalAs(UnmanagedType.U4)] Int32 Version,
            ref IntPtr ppSessionInfo,
            [MarshalAs(UnmanagedType.U4)] ref Int32 pCount);

        [DllImport("wtsapi32.dll")]
        static extern void WTSFreeMemory(IntPtr pMemory);

        [DllImport("wtsapi32.dll")]
        static extern bool WTSQuerySessionInformation(
            IntPtr hServer,
            int sessionId,
            WTS_INFO_CLASS wtsInfoClass,
            out IntPtr ppBuffer,
            out uint pBytesReturned);

        [StructLayout(LayoutKind.Sequential)]
        private struct WTS_SESSION_INFO
        {
            public Int32 SessionID;
            [MarshalAs(UnmanagedType.LPStr)]
            public string pWinStationName;
            public WTS_CONNECTSTATE_CLASS State;
        }

        public enum WTS_INFO_CLASS
        {
            WTSInitialProgram, WTSApplicationName, WTSWorkingDirectory, WTSOEMId,
            WTSSessionId, WTSUserName, WTSWinStationName, WTSDomainName,
            WTSConnectState, WTSClientBuildNumber, WTSClientName, WTSClientDirectory,
            WTSClientProductId, WTSClientHardwareId, WTSClientAddress,
            WTSClientDisplay, WTSClientProtocolType
        }

        public enum WTS_CONNECTSTATE_CLASS
        {
            WTSActive, WTSConnected, WTSConnectQuery, WTSShadow, WTSDisconnected,
            WTSIdle, WTSListen, WTSReset, WTSDown, WTSInit
        }

        static void Main(string[] args)
        {
            ListUsers(Environment.MachineName);
        }

        public static void ListUsers(string serverName)
        {
            IntPtr serverHandle = IntPtr.Zero;
            List<string> resultList = new List<string>();
            serverHandle = WTSOpenServer(serverName);
            try
            {
                IntPtr sessionInfoPtr = IntPtr.Zero;
                IntPtr userPtr = IntPtr.Zero;
                IntPtr domainPtr = IntPtr.Zero;
                Int32 sessionCount = 0;
                Int32 retVal = WTSEnumerateSessions(serverHandle, 0, 1, ref sessionInfoPtr, ref sessionCount);
                Int32 dataSize = Marshal.SizeOf(typeof(WTS_SESSION_INFO));
                IntPtr currentSession = sessionInfoPtr;
                uint bytes = 0;
                if (retVal != 0)
                {
                    for (int i = 0; i < sessionCount; i++)
                    {
                        WTS_SESSION_INFO si = (WTS_SESSION_INFO)Marshal.PtrToStructure((System.IntPtr)currentSession, typeof(WTS_SESSION_INFO));
                        currentSession += dataSize;
                        WTSQuerySessionInformation(serverHandle, si.SessionID, WTS_INFO_CLASS.WTSUserName, out userPtr, out bytes);
                        WTSQuerySessionInformation(serverHandle, si.SessionID, WTS_INFO_CLASS.WTSDomainName, out domainPtr, out bytes);
                        Console.WriteLine("Domain and User: " + Marshal.PtrToStringAnsi(domainPtr) + "\\" + Marshal.PtrToStringAnsi(userPtr));
                        WTSFreeMemory(userPtr);
                        WTSFreeMemory(domainPtr);
                    }
                    WTSFreeMemory(sessionInfoPtr);
                }
            }
            finally
            {
                WTSCloseServer(serverHandle);
            }
        }
    }
}

A: Another option, if you don't want to deal with the P/Invokes yourself, would be to use the Cassia library:

using System;
using System.Security.Principal;
using Cassia;

namespace CassiaSample
{
    public static class Program
    {
        public static void Main(string[] args)
        {
            ITerminalServicesManager manager = new TerminalServicesManager();
            using (ITerminalServer server = manager.GetRemoteServer("your-server-name"))
            {
                server.Open();
                foreach (ITerminalServicesSession session in server.GetSessions())
                {
                    NTAccount account = session.UserAccount;
                    if (account != null)
                    {
                        Console.WriteLine(account);
                    }
                }
            }
        }
    }
}

A:

using System;
using System.Collections.Generic;
using System.Text;
using System.Runtime.InteropServices;

namespace TerminalServices
{
    class TSManager
    {
        [DllImport("wtsapi32.dll")]
        static extern IntPtr WTSOpenServer([MarshalAs(UnmanagedType.LPStr)] String pServerName);

        [DllImport("wtsapi32.dll")]
        static extern void WTSCloseServer(IntPtr hServer);

        [DllImport("wtsapi32.dll")]
        static extern Int32 WTSEnumerateSessions(
            IntPtr hServer,
            [MarshalAs(UnmanagedType.U4)] Int32 Reserved,
            [MarshalAs(UnmanagedType.U4)] Int32 Version,
            ref IntPtr ppSessionInfo,
            [MarshalAs(UnmanagedType.U4)] ref Int32 pCount);

        [DllImport("wtsapi32.dll")]
        static extern void WTSFreeMemory(IntPtr pMemory);

        [StructLayout(LayoutKind.Sequential)]
        private struct WTS_SESSION_INFO
        {
            public Int32 SessionID;
            [MarshalAs(UnmanagedType.LPStr)]
            public String pWinStationName;
            public WTS_CONNECTSTATE_CLASS State;
        }

        public enum WTS_CONNECTSTATE_CLASS
        {
            WTSActive, WTSConnected, WTSConnectQuery, WTSShadow, WTSDisconnected,
            WTSIdle, WTSListen, WTSReset, WTSDown, WTSInit
        }

        public static IntPtr OpenServer(String Name)
        {
            IntPtr server = WTSOpenServer(Name);
            return server;
        }

        public static void CloseServer(IntPtr ServerHandle)
        {
            WTSCloseServer(ServerHandle);
        }

        public static List<String> ListSessions(String ServerName)
        {
            IntPtr server = IntPtr.Zero;
            List<String> ret = new List<string>();
            server = OpenServer(ServerName);
            try
            {
                IntPtr ppSessionInfo = IntPtr.Zero;
                Int32 count = 0;
                Int32 retval = WTSEnumerateSessions(server, 0, 1, ref ppSessionInfo, ref count);
                Int32 dataSize = Marshal.SizeOf(typeof(WTS_SESSION_INFO));
                Int32 current = (int)ppSessionInfo;
                if (retval != 0)
                {
                    for (int i = 0; i < count; i++)
                    {
                        WTS_SESSION_INFO si = (WTS_SESSION_INFO)Marshal.PtrToStructure((System.IntPtr)current, typeof(WTS_SESSION_INFO));
                        current += dataSize;
                        ret.Add(si.SessionID + " " + si.State + " " + si.pWinStationName);
                    }
                    WTSFreeMemory(ppSessionInfo);
                }
            }
            finally
            {
                CloseServer(server);
            }
            return ret;
        }
    }
}
{ "language": "en", "url": "https://stackoverflow.com/questions/132620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: Adding ListItems to a DropDownList from a generic list

I have this aspx code (sample):

<asp:DropDownList runat="server" ID="ddList1"></asp:DropDownList>

With this codebehind:

List<System.Web.UI.WebControls.ListItem> colors = new List<System.Web.UI.WebControls.ListItem>();
colors.Add(new ListItem("Select Value", "0"));
colors.Add(new ListItem("Red", "1"));
colors.Add(new ListItem("Green", "2"));
colors.Add(new ListItem("Blue", "3"));
ddList1.DataSource = colors;
ddList1.DataBind();

The output looks like this:

<select name="ddList1" id="ddList1">
  <option value="Select Value">Select Value</option>
  <option value="Red">Red</option>
  <option value="Green">Green</option>
  <option value="Blue">Blue</option>
</select>

My question is: Why did my values (numbers) disappear, with the text used as both the value AND the text? I know that it works if I use the ddList1.Items.Add(new ListItem("text", "value")) method, but I need to use a generic list as the datasource for other reasons.

A: And here is the method that performs the data binding. You can see exactly what is going on:

protected internal override void PerformDataBinding(IEnumerable dataSource)
{
    base.PerformDataBinding(dataSource);
    if (dataSource != null)
    {
        bool flag = false;
        bool flag2 = false;
        string dataTextField = this.DataTextField;
        string dataValueField = this.DataValueField;
        string dataTextFormatString = this.DataTextFormatString;
        if (!this.AppendDataBoundItems)
        {
            this.Items.Clear();
        }
        ICollection is2 = dataSource as ICollection;
        if (is2 != null)
        {
            this.Items.Capacity = is2.Count + this.Items.Count;
        }
        if ((dataTextField.Length != 0) || (dataValueField.Length != 0))
        {
            flag = true;
        }
        if (dataTextFormatString.Length != 0)
        {
            flag2 = true;
        }
        foreach (object obj2 in dataSource)
        {
            ListItem item = new ListItem();
            if (flag)
            {
                if (dataTextField.Length > 0)
                {
                    item.Text = DataBinder.GetPropertyValue(obj2, dataTextField, dataTextFormatString);
                }
                if (dataValueField.Length > 0)
                {
                    item.Value = DataBinder.GetPropertyValue(obj2, dataValueField, null);
                }
            }
            else
            {
                if (flag2)
                {
                    item.Text = string.Format(CultureInfo.CurrentCulture, dataTextFormatString, new object[] { obj2 });
                }
                else
                {
                    item.Text = obj2.ToString();
                }
                item.Value = obj2.ToString();
            }
            this.Items.Add(item);
        }
    }
    if (this.cachedSelectedValue != null)
    {
        int num = -1;
        num = this.Items.FindByValueInternal(this.cachedSelectedValue, true);
        if (-1 == num)
        {
            throw new ArgumentOutOfRangeException("value", SR.GetString("ListControl_SelectionOutOfRange", new object[] { this.ID, "SelectedValue" }));
        }
        if ((this.cachedSelectedIndex != -1) && (this.cachedSelectedIndex != num))
        {
            throw new ArgumentException(SR.GetString("Attributes_mutually_exclusive", new object[] { "SelectedIndex", "SelectedValue" }));
        }
        this.SelectedIndex = num;
        this.cachedSelectedValue = null;
        this.cachedSelectedIndex = -1;
    }
    else if (this.cachedSelectedIndex != -1)
    {
        this.SelectedIndex = this.cachedSelectedIndex;
        this.cachedSelectedIndex = -1;
    }
}

A: If you are building ListItems, you have no need to use DataBind() in the first place. Just add them to your DropDownList:

ddList1.Items.Add(new ListItem("Select Value", "0"));
ddList1.Items.Add(new ListItem("Red", "1"));
ddList1.Items.Add(new ListItem("Green", "2"));
ddList1.Items.Add(new ListItem("Blue", "3"));

DataBind() is useful when you already have a collection/data object (usually a DataTable or DataView) that can be used as a DataSource, by setting the DataTextField and DataValueField (as buyutec wrote).

A: Because the DataBind method binds values only if the DataValueField property is set.
If you set the DataValueField property to "Value" before calling DataBind, your values will appear in the markup.

UPDATE: You will also need to set the DataTextField property to "Text". This is because data binding and adding items manually do not work in the same way: data binding does not know about the ListItem type and generates markup by evaluating the items in the data source.

A: "If you are building ListItems, you have no need to use DataBind() in the first place." Adding directly to the dropdownlist is the easy way (and given the example code, the right one), but let's say you have an unordered data source and you want the list items sorted. One way of achieving this would be to create a generic list of ListItem and then use the inherited sort method before databinding to the list. There are many ways to skin a cat...
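Putting the accepted explanation into code - the original generic list can stay as the data source once the two field names are set:

List<ListItem> colors = new List<ListItem>();
colors.Add(new ListItem("Select Value", "0"));
colors.Add(new ListItem("Red", "1"));
colors.Add(new ListItem("Green", "2"));
colors.Add(new ListItem("Blue", "3"));

ddList1.DataSource = colors;
// Tell data binding which ListItem properties to read; without these,
// PerformDataBinding falls back to obj.ToString() for both text and value.
ddList1.DataTextField = "Text";
ddList1.DataValueField = "Value";
ddList1.DataBind();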
{ "language": "en", "url": "https://stackoverflow.com/questions/132643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What is the difference between overflow:hidden and display:none

What is the difference between overflow:hidden and display:none?

A: display: none removes the element from the page, and the flow of the page acts as if it's not there at all. overflow: hidden can be used to reveal more or less of an element's content depending on the space available, e.g. based on the width of the browser window.

A: overflow: hidden just says that if content flows outside of this element, the excess is clipped and no scrollbars are shown. display: none says the element is not shown at all.

A: Example:

.oh {
    height: 50px;
    width: 200px;
    overflow: hidden;
}

If the text in a block with this class is bigger (longer) than what this little box can display, the excess will just be hidden. You will see the start of the text only. display: none; will just hide the block. Note you also have visibility: hidden;, which hides the content of the block, but the block will still be in the layout, moving things around.

A: Simple example of overflow: hidden: http://www.w3schools.com/Css/tryit.asp?filename=trycss_pos_overflow_hidden If you edit the CSS on that page, you can see the difference between the overflow values (visible | hidden | scroll | auto) - and if you add display: none to the CSS, you will see the whole content block disappears. Basically it's a way of controlling layout and element "flow" - if you are allowing user input (from a CMS field, say) to render in a fixed-size block, you can adjust the overflow attribute to stop the box increasing in size and therefore breaking the layout of the page. (Conversely, display: none prevents the element from displaying, and therefore the entire page re-adjusts.)

A: By default, HTML elements are as tall as required to contain their content. If you give an HTML element a fixed height, it may not be big enough to contain its content. So, for example, if you had a paragraph with a fixed height and a blue background:

<p>This is an example paragraph. It has some text in it to try and give it a reasonable height. In a separate style sheet, we're going to give it a blue background and a fixed height. If we add overflow: hidden, we won't see any text that extends beyond the fixed height of the paragraph. Until then, the text will "overflow" the paragraph, extending beyond the blue background.</p>

p {
    background-color: #ccf;
    height: 20px;
}

The text within the paragraph would extend beyond the bottom edge of the paragraph. The overflow property allows you to change this default behaviour. So, if you added overflow: hidden:

p {
    background-color: #ccf;
    height: 20px;
    overflow: hidden;
}

Then you wouldn't see any of the text beyond the bottom edge of the paragraph. It would be clipped to the fixed height of the paragraph. display: none would simply make the entire paragraph (visually) disappear, blue background and all, as if it didn't appear in the HTML at all.

A: Let's say you have a div that measures 100 x 100px. You then put a whole bunch of text into it, such that it overflows the div. If you use overflow: hidden; then the text that doesn't fit into the 100x100 will not be displayed, and will not affect layout. display: none is completely different: the element is not rendered at all and leaves no space behind, so the rest of the page is laid out as if the div did not exist - whatever would have overflowed is irrelevant. If both are set (display: none; overflow: hidden;), the div is simply not displayed; display: none wins, and the overflow setting adds nothing.
Note that display: none on its own is already enough to make the div not affect the rendering at all. The behaviour of hiding content while still leaving space for the div belongs to visibility: hidden; in that case, to also remove the div's footprint, you would collapse it with overflow: hidden plus height: 0, or the width, or both - then the page is rendered as if the div did not exist at all.

A: display:none means that the tag in question will not appear on the page at all (although you can still interact with it through the DOM). There will be no space allocated for it between the other tags. overflow: hidden means that the tag is rendered with a certain height, and any content which would cause the tag to expand beyond that will not be displayed. I think what you mean to ask about is visibility:hidden. This means that, unlike display:none, the tag is not visible, but space is allocated for it on the page. So for example:

<span>test</span> | <span>Appropriate style in this tag</span> | <span>test</span>

display:none would be: test |   | test

visibility:hidden would be: test |                        | test

With visibility:hidden the tag is rendered, it just isn't seen on the page.

A: overflow: hidden - hides the overflow of the content, in contrast with overflow: auto, which shows scrollbars on a fixed-size div whose inner content is larger than its size. display: none - hides an element, and it does not participate in the layout at all.

P.S. There is no real comparing the two - they are completely unrelated.
{ "language": "en", "url": "https://stackoverflow.com/questions/132649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: How can I disable #pragma warnings?

While developing a C++ application, I had to use a third-party library which produced a huge number of warnings related to a harmless #pragma directive being used:

../File.hpp:1: warning: ignoring #pragma ident
In file included from ../File2.hpp:47,
                 from ../File3.hpp:57,
                 from File4.h:49,

Is it possible to disable this kind of warning when using the GNU C++ compiler?

A: In my case, I work with Qt under MinGW. I need to set the flag another way, in my .PRO file:

QMAKE_CXXFLAGS_WARN_ON += -Wno-unknown-pragmas

A: In GCC, compile with -Wno-unknown-pragmas. In MS Visual Studio 2005 (this question isn't tagged with gcc, so I'm adding this for reference), you can disable it globally in Project Settings->C/C++->Advanced. Enter 4068 in "Disable Specific Warnings", or you can add this to any file to disable the warnings locally:

#pragma warning (disable : 4068) /* disable unknown pragma warnings */

A: Perhaps see GCC Diagnostic Pragmas? Alternatively, in this case you could use the combination of options that -Wall enables, excluding -Wunknown-pragmas.

A: I know the question is about GCC, but for people wanting to do this as portably as possible: most compilers which can emit this warning have a way to disable it from either the command line (exception: PGI) or in code (exception: DMC):

* GCC: -Wno-unknown-pragmas / #pragma GCC diagnostic ignored "-Wunknown-pragmas"
* Clang: -Wno-unknown-pragmas / #pragma clang diagnostic ignored "-Wunknown-pragmas"
* Intel C/C++ Compiler: -diag-disable 161 / #pragma warning(disable:161)
* PGI: #pragma diag_suppress 1675
* MSVC: -wd4068 / #pragma warning(disable:4068)
* TI: --diag_suppress,-pds=163 / #pragma diag_suppress 163
* IAR C/C++ Compiler: --diag_suppress Pe161 / #pragma diag_suppress=Pe161
* Digital Mars C/C++ Compiler: -w17
* Cray: -h nomessage=1234

You can combine most of this into a single macro to use in your code, which is what I did for the HEDLEY_DIAGNOSTIC_DISABLE_UNKNOWN_PRAGMAS macro in Hedley:

#if HEDLEY_HAS_WARNING("-Wunknown-pragmas")
#  define HEDLEY_DIAGNOSTIC_DISABLE_UNKNOWN_PRAGMAS _Pragma("clang diagnostic ignored \"-Wunknown-pragmas\"")
#elif HEDLEY_INTEL_VERSION_CHECK(16,0,0)
#  define HEDLEY_DIAGNOSTIC_DISABLE_UNKNOWN_PRAGMAS _Pragma("warning(disable:161)")
#elif HEDLEY_PGI_VERSION_CHECK(17,10,0)
#  define HEDLEY_DIAGNOSTIC_DISABLE_UNKNOWN_PRAGMAS _Pragma("diag_suppress 1675")
#elif HEDLEY_GNUC_VERSION_CHECK(4,3,0)
#  define HEDLEY_DIAGNOSTIC_DISABLE_UNKNOWN_PRAGMAS _Pragma("GCC diagnostic ignored \"-Wunknown-pragmas\"")
#elif HEDLEY_MSVC_VERSION_CHECK(15,0,0)
#  define HEDLEY_DIAGNOSTIC_DISABLE_UNKNOWN_PRAGMAS __pragma(warning(disable:4068))
#elif HEDLEY_TI_VERSION_CHECK(8,0,0)
#  define HEDLEY_DIAGNOSTIC_DISABLE_UNKNOWN_PRAGMAS _Pragma("diag_suppress 163")
#elif HEDLEY_IAR_VERSION_CHECK(8,0,0)
#  define HEDLEY_DIAGNOSTIC_DISABLE_UNKNOWN_PRAGMAS _Pragma("diag_suppress=Pe161")
#else
#  define HEDLEY_DIAGNOSTIC_DISABLE_UNKNOWN_PRAGMAS
#endif

Note that Hedley may have more complete information than this answer, since I'll probably forget to update this answer; so if you don't want to just use Hedley (it's a single public-domain C/C++ header you can easily drop into your project), you might want to use Hedley as a guide instead of the information above. The version checks are probably overly pessimistic, but sometimes it's hard to get good info about obsolete versions of proprietary compilers, and I'd rather be safe than sorry. Again, Hedley's information may be better.
Many compilers can also push/pop warnings onto a stack, so you can push, then disable them before including code you don't control, then pop, so your own code will still trigger the warning in question (so you can clean it up). There are macros for that in Hedley, too: HEDLEY_DIAGNOSTIC_PUSH / HEDLEY_DIAGNOSTIC_POP.

A: I believe you can compile with -Wno-unknown-pragmas to suppress these.
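A short sketch of that push/pop pattern for GCC (4.6+) and Clang; the header name is a placeholder:

#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunknown-pragmas"
#include "third_party_header.hpp" /* the offending library header */
#pragma GCC diagnostic pop
/* The warning is restored here, so your own code is still checked. */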
{ "language": "en", "url": "https://stackoverflow.com/questions/132667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "53" }
Q: Font size in CSS - % or em?

When setting the size of fonts in CSS, should I be using a percent value (%) or em? Can you explain the advantage?

A: There's a really good article on web typography on A List Apart. Their conclusion: "Sizing text and line-height in ems, with a percentage specified on the body (and an optional caveat for Safari 2), was shown to provide accurate, resizable text across all browsers in common use today. This is a technique you can put in your kit bag and use as a best practice for sizing text in CSS that satisfies both designers and readers."

A: Given that (nearly?) all browsers now resize the page as a whole, rather than just the text, previous issues with px vs. % vs. em in terms of accessible font resizing are rather moot. So, the answer is that it probably doesn't matter. Use whatever works for you. % is nice because it allows for relative resizing. px is nice because it's fairly easy to manage expectations when using it. em can be useful when also used for layout elements, as it allows for proportional sizing related to the text size.

A: From http://archivist.incutio.com/viewlist/css-discuss/1408:

%: Some browsers don't handle percent for font-size but interpret 150% as 150px. (Some NN4 versions, for instance.) IE also has problems with percent on nested elements. It seems IE uses percent relative to the viewport instead of relative to the parent element. Yet another problem (though correct according to the W3C specs): in Moz/NS6, you can't use percent relative to elements with no specified height/width.

em: Sometimes browsers use the wrong reference size, but of the relative units it's the one with the fewest problems. You might find it interpreted as px sometimes, though.

pt: Differs greatly between resolutions and should not be used for display. It's quite safe for print use, though.

px: The only reliable absolute unit on screen. It might be wrongly interpreted in print though, as one point usually consists of several pixels, and thus everything becomes ridiculously small.

A: Both adjust the font size relative to what it was. 1.5em is the same as 150%. The only advantage seems to be readability; choose whichever you are most comfortable with.

A: The real difference becomes apparent when you use it for something other than font sizes. Setting a padding of 1em is not the same as 100%. em is always relative to the font size. But % might be relative to font size, width, height, and probably some other things I don't know about.

A: As Galwegian mentions, px is the most reliable for web typography, as everything else you do on the page is mostly laid out in reference to a computer monitor. The problem with absolute sizes is that some browsers (IE) won't scale pixel-value elements on a web page, so when you try to zoom in/out, everything adjusts except for those elements. I do not know whether IE8 handles this properly, but all other browser vendors handle pixels just fine, and it is still a minority case where a user needs to enlarge/diminish text (this text box on SO perhaps being the exception). If you want to get really dirty, you could always add a JavaScript function for making your text size larger and offer "smaller"/"larger" buttons to the user.

A: Regarding the difference between the CSS units % and em: as far as I understand (at least theoretically/conceptually, but possibly not how these two units might be implemented in browsers), these two units are equivalent, i.e. if you multiply your em value by 100 and then replace em with %, it should be the same thing?
If there actually is some real difference between em and %, can someone explain it (or provide a link to an explanation)? (I wanted to add this comment of mine where it would belong, i.e. indented just below the answer by "Liam, answered Sep 25 '08 at 11:21", since I also want to know why his answer was downvoted, but I could not figure out how to put my comment there and therefore had to write this "thread global" reply.)

A: The Yahoo User Interface library (http://developer.yahoo.com/yui/) has a nice set of base CSS classes used to "reset" the browser-specific settings so that the basis for displaying the site is the same for all (supported) browsers. With YUI one is supposed to use percentages.
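To illustrate the ems-with-a-percentage-base technique the A List Apart article describes, a minimal example (the 62.5% base is a commonly paired trick of that era making 1em equal 10px at the usual 16px browser default - treat the exact values as an example, not a rule):

body { font-size: 62.5%; }  /* 16px default x 62.5% = 10px, so em math is easy */
p    { font-size: 1.2em; }  /* ~12px, but still scales with the user's settings */
h1   { font-size: 2.4em; }  /* ~24px */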
{ "language": "en", "url": "https://stackoverflow.com/questions/132685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "119" }
Q: Can "classic" ASP.NET pages and Microsoft MVC coexist in the same web application?

I'm thinking about trying out MVC later today for a new app we're starting up, but I'm curious if it's an all-or-nothing thing or if I can still party like it's 2006 with viewstate and other crutches at the same time...

A: Yes, you can have your webforms pages and MVC views mixed in a single web application project. This could be useful if you have an application that is already built and you want to migrate your app from webforms to MVC. You need to make sure that none of your webforms pages go in the 'Views' directory in a standard ASP.NET MVC application, though. Pages (or views) in the 'Views' directory can't be requested directly through the URL. If you are starting an application from scratch, there would be very little benefit to mixing the two.

A: Yes. MVC is just a different implementation of the IHttpHandler interface, so both classic ASP.NET and ASP.NET MVC pages can coexist in the same app.

A: As you've probably noticed with the above answers, yes, this is very possible to do. I've actually had to do this on my current project. I was able to get approval to add MVC to our application, but only in the administration section (to limit the risk of affecting current members coming to our site). The biggest problem I had was converting my Web Site to a Web Application, but once that was done, things were pretty straightforward adding MVC side by side with our classic code-behind web pages. The trick for me was to make my MVC pages look as similar as possible to my code-behind pages, so the transition looked as seamless as possible.

A: I am currently working on a new project. While I would like to go down the MVC route all the way, some of the project requirements don't allow me to. One of those requirements is to have a grouping grid on the client side. I have chosen the Telerik RadGrid. While they may be in the process of supporting MVC, they are not there yet, so I have to have a hybrid solution for the time being, until RadGrid fully supports MVC. While we are in this transition period, I think there will be many more hybrid projects out there until the support of the third-party controls catches up. Regards, Nathan

A: You'll need to make sure your MVC routes don't conflict with your Web Forms pages, so that requests for a .aspx page don't get routed to a controller action as a parameter, etc. See this blog post by Phil Haack for details on how to avoid this.

A: Yes, it is very much possible for MVC pages to coexist with ASP.NET web forms. I implemented that in my existing ASP.NET application when adding new features. You need to reference the MVC DLLs, register the routing tables for URL routing, and configure the assemblies and namespaces in the Web.config file.

A: If you're mixing MVC with other methodologies, you're not really getting the benefit out of it. The point of MVC is to allow you to decrease coupling and increase cohesion, and if only half of your code is doing that, then the other half is inevitably going to restrain your development cycle. So, I guess while it's possible, I don't think it's worth it. Go all the way or don't go at all.
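A sketch of what that route guard can look like in Global.asax (the patterns here are illustrative placeholders, not Haack's exact advice; note also that RouteCollection.RouteExistingFiles defaults to false, so requests mapping to physical .aspx files already bypass MVC routing):

public static void RegisterRoutes(RouteCollection routes)
{
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
    // Keep top-level WebForms pages away from MVC routing; add similar
    // entries (or folder-based ones) for nested .aspx paths.
    routes.IgnoreRoute("{page}.aspx/{*pathInfo}");

    routes.MapRoute(
        "Default",                          // route name
        "{controller}/{action}/{id}",       // URL with parameters
        new { controller = "Home", action = "Index", id = "" });
}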
{ "language": "en", "url": "https://stackoverflow.com/questions/132687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: In Visual Studio how to give relative path of a .lib file in project properties

I am building a project using Visual Studio. The project has a dependency on a lib file generated by another project, which is in the parent directory of the actual project I am building. To be more clear, I have a "ParentDir" which has two subdirectories, Project1 and Project2, under it. Now Project1 depends on the lib generated by Project2. In the properties of Project1, I am trying to give a relative path using $(SolutionDir)/../ParentDir/Project2/Debug, but this does not seem to work. Can you tell me where I am going wrong, or suggest the correct way of achieving this?

A: Add the dependent project to your solution and set it as a dependency of the other project using project properties. Then it just magically works ;). A solution is just a file that describes a set of related (interconnected) projects and the relations between them, so this is the correct way of doing it.

A: Your current dir is your $(ProjectDir), that is, where the .vcproj file is. So just write ../Project2/Debug; that will do. Even better, write ../Project2/$(ConfigurationName) for all configurations - that way you will always be linking to the correct version of that lib.

A: I think Visual Studio does not expand the relative path properly when the ".." is placed somewhere in the middle of the path string. It only knows how to expand ..\{sub-path}.
{ "language": "en", "url": "https://stackoverflow.com/questions/132697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: PDB files in Visual Studio bin\debug folders

I have a Visual Studio (2008) solution consisting of several projects, not all in the same namespace. When I build the solution, all the DLL files used by the top-level project, TopProject, are copied into the TopProject\bin\debug folder. However, the corresponding .pdb files are only copied for some of the other projects. This is a pain, for example when using NDepend. How does Visual Studio decide which .pdb files to copy into higher-level bin\debug folders? How can I get Visual Studio to copy the others too? References are as follows: all the DLL files are copied to a central location, without their PDB files. TopProject only has references to these copied DLL files; the DLL files themselves, however, evidently know where their PDB files are, and (most of them) get copied to the debug folder correctly.

A: First off, never assume anything. Clean the solution, rebuild it in debug mode, and check to see if all PDB files are created. If not, that's your problem. If they are created, and they're not all getting copied, you can get around this by creating a post-build event that manually copies the PDB files to the desired locations. This is just a workaround, of course. The only other thing I can think of is that your solution file has become corrupt. You can open your .sln as an XML file and examine the contents. Check the configuration for the projects that are acting as expected and compare them to those that aren't. If you don't see anything, you have to repeat this at the project level. Compare working .csproj (or whatever) project files and the non-working ones.

Edit in response to edit: If you're just manually copying stuff around, then manually copy the PDB files as well. DLL files shouldn't "know" anything about PDB files, I believe. Just stick them in the destination directory and go have a cup of coffee. Relax.

A: Check when you clean the solution that it is actually cleaned. I've seen Visual Studio leave files hanging around in bin\debug directories even after cleaning. Delete the bin\debug directory in all of your projects and rebuild.

A: As other posts have said, you may have a compiler/corruption issue. But, as Will said, if the PDB files are being created but not showing up where you want them, create a post-build step. Here is the post-build step I define for every project in my solution. It makes sure all output files are copied into a common directory. If your project file is in \SolutionDir\ProjDir, then the first line of the post-build step will copy the output files to \SolutionDir\Bin\Release or \SolutionDir\Bin\Debug. The second line copies the PDB file if this is a debug build. I don't copy the PDB file for release builds. So \SolutionDir\Bin now contains all your output files in one location.

xcopy /r /y $(TargetPath) $(ProjectDir)..\$(OutDir)
if $(ConfigurationName) == Debug xcopy /r /y $(TargetDir)$(TargetName).pdb $(ProjectDir)..\$(OutDir)

A: From MSDN: "A program database (PDB) file holds debugging and project state information that allows incremental linking of a Debug configuration of your program. A PDB file is created when you compile a C/C++ program with /ZI or /Zi or a Visual Basic/C#/JScript .NET program with /debug." So it looks like the "issue" here (for lack of a better word) is that some of your DLLs are being built in debug mode (and hence emitting PDB files), and some are being built in release mode (hence not emitting PDB files).
If that's the case, it should be easy to fix - go into each project and update its build settings. This would be the default scenario, if you haven't done any tweaking of command-line options. However, it will get trickier if that isn't the case. Maybe you're building everything in release or debug mode. Now you need to look at the command-line compile options (specified in the project properties) for each project. Update them with /debug accordingly if you want the debugger, or remove it if you don't.

Edit in response to edit: Yes, the DLL files "know" that they have PDB files and have paths to them, but that doesn't mean too much. Copying just the DLL files to a given directory, as others have mentioned, won't clear this issue up. You need the PDB files as well. Copying individual files in Windows, with the exception of certain "bundle"-type files (I don't know Microsoft's term for this, but "complete HTML packages" are the concept), doesn't copy associated files. DLL files aren't assembled in the "bundle" way, so copying them leaves their PDB files behind. I'd say the only answer you're going to have is to update your process for getting the DLL files to those central locations to include the PDB files as well... I'd love to be proven wrong on that, though!
{ "language": "en", "url": "https://stackoverflow.com/questions/132719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Method 'XYZ' cannot be reflected

We have consumed a third-party web service and are trying to invoke it from an ASP.NET web application. However, when I instantiate the web service, the following System.InvalidOperationException is thrown:

Method 'ABC.XYZ' can not be reflected. System.InvalidOperationException: Method 'ABC.XYZ' can not be reflected. ---> System.InvalidOperationException: The XML element 'MyDoc' from namespace 'http://mysoftware.com/ns' references a method and a type. Change the method's message name using WebMethodAttribute or change the type's root element using the XmlRootAttribute.

From what I can gather, there appears to be some ambiguity between a method and a type in the web service. Can anyone clarify the probable cause of this exception, and is there anything I can do to rectify it, or should I just go to the web service owners to get it fixed?

Edit: Visual Studio 2008 has created the proxy class. Unfortunately I can't provide a link to the WSDL as it is a web service for a locally installed third-party app.

A: It seems the problem came down to data type issues between Visual Studio and the web service, which was written in Java. Ultimately it was fixed by manually editing the class and schema files that were created by VS.

A: I have come across the exact same problem when consuming a third-party web service. The problem in this instance was that the mustUnderstand property in the reference file was looking for a Boolean, whereas the namespace property looked for a string. By looking through the reference I was able to identify the offending property and simply add "overrides" to the method signature. Not ideal, as any time you update the service you have to do this, but I couldn't find any other way around it. To find the reference file, select "all files" in the Solution Explorer. Hope this helps.

A: I ran into the same problem earlier today. The reason was that the class generated by Visual Studio and passed as a parameter into one of the methods did not have a default parameterless constructor. Once I had added it, the error was gone.

A: I'm guessing the WSDL emitted by or supplied with the service is not in a form that wsdl.exe or serviceutil can understand - can you post the WSDL or link to it? How are you creating the proxy classes? Also, you might like to try validating the WSDL against the WSDL schema to check it's valid.

A: In my case I was getting a "method cannot be reflected" error due to the fact that the class being returned by the method did not expose a default parameterless constructor. I was working in VB.NET. In my return class I had declared a "New(..)" method that took a couple of parameters (because that is how I wanted to use it in my code). But by doing so, I had suppressed the default (hidden) parameterless New() constructor that VB adds behind the scenes. Apparently the web service handler requires that a parameterless constructor be available. As soon as I added a parameterless New() constructor back into my class, it all worked fine.

A: I got the same message, but mine was caused by a missing System.Runtime.Serialization.dll, since I tried to run a 3.5 application on a machine with only .NET 2.0 installed.

A: I had the same issue, but I found that one of the WebMethod parameters had a member of an interface type, which is why VS could not serialise it.
Here is the exception when trying to download the disco file: System.InvalidOperationException: Cannot serialize member 'Leopard.JobDespatchParameters.SendingUser' of type 'Leopard.Interfaces.IUser', see inner exception for more details. ---> System.NotSupportedException: Cannot serialize member Leopard.JobDespatchParameters.SendingUser of type Leopard.Interfaces.IUser because it is an interface. A: Old thread, but I had a different issue that may be of help to someone: the referenced DLLs were mixed up between two versions of the data layer and the service layer, and that caused the problem. A: Another scenario where this error can happen: I simply had another web method with the same name (but different parameters) in my web service that slipped in during a code merge. After I deleted the old method it worked.
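For reference, a minimal sketch of the two fixes the error message itself suggests -- the names (MyService, GetDocument, MyDocType) are hypothetical stand-ins for whatever clashes in your proxy class, not names from the question:

using System;
using System.Web.Services;
using System.Xml.Serialization;

public class MyService : WebService
{
    // Option 1: disambiguate the method's message name so it no longer
    // collides with the type's root element.
    [WebMethod(MessageName = "GetMyDocRequest")]
    public MyDoc GetDocument()
    {
        return new MyDoc();
    }
}

// Option 2: change the type's root element instead.
[XmlRoot(ElementName = "MyDocType", Namespace = "http://mysoftware.com/ns")]
public class MyDoc
{
    // A public parameterless constructor, which the XML serializer requires.
    public MyDoc() { }
}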
{ "language": "en", "url": "https://stackoverflow.com/questions/132720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Sprint Cumulative Flow Diagram Can anyone give me some tips about how to read a Cumulative Flow Diagram? I'm not sure what kind of things it's telling me.
{ "language": "en", "url": "https://stackoverflow.com/questions/132724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Are Delphi variables initialized with a value by default? I'm new to Delphi, and I've been running some tests to see what object variables and stack variables are initialized to by default:

TInstanceVariables = class
  fBoolean: boolean; // always starts off as false
  fInteger: integer; // always starts off as zero
  fObject: TObject; // always starts off as nil
end;

This is the behaviour I'm used to from other languages, but I'm wondering if it's safe to rely on it in Delphi? For example, I'm wondering if it might depend on a compiler setting, or perhaps work differently on different machines. Is it normal to rely on default initialized values for objects, or do you explicitly set all instance variables in the constructor? As for stack (procedure-level) variables, my tests are showing that uninitialized booleans are true, uninitialized integers are 2129993264, and uninitialized objects are just invalid pointers (i.e. not nil). I'm guessing the norm is to always set procedure-level variables before accessing them? A: Here's a quote from Ray Lischner's Delphi in a Nutshell, Chapter 2: "When Delphi first creates an object, all of the fields start out empty, that is, pointers are initialized to nil, strings and dynamic arrays are empty, numbers have the value zero, Boolean fields are False, and Variants are set to Unassigned. (See NewInstance and InitInstance in Chapter 5 for details.)" It's true that local-in-scope variables need to be initialised... I'd treat the comment above that "Global variables are initialised" as dubious until provided with a reference - I don't believe that. edit... Barry Kelly says you can depend on them being zero-initialised, and since he's on the Delphi compiler team I believe that stands :) Thanks Barry. A: Global variables and object instance data (fields) are always initialized to zero. Local variables in procedures and methods are not initialized in Win32 Delphi; their content is undefined until you assign them a value in code. A: Even if a language does offer default initializations, I don't believe you should rely on them. Initializing to a value makes it much more clear to other developers who might not know about default initializations in the language and prevents problems across compilers. A: From the Delphi 2007 help file: ms-help://borland.bds5/devcommon/variables_xml.html "If you don't explicitly initialize a global variable, the compiler initializes it to 0." A: I have one little gripe with the answers given. Delphi zeros out the memory space of the globals and the newly-created objects. While this NORMALLY means they are initialized, there is one case where they aren't: enumerated types with specific values. What if zero isn't a legal value? A: Global variables that don't have an explicit initializer are allocated in the BSS section in the executable. They don't actually take up any space in the EXE; the BSS section is a special section that the OS allocates and clears to zero. On other operating systems, there are similar mechanisms. You can depend on global variables being zero-initialized. A: Class fields are default zero. This is documented, so you can rely on it. Local stack variables are undefined unless string or interface; these are set to zero. A: Newly introduced (since Delphi 10.3) inline variables make the control of initial values easier.
procedure TestInlineVariable;
begin
  var index: Integer := 345;
  ShowMessage(index.ToString);
end;

A: Just as a side note (as you are new to Delphi): Global variables can be initialized directly when declaring them:

var myGlobal: integer = 99;

A: Yes, this is the documented behaviour: * *Object fields are always initialized to 0, 0.0, '', False, nil or whatever applies. *Global variables are always initialized to 0 etc. as well; *Local reference-counted* variables are always initialized to nil or ''; *Local non reference-counted* variables are uninitialized, so you have to assign a value before you can use them. I remember that Barry Kelly somewhere wrote a definition for "reference-counted", but cannot find it any more, so this should do in the meantime: reference-counted == variables that are reference-counted themselves, or that directly or indirectly contain fields (for records) or elements (for arrays) that are reference-counted, like: string, variant, interface, or a dynamic array or static array containing such types. Notes: * *record itself is not enough to become reference-counted *I have not tried this with generics yet
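As a side note, if a field's sensible default is not zero/False/nil, you can still get deterministic values by setting them in the constructor. A small hedged sketch -- the class and values are invented for illustration, not taken from the question:

type
  TSettings = class
  private
    fRetryCount: Integer;
    fEnabled: Boolean;
  public
    constructor Create;
  end;

constructor TSettings.Create;
begin
  inherited Create;
  // Fields arrive zero-initialized; only set the ones whose desired
  // default is not zero/False/nil.
  fRetryCount := 3;
  fEnabled := True;
end;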
{ "language": "en", "url": "https://stackoverflow.com/questions/132725", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "113" }
Q: Why should I ever use inline code? I'm a C/C++ developer, and here are a couple of questions that always baffled me. * *Is there a big difference between "regular" code and inline code? *Which is the main difference? *Is inline code simply a "form" of macros? *What kind of tradeoff must be done when choosing to inline your code? Thanks A: * *Is there a big difference between "regular" code and inline code? Yes - inline code does not involve a function call or saving register variables to the stack. It uses program space each time it is 'called'. So overall it takes less time to execute because there's no branching in the processor and saving of state, clearing of caches, etc. * *Is inline code simply a "form" of macros? Macros and inline code share similarities. The big difference is that the inline code is specifically formatted as a function so the compiler, and future maintainers, have more options. Specifically it can easily be turned into a function if you tell the compiler to optimize for code space, or a future maintainer ends up expanding it and using it in many places in their code. * *What kind of tradeoff must be done when choosing to inline your code? * *Macro: high code space usage, fast execution, hard to maintain if the 'function' is long *Function: low code space usage, slower to execute, easy to maintain *Inline function: high code space usage, fast execution, easy to maintain It should be noted that the register saving and jumping to the function does take up code space, so for very small functions an inline can take up less space than a function. -Adam A: Performance As has been suggested in previous answers, use of the inline keyword can make code faster by inlining function calls, often at the expense of increased executable size. “Inlining function calls” just means substituting the call to the target function with the actual code of the function, after filling in the arguments accordingly. However, modern compilers are very good at inlining function calls automatically without any prompt from the user when set to high optimisation. Actually, compilers are usually better at determining what calls to inline for speed gain than humans are. Declaring functions inline explicitly for the sake of performance gain is (almost?) always unnecessary! Additionally, compilers can and will ignore the inline request if it suits them. Compilers will do this if a call to the function is impossible to inline (i.e. using nontrivial recursion or function pointers) but also if the function is simply too large for a meaningful performance gain. One Definition Rule However, declaring an inline function using the inline keyword has other effects, and may actually be necessary to satisfy the One Definition Rule (ODR): This rule in the C++ standard states that a given symbol may be declared multiple times but may only be defined once. If the link editor (= linker) encounters several identical symbol definitions, it will generate an error. One solution to this problem is to make sure that a compilation unit doesn't export a given symbol by giving it internal linkage by declaring it static. However, it's often better to mark a function inline instead. This tells the linker to merge all definitions of this function across compilation units into one definition, with one address, and shared function-static variables.
As an example, consider the following program:

// header.hpp
#ifndef HEADER_HPP
#define HEADER_HPP

#include <cmath>
#include <numeric>
#include <vector>

using vec = std::vector<double>;

/*inline*/ double mean(vec const& sample) {
    return std::accumulate(begin(sample), end(sample), 0.0) / sample.size();
}

#endif // !defined(HEADER_HPP)

// test.cpp
#include "header.hpp"
#include <iostream>
#include <iomanip>

void print_mean(vec const& sample) {
    std::cout << "Sample with x̂ = " << mean(sample) << '\n';
}

// main.cpp
#include "header.hpp"

void print_mean(vec const&); // Forward declaration.

int main() {
    vec x{4, 3, 5, 4, 5, 5, 6, 3, 8, 6, 8, 3, 1, 7};
    print_mean(x);
}

Note that both .cpp files include the header file and thus the function definition of mean. Although the file is saved with include guards against double inclusion, this will result in two definitions of the same function, albeit in different compilation units. Now, if you try to link those two compilation units — for example using the following command:

⟩⟩⟩ g++ -std=c++11 -pedantic main.cpp test.cpp

you'll get an error saying “duplicate symbol __Z4meanRKNSt3__16vectorIdNS_9allocatorIdEEEE” (which is the mangled name of our function mean). If, however, you uncomment the inline modifier in front of the function definition, the code compiles and links correctly. Function templates are a special case: they are always inline, regardless of whether they were declared that way. This doesn’t mean that the compiler will inline calls to them, but they won’t violate the ODR. The same is true for member functions that are defined inside a class or struct. A: * *Is there a big difference between "regular" code and inline code? Yes and no. No, because an inline function or method has exactly the same characteristics as a regular one, the most important one being that they are both type safe. And yes, because the assembly code generated by the compiler will be different; with a regular function, each call will be translated into several steps: pushing parameters on the stack, making the jump to the function, popping the parameters, etc., whereas a call to an inline function will be replaced by its actual code, like a macro. * *Is inline code simply a "form" of macros? No! A macro is simple text replacement, which can lead to severe errors. Consider the following code:

#define unsafe(i) ( (i) >= 0 ? (i) : -(i) )
[...]
unsafe(x++); // x is incremented twice!
unsafe(f()); // f() is called twice!
[...]

Using an inline function, you're sure that parameters will be evaluated before the function is actually performed. They will also be type checked, and if necessary converted to match the formal parameter types. * *What kind of tradeoff must be done when choosing to inline your code? Normally, program execution should be faster when using inline functions, but with a bigger binary. For more information, you should read GoTW#33. A: It depends on the compiler... Say you have a dumb compiler. By indicating a function must be inlined, it will put a copy of the content of the function at each occurrence where it is called. Advantage: no function call overhead (putting parameters, pushing the current PC, jumping to the function, etc.). Can be important in the central part of a big loop, for example. Inconvenience: inflates the generated binary. Is it a macro? Not really, because the compiler still checks the type of parameters, etc. What about smart compilers? They can ignore the inline directive, if they "feel" the function is too complex/too big.
And perhaps they can automatically inline some trivial functions, like simple getters/setters. A: Inline differs from macros in that it's a hint to the compiler (the compiler may decide not to inline the code!), whereas macros are source code text generation before compilation and as such are "forced" to be inlined. A: Inline code works like macros in essence, but it is actual real code, which can be optimized. Very small functions are often good for inlining because the work needed to set up the function call (load the parameters into the proper registers) is costly compared to the small amount of actual work the method does. With inlining, there is no need to set up the function call, because the code is directly "pasted into" any method that uses it. Inlining increases code size, which is its primary drawback. If the code is so big that it cannot fit into the CPU cache, you can get major slowdowns. You only need to worry about this in rare cases, since it is not likely you are using a method in so many places that the increased code would cause issues. In summary, inlining is ideal for speeding up small methods that are called many times but not in too many places (100 places is still fine, though - you need to go into quite extreme examples to get any significant code bloat). Edit: as others have pointed out, inlining is only a suggestion to the compiler. It can freely ignore you if it thinks you are making stupid requests like inlining a huge 25-line method. A: Marking a function inline means that the compiler has the option to include it "in-line" where it is called, if the compiler chooses to do so; by contrast, a macro will always be expanded in-place. An inlined function will have appropriate debug symbols set up to allow a symbolic debugger to track the source where it came from, while debugging macros is confusing. Inline functions need to be valid functions, while macros... well, don't. Deciding to declare a function inline is largely a space tradeoff -- your program will be larger if the compiler decides to inline it (particularly if it isn't also static, in which case at least one non-inlined copy is required for use by any external objects); indeed, if the function is large, this could result in a drop in performance as less of your code fits in cache. The general performance boost, however, is just that you're getting rid of the overhead of the function call itself; for a small function called as part of an inner loop, that's a tradeoff that makes sense. If you trust your compiler, mark small functions used in inner loops inline liberally; the compiler will be responsible for Doing The Right Thing in deciding whether or not to inline. A: If you are marking your code as inline in, for example, C++, you are also telling your compiler that the code should be executed inline, i.e. that code block will "more or less" be inserted where it is called (thus removing the pushing, popping and jumping on the stack). So, yes... it is recommended if the functions are suitable for that kind of behavior. A: "inline" is like the 2000's equivalent of "register". Don't bother, the compiler can do a better job of deciding what to optimize than you can. A: By inlining, the compiler inserts the implementation of the function at the calling point. What you are doing with this is removing the function call overhead. However, there is no guarantee that all your candidates for inlining will actually be inlined by the compiler. However, for smaller functions, compilers will almost always inline.
So if you have a function that is called many times but only has a limited amount of code - a couple of lines - you could benefit from inlining, because the function call overhead might take longer than the execution of the function itself. A classic example of a good candidate for inlining is getters for simple concrete classes.

class CPoint
{
public:
    inline int x() const { return m_x; }
    inline int y() const { return m_y; }
private:
    int m_x;
    int m_y;
};

Some compilers (e.g. VC2005) have an option for aggressive inlining, and you wouldn't need to specify the 'inline' keyword when using that option. A: I won't reiterate the above, but it's worth noting that virtual functions will not be inlined as the function called is resolved at runtime. A: Inlining usually is enabled at level 3 of optimization (-O3 in the case of GCC). It can be a significant speed improvement in some cases (when it is possible). Explicit inlining in your programs can add some speed improvement at the cost of an increased code size. You should see which is suitable: code size or speed, and decide whether you should include it in your programs. You can just turn on level 3 of optimization and forget about it, letting the compiler do its job. A: The answer to whether you should inline comes down to speed. If you're in a tight loop calling a function, and it's not a super huge function, but one where a lot of the time is wasted in CALLING the function, then make that function inline and you'll get a lot of bang for your buck. A: First of all, inline is a request to the compiler to inline the function. So it is up to the compiler to make it inline or not. * *When to use? Whenever a function is only a very few lines (for all accessors and mutators), but not for recursive functions *Advantage? The time taken for invoking the function call is not incurred *Does the compiler inline any functions of its own? Yes, whenever a function is defined in the header file inside a class A: Inlining is a technique to increase speed. But use a profiler to test this in your situation. I have found (MSVC) that inlining does not always deliver, and certainly not in any spectacular way. Runtimes sometimes decreased by a few percent, but in slightly different circumstances increased by a few percent. If the code is running slowly, get out your profiler to find troublespots and work on those. I have stopped adding inline functions to header files; it increases coupling but gives little in return. A: Inline code is faster. There is no need to perform a function call (every function call costs some time). The disadvantage is that you cannot pass a pointer to an inline function around, as the function does not really exist as a function and thus has no pointer. Also the function cannot be exported to the public (e.g. an inline function in a library is not available within binaries linking against the library). Another one is that the code section in your binary will grow if you call the function from various places (as each time a copy of the function is generated instead of having just one copy and always jumping there). Usually you don't have to manually decide if a function shall be inlined or not. E.g.
if a function is static (thus not exported anyway) and only called once within your code and you never use a pointer to the function, chances are good that GCC will decide to inline it automatically, as it will have no negative impact (the binary won't get bigger by inlining it only once).
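If you want to check whether the compiler actually honoured an inline hint, GCC's -Winline warning is one option. A small hedged sketch -- the functions are made up for illustration, and the exact diagnostics vary by compiler version:

// inline_check.cpp -- compile with: g++ -O2 -Winline inline_check.cpp
#include <cstdio>

inline int square(int x) { return x * x; } // trivial: expect it to be inlined

// Recursive, so it cannot be fully inlined; -Winline may report why
// the hint was not honoured.
inline int fact(int n) { return n <= 1 ? 1 : n * fact(n - 1); }

int main() {
    std::printf("%d %d\n", square(6), fact(6));
    return 0;
}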
{ "language": "en", "url": "https://stackoverflow.com/questions/132738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: jQuery - running a function on a new image I'm a jQuery novice, so the answer to this may be quite simple: I have an image, and I would like to do several things with it. When a user clicks on a 'Zoom' icon, I'm running the 'imagetool' plugin (http://code.google.com/p/jquery-imagetool/) to load a larger version of the image. The plugin creates a new div around the image and allows the user to pan around. When a user clicks on an alternative image, I'm removing the old one and loading in the new one. The problem comes when a user clicks an alternative image, and then clicks on the zoom button - the imagetool plugin creates the new div, but the image appears after it... The code is as follows:

// Product Zoom (jQuery)
$(document).ready(function(){
    $("#productZoom").click(function() {
        // Set new image src
        var imageSrc = $("#productZoom").attr("href");
        $("#productImage").attr('src', imageSrc);

        // Run the imagetool plugin on the image
        $(function() {
            $("#productImage").imagetool({
                viewportWidth: 300,
                viewportHeight: 300,
                topX: 150,
                topY: 150,
                bottomX: 450,
                bottomY: 450
            });
        });
        return false;
    });
});

// Alternative product photos (jQuery)
$(document).ready(function(){
    $(".altPhoto").click(function() {
        $('#productImageDiv div.viewport').remove();
        $('#productImage').remove();

        // Set new image src
        var altImageSrc = $(this).attr("href");
        $("#productZoom").attr('href', altImageSrc);

        var img = new Image();
        $(img).load(function () {
            $(this).hide();
            $('#productImageDiv').append(this);
            $(this).fadeIn();
        }).error(function () {
            // notify the user that the image could not be loaded
        }).attr({
            src: altImageSrc,
            id: "productImage"
        });
        return false;
    });
});

It seems to me that the imagetool plugin can no longer see the #productImage image once it has been replaced with a new image... So I think this has something to do with binding? As in, because the new image is added to the DOM after the page has loaded, the imagetool plugin can no longer use it correctly... is this right? If so, any ideas how to deal with it? A: Wehey! I've sorted it out myself... Turns out if I remove the containing div completely, and then rewrite it with .html, the imagetool plugin recognises it again. Amended code for anyone who's interested:

$(document).ready(function(){
    // Product Zoom (jQuery)
    $("#productZoom").click(function() {
        $('#productImage').remove();
        $('#productImageDiv').html('<img src="" id="productImage">');

        // Set new image src
        var imageSrc = $("#productZoom").attr("href");
        $("#productImage").attr('src', imageSrc);

        // Run the imagetool plugin on the image
        $(function() {
            $("#productImage").imagetool({
                viewportWidth: 300,
                viewportHeight: 300,
                topX: 150,
                topY: 150,
                bottomX: 450,
                bottomY: 450
            });
        });
        return false;
    });

    // Alternative product photos (jQuery)
    $(".altPhoto").click(function() {
        $('#productImageDiv div.viewport').remove();
        $('#productImage').remove();

        // Set new image src
        var altImageSrc = $(this).attr("href");

        // Set new image Zoom link (from the ID... is that messy?)
        var altZoomLink = $(this).attr("id");
        $("#productZoom").attr('href', altZoomLink);

        var img = new Image();
        $(img).load(function () {
            $(this).hide();
            $('#productImageDiv').append(this);
            $(this).fadeIn();
        }).error(function () {
            // notify the user that the image could not be loaded
        }).attr({
            src: altImageSrc,
            id: "productImage"
        });
        return false;
    });
});

A: You could try abstracting the productZoom.click() function to a named function, and then re-binding it after changing to an alternate image.
Something like:

// Product Zoom (jQuery)
$(document).ready(function(){
    $("#productZoom").click(bindZoom);

    // Alternative product photos (jQuery)
    $(".altPhoto").click(function() {
        $('#productImageDiv div.viewport').remove();
        $('#productImage').remove();

        // Set new image src
        var altImageSrc = $(this).attr("href");
        $("#productZoom").attr('href', altImageSrc);

        var img = new Image();
        $(img).load(function () {
            $(this).hide();
            $('#productImageDiv').append(this);
            $(this).fadeIn();
        }).error(function () {
            // notify the user that the image could not be loaded
        }).attr({
            src: altImageSrc,
            id: "productImage"
        });

        $("#productZoom").click(bindZoom);
        return false;
    });
});

function bindZoom() {
    // Set new image src
    var imageSrc = $("#productZoom").attr("href");
    $("#productImage").attr('src', imageSrc);

    // Run the imagetool plugin on the image
    $(function() {
        $("#productImage").imagetool({
            viewportWidth: 300,
            viewportHeight: 300,
            topX: 150,
            topY: 150,
            bottomX: 450,
            bottomY: 450
        });
    });
    return false;
}

Also, I rolled both your ready() blocks into the same block. A: First, I have one question: are the .altPhoto elements links or images? Because if they are images, then this line is wrong:

var altImageSrc = $(this).attr("href");

It should be:

var altImageSrc = $(this).attr("src");

It's the only thing I could find at a glance.
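As a side note, when swapping images in like this it is safest to attach the load handler before assigning src (as the code above already does), otherwise a cached image can fire load before the handler exists. A hedged sketch of a small helper that re-applies the plugin after every swap -- the imagetool options are simply copied from the question, and the helper name is made up:

function swapAndZoom(newSrc) {
    // Remove the old viewport and image, as in the original code.
    $('#productImageDiv div.viewport').remove();
    $('#productImage').remove();

    var img = new Image();
    // Bind the handler first so cached images are handled too.
    $(img).one('load', function () {
        $('#productImageDiv').append(this);
        // Re-run the plugin on the freshly inserted element.
        $(this).imagetool({
            viewportWidth: 300,
            viewportHeight: 300,
            topX: 150, topY: 150,
            bottomX: 450, bottomY: 450
        });
    }).attr({ src: newSrc, id: 'productImage' });
}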
{ "language": "en", "url": "https://stackoverflow.com/questions/132750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Different layouts and i18n in JSP application I have a bunch of JSP files and a backend in Tomcat. I have 3 different versions of JSP with the same logic inside but with different layouts. So if I change some logic I have three JSP files to fix. What is the proper solution for such a scenario? I thought of some XML and XSLT stack: the backend gives only data in XML and then for each layout I have XSLT that does some magic. Just imagine that you need to provide stackoverflow in various languages in JSP. The logic is the same but the layout is different. Could you point me in some direction? A: This is usually solved by using some templating engine - you create smaller page fragments, and then you declare to the template engine that certain views should consist of these parts, put together in a certain way. Struts tiles is the classic example in the Java world, but it is really getting old and crufty compared to more modern frameworks in Java and other languages. Tapestry and Wicket are two more modern ones (haven't used them though). For only 3 pages applying a whole web framework is probably overkill though, but if your site grows... A: With plain old JSP without any kind of framework: 1) Use controllers to do the processing and only use JSP to display the data 2) Use JSP include directives to include header, navigation, menu, footer and other necessary common/shared elements to all of those layouts, as shown in the sketch after this question's answers. Or/and: Use the following in web.xml:

<jsp-property-group>
    <url-pattern>/customers/*</url-pattern>
    <include-prelude>/shared/layout/_layout_customers_top.jsp</include-prelude>
    <include-coda>/shared/layout/_layout_customers_bottom.jsp</include-coda>
</jsp-property-group>

The url pattern determines which JSPs get which JSP fragments (partials in Ruby on Rails) attached to top/bottom. A: Learn about MVC (Model View Controller) and the idea that JSP should be the View part of it and should not contain any logic whatsoever. Logic belongs in a Model class. A: Take a look at Tiles. A: This is a very classical problem domain and there are lots of concepts and frameworks out there that are trying to deal with this issue (MVC frameworks like Struts and JSF, and SessionBeans, to name but a few). As I suspect you're not really a Java enterprise "evangelist" I will give you two simple pieces of advice. * *You obviously have a lot of redundant code in your JSPs. Extract this code into "real" Java classes and use them in all of your JSPs. That way you will be able to modify business logic in one place and redundancy will be less of a problem. *Take a look at Cascading Style Sheets (CSS). This is the state of the art way to layout webpages. You may not even need different JSPs for different layouts, if you have well designed HTML + CSS. Regards
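To make the include-directive suggestion concrete, a minimal sketch of a page built from shared fragments -- the file names are hypothetical:

<%-- customers.jsp: shared fragments plus page-specific content --%>
<%@ include file="/shared/layout/header.jsp" %>

<%-- page-specific content goes here --%>
<h1>Customer list</h1>

<%@ include file="/shared/layout/footer.jsp" %>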
{ "language": "en", "url": "https://stackoverflow.com/questions/132754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I - in ASP.NET save the info from a page when a user leaves the page? In our CMS, we have a place in which we enable users to play around with their site hierarchy - move pages around, add and remove pages, etc. We use drag & drop to implement moving pages around. Each move has to be saved in the DB, and exported to many HTML files. If we do that on every move, it will slow down the users. Therefore we thought that it's preferable to let the users play around as much as they want, saving each change to the DB, but only when they leave the page - to export their changes to the HTML files. We thought of making the user click a "publish" button when they're ready to commit their changes, but we're afraid users won't remember to do that, because from their standpoint once they've moved a page to a new place - the action is done. Another problem with the button is that it's inconsistent with the behavior of the other parts of the site (for example, when a user moves a text inside a page, the changes are saved automatically, as there is only 1 HTML file to update). So how can we automatically save user changes on leaving the page? A: You should warn the user when he leaves the page with javascript. From http://www.siafoo.net/article/67: Modern browsers have an event called window.beforeunload that is fired right when any event occurs that would cause the page to unload. This includes clicking on a link, submitting a form, or closing the tab or window. Visit this page for a sample that works in most browsers: http://www.webreference.com/dhtml/diner/beforeunload/bunload4.html I think it's bad practice to save the page without asking the user first; that's not how normal web pages work. Sample:

<SCRIPT LANGUAGE="JavaScript1.2" TYPE="text/javascript">
<!--
function unloadMess(){
    mess = "Wait! You haven't finished."
    return mess;
}
function setBunload(on){
    window.onbeforeunload = (on) ? unloadMess : null;
}
setBunload(true);
//-->
</SCRIPT>

A: The easiest way I can think of is to store the page info each time the user moves items around using Ajax (e.g. with an UpdatePanel, onUpdated event, let it fire some script that updates the user's page config). Alternatively - .Net's WebParts implementation does this automatically without intervention by the programmer (unless you want to change the storage engine; it uses a local mdb by default). A: Use a "Publish" checkbox/button, and when the user interacts with the page in a way that causes them to navigate away, ask them if they want to publish if that box is NOT checked/button not clicked. Be aware that there are actions (closing the browser, accessing their favorites menu, etc.) on which you will probably not want, or not be able, to prompt the user. A: I would force them to click a button such as publish. That is a 'training' issue. Automatically saving changes when they leave could have other ramifications. For example, if a user opens up a record and plays around with it with no intention of changing it, they close it, like a Word document, Excel, etc. I would have your site mimic that model. You also have to remember that the web is a disconnected environment, and web applications are not required to run like a Windows application. If the user doesn't click the publish/save button then their changes are not saved, and that is up to them to remember to do.
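If you do decide to save automatically anyway, one pattern from that era is to post the state from the beforeunload handler with a synchronous request, so the save completes before the page unloads. A rough, hedged sketch -- the endpoint name and the hierarchyState variable are hypothetical, and synchronous requests block the browser briefly:

window.onbeforeunload = function () {
    // 'hierarchyState' stands in for whatever serialized state the
    // drag & drop code maintains; 'SaveHierarchy.ashx' is a made-up handler.
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "SaveHierarchy.ashx", false); // synchronous on purpose
    xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    xhr.send("state=" + encodeURIComponent(hierarchyState));
    // Return nothing so no confirmation prompt is shown.
};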
{ "language": "en", "url": "https://stackoverflow.com/questions/132764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I echo a newline in a batch file? How can you insert a newline into your batch file output? I want to do something like:

echo hello\nworld

Which would output:

hello
world

A: You can also do it like this:

(for %i in (a b "c d") do @echo %~i)

The output will be:

a
b
c d

Note that when this is put in a batch file, '%' shall be doubled:

(for %%i in (a b "c d") do @echo %%~i)

A: Like the answer of Ken, but with the use of the delayed expansion.

setlocal EnableDelayedExpansion
(set \n=^
%=Do not remove this line=%
)
echo Line1!\n!Line2
echo Works also with quotes "!\n!line2"

First a single linefeed character is created and assigned to the \n-variable. This works as the caret at the line end tries to escape the next character, but if this is a linefeed it is ignored and the next character is read and escaped (even if this is also a linefeed). Then you need a third linefeed to end the current instruction, else the third line would be appended to the LF-variable. Even batch files have line endings with CR/LF; only the LFs are important, as the CRs are removed in this phase of the parser. The advantage of using the delayed expansion is that there is no special character handling at all. echo Line1%LF%Line2 would fail, as the parser stops parsing at single linefeeds. More explanations are at SO:Long commands split over multiple lines in Vista/DOS batch (.bat) file SO:How does the Windows Command Interpreter (CMD.EXE) parse scripts? Edit: Avoid echo. This doesn't answer the question, as the question was about a single echo that can output multiple lines. But despite the other answers that suggest the use of echo. to create a new line, it should be noted that echo. is the worst, as it's very slow and it can completely fail, as cmd.exe searches for a file named ECHO and tries to start it. For printing just an empty line, you could use one of echo, echo; echo( echo/ echo+ echo= But the use of echo., echo\ or echo: should be avoided, as they can be really slow, depending on the location where the script will be executed, like a network drive. A: If anybody comes here because they are looking to echo a blank line from a MINGW make makefile, I used

@cmd /c echo.

Simply using echo. causes the dreaded process_begin: CreateProcess(NULL, echo., ...) failed. error message. I hope this helps at least one other person out there :) A: Use:

echo hello
echo:
echo world

A: echo hello & echo.world

This means you could define & echo. as a constant for a newline \n. A: echo. Enough said. If you need it in a single line, use the &. For example,

echo Line 1 & echo. & echo line 3

would output as:

Line 1

line 3

Now, say you want something a bit fancier, ...

set n=^&echo.
echo hello %n% world

Outputs:

hello
world

Then just throw in a %n% whenever you want a new line in an echo statement. This is closer to your \n used in various languages. Breakdown: set n= sets the variable n equal to: ^ nulls out the next symbol to follow; & means to do another command on the same line. We don't care about errorlevel (it's an echo statement, for crying out loud), so no && is needed. echo. continues the echo statement. All of this works because you can actually create variables that are code, and use them inside of other commands. It is sort of like a ghetto function, since batch is not exactly the most advanced of shell scripting languages. This only works because of batch's poor usage of variables, not designating between ints, chars, floats, strings, etc. naturally. If you are crafty, you could get this to work with other things.
For example, using it to echo a tab:

set t=^&echo.        ::there are spaces up to the double colon

A: After a sleepless night and after reading all answers herein, after reading a lot of SS64 > CMD and after a lot of try & error I found: The (almost) Ultimate Solution TL;DR ... for early adopters. Important! Use a text editor for C&P that supports Unicode, e.g. Notepad++! Set Newline Environment Variable ... ... in the Current CMD Session Important! Do not edit anything between '=' and '^'! (There's a character in between though you don't see it. Neither here nor in edit mode. C&P works here.)

:: Sets newline variables in the current CMD session
set \n=​^&echo(
set nl=​^&echo(

... for the Current User Important! Do not edit anything between (the second) '␣' and '^'! (There's a character in between though you don't see it. Neither here nor in edit mode. C&P works here.)

:: Sets newline variables for the current user [HKEY_CURRENT_USER\Environment]
setx \n ​^&echo(
setx nl ​^&echo(

... for the Local Machine Important! Do not edit anything between (the second) '␣' and '^'! (There's a character in between though you don't see it. Neither here nor in edit mode. C&P works here.)

:: Sets newline variables for the local machine [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Environment]
setx \n ​^&echo( /m
setx nl ​^&echo( /m

Why just almost? It does not work with double-quotes that are not paired (opened and closed) in the same printed line, except if the only unpaired double-quote is the last character of the text, e.g.: * *works: ""echo %\n%...after "newline". Before "newline"...%\n%...after "newline" (paired in each printed line) *works: echo %\n%...after newline. Before newline...%\n%...after newline" (the only unpaired double-quote is the last character) *doesn't work: echo "%\n%...after newline. Before newline...%\n%...after newline" (double-quotes are not paired in the same printed line) Workaround for completely double-quoted texts (inspired by Windows batch: echo without new line):

set BEGIN_QUOTE=echo ^| set /p !="""
...
%BEGIN_QUOTE% echo %\n%...after newline. Before newline...%\n%...after newline"

It works with completely single-quoted texts like:

echo '%\n%...after newline. Before newline...%\n%...after newline'

Added value: Escape Character Note: There's a character after the '=' but you don't see it here, only in edit mode. C&P works here.

:: Escape character - useful for color codes when 'echo'ing
:: See https://learn.microsoft.com/en-us/windows/console/console-virtual-terminal-sequences#text-formatting
set ESC=

For the colors see also https://imgur.com/a/EuNXEar and https://gist.github.com/gerib/f2562474e7ca0d3cda600366ee4b8a45. 2nd added value: Getting Unicode characters easily A great page for getting 87,461 Unicode characters (AToW) by keyword(s): https://www.amp-what.com/. The Reasons * *The version in Ken's answer works apparently (I didn't try it), but is somehow...well...you see:

set NLM=^


set NL=^^^%NLM%%NLM%^%NLM%%NLM%

*The version derived from user2605194's and user287293's answer (without anything between '=' and '^'):

set nl=^&echo(
set \n=^&echo(

works partly, but fails with the variable at the beginning of the line to be echoed:

> echo %\n%Hello%\n%World!
echo  & echo(Hello & echo(World!
echo is ON.
Hello
World!

due to the blank argument to the first echo. *All others are more or less invoking three echos explicitly. *I like short one-liners.
The Story Behind To prevent the set \n=^&echo: suggested in answers herein from echoing blank (and thus printing its status), I first remembered the Alt+255 trick from the times when Novell was a widely used network and code pages like 437 and 850 were used. But 0d255/0xFF is ›ÿ‹ (Latin Small Letter Y with diaeresis) in Unicode nowadays. Then I remembered that there are more spaces in Unicode than the ordinary 0d32/0x20, but all of them are considered whitespace and lead to the same behaviour as ›␣‹. But there are even more: the zero width spaces and joiners, which are not considered whitespace. The problem with them is that you cannot C&P them, since with their zero width there's nothing to select. So, I copied one that is close to one of them, the hair space (U+200A), which is right before the zero width space (U+200B), into Notepad++, opened its Hex-Editor plugin, found its bit representation E2 80 8A and changed it to E2 80 8B. Success! I had a non-whitespace character that's not visible in my \n environment variable. A: Ken's and Jeb's solutions work well. But the new lines are generated with only an LF character and I need CRLF characters (Windows version). To this end, at the end of the script, I converted LF to CRLF. Example:

TYPE file.txt | FIND "" /V > file_win.txt
del file.txt
rename file_win.txt file.txt

A: If one needs to use the famous \n in string literals that can be passed to a variable, one may write code like the Hello.bat script below:

@echo off
set input=%1
if defined input (
    set answer=Hi!\nWhy did you call me a %input%?
) else (
    set answer=Hi!\nHow are you?\nWe are friends, you know?\nYou can call me by name.
)
setlocal enableDelayedExpansion
set newline=^


rem Two empty lines above are essential
echo %answer:\n=!newline!%

This way multiline output may be prepared in one place, even in another script or an external file, and printed in another. The line break is held in the newline variable. Its value must be substituted after the echo line is expanded, so I use setlocal enableDelayedExpansion to enable exclamation signs, which expand variables on execution. And the execution substitutes \n with the newline contents (look for the syntax at help set). We could of course use !newline! while setting the answer, but \n is more convenient. It may be passed from outside (try Hello R2\nD2), where nobody knows the name of the variable holding the line break (yes, Hello C3!newline!P0 works the same way). The above example may be refined into a subroutine or a standalone batch, used like call:mlecho Hi\nI'm your computer:

:mlecho
setlocal enableDelayedExpansion
set text=%*
set nl=^


echo %text:\n=!nl!%
goto:eof

Please note that an additional backslash won't prevent the script from parsing the \n substring. A: When echoing something to redirect to a file, multiple echo commands will not work. I think maybe the ">>" redirector is a good choice:

echo hello > temp
echo world >> temp

A: To start a new line in batch, all you have to do is add "echo[", like so:

echo Hi!
echo[
echo Hello!

A: Why not use substring/replace space to echo;?

set "_line=hello world"
echo\%_line: =&echo;%

* *Results:

hello
world

* *Or, replace \n with echo;:

set "_line=hello\nworld"
echo\%_line:\n=&echo;%

A: For Windows 10, with virtual terminal sequences there exists a means to control the cursor position to a high degree.
To define the escape sequence 0x1b, the following can be used:

@Echo off
For /f %%a in ('echo prompt $E^| cmd') Do set \E=%%a

To output a single newline between strings:

<nul set /p "=Hello%\E%[EWorld"

To output n newlines, where n is replaced with an integer:

<nul set /p "=%\E%[nE"

A: Please note that all solutions that use cursor positioning according to Console Virtual Terminal Sequences, Cursor Positioning, with:

Sequence     Code   Description         Behaviour
ESC [ <n> E  CNL    Cursor Next Line    Cursor down <n> lines from current position

only work as long as the bottom of the console window is not reached. At the bottom there is no space left to move the cursor down, so it just moves left (with the CR of CRLF) and the line printed before is overwritten from its beginning. A: To echo a newline, add a dot . right after the echo:

echo.

A: If you need to put results into a file, you can use:

(echo a & echo: & echo b) > file_containing_multiple_lines.txt

A: Here you go, create a .bat file with the following in it:

@echo off
REM Creating a Newline variable (the two blank lines are required!)
set NLM=^


set NL=^^^%NLM%%NLM%^%NLM%%NLM%
REM Example Usage:
echo There should be a newline%NL%inserted here.
echo.
pause

You should see output like the following:

There should be a newline
inserted here.
Press any key to continue . . .

You only need the code between the REM statements, obviously. A: Just like Grimtron suggests - here is a quick example to define it:

@echo off
set newline=^& echo.
echo hello %newline%world

Output:

C:\>test.bat
hello
world

A: There is a standard feature echo: in cmd/bat-files to write a blank line, which emulates a new line in your cmd output:

@echo off
echo line1
echo:
echo line2

or

@echo line1 & echo: & echo line2

Output of the cited cmd-file:

line1

line2

A: This worked for me, no delayed expansion necessary:

@echo off
(
echo ^<html^>
echo ^<body^>
echo Hello
echo ^</body^>
echo ^</html^>
)
pause

It writes output like this:

<html>
<body>
Hello
</body>
</html>
Press any key to continue . . .

A: You can use @echo ( @echo + [space] + [insecable space] ) Note: The insecable space can be obtained with Alt+0160. Hope it helps :) [edit] Hmm you're right, I needed it in a Makefile, it works perfectly in there. I guess my answer is not adapted for batch files... My bad. A: Simple:

set nl=.
echo hello
echo%nl%
REM without space ^^^
echo World

Result:

hello

World

A: Be aware, this won't work in console because it'll simulate an escape key and clear the line. Using this code, replace <ESC> with the 0x1b escape character or use this Pastebin link:

:: Replace <ESC> with the 0x1b escape character or copy from this Pastebin:
:: https://pastebin.com/xLWKTQZQ
echo Hello<ESC>[Eworld!

:: OR
set "\n=<ESC>[E"
echo Hello%\n%world!

A: Adding a variant to Ken's answer that shows setting values for environment variables with new lines in them. We use this method to append error conditions to a string in a VAR, then at the end of all the error checking output it to a file as a summary of all the errors. This is not complete code, just an example.

@echo off
SETLOCAL ENABLEDELAYEDEXPANSION

:: the two blank lines are required!
set NLM=^


set NL=^^^%NLM%%NLM%^%NLM%%NLM%

:: Example Usage:
Set ErrMsg=Start Reporting:

:: some logic here finds an error condition and appends the error report
set ErrMsg=!ErrMsg!!NL!Error Title1!NL!Description!NL!Summary!NL!

:: some logic here finds another error condition and appends the error report
set ErrMsg=!ErrMsg!!NL!Error Title2!NL!Description!NL!Summary!NL!
:: some logic here finds another error condition and appends the error report
set ErrMsg=!ErrMsg!!NL!Error Title3!NL!Description!NL!Summary!NL!

echo %ErrMsg%
pause
echo %ErrMsg% > MyLogFile.log

Log and screen output look like this...
{ "language": "en", "url": "https://stackoverflow.com/questions/132799", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "759" }
Q: The permissions granted to user ' are insufficient for performing this operation. (rsAccessDenied) I created a report model using SSRS (2005) and published to the local server. But when I tried to run the report for the model I published using Report Builder I get the following error. Report execution error: The permissions granted to user are insufficient for performing this operation. (rsAccessDenied) A: You can also make sure that the Identity in your Application Pool has the right permissions. * *Go to IIS Manager *Click Application pools *Identify the application pool of the site you are deploying reports on *Check that the identity is set to some service account or user account that has admin permissions *You can change the identity by stopping the pool, right clicking it, and selecting Advanced Settings... Under Process Model is the Identity field A: I used the following steps and they worked for me. Open Reporting Services Configuration Manager -> then connect to the report server instance -> then click on Report Manager URL. In the Report Manager URL page, click the Advanced button -> then in the Multiple Identities for Report Manager, click Add. In the Add a Report Manager HTTP URL popup box, select Host Header and type in: localhost Click OK to save your changes. Then: * *Copy the report server URL *Run Google Chrome/Internet Explorer as administrator *Paste the URL in the address bar and press Enter. It works fine for me in Internet Explorer and Google Chrome, but not in Mozilla Firefox. When Firefox asks for a username and password I provide it, but it does not work. I am an admin and have full rights. I made one more change: set "User Account Control Settings" to never notify. If you are getting this type of exception while deploying the report from Visual Studio, then do the following: * *Open Google Chrome/Internet Explorer with administrator rights. *Open the report server URL in it. *Click on "New Role Assignment", then enter the user name and select the roles. *Click OK. *Now deploy the report from Visual Studio; it will work and deploy the reports to the specified server. A: Under Site Settings in Report Manager > Configure system-level role definitions, check the Execute Report Definitions option. Then create a system user group and give access to that group: connect to your Reporting Services database and, in the server properties, add the group and permit it access as System User... It should work. A: I have SQL2008 / Windows 2008 Enterprise and this is what I had to do to correct the rs.accessdenied, 404, 401 and 503 errors: * *Added NT Users to SQL Report Server Users and IIS_USR Group *I changed SQL Reporting Service to Local account (it was Domain with Local Admin) *I deleted encryption key in Reporting Services Configuration (last tab on the list) *and THEN it worked. A: Open Internet Explorer as administrator. Open the reports URL http://machinename/reportservername then in 'folder settings' give permission to the required user groups. A: Old but relevant issue. I solved it for 2012 by logging in to the reporting server and: * *browse to http://localhost/reports/ *Click 'Site Settings' in the top-right (was only available when logging in to the report server) *Go to the 'Security' tab and click 'New Role Assignment' *Added my DOMAIN\USERNAME as a System Administrator Can't say that I'm comfortable with this solution, but I needed something that worked and it worked. Hope this helps someone else.
A: After setting up SSRS 2016, I RDP'd into the server (Windows Server 2012 R2), navigated to the reports URL (https://reports.fakeserver.net/Reports/browse/) and created a folder titled FakeFolder; everything appeared to be working fine. I then disconnected from the server, browsed to the same URL, logged in as the same user, and encountered the error below. The permissions granted to user 'fakeserver\mitchs' are insufficient for performing this operation. Confused, I tried pretty much every solution suggested on this page and still could not create the same behavior both locally and externally when navigating to the URL and authenticating. I then clicked the ellipsis of FakeFolder, clicked Manage, clicked Security (on the left hand side of the screen), and added myself as a user with full permissions. After disconnecting from the server, I browsed to https://reports.fakeserver.net/Reports/browse/FakeFolder, and was able to view the folder's contents without encountering the permissions error. However, when I clicked home I received the permissions error. For my purposes, this was good enough as no one else will ever need to browse to the root URL, so I just made a mental note that whenever I need to make changes in SSRS I first connect to the server and then browse to the Reports URL. A: Problem: Error rsAccessDenied: The permissions granted to user 'User\User' are insufficient for performing this operation. Solution: Click "Folder Setting" > "New Role Assignment". Then type "User\User" in the 'Group or user name' text box. Check the Roles check boxes that you would want the user to have. A: I know this is from a long time ago, but you (or any other newcomers) can resolve this issue by: * *Add the [Domain\User] to the Administrator, IISUser, SQLReportingUser groups *Delete the Encryption Key in the SSRS configuration tools *Re-run the Database Change in the SSRS configuration tools *Open the WebServiceUrl from the SSRS configuration tools (http://localhost/reportserver) *Create the Reports folder manually *Go to the Properties of the created folder and add these roles to security (builtin\users, builtin\Administrator, domain\user) *Deploy your reports and your problem is resolved A: What worked for me was: Open localhost/reports Go to properties tab (SSRS 2008) Security->New Role Assignment Add DOMAIN/USERNAME or DOMAIN/USERGROUP Check Report builder A: This worked for me- -go to the report manager, check site settings-> Security -> New Role Assignment-> add the user -Also, go to Datasets in report manager -> your report dataset -> Security -> New Role Assignment -> add the user with the required role. Thanks! A: I know this is from a long time ago, but it may be helpful to other newcomers. I decided to pass the user name, password and domain while requesting SSRS reports, so I created a class which implements IReportServerCredentials.
public class ReportServerCredentials : IReportServerCredentials
{
    #region Class Members
    private string username;
    private string password;
    private string domain;
    #endregion

    #region Constructor
    public ReportServerCredentials() {}

    public ReportServerCredentials(string username)
    {
        this.Username = username;
    }

    public ReportServerCredentials(string username, string password)
    {
        this.Username = username;
        this.Password = password;
    }

    public ReportServerCredentials(string username, string password, string domain)
    {
        this.Username = username;
        this.Password = password;
        this.Domain = domain;
    }
    #endregion

    #region Properties
    public string Username
    {
        get { return this.username; }
        set { this.username = value; }
    }

    public string Password
    {
        get { return this.password; }
        set { this.password = value; }
    }

    public string Domain
    {
        get { return this.domain; }
        set { this.domain = value; }
    }

    public WindowsIdentity ImpersonationUser
    {
        get { return null; }
    }

    public ICredentials NetworkCredentials
    {
        get { return new NetworkCredential(Username, Password, Domain); }
    }
    #endregion

    bool IReportServerCredentials.GetFormsCredentials(out System.Net.Cookie authCookie, out string userName, out string password, out string authority)
    {
        authCookie = null;
        userName = password = authority = null;
        return false;
    }
}

While calling SSRS reports, use the following piece of code:

ReportViewer rptViewer = new ReportViewer();
string RptUserName = Convert.ToString(ConfigurationManager.AppSettings["SSRSReportUser"]);
string RptUserPassword = Convert.ToString(ConfigurationManager.AppSettings["SSRSReportUserPassword"]);
string RptUserDomain = Convert.ToString(ConfigurationManager.AppSettings["SSRSReportUserDomain"]);
string SSRSReportURL = Convert.ToString(ConfigurationManager.AppSettings["SSRSReportURL"]);
string SSRSReportFolder = Convert.ToString(ConfigurationManager.AppSettings["SSRSReportFolder"]);

IReportServerCredentials reportCredentials = new ReportServerCredentials(RptUserName, RptUserPassword, RptUserDomain);
rptViewer.ServerReport.ReportServerCredentials = reportCredentials;
rptViewer.ServerReport.ReportServerUrl = new Uri(SSRSReportURL);

SSRSReportUser, SSRSReportUserPassword, SSRSReportUserDomain and SSRSReportFolder are defined in the web.config file. A: The report might want to access a DataSource or DataView where the AD user (or AD group) has insufficient access rights. Make sure you check out the following URLs: * *http://REPORTSERVERNAME/Reports/Pages/Folder.aspx?ItemPath=%2fDataSources *http://REPORTSERVERNAME/Reports/Pages/Folder.aspx?ItemPath=%2fDataSets Then choose Folder Settings (or the appropriate individual DataSource or DataSet) and select Security. The user group needs to have the Browser permission. A: Right-click Microsoft BI -> click Run as Administrator -> either open your existing SSRS report or create your new SSRS report, then deploy it. After it has compiled you will receive a web URL for viewing your report. Copy that URL, paste it into a web browser (run as administrator) and you will get your report view. You could use Internet Explorer, which would be essential for the web service. If this is wrong, please forgive me; this is simply what I did, so that's what I've written. A: Make sure you have access configured to the URL http://localhost/reports using the SQL Reporting Services Configuration. To do this: * *Open Reporting Services Configuration Manager -> then connect to the report server instance -> then click on Report Manager URL.
*In the Report Manager URL page, click the Advanced button -> then in the Multiple Identities for Report Manager, click Add. *In the Add a Report Manager HTTP URL popup box, select Host Header and type in: localhost *Click OK to save your changes. *Now start/run Internet Explorer using Run as Administrator... (NOTE: If you don't see the 'Site Settings' link in the top left corner while at http://localhost/reports it is probably because you aren't running IE as an Administrator or you haven't assigned your computer's 'domain\username' to the reporting services roles; see how to do this in the next few steps.) *Then go to: http://localhost/reports (you may have to login with your computer's username and password) *You should now be directed to the Home page of SQL Server Reporting Services here: http://localhost/Reports/Pages/Folder.aspx *From the Home page, click the Properties tab, then click New Role Assignment *In the Group or user name textbox, add the 'domain\username' which was in the error message (in my case, I added: DOUGDELL3-PC\DOUGDELL3 for the 'domain\username'; in your case you can find the domain\username for your computer in the rsAccessDenied error message). *Now check all the checkboxes: Browser, Content Manager, My Reports, Publisher, Report Builder, and then click OK. *Your domain\username should now be assigned to the Roles that will give you access to deploy your reports to the Report Server. If you're using Visual Studio or SQL Server Business Intelligence Development Studio to deploy your reports to your local reports server, you should now be able to. *Hopefully, that helps you solve your Reports Server rsAccessDenied error message... Just to let you know this tutorial was done on a Windows 7 computer with SQL Server Reporting Services 2008. Reference Article: http://techasp.blogspot.co.uk/2013/06/how-to-fix-reporting-services.html A: It's because the user you are running Report Builder as lacks the privilege; just give that user or a group the privilege to run Report Builder. Please visit this article. Or, as a shortcut: * *Start Internet Explorer using "Run as Administrator" *Open http://localhost/reports *Go to properties tab (SSRS 2008) *Security->New Role Assignment *Add DOMAIN/USERNAME or DOMAIN/USERGROUP *Check Report builder A: What worked for me was: * *Go to Site Settings *Click on "Configure site-wide security" *Click "New Role Assignment" button in top bar *Give the new role the following name "Everyone" *Of the available roles, grant it "System User" only *Click "Apply" That should do it. Good luck! A: Just like Nasser, I know this was a while ago but I wanted to post my solution for anyone who has this problem in the future. I had my report set up so that it would use a data connection in a Data Connection library hosted on SharePoint. My issue was that I did not have the data connection 'approved' so that it was usable by other users. Another thing to look for would be to make sure that the permissions on that Data Connection library also allow read access to the selected users. Hope this helps someone sooner or later! A: For SQL Reporting Services 2012 - SP1 and SharePoint 2013. I got the same issue: The permissions granted to user '[AppPoolAccount]' are insufficient for performing this operation. I went into the service application settings, clicked Key Management, then Change key and had it regenerate the key. A: Thanks for sharing. After struggling for 1.5 days, I noticed that the Report Server was configured with the wrong domain IP.
It was configured with a backup domain IP which was offline. I identified this in the user group configuration, where the domain name was not listed. I changed the IP and rebooted the Report Server. Issue resolved.

A: Run BIDS as administrator, despite existing membership of the Administrators group.
{ "language": "en", "url": "https://stackoverflow.com/questions/132812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "108" }
Q: Deterministic dispose of ThreadStatic objects

The ThreadStatic attribute declares a static variable as unique-per-thread. Do you know an easy pattern to correctly dispose of such variables?

What we used before ThreadStatic is a ThreadContextManager. Every thread was allocated a ThreadContext which retained all thread-specific information. We spawned some threads and let them work. Then, when they all finished, we disposed of the ThreadContextManager, which in turn disposed all the contexts if they were IDisposable.

I don't see an immediate way to translate this pattern to ThreadStatic objects. The objects will be disposed of eventually, because the threads die, and so nothing references them. However, we prefer deterministic dispose whenever possible.

Update

I do not really control the threads directly - I'm using Microsoft CCR, which has a ThreadPool that does tasks. When all the tasks are done, I'm disposing the Dispatcher (which holds the threadpool). The thing is - I do not get a chance to do anything "at the end of a thread's main function" - so I can't dispose of things manually at the end of a thread's run. Can I access the thread's static objects from outside the thread somehow?

A: You can still use the equivalent of your ThreadContextManager class to handle the dispose. The spawned threads dispose of this 'manager' object, which in turn takes out all the other thread static objects it knows about.

I prefer to have relatively few thread static objects and use a context object instead. This keeps the thread-specific state in only a few places, and makes patterns like this easier.

Update: to handle the threadpool case, you could create a base 'task' object that is the one that you pass to the thread pool. It can perform any generic initialization your code needs, invoke the 'real' task, and then perform any cleanup needed.
{ "language": "en", "url": "https://stackoverflow.com/questions/132830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Test automation using batch files:

I have the following layout for my test suite:

TestSuite1.cmd:

*Run my program
*Check its return result
*If the return result is not 0, convert the error to textual output and abort the script. If it succeeds, write out success.

In my single .cmd file, I call my program about 10 times with different input. The problem is that the program that I run 10 times takes several hours to run each time.

Is there a way for me to parallelize all 10 runs of my program while still somehow checking the return result and providing a proper output file, and while still using a single .cmd file and a single output file?

A: Assuming they won't interfere with each other by writing to the same files, etc.:

test1.cmd

:: intercept sub-calls.
if "%1"=="test2" goto :test2

:: start sub-calls.
start test1.cmd test2 1
start test1.cmd test2 2
start test1.cmd test2 3

:: wait for sub-calls to complete.
:loop1
if not exist test2_1.flg goto :loop1
:loop2
if not exist test2_2.flg goto :loop2
:loop3
if not exist test2_3.flg goto :loop3

:: output results sequentially (append after the first file).
type test2_1.out >test1.out
del /s test2_1.out
del /s test2_1.flg
type test2_2.out >>test1.out
del /s test2_2.out
del /s test2_2.flg
type test2_3.out >>test1.out
del /s test2_3.out
del /s test2_3.flg
goto :eof

:test2
:: Generate one output file
echo %1 >test2_%1.out
ping -n 31 127.0.0.1 >nul: 2>nul:
:: generate flag file to indicate finished
echo x >test2_%1.flg

This will start three concurrent processes, each of which echoes its sequence number and then waits 30 seconds. All with one cmd file and (eventually) one output file.

A: Running things in parallel in batch files can be done via the 'start' executable/command.

A: Windows: you create a batch file that essentially calls:

start TestSuite1.cmd [TestParams1]
start TestSuite1.cmd [TestParams2]

and so on, which essentially forks new command lines. This would work if the application can handle concurrent users (even if it's the same user), and your TestSuite1.cmd is able to handle parameters.

A: Try the command start; it spawns a new command prompt, and you can send along any commands you want it to run. I'd use this to spawn batch files that run the tests and then append to an output.txt using >>, as such:

testthingie.cmd >> output.txt

A: You will need to start the script with different parameters on different machines, because whatever makes the program take so long for a task (IO, CPU time) will be in even shorter supply when multiple instances of your program run at once. Only exception: the run time is caused by the program putting itself to sleep.
{ "language": "en", "url": "https://stackoverflow.com/questions/132857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I add an action to Visio (2003)

In a Visio ShapeSheet one can add actions. I want to create an action that updates the value of another cell (the position of a control). How can one do that? Does it need a separate macro, or can it be specified directly? And how?

A: You don't need an addon or macro; you can do this in the ShapeSheet.

In the ShapeSheet, look for the Actions section. If you don't find it, right-click and add it. In the Actions section, add a row. Set the cells to something like:

Action = SETF(GetRef(Controls.Row_1),"2 in.")+SETF(GetRef(Controls.Row_1.Y),"2 in.")
Menu = "Move Control"

Change Row_1 to the name of your control row. You can also change "2 in." to a reference to a cell in which you calculate the new position.

To learn more see:

MSDN: Shortcut Menu Commands
Bill Morein's: Meet A Shapesheet Function: Setf
{ "language": "en", "url": "https://stackoverflow.com/questions/132860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: I have a gem installed but require 'gemname' does not work. Why?

The question I'm really asking is why require does not take the name of the gem. Also, in the case that it doesn't, what's the easiest way to find the secret incantation to require the damn thing!?

As an example, if I have memcache-client installed, then I have to require it using

require 'rubygems'
require 'memcache'

A: Also, Rails people should remember to restart the Rails server after installing a gem.

A: My system also doesn't seem to know about RubyGems' existence - unless I tell it to. The 'require' command gets overwritten by RubyGems so it can load gems, but unless you have RubyGems already required, it has no idea how to do that. So if you're writing your own, you can do:

require 'rubygems'
require 'gem-name-here'

If you're running someone else's code, you can do it on the command line with:

ruby -r rubygems script.rb

Also, there's an environment variable Ruby uses to determine what it should load up on startup:

export RUBYOPT=rubygems

(from http://www.rubygems.org/read/chapter/3. The environment variable thing was pointed out to me by Orion Edwards)

(If require 'rubygems' doesn't work for you, however, this advice is of limited help :)

A: There is no standard for what the file you need to include is. However, there are some commonly followed conventions that you can try to make use of:

*Often the file is called the same name as the gem. So require 'mygem' will work.
*Often the file is the only .rb file in the lib subdirectory of the gem. So if you can get the name of the gem (maybe you are iterating through vendor/gems in a pre-2.1 Rails project), then you can inspect #{gemname}/lib for .rb files, and if there is only one, it's a pretty good bet that is the one to require.

If none of that works, then all you can do is look into the gem's directory (which you can find by running gem environment | grep INSTALLATION | awk '{print $4}') and look in the lib directory. You will probably need to read the files and hope there is a comment explaining what to do.

A: You need to include "rubygems" only if you installed the gem using gem. Otherwise, the secret incantation would be to fire up irb and try different combinations. Also, you can pass the -I option to the ruby interpreter so that you include the installation directory of the gem in the LOAD_PATH. Note that $LOAD_PATH is an array, which means you can add directories to it from within your script.

A: The question I'm really asking is why require does not take the name of the gem.

Installing a gem gets the files onto your system. It doesn't make any claims as to what those files will be called. As laurie points out, there are several conventions for how they are named, but there's nothing to enforce that, and many gem authors unfortunately don't stick to them.

Also, in the case that it doesn't, what's the easiest way to find the secret incantation to require the damn thing!?

Read the docs for your gem? I find googling for rdoc gemname will usually find the official rdocs for your gem, which usually show you how to use it. Memcache is perhaps not the best example, as they assume you'll be using it from Rails, and the 'require' will have already been done for you, but most other ones I've seen have examples which show the correct 'require' incantations.

A: The require has to map to a file in Ruby's path.
You can find out where gems are installed by running 'gem environment' (look for INSTALLATION DIRECTORY):

kburton@hypothesisf:~$ gem environment
RubyGems Environment:
  - RUBYGEMS VERSION: 1.2.0
  - RUBY VERSION: 1.8.7 (2008-08-08 patchlevel 71) [i686-linux]
  - INSTALLATION DIRECTORY: /usr/local/ruby/lib/ruby/gems/1.8
  - RUBY EXECUTABLE: /usr/local/ruby/bin/ruby
  - EXECUTABLE DIRECTORY: /usr/local/ruby/bin
  - RUBYGEMS PLATFORMS:
    - ruby
    - x86-linux
  - GEM PATHS:
    - /usr/local/ruby/lib/ruby/gems/1.8
  - GEM CONFIGURATION:
    - :update_sources => true
    - :verbose => true
    - :benchmark => false
    - :backtrace => false
    - :bulk_threshold => 1000
  - REMOTE SOURCES:
    - http://gems.rubyforge.org/
kburton@editconf:~$

You can then look for the particular .rb file you're attempting to require. Additionally, you can print the contents of $: from irb to see the list of paths that ruby will search for modules:

kburton@hypothesis:~$ irb
irb(main):001:0> $:
=> ["/usr/local/ruby/lib/ruby/site_ruby/1.8", "/usr/local/ruby/lib/ruby/site_ruby/1.8/i686-linux", "/usr/local/ruby/lib/ruby/site_ruby", "/usr/local/ruby/lib/ruby/vendor_ruby/1.8", "/usr/local/ruby/lib/ruby/vendor_ruby/1.8/i686-linux", "/usr/local/ruby/lib/ruby/vendor_ruby", "/usr/local/ruby/lib/ruby/1.8", "/usr/local/ruby/lib/ruby/1.8/i686-linux", "."]
irb(main):002:0>

A: I had this problem because I use rvm and was trying to use the wrong version of Ruby. The gem in question needed 1.9.2 and I had set 2.0.0 as my default! Maybe a dumb error, but one that someone else arriving on this page will probably have made.

A: An issue I just ran into was that the actual built gem was not including all the files that it should have. The issue was that there was a syntax mistake in the gemspec, but no errors were thrown during the build. Just adding this here in case anybody else runs into the same issue.

A: It could also be a gem name mismatch: e.g. dummy-spi-0.1.1/lib/spi.rb should be named dummy-spi-0.1.1/lib/dummy-spi.rb; then you can

require 'dummy-spi'

A: I too had this problem since installing OS X Lion, and found that even if I ran the following code I would still get the warning message.

require 'rubygems'
require 'nokogiri'

I tried loads of solutions posted here and on the web, but in the end my workaround was to simply follow the instructions at http://martinisoftware.com/2009/07/31/nokogiri-on-leopard.html to reinstall LibXML & LibXSLT from source, while ensuring the version of LibXML I installed matched the one that was expected by Nokogiri. Once I had done that, the warnings went away.

A: Look at the source of the gem and check its lib directory. If there is no .rb file named after the gem, then you must point to the gem's main .rb file in a subdirectory:

require 'dir/subdir/file'

for /lib/dir/subdir/file.rb.
{ "language": "en", "url": "https://stackoverflow.com/questions/132867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "63" }
Q: How to improve your dev team's Joel Test score?

The Joel Test is a good and famous list checking some requisites every software company should care about. They are:

*Do you use source control?
*Can you make a build in one step?
*Do you make daily builds?
*Do you have a bug database?
*Do you fix bugs before writing new code?
*Do you have an up-to-date schedule?
*Do you have a spec?
*Do programmers have quiet working conditions?
*Do you use the best tools money can buy?
*Do you have testers?
*Do new candidates write code during their interview?
*Do you do hallway usability testing?

My current company hit 0 (I said ZERO) points when I arrived there some months ago. Now we 'proudly' hit 3 - source control, one-step build, and daily builds. But I'm trying to do more (bug database, wiki, quiet conditions, better interviews...)!

What about your company? How many hits? List what you will do to achieve more!

A: My current project: 1 Y, 2 N, 3 N, 4 Y, 5 N, 6 N, 7 N, 8 N, 9 N, 10 Y, 11 N, 12 N

Total score: 3

Guess what, it sucks. The dev team has been pushing hard for 2, 3, and 5, but it never quite gets approved by management. The operational software is so buggy that hack fixes take all the time, and no one is allowed to do these "low priority" type activities. A funny thing is that this project is in a CMMI level 5 company. Goes to show what that is worth.

A:

*Do you use source control?

Of course, I simply cannot understand how companies cannot see the necessity for a decent source control system. We're using SVN. Total: 1 point.

*Can you make a build in one step?

Our build process takes at least 5 steps, and although we have discussed ways to make the magical one-step build happen many times, we did not find the time to implement that scenario yet. Total: 1 point.

*Do you make daily builds?

Yes. As stated before, they're not created automatically, but we have daily builds incorporated into a code-review step we do every day. Total: 2 points.

*Do you have a bug database?

Yes, Mantis is used by our company for this purpose. Total: 3 points.

*Do you fix bugs before writing new code?

Unfortunately not. New features seem to be more important than bugfixes - up until the point when they definitely need to be fixed, which is often way too late. Total: 3 points.

*Do you have an up-to-date schedule?

We update the schedule all the time, using burndown charts to estimate when we'll be finished. Total: 4 points.

*Do you have a spec?

We have some specs, but I wouldn't call our projects spec-complete. There is much room for improvement here at our company. Total: 4 points.

*Do programmers have quiet working conditions?

Yes, our company building resides in a quiet neighbourhood, with no more than 2 or 3 developers in the same room. Total: 5 points.

*Do you use the best tools money can buy?

Nope. Total: 5 points.

*Do you have testers?

We have only recently implemented an entire QA department consisting of three testers. Total: 6 points.

*Do new candidates write code during their interview?

We do not have too much fluctuation in our team, but the interviews contain a couple of coding-relevant questions where the candidates have to write some sample classes etc. Total: 7 points.

*Do you do hallway usability testing?

No, sadly not, but it's a great idea. Total: 7 points.

All-in-all, I think there's a lot of room for improvement, but 7 points might not be the worst score compared to other companies we're working with.
A: Right now, we hit number 5 sometimes if we know about the bug, and 8 99% of the time. Tomorrow, I'll be meeting to push for 1, 4, 5, 6, and 7. I think the only thing you can do is pick one or two and go after those. Set something up, start using them, and show everyone else how much easier/better your life is with them.

A: One (1). We have source control. But it's a small start-up company, so I still have high hopes.

A: Current company, across most projects; some are worse (much worse!):

1:Y, 2:Y, 3:Y, 4:Y, 5:N, 6:N, 7:Usually, 8:N, 9:N, 10:N, 11:N, 12:N

For me the big issues in my current company are 10 and 11. We don't have a dedicated test resource even though we have a development resource of 100+ developers - not one professional tester! Guess what? Testing ain't good. I'm surprised at the quality of the applications we produce - a testament to the quality of some of our development teams.

Our interview process sucks, big style. One developer that we hired recently only had a background in C and embedded code for satellite receivers. Bearing in mind we're Microsoft/.NET/VB6/SQL Server. He had no experience whatsoever with databases of any description or WinForms development. When I quizzed how he got hired, I was told by the technical lead who was on the interview panel that personnel had banned him from asking TECHNICAL questions, because when the guy had been invited to the interview he wasn't told it was going to be a technical interview!

A: I have mixed feelings about #11. On the one hand, I think a few casual interview whiteboard questions can be very misleading. Candidates don't always expect it, and they are nervous and asked to code in front of an audience. Gah! On the other hand, I feel like you could get a feel for how someone would fit in your organization with a short computer quiz.

If you use a temp service with temp-to-hire, does that count if you do code reviews on their early work? Then the job becomes the quiz.

A: The problem with the Joel test is that even hitting 12 doesn't mean that you're working for a good company - although if you're at zero, you're probably not. I currently have a client that is running a seven, which means that in theory they're not doing too badly. The fact is they're still pretty badly screwed up because of other issues (poor architecture, lack of management support, etc.)
{ "language": "en", "url": "https://stackoverflow.com/questions/132872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Best way to switch configuration between Development/UAT/Prod environments in ASP.NET?

I need to switch among 3 different environments when developing my web app - Development, UAT, and Prod. I have different database connections in my configuration files for all 3.

I have seen switching these settings done manually by changing all references and then rebuilding the solution, and also done with preprocessor directives. Is there an easy way to do this based on some variable, so that the configuration doesn't have to be revised when deploying to a new environment every time?

A: I'm a big fan of using MSBuild, in particular the MSBuild Community Tasks (http://msbuildtasks.tigris.org/), and there is an Xslt task to transform the web.config with the appropriate connection string settings, etc. I keep these tasks handy:

<Target Name="Configs">
  <Xslt RootTag=""
        Inputs="web.config"
        Output="Web.$(COMPUTERNAME).config"
        Xsl="web.config.$(COMPUTERNAME).xslt"
        Condition="Exists('web.config.$(COMPUTERNAME).xslt')" />
</Target>

Obviously this isn't 100% what you're after - it's so each dev can have their own web.config. But there's no reason you couldn't use the above principle to have multiple build configurations which apply the right XSLT.

My XSLT looks like this:

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Dev -->
  <xsl:template match="/configuration/connectionStrings/add[@name='MyConnectionString']/@connectionString">
    <xsl:attribute name="connectionString">Data Source=MyServer;Initial Catalog=MyBD;User ID=user;password=pwd</xsl:attribute>
  </xsl:template>

  <xsl:template match="node()">
    <xsl:copy>
      <xsl:apply-templates select="@*"/>
      <xsl:apply-templates/>
    </xsl:copy>
  </xsl:template>

</xsl:stylesheet>

A: Scott Hanselman has suggested one way to do this: http://www.hanselman.com/blog/ManagingMultipleConfigurationFileEnvironmentsWithPreBuildEvents.aspx

A: You can always use NAnt + NAnt.Contrib to modify the web.config during the build. NAnt has xmlpeek and xmlpoke tasks which allow you to update xml files, e.g.:

<xmlpoke
    file="${dist.dir}/Web.config"
    xpath="/configuration/applicationSettings/MyProj.Web.Properties.Settings/setting[@name = 'MyProj_Web_Service']/value"
    value="http://${AppServer}/Service.asmx" />

A: To me it seems that you can benefit from the Visual Studio 2005 Web Deployment Projects. With that, you can tell it to update/modify sections of your web.config file depending on the build configuration. Take a look at this blog entry from Scott Gu for a quick overview/sample.

A: I have adopted the Jean Paul Boodhoo method of changing configurations. The general idea is to have one or more TOKENIZED configuration TEMPLATE files instead of the configuration files themselves. Then you have a build script task that replaces the tokens with values from a SINGLE local properties file. This properties file contains all differences in configuration and is unique per working copy.

This system has worked great for me, and once initially set up, it is a breeze to manage environment changes.
{ "language": "en", "url": "https://stackoverflow.com/questions/132885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What is the best way to generate XML Binding Code from a DTD?

Most Java-XML binding frameworks and code generators need XML Schema Definitions. Can you suggest the best way to generate binding code from a DTD? I know that the XJC in JAXB 2 supports DTD, but it is considered experimental.

In the spirit of Stack Overflow, one suggestion per answer please - to be voted up or down instead of duplicated.

A: Convert the DTD to a schema (lots of online and offline tools available). This step should be lossless. Now use this schema with your favorite Java-XML binding framework and/or code generator that needs schema definitions.
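As a rough illustration of both routes from the command line (file and package names are placeholders; trang is one common DTD-to-schema converter, and the -dtd switch is the experimental XJC mode mentioned in the question):

java -jar trang.jar library.dtd library.xsd    # convert the DTD to an XML Schema
xjc library.xsd -p com.example.binding -d src  # generate JAXB classes from the schema

Or, accepting the experimental status, XJC can also consume the DTD directly:

xjc -dtd library.dtd -p com.example.binding -d src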
{ "language": "en", "url": "https://stackoverflow.com/questions/132895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I split the output from mysqldump into smaller files?

I need to move entire tables from one MySQL database to another. I don't have full access to the second one, only phpMyAdmin access. I can only upload (compressed) sql files smaller than 2MB. But the compressed output from a mysqldump of the first database's tables is larger than 10MB.

Is there a way to split the output from mysqldump into smaller files? I cannot use split(1) since I cannot cat(1) the files back on the remote server.

Or is there another solution I have missed?

Edit

The --extended-insert=FALSE option to mysqldump suggested by the first poster yields a .sql file that can then be split into importable files, provided that split(1) is called with a suitable --lines option. By trial and error I found that bzip2 compresses the .sql files by a factor of 20, so I needed to figure out how many lines of sql code correspond roughly to 40MB.

A: This bash script splits a dumpfile of one database into separate files for each table, using csplit, and names them accordingly:

#!/bin/bash
####
# Split MySQL dump SQL file into one file per table
# based on https://gist.github.com/jasny/1608062
####

#adjust this to your case:
START="/-- Table structure for table/"
# or
#START="/DROP TABLE IF EXISTS/"

if [ $# -lt 1 ] || [[ $1 == "--help" ]] || [[ $1 == "-h" ]] ; then
    echo "USAGE: extract all tables:"
    echo " $0 DUMP_FILE"
    echo "extract one table:"
    echo " $0 DUMP_FILE [TABLE]"
    exit
fi

if [ $# -ge 2 ] ; then
    #extract one table $2
    csplit -s -ftable $1 "/-- Table structure for table/" "%-- Table structure for table \`$2\`%" "/-- Table structure for table/" "%40103 SET TIME_ZONE=@OLD_TIME_ZONE%1"
else
    #extract all tables
    csplit -s -ftable $1 "$START" {*}
fi

[ $? -eq 0 ] || exit

mv table00 head

FILE=`ls -1 table* | tail -n 1`
if [ $# -ge 2 ] ; then
    # cut off all other tables
    mv $FILE foot
else
    # cut off the end of each file
    csplit -b '%d' -s -f$FILE $FILE "/40103 SET TIME_ZONE=@OLD_TIME_ZONE/" {*}
    mv ${FILE}1 foot
fi

for FILE in `ls -1 table*`; do
    NAME=`head -n1 $FILE | cut -d$'\x60' -f2`
    cat head $FILE foot > "$NAME.sql"
done

rm head foot table*

based on https://gist.github.com/jasny/1608062 and https://stackoverflow.com/a/16840625/1069083

A: I wrote a new version of the SQLDumpSplitter, this time with a proper parser, allowing nice things like INSERTs with many values to be split over files, and it is multi-platform now: https://philiplb.de/sqldumpsplitter3/

A: First dump the schema (it surely fits in 2Mb, no?)

mysqldump -d --all-databases

and restore it.

Afterwards, dump only the data in separate insert statements, so you can split the files and restore them without having to concatenate them on the remote server:

mysqldump --all-databases --extended-insert=FALSE --no-create-info=TRUE

A: You don't need ssh access to either of your servers. Just a mysql[dump] client is fine. With the mysql[dump], you can dump your database and import it again. On your PC, you can do something like:

$ mysqldump -u originaluser -poriginalpassword -h originalhost originaldatabase | mysql -u newuser -pnewpassword -h newhost newdatabase

and you're done. :-)

hope this helps

A: You can split an existing dump file with AWK.
It's very quick and simple.

Let's split the table dump by 'tables':

cat dump.sql | awk 'BEGIN {output = "comments"; }
$0 ~ /^CREATE TABLE/ {close(output); output = substr($3,2,length($3)-2); }
{ print $0 >> output }';

Or you can split the dump by 'database':

cat backup.sql | awk 'BEGIN {output="comments";}
$0 ~ /Current Database/ {close(output);output=$4;}
{print $0>>output}';

A: There is this excellent mysqldumpsplitter script, which comes with tons of options when it comes to extracting from mysqldump. I would copy the recipe here, so you can choose your case from:

1) Extract single database from mysqldump:

sh mysqldumpsplitter.sh --source filename --extract DB --match_str database-name

Above command will create sql for the specified database from the specified "filename" sql file and store it in compressed format to database-name.sql.gz.

2) Extract single table from mysqldump:

sh mysqldumpsplitter.sh --source filename --extract TABLE --match_str table-name

Above command will create sql for the specified table from the specified "filename" mysqldump file and store it in compressed format to table-name.sql.gz.

3) Extract tables matching regular expression from mysqldump:

sh mysqldumpsplitter.sh --source filename --extract REGEXP --match_str regular-expression

Above command will create sqls for tables matching the specified regular expression from the specified "filename" mysqldump file and store them in compressed format to individual table-name.sql.gz files.

4) Extract all databases from mysqldump:

sh mysqldumpsplitter.sh --source filename --extract ALLDBS

Above command will extract all databases from the specified "filename" mysqldump file and store them in compressed format to individual database-name.sql.gz files.

5) Extract all tables from mysqldump:

sh mysqldumpsplitter.sh --source filename --extract ALLTABLES

Above command will extract all tables from the specified "filename" mysqldump file and store them in compressed format to individual table-name.sql.gz files.

6) Extract list of tables from mysqldump:

sh mysqldumpsplitter.sh --source filename --extract REGEXP --match_str '(table1|table2|table3)'

Above command will extract tables from the specified "filename" mysqldump file and store them in compressed format to individual table-name.sql.gz files.

7) Extract a database from compressed mysqldump:

sh mysqldumpsplitter.sh --source filename.sql.gz --extract DB --match_str 'dbname' --decompression gzip

Above command will decompress filename.sql.gz using gzip, extract the database named "dbname" from "filename.sql.gz" & store it as out/dbname.sql.gz.

8) Extract a database from compressed mysqldump in an uncompressed format:

sh mysqldumpsplitter.sh --source filename.sql.gz --extract DB --match_str 'dbname' --decompression gzip --compression none

Above command will decompress filename.sql.gz using gzip and extract the database named "dbname" from "filename.sql.gz" & store it as plain sql out/dbname.sql.

9) Extract all tables from mysqldump into a different folder:

sh mysqldumpsplitter.sh --source filename --extract ALLTABLES --output_dir /path/to/extracts/

Above command will extract all tables from the specified "filename" mysqldump file and store the tables in compressed format in individual files, table-name.sql.gz, under /path/to/extracts/. The script will create the folder /path/to/extracts/ if it does not exist.

10) Extract one or more tables from one database in a full dump:

Consider you have a full dump with multiple databases and you want to extract a few tables from one database.
Extract single database:

sh mysqldumpsplitter.sh --source filename --extract DB --match_str DBNAME --compression none

Extract all tables:

sh mysqldumpsplitter.sh --source out/DBNAME.sql --extract REGEXP --match_str "(tbl1|tbl2)"

though we can use another option to do this in a single command as follows:

sh mysqldumpsplitter.sh --source filename --extract DBTABLE --match_str "DBNAME.(tbl1|tbl2)" --compression none

Above command will extract both tbl1 and tbl2 from the DBNAME database in sql format under the folder "out" in the current directory.

You can extract a single table as follows:

sh mysqldumpsplitter.sh --source filename --extract DBTABLE --match_str "DBNAME.(tbl1)" --compression none

11) Extract all tables from a specific database:

mysqldumpsplitter.sh --source filename --extract DBTABLE --match_str "DBNAME.*" --compression none

Above command will extract all tables from the DBNAME database in sql format and store them under the "out" directory.

12) List content of the mysqldump file:

mysqldumpsplitter.sh --source filename --desc

Above command will list databases and tables from the dump file.

You may later choose to load the files:

zcat filename.sql.gz | mysql -uUSER -p -hHOSTNAME

*Also, once you extract a single table which you think is still too big, you can use the linux split command with a number of lines to further split the dump:

split -l 10000 filename.sql

*That said, if that is your need (coming up more often), you might consider using mydumper, which actually creates individual dumps you won't need to split!

A: You say that you don't have access to the second server. But if you have shell access to the first server, where the tables are, you can split your dump by table:

for T in `mysql -N -B -e 'show tables from dbname'`; \
do echo $T; \
mysqldump [connecting_options] dbname $T \
| gzip -c > dbname_$T.dump.gz ; \
done

This will create a gzip file for each table.

Another way of splitting the output of mysqldump into separate files is using the --tab option:

mysqldump [connecting options] --tab=directory_name dbname

where directory_name is the name of an empty directory. This command creates a .sql file for each table, containing the CREATE TABLE statement, and a .txt file, containing the data, to be restored using LOAD DATA INFILE. I am not sure if phpMyAdmin can handle these files with your particular restriction, though.

A: Late reply, but I was looking for the same solution and came across the following code from the below website:

for I in $(mysql -e 'show databases' -s --skip-column-names); do mysqldump $I | gzip > "$I.sql.gz"; done

http://www.commandlinefu.com/commands/view/2916/backup-all-mysql-databases-to-individual-files

A: You can dump individual tables with mysqldump by running

mysqldump database table1 table2 ... tableN

If none of the tables are too large, that will be enough. Otherwise, you'll have to start splitting the data in the larger tables.

A: I would recommend the utility bigdump; you can grab it here: http://www.ozerov.de/bigdump.php. It staggers the execution of the dump, in as close as it can manage to your limit, executing whole lines at a time.

A: Try this: https://github.com/shenli/mysqldump-hugetable It will dump data into many small files. Each file contains less than or equal to MAX_RECORDS records. You can set this parameter in env.sh.

A: I wrote a Python script to split a single large sql dump file into separate files, one for each CREATE TABLE statement. It writes the files to a new folder that you specify.
If no output folder is specified, it creates a new folder with the same name as the dump file, in the same directory. It works line-by-line, without reading the file into memory first, so it is great for large files.

https://github.com/kloddant/split_sql_dump_file

import sys, re, os

if sys.version_info[0] < 3:
    raise Exception("""Must be using Python 3. Try running "C:\\Program Files (x86)\\Python37-32\\python.exe" split_sql_dump_file.py""")

sqldump_path = input("Enter the path to the sql dump file: ")

if not os.path.exists(sqldump_path):
    raise Exception("Invalid sql dump path. {sqldump_path} does not exist.".format(sqldump_path=sqldump_path))

# Trim the .sql suffix explicitly; str.rstrip('.sql') would strip a set of
# characters from the end, not the suffix, and can mangle the folder name.
default_output = sqldump_path[:-4] if sqldump_path.endswith('.sql') else sqldump_path
output_folder_path = input("Enter the path to the output folder: ") or default_output

if not os.path.exists(output_folder_path):
    os.makedirs(output_folder_path)

table_name = None
output_file_path = None
smallfile = None

with open(sqldump_path, 'rb') as bigfile:
    for line_number, line in enumerate(bigfile):
        line_string = line.decode("utf-8")
        if 'CREATE TABLE' in line_string.upper():
            match = re.match(r"^CREATE TABLE (?:IF NOT EXISTS )?`(?P<table>\w+)` \($", line_string)
            if match:
                table_name = match.group('table')
                print(table_name)
                output_file_path = "{output_folder_path}/{table_name}.sql".format(output_folder_path=output_folder_path.rstrip('/'), table_name=table_name)
                if smallfile:
                    smallfile.close()
                smallfile = open(output_file_path, 'wb')
        if not table_name:
            continue
        smallfile.write(line)
    smallfile.close()

A: Try csplit(1) to cut up the output into the individual tables based on regular expressions (matching the table boundary, I would think).

A: Check out SQLDumpSplitter 2; I just used it to split a 40MB dump with success. You can get it at the link below: sqldumpsplitter.com

Hope this helps.

A: This script should do it:

#!/bin/sh

#edit these
USER=""
PASSWORD=""
MYSQLDIR="/path/to/backupdir"

MYSQLDUMP="/usr/bin/mysqldump"
MYSQL="/usr/bin/mysql"

echo - Dumping tables for each DB
databases=`$MYSQL --user=$USER --password=$PASSWORD -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema)"`
for db in $databases; do
    echo - Creating "$db" DB
    mkdir $MYSQLDIR/$db
    chmod -R 777 $MYSQLDIR/$db
    for tb in `$MYSQL --user=$USER --password=$PASSWORD -N -B -e "use $db ;show tables"`
    do
        echo -- Creating table $tb
        $MYSQLDUMP --opt --delayed-insert --insert-ignore --user=$USER --password=$PASSWORD $db $tb | bzip2 -c > $MYSQLDIR/$db/$tb.sql.bz2
    done
    echo
done

A: I've created MySQLDumpSplitter.java which, unlike bash scripts, works on Windows. It's available here: https://github.com/Verace/MySQLDumpSplitter.

A: A clarification on the answer of @Vérace: I especially like the interactive method; you can split a large file in Eclipse. I have tried a 105GB file on Windows successfully:

Just add the MySQLDumpSplitter library to your project: http://dl.bintray.com/verace/MySQLDumpSplitter/jar/

Quick note on how to import:

- In Eclipse, right-click on your project --> Import
- Select "File System" and then "Next"
- Browse the path of the jar file and press "Ok"
- Select (tick) the "MySQLDumpSplitter.jar" file and then "Finish"
- It will be added to your project and shown in the project folder in Package Explorer in Eclipse
- Double-click on the jar file in Eclipse (in Package Explorer)
- The "MySQL Dump file splitter" window opens, where you can specify the path of your dump file and proceed with the split.
{ "language": "en", "url": "https://stackoverflow.com/questions/132902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54" }
Q: Force Https in Websphere 6.1

I was wondering how I can force a user who has requested a page using http to use the secure https version. I am using Websphere 6.1 as my application server and RAD 7 as my development environment.

Thanks, Damien

A: One way that you could do this within your application, rather than in the server configuration, would be to use a Filter (specified in your web.xml) to check if ServletRequest.getScheme() is "http" or "https", and redirect the user to the appropriate URL (using HttpServletResponse.sendRedirect(String url)).

A: You can add the following entry in your web.xml, and it will make sure all requests are converted to https:

<!--********************************
 *** SSL Security Constraint ***
 *****************************-->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>SSL</web-resource-name>
    <url-pattern>/*</url-pattern>
  </web-resource-collection>
  <user-data-constraint>
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>
<!--********************************* -->

A: Websphere is not a complete http server. It does have 'Transport Chains', which act like an HTTP server. Normally you will put an HTTP server in front. IBM provides IHS (IBM HTTP Server), which is a lightly modified Apache HTTP Server. The HTTP server is configured with the httpd.conf file. There you add redirects in such a way that requests for http are redirected to https.

Maybe you can give some detailed information about your infrastructure.

A: I agree. I think using a Filter will achieve this. Here is a Filter I wrote for load balancing and port redirection, but it should be easy to figure out how to edit it to fit your needs (configuration fields such as loadBalancerHostName are initialized elsewhere in the original code):

public class RequestWrapperFilter implements Filter {

    public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse,
            FilterChain filterChain) throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) servletRequest;
        HttpServletResponse httpResponse = (HttpServletResponse) servletResponse;
        String requestWrapperClassName = (String) (httpRequest
                .getAttribute(LoadBalancerRequestWrapper.class.getName()));
        String initiatingServerName = httpRequest.getServerName();

        if (requestWrapperClassName == null
                && initiatingServerName.equals(loadBalancerHostName)) {
            httpRequest = new LoadBalancerRequestWrapper(AuthenticationUtil
                    .getHttpServletRequest(httpRequest));
        }

        filterChain.doFilter(httpRequest, httpResponse);
    }

    /**
     * The custom implementation of the request wrapper. It simply overrides the
     * getScheme() and getServerPort() methods to perform the redirect
     * filtering.
     */
    private static class LoadBalancerRequestWrapper extends HttpServletRequestWrapper {

        /**
         * Default Constructor. Simply declares the Wrapper as injected.
         *
         * @param httpServletRequest
         *            the app-server HttpServletRequest.
         */
        public LoadBalancerRequestWrapper(HttpServletRequest httpServletRequest) {
            super(httpServletRequest);
        }

        /**
         * The overridden scheme.
         */
        public final String getScheme() {
            if (loadBalancerHttpScheme.equals(EMPTY_STRING)) {
                return super.getScheme();
            }
            return loadBalancerHttpScheme;
        }
    }
}
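To make the Filter approach from the first answer concrete, here is a minimal sketch (the class name and URL handling are illustrative only; in particular, a real deployment would also have to map the http port to the https port, which this sketch glosses over):

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.*;

public class HttpsRedirectFilter implements Filter {

    public void init(FilterConfig filterConfig) throws ServletException {}

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        HttpServletResponse httpResponse = (HttpServletResponse) response;
        if ("http".equals(httpRequest.getScheme())) {
            // Rebuild the requested URL with the https scheme and redirect to it.
            StringBuffer url = httpRequest.getRequestURL();
            String query = httpRequest.getQueryString();
            String target = "https" + url.substring("http".length())
                    + (query != null ? "?" + query : "");
            httpResponse.sendRedirect(target);
            return;
        }
        chain.doFilter(request, response);
    }

    public void destroy() {}
}

The filter would then be mapped to /* in web.xml like any other filter.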
{ "language": "en", "url": "https://stackoverflow.com/questions/132921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Why does Castle Windsor hold onto transient objects?

Recently I noticed my application appears to be eating memory that never gets released. After profiling with CLRProfiler, I've found that the Castle Windsor container I'm using is holding onto objects. These objects are declared with the lifestyle="transient" attribute in the config xml.

I've found that if I put an explicit call to IWindsorContainer.Release(hangingObject), it will drop its references. This is causing a problem though; I wasn't expecting that with a transient-lifestyle object Castle Windsor would keep a reference and effectively create a leak. It's going to be a rather mundane and error-prone task going around inserting explicit Release calls in all the appropriate places.

Have you seen this problem, and do you have any suggestions for how to get around it?

A: I think the answers here are missing a vital point - that this behavior is configurable out of the box via release policies - check out the documentation on the Castle project site here.

In many scenarios, especially where your container exists for the lifetime of the hosting application and where transient components really don't need to be tracked (because you're handling disposal in your calling code or in a component that's been injected with the service), you can just set the release policy to the NoTrackingReleasePolicy implementation and be done with it.

Prior to Castle v1.0, I believe Component Burden will be implemented/introduced, which will help alleviate some of these issues around disposal of injected dependencies, etc.

Edit: Check out the following posts for more discussion of component burden:

The Component Burden - Davy Brions

Also, component burden is implemented in the official 2.0 release of the Windsor Container.

A: One thing to note is that this seems to have been fixed in the Castle trunk. In r5475, Hammett changed the default release policy in MicroKernel to LifecycledComponentsReleasePolicy.

A: You can set a lifestyle of singleton or transient on objects in the container, though. Singleton objects I understand should last the life of the application, but I don't understand the usefulness of this behaviour being the same for transient ones!

Custom lifestyles can be created by implementing ILifestyleManager. Maybe it's possible to implement this suitably to create a ReallyTransient lifestyle type!
{ "language": "en", "url": "https://stackoverflow.com/questions/132940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: How do I set a task to run every so often?

How do I have a script run every, say, 30 minutes? I assume there are different ways for different OSs. I'm using OS X.

A: Syntax of Cron

You can use cron to schedule tasks.

crontab -e

A job is specified in the following format:

* * * * * command to execute
│ │ │ │ │
│ │ │ │ └─── day of week (0 - 6) (0 to 6 are Sunday to Saturday, or use names; 7 is Sunday, the same as 0)
│ │ │ └──────── month (1 - 12)
│ │ └───────────── day of month (1 - 31)
│ └────────────────── hour (0 - 23)
└─────────────────────── min (0 - 59)

Example:

0 12 * * * cd ~/backupfolder && ./backup.sh

Translation - Run every day at noon.

Example:

45 * * * * cd ~/backupfolder && ./backup.sh

Translation - Run once an hour, every day, at 45 minutes past the hour.

Registering the job

You can run your script as root:

sudo crontab -e

Once you have installed your cron tasks, you can use crontab -l to list your tasks:

crontab -l

If you want to know more about cron schedule expressions, you can access https://crontab.guru

https://ole.michelsen.dk/blog/schedule-jobs-with-crontab-on-mac-osx.html

A: As Mecki pointed out, launchd would be the way to go with this. There's a GUI interface for launchd called Lingon that you might want to check out, as opposed to editing the launchd files by hand:

Lingon is a graphical user interface for creating and editing launchd configuration files for Mac OS X Leopard 10.5. [snip...] Editing a configuration file is easier than ever in this version, and it has two different modes: Basic Mode, which has the most common settings readily available in a very simple interface, and Expert Mode, where you can add all settings either directly in the text or insert them through a menu.

A: You could use the very convenient plist generator: http://launched.zerowidth.com/ (no need to buy anything…). It will give you a shell one-liner to register a new scheduled job with the already recommended launchd.

A: Mac OS has an Automator tool, which is the equivalent of Task Scheduler on Windows. Using Automator you can schedule tasks on a daily basis and link the task with a recurring calendar event to run scripts at a specified time daily. Refer to the link on how to run scripts on a daily basis in Mac OS.

A: For AppleScripts, I set up a special iCal calendar and use alarms to run them periodically. For command-line tools, I use launchd.

A: Just use launchd. It is a very powerful launcher system, and meanwhile it is the standard launcher system for Mac OS X (current OS X versions wouldn't even boot without it). For those who are not familiar with launchd (or with OS X in general), it is like a crossbreed between init, cron, at, SysVinit (init.d), inetd, upstart and systemd - borrowing concepts from all these projects, yet also offering things you may not find elsewhere.

Every service/task is a file. The location of the file depends on the questions: "When is this service supposed to run?" and "Which privileges will the service require?"

System tasks go to /Library/LaunchDaemons/ if they shall run no matter whether any user is logged in to the system or not. They will be started with "root" privileges.

If they shall only run if any user is logged in, they go to /Library/LaunchAgents/ and will be executed with the privileges of the user that just logged in.

If they shall run only if you are logged in, they go to ~/Library/LaunchAgents/, where ~ is your HOME directory. These tasks will run with your privileges, just as if you had started them yourself by command line or by double-clicking a file in Finder.
Note that there also exist /System/Library/LaunchDaemons and /System/Library/LaunchAgents, but as usual, everything under /System is managed by OS X. You shall not place any files there, and you shall not change any files there, unless you really know what you are doing. Messing around in the System folder can make your system unusable (get it into a state where it will even refuse to boot up again). These are the directories where Apple places the launchd tasks that get your system up and running during boot, automatically start services as required, perform system maintenance tasks, and so on.

Every launchd task is a file in PLIST format. It should have reverse domain name notation; e.g. you can name your task com.example.my-fancy-task.plist. This plist can have various options and settings. Writing one by hand is not for beginners, so you may want to get a tool like LaunchControl (commercial, $18) or Lingon (commercial, $14.99) to create your tasks. Just as an example, it could look like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.my-fancy-task</string>
    <key>OnDemand</key>
    <true/>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/sh</string>
        <string>/usr/local/bin/my-script.sh</string>
    </array>
    <key>StartInterval</key>
    <integer>1800</integer>
</dict>
</plist>

This agent will run the shell script /usr/local/bin/my-script.sh every 1800 seconds (every 30 minutes). You can also have a task run on certain dates/times (basically launchd can do everything cron can do), or you can even disable "OnDemand", causing launchd to keep the process permanently running (if it quits or crashes, launchd will immediately restart it). You can even limit how many resources a process may use.

Update: Even though OnDemand is still supported, it is deprecated. The new setting is named KeepAlive, which makes much more sense. It can have a boolean value, in which case it is the exact opposite of OnDemand (setting it to false behaves as if OnDemand were true, and the other way round). The great new feature is that it can also have a dictionary value instead of a boolean one. If it has a dictionary value, you have a couple of extra options that give you more fine-grained control over the circumstances under which the task shall be kept alive. E.g. it is only kept alive as long as the program terminated with an exit code of zero, only as long as a certain file/directory on disk exists, only if another task is also alive, or only if the network is currently up.

Also, you can manually enable/disable tasks via the command line:

launchctl <command> <parameter>

command can be load or unload, to load a plist or unload it again, in which case parameter is the path to the file. Or command can be start or stop, to just start or stop such a task, in which case parameter is the label (com.example.my-fancy-task). Other commands and options exist as well.

Update: Even though load, unload, start, and stop do still work, they are legacy now. The new commands are bootstrap, bootout, enable, and disable, with slightly different syntax and options. One big difference is that disable is persistent, so once a service has been disabled, it will stay disabled, even across reboots, until you enable it again. Also, you can use kickstart to run a task immediately, regardless of how it has been configured to run.
The main difference between the new and the old commands is that they separate tasks by "domain". The system has a domain, and so has every user. So equally labeled tasks may exist in different domains, and launchctl can still distinguish them. Even different login and different UI sessions of the same user have their own domain (e.g. the same user may once be logged in locally and once remotely via SSH, and different tasks may run for either session), and so does every single running process. Thus, instead of com.example.my-fancy-task, you would now use system/com.example.my-fancy-task or user/501/com.example.my-fancy-task to identify a task, with 501 being the user ID of a specific user.

See documentation of the plist format and of the launchctl command line tool.

A: On MacOSX, you have at least the following options:

*Recurring iCal alarm with a "Run Script" action
*launchd
*cron (link1, link2)

From personal experience, cron is the most reliable. When I tested, launchd had a number of bugs and quirks. iCal alarms only run when you are logged in (but that might be something you prefer).
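For the specific "every 30 minutes" case from the question: with the cron option, the crontab line would use the step syntax (the script path is just a placeholder):

*/30 * * * * /path/to/your/script.sh

With launchd, the equivalent is the StartInterval key shown in the plist example above, whose value of 1800 seconds is exactly the 30-minute case.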
{ "language": "en", "url": "https://stackoverflow.com/questions/132955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "114" }
Q: What is the Windows version of cron?

A Google search turned up software that performs the same functions as cron, but nothing built into Windows.

I'm running Windows XP Professional, but advice for any version of Windows would be potentially helpful to someone.

Is there also a way to invoke this feature (which, based on answers, is called the Task Scheduler) programmatically or via the command line?

A: Zcron is available free for personal use.

A: The closest equivalent is the Windows Scheduled Tasks (Control Panel -> Scheduled Tasks), though they are a far, far cry from cron. The biggest difference (to me) is that they require a user to be logged into the Windows box, and a user account (with password and all), which makes things a nightmare if your local security policy requires password changes periodically. I also think it is less flexible than cron as far as setting intervals for items to run.

A: Is there also a way to invoke this feature (which, based on answers, is called the Task Scheduler) programmatically [...]?

Task Scheduler API on MSDN.

A: If you prefer good ol' cron, CRONw is the way to go.

Supported systems:

*Windows 2000 (any version) - works
*Windows XP (SP 2) - works
*Windows Server 2003 - works
*Windows NT 4 (SP 6) - should work, but not tested
*Windows 3.11, Windows 95, Windows 98, Windows ME, Windows XP below SP2 - not supported by design

A: Not exactly a Windows version; however, you can use Cygwin's crontab. For install instructions, see here.

A: There is NNCron for Windows. It can schedule jobs to be run periodically.

A: For the original question, asking about Windows XP (and Windows 7): Windows Task Scheduler.

For command-line usage, you can schedule with the AT command.

For newer Microsoft OS versions, Windows Server 2012 / Windows 8, look at the schtasks command line utility.

If using PowerShell, the Scheduled Tasks Cmdlets in Windows PowerShell are made for scripting.

A: The Windows "AT" command is very similar to cron. It is available through the command line.

A: In addition to Windows Task Scheduler, you also have 'at' on Windows. I'm not sure how it differs from Task Scheduler besides the fact that it has a command line interface.

A: Use the Windows Task Scheduler to schedule tasks at given times and dates.

A: pycron is a close match on Windows. The following entries are supported:

1 Minute (0-59)
2 Hour (0-23)
3 Day of month (1-31)
4 Month (1-12, Jan, Feb, etc)
5 Day of week (0-6; 0 = Sunday, 1 = Monday, etc., or Sun, Mon, etc.)
6 User that the command will run as
7 Command to execute

A: The 'at' command. "The AT command schedules commands and programs to run on a computer at a specified time and date. The Schedule service must be running to use the AT command."

A: The AT command is now deprecated; you can use SCHTASKS instead.

A:

*You can use the Scheduled Tasks API in PowerShell along with a config.json file for parameter input. I guess the minimum limitation is 5 minutes. A sample tutorial for very basic Scheduled Tasks creation via APIs.
*You can use schtasks.exe via cmd too. I could see the minute modifier is limited to 1 minute on executing schtasks.exe /Create /?. Anyway, AT is now deprecated.

Anyway, I am working on a tool to behave like CRON. I will update here if it is successful.

A: Check out the excellent Cronical program at https://github.com/mgefvert/Cronical

It is a .NET program that reads a text file with unix-like cron lines. Very convenient to use. It will email stdout output, just like unix cron. It even supports acting as the service runner.
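As a concrete flavour of the schtasks route mentioned in several answers above (task name and command path are placeholders), a task that runs every 30 minutes could be created, inspected, and removed like this:

schtasks /Create /SC MINUTE /MO 30 /TN "MyCronLikeTask" /TR "C:\scripts\mytask.bat"
schtasks /Query /TN "MyCronLikeTask"
schtasks /Delete /TN "MyCronLikeTask" /F

/SC sets the schedule type and /MO the modifier; see schtasks /? for the full set of switches.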
{ "language": "en", "url": "https://stackoverflow.com/questions/132971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "295" }
Q: How do you force a maven MOJO to be executed only once at the end of a build? I have a MOJO I would like executed once, and once only after the test phase of the last project in the reactor to run. Using: if (!getProject().isExecutionRoot()) { return ; } at the start of the execute() method means my mojo gets executed once, however at the very beginning of the build - before all other child modules. A: The best solution is relying on a lifecycle extension by extending your class from org.apache.maven.AbstractMavenLifecycleParticipant (see also https://maven.apache.org/examples/maven-3-lifecycle-extensions.html) which got a method afterSessionEnd added with https://issues.apache.org/jira/browse/MNG-5640 (fixed in Maven 3.2.2). A: There is a Sonatype blog entry that describes how to do this. The last project to be run will be the root project as it will contain module references to the rest. Thereforec you need a test in your mojo to check if the current project's directory is the same as the directory from where Maven was launched: boolean result = mavenSession.getExecutionRootDirectory().equalsIgnoreCase(basedir.toString()); In the referenced entry there is a pretty comprehensive example of how to use this in your mojo. A: The best solution I have found for this is: /** * The projects in the reactor. * * @parameter expression="${reactorProjects}" * @readonly */ private List reactorProjects; public void execute() throws MojoExecutionException { // only execute this mojo once, on the very last project in the reactor final int size = reactorProjects.size(); MavenProject lastProject = (MavenProject) reactorProjects.get(size - 1); if (lastProject != getProject()) { return; } // do work ... } This appears to work on the small build hierarchies I've tested with. A: I think you might get what you need if you use the @aggregator tag and bind your mojo to one of the following lifecycle phases: * *prepare-package *package *pre-integration-test *integration-test *post-integration-test *verify *install *deploy A: The solution with using session.getEventDispatcher() no longer works since Maven 3.x. The whole eventing has been removed in this commit: https://github.com/apache/maven/commit/505423e666b9a8814e1c1aa5d50f4e73b8d710f4 A: Check out maven-monitor API You can add an EventMonitor to the dispatcher, and then trap the END of the 'reactor-execute' event: this is dispatched after everything is completed, i.e. even after you see the BUILD SUCCESSFUL/FAILED output. Here's how I used it recently to print a summary right at the end: /** * The Maven Project Object * * @parameter expression="${project}" * @required * @readonly */ protected MavenProject project; /** * The Maven Session. * * @parameter expression="${session}" * @required * @readonly */ protected MavenSession session; ... @Override public void execute() throws MojoExecutionException, MojoFailureException { //Register the event handler right at the start only if (project.isExecutionRoot()) registerEventMonitor(); ... 
}

/**
 * Register an {@link EventMonitor} with Maven so that we can respond to certain lifecycle events
 */
protected void registerEventMonitor() {
    session.getEventDispatcher().addEventMonitor(
        new EventMonitor() {
            @Override
            public void endEvent(String eventName, String target, long arg2) {
                if (eventName.equals("reactor-execute"))
                    printSummary();
            }

            @Override
            public void startEvent(String eventName, String target, long arg2) {}

            @Override
            public void errorEvent(String eventName, String target, long arg2, Throwable arg3) {}
        }
    );
}

/**
 * Print summary at end
 */
protected void printSummary() {
    ...
}

A: You can use the MavenSession to solve this:

public boolean isThisTheLastProject() {
    return session.getProjectDependencyGraph().getSortedProjects()
            .get(session.getProjectDependencyGraph().getSortedProjects().size() - 1)
            .getArtifactId().equalsIgnoreCase(project.getArtifactId());
}

A: Normally, this is a matter of configuration. You might have to set up a project just for the mojo and make it dependent on all of the other projects. Or you could force one of the child projects to be last by making it dependent on all of the other children.
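The lifecycle-extension answer above gives no code, so here is a minimal, hedged sketch of what it could look like; the class name, hint, and log message are invented for illustration, and the @Component annotation comes from Plexus:

import org.apache.maven.AbstractMavenLifecycleParticipant;
import org.apache.maven.MavenExecutionException;
import org.apache.maven.execution.MavenSession;
import org.codehaus.plexus.component.annotations.Component;

// Runs once, after every project in the reactor has finished (Maven 3.2.2+).
@Component(role = AbstractMavenLifecycleParticipant.class, hint = "run-once-at-end")
public class RunOnceAtEndParticipant extends AbstractMavenLifecycleParticipant {

    @Override
    public void afterSessionEnd(MavenSession session) throws MavenExecutionException {
        // Do the one-time, end-of-build work here.
        System.out.println("Build finished for " + session.getTopLevelProject().getArtifactId());
    }
}

For Maven to pick it up, the class has to be packaged as a build extension (or live in a plugin declared with <extensions>true</extensions>).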
Q: Is there a difference between "==" and "is"?

My Google-fu has failed me. In Python, are the following two tests for equality equivalent?

n = 5
# Test one.
if n == 5:
    print 'Yay!'
# Test two.
if n is 5:
    print 'Yay!'

Does this hold true for objects where you would be comparing instances (a list say)? Okay, so this kind of answers my question:

L = []
L.append(1)
if L == [1]: print 'Yay!'  # Holds true, but...
if L is [1]: print 'Yay!'  # Doesn't.

So == tests value where is tests to see if they are the same object?

A: Your answer is correct. The is operator compares the identity of two objects. The == operator compares the values of two objects. An object's identity never changes once it has been created; you may think of it as the object's address in memory. You can control comparison behaviour of object values by defining a __cmp__ method or a rich comparison method like __eq__.

A: Is there a difference between == and is in Python? Yes, they have a very important difference.

==: check for equality - the semantics are that equivalent objects (that aren't necessarily the same object) will test as equal. As the documentation says:

    The operators <, >, ==, >=, <=, and != compare the values of two objects.

is: check for identity - the semantics are that the object (as held in memory) is the object. Again, the documentation says:

    The operators is and is not test for object identity: x is y is true if and only if x and y are the same object. Object identity is determined using the id() function. x is not y yields the inverse truth value.

Thus, the check for identity is the same as checking for the equality of the IDs of the objects. That is, a is b is the same as id(a) == id(b), where id is the builtin function that returns an integer that "is guaranteed to be unique among simultaneously existing objects" (see help(id)), and where a and b are any arbitrary objects.

Other Usage Directions

You should use these comparisons for their semantics. Use is to check identity and == to check equality.

So in general, we use is to check for identity. This is usually useful when we are checking for an object that should only exist once in memory, referred to as a "singleton" in the documentation.

Use cases for is include:
* None
* enum values (when using Enums from the enum module)
* usually modules
* usually class objects resulting from class definitions
* usually function objects resulting from function definitions
* anything else that should only exist once in memory (all singletons, generally)
* a specific object that you want by identity

Usual use cases for == include:
* numbers, including integers
* strings
* lists
* sets
* dictionaries
* custom mutable objects
* other builtin immutable objects, in most cases

The general use case, again, for == is that the object you want may not be the same object; instead it may be an equivalent one.

PEP 8 directions

PEP 8, the official Python style guide for the standard library, also mentions two use-cases for is:

    Comparisons to singletons like None should always be done with is or is not, never the equality operators.

    Also, beware of writing if x when you really mean if x is not None - e.g. when testing whether a variable or argument that defaults to None was set to some other value. The other value might have a type (such as a container) that could be false in a boolean context!

Inferring equality from identity

If is is true, equality can usually be inferred - logically, if an object is itself, then it should test as equivalent to itself.
In most cases this logic is true, but it relies on the implementation of the __eq__ special method. As the docs say,

    The default behavior for equality comparison (== and !=) is based on the identity of the objects. Hence, equality comparison of instances with the same identity results in equality, and equality comparison of instances with different identities results in inequality. A motivation for this default behavior is the desire that all objects should be reflexive (i.e. x is y implies x == y).

and in the interests of consistency, recommends:

    Equality comparison should be reflexive. In other words, identical objects should compare equal: x is y implies x == y

We can see that this is the default behavior for custom objects:

>>> class Object(object): pass
>>> obj = Object()
>>> obj2 = Object()
>>> obj == obj, obj is obj
(True, True)
>>> obj == obj2, obj is obj2
(False, False)

The contrapositive is also usually true - if things test as not equal, you can usually infer that they are not the same object. Since tests for equality can be customized, this inference does not always hold true for all types.

An exception

A notable exception is nan - it always tests as not equal to itself:

>>> nan = float('nan')
>>> nan
nan
>>> nan is nan
True
>>> nan == nan  # !!!!!
False

Checking for identity can be a much quicker check than checking for equality (which might require recursively checking members).

But it cannot be substituted for equality where you may find more than one object as equivalent.

Note that comparing lists and tuples for equality assumes that objects with the same identity are equal (because identity is a fast check). This can create contradictions if the logic is inconsistent - as it is for nan:

>>> [nan] == [nan]
True
>>> (nan,) == (nan,)
True

A Cautionary Tale:

The question is attempting to use is to compare integers. You shouldn't assume that an instance of an integer is the same instance as one obtained by another reference. This story explains why.

A commenter had code that relied on the fact that small integers (-5 to 256 inclusive) are singletons in Python, instead of checking for equality.

    Wow, this can lead to some insidious bugs. I had some code that checked if a is b, which worked as I wanted because a and b are typically small numbers. The bug only happened today, after six months in production, because a and b were finally large enough to not be cached. - gwg

It worked in development. It may have passed some unit tests. And it worked in production - until the code checked for an integer larger than 256, at which point it failed in production. This is a production failure that could have been caught in code review or possibly with a style-checker.

Let me emphasize: do not use is to compare integers.

A: As others in this post have answered the question about the difference between == and is for comparing objects or variables in detail, I would mainly emphasize the comparison between is and == for strings, which can give different results, and I would urge programmers to use them carefully.
For string comparison, make sure to use == instead of is:

str = 'hello'
if (str is 'hello'):
    print ('str is hello')
if (str == 'hello'):
    print ('str == hello')

Out:

str is hello
str == hello

But in the below example == and is will get different results:

str2 = 'hello sam'
if (str2 is 'hello sam'):
    print ('str2 is hello sam')
if (str2 == 'hello sam'):
    print ('str2 == hello sam')

Out:

str2 == hello sam

Conclusion and analysis: use is carefully when comparing strings. is compares objects, and in Python 3+ every variable, a string included, is an object, so let's see what happened in the examples above. In Python there is an id function that returns a constant that is unique for an object during its lifetime. This id is what the back-end of the Python interpreter uses to compare two objects with the is keyword.

str = 'hello'
id('hello')
> 140039832615152
id(str)
> 140039832615152

But:

str2 = 'hello sam'
id('hello sam')
> 140039832615536
id(str2)
> 140039832615792

A: In a nutshell, is checks whether two references point to the same object or not. == checks whether two objects have the same value or not.

a = [1,2,3]
b = a        # a and b point to the same object
c = list(a)  # c points to a different object

if a == b:
    print('#')     # output: #
if a is b:
    print('##')    # output: ##
if a == c:
    print('###')   # output: ###
if a is c:
    print('####')  # no output, as c and a point to different objects

A: == determines if the values are equal, while is determines if they are the exact same object.

A: There is a simple rule of thumb to tell you when to use == or is.

* == is for value equality. Use it when you would like to know if two objects have the same value.
* is is for reference equality. Use it when you would like to know if two references refer to the same object.

In general, when you are comparing something to a simple type, you are usually checking for value equality, so you should use ==. For example, the intention of your example is probably to check whether x has a value equal to 2 (==), not whether x is literally referring to the same object as 2.

Something else to note: because of the way the CPython reference implementation works, you'll get unexpected and inconsistent results if you mistakenly use is to compare for reference equality on integers:

>>> a = 500
>>> b = 500
>>> a == b
True
>>> a is b
False

That's pretty much what we expected: a and b have the same value, but are distinct entities. But what about this?

>>> c = 200
>>> d = 200
>>> c == d
True
>>> c is d
True

This is inconsistent with the earlier result. What's going on here? It turns out the reference implementation of Python caches integer objects in the range -5..256 as singleton instances for performance reasons. Here's an example demonstrating this:

>>> for i in range(250, 260): a = i; print "%i: %s" % (i, a is int(str(i)));
...
250: True
251: True
252: True
253: True
254: True
255: True
256: True
257: False
258: False
259: False

This is another obvious reason not to use is: the behavior is left up to implementations when you're erroneously using it for value equality.

A: As John Feminella said, most of the time you will use == and != because your objective is to compare values. I'd just like to categorise what you would do the rest of the time:

There is one and only one instance of NoneType, i.e. None is a singleton. Consequently foo == None and foo is None mean the same. However the is test is faster and the Pythonic convention is to use foo is None.
If you are doing some introspection or mucking about with garbage collection or checking whether your custom-built string interning gadget is working or suchlike, then you probably have a use-case for foo is bar.

True and False are also (now) singletons, but there is no use-case for foo == True and no use-case for foo is True.

A: Most of the other answers are already to the point. Just as an additional note (based on my understanding and experimenting, but not from a documented source): the statement "== if the objects referred to by the variables are equal" from the above answers should be read as "== if the objects referred to by the variables are equal and the objects belong to the same type/class". I arrived at this conclusion based on the below test:

list1 = [1,2,3,4]
tuple1 = (1,2,3,4)
print(list1)
print(tuple1)
print(id(list1))
print(id(tuple1))
print(list1 == tuple1)
print(list1 is tuple1)

Here the contents of the list and the tuple are the same, but the type/class differs.

A: What's the difference between is and ==?

== and is are different comparisons! As others already said:

* == compares the values of the objects.
* is compares the references of the objects.

In Python names refer to objects; for example, in this case value1 and value2 refer to an int instance storing the value 1000:

value1 = 1000
value2 = value1

Because value2 refers to the same object, is and == will give True:

>>> value1 == value2
True
>>> value1 is value2
True

In the following example the names value1 and value2 refer to different int instances, even if both store the same integer:

>>> value1 = 1000
>>> value2 = 1000

Because the same value (integer) is stored, == will be True; that's why it's often called "value comparison". However is will return False because these are different objects:

>>> value1 == value2
True
>>> value1 is value2
False

When to use which?

Generally is is a much faster comparison. That's why CPython caches (or maybe reuses would be the better term) certain objects like small integers, some strings, etc. But this should be treated as an implementation detail that could (even if unlikely) change at any point without warning.

You should only use is if you:

* want to check if two objects are really the same object (not just the same "value"). One example is if you use a singleton object as a constant.
* want to compare a value to a Python constant. The constants in Python are:
  * None
  * True [1]
  * False [1]
  * NotImplemented
  * Ellipsis
  * __debug__
  * classes (for example int is int or int is float)
  * there could be additional constants in built-in modules or 3rd-party modules, for example np.ma.masked from the NumPy module

In every other case you should use == to check for equality.

Can I customize the behavior?

There is an aspect of == that hasn't been mentioned already in the other answers: it's part of Python's "data model". That means its behavior can be customized using the __eq__ method.
For example:

class MyClass(object):
    def __init__(self, val):
        self._value = val

    def __eq__(self, other):
        print('__eq__ method called')
        try:
            return self._value == other._value
        except AttributeError:
            raise TypeError('Cannot compare {0} to objects of type {1}'
                            .format(type(self), type(other)))

This is just an artificial example to illustrate that the method is really called:

>>> MyClass(10) == MyClass(10)
__eq__ method called
True

Note that by default (if no other implementation of __eq__ can be found in the class or the superclasses) __eq__ uses is:

class AClass(object):
    def __init__(self, value):
        self._value = value

>>> a = AClass(10)
>>> b = AClass(10)
>>> a == b
False
>>> a == a
True

So it's actually important to implement __eq__ if you want "more" than just reference-comparison for custom classes!

On the other hand, you cannot customize is checks. It will always compare just whether you have the same reference.

Will these comparisons always return a boolean?

Because __eq__ can be re-implemented or overridden, it's not limited to returning True or False. It could return anything (but in most cases it should return a boolean!).

For example, with NumPy arrays == will return an array:

>>> import numpy as np
>>> np.arange(10) == 2
array([False, False, True, False, False, False, False, False, False, False], dtype=bool)

But is checks will always return True or False!

[1] As Aaron Hall mentioned in the comments: generally you shouldn't do any is True or is False checks, because one normally uses these "checks" in a context that implicitly converts the condition to a boolean (for example in an if statement). So doing the is True comparison and the implicit boolean cast is doing more work than just doing the boolean cast, and you limit yourself to booleans (which isn't considered pythonic). Like PEP 8 mentions:

    Don't compare boolean values to True or False using ==.

    Yes:   if greeting:
    No:    if greeting == True:
    Worse: if greeting is True:

A: They are completely different. is checks for object identity, while == checks for equality (a notion that depends on the two operands' types).

It is only a lucky coincidence that "is" seems to work correctly with small integers (e.g. 5 == 4+1). That is because CPython optimizes the storage of integers in the range (-5 to 256) by making them singletons. This behavior is totally implementation-dependent and not guaranteed to be preserved under all manner of minor transformative operations.

For example, Python 3.5 also makes short strings singletons, but slicing them disrupts this behavior:

>>> "foo" + "bar" == "foobar"
True
>>> "foo" + "bar" is "foobar"
True
>>> "foo"[:] + "bar" == "foobar"
True
>>> "foo"[:] + "bar" is "foobar"
False

A: https://docs.python.org/library/stdtypes.html#comparisons

is tests for identity. == tests for equality.

Each (small) integer value is mapped to a single value, so every 3 is identical and equal. This is an implementation detail, not part of the language spec though.

A: is will return True if two variables point to the same object (in memory), == if the objects referred to by the variables are equal.

>>> a = [1, 2, 3]
>>> b = a
>>> b is a
True
>>> b == a
True

# Make a new copy of list `a` via the slice operator,
# and assign it to variable `b`
>>> b = a[:]
>>> b is a
False
>>> b == a
True

In your case, the second test only works because Python caches small integer objects, which is an implementation detail.
For larger integers, this does not work:

>>> 1000 is 10**3
False
>>> 1000 == 10**3
True

The same holds true for string literals:

>>> "a" is "a"
True
>>> "aa" is "a" * 2
True
>>> x = "a"
>>> "aa" is x * 2
False
>>> "aa" is intern(x*2)
True

Please see this question as well.
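Note that intern above is the Python 2 builtin; in Python 3 it lives at sys.intern. A quick hedged sketch of the same experiment on Python 3 (the results are CPython implementation details, and newer CPython versions emit a SyntaxWarning for is against a literal; it is used here only to mirror the experiment above):

import sys

x = "a"
print("aa" == x * 2)              # True: value comparison is always reliable
print("aa" is x * 2)              # typically False: runtime concatenation builds a new object
print("aa" is sys.intern(x * 2))  # True on CPython: interning returns the canonical object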
Q: Flex HttpService : appending to destination

I am using Flex to connect to a REST service. To access order #32, for instance, I can call the URL http://[service]/orders/32. The URL must be configured as a destination, since the client will connect to different instances of the service. All of this is using the Blaze Proxy, since it involves GET, PUT, DELETE and POST calls.

The problem is: how do I append the "32" to the end of a destination when using HttpService? All I do is set the destination, and at some point this is converted into a URL. I have traced the code, but I don't know where this is done, so can't replace it.

Options are:

1. Resolve the destination to a URL within the Flex client, and then set the URL (with the appended data) as the URL.
2. Write my own Java Flex adapter that overrides the standard Proxy, and map parameters to the URL like the following: http://[service]/order/{id}?id=32 to http://[service]/order/32

Has anyone come across this problem before, and are there any simple ways to resolve this?

A: Just so everyone knows, this is how I resolved this issue: I created a custom HTTPProxyAdapter on the server.

import java.util.Iterator;
import java.util.Set;

import flex.messaging.io.amf.ASObject;
import flex.messaging.messages.HTTPMessage;
import flex.messaging.messages.Message;

public class MyHTTPProxyAdapter extends flex.messaging.services.http.HTTPProxyAdapter {

    public Object invoke(Message message) {
        // modify the message - if required
        process(message);
        return super.invoke(message);
    }

    private void process(Message message) {
        HTTPMessage http = (HTTPMessage) message;
        if (http != null) {
            String url = http.getUrl();
            ASObject o = (ASObject) http.getBody();
            if (o != null) {
                Set keys = o.keySet();
                Iterator it = keys.iterator();
                while (it.hasNext()) {
                    String key = (String) it.next();
                    String token = "[" + key + "]";
                    if (url.contains(token)) {
                        url = url.replace(token, o.get(key).toString());
                        // remove via the iterator to avoid a ConcurrentModificationException
                        it.remove();
                    }
                }
                http.setUrl(url);
            }
        }
    }
}

Then I replaced the destination adapter with my adapter. I can now use the following URL in the config.xml, and anything in square brackets will be replaced from the query string:

<destination id="user-getbytoken">
    <properties>
        <url>http://localhost:8080/myapp/public/client/users/token/[id]</url>
    </properties>
</destination>

In this example, setting the destination to user-getbytoken and the parameters {id:123} will result in the URL http://localhost:8080/myapp/public/client/users/token/123

A: Here's a simple way to resolve the url to the HTTPService within Flex via the click event's handler. Here's a service:

<mx:HTTPService id="UCService" result="UCServiceHandler(event)" showBusyCursor="true" resultFormat="e4x" />

Then here's the handler:

private function UCmainHandler(UCurl:String) {
    UCService.url = UCurl;
    UCService.send();
}

And here's a sample click event:

<mx:Button label="add to cart" click="UCmainHandler('http://sampleurl.com/cart/add/p18_q1?destination=cart')" />

Of course you could pass other values to the click handler, or even have the handler add things to the url based on other current settings etc... Hope that helps!
Q: What is Big O notation? Do you use it?

What is Big O notation? Do you use it? I missed this university class I guess :D Does anyone use it, and can you give some real-life examples of where you used it?

See also:
* Big-O for Eight Year Olds?
* Big O, how do you calculate/approximate it?
* Did you apply computational complexity theory in real life?

A: Big O notation denotes the limiting factor of an algorithm. It's a simplified expression of how the run time of an algorithm scales in relation to the input.

For example (in Java):

/** Takes an array of strings and concatenates them.
 * This is a silly way of doing things but it gets the
 * point across hopefully
 * @param strings the array of strings to concatenate
 * @returns a string that is a result of the concatenation of all the strings
 * in the array
 */
public static String badConcat(String[] strings) {
    String totalString = "";
    for (String s : strings) {
        for (int i = 0; i < s.length(); i++) {
            totalString += s.charAt(i);
        }
    }
    return totalString;
}

Now think about what this is actually doing. It is going through every character of input and adding them together. This seems straightforward. The problem is that String is immutable. So every time you add a letter onto the string you have to create a new String. To do this you have to copy the values from the old string into the new string and add the new character.

This means you will be copying the first letter n times, where n is the number of characters in the input. You will be copying the character n-1 times, so in total there will be (n-1)(n/2) copies. This is (n^2-n)/2, and for Big O notation we use only the highest-magnitude factor (usually) and drop any constants that are multiplied by it, and we end up with O(n^2).

Using something like a StringBuilder will be along the lines of O(n log n). If you calculate the number of characters at the beginning and set the capacity of the StringBuilder, you can get it to be O(n).

So if we had 1000 characters of input, the first example would perform roughly a million operations, StringBuilder would perform 10,000, and the StringBuilder with setCapacity would perform 1000 operations to do the same thing. This is a rough estimate, but O(n) notation is about orders of magnitude, not exact runtime.

It's not something I use per se on a regular basis. It is, however, constantly in the back of my mind when trying to figure out the best algorithm for doing something.

A: One important thing most people forget when talking about Big-O, thus I feel the need to mention it:

You cannot use Big-O to compare the speed of two algorithms. Big-O only says how much slower an algorithm will get (approximately) if you double the number of items processed, or how much faster it will get if you cut the number in half.

However, if you have two entirely different algorithms and one (A) is O(n^2) and the other one (B) is O(log n), it is not said that A is slower than B. Actually, with 100 items, A might be ten times faster than B. It only says that with 200 items, A will grow slower by the factor n^2 and B will grow slower by the factor log n. So, if you benchmark both and you know how much time A takes to process 100 items, and how much time B needs for the same 100 items, and A is faster than B, you can calculate at what amount of items B will overtake A in speed (as the speed of B decreases much more slowly than that of A, it will overtake A sooner or later - this is for sure).

A: A very similar question has already been asked at Big-O for Eight Year Olds?.
Hopefully the answers there will answer your question, although the asker there did have a bit of mathematical knowledge about it all, which you may not have, so ask if you need a fuller explanation.

A: Every programmer should be aware of what Big O notation is, how it applies to actions with common data structures and algorithms (and thus pick the correct DS and algorithm for the problem they are solving), and how to calculate it for their own algorithms.

1) It's an order of measurement of the efficiency of an algorithm when working on a data structure.

2) Actions like 'add' / 'sort' / 'remove' can take different amounts of time with different data structures (and algorithms). For example, 'add' and 'find' are O(1) for a hashmap, but O(log n) for a binary tree. Sort is O(n log n) for QuickSort, but O(n^2) for BubbleSort, when dealing with a plain array.

3) Calculations can generally be done by looking at the loop depth of your algorithm. No loops: O(1). Loops iterating over all of the set (even if they break out at some point): O(n). If the loop halves the search space on each iteration: O(log n). Take the highest O() for a sequence of loops, and multiply the O() when you nest loops.

Yeah, it's more complex than that. If you're really interested, get a textbook.

A: 'Big-O' notation is used to compare the growth rates of two functions of a variable (say n) as n gets very large. If function f grows much more quickly than function g, we say that g = O(f) to imply that for large enough n, f will always be larger than g up to a scaling factor.

It turns out that this is a very useful idea in computer science and particularly in the analysis of algorithms, because we are often precisely concerned with the growth rates of functions which represent, for example, the time taken by two different algorithms. Very coarsely, we can determine that an algorithm with run-time t1(n) is more efficient than an algorithm with run-time t2(n) if t1 = O(t2) for large enough n, which is typically the 'size' of the problem - like the length of the array or number of nodes in the graph or whatever.

This stipulation, that n gets large enough, allows us to pull a lot of useful tricks. Perhaps the most often used one is that you can simplify functions down to their fastest-growing terms. For example, n^2 + n = O(n^2), because as n gets large enough, the n^2 term gets so much larger than n that the n term is practically insignificant. So we can drop it from consideration.

However, it does mean that big-O notation is less useful for small n, because the slower-growing terms that we've forgotten about are still significant enough to affect the run-time.

What we now have is a tool for comparing the costs of two different algorithms, and a shorthand for saying that one is quicker or slower than the other. Big-O notation can be abused, which is a shame as it is imprecise enough already! There are equivalent terms for saying that a function grows less quickly than another, and that two functions grow at the same rate.

Oh, and do I use it? Yes, all the time - when I'm figuring out how efficient my code is, it gives a great back-of-the-envelope approximation to the cost.

A: The "Intuition" behind Big-O

Imagine a "competition" between two functions over x, as x approaches infinity: f(x) and g(x).

Now, if from some point on (some x) one function always has a higher value than the other, then let's call this function "faster" than the other.
So, for example, if for every x > 100 you see that f(x) > g(x), then f(x) is "faster" than g(x). In this case we would say g(x) = O(f(x)). f(x) poses a sort of "speed limit" for g(x), since eventually it passes it and leaves it behind for good.

This isn't exactly the definition of big-O notation, which also states that f(x) only has to be larger than C*g(x) for some constant C (which is just another way of saying that you can't help g(x) win the competition by multiplying it by a constant factor - f(x) will always win in the end). The formal definition also uses absolute values. But I hope I managed to make it intuitive.

A: It may also be worth considering that the complexity of many algorithms is based on more than one variable, particularly in multi-dimensional problems. For example, I recently had to write an algorithm for the following: given a set of n points and m polygons, extract all the points that lie in any of the polygons. The complexity is based around two known variables, n and m, and the unknown of how many points are in each polygon. The big O notation here is quite a bit more involved than O(f(n)) or even O(f(n) + g(m)). Big O is good when you are dealing with large numbers of homogenous items, but don't expect this to always be the case.

It is also worth noting that the actual number of iterations over the data is often dependent on the data. Quicksort is usually quick, but give it presorted data and it slows down. My points-and-polygons algorithm ended up quite fast, close to O(n + m log(m)), based on prior knowledge of how the data was likely to be organised and the relative sizes of n and m. It would fall down badly on randomly organised data of different relative sizes.

A final thing to consider is that there is often a direct trade-off between the speed of an algorithm and the amount of space it uses. Pigeonhole sorting is a pretty good example of this. Going back to my points and polygons, let's say that all my polygons were simple and quick to draw, and I could draw them filled on screen, say in blue, in a fixed amount of time each. So if I draw my m polygons on a black screen, it would take O(m) time. To check if any of my n points was in a polygon, I simply check whether the pixel at that point is blue or black. So the check is O(n), and the total analysis is O(m + n). The downside of course is that I need near-infinite storage if I'm dealing with real-world coordinates to millimeter accuracy.... ...ho hum.

A: It may also be worth considering amortized time, rather than just worst case. This means, for example, that if you run the algorithm n times, it will be O(1) on average, but it might be worse sometimes.

A good example is a dynamic table, which is basically an array that expands as you add elements to it. A naïve implementation would increase the array's size by 1 for each element added, meaning that all the elements need to be copied every time a new one is added. This would result in an O(n^2) algorithm if you were concatenating a series of arrays using this method.

An alternative is to double the capacity of the array every time you need more storage. Even though appending is an O(n) operation sometimes, you will only need to copy O(n) elements for every n elements added, so the operation is O(1) on average. This is how things like StringBuilder or std::vector are implemented.

A: What is Big O notation?

Big O notation is a method of expressing the relationship between the number of steps an algorithm will require and the size of the input data.
This is referred to as the algorithmic complexity. For example, sorting a list of size N using Bubble Sort takes O(N^2) steps.

Do I use Big O notation?

I do use Big O notation on occasion to convey algorithmic complexity to fellow programmers. I use the underlying theory (e.g. Big O analysis techniques) all of the time when I think about which algorithms to use.

Concrete examples?

I have used the theory of complexity analysis to create algorithms for efficient stack data structures which require no memory reallocation, and which support average time of O(N) for indexing. I have used Big O notation to explain the algorithm to other people. I have also used complexity analysis to understand when linear-time sorting, O(N), is possible.

A: From Wikipedia:

    Big O notation is useful when analyzing algorithms for efficiency. For example, the time (or the number of steps) it takes to complete a problem of size n might be found to be T(n) = 4n² − 2n + 2. As n grows large, the n² term will come to dominate, so that all other terms can be neglected - for instance, when n = 500, the term 4n² is 1000 times as large as the 2n term. Ignoring the latter would have negligible effect on the expression's value for most purposes.

Obviously I have never used it.

A: You should be able to evaluate an algorithm's complexity. This, combined with a knowledge of how many elements it will take, can help you to determine if it is ill-suited for its task.

A: It says how many iterations an algorithm has in the worst case.

To search for an item in a list, you can traverse the list until you have found the item. In the worst case, the item is in the last place.

Let's say there are n items in the list. In the worst case you take n iterations. In Big O notation it is O(n).

It says factually how efficient an algorithm is.
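To make the growth rates concrete, here is a small sketch (not from the answers above) contrasting an O(n) scan with an O(log n) halving search; the second method assumes a sorted array:

// O(n): may have to look at every element
static int linearSearch(int[] a, int key) {
    for (int i = 0; i < a.length; i++) {
        if (a[i] == key) return i;
    }
    return -1;
}

// O(log n): halves the remaining search space on every iteration
static int binarySearch(int[] a, int key) {
    int lo = 0, hi = a.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  // written this way to avoid int overflow
        if (a[mid] == key) return mid;
        else if (a[mid] < key) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1;
}

Doubling the array size doubles the worst-case work of the linear scan, but adds only one extra step to the halving search, which is exactly the distinction the answers above describe.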
Q: How to check if a column exists in a SQL Server table

I need to add a specific column if it does not exist. I have something like the following, but it always returns false:

IF EXISTS(SELECT *
          FROM INFORMATION_SCHEMA.COLUMNS
          WHERE TABLE_NAME = 'myTableName'
            AND COLUMN_NAME = 'myColumnName')

How can I check if a column exists in a table of the SQL Server database?

A: select distinct object_name(sc.id)
from syscolumns sc, sysobjects so
where sc.name like '%col_name%'
  and sc.id = so.id
  and so.type = 'U'

A: Wheat's answer is good, but it assumes you do not have any identical table name / column name pairs in any schema or database. To make it safe for that condition, use this:

select *
from Information_Schema.Columns
where Table_Catalog = 'DatabaseName'
  and Table_Schema = 'SchemaName'
  and Table_Name = 'TableName'
  and Column_Name = 'ColumnName'

A: There are several ways to check the existence of a column. I would strongly recommend using INFORMATION_SCHEMA.COLUMNS, as it is created in order to communicate with users. Consider the following tables as alternatives:

sys.objects
sys.columns

as well as some other access methods available for checking the system catalog.

Also, there is no need to use SELECT *; simply test with a NULL value:

IF EXISTS(
    SELECT NULL
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'myTableName'
      AND COLUMN_NAME = 'myColumnName'
)

A: Do something if the column does not exist:

BEGIN
    IF (COL_LENGTH('[dbo].[Table]', 'Column') IS NULL)
    BEGIN
        -- Do something
    END
END;

Do something if the column does exist:

BEGIN
    IF (COL_LENGTH('[dbo].[Table]', 'Column') IS NOT NULL)
    BEGIN
        -- Do something
    END
END;

A: Try this...

IF NOT EXISTS(
    SELECT TOP 1 1
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE [TABLE_NAME] = 'Employees'
      AND [COLUMN_NAME] = 'EmployeeID')
BEGIN
    ALTER TABLE [Employees]
        ADD [EmployeeID] INT NULL
END

A: For the people who are checking for the column's existence before dropping it: from SQL Server 2016 you can use the new DIE (Drop If Exists) statements instead of big IF wrappers:

ALTER TABLE Table_name DROP COLUMN IF EXISTS Column_name

A: Here is a simple script I use to manage the addition of columns in the database:

IF NOT EXISTS (
    SELECT *
    FROM sys.Columns
    WHERE Name = N'QbId'
      AND Object_Id = Object_Id(N'Driver')
)
BEGIN
    ALTER TABLE Driver ADD QbId NVARCHAR(20) NULL
END
ELSE
BEGIN
    PRINT 'QbId is already added on Driver'
END

In this example, Name is the column name to be added and Object_Id is the table name.

A: Another contribution is the following sample that adds the column if it does not exist:

USE [Northwind]
GO

IF NOT EXISTS(SELECT *
              FROM INFORMATION_SCHEMA.COLUMNS
              WHERE TABLE_NAME = 'Categories'
                AND COLUMN_NAME = 'Note')
BEGIN
    ALTER TABLE Categories ADD Note NVARCHAR(800) NULL
END
GO

A: I'd prefer INFORMATION_SCHEMA.COLUMNS over a system table, because Microsoft does not guarantee to preserve the system tables between versions. For example, dbo.syscolumns does still work in SQL Server 2008, but it's deprecated and could be removed at any time in the future.

A: You can use the information schema system views to find out pretty much anything about the tables you're interested in:

SELECT *
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'yourTableName'
ORDER BY ORDINAL_POSITION

You can also interrogate views, stored procedures and pretty much anything about the database using the Information_schema views.
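For instance, a couple of hedged examples against the same catalog (results depend on your database, of course): one lists all views, the other all stored procedures.

SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.VIEWS

SELECT ROUTINE_NAME, ROUTINE_TYPE
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_TYPE = 'PROCEDURE'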
A: Try something like:

CREATE FUNCTION ColumnExists(@TableName varchar(100), @ColumnName varchar(100))
RETURNS varchar(1) AS
BEGIN
    DECLARE @Result varchar(1);
    IF EXISTS (SELECT *
               FROM INFORMATION_SCHEMA.Columns
               WHERE TABLE_NAME = @TableName
                 AND COLUMN_NAME = @ColumnName)
    BEGIN
        SET @Result = 'T'
    END
    ELSE
    BEGIN
        SET @Result = 'F'
    END
    RETURN @Result;
END
GO

GRANT EXECUTE ON [ColumnExists] TO [whoever]
GO

Then use it like this:

IF ColumnExists('xxx', 'yyyy') = 'F'
BEGIN
    ALTER TABLE xxx ADD yyyy varchar(10) NOT NULL
END
GO

It should work on both SQL Server 2000 and SQL Server 2005. I am not sure about SQL Server 2008, but I don't see why not.

A: First check if the table/column (id/name) combination exists in dbo.syscolumns (an internal SQL Server table that contains field definitions), and if not, issue the appropriate ALTER TABLE query to add it. For example:

IF NOT EXISTS (
    SELECT *
    FROM syscolumns
    WHERE id = OBJECT_ID('Client')
      AND name = 'Name'
)
ALTER TABLE Client
    ADD Name VARCHAR(64) NULL

A: The below query can be used to check whether the searched column exists or not in the table. We can take a decision based on the search result, as shown below:

IF EXISTS (SELECT 'Y'
           FROM INFORMATION_SCHEMA.COLUMNS
           WHERE TABLE_NAME = <YourTableName>
             AND COLUMN_NAME = <YourColumnName>)
BEGIN
    SELECT 'Column Already Exists.'
END
ELSE
BEGIN
    ALTER TABLE <YourTableName> ADD <YourColumnName> <DataType>[Size]
END

A: A good friend and colleague of mine showed me how you can also use an IF block with the SQL functions OBJECT_ID and COLUMNPROPERTY in SQL Server 2005 and later to check for a column. You can use something similar to the following (you can see for yourself here):

IF (OBJECT_ID(N'[dbo].[myTable]') IS NOT NULL
    AND COLUMNPROPERTY(OBJECT_ID(N'[dbo].[myTable]'), 'ThisColumnDoesNotExist', 'ColumnId') IS NULL)
BEGIN
    SELECT 'Column does not exist -- You can add TSQL to add the column here'
END

A: declare @myColumn as nvarchar(128)
set @myColumn = 'myColumn'
if not exists (
    select 1
    from information_schema.columns columns
    where columns.table_catalog = 'myDatabase'
      and columns.table_schema = 'mySchema'
      and columns.table_name = 'myTable'
      and columns.column_name = @myColumn
)
begin
    exec('alter table myDatabase.mySchema.myTable add ['+@myColumn+'] bigint null')
end

A: This worked for me in SQL Server 2000:

IF EXISTS (
    SELECT *
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE table_name = 'table_name'
      AND column_name = 'column_name'
)
BEGIN
    ...
END

A: SQL Server 2005 onwards:

IF EXISTS(SELECT 1 FROM sys.columns
          WHERE Name = N'columnName'
            AND Object_ID = Object_ID(N'schemaName.tableName'))
BEGIN
    -- Column Exists
END

Martin Smith's version is shorter:

IF COL_LENGTH('schemaName.tableName', 'columnName') IS NOT NULL
BEGIN
    -- Column Exists
END

A: Try this:

SELECT COLUMNS.*
FROM INFORMATION_SCHEMA.COLUMNS COLUMNS,
     INFORMATION_SCHEMA.TABLES TABLES
WHERE COLUMNS.TABLE_NAME = TABLES.TABLE_NAME
  AND Upper(COLUMNS.COLUMN_NAME) = Upper('column_name')

A: Yet another variation...
SELECT Count(*) AS existFlag
FROM sys.columns
WHERE [name] = N'ColumnName'
  AND [object_id] = OBJECT_ID(N'TableName')

A: You can check multiple columns in SQLDB at once, and return a string as a status indicating whether the columns exist:

IF EXISTS (
    SELECT *
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'Table Name'
      AND (COLUMN_NAME = 'column 1'
        or COLUMN_NAME = 'column 2'
        or COLUMN_NAME = 'column 3'
        or COLUMN_NAME = 'column 4')
)
SELECT 'Column exists in table' AS [Status];
ELSE
SELECT 'Column does not exist in table' AS [Status];

A: I needed something similar for SQL Server 2000 and, as Mitch points out, this only works in SQL Server 2005 or later. This is what worked for me in the end:

if exists (
    select *
    from sysobjects, syscolumns
    where sysobjects.id = syscolumns.id
      and sysobjects.name = 'table'
      and syscolumns.name = 'column')

A: Tweak the below to suit your specific requirements:

if not exists (select column_name
               from INFORMATION_SCHEMA.columns
               where table_name = 'MyTable'
                 and column_name = 'MyColumn')
    alter table MyTable add MyColumn int

That should work - take a careful look over your code for stupid mistakes; are you querying INFORMATION_SCHEMA on the same database as your insert is being applied to, for example? Do you have a typo in your table/column name in either statement?

A: IF NOT EXISTS(SELECT NULL
              FROM INFORMATION_SCHEMA.COLUMNS
              WHERE table_name = 'TableName'
                AND table_schema = 'SchemaName'
                AND column_name = 'ColumnName')
BEGIN
    ALTER TABLE [SchemaName].[TableName] ADD [ColumnName] INT NOT NULL DEFAULT 0;
END;

A: if exists (
    select *
    from INFORMATION_SCHEMA.COLUMNS
    where TABLE_NAME = '<table_name>'
      and COLUMN_NAME = '<column_name>'
)
begin
    print 'Column you have specified exists'
end
else
begin
    print 'Column does not exist'
end

A: One of the simplest and most understandable solutions is:

IF COL_LENGTH('Table_Name', 'Column_Name') IS NULL
BEGIN
    -- Column does not exist; implement your logic
END
ELSE
BEGIN
    -- Column exists; implement your logic
END

A: A more concise version:

IF COL_LENGTH('table_name','column_name') IS NULL
BEGIN
    /* Column does not exist or caller does not have permission to view the object */
END

The point about permissions on viewing metadata applies to all answers, not just this one. Note that the first parameter to COL_LENGTH, the table name, can be in one-, two-, or three-part name format as required. An example referencing a table in a different database is:

COL_LENGTH('AdventureWorks2012.HumanResources.Department','ModifiedDate')

One difference with this answer, compared to using the metadata views, is that metadata functions, such as COL_LENGTH, always only return data about committed changes, irrespective of the isolation level in effect.

A: A temporary table version of the accepted answer:

if (exists(select 1
           from tempdb.sys.columns
           where Name = 'columnName'
             and Object_ID = object_id('tempdb..#tableName')))
begin
    ...
end

A: Execute the below query to check if the column exists in the given table:

IF (SELECT COLUMN_NAME
    from INFORMATION_SCHEMA.COLUMNS
    where TABLE_NAME = 'TableName'
      AND COLUMN_NAME = 'ColumnName') IS NOT NULL
    PRINT 'Column Exists in the given table';

A: IF EXISTS (
    SELECT *
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_CATALOG = 'Database Name'
      and TABLE_SCHEMA = 'Schema Name'
      and TABLE_NAME = 'Table Name'
      and COLUMN_NAME = 'Column Name'
      and DATA_TYPE = 'Column Type') -- WHERE clause lines you don't need can be deleted.
BEGIN
    -- Column exists in table
END
ELSE
BEGIN
    -- Column does not exist in table
END

A: IF EXISTS(SELECT 1 FROM sys.columns
          WHERE Name = N'columnName'
            AND Object_ID = Object_ID(N'schemaName.tableName'))

This should be a fairly easy and straightforward solution to this problem. I have used it multiple times for similar scenarios. It works like a charm, no doubts about that.

A: In SSMS, right-click the table, then Script Table as -> CREATE To -> New Query Editor Window; you get the table's design script. Check for the column name in the new window.
Q: Control for tags with auto-completion in Winforms?

I am seeking a WinForms control that would provide autocomplete behavior for multiple space-separated tags - exactly à la del.icio.us (or stackoverflow.com for that matter). Does anyone know how to do that within a .NET 2.0 WinForms application?

A: ComboBox can autocomplete, but only one word at a time. If you want to have each word separately autocompleted, you have to write your own. I already did; hope it's not too long. It's not 100% exactly what you want: this was used for autocompletion in an email client when typing in email addresses.

/// <summary>
/// Extended TextBox with smart auto-completion
/// </summary>
public class TextBoxAC : TextBox
{
    private List<string> completions = new List<string>();
    private List<string> completionsLow = new List<string>();
    private bool autocompleting = false;
    private bool acDisabled = true;
    private List<string> possibleCompletions = new List<string>();
    private int currentCompletion = 0;

    /// <summary>
    /// Default constructor
    /// </summary>
    public TextBoxAC()
    {
        this.TextChanged += new EventHandler(TextBoxAC_TextChanged);
        this.KeyPress += new KeyPressEventHandler(TextBoxAC_KeyPress);
        this.KeyDown += new KeyEventHandler(TextBoxAC_KeyDown);
        this.TabStop = true;
    }

    /// <summary>
    /// Sets autocompletion data, list of possible strings
    /// </summary>
    /// <param name="words">Completion words</param>
    /// <param name="wordsLow">Completion words in lowerCase</param>
    public void SetAutoCompletion(List<string> words, List<string> wordsLow)
    {
        if (words == null || words.Count < 1) { return; }
        this.completions = words;
        this.completionsLow = wordsLow;
        this.TabStop = false;
    }

    private void TextBoxAC_TextChanged(object sender, EventArgs e)
    {
        if (this.autocompleting || this.acDisabled) { return; }
        string text = this.Text;
        if (text.Length != this.SelectionStart) { return; }
        int pos = this.SelectionStart;
        string userPrefix = text.Substring(0, pos);
        int commaPos = userPrefix.LastIndexOf(",");
        if (commaPos == -1)
        {
            userPrefix = userPrefix.ToLower();
            this.possibleCompletions.Clear();
            int n = 0;
            foreach (string s in this.completionsLow)
            {
                if (s.StartsWith(userPrefix))
                {
                    this.possibleCompletions.Add(this.completions[n]);
                }
                n++;
            }
            if (this.possibleCompletions.Count < 1) { return; }
            this.autocompleting = true;
            this.Text = this.possibleCompletions[0];
            this.autocompleting = false;
            this.SelectionStart = pos;
            this.SelectionLength = this.Text.Length - pos;
        }
        else
        {
            string curUs = userPrefix.Substring(commaPos + 1);
            if (curUs.Trim().Length < 1) { return; }
            string trimmed;
            curUs = this.trimOut(curUs, out trimmed);
            curUs = curUs.ToLower();
            string oldUs = userPrefix.Substring(0, commaPos + 1);
            this.possibleCompletions.Clear();
            int n = 0;
            foreach (string s in this.completionsLow)
            {
                if (s.StartsWith(curUs))
                {
                    this.possibleCompletions.Add(this.completions[n]);
                }
                n++;
            }
            if (this.possibleCompletions.Count < 1) { return; }
            this.autocompleting = true;
            this.Text = oldUs + trimmed + this.possibleCompletions[0];
            this.autocompleting = false;
            this.SelectionStart = pos;
            this.SelectionLength = this.Text.Length - pos + trimmed.Length;
        }
        this.currentCompletion = 0;
    }

    private void TextBoxAC_KeyDown(object sender, KeyEventArgs e)
    {
        if (e.KeyCode == Keys.Back || e.KeyCode == Keys.Delete)
        {
            this.acDisabled = true;
        }
        if (e.KeyCode == Keys.Up || e.KeyCode == Keys.Down)
        {
            if ((this.acDisabled) || (this.possibleCompletions.Count < 1)) { return; }
            e.Handled = true;
            if (this.possibleCompletions.Count < 2) { return; }
            switch (e.KeyCode)
            {
                case Keys.Up:
                    this.currentCompletion--;
                    if (this.currentCompletion < 0)
                    {
                        this.currentCompletion = this.possibleCompletions.Count - 1;
                    }
                    break;
                case Keys.Down:
                    this.currentCompletion++;
                    if (this.currentCompletion >= this.possibleCompletions.Count)
                    {
                        this.currentCompletion = 0;
                    }
                    break;
            }
            int pos = this.SelectionStart;
            string userPrefix = this.Text.Substring(0, pos);
            int commaPos = userPrefix.LastIndexOf(",");
            if (commaPos == -1)
            {
                pos--;
                userPrefix = this.Text.Substring(0, pos);
                this.autocompleting = true;
                this.Text = userPrefix + this.possibleCompletions[this.currentCompletion].Substring(userPrefix.Length);
                this.autocompleting = false;
                this.SelectionStart = pos + 1;
                this.SelectionLength = this.Text.Length - pos;
            }
            else
            {
                string curUs = userPrefix.Substring(commaPos + 1);
                if (curUs.Trim().Length < 1) { return; }
                string trimmed;
                curUs = this.trimOut(curUs, out trimmed);
                curUs = curUs.ToLower();
                string oldUs = userPrefix.Substring(0, commaPos + 1);
                this.autocompleting = true;
                this.Text = oldUs + trimmed + this.possibleCompletions[this.currentCompletion];
                this.autocompleting = false;
                this.SelectionStart = pos;
                this.SelectionLength = this.Text.Length - pos + trimmed.Length;
            }
        }
    }

    private void TextBoxAC_KeyPress(object sender, KeyPressEventArgs e)
    {
        if (!Char.IsControl(e.KeyChar))
        {
            this.acDisabled = false;
        }
    }

    private string trimOut(string toTrim, out string trim)
    {
        string ret = toTrim.TrimStart();
        int pos = toTrim.IndexOf(ret);
        trim = toTrim.Substring(0, pos);
        return ret;
    }
}
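A minimal usage sketch for the control above (the form wiring and word list are invented for illustration, using .NET 2.0-era syntax):

// Somewhere in your form's initialization
TextBoxAC tagBox = new TextBoxAC();
List<string> words = new List<string>();
words.Add("csharp");
words.Add("winforms");
words.Add("autocomplete");
// The control expects the same list pre-lowercased for its prefix matching
List<string> wordsLow = words.ConvertAll(delegate(string w) { return w.ToLower(); });
tagBox.SetAutoCompletion(words, wordsLow);
this.Controls.Add(tagBox);

Note that, as posted, the class completes comma-separated entries (it splits on LastIndexOf(",")); for space-separated tags as in the question, you would swap the "," for " " in the two handlers.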
Q: What is the difference between visibility:hidden and display:none?

The CSS rules visibility:hidden and display:none both result in the element not being visible. Are these synonyms?

A: display:none removes the element from the layout flow. visibility:hidden hides it but leaves the space.

A: There is a big difference when it comes to child nodes. For example, if you have a parent div and a nested child div, and you write this:

<div id="parent" style="display:none;">
    <div id="child" style="display:block;"></div>
</div>

then none of the divs will be visible. But if you write this:

<div id="parent" style="visibility:hidden;">
    <div id="child" style="visibility:visible;"></div>
</div>

then the child div will be visible, whereas the parent div will not be shown.

A: In addition to all other answers, there's an important difference for IE8: if you use display:none and try to get the element's width or height, IE8 returns 0 (while other browsers will return the actual sizes). IE8 returns the correct width or height only for visibility:hidden.

A: display: none; - the element will not be shown on the page and does not occupy any space. visibility: hidden; - hides the element, but it will still take up the same space as before; the element is hidden but still affects the layout. In short, visibility: hidden preserves the space, whereas display: none doesn't.

Display none example: https://www.w3schools.com/css/tryit.asp?filename=trycss_display_none
Visibility hidden example: https://www.w3schools.com/cssref/tryit.asp?filename=trycss_visibility

A: visibility:hidden keeps the element on the page and it still occupies that space, but it is not shown to the user. display:none makes the element unavailable on the page, and it does not occupy any space.

A: display: none removes the element from the normal flow of the page, allowing other elements to fill in. The element will not appear on the page at all, but we can still interact with it through the DOM. There will be no space allocated for it between the other elements. visibility: hidden leaves the element in the normal flow of the page such that it still occupies space; the element is not visible, but its space is allocated for it on the page.

Some other ways to hide elements:

Use z-index:

#element { z-index: -11111; }

Move an element off the page:

#element { position: absolute; top: -9999em; left: -9999em; }

Interesting information about the visibility: hidden and display: none properties: visibility: hidden and display: none will be equally performant, since they both re-trigger layout, paint and composite. However, opacity: 0 is functionally equivalent to visibility: hidden and does not re-trigger the layout step.

The CSS transition property is also something to take care with: toggling from visibility: hidden to visibility: visible allows CSS transitions to be used, whereas toggling from display: none to display: block does not. visibility: hidden has the additional benefit of not capturing JavaScript events, whereas opacity: 0 captures events.

A: If the visibility property is set to "hidden", the browser will still take up space on the page for the content, even though it's invisible. But when we set an object to display:none, the browser does not allocate space on the page for its content. Example:

<div style="display:none">
    Content not displayed on screen, and no space taken.
</div>
<div style="visibility:hidden">
    Content not displayed on screen, but it will take space on screen.
</div>

A: There are a lot of detailed answers here, but I thought I should add this to address accessibility, since there are implications. display: none; and visibility: hidden; may not be read by all screen reader software. Keep in mind what visually-impaired users will experience.

The question also asks about synonyms. text-indent: -9999px; is one other that is roughly equivalent. The important difference with text-indent is that it will often be read by screen readers. It can be a bit of a bad experience, as users can still tab to the link.

For accessibility, what I see used today is a combination of styles to hide an element while keeping it visible to screen readers:

{
    clip: rect(1px, 1px, 1px, 1px);
    clip-path: inset(50%);
    height: 1px;
    width: 1px;
    margin: -1px;
    overflow: hidden;
    padding: 0;
    position: absolute;
}

A great practice is to create a "Skip to content" link to the anchor of the main body of content. Visually-impaired users probably don't want to listen to your full navigation tree on every single page. Make the link visually hidden. Users can just hit tab to access the link.

For more on accessibility and hidden content, see:
* https://webaim.org/techniques/css/invisiblecontent/
* https://webaim.org/techniques/skipnav/

A: Summarizing all the other answers:

| visibility: hidden | display: none |
| --- | --- |
| The element is hidden for all practical purposes (mouse pointers, keyboard focus, screen readers), but still occupies space in the rendered markup. | The element is hidden for all practical purposes (mouse pointers, keyboard focus, screen readers), and does NOT occupy space in the rendered markup. |
| CSS transitions can be applied to visibility changes. | CSS transitions cannot be applied to display changes. |
| You can make a parent visibility:hidden, but a child with visibility:visible would still be shown. | When the parent is display:none, children can't override it and make themselves visible. |
| Part of the DOM tree (so you can still target it with DOM queries). | Part of the DOM tree (so you can still target it with DOM queries). |
| Part of the render tree. | NOT part of the render tree. |
| Any reflow/layout in the parent or child elements would possibly trigger a reflow in these elements as well, as they are part of the render tree. | Any reflow/layout in the parent element would not impact these elements, as they are not part of the render tree. |
| Toggling between visibility: hidden and visible would possibly not trigger a reflow/layout (according to this comment it does: What is the difference between visibility:hidden and display:none?, and possibly according to https://developers.google.com/speed/docs/insights/browser-reflow as well). | Toggling from display:none to display: (something else) would lead to a layout/reflow, as the element now becomes part of the render tree. |
| You can measure the element through DOM methods. | You cannot measure the element or its descendants using DOM methods. |
| If you have a huge number of elements using visibility: hidden on the page, the browser might hang while rendering, as all these elements require layout even though they are not shown. | If you have a huge number of elements using display:none, they wouldn't impact the rendering, as they are not part of the render tree. |

Resources:
* https://developers.google.com/speed/docs/insights/browser-reflow
* http://www.stubbornella.org/content/2009/03/27/reflows-repaints-css-performance-making-your-javascript-slow/
* Performance differences between visibility:hidden and display:none

Other info:
* There are some browser support idiosyncrasies as well, but they seem to apply to very old browsers, and are available in the other answers, so I have not discussed them here.
* There are some other alternatives to hide elements, like opacity, or absolute positioning off screen. All of them have been touched upon in one or another of the answers, and have some drawbacks.
* According to this comment (Performance differences between visibility:hidden and display:none), if you have a lot of elements using display:none and you change them to display: (something else), it will cause a single reflow, while if you have multiple visibility: hidden elements and you turn them visible, it will cause a reflow for each element. (I don't really understand this.)

A: One other difference is that visibility:hidden works in really, really old browsers, and display:none does not:
https://www.w3schools.com/cssref/pr_class_visibility.asp
https://www.w3schools.com/cssref/pr_class_display.asp

A: They are not synonyms. display:none removes the element from the normal flow of the page, allowing other elements to fill in. visibility:hidden leaves the element in the normal flow of the page such that it still occupies space.

Imagine you are in line for a ride at an amusement park and someone in the line gets so rowdy that security plucks them from the line. Everyone in line will then move forward one position to fill the now empty slot. This is like display:none.

Contrast this with the similar situation where someone in front of you puts on an invisibility cloak. While viewing the line, it will look like there is an empty space, but people can't really fill that empty-looking space, because someone is still there. This is like visibility:hidden.

A: They're not synonyms - display: none removes the element from the flow of the page, and the rest of the page flows as if it weren't there. visibility: hidden hides the element from view but not from the page flow, leaving space for it on the page.

A: The difference goes beyond style and is reflected in how the elements behave when manipulated with JavaScript.

Effects and side effects of display: none:
* the target element is taken out of the document flow (it doesn't affect the layout of other elements);
* all descendants are affected (they are not displayed either and cannot "snap out" of this inheritance);
* measurements cannot be made for the target element nor for its descendants: they are not rendered at all, thus their clientWidth, clientHeight, offsetWidth, offsetHeight, scrollWidth, scrollHeight, getBoundingClientRect(), getComputedStyle() all return 0s.
Effects and side effects of visibility: hidden:
* the target element is hidden from view, but is not taken out of the flow and still affects layout, occupying its normal space;
* innerText (but not innerHTML) of the target element and its descendants returns an empty string.
A: As described elsewhere in this stack, the two are not synonymous. visibility:hidden will leave space on the page whereas display:none will hide the element entirely. I think it's important to talk about how this affects the children of a given element. If you were to use visibility:hidden then you could show the children of that element with the right styling. But with display:none you hide the children regardless of whether you use display: block | flex | inline | grid | inline-block or not.
A: display:none means that the tag in question will not appear on the page at all (although you can still interact with it through the DOM). There will be no space allocated for it between the other tags. visibility:hidden means that unlike display:none, the tag is not visible, but space is allocated for it on the page. The tag is rendered, it just isn't seen on the page. For example: test | <span style="[style-tag-value]">Appropriate style in this tag</span> | test Replacing [style-tag-value] with display:none results in: test | | test Replacing [style-tag-value] with visibility:hidden results in: test |                        | test
A: display: none removes the element from the page entirely, and the page is built as though the element were not there at all. visibility: hidden leaves the space in the document flow even though you can no longer see it. This may or may not make a big difference depending on what you are doing.
A: One thing worth adding, though it wasn't asked, is that there is a third option of making the object completely transparent. Consider: 1st <a href="http://example.com" style="display: none;">unseen</a> link.<br /> 2nd <a href="http://example.com" style="visibility: hidden;">unseen</a> link.<br /> 3rd <a href="http://example.com" style="opacity: 0;">unseen</a> link. The difference between 1 and 2 has already been pointed out (namely, 2 still takes up space). However, there is a difference between 2 and 3: in case 3, the mouse will still switch to the hand when hovering over the link, the user can still click on the link, and JavaScript events will still fire on the link. This is usually not the behavior you want (but maybe sometimes it is?). Another difference is that if you select the text, then copy/paste as plain text, you get the following: 1st link. 2nd link. 3rd unseen link. In case 3 the text does get copied. Maybe this would be useful for some type of watermarking, or if you wanted to hide a copyright notice that would show up if a careless user copy/pasted your content?
A: With visibility:hidden the object still takes up vertical height on the page. With display:none it is completely removed. If you have text beneath an image and you do display:none, that text will shift up to fill the space where the image was. If you do visibility:hidden the text will remain in the same location.
A: display:none will hide the element and collapse the space it was taking up, whereas visibility:hidden will hide the element and preserve the element's space. display:none also affects some of the properties available from JavaScript in older versions of IE and Safari.
A: visibility:hidden preserves the space; display:none doesn't.
A: display:none; will neither display the element nor allot space for it on the page, whereas visibility:hidden; will not display the element but will still allot space for it on the page. We can access the element in the DOM in both cases. To understand it better, see this demo: display:none vs visibility:hidden
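To make the layout difference concrete, here is a minimal, self-contained sketch (the class names are my own, for illustration only); the third rule also demonstrates the transition point from the summary above, since visibility can be transitioned while display cannot:

<style>
  .gone  { display: none; }       /* no box at all: the row collapses */
  .blank { visibility: hidden; }  /* invisible box: the gap remains */
  /* visibility is animatable (with a delay trick); display is not */
  .fade-out { visibility: hidden; opacity: 0;
              transition: opacity 0.3s, visibility 0s linear 0.3s; }
</style>
<div>before <span class="gone">middle</span> after</div>   <!-- renders "before after" -->
<div>before <span class="blank">middle</span> after</div>  <!-- renders "before        after" -->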
{ "language": "en", "url": "https://stackoverflow.com/questions/133051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1372" }
Q: How should I bind a web UI against XML attributes? I want to bind my UI against a collection of XElements and their properties on a webpage. Hypothetically, this could be for any object that represents an XML tree. I'm hoping that there might be a better way of doing this. Should I use an XPath query to get out the elements of the collection and the attribute values of each (in this case) XElement? Is there a type of object that is designed to ease databinding against XML? <% foreach(var x in element.Descendants()) {%> <%= DateTime.Parse(x.Attribute("Time").Value).ToShortDateString() %> <% } %> <%-- excuse me, I just vomited a little in my mouth --%>
A: I normally use a "placeholder" class with [XmlRoot], [XmlElement], [XmlAttribute] and I have the XML passed to a deserializer which gives me an object of the type of the placeholder. Once this is done, the only thing left to do is some basic DataBinding to a strongly typed object. Here is a sample class that is "Xml Enabled":
[XmlRoot(ElementName = "Car", IsNullable = false, Namespace = "")]
public class Car
{
    [XmlAttribute(AttributeName = "Model")]
    public string Model { get; set; }

    [XmlAttribute(AttributeName = "Make")]
    public string Make { get; set; }
}
And here is how to deserialize it properly from a file:
public Car ReadXml(string fileLocation)
{
    XmlSerializer carXml = new XmlSerializer(typeof(Car));
    using (FileStream fs = File.OpenRead(fileLocation)) // ensure the stream is closed
    {
        // deserialize with the serializer created above
        Car result = carXml.Deserialize(fs) as Car;
        return result;
    }
}
Of course, you could replace the FileStream with a MemoryStream to read the XML directly from memory. Once in the HTML, it would translate to something like this: <!-- It is assumed that MyCar is a public property of the current page. --> <div> Car Model : <%= MyCar.Model %> <br/> Car Make : <%= MyCar.Make %> </div>
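If you would rather stay with LINQ to XML than define placeholder classes, a hedged alternative is to project the XElements into anonymous types and bind those to a list control. The element name, attribute name, and control name below are assumptions for illustration:

// "Entry" and "Time" are hypothetical names; substitute your own schema.
var rows = element.Descendants("Entry")
                  .Select(x => new
                  {
                      Time = DateTime.Parse((string)x.Attribute("Time"))
                                     .ToShortDateString()
                  })
                  .ToList();

entriesRepeater.DataSource = rows;  // e.g. an asp:Repeater on the page
entriesRepeater.DataBind();

This keeps the parsing and formatting in code-behind, so the markup only references named properties instead of raw XPath.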
{ "language": "en", "url": "https://stackoverflow.com/questions/133077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Most efficient way in SQL Server to get date from date+time? In MS SQL 2000 and 2005, given a datetime such as '2008-09-25 12:34:56' what is the most efficient way to get a datetime containing only '2008-09-25'? Duplicated here. A: in SQL server 2012 use select cast(getdate() as date) A: Select DateAdd(Day, DateDiff(Day, 0, GetDate()), 0) DateDiff(Day, 0, GetDate()) is the same as DateDiff(Day, '1900-01-01', GetDate()) Since DateDiff returns an integer, you will get the number of days that have elapsed since Jan 1, 1900. You then add that integer number of days to Jan 1, 1900. The net effect is removing the time component. I should also mention that this method works for any date/time part (like year, quarter, month, day, hour, minute, and second). Select DateAdd(Year, DateDiff(Year, 0, GetDate()), 0) Select DateAdd(Quarter, DateDiff(Quarter, 0, GetDate()), 0) Select DateAdd(Month, DateDiff(Month, 0, GetDate()), 0) Select DateAdd(Day, DateDiff(Day, 0, GetDate()), 0) Select DateAdd(Hour, DateDiff(Hour, 0, GetDate()), 0) Select DateAdd(Second, DateDiff(Second, '20000101', GetDate()), '20000101') The last one, for seconds, requires special handling. If you use Jan 1, 1900 you will get an error. Difference of two datetime columns caused overflow at runtime. You can circumvent this error by using a different reference date (like Jan 1, 2000). A: select cast(floor(cast(@datetime as float)) as datetime) Works because casting a datetime to float gives the number of days (including fractions of a day) since Jan 1, 1900. Flooring it removes the fractional days and leaves the number of whole days, which can then be cast back to a datetime. A: I must admit I hadn't seen the floor-float conversion shown by Matt before. I had to test this out. I tested a pure select (which will return Date and Time, and is not what we want), the reigning solution here (floor-float), a common 'naive' one mentioned here (stringconvert) and the one mentioned here that I was using (as I thought it was the fastest). I tested the queries on a test-server MS SQL Server 2005 running on a Win 2003 SP2 Server with a Xeon 3GHz CPU running on max memory (32 bit, so that's about 3.5 Gb). It's night where I am so the machine is idling along at almost no load. I've got it all to myself. Here's the log from my test-run selecting from a large table containing timestamps varying down to the millisecond level. This particular dataset includes dates ranging over 2.5 years. The table itself has over 130 million rows, so that's why I restrict to the top million. SELECT TOP 1000000 CRETS FROM tblMeasureLogv2 SELECT TOP 1000000 CAST(FLOOR(CAST(CRETS AS FLOAT)) AS DATETIME) FROM tblMeasureLogv2 SELECT TOP 1000000 CONVERT(DATETIME, CONVERT(VARCHAR(10), CRETS, 120) , 120) FROM tblMeasureLogv2 SELECT TOP 1000000 DATEADD(DAY, DATEDIFF(DAY, 0, CRETS), 0) FROM tblMeasureLogv2 SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 1 ms. (1000000 row(s) affected) Table 'tblMeasureLogv2'. Scan count 1, logical reads 4752, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. SQL Server Execution Times: CPU time = 422 ms, elapsed time = 33803 ms. (1000000 row(s) affected) Table 'tblMeasureLogv2'. Scan count 1, logical reads 4752, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. SQL Server Execution Times: CPU time = 625 ms, elapsed time = 33545 ms. (1000000 row(s) affected) Table 'tblMeasureLogv2'. 
Scan count 1, logical reads 4752, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. SQL Server Execution Times: CPU time = 1953 ms, elapsed time = 33843 ms. (1000000 row(s) affected) Table 'tblMeasureLogv2'. Scan count 1, logical reads 4752, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. SQL Server Execution Times: CPU time = 531 ms, elapsed time = 33440 ms. SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 1 ms. SQL Server Execution Times: CPU time = 0 ms, elapsed time = 1 ms. What are we seeing here? Let's focus on the CPU time (we're looking at conversion), and we can see that we have the following numbers:
Pure-Select: 422
Floor-cast: 625
String-conv: 1953
DateAdd: 531
From this it looks to me like DateAdd (at least in this particular case) is slightly faster than the floor-cast method. Before you go there: I ran this test several times, with the order of the queries changed, with same-ish results. Is this something strange on my server, or what?
A: select cast(getdate() as varchar(11)) as datetime
A: To get YYYY-MM-DD, use: select convert(varchar(10), getdate(), 120) Edit: Oops, he wants a DateTime instead of a string, the equivalent of TRUNC() in Oracle. You can take what I posted and cast back to a DateTime: select convert(datetime, convert(varchar(10), getdate(), 120), 120)
A: CONVERT, FLOOR, and DATEDIFF will perform just the same. How to return the date part only from a SQL Server datetime datatype
A: Three methods are described in the link below. I haven't performance tested them to determine which is quickest. http://www.blackwasp.co.uk/SQLDateFromDateTime.aspx
A: CAST(FLOOR(CAST(yourdate AS DECIMAL(12, 5))) AS DATETIME) performs the best by far. You can see the proof & tests at "getting the date without time in sql server".
A: CONVERT(VARCHAR(10), GETDATE(), 120) AS [YYYY-MM-DD]
A: What about SELECT CAST(CAST(GETDATE() AS int) AS DATETIME)?
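One caution on that last suggestion, added here as an editor's note: in SQL Server, casting a datetime straight to int rounds to the nearest day rather than truncating, so any time from noon onward jumps to the next date. A quick sketch to see the difference:

DECLARE @d DATETIME;
SET @d = '2008-09-25 12:34:56';

-- CAST to int ROUNDS: afternoon times roll forward a day
SELECT CAST(CAST(@d AS INT) AS DATETIME);          -- 2008-09-26 00:00:00.000

-- FLOOR of the float value TRUNCATES, which is usually what you want
SELECT CAST(FLOOR(CAST(@d AS FLOAT)) AS DATETIME); -- 2008-09-25 00:00:00.000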
{ "language": "en", "url": "https://stackoverflow.com/questions/133081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "78" }
Q: What's the best method to call a Webservice from ASP? Note: not ASP.NET. I've read about various methods including using SOAPClient (is this part of the standard Windows 2003 install?), ServerXMLHTTP, and building up the XML from scratch and parsing the result manually. Has anyone ever done this? What did you use and would you recommend it?
A: Well, since the web service talks XML over standard HTTP you could roll your own using the latest XML parser from Microsoft. You should make sure you have the latest versions of MSXML and XML Core Services (see Microsoft Downloads). <% SoapUrl = "http://www.yourdomain.com/yourwebservice.asmx" set xmlhttp = CreateObject("MSXML2.ServerXMLHTTP") xmlhttp.open "GET", SoapUrl, false xmlhttp.send() Response.write xmlhttp.responseText set xmlhttp = nothing %> Here is a good tutorial on ASPFree.com
A: We use the MS Soap Toolkit version 3 here. It seems to work OK (I only wrote the services).
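One note on the ServerXMLHTTP snippet above: a plain GET against an .asmx URL only fetches the service's documentation page. To actually invoke a SOAP method you POST an envelope with the right headers. A rough sketch, where the method name, namespace, and SOAPAction value are placeholders you would take from the service's WSDL:

<%
Dim soapEnv, xmlhttp
soapEnv = "<?xml version=""1.0"" encoding=""utf-8""?>" & _
    "<soap:Envelope xmlns:soap=""http://schemas.xmlsoap.org/soap/envelope/"">" & _
    "<soap:Body>" & _
    "<GetSomething xmlns=""http://tempuri.org/""><id>42</id></GetSomething>" & _
    "</soap:Body></soap:Envelope>"

Set xmlhttp = CreateObject("MSXML2.ServerXMLHTTP")
xmlhttp.open "POST", "http://www.yourdomain.com/yourwebservice.asmx", False
xmlhttp.setRequestHeader "Content-Type", "text/xml; charset=utf-8"
xmlhttp.setRequestHeader "SOAPAction", "http://tempuri.org/GetSomething"
xmlhttp.send soapEnv

' The response is a SOAP envelope too; parse it rather than dumping it raw.
Response.Write Server.HTMLEncode(xmlhttp.responseText)
Set xmlhttp = Nothing
%>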
{ "language": "en", "url": "https://stackoverflow.com/questions/133087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you identify duplicate values in a numerical sequence using XPath 2.0? I have an XPath expression which provides me a sequence of values like the one below: 1 2 2 3 4 5 5 6 7 This is easy to convert to a sequence of unique values 1 2 3 4 5 6 7 using distinct-values(). However, what I want to extract is the list of duplicate values = 2 5. I can't think of an easy way to do this. Can anyone help?
A: What about: distinct-values( for $item in $seq return if (count($seq[. eq $item]) > 1) then $item else ()) This iterates through the items in the sequence, and returns an item if the number of items in the sequence that are equal to that item is greater than one. You then have to use distinct-values() to remove the duplicates from that list.
A: Use this simple XPath 2.0 expression: $vSeq[index-of($vSeq,.)[2]] where $vSeq is the sequence of values in which we want to find the duplicates. For an explanation of how this "works", see: http://dnovatchev.wordpress.com/2008/11/16/xpath-2-0-gems-find-all-duplicate-values-in-a-sequence-part-2/ TL;DR: as an illustration (the original post includes a picture), if the sequence is: $vSeq = 1, 2, 3, 2, 4, 5, 6, 7, 5, 7, 5 then evaluating the above XPath expression produces: 2, 5, 7
A: Calculate the difference between your original set and the set of distinct values. This is the set of numbers that occur more than once. Note that numbers in this result set are not necessarily distinct if they occur more than twice in the original sequence, so convert again to a set of distinct values if this is required.
A: What about XSLT? Is it applicable to your request? <xsl:for-each select="/r/a"> <xsl:variable name="cur" select="." /> <xsl:if test="count(./preceding-sibling::a[. = $cur]) > 0 and count(./following-sibling::a[. = $cur]) = 0"> <xsl:value-of select="." /> </xsl:if> </xsl:for-each>
{ "language": "en", "url": "https://stackoverflow.com/questions/133092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Best Practices for Internationalizing a Flex Application? I am looking into internationalizing a Flex application I am working on and I am curious if there are any best practices or recommendations for doing so. Googling for such information results in a handful of small articles and blog posts, each about doing it differently, and the advantages and disadvantages are not exactly clear. Edited to narrow scope:
* Need to support only two languages (en_CA and fr_CA)
* Need to be able to switch at runtime
A: Of course, after googling a bit more I came across an article on runtime localization and followed these steps: Add the following to the compiler arguments to specify the supported locales and their path (in Flex Builder, select the project and go to Properties -> Flex Compiler -> Additional Compiler Arguments): -locale=en_CA,fr_CA -source-path=locale/{locale} Create the following files: src/locale/en_CA/resources.properties src/locale/fr_CA/resources.properties And then the compiler complains: unable to open 'C:\Program Files\Adobe\Flex Builder 3\sdks\3.1.0\frameworks\locale\en_CA' which looks to be related to bug SDK-12507. Workaround: in the sdks\3.1.0\bin directory, execute the following commands: copylocale en_US en_CA copylocale en_US fr_CA This will create the locale directories in the Flex Builder installation and build some required resources into them. Then in your .mxml files, reference the resource bundle: <mx:Metadata> [ResourceBundle("resources")] </mx:Metadata> And internationalize the strings: <mx:TitleWindow title="Window Title"> becomes: <mx:TitleWindow title="{resourceManager.getString('resources', 'windowTitle')}"> and var name:String = "Name"; becomes: var name:String = resourceManager.getString("resources", "name"); And in your src/locale/en_CA/resources.properties file: windowTitle=Window Title name=Name
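Since the question also asks for runtime switching and both locales are compiled in via the -locale argument above, the standard Flex 3 way to flip languages is to reassign the resource manager's localeChain. A minimal sketch; the handler name and button are illustrative:

// Inside any UIComponent; elsewhere use ResourceManager.getInstance().
private function switchLocale(locale:String):void {
    // First entry wins; the second acts as a fallback for missing keys.
    resourceManager.localeChain = [locale, "en_CA"];
}

// e.g. <mx:Button label="FR" click="switchLocale('fr_CA')" />

Expressions bound with {resourceManager.getString(...)} update automatically when localeChain changes, so the UI re-renders in the new language without a restart.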
{ "language": "en", "url": "https://stackoverflow.com/questions/133094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How secure is basic forms authentication in asp.net? Imagine that you have a simple site with only 2 pages: login.aspx and secret.aspx. Your site is secured using nothing but ASP.net forms authentication and an ASP.net Login server control on login.aspx. The details are as follows: * *The site is configured to use the SqlMembershipProvider *The site denies all anonymous users *Cookies are disabled The are obviously many things to consider regarding security but I am more interested in the zero code out of box experience that comes with the .net framework. If, for the sake of this question, the only attack points are the username/password textboxes in login.aspx, can a hacker inject code that will allow them to gain access to our secret.aspx page? How secure is the zero code out-of-box experience that Microsoft provides? A: As far as I know password will be sent as plain text (but encoded). So the most important thing to do is to use HTTPS protocol on login screens. The other setting seems to be secure for me. A: With HTTP Basic Authentication, which is what the .NET basic forms authentication is using, in order to view the secret.aspx page, the browser must send a Base64 encoded concatenation of the username and password. Unless you utilize SSL, anyone who has access to scan the network between the server and the browser can read this information. They can decode the username and password. They can replay the username and password in the future to gain access to the secret.aspx page. That said, unless you use SSL, someone can also scan the whole session of someone else using secret.aspx, so in effect, they would have access to the content of the page as well. A: Well, try and look behind the scenes: Password Protection Applications that store user names, passwords, and other authentication information in a database should never store passwords in plaintext, lest the database be stolen or compromised. To that end, SqlMembershipProvider supports three storage formats ("encodings") for passwords and password answers. The provider's PasswordFormat property, which is initialized from the passwordFormat configuration attribute, determines which format is used: * *MembershipPasswordFormat.Clear, which stores passwords and password answers in plaintext. *MembershipPasswordFormat.Hashed (the default), which stores salted hashes generated from passwords and password answers. The salt is a random 128-bit value generated by the .NET Framework's RNGCryptoServiceProvider class. Each password/password answer pair is salted with this unique value, and the salt is stored in the aspnet_Membership table's PasswordSalt field. The result of hashing the password and the salt is stored in the Password field. Similarly, the result of hashing the password answer and the salt is stored in the PasswordAnswer field. *MembershipPasswordFormat.Encrypted, which stores encrypted passwords and password answers. SqlMembershipProvider encrypts passwords and password answers using the symmetric encryption/decryption key specified in the configuration section's decryptionKey attribute, and the encryption algorithm specified in the configuration section's decryption attribute. SqlMembershipProvider throws an exception if it is asked to encrypt passwords and password answers, and if decryptionKey is set to Autogenerate. This prevents a membership database containing encrypted passwords and password answers from becoming invalid if moved to another server or another application. 
So the strength of your security (out of the box) will depend on which password protection format strategy you are using:
* If you use clear text, it is obviously easier to hack into your system.
* Using Encrypted, on the other hand, security will depend on physical access to your machine (or at least, machine.config).
* Using Hashed passwords (the default) will guarantee security depending on: a) known reversals of the hashing strategy of the RNGCryptoServiceProvider class and b) access to the database to compromise the randomly generated salt. I do not know if it is possible to use some sort of rainbow table to hack into the default hash-based system.
For more details, check out this link: http://msdn.microsoft.com/en-us/library/aa478949.aspx
A: You still have some variables that aren't accounted for:
* security of the data store used by your membership provider (in this case, the Sql Server database)
* security of other sites hosted in the same IIS instance
* general network security of the machines involved in hosting the site, or on the same network where the site is hosted
* physical security of the machines hosting the site
* Are you using appropriate measures to encrypt authentication traffic? (HTTPS/SSL)
Not all of those issues are MS specific, but they're worth mentioning because any of them could easily outweigh the issue you're asking about, if not taken care of. But, for the purpose of your question I'll assume there aren't any problems with them. In that case, I'm pretty sure the forms authentication does what it's supposed to do. I don't think there's any currently active exploit out there.
A: If configured correctly through the membership provider, you will have an adequate level of security. Outside of that, the page might still be accessible through canonicalization attacks, but that has to do with your general security. I gave a presentation on using the Security Enterprise Application Blocks. You might want to read up on those and look into that when implementing security on your site, and just be aware of common security threats. No site will ever be 100% unhackable, given that you are on an open shared network; total security would be an unplugged server locked in a safe guarded 24/7 by the military (around DoD "A" level security, based on the Orange Book). But the out-of-the-box functionality of the Membership Providers (when configured correctly) will offer a good amount of security. Edit: Yeah, I agree with the other comment that was made: HTTPS on at least the login screens is a given, if you want to protect the usernames/passwords from packet sniffers and network monitors.
A: ASP.NET supports cookieless sessions, as this blog post shows. Instead of a session cookie, it uses an identifier in the URL to track users. I am not sure how secure this is, but I would think it is only as secure as the difficulty of brute-forcing the identity string. It looks like it works more or less out of the box; however, when redirecting a user and wanting to maintain session state you must include the session id. The blog post shows how to do that, as do many other articles on the web.
A: Here are two good articles from Microsoft on the subject: How To: Protect Forms Authentication in ASP.NET 2.0 INFO: Help Secure Forms Authentication by Using Secure Sockets Layer (SSL)
A: Carrying the session ID in the URL instead of a cookie is not secure enough; it has many problems (especially referrer leakage if you have any outbound links), even when you use HTTPS.
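For reference, a minimal web.config sketch of the out-of-the-box setup the question describes: forms authentication, anonymous users denied, and the SqlMembershipProvider storing salted hashes. The connection string name and login URL are placeholders, and cookieless="UseUri" matches the question's "cookies are disabled" constraint:

<system.web>
  <authentication mode="Forms">
    <!-- protection="All" signs and encrypts the ticket -->
    <forms loginUrl="login.aspx" protection="All" cookieless="UseUri" />
  </authentication>
  <authorization>
    <deny users="?" />  <!-- deny all anonymous users -->
  </authorization>
  <membership defaultProvider="SqlProvider">
    <providers>
      <add name="SqlProvider"
           type="System.Web.Security.SqlMembershipProvider"
           connectionStringName="MembershipDb"
           passwordFormat="Hashed" />  <!-- the default: salted-hash storage -->
    </providers>
  </membership>
</system.web>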
{ "language": "en", "url": "https://stackoverflow.com/questions/133106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: ASP.Net Medium Trust setup I am trying to configure the IPermission node as part of medium trust. However I am unable to find the valid values list for the PathDiscovery attribute on the node <IPermission class="FileIOPermission" version="1" Read="$AppDir$" Write="$AppDir$" Append="$AppDir$" PathDiscovery="$AppDir$"/> I need to set the permission so that the account will be able to access all subdirectories under the main path. currently a .svc (WCF service file) throws a 404 error because the ASP.Net account is not able to get it from a sub-folder couple of levels deep. I am trying to avoid changing the node to <IPermission class="FileIOPermission" version="1" Unrestricted="true"/> Any ideas? TIA A: I certainly agree that you shouldn't change the node to Unrestricted, as that would pretty much defeat the purpose of partial trust. According to the System.Security.Permissions.FileIOPermission documentation on MSDN, FileIOPermission is supposed to imply permissions to everything below that path as well. From that doc: Access to a folder implies access to all the files it contains, as well as access to all the files and folders in its subfolders. For example, Read access to C:\folder1\ implies Read access to C:\folder1\file1.txt, C:\folder1\folder2\, C:\folder1\folder2\file2.txt, and so on. Of course, building custom trust .config files is woefully poorly documented by Microsoft, so it's possible that the .config markup for FileIOPermission behaves differently than the code class... That would be surprising to me though. Is it possible some other problem is causing the 404? Double-check that the service is executing under the credentials you'd expect, and that the path is being evaluated correctly...
{ "language": "en", "url": "https://stackoverflow.com/questions/133109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I carry out math functions in the Ant 'ReplaceRegExp' task? I need to increment a number in a source file from an Ant build script. I can use the ReplaceRegExp task to find the number I want to increment, but how do I then increment that number within the replace attribute? Here's what I've got so far: <replaceregexp file="${basedir}/src/path/to/MyFile.java" match="MY_PROPERTY = ([0-9]{1,});" replace="MY_PROPERTY = \1;"/> In the replace attribute, how would I do replace="MY_PROPERTY = (\1 + 1);" I can't use the buildnumber task to store the value in a file since I'm already using that within the same build target. Is there another Ant task that will allow me to increment a property?
A: You can use something like: <propertyfile file="${version-file}"> <entry key="revision" type="string" operation="=" value="${revision}" /> <entry key="build" type="int" operation="+" value="1" /> </propertyfile> So the Ant task you want is propertyfile.
A: In Ant, you've always got the fallback "script" tag for little cases like this that don't quite fit into the mold. Here's a quick (messy) implementation of the above:
<property name="propertiesFile" location="test-file.txt"/>
<script language="javascript">
regex = /.*MY_PROPERTY = (\d+).*/;
t = java.io.File.createTempFile('test-file', 'txt');
w = new java.io.PrintWriter(t);
f = new java.io.File(propertiesFile);
r = new java.io.BufferedReader(new java.io.FileReader(f));
line = r.readLine();
while (line != null) {
    m = regex.exec(line);
    if (m) {
        // bump the matched number and rewrite the whole line
        val = parseInt(m[1]) + 1;
        line = 'MY_PROPERTY = ' + val;
    }
    w.println(line);
    line = r.readLine();
}
r.close();
w.close();
f.delete();
t.renameTo(f); // swap the rewritten temp file into place
</script>
A: Good question. It can be done in Perl in a similar way, but I think it's not possible in Ant, .NET and other areas. If I'm wrong, I'd really like to know, because that's a cool concept that I've used in Perl many times and could really use in situations like you've mentioned.
{ "language": "en", "url": "https://stackoverflow.com/questions/133111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: What is the list of JConnect error-codes? I have recently changed an application from storing the database username and password in a configuration file (gasp, password stored in plain text in a config file). The application now asks the user to type in her username and password before it can continue. The new version of the application now has to interrogate the SQLException to determine what caused the exception (invalid user name or password, database server unreachable, connection timeout, etc.) so that it can decide what to do next (prompt the user to correct the username and password, tell the user to try again later once the network issue has been sorted, reconnect invisibly, etc.). Trying to find the SQLException error codes (SQLException.getErrorCode()) that relate to these (and other) causes has been nearly impossible and we have been forced to guess (which at times can be dangerous). The Java API docs say this is vendor-specific. Does anyone have the error-codes that can be set by the Sybase JConnect JDBC drivers?
* JRE 1.5
* jConnect for JDBC 2.0 (spec version 5.2)
* Sybase IQ 12.7
A: It seems that most of the errorCodes in the caught SQLException are 0. That means you have to check the Connection/Statement/ResultSet's SQLWarnings (connection.getWarnings()) or the exceptions chained onto the thrown exception (sqlException.getNextException()). The tricky thing here is that the chained exceptions might be SQLException or IOException, and possibly others (I can't remember coming across others).
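A small sketch of walking the chain the answer describes; this is plain JDBC, nothing jConnect-specific, so the codes it prints are whatever the driver happens to supply:

import java.sql.SQLException;

final class SqlErrorDump {
    /** Print every link in a chained SQLException, not just the first. */
    static void dump(SQLException e) {
        for (SQLException cur = e; cur != null; cur = cur.getNextException()) {
            System.err.println("SQLState=" + cur.getSQLState()
                    + " errorCode=" + cur.getErrorCode()
                    + " message=" + cur.getMessage());
        }
    }
}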
{ "language": "en", "url": "https://stackoverflow.com/questions/133112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to change a Window Owner using its handle I want to make a .NET Form a TopMost Form for another external app (not .NET related, pure Win32) so it stays above that Win32 app, but not above the rest of the apps running. I have the handle of the Win32 app (provided by the Win32 app itself), and I've tried the Win32 SetParent() function, via P/Invoke in C#, but then my .NET Form gets confined inside the Win32 app and that's not what I want.
A: Yes! I already have a P/Invoke import of SetWindowLongPtr (which is x64 safe). And using Reflector I searched the Form.Owner property (i.e. the get_Owner(Form value) method) and managed to change the owner with SetWindowLongPtr(childHdl, -8, OwnerHdl) I was looking up what the -8 (0xFFFFFFFFFFFFFFF8) meant before I could post the solution here, but Joel has already pointed it out. Thanks!
A: I think what you're looking for is to P/Invoke SetWindowLongPtr(win32window, GWLP_HWNDPARENT, formhandle) Google Search
A: It has now been 12 years since this question was asked so I thought I would provide an updated answer from here. Do not call SetWindowLongPtr with the GWLP_HWNDPARENT index to change the parent of a child window. Instead, use the SetParent function.
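For completeness, a hedged C# sketch of the call described above. Note that on 32-bit Windows SetWindowLongPtr is only a macro over SetWindowLong, so a robust P/Invoke usually branches on pointer size; the class and method names here are mine:

using System;
using System.Runtime.InteropServices;

static class WindowOwner
{
    const int GWLP_HWNDPARENT = -8;  // the "-8" mentioned in the answer above

    [DllImport("user32.dll", EntryPoint = "SetWindowLongPtr", SetLastError = true)]
    static extern IntPtr SetWindowLongPtr64(IntPtr hWnd, int nIndex, IntPtr dwNewLong);

    [DllImport("user32.dll", EntryPoint = "SetWindowLong", SetLastError = true)]
    static extern int SetWindowLong32(IntPtr hWnd, int nIndex, int dwNewLong);

    // Make ownerHandle the owner (not the parent) of formHandle.
    public static void SetOwner(IntPtr formHandle, IntPtr ownerHandle)
    {
        if (IntPtr.Size == 8)
            SetWindowLongPtr64(formHandle, GWLP_HWNDPARENT, ownerHandle);
        else
            SetWindowLong32(formHandle, GWLP_HWNDPARENT, ownerHandle.ToInt32());
    }
}

Calling WindowOwner.SetOwner(myForm.Handle, win32AppHandle) should keep the form above the Win32 app without pinning it above every other application, which is the behavior the question asks for.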
{ "language": "en", "url": "https://stackoverflow.com/questions/133122", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Any other IDEs for Lotus Notes other than Domino Designer? Are there any other IDEs worth my time for Lotus Notes development? We're doing mostly LotusScript development and would kill for features of Eclipse or Visual Studio, like "Show Declaration". I know there's an Eclipse plugin for Java development in Notes, but it seems like it only does Java, and we have too many pieces of legacy code in LotusScript to abandon it.
A: Lotus Notes has moved to the Eclipse platform in version 8. You can run the client in two different modes: basic mode, which is the version we all know, or on the Eclipse platform (known as the standard). The IDE is also moving to Eclipse; version 8.5 beta 2 is currently available with the new Eclipse-based IDE. Bear in mind that it's a beta version and it's not feature complete.
A: Time is on our side. The Domino Designer based on Eclipse is now a free download from http://www.ibm.com/developerworks/downloads/ls/dominodesigner/learn.html It has brilliant Java and LotusScript editors with all the nice Eclipse features like refactoring and typeahead of custom classes. Every Domino addict should look at this. Admins too, as the above download includes the admin client.
A: As of version 8.5 Domino Designer runs as an Eclipse application. 8.5.1 will bring a whole ton of improvements including Eclipse-based LotusScript and Java editing as well as improvements to performance, stability and XPages. Matt
A: The closest thing you're going to find is the Teamstudio LotusScript Browser. It's not very good, but it is free and that almost makes up for it. Features:
* No support for keyboard shortcuts.
* Not completely integrated into the designer, so it is a bit sluggish.
* Only works in script libraries.
* It does have Find Definition and References functionality, which are almost useful.
There is also a rumored LotusScript plug-in for Eclipse.
A: Teamstudio sell a number of tools to assist your Lotus Notes development, and it looks like they can do some of the things you want, but it doesn't look like they can be assembled into an IDE. http://www.teamstudio.com/products/product-index.html (Disclosure: I worked for a sister company of Team Studio a number of years back, but never had much to do with their products)
A: You could give the Zeus IDE a test drive. It is highly configurable and language neutral, so it might be possible to configure it for Lotus Notes. Zeus automatically maintains a tags database based on the information produced by ctags, so provided ctags generates tags information for Lotus Notes it will be able to display, browse and search this tags information. PS: If you decide to give it a test drive and find it does not support Notes correctly, feel free to post a bug report to the Zeus forum. (source: zeusedit.com)
{ "language": "en", "url": "https://stackoverflow.com/questions/133125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Eclipse Search Menus disabled randomly I use Eclipse 3.3 in my daily work, and have also used Eclipse 3.2 extensively as well. In both versions, sometimes the Search options (Java Search, File Search, etc.) in the menu get disabled, seemingly at random times. However, with Ctrl+H, I am able to access the search functionality. Does anyone know why this happens? Has it been fixed in Eclipse 3.4?
A: Using Eclipse 4.3(!) this happened to me after doing a case-sensitive search. Window -> Close All Perspectives didn't fix it and neither did restarting Eclipse using -clean. While messing with the search box, I discovered that simply clicking on a previous search entry allowed me to edit it and search again! Clicking back to the case-sensitive search grayed the option out again. So before you reset anything in your workspace, try pulling up an older search entry using the Down Arrow.
A: window > close all perspectives works for me.
A: I think this answer is what you all need to solve the issue on all versions. I am using RAD 8 and have also faced this problem. I removed the org.eclipse.search directory in the current workspace's .metadata/.plugins folder, then restarted Eclipse. That's all.
A: I don't have an exact answer. I will recommend that you try to correlate the disablement with which perspective is active. Likewise, which view is active. I have been using 3.4 and have not experienced this issue.
A: Darn! I have that problem too -- in Eclipse 3.4.2. It seems to be related to the Navigator and Project Explorer views: - Switch to Debug perspective: Search menu items are there. - Switch to Java or Java EE perspective: Search menu items still there. - Click on a project in Navigator or Project Explorer: Search menu items all DISABLED. (Curse! I use search in Selected Resources a lot! )-: Hmmm... It may also depend on the file type currently open in the editor. (Like Java vs XML.)
A: Still present in Eclipse 3.5.2 -- and for the first time really sticky. I checked out the "close all opened files and open any other file afterwards" answer and that brought back the Search menu items. Additionally, if you were lucky and have the Search result view open, then indeed there is this little link "Open search dialog". By the way, lots of other project-related menu items seem to be greyed out together with this, and they did not come back :-( I did not really check whether those are only items that are usefully and intentionally greyed out in this situation.
A: I'm using RAD 7.5.1, which runs on Eclipse 3.4, and I get this problem frustratingly often. It doesn't matter which perspective or view I'm in, or which editor I have open. Restarting RAD usually clears it up, but because that's such a colossal pain, I found that you can get around it in the Search view: there is a link, "Start a search from the search dialog", which will bring up the search dialog. This isn't a great workaround because the link only shows if you have no search history. To do another search, you'll have to clear your search history in the view.
A: A late comment for anyone getting bitten by this, but I found "eclipse -clean" fixed it; this does a cleanup of the workspace before starting. Thanks to http://letsgetdugg.com/2009/04/19/recovering-a-corrupt-eclipse-workspace/ for the tip, after I guessed my workspace might be corrupt.
A: Before searching, check whether you have chosen an empty working set as the search scope; the Search dialog disables the Search button when one is selected.
And mine, too :)
A: window > close all perspectives worked for me too. But if you are just looking for a text search in the project, you can press Ctrl+Alt+G on marked text.
A: I have this problem with MyEclipse 7 (Eclipse 3.4) under Debian Lenny. Perspective doesn't seem to matter. I get around it with the shortcut Ctrl+H, but I was hoping for a better way.
A: I couldn't get it to work even when restarting Eclipse. Here's what worked for me: closing all open files and opening a different file. The different file happened to be .java, but I'm not sure if that had anything to do with it.
A: I get this problem from time to time. In the past I've fixed it by starting Eclipse with the -clean option. Once when that didn't work I created a new workspace. I followed these instructions for those two solutions. The clean option didn't work for me today and I found this thread because I didn't want to create a new workspace. The closing-all-files-and-reopening-one-file approach did work, however.
A: I had this issue also in Eclipse 3.6.2: Helios Service Release 1. I closed all the editor windows, and the search was enabled again.
A: Switching to another perspective, then back, works quickly for me.
A: I've faced a similar issue in the Ctrl+H "File Search" tab. The "Search" and "Replace" buttons were grayed out (disabled). The solution is to fill in the "File name patterns" text box (e.g., *.py). Maybe this is by design!
A: Just had this problem in Eclipse Neon 3. It is a very common problem in RAD. I could use Find in the Console view, then switch back to the source, but the search options stayed disabled; RAD would disable the find/search options per open source file. This is very frustrating.
A: I had this problem too. It appeared when I installed the m2eclipse plugin. I have not found a solution, but you can use the Ctrl+H shortcut instead. And you can navigate between tabs with the Ctrl+PgDown or Ctrl+PgUp keys. I uninstalled the following plugins and it worked:
* Maven integration
* PMD
* Eclipse Checkstyle plugin
* EclEmma (coverage)
I don't know which of those caused the problem. To uninstall a plugin: Help -> Software Updates... -> "Installed Software" tab.
{ "language": "en", "url": "https://stackoverflow.com/questions/133129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: What is the best way to configure iPlanet/Sun ONE to be the HTTP/HTTPS front end to a JBoss/Tomcat application? What is the best way to configure iPlanet/Sun ONE to be the HTTP/HTTPS front end to a JBoss/Tomcat application? Are there any performance considerations? How would this compare with the native integration between Apache httpd and Tomcat?
A: There are plugins available for iPlanet which do exactly this. Check out the Reverse Proxy plugin in the documentation for iPlanet. This may help: http://docs.sun.com/source/816-7156-10/agplugin.html#18923
{ "language": "en", "url": "https://stackoverflow.com/questions/133136", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: LLBLGen: How can I soft-delete an entry? I have inherited a project that uses LLBLGen Pro for the DB layer. The DB model requires that when an entry is deleted, a flag (DeletedDate) is set to the current time. The last programmer ignored this requirement and has used regular deletes throughout the entire application. Is there a way to set the code generator to do this automatically, or do I have to overload each delete operator for the entities that require it?
A: I implemented this in SQL Server 2005 using INSTEAD OF triggers on delete for any soft-delete table. The triggers set the delete flag and perform clean-up. The beauty of this solution is that it correctly handles deletes issued by any system that accesses the database. INSTEAD OF is relatively new in SQL Server; I know there's an Oracle equivalent. This solution also plays nicely with our O/R mapper -- I created views that filter out soft-deleted records and mapped those. The views are also used for all reporting.
A: You could create a custom task in LLBLGen that would override those for you when you are generating entities. Check out their template studio and template examples on the website.
A: It depends if you are using self-servicing or adapter. If self-servicing, you will need to modify the template so that it sets the flag for you rather than deleting the entity. If adapter, you can inherit from DataAccessAdapter and override the delete methods to set the flag for you rather than deleting the entities. It's generally a crappy solution for performance though, as every query then needs to filter out "deleted" entities - and because the selectivity on the "deleted" column won't be very high (all of your undeleted records are null - I'm guessing this will be the majority of them), indexing it doesn't gain you a great deal - you will end up with a lot of table scans.
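A rough sketch of the trigger-plus-view approach from the first answer, for SQL Server 2005; the table and key column are made-up names, and a real version would also have to decide how cascading deletes should behave:

CREATE TRIGGER trg_Orders_SoftDelete
ON dbo.Orders
INSTEAD OF DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Instead of removing rows, stamp them with the deletion time.
    UPDATE o
    SET    o.DeletedDate = GETDATE()
    FROM   dbo.Orders AS o
    JOIN   deleted    AS d ON d.OrderId = o.OrderId
    WHERE  o.DeletedDate IS NULL;   -- don't re-stamp already-deleted rows
END
GO

-- The view the O/R mapper binds to, hiding soft-deleted rows:
CREATE VIEW dbo.ActiveOrders AS
    SELECT * FROM dbo.Orders WHERE DeletedDate IS NULL;
GO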
{ "language": "en", "url": "https://stackoverflow.com/questions/133143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I implement quicksort using a batch file? While normally it's good to always choose the right language for the job, it can sometimes be instructive to try and do something in a language which is wildly inappropriate.
* It can help you understand the problem better. Maybe you don't have to solve it the way you thought you did.
* It can help you understand the language better. Maybe it supports more features than you realized.
And pushing this idea to its illogical conclusion... how would you implement quicksort in a batch file? Is it even possible?
A: Here's a more legible version that I wrote a while ago:
@echo off
echo Sorting: %*
set sorted=
:sort
:: If we've only got one left, we're done.
if "%2"=="" (
set sorted=%sorted% %1
:: We have to do this so that sorted gets actually set before we print it.
goto :finalset
)
:: Check if it's in order.
if %1 LEQ %2 (
:: Add the first value to sorted.
set sorted=%sorted% %1
shift /1
goto :sort
)
:: Out of order.
:: Reverse them and recursively resort.
set redo=%sorted% %2 %1
set sorted=
shift /1
shift /1
:loop
if "%1"=="" goto :endloop
set redo=%redo% %1
shift /1
goto :loop
:endloop
call :sort %redo%
:: When we get here, we'll have already echoed our result.
goto :eof
:finalset
echo Final Sort: %sorted%
goto :eof
Example: C:\Path> sort 19 zebra blah 1 interesting 21 bleh 14 think 2 ninety figure it out produces: Sorting: 19 zebra blah 1 interesting 21 bleh 14 think 2 ninety figure it out Final Sort: 1 2 14 19 21 blah bleh figure interesting it ninety out think zebra
A: Turns out, it's not as hard as you might think. The syntax is ugly as hell, but the batch syntax is actually capable of some surprising things, including recursion, local variables, and some surprisingly sophisticated parsing of strings. Don't get me wrong, it's a terrible language, but to my surprise, it isn't completely crippled. I don't think I learnt anything about quicksort, but I learned a lot about batch files! In any case, here's quicksort in a batch file - and I hope you have as much fun trying to understand the bizarre syntax while reading it as I did while writing it. :-)
@echo off
SETLOCAL ENABLEDELAYEDEXPANSION
call :qSort %*
for %%i in (%return%) do set results=!results! %%i
echo Sorted result: %results%
ENDLOCAL
goto :eof
:qSort
SETLOCAL
set list=%*
set size=0
set less=
set greater=
for %%i in (%*) do set /a size=size+1
if %size% LEQ 1 ENDLOCAL & set return=%list% & goto :eof
for /f "tokens=2* delims== " %%i in ('set list') do set p=%%i & set body=%%j
for %%x in (%body%) do (if %%x LEQ %p% (set less=%%x !less!) else (set greater=%%x !greater!))
call :qSort %less%
set sorted=%return%
call :qSort %greater%
set sorted=%sorted% %p% %return%
ENDLOCAL & set return=%sorted%
goto :eof
Call it by giving it a set of numbers to sort on the command line, separated by spaces. Example: C:\dev\sorting>qsort.bat 1 3 5 1 12 3 47 3 Sorted result: 1 1 3 3 3 5 12 47 The code is a bit of a pain to understand. It's basically standard quicksort. Key bits are that we're storing numbers in a string - poor man's array. The second for loop is pretty obscure, it's basically splitting the array into a head (the first element) and a tail (all other elements). Haskell does it with the notation x:xs, but batch files do it with a for loop called with the /f switch. Why? Why not? The SETLOCAL and ENDLOCAL calls let us do local variables - sort of.
SETLOCAL gives us a complete copy of the original variables, but all changes are completely wiped when we call ENDLOCAL, which means you can't even communicate with the calling function using globals. This explains the ugly "ENDLOCAL & set return=%sorted%" syntax, which actually works despite what logic would indicate. When the line is executed the sorted variable hasn't been wiped because the line hasn't been executed yet - then afterwards the return variable isn't wiped because the line has already been executed. Logical! Also, amusingly, you basically can't use variables inside a for loop because they can't change - which removes most of the point of having a for loop. The workaround is to set ENABLEDELAYEDEXPANSION which works, but makes the syntax even uglier than normal. Notice we now have a mix of variables referenced just by their name, by prefixing them with a single %, by prefixing them with two %, by wrapping them in %, or by wrapping them in !. And these different ways of referencing variables are almost completely NOT interchangeable! Other than that, it should be relatively easy to understand!
{ "language": "en", "url": "https://stackoverflow.com/questions/133154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Classic ASP and ASP.NET Integration In a previous job we had a classic ASP application that no one wanted to migrate to ASP.NET. The things that it did, it did very well. However there was some new functionality that needed to be added that just seemed best suited to ASP.NET. The decision was made to allow the system to become a weird hybrid of ASP and ASP.NET. Our biggest sticking point was session management, and we hacked together a solution to pass session values through form variables. I've talked to others that handled this same problem through cookies. Both methods seem a horrible kluge (in addition to being terribly insecure). Is there a better or cleaner way, or is this just such a bad idea to begin with that discussion on the topic is pointless?
A: Can you not persist session data to a server-side data store, i.e. an XML file, database, etc.? You could then pass just a hash (calculated based on some criteria that securely identifies the session) to a .NET page, which can then pick the data up from the data store using this identifier and populate your session data. It still means having to pass requests from ASP to ASP.NET through a proxy each time to ensure the latest session data is available in each app, but I don't know of an alternative way to achieve this, I'm afraid.
A: I have had to deal with the same problem. In my case, I encrypted a key into a cookie and used the database for any other information. I wrote the encryption in .NET and interop'd to decrypt the id on the ASP side. There is some weirdness dealing with base-64 strings in that ASP will not get the same string as .NET, so you may have to do as I did and rewrite the base-64 string to its hex equivalent or some similar lowest-common-denominator tactic. It is relatively secure (save an XSS attack).
A: Well, ultimately the best idea would probably have been to convert the ASP app to .NET. I think that probably goes without saying though. If security is a big concern, there are steps you can take as far as encryption and maintaining the integrity of the session information to make it more secure, such as some symmetric encryption and hashing and whatnot.
A: I don't know of any cleaner way to do this in the general case. But perhaps you can describe more specifically what state you have to share between the systems? There may be a cleaner solution in your specific case. Session object state is not always the best way of keeping state.
A: I would have to agree with Wes P... what are the long-term goals? If the long-term goal is to migrate the classic ASP application to ASP.NET, then I think a short-term fix, whatever it may be, will work. If the long-term plan is to keep the classic ASP application, then you would be better off going with a more robust solution for session management, similar to what Oglester recommended.
<system.webServer> <modules> <remove name="FormsAuthenticationModule" /> <add name="FormsAuthenticationModule" type="System.Web.Security.FormsAuthenticationModule" /> <remove name="UrlAuthorization" /> <add name="UrlAuthorization" type="System.Web.Security.UrlAuthorizationModule" /> <remove name="DefaultAuthentication" /> <add name="DefaultAuthentication" type="System.Web.Security.DefaultAuthenticationModule" /> <remove name="Session" /> <add name="Session" type="Microsoft.AspNet.SessionState.SessionStateModuleAsync, Microsoft.AspNet.SessionState.SessionStateModule, Version=1.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" preCondition="integratedMode" /> </modules> </system.webServer>
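To make the shared-store suggestion concrete, here is a hedged classic ASP sketch of the writing side: hand the visitor a random token cookie and persist session values against it in a table both apps can read. The DSN, table, and cookie names are invented, and real code would use parameterized commands rather than string concatenation:

<%
Dim token, conn
token = Request.Cookies("SharedSessionToken")
If token = "" Then
    ' Scriptlet.TypeLib generates a GUID we can use as the shared key.
    token = Left(CreateObject("Scriptlet.TypeLib").GUID, 38)
    Response.Cookies("SharedSessionToken") = token
End If

Set conn = CreateObject("ADODB.Connection")
conn.Open "DSN=SharedSessionDb"
conn.Execute "INSERT INTO SharedSession (Token, Name, Value) " & _
             "VALUES ('" & token & "', 'UserName', 'jdoe')"
conn.Close
Set conn = Nothing
' The ASP.NET side reads the same cookie and looks values up by token.
%>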
{ "language": "en", "url": "https://stackoverflow.com/questions/133173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Rules for a properly organized bugtracker (Mantis et al.) On a particular project we're working with a total of 10 team members. After about a year working on the project (and using Mantis as a bug/feature tracker ever since), the bugtracker gets more and more difficult to use, as no standard has been set up that explains how to create new tasks, how to comment on tasks, etc. This leads to multiple entries for the same bugs, the inability to easily find bugs when searching for them, etc. How do you organize your bugtracker? Do you use a lot of (sub)categories for different portions of your application (GUI, backend, etc.)? Do you use tags in the title of tasks (i.e. "[GUI][OptionPage] The error")? Is anyone in your team allowed to introduce new tasks, or is this step channeled through a single "Mantis master" (who would then know whether a new report is a duplicate or an entirely new entry)?
A: Always link a version control system commit to an issue, and back, so that you know which commits were made to solve which issue and why a certain commit was done.
A: What we did is to introduce a role for approving entries to the bug tracker. This role can be shared by different people. The process is either to approve, to approve with a small edit, or to reject the entry with a request for further editing or clarification. It is better for the general understanding if the role is not given to people working in the (core) team.
A: In a "large" Mantis system on the open web, I've seen the rules go something like this:
New: anyone can enter a bug.
Acknowledged: a select few people can upgrade it to this level. These people have seen every new bug for a while, and thus they'll know if it's a duplicate. Or they can pass it back to the reporter for clarification until they understand it well enough to do this job.
Confirmed: set by decision makers who basically say "We will be doing this".
I don't actually remember where it was, and more importantly I don't know how well it worked.
{ "language": "en", "url": "https://stackoverflow.com/questions/133175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Embedded Outlook View Control I am trying to make an Outlook 2003 add-in using Visual Studio 2008 on Windows XP SP3 and Internet Explorer 7. My add-in uses a custom Folder Home Page which displays my custom form, which wraps the Outlook View Control. I get a COM Exception with an 'Exception from HRESULT: 0xXXXXXXXX' description every time I try to set the Folder property of the OVC. The error code is a random number, different every time. It is not the first access to the control's properties; before that, the View and ViewXML properties are already set. The control is marked as Safe for Scripting. I am using the value of the CurrentFolder.FolderPath property of the active explorer, which seems to be the right one: Outlook.Explorer currentExplorer = app.ActiveExplorer(); if (currentExplorer != null) { ovcWrapper.Folder = currentExplorer.CurrentFolder.FolderPath; } This is the top of the stack trace: System.Runtime.InteropServices.COMException (0xXXXXXXXX): Exception from HRESULT: 0xXXXXXXXX at Microsoft.Office.Interop.OutlookViewCtl.ViewCtlClass.set_Folder(String pVal) at AxMicrosoft.Office.Interop.OutlookViewCtl.AxViewCtl.set_Folder(String value).. This is happening only if the folder is located in a non-default PST file. Changing to a folder inside the default PST file produces no exception. I must underline that everything worked just fine before I went on holiday :). It seems that Windows XP installed some updates which changed the default security of Internet Explorer or Outlook 2003 while I was absent. On another (virtual) machine with Office 2007 and Internet Explorer 6, without any updates, everything is working just fine.
A: After a while, I finally found out what the solution is: change the name of the external storage to something new. During startup the add-in loads the non-default PST file and changes its name (not the name of the pst file, but the name of the root folder) to "Documents". This is the code:
session.AddStore("C:\\test.pst"); // loads the existing store or creates a new one, if there is none
storage = session.Folders.GetLast(); // grabs the root folder of the new file storage
if (storage.Name != storageName) // storageName is defined elsewhere; a brand new file storage has a default name
{
    storage.Name = "Documents";
    session.RemoveStore(storage); // to apply the new storage name, it has to be removed and added again
    session.AddStore(storagePath); // storagePath is defined elsewhere
}
The solution is to stop using 'Documents' as the name and pick something new; the problem is not tied to that specific name.
A: Dobri Dan, nency :) I don't know if I can really offer a "silver bullet" solution given the information here... but here are a few ideas/notes to try out: Having worked with Outlook on a few projects in the past, I can tell you that it is a funny bird sometimes when it comes to giving/granting access to outside users/processes. It sometimes requires the user to manually confirm access or log in... so make certain that you have app.Session.Logon() taken care of somewhere. The other thing I notice is the use of app.ActiveExplorer(). Make certain that this function is returning exactly what you think it is; it takes the topmost window on the user's desktop... which is usually, but not always, the window you are trying to work with, so just double-check.
{ "language": "en", "url": "https://stackoverflow.com/questions/133194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: In Perforce, how do I get a list of checked out files? How do I get a list of the files checked out by users (including the usernames) using P4V or P4? I want to provide a depot location and see a list of any files under that location (including subfolders) that are checked out.
A: In case you want to search for a particular user: p4 opened -u the_user_name In case you want to search for a particular changelist: p4 opened -u the_user_name -c cl_number
A: I just want to point out something about the command line arguments. It is important to add the "/..." after the folder you want to look over because it will tell Perforce to do it recursively. So, I was trying this at the beginning: p4 opened -a //myP4Path/dev_project which wasn't working until I did this: p4 opened -a //myP4Path/dev_project/...
A: From the command line: p4 opened -a //depot/Your/Location/... The ... indicates that subfolders should be included.
A: Seeing as you also asked about P4V and only had command line answers so far, here's what you do for P4V. The "Pending" pane gets you part way to what you want. Ensure the "User" and "Workspace" filters are cleared, and you'll get a list of all files grouped by changelist and client spec. Not as clean as the straight list of files you get when using the P4 command line as suggested by Iain and Mark, but it may help in some situations. An alternative is to create a custom menu in P4V that uses one of the command line solutions suggested. For example:
* Tools -> Manage Custom Tools
* New
* Call it something, e.g. Open files by user
* Check "Add to applicable context menus"
* In the Application field, browse to p4.exe
* In Arguments, type opened -a %D (the latter takes the currently selected depot path)
* Check the box to run in a console.
I'm sure you could fancy this up a bit if needed to filter the output.
A: You can also restrict the output of p4 opened like so: p4 opened -C <client-spec> //depot/... to get a list of files opened on that client spec. p4 opened //depot/... will give you a list of files opened by the current P4USER.
A: In P4V: try to rename the top directory. You will get a warning and a list of the currently checked-out files with user names.
{ "language": "en", "url": "https://stackoverflow.com/questions/133204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45" }
Q: Is there a typical state machine implementation pattern? We need to implement a simple state machine in C. Is a standard switch statement the best way to go? We have a current state (state) and a trigger for the transition.

switch(state)
{
    case STATE_1:
        state = DoState1(transition);
        break;
    case STATE_2:
        state = DoState2(transition);
        break;
}
...
int DoState2(int transition)
{
    // Do State Work
    ...
    if(transition == FROM_STATE_2)
    {
        // New state when doing STATE 2 -> STATE 2
    }
    if(transition == FROM_STATE_1)
    {
        // New state when moving STATE 1 -> STATE 2
    }
    return new_state;
}

Is there a better way for simple state machines?

EDIT: For C++, I think the Boost Statechart library might be the way to go. However, it does not help with C. Let's concentrate on the C use case.

A: For simple cases, you can use your switch style method. What I have found works well in the past is to deal with transitions too:

static int current_state; // should always hold the current state -- and probably be an enum or something

void state_leave(int new_state)
{
    // do processing on what it means to leave the current state,
    // which might be dependent on the new state
}

void state_enter(int new_state)
{
    // do processing on what it means to enter the new state,
    // might be dependent on the current state
    current_state = new_state;
}

void state_process()
{
    // switch statement to handle current state
}

I don't know anything about the boost library, but this type of approach is dead simple, doesn't require any external dependencies, and is easy to implement.

A: switch() is a powerful and standard way of implementing state machines in C, but it can decrease maintainability if you have a large number of states. Another common method is to use function pointers to store the next state. This simple example implements a set/reset flip-flop:

/* Implement each state as a function with the same prototype */
void state_one(int set, int reset);
void state_two(int set, int reset);

/* Store a pointer to the next state */
void (*next_state)(int set, int reset) = state_one;

/* Users should call next_state(set, reset). This could also be wrapped by a
   real function that validated input and dealt with output rather than
   calling the function pointer directly. */

/* State one transitions to state two if set is true */
void state_one(int set, int reset)
{
    if(set)
        next_state = state_two;
}

/* State two transitions to state one if reset is true */
void state_two(int set, int reset)
{
    if(reset)
        next_state = state_one;
}

A: I found a really slick C implementation of a Moore FSM in the edx.org course Embedded Systems - Shape the World UTAustinX - UT.6.02x, chapter 10, by Jonathan Valvano and Ramesh Yerraballi...

struct State {
    unsigned long Out;     // 6-bit pattern to output
    unsigned long Time;    // delay in 10ms units
    unsigned long Next[4]; // next state for inputs 0,1,2,3
};
typedef const struct State STyp;

//this example has 4 states, defining constants/symbols using #define
#define goN   0
#define waitN 1
#define goE   2
#define waitE 3

//this is the full FSM logic coded into one large array of output values, delays,
//and next states (indexed by values of the inputs)
STyp FSM[4]={
    {0x21,3000,{goN,waitN,goN,waitN}},
    {0x22, 500,{goE,goE,goE,goE}},
    {0x0C,3000,{goE,goE,waitE,waitE}},
    {0x14, 500,{goN,goN,goN,goN}}};

unsigned long currentState; // index to the current state

//super simple controller follows
int main(void){
    volatile unsigned long delay;
    //embedded micro-controller configuration omitted [...]
    currentState = goN;
    while(1){
        LIGHTS = FSM[currentState].Out; // set output lines (from the FSM table)
        SysTick_Wait10ms(FSM[currentState].Time);
        currentState = FSM[currentState].Next[INPUT_SENSORS];
    }
}

A: You might have seen my answer to another C question where I mentioned FSMs! Here is how I do it:

FSM {
    STATE(x) {
        ...
        NEXTSTATE(y);
    }

    STATE(y) {
        ...
        if (x == 0)
            NEXTSTATE(y);
        else
            NEXTSTATE(x);
    }
}

With the following macros defined:

#define FSM
#define STATE(x)     s_##x :
#define NEXTSTATE(x) goto s_##x

This can be modified to suit the specific case. For example, you may have a file FSMFILE that you want to drive your FSM, so you could incorporate the action of reading the next char into the macro itself:

#define FSM
#define STATE(x)        s_##x : FSMCHR = fgetc(FSMFILE); sn_##x :
#define NEXTSTATE(x)    goto s_##x
#define NEXTSTATE_NR(x) goto sn_##x

Now you have two types of transitions: one goes to a state and reads a new character, the other goes to a state without consuming any input. You can also automate the handling of EOF with something like:

#define STATE(x) s_##x : if ((FSMCHR = fgetc(FSMFILE)) == EOF)\
                             goto sx_endfsm;\
                         sn_##x :

#define ENDFSM sx_endfsm:

The good thing about this approach is that you can directly translate a state diagram you draw into working code and, conversely, you can easily draw a state diagram from the code. In other techniques for implementing an FSM the structure of the transitions is buried in control structures (while, if, switch ...) and controlled by variable values (typically a state variable), and it may be a complex task to relate the nice diagram to convoluted code. I learned this technique from an article that appeared in the great "Computer Language" magazine which, unfortunately, is no longer published.

A: In Martin Fowler's UML Distilled, he states (no pun intended) in Chapter 10 State Machine Diagrams (emphasis mine):

A state diagram can be implemented in three main ways: nested switch, the State pattern, and state tables.

Let's use a simplified example of the states of a mobile phone's display:

Nested switch

Fowler gave an example of C# code, but I've adapted it to my example.

public void HandleEvent(PhoneEvent anEvent)
{
    switch (CurrentState)
    {
        case PhoneState.ScreenOff:
            switch (anEvent)
            {
                case PhoneEvent.PressButton:
                    if (powerLow) // guard condition
                    {
                        DisplayLowPowerMessage(); // action
                        // CurrentState = PhoneState.ScreenOff;
                    }
                    else
                    {
                        CurrentState = PhoneState.ScreenOn;
                    }
                    break;
                case PhoneEvent.PlugPower:
                    CurrentState = PhoneState.ScreenCharging;
                    break;
            }
            break;
        case PhoneState.ScreenOn:
            switch (anEvent)
            {
                case PhoneEvent.PressButton:
                    CurrentState = PhoneState.ScreenOff;
                    break;
                case PhoneEvent.PlugPower:
                    CurrentState = PhoneState.ScreenCharging;
                    break;
            }
            break;
        case PhoneState.ScreenCharging:
            switch (anEvent)
            {
                case PhoneEvent.UnplugPower:
                    CurrentState = PhoneState.ScreenOff;
                    break;
            }
            break;
    }
}

State pattern

Here's an implementation of my example with the GoF State pattern:

State Tables

Taking inspiration from Fowler, here's a table for my example:

Source State    Target State    Event        Guard     Action
--------------------------------------------------------------------------
ScreenOff       ScreenOff       pressButton  powerLow  displayLowPowerMessage
ScreenOff       ScreenOn        pressButton  !powerLow
ScreenOn        ScreenOff       pressButton
ScreenOff       ScreenCharging  plugPower
ScreenOn        ScreenCharging  plugPower
ScreenCharging  ScreenOff       unplugPower

Comparison

Nested switch keeps all the logic in one spot, but the code can be hard to read when there are a lot of states and transitions.
It's possibly more secure and easier to validate than the other approaches (no polymorphism or interpreting). The State pattern implementation potentially spreads the logic over several separate classes, which may make understanding it as a whole a problem. On the other hand, the small classes are easy to understand separately. The design is particularly fragile if you change the behavior by adding or removing transitions, as they're methods in the hierarchy and there could be lots of changes to the code. If you live by the design principle of small interfaces, you'll see this pattern doesn't really do so well. However, if the state machine is stable, then such changes won't be needed. The state tables approach requires writing some kind of interpreter for the content (this might be easier if you have reflection in the language you're using), which could be a lot of work to do up front. As Fowler points out, if your table is separate from your code, you could modify the behavior of your software without recompiling. This has some security implications, however; the software is behaving based on the contents of an external file.

Edit (not really for C language)

There is a fluent interface (aka internal Domain Specific Language) approach, too, which is probably facilitated by languages that have first-class functions. The Stateless library exists and that blog shows a simple example with code. A Java implementation (pre Java 8) is discussed. I was shown a Python example on GitHub as well.

A: This article is a good one for the State pattern (though it is C++, not specifically C). If you can put your hands on the book "Head First Design Patterns", the explanation and example are very clear.

A: You might want to look into the libero FSM generator software. From a state description language and/or a (Windows) state diagram editor you may generate code for C, C++, Java and many others... plus nice documentation and diagrams. Source and binaries from iMatix.

A: One of my favourite patterns is the State design pattern: respond or behave differently to the same given set of inputs. One of the problems with using switch/case statements for state machines is that as you create more states, the switch/cases become harder/unwieldier to read/maintain, promote unorganized spaghetti code, and become increasingly difficult to change without breaking something. I find using design patterns helps me to organize my data better, which is the whole point of abstraction. Instead of designing your state code around the state you came from, structure your code so that it records the state when you enter a new state. That way, you effectively get a record of your previous state.
I like @JoshPetit's answer, and have taken his solution one step further, taken straight from the GoF book:

stateCtxt.h:

#define STATE void *

typedef enum fsmSignal
{
    eEnter = 0,
    eNormal,
    eExit
} FsmSignalT;

// StateT is an enum (or other id type) that you can define any which way you want
typedef int StateT;

typedef struct fsm
{
    FsmSignalT signal; // Pending signal for the state machine
    StateT currentState;
} FsmT;

extern int STATECTXT_Init(void);
/* optionally allow the client context to set the target state */
extern void STATECTXT_Set(StateT stateID);
extern void STATECTXT_Handle(void *pvEvent);

stateCtxt.c:

#include "stateCtxt.h"
#include "statehandlers.h"

typedef STATE (*pfnStateT)( FsmT *fsm, void *pvEvent );

static FsmT fsm;
static pfnStateT UsbState;

int STATECTXT_Init(void)
{
    UsbState = State1;
    fsm.signal = eEnter;
    fsm.currentState = '1'; // use an enum for better maintainability
    (*UsbState)( &fsm, NULL );
    return 0;
}

static void ChangeState( FsmT *pFsm, pfnStateT targetState )
{
    // Check to see if the state has changed
    if (targetState != NULL)
    {
        // Call the current state's exit event
        pFsm->signal = eExit;
        STATE dummyState = (*UsbState)( pFsm, NULL );

        // Update the state machine structure
        UsbState = targetState;

        // Call the new state's enter event
        pFsm->signal = eEnter;
        dummyState = (*UsbState)( pFsm, NULL );
        (void)dummyState;
    }
}

void STATECTXT_Handle(void *pvEvent)
{
    STATE newState;
    if (UsbState != NULL)
    {
        fsm.signal = eNormal;
        newState = (*UsbState)( &fsm, pvEvent );
        ChangeState( &fsm, (pfnStateT)newState ); // cast back from the STATE (void *) alias
    }
}

void STATECTXT_Set(StateT stateID)
{
    switch (stateID)
    {
        case '1':
            ChangeState( &fsm, State1 );
            break;
        case '2':
            ChangeState( &fsm, State2 );
            break;
        case '3':
            ChangeState( &fsm, State3 );
            break;
    }
}

statehandlers.h:

/* define the state handlers */
extern STATE State1(FsmT *fsm, void *pvEvent);
extern STATE State2(FsmT *fsm, void *pvEvent);
extern STATE State3(FsmT *fsm, void *pvEvent);

statehandlers.c:

#include "stateCtxt.h"
#include "statehandlers.h"

/* Define behaviour for a given set of inputs */
STATE State1(FsmT *fsm, void *pvEvent)
{
    STATE nextState;
    /* do some state specific behaviours here */

    /* fsm->currentState currently contains the previous state
     * just before it gets updated, so you can implement behaviours
     * which depend on the previous state here */
    fsm->currentState = '1';

    /* Now, specify the next state to transition to, or return NULL
     * if you're still waiting for more stuff to process. */
    switch (fsm->signal)
    {
        case eEnter:
            nextState = (STATE)State2;
            break;
        case eNormal:
            nextState = NULL;
            break;
        case eExit:
            nextState = (STATE)State2;
            break;
    }

    return nextState;
}

STATE State2(FsmT *fsm, void *pvEvent)
{
    /* do some state specific behaviours here */
    fsm->currentState = '2';
    /* Now, specify the next state to transition to */
    return (STATE)State3;
}

STATE State3(FsmT *fsm, void *pvEvent)
{
    /* do some state specific behaviours here */
    fsm->currentState = '3';
    /* Now, specify the next state to transition to */
    return (STATE)State1;
}

For most state machines, especially finite state machines, each state will know what its next state should be, and the criteria for transitioning to its next state. For loose state designs, this may not be the case, hence the option to expose the API for transitioning states. If you desire more abstraction, each state handler can be separated out into its own file; these are equivalent to the concrete state handlers in the GoF book. If your design is simple with only a few states, then both stateCtxt.c and statehandlers.c can be combined into a single file for simplicity.
A: I prefer to use a table driven approach for most state machines:

typedef enum {
    STATE_INITIAL,
    STATE_FOO,
    STATE_BAR,
    NUM_STATES
} state_t;

typedef struct instance_data instance_data_t;
typedef state_t state_func_t( instance_data_t *data );

state_t do_state_initial( instance_data_t *data );
state_t do_state_foo( instance_data_t *data );
state_t do_state_bar( instance_data_t *data );

state_func_t* const state_table[ NUM_STATES ] = {
    do_state_initial, do_state_foo, do_state_bar
};

state_t run_state( state_t cur_state, instance_data_t *data ) {
    return state_table[ cur_state ]( data );
}

int main( void ) {
    state_t cur_state = STATE_INITIAL;
    instance_data_t data;

    while ( 1 ) {
        cur_state = run_state( cur_state, &data );
        // do other program logic, run other state machines, etc
    }
}

This can of course be extended to support multiple state machines, etc. Transition actions can be accommodated as well:

typedef void transition_func_t( instance_data_t *data );

void do_initial_to_foo( instance_data_t *data );
void do_foo_to_bar( instance_data_t *data );
void do_bar_to_initial( instance_data_t *data );
void do_bar_to_foo( instance_data_t *data );
void do_bar_to_bar( instance_data_t *data );

transition_func_t * const transition_table[ NUM_STATES ][ NUM_STATES ] = {
    { NULL,              do_initial_to_foo, NULL          },
    { NULL,              NULL,              do_foo_to_bar },
    { do_bar_to_initial, do_bar_to_foo,     do_bar_to_bar }
};

state_t run_state( state_t cur_state, instance_data_t *data ) {
    state_t new_state = state_table[ cur_state ]( data );
    transition_func_t *transition = transition_table[ cur_state ][ new_state ];

    if ( transition ) {
        transition( data );
    }

    return new_state;
}

The table driven approach is easier to maintain and extend and simpler to map to state diagrams.

A: I also have used the table approach. However, there is overhead. Why store a second list of pointers? A function name in C without the () is already a constant pointer. So you can do:

struct state;
typedef void (*state_func_t)( struct state* );

typedef struct state {
    state_func_t function;
    // other stateful data
} state_t;

void do_state_initial( state_t* );
void do_state_foo( state_t* );
void do_state_bar( state_t* );

void run_state( state_t* i ) {
    i->function(i);
}

int main( void ) {
    state_t state = { do_state_initial };

    while ( 1 ) {
        run_state( &state );
        // do other program logic, run other state machines, etc
    }
}

Of course depending on your fear factor (i.e. safety vs speed) you may want to check for valid pointers. For state machines larger than three or so states, the approach above should be fewer instructions than an equivalent switch or table approach. You could even macro-ize it as:

#define RUN_STATE(state_ptr_) ((state_ptr_)->function(state_ptr_))

Also, I feel from the OP's example that there is a simplification that should be done when thinking about / designing a state machine. I don't think the transitioning state should be used for logic. Each state function should be able to perform its given role without explicit knowledge of past state(s). Basically you design for how to transition from the state you are in to another state. Finally, don't start the design of a state machine based on "functional" boundaries, use sub-functions for that. Instead divide the states based on when you will have to wait for something to happen before you can continue. This will help minimize the number of times you have to run the state machine before you get a result. This can be important when writing I/O functions, or interrupt handlers.
Also, a few pros and cons of the classic switch statement:

Pros:
* it is in the language, so it is documented and clear
* states are defined where they are called
* can execute multiple states in one function call
* code common to all states can be executed before and after the switch statement

Cons:
* can execute multiple states in one function call
* code common to all states can be executed before and after the switch statement
* switch implementation can be slow

Note the two attributes that are both pro and con. I think the switch allows the opportunity for too much sharing between states, and the interdependency between states can become unmanageable. However, for a small number of states, it may be the most readable and maintainable.

A: There is also the logic grid, which is more maintainable as the state machine gets bigger.

A: For a simple state machine just use a switch statement and an enum type for your state. Do your transitions inside the switch statement based on your input. In a real program you would obviously change the "if(input)" to check for your transition points. Hope this helps.

typedef enum
{
    STATE_1 = 0,
    STATE_2,
    STATE_3
} my_state_t;

my_state_t state = STATE_1;

void foo(char input)
{
    ...
    switch(state)
    {
        case STATE_1:
            if(input)
                state = STATE_2;
            break;
        case STATE_2:
            if(input)
                state = STATE_3;
            else
                state = STATE_1;
            break;
        case STATE_3:
            ...
            break;
    }
    ...
}

A: In my experience, using the 'switch' statement is the standard way to handle multiple possible states. Although I am surprised that you are passing in a transition value to the per-state processing. I thought the whole point of a state machine was that each state performed a single action. Then the next action/input determines which new state to transition into. So I would have expected each state processing function to immediately perform whatever is fixed for entering the state and then afterwards decide if a transition is needed to another state.

A: There is a book titled Practical Statecharts in C/C++. However, it is way too heavyweight for what we need.

A: For compilers which support __COUNTER__, you can use it for simple (but large) state machines.

#define START 0
#define END 1000

int run = 1;
state = START;

while(run)
{
    switch (state)
    {
        case __COUNTER__:
            //do something
            state++;
            break;
        case __COUNTER__:
            //do something
            if (input)
                state = END;
            else
                state++;
            break;
        .
        .
        .
        case __COUNTER__:
            //do something
            if (input)
                state = START;
            else
                state++;
            break;
        case __COUNTER__:
            //do something
            state++;
            break;
        case END:
            //do something
            run = 0;
            state = START;
            break;
        default:
            state++;
            break;
    }
}

The advantage of using __COUNTER__ instead of hard-coded numbers is that you can add states in the middle of other states without renumbering everything every time. If the compiler doesn't support __COUNTER__, it is possible, with precaution, to use __LINE__ in a limited way.

A: You can use the minimalist UML state machine framework in C: https://github.com/kiishor/UML-State-Machine-in-C It supports both finite and hierarchical state machines. It has only 3 APIs, 2 structures and 1 enumeration. The state machine is represented by the state_machine_t structure. It is an abstract structure that can be inherited to create a state machine.

//! Abstract state machine structure
struct state_machine_t
{
    uint32_t Event;       //!< Pending event for the state machine
    const state_t* State; //!< State of the state machine
};

A state is represented by a pointer to the state_t structure in the framework.
If the framework is configured for a finite state machine, then state_t contains:

typedef struct finite_state_t state_t; // finite state structure

typedef struct finite_state_t{
    state_handler Handler; //!< State handler function (function pointer)
    state_handler Entry;   //!< Entry action for state (function pointer)
    state_handler Exit;    //!< Exit action for state (function pointer)
}finite_state_t;

The framework provides an API, dispatch_event, to dispatch an event to the state machine, and two APIs for state traversal:

state_machine_result_t dispatch_event(state_machine_t* const pState_Machine[], uint32_t quantity);
state_machine_result_t switch_state(state_machine_t* const, const state_t*);
state_machine_result_t traverse_state(state_machine_t* const, const state_t*);

For further details on how to implement a hierarchical state machine, refer to the GitHub repository.

Code examples:
https://github.com/kiishor/UML-State-Machine-in-C/blob/master/demo/simple_state_machine/readme.md
https://github.com/kiishor/UML-State-Machine-in-C/blob/master/demo/simple_state_machine_enhanced/readme.md

A: In C++, consider the State pattern.

A: Your question is similar to "is there a typical database implementation pattern?" The answer depends upon what you want to achieve. If you want to implement a larger deterministic state machine, you may use a model and a state machine generator. Examples can be viewed at www.StateSoft.org - SM Gallery. Janusz Dobrowolski

A: I would also prefer a table driven approach. I have used switch statements in the past. The main problem I have encountered is debugging transitions and ensuring that the designed state machine has been implemented properly. This occurred in cases where there was a large number of states and events. With the table driven approach, the states and transitions are summarized in one place. Below is a demo of this approach.

/* Demo implementations of State Machines
 *
 * This demo leverages a table driven approach and function pointers
 *
 * Example state machine to be implemented
 *
 *        +-----+  Event1   +-----+  Event2   +-----+
 *  O---->|  A  +---------->|  B  +---------->|  C  |
 *        +-----+           +-----+           +-----+
 *           ^                                   |
 *           |              Event3               |
 *           +-----------------------------------+
 *
 * States: A, B, C
 * Events: NoEvent (not shown, holding current state), Event1, Event2, Event3
 *
 * Partly leveraged the example here: http://web.archive.org/web/20160808120758/http://www.gedan.net/2009/03/18/finite-state-machine-matrix-style-c-implementation-function-pointers-addon/
 *
 * This sample code can be compiled and run using GCC.
* >> gcc -o demo_state_machine demo_state_machine.c * >> ./demo_state_machine */ #include <stdio.h> #include <assert.h> // Definitions of state id's, event id's, and function pointer #define N_STATES 3 #define N_EVENTS 4 typedef enum { STATE_A, STATE_B, STATE_C, } StateId; typedef enum { NOEVENT, EVENT1, EVENT2, EVENT3, } Event; typedef void (*StateRoutine)(); // Assert on number of states and events defined static_assert(STATE_C==N_STATES-1, "Number of states does not match defined number of states"); static_assert(EVENT3==N_EVENTS-1, "Number of events does not match defined number of events"); // Defining State, holds both state id and state routine typedef struct { StateId id; StateRoutine routine; } State; // General functions void evaluate_state(Event e); // State routines to be executed at each state void state_routine_a(void); void state_routine_b(void); void state_routine_c(void); // Defining each state with associated state routine const State state_a = {STATE_A, state_routine_a}; const State state_b = {STATE_B, state_routine_b}; const State state_c = {STATE_C, state_routine_c}; // Defning state transition matrix as visualized in the header (events not // defined, result in mainting the same state) State state_transition_mat[N_STATES][N_EVENTS] = { { state_a, state_b, state_a, state_a}, { state_b, state_b, state_c, state_b}, { state_c, state_c, state_c, state_a}}; // Define current state and initialize State current_state = state_a; int main() { while(1) { // Event to receive from user int ev; printf("----------------\n"); printf("Current state: %c\n", current_state.id + 65); printf("Event to occur: "); // Receive event from user scanf("%u", &ev); evaluate_state((Event) ev); // typecast to event enumeration type printf("-----------------\n"); }; return (0); } /* * Determine state based on event and perform state routine */ void evaluate_state(Event ev) { //Determine state based on event current_state = state_transition_mat[current_state.id][ev]; printf("Transitioned to state: %c\n", current_state.id + 65); // Run state routine (*current_state.routine)(); } /* * State routines */ void state_routine_a() { printf("State A routine ran. \n"); } void state_routine_b() { printf("State B routine ran. \n"); } void state_routine_c() { printf("State C routine ran. \n"); } A: Boost has the statechart library. http://www.boost.org/doc/libs/1_36_0/libs/statechart/doc/index.html I can't speak to the use of it, though. Not used it myself (yet)
{ "language": "en", "url": "https://stackoverflow.com/questions/133214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "133" }
Q: ASP.NET/IIS: 404 for all file types I set up a 404 handler page in web.config, but it works ONLY when the extension of the URL is .aspx (or another which is handled by ASP.NET). I know I can set up a static HTML page in the website options, but I want to have a page. Are there any options to assign an ASPX handler page for all request extensions in IIS?

A: For information: This is one of the several nice things that IIS7 brings - all pages are routed through the handler such that you can do custom 404s and - usefully - directory and file level security for any file (based on the same web.config stuff as for ASP.NET files prior to IIS7). So notionally "use IIS7" is an answer (will be "the" answer in time) - but of course it's not a terribly practical one if you're not hosting/being hosted on W2k8 (or higher).

A: The direct question was whether or not there are options to assign the ASPX handler to all request extensions: Yes, there is. I'll discuss how to do that shortly. First, I think the "hidden" question -- the answer you really want -- is whether or not there's a way to redirect all 404 errors for pages other than ASPX, ASMX, etc. Yes, there is, and this is the better choice if it'll solve the issue you're having. To redirect all 404s in IIS 6, right click your web application root (whether it be its own site or a virtual directory in the main site), and choose "Properties." From there, choose the "Custom Errors" tab. Find 404 in the list and change it to the redirect you want. Now, if that won't suffice -- and I really hope it does -- yes, you can run every page through the ASPX handler. However, doing so comes at a fairly high cost in terms of efficiency -- raw HTML/image serving is considerably faster than anything dynamic. To do this, right click your web application root and choose "Properties." Choose the "Home Directory" tab. Click "Configuration;" a new window will pop up. Copy the path from one of the ASP.NET page serves, and then use it for a wildcard application map. Bear in mind, again, this is the wrong answer most of the time. It will negatively impact your performance, and is the equivalent of using a chainsaw to carve a turkey. I highly recommend the first option over this one, if it will work out for you.

A: The web.config can only set up error pages for pages controlled by its web site. If you have any other pages outside the purview of the ASP.NET application, then you set up handling for them in IIS. There's an option in there for configuring the 404 page where you can point it to your custom page.

A: The only other thing I can think of is passing ALL extensions to ASP.NET. This way all types of files get processed by ASP.NET and your custom error page will work.

A: In the IIS application configuration, you can set a wildcard mapping (".*") to C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll

A:
* You can set up a wildcard mapping in IIS (Application configuration/Mappings/Wildcard mappings - just set aspnet_isapi.dll as the executable and uncheck the "Verify that file exists" box) that will route all incoming requests to your app - so you can control the behavior directly from it.
* You don't have to set up a static page in your IIS application settings. Imho, you should be able to set up a valid URL (e.g. /error_handler.aspx) from your app that will be used as the landing page in case of a specific server error.

A: In IIS you can set a Custom Error for 404 errors and direct it to a URL in the site properties.
By default it shows a static HTML file, C:\WINDOWS\help\iisHelp\common\404b.htm. You can change it to a relative URL on your site.
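For reference, the web.config side of this usually looks something like the following sketch (the page names are hypothetical); note that for non-ASP.NET extensions it only takes effect once requests are mapped to aspnet_isapi.dll as described above:

<configuration>
  <system.web>
    <!-- Redirects 404s to an ASPX page; this only fires for requests
         that actually reach the ASP.NET pipeline. -->
    <customErrors mode="On" defaultRedirect="~/Error.aspx">
      <error statusCode="404" redirect="~/NotFound.aspx" />
    </customErrors>
  </system.web>
</configuration>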
{ "language": "en", "url": "https://stackoverflow.com/questions/133225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Eval scripts in server side control How do I implement an Eval script in a server-side control? E.g.:

<a runat="server" href="?id=<%= Eval("Id") %>">hello world</a>

A: If the server-side control is within a databound control (ListView, GridView, FormView, DetailsView), then the syntax is: <%# Eval("Id") %>. If the server-side control is not within a databound control, you will have to access it via the code-behind and set your attribute there.

A: As far as I know, it's <%# instead of <%=.

A: The Data-Binding expression is your friend; see MSDN for examples.
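To make the first answer concrete, here is a minimal hypothetical sketch of the anchor inside a databound Repeater; the data-binding form <%# ... %> only evaluates when the containing control is databound, and the control ID is made up:

<asp:Repeater ID="MyRepeater" runat="server">
  <ItemTemplate>
    <%-- single quotes around the attribute, since the expression itself contains double quotes --%>
    <a runat="server" href='<%# "?id=" + Eval("Id") %>'>hello world</a>
  </ItemTemplate>
</asp:Repeater>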
{ "language": "en", "url": "https://stackoverflow.com/questions/133226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I discover resources in a Java jar with a wildcard name? I want to discover all XML files that my ClassLoader is aware of using a wildcard pattern. Is there any way to do this?

A: A Spring ApplicationContext can do this trivially:

ApplicationContext context = new ClassPathXmlApplicationContext("applicationContext.xml");
Resource[] xmlResources = context.getResources("classpath:/**/*.xml");

See ResourcePatternResolver#getResources, or ApplicationContext.

A:

List<URL> resources = CPScanner.scanResources(new PackageNameFilter("net.sf.corn.cps.sample"), new ResourceNameFilter("A*.xml"));

Put this snippet in your pom.xml:

<dependency>
    <groupId>net.sf.corn</groupId>
    <artifactId>corn-cps</artifactId>
    <version>1.0.1</version>
</dependency>

A: It requires a little trickery, but here's a relevant blog entry. You first figure out the URLs of the jars, then open the jar and scan its contents. I think you would discover the URLs of all jars by looking for '/META-INF/MANIFEST.MF'. Directories would be another matter.

A: A JAR file is just another ZIP file, right? So I suppose you could iterate the jar files using http://java.sun.com/javase/6/docs/api/java/util/zip/ZipInputStream.html

I'm thinking something like:

ZipSearcher searcher = new ZipSearcher(new ZipInputStream(new FileInputStream("my.jar")));
List xmlFilenames = searcher.search(new RegexFilenameFilter(".xml$"));

Cheers. Keith.

A: Well, it is not from within Java, but

jar -tvf jarname | grep xml$

will show you all the XMLs in the jar.
{ "language": "en", "url": "https://stackoverflow.com/questions/133229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Building Eclipse plugins and features on the command line I have a bunch of Eclipse plugins and features, which I would like to build as part of a nightly (headless) build. I've managed to do this using a complicated setup involving manually specifying paths to Eclipse plugin jars, copying customTargets.xml, etc. Is there really no simpler solution? Isn't there a way to just point out my Update Site's site.xml and say "build"; i.e. the equivalent of clicking "Build All" in the Update Site project?

A: Given that the answers to this question are all 3-5 years old, I figure an update would be useful to others. For those who want to add the building of Eclipse plugins to the CI process, I recommend they check out the Eclipse Tycho project. This is essentially a Maven plugin that allows you to wrap Eclipse projects within a Maven project. With this we use Atlassian Bamboo to build our Eclipse plugin. This also allows us to use the Maven jarsigner plugin to sign our plugin files.

A: I've just been fighting with this problem myself. Are you using the productBuild script? Maybe putting your features into a product would help you out. I am doing a headless build on a product configuration. The only script that I customized was to add some ant tasks to customTargets.xml to get my sources from SVN and to do a little cleanup on JNLP manifests after the build, as I am using WebStart. Then you only need to invoke antRunner on the out-of-the-box productBuild.xml in the scripts/productBuild directory (in the pde-build plugin).

A: Check out Ant4Eclipse. I've used it to parse Eclipse's .classpath/.project files to determine project dependencies and classpaths. In combination with the Groovy Ant Task, I have automatically built multiple projects in Ant using the Eclipse project files for build information. A buildPlugin task exists, but I have not personally used it.

A: We are currently using PDE to automatically build features and our complete product. It works quite well. Make sure you use the right script for a product build or a feature build. Eclipse Help on using PDE EDIT: We've now migrated to Buckminster, which has an excellent command line interface.

A: You might look into Buckminster and Maven. There is a learning curve for sure, but they seem to do their jobs well.

A: We are using headlesseclipse, which can be found on Google Code: http://code.google.com/p/headlesseclipse/ It works quite well, and can easily automate command-line building of plugins and features. However, I have not yet found a way to automate building of the update site via the command line.
{ "language": "en", "url": "https://stackoverflow.com/questions/133234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: ASP.Net Session I want to store the "state" of some actions the user is performing in a series of different ASP.Net webforms. What are my choices for persisting state, and what are the pros/cons of each solution? I have been using Session objects, and using some helper methods to strongly type the objects:

public static Account GetCurrentAccount(HttpSessionState session)
{
    return (Account)session[ACCOUNT];
}

public static void SetCurrentAccount(Account obj, HttpSessionState session)
{
    session[ACCOUNT] = obj;
}

I have been told by numerous sources that "Session is evil", so that is really the root cause of this question. I want to know what you consider "best practice", and why.

A: As for "Session being evil"... if you were developing in classic ASP I would have to agree, but ASP.NET/IIS does a much better job. The real question is what is the best way to maintain state. In our case, when it comes to the currently logged in user, we store that object in Session, as we are constantly referring to it for their name, email address, authorization and so forth. Other little tidbits of information that don't need any long-term persistence we handle with a combination of cookies and viewstate.

A: One of the reasons for its sinister reputation is that hurried developers overuse it with string literals in UI code (rather than a helper class like yours) as the item keys, and end up with a big bag of untestable promiscuous state. Some sort of wrapper is an entry-level requirement for non-evil session use.

A: When you want to store information that can be accessed globally in your web application, a way of doing this is the ThreadStatic attribute. This turns a static member of a class into a member that is shared by the current thread, but not other threads. The advantage of ThreadStatic is that you don't have to have a web context available. For instance, if you have a back end that does not reference System.Web, but want to share information there as well, you can set the user's id at the beginning of every request in the ThreadStatic property, and reference it in your dependency without needing access to the Session object. Because it is static but only to a single thread, we ensure that other simultaneous visitors don't get our session. This works, as long as you ensure that the property is reset for every request. This makes it an ideal companion to cookies.

A: There is nothing inherently evil about session state. There are a couple of things to keep in mind that might bite you though:

* If the user presses the browser back button you go back to the previous page but your session state is not reverted. So your CurrentAccount might not be what it originally was on the page.
* ASP.NET processes can get recycled by IIS. When that happens your next request will start a new process. If you are using in-process session state, the default, it will be gone :-(
* Session can also time out with the same result if the user isn't active for some time. This defaults to 20 minutes, so a nice lunch will do it.
* Using out-of-process session state requires all objects stored in session state to be serializable.
* If the user opens a second browser window he will expect to have a second and distinct application, but the session state is most likely going to be shared between the two. So changing the CurrentAccount in one browser window will do the same in the other.
A: I think using the Session object is OK in this case, but you should remember that Session can expire if there is no browser activity for a long time (the HttpSessionState.Timeout property determines in how many minutes the session-state provider terminates the session), so it's better to check that the value exists before returning it:

public static Account GetCurrentAccount(HttpSessionState session)
{
    if (session[ACCOUNT] != null)
        return (Account)session[ACCOUNT];
    else
        throw new Exception("Can't get current account. Session expired.");
}

A: http://www.tigraine.at/2008/07/17/session-handling-in-aspnet/ hope this helps.

A: Your two choices for temporarily storing form data are, first, to store each form's information in session state variable(s) and, second, to pass the form information along using URL parameters. Using cookies as a potential third option is simply not workable for the simple reason that many of your visitors are likely to have cookies turned off (this doesn't affect session cookies, however). Also, I am assuming by the nature of your question that you do not want to store this information in a database table until it is fully committed. Using Session variable(s) is the classic solution to this problem, but it does suffer from a few drawbacks. Among these are (1) large amounts of data can use up server RAM if you are using in-proc session management, (2) sharing session variables across multiple servers in a server farm requires additional considerations, and (3) a professionally-designed app must guard against session expiration (don't just cast a session variable and use it - if the session has expired the cast will throw an error). However, for the vast majority of applications, session variables are unquestionably the way to go. The alternative is to pass each form's information along in the URL. The primary problem with this approach is that you'll have to be extremely careful about "passing along" information. For example, if you are collecting information in four pages, you would need to collect information in the first, pass it in the URL to the second page where you must store it in that page's viewstate. Then, when calling the third page, you'll collect form data from the second page plus the viewstate variables and encode both in the URL, etc. If you have five or more pages or if the visitor will be jumping around the site, you'll have a real mess on your hands. Keep in mind also that all information will need to A) be serialized to a URL-safe string and B) encoded in such a manner as to prevent simple URL-based hacks (e.g. if you put the price in clear text and pass it along, someone could change the price). Note that you can reduce some of these problems by creating a kind of "session manager" and having it manage the URL strings for you, but you would still have to be extremely sensitive to the possibility that any given link could blow away someone's entire session if it isn't managed properly. In the end, I use URL variables only for passing along very limited data from one page to the next (e.g. an item's ID as encoded in a link to that item). Let us assume, then, that you would indeed manage a user's data using the built-in Session capability. Why would someone tell you that "Session is evil"? Well, in addition to the memory load, server-farm, and expiration considerations presented above, the primary critique of Session variables is that they are, effectively, untyped variables.
Fortunately, prudent use of Session variables can avoid memory problems (big items should be kept in the database anyhow) and if you are running a site large enough to need a server farm, there are plenty of mechanisms available for sharing state built in to ASP.NET (hint: you will not use in-proc storage). To avoid essentially all of the rest of Session's drawbacks, I recommend that you implement an object to hold your session data as well as some simple Session object management capabilities. Then build these into a descendent of the Page class and use this descendent Page class for all of your pages. It is then a simple matter to access your Session data via the page class as a set of strongly-typed values. Note that your object's fields will give you a way to access each of your "session variables" in a strongly typed manner (e.g. one field per variable). Let me know if this is a straightforward task for you or if you'd like some sample code!

A: As far as I know, Session is the intended way of storing this information. Please keep in mind that session state generally is stored in the process by default. If you have multiple web servers, or if there is an IIS reboot, you lose session state. This can be fixed by using an ASP.NET State Service, or even an SQL database to store sessions. This ensures people get their session back, even if they are rerouted to a different web server, or in case of a recycle of the worker process.

A: Short-term information that only needs to live until the next request can also be stored in the ViewState. This means that objects are serialized and stored in the page sent to the browser, which is then posted back to the server on a click event or similar. Then the ViewState is decoded and turned into objects again, ready to be retrieved.

A: Sessions are not evil; they serve an important function in an ASP.NET application, serving data that must be shared between multiple pages during a user's "session". A few suggestions: I would say to use SQL Session management whenever possible, and make certain that the objects you are using in your session collection are "serializable". The best practice would be to use the session object when you absolutely need to share state information across pages, and not use it when you don't need to. The information is not going to be available client side; a session key is kept either in a cookie, or through the query string, or using other methods depending on how it is configured, and then the session objects are available in the database table (unless you use InProc, in which case your sessions will have the chance of being blown away during a reload of the site, or will be rendered almost useless in most clustered environments).

A: Session as evil: Not in ASP.NET, properly configured. Yes, it's ideal to be as stateless as possible, but the reality is that you can't get there from here. You can, however, make Session behave in ways that lessen its impact -- notably StateServer or database sessions.

A: I think the "evil" comes from over-using the session. If you just stick anything and everything in it (like using global variables for everything) you will end up having poor performance and just a mess.

A: Anything you put in the session object stays there for the duration of the session unless it is cleaned up. Poor management of memory stored using InProc and StateServer will force you to scale out earlier than necessary.
Store only an ID for the session/user in the session and load what is needed into the cache object on demand using a helper class. That way you can fine-tune its lifetime according to how often that data is used. The next version of ASP.NET may have a distributed cache (rumor).
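A minimal sketch of that last suggestion, with hypothetical names (the "AccountId" key and the LoadAccount data-access call are made up); the session holds only the small ID, while the heavier Account object lives in the cache with its own expiry:

// Sketch: assumes the Account type from the question and
// using System; using System.Web; using System.Web.Caching;
public static class AccountHelper
{
    public static Account GetCurrentAccount(HttpContext context)
    {
        object id = context.Session["AccountId"];
        if (id == null)
            return null; // no account selected yet, or the session expired

        string cacheKey = "Account:" + id;
        Account account = context.Cache[cacheKey] as Account;
        if (account == null)
        {
            account = LoadAccount((int)id); // hypothetical database call
            // Tune the sliding lifetime to how often the data is used.
            context.Cache.Insert(cacheKey, account, null,
                Cache.NoAbsoluteExpiration, TimeSpan.FromMinutes(10));
        }
        return account;
    }
}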
{ "language": "en", "url": "https://stackoverflow.com/questions/133236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Text from UITextView does not display in UIScrollView I want to have a UIScrollView with a set of subviews where each of these subviews has a UITextView with a different text. For this task, I have modified the PageControl example from the Apple "iPhone dev center" in order to add a simple UITextView to the view which is used to generate the subviews of the scroll view. When I run the app (both on the simulator and the phone), NO text is seen, but if I activate "user interaction" and click on it, the text magically appears (as well as the keyboard). Does anyone have a solution or made any progress with UITextView inside a UIScrollView? Thanks.

A: I resolved the problem by forcing a "fake" scroll:

textView.contentOffset = CGPointMake(0, 1);
textView.contentOffset = CGPointMake(0, 0);

A: I think the problem stems from the fact that UITextView is a subclass of UIScrollView, so you basically have scroll views embedded within UIScrollViews. Even if the text displayed properly, you would have usability problems, as it would never be clear if a finger swipe was supposed to scroll the outer view or the text view. Yeah, Safari sort of does this, but it has to, and it's not the most pleasant part of using Safari. I think this is one of those times where the difficulty indicates you are working against the system. I strongly recommend going back and re-thinking the UI.

A: You may be suffering from the problem where UITextViews don't update properly when they are scrolled from an offscreen to an onscreen area. Check the "Offscreen UITextViews don't update correctly" section of this page: Multiple Virtual Pages in a UIScrollView. The solution I used was to force a redraw of scroll views when they begin to appear onscreen. This is a complete nuisance but does fix the problem.

A: The problem is likely to be the nested UIScrollViews. I think there are three solutions:

* Enable and disable userInteractionEnabled as the various controls are selected.
* Assuming the text is static, use a UILabel instead of a UITextView (UILabel is not a subclass of UIScrollView).
* Implement your own view and draw the text yourself in a drawRect message rather than relying on the UITextView.

A: My solution to this problem was different: it only worked when I set the property "Autoresize Subviews" of the UIScrollView to off.

[scrollView setAutoresizesSubviews:NO];

A: It works for me by placing the text value assignment into the scrollViewDidScroll method. Sample snippets:

SAMPLE.h

...
@interface myRootUIViewController : UIViewController <UIScrollViewDelegate>
...

Comment: Just a reminder: don't forget the UIScrollViewDelegate protocol.

SAMPLE.m

- (void)viewDidLoad
{
    ... whatever is created before and/or after...

    NSString * text = @"Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc semper lacus quis erat. Cras sapien magna, porta non, suscipit nec, egestas in, arcu. Maecenas sit amet est. Quisque felis risus, tempor eu, dictum ac, volutpat id, libero. Ut gravida, purus vitae interdum elementum, tortor justo porttitor nisi, id rhoncus massa.";

    // calculate the required frame height according to the defined font size
    // and the given text
    CGRect frame = CGRectMake(0.0, 500.0, self.view.bounds.size.width, 1000.0);
    CGSize calcSize = [text sizeWithFont:[UIFont systemFontOfSize:13.0]
                       constrainedToSize:frame.size
                           lineBreakMode:UILineBreakModeWordWrap];
    // for whatever reason, constrainedToSize: only seems to be able to
    // calculate an appropriate height if the input frame height is larger
    // than required.
    // Meaning: if your text requires height=250 and the input frame height=100,
    // then this method won't give you the expected result.
    frame.size = calcSize;
    frame.size.height += 0; // calcSize might not be pixel-precise,
                            // so add additional padding pixels here

    UITextView * tmpTextView = [[UITextView alloc] initWithFrame:frame];
    // do whatever adjustments
    tmpTextView.backgroundColor = [UIColor blueColor]; // show the area explicitly (dev purpose)
    self.myTextView = tmpTextView;
    self.myTextView.editable = NO;
    self.myTextView.scrollEnabled = NO;
    self.myTextView.multipleTouchEnabled = NO;
    self.myTextView.userInteractionEnabled = NO; // pass events on to the parent view
    self.myTextView.font = [UIFont systemFontOfSize:13.0];
    [tmpTextView release];

    [self.scrollView addSubview:self.myTextView];
}
...
- (void)scrollViewDidScroll:(UIScrollView *)sender
{
    // for simplicity the text is repeated again; of course it can be a member var/etc...
    NSString * text = @"Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc semper lacus quis erat. Cras sapien magna, porta non, suscipit nec, egestas in, arcu. Maecenas sit amet est. Quisque felis risus, tempor eu, dictum ac, volutpat id, libero. Ut gravida, purus vitae interdum elementum, tortor justo porttitor nisi, id rhoncus massa.";

    self.myTextView.text = text; // assign the value within this method and it is
                                 // painted as expected.
}

Comment: Obviously, I have adjusted the source code snippet with sample names and values; hopefully there are no typos. However, the code also contains the calculation of the required frame height for the text, in case the text's value changes and therefore requires a different frame size. Placing the actual text value assignment into the scrollViewDidScroll method worked for me without any kind of flashing during scrolling etc. (so far only tested in the iPhone Simulator). Hope that helps. Of course I am open to any constructive feedback, improvement proposals or even other ways to solve this issue.
{ "language": "en", "url": "https://stackoverflow.com/questions/133243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Illustrating usage of the volatile keyword in C# I would like to code a little program which visually illustrates the behavior of the volatile keyword. Ideally, it should be a program which performs concurrent access to a non-volatile static field and which gets incorrect behavior because of that. Adding the volatile keyword in the same program should fix the problem. That is something I didn't manage to achieve. Even after trying several times, enabling optimization, etc., I always get correct behavior without the 'volatile' keyword. Do you have any idea about this topic? Do you know how to simulate such a problem in a simple demo app? Does it depend on hardware?

A: It's not really a matter of a fault happening when the 'volatile' keyword isn't specified, more that an error could happen when it hasn't been specified. Generally you are going to know when this is the case better than the compiler! The easiest way of thinking about it would be that the compiler could, if it wanted to, inline certain values. By marking the value as volatile, you are telling yourself and the compiler that the value may actually change (even if the compiler doesn't think so). This means the compiler should not inline values, cache them, or read the value early (in an attempt to optimize). The keyword doesn't behave quite the same as in C++. MSDN has a short description here. Here is perhaps a more in-depth post on the subjects of volatility, atomicity and interlocking.

A: It's hard to demonstrate in C#, as the code is abstracted by a virtual machine, so on one implementation of this machine it may work right without volatile, while it might fail on another one. Wikipedia has a good example of how to demonstrate it in C, though. The same thing could happen in C# if the JIT compiler decides that the value of the variable cannot change anyway and thus creates machine code that doesn't even check it any longer. If another thread then changed the value, your first thread might still be caught in the loop. Another example is busy waiting. Again, this could happen with C# as well, but it strongly depends on the virtual machine and on the JIT compiler (or interpreter, if it has no JIT... in theory, I think MS always uses a JIT compiler and Mono also uses one; but you might be able to disable it manually).

A: Here's my contribution to the collective understanding of this behaviour... It's not much, just a demonstration (based on xkip's demo) which shows the behaviour of a volatile versus a non-volatile (i.e. "normal") int value, side by side, in the same program... which is what I was looking for when I found this thread.

using System;
using System.Threading;

namespace VolatileTest
{
    class VolatileTest
    {
        private volatile int _volatileInt;

        public void Run()
        {
            new Thread(delegate() {
                Thread.Sleep(500);
                _volatileInt = 1;
            }).Start();

            while ( _volatileInt != 1 )
                ; // Do nothing
            Console.WriteLine("_volatileInt=" + _volatileInt);
        }
    }

    class NormalTest
    {
        private int _normalInt;

        public void Run()
        {
            new Thread(delegate() {
                Thread.Sleep(500);
                _normalInt = 1;
            }).Start();

            // NOTE: Program hangs here in Release mode only (not Debug mode).
            // See: http://stackoverflow.com/questions/133270/illustrating-usage-of-the-volatile-keyword-in-c-sharp
            // for an explanation of why. The short answer is because the
            // compiler optimisation caches _normalInt in a register, so
            // it never re-reads the value of the _normalInt variable, so
            // it never sees the modified value. Ergo: while ( true )!!!!
            while ( _normalInt != 1 )
                ; // Do nothing
            Console.WriteLine("_normalInt=" + _normalInt);
        }
    }

    class Program
    {
        static void Main()
        {
#if DEBUG
            Console.WriteLine("You must run this program in Release mode to reproduce the problem!");
#endif
            new VolatileTest().Run();
            Console.WriteLine("This program will now hang!");
            new NormalTest().Run();
        }
    }
}

There are some really excellent succinct explanations above, as well as some great references. Thanks to all for helping me get my head around volatile (at least enough to know not to rely on volatile where my first instinct was to lock it). Cheers, and thanks for ALL the fish. Keith.

PS: I'd be very interested in a demo of the original request, which was: "I'd like to see a static volatile int behaving correctly where a static int misbehaves." I have tried and failed this challenge. (Actually I gave up pretty quickly ;-). In everything I tried with static vars they behave "correctly" regardless of whether or not they're volatile... and I'd love an explanation of WHY that is the case, if indeed it is the case... Is it that the compiler doesn't cache the values of static vars in registers (i.e. it caches a reference to that heap address instead)? No, this isn't a new question... it's an attempt to steer the community back to the original question.

A: Yes, it's hardware dependent (you are unlikely to see the problem without multiple processors), but it's also implementation dependent. The memory model specifications in the CLR spec permit things which the Microsoft implementation of the CLR does not necessarily do.

A: I came across the following text by Joe Albahari that helped me a lot:

* Memory Barriers and Volatility

I grabbed an example from the above text which I altered a little bit, by creating a static volatile field. When you remove the volatile keyword the program will block indefinitely. Run this example in Release mode.

class Program
{
    public static volatile bool complete = false;

    private static void Main()
    {
        var t = new Thread(() =>
        {
            bool toggle = false;
            while (!complete)
                toggle = !toggle;
        });
        t.Start();
        Thread.Sleep(1000); // let the other thread spin up
        complete = true;
        t.Join(); // Blocks indefinitely when you remove volatile
    }
}

A: I've achieved a working example! The main idea came from the wiki, but with some changes for C#. The wiki article demonstrates this for a static field in C++; it looks like C# always carefully compiles accesses to static fields, so I made an example with a non-static one. If you run this example in Release mode and without a debugger (i.e. using Ctrl+F5), then the line while (test.foo != 255) will be optimized to 'while(true)' and this program never returns. But after adding the volatile keyword, you always get 'OK'.

class Test
{
    /*volatile*/ int foo;

    static void Main()
    {
        var test = new Test();

        new Thread(delegate() {
            Thread.Sleep(500);
            test.foo = 255;
        }).Start();

        while (test.foo != 255)
            ;
        Console.WriteLine("OK");
    }
}
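For comparison, here is a minimal sketch of the lock-based alternative alluded to above (locking a dedicated object instead of marking the field volatile); this also fixes the hang, because acquiring and releasing a monitor implies the memory barriers that stop the JIT from caching the field in a register. Same usings as the examples above; the class and member names are made up:

class LockTest
{
    int foo; // deliberately not volatile
    readonly object sync = new object();

    static void Main()
    {
        var test = new LockTest();

        new Thread(delegate() {
            Thread.Sleep(500);
            lock (test.sync) { test.foo = 255; }
        }).Start();

        for (;;)
        {
            // Re-reading inside the lock each iteration prevents the read
            // from being hoisted out of the loop.
            lock (test.sync) { if (test.foo == 255) break; }
        }
        Console.WriteLine("OK");
    }
}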
{ "language": "en", "url": "https://stackoverflow.com/questions/133270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "89" }
Q: Do I have to use Viewstate in ASP.NET I am moving from classic ASP to ASP.NET and have encountered what many of you already know as "viewstate". I might be jumping the gun with my assumption, but it looks highly cumbersome. I have developed many ASP forms in the past and never had issues with keeping state. Is there another way, OR am I going to have to learn this Viewstate thing in ASP.NET? I am using Visual Studio 2008, VB.NET as the code-behind language and Framework v3.5 with SQL Server 2005. A: You don't have to. Check out the MVC framework. It eliminates ViewState and works like old ASP (at least from this point of view). A: This series of posts is a must-read for understanding ViewState. I disable it and do most of my work in Page_Init instead of Load (values are still maintained because of ControlState). This setup has worked out well for me. A: ViewState is optional, but helpful. ViewState is all the changes which occur on a control on the SERVER SIDE. So, if you're assigning text to a label, and you want that text to persist without the need to reassign it on every postback, then you'll want to maintain that. Another example where I always leave ViewState on is anything databound. That said, there are times when it's helpful to turn ViewState off for that same reason. For example, the one place where I always turn ViewState off is a MESSAGE label. That way, when I have to print out a message to the user (one which should only appear once and then go away) I just add the text to the label and then forget about it. During the next PostBack, the label will automatically revert to the text which is found in the ASPX declaration for that control (in this case an empty string). Now, note that this has nothing to do with the form collection, which holds the values posted to IIS during the PostBack. The form collection sends the values that the user enters into form elements (textboxes, checkboxes, droplists, etc.). These, .NET will populate into the appropriate places--and this occurs AFTER ViewState has been processed. This way, if you send a textbox with the phrase "hi there" to the client and the user changes it to "See ya" and then submits the form, the textbox will have "See ya" in its Text property by the time the Page_Load event fires. A: In classic ASP we always just used a HIDDEN field to do the job. Viewstate is just a way of doing that for you automatically. Trust me, the learning curve is not as high as you might think. A: Some controls are deeply crippled when you turn ViewState off, so be prepared to address these concerns. It's easiest to just be lazy and leave it on, but left unchecked, ViewState can easily account for 30% of the size of your HTML. For example, say you have a DropDown, and you bind it to a list of Fruits. You bind it in the if (!IsPostBack) { } block in the page load. If you turn off ViewState, you'll lose the items when you click a button. They need to be bound every page load. You'll also lose your selected index, so you'd need to pull that off of the Request.Form[] variables (see the rebinding sketch at the end of this thread). A: Viewstate is part of the package when you are working with ASP.NET. For a basic page/website you shouldn't have to 'know' how to use Viewstate. It just gets used as you put controls on pages. It's pretty hard to avoid Viewstate with ASP.NET because even if you turn it off at the project level, some individual controls still use Viewstate to persist their information. If you don't want to deal with Viewstate, consider using the ASP.NET MVC framework.
You will likely be more comfortable with the MVC framework coming from Classic ASP. A: ViewState is completely optional in almost all if not all cases. ASP.NET re-populates input fields automatically even with EnableViewState=false. I've been using ASP.NET for 5 or 6 years and have never had to depend on ViewState. I even disable it when I can. A: ViewState works automatically for the most part. It's just how ASP.NET keeps track of the current state of all its controls. You can manually use ViewState too, if you want to store some extra data. That is as simple as: ViewState["Key"] = value; The only caveat with that is that any object you store in ViewState must be serializable. A: I can definitely recommend avoiding ViewState in DataGrids and DropDownLists because I just recently started doing it myself. I didn't do this for fun, I had to fix a page that had grown so large that it was causing other problems. But this turned out to be easy, and the results were so dramatic that I am very pleased. Of course for a small simple app or for small amounts of data this will not be necessary, but on the other hand it's good to be consistent (always go from known to known so you can continually improve your process...), and why carry around extra baggage, ever? This will require a little manual intervention on your part. For example, if you turn off ViewState for drop-down lists, you'll need to rebind them on each postback, and then restore the SelectedValue from the Request object. You'll need to read up on this, but Google has lots of readily available information. A: Viewstate is kept automatically for ASP.NET controls "rooted" to the page. There is little you have to do; the values and some other information are passed in a hidden input, Base64 encoded. You can look at it if you want, but it doesn't matter, it's all handled automagically for you. A: If you're writing code for your own consumption, you can just turn it off and not worry. Presumably you're going to maintain Web Forms code written by other people, so you should know what the config options and pain points are. The top few I can think of: * *how to disable it at site, page and control level *why MachineKey is relevant in web farms *why your event log is full of ViewStateAuthentication errors *what ViewStateUserKey is In terms of actual learning curve this is probably a thorough read of a couple of MSDN articles. A: ViewState is a necessary evil inherent to the web forms metaphor. I personally find this methodology obsolete, bloated and generally not web-friendly. Better check out the MVC framework as suggested above. I suggest you avoid the temptation to use ViewState as a "cache" to pass data back and forth (I've seen websites doing this because of a clustered setup and no SQL-backed session state). The data is serialized and added to the page and must do roundtrips on every request, adding to the total size of the page and making your site slower to load. A:

<%@ Control Language="C#" AutoEventWireup="true" CodeFile="HomePage.ascx.cs" Inherits="HomePage" %>
<script runat="server">
    void testHF_ValueChanged(object sender, EventArgs e)
    {
        this.HFvalue.Text = this.testHF.Value;
    }
</script>
<asp:Label ID="UserNamelbl" runat="server" Text="User Name : " Visible="false"></asp:Label>
<asp:TextBox ID="UserNametxt" runat="server" Visible="false"></asp:TextBox>
<asp:Label ID="HFvalue" Text="......"
    runat="server"></asp:Label>
<asp:HiddenField ID="testHF" OnValueChanged="testHF_ValueChanged" value="" runat="server"></asp:HiddenField>
<input type="submit" name="SubmitButton" value="Submit" onclick="CL()" />
<script type="text/javascript">
    function CL()
    {
        // Client-side script must address the rendered elements by their ClientIDs;
        // the original post used server-side property syntax here, which won't run in the browser.
        document.getElementById('<%= testHF.ClientID %>').value =
            document.getElementById('<%= UserNametxt.ClientID %>').value;
    }
</script>
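To make the DropDown rebinding advice above concrete, here is a minimal code-behind sketch; the control name ddlFruits and the data method GetFruits are illustrative, and the markup is assumed to declare the list with EnableViewState="false":

protected void Page_Load(object sender, EventArgs e)
{
    // With ViewState off the items vanish on postback, so bind on every request.
    ddlFruits.DataSource = GetFruits(); // hypothetical data-access method
    ddlFruits.DataBind();

    if (IsPostBack)
    {
        // SelectedIndex is not persisted either; recover it from the raw form post.
        string posted = Request.Form[ddlFruits.UniqueID];
        if (posted != null)
            ddlFruits.SelectedValue = posted;
    }
}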
{ "language": "en", "url": "https://stackoverflow.com/questions/133277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Castle-ActiveRecord Tutorial with .NET 3.5 broken? Has anyone tried the ActiveRecord Intro Sample with C# 3.5? I somehow have the feeling that the sample is completely wrong or just out of date. The XML configuration is just plain wrong: <add key="connection.connection_string" value="xxx" /> should be: <add key="hibernate.connection.connection_string" value="xxx" /> (if I understand the NHibernate config syntax right...) I am wondering what I'm doing wrong. I get a "Could not perform ExecuteQuery for User" exception when calling Count() on the User model. No idea what this can be. The tutorial source differs strongly from the source on the page (most notably in the XML configuration), and it's a VS2003 sample with different syntax on most things (no generics, etc.). Any suggestions? ActiveRecord looks awesome... A: The 'hibernate' portion of the key was removed in NHibernate version 2.0. This version is correct for NHibernate 2.0 onwards: <add key="connection.connection_string" value="xxx" /> Edit: I see that the quickstart doesn't come with the binaries for Castle and NHibernate. You must have downloaded the binaries from somewhere; it would be helpful if you could provide the version number of your NHibernate.dll file. Confusingly, at least SOME of the quickstart has been updated to be current with NHibernate (NH) 2.0, but the latest 'proper' Castle release is still the 1.0 RC3 (almost a year old now), which does not include NH 2.0. You can go two ways. You can continue using Castle RC3, and in this case you will indeed need to add the 'hibernate' prefix to your configuration entries. Or you can download a build of Castle from the trunk, which should be running against NH 2.0. The problem with the latter approach is that some of the other breaking changes introduced in NH 2.0 might not be fixed in the quickstart. A: (This was too long for a comment post) [@Tigraine] From your comments on my previous answer it looks like the error lies not with the configuration, but with one of your entities. Removing the "hibernate" prefix corrected the configuration so that it gave you the real error, which appears to be that the entity "Post" is not properly attributed for ActiveRecord to create its mapping. If you look further down in the error that it gives, it likely has some details as to what about "Post" failed. Some common things include: * *The class does not have the [ActiveRecord] attribute. *There is no property with the [PrimaryKey] attribute. *There is no matching table called "Post" (or "Posts" if PluralizeTableNames is "true"). *There are no matching columns for attributed properties. *Your attributed properties and public methods are not virtual (this one kills me all the time). A: Delete the "hibernate." part for all configuration entries. Your first example is the correct one.
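For reference, a minimal sketch of the two configuration styles being discussed; the connection string is a placeholder and the exact driver/dialect keys depend on your database and NHibernate version, so treat this as illustrative rather than a drop-in config. Castle RC3 / NHibernate 1.2 style (prefixed):

<add key="hibernate.connection.driver_class" value="NHibernate.Driver.SqlClientDriver" />
<add key="hibernate.connection.connection_string" value="Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI" />
<add key="hibernate.dialect" value="NHibernate.Dialect.MsSql2005Dialect" />

NHibernate 2.0 style (the same keys, unprefixed):

<add key="connection.driver_class" value="NHibernate.Driver.SqlClientDriver" />
<add key="connection.connection_string" value="Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI" />
<add key="dialect" value="NHibernate.Dialect.MsSql2005Dialect" />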
{ "language": "en", "url": "https://stackoverflow.com/questions/133281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Are there any emulators for the JTAPI API? We want to test the JTAPI features of our application. Are there any emulators for JTAPI available? Testing with real hardware is a little difficult. A: gjtapi seems to have one.
{ "language": "en", "url": "https://stackoverflow.com/questions/133282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Threading when sending emails I have a simple function that sends out emails; how would I go about using threads to speed email delivery? Sample code would be ideal. A: Use SendAsync instead. A: Check out the following link for a demonstration of the SendAsync method. [MSDN] http://msdn.microsoft.com/en-ca/library/x5x13z6h(VS.80).aspx A: You can run the function in another thread. With SendMail as your mail-sending function, you can: ThreadPool.QueueUserWorkItem(delegate { SendMail(message); }); A: In .NET 4.0 you can use the following: new Thread(x => SendMail(message)).Start(); and

public static void SendEmail(MailMessage message)
{
    using (SmtpClient client = new SmtpClient("smtp.XXXXXX.com"))
    {
        client.Send(message);
    }
}

A: Create your class with a static void method that starts doing what you want on a separate thread, with something like:

using System;
using System.Threading;

class Test
{
    static void Main()
    {
        Thread newThread = new Thread(new ThreadStart(Work.DoWork));
        newThread.Start();
    }
}

class Work
{
    Work() {}
    public static void DoWork() {}
}

Another alternative is to use the ThreadPool class if you don't want to manage your threads yourself. More info on Threads - http://msdn.microsoft.com/en-us/library/xx3ezzs2.aspx More info on ThreadPool - http://msdn.microsoft.com/en-us/library/3dasc8as(VS.80).aspx A: Having a separate thread will not speed the delivery of email, however. All it will do is return control back to the calling method faster. So unless you need to do that, I wouldn't even bother with it. A: When you send e-mails using multiple threads, be careful about getting identified as spam by your ISP. It is better to opt for smaller batches with some delay between each batch. A: What would be nicer and easier is to create an application back end and send emails every 30 minutes. Throw the information you need to send into a database, and from there have a job that launches every 30 minutes and sends the queued mail. There's no need to make the caller wait for your event handler to send the email... It works for us. Just thought it would help.
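Since SendAsync is recommended twice above without code, here is a minimal sketch of how it is typically wired up (the host name and addresses are placeholders):

using System;
using System.Net.Mail;

class MailSender
{
    public static void Send()
    {
        var client = new SmtpClient("smtp.example.com"); // placeholder host
        var message = new MailMessage("from@example.com", "to@example.com",
                                      "Subject", "Body");

        // Raised when delivery completes or fails; the caller is never blocked.
        client.SendCompleted += (sender, e) =>
        {
            if (e.Error != null)
                Console.WriteLine("Send failed: " + e.Error.Message);
            message.Dispose();
        };

        client.SendAsync(message, null); // returns immediately
    }
}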
{ "language": "en", "url": "https://stackoverflow.com/questions/133287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Subfolders in CodeIgniter I'm new to CodeIgniter, and I need some help. I'd like to implement the following: * *View a user's profile via: http://localhost/profile/johndoe *Administrate a user's profile via: http://localhost/admin/profile/johndoe *Be able to accomplish even further processing via: http://localhost/admin/profile/create ...and... http://localhost/admin/profile/edit/johndoe I've already created the admin object and secured it. Do I have to create a profile function under admin, and work with the URI to process accordingly? Is there a better way? A: This is not such a good idea. If you want to implement those URLs, you need two controllers: * *Profile, with the function index *Admin, with the function profile In Admin, the profile function has to read the first argument (create/edit/[userid]) and then do something accordingly. (You also must make sure that no user can call himself "create" or "edit".) I would instead use only one controller with the functions show, edit, and create (or add). Much easier. Then you would get these URLs: * *http://localhost/profile/show/johndoe *http://localhost/profile/edit/johndoe *http://localhost/profile/create/johndoe A: I found the solution I was looking for: http://www.clipmarks.com/clipmark/75D02C9E-3E76-483E-8CCE-30403D891969/ Thanks, Christian D, I like your solution better than mine. I'm going with it.
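A routing sketch to complement the accepted approach: CodeIgniter's URI routes can map the pretty URLs from the question onto the Profile controller's show/edit/create methods described above. This assumes CodeIgniter's (:any) wildcard syntax and is illustrative only; it would go in application/config/routes.php:

// Send /profile/johndoe to Profile::show('johndoe').
$route['profile/(:any)'] = "profile/show/$1";

// /admin/profile/... already matches Admin::profile(), which can inspect
// its first segment (create, edit, or a username) and dispatch accordingly.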
{ "language": "en", "url": "https://stackoverflow.com/questions/133308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How can I get jQuery to perform a synchronous, rather than asynchronous, Ajax request? I have a JavaScript widget which provides standard extension points. One of them is the beforecreate function. It should return false to prevent an item from being created. I've added an Ajax call into this function using jQuery: beforecreate: function (node, targetNode, type, to) { jQuery.get('http://example.com/catalog/create/' + targetNode.id + '?name=' + encode(to.inp[0].value), function (result) { if (result.isOk == false) alert(result.message); }); } But I want to prevent my widget from creating the item, so I should return false in the mother function, not in the callback. Is there a way to perform a synchronous AJAX request using jQuery or any other in-browser API? A: All of these answers miss the point that doing an Ajax call with async: false will cause the browser to hang until the Ajax request completes. Using a flow-control library will solve this problem without hanging up the browser. Here is an example with Frame.js:

beforecreate: function(node, targetNode, type, to) {
    Frame(function(next) {
        jQuery.get('http://example.com/catalog/create/', next);
    });
    Frame(function(next, response) {
        alert(response);
        next();
    });
    Frame.init();
}

A: Firstly, we should understand when to use $.ajax and when to use $.get/$.post. When we require low-level control over the Ajax request, such as request header settings, caching settings, synchronous settings, etc., we should go for $.ajax. $.get/$.post: when we do not require low-level control over the Ajax request and only need a simple GET/POST of data to the server. They are shorthand for $.ajax({ url: url, data: data, success: success, dataType: dataType }); and hence we cannot use other features (sync, cache, etc.) with $.get/$.post. Hence, for low-level control (sync, cache, etc.) over an Ajax request, we should go for $.ajax: $.ajax({ type: 'GET', url: url, data: data, success: success, dataType: dataType, async: false }); A: This is my simple implementation of ordered requests with jQuery. I hope this helps anyone.

var queueUrlsForRemove = [
    'http://dev-myurl.com/image/1',
    'http://dev-myurl.com/image/2',
    'http://dev-myurl.com/image/3',
];

var queueImagesDelete = function() {
    // shift() hands the next URL (a string) to deleteImage; the original
    // post used splice(0, 1), which passes a one-element array instead.
    deleteImage(queueUrlsForRemove.shift(), function() {
        if (queueUrlsForRemove.length > 0) {
            queueImagesDelete();
        }
    });
}

var deleteImage = function(url, callback) {
    $.ajax({
        url: url,
        method: 'DELETE'
    }).done(function(response) {
        typeof(callback) == 'function' ? callback(response) : null;
    });
}

queueImagesDelete();

A:

function getURL(url) {
    return $.ajax({
        type: "GET",
        url: url,
        cache: false,
        async: false
    }).responseText;
}

//example use
var msg = getURL("message.php");
alert(msg);

A: Because synchronous XMLHttpRequest operation is deprecated, I came up with the following solution that wraps XMLHttpRequest. This allows ordered AJAX queries while still being asynchronous in nature, which is very useful for single-use CSRF tokens. It is also transparent, so libraries such as jQuery will operate seamlessly.
/* wrap XMLHttpRequest for synchronous operation */
var XHRQueue = [];
var _XMLHttpRequest = XMLHttpRequest;
XMLHttpRequest = function() {
    var xhr = new _XMLHttpRequest();
    var _send = xhr.send;

    xhr.send = function() {
        /* queue the request, and if it's the first, process it */
        XHRQueue.push([this, arguments]);
        if (XHRQueue.length == 1)
            this.processQueue();
    };

    xhr.processQueue = function() {
        var call = XHRQueue[0];
        var xhr = call[0];
        var args = call[1];
        /* you could also set a CSRF token header here */
        /* send the request */
        _send.apply(xhr, args);
    };

    xhr.addEventListener('load', function(e) {
        /* you could also retrieve a CSRF token header here */
        /* remove the completed request and if there is more, trigger the next */
        XHRQueue.shift();
        if (XHRQueue.length)
            this.processQueue();
    });

    return xhr;
};

A: Keep in mind that if you're doing a cross-domain Ajax call (by using JSONP) you can't do it synchronously; the async flag will be ignored by jQuery. $.ajax({ url: "testserver.php", dataType: 'jsonp', // jsonp async: false //IGNORED!! }); For JSONP calls you could use: * *an Ajax call to your own domain - and do the cross-domain call server-side *Change your code to work asynchronously *Use a "function sequencer" library like Frame.js (this answer) *Block the UI instead of blocking the execution (this answer) (my favourite way) A: Note: You shouldn't use async: false due to these warning messages: Starting with Gecko 30.0 (Firefox 30.0 / Thunderbird 30.0 / SeaMonkey 2.27), synchronous requests on the main thread have been deprecated due to the negative effects to the user experience. Chrome even warns about this in the console: Synchronous XMLHttpRequest on the main thread is deprecated because of its detrimental effects to the end user's experience. For more help, check https://xhr.spec.whatwg.org/. This could break your page if you are doing something like this, since it could stop working any day. If you want to do it in a way that still feels synchronous but doesn't block, then you should use async/await and probably also some Ajax that is based on promises, like the new Fetch API:

async function foo() {
    var res = await fetch(url)
    console.log(res.ok)
    var json = await res.json()
    console.log(json)
}

Edit: Chrome is working on disallowing sync XHR on page dismissal, when the page is being navigated away from or closed by the user. This involves beforeunload, unload, pagehide and visibilitychange. If this is your use case, you might want to have a look at navigator.sendBeacon instead. It is also possible for the page to disable sync requests with either HTTP headers or an iframe's allow attribute. A: You can put jQuery's Ajax setup in synchronous mode by calling jQuery.ajaxSetup({async:false}); and then perform your Ajax calls using jQuery.get( ... ); Then just turn it on again afterwards: jQuery.ajaxSetup({async:true}); I guess it works out to the same thing as suggested by @Adam, but it might be helpful to someone that does want to reconfigure their jQuery.get() or jQuery.post() to the more elaborate jQuery.ajax() syntax. A: I used the answer given by Carcione and modified it to use JSON.
function getUrlJsonSync(url) {
    var jqxhr = $.ajax({
        type: "GET",
        url: url,
        dataType: 'json',
        cache: false,
        async: false
    });
    // 'async' has to be 'false' for this to work
    var response = { valid: jqxhr.statusText, data: jqxhr.responseJSON };
    return response;
}

function testGetUrlJsonSync() {
    var reply = getUrlJsonSync("myurl");
    if (reply.valid == 'OK') {
        console.dir(reply.data);
    } else {
        alert('not valid');
    }
}

I added the dataType of 'json' and changed .responseText to .responseJSON. I also retrieved the status using the statusText property of the returned object. Note that this is the status of the Ajax response, not whether the JSON is valid. The back end has to return the response in correct (well-formed) JSON, otherwise the returned object will be undefined. There are two aspects to consider when answering the original question. One is telling Ajax to perform synchronously (by setting async: false) and the other is returning the response via the calling function's return statement, rather than into a callback function. I also tried it with POST and it worked. I changed the GET to POST and added data: postdata

function postUrlJsonSync(url, postdata) {
    var jqxhr = $.ajax({
        type: "POST",
        url: url,
        data: postdata,
        dataType: 'json',
        cache: false,
        async: false
    });
    // 'async' has to be 'false' for this to work
    var response = { valid: jqxhr.statusText, data: jqxhr.responseJSON };
    return response;
}

Note that the above code only works in the case where async is false. If you were to set async: true, the returned object jqxhr would not be valid at the time the AJAX call returns, only later when the asynchronous call has finished; but that is much too late to set the response variable. A: Excellent solution! I noticed when I tried to implement it that if I returned a value in the success clause, it came back as undefined. I had to store it in a variable and return that variable. This is the method I came up with:

function getWhatever() {
    // strUrl is whatever URL you need to call
    var strUrl = "", strReturn = "";

    jQuery.ajax({
        url: strUrl,
        success: function(html) {
            strReturn = html;
        },
        async: false
    });

    return strReturn;
}

A: With async: false you get yourself a blocked browser. For a non-blocking synchronous solution you can use the following. ES6/ECMAScript2015: with ES6 you can use a generator & the co library:

beforecreate: function(node, targetNode, type, to) {
    co(function*() {
        let result = yield jQuery.get('http://example.com/catalog/create/' + targetNode.id + '?name=' + encode(to.inp[0].value));
        // Just use the result here
    });
}

ES7: with ES7 you can just use async/await:

beforecreate: function(node, targetNode, type, to) {
    (async function() {
        let result = await jQuery.get('http://example.com/catalog/create/' + targetNode.id + '?name=' + encode(to.inp[0].value));
        // Just use the result here
    })();
}

A: From the jQuery documentation: you specify the asynchronous option to be false to get a synchronous Ajax request. Then your callback can set some data before your mother function proceeds. Here's what your code would look like if changed as suggested:

beforecreate: function (node, targetNode, type, to) {
    jQuery.ajax({
        url: 'http://example.com/catalog/create/' + targetNode.id + '?name=' + encode(to.inp[0].value),
        success: function (result) {
            if (result.isOk == false)
                alert(result.message);
        },
        async: false
    });
}

A: This is an example:

$.ajax({
    url: "test.html",
    async: false
}).done(function(data) {
    // Todo something..
}).fail(function(xhr) {
    // Todo something..
});

A: Since the original question was about jQuery.get, it is worth mentioning here that (as mentioned here) one could use async: false in a $.get(), but ideally avoid it, since synchronous XMLHttpRequest is deprecated (and the browser may give a warning):

$.get({
    url: url, // mandatory
    data: data,
    success: success,
    dataType: dataType,
    async: false // to make it synchronous
});
{ "language": "en", "url": "https://stackoverflow.com/questions/133310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1318" }
Q: Good database table design for storing localized versions of data I'm trying to design some tables to store some data, which has to be converted to different languages later. Can anybody provide some "best practices" or guidelines for this? Thanks A: Let's say you have a products table that looks like this: Products ---------- id price Products_Translations ---------------------- product_id locale name description Then you just join on product_id = products.id and where locale = 'en-US'. Of course this has an impact on performance, since you now need a join to get the name and description, but it allows any number of locales later on. A: Can you describe the nature of the 'dynamic data'? One way to implement this would be to have 3 different tables: * *Language Table * *This table would store the language and a key: [1, English], [2, Spanish] * *Data Definition Table * *When dynamic data is first entered, make a record in this table with an identifier for the data: [1, 'Data1'], [2, 'Data2'] * *Data_Language Table * *This table will link the language, data definition and translation So: [Data_Language, Data_Definition, Language, Translation] [1, 1, 1, 'Red'] [2, 1, 2, 'Rojo'] [3, 2, 1, 'Green'] [4, 2, 2, 'Verde'] etc... When the dynamic data is entered, create the default 'English' record and then translate at your leisure. A: I believe that more information on what you are doing would be helpful. Can you give some samples of the data? And what do you mean by dynamic? That there will be lots of data inserted over time and lots of changes to the data, or that the data only needs to be available for a short period of time? A: In general, you should probably be looking at a parent table with common non-localized data, and a child table with the localized data and the language key. If by dynamic you mean that it changes frequently, you may want to have a look at using triggers and something like a 'translationRequired' flag to mark things that are in need of translation after a change is made.
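A minimal sketch of the Products / Products_Translations split from the first answer, in generic SQL (names and types are illustrative):

CREATE TABLE Products (
    id    INT PRIMARY KEY,
    price DECIMAL(10, 2)
);

CREATE TABLE Products_Translations (
    product_id  INT NOT NULL REFERENCES Products (id),
    locale      VARCHAR(10) NOT NULL,   -- e.g. 'en-US'
    name        VARCHAR(255),
    description VARCHAR(1000),
    PRIMARY KEY (product_id, locale)
);

-- Fetch a product in a given locale:
SELECT p.id, p.price, t.name, t.description
FROM Products p
JOIN Products_Translations t ON t.product_id = p.id
WHERE t.locale = 'en-US';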
{ "language": "en", "url": "https://stackoverflow.com/questions/133313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: In Perforce, can you rename a folder to the same name but cased differently? Can I rename a folder in Perforce from //depot/FooBar/ to //depot/Foobar/? I've tried this by renaming from //depot/FooBar/ to //depot/Temp/ to //depot/Foobar/ but the end result ends up the same as //depot/FooBar/. A: Maybe not needed anymore, but here's the official Perforce HowTo about changing file cases on Windows and Unix: http://answers.perforce.com/articles/KB/3448/?q=change+file+case A: I'm not sure about directories, but we've had this problem with files. To fix it, we have to delete the file, submit that change, then p4 add the file with the correct case and submit the second change. Once that's done, Unix users who have synced the incorrect-case file have to p4 sync, then physically delete the file (because p4 won't update the case) and then p4 sync -f the file. Our server is on Windows, so that might make a difference. A: Once it is in Perforce, the case remains set. As mentioned by Johan you can obliterate, set the name up correctly, and add it in again. However, there is a slight gotcha.... If anyone else (running Windows) has already synced the wrong-cased version, then when they sync the right one again, it will not change the case on their PC. This is a peculiarity of the Windows file system acknowledging case but still being fundamentally case-independent. If a number of users have synced, and it is not convenient to get them to remove-from-client too (and blast the folders from their machines), then you can resort to a dark and dirty Perforce technique called "checkpoint surgery". It's not for the fainthearted, but you do this: * *Stop your server, take a checkpoint. *Using your favourite text editor that can handle multi-megabyte files, search & replace all occurrences of the old case name with the new. You could of course use a script too. *Replay your checkpoint file to recreate the Perforce database metadata. *Restart your server. This will affect all user client specs transparently, and so when they sync they will get the right case as if by magic. It sounds hairy, but I've had to do it before, and as long as you take care, back up, do a trial run, etc., all should be OK. A: I guess it treats files and folders the same. For files: it depends (on whether you have a Windows or Unix server). We have this problem with our Windows Perforce server (which versions our Java code), where very occasionally someone will check in a file with a case problem (this then causes compile errors because it's Java). The only way to fix this is to obliterate the file and resubmit it with the correct case. A: I think you should remove the Perforce cache, so that your modification can be shown. You can rename ABC to abc_TMP, then abc_TMP to abc, then clear the cache. Steps to clear the cache: * *Open the Windows user home folder (on Windows 7 ==> C:\Users\) *Locate the folder called ".p4qt" *Rename the folder to "old.p4qt" *Launch Perforce; now everything works! NOTE: these steps will reset your default settings. A: The question is over 3 years old, but I ran into an issue like this while doing a Subversion import into Perforce and figured the info I got could be useful to some. It's similar to the obliterate method, but helps you retain history. You use the duplicate command, which may not have been available back then, to retain the history. The process is basically: * *Duplicate to a temporary location. *Obliterate the location you just duplicated. *Duplicate from the temporary location to the renamed-case location. *Obliterate the temporary location. Through this you retain the history of file changes, but get them all in the new path as well. Unfortunately there will be no history of the path case change, but that seems to be unavoidable. Similar to other methods mentioned here, users will need to either manually rename the directories in their workspace or delete and re-sync to get the new path name. Also, P4V caches the paths it shows in the tree, so after doing this it may still show up as the old name. A p4 dirs command, however, will show the new case.
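As a rough command-line sketch of that duplicate/obliterate sequence (p4 duplicate only exists in newer server releases, and obliterate is destructive, so rehearse this on a test depot first):

p4 duplicate //depot/FooBar/... //depot/FooBar_TMP/...
p4 obliterate -y //depot/FooBar/...
p4 duplicate //depot/FooBar_TMP/... //depot/Foobar/...
p4 obliterate -y //depot/FooBar_TMP/...

The _TMP path name is illustrative; any scratch depot path works.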
{ "language": "en", "url": "https://stackoverflow.com/questions/133320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Minimize an external application with Delphi Is there a way to minimize an external application that I don't have control over from within my Delphi application? For example notepad.exe, except the application I want to minimize will only ever have one instance. A: You can use FindWindow to find the application handle and ShowWindow to minimize it.

var
  Indicador: Integer;
begin
  // Find the window by class name
  Indicador := FindWindow(PChar('notepad'), nil);
  // if found
  if (Indicador <> 0) then
  begin
    // Minimize
    ShowWindow(Indicador, SW_MINIMIZE);
  end;
end;

A: I'm not a Delphi expert, but if you can invoke Win32 APIs, you can use FindWindow and ShowWindow to minimize a window, even if it does not belong to your app. A: Thanks for this; in the end I used a modified version of Neftali's code. I have included it below in case anyone else has the same issue in the future. FindWindow(PChar('notepad'), nil); was always returning 0, so while looking for the reason why, I found this function that would find the hwnd, and that worked a treat.

function FindWindowByTitle(WindowTitle: string): Hwnd;
var
  NextHandle: Hwnd;
  NextTitle: array[0..260] of char;
begin
  // Get the first window
  NextHandle := GetWindow(Application.Handle, GW_HWNDFIRST);
  while NextHandle > 0 do
  begin
    // retrieve its text
    GetWindowText(NextHandle, NextTitle, 255);
    if Pos(WindowTitle, StrPas(NextTitle)) <> 0 then
    begin
      Result := NextHandle;
      Exit;
    end
    else
      // Get the next window
      NextHandle := GetWindow(NextHandle, GW_HWNDNEXT);
  end;
  Result := 0;
end;

procedure HideExWindow();
var
  Indicador: Hwnd;
begin
  // Find the window by title
  Indicador := FindWindowByTitle('MyApp');
  // if found
  if (Indicador <> 0) then
  begin
    // Hide (or minimize with SW_MINIMIZE)
    ShowWindow(Indicador, SW_HIDE); //SW_MINIMIZE
  end;
end;

A: I guess FindWindow(PChar('notepad'), nil) should be FindWindow(nil, PChar('notepad')) to find the window by title, since the first parameter is the window class name and the second is the window caption.
{ "language": "en", "url": "https://stackoverflow.com/questions/133325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Develop SharePoint web parts in ASP.NET I have been asked to develop some user controls in ASP.NET that will at a later point be pulled into a SharePoint site as web parts. I am new to SharePoint and will not have access to a SharePoint server during the time I need to prototype these parts. Does anyone know of any reasons that this approach will not work? If this approach is not recommended, what would other options be? Any suggestions on a resource/tutorial on what to consider when developing an ASP.NET web part with SharePoint in mind? Thanks Edit: 12/31/2008 I finally marked an answer to this one. It took me a while to realize that going the SharePoint route right away, though painful at first, is the best way to go about it. The free VPC image makes getting set up to develop relatively painless. While you can, as I did, develop web parts in ASP.NET without SharePoint, when it comes to developing and deploying SharePoint applications you haven't learned a thing, only pushed the learning curve off into a time when you think you are done (and have probably informed stakeholders to that effect). To delay the SharePoint learning curve doesn't do you or your project any favors, and your final product will be better for the expertise you gain along the way. A: ASP.NET web parts work in SharePoint the same as they work in ASP.NET. That's the route I would take (a custom control that derives from the ASP.NET Web Part class). This will alleviate any requirement to actually develop on a SharePoint server. The only issue you are going to encounter is that you will not be able to take advantage of the SharePoint framework. If you are doing anything advanced in SharePoint this is a big deal. However, SharePoint is ASP.NET plus some additional functionality, so anything you can develop using the System.Web.UI.WebControls.WebParts.WebPart class should work great in SharePoint. Some considerations that will help ease your pain as you go from pure ASP.NET to SharePoint: * *If you can put everything inside of a single assembly, deployment will be easier * *try to put everything you need into the DLLs that are deployed to SharePoint *use assembly resources to embed JS, CSS, and image files if needed *Strong-name the assembly you are building * *Most SharePoint deployments end up in the GAC, and a strong name will be required Here is a relevant blog post: Developing Basic Web Parts in SharePoint 2007 A: I guess the easiest way is to use the SmartPart for SharePoint from CodePlex. The project description says "The SharePoint web part which can host any ASP.NET web user control. Create your web parts without writing code!", which I guess is exactly what you want to do. A: Setting up my machine to develop for SharePoint took me a couple of days. See http://weblogs.asp.net/erobillard/archive/2007/02/23/build-a-sharepoint-development-machine.aspx A: If it's a very short-term thing, Microsoft has a time-limited WSS evaluation VPC image: WSS3 SP1 Developer Evaluation VPC image. That will get you started if you don't have time/resources to set up your own VPC image right now. A: You need access to a SharePoint server because you can't simulate your web part without it; you have to deploy it to your SharePoint site to test that it's working. Debugging would also be a pain. Or you can use SmartPart, a web part that acts like a wrapper for your user controls to display in a SharePoint site. A: Build and test the control as you would for a typical .NET web site.
Solution 1 = the controls. Solution 2 = a dummy website to host the controls. Deployment on SharePoint: You'll need to sign the controls. Drop the signed DLL into the GAC on the SharePoint server (Windows/assembly). Mark the control as safe in the virtual server root web.config on the SharePoint site, i.e.: <SafeControl Assembly="MyControl, Version=1.0.0.0, Culture=neutral, PublicKeyToken=975cc42deafbee31" Namespace="MyNamespace" TypeName="*" Safe="True" AllowRemoteDesigner="True" /> Register the component in your SharePoint page: <%@ Register Namespace="MyNamespace" Assembly="MyControl, Version=1.0.0.0, Culture=Neutral, PublicKeyToken=975cc42deafbee31" TagPrefix="XXXX" %> Use the control: <XXXX:ClassName runat="server" Field1="Value1" Field2="Value2" ....></XXXX:ClassName> If you need to replace the control using the same version number, then you'll need to recycle the app pool to reload it. A: If you don't need to do anything SharePoint-specific (i.e. accessing lists, other web parts, etc.) then you can build your web part just like a regular web part (derived from the System.Web.UI.WebControls.WebParts.WebPart class) and it will work when added to a SharePoint site. A: You do not need SharePoint to develop web parts. You can develop web parts by inheriting from System.Web.UI.WebControls.WebParts.WebPart, and this is the preferable way of creating web parts unless you want features like the following: * Connections between web parts that are outside of a Web Part zone * Cross-page connections * A data caching infrastructure that allows caching to the content database * Client-side connections (Web Part Page Services Component) In which case you need to develop web parts by inheriting from Microsoft.SharePoint.WebPartPages.WebPart. You can find more useful info here A: Is there any particular reason why your user controls must be deployed as web parts? It is perfectly feasible to deploy user controls directly to SharePoint sites, either through the CONTROLTEMPLATES folder in the 12 hive or to a location in the web app virtual directory, which you can then reference from web pages using SharePoint Designer. If however the web part requirement is crucial, then I recommend SmartPart for SharePoint as already mentioned. A: Actually, Web Parts should always be deployed to the SharePoint bin folder due to their 'abusive' nature. Always deploy web parts to the bin if possible and write your own CAS and include it in your manifest.
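To make the plain-ASP.NET route above concrete, here is a minimal sketch of a web part built only against System.Web (the class name and text are illustrative); because it never touches the SharePoint API, it can be prototyped in an ordinary ASP.NET site and deployed to SharePoint later:

using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;

namespace MyNamespace
{
    public class HelloWebPart : WebPart
    {
        protected override void CreateChildControls()
        {
            // Compose the part out of ordinary ASP.NET controls.
            Controls.Add(new Label { Text = "Hello from a plain ASP.NET web part" });
        }
    }
}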
{ "language": "en", "url": "https://stackoverflow.com/questions/133328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Programmatically delete emails and SMSs on a Windows Mobile device I'm looking for a code snippet that can delete all emails and text messages on a Windows Mobile device. Preferably the code would delete items in the Sent and Draft folders as well as the Inbox. My platform is Windows Mobile (5.0 SDK) and the .NET 2.0 Compact Framework (C# / VB.NET). A: Unfortunately Microsoft has not made this easy for managed developers. Why the WindowsMobile.PocketOutlook class wrappers don't provide this functionality, one can only guess. What you have to do is write your own COM interop object to MAPI. Sorry, I don't have one to give you as a sample, but I can at least give you pointers to the methods you'll be interested in: * *IMAPI::GetMsgStoresTable *IMAPISession::OpenMessageStore *IMsgStore::OpenEntry *IMAPIFolder::DeleteMessages InTheHand has a wrapper that has additional methods for POOM, but I've never used it and I don't know if it has anything that does what you need. Might be worth a look, though, before embarking on rolling this yourself.
{ "language": "en", "url": "https://stackoverflow.com/questions/133330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Subclipse error message "Expected format '3' of repository; found format '5'" I installed Subclipse in Eclipse, but I get the error message "Expected format '3' of repository; found format '5'" when I try to open a repository. Here is the sequence of steps that leads to the error message. Select "Window -> Open Perspective -> SVN Repository Exploring" from the Eclipse main menu. Right-click on the "SVN Repository" tab. Select "New -> Repository Location..." from the pop-up menu. The "Add SVN Repository" panel appears. Enter "file:///Users/caylespandon/svn/MyProject" in the "Url" field. Click on the "Finish" button. A panel with the following error message appears: Unable to Validate Error validating location: "org.tigris.subversion.javahl.ClientException: Couldn't open a repository svn: Unable to open an ra_local session to URL svn: Unable to open repository 'file:///Users/caylespandon/svn/MyProject' Unsupported repository version svn: Expected format '3' of repository; found format '5' " Note that I can access the same repository from the command line just fine: ~> svn checkout file:///Users/caylespandon/svn/MyProject A MyProject/trunk A MyProject/trunk/Jamrules A MyProject/trunk/.project A MyProject/trunk/setenv [...] Here is the version information: Eclipse: version 3.4.0 build id I20080617-2000 Subclipse version: 1.2.0 SVN version: 1.4.4 (r25188) Running on a Mac: OS X version 10.5.4 PS -- If your answer involves switching from file to svn+ssh, please explain why and how to convert an existing repository from file to svn+ssh without losing any history. A: Just guessing here, but make sure your version of the libsvnjavahl library is the same as the version of SVN you're using. A: Have a look at these answers to a similar problem. A: (Answering myself) I ended up picking the solution suggested by Cory Engebretson, which is to use Subversive instead of Subclipse. I did some googling to see if one is better than the other, and they seem to be pretty much equivalent; some like one and some the other. I found the help (particularly the installation instructions) for Subversive clearer, and I was able to get it to work without too much trouble. A: I can't help on your posted problem, but I would recommend trying Subversive instead. I made the switch out of frustration with some Subclipse bugs and have been much happier. It does take a bit more work to install. Eclipse Subversive Project A: The root of the problem is that you are using an old SVN client that does not understand the newer format (5) of the SVN repository.
{ "language": "en", "url": "https://stackoverflow.com/questions/133335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What's the difference between a "Data Service Layer" and a "Data Access Layer"? I remember reading that one abstracts the low-level calls into a data-agnostic framework (e.g. ExecuteCommand methods, etc.), and the other usually contains business-specific methods (e.g. UpdateCustomer). Is this correct? Which is which? A: Here's another perspective deep from the trenches! A Data Access Layer is a software abstraction layer which hides the complexity / implementation of actually getting the data. The application asks the Data Access Layer (see the DAO design pattern) to "get me this" or "update that" etc. (indirection). The Data Access Layer is responsible for performing implementation-specific operations, such as reading/updating various data sources: Oracle, MySQL, Cassandra, RabbitMQ, Redis, a simple file system, a cache, or even delegating to another Data Service Layer. If all this work happens inside a single machine and in the same application, the term Data Service Layer is equivalent to a Service Facade (indirection). It is responsible for servicing and delegating application calls to the correct Data Access Layer. Somewhat confusingly, in a distributed computing world, or Service-Oriented Architecture, a Data Service Layer can actually be a web service that acts as a standalone application. In this context, the Data Service Layer delegates received upstream application data requests to the correct Data Access Layer. In this instance, web services are indirecting data access from applications: the application only needs to know what service to call to get the data, so as a rule of thumb, in distributed computing environments, this approach will reduce application complexity (and there will always be exceptional cases). So just to be clear, the application uses a DSL and a DAL. The DSL in the app should talk to a DAL in the same application. DALs have the choice of using a single data source, or delegating to another web service. Indeed, it's possible for a single web service request to use a number of data sources in order to respond with the data. With all that said, from a pragmatic perspective, it's only when systems become increasingly complex that more attention should be paid to architectural patterns. It's good practice to do things right, but there's no point in unnecessarily gold-plating your work. Remember YAGNI? Well, that fails to resonate come the time it's needed! To conclude: A famous aphorism of David Wheeler goes: "All problems in computer science can be solved by another level of indirection";[1] this is often deliberately misquoted with "abstraction layer" substituted for "level of indirection". Kevlin Henney's corollary to this is, "...except for the problem of too many layers of indirection." A: To me this is a personal design decision on how you want to handle your project design. At times data access and data service are one and the same. For .NET and LINQ that is the case. To me, the data service layer is what actually makes the call to the database. The data access layer receives the objects and creates or modifies them for the data service layer to make the call to the database. In my designs the Business Logic Layer manipulates the objects based on the business rules, then passes them to the data access layer, which formats them to go into the database (or receives the objects from the database), and the data service layer handles the actual database call.
A: I think in general the two terms are interchangeable, but could have more specific meanings depending on the context of your development environment. A Data Access Layer sits on the border between data and the application. The "data" is simply the diverse set of data sources used by the application. This can mean that substantial coding must be done in each application to pull data together from multiple sources. The code which creates the required data views will be redundant across some applications. As the number of data sources grows and becomes more complex, it becomes necessary to isolate various tasks of data access to address the details of data access, transformation, and integration. With well-designed data services, Business Services will be able to interact with data at a higher level of abstraction. The data logic that handles data access, integration, semantic resolution, transformation, and restructuring to address the data views and structures needed by applications is best encapsulated in the Data Services Layer. It is possible to break the Data Services Layer down even further into its constituent parts (i.e. data access, transformation, and integration). In such a case you might have a "Data Access Layer" that concerns itself only with retrieving data, and a "Data Service Layer" that retrieves its data through the Data Access Layer and combines and transforms the retrieved data into the various objects required by the Business Service Layer. A: The Data Service Layer concept as presented in the WebSphere Commerce documentation is straightforward: The data service layer (DSL) provides an abstraction layer for data access that is independent of the physical schema. The purpose of the data service layer is to provide a consistent interface (called the data service facade) for accessing data, independent of the object-relational mapping framework. Currently on the internet the DSL concept is mainly associated with SOAs (Service-Oriented Architectures), but not exclusively. Here it is mentioned in an example of N-tier applications.
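A tiny C# sketch of the split the previous answer describes (all names are illustrative): the Data Access Layer retrieves from one concrete source, while the Data Service Layer is the facade the business code talks to and may combine or transform results from several DALs.

// Data Access Layer: low-level retrieval from one source.
public interface ICustomerDao
{
    Customer GetById(int id);
}

public class SqlCustomerDao : ICustomerDao
{
    public Customer GetById(int id)
    {
        // ... execute SQL here and map the row to a Customer ...
        return new Customer { Id = id };
    }
}

// Data Service Layer: the consistent interface callers use; it can
// aggregate, transform, or delegate to several DALs behind one facade.
public class CustomerDataService
{
    private readonly ICustomerDao _dao = new SqlCustomerDao();

    public Customer GetCustomer(int id)
    {
        return _dao.GetById(id);
    }
}

public class Customer
{
    public int Id { get; set; }
}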
{ "language": "en", "url": "https://stackoverflow.com/questions/133350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How do you find the namespace/module name programmatically in Ruby on Rails? How do I find the name of the namespace or module 'Foo' in the filter below?

class ApplicationController < ActionController::Base
  def get_module_name
    @module_name = ???
  end
end

class Foo::BarController < ApplicationController
  before_filter :get_module_name
end

A: This would work if the controller did have a module name, but would return the controller name if it did not.

class ApplicationController < ActionController::Base
  def get_module_name
    @module_name = self.class.name.split("::").first
  end
end

However, if we change this up a bit to:

class ApplicationController < ActionController::Base
  def get_module_name
    my_class_name = self.class.name
    if my_class_name.index("::").nil? then
      @module_name = nil
    else
      @module_name = my_class_name.split("::").first
    end
  end
end

you can determine whether the class has a module name or not, and return something other than the class name that you can test for. A: For the simple case, you can use: self.class.parent A: This should do it:

def get_module_name
  @module_name = self.class.to_s.split("::").first
end

A: I know this is an old thread, but I just came across the need to have separate navigation depending on the namespace of the controller. The solution I came up with was this in my application layout: <%= render "#{controller.class.name[/^(\w*)::\w*$/, 1].try(:downcase)}/nav" %> which looks a bit complicated but basically does the following: it takes the controller class name, which would be, for example, "People" for a non-namespaced controller and "Admin::Users" for a namespaced one, uses the [] string method with a regular expression to return anything before the two colons (or nil if there's nothing), and then changes that to lower case (the "try" is there in case there is no namespace and nil is returned). This then leaves us with either the namespace or nil. Then it simply renders the partial with or without the namespace, for example no namespace: app/views/_nav.html.erb or in the admin namespace: app/views/admin/_nav.html.erb Of course these partials have to exist for each namespace, otherwise an error occurs. Now the navigation for each namespace will appear for every controller without having to change any controller or view. A: my_class.name.underscore.split('/').slice(0..-2) or my_class.name.split('::').slice(0..-2) A: With many sub-modules:

module ApplicationHelper
  def namespace
    controller.class.name.gsub(/(::)?\w+Controller$/, '')
  end
end

Example: Foo::Bar::BazController => Foo::Bar A: No one has mentioned using rpartition?

const_name = 'A::B::C'
namespace, _sep, module_name = const_name.rpartition('::')
# or if you just need the namespace
namespace = const_name.rpartition('::').first

A: For Rails 6.1: self.class.module_parent Hettomei's answer works fine up to Rails 6.0: DEPRECATION WARNING: Module#parent has been renamed to module_parent. parent is deprecated and will be removed in Rails 6.1. A: None of these solutions consider a constant with multiple parent modules.
For instance: A::B::C As of Rails 3.2.x you can simply: "A::B::C".deconstantize #=> "A::B" As of Rails 3.1.x you can: constant_name = "A::B::C" constant_name.gsub( "::#{constant_name.demodulize}", '' ) This is because #demodulize is the opposite of #deconstantize: "A::B::C".demodulize #=> "C" If you really need to do this manually, try this: constant_name = "A::B::C" constant_name.split( '::' )[0,constant_name.split( '::' ).length-1] A: I don't think there is a cleaner way, and I've seen this somewhere else:

class ApplicationController < ActionController::Base
  def get_module_name
    @module_name = self.class.name.split("::").first
  end
end

A: I recommend gsub instead of split. It's more efficient than split, given that you don't need the other module names.

class ApplicationController < ActionController::Base
  def get_module_name
    @module_name = self.class.to_s.gsub(/::.*/, '')
  end
end
{ "language": "en", "url": "https://stackoverflow.com/questions/133357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "64" }
Q: How do you handle strings in C++? Which is your favorite way to go with strings in C++? A C-style array of chars? Or wchar_t? CString, std::basic_string, std::string, BSTR or CComBSTR? Certainly each of these has its own area of application, but anyway, which is your favorite and why? A: std::string !! There's a reason why they call it a "Standard". basic_string is an implementation detail and should be ignored. BSTR & CComBSTR are only for interop with COM, and only for the moment of interop. A: std::string unless I need to call an API that specifically takes one of the others that you listed. A: Here's an article comparing the most common kinds of strings in C++ and how to convert between them. Unraveling Strings in Visual C++ A: If you can use MFC, use CString. Otherwise use std::string. Plus, std::string works on any platform that supports standard C++. A: std::string or std::wstring, depending on your needs. Why? * *They're standard *They're portable *They can handle I18N *They have performance guarantees (as per the standard) *Protected against buffer overflows and similar attacks *Are easily converted to other types as needed *Are nicely templated, giving you a wide variety of options while reducing code bloat and improving performance. Really. Compilers that can't handle templates are long gone now. A C-style array of chars is just asking for trouble. You'll still need to deal with them on occasion (and that's what std::string.c_str() is for), but, honestly -- one of the biggest dangers in C is programmers doing Bad Things with char* and winding up with buffer overflows. Just don't do it. An array of wchar_t is the same thing, just bigger. CString, BSTR, and CComBSTR are not standard and not portable. Avoid them unless absolutely forced. Optimally, just convert a std::string/std::wstring to them when needed, which shouldn't be very expensive. Note that std::string is just a typedef for a specialization of std::basic_string, but you're still better off using std::string unless you have a really good reason not to. Really good. Let the compiler take care of the optimization in this situation. A: When I have a choice (I usually don't), I tend to use std::string with UTF-8 encoding (and the help of the UTF8-CPP library). Not that I like std::string that much, but at least it is standard and portable. Unfortunately, in almost all real-life projects I've worked on, there have been internal string classes; most of them actually better than std::string, but still... A: I am a Qt dev, so of course I tend to use QString whenever possible :). It's quite nice: Unicode compliant, thread-safe implicit sharing (aka copy-on-write), and it comes with an API designed to solve practical real-world problems (split, join, replace (with and without regex), conversion to/from numbers...) If I can't use QString, then std::wstring. If you are stuck with C, I recommend glib's GString. A: C-style char arrays have their place, but if you use them extensively you are asking to waste time debugging off-by-one errors. We have our own string class tailored for use in our embedded development environment. We don't use std::string because it isn't always available for us. A: I use std::string (or basic_string<TCHAR>) whenever I can. It's quite versatile (just like CStringT), it's type-safe (unlike printf), and it's available on every platform. A: Other: std::wstring. std::string is 20th-century technology. Use Unicode, and sell to 6 billion people instead of 300 million. A: If you're using MFC, use CString.
Otherwise I agree with most of the others: std::string or std::wstring all the way. Microsoft could have done the world a huge favor by adding std::basic_string<TCHAR> overloads in their latest update of MFC.

A: I like to use TCHAR, which is a define for wchar_t or char according to the project's settings. It's defined in tchar.h, where you can find all of the related definitions for the functions and types you need.

A: std::string and std::wstring if I can, and something else if I have to. They may not be perfect, but they are well tested, well understood, and very versatile. They play nicely with the rest of the standard library, which is also a huge bonus. Also worth mentioning: stringstreams.

A: Unicode is the future. Do not use char* and std::string. Please ) I am tired of localization bugs.

A: std::string is better than nothing, but it's annoying that it's missing basic functionality like split, join and even a decent format call...
{ "language": "en", "url": "https://stackoverflow.com/questions/133364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: .NET: SqlDataReader.Close or .Dispose results in Timeout Expired exception When trying to call Close or Dispose on an SqlDataReader I get a timeout expired exception. If you have a DbConnection to SQL Server, you can reproduce it yourself with:

String CRLF = "\r\n";
String sql = "SELECT * " + CRLF +
    "FROM (" + CRLF +
    "    SELECT (a.Number * 256) + b.Number AS Number" + CRLF +
    "    FROM master..spt_values a," + CRLF +
    "         master..spt_values b" + CRLF +
    "    WHERE a.Type = 'p'" + CRLF +
    "    AND b.Type = 'p') Numbers1" + CRLF +
    "    FULL OUTER JOIN (" + CRLF +
    "    SELECT (a.Number * 256) + b.Number AS Number" + CRLF +
    "    FROM master..spt_values a," + CRLF +
    "         master..spt_values b" + CRLF +
    "    WHERE a.Type = 'p'" + CRLF +
    "    AND b.Type = 'p') Numbers2" + CRLF +
    "    ON 1=1";

DbCommand cmd = connection.CreateCommand();
cmd.CommandText = sql;
DbDataReader rdr = cmd.ExecuteReader();
rdr.Close();

If you call reader.Close() or reader.Dispose() it will throw a System.Data.SqlClient.SqlException:

* ErrorCode: -2146232060 (0x80131904)
* Message: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding."

A: Cruizer had the answer: call command.Cancel():

using (DbCommand cmd = connection.CreateCommand())
{
    cmd.CommandText = sql;
    using (DbDataReader rdr = cmd.ExecuteReader())
    {
        while (rdr.Read())
        {
            if (WeShouldCancelTheOperation())
            {
                cmd.Cancel();
                break;
            }
        }
    }
}

It is also helpful to know that you can call Cancel even if the reader has already read all the rows (i.e. it doesn't throw some "nothing to cancel" exception.)

DbCommand cmd = connection.CreateCommand();
try
{
    cmd.CommandText = sql;
    DbDataReader rdr = cmd.ExecuteReader();
    try
    {
        while (rdr.Read())
        {
            if (WeShouldCancelTheOperation())
                break;
        }
        cmd.Cancel();
    }
    finally
    {
        rdr.Dispose();
    }
}
finally
{
    cmd.Dispose();
}

A: It's because you have just opened the data reader and have not completely iterated through it yet. You will need to .Cancel() your DbCommand object before you attempt to close a data reader that hasn't completed yet (and the DbConnection as well). Of course, by .Cancel()-ing your DbCommand you might encounter some other exception (I'm not sure of this), but you should just catch it if it happens.

A: Where do you actually read the data? You're just creating a reader, but not reading data. It's just a guess, but maybe the reader has problems closing if you're not reading ;)

DbDataReader rdr = cmd.ExecuteReader();
while (rdr.Read())
{
    int index = rdr.GetInt32(0);
}
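As a hedged sketch, the cancel-before-dispose pattern above can be wrapped in a small helper so it is not forgotten at call sites. The helper name and the row-limit parameter are illustrative, not from the thread:

using System;
using System.Data.Common;

public static class ReaderHelper
{
    // Reads at most maxRows rows, then cancels the command so that
    // disposing the reader returns immediately instead of draining
    // (or timing out on) the remaining result set.
    public static void ReadAndAbandon(DbCommand cmd, int maxRows)
    {
        using (DbDataReader rdr = cmd.ExecuteReader())
        {
            int rows = 0;
            while (rows < maxRows && rdr.Read())
            {
                rows++;
            }
            // Safe even if the reader is already exhausted.
            cmd.Cancel();
        }
    }
}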
{ "language": "en", "url": "https://stackoverflow.com/questions/133374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Elevating process privilege programmatically? I'm trying to install a service using InstallUtil.exe but invoked through Process.Start. Here's the code:

ProcessStartInfo startInfo = new ProcessStartInfo(m_strInstallUtil, strExePath);
System.Diagnostics.Process.Start(startInfo);

where m_strInstallUtil is the fully qualified path and exe to "InstallUtil.exe" and strExePath is the fully qualified path/name to my service. Running the command line syntax from an elevated command prompt works; running from my app (using the above code) does not. I assume I'm dealing with some process elevation issue, so how would I run my process in an elevated state? Do I need to look at ShellExecute for this? This is all on Windows Vista. I am running the process in the VS2008 debugger elevated to admin privilege. I also tried setting startInfo.Verb = "runas"; but it didn't seem to solve the problem.

A: [PrincipalPermission(SecurityAction.Demand, Role = @"BUILTIN\Administrators")]

This will do it without UAC - no need to start a new process. This works if the running user is a member of the Admin group, as in my case.

A: This code puts it all together and restarts the current WPF app with admin privs:

if (IsAdministrator() == false)
{
    // Restart program and run as admin
    var exeName = System.Diagnostics.Process.GetCurrentProcess().MainModule.FileName;
    ProcessStartInfo startInfo = new ProcessStartInfo(exeName);
    startInfo.Verb = "runas";
    System.Diagnostics.Process.Start(startInfo);
    Application.Current.Shutdown();
    return;
}

private static bool IsAdministrator()
{
    WindowsIdentity identity = WindowsIdentity.GetCurrent();
    WindowsPrincipal principal = new WindowsPrincipal(identity);
    return principal.IsInRole(WindowsBuiltInRole.Administrator);
}

// To run as admin, alter the exe manifest file after building.
// Or create a shortcut with "as admin" checked.
// Or ShellExecute (C# Process.Start) can elevate - use verb "runas".
// Or an elevate vbs script can launch programs as admin.
// (does not work: "runas /user:admin" from cmd-line prompts for admin pass)

Update: The app manifest way is preferred: right-click the project in Visual Studio, Add, New Application Manifest File, then change the file so you have requireAdministrator set as shown in the above. A problem with the original way: if you put the restart code in app.xaml.cs OnStartup, it may still start the main window briefly even though Shutdown was called. My main window blew up if app.xaml.cs init was not run, and in certain race conditions it would do this.

A: I know this is a very old post, but I just wanted to share my solution:

System.Diagnostics.ProcessStartInfo StartInfo = new System.Diagnostics.ProcessStartInfo
{
    UseShellExecute = true, //<- for elevation
    Verb = "runas",         //<- for elevation
    WorkingDirectory = Environment.CurrentDirectory,
    FileName = "EDHM_UI_Patcher.exe",
    Arguments = @"\D -FF"
};
System.Diagnostics.Process p = System.Diagnostics.Process.Start(StartInfo);

NOTE: If Visual Studio is already running elevated then the UAC dialog won't show up; to test it, run the exe from the bin folder.

A: According to the article Chris Corio: Teach Your Apps To Play Nicely With Windows Vista User Account Control, MSDN Magazine, Jan. 2007, only ShellExecute checks the embedded manifest and prompts the user for elevation if needed, while CreateProcess and other APIs don't. Hope it helps. See also: same article as .chm.
A: You can indicate the new process should be started with elevated permissions by setting the Verb property of your startInfo object to 'runas', as follows:

startInfo.Verb = "runas";

This will cause Windows to behave as if the process has been started from Explorer with the "Run as Administrator" menu command. This does mean the UAC prompt will come up and will need to be acknowledged by the user: if this is undesirable (for example because it would happen in the middle of a lengthy process), you'll need to run your entire host process with elevated permissions by Create and Embed an Application Manifest (UAC) to require the 'highestAvailable' execution level: this will cause the UAC prompt to appear as soon as your app is started, and cause all child processes to run with elevated permissions without additional prompting.

Edit: I see you just edited your question to state that "runas" didn't work for you. That's really strange, as it should (and does for me in several production apps). Requiring the parent process to run with elevated rights by embedding the manifest should definitely work, though.

A: You should use Impersonation to elevate the state.

WindowsIdentity identity = new WindowsIdentity(accessToken);
WindowsImpersonationContext context = identity.Impersonate();

Don't forget to undo the impersonated context when you are done.
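One detail worth checking, given the question's report that "runas" didn't work (this is an assumption about the poster's setup, not something stated in the thread): the Verb property is only honored when the process is started through the shell, i.e. when UseShellExecute is true. A minimal sketch; the paths are placeholders:

using System.Diagnostics;

class Elevation
{
    static void RunInstallUtilElevated(string installUtilPath, string servicePath)
    {
        ProcessStartInfo startInfo =
            new ProcessStartInfo(installUtilPath, "\"" + servicePath + "\"");
        startInfo.UseShellExecute = true; // required for Verb to take effect
        startInfo.Verb = "runas";         // triggers the UAC elevation prompt
        using (Process p = Process.Start(startInfo))
        {
            p.WaitForExit(); // block until InstallUtil finishes
        }
    }
}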
{ "language": "en", "url": "https://stackoverflow.com/questions/133379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "165" }
Q: Configure db used for ASP.Net Authentication I want to use forms authentication in my asp.net mvc site. Can I use an already existing sql db (on a remote server) for it? How do I configure the site to use this db for authentication? Which tables do I need/are used for authentication?

A: You can. Check the aspnet_regsql.exe program parameters in your Windows\Microsoft.NET\Framework\v2.xxx folder, especially sqlexportonly. After creating the needed tables, you can configure it: create a connection string in the web.config file and then set up the MembershipProvider to use this connection string:

<connectionStrings>
  <add name="MyLocalSQLServer" connectionString="Initial Catalog=aspnetdb;data source=servername;uid=whatever;pwd=whatever;"/>
</connectionStrings>
<authentication mode="Forms">
  <forms name="SqlAuthCookie" timeout="10" loginUrl="Login.aspx"/>
</authentication>
<authorization>
  <deny users="?"/>
  <allow users="*"/>
</authorization>
<membership defaultProvider="MySqlMembershipProvider">
  <providers>
    <clear/>
    <add name="MySqlMembershipProvider" connectionStringName="MyLocalSQLServer" applicationName="MyAppName" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
  </providers>
</membership>

PS: There are some very good articles about the whole concept here.

A: The easiest manner is to just use the Windows interface for the aspnet_regsql.exe application. You can find it in the c:\windows\microsoft.net\framework\v2.0.50727 folder. Just run aspnet_regsql.exe; it will then open a wizard. This way you don't need to remember any command line switches.
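Once the provider is configured, validating a login from code is a one-liner against the Membership API. A minimal sketch (the class and method names here are illustrative, not part of the framework):

using System.Web.Security;

public static class LoginHelper
{
    // Checks the credentials against the configured SqlMembershipProvider
    // and issues the forms-authentication cookie on success.
    public static bool SignIn(string userName, string password, bool persist)
    {
        if (Membership.ValidateUser(userName, password))
        {
            FormsAuthentication.SetAuthCookie(userName, persist);
            return true;
        }
        return false;
    }
}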
{ "language": "en", "url": "https://stackoverflow.com/questions/133390", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Password encryption in Delphi I need to store database passwords in a config file. For obvious reasons, I want to encrypt them (preferably with AES). Does anyone know a Delphi implementation that is easy to introduce into an existing project with > 10,000 lines of historically grown (URGH!) source code? Clarification: Easy means adding the unit to the project, adding max. 5 lines of code where the config file is read and be done with it. Should not take more than 15 minutes. Another clarification: The password is needed in order to create a connection to the db, not to support a user management scheme for the application. So using hashes does not help. The db engine checks if the password is valid, not the app.

A: I think TurboPower LockBox is an excellent library for cryptography: http://sourceforge.net/projects/tplockbox/ I don't know if it's too big for your uses, but it is very easy to use and you can encrypt a string with 5 lines of code. It is all in the examples.

A: TOndrej has the right approach. You should never store a password using a reversible cypher. As it was correctly pointed out, if your "master" key were ever compromised, the entire system is compromised. Using a non-reversible hash, such as MD5, is much more secure and you can store the hashed value as clear text. Simply hash the entered password and then compare it with the stored hash.

A: I've always used TurboPower LockBox. It works well and is very easy to use. I actually use it for exactly the same thing, storing a password in a config text file. http://sourceforge.net/projects/tplockbox/

A: TurboPower LockBox 3 (http://lockbox.seanbdurkin.id.au/) uses automatic salting. I recommend against Barton's DCPCrypt because the IVs are not salted. In some situations this is a very serious security flaw. Contrary to an earlier comment, LB3's implementation of AES is fully compliant with the standard.

A: I've used this library; it was really quick to add. But the wiki shows a few more solutions.

A: Even if you encrypt, it seems to me that your decryption key as well as the encrypted password will both be in your executable, which means it is nothing more than security by obscurity. Anyone can take the decryption key and the encrypted passwords and generate the raw passwords. What you want is a one-way hash.

A: I second the recommendation for David Barton's DCPCrypt library. I've used it successfully in several projects, and it won't take more than 15 minutes after you've read the usage examples. It uses the MIT license, so you can use it freely in commercial projects and otherwise. DCPCrypt implements a number of algorithms, including Rijndael, which is AES. There are many googlable stand-alone (single-unit) implementations too - the question is which one you trust, unless you are prepared to verify the correctness of a particular library yourself.

A: For typical authentication purposes, you don't need to store the passwords, you only need to check if the password entered by the user is correct. If that's your case then you can just store a hash signature (e.g. MD5) instead and compare it with the signature of the entered password. If the two signatures match, the entered password is correct. Storing encrypted passwords may be dangerous because if someone gets your "master" password they can retrieve the passwords of all your users. If you decide to use MD5 you can use MessageDigest_5.pas which comes with Delphi (at least it's included with my copy of Delphi 2007).
There are also other implementations with Delphi source code you can choose from.

A: Just a reminder. If you don't need to interoperate with other crypto libs, then DCP or LockBox will do the job. BUT if you need it to be fully compliant with the Rijndael specs, forget free components; they're kinda "lousy" most of the time.

A: As others have pointed out, for authentication purposes you should avoid storing the passwords using reversible encryption, i.e. you should only store the password hash and check the hash of the user-supplied password against the hash you have stored. However, that approach has a drawback: it's vulnerable to rainbow table attacks, should an attacker get hold of your password store database. What you should do is store the hashes of a pre-chosen (and secret) salt value + the password. I.e., concatenate the salt and the password, hash the result, and store this hash. When authenticating, do the same - concatenate your salt value and the user-supplied password, hash, then check for equality. This makes rainbow table attacks unfeasible. Of course, if the user sends passwords across the network (for example, if you're working on a web or client-server application), then you should not send the password in clear text across, so instead of storing hash(salt + password) you should store and check against hash(salt + hash(password)), and have your client pre-hash the user-supplied password and send that one across the network. This protects your user's password as well, should the user (as many do) re-use the same password for multiple purposes.

A: I recommend using some type of salt. Do not store crypt(password) in the config file; instead store crypt(salt + password). As 'salt' you can use something that is required to open the database, e.g. db_name + user_name. For the crypt function you can use some well-known algorithm such as AES, IDEA or DES, or something as simple as XORing each byte with a byte from some other string; that string will be your key. To make it more difficult to break you can use some random bytes, and store them. So to store:

* init_str := 5 random bytes
* new_password := salt + password // salt := db_name + user_name
* crypted_password := xor_bytes(init_str + new_password, 'my keyphrase')
* crypted_password := init_str + crypted_password
* store crypted_password in the config; as this will be bytes, you can hexify or base64 it

And to connect:

* split the data read from the config into init_str and crypted_password
* new_password := xor_bytes(init_str + crypted_password, 'my keyphrase')
* password := remove (db_name + user_name) from new_password

A: Nick is of course right - I just assume you know what you are doing when you say you want to spend all of 15 minutes on implementing a security solution. The DCPCrypt library also implements a number of hashing algorithms if you decide to go that (better) route.

A: A couple of solutions:

* Don't store the password at all. If the database supports integrated authentication, use it. The process can be set to run with a specific identity, and be automatically authenticated by the database.
* Use Windows certificate stores and a certificate to encrypt your password. If you store the key used to encrypt your password in your application, you have very little security anyway; you have to protect the key also.

A: You need to store it in a place that only the current user has access to. Basically there are two ways to do this:

* Store it in an EFS encrypted file.
* Store it in the secure local storage.

Internet Explorer uses 2.
But if you can get local access, you can decrypt both 1. and 2. if you have the right master key and algorithm (for instance, iepv can get at the Internet Explorer passwords). So: if you can, avoid storing passwords. Look for alternatives (like Windows authentication, directory services, etc.) first. --jeroen

A: A simple but, for most applications, strong enough system is given by this Embarcadero demo: https://edn.embarcadero.com/article/28325
{ "language": "en", "url": "https://stackoverflow.com/questions/133393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: How do I set the Content-type in Joomla? I am developing a Joomla component and one of the views needs to render itself as PDF. In the view, I have tried setting the content type with the following line, but when I see the response, it is text/html anyway.

header('Content-type: application/pdf');

If I do this in a regular PHP page, everything works as expected. It seems that I need to tell Joomla to use application/pdf instead of text/html. How can I do it? Note: Setting other headers, such as Content-Disposition, works as expected.

A: Since version 1.5 Joomla has the JDocument object. Use JDocument::setMimeEncoding() to set the content type.

$doc =& JFactory::getDocument();
$doc->setMimeEncoding('application/pdf');

In your special case, a look at JDocumentPDF may be worthwhile.

A: For those of you thinking that the above is a very old answer, I confirm that JDocument::setMimeEncoding() still works, even on the 1.6 version (haven't tried it on 1.7 yet).

A: I had the same problem in Joomla 2.5. After 8 hours of clicking around in the Joomla admin panel I found a solution.

* Log into your Joomla admin panel and click on Media Manager
* Click the Options button in the top right hand corner. This opens a configuration tab with various options
* In the box for legal file extensions, add application/pdf or whatever you need. Values are separated by a comma. Note, apparently you have to list things in alphabetical order, according to a forum I just found
* Click the Save button

You should now be able to load PDFs into Media Manager. Hope this works for you. I just set mine up to upload .mov extensions. The problem is it only worked once. Now the browse link doesn't work whenever I navigate to a movie .mov file on my hard drive. But it does if I select any other file type?
{ "language": "en", "url": "https://stackoverflow.com/questions/133394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What is an orthogonal index? A table in my area of responsibility of our product has been criticised as having more than one orthogonal index. What is an orthogonal index? Why is it bad? How can the situation be avoided? --Update-- The back-end database engine isn't necessarily relevant here, as our application is database-agnostic. But if it helps, Oracle is one possibility. The table in question isn't used for financial analysis.

A: Orthogonal means independent of each other. No idea why it would be bad. In fact, I usually use secondary indexes (besides the 'id' autoincrement primary key) when there's a common query that has nothing to do with the primary one.

A: Orthogonal simply means independent, i.e. unrelated to the main concern.

A: I believe I have heard the term 'orthogonal index' on two separate occasions, but I have no further knowledge of whether that is an acknowledged term. On both occasions, they were referring to an index that wasn't to be used by the query optimiser, since:

* on one occasion, it indexed a completely irrelevant column, never used as search criteria or as a sorting column; the term sounded "outlandish" to me, but I didn't object :)
* on the other occasion, it was a two-column index, but the order of the columns in the index was the reverse of the one needed.

I have absolutely no idea if this is relevant to your question :)

A: I might be answering my own question here, but feel free to jump in with your own thoughts. Javier's answer (+1) led me on to think that maybe the point here is that having more than one unique index could be a bad thing if the items in that index are completely unrelated. In other words, you're increasing the chances of real data being impossible to store because of a secondary index's uniqueness constraints. It would also potentially introduce artificial constraints on the data that shouldn't necessarily be there.
{ "language": "en", "url": "https://stackoverflow.com/questions/133418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do you handle small sets of data? With really small sets of data, the policy where I work is generally to stick them into text files, but in my experience this can be a development headache. Data generally comes from the database, and when it doesn't, the process involved in setting/storing it is generally hidden in the code. With the database you can generally see all the data available to you and the ways in which it relates to other data. Sometimes for really small sets of data I just store them in an internal data structure in the code (like a Perl hash), but then when a change is needed, it's in the hands of a developer. So how do you handle small sets of infrequently changed data? Do you have set criteria for when to use a database table or a text file or...? I'm tempted to just use a database table for absolutely everything, but I'm not sure if there are any implications to this.

Edit: For context: I've been asked to put a new contact form on the website for a handful of companies, with more to be added occasionally in the future. Except, companies don't have contact email addresses... the users inside these companies do (as they post jobs through their own accounts). Now though, we want a "speculative application" type functionality and the form needs an email address to send these applications to. But we also don't want to put an email address as a property in the form, or else spammers can just use it as an open email gateway. So clearly, we need an ID -> contact_email type relationship with companies. So, I can either add a column to a table with millions of rows which will be used, literally, about 20 times, OR create a new table that at most is going to hold about 20 rows. Typically how we have handled this in the past is just to create a nasty text file and read it from there. But this creates maintenance nightmares, and these text files are frequently overlooked when data that they depend on changes. Perhaps this is a fault with the process, but I'm just interested in hearing views on this.

A: Put it in the database. If it changes infrequently, cache it in your middle tier.

A: The example that springs to mind immediately is what is appropriate to have stored as an enumeration and what is appropriate to have stored in a "lookup" database table. I tend to "draw the line" with the rule that if it will result in a column in the database containing a "magic number" that maps to an enumeration value, then the enumeration should really exist as a lookup table. If it's unrelated to the data stored in the database (e.g. application configuration data rather than user-generated data), then it's an enumeration all the way.

A: Surely it depends on the user of the software tool you've developed to consume the set of data, regardless of size? It might just be that they know Excel, so your tool would have to parse a .csv file that they create. If it's written for the developers, then who cares what you use. I'm not a fan of cluttering databases with minor or transient data, however.

A: We have a standard config file format (key:value) and a class to handle it. We just use that on all projects. Mostly we're just setting persistent properties for our applications (mobile phone development), so that's an appropriate thing to do. YMMV

A: In cases where the program accesses a database, I'll store everything in there: easier for backup and moving data around.
For small programs without database access I store my data in the .NET settings, which are stored in an XML file - of course this is a feature of C#, so it might not apply to you. Anyway, I make sure to store all data in one place. Usually a database.

A: Have you considered SQLite? It's file-based, which addresses your feeling that "just a file might do" (zero configuration), but it's a perfectly good database and scales remarkably well. It supports a number of APIs and there are numerous front ends for administering it.

A: If these are small config-like data, I use some simple and common format. INI, JSON and YAML are usually OK. Java and .NET fans also like XML. In short, use something that you can easily read into an in-memory object and forget about it.

A: I would add it to the database in the main table:

* Backup and recovery (you do want to recover this text file, right?)
* Ad-hoc querying (since you can do it with a SQL tool and join it to the other database data)
* If the database column is empty, the storage requirements for it should be minimal (nothing if it's a NULL column at the end of the table in Oracle)
* It will be easier if you want to have multiple application servers, as you will not need to keep multiple copies of some extra config file around
* Putting it into a little child table only complicates the design without giving any real benefits

You may well already be going to that same row in the database as part of your processing anyway, so performance is not likely to be a problem. If you are not, you could cache it in memory.
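To illustrate the "database plus middle-tier cache" advice above, here is a minimal sketch in C#. The table, class and method names are hypothetical; the point is the time-based invalidation so the tiny table is only re-read occasionally:

using System;
using System.Collections.Generic;

public static class CompanyContactCache
{
    private static Dictionary<int, string> emailsByCompanyId;
    private static DateTime loadedAt;
    private static readonly TimeSpan MaxAge = TimeSpan.FromMinutes(10);
    private static readonly object sync = new object();

    public static string GetContactEmail(int companyId)
    {
        lock (sync)
        {
            // Reload the ~20-row table only when the cache is stale.
            if (emailsByCompanyId == null || DateTime.UtcNow - loadedAt > MaxAge)
            {
                emailsByCompanyId = LoadFromDatabase();
                loadedAt = DateTime.UtcNow;
            }
            string email;
            return emailsByCompanyId.TryGetValue(companyId, out email) ? email : null;
        }
    }

    private static Dictionary<int, string> LoadFromDatabase()
    {
        // Placeholder for e.g. SELECT company_id, contact_email FROM company_contact
        return new Dictionary<int, string>();
    }
}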
{ "language": "en", "url": "https://stackoverflow.com/questions/133420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: propertyNameFieldSpecified when generating a 2.0 web service proxy from a WCF Service I have created a web reference (Add Web Reference) from Visual Studio 2008 and, strangely, I need to set propertyNameFieldSpecified to true for all the fields I want to submit. If I fail to do that, values are not passed back to the WCF service. I have read in several places that this was fixed in the RTM version of Visual Studio. Why is it still occurring? My data contracts are all valid, with nothing else than properties and lists. Any ideas?

A: Here is a complete answer: http://blogs.msdn.com/eugeneos/archive/2007/02/05/solving-the-disappearing-data-issue-when-using-add-web-reference-or-wsdl-exe-with-wcf-services.aspx

A: I saw this happen in VB.NET with nullable values; C#, however, had the 'correct' code. Maybe an idea would be to reference the service from a C# project, then reference that project from your VB.NET code.

A: I'm using C#. I suspected that it has something to do with automatic properties, but no luck. Here is a sample class:

[DataContract]
public class BrowserBase : IBrowser
{
    [DataMember]
    public BrowserType BrowserType { get; set; }

    [DataMember]
    public IList<ResolutionBase> Resolutions { get; set; }
}

A: The XSD.EXE tool is to blame. When you do "Add Web Reference", Visual Studio will generate classes for all the referenced types. To do this it uses the xsd.exe tool. There are replacements for xsd.exe out on the net, e.g.: http://www.bware.biz/DotNet/Development/CodeXS/Article/Article_web.htm but I haven't seen how to replace the behavior of Add Web Reference.
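For anyone hitting this, the workaround described in the question looks like the following in client code. The proxy and member names here are hypothetical stand-ins for whatever "Add Web Reference" generated in your project; only the xxxSpecified pattern matters:

using System;

class Example
{
    static void Main()
    {
        // "MyServiceSoapClient", "SearchRequest" and "PageSize" are hypothetical
        // names standing in for the generated proxy types.
        MyServiceSoapClient client = new MyServiceSoapClient();
        SearchRequest request = new SearchRequest();
        request.PageSize = 50;
        request.PageSizeSpecified = true; // without this, the value-typed member
                                          // is silently omitted from the message
        client.Search(request);
    }
}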
{ "language": "en", "url": "https://stackoverflow.com/questions/133430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I get access to the HttpServletRequest object when using Java Web Services I'm using Java 6, Tomcat 6, and Metro. I use WebService and WebMethod annotations to expose my web service. I would like to obtain information about the request. I tried the following code, but wsCtxt is always null. What step must I take to not get null for the WebServiceContext? In other words: how can I execute the following line to get a non-null value for wsCtxt?

MessageContext msgCtxt = wsCtxt.getMessageContext();

@WebService
public class MyService {

    @Resource
    WebServiceContext wsCtxt;

    @WebMethod
    public void myWebMethod() {
        MessageContext msgCtxt = wsCtxt.getMessageContext();
        HttpServletRequest req = (HttpServletRequest) msgCtxt.get(MessageContext.SERVLET_REQUEST);
        String clientIP = req.getRemoteAddr();
    }
}

A: The following code works for me using Java 5, Tomcat 6 and Metro. Could it possibly be that there is a conflict between the WS support in Java 6 and the version of Metro you are using? Have you tried it on a Java 5 build?

@WebService
public class Sample {

    @WebMethod
    public void sample() {
        HttpSession session = findSession();
        //Stuff
    }

    private HttpSession findSession() {
        MessageContext mc = wsContext.getMessageContext();
        HttpServletRequest request = (HttpServletRequest) mc.get(MessageContext.SERVLET_REQUEST);
        return request.getSession();
    }

    @Resource
    private WebServiceContext wsContext;
}

A: I still have this problem. My workaround was to write a ServletRequestListener that puts the request into a ThreadLocal variable. The web service can then obtain the request from the ThreadLocal. In other words, I'm reimplementing something that just doesn't work for me. Here's the listener:

import javax.servlet.ServletRequest;
import javax.servlet.ServletRequestEvent;
import javax.servlet.ServletRequestListener;

public class SDMXRequestListener implements ServletRequestListener {

    public SDMXRequestListener() {
    }

    public void requestDestroyed(ServletRequestEvent event) {
    }

    public void requestInitialized(ServletRequestEvent event) {
        final ServletRequest request = event.getServletRequest();
        ServletRequestStore.setServletRequest(request);
    }
}

Here's the ThreadLocal wrapper:

import javax.servlet.ServletRequest;

public class ServletRequestStore {

    private final static ThreadLocal<ServletRequest> servletRequests = new ThreadLocal<ServletRequest>();

    public static void setServletRequest(ServletRequest request) {
        servletRequests.set(request);
    }

    public static ServletRequest getServletRequest() {
        return servletRequests.get();
    }
}

And the web.xml wiring:

<listener>
  <listener-class>ecb.sdw.webservices.SDMXRequestListener</listener-class>
</listener>

The web service uses the following code to obtain the request:

final HttpServletRequest request = (HttpServletRequest) ServletRequestStore.getServletRequest();

A: I recommend you either rename your variable from wsCtxt to wsContext or assign the name attribute to the @Resource annotation. The J2EE tutorial on @Resource indicates that the name of the variable is used as part of the lookup. I've encountered this same problem using resource injection in Glassfish, injecting a different type of resource - though your correct name may not be wsContext. I'm following this Java tip. If you like the variable name wsCtxt, then use the name attribute in the variable declaration:

@Resource(name="wsContext")
WebServiceContext wsCtxt;

A: Maybe the javax.ws.rs.core.Context annotation is what you are looking for, instead of Resource?
{ "language": "en", "url": "https://stackoverflow.com/questions/133436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Can a TCP/IP Stack be killed programmatically? Our server application is listening on a port, and after a period of time it no longer accepts incoming connections. (And while I'd love to solve this issue, it's not what I'm asking about here ;) The strange thing is that when our app stops accepting connections on port 44044, so does IIS (on port 8080). Killing our app fixes everything - IIS starts responding again. So the question is, can an application mess up the entire TCP/IP stack? Or perhaps, how can an application do that? Senseless detail: Our app is written in C#, under .Net 2.0, on XP/SP2. Clarification: IIS is not "refusing" the attempted connections. It is never seeing them. Clients are getting a "server did not respond in a timely manner" message (using the .Net TcpClient.)

A: You may well be starving the stack. It is pretty easy to drain in a high open/close-transactions-per-second environment, e.g. a webserver serving lots of unpooled requests. This is exacerbated by the default TIME-WAIT delay - the amount of time that a socket has to be closed before being recycled defaults to 90s (if I remember right). There are a bunch of registry keys that can be tweaked - I suggest at least the following keys are created/edited:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
TcpTimedWaitDelay = 30
MaxUserPort = 65534
MaxHashTableSize = 65536
MaxFreeTcbs = 16000

There are plenty of docs on MSDN & TechNet about the function of these keys.

A: You haven't maxed out the available port handles, have you?

netstat -a

I saw something similar when an app was opening and closing ports (but not actually closing them correctly).

A: Use netstat -a to see the active connections when this happens. Perhaps your server app is not closing/disposing of 'closed' connections.

A: Good suggestions from everyone, thanks for your help. So here's what was going on: It turns out that we had several services competing for the same port, and most of the time the "proper" service would get the port. Occasionally a second service would grab the port away, and the first service would try to open a different port. From that time on, the services would keep grabbing new ports every time they serviced a request (since they weren't using their preferred ports) and eventually we would exhaust all available ports. Of course, the actual question was: "Can an application mess up the entire TCP/IP stack?", and the answer to that question is: Yes. One way to do it is to listen on a whole bunch of ports.

A: I guess the port number comment from RichS is correct. Other than that, the TCP/IP stack is just a module in your operating system and, as such, can have bugs that might allow an application to kill it. It wouldn't be the first driver to be killed by a program. (A tip of the hat towards Andrew Tanenbaum for insisting that operating systems should be modular instead of monolithic.)

A: I've been in a couple of similar situations myself. A good troubleshooting step is to attempt a connection from the affected machine to a good known destination that isn't at that moment experiencing any connectivity issues. If the connection attempt fails, you are very likely to get more interesting details in the error message/code. For example, it could say that there aren't enough handles, or memory.

A: From a support and sysadmin standpoint, I have only seen this on the rarest of occasions (more than once), but it certainly can happen.
When you are diagnosing the problem, you should carefully eliminate the possible causes, rather than blindly rebooting the system at the first sign of trouble. I only say this because many customers I work with are tempted to do that.
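A sketch of how a service can claim its preferred port and fail fast, rather than silently falling back to another port the way the competing services above did. The port number matches the question; everything else is illustrative:

using System;
using System.Net;
using System.Net.Sockets;

class FixedPortListener
{
    static void Main()
    {
        TcpListener listener = new TcpListener(IPAddress.Any, 44044);
        // Refuse to share the port: if another service already owns it,
        // Start() throws instead of the app drifting to a different port.
        listener.ExclusiveAddressUse = true;
        try
        {
            listener.Start();
            Console.WriteLine("Listening on port 44044");
        }
        catch (SocketException ex)
        {
            Console.WriteLine("Port 44044 unavailable: " + ex.SocketErrorCode);
        }
    }
}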
{ "language": "en", "url": "https://stackoverflow.com/questions/133442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: IPSec AES 256 encryption in Windows XP with Service Pack 3? Does IPsec in Windows XP SP3 support AES-256 encryption? Update:

* Windows IPsec FAQ says that it's not supported in Windows XP, but maybe they changed it in Service Pack 3? http://www.microsoft.com/technet/network/ipsec/ipsecfaq.mspx Question: Is Advanced Encryption Standard (AES) encryption supported?
* origamigumby, please specify where, because I cannot find it.

A: EDIT: http://technet.microsoft.com/en-us/library/dd125380.aspx indicates that my original link (https://web.archive.org/web/1/http://search.techrepublic%2ecom%2ecom/search/microsoft+windows+and+network+security.html) was wrong. It is not supported prior to Vista.

A: I'm using Windows XP SP3. When I add a new IPsec filter rule, the only options for ESP I get are DES and 3DES, so the FAQ is correct - there is no support for AES prior to Windows Vista.
{ "language": "en", "url": "https://stackoverflow.com/questions/133453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What DNSs have API access? I saw this over on Slashdot and realized that if I could program DNS control into some of my apps it would make life a lot easier. Someone over there recommended dynect, which apparently had a wonderful API. Unfortunately not a lot else was recommended. I don't know anything about managing DNS servers; I mostly work with the LAMP stack, so on Linux I understand BIND is the way to go, but other than a basic setup I'd be lost. I'd rather outsource the details. What DNS services have APIs and are easy to use without breaking the bank?

A: I guess in the last 3 years this is a bit of a solved problem. Here are some to check out:

* Amazon has a nice DNS service now: http://aws.amazon.com/route53/
* Linode has a free API-based DNS if you're a customer.
* Dynadot has a free DNS with an API if you're a customer.

A: Hey, I haven't used them, but Zerigo looks promising. We will probably wind up going with them if they allow enough hosts. Their API is standard REST stuff... very straightforward. http://www.zerigo.com/docs/apis/dns/1.1 Thanks, Eric.

A: We use DjbDNS and it's backended onto MySQL, so we just hit the DB to make changes and periodically rebuild the config data.

A: Has anyone seen any of the following DNS providers with APIs:

* http://durabledns.com/
* https://dnsimple.com/ (also supports registration by API)
* http://www.loaddns.com/

A: We use Zonomi. It's very cheap and has never gone down for us. With an API.

A: You can try http://customdns.ca. I have a couple of domains with them - no problems so far. They provide a RESTful API.

A: http://www.dns.com Here's the link to the API documentation: https://github.com/dnsdotcom/API_DOC/ Have fun!

A: Haven't used the API, but I have been using the registrar for 10+ years and never had a problem: namecheap.com Here is the API intro. Here is the API methods list. Pretty comprehensive, from purchasing to host and e-mail forwarding setup.
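Most of the providers listed above expose plain REST endpoints, so the client side is just an authenticated HTTP call. A generic sketch; the endpoint URL, JSON payload shape and auth header are entirely hypothetical and must be replaced with your provider's documented API:

using System;
using System.IO;
using System.Net;
using System.Text;

class DnsApiSketch
{
    static void Main()
    {
        // Hypothetical endpoint: create an A record in zone example.org.
        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(
            "https://dns.example.com/api/zones/example.org/records");
        req.Method = "POST";
        req.ContentType = "application/json";
        req.Headers["Authorization"] = "Bearer YOUR_API_TOKEN"; // placeholder

        byte[] body = Encoding.UTF8.GetBytes(
            "{\"type\":\"A\",\"name\":\"www\",\"content\":\"203.0.113.10\",\"ttl\":300}");
        req.ContentLength = body.Length;
        using (Stream s = req.GetRequestStream())
        {
            s.Write(body, 0, body.Length);
        }

        using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
        {
            Console.WriteLine("Status: " + resp.StatusCode);
        }
    }
}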
{ "language": "en", "url": "https://stackoverflow.com/questions/133458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: How do I remove an element that matches a given criteria from a LinkedList in C#? I have a LinkedList, where Entry has a member called id. I want to remove the Entry from the list where id matches a search value. What's the best way to do this? I don't want to use Remove(), because Entry.Equals will compare other members, and I only want to match on id. I'm hoping to do something kind of like this:

entries.RemoveWhereTrue(e => e.id == searchId);

edit: Can someone re-open this question for me? It's NOT a duplicate - the question it's supposed to be a duplicate of is about the List class. List.RemoveAll won't work - that's part of the List class.

A: list.Remove(list.First(e => e.id == searchId));

A: Here's a simple solution:

list.Remove(list.First((node) => node.id == searchId));

A: Just use the Where extension method. You will get a new list (IIRC).
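Note that list.Remove(list.First(...)) throws if nothing matches and removes only the first match. A hedged sketch of a RemoveAll-style extension that walks the nodes directly (the method name mirrors List<T>.RemoveAll but is our own addition, not part of LinkedList<T>):

using System;
using System.Collections.Generic;

static class LinkedListExtensions
{
    // Removes every node whose value matches the predicate.
    // Walking the node chain keeps each removal O(1), and nothing
    // throws when there is no match.
    public static int RemoveAll<T>(this LinkedList<T> list, Predicate<T> match)
    {
        int removed = 0;
        LinkedListNode<T> node = list.First;
        while (node != null)
        {
            LinkedListNode<T> next = node.Next; // capture before removing
            if (match(node.Value))
            {
                list.Remove(node);
                removed++;
            }
            node = next;
        }
        return removed;
    }
}

// Usage (matches the wished-for call in the question):
//   entries.RemoveAll(e => e.id == searchId);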
{ "language": "en", "url": "https://stackoverflow.com/questions/133487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Check for a valid guid How can you check if a string is a valid GUID in VBScript? Has anyone written an IsGuid method?

A: This function is working in classic ASP:

Function isGUID(byval strGUID)
    if isnull(strGUID) then
        isGUID = false
        exit function
    end if
    dim regEx
    set regEx = New RegExp
    regEx.Pattern = "^({|\()?[A-Fa-f0-9]{8}-([A-Fa-f0-9]{4}-){3}[A-Fa-f0-9]{12}(}|\))?$"
    isGUID = regEx.Test(strGUID)
    set regEx = nothing
End Function

A: This is similar to the same question in C#. Here is the regex you will need:

^[A-Fa-f0-9]{32}$|^({|\()?[A-Fa-f0-9]{8}-([A-Fa-f0-9]{4}-){3}[A-Fa-f0-9]{12}(}|\))?$|^({)?[0xA-Fa-f0-9]{3,10}(, {0,1}[0xA-Fa-f0-9]{3,6}){2}, {0,1}({)([0xA-Fa-f0-9]{3,4}, {0,1}){7}[0xA-Fa-f0-9]{3,4}(}})$

But that is just for starters. You will also have to verify that the various parts, such as the date/time, are within acceptable ranges. To get an idea of just how complex it is to test for a valid GUID, look at the source code for one of the Guid constructors.

A: In VBScript you can use the RegExp object to match the string using regular expressions.

A: Techek's function did not work for me in classic ASP (VBScript). It always returned True for some odd reason. With a few minor changes it did work. See below:

Function isGUID(byval strGUID)
    if isnull(strGUID) then
        isGUID = false
        exit function
    end if
    dim regEx
    set regEx = New RegExp
    regEx.Pattern = "{[0-9A-Fa-f-]+}"
    isGUID = regEx.Test(strGUID)
    set regEx = nothing
End Function

A: There is another solution:

try
{
    Guid g = new Guid(stringGuid);
    safeUseGuid(stringGuid); // this statement will execute only if the guid is correct
}
catch (Exception) { }
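On newer frameworks the try/catch construction above can be avoided: .NET 4.0 added Guid.TryParse, which accepts the common GUID formats without throwing. A small sketch:

using System;

class GuidCheck
{
    static bool IsGuid(string candidate)
    {
        // Guid.TryParse exists from .NET 4.0 onward; on earlier versions,
        // fall back to the regex or try/catch approaches shown above.
        Guid parsed;
        return Guid.TryParse(candidate, out parsed);
    }

    static void Main()
    {
        Console.WriteLine(IsGuid("{3F2504E0-4F89-11D3-9A0C-0305E82C3301}")); // True
        Console.WriteLine(IsGuid("not-a-guid"));                             // False
    }
}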
{ "language": "en", "url": "https://stackoverflow.com/questions/133493", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Ruby to Actionscript3 Bytecode Hi, I was looking into Ruby to ActionScript 3 bytecode compilers and found a mention of a project called Red Sun, but I can find very little information on it. So my question is... what tools are available to convert Ruby source into AS3 bytecode?

A: I am the lead developer on the Red Sun project. There is very little information because it is really not ready to be used yet. I worked on the original prototype and presented it to a handful of people at 360|Flex San Jose. This generated further interest and encouraged me to propose it for RubyConf, for which an introductory talk on Red Sun was accepted. Since then, I have had the time to flesh out the framework and really see what was possible. I'll be outlining all of this publicly soon, but I decided to diverge from the idea of generating ActionScript 3 bytecode because of the limitations of it. Ruby and ActionScript are somewhat similar, but Ruby has some significant differences in its method dispatch semantics that require any implementation in ActionScript to use a custom method lookup solution. It could perhaps be done in JavaScript, however ActionScript 3 does not allow re-assigning the prototype field to point at a different object, and Ruby relies on modifying the inheritance hierarchy at runtime. The presentation at RubyConf will be on Saturday, Nov 8th. You can track Red Sun's development on http://github.com here http://github.com/jonathanbranam/redsun and I will be posting information on my website at http://jonathanbranam.net. I am planning to launch a site just for information about Red Sun, but that is not completed yet, so I cannot share a link. Red Sun does not include a Ruby parser or compiler, so it relies on Ruby 1.9 bytecode being generated by a true Ruby 1.9 implementation. It is currently based on 1.9.0-4 and may need changes if there is deviation from this version. As far as capabilities, as of right now (10/27/2008) it supports basic method dispatch, classes and modules. That's really about it. The standard library has not been ported and I hope to depend on Rubinius for a good portion of this.

A: I don't know of any Ruby->AS3 converters, but in the future, Iron Monkey may make it possible to run Ruby on Tamarin (the AS3 virtual machine).

A: As an aside, I'm pretty sure there are things you can do in Ruby that you can't do in AS3, so any converter would probably only be able to convert a subset of Ruby code.
{ "language": "en", "url": "https://stackoverflow.com/questions/133506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Autogeneration of a DataContext designer file when using SqlMetal and Visual Studio I am using SqlMetal to generate my DataContext.dbml class for my ASP.net application using LinqToSql. When I initially created the DataContext.dbml file, Visual Studio used this to create a related DataContext.designer.cs file. This designer file contains the DataContext class in C# that is used throughout the app (and is derived from the XML in the dbml file) and is essential to bridging the gap between the output of SqlMetal and using the DataContext with LinqToSql. However, when I make a change to the database and recreate the dbml file, the designer file never gets regenerated in my website. Instead, the old designer file is maintained (and therefore none of the changes to the DBML file are accessible through the LinqToSql DataContext class). The only process I have been able to use so far to regenerate the designer file is:

* Go to Windows Explorer and delete both the dbml and designer.cs files
* Go to Visual Studio and hit Refresh in the Solution Explorer. The dbml and designer.cs files now disappear from the project.
* Regenerate the dbml file using SqlMetal
* Go to Visual Studio and hit Refresh in the Solution Explorer. Now the designer.cs file is recreated.

It seems that Visual Studio will only generate the designer.cs file when a new dbml file is detected that does not yet have a designer.cs file. This process is pretty impractical, since it involves several manual steps and messes things up with source control. Does anyone know how I can get the designer.cs file automatically regenerated without having to follow the manual delete/refresh/regenerate/delete process outlined above?

A: The designer.cs file is normally maintained automatically as you make changes to the DBML within Visual Studio. If VS isn't running when you recreate the DBML it may not know. Check that the .DBML file in Visual Studio has the Custom Tool property set to MSLinqToSQLGenerator. If it isn't, then set it to that. If it is, try right-clicking on the DBML after making changes and choosing Run Custom Tool to see if that updates the .designer.cs. You can also generate the class file using SqlMetal:

sqlmetal /code:DataContext.designer.cs /language:csharp DataContext.dbml

A: Not sure how it did it, but here are some things I worked on to get it back. Something had it locked, so it generated a new db.designer.cs file (db1.designer.cs). I had Beyond Compare open, comparing that file to the previous one (BC isn't supposed to lock files and I don't think it was the problem; never had that problem before with it.) Open the project file in Notepad and look for these entries. I reverted to the previous version in source control... this is what I brought back:

<Compile Include="db.designer.cs">
  <AutoGen>True</AutoGen>
  <DesignTime>True</DesignTime>
  <DependentUpon>db.dbml</DependentUpon>
</Compile>
...
<LastGenOutput>db.designer.cs</LastGenOutput>

The LastGenOutput was set to db1.designer.cs.
{ "language": "en", "url": "https://stackoverflow.com/questions/133515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: First impressions of the Fantom programming language? Has anyone here given the Fantom programming language a whirl? (pun intended). My first impression:

* I like the ability to have the code run on either the .NET or Java VM.
* The syntax is nice and clean and does not try anything fancy.
* I have a belief that "the library is the language" and the developers of Fan believe that their USP is their APIs:

But getting a language to run on both Java and .NET is the easy part - in fact there are many solutions to this problem. The hard part is getting portable APIs. Fan provides a set of APIs which abstract away the Java and .NET APIs. We actually consider this one of Fan's primary benefits, because it gives us a chance to develop a suite of system APIs that are elegant and easy to use compared to the Java and .NET counterparts.

Any other thoughts, first impressions, pros and cons?

A: It looks very inspired by Ruby. It says that it's RESTful, but I don't see how exactly. Compare with Boo, which is more mature yet similar in many ways (its syntax is Python-inspired, though). The design decisions to keep generics and namespaces very limited are questionable.

A: I think their explanation sums it up: "The primary reason we created Fan is to write software that can seamlessly run on both the Java VM and the .NET CLR. The reality is that many software organizations are committed to one or the other of these platforms." It doesn't look better than all other non-JVM/.NET languages. In the absence of any information about them (their blog is just an error page), I see no reason why they would necessarily get this right where others haven't. Every language starts out fairly elegant for the set of things it was designed for (though I see some awkwardness in the little Fan code I looked at just now) -- the real question is how well it scales to completely new things, and we simply don't know that yet. But if your organization has a rule that "everything must run on our VM", then it may be an acceptable compromise for you. You're giving up an awful lot just for VM independence. For example, yours is the first Fan question here on SO -- a couple orders of magnitude fewer than Lisp. For what problem is Fan the best solution? Python and Ruby can already run on both VMs (or neither), have big communities and big libraries, and seem to be about the same level of abstraction, but are far more mature.

A: I had never heard of Fan until a couple of weeks ago. From the web site, it is about one year old, so it is still pretty young and unproven. There are a couple of interesting points, however: First, the language tackles the problem of concurrency by providing an actor model (similar to Erlang) and by supporting immutable objects. Second, the language follows the example of Scala with type inference. Type inference allows the programmer to omit type declarations but have them computed by the compiler, providing the advantage of shorter and cleaner code as in a dynamically typed language while preserving the efficiency of a statically typed language. And last, it seems like a very fast language, nearly as fast as Java and really close to or beating the second-fastest language on the JVM: Scala. Benchmarks showing the performance can be found at http://www.slideshare.net/michael.galpin/performance-comparisons-of-dynamic-languages-on-the-java-virtual-machine?type=powerpoint.

A: This is very interesting.
Java (or C#) was created in order to eliminate platform dependency by creating a JVM (or CLR) that will compile the code into specific machine code at run time. Now there is a language which is virtual-machine independent? Umm... what the hell?!?! Again, this is a very interesting topic that might be the future... :) going toward one universal language.

A: I think it looks like a great language feature-wise, but I'm not sure how useful it is. I don't think it is all that useful to target .NET and the JVM. Java is already cross-platform, and .NET is too, with Mono. By targeting two VMs, you have to use only the APIs that are available on both. You can't use any of the great native APIs that are available for Java and .NET. I can't imagine that their API is anywhere near as complete as either Java's or .NET's.
{ "language": "en", "url": "https://stackoverflow.com/questions/133528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }