Q: UI During Custom Installer Action What is the correct way to display UI during a custom installer action?
I would like my UI to be modal on the install dialog, or alternatively, I'd like a way to display text/progress from my custom action in the installer dialog.
The installer is a VS2005 setup project and the custom action is a C# Installer-derived class.
A: Displaying any kind of non-standard UI would require changes to the UI handler object. This isn't trivial, and the implementation depends on the toolkit you use to author your MSIs: I'm not sure it's even possible with VS setup projects.
Displaying simple status/progress messages and logging to the MSI log isn't too hard to do from a custom action, though, at least not using the Windows Installer XML (WiX) toolset, which is what I use myself for this purpose.
When authoring your custom actions with WiX, you get access to the active installer session through the Microsoft.Deployment.WindowsInstaller.Session object, which has 'Log' (writes a message to the log, if logging is enabled) and 'Message' (performs any enabled logging operations and defers execution to the UI handler object associated with the engine) functions, amongst many other goodies.
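As a rough sketch of what that looks like in a managed custom action built with the WiX DTF libraries (the method name and message text here are made up for illustration):
using Microsoft.Deployment.WindowsInstaller;

public class CustomActions
{
    [CustomAction]
    public static ActionResult LogAndReport(Session session)
    {
        // Writes to the MSI log, if logging is enabled.
        session.Log("MyCustomAction: starting work");

        // Routes a message through the installer's UI handler;
        // field 0 of the record is the message text.
        using (Record record = new Record(0))
        {
            record.SetString(0, "MyCustomAction: doing the real work");
            session.Message(InstallMessage.ActionData, record);
        }

        return ActionResult.Success;
    }
}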
If you're currently already creating your custom actions in C#, you may be able to find something similar in your current environment (I've never worked with VS.net installer projects, so I'm not exactly sure how they work -- I'm quite surprised actually that these allow you to create managed custom actions...). Otherwise, I'd definitely recommend looking into WiX for custom actions: these work with any MSI authoring environment, and are quite flexible.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/126946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Is there a good lightweight multiplatform C++ timer queue? What I'm looking for is a simple timer queue possibly with an external timing source and a poll method (in this way it will be multi-platform). Each enqueued message could be an object implementing a simple interface with a virtual onTimer() member function.
A: Boost::ASIO contains an asynchronous timer implementation. That might work for you.
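For illustration, a minimal sketch of that timer, using the classic io_service/deadline_timer API from the Boost of that era (the handler name is my own):
#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

void on_timer(const boost::system::error_code& ec)
{
    if (!ec)
        std::cout << "timer fired\n";
}

int main()
{
    boost::asio::io_service io;
    boost::asio::deadline_timer timer(io, boost::posix_time::seconds(1));
    timer.async_wait(&on_timer); // queue the callback
    io.run();                    // dispatch handlers until no work remains
}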
A: There is a nice article on CodeProject, here, that describes the various timers available in Windows, and it has chapters titled "Queue timers" and "Make your own timer".
For platform independence, you'd have to make implementations for the different platforms inside #ifdef -- #endif pairs. I can see nothing less ugly than that.
A: It doesn't fit all of your criteria, but... I wrote a series of blog posts about a timer queue for windows that is implemented in terms of an external time provider and that is either polled or driven by a thread. The series comes with source code and tests and the point of it was to demonstrate the testing of reasonably complex code. Anyway, you might be able to make use of some of the code or ideas if nobody comes up with a better fit.
Articles start here: http://www.lenholgate.com/archives/000306.html
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/126966",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Is there a good description of the "system call" mechanism used in OSes? I am looking for a good primer or technical description of the System Call mechanism that is used by operating systems to transition from user space to the kernel to invoke functions such as "open", "read", "write", etc...
Is there anything other than the Wikipedia entry?
Websites, pdfs, books, source code, all are welcome :)
A: The exact method depends on the processor architecture and what operations it defines for transferring to kernel mode. One approach, and the traditional one on x86, was to use a software interrupt. It turns out this wasn't very fast for the general case, so later, Intel added SYSENTER and AMD added SYSCALL. Windows XP and later choose an appropriate system call technique for the platform at boot time.
You could choose to use specific software interrupt numbers for specific functions, but generally the processor doesn't have enough interrupts to cover all the system functions, so it's necessary to make one of the registers contain the function number required. If you're doing that anyway, it's not much of a hardship to only use the one system call function.
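As a concrete illustration of that register-based dispatch, here is a sketch of invoking write() through the legacy int 0x80 gate on 32-bit x86 Linux (GCC inline assembly; only meaningful when built for that target):
/* Sketch: write(fd, buf, len) via int 0x80 on 32-bit x86 Linux.
   The system call number (4 = __NR_write in that ABI) goes in eax;
   arguments go in ebx, ecx, edx; the result comes back in eax. */
long sys_write(int fd, const char *buf, unsigned long len)
{
    long ret;
    __asm__ volatile ("int $0x80"
                      : "=a"(ret)
                      : "a"(4), "b"(fd), "c"(buf), "d"(len)
                      : "memory");
    return ret;
}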
Windows CE, before version 6.0, uses a side-by-side process virtual address model that actually allows processes to call into each other directly. The page protections are set up so that when this is done, an access violation fault occurs: the kernel gets control, fixes up the process address space (moving the called process into slot 0), fixes up slot-0-based arguments to point to the calling process, and returns to user mode. Because the return address is in another process, when the function call returns, the reverse process occurs. Unfortunately this model only allows very small virtual address spaces for each process (32MB) and a low number of processes (32), so Windows CE 6.0 reverts to a more traditional system call model.
A: Well for source code, there are plenty of open source kernels to dive into.
As for books, Robert Love's book on the Linux kernel is very informative.
A: You may want to have a look at the minix kernel. It's open source, designed to be simple, and is used in a lot of Uni-level OS courses. Have a dig around in /usr/src/kernel/proc.c, especially the sys_call function and surrounding functionality. Keep in mind that minix is a microkernel, so some things may be subtly different to what you are used to.
A: If you want to purchase a book that is extremely useful for *nix programming, I would recommend "Advanced Programming in the UNIX Environment" by Stevens and Rago.
It has in depth explanations, and code examples.
A: For a good explanation of system calls in Linux, look at the sample device drivers in Linux Device Drivers.
A: It's architecture dependent, and requires an understanding of computer architecture. Tanenbaum's "Structured Computer Organisation" has a good summary of the basics of a system call. For more, read any textbook on operating system design.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/126976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: ORM: Handwritten schema or auto-generated? Should I use a hand-written schema for my project developed in a high-level language (such as Python, Ruby) or should I let my ORM solution auto-generate it?
Eventually I will need to migrate without destroying all the data. It's okay to be tied to a specific RDBMS but it would be nice if features such as constraints and procedures could be supported somehow.
A: I never go with ORM-generated schema.
I find that the ways in which the ORM wants to generate the schema are often at total odds with how I want my database to be structured. Also, and I know this is trivial, the nomenclature scheme is usually poor.
Database structure has its own constraints, which I find the ORM autogeneration tools usually don't consider fully. And if you're going to want to run reports on your database later (and you will), then good database structure and design is very important.
A: See this Coding Horror article and links for discussion on that migration you'll eventually need to do. Plan for it now.
Also see Martin Fowler on database evolution; I particularly recommend the notion that test data generation is part of database set-up. The idea may be a little underdeveloped, in that there is not a clear delineation of the different problems in different environments, development versus QA versus production.
A: Let the ORM generate the schema it wants. Then you can always change things that are too slow or that you want differently. But it allows you to quickly get started and have something working plus the ORM people usually know what they do when it comes to generating schemas.
A: Let your ORM solution generate it, but don't just blindly use it; read through it and sanity-check it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/126978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: XML vs Text for Non-web development applications I do a lot of systems programming where my apps have no chance of being used to communicate over the web or viewed through a browser. But there has been some push by management to use XML. For example, if I want to keep a time log I could use a text file like this:
command date time project
in 2008/09/23 08:00:00 PROJ1
change 2008/09/23 09:00:00 PROJ2
out 2008/09/23 12:00:00 PROJ2
in 2008/09/23 01:00:00 PROJ3
out 2008/09/23 05:00:00 PROJ3
The XML would look something like this:
<timelog>
<timecommand cmd="in" date="2008/09/23" time="8:00:00" proj="PROJ1"/>
...
<timecommand cmd="out" date="2008/09/23" time="5:00:00" proj="PROJ3"/>
</timelog>
Some of the initial advantages of the text version that I see is that it is easily readable and parsable with regex. What are the advantages to using XML in this case?
A: A couple of benefits come to mind:
*
*It's easier to parse into other applications
*It's easier to understand what the document holds at a glance
*Makes it easier to pull data into a managerial dashboard
*Makes the management happy with little pain for you
The downsides, as I see them:
*
*Means changing existing code, probably unnecessarily
*Possible slight performance degradation, depending on how you build the documents compared to how you build the current docs
*It's XML for XML's sake, which is effin' stupid
And, to close, a quote intended as irony: XML is like violence. If it's not solving your problems, you're not using it enough
A: There's absolutely nothing wrong with using text-based data formatting. It has been the de-facto standard for decades. Big huge mainframe financial systems still use it today. The benefits are that it's trivial to produce, trivial to consume and incredibly lightweight. And how about log files? Do you know any production platform that doesn't generate its log file in a delimited text format (web, app, db server)?
The downside of flat text files is that if the format changes, then you have to modify both the producer and the consumer ends non-trivially to be able to support the format change. Of course if it's just a human consuming the result, then you only have to change the producer.
The beauty of XML is that the parsing of the data is independent from not only the data but the format of the data. Logically you pass it both the data and the data format, and presto! Everything works. It's not exactly that simple, but that's the premise. You can change the format of the data, and your producers and consumers only have to change trivially (if at all).
The ugly of XML is that it can be a huge performance dog (SOAP anyone?) and very heavy weight. You definitely pay a price for its extensibility. There are cases where it is absolutely the optimized technical solution for a given problem domain, and there are other cases where it's not.
So if it's a simple log that a human will read, keep it a flat file. If it's a simple app communicating with another single app and the communications will not change dramatically over time, a flat file is definitely faster and lighter to implement, but XML is not a bad choice. If multiple apps need to consume the data you're providing, or if the volume of communication change is going to be high, then go with XML. The interface will be more easily maintained over time if you do.
A: XML's main feature in a case like this is that XML can be validated & controlled. In the text version, how would you be able to programmatically verify that the file is properly formatted? XML is designed to create structured, valid documents, and the resulting benefit is a format is rigidly controlled, and reliably structured. Maintaining code that reads from XML nodes is also going to be a lot easier and more logically laid out than maintaining a series of regular expressions for reading text files.
A: If you use XML then, in some ways, the data would be more "portable". You'd essentially have parsers for your data available in most environments, so writing a tool to analyze the data might be easier. Also, if it's in XML then you can write an XSLT to transform it into various other formats, making it easier to read.
That said, if you switch to using XML, even a simple format like the example you gave, your log files are going to become a lot larger.
There are some options other than XML that you could use. Jeff's Angle Bracket Tax blog post talks about this a bit.
Really, what you should do is find out how these logs are going to be used, and then determine what format would make those usages the easiest to implement.
A: It's easily parsable using regex, and XML is easily parsable using XSL.
Truth be told, there's not really an "advantage" to using XML unless you're sending the data to another system.
A: XML is a meta-format, meaning it makes it easier to define a format for your data. This makes it easier for multiple programs, including ones by different companies, to read and write data in the same format. It's especially suitable as a description for complex, hierarchical data.
In the example you outline above, the data looks to be isolated records in a fixed format, with no structure or hierarchy - in which case I can see no advantage in using XML. However, the example may be unrepresentative - your other files may contain more structured data.
A: Is that an ongoing log file?
How are you ever going to write the closing </timelog> tag to create a valid document? Or are you going to read it in, add the new entry, and write it out each time?
Log files are perfect candidates for well structured plain text lines that you simply append to.
A: In most cases (not always), XML makes it easier to understand the data because all of a sudden you have that metadata around your asset describing what is there in front of you (human-readable).
XML is also very accessible. What I mean by that is that - since you mentioned it - you don't want to use regular expressions on XML. There are tools like XPath (XML Path Language) which make querying XML fun. No need to whip out something no one else can read when you can traverse XML easily using something like XPath.
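For example, pulling every entry for one project out of the timelog document shown in the question takes a single XPath expression (assuming the attribute values are quoted so the document is well-formed XML):
/timelog/timecommand[@proj='PROJ3']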
There are cases where XML does the opposite (in terms of readability) and sometimes XML is also overhead. It's not always the best choice when you exchange data between systems (e.g. take a look at something really light-weight like JSON). And this sort of exchange doesn't need to be on the web either.
A: Whilst using XML for data files would mean that your data can be self describing and perhaps better organised, the end result is often data files that are far larger than before.
Ask yourself, what are the files used for? Are they to be changed? If so, who's paying and who has budgeted for it?
I love XML in some cases, and in others I hate it!
A: In the case of systems batch programming like you are talking about, a major feature of XML is that it's supported almost everywhere. So you write a program to handle some data today using XML, and in 10 years, when you need to overhaul that program and want to use a completely different platform, your XML data will still be well supported.
A: If you're developing in .NET (especially .NET 3.5 with LINQ to XML) you'll write less code to read/write the XML than if you used just a plain text file. Plus, XML just makes it easier for any person down the line to read the file and know exactly what's in it and what it's for. And don't worry about the XML taking up a little more disk space; disk space is cheap.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/126979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Keyboard shortcuts in an XBAP I would like to support keyboard shortcuts in my WPF XBAP application, such as Ctrl+O for 'Open' etc. How do I disable the browsers built-in keyboard shortcuts and replace them with my own?
A: You can't disable the browser's built-in handling of keys. It's not your place as browser content to override the browser's own shortcut keys.
A: If you're desperate to do it then you could try adding a windows hook and intercepting the keystrokes you're interested in.
We had to do it to prevent IE help from opening (may God have mercy on my soul).
See:
http://msdn.microsoft.com/en-us/library/system.windows.interop.hwndsource.addhook(VS.85).aspx
And here's some code (ripped out of our app.xaml.vb) which may help (sorry, VB):
Private Shared m_handle As IntPtr
Private Shared m_hook As Interop.HwndSourceHook
Private Shared m_hookCreated As Boolean = False
Private Const WM_HELP As Integer = &H53 'Win32 WM_HELP message id; the original excerpt omitted this declaration

'Call on application start
Public Shared Sub SetWindowHook(ByVal visualSource As Visual)
    'Add in a Win32 hook to stop the browser app from loading
    If Not m_hookCreated Then
        m_handle = DirectCast(PresentationSource.FromVisual(visualSource), Interop.HwndSource).Handle
        m_hook = New Interop.HwndSourceHook(AddressOf WindowProc)
        Interop.HwndSource.FromHwnd(m_handle).AddHook(m_hook)
        m_hookCreated = True
    End If
End Sub

'Call on application exit
Public Shared Sub RemoveWindowHook()
    'Remove the win32 hook
    If m_hookCreated AndAlso Not m_hook Is Nothing Then
        If Not Interop.HwndSource.FromHwnd(m_handle) Is Nothing Then
            Interop.HwndSource.FromHwnd(m_handle).RemoveHook(m_hook)
        End If
        m_hook = Nothing
        m_handle = IntPtr.Zero
    End If
End Sub

'Intercept key presses
Private Shared Function WindowProc(ByVal hwnd As System.IntPtr, ByVal msg As Integer, ByVal wParam As System.IntPtr, ByVal lParam As System.IntPtr, ByRef handled As Boolean) As System.IntPtr
    'Stop the OS from handling help
    If msg = WM_HELP Then
        handled = True
    End If
    Return IntPtr.Zero
End Function
A: Not an answer, but a comment. It would be nice to disable the Backspace key behavior in an XBAP, nothing more annoying than hitting the backspace key while not in an element and the browser navigates you to the previous web page.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/126980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Spring.net + Nhibernate Integration Tests Pass When They Should Not I'm using Spring.net with NHiberante (HibernateTemplate) to implement my DAO's.
I also have some integration tests that extend from 'AbstractTransactionalDbProviderSpringContextTests'.
DI is working fine, and all tests pass, BUT sometimes they pass even when they shouldn't.
For example if my hbm.xml files have an error like this:
<class name="Confluence.Domain.User" table="THIS TABLE DOES NOT EXIST">
The tests fails, but if the error is like this one:
<many-to-many
class="Confluence.Domain.User"
column="THIS COLUMN DOES NOT EXIST"/>
the tests pass silently hiding the bug.
I'm testing it using SetComplete() and checking the DB for the changes, but I think the whole idea of this kind of test is not to have to do so.
Can anyone tell me how to fix this issue?
Thank you very much!
@Ben: If I have to actually execute the SQL scripts to see if they work, what is the benefit of using this kind of Spring tests?
A: When testing your NH based DAO's you should flush the session so that the database is updated with the new information but still rollback as before. How to do this is explained here - http://forum.springframework.net/showthread.php?t=5246 I've added this to the reference docs. Hope this helps.
Cheers,
Mark
A: If you have a syntax error in your mapping, then NHibernate will fail on config.BuildSessionFactory()
But for misspelled/nonexistent database objects, the only way for NHibernate to know is to actually run a query... So you might employ some integration tests that insert/select a single entity, to make sure the mapping works.
Not sure what this has to do with Spring.NET though.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/126987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to read console output in j2me cellphones? Where does System.out.println() go? When writing j2me applications for cellphones, using System.out.println() prints on the console if using an emulator. However, when the code is deployed on a cellphone, where does the console output go?
If it is impossible to see this in the untethered cellphone, is there a way to see it if the cellphone is still connected to the deploying PC [via USB] ?
A: On Symbian phones (Nokia, Sony-Ericsson, Motorola, Samsung, Panasonic, Siemens, check for the Series60, Series80, Series90 or UIQ platforms), You can retrieve both System.out and System.err. Most importantly, you can retrieve Throwable.printStackTrace() as well.
Early versions of Symbian OS came with a native tool called Redirector. It ended up becoming available to third party MIDlet developers too. It might be hard to find these days but can be re-developed using C++ code that plugs into the Symbian implementation of the C standard library that the Java Virtual Machine uses.
Newer versions of Symbian OS come with an additional GCF protocol that allows retrieval of System.out, System.err and Throwable.printStackTrace() by simply using
javax.microedition.io.Connector.openDataInputStream("redirect://");
You may need to use "redirect://test" on some versions of Series60, during the transition from the Sun Ltd cldc-hi virtual machine to the IBM J9 virtual machine.
The connection needs to be opened before you launch the MIDlet whose output you want to log so you'll need to open it in a separate MIDlet.
A: I found this question with answers on j2me logging. Maybe one can try this if nothing else works. A simple way to access System.out.println() would be nice though.
A: Thank you for your response QuickRecipesOnSymbian.
In the following article you can find a corresponding J2ME MIDlet to display the standard error/output on Symbian phones:
http://wiki.forum.nokia.com/index.php/How_to_get_System.out_output_from_a_MIDlet_and_save_it_to_a_file_in_S60_devices
The solution works fine on Samsung i8910 as well.
(As I'm a newbie on this site, I could not post more than one link, so you need to find the zip with the sources and the jar at the end of the article.)
A: The simple answer is: nowhere you can see them. It does print somewhere (since loads of phones slow down if you have loads of prints) but, on most phones, there's no way to access it.
Some devices do display the console output, for example Sony Ericssons, which show it all if tethered and running the on-device debugging program. You can find out which do it and which don't by searching the developer sites (if they exist) of the various manufacturers.
Your best bet is to write a small method that appends to a StringBuffer within your program. Then map a key press that will display the contents of the StringBuffer on screen. It is invaluable for searching for those nasty device issues.
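A minimal sketch of that StringBuffer approach (the class and field names are my own invention; assumes MIDP 2.0):
import javax.microedition.lcdui.*;
import javax.microedition.midlet.MIDlet;

class DebugCanvas extends Canvas {
    private final MIDlet midlet;
    private final Form logForm = new Form("Log");
    private final StringBuffer log = new StringBuffer();

    DebugCanvas(MIDlet midlet) { this.midlet = midlet; }

    void trace(String msg) { // call this instead of System.out.println
        log.append(msg).append('\n');
    }

    protected void paint(Graphics g) {} // required by Canvas

    protected void keyPressed(int keyCode) {
        if (keyCode == KEY_POUND) { // pressing '#' shows the buffered log
            logForm.deleteAll();
            logForm.append(log.toString());
            Display.getDisplay(midlet).setCurrent(logForm);
        }
    }
}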
A: The only input I can offer is that this will differ greatly on each platform. I work mainly with BlackBerrys and System.err goes to the devices event log but I have no idea where System.out goes.
A: You should also be sure to not include println in your production code.
I have seen some phones that actually have a memory leak when doing this (e.g., the Sanyo 8400 on Sprint).
A: Logging on devices is always a problem with J2ME because there's no standard implementation for it.
I can point you to GEAR. It's a J2ME graphics framework that also implements an "in-device" debug console where you can print your lines and display them on your screen.
A: BlackBerry J2ME devices for example can be connected to the computer and the debugger can connect to that device. Once you run a program on the actual device, your System.out OutputStream goes to the debugger console.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/126999",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: What's a good compression library for Java? I need to compress portions of our application's network traffic for performance. I presume this means I need to stay away from some of the newer algorithms like bzip2, which I think I have heard is slower.
A: You can use Deflater/Inflater which is built into the JDK. There are also GZIPInputStream and GZIPOutputStream, but it really depends on your exact use.
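For illustration, a minimal sketch of round-tripping a byte array through those JDK GZIP streams (the class and method names are my own):
import java.io.*;
import java.util.zip.*;

public class GzipRoundTrip {
    static byte[] compress(byte[] input) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        GZIPOutputStream gz = new GZIPOutputStream(bos);
        gz.write(input);
        gz.close(); // finishes the GZIP stream and flushes everything
        return bos.toByteArray();
    }

    static byte[] decompress(byte[] compressed) throws IOException {
        GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(compressed));
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = gz.read(buf)) != -1) {
            bos.write(buf, 0, n);
        }
        return bos.toByteArray();
    }
}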
Edit:
Reading further comments it looks like the network taffic is HTTP. Depending on the server, it probably has support for compression (especially with deflate/gzip). The problem then becomes on the client. If the client is a browser it probably already supports it. If your client is a webservices client or an http client check the documentation for that package to see if it is supported.
It looks like jakarta-commons httpclient may require you to manually do the compression. To enable this on the client side you will need to do something like
.addRequestHeader("Accept-Encoding","gzip,deflate");
A: If the network traffic is going over HTTP, most of the various web servers/servlet containers support for negotiated zipping, e.g., mod_deflate for Apache.
A: Your compression algorithm depends on what you're trying to optimize and how much bandwidth you have available.
If you're on a gigabit LAN, almost any compression algorithm is going to slow your program down a bit. If you're connecting over a WAN or the internet, you can afford to do a bit more compression. If you're connected to dialup, you should compress as much as absolutely possible.
If this is a WAN, you may find hardware solutions like Riverbed's are more effective, as they work across a range of traffic, and don't require any changes to software.
I have a test case which shows the relative compression difference between Deflate, Filtered, BZip2, and lzma. Simply plug in a sample of your data, and test the timing between two machines.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: VBA PowerPoint Online Guide and How to Record a Macro Could anyone recommend to me a good online guide to PowerPoint VBA? Also, does anyone has advice on how to record a macro in PowerPoint?
A: Microsoft removed the macro recorder from PowerPoint 2007.
To view the structure of objects, add a Watch (Shift+F9) on the object.
For example:
Dim ppt As PowerPoint.Presentation
Set ppt = ActivePresentation
Add a watch on ppt to view the structure of the Presentation object.
Otherwise, add a new class module and declare in the class:
Private WithEvents ppt As PowerPoint.Application
Then, in a common module, create one instance of the class (the default name is Class1) using:
Set x = New Class1
Now, in the class module, you can get events for your presentation (pick the object in the left combo box at the top of the code window).
Bruno Leite
Office Developer
A: To record a powerpoint macro:
*
*In the menu bar, click on Tools
*Mouse over Macro > and the submenu will be displayed
*Click the Record button - a new toolbar will be displayed
*Do your thing
*Click the stop button on the new macro toolbar
Click on Tools->Macro->Macros. Find the macro you just recorded and click the Edit button. That will show you what was recorded. Make your modifications and click the triangular run button (or push F5) to run your code.
As far as an online guide, I usually think of a question and use Google or ask a question here on StackOverflow.com. I've been able to answer most of my questions that way, I haven't found a particular main resource for all things Powerpoint VBA.
Also, you can find answers that can help you by looking into VBA articles for other MS Office products - a lot of things that are not Powerpoint-specific (general VBA) will be the same as for the other products.
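If you would rather start from a hand-written macro than a recording, here is a small example of the PowerPoint object model (the macro name and behaviour are my own illustration, not recorder output):
Sub ListSlideTitles()
    ' Print each slide's index and title to the Immediate window
    Dim sld As Slide
    For Each sld In ActivePresentation.Slides
        If sld.Shapes.HasTitle Then
            Debug.Print sld.SlideIndex & ": " & sld.Shapes.Title.TextFrame.TextRange.Text
        End If
    Next sld
End Sub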
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Returning an 'any kind of input iterator' instead of a vector::iterator or a list::iterator Suppose I want to implement in C++ a data-structure to store oriented graphs. Arcs will be stored in Nodes thanks to STL containers. I'd like users to be able to iterate over the arcs of a node, in an STL-like way.
The issue I have is that I don't want to expose in the Node class (which will actually be an abstract base class) which STL container I will actually use in the concrete class. I therefore don't want to have my methods return std::list<Arc*>::iterator or std::vector<Arc*>::iterator...
I tried this:
class Arc;
typedef std::iterator<std::random_access_iterator_tag, Arc*> ArcIterator; // Wrong!

class Node {
public:
    ArcIterator incomingArcsBegin() const {
        return _incomingArcs.begin();
    }
private:
    std::vector<Arc*> _incomingArcs;
};
But this is not correct, because a std::vector<Arc*>::const_iterator can't be used to create an ArcIterator. So what can this ArcIterator be?
I found this paper about Custom Iterators for the STL but it did not help. I must be a bit heavy today... ;)
A: Try this:
class Arc;

class Node {
private:
    std::vector<Arc*> incoming_;
public:
    typedef std::vector<Arc*>::iterator iterator;

    iterator incoming_arcs_begin()
    { return incoming_.begin(); }
};
And use Node::iterator in the rest of the code. When/if you change the container, you have to change the typedef in a single place. (You could take this one step further with additional typedef for the storage, in this case vector.)
As for the const issue, either define vector's const_iterator to be your iterator, or define double iterator types (const and non-const version) as vector does.
A: Have a look at Adobe's any_iterator: this class uses a technique called type erasure, by which the underlying iterator type is hidden behind an abstract interface. Beware: the use of any_iterator incurs a runtime penalty due to virtual dispatching.
A: I want to think there should be a way to do this through straight STL, similar to what you are trying to do.
If not, you may want to look into using boost's iterator facades and adaptors where you can define your own iterators or adapt other objects into iterators.
A: To hide the fact that your iterators are based on std::vector<Arc*>::iterator you need an iterator class that delegates to std::vector<Arc*>::iterator. std::iterator does not do this.
If you look at the header files in your compiler's C++ standard library, you may find that std::iterator isn't very useful on its own, unless all you need is a class that defines typedefs for iterator_category, value_type, etc.
As Doug T. mentioned in his answer, the boost library has classes that make it easier to write iterators. In particular, boost::indirect_iterator might be helpful if you want your iterators to return an Arc when dereferenced instead of an Arc*.
A: Consider using the Visitor Pattern and inverting the relationship: instead of asking the graph structure for a container of data, you give the graph a functor and let the graph apply that functor to its data.
The visitor pattern is a commonly used pattern on graphs, check out boost's graph library documentation on visitors concepts.
A: If you really don't want the clients of that class to know that it uses a vector underneath, but still want them to be able to somehow iterate over it, you will most likely need to create a class that forwards all its methods to std::vector::iterator.
An alternative would be to templatize Node based on the type of container it should use underneath. Then the clients know specifically what type of container it is using, because they told it to use it.
Personally, I don't think it usually makes sense to encapsulate the vector away from the user but still provide most (or even some) of its interface. It's too thin an encapsulation layer to really provide any benefit.
A: I looked in the header file VECTOR.
vector<Arc*>::const_iterator
is a typedef for
allocator<Arc*>::const_pointer
Could that be your ArcIterator? Like:
typedef allocator<Arc*>::const_pointer ArcIterator;
A: You could templatize the Node class, and typedef both iterator and const_iterator in it.
For example:
class Arc {};

template<
    template<class T, class U> class Container = std::vector,
    class Allocator = std::allocator<Arc*>
>
class Node
{
public:
    typedef typename Container<Arc*, Allocator>::iterator ArcIterator;
    typedef typename Container<Arc*, Allocator>::const_iterator ConstArcIterator;

    ConstArcIterator incomingArcsBegin() const {
        return _incomingArcs.begin();
    }
    ArcIterator incomingArcsBegin() {
        return _incomingArcs.begin();
    }

private:
    Container<Arc*, Allocator> _incomingArcs;
};
I haven't tried this code, but it gives you the idea. However, you have to notice that using a ConstArcIterator will just disallow the modification of the pointer to the Arc, not the modification of the Arc itself (through non-const methods for example).
A: C++0x will allow you to do this with automatic type determination.
In the new standard, this
for (vector<int>::const_iterator itr = myvec.begin(); itr != myvec.end(); ++itr)
can be replaced with this
for (auto itr = myvec.begin(); itr != myvec.end(); ++itr)
By the same token, you will be able to return whatever iterator is appropriate, and store it in an auto variable.
Until the new standard kicks in, you would have to either templatize your class, or provide an abstract interface to access the elements of your list/vector. For instance, you can do that by storing an iterator in member variable, and provide member functions, like begin() and next(). This, of course, would mean that only one loop at a time can safely iterate over your elements.
A: Well, because std::vector is guaranteed to have contiguous storage, it should be perfectly fine to do this:
class Arc;
typedef Arc* const* ArcIterator; // a pointer into the vector's contiguous storage

class Node {
public:
    ArcIterator incomingArcsBegin() const {
        return &_incomingArcs[0];
    }
    ArcIterator incomingArcsEnd() const {
        return &_incomingArcs[0] + _incomingArcs.size();
    }
private:
    std::vector<Arc*> _incomingArcs;
};
Basically, pointers function enough like random access iterators that they are a sufficient replacement.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127009",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How can I check my byte flag, verifying that a specific bit is at 1 or 0? I use a byte to store some flag like 10101010, and I would like to know how to verify that a specific bit is at 1 or 0.
A: As an extension of Patrick Desjardins' answer:
When doing bit-manipulation it really helps to have a very solid knowledge of bitwise operators.
Also the bitwise "AND" operator in C is &, so you want to do this:
unsigned char a = 0xAA;          // 10101010 in hex
unsigned char b = (1 << bitpos); // Where bitpos is the position you want to check

if (a & b) {
    // bit set
} else {
    // not set
}
Above I used the bitwise "AND" (& in C) to check whether a particular bit was set or not. I also used two different ways of formulating binary numbers. I highly recommend you check out the Wikipedia link above.
A: Here's a function that can be used to test any bit:
bool is_bit_set(unsigned value, unsigned bitindex)
{
    return (value & (1 << bitindex)) != 0;
}
Explanation:
The left shift operator << creates a bitmask. To illustrate:
*
*(1 << 0) equals 00000001
*(1 << 1) equals 00000010
*(1 << 3) equals 00001000
So a shift of 0 tests the rightmost bit. A shift of 31 would be the leftmost bit of a 32-bit value.
The bitwise-and operator (&) gives a result where all the bits that are 1 on both sides are set. Examples:
*
*1111 & 0001 equals 0001
*1111 & 0010 equals 0010
*0000 & 0001 equals 0000.
So, the expression:
(value & (1 << bitindex))
will return the bitmask if the associated bit (bitindex) contains a 1
in that position, or else it will return 0 (meaning it does not contain a 1 at the associated bitindex).
To simplify, the expression tests if the result is greater than zero.
*
*If Result > 0 returns true, meaning the byte has a 1 in the tested
bitindex position.
*All else returns false meaning the result was zero, which means there's a 0 in tested bitindex position.
Note the != 0 is not required in the statement since it's a bool, but I like to make it explicit.
A: You can use an AND operator. The example you have: 10101010 and you want to check the third bit you can do: (10101010 AND 00100000) and if you get 00100000 you know that you have the flag at the third position to 1.
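To complement that test, here are the companion idioms for setting, clearing, and toggling a bit (a sketch using 0-based bit positions, not taken from the answers above):
unsigned char flags = 0xAA;  /* 10101010 */

flags |=  (1u << 2);         /* set bit 2 */
flags &= ~(1u << 2);         /* clear bit 2 */
flags ^=  (1u << 2);         /* toggle bit 2 */
int bit2 = (flags >> 2) & 1; /* test bit 2: yields 1 or 0 */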
A: Kristopher Johnson's answer is very good if you like working with individual fields like this. I prefer to make the code easier to read by using bit fields in C.
For example:
struct fieldsample
{
    unsigned short field1 : 1;
    unsigned short field2 : 1;
    unsigned short field3 : 1;
    unsigned short field4 : 1;
};
Here you have a simple struct with four fields, each 1 bit in size. Then you can write your code using simple structure access.
void codesample()
{
    // Declare the struct on the stack.
    struct fieldsample fields;

    // Initialize values.
    fields.field1 = 1;
    fields.field2 = 0;
    fields.field3 = 0;
    fields.field4 = 1;
    ...

    // Check the value of a field.
    if (fields.field1 == 1) {}
    ...
}
You get the same small size advantage, plus readable code because you can give your fields meaningful names inside the structure.
A: If you are using C++ and the standard library is allowed, I'd suggest storing your flags in a bitset:
#include <bitset>
//...
std::bitset<8> flags(someVariable);
as then you can check and set flags using the [] indexing operator.
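For instance, a short sketch of what that looks like (the variable names are made up):
#include <bitset>

std::bitset<8> flags(0xAA); // 10101010
bool third = flags[2];      // test bit 2 (0-based)
flags[0] = true;            // set bit 0
flags[1] = false;           // clear bit 1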
A: Nobody's been wrong so far, but to give a method to check an arbitrary bit:
int checkBit(unsigned char in, int bit)
{
    return in & (1 << bit);
}
If the function returns non-zero, the bit is set.
A: byte THIRDBIT = 4; // 4 = 00000100 i.e third bit is set
int isThirdBitSet(byte in) {
return in & THIRDBIT; // Returns 1 if the third bit is set, 0 otherwise
}
A: You can do as Patrick Desjardins says, and then normalize the result of the previous AND operation (for example with a shift).
In this case, you will have a final result of 1 or 0.
A: Traditionally, to check if the lowest bit is set, this will look something like:
int MY_FLAG = 0x0001;
if ((value & MY_FLAG) == MY_FLAG)
    doSomething();
A: Use a bitwise (not logical!) AND to compare the value against a bitmask.
if (var & 0x08) {
    /* The fourth bit is set */
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127027",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
}
|
Q: Comparing cold-start to warm start Our application takes significantly more time to launch after a reboot (cold start) than if it was already opened once (warm start).
Most (if not all) of the difference seems to come from loading DLLs; when the DLLs are in cached memory pages, they load much faster. We tried using ClearMem to simulate rebooting (since it's much less time-consuming than actually rebooting) and got mixed results; on some machines it seemed to simulate a reboot very consistently, and on some it didn't.
To sum up my questions are:
*
*Have you experienced differences in launch time between cold and warm starts?
*How have you dealt with such differences?
*Do you know of a way to dependably simulate a reboot?
Edit:
Clarifications for comments:
*
*The application is mostly native C++ with some .NET (the first .NET assembly that's loaded pays for the CLR).
*We're looking to improve load time, obviously we did our share of profiling and improved the hotspots in our code.
Something I forgot to mention was that we got some improvement by re-basing all our binaries so the loader doesn't have to do it at load time.
A: As for simulating reboots, have you considered running your app from a virtual PC? Using virtualization you can conveniently replicate a set of conditions over and over again.
I would also consider some type of profiling app to spot the bit of code causing the time lag, and then making the judgement call about how much of that code is really necessary, or if it could be achieved in a different way.
A: It would be hard to truly simulate a reboot in software. When you reboot, all devices in your machine get their reset bit asserted, which should cause all memory system-wide to be lost.
In a modern machine you've got memory and caches everywhere: there's the VM subsystem which is storing pages of memory for the program, then you've got the OS caching the contents of files in memory, then you've got the on-disk buffer of sectors on the harddrive itself. You can probably get the OS caches to be reset, but the on-disk buffer on the drive? I don't know of a way.
A: How did you profile your code? Not all profiling methods are equal and some find hotspots better than others. Are you loading lots of files? If so, disk fragmentation and seek time might come into play.
Maybe even sticking basic timing information into the code, writing out to a log file and examining the files on cold/warm start will help identify where the app is spending time.
Without more information, I would lean towards filesystem/disk cache as the likely difference between the two environments. If that's the case, then you either need to spend less time loading files upfront, or find faster ways to load files.
Example: if you are loading lots of binary data files, speed up loading by combining them into a single file, then slurp the whole file into memory in one read and parse its contents. Fewer disk seeks and less time spent reading off of disk. Again, maybe that doesn't apply.
I don't know offhand of any tools to clear the disk/filesystem cache, but you could write a quick application to read a bunch of unrelated files off of disk to cause the filesystem/disk cache to be loaded with different info.
A: @Morten Christiansen said:
One way to make apps start cold-start faster (sort of) is used by e.g. Adobe reader, by loading some of the files on startup, thereby hiding the cold start from the users. This is only usable if the program is not supposed to start up immediately.
That makes the customer pay for initializing our app at every boot even when it isn't used, I really don't like that option (neither does Raymond).
A: One successful way to speed up application startup is to switch DLLs to delay-load. This is a low-cost change (some fiddling with project settings) but can make startup significantly faster. Afterwards, run depends.exe in profiling mode to figure out which DLLs load during startup anyway, and revert the delay-load on them. Remember that you may also delay-load most Windows DLLs you need.
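For reference, delay-loading is a linker setting rather than a code change; a hypothetical command line might look like the following (the DLL names are placeholders, and the same switch appears in the IDE under Linker > Input > Delay Loaded DLLs):
rem Link myapp with bigdep.dll resolved lazily on first call into it
link /OUT:myapp.exe main.obj bigdep.lib delayimp.lib /DELAYLOAD:bigdep.dll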
A: A very effective technique for improving application cold launch time is optimizing function link ordering.
The Visual Studio linker lets you pass in a file lists all the functions in the module being linked (or just some of them - it doesn't have to be all of them), and the linker will place those functions next to each other in memory.
When your application is starting up, there are typically calls to init functions throughout your application. Many of these calls will be to a page that isn't in memory yet, resulting in a page fault and a disk seek. That's where slow startup comes from.
Optimizing your application so all these functions are together can be a big win.
Check out Profile Guided Optimization in Visual Studio 2005 or later. One of the things that PGO does for you is function link ordering.
It's a bit difficult to work into a build process, because with PGO you need to link, run your application, and then re-link with the output from the profile run. This means your build process needs to have a runtime environment, deal with cleaning up after bad builds, and all that, but the payoff is typically a 10% or greater improvement in cold launch time with no code changes.
There's some more info on PGO here:
http://msdn.microsoft.com/en-us/library/e7k32f4k.aspx
A: As an alternative to a function order list, just group the code that will be called together within the same sections:
#pragma code_seg(".startUp")
//...
#pragma code_seg
#pragma data_seg(".startUp")
//...
#pragma data_seg
It should be easy to maintain as your code changes, and it has the same benefit as a function order list.
I am not sure whether a function order list can specify global variables as well, but using #pragma data_seg like this simply works.
A: One way to make apps start cold-start faster (sort of) is used by e.g. Adobe reader, by loading some of the files on startup, thereby hiding the cold start from the users. This is only usable if the program is not supposed to start up immediately.
Another note, is that .NET 3.5SP1 supposedly has much improved cold-start speed, though how much, I cannot say.
A: It could be the NICs (LAN cards): your app may depend on certain other services that require the network to come up. Profiling your application alone may not tell you this, so you should examine the dependencies of your application.
A: If your application is not very complicated, you can just copy all the executables to another directory; that should be similar to a reboot. (Cut and paste does not seem to work: Windows is smart enough to know that files moved to another folder are already cached in memory.)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: Why is it not a good idea to use SOAP for communicating with the front end (ie web browser)? Why is it not a good idea to use SOAP for communicating with the front end? For example, a web browser using JavaScript.
A: If the web browser is your only client then I would have to agree that SOAP is overkill.
However, if you are going to have multiple types of front end clients on running on different platforms then SOAP may be appropriate. The nice part about SOAP is that there are a lot of tools out there that will generate code for you to handle sending, receiving, and parsing of SOAP based on the WSDL file.
For example, if you wanted to develop a C++ front end client then all you need is the WSDL file and Microsoft's tools will generate all the C++ code to generate the SOAP request based on a data structure, send the request, receive the response, and parse the response into a return data structure.
There are tools to do this both on the client and server side.
A: *
*Because it's bloated
*Because JSON is natively understandable by JavaScript
*Because XML isn't fast to manipulate with JavaScript.
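To make the first point concrete, compare a hypothetical SOAP envelope with the equivalent JSON for a single field:
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetUserResponse>
      <name>Ada</name>
    </GetUserResponse>
  </soap:Body>
</soap:Envelope>
versus:
{"name": "Ada"}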
A: It could be done. Just remember that SOAP is not the fastest way to exchange information as there is a big overhead (big XMLs have to be sent back and forth) - that's probably why you don't see it used that often
A: Because SOAP reinvents a lot of the HTTP wheel in its quest for protocol-independence. What's the point if you know you're going to serve the response over HTTP anyway (since your client is a web browser)?
UPDATE: I second gizmo's (implied) suggestion of JSON.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127038",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
}
|
Q: Copy / Put text on the clipboard with FireFox, Safari and Chrome In Internet Explorer I can use the clipboardData object to access the clipboard. How can I do that in FireFox, Safari and/or Chrome?
A: As of 2017, you can do this:
function copyStringToClipboard(string) {
    function handler(event) {
        event.clipboardData.setData('text/plain', string);
        event.preventDefault();
        document.removeEventListener('copy', handler, true);
    }
    document.addEventListener('copy', handler, true);
    document.execCommand('copy');
}
And now to copy copyStringToClipboard('Hello, World!')
If you noticed the setData line, and wondered if you can set different data types, the answer is yes.
A: Firefox does allow you to store data in the clipboard, but due to security implications it is disabled by default. See how to enable it in "Granting JavaScript access to the clipboard" in the Mozilla Firefox knowledge base.
The solution offered by amdfan is the best if you have a lot of users and configuring their browser isn't an option. Though you could test if the clipboard is available and provide a link for changing the settings, if the users are tech-savvy. The JavaScript editor TinyMCE follows this approach.
A: For security reasons, Firefox doesn't allow you to place text on the clipboard. However, there is a workaround available using Flash.
function copyIntoClipboard(text) {
    var flashId = 'flashId-HKxmj5';
    /* Replace this with your clipboard.swf location */
    var clipboardSWF = 'http://appengine.bravo9.com/copy-into-clipboard/clipboard.swf';

    if (!document.getElementById(flashId)) {
        var div = document.createElement('div');
        div.id = flashId;
        document.body.appendChild(div);
    }

    document.getElementById(flashId).innerHTML = '';
    var content = '<embed src="' +
        clipboardSWF +
        '" FlashVars="clipboard=' + encodeURIComponent(text) +
        '" width="0" height="0" type="application/x-shockwave-flash"></embed>';
    document.getElementById(flashId).innerHTML = content;
}
The only disadvantage is that this requires Flash to be enabled.
The source is currently dead: http://bravo9.com/journal/copying-text-into-the-clipboard-with-javascript-in-firefox-safari-ie-opera-292559a2-cc6c-4ebf-9724-d23e8bc5ad8a/ (and so is its Google cache)
A: The copyIntoClipboard() function works for Flash 9, but it appears to be broken by the release of Flash player 10. Here's a solution that does work with the new flash player:
http://bowser.macminicolo.net/~jhuckaby/zeroclipboard/
It's a complex solution, but it does work.
A: I have to say that none of these solutions really works. I have tried the clipboard solution from the accepted answer, and it does not work with Flash Player 10. I have also tried ZeroClipboard, and I was very happy with it for a while.
I'm currently using it on my own site (http://www.blogtrog.com), but I've been noticing weird bugs with it. The way ZeroClipboard works is that it puts an invisible flash object over the top of an element on your page. I've found that if my element moves (like when the user resizes the window and i have things right aligned), the ZeroClipboard flash object gets out of whack and is no longer covering the object. I suspect it's probably still sitting where it was originally. They have code that's supposed to stop that, or restick it to the element, but it doesn't seem to work well.
So... in the next version of BlogTrog, I guess I'll follow suit with all the other code highlighters I've seen out in the wild and remove my Copy to Clipboard button. :-(
(I noticed that dp.syntaxhiglighter's Copy to Clipboard is broken now also.)
A: Check this link:
Granting JavaScript access to the clipboard
Like everybody said, for security reasons, it is by default disabled. The page above shows the instructions of how to enable it (by editing about:config in Firefox or the user.js file).
Fortunately, there is a plugin called "AllowClipboardHelper" which makes things easier with only a few clicks. However, you still need to instruct your website's visitors on how to enable the access in Firefox.
A: Use the modern document.execCommand("copy") and jQuery. See this Stack Overflow answer.
var ClipboardHelper = { // As Object
    copyElement: function ($element)
    {
        this.copyText($element.text())
    },
    copyText: function (text) // Linebreaks with \n
    {
        var $tempInput = $("<textarea>");
        $("body").append($tempInput);
        $tempInput.val(text).select();
        document.execCommand("copy");
        $tempInput.remove();
    }
};
How to call it:
ClipboardHelper.copyText('Hello\nWorld');
ClipboardHelper.copyElement($('body h1').first());
// jQuery document
;(function ($, window, document, undefined) {
    var ClipboardHelper = {
        copyElement: function ($element)
        {
            this.copyText($element.text())
        },
        copyText: function (text) // Linebreaks with \n
        {
            var $tempInput = $("<textarea>");
            $("body").append($tempInput);
            //todo prepare Text: remove double whitespaces, trim
            $tempInput.val(text).select();
            document.execCommand("copy");
            $tempInput.remove();
        }
    };

    $(document).ready(function()
    {
        var $body = $('body');

        $body.on('click', '*[data-copy-text-to-clipboard]', function(event)
        {
            var $btn = $(this);
            var text = $btn.attr('data-copy-text-to-clipboard');
            ClipboardHelper.copyText(text);
        });

        $body.on('click', '.js-copy-element-to-clipboard', function(event)
        {
            ClipboardHelper.copyElement($(this));
        });
    });
})(jQuery, window, document);

<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>

<span data-copy-text-to-clipboard="Hello
World">
    Copy Text
</span>
<br><br>
<span class="js-copy-element-to-clipboard">
    Hello
    World
    Element
</span>
A: There is now a way to easily do this in most modern browsers using
document.execCommand('copy');
This will copy currently selected text. You can select a textArea or input field using
document.getElementById('myText').select();
To invisibly copy text you can quickly generate a textArea, modify the text in the box, select it, copy it, and then delete the textArea. In most cases this textArea won't even flash onto the screen.
For security reasons, browsers will only allow you copy if a user takes some kind of action (ie. clicking a button). One way to do this would be to add an onClick event to a html button that calls a method which copies the text.
A full example:
function copier() {
    document.getElementById('myText').select();
    document.execCommand('copy');
}

<button onclick="copier()">Copy</button>
<textarea id="myText">Copy me PLEASE!!!</textarea>
A: I've used GitHub's Clippy for my needs; it's a simple Flash-based button. It works just fine if one doesn't need styling and is happy to insert what to paste on the server side beforehand.
A: Online spreadsheet applications hook Ctrl + C and Ctrl + V events and transfer focus to a hidden TextArea control and either set its contents to desired new clipboard contents for copy or read its contents after the event had finished for paste.
See also Is it possible to read the clipboard in Firefox, Safari and Chrome using JavaScript?.
A: It is summer 2015, and with so much turmoil surrounding Flash, here is how to avoid its use altogether.
clipboard.js is a nice utility that allows copying of text or html data to the clipboard. It's very easy to use, just include the .js and use something like this:
<button id='markup-copy'>Copy Button</button>
<script>
    document.getElementById('markup-copy').addEventListener('click', function() {
        clipboard.copy({
            'text/plain': 'Markup text. Paste me into a rich text editor.',
            'text/html': '<i>here</i> is some <b>rich text</b>'
        }).then(
            function() { console.log('success'); },
            function(err) { console.log('failure', err); }
        );
    });
</script>
clipboard.js is also on GitHub.
A: A slight improvement on the Flash solution is to detect for Flash 10 using swfobject:
http://code.google.com/p/swfobject/
And then if it shows as Flash 10, try loading a Shockwave object using JavaScript. Shockwave can read/write to the clipboard (in all versions) as well using the copyToClipboard() command in Lingo.
A: http://www.rodsdot.com/ee/cross_browser_clipboard_copy_with_pop_over_message.asp works with Flash 10 and all Flash enabled browsers.
Also ZeroClipboard has been updated to avoid the bug mentioned about page scrolling causing the Flash movie to no longer be in the correct place.
Since that method "Requires" the user to click a button to copy this is a convenience to the user and nothing nefarious is occurring.
A: Try creating a memory global variable storing the selection. Then the other function can access the variable and do a paste. For example,
var memory = ''; // Outside the functions but within the script tag.

function moz_stringCopy(DOMEle, firstPos, secondPos) {
    var copiedString = DOMEle.value.slice(firstPos, secondPos);
    memory = copiedString;
}

function moz_stringPaste(DOMEle, newpos) {
    DOMEle.value = DOMEle.value.slice(0, newpos) + memory + DOMEle.value.slice(newpos);
}
A: If you support Flash, you can use https://everyplay.com/assets/clipboard.swf and use the flashvars text to set the text.
https://everyplay.com/assets/clipboard.swf?text=It%20Works
That's the one I use for copying, and you can provide extra fallbacks if the browser doesn't support these options. You can use:
For Internet Explorer:
window.clipboardData.setData(DataFormat, Text) and window.clipboardData.getData(DataFormat)
You can use the DataFormat's Text and URL to getData and setData.
And to delete data:
You can use the DataFormat's File, HTML, Image, Text and URL. PS: You need to use window.clipboardData.clearData(DataFormat);.
And for others that don't support window.clipboardData or SWF Flash files, you can also use Ctrl + C on your keyboard for Windows and, for Mac, Command + C.
A: From addon code:
For how to do it from Chrome code, you can use the nsIClipboardHelper interface as described here: https://developer.mozilla.org/en-US/docs/Using_the_Clipboard
A: Use document.execCommand('copy'). It is supported in the latest versions of Chrome, Firefox, Edge, and Safari.
function copyText(text){
function selectElementText(element) {
if (document.selection) {
var range = document.body.createTextRange();
range.moveToElementText(element);
range.select();
} else if (window.getSelection) {
var range = document.createRange();
range.selectNode(element);
window.getSelection().removeAllRanges();
window.getSelection().addRange(range);
}
}
var element = document.createElement('DIV');
element.textContent = text;
document.body.appendChild(element);
selectElementText(element);
document.execCommand('copy');
element.remove();
}
var txt = document.getElementById('txt');
var btn = document.getElementById('btn');
btn.addEventListener('click', function(){
copyText(txt.value);
})
<input id="txt" value="Hello World!" />
<button id="btn">Copy To Clipboard</button>
A: The Clipboard API is designed to supersede document.execCommand. Safari is still working on support, so you should provide a fallback until the specification settles and Safari finishes its implementation.
const permalink = document.querySelector('[rel="bookmark"]');
const output = document.querySelector('output');
permalink.onclick = evt => {
evt.preventDefault();
window.navigator.clipboard.writeText(
permalink.href
).then(() => {
output.textContent = 'Copied';
}, () => {
output.textContent = 'Not copied';
});
};
<a href="https://stackoverflow.com/questions/127040/" rel="bookmark">Permalink</a>
<output></output>
For security reasons clipboard Permissions may be necessary to read and write from the clipboard. If the snippet doesn't work on Stack Overflow give it a shot on localhost or an otherwise trusted domain.
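Until Safari catches up, one possible fallback pattern is a sketch like this (prefer the async Clipboard API, fall back to the execCommand('copy') textarea trick):
function copyTextWithFallback(text) {
  // Modern path: async Clipboard API.
  if (navigator.clipboard && navigator.clipboard.writeText) {
    return navigator.clipboard.writeText(text);
  }
  // Legacy path: temporary off-DOM textarea + execCommand('copy').
  var ta = document.createElement('textarea');
  ta.value = text;
  document.body.appendChild(ta);
  ta.select();
  try {
    document.execCommand('copy');
  } finally {
    document.body.removeChild(ta);
  }
  return Promise.resolve();
}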
A: Building off the excellent answer from David from Studio.201, this works in Safari, Firefox, and Chrome. It also ensures no flashing could occur from the textarea by placing it off-screen.
// ================================================================================
// ClipboardClass
// ================================================================================
var ClipboardClass = (function() {
function copyText(text) {
// Create temp element off-screen to hold text.
var tempElem = $('<textarea style="position: absolute; top: -8888px; left: -8888px">');
$("body").append(tempElem);
tempElem.val(text).select();
document.execCommand("copy");
tempElem.remove();
}
// ============================================================================
// Class API
// ============================================================================
return {
copyText: copyText
};
})();
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "115"
}
|
Q: How to UAC elevate a COM component with .NET I've found an article on how to elevate a COM object written in C++ by calling
CoCreateInstanceAsAdmin. But what I have not been able to find or do, is a way to implement a component of my .NET (c#) application as a COM object and then call into that object to execute the tasks which need UAC elevation. MSDN documents this as the admin COM object model.
I am aware that it is possible and quite easy to launch the application (or another app) as an administrator, to execute the tasks in a separate process (see for instance the post from Daniel Moth), but what I am looking for is a way to do everything from within the same, un-elevated .NET executable. Doing so will, of course, spawn the COM object in a new process, but thanks to transparent marshalling, the caller of the .NET COM object should not be (too much) aware of it.
Any ideas as to how I could instantiate a COM object written in C#, from a C# project, through the CoCreateInstanceAsAdmin API would be very helpful. So I am really interested in learning how to write a COM object in C#, which I can then invoke from C# through the COM elevation APIs.
Never mind if the elevated COM object does not run in the same process. I just don't want to have to launch the whole application elevated; I would just like the COM object which will execute the code to be elevated. If I could write something along the lines of:
// in a dedicated assembly, marked with the following attributes:
[assembly: ComVisible (true)]
[assembly: Guid ("....")]
public class ElevatedClass
{
public void X() { /* do something */ }
}
and then have my main application just instantiate ElevatedClass through the CoCreateInstanceAsAdmin call. But maybe I am just dreaming.
A: Look at Windows Vista UAC Demo Sample Code
(You also need the Vista Bridge sample for UnsafeNativeMethods.CoGetObject method)
Which gives you C# code that shows a few different ways to elevate, including a COM object
(Incomplete code sample - grab the files above)
[return: MarshalAs(UnmanagedType.Interface)]
static internal object LaunchElevatedCOMObject(Guid Clsid, Guid InterfaceID)
{
string CLSID = Clsid.ToString("B"); // B formatting directive: returns {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
string monikerName = "Elevation:Administrator!new:" + CLSID;
NativeMethods.BIND_OPTS3 bo = new NativeMethods.BIND_OPTS3();
bo.cbStruct = (uint)Marshal.SizeOf(bo);
bo.hwnd = IntPtr.Zero;
bo.dwClassContext = (int)NativeMethods.CLSCTX.CLSCTX_ALL;
object retVal = UnsafeNativeMethods.CoGetObject(monikerName, ref bo, InterfaceID);
return (retVal);
}
A: I think the only way CoCreateInstanceAsAdmin works is if you have registered the COM component ahead of time. That may be a problem if you intend your application to work in an XCopy deployment setting.
For my own purposes in Gallio I decided to create a little hosting process on the side with a manifest to require admin privileges. Then when I need to perform an elevated action, I spin up an instance of the hosting process and instruct it via .Net remoting to execute a particular command registered in Gallio's Inversion of Control container.
This is a fair bit of work but Gallio already had an out of process hosting facility so adding elevation into the mix was not too hard. Moreover, this mechanism ensures that Gallio can perform privilege elevation without requiring prior installation of any other COM components in the registry.
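For illustration, launching such an elevated helper from .NET can be as small as this sketch (the helper exe name and its command-line protocol are hypothetical; the helper itself carries a requireAdministrator manifest):
using System.Diagnostics;

static class Elevation
{
    // Spawn the manifest-elevated helper process and wait for it to finish.
    internal static void RunElevated(string command)
    {
        var psi = new ProcessStartInfo
        {
            FileName = "ElevationHost.exe", // hypothetical helper exe
            Arguments = command,
            UseShellExecute = true,
            Verb = "runas" // requests elevation (UAC prompt) if not already elevated
        };
        Process.Start(psi).WaitForExit();
    }
}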
A: The unit of elevation is the process. So, if I understand your question correctly, and you want a way to elevate a COM object inside your own process, then the answer is you can't. The entire point of CoCreateInstanceAsAdmin is to NOT run it in your process.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
}
|
Q: Find out number of capture groups in Python regular expressions Is there a way to determine how many capture groups there are in a given regular expression?
I would like to be able to do the following:
def groups(regexp, s):
""" Returns the first result of re.findall, or an empty default
>>> groups(r'(\d)(\d)(\d)', '123')
('1', '2', '3')
>>> groups(r'(\d)(\d)(\d)', 'abc')
('', '', '')
"""
import re
m = re.search(regexp, s)
if m:
return m.groups()
return ('',) * num_of_groups(regexp)
This allows me to do stuff like:
first, last, phone = groups(r'(\w+) (\w+) ([\d\-]+)', 'John Doe 555-3456')
However, I don't know how to implement num_of_groups. (Currently I just work around it.)
EDIT: Following the advice from rslite, I replaced re.findall with re.search.
sre_parse seems like the most robust and comprehensive solution, but requires tree traversal and appears to be a bit heavy.
MizardX's regular expression seems to cover all bases, so I'm going to go with that.
A: def num_groups(regex):
return re.compile(regex).groups
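For example, the groups attribute of a compiled pattern is the capture-group count:
import re

p = re.compile(r'(\w+) (\w+) ([\d\-]+)')
print(p.groups)  # 3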
A: f_x = re.search(...)
len_groups = len(f_x.groups())
A: Something from inside sre_parse might help.
At first glance, maybe something along the lines of:
>>> import sre_parse
>>> sre_parse.parse('(\d)\d(\d)')
[('subpattern', (1, [('in', [('category', 'category_digit')])])),
('in', [('category', 'category_digit')]),
('subpattern', (2, [('in', [('category', 'category_digit')])]))]
I.e. count the items of type 'subpattern':
import sre_parse
def count_patterns(regex):
"""
>>> count_patterns('foo: \d')
0
>>> count_patterns('foo: (\d)')
1
>>> count_patterns('foo: (\d(\s))')
1
"""
parsed = sre_parse.parse(regex)
return len([token for token in parsed if token[0] == 'subpattern'])
Note that we're only counting root level patterns here, so the last example only returns 1. To change this, tokens would need to searched recursively.
A: First of all if you only need the first result of re.findall it's better to just use re.search that returns a match or None.
For the groups number you could count the number of open parenthesis '(' except those that are escaped by '\'. You could use another regex for that:
def num_of_groups(regexp):
rg = re.compile(r'(?<!\\)\(')
return len(rg.findall(regexp))
Note that this doesn't work if the regex contains non-capturing groups and also if '(' is escaped by using it as '[(]'. So this is not very reliable. But depending on the regexes that you use it might help.
A: Using your code as a basis:
def groups(regexp, s):
""" Returns the first result of re.findall, or an empty default
>>> groups(r'(\d)(\d)(\d)', '123')
('1', '2', '3')
>>> groups(r'(\d)(\d)(\d)', 'abc')
('', '', '')
"""
import re
m = re.search(regexp, s)
if m:
return m.groups()
    return ('',) * re.compile(regexp).groups  # m is None here, so take the count from the pattern
A: Might be wrong, but I don't think there is a way to find the number of groups that would have been returned had the regex matched. The only way I can think of to make this work the way you want it to is to pass the number of matches your particular regex expects as an argument.
To clarify though: When findall succeeds, you only want the first match to be returned, but when it fails you want a list of empty strings? Because the comment seems to show all matches being returned as a list.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127055",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "54"
}
|
Q: Does .NET Framework 3.5 SP1 require restart? We have a production machine and are trying to slowly introduce some 3.0 and up features on our web application. 3.5 is installed but I would like to upgrade to SP1, I would just like to know if it requires a restart of the machine in the end so I can schedule some down time.
Thanks!
EDIT: so it did require the restart, thanks guys for the answer...but the hosting company didn't give us the rights to do so...LOL
A: He he. I installed it on about 4 machines...two required a restart, two did not. The configuration was similar between them, so there was no obvious way to determine why some needed a restart and others didn't. The best theory I have currently is that the ones which needed restarts tended to be the ones which were more active (they were all running ASP.Net sites), so it is possible that the framework bits had not yet been loaded by IIS for the ones which did not need a restart.
To be safe, plan on restarting and schedule the update accordingly.
A: 3.5 SP1 updates 3.0 to SP2 and 2.0 (which contains the CLR) to SP2. If the CLR is loaded in any process the DLLs will not be writable, and a reboot will be required.
A: Mine did not, but I reboot anyway, to clear the bits.
A: I ran the update on a Windows XP machine yesterday and it did not require a restart.
A: every machine that I have touched with SP1 for .Net 3.5 has required a reboot. It totally trashed my Team Foundation Server box.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127073",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: GridView Row to Object Type In ASP.NET, if I databind a GridView with an array of objects, let's say of type Foo, how can I retrieve and use foo(index) when the user selects the row?
i.e.
Dim fooArr() As Foo
gv1.DataSource = fooArr
gv1.DataBind()
On Row Select
Private Sub gv1_RowCommand(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.GridViewCommandEventArgs) Handles gv1.RowCommand
If e.CommandName = "Select" Then
'get and use foo(index)
End If
End Sub
A: If you can be sure the order of items in your data source has not changed, you can use the CommandArgument property of the CommandEventArgs.
A more robust method, however,is to use the DataKeys/SelectedDataKey properties of the GridView. The only caveat is that your command must be of type "Select" (so, by default RowCommand will not have access to the DataKey).
Assuming you have some uniqueness in the entities comprising your list, you can set one or more key property names in the GridView's DataKeys property. When the selected item in the GridView is set, you can retrieve your key value(s) and locate the item in your bound list. This method gets you out of the problem of having the ordinal position in the GridView not matching the ordinal position of your element in the data source.
Example:
<asp:GridView ID="GridView1" runat="server" AutoGenerateSelectButton="True"
DataKeyNames="Name" onrowcommand="GridView1_RowCommand1"
onselectedindexchanged="GridView1_SelectedIndexChanged">
</asp:GridView>
Then the code-behind (or inline) for the Page would be something like:
protected void GridView1_SelectedIndexChanged(object sender, EventArgs e)
{
// Value is the Name property of the selected row's bound object.
string foo = GridView1.SelectedDataKey.Value as string;
}
Another choice would be to go spelunking in the Rows collection of the GridView, fetching values a column at a time by getting control values, but that's not recommended unless you have to.
Hope this helps.
A: In theory the index of the row should be the index into foo (maybe +1 for the header row; you'll need to test). So you should be able to do something along these lines:
dim x as object = foo(e.row.selectedIndex)
The other alternative is to find a way to databind the index to the commandArgument attribute of the button.
A: There's probably a cleaner way of doing this, but you could set the CommandArgument property of the row to its index. Then something like foo(CInt(e.CommandArgument)) would do the trick.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Why do VS2008 spawn one Cassini for each web-site/application when going into debug mode? I have quite a big solution here with a lot of different web applications and sites, around 10-15 of them.
When I attach VS2008 to any process (most likely Nunit.exe or something similar) I get one Cassini process per website/application in the solution. Is there a quick way to get rid of this behaviour in VS or do I have to connect them to the IIS?
A: Just as multiple apps cannot have the same URL, you need multiple instances of Cassini to expose the apps (i.e. projects) locally. This more closely mimics the way the code will be ultimately deployed. You won't take the output from multiple projects and stick them in a single folder on the production app server, so VS doesn't do it either. Additionally, spawning a new process for each project gives them their own resources, which keeps them from stepping on each other during debugging.
VS 2008 exposes a project level setting that prevents Cassini from starting unless the project is explicitly debugged. Setting this would keep your sites from spinning up if all you want to do is run your unit tests (because those are in a separate project, right?).
A: I think what you want to do is set the "Always start when debugging" property to "false" for each of your website projects. Just click the project in the solution explorer, hit F4, and it's the first property in the list.
This property is annoying because even when you attach a debugger to IIS (i.e. you're not even building the solution), the little servers start automatically.
A: From what I know the mini web server that comes with Visual Studio is only capable of hosting one web app at a time. For what you want you really have to go with IIS.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: What is the preferred method of commenting JavaScript objects and methods? I'm used to Atlas where the preferred (from what I know) method is to use XML comments such as:
/// <summary>
/// Method to calculate distance between two points
/// </summary>
///
/// <param name="pointA">First point</param>
/// <param name="pointB">Second point</param>
///
function calculatePointDistance(pointA, pointB) { ... }
Recently I've been looking into other third-party JavaScript libraries and I see syntax like:
/*
* some comment here
* another comment here
* ...
*/
function blahblah() { ... }
As a bonus, are there API generators for JavaScript that could read the 'preferred' commenting style?
A: There's JSDoc
/**
* Shape is an abstract base class. It is defined simply
* to have something to inherit from for geometric
* subclasses
* @constructor
*/
function Shape(color){
this.color = color;
}
A: Yahoo offers YUIDoc.
It's well documented, supported by Yahoo, and is a Node.js app.
It also uses a lot of the same syntax, so not many changes would have to be made to go from one to the other.
A: The use of the triple comment in the first example is actually for external XML documentation tools and (in Visual Studio) IntelliSense support. It's still a valid comment, but it's special :) The actual comment 'operator' is //
The only limitation there is that it's for a single line.
The second example uses C style block commenting which allows for commenting across multiple lines or in the middle of a line.
A: Try pasting the following into a javascript file in Visual Studio 08 and play around with it:
var Namespace = {};
Namespace.AnotherNamespace = {};
Namespace.AnotherNamespace.annoyingAlert = function(_message)
{
/// <param name="_message">The message you want alerted two times</param>
/// <summary>This is really annoying!!</summary>
alert(_message);
alert(_message);
};
Intellisense galore!
More info about this (including how to reference external javascript-files, for use in large libraries) can be found on Scott Gu's blog.
A: The simpler the better, comments are good, use them :)
var something = 10; // My comment
/*
Lorem ipsum dolor sit amet, consectetur adipisicing elit,
sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Ut enim ad minim veniam, quis nostrud exercitation ullamco
nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor
in reprehenderit in voluptate velit esse cillum dolore eu
fugiat nulla pariatur.
*/
function bigThing() {
// ...
}
But for autogenerated doc...
/**
* Adds two numbers.
* @param {number} num1 The first number to add.
* @param {number} num2 The second number to add.
* @return {number} The result of adding num1 and num2.
*/
function bigThing() {
// ...
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127095",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "68"
}
|
Q: So, who should daemonize? The script or the caller? I'm always wondering who should do it. In Ruby, we have the Daemons library which allows Ruby scripts to daemonize themselves. And then, looking at God (a process monitoring tool, similar to monit) page, I see that God can daemonize processes.
Any definitive answer out there?
A: You probably cannot get a definitive answer, as we generally end up with both: the process has the ability to daemonize itself, and the process monitor has the ability to daemonize its children.
Personally I prefer to have the process monitor or script do it, for a few reasons:
1. if the process monitor wishes to closely follow its children to restart them if they die, it can choose not to daemonize them. A SIGCHLD will be delivered to the monitor when one of its child processes exits. In embedded systems we do this a lot.
2. Typically when daemonizing, you also set the euid and egid. I prefer not to encode into every child process a knowledge of system-level policy like uids to use.
3. It allows re-use of the same application as either a command line tool or a daemon (I freely admit that this rarely happens in practice).
A: I would say it is better for your script to do it. I don't know your process monitoring tool there, but I would think users could potentially use an alternative tool, which means that having the script do it would be preferable.
If you can envision the script run in non-daemon fashion, I would add an option to the script to enable or disable daemonization.
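For illustration, a minimal sketch of such an option using the Daemons gem mentioned in the question (the --daemon flag name is a made-up convention):
require 'daemons'

Daemons.daemonize if ARGV.include?('--daemon')

loop do
  # ... the actual work ...
  sleep 60
end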
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: SQL Server Convert integer to binary string I was wondering if there was an easy way in SQL to convert an integer to its binary representation and then store it as a varchar.
For example 5 would be converted to "101" and stored as a varchar.
A: this is a generic base converter
http://dpatrickcaldwell.blogspot.com/2009/05/converting-decimal-to-hexadecimal-with.html
you can do
select reverse(dbo.ConvertToBase(5, 2)) -- 101
A: Here's a bit of a change to the accepted answer from Sean, since I found it limiting to only allow a hardcoded number of digits in the output. In my daily use, I find it more useful to either get only up to the highest 1 digit, or specify how many digits I'm expecting back. It will automatically left-pad with 0s, so that it lines up to 8, 16, or whatever number of bits you want.
Create function f_DecimalToBinaryString
(
@Dec int,
@MaxLength int = null
)
Returns varchar(max)
as Begin
Declare @BinStr varchar(max) = '';
-- Perform the translation from Dec to Bin
While @Dec > 0 Begin
Set @BinStr = Convert(char(1), @Dec % 2) + @BinStr;
Set @Dec = Convert(int, @Dec /2);
End;
-- Either pad or trim the output to match the number of digits specified.
If (@MaxLength is not null) Begin
If @MaxLength <= Len(@BinStr) Begin -- Trim down
Set @BinStr = SubString(@BinStr, Len(@BinStr) - (@MaxLength - 1), @MaxLength);
End Else Begin -- Pad up
Set @BinStr = Replicate('0', @MaxLength - Len(@BinStr)) + @BinStr;
End;
End;
Return @BinStr;
End;
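Example calls (note that T-SQL needs the DEFAULT keyword to use a scalar function's parameter default):
SELECT dbo.f_DecimalToBinaryString(5, DEFAULT); -- '101'
SELECT dbo.f_DecimalToBinaryString(5, 8);       -- '00000101'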
A: Actually this is REALLY SIMPLE using plain old SQL. Just use bitwise ANDs. I was a bit amazed that there wasn't a simple solution posted online (that didn't involve UDFs). In my case I really wanted to check if bits were on or off (the data is coming from .NET enums).
Accordingly, here is an example that will give you, separately and together, the bit values and the binary string (the big union is just a hacky way of producing numbers that will work across DBs):
select t.Number
, cast(t.Number & 64 as bit) as bit7
, cast(t.Number & 32 as bit) as bit6
, cast(t.Number & 16 as bit) as bit5
, cast(t.Number & 8 as bit) as bit4
, cast(t.Number & 4 as bit) as bit3
, cast(t.Number & 2 as bit) as bit2
,cast(t.Number & 1 as bit) as bit1
, cast(cast(t.Number & 64 as bit) as CHAR(1))
+cast( cast(t.Number & 32 as bit) as CHAR(1))
+cast( cast(t.Number & 16 as bit) as CHAR(1))
+cast( cast(t.Number & 8 as bit) as CHAR(1))
+cast( cast(t.Number & 4 as bit) as CHAR(1))
+cast( cast(t.Number & 2 as bit) as CHAR(1))
+cast(cast(t.Number & 1 as bit) as CHAR(1)) as binary_string
--to explicitly answer the question, on MSSQL without using REGEXP (which would make it simple)
,SUBSTRING(cast(cast(t.Number & 64 as bit) as CHAR(1))
+cast( cast(t.Number & 32 as bit) as CHAR(1))
+cast( cast(t.Number & 16 as bit) as CHAR(1))
+cast( cast(t.Number & 8 as bit) as CHAR(1))
+cast( cast(t.Number & 4 as bit) as CHAR(1))
+cast( cast(t.Number & 2 as bit) as CHAR(1))
+cast(cast(t.Number & 1 as bit) as CHAR(1))
,
PATINDEX('%1%', cast(cast(t.Number & 64 as bit) as CHAR(1))
+cast( cast(t.Number & 32 as bit) as CHAR(1))
+cast( cast(t.Number & 16 as bit) as CHAR(1))
+cast( cast(t.Number & 8 as bit) as CHAR(1))
+cast( cast(t.Number & 4 as bit) as CHAR(1))
+cast( cast(t.Number & 2 as bit) as CHAR(1))
+cast(cast(t.Number & 1 as bit) as CHAR(1) )
)
,99)
from (select 1 as Number union all select 2 union all select 3 union all select 4 union all select 5 union all select 6
union all select 7 union all select 8 union all select 9 union all select 10) as t
Produces this result:
num bit7 bit6 bit5 bit4 bit3 bit2 bit1 binary_string binary_string_trimmed
1 0 0 0 0 0 0 1 0000001 1
2 0 0 0 0 0 1 0 0000010 10
3 0 0 0 0 0 1 1 0000011 11
4 0 0 0 0 1 0 0 0000100 100
5 0 0 0 0 1 0 1 0000101 101
6 0 0 0 0 1 1 0 0000110 110
7 0 0 0 0 1 1 1 0000111 111
8 0 0 0 1 0 0 0 0001000 1000
9 0 0 0 1 0 0 1 0001001 1001
10 0 0 0 1 0 1 0 0001010 1010
A: The following could be coded into a function. You would need to trim off leading zeros to meet the requirements of your question.
declare @intvalue int
set @intvalue=5
declare @vsresult varchar(64)
declare @inti int
select @inti = 64, @vsresult = ''
while @inti>0
begin
select @vsresult=convert(char(1), @intvalue % 2)+@vsresult
select @intvalue = convert(int, (@intvalue / 2)), @inti=@inti-1
end
select @vsresult
A: I used the following ITVF function to convert from decimal to binary.
Since it is an inline function, you don't need to "worry" about multiple reads performed by the optimizer.
CREATE FUNCTION dbo.udf_DecimalToBinary
(
@Decimal VARCHAR(32)
)
RETURNS TABLE AS RETURN
WITH Tally (n) AS
(
--30 rows (0 through 29)
SELECT TOP 30 ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) -1
FROM (VALUES (0),(0),(0),(0)) a(n)
CROSS JOIN (VALUES(0),(0),(0),(0),(0),(0),(0),(0)) b(n)
)
, Anchor (n, divisor , Result) as
(
SELECT t.N ,
CONVERT(BIGINT, @Decimal) / POWER(2,T.N) ,
CONVERT(BIGINT, @Decimal) / POWER(2,T.N) % 2
FROM Tally t
WHERE CONVERT(bigint,@Decimal) >= POWER(2,t.n)
)
SELECT TwoBaseBinary = '' +
(SELECT Result
FROM Anchor
ORDER BY N DESC
FOR XML PATH ('') , TYPE).value('.','varchar(200)')
/*How to use*/
SELECT TwoBaseBinary
FROM dbo.udf_DecimalToBinary ('1234')
/*result -> 10011010010*/
A: declare @i int /* input */
set @i = 42
declare @result varchar(32) /* SQL Server int is 32 bits wide */
set @result = ''
while 1 = 1 begin
select @result = convert(char(1), @i % 2) + @result,
@i = convert(int, @i / 2)
if @i = 0 break
end
select @result
A: declare @intVal Int
set @intVal = power(2,12)+ power(2,5) + power(2,1);
With ComputeBin (IntVal, BinVal,FinalBin)
As
(
Select @IntVal IntVal, @intVal %2 BinVal , convert(nvarchar(max),(@intVal %2 )) FinalBin
Union all
Select IntVal /2, (IntVal /2) %2, convert(nvarchar(max),(IntVal /2) %2) + FinalBin FinalBin
From ComputeBin
Where IntVal /2 > 0
)
select FinalBin from ComputeBin where intval = ( select min(intval) from ComputeBin);
A: with t as (select * from (values (0),(1)) as t(c)),
t0 as (table t),
t1 as (table t),
t2 as (table t),
t3 as (table t),
t4 as (table t),
t5 as (table t),
t6 as (table t),
t7 as (table t),
t8 as (table t),
t9 as (table t),
ta as (table t),
tb as (table t),
tc as (table t),
td as (table t),
te as (table t),
tf as (table t)
select '' || t0.c || t1.c || t2.c || t3.c || t4.c || t5.c || t6.c || t7.c || t8.c || t9.c || ta.c || tb.c || tc.c || td.c || te.c || tf.c as n
from t0,t1,t2,t3,t4,t5,t6,t7,t8,t9,ta,tb,tc,td,te,tf
order by n
limit 1 offset 5
Standard SQL (tested in PostgreSQL).
A: On SQL Server, you can try something like the sample below:
DECLARE @Int int = 321
SELECT @Int
,CONCAT
(CAST(@Int & power(2,15) AS bit)
,CAST(@Int & power(2,14) AS bit)
,CAST(@Int & power(2,13) AS bit)
,CAST(@Int & power(2,12) AS bit)
,CAST(@Int & power(2,11) AS bit)
,CAST(@Int & power(2,10) AS bit)
,CAST(@Int & power(2,9) AS bit)
,CAST(@Int & power(2,8) AS bit)
,CAST(@Int & power(2,7) AS bit)
,CAST(@Int & power(2,6) AS bit)
,CAST(@Int & power(2,5) AS bit)
,CAST(@Int & power(2,4) AS bit)
,CAST(@Int & power(2,3) AS bit)
,CAST(@Int & power(2,2) AS bit)
,CAST(@Int & power(2,1) AS bit)
,CAST(@Int & power(2,0) AS bit) ) AS BitString
,CAST(@Int & power(2,15) AS bit) AS BIT15
,CAST(@Int & power(2,14) AS bit) AS BIT14
,CAST(@Int & power(2,13) AS bit) AS BIT13
,CAST(@Int & power(2,12) AS bit) AS BIT12
,CAST(@Int & power(2,11) AS bit) AS BIT11
,CAST(@Int & power(2,10) AS bit) AS BIT10
,CAST(@Int & power(2,9) AS bit) AS BIT9
,CAST(@Int & power(2,8) AS bit) AS BIT8
,CAST(@Int & power(2,7) AS bit) AS BIT7
,CAST(@Int & power(2,6) AS bit) AS BIT6
,CAST(@Int & power(2,5) AS bit) AS BIT5
,CAST(@Int & power(2,4) AS bit) AS BIT4
,CAST(@Int & power(2,3) AS bit) AS BIT3
,CAST(@Int & power(2,2) AS bit) AS BIT2
,CAST(@Int & power(2,1) AS bit) AS BIT1
,CAST(@Int & power(2,0) AS bit) AS BIT0
A: I know I'm a bit late to the game here but I recently came up with a slick solution for this that leverages a tally table (similar to @hkravitz solution above.) The key difference is that my leverages what I call the Virtual Index to sort the results in descending order without a sort operator in the execution plan. I accomplish this using dbo.rangeAB which is included at the end of this post.
Note that this returns the numbers 0 to 30 (as "RN" for RowNumber) in ascending order:
SELECT r.RN
FROM dbo.rangeAB(0,30,1,0) AS r
ORDER BY r.RN;
It does so without sorting. RN can be defined as ROW_NUMBER() OVER (ORDER BY (SELECT NULL)). Sorting by RN does not require a sort, again - that's the virtual index at play.
When I try a descending sort however I do get a sort in the execution plan.
Enter Finite Opposites. RangeAB includes a column named Op - OP RN's Finite Opposite Number. By "finite opposite" I mean, 0 is the opposite of 30, 1 is the opposite of 29, etc.. Unlike traditional opposite numbers (-1 is opposite of 1). Finite opposites are returned in descending order.
SELECT r.RN, r.OP
FROM dbo.rangeAB(0,30,1,0) AS r
ORDER BY r.RN;
Returns:
RN OP
----- -------
0 30
1 29
2 28
3 27
....
27 3
28 2
29 1
30 0
Using OP, I can leverage RN's finite opposite to get the numbers in descending order while still using the virtual index to avoid a sort. The two queries return the same thing but, comparing execution plans, SSMS reports that removing the sort reduces the query cost by a factor of 50X.
THE FUNCTION
CREATE FUNCTION dbo.NumberToBinary(@input INT)
RETURNS TABLE WITH SCHEMABINDING AS RETURN
/* Created By Alan Burstein 20191112, Requires RangeAB (code below) */
SELECT BIN = (
SELECT @input/f.Np2%2
FROM dbo.rangeAB(0,30,1,0) AS r
CROSS APPLY (VALUES(POWER(2,r.Op))) AS f(NP2)
WHERE (@input = 0 AND f.Np2 = 1) OR @input >= f.Np2
ORDER BY ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FOR XML PATH(''));
RangeAB
CREATE FUNCTION dbo.rangeAB
(
@low bigint,
@high bigint,
@gap bigint,
@row1 bit
)
/****************************************************************************************
[Purpose]:
Creates up to 531,441,000,000 sequential integers beginning with @low and ending
with @high. Used to replace iterative methods such as loops, cursors and recursive CTEs
to solve SQL problems. Based on Itzik Ben-Gan's getnums function with some tweaks,
enhancements and added functionality. The logic for getting rn to begin at 0 or 1
comes from Jeff Moden's fnTally function.
The name comes from Clojure's range function; "rangeAB" was used because "range" is a
reserved SQL keyword.
[Author]: Alan Burstein
[Compatibility]:
SQL Server 2008+ and Azure SQL Database
[Syntax]:
SELECT r.RN, r.OP, r.N1, r.N2
FROM dbo.rangeAB(@low,@high,@gap,@row1) AS r;
[Parameters]:
@low = a bigint that represents the lowest value for n1.
@high = a bigint that represents the highest value for n1.
@gap = a bigint that represents how much n1 and n2 will increase each row; @gap also
represents the difference between n1 and n2.
@row1 = a bit that represents the first value of rn. When @row = 0 then rn begins
at 0, when @row = 1 then rn will begin at 1.
[Returns]:
Inline Table Valued Function returns:
rn = bigint; a row number that works just like T-SQL ROW_NUMBER() except that it can
start at 0 or 1 which is dictated by @row1.
op = bigint; returns the "opposite number that relates to rn. When rn begins with 0 and
ends with 10 then 10 is the opposite of 0, 9 the opposite of 1, etc. When rn begins
with 1 and ends with 5 then 1 is the opposite of 5, 2 the opposite of 4, etc...
n1 = bigint; a sequential number starting at the value of @low and incrementing by the
value of @gap until it is less than or equal to the value of @high.
n2 = bigint; a sequential number starting at the value of @low+@gap and incrementing
by the value of @gap.
[Dependencies]:
N/A
[Developer Notes]:
1. The lowest and highest possible numbers returned are whatever is allowable by a
bigint. The function, however, returns no more than 531,441,000,000 rows (8100^3).
2. @gap does not affect rn, rn will begin at @row1 and increase by 1 until the last row
unless its used in a query where a filter is applied to rn.
3. @gap must be greater than 0 or the function will not return any rows.
4. Keep in mind that when @row1 is 0 then the highest row-number will be the number of
rows returned minus 1
5. If you only need is a sequential set beginning at 0 or 1 then, for best performance
use the RN column. Use N1 and/or N2 when you need to begin your sequence at any
number other than 0 or 1 or if you need a gap between your sequence of numbers.
6. Although @gap is a bigint it must be a positive integer or the function will
not return any rows.
7. The function will not return any rows when one of the following conditions are true:
* any of the input parameters are NULL
* @high is less than @low
* @gap is not greater than 0
To force the function to return all NULLs instead of not returning anything you can
add the following code to the end of the query:
UNION ALL
SELECT NULL, NULL, NULL, NULL
WHERE NOT (@high&@low&@gap&@row1 IS NOT NULL AND @high >= @low AND @gap > 0)
This code was excluded as it adds a ~5% performance penalty.
8. There is no performance penalty for sorting by rn ASC, but there is a large
performance penalty for sorting rn in descending order.
If you need a descending sort, use op in place of rn, then sort by rn ASC.
Best Practices:
--===== 1. Using RN (rownumber)
-- (1.1) The best way to get the numbers 1,2,3...@high (e.g. 1 to 5):
SELECT RN FROM dbo.rangeAB(1,5,1,1);
-- (1.2) The best way to get the numbers 0,1,2...@high-1 (e.g. 0 to 5):
SELECT RN FROM dbo.rangeAB(0,5,1,0);
--===== 2. Using OP for descending sorts without a performance penalty
-- (2.1) The best way to get the numbers 5,4,3...@high (e.g. 5 to 1):
SELECT op FROM dbo.rangeAB(1,5,1,1) ORDER BY rn ASC;
-- (2.2) The best way to get the numbers 0,1,2...@high-1 (e.g. 5 to 0):
SELECT op FROM dbo.rangeAB(1,6,1,0) ORDER BY rn ASC;
--===== 3. Using N1
-- (3.1) To begin with numbers other than 0 or 1 use N1 (e.g. -3 to 3):
SELECT N1 FROM dbo.rangeAB(-3,3,1,1);
-- (3.2) ROW_NUMBER() is built in. If you want a ROW_NUMBER() include RN:
SELECT RN, N1 FROM dbo.rangeAB(-3,3,1,1);
-- (3.3) If you wanted a ROW_NUMBER() that started at 0 you would do this:
SELECT RN, N1 FROM dbo.rangeAB(-3,3,1,0);
--===== 4. Using N2 and @gap
-- (4.1) To get 0,10,20,30...100, set @low to 0, @high to 100 and @gap to 10:
SELECT N1 FROM dbo.rangeAB(0,100,10,1);
-- (4.2) Note that N2=N1+@gap; this allows you to create a sequence of ranges.
-- For example, to get (0,10),(10,20),(20,30).... (90,100):
SELECT N1, N2 FROM dbo.rangeAB(0,90,10,1);
-- (4.3) Remember that a rownumber is included and it can begin at 0 or 1:
SELECT RN, N1, N2 FROM dbo.rangeAB(0,90,10,1);
[Examples]:
--===== 1. Generating Sample data (using rangeAB to create "dummy rows")
-- The query below will generate 10,000 ids and random numbers between 50,000 and 500,000
SELECT
someId = r.rn,
someNumer = ABS(CHECKSUM(NEWID())%450000)+50001
FROM rangeAB(1,10000,1,1) r;
--===== 2. Create a series of dates; rn is 0 to include the first date in the series
DECLARE @startdate DATE = '20180101', @enddate DATE = '20180131';
SELECT r.rn, calDate = DATEADD(dd, r.rn, @startdate)
FROM dbo.rangeAB(1, DATEDIFF(dd,@startdate,@enddate),1,0) r;
GO
--===== 3. Splitting (tokenizing) a string with fixed sized items
-- given a delimited string of identifiers that are always 7 characters long
DECLARE @string VARCHAR(1000) = 'A601225,B435223,G008081,R678567';
SELECT
itemNumber = r.rn, -- item's ordinal position
itemIndex = r.n1, -- item's position in the string (it's CHARINDEX value)
item = SUBSTRING(@string, r.n1, 7) -- item (token)
FROM dbo.rangeAB(1, LEN(@string), 8,1) r;
GO
--===== 4. Splitting (tokenizing) a string with random delimiters
DECLARE @string VARCHAR(1000) = 'ABC123,999F,XX,9994443335';
SELECT
itemNumber = ROW_NUMBER() OVER (ORDER BY r.rn), -- item's ordinal position
itemIndex = r.n1+1, -- item's position in the string (it's CHARINDEX value)
item = SUBSTRING
(
@string,
r.n1+1,
ISNULL(NULLIF(CHARINDEX(',',@string,r.n1+1),0)-r.n1-1, 8000)
) -- item (token)
FROM dbo.rangeAB(0,DATALENGTH(@string),1,1) r
WHERE SUBSTRING(@string,r.n1,1) = ',' OR r.n1 = 0;
-- logic borrowed from: http://www.sqlservercentral.com/articles/Tally+Table/72993/
--===== 5. Grouping by a weekly intervals
-- 5.1. how to create a series of start/end dates between @startDate & @endDate
DECLARE @startDate DATE = '1/1/2015', @endDate DATE = '2/1/2015';
SELECT
WeekNbr = r.RN,
WeekStart = DATEADD(DAY,r.N1,@StartDate),
WeekEnd = DATEADD(DAY,r.N2-1,@StartDate)
FROM dbo.rangeAB(0,datediff(DAY,@StartDate,@EndDate),7,1) r;
GO
-- 5.2. LEFT JOIN to the weekly interval table
BEGIN
DECLARE @startDate datetime = '1/1/2015', @endDate datetime = '2/1/2015';
-- sample data
DECLARE @loans TABLE (loID INT, lockDate DATE);
INSERT @loans SELECT r.rn, DATEADD(dd, ABS(CHECKSUM(NEWID())%32), @startDate)
FROM dbo.rangeAB(1,50,1,1) r;
-- solution
SELECT
WeekNbr = r.RN,
WeekStart = dt.WeekStart,
WeekEnd = dt.WeekEnd,
total = COUNT(l.lockDate)
FROM dbo.rangeAB(0,datediff(DAY,@StartDate,@EndDate),7,1) r
CROSS APPLY (VALUES (
CAST(DATEADD(DAY,r.N1,@StartDate) AS DATE),
CAST(DATEADD(DAY,r.N2-1,@StartDate) AS DATE))) dt(WeekStart,WeekEnd)
LEFT JOIN @loans l ON l.lockDate BETWEEN dt.WeekStart AND dt.WeekEnd
GROUP BY r.RN, dt.WeekStart, dt.WeekEnd ;
END;
--===== 6. Identify the first vowel and last vowel in a along with their positions
DECLARE @string VARCHAR(200) = 'This string has vowels';
SELECT TOP(1) position = r.rn, letter = SUBSTRING(@string,r.rn,1)
FROM dbo.rangeAB(1,LEN(@string),1,1) r
WHERE SUBSTRING(@string,r.rn,1) LIKE '%[aeiou]%'
ORDER BY r.rn;
-- To avoid a sort in the execution plan we'll use op instead of rn
SELECT TOP(1) position = r.op, letter = SUBSTRING(@string,r.op,1)
FROM dbo.rangeAB(1,LEN(@string),1,1) r
WHERE SUBSTRING(@string,r.rn,1) LIKE '%[aeiou]%'
ORDER BY r.rn;
---------------------------------------------------------------------------------------
[Revision History]:
Rev 00 - 20140518 - Initial Development - Alan Burstein
Rev 01 - 20151029 - Added 65 rows to make L1=465; 465^3=100.5M. Updated comment section
- Alan Burstein
Rev 02 - 20180613 - Complete re-design including opposite number column (op)
Rev 03 - 20180920 - Added additional CROSS JOIN to L2 for 530B rows max - Alan Burstein
****************************************************************************************/
RETURNS TABLE WITH SCHEMABINDING AS RETURN
WITH L1(N) AS
(
SELECT 1
FROM (VALUES
(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),
(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),
(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),
(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),
(0),(0)) T(N) -- 90 values
),
L2(N) AS (SELECT 1 FROM L1 a CROSS JOIN L1 b CROSS JOIN L1 c),
iTally AS (SELECT rn = ROW_NUMBER() OVER (ORDER BY (SELECT 1)) FROM L2 a CROSS JOIN L2 b)
SELECT
r.RN,
r.OP,
r.N1,
r.N2
FROM
(
SELECT
RN = 0,
OP = (@high-@low)/@gap,
N1 = @low,
N2 = @gap+@low
WHERE @row1 = 0
UNION ALL -- ISNULL required in the TOP statement below for error handling purposes
SELECT TOP (ABS((ISNULL(@high,0)-ISNULL(@low,0))/ISNULL(@gap,0)+ISNULL(@row1,1)))
RN = i.rn,
OP = (@high-@low)/@gap+(2*@row1)-i.rn,
N1 = (i.rn-@row1)*@gap+@low,
N2 = (i.rn-(@row1-1))*@gap+@low
FROM iTally AS i
ORDER BY i.rn
) AS r
WHERE @high&@low&@gap&@row1 IS NOT NULL AND @high >= @low AND @gap > 0;
GO
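Once both functions are created (RangeAB first), usage looks like this:
SELECT b.BIN FROM dbo.NumberToBinary(5) AS b; -- '101'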
A: I believe that this method simplifies a lot of the other ideas that others have presented. It uses bitwise arithmetic along with the FOR XML trick with a CTE to generate the binary digits.
DECLARE @my_int INT = 5
;WITH CTE_Binary AS
(
SELECT 1 AS seq, 1 AS val
UNION ALL
SELECT seq + 1 AS seq, power(2, seq)
FROM CTE_Binary
WHERE
seq < 8
)
SELECT
(
SELECT
CAST(CASE WHEN B2.seq IS NOT NULL THEN 1 ELSE 0 END AS CHAR(1))
FROM
CTE_Binary B1
LEFT OUTER JOIN CTE_Binary B2 ON
B2.seq = B1.seq AND
@my_int & B2.val = B2.val
ORDER BY
B1.seq DESC
FOR XML PATH('')
) AS val
A: You can use a recursive CTE table to do this. The example code below is set for 16 bits, but you can do any length by adjusting the two POWER(2,15) terms (number of bits minus one) and the VARCHAR(16) width.
Also, the data you want to convert is in the table DecimalTable.
WITH DecimalTable AS (SELECT 10 decimal_num UNION SELECT 20),
DtoB AS (SELECT decimal_num
,1 n
,CAST(CAST(decimal_num%2 AS bit) AS VARCHAR(16)) binary_num
FROM DecimalTable
UNION ALL
SELECT decimal_num
,n*2 n
,CAST(CONCAT(CAST(decimal_num&(n*2) as bit), binary_num)
AS VARCHAR(16)) binary_num
FROM DtoB
WHERE n<POWER(2,15))
SELECT decimal_num, binary_num
FROM DtoB
WHERE n = POWER(2,15)
A: This function is a generic converter, allowing an integer to be converted to a string representation in any base numbering system, like binary, octal, hexadecimal, etc.
-- specify a string and numbering system Base value, for example 16 for hexadecimal
CREATE FUNCTION udf_IntToBaseXStr(@baseVal BIGINT,
@baseX BIGINT)
returns VARCHAR(63)
AS
BEGIN
--bigint : -2^63 (-9,223,372,036,854,775,808) to 2^63-1 (9,223,372,036,854,775,807)
-- or 63 ones (1111111,11111111,11111111,11111111,11111111,11111111,11111111,11111111) in binary
DECLARE @val BIGINT -- value from all
DECLARE @cv BIGINT -- value from a single char
DECLARE @baseStr VARCHAR(63)
SET @baseStr = '';
-- assumes a numbering method of 0123456789ABCDEF.....
SET @val = @baseVal
WHILE ( @val > 0 )
BEGIN
SET @cv = @val % @basex -- calculate the right most char's value
SET @baseStr = -- add it to (any existing) string
CASE
WHEN @cv < 10 THEN Char(Ascii('0') + @cv)
ELSE Char(Ascii('A') + ( @cv - 10 ))
END
+ @baseStr
SET @val = ( @val - @cv ) / @basex
END
RETURN @baseStr
END
GO
If you need to guarantee a minimum length, the next function wraps the above function, prepending a number of ZEROES, forcing the returned string to your desired minimum length. It does not truncate to the specified length.
-- specify a string and numbering system Base value, for example, 16 for hexadecimal
-- prepends LEADING ZEROS to force length of returned string to be AT LEAST minLength chars
CREATE FUNCTION udf_IntToBaseXStr_MinLength(@baseVal BIGINT,
@baseX BIGINT,
@minLength INT)
returns VARCHAR(63)
AS
BEGIN
DECLARE @baseStr VARCHAR(63)
SET @baseStr = dbo.udf_IntToBaseXStr(@baseVal, @baseX)
IF Len(@baseStr) < @minLength
SET @baseStr = Replicate('0', @minLength - Len(@baseStr))
+ @baseStr
RETURN @baseStr
END
GO
udf_IntToBaseXStr Usage:
;with CTE as
(
SELECT BaseX = 2, AKA = 'binary'
UNION SELECT 8, 'octal'
UNION SELECT 10, 'decimal'
UNION SELECT 15, 'pentadecimal'
UNION SELECT 16, 'hexadecimal'
)
SELECT BaseX, AKA, Result = dbo.udf_IntToBaseXStr(328239523, BaseX) FROM CTE
udf_IntToBaseXStr Result:
BaseX  AKA           Result
-----  ------------  -----------------------------
2      binary        10011100100001000100110100011
8      octal         2344104643
10     decimal       328239523
15     pentadecimal  1DC3B24D
16     hexadecimal   139089A3
udf_IntToBaseXStr_MinLength Usage:
;with CTE as
(
SELECT BaseX = 2, AKA = 'binary'
UNION SELECT 8, 'octal'
UNION SELECT 10, 'decimal'
UNION SELECT 15, 'pentadecimal'
UNION SELECT 16, 'hexadecimal'
)
SELECT BaseX, AKA, Result = dbo.udf_IntToBaseXStr_MinLength(328239523, BaseX, 24) FROM CTE
udf_IntToBaseXStr_MinLength Result:
BaseX  AKA           Result
-----  ------------  -----------------------------
2      binary        10011100100001000100110100011
8      octal         000000000000002344104643
10     decimal       000000000000000328239523
15     pentadecimal  00000000000000001DC3B24D
16     hexadecimal   0000000000000000139089A3
A: Want easy? Do some bitwise math to map out each binary digit.
CREATE FUNCTION dbo.BinaryRep (@val INT)
RETURNS VARCHAR(32)
WITH EXECUTE AS CALLER
AS
BEGIN
DECLARE @ret VARCHAR(32)
DECLARE @cnt INT = 30; -- 30 to 0 inclusive in loop
-- handle negative (we're using signed magnitude because that's simple)
SET @ret = IIF(@val < 0, '1', '0');
SET @val = ABS(@val); -- totally cheating here.
-- bitwise masking madness, one digit at a time.
WHILE @cnt > -1
BEGIN
SET @ret = CONCAT(@ret, IIF(@val & POWER(2, @cnt) = 0, 0, 1));
SET @cnt = @cnt - 1;
END;
RETURN @ret;
END
The only twist is exactly what Constantin notes: How do you like your negatives?
This version cheaps out and uses signed magnitude, where the first bit is simply 1 for negatives with no other changes. -123 and 123 only differ by their high bit.
select dbo.BinaryRep(123) as plus, dbo.BinaryRep(-123) as minus
plus minus
-------------------------------- --------------------------------
00000000000000000000000001111011 10000000000000000000000001111011
Note that SQL Server INT supports -2^31 to 2^31-1, so we need to loop through 31 times (30 to 0, inclusive), not 32.
A: How about this...
SELECT number_value
,MOD(number_value / 32768, 2) AS BIT15
,MOD(number_value / 16384, 2) AS BIT14
,MOD(number_value / 8192, 2) AS BIT13
,MOD(number_value / 4096, 2) AS BIT12
,MOD(number_value / 2048, 2) AS BIT11
,MOD(number_value / 1024, 2) AS BIT10
,MOD(number_value / 512, 2) AS BIT9
,MOD(number_value / 256, 2) AS BIT8
,MOD(number_value / 128, 2) AS BIT7
,MOD(number_value / 64, 2) AS BIT6
,MOD(number_value / 32, 2) AS BIT5
,MOD(number_value / 16, 2) AS BIT4
,MOD(number_value / 8, 2) AS BIT3
,MOD(number_value / 4, 2) AS BIT2
,MOD(number_value / 2, 2) AS BIT1
,MOD(number_value , 2) AS BIT0
FROM your_table;
A: Why not simply...
declare @num int = 75
select
@num [Dec]
, convert (varchar(1), @num / 128 % 2)
+ convert (varchar(1), @num / 64 % 2)
+ convert (varchar(1), @num / 32 % 2)
+ convert (varchar(1), @num / 16 % 2)
+ convert (varchar(1), @num / 8 % 2)
+ convert (varchar(1), @num / 4 % 2)
+ convert (varchar(1), @num / 2 % 2)
+ convert (varchar(1), @num % 2) as [Bin]
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
}
|
Q: How to test for object existence in Firebird SQL? I need to test whether various types of database objects exist in a given database, and I don't know how to formulate these tests in Firebird SQL. Each test has the form "Does object of type X with name Y exist?". For example, I need to test whether a table with a given name exists. The object types I need to test are:
*
*Table
*View
*Domain
*Trigger
*Procedure
*Exception
*Generator
*UDF
*Role
One can find how to query for a given table on the Internet, but the other types are more difficult to find ...
A: It seems like you need to query against the system tables to reliably get that information. Here's a tutorial that looks like it can help:
http://www.alberton.info/firebird_sql_meta_info.html
A: I think a lot of what you are asking can be found at this forum post. If you want to dive a little deeper, this site seems to have a graphical representation of the tables.
A: Every year, Martijn Tonies gave a session at the Firebird Conference, so look for it in the timetables.
In 2005:
http://www.ibphoenix.com/main.nfs?a=ibphoenix&page=fb_conf_timetable_2005
In 2006:
http://www.ibphoenix.com/main.nfs?a=ibphoenix&page=fb_conf_timetable_2006
There are also timetables for 2007 and 2008:
http://www.firebirdconference.net/index.php?option=com_content&view=article&id=3&Itemid=3
but I don't know where to download the papers.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127118",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Resolve Windows device path to drive letter How do you resolve an NT style device path, e.g. \Device\CdRom0, to its logical drive letter, e.g. G:\ ?
Edit: A Volume Name isn't the same as a Device Path so unfortunately GetVolumePathNamesForVolumeName() won't work.
A: Hopefully the following piece of code will give you enough to solve this - after you've initialised it, you just need to iterate through the collection to find your match. You may want to convert everything to upper/lower case before you insert into the collection to help with lookup performance.
#include <windows.h>
#include <tchar.h>
#include <map>
#include <string>
using namespace std;

typedef basic_string<TCHAR> tstring;
typedef map<tstring, tstring> HardDiskCollection;
void Initialise( HardDiskCollection &_hardDiskCollection )
{
TCHAR tszLinkName[MAX_PATH] = { 0 };
TCHAR tszDevName[MAX_PATH] = { 0 };
TCHAR tcDrive = 0;
_tcscpy_s( tszLinkName, MAX_PATH, _T("a:") );
for ( tcDrive = _T('a'); tcDrive < _T('z'); ++tcDrive )
{
tszLinkName[0] = tcDrive;
if ( QueryDosDevice( tszLinkName, tszDevName, MAX_PATH ) )
{
_hardDiskCollection.insert( pair<tstring, tstring>( tszLinkName, tszDevName ) );
}
}
}
A: Maybe you could use GetVolumeNameForVolumeMountPoint and iterate through all mount points A:\ through Z:\, breaking when you find a match?
http://msdn.microsoft.com/en-us/library/aa364994(VS.85).aspx
(I haven't tried this)
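A rough, untested sketch of that idea (targetVolumeName would come from elsewhere, e.g. FindFirstVolume):
#include <windows.h>
#include <string>

std::wstring FindDriveForVolume(const std::wstring& targetVolumeName)
{
    wchar_t volumeName[MAX_PATH];
    std::wstring mountPoint = L"A:\\";
    for (wchar_t drive = L'A'; drive <= L'Z'; ++drive)
    {
        mountPoint[0] = drive;
        if (GetVolumeNameForVolumeMountPointW(mountPoint.c_str(), volumeName, MAX_PATH)
            && targetVolumeName == volumeName)
        {
            return mountPoint; // e.g. L"G:\\"
        }
    }
    return L""; // no match
}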
A: The following function does the job using C only
BOOL GetWin32FileName(const TCHAR* pszNativeFileName, TCHAR *pszWin32FileName)
{
BOOL bFound = FALSE;
// Translate path with device name to drive letters.
TCHAR szTemp[MAX_PATH];
szTemp[0] = '\0';
if (GetLogicalDriveStrings(MAX_PATH - 1, szTemp))
{
TCHAR szName[MAX_PATH];
TCHAR szDrive[3] = TEXT(" :");
TCHAR* p = szTemp;
do
{
// Copy the drive letter to the template string
*szDrive = *p;
// Look up each device name
if (QueryDosDevice(szDrive, szName, MAX_PATH))
{
size_t uNameLen = _tcslen(szName);
if (uNameLen < MAX_PATH)
{
bFound = _tcsnicmp(pszNativeFileName, szName, uNameLen) == 0
&& *(pszNativeFileName + uNameLen) == _T('\\');
if (bFound)
{
// Replace device path with DOS path
StringCchPrintf(pszWin32FileName,
MAX_PATH,
TEXT("%s%s"),
szDrive,
pszNativeFileName + uNameLen);
}
}
}
// Go to the next NULL character.
while (*p++);
} while (!bFound && *p);
}
return(bFound);
}
A: You can look up all volumes' names to match a device name and get the drive letter. Here is a sample:
int DeviceNameToVolumePathName(WCHAR *filepath) {
WCHAR fileDevName[MAX_PATH];
WCHAR devName[MAX_PATH];
WCHAR fileName[MAX_PATH];
HANDLE FindHandle = INVALID_HANDLE_VALUE;
WCHAR VolumeName[MAX_PATH];
DWORD Error = ERROR_SUCCESS;
size_t Index = 0;
DWORD CharCount = MAX_PATH + 1;
int index = 0;
// \Device\HarddiskVolume1\windows,locate \windows.
for (int i = 0; i < lstrlenW(filepath); i++) {
if (!memcmp(&filepath[i], L"\\", 2)) {
index++;
if (index == 3) {
index = i;
break;
}
}
}
filepath[index] = L'\0';
memcpy(fileDevName, filepath, (index + 1) * sizeof(WCHAR));
FindHandle = FindFirstVolumeW(VolumeName, ARRAYSIZE(VolumeName));
if (FindHandle == INVALID_HANDLE_VALUE)
{
Error = GetLastError();
wprintf(L"FindFirstVolumeW failed with error code %d\n", Error);
return FALSE;
}
for (;;)
{
// Skip the \\?\ prefix and remove the trailing backslash.
Index = wcslen(VolumeName) - 1;
if (VolumeName[0] != L'\\' ||
VolumeName[1] != L'\\' ||
VolumeName[2] != L'?' ||
VolumeName[3] != L'\\' ||
VolumeName[Index] != L'\\')
{
Error = ERROR_BAD_PATHNAME;
wprintf(L"FindFirstVolumeW/FindNextVolumeW returned a bad path: %s\n", VolumeName);
break;
}
VolumeName[Index] = L'\0';
CharCount = QueryDosDeviceW(&VolumeName[4], devName, 100);
if (CharCount == 0)
{
Error = GetLastError();
wprintf(L"QueryDosDeviceW failed with error code %d\n", Error);
break;
}
if (!lstrcmpW(devName, filepath)) {
VolumeName[Index] = L'\\';
Error = GetVolumePathNamesForVolumeNameW(VolumeName, fileName, CharCount, &CharCount);
if (!Error) {
Error = GetLastError();
wprintf(L"GetVolumePathNamesForVolumeNameW failed with error code %d\n", Error);
break;
}
// concat drive letter to path
lstrcatW(fileName, &filepath[index + 1]);
lstrcpyW(filepath, fileName);
Error = ERROR_SUCCESS;
break;
}
Error = FindNextVolumeW(FindHandle, VolumeName, ARRAYSIZE(VolumeName));
if (!Error)
{
Error = GetLastError();
if (Error != ERROR_NO_MORE_FILES)
{
wprintf(L"FindNextVolumeW failed with error code %d\n", Error);
break;
}
//
// Finished iterating
// through all the volumes.
Error = ERROR_BAD_PATHNAME;
break;
}
}
FindVolumeClose(FindHandle);
if (Error != ERROR_SUCCESS)
return FALSE;
return TRUE;
}
If you want to resolve it in a driver, you can check this link for reference.
A: Here is a refactored version of the solution.
I replaced TCHAR with wchar_t because AFAIK it's not a good idea to use it in most projects.
std::map<std::wstring, std::wstring> GetDosPathDevicePathMap()
{
// It's not really related to MAX_PATH, but I guess it should be enough.
// Though the docs say "The first null-terminated string stored into the buffer is the current mapping for the device.
// The other null-terminated strings represent undeleted prior mappings for the device."
wchar_t devicePath[MAX_PATH] = { 0 };
std::map<std::wstring, std::wstring> result;
std::wstring dosPath = L"A:";
for (wchar_t letter = L'A'; letter <= L'Z'; ++letter)
{
dosPath[0] = letter;
if (QueryDosDeviceW(dosPath.c_str(), devicePath, MAX_PATH)) // may want to properly handle errors instead ... e.g. check ERROR_INSUFFICIENT_BUFFER
{
result[dosPath] = devicePath;
}
}
return result;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: Sample Code for R? Does anyone know a good online resource for example of R code?
The programs do not have to be written for illustrative purposes; I am really just looking for some places where a bunch of R code has been written, to give me a sense of the syntax and capabilities of the language.
Edit: I have read the basic documentation on the main site, but was wondering if there was some code samples or even programs that show how R is used by different people.
A: The simplest way of seeing code, is to
*
*install R
*type "help.start()" or look at online documentation, to get names of functions
*type the function name at the prompt
This will print the source code right at the prompt, and illustrate all manner of odd and interesting syntax corners.
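For example, a quick session:
help.start()  # opens the HTML documentation in your browser
lm            # typing a function name without () prints its source
example(mean) # runs the examples from a function's help page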
A: The Learning R blog has a lot of good examples. Lately, the author has been doing a visualization series, comparing Lattice and ggplot2.
A: It is hard to google R, because the name is too short. Try http://rseek.org/, which provides an R-customized Google search instead. Search for examples, code in repositories, etc.
A: Some simple examples can be found at Mathesaurus - if you know e.g. Python or Matlab, look at the respective comparison charts to find the R idioms that correspond to your familiar idioms in the other language.
A: I use the R Graph Gallery. It has been a lot of help on graphing itself. Lots of good examples.
#R on Freenode has also been very useful.
A: http://had.co.nz/ggplot2/ has a lot of graphics with example code. And you only need one package to create almost every graph you need.
A: There is also the R Wiki which is slowly growing.
A: As you probably know, R and S are pretty similar (apart from the cost!).
I used to use both, and I highly recommend S Poetry.
I can also highly recommend the M.J. Crawley book, and the shorter Venables & Ripley one.
A: Here are links to the R project group on LinkedIn. I put together this list of links and a lot of people have found it useful (some have also made very useful additions).
A: Use Google Code Search with command "lang:r" and your keyword(s)
A: Why not look at www.r-project.org under documentation and read at least the introduction? The language is sufficiently different from what you're used to that just looking at code samples won't be enough for you to pick it up. (At least, not beyond basic calculator-like functionality.)
If you want to look a bit deeper, you might want to look at CRAN: an online collection of R modules with source code: cran.r-project.org
A: I just found this question and thought I would add a few resources to it. I really like the Quick-R site:
http://www.statmethods.net/
Muenchen has written a book about using R if you come from SAS or SPSS. Originally it was an 80 page online doc that Springer encouraged him to make a 400+ page book out of. The original short form as well as the book are here:
http://rforsasandspssusers.com/
You've probably already seen these, but worth listing:
http://cran.r-project.org/doc/manuals/R-intro.pdf
http://cran.r-project.org/doc/contrib/Owen-TheRGuide.pdf
http://cran.r-project.org/doc/contrib/Kuhnert+Venables-R_Course_Notes.zip
I don't want to sound like a trite RTFM guy, but the help files generally have great short snips of working code as examples. I'm no R pro so I end up having to deconstruct the examples to understand them. That process, while tedious, is really useful.
Good luck!
EDIT: well I hesitated to be self linking (it feels a bit masturbatory) but here's my own list of R resources with descriptions and comments on each: http://www.cerebralmastication.com/?page_id=62
A: The Rosetta Code project shows R compared to other languages.
A: How about CRAN? You've got over a thousand packages of code to choose from.
A: Steve McIntyre at http://www.climateaudit.org/ is a big fan of R and often posts working code.
There is a scripts category, and the Statistics and R lists some other resources
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127137",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
}
|
Q: Is it possible to create INTERNATIONAL permalinks? I was wondering how you deal with permalinks on international sites. By permalink I mean a link which is unique and human readable.
E.g. for English phrases it's no problem, e.g. /product/some-title/
But what do you do if the product title is in, e.g., the Chinese language?
How do you deal with this problem?
I am implementing an international site and one requirement is to have human-readable URLs.
Thanks for every comment.
A: Characters outside the ISO Latin-1 set are not permitted in URLs according to this spec, so Chinese strings would be out immediately.
Where the product name can be localised, you can use urls like <DOMAIN>/<LANGUAGE>/DIR/<PRODUCT_TRANSLATED>, e.g.:
http://www.example.com/en/products/cat/
http://www.example.com/fr/products/chat/
accompanied by a mod_rewrite rule to the effect of:
RewriteRule ^([a-z]+)/products/([a-z]+) product_lookup.php?lang=$1&product=$2
For the first example above, this rule will call product_lookup.php?lang=en&product=cat. Inside this script is where you would access the internal translation engine (from the lang parameter, en in this case) to do the same translation you do on the user-facing side to translate, say, "Chat" on the French page, "Cat" on the English, etc.
Using an external translation API would be a good idea, but tricky to get a reliable one which works correctly in your business domain. Google have opened up a translation API, but it currently only supports a limited number of languages.
*
*English <=> Arabic
*English <=> Chinese
*English <=> Russian
A: I usually transliterate the non-ascii characters. For example "täst" would become "taest". GNU iconv can do this for you (I'm sure there are other libraries):
$ echo täst | iconv -t 'ascii//translit'
taest
Alas, these transliterations are locale dependent: in languages other than German, 'ä' could be transliterated as simply 'a', for example. But on the other hand, there should be a transliteration for every (commonly used) character set into ASCII.
A: Take a look at Wikipedia.
They use national characters in URLs.
For example, Russian home page URL is: http://ru.wikipedia.org/wiki/Заглавная_страница. The browser transparently encodes all non-ASCII characters and replaces them by their codes when sending URL to the server.
But on the web page all URLs are human-readable.
So you don't need to do anything special -- just put your product names into URLs as is.
The webserver should be able to decode them for your application automatically.
A: How about some scheme like /productid/{product-id-number}/some-title/
where the site looks at the {number} and ignores the 'some-title' part entirely. You can put that into whatever language or encoding you like, because it's not being used.
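For illustration, a minimal C# sketch of that scheme (the route name and helper are hypothetical): only the numeric segment is parsed, so the trailing title can be in any language or script without affecting the lookup.
// Hypothetical parser for URLs of the form /productid/{number}/any-title/
static int? ExtractProductId(string path)
{
    string[] segments = path.Trim('/').Split('/');
    int id;
    if (segments.Length >= 2 && segments[0] == "productid" && int.TryParse(segments[1], out id))
    {
        return id; // the title segment is never inspected
    }
    return null;
}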
A: If memory serves, you're only able to use English letters in URLs. There's a discussion to change that, but I'm fairly positive that it's not been implemented yet.
that said, you'd need to have a look up table where you assign translations of products/titles into whatever word that they'll be in the other language. For example:
foo.com/cat will need a translation look up for "cat" "gato" "neko" etc.
Then your HTTP module, which parses those human-readable URLs into an exact internal URL, will know which page to serve based upon the translations.
A: Creating a lookup for such a thing seems like overkill to me. I cannot create a lookup for all the different words in all languages. Maybe accessing a translation API would be a good idea.
So as far as I can see it's not possible to use foreign chars in the permalink, as the specs of the URL do not allow it.
What do you think of encoding the special chars? Are those URLs recognized by Google then?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: QTP_errObject CaptureBitmap May I request you to clarify an issue I have with the QTP Err object.
I am trying to capture a screen shot of the object on which the error occurred.
I use the code object.captureBitmap(filename) to achieve this.
I would like to know if it is possible to get a screen shot of the entire page with the failing object highlighted.
A: You can get this in your results file. Go to Tools->Options and select the Run tab. Check the box "Save still image captures to results" and select either always or for errors. When you run your test it will show the full screen and highlight the object it had a problem with, if it could find it.
A: (if I'm not too late with my reply)
Use CaptureBitmap method for both problem object and parent of the object.
Then you can display it in a variety of ways using simple html page automatically generated.
Albert
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to determine differences in two lists of data This is an exercise for the CS guys to shine with the theory.
Imagine you have 2 containers with elements. Folders, URLs, Files, Strings, it really doesn't matter.
What is AN algorithm to calculate the added and the removed?
Notice: If there are many ways to solve this problem, please post one per answer so it can be analysed and voted up.
Edit: All the answers solve the matter with 4 containers. Is it possible to use only the initial 2?
A: Assuming you have two lists of unique items, and the ordering doesn't matter, you can think of them both as sets rather than lists
If you think of a venn diagram, with list A as one circle and list B as the other, then the intersection of these two is the constant pool.
Remove all the elements in this intersection from both A and B, and and anything left in A has been deleted, whilst anything left in B has been added.
So, iterate through A, looking for each of its items in B. If you find an item, remove it from both A and B.
Then A is a list of things that were deleted, and B is a list of things that were added.
I think...
[edit] Ok, with the new "only 2 container" restriction, the same still holds:
foreach( A ) {
if( eleA NOT IN B ) {
DELETED
}
}
foreach( B ) {
if( eleB NOT IN A ) {
ADDED
}
}
Then you aren't constructing a new list, or destroying your old ones...but it will take longer as with the previous example, you could just loop over the shorter list and remove the elements from the longer. Here you need to do both lists
And I'd argue my first solution didn't use 4 containers, it just destroyed two ;-)
A: I have not done this in a while but I believe the algorithm goes like this...
sort left-list and right-list
adds = {}
deletes = {}
get first right-item from right-list
get first left-item from left-list
while (either list has items)
if left-item < right-item or right-list is empty
add left-item to deletes
get new left-item from left-list
else if left-item > right-item or left-list is empty
add right-item to adds
get new right-item from right-list
else
get new right-item from right-list
get new left-item from left-list
In regards to right-list's relation to left-list, deletes contains items removed and adds now contains new items.
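A runnable C# rendering of that merge walk might look like this (a sketch; it assumes both lists are already sorted, with "left" playing the role of the old list and "right" the new one):
using System.Collections.Generic;

static void MergeDiff(IList<string> left, IList<string> right, out List<string> adds, out List<string> deletes)
{
    adds = new List<string>();
    deletes = new List<string>();
    int i = 0, j = 0;
    while (i < left.Count || j < right.Count)
    {
        if (j == right.Count || (i < left.Count && left[i].CompareTo(right[j]) < 0))
        {
            deletes.Add(left[i++]); // only in the old list -> removed
        }
        else if (i == left.Count || left[i].CompareTo(right[j]) > 0)
        {
            adds.Add(right[j++]);   // only in the new list -> added
        }
        else
        {
            i++; j++;               // present in both -> unchanged
        }
    }
}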
A: What Joe said. And, if the lists are too large to fit into memory, use an external file sorting utility or a Merge sort.
A: Missing information: How do you define added/removed? E.g. if the lists (A and B) show the same directory on Server A and Server B, that is in sync. If I now wait for 10 days, generate the lists again and compare them, how can I tell if something has been removed? I cannot. I can only tell there are files on Server A not found on Server B and/or the other way round. Whether that is because a file has been added to Server A (thus the file is not found on B) or a file has been deleted on Server B (thus the file is not found on B anymore) is something I cannot determine by just having a list of file names.
For the solution I suggest, I will just assume that you have one list named OLD and one list named NEW. Everything found on OLD but not on NEW has been removed. Everything found on NEW, but not on OLD has been added (e.g. the content of the same directory on the same server, however lists have been created at different dates).
Further I will assume there are no duplicates. That means every item on either list is unique in the sense of: If I compare this item to any other item on the list (no matter how this compare works), I can always say the item is either smaller or bigger than the one I'm comparing it to, but never equal. E.g. when dealing with strings, I can compare them lexicographically and the same string is never twice in the list.
In that case the simplest (not necessarily best solution, though) is:
*
*Sort the OLD lists. E.g. if the list consists of strings, sort them alphabetically. Sorting is necessary, because it means I can use binary search to quickly find an object in the list, assuming it does exist there (or to quickly determine, it does not exist in the list at all). If the list is unsorted, finding the object has a complexity of O(n) (I need to look at every single item on the list). If the list is sorted, complexity is only O(log n), as after every try to match an item on the list I can always exclude 50% of the items on the list not being a match. Even if the list has 100 items, finding an item (or detecting that the item is not on the list) takes at most 7 tests (or is it 8? Anyway, far less than 100). The NEW list doesn't have to be sorted.
*Now we perform list elimination. For every item on the NEW list, try to find this item on the OLD list (using binary search). If the item is found, remove this item from the OLD list and also remove it from the NEW list. This also means the lists get smaller the further the elimination progresses and thus the lookups will become faster and faster. Since removing an item from a list has no effect on the correct sort order of the lists, there is no need to ever resort the OLD list during the elimination phase.
*At the end of elimination, both lists might be empty, in which case they were equal. If they are not empty, all items still on the OLD list are items missing on the NEW list (otherwise we had removed them), hence these are the removed items. All items still on the NEW list are items that were not on the OLD list (again, we had removed them otherwise), hence these are the added items.
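As a sketch, the elimination phase maps naturally onto List<T>.Sort and List<T>.BinarySearch in C# (names are placeholders; instead of removing matches from NEW, this version collects the non-matches, which is equivalent):
using System.Collections.Generic;

static void Diff(List<string> oldList, List<string> newList, out List<string> removed, out List<string> added)
{
    oldList.Sort(); // step 1: sort OLD so each lookup is O(log n)
    added = new List<string>();
    foreach (string item in newList)
    {
        int pos = oldList.BinarySearch(item);
        if (pos >= 0)
            oldList.RemoveAt(pos); // step 2: eliminate matches from OLD
        else
            added.Add(item);       // not found in OLD -> added
    }
    removed = oldList;             // step 3: whatever survives in OLD was removed
}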
A: Are the objects in the list "unique"? In this case I would first build two maps (hashmaps) and then scan the lists and lookup every object in the maps.
map1
map2
removedElements
addedElements
list1.each |item|
{
map1.add(item)
}
list2.each |item|
{
map2.add(item)
}
list1.each |item|
{
removedElements.add(item) unless map2.contains?(item)
}
list2.each |item|
{
addedElements.add(item) unless map1.contains?(item)
}
Sorry for the horrible meta-language mixing Ruby and Java :-P
In the end removedElements will contain the elements belonging to list1, but not to list2, and addedElements will contain the elements belonging to list2.
The cost of the whole operation is O(4*N), since the lookup in the map/dictionary may be considered constant. On the other hand, linearly searching for each element in the lists would make it O(N^2) (binary searching, after a sort, would still cost O(N log N)).
EDIT: on a second thought moving the last check into the second loop you may remove one of the loops... but that's ugly... :)
list1.each |item|
{
map1.add(item)
}
list2.each |item|
{
map2.add(item)
addedElements.add(item) unless map1.contains?(item)
}
list1.each |item|
{
removedElements.add(item) unless map2.contains?(item)
}
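For comparison, the same map-based idea is only a few lines of idiomatic C# with HashSet<T> (available since .NET 3.5; a sketch assuming the elements are unique):
using System.Collections.Generic;

static void HashDiff(IEnumerable<string> list1, IEnumerable<string> list2, out List<string> removedElements, out List<string> addedElements)
{
    HashSet<string> set1 = new HashSet<string>(list1);
    HashSet<string> set2 = new HashSet<string>(list2);
    removedElements = new List<string>();
    foreach (string item in set1)
        if (!set2.Contains(item)) removedElements.Add(item); // in list1, not in list2
    addedElements = new List<string>();
    foreach (string item in set2)
        if (!set1.Contains(item)) addedElements.Add(item);   // in list2, not in list1
}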
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How to change slow parametrized inserts into fast bulk copy (even from memory) I had someting like this in my code (.Net 2.0, MS SQL)
SqlConnection connection = new SqlConnection(@"Data Source=localhost;Initial Catalog=DataBase;Integrated Security=True");
connection.Open();
SqlCommand cmdInsert = connection.CreateCommand();
SqlTransaction sqlTran = connection.BeginTransaction();
cmdInsert.Transaction = sqlTran;
cmdInsert.CommandText =
@"INSERT INTO MyDestinationTable" +
"(Year, Month, Day, Hour, ...) " +
"VALUES " +
"(@Year, @Month, @Day, @Hour, ...) ";
cmdInsert.Parameters.Add("@Year", SqlDbType.SmallInt);
cmdInsert.Parameters.Add("@Month", SqlDbType.TinyInt);
cmdInsert.Parameters.Add("@Day", SqlDbType.TinyInt);
// more fields here
cmdInsert.Prepare();
Stream stream = new FileStream(fileName, FileMode.Open, FileAccess.Read);
StreamReader reader = new StreamReader(stream);
char[] delimeter = new char[] {' '};
String[] records;
while (!reader.EndOfStream)
{
records = reader.ReadLine().Split(delimeter, StringSplitOptions.None);
cmdInsert.Parameters["@Year"].Value = Int32.Parse(records[0].Substring(0, 4));
cmdInsert.Parameters["@Month"].Value = Int32.Parse(records[0].Substring(5, 2));
cmdInsert.Parameters["@Day"].Value = Int32.Parse(records[0].Substring(8, 2));
// more here complicated stuff here
cmdInsert.ExecuteNonQuery();
}
sqlTran.Commit();
connection.Close();
With cmdInsert.ExecuteNonQuery() commented out, this code executes in less than 2 sec. With the SQL execution it takes 1 min 20 sec. There are around 0.5 million records. The table is emptied beforehand. An SSIS data flow task of similar functionality takes around 20 sec.
*
*Bulk Insert was not an option (see below). I did some fancy stuff during this import.
*My test machine is Core 2 Duo with 2 GB RAM.
*When looking in Task Manager, the CPU was not fully utilized. IO also seemed not to be fully utilized.
*Schema is simple like hell: one table with AutoInt as primary index and less than 10 ints, tiny ints and chars(10).
After some answers here I found that it is possible to execute bulk copy from memory! I was refusing to use bulk copy because I thought it had to be done from a file...
Now I use this and it takes around 20 sec (like the SSIS task):
DataTable dataTable = new DataTable();
dataTable.Columns.Add(new DataColumn("ixMyIndex", System.Type.GetType("System.Int32")));
dataTable.Columns.Add(new DataColumn("Year", System.Type.GetType("System.Int32")));
dataTable.Columns.Add(new DataColumn("Month", System.Type.GetType("System.Int32")));
dataTable.Columns.Add(new DataColumn("Day", System.Type.GetType("System.Int32")));
// ... and more to go
DataRow dataRow;
object[] objectRow = new object[dataTable.Columns.Count];
Stream stream = new FileStream(fileName, FileMode.Open, FileAccess.Read);
StreamReader reader = new StreamReader(stream);
char[] delimeter = new char[] { ' ' };
String[] records;
int recordCount = 0;
while (!reader.EndOfStream)
{
records = reader.ReadLine().Split(delimeter, StringSplitOptions.None);
dataRow = dataTable.NewRow();
objectRow[0] = null;
objectRow[1] = Int32.Parse(records[0].Substring(0, 4));
objectRow[2] = Int32.Parse(records[0].Substring(5, 2));
objectRow[3] = Int32.Parse(records[0].Substring(8, 2));
// my fancy stuf goes here
dataRow.ItemArray = objectRow;
dataTable.Rows.Add(dataRow);
recordCount++;
}
SqlBulkCopy bulkTask = new SqlBulkCopy(connection, SqlBulkCopyOptions.TableLock, null);
bulkTask.DestinationTableName = "MyDestinationTable";
bulkTask.BatchSize = dataTable.Rows.Count;
bulkTask.WriteToServer(dataTable);
bulkTask.Close();
A: Instead of inserting each record individually, Try using the SqlBulkCopy class to bulk insert all the records at once.
Create a DataTable and add all your records to the DataTable, and then use SqlBulkCopy.WriteToServer to bulk insert all the data at once.
A: Is the transaction required? Using a transaction needs many more resources than simple commands.
Also, if you are sure that the inserted values are correct, you can use BULK INSERT.
A: 1 minute sounds pretty reasonable for 0.5 million records. That's a record every 0.00012 seconds.
Does the table have any indexes? Removing these and reapplying them after the bulk insert would improve performance of the inserts, if that is an option.
A: It doesn't seem unreasonable to me to process 8,333 records per second...what kind of throughput are you expecting?
A: If you need better speed, you might consider implementing bulk insert:
http://msdn.microsoft.com/en-us/library/ms188365.aspx
A: If some form of bulk insert isn't an option, the other way would be multiple threads, each with their own connection to the database.
The issue with the current system is that you have 500,000 round trips to the database, and are waiting for the first round trip to complete before starting the next - any sort of latency (ie, a network between the machines) will mean that most of your time is spent waiting.
If you can split the job up, perhaps using some form of producer/consumer setup, you might find that you can get much more utilisation of all the resources.
However, to do this you will have to lose the one great transaction - otherwise the first writer thread will block all the others until its transaction is completed. You can still use transactions, but you'll have to use a lot of small ones rather than 1 large one.
The SSIS will be fast because it's using the bulk-insert method - do all the complicated processing first, generate the final list of data to insert and give it all at the same time to bulk-insert.
A: I assume that what is taking the approximately 58 seconds is the physical inserting of 500,000 records - so you are getting around 10,000 inserts a second. Without knowing the specs of your database server machine (I see you are using localhost, so network delays shouldn't be an issue), it is hard to say if this is good, bad, or abysmal.
I would look at your database schema - are there a bunch of indices on the table that have to be updated after each insert? This could be from other tables with foreign keys referencing the table you are working on. There are SQL profiling tools and performance monitoring facilities built into SQL Server, but I've never used them. But they may show up problems like locks, and things like that.
A: Do the fancy stuff on the data, on all records, first. Then bulk-insert them.
(Since you're not doing selects after an insert, I don't see a problem with applying all operations on the data before the bulk insert.)
A: If I had to guess, the first thing I would look for are too many or the wrong kind of indexes on the tbTrafficLogTTL table. Without looking at the schema definition for the table, I can't really say, but I have experienced similar performance problems when:
*
*The primary key is a GUID and the primary index is CLUSTERED.
*There's some sort of UNIQUE index on a set of fields.
*There are too many indexes on the table.
When you start indexing half a million rows of data, the time spent to create and maintain indexes adds up.
I will also note that if you have any option to convert the Year, Month, Day, Hour, Minute, Second fields into a single datetime2 or timestamp field, you should. You're adding a lot of complexity to your data architecture, for no gain. The only reason I would even contemplate using a split-field structure like that is if you're dealing with a pre-existing database schema that cannot be changed for any reason. In which case, it sucks to be you.
A: I had a similar problem in my last contract. You're making 500,000 trips to SQL to insert your data. For a dramatic increase in performance, you want to investigate the BulkInsert method in the SQL namespace. I had "reload" processes that went from 2+ hours to restore a couple of dozen tables down to 31 seconds once I implemented Bulk Import.
A: This could best be accomplished using something like the bcp command. If that isn't available, the suggestions above about using BULK INSERT are your best bet. You're making 500,000 round trips to the database and writing 500,000 entries to the log files, not to mention any space that needs to be allocated to the log file, the table, and the indexes.
If you're inserting in an order that is different from your clustered index, you also have to deal with the time require to reorganize the physical data on disk. There are a lot of variables here that could possibly be making your query run slower than you would like it to.
~10,000 transactions per second isn't terrible for individual inserts round-tripping from code.
A: BULK INSERT = bcp, from a permissions point of view.
You could batch the INSERTs to reduce roundtrips
SqlDataAdapter.UpdateBatchSize = 10000 gives 50 round trips.
You still have 500k inserts though...
Article
MSDN
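A minimal sketch of that batched-adapter approach, reusing the connection and dataTable from the question (column list shortened; UpdatedRowSource must be set to None or batching is disabled):
using System.Data;
using System.Data.SqlClient;

SqlCommand insert = new SqlCommand(
    "INSERT INTO MyDestinationTable (Year, Month, Day) VALUES (@Year, @Month, @Day)", connection);
insert.Parameters.Add("@Year", SqlDbType.SmallInt, 0, "Year");
insert.Parameters.Add("@Month", SqlDbType.TinyInt, 0, "Month");
insert.Parameters.Add("@Day", SqlDbType.TinyInt, 0, "Day");
insert.UpdatedRowSource = UpdateRowSource.None; // required for batching

SqlDataAdapter adapter = new SqlDataAdapter();
adapter.InsertCommand = insert;
adapter.UpdateBatchSize = 10000; // roughly 50 round trips for 500k rows

adapter.Update(dataTable); // sends the pending inserts in batches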
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How to check if an index exists on a table field in MySQL How do I check if an index exists on a table field in MySQL?
I've needed to Google this multiple times, so I'm sharing my Q/A.
A: SHOW KEYS FROM tablename WHERE Key_name='unique key name'
will show if a unique key exists in the table.
A: Use the following statement:
SHOW INDEX FROM *your_table*
And then check the result for the fields: row["Table"], row["Key_name"]
Make sure you write "Key_name" correctly
A: Try:
SELECT * FROM information_schema.statistics
WHERE table_schema = [DATABASE NAME]
AND table_name = [TABLE NAME] AND column_name = [COLUMN NAME]
It will tell you if there is an index of any kind on a certain column without the need to know the name given to the index. It will also work in a stored procedure (as opposed to show index)
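If you want to run that check from C#, here is a hedged sketch using MySQL Connector/NET (assuming the MySql.Data package is available; names are placeholders):
using System;
using MySql.Data.MySqlClient;

static bool IndexExists(string connStr, string db, string table, string column)
{
    const string sql =
        "SELECT COUNT(*) FROM information_schema.statistics " +
        "WHERE table_schema = @db AND table_name = @tbl AND column_name = @col";
    using (MySqlConnection conn = new MySqlConnection(connStr))
    using (MySqlCommand cmd = new MySqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@db", db);
        cmd.Parameters.AddWithValue("@tbl", table);
        cmd.Parameters.AddWithValue("@col", column);
        conn.Open();
        return Convert.ToInt64(cmd.ExecuteScalar()) > 0; // any index covering the column counts
    }
}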
A: To look at a table's layout from the CLI, you would use
desc mytable
or
show create table mytable
A: Use SHOW INDEX like so:
SHOW INDEX FROM [tablename]
Docs: https://dev.mysql.com/doc/refman/5.0/en/show-index.html
A: show index from table_name where Column_name='column_name';
A: Try to use this:
SELECT TRUE
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
WHERE TABLE_SCHEMA = "{DB_NAME}"
AND TABLE_NAME = "{DB_TABLE}"
AND COLUMN_NAME = "{DB_INDEXED_FIELD}";
A: Adding to what GK10 suggested:
Use the following statement: SHOW INDEX FROM your_table
And then check the result for the fields: row["Table"],
row["Key_name"]
Make sure you write "Key_name" correctly
One can take that and work it into PHP (or other language) wrapped around an sql statement to find the index columns. Basically you can pull in the result of SHOW INDEX FROM 'mytable' into PHP and then use the column 'Column_name' to get the index column.
Make your database connection string and do something like this:
$mysqli = mysqli_connect("localhost", "my_user", "my_password", "world");
$sql = "SHOW INDEX FROM 'mydatabase.mytable' WHERE Key_name = 'PRIMARY';" ;
$result = mysqli_query($mysqli, $sql);
while ($row = $result->fetch_assoc()) {
echo $rowVerbatimsSet["Column_name"];
}
A: You can use the following SQL to check whether the given column on table was indexed or not:
select a.table_schema, a.table_name, a.column_name, index_name
from information_schema.columns a
join information_schema.tables b on a.table_schema = b.table_schema and
a.table_name = b.table_name and
b.table_type = 'BASE TABLE'
left join (
select concat(x.name, '/', y.name) full_path_schema, y.name index_name
FROM information_schema.INNODB_SYS_TABLES as x
JOIN information_schema.INNODB_SYS_INDEXES as y on x.TABLE_ID = y.TABLE_ID
WHERE x.name = 'your_schema'
and y.name = 'your_column') d on concat(a.table_schema, '/', a.table_name, '/', a.column_name) = d.full_path_schema
where a.table_schema = 'your_schema'
and a.column_name = 'your_column'
order by a.table_schema, a.table_name;
Since the joins are against INNODB_SYS_*, the match indexes only came from the INNODB tables only.
A: If you need to check whether an index for a column exists as a database function, you can use/adapt this code.
If you want to check if an index exists at all regardless of the position in a multi-column-index, then just delete the part AND SEQ_IN_INDEX = 1.
DELIMITER $$
CREATE FUNCTION `fct_check_if_index_for_column_exists_at_first_place`(
`IN_SCHEMA` VARCHAR(255),
`IN_TABLE` VARCHAR(255),
`IN_COLUMN` VARCHAR(255)
)
RETURNS tinyint(4)
LANGUAGE SQL
DETERMINISTIC
CONTAINS SQL
SQL SECURITY DEFINER
COMMENT 'Check if index exists at first place in sequence for a given column in a given table in a given schema. Returns -1 if schema does not exist. Returns -2 if table does not exist. Returns -3 if column does not exist. If index exists in first place it returns 1, otherwise 0.'
BEGIN
-- Check if index exists at first place in sequence for a given column in a given table in a given schema.
-- Returns -1 if schema does not exist.
-- Returns -2 if table does not exist.
-- Returns -3 if column does not exist.
-- If the index exists in first place it returns 1, otherwise 0.
-- Example call: SELECT fct_check_if_index_for_column_exists_at_first_place('schema_name', 'table_name', 'index_name');
-- check if schema exists
SELECT
COUNT(*) INTO @COUNT_EXISTS
FROM
INFORMATION_SCHEMA.SCHEMATA
WHERE
SCHEMA_NAME = IN_SCHEMA
;
IF @COUNT_EXISTS = 0 THEN
RETURN -1;
END IF;
-- check if table exists
SELECT
COUNT(*) INTO @COUNT_EXISTS
FROM
INFORMATION_SCHEMA.TABLES
WHERE
TABLE_SCHEMA = IN_SCHEMA
AND TABLE_NAME = IN_TABLE
;
IF @COUNT_EXISTS = 0 THEN
RETURN -2;
END IF;
-- check if column exists
SELECT
COUNT(*) INTO @COUNT_EXISTS
FROM
INFORMATION_SCHEMA.COLUMNS
WHERE
TABLE_SCHEMA = IN_SCHEMA
AND TABLE_NAME = IN_TABLE
AND COLUMN_NAME = IN_COLUMN
;
IF @COUNT_EXISTS = 0 THEN
RETURN -3;
END IF;
-- check if index exists at first place in sequence
SELECT
COUNT(*) INTO @COUNT_EXISTS
FROM
information_schema.statistics
WHERE
TABLE_SCHEMA = IN_SCHEMA
AND TABLE_NAME = IN_TABLE AND COLUMN_NAME = IN_COLUMN
AND SEQ_IN_INDEX = 1;
IF @COUNT_EXISTS > 0 THEN
RETURN 1;
ELSE
RETURN 0;
END IF;
END$$
DELIMITER ;
A: You can't run a specific show index query because it will throw an error if an index does not exist. Therefore, you have to grab all indexes into an array and loop through them if you want to avoid any SQL errors.
Here's how I do it. I grab all of the indexes from the table (in this case, leads) and then, in a foreach loop, check whether the column name (in this case, province) exists or not.
$this->name = 'province';
$stm = $this->db->prepare('show index from `leads`');
$stm->execute();
$res = $stm->fetchAll();
$index_exists = false;
foreach ($res as $r) {
if ($r['Column_name'] == $this->name) {
$index_exists = true;
}
}
This way you can really narrow down the index attributes. Do a print_r of $res in order to see what you can work with.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127156",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "123"
}
|
Q: Could you explain STA and MTA? Can you explain STA and MTA in your own words?
Also, what are apartment threads and do they pertain only to COM? If so, why?
A: I find the existing explanations too gobbledygook. Here's my explanation in plain English:
STA:
If a thread creates a COM object that's set to STA (when calling CoCreateXXX you can pass a flag that sets the COM object to STA mode), then only this thread can access this COM object (that's what STA means - Single Threaded Apartment); any other thread's attempt to call methods on this COM object is, under the hood, silently turned into delivering messages to the thread that created (owns) the COM object. This is very much like the fact that only the thread that created a UI control can access it directly. And this mechanism is meant to prevent complicated lock/unlock operations.
MTA:
If a thread creates a COM object that's set to MTA, then pretty much every thread can directly call methods on it.
That's pretty much the gist of it. Although technically there're some details I didn't mention, such as in the 'STA' paragraph, the creator thread must itself be STA. But this is pretty much all you have to know to understand STA/MTA/NA.
A: Code that calls COM object DLLs (for example, to read proprietary data files) may work fine in a user interface but hang mysteriously from a service. The reason is that as of .NET 2.0 user interfaces assume STA while services assume MTA (before that, services assumed STA). Having to create an STA thread for every COM call in a service can add significant overhead.
A: The COM threading model is called an "apartment" model, where the execution context of initialized COM objects is associated with either a single thread (Single Thread Apartment) or many threads (Multi Thread Apartment). In this model, a COM object, once initialized in an apartment, is part of that apartment for the duration of its runtime.
The STA model is used for COM objects that are not thread safe. That means they do not handle their own synchronization. A common use of this is a UI component. So if another thread needs to interact with the object (such as pushing a button in a form) then the message is marshalled onto the STA thread. The windows forms message pumping system is an example of this.
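As a concrete sketch of that marshalling pattern in Windows Forms (DoSlowWork is a hypothetical helper; the point is which thread runs which line):
using System.Threading;
using System.Windows.Forms;

void UpdateLabelFromWorker(Label label)
{
    new Thread(delegate()
    {
        string result = DoSlowWork();  // runs on the background thread
        label.Invoke(new MethodInvoker(delegate
        {
            label.Text = result;       // marshalled onto the UI (STA) thread
        }));
    }).Start();
}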
If the COM object can handle its own synchronization then the MTA model can be used where multiple threads are allowed to interact with the object without marshalled calls.
A: STA (Single Threaded Apartment) is basically the concept that only one thread will interact with your code at a time. Calls into your apartment are marshaled via windows messages (using a non-visible) window. This allows calls to be queued and wait for operations to complete.
MTA (Multi Threaded Apartment) is where many threads can all operate at the same time and the onus is on you as the developer to handle the thread security.
There is a lot more to learn about threading models in COM, but if you are having trouble understanding what they are then I would say that understanding what the STA is and how it works would be the best starting place because most COM objects are STA’s.
Apartment Threads, if a thread lives in the same apartment as the object it is using then it is an apartment thread. I think this is only a COM concept because it is only a way of talking about the objects and threads they interact with…
A: It's all down to how calls to objects are handled, and how much protection they need. COM objects can ask the runtime to protect them against being called by multiple threads at the same time; those that don't can potentially be called concurrently from different threads, so they have to protect their own data.
In addition, it's also necessary for the runtime to prevent a COM object call from blocking the user interface, if a call is made from a user interface thread.
An apartment is a place for objects to live, and they contain one or more threads. The apartment defines what happens when calls are made. Calls to objects in an apartment will be received and processed on any thread in that apartment, with the exception that a call by a thread already in the right apartment is processed by itself (i.e. a direct call to the object).
Threads can be either in a Single-Threaded Apartment (in which case they are the only thread in that apartment) or in a Multi-Threaded Apartment. They specify which when the thread initializes COM for that thread.
The STA is primarily for compatibility with the user interface, which is tied to a specific thread. An STA receives notifications of calls to process by receiving a window message to a hidden window; when it makes an outbound call, it starts a modal message loop to prevent other window messages being processed. You can specify a message filter to be called, so that your application can respond to other messages.
By contrast all MTA threads share a single MTA for the process. COM may start a new worker thread to handle an incoming call if no threads are available, up to a pool limit. Threads making outbound calls simply block.
For simplicity we'll consider only objects implemented in DLLs, which advertise in the registry what they support, by setting the ThreadingModel value for their class's key. There are four options:
*
*Main thread (ThreadingModel value not present). The object is created on the host's main UI thread, and all calls are marshalled to that thread. The class factory will only be called on that thread.
*Apartment. This indicates that the class can run on any single-threaded-mode thread. If the thread that creates it is an STA thread, the object will run on that thread, otherwise it will be created in the main STA - if no main STA exists, an STA thread will be created for it. (This means MTA threads that create Apartment objects will be marshalling all calls to a different thread.) The class factory can be called concurrently by multiple STA threads so it must protect its internal data against this.
*Free. This indicates a class designed to run in the MTA. It will always load in the MTA, even if created by an STA thread, which again means the STA thread's calls will be marshalled. This is because a Free object is generally written with the expectation that it can block.
*Both. These classes are flexible and load in whichever apartment they're created from. They must be written to fit both sets of requirements, however: they must protect their internal state against concurrent calls, in case they're loaded in the MTA, but must not block, in case they're loaded in an STA.
From the .NET Framework, basically just use [STAThread] on any thread that creates UI. Worker threads should use the MTA, unless they're going to use Apartment-marked COM components, in which case use the STA to avoid marshalling overhead and scalability problems if the same component is called from multiple threads (as each thread will have to wait for the component in turn). It's much easier all around if you use a separate COM object per thread, whether the component is in the STA or MTA.
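A minimal sketch of that advice (not tied to any particular COM component): the UI thread is marked [STAThread], and a worker thread that needs an Apartment-marked component joins an STA explicitly before it starts.
using System;
using System.Threading;

static class Program
{
    [STAThread] // the UI thread joins a single-threaded apartment
    static void Main()
    {
        Thread worker = new Thread(delegate()
        {
            // create and use any Apartment-marked COM object here, on this thread only
        });
        worker.SetApartmentState(ApartmentState.STA); // must be set before Start()
        worker.Start();
        worker.Join();
    }
}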
A: Each EXE which hosts COM or OLE controls defines its apartment state. The apartment state is by default STA (and for most programs should be STA).
STA - All OLE controls by necessity must live in a STA. STA means that your COM-object must be always manipulated on the UI thread and cannot be passed to other threads (much like any UI element in MFC). However, your program can still have many threads.
MTA - You can manipulate the COM object on any thread in your program.
A: This article explains STA & MTA very clearly.
Understanding COM Apartments, Part I
Understanding COM Apartments, Part II
Points about what Apartment is:
*
*An apartment is a concurrency boundary; it’s an imaginary box drawn around objects and client threads that separates COM clients and COM objects that have incompatible threading characteristics.
*Every thread that uses COM, and every object that those threads create, is assigned to an apartment.
*When a thread calls COM’s CoInitialize or CoInitializeEx function, that thread is placed in an apartment. And when an object is created, it too is placed in an apartment.
*Whenever it creates a new apartment, COM allocates an apartment object on the heap and initializes it with important information such as the apartment ID and apartment type. When it assigns a thread to an apartment, COM records the address of the corresponding apartment object in thread-local storage (TLS).
A: As I understand it, the 'apartment' concept is used to protect COM objects from multi-threading issues.
If a COM object is not thread-safe, it should be declared as an STA object. Then only the thread that creates it can access it. The creating thread should declare itself as an STA thread. Under the hood, the thread stores the STA information in its TLS (Thread Local Storage). We say that the thread enters an STA apartment. When other threads want to access this COM object, they should marshal the access to the creating thread. Basically, the creating thread uses a message mechanism to process the inbound calls.
If a COM object is thread-safe, it should be declared as an MTA object. An MTA object can be accessed by multiple threads.
A: Side note: If you are using some PowerShell 2.0 snap-ins, you need to launch PowerShell version 3 or greater with the -MTA option to use them; the PowerShell 2 apartment model is MTA, whereas later versions use STA as the default. Another point is bitness. Normal calls within an apartment are not marshalled (they are direct calls), so if your caller is x64 then the callee must also be x64. The only way around this is to use a remote procedure call (RPC), which adds a huge amount of overhead (spawning a new 32-bit process to load the snap-in DLL and querying the result by some means). For developers: always publish a type library - it makes your COM object's discovery and usage much easier! Every interface should be public and unique - the implementation can be proprietary or open source.
Another situation
Example:
IStorage_vtbl** reference; // you got it by some means of factory
public unsafe int OpenStorage(char* pwcsName, IStorage pstgPriority, uint grfMode, char** snbExclude, uint reserved, IStorage* ppstg)
{
IStorage_vtbl** @this = (IStorage_vtbl**)reference;
IStorage_vtbl* vtbl = *@this;
if (vtbl == null)
throw new InvalidComObjectException();
Delegate genericDelegate = Marshal.GetDelegateForFunctionPointer(vtbl->method_6, typeof(delegate_6));
delegate_6 method = (delegate_6)genericDelegate;
return method(@this, pwcsName, pstgPriority, grfMode, snbExclude, reserved, ppstg);
}
This fragment of code just adds the 'this' pointer of the instance for the real call into the COM subsystem.
So, is this call to open an instance of IStorage STA or MTA?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "409"
}
|
Q: Good Haskell coding style of if/else control block? I'm learning Haskell in the hope that it will help me get closer to functional programming. Previously, I've mostly used languages with C-like syntax, like C, Java, and D.
I have a little question about the coding style of an if/else control block used by the tutorial on Wikibooks. The code looks like the following:
doGuessing num = do
putStrLn "Enter your guess:"
guess <- getLine
if (read guess) < num
then do putStrLn "Too low!"
doGuessing num
else if (read guess) > num
then do putStrLn "Too high!"
doGuessing num
else do putStrLn "You Win!"
It makes me confused, because this coding style totally violates the recommended style in C-like languages, where we should indent if, else if, and else at the same column.
I know it just does not work in Haskell, because it would be a parse error if I indented else at the same column as if.
But what about the following style? I think it is much more clear than the above one. But since the above is used by Wikibooks and Yet Another Haskell Tutorial, which is marked "best tutorial available online" at the official Haskell website, I'm not sure whether this coding style is a convention in Haskell programs.
doGuessing num = do
putStrLn "Enter your guess:"
guess <- getLine
if (read guess) < num then
do
putStrLn "Too low!"
doGuessing num
else if (read guess) > num then do
putStrLn "Too high!"
doGuessing num
else do
putStrLn "You Win!"
So, I'm curious about which coding style is used more often—or is there another coding style for this piece of code?
A: You can use the "case"-construct:
doGuessing num = do
putStrLn "Enter your guess:"
guess <- getLine
case (read guess) of
g | g < num -> do
putStrLn "Too low!"
doGuessing num
g | g > num -> do
putStrLn "Too high!"
doGuessing num
otherwise -> do
putStrLn "You Win!"
A: A minor improvement to mattiast's case statement (I'd edit, but I lack the karma) is to use the compare function, which returns one of three values, LT, GT, or EQ:
doGuessing num = do
putStrLn "Enter your guess:"
guess <- getLine
case (read guess) `compare` num of
LT -> do putStrLn "Too low!"
doGuessing num
GT -> do putStrLn "Too high!"
doGuessing num
EQ -> putStrLn "You Win!"
I really like these Haskell questions, and I'd encourage others to post more. Often you feel like there's got to be a better way to express what you're thinking, but Haskell is initially so foreign that nothing will come to mind.
Bonus question for the Haskell journeyman: what's the type of doGuessing?
A: The way Haskell interprets if ... then ... else within a do block is very much in keeping with the whole of Haskell's syntax.
But many people prefer a slightly different syntax, permitting then and else to appear at the same indentation level as the corresponding if. Therefore, GHC comes with an opt-in language extension called DoAndIfThenElse, which permits this syntax.
The DoAndIfThenElse extension is made into part of the core language in the latest revision of the Haskell specification, Haskell 2010.
A: Note that the fact that you have to indent the 'then' and 'else' inside a 'do' block is considered a bug by many. It will probably be fixed in Haskell' (Haskell prime), the next version of the Haskell specification.
A: Haskell style is functional, not imperative! Rather than "do this then that," think about combining functions and describing what your program will do, not how.
In the game, your program asks the user for a guess. A correct guess is a winner. Otherwise, the user tries again. The game continues until the user guesses correctly, so we write that:
main = untilM (isCorrect 42) (read `liftM` getLine)
This uses a combinator that repeatedly runs an action (getLine pulls a line of input and read converts that string to an integer in this case) and checks its result:
untilM :: Monad m => (a -> m Bool) -> m a -> m ()
untilM p a = do
x <- a
done <- p x
if done
then return ()
else untilM p a
The predicate (partially applied in main) checks the guess against the correct value and responds accordingly:
isCorrect :: Int -> Int -> IO Bool
isCorrect num guess =
case compare num guess of
EQ -> putStrLn "You Win!" >> return True
LT -> putStrLn "Too high!" >> return False
GT -> putStrLn "Too low!" >> return False
The action to be run until the player guesses correctly is
read `liftM` getLine
Why not keep it simple and just compose the two functions?
*Main> :type read . getLine
<interactive>:1:7:
Couldn't match expected type `a -> String'
against inferred type `IO String'
In the second argument of `(.)', namely `getLine'
In the expression: read . getLine
The type of getLine is IO String, but read wants a pure String.
The function liftM from Control.Monad takes a pure function and “lifts” it into a monad. The type of the expression tells us a great deal about what it does:
*Main> :type read `liftM` getLine
read `liftM` getLine :: (Read a) => IO a
It's an I/O action that when run gives us back a value converted with read, an Int in our case. Recall that getLine is an I/O action that yields String values, so you can think of liftM as allowing us to apply read “inside” the IO monad.
Sample game:
1
Too low!
100
Too high!
42
You Win!
A: You can also use explicit grouping with curly braces. See the layout section of http://www.haskell.org/tutorial/patterns.html
I wouldn't recommend that though. I've never seen anyone use explicit grouping besides in a few special cases. I usually look at the Standard Prelude code for examples of style.
A: I use a coding style like your example from Wikibooks. Sure, it doesn't follow the C guidelines, but Haskell's not C, and it's fairly readable, especially once you get used to it. It's also patterned after the style of algorithms used in many textbooks, like Cormen.
A: You will see a bunch of different indentation styles for Haskell. Most of them are very hard to maintain without an editor that is set up to indent exactly in whatever style.
The style you display is much simpler and less demanding of the editor, and I think you should stick with it. The only inconsistency I can see is that you put the first do on its own line while you put the other dos after the then/else.
Heed the other advice about how to think about code in Haskell, but stick to your indentation style.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
}
|
Q: How do I get the full name of the current user from a SQL Reporting Services 2008 report? I know that the name of the user account can be retrieved from the built in variable User!UserID but how can I get the full user name of the user?
I guess it would be possible to hook up some .NET code and do a Active Directory look up but are there any alternatives?
A: In one of my projects I used a table in SQL Server to store the user data from Active Directory ("displayName", "sAMAccountName", "userPrincipalName"), using a C# application for data transfer. The table was updated nightly and also after any change in Active Directory which could impact my project users.
In my case (an InfoPath form hosted on SharePoint) I was using a web service to get the account name of the current user from SharePoint, in order to display the corresponding user's full name.
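For completeness, the Active Directory lookup itself is only a few lines with System.DirectoryServices (a sketch; the LDAP path is a placeholder, and it assumes User!UserID maps to sAMAccountName in your environment):
using System.DirectoryServices;

static string GetFullName(string accountName)
{
    using (DirectoryEntry root = new DirectoryEntry("LDAP://DC=example,DC=com")) // placeholder domain
    using (DirectorySearcher searcher = new DirectorySearcher(root))
    {
        searcher.Filter = "(&(objectClass=user)(sAMAccountName=" + accountName + "))";
        searcher.PropertiesToLoad.Add("displayName");
        SearchResult result = searcher.FindOne();
        if (result != null && result.Properties["displayName"].Count > 0)
            return (string)result.Properties["displayName"][0];
        return null;
    }
}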
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127199",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Using 'this' as a parameter to a method call in a constructor I have a constructor like as follows:
public Agent(){
this.name = "John";
this.id = 9;
this.setTopWorldAgent(this, "Top_World_Agent", true);
}
I'm getting a null pointer exception here in the method call. It appears to be because I'm using 'this' as an argument in the setTopWorldAgent method. By removing this method call everything appears fine. Why does this happen? Has anyone else experienced this?
A: You can pass this to methods, but setTopWorldAgent() cannot be abstract. You can't make a virtual call in the constructor.
In the constructor of an object, you can call methods defined in that object or base classes, but you cannot expect to call something that will be provided by a derived class, because parts of the derived class are not constructed yet. I would have expected some kind of compiler error if setTopWorldAgent() was abstract.
In Java, you can get surprising behavior with the contructor and derived classes -- here is an example
http://en.wikipedia.org/wiki/Virtual_functions#Java_3
If you are used to C++, you might think it's safe to call virtual functions from a constructor without invoking overridden ones. In Java (and in C# too), the virtual call is dispatched to the override even though the derived class is not fully constructed.
If this isn't what's happening, then presumably, all of the parts of this that setTopWorldAgent() needs are initialized -- if not, it's probably one of the members of this that needs to be initialized.
Edit: thought this was C#
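Since this answer started out assuming C#: C# behaves like Java here, not like C++. A small hypothetical illustration - the base constructor's virtual call runs the derived override before the derived constructor body has executed:
using System;

class Base
{
    public Base() { Describe(); }          // virtual call from a constructor
    protected virtual void Describe() { }
}

class Derived : Base
{
    private string name;
    public Derived() { name = "John"; }    // runs *after* Base() returns
    protected override void Describe()
    {
        Console.WriteLine(name == null ? "name not set yet!" : name);
    }
}

class Demo
{
    static void Main() { new Derived(); }  // prints "name not set yet!"
}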
A: Out of curiosity, why are you passing 'this' to a member function of the same class? setTopWorldAgent() can use 'this' directly. It doesn't look like your constructor or setTopWorldAgent() is static, so I'm not sure why you would pass a member function something it already has access to.
Unless I'm missing something...
A: Why would setTopWorldAgent need this as an argument? Based on the invocation, it's an instance method, so it could reference this without needing to receive it as a parameter.
A: "this" should never be null. Are you sure that the exception is being thrown because of that?
Something to beware of is that if the method is virtual, or calls any virtual methods, then a method belonging to a subclass might be run before the subclass's variables are initialised.
A: I think more to the point, why on earth are you passing 'this' as a parameter to a method in 'this'?
The following would test what you say is happening to you and I have no troubles with it.
public class Test {
public Test() {
this.hi(this);
}
public void hi(Test t) {
System.out.println(t);
}
public static void main(String[] args) throws Exception {
Test t = new Test();
}
}
A: Given that setTopWorldAgent appears to be an instance method, why are you passing through this to it anyway?
A: this is not null, that much is sure. It's been allocated.
That said, there's no need to pass this into the method, it's automatically available in all instance methods. If the method's static, you may want refactor it into an instance method.
A: The error must be somewhere else because the above code definitely works, the null reference must be something else.
A: If your Agent is implementing ITopWorldAgent then you should actually do this:
Agent agent = new Agent("John", 9);
agent.setTopWorldAgent(agent, "Top_World_Agent", true);
If not, then why you are setting something in the manner you are?
I presume that something in the setTopWorldAgent method is using a value that hasn't been initialised yet in your constructor.
A: The rules of Java state that you should never pass 'this' to another method from its constructor, for the simple reason that the object has not been fully constructed. The object it references may be in an inconsistent state. I'm surprised that the actual 'this' reference is null, but not at all surprised that some member of 'this' is null when it is passed to setTopWorldAgent, and that the method is throwing the exception because of this.
Usually you can get away with passing 'this' from constructors as long as you don't actually access any members or call methods for example if you want to set a reference to 'this' in another object.
In this case of course the argument is unnecessary as the method already has a reference to 'this'.
A: Glad you got to an answer. I'd like to add that passing 'this' as a parameter can lead to unexpected concurrency issues. You are basically opening up the possibility of the object's state being unsafely manipulated by potentially non-thread-safe code.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: log4net/c# - Different layout based on the level Is there any way to have different layout based on level of the log message when using log4net? Say, if it is a fatal error, I want to see all kind of information possible - class name, method name, line number etc. But for normal, debug and warning, I want to see only the message (I hope, this can increase the performance).
I am using log4net in C# WinForms. My requirement is to log all of the previous 512 messages to a file when a fatal error occurs, and I want to see class name, method name, line number etc. only for fatal errors; for all other levels, just the message.
A: I think you're looking for LevelRangeFilter and a two-appender combination. One appender/filter combo for the FATAL level (FATAL being the min and max) and one appender/filter combo for everything else (with ERROR or INFO being the max, depending on whether you want to include errors for debugging purposes).
Example here: What do you have in your log4net config? Hacks, optimizations, observations?
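If you prefer wiring this up in code rather than XML, here is a hedged sketch (file names and patterns are placeholders; the location patterns %type/%method/%line are relatively expensive to compute, which is another reason to reserve them for FATAL):
using log4net;
using log4net.Appender;
using log4net.Core;
using log4net.Filter;
using log4net.Layout;
using log4net.Repository.Hierarchy;

static IAppender BuildAppender(string file, string pattern, Level min, Level max)
{
    PatternLayout layout = new PatternLayout { ConversionPattern = pattern };
    layout.ActivateOptions();
    FileAppender appender = new FileAppender { File = file, Layout = layout, AppendToFile = true };
    appender.AddFilter(new LevelRangeFilter { LevelMin = min, LevelMax = max }); // denies anything outside [min, max]
    appender.ActivateOptions();
    return appender;
}

// Verbose layout (class, method, line) only for FATAL; message-only for DEBUG..ERROR.
IAppender fatal = BuildAppender("fatal.log",
    "%date %level %type.%method:%line - %message%newline", Level.Fatal, Level.Fatal);
IAppender normal = BuildAppender("app.log", "%message%newline", Level.Debug, Level.Error);

Hierarchy hierarchy = (Hierarchy)LogManager.GetRepository();
hierarchy.Root.AddAppender(fatal);
hierarchy.Root.AddAppender(normal);
hierarchy.Root.Level = Level.All;
hierarchy.Configured = true;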
A: You could just use a different Appender for each "Level" and have them identical but for the pattern layout.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Does Class need to implement IEnumerable to use Foreach This is in C#. I have a class that I am using from someone else's DLL. It does not implement IEnumerable but has 2 methods that pass back an IEnumerator. Is there a way I can use a foreach loop on these? The class I am using is sealed.
A: Re: If foreach doesn't require an explicit interface contract, does it find GetEnumerator using reflection?
(I can't comment since I don't have a high enough reputation.)
If you're implying runtime reflection, then no. It does it all at compile time. Another lesser-known fact is that it also checks whether the returned object that might implement IEnumerator is disposable.
To see this in action consider this (runnable) snippet.
using System;
using System.Collections.Generic;
using System.Text;
namespace ConsoleApplication3
{
class FakeIterator
{
int _count;
public FakeIterator(int count)
{
_count = count;
}
public string Current { get { return "Hello World!"; } }
public bool MoveNext()
{
if(_count-- > 0)
return true;
return false;
}
}
class FakeCollection
{
public FakeIterator GetEnumerator() { return new FakeIterator(3); }
}
class Program
{
static void Main(string[] args)
{
foreach (string value in new FakeCollection())
Console.WriteLine(value);
}
}
}
A: According to MSDN:
foreach (type identifier in expression) statement
where expression is:
Object collection or array expression. The type of the collection element must be convertible to the identifier type. Do not use an expression that evaluates to null. Evaluates to a type that implements IEnumerable or a type that declares a GetEnumerator method. In the latter case, GetEnumerator should either return a type that implements IEnumerator or declare all the methods defined in IEnumerator.
A: Short answer:
You need a class with a method named GetEnumerator, which returns the IEnumerator you already have. Achieve this with a simple wrapper:
class ForeachWrapper
{
private Func<IEnumerator> _enumerator;
public ForeachWrapper(Func<IEnumerator> enumerator)
{
_enumerator = enumerator;
}
public IEnumerator GetEnumerator()
{
return _enumerator();
}
}
Usage:
foreach (var element in new ForeachWrapper(() => myClass.MyEnumerator()))
{
...
}
From the C# Language Specification:
The compile-time processing of a foreach statement first determines the collection type, enumerator type and element type of the expression. This determination proceeds as follows:
*
*If the type X of expression is an array type then there is an implicit reference conversion from X to the System.Collections.IEnumerable interface (since System.Array implements this interface). The collection type is the System.Collections.IEnumerable interface, the enumerator type is the System.Collections.IEnumerator interface and the element type is the element type of the array type X.
*Otherwise, determine whether the type X has an appropriate GetEnumerator method:
*
*Perform member lookup on the type X with identifier GetEnumerator and no type arguments. If the member lookup does not produce a match, or it produces an ambiguity, or produces a match that is not a method group, check for an enumerable interface as described below. It is recommended that a warning be issued if member lookup produces anything except a method group or no match.
*Perform overload resolution using the resulting method group and an empty argument list. If overload resolution results in no applicable methods, results in an ambiguity, or results in a single best method but that method is either static or not public, check for an enumerable interface as described below. It is recommended that a warning be issued if overload resolution produces anything except an unambiguous public instance method or no applicable methods.
*If the return type E of the GetEnumerator method is not a class, struct or interface type, an error is produced and no further steps are taken.
*Member lookup is performed on E with the identifier Current and no type arguments. If the member lookup produces no match, the result is an error, or the result is anything except a public instance property that permits reading, an error is produced and no further steps are taken.
*Member lookup is performed on E with the identifier MoveNext and no type arguments. If the member lookup produces no match, the result is an error, or the result is anything except a method group, an error is produced and no further steps are taken.
*Overload resolution is performed on the method group with an empty argument list. If overload resolution results in no applicable methods, results in an ambiguity, or results in a single best method but that method is either static or not public, or its return type is not bool, an error is produced and no further steps are taken.
*The collection type is X, the enumerator type is E, and the element type is the type of the Current property.
*Otherwise, check for an enumerable interface:
*
*If there is exactly one type T such that there is an implicit conversion from X to the interface System.Collections.Generic.IEnumerable<T>, then the collection type is this interface, the enumerator type is the interface System.Collections.Generic.IEnumerator<T>, and the element type is T.
*Otherwise, if there is more than one such type T, then an error is produced and no further steps are taken.
*Otherwise, if there is an implicit conversion from X to the System.Collections.IEnumerable interface, then the collection type is this interface, the enumerator type is the interface System.Collections.IEnumerator, and the element type is object.
*Otherwise, an error is produced and no further steps are taken.
A: Not strictly. As long as the class has a public GetEnumerator method whose return type exposes MoveNext and Current, it will work with foreach (Reset is not actually required).
A: No, you don't and you don't even need an GetEnumerator method, e.g.:
class Counter
{
public IEnumerable<int> Count(int max)
{
int i = 0;
while (i <= max)
{
yield return i;
i++;
}
yield break;
}
}
which is called this way:
Counter cnt = new Counter();
foreach (var i in cnt.Count(6))
{
Console.WriteLine(i);
}
A: foreach does not require IEnumerable, contrary to popular belief. All it requires is a method GetEnumerator that returns any object that has the method MoveNext and the get-property Current with the appropriate signatures.
/EDIT: In your case, however, you're out of luck. You can trivially wrap your object, however, to make it enumerable:
class EnumerableWrapper {
private readonly TheObjectType obj;
public EnumerableWrapper(TheObjectType obj) {
this.obj = obj;
}
public IEnumerator<YourType> GetEnumerator() {
return obj.TheMethodReturningTheIEnumerator();
}
}
// Called like this:
foreach (var xyz in new EnumerableWrapper(yourObj))
…;
/EDIT: The following method, proposed by several people, does not work if the method returns an IEnumerator:
foreach (var yz in yourObj.MethodA())
…;
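If you find yourself writing such an adapter repeatedly, a generic variant built around a delegate covers any enumerator-returning method. This is just a sketch (the class name and delegate parameter are my own, not from any library):
using System;
using System.Collections.Generic;
class EnumerableWrapper<T>
{
    private readonly Func<IEnumerator<T>> getEnumerator;
    public EnumerableWrapper(Func<IEnumerator<T>> getEnumerator)
    {
        this.getEnumerator = getEnumerator;
    }
    // foreach only needs this method; no interface implementation required.
    public IEnumerator<T> GetEnumerator()
    {
        return getEnumerator();
    }
}
// Usage:
// foreach (var item in new EnumerableWrapper<int>(() => obj.SomeMethodReturningAnEnumerator()))
//     Console.WriteLine(item);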
A: You could always wrap it, and, as an aside, to be "foreachable" you only need a method called "GetEnumerator" with the proper signature.
class EnumerableAdapter
{
ExternalSillyClass _target;
public EnumerableAdapter(ExternalSillyClass target)
{
_target = target;
}
  public IEnumerator GetEnumerator(){ return _target.SomeMethodThatGivesAnEnumerator(); }
}
A: Given class X with methods A and B that both return IEnumerable, you could use a foreach on the class like this:
foreach (object y in X.A())
{
//...
}
// or
foreach (object y in X.B())
{
//...
}
Presumably the meaning for the enumerables returned by A and B are well-defined.
A: @Brian: I'm not sure whether you're trying to loop over the value returned from the method call, or over the class itself.
If what you want is the class, then by making it an array you can use it with foreach.
A: For a class to be usable with foreach, all it needs to do is have a public method named GetEnumerator() that returns an IEnumerator, that's it:
Take the following class, it doesn't implement IEnumerable or IEnumerator :
public class Foo
{
private int[] _someInts = { 1, 2, 3, 4, 5, 6 };
public IEnumerator GetEnumerator()
{
foreach (var item in _someInts)
{
yield return item;
}
}
}
alternatively the GetEnumerator() method could be written:
public IEnumerator GetEnumerator()
{
return _someInts.GetEnumerator();
}
When used in a foreach (note that no wrapper is used, just a class instance):
foreach (int item in new Foo())
{
Console.Write("{0,2}",item);
}
prints:
1 2 3 4 5 6
A: The type is only required to have a public, non-static, non-generic, parameterless method named GetEnumerator, which should return something that has a public MoveNext method and a public Current property. As I recollect Eric Lippert explaining somewhere, this was designed to accommodate the pre-generics era, for both type safety and boxing-related performance issues in the case of value types.
For instance this works:
class Something { } //placeholder element type
class Test
{
    public SomethingEnumerator GetEnumerator()
    {
        return new SomethingEnumerator();
    }
}
class SomethingEnumerator
{
    public Something Current //could return anything
    {
        get { return new Something(); }
    }
    public bool MoveNext()
    {
        return false; //a real enumerator advances and returns true while items remain
    }
}
//now you can call
foreach (Something thing in new Test()) //type safe
{
}
This is then translated by the compiler to:
var enumerator = new Test().GetEnumerator();
try {
Something element; //pre C# 5
while (enumerator.MoveNext()) {
Something element; //post C# 5
element = (Something)enumerator.Current; //the cast!
statement;
}
}
finally {
IDisposable disposable = enumerator as System.IDisposable;
if (disposable != null) disposable.Dispose();
}
From 8.8.4 section of the spec.
Something worth noting is the enumerator precedence involved - it goes like if you have a public GetEnumerator method, then that is the default choice of foreach irrespective of who is implementing it. For example:
class Test : IEnumerable<int>
{
public SomethingEnumerator GetEnumerator()
{
//this one is called
}
IEnumerator<int> IEnumerable<int>.GetEnumerator()
{
}
}
(If you don't have a public implementation (i.e. only an explicit implementation), then precedence goes IEnumerator<T> > IEnumerator.)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
}
|
Q: Contextual Natural Language Resources, Where Do I Start? Where can I find some .NET or conceptual resources to start working with natural language processing, where I can pull context and subjects from text? I would prefer not to work with word-frequency algorithms.
A: To find resources in part of speech tagging (a natural language processing task) look at this:
*
*Natural Language Toolkit
*PoS tagger in perl
*SVM Tool
*NLP Group at Stanford
Hope it helps.
A: For English, there are the WordNet files, are these the kind of resources you are looking for?
A: Hank's tag change leads to a list of parsers and to libraries
A: @Alex S: by speech do you mean vocal processing or text processing? I'm looking for text-processing resources at the moment. I have downloaded the book and will be going through it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127238",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: .NET client app: how to reach Web Services in case of proxy? We are developing a .NET 2.0 winform application. The application needs to access Web Services. Yet, we are encountering issues with users behind proxies.
Popular windows backup applications (think Mozy) are providing a moderately complex dialog window dedicated the proxy settings. Yet, re-implementing yet-another proxy handling logic and GUI looks a total waste of time to me.
What are the best ways to deal with proxies in .NET client apps?
More specifically, we have a case where the user has recorded his proxy settings in Internet Explorer (including username and password), so the default proxy behavior of .NET should work. Yet, the user is still prompted for his username and password when launching IE (both fields are pre-completed, the user just needs to click OK) - and our winform application still fails at handling the proxy.
What should we do to enforce that the user is not prompted for his username and password when launching IE?
A: Use the WebProxy and WebRequest classes. Wrap them into your own library once and use it everywhere you need to work with a proxy.
A: Put this in your application's config file:
<configuration>
<system.net>
<defaultProxy>
<proxy autoDetect="true" />
</defaultProxy>
</system.net>
</configuration>
and your application will use the proxy settings from IE. If you can see your web service in IE using the proxy server, you should be able to "see" it from your application.
A: Look into using the .NET WebProxy class. It has support for automatically selecting the correct default settings.
A: The easiest way is to use the proxy settings from Internet Explorer.
A: If you open IE, click OK to the proxy dialog, and then (leaving IE open) try to connect with your winforms app, does your app then work? Or does your app fail to handle the proxy no matter what?
A: Are your clients that are experiencing proxy problems all on the same network (i.e. are they all using the same proxy server)?
A: I think the asker understands he has to use WebProxy if the user requires a proxy; the question is "how do I get IE's proxy settings so I don't have to ask the user to type them into my app as well?"
System.Net.WebProxy.GetDefaultProxy is obsolete, you have to use System.Net.WebRequest.DefaultWebProxy. There is an article describing it at http://msdn.microsoft.com/en-ca/magazine/cc300743.aspx.
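To make the credentials part concrete, here is a minimal sketch (the method and URL are placeholders, shown without an enclosing class) of picking up the IE/system proxy via WebRequest.DefaultWebProxy and attaching the logged-on user's credentials so the proxy doesn't prompt; the declarative near-equivalent, if memory serves, is the useDefaultCredentials attribute on the defaultProxy config element:
using System.IO;
using System.Net;
static string CallService(string url) // url is a placeholder
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    // DefaultWebProxy reflects the machine/IE proxy configuration.
    IWebProxy proxy = WebRequest.DefaultWebProxy;
    // Supply the current user's credentials so the proxy doesn't prompt.
    proxy.Credentials = CredentialCache.DefaultCredentials;
    request.Proxy = proxy;
    using (WebResponse response = request.GetResponse())
    using (StreamReader reader = new StreamReader(response.GetResponseStream()))
    {
        return reader.ReadToEnd();
    }
}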
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127241",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How to bind extenders to controls on clientside I have some dynamically created inputs which are not server-side controls. I want to relate them to some CalendarExtender and MaskedEditExtender on the clientside. Is there a way to do that?
A: Yes I think it may be possible here is how:
On the server side, set the BehaviorID property of the AJAX control to a known value:
_calendarExtender.BehaviorID = "_behaviour_id"
This then allows you, in your JavaScript, to get hold of the underlying CalendarBehavior object with the $find function:
var calBehaviour = $find('_behaviour_id' );
You can now call the various object functions such as hide() and show() :
calBehaviour.show();
You can get hold of the underlying TextBox input field for the CalendarExtender as follows :
var tbElement = calBehaviour._textbox._element;
I've not tried it but you may thus be able to swap out the original text box for your own client side input control if that's what you want to do or simply manipulate the extender in other ways.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: LINQ to XML Newbie Question: Returning More Than One Result Greetings!
I'm working on wrapping my head around LINQ. If I had some XML such as this loaded into an XDocument object:
<Root>
<GroupA>
<Item attrib1="aaa" attrib2="000" attrib3="true" />
</GroupA>
<GroupB>
<Item attrib1="bbb" attrib2="111" attrib3="true" />
<Item attrib1="ccc" attrib2="222" attrib3="false" />
<Item attrib1="ddd" attrib2="333" attrib3="true" />
</GroupB>
<GroupC>
<Item attrib1="eee" attrib2="444" attrib3="true" />
<Item attrib1="fff" attrib2="555" attrib3="true" />
</GroupC>
</Root>
I'd like to get the attribute values of all of the Item child elements of a Group element. Here's what my query looks like:
var results = from thegroup in l_theDoc.Elements("Root").Elements(groupName)
select new
{
attrib1_val = thegroup.Element("Item").Attribute("attrib1").Value,
attrib2_val = thegroup.Element("Item").Attribute("attrib2").Value,
};
The query works, but if for example the groupName variable contains "GroupB", only one result (the first Item element) is returned instead of three. Am I missing something?
A: XElement e = XElement.Parse(testStr);
string groupName = "GroupB";
var items = from g in e.Elements(groupName)
from i in g.Elements("Item")
select new {
attr1 = (string)i.Attribute("attrib1"),
attr2 = (string)i.Attribute("attrib2")
};
foreach (var item in items)
{
Console.WriteLine(item.attr1 + ":" + item.attr2);
}
A: Yes, .Element() only returns the first matching element. You want .Elements() and you need to re-write your query somewhat (note that group is a query-expression keyword in C#, so it can't be used as a range-variable name without escaping it as @group):
var results = from grp in l_theDoc.Root.Elements(groupName)
select new
{
items = from i in grp.Elements("Item")
select new
{
attrib1_val = i.Attribute("attrib1").Value,
attrib2_val = i.Attribute("attrib2").Value
}
};
A: Here's the query method form of the answer:
var items =
e.Elements("GroupB")
.SelectMany(g => g.Elements("Item"))
.Select(i => new {
attr1 = i.Attribute("attrib1").Value,
attr2 = i.Attribute("attrib2").Value,
attr3 = i.Attribute("attrib3").Value
} )
.ToList();
A: Another possibility is using a where clause:
var groupName = "GroupB";
var results = from theitem in doc.Descendants("Item")
where theitem.Parent.Name == groupName
select new
{
attrib1_val = theitem.Attribute("attrib1").Value,
attrib2_val = theitem.Attribute("attrib2").Value,
};
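One defensive note that applies to all of the answers above: calling .Value on a missing attribute throws a NullReferenceException, while an explicit cast of the XAttribute to string yields null instead. A sketch of the cast-based form:
var results = from i in l_theDoc.Root.Elements(groupName).Elements("Item")
              select new
              {
                  // The cast returns null when the attribute is absent,
                  // rather than throwing like .Value would.
                  attrib1_val = (string)i.Attribute("attrib1"),
                  attrib2_val = (string)i.Attribute("attrib2")
              };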
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Does NHaml have a content_for ability for layouts? I am currently starting a project utilizing ASP.NET MVC and would like to use NHaml as my view engine as I love Haml from Rails/Merb. The main issue I face is the laying out of my pages. In Webforms, I would place a ContentPlaceHolder in the head so that other pages can have specific CSS and JavaScript files.
In Rails, this is done utilizing yield and content_for
File: application.haml
%html
%head
- yield :style
File: page.haml
- content_for :style do
/ specific styles for this page
In NHaml, I can do this with partials, however any partials are global for the entire controller folder.
File: application.haml
!!!
%html{xmlns="http://www.w3.org/1999/xhtml"}
%head
_ Style
File: _Style.haml
%link{src="http://www.thescore.com/css/style.css?version=1.1" type="text/css"}
Does anyone know of a way to get NHaml to work in the Rails scenario?
A: Use the ^ evaluator in the master page, and set its value in each of the layouts (content pages).
See NHaml Samples from it's source on Google Code.
A: The "content placeholders" are not yet supported.
But there is a request for that.
You can vote for it too
BUT this is how I provided per-page content in NHAML:
http://dnagir.blogspot.com/2009/07/nhaml-scripts-and-styles-code-block.html
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127267",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: ScriptManager.RegisterClientScript in a UserControl within a FormView inside an Async Panel I'm having an annoying problem registering a javascript event from inside a user control within a formview in an Async panel. I go to my formview, and press a button to switch into insert mode. This doesn't do a full page postback. Within insert mode, my user control's page_load event should then register a javascript event using ScriptManager.RegisterStartupScript:
ScriptManager.RegisterStartupScript(base.Page, this.GetType(), ("dialogJavascript" + this.ID), "alert(\"Registered\");", true);
However when I look at my HTML source, the event isn't there. Hence the alert box is never shown. This is the setup of my actual aspx file:
<igmisc:WebAsyncRefreshPanel ID="WebAsyncRefreshPanel1" runat="server">
<asp:FormView ID="FormView1" runat="server" DataSourceID="odsCurrentIncident">
<EditItemTemplate>
<uc1:SearchSEDUsers ID="SearchSEDUsers1" runat="server" />
</EditItemTemplate>
<ItemTemplate>
Hello
<asp:Button ID="Button1" runat="server" CommandName="Edit" Text="Button" />
</ItemTemplate>
</asp:FormView>
</igmisc:WebAsyncRefreshPanel>
Does anyone have any idea what I might be missing here?
A: Have you tried using ScriptManager.RegisterClientScriptBlock? You can always check the script's key with ClientScript.IsClientScriptBlockRegistered to ensure you don't register it multiple times.
I'm assuming the async panel is doing a partial-page postback, which doesn't trigger the mechanism that regenerates the startup scripts. Perhaps someone with a better understanding of the ASP.NET page life cycle can fill in those blanks.
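Also worth checking: during a partial postback, the ScriptManager only emits scripts registered against a control inside the region being updated, so passing the user control itself rather than base.Page as the first argument can make the difference. A sketch of that form (this is the documented fix for ASP.NET's own UpdatePanel; whether Infragistics' WebAsyncRefreshPanel participates in the same mechanism is an assumption you would need to verify):
// Register against the user control so the ScriptManager ties the
// script to the partially-rendered region.
ScriptManager.RegisterStartupScript(
    this,                          // the control inside the async panel
    this.GetType(),
    "dialogJavascript" + this.ID,  // unique key
    "alert(\"Registered\");",
    true);                         // wrap in <script> tags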
A: For me that works fine. resizeChartMid() is a function name.
ScriptManager.RegisterStartupScript(this, typeof(string), "getchart48", "resizeChartMid();", true);
A: Try this, I got the same issue:
ScriptManager.RegisterClientScriptBlock(base.Page, this.GetType(),
    "dialogJavascript" + this.ID, "alert(\"Registered\");", true);
This worked for me!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Can't select Primary Output as target of shortcut in Visual Studio 2005 setup project I've added a setup project to my solution (didn't use the wizard). I then added the primary output of the Windows application I have coded to the Application Folder node (right-click the setup project in Solution Explorer and select View -> File System). I right-clicked the User's Desktop node and selected 'Create Shortcut to user's Desktop' in the context menu, typed a name for the shortcut, and then in the Properties window clicked the ellipsis button for the Target property. A dialog is displayed, but it won't expand the Application Folder node and let me select the Primary Output as the target!!! WTF???
I have done this on another project but I can't for the life of me figure out why I can't do it on this one. Hell! the projects are almost identical in every other way. Gah! going bald. Hoping someone out there is having a better day than me and has time to give me the probably screamingly obvious solution and make me feel lame.
A: You actually have to right-click on 'Primary Output From ' in your setup project, and create a shortcut to that. Then, you can move the shortcut over to the 'Users Desktop' location within your setup project.
A: I think you've created a shortcut to the desktop, on the desktop.
Try clicking on User's Desktop, then right-clicking in the right hand pane, and selecting "Create new shortcut" from there.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Is it possible to subclass a C struct in C++ and use pointers to the struct in C code? Is there a side effect in doing this:
C code:
struct foo {
int k;
};
int ret_foo(const struct foo* f){
   return f->k;
}
C++ code:
class bar : public foo {
int my_bar() {
      return ret_foo( (foo*)this );
}
};
There's an extern "C" around the C++ code and each code is inside its own compilation unit.
Is this portable across compilers?
A: This is entirely legal. In C++, classes and structs are identical concepts, with the exception that all struct members are public by default. That's the only difference. So asking whether you can extend a struct is no different than asking if you can extend a class.
There is one caveat here. There is no guarantee of layout consistency from compiler to compiler. So if you compile your C code with a different compiler than your C++ code, you may run into problems related to member layout (padding especially). This can even occur when using C and C++ compilers from the same vendor.
I have had this happen with gcc and g++. I worked on a project which used several large structs. Unfortunately, g++ packed the structs significantly looser than gcc, which caused significant problems sharing objects between C and C++ code. We eventually had to manually set packing and insert padding to make the C and C++ code treat the structs the same. Note however, that this problem can occur regardless of subclassing. In fact we weren't subclassing the C struct in this case.
A:
“Never derive from concrete classes.” — Sutter
“Make non-leaf classes abstract.” — Meyers
It’s simply wrong to subclass non-interface classes. You should refactor your libraries.
Technically, you can do what you want, so long as you don’t invoke undefined behavior by, e. g., deleting a pointer to the derived class by a pointer to its base class subobject. You don’t even need extern "C" for the C++ code. Yes, it’s portable. But it’s poor design.
A: This is perfectly legal, though it might be confusing for other programmers.
You can use inheritance to extend C-structs with methods and constructors.
Sample :
struct POINT { int x, y; };
class CPoint : POINT
{
public:
CPoint( int x_, int y_ ) { x = x_; y = y_; }
const CPoint& operator+=( const POINT& op2 )
{ x += op2.x; y += op2.y; return *this; }
// etc.
};
Extending structs might be "more" evil, but is not something you are forbidden to do.
A: Wow, that's evil.
Is this portable across compilers?
Most definitely not. Consider the following:
foo* x = new bar();
delete x;
In order for this to work, foo's destructor must be virtual, which it clearly isn't. As long as you don't use new and as long as the derived objects don't have custom destructors, though, you could be lucky.
/EDIT: On the other hand, if the code is only used as in the question, inheritance has no advantage over composition. Just follow the advice given by m_pGladiator.
A: This is perfectly legal, and you can see it in practice with the MFC CRect and CPoint classes. CPoint derives from POINT (defined in windef.h), and CRect derives from RECT. You are simply decorating an object with member functions. As long as you don't extend the object with more data, you're fine. In fact, if you have a complex C struct that is a pain to default-initialize, extending it with a class that contains a default constructor is an easy way to deal with that issue.
Even if you do this:
foo *pFoo = new bar;
delete pFoo;
then you're fine, since your constructor and destructor are trivial, and you haven't allocated any extra memory.
You also don't have to wrap your C++ object with 'extern "C"', since you're not actually passing a C++ type to the C functions.
A: I certainly do not recommend using such weird subclassing. It would be better to change your design to use composition instead of inheritance.
Just make one member
foo* m_pfoo;
in the bar class and it will do the same job.
Other thing you can do is to make one more class FooWrapper, containing the structure in itself with the corresponding getter method. Then you can subclass the wrapper. This way the problem with the virtual destructor is gone.
A: I don't think it is necessarily a problem. The behaviour is well defined, and as long as you are careful with lifetime issues (don't mix and match allocations between the C++ and C code) it will do what you want. It should be perfectly portable across compilers.
The problem with destructors is real, but applies any time the base class destructor isn't virtual not just for C structs. It is something you need to be aware of but doesn't preclude using this pattern.
A: It will work, and portably BUT you cannot use any virtual functions (which includes destructors).
I would recommend that instead of doing this you have Bar contain a Foo.
class Bar
{
private:
Foo mFoo;
};
A: I don't get why you don't simply make ret_foo a member method. Your current way makes your code awfully hard to understand. What is so difficult about using a real class in the first place with a member variable and get/set methods?
I know it's possible to subclass structs in C++, but the danger is that others won't be able to understand what you coded because it's so seldom that somebody actually does it. I'd go for a robust and common solution instead.
A: It probably will work but I do not believe it is guaranteed to. The following is a quote from ISO C++ 10/5:
A base class subobject might have a layout (3.7) different from the layout of a most derived object of the same type.
It's hard to see how in the "real world" this could actually be the case.
EDIT:
The bottom line is that the standard has not limited the number of places where a base class subobject layout can be different from a concrete object with that same Base type. The result is that any assumptions you may have, such as POD-ness etc. are not necessarily true for the base class subobject.
EDIT:
An alternative approach, and one whose behaviour is well defined is to make 'foo' a member of 'bar' and to provide a conversion operator where it's necessary.
class bar {
public:
int my_bar() {
return ret_foo( foo_ );
}
//
// This allows a 'bar' to be used where a 'foo' is expected
inline operator foo& () {
return foo_;
}
private:
foo foo_;
};
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127290",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
}
|
Q: Is there any way, in java, to check on the status of a windows service? I am looking for a library that will allow me to look up the status of a windows service to verify that the service is started and running. I looked into the Sigar library, but it is GPL and therefor I cannot use it. A Commercial or BSD(ish) license is required as this will be bundled into commercial software.
A: If nothing else helps, try to think of a slightly different approach (if you can, of course), e.g.:
*
*There is plenty of free/non-free software which does monitoring, including Windows service monitoring (e.g. Nagios, Zabbix, etc.). These monitors typically have open APIs that your Java app could integrate with in a number of different ways.
*If you have control over the service application in question, expose another way for your Java application to check it (e.g. run a dummy listener on a port, create a file, etc.). Windows services aren't a cross-platform concept and therefore not something you would expect Java to support anytime soon.
A: I don't think there is any pure-Java way to do this because some operating systems don't have the notion of "services" like Windows does. In our projects, we wrote a wrapper around calls to the "sc" command from the command line. To get the status of a service, you can do:
sc \\some-computer query "my service name"
You'll have to manually parse the output but it's pretty straightforward.
A: I don't know of any libraries, but depending on how detailed you need to get you might get by with some shell commands and parsing the output.
NET START servicename
will either start the service, or give you back an error message that tells you it's already started. I don't know of any command that will just give you the status though.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127299",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Are there any all-in-one packages that help install wamp on a production server? I need to install AMP on a Windows 2003 production server. I'd like, if possible, an integrated install/management tool so I don't have to install/integrate the components of AMP separately. Those that I've found are 'development' servers. Are there any packages out there that install AMP in a production-ready (locked-down) state?
I'm aware of LAMP... Windows, since we have IIS apps already and we've paid for this box, is a requirement. I'll take care of all the other hangups. I just want a simple way to install, integrate, and manage AMP.
A: I'm not sure running WAMP as a production server is a good idea. I use WAMP to stage projects and then I move them to a Linux server.
You can try any of this solutions:
http://www.uniformserver.com/
Some people state that they are working fine with WAMP Server, but again, I wouldn't recommend it.
A: There don't appear to be any all-in-one packages that are up to date and 'designed' for production. You just can't trust the default installs of what's out there to be secure.
I ended up just doing this manually. It wasn't painful, though. Each component's install procedure was documented reasonably well. It took me about 3.5 hours. A nice side effect of the involved setup was that it gave me a much better understanding of each component's dependencies and the ways in which they touch. In hindsight I should have done it manually from the start.
Note: make sure you read the comments below each component's documentation pages. Some contain valuable corrections to the install process.
A: Xampp is quite popular, i just don't know how "production level" it is:
http://www.apachefriends.org/en/xampp.html
Without wanting to sound elite: For "real" production Environments, it's possibly not a bad idea to setup and configure the components individually, but this requires some deeper knowledge than "hit setup and run".
A: Since the time this question was asked Zend has released Zend Server.
Zend Server is a complete,
enterprise-ready Web Application
Server for running and managing PHP
applications that require a high level
of reliability, performance and
security.
A:
There doesn't appear to be any all-in one packages that are up to date and 'designed' for production. You just can't trust the default installs to be secure on whats out there.
WampDeveloper Pro is a commercial WAMP package that is specifically designed for production use (which I use).
I don't think that when this question was asked there was a viable solution for the above.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: What methods for measuring progress during a release sprint are effective EDIT TO IMPROVE CLARITY
Scrum suggests that you split your development into a number of sprints, each of a fixed duration. At the end of each sprint you ask the client whether you should release the software. If they say yes, you perform a release sprint, during which you do all the tasks that you would like to do continuously but which are too expensive, such as external user testing, performance/load testing and sign-off, burning CDs (if relevant), writing user-centered documentation and so on.
In my current project we have just performed our first release sprint. We found we lost a lot of the advantages of Scrum, such as the burndown (as a lot of the work was fixing minor tweaks or temporarily removing security from the site so the load testing could happen) and a clear goal as to how much work was to be done next. Basically the release tasks were too close to firefighting to be easily trackable via normal Scrum tools.
What methods have other people used during a release sprint, and what pitfalls did you find that should be avoided?
A: Actually, I prefer this tool. It does task-tracking, burndowns, burn-ups, and is useful for project notes.
But to answer the question, tracking hours-remaining on a burndown should still work. It'll still tell you whether you're going to get all your release-sprint tasks (bugs/tweaks) done in time for launch. If the answer is "not all of them", then it's time to get the product owner in to do some prioritisation, and kick some of the tasks out of the sprint.
A: We're using a kanban board with Scrum. Each product item is represented by a post-it note on the whiteboard. It's really obvious during the daily standups where everyone is with each of their tasks, and we can see how many tickets we have queued up in the 'pending' area on the board compared to the 'done' area at the other end.
A: Your goal should be to get to a point where you don't need a release sprint to deploy to production:) But with that said, what are you doing in your release sprint? There are still tasks to be done, but they are even more predictable than developing code. I've never seen a difference in how the burndown/planning works other than it usually involves adding people to the team from ops. That of course can be its own problem. Maybe you could give a quick idea of what a release sprint looks like in your organization.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Problem Exporting DataGrid to Excel I first got an error using the code below, explaining that "'DataGridLinkButton' must be placed inside a form tag with runat=server."
Now I've tried setting AllowSorting to false, as well as removing the sort expression from each column, with the same error. Then I tried creating a new, plain, DataGrid, with the same data source, but now I get a blank page and FF doesn't recognise the content type properly any more. Please help.
Response.Clear();
base.Response.Buffer = true;
base.Response.ContentType = "application/vnd.ms-excel";
base.Response.AddHeader("Content-Disposition", "attachment;filename=file.xls");
base.Response.Charset = "";
this.EnableViewState = false;
StringWriter writer = new StringWriter();
HtmlTextWriter writer2 = new HtmlTextWriter(writer);
this.lblExport.RenderControl(writer2);
base.Response.Write(writer.ToString());
A: Add the following empty method to your code. That should fix it.
public override void VerifyRenderingInServerForm(Control control)
{
}
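For background: controls such as the DataGrid call Page.VerifyRenderingInServerForm from their Render method, and the empty override above simply suppresses that check. If you would rather not override it, a sketch of an alternative is to render the control through a throwaway server-side form (control names follow the question's code; this assumes it runs in the page's code-behind, with System.Web.UI.HtmlControls imported):
StringWriter sw = new StringWriter();
HtmlTextWriter hw = new HtmlTextWriter(sw);
// Rendering through an HtmlForm satisfies the "inside a form tag with
// runat=server" check without any override.
HtmlForm form = new HtmlForm();
Controls.Add(form);
form.Controls.Add(lblExport);
form.RenderControl(hw);
Response.Write(sw.ToString());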
A: public override void VerifyRenderingInServerForm(Control control)
{
}
More help can be found at
http://techdotnets.blogspot.com/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127316",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Is there any sed like utility for cmd.exe? I want to programmatically edit file content using windows command line (cmd.exe). In *nix there is sed for this tasks. Are there any useful native equivalents (cmd or ps) in windows?
A: You could try PowerShell. There are Get-Content and Set-Content cmdlets built in that you could use.
A: I use Cygwin. I run into a lot of people that do not realize that if you put the Cygwin binaries on your PATH, you can use them from within the Windows Command shell. You do not have to run Cygwin's Bash.
You might also look into Windows Services for Unix available from Microsoft (but only on the Professional and above versions of Windows).
A: Try fart.exe. It's a Find-and-replace-text utility that can be used in command batch programs.
http://sourceforge.net/projects/fart-it/
A: You could install Cygwin (http://www.cygwin.com/) and use sed from there.
A: There is a helper batch file for Windows called repl.bat which has much of the ability of SED but doesn't require any additional download or installation. It is a hybrid batch file that uses Jscript to implement the features and so is swift, and doesn't suffer from the usual poison characters of batch processing and handles blank lines with ease.
Download repl from - https://www.dropbox.com/s/qidqwztmetbvklt/repl.bat
Alternative link - https://www.dostips.com/forum/viewtopic.php?f=3&t=6044
The author is @dbenham from stack overflow and dostips.com
Another helper batch file called findrepl.bat gives the Windows user much of the capabilty of GREP and is also based on Jscript and is likewise a hybrid batch file. It shares the benefits of repl.bat
Download findrepl from - https://www.dropbox.com/s/rfdldmcb6vwi9xc/findrepl.bat
The author is @aacini from stack overflow and dostips.com
A: edlin or edit
plus there is Windows Services for Unix which comes with many unix tools for windows.
http://technet.microsoft.com/en-us/interopmigration/bb380242.aspx
Update 12/7/12
In Windows 2003 R2, Windows 7 & Server 2008, etc. the above is replaced by the Subsystem for UNIX-Based Applications (SUA) as an add-on. But you have to download the utilities:
http://www.microsoft.com/en-us/download/details.aspx?id=2391
A: You could look at GNU Tools, they provide (amongst other things) sed on windows.
A: As far as I know nothing like sed is bundled with windows. However, sed is available for Windows in several different forms, including as part of Cygwin, if you want a full POSIX subsystem, or as a Win32 native executable if you want to run just sed on the command line.
Sed for Windows (GnuWin32 Project)
If it needs to be native to Windows then the only other thing I can suggest would be to use a scripting language supported by Windows without add-ons, such as VBScript.
A: UnxUtils provides sed for Win32, as does GNUWin32.
A: If you don't want to install anything (I assume you want to add the script to some solution/program/etc. that will be run on other machines), you could try creating a VBS script (let's say, replace.vbs):
Const ForReading = 1
Const ForWriting = 2
strFileName = Wscript.Arguments(0)
strOldText = Wscript.Arguments(1)
strNewText = Wscript.Arguments(2)
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objFile = objFSO.OpenTextFile(strFileName, ForReading)
strText = objFile.ReadAll
objFile.Close
strNewText = Replace(strText, strOldText, strNewText)
Set objFile = objFSO.OpenTextFile(strFileName, ForWriting)
objFile.Write strNewText
objFile.Close
And you run it like this:
cscript replace.vbs "C:\One.txt" "Robert" "Rob"
Which is similar to the sed version provided by "bill weaver", but I think this one is more friendly in terms of special (' > < / ) characters.
Btw, I didn't write this, but I can't recall where I got it from.
A: Today PowerShell saved me.
For grep there is:
get-content somefile.txt | where { $_ -match "expression"}
or
select-string somefile.txt -pattern "expression"
and for sed there is:
get-content somefile.txt | %{$_ -replace "expression","replace"}
For more detail about replace PowerShell function see this Microsoft article.
A: > (Get-content file.txt) | Foreach-Object {$_ -replace "^SourceRegexp$", "DestinationString"} | Set-Content file.txt
This is the behaviour of
sed -i 's/^SourceRegexp$/DestinationString/g' file.txt
A: sed (and its ilk) are contained within several packages of Unix commands.
*
*Cygwin works but is gigantic.
*UnxUtils is much slimmer.
*GnuWin32 is another port that works.
*Another alternative is AT&T Research's UWIN system.
*MSYS from MinGw is yet another option.
*Windows Subsystem for Linux is a most "native" option, but it's not installed on Windows by default; it has sed, grep etc. out of the box, though.
*https://github.com/mbuilov/sed-windows offers recent 4.3 and 4.4 versions, which support -z option unlike listed upper ports
If you don't want to install anything and your system ain't a Windows Server one, then you could use a scripting language (VBScript e.g.) for that. Below is a gross, off-the-cuff stab at it. Your command line would look like
cscript //NoLogo sed.vbs s/(oldpat)/(newpat)/ < inpfile.txt > outfile.txt
where oldpat and newpat are Microsoft vbscript regex patterns. Obviously I've only implemented the substitute command and assumed some things, but you could flesh it out to be smarter and understand more of the sed command-line.
Dim pat, patparts, rxp, inp
pat = WScript.Arguments(0)
patparts = Split(pat,"/")
Set rxp = new RegExp
rxp.Global = True
rxp.Multiline = False
rxp.Pattern = patparts(1)
Do While Not WScript.StdIn.AtEndOfStream
inp = WScript.StdIn.ReadLine()
WScript.Echo rxp.Replace(inp, patparts(2))
Loop
A: There is Super Sed, an enhanced version of sed. For Windows this is a standalone .exe, intended for running from the command line.
A: Cygwin works, but these utilities are also available. Just plop them on your drive, put the directory into your path, and you have many of your friendly Unix utilities. Lighter weight, IMHO, than Cygwin (although that works just as well).
A: I needed a sed tool that worked for the Windows cmd.exe prompt. Eric Pement's port of sed to a single DOS .exe worked great for me.
It's pretty well documented.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "189"
}
|
Q: gss_acquire_cred returning Key table entry not found error I have been trying to follow the guidelines in this Microsoft article to authenticate
against Apache with Kerberos and AD. I have successfully tested the communication between the apache server and the AD server with kinit. However when I attempt to access a restricted page on the server with IE I get an Internal server error and the following appears in the apache error log.
[Wed Sep 24 14:18:15 2008] [debug] src/mod_auth_kerb.c(1483): [client 172.31.37.38] kerb_authenticate_user entered with user (NULL) and auth_type Kerberos
[Wed Sep 24 14:18:15 2008] [debug] src/mod_auth_kerb.c(1174): [client 172.31.37.38] Acquiring creds for HTTP/srvnfssol1.dev.local@DEV.LOCAL
[Wed Sep 24 14:18:15 2008] [error] [client 172.31.37.38] gss_acquire_cred() failed: Miscellaneous failure (see text) (Key table entry not found)
I have run a truss on the apache process and confirmed that it is in fact loading up the keytab file ok. I am wondering if there is something wrong with the format of the keytab file...
HTTP/srvnfssol1.dev.local@DEV.LOCAL
I am not sure what I am missing though. Or what other things to check.
Any suggestions?
Thanks
Peter
A: Ok. Keytabs are supposed to contain the Service principal name, in this case "HTTP/srvnfssol1.dev.local@DEV.LOCAL" and the encryption key. I see where the MS docs say just to echo that to a file, but I don't think that's right.
You'll need to use the ktpass utility to create the keytab. The MS docs are here.
In particular, you'll need to specify KRB5_NT_SRV_HST, and most of the rest of the options can be default.
Sample of it on my machine:
C:\>ktpass /out test.keytab /princ HTTP/srvnfssol1.dev.local@DEV.LOCAL
/ptype KRB5_NT_SRV_HST /pass *
Type the password for HTTP/srvnfssol1.dev.local:
Key created.
Output keytab to test.keytab:
Keytab version: 0x502
keysize 62 HTTP/srvnfssol1.dev.local@DEV.LOCAL
ptype 3 (KRB5_NT_SRV_HST) vno 1 etype 0x1 (DES-CBC-CRC)
keylength 8 (0xa7f1fb38041c199e)
If the active directory server is the KDC, you'll need to use the /map <name> argument, where <name> is the computer account in active directory representing the server.
Some details on how all this works. When you browse to the website it should respond with a WWW-Authenticate: Negotiate header, and your browser will send a request to the KDC (active directory server) to get a kerberos ticket for the service. The AD server will look up the encryption key for the ticket using the service principal name, and send an encrypted service ticket back to the browser. Once the browser has the service ticket, it'll reissue the HTTP request with an authenticate header containing the ticket. The apache server will look up its key in the keytab, decrypt the ticket, and grant access.
The "key table entry not found" error happens because apache isn't finding itself in the keytab. Can also happen if the name resolution/realms aren't set up right.
You should be able to see all the kerberos requests AP-REQ/AP-REP/TGS-REQ/TGS-REP using wireshark on the client, tcp or udp port 88.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127324",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Java constants in JSP I have a class that defines the names of various constants, e.g.
class Constants {
public static final String ATTR_CURRENT_USER = "current.user";
}
I would like to use these constants within a JSP without using Scriptlet code such as:
<%@ page import="com.example.Constants" %>
<%= Constants.ATTR_CURRENT_USER %>
There appears to be a tag in the Apache unstandard taglib that provides this functionality. However, I cannot find any way to download this taglib. I'm beginning to wonder if it's been deprecated and the functionality has been moved to another (Apache) tag library?
Does anyone know where I can get this library, or if it's not available, if there's some other way I can access constants in a JSP without using scriptlet code?
Cheers,
Don
A: On application startup, you can add the Constants class to the servletContext and then access it in any jsp page
servletContext.setAttribute("Constants", new com.example.Constants());
and then access it in a jsp page
<c:out value="${Constants.ATTR_CURRENT_USER}"/>
(you might have to create getters for each constant)
A: Turns out there's another tag library that provides the same functionality. It also works for Enum constants.
A: Looks like a duplicate of accessing constants in JSP (without scriptlet)
My answer was:
Static properties aren't accessible in EL. The workaround I use is to create a non-static variable which assigns itself to the static value.
public final static String MANAGER_ROLE = "manager";
public String manager_role = MANAGER_ROLE;
I use lombok to generate the getter and setter so that's pretty well it. Your EL looks like this:
${bean.manager_role}
Full code at https://rogerkeays.com/access-java-static-methods-and-constants-from-el
A: What kind of functionality do you want to use?
That tag should be able to access any public class field by class name and field name.
Scriptlet linking is done at compile time, but taglib class-field access has to use a Java API such as reflection at runtime. Do you really need that?
A: I used jakarta-taglibs-unstandard-20060829.jar in my project but, you're right, it seems to be unavailable for download anymore.
I've got this in my pom.xml in order to get that library, but I think it only works because the library is already in my local repository (I cannot find it in the official repositories):
<dependency>
<groupId>jakarta</groupId>
<artifactId>jakarta-taglibs-unstandard</artifactId>
<version>20060829</version>
</dependency>
I do not know if there's another alternative.
I hope so because it was a good way to access constants in JSP.
A: Why do you want to print the value of the constant on the JSP? Surely you are defining them so that in the JSP you can extract objects from the session and request before you present them?
<%@ page import="com.example.Constants" %>
<%@ page import="com.example.model.User" %>
<%
User user = (User) session.getAttribute(Constants.ATTR_CURRENT_USER);
%>
<h1>Welcome <%=user.getFirstName()%></h1>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
}
|
Q: Decipher Client-Side Rules binary definition for Microsoft Outlook Outlook saves its client-side rule definitions in a binary blob in a hidden message in the Inbox folder of the default store for a profile. The hidden message is named "Outlook Rules Organizer" with a message class IPM.RuleOrganizer. The binary blob is saved in property 0x6802. The same binary blob is written to the exported RWZ file when you manually export the rules through the Rules and Alerts Wizard.
Has anyone deciphered the layout of this binary blob?
A: Hmmm, that is a tough one...
Here's the server side rules protocol
According to this cryptic affair it looks as though you'll probably need to spend some time in Reflector as well...
Ah, these look closer to the mark and promising, give them a look:
Description of programming with Outlook rules
How to use the Rule.dll sample to create an inbox rule in Visual Basic
In general, Microsoft is explicitly saying it hasn't kept the documentation up on the rules in the last two versions and so the caveats...
A: I had exactly the same problem, so I spent a (too) long time looking into the format.
I developed a library https://github.com/hughbe/OutlookRulesReader that contains a specification and reference implementation library (in Swift) for reading and writing Outlook Rules Files
A full description of the format can be found here
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: S/MIME libraries for .net? I need to create S/MIME messages using C# (as specified in RFC 2633, "S/MIME Version 3 message specification", and RFC 3335).
The only S/MIME library I can find is a commercial library (http://www.example-code.com/csharp/smime.asp), which is no good for us.
Are there any existing libraries to accomplish creating S/MIME messages, and in particular, .p7s files?
I have all the encrypted and signed elements that need to go into this file, but I'd like to create the .p7s file without handrolling my own library with the aid of the RFC document...
EDIT:
I've found another commercial S/MIME library, which is still no good for our requirements.
It's looking more and more like I'm going to have to hand-roll an S/MIME library, which is sad.
Is everyone in .NET who needs S/MIME using commercial, closed-source libraries to do it?
A: Have a look at Rebex Secure Mail. This is a very stable library that I have used for many years now. It's 100% managed code and the source code is also available.
A: I spent a lot of time looking for a good S/MIME library for .NET, with no luck. I ended up creating my own, called OpaqueMail.
It's open source and completely free. It inherits from the System.Net.Mail.SmtpClient class, so porting existing code is straightforward. It also includes classes for working with POP3 and IMAP.
Check it out at http://opaquemail.org/.
An example of sending a S/MIME triple-wrapped message (which is digitally signed, encrypted, then digitally signed again) is:
// Instantiate a new SMTP connection to Gmail using TLS/SSL protection.
SmtpClient smtpClient = new SmtpClient("smtp.gmail.com", 587);
smtpClient.Credentials = new NetworkCredential("username@gmail.com", "Pass@word1");
smtpClient.EnableSsl = true;
// Create a new MailMessage class with lorem ipsum.
MailMessage message = new MailMessage("username@gmail.com", "user@example.com", "Example subject", "Lorem ipsum body.");
// Specify that the message should be signed, have its envelope encrypted, and then be signed again (triple-wrapped).
message.SmimeSigned = true;
message.SmimeEncryptedEnvelope = true;
message.SmimeTripleWrapped = true;
// Specify that the message should be timestamped.
message.SmimeSigningOptionFlags = SmimeSigningOptionFlags.SignTime;
// Load the signing certificate from the Local Machine store.
message.SmimeSigningCertificate = CertHelper.GetCertificateBySubjectName(StoreLocation.LocalMachine, "username@gmail.com");
// Send the message.
await smtpClient.SendAsync(message);
Hope this helps.
A: There's a pretty good S/MIME class available on CodeProject.
http://www.codeproject.com/KB/security/CPI_NET_SecureMail.aspx
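If third-party code is off the table entirely, note that the framework itself ships PKCS#7/CMS support in System.Security.dll (namespace System.Security.Cryptography.Pkcs, available since .NET 2.0). A minimal sketch, assuming you already have the signing certificate and shown without an enclosing class, of producing the detached SignedData blob that typically goes into a .p7s file:
using System.Security.Cryptography.Pkcs;
using System.Security.Cryptography.X509Certificates;
static byte[] CreateDetachedSignature(byte[] content, X509Certificate2 signerCert)
{
    ContentInfo contentInfo = new ContentInfo(content);
    // 'true' = detached: the signature references the content
    // without embedding a copy of it.
    SignedCms signedCms = new SignedCms(contentInfo, true);
    CmsSigner signer = new CmsSigner(signerCert);
    signedCms.ComputeSignature(signer);
    // DER-encoded PKCS#7 SignedData -- the bytes written to the .p7s file.
    return signedCms.Encode();
}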
A: I've written my own MIME library with support for S/MIME called MimeKit, which is far more robust than anything based on System.Net.Mail, which is horrendously broken.
It supports raw 8bit headers, rfc822 group addresses, scraping names out of rfc822 comments in address headers (To/Cc/Bcc/etc), parsing mbox formatted message spools (including support for the Content-Length-based SunOS format) and it's an order of magnitude faster than any other C# MIME parser out there because it is byte-stream based instead of TextReader-based (which is also how it supports raw 8bit headers much better than any other C# parser).
A: I haven't used this S/MIME library, but my application uses another library from the same vendor and it works fine:
http://www.chilkatsoft.com/mime-dotnet.asp
Their library to do the p7s signatures is separate, which might be an issue depending on your budget:
http://www.chilkatsoft.com/crypt-dotnet.asp
A: It's quite hard to implement complete s/mime as it requires lots of extra work. You can use SMIME components in SecureBlackbox for your task.
Update: SecureBlackbox is our product. It completely supports Silverlight and Windows Phone (including Mango).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Programming Languages and Design Patterns Different programming languages have different features, or lack certain features. Design patterns are a way to work around those shortcomings. I have seen the books and lists about design patterns in static, object-oriented languages (Java, C++), and also the videos about design patterns in Python.
I'm interested in seeing some common design patterns in other languages like Forth, Icon, Lisp, etc. A short description of what they look like and why they are needed in a language would be nice. Maybe a short comparison to another language that solves the problem without a design pattern.
A: Design patterns are sometimes called "idioms". In non-OO languages (C, Forth, COBOL, etc.) they're just "the usual ways of doing things". Sometimes, they're called "algorithms". Every language (indeed, every discipline) has patterns of designing solutions.
If you've seen something two or three times, you've seen a pattern. If you can describe the context, the problem, the solution and consequences, you've elevated the pattern from something vague to something concrete and specific.
In non-OO languages, the patterns aren't often named and catalogued. Don't know why this would be the case, it seems to be so.
A: For design pattern in LISP, you could read this, by Peter Norvig.
Quoting this slide:
16 of the 23 design patterns are either invisible or simpler
A: In Lisp, instead of design patterns you are using:
*
*lambda and closure (anonymous functions and capture environments)
*higher order functions (functions dealing with functions)
*macros (syntax extesnions)
*different evaluation strategies (lazy evaluation, backtracking)
*first class functions, classes, namespaces, modules, etc.
*dynamic environment (e.g. replace functions at any time)
*etc.
I don't really know what a design pattern means in this context. If a design pattern is a recipe which one should follow to solve certain kinds of problems, then it points to a missing feature in the programming language or the environment. Computers can handle repetitive tasks pretty well, so design patterns should be implementable once and just called with the actual parameters.
A: Design Patterns aren't really meant to be tied to any language. They are more general solutions to common problems.
A: Delegates and events in C# and .Net make it trivial to implement the observer pattern, since it is so commonly used, e.g. to handle GUI events.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: What are the debug memory fill patterns in Visual Studio C++ and Windows? In Visual Studio, we've all had "baadf00d", and have seen "CC" and "CD" when inspecting variables in the debugger in C++ during run-time.
From what I understand, "CC" appears in DEBUG mode only, to indicate memory that has been new()'d or alloc()'d but not initialized, while "CD" represents delete'd or free'd memory. I've only seen "baadf00d" in RELEASE builds (but I may be wrong).
Once in a while, we get into a situation of tracking memory leaks, buffer overflows, etc., and this kind of information comes in handy.
Would somebody be kind enough to point out when and in what modes the memory are set to recognizable byte patterns for debugging purpose?
A: Regarding 0xCC and 0xCD in particular, these are relics from the Intel 8088/8086 processor instruction set back in the 1980s. 0xCC is a special case of the software interrupt opcode INT 0xCD. The special single-byte version 0xCC allows a program to generate interrupt 3.
Although software interrupt numbers are, in principle, arbitrary, INT 3 was traditionally used for the debugger break or breakpoint function, a convention which remains to this day. Whenever a debugger is launched, it installs an interrupt handler for INT 3 such that when that opcode is executed the debugger will be triggered. Typically it will pause the currently running program and show an interactive prompt.
Normally, the x86 INT opcode is two bytes: 0xCD followed by the desired interrupt number from 0-255. Now although you could issue 0xCD 0x03 for INT 3, Intel decided to add a special version--0xCC with no additional byte--because an opcode must be only one byte in order to function as a reliable 'fill byte' for unused memory.
The point here is to allow for graceful recovery if the processor mistakenly jumps into memory that does not contain any intended instructions. Multi-byte instructions aren't suited to this purpose, since an erroneous jump could land at any possible byte offset where it would have to continue with a properly formed instruction stream.
Obviously, one-byte opcodes work trivially for this, but there can also be quirky exceptions: for example, considering the fill sequence 0xCDCDCDCD (also mentioned on this page), we can see that it's fairly reliable since no matter where the instruction pointer lands (except perhaps the last filled byte), the CPU can resume executing a valid two-byte x86 instruction CD CD, in this case for generating software interrupt 205 (0xCD).
Weirder still, whereas CD CC CD CC is 100% interpretable--giving either INT 3 or INT 204--the sequence CC CD CC CD is less reliable, only 75% as shown, but generally 99.99% when repeated as an int-sized memory filler.
Macro Assembler Reference, 1987
A: This link has more information:
https://en.wikipedia.org/wiki/Magic_number_(programming)#Debug_values
* 0xABABABAB : Used by Microsoft's HeapAlloc() to mark "no man's land" guard bytes after allocated heap memory
* 0xABADCAFE : A startup to this value to initialize all free memory to catch errant pointers
* 0xBAADF00D : Used by Microsoft's LocalAlloc(LMEM_FIXED) to mark uninitialised allocated heap memory
* 0xBADCAB1E : Error Code returned to the Microsoft eVC debugger when connection is severed to the debugger
* 0xBEEFCACE : Used by Microsoft .NET as a magic number in resource files
* 0xCCCCCCCC : Used by Microsoft's C++ debugging runtime library to mark uninitialised stack memory
* 0xCDCDCDCD : Used by Microsoft's C++ debugging runtime library to mark uninitialised heap memory
* 0xDDDDDDDD : Used by Microsoft's C++ debugging heap to mark freed heap memory
* 0xDEADDEAD : A Microsoft Windows STOP Error code used when the user manually initiates the crash.
* 0xFDFDFDFD : Used by Microsoft's C++ debugging heap to mark "no man's land" guard bytes before and after allocated heap memory
* 0xFEEEFEEE : Used by Microsoft's HeapFree() to mark freed heap memory
A: There's actually quite a bit of useful information added to debug allocations. This table is more complete:
http://www.nobugs.org/developer/win32/debug_crt_heap.html#table
Address     Offset  After HeapAlloc()  After malloc()  During free()  After HeapFree()  Comments
0x00320FD8  -40     0x01090009         0x01090009      0x01090009     0x0109005A        Win32 heap info
0x00320FDC  -36     0x01090009         0x00180700      0x01090009     0x00180400        Win32 heap info
0x00320FE0  -32     0xBAADF00D         0x00320798      0xDDDDDDDD     0x00320448        Ptr to next CRT heap block (allocated earlier in time)
0x00320FE4  -28     0xBAADF00D         0x00000000      0xDDDDDDDD     0x00320448        Ptr to prev CRT heap block (allocated later in time)
0x00320FE8  -24     0xBAADF00D         0x00000000      0xDDDDDDDD     0xFEEEFEEE        Filename of malloc() call
0x00320FEC  -20     0xBAADF00D         0x00000000      0xDDDDDDDD     0xFEEEFEEE        Line number of malloc() call
0x00320FF0  -16     0xBAADF00D         0x00000008      0xDDDDDDDD     0xFEEEFEEE        Number of bytes to malloc()
0x00320FF4  -12     0xBAADF00D         0x00000001      0xDDDDDDDD     0xFEEEFEEE        Type (0=Freed, 1=Normal, 2=CRT use, etc.)
0x00320FF8  -8      0xBAADF00D         0x00000031      0xDDDDDDDD     0xFEEEFEEE        Request #, increases from 0
0x00320FFC  -4      0xBAADF00D         0xFDFDFDFD      0xDDDDDDDD     0xFEEEFEEE        No man's land
0x00321000  +0      0xBAADF00D         0xCDCDCDCD      0xDDDDDDDD     0xFEEEFEEE        The 8 bytes you wanted
0x00321004  +4      0xBAADF00D         0xCDCDCDCD      0xDDDDDDDD     0xFEEEFEEE        The 8 bytes you wanted
0x00321008  +8      0xBAADF00D         0xFDFDFDFD      0xDDDDDDDD     0xFEEEFEEE        No man's land
0x0032100C  +12     0xBAADF00D         0xBAADF00D      0xDDDDDDDD     0xFEEEFEEE        Win32 heap allocations are rounded up to 16 bytes
0x00321010  +16     0xABABABAB         0xABABABAB      0xABABABAB     0xFEEEFEEE        Win32 heap bookkeeping
0x00321014  +20     0xABABABAB         0xABABABAB      0xABABABAB     0xFEEEFEEE        Win32 heap bookkeeping
0x00321018  +24     0x00000010         0x00000010      0x00000010     0xFEEEFEEE        Win32 heap bookkeeping
0x0032101C  +28     0x00000000         0x00000000      0x00000000     0xFEEEFEEE        Win32 heap bookkeeping
0x00321020  +32     0x00090051         0x00090051      0x00090051     0xFEEEFEEE        Win32 heap bookkeeping
0x00321024  +36     0xFEEE0400         0xFEEE0400      0xFEEE0400     0xFEEEFEEE        Win32 heap bookkeeping
0x00321028  +40     0x00320400         0x00320400      0x00320400     0xFEEEFEEE        Win32 heap bookkeeping
0x0032102C  +44     0x00320400         0x00320400      0x00320400     0xFEEEFEEE        Win32 heap bookkeeping
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "241"
}
|
Q: Route-problem regarding Url-encoded Umlauts (using the Zend-framework) Today I stumbled upon a problem which seems to be a bug in the Zend Framework. Given the following route:
<test>
<route>citytest/:city</route>
<defaults>
<controller>result</controller>
<action>test</action>
</defaults>
<reqs>
<city>.+</city>
</reqs>
</test>
and three URLs:
*
*mysite.local/citytest/Berlin
*mysite.local/citytest/Hamburg
*mysite.local/citytest/M%FCnchen
the last URL does not match, and thus the correct controller is not called. Anybody got a clue why?
Fyi, we are using Zend Framework 1.0 (yeah, I know that's ancient, but I am not in a position to change that :-/ )
Edit: From what I hear, we are going to upgrade to Zend 1.5.6 soon, but I don't know when, so a Patch would be great.
Edit: I've tracked it down to the following line (Zend/Controller/Router/Route.php:170):
$regex = $this->_regexDelimiter . '^' .
$part['regex'] . '$' .
$this->_regexDelimiter . 'iu';
If I change that to
$this->_regexDelimiter . 'i';
it works. From what I understand, the u-modifier is for working with Asian characters. As I don't use them, I'm fine with that patch for now. Thanks for reading.
A: This is working perfectly for me:
/^[\p{L}-. ]*$/u
*
*^ Start of the string
*[ ... ]* Zero or more of the following:
*\p{L} Unicode letter characters
*- dashes
*. periods
*spaces
*$ End of the string
*/u Enable Unicode mode in PHP
EXAMPLE:
$str = 'Füße';
if (!preg_match("/^[\p{L}-. ]*$/u", $str))
{
echo 'error';
}
else
{
echo "success";
}
A: The u modifier makes the regexp expect utf-8 input. This would suggest that ZF expects utf-8 encoded input, and not ISO-8859-1 (I'm not too familiar with ZF, so I'm just guessing here).
If that's the case, you'll have to utf-8 encode the ü before using it in a URL. It would then become: mysite.local/citytest/M%C3%BCnchen
Note that since the rest of your application probably speaks ISO-8859-1 (which is the default for PHP <= 5), you will have to explicitly decode the variable with utf8_decode before you can use it.
A: The problem is the following:
Using the /u pattern modifier prevents words from being mangled, but instead PCRE skips strings of characters with code values greater than 127. Therefore, \w will not match a multibyte (non-lower-ASCII) word at all (but also won't return portions of it). From the pcrepattern man page:
In UTF-8 mode, characters with values greater than 128 never match \d, \s, or \w, and always match \D, \S, and \W. This is true even when Unicode character property support is available.
From Handling UTF-8 with PHP.
Therefore it's actually irrelevant if your URL is ISO-8859-1 encoded (mysite.local/citytest/M%FCnchen) or UTF-8 encoded (mysite.local/citytest/M%C3%BCnchen), the default regex won't match.
I also made experiments with umlauts in URLs in Zend Framework and came to the conclusion that you wouldn't really want umlauts in your URLs. The problem is, that you cannot rely on the encoding used by the browser for the URL. Firefox (prior to 3.0) for example does not UTF-8 encode URLs entered into the address textbox (if not specified in about:config) and IE does have a checkbox within its options to choose between regular and UTF-8 encoding for its URLs. But if you click on links within a page both browsers use the URL in the given encoding (UTF-8 on an UTF-8 page). Therefore you cannot be sure in which encoding the URLs are sent to your application - and detecting the encoding used is not that trivial to do.
Perhaps it's better to use transliterated parameters in your URLs (e.g. change Ä to Ae and so on). There is a really simple way to do this (I don't know if it works with every language, but I'm using it with German strings and it works quite well):
function createUrlFriendlyName($name) // $name must be an UTF-8 encoded string
{
$name=mb_convert_encoding(trim($name), 'HTML-ENTITIES', 'UTF-8');
$name=preg_replace(
array('/ß/', '/&(..)lig;/', '/&([aouAOU])uml;/', '/&(.)[^;]*;/', '/\W/'),
array('ss', '$1', '$1e', '$1', '-'),
$name);
$name=preg_replace('/-{2,}/', '-', $name);
return trim($name, '-');
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Finding the name of a variable in C I was asked a question about C last night and did not know the answer, since I have not used C much since college, so I thought maybe I could find the answer here instead of just forgetting about it.
If a person has a define such as:
#define count 1
Can that person find the variable name count using the 1 that is inside it?
I did not think so, since I thought the count would point to the 1, but I do not see how the 1 could point back to count.
A: Building on @Cade Roux's answer, if you use a preprocessor #define to associate a value with a symbol, the code won't have any reference to the symbol once the preprocessor has run:
#define COUNT (1)
...
int myVar = COUNT;
...
After the preprocessor runs:
...
int myVar = (1);
...
So as others have noted, this basically means "no", for the above reason.
A: The simple answer is no, they can't. #defines like that are dealt with by the preprocessor, and they only point in one direction. Of course, the other problem is that even the compiler wouldn't know: a "1" could correspond to anything, since multiple variables can have the same value at the same time.
A:
Can that person find the variable name "count" using the 1 that is inside it?
No
A: As I'm sure someone more eloquent and versed than me will point out, #define'd things don't survive into the compiled output; what you have is a pre-processor macro which will go through the source and change every instance of 'count' it finds into a '1'.
However, to shed more light on the question you were asked: because C is compiled down to machine code, you are never going to have the reflection and introspection you have with a language like Java or C#. All the naming is lost after compilation unless you have a framework built around your source/compiler to do some nifty stuff.
Hope this helps. (excuse the pun)
A: Unfortunately this is not possible.
#define statements are instructions for the preprocessor, all instances of count are replaced with 1. At runtime there is no memory location associated with count, so the effort is obviously futile.
Even if you're using variables, after compilation there will be no remnants of the original identifiers used in the program. This is generally only possible in dynamic languages.
A: One trick used in C is using the # syntax in macros to obtain the string literal of the macro parameter.
#include <stdio.h>

#define displayInt(val) printf("%s: %d\n", #val, val)
#define displayFloat(val) printf("%s: %f\n", #val, val)
#define displayString(val) printf("%s: %s\n", #val, val)
int main(){
int foo=123;
float bar=456.789;
char thud[]="this is a string";
displayInt(foo);
displayFloat(bar);
displayString(thud);
return 0;
}
The output should look something like the following:
foo: 123
bar: 456.789
thud: this is a string
A: #define count 1 is a very bad idea, because it prevents you from naming any variables or structure fields count.
For example:
void copyString(char* dst, const char* src, size_t count) {
...
}
Your count macro will cause the variable name to be replaced with 1, preventing this function from compiling:
void copyString(char* dst, const char* src, size_t 1) {
...
}
A: C defines are pre-processor directives, not variables. The pre-processor will go through your C file and replace every place you write count with whatever you've defined it as, before compiling. Look at the obfuscated C contest entries for some particularly enlightened uses of this and other pre-processor directives.
The point is that there is no 'count' to point at a '1' value. It's just a simple find/replace operation that happens before the code is even really compiled.
I'll leave this editable for someone who actually really knows C to correct.
A: count isn't a variable. It has no storage allocated to it and no entry in the symbol table. It's a macro that gets replaced by the preprocessor before passing the source code to the compiler.
On the off chance that you aren't asking quite the right question, there is a way to get the name using macros:
#define SHOW(sym) (printf(#sym " = %d\n", sym))
#define count 1
SHOW(count); // prints "count = 1"
The # operator converts a macro argument to a string literal.
A: What do you mean by "finding"?
The line
#define count 1
defines a symbol "count" that has value 1.
The first step of the compilation process (called preprocessing) will replace every occurence of the symbol count with 1 so that if you have:
if (x > count) ...
it will be replaced by:
if (x > 1) ...
If you get this, you may see why "finding count" is meaningless.
A: #define is a pre-processor directive, as such it is not a "variable"
A: What you have there is actually not a variable, it is a preprocessor directive. When you compile the code the preprocessor will go through and replace all instaces of the word 'count' in that file with 1.
You might be asking: if I know the value 1, can I find that count points to it? No. Because the relationship between variable names and values is not a bijection, there is no way back. Consider
int count = 1;
int count2 = 1;
perfectly legal, but which name should 1 resolve to?
A: In general, no.
Firstly, a #define is not a variable, it is a compiler preprocessor macro.
By the time the main phase of the compiler gets to work, the name has been replaced with the value, and the name "count" will not exist anywhere in the code that is compiled.
For variables, it is not possible to find out variable names in C code at runtime. That information is not kept. Unlike languages like Java or C#, C does not keep much metadata at all; it compiles down to assembly language.
A: Directives starting with "#" are handled by the pre-processor, which usually does text substitution before passing the code to the 'real' compiler. As such, there is no variable called count; it's as if all "count" strings in your code were magically replaced with the "1" string.
So, no, no way to find that "variable".
A: In the case of a macro, the source is preprocessed and the resulting output is compiled, so there is absolutely no way to find out that name: after the preprocessor finishes its job, the resulting file contains '1' instead of 'count' everywhere.
So the answer is no.
A: If they are looking at the C source code (which they will be in a debugger), then they will see something like
int i = count;
at that point, they can search back and find the line
#define count 1
If, however, all they have is a variable iDontKnowWhat, and they can see it contains 1, there is no way to track that back to 'count'.
Why? Because the #define is evaluated at preprocessor time, which happens even before compilation (though for almost everyone, it can be viewed as the first stage of compilation). Consequently the source code is the only thing that has any information about 'count', like knowing that it ever existed. By the time the compiler gets a look in, every reference to 'count' has been replaced by the number '1'.
A: It's not a pointer, it's just a string/token substitution. The preprocessor replaces all the #defines before your code ever compiles. Most compilers include a -E or similar argument to emit precompiled code, so you can see what the code looks like after all the #directives are processed.
More directly to your question, there's no way to tell that a token is being replaced in code. Your code can't even tell the difference between (count == 1) and (1 == 1).
If you really want to do that, it might be possible using source file text analysis, say using a diff tool.
A: The person asking the question (was it an interview question?) may have been trying to get you to differentiate between using #define constants versus enums. For example:
#define ZERO 0
#define ONE 1
#define TWO 2
vs
enum {
ZERO,
ONE,
TWO
};
Given the code:
x = TWO;
If you use enumerations instead of the #defines, some debuggers will be able to show you the symbolic form of the value, TWO, instead of just the numeric value of 2.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127391",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: .NET SOAP Common types Is there a way when creating web services to specify the types to use? Specifically, I want to be able to use the same type on both the client and server to reduce duplication of code.
Over simplified example:
public class Name
{
public string FirstName {get; set;}
public string Surname { get; set; }
public override string ToString()
{
return string.Concat(FirstName, " ", Surname);
}
}
I don't want to have to recode pieces of functionality in my class. The other thing is that any code that exists that manipulates this class won't work client side, as the client-side class that is generated would be a different type.
A: Okay, I see now that this has been an explicit design decision on the part of SOAP, so you're not actually supposed to do this. I found the following page that explains why:
Services share schema and contract, not class. Services interact solely on their expression of structures through schemas and behaviors through contracts. The service's contract describes the structure of messages and ordering constraints over messages. The formality of the expression allows machine verification of incoming messages. Machine verification of incoming messages allows you to protect the service's integrity. Contracts and schemas must remain stable over time, so building them flexibly is important.
Having said that, there are two other possibilities:
*
*Generate the web references in Visual Studio or using wsdl.exe. Then go into the generated Reference.cs (or .vb) file and delete the type explicitly. Then redirect to the type that you want, which is located in another assembly.
*You can share types between web services on the client side by using wsdl.exe with the /sharetypes parameter.
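For example, a single run such as the following (the service URLs here are placeholders) generates one proxy file in which structurally identical types from the two services are emitted only once:
wsdl.exe /sharetypes /out:Proxies.cs http://server/ServiceA.asmx?WSDL http://server/ServiceB.asmx?WSDL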
A: If you want to have a type or structure shared between your web service and your client, add a public struct to your web service project like so:
public struct Whatever
{
public string A;
public int B;
}
then add a method to your web service that has this struct as its return type:
[WebMethod]
public Whatever GiveMeWhatever()
{
Whatever what = new Whatever();
what.A = "A";
what.B = 42;
return what;
}
After you update your client's web reference, you'll be able to create structs of type Whatever in your client application like so:
Webreference.Whatever what = new Webreference.Whatever();
what.A = "that works?";
what.B = -1; // FILENOTFOUND
This technique lets you maintain the definition of any structures you need to pass back and forth in one place (the web service project).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127395",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Java annotations for design patterns? Is there a project that maintains annotations for patterns?
For example, when I write a builder, I want to mark it with @Builder.
Annotating in this way immediately provides a clear idea of what the code implements. Also, the Javadoc of the @Builder annotation can reference explanations of the builder pattern. Furthermore, navigating from the Javadoc of a builder implementation to @Builder Javadoc is made easy by annotating @Builder with @Documented.
I've been slowly accumulating a small set of such annotations for patterns and idioms that I have in my code, but I'd like to leverage a more complete existing project if one exists. If there is no such project, maybe I can share what I have by spinning it off into a separate pattern/idiom annotation project.
Update: I've created the Pattern Notes project in response to this discussion. Contributions welcome! Here is @Builder
A: This seems like a misuse of annotations to me. Sure, I could see why you might want to note what design pattern a class is helping to implement, but just using the Javadoc and/or the name of the class seems more appropriate. The name of the pattern that you're using is of no actual importance to the code itself... patterns are just a guide for an often used way of solving a problem. A comment would suffice, rather than creating a new file for every pattern you use.
A: This is an interesting solution, but I keep wondering: what's really the problem you're solving with this? Or, in other words, what do you get from using something like this that you don't get from a proper comment on top of your class about its usage?
I can think of a few cons, but can't think of benefits apart from this being a nice standardized way to document code.
The cons would be:
*
*one more thing for programmers to think about, which is never a good thing
*unannotated patterns might be confusing - someone probably forgot to document it, but maybe it's not a pattern..?
*can you really annotate all patterns..? what about patterns which are not tied to a single class/method, for example three-tier architectural pattern, or thread pool, or even MVC?
A: Michael Hunger and I have started an open-source project for annotations to specify what patterns the classes belong to. We are right at the beginning stages, but would love to hear your input.
I would like to go with the KISS principle in order to make it as easy as possible for people to use the annotations. For example, if you are writing an adapter, you can simply say:
@AdapterPattern
public class EnumerationIteratorAdapter<T> implements Enumeration<T> {
...
}
Of course, you can specify more information if you want, for example the role, the participants and a comment. We hope that this will make it easy for developers to mark-up their classes clearly.
The project home is on http://www.jpatterns.org from where you can also access the initial source tree. Please contact me on heinz at javaspecialists dot eu if you would like to contribute to the project.
Heinz (The Java Specialists' Newsletter)
A: I just stumbled on another article that is interesting for you: Design Markers - Explicit Programming for the Rest of Us which talks about marker interfaces, like Serializable.
In their words:
...just because a class declares that it "implements Serializable" doesn't mean that it has correctly implemented the Serializable contract.
Since Java can't really tell if the contract has been met, using the marker interface is more of an explicit pledge by the programmer that it has.
The overlooked benefit of marker interfaces is that they also document the intention that a contract should be met...
Why haven't design choices traditionally been recorded in source code? Mostly, because there has been no clear place to put them.
Even if each "typesafe enumeration" class had a comment noting that it followed that pattern, any elaboration (much less tutorial information) would not have been added because one either had to copy it repeatedly, or worse, place it sporadically in arbitrary spots.
When creating the JavaDoc comments attached to each Design Marker interface, one can put in more detail than is typical because the comments do not need to be repeated anywhere else.
They also mention some downsides, this a good food for thought!
A: Firstly, what you want to do is documenting an intention (or intentions).
So, why not use a generic version of your annotation, something like @UsePattern, that uses @Documented, which is a marker annotation (nice tutorial from IBM)? What I don't like is that the annotation is kept at runtime, which is a waste unless you want to affect program semantics.
Or a Custom Javadoc tag which seems more appropriate.
Some information about the comparison: Comparing Annotations and Javadoc Tags, with a nice one-sentence summary:
"In general, if the markup is intended to affect or produce documentation, it should probably be a javadoc tag; otherwise, it should be an annotation."
There is/was also some debate on documentation as annotation or as javadoc tags.
A: What would be better would be to use annotations to actually build the boilerplate for a Builder. Let's face it, most are pretty standard.
@Builder("buildMethodName")
class Thing {
String thingName;
String thingDescr;
}
Typical useage:
Thing thing =
new Thing.Builder().setThingName("X").setThingDescr("x").buildMethodName();
A: Else there is this 2002 Computer Science paper: Design Pattern Implementation in Java and AspectJ. It was presented at OOPSLA 2002, which should give an indication of its quality.
A nice quote from it:
... the mere existence of classes that exclusively contain pattern code serve as records of what patterns are being used. In the AspectJ cases, we observe two additional improvements. First, all code related to a particular pattern instance is contained in a single module (which defines participants, assigns roles, etc.). This means that the entire description of a pattern instance is localized and does not “get lost” [21] or “degenerate” [7] in the system. Secondly, with the current AspectJ IDE support, all references, advised methods etc. are hyperlinks that allow a developer an overview of the assignment of roles and where the conceptual operations of interest are...
A: Seems like a misuse of annotations to me. Unless there is the intention of implementing behavior with those annotations, I'd use the KISS principle: Plain ol' javadoc does fine for documenting what the artifact is supposed to do/be; custom doclets for extending javadoc; and google for those who want to know what a X or Y pattern is for (or a link to it somewhere on the web.)
There are excellent, quasi-official explanations for most patterns out there. Why writing your own? Is there additional information that is crucial for the project? Using annotations to make sure one can navigate from one class' javadoc to a custom-written pattern javadoc is like the tale of the CEO who assembled a development team for creating a report that combines the totals of two existing quarterly reports - it was too difficult (and yet cheaper) to add the totals of the two with a calculator 4 times a year :-/
A: If you can also write an annotation processor that will verify certain properties of the pattern - for example checking for common mistakes when implementing the pattern - this would be very useful. Documentation for the compiler as well as the programmer.
A: First of all, this is a very good idea, and I'm only hanging out here because I googled for a "design pattern annotation" library. Good thing I found this! I will check it out and give feedback on it soon.
To all the skeptics: sorry, but obviously most of you are not very experienced in the topic of design patterns. E.g. Martin Harris's post from Dec 3 '09 at 21:56 ...
I understand you wanted to keep your "example" simple. But that is not a Builder in the sense of the design pattern.
The same goes for those who don't see the usefulness at all: if the relations of classes regarding their roles in design patterns are annotated on the class, I can use a generator to craft the boilerplate code. I see all relations on top of the class in the source code and can use my IDE shortcuts to navigate to the relevant classes.
If you have learned to think in patterns and all patterns are obvious in the source code (via comments or annotations) you can grasp a system composed of 200 classes in less than an hour.
Regarding suggestions like using @UsePattern() or @Builder("buildMethodName") etc. ... here we have to ask how to make them typesafe. After all, those strings are prone to typos.
One advantage of proper annotations is that you can annotate roles ... most design patterns do not consist of a single class (like Singleton) but of several classes working together! E.g. if you have a builder, the result (annotated with @Product) might also be a @Composite. So the parts the builder is putting together will be a @Component (in regard to the @Composite) and a @Part (in regard to the @Builder and the @Product).
Perhaps the best argument for such annotations would be java.lang.Class, so you can express that.
Anyway, just a few thoughts ... I can't wait to get home and play with the stuff you have so far ^^
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
}
|
Q: Source code search with Google Desktop Is there an indexing plugin for GDS that allows for source code search? I see some for specific types (Java, C++, ...) and one for "any text". These are nice, but I would like one that allows for many/configurable extensions (HTML, CSS, JS, VB, C#, Java, Python, ...). A huge bonus would be to allow for syntax highlighting (http://pygments.org/) in the cache.
A: I just found Dropout and it seems to work great. Put Dropout in any folder and it will index all files in that folder. I put it in my Projects folder and it crawled all my code. Very fast and flexible search. Dropout
A: You could use OpenGrok or some other code-specific search engine instead.
I wrote a quick review of some of them some time ago.
A: It has been a long time, but the last time I tried to use Google Desktop Search for searching code, I found it quite inappropriate for that task, as I outlined at [http://perlmonks.org/?node_id=490310], the gist of which is that GDS (silently) only indexed a tiny fraction of many source code files (and made it quite a challenge to figure out why searching so often failed to find so much of what was in source code files).
I found Copernic Desktop Search worked better on code files (but I also had trouble with later versions of it being buggy in not finding all matches so I've been staying with version 2.1.1). But these days I don't use it much (mostly because I don't have permission to install such things on the laptop provided by my new employer).
A: You can try out Larry's Any Text File Indexer. You can specify a list of extensions at install time and it will do full text search on those file types.
A: I'm just giving this a go:
http://desktop.google.com/plugins/i/java.html?hl=en
..also you can search for things in your Java tree using the following syntax in Google Desktop:
<YOUR SEARCH> filetype:java under:"C:\hft\trunk"
..where I keep my code in "C:\hft\trunk"
A: This is not a Google Desktop plugin, but works for what we need.
We have started using http://svnquery.tigris.org/ and it seems to work and is very fast. I wish it supported multiple repositories per site. We have a repository per project, so currently I have to create a virtual directory for each project we have. Not a show stopper, just something we need to automate in our project setup script.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: Controls do not appear on dynamically created user control I have a user control named DateTimeUC which has two textboxes in its markup:
<asp:TextBox ID="dateTextBox" runat="server"></asp:TextBox>
<asp:TextBox ID="timeTextBox" runat="server"></asp:TextBox>
I am dynamically creating this control in another user control:
Controls.Add(GenerateDateTime(parameter));
private DateTimeUC GenerateDateTime(SomeParameter parameter)
{
DateTimeUC uc = new DateTimeUC();
uc.ID = parameter.Name;
return uc;
}
But when I render the page, DateTimeUC renders nothing. I checked it like this:
protected override void Render(HtmlTextWriter writer)
{
base.Render(writer);
StringBuilder builder = new StringBuilder();
StringWriter swriter = new StringWriter(builder);
HtmlTextWriter hwriter = new HtmlTextWriter(swriter);
base.Render(hwriter);
string s = builder.ToString();
}
s is empty and Controls.Count is 0. What am I doing wrong?
A: You must use the LoadControl( "your_user_control_app_relative_path.ascx" ) method instead of "DateTimeUC uc = new DateTimeUC();"
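A minimal sketch of what the change might look like (the .ascx path here is an assumption -- use your control's actual app-relative path):
private DateTimeUC GenerateDateTime(SomeParameter parameter)
{
    // LoadControl parses the .ascx markup and builds the child controls
    // (the two textboxes); instantiating the code-behind class with 'new'
    // skips that step, which is why nothing rendered.
    DateTimeUC uc = (DateTimeUC)LoadControl("~/Controls/DateTimeUC.ascx");
    uc.ID = parameter.Name;
    return uc;
}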
A: I ran into this problem myself a while back. You need to use the LoadControl() method. Check out this page on it.
A: I think you want to add your control to the page's form, not to the user control itself.
your code:
Controls.Add( GenerateDateTime(parameter) );
try:
Page.Form.Controls.Add( GenerateDateTime(parameter) );
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: History not visible in VSS, now what? I've been tasked with "fixing" an old VSS database. At this point in time, we are considering migrating to TFS, but for the time being, if we could get VSS back to a stable condition, it would provide some peace of mind.
We're starting to get worried that VSS is going to die on us, because when we try to view the history of any file, nothing seems to happen -- the dialogue appears to just be dismissed. That said, we don't seem to have any problems doing check outs and check ins from Visual Studio, and comparing a changed file to the latest from VSS seems to work (though I doubt this is a functionality of VSS and more of Visual Studio).
I made a backup of the project folder, and ran the Analyze utility, which said it didn't find any problems. I'm not sure what else to try. Help!
We're running VSS 2005 on Windows 2000.
A: Is this the issue?
http://support.microsoft.com/kb/910793/en-us?spid=10433&sid=global
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127423",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Using OpenID as a login for my website - redundant providers How do I support redundancy on my OpenID login website?
For instance, I have users that demand 100% uptime (yeah, right, but let's get as close as we can).
Some of them use less available providers (ie, myphpid on their own website, or an ID on a startup which has frequent downtime). Now I can shuttle them to a more reliable provider, but I also want to have some redundancy.
One solution I hope I can deploy, but don't understand enough about OpenID to try is:
Set up several phpmyids on different hosting services with the same credentials (hash/key/etc) but different domains (ideally I'd have a round robin DNS and the same name, but I also want to account for the case where they have different domains).
Will this work? In other words, I have the exact same phpmyid files, including the credentials, on different servers. Can I use example.com/id and example2.com/id and expect it to look the same on my end so I don't have to link multiple OpenId accounts to each user in my system?
While I use the example of phpmyid, the question is more general - are the credentials what's important, or is the domain/ip/??? also linked in such a way as to prevent this?
Is there, or can there be, a standard that would allow one to move one's OpenID from one provider to another without having to delink and relink on each website they used that openid on?
-Adam
A: An OpenID is a URL. example.com/id and example2.com/id are two different OpenIDs, regardless of which provider hosts them or what credentials the users share with those providers. The reliability of an OpenID really comes down to the reliability of hosting that URL. Yes, you can define fallback providers in your XRDS document, but you still have to be able to discover that document from the OpenID URL in the first place.
So the reliability techniques for OpenID are, for the most part, the same as for any other web resource with a fixed URL. And, as a relying party, there isn't a lot you can do about that. As you said, it's up to your users to choose an OpenID provider that suits their own requirements. You might suggest to your users that they ask their OpenID provider if a service level agreement is available.
The one thing you can do is allow your users to associate their account in your application with multiple OpenIDs.
A: I recently posted an answer to the question "How do I use more than one OpenID?" which may address your concerns.
A professional provider should have a redundant service, commonly achieved by having a load balancer in front of two separate servers, but users should host an XRDS document on a custom domain (or by using an i-name) with references to different providers (for fallback purposes).
You could also host identical data on different servers, but round robin won't help you. This is because round robin just acts as a simple load balancer (it doesn't check whether a host is accessible).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: GNU compiler warning "class has virtual functions but non-virtual destructor" I have defined an interface in C++, i.e. a class containing only pure virtual functions.
I want to explicitly forbid users of the interface to delete the object through a pointer to the interface, so I declared a protected and non-virtual destructor for the interface, something like:
class ITest{
public:
virtual void doSomething() = 0;
protected:
~ITest(){}
};
void someFunction(ITest * test){
test->doSomething(); // ok
// deleting object is not allowed
// delete test;
}
The GNU compiler gives me a warning saying:
class 'ITest' has virtual functions but non-virtual destructor
Once the destructor is protected, what is the difference in having it virtual or non-virtual?
Do you think this warning can be safely ignored or silenced?
A: Some of the comments on this answer relate to an earlier answer I gave, which was wrong.
A protected destructor means that it can only be called from a base class, not through delete. That means that an ITest* cannot be directly deleted, only a derived class can. The derived class may well want a virtual destructor. There is nothing wrong with your code at all.
However, since you cannot locally disable a warning in GCC, and you already have a vtable, you could consider just making the destructor virtual anyway. It will cost you 4 bytes for the program (not per class instance), maximum. Since you might have given your derived class a virtual dtor, you may find that it costs you nothing.
A: It's more or less a bug in the compiler. Note that in more recent versions of the compiler this warning does not get thrown (at least in 4.3 it doesn't). Having the destructor be protected and non-virtual is completely legitimate in your case.
See here for an excellent article by Herb Sutter on the subject. From the article:
Guideline #4: A base class destructor should be either public and virtual, or protected and nonvirtual.
A: If you insist on doing this, go ahead and pass -Wno-non-virtual-dtor to GCC. This warning doesn't seem to be turned on by default, so you must have enabled it with -Wall or -Weffc++. However, I think it's a useful warning, because in most situations this would be a bug.
A: It's an interface class, so it's reasonable you should not delete objects implementing that interface via that interface. A common case of that is an interface for objects created by a factory which should be returned to the factory. (Having objects contain a pointer to their factory might be quite expensive).
I'd agree with the observation that GCC is whining. Instead, it should simply warn when you delete an ITest*. That's where the real danger lies.
A: My personal view is that you'd doing the correct thing and the compiler is broken. I'd disable the warning (locally in the file which defines the interface) if possible,
I find that I use this pattern (small 'p') quite a lot. In fact I find that it's more common for my interfaces to have protected dtors than it is for them to have public ones. However I don't think it's actually that common an idiom (it doesn't get spoken about that much) and I guess back when the warning was added to GCC it was appropriate to try and enforce the older 'dtor must be virtual if you have virtual functions' rule. Personally I updated that rule to 'dtor must be virtual if you have virtual functions and wish users to be able to delete instances of the interface through the interface else the dtor should be protected and non virtual' ages ago ;)
A: If the destructor is virtual, it makes sure that the base class destructor is also called before doing the cleanup; otherwise some leaks can result from that code. So you should make sure that the program has no such warnings (preferably no warnings at all).
A: If you had code in one of ITest's methods that tried to delete itself (a bad idea, but legal), the derived class's destructor wouldn't be called. You should still make your destructor virtual, even if you never intend to delete a derived instance via a base-class pointer.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127426",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "59"
}
|
Q: What are the naming conventions that you use while coding? What are the naming conventions that you use while coding?
A: I hope we will not discuss prefixes for field names and brace styles here :)
Here is my bible for .NET:
Also MSDN gives solid guidelines.
Another useful source is MS Internal Coding Guidelines
A: Here's a list of general naming conventions from MSDN.
I tend to just go-with-the-flow, however. Whatever standards are currently in place, it's usually easiest to just go with them and maybe slowly shift it over time. It's not really practical to just come into a project with your own idea of "standards" and try to implement them.
It doesn't REALLY matter what standards are used, imo -- just that there are some and people know what they are.
A: I use a combination of Hungarian, camel case, and other rules I come up with in the beginning of a project. Like right now:
*
*Methods are upper case (DoThis)
*variables are camel case (thisThing)
*page level variables are prefaced with _ (_thisWorksEverywhere)
*regions are all lower case (#region foreign properties)
*Properties and Objects are uppercase (Object.Property)
*Foreign properties are prefaced by _ (Object._ForeignGroups)
*Controls are Hungarian to an extent, like (txtTextBox) and (rptRepeater). I'm not too strict as to what's customary because "Watermark" can be wm or wk or whatever, as long as they all match each other accross my application.
...etc. Some things are standard, others are up to interpretation, but the most important thing is consistency across your application.
A: Hungarian notation can be used. I don't bother myself, but I give various things (variables, controls, etc.) sensible names.
For example, I use a Hungarian-style prefix for control names such as txt for TextBoxes, btn for Buttons, pic for PictureBoxes, lbl for Labels, etc. That helps to easily identify what a control is.
For function names I try and use sensible explanatory names, but nothing with any particular rules. For variable names again I just use explanatory names but nothing special.
A: To add on to the answer from @Aku, the authors of the Framework Design Guidelines have published an online digest version of their guidelines, with an emphasis on naming conventions.
Framework Design Guidelines Digest v2
Download here
Consistency is key. Depending on the size of your development team, using a consistent and documented convention will make it easier to pick up someone else's code, and for others to pick up your own.
A: Folks, please don't post answers like "I like __field" or "I like m__field". It's a very personal and subjective question without a single answers.
If you have any guidelines, it's already a big win. The worst thing in a dev team is a lack of common conventions.
It would be nice if we tried to describe some benefits of a given guideline.
For example:
prefixing fields with underscore can improve auto-completion with intellisense
A: Pick one and be consistent. Changing name styles leads to confusion.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to write a class library for OLE Automation? I have Excel add-in which I add so many class modules that it is now very bulky. I want to convert it into a type library or a COM package so that I can re-use it for the other apps in the MS Office suite.
I ported the add-in to Visual Studio as a class library project, but Excel Automation doesn't recognize the classes in the compiled .dll file. Intuitively, I think I need a manifest, an interface, or something like that in my code.
What do I need to know in order to expose a class's methods and properties for use in OLE Automation?
A: Since you used the phrase "manifest", I am assuming you are assembling this DLL using a .NET development platform (VS2003, VS2005, or VS2008) as opposed to VS 6.0.
This link provides a detailed set of steps required to register a .NET assembly for use as COM component.
The one thing the article doesn't mention that I routinely do is create my own GUIDs. Use the Create GUID item in the Tools menu then insert them above the classes, interfaces, and enums you want exposed for COM.
[Guid("3838ADC1-E901-4003-BD0C-A889A7CF25A1")]
public interface IMyCOMClass {
void MyMethod();
}
[Guid("476BDEB6-B933-4ed5-8B86-7D9330A59356"),
ClassInterface(ClassInterfaceType.None)]
public class MyCOMClass : IMyCOMClass {
public void MyMethod() {
//implementation here
}
}
The second thing I do is use a separate interface for the COM portion that is implemented by the class. The reasoning for doing this has to do with the breakability of COM when the interface changes, think DLL Hell.
Hope this helps,
Bill.
A: (Assuming it's a .NET project)
Besides having to add the Guids to your interfaces and classes, you also need to mark them with the ComVisible attribute (unless you've marked the whole assembly with it). Also, you need to use tlbexp.exe to export the metadata as a COM type library for referencing in unmanaged clients.
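As a sketch of how this fits with the attributes from the previous answer (the GUIDs are placeholders -- generate your own -- and the assembly name below is hypothetical):

using System.Runtime.InteropServices;

[ComVisible(true)]
[Guid("3838ADC1-E901-4003-BD0C-A889A7CF25A1")]
public interface IMyCOMClass
{
    void MyMethod();
}

[ComVisible(true)]
[Guid("476BDEB6-B933-4ed5-8B86-7D9330A59356"),
 ClassInterface(ClassInterfaceType.None)]
public class MyCOMClass : IMyCOMClass
{
    public void MyMethod()
    {
        // implementation here
    }
}

After building, "tlbexp MyAssembly.dll /out:MyAssembly.tlb" produces the type library, while "regasm MyAssembly.dll /tlb" performs the COM registration and the export in one step.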
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127440",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Contributing to Python I'm a pretty inexperienced programmer (can make tk apps, text processing, sort of understand oop), but Python is so awesome that I would like to help the community. What's the best way for a beginner to contribute?
A: *
*Add to the docs. They are downright crappy.
*Help out other users on the dev and user mailing lists.
*TEST PYTHON. Bugs in programming languages are really bad, and I have seen someone discover at least one bug in Python.
*Frequent the #python channel on irc.freenode.net
A: Build something cool in Python and share it with others. Small values of cool are still cool. Not everyone gets to write epic, world-changing software.
Every problem solved well using Python is a way of showing how cool Python is.
A: I guess one way would be to help with documentation (translation, updating) until you are familiar enough with the language. Also, following the dev and user mailing lists would give you a pretty good idea of what is being done and what needs to be done by the community.
A: I see two ways of going about it: working on Python directly or working on something that utilizes Python
Since you're a beginner, you're probably hesitant to work on the core Python language or feel that you can't contribute in a meaningful way, which is understandable. However, as a beginner, you're in a good position to help improve documentation and other items that are essential to learning Python.
For example, the Python tutorial is less of a tutorial (in the standard sense) and more of a feature listing, at least in my opinion. When I tried to learn from it, I never got the feeling that I was building up my knowledge, like creating an application. It felt more like I was being shown all the parts that make up Python but not how to put them together into a cohesive structure.
Once I became more comfortable with the language (mostly through books and lots of practice), I eventually wrote my own tutorial, trying to provide not only the technical information but also lessons learned and "newbie gotchas".
Alternatively, you can contribute to the Python world by using Python in programs. You can contribute to projects already established, e.g. Django, PyGame, etc., or you can make your own program to "scratch an itch". Either way, you not only build your knowledge of Python but you are giving back to the community.
Finally, you can become an advocate of Python, encouraging others to learn the language. I kept suggesting to my supervisor at my last job to use Python rather than Java when a considering what to use for a new project. I tell everyone I know about the joys of Python and encourage them to give it a try. I convinced the administrator of a computer forum I frequent to create a section for Python. And, as I already said, I wrote a tutorial for Python and I'm working on a new one for wxPython.
There are many ways you can contribute to Python that aren't necessarily programming related. As your programming skills grow, you may want to move further into code contributions. But you may gain more satisfaction by helping others find the same joy you found in Python.
A: If you aren't up to actually working on the Python core, there are still many ways to contribute.. 2 that immediately come to mind is:
work on documentation.. it can ALWAYS be improved. Take your favorite modules and check out the documentation and add where you can.
Reporting descriptive bugs is very helpful to the development process.
A: Get involved with the community: http://www.python.org/dev/
A: Start by contributing to a Python project that you use and enjoy. This can be as simple as answering questions on the mailing list or IRC channel, offering to help with documentation and test writing or fixing bugs.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127454",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: Using shell_exec('passwd') to change a user's password I need to be able to change the users' password through a web page (in a controlled environment).
So, for that, I'm using this code:
<?php
$output = shell_exec("sudo -u dummy passwd testUser testUserPassword");
$output2 = shell_exec("dummyPassword");
echo $output;
echo $output2;
echo "done";
?>
My problem is that this script is not changing the password for the user "testUser".
What am I doing wrong?
Thanks
A: Another option is to have a shell script, say called passwd_change.sh somewhere that looks like this:
#!/usr/bin/expect -f
set username [lindex $argv 0]
set password [lindex $argv 1]
spawn passwd $username
expect "(current) UNIX password: "
send "$password\r"
expect "Enter new UNIX password: "
send "$password\r"
expect "Retype new UNIX password: "
send "$password\r"
expect eof
Then in your php code do:
<?php
shell_exec("sudo -u root /path/to/passwd_change.sh testUser testUserPass");
?>
A: I'm not familiar enough with PHP to tell you how to fix it, but your problem is that the two shell_exec commands are entirely separate. It appears as though you're trying to use the second command to pipe input to the first one, but that's not possible. The first command shouldn't return until after that process has executed, when you run the second one it will attempt to run the program dummyPassword, which we can probably expect to fail.
A: Use proc_open, which will let you interact with the process's stdin.
See this comment in particular at the manual: http://www.php.net/manual/en/function.proc-open.php#58044
A: The first response is correct. You probably want to use popen() or some other function that will return a pipe, which you can write to just like a file opened with fopen() or file().
<?php
$pipe = popen("sudo -u dummy passwd testUser testUserPassword", 'r');
fwrite($pipe, "dummyPasswd\r\n");
pclose($pipe);
echo "done";
?>
I haven't tested that, but it's the general idea of what you seem to be going for. You'll notice that this setup doesn't provide for the output from the commands you executed. For that, you'll need to use proc_open() which is a little harder to work with but does provide bi-directional support.
A: Use chpasswd:
$tmpfname = tempnam('/tmp/', 'chpasswd');
$handle = fopen($tmpfname, "w");
fwrite($handle, "$username:".crypt($password)."\n");
fclose($handle);
shell_exec("sudo sh -c \"chpasswd -e < $tmpfname\"");
Beware! If somebody gets control of $username, they can change any password on the system.
A: You should use the crypt() function to encrypt the password. Then you can call the usermod program like this: usermod --password encryptedpassword username.
The most common way to encrypt a UNIX login password is like this:
crypt('password', '$1$salt1234$')
(Where salt1234 is an eight letter salt)
A: An easy way I know of, which works (at least for Debian 4.0r5), is:
#!/bin/bash
USER="root"
NEWPASS="bullsheit123"
echo $USER:$NEWPASS | chpasswd
echo $?
Just adapt this to the php script and it should work fine.
A: I know it is way too late, but this is for people still searching for an answer. This is what we use. Extremely simple.
file_put_contents("passd", "$pass\n$pass\n");
echo "$uname: $pass\n";
`passwd $uname --stdin < passd`;
`rm -rf passd`;
A: I prefer using 2 separate processes: http://sylnsr.blogspot.com/2012/09/keep-unix-password-in-sync-with.html
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127459",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: SQL Server Management Studio won't start A coworker of mine has this problem, apparently after installing Re#, which seems totally irrelevant. But perhaps it isn't.
Could not load file or assembly "SqlManagerUi, Version=9.0.242.0..." or one of its dependencies. The module was expected to contain an assembly manifest. (mscorlib).
Why is this?
Thanks
A: I had a similar problem: I could not open SQL Server Management Studio.
These steps worked for me:
In file C:\Program Files (x86)\Microsoft SQL Server Management Studio 18\Common7\IDE\Ssms.exe.config, erase the item <NgenBind_OptimizeNonGac enabled="1" />.
Example:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<!-- ...snip... -->
<runtime>
<!-- ...snip... -->
<!-- Remove this line (~line 38) -->
<NgenBind_OptimizeNonGac enabled="1" />
<!-- ...snip... -->
</runtime>
<!-- ...snip... -->
</configuration>
I hope this helps.
A: Reinstalling the .NET Framework didn't help me, but these next steps did. I had to completely uninstall and reinstall the shared components, but it's pretty hard to find out how to do that.
Make sure you have the installation media handy. In my case that's the unzipped folder from the downloaded SQLEXPRWT_x64_ENU ed 2014.exe
*
*First, through your Control panel -> Uninstall a program, find 'Microsoft SQL Server 2014 (x64)'.
*Right-click and select Uninstall/Change.
*When you're presented with a dialog with the options to 'Add', 'Repair' or 'Remove' the product, select the last (don't worry, your SQL Server won't be touched if you follow these steps carefully).
*From the initially blank drop-down list, select << Remove shared features only >>.
*Follow all steps from there, and you are now able to reinstall the shared features.
A: Management studio requires the .Net framework. Try removing/reinstalling it.
A: Was there any installing/uninstalling of VS involved? Don't get me started on SQL 2005 and VS install conflicts.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: looking for simulated annealing implementation in VB Is anyone aware of a reasonably well documented example of simulated annealing in Visual Basic that I can examine and adapt?
A: This project looks pretty well documented: http://www.codeproject.com/KB/recipes/simulatedAnnealingTSP.aspx. It's C# but contains only one important source file (TravellingSalesmanProblem.cs) so it's pretty easy to run it through a converter. Maybe: http://labs.developerfusion.co.uk/convert/csharp-to-vb.aspx?
MSDN magazine also had an interesting article on neural networks. As I understand simulated annealing, you can add it to other function estimation methods (like neural nets). So you could add simulated annealing to the MSDN VB code by shrinking the Momentum over time. The network starts 'hot' by backpropagating error with a large Momentum and slowly 'cools' by shrinking the Momentum and thus reducing the effect of output error in backpropagation.
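If you want something small to start from, below is a minimal, generic simulated-annealing loop sketched in C# (easy to run through the same converter mentioned above). The energy and neighbour functions are placeholders you would supply yourself, and the geometric cooling schedule is just one common choice:

using System;

static class Annealer
{
    // Generic simulated annealing: minimizes 'energy' by repeatedly
    // proposing neighbouring states and accepting worse ones with a
    // probability that shrinks as the temperature cools.
    public static T Minimize<T>(T start,
                                Func<T, double> energy,
                                Func<T, Random, T> neighbour,
                                double temp = 1.0,
                                double cooling = 0.995,
                                int steps = 100000)
    {
        var rng = new Random();
        T current = start, best = start;
        double eCurrent = energy(current), eBest = eCurrent;

        for (int i = 0; i < steps; i++, temp *= cooling)
        {
            T candidate = neighbour(current, rng);
            double eCandidate = energy(candidate);

            // Always accept improvements; accept worse moves with
            // probability exp(-delta / temperature).
            if (eCandidate <= eCurrent ||
                rng.NextDouble() < Math.Exp((eCurrent - eCandidate) / temp))
            {
                current = candidate;
                eCurrent = eCandidate;
            }

            if (eCurrent < eBest)
            {
                best = current;
                eBest = eCurrent;
            }
        }
        return best;
    }
}

For the travelling-salesman example above, energy would be the tour length and neighbour might swap two random cities.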
Cheers.
A: I generally refer to "Numerical recipes in C/C++" for all the pseudocode and adapt to my own later. That is the best documentation/implementation you could find. Sometimes you could even find better algorithms or an alternative way of solving. (In case Newton Raphshon is not the way to go)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127474",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Detecting WPF Validation Errors In WPF you can setup validation based on errors thrown in your Data Layer during Data Binding using the ExceptionValidationRule or DataErrorValidationRule.
Suppose you had a bunch of controls set up this way and you had a Save button. When the user clicks the Save button, you need to make sure there are no validation errors before proceeding with the save. If there are validation errors, you want to holler at them.
In WPF, how do you find out if any of your Data Bound controls have validation errors set?
A: In addition to the great LINQ-implementation of Dean, I had fun wrapping the code into an extension for DependencyObjects:
public static bool IsValid(this DependencyObject instance)
{
// Validate recursively
return !Validation.GetHasError(instance) && LogicalTreeHelper.GetChildren(instance).OfType<DependencyObject>().All(child => child.IsValid());
}
This makes it extremely nice in terms of reusability.
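A quick usage sketch (Save() here is a hypothetical routine standing in for your own persistence logic):
private void saveButton_Click(object sender, RoutedEventArgs e)
{
    // Recursively checks this window and all of its logical children.
    if (this.IsValid())
    {
        Save();
    }
}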
A: The following code (from the Programming WPF book by Chris Sells & Ian Griffiths) validates all binding rules on a dependency object and its children:
public static class Validator
{
public static bool IsValid(DependencyObject parent)
{
// Validate all the bindings on the parent
bool valid = true;
LocalValueEnumerator localValues = parent.GetLocalValueEnumerator();
while (localValues.MoveNext())
{
LocalValueEntry entry = localValues.Current;
if (BindingOperations.IsDataBound(parent, entry.Property))
{
Binding binding = BindingOperations.GetBinding(parent, entry.Property);
foreach (ValidationRule rule in binding.ValidationRules)
{
ValidationResult result = rule.Validate(parent.GetValue(entry.Property), null);
if (!result.IsValid)
{
BindingExpression expression = BindingOperations.GetBindingExpression(parent, entry.Property);
System.Windows.Controls.Validation.MarkInvalid(expression, new ValidationError(rule, expression, result.ErrorContent, null));
valid = false;
}
}
}
}
// Validate all the bindings on the children
for (int i = 0; i != VisualTreeHelper.GetChildrenCount(parent); ++i)
{
DependencyObject child = VisualTreeHelper.GetChild(parent, i);
if (!IsValid(child)) { valid = false; }
}
return valid;
}
}
You can call this in your save button click event handler like this in your page/window
private void saveButton_Click(object sender, RoutedEventArgs e)
{
if (Validator.IsValid(this)) // is valid
{
....
}
}
A: The posted code did not work for me when using a ListBox. I rewrote it and now it works:
public static bool IsValid(DependencyObject parent)
{
if (Validation.GetHasError(parent))
return false;
// Validate all the bindings on the children
for (int i = 0; i != VisualTreeHelper.GetChildrenCount(parent); ++i)
{
DependencyObject child = VisualTreeHelper.GetChild(parent, i);
if (!IsValid(child)) { return false; }
}
return true;
}
A: I would offer a small optimization.
If you do this many times over the same controls, you can adapt the above code to keep a list of the controls that actually have validation rules. Then, whenever you need to check for validity, only go over those cached controls instead of the whole visual tree.
This would prove to be much better if you have many such controls.
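For example, a minimal sketch of that caching idea (the member and method names here are illustrative, not part of the original code):
private readonly List<DependencyObject> validatedControls = new List<DependencyObject>();
private void CacheControlsWithRules(DependencyObject parent)
{
    // Remember any control that has at least one data-bound property with validation rules
    LocalValueEnumerator localValues = parent.GetLocalValueEnumerator();
    while (localValues.MoveNext())
    {
        LocalValueEntry entry = localValues.Current;
        if (BindingOperations.IsDataBound(parent, entry.Property) &&
            BindingOperations.GetBinding(parent, entry.Property).ValidationRules.Count > 0)
        {
            validatedControls.Add(parent);
            break;
        }
    }
    for (int i = 0; i != VisualTreeHelper.GetChildrenCount(parent); ++i)
    {
        CacheControlsWithRules(VisualTreeHelper.GetChild(parent, i));
    }
}
private bool AreCachedControlsValid()
{
    // Subsequent checks only touch the cached controls instead of walking the whole tree
    foreach (DependencyObject control in validatedControls)
    {
        if (Validation.GetHasError(control)) { return false; }
    }
    return true;
}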
A: Here is a library for form validation in WPF. Nuget package here.
Sample:
<Border BorderBrush="{Binding Path=(validationScope:Scope.HasErrors),
Converter={local:BoolToBrushConverter},
ElementName=Form}"
BorderThickness="1">
<StackPanel x:Name="Form" validationScope:Scope.ForInputTypes="{x:Static validationScope:InputTypeCollection.Default}">
<TextBox Text="{Binding SomeProperty}" />
<TextBox Text="{Binding SomeOtherProperty}" />
</StackPanel>
</Border>
The idea is that we define a validation scope via the attached property telling it what input controls to track.
Then we can do:
<ItemsControl ItemsSource="{Binding Path=(validationScope:Scope.Errors),
ElementName=Form}">
<ItemsControl.ItemTemplate>
<DataTemplate DataType="{x:Type ValidationError}">
<TextBlock Foreground="Red"
Text="{Binding ErrorContent}" />
</DataTemplate>
</ItemsControl.ItemTemplate>
</ItemsControl>
A: Had the same problem and tried the provided solutions. A combination of H-Man2's and skiba_k's solutions worked almost fine for me, for one exception: My Window has a TabControl. And the validation rules only get evaluated for the TabItem that is currently visible. So I replaced VisualTreeHelper by LogicalTreeHelper. Now it works.
public static bool IsValid(DependencyObject parent)
{
// Validate all the bindings on the parent
bool valid = true;
LocalValueEnumerator localValues = parent.GetLocalValueEnumerator();
while (localValues.MoveNext())
{
LocalValueEntry entry = localValues.Current;
if (BindingOperations.IsDataBound(parent, entry.Property))
{
Binding binding = BindingOperations.GetBinding(parent, entry.Property);
if (binding.ValidationRules.Count > 0)
{
BindingExpression expression = BindingOperations.GetBindingExpression(parent, entry.Property);
expression.UpdateSource();
if (expression.HasError)
{
valid = false;
}
}
}
}
// Validate all the bindings on the children
System.Collections.IEnumerable children = LogicalTreeHelper.GetChildren(parent);
foreach (object obj in children)
{
if (obj is DependencyObject)
{
DependencyObject child = (DependencyObject)obj;
if (!IsValid(child)) { valid = false; }
}
}
return valid;
}
A: This post was extremely helpful. Thanks to all who contributed. Here is a LINQ version that you will either love or hate.
private void CanExecute(object sender, CanExecuteRoutedEventArgs e)
{
e.CanExecute = IsValid(sender as DependencyObject);
}
private bool IsValid(DependencyObject obj)
{
// The dependency object is valid if it has no errors and all
// of its children (that are dependency objects) are error-free.
return !Validation.GetHasError(obj) &&
LogicalTreeHelper.GetChildren(obj)
.OfType<DependencyObject>()
.All(IsValid);
}
A: You can iterate over your control tree recursively and check the attached Validation.HasError property, then set focus to the first control that has an error.
You can also use one of the many already-written solutions.
You can check this thread for an example and more information.
A: In the answer from aogan, instead of explicitly iterating through the validation rules, it is better to just invoke expression.UpdateSource():
if (BindingOperations.IsDataBound(parent, entry.Property))
{
Binding binding = BindingOperations.GetBinding(parent, entry.Property);
if (binding.ValidationRules.Count > 0)
{
BindingExpression expression
= BindingOperations.GetBindingExpression(parent, entry.Property);
expression.UpdateSource();
if (expression.HasError) valid = false;
}
}
A: You might be interested in the BookLibrary sample application of the WPF Application Framework (WAF). It shows how to use validation in WPF and how to control the Save button when validation errors exists.
A: I am using a DataGrid, and the normal code above did not find errors until the DataGrid itself lost focus. Even with the code below, it still doesn't "see" an error until the row loses focus, but that's at least better than waiting until the grid loses focus.
This version also tracks all errors in a string list. Most of the other versions in this post do not do that, so they can stop at the first error.
public static List<string> Errors { get; set; } = new();
public static bool IsValid(this DependencyObject parent)
{
Errors.Clear();
return IsValidInternal(parent);
}
private static bool IsValidInternal(DependencyObject parent)
{
// Validate all the bindings on this instance
bool valid = true;
if (Validation.GetHasError(parent) ||
GetRowsHasError(parent))
{
valid = false;
/*
* Find the error message and log it in the Errors list.
*/
foreach (var error in Validation.GetErrors(parent))
{
if (error.ErrorContent is string errorMessage)
{
Errors.Add(errorMessage);
}
else
{
if (parent is Control control)
{
Errors.Add($"<unknow error> on field `{control.Name}`");
}
else
{
Errors.Add("<unknow error>");
}
}
}
}
// Validate all the bindings on the children
for (int i = 0; i != VisualTreeHelper.GetChildrenCount(parent); i++)
{
var child = VisualTreeHelper.GetChild(parent, i);
if (IsValidInternal(child) == false)
{
valid = false;
}
}
return valid;
}
private static bool GetRowsHasError(DependencyObject parent)
{
DataGridRow dataGridRow;
if (parent is not DataGrid dataGrid)
{
/*
* This is not a DataGrid, so return and say we do not have an error.
* Errors for this object will be checked by the normal check instead.
*/
return false;
}
foreach (var item in dataGrid.Items)
{
/*
* Not sure why, but under some conditions I was returned a null dataGridRow
* so I had to test for it.
*/
dataGridRow = (DataGridRow)dataGrid.ItemContainerGenerator.ContainerFromItem(item);
if (dataGridRow != null &&
Validation.GetHasError(dataGridRow))
{
return true;
}
}
return false;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "122"
}
|
Q: How do you deploy a WAR that's inside an EAR as the root (/) context in Glassfish? I have an EAR file that contains two WARs, war1.war and war2.war. My application.xml file looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<application version="5" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/application_5.xsd">
<display-name>MyEAR</display-name>
<module>
<web>
<web-uri>war1.war</web-uri>
<context-root>/</context-root>
</web>
</module>
<module>
<web>
<web-uri>war2.war</web-uri>
<context-root>/war2location</context-root>
</web>
</module>
</application>
This results in war2.war being available on http://localhost:8080/war2location, which is correct, but war1.war is on http://localhost:8080// -- note the two slashes.
What am I doing wrong?
Note that the WARs' sun-web.xml files get ignored when contained in an EAR.
A: In Glassfish 3.0.1 you can define the default web application in the administration console:
"Configuration\Virtual Servers\server\Default Web Module".
The drop-down box contains all deployed war modules.
The default web module is then accessible from http://localhost:8080/.
A: This seems to me to be a bug in the Glassfish application server.
It should work as it is already defined in your application.xml file.
Maybe you could try the following:
<context-root>ROOT</context-root>
A: This does seem to be a bug / feature.
You can set Glassfish to use a certain web application as the root application, i.e. when no other context matches, but the application then still thinks it's running on the original context and not on the root.
My solution is to run the first WAR on /w and use Apache to redirect /whatever to /w/whatever using a RedirectMatch. Not very pretty, but it solves the problem (kinda).
RewriteEngine On
RedirectMatch ^/(w[^/].*) /w/$1
RedirectMatch ^/([^w].*) /w/$1
A: Thanks jiriki. The Perfect answer!
Works in Glassfish 2.1.1 too!
Configuration> HTTP Service> Virtual Servers> server
or change default-web-module parameter in domain.xml
A: The same solution as described by @jiriki and @SteveGreenslade, but via asadmin.
Found on: http://www.java.net/node/681176
Or you can use CLI to change this default web module.
asadmin get server.http-service.virtual-server.server.default-web-module
should show you the app, and you can then use the asadmin set command to change it.
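For example, something along these lines; the appName#moduleName value format is an assumption based on how Glassfish names modules inside an EAR, so check what the get command actually shows first:
asadmin set server.http-service.virtual-server.server.default-web-module=MyEAR#war1.war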
UPDATE (Glassfish 3.1+):
With Glassfish 3.1+ you can achieve it without any need to set default-web-module. The only file you need to modify is
<your_ear>.ear/META-INF/application.xml
where you should place for your web module:
<context-root/>
That does the job.
Based on other answers present here I got a wrong impression something more is required. See the related problem caused by confusion: http://www.java.net/forum/topic/glassfish/glassfish/asadmin-restart-domain-not-working-war-inside-ear-default-web-module
Basically:
<context-root>/</context-root>
should work as well, based on the code (https://svn.java.net/svn/glassfish~svn/tags/3.1.2/web/web-glue/src/main/java/com/sun/enterprise/web/WebContainer.java):
if (wmContextPath.length() == 0)
displayContextPath = "/";
else
displayContextPath = wmContextPath;
However, I didn't test this option.
A: http://localhost:8080// should still be a valid URL that is equivalent to http://localhost:8080/
I'd experiment with leaving the context-root of war1 blank (though I'm not sure if that's allowed). Or changing it to <context-root>.</context-root>.
Otherwise I'd have to say the generated URI is a bug on glassfish's part since I've never seen that using sun's.
A: Have you given it another try on a more recent version of Glassfish? (3.0.1 just came out).
I've been able to get a -single- WAR in an exploded EAR to deploy to http://localhost/ using Glassfish 3.0.1. Like you mentioned, sun-web.xml seems to be ignored (inside of exploded ears at least).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127492",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Dump CCWs and RCWs in a mixed managed/unmanaged process I have a mixed managed/unmanaged environment (Visual Studio and ReSharper) and I suspect CCW or RCW leak. Is there any way to dump all currently allocated wrappers and identify their source/target? I have WinDbg, SOS & SOSEx, so I can see total number of RCWs and CCWs with !syncblk command. I just want to see objects, so I can call !gcroot on them and otherwise examine suspects.
A: You should be able to use !dumpheap to do this. !dumpheap -stat would let you find the type names (if you don't already know them) and then !dumpheap -type {typename} would give you the individual object addresses which can be passed to !gcroot.
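For example, RCWs typically show up as System.__ComObject on the managed heap, so a session might look like this (the address is illustrative):
0:000> !dumpheap -stat
0:000> !dumpheap -type System.__ComObject
0:000> !gcroot 0x01d2f3a4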
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127496",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: What guidelines for HTML email design are there? What guidelines can you give for rich HTML formatting in emails while maintaining good visual stability across many clients and web based email interfaces?
An unrelated answer on a question on Stack Overflow suggested:
http://www.campaignmonitor.com/blog/archives/2008/05/2008_email_design_guidelines.html
Which contains the following guidelines:
*
*Place stylesheet in <body> instead of <head>
Some email clients will strip CSS out of the head, but leave it if the style block is (invalidly) in the body.
*Use inline styles where ever possible
Gmail will strip any stylesheet, whether in the <head> or in the <body>, but honor inline styles assigned using the style="" attribute
*Return to tables
Email standards have actually taken a giant step backwards in recent years thanks to Outlook 2007 using the Microsoft Word rendering engine. Unlearn most of what you learned about positioning without stylesheets.
*Don't rely on images
Most clients and most web based email clients will not display images unless the user specifically requests them to be displayed.
I also have a few "unconfirmed" truths that I don't remember where I read them.
*
*Don't use more than two levels of nesting in tables
Is this true? What is likely to happen if I do? Are there any particular clients that choke on this?
*Be careful of nesting background images in cells/tables
As I understand it, you may encounter situations where the background image is applied anew in the descendant table/cell, rather than just "shining through". Again, true or not? Which clients?
I would like to flesh out this list with more guidelines and experiences from the trenches.
Can you offer any further suggestions?
Update: I'm specifially asking for guidelines for the design part in HTML and consistency there of. Questions about general guidelines for avoiding spam filters, and common courtesy are already on SO.
A: The folks behind Campaign Monitor also started a Email Standards Project web site with a lot of good information.
A: It's actually really hard to make a decent HTML email, if you approach it from a 'modern HTML and CSS' perspective.
For best results, imagine it's 1999.
*
*Go back to tables for layout (or preferably - don't attempt any complex layout)
*Be afraid of background images (they break in Outlook 2007 and Gmail).
*The style-tag-in-the-body thing is because Hotmail used to accept it that way - I'm pretty sure they strip it out now though. Use inline styles with the style attribute if you must use CSS.
*Forget entirely about float
*Remember your images will probably be blocked - use background and text colour to your advantage - make sure there is some readable text with images disabled
*Be very careful with links, be especially wary of anything that looks like a URL in the link text - you will anger 'phishing' filters (eg <a href="http://domain.tld">www.someotherdomain.tld</a> is bad)
*Remember that the "fold" on webmail clients tends to be extremely high up the page (on a 1024x768 screen, most interfaces won't show more than a hundred pixels or so) - get your identity stuff in right at the top so the recipient knows who you are.
*Recent version of outlook have a "portrait" preview pane which is significantly narrower than you may be expecting - be very wary of fixed-width layouts, if you must use them, make them as narrow as you can.
*Don't even think about flash, Javascript, SVG, canvas, or anything like that.
*Test, a lot. Make sure you test in a recent Outlook (things have changed a lot! It now uses Word as its HTML rendering engine, and it's crippled: Word 2007 HTML/CSS support). Gmail is pretty finicky also. Surprisingly Yahoo's webmail is extremely good, with nice CSS support.
Good luck ;)
Update to answer further questions:
Don't use more than two levels of nesting in tables
I believe this is an older guideline pertaining to Lotus Notes. Nested tables should be okay, but really, if you have a layout that's complicated enough to need them, you're probably going to have trouble anyway. Keep your layout simple.
Be careful of nesting background images in cells/tables
This may be related to the above, and the same applies, if you're getting that complicated then you will have problems. Recent versions of Outlook don't support background images at all, so you'd be best advised to forget about them entirely.
A: Take a look at this boilerplate, it is like html5boilerplate, but for emails:
http://htmlemailboilerplate.com/
A: I think this is lower level than the question you are asking, but if you really want an html email to be correctly viewed by as many clients as possible, make sure it's using valid MIME. In particular, for an email to be considered as valid MIME, the headers MUST (in the RFC sense of the word) contain both of these headers:
MIME-Version:
Content-Type:
Very strict clients will display your HTML as raw text if one or the other of these is missing. You'd be surprised how many large online vendors who should know better have screwed this up (notably, I've gotten HTML emails w/ missing MIME-Version: headers from Amazon and the ACM in the past)
A: Always use multipart MIME and provide a plain text alternative.
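As a rough sketch, the wire format of such a message looks like this (the boundary string is arbitrary):
MIME-Version: 1.0
Content-Type: multipart/alternative; boundary="boundary42"

--boundary42
Content-Type: text/plain; charset="UTF-8"

Plain text version of the message goes here.

--boundary42
Content-Type: text/html; charset="UTF-8"

<html><body><p>HTML version of the message goes here.</p></body></html>

--boundary42--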
A: *
*Background images are not reliable.
*Practically a no-brainer, but no javascript.
*Use an editor that lets you send the current file/buffer as an email, or at the very least, find a program that will let you send the contents of a file as an HTML email. Do not test your emails by copying the HTML and pasting it into Outlook (or any other mail program, for that matter).
A: Three words of advice: test, test, test.
Check out LitmusApp.com's email testing service. You send them a message and they render it in a bunch of clients and show you screenshots of the results. It's not perfect, but it's pretty good.
(Lotus Notes prior to 8.0 really, really stinks for HTML mail, by the way)
Also, beyond just inline CSS styles, I recommend switching to tags wherever possible.
A: Embed your images, don't link to them.
This is bad :
<img src="http://myserver.com/myImage.jpg" alt="Lolkat"/>
This is good :
<img src="cid:myImage"/>
Yeah, it looks weird but check out this guide regarding embedding images in emails.
A: If you're including a style block, don't begin any new line with ".classname" or "." anything. Put a brace or something before the period. If you don't do this, some webmail systems will not properly display your style sheets.
Many people have incorrectly assumed they cannot use CSS blocks in emails because of this behavior... IIRC a line consisting of just "." is the end-of-data marker in SMTP. Systems tend to escape leading periods in their mail stores to prevent the contents of one message from being misrecognized as a new message, and the way this is handled tends to break any style rule starting on a new line with a period.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "105"
}
|
Q: Resizing Controls in MFC I am writing a program which has two panes (via CSplitter), however I am having problems figuring out out to resize the controls in each frame. For simplicity, can someone tell me how I would do it for a basic frame with a single CEdit control?
I'm fairly sure it is to do with the CEdit::OnSize() function... But I'm not really getting anywhere...
Thanks! :)
A: GetDlgItem(IDC_your_slidebar)->SetWindowPos(...) // you can actually move, resize, etc.
A: SetWindowPos is a little heavy duty for this purpose. MoveWindow has just what is needed.
A: Others have pointed out that WM_SIZE is the message you should handle and resize the child controls at that point. WM_SIZE is sent after the resize has finished.
You might also want to handle the WM_SIZING message, which gets sent while the resize is in progress. This will let you actively resize the child windows while the user is still dragging the mouse. It's not strictly necessary to handle WM_SIZING, but it can provide a better user experience.
A: A window receives WM_SIZE message (which is processed by OnSize handler in MFC) immediately after it was resized, so CEdit::OnSize is not what you are looking for.
You should add OnSize handler in your frame class and inside this handler as Rob pointed out you'll get width and height of the client area of your frame, then you should add the code which adjusts size and position of your control.
Something like this
void MyFrame::OnSize(UINT nType, int w, int h)
{
// w and h parameters are new width and height of your frame
// suppose you have member variable CEdit myEdit which you need to resize/move
myEdit.MoveWindow(w/5, h/5, w/2, h/2);
}
A: When your frame receives an OnSize message it will give you the new width and height - you can simply call the CEdit SetWindowPos method passing it these values.
Assume CMyPane is your splitter pane and it contains a CEdit you created in OnCreate called m_wndEdit:
void CMyPane::OnSize(UINT nType, int cx, int cy)
{
m_wndEdit.SetWindowPos(NULL, 0, 0, cx, cy, SWP_NOMOVE | SWP_NOACTIVATE | SWP_NOZORDER);
}
A: I use CResize class from CodeGuru to resize all controls automatically. You tell how you want each control to be resized and it does the job for you.
The resize paradigm is to specify how much each side of a control will move when the dialog is resized.
SetResize(IDC_EDIT1, 0, 0, 0.5, 1);
SetResize(IDC_EDIT2, 0.5, 0, 1, 1);
Very handy when you have a large number of dialog controls.
Source code
A: When it comes to the window size changes, there are three window messages you may be interested in: ON_WM_SIZE(), ON_WM_SIZING(), and ON_WM_GETMINMAXINFO().
As the official docs says:
*
*ON_WM_SIZE whose message handler is ::OnSize() is triggered after the size of the CWnd has changed;
*ON_WM_SIZING whose message handler is ::OnSizing() is triggered repeatedly while the user is resizing the window, so you can react while the drag is still in progress;
*ON_WM_GETMINMAXINFO whose message handler is ::OnGetMinMaxInfo() is triggered whenever the window needs to know the maximized position or dimensions, or the minimum or maximum tracking size.
If you want to restrict the size of the CWnd to some range, you may refer to the message ON_WM_GETMINMAXINFO; and if you want to track the size changes in real time, you may refer to the other two messages.
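For instance, a minimal sketch of restricting the frame to a minimum tracking size (the 400x300 values are arbitrary, and ON_WM_GETMINMAXINFO() must be added to the message map):
void MyFrame::OnGetMinMaxInfo(MINMAXINFO* lpMMI)
{
    CFrameWnd::OnGetMinMaxInfo(lpMMI);
    // Don't let the user shrink the frame below 400x300 pixels
    lpMMI->ptMinTrackSize.x = 400;
    lpMMI->ptMinTrackSize.y = 300;
}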
A: It is better to use the Dynamic Layout capabilities of each control at the Property section.
Let's say you want a specific control, like a heading, to stay at the center of the view/dialog. In the control's Dynamic Layout properties, set Moving Type to Horizontal and Moving X to 50, but keep Sizing as None. This way, when you resize the view, the heading always remains at the center. Keep in mind that the minimum for the resizing/moving is the size/position the control had when you designed it in the Resource View.
This way, you save yourself the burden of geometry calculations and transformations.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: Capture right-click'd text on Outlook Message Content I'd like to know if it's possible to capture the text when a user right-click's on an Outlook message, and then add items to the right-click menu depending on the type of text.
This is an example of what I'd like to do. If there's a message (mail item) with the following content: "Hello, please call me at 555-8474 regarding item A1234" and the user right-click's on the number "8", the pop-up context menu will have an extra item at the bottom called "Call 555-8474", and a "PhoneCall" sub will be run if selected. If the user right-click's anywhere on "A1234" a different item (i.e. "Look up A1234") will be shown.
We're running Outlook 2003 and if possible I'd like to know if this can be done using VBA. I'm open to other ideas as well. Thanks!
A: You can use this example in VBA to get started
A: SmartTags is an Office 2003 feature that was designed for exactly this sort of thing. I honestly don't know SmartTags well enough to do more than wave in the general direction, so I hope you get a better answer from a domain expert....
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How do I reorder the fields/columns in a SharePoint view? I'm adding a new field to a list and view. To add the field to the view, I'm using this code:
view.ViewFields.Add("My New Field");
However this just tacks it on to the end of the view. How do I add the field to a particular column, or rearrange the field order? view.ViewFields is an SPViewFieldCollection object that inherits from SPBaseCollection and there are no Insert / Reverse / Sort / RemoveAt methods available.
A: I've found removing all items from the list and readding them in the order that I'd like works well (although a little drastic). Here is the code I'm using:
string[] fieldNames = new string[] { "Title", "My New Field", "Modified", "Created" };
SPViewFieldCollection viewFields = view.ViewFields;
viewFields.DeleteAll();
foreach (string fieldName in fieldNames)
{
viewFields.Add(fieldName);
}
view.Update();
A: You can use the built-in MoveFieldTo method:
int newFieldOrderIndex = 1;
SPViewFieldCollection viewFields = view.ViewFields;
viewFields.MoveFieldTo(fieldName, newFieldOrderIndex);
view.Update();
https://msdn.microsoft.com/EN-US/library/microsoft.sharepoint.spviewfieldcollection.movefieldto.aspx
A: You have to use the follow method to reorder the field
string reorderMethod = @"<?xml version=""1.0"" encoding=""UTF-8""?>
<Method ID=""0,REORDERFIELDS"">
<SetList Scope=""Request"">{0}</SetList>
<SetVar Name=""Cmd"">REORDERFIELDS</SetVar>
<SetVar Name=""ReorderedFields"">{1}</SetVar>
<SetVar Name=""owshiddenversion"">{2}</SetVar>
</Method>";
A: I had two different lists with a similar view. I wanted to update the destination list's view field order whenever the user changes the order in the source view.
ViewFieldCollection srcViewFields = srcView.ViewFields;
ViewFieldCollection destViewFields = destView.ViewFields;
var srcArray = srcViewFields.ToArray<string>();
var destArray = destViewFields.ToArray<string>();
foreach (var item in destArray)
{
destViewFields.MoveFieldTo(item, Array.IndexOf(srcArray, item));
destView.Update();
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127530",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Should I allow 'allow_url_fopen' in PHP? We have a couple of developers asking for allow_url_fopen to be enabled on our server. What's the norm these days and if libcurl is enabled is there really any good reason to allow?
Environment is: Windows 2003, PHP 5.2.6, FastCGI
A: I think the answer comes down to how well you trust your developers to use the feature responsibly? Data from a external URL should be treated like any other untrusted input and as long as that is understood, what's the big deal?
The way I see it is that if you treat your developers like children and never let them handle sharp things, then you'll have developers who never learn the responsibility of writing secure code.
A: It depends on the type of development. If you're prototyping, then enabling 'allow_url_fopen' is fine; however, there isn't a significant speed difference between libcurl and file_get_contents, and enabling it is only a matter of convenience.
For production servers, any call to libcurl should be flagged for a security audit, as should fopen and file_get_contents if 'allow_url_fopen' is enabled. Disabling 'allow_url_fopen' does not prevent exploits; it only slightly limits the number of ways they can be carried out.
A: You definitely want allow_url_include set to Off, which mitigates many of the risks of allow_url_fopen as well.
But because not all versions of PHP have allow_url_include, best practice for many is to turn off fopen. Like with all features, the reality is that if you don't need it for your application, disable it. If you do need it, the curl module probably can do it better, and refactoring your application to use curl to disable allow_url_fopen may deter the least determined cracker.
A: Cross-site scripting attacks are a pain, so that's a vote against. And you should absolutely have "allow_url_include" set to off, or you'll be in for a world of hurt.
A: The big problem is that using cURL instead of allow_url_fopen is not more secure by itself, because if you want to save a file from a URL using cURL, you still have to go through fopen/file_get_contents to save the file.
*
*cURL is only good for retrieving remote content from a URL
(allow_url_fopen not necessary).
*cURL must be combined with fopen or file_get_contents if you want to save a remote
file to your server.
(allow_url_fopen obligatory with cURL)
PHP must find other ways to make this more secure.
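For comparison, a plain remote fetch with cURL, which works regardless of the allow_url_fopen setting (the URL is illustrative):
<?php
$ch = curl_init('http://example.com/feed.xml');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body as a string instead of printing it
curl_setopt($ch, CURLOPT_TIMEOUT, 10);          // give up on a slow remote host
$body = curl_exec($ch);
curl_close($ch);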
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127534",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "35"
}
|
Q: Guidelines for designing classes for dependency injection This question about unit testing best practices mentions designing classes for dependency injection. This got me thinking as to what exactly that might mean.
Having just started working with inversion of control containers I have some ideas on the issue, so let me throw them against the wall and see what sticks.
The way I see it, there are three basic types of dependencies that an object can have.
*
*Object Dependency - An actual object that will be used by the class in question. For example LogInVerifier in a LogInFormController. These should be injected in through the constructor. If the class is sufficiently high level that it requires more than 4 of these objects in the constructor consider breaking it up or at the very least using a factory pattern. You should also consider providing the dependency with an interface and coding against the interface.
*A Simple Setting - For example a threshold or a timeout period. These should generally have a default value and be set via a builder of factory pattern. You can also provide constructor overloads which set them. However in most cases you probably shouldn't be forcing the client to have to set it up explicitly.
*A Message Object - An object that is handed off from one class to another, which the receiving class presumably uses for business logic. An example would be a User object for a LogInCompleteRouter class. Here I find it is often better for the message not to be specified in the constructor, as you would then have to either register the User instance with the IoC container (making it global) or not instantiate the LogInCompleteRouter until after you have an instance of User (for which you couldn't use DI, or at least would need an explicit dependency on the container). In this case it is better to pass in the message object only when you need it, for the method call (i.e. LogInCompleteRouter.Route(user);).
Also, I should mention that not everything should be DI'ed; if you have a simple bit of functionality that was just convenient to factor out into a throw-away class, it is probably OK to instantiate it on the spot. Obviously this is a judgement call; if I found it expedient to write a class such as
class PasswordEqualsVerifier {
    public bool Check(string input, string actual) { return input == actual; }
}
I probably wouldn't bother dependency injecting it and would just have an object instantiate it directly inside a using block. The corollary being that if it is worth writing unit tests for, then it is probably worth injecting.
So what do you guys think? Any additional guidelines or contrasting opinions are welcome.
A: The important thing is to try to code to interfaces and then have your classes accept instances of those interfaces rather than create the instances themselves. You can obviously go crazy with this, but it's a generally good practice regardless of unit testing or DI.
For example, if you have a Data Access Object, you might be inclined to write a base for all DAOs like this:
public class BaseDAO
{
public BaseDAO(String connectionURL,
String driverName,
String username, String password)
{
// use them to create a connection via JDBC, e.g.
}
protected Connection getConnection() { return connection; }
}
However, it would be better to remove this from the class in favor of an interface
public interface DatabaseConnection
{
Connection getConnection();
}
public class BaseDAO
{
public BaseDAO(DatabaseConnection dbConnection)
{
this.dbConnection = dbConnection;
}
protected Connection getConnection() { return dbConnection.getConnection(); }
}
Now, you can provide multiple implementations of DatabaseConnection. Even ignoring unit testing, if we assume we are using JDBC, there are two ways to get a Connection: a connection pool from the container, or directly via using the driver. Now, your DAO code isn't coupled to either strategy.
For testing, you can make a MockDatabaseConnection that connects to some embedded JDBC implementation with canned data to test your code.
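For instance, a test double for the interface above might look like this; using an embedded in-memory H2 database is just one option, and the JDBC URL is an assumption:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class MockDatabaseConnection implements DatabaseConnection
{
    private final Connection connection;

    public MockDatabaseConnection() throws SQLException
    {
        // An embedded in-memory database keeps tests self-contained and fast
        this.connection = DriverManager.getConnection("jdbc:h2:mem:test");
    }

    public Connection getConnection() { return connection; }
}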
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: WPF ListBoxItem selection problem I have a listbox where the items contain checkboxes:
<ListBox Style="{StaticResource CheckBoxListStyle}" Name="EditListBox">
<ListBox.ItemTemplate>
<DataTemplate>
<CheckBox Click="Checkbox_Click" IsChecked="{Binding Path=IsChecked, Mode=TwoWay}" Content="{Binding Path=DisplayText}" />
</DataTemplate>
</ListBox.ItemTemplate>
</ListBox>
The problem I'm having is that when I click on the checkbox or its content, the parent ListBoxItem does not get selected. If I click on the white space next to the checkbox, the ListBoxItem does get selected.
The behavior that I'm trying to get is to be able to select one or many items in the list and use the spacebar to toggle the checkboxes on and off.
Some more info:
private void Checkbox_Click(object sender, RoutedEventArgs e)
{
CheckBox chkBox = e.OriginalSource as CheckBox;
}
In the code above when I click on a checkbox, e.Handled is false and chkBox.Parent is null.
Kent's answer put me down the right path, here's what I ended up with:
<ListBox Style="{StaticResource CheckBoxListStyle}" Name="EditListBox" PreviewKeyDown="ListBox_PreviewKeyDown">
<ListBox.ItemTemplate>
<DataTemplate>
<StackPanel Orientation="Horizontal">
<CheckBox IsChecked="{Binding Path=IsChecked, Mode=TwoWay}" />
<TextBlock Text="{Binding DisplayText}"/>
</StackPanel>
</DataTemplate>
</ListBox.ItemTemplate>
</ListBox>
I had to use PreviewKeyDown because by default when you hit the spacebar in a list box, it deselects everything except for the most recently selected item.
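For reference, the PreviewKeyDown handler ended up roughly like this; MyItem stands in for whatever view-model type backs the list, and it must raise property-change notifications for the TwoWay binding to refresh the checkboxes:
private void ListBox_PreviewKeyDown(object sender, KeyEventArgs e)
{
    if (e.Key == Key.Space)
    {
        // Toggle every selected item instead of letting the ListBox
        // collapse the selection down to the last clicked item
        foreach (MyItem item in EditListBox.SelectedItems)
        {
            item.IsChecked = !item.IsChecked;
        }
        e.Handled = true;
    }
}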
A: You can also bind the IsChecked property of the CheckBox and IsSelected property of the ListBoxItem:
<ListBox>
<ListBox.ItemTemplate>
<DataTemplate>
<CheckBox Content="{Binding DisplayText}" IsChecked="{Binding Path=IsSelected, RelativeSource={RelativeSource AncestorType={x:Type ListBoxItem}}}"/>
</DataTemplate>
</ListBox.ItemTemplate>
</ListBox>
A: In your use case it would be way simpler to use an ItemsControl instead of a ListBox. An ItemsControl is similar to a ListBox, except that it doesn't have the automatic selection behaviour. This means that using it to host a list of what are essentially checkboxes is very simple, and you don't have to work around the ListBox's selection behaviour.
Simply switching to ItemsControl will give you exactly what you need:
<ItemsControl Style="{StaticResource CheckBoxListStyle}" Name="EditListBox">
<ItemsControl.ItemTemplate>
<DataTemplate>
<CheckBox Click="Checkbox_Click" IsChecked="{Binding Path=IsChecked, Mode=TwoWay}" Content="{Binding Path=DisplayText}" />
</DataTemplate>
</ItemsControl.ItemTemplate>
</ItemsControl>
You can click on text to check checkboxes (default behavior) and you can use the keyboard too without having to wire up any event handlers.
A: To begin with, put the content outside the CheckBox:
<StackPanel Orientation="Horizontal">
<CheckBox IsChecked="{Binding IsChecked}"/>
<TextBlock Text="{Binding DisplayText}"/>
</StackPanel>
After that, you will need to ensure that pressing space on a ListBoxItem results in the CheckBox being checked. There are a number of ways of doing this, including a simple event handler on the ListBoxItem. Or you could specify a handler for UIElement.KeyUp or whatever in your DataTemplate:
<CheckBox IsChecked="{Binding IsChecked}" UIElement.KeyUp="..."/>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
}
|
Q: Does Cache activity prevent IIS from unloading an ASP.NET app? I want to add a scheduled task to a client's ASP.NET app. These posts cover the idea well:
*
*https://blog.stackoverflow.com/2008/07/easy-background-tasks-in-aspnet/
*What is the Best Practice to Kick-off Maintenance Process on ASP.NET
*"Out of Band" Processing Techiniques for asp.net applications
My question has two parts: First, will IIS unload the application if there isn't enough request activity despite the Cache activity? My client doesn't enjoy as much traffic as stackoverflow so they can't rely on user requests to keep the app 'active'. Obviously, I can't schedule tasks in an unloaded app.
Second, if so, is there a way to prevent IIS from unloading the app outside of configuration or external 'stay-alive' requests? My client's host doesn't allow much configuration tweaking and a stay-alive utility introduces the deployment complexity I'm trying to avoid with an ASP.NET Cache solution.
Thanks a bunch.
Edit/Conclusion: TheXenocide's solution is exactly correct given the question. However, I've decided it is a really bad question. The temptation to cut corners is always looming. I've regained my senses and told my client to use a website monitoring tool to keep the site active. In addition, the scheduled task is going in a Windows service despite the extra deployment hassle.
A: Unfortunately, outside of changing the timeout configuration (which I believe is possible in Web.config, though I don't know what is and isn't allowed on hosting providers, most of which use Medium Trust), I don't believe there is any method of keeping the application from ending other than web requests. One thing you might try, which may be a little simpler than using some keep-alive service on a local machine, is to add some logic to Session_Start/Session_End that ensures there is always at least one session active; you can use the WebRequest class from within your application to call your own site, and it should still start a new session.
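A minimal sketch of that self-request trick in Global.asax (the URL is a placeholder you would point at your own site):
protected void Session_End(object sender, EventArgs e)
{
    // Request our own site so a fresh session starts and the app stays loaded
    var request = System.Net.WebRequest.Create("http://localhost/myapp/default.aspx");
    using (var response = request.GetResponse()) { }
}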
Good luck, and let us know what you do :)
UPDATE: these details now very much depend on which version of IIS and which version of .NET you're running in. Newer versions of each have methods of configuring "always running" applications.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: What is your favourite Windbg tip/trick? I have come to realize that Windbg is a very powerful debugger for the Windows platform & I learn something new about it once in a while. Can fellow Windbg users share some of their mad skills?
ps: I am not looking for a nifty command, those can be found in the documentation. How about sharing tips on doing something that one couldn't otherwise imagine could be done with windbg? e.g. Some way to generate statistics about memory allocations when a process is run under windbg.
A: One word (well, OK, three) : DML, i.e. Debugger Markup Language.
This is a fairly recent addition to WinDbg, and it's not documented in the help file. There is however some documentation in "dml.doc" in the installation directory for the Debugging Tools for Windows.
Basically, this is an HTML-like syntax you can add to your debugger scripts for formatting and, more importantly, linking. You can use links to call other scripts, or even the same script.
My day-to-day work involves maintenance on a meta-modeler that provides generic objects and relationship between objects for a large piece of C++ software. At first, to ease debugging, I had written a simple dump script that extracts relevant information from these objects.
Now, with DML, I've been able to add links to the output, allowing the same script to be called again on related objects. This allows for much faster exploration of a model.
Here's a simplified example. Assume the object under introspection has a relationship called "reference" to another object.
r @$t0 = $arg1 $$ arg1 is the address of an object to examine
$$ dump some information from $t0
$$ allow the user to examine our reference
aS /x myref @@(&((<C++ type of the reference>*)@$t0)->reference )
.block { .printf /D "<link cmd=\"$$>a< <full path to this script> ${myref}\">dump Ref</link> " }
Obviously, this a pretty canned example, but this stuff is really invaluable for me. Instead of hunting around in very complex objects for the right data members (which usually took up to a minute and various casting and dereferencing trickery), everything is automated in one click!
A: *
*.prefer_dml 1
This modifies many of the built in commands (for example, lm) to display DML output which allows you to click links instead of running commands. Pretty handy...
*.reload /f /o file.dll (the /o will overwrite the current copy of the symbol you have)
*.enable_unicode 1 //Switches the debugger to default to Unicode for strings; since all the Windows components use Unicode internally, this is pretty handy.
*.ignore_missing_pages 1 //If you do a lot of kernel dump analysis, you will see a lot of errors regarding memory being paged out. This command will tell the debugger to stop throwing this warning.
alias alias alias...
Save yourself some time in the debugger. Here are some of mine:
aS !p !process;
aS !t !thread;
aS .f .frame;
aS .p .process /p /r
aS .t .thread /p /r
aS dv dv /V /i /t //make dv do your favorite options by default
aS f !process 0 0 //f for find, e.g. f explorer.exe
A: Another answer mentioned the command window and Alt + 1 to focus on the command input window. Does anyone find it difficult to scroll the command output window without using the mouse?
Well, I have recently used AutoHotkey to scroll the command output window using the keyboard, without leaving the command input window.
; WM_VSCROLL = 0x115 (277)
ScrollUp(control="")
{
SendMessage, 277, 0, 0, %control%, A
}
ScrollDown(control="")
{
SendMessage, 277, 1, 0, %control%, A
}
ScrollPageUp(control="")
{
SendMessage, 277, 2, 0, %control%, A
}
ScrollPageDown(control="")
{
SendMessage, 277, 3, 0, %control%, A
}
ScrollToTop(control="")
{
SendMessage, 277, 6, 0, %control%, A
}
ScrollToBottom(control="")
{
SendMessage, 277, 7, 0, %control%, A
}
#IfWinActive, ahk_class WinDbgFrameClass
; For WinDbg, when the child window is attached to the main window
!UP::ScrollUp("RichEdit50W1")
^k::ScrollUp("RichEdit50W1")
!DOWN::ScrollDown("RichEdit50W1")
^j::ScrollDown("RichEdit50W1")
!PGDN::ScrollPageDown("RichEdit50W1")
!PGUP::ScrollPageUp("RichEdit50W1")
!HOME::ScrollToTop("RichEdit50W1")
!END::ScrollToBottom("RichEdit50W1")
#IfWinActive, ahk_class WinBaseClass
; Also for WinDbg, when the child window is a separate window
!UP::ScrollUp("RichEdit50W1")
!DOWN::ScrollDown("RichEdit50W1")
!PGDN::ScrollPageDown("RichEdit50W1")
!PGUP::ScrollPageUp("RichEdit50W1")
!HOME::ScrollToTop("RichEdit50W1")
!END::ScrollToBottom("RichEdit50W1")
After this script is run, you can use Alt + up/down to scroll one line of the command output window, Alt + PgDn/PgUp to scroll one screen.
Note: it seems different versions of WinDbg will have different class names for the window and controls, so you might want to use the window spy tool provided by AutoHotkey to find the actual class names first.
A: Script to load SOS based on the .NET framework version (v2.0 / v4.0):
!for_each_module .if(($sicmp( "@#ModuleName" , "mscorwks") = 0) )
{.loadby sos mscorwks} .elsif ($sicmp( "@#ModuleName" , "clr") = 0)
{.loadby sos clr}
A: My favorite is the command .cmdtree <file> (undocumented, but referenced in previous release notes). This can assist in bringing up another window (that can be docked) to display helpful or commonly used commands. This can help make the user much more productive using the tool.
Initially talked about here, with an example for the <file> parameter:
http://blogs.msdn.com/debuggingtoolbox/archive/2008/09/17/special-command-execute-commands-from-a-customized-user-interface-with-cmdtree.aspx
Example:
Example screenshot: http://blogs.msdn.com/photos/debuggingtoolbox/images/8954736/original.aspx
A: I like to use advanced breakpoint commands, such as using breakpoints to create new one-shot breakpoints.
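For example (module and symbol names are illustrative):
bp mymodule!Startup "bp /1 mymodule!Worker; g"
This breaks on Startup, plants a one-shot breakpoint on Worker (the /1 option deletes it after the first hit), and resumes execution.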
A: To investigate a memory leak in a crash dump (for live processes I much prefer UMDH).
The strategy is that objects of the same type are all allocated with the same size.
*
*Feed the !heap -h 0 command to WinDbg's command line version cdb.exe (for greater speed) to get all heap allocations:
"C:\Program Files\Debugging Tools for Windows\cdb.exe" -c "!heap -h 0;q" -z [DumpPath] > DumpHeapEntries.log
*
*Use Cygwin to grep the list of allocations, grouping them by size:
grep "busy ([[:alnum:]]\+)" DumpHeapEntries.log \
| gawk '{ str = $8; gsub(/\(|\)/, "", str); print "0x" str " 0x" $4 }' \
| sort \
| uniq -c \
| gawk '{ printf "%10.2f %10d %10d ( %s = %d )\n", $1*strtonum($3)/1024, $1, strtonum($3), $2, strtonum($2) }' \
| sort > DumpHeapEntriesStats.log
*
*You get a table that looks like this, for example, telling us that 25529270 allocations of 0x24 bytes take nearly 1.2 GB of memory.
8489.52 707 12296 ( 0x3000 = 12288 )
11894.28 5924 2056 ( 0x800 = 2048 )
13222.66 846250 16 ( 0x2 = 2 )
14120.41 602471 24 ( 0x2 = 2 )
31539.30 2018515 16 ( 0x1 = 1 )
38902.01 1659819 24 ( 0x1 = 1 )
40856.38 817 51208 ( 0xc800 = 51200 )
1196684.53 25529270 48 ( 0x24 = 36 )
*
*Then, if your objects have vtables, just use the dps command to seek out some of the 0x24-byte heap allocations in DumpHeapEntries.log and learn the type of the objects that are taking all the memory.
0:075> dps 3be7f7e8
3be7f7e8 00020006
3be7f7ec 090c01e7
3be7f7f0 0b40fe94 SomeDll!SomeType::`vftable'
3be7f7f4 00000000
3be7f7f8 00000000
It's cheesy but it works :)
A: The following command comes very handy when looking on the stack for C++ objects with vtables, especially when working with release builds when quite a few things get optimized away.
dpp esp Range
Being able to load an arbitrary PE file as dump is neat:
windbg -z mylib.dll
Query GetLastError() with:
!gle
This helps to decode common error codes:
!error error_number
A: Almost 60% of the commands I use everyday..
dv /i /t
?? this
kM (kinda undocumented) generates links to frames
.frame x
!analyze -v
!lmi
~
Explanation
*
*dv /i /t [doc]
*
*dv - display names and values of local variables in the current scope
*/i - specify the kind of variable: local, global, parameter, function, or unknown
*/t - display data type of variables
*?? this [doc]
*
*?? - evaluate C++ expression
*this - C++ this pointer
*kM [doc]
*
*k - display stack back trace
*M - DML mode. Frame numbers are hyperlinks to the particular frame. For more info about kM refer to http://windbg.info/doc/1-common-cmds.html
*.frame x [doc]
*
*Switch to frame number x. 0 being the frame at top of stack, 1 being frame 1 below the 0th frame, and so on.
*To display local variables from another frame on the stack, first switch to that frame - .frame x, then use dv /i /t. By default dv will show info from the top frame.
*!analyze -v [doc1] [doc2 - Using the !analyze Extension]
*
*!analyze - analyze extension. Display information about the current exception or bug check. Note that to run an extension we prefix !.
*-v - verbose output
*!lmi [doc]
*
*!lmi - lmi extension. Display detailed information about a module.
*~ [doc]
*
*~ - Displays status for the specified thread or for all threads in the current process.
A: The "tip" I use most often is one that will save you from having to touch that pesky mouse so often: Alt + 1
Alt + 1 will place focus into the command window so that you can actually type a command and so that up-arrow actually scrolls through command history. However, it doesn't work if your focus is already in the scrollable command history.
Peeve: why the heck are key presses ignored while the focus is in a source window? It's not like you can edit the source code from inside WinDbg. Alt + 1 to the rescue.
A: Do not use WinDbg's .heap -stat command. It will sometimes give you incorrect output. Instead, use DebugDiag's memory reporting.
Having the correct numbers, you can then use WinDbg's .heap -flt ... command.
A: For common, straightforward (static or automatable) routines where the debugger is used, it is very handy to be able to put all the debugger commands in a text command file and run it as input through kd.exe or cdb.exe, callable via a batch script, etc.
Run that whenever you need to do this same old routine, without having to fire up WinDbg and do things manually. Too bad this doesn't work when you aren't sure what you are looking for, or some command parameters need manual analysis to find/get.
A: Platform-independent dump string for managed code which will work for x86/x64:
j $ptrsize = 8 'aS !ds .printf "%mu \n", c+';'aS !ds .printf "%mu \n", 10+'
Here is a sample usage:
0:000> !ds 00000000023620b8
MaxConcurrentInstances
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45"
}
|
Q: How do I programmatically close an InfoPath form in C#? Is it possible to close an InfoPath form programmatically? I know that it can be configured as a form rule / action but I want to close the form via code.
A: Use the ApplicationClass.XDocuments.Close method and pass it your document object:
using System;
using Microsoft.Office.Interop.InfoPath;
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
var app = new ApplicationClass();
var uri = @".\form1.xml";
var doc = app.XDocuments.Open(uri, 0);
app.XDocuments.Close(doc);
}
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127572",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: WPF locbaml-ed application and runtime language switch? I wonder if there is a simple solution to change the language of a WPF application during runtime. I used LocBaml to globalize all the resources. Setting the current thread's UICulture in the App constructor works fine, but when I try to change it a little bit later, it doesn't reflect the changes anymore.
This was actually quite easy with WinForms, but I have no clue how to solve the same problem with WPF.
any ideas?
regards
j.
A: No.
Once you load an assembly and it is bound to your application, you cannot change classes mid-flight. You could create a bootstrapper assembly that loads the current language and, when you change the language, closes and re-opens your application automatically, but I doubt that's what you want or need.
What I did on one of my projects was create a globalized application framework using converters, etc. You can see some of the problems I ran into here and especially this post which shows how it looked. HTH if you decide to go the same way as I did.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to use JVLC (Java bindings for VLC)? I'm trying to use JVLC but I can't seem to get it work. I've downloaded the jar, I installed VLC and passed the -D argument to the JVM telling it where VLC is installed. I also tried:
NativeLibrary.addSearchPath("libvlc", "C:\\Program Files\\VideoLAN\\VLC");
with no luck. I always get:
Exception in thread "main"
java.lang.UnsatisfiedLinkError: Unable
to load library 'libvlc': The
specified module could not be found.
Has anyone made it work?
A: You can get that exception if the DLL you are trying to load requires other DLLs that are not available. Sorry I can't be of more specific help, but it is something to check out. You can use Depends (Dependency Walker) to walk the DLL dependencies.
A: Not sure about that NativeLibrary class. Typically, when using native libraries, you need to set the system property "java.library.path" to the location of your native libraries. As suggested, if your native library (dll, so, etc.) depends on additional native libraries, the OS takes over to resolve these dependencies. The OS has no clue about java.library.path and will begin by searching the OS-specific path for native libraries. On Windows this includes the current PATH environment variable as well as System32 in the Windows directory. On Linux this is the LD_LIBRARY_PATH / ld.conf setup.
Try setting the PATH (LD_LIBRARY_PATH) to point to the same location as java.library.path. The only catch is that you can't set this once your process (the JVM) has launched; it's already too late. You need to have the environment set BEFORE the JVM launches. You can do this via batch files, shell scripts, Ant, or directly from your IDE.
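For example, a minimal Windows batch launcher (paths and jar name are illustrative):
set PATH=%PATH%;C:\Program Files\VideoLAN\VLC
java -Djava.library.path="C:\Program Files\VideoLAN\VLC" -jar jvlc-app.jar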
A: I had the same problem too, and I noticed that it occurred only with a 64-bit JDK/JRE.
Works like a charm with a 32-bit JDK under Win7 x64.
Have a nice coding!
-Sipe
A: You should try
System.load("C:\\Path\\To\\libvlc.dll");
at least to verify that your library can be loaded.
And if not, it may give you useful error messages (it did for me).
(And as Sipe mentioned, you may be using a 64-bit JRE/JDK, in which case libvlc will never be found (it's 32-bit only). In this case you must switch to using a 32-bit JRE/JDK.)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Using Caps Lock as Esc in Mac OS X How do I make Caps Lock work like Esc in Mac OS X?
A: I wasn't happy with any of the answers here, and went looking for a command-line solution.
In macOS Sierra 10.12, Apple introduced a new way for users to remap keys.
*
*No need to fiddle around with system GUIs
*No special privileges are required
*Completely customisable
*No need to install any 3rd-party crap like PCKeyboardHack / Seil / Karabiner / KeyRemap4MacBook / DoubleCommand / NoEjectDelay
If that sounds good to you, take a look at hidutil.
For example, to remap caps-lock to escape, refer to the key table and find that caps-lock has usage code 0x39 and escape has usage code 0x29. Put these codes or'd with the hex value 0x700000000 in the source and dest like this:
hidutil property --set '{"UserKeyMapping":[{"HIDKeyboardModifierMappingSrc":0x700000039,"HIDKeyboardModifierMappingDst":0x700000029}]}'
You may add other mappings in the same command. Personally, I like to remap caps-lock to backspace, and remap backspace to delete:
hidutil property --set '{"UserKeyMapping":[{"HIDKeyboardModifierMappingSrc":0x700000039,"HIDKeyboardModifierMappingDst":0x70000002A}, {"HIDKeyboardModifierMappingSrc":0x70000002A,"HIDKeyboardModifierMappingDst":0x70000004C}]}'
To see the current mapping:
hidutil property --get "UserKeyMapping"
Your changes will be lost at system reboot. If you want them to persist, configure them in a launch agent. Here's mine:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<!-- Place in ~/Library/LaunchAgents/ -->
<!-- launchctl load com.ldaws.CapslockBackspace.plist -->
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.ldaws.CapslockEsc</string>
<key>ProgramArguments</key>
<array>
<string>/usr/bin/hidutil</string>
<string>property</string>
<string>--set</string>
<string>{"UserKeyMapping":[{"HIDKeyboardModifierMappingSrc":0x700000039,"HIDKeyboardModifierMappingDst":0x70000002A},{"HIDKeyboardModifierMappingSrc":0x70000002A,"HIDKeyboardModifierMappingDst":0x70000004C}]}</string>
</array>
<key>RunAtLoad</key>
<true/>
</dict>
</plist>
I've placed this content into a file located at ~/Library/LaunchAgents/com.ldaws.CapslockBackspace.plist and then executed:
launchctl load com.ldaws.CapslockBackspace.plist
A: Edit: As described in this answer, newer versions of MacOS now have native support for rebinding Caps Lock to Escape. Thus it is no longer necessary to install third-party software to achieve this.
Here's my attempt at a comprehensive, visual walk-through answer (with links) of how to achieve this using Seil (formerly known as PCKeyboardHack).
1) First, go into the System Preferences, choose Keyboard, then the Keyboard Tab (first tab), and click Modifier Keys:
In the popup dialog set Caps Lock Key to No Action:
2) Now, click here to download Seil and install it:
3) After the installation you will have a new Application installed (Mountain Lion and newer) and if you are on an older OS you may have to check for a new System Preferences pane:
4) Check the box that says "Change Caps Lock" and enter "53" as the code for the escape key:
And you're done! If it doesn't work immediately, you may need to restart your machine.
Impressed? Want More Control?
You may also want to check out KeyRemap4MacBook which is actually the flagship keyboard remapping tool from pqrs.org - it's also free.
If you like these tools you can make a donation. I have no affiliation with them but I've been using these tools for a long time and have to say the guys over there have been doing an excellent job maintaining these, adding features and fixing bugs.
Here's a screenshot to show a few of the (hundreds of) pre-selectable options:
PQRS also has a great utility called NoEjectDelay that you can use in combination with KeyRemap4MacBook for reprogramming the Eject key. After a little tweaking I have mine set to toggle the AirPort Wifi.
These utilities offer unlimited flexibility when remapping the Mac keyboard. Have fun!
A: Open up Keyboard preferences and click modifier keys... you can change the caps lock key to control, option, escape, or command.
A: In order to actually swap the escape key with the caps lock key (not just map one to the other) using both PCKeyboardHack and KeyRemap4MacBook, you have to follow the instructions in this thread, mapping the caps lock key to a keycode not used by the keyboard but accounted for by KeyRemap4MacBook (e.g. 110). Then, in PCKeyboardHack, select the appropriate option that maps that keycode to escape (in the case of 110, it's "Application Key to Escape"). Here's what your KeyRemap4MacBook preferences should look like (provided you've selected the "show enabled only" checkbox).
I originally attempted to post this information as an edit to cwd's excellent answer, but it was rejected. I encourage anyone who wants to go the route that I describe to first read his/her response.
A: The only thing I know how to do is to map Caps Lock to Control, or Option, or Command. This can be done via the Keyboard & Mouse pane of System Preferences. Click on "Modifier Keys" on the bottom left and you'll be able to remap Caps Lock, Control, Option, and Command, to any of those.
@Craig:
This suggests that Caps Lock can be used as a normal -- that is, non-toggle -- key. On my MacBook, since I have re-mapped Caps Lock to Control, the Caps Lock light never lights up. It simply acts like the Control key.
A: It is now much easier to map the Caps Lock key to Esc with macOS Sierra.
*
*Open System Preferences → Keyboard.
*Click the Modifier Keys button in the bottom right-hand corner.
*Click the drop down box next to the hardware key that you’d like to remap, and select Escape.
*Click OK and close System Preferences.
https://9to5mac.com/2016/10/25/remap-escape-key-action-macbook-pro-macos-sierra-10-12-1-modifier-keys/
A: Since macOS 10.12.1 it is possible to remap Caps Lock to Esc natively (Apple > System Settings… > Keyboard > Keyboard Shortcuts > Modifier Keys in macOS 13, or, before, System Preferences -> Keyboard -> Modifier Keys).
A: It's possible.
Solution 1
From an article on TrueAffection.net.
*
*Download PCKeyboardHack and install it.
*Go to PCKeyboardHack in System Preferences.
*Enable ‘Change Caps Lock’ and set the keycode to 53.
Solution 2
This one doesn't involve patching the keyboard driver, but gives you a Vim-specific solution.
OS X supports mapping the Caps Lock key to a whole bunch of keys, but you have to do it 'by hand', editing .plist files. The process is described in this article. As an addendum to that hint I suggest you first set Caps Lock to None in the System Preferences, then you only need to change one value in the .plist file. Also, you can of course use the Property List Editor instead of going through the XML conversion steps.
The trick is to map the Caps Lock key to the Help key (code 6), which isn't on most keyboards. But if it is, it will be treated as the insert key, which you probably don't use anyway, since you ask about remapping your Caps Lock to prevent stretching your hands ;)
You can then map the Help and the Insert key to Esc in vim.
map <Help> <Esc>
map! <Help> <Esc>
map <Insert> <Esc>
map! <Insert> <Esc>
This will work for gvim (Vim.app). I didn't get it to work with vim in the Terminal and I haven't tested it with MacVim.
So, it's rather a complicated, half-baked solution or installing a third-party piece of hackery. Your pick ;)
Edit: Just noticed solution 3, if you're using MacVim you can use Ctrl, Option and Command as Esc. With the System Preferences it's trivial to map Caps Lock to one of those keys.
A: Seil doesn't work on macOS Sierra yet, so I'm using Karabiner Elements, download from https://pqrs.org/latest/karabiner-elements-latest.dmg.
Either use the GUI or put the following into ~/.karabiner.d/configuration/karabiner.json:
{
"profiles" : [
{
"name" : "Default profile",
"selected" : true,
"simple_modifications" : {
"caps_lock" : "escape"
}
}
]
}
A: Seil isn't yet available on macOS Sierra (10.12 beta). As such, I've been using Keyboard Maestro with these settings:
Credit to this github comment: https://github.com/tekezo/Seil/issues/68#issuecomment-230131664
A: Having tried several of these solutions, I have some notes:
DoubleCommand will not allow you to swap esc and caps-lock.
PCKeyboardHack will allow you to map capslock to escape, but does not have the capability to map escape to capslock. Recent versions will allow you to perform a complete swap by editing both keys.
This may or may not be sufficient for your needs (I know it is for mine).
A: In case you don't want to install a third-party app and you really only care about vim inside iTerm, the following works:
Remap CapsLock to Help as described here.
Short version: use plutil or similar to edit ~/Library/Preferences/ByHost/.GlobalPreferences*.plist, it should look similar to this:
<key>HIDKeyboardModifierMappingDst</key>
<integer>6</integer>
<key>HIDKeyboardModifierMappingSrc</key>
<integer>0</integer>
Restart! A simple log-out and log-in did not work for me.
In iTerm, add a new key mapping for Help: send hex code 0x1b, which corresponds to Escape.
I know this is not exactly what was asked for, but I assume the intent of many people looking for a solution like this is actually this more specialized variant.
A: You can also use DoubleCommand to remap this, and other keys.
IIRC, it will map Caps Lock to Esc.
A: Karabiner-Elements
A powerful and stable keyboard customizer for macOS. (freeware)
https://pqrs.org/osx/karabiner/index.html
Worked for me for Mojave to change caps-lock to backspace
A: With the latest Ventura update, System Settings changed.
You will find it on
System Settings > Keyboard > Keyboard Shortcuts > Modifier Keys
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127591",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "504"
}
|
Q: How to make an International Soundex? E.g. the Soundex algorithm is optimized for English. Is there a more universal algorithm that would apply across large families of languages?
A: SOUNDEX is indeed English-oriented. Two others that take a wider variety of phonetic differences into account are: Double Metaphone and NYSIIS.
They produce encodings into a much larger possible space than SOUNDEX does. Double Metaphone, specifically, includes reductions with the express purpose of handling alternate pronunciations based on more languages than English.
I did a presentation on fuzzy string matching recently, the slides may be helpful.
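For reference, here is a minimal, simplified classic Soundex in Java; the hard-coded digit table is exactly the English-specific part that Double Metaphone and NYSIIS improve on. (This is a sketch: it omits full Soundex's rule that codes separated by H or W collapse together.)
public class Soundex {
    // One code digit per letter A..Z; vowels and H, W, Y map to '0' (skipped).
    private static final String CODES = "01230120022455012623010202";

    public static String encode(String word) {
        String w = word.toUpperCase().replaceAll("[^A-Z]", "");
        if (w.length() == 0) return "";
        StringBuilder out = new StringBuilder();
        out.append(w.charAt(0));                      // keep the first letter
        char last = CODES.charAt(w.charAt(0) - 'A');  // suppress its own code
        for (int i = 1; i < w.length() && out.length() < 4; i++) {
            char c = CODES.charAt(w.charAt(i) - 'A');
            if (c != '0' && c != last) out.append(c); // skip vowels and repeats
            last = c;
        }
        while (out.length() < 4) out.append('0');     // pad to letter + 3 digits
        return out.toString();                        // e.g. "Robert" -> "R163"
    }
}
Encoding Spanish names with this quickly shows the problem: "Jiménez" and "Giménez" sound identical in Spanish but come out as J52 and G52, which is the kind of case the broader algorithms handle better.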
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
}
|
Q: Flex: Loading assets into externally loaded modules So, I have a Flex project that loads a Module using the ModuleManager - not the module loader. The problem that I'm having is that to load an external asset (like a video or image), the path to that asset has to be relative to the Module swf...not relative to the swf that loaded the module.
The question is - How can I load an asset into a loaded module using a path relative to the parent swf, not the module swf?
Arg! So in digging through the SWFLoader Class I found this chunk of code in private function loadContent:
// make relative paths relative to the SWF loading it, not the top-level SWF
if (!(url.indexOf(":") > -1 || url.indexOf("/") == 0 || url.indexOf("\\") == 0))
{
var rootURL:String;
if (SystemManagerGlobals.bootstrapLoaderInfoURL != null && SystemManagerGlobals.bootstrapLoaderInfoURL != "")
rootURL = SystemManagerGlobals.bootstrapLoaderInfoURL;
else if (root)
rootURL = LoaderUtil.normalizeURL(root.loaderInfo);
else if (systemManager)
rootURL = LoaderUtil.normalizeURL(DisplayObject(systemManager).loaderInfo);
if (rootURL)
{
var lastIndex:int = Math.max(rootURL.lastIndexOf("\\"), rootURL.lastIndexOf("/"));
if (lastIndex != -1)
url = rootURL.substr(0, lastIndex + 1) + url;
}
}
}
So apparently, Adobe has gone through the extra effort to make images load relative to the actual swf and not the top-level swf (with no flag to choose otherwise...), so I guess I should submit a feature request for some sort of "load relative to swf" flag, edit the SWFLoader directly, or maybe make everything relative to the individual swf and not the top level...any suggestions?
A: You can import mx.core.Application and then use Application.application.url to get the path of the host application in your module and use that as the basis for building the URLs.
For help in dealing with URLs, see the URLUtil class in the standard Flex libraries and the URI class in the as3corelib project.
A: You can use this.url in the module and use this as a baseURL.
var urlParts:Array = this.url.split("/");
urlParts.pop();
baseURL = urlParts.join("/");
Alert.show(baseURL);
and use {baseURL + "/location/file.ext"} instead of /location/file.ext
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Transferring extended ascii characters with unknown encoding to a Twisted XMLRPC from C# Basically I want to pass a string which contains Spanish text that could be in one of several encodings (Latin-1, CP-1252, or UTF-8 to name a few). Once it gets to the XMLRPC I can detect the encoding, but I won't know it before then. C#, by default seems to be killing any characters outside of ASCII. I've gotten around the problem by base64-encoding the string but I'd really love to NOT do that.
I'm using CookComputing.XmlRpc... Here's a code snippet of my interface:
public interface ISpanishAnalyzer
{
[XmlRpcMethod("analyzeSpanishString")]
int analyzeSpanishString(string text);
}
Any help would be appreciated. Thanks!
A: I don't think there is really a better way than base64 encoding. As long as you do not know the encoding, you have no other option than to handle it as a byte array. The only change I would suggest is to make this explicit by using a byte[] parameter instead of a string and letting the XmlRpc library take care of the base64 encoding (assuming that it supports this).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do I enable applets on Mac Firefox 3.0.1? I have Java 1.5.0 installed on a Mac OS machine. I have the Java Embedding Plugin 0.9.6.4 installed and java enabled on the preferences panel. When I go to http://java.com/en/download/help/testvm.xml the applet appears blank. Does anyone has an idea of what is happening?
A: I have found the solution. The Java Embedding Plugin uses the installed JVM which itself gets its proxy definitions from MacOS and not from Firefox. So, proxy definitions have to be defined for both MacOS/Safari and Firefox.
I've also updated Firefox and selected Java 1.5 on the Java preferences panel. However I'm not sure if these two actions helped solve the issue.
A: Firefox 3.0.1 on Mac OS X 10.5 should support java applets without any extra Java Embedding Plugin.
A: Applets that require Java 1.6 don't work in Mac OS X, even though Java 1.6 (aka Java SE 6) is installed and at the top of the list in the Java preferences.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Editing XML as a dictionary in python? I'm trying to generate customized xml files from a template xml file in python.
Conceptually, I want to read in the template xml, remove some elements, change some text attributes, and write the new xml out to a file. I wanted it to work something like this:
conf_base = ConvertXmlToDict('config-template.xml')
conf_base_dict = conf_base.UnWrap()
del conf_base_dict['root-name']['level1-name']['leaf1']
del conf_base_dict['root-name']['level1-name']['leaf2']
conf_new = ConvertDictToXml(conf_base_dict)
now I want to write to file, but I don't see how to get to
ElementTree.ElementTree.write()
conf_new.write('config-new.xml')
Is there some way to do this, or can someone suggest doing this a different way?
A: For easy manipulation of XML in python, I like the Beautiful Soup library. It works something like this:
Sample XML File:
<root>
<level1>leaf1</level1>
<level2>leaf2</level2>
</root>
Python code:
from BeautifulSoup import BeautifulStoneSoup, Tag, NavigableString
soup = BeautifulStoneSoup(open('config-template.xml').read()) # parse the xml file (pass the markup, not the file name)
soup.contents[0].name
# u'root'
You can use the node names as methods:
soup.root.contents[0].name
# u'level1'
It is also possible to use regexes:
import re
tags_starting_with_level = soup.findAll(re.compile('^level'))
for tag in tags_starting_with_level: print tag.name
# level1
# level2
Adding and inserting new nodes is pretty straightforward:
# build and insert a new level with a new leaf
level3 = Tag(soup, 'level3')
level3.insert(0, NavigableString('leaf3'))
soup.root.insert(2, level3)
print soup.prettify()
# <root>
# <level1>
# leaf1
# </level1>
# <level2>
# leaf2
# </level2>
# <level3>
# leaf3
# </level3>
# </root>
A: My modification of Daniel's answer, to give a marginally neater dictionary:
def xml_to_dictionary(element):
l = len(namespace)
dictionary={}
tag = element.tag[l:]
if element.text:
if (element.text == ' '):
dictionary[tag] = {}
else:
dictionary[tag] = element.text
children = element.getchildren()
if children:
subdictionary = {}
for child in children:
for k,v in xml_to_dictionary(child).items():
if k in subdictionary:
if ( isinstance(subdictionary[k], list)):
subdictionary[k].append(v)
else:
subdictionary[k] = [subdictionary[k], v]
else:
subdictionary[k] = v
if (dictionary[tag] == {}):
dictionary[tag] = subdictionary
else:
dictionary[tag] = [dictionary[tag], subdictionary]
if element.attrib:
attribs = {}
for k,v in element.attrib.items():
attribs[k] = v
if (dictionary[tag] == {}):
dictionary[tag] = attribs
else:
dictionary[tag] = [dictionary[tag], attribs]
return dictionary
namespace is the xmlns string, including braces, that ElementTree prepends to all tags, so here I've cleared it as there is one namespace for the entire document
NB that I adjusted the raw xml too, so that 'empty' tags would produce at most a ' ' text property in the ElementTree representation
spacepattern = re.compile(r'\s+')
mydictionary = xml_to_dictionary(ElementTree.XML(spacepattern.sub(' ', content)))
would give for instance
{'note': {'to': 'Tove',
'from': 'Jani',
'heading': 'Reminder',
'body': "Don't forget me this weekend!"}}
it's designed for specific xml that is basically equivalent to json, should handle element attributes such as
<elementName attributeName='attributeContent'>elementContent</elementName>
too
there's the possibility of merging the attribute dictionary / subtag dictionary similarly to how repeat subtags are merged, although nesting the lists seems kind of appropriate :-)
A: This'll get you a dict minus attributes. I don't know if this is useful to anyone. I was looking for an xml-to-dict solution myself when I came up with this.
import xml.etree.ElementTree as etree
tree = etree.parse('test.xml')
root = tree.getroot()
def xml_to_dict(el):
d={}
if el.text:
d[el.tag] = el.text
else:
d[el.tag] = {}
children = el.getchildren()
if children:
d[el.tag] = map(xml_to_dict, children)
return d
This: http://www.w3schools.com/XML/note.xml
<note>
<to>Tove</to>
<from>Jani</from>
<heading>Reminder</heading>
<body>Don't forget me this weekend!</body>
</note>
Would equal this:
{'note': [{'to': 'Tove'},
{'from': 'Jani'},
{'heading': 'Reminder'},
{'body': "Don't forget me this weekend!"}]}
A: I'm not sure if converting the info set to nested dicts first is easier. Using ElementTree, you can do this:
import xml.etree.ElementTree as ET
doc = ET.parse("template.xml")
lvl1 = doc.findall("level1-name")[0]
lvl1.remove(lvl1.find("leaf1"))
lvl1.remove(lvl1.find("leaf2"))
# or use del lvl1[idx]
doc.write("config-new.xml")
ElementTree was designed so that you don't have to convert your XML trees to lists and attributes first, since it uses exactly that internally.
It also supports a small subset of XPath.
A: Adding this line
d.update(('@' + k, v) for k, v in el.attrib.iteritems())
to user247686's code, you can have node attributes too.
Found it in this post https://stackoverflow.com/a/7684581/1395962
Example:
import xml.etree.ElementTree as etree
from urllib import urlopen
xml_file = "http://your_xml_url"
tree = etree.parse(urlopen(xml_file))
root = tree.getroot()
def xml_to_dict(el):
d={}
if el.text:
d[el.tag] = el.text
else:
d[el.tag] = {}
children = el.getchildren()
if children:
d[el.tag] = map(xml_to_dict, children)
d.update(('@' + k, v) for k, v in el.attrib.iteritems())
return d
Call as
xml_to_dict(root)
A: Have you tried this?
print xml.etree.ElementTree.tostring( conf_new )
A: The most direct way to me:
root = ET.parse(xh)
data = root.getroot()
xdic = {}
if data is not None:
for part in data.getchildren():
xdic[part.tag] = part.text
A: XML has a rich infoset, and it takes some special tricks to represent that in a Python dictionary. Elements are ordered, attributes are distinguished from element bodies, etc.
One project to handle round-trips between XML and Python dictionaries, with some configuration options to handle the tradeoffs in different ways is XML Support in Pickling Tools. Version 1.3 and newer is required. It isn't pure Python (and in fact is designed to make C++ / Python interaction easier), but it might be appropriate for various use cases.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127606",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: What are the access restrictions on accessing a DSN We are running part of our app as a Windows service and it needs to be able to access DSNs in order to import through ODBC. However, there seem to be a lot of restrictions, found through trial and error, on what DSNs it can access. For example it seems that it cannot
1. access a system DSN unless the account that is running the service has admin privileges (I get an Access Denied error when trying to connect)
2. access a user DSN that was created by a different user (this one is understandable).
3. access a file DSN across the network
I've read that the purpose of a file DSN is to allow other computers to use it to connect, however I can't seem to make that work.
So does anyone know, or know where I can find out, what all the rules and restrictions on accessing a DSN are when using a Windows service?
thanks
A: I think you've already discovered the three main rules yourself. :-)
Except that you probably don't need admin privileges for your service account. IANANA (I am not a network administrator), but your service account probably just needs read access to one of the ODBC files or directories.
A: This is somewhere between your #1 and #2: sometimes correct file permissions are also necessary. I once had troubles on a Vista machine connecting to a DB2 DSN because, for whatever reason (maybe to write out temp files; although I don't know why it would do such a thing in this location instead of a user-specific one), the driver needed write access to the directory where IBM had installed the client binaries and libs, which had been done by an Administrator and was in the root of the C drive.
A: You cannot connect to mapped drives with a service. A mapped drive has to interact with memory called the desktop heap, which tracks the icons on the desktop, and services do not have access to that memory. If you have to use a DSN, create a system DSN. Better yet, use a connection string, store it in the app.config, and use the encryption API to encrypt the user name and password.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127608",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How can I get a webservice connection to work from Access 2003 runtime install? I have an Access 2003 application that communicates with a Webservice to get articles from a central database. It runs fine in a full Access install, but when I make a runtime install with the Package Wizard included in the Access 2003 developer extensions, it fails with the error message "429 cannot create an object in the active-x component"
The DLL used for the webservice communication is mssoap30.dll. That DLL doesn't ship with the runtime install, and when I try to manually add it to the runtime install it is there, but when I try to register the DLL it fails with the message: "The register failed reason failed to initiate a DLL". Same result when I put the DLL in the application's folder or in Microsoft shared/Office11. Is there anyone who has made an Access runtime application with web service communication?
A: If mssoap30.dll is failing to register, that probably means mssoap30.dll itself has dependencies that are missing.
You can download the SOAP Toolkit Installer here:
http://www.microsoft.com/downloads/details.aspx?FamilyID=ba611554-5943-444c-b53c-c0a450b7013c&DisplayLang=en
It's only 1.4 MB, and it should fix the problem. Depending upon what you're using to build your installer, you should be able to embed the SOAP installer and run it during installation (or else just give both files to your users and tell them to install both - that never killed anybody).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to programmatically change the work flow between SSIS control flow tasks? I have an SSIS package, which depending on a boolean variable, should either go to a Script Task or an Email task.(Note: the paths are coming from a Script Task)
I recall in the old dts designer there was a way to do this via code. What is the proper way to accomplish this in SSIS?
A:
Isn't a Conditional Split a data flow task, which takes a row of data and pushes it in one of two directions according to some property of the data???
Oops, that is correct. I found this blog entry which explains how to do proper control flow conditional branching based on boolean values.
A: In control flow, drag the green arrow to the email task, then right-click on it and you will see you can set it from 'Completed' to 'Conditional', then you can set an expression on the condition. The arrow will then turn blue. You should then be able to drag another arrow to the other script, and set that to conditional.
I have this set-up often, many times you want to email if a certain condition applies. The standard syntax for the conditional constraints is something like:
@[User::SendEmail] == True
Assuming your SendEmail variable is a boolean. If you use anything else, just construct an expression that evaluates to either true or false.
Remember to set the conditionals to OR instead of AND, otherwise it won't complete unless it can take both routes!
A: A Conditional Split task does what you want. Add the Conditional Split task, add in an additional output (a default output is provided), and set up the Condition for that output. Then just tie the outputs (default and new) to the Script and Email tasks as appropriate.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: WinForms in Excel (2002) and add-ons Good morning,
I am about to start writing an Excel add-in for Excel 2002. The add-in needs to call a form. Obviously, I can write the form within VBA.
My question is -- is there an easy/good way of calling a .NET form (I am using 3.5) from Excel, and have the form be able to write stuff back to Excel the same way a native Excel 2002 form would?
A: Office XP... yes, functionally you can manipulate Excel from an add-in or the other way around, but obviously it requires more coding compared to VBA.
The most powerful solution is to use OLE automation, but it is not the easiest one to code and support.
If you really need it and have this option - get something like http://www.add-in-express.com/ - it gives a nice wrapper over Excel automation and addresses most common problems. Anyway, add-in-express looks like the most mature product supporting Office XP and worth checking out to get better idea about how .Net code and Excel can interact.
There are multiple ways you can implement data exchange between Excel and .Net code in an add-in: OLE automation, calls to COM functions from VBA, RTD; not sure if anyone still uses DDE. There is some setup effort, and there are programming challenges and maintenance/stability problems for each of those.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Which design is better for a class that simply runs a self-contained computation? I'm currently working on a class that calculates the difference between two objects. I'm trying to decide what the best design for this class would be. I see two options:
1) Single-use class instance. Takes the objects to diff in the constructor and calculates the diff for that.
public class MyObjDiffer {
public MyObjDiffer(MyObj o1, MyObj o2) {
// Calculate diff here and store results in member variables
}
public boolean areObjectsDifferent() {
// ...
}
public Vector getOnlyInObj1() {
// ...
}
public Vector getOnlyInObj2() {
// ...
}
// ...
}
2) Re-usable class instance. Constructor takes no arguments. Has a "calculateDiff()" method that takes the objects to diff, and returns the results.
public class MyObjDiffer {
public MyObjDiffer() { }
public DiffResults getResults(MyObj o1, MyObj o2) {
// calculate and return the results. Nothing is stored in this class's members.
}
}
public class DiffResults {
public boolean areObjectsDifferent() {
// ...
}
public Vector getOnlyInObj1() {
// ...
}
public Vector getOnlyInObj2() {
// ...
}
}
The diffing will be fairly complex (details don't matter for the question), so there will need to be a number of helper functions. If I take solution 1 then I can store the data in member variables and don't have to pass everything around. It's slightly more compact, as everything is handled within a single class.
However, conceptually, it seems weird that a "Differ" would be specific to a certain set of results. Option 2 splits the results from the logic that actually calculates them.
EDIT: Option 2 also provides the ability to make the "MyObjDiffer" class static. Thanks kitsune, I forgot to mention that.
I'm having trouble seeing any significant pro or con to either option. I figure this kind of thing (a class that just handles some one-shot calculation) has to come up fairly often, and maybe I'm missing something. So, I figured I'd pose the question to the cloud. Are there significant pros or cons to one or the other option here? Is one inherently better? Does it matter?
I am doing this in Java, so there might be some restrictions on the possibilities, but the overall question of design is probably language-agnostic.
A: Use Object-Oriented Programming
Use option 2, but do not make it static.
The Strategy Pattern
This way, an instance of MyObjDiffer can be passed to anyone that needs a Strategy for computing the difference between objects.
If, down the road, you find that different rules are used for computation in different contexts, you can create a new strategy to suit. With your code as it stands, you'd extend MyObjDiffer and override its methods, which is certainly workable. A better approach would be to define an interface, and have MyObjDiffer as one implementation.
Any decent refactoring tool will be able to "extract an interface" from MyObjDiffer and replace references to that type with the interface type at some later time if you want to delay the decision. Using "Option 2" with instance methods, rather than class procedures, gives you that flexibility.
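A minimal sketch of that extraction (names here are illustrative, and DiffResults is assumed to be constructible by the differ):
public interface DiffStrategy {
    DiffResults getResults(MyObj o1, MyObj o2);
}

// The existing differ becomes one interchangeable implementation.
public class MyObjDiffer implements DiffStrategy {
    public DiffResults getResults(MyObj o1, MyObj o2) {
        DiffResults results = new DiffResults();
        // ... compare o1 and o2 field by field, populate results ...
        return results;
    }
}

// Callers depend only on the interface, so the algorithm can be swapped.
public class ReportGenerator {
    private final DiffStrategy differ;
    public ReportGenerator(DiffStrategy differ) { this.differ = differ; }
    public void report(MyObj a, MyObj b) {
        DiffResults results = differ.getResults(a, b);
        // ... render results ...
    }
}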
Configure an Instance
Even if you never need to write a new comparison method, you might find that specifying options to tailor the behavior of your basic method is useful. If you think about using the "diff" command to compare text files, you'll remember how many different options there are: whitespace- and case-sensitivity, output options, etc. The best analog to this in OO programming is to consider each diff process as an object, with options set as properties on that object.
A: I can't really say I have firm reasons why it's the 'best' approach, but I usually write classes for objects that you can have a 'conversation' with. So it would be like your 'single use' option 1, except that by calling a setter, you would 'reset' it for another use.
Rather than supplying the implementation (which is pretty obvious), here's a sample invocation:
MyComparer cmp = new MyComparer(obj1, obj2);
boolean match = cmp.isMatch();
cmp.setSubjects(obj3,obj4);
List uniques1 = cmp.getOnlyIn(MyComparer.FIRST);
cmp.setSubject(MyComparer.SECOND,obj5);
List uniques = cmp.getOnlyIn(MyComparer.SECOND);
... and so on.
This way, the caller gets to decide whether they want to instantiate lots of objects, or keep reusing the one.
It's particularly useful if the object needs a lot of setup. Lets say your comparer takes options. There could be many. Set it up once, then use it many times.
// set up cmp with options and the master object
MyComparer cmp = new MyComparer();
cmp.setIgnoreCase(true);
cmp.setIgnoreTrailingWhitespace(false);
cmp.setSubject(MyComparer.FIRST,canonicalSubject);
// find items that are in the testSubjects objects,
// but not in the master.
List extraItems = new ArrayList();
for (Iterator it=testSubjects.iterator(); it.hasNext(); ) {
cmp.setSubject(MyComparer.SECOND,it.next());
extraItems.addAll(cmp.getOnlyIn(MyComparer.SECOND));
}
Edit: BTW I called it MyComparer rather than MyDiffer because it seemed more natural to have an isMatch() method than an isDifferent() method.
A: You want solution #2 for a number of reasons. And you don't want it to be static.
While static seems like fun, it's a maintenance nightmare when you come up with either (a) a new algorithm with the same requirements, or (b) new requirements.
A first-class object (without much internal state) allows you to evolve into a class hierarchy of different differs -- some slower, some faster, some with more memory, some with less memory, some for old requirements, some for new requirements.
Some of your differs may wind up with complicated internal state or memory, or incremental diffing or hash-code-based diffing. All kinds of possibilities might exist.
A reusable object allows you to pick your differ at application start-up time using a properties file.
In the long run, you want to minimize the number of new operations that are scattered throughout your application. You'd like to have your new operations focused in places where you can find and control them. To change from old differ algorithm to new differ algorithm, you'd like to do the following.
*
*Write the new subclass.
*Update a properties file to start using the new subclass.
And be completely confident that there wasn't some hidden d= new MyObjDiffer( x, y ) tucked away that you didn't know about.
You want to use d= theDiffer.getResults( x, y ) everywhere.
What the Java libraries do is they have a DifferFactory that's static. The factory checks the properties and emits the correct Differ.
DifferFactory df= new DifferFactory();
MyObjDiffer mod= df.getDiffer();
mod.getResults( x, y );
The Factory typically caches the single copy -- it doesn't have to physically read the properties every time getDiffer is called.
This design gives you ultimate flexibility in the future. At it looks like other parts of the Java libraries.
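A rough sketch of that factory (the property name "differ.class" and the reflective construction are my assumptions, not a standard API):
public class DifferFactory {
    private static MyObjDiffer cached;

    public MyObjDiffer getDiffer() {
        synchronized (DifferFactory.class) {
            if (cached == null) {
                String clsName = System.getProperty("differ.class", "MyObjDiffer");
                try {
                    cached = (MyObjDiffer) Class.forName(clsName).newInstance();
                } catch (Exception e) {
                    throw new RuntimeException("Cannot create differ: " + clsName, e);
                }
            }
            return cached;
        }
    }
}
Subclasses of MyObjDiffer can then be selected purely by changing the property, with no scattered new MyObjDiffer(...) calls to hunt down.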
A: I'd take numero 2 and reflect on whether I should make this static.
A: It depends on how you're going to use diffs. In my mind, it makes sense to treat diffs as a logical entity because it needs to support some operations like 'getDiffString()', or 'numHunks()', or 'apply()'. I might take your first one and do it more like this:
public class Diff
{
public Diff(String path1, String path2)
{
// get diff
if (same)
throw new EmptyDiffException();
}
public String getDiffString()
{
}
public int numHunks()
{
}
public boolean apply(String path1)
{
// try to apply diff as patch to file at path1. Return
// whether the patch applied successfully or not.
}
public boolean merge(Diff diff)
{
// similar to apply(), but do merge yourself with another diff
}
}
Using a diff object like this also might lend itself to things like keeping a stack of patches, or serializing to a compressed archive, maybe an "undo" queue, and so on.
A: Why are you writing a class whose only purpose is to calculate the difference between two objects? That sounds like a task either for a static function or a member function of the class.
A: I would go for a static constructor method, something like.
Diffs diffs = Diffs.calculateDifferences(foo, bar);
In this way, it's clear when you're calculating the differences, and there is no way to misuse the object's interface.
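A sketch of how that might look (the internals are assumptions; only the static creator is the point):
import java.util.Vector;

public class Diffs {
    private final Vector onlyInFirst;
    private final Vector onlyInSecond;

    private Diffs(Vector onlyInFirst, Vector onlyInSecond) {
        this.onlyInFirst = onlyInFirst;
        this.onlyInSecond = onlyInSecond;
    }

    public static Diffs calculateDifferences(MyObj foo, MyObj bar) {
        Vector onlyInFirst = new Vector();
        Vector onlyInSecond = new Vector();
        // ... walk foo and bar once, filling the two vectors ...
        return new Diffs(onlyInFirst, onlyInSecond);
    }

    public boolean areObjectsDifferent() {
        return !onlyInFirst.isEmpty() || !onlyInSecond.isEmpty();
    }
}
The private constructor makes the static creator the only way in, so a Diffs object can never exist in a half-calculated state.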
A: I like the idea of explicitly starting the work rather than having it occur on instantiation. Also, I think the results are substantial enough to warrant their own class. Your first design isn't as clean to me. Someone using this class would have to understand that after performing the calculation some other class members are now holding the results. Option 2 is more clear about what is happening.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127625",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Why can't I install DBD::mysql so I can use it with Maatkit? I'm trying to install Maatkit following the maatkit instructions. I can't get past having to install DBD::mysql. "Warning: prerequisite DBD::mysql 1 not found."
When I try to install DBD::mysql from cpan, I get very helpful "make had returned bad status, install seems impossible".
Perl is "v5.8.8 built for darwin-thread-multi-2level", the one that came with OS X. I also tried building from source with same result.
A: We need more of the error message. Most likely, you are missing the MySQL client development files. I don't know how to install these on OSX. Also see this older post on OSX 10.5.2 , in which some other failures with the mysql client libraries are found.
Possibly post this question with more parts of your error message at perlmonks.org, if stackoverflow doesn't allow for convenient pasting of your make session or rather the last 20 or 10 lines of it.
Some more Googling with site:perlmonks.org also finds this post which has some more details on things to watch out for when installing DBD::MySQL. Depending on how comfortable you feel with the installation, you might want to manually run the tests, supplying a test database and test user or even skip testing the module.
A: After a bit more googling, this worked for me:
sudo ln -s /usr/local/mysql/lib /usr/local/mysql/lib/mysql
sudo ln -s /usr/local/mysql/include /usr/local/mysql/include/mysql
sudo perl -MCPAN -e 'install Bundle::DBD::mysql'
press enter a bunch of times, then in your maatkit folder:
perl Makefile.PL
sudo make install
and you'll find the mk-* programs in /usr/local/bin/
A: You will want to install MySQL first. I usually use the binary packages they provide for OS X. The packages do include the headers and MySQL client libraries which DBD::MySQL requires. Once the MySQL package is installed, DBD::MySQL should install without issue.
A: Here is my output:
$ perl Makefile.PL
Checking if your kit is complete...
Looks good
Warning: prerequisite DBD::mysql 1 not found.
Writing Makefile for maatkit
$ mysql --version
mysql Ver 14.12 Distrib 5.0.51b, for apple-darwin9.0.0b5 (i686) using readline 5.0
A: I notice that there are in fact DBD::MySQL packages in the fink repositories. For example:
ayaz@ayazs-macbook$ fink list | grep -i 'dbd-mysql'
dbd-mysql-pm586 3.0008-10 Perl5 Database Interface to MySQL
dbd-mysql-pm588 3.0008-10 Perl5 Database Interface to MySQL
Perhaps installing through fink one of those packages may help alleviate your troubles.
Also, and I cannot be certain of this, you may want to install for MySQL-5.x (if you have that version installed) the mysql15-dev and mysql15-shlibs packages. I installed those through fink thus:
$ sudo fink --use-binary-dist install mysql15-dev
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: What are the advantages/disadvantages for distributing multi stage tasks via JMS or JavaSpaces? When trying to distribute work that requires a multiple stage processing pipeline what are the communication, synchronization and throughput costs limitations in JMS vs JavaSpaces?
A: If you want SEDA, sending messages from stage to stage, then JMS implementations are typically much faster and more scalable, since MOMs are designed to not require locks so they can be highly asynchronous and concurrent. With JMS you can setup a consumer on startup and the message broker will typically push messages to your application ASAP so that there are many in-memory objects available at any time to be processed as soon as your application can process them - avoiding any network round trips or locking etc. See for example how prefetch works with ActiveMQ
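To make the consuming side concrete, a bare-bones stage listener with the standard javax.jms API might look like this (a sketch: the queue name and the processing step are placeholders, and obtaining the Connection is broker-specific):
import javax.jms.*;

public class StageConsumer implements MessageListener {
    public void onMessage(Message message) {
        try {
            TextMessage text = (TextMessage) message;
            // Do this stage's work, then typically send to the next queue.
            process(text.getText());
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }

    private void process(String payload) { /* stage-specific work */ }

    // Wiring; the Connection comes from your broker's ConnectionFactory.
    public static void attach(Connection connection) throws JMSException {
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("stage1"));
        consumer.setMessageListener(new StageConsumer());
        connection.start();
    }
}
The broker pushes messages into onMessage as fast as the stage can take them, which is the prefetch behaviour described above.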
Using JavaSpaces for messaging tends to be less efficient as they are generally implemented using a more database-centric approach of using locks with read/writes to entries etc. So you tend to query for objects then process them with JavaSpaces which tends to be a bit more chatty and less efficient for messaging.
The big win of the JavaSpaces approach though is if you want shared state; you can use a JavaSpace as a kinda database. Though maybe if you really want a database, you could use a relational database with JMS; but JavaSpace folks like to use a single system for shared state and messaging.
FWIW there's often no silver bullet with middleware; sometimes in-memory SEDA is all you need, sometimes JMS, sometimes a relational database, sometimes files in a directory. It totally depends on your requirements, scalability, throughput, reliability and so forth. I tend to recommend to folks to hide middleware APIs from their code so that they can switch to whatever middleware they want easily via a simple one line config change such as with using Apache Camel
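One way to do that hiding, sketched in Java (names are illustrative): the application codes against a tiny interface, and the JMS, JavaSpaces, or in-memory choice lives behind it.
public interface WorkChannel {
    void send(Object work);
    Object receive() throws InterruptedException;
}

// In-memory implementation; a JMS- or JavaSpaces-backed class would
// implement the same interface, so swapping is a one-line config change.
public class InMemoryWorkChannel implements WorkChannel {
    private final java.util.concurrent.BlockingQueue<Object> queue =
            new java.util.concurrent.LinkedBlockingQueue<Object>();

    public void send(Object work) { queue.add(work); }

    public Object receive() throws InterruptedException { return queue.take(); }
}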
A: JMS is API, not product. It cannot have any "communication, synchronization and throughput costs". Specific implementation of JMS (Weblogic, JBoss, Tibco, ...) can.
There are no synchronization functions in JMS, btw -- queue is queue, you cannot make one message (in one queue) wait for another message (in another queue).
A: One other point to consider: JMS queues don't provide the ability to block based on size, so a pure SEDA implementation has a hard time working with pure JMS queues, as it relies on the queues 'filling up' and applying back pressure on upstream stages.
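For contrast, a plain in-memory SEDA stage gets that back pressure for free from a bounded queue; a minimal sketch using java.util.concurrent:
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedStage {
    // put() blocks once 100 items are pending, so a fast upstream stage
    // is throttled automatically -- the behaviour plain JMS queues lack.
    private final BlockingQueue<String> queue =
            new ArrayBlockingQueue<String>(100);

    public void submit(String work) throws InterruptedException {
        queue.put(work);
    }

    public String nextItem() throws InterruptedException {
        return queue.take();
    }
}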
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Is there a good tool for MySQL that will help me optimise my queries and index settings? I use MySQL in a fairly complex web site (PHP driven).
Ideally, there would be a tool I could use that would help me test the SQL queries I am using and suggest better table indexes that will improve performance and avoid table scans.
Failing that, something that will tell me exactly what each query is up to, so I can perform the optimisation myself.
Edit: a simple guide to understanding the output from EXPLAIN ... would also be useful.
Thank you.
A: The simplest first step is to enable the Slow Query Log and see which queries are slow, then try to analyze them as suggested.
A: EverSQL
https://www.eversql.com
It will analyse your slow queries and generate indexes and give you some tips.
A: There are probably query analyzers out there, but for a simple first cut at it
use the mysql command line, and type "explain select * from foo where bar = 'abc'". Make sure your most common queries are using indexes, try to avoid sequential scans or sorts of big tables.
A: Here's some info about EXPLAIN (referenced from the High Performance MySQL book from O'Reilly):
When you run an EXPLAIN on a query, it tells you everything MySQL knows about that query in the form of reports for each table involved in the query.
Each of these reports will tell you...
*
*the ID of the table (in the query)
*the table's role in a larger selection (if applicable, might just say SIMPLE if it's only one table)
*the name of the table (duh)
*the join type (if applicable, defaults to const)
*a list of indexes on the table (or NULL if none), possible_keys
*the name of the index that MySQL decided to use, key
*the size of the key value (in bytes)
*ref shows the cols or values used to match against the key
*rows is the number of rows that MySQL thinks it needs to examine in order to satisfy the query. This should be kept as close to your calculated minimum as possible!
*...then any extra information MySQL wishes to convey
The book is completely awesome at providing information like this, so if you haven't already, get your boss to sign off on a purchase.
Otherwise, I hope some more knowledgeable SO user can help :)
A: You should look into Maatkit, which is an open source toolkit for doing all sorts of MySQL tasks. Without more information about precisely what you're trying to tune, it's hard to tell you which tools you'd be using and how, but the documentation is excellent and it covers a lot of applications.
A: The tool I use for the rest of my sql tweaking (SQLyog) has a new version that includes a profiler, which is awesome! (I don't work for them - I just use their product)
http://www.webyog.com/en/screenshots_sqlyog.php
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|