1,912,199
1,912,221
Better random algorithm?
I'm making a game in C++ that involves filling tiles with random booleans (either yes or no), decided by rand() % 2. It doesn't feel very random, though. I'm seeding with srand and ctime at startup, but the same patterns seem to keep coming up. Are there any algorithms that will produce very random numbers? Or any suggestions on how I could improve rand()?
True randomness often doesn't seem very random. Do expect to see odd runs. But at least one immediate thing you can do to help is to avoid using just the lowest-order bit. To quote Numerical Recipes in C: If you want to generate a random integer between 1 and 10, you should always do it by using high-order bits, as in j = 1 + (int) (10.0 * (rand() / (RAND_MAX + 1.0))); and never by anything resembling j = 1 + (rand() % 10); (which uses lower-order bits). Also, you might consider using a different RNG with better properties instead. The Xorshift algorithm is a nice alternative. It's speedy and compact at just a few lines of C, and should be good enough statistically for nearly any game.
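For reference, a 32-bit xorshift sketch (using Marsaglia's 13/17/5 shift constants), with the coin flip taken from the high-order bit per the advice above; the struct and method names are illustrative:

```cpp
#include <cassert>
#include <cstdint>

// Marsaglia's 32-bit xorshift: period 2^32 - 1 for any non-zero seed.
struct Xorshift32 {
    std::uint32_t state;
    explicit Xorshift32(std::uint32_t seed) : state(seed ? seed : 1u) {}
    std::uint32_t next() {
        state ^= state << 13;
        state ^= state >> 17;
        state ^= state << 5;
        return state;
    }
    // Coin flip from the high-order bit, per the Numerical Recipes advice.
    bool flip() { return (next() >> 31) != 0; }
};
```

Seed it once at startup (e.g. from time(0)) and call flip() once per tile.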
1,912,693
1,912,734
segfault when copying an array to a vector in Linux
I'm trying to debug legacy code written for Linux. Sometimes the application gets a segfault when it reaches the memcpy call in the following method:

    std::vector<uint8> _storage;
    size_t _wpos;

    void append(const uint8 *src, size_t cnt)
    {
        if (!cnt)
            return;
        if (_storage.size() < _wpos + cnt)
            _storage.resize(_wpos + cnt);
        memcpy(&_storage[_wpos], src, cnt);
        _wpos += cnt;
    }

The values are as follows: _storage.size() is 1000, _wpos is 0, src points to an array of uint8 with 3 values: { 3, 110, 20 }, and cnt is 3. I have no idea why this happens, since this method gets called thousands of times during the application's runtime but only sometimes segfaults. Does anyone have any idea how to solve this?
Your code looks good in terms of the data that is written. Are you absolutely sure that you're passing in the right src pointer? What happens when you run the code with a debugger such as gdb? It should halt on the segfault, and then you can print out the values of _storage.size(), src, and cnt. I'm sure you'll find that (at least) one of those is not at all what you're expecting. You might have passed an invalid src; you might have passed an absurdly large cnt.
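As a sketch of how to make the faulty call announce itself, assertions at the call site abort with the bad values visible in the debugger or core dump, and std::copy keeps the write within the vector's own iterators (the struct name and sanity bound are illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

using uint8 = std::uint8_t;

struct Buffer {
    std::vector<uint8> _storage;
    std::size_t _wpos = 0;

    void append(const uint8* src, std::size_t cnt) {
        if (!cnt)
            return;
        assert(src != nullptr);       // an invalid src now aborts right here
        assert(cnt < (1u << 20));     // hypothetical sanity bound on cnt
        if (_storage.size() < _wpos + cnt)
            _storage.resize(_wpos + cnt);
        std::copy(src, src + cnt, _storage.begin() + _wpos);
        _wpos += cnt;
    }
};
```

If one of the assertions fires, gdb stops exactly at the bad call with src and cnt in scope.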
1,913,055
1,913,103
Making PNG|jpeg from LaTeX in C or C++
I'm looking for a library (or a cleverer solution) in C or C++ that can produce an image file (PNG or JPEG) from LaTeX code. Support for packages is a prerequisite. For now I'm thinking of compiling a .tex file into a .dvi and using dvipng to get a .png. There's also the possibility of compiling a .tex file into a .ps file and then converting it by means of external utilities like pstopng or pstoedit. But these solutions are cumbersome and not always portable. I would like to integrate this conversion into my program transparently.
I've used the dvipng route several times before, but in Python. It's a common path that lots of people have taken. Here's the code to give you something to get started, and in case anyone wants Python code. I do realise you asked for C/C++; this is a starter, and for others. This is for generating equations, but it would be trivial to adapt it for more general structures. It does support packages. In terms of integrating it transparently, I feel your pain. Not everyone has tex / latex of course, and if they don't, it's often a pain to get. The best way to do it, I think, is to provide that functionality as a web service, but of course that's not always an option. Finally, note all the options passed to dvipng. They control the appearance, via various anti-aliasing options etc. I tuned them extensively to get what I thought looked good.

    import os
    import tempfile
    from subprocess import Popen, PIPE

    def geneq(f, eq, dpi, wl, outname, packages):
        # First check if there is an existing file.
        eqname = os.path.join(f.eqdir, outname + '.png')
        # Open tex file.
        tempdir = tempfile.gettempdir()
        fd, texfile = tempfile.mkstemp('.tex', '', tempdir, True)
        basefile = texfile[:-4]
        g = os.fdopen(fd, 'w')
        preamble = '\\documentclass{article}\n'
        for p in packages:
            preamble += '\\usepackage{%s}\n' % p
        preamble += '\\pagestyle{empty}\n\\begin{document}\n'
        g.write(preamble)
        # Write the equation itself.
        if wl:
            g.write('\\[%s\\]' % eq)
        else:
            g.write('$%s$' % eq)
        # Finish off the tex file.
        g.write('\n\\newpage\n\\end{document}')
        g.close()
        exts = ['.tex', '.aux', '.dvi', '.log']
        try:
            # Generate the DVI file.
            latexcmd = ('latex -file-line-error-style -interaction=nonstopmode '
                        '-output-directory %s %s' % (tempdir, texfile))
            p = Popen(latexcmd, shell=True, stdout=PIPE, universal_newlines=True)
            rc = p.wait()
            if rc != 0:
                for l in p.stdout.readlines():
                    print(' ' + l.rstrip())
                exts.remove('.tex')
                raise Exception('latex error')
            dvifile = basefile + '.dvi'
            dvicmd = ('dvipng --freetype0 -Q 9 -z 3 --depth -q -T tight '
                      '-D %i -bg Transparent -o %s %s' % (dpi, eqname, dvifile))
            # Discard warnings as well.
            p = Popen(dvicmd, shell=True, stdout=PIPE, stderr=PIPE,
                      universal_newlines=True)
            rc = p.wait()
            if rc != 0:
                print(p.stderr.readlines())
                raise Exception('dvipng error')
            depth = int(p.stdout.readlines()[-1].split('=')[-1])
        finally:
            # Clean up.
            for ext in exts:
                path = basefile + ext
                if os.path.exists(path):
                    os.remove(path)
1,913,069
1,913,142
Learning about C++ 0x features
What is a good place to learn about the new C++ 0x features? I understand that they may not have been fully finalized yet but it would be nice to get a head start. Also, what compilers currently support them?
An easy and fun way to learn about it is to watch the C++0x Overview Google Tech Talk. Another good source is Bjarne Stroustrup's C++0x FAQ, which covers a huge portion of the new features.
1,913,337
1,913,435
Replacement for vector accepting non-default-constructible and non-assignable types
I have a class test which is neither default-constructible nor assignable, for various reasons. However, it is copy-constructible; one may say it behaves a bit like a reference. Unfortunately I need a dynamic array of these elements, and I realized that vector<test> isn't the right choice, because the elements of a vector must be default-constructible and assignable. Fortunately I got around this problem by using vector<T>::reserve and vector<T>::push_back instead of vector<T>::resize and directly filling the entries (no default construction), the copy-and-swap trick for assignment, and the fact that a vector is usually implemented using the Pimpl idiom (no direct assignment of an existing test element), i.e.

    class base {
    private:
        std::vector<test> vect;
        /* ... */
    public:
        /* ... */
        base& operator= (base y) { swap(y); return *this; }
        void swap(base& y) {
            using std::swap;
            swap(vect, y.vect);
        }
        /* ... */
    };

Now I assume that I probably haven't considered every tiny bit, and above all these tricks are strongly implementation-dependent; the standard only guarantees defined behavior for default-constructible and assignable types. So what's next? How can I get a dynamic array of test objects? Remark: I would prefer built-in solutions and classes provided by standard C++. Edit: I just realized that my tricks actually didn't work; if I define a really* non-assignable class I get plenty of errors from my compiler. So the question condenses to the last question: how can I have a dynamic array of these test objects? (*) My test class provided an assignment operator, but it worked like assignment to a reference.
Edit: The below is no longer good practice. If your object supports moving then it will probably fit into a vector (see the std::vector elements requirements for details, in particular the changes for C++17). Consider using Boost's ptr_vector, part of the Boost Pointer Container Library. See in particular advantage #3 in that library's motivation.
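A minimal sketch of the reserve/push_back approach under C++11, using a stand-in for the asker's test class (the reference member is what forbids default construction and assignment; all names are illustrative):

```cpp
#include <cassert>
#include <vector>

// Stand-in for the asker's class: copy-constructible, but neither
// default-constructible nor assignable, like a bound reference.
class test {
public:
    explicit test(int& target) : ref(target) {}
    test(const test&) = default;
    test& operator=(const test&) = delete;
    int get() const { return ref; }
private:
    int& ref;
};

// reserve + emplace_back never needs default construction, and vector
// growth only requires copy construction, which this type has.
std::vector<test> make(int& a, int& b) {
    std::vector<test> v;
    v.reserve(2);
    v.emplace_back(a);
    v.emplace_back(b);
    return v;
}
```

Operations that need assignment (insert or erase in the middle, for example) remain off-limits, which matches the C++17 wording: each vector operation lists exactly which element requirements it imposes.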
1,913,343
1,913,393
How could pairing new[] with delete possibly lead to memory leak only?
First of all, using delete for anything allocated with new[] is undefined behaviour according to the C++ standard. In Visual C++ 7 such pairing can lead to one of two consequences. If the type new[]'ed has a trivial constructor and destructor, VC++ simply uses new instead of new[], and using delete for that block works fine: new just calls "allocate memory", delete just calls "free memory". If the type new[]'ed has a non-trivial constructor or destructor, the above trick can't be done: VC++7 has to invoke exactly the right number of destructors, so it prepends the array with a size_t storing the number of elements. Now the address returned by new[] points to the first element, not to the beginning of the block. So if delete is used, it only calls the destructor for the first element and then calls "free memory" with an address different from the one returned by "allocate memory", and this leads to some error indication inside HeapFree(), which I suspect refers to heap corruption. Yet here and there one can read false statements that using delete after new[] leads to a memory leak. I suspect that anything on the scale of heap corruption is much more important than the fact that the destructor is called for the first element only, and that the destructors not called may have failed to free heap-allocated sub-objects. How could using delete after new[] possibly lead only to a memory leak on some C++ implementation?
Suppose I'm a C++ compiler, and I implement my memory management like this: I prepend every block of reserved memory with the size of the memory, in bytes. Something like this; | size | data ... | ^ pointer returned by new and new[] Note that, in terms of memory allocation, there is no difference between new and new[]: both just allocate a block of memory of a certain size. Now how will delete[] know the size of the array, in order to call the right number of destructors? Simply divide the size of the memory block by sizeof(T), where T is the type of elements of the array. Now suppose I implement delete as simply one call to the destructor, followed by the freeing of the size bytes, then the destructors of the subsequent elements will never be called. This results in leaking resources allocated by the subsequent elements. Yet, because I do free size bytes (not sizeof(T) bytes), no heap corruption occurs.
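The allocator described above can be modeled in a few lines. This is a toy illustration of the answer's hypothetical scheme, not any real runtime's implementation: delete[] would run blockSize / sizeof(T) destructors while plain delete runs only one, but both free the same whole block, which is exactly the leak-without-corruption scenario.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// Toy model of the hypothetical allocator: a size_t header precedes the data.
//
//   | size | data ... |
//          ^ pointer handed back to the program
void* toyAlloc(std::size_t bytes) {
    auto* block = static_cast<std::size_t*>(
        std::malloc(sizeof(std::size_t) + bytes));
    *block = bytes;          // remember how big the data region is
    return block + 1;        // hand out the data region, not the header
}

// What delete[] would consult to decide how many destructors to run.
std::size_t toySize(void* p) {
    return *(static_cast<std::size_t*>(p) - 1);
}

// Frees the whole block; note it needs no element count, so plain delete
// calling this after a single destructor still releases all the memory.
void toyFree(void* p) {
    std::free(static_cast<std::size_t*>(p) - 1);
}
```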
1,913,496
1,915,020
Behavior of WS_CLIPCHILDREN and InvalidateRect in Windows 7
To reduce flickering I create my parent windows using the WS_CLIPCHILDREN flag, and I call InvalidateRect during the WM_SIZE event. This approach has worked well in Windows XP. However, I recently started programming on Windows 7 and I'm now experiencing rendering issues when resizing windows: a window's contents are not refreshed until I do something that forces a redraw, like minimizing and restoring the window. I've tried following up the InvalidateRect with an UpdateWindow call, but with no effect. Does anyone know how to do this correctly? Update: I found a workaround: calling InvalidateRect(childHWND, NULL, FALSE) on all child windows, followed by InvalidateRect(parentHWND, NULL, TRUE) on the parent window, fixes the rendering problem without introducing noticeable flickering. Other suggestions are still welcome! Update 2: I tried RedrawWindow(hwnd, 0, 0, RDW_INVALIDATE | RDW_ALLCHILDREN), but that resulted in some rendering issues (left-over pixels). Update 3: The RedrawWindow works when followed by InvalidateRect(hwnd, NULL, TRUE). Thanks @interjay!
You can try calling RedrawWindow, passing flags RDW_INVALIDATE and RDW_ALLCHILDREN. Edit: To redraw the background, you can add RDW_ERASE. If you want to redraw the background on the parent but not the children, call both RedrawWindow and InvalidateRect(...,TRUE).
1,913,541
1,913,898
How to save a pointer to member at compile time?
Consider the following code:

    template<typename T, int N>
    struct A {
        typedef T value_type;        // OK: save T in value_type
        static const int size = N;   // OK: save N in size
    };

Look, it is possible to save any template parameter if that parameter is a typename or an integer value. The thing is that a pointer to member is an offset, i.e. an integer. Now I want to save any pointer to member at compile time:

    struct Foo {
        int m;
        int r;
    };

    template<int Foo::*ptr_to_member>
    struct B {
        // Next statement DOES NOT WORK!
        static int Foo::* const saved_ptr_to_member = ptr_to_member;
    };

    // Example of use
    int main() {
        typedef B<&Foo::m> Bm;
        typedef B<&Foo::r> Br;
        Foo foo;
        std::cout << (foo.*(Bm::saved_ptr_to_member));
    }

How can I save a pointer to member at compile time? I use VS2008. Note: compile time is critical; please don't post a run-time solution. I know it already.
Why using a template? #include <cstdio> struct Foo { int a; int b; } foo = {2, 3}; int const (Foo::*mp) = &Foo::b; int main() { printf("%d\n", foo.*mp); return 0; } The following compiles mp to this on gcc-4.4.1 (I don't have access to MSVC at the moment): .globl mp .align 4 .type mp, @object .size mp, 4 mp: .long 4 It is just an offset to the member, which looks pretty compile-time to me. With template, you need to specify the definition outside of the class: #include <cstdio> struct Foo { int m; int r; } foo = {2, 3}; template<int Foo::*Mem> struct B { static int Foo::* const mp; }; template<int Foo::*Mem> int Foo::* const B<Mem>::mp = Mem; int main() { typedef B<&Foo::m> Bm; typedef B<&Foo::r> Br; printf("%d, %d\n", foo.*(Bm::mp), foo.*(Br::mp)); } Which compiles to: g++ -O2 -S -o- b.cc | c++filt ... .weak B<&(Foo::r)>::mp .section .rodata._ZN1BIXadL_ZN3Foo1rEEEE2mpE,"aG",@progbits,B<&(Foo::r)>::mp,comdat .align 4 .type B<&(Foo::r)>::mp, @object .size B<&(Foo::r)>::mp, 4 B<&(Foo::r)>::mp: .long 4 .weak B<&(Foo::m)>::mp .section .rodata._ZN1BIXadL_ZN3Foo1mEEEE2mpE,"aG",@progbits,B<&(Foo::m)>::mp,comdat .align 4 .type B<&(Foo::m)>::mp, @object .size B<&(Foo::m)>::mp, 4 B<&(Foo::m)>::mp: .zero 4 However this all smacks of standard library features reimplementation (see std::tr1::mem_fn).
1,913,767
1,922,745
What's the fastest way to deserialize a tree in C++?
I'm working with a not-so-small tree structure (it's a Burkhard-Keller tree, > 100 MB in memory) implemented in C++. The pointers to the children of each node are stored in a QHash. Each node x has n children y[1] ... y[n]; the edges to the children are labeled with the edit distance d(x, y[i]), so using a hash to store the nodes is an obvious solution.

    class Node {
        int value;
        QHash<int, Node*> children;
        /* ... */
    };

I also want to serialize and deserialize it to a file (I currently use a QDataStream). The tree is built just once and doesn't change afterwards. Building the tree and deserializing it are both rather slow. I'm loading the tree in the obvious way: recursively building each node. I think this is suboptimal because of the many nodes that are created separately with the new operator; I read somewhere that new is pretty slow. The initial build is not a big problem, because the tree is rather stable and doesn't have to be rebuilt very often. But loading the tree from a file should be as fast as possible. What's the best way to accomplish this? It must be much better to save the whole tree in a single memory block with adjacent nodes. Serializing and deserializing would then be reduced to saving and loading the whole block, which I would have to allocate just once. But to implement this I would have to re-implement the QHash, AFAIK. What would you do to speed up the deserialization? Addendum: Thank you for your suggestion to do some profiling. Here are the results: while rebuilding the tree from a file, 1% of the time is consumed by my own new calls, 65% by loading the QHash objects of each node (this is implemented by the Qt library), 12% by inserting the nodes into the existing tree, and 20% by everything else. So it's definitely not my new calls that cause the delay, but the rebuilding of the QHash objects at every node. This is basically done with:

    QDataStream in(&infile);
    in >> node.hash;

Do I have to dig into QHash and look at what's going on under the hood there?
I think the best solution would be a hash object that can be serialized with a single read and write operation without the need to rebuild the internal data structure.
Another approach would be to serialize your pointers and restore them when loading. I mean: Serializing: nodeList = collectAllNodes(); for n in nodelist: write ( &n ) writeNode( n ) //with pointers as-they-are. Deserializing: //read all nodes into a list. while ( ! eof(f)) read( prevNodeAddress) readNode( node ) fixMap[prevNodeAddress] = &node; nodeList.append(node); //fix pointers to new values. for n in nodeList: for child in n.children: child->node = fixMap[child->node] This way if you don't insert-remove new nodes you can allocate a vector once and use that memory, reducing your allocation to the maps ( as rpg said, it might be faster with lists or even vectors).
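A sketch of that pointer-fixup scheme, with the on-disk format reduced to an in-memory Record for brevity (all names are illustrative): every node lands in one contiguous vector allocated once, and a map from old addresses to new ones repairs the child pointers in a second pass.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Node {
    int value;
    std::vector<std::pair<int, Node*>> children; // (edit distance, child)
};

// One serialized record per node: its old address, its payload, and its
// children's old addresses, written as-they-are.
struct Record {
    std::uintptr_t oldAddr;
    int value;
    std::vector<std::pair<int, std::uintptr_t>> children;
};

std::vector<Node> deserialize(const std::vector<Record>& records) {
    std::vector<Node> nodes(records.size());      // single allocation
    std::unordered_map<std::uintptr_t, Node*> fixMap;
    // Pass 1: place the payloads and learn where each old address went.
    for (std::size_t i = 0; i < records.size(); ++i) {
        nodes[i].value = records[i].value;
        fixMap[records[i].oldAddr] = &nodes[i];
    }
    // Pass 2: rewrite the old child addresses to the new locations.
    for (std::size_t i = 0; i < records.size(); ++i)
        for (const auto& c : records[i].children)
            nodes[i].children.emplace_back(c.first, fixMap.at(c.second));
    return nodes;
}
```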
1,913,842
1,913,864
struct sizeof result not expected
I have a struct defined thusly:

    typedef struct _CONFIGURATION_DATA {
        BYTE configurationIndicator;
        ULONG32 baudRate;
        BYTE stopBits;
        BYTE parity;
        BYTE wordLength;
        BYTE flowControl;
        BYTE padding;
    } CONFIGURATION_DATA;

Now, by my reckoning, that struct is 10 bytes long. However, sizeof reports that it is 16 bytes long. Does anyone know why? I am compiling using the build tools in the Windows DDK.
Alignment. Use

    #pragma pack(1)
    ...struct goes here...
    #pragma pack()

I would also recommend reordering things, and if necessary padding them with RESERVED bytes, so that multi-byte integral types are better aligned. This will make processing faster for the CPU and your code smaller.
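A sketch of both remedies, using stdint aliases as stand-ins for the DDK's BYTE/ULONG32; the sizes assume a typical compiler where a 32-bit integer wants 4-byte alignment:

```cpp
#include <cassert>
#include <cstdint>

using BYTE = std::uint8_t;       // stand-ins for the DDK typedefs
using ULONG32 = std::uint32_t;

// Default packing: 3 bytes of padding are inserted before baudRate so it
// lands on a 4-byte boundary, and the tail is padded to a multiple of 4,
// giving the 16 bytes the asker observed.
struct ConfigPadded {
    BYTE configurationIndicator;
    ULONG32 baudRate;
    BYTE stopBits, parity, wordLength, flowControl, padding;
};

#pragma pack(push, 1)
struct ConfigPacked {            // exactly 10 bytes, at the cost of
    BYTE configurationIndicator; // unaligned access to baudRate
    ULONG32 baudRate;
    BYTE stopBits, parity, wordLength, flowControl, padding;
};
#pragma pack(pop)

// Reordering the largest member first removes the interior padding
// without pragmas: 4 + 6 = 10 bytes, rounded up to alignment 4 -> 12.
struct ConfigReordered {
    ULONG32 baudRate;
    BYTE configurationIndicator, stopBits, parity, wordLength,
         flowControl, padding;
};
```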
1,913,853
1,913,918
Why is [] used in delete (delete[]) to free dynamically allocated arrays?
I know that delete[] will destroy all the array elements and then release the memory. I initially thought the compiler requires it just so it can call the destructor for every element in the array, but I also have a counter-argument to that: the heap memory allocator must know the number of bytes allocated, so using sizeof(Type) it should be possible to find the number of elements and call the appropriate number of destructors, preventing resource leaks. Is my assumption correct or not? Please clear up my doubt, because I'm just not getting the reason for the [] in delete[].
Scott Meyers says in his Effective C++ book: Item 5: Use the same form in corresponding uses of new and delete. The big question for delete is this: how many objects reside in the memory being deleted? The answer to that determines how many destructors must be called. Does the pointer being deleted point to a single object or to an array of objects? The only way for delete to know is for you to tell it. If you don't use brackets in your use of delete, delete assumes a single object is pointed to. Also, the memory allocator might allocate more space than required to store your objects, and in that case dividing the size of the memory block returned by the size of each object won't work. Depending on the platform, the _msize (Windows), malloc_usable_size (Linux) or malloc_size (OS X) functions will tell you the real length of the block that was allocated. This information can be exploited when designing growing containers. Another reason why it won't work is that Foo* foo = new Foo[10] calls operator new[] to allocate the memory, and delete [] foo; calls operator delete[] to deallocate it. As those operators can be overloaded, you have to adhere to the convention, otherwise delete foo; calls operator delete, which may have an incompatible implementation with operator delete []. It's a matter of semantics, not just of keeping track of the number of allocated objects to later issue the right number of destructor calls. See also: [16.14] After p = new Fred[n], how does the compiler know there are n objects to be destructed during delete[] p? Short answer: Magic. Long answer: The run-time system stores the number of objects, n, somewhere where it can be retrieved if you only know the pointer, p. There are two popular techniques that do this. Both are in use by commercial-grade compilers, both have tradeoffs, and neither is perfect. These techniques are: Over-allocate the array and put n just to the left of the first Fred object.
Use an associative array with p as the key and n as the value. EDIT: after having read @AndreyT's comments, I dug into my copy of Stroustrup's "The Design and Evolution of C++" and excerpted the following: How do we ensure that an array is correctly deleted? In particular, how do we ensure that the destructor is called for all elements of an array? ... Plain delete isn't required to handle both individual objects and arrays. This avoids complicating the common case of allocating and deallocating individual objects. It also avoids encumbering individual objects with information necessary for array deallocation. An intermediate version of delete[] required the programmer to specify the number of elements of the array. ... That proved too error-prone, so the burden of keeping track of the number of elements was placed on the implementation instead. As @Marcus mentioned, the rationale may have been "you don't pay for what you don't use". EDIT2: In "The C++ Programming Language, 3rd edition", §10.4.7, Bjarne Stroustrup writes: Exactly how arrays and individual objects are allocated is implementation-dependent. Therefore, different implementations will react differently to incorrect uses of the delete and delete[] operators. In simple and uninteresting cases like the previous one, a compiler can detect the problem, but generally something nasty will happen at run time. The special destruction operator for arrays, delete[], isn't logically necessary. However, suppose the implementation of the free store had been required to hold sufficient information for every object to tell if it was an individual or an array. The user could have been relieved of a burden, but that obligation would have imposed significant time and space overheads on some C++ implementations.
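That bookkeeping is observable from the outside: give a type a destructor counter and delete[] runs it exactly n times, which the runtime can only do by recovering n from the pointer at run time ("magic", per the FAQ quote). A small sketch:

```cpp
#include <cassert>

// delete[] must call one destructor per element, so the runtime has to
// know n even though only the pointer is handed back to it.
struct Counted {
    static int destroyed;
    ~Counted() { ++destroyed; }
};
int Counted::destroyed = 0;

void roundTrip(int n) {
    Counted* p = new Counted[n];
    delete[] p;              // exactly n destructor calls
}
```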
1,914,337
1,915,349
How can I make QtCreator compile with gsl library?
I am trying to use the GNU Scientific Library (GSL) http://www.gnu.org/software/gsl/ in Qt Creator. How can I tell Qt Creator to add these flags: http://www.gnu.org/software/gsl/manual/html_node/Linking-programs-with-the-library.html so that it links correctly?
You need to edit your .pro file and add the extra libraries by hand, e.g.: LIBS += -L/usr/local/lib -lgsl -lgslcblas -lm See the qmake documentation for more information.
1,914,416
1,984,617
OpenCV cvNamedWindow not appearing under Fedora
As the title suggests I'm simply trying to get a named window to come up. I've been working with OpenCV for over a year now, and never had this problem before. For some reason, the window never opens. I've tried running some of my old scripts and everything works fine. As a very cut down example, see below #include "cv.h" #include "highgui.h" int main(int argc, char** argv) { cvNamedWindow( "video", 0 ); IplImage *im = cvCreateImage( cvSize(200,200), 8, 3 ); while(1) { cvShowImage( "video", im ); } return 0; } I can see no reason why that wouldn't work, but for some reason the window never appears. Has anyone else experienced this? It's doing my head in!
Simply call cvWaitKey(int milliseconds) within the loop. This function tells the GUI system to process pending graphics events. Your code should be something like:

    int main(int argc, char** argv)
    {
        cvNamedWindow( "video", 0 );
        IplImage *im = cvCreateImage( cvSize(200,200), 8, 3 );
        while(1)
        {
            cvShowImage( "video", im );
            cvWaitKey(100); // wait 100 ms for the user to hit a key in the window
        }
        return 0;
    }
1,914,606
1,914,615
Is there any generic version of HashTable?
I need a class that works like C++'s std::map. More specifically, I need the following behavior: map< string, vector<int> > my_map; Is that possible?
A dictionary is I believe what you want: Dictionary<String, int> dict = new Dictionary<String, int>(); dict.Add("key", 0); Console.WriteLine(dict["key"]); etc, etc MSDN: http://msdn.microsoft.com/en-us/library/xfhwa508.aspx You can specify more or less any type as the key/value type. Including another dictionary, an array, or whatever: Dictionary<String, String[]> dict = new Dictionary<String, String[]>(); So here each element in the Dictionary points to an array of strings. To implement what you require (with the vector int), you would require a List as the value type: Dictionary<String, List<int>> dict = new Dictionary<String, List<int>>(); It is worth noting that a Dictionary has no predefined order, whereas std::map does. If order is important, you may want to use SortedDictionary instead, which is almost identical in usage, but sorts on the key. All depends if you plan to iterate over the dictionary really. Note however that if you use a class you created as the key, you will need to properly override GetHashCode and Equals.
1,914,633
2,063,964
bring malloc() back to its initial state
Do you know if there is a way to bring malloc back to its initial state, as if the program were just starting? Reason: I am developing an embedded application with the Nintendo DS devkitPro, and I would like to improve debugging support in case of software faults. I can already catch most errors and, e.g., return to the console menu, but this fails to work when catching std::bad_alloc. I suspect that the code I use for a "soft reboot" involves malloc() itself at some point I cannot control, so I'd like to "forget everything about the running app and get a fresh start".
The only way to get a fresh start is to reload the application from storage. The DS loads everything into RAM which means that the data section is modified in place.
1,914,776
1,915,601
mysql aggregate UDF (user defined function) in C
I need to write an aggregate extension function (implemented in C) for MySQL 5.x. I have scoured the documentation (including browsing sql/udf_example.c), but I have not found anything that is brief, to the point, and shows me just what I need to do. This is the problem: I have a C struct (FooBar) and a C function that takes an array of these FooBar structs, performs an operation on the array, and returns a double.

    struct FooBar {
        char *date;
        double age;
        double weight;
        double salary;
        int eye_color;
    };

    /* Processing function */
    double processFooBars(struct FooBar *foobars, const size_t size);

    /* MySQL table */
    CREATE TABLE foo_bar (the_date DATE, age DOUBLE, weight DOUBLE, salary DOUBLE, eye_color INT);

I want to be able to create an aggregate function thus (I may be using PostgreSQL syntax):

    CREATE AGGREGATE FUNCTION proc_foobar RETURNS REAL SONAME myshlib.so ALIAS my_wrapper_func

I can then use it in a MySQL query thus:

    SELECT proc_foobar() AS likeability
    FROM foo_bar
    WHERE the_date BETWEEN '1-Jan-09' AND '1-Dec-09';

What this query should do is fetch all the matching records from the table foo_bar and pass them to my wrapper function around processFooBars, which extracts FooBar structs from the received records, passes them to the C function that does the work, and returns the value. It's simpler to explain using (pseudo)code:

    #ifdef __cplusplus
    extern "C" {
    #endif

    /* this is the wrapper function that MySQL calls and passes the records to */
    double my_wrapper_func(/* matching rows sent by MySQL + other info ... ? */)
    {
        /* create FooBar array from the received records */
        struct FooBar *the_array = ExtractArrayFromRowset(/* some params */);
        double result = processFooBars(the_array, ARRAY_SIZE_MACRO(the_array));
        /* free resources */
        FreeFooBarArray(the_array);
        RETURN_DOUBLE(result); /* or similar macro to return a double to MySQL */
    }

    #ifdef __cplusplus
    }
    #endif

Could anyone provide a little snippet (or direct me to one) that shows how I can write my_wrapper_func, or, more to the point, how I can implement an aggregate function as described above as an extension function in C/C++?
This doesn't answer your question directly, but this article on MySQL UDFs is pretty good: http://www.codeproject.com/KB/database/MySQL_UDFs.aspx
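For reference, an aggregate UDF is not one wrapper function but a family: proc_foobar_init, proc_foobar_clear, proc_foobar_add, proc_foobar, and proc_foobar_deinit, which MySQL calls per statement, per group, per row, per group, and per statement respectively. The following is a self-contained sketch of that shape; the UDF_INIT/UDF_ARGS structs here are simplified stand-ins for the real declarations in mysql.h, processFooBars is a hypothetical average-salary placeholder, and the INT column conversion is elided:

```cpp
#include <cassert>
#include <vector>

// Simplified stand-ins for the declarations in <mysql.h> (an assumption,
// so this sketch compiles standalone).
struct UDF_INIT { void* ptr; };
struct UDF_ARGS { unsigned arg_count; char** args; };

struct FooBar { double age, weight, salary; int eye_color; };

// Hypothetical aggregation standing in for the asker's processFooBars.
static double processFooBars(const std::vector<FooBar>& rows) {
    double sum = 0;
    for (const FooBar& r : rows) sum += r.salary;
    return rows.empty() ? 0.0 : sum / rows.size();
}

extern "C" {

// init: allocate the per-query accumulator (real code returns my_bool).
bool proc_foobar_init(UDF_INIT* initid, UDF_ARGS*, char*) {
    initid->ptr = new std::vector<FooBar>;
    return false;                      // false == success
}

// clear: reset the accumulator at the start of each group.
void proc_foobar_clear(UDF_INIT* initid, char*, char*) {
    static_cast<std::vector<FooBar>*>(initid->ptr)->clear();
}

// add: called once per matching row; REAL arguments arrive as double*.
void proc_foobar_add(UDF_INIT* initid, UDF_ARGS* args, char*, char*) {
    FooBar fb;
    fb.age       = *reinterpret_cast<double*>(args->args[0]);
    fb.weight    = *reinterpret_cast<double*>(args->args[1]);
    fb.salary    = *reinterpret_cast<double*>(args->args[2]);
    fb.eye_color = 0;                  // INT conversion elided in this sketch
    static_cast<std::vector<FooBar>*>(initid->ptr)->push_back(fb);
}

// the aggregate itself: called once per group, after all add() calls.
double proc_foobar(UDF_INIT* initid, UDF_ARGS*, char*, char*) {
    return processFooBars(*static_cast<std::vector<FooBar>*>(initid->ptr));
}

// deinit: release the accumulator.
void proc_foobar_deinit(UDF_INIT* initid) {
    delete static_cast<std::vector<FooBar>*>(initid->ptr);
}

} // extern "C"
```

With the real mysql.h types substituted in, this is the calling sequence MySQL drives for SELECT proc_foobar(...) GROUP BY ...; see sql/udf_example.c's avgcost for the canonical version.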
1,914,864
1,914,893
Determine an included header files contribution to total file size
I am interested in reducing the file size of my application. It is an MFC/C++ application built with VC++ in Visual Studio 2008. UPX does a good job of reducing the final exe to about 40% of its original size, but I would like to reduce it more. MFC must be statically linked in this project. I have tried some methods outlined in this question: reduce-windows-executable-size, specifically applying different settings to the compiler/linker. I believe I can reduce the size further by looking at the 'cost' of including certain headers in the project. Any tips on how to go about this, maybe a tool which could analyse my code for me? Thanks
You are probably wrong about this. Removing headers can result in somewhat shorter build times, but since what they contain is mostly declarations (which you will need at some point anyway), they should have little or no effect on the size of the final executable.
1,915,184
2,060,496
GStreamer or DirectShow for Windows development?
I'm implementing a lecture-capture project for a local university. Multiple video streams will arrive at one PC: the presenter's desktop slides, a video camera image of the presenter himself and optionally a digital whiteboard capture. These incoming streams will be managed by a desktop application that displays, transcodes/mixes and eventually saves them to disk. There will be some configuration options because the material can be distributed in various ways: as a Flash application on a DVD, as an online Flash application or as a video-on-demand stream for Windows Media Player. This application should work on Windows. Optionally it can support other platforms, but it doesn't seem to be high priority. Both GStreamer and DirectShow seem capable of providing the underlying technology. I have a little experience with GStreamer on Linux, and I like its design, so I'm inclined to use it for this project. However, I don't know how well it is supported on Windows. I couldn't find any recent docs on how to build GStreamer on Windows. So I'm afraid I'll get stuck somewhere in the process. DirectShow seems like a safer option because it is much more widely used and there is much more documentation available for it on the internet. Does anyone here have experience using GStreamer on Windows? Does it work well? Are there certain issues that I should be aware of? Edit I discovered the GStreamer OSSBuilds website and was able to quickly implement a simple video player (based on the 'playbin' element) with it. So I think I'll pursue the GStreamer path a little further.
Ok, I'll answer this question myself. The simple answer is: GStreamer! I've experienced no difficulties thus far. To make it work on Windows you need to use the GStreamer Winbuilds. Update (6 months later): Actually, I burned myself a little bit on this bet. Later in the project the client specified that the WMV9 codec (VC-1) had to be supported. Since WMV9 encoders are only available on Microsoft platforms, this wasn't possible to implement in a GStreamer-based solution. So maybe DirectShow would have been the right choice after all.
1,915,520
1,930,943
Asio async and concurrency
I'm writing some code with boost::asio, using asynchronous TCP connections. I have to admit that I have some doubts about it, all regarding concurrency. Here are some: What happens if I start two or more async_writes on the same socket without waiting for completion of the first one? Will the handlers (and the async_writes) overlap, or does asio provide serialization and synchronization? Same question as above for async_connect and async_read. In general, is it safe to call these functions from different threads? (I'm not talking about using different buffers; that's another problem.)
I assume from your question that you have a single instance of io_service and you want to call async_write() on it from multiple threads. async_write() ultimately calls the post() method of io_service, which in turn takes a lock and pushes the bits to be written into a work queue, ensuring that the bits won't be written interleaved. Those bits will eventually get written out, and the underlying data structure that holds them (a char array or whatever) must remain valid until you get the callback signifying that the write has completed. If you are using the exact same callback function as your completion handler, you will have no way of knowing which of the two writes resulted in that function being called, and if that function does anything not thread-safe, behavior may be undefined or incorrect. A popular way to handle this situation is to use an instance of a struct as the completion handler (just overload the call operator, operator()): you can set the properties of the struct to denote which write it corresponds to, and then consult those values when the completion handler is called. However, absent a shared lock, you have no way of controlling which of the threads actually executes its async_write() method. In fact, even if you start up two threads and have one thread immediately call async_write() and have the other sleep for an hour and then call async_write(), you are still not assured that the OS didn't schedule your threads stupidly and execute the second thread's call first. (The example is pathological, but the point is universally valid.) The same situation applies to async_read(). You certainly can interleave calls (i.e. do one async_read() and then another before the completion handler is called), but there is no guarantee that they will execute in the order you intend without some external means of ensuring this.
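A minimal sketch of that handler-struct idea; writeId and completionOrder are illustrative, and in real code an instance of this functor would be passed as the handler argument to boost::asio::async_write:

```cpp
#include <cassert>
#include <cstddef>
#include <system_error>
#include <vector>

// Completion handler written as a struct with an overloaded call
// operator, so each outstanding write can be told apart when its
// handler fires.
struct WriteHandler {
    int writeId;                       // which write this handler belongs to
    std::vector<int>* completionOrder; // shared record, for illustration
    void operator()(const std::error_code& ec, std::size_t /*bytes*/) {
        if (!ec) completionOrder->push_back(writeId);
    }
};
```

The handler's state tells you which write completed even when, as described above, the OS scheduled the calls in an order you didn't intend.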
1,915,659
1,915,702
Does the C++ standard prohibit the void main() prototype?
In section 3.6.1, paragraph 2 of both the C++ Standard 1998 and 2003 editions: An implementation shall not predefine the main function. This function shall not be overloaded. It shall have a return type of type int, but otherwise its type is implementation-defined. I am not a native English speaker, and I am not sure what "but otherwise" means here. Does it prohibit other return types, or does it give C++ compiler writers the right to allow them? So what's the answer?
The English you quote does prohibit declaring main to return void. It allows variation in the arguments that come in, but not in the return type.
1,915,704
1,916,893
Writing concurrently to a file
I have this tool in which a single log-like file is written to by several processes. What I want to achieve is to have the file truncated when it is first opened, and then have all writes done at the end by the several processes that have it open. All writes are systematically flushed and mutex-protected so that I don't get jumbled output. First, a process creates the file, then starts a sequence of other processes, one at a time, that then open the file and write to it (the master sometimes chimes in with additional content; the slave process may or may not be open and writing something). I'd like, as much as possible, not to use more IPC than what already exists (all I'm doing now is writing to a popen-created pipe). I have no access to external libraries other than the CRT and Win32 API, and I would like not to start writing serialization code. Here is some code that shows where I've gone: // open the file. Truncate it if we're the 'master', append to it if we're a 'slave' std::ofstream blah(filename, ios::out | (isClient ? ios::app : 0)); // do stuff... // write stuff myMutex.acquire(); blah << "stuff to write" << std::flush; myMutex.release(); Well, this does not work: although the output of the slave process is ordered as expected, what the master writes is either bunched together or at the wrong place, when it exists at all. I have two questions: is the flag combination given to the ofstream's constructor the right one? Am I going the right way anyway?
As suggested by reinier, the problem was not in the way I use the files but in the way the programs behave. The fstreams do just fine. What I missed was the synchronization between the master and the slave (the former was assuming a particular operation was synchronous when it was not). edit: Oh well, there still was a problem with the open flags. The process that opened the file with ios::out did not move the file pointer as needed (erasing text other processes were writing), and using seekp() completely screwed the output when writing to cout as another part of the code uses cerr. My final solution is to keep the mutex and the flush, and, for the master process, open the file in ios::out mode (to create or truncate the file), close it, and reopen it using ios::app.
1,915,739
1,915,806
Stopping an MFC thread
I understand the problem with just killing the thread directly (via AfxEndThread or other means), and I've seen the examples using CEvent objects to signal the thread and then having the thread clean itself up. The problem I have is that using CEvent to signal the thread seems to require a loop where you check to see if the thread is signaled at the end of the loop. The problem is, my thread doesn't loop. It just runs, and the processing could take a while (which is why I'd like to be able to stop it). Also, if I were to just kill the thread, I realize that anything I've allocated will not have a chance to clean itself up. It seems to me like any locals I've been using that happen to have put stuff on the heap will also not be able to clean themselves up. Is this the case?
Does your thread ever exit? If so, you could set an event in the thread at exit and have the main process wait for that event via WaitForSingleObject. This is best done with a timeout so the main process doesn't appear to lock up when it's closing. On timeout, kill the thread via TerminateThread. You'll have to determine what a reasonable timeout is, though. Since you don't loop in the thread, this seems to me to be the only way to do it. Of course, you could do something like set a boolean flag in the main process and have the thread periodically check this flag, but then your thread code will be littered with "if(!canRun) return;" type code. If the thread never exits, then TerminateThread is the only way to stop it.
1,915,759
1,915,814
Forward declaration and typeid
I would like to check the type of a superclass A against the type of a subclass B (with a method inside the superclass A, so that B will inherit it). Here's what I thought did the trick (that is, the use of forward declaration): #include <iostream> #include <typeinfo> using namespace std; class B; class A { public: int i_; void Check () { if (typeid (*this) == typeid (B)) cout << "True: Same type as B." << endl; else cout << "False: Not the same type as B." << endl; } }; class B : public A { public: double d_; }; int main () { A a; B b; a.Check (); // should be false b.Check (); // should be true return 0; } However this code does not compile. The error I get is: main.cc: In member function ‘void A::Check()’: main.cc:12: error: invalid use of incomplete type ‘struct B’ main.cc:6: error: forward declaration of ‘struct B’ How could I solve this problem?
I think that the problem you are trying to solve is much better handled by a virtual method: class A { public: virtual bool Check() { return false; } }; class B : public A { public: // override A::Check() virtual bool Check() { return true; } }; Methods in the base class A should not need to know whether the object is "really" an A or a B. That's a violation of basic object-oriented design principles. If the behavior needs to change when the object is a B, then that behavior should be defined in B and handled by virtual method calls.
1,915,829
1,917,084
Learning C when you already know C++?
I think I have an advanced knowledge of C++, and I'd like to learn C. There are a lot of resources to help people going from C to C++, but I've not found anything useful to do the opposite of that. Specifically: Are there widely used general purpose libraries every C programmer should know about (like boost for C++) ? What are the most important C idioms (like RAII for C++) ? Should I learn C99 and use it, or stick to C89 ? Any pitfalls/traps for a C++ developer ? Anything else useful to know ?
There's a lot here already, so maybe this is just a minor addition, but here's what I find to be the biggest differences. Library: I put this first because this, in my opinion, is the biggest difference in practice. The C standard library is very(!) sparse. It offers a bare minimum of services: file I/O, some very basic string functions, and math. For everything else you have to roll your own or find a library to use (and many people do). I find I miss extended containers (especially maps) heavily when moving from C++ to C, but there are a lot of other ones. Idioms: Both languages have manual memory (resource) management, but C++ gives you some tools to hide the need. In C you will find yourself tracking resources by hand much more often, and you have to get used to that. Particular examples are arrays and strings (C++ vector and string save you a lot of work), smart pointers (you can't really do "smart pointers" as such in C. You can do reference counting, but you have to up and down the reference counts yourself, which is very error prone -- the reason smart pointers were added to C++ in the first place), and the lack of RAII generally, which you will notice everywhere if you are used to the modern style of C++ programming. You have to be explicit about construction and destruction. You can argue about the merits and flaws of this, but there's a lot more explicit code as a result. Error handling: C++ exceptions can be tricky to get right, so not everyone uses them, but if you do use them you will find you have to pay a lot of attention to how you do error notification in C. Needing to check return values on all important calls (some would argue all calls) takes a lot of discipline, and a lot of C code out there doesn't do it. Strings (and arrays in general) don't carry their sizes around. You have to pass a lot of extra parameters in C to deal with this.
Without namespaces you have to manage your global namespace carefully. There's no explicit tying of functions to types as there is with class in C++. You have to maintain a convention of prefixing everything you want associated with a type. You will see a lot more macros. Macros are used in C in many places where C++ has language features to do the same, especially symbolic constants (C has enum but lots of older code uses #define instead) and generics (where C++ uses templates). Advice: Consider finding an extended library for general use. Take a look at GLib or APR. Even if you don't want a full library, consider finding a map / dictionary / hashtable for general use. Also consider bundling up a bare-bones "string" type that carries a size. Get used to putting module or "class" prefixes on all public names. This is a little tedious but it will save you a lot of headaches. Make heavy use of forward declarations to make types opaque. Where in C++ you might have private data in a header and rely on private to prevent access, in C you want to push implementation details into the source files as much as possible. (You actually want to do this in C++ too, in my opinion, but C makes it easier, so more people do it.) C++ reveals the implementation in the header, even though it technically hides it from access outside the class. // C.hh class C { public: void method1(); int method2(); private: int value1; char * value2; }; C pushes the 'class' definition into the source file. The header is all forward declarations. // C.h typedef struct C C; // forward declaration void c_method1(C *); int c_method2(C *); // C.c struct C { int value1; char * value2; };
1,915,880
1,917,145
boost::bind & boost::function pointers to overloaded or templated member functions
I have a callback mechanism, the classes involved are: class App { void onEvent(const MyEvent& event); void onEvent(const MyOtherEvent& event); Connector connect; } class Connector { template <class T> void Subscribe(boost::function <void (const T&)> callback); } App::App() { connect.Subscribe<MyEvent>(&App::OnEvent<MyEvent>); } First off this code doesn't compile, it's an illustration. The use of templates complicates my example, but I left them in because its affecting my problem. It seems certain to me that my subscribe needs to be templated because the Connector class doesn't know how many event types it handles. When I try to create a: boost::function f = &App::OnEvent, I tried creating OnEvent as a template function, with specializations, but it seems that the compiler is treating my OnEvent functions as overloads rather than template specializations, or else I get the template specialization not in namespace error if I try to explicitly declare it as template <> OnEvent(const MyEvent& e) ... I can get the following to compile: boost::function <void (App*, const MyEvent&)> f = &App::OnEvent; f(this, e); That compiles, runs, and works. boost::function<void (const MyEvent&)> g = boost::bind(&App::OnEvent, this); does not. I think its because I'm not correctly specifying the address of an overloaded function. Having now explained all this to the teddy bear - I think that my question is "How do I correctly create a function pointer to an overloaded or templated member function and bind the this pointer to it?"
I think you need to disambiguate the address of the overloaded function. You can do this by explicitly casting the function pointer to the one with the correct parameters: boost::bind( static_cast<void (App::*)( const MyEvent& )>(&App::OnEvent), this, _1 ); Similar problem + solution on gamedev.net
1,916,015
1,916,036
If we use the C prefix for classes, should we use it for struct also?
Assuming that a project has been using the C class prefix for a long time, and it would be a waste of time to change at a late stage, and that the person who originally wrote the style guide has been hit by a bus, and that there are no structs in the code already... It's a pretty trivial question, but if a C++ code style guide says "use C for class name prefix" then should this be taken to mean also use C for struct prefix also, or should we use something different, like S for example. class CFoo { }; struct CBar { }; ... or ... class CFoo { }; struct Bar { };
If the style guide doesn't specify, I would (probably) use the "structs are classes with all members public" rule to use C for structs too, yes. Or I would think "hah, here's a loophole to get around that silly initial rule, yay" and not use it. In other words, this is highly subjective.
1,916,039
1,916,049
Why would std::string s("??<") output a { instead of ??< as expected?
std::string s("??<"); std::cout << s << std::endl; Why does that output { instead of ??< ? I'm using Visual Studio 2008. I assume it's encoding it somehow, but why, and what is the encoding called if that is what's happening? This little %#$^*! caused me to look for a bug in my (unit test) code for 30 minutes before I figured out my string was mangled!! :(
Because of trigraphs. These are the supported trigraphs, from the Wikipedia page: ??= → # ??/ → \ ??' → ^ ??( → [ ??) → ] ??! → | ??< → { ??> → } ??- → ~ For Visual Studio, according to the documentation trigraphs are turned off by default (sensibly enough), so check your project/makefiles.
1,916,118
1,916,235
C++ COM C# Mixed Mode Interoperation
I'm trying to understand my options for calling a C# library implementation from unmanaged C++. My top level module is an unmanaged C++ COM/ATL dll. I would like to integrate functionality of an existing managed C# dll. I have, and can recompile the source for both libraries. I understand from reading articles like this overview on MSDN and this SO question that it might be possible to create a "mixed-mode" dll which allows the native C++ code to call into the C# library. I have a couple of questions about this approach: How do I go about setting this up? Can I simply change some properties on the existing COM/ATL project to allow use of the C# modules? How will these mixed-mode calls differ in performance from COM interop calls? Is there a common string format that may be used to prevent conversion or deep copies between the modules? If this dll is created mixed-mode, can it still be interfaced/used in the same way by its COM clients, or do they need to be mixed mode aware? Will inclusion of the CLR impose substantial overhead when loading this COM object? I'm new to Windows development, so please comment if anything in the question statement needs clarification or correction. Thanks in advance.
How do I go about setting this up? Can I simply change some properties on the existing COM/ATL project to allow use of the C# modules? If you fully control that project, so changing such settings isn't an issue, then sure. All you need is to enable /clr for this project (in project properties, open the "General" page, and look for "Common Language Runtime support"). Now you can use managed handles (^) and other C++/CLI bits in your project as needed. All existing code written in plain C++ should just keep working (it will be compiled to MSIL now, as far as possible, but its semantics will remain unchanged). How will these mixed-mode calls differ in performance from COM interop calls? Is there a common string format that may be used to prevent conversion or deep copies between the modules? A mixed-mode call will be faster, because it uses faster calling conventions, and doesn't do any marshaling the way COM interop does (you either use types that are inherently compatible, or do your own explicit conversions). There's no common string format - the problem is that System::String both allocates and owns its buffer, and also requires it to be immutable; so you can't create a buffer yourself and then wrap it as String, or create a String and then use it as a buffer to output text to. If this dll is created mixed-mode, can it still be interfaced/used in the same way by its COM clients, or do they need to be mixed mode aware? It can be interfaced the same, but if it's entered via a native entry point, it will try to load the CLR into the process, unless one is already loaded. If the calling client had already loaded the CLR prior to the call (or the client was itself called from managed code), then you'll get the CLR that is already loaded, which may be different from the CLR that your code requires (e.g. the client may have loaded 1.1, and your code needs 2.0). Will inclusion of the CLR impose substantial overhead when loading this COM object?
It depends on what you mean by overhead. Code size? Runtime penalties? Memory footprint? In any case, loading the CLR means that you get all the GC and JIT machinery. Those aren't cheap. That said, if you ultimately need to call managed code anyway, there's no way around this - you will have to load the CLR into some process to do it. The penalties aren't going to differ between COM interop and mixed-mode C++/CLI assemblies.
1,916,155
1,916,873
base32 conversion in C++
Does anybody know of a commonly used library for C++ that provides methods for encoding and decoding numbers from base 10 to base 32 and vice versa? Thanks, Stefano
Did you mean "base 10 to base 32", rather than integer to base32? The latter seems more likely and more useful; by default standard formatted I/O functions generate base 10 string format when dealing with integers. For the base 32 to integer conversion the standard library strtol() function will do that. For the reciprocal, you don't need a library for something you can easily implement yourself (not everything is a lego brick). Here's an example, not necessarily the most efficient, but simple; #include <cstdlib> #include <string> long b32tol( std::string b32 ) { return strtol( b32.c_str(), 0, 32 ) ; } std::string itob32( long i ) { unsigned long u = *reinterpret_cast<unsigned long*>( &i ) ; std::string b32 ; do { int d = u % 32 ; if( d < 10 ) { b32.insert( 0, 1, '0' + d ) ; } else { b32.insert( 0, 1, 'a' + d - 10 ) ; } u /= 32 ; } while( u > 0 ); return b32 ; } #include <iostream> int main() { long i = 32*32*11 + 32*20 + 5 ; // BK5 in base 32 std::string b32 = itob32( i ) ; long ii = b32tol( b32 ) ; std::cout << i << std::endl ; // Original std::cout << b32 << std::endl ; // Converted to b32 std::cout << ii << std::endl ; // Converted back return 0 ; }
1,916,397
1,916,424
Warning for Missing Virtual Keyword
I had a frustrating problem recently that boiled down to a very simple coding mistake. Consider the following code: #include <iostream> class Base { public: void func() { std::cout << "BASE" << std::endl; } }; class Derived : public Base { public: virtual void func() { std::cout << "DERIVED" << std::endl; } }; int main(int argc, char* argv[]) { Base* obj = new Derived; obj->func(); delete obj; return 0; } The output is BASE Obviously (for this case), I meant to put the virtual keyword on Base::func so that Derived::func would be called in main. I realize this is (probably) allowed by the c++ standard, and possibly with good reason, but it seems to me that 99% of the time this would be a coding mistake. However, when I compiled using g++ and all the -Wblah options I could think of, no warnings were generated. Is there a way to generate a warning when both a base and derived class have member functions of the same name where the derived class's function is virtual and the base class's function is not?
In Visual C++ you can use the override extension. Like this: virtual void func() override { std::cout << "DERIVED" << std::endl; } This will give an error if the function doesn't actually override a base class method. I use this for ALL virtual functions. Typically I define a macro like this: #ifdef _MSC_VER #define OVERRIDE override #else #define OVERRIDE #endif So I can use it like this: virtual void func() OVERRIDE { std::cout << "DERIVED" << std::endl; } I've looked for something like this in g++ but couldn't find a similar concept. The only thing I dislike about it in Visual C++ is that you can't have the compiler require it (or at least warn) on all overridden functions.
1,916,515
1,916,705
How defensive should you be?
Possible Duplicate: Defensive programming We had a great discussion this morning about the subject of defensive programming. We had a code review where a pointer was passed in and was not checked for validity. Some people felt that only a check for a null pointer was needed. I questioned whether it could be checked at a higher level, rather than in every method it is passed through, and that checking for null was a very limited check if the object at the other end of the pointer did not meet certain requirements. I understand and agree that a check for null is better than nothing, but it feels to me that checking only for null provides a false sense of security, since it is limited in scope. If you want to ensure that the pointer is usable, check for more than null. What are your experiences on the subject? How do you write defenses into your code for parameters that are passed to subordinate methods?
In Code Complete 2, in the chapter on error handling, I was introduced to the idea of barricades. In essence, a barricade is code which rigorously validates all input coming into it. Code inside the barricade can assume that any invalid input has already been dealt with, and that the inputs that are received are good. Inside the barricade, code only needs to worry about invalid data passed to it by other code within the barricade. Asserting conditions and judicious unit testing can increase your confidence in the barricaded code. In this way, you program very defensively at the barricade, but less so inside the barricade. Another way to think about it is that at the barricade, you always handle errors correctly, and inside the barricade you merely assert conditions in your debug build. As far as using raw pointers goes, usually the best you can do is assert that the pointer is not null. If you know what is supposed to be in that memory then you could ensure that the contents are consistent in some way. This begs the question of why that memory is not wrapped up in an object which can verify it's consistency itself. So, why are you using a raw pointer in this case? Would it be better to use a reference or a smart pointer? Does the pointer contain numeric data, and if so, would it be better to wrap it up in an object which managed the lifecycle of that pointer? Answering these questions can help you find a way to be more defensive, in that you'll end up with a design that is easier to defend.
1,916,574
1,916,881
How to effectively kill a process in C++ (Win32)?
I am currently writing a very lightweight program, so I have to use C++ since it is not bound to the .NET framework, which drastically increases the size of the program. I need to be able to terminate a process, and to do that I need to get a process handle. Unfortunately I haven't figured out how to do that yet. P.S. I know that to kill a process you have to use TerminateProcess.
The PID you need for OpenProcess() is not normally easy to get hold of. If all you have is a process name then you need to iterate the running processes on the machine. Do so with CreateToolhelp32Snapshot, followed by Process32First, and loop with Process32Next. The PROCESSENTRY32.szExeFile gives you the process name (not the path!), th32ProcessID gives you the PID. The next consideration is that the process may appear more than once, and there's a chance that the same process name is used for very different programs, like "Setup". If you don't just want to kill them all, you'll need to try to obtain some runtime info from them. Window caption bar text, perhaps. GetProcessImageFileName() can give you the path to the .exe. It uses the native kernel format; you'd need QueryDosDevice to map a disk drive device name to a drive letter. The next consideration is the rights you ask for in OpenProcess(). You are unlikely to get PROCESS_ALL_ACCESS; all you need is PROCESS_TERMINATE, although that's privileged as well. Ensure the account you use to run your program can obtain that right.
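Putting those pieces together, a Win32-only sketch along these lines might look like the following (the function name is made up, error handling is trimmed, and this kills every process matching the name, which per the caveats above may not be what you want):

```cpp
#include <windows.h>
#include <tlhelp32.h>

// Sketch: terminate every process whose executable name matches.
bool KillProcessByName(const wchar_t* targetName) {
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    if (snap == INVALID_HANDLE_VALUE) return false;

    PROCESSENTRY32W entry;
    entry.dwSize = sizeof(entry);
    bool killed = false;
    if (Process32FirstW(snap, &entry)) {
        do {
            if (lstrcmpiW(entry.szExeFile, targetName) == 0) {
                // Ask only for the right we actually need.
                HANDLE proc = OpenProcess(PROCESS_TERMINATE, FALSE,
                                          entry.th32ProcessID);
                if (proc) {
                    killed = TerminateProcess(proc, 1) != 0;
                    CloseHandle(proc);
                }
            }
        } while (Process32NextW(snap, &entry));
    }
    CloseHandle(snap);
    return killed;
}
```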
1,916,701
1,916,717
A simple C++ framework for Win32 Windows Applications?
Is there a simple/small framework (other than .NET) which allows you to create windowed applications with C++ under Win32? Just like a little DLL I can include with my app. It should have basic functions like creating a window, buttons, and text edits, and handling them.
WTL is a set of lightweight templates that make writing Win32 windowing code quite easy (to the extend C++/Win32 can be easy).
1,916,736
1,917,071
Is there a way to do something to static members on process end?
I have a class that uses libxml2. It has static members which are used to hold context for a schema file and its parser. I'm using valgrind, and it's complaining that memory is not deallocated in connection with the schema context. This is because you need to free that memory yourself. However, since these context variables are static, I can't free them on destruction of the object. Is there a way to call the necessary free functions, or should I just ignore valgrind?
Declare another class within your XML-using class. In its destructor, clean up your static members. Now give the outer class another static member of the inner class type. By virtue of having a non-trivial destructor, it will get cleaned up as the program exits, and thus your other values will get cleaned up, too. class UseLibXml { static int xmlvar; struct StaticCleanup { ~StaticCleanup() { CleanUpLibXmlVar(UseLibXml::xmlvar); } }; static StaticCleanup static_cleanup; }; Define UseLibXml::static_cleanup the same place you define the other static variables, in one of your .cpp files.
1,916,782
1,916,985
Static library links in wxWidgets statically, but apps using my lib still require wxwidgets
Hopefully someone can help me out here. I'm using Visual Studio 2005 and creating a static library that links in wxWidgets statically. I have: compiled wxWidgets statically according to their guide included the lib directory in my "Additional Library Directories" property added all of the wxWidget libs in my "Additional Dependencies" property set my "Link Library Dependencies" property to "Yes" set C++ Optimization to Disabled. I know that some of those steps shouldn't have to be done; I did so on a "just in case" rationale. While my library compiles without a hitch, the test application that includes my static lib complains during linking that it cannot find 'wxbase28.lib' (which I included). I should note that I abstracted wxWidgets out completely, so the library's public API has no mention of anything wxWidget-related. The test app shouldn't know that wxWidgets exists. My tiny library has grown to over 51 MB, so I get the feeling that the libraries are being linked in... so why does my test application complain that it cannot find the wxWidgets library? Thank you
It is the link step in the build process that pulls dependent libs in. When you build a static library, it does NOT pull in any recursive dependencies, as there is no link step. So both your static libs and wx's need to be present for the final application to link.
1,916,813
1,916,884
Handling of references in C++ templates
I currently have a function template, taking a reference, that does something in essence equivalent to: template <typename T> void f(T& t) { t = T(); } Now, I can call: int a; f(a); to initialize my variable a. I can even do: std::vector<int> a(10); f(a[5]); However, this will fail: std::vector<bool> a(10); f(a[5]); The reason is that a[5] returns an object with reference semantics, but not an actual reference. So I need to be able to write: template <typename T> void f(T a) { a = T(); } But if I add this new template and try to compile the first example (with int), I obtain the following error: test_multi_tmpl.cc: In function ‘int main()’: test_multi_tmpl.cc:20: error: call of overloaded ‘f(int&)’ is ambiguous test_multi_tmpl.cc:6: note: candidates are: void f(T&) [with T = int] test_multi_tmpl.cc:12: note: void f(T) [with T = int] Any ideas how to solve this? I wouldn't like to overload f just for std::vector<bool>::reference, as this construct might appear in other places ...
I think specialising f for std::vector<bool>::reference is your only option. Note that using std::vector<bool> is probably a bad idea in the first place (the std::vector<bool> specialisation is widely regarded as a mistake, and there have been proposals to remove or replace it in future versions of the language), so you could just use std::deque<bool> instead.
1,917,289
1,917,471
Call unmanaged C++ VS 6.0 MFC dll from C#
I have an unmanaged C++ MFC dll that was developed in VS 6.0. I would like to use it in my C# app. I'm trying to use PInvoke. Here is the C++ code: // testDll.cpp : Defines the entry point for the DLL application. // #include "stdafx.h" extern "C" { BOOL APIENTRY DllMain( HANDLE hModule, DWORD ul_reason_for_call, LPVOID lpReserved ) { return TRUE; } __declspec(dllexport) int test(int a) { return a * a; } } And here is how I'm trying to invoke the method "test" from C#: // PInvokeTest.cs using System; using System.Runtime.InteropServices; class PlatformInvokeTest { [DllImport("TestDll.dll")] internal static extern int test(int number); public static void Main() { Console.WriteLine(test(5)); } } This approach works just fine when I set C++ dll to be just a regular Win32 dll. But once I change the project type to MFC ("Use MFC in a Shared DLL") I'm getting this error: Unhandled Exception: System.DllNotFoundException: Unable to load DLL 'TestDll.dll': The specified module could not be found. (Exception from HRESULT: 0x8007007E) at PlatformInvokeTest.test(Int32 number) Thanks!
TestDll.dll probably can't load one of its dependent DLLs. Try loading your TestDll.dll file in the Depends (Dependency Walker) utility. Depends should be installed with VC 6, under Microsoft Visual Studio 6.0 Tools. That will show you what dependencies the DLL has and will flag if one of the dependencies failed. Make sure you load the TestDll.dll from the same folder that the C# code does. Note that Depends only works with unmanaged DLLs.
1,917,344
1,917,530
Writing a test case that checks for memory leaks in C++
NOTE: THIS IS NOT HOMEWORK IT IS FROM A PRACTICE EXAM GIVEN TO US BY OUR PROFESSORS TO HELP US PREPARE FOR OUR EXAM I'm currently studying for a programming exam. On one of the sample tests they gave us we have the following question: Suppose you have been given a templated Container that holds an unordered collection of objects. template <typename T> class Container { public: void insert(T *op); // EFFECTS: inserts the object pointed to by op into // the container T *remove(); // EFFECTS: removes an object from the Container, and // returns a pointer to it. Returns NULL if no // objects remain in the Container. // Note: the implementation can choose which // object to return if more than one exists. Container(); // ctor Container(const Container &l); // copy ctor Container &operator=(const Container &l); // assignment ~Container(); // dtor private: ... }; Note that this is the interface only; the implementation details have been left out for brevity. However, you may assume that the implementation is node based; a linked collection of nodes hold objects. You suspect that the implementation of the destructor does not satisfy the Conservation Rule of the At-Most-Once invariant, and is leaking memory instead. Write an acceptance test (similar to those in Project 4) to check for this condition. You must supply a suitable contained type, and a main that performs the test. Note that you cannot depend on any behavior that the language leaves undefined, you may not assume that you have the altnew allocator from Project 5 available to you, and you may not override the delete operator. Hint: you are allowed to use a global variable. I thought something like: #include <iostream> using namespace std; int *p = NULL; void leak() { int *num = new int(5); p = num; delete num; } int main() { if ((*p = 6)) { cout << "Memory leak\n"; } else { cout << "No Leak\n"; } } The basic idea behind this is I thought I couldn't write to a space of memory that I hadn't allocated.
When I compile this test code, though, it works just fine, so apparently you can. Any ideas on how to write such a test case?
What if you create a class to use as the template parameter that adds 1 to a global variable in its constructor and decreases that same global variable by 1 in its destructor? Then you can perform whatever tests you want on the container (create it, fill it and empty it, delete it, etc.) and check for memory leaks by checking that the global variable is 0 after the container has been destroyed.
1,917,411
1,917,461
What's the result if I use delete p instead of delete [] p for an array?
Possible Duplicates: Why is there a special new and delete for arrays? (POD) freeing memory: is delete[] equal to delete? What's the result if I use delete p instead of delete [] p for an array? I have come across two answers to this problem: 1. only the first element will be freed; 2. it comes to a catastrophic end. My question is, how can both of these happen? Why would there be a disaster if only the first element is freed? Can anybody offer me an example?
It is undefined behavior. What this means is that the standard guarantees to the writers of the memory management library that certain pre-conditions exist (in this case, that arrays will be deleted with delete []). If you break these pre-conditions then the memory management library could fail in some way. How it fails will depend on how the library is implemented. But since C++ is designed for speed the result is probably not going to be nice. So usually this means that the internal memory management data structures are corrupted in some way. This will probably lead to some other part of your program segfaulting. If you build in debug mode (on some compilers) they will use a special version of the memory management library that is designed to be more robust. Thus in these situations you may not crash, but the extra checks have been explicitly added to the library and as a result it is slower. But you still cannot guarantee correct behavior.
1,917,415
1,917,468
c++ boost regex which element was true
The answer to this may be a simple no, but here goes... I'm currently using the boost function regex_match to evaluate a string against a regex value. Instead of just returning T/F, is there a way to find out which element of multiple joined statements evaluated to true? For example: ^a$|^z$|^p$ a --> 0 z --> 1 f --> -1
Enclose them in capturing parentheses, then test which sub-expression matched. (^a$)|(^z$)|(^p$) match_results m; regex_match(..., m); a -> m[1].matched z -> m[2].matched p -> m[3].matched Update: You might be able to improve on it by making a single capture group and testing the result, e.g.: ^([azp])$ ... if ('a' == m[0][0]) ... Either method is almost certainly faster than calling regex_match three times, though to be sure you just have to test it. Unless you're doing this really often, the difference is not worth worrying about. Obviously, make sure that you're only setting up the regex once, not each time you need it. If need it to be really, really fast you probably shouldn't be using a regex.
1,917,590
1,920,381
Dialog application with LISTBOX
I'm creating an S60 application that will have a main dialog with a listbox of 5 or so items, but I keep receiving the message "application app1 closed" when trying to run the application on the emulator. This is my resource file (app1.rss) content: RESOURCE DIALOG r_dialog { flags=EAknDialogSelectionList; buttons=R_AVKON_SOFTKEYS_OPTIONS_EXIT; items= { DLG_LINE { id=EPowerSMSDlg1Label; type=EAknCtSingleGraphicListBox; control= LISTBOX { flags = EAknListBoxSelectionList; array_id=array0; }; }, DLG_LINE { itemflags = EEikDlgItemNonFocusing; id = EFindControl; type = EAknCtSelectionListFixedFind; } }; } RESOURCE ARRAY array0 { items= { LBUF { txt="Events Log"; } }; } What am I doing wrong?
You are experiencing a panic. You should enable extended panic code to see which panic you are getting, and then refer to the system panic reference documentation to see what it means. In this particular case, at least your listbox item format is not correct. EAknCtSingleGraphicListBox enum value corresponds to CAknSingleGraphicStyleListBox class and its documentation states that list item string format: "0\tTextLabel\t1\t2" where 0,1,2 are index to the icon array Your item text is missing those tab separators.
1,917,718
1,917,736
Are multiple conditional operators in this situation a good idea?
I just saw this block of code on the Wikipedia article on conditional operators: Vehicle new_vehicle = arg == 'B' ? bus : arg == 'A' ? airplane : arg == 'T' ? train : arg == 'C' ? car : arg == 'H' ? horse : feet; I've changed the code a little, but the idea is the same. Would you find this use of the conditional operator acceptable? It's much more concise than the if-else construct, and using a switch would definitely open up a whole new set of opportunities for bugs (fall-throughs anyone?). Also, if-elses and switch can't be used as R-values, so you'd have to create the variable first, initialize it and then assign as necessary. I for one really like this, but I'm wondering what others think. But the formatting is essential. EDIT: I still like this. But I understand those who say "the switch statement was made for this". OK, maybe so. But what if the conditions are function calls that return bool? Or a million other things you can't switch on. Are you switch lovers really trying to convince me that a huge if-else chain is better? Yes, programmers who don't know how to use the conditional operator will not understand this. They should learn how to use it. It's not arcane.
I have used this type of construction many times. As long as it's formatted nicely (i.e. not all on one line, making it unreadable), I don't see a problem with it.
1,917,789
1,919,117
How to use HTTPS with HttpReceiveHttpRequest()?
I'm using the Windows HTTP API to process web service requests in C++ (not .NET) and everything works just fine for HTTP requests. When I change the URLs I'm expecting with HttpAddUrl to https://example.com:443/foo/bar my tests from Internet Explorer no longer connect. My code does not get called at all and the calls to HttpReceiveHttpRequest don't complete when an HTTPS request comes in. I created a certificate authority for myself and it is visible inside IE but I can't figure out what to do next. What do I need to configure to make HTTP.SYS call my code when an HTTPS request comes in?
You'll need to install the SSL cert in the machine store (mmc.exe, add the Certificates snap-in, manage the Computer account, import the cert). Then have a go with httpconfig - it's a GUI version of httpcfg/netsh http that's much easier. I have this tool on every server I maintain that has SSL certs. Once that's configured, your SSL server registration should route correctly.
1,917,890
1,918,335
Using stdout/stderr/stdin streams behind haskell's FFI
I'm developing a small haskell program that uses an external static library I've developed in C++. It accesses the lib through ghc's FFI (foreign function interface). Inside this library I would like to do some output to the console. However, it looks to me like the c++ side of things does not have a correct handle to stdout because output does not appear on the console. So then, my questions are: Does ghc hijack these three streams (stdout, stdin, stderr) or is libstdc++ simply not initializing them because I'm linking with ghc? Do my FFI imports need to be "safe" if they write to stdout? How can I pass stdout to a C function? Should I simply pass it directly or do I need a C type? Additional notes: I'm linking libstdc++ directly to the executable (i.e. ghc -lstdc++ ...) which I naively assumed would be the correct way of doing this. Seems to work well Disclaimer: Still pretty new to Haskell, so baby steps for now ;P
Your problem does appear to be that libstdc++ is not being initialized. I'm not entirely sure why — -lstdc++ is sufficient on my system — but see if it works the other way around. Main.hs: {-# LANGUAGE ForeignFunctionInterface #-} module Main where foreign export ccall "Main_main" main :: IO () foreign import ccall driver_callback :: IO () main = putStrLn "Now in Haskell" >> driver_callback driver.cc: #include <iostream> extern "C" { # include "HsFFI.h" # ifdef __GLASGOW_HASKELL__ # include "Main_stub.h" extern void __stginit_Main(void); # endif void driver_callback(void) { std::cout << "Back in C++" << std::endl; } } int main(int argc, char **argv) { hs_init(&argc, &argv); # ifdef __GLASGOW_HASKELL__ hs_add_root(__stginit_Main); # endif std::cout << "Starting in C++" << std::endl; Main_main(); hs_exit(); return 0; } Compiling: $ ghc -c --make Main [1 of 1] Compiling Main ( Main.hs, Main.o ) $ ghc --make -no-hs-main -lstdc++ Main driver.cc Linking Main ... $ ./Main Starting in C++ Now in Haskell Back in C++
1,917,909
1,918,257
Detecting application hang
I have a very large, complex (million+ LOC) Windows application written in C++. We receive a handful of reports every day that the application has locked up, and must be forcefully shut down. While we have extensive reporting about crashes in place, I would like to expand this to include these hang scenarios -- even with heavy logging in place, we have not been able to track down root causes for some of these. We can clearly see where activity stopped - but not why it stopped, even in evaluating output of all threads. The problem is detecting when a hang occurs. So far, the best I can come up with is a watchdog thread (as we have evidence that background threads are continuing to run w/out issues) which periodically pings the main window with a custom message, and confirms that it is handled in a timely fashion. This would only capture GUI thread hangs, but this does seem to be where the majority of them are occurring. If a reply was not received within a configurable time frame, we would capture a memory and stack dump, and give the user the option of continuing to wait or restarting the app. Does anyone know of a better way to do this than such a periodic polling of the main window in this way? It seems painfully clumsy, but I have not seen alternatives that will work on our platforms -- Windows XP, and Windows 2003 Server. I see that Vista has much better tools for this, but unfortunately that won't help us. Suffice it to say that we have done extensive diagnostics on this and have been met with only limited success. Note that attaching windbg in real-time is not an option, as we don't get the reports until hours or days after the incident. We would be able to retrieve a memory dump and log files, but nothing more. Any suggestions beyond what I'm planning above would be appreciated.
The answer is simple: SendMessageTimeout! Using this API you can send a message to a window and wait for a timeout before continuing; if the application responds before the timeout, it is still running; otherwise it is hung.
1,918,065
1,918,119
Passing a reference of a base class to another function
Here is the problem I am facing; does anyone have a solution? Class A: public class B { // I want to pass a reference of B to Function } void ClassC::Function(class& B) { //do stuff }
The way you are declaring the class is wrong: class A : public B // no more class keyword here { }; // note the semicolon void ClassC::Function(const B &b) // this is how you declare a parameter of type B& { } You simply need to pass the object of type A to the Function. It'll work. It's good to declare the parameter as const if you want to take derived types too. To pass the this instance, you'd simply call: classCObject.Function(*this);
1,918,236
1,918,750
C# Child Process from Legacy C++ App Windowing Problems
We have a c++ legacy application and have been extending it with c# applets that are invoked using COM from the parent c++ app. They bring up windows that are not modal. Moreover, I think these .NET windows are not proper children of the c++ application, since EnumChildWindows misses them, and EnumWindows finds them. One child-like behavior remains, however, in that if you close the parent c++ app, the c# window will close as well. My basic problem with all this is that if the user invokes one of these c# applets, then inadvertently clicks the parent (c++) app window, the c# window drops to the background. If the user wants to bring this back to the top, they should be able to just click its icon in the TaskBar. Unfortunately, for some strange reason, it is often necessary to click the TaskBar icon three times! The first time should bring a hidden window to the top, but it doesn't. The second click minimizes the hidden window, and the third restores it successfully. Has anyone else run across this bug/feature when bridging the legacy->.NET divide? I'm wondering if I can intercept the first click on the taskbar icon for my C# applet, and somehow force it to claw its way back to the top. :-) I've been experimenting with the following: [DllImport("User32.dll")] private static extern int ShowWindow(IntPtr hwnd, IntPtr nCmdShow); but even if I get this working I'll still need to intercept that first mouseclick. Thanks for your help!
Would it work if the C# windows actually were child windows? It might be possible to accomplish that by passing the parent HWND as an argument to the C# COM object, and then using PInvoke to call SetParent on the C# windows. (I've never done this, but it sounds at least as safe as fighting with ShowWindow and the task bar?) (Note from the comments in the documentation for SetParent that you might also need to fiddle with the child window's window flags?) (Depending on the C# window type, it might already have a Handle property you can use; otherwise you could kludge a PInvoke call to FindWindow to get its handle.)
1,918,263
1,918,397
Reading Pixels of Image in C++
How do I open and read the pixels of an image in C++? I want to read them by X, Y position and get the color.
If you are going to be working with images, you should look into the OpenCV library; it has pretty much everything you need to work with images. OpenCV 2.0 came out a couple of months ago and it's very friendly with C++.
1,918,360
1,918,390
Are there any Regression Tests coded in C/C++ to test all the functionality of CString (ATL/MFC)?
I am trying to do a comparison of CString from ATL/MFC to a custom CString implementation and I want to make sure that all the functionality in the custom implementation matches that of the ATL/MFC implementation. The reason we have a custom CString implementation is so that we can use it on *nix and Windows platforms. The interface is the same, but the implementation is different so when we port our Windows code to use the SDK we are writing we don't have to change ALL the names ... we will only have to work out the differences between the two implementations. Any help on this would be greatly appreciated. Thanks!
Personally I cannot think of any. However, if I were doing it I would encode all the use cases I have for it and make sure I owned a test to cover each one. Also, on Windows do you delegate to the supplied implementation or use your own? If you delegated, you could find your tests more useful, as they could highlight differences. Good luck,
1,918,385
1,918,477
How do I over-allocate memory using new to allocate variables within a struct?
So I have a couple of structs... struct myBaseStruct { }; struct myDerivedStruct : public myBaseStruct { int a, b, c, d; unsigned char* ident; }; myDerivedStruct* pNewStruct; ...and I want to dynamically allocate enough space so that I can 'memcpy' in some data, including a zero-terminated string. The size of the base struct is apparently '1' (I assume because it can't be zero) and the size of the derived is 20, which seems to make sense (5 x 4). So, I have a data buffer which is a size of 29, the first 16 bytes being the ints and the remaining 13 being the string. How can I allocate enough memory for pNewStruct so that there is enough for the string? Ideally, I just want to go: allocate 29 bytes at pNewStruct; memcpy from buffer into pNewStruct; Thanks,
You can allocate any size you want with malloc: myDerivedStruct* pNewStruct = (myDerivedStruct*) malloc( sizeof(myDerivedStruct) + size_of_extra_data ); You have a different problem though, in that myDerivedStruct::ident is a very ambiguous construct. It is a pointer to a char (array), so the struct ends with the address where the char array starts. ident can point anywhere, and it is very ambiguous who owns the array ident points to. It seems to me that you expect the struct to end with the actual char array itself, with the struct owning the extra array. Such structures usually have a size member to keep track of their own size so that API functions can properly manage and copy them, and the extra data starts, by convention, after the structure ends. Or they end with a 0-length array, char ident[0], although that creates problems with some compilers. For many reasons, there is no place for inheritance in such structs: struct myStruct { size_t size; int a, b, c, d; char ident[0]; };
1,918,498
1,918,508
Filling a Partially Rounded Rectangle with GDI+
I have a rounded rectangle that I make like so dc.RoundRect(textBorder, CPoint(20, 20)); Later on I draw a line through it about 1/3 of the way down. dc.LineTo(textBorder.right, textBorder.top + 15); Now I would like to fill just the part above the line with a solid color. In other words I need to fill a partially rounded rectangle, because the top of the rectangle is rounded, but the bottom of it is truncated by the line. Is there an easy way to do this?
Have you tried using a combination of CreateRoundRectRegion and then FillRgn to fill the non-rectangular area? This the example given in the docs for CreateRoundRectRegion: CRgn rgnA, rgnB, rgnC; VERIFY(rgnA.CreateRoundRectRgn( 50, 50, 150, 150, 30, 30 )); VERIFY(rgnB.CreateRoundRectRgn( 200, 75, 250, 125, 50, 50 )); VERIFY(rgnC.CreateRectRgn( 0, 0, 50, 50 )); int nCombineResult = rgnC.CombineRgn( &rgnA, &rgnB, RGN_OR ); ASSERT( nCombineResult != ERROR && nCombineResult != NULLREGION ); CBrush brA, brB, brC; VERIFY(brA.CreateSolidBrush( RGB(255, 0, 0) )); VERIFY(pDC->FillRgn( &rgnA, &brA)); // rgnA Red Filled VERIFY(brB.CreateSolidBrush( RGB(0, 255, 0) )); VERIFY(pDC->FillRgn( &rgnB, &brB)); // rgnB Green Filled VERIFY(brC.CreateSolidBrush( RGB(0, 0, 255) )); // rgnC Blue VERIFY(pDC->FrameRgn( &rgnC, &brC, 2, 2 )); In general, when you want to do something with non-rectangular areas you have to start looking into regions.
1,918,502
1,918,504
Very weird errors when linking (LNK1000)?
Error 1 fatal error LNK1000: Internal error during IncrBuildImage MFC_Test MFC_Test Why do I get this weird error every 2nd time I compile?
It is a bug in link.exe. Apply this hotfix https://connect.microsoft.com/VisualStudio/Downloads/DownloadDetails.aspx?DownloadID=11399
1,918,563
1,918,792
split a string using find_if
I found the following code in the book "Accelerated C++" (Chapter 6.1.1), but I can't compile it. The problem is with the find_if lines. I have the necessary includes (vector, string, algorithm, cctype). Any idea? Thanks, Jabba bool space(char c) { return isspace(c); } bool not_space(char c) { return !isspace(c); } vector<string> split_v3(const string& str) { typedef string::const_iterator iter; vector<string> ret; iter i, j; i = str.begin(); while (i != str.end()) { // ignore leading blanks i = find_if(i, str.end(), not_space); // find end of next word j = find_if(i, str.end(), space); // copy the characters in [i, j) if (i != str.end()) { ret.push_back(string(i, j)); } i = j; } return ret; }
There is no problem in the code you posted. There is a very obvious problem with the real code you linked to: is_space and space are member functions, and they cannot be called without an instance of Split2. This requirement doesn't make sense, though, so at least you should make those functions static. (Actually it doesn't make much sense for split_v3 to be a member function either. What does having a class called Split2 achieve over having just a free function - possibly in a namespace?)
1,918,723
1,918,739
std::vector of known sequences
I'm trying to learn C++ by doing everything "the C++ way". I'm writing a program where I have all of these known values (at compile time). Here is my problem: In my constructor I want to check to see if a passed value (an int) is one of 2, 4, 8, 16 or 32 and throw an error otherwise. I've thought about: making a C-style array of ints; creating a vector by hand beforehand and iterating through it to check; making a list? (I've never used lists before, though.) What I really want to do is make a const vector in a separate header file; this doesn't seem possible though. What is the most elegant way to do this check? Also, similarly, is there any way to make a vector with a set number of known values (at compile time) in a header? If I can do const std::string a_str("a string"); I don't see why I can't use a similar technique for vectors.
What's wrong with: if (!(n == 2 || n == 4 || n == 8 || n == 16 || n == 32)) { // no! } If you want the "C++ way", a static array should do, with find: template <typename T, size_t N> T* endof(T (&pArray)[N]) { return &pArray[0] + N; } static const int OkNumbers[] = {2, 4, 8, 16, 32}; static const int* OkNumbersEnd = endof(OkNumbers); if (std::find(OkNumbers, OkNumbersEnd, n) == OkNumbersEnd) { // no! } Modifying this list is simple, and I'm guessing the compiler may optimize this to my previous answer.
1,918,911
1,931,513
Better boost asio deadline_timer example
I'm after a better example of the boost::asio::deadline_timer. The examples given will always time out and call the close method. I tried calling cancel() on a timer, but that causes the function passed into async_wait to be called immediately. What's the correct way of working with timers in an async TCP client?
You mention that calling cancel() on a timer causes the function passed to async_wait to be called immediately. This is the expected behavior but remember that you can check the error passed to the timer handler to determine if the timer was cancelled. If the timer was cancelled, operation_aborted is passed. For example: void handleTimer(const boost::system::error_code& error) { if (error == boost::asio::error::operation_aborted) { std::cout << "Timer was canceled" << std::endl; } else if (error) { std::cout << "Timer error: " << error.message() << std::endl; } } Hopefully this helps. If not, what is the specific example that are you looking for?
1,919,032
1,920,786
C++ Builder - Spawn TThreads On the Fly
I'm looking for the ability to spawn a thread or function so that it returns immediately to the calling line and continue on with the program but continues with the thread work. For instance, if you call Form.ShowDialog(), it will create a modeless form that has its own UI thread. Is there a way to do this (no form) without having to declare a TThread class? I guess sort of like an anonymous thread, if that even exists.
I don't know exactly why you don't want to create a TThread subclass, but if you are using the Windows version of C++ Builder you can use the _beginthreadex function (declared in process.h).
1,919,125
1,919,180
Programmatically adding a directory to Windows PATH environment variable
I'm writing a Win32 DLL with a function that adds a directory to the Windows PATH environment variable (to be used in an installer). Looking at the environment variables in Regedit or the Control Panel after the DLL has run shows me that my DLL has succeeded in adding the path to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Environment and HKEY_CURRENT_USER\Environment. But when I start up a new Command Prompt (after running the DLL), the directory I added does not show up in the output of echo %PATH% and I can not access the executable that lives in that directory by typing its name. I think my program is not doing a good job of notifying the system that the PATH has changed, or maybe it is notifying them before the change has fully taken effect. I read an article by Microsoft that says to broadcast the WM_SETTINGCHANGE message after changing an environment variable, and I am doing that with this code: DWORD result2 = 0; LRESULT result = SendMessageTimeout(HWND_BROADCAST, WM_SETTINGCHANGE, 0, (LPARAM)"Environment", SMTO_ABORTIFHUNG, 5000, &result2); if (result == 0){ /* ... Display error message to user ... */ } The order of my calls is: RegCreateKeyEx, RegSetValueEx, RegCloseKey, SendMessageTimeout If I press "OK" in the Control Panel "Environment Variables" window, the changes made by my DLL to the PATH show up in newly-created command prompts, so there is something that the Control Panel is doing to propagate PATH changes; I want to figure out what it is and do the same thing. Does anyone know what I should do? I'm running 64-bit Windows Vista but I want this to work on all Windows XP, Vista and Windows 7 operating systems. Update: The problem with the code I posted above is that I did not put the L prefix on the "Environment" string. 
Although it does not say it explicitly anywhere in the Microsoft documentation that I can find, the LPARAM needs to be a pointer to a WCHAR string (2-byte characters) as opposed to a CHAR string, which is what Visual Studio's compiler generates by default when I write a string literal. The solution to my problem was to change "Environment" to L"Environment". (I thought I already tried that before posting this question, but apparently I didn't try it correctly!) But anyone who wants a complete C++ solution for this task should look at Dan Moulding's answer.
It turns out there really isn't anything new under the sun. This has already been done before, at least once. By me. I created a DLL very much like what you describe for exactly the same purpose (for use in modifying the path from an NSIS installer). It gets used by the Visual Leak Detector installer. The DLL is called editenv.dll. The source is available at github. I just tested the installer and it updated the system PATH environment variable, no problem. Based on what you've written, I don't see anything that stands out as being wrong. I also don't see anything obvious that's missing. But it may be worth a look at the editenv.dll source (you'd be most interested in EnvVar::set() in EnvVar.cpp, and possibly the pathAdd() and pathRemove() C APIs in editenv.cpp).
1,919,251
1,919,279
Display image in opengl
I am fairly new to OpenGL. I have a 3D game that I have running, and it seems to go fairly well. What I would like to do is display an image straight onto the screen, and I am not sure of the easiest way to do that. My only idea is to draw a rectangle right in front of the screen and use the image as the texture. It seems like there should be an easier way. This is for menu screens and things, so if there is a better way to do that as well, please let me know.
I would recommend setting up OpenGL for 2D rendering via gluOrtho2D(); then, load the image into a texture and, as you said, draw it to the screen by creating a polygon and binding the texture to it. A good example can be found here.
1,919,388
1,919,411
Testing for a non-null pointer, and returning null otherwise
I'm wondering whether it's considered okay to do something like this. if ( p_Pointer != NULL ) { return p_Pointer; } else { return NULL; } Without the else, whatever. The point is that if the pointer is null, NULL is going to be returned, so it would seem pointless wasting a step on this. However, it seems useful for debugging purposes, because if I was stepping through with a debugger I would be able to check with this test if the pointer is NULL or not. Any comments or suggestions regarding this practice?
It's "okay" to do this, i.e. there's nothing wrong with it, although it's not very useful. If you're stepping through in a debugger, you should be able to display the value of p_Pointer anyway. It's similar to if( flag == TRUE ) { return TRUE; } else { return FALSE; } rather than just return flag;
1,919,546
1,921,262
Sun Studio C++ "is not terminated with a newline" warning - how to suppress?
I have ported a fair bit of code from Windows to Solaris. One of the issues I have is that I am getting heaps of warnings: Warning: Last line in file is not terminated with a newline. I like warnings, but because of the sheer number of them I am afraid I could miss a more important one. Which compiler (cc) option should I specify to silence this one? Thanks.
Although I think Martin's solution of fixing the original source files would be preferable, if you really want to disable the warnings then this page describes the -erroff flag which you can use to disable specific warnings. In your case add -erroff=E_NEWLINE_NOT_LAST to the CC command line to switch the newline warning off, e.g.: # Display the warning and the warning tag name. /opt/forte/sunstudio11_patch2/SUNWspro/bin/cc -errtags=yes test.c "test.c", line 1: warning: newline not last character in file (E_NEWLINE_NOT_LAST) # Disable the warning. /opt/forte/sunstudio11_patch2/SUNWspro/bin/cc -erroff=E_NEWLINE_NOT_LAST test.c
1,919,571
1,919,588
whats the difference between c compiler and c++ compiler of microsoft c/c++ compiler?
I could compile void main() as a C++ source file with the Microsoft C/C++ compiler 14.00 (integrated with Visual Studio 2005). So does it mean that the compiler does not conform to the C++ standard on the main function prototype? Is the Microsoft C/C++ compiler only one compiler, that is, is it only a C++ compiler? Because a C source file can be compiled as a C++ source file, is there no need to develop the C compiler anymore? Thanks.
I could compile the void main() The valid signatures of main are: int main(void); // no parameters int main(int, char **); // parameterized Everything else is non-standard, although the standard does allow an implementation to accept additional signatures of main(). Is the microsoft c/c++ compiler only one compiler, that is, is it only one c++ compiler? Yes, it is one executable (cl.exe). However, it can work either as a C compiler or a C++ compiler; by default the mode is chosen from the source file's extension (.c vs .cpp). You can override this by going into Project Properties > C/C++ > Advanced (/TP or /TC)
1,919,574
1,919,595
calculating expression without using semicolon
Given an input expression like 68+32, we have to evaluate it without using a semicolon in our program. Could it be something inside an if or for loop? Reference: https://www.spoj.pl/problems/EXPR2/
You can use if and the comma operator, something like this: if( expr1, expr2, expr3, ... ) {} It would be equivalent to expr1; expr2; expr3; ... To use variables without any warnings you can define a function that receives the data types you need and that you call from your main, like so: void myFunc(int a, double b) { if ( expr1, expr2 ) { } } int main() { if ( myFunc(0, 0), 0 ) { } } Note that you need to add , 0 in main, otherwise an error is raised because the void return value cannot be used as a condition.
1,919,608
1,921,403
Checking for null before pointer usage
Most people use pointers like this... if ( p != NULL ) { DoWhateverWithP(); } However, if the pointer is null for whatever reason, the function won't be called. My question is, could it possibly be more beneficial to just not check for NULL? Obviously on safety critical systems this isn't an option, but your program crashing in a blaze of glory is more obvious than a function not being called if the program can still run without it. In relation to the first question, do you always check for NULL before you use pointers? Secondly, consider you have a function that takes a pointer as an argument, and you use this function multiple times on multiple pointers throughout your program. Do you find it more beneficial to test for NULL in the function (the benefit being you don't have to test for NULL all over the place), or on the pointer before calling the function (the benefit being no overhead from calling the function)?
Don't make it a rule to just check for null and do nothing if you find it. If the pointer is allowed to be null, then you have to think about what your code does in the case that it actually is null. Usually, just doing nothing is the wrong answer. With care it's possible to define APIs which work like that, but this requires more than just scattering a few NULL checks about the place. So, if the pointer is allowed to be null, then you must check for null, and you must do whatever is appropriate. If the pointer is not allowed to be null, then it's perfectly reasonable to write code which invokes undefined behaviour if it is null. It's no different from writing string-handling routines which invoke undefined behaviour if the input is not NUL-terminated, or writing buffer-using routines which invoke undefined behaviour if the caller passes in the wrong value for the length, or writing a function that takes a FILE* parameter, and invokes undefined behaviour if the user passes in a file descriptor reinterpret_cast to FILE*. In C and C++, you simply have to be able to rely on what your caller tells you. Garbage in, garbage out. However, you might like to write code which helps out your caller (who is probably you, after all) when the most likely kinds of garbage are passed in. Asserts and exceptions are good for this. Taking up the analogy from Franci's comment on the question: most people do not look for cars when crossing a footpath, or before sitting down on their sofa. They could still be hit by a car. It happens. But it would generally be considered paranoid to spend any effort checking for cars in those circumstances, or for the instructions on a can of soup to say "first, check for cars in your kitchen. Then, heat the soup". The same goes for your code. It's much easier to pass an invalid value to a function than it is to accidentally drive your car into someone's kitchen.
But it's still the fault of the driver if they do so and hit someone, not a failure of the cook to exercise due care. You don't necessarily want cooks (or callees) to clutter up their recipes (code) with checks that ought to be redundant. There are other ways to find problems, such as unit tests and debuggers. In any case it is much safer to create a car-free environment except where necessary (roads), than it is to drive cars willy-nilly all over the place and hope everybody can cope with them at all times. So, if you do check for null in cases where it isn't allowed, you shouldn't let this give people the idea that it is allowed after all. [Edit - I literally just hit an example of a bug where checking for null would not find an invalid pointer. I'm going to use a map to hold some objects. I will be using pointers to those objects (to represent a graph), which is fine because map never relocates its contents. But I haven't defined an ordering for the objects yet (and it's going to be a bit tricky to do so). So, to get things moving and prove that some other code works, I used a vector and a linear search instead of a map. That's right, I didn't mean vector, I meant deque. So after the first time the vector resized, I wasn't passing null pointers into functions, but I was passing pointers to memory which had been freed. I make dumb errors which pass invalid garbage approximately as often as I make dumb errors which pass null pointers invalidly. So regardless of whether I add checking for null, I still need to be able to diagnose problems where the program just crashes for reasons I can't check. Since this will also diagnose null pointer accesses, I usually don't bother checking for null unless I'm writing code to generally check the preconditions on entry to the function. In that case it should if possible do a lot more than just check null.]
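A small sketch of the "asserts are good for this" point: when null is a caller bug rather than a valid input, make the contract loud instead of silently skipping the work.

```cpp
#include <cassert>
#include <cstddef>

// Precondition: s must not be null. The assert documents and enforces
// the contract in debug builds; release builds rely on the caller.
std::size_t my_strlen(const char *s)
{
    assert(s != NULL && "caller must pass a non-null string");
    std::size_t n = 0;
    while (s[n] != '\0') ++n;
    return n;
}
```

The function never quietly "does nothing"; an invalid call fails fast where the bug is.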
1,919,626
1,919,647
Can I get a non-const C string back from a C++ string?
Const-correctness in C++ is still giving me headaches. In working with some old C code, I find myself needing to assign turn a C++ string object into a C string and assign it to a variable. However, the variable is a char * and c_str() returns a const char []. Is there a good way to get around this without having to roll my own function to do it? edit: I am also trying to avoid calling new. I will gladly trade slightly more complicated code for less memory leaks.
I guess there is always strcpy. Or use char* strings in the parts of your C++ code that must interface with the old stuff. Or refactor the existing code to compile with the C++ compiler and then use std::string.
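One way to get a writable char* without calling new, sketched here: copy the string's bytes into a std::vector<char> and hand out its buffer, which stays valid (and is freed automatically) as long as the vector lives.

```cpp
#include <string>
#include <vector>

// Copy a std::string into a caller-provided vector<char>; the returned
// pointer is writable and remains valid while buf is alive.
char *writable_copy(const std::string &s, std::vector<char> &buf)
{
    buf.assign(s.begin(), s.end());
    buf.push_back('\0');   // keep it a valid C string
    return &buf[0];
}
```

The vector owns the memory, so there is nothing to delete and no leak.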
1,919,657
1,919,699
C++ static library link with shared lib. Compiling would be fine?
Here is a C++ project, and its lib dependency is Hello.exe -> A.so -> B.a B.a -> A.so Hello.exe depends on B.a and A.so, and B.a depends on A.so. Will GCC link Hello.exe successfully? And if there is a b.cc file in B.a which includes a header file a.h of A.so, and also uses some interfaces of A.so, then with the right "include" path setting, compiling b.cc to b.o should succeed. But without A.so as input, would linking B.a fail? gcc -c b.cc -I../A/include ;; successful gcc -a B.a b.o ;; fail Where can I find detailed documentation about these complex library link relationships? Thanks.
A static library is just a collection of object files created from compiled .c/.cpp files. It cannot have link relationships. You will need to specify link dependencies to both A.so and B.a when you link Hello.exe; off the top of my head it would be something like gcc -o Hello.exe B.a A.so As a side note you should rename A.so to libA.so and instead do gcc -o Hello.exe B.a -L. -lA (note that B.a comes before -lA, since B.a depends on symbols from A.so and the linker resolves libraries left to right). Linking to A.so directly like example 1 will require that A.so is always in the same directory as Hello.exe If you use example 2, you can put libA.so anywhere and use LD_LIBRARY_PATH to point to the right directory.
1,920,430
1,920,481
C++ array initialization
Is this form of initializing an array to all 0s char myarray[ARRAY_SIZE] = {0} supported by all compilers? If so, is there similar syntax for other types? For example bool myBoolArray[ARRAY_SIZE] = {false}
Yes, this form of initialization is supported by all C++ compilers. It is a part of the C++ language. In fact, it is an idiom that came to C++ from the C language. In C, = { 0 } is the idiomatic universal zero-initializer. This is also almost the case in C++. Since this initializer is universal, for a bool array you don't really need different "syntax". 0 works as an initializer for bool type as well, so bool myBoolArray[ARRAY_SIZE] = { 0 }; is guaranteed to initialize the entire array with false. As well as char* myPtrArray[ARRAY_SIZE] = { 0 }; is guaranteed to initialize the whole array with null-pointers of type char *. If you believe it improves readability, you can certainly use bool myBoolArray[ARRAY_SIZE] = { false }; char* myPtrArray[ARRAY_SIZE] = { nullptr }; but the point is that the = { 0 } variant gives you exactly the same result. However, in C++ = { 0 } might not work for all types, like enum types, for example, which cannot be initialized with integral 0. But C++ supports the shorter form T myArray[ARRAY_SIZE] = {}; i.e. just an empty pair of {}. This will default-initialize an array of any type (assuming the elements allow default initialization), which means that for basic (scalar) types the entire array will be properly zero-initialized.
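A quick self-check of the forms discussed: = {0}, = {false}, and the empty C++ {} all zero every element, not just the first.

```cpp
#include <cstddef>

// Returns true if all three initializer forms produced all-zero arrays.
bool all_zero()
{
    char c[8] = {0};      // idiomatic C zero-initializer
    bool b[8] = {false};  // same thing, spelled for bool
    int  i[8] = {};       // C++ empty-brace form
    for (std::size_t k = 0; k < 8; ++k)
        if (c[k] != 0 || b[k] || i[k] != 0)
            return false;
    return true;
}
```

Only the first element is explicitly listed; the rest are value-initialized to zero by the language rules.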
1,920,687
1,921,236
Passing information between two separate programs
I want to pass the value of an input variable in one program (let's say #1) to another program (#2), and I want #2 to print the data it received to the screen. Both need to be written in C++. This will be on Linux.
In response to your comment to Roopesh Majeti's answer, here's a very simple example using environment variables (using POSIX setenv, since this is on Linux): First program: // p1.cpp - set the variable #include <cstdlib> using namespace std; int main() { setenv( "MYVAR", "foobar", 1 ); system( "./p2" ); } Second program: // p2.cpp - read the variable #include <cstdlib> #include <iostream> using namespace std; int main() { char * p = getenv( "MYVAR" ); if ( p == 0 ) { cout << "Not set" << endl; } else { cout << "Value: " << p << endl; } } Note: there is no standard C++ way of setting an environment variable; setenv is POSIX, and with putenv you would need to construct the name=value string from the variable contents yourself.
1,920,906
1,931,302
OpenCV with QT on Maemo 5 (N900)
Since the presentation by Eero Bragge at the Amsterdam DevDays about QT / QT-Creator I've been looking for an excuse to try my hand at mobile development. Now this excuse has arrived in the form of the Nokia N900, my new phone! My other hobby is computer vision, so my first ideas for applications to try and build lie in that direction. My questions now are: Has anyone tried QT Creator + OpenCV + Maemo 5? I see there is a year-old port of OpenCV for Maemo Diablo (4.1); has anyone tried that one on Maemo 5? I see that improvements to the OpenCV port were among the Maemo Google Summer of Code 2009 ideas that didn't make the cut. Is there work being done there? How easy is it to acquire images from the phone's camera and convert them to something OpenCV understands? Does anyone have any useful links to share?
"I see that improvements to the OpenCV port are were among the Meamo google summer of code 2009 ideas that didn't make the cut. Is there work being done there?" The project was not select, and AFAIK the people involved didn't carry the project. OpenCV seems to work under Maemo5 according to the discussion here: http://n2.nabble.com/OpenCV-for-Maemo5-td4172275.html#a4172275
1,920,910
1,920,927
What is the best way to generically take a container in C++ using interfaces (i.e. the equivalent of taking IEnumerable as an argument in C#)
I would like a C++ constructor/method to be able to take any container as an argument. In C# this would be easy using IEnumerable; is there an equivalent in C++/STL? Anthony
The C++ way to do this is with iterators. Just like all the <algorithm> functions that take an iterator pair (begin, end) as their first two parameters. #include <iterator> #include <numeric> template <class IT> typename std::iterator_traits<IT>::value_type foo(IT first, IT last) { typedef typename std::iterator_traits<IT>::value_type T; return std::accumulate(first, last, T()); } If you really want to go passing the container itself to the function, you have to use 'template template' parameters. This is due to the fact that C++ standard library containers are not only templated with the type of the contained elements, but also with an allocator type, which has a default value and is therefore usually implicit. #include <vector> #include <list> #include <numeric> #include <iostream> template <class T, class A, template <class, class> class CONT> T foo(CONT<T, A> &cont) { return std::accumulate(cont.begin(), cont.end(), T()); } int main() { std::vector<int> v; v.push_back(1); v.push_back(2); v.push_back(3); std::list<int> l; l.push_back(1); l.push_back(2); l.push_back(3); std::cout << foo(v) << " " << foo(l) << "\n"; return 0; }
1,920,969
1,921,010
Is there an easy way to push variables onto the stack for later retrieval
I have a member function of an object that is typically used in an iterative manner but occasionally in a recursive manner. The function is basically following the path of a stream of water downhill, and under certain conditions the stream could split. In order to support the occasional recursion I have to push the state of the object onto the stack and pop it off afterwards. I'm currently doing this using local variables, but the code looks awful. I'd like to write PushState and PopState members but most of the implementations I've looked at aren't great. e.g. using a stack member in my object which ends up using the heap, which is slow and leads to other issues. _alloca won't work as the stack frame is lost when the push function returns. Are there any other reasonable generic ways of pushing and popping from the stack that I'm missing? class CMyObject { enum Direction { left, right, branch, finished }; // state variables double m_XPos, m_YPos; void Navigate(double x, double y); Direction GoLeft(); Direction GoRight(); }; void CMyObject::Navigate(double x, double y) { m_XPos = x; m_YPos = y; Direction d = GoLeft(); while(d != finished) { switch(d) { case left: d = GoLeft(); break; case right: d = GoRight(); break; case branch: { // push object state onto the stack double temp_x = m_XPos; double temp_y = m_YPos; Navigate(m_XPos, m_YPos); // pop object state from the stack m_XPos = temp_x; m_YPos = temp_y; GoRight(); } break; } } }
I guess the obvious way would be to create a copy of your entire object and then do the recursive call on that. This way, each branch has its own state, and your compiler does the stack management for you. Basically, a method has to be reentrant in order to be safely used in recursion. That is not the case here, since your methods rely on persistent state. So you have to copy that state, and that means copying the object. UPDATE: Some more links about recursion and reentrancy: this article is about reentrancy in embedded systems, but also explains the relation to recursion Wikipedia on the subject another article (pdf) by the author of the first one Sadly, Donald Knuth has not yet finished the fourth part of his masterpiece, but when he does, read it, for it will certainly clear this up.
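A compilable sketch (hypothetical names) of the copy idea: the recursive branch runs on its own copy of the object, so the original's state never needs a manual push/pop.

```cpp
// The branch gets a copy of the whole object; when the recursive call
// returns, *this was never modified, so there is nothing to restore.
struct Walker {
    double x, y;
    int depth;

    void navigate()
    {
        if (depth >= 2) return;   // stand-in for the real termination test
        Walker branch(*this);     // copy: an implicit "push" of the state
        branch.depth += 1;
        branch.x += 1.0;          // the branch mutates only its copy
        branch.navigate();
        // no "pop" needed here
    }
};
```

The trade-off is one object copy per branch, which is usually cheaper and far less error-prone than hand-rolled save/restore code.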
1,921,231
1,922,749
Maintaining a recent files list
I would like to maintain a simple recent files list on my MFC application that shows the 4 most recently used file names. I have been playing with an example from Eugene Kain's "The MFC Answer Book" that can programmatically add strings to the Recent Files list for an application based on the standard Document/View architecture: (see "Managing the Recent Files List (MRU)") : http://www.nerdbooks.com/isbn/0201185377 My application is a fairly lightweight utility that does not use the Document/View architecture to manage data, file formats and so on. I am not sure if the same principles used in the above example would be applicable here. Does anyone have any examples of how they go about maintaining a recent files list that is displayed in the File menu, and can be stored in a file / registry setting somewhere? More than anything, it's my lack of knowledge and understanding that is holding me back. Update: I have recently found this CodeProject article to be quite useful... http://www.codeproject.com/KB/dialog/rfldlg.aspx
I recently did that using MFC, so since you seem to be using MFC as well maybe it will help: in: BOOL MyApp::InitInstance() { // Call this member function from within the InitInstance member function to // enable and load the list of most recently used (MRU) files and last preview // state. SetRegistryKey("MyApp"); //I think this caused a problem with Vista and up if it wasn't there //, not really sure now since I didn't write a comment at the time LoadStdProfileSettings(); } //.. //function called when you save or load a file void MyApp::addToRecentFileList(boost::filesystem::path const& path) { //use file_string to have your path in windows native format (\ instead of /) //or it won't work in the MFC version in vs2010 (error in CRecentFileList::Add at //hr = afxGlobalData.ShellCreateItemFromParsingName) AddToRecentFileList(path.file_string().c_str()); } //function called when the user clicks on a recent file in the menu boost::filesystem::path MyApp::getRecentFile(int index) const { return std::string((*m_pRecentFileList)[index]); } //... //handler for the menu BOOL MyFrame::OnCommand(WPARAM wParam, LPARAM lParam) { BOOL answ = TRUE; if(wParam >= ID_FILE_MRU_FILE1 && wParam <= ID_FILE_MRU_FILE16) { int nIndex = wParam - ID_FILE_MRU_FILE1; boost::filesystem::path path = getApp()->getRecentFile(nIndex); //do something with the recent file, probably load it return answ; } } You only need your application to be derived from CWinApp (and I use a class derived from CFrameWnd to handle the menu, maybe you do the same?). Tell me if that works for you. Not sure if I have everything.
1,921,232
1,924,467
Just-In-Time Derivation
There's a less common C++ idiom that I've used to good effect a few times in the past. I just can't seem to remember if it has a generally used name to describe it. It's somewhat related to mixins, CRTP and type-erasure, but is not specifically any of those things. The problem is found when you want to add some implementation to a class, but you don't want to put it in the class, or any class it derives from. One reason for this might be that the class could be part of an inheritance hierarchy where the implementation should only occur once. Setting aside, for the moment, issues such as whether a hierarchy should have concrete non-leaf classes, or whether virtual inheritance may be an option in some cases, I know that one solution is to provide the implementation in a template class that derives from its template parameter. This then allows you to use the template when you create an instance, but then only ever use the object by pointer or reference to one of its bases (that's where the type erasure, in a loose sense, comes in). An example might be that you have an intrusive reference count. All your classes derive from a ref count interface, but you only want the ref count itself, and the implementation of your ref count methods, to appear once, so you put them in the derived template - let's call it ImplementsRC<T>. Now you can create an instance like so: ConcreteClass* concrete = new ImplementsRC<ConcreteClass>(); I'm glossing over things like forwarding constructors formed of multiple templated overloads etc. So, hopefully I've made it clear what the idiom is. Now back to my question - is there an accepted, or at least generally used, name for this idiom?
I'd definitely consider this to be a mixin, as would Bruce Eckel (http://www.artima.com/weblogs/viewpost.jsp?thread=132988). In my opinion one of the things that makes this a mixin is that it's still single inheritance, which is different from using MI to achieve something similar.
1,921,607
1,921,640
A reliable way to identify a computer by its IP address
I have a network of computers that will connect to a server via DHCP, so I don't know what IP address a computer will get when it connects to the server. If, for example, 192.168.0.39 is connected to the server, can I identify the real computer behind this IP address? (I can install an external application on each client in order to send some data to the server, for example its MAC address.)
If you are responsible for the DHCP server, you can configure it to hand out a specific IP to a specific MAC. Having done that, you can be reasonably confident of that mapping -- it is possible to spoof MACs, so if you are worried about security, you'll need a much more heavy-duty approach. If this is a casual application where the risk of that is low, you can configure your DHCP server to hand out IPs based on MACs and then make use of those mappings in your application.
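For example, with the ISC DHCP server the MAC-to-IP pinning looks like this (hypothetical MAC address and host name):

```
# dhcpd.conf fragment: always hand this MAC the same address
host workstation1 {
  hardware ethernet 00:11:22:33:44:55;
  fixed-address 192.168.0.39;
}
```

With such an entry in place, the application only needs a table mapping fixed addresses to machines.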
1,921,817
1,921,917
Template type deduction in C++ for Class vs Function?
Why is it that automatic type deduction is possible only for functions and not for classes?
In specific cases you could always do what std::make_pair does: template<class T> foo<T> make_foo(T val) { return foo<T>(val); } EDIT: I just found the following in "The C++ Programming Language, Third Edition", page 335. Bjarne says: Note that class template arguments are never deduced. The reason is that the flexibility provided by several constructors for a class would make such deduction impossible in many cases and obscure in many more. This is of course very subjective. There's been some discussion about this in comp.std.c++ and the consensus seems to be that there's no reason why it couldn't be supported. Whether it would be a good idea or not is another question...
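Filled out into a self-contained example (with a made-up foo class), showing that the function template deduces what the class template cannot:

```cpp
// A make-function deduces T from its argument, the way std::make_pair
// does, because function template arguments are deduced while class
// template arguments are not.
template <class T>
struct foo {
    T val;
    explicit foo(T v) : val(v) {}
};

template <class T>
foo<T> make_foo(T val)
{
    return foo<T>(val);   // T is spelled out only once, here
}
```

At the call site, make_foo(42) works with no explicit <int>, whereas foo f(42) would not compile under these rules.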
1,921,948
1,922,030
ILockBytesOnHGlobal WriteAt performance decreases over time
I've created an ILockBytes object with CreateILockBytesOnHGlobal, and I write 64k of data repeatedly. What I've noticed is that WriteAt performance decreases over time. What could be the reason for the performance slowdown? Does it have to do with stream growth? Here is what I'm doing (in C#) public override void Write(byte[] buffer, int offset, int count) { EnsureBufferSize(count); Marshal.Copy(buffer, offset, hGlobalBuffer, count); lockBytes.WriteAt(writeOffset, hGlobalBuffer, count, out temp); writeOffset += temp.ToUInt32(); }
CreateILockBytesOnHGlobal documentation says that it uses GlobalReAlloc to increase the memory block. GlobalReAlloc copies the data from the old memory block to the new (and larger) memory block, so this causes performance to go down over time.
1,921,961
1,921,988
Should a person new to windowed applications study X, GTK+, or what?
Let's say the factors for valuing a choice are the library of widgets available, the slope of the learning curve, and the degree of portability (platforms it works on). As far as language bindings go, I'm using C++. Thanks!
Pure X is quite hardcore these days, and not very portable. Basically, there are three major toolkits: GTK+ (and the C++ wrapper GTKmm) Qt wxWidgets which are pretty comparable, so which to choose is a matter of taste. All three run on the three major operating systems, although GTK+ on Mac and Windows is a little bit awkward.
1,922,069
1,970,470
WinPE 2.0 (Vista) - Looking for a solution for BrowseForFolder using VBSCRIPT & HTA application
I am creating an HTA application to be run inside of a WinPE 2.0 environment. The purpose of this HTA app is to prompt the user to select a back-up location. I am currently using BrowseForFolder to prompt the user folder location. Script works fine in Vista. However, this does not work in winpe 2.0 - and a dialog appears with no folders to select. Here is my code, lines 61-75: http://pastie.org/747122 Sub ChooseSaveFolder strStartDir = "" userselections.txtFile.value = PickFolder(strStartDir) End Sub Function PickFolder(strStartDir) Dim SA, F Set SA = CreateObject("Shell.Application") Set F = SA.BrowseForFolder(0, "Please choose a location to backup your system to. A .tbi file will be created here.", 0, strStartDir) If (Not F Is Nothing) Then PickFolder = F.Items.Item.path End If Set F = Nothing Set SA = Nothing End Function Failed Attempted Solutions: 1) Adding the directory X:\Windows\System32\config\systemprofile\Desktop Has anyone created any advanced HTA apps for winpe 2.0? I am looking for a solution to this problem, or possibly some c++ code that can put me on my way to accomplish a similar task.
After weeks and weeks... I have found (and tested) a solution using Autoit, download here: http://www.autoitscript.com/autoit3/ Autoit will allow you to create a standalone executable BrowseForFolder dialog using their "BASIC-like scripting language designed for automating the Windows GUI and general scripting" By doing this, the dialog is not dependent on any other windows files, and can be run in WinPE 2.0 Autoit may also be a solution to your other WinPE 2.0 dll dependency issues. Enjoy!
1,922,294
1,923,059
Using Unicode font in C++ console app
How do I change the font in my C++ Windows console app? It doesn't seem to use the font cmd.exe uses by default (Lucida Console). When I run my app through an existing cmd.exe (typing name.exe) it looks like this: http://dathui.mine.nu/konsol3.png which is entirely correct. But when I run my app separately (double-click the .exe) it looks like this: http://dathui.mine.nu/konsol2.png. Same code, two different looks. So now I wonder how I can change the font so it always looks correct regardless of how it's run. EDIT: Ok, some more information. When I just use this little snippet: SetConsoleOutputCP(CP_UTF8); wchar_t s[] = L"èéøÞǽлљΣæča"; int bufferSize = WideCharToMultiByte(CP_UTF8, 0, s, -1, NULL, 0, NULL, NULL); char* m = new char[bufferSize]; WideCharToMultiByte(CP_UTF8, 0, s, -1, m, bufferSize, NULL, NULL); wprintf(L"%S", m); it works with the correct font. But in my real application I use WriteConsoleOutput() to print strings instead: CHAR_INFO* info = new CHAR_INFO[mWidth * mHeight]; for(unsigned int a = 0; a < mWidth*mHeight; ++a) { info[a].Char.UnicodeChar = mWorld.getSymbol(mWorldX + (a % mWidth), mWorldY + (a / mWidth)); info[a].Attributes = mWorld.getColour(mWorldX + (a % mWidth), mWorldY + (a / mWidth)); } COORD zero; zero.X = zero.Y = 0; COORD buffSize; buffSize.X = mWidth; buffSize.Y = mHeight; if(!WriteConsoleOutputW(window, info, buffSize, zero, &rect)) { exit(-1); } and then it uses the wrong font. I use two different windows, created like this: mHandleA = CreateConsoleScreenBuffer(GENERIC_READ | GENERIC_WRITE, 0, NULL, CONSOLE_TEXTMODE_BUFFER, NULL); Might I be setting the codepage for just the standard output or something?
For Vista and above, there is SetCurrentConsoleFontEx, as has already been said. For 2K and XP, there is an undocumented function SetConsoleFont; e.g. read here. typedef BOOL (WINAPI *FN_SETCONSOLEFONT)(HANDLE, DWORD); FN_SETCONSOLEFONT SetConsoleFont; .......... HMODULE hm = GetModuleHandle(_T("KERNEL32.DLL")); SetConsoleFont = (FN_SETCONSOLEFONT) GetProcAddress(hm, "SetConsoleFont"); // add error checking .......... SetConsoleFont(GetStdHandle(STD_OUTPUT_HANDLE), console_font_index); Now, console_font_index is an index into the console font table, the definition of which is unknown. However, console_font_index == 10 is known to identify Lucida Console (a Unicode font). I'm not sure how stable this value is across different OS versions. UPDATE After dutt's comment, I've run an experiment on a clean XP SP2 setup. Initially, GetNumberOfConsoleFonts(), indeed, returns 10, and font indices 0..9 specify various raster fonts. After I open a console with the Lucida font selected in its properties (just once; I can close it immediately after opening but the effect is the same), suddenly GetNumberOfConsoleFonts() starts to return 12, and indices 10 and 11 select Lucida at different sizes. So it seems this trick worked for me when I played with it because I always had at least one console app running with the Lucida font selected. Thus, for practical purposes, jon hanson's answer seems better. Besides offering better control, it actually works. :)
1,922,325
1,928,950
Find a cycle in an undirected graph (boost) and return its vertices and edges
I need a function that finds a cycle in an undirected graph (Boost) and returns its vertices and edges. It need only return the vertices/edges of one cycle in the graph. My question is - what is the best way to do this with Boost? I am not experienced with it.
If you want to find a cycle, then using depth first search should do just fine. The DFS visitor has a back_edge function. When it's called, you have an edge in the cycle. You can then walk the predecessor map to reconstruct the cycle. Note that: There's the strong_components function, to find, well, strong components Finding all cycles, as opposed to a cycle, is a much harder problem, and I believe Boost.Graph does not have an implementation for that at present
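The same DFS + back-edge + predecessor-walk recipe sketched without the BGL machinery (plain adjacency lists), since the algorithm is identical either way; with Boost you would put the back-edge logic in a dfs_visitor instead.

```cpp
#include <cstddef>
#include <vector>

// DFS on an undirected graph: an edge to a vertex that is already on
// the DFS stack and is not our tree-edge parent is a back edge, i.e.
// it closes a cycle. pred lets the caller walk the cycle from u to v.
static bool find_cycle(int u, int parent,
                       const std::vector<std::vector<int> > &adj,
                       std::vector<int> &state, std::vector<int> &pred,
                       int &cycle_u, int &cycle_v)
{
    state[u] = 1;                              // on the DFS stack
    for (std::size_t k = 0; k < adj[u].size(); ++k) {
        int v = adj[u][k];
        if (v == parent) continue;             // don't reuse the tree edge
        if (state[v] == 1) {                   // back edge found
            cycle_u = u; cycle_v = v;
            return true;
        }
        if (state[v] == 0) {
            pred[v] = u;
            if (find_cycle(v, u, adj, state, pred, cycle_u, cycle_v))
                return true;
        }
    }
    state[u] = 2;                              // finished
    return false;
}
```

On a triangle 0-1-2, this reports the back edge (2, 0), and following pred from 2 back to 0 yields the cycle's vertices.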
1,922,455
1,923,014
thread synchronization - delicate issue
Let's say I have this loop: static int a; for (static int i=0; i<10; i++) { a++; ///// point A } Two threads enter this loop. I'm not sure about something: what will happen if thread 1 gets to point A and stays there, while thread 2 goes through the loop 10 times, but after the 10th iteration, after incrementing i's value to 10 but before checking whether i is less than 10, thread 1 leaves point A and is supposed to increment i and enter the loop again. Which value will thread 1 increment (which i will it see)? Will it be 10 or 0? Is it possible that thread 1 will increment i to 1, and then thread 2 will go through the loop again 9 times (and then maybe 8, 7, etc.)? thanks
You have to realize that an increment operation is effectively: read the value add 1 write the value back You have to ask yourself what happens if two of these happen in two independent threads at the same time: static int a = 0; thread 1 reads a (0) adds 1 (value is 1) thread 2 reads a (0) adds 1 (value is 1) thread 1 writes (1) thread 2 writes (1) For two simultaneous increments, you can see that it is possible that one of them gets lost because both threads read the pre-incremented value. The example you gave is complicated by the static loop index, which I didn't notice at first. Since this is C++ code, the standard implementation is that the static variables are visible to all threads, thus there is only one loop counting variable for all threads. The sane thing to do would be to use a normal auto variable, because each thread would have its own, no locking required. That means that while you will lose increments sometimes, you may also gain them because the loop itself may lose count and iterate extra times. All in all, a great example of what not to do.
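The lost update can be shown single-threaded by interleaving the three steps by hand; this is exactly the schedule traced above.

```cpp
// Interleave the read/add/write steps of two hypothetical threads:
// both read 0 and both write 1, so one increment is lost.
int a = 0;

void interleaved_increments()
{
    int t1 = a;   // thread 1 reads a (0)
    int t2 = a;   // thread 2 reads a (0)
    a = t1 + 1;   // thread 1 writes back (1)
    a = t2 + 1;   // thread 2 writes back (1), clobbering thread 1's update
}
```

Two increments ran, but a ends up at 1 rather than 2, which is why a shared counter needs a lock or an atomic increment.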
1,922,580
1,922,730
Import a DLL with C++ (Win32)
How do I import a DLL (minifmod.dll) in C++? I want to be able to call a function inside this DLL. I already know the argument list for the function but I don't know how to call it. Is there a way of declaring an imported function in C++ like in C#?
The C# syntax for declaring an imported function is not available in C++. Here are some other SO questions on how to use DLLs: Explicit Loading of DLL Compile a DLL in C/C++, then call it from another program Calling functions in a DLL from C++ Call function in c++ dll without header How to use dll's? Is this a good way to use dlls? (C++?)
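For reference, the explicit-loading pattern those questions describe looks like this (Windows-only; the export name and signature below are made up, so substitute the real ones exported by minifmod.dll):

```cpp
#include <windows.h>
#include <iostream>

// Hypothetical export: adjust the typedef to the real function's
// calling convention and parameter list.
typedef int (__stdcall *SomeFunc)(int);

int main()
{
    HMODULE dll = LoadLibraryA("minifmod.dll");
    if (!dll) { std::cerr << "LoadLibrary failed\n"; return 1; }

    SomeFunc f = (SomeFunc)GetProcAddress(dll, "SomeExportedFunction");
    if (f)
        std::cout << f(42) << '\n';   // call through the pointer

    FreeLibrary(dll);
    return 0;
}
```

The alternative is implicit linking against the DLL's import library with the function declared in a header, which looks like an ordinary function call at the call site.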
1,922,986
1,923,058
Running in the Terminal a build made in XCode, how?
I'm creating a project in Xcode using OpenCV as a framework. It works great with the Build & Run option from Xcode, but now I need to run it in the Terminal, and it gives me this error: dyld: Library not loaded: @executable_path/../Frameworks/OpenCV.framework/Versions/A/OpenCV Referenced from: /Users/Victor/Documents/PFC/src/opencv/blob/build/Release/./test3 Reason: image not found Trace/BPT trap I look for the build and just execute it with ./ So, any clue?
You need to run it from the build directory rather than the Release directory (assuming Frameworks is a directory in blob)
1,923,091
1,923,375
UTF-16 codecvt facet
Extending from this question about locales And described in this question: What I really wanted to do was install a codecvt facet into the locale that understands UTF-16 files. I could write my own. But I am not a UTF expert and as such I am sure I would get it only nearly correct, and it would break at the most inconvenient time. So I was wondering if there are any resources (on the web) of pre-built codecvt (or other) facets that can be used from C++ that are peer reviewed and tested? The reason is that the default locale (on my system, Mac OS X 10.6) when reading a file just converts 1 byte to 1 wchar_t with no conversion. Thus UTF-16 encoded files are converted into wstrings that contain lots of null ('\0') characters.
I'm not sure if by "resources on the Web" you meant available free of cost, but there is the Dinkumware Conversions Library that sounds like it will fit your needs—provided that the library can be integrated into your compiler suite. The codecvt types are described in the section Code Conversions.
1,923,201
1,923,221
CString join method?
I need to concatenate a list of MFC CString objects into a single CSV string. .NET has String.Join for this task. Is there an established way to do this in MFC/C++?
The + operator is overloaded to allow string concatenation. I'd suggest taking a look at the documentation on MSDN: Basic CString Operations has the following example: CString s1 = _T("This "); // Cascading concatenation s1 += _T("is a "); CString s2 = _T("test"); CString message = s1 + _T("big ") + s2; // Message contains "This is a big test". If you want the strings to be comma-separated, just add the commas yourself.
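A Join helper is a short loop either way; it is sketched here with std::string so it is self-contained, but with MFC the same += loop applies, e.g. iterating a CStringArray and returning a CString.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Concatenate items with a separator between them, in the spirit of
// .NET's String.Join.
std::string Join(const std::vector<std::string> &items,
                 const std::string &sep)
{
    std::string result;
    for (std::size_t i = 0; i < items.size(); ++i) {
        if (i > 0) result += sep;
        result += items[i];
    }
    return result;
}
```

Joining {"a", "b", "c"} with "," yields the CSV string "a,b,c", with no trailing separator to trim.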
1,923,317
1,923,504
Can BSTRs hold characters that take more than 16 bits to represent?
I am confused about Windows BSTRs and WCHARs, etc. WCHAR is a 16-bit character intended to allow for Unicode characters. What about characters that take more than 16 bits to represent? Some UTF-8 chars require more than that. Is this a limitation of Windows? Edit: Thanks for all the answers. I think I understand the Unicode aspect. I am still confused on the Windows/WCHAR aspect though. If WCHAR is a 16-bit char, does Windows really use 2 of them to represent code-points bigger than 16 bits, or is the data truncated?
UTF-8 is not the encoding used in Windows' BSTR or WCHAR types. Instead, they use UTF-16, which defines each code point in the Unicode set using either 1 or 2 WCHARs. 2 WCHARs gives exactly the same amount of code points as 4 bytes of UTF-8. So there is no limitation in Windows character set handling.
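For example (using C++11 u"" literals, which postdate the question but make the layout easy to see), the code point U+1D11E (a musical clef symbol) is outside the 16-bit range, so UTF-16 stores it as a surrogate pair of two 16-bit units:

```cpp
#include <cstddef>

// u"" literals are UTF-16; one code point above U+FFFF becomes two units.
const char16_t clef[] = u"\U0001D11E";

// Number of 16-bit units, excluding the terminating null.
const std::size_t clef_units = sizeof(clef) / sizeof(clef[0]) - 1;
// clef[0] is the high surrogate 0xD834, clef[1] the low surrogate 0xDD1E.
```

On Windows, WCHAR strings follow the same scheme: one code point above U+FFFF occupies two WCHARs, and nothing is truncated.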
1,923,664
1,924,029
Simulating low memory using C++
I am debugging a program that fails during a low-memory situation and would like a C++ program that just consumes a lot of memory. Any pointers would help!
Allocating big blocks is not going to work. Depending on the OS, you are not limited to the actual physical memory, and unused large chunks could potentially just be swapped out to disk. Also, this makes it very hard to get your memory to fail exactly when you want it to fail. What you need to do is write your own version of new/delete that fails on command. Something like this:

```cpp
#include <memory>
#include <iostream>
#include <cstdlib>

bool memoryAllocFail = false;

void* operator new(std::size_t size)
{
    std::cout << "New Called\n";
    if (memoryAllocFail)
    {
        throw std::bad_alloc();
    }
    return std::malloc(size);
}

void operator delete(void* block)
{
    std::free(block);
}

int main()
{
    std::auto_ptr<int> data1(new int(5));

    memoryAllocFail = true;
    try
    {
        std::auto_ptr<int> data2(new int(5));
    }
    catch (std::exception const& e)
    {
        std::cout << "Exception: " << e.what() << "\n";
    }
}
```

```
> g++ mem.cpp
> ./a.exe
New Called
New Called
Exception: St9bad_alloc
```
1,923,780
1,923,800
Using typedef from inside a template as template argument type
I'm trying to do something like this (completely synthetic example, because the real code is a bit too convoluted):

```cpp
enum MyInfoType { Value1, Value2 };

template<typename T>
struct My_Type_Traits {};

template<>
struct My_Type_Traits<int>
{
    typedef MyInfoType InfoType;
};

template<typename T>
class Wrap
{
    template<My_Type_Traits<T>::InfoType INFO>
    int GetInfo() { ... }
};

...
Wrap<int> w;
int info = w.GetInfo<Value1>();
```

So basically I'm trying to use a typedef from inside another struct as the type of a template parameter. With this code, however, the compiler complains that struct My_Type_Traits<T>::InfoType is not a type. So what do I need to do to make this work?
You need to use the typename keyword, like typename My_Type_Traits<T>::InfoType, to let the compiler know you're referring to a nested type.
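A minimal compilable sketch of the fix (simplified from the question's code; InfoType is shown as int to keep the sketch short):

```cpp
// Traits class exposing a nested type, as in the question.
template <typename T>
struct My_Type_Traits;

template <>
struct My_Type_Traits<int>
{
    typedef int InfoType;
};

template <typename T>
class Wrap
{
public:
    // 'typename' tells the compiler the dependent name is a type,
    // so it can be used as a non-type template parameter's type.
    template <typename My_Type_Traits<T>::InfoType INFO>
    int GetInfo() { return INFO; }
};
```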
1,924,070
1,924,096
Don't give away your internals? [C++]
I am reading a book called "C++ Coding Standards" by Herb Sutter and Andrei Alexandrescu, and in chapter 42 of this book is an example (the chapter is short, so I'm taking the liberty of pasting part of it): Consider:

```cpp
class Socket
{
public:
    // ... constructor that opens handle_, destructor that closes handle_, etc. ...
    int GetHandle() const { return handle_; }  // avoid this - (1) <- why is this bad code?
                                               // and why is there a comment to avoid such code?
private:
    int handle_;  // perhaps an OS resource handle
};
```

Data hiding is a powerful abstraction and modularity device (see Items 11 and 41). But hiding data and then giving away handles to it is self-defeating, just like locking your house and leaving the keys in the lock. This is because: Clients now have two ways to implement functionality: They can use your class's abstraction (Socket) or directly manipulate the implementation that your class relies on (the socket's C-style handle). In the latter case, the object is unaware of significant changes to the resource it thinks it owns. Now the class cannot reliably enrich or embellish functionality (e.g., proxying, logging, collecting statistics) because clients can bypass the embellished, controlled implementation and any of the invariants it thinks it's adding, which makes correct error handling next to impossible (see Item 70). The class cannot change the underlying implementation of its abstraction because clients depend on it: If Socket is later upgraded to support a different protocol with a different set of low-level primitives, calling code that fetches the underlying handle_ and manipulates it incorrectly will be silently broken. The class cannot enforce its invariants because calling code can alter state unbeknownst to the class: For example, someone could close the handle being used by a Socket object without going through a Socket member function, thus rendering the object invalid.
Client code can store the handles that your class returns, and attempt to use them after your class's code has invalidated them. This is a summary from the book: Don't volunteer too much: Avoid returning handles to internal data managed by your class, so clients won't uncontrollably modify state that your object thinks it owns. Basically what I'm asking is: Why is the line I marked as (1) listed as an example of bad code? (I always thought that returning pointers or references was a bad idea, but that returning by value was OK. Are they saying that returning by value is a bad idea too?) Is it possible that there is a '&' missing, and what they really mean is not to return internal data by reference or pointer? Thank you.
I think what you're missing is that a handle — even though it's represented by an int in the type system — is a reference to something. This isn't returning some informational value — it's returning the object's internal reference to a system resource. The class should manage this handle itself, and the handle should be influenced by the external world only through the class interface.
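A sketch of the alternative (all names hypothetical): instead of surrendering the handle, the class keeps it private and exposes only the operations it is willing to support, so its bookkeeping about the handle can never go stale:

```cpp
// Sketch only: 'handle_' stands in for a real OS resource handle.
class Socket
{
public:
    Socket() : handle_(42), open_(true) {}   // pretend 42 is a valid handle
    ~Socket() { Close(); }

    // The only way to invalidate the handle is through the class itself,
    // so the object always knows whether it is still valid.
    void Close() { open_ = false; }
    bool IsOpen() const { return open_; }

private:
    int handle_;   // never handed out to callers
    bool open_;
};
```

With GetHandle() removed, no outside code can close or repurpose the handle behind the object's back.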
1,924,255
1,924,341
How to embed WebKit into my C/C++/Win32 application?
The solutions I have found are irrelevant: someone used WebKit in a Delphi project; someone used it with Java; there is QtWebKit (the about:blank demo app takes 44 megs); there is a .NET port of it; and a GTK+ port. I need a guide on how to embed a WebKit instance into a pure C/C++ application under Win32.
Brent Fulgham has put lots of work into producing a Windows Cairo port of WebKit, which doesn't rely on Apple's proprietary backend stuff (e.g. CoreGraphics, CoreFoundation, CFNetwork). I believe that is what you are after. The details aren't entirely collated in one place, but there is some information in the Trac wiki and other bits are dotted around on Brent's blog. EDIT: Link to Brent's project on sourceforge EDIT 2: News of some interesting progress from Brent's blog
1,924,530
1,926,432
mixing cout and printf for faster output
After performing some tests I noticed that printf is much faster than cout. I know that it's implementation dependent, but on my Linux box printf is 8x faster. So my idea is to mix the two printing methods: I want to use cout for simple prints, and I plan to use printf for producing huge outputs (typically in a loop). I think it's safe to do as long as I don't forget to flush before switching to the other method:

```cpp
cout << "Hello" << endl;
cout.flush();

for (int i = 0; i < 1000000; ++i)
{
    printf("World!\n");
}
fflush(stdout);

cout << "last line" << endl;
cout << flush;
```

Is it OK like that? Update: Thanks for all the precious feedback. Summary of the answers: if you want to avoid tricky solutions, simply stick with cout, but don't use endl, since it flushes the buffer implicitly (slowing the process down). Use "\n" instead. It can be interesting if you produce large outputs.
The direct answer is that yes, that's okay. A lot of people have thrown around various ideas of how to improve speed, but there seems to be quite a bit of disagreement over which is most effective. I decided to write a quick test program to get at least some idea of which techniques did what.

```cpp
#include <iostream>
#include <string>
#include <sstream>
#include <time.h>
#include <iomanip>
#include <algorithm>
#include <iterator>
#include <stdio.h>

char fmt[] = "%s\n";
static const int count = 3000000;
static char const *const string = "This is a string.";
static std::string s = std::string(string) + "\n";

void show_time(void (*f)(), char const *caption)
{
    clock_t start = clock();
    f();
    clock_t ticks = clock() - start;
    std::cerr << std::setw(30) << caption
              << ": " << (double)ticks / CLOCKS_PER_SEC << "\n";
}

void use_printf()
{
    for (int i = 0; i < count; i++)
        printf(fmt, string);
}

void use_puts()
{
    for (int i = 0; i < count; i++)
        puts(string);
}

void use_cout()
{
    for (int i = 0; i < count; i++)
        std::cout << string << "\n";
}

void use_cout_unsync()
{
    std::cout.sync_with_stdio(false);
    for (int i = 0; i < count; i++)
        std::cout << string << "\n";
    std::cout.sync_with_stdio(true);
}

void use_stringstream()
{
    std::stringstream temp;
    for (int i = 0; i < count; i++)
        temp << string << "\n";
    std::cout << temp.str();
}

void use_endl()
{
    for (int i = 0; i < count; i++)
        std::cout << string << std::endl;
}

void use_fill_n()
{
    std::fill_n(std::ostream_iterator<char const *>(std::cout, "\n"), count, string);
}

void use_write()
{
    for (int i = 0; i < count; i++)
        std::cout.write(s.data(), s.size());
}

int main()
{
    show_time(use_printf, "Time using printf");
    show_time(use_puts, "Time using puts");
    show_time(use_cout, "Time using cout (synced)");
    show_time(use_cout_unsync, "Time using cout (un-synced)");
    show_time(use_stringstream, "Time using stringstream");
    show_time(use_endl, "Time using endl");
    show_time(use_fill_n, "Time using fill_n");
    show_time(use_write, "Time using write");
    return 0;
}
```

I ran this on Windows after compiling with VC++ 2013 (both x86 and x64 versions). Output from one run (with output redirected to a disk file) looked like this:

```
Time using printf: 0.953
Time using puts: 0.567
Time using cout (synced): 0.736
Time using cout (un-synced): 0.714
Time using stringstream: 0.725
Time using endl: 20.097
Time using fill_n: 0.749
Time using write: 0.499
```

As expected, results vary, but there are a few points I found interesting: printf/puts are much faster than cout when writing to the NUL device, but cout keeps up quite nicely when writing to a real file. Quite a few proposed optimizations accomplish little. In my testing, fill_n is about as fast as anything else. By far the biggest optimization is avoiding endl. cout.write gave the fastest time (though probably not by a significant margin). I've recently edited the code to force a call to printf. Anders Kaseorg was kind enough to point out that g++ recognizes the specific sequence printf("%s\n", foo); as equivalent to puts(foo);, and generates code accordingly (i.e., generates code to call puts instead of printf). Moving the format string to a global array, and passing that as the format string, produces identical output but forces it to be produced via printf instead of puts. Of course, it's possible they might optimize around this some day as well, but at least for now (g++ 5.1) a test with g++ -O3 -S confirms that it's actually calling printf (where the previous code compiled to a call to puts).
1,924,844
1,924,983
std::map of member function pointers?
I need to implement an std::map with <std::string, fn_ptr> pairs. The function pointers are pointers to methods of the same class that owns the map. The idea is to have direct access to the methods instead of implementing a switch or an equivalent. ( I am using std::string as keys for the map ) I'm quite new to C++, so could anyone post some pseudo-code or link that talks about implementing a map with function pointers? ( pointers to methods owned by the same class that owns the map ) If you think there's a better approach to my problem, suggestions are also welcome.
This is about the simplest I can come up with. Note no error checking, and the map could probably usefully be made static.

```cpp
#include <map>
#include <iostream>
#include <string>
using namespace std;

struct A
{
    typedef int (A::*MFP)(int);
    std::map<string, MFP> fmap;

    int f(int x) { return x + 1; }
    int g(int x) { return x + 2; }

    A()
    {
        fmap.insert(std::make_pair("f", &A::f));
        fmap.insert(std::make_pair("g", &A::g));
    }

    int Call(const string & s, int x)
    {
        MFP fp = fmap[s];
        return (this->*fp)(x);
    }
};

int main()
{
    A a;
    cout << a.Call("f", 0) << endl;
    cout << a.Call("g", 0) << endl;
}
```
1,925,237
1,925,274
Control USB port's power?
Does anybody know how to control the USB pins on a certain USB port? I think it is definitely possible in assembler, but what about C++ or C#? I want to be able to use USB battery power as a supply for an LED or something like that. A program would then power it on and off, making it flash. I know it sounds pointless, but I need to do it for something awesome. I also know that it might require a custom driver.
USB is not trivial, so I guess you'll have some problems (mis)using it. You would be /much/ better off (IMHO) with a standard serial port, which has been used for stuff like that for ages, with plenty of examples available. If you don't have a serial port available on your target machine, you can use a USB-to-serial interface cable. That being said, you'll probably want to take a look at: http://sourceforge.net/projects/libusbdotnet/ LP, Dejan
1,925,264
1,925,281
MSVC9: How do I view a location in memory?
I'm pretty sure I'm overlooking something totally obvious, but I want to view the raw contents of a point in memory under MSVC9, but I can't find a location in the UI where I can punch in a memory address. How can this be done?
A couple of places: When you're debugging, go to Debug->Windows->Memory In the watch window, just cast a memory address to whatever you want: (char*)0xdeadbeef
1,925,341
1,925,373
Problems passing argument to a const parameter
Say I have a function that takes a const reference to a pointer. Example:

```cpp
void Foo( const Bar *&p_Thing );
```

and I pass a pointer

```cpp
Bar *blah = NULL; // Initialized when program starts up
```

to the function

```cpp
Foo( blah );
```

I may encounter a compiler error like this:

```
invalid initialization of reference of type 'const Bar*&' from expression of type 'Bar*'
```

This has happened to me a few times, and I'd really like to clear up how const operates in terms of applying to parameters in relation to argument passing. Any help is appreciated, thanks.
This is what you want: void Foo( Bar * const &p_Thing ); Then it becomes a const-reference to a Bar * pointer, which has the lovely feature of compiling.
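A minimal sketch of the working declaration (Bar here is a hypothetical stand-in for the question's type). Note where the const sits: it applies to the pointer itself, not to the Bar it points at, which is why a plain Bar* argument binds directly:

```cpp
#include <cstddef>

struct Bar { int value; };

// Bar* const& : a const reference to a pointer-to-Bar.
// A plain Bar* argument binds to this with no conversion trouble.
int Foo(Bar* const& p_Thing)
{
    return p_Thing ? p_Thing->value : -1;
}
```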
1,925,403
1,925,426
Implementing a z buffer in a software rasterizer
As a homework assignment, we're writing a software rasterizer. I've noticed my z buffering is not working as well as it should, so I'm trying to debug it by outputting it to the screen. (Black is near, white is far away.) However, I'm getting peculiar values for the z per vertex. This is what I use to transform the points:

```cpp
float Camera::GetZToPoint(Vec3 a_Point)
{
    Vec3 camera_new = (m_MatRotation * a_Point) - m_Position;
    return (HALFSCREEN / tanf(_RadToDeg(60.f * 0.5f)) / camera_new.z);
}
```

m_MatRotation is a 3x3 matrix. Multiplying it by a vector returns a transformed vector. I get maximum and minimum values between 0 and x, where x is a seemingly random number. Am I doing this transformation right? If so, how can I normalize my Z values so they lie between two set points? Thanks in advance.
To normalize the Z values you have to define a near clipping plane and a far clipping plane. Then you normalize Z such that it's 0 at the near plane and 1 at the far plane. However, you would usually do that after projection. It looks like your last line is where projection occurs. A number of other things: You compute the full matrix-vector multiplication but keep only the Z; this is wasteful. You should consider transforming the points and keeping all their X, Y, Z coordinates. You recompute tanf() at every vertex, but it's constant. I would suggest you use a projection matrix rather than the tanf computation. Start with a simple orthogonal projection; it will be easier to debug.
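A sketch of the normalization step itself, assuming near/far plane values that the caller picks (hypothetical names):

```cpp
// Map a view-space depth in [zNear, zFar] linearly onto [0, 1],
// so 0 renders black (near) and 1 renders white (far).
float NormalizeZ(float z, float zNear, float zFar)
{
    return (z - zNear) / (zFar - zNear);
}
```

Values outside the [zNear, zFar] range fall outside [0, 1], which is exactly what clipping against the near and far planes is meant to prevent.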
1,925,422
1,925,435
How do I convert System::WideString to a char* and vice versa?
I have a situation where I need to compare a char* with a WideString. How do I convert the WideString to a char* in C++?
You can use the wcstombs function:

```cpp
size_t wcstombs( char * mbstr, const wchar_t * wcstr, size_t max );
```
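A short usage sketch. The conversion depends on the current locale; plain ASCII content works even in the default "C" locale, while other characters may need a setlocale call first:

```cpp
#include <cstdlib>
#include <cstring>

// Convert a wide string into a narrow buffer. Returns the number of bytes
// written (not counting the terminator), or (size_t)-1 on failure.
std::size_t narrow(char* out, const wchar_t* in, std::size_t max)
{
    return std::wcstombs(out, in, max);
}
```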
1,925,523
1,925,544
Is There an Archiving Library Without Dependencies? (C/C++)
Hey, I'm looking for an archiving library that functions like GNU's tar, but without any dependencies. I need some sort of archiving format to manage resources in my game engine and am still iffy about rolling my own.
Take a look at the BSD libarchive
1,925,876
1,926,077
What kind of applications should be rewritten to use OpenCL?
Mac OS X 10.6 comes with OpenCL, but how many applications could have better performance if they were rewritten to use OpenCL? What kinds of applications should be rewritten to use OpenCL?
My company is using OpenCL for scientific calculations. While I'm not part of the development team that is using it, I can tell you a few things they're using it to do: large-scale data processing of images, image reconstruction, and massive parallelization of previously written scientific code. Basically, OpenCL is rather cutting edge. Unless you want to or have to deal with it, it's best left for those who need it. I'm not trying to discourage you, but it is a niche product designed for a niche market. You'd be better served studying other areas of programming for potential employment purposes.
1,925,887
1,925,900
Splitting up lines into ints
I have a file that I read from; it contains a bunch of lines, each with a different number of integers. I'm having trouble splitting it up into a vector of a vector of ints. This is my current code:

```cpp
std::vector<int> read_line()
{
    std::vector<int> ints;
    int extract_int;
    while ((const char*)std::cin.peek() != "\n" && std::cin.peek() != -1)
    {
        std::cin >> extract_int;
        ints.push_back(extract_int);
    }
    return ints;
}

std::vector<std::vector<int> > read_lines()
{
    freopen("D:\\test.txt", "r", stdin);
    freopen("D:\\test2.txt", "w", stdout);
    std::vector<std::vector<int> > lines;
    while (!std::cin.eof())
    {
        lines.push_back(read_line());
    }
    return lines;
}
```

The problem is that all of the ints are being read as a single line. What am I doing wrong?
The problem is your (const char *)std::cin.peek() != "\n" cast. Casts are evil; try to avoid using them. The following code works:

```cpp
std::vector<int> read_line()
{
    std::vector<int> ints;
    int extract_int;
    while (std::cin.peek() != '\n' && std::cin.peek() != -1)
    {
        std::cin >> extract_int;
        ints.push_back(extract_int);
    }
    std::cin.ignore(); // You need this to discard the '\n'
    return ints;
}
```
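An alternative sketch that sidesteps peek() entirely: read each line with std::getline, then pull the integers out of it with a std::istringstream:

```cpp
#include <sstream>
#include <string>
#include <vector>
#include <iterator>

// Parse all whitespace-separated integers out of one line of text.
std::vector<int> parse_line(const std::string& line)
{
    std::istringstream in(line);
    return std::vector<int>(std::istream_iterator<int>(in),
                            std::istream_iterator<int>());
}
```

The outer loop then becomes a simple while (std::getline(std::cin, line)) that pushes parse_line(line) into the vector of vectors, so line boundaries are handled by getline rather than by inspecting the stream character by character.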
1,926,067
1,930,132
What alignment guarantees can I expect for arrays in a struct?
I've got a lightweight templated class that contains a couple of member objects that are very rarely used, and so I'd like to avoid calling their constructors and destructors except in the rare cases when I actually use them. To do that, I "declare" them in my class like this:

```cpp
template <class K, class V> class MyClass
{
public:
    MyClass() : wereConstructorsCalled(false) { /* empty */ }
    ~MyClass() { if (wereConstructorsCalled) MyCallPlacementDestructorsFunc(); }

    [...]

private:
    bool wereConstructorsCalled;
    mutable char keyBuf[sizeof(K)];
    mutable char valBuf[sizeof(V)];
};
```

... and then I use placement new and placement delete to set up and tear down the objects only when I actually need to do so. Reading the C++ FAQ, it said that when using placement new I need to be careful that the placement is properly aligned, or I will run into trouble. My question is: will the keyBuf and valBuf arrays be properly aligned in all cases, or is there some extra step I need to take to make sure they are aligned properly? (If so, a non-platform-dependent step would be preferable.)
There's no guarantee that you'll get the appropriate alignment. Arrays are in general only guaranteed to be aligned for the member type. A char array is aligned for storage of char. The one exception is that char and unsigned char arrays allocated with new are given maximum alignment, so that you can store arbitrary types into them. But this guarantee doesn't apply in your case as you're avoiding heap allocation. TR1 and C++0x add some very helpful types though: std::alignment_of and std::aligned_storage together give you a portable (and functioning) answer. std::alignment_of<T>::value gives you the alignment required for a type T. std::aligned_storage<A, S>::type gives you a POD type with alignment A and size S. That means that you can safely write your object into a variable of type std::aligned_storage<A, S>::type. (In TR1, the namespace is std::tr1, rather than just std)
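A sketch of the C++0x/C++11 spelling described above (with TR1, substitute the std::tr1 namespace; the struct K is just an example payload type):

```cpp
#include <type_traits>
#include <new>

struct K { double d; int i; };   // example payload with nontrivial alignment

// Raw storage with the size and alignment of K, safe for placement new.
typedef std::aligned_storage<sizeof(K), std::alignment_of<K>::value>::type KBuf;

KBuf buf;

// Construct a K inside correctly aligned raw storage.
K* construct_k(void* where)
{
    return new (where) K();
}
```

In the question's class, keyBuf and valBuf would simply be declared with this aligned type instead of plain char arrays, and the placement new/destructor calls stay the same.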