2,102,921
2,102,950
Strange behaviour of ldr [pc, #value]
I was debugging some C++ code (WinCE 6 on an ARM platform), and I found some strange behaviour:

```
4277220C  mov  r3, #0x93, 30
42772210  str  r3, [sp]
42772214  ldr  r3, [pc, #0x69C]
42772218  ldr  r2, [pc, #0x694]
4277221C  mov  r1, #0
42772220  ldr  r0, [pc, #0x688]
```

The instruction at 42772214, `ldr r3, [pc, #0x69C]`, is used to get some constant from the .data section, at least I think so. What is strange is that, according to the code, r3 should be filled with memory from address pc = 0x42772214 + 0x69C = 0x427728B0, but according to the memory contents it is loaded from 0x427728B8 (8 bytes further); it happens for other `ldr` usages too. Is it a fault of the debugger or of my understanding of ldr/pc?

Another issue I don't get: why is access to the .data section relative to the executed code? I find that a little strange.

And one more issue: I cannot find the syntax of the first `mov` instruction (could anyone point me to an opcode specification for the Thumb (1C2)?).

Sorry for the lay description, but I'm just getting familiar with assembly.
This is correct. When `pc` is read there is an 8-byte offset in ARM mode and a 4-byte offset in Thumb mode. From the ARM ARM:

> When an instruction reads the PC, the value read depends on which instruction set it comes from:
>
> - For an ARM instruction, the value read is the address of the instruction plus 8 bytes. Bits [1:0] of this value are always zero, because ARM instructions are always word-aligned.
> - For a Thumb instruction, the value read is the address of the instruction plus 4 bytes. Bit [0] of this value is always zero, because Thumb instructions are always halfword-aligned.
>
> This way of reading the PC is primarily used for quick, position-independent addressing of nearby instructions and data, including position-independent branching within a program.

There are two reasons for pc-relative addressing:

1. Position-independent code, which is your case.
2. Getting some complicated constant from nearby which cannot be written in one simple instruction; e.g. `mov r3, #0x12345678` is impossible to encode in one instruction, so the compiler may put this constant at the end of the function and use e.g. `ldr r3, [pc, #0x50]` to load it instead.

I don't know what `mov r3, #0x93, 30` means. Probably it is `mov r3, #0x93, rol 30` (which gives 0xC0000024)?
2,102,942
2,103,188
lua_pop vs lua_remove
Currently I'm building my own script VM manager class in C++. I have no problems with any of the Lua & Lua C or C++ stuff, but the one section that confuses me is when to use lua_pop and when to use lua_remove. From what I understand, lua_pop removes multiple values (on the stack) from the top down, eliminating data that is no longer needed, whereas lua_remove removes a single value from any arbitrary, valid stack index (basically what the Lua manual says for both :P). But I've noticed certain segments of code scattered around the web that intermix lua_pop and lua_remove, and when I tried to use lua_pop instead of lua_remove where the call just removed the top stack element, I ran into problems. So would it be possible to get a definitive example or explanation of how and when to use these two functions correctly, as well as their relative speed & efficiency? I assume lua_pop is faster than lua_remove, which is one of the reasons I want to use lua_pop as much as possible, apart from writing 'correct code'.
A typical example of `lua_remove` is accessing tables. Snippets from the Lua reference manual:

```cpp
lua_getfield(L, LUA_GLOBALSINDEX, "t");   /* table to be indexed */
lua_getfield(L, -1, "x");                 /* push result of t.x (2nd arg) */
lua_remove(L, -2);                        /* remove 't' from the stack */
```

`lua_getfield` pushes `t[x]` onto the stack. You no longer need `t`, so you remove it.

A typical example of `lua_pop` is iterating over a table:

```cpp
lua_pushnil(L);  /* first key */
while (lua_next(L, t) != 0) {
    /* uses 'key' (at index -2) and 'value' (at index -1) */
    /* do whatever you like with the key and the value */
    lua_pop(L, 1);
}
```

After you are done with a single iteration, you need to have the key on top of the stack so that `lua_next` knows which key/value pair comes next. So you remove the value, which is on top of the stack.

These are not definitive examples. In Lua you do whatever works for you. Always keep in mind what's on your `lua_State` stack, and you'll be fine.
2,103,349
2,103,377
Next value in std::map
I have a `std::map<std::string, float>` so I can do quick lookups of float values by name, but I have a situation now where I need to find the next float value in the collection as if it were sorted numerically. I cannot just use an iterator, as far as I know, since the map is sorted by key. So if my map contains:

```cpp
std::pair<"One", 1.0>
std::pair<"Two", 2.0>
std::pair<"Three", 3.0>
std::pair<"Four", 4.0>
std::pair<"Five", 5.0>
```

If I want to find the next value after "Three", what would be the most efficient way to do this? The expected output in this case would be 4.0. I could iterate over the whole map, keeping track of the maximum and of the current next value. Is this the best way?
You probably want to use a Boost::bimap instead of a normal map -- it provides this capability quite directly.
2,103,411
2,103,424
Is creating an empty class purely to distinguish it from another class good practice?
I have a class CardStack. I have several classes that inherit from CardStack, e.g. Cascade, Deck, Foundation etc. Foundation doesn't need to add any functionality to CardStack, but for display purposes my app needs to know which of the CardStacks are actually Foundations. Incidentally, I have no function CardStack.Display() (I'm using a model-view-controller pattern where the View object simply queries the Model to find out what type of objects it's dealing with). It seems OK to me, but is there any reason not to do this?

```cpp
class Foundation : public CardStack
{
};

class Model
{
    Cascade cascade[10];
    Foundation foundations[10];
    ...
};
```
Nothing wrong with this. Do it all the time. In the future, there may be a difference in structure, behavior or implementation. For now, they happen to share a lot of common features.
2,103,484
2,103,870
c++ container for checking whether ordered data is in a collection
I have data that is a set of ordered ints:

```
[0] = 12345
[1] = 12346
[2] = 12454
```

etc. I need to check whether a value is in the collection in C++; which container will have the lowest retrieval complexity? In this case, the data does not grow after initialization. In C# I would use a Dictionary; in C++ I could use either a hash_map or a set. If the data were unordered, I would use Boost's unordered collections. However, do I have better options since the data is ordered? Thanks.

EDIT: The size of the collection is a couple of hundred items.
Just to add a bit of detail to what has already been said.

Sorted containers

The immutability is extremely important here: `std::map` and `std::set` are usually implemented in terms of binary trees (red-black trees in my few versions of the STL) because of the requirements on insertion, retrieval and deletion operations (and notably because of the iterator-invalidation requirements).

However, because of the immutability, as you suspected, there are other candidates, not the least of them being array-like containers. They have a few advantages here:

- minimal overhead (in terms of memory)
- contiguity of memory, and thus cache locality

Several "random access containers" are available:

- Boost.Array
- `std::vector`
- `std::deque`

So the only thing you actually need to do can be broken down into two steps:

1. Push all your values into the container of your choice, then (after all have been inserted) use `std::sort` on it.
2. Search for the value using `std::binary_search`, which has O(log n) complexity.

Because of cache locality, the search will in fact be faster even though the asymptotic behaviour is similar. If you don't want to reinvent the wheel, you can also check Alexandrescu's `AssocVector`. Alexandrescu basically ported the `std::set` and `std::map` interfaces over a `std::vector`:

- because it's faster for small datasets
- because it can be faster for frozen datasets

Unsorted containers

Actually, if you really don't care about order and your collection is kind of big, then an `unordered_set` will be faster, especially because integers are so trivial to hash: `size_t hash_method(int i) { return i; }`. This could work very well... unless you're faced with a collection that somehow causes a lot of collisions, because then unordered containers will search over the "collisions" list of a given hash in linear time.

Conclusion

Just try the sorted `std::vector` approach and the `boost::unordered_set` approach with a "real" dataset (and all optimizations on) and pick whichever gives you the best result. Unfortunately we can't really help more than that, because it heavily depends on the size of the dataset and the distribution of its elements.
2,103,728
2,103,850
Selecting An Embedded Language
I'm making an application that analyses one or more series of data using several different algorithms (agents). I came up with the idea that each of these agents could be implemented as a separate Python script which I run from my app using either the Python C API or Boost.Python. I'm a little worried about runtime overhead, TBH, as I'm doing some pretty heavy-duty data processing and I don't want to have to wait several minutes for each simulation. I will typically be making hundreds of thousands, if not millions, of iterations in which I invoke the external "agents"; am I better off just hardcoding everything in the app, or will the performance drop be tolerable? Also, are there any other interpreted languages I can use besides Python?
Yes, tons. Lua and Python seem to be the most popular.

Embedding Lua:

- http://www.lua.org/pil/24.html
- https://stackoverflow.com/questions/38338/why-is-lua-considered-a-game-language
- Lua as a general-purpose scripting language?

Embedding Python:

- http://docs.python.org/extending/embedding.html

Embedding Tcl:

- http://wiki.tcl.tk/3474
- http://wiki.tcl.tk/2265

Embedding Ruby:

- How to embed Ruby in C++?

Embedding Perl:

- http://perldoc.perl.org/perlembed.html

Embedding JavaScript:

- http://spiderape.sourceforge.net/

There are dozens of JavaScript engines around; this is just an example. Some of them are also frighteningly quick.
2,103,833
2,105,527
how to set base index in ublas matrix?
I have searched the web but could not find an answer: how do I set the base index in a uBLAS matrix, so that indexes start from values other than zero? For example:

```cpp
A(-3:1)        // MATLAB/Fortran equivalent
A.reindex(-3); // Boost.MultiArray equivalent
```

Thanks.
Your search appears to be correct; uBLAS appears not to have such a function.
2,103,873
2,103,912
C++: casting to void* and back
--- Edit: now the whole source ---

When I debug it, at the end "get" and "value" have different values! Probably I convert to void* and back to User the wrong way?

```cpp
#include <db_cxx.h>
#include <stdio.h>

struct User {
    User(){}
    int name;
    int town;
    User(int a){}
    inline int get_index(int a) { return town; } // for other stuff
};

int main() {
    try {
        DbEnv* env = new DbEnv(NULL);
        env->open("./", DB_CREATE | DB_INIT_MPOOL | DB_THREAD | DB_INIT_LOCK |
                        DB_INIT_TXN | DB_RECOVER | DB_INIT_LOG, 0);
        Db* datab = new Db(env, 0);
        datab->open(NULL, "db.dbf", NULL, DB_BTREE, DB_CREATE | DB_AUTO_COMMIT, 0);

        Dbt key, value, get;
        char a[10] = "bbaaccd";
        User u;
        u.name = 1;
        u.town = 34;
        key.set_data(a);
        key.set_size(strlen(a) + 1);
        value.set_data((void*)&u);
        value.set_size(sizeof(u));
        get.set_flags(DB_DBT_MALLOC);

        DbTxn* txn;
        env->txn_begin(NULL, &txn, 0);
        datab->put(txn, &key, &value, 0);
        datab->get(txn, &key, &get, 0);
        txn->commit(0);

        User g;
        g = *((User*)&get);
        printf("%d", g.town);
        getchar();
        return 0;
    } catch (DbException& e) {
        printf("%s", e.what());
        getchar();
    }
}
```

Solution: create a kind of "serializer" that would convert all PODs into void* and then unite these pieces.

PS: Or I'd rewrite User into a POD type and everything will be all right, I hope.

Add: It's strange, but... I cast a definitely non-POD object to void* and back (it has a std::string inside) and it's all right (without sending it to the DB and back). How could that be? And after I cast and send 'through' the DB a definitely POD object (no extra methods, all members are POD, it's a simple struct { int a; int b; ... }), I get back a dirtied one. What's wrong with my approach?

Add, about a week after the first 'add': Damn... I compiled it once, just to have a look at which kind of dirt it returns, and oh! it's okay!... I can't! ... AAh!.. Lord... A reasonable question (in 99.999 percent of situations the right answer is 'mine', but... here...): whose fault is this? Mine or VS's?
Unless User is a POD, this is undefined behaviour in C++. Edit: Looking at db_cxx.h, aren't you supposed to call get_doff(), get_dlen(), and get_data() or something on the Dbt instead of just casting (and assigning) it to the user type?
2,104,208
2,104,243
Is it possible to use boost::foreach with std::map?
I find boost::foreach very useful, as it saves me a lot of writing. For example, let's say I want to print all the elements in a list:

```cpp
std::list<int> numbers = { 1, 2, 3, 4 };
for (std::list<int>::iterator i = numbers.begin(); i != numbers.end(); ++i)
    cout << *i << " ";
```

boost::foreach makes the code above much simpler:

```cpp
std::list<int> numbers = { 1, 2, 3, 4 };
BOOST_FOREACH (int i, numbers)
    cout << i << " ";
```

Much better! However, I never figured out a way (if it's possible at all) to use it with std::map. The documentation only has examples with types such as vector or string.
You need to use:

```cpp
typedef std::map<int, int> map_type;
map_type map = /* ... */;

BOOST_FOREACH(const map_type::value_type& myPair, map) {
    // ...
}
```

The reason is that the macro expects two parameters. When you try to write the pair type inline, you introduce a second comma, giving the macro three parameters instead of two. The preprocessor doesn't respect any C++ constructs; it only knows text. So when you say `BOOST_FOREACH(pair<int, int>, map)`, the preprocessor sees these three arguments for the macro:

1. `pair<int`
2. `int>`
3. `map`

which is wrong. This is mentioned in the BOOST_FOREACH documentation.
2,104,459
2,104,619
Is it possible to replace the global "operator new()" everywhere?
I would like to replace the global operator new() and operator delete() (along with all of their variants) in order to do some memory management tricks. I would like all code in my application to use the custom operators (including code in my own DLLs as well as third-party DLLs). I have read things to the effect that the linker will choose the first definition it sees when linking (e.g., if the library that contains your custom operator new() is linked first, it will "beat" the link with the CRT). Is there some way to guarantee that this will happen? What are the rules for this, since this really is a multiply-defined symbol (e.g., void* operator new(size_t size) has two definitions in the global namespace)? What about third-party DLLs that may be statically linked with the CRT? Even if they are dynamically linked with the CRT, is there some way I can get them to link with my operator new()?
The C++ standard explicitly allows you to write your own global operator new and delete (and the array variants). The linker has to make it work, though exactly how is up to the implementors (e.g., things like weak externals can be helpful for supplying something if and only if one isn't already present). As far as DLLs go, it's going to be tricky: a DLL that statically links the CRT clearly won't use your code without a lot of extra work. Static linking means it already has a copy of the library code in the DLL, and any code in the DLL that used it has the address of that code already encoded. To get around that, you'd have to figure out where the code for new is in the DLL and dynamically patch all the code that calls it to call yours instead. If the DLL links to the standard library dynamically, it gets only marginally easier: the import table still encodes the name of the DLL and of the function in that DLL that provides what it needs. That can be gotten around (e.g. with something like Microsoft's Detours library), but it's somewhat non-trivial (though certainly easier than when the DLL links the standard library statically).
2,104,471
2,104,479
C++ Class using header and implementation files
I've put together a simple C++ "Hello World" program to practice; unfortunately, upon compilation I get a few errors:

```
expected ')' before fName
error: prototype for 'HelloWorld::HelloWorld(std::string, std::string)' does not match any in class 'HelloWorld'
```

Below is my code; can anyone help me understand what I'm missing/overlooking? Thanks.

Header:

```cpp
#ifndef HELLOWORLD_H_
#define HELLOWORLD_H_
#include <string>

class HelloWorld
{
  public:
    HelloWorld();
    HelloWorld(string fName, string lName);
    ~HelloWorld();
};

#endif
```

Implementation:

```cpp
#include <iostream>
#include <string>
#include "HelloWorld.h"

using namespace std;

HelloWorld::HelloWorld()
{
    cout << "Hello, anonymous!";
}

HelloWorld::HelloWorld(string fName, string lName)
{
    cout << "Hello, " << fName << ' ' << lName << endl;
}

HelloWorld::~HelloWorld()
{
    cout << "Goodbye..." << endl;
}
```
You need to change your header file to reference `std::string` instead of `string`, because the type is defined inside the `std` namespace:

```cpp
HelloWorld(std::string fName, std::string lName);
```

It works in your .cpp file because you specifically import that namespace. The solution, however, is not to import the namespace in your header file; doing so is generally a bad idea in C++, because it forces the import on every file that includes the header.
2,104,523
2,105,513
c++ rapidxml node_iterator example?
I just started using RapidXml since it was recommended to me. Right now, to iterate over multiple siblings I do this:

```cpp
// get the first Texture node
xml_node<>* texNode = rootNode->first_node("Texture");
if (texNode != 0) {
    string test = texNode->first_attribute("path")->value();
    cout << test << endl;
}

// get all its siblings
while (texNode->next_sibling() != 0) {
    string test = texNode->first_attribute("path")->value();
    cout << test << endl;
    texNode = texNode->next_sibling();
}
```

as a basic test, and it works fine. Anyway, I came across node_iterator, which seems to be an extra iterator class to do this for me. However, I could not find any example of how to use it, so I was wondering if someone could show me. :) Thanks!
The documentation that I could find documents no node_iterator type. I can't even find the word iterator on that page except in reference to output iterators, which you clearly don't want. It could be that it's an internal API, or one under development, so you're probably best not to use it right now.
2,104,598
2,104,702
_CrtMem* and the debug heap
When I use the following code, it detects a memory leak. How can I make it not?

```cpp
_CrtMemState startState;
_CrtMemState endState;
_CrtMemState temp;

_CrtMemCheckpoint(&startState);
const char* foo = "I'm not leaking memory! Stop saying I am!";
_CrtMemCheckpoint(&endState);

_CrtMemDifference(&temp, &startState, &endState); // Returns true. Wtf?
```
I cut and pasted your code and tested it on my machine under VS2008, and _CrtMemDifference returns 0... As the oft-heard adage goes: "Works on my machine" ;)

Edit: Have you got multiple threads running? Is it possible another thread allocated something between the two _CrtMemCheckpoint calls?
2,104,978
2,105,302
Why might trigger a breakpoint when I return TRUE from my OnCopyData?
I'm using Visual Studio to debug an ATL application. When I step over `return TRUE` in this code, the error occurs:

```cpp
BOOL CMainFrame::OnCopyData(CWnd* pWnd, COPYDATASTRUCT* pCopyDataStruct)
{
    // Code snipped from here - maybe this causes stack/heap corruption?

    // I have a breakpoint here; if I step over (F10), the AFX trace message
    // is shown (as below)
    return TRUE;
}
```

This is the message box that's shown:

> Windows has triggered a breakpoint in foobar.exe. This may be due to a corruption of the heap, which indicates a bug in foobar.exe or any of the DLLs it has loaded. This may also be due to the user pressing F12 while phonejournal.exe has focus. The output window may have more diagnostic information.

The message is a little vague, and I'm wondering what tools I can use to get more information. The debugger breaks on the call to AtlTraceVU in atltrace.h:

```cpp
inline void __cdecl CTrace::TraceV(const char *pszFileName, int nLine,
    DWORD_PTR dwCategory, UINT nLevel, LPCWSTR pszFmt, va_list args) const
{
    AtlTraceVU(m_dwModule, pszFileName, nLine, dwCategory, nLevel, pszFmt, args);
}
```
Microsoft's Application Verifier may help with this. If the application has heap corruption, this utility can cause the exception to be raised at the point where the error occurs. It can use a lot of memory when running, though, since it produces big changes in memory-allocation patterns. The following obviously flawed code gives a simple demonstration:

```cpp
char *pc = malloc( 4 );
memcpy( pc, "abcdabcd", 9 );
free( pc );
```

When I ran this without Application Verifier, it ran to completion with no obvious error. With Application Verifier, though, it caused an exception (0x80000003). Application Verifier forced the allocation to be at the end of a segment (e.g., 0x1e9eff8); the memcpy resulted in a write into the subsequent segment, which raised the exception during the memcpy call. If the overwrite were smaller in this simple example, the break wouldn't occur until the free call, but that is still better than no exception. It's a pretty cool utility.
2,105,077
2,105,116
Initializing static struct tm in a class
I would like to use the tm struct as a static variable in a class. I spent a whole day reading and trying, but it still doesn't work. :( I would appreciate it if someone could point out what I'm doing wrong. In my class, under public, I have declared it as:

```cpp
static struct tm *dataTime;
```

In main.cpp, I have tried to define and initialize it with the system time temporarily, to test (the actual time is to be entered at runtime):

```cpp
time_t rawTime;
time ( &rawTime );
tm Indice::dataTime = localtime(&rawTime);
```

but it seems I can't use time() outside of functions:

```
main.cpp:28: error: expected constructor, destructor, or type conversion before '(' token
```

How do I initialize values in a static tm of a class?
You can wrap the above in a function:

```cpp
tm initTm() {
    time_t rawTime;
    ::time(&rawTime);
    return *::localtime(&rawTime);
}

tm Indice::dataTime = initTm();
```

To avoid possible linking problems, make the function static or put it in an unnamed namespace.
2,105,272
2,105,293
Extract Digits From An Integer Without sprintf() Or Modulo
The requirements here are somewhat restrictive because of the machinery this will eventually be implemented on (a GPU). I have an unsigned integer, and I am trying to extract each individual digit. If I were doing this in C++ on normal hardware and performance weren't a major issue, I might do it like this (don't hate on me for this code, it's just a sample to illustrate the method):

```cpp
#define _CRT_SECURE_NO_WARNINGS
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <string>
#include <iostream>
#include <iterator>
#include <algorithm>
using namespace std;

int main()
{
    int someVal = 1234;
    char stringVal[256] = {0};
    sprintf(stringVal, "%016d", someVal);

    int digits[16] = {0};
    for( int i = 0; i < strlen(stringVal); ++i )
    {
        digits[i] = stringVal[i] - '0';
    }

    cout << "Integer Value = " << someVal << endl;
    cout << "Extracted Digits = ";
    copy( &digits[0], &digits[16], ostream_iterator<int>(cout, "-") );
    cout << endl;

    return 0;
}
```

I'm trying to find a method to extract these digits with the following restrictions:

- Don't convert the integer to a string.
- Don't use the modulus operator (floating-point division is fine).
- The value in question is a 32-bit unsigned integer.

I'm looking for an algorithm, not necessarily specific code, but specific code would be great. The languages I'm most familiar with that translate well to my target hardware are C++, C and assembler. Any ideas?

EDIT: Here's an update with the algorithm I implemented based on the comments & links below. Thanks all.

```cpp
#define _CRT_SECURE_NO_WARNINGS
#include <cstdlib>
#include <string>
#include <iostream>
#include <iterator>
#include <algorithm>
#include <cmath>
using namespace std;

int main()
{
    unsigned someVal = 12345678;
    static const unsigned numDigits = 10;
    unsigned digits[numDigits] = {0};

    for( unsigned i = 0, temp = someVal; i < numDigits; ++i, temp /= 10 )
    {
        digits[numDigits-i-1] = temp - 10 * (temp/10) /*temp % 10*/;
    }

    cout << "Integer Value = " << someVal << endl;
    cout << "Extracted Digits = ";
    copy( &digits[0], &digits[numDigits], ostream_iterator<int>(cout, "-") );
    cout << endl;

    return 0;
}
```
Remember that the modulo operator can be implemented as:

```
mod(a, n) = a - n * floor(a / n)
```

Hence, you can use your favourite modulo-based algorithm. You can simulate floor itself by typecasting.
2,105,411
2,105,482
Windows: How do I calculate the time it takes a c/c++ application to run?
I am doing a performance-comparison test. I want to record the run time of my C++ test application and compare it under different circumstances. The two cases to be compared are: 1) a file system driver is installed and active, and 2) that same file system driver is not installed and active. A series of tests will be conducted on several operating systems, and the two runs described above will be done for each operating system and its setup. Results will only be compared between the two cases for a given operating system and setup. I understand that when running a C/C++ application within an operating system that is not a real-time system, there is no way to get the true time it took the application to run. I don't think this is a big concern as long as the test application runs for a fairly long period, making CPU scheduling, priorities, switching, etc. negligible.

Edited: For the Windows platform only.

How can I generate some accurate application run-time results within my test application?
You can put this at the start of your int main() method:

```cpp
#if _DEBUG
time_t start = time(NULL);
#endif
```

and finish with this:

```cpp
#if _DEBUG
time_t end = time(NULL);
#endif
```

Naturally, you'll have to write the difference (end - start) either to a log or to cout.
2,105,612
2,105,677
How to code Const and Mutable overloads?
I seem to have this pattern occurring pretty often in my code: two functions performing the same task apart from the constness of their parameters/returns.

```cpp
int& myClass::getData()
{
    return data;
}

// called for const objects
const int& myClass::getData() const
{
    return data;
}
```

This offends my sense of DRY. It's not a problem for a one-liner, but as getData() gets bigger, there's obvious duplication. I know WHY I need both methods, but feel there should be a better way to implement it. Is there a template trick that can help, or should I have one method call the other, casting the constness back and forth as required?

ADDED: As a more real-world example, here's a sample of a typical STL vector::at() implementation:

```cpp
const_reference at(size_type _Off) const
{   // subscript nonmutable sequence with checking
    if (size() <= _Off)
        _Xran();
    return (*(begin() + _Off));
}

reference at(size_type _Off)
{   // subscript mutable sequence with checking
    if (size() <= _Off)
        _Xran();
    return (*(begin() + _Off));
}
```
Use the following trick (which I originally got from Scott Meyers' book Effective C++):

```cpp
int& myClass::getData()
{
    // This is safe because we know from out here
    // that the return value isn't really const
    return const_cast<int&>(const_cast<const myClass&>(*this).getData());
}

const int& myClass::getData() const
{
    return data;
}
```

Obviously for a short function like this, you may find it easier just to duplicate the code, but this idiom is useful when you have a longer function (like `vector<T>::at`) or one that is subject to lots of changes.
2,105,716
2,105,796
Header Guards and LNK4006
I have a character array defined in a header:

```cpp
// header.h
const char* temp[] = {"JeffSter"};
```

The header is #define-guarded and has a #pragma once at the top. If this header is included in multiple places, I get an LNK4006 - `char const * * temp already defined in blahblah.obj`. So, I have a couple of questions about this:

1. Why does this happen if I have the guards in place? I thought that they prevented the header from being read in after the first access.
2. Why do the numerous enums in this header not also give the LNK4006 warnings?
3. If I add static before the declaration, I don't get the warning. What are the implications of doing it this way?
4. Is there a better way to do this that avoids the error but lets me declare the array in the header? I would really hate to have a .cpp file just for an array definition.
> Why does this happen if I have the guards in place? I thought that they prevented the header from being read in after the first access.

Include guards make sure that a header is included only once in one file (translation unit). When multiple files include the header, you want the header to be included in each of those files. By defining, as opposed to declaring, a variable with external linkage (a global variable) in your header file, you can only include the header in one source file: if you include it in multiple source files, there will be multiple definitions of the variable, which is not allowed in C++. So, as you have found out, it is a bad idea to define variables in a header file for precisely this reason.

> Why do the numerous enums in this header not also give the LNK4006 warnings?

Because they don't define "global variables"; they are only declarations of types, etc. They don't reserve any storage.

> If I add static before the signature, I don't get the warning. What are the implications of doing it this way?

When you make a variable static, it gets internal linkage: the object is not visible outside of the translation unit (file) in which it is defined. So, in simple terms, if you have:

```cpp
static int i;
```

in your header, each source file in which you include the header gets a separate int variable `i`, invisible outside of that source file.

> Is there a better way to do this that avoids the error, but lets me declare the array in the header?

If you want the array to be one object visible from all your C++ files, you should put:

```cpp
extern int array[SIZE];
```

in your header file, and then include the header file in all the C++ source files that need the variable `array`. In one of the source (.cpp) files, you need to define `array`:

```cpp
int array[SIZE];
```

You should include the header in that source file as well, to allow the compiler to catch mistakes due to a difference between the header and the source file. Basically, `extern` tells the compiler "array is defined somewhere, has the type int, and size SIZE". Then you actually define `array` only once. At link stage, everything resolves nicely.
2,105,816
2,105,874
Trying to use/include/compile 3rd party library, libmagic. C/C++ filetype detection
After looking for a way to detect the file type of a file stream, I found that the Unix file command uses libmagic, and I'm trying to make use of the library myself, but I can't get it to work. I've rarely integrated 3rd-party code into my own, so that's probably a big part of my problem as well.

Why: I'm doing this because I have a portable GUI image-viewing app that will need to detect archive file types (rar, zip, more?) from a given filename, and then the image file types inside. I'm hoping that I can use libmagic on Windows and Linux (and Mac), so if this isn't the case, stop me now, because I'll need to find something else.

Attempt: I found somebody doing something similar, but I can't follow what they're doing, and I've no idea how to compile/run anything at all to start messing around. My first instinct was to do something like:

```cpp
// fileTypeTest.cpp, placed in file-5.03/src/ (source from link above)
#include <stdio.h>
#include "magic.h"

int main()
{
    magic_t myt = magic_open(MAGIC_CONTINUE|MAGIC_ERROR/*|MAGIC_DEBUG*/|MAGIC_MIME);
    magic_load(myt, NULL);
    printf("magic output: '%s'\n", magic_file(myt, __FILE__));
    magic_close(myt);
    return 0;
}
```

then compile with something like:

```
$ gcc magic.c -o magic.o
$ g++ fileTypeTest.cpp -o fileTypeTest magic.o
```

which (obviously?) doesn't work. I don't even know where to start looking, what questions to ask, or whether this is the right direction to go to solve my original problem in the first place.

Edit: Now I have:

```cpp
#include <stdio.h>
#include <magic.h>

int main(int argc, char* argv[])
{
    if (argc != 2) {
        printf("bad arguments");
        return 0;
    }
    magic_t myt = magic_open(MAGIC_CONTINUE|MAGIC_ERROR/*|MAGIC_DEBUG*/|MAGIC_MIME);
    magic_load(myt, NULL);
    printf("magic output: '%s'\n", magic_file(myt, argv[1]));
    magic_close(myt);
    return 0;
}
```

compiling with:

```
$ g++ -L/usr/lib -libmagic fileTypeTest.cpp -o fileTypeTest
```

works. I had to go to Synaptic and install libmagic-dev, though. I'll have to test whether I can just copy /usr/lib/libmagic.a into my source directory when compiling my app under Windows's MinGW, but that'll be for another question later, I suppose.
__FILE__ is a predefined preprocessor macro used for debugging/logging purposes. Consider this as an example: // This file is called test.c char *p = NULL; if (!(p = malloc((1 * sizeof(char) + 1)))){ printf("Error in file: %s @ line %d\n\tMalloc failed\n", __FILE__, __LINE__); exit(-1); } If the call to malloc failed you will see output like this: Error in file: test.c @ line 23 Malloc failed Notice how the message picks up the file name and line number from the original source code. I think your code should be something like this: // fileTypeTest.cpp, placed in file-5.03/src/ (source from link above) #include <stdio.h> #include "magic.h" int main(int argc, char **argv) { if (argc > 1){ magic_t myt = magic_open(MAGIC_CONTINUE|MAGIC_ERROR/*|MAGIC_DEBUG*/|MAGIC_MIME); magic_load(myt,NULL); printf("magic output: '%s'\n",magic_file(myt,argv[1])); magic_close(myt); } return 0; } The code above checks if there is a parameter passed into this program, which would be a filename, i.e. argv[0] points to the executable name (the compiled binary) and argv[1] points to the array of chars (a string) indicating the filename in question. To compile it: g++ -L/usr/lib -lmagic fileTestType.cpp -o fileTestType (Edit: thanks Alok for pointing out the error in my original command, which wrongly passed the path to the .so file to -L.) If you are not sure where libmagic resides, look in /usr/local/lib and /usr/local/include - this depends on your installation. See here for the list of predefined macros. Hope this helps, Best regards, Tom.
2,105,901
2,105,906
How to fix 'expected primary-expression before' error in C++ template code?
Here's yet another VC9 vs. GCC 4.2 compile error problem. The following code compiles fine with VC9 (Microsoft Visual C++ 2008 SP1) but not with GCC 4.2 on Mac: struct C { template< typename T > static bool big() { return sizeof( T ) > 8; } }; template< typename X > struct UseBig { static bool test() { return X::big< char >(); // ERROR: expected primary-expression } // before 'char' }; int main() { C::big< char >(); UseBig< C >::test(); return 0; } Any ideas how I can fix this?
That should be return X::template big< char >(); Dependent names from templates are taken to not be types unless you specify that they are via typename and assumed to not be templates unless specified via template.
2,106,073
2,107,577
How would one setup autotools to build a project for separate architectures, concurrently, on multiple systems?
I've got a C++ project which uses automake and autoconf. I'm new to both of these. My home directory is network mounted -- the same on every server we have -- and I want to compile and run the project (and its executable) concurrently on separate machines. Our servers are frequently different architectures. My desktop is 32-bit, but the server is 64-bit, etc. What options do I use in configure.ac and Makefile.am to compile the object files in separate directories named for the machine architectures? It's relatively simple to do this in a regular Makefile, but I don't know how to set autotools.
If you don't do anything "wrong" or unusual in your configure.ac and Makefile.am setup, this is supported automatically: mkdir /some/where/build cd /some/where/build /else/where/source/configure --options... make make install Basically, you create the build directory anywhere you want (in your case probably on a non-network mount), and call configure from there. This will then build the code in the build directory you have created.
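For reference, a minimal pair of build files that supports such out-of-tree builds (the project and file names here are placeholders); run autoreconf --install once in the source tree, then configure from each per-architecture build directory:

```
# --- configure.ac (minimal sketch) ---
AC_INIT([myproj], [0.1])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CXX
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

# --- Makefile.am ---
bin_PROGRAMS = myapp
myapp_SOURCES = main.cpp
```

Since the object files land in whichever build directory you configure from, naming those directories after the architectures (build-x86, build-x86_64, ...) gives you concurrent builds from one source tree for free.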
2,106,200
2,106,683
Starting wxWidgets C++ need a gentle nudge
So I've been learning C# for like a year now (I'm 20 years old) and I'm getting pretty confident with it. I've also been meddling with C++ every now and again. For example just recently I've been following the Nehe OpenGL tutorials for C++ and I find it a fun way of learning. I want to start looking at creating cross platform GUI software after I stumbled across this library called FLTK (fluid something rather). After finally and painfully getting it to work I found it refreshing to know that there are solutions for GUI creation in C++, however I think FLTK looked pretty old. So I googled around for some newer GUI frameworks and decided to start playing with wxWidgets (decided against Qt because of licensing). I downloaded it, compiled it, and looked to see if there were any IDE plug-ins for RAD development, as you can imagine going from drag and drop a component onto a form in C# I was hoping for something similar. I learned that code::blocks has something of the sort so I tried that out. It was alright but the thing that turned me off was the horrible code completion; it would only show members and methods in the current object and nothing for the #included header files. I understand that code completion/IntelliSense isn't easy for C++ but Visual Studio 2008 handles it pretty good. I did find some other RAD tools like wxFormBuilder but it costs money, not something I want to do for simply learning. So my TLDR question is if anyone has had experience with wxWidgets? Do you just develop in whatever IDE you're comfortable with and just code the GUI? Meaning no visual helpers? Perhaps you could give me a nudge in what direction I should be going :) Thanks, this is also my first post on this site albeit I have read many threads before that have helped me solve copious problems. Cheers!
My suggestion is to learn how to do GUI layout with wxWidgets in code, then when you get good at it learn how to use the GUI tools. Doing this kind of work manually for a while gives you the understanding about what you need ("Ok, I need a wxSizer, vertical, to put these two horizontal wxSizers into, where I put my a wxStaticText and a wxTextCtl for each line ...")... where as I think if you started out with the GUI tools you'd just tend to get annoyed because (last time I looked) none of them were Drag And Drop editors like you get with .NET.
2,106,218
2,106,229
quickly invalidate cache
Is there a way in C++ to quickly invalidate the L2 cache of a processor other than iterating through a large fake array?
I'm going to assume this is for performance testing and you want to eliminate cache effects between runs. In that case, what you'd need to know to do this efficiently is: The allocation size of the L2 cache How many allocations there are in the L2 cache Then it's basically a matter of touching memory allocation_size bytes away from each other until you've flushed the cache entirely. Context switching also often invalidates the cache - it might be faster to wait for a millisecond and if the OS swaps you in and out, it'll likely end up clearing the cache.
2,106,386
2,106,412
When are C++ destructors explicitly called?
What are the instances where you need to explicitly call a destructor?
When you use placement-new is a common reason (the only reason?): struct foo {}; void* memoryLocation = ::operator new(sizeof(foo)); foo* f = new (memoryLocation) foo(); // note: not safe, doesn't handle exceptions // ... f->~foo(); ::operator delete(memoryLocation); This is mostly present in allocators (used by containers), in the construct and destroy functions, respectively. Otherwise, don't. Stack-allocations will be done automatically, as it will when you delete pointers. (Use smart pointers!) Well, I suppose that makes one more reason: When you want undefined behavior. Then feel free to call it as many times as you want... :)
2,106,389
2,106,426
Basic Game DrawEngine question
I want to create a basic draw engine class for my 2D game. I'm not quite sure whether to share the main window handle with the class or keep it private to the main class. The other option I'm considering is to pass the device context itself to the draw engine class. Which would be the standard way to work with a draw engine?
I would say pass the Device Context in as you can always call GetDC(hWnd) in order to obtain the device context, however, the benefits of having the hWnd are that you can get the Client Size etc.. so, in that regard, the hWnd would be the best (perhaps save the hWnd in the class). In terms of speed, you probably want to limit the number of calls to GetDC().
2,106,496
2,106,513
Problems initializing glut
I have simplified my problem to this example: #include <GL/glut.h> int main(int argc, char** argv) { glutInit(&argc, argv); glutInitDisplayMode (GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH); glutInitWindowSize (600, 600); glutInitWindowPosition( 0, 0 ); int win = glutCreateWindow("Recon"); return 0; } When it executes the glutCreateWindow, it takes about 1 minute and the screens flicker several times. This is ridiculously long. This can't be normal. Environment: Fedora 10 Dual NVIDIA GTX280 cards driving 3 monitors. NVIDIA driver version 190.53 CUDA 2.3 installed gcc version 4.3.2 20081105 (Red Hat 4.3.2-7) (GCC) Any ideas as to what could be wrong? Edit: I have no display function because my ultimate goal is to create a rendering context so that I can create a Pixel Buffer Object from some CUDA code (which for the moment is not going to be displaying its output). I have also tried creating a context with a series of GLX calls, with the same delay and flickering happening when glXMakeCurrent is called.
Do you have a display function? I'm not sure if this will help, but maybe putting in a display function in which you clear the buffers might help? e.g. glutDisplayFunc(myDisplay); void myDisplay() { glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear the screen glutSwapBuffers(); } What compiler are you using? And have you looked into any possible performance issues associated with Fedora 10 and OpenGL? (I'm looking into the second bit right now.) Edit: There are definitely some anecdotal stories of a performance hit in Fedora 10 Here and Here. The second one seems to describe at least one of your symptoms. Are you able to try your code on another OS?
2,106,576
2,106,590
C++ Disabling warnings on a specific include
I'm wondering if there's a way to disable all warnings on a specific file (for example, using a preprocessor directive). I'm using CImg.h and I want to get rid of the warnings involving that code. I'm compiling both with VS (the version for Windows) and gcc (the Linux one), so I would like to have a generic way... Thanks!
You can do it using #pragma with the Microsoft compiler: http://msdn.microsoft.com/en-us/library/2c8f766e%28VS.80%29.aspx Something like this: #pragma warning (push, 0) //....header file #pragma warning (pop) Can't help you with the gcc compiler; some info here: Selectively disable GCC warnings for only part of a translation unit? EDIT: Use push, 0 as shown above.
2,106,657
2,106,736
Getting Original Regular Expression Out From sregex (Boost Xpressive)
I have the following code. sregex rex = sregex::compile( "(\\w+) (\\w+)!" ); How I can get "(\w+) (\w+)!" out from rex?
Looking at the documentation for basic_regex<> (sregex is just a typedef for basic_regex), I don't see any function that looks like it can retrieve the original textual representation of the regular expression. If you really need that, you are going to have to create your own class that holds both a sregex and a std::string.
2,106,707
2,106,760
Mutual exclusion (in static library )
I have a static library to access a database. It has a function readMaximum(), which reads a maximum value from the DB. This function is thread-safe (using a mutex). But the problem is: there are two processes, A.exe and B.exe, both compiled with the static library. Is there any way I can implement mutual exclusion between processes A.exe and B.exe, so that when readMaximum() is called by the two processes at the same time, only one is allowed to go into the critical section? PS. I would not like to change any property of the DB/Schema/Table.
Use CreateMutex() to create a named global mutex. Prefix the name with "Global\".
2,106,786
2,107,180
Variable Argument With Class Reference As 1st Parameter
I have the following code : #include <cstdarg> #include <iostream> using namespace std; class a { }; void fun1(a& aa, ...) { va_list argp; va_start(argp, aa); char *p = 0; while ((p = va_arg(argp, char *)) != 0) { cout << p << endl; } va_end(argp); } void fun2(char *aa, ...) { va_list argp; va_start(argp, aa); char *p = 0; while ((p = va_arg(argp, char *)) != 0) { cout << p << endl; } va_end(argp); } int main() { cout << "fun2" << endl; fun2("a", "1", "2", (char *)0); cout << "fun1" << endl; fun1(a(), "1", "2", (char *)0); getchar(); } Everything works fine with fun2. However, fun1 will just crash. May I know how can I prevent from crashing, at the same time able to use class reference as 1st parameter. Currently, it prints : fun2 1 2 fun1 then crash. I wish fun2 1 2 fun1 1 2
You can't use a reference parameter as the last named parameter with va_start. The reason is because va_start takes the address of the named parameter to find the location of the rest of the arguments. However, taking the address of a reference gives the address of the variable pointed at by the reference, not the address of the parameter itself. Your options are: 1) change the variable type from a reference to a pointer (or a non-reference if you are OK with a copy of the passed in variable). 2) Add an additional required parameter so that the reference isn't the last named parameter. The additional parameter can be a useful parameter, such as one of the char* you are going to pass to your particular function, or it can be a dummy variable you just ignore. 3) Change the definition of va_start. It's not recommended, but you can do it. See http://support.microsoft.com/kb/119394 for a non-portable redefinition.
2,106,796
2,106,822
C++ Version For Java String.replaceAll
Java String.replaceAll comes very handy. Has anyone encounter similar library in C++ (Even without regular expression match, but with exact match is OK)
C++ has no built-in library to do that, but Boost has string replace functions: http://www.boost.org/doc/libs/1_41_0/doc/html/string_algo/usage.html#id1701549 Also, without the STL, here is an example: http://www.linuxquestions.org/questions/programming-9/replace-a-substring-with-another-string-in-c-170076/
2,106,834
2,107,128
C++ expression evaluation order
I ran into a curious problem regarding evaluation of expressions: reference operator()(size_type i, size_type j) { return by_index(i, j, index)(i, j); // return matrix index reference with changed i, j } matrix& by_index(size_type &i, size_type &j, index_vector &index) { size_type a = position(i, index); // find position of i using std::upper_bound size_type b = position(j, index); i -= index[a]; j -= index[b]; return matrix_(a,b); // returns matrix reference stored in 2-D array } I had thought matrix(i,j) would be evaluated after the call to by_index, so that i, j would be updated. This appears to be correct; I verified it in the debugger. However, for some types of matrix, specifically those which have to cast size_type to something else, for example int, the update in by_index is lost. Modifying the code slightly removes the problem: reference operator()(size_type i, size_type j) { matrix &m = by_index(i, j, index); return m(i, j); } Do you know why the first operator misbehaves? Thanks. Prototypes which work and which do not: inline reference operator () (size_t i, size_t j); // ublas, size_type is std::size_t reference operator () (int i, int j); // other prototype, size_type is int In the debugger the backtrace looks like this: i = 1 upon entry to operator() //okay i = 0 after finish from by_index //okay i = 1 upon entry to matrix::operator() //not right, should be 0
In my opinion, this boils down to order of evaluation. The standard says - (5.4) Except where noted, the order of evaluation of operands of individual operators and subexpressions of individual expressions, and the order in which side effects take place, is unspecified. Which fits the bill exactly. The values of i and j may be evaluated before the call to by_index(), or after it. You can't tell - this is unspecified. I will add that the form that solves your problem is far more readable in my eyes, and I would have used it regardless of correctness of the first form...
2,106,899
2,106,919
Is the following C++ code equiv? (in a smart pointer implementation)
Code 1: template<class T> const PtrInterface<T>* PtrInterface<T>::newRef() const { PtrInterface<T>* me = (PtrInterface<T>*) this; ++me->references_; //++this->references_; return this; } Code 2: template<class T> const PtrInterface<T>* PtrInterface<T>::newRef() const { //PtrInterface<T>* me = (PtrInterface<T>*) this; //++me->references_; ++this->references_; return this; } Is there ever any situation where these two blocks of code will do different things? Thanks!
Is there ever any situation where these two blocks of code will do different things? Yes, when you are in a const method. Currently, the one with me invokes undefined behavior. Here's why: As you know, when you call a member function, there is an implicit this pointer. The this pointer is const when a function is marked const. Take this for example: struct foo { void method1(void); void method2(void) const; int i; }; Implicitly, the compiler generates (by the way, this is simplified): void foo::method1(foo* this); void foo::method2(const foo* this) const; So, are these two bodies the same? foo* me = (foo*)this; me->i = 1; // and this->i = 1; The answer is it depends, and as stated earlier, it's dependent on the const-ness of the function. In a non-const function, they are the same: void foo::method1(foo* this) { foo* me = (foo*)this; // this cast is redundant me->i = 1; // ... this->i = 1; } But in a const function: void foo::method2(const foo* this) const { foo* me = (foo*)this; // uh-oh! acts like const_cast me->i = 1; // modifying a const_cast'd variable is undefined behavior // ... this->i = 1; // wouldn't compile } We end up stripping the const away. So, no, they aren't always the same. This is the peril of the C-style cast: it will find a way. By the way, casting const away in itself isn't undefined behavior; it's the modification of said variable that does it. There is a sticky problem in your question though: your code shouldn't compile. Like in the commented code above, in your const method you shouldn't be able to modify reference_. This is different if reference_ is mutable, which I'm guessing it might be (assuming you gave us compilable code.) In this case, I'm not certain if the first sample leads to undefined behavior, since it was mutable in the first place. I wouldn't take the chance though.
2,106,927
2,106,950
What does this ROUNDUP macro do in C++?
Can someone explain to me what this does? #define ROUNDUP(n,width) (((n) + (width) - 1) & ~unsigned((width) - 1))
Providing width is an even power of 2 (so 2,4,8,16,32 etc), it will return a number equal to or greater than n, which is a multiple of width, and which is the smallest value meeting that criteria. So width = 16; 5->16, 7->16, 15->16, 16->16, 17->32, 18->32 etc. EDIT I started out on providing an explanation of why this works as it does, as I sense that's really what the OP wants, but it turned into a rather convoluted story. If the OP is still confused, I'd suggest working through a few simple examples, say width = 16, n=15,16,17. Remember that & = bitwise AND, ~ = bitwise complement, and to use binary representation exclusively as you work through the examples.
2,107,011
2,107,018
useful open source libraries/projects on Windows
Which open-source projects do you feel C/C++ Windows developers should be aware of? Boost Libraries: generic library (smart pointers, command line parsing, threads, formatting, etc) Postgresql: full-feature SQL database. MediaInfo: provides information about audio/video files.
I would say GTK+ and Qt. SQLite is awesome. libxml. Mono and MonoDevelop. The Eclipse IDE. The Apache HTTP Server and APR, and all the Apache top-level projects. GLib. OpenGL. Actually, just install Linux or another free UNIX.
2,107,122
2,118,787
Can't get ifstream to work in Xcode
No matter what I try, I cant get the following code to work correctly. ifstream inFile; inFile.open("sampleplanet"); cout << (inFile.good()); //prints a 1 int levelLW = 0; int numLevels = 0; inFile >> levelLW >> numLevels; cout << (inFile.good()); //prints a 0 at the first cout << (inFile.good());, it prints a 1 and at the second a 0. Which tells me that the file is opening correctly, but inFile is failing as soon as read in from it. The file has more then enough lines/characters, so there is no way I have tried to read past the end of the file by that point. File contents: 8 2 #level 2 XXXXXXXX X......X X..X..XX X.X....X X..XX..X XXXX...X X...T..X XXX..XXX #level 1 XXXXXXXX X......X X..X.XXX X.X..X.X X..XX..X X......X X^....SX XXX.^XXX
It turned out to be an issue with Xcode. I created a project in NetBeans using the exact same code and had no problems. Weird. Update: In my Xcode project, I changed my active SDK from Mac OS 10.6 to Mac OS 10.5 and everything works fine now.
2,107,260
2,107,462
How to make a Web Browser toolbar?
How do I make a web browser toolbar for IE in C++, using Dev-C++, with no add-on libraries?
Since you use Dev C++ I am assuming you want to make IE Addons? If thats the case, this should get you started: Creating Add-ons for Internet Explorer: Toolbars on msdn.microsoft.com And you should also take a loot at the Guidelines for add-on developers over at IE Blog.
2,107,275
2,113,312
Does anyone have a FileSystemWatcher-like class in C++/WinAPI?
I need a .Net's FileSystemWatcher analog in raw C++/WinAPI. I almost started to code one myself using FindFirstChangeNotification/FindNextChangeNotification, but then it occurred to me that I am probably not the first one who needs this and maybe someone will be willing to share. Ideally what I need is a class which can be used as follows: FileWatcher fw; fw.startWatching("C:\MYDIR", "filename.dat", FileWatcher::SIZE | FileWatcher::LAST_WRITE, &myChangeHandler); ... fw.stopWatching(); Or if it would use somehting like boost::signal it would be even better. But please, no dependencies other than the Standard Library, boost and raw WinAPI. Thanks!
What about the ReadDirectoryChangesW function? http://msdn.microsoft.com/en-us/library/aa365465(VS.85).aspx It stores notifications in a buffer so you don't miss any changes (unless the buffer overflows)
2,107,601
2,111,344
Fastest cross-platform A* implementation?
With so many implementations available, what is the fastest executing (least CPU intensive, smallest binary), cross-platform (Linux, Mac, Windows, iPhone) A* implementation for C++ using a small grid? Implementations Google returns: http://www.heyes-jones.com/astar.html (Most links on that site are dead.) http://www.grinninglizard.com/MicroPather (Said to be slower than Heyes-Jones'.) http://www.ceng.metu.edu.tr/~cuneyt/codes.html (Generic C++ code.) http://swampthingtom.blogspot.com/2007/07/pathfinding-sample-using.html http://opensteer.sourceforge.net/ (Interesting for games, not A*.) Stack Overflow on Dijkstra's Algorithm Any others? The Wheel The question, as asked, pertains to reuse (plug into a game), not reinvention (at least not until performance is shown to be an issue). It might turn out that a Dijkstra implementation (or generic pathfinding algorithm) is better suited, or that the fastest implementations are not fast enough. I appreciate the suggestions of alternative algorithms, however the question is not, "Should I roll my own A*?" Joel on Software - Not Invented Here Syndrome Coding Horror: Don't Reinvent the Wheel Overcoming the "Not Invented Here Syndrome"
Look at other path-finding algorithms (like Breadth-First, Depth-First, Minimax, Negmax etc.) and weigh the positives and negatives for your scenario. Boost also has an A-star implementation. Try following these instructions to build boost on iPhone, but it might not work for you: it is not a "full port" of boost and it might error out. The following is from Algorithms in a Nutshell (Java, not C++ but maybe you'd like to port it): public Solution search( INode initial, INode goal ) { // Start from the initial state INodeSet open = StateStorageFactory.create( StateStorageFactory.TREE ); INode copy = initial.copy(); scoringFunction.score( copy ); open.insert( copy ); // Use Hashtable to store states we have already visited. INodeSet closed = StateStorageFactory.create( StateStorageFactory.HASH ); while( !open.isEmpty() ) { // Remove node with smallest evaluation function and mark closed. INode n = open.remove(); closed.insert( n ); // Return if goal state reached. if( n.equals( goal ) ) { return new Solution( initial, n ); } // Compute successor moves and update OPEN/CLOSED lists. DepthTransition trans = (DepthTransition)n.storedData(); int depth = 1; if( trans != null ) { depth = trans.depth + 1; } DoubleLinkedList<IMove> moves = n.validMoves(); for( Iterator<IMove> it = moves.iterator(); it.hasNext(); ) { IMove move = it.next(); // Make move and score the new board state. INode successor = n.copy(); move.execute( successor ); // Record previous move for solution trace and compute // evaluation function to see if we have improved upon // a state already closed successor.storedData( new DepthTransition( move, n, depth ) ); scoringFunction.score( successor ); // If already visited, see if we are revisiting with lower // cost. If not, just continue; otherwise, pull out of closed // and process INode past = closed.contains( successor ); if( past != null ) { if( successor.score() >= past.score() ) { continue; } // we revisit with our lower cost.
closed.remove( past ); } // place into open. open.insert( successor ); } } // No solution. return new Solution( initial, goal, false ); }
2,107,608
2,107,677
Using generic methods?
What are the benefits and disadvantages of using generic methods (in compile time, run time, performance, and memory)?
Okay, Java generics and C++ templates are so different that I'm not sure it's possible to answer them in a single question. Java Generics These are there pretty much for syntactic sugar. They are implemented through a controversial decision called type erasure. All they really do is prevent you from having to cast a whole lot, which makes them safer to use. Performance is identical to making specialized classes, except in cases where you are using what would have been a raw data type (int, float, double, char, boolean, short). In these cases, the value types must be boxed to their corresponding reference types (Integer, Float, Double, Character, Boolean, Short), which has some overhead. Memory usage is identical, since the JRE is just performing the casting in the background (which is essentially free). Java also has some nice type covariance and contravariance, which makes things look much cleaner than not using them. C++ Templates These actually generate different classes based on the input type. An std::vector<int> is a completely different class than an std::vector<float>. There is no support for covariance or contravariance, but there is support for passing non-types to templates, partial template specialization. They basically allow you to do whatever you want. However, since C++ templates create different classes for every variation of their template parameters, the size of the compiled executable is larger. Beyond that, compilation time increases greatly, since all template code must be included with each compilation unit and much more code must be generated. However, actual runtime memory footprint is typically smaller than the alternative (frees an extra void*) and performance is better, since the compiler can perform more aggressive optimizations with the known type. EDIT (thanks David Rodríguez): While a generic Java class compiles its entire self, when using a C++ template, you only compile what you use. 
So, if you create an std::vector<int> and only use push_back and size, only those functions will be compiled into the object file. This eases the size of executable problem. If you're curious about the differences between them, check out this comparison of generics in C#, Java and C++.
2,107,688
2,107,833
How to store a sequence of timestamped data?
I have an application that needs to store a sequence of voltage data; each entry is something like a pair {time, voltage}. The time is not necessarily continuous: if the voltage doesn't move, I will not have any reading. The problem is that I also need a function that looks up a timestamp, like getVoltageOfTimestamp(float2second(922.325)). My solution is to have a deque that stores the pairs, then every 30 seconds I take a sample and store the index into a std::map, so inside getVoltageOfTimestamp(float2second(922.325)), I simply find the nearest interval_of_30_seconds to the desired time, move my deque pointer to the corresponding_index_of_deque, iterate from there, and find the correct voltage. I am not sure whether there exists a more 'computer scientist' solution here; can anyone give me a clue?
You could use a binary search on your std::deque because the timestamps are in ascending order. If you want to optimize for speed, you could also use a std::map<Timestamp, Voltage>. For finding an element, you can use upper_bound on the map and return the element before the one found by upper_bound. This approach uses more memory (because std::map<Timestamp, Voltage> has some overhead and it also allocates each entry separately).
2,107,699
2,107,983
Credential manager for Vista/Windows 7
I have a credential manager implemented in VC++ which captures credentials during the login process. It works well in XP/Vista/Windows 7 32-bit environments, but it is not working in 64-bit. Any ideas? Thanks in advance for any help.
If you want your DLL to be loaded by a 64-bit process, your DLL has to be compiled for 64 bits. If you want your DLL to be loaded by a 32-bit process, your DLL has to be compiled for 32 bits. This is true on both 64-bit Windows systems and 32-bit Windows systems. John gave you a useful link, even though John's wording is wrong. An application (exe) which is built for 32 bits will run in 64 bit Windows, but it can only load 32-bit DLLs.
2,107,831
2,113,962
Problem using MIDI streams in Windows
I'm writing a Windows program using C++ and the Windows API, and, am trying to queue MIDI messages in a MIDI stream, but am receiving a strange error when I try to do so. If I use midiOutShortMsg to send a non-queued MIDI message to the stream, it works correctly. However, midiStreamOut always returns error code 68, which is #defined to MCIERR_WAVE_OUTPUTUNSPECIFIED. midiOutGetErrorText gives the following description of the error: The current MIDI Mapper setup refers to a MIDI device that is not installed on the system. Use MIDI Mapper to edit the setup. I am using Windows 7 (64-bit) and have tried opening the MIDI stream with device IDs of both MIDI_MAPPER and all four MIDI output devices on my system, and still receive the exact same error message. Here is the code to open the MIDI stream: UINT device_id = MIDI_MAPPER; //Also tried 0, 1, 2 and 3 midiStreamOpen( &midi, &device_id, 1, ( DWORD_PTR )hwnd, 0, CALLBACK_WINDOW ); Here is the code to send the MIDI message: MIDIHDR header; MIDIEVENT *event; event = ( MIDIEVENT * )malloc( sizeof( *event ) ); event->dwDeltaTime = delta_time; event->dwStreamID = 0; event->dwEvent = ( MEVT_F_SHORT | MEVT_SHORTMSG ) << 24 | ( msg & 0x00FFFFFF ); header.lpData = ( LPSTR )event; header.dwBufferLength = sizeof( *event ); header.dwBytesRecorded = sizeof( *event ); header.dwUser = 0; header.dwFlags = 0; header.dwOffset = 0; midiOutPrepareHeader( ( HMIDIOUT )midi, &header, sizeof( header ) ); midiStreamOut( midi, &header, sizeof( header ) ); How can I resolve this problem?
The problem was that I was using the entire event structure as the buffer for the MIDI stream. It turns out that the fourth member of the structure, dwParms, should actually be omitted from short messages. To correct the code in the posted question, two of the lines of code could be changed to the following: header.dwBufferLength = sizeof( *event ) - sizeof( event->dwParms ); header.dwBytesRecorded = sizeof( *event ) - sizeof( event->dwParms ); When adding multiple events to the stream, it's actually a lot easier to just use an array of DWORDs rather than even bothering with the MIDIEVENT structures. For anyone else doing MIDI programming using the Windows API, beware that some of the MSDN documentation is misleading, inadequate or completely wrong. The documentation for the MIDIEVENT structure says the following: dwParms If dwEvent specifies MEVT_F_SHORT, do not use this member in the stream buffer. This is ambiguous because it is not clear that "use" is intended to mean "include" rather than "specify". Here are two other flaws in the documentation that programmers need to be aware of: dwEvent Event code and event parameters or length. [...] The high byte of this member contains flags and an event code. Either the MEVT_F_LONG or MEVT_F_SHORT flag must be specified. The MEVT_F_CALLBACK flag is optional. When the header files are checked, the MEVT_F_ preprocessor definitions actually specify complete DWORDs rather than just the individual flags, so in my code in the question, the line specifying this member should have been as follows: event->dwEvent = MEVT_F_SHORT | MEVT_SHORTMSG << 24 | ( msg & 0x00FFFFFF ); In addition to this, it has also turned out that the memory containing the MIDIHDR structure should be retained until the buffer has finished playing, so it should be allocated on the heap rather than the stack for most implementations.
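The "omit dwParms for short events" fix boils down to buffer-size arithmetic. A sketch with a stand-in struct (the real MIDIEVENT comes from mmsystem.h; the layout below only mirrors its documented shape for illustration):

```cpp
#include <cstdint>

// Stand-in for the Windows MIDIEVENT struct: three fixed DWORDs,
// followed by an optional parameter block that short events omit.
struct MidiEventLike {
    uint32_t dwDeltaTime;
    uint32_t dwStreamID;
    uint32_t dwEvent;
    uint32_t dwParms[1]; // only present for long (MEVT_F_LONG) events
};

// For a short event, the bytes handed to the stream cover only the
// three fixed DWORDs, i.e. sizeof(struct) minus the dwParms tail.
constexpr uint32_t short_event_bytes() {
    return sizeof(MidiEventLike) - sizeof(MidiEventLike::dwParms);
}
```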
2,107,944
2,108,043
How to implement an associative array/map/hash table data structure (in general and in C++)
Well I'm making a small phone book application and I've decided that using maps would be the best data structure to use but I don't know where to start. (Gotta implement the data structure from scratch - school work)
Tries are quite efficient for implementing maps where the keys are short strings. The wikipedia article explains it pretty well. To deal with duplicates, just make each node of the tree store a linked list of duplicate matches Here's a basic structure for a trie struct Trie { struct Trie* letter; struct List *matches; }; malloc(26*sizeof(struct Trie)) for letter and you have an array. if you want to support punctuations, add them at the end of the letter array. matches can be a linked list of matches, implemented however you like, I won't define struct List for you.
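A minimal compilable version of that sketch (names and the 26-lowercase-letter assumption are illustrative; real duplicates would hang off a match list as the answer describes, a bool stands in for it here):

```cpp
#include <memory>
#include <string>

// One node per letter position; child pointers are created lazily.
// Keys are assumed to be lowercase a-z only.
struct Trie {
    std::unique_ptr<Trie> letter[26]; // 'a'..'z' children
    bool has_match = false;           // stands in for the match list

    void insert(const std::string& key) {
        Trie* node = this;
        for (char c : key) {
            std::unique_ptr<Trie>& child = node->letter[c - 'a'];
            if (!child) child.reset(new Trie());
            node = child.get();
        }
        node->has_match = true;
    }

    bool contains(const std::string& key) const {
        const Trie* node = this;
        for (char c : key) {
            const std::unique_ptr<Trie>& child = node->letter[c - 'a'];
            if (!child) return false;
            node = child.get();
        }
        return node->has_match;
    }
};
```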
2,108,000
2,110,489
Debugging asserts in Qt Creator
When I hit a normal assert statement while debugging with Visual Studio I get the option to break into the debugger so I can see the entire stack trace and the local variables, not just the assert message. Is it possible to do this with Qt Creator+mingw32 and Q_ASSERT/Q_ASSERT_X?
You can install a handler for the messages/warnings that Qt emits, and do your own processing of them. See the documentation for qInstallMsgHandler and the example they give there. It should be easy to insert a break in a custom message handler (or indeed, just assert on your own at that point). The one small drawback is that you'll be a bit further on down the stack than where the error actually occurred, but it is a simple matter to just step up the stack until you are at the proper frame.
2,108,084
2,108,109
Pass by reference more expensive than pass by value
Is there a case where pass-by-reference is more expensive than pass-by-value in C++? If so, what would that case be?
Prefer passing primitive types (int, char, float, ...) and POD structs that are cheap to copy (Point, complex) by value. This will be more efficient than the indirection required when passing by reference. See Boost's Call Traits. The template class call_traits<T> encapsulates the "best" method to pass a parameter of some type T to or from a function, and consists of a collection of typedefs defined as in the table below. The purpose of call_traits is to ensure that problems like "references to references" never occur, and that parameters are passed in the most efficient manner possible.
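The idea behind call_traits can be sketched with std::conditional: pick by-value for small trivially copyable types and const-reference for everything else. The 16-byte threshold here is an illustrative assumption, not Boost's actual heuristic:

```cpp
#include <type_traits>

// Choose plain T for cheap-to-copy types, const T& for the rest.
template <typename T>
struct param_type {
    using type = typename std::conditional<
        std::is_trivially_copyable<T>::value && sizeof(T) <= 16,
        T, const T&>::type;
};

struct Point { double x, y; };      // small POD: pass by value
struct Big   { double data[64]; };  // large: pass by const reference
```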
2,108,099
2,108,245
Modifying old Windows program for Mac OS X
This application was written for Windows back in 1998, I loved using this program. Now I want to learn how to make it work on Mac, and maybe change and add functionality. The problem is I don't know where to start. I have studied C++, php, javascript, but don't really know how to read this code, or where to start. Thanks for taking a look http://github.com/klanestro/textCalc From http://www.atomixbuttons.com/textcalc/ What is TextCalc? TextCalc is a combination of an expression calculator and a text editor. Being both, it has several advantages over conventional calculators. 1) You can evaluate expressions like 9*4-2+95-12 just the way you write them on paper. 2) You can put comments besides your answer and expressions. 3) You can save, reload, edit and print your results and expressions. 4) You do not need to write your answer down on a paper before computing another expression, as you can leave the previous result in the editor. 5) You can open an existing text data file and perform calculations on it. 6) You can apply an expression to many numbers at one go. For example, you can change the list 1 2 3 4 5 to 2 4 6 8 10 by multiplying each number by 2. 7) You can sum, average, convert into hex etc. a list of numbers easily. The editor is capable of parsing numbers and strings enclosed in double quotes " ". Numbers will be colored blue and strings will be colored red. This makes it ideal for editing files containing numeric data.

Based on the screenshots and info on the TextCalc site, I think this is best implemented as a Mac OS X service. You can assign a hot key to trigger your service in the System Preferences -> Keyboard -> Services. It would actually be rather easy. You don't need to write the text editor portion, it will be available in all text areas in all apps. You will be handed the text the user has selected, and all you need to do is evaluate it. There's a built-in command line tool, bc, that you should be able to delegate this to. There is a guide to implementing services. You will need to read through the Cocoa intro material to understand it. This is a good first project, though. I don't think there's any reason to try to read the source of the original app in this case. You just need to know what you want the behavior to be.
2,108,172
2,108,209
C++ Namespaces, comparison to Java packages
I've done a bunch of Java coding recently and have got used to very specific package naming systems, with deep nesting e.g. com.company.project.db. This works fine in Java, AS3/Flex and C#. I've seen the same paradigm applied in C++ too, but I've also heard that it's bad to view C++ namespaces as direct counterparts to Java packages. Is that true, and why? How are namespaces/packages alike and different? What problems are likely to be seen if you do use deep nested namespaces?
In C++ namespaces are just about partitioning the available names. Java packages are about modules. The naming hierarchy is just one aspect of it. There's nothing wrong, per se, with deeply nested namespaces in C++, except that they're not normally necessary as there's no module system behind them, and the extra layers just add noise. It's usually sufficient to have one or two levels of namespace, with the odd extra level for internal details (often just called Details). There are also extra rules to C++ namespaces that may catch you out if overused - such as argument-dependent lookup, and the rules around resolving to parent levels. WRT the latter, take: namespace a{ namespace b{ int x; } } namespace b{ string x; } namespace a { b::x = 42; } Is this legal? Is it obvious what's happening? You need to know the precedence of the namespace resolution to answer those questions.
2,108,355
2,108,398
Difficult concurrent design
I have a class called Root which serves as some kind of phonebook for dynamic method calls: it holds a dictionary of url keys pointing to objects. When a command wants to execute a given method it calls a Root instance with an url and some parameter: root_->call("/some/url", ...); Actually, the call method in Root looks close to this: // Version 0 const Value call(const Url &url, const Value &val) { // A. find object if (!objects_.get(url.path(), &target)) return ErrorValue(NOT_FOUND_ERROR, url.path()); } // B. trigger the object's method return target->trigger(val); } From the code above, you can see that this "call" method is not thread safe: the "target" object could be deleted between A and B and we have no guarantee that the "objects_" member (dictionary) is not altered while we read it. The first solution that occurred to me was: // Version I const Value call(const Url &url, const Value &val) { // Lock Root object with a mutex ScopedLock lock(mutex_); // A. find object if (!objects_.get(url.path(), &target)) return ErrorValue(NOT_FOUND_ERROR, url.path()); } // B. trigger the object's method return target->trigger(val); } This is fine until "target->trigger(val)" is a method that needs to alter Root, either by changing an object's url or by inserting new objects. Modifying the scope and using a RW mutex can help (there are far more reads than writes on Root): // Version II const Value call(const Url &url, const Value &val) { // A. find object { // Use a RW lock with smaller scope ScopedRead lock(mutex_); if (!objects_.get(url.path(), &target)) return ErrorValue(NOT_FOUND_ERROR, url.path()); } } // ? What happens to 'target' here ? // B. trigger the object's method return target->trigger(val); } What happens to 'target' ? How do we ensure it won't be deleted between finding and calling ? Some ideas: object deletion could be postponed in a message queue in Root. 
But then we would need another RW mutex read-locking deletion on the full method scope and use a separate thread to process the delete queue. All this seems very convoluted to me and I'm not sure if concurrent design has to look like this or I just don't have the right ideas. PS: the code is part of an open source project called oscit (OpenSoundControl it).
To avoid the deletion of 'target', I had to write a thread safe reference counted smart pointer. It is not that hard to do. The only thing you need to ensure is that the reference count is accessed within a critical section. See this post for more information.
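A minimal sketch of what such a thread-safe reference-counted pointer can look like, using an atomic counter instead of an explicit critical section (names and the intrusive-count design are illustrative, not the oscit implementation):

```cpp
#include <atomic>

static std::atomic<int> g_destroyed(0); // lets the example observe deletion

// Intrusive base class: the count lives inside the object itself.
class Object {
public:
    Object() : count_(0) {}
    virtual ~Object() { ++g_destroyed; }
    void retain() { count_.fetch_add(1, std::memory_order_relaxed); }
    void release() {
        // acq_rel ordering makes the delete safe across threads.
        if (count_.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete this;
    }
    int ref_count() const { return count_.load(); }
private:
    std::atomic<int> count_;
};

// Smart pointer that retains on copy and releases on destruction, so
// a 'target' acquired under the lookup lock survives the later call.
template <typename T>
class counted_ptr {
public:
    explicit counted_ptr(T* p = nullptr) : ptr_(p) { if (ptr_) ptr_->retain(); }
    counted_ptr(const counted_ptr& o) : ptr_(o.ptr_) { if (ptr_) ptr_->retain(); }
    counted_ptr& operator=(const counted_ptr&) = delete; // kept minimal
    ~counted_ptr() { if (ptr_) ptr_->release(); }
    T* operator->() const { return ptr_; }
    T* get() const { return ptr_; }
private:
    T* ptr_;
};
```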
2,108,389
2,108,460
C++ classes , Object oriented programming
I have a very simple class named person which is given below , I have a problem with only two functions , i.e setstring () function and setname() function , I am calling setstring() function from the setname function. The only problem is when in the main function I write Object.setname(“Zia”); The result is ok as shown in the output screen, Now when I write Object.setname(“Zia ur Rahman”); Nothing is displayed as you can see the output screen. I know the problem is when I pass the name pointer to setstring () function but I am confused about it please explain it in detail that what is happening here. #include<iostream.h> class person { char* name; public: person(); void setname(const char*); void setstring(const char*, char*); void print()const; }; person::person() { name=new char[3]; strcpy(name,"NILL"); name[3]='\0'; } void person::setstring(const char* s, char*p) { if(s!=NULL) { delete[] p; p=new char[strlen(s)]; strcpy(p,s); p[strlen(s)]='\0'; } } void person::setname(const char* n) { setstring(n, name);//passing data member name } void person::print()const { cout<<"Name: "<<name<<endl; } main() { person object; object.setname("Zia ur Rahman"); object.print(); system("pause"); } (two screenshots of the program's output were attached here)
The specific reason that nothing is being printed is that in setstring, p is copy of the name pointer, not a reference to it. Try changing the signature of setstring to: void setstring(const char* s, char*& p); (note the &). See the other answers for other significant errors in the code - unless these problems are fixed, you are likely to get crashes or strange behaviour. And unless the purpose of the code is just to learn dynamic arrays, use std::string instead :-).
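A stripped-down sketch of the same bug and its fix (function names are illustrative, not the poster's class):

```cpp
#include <cstring>

// Broken: p is a copy of the caller's pointer, so reassigning it
// inside the function is invisible outside.
void set_by_value(const char* s, char* p) {
    p = new char[std::strlen(s) + 1];
    std::strcpy(p, s);
    delete[] p; // free it here, since the caller never sees it
}

// Fixed: p is a reference to the caller's pointer, so the
// reassignment propagates back.
void set_by_reference(const char* s, char*& p) {
    delete[] p;
    p = new char[std::strlen(s) + 1]; // +1 leaves room for '\0'
    std::strcpy(p, s);
}
```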
2,108,467
2,108,502
Is Short Circuit Evaluation guaranteed In C++ as it is in Java?
In Java, I use if (a != null && a.fun()); by taking full advantage of short-circuit evaluation and expression are evaluated from left to right? In C++, can I do the same? Are they guarantee to portable across different platform and compiler? if (a != 0 && a->fun());
Yes, it is guaranteed for the "built-in" types. However, if you overload && or || for your own types, short-circuit evaluation is NOT performed. For this reason, overloading these operators is considered to be a bad thing.
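A quick demonstration that the right-hand side is never evaluated when the left is false:

```cpp
// The guard on the left prevents the call on the right entirely;
// without short-circuiting, safe_call(nullptr) would dereference null.
struct A {
    bool fun() const { return true; }
};

bool safe_call(const A* a) {
    return a != nullptr && a->fun();
}

// Counts evaluations of the right-hand side to show when it is skipped.
bool touch(int& counter) {
    ++counter;
    return true;
}

int rhs_evaluations_when_left_is(bool left) {
    int counter = 0;
    bool result = left && touch(counter);
    (void)result;
    return counter;
}
```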
2,108,538
2,108,625
how to use my_alloc for _all_ new calls in C++?
Imagine I'm in C-land, and I have void* my_alloc(size_t size); void* my_free(void*); then I can go through my code and replace all calls to malloc/free with my_alloc/my_free. How, I know that given a class Foo, I can do placement new; I can also overload the new operator. However, is there a way to do this for all my C++ classes? (i.e. I want to use my own allocator for new and new[]; but I don't want to run through and hack every class I have defined.) Thanks!
In global scope, void* operator new(size_t s) { return my_alloc(s); } void operator delete(void* p) { my_free(p); } void* operator new[](size_t s) { return my_alloc(s); } void operator delete[](void* p) { my_free(p); }
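A sketch of those replacements in action; here my_alloc/my_free are stood in by malloc/free plus counters so the routing is observable (error handling such as throwing std::bad_alloc on failure is omitted):

```cpp
#include <cstdlib>
#include <new>

// Stand-ins for my_alloc/my_free, instrumented with counters.
static int g_allocs = 0;
static int g_frees = 0;

void* my_alloc(std::size_t size) { ++g_allocs; return std::malloc(size); }
void my_free(void* p) { ++g_frees; std::free(p); }

// Replacing the global operators reroutes every new/delete in the
// program, with no per-class changes needed.
void* operator new(std::size_t s) { return my_alloc(s); }
void operator delete(void* p) noexcept { my_free(p); }
void* operator new[](std::size_t s) { return my_alloc(s); }
void operator delete[](void* p) noexcept { my_free(p); }
```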
2,108,668
2,108,682
basic question c++, dynamic memory allocation
Suppose I have a class class person { char* name; public: void setname(const char*); }; void person::setname(const char* p) { name=new char[strlen(p)]; strcpy(name,p); name[strlen(p)]='\0'; } My question is about the line name=new char[strlen(p)]; suppose the p pointer is pointing to string i.e “zia” , now strlen(p) will return 3 it means we have an array of 4 characters i.e char[3] now I copy the string into the name and at the 4th location , I put the null character , what is wrong with this?????
You say: we have an array of 4 characters i.e char[3] Surprisingly enough, char[3] is an array of THREE characters, not FOUR!
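The corrected allocation needs strlen(p) + 1 bytes; a sketch:

```cpp
#include <cstring>

// Allocates room for the characters AND the terminating '\0'.
// strlen("zia") is 3, so the buffer must be 4 bytes, not 3.
char* copy_string(const char* p) {
    char* name = new char[std::strlen(p) + 1];
    std::strcpy(name, p); // strcpy writes the '\0' for us
    return name;
}
```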
2,108,899
2,109,083
Xcode cannot find #Include<> header
I'm trying to get Xcode to import the header file for Irrlicht. #include <irrlicht.h> It says "Irrlicht.h. No such file or directory". Yes Irrlicht.h with a capital I, even though the #include is lowercase. Anyway I added "/lib/irrlicht-1.6/include" in the header search paths for the Xcode project, yet it still doesn't find it. The only thing I've tried that does work is: #include "/lib/irrlicht-1.6/include/irrlicht.h" This is a bit ridiculous though, #include should work, I don't understand why it isn't working. Update (here are more details on the error): /lib/PAL/pal_benchmark/palBenchmark/main.h:31:0 /lib/PAL/pal_benchmark/palBenchmark/main.h:31:22: error: irrlicht.h: No such file or directory
I figured this out. Perhaps someone can comment as to why this is the case. The Header was located in this directory: /lib/irrlicht-1.6/include/ If I added that path to: "Header Search Paths" Xcode still wouldn't find the path when I built the project. Solution: Add the header path to: "User Header Search Paths" instead. It boggles me why I had to do this, as I frequently add my header paths to "Header Search Paths" and then #includes just work. Hopefully this can help someone else who gets this same issue.
2,109,191
2,109,323
Ambiguous overload accessing argument-less template functions with variadic parameters
Yeah, the title can scare babies, but it's actually quite straightforward. I am trying to store a function pointer to a specialized template function, namely boost::make_shared (boost 1.41), as illustrated: boost::shared_ptr<int> (*pt2Function)() = boost::make_shared<int>; However, it won't compile (GCC 4.4.1) due to the fact that boost::make_shared has the following two specializations which the compiler can't tell apart in this context: template< class T > boost::shared_ptr< T > make_shared() ... template< class T, class... Args > boost::shared_ptr< T > make_shared( Args && ... args ) The error, for reference: In function ‘int main()’: error: converting overloaded function ‘make_shared’ to type ‘class boost::shared_ptr<int> (*)()’ is ambiguous boost/smart_ptr/make_shared.hpp:100: error: candidates are: boost::shared_ptr<X> boost::make_shared() [with T = int] boost/smart_ptr/make_shared.hpp:138: error: boost::shared_ptr<X> boost::make_shared(Args&& ...) [with T = int, Args = ] If I comment out the non-variadic variation, the code compiles fine. Does anyone know the proper syntax for resolving the ambiguity between two argument-less functions like this?
Variadic template arguments mean you take 0..n template arguments, thus both your versions are matches. You could resolve the ambiguity by adding another template parameter to the second version, so that it takes 1..n arguments. Something like this should work: template< class T, class Arg1, class... Args > boost::shared_ptr< T > make_shared(Arg1&& arg1, Args && ... args ) But as UncleBens correctly pointed out, you don't even need two versions. The following should be enough in your case: template< class T, class... Args > boost::shared_ptr<T> make_shared(Args && ... args ); If you use only one template argument (i.e. T), you get the 0-argument version of make_shared().
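The Arg1/Args split can be demonstrated without Boost; with the one-or-more-argument overload, taking the address of the zero-argument version is no longer ambiguous (make_shared_like is an illustrative stand-in, not Boost's implementation):

```cpp
#include <memory>
#include <utility>

// Zero-argument version.
template <class T>
std::shared_ptr<T> make_shared_like() {
    return std::shared_ptr<T>(new T());
}

// One-or-more-argument version: the mandatory Arg1 means this
// overload can never match an empty argument list, so the two
// overloads no longer overlap.
template <class T, class Arg1, class... Args>
std::shared_ptr<T> make_shared_like(Arg1&& arg1, Args&&... args) {
    return std::shared_ptr<T>(new T(std::forward<Arg1>(arg1),
                                    std::forward<Args>(args)...));
}
```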
2,109,283
2,109,313
How to detect if an application is running under KVM?
I already know how to detect VMWare and VirtualPC, but I want to know how to do this in Kernel Virtual Machine. I would like the code to be in C or C++.
This page implies that it's enough to check the kernel's boot messages, if Linux is your hosted OS: # dmesg | grep -i virtual CPU: AMD QEMU Virtual CPU version 0.9.1 stepping 03 That should be easy enough to implement in C.
2,109,450
2,109,473
I'm developing GUI apps on Mac. I have been using C++ for 10+ years. Do I need to switch to Objective C?
I've been coding on C++/Linux for 10+ years. I am switching to do Mac development. My development involves GUI components. Is my only choice to learn Cocoa/Objective-C, or is there a way to wrap Cocoa and use it from C++ land? Thanks!
Yes, you need to learn Objective-C; besides, you wouldn't gain much by trying to wrap it from C++. It's not the language that's hard to learn but the Cocoa framework (not because it's inherently hard but because it's so huge).
2,109,483
2,109,516
Boost threads coring on startup
I have a program that brings up and tears down multiple threads throughout its life. Everything works great for awhile, but eventually, I get the following core dump stack trace. #0 0x009887a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2 #1 0x007617a5 in raise () from /lib/tls/libc.so.6 #2 0x00763209 in abort () from /lib/tls/libc.so.6 #3 0x003ec1bb in __gnu_cxx::__verbose_terminate_handler () from /usr/lib/libstdc++.so.6 #4 0x003e9ed1 in __cxa_call_unexpected () from /usr/lib/libstdc++.so.6 #5 0x003e9f06 in std::terminate () from /usr/lib/libstdc++.so.6 #6 0x003ea04f in __cxa_throw () from /usr/lib/libstdc++.so.6 #7 0x00d5562b in boost::thread::start_thread () from /h/Program/bin/../lib/libboost_thread-gcc34-mt-1_39.so.1.39.0 At first, I was leaking threads, and figured the core was due to hitting some maximum limit of number of current threads, but now it seems that this problems occurs even when I don't. For reference, in the core above there were 13 active threads executing. I did some searching to try and figure out why start_thread would core, but I didn't come across anything. Anyone have any ideas?
start_thread is throwing an uncaught exception. Check which exceptions start_thread can throw and place a catch around the call to see what the problem is.
2,109,643
2,109,659
C++, GTK+, and String types
Excuse my ignorance here but I know neither C++ nor GTK+. Which String type is used when setting Strings in GTK+ widgets? In .NET, Strings passed to a control are obviously .NET System.String. In Cocoa, Strings passed to a control are NSString. But I understand C++ does not have a standardized String type (but indeed several, depending on the library used). So how are Strings passed to GTK+ widgets? (I am thinking C Strings, but I want to know for sure.)
All text in GTK+ is UTF-8-encoded, using char *, of course const where possible. Remember that GTK+ is implemented in C, so there is no use of STL for instance. The underlying glib's character-set conversion documentation begins by stating: Glib uses UTF-8 for its strings, and GUI toolkits like GTK+ that use Glib do the same thing.
2,109,648
2,109,675
What's the lifetime of memory pointed to typeinfo::name()?
In C++ I can use typeid operator to retrieve the name of any polymorphic class: const char* name = typeid( CMyClass ).name(); How long will the string pointed to by the returned const char* pointer available to my program?
As long as the class's RTTI data exists. So if you are dealing with a single executable - forever. But for classes in a dynamic link library it shifts a little, since you can potentially unload the library.
2,109,767
3,468,766
MSXML's loadXML fails to load even well formed xml
I have written a wrapper on top of MSXML in c++ . The load method looks like as below. The problem with the code is it fails to load well formed xml sometimes. Before passing the xml as string I do a string search for xmlns and replace all occurrence of xmlns with xmlns:dns. In the code below I remove bom character. Then i try to load using the MSXML loadXML method . If load succeeds I set the namespace as shown in the code. Class XmlDocument{ MSXML2::IXMLDOMDocument2Ptr spXMLDOM; .... } // XmlDocument methods void XmlDocument::Initialize() { CoInitialize(NULL); HRESULT hr = spXMLDOM.CreateInstance(__uuidof(MSXML2::DOMDocument60)); if ( FAILED(hr) ) { throw "Unable to create MSXML:: DOMDocument object"; } } bool XmlDocument::LoadXml(const char* xmltext) { if(spXMLDOM != NULL) { char BOM[3] = {0xEF,0xBB,0xBF}; //detect unicode BOM character if(strncmp(xmltext,BOM,sizeof(BOM)) == 0) { xmltext += 3; } VARIANT_BOOL bSuccess = spXMLDOM->loadXML(A2BSTR(xmltext)); if ( bSuccess == VARIANT_TRUE) { spXMLDOM->setProperty("SelectionNamespaces","xmlns:dns=\"http://www.w3.org/2005/Atom\""); return true; } } return false; } I tried to debug still could not figure why sometimes loadXML() fails to load even well formed xmls. What am I doing wrong in the code. Any help is greatly appreciated. Thanks JeeZ
For this specific issue, please refer to Strings Passed to loadXML must be UTF-16 Encoded BSTRs. Overall, an XML parser is not designed for in-memory string parsing, e.g. loadXML does not recognize a BOM, and it has restrictions on the encoding. Rather, an XML parser is designed to work on a byte array with encoding detection, which is critical for a standard parser. To better leverage MSXML, please consider loading from IStream or a Win32 file.
2,109,784
2,109,921
What am I missing in my compilation / linking stage of this C++ FreeType GLFW application?
g++ -framework OpenGL GLFT_Font.cpp test.cpp -o test -Wall -pedantic -lglfw -lfreetype - pthread `freetype-config --cflags` Undefined symbols: "_GetEventKind", referenced from: __glfwKeyEventHandler in libglfw.a(macosx_window.o) __glfwMouseEventHandler in libglfw.a(macosx_window.o) __glfwWindowEventHandler in libglfw.a(macosx_window.o) "_ShowWindow", referenced from: __glfwPlatformOpenWindow in libglfw.a(macosx_window.o) "_MenuSelect", referenced from: This is on Mac OS X. I am trying to get GLFT_FONT to work on MacOSX with GLFW and FreeType2. This is not the standard Makefile. I changed parts of it myself (like the "-framework OpenGL" I am from Linux land, a bit new to Mac. I am on Mac OS X 10.5.8; using XCode 3.1.3 Thanks!
I think those come from the Carbon framework. Adding -framework Carbon to your link command should do it then.
2,110,151
2,110,171
Using Templated Classes and Functions in a Shared Object/DLL
I am working on a fairly significantly-sized project which spans many shared libraries. We also have significant reliance on the STL, Boost and our own template classes and functions. Many exported classes contain template members and exported functions contain template parameters. Here is a stripped-down example of how I do library exporting: #if defined(_MSC_VER) && defined(_DLL) // Microsoft #define EXPORT __declspec(dllexport) #define IMPORT __declspec(dllimport) #elif defined(_GCC) // GCC #define EXPORT __attribute__((visibility("default"))) #define IMPORT #else // do nothing and hope for the best at link time #define EXPORT #define IMPORT #endif #ifdef _CORE_COMPILATION #define PUBLIC_CORE EXPORT #define EXTERNAL_CORE #else #define PUBLIC_CORE IMPORT #define EXTERNAL_CORE extern #endif #include <deque> // force exporting of templates EXTERNAL_CORE template class PUBLIC_CORE std::allocator<int>; EXTERNAL_CORE template class PUBLIC_CORE std::deque<int, std::allocator<int> >; class PUBLIC_CORE MyObject { private: std::deque<int> m_deque; }; SO, my problem is that when I compile in Visual Studio (both 2008 and 2010), I get the following warning: warning C4251: 'std::_Deque_val<_Ty,_Alloc>::_Almap' : class 'std::allocator<_Ty>' needs to have dll-interface to be used by clients of class 'std::_Deque_val<_Ty,_Alloc>' Which seems to imply that I haven't exported std::allocator<int>, which I have. 
And it's not like my exporting is incorrect, since not including EXTERNAL_CORE template class PUBLIC_CORE std::allocator<int>; EXTERNAL_CORE template class PUBLIC_CORE std::deque<int, std::allocator<int> >; yields the warning: warning C4251: 'MyObject::m_deque' : class 'std::deque<_Ty>' needs to have dll-interface to be used by clients of class 'MyObject' The only thing I can think of is that the _Ty the warning about the std::allocator is talking about is somehow not int, but I can't seem to find any indication that it would be otherwise, since a std::deque<int> would logically allocate with an std::allocator<int>. A consuming application can use the class just fine, but I have a gut feeling that this warning should not be ignored. When compiling with g++ in Linux, no errors are emitted (although that doesn't mean it's working right). Is g++ automatically doing something that MSVC cannot do? I've been targeting GCC on Linux, LLVM on OSX and MSVC on Windows, but I could potentially move to MinGW for Windows development, so abandoning MSVC is not exactly out of the question (if this proves to be too big of an inconvenience).
As you may know, the templates in your export file are in fact a 'permission to fill in whatever you think necessary' for the compiler. That means that if you compile your header file with compiler A, it may instantiate a completely different deque<int> than compiler B. The order of some members may change, for one, or even the actual type of some member variables. And that's what the compiler is warning you about. EDIT: added some consequences to the explanation So your shared libraries will only work together nicely when compiled by the same compiler. If you want them to work together, you can either make sure that all client code 'sees' the same declaration (through using the same STL implementation), or step back from adding templates to your API.
2,110,212
2,110,443
How to create good debugging problems for a contest?
I am involved in a contest, and in one event we have debugging questions. I have to design some really good debugging problems in C and C++. How can I create some good problems on debugging? What aspects should I consider while designing the problems?
My brainstorming session: Memory leaks of the subtle sort are always nice to have. Mess around with classes, constructors, copy-constructors and destructors, and you should be able to create a difficult-to-spot problem with ease. Off-by-one errors for array loops are also a classic. Then you can simply mess with the minds of the readers by playing with names of things. Create variables with subtly different names, variables with randomized (AND subtly different) names, etc. and then let them try and spot the one place where you've mixed up length and lenght. Don't forget about casing differences. Calling conventions can be abused to create subtle bugs too (like reversing the order of parameters). Also let's not forget about endless hours of fun from tricky preprocessor defines and templates (did you know that C++ templates are supposedly Turing-complete?) Metaprogramming bugs should be entertaining. Next idea that comes to mind is to provide a correct program, but flawed input data (subtly, of course). The program will then fail for the lack of error checking, but it will be some time until people realize that they are looking for problems in the wrong place. Race conditions are often difficult to reproduce and fix, try to play with multithreading. Underflows/overflows can be easily missed by casual inspection. And last, but not least - if you're a programmer, try remembering what was the last big problem that you spent two weeks on solving. If you're not a computer programmer, try to find one and ask them. I'm a .NET programmer, so unfortunately my experiences will relate little to your requirement of C/C++.
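As a concrete instance of the off-by-one category, a contest hand-out might contain a loop written with `i <= n`, walking one element past the end of the array (undefined behavior); a sketch of the fixed version, with the planted bug noted in the comment:

```cpp
// The contest version would read `i <= n`, indexing arr[n] which is
// one past the end. This is the corrected loop.
int sum(const int* arr, int n) {
    int total = 0;
    for (int i = 0; i < n; ++i) // the bug to spot: `<=` instead of `<`
        total += arr[i];
    return total;
}
```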
2,110,215
2,110,281
Adding a field to a structure without breaking existing code
So I'm working with this huge repository of code and have realized that one of the structs lack an important field. I looked at the code (which uses the struct) as closely as I could and concluded that adding an extra field isn't going to break it. Any ideas on where I could've screwed up? Also: design advice is welcome - what's the best way I can accomplish this? E.g. (if I wasn't clear): typedef struct foo { int a; int b; } foo; Now it's : typedef struct foo { int a; int b; int c; } foo;
From what you've written above I can't see anything wrong. Two things I can think of: Whenever you change code and recompile you introduce the ability to find "hidden" bugs. That is, uninitialized pointers which your new data structure could be just big enough to be corrupted. Are you making sure you initialize c before it gets used? Follow Up: Since you haven't found the error yet I'd stop looking at your struct. Someone once wrote look for horses first, zebras second. That is, the error is probably not an exotic one. How much coverage do you have in your unit tests? I'm assuming this is legacy code which almost invariably means 0% or at least that's been my experience. Is this accurate?
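The "is c initialized?" point is usually the culprit. One way to make the new field safe at every existing creation site is a default initializer, sketched here in C++ (the question's struct is C-style, so this assumes the codebase compiles as C++):

```cpp
// Giving the new field a default means code paths that never mention
// c still see a defined value instead of stack or heap garbage.
struct foo {
    int a = 0;
    int b = 0;
    int c = 0; // the newly added field
};

foo make_legacy_foo(int a, int b) {
    foo f;          // old code that only ever set a and b
    f.a = a;
    f.b = b;
    return f;       // f.c is already 0 thanks to the in-class initializer
}
```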
2,110,302
2,110,340
C++ will this function leak?
I have started out to write a simple console Yahtzee game for practice. I just have a question regarding whether or not this function will leak memory. The roll function is called every time the dices need to be re-rolled. What it does is to create a dynamic array. First time it is used it will store 5 random values. For the next run it will only re-roll all except for the dice you want to keep. I have another function for that, but since it isn't relevant for this question I left it out Main function int *kast = NULL; //rolled dice int *keep_dice = NULL; //which dice to re-roll or keep kast = roll(kast, keep_dice); delete[] kast; and here's the function int *roll(int *dice, int *keep) { srand((unsigned)time(0)); int *arr = new int[DICE]; if(!dice) { for(int i=0;i<DICE;i++) { arr[i] = (rand()%6)+1; cout << arr[i] << " "; } } else { for(int i=0;i<DICE;i++) { if(!keep[i]) { dice[i] = (rand()%6)+1; cout << "Change "; } else { keep[i] = 0; cout << "Keep "; } } cout << endl; delete[] arr; arr = NULL; arr = dice; } return arr; }
Yes, it can leak. Just for example, using cout can throw an exception, and if it does, your delete will never be called. Instead of allocating a dynamic array yourself, you might want to consider returning an std::vector. Better still, turn your function into a proper algorithm, that takes an iterator (in this case, a back_insert_iterator) and writes its output there. Edit: Looking at it more carefully, I feel obliged to point out that I really dislike the basic structure of this code completely. You have one function that's really doing two different kinds of things. You also have a pair of arrays that you're depending on addressing in parallel. I'd restructure it into two separate functions, a roll and a re_roll. I'd restructure the data as an array of structs: struct die_roll { int value; bool keep; die_roll() : value(0), keep(true) {} }; To do an initial roll, you pass a vector (or array, if you truly insist) of these to the roll function, which fills in initial values. To do a re-roll, you pass the vector to re-roll which re-rolls to get a new value for any die_roll whose keep member has been set to false.
2,110,632
2,118,916
how to represent int * as array in totalview?
How do I 'dive' an int * which points to a dynamically allocated array of integers and represent it as a fixed int[] array? Put otherwise, if I dive an int * it shows the address and the int pointed to, but instead I would like to see the array of all of the integers.
I noticed the TotalView tag on this question. Are you asking how to see the values in your array in TotalView? If so then the answer is pretty easy. Let's say you have a pointer p which is of type int * and you have it currently pointing towards an array with 10 integers. Step 1. Dive on the pointer. That's accomplished by double clicking, clicking the middle mouse button, or using the dive option on the context menu -- all after having placed the mouse cursor on the variable in the source code pane or the stack frame pane. This will bring up a new window that will say Expression: p Address: 0xbfaa1234 Type: int * and down in the data area will say something like 0x08059199 -> 0x000001a5 (412) This window is showing you the pointer itself; the address listed is the address of the pointer. The value (0x08059199 in the example above) is the actual value that the pointer has. Everything to the right of the arrow is just a "hint" telling you what it points to. Step 2. Dive on the pointer again. Repeat the double click or middle mouse button, this time on the data value in the variable window. (So you are double clicking where it says 0x08059199). This will effectively "dereference" the pointer. Now the window is focused not on the pointer itself but on the thing that the pointer pointed to. Notice that the address box now contains 0x08059199, which was the value before. expression: *(((int *) p)) Address: 0x08059199 Type: int and down in the data area it will say something like 0x000001a5 (412) Step 3. Cast the data window to the type you want. Just click in the type field and change it to say int[10]. Then hit return. This tells the debugger that 0x08059199 is the beginning of an array of 10 integers. The window will grow two new fields: Slice and Filter. You can leave those alone for now, but they can be useful later. The data area will now show two columns, "field" and "value", and 10 rows.
The field column will be the index in the array [0] - [9] and the value column will tell you what data you have in each array location. Other tips: In more complicated data structures you may want to dive on individual elements (which might also be pointers; diving will dereference them as well) You can always cast to different types or lengths to look at data "as if it was" whatever You can edit the actual data values by clicking on the value column and editing what you find there. This is useful when you want to provoke specific mis-behavior from your application You can always undo diving operations with the "<" icon in the upper right hand corner of the variable window. There are some online videos that you might find helpful at http://www.roguewave.com/products/totalview/resources/videos.aspx in particular there is one labeled "getting started with TotalView". Don't hesitate to contact us at Rogue Wave Software for TotalView usage tips! support at roguewave dot com is a good address for that. Chris Gottbrath (Chris dot Gottbrath at roguewave dot com) TotalView Product Manager at Rogue Wave Software
2,110,900
2,110,952
Reassignment of a reference
Suppose I have a class class Foo { public: ~Foo() { delete &_bar; } void SetBar(const Bar& bar) { _bar = bar; } const Bar& GetBar() { return _bar; } private: Bar& _bar; } And my usage of this class is as follows (assume Bar has a working copy constructor) Foo f; f.SetBar(*(new Bar)); const Bar* bar = &(f.GetBar()); f.SetBar(*(new Bar(bar))); delete bar; I have a situation similar to this (in code I didn't write) and when I debug at a breakpoint set on the "delete bar;" line, I see that &f._bar == bar My question is this: Why do &f._bar and bar point to the same block of memory, and if I leave out the "delete bar;", what are the consequences, from a memory management standpoint? Many thanks!
References cannot be "reseated", setBar() just copies the contents of bar to the object referenced by _bar. If you need such a functionality use pointers instead. Also your usage example would be much simpler if you were just using pointers.
2,111,297
2,111,364
Using relative filepaths on a portable C++ application
I am developing a portable C++ application. The development environment is Linux. I have code that loads data from an XML file and creates an object model out of it. Currently the path to the file is provided as /home/myuser/projectdir/xmlfilename.xml. This is problematic when I use it from a different computer, where the home directory name will be different. I tried something like ~/myuserprojectdir/xmlfilename.xml but it didn't work. So is there a standard method of defining file names that will work on a variety of platforms without any issues? Or any standard method that will work on Linux machines? Any thoughts?
You need to locate the user's home directory. To do this, use getpwent to get the user record and from there the home directory. Then append the rest of the path to your xml file, /myuserprojectdir/xmlfilename.xml, to the value you get. This will work even if the user's home directory is not /home/$USER. It works on Linux and OS X, and will probably work on Windows with cygwin installed. Here's a working example with error checking omitted for clarity: #include <stdio.h> #include <string.h> #include <unistd.h> #include <sys/types.h> #include <pwd.h> int main() { char* user = getlogin(); struct passwd* userrecord; while((userrecord = getpwent()) != 0) if (0 == strcmp(user, userrecord->pw_name)) printf("save file is %s/myuserprojectdir/xmlfilename.xml\n", userrecord->pw_dir); return 0; } output: save file is /Users/alex/myuserprojectdir/xmlfilename.xml This is how it works (from man getpwent): struct passwd * getpwent(void); // The getpwent() function sequentially reads the password database and is intended for programs that wish to process the complete list of users. struct passwd { char *pw_name; /* user name */ // <<----- check this one char *pw_passwd; /* encrypted password */ uid_t pw_uid; /* user uid */ gid_t pw_gid; /* user gid */ time_t pw_change; /* password change time */ char *pw_class; /* user access class */ char *pw_gecos; /* Honeywell login info */ char *pw_dir; /* home directory */ // <<----- read this one char *pw_shell; /* default shell */ time_t pw_expire; /* account expiration */ int pw_fields; /* internal: fields filled in */ }; To get the username, use getlogin... here's a snippet from man getlogin. char * getlogin(void); // The getlogin() routine returns the login name of the user associated with the current session ...
2,111,314
2,111,346
What is std::vector::front() used for?
Sorry if this has been asked before, but I am wondering what the use of std::vector::front() is. Is there a reason to use e.g. myvector.front() rather than myvector[0] or myvector.at(0)?
Some of the generic algorithms that also work on lists use it. This is an example of a general principle: if you provide accessors for all the semantics you support, not just the implementation you support, it is easier to write generically and therefore easier to reuse code.
2,111,474
2,111,708
Reading from a file in C++
I'm trying to write a recursive function that does some formatting within a file I open for a class assignment. This is what I've written so far: const char * const FILENAME = "test.rtf"; void OpenFile(const char *fileName, ifstream &inFile) { inFile.open(FILENAME, ios_base::in); if (!inFile.is_open()) { cerr << "Could not open file " << fileName << "\n"; exit(EXIT_FAILURE); } else { cout << "File Open successful"; } } int Reverse(ifstream &inFile) { int myInput; while (inFile != EOF) { myInput = cin.get(); } } int main(int argc, char *argv[]) { ifstream inFile; // create ifstream file object OpenFile(FILENAME, inFile); // open file, FILENAME, with ifstream inFile object Reverse(inFile); // reverse lines according to output using infile object inFile.close(); } The question I have is in my Reverse() function. Is that how I would read in one character at a time from the file? Thanks.
void Reverse(ifstream &inFile) { char myInput; while ( inFile.get( myInput ) ) { // do something with myInput } }
2,111,480
2,111,495
How do I know if HWND is desktop itself?
I use GetForegroundWindow to get the foreground window handle but if there is no window, then it returns the HWND to the desktop. How do I know if the HWND is the desktop?
Compare it with the result of calling GetDesktopWindow().
2,111,550
2,111,589
Is there a way, using templates, to prevent a class from being derivable in C++
I need to prevent a class from being derived from so I thought to myself, this is something that Boost is bound to have already done. I know they have a noncopyable, they must have a nonderivable... Imagine my surprise when I couldn't find it.... That got me thinking.. There must be a reason. Maybe it isn't possible to do using templates.. I'm sure if it was easy it's be in the boost libraries. I know how to do it without using templates, i.e. using a base class with a private constructor i.e. class ThatCantBeDerived; // Forward reference class _NonDeriv { _NonDeriv() {} friend class ThatCantBeDerived; }; class ThatCantBeDerived : virtual public _NonDeriv { public: ThatCantBeDerived() : _NonDeriv() { } }; Or something like this.. Maybe it's the forward reference that causes the problem, or maybe there isn't a portable way to achieve it.. Either way, I'm not sure why it isn't in boost.. Any ideas?
Under the current spec, it is explicitly forbidden to "friend" a template argument, so templatizing your example would make it non-standards-compliant. Boost probably would not want to add something like that to its libraries. I believe this restriction is being relaxed in C++0x, however, and there are workarounds for some compilers.
2,111,593
2,111,672
When is it good to use c++ iostreams over ReadFile, WriteFile, fprintf, etc ...?
I find that it is tremendously easier to use streams in c++ instead of windows functions like ReadFile, WriteFile, etc or even fprintf. When is it not good to use streams? When is it good to use streams? Is it safe to use streams? How come a lot of programmers don't use streams? This is just something I've always wondered about and maybe you can shed some wisdom.
When is it not good to use streams? Streams are not guaranteed to be thread safe. It's easy to dream up a situation where you cannot use streams without some synchronization. Stream objects are typically pretty "heavy". They may be too heavy for low-memory or embedded environments. When is it good to use streams? In general. Is it safe to use streams? Yes, but you've got to be careful when sharing a stream asynchronously. How come a lot of programmers don't use streams? Preference, style, or they learned a different method (or a different language) first. I find that plenty of old "c++" examples online are written with a C-flavor to them, preferring printf to cout.
2,111,667
9,842,857
Compile time string hashing
I have read in a few different places that using C++11's new string literals it might be possible to compute a string's hash at compile time. However, no one seems to be ready to come out and say that it will be possible or how it would be done. Is this possible? What would the operator look like? I'm particularly interested in use cases like this. void foo( const std::string& value ) { switch( std::hash(value) ) { case "one"_hash: one(); break; case "two"_hash: two(); break; /*many more cases*/ default: other(); break; } } Note: the compile time hash function doesn't have to look exactly as I've written it. I did my best to guess what the final solution would look like, but meta_hash<"string"_meta>::value could also be a viable solution.
This is a little bit late, but I succeeded in implementing a compile-time CRC32 function with the use of constexpr. The problem with it is that at the time of writing, it only works with GCC and not MSVC nor Intel compiler. Here is the code snippet: // CRC32 Table (zlib polynomial) static constexpr uint32_t crc_table[256] = { 0x00000000L, 0x77073096L, 0xee0e612cL, 0x990951baL, 0x076dc419L, 0x706af48fL, 0xe963a535L, 0x9e6495a3L, 0x0edb8832L, 0x79dcb8a4L, 0xe0d5e91eL, 0x97d2d988L, 0x09b64c2bL, 0x7eb17cbdL, 0xe7b82d07L, ... }; template<size_t idx> constexpr uint32_t crc32(const char * str) { return (crc32<idx-1>(str) >> 8) ^ crc_table[(crc32<idx-1>(str) ^ str[idx]) & 0x000000FF]; } // This is the stop-recursion function template<> constexpr uint32_t crc32<size_t(-1)>(const char * str) { return 0xFFFFFFFF; } // This doesn't take into account the nul char #define COMPILE_TIME_CRC32_STR(x) (crc32<sizeof(x) - 2>(x) ^ 0xFFFFFFFF) enum TestEnum { CrcVal01 = COMPILE_TIME_CRC32_STR("stack-overflow"), }; CrcVal01 is equal to 0x335CC04A Hope this will help you!
2,112,188
2,112,264
What are all of the well-known virtual folder GUIDs?
There seem to be a few virtual folders which have GUIDs associated to them (control panel, desktop) - ::{00021400-0000-0000-c000-000000000046} // desktop Where the blazes are these defined? When are they used? What I want is a way to have a string which represents a virtual folder without any ambiguity. If, for instance, I were to create a PIDL for the desktop, the display name comes back as "C:\Users\Steve\Desktop". Well, that's true at the moment - but it's not really the correct folder. I can navigate in Explorer to that folder, and it contains a portion of the files on my desktop, not the entire desktop. What I want is a way to encode that location as a string that will always navigate to the virtual desktop folder (the one that has all of its contents, not just a few things). Does anyone know of a definitive list of such GUIDs? Or how I might convert a given PIDL into one? I tried SHGetDisplayName(pidl, SHGDN_*) - every version of that for the desktop pidl gives me either a short "Desktop" or "C:\Users\Steve\Desktop". (I'm logged in under the 'steve' account, obviously). Ideas / comments / pointers? EDIT: So it seems that I can use the given answers below to have a list of Known Folder GUIDs. But does anyone know programmatically how to convert from a PIDL -> known folder GUID? I assume that I can ParseDisplayName("::{guid}") to get the PIDL, but is there a way to get to the GUID? EDIT2: I still cannot find a way to get to the GUID programmatically. However, for my purposes, I am recording the CSIDL_xxx that I use to create the object initially, and write that out & restore it later, and then create a PIDL by way of the CSIDL, which retains its correct identity (i.e. it doesn't degrade into "C:\Users\\Desktop" but rather generates a PIDL that really points to the virtual desktop). The trick for me is to always use the CSIDL->PIDL, never going to a string in between. CSIDL->PIDL->string->PIDL = degeneration into non-virtual path.
Thanks everyone for the help - and I'll keep looking if anyone finds more on the subject and posts it, I'd be interested! ;)
If I understand you correctly you are looking for the CSIDLs (pre-Vista, include Shlobj.h) or KNOWNFOLDERID (>= Vista, Knownfolders.h).
2,112,247
2,113,954
How to better organize the code in C++ projects
I'm currently in the process of trying to organize my code in a better way. To do that I used namespaces, grouping classes by components, each having a defined role and a few interfaces (actually abstract classes). I found it to be pretty good, especially when I had to rewrite an entire component, which I did with almost no impact on the others. (I believe it would have been a lot more difficult with a bunch of mixed-up classes and methods.) Yet I'm not 100% happy with it. Especially I'd like to do a better separation between interfaces, the public face of the components, and their implementations in behind. I think the 'interface' of the component itself should be clearer; I mean a newcomer should understand easily what interfaces he must implement, what interfaces he can use, and what's part of the implementation. Soon I'll start a bigger project involving up to 5 devs, and I'd like to be clear in my mind on that point. So what about you? How do you do it? How do you organize your code?
Especially I'd like to do a better separation between interfaces, the public face of the components, and their implementations in behind. I think what you're looking for is the Facade pattern: A facade is an object that provides a simplified interface to a larger body of code, such as a class library. -- Wikipedia You may also want to look at the Mediator and Builder patterns if you have complex interactions in your classes. The Pimpl idiom (aka compiler firewall) is also useful for hiding implementation details and reducing build times. I prefer to use Pimpl over interface classes + factories when I don't need polymorphism. Be careful not to over-use it though. Don't use Pimpl for lightweight types that are normally allocated on the stack (like a 3D point or complex number). Use it for the bigger, longer-lived classes that have dependencies on other classes/libraries that you'd wish to hide from the user. In large-scale projects, it's important not to use an #include directive in a header file when a simple forward declaration will do. Only put an #include directive in a header file if absolutely necessary (prefer to put #includes in the implementation files). If done right, proper #include discipline will reduce your compile times significantly. The Pimpl idiom can help to move #includes from header files to their corresponding implementation files. A coherent collection of classes / functions can be grouped together in its own namespace and put in a subdirectory of your source tree (the subdirectory should have the same name as the library namespace). You can then create a static library subproject/makefile for that package and link it with your main application. This is what I'd consider a "package" in UML jargon. In an ideal package, classes are closely related to each other, but loosely related with classes outside the package. It is helpful to draw dependency diagrams of your packages to make sure there are no cyclical dependencies.
2,112,252
2,112,336
How do I check whether a file exists in C++ for a Windows program?
This is for a Windows-only program so portable code is not an issue. I need simply: bool DoesFileExist( LPWSTR lpszFilename ) { // ... }
There are two common ways to do this in Windows code: GetFileAttributes and CreateFile. bool DoesFileExist(LPCWSTR pszFilename) { DWORD dwAttrib = GetFileAttributes(pszFilename); return (dwAttrib != INVALID_FILE_ATTRIBUTES && !(dwAttrib & FILE_ATTRIBUTE_DEVICE) && !(dwAttrib & FILE_ATTRIBUTE_DIRECTORY)); } Note the explicit INVALID_FILE_ATTRIBUTES check: that (all bits set) is what GetFileAttributes returns on failure. This will tell you a file exists, but it won't tell you whether you have access to it. For that you need to use CreateFile. bool DoesFileExist(LPCWSTR pszFilename) { HANDLE hf = CreateFile(pszFilename, GENERIC_READ, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL); if (INVALID_HANDLE_VALUE != hf) { CloseHandle(hf); return true; } else if (GetLastError() == ERROR_SHARING_VIOLATION) { // should we return 'exists but you can't access it' here? return true; } return false; } But remember that even if you get back true from one of these calls, the file could still not exist by the time you get around to opening it. Many times it's best to just behave as if the file exists and gracefully handle the errors when it doesn't.
2,112,302
2,112,358
Enumerate COM object (IDispatch) methods using ATL?
Using ATL (VS2008) how can I enumerate the available methods available on a given IDispatch interface (IDispatch*)? I need to search for a method with a specific name and, once I have the DISPID, invoke the method (I know the parameters the method takes.) Ideally I would like to do this using smart COM pointers (CComPtr<>). Is this possible?
You can't enumerate all the available methods unless the object implements IDispatchEx. However, if you know the name of the method you want to call, you can use GetIDsOfNames to map the name to the proper DISPID. HRESULT hr; CComPtr<IDispatch> dispatch; DISPID dispid; VARIANT varResult; WCHAR *member = L"YOUR-FUNCTION-NAME-HERE"; DISPPARAMS* dispparams; // Get your pointer to the IDispatch interface on the object here. Also setup your params in dispparams. VariantInit(&varResult); hr = dispatch->GetIDsOfNames(IID_NULL, &member, 1, LOCALE_SYSTEM_DEFAULT, &dispid); if (SUCCEEDED(hr)) { hr = dispatch->Invoke(dispid, IID_NULL, LOCALE_USER_DEFAULT, DISPATCH_METHOD, dispparams, &varResult, NULL, NULL); } Note that the name must be a wide-character string and that the DISPID returned by GetIDsOfNames is what you pass as Invoke's first argument. Edit: For completeness, I suspect there is a way to interrogate the ITypeInfo2 interface (assuming there is a type library for the object) that you get from IDispatch::GetTypeInfo for a list of methods, but I've not done it. See the other answer.
2,112,318
2,112,356
C++: Will structure be copied properly?
I have a pointer to a structure and I need to implement a method that will copy all of the memory contents of a structure. Generally speaking I need to perform a deep copy of a structure. Here's the structure: typedef struct { Size2f spriteSize; Vertex2f *vertices; GLubyte *vertex_indices; } tSprite; And here's the method I've implemented that should copy the structure: tSprite* copySprite(const tSprite *copyFromMe) { tSprite *pSpriteToReturn = (tSprite*)malloc( sizeof(*copyFromMe) ); memcpy(pSpriteToReturn, copyFromMe, sizeof(*copyFromMe) ); return pSpriteToReturn; } The problem is that I'm not sure that arrays "vertices" and "vertex_indices" are going to be copied properly. What is going to be copied in this way? Address of the array or the array itself? Should I copy the arrays after copying the structure? Or is it enough just to copy the structure? Something like this: ... pSpriteToReturn->vertices = (Vector2f*)malloc( sizeof(arraysize) ); memcpy(pSpriteToReturn->vertices, copyFromMe->vertices, sizeof(arraysize) ); ... Thank you in advance.
As a rule of thumb, don’t ever use memcpy in C++ in normal code (it might crop up in very low-level code, e.g. in allocators)1). Instead, create a suitable copy constructor and overload operator = (the assignment operator) to match it (and a destructor – rule of three: “if you implement either of copy constructor, operator = and destructor, you must implement all three). If you do not implement your own versions of the copy constructor an the assignment operator, C++ will create default versions for you. These versions will implement a shallow copy (much like what memcpy would do), i.e. in your case the array contents would not be copied – only the pointers. 1) Incidentally, the same goes for malloc and free. Don’t use them, instead use new/new[] and delete/delete[].
2,112,759
2,113,036
C++ Inherited Virtual Method Still Uses Base Class Implementation
I have a base class called Packet: // Header File class Packet { public: virtual bool isAwesome() const { return false; } } and an inherited class called AwesomePacket: // Header File class AwesomePacket : public Packet { public: virtual bool isAwesome() const { return true; } } However, when I instantiate an AwesomePacket and call isAwesome(), the method returns false instead of true. Why is this the case?
By any chance is your code calling isAwesome in the Packet constructor: Packet::Packet() { // this will always call Packet::isAwesome if (isAwesome()) { } } Even if this Packet constructor is being used to construct the parent object of an AwesomePacket object, this will not call AwesomePacket::isAwesome. This is because at this point in time the object is not yet an AwesomePacket.
2,113,043
2,113,092
Concatenating strings in C++
I am a rather inexperienced C++ programmer, so this question is probably rather basic. I am trying to get the file name for my copula: string MonteCarloBasketDistribution::fileName(char c) { char result[100]; sprintf(result, "%c_%s(%s, %s).csv", copula.toString().c_str(), left.toString().c_str(), right.toString().c_str()); return string(result); } which is used in: MonteCarloBasketDistribution::MonteCarloBasketDistribution(Copula &c, Distribution &l, Distribution &r): copula(c), left(l), right(r) { //..... ofstream funit; funit.open (fileName('u').c_str()); ofstream freal; freal.open (fileName('r').c_str()); } However, the files created have rubbish names, consisting mainly of weird characters. Any idea what I am doing wrong and how to fix it?
sprintf has 4 placeholders while you give only 3 parameters; worse, the first placeholder is %c but the first argument you pass is a char* (copula.toString().c_str()), not the char c, so every subsequent argument is matched to the wrong placeholder. I would suggest: string MonteCarloBasketDistribution::fileName(char c) { std::ostringstream result; result << c << "_" << copula.toString() << '(' << left.toString() << ", " << right.toString() << ").csv"; return result.str(); } Your sprintf is also not safe against buffer overflow; if you must stay with the printf family, prefer C99's snprintf, but a std::ostringstream avoids the problem entirely.
2,113,136
2,114,094
Trying to know why the OpenMP code does not parallelise
I just started learning how to use OpenMP. I am trying to figure out why the following code does not run in parallel with Visual Studio 2008. It compiles and runs fine. However it uses only one core on my quad core machine. This is part of the code that I am trying to port to a MATLAB mex function. Any pointer is appreciated. #pragma omp parallel for default(shared) private(dz, t, v, ts_count) reduction(+: sum_v) for(t = 0; t<T; t++) { dz = aRNG->randn(); v += mrdt* (tv - v) + vv_v_sqrt_dt * dz + vv_vv_v_dt*(dz*dz - 1.); sum_v += v; if(t == ts_count-1) { int_v->at_w(k++) = sum_v/(double)(t+1); ts_count += ts; } }
The v variable is computed using the v value of the previous iteration for(t = 0; t<T; t++) { ... v += ... ( tv - v ) .... ... } You cannot do that; it breaks the parallelism. The loop body must be able to run in any order, or in different parallel chunks at once, with no side effects. At first glance, it doesn't look like you can parallelize this kind of loop. Also double-check that OpenMP support is actually enabled in your project settings (the /openmp switch in Visual Studio 2008); without it the #pragma is silently ignored and everything runs on one core.
2,113,231
2,113,413
Making CMake choose static linkage when possible?
I'm working on a project that links against SOCI, which comes as both static and dynamic libraries. I'd like CMake to choose the static version when available, and the dynamic one otherwise. Is there a reasonable way to do this in CMake? I've come up with nothing looking through the docs so far.
Sounds like you need to add CMAKE_EXE_LINKER_FLAGS=-static
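Note that CMAKE_EXE_LINKER_FLAGS=-static forces fully static linking of the whole executable. If you'd rather have find_library itself prefer the static archive when both flavors are installed, a commonly seen (if somewhat hacky) alternative is to put .a in front of CMAKE_FIND_LIBRARY_SUFFIXES for the lookup. A sketch; the library name soci_core is an assumption and may differ between SOCI versions:

```cmake
# Prefer .a over .so when both exist; fall back to the shared library otherwise.
set(_orig_suffixes ${CMAKE_FIND_LIBRARY_SUFFIXES})
set(CMAKE_FIND_LIBRARY_SUFFIXES .a ${CMAKE_FIND_LIBRARY_SUFFIXES})
find_library(SOCI_LIBRARY NAMES soci_core)
set(CMAKE_FIND_LIBRARY_SUFFIXES ${_orig_suffixes})
```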
2,113,634
2,113,891
Problem with BOOST_CHECK_CLOSE_FRACTION
I'm using the Boost::Test library, and I am trying to check if an actual percent value is close to the expected value: BOOST_CHECK_CLOSE_FRACTION( items[i].ExpectedPercent, items[i].ActualCount / totalCount, 0.05); For some reason this check fails even when the values are close enough: difference between items[i].ExpectedPercent{0.40000000000000002} and items[i].ActualCount / totalReturned{0.42999999999999999} exceeds 0.050000000000000003 Is this a problem with Boost or a problem with how I am using Boost?
After some testing, it turns out that the documentation for BOOST_CHECK_CLOSE_FRACTION is incorrect. The tolerance should be specified as a fraction of the expected value. So, the following are all equivalent: BOOST_CHECK(abs(x - y) < (min(x, y) * 0.1)); BOOST_CHECK_CLOSE(x, y, 10); BOOST_CHECK_CLOSE_FRACTION(x, y, 0.1);
2,113,950
2,113,967
How to send keystrokes to a window?
I'm using keybd_event() and I want to use SendMessage() to send a keystroke to Notepad. Can this be done?
Using SendMessage to insert text into the edit buffer (which it sounds like you want): HWND notepad = FindWindow(_T("Notepad"), NULL); HWND edit = FindWindowEx(notepad, NULL, _T("Edit"), NULL); SendMessage(edit, WM_SETTEXT, NULL, (LPARAM)_T("hello")); If you need keycodes and arbitrary keystrokes, you can use SendInput() (available in 2k/xp and preferred), or keybd_event() (which will end up calling SendInput in newer OSs). Some examples here: http://www.codeguru.com/forum/showthread.php?t=377393 There are also the WM_SYSCOMMAND/WM_KEYDOWN/WM_KEYUP/WM_CHAR messages for SendMessage which you might be interested in.
2,114,106
2,114,143
I have a wxWidgets app that I want to add some cool effects to. Using GDI would be impossibly hard. Could I use Flash or something else?
I have an application that I want to add some cool animations to, to show state changes. However, in wxWidgets this would be difficult because I'd have to program these animations in straight GDI. What's the best way to add these effect windows? Should I open a Flash window and run a Flash sequence, or maybe use some other technology? Does .NET have something I could code into a DLL and run from my wxWidgets binary? I need something that makes it super easy to draw and set up the animation.
You could prepare the animation as a bunch of images (wxImage loaded from PNG, GIF, JPG or whatever files), and then use a timer to paint them onto a control. Maybe it sounds like too much, but I believe you could do it in 50-70 lines of code.
2,114,127
2,114,216
Include paths not found while compiling with g++ on MacOS
I'm trying to compile the simplest program on MacOS 10.6 like: $ g++ -o hello hello.cpp the following source: #include <iostream> int main (int argc, char * const argv[]) { std::cout << "Hello, World!\n"; return 0; } I'm getting the error: hello.cpp:1:20: error: iostream: No such file or directory hello.cpp: In function ‘int main(int, char* const*)’: hello.cpp:4: error: ‘cout’ is not a member of ‘std’ So obviously I have to add the include path somewhere. My question is where can I find the include directories and how can add them globally (I don't want to provide the include path whenever I want to compile). I just installed the XCode 3.1.4 and managed to compile it via Xcode, but not via command line. I found some header files in this directory: /Xcode3.1.4/SDKs/MacOSX10.5.sdk/System/Library/Frameworks/Kernel.framework/Versions/A/Headers and tried to add it to the HEADER_SEARCH_PATHS after reading this question, but no luck. I'm developing on Linux and everything is working fine there, but I want to continue doing that on MacOS. Any help?
On my Mac, that include file is in /usr/include/c++/4.0.0/iostream . Are you sure you have all the command-line development tools installed? They might not be by default; I'm pretty sure I had to install it manually when I first set up my Mac. There should be a "developer tools" package somewhere on your OS X installation media. Or, if you want to make sure you're getting the latest version, you can download it from: http://developer.apple.com/technology/xcode.html
2,114,358
2,114,379
Using STL Allocator with STL Vectors
Here's the basic problem. There's an API which I depend on, with a method using the following syntax: void foo_api (std::vector<type>& ref_to_my_populated_vector); The area of code in question is rather performance intensive, and I want to avoid using the heap to allocate memory. As a result, I created a custom allocator which allocates the memory required for the vector on the stack. So, I can now define a vector as: // Create the stack allocator, with room for 100 elements my_stack_allocator<type, 100> my_allocator; // Create the vector, specifying our stack allocator to use std::vector<type, my_stack_allocator> my_vec(my_allocator); This is all fine. Performance tests using the stack allocated vector compared to the standard vector show performance is roughly 4x faster. The problem is, I can't call foo_api! So... foo_api(my_vec); // Results in an error due to incompatible types. // Can't convert std::vector<type> to std::vector<type, allocator> Is there a solution to this?
You have to use the default allocator just as the function expects. You have two different types, and there's no way around that. Just call reserve prior to operating on the vector to get the memory allocations out of the way. Think about the bad things that could happen. That function may take your vector and start adding more elements. Soon, you could overflow the stack space you've allocated; oops! If you're really concerned about performance, a much better route is to replace operator new and kin with a custom memory manager. I have done so and allocations can be hugely improved. For me, allocating blocks of size 512 or less takes about 4 operations (move a couple of pointers around; I used a pool allocator).
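The reserve suggestion above can be sketched concretely. This is a minimal, self-contained illustration (the element type int, the count, and the helper name are made up for the example), showing that after an up-front reserve the buffer never moves during the subsequent push_backs:

```cpp
#include <cstddef>
#include <vector>

// Fill a vector after reserving capacity up front; returns the number of
// reallocations observed while pushing `count` elements. With reserve()
// called first, the standard guarantees this is zero until size exceeds
// the reserved capacity.
int fill_with_reserve(std::vector<int>& v, std::size_t count) {
    v.reserve(count);                 // one up-front allocation
    int reallocations = 0;
    const int* old_data = v.data();
    for (std::size_t i = 0; i < count; ++i) {
        v.push_back(static_cast<int>(i));
        if (v.data() != old_data) {   // buffer moved => a reallocation happened
            ++reallocations;
            old_data = v.data();
        }
    }
    return reallocations;
}
```

This gets the single allocation out of the hot path without changing the vector's type, so foo_api can still be called with it.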
2,114,477
2,114,489
how do I print a binary double array from commandline (unix)
I got a binary file that contains doubles. How do I print it to a terminal? I've tried octal dump 'od' but can't figure out the syntax. I've tried something like head -c80 | od -f but that doesn't work, and the man page for od is extremely bad. I've made a C program that does what I want, something like (assuming 10-double chunks): double tmp[10]; while(fread(tmp,sizeof(double),10,stdin)) for(int i=0;i<10;i++) printf("%f\t",tmp[i]); thanks.
Have you tried the hexdump utility? hexdump -e ' [iterations]/[byte_count] "[format string]" ' filename The format string should be "%f", the byte count should be 8 (sizeof(double)), and iterations the number of doubles you want per line. For example, to print one double per line: hexdump -v -e '1/8 "%f\n"' filename
2,114,694
2,114,713
HeapAlloc returns 0xC0000017: Not Enough Quota
I'm allocating a small number of data types, total size 2 MB. I only use one heap, and it runs fine until I get to a certain number of allocations; I'm pretty sure of this because I've commented out one allocation only for it to crash on the next. Quota = disk space? The documentation doesn't cover error codes for this specific function. I've profiled the application and there's plenty of free memory allocated for the process. Also I put a data breakpoint on the heap pointer, and it doesn't break. The heap pointer is fine when I step through, up to my call to HeapAlloc. So strange...
Try allocating a large chunk of memory (i.e. >2 MB) until you get the error, to determine if the issue is the number of objects or the total heap size. Also, are you sure you aren't allocating more than 2 MB of memory? I've seen that error when the 2 GB limit is hit, but never at 2 MB unless your pagefile is full. If all else fails, reboot or try on a different machine.
2,114,797
2,114,817
Compute Median of Values Stored In Vector - C++?
I'm a programming student, and for a project I'm working on, one of the things I have to do is compute the median value of a vector of int values. I'm to do this using only the sort function from the STL and vector member functions such as .begin(), .end(), and .size(). I'm also supposed to make sure I find the median whether the vector has an odd number of values or an even number of values. And I'm stuck; below I have included my attempt. So where am I going wrong? I would appreciate it if you would be willing to give me some pointers or resources to get going in the right direction. Code: int CalcMHWScore(const vector<int>& hWScores) { const int DIVISOR = 2; double median; sort(hWScores.begin(), hWScores.end()); if ((hWScores.size() % DIVISOR) == 0) { median = ((hWScores.begin() + hWScores.size()) + (hWScores.begin() + (hWScores.size() + 1))) / DIVISOR); } else { median = ((hWScores.begin() + hWScores.size()) / DIVISOR) } return median; }
You are doing an extra division and overall making it a bit more complex than it needs to be. Also, there's no need to create a DIVISOR when 2 is actually more meaningful in context. Note that the average of the two middle values must be taken in floating point (dividing by 2.0, not 2), otherwise integer division truncates the result. double CalcMHWScore(vector<int> scores) { size_t size = scores.size(); if (size == 0) { return 0; // Undefined, really. } else { sort(scores.begin(), scores.end()); if (size % 2 == 0) { return (scores[size / 2 - 1] + scores[size / 2]) / 2.0; } else { return scores[size / 2]; } } }
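If the assignment's restriction to the sort function didn't apply, std::nth_element would avoid the full sort. A hedged sketch (the function name median is made up; the vector is taken by value, like the function above, so the caller's data is left untouched):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Median via partial selection instead of a full sort.
double median(std::vector<int> v) {
    if (v.empty()) return 0.0;  // undefined, really; mirrors the answer above
    const std::size_t mid = v.size() / 2;
    // Places the element that belongs at index `mid` there, with everything
    // smaller (in some order) to its left.
    std::nth_element(v.begin(), v.begin() + mid, v.end());
    if (v.size() % 2 != 0)
        return v[mid];
    // Even count: the other middle element is the largest of the lower half.
    const int lower = *std::max_element(v.begin(), v.begin() + mid);
    return (lower + v[mid]) / 2.0;
}
```

nth_element runs in linear time on average, which matters once the vectors get large; for a homework-sized input the sort-based version is perfectly fine.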
2,114,941
2,115,068
Simple Qt Embedded Window Question
How are you meant to initialise the program using Qt embedded? At the moment I'm using QMainWindow, but this means including a lot extra when configuring Qt and makes the applications a lot bigger when compiling them statically. What are you meant to use in place of QMainWindow? I don't need anything like maximise buttons - I'm using a small screen with widgets taking up the entire view with no borders. Thanks
A QWidget without a parent is a window. If you don't want the things provided by QMainWindow, you don't have to use it - you can use any QWidget subclass.
2,115,185
2,166,256
Prevent memory working set minimize in Console application?
I want to prevent the working set of my console application from being minimized (trimmed). In a windows application, I can do this by overriding the SC_MINIMIZE message. But how can I intercept SC_MINIMIZE in a console application? Or can I prevent working set minimization some other way? I use Visual Studio 2005 C++. Somebody had the same problem, and the solution is not pleasing. :( http://www.eggheadcafe.com/software/aspnet/30953826/working-set-and-console-a.aspx Thanks, in advance.
Working set trimming can only be prevented by locking pages in memory, either by locking them explicitly with VirtualLock or by mapping memory into AWE. But both operations are extremely privileged and require the application to run under an account that is granted the 'Lock Pages in Memory' privilege, see How to: Enable the Lock Pages in Memory Option. By default nobody, not even administrators, has this privilege. Technically, that is the answer you are looking for (omitting the 'minor' details of how to identify the regions to lock). But your question indicates that you are on a totally wrong path. Working set trimming is something that occurs frequently and has no serious adverse effects. You are most likely confusing the trimming with paging out the memory, but they are distinct phases of the memory page lifetime. Trimming occurs when the OS takes away the mapping of the page from the process and places the page into a standby list. This is a very fast and simple operation: the page is added to the standby list and the PTE is marked accordingly. No IO operation occurs, and the physical RAM content is not changed. When, and if, the process accesses the trimmed page again, a soft fault will occur. The TLB miss will trigger a walk into kernel land, and the kernel will locate the page in the standby list and re-allocate it to the process. Fast, quick, easy; again, no IO operation occurs, nor does any RAM content change for the page. So a process that has all its working set trimmed will regain the entire active set fairly quickly (microseconds) if it keeps referencing the pages. Only when the OS needs new pages for its free list will it look into the standby list, take the oldest page and actually swap it to disk. In this situation IO does occur and the RAM content is zeroed out. When the process accesses the page again, a hard fault will occur. The TLB miss will wake the kernel, it will inspect the PTE list, and now a 'real' page fault will occur: a new free page is allocated, the content is read from the disk, and then the page is given to the process and execution resumes from the TLB miss location. As you can see, there is a huge difference between working set trimming and a memory-pressure page swap. If your console application is trimmed, don't sweat over it. You will do incalculably more damage to the system's health by locking pages in memory. And btw, you also give a similarly bad user experience by refusing to minimize when asked to, just because you misunderstand the page life cycle. It is true that there are processes that have a legitimate demand to keep their working set as hot as possible. All those processes, always, are implemented as services. Services benefit from a more lenient trimming policy from the OS, and this policy is actually configurable. If you are really concerned about system memory and want to help the OS, you should register for memory notifications using CreateMemoryResourceNotification and react to memory pressure by freeing your caches, growing them back when you're notified that free memory is available.
2,115,253
2,138,037
How to write a bison file to automatically use a token enumeration list define in a C header file?
I am trying to build a parser with Bison/Yacc to be able to parse a flow of tokens produced by another module. The tokens are already listed in an enumeration type as follows: // C++ header file enum token_id { TokenType1 = 0x10000000, TokenType2 = 0x11000000, TokenType3 = 0x11100000, //... and the list goes on for about 200-300 lines }; I have gone through the documentation of Bison many times but I couldn't find a better solution than copying each token into the Bison file like this: /* Bison/Yacc file */ %token TokenType1 0x10000000 %token TokenType2 0x11000000 %token TokenType3 0x11100000 //... If I have to do it like that, it will become pretty hard to maintain the file if the other module's specification changes (which happens quite often). Could you please tell me how to do it, or point me in the right direction (any idea/comment is welcome). It would greatly help me! Thanks in advance.
Instead of doing: /* Bison/Yacc file */ %token TokenType1 0x10000000 %token TokenType2 0x11000000 %token TokenType3 0x11100000 //... you just need to include the file with the token type in the declaration part: #include "mytoken_enum.h" // ... %token TokenType1 %token TokenType2 %token TokenType3 //... EDIT: This cannot be done: as you see from the numbers above, Bison just numbers the tokens sequentially, and those numbers are used, shifted, as indices into the parser lookup tables, simply for speed. So Bison does not support that, I feel sure, and it would not be easy to fit with the implementation model. You just need a wrapper to convert the real tokens to yacc/bison tokens (e.g. via yylex()).
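The wrapper idea in the EDIT can be sketched like this. A minimal, hedged illustration: the module enum values and the Bison token numbers below are made-up stand-ins (real Bison token codes start at 258 in the generated header, but yours will differ), and to_bison_token is a hypothetical helper name:

```cpp
#include <map>

// Token ids as defined by the external module (stand-ins for the real enum).
enum module_token { ModTokenType1 = 0x10000000, ModTokenType2 = 0x11000000 };

// Token ids as Bison would generate them in the parser header (stand-ins).
enum bison_token { TOKENTYPE1 = 258, TOKENTYPE2 = 259 };

// Map one module token id to the corresponding Bison token id; returns 0
// (end-of-input to Bison) for anything unknown.
int to_bison_token(int module_id) {
    static const std::map<int, int> table = {
        { ModTokenType1, TOKENTYPE1 },
        { ModTokenType2, TOKENTYPE2 },
    };
    auto it = table.find(module_id);
    return it == table.end() ? 0 : it->second;
}

// A real yylex() would pull the next token from the other module's stream
// and return to_bison_token(next_module_token()).
```

The table is the one place that has to track the external module's spec, which is easier to maintain (or generate from the header) than 200-300 %token lines.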
2,115,575
2,115,597
Why POSIX is called "Portable Operating System Interface"?
I have searched hard but am still confused about why POSIX is called "Portable Operating System Interface". What I learned is that it is some threading library for the Unix environment, because when you need to use it under Windows you have to use Cygwin or "Windows Services for UNIX", etc. That's why I am confused why it is called a Portable OS Interface. I am a professional C/C++ programmer in the Windows domain but new to Unix/Linux. Thanks for your answers in advance.
Before Posix, the Unix family tree was becoming very diverse and incompatible. A program written for one Unix was not compatible with a different Unix without significant porting effort. Posix was one of the attempts to present a common set of utilities and programming interfaces so that your software would be portable to multiple versions of Unix. Since Posix is about the interface and not the actual OS, it is possible to have a Posix facade on a non Unix OS (such as the Microsoft Windows Services for Unix presenting a Posix facade on top of Windows).
2,115,640
2,115,874
STL Multimap Remove/Erase Values
I have an STL multimap and I want to remove entries from the map which have a specific value. I do not want to remove the entire key, as that key may map to other values which are required. Any help please.
If I understand correctly these values can appear under any key. If that is the case you'll have to iterate over your multimap and erase specific values. typedef std::multimap<std::string, int> Multimap; Multimap data; for (Multimap::iterator iter = data.begin(); iter != data.end();) { // you have to do this because iterators are invalidated Multimap::iterator erase_iter = iter++; // removes all even values if (erase_iter->second % 2 == 0) data.erase(erase_iter); }
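As a side note, since C++11 the associative containers' erase returns the iterator following the erased element, which tidies the loop in the answer above. A sketch (erase_by_value is a made-up helper name for illustration):

```cpp
#include <cstddef>
#include <map>
#include <string>

// Remove every entry whose mapped value matches `value`, regardless of key.
// Returns the number of entries removed.
std::size_t erase_by_value(std::multimap<std::string, int>& m, int value) {
    std::size_t removed = 0;
    for (auto it = m.begin(); it != m.end(); ) {
        if (it->second == value) {
            it = m.erase(it);  // C++11: erase returns the next valid iterator
            ++removed;
        } else {
            ++it;
        }
    }
    return removed;
}
```

On a pre-C++11 compiler the post-increment-before-erase pattern from the answer above is the way to go.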
2,115,816
2,117,525
OpenCV cvLoadImage() does not load images in visual studio debugger?
I am trying to work out a simple hello world for OpenCV but am running out of ideas as to why it is not working. When I compile and run this code: #include <cv.h> #include <highgui.h> int main(int argc, char* argv[]) { IplImage* img = cvLoadImage( "myjpeg.jpg" ); cvNamedWindow( "MyJPG", CV_WINDOW_AUTOSIZE ); cvShowImage("MyJPG", img); cvWaitKey(0); cvReleaseImage( &img ); cvDestroyWindow( "MyJPG" ); return 0; } I get a grey box about 200x200 instead of the indicated .jpg file. If I use a different jpg I get the same kind of window, and if I put in an invalid filename, I get a very tiny window (expected). I am using Visual Studio 2008 under Windows 7 Professional. Most of the sample programs seem to work fine, so I am doubly confused that they load their sample jpegs just fine while the code above does not (I even tried the sample jpeg). Update The executables produced by compiling work fine; however, the Visual Studio 2008 debugger loads a null pointer into img every time I try to run the debugger, regardless of whether the file location is implicit or explicit.
It really seems like there's a problem with the path to myjpeg.jpg since the current directory could be different when you're running under the debugger. By default, the current directory that the Visual Studio debugger uses is the directory containing the .vcproj file, but you can change it in the project properties (Debugging -> Working Directory). Are you 100% sure that you pass the absolute path correctly? Try to pass the same path to fopen and see if it also returns NULL. If so, then the path is incorrect. If you want to see exactly what file is the library trying to open you can use Project Monitor with a filter on myjpeg.jpg.
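The fopen probe suggested above can be wrapped in a tiny helper that needs no OpenCV at all; a minimal sketch (file_is_readable is a made-up name for illustration):

```cpp
#include <cstdio>

// True if `path` can be opened for reading, resolved relative to the
// current working directory -- the same lookup cvLoadImage's file access
// has to succeed at.
bool file_is_readable(const char* path) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    std::fclose(f);
    return true;
}
```

Call file_is_readable("myjpeg.jpg") right before cvLoadImage: if it returns false under the debugger but true when running the executable directly, the two runs have different working directories, and fixing the project's Debugging -> Working Directory setting should resolve it.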
2,115,931
2,126,566
Registering handlers for .NET COM event in C++
I've been following the 'tutorials' of how to expose a .NET framework through COM ( http://msdn.microsoft.com/en-us/library/zsfww439.aspx and http://msdn.microsoft.com/en-us/library/bd9cdfyx.aspx ). Everything works except for the events part. When I add events to the C# interface the following C++ code is generated: struct __declspec(uuid("...")) _MessageEventHandler : IDispatch {}; struct __declspec(uuid("...")) IConnection : IDispatch { virtual HRESULT __stdcall add_MessageEvent ( /*[in]*/ struct _MessageEventHandler * value ) = 0; virtual HRESULT __stdcall remove_MessageEvent ( /*[in]*/ struct _MessageEventHandler * value ) = 0; } The problem is that I haven't found any info on how to use this in C++. Do I need to derive from _MessageEventHandler and implement operator()? Or something else entirely? (Note that for the moment I'm also trying the more documented approach of using IConnectionPointContainer and IConnectionPoint.)
It has been a long time since I used COM, and at that time I was using Visual C++ 6.0. I remember that implementing sinks for COM connection points was not a straightforward process. There were multiple ways of implementing them, depending on whether you used MFC or ATL. Maybe there are easier ways now. Here are a couple of links that can help you: Code Project - Sinking events from managed code in unmanaged C++ Code Project - COM - large number of articles about COM Code Project - Handling COM Events in a Console Application Code Project - Handling COM Events in a Console Application, Part II