1,963,493
1,963,524
hash_map crashing in c++ stl
I am relatively experienced in Java coding but am new to C++. I have written the following C++ code as a solution to the USACO training problem which I have reproduced at this url. This code looks fine to me. However, it crashes on the sample test case given. On isolating the error, I found that if the second for loop is not run for the last iteration (I mean like in the sample test case, n = 5, so I run the loop only till i = 3 instead of i = 4), then it doesn't crash (and produces the expected output). Maybe the error is somewhere else, I can't detect it. Any ideas are welcome. Thanks in advance. Please excuse me for the slightly unwieldy formatting of the code (this is my first forum post). The files included are stdlib.h, stdio.h and hash_map.h

`
#include <stdlib.h>
#include <stdio.h>
#include <hash_map.h>

struct eqstr {
    bool operator()(const char* s1, const char* s2) const {
        return strcmp(s1, s2) == 0;
    }
};

int main(int argc, char** argv) {
    FILE *fin = fopen("gift1.in", "r");
    FILE *fout = fopen("gift1.out", "w");
    hash_map<const char*, int, hash<const char*>, eqstr> table;
    int n;
    fscanf(fin, "%d", &n);
    char name[15];
    char people[10][15];
    for (int i = 0; i < n; i++) {
        fscanf(fin, "%s", name);
        strcpy(people[i], name);
        table[people[i]] = 0;
    } //ifor
    for (int i = 0; i < n; i++) {
        fscanf(fin, "%s", name);
        int money;
        fscanf(fin, "%d", &money);
        int friends;
        fscanf(fin, "%d", &friends);
        char fname[15];
        int amt = money / friends;
        for (int j = 0; j < friends; j++) {
            fscanf(fin, "%s", fname);
            table[fname] = table[fname] + amt;
        } //jfor
        table[name] = table[name] - friends * amt;
    } //ifor
    for (int i = 0; i < n; i++)
        fprintf(fout, "%s %d\n", people[i], table[people[i]]);
    return (EXIT_SUCCESS);
}
`
The reason it is crashing is that vick is giving 0 friends money, which causes a divide-by-zero exception from the following line of code:

int amt = money/friends;

You should put in some special logic to handle the case when the person has 0 friends, so gives $0 away. As was stated in the other comments, you should use some STL classes (string, iostream, etc.) to help clean up the code.

Edit: Added the input data so the question and answer would make a little more sense:

5
dave
laura
owen
vick
amr
dave 200 3
laura owen vick
owen 500 1
dave
amr 150 2
vick owen
laura 0 2
amr vick
vick 0 0
1,963,560
1,963,570
'Bracket initializing'. (C++)
I'm learning C++ at the moment, from C++ Primer Plus. But I just felt like checking out the cplusplus website and skipping a little forward to file handling. I pretty much know the basics of file handling coming from Java, PHP, Visual Basic. But I came across a pretty weird line:

ostream os(&fb);

fb represents a filebuf. I just don't get the syntax of this, but I can figure out that it's the same as:

ostream os = &fb;

But I never really read about this way of initializing variables. So I'm wondering: am I just senseless and missing out on a really useful feature the entire time? Is this way of initializing just old? Is it something different? Thanks in advance.
Perhaps you should read this and this
1,963,880
1,963,893
Blt'ing through memoryDC does not work
HDC hdcScreen = GetDC(NULL);
HDC hdcWindow = GetDC(mWin);
HDC hdcMem = CreateCompatibleDC(hdcScreen);

if (!hdcScreen || !hdcWindow || !hdcMem) {
    MessageBox(NULL, "could not locate hdc's", "Viewer", MB_ICONERROR);
}

if (!StretchBlt(hdcMem, 0, 0, 300, 300, hdcScreen, 0, 0, 300, 300, SRCCOPY)) {
    MessageBox(NULL, "stretchblt failed", "Viewer", MB_ICONERROR);
} else if (!BitBlt(hdcWindow, 0, 0, 300, 300, hdcMem, 0, 0, SRCCOPY)) {
    // error
    MessageBox(NULL, "stretchblt failed", "Viewer", MB_ICONERROR);
}

ReleaseDC(NULL, hdcScreen);
ReleaseDC(mWin, hdcWindow);
ReleaseDC(mWin, hdcMem);

A single call to StretchBlt from Screen to Window works fine, but the above does not. Any helpful tips?

[Edit] No errors are triggered, so everything seems to work fine; however, the window associated with mWin is blank.
You need to create a bitmap (e.g. with CreateCompatibleBitmap) and select it into the memory DC using SelectObject. A freshly created memory DC contains only a 1x1 monochrome bitmap, so blitting 300x300 pixels into it loses almost everything.
1,963,926
1,963,977
When is a vtable created in C++?
When exactly does the compiler create a virtual function table?

1) when the class contains at least one virtual function, OR
2) when the immediate base class contains at least one virtual function, OR
3) when any parent class at any level of the hierarchy contains at least one virtual function.

A related question to this: is it possible to give up dynamic dispatch in a C++ hierarchy? For example, consider the following:

#include <iostream>
using namespace std;

class A {
public:
    virtual void f();
};

class B : public A {
public:
    void f();
};

class C : public B {
public:
    void f();
};

Which classes will contain a V-Table? Since B does not declare f() as virtual, does class C get dynamic polymorphism?
Beyond "vtables are implementation-specific" (which they are), if a vtable is used: there will be unique vtables for each of your classes. Even though B::f and C::f are not declared virtual, because there is a matching signature on a virtual method from a base class (A in your code), B::f and C::f are both implicitly virtual. Because each class has at least one unique virtual method (B::f overrides A::f for B instances, and C::f similarly for C instances), you need three vtables.

You generally shouldn't worry about such details. What matters is whether you have virtual dispatch or not. You don't have to use virtual dispatch, by explicitly specifying which function to call, but this is generally only useful when implementing a virtual method (such as to call the base's method). Example:

struct B {
    virtual void f() {}
    virtual void g() {}
};

struct D : B {
    virtual void f() { // would be implicitly virtual even if not declared virtual
        B::f();
        // do D-specific stuff
    }
    virtual void g() {}
};

int main() {
    {
        B b;
        b.g();
        b.B::g(); // both call B::g
    }
    {
        D d;
        B& b = d;
        b.g();    // calls D::g
        b.B::g(); // calls B::g
        b.D::g(); // not allowed
        d.D::g(); // calls D::g
        void (B::*p)() = &B::g;
        (b.*p)(); // calls D::g
        // calls through a function pointer always use virtual dispatch
        // (if the pointed-to function is virtual)
    }
    return 0;
}

Some concrete rules that may help; but don't quote me on these, I've likely missed some edge cases:

- If a class has virtual methods or virtual bases, even if inherited, then instances must have a vtable pointer.
- If a class declares non-inherited virtual methods (such as when it doesn't have a base class), then it must have its own vtable.
- If a class has a different set of overriding methods than its first base class, then it must have its own vtable, and cannot reuse the base's. (Destructors commonly require this.)
- If a class has multiple base classes, with the second or later base having virtual methods:
  - If no earlier bases have virtual methods and the Empty Base Optimization was applied to all earlier bases, then treat this base as the first base class.
  - Otherwise, the class must have its own vtable.
- If a class has any virtual base classes, it must have its own vtable.

Remember that a vtable is similar to a static data member of a class, and instances have only pointers to these. Also see the comprehensive article C++: Under the Hood (March 1994) by Jan Gray. (Try Google if that link dies.)

Example of reusing a vtable:

struct B {
    virtual void f();
};

struct D : B {
    // does not override B::f
    // does not have other virtuals of its own
    void g(); // still might have its own non-virtuals
    int n;    // and data members
};

In particular, notice B's dtor isn't virtual (and this is likely a mistake in real code), but in this example, D instances will point to the same vtable as B instances.
1,963,988
2,010,280
shared memory, MPI and queuing systems
My Unix/Windows C++ app is already parallelized using MPI: the job is split across N CPUs and each chunk is executed in parallel, quite efficiently, with very good speed scaling; the job is done right. But some of the data is repeated in each process, and for technical reasons this data cannot easily be split over MPI (...). For example:

- 5 GB of static data, the exact same thing loaded for each process
- 4 GB of data that can be distributed in MPI; the more CPUs used, the smaller this per-CPU RAM is.

On a 4-CPU job, this would mean at least a 20 GB RAM load, with most of the memory 'wasted'; this is awful. I'm thinking of using shared memory to reduce the overall load: the "static" chunk would be loaded only once per computer. So, the main questions are:

- Is there any standard MPI way to share memory on a node? Some kind of readily available + free library? If not, I would use boost.interprocess and use MPI calls to distribute local shared memory identifiers. The shared memory would be read by a "local master" on each node, and shared read-only. No need for any kind of semaphore/synchronization, because it won't change.
- Any performance hit or particular issues to be wary of? (There won't be any "strings" or overly weird data structures; everything can be brought down to arrays and structure pointers.)
- The job will be executed in a PBS (or SGE) queuing system; in the case of a process's unclean exit, I wonder if those will clean up the node-specific shared memory.
One increasingly common approach in High Performance Computing (HPC) is hybrid MPI/OpenMP programs, i.e. you have N MPI processes, and each MPI process has M threads. This approach maps well to clusters consisting of shared-memory multiprocessor nodes. Changing to such a hierarchical parallelization scheme obviously requires some more or less invasive changes; OTOH, if done properly it can increase the performance and scalability of the code in addition to reducing memory consumption for replicated data. Depending on the MPI implementation, you may or may not be able to make MPI calls from all threads. This is specified by the required and provided arguments to the MPI_Init_thread() function that you must call instead of MPI_Init(). Possible values are:

- MPI_THREAD_SINGLE: Only one thread will execute.
- MPI_THREAD_FUNNELED: The process may be multi-threaded, but only the main thread will make MPI calls (all MPI calls are "funneled" to the main thread).
- MPI_THREAD_SERIALIZED: The process may be multi-threaded, and multiple threads may make MPI calls, but only one at a time: MPI calls are not made concurrently from two distinct threads (all MPI calls are "serialized").
- MPI_THREAD_MULTIPLE: Multiple threads may call MPI, with no restrictions.

In my experience, modern MPI implementations like Open MPI support the most flexible MPI_THREAD_MULTIPLE. If you use older MPI libraries, or some specialized architecture, you might be worse off. Of course, you don't need to do your threading with OpenMP; that's just the most popular option in HPC. You could use e.g. the Boost threads library, the Intel TBB library, or straight pthreads or Windows threads for that matter.
1,963,992
1,964,001
Check Windows version
How can I check in C++ if the Windows version installed on a computer is Windows Vista or higher (Windows 7)?
Similar to other tests for checking the version of Windows NT:

OSVERSIONINFO vi;
memset(&vi, 0, sizeof vi);
vi.dwOSVersionInfoSize = sizeof vi;
GetVersionEx(&vi);
if (vi.dwPlatformId == VER_PLATFORM_WIN32_NT && vi.dwMajorVersion >= 6)
    // Vista or later
1,964,149
1,966,168
SetCursorPos and GetCursorPos not working at login screen?
When I attempt to use SetCursorPos at the Windows Vista/7 login screen, true is returned which at first made me think it was working. However, when I call GetCursorPos it gives me: -858993460,-858993460 Any thoughts why? Is this a "security feature" or am I using it incorrectly? The code works fine on non-login (i.e. normal) desktop.
Alternative solution: It is possible (but very tricky) to use mouse_event (which does work at login screen) instead of SetCursorPos. I don't have time to post code now, but if asked I may update this answer...
1,964,150
1,964,252
c++ test if 2 sets are disjoint
I know the STL has set_difference, but I need to just know if 2 sets are disjoint. I've profiled my code and this is slowing my app down quite a bit. Is there an easy way to see if 2 sets are disjoint, or do I need to just roll my own code? EDIT: I also tried set_intersection but it took the same time...
Modified hjhill's code to reduce complexity by a factor of O(log n) by getting rid of the count() call.

template<class Set1, class Set2>
bool is_disjoint(const Set1 &set1, const Set2 &set2)
{
    if (set1.empty() || set2.empty())
        return true;

    typename Set1::const_iterator it1 = set1.begin(), it1End = set1.end();
    typename Set2::const_iterator it2 = set2.begin(), it2End = set2.end();

    if (*it1 > *set2.rbegin() || *it2 > *set1.rbegin())
        return true;

    while (it1 != it1End && it2 != it2End) {
        if (*it1 == *it2)
            return false;
        if (*it1 < *it2)
            it1++;
        else
            it2++;
    }

    return true;
}

I've compiled and tested this code now, so it should be good.
1,964,256
1,964,273
How to make a C++ class compatible with stringstream objects?
I would like to be able to serialize my C++ classes using standard techniques like std::stringstream or boost::lexical_cast. For example, if I have a Point object (2, 4) then I would like to serialize it to "(2, 4)", and also be able to construct a Point object from this string. I have some code already, but with a few issues. Point to string works, but sometimes the input isn't completely read from the stream. The string to Point conversion results in a bad_cast exception.

class Point {
public:
    Point() : mX(0), mY(0) {}
    Point(int x, int y) : mX(x), mY(y) {}
    int x() const { return mX; }
    int y() const { return mY; }
private:
    int mX, mY;
};

std::istream& operator>>(std::istream& str, Point & outPoint) {
    std::string text;
    str >> text; // doesn't always read the entire text
    int x(0), y(0);
    sscanf(text.c_str(), "(%d, %d)", &x, &y);
    outPoint = Point(x, y);
    return str;
}

std::ostream& operator<<(std::ostream& str, const Point & inPoint) {
    str << "(" << inPoint.x() << ", " << inPoint.y() << ")";
    return str;
}

int main() {
    Point p(12, 14);
    std::string ps = boost::lexical_cast<std::string>(p); // "(12, 14)" => OK
    Point p2 = boost::lexical_cast<Point>(ps); // throws bad_cast exception!
    return 0;
}

How can I fix these problems?
To read an entire line, you can use the function std::getline (operator>> into a std::string stops at the first whitespace, so it only reads "(12," here):

std::string text;
getline(str, text);
1,964,463
1,964,561
How to interpret g++ warning
I've got a very strange g++ warning when I tried to compile the following code:

#include <map>
#include <set>

class A {
public:
    int x;
    int y;
    A(): x(0), y(0) {}
    A(int xx, int yy): x(xx), y(yy) {}
    bool operator< (const A &a) const {
        return (x < a.x || (!(a.x < x) && y < a.y));
    }
};

struct B {
    std::set<A> data;
};

int main() {
    std::map<int, B> m;
    B b;
    b.data.insert(A(1, 1));
    b.data.insert(A(1, 2));
    b.data.insert(A(2, 1));
    m[1] = b;
    return 0;
}

Output:

$ g++ -Wall -W -O3 t.cpp -o /tmp/t
t.cpp: In function ‘int main()’:
t.cpp:14: warning: dereferencing pointer ‘__x.52’ does break strict-aliasing rules
t.cpp:14: warning: dereferencing pointer ‘__x.52’ does break strict-aliasing rules
/usr/lib/gcc/i686-redhat-linux/4.4.2/../../../../include/c++/4.4.2/bits/stl_tree.h:525: note: initialized from here

It doesn't make any sense to me at all. How should I interpret it? I don't see what's wrong with the code posted. I forgot to specify compiler details:

$ gcc --version
gcc (GCC) 4.4.2 20091027 (Red Hat 4.4.2-7)
gcc 4.4 has a bug where std::map incorrectly triggers warnings about breaking strict-aliasing rules: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=39390

Your code is valid C++. Strict aliasing merely allows a subset of optimizations that are enabled by default when using -O3. Your solution is to compile with -fno-strict-aliasing or a different version of gcc. If you're curious about what strict aliasing is, that has been asked here.
1,964,478
1,964,490
Displaying exception debug information to users
I'm currently working on adding exceptions and exception handling to my OSS application. Exceptions have been the general idea from the start, but I wanted to find a good exception framework and in all honesty, understand C++ exception handling conventions and idioms a bit better before starting to use them. I have a lot of experience with C#/.Net, Python and other languages that use exceptions. I'm no stranger to the idea (but far from a master). In C# and Python, when an unhandled exception occurs, the user gets a nice stack trace and in general a lot of very useful priceless debugging information. If you're working on an OSS application, having users paste that info into issue reports is... well let's just say I'm finding it difficult to live without that. For this C++ project, I get "The application crashed", or from more informed users, "I did X, Y and Z, and then it crashed". But I want that debugging information too! I've already (and with great difficulty) made my peace with the fact that I'll never see a cross-platform and cross-compiler way of getting a C++ exception stack trace, but I know I can get the function name and other relevant information. And now I want that for my unhandled exceptions. I'm using boost::exception, and they have this very nice diagnostic_information thingamajig that can print out the (unmangled) function name, file, line and most importantly, other exception specific information the programmer added to that exception. Naturally, I'll be handling exceptions inside the code whenever I can, but I'm not that naive to think I won't let a couple slip through (unintentionally, of course). So what I want to do is wrap my main entry point inside a try block with a catch that creates a special dialog that informs the user that an error has occurred in the application, with more detailed information presented when the user clicks "More" or "Debug info" or whatever. This would contain the string from diagnostic_information. 
I could then instruct the users to paste this information into issue reports. But a nagging gut feeling is telling me that wrapping everything in a try block is a really bad idea. Is what I'm about to do stupid? If it is (and even if it's not), what's a better way to achieve what I want?
Wrapping all your code in one try/catch block is a-ok. It won't slow down the execution of anything inside it, for example. In fact, all my programs have (code similar to) this framework:

int execute(int pArgc, char *pArgv[]) {
    // do stuff
}

int main(int pArgc, char *pArgv[]) {
    // maybe setup some debug stuff,
    // like splitting cerr to log.txt
    try {
        return execute(pArgc, pArgv);
    } catch (const std::exception& e) {
        std::cerr << "Unhandled exception:\n" << e.what() << std::endl;
        // or other methods of displaying an error
        return EXIT_FAILURE;
    } catch (...) {
        std::cerr << "Unknown exception!" << std::endl;
        return EXIT_FAILURE;
    }
}
1,964,595
1,964,613
Less CPU usage in C++: declaring as unsigned int or not?
Which requires more CPU:

int foo = 3;

or typecasting it to an unsigned int:

unsigned int foo = 3;
My immediate thought is: it is not casting the int into an unsigned int, so there is no difference in speed. Here is the link about the fast types. However, it's the algorithms and functions that should be optimised, rather than the types.
1,964,708
1,964,759
I-Phone VM for Android
I'm considering opening up a project to create an iPhone virtual machine for Android 2.0 (read: Motorola Droid). Before I do so, I have some questions: Does one already exist that I just missed? Can the Droid's ARM Cortex A8, down-clocked to 550MHz (thanks Wikipedia), handle an iPhone abstraction layer? Performance-wise the best thing to do is write the app in C++, but for the health of the system, would it be better to put the iPhone VM on top of the Dalvik VM? Which approach would be better and why?
Does one already exist that I just missed? No.

Can the Droid's ARM Cortex A8, down-clocked to 550MHz, handle an iPhone? No, but the CPU is not strictly the issue.

Performance-wise the best thing to do is write the app in C++, but for the health of the system, would it be better to put the iPhone VM on top of the Dalvik VM? Which approach would be better and why?

It is conceivable you could create an Objective-C implementation in C/C++ that could run on Android via the Android NDK, but NDK libraries have limited system access, meaning you would not be able to do much in Objective-C. It is conceivable that your Objective-C implementation could run as a standalone application on rooted hardware, and therefore have access to more of the system, but then you pretty much aren't running Android anymore. It is inconceivable to create an Objective-C implementation that will run on the Dalvik VM and have performance similar to a native implementation of Objective-C on the iPhone. Note that I have not even discussed implementing the Cocoa libraries and such, as I have no idea how you could do that in reasonable time without copyright infringement, which will get you sued into oblivion (see: Apple v. Psystar). The only way to avoid this is a total cleanroom implementation, and the WINE folk will point out how they have been trying to do this for Windows for around 17 years and have had incomplete success. If your goal is to write applications once that run across Android and iPhone, consider PhoneGap, Appcelerator Titanium Mobile, and similar toolkits.
1,964,722
1,964,823
One big pool or several type specific pools?
I'm working on a video game which requires high performance, so I'm trying to set up a good memory strategy for a specific part of the game: the part that is the game "model", the game representation. I have an object containing a whole game representation, with different managers inside to keep the representation consistent, following the game rules. Every game entity is currently generated by a type-specific factory, so I have several factories that allow me to isolate and change the memory management of those entities as I wish. Now, I'm in the process of choosing between these two alternatives:

A. Having a memory pool for each type: that will allow really fast allocation/deallocation and minimal fragmentation, as an object pool already knows the size of the allocated objects. One thing that bothers me is having several separate pools like that, maybe making the other solution more efficient...

B. Having one big memory pool shared by all factories of one game representation (using something like boost::pool with some adapter functions): that way I've got all the game objects' memory allocated together, and can make one big allocation for a game whose total size I already know (it's not always the case). I'm not sure it's a better solution than A because of possible fragmentation inside the pool, as there would be objects of different sizes in the same pool, but it looks like an easier one for memory analysis and other problem fixing.

Now, I've had some real-world experience with A, but none with B, and would like some advice regarding those solutions for a long-life project. Which solution seems better for a long-life project and why? (Note: a pool is really necessary in this case because the game model is used for game editing too, so there will be lots of allocation/deallocation of little objects.)

Edit for clarification: I'm using C++ (if it's not clear yet).
The correct answer is specific to your problem domain. But in the problem domains that I work in, the first is usually the one we choose. I do realtime or near-realtime code, audio editing and playback mostly. In that code, we generally cannot afford to allocate memory from the heap down in the playback engine. Most of the time malloc returns fast enough, but sometimes it doesn't. And that sometimes matters. So our solution is to have specific pools for certain objects, and use the general pool for everything else. The specific pools have a certain number of elements preallocated, and are implemented as a linked list (actually a queue), so allocation and release are never more than a couple of pointer updates and the cost of entering and leaving a critical section. As a fallback for unusual cases: when someone needs to allocate from a special pool and it's empty, we will allocate a hunk of general memory (several objects) and add that to the special pool. Once an allocation is part of the special pool, it is NEVER returned to the general pool until the app exits or starts a new project. Making good choices about the initial size and maximum size of the special pools is an important part of tuning the application.
1,964,751
1,964,794
rate my (C++) code: a recursive strstr sans any standard library string functions :)
So, the idea was to write a recursive function that compares two strings to see if string 'prefix' is contained in string 'other', without using any standard string functions, and using pointer arithmetic. Below is what I came up with. I think it works, but was curious: how elegant is this, scale 1-10? Any obvious funky moves you would have done instead? Thanks.

bool is_prefixR(char* prefix, char* other) {
    static int prePos = 0, othPos = 0;
    // static int othPos = 0;
    bool test;
    test = (*(prefix+prePos) == *(other+othPos)); // checks to see if same
    if (!*(prefix+prePos)) { return 1; } // end of recursion
    if (!*(other+othPos)) { return 0; }
    if (!test) {
        othPos++;   // move othPos pointer by 1
        prePos = 0; // reset the prefix position
        return(is_prefixR(prefix, other)); // lets try again
    } else {        // chars are the same
        othPos++;   // move othPos pointer by 1
        prePos++;
        return(is_prefixR(prefix, other)); // lets try again
    }
    return 0;
}
It is 1AM and far too late for understanding code; however, such a simple function should be really easy to comprehend, and your code isn't. Static variables when writing functions are not a good idea because they make it incredibly hard to debug, as the function ceases to be stateless. Try passing the values you need to the next function, and if you find you can't, try writing it a different way. You also used prefix in the wrong way: I think you meant substring. I present two functions below that do what you want and are fairly foolproof with everything except strings that are not null terminated. It is not quite as fast as it could be, as is_substr will continue to try and compare even when other is shorter than sub. You seemed to indicate elegance was the name of the game, though, so I avoided all added complexity. Note: is_substr depends on is_prefix.

bool is_prefix(const char* prefix, const char* other) {
    if ( *prefix == 0 ) {
        return true;
    } else if ( *other == 0 || *prefix != *other ) {
        return false;
    }
    return is_prefix(++prefix, ++other);
}

bool is_substr(const char* const sub, const char* other) {
    if ( *other == 0 ) {
        return false;
    } else if ( is_prefix(sub, other) ) {
        return true;
    }
    return is_substr(sub, ++other);
}

Just to give you an idea of the functions' output:

is_substr("aab", "aaab"); //1
is_substr("ab", "ba");    //0
is_substr("aab", "a");    //0
is_substr("a", "bab");    //1

is_prefix("a", "a");      //1
is_prefix("a", "ab");     //1
is_prefix("ab", "a");     //0
is_prefix("aab", "aaab"); //0
1,964,821
1,964,847
strcmpi renamed to _strcmpi?
In MSVC++, there's a function strcmpi for case-insensitive C-string comparisons. When you try and use it, it goes:

This POSIX function is deprecated beginning in Visual C++ 2005. Use the ISO C++ conformant _stricmp instead.

What I don't see is why ISO does not want MSVC++ to use strcmpi, why _stricmp is the preferred way, why they would bother to rename the function, and how a function beginning with an underscore is ISO conformant. I know there must be a reason for all this, and I'm suspecting it's because strcmpi is non-standard, and perhaps ISO wants non-standard extensions to begin with an _underscore?
ISO C reserves certain identifiers for future expansion (see here), including anything that starts with "str" followed by a lowercase letter.
1,964,926
1,964,958
Converting C-Strings from Local Encoding to UTF8
I'm writing a small app in which I read some text from the console, which is then stored in a classic char* string. As it happens, I need to pass it to a lib which only takes UTF-8 encoded strings. Since the Windows console uses the local encoding, I need to convert from the local encoding to UTF-8. If I'm not mistaken, I could use MultiByteToWideChar(..) to encode to UTF-16 and then use WideCharToMultiByte(..) to convert to UTF-8. However, I wonder if there is a way to convert directly from the local encoding to UTF-8 without the use of any external libs, since the idea of converting to wchar just to be able to convert back to char (UTF-8 encoded, but still) seems kinda weird to me.
Converting from UTF-16 to UTF-8 is purely a mechanical process, but converting from local encoding to UTF-16 or UTF-8 involves some large specialized lookup tables. The c-runtime just turns around and calls WideCharToMultiByte and MultiByteToWideChar for non-trivial cases. As for having to use UTF-16 as an intermediate stage, as far as I know, there isn't any way around that - sorry. Since you are already linking to an external library to get file input, you might as well link to the same library to get WideCharToMultiByte and MultiByteToWideChar. Using the c-runtime will make your code re-compilable to other operating systems (in theory), but it also adds a layer of overhead between you and the library that does all of the real work in this case - kernel32.dll.
1,965,029
1,965,036
one question about std::cin
int i, j;
std::string s;
std::cin >> i >> j >> s >> s >> i;
std::cout << i << " " << j << " " << s << " " << i;

Question: referring to the sample code above, what's the displayed output if the input given is "5 10 Sample Word 15 20"? The answer is: 15 10 Word 15

My question is: what's the underlying policy for cin to overwrite the existing values? Does the latter one simply overwrite the previous one? Are there any other situations? I checked many books, but I didn't find one which explains this.
std::cin >> i >> j >> s >> s >> i;

is equivalent to:

std::cin >> i;
std::cin >> j;
std::cin >> s;
std::cin >> s; // overwrite previous s
std::cin >> i; // overwrite previous i

Every time you read from cin into a variable, the old contents of that variable are overwritten. So you are explicitly asking to overwrite s and i.
1,965,067
1,965,105
Good c++ profiler for GCC
I tried to find a related question, but all previous questions are about profilers for native C++ on Windows. I googled for a while and learned about gprof, but the output of gprof actually contained a lot of obscure internal functions. Is there a good open-source C++ profiler with good documentation?
Valgrind. I totally recommend it: http://en.wikipedia.org/wiki/Valgrind (its callgrind tool is the profiling part).
1,965,249
1,965,344
How to write a Java-enum-like class with multiple data fields in C++?
Coming from a Java background, I find C++'s enums very lame. I wanted to know how to write Java-like enums (the ones in which the enum values are objects, and can have attributes and methods) in C++. For example, translate the following Java code (a part of it, sufficient to demonstrate the technique) to C++:

public enum Planet {
    MERCURY (3.303e+23, 2.4397e6),
    VENUS   (4.869e+24, 6.0518e6),
    EARTH   (5.976e+24, 6.37814e6),
    MARS    (6.421e+23, 3.3972e6),
    JUPITER (1.9e+27,   7.1492e7),
    SATURN  (5.688e+26, 6.0268e7),
    URANUS  (8.686e+25, 2.5559e7),
    NEPTUNE (1.024e+26, 2.4746e7);

    private final double mass;   // in kilograms
    private final double radius; // in meters

    Planet(double mass, double radius) {
        this.mass = mass;
        this.radius = radius;
    }

    private double mass() { return mass; }
    private double radius() { return radius; }

    // universal gravitational constant (m3 kg-1 s-2)
    public static final double G = 6.67300E-11;

    double surfaceGravity() { return G * mass / (radius * radius); }
    double surfaceWeight(double otherMass) { return otherMass * surfaceGravity(); }

    public static void main(String[] args) {
        if (args.length != 1) {
            System.err.println("Usage: java Planet <earth_weight>");
            System.exit(-1);
        }
        double earthWeight = Double.parseDouble(args[0]);
        double mass = earthWeight/EARTH.surfaceGravity();
        for (Planet p : Planet.values())
            System.out.printf("Your weight on %s is %f%n", p, p.surfaceWeight(mass));
    }
}

Any help would be greatly appreciated! Thanks!
One way to simulate Java enums is to create a class with a private constructor that instantiates copies of itself as static variables:

class Planet
{
public:
    // Enum value DECLARATIONS - they are defined later
    static const Planet MERCURY;
    static const Planet VENUS;
    // ...

private:
    double mass;   // in kilograms
    double radius; // in meters

private:
    Planet(double mass, double radius)
    {
        this->mass = mass;
        this->radius = radius;
    }

public:
    // Properties and methods go here
};

// Enum value DEFINITIONS
// The initialization occurs in the scope of the class,
// so the private Planet constructor can be used.
const Planet Planet::MERCURY = Planet(3.303e+23, 2.4397e6);
const Planet Planet::VENUS = Planet(4.869e+24, 6.0518e6);
// ...

Then you can use the enums like this:

double gravityOnMercury = Planet::MERCURY.SurfaceGravity();
1,965,328
1,965,446
Call different functions using Direct Parameter Access in C
I recently stumbled upon this page. And I was particularly interested about the section which dealt with Direct Parameter Access. I was just wondering if there is any way to execute just one of the functions depending on the value of n in the following line: printf("%n$p", func1, func2, func3 .. funcN); where func1,.. have signature as int func1(), int func2(), and so on.. This is a restriction as I might want to have function tha return void too. In the above line, only the address of the function is printed; The function is not called.. I even tried using the ',' (comma operator) to achieve this; but in that case, all the functions in the list will get call, and the result corresponding to the 'n' is printed. Is there any way to actually execute the function inside printf(..)? Thanks.
No, you can't do this with printf, as printf does not support invocation of function pointer parameters. But you can write your own function that does this using stdarg:

#include <stdarg.h>
#include <stdio.h>

void invoke_and_print(unsigned int n, ...)
{
    va_list ap;
    va_start(ap, n);
    int (*fp)(void) = NULL;
    while (n-- != 0) {
        fp = va_arg(ap, int (*)(void));
    }
    va_end(ap);
    printf("%d\n", (*fp)());
}
1,965,481
1,965,853
debug DLL in a different solution
I have an *.exe project that was written in one solution under vs2005 and i have a DLL file that the *.exe project is using. the problem is that the dll was written in adiffrent solution and when i try to make attach to the *.exe file (after i run it) from the dll solution in order to debug the dll , i get no symbols are loaded error (and i cant debug the dll) altough symbols were loaded (i can see the *.pdb files that created after i compiled the dll solution) . What can I do?
First check the Output window; it will show whether or not the debugger could find debugging symbols for the DLL when it got loaded. Next, switch to Debug + Windows + Modules, right-click your DLL and choose "Symbol Load Information". That shows where the debugger looked for .pdb files for the DLL. Ensure the .pdb is located in one of these paths. If the problem is missing source code for the DLL rather than missing .pdb files, first delete the hidden .suo file in the solution directory. The next time you debug into the DLL, Visual Studio will again prompt you to provide the path to the source code file. Don't press Escape; enter the path. Another thing you can do is right-click the solution in the Solution Explorer window, then Properties, Common Properties, Debug Source Files. Add the path to the DLL source code directory.
1,965,487
1,966,649
Does the restrict keyword provide significant benefits in gcc/g++?
Has anyone ever seen any numbers/analysis on whether or not use of the C/C++ restrict keyword in gcc/g++ actually provides any significant performance boost in reality (and not just in theory)? I've read various articles recommending/disparaging its use, but I haven't run across any real numbers practically demonstrating either side's arguments. EDIT: I know that restrict is not officially part of C++, but it is supported by some compilers and I've read a paper by Christer Ericson which strongly recommends its usage.
The restrict keyword makes a difference. I've seen improvements of factor 2 and more in some situations (image processing). Most of the time the difference is not that large, though; about 10%. Here is a little example that illustrates the difference. I've written a very basic 4x4 vector * matrix transform as a test. Note that I have to force the function not to be inlined. Otherwise GCC detects that there aren't any aliasing pointers in my benchmark code and restrict wouldn't make a difference due to inlining. I could have moved the transform function to a different file as well.

#include <math.h>

#ifdef USE_RESTRICT
#else
#define __restrict
#endif

void transform (float * __restrict dest, float * __restrict src,
                float * __restrict matrix, int n)
    __attribute__ ((noinline));

void transform (float * __restrict dest, float * __restrict src,
                float * __restrict matrix, int n)
{
  int i;

  // simple transform loop.
  // written with aliasing in mind. dest, src and matrix
  // are potentially aliasing, so the compiler is forced to reload
  // the values of matrix and src for each iteration.
  for (i=0; i<n; i++)
  {
    dest[0] = src[0] * matrix[0] + src[1] * matrix[1]
            + src[2] * matrix[2] + src[3] * matrix[3];
    dest[1] = src[0] * matrix[4] + src[1] * matrix[5]
            + src[2] * matrix[6] + src[3] * matrix[7];
    dest[2] = src[0] * matrix[8] + src[1] * matrix[9]
            + src[2] * matrix[10] + src[3] * matrix[11];
    dest[3] = src[0] * matrix[12] + src[1] * matrix[13]
            + src[2] * matrix[14] + src[3] * matrix[15];
    src  += 4;
    dest += 4;
  }
}

float srcdata[4*10000];
float dstdata[4*10000];

int main (int argc, char**args)
{
  int i,j;
  float matrix[16];

  // init all source-data, so we don't get NANs
  for (i=0; i<16; i++)
    matrix[i] = 1;
  for (i=0; i<4*10000; i++)
    srcdata[i] = i;

  // do a bunch of tests for benchmarking.
  for (j=0; j<10000; j++)
    transform (dstdata, srcdata, matrix, 10000);
}

Results (on my 2 GHz Core Duo):

nils@doofnase:~$ gcc -O3 test.c
nils@doofnase:~$ time ./a.out

real    0m2.517s
user    0m2.516s
sys     0m0.004s

nils@doofnase:~$ gcc -O3 -DUSE_RESTRICT test.c
nils@doofnase:~$ time ./a.out

real    0m2.034s
user    0m2.028s
sys     0m0.000s

Roughly 20% faster execution on that system. To show how much it depends on the architecture, I've let the same code run on a Cortex-A8 embedded CPU (adjusted the loop count a bit because I don't want to wait that long):

root@beagleboard:~# gcc -O3 -mcpu=cortex-a8 -mfpu=neon -mfloat-abi=softfp test.c
root@beagleboard:~# time ./a.out

real    0m 7.64s
user    0m 7.62s
sys     0m 0.00s

root@beagleboard:~# gcc -O3 -mcpu=cortex-a8 -mfpu=neon -mfloat-abi=softfp -DUSE_RESTRICT test.c
root@beagleboard:~# time ./a.out

real    0m 7.00s
user    0m 6.98s
sys     0m 0.00s

Here the difference is just 9% (same compiler, btw).
1,965,640
1,965,649
What is this C++ Syntax when declaring a class?
I occasionally run into this type of syntax when looking through open source code and was wondering what it's for, or what it's even called for that matter. I have crawled the internet many a times before but simple contrived examples never had it nor explained it. It looks like this class SomeIdentifier ClassName { ... } My question is what is SomeIdentifier ?
Generally this would be something like this:

#define SomeIdentifier __declspec(dllexport)

It is there to support building DLLs on Windows, where you must explicitly mark every class that is part of the DLL's interface. SomeIdentifier would typically be a macro named something like FOO_BAR_EXPORT.
1,965,751
1,965,762
how do I read a huge .gz file (more than 5 gig uncompressed) in c
I have some .gz compressed files which is around 5-7gig uncompressed. These are flatfiles. I've written a program that takes a uncompressed file, and reads it line per line, which works perfectly. Now I want to be able to open the compressed files inmemory and run my little program. I've looked into zlib but I can't find a good solution. Loading the entire file is impossible using gzread(gzFile,void *,unsigned), because of the 32bit unsigned int limitation. I've tried gzgets, but this almost doubles the execution time, vs reading in using gzread.(I tested on a 2gig sample.) I've also looked into "buffering", such as splitting the gzread process into multiple 2gig chunks, find the last newline using strcchr, and then setting the gzseek. But gzseek will emulate a total file uncompression. which is very slow. I fail to see any sane solution to this problem. I could always do some checking, whether or not a current line actually has a newline (should only occure in the last partially read line), and then read more data from the point in the program where this occurs. But this could get very ugly. Does anyhow have any suggestions? thanks edit: I dont need to have the entire file at once,just need one line a time, but I got a fairly huge machine, so if that was the easiest I would have no problems. For all those that suggest piping the stdin, I've experienced extreme slowdowns compared to opening the file. Here is a small code snippet I made some months ago, that illustrates it. 
time ./a.out 59846/59846.txt # 59846/59846.txt 18255221 real 0m4.321s user 0m2.884s sys 0m1.424s time ./a.out <59846/59846.txt 18255221 real 1m56.544s user 1m55.043s sys 0m1.512s And the source code #include <iostream> #include <fstream> #define LENS 10000 int main(int argc, char **argv){ std::istream *pFile; if(argc==2)//ifargument supplied pFile = new std::ifstream(argv[1],std::ios::in); else //if we want to use stdin pFile = &std::cin; char line[LENS]; if(argc==2) //if we are using a filename, print it. printf("#\t%s\n",argv[1]); if(!pFile){ printf("Do you have permission to open file?\n"); return 0; } int numRow=0; while(!pFile->eof()) { numRow++; pFile->getline(line,LENS); } if(argc==2) delete pFile; printf("%d\n",numRow); return 0; } thanks for your replies, I'm still waiting the golden apple edit2: using the cstyle FILE pointers instead of c++ streams is much much faster. So I think this is the way to go. Thank for all your input
gzip -cd compressed.gz | yourprogram

Just go ahead and read it line by line from stdin as if it were uncompressed. EDIT: Response to your remarks about performance. You're saying reading STDIN line by line is slow compared to reading an uncompressed file directly. The difference lies in buffering. Normally a pipe will yield to STDIN as soon as the output becomes available (no, or very small, buffering there). You can do "buffered block reads" from STDIN and parse the read blocks yourself to gain performance. You can achieve the same result with possibly better performance by using gzread() as well. (Read a big chunk, parse the chunk, read the next chunk, repeat.)
1,966,077
1,966,096
Calculate the factorial of an arbitrarily large number, showing all the digits
I was recently asked, in an interview, to describe a method to calculate the factorial of any arbitrarily large number; a method in which we obtain all the digits of the answer. I searched various places and asked in a few forums. But I would like to know if there is any way to accomplish this without using libraries like GMP. Thank you.
GNU Multiprecision library is a good one! But since you say using external libraries is not allowed, the only way I believe it's possible is by taking an array of int and then multiplying numbers as you do with pen on paper! Here is the code I wrote some time back:

#include<iostream>
#include<cstring>

int max = 5000;

void display(int arr[]){
    int ctr = 0;
    for (int i=0; i<max; i++){
        if (!ctr && arr[i])
            ctr = 1;
        if (ctr)
            std::cout<<arr[i];
    }
}

void factorial(int arr[], int n){
    if (!n) return;
    int carry = 0;
    for (int i=max-1; i>=0; --i){
        arr[i] = (arr[i] * n) + carry;
        carry = arr[i]/10;
        arr[i] %= 10;
    }
    factorial(arr, n-1);
}

int main(){
    int *arr = new int[max];
    std::memset(arr, 0, max*sizeof(int));
    arr[max-1] = 1;
    int num;
    std::cout<<"Enter the number: ";
    std::cin>>num;
    std::cout<<"factorial of "<<num<<" is:\n";
    factorial(arr, num);
    display(arr);
    delete[] arr;
    return 0;
}

'arr' is just an integer array, and factorial is a simple function that multiplies the given number into the 'large number'. Hope this solves your query.
1,966,319
1,966,323
about const member function
I met two explanation of const member function class A{ public: ... void f() const {} ... } it means it could only access constant members; it means it does not modify any members; I think the second one is right. But why does the first one come out? Is there anything to be clarify? Thanks!
You can examine all class member values in a const member function, and in some cases you can even change the value of member variables. The first explanation is incorrect; I don't know where it comes from. The second explanation is correct, but with a few exceptions. You can change mutable member variables in a const member function, for example a member variable declared like this:

mutable float my_rank;

You can also break const-correctness in a class by const_cast'ing away the constness of this, like this:

Class* self = const_cast<Class*>(this);

While technically allowed in C++, this is usually considered poor form because it throws away all of the const guarantees of your design. Don't do this unless you actually have to, and if you find yourself having to do it quite a lot, that suggests a problem with your design. The C++ FAQ covers this very well. Here are two references in case you want to do more reading:

Const-correctness (cprogramming.com)
Const correctness (C++ FAQ Lite)
1,966,352
1,966,541
Build C/C++ library to link it into delphi application... How?
if I have a source of library written in C/C++ (lets say its libxml2), now I'd like to build it, and link it into the delphi application... I know it is possible, since Delphi Zlib does it ( http://www.dellapasqua.com/delphizlib/ ) ... But my question is, how to prepare those .obj files? Thanks in advance m.
You would need to use CodeGear's C++ compiler to produce compatible obj files for Delphi. Does your Delphi come with C++ Builder? Otherwise you could try the free (Borland) commandline version. Read more about this subject here.
1,966,362
1,967,183
SFINAE to check for inherited member functions
Using SFINAE, I can detect whether a given class has a certain member function. But what if I want to test for inherited member functions? The following does not work in VC8 and GCC4 (i.e. it detects that A has a member function foo(), but not that B inherits one):

#include <iostream>

template<typename T, typename Sig>
struct has_foo {
    template <typename U, U> struct type_check;
    template <typename V> static char (& chk(type_check<Sig, &V::foo>*))[1];
    template <typename  > static char (& chk(...))[2];
    static bool const value = (sizeof(chk<T>(0)) == 1);
};

struct A { void foo(); };
struct B : A {};

int main()
{
    using namespace std;
    cout << boolalpha << has_foo<A, void (A::*)()>::value << endl; // true
    cout << boolalpha << has_foo<B, void (B::*)()>::value << endl; // false
}

So, is there a way to test for inherited member functions?
Take a look at this thread: http://lists.boost.org/boost-users/2009/01/44538.php Derived from the code linked to in that discussion: #include <iostream> template <typename Type> class has_foo { class yes { char m;}; class no { yes m[2];}; struct BaseMixin { void foo(){} }; struct Base : public Type, public BaseMixin {}; template <typename T, T t> class Helper{}; template <typename U> static no deduce(U*, Helper<void (BaseMixin::*)(), &U::foo>* = 0); static yes deduce(...); public: static const bool result = sizeof(yes) == sizeof(deduce((Base*)(0))); }; struct A { void foo(); }; struct B : A {}; struct C {}; int main() { using namespace std; cout << boolalpha << has_foo<A>::result << endl; cout << boolalpha << has_foo<B>::result << endl; cout << boolalpha << has_foo<C>::result; } Result: true true false
1,966,577
1,966,590
How to determine architecture in platform neutral way?
I have a C++ app that uses wxWidgets. Certain parts of the app differ for 32 and 64 bit hosts. Currently I use sizeof(void *), but is there a better way that uses conditional compilation and is platform neutral?
Typically people use #defines to determine bitness (the exact define will depend on the compiler). This is better than a runtime approach using sizeof(void*). As for platform neutral, well, some compilers are on multiple platforms..
1,966,687
1,966,712
Bogus IP Address from getaddrinfo & inet_ntop
I've been using getaddrinfo for looking up socket addresses for basic socket commands. Recently, though, the addresses it returns to me are for bogus IP addresses, which I have found using inet_ntop. I've tried my code, as well as that provided in Beej's Guide, and they both produce the same results. Here's the code: struct addrinfo hints, *info; int status; memset(&hints, 0, sizeof hints); hints.ai_family = AF_INET; hints.ai_socktype = SOCK_STREAM; if(status = getaddrinfo(address, port, &hints, &info)) { fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(status)); } char ip4[INET_ADDRSTRLEN]; inet_ntop(AF_INET, info->ai_addr, ip4, INET_ADDRSTRLEN); std::cout<<ip4<<std::endl; No matter what address I use, it always gives me an IP of the form 16.2.x.y where 256*x + y is equal to the port number. Has anyone ever seen this happen, or can anyone guess why it's giving me this?
Shouldn't you be passing &((sockaddr_in const *)info->ai_addr)->sin_addr to inet_ntop? inet_ntop expects a pointer to the raw in_addr, not to the whole sockaddr. By passing info->ai_addr you make it format the leading bytes of the sockaddr_in (length/family and the port in network byte order) as if they were the address, which is exactly why you see 16.2.x.y with 256*x + y equal to the port.
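A corrected, self-contained sketch (resolving the loopback address numerically so no DNS lookup is needed):

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <arpa/inet.h>

int main(void)
{
    struct addrinfo hints, *info;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_STREAM;

    int status = getaddrinfo("127.0.0.1", "80", &hints, &info);
    if (status != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(status));
        return 1;
    }

    /* Point inet_ntop at the in_addr inside the sockaddr_in,
       not at the sockaddr itself. */
    struct sockaddr_in *sin = (struct sockaddr_in *)info->ai_addr;
    char ip4[INET_ADDRSTRLEN];
    inet_ntop(AF_INET, &sin->sin_addr, ip4, sizeof ip4);
    printf("%s\n", ip4);

    freeaddrinfo(info);
    return 0;
}
```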
1,966,705
1,966,776
Qt, widget generated with inheritance?
I remember when messing around with Qt seeing something where it was like class MyForm : QDialog { } Instead of class MyForm { void SetupUi(QDialog* dialog); } How do you generate the inherited form?
It is the new and only way to setup your UI since Qt 4.0. Like it or not, you can always alter the generated code before building. Here's an article on porting .ui files to Qt 4.x - http://qt.nokia.com/doc/4.0/porting4-designer.html.
1,966,893
1,966,897
C++ variable types limits
here is a quite simple question(I think), is there a STL library method that provides the limit of a variable type (e.g integer) ? I know these limits differ on different computers but there must be a way to get them through a method, right? Also, would it be really hard to write a method to calculate the limit of a variable type? I'm just curious! :) Thanks ;).
Use std::numeric_limits:

// numeric_limits example
// from the page I linked
#include <iostream>
#include <limits>
using namespace std;

int main ()
{
  cout << boolalpha;
  cout << "Minimum value for int: " << numeric_limits<int>::min() << endl;
  cout << "Maximum value for int: " << numeric_limits<int>::max() << endl;
  cout << "int is signed: " << numeric_limits<int>::is_signed << endl;
  cout << "Non-sign bits in int: " << numeric_limits<int>::digits << endl;
  cout << "int has infinity: " << numeric_limits<int>::has_infinity << endl;
  return 0;
}
1,967,124
1,967,135
Finding a byte-pattern in some memory area
I want to search some memory range for a specific byte pattern. Therefore, my approach is to build a function void * FindPattern (std::vector<byte> pattern, byte wildcard, void * startAddress, void * endAddress); using the Boyer-Moore-Horspool algorithm to find the pattern in the memory range. The wildcard byte stays for some specific byte which should be treated as a wildcard. So - for example - if wildcard is 0xCC, every 0xCC in pattern will be a wildcard. The function should return the start of the memory range, where the pattern was found the first time. My question is now: is there some similar function already done in the most common libraries or do I have to implement this for my own?
The Wikipedia page on BMH has an implementation. I think that Boost xpressive is also based on (a variant of) BMH.
1,967,278
1,967,382
Shortening series of push_back's on a byte-vector
In my code, I want to use a byte-vector to store some data in memory. The problem is, that my current approach uses many lines of code: std::vector<byte> v; v.push_back(0x13); v.push_back(0x37); v.push_back(0xf0); v.push_back(0x0d); How can I shorten this procedure so that I have for example something like: std::vector<byte> v(4) = "\x13\x37\xf0\x0d"; // example code - not working ?
This solution gets the string length from the literal itself, meaning you don't need extra 5s and 4s lying around: const unsigned char src[] = "\xDE\xAD\xBE\xEF"; std::vector<unsigned char> pattern(src, src+sizeof(src)); Note that a null terminator (extra zero byte) is added to the array; sizeof(src) is 5 because it's a string literal. The null terminator can be discarded by saying sizeof(src)-1, or by doing this: const unsigned char src[] = {0xDE, 0xAD, 0xBE, 0xEF};
1,967,283
1,968,145
Multiple inheritance on different template types
I'm working on event handling in C++ and to handle notification of events, I have a class EventGenerator which any class generating events can inherit from. EventGenerator has a method which other classes can use to add in callbacks and a method to call the callbacks once an event happens To handle notification of different types of events, I've parametrized EventGenerator on template type T and the notifier class can then inherit from EventGenerator multiple times parametrized on different types. For the sake of completeness, here's the code for EventGenerator #ifndef _EventGenerator #define _EventGenerator #include <list> #include "EventListener.h" template <class Event> class EventGenerator { private: std::list<EventListener<Event>*> listeners; protected: EventGenerator() {} void changeEvent(Event event) { std::list<EventListener<Event>*>::const_iterator it = listeners->begin(); for (; it != listeners->end(); it++) { (*it)->changeEvent(event); } } public: void addListener(EventListener<Event>* listener) { listeners->push_back(listener); } }; #endif and here's the code for EventListener which any class which wants to add callbacks inherits from - #ifndef _EventListener #define _EventListener template <class Event> class EventListener { private: EventListener(const EventListener<Event>& event); protected: EventListener() {} public: virtual void changeEvent(Event event) = 0; }; #endif I've a feeling this is not a very good design and was wondering if there was a better design out there for such a problem. Edit: What bothers is the fact that I'm using multiple inheritance. I've been frequently warned against using it so I guess I wanted opinions on whether such a design could lead to bad things happening in the future Thanks
Beware of diamond inheritance heirarchies. Also note that overloading virtual functions is a bad thing. So if you have something like this: class Handler : public EventHandler<int>, public EventHandler<string> { ... }; Which changeEvent() function will be called? Don't count on it! If you are careful the above code should be fine, but if you want to avoid inheritance altogether then I suggest using function references associated with some unique identifier. As an example: class Listener { public: virtual ~Listener ( ) { } }; template<typename Event> class Distributor : public Listener { public: void addListener (shared_ptr<Listener>, function<void (Event)>); void listen (Event e) { for_each(_listeners.begin(), _listeners.end(), bind(&ListenNode::listen, _1, e)); } private: struct ListenNode { weak_ptr<Listener> listener; function<void (Event)> callback; void listen (Event e) { shared_ptr<Listener> l = listener.lock(); if(l) callback(e); } }; list<ListenNode> _listeners; }; With this setup, all listeners derive from one base class virtually. Listeners can have multiple callbacks registered, and Distributors can be chained. Of course you don't have to use shared_ptr's but I like them because they save from the hassle of unregistering listeners. You can register the callbacks any way you like, associating them with a string, integer or whatever. I have omitted a lot of detail, event distribution is a complicated business. I think Andrei Alexandrescu wrote a detailed article on the topic, look it up.
1,967,391
1,967,400
Is there a way to _get_ the UnhandledExceptionFilter?
SetUnhandledExceptionFilter() lets me install a function that gets called in case of an unhandled exception. I'm looking for a way to get the currently installed function, so I can store&restore it. I can't seem to find a Get equivalent of the SetUnhandledExceptionFilter call, and am wondering if I'm missing something or if it's just not possible.
SetUnhandledExceptionFilter actually returns the old unhandled exception filter, so you can check that way. Set a NULL filter, check the result, then set it again.
1,967,659
1,967,663
Passing on va_arg twice to a function result in same value
I'm trying to use va_arg to make a generic factory function in my GUI library. When passing va_arg twice in the same function they pass on the same value instead of two different: GUIObject* factory(enumGUIType type, GUIObject* parent, ...){ va_list vl; va_start(vl, parent); ... label->SetPosition(va_arg(vl, int), va_arg(vl, int)); va_end(vl); return finalObjectPointer; } factory(LABEL, theParent, 100,200); // Results in position 200:200 What causes this unexpected behavior?
The compiler is not guaranteed to evaluate arguments in order. Add some additional local variables and do the two assignments in sequence. See this other stack overflow posting. int v1 = va_arg(vl, int); int v2 = va_arg(vl, int); label->SetPosition(v1, v2); To get what you are observing: the exact same value twice -- probably requires a compiler bug piled on top of the undefined order of evaluation situation, or some entertaining aspect of the particular macro expansion of va_arg in your environment.
1,967,703
1,967,717
Error in linking to friend functions
I have a class Vector3 which compiles successfully. It contains both non-friend and friend functions, for example to overload the * and << operators when Vector3 is the second operand. The problem is I can't link to any of the friend functions, be it operator overloads or not. So I can confirm that the error is not specific to operator overloading. The g++ command used for linking is as follows (please also see the Makefile at the end):

g++ -Wall -W -I./ -g -o main.out main.o Vector3.o

which gave the following errors:

main.cpp:7: undefined reference to `operator*(double, Vector3 const&)'
main.cpp:9: undefined reference to `mag(Vector3 const&)'
main.cpp:10: undefined reference to `operator<<(std::basic_ostream<char, std::char_traits<char> >&, Vector3 const&)'

Below is the relevant code in my source files. I follow the practice of making a separate .hpp and .cpp for every class.

/* file Vector3.hpp */
struct Vector3
{
    ...
    Vector3 operator*(const double k) const;
    friend Vector3 operator*(const double k, const Vector3 &vec);
    double magnitude() const;
    friend double mag(const Vector3 &vec);
    friend std::ostream& operator<<(std::ostream& output, const Vector3 &vec);
    ...
}

/* file Vector3.cpp */
Vector3 operator*(const double k, const Vector3 &vec) { ... }
inline double mag(const Vector3 &vec) { ... }
std::ostream& operator<<(std::ostream& output, const Vector3 &vec) { ... }

/* file main.cpp */
#include "Vector3.hpp"
int main()
{
    Vector3 M(1, 1, 1);
    M = M * 2.0;               // own operator* links successfully
    M = 10.0 * M;              // friend operator* doesn't link
    double m = M.magnitude();  // own function magnitude() links successfully
    double n = mag(M);         // friend function mag() doesn't link
    std::cout << M;            // friend operator<< doesn't link
}

Finally, this is my Makefile.
CXX = g++ CXXFLAGS = -Wall -W $(INCPATH) -g INCPATH = -I./ OBJS = main.o Vector3.o main.out: $(OBJS) $(CXX) $(CXXFLAGS) -o $@ $(OBJS) $(LIBPATH) main.o: main.cpp Vector3.o: Vector3.cpp clean: rm -f $(OBJS) main.out The strangest thing is that if I include the Vector3.cpp file as well in main.cpp and then remove Vector3.o from OBJS in Makefile, the program links successfully. I cannot make sense of this. Please help me!!
The definition of friend operator* uses fp_type while the friend declaration uses double as the first parameter. This will only work as intended if fp_type is a typedef-name for double. Are you sure fp_type actually stands for double? I can't see it from the code you posted. The problem with mag is rather obvious: you defined it as inline in .cpp file. Inline function definitions have to be visible everywhere they are used, meaning that normally they should be placed in the header file.
1,967,762
1,967,772
Compiling c++ program under linux
I am trying to compile simple program under linux. These are the set of operations I performed. [mypc@localhost programs]$ vim heap.cpp [mypc@localhost programs]$ g++ -c heap.cpp [mypc@localhost programs]$ chmod 777 heap.* [mypc@localhost programs]$ g++ -c heap.cpp [mypc@localhost programs]$ ./heap.o bash: ./heap.o: Permission denied [mypc@localhost programs]$ ls heap.cpp heap.o [mypc@localhost programs]$ ls -l total 8 -rwxrwxrwx. 1 mypc mypc 67 2009-12-28 12:01 heap.cpp -rw-rw-r--. 1 mypc mypc 1548 2009-12-28 12:02 heap.o [mypc@localhost programs]$ chmod 777 heap.o [mypc@localhost programs]$ ./heap.o bash: ./heap.o: cannot execute binary file [mypc@localhost programs]$ What kind of error is this ? Here is a program #include<iostream> using namespace std; int main(){ return 0; }
The -c option tells the compiler to generate an object file, not the final binary. You still need to link your code. If you only have a single file, you can do the compile and link in one step:

g++ heap.cpp -o heap

As you get to bigger programs, you will want to separate compilation from linking. Let's say you want to split your code between heap.cpp and main.cpp. First you would do a compilation step, and later you would link them together:

g++ -c heap.cpp
g++ -c main.cpp
g++ -o program_name heap.o main.o

Finally, by default, the linking step creates a file named a.out. If you want to specify the name, make sure to use the -o option (which isn't necessary when compiling, as the default is to convert NAME.EXTENSION to NAME.o).
1,967,882
1,967,960
is there a difference between malloced arrays and newed arrays
I'm normally programming in c++, but are using some clibrary functions for my char*. Some of the manpages like for 'getline', says that input should be a malloced array. Is it ok, to use 'new' instead? I can see for my small sample that it works, but could this at some point result in some strange undefined behavior? I know that a 'new' should match a 'delete', and a 'malloc' with a 'free'. I'm also not using std::string. And this is intentional. Thanks
The buffer passed to getline() MUST be malloced. The reason is that getline() may call realloc() on the buffer if more space is required. realloc() like free() should only be used with memory allocated by malloc(). This is because malloc() and new allocate memory from different storage areas: See: What is the difference between new/delete and malloc/free? Basically new uses "The "Free Store" while malloc uses "The Heap". Both of these areas are part of the "application Heap" (Though the standard does not actually require an application heap as that is an implementation detail). Though they are both on the "Application Heap" these areas need not overlap. Whether they do is a detail of the implementation. The man page for getline(): http://linux.die.net/man/3/getline http://www.kernel.org/doc/man-pages/online/pages/man3/getline.3.html Notice this line: Alternatively, before calling getline(), *lineptr can contain a pointer to a malloc()-allocated buffer *n bytes in size. If the buffer is not large enough to hold the line, getline() resizes it with realloc(), updating *lineptr and *n as necessary.
1,968,204
1,968,262
QMetaObject::invokeMethod returns true, but method is never called
I'm trying to run a method on the GUI thread using QMetaObject::invokeMethod, which returns true. But, if I use Qt::QueuedConnection my method never gets called (even if invokeMethod returns true). This is what I'm using: QMetaObject::invokeMethod(this, "draw_widgets", Qt::QueuedConnection) I don't get any error messages or anything... If I use Qt::AutoConnection or Qt::DirectConnection the method does get called, but from the same thread of course. Not from the GUI thread, which is what I need. draw_widgets is a public slot of type void draw_widgets() and my class inherits QObject and uses the Q_OBJECT macro as well. I would appreciate any help on this, or on how to check why the method is not being called. Thanks.
The "true" is telling you the message was successfully queued. That doesn't mean the queued message was ever processed... Let us say your program has 10 threads (Thread1-Thread10). You queue a message from Thread7. Which thread will it be queued to? And when will items on this queue be processed? The answer is that every QObject has something called Thread Affinity, and this is the thread where a queued slot will be run. The default affinity is to the thread where the object was created (but you can change it with QObject::moveToThread().) If you want to queue something to the GUI thread, then the object specified by your this pointer should have the GUI thread's affinity. You can check this with the QObject::thread() method. But in any case, no matter what thread you queue to... you must have some kind of message pump running on that thread. Look at for instance QThread::exec(). If your thread affinity is to the GUI then presumably this is already the case because you are running the app's exec. (As a sidenote, direct calls to QMetaObject::invokeMethod are usually unnecessary. You can create a signal and tie it to a slot, then emit the signal in lieu of the invoke.)
1,968,407
1,968,423
How to know the definition of a struct in a DLL?
I need to use a third-party DLL for which I don't have a header, lib, or object file, just the DLL alone. I followed the article "Explicitly Linking to Classes in DLL's" on CodeGuru and am able to use functions and C++ classes from that DLL, but there are some function calls that need to pass or return a struct, like this undecorated function I get from PE Explorer: Undecorated C++ Function: public: struct SCRIPT_SET_RESULT __thiscall ScriptSet::LoadScriptInPackFile(char const *,int) So how can I know the structure of struct SCRIPT_SET_RESULT? Or do I have to disassemble this DLL? If so, please show me how to do that; I only have very little experience with that stuff (only cracked a few simple crackmes in school). Thanks
I'm afraid there is no easy way to solve your problem. Disassembling can give you examples of how this structure is used, but only in the form of member offsets, which is not very helpful. I think the best option is to ask the DLL's author to send you the header, or to google for it...
1,969,085
1,969,164
What is the difference between ANSI/ISO C++ and C++/CLI?
Created by Microsoft as the foundation of its .NET technology, the Common Language Infrastructure (CLI) is an ECMA standard (ECMA-335) that allows applications to be written in a variety of high-level programming languages and executed in different system environments. Programming languages that conform to the CLI have access to the same base class library and are capable of being compiled into the same intermediate language (IL) and metadata. IL is then further compiled into native code particular to a specific architecture. Because of this intermediate step, applications do not have to be rewritten from scratch. Their IL only needs to be further compiled into a system's native code. What exactly is meant by the system environments? Additionally, while studying Ivor Horton's Beginning Visual C++ 2008, I noticed that he stated that there are fundamentally different kinds of C++ applications that can be developed with Visual C++ 2008. These are: Applications which execute natively on one's computer, which he referred to as native C++ programs. Native C++ programs are written in the version of C++ that is defined by the ISO/ANSI language standard. Applications can also be written to run under the control of the CLR in an extended version of C++, called C++/CLI. These programs were referred to as CLR programs, or C++/CLI programs. So what is meant by native C++ programs and CLR programs? What's the difference between them? Thanks for any expert's help.
"System environments" means things like Linux, Windows x86, Windows x64, etc. Notice how they use the term "architecture" interchangeably at the end of the paragraph. A native C++ program is one where you take standard (ANSI/ISO) C++ and you compile it into a .exe. Usually you will be compiling this for a specific environment, e.g. Windows x86, in which case it could not run under Linux and would run under the WoW64 emulation layer on Windows x64. Notably, this code runs directly on the machine. C++/CLI is a different programming language than standard C++. It, just like C# or VB.NET, runs on top of Microsoft's Common Language Infrastructure. This means it has access to all those nice things in the paragraph you quoted, like the base class library and compilation to IL which allows it to be run on different architectures. But, just like C# and VB.NET, it does not run natively on the machine. It requires the installation of the .NET Framework; part of the .NET Framework's job is translating C++/CLI programs into native programs, which means they have much less direct access to the machine.
1,969,343
1,969,349
Cannot export template function
I have a class named "SimObject": namespace simBase { class __declspec(dllexport) SimObject: public SimSomething { public: template <class T> void updateParamValue( const std::string& name, T val ); }; } I have another class named "ITerrainDrawable": namespace simTerrain { class __declspec(dllexport) ITerrainDrawable : public simBase::SimObject { }; } These classes are in different libraries: SimObject is in the simBase library, ITerrainDrawable in simTerrain. Even though ITerrainDrawable derives from SimObject and I link against the simBase library, I get a link error: unresolved external symbol 1>ITerrainDrawable.obj : error LNK2019: unresolved external symbol "public: void __thiscall simBase::SimObject::updateParamValue<float>(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,float)" (??$updateParamValue@M@SimObject@simBase@@QAEXABV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@M@Z) referenced in function "public: void __thiscall simTerrain::ITerrainDrawable::setTerrainSize(float)" (?setTerrainSize@ITerrainDrawable@simTerrain@@QAEXM@Z) 1>ITerrainDrawable.obj : error LNK2019: unresolved external symbol "public: void __thiscall simBase::SimObject::updateParamValue<class osg::Vec4f>(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,class osg::Vec4f)" (??$updateParamValue@VVec4f@osg@@@SimObject@simBase@@QAEXABV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@VVec4f@osg@@@Z) referenced in function "public: void __thiscall simTerrain::ITerrainDrawable::setSatelliteTextureBorders(class osg::Vec2f,class osg::Vec2f)" (?setSatelliteTextureBorders@ITerrainDrawable@simTerrain@@QAEXVVec2f@osg@@0@Z) Why do I get this error? Everything works fine if I don't use the template function, but I need it. If I move this function to the simTerrain library it works fine, but I don't want to duplicate the function because there are many libraries like simTerrain.
C++ does not really support the separate compilation of template code - you need to put the definition of the template in a header file.
1,969,484
1,969,693
How to get details about the selected items using QTreeView?
I'm using QTreeView with QDirModel like this: QDirModel * model = new QDirModel; ui->treeView->setModel(model); ui->treeView->setSelectionMode(QTreeView::ExtendedSelection); ui->treeView->setSelectionBehavior(QTreeView::SelectRows); This works fine, however, I'm not sure how to get the details about the files I select. I've got this so far: QModelIndexList list = ui->treeView->selectionModel()->selectedIndexes(); But not sure what to do now, I'd like to get each file's name and full path. An example would be really great. Thank you.
You can use the fileInfo method of QDirModel to get file details for a given model index object, something like this: QModelIndexList list = ui->treeView->selectionModel()->selectedIndexes(); QDirModel* model = (QDirModel*)ui->treeView->model(); int row = -1; foreach (QModelIndex index, list) { if (index.row()!=row && index.column()==0) { QFileInfo fileInfo = model->fileInfo(index); qDebug() << fileInfo.fileName() << '\n'; row = index.row(); } } hope this helps, regards
1,969,579
1,982,200
Getting a handle to the process's main thread
I have created an additional thread in some small testing app and want to suspend the main thread from this additional thread. The additional thread is created via CreateRemoteThread from an external process. Since SuspendThread needs a HANDLE to the thread which should be suspended, I want to know how to get this HANDLE from code running in my additional thread.
DWORD GetMainThreadId () { const std::tr1::shared_ptr<void> hThreadSnapshot( CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0), CloseHandle); if (hThreadSnapshot.get() == INVALID_HANDLE_VALUE) { throw std::runtime_error("GetMainThreadId failed"); } THREADENTRY32 tEntry; tEntry.dwSize = sizeof(THREADENTRY32); DWORD result = 0; DWORD currentPID = GetCurrentProcessId(); for (BOOL success = Thread32First(hThreadSnapshot.get(), &tEntry); !result && success && GetLastError() != ERROR_NO_MORE_FILES; success = Thread32Next(hThreadSnapshot.get(), &tEntry)) { if (tEntry.th32OwnerProcessID == currentPID) { result = tEntry.th32ThreadID; } } return result; }
1,969,620
1,969,637
c++ float to bool conversion
I'm looking at some 3rd party code and am unsure exactly what one line is doing. I can't post the exact code but it's along the lines of: bool function(float x) { float f = doCalculation(x); return x > 0 ? f : std::numeric_limits<float>::infinity(); } This obviously throws a warning from the compiler about converting float->bool, but what will the actual behaviour be? How does Visual C++ convert floats to bools? At the very least I should be able to replace that nasty infinity...
I think it is a mistake; that function should return a float. This seems logical to me. The conversion from float to bool is the same as float != 0. However, strictly comparing two floating-point numbers is not always what you'd expect, due to precision.
1,969,916
1,970,871
Static analysis tool to detect ABI breaks in C++
It's not very hard to break binary backwards-compatibility of a DSO/shared library with a C++ interface. That said, is there a static analysis tool, which can help detecting such ABI breaks, if it's given two different sets of header files: those of an earlier state of the DSO and those of the current state (and maybe DSOs as well)? Both free and commercial product suggestions are welcome. If it could also warn about bad practices, e.g. inline functions and defaulted function parameters in DSO interfaces, it would be great.
I assume that you are familiar with this tutorial: Binary Compatibility Issues with C++; if not, read it! I've heard about this tool: http://ispras.linuxbase.org/index.php/ABI_compliance_checker, however I have never tested or used it, so I have no opinion. Also this may interest you: Creating Library with backward compatible ABI that uses Boost
1,969,955
2,126,040
How to turn Sequence of images into video using DirectShow filters?
How to turn a sequence of images into video using DirectShow filters? I have image A and image B and image C. I want to create a DirectShow graph (using GraphEdit or with C\C++\C#, for example) to create a video of 3 frames in duration, where the first frame is image A, the second image B, and so on =) How to do it?
Take a look at the Push Source Filters Sample from MSDN: MSDN Push Source Filter sample
1,969,984
1,973,981
How to create a DirectShow graph which would wait for incoming images and add them as frames into video file?
How to create a DirectShow graph which would wait for incoming images and add them as frames into a video file? Using GraphEdit or with C\C++\C# So I want to have a graph which would run and wait for images coming into it in whatever way you think is easiest (for example, we could have a folder from which a DirectShow filter would be able to take images) and insert those images as new frames of our video. So how to do it?
You need a source filter, multiplexor and file writer. The multiplexor and file writer are stock components, but the source filter will be a custom filter. Look at the app source example on www.gdcl.co.uk for an example of a custom source filter that you can feed with frames from your app. The graph will not be time-sensitive: the multiplexing is based on the timestamps attached to the samples, not on the elapsed time. So you set the graph running, and as a frame arrives, you attach a timestamp to it and deliver it via the source filter to the mux. G
1,970,041
1,970,301
Background Gradient with Magick++
How do I create gradients with ImageMagick in C++? I am trying to create a visual representation of a WAV file. I can create an Image with Magick++, draw in the waveform data and save the image as a .png file but it still looks a bit basic. I'd like to give the image background and waveform gradients but I don't know how. Are there any examples of how to create gradients using Magick++? Many thanks, Josh
I believe you would have to use the Pixel class and interpolate Colors to create your own gradient fill. The manual for Magick++ does not indicate that it has native functions for gradient fill. It may also be possible to use the core ImageMagick API for gradient fill. Here's some useful links: http://www.imagemagick.org/Usage/canvas/ http://softwareas.com/imagemagick-one-second-gradient-images Edit - The Magick Core API does have a DrawGradientImage function which may help you out. Here's some more useful links: http://www.imagemagick.org/api/MagickCore/struct__GradientInfo.html http://www.imagemagick.org/api/MagickCore/index.html http://www.imagemagick.org/api/MagickCore/draw_8c_source.html#l03225
1,970,164
1,970,228
Function pointers for winapi functions (stdcall/cdecl)
Please could someone give me a few tips for creating function pointers for MS winapi functions? I'm trying to create a pointer for DefWindowProc (DefWindowProcA/DefWindowProcW) but getting this error: LRESULT (*dwp)(HWND, UINT, WPARAM, LPARAM) = &DefWindowProc; error C2440: 'initializing' : cannot convert from 'LRESULT (__stdcall *)(HWND,UINT,WPARAM,LPARAM)' to 'LRESULT (__cdecl *)(HWND,UINT,WPARAM,LPARAM)' I can't figure out what I need to use because I am not used to the MS ascii/wide macros. By the way, I'm creating a function pointer to make a quick hack, and unfortunately I don't have time to explain why - but regardless, I think this question will be helpful to people who need to create winapi function pointers. Update: This code works, but I'm worried that it is bad practice (and does not adhere to unicode/ascii compile options). Should I define two specifications? LRESULT (__stdcall* dwp)(HWND, UINT, WPARAM, LPARAM) = &DefWindowProc; Update 2: This is nicer (thanks to nobugz): WNDPROC dwp = DefWindowProc;
Fix the calling convention mismatch like this: LRESULT (__stdcall * dwp)(HWND, UINT, WPARAM, LPARAM) = DefWindowProc; A typedef can make this more readable: typedef LRESULT (__stdcall * WindowProcedure)(HWND, UINT, WPARAM, LPARAM); ... WindowProcedure dwp = DefWindowProc; But, <windows.h> already has a typedef for this, you might as well use it: WNDPROC dwp = DefWindowProc;
1,970,294
1,970,357
global variables in C++
So I have something like this #define HASHSIZE 1010081 static struct nlist *hashtab[HASHSIZE]; Now I want to be able to change the HASHSIZE of my hashtab, because I want to test different prime numbers and see which would give me fewer collisions. But arrays do not take variable sizes, so HASHSIZE has to be a constant. Is there a way to go about this?
Why don't you use std::vector instead of raw arrays in C++? Eg: std::vector<nlist *> hashtab; hashtab.resize(<some_value>); Alternatively, g++ supports variable-length arrays (VLAs) as an extension, eg: int HASHSIZE = <some_value>; struct nlist *hashtab[HASHSIZE]; Note, though, that this only works for arrays with automatic storage duration; an array declared static, like yours, still needs a compile-time constant size.
1,970,315
1,980,385
c++ full transparency window but still read text for example
I'm trying to do something like Rainmeter does to its windows, that is, use full transparency in a window while still being able to read the text of each window. Can anyone explain how this is done? How do we set full transparency in a window and still show certain parts of it (like text or other things)? I can do this with regions, but it seems to me that that's not how Rainmeter does it. Does anyone know how I can do this in C++ (WinXP and above)? Thanks
In answer to your comment: To make part of the window transparent, call the UpdateLayeredWindow function and give it a partially transparent background image. You can also pass the ULW_COLORKEY flag instead of giving a partially transparent background image, and every part of the window that is the color you specify will become transparent. (Most people use magenta.) However, if you do it this way, you can't make part of the window semitransparent.
1,970,316
1,970,475
How to handle "item not found" situations in a find function?
I frequently run into a situation where I need to report in some way that finding an item has failed. Since there are many ways to deal with such a situation, I'm always unsure how to do it. Here are a few examples: class ItemCollection { public: // Return size of collection if not found. size_t getIndex(Item * inItem) { size_t idx = 0; for (; idx != mItems.size(); ++idx) { if (inItem == mItems[idx]) { return idx; } } return idx; } // Use signed int and return -1 if not found. int getIndexV2(Item * inItem) { for (int idx = 0; idx != mItems.size(); ++idx) { if (inItem == mItems[idx]) { return idx; } } return -1; } // Throw exception if not found. size_t getIndexV3(Item * inItem) { for (size_t idx = 0; idx != mItems.size(); ++idx) { if (inItem == mItems[idx]) { return idx; } } throw std::runtime_error("Item not found"); } // Store result in output parameter and return boolean to indicate success. bool getIndex(Item * inItem, size_t & outIndex) { for (size_t idx = 0; idx != mItems.size(); ++idx) { if (inItem == mItems[idx]) { outIndex = idx; return true; } } return false; } private: std::vector<Item*> mItems; }; I've used all of these at some point in my (young) programming career. I mostly use the "return size of collection" approach because it is similar to how STL iterators work. However, I'd like to make more educated choices in the future. So, on what design principles should the decision on how to deal with not-found errors be based?
Your functions are more like std::string::find than any of the iterator-based functions in the algorithm header. It returns an index, not an iterator. I don't like that your function returns the collection size to emulate "one past the end." It requires the caller to know the collection size in order to check whether the function succeeded. I like your second function better since it returns a single constant value that always means "not found." The std::string type combines both of those by returning std::string::npos, which has a value of -1, but as an unsigned type. Stay away from the exception approach of your third function unless you have some other function that can tell in advance whether the item would be found. That is, provide some way for callers to avoid the exception. Your fourth function is most appropriate when the returned index would be useful even when the item isn't found. If you were doing a binary search, it could be useful to know the index where the item would be found if it were in the collection. Then you could provide an insert function that accepts that value as a hint, just like std::map::insert. If you can't provide that kind of information, then don't use that kind of function since it's just more cumbersome for callers to use. Prefer your first style instead.
1,970,384
1,970,416
Switch pointers in a function in the C programming language
How do you switch pointers in a function? void ChangePointers(int *p_intP1, int *p_intP2); int main() { int i = 100, j = 500; int *intP1, *intP2; /* pointers */ intP1 = &i; intP2 = &j; printf("%d\n", *intP1); /* prints 100 (i) */ printf("%d\n", *intP2); /* prints 500 (j) */ ChangePointers(intP1, intP2); printf("%d\n", *intP1); /* still prints 100, would like it swapped by now */ printf("%d\n", *intP2); /* still prints 500 would like it swapped by now */ }/* end main */ void ChangePointers(int *p_intP1, int *p_intP2) { int *l_intP3; /* local for swap */ l_intP3 = p_intP2; p_intP2 = p_intP1; p_intP1= l_intP3; }
In C, parameters are always passed by value. Although you are changing the values of the pointer variables inside the called function, the changes are not reflected back to the calling function. Try doing this: void ChangePointers(int **p_intP1, int **p_intP2); /*Prototype*/ void ChangePointers(int **p_intP1, int **p_intP2) /*Definition*/ { int *l_intP3; /* local for swap */ l_intP3 = *p_intP2; *p_intP2 = *p_intP1; *p_intP1= l_intP3; } Corresponding call from main() should be: ChangePointers(&intP1, &intP2);/*Passing in the address of the pointers instead of their values*/
1,970,843
1,971,023
protobuf-net communicating with C++
I'm looking at protobuf-net for implementing various messaging formats, and I particularly like the contract-based approach as I don't have to mess with the proto compiler. one thing I couldn't quite find information on is, does this make it difficult to work cross-platform? there are a few C++ apps that would need to be able to parse PB data, and while I understand that protobuf-net serializes to the PB standard format, if I use the contract approach and not a proto file, how does the C++ side parse the data? can (should?) I write a separate proto file for the (very few) cases where C++ needs to understand the data? and if so, how exactly do I know that the C++ class generated from the proto file is going to match the data from the no-proto-file C# side?
Yes, in theory at least they should match at the binary level, but you might want to limit yourself to types that map simply to ".proto" - so avoid things like DateTime, inheritance ([ProtoInclude]), etc. This also has the advantage that you should be able to use: string proto = Serializer.GetProto<YourType>(); to get the .proto; it (GetProto) isn't 100%, but it works for basic types. But ultimately, the answer is "testing and tweaking"; perhaps design for interop from the outset - i.e. test this early.
1,971,087
1,971,388
long integer multiplication
I am preparing for interview questions, not homework. There is one question about how to multiply very, very long integers. Could anybody offer any source code in C++ to learn from? I am trying to reduce the gap between myself and others by learning from others' solutions to improve myself. Thanks so much! Sorry if you think this is not the right place to ask such questions.
you can use GNU Multiple Precision Arithmetic Library for C++. If you just want an easy way to multiply huge numbers( Integers ), here you are: #include<iostream> #include<string> #include<sstream> #define SIZE 700 using namespace std; class Bignum{ int no[SIZE]; public: Bignum operator *(Bignum& x){ // overload the * operator /* 34 x 46 ------- 204 // these values are stored in the 136 // two dimensional array mat[][]; ------- 1564 // this the value stored in "Bignum ret" */ Bignum ret; int carry=0; int mat[2*SIZE+1][2*SIZE]={0}; for(int i=SIZE-1;i>=0;i--){ for(int j=SIZE-1;j>=0;j--){ carry += no[i]*x.no[j]; if(carry < 10){ mat[i][j-(SIZE-1-i)]=carry; carry=0; } else{ mat[i][j-(SIZE-1-i)]=carry%10; carry=carry/10; } } } for(int i=1;i<SIZE+1;i++){ for(int j=SIZE-1;j>=0;j--){ carry += mat[i][j]+mat[i-1][j]; if(carry < 10){ mat[i][j]=carry; carry=0; } else{ mat[i][j]=carry%10; carry=carry/10; } } } for(int i=0;i<SIZE;i++) ret.no[i]=mat[SIZE][i]; return ret; } Bignum (){ for(int i=0;i<SIZE;i++) no[i]=0; } Bignum (string _no){ for(int i=0;i<SIZE;i++) no[i]=0; int index=SIZE-1; for(int i=_no.length()-1;i>=0;i--,index--){ no[index]=_no[i]-'0'; } } void print(){ int start=0; for(int i=0;i<SIZE;i++) if(no[i]!=0){ start=i; break; // find the first non zero digit. store the index in start. } for(int i=start;i<SIZE;i++) // print the number starting from start till the end of array. 
cout<<no[i]; cout<<endl; return; } }; int main(){ Bignum n1("100122354123451234516326245372363523632123458913760187501287519875019671647109857108740138475018937460298374610938765410938457109384571039846"); Bignum n2("92759375839475239085472390845783940752398636109570251809571085701287505712857018570198713984570329867103986475103984765109384675109386713984751098570932847510938247510398475130984571093846571394675137846510874510847513049875610384750183274501978365109387460374651873496710394867103984761098347609138746297561762234873519257610"); Bignum n3 = n1*n2; n3.print(); return 0; } as you can see, it's multiply 2 huge integer :) ... (up to 700 digits)
1,971,183
1,971,196
when does c++ allocate/deallocate string literals
When is the string literal "hello" allocated and deallocated during the lifetime of the program in this example? void init(char **s) { *s = "hello"; } int f() { char *s = 0; init(&s); printf("%s\n", s); return 0; }
The string literal is placed into a read-only data segment by the compiler and loaded along with the program image. No allocation or deallocation is done at run time; the literal has static storage duration and exists for the program's whole lifetime.
1,971,201
1,971,207
How to overload array index operator for wrapper class of 2D array?
#define ROW 3 #define COL 4 class Matrix { private: int mat[ROW][COL]; //..... //..... }; int main() { Matrix m; int a = m[0][1]; // reading m[0][2] = m[1][1]; // writing } I think it is not possible to overload [][] directly. I think I have to do it indirectly, but how do I implement it?
The easier solution is to use operator() as it allows multiple parameters. class M { public: int& operator()(int x,int y) {return at(x,y);} // .. Stuff to hold data and implement at() }; M a; a(1,2) = 4; The other way is to have the first operator[] return an intermediate object, from which the second operator[] returns the value in the array. class M { public: class R { private: friend class M; // Only M can create these objects. R(M& parent,int row): m_parent(parent),m_row(row) {} public: int& operator[](int col) {return m_parent.at(m_row,col);} private: M& m_parent; int m_row; }; R operator[](int row) {return R(*this,row);} // .. Stuff to hold data and implement at() }; M b; b[1][2] = 3; // This is shorthand for: R row = b[1]; int& val = row[2]; val = 3;
1,971,202
1,971,218
Member access differences
Can someone tell me what the difference is between (*ptr).field and ptr->field? I know it is somehow connected to static and dynamic linking, but I don't know how. Can someone explain the difference and give me an example? edit: if I have this code: Point p; //Point is a class that derives from class Shape Shape *s=&p; //is there a difference if I write: (*s).print(); //print is a virtual func s->print(); // the answers will not be the same, why? TNX!
It has nothing to do with static or dynamic linking. Both expressions access the same member: the ptr->field form is just abbreviated syntax for (*ptr).field, i.e. for accessing a member directly through a pointer. UPDATE: it occurred to me that your original intent was not linking but binding. If that is indeed what you were aiming at, then there is static binding and dynamic binding, which have some relation to the -> operator; see here
1,971,271
1,971,335
VC choosing the wrong operator<< overload only at the first call. Bug?
I spent some time removing all the uninfluent code and here is my problem. --- File.h --- #include <fstream> #include <string> template <typename Element> class DataOutput : public std::basic_ofstream<Element> { public: DataOutput(const std::string &strPath, bool bAppend, bool bBinary) : std::basic_ofstream<Element>( strPath.c_str(), (bAppend ? ios_base::app : (ios_base::out | ios_base::trunc)) | (bBinary ? ios_base::binary : 0)) { if (is_open()) clear(); } ~DataOutput() { if (is_open()) close(); } }; class File { public: File(const std::string &strPath); DataOutput<char> *CreateOutput(bool bAppend, bool bBinary); private: std::string m_strPath; }; --- File.cpp --- #include <File.h> File::File(const std::string &strPath) : m_strPath(strPath) { } DataOutput<char> *File::CreateOutput(bool bAppend, bool bBinary) { return new DataOutput<char>(m_strPath, bAppend, bBinary); } --- main.cpp --- #include <File.h> void main() { File file("test.txt"); DataOutput<char> *output(file.CreateOutput(false, false)); *output << "test"; // Calls wrong overload *output << "test"; // Calls right overload!!! output->flush(); delete output; } And this is the output file after building with cl and options /D "WIN32" /D "_UNICODE" /D "UNICODE" and running --- test.txt --- 00414114test Basically what happens is that the first operator<< call in main is bound to the member method basic_ostream<char>& basic_ostream<char>::operator<<( const void *) whereas the second one is (correctly) bound to basic_ostream<char>& __cdecl operator<<( basic_ostream<char>&, const char *) thus giving a different output. This doesn't happen if i do any of the following: Inline File::CreateOutput Change DataOutput with a non-template one with Element=char Add *output; before the first operator<< call Am i correct in considering this an undesired compiler behavior? Is there any explanation for this? 
Oh, and I'm using VC7 at the moment to test this simplified code, but I have tried the original code in VC9 and VC8 and the same thing was happening. Any help or even a clue is appreciated
Looks like a compiler bug. You might want to try with the latest VC compiler (which at the moment is VC10 Beta 2), and if it's not fixed, follow up with the VC team (you'll need a complete self-contained repro). If it is fixed, you should just use the workaround you found and move on with your life.
1,971,277
1,971,320
any possible explanations for this weird crash?
I have a core file I am examining, and I am just stumped at what the possible causes for this could be. Here is the behavior: extern sampleclass* someobj; void func() { someobj->MemFuncCall("This is a sample str"); } My crash is inside MemFuncCall. But when I examine the core file, someobj has an address, say abc (this address is properly initialized and not corrupted), which is different from the this pointer in the function stacktrace: sampleclass::MemFuncCall(this=xyz, "This is a sample str") I was assuming that the this pointer will always be the same as the address of someobj, i.e. abc should always be equal to xyz. What are the possible cases where these 2 addresses can be different??? FYI, this app is single-threaded.
It is possible. Maybe some kind of buffer overrun? Maybe the calling convention (or definition in general) is wrong for MemFuncCall (there is a mismatch between the header you compiled with and the one MemFuncCall was compiled with). Hard to say. But since this is single-threaded, I would try the following technique. Usually the memory layout of an app is the same between reruns. So start your application under a debugger, stop it immediately, and put two memory breakpoints on addresses 0xabc and 0xxyz. You have a good chance of hitting the breakpoints once someone modifies this memory. Maybe then the stack traces will help?
1,971,311
1,971,326
What does it mean when the first "for" parameter is blank?
I have been looking through some code and I have seen several examples where the first element of a for cycle is omitted. An example: for ( ; hole*2 <= currentSize; hole = child) What does this mean? Thanks.
It just means that the user chose not to set a variable to their own starting value. for(int i = 0; i < x; i++) is equivalent to... int i = 0; for( ; i < x; i++) EDIT (in response to comments): These aren't exactly equivalent: the scope of the variable i is different. Sometimes the latter is used to break up the code. You can also drop out the third statement if your indexing variable is modified within the for loop itself... int i = 0; for(; i < x;) { ... i++ ... } And if you drop out the second statement then you have an infinite loop. for(;;) { runs indefinitely }
1,971,421
1,971,591
stl hash_map slower than simple hash function?
I was comparing a simple hash function that I wrote, which just multiplies by a prime mod another prime number (the table size), and it turns out that stl is slower by 100 times. This is the test method that I wrote: stdext::hash_map<string, int> hashDict; for (int l = 0; l < size; ++l){ hashDict[arr[l]] = l; } long int before3 = GetTickCount(); int c = 0; while (c < size){ hashDict[arr[c]]; c++; } long int after3 = GetTickCount(); cout << "for stl class, the time is " << (after3 - before3) / 1000.0 << '\n'; cout << "the average is " << ((after3 - before3) / 1000.0 ) /long (size) << '\n'; The size of the dictionary is about 200k elements and the table size of the hash function I wrote has 3m entries, so maybe it has to do with the table size of the stl class being very small. Does anyone know what the table size of the stl function is, and the collision rates, etc.?
The VS2008 STL implementation uses the following hash function for strings: size_t _Val = 2166136261U; while(_Begin != _End) _Val = 16777619U * _Val ^ (size_t)*_Begin++; This is no less efficient than yours, certainly not 100x, and I doubt the Builder version is much different. The difference is either in measurement (GetTickCount() is not very precise) or in operations other than computing hash values. I don't know about C++ Builder, but some STL implementations have a lot of extra checks and assertions built into the debug version. Have you tried profiling an optimized release build? If you post a minimal but complete example we can help you figure out what is going on, but without more code there's really not much to say.
1,971,707
1,971,822
How to find table size and memory consumption of STL hash_map?
I want to know how stl hash_map is implemented. How do I find out what the table size is and the memory space the map consumes? This is in C++.
There is no such thing as an "stl hash_map". There is an unordered_map in TR1, but I assume you're not using that or you would have said unordered_map. As someone pointed out, unordered_map has "bucket_count" to determine the number of buckets. You can iterate over each bucket, get its size ("bucket_size(size_t bucket_num)"), multiply that by the size of a pair of key and values, and add them all up to give you a rough estimate of the memory used. There may be non-portable ways which are implementation defined. It will obviously be implementation defined for whatever hash_map class you're using.
1,971,758
1,971,882
C++ context switch and mutex problem
Ok, here is some background on the issue. I have some 'critical' code that I'm trying to protect with a mutex. It goes something like this: Mutex.Lock() // critical code // some file IO Mutex.Unlock(). Now the issue is that my program seems to be 'stuck' due to this. Let me explain with an example. Thread_1 comes in and goes to Mutex.Lock() and starts executing the critical code. In the critical code, it needs to do some file IO. Now at this point, I believe a 'context switch' happens and Thread_2 comes in and blocks on Mutex.Lock() (since Thread_1 has the lock). All seems fine, but in my case the program 'hangs' here. The only thing I can think of is that somehow Thread_2 keeps blocking forever and doesn't switch back to Thread_1? More info: using pthread_mutex_init and pthread_mutex_lock on Linux.
As others have mentioned, you probably have a deadlock. Sidenote: You'll want to make sure that there aren't any uncaught exceptions thrown in the critical block of code. Otherwise the lock will never be released. You can use an RAII lock to overcome this issue (note the member must be a reference, so the guard locks and unlocks the caller's mutex rather than a copy): class SingleLock { public: SingleLock(Mutex &m) : m(m) { m.Lock(); } ~SingleLock() { m.Unlock(); } private: Mutex &m; }; ... { SingleLock lock(mutex); // critical code // some file IO } ...
1,971,961
1,971,982
Is there anything wrong with this shuffling algorithm?
I have been doing a little recreational holiday computing. My mini-project was a simulation of the Italian game of "tomboli". A key building block was a simulation of the following process; The game is controlled by a man with a bag of 90 marbles, numbered 1 to 90. He draws marbles one by one randomly from the bag, each time calling out the marble number to the players. After a little thought I wrote the following code for this building block; // NBR marbles, numbered 1...NBR are in a bag. Simulate randomly // pulling them from the bag, one by one, until the bag is empty void bag( int random_sequence[NBR] ) { int i; // Store each marble as it is pulled out int *store = random_sequence; // Array of marbles still in the bag int not_yet_pulled[NBR]; for( i=0; i<NBR; i++ ) not_yet_pulled[i] = i+1; // eg NBR=90; 1,2,3 ... 90 // Loop pulling marbles from the bag, one each time through for( i=NBR; i>=1; i-- ) { int x = rand(); int idx = x%i; // eg i=90 idx is random in range 0..89 // eg i=89 idx is random in range 0..88 // ... // eg i=1 idx is random in range 0..0 // (so we could optimize when i=1 but not worth the bother) *store++ = not_yet_pulled[idx]; // Replace the marble just drawn (so it cannot be pulled again) // with the last marble in the bag. So; // 1) there is now one less marble in the bag // 2) only marbles not yet pulled are still in the bag // If we happened to pull the last marble in the *current subarray*, this is // not required but does no harm. not_yet_pulled[idx] = not_yet_pulled[i-1]; } } I know there are subtleties and traps all over the place in game simulation with random numbers, so although I am pretty happy with my code my confidence is a little less than 100%. So my questions are; 1) Is there anything wrong with my code ? 2) [if the answer to 1) is no] Am I unknowingly using a standard shuffling algorithm ? 3) [if the answer to 2) is no] How does my algorithm compare to standard alternatives ? EDIT Thanks to all who answered. 
I am going to accept Aidan Cully's answer because it turns out I was rediscovering the Fisher-Yates algorithm, and revealing that gets to the heart of the matter. Of course it is no surprise I could have saved myself time and effort by doing some research up front. But on the other hand it was a fun hobby project. The rest of the simulation was routine; this was the most interesting part, and I would have deprived myself of enjoyment by not having a go myself. Additionally, I was trying to simulate a man taking marbles from a bag, and it was fairly late in the piece that I realized that the situation was exactly analogous to shuffling cards. Another point of interest is that there is a small flaw, identified by Ken, who points out that the oft-repeated pattern rand()%N is not a really good way of picking a random number from the range 0..N-1. Finally, my version of Fisher-Yates lacks the elegant trick that allows the nice property of shuffling in place. As a result, my algorithm would end up with an equally random but reversed shuffle.
You're using the Fisher-Yates shuffling algorithm.
1,972,003
2,391,089
How to compile C code with anonymous structs / unions?
I can do this in c++/g++: struct vec3 { union { struct { float x, y, z; }; float xyz[3]; }; }; Then, vec3 v; assert(&v.xyz[0] == &v.x); assert(&v.xyz[1] == &v.y); assert(&v.xyz[2] == &v.z); will work. How does one do this in c with gcc? I have typedef struct { union { struct { float x, y, z; }; float xyz[3]; }; } Vector3; But I get errors all around, specifically line 5: warning: declaration does not declare anything line 7: warning: declaration does not declare anything
According to http://gcc.gnu.org/onlinedocs/gcc/Unnamed-Fields.html#Unnamed-Fields, -fms-extensions will enable the feature you (and I) want.
1,972,058
1,972,070
What represents Math.IEEERemainder(x,y) in C++?
What represents Math.IEEERemainder(x,y) in C++?
Try the fmod function. Be aware, though, that fmod truncates the quotient toward zero, while Math.IEEERemainder rounds it to the nearest integer; the C99 remainder function matches the IEEE 754 (and .NET) semantics exactly.
1,972,079
1,972,590
How to tell the controller what view to call?
I have a virtual function called handlePathChange() in my Controller class. It checks the current URL and should dispatch the right view for it. Here's the code I have so far: void Controller::handlePathChange() { if ( app->internalPathMatches(basePath) ) { string path = app->internalPathNextPart(basePath); if ( path.empty() ) // If it's empty it is known that the index of the controller should show up index(); // else if ( path == ?? ) each controller has its own routes // call_some_unknown_function(); } } How can I generalize this? I was thinking about two options: Call a pure virtual function called dispatch() that will match the right path to the right function in the derived class. This solution violates DRY, as basically you will write the same code over and over again. Create a hash map of std::function, but then if a part of the URL is a parameter the view won't be found, so that option isn't good enough. Any ideas?
I realize your post uses a c++ example, but if you don't mind reading some c#, this article by Scott Guthrie is a great overview of how the ASP.NET MVC framework implements its routing: http://weblogs.asp.net/scottgu/archive/2007/12/03/asp-net-mvc-framework-part-2-url-routing.aspx I think you will find that article very helpful. In an overly simplified sort-of-way, it is similar to your option #2, yet it always checks for a parameter. If the parameter is not provided, it uses the same routing rule, but provides a "default" value and sends the request to the correct view. That strategy avoids the problem you mention where you can't find the appropriate view if the parameter is specified. Hope this helps.
1,972,086
1,972,192
need a cast syntax to access an old c api
I'm trying to write a glue function between two data types and I can't seem to get the compiler to be happy. On one side, I have a pointer to a chunk of data that is logically a n x 2 array, but is declared as: double* pData=new double[2*n]; On the other side, I have a c function that is declared as void Function(double data[][2], int n); If I remember my c syntax, the data[][2] is really just a pointer to a contiguous chunk of memory, but the compiler knows the size of the second dimension is 2. So I'd like to take pData and pass it into Function(), without a memcpy. I just can't seem to write the cast. I thought something like Function((double [][2])pData,n) would work, but the compiler (MSVC 8) doesn't like that. Can anyone let me know the proper way to write the cast to get the compiler to be happy.
void Function(double data[][2], int n); double* pData = new double[2*n]; Function((double (*)[2])pData, n); Function parameters of the form T[] are identical to T* (not even T* const that some people expect). This is a special case for parameter types in both C and C++. So your double[][2] follows this rule, with T being double[2]. Typedefs help illustrate this: typedef double T[2]; void Function(T data[], int n); // identical to: void Function(double data[][2], int n); // also identical to: void Function(double (*data)[2], int n); So you write T* when T is double[2] as double (*)[2]. You could also do this: void Function(double data[][2], int n); double (*pData)[2] = new double[n][2]; Function(pData, n); Which requires no cast because pData is already the correct type. Or with typedefs: typedef double T[2]; T* pData = new T[n];
1,972,099
1,972,123
Win API VirtualQueryEx Function,ERROR_BAD_LENGTH
Hi, I am trying to call the VirtualQueryEx function to get some information about memory protection; however, my code gives me error 0x18 (ERROR_BAD_LENGTH) and I don't know what's wrong with my code; code snippet: PMEMORY_BASIC_INFORMATION alte; VirtualQueryEx(processhandle,(LPVOID) (address),alte,sizeof(PMEMORY_BASIC_INFORMATION)); thanks for your help
alte needs to be declared as a MEMORY_BASIC_INFORMATION, not a pointer to one. MEMORY_BASIC_INFORMATION alte; VirtualQueryEx(processhandle,(LPVOID) (address),&alte,sizeof(MEMORY_BASIC_INFORMATION)); edit: Note it's sizeof(MEMORY_BASIC_INFORMATION), not sizeof(PMEMORY_BASIC_INFORMATION). Actually, it's better to write it this way anyway: VirtualQueryEx(processhandle,(LPVOID) (address),&alte,sizeof(alte));
1,972,186
1,972,294
Building my project with make
I'm working to improve the long languishing Linux build process for Bitfighter, and am having problems with make. My process is actually quite simple, and since make is (nearly) universal, I want to stick with it if I can. Below I've attached my current Makefile, which works, but clumsily so. I'm looking for ways to improve it, and have three specific questions at this point. First, the project can be built with several options. Let's take debug and dedicated for this example. The dedicated option will exclude all UI code, and create a more efficient binary good for hosting (but not playing) games. The debug option adds a flag to the compiler that activates debugging code. One might want to build the game with either, both, or neither of these options. So the question is, how do I make this work? As you can see from the comments in the makefile below, debugging is enabled by setting DFLAGS=-DTNL_DEBUG. I'd like to have the user type make dedicated debug rather than make dedicated DFLAGS=-DTNL_DEBUG How can I rewrite my makefile so that this will work? Secondly, when I install the lualibs package on different versions of Linux, I get different libraries. For example, on Ubuntu, when I install the lualib package with apt-get, I get lua5.1.a in my /usr/lib folder. On Centos, when I install the same thing with yum, I end up with liblua.a in my /usr/lib folder. How can I get make to figure out which library I've got, and link that in? Obviously the -l directive is not smart enough for that. I'd like the user to not have to worry about where Lua ends up when it gets installed, and for the makefile to just work. Finally, is there any way to get make to detect whether certain required packages (freeglut, for example) have not been installed, and either install them automatically or at least alert the user to the fact they need to get them installed (as opposed to simply terminating with a cryptic error message)? Thanks!! Here is my Makefile. 
# Bitfighter Makefile ####################################### # # Configuration # # # Some installs of lua call the lua library by different names, and you # may need to override the default lua library path. For the ServerHitch # CENTOS installs, for example, you will need to specify the lua library # on the make command line: # LUALIB=/usr/lib/liblua.a # # # To compile Bitfighter with debugging enabled, specify # DFLAGS=-DTNL_DEBUG # on the make command line # # # Building with make on Windows is still highly experimental. You will # probably need to add # WFLAGS="-DWIN32 -D_STDCALL_SUPPORTED" THREADLIB= GLUT=-lglut32 INPUT=winJoystick.o # to the make command line to have any hope of getting it to work! :-) # # ####################################### CC=g++ -g -I../tnl -I../glut -I../openal -DTNL_ENABLE_LOGGING THREADLIB= -lpthread GLUT=-lGL -lGLU -lglut INPUT=linuxInput.o OBJECTS_ZAP=\ CTFGame.o\ ...many more... BotNavMeshZone.o\ ../master/masterInterface.o\ CFLAGS= DFLAGS= EXEFILE=bitfighter OPENAL=../openal/linux/libopenal.a LUALIB=-llua5.1 WFLAGS= .c.o: $(CC) $(DFLAGS) $(WFLAGS) -c $(CFLAGS) $< .cpp.o : $(CC) $(DFLAGS) $(WFLAGS) -c $(CFLAGS) $< default: ../exe/bitfighter bitfighter: ../exe/bitfighter dedicated: CFLAGS=-DZAP_DEDICATED dedicated: GLUT= dedicated: OPENAL= dedicated: EXEFILE=bitfighterd dedicated: ../exe/bitfighter ../exe/bitfighter: $(OBJECTS_ZAP) $(CC) -o ../exe/$(EXEFILE) $(OBJECTS_ZAP) ../tnl/libtnl.a \ ../libtomcrypt/libtomcrypt.a \ $(OPENAL) $(GLUT) $(THREADLIB) $(LUALIB) -lstdc++ -lm ../master/masterInterface.o: make -C ../master clean: rm -f $(OBJECTS_ZAP) ../exe/bitfighter ../exe/bitfightered cleano: rm -f $(OBJECTS_ZAP)
Raw make doesn't really support any of these uses. make considers targets passed in on the command line to be different programs to be built, or different actions to take, and it has no concept of using two targets passed in to switch independent options for a single build. make also doesn't have any built in support for checking for versions of packages installed. It's a bit of a steep learning curve, but the most common solution for all of these problems is to use the GNU autotools toolchain (Autoconf and Automake, specifically). These tools have been written to help write portable, configurable build systems, that can probe the system for libraries in various locations, and generate Makefiles based on configuration options and the user's system. If you have ever run ./configure; make; make install, you have probably used a configure script generated with Autoconf and Automake. The Wikipedia article provides a bit of an overview, and the Automake manual provides a tutorial introducing the toolchain. For your usage, what you would probably want to do is create a configure using Autoconf that takes options like --enable-debug and --enable-dedicated, to set options for generating your Makefile. You could then port your Makefile to Automake, or you could simply turn your Makefile into a Makefile.in with a few variables that Autoconf will fill in when generating the Makefile. While the GNU Autotools system is very complete, and supports a lot of platforms, it is a bit baroque. There are some alternative build systems that support some similar auto-configuration behavior, like CMake and SCons, that might be worth looking into if Autotools feels like too much. 
For the specific task of detecting certain libraries, and finding the options you need to link to them, pkg-config can be used; however, not all libraries install pkg-config definitions, and not all systems even have pkg-config installed, so it's not a universal solution, but can be a nice quick and easy way to get something building without too much messing with options in the cases in which it does work.
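For the Lua-library problem specifically, a lighter-weight trick than full Autotools is to probe from inside the Makefile itself; something along these lines (an untested sketch — package and library names vary by distro):

```make
# Try pkg-config first, then fall back to whichever static lib exists.
LUALIB := $(shell pkg-config --libs lua5.1 2>/dev/null)
ifeq ($(LUALIB),)
  ifneq ($(wildcard /usr/lib/liblua.a),)
    LUALIB := /usr/lib/liblua.a
  else
    LUALIB := -llua5.1
  endif
endif
```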
1,972,231
1,972,264
Question regarding libraries and framework
Sorry, I'm a beginner. From what I know, there are a number of libraries and frameworks out there for the C++ language. My question is: when we create an application using a framework and libraries, do the users of the application need to install the framework or the libraries on their PC? Thank you.
It depends whether the library you are using is statically or dynamically linked. In the former case, it is part of the executable file that you distribute. In the latter case, it is an extra file (or set of files) with extensions such as .so or .dll, which you should distribute with your app.
1,972,239
1,972,272
Qt, Color Picker Dialog?
Is there a color picker dialog for Qt like the following? It also needs to have an OnColorChanged signal that is emitted whenever the selected color changes; I want to give a live preview while the user is changing the colors, which is why. Using Google I could only find one that was a triangle inside a circle, and personally I think it looks ugly.
QColorDialog does exactly what you want. (It is easy to find when you Ctrl-F through the list of Qt classes for "color".) It also emits a currentColorChanged(const QColor &) signal whenever the user changes the selected color, which you can connect to for your live preview.
1,972,403
1,972,424
stl::deque's insert(loc, val) - inconsistent behavior at end of deque vs other locations?
Using http://www.cppreference.com/wiki/stl/deque/insert as a reference, I was inserting values into a deque at certain locations. For example, if deque A was: a, b, d, e, g with an iterator pointing to d, i can: A.insert(iter, c); // insert val c before loc iter //deque is now a, b, c, d, e, g and the iter still points to d. However, when iter points to g, the last element: A.insert(iter, f); //deque is now a, b, c, d, e, f, g but the iter now points to f!! My current workaround is: iter = A.insert(loc, val); // point iterator to element that was inserted before loc iter++; // point iter back to loc I haven't tested this again or anything, it was annoying to have spent so much time tracking a bug down, just to discover insert()'s inconsistent behavior, in stl, of all places. Why does insert() behave differently when at the end, compared to at any other location? Or is it that I did something wrong?
Performing an insert invalidates all existing iterators, so you will get unpredictable behavior (possibly a crash) by reusing the old iterator. Your workaround is the correct solution. Edit: Regarding your second question, you are missing braces after if (*iter == 'g'). In the future though, please put new questions in a new post.
1,972,552
1,972,647
How to convert a static library project into a dll project in VS2005
When I create a project in VS2005, I can create a Win32 -> Win32 Project and choose "console application", "dll", or "static library". If I created a static library project, how can I convert it to a DLL project? I found that in the settings panel of the created project, under General -> Configuration Type, I can switch Static Library (.lib) to DLL. However, after this setting I do get a DLL, but I do not get a .lib with it, and I cannot use it in other projects. How do I convert a static library project into a DLL project in VS2005? Many thanks!
The way I've done this, and this may not be the "best" way, was to create a new project with the right settings (DLL in this case) and then create the stub methods with the wizards that I want to expose from the static library. Then you have two choices, you can leave the real code in the static library and just have your stubs in the DLL call into the static library, or you can copy the code out of the static library project and retire the static library entirely. The advantage of the first option is that you can support both the static library and the DLL without having to duplicate a lot of work. But if you can get rid of supporting the static library entirely the second option is probably better because you don't have to make changes in two different projects (adding the stub method in the DLL and the real code to the static lib) every time you want to add a new method/property. YMMV
1,972,722
1,972,859
Lua vs. XML for data storage
Many of us have been indoctrinated in using XML for storing data. Its benefits and drawbacks are generally known, and I surely don't want to discuss them here. However, in the project I'm writing in C++, I'm also using Lua. I've been very surprised how well Lua can be used to store and handle data. Yet this aspect of Lua is less recognized, at least in the game programming world. I'm aware that XML has its advantages in cases like sending data over the internet, in places where safety comes into play (using data downloaded from the net, for example, or loading user-editable configuration files), and finally in cases where the same data is being read by programs in different languages. However, once I learned how nice and easy it is to handle data using Lua (especially having luabind to back you up!), I started to wonder: is there any reason to use XML to store game data if we already use Lua anyway? Blizzard, while using Lua for scripting the UI, still stores the layout in XML. Is the reason something that is only UI related? What are the drawbacks of using Lua as a data storage language?
This might not be the kind of answer you expected, but it might help you make your decision. Blizzard (WoW) uses XML to define the UI. It's kinda like XAML in C#, just a lot less powerful, and most addons just use XML to bootstrap the addon and then build the UI in Lua code. Also, WoW actually stores addon "Saved Variables" in .lua files. In my opinion it doesn't matter that much. Choose something you like and which is easy to use for those who are going to extend your engine. The good thing about XML is that there are A LOT of tools and code already written to test, write and parse XML, which means it could save you some time. For example, XML Schemas are very useful for validating user-written files (security is just a side effect; the good thing is that if it passes your schema, the data is most likely 100% safe and ready to be plugged into your engine) and there are quite a few validators already written for you to use. Then again, some users are scared of XML files (even though they are very readable, maybe too readable) and would prefer something "simpler". If it's just for storage (not configuration) then no one is going to edit those files anyway in most cases. XML will also take more space than a Lua var dump (this shouldn't matter unless you have a lot of data). I don't think you can go wrong here. Blizzard is using Lua for storage and I quite like how it works.
1,972,735
1,972,746
C++ Programming Contests
I would like to test my C++ programming skill level by competing with others. What programming contests are there for C++?
There's Google Code Jam, but only once a year; TopCoder, with many more contests; and others listed here.
1,972,765
1,972,773
mmap problem, allocates huge amounts of memory
I got some huge files I need to parse, and people have been recommending mmap because this should avoid having to allocate the entire file in-memory. But looking at 'top' it does look like I'm opening the entire file into the memory, so I think I must be doing something wrong. 'top shows >2.1 gig' This is a code snippet that shows what I'm doing. Thanks #include <stdio.h> #include <stdlib.h> #include <err.h> #include <fcntl.h> #include <sysexits.h> #include <unistd.h> #include <sys/stat.h> #include <sys/types.h> #include <sys/mman.h> #include <cstring> int main (int argc, char *argv[] ) { struct stat sb; char *p,*q; //open filedescriptor int fd = open (argv[1], O_RDONLY); //initialize a stat for getting the filesize if (fstat (fd, &sb) == -1) { perror ("fstat"); return 1; } //do the actual mmap, and keep pointer to the first element p =(char *) mmap (0, sb.st_size, PROT_READ, MAP_SHARED, fd, 0); q=p; //something went wrong if (p == MAP_FAILED) { perror ("mmap"); return 1; } //lets just count the number of lines size_t numlines=0; while(*p++!='\0') if(*p=='\n') numlines++; fprintf(stderr,"numlines:%lu\n",numlines); //unmap it if (munmap (q, sb.st_size) == -1) { perror ("munmap"); return 1; } if (close (fd) == -1) { perror ("close"); return 1; } return 0; }
No, what you're doing is mapping the file into memory. This is different to actually reading the file into memory. Were you to read it in, you would have to transfer the entire contents into memory. By mapping it, you let the operating system handle it. If you attempt to read or write to a location in that memory area, the OS will load the relevant section for you first. It will not load the entire file unless the entire file is needed. That is where you get your performance gain. If you map the entire file but only change one byte then unmap it, you'll find that there's not much disk I/O at all. Of course, if you touch every byte in the file, then yes, it will all be loaded at some point but not necessarily in physical RAM all at once. But that's the case even if you load the entire file up front. The OS will swap out parts of your data if there's not enough physical memory to contain it all, along with that of the other processes in the system. The main advantages of memory mapping are: you defer reading the file sections until they're needed (and, if they're never needed, they don't get loaded). So there's no big upfront cost as you load the entire file. It amortises the cost of loading. The writes are automated, you don't have to write out every byte. Just close it and the OS will write out the changed sections. I think this also happens when the memory is swapped out as well (in low physical memory situations), since your buffer is simply a window onto the file. Keep in mind that there is most likely a disconnect between your address space usage and your physical memory usage. You can allocate an address space of 4G (ideally, though there may be OS, BIOS or hardware limitations) in a 32-bit machine with only 1G of RAM. The OS handles the paging to and from disk. And to answer your further request for clarification: Just to clarify. So If I need the entire file, mmap will actually load the entire file? Yes, but it may not be in physical memory all at once. 
The OS will swap out bits back to the filesystem in order to bring in new bits. But it will also do that if you've read the entire file in manually. The difference between those two situations is as follows. With the file read into memory manually, the OS will swap parts of your address space (which may or may not include the data) out to the swap file. And you will need to manually rewrite the file when you're finished with it. With memory mapping, you have effectively told it to use the original file as an extra swap area for that file/memory only. And, when data is written to that swap area, it affects the actual file immediately. So no having to manually rewrite anything when you're done and no affecting the normal swap (usually). It really is just a window onto the file.
1,972,888
2,986,499
Large number of simultaneous long-running operations in Qt
I have some long-running operations that number in the hundreds. At the moment they are each on their own thread. My main goal in using threads is not to speed these operations up. The more important thing in this case is that they appear to run simultaneously. I'm aware of cooperative multitasking and fibers. However, I'm trying to avoid anything that would require touching the code in the operations, e.g. peppering them with things like yieldToScheduler(). I also don't want to prescribe that these routines be stylized to be coded to emit queues of bite-sized task items...I want to treat them as black boxes. For the moment I can live with these downsides: Maximum # of threads tend to be O(1000) Cost per thread is O(1MB) To address the bad cache performance due to context-switches, I did have the idea of a timer which would juggle the priorities such that only idealThreadCount() threads were ever at Normal priority, with all the rest set to Idle. This would let me widen the timeslices, which would mean fewer context switches and still be okay for my purposes. Question #1: Is that a good idea at all? One certain downside is it won't work on Linux (docs say no QThread::setPriority() there). Question #2: Any other ideas or approaches? Is QtConcurrent thinking about this scenario? (Some related reading: how-many-threads-does-it-take-to-make-them-a-bad-choice, many-threads-or-as-few-threads-as-possible, maximum-number-of-threads-per-process-in-linux)
It's been 6 months, so I'm going to close this. Firstly I'll say that threads serve more than one purpose. One is speedup...and a lot of people are focusing on that in the era of multi-core machines. But another is concurrency, which can be desirable even if it slows the system down when taken as a whole. Yet concurrency can be achieved using mechanisms more lightweight than threads, although it may complicate the code. So this is just one of those situations where the tradeoff of programmer convenience against user experience must be tuned to fit the target environment. Consider how Google's process-per-tab approach in Chrome would have been ill-advised in the era of Mosaic (even if process isolation was preferable with all else being equal). If the OS, memory, and CPU couldn't give a good browsing experience...they wouldn't do it that way now. Similarly, creating a lot of threads when there are independent operations you want to be concurrent saves you the trouble of sticking in your own scheduler and yield() operations. It may be the cleanest way to express the code, but if it chokes the target environment then something different needs to be done. So I think I'll settle on the idea that in the future, when our hardware is better than it is today, we'll probably not have to worry about how many threads we make. But for now I'll take it on a case-by-case basis. i.e. If I have 100 of concurrent task class A, and 10 of concurrent task class B, and 3 of concurrent task class C... then switching A to a fiber-based solution and giving it a pool of a few threads is probably worth the extra complication.
1,972,953
1,972,967
using exit(1) to return from a function
linux gcc 4.4.1 C99

I am just wondering, is there any advantage to using the following techniques? In some code I was reading, I noticed the exit number went up in value, as displayed in this code snippet:

/* This would happen in 1 function */
if (test condition 1) {
    /* something went wrong */
    exit(1);
}
if (test condition 2) {
    /* something went wrong with another condition */
    exit(2);
}

or doing the following and just returning:

/* This would happen in 1 function */
if (test condition 1) {
    /* something went wrong */
    return;
}
if (test condition 2) {
    /* something went wrong with another condition */
    return;
}
exit() exits your entire program, and reports back the argument you pass it. This allows any programs that are running your program to figure out why it exited incorrectly. (1 could mean failure to connect to a database, 2 could mean unexpected arguments, etc). Return only returns out of the current function you're in, not the entire program.
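To make the distinction concrete, here is a hedged sketch (the function and condition names are invented for illustration): rather than calling exit() deep inside a helper, a common pattern is to return a status code up to main(), which then decides whether to exit() with it. Only the process-wide exit code is visible to the parent program.

```cpp
#include <cstdlib>

// Hypothetical status codes, mirroring the exit(1)/exit(2) cases above.
enum Status { OK = 0, BAD_CONDITION_1 = 1, BAD_CONDITION_2 = 2 };

// Returns a status instead of terminating the whole process.
// The caller (ultimately main) can decide to exit() with this value.
int check(bool condition1Failed, bool condition2Failed) {
    if (condition1Failed) {
        return BAD_CONDITION_1; // caller may translate this into exit(1)
    }
    if (condition2Failed) {
        return BAD_CONDITION_2; // caller may translate this into exit(2)
    }
    return OK;
}

// In main() you would then write something like:
//   int status = check(a, b);
//   if (status != OK) exit(status);  // parent process sees 1 or 2
```

The advantage of this style is that the helper stays testable and reusable, while the decision to terminate the process is made in exactly one place.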
1,973,471
2,089,723
vim - indentation of C++ constructor initialization list problem
I'm using vim 7.0. I want the following code to be indented in the following way (initialization list at the same indentation as the constructor):

A::A() :
a1(10),
a2(10),
a3(10)
{
}

According to vim help this can be done by setting:

set cino+=i0

But this setting yields (only a1 is indented correctly):

A::A() :
a1(10),
    a2(10),
    a3(10)
{
}

Setting cino+=i1 indents correctly a1..a3, but with 1 space of indentation.
According to documentation and a little experiment, the following could help: :set cino=i-s Seems to be indenting init list exactly as you wanted.
1,973,788
2,057,641
Tracking Lua tables in C
I have C++ objects and I have Lua objects/tables. (Also have SWIG C++ bindings.) What I need to be able to do is associate the two objects so that if I do, say:

CObject* o1 = getObject();
o1->Update();

it will do the equivalent Lua:

myluatable1.Update();

So far I can imagine that CObject::Update would have the following code:

void CObject::Update(){
    // Acquire table.
    // ???
    // Do the following operations on the table.
    lua_getfield(L, -1, "Update");
    lua_pcall(L, 0, 0, 0);
}

How would I store/set the Lua table to be used, and what would go in the // ??? above to make the Update call work?
I can't believe nobody noticed this! http://www.lua.org/pil/27.3.2.html

It is a section of the Lua API for storing references to Lua objects and tables, and returning reference keys that can be stored in C structures!!
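To make that concrete, here is a rough, untested sketch of the pattern that section describes (the member name tableRef_ is my own invention): store a reference to the table in the Lua registry with luaL_ref(), keep the returned integer in the C++ object, and push the table back onto the stack with lua_rawgeti() when you need it.

```cpp
// Storing the table (e.g. when the Lua table is first bound to the CObject):
// the table is on top of the stack; luaL_ref pops it and returns an
// integer key into the registry that stays valid until unref'd.
//     tableRef_ = luaL_ref(L, LUA_REGISTRYINDEX);

void CObject::Update() {
    // Acquire table: push it back onto the stack via the stored reference.
    lua_rawgeti(L, LUA_REGISTRYINDEX, tableRef_);
    // Do the following operations on the table.
    lua_getfield(L, -1, "Update");
    lua_pcall(L, 0, 0, 0);
    lua_pop(L, 1); // pop the table again
}

// When the CObject dies, release the reference so Lua can collect the table:
//     luaL_unref(L, LUA_REGISTRYINDEX, tableRef_);
```

Note that if Update is written as a method expecting self (myluatable1:Update()), you would also push the table as the call's first argument before lua_pcall(L, 1, 0, 0).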
1,973,815
1,973,879
Code compiles locally on g++ 4.4.1, but not on codepad.org (g++ 4.1.2) (reference to reference problem)?
I was writing a test case out to tackle a bigger problem in my application. I ended up trying some code out on codepad and discovered that some code that compiled on my local machine (g++ 4.4.1, with -Wall) didn't compile on codepad (g++ 4.1.2), even though my local machine has a newer version of g++. Codepad calls this a reference to reference error, which I looked up and found a little information on. It looks like it's not a good idea to have an STL container of references. Does this mean I need to define my own PairPages class? And if this is the case, why did it compile locally in the first place? What's going on?

codepad link: http://codepad.org/UAaJI1rl

#include <deque>
#include <utility>
#include <iostream>

using namespace std;

class Page {
public:
    Page() : number_(++count) {}
    int getNum() const { return number_; }
private:
    static int count;
    int number_;
};

int Page::count = 0;

class Book {
public:
    Book() : currPageIdx_(3) {
        int numPages = 5;
        while (numPages > 0) {
            pages_.push_back(Page());
            numPages--; // oops
        }
    }
    pair<const Page&, const Page&> currPages() { return pagesAt(currPageIdx_); }
    pair<const Page&, const Page&> pagesAt(int pageNo) {
        return make_pair(pages_[pageNo - 1], pages_[pageNo]);
    }
    //const Page& currPages() { return pagesAt(currPageIdx_); }
    //const Page& pagesAt(int pageNo);
private:
    deque<Page> pages_;
    int currPageIdx_;
};

int main() {
    Book book;
    cout << book.pagesAt(3).first.getNum() << endl;
    cout << book.currPages().first.getNum() << endl;
}
A vector (or any STL container) of references is indeed a bad idea, as obvious when you simply look at requirements for element type T of any STL container (ISO C++03 23.1[lib.container.requirements]). It starts off by saying that "containers are objects that store other objects". We can stop right here, because a reference is not an object in C++ (unlike, say, a pointer; note that "object" in C++ parlance doesn't mean "instance of class"!). But, furthermore, it requires T to be Assignable, the requirements for which refer to type T& - if T is itself some reference type U&, then the constructed type would be U& &, which (reference to reference) is illegal in C++. If you really want to have a container that doesn't manage lifetimes of objects, then you should use a container of pointers. If you prefer the safety of references (e.g. lack of pointer arithmetic and null value), you can use std::tr1::reference_wrapper<T> class, which is copy constructible and assignable wrapper for a reference.
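As a quick illustration of the reference_wrapper suggestion, here is a sketch using std::reference_wrapper from C++11 (the standardized successor of std::tr1::reference_wrapper); the function names are my own:

```cpp
#include <vector>
#include <functional> // std::reference_wrapper, std::ref

// Sum through a container of reference_wrappers; each element refers to
// an existing int whose lifetime is managed elsewhere.
int sumRefs(const std::vector<std::reference_wrapper<int> >& refs) {
    int total = 0;
    for (std::size_t i = 0; i < refs.size(); ++i) {
        total += refs[i].get(); // .get() yields the underlying int&
    }
    return total;
}

// Writing through the wrappers modifies the original objects.
void doubleAll(std::vector<std::reference_wrapper<int> >& refs) {
    for (std::size_t i = 0; i < refs.size(); ++i) {
        refs[i].get() *= 2;
    }
}
```

Unlike a (forbidden) container of raw references, reference_wrapper is copy constructible and assignable, so it satisfies the container requirements while keeping reference semantics for reads and writes.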
1,974,006
1,974,201
Calculate the gradient for an histogram in c++
I calculated the histogram (a simple 1d array) for a 3D grayscale image. Now I would like to calculate the gradient for this histogram at each point. So this would actually mean I have to calculate the gradient for a 1D function at certain points. However I do not have a function. So how can I calculate it with concrete x and y values?

For the sake of simplicity, could you please explain this to me using an example histogram, for example with the following values (x is the intensity, and y the frequency of this intensity):

x1 = 1; y1 = 3
x2 = 2; y2 = 6
x3 = 3; y3 = 8
x4 = 4; y4 = 5
x5 = 5; y5 = 9
x6 = 6; y6 = 12
x7 = 7; y7 = 5
x8 = 8; y8 = 3
x9 = 9; y9 = 5
x10 = 10; y10 = 2

I know that this is also a math problem, but since I need to solve it in C++ I thought you could help me here. Thank you for your advice. Marc
I think you can calculate your gradient using the same approach used in image border detection (which is a gradient calculus). If your histogram is in a vector you can calculate an approximation of the gradient (an approximation because you are working with discrete data instead of continuous data):

for each point x in the histogram:
    gradient[x] = hist[x+1] - hist[x]

This is a very simple way to do it, but I'm not sure if it is the most accurate.

Edited: Other operators may emphasize small differences (small gradients become more emphasized). The approach derives from the definition of the derivative:

f'(x) = lim (delta -> 0) of (f(x + delta) - f(x)) / delta

delta tends to 0 (in order to avoid division by zero) but is never zero. As in a computer's memory this is impossible, the smallest delta we can get is 1 (because 1 is the smallest distance between two points in an image (or histogram)). Substituting delta = 1 we get:

(f(x + 1) - f(x)) / 1 = f(x + 1) - f(x)  =>  vet[x+1] - vet[x]
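A minimal sketch of that forward-difference approach, applied to the example histogram from the question (the function name is my own):

```cpp
#include <vector>
#include <cstddef>

// Forward-difference gradient: grad[i] = hist[i+1] - hist[i].
// The result has one element fewer than the input, since the last
// point has no right neighbour.
std::vector<int> histogramGradient(const std::vector<int>& hist) {
    std::vector<int> grad;
    if (hist.size() < 2) return grad;
    grad.reserve(hist.size() - 1);
    for (std::size_t i = 0; i + 1 < hist.size(); ++i) {
        grad.push_back(hist[i + 1] - hist[i]);
    }
    return grad;
}
```

For the question's y values {3, 6, 8, 5, 9, 12, 5, 3, 5, 2} this yields {3, 2, -3, 4, 3, -7, -2, 2, -3}; a large positive value marks a steep rise in frequency, a large negative value a steep fall.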
1,974,301
1,974,308
Can you place a complex condition into a for loop?
while (status)
    for (int i = 0; i < 3; i++)

Is the following syntactically correct?

for (int i = 0; i < 3; i++ && status)

I am trying to have the for loop break early if status is false.
Syntactically, you might want to use:

for (int i = 0; i < 3 && status; i++)

which is valid. Some consider it bad form though, as it leads to more complicated loops and annoyed maintenance programmers. Another alternative you might want to explore would be:

for (int i = 0; i < 3; i++) {
    if (!status) {
        break;
    }
}
1,974,487
1,974,494
Extreme big/small number programming
I am trying to do some extremely precise math calculations on very big/small numbers. The very big numbers may have 10 - 50 digits and the very small numbers may have 10 - 50 decimal places. Can C++ do this? If not, is there any other programming language that can handle this kind of number?
C++ can do it with a library, for example the GNU Multiple Precision Arithmetic library.
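To get a feel for what such a library does under the hood, here is a hedged, minimal sketch of schoolbook addition on decimal strings; it is nothing like GMP's optimized limb arithmetic, but it handles 50-digit operands without any loss of precision:

```cpp
#include <string>
#include <algorithm>

// Add two non-negative decimal integers given as strings, digit by digit
// from the right, propagating carries -- the schoolbook algorithm.
std::string addDecimalStrings(const std::string& a, const std::string& b) {
    std::string result;
    int i = static_cast<int>(a.size()) - 1;
    int j = static_cast<int>(b.size()) - 1;
    int carry = 0;
    while (i >= 0 || j >= 0 || carry != 0) {
        int sum = carry;
        if (i >= 0) sum += a[i--] - '0';
        if (j >= 0) sum += b[j--] - '0';
        result.push_back(static_cast<char>('0' + sum % 10));
        carry = sum / 10;
    }
    std::reverse(result.begin(), result.end());
    return result;
}
```

A real library like GMP works on machine-word "limbs" rather than decimal digits and supports the full set of operations (multiplication, division, arbitrary-precision floats), so for production use prefer the library over rolling your own.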
1,974,828
1,975,679
How do we tell if a C++ application is launched as a Windows service?
We have a console app which we launch from the command prompt for debugging, but we also launch this as an NT service for production. Right now, the code has this logic:

if (__argc <= 1) {
    assumeService();
} else {
    assumeForgound();
}

Is there a better way to check how the process has been launched? We're an open source project, so every time we get a new Windows developer we have to explain that they must specify the -f arg to stop the app from connecting to the service controller. What about checking the parent process?

Update: I forgot to mention that we're using C++ (unmanaged).
Here's some code I created (seems to work nicely). Apologies for missing headers, #defines, etc. If you want to see the full version, look here.

bool CArchMiscWindows::wasLaunchedAsService() {
    CString name;
    if (!getParentProcessName(name)) {
        LOG((CLOG_ERR "cannot determine if process was launched as service"));
        return false;
    }
    return (name == SERVICE_LAUNCHER);
}

bool CArchMiscWindows::getParentProcessName(CString &name) {
    PROCESSENTRY32 parentEntry;
    if (!getParentProcessEntry(parentEntry)) {
        LOG((CLOG_ERR "could not get entry for parent process"));
        return false;
    }
    name = parentEntry.szExeFile;
    return true;
}

BOOL WINAPI CArchMiscWindows::getSelfProcessEntry(PROCESSENTRY32& entry) {
    // get entry from current PID
    return getProcessEntry(entry, GetCurrentProcessId());
}

BOOL WINAPI CArchMiscWindows::getParentProcessEntry(PROCESSENTRY32& entry) {
    // get the current process, so we can get parent PID
    PROCESSENTRY32 selfEntry;
    if (!getSelfProcessEntry(selfEntry)) {
        return FALSE;
    }
    // get entry from parent PID
    return getProcessEntry(entry, selfEntry.th32ParentProcessID);
}

BOOL WINAPI CArchMiscWindows::getProcessEntry(PROCESSENTRY32& entry, DWORD processID) {
    // first we need to take a snapshot of the running processes
    HANDLE snapshot = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    if (snapshot == INVALID_HANDLE_VALUE) {
        LOG((CLOG_ERR "could not get process snapshot (error: %i)", GetLastError()));
        return FALSE;
    }
    entry.dwSize = sizeof(PROCESSENTRY32);
    // get the first process, and if we can't do that then it's
    // unlikely we can go any further
    BOOL gotEntry = Process32First(snapshot, &entry);
    if (!gotEntry) {
        LOG((CLOG_ERR "could not get first process entry (error: %i)", GetLastError()));
        return FALSE;
    }
    while (gotEntry) {
        if (entry.th32ProcessID == processID) {
            // found current process
            return TRUE;
        }
        // now move on to the next entry (when we reach end, loop will stop)
        gotEntry = Process32Next(snapshot, &entry);
    }
    return FALSE;
}
1,975,201
1,975,243
Generate all of the unique combinations of numbers that add up to a certain number
I am writing a program to try to solve a math problem. I need to generate a unique list of all of the numbers that add up to another number. For example, all of the unique combinations of 4 numbers that add up to 5 are:

5 0 0 0
4 1 0 0
3 2 0 0
3 1 1 0
2 2 1 0
2 1 1 1

This is easy to brute force in Perl but I am working in C and would like to find a more elegant solution. In Perl I would generate every possible combination of numbers 0-N in each column, discard the ones that don't add up to the target number, then sort the numbers in each row and remove the duplicate rows. I've been trying all morning to write this in C but can't seem to come up with a satisfactory solution. I need it to work up to a maximum N of about 25. Do you guys have any ideas?

Here is an example of the kind of thing I have been trying (this produces duplicate combinations):

// target is the number each row should sum to.
// Don't worry about overflows, I am only using small values for target
void example(int target) {
    int row[4];
    for (int a = target; a >= 0; a--) {
        row[0] = a;
        for (int b = target - a; b >= 0; b--) {
            row[1] = b;
            for (int c = target - (a + b); c >= 0; c--) {
                row[2] = c;
                row[3] = target - (a + b + c);
                printf("%2d %2d %2d %2d sum: %d\n",
                       row[0], row[1], row[2], row[3],
                       row[0] + row[1] + row[2] + row[3]);
            }
        }
    }
}
This is called a partition problem and approaches are discussed here, here and here.
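One standard recursive approach (a sketch; the function name is my own): generate partitions in non-increasing order, so that rows like "3 2 0 0" and "2 3 0 0" can never both appear; duplicates are impossible by construction. For the question's example (4 numbers summing to 5, zeros allowed), this is "partitions of 5 into at most 4 parts":

```cpp
// Count partitions of n into at most k parts, each part <= maxPart.
// Forcing each chosen part to be <= the previous one makes every
// partition appear exactly once; no sort-and-dedup pass is needed.
int countPartitions(int n, int k, int maxPart) {
    if (n == 0) return 1;               // one way: all remaining slots are 0
    if (k == 0 || maxPart == 0) return 0; // no slots or no usable part sizes left
    int count = 0;
    // Choose the largest part p (<= maxPart, <= n), then partition the rest
    // with parts no larger than p.
    for (int p = (maxPart < n ? maxPart : n); p >= 1; --p) {
        count += countPartitions(n - p, k - 1, p);
    }
    return count;
}
```

Turning this counter into a generator is a matter of carrying the chosen parts in an array and printing them (padded with zeros) at the n == 0 base case; the same bounded recursion keeps the output duplicate-free.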
1,975,439
1,975,634
Capturing window output in another window
I am building a C++ (Qt) based application for controlling a flash based UI. Because the flash runtime leaks spectacular amounts of memory, we execute the UI as a .swf loaded in the standalone flash player, separate from the command-and-control app written in C++. The C++ app starts the flash player as an external process with appropriate parameters, and communicates with it over a TCP socket connected to localhost. The application runs primarily on Windows XP and above.

The unfortunate side effect of running the flash player standalone is that two applications are shown in the Alt+Tab list as well as in the task bar on Windows (one being our application, the other being the flash player). Additionally, as the application runs full screen, flash must manage the entire screen. Allowing the C++ app to draw parts of the screen would be a massive improvement.

We would like to somehow merge the two, while leaving our own application in control. I am thinking something along the lines of Google Chrome, which appears to be running each browser tab in a separate process while displaying all output in a single window. I've been reading up in the Win32 API (and Google) in order to determine if accomplishing this is even possible. Although so far I've come up with dll injection as the only semi-viable solution, I would very much like to consider that plan B. Any suggestions would be appreciated.
The Alt+Tab list shows top-level (no parent) windows that are visible and don't have the WS_EX_TOOLWINDOW extended style. So if you have two windows from two processes but you only want to see one in the Alt-Tab list (and on the task bar), then you have a few options: Add the WS_EX_TOOLWINDOW to one of the windows. Re-parent one of the windows to a hidden top-level window. Re-parent one of the windows (probably the Flash player) to the other window. This is tricky, but it's probably how Chrome and many other multi-process single-window apps work. What makes it tricky is handling the lifetimes of the windows and inadvertently serializing the message queues.
1,975,778
1,977,411
OpenGL antialiasing isn't working
I'm using the following code in order to antialias only the edges of my polygons:

glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST);
glEnable(GL_POLYGON_SMOOTH);

But it doesn't work. I can force-enable antialiasing in the nvidia control panel, and it does antialias my application's polygons. With the code above, I even enabled blending, but it has no effect. Also, the rendering code shouldn't need to be changed, since the nvidia control panel can turn it on and it certainly can't modify my rendering code; it must be some on/off flag. What is it? I've heard of "multisampling", but I don't need that.

Edit: the nvidia control panel setting is "application controlled" when it doesn't work.
You need to ask for a visual/pixelformat with support for multisampling. This is an attribute in the attribute list you pass to glXChooseFBConfig when using GLX/XLib, and wglChoosePixelformatARB when using the Win32 API. See my post here: Getting smooth, big points in OpenGL
1,975,812
1,976,113
TI C2800 DSPs: troubleshooting linker problems between C++ and assembly code
I have a function sincos_Q15_asm() in assembly, in file sincos_p5sh.asm, with directives as follows:

.sect ".text"
.global _sincos_Q15_asm
.sym _sincos_Q15_asm,_sincos_Q15_asm, 36, 2, 0
.func 1

The function works fine when I test it by itself (assembly only), but when I try to link to it, I get a linker error:

undefined                     first referenced
 symbol                           in file
---------                     ----------------
sincos_Q15_asm(int, int *)    build\pwm3phase.obj

error: unresolved symbols remain

This is very puzzling to me, as I am including the assembled file build\blocks\sincos_p5sh.obj in my linker command, and I've used the absolute lister abs2000 on this obj file and it says there is a symbol _sincos_Q15_asm. (The underscore prefix is how it works for assembly.) Any suggestions what I should troubleshoot next?
D'oh! I figured it out -- I was using C++ and forgot to include the extern "C" declaration for my function:

extern "C" {
    extern void sincos_Q15_asm(int16_t theta, int16_t* cs);
}
1,975,916
1,975,961
Should C++ programmer avoid memset?
I have heard it said that C++ programmers should avoid memset:

class ArrInit {
    //! int a[1024] = { 0 };
    int a[1024];
public:
    ArrInit() { memset(a, 0, 1024 * sizeof(int)); }
};

So, considering the code above, if you do not use memset, how could you fill a[1..1024] with zero? What's wrong with memset in C++? Thanks.
The issue is not so much using memset() on the built-in types, it is using them on class (aka non-POD) types. Doing so will almost always do the wrong thing and frequently do the fatal thing - it may, for example, trample over a virtual function table pointer.
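For completeness, here is a sketch of type-safe alternatives that stay correct even if the element type later becomes a non-POD class (for class types they run constructors and assignment rather than blindly overwriting bytes):

```cpp
#include <algorithm> // std::fill
#include <cstddef>

class ArrInit {
public:
    // Value-initializing the array member in the initializer list ("a()")
    // zeroes every int; if the element type were a class, this would
    // default-construct each element instead of trampling its bytes.
    ArrInit() : a() {}

    // std::fill is another type-safe option, e.g. for re-zeroing later;
    // it assigns each element rather than writing raw memory.
    void reset() { std::fill(a, a + SIZE, 0); }

    int at(std::size_t i) const { return a[i]; }

private:
    static const std::size_t SIZE = 1024;
    int a[SIZE];
};
```

For plain int the compiler typically lowers both forms to the same memset-like code anyway, so there is no performance argument for the raw call.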
1,975,951
1,976,175
Beneficial to limit scope of Qt objects?
Qt objects which are allocated with new are pretty much handled for you. Things will get cleaned up at some point (almost always when the parent gets destructed) because Qt objects have a nice parent-child relationship. So my question is this: given that some widgets exist for the life of the application, is it considered good/beneficial to limit the scope of some child widgets? It seems to me that if I don't, the application may not release these objects until the application exits. For example:

MyMainWindow::contextMenu(...) {
    QMenu *menu = new QMenu(this);
    // ...
    menu->exec();
}

vs:

MyMainWindow::contextMenu(...) {
    QMenu *menu = new QMenu(this);
    // ...
    menu->exec();
    delete menu;
}

vs:

MyMainWindow::contextMenu(...) {
    QScopedPointer<QMenu> menu(new QMenu(this));
    // ...
    menu->exec();
}

I like the last one the best; I know that the menu object will be cleaned up immediately, without adding any lines of code to worry about. But, in the first one, it should be cleaned up eventually. Am I wasting my effort trying to manage the lifetime of these Qt widgets? Should I just leave it up to Qt entirely?
In your first example, menu will be deleted when this (i.e. the MyMainWindow object) is... which is probably not what you want, since that means that if contextMenu() is called more than once, multiple unseen old QMenu objects will build up in memory, and might eventually use up a lot of RAM if the user never closes/deletes the MyMainWindow for a long time. Your second and third examples are both fine. The third is probably slightly better, since it avoids any possibility of a bug ever being introduced where the delete doesn't get called.
1,975,992
1,976,026
Accessing native C++ data from managed C++
I have a native C++ library which makes use of a large static buffer (it acquires data from a device). Let's say this buffer is defined like this:

unsigned char LargeBuffer[1000000];

Now I would like to expose parts of this buffer to managed C++, e.g. when 1000 bytes of new data are stored by the library at LargeBuffer[5000], I would like to perform a callback into managed C++ code, passing a pointer to LargeBuffer[5000] so that managed C++ can access the 1000 bytes of data there (directly if possible, i.e. without copying data, to achieve maximum performance). What is the best way to let managed C++ code access data in this native array?
Managed C++ can access the unmanaged memory just fine. You can just pass in the pointer and use it in managed c++. Now, if you want to then pass that data into other .NET languages, you'll need to copy that data over to managed memory structures or use unsafe code in C#
1,976,414
1,976,612
How do I use a class as a value to be used on set::find()? - C++
So I'm working on a project and I have to use the set library on class objects. Those objects have many attributes, ID being one of them. What I wanted to do was search for an object inside a set by its ID. The problem is set only has find, and I don't know how to search for an ID this way since I'd have to use find(class object) and not find(int). I tried messing with class operators to read it as an object but couldn't find a way. Also, I thought about algorithm::find_if, but that would just check every element from beginning to end instead of using the set "tree" search functions, right? Thanks in advance.
You need to create a constructor for your class that takes int as its only argument. Doing so allows implicit conversion from int to your class, making it possible to call std::set::find(int), as requested. For example:

#include <iostream>
#include <set>

class Foo {
public:
    /* Normal constructor */
    Foo(const char * s, int i) : str(s), id(i) {}
    /* Special constructor for implicit conversion */
    Foo(int i) : str(0), id(i) {}
    /* Make Foo usable with std::set */
    bool operator<(const Foo& f) const { return f.id < id; }
    /* Make Foo printable */
    friend std::ostream& operator<<(std::ostream& o, const Foo& f);
private:
    const char * str;
    int id;
};

std::ostream& operator<<(std::ostream& o, const Foo& f) {
    return o << "(" << f.str << " " << f.id << ")";
}

typedef std::set<Foo> FooSet;

int main(void) {
    FooSet s;
    s.insert(Foo("test", 1));
    s.insert(Foo("asdf", 7));
    s.insert(Foo("spam", 3));
    for (int i = 0; i < 10; ++i) {
        /* Note that searching is done via FooSet::find(int id) */
        FooSet::const_iterator f = s.find(i);
        std::cout << "Searching for id " << i << ": ";
        if (f == s.end())
            std::cout << "absent";
        else
            std::cout << "present " << *f;
        std::cout << std::endl;
    }
    return 0;
}

This yields:

Searching for id 0: absent
Searching for id 1: present (test 1)
Searching for id 2: absent
Searching for id 3: present (spam 3)
Searching for id 4: absent
Searching for id 5: absent
Searching for id 6: absent
Searching for id 7: present (asdf 7)
Searching for id 8: absent
Searching for id 9: absent
1,976,453
1,976,484
Format string into scientific notation
I have a string that looks like this: "0.4794255386042030002732879352156" which is approximately the sin(0.5). I would like to format the string to look a much nicer "4.794255386042e-1" How can I achieve this? Remember I am dealing with strings and not numbers (float, double). Also I need to round to keep the number as accurate as possible, I can't just truncate. If I need to convert to a different data type I would prefer a long double because a regular double doesn't have enough precision. I'd like at least 12 decimal digits before rounding. Perhaps there is a simple sprintf() conversion I could do.
Something like this:

#include <cstdio>
#include <cstdlib>

int main() {
    const char *s = "0.4794255386042030002732879352156";
    double d;
    sscanf(s, "%lf", &d);
    printf("%.12e\n", d);
    return EXIT_SUCCESS;
}

Output:

# g++ a.cpp && ./a.out
4.794255386042e-01