2,066,477
2,066,592
RtlpNtMakeTemporaryKey - VerySleepy indicates that this takes a lot of time
I'm profiling some computationally intensive code of mine, and was surprised to see that the function RtlpNtMakeTemporaryKey takes up a huge chunk of time. Its module is ntdll and the source file is Unknown. Is this a call which is waiting for my slow function to terminate, or is it something which I can optimize?
Are you sure you have symbols for ntdll? It's possible that you don't, and RtlpNtMakeTemporaryKey is just the closest exported symbol name that your profiler can see to the real function or functions that are taking up so much time. But yeah, you should focus on your own code and on who is calling into ntdll so much, and why.
2,066,863
2,066,902
compiling a program to run in DOS mode
I wrote a simple program to run in DOS mode. Everything works under the emulated console in Win XP / Vista / Seven, but not in DOS. The error says: this program cannot be run in DOS mode. I wonder if this is a problem with compiler flags or something bigger. For programming I use Code::Blocks v8.02 with these settings for compilation: -Wall -W -pedantic -pedantic-errors in Project \ Build options \ Compiler settings. I've tried a clean DOS mode, booting from CD, and also setting up DOS in a virtual machine. The same error appears. Should I turn on some more compiler flags? Some specific 386/486 optimizations?

UPDATE: OK, I've downloaded, installed and configured DJGPP, and even resolved some problems with libs and includes. I still have two questions. 1) I can't compile code which calls _strdate and _strtime. I've double-checked the includes; MSDN says it needs time.h, but the error still says _strdate was not declared in this scope. I even tried std::_strdate, but then I have 4, not 2, errors saying the same. 2) The second piece of code is about gotoxy, and it looks like this:

#include <windows.h>

void gotoxy(int x, int y)
{
    COORD position;
    position.X = x;
    position.Y = y;
    SetConsoleCursorPosition(GetStdHandle(STD_OUTPUT_HANDLE), position);
}

The error says there is no windows.h, so I've put it in place, but then there are many more errors saying something is missing from windows.h. I SUPPOSE it won't work because this function is strictly for Windows, right? Is there any way to write a similar gotoxy for DOS?

UPDATE2: 1) Solved using time() instead of _strdate() and _strtime(). Here's the code:

time_t rawtime;
struct tm *timeinfo;
char buffer[80];

time(&rawtime);
timeinfo = localtime(&rawtime);
strftime(buffer, 80, "%Y.%m.%d %H:%M:%S", timeinfo);
string myTime(buffer);

It now compiles under DJGPP.

UPDATE3: Still needed to solve the code using gotoxy; replaced it with some other code that compiles (under DJGPP). Thank you all for the help.
Just learnt some new things about compiling (flags, old toolchains like DJGPP and OpenWatcom) and refreshed my memory setting DOS up to work :--)
From the sound of things, you're currently compiling a Windows console program. Even though it's a console program, it still needs Windows to run. To compile a program to run on real DOS, you'll need to find a (probably really old) compiler and (especially) linker that targets DOS and can produce DOS MZ executables. The last Microsoft compiler to target MS-DOS was VC++ 1.52c. If memory serves, Borland continued to target DOS somewhat later, up through something like Borland C++ 5 or so. I believe if you check the Digital Mars web site, he may still have a C++ compiler available that targets DOS. Otherwise, you're going to be stuck looking for something used and quite old. Edit: looking at other answers reminded me of DJGPP and OpenWatcom. My apologies for not mentioning them previously. Be aware that from a C++ viewpoint, Borland and Microsoft are really old compilers -- they don't do namespaces at all, and template support varies from nonexistent in the Microsoft compiler to mediocre in Borland's. DJGPP is basically a DOS extender to which gcc has been ported; the degree to which it's out of date (or modern) will depend on which version of gcc is involved. The Digital Mars compiler is somewhat more modern than the Borland one if I'm not mistaken, but Walter Bright now spends most of his time working on D instead of C++, so the C++ compiler doesn't really compete with gcc or MSVC, not to mention something like Comeau or Intel that's based on the EDG front end.
2,066,906
2,067,015
How are objects of subclasses allocated in C++?
I have some confusion about the concept of inheritance in C++. Suppose we have a class named Computer, and we publicly inherit a class named Laptop from the Computer class. Now, when we create an object of class Laptop in the main function, what happens in memory? Please explain.
I'm assuming that Laptop inherits from Computer, and am explaining what happens in general; the implementation details of C++ (for optimization reasons) may differ from this general explanation. Logically, the Laptop class definition has a pointer to the Computer class definition. An instance of Laptop class has a pointer to the Laptop class definition (in C++, most likely this is just a reference to an array of function pointers for the class methods). When a laptop object receives a message, it first looks in its own method table for a corresponding function. If it is not there, it follows the inheritance pointer and looks in the method table for the Computer class. Now, in C++, much of this happens in the compilation phase, in particular I believe the method table is flattened and any calls that can be statically bound are shortcut.
2,066,965
2,082,634
Is it possible to troubleshoot C# COM Interface Implementations?
I have a C# implementation of a C++ COM interface. I have ported the IDL (interface) as accurately as I can. When the C++ app instantiates my object, it successfully calls one of the methods. It then tries to call another of the methods, but nothing happens; the second call's execution path never makes it to the C# side. I don't have access to the C++ exe code. I do have a working compiled C++ version of the COM DLL object, with code; this is what I am trying to replace in C#. What can I use to compare the interfaces of the C++ COM and C# COM DLLs to see if there are any differences? Is this even possible? I've tried OLE View by Microsoft, but it fails to open the C++ DLL. I think if I can see that my DLL looks exactly like the C++ one, it might work. This question is more about helping me understand where I can go from here. I have asked a very detailed question about the exact implementation I am trying to achieve, but no one is interested, which is why I'm posting a more general question to help guide me to the answer. I'm really stuck: 25+ hours gone on this already, heh.
After solving my compatibility issue, I discovered that the C++ DLL doesn't expose the interface items I was expecting. Although this question was aimed at how to debug or compare the exposed interfaces of the two DLLs, I got my project working by using the [ComImport] attribute on the C# interfaces I was implementing, rather than [ComVisible(true)]. I used the [PreserveSig] attribute on the interface declarations to enforce compatibility. This is my understanding of what is happening.
2,067,111
2,067,395
Moving objects on screen by per pixel basis with glRasterPos()
I have the following code to render text in my app. First I get the mouse coordinates in the world, then use those coordinates to place my text in the world, so it will follow my mouse position. Edit: added the BuildFont() function to the code example:

GLvoid BuildFont(GLvoid)                         // Build Our Bitmap Font
{
    HFONT font;                                  // Windows Font ID
    HFONT oldfont;                               // Used For Good House Keeping
    base = glGenLists(96);                       // Storage For 96 Characters
    font = CreateFont(-12,                       // Height Of Font
                      0,                         // Width Of Font
                      0,                         // Angle Of Escapement
                      0,                         // Orientation Angle
                      FW_NORMAL,                 // Font Weight
                      FALSE,                     // Italic
                      FALSE,                     // Underline
                      FALSE,                     // Strikeout
                      ANSI_CHARSET,              // Character Set Identifier
                      OUT_TT_PRECIS,             // Output Precision
                      CLIP_DEFAULT_PRECIS,       // Clipping Precision
                      ANTIALIASED_QUALITY,       // Output Quality
                      FF_DONTCARE|DEFAULT_PITCH, // Family And Pitch
                      "Verdana");                // Font Name (if not found, some other font is used)
    oldfont = (HFONT)SelectObject(hDC, font);    // Selects The Font We Want
    wglUseFontBitmaps(hDC, 32, 96, base);        // Builds 96 Characters Starting At Character 32
    SelectObject(hDC, oldfont);                  // Restores The Previous Font
    DeleteObject(font);                          // Delete The Font
}

GLvoid glPrint(const char *fmt, ...)
{
    char text[256];
    va_list ap;
    if (fmt == NULL)
        return;
    va_start(ap, fmt);
    vsprintf(text, fmt, ap);
    va_end(ap);
    glPushAttrib(GL_LIST_BIT);
    glListBase(base - 32);
    glCallLists(strlen(text), GL_UNSIGNED_BYTE, text);
    glPopAttrib();
}

...

glPushMatrix();
glColor4f(0,0,0,1);
// X-1 won't work because these are the world coordinates:
glRasterPos2d(MousePosX-1, MousePosY);
glPrint("TEST");
glColor4f(1,1,0,1);
glRasterPos2d(MousePosX, MousePosY);
glPrint("TEST");
glPopMatrix();

But I want to render multiline texts, or texts with "borders" (like I tried in the code above), or text with a background (so I could distinguish them better from the scene behind them). How do I do this?
I just need to know how I can move it in pixels, so I could precisely modify the position on my screen WITHOUT using a 2D projection view on top of my 3D render projection... I just want to make it as simple as possible. I tried to draw a quad under the text, but of course it doesn't work, since it's still using the world coordinates... so when I rotate my camera the text doesn't move along with the background of the text... I am afraid the only solution is to create another projection on top of the 3D projection...
Here's a small snippet of code that I use to render some debug text in a small application:

void renderText(float x, float y, const char* text)
{
    int viewport[4];
    glGetIntegerv(GL_VIEWPORT, viewport);

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(viewport[0], viewport[2], viewport[1], viewport[3], -1, 1);

    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    glRasterPos2f(x, viewport[3] - y);
    const int length = (int)strlen(text);
    for (int i = 0; i < length; ++i)
    {
        glutBitmapCharacter(GLUT_BITMAP_9_BY_15, text[i]);
    }

    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
}

x and y are the desired window coordinates of the string. glutBitmapCharacter(GLUT_BITMAP_9_BY_15, ...) is just a utility function from GLUT that renders a 9 x 15 pixel bitmap character.
2,067,196
2,114,690
Using fstream tellg to read a portion of the stream till the end
I have this simple code that needs to get a chunk of a large log file that is being written to. At some point it stores the current location returned from streampos start = my_stream.tellg(). Later on, the code has to read a buffer from the stream spanning from the start till the end. The code is approximately like this:

streampos start = my_stream.tellg();
... // do some stuff with logging
streampos end = my_stream.tellg();
const streamsize size_to_read = (end - start);
char *buf = new char[size_to_read];
lock (m_logReadLock);
{
    my_stream.flush();
    my_stream.seekg(start);
    my_stream.read(buf, size_to_read);
    size_read = my_stream.gcount();
}
unlock (m_logReadLock);

The effect that I'm observing is that size_read is smaller than size_to_read, and the stream has its eof flag set. Shouldn't the end pointer specify exactly where the stream ends, and the read() method return that exact amount of data? It is fine, I can work around it by checking the eof flag. However, can anyone provide an explanation for this effect? Thanks.
http://groups.google.com/group/comp.lang.c++/browse_thread/thread/709cde3942e64d6c#
2,067,310
2,292,607
How to communicate between Rhapsody models in different processes/systems?
We are using IBM (formerly Telelogic) Rhapsody for a new project to do model-driven development of a complex device. The device consists of several subsystems that are connected by various network interfaces. We'd like to model the entire system in Rhapsody and then generate code separately for the various subsystems. The problem is this: Rhapsody can model and generate code for a single process. We'd like to model subsystems as separate processes at first and then generate code for the subsystems directly. What alternatives are available for inter-process and inter-system communication that we can slide into the Rhapsody model with as little pain as possible? We are generating C++ and would like to use sockets for inter-system communication; Rhapsody currently communicates between objects and threads in the same process using message queues.
You can implement your own version of the Rhapsody MessageQueue class and rebuild the oxf library using your code instead of the default code.
2,067,349
2,067,364
simulate socket errors
How do you simulate socket errors? (Sometimes the server or client disconnects because of some socket error, and it is impossible to reproduce.) I was looking for a tool to do this, but I can't find one. Does anyone know of such a tool, or have a code example of how to do this? (C# or C/C++)
Add a wrapper layer around the APIs you're using to access the sockets, and have the wrappers fail some configurable percentage of the time (for example, whenever rand() % 100 < x, for a failure rate of x percent).
2,067,392
2,067,416
pthread vs NSThread: which is faster
In Cocoa, is NSThread faster than pthreads? Is there any performance gain? Is it small enough to be negligible?
I have no data to back this up, but I'm going to go out on a limb and say "they're equivalent". NSThread is almost certainly a wrapper around pthreads (is there really any other way to create a system thread?), so any overhead of using NSThread versus a pthread would be that associated with creating a new object and then destroying it. Once the thread itself starts, it should be pretty much identical in terms of performance. I think the real question here is: "Why do you need to know?" Have you come up against some situation where spawning NSThreads seems to be detrimental to your performance? (I could see this being an issue if you're spawning hundreds of threads, but in that case the hundreds of threads are most likely your problem, and not the NSThread objects.) Unless you have proof that the creation of an NSThread object is a bottleneck in your application, I would definitely go with the "negligible" option.
2,067,433
2,067,692
Convert MYSQL Timestamp to time_t
I'm writing a multi-threaded program that needs to be able to check if a row requires updating and act accordingly. I had problems using the built in date/time functions of MySql and so decided to just store the "lastupdate" timestamp as an integer in the table. However, I'm having problems converting this timestamp to time_t so that I can use the time functions with it. Any help is greatly appreciated.
The MySQL TIMESTAMP data type can be retrieved as a number in YYYYMMDDHHMMSS, YYMMDDHHMMSS, YYYYMMDD, or YYMMDD format. On Unix and POSIX-compliant systems, time_t is typically an integer which represents the number of seconds since the start of the Unix epoch: midnight UTC of January 1, 1970. In MySQL you can use the UNIX_TIMESTAMP() and FROM_UNIXTIME() functions to convert between TIMESTAMP values and Unix timestamp values. Query example: SELECT UNIX_TIMESTAMP(Date_Entered) FROM Foo;
2,067,436
2,067,470
Sequential Key Generation
Right now, I'm working on a project which requires sequential text key generation. I need to seed the key generator with an integer corresponding to a certain key, which the constructor converts to a key. My key generator overloads the increment operators so that the string is incremented directly, rather than what I had previously been doing, which is incrementing an index value, then converting the index to a key for every key that I wanted to generate. My problem is that I have a limited character set I want to use when generating keys. I have to find the character in the key that I want to increment, find out where it is in my character set, find the next character in the set, then replace the character in the key with the next character in the set. Here is my code:

// Not the full charset
std::string charset = "abcdefghijklmnopqrstuvwxyz0123456789";
std::string key;
key.push_back(charset[0]);

for(unsigned int place = 0; place < key.length(); place++)
{
    if(key[place] == charset[charset.length() - 1])
    {
        // Overflow, reset char at place
        key[place] = charset[0];
        if((key.length() - 1) < (place + 1))
        {
            // Carry, no space, insert char
            key.insert(key.begin(), charset[0]);
            break;
        }
        else
        {
            // Space available, increment next char
            continue;
        }
    }
    else
    {
        // Increment char at place
        key[place] = charset[charset.find(key[place]) + 1];
        break;
    }
}

In profiling, I found that the search operation is really slowing things down. Is there any faster way of doing this? I thought of creating a linked list out of the character set, but before I do that, I'd like some input on this.
You could store a vector of the same length as your key, where each element in the vector was the index in the charset of the corresponding character in the key. For example, if key[0] was 'c', then thisVector[0] would be 2, since 'c' is the 3rd character in the character set. Then all operations would be performed on that integer vector, removing the necessity for a find operation on the string.
2,067,457
2,067,492
CComModule UnregisterServer error?
I have a CComModule that calls RegisterServer(TRUE) in DllRegisterServer and UnregisterServer(TRUE) in DllUnregisterServer. UnregisterServer is getting a 0x8002801C (Error accessing the OLE registry.) error and leaving registry keys behind. I am using a Windows Server 2k8 R2 machine with UAC enabled. The components are x86 and I am using the 32-bit regsvr32. Does anyone know why I would be getting this error?
You must run regsvr32.exe from a command prompt that's elevated to administrator; elevation is enough, you don't need to disable UAC system-wide. Make a shortcut on your desktop to "cmd.exe", right-click it and choose "Run as Administrator".
2,067,479
2,067,505
Is there any reason that the STL does not provide functions to return an iterator via index?
Is there a reason that the STL does not provide functions to return an iterator into a container via an index? For example, let's say I wanted to insert an element into a std::list, but at the nth position. It appears that I have to retrieve an iterator via something like begin() and add n to that iterator. I'm thinking it would be easier if I could just get an iterator at the nth position with something like std::list::get_nth_iterator(n). I suspect I have misunderstood the principles of the STL. Can anyone help explain? Thanks, BeeBand
You can use std::advance() from the <iterator> header. Note that it modifies its iterator argument in place and returns void, so you can't initialize an iterator from its return value:

list<foo>::iterator iter = someFooList.begin();
std::advance(iter, n);

If the iterator supports random access (like vector's), this works in constant time; if it only supports incrementing (or decrementing), like list's, it works, but only as well as it can (by stepping n times).
2,067,497
2,078,506
How to programmatically move Windows taskbar?
I'd like to know of any sort of API or workaround (e.g., script or registry) to move (or resize) the Windows taskbar to another position, including another monitor (with dual monitors). We can of course move the taskbar by using the mouse, but I want to move it from a program, or in some sort of automated way. I tried to find a Win32 API for this, but it seems none does the job. EDIT: I was surprised by many people's opinion. Let me explain why I wanted it. In my workplace, I'm using dual monitors (the resolutions are different), and the taskbar is placed on the left monitor while the primary monitor is the right monitor. However, I often connect to my workplace computer via remote desktop. After the remote connection, the taskbar position is switched. That's why I wanted to make a simple program that can save/restore the taskbar's position. Every day I have to rearrange my taskbar. That's it. I just want it for me.
As far as I can tell, Vista and onwards ignore any program trying to move the taskbar. The old method was ABM_SETPOS + MoveWindow, and this no longer works on the taskbar. The only way that I am aware of that still works is simulating a mouse move (click-move-release). I've read about that method, but I've never done it myself.
2,067,814
2,067,826
Apache mod_c++ wanted?
I want to experiment a bit with C++ as a server-side language. I'm not looking for a framework, and simply want to achieve a silly old "Hello World" web app using C++. Is there an Apache HTTP server module that I can install? If I can do the PHP equivalent of:

<?php
$personName = "Peter Pan";
echo "Hello " . $personName;

I'd be most thrilled! Thanks in advance!
CGI would do this. Just have your C++ app write its output to stdout, and mod_cgi will handle it.
2,067,833
2,067,856
What could be generating the compiler error in this statement to advance an iterator?
The following line generates a compiler error: std::vector<int>::iterator blah = std::advance(instructions.begin(), x ); where I have declared: std::vector<int> instructions; int x; The error I get is: error C2440: 'initializing' : cannot convert from 'void' to 'std::_Vector_iterator<_Ty,_Alloc>'. What element of that statement is of type void?
Without looking this up, I'm guessing that the advance function returns void, and it is that void "value" you are assigning to blah. Try:

blah = instructions.begin();
std::advance(blah, x);

assuming, of course, you want blah to start at the beginning.
2,067,846
2,067,968
Windows Threads: when should you use InterlockedExchangeAdd()?
The naming of this function makes it seem like there is some complicated stuff going on. When exactly does one know that this is the way to go instead of doing something like the following?

Preparation:

CRITICAL_SECTION cs;
int *p = malloc(sizeof(int));   // Allocation Site
InitializeCriticalSection(&cs); // HINT for first Write

Thread #1:

{
    *p = 1; // First Write
}

Thread #2:

{
    EnterCriticalSection(&cs);
    *p = 2; // Second Write
    LeaveCriticalSection(&cs);
}

I have a write that gets done in one thread:

Run()
{
    // some code
    m_bIsTerminated = TRUE;
    // some more code
}

Then I have a read that gets done in another thread (potentially at the same time):

Terminate()
{
    // some code
    if( m_bIsTerminated )
    {
        m_dwThreadId = 0;
        m_hThread = NULL;
        m_evExit.SetEvent();
        return;
    }
    // even more code
}

What's the best solution to solve this race condition? Are critical sections the way to go, or is the use of InterlockedExchangeAdd() more useful?
InterlockedExchangeAdd() is used to add a value to an integer as an atomic operation, meaning that you won't have to use a critical section. This also removes the risk of a deadlock if one of your threads throws an exception: you need to make sure that you don't keep holding any lock of any kind in that case, as that would prevent other threads from acquiring the lock. For your scenario you can definitely use an Interlocked... function, but I would use an event (CreateEvent, SetEvent, WaitForSingleObject), probably because I often find myself needing to wait for more than one object (you can wait for zero seconds in your scenario). Update: using volatile for the variable may work, however it isn't recommended; see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2016.html and http://www-949.ibm.com/software/rational/cafe/blogs/ccpp-parallel-multicore/tags/c%2B%2B0x for instance. If you want to be portable, take a look at boost::thread.
2,067,893
2,068,078
C++ console keyboard events
Is there any way to get key events in a Windows console? I need a way to get keydown and keyup events quickly without a GUI. I've tried using getch(), but it doesn't get keyups and waits until a key has been pressed to return.
You can use GetKeyState or GetAsyncKeyState, but those won't give you keydown/keyup events; they will only tell you which keys are currently down. So if you really need the keydown/keyup events, you could install a hook. A console window has a window handle and a message pump, both owned by code in Windows. You can get the window handle of the console window by using GetConsoleWindow, then install a WH_CALLWNDPROC hook using SetWindowsHookEx to listen in on messages sent to the console window. You might try a WH_MSGFILTER hook instead; I don't know if this works for console windows, but it would generate fewer messages to be ignored if it does work.
2,067,975
2,068,308
Ogre3D Basic Framework issue on Ubuntu
I have been trying to learn to use Ogre3D and have gotten to the stage where I want to start something more serious than the examples it comes with, so I found and copied the Basic Ogre Framework. I am using Ubuntu 9.10, but have compiled Ogre 1.7 from the Ogre3D website, and I am using the NetBeans 6.8 IDE with the C++ plugin. The Basic Ogre Framework demo app compiles and runs, but gets to the main loop where it checks to see if the render window is active; otherwise it calls sleep(1000). The if statement that checks if(OgreFramework::getSingletonPtr()->m_pRenderWnd->isActive()) is always returning false, despite my specifically setting m_pRenderWnd->setActive(true). From reading the forum posts related to it, nobody else is having this issue, but they are primarily using Windows or Mac. Are there issues with Ogre3D on Ubuntu, or is it possibly a problem with the auto-generated makefiles that NetBeans generates?
Have you configured the application to use the correct video drivers for your system? Since you're on ubuntu you'll need to use OpenGL. I found some drivers didn't work on some systems when using Ogre.
2,067,988
4,081,391
Recursive lambda functions in C++11
I am new to C++11. I am writing the following recursive lambda function, but it doesn't compile.

sum.cpp:

#include <iostream>
#include <functional>

auto term = [](int a)->int {
    return a*a;
};

auto next = [](int a)->int {
    return ++a;
};

auto sum = [term,next,&sum](int a, int b)mutable ->int {
    if(a>b)
        return 0;
    else
        return term(a) + sum(next(a),b);
};

int main(){
    std::cout<<sum(1,10)<<std::endl;
    return 0;
}

Compilation error:

vimal@linux-718q:~/Study/09C++/c++0x/lambda> g++ -std=c++0x sum.cpp
sum.cpp: In lambda function:
sum.cpp:18:36: error: ‘((<lambda(int, int)>*)this)-><lambda(int, int)>::sum’ cannot be used as a function

gcc version: gcc version 4.5.0 20091231 (experimental) (GCC)

But if I change the declaration of sum as below, it works:

std::function<int(int,int)> sum = [term,next,&sum](int a, int b)->int {
    if(a>b)
        return 0;
    else
        return term(a) + sum(next(a),b);
};

Could someone please throw light on this?
Think about the difference between the auto version and the fully specified type version. The auto keyword infers its type from whatever it's initialized with, but what you're initializing it with needs to know what its type is (in this case, the lambda closure needs to know the types it's capturing). Something of a chicken-and-egg problem. On the other hand, a fully specified function object's type doesn't need to "know" anything about what is being assigned to it, and so the lambda's closure can likewise be fully informed about the types it's capturing. Consider this slight modification of your code and it may make more sense:

std::function<int(int, int)> sum;
sum = [term, next, &sum](int a, int b) -> int {
    if (a > b)
        return 0;
    else
        return term(a) + sum(next(a), b);
};

Obviously, this wouldn't work with auto. Recursive lambda functions work perfectly well (at least they do in MSVC, where I have experience with them); it's just that they aren't really compatible with type inference.
2,068,022
2,068,048
In C++, is it safe/portable to use static member function pointer for C API callbacks?
In C++, is it safe/portable to use static member function pointer for C API callbacks? Is the ABI of a static member function the same as a C function?
It is not safe per the C++ standard. As stated in this SO posting: A C callback function implemented in C++ must be extern "C". It may seem to work as a static function in a class because class-static functions often use the same calling convention as a C function. However, doing that is a bug waiting to happen (see comments below), so please don't - go through an extern "C" wrapper instead. And according to comments made by Martin York in that answer there are real-world problems trying to do so on some platforms. Make your C ABI callbacks extern "C". Edit: Adding some supporting quotes from the standard (emphasis mine): 3.5 "Program and linkage": After all adjustments of types (during which typedefs (7.1.3) are replaced by their definitions), the types specified by all declarations referring to a given object or function shall be identical, except that declarations for an array object can specify array types that differ by the presence or absence of a major array bound (8.3.4). A violation of this rule on type identity does not require a diagnostic. [3.5/10] [Note: linkage to non-C++ declarations can be achieved using a linkage-specification (7.5). ] [3.5/11] And 7.5 "Linkage specifications": ... Two function types with different language linkages are distinct types even if they are otherwise identical. [7.5/1] So if the code making the callback is using C language bindings for the callback, then the callback target (in the C++ program) must as well.
2,068,088
2,068,108
C++ method only visible when object cast to base class?
It must be something specific to my code, which I can't post. But maybe someone can suggest possible causes. Basically I have:

class CParent
{
public:
    void doIt(int x);
};

class CChild : public CParent
{
public:
    void doIt(int x,int y,int z);
};

CChild *pChild = ...
pChild->doIt(123);   // FAILS to compile, no method found

CParent *pParent = pChild;
pParent->doIt(123);  // works fine

How on earth? EDIT: people are talking about shadowing/hiding. But the two versions of doIt have different numbers of parameters. Surely that can't confuse the compiler; overloads in the child class can't possibly be confused with the parent class version? Can they? The compiler error I get is:

error C2660: 'CChild::doIt' : function does not take 1 argument
You have shadowed a method. For example:

struct base
{
    void method(int);
    void method(float);
};

struct derived : base
{
    void method(int);
    // base::method(int) is not visible.
    // base::method(float) is not visible.
};

You can fix this with a using declaration:

struct derived : base
{
    using base::method; // bring all of them in.
    void method(int);
    // base::method(float) is now visible.
};

Since you seem insistent about the number of parameters, I'll address that. That doesn't change anything. Observe:

struct base
{
    void method(int){}
};

struct derived : base
{
    void method(int,int){}
    // method(int) is not visible.
};

struct derived_fixed : base
{
    using base::method;
    void method(int,int){}
};

int main(void)
{
    {
        derived d;
        d.method(1, 2); // will compile
        d.method(3);    // will NOT compile
    }
    {
        derived_fixed d;
        d.method(1, 2); // will compile
        d.method(3);    // will compile
    }
}

It will still be shadowed regardless of parameters or return types; it's simply the name that shadows. using base::<x>; will bring all of base's "<x>" methods into visibility.
2,068,300
2,068,321
Keyboard Tabbing Stops working on Windows GUI
I have a Windows GUI built in Microsoft Visual C++, and when the user performs a certain set of actions, keyboard tabbing to move from widget to widget stops working. Simply put, there are two list boxes, with Add and Remove buttons. Selecting a row in list box #1 and pressing the Add button removes the object from list box #1 and moves it to list box #2. The problem I am seeing is that the keyboard tabbing functionality goes away, since the tab focus was on the Add button, which becomes disabled when the add callback completes (since no row in list box #1 is selected any more). I want to be able to re-set the tab focus to list box #1 (but not the selection of a particular row). Any ways to do this? I believe I am running as a standard modal dialog.
If I understand correctly, you just want to set the focus back to one of the listboxes. Since this is in a dialog, instead of calling SetFocus, The Old New Thing recommends you send a message to the listbox's hWnd to do this: void SetDialogFocus(HWND hdlg, HWND hwndControl) { SendMessage(hdlg, WM_NEXTDLGCTL, (WPARAM)hwndControl, TRUE); }
2,068,531
2,068,800
Is It Possible To Simplify This Branch-Based Vector Math Operation?
I'm trying to achieve something like the following in C++: class MyVector; // 3 component vector class MyVector const kA = /* ... */; MyVector const kB = /* ... */; MyVector const kC = /* ... */; MyVector const kD = /* ... */; // I'd like to shorten the remaining lines, ideally making it readable but less code/operations. MyVector result = kA; MyVector const kCMinusD = kC - kD; if(kCMinusD.X <= 0) { result.X = kB.X; } if(kCMinusD.Y <= 0) { result.Y = kB.Y; } if(kCMinusD.Z <= 0) { result.Z = kB.Z; } Paraphrasing the code into English, I have four 'known' vectors. Two of the vectors have values that I may or may not want in my result, and whether I want them or not is contingent on a branch based on the components of two other vectors. I feel like I should be able to simplify this code with some matrix math and masking, but I can't wrap my head around it. For now I'm going with the branch, but I'm curious to know if there's a better way that would still be understandable, and less code-verbose. Edit: In reference to Mark's comment, I'll explain what I'm trying to do here. This code is an excerpt from some spring physics I'm working on. The components are as follows: kC is the spring's current length, and kD is the minimum spring length. kA and kB are two sets of spring tensions, each component of which may be unique per component (i.e., a different spring tension along the X, Y, or Z). kA is the spring's tension if it's not fully compressed, and kB is the spring's tension if it IS fully compressed. I'd like to build up a resultant 'vector' that simply is the amalgamation of kA and kB, dependent on whether the spring is compressed or not.
Depending on the platform you're on, the compiler might be able to optimize statements like result.x = (kC.x > kD.x) ? kA.x : kB.x; result.y = (kC.y > kD.y) ? kA.y : kB.y; result.z = (kC.z > kD.z) ? kA.z : kB.z; using fsel (floating point select) instructions or conditional moves. Personally, I think the code looks nicer and more concise this way too, but that's subjective. If the code is really performance critical, and you don't mind changing your vector class to be 4 floats instead of 3, you could use SIMD (e.g. SSE on Intel platforms, VMX on PowerPC) to do the comparison and select the answers. If you went ahead with this, it would look like this: (in pseudo code) // Set each component of mask to be either 0x0 or 0xFFFFFFFF depending on the comparison MyVector4 mask = vec_compareLessThan(kC, kD); // Sets each component of result to either kA or kB's component, depending on whether the bits are set in mask result = vec_select(kA, kB, mask); This takes a while getting used to, and it might be less readable initially, but you eventually get used to thinking in SIMD mode. The usual caveats apply, of course - don't optimize before you profile, etc.
2,068,693
2,084,228
OpenGL and GLUT in Eclipse on OS X
I have been trying to setup the OpenGL and GLUT libraries in Eclipse, with CDT, on OS X with not very much success. I cannot seem to get eclipse to actually realize where GLUT is. It is currently giving me the error that I have an unresolved inclusion GL/glut.h. Looking around online I found that I should be using the -framework GLUT flag in the gcc linker settings, but this seems ineffective.
Ok. I got it working in X11. The reason I could only get it working on X11 is that it seems the OpenGL libs on the OS are for the 64-bit architecture, but eclipse will only compile code if we use the 32-bit architecture. Maybe if this got fixed we could use OS X pre-installed libraries. Also, maybe there is a 32-bit version lying around on the OS we could use, but I can't seem to find it. I, however, am content with using X11 for my learning purposes. First create your C++ project. Then since you can't compile code in 64-bit using eclipse add the following... Then you need your libraries and linking set up. To do this do the following: Lastly you need to set a DISPLAY variable. Before you try running start up X11. Try the following code to get something I've got running on my machine. Hope it works for you! //#include <GL/gl.h> //#include <GL/glu.h> #include <GL/glut.h> #define window_width 640 #define window_height 480 // Main loop void main_loop_function() { // Z angle static float angle; // Clear color (screen) // And depth (used internally to block obstructed objects) glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Load identity matrix glLoadIdentity(); // Multiply in translation matrix glTranslatef(0, 0, -10); // Multiply in rotation matrix glRotatef(angle, 0, 0, 1); // Render colored quad glBegin( GL_QUADS); glColor3ub(255, 000, 000); glVertex2f(-1, 1); glColor3ub(000, 255, 000); glVertex2f(1, 1); glColor3ub(000, 000, 255); glVertex2f(1, -1); glColor3ub(255, 255, 000); glVertex2f(-1, -1); glEnd(); // Swap buffers (color buffers, makes previous render visible) glutSwapBuffers(); // Increase angle to rotate angle += 0.25; } // Initialize OpenGL perspective matrix void GL_Setup(int width, int height) { glViewport(0, 0, width, height); glMatrixMode( GL_PROJECTION); glEnable( GL_DEPTH_TEST); gluPerspective(45, (float) width / height, .1, 100); glMatrixMode( GL_MODELVIEW); } // Initialize GLUT and start main loop int main(int argc, char** argv) {
glutInit(&argc, argv); glutInitWindowSize(window_width, window_height); glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE); glutCreateWindow("GLUT Example!!!"); glutDisplayFunc(main_loop_function); glutIdleFunc(main_loop_function); GL_Setup(window_width, window_height); glutMainLoop(); }
2,068,816
2,068,829
Usefulness of const (C++)
I'm a const fiend, and I strive to make everything as const as possible. I've tried looking at various disassembly outputs from const and non-const versions of functions, and I've yet to see a marked improvement, however. I'm assuming compilers nowadays are able to do smart things with non-const functions that could technically be const. Are there still cases where const is useful at the machine level? Any examples?
As far as I know, the only effect of marking a function const is to allow the function to be called on a const object. There's no optimization benefit. Herb Sutter has an article which discusses const and optimization in depth: http://www.gotw.ca/gotw/081.htm The one area that const is useful at the machine level is when applied to data - const data might be able to be placed in non-writable memory.
2,068,916
2,068,935
Coding Linux console application in Visual C++ 2008/2010 Express
I was told about the fascination of C++, and I have recently downloaded the Visual C++ IDE to start learning C++. However, I had this question in mind: how can I write a C++ console application in Visual C++ and build it for both Linux and Windows? Are there any plugins, additional compilers, or workarounds to get around this?
The most important thing is you want to avoid OS specific calls and stick with the standard C++ library. If you don't include any Windows header file such as windows.h or winuser.h, then the compiler will warn you if you try to call a Windows specific function. There are some features available on both Windows and Linux that need to be handled slightly differently (such as networking and memory mapping). You may want to look into a portable runtime library such as the Apache Portable Runtime that will abstract out the differences for you.
2,068,969
2,069,031
Template deduction and function pointers
How does the compiler know the correct type for this code: class Base { protected: typedef View * ViewType; typedef boost::function<ViewType ()> ActionType; typedef boost::unordered_map<std::string, ActionType> ActionMapType; ActionMapType actions; template <class ControllerType> inline void addAction(std::string actionName, ViewType (ControllerType::*action)()) { actions.insert(ActionMapType::value_type(actionName, bind<ViewType>(action, static_cast<ControllerType *>(this)))); } }; class Derived : public Base { Derived() { addAction("someAction", &Derived::foo); // No template } ViewType foo() { cout << "foo"; } }; I am aware that I am passing Derived as ControllerType, but how can the compiler know for sure that Derived is the template parameter?
The template parameter is ControllerType which is used in the function parameter list as ViewType (ControllerType::*action)() parameter. When you supply an actual argument of ViewType (Derived::*)() type, the compiler immediately realizes that ControllerType = Derived. That's it. This is called template argument deduction. In some contexts in C++ the compiler cannot deduce the template argument from the type of function argument. Such contexts are called non-deduced contexts. The language specification provides a list of non-deduced contexts. And yours is not one of them.
2,069,097
2,069,148
Using Boost with Team Foundation Server
What would be a good way to use Boost in a small team (< 10 people) and lower the time between joining the team and building the application as much as possible. I basically want a workflow like this... Set up the TFS with the new person's username + password. Have them log into the TFS from Visual Studio. Check out the team project (which uses boost libraries) and hit build. Build succeeds. Now they can get hacking. Suggestions, anyone?
Check in an already-compiled Boost into a "vendor" folder, then just point all the project refs toward that folder.
2,069,110
2,069,131
Read a line from file, using stream style
I have a simple text file that has the following content: word1 word2 I need to read its first line in my C++ application. The following code works, ... std::string result; std::ifstream f( "file.txt" ); f >> result; ... but the result variable will be equal to "word1". It should be equal to "word1 word2" (the first line of the text file). Yes, I know that I can use the getline(f, result) function, but is there a way to do the same using >> style? That could be much prettier. Perhaps some manipulators I don't know about would be useful here?
No, there isn't. Use getline(f, result) to read a line.
2,069,174
6,789,862
Using boost test with Visual Studio
I am trying to use Boost Test to add some much-needed unit tests to my code. However, I can't seem to get it to work. Right now I have the following code: #include <Drawing.h> #define BOOST_AUTO_TEST_MAIN #define BOOST_TEST_MODULE DrawingModelTests #include <boost/test/unit_test.hpp> BOOST_AUTO_TEST_SUITE(DrawingModelTests) BOOST_AUTO_TEST_CASE ( DrawingConstructorTest) { Drawing * drawing = new Drawing; delete drawing; } BOOST_AUTO_TEST_SUITE_END() From what I understand I don't need to write a main or anything, since Boost will take care of it itself. However, Visual Studio keeps giving me an "entry point must be defined" error. Do I need to manually add a link to the static library or something? I am compiling as a standard .exe console application.
I had this problem with VS2010 and the solution was to set 'Configuration Properties -> Linker -> Advanced -> Entry Point' to 'main' for the project.
2,069,450
2,069,460
How to get a "bus error"?
I am trying very hard to get a bus error. One way is misaligned access and I have tried the examples given here and here, but no error for me - the programs execute just fine. Is there some situation which is sure to produce a bus error?
Bus errors can only be invoked on hardware platforms that: Require aligned access, and Don't compensate for an unaligned access by performing two aligned accesses and combining the results. You probably do not have access to such a system.
2,069,744
2,069,982
SFINAE canAdd template problem
I'm trying to write a SFINAE template to determine whether two classes can be added together. This is mostly to better understand how SFINAE works, rather than for any particular "real world" reason. So what I've come up with is #include <assert.h> struct Vec { Vec operator+(Vec v ); }; template<typename T1, typename T2> struct CanBeAdded { struct One { char _[1]; }; struct Two { char _[2]; }; template<typename W> static W make(); template<int i> struct force_int { typedef void* T; }; static One test_sfinae( typename force_int< sizeof( make<T1>() + make<T2>() ) >::T ); static Two test_sfinae( ... ); enum { value = sizeof( test_sfinae( NULL ) )==1 }; }; int main() { assert((CanBeAdded<int, int>::value)); assert((CanBeAdded<int, char*>::value)); assert((CanBeAdded<char*, int>::value)); assert((CanBeAdded<Vec, Vec>::value)); assert((CanBeAdded<char*, int*>::value)); } This compiles for all except the last line, which gives sfinae_test.cpp: In instantiation of ‘CanBeAdded<char*, int*>’: sfinae_test.cpp:76: instantiated from here sfinae_test.cpp:40: error: invalid operands of types ‘char*’ and ‘int*’ to binary ‘operator+’ So this error is kind-of what I'd expect, but I'd expect the compiler to then find the test_sfinae( ... ) definition and use that instead (and not complain about the one that doesn't parse). Clearly I'm missing something, I just don't know what it is.
It looks to me like you've run into the problem that's discussed in Core Issue 339 as well as N2634. The bottom line is that you're pushing a bit beyond what any compiler can currently handle, even though what you're doing is allowed by the standard. C++ 0x will add more detail about what will and won't result in SFINAE failure versus a hard error. See N3000, §14.9.2, if you want to get into the gory details.
2,069,855
2,070,134
Getting Machine's MAC Address -- Good Solution?
I've heard it's not possible with my current library of winpcap. Is this really true? I see lots of examples on the net but then comments saying "This doesn't work". What's the best way to get a MAC address of the local machine?
One common method is using bits from a UUID, but this isn't entirely dependable. For example, it'll return a value even on a machine that doesn't have a network adapter. Fortunately, there is a way that works dependably on any reasonably recent version of Windows. MSDN says it only goes back to Windows 2000, but if memory serves, it also works on NT 4, starting around SP 5, in case anybody's still using NT 4. #include <windows.h> #include <iphlpapi.h> #include <stdio.h> int main() { IP_ADAPTER_INFO *info = NULL, *pos; DWORD size = 0; GetAdaptersInfo(info, &size); info = (IP_ADAPTER_INFO *)malloc(size); GetAdaptersInfo(info, &size); for (pos=info; pos!=NULL; pos=pos->Next) { printf("\n%s\n\t", pos->Description); printf("%2.2x", pos->Address[0]); for (int i=1; i<pos->AddressLength; i++) printf(":%2.2x", pos->Address[i]); } free(info); return 0; } Please forgive the ancient C code...
2,069,949
2,069,987
Compact pointer notation with doubles
Quick question. When you are accessing a character array, I know you can set the pointer to the first element in the array, use a while loop, and do something like while (*ptr != '\0') { do something } Now is there a double or int equivalent? #define ARRAY_SIZE 10 double someArray[ARRAY_SIZE] = {0}; double *ptr = someArray; // then not sure what to do here? I guess I am looking for an equivalent of the above while loop, but don't want to just do: for (int i = 0; i < ARRAY_SIZE); *ptr++) cout << *ptr; thanks!
If I understand you correctly, you want to iterate through the array and stop when *ptr has a certain value. That's not always possible. With a character array (string), a common convention is to have the string be "null-terminated"; that is, it will have a 0 byte ('\0') at the end. You can add such a sentinel to an int- or double-valued array (if you can single out a "special" value that won't otherwise be used), but it's not a generally applicable technique. By the way, your for-loop has a stray closing parenthesis after ARRAY_SIZE, so it won't compile as written. Also note that *ptr++ parses as *(ptr++): it does advance the pointer, and the dereferenced value is simply discarded, so a plain ptr++ in the increment expression says what you mean more clearly.
2,070,156
4,380,886
Margin in popup menu (qt3)
Screenshot: http://img130.imageshack.us/img130/6218/menuk.jpg Is there any method to get rid of the huge margin on the right side? It appears to be constant, because adding some text to the menu items doesn't change its width at all.
QPopupMenu is inherited from QWidget, so try the setMaximumWidth() function, or try calling adjustSize() after adding all the items to the popup menu.
2,070,216
2,070,269
Why building the same project generates different EXE file for each developer
My team and I are developing a VC++ 6 project. We are all using the same code base (using a version control system), and all our compiler/linker/environment settings (including include-directory order), as far as we can tell, are exactly the same. Of course we are using the same VC++ version with the same service packs (VC6 SP6). The problem is that the EXE that each one of us builds is a little bit different. I know that every time you build an EXE on the same computer, there are 3 locations in the file where the linker stores a time-stamp. I'm not talking about these differences. Though our EXE files are exactly the same length, when we compare the EXEs, there are thousands of bytes that differ. Many of those bytes differ by 0x20 in value. Any idea what may be the reason? Edit: Debug build (actually, we didn't check the release). Edit: The differences are in binary sections, not in text strings. Edit: All of the developers are using the same drive/folder names, for source and for products.
If the Debug configuration has the "Link incrementally" option checked, that is probably the reason for the diffs.
2,070,307
2,082,913
Qt: How to connect QScriptEngineDebugger to QScriptEngine in separate thread?
I need to process script in separate, non-GUI thread since script calls C++ function that can take very long time to process (seconds). Is it possible to connect QScriptEngineDebugger to my QScriptEngine in non-gui thread? The problem is - if I put QScriptEngineDebugger in same thread as QScriptEngine (non-gui) than debugger will crash on debug - the code shows that it wants to create it's debug window and such window can be created only in GUI thread. And if i place QScriptEngineDebugger in GUI thread application will crash since QScriptEngine is not thread-safe. Any insights?
Unless you're prepared to write your own script debugger, there doesn't seem to be a way to run the debugger in a different thread than the engine. Behind the scenes, QScriptEngineDebugger uses a class called QScriptEngineDebuggerFrontend, which in turn uses a class called QScriptEngineDebuggerBackend, which in turn makes many direct calls to the engine and installs its own agent into the engine. Long story short, there's a lot of coupling between the debugger and the engine. An alternative is to encapsulate your time-consuming C++ function inside a class which runs the time-consuming function in a background thread and emits a signal when the time-consuming function has completed. Then, connect the signal to a function in your script to process the results. Refer to the following documentation on how to connect signals from your C++ objects to functions in your script: http://doc.trolltech.com/4.5/qtscript.html#using-signals-and-slots
2,070,397
2,073,671
how to deal with a static analyzer output
We have started using a static analyzer (Coverity) on our code base. We were promptly stupefied by the sheer amount of warnings we received (it's in the hundreds of thousands); it would take the entire team a few months to clear them all (obviously impossible). The options we discussed so far are 1) hire a contractor to sort out the warnings and fix them - the drawback: we will probably need very experienced people to do all these modifications, and no contractor will have the required understanding of the code. 2) filter out the warnings and deal only with the dangerous ones - the problem here is that our static analysis output will always be cluttered by warnings, making it difficult for us to isolate problems. Also, the filtering of the warnings is itself a major effort. Either way, bringing our code to a state where the static analyzer can be a useful tool for us seems a monumental task. So how is it possible to work with the static analyzer without bringing current development efforts to a complete standstill?
The first thing to do is tweak the heck out of your analysis settings; Coverity support probably left you with a fairly generic configuration. Triage a representative sample of the defects, and if a checker doesn’t seem to be producing a lot more signal than noise, turn it off for now. (Most of Coverity’s checkers are good, but nobody’s perfect, and it sounds like you need to do some ruthless prioritization.) In the long run, turn some of those checkers back on, but mark them in your reporting as low priority. (This is harder than it should be; I’ve long argued that Coverity needs to read a couple of papers on defect ranking by somebody called Dawson Engler. :-) In the even longer run, try the checkers that are disabled by default; some of them find impressive bugs. And parse warnings are surprisingly useful, though you do need to turn off some bogus ones. Be cynically realistic about which part of your codebase you’re actually going to fix soon. Use components to skip analysis on the code you’re not going to fix defects in, at least for now. (For instance, in theory, if your product includes third-party code, you’re responsible for its quality and should patch bugs in it. In practice, such bugs rarely get fixed. And if it’s mature third-party code, the false positive rate will be high.) Setting up components and exclusion is tricky, but once it’s done, they work well—one of my negative look-ahead regexes had over a hundred disjuncts. Components also help with assigning individual responsibility for defects, which I’ve found to be crucial to getting them fixed. Set up a report for only new defects, and have people watch that URL. New defects are in active code, and it’s easier to get started with a No New Warnings policy. Let me end with a couple of disclaimers: You may want to re-ask this question in the Coverity support forum (http://forums.coverity.com/), which isn’t very active, but where we don’t have to worry about violating the NDA. 
I’ve got a list there of the checkers I found worth enabling. I do this for a living, and maybe you want to hire us (http://codeintegritysolutions.com/); I’m giving a talk on this subject at Stanford today. Hiring a consultant to do the tuning makes a lot of sense; having somebody outside the company doing the triaging is trickier. Having an outsider do the fixes is trickier still; learning from your mistakes is even more important than fixing them. I’ve expanded this a bit with some parts of my Stanford talk, for our corporate blog: http://codeintegrity.blogspot.com/2010/01/handling-embarrassment-of-riches.html.
2,070,567
2,070,665
Object oriented programming , inheritance copy constructor
Suppose I have a base class "person", and I publicly inherit a class "student" from the base class "person". I have not written copy constructors for the base or the derived class. Now suppose I write in the main program: main() { student sobj1("name", "computer science"); student sobj2=sobj1; } In the second line the compiler-generated copy constructor of student will be called, but before it executes, the compiler-generated copy constructor of the base class will be called to initialize the base portion of the object; then control comes back to the copy constructor of student, which initializes the student's portion of the object. This is the behaviour for the situation where we don't write the copy constructors. Now suppose we write copy constructors for both classes. I have tested that when I write student sobj2=sobj1; this line calls the copy constructor of student, which works, but the copy constructor of the base class will not be called in this case (the default constructor of the base class will be called instead). My question is: why?
I believe the rules are as follows: A base-class constructor should always be called before the derived-class constructor. You can choose which of the base class's constructors is called by calling it explicitly in the initialization list. If you do not do that, the default constructor is called. When a class has no copy constructor, the compiler generates one instead. It will call the copy constructors of all the members of the class and the copy constructor of the base class, just as your hand-written constructor actually should. So, there you go. Unless you call the copy constructor of the base class, the default one will be used, BUT the compiler is smart enough to actually call the copy constructor in its own generated copy constructor. Just in case you do not know how to call it, here's an example: Student(Student const & p): Person(p) { }
2,070,740
2,450,314
Drag Drop using SendMessage
This sounds funny.. just a little experiment. I wanted to simulate a drag-drop of a file onto an application/window using SendMessage. Is it possible? I don't have the code for the application, only the executable. The application is IP Messenger. What I wanted to do is use the "Send To" functionality to send the file to an .exe, which will find the IP Messenger window and simulate a drag-drop through code. The user will select the file and right-click "Send To" the .exe, which will do the drag-drop from code. **Note: IP Messenger supports drag-drop operation for files thx amit
There is the WM_DROPFILES Message. I guess that you could use CreateToolhelp32Snapshot to locate the window that is IP Messenger and then build the DROPFILES structure to send with the WM_DROPFILES message. The final link would be to Codeproject, with some help on creating the DROPFILES structure: How to Implement Drag and Drop Between Your Program and Explorer. Instead of using CreateToolhelp32Snapshot you could be using the FindWindow function. Here you will get the HWND for IP Messenger directly, instead of CTh32S, which will only locate the HANDLE for the process. When this is done you create the DROPFILES structure. Note that the receiving process must be able to read it, so allocate the block with GlobalAlloc rather than pointing at a local variable; read the comments in the "Initiating a drag and drop" section of the CodeProject link for more info on how. And finally you send it with SendMessage, passing the global handle as the wParam: SendMessage(ipMessHWND, WM_DROPFILES, (WPARAM)hDropFiles, 0);
2,070,782
2,072,582
How to speed up rotated text output in MFC
I have a MFC application that displays annotated maps, which can include a large amount of text. While the size and font of the text does not tend to change much, the rotation of the text varies considerably, in order to be aligned with the surrounding line work. This basically means that I have to do create and select a new font into the display context each time the rotation changes. Something like; if (TextRotationChanges) { m_pFont = new CFont; m_lf.lfEscapement = NewRotation; m_pFont->CreateFontIndirect(&m_lf); } CFont *OldFont = m_pDC->SelectObject(m_pFont); m_pDC->TextOut(x,y,text,strlen(text)); m_pDC->SelectObject(OldFont); This is obviously slow when dealing with large amounts of text. Is there any way of speeding this up without going to a different display engine such as D3D or OpenGL? Put another way, can I change the text rotation in the existing selected font? n.b. I'm already carrying out other obvious optimizations, like ensuring text is on screen at a visible size prior to attempting to draw it.
Creating and destroying many GDI objects can be slow. What you can do is create 360 fonts at the startup of your program, so that you can SelectObject() from a lookup table of pre-made fonts at the correct rotation, rather than creating them on demand. Or you can rotate your text not with lfEscapement but by using SetWorldTransform() with the appropriate rotation matrix (again, you could cache rotation matrices for speed). You'd have to test whether it actually gives you a speed gain. See my question here SetWorldTransform() and font rotation for an issue I had/have with that approach, though (haven't had time to go back and look into it).
2,070,951
2,070,968
main function does not return anything. Why?
With respect to C/C++, main() must always return an integer (zero to indicate success and non-zero to indicate failure). I can understand this: as the program runs it becomes a process, and every process should have an exit status, which we obtain by doing echo $? from the shell after the process is over. Now I don't understand why the main method does not return anything in Java. Has it got anything to do with the fact that the program is run on the JVM, and the JVM process is responsible for returning the exit status? Please clarify. Thanks, Roger
If the main method of a single-threaded Java application terminates, the application will terminate with exit code 0. If you need another exit code, maybe to indicate an error, you can place System.exit(yourNumberHere); anywhere in the code (especially outside of the main method). This is different for multi-threaded applications, where you either have to use System.exit from the inside or kill -9 from the outside to stop the JVM. Here's a quick example where termination of main doesn't stop the application (a typical service or daemon behaviour): public static void main(String args[]) { Thread iWillSurvive = new Thread(new Runnable() { public void run() { while(true) { // heat the CPU } } }); iWillSurvive.start(); } Remark: Sure, a thread will terminate when its run method (or the main method in the case of the main thread) terminates. And in this case, when all threads have terminated, the JVM will terminate with exit code 0 (which brings us back to the initial question). Hope everybody is happy now.
2,071,403
2,071,431
Calling a subclassed virtual method from a base class method
class A { public: virtual void doSomething(void) {} void doStuff(void) { doSomething(); } }; class B : public A { public: void doSomething(void) { // do some stuff here } }; B * b = new B; b->doStuff(); It gives me Segmentation fault. What am I doing wrong? It should work well in my opinion!
As far as I can see, there is nothing wrong with the polymorphism in the code below the class definition: b->doStuff() should call B's doSomething() method through the virtual dispatch, so the crash most likely originates elsewhere. If you want to call A's doSomething from inside B, you can write A::doSomething().
2,071,411
2,071,452
Returning from a multimap search with equal_range without being error-prone
I'm about to refactor some duplicated code. Two functions both search in a multimap using equal_range(). After the call to equal_range() there is a for loop that sets an iterator to equalRange.first with the condition it != equalRange.second. If the correct value is found, the two functions differ. What I would like to do is make the search a helper function of its own, used by the two previously mentioned functions. Making that work is not the problem. The problem is that I cannot figure out a way to make it "easy" and future-proof in a way that makes sense to other people using this code. Obviously, I would like something returned from the search function. If I were to return a boolean to indicate whether the value was found in the multimap, I would have to pass in an iterator to the multimap which points out the element. I find that quite ugly. If an iterator were returned instead, we of course have to check it against the boundaries back in the two functions that use the search function. We can't check it against multimap.end(), since we use equal_range, so equalRange.second doesn't have to equal multimap.end(). Using boundary checking returnIter == checkBound(x), where checkBound(x) returns multimap::upperbound(x), makes checkBound(x) aware of the equal_range implementation of the search function. Hence, if someone else were to change the search function, checkBound(x) might not work as expected. My standing point here is that the users of the search function should not be concerned with how it is implemented, i.e., should not know that it uses equal_range. What are your inputs and suggestions on this? Am I over-detailed here? How would you have implemented the search function? Thanks
Instead of an either/or decision on the return value, it sounds to me like you'd want to do what functions like map::insert do - return a std::pair<iterator, bool> to signal both the position and the success/failure of the search function.
2,071,449
2,086,314
my_thread_global_end threads didn't exit, error?
I am using the MySQL C++ connector (1.0.5). Recently I moved the get_driver_instance() and connect() calls to a secondary thread, and now I am getting the error below. Error in my_thread_global_end(): 1 threads didn't exit After googling I found that a mysql thread isn't exiting. Is there a method in the C++ wrapper to do this cleanup?
After googling I learned that mysql_thread_end() solves the problem. Anyway, since I was linking against libmysqlclient.a, I included the mysql.h header file and called mysql_thread_end() before exiting the secondary thread; the problem is now solved.
2,071,492
2,071,848
How to display exception message of managed C# code in c++ code
I am calling functions from a C# DLL (using SMO) in a C++ project, but the code in that DLL throws an exception. How can I display the exception message in my C++ code?
It depends how you call it. If you use COM then you'll get a failure HRESULT. You can use IErrorInfo to retrieve the exception message. If you use something else then you'll lose the error context, all you can see is an SEH exception with exception code 0xe0434f4e, catchable only with the __try and __except keywords. Using COM is heavily recommended. EDIT after you posted code. Okay, you are using COM. And the smart pointers derived from _com_ptr_t that are created by the #import directive. These smart pointers turn failure HRESULTs into C++ exceptions. You'll need to catch a _com_error exception. That class also has the plumbing to get a suitable exception description, use the Description() method.
2,071,557
2,071,832
Memory analysis for a process
I have a process which creates another process, and that one loads a bunch of modules. The thing is that these modules will all be loaded in the same process as the caller (by default). Is there any way I can collect resource information for each individual loaded module, even though they are all in one big process?
I have been in a situation where a process loaded some modules, these modules loaded lots of data from a database and put this data into STL and Boost containers (std::set, std::map, std::vector, boost::multi_index). Since most of the memory was used by these containers, my task was to measure how much memory each container used. If that sounds like your task, then you can add your own counting allocators to each container, and after that you will have information about memory consumption.
2,071,876
2,071,910
Does the following code invoke UB?
Does the following code invoke UB ? int main(){ volatile int i = 0; volatile int* p = &i; int j = ++i * *p; }
Yes, that is undefined behavior, because you are trying to violate the second rule below. The Standard states that: 1) Between the previous and next sequence point an object shall have its stored value modified at most once by the evaluation of an expression. 2) Furthermore, the prior value shall be accessed only to determine the value to be stored. Note: the order of evaluation of the operands of the * operator is unspecified, and *p is nothing but i.
2,071,993
2,072,069
Using DLLImport to import an Object
I have a dll for a c++ class (SLABHIDDevice.dll). I am trying to use the functions of this dll in a C#.net application. The dll contains several methods which I can use easily with statements such as this... (I apologize if I get some of the terminology wrong here; I am new to using dlls) [DllImport("SLABHIDDevice.dll")] public static extern byte GetHidString (Int32 deviceIndex, Int32 vid, Int32 pid, Byte hidStringType, String deviceString, Int32 deviceStringLength); The documentation for SLABHIDDevice.dll says that it also contains a class object, CHIDDevice, and that object has a whole list of member functions such as Open(). If I try to import Open() using the same syntax as above, I get an error saying that it cannot find an entry point for the Open() function. Is this because Open() is a member of CHIDDevice? This is the makeup of the dll from DUMPBIN... The bottom three functions are the only ones I am able to get to work. Does anyone know what syntax I need to use to get the other ones? What do the question marks mean that precede the function names?
Dump of file SLABHIDDEVICE.dll File Type: DLL Section contains the following exports for SLABHIDDevice.dll 00000000 characteristics 47E13E0F time date stamp Wed Mar 19 12:23:43 2008 0.00 version 1 ordinal base 26 number of functions 26 number of names ordinal hint RVA name 4 0 00001000 ??0CHIDDevice@@QAE@ABV0@@Z 5 1 00001330 ??0CHIDDevice@@QAE@XZ 6 2 00001430 ??1CHIDDevice@@UAE@XZ 7 3 00001080 ??4CHIDDevice@@QAEAAV0@ABV0@@Z 8 4 00020044 ??_7CHIDDevice@@6B@ 9 5 00001460 ?Close@CHIDDevice@@QAEEXZ 10 6 00001C70 ?FlushBuffers@CHIDDevice@@QAEHXZ 11 7 00001CA0 ?GetFeatureReportBufferLength@CHIDDevice@@QAEGXZ 12 8 00001850 ?GetFeatureReport_Control@CHIDDevice@@QAEEPAEK@Z 13 9 00001C80 ?GetInputReportBufferLength@CHIDDevice@@QAEGXZ 14 A 00001BE0 ?GetInputReport_Control@CHIDDevice@@QAEEPAEK@Z 15 B 00001A20 ?GetInputReport_Interrupt@CHIDDevice@@QAEEPAEKGPAK@Z 16 C 00001CB0 ?GetMaxReportRequest@CHIDDevice@@QAEKXZ 17 D 00001C90 ?GetOutputReportBufferLength@CHIDDevice@@QAEGXZ 18 E 00001730 ?GetString@CHIDDevice@@QAEEEPADK@Z 19 F 00001CC0 ?GetTimeouts@CHIDDevice@@QAEXPAI0@Z 20 10 00001700 ?IsOpened@CHIDDevice@@QAEHXZ 21 11 000014A0 ?Open@CHIDDevice@@QAEEKGGG@Z 22 12 00001360 ?ResetDeviceData@CHIDDevice@@AAEXXZ 23 13 00001810 ?SetFeatureReport_Control@CHIDDevice@@QAEEPAEK@Z 24 14 00001B80 ?SetOutputReport_Control@CHIDDevice@@QAEEPAEK@Z 25 15 000018C0 ?SetOutputReport_Interrupt@CHIDDevice@@QAEEPAEK@Z 26 16 00001CE0 ?SetTimeouts@CHIDDevice@@QAEXII@Z 3 17 00001320 GetHidGuid 2 18 00001230 GetHidString 1 19 00001190 GetNumHidDevices Summary 6000 .data 7000 .rdata 5000 .reloc 4000 .rsrc 1C000 .text
You cannot use P/Invoke to call instance methods of a C++ class. The primary hang-up is that you can't create an object of the class, you cannot discover the required memory allocation size. Passing the implicit "this" pointer to the instance method is another problem, it needs to be passed in a register. You'll need to create a managed wrapper for the class, that requires using the C++/CLI language. Google "C++/CLI wrapper" for good hits.
2,072,256
2,072,292
Under what circumstances can a vtable pointer be null (or 0x1)?
I am currently debugging a crash log. The crash occurs because the vtable pointer of a (C++) object is 0x1, while the rest of the object seems to be OK as far as I can tell from the crash log. The program crashes when it tries to call a virtual method. My question: under what circumstances can a vtable pointer become null? Does operator delete set the vtable pointer to null? This occurs on OS X using gcc 4.0.1 (Apple Inc. build 5493).
Could be a memory trample - something writing over that vtable by mistake. There is a nearly infinite number of ways to "achieve" this in C++. A buffer overflow, for example.
2,072,424
2,072,510
outputting to cin from a worker thread (c++)
My program has a main thread that takes command input from a user. Separately, it has potentially multiple (at least 1) worker threads churning data in the background. The user is able to terminate the program by typing a command into the console. However, when the data churning is done, the main thread is still blocked waiting for user input and hence the program does not terminate. What I would like to know is how to write the terminate command, "q\n", into std::cin from a worker thread so that the blocking command-input thread (also the main thread) will terminate. Or would this be a bad thing to do? I've tried the below, but the program simply hangs or is not able to write to std::cin; I'm not sure why. static ACE_THR_FUNC_RETURN worker( void *p) { ..... if (_this->m_num_threads_done == _this->m_threads.size()) { fputs("q\n", stdin); } } On the main thread, this is called from main: void runEventLoop() { printWelcomeMessage(); char buffer[MAXC]; while( !m_exitLoop ) { std::cin.getline(buffer, MAXC); if( std::cin.eof() ) break; handleCommand( buffer ); } } Would someone please advise on what I'm doing wrong here, or otherwise suggest a better solution for what I'm trying to accomplish? thanks
On Unix, when you need a thread to wait for multiple things (for example, a character on std::in and a command from a worker thread to communicate that it is shutting down) you use select()... You could create a pipe with the pipe() system call and the worker thread could write to it when it's exiting... The main thread that is waiting currently on cin could call select() on both to block, then react appropriately to either when it's woken up.. On Windows, you can probably use WaitForMultipleObjects() for the same purpose.
2,072,454
2,072,710
How do I find out why g++ takes a very long time on a particular file?
I am building a lot of auto-generated code, including one particularly large file (~15K lines), using a mingw32 cross compiler on linux. Most files are extremely quick, but this one large file takes an unexpectedly long time (~15 minutes) to compile. I have tried manipulating various optimization flags to see if they had any effect, without any luck. What I really need is some way of determining what g++ is doing that is taking so long. Are there any (relatively simple) ways to have g++ generate output about different phases of compilation, to help me narrow down what the hang-up might be? Sadly, I do not have the ability to rebuild this cross-compiler, so adding debugging information to the compiler and stepping through it is not a possibility. What's in the file: a bunch of includes a bunch of string comparisons a bunch of if-then checks and constructor invocations The file is a factory for producing a ton of different specific subclasses of a certain parent class. Most of the includes, however, are nothing terribly fancy. The results of -ftime-report, as suggested by Neil Butterworth, indicate that the "life analysis" phase is taking 921 seconds, which takes up most of the 15 minutes. It appears that this takes place during data flow analysis. The file itself is a bunch of conditional string comparisons, constructing an object by class name provided as a string. We think changing this to point into a map of names to function pointers might improve things a bit, so we're going to try that. Indeed, generating a bunch of factory functions (per object) and creating a map from the string name of the object to a pointer to its factory function reduced compile time from the original 15 minutes to about 25 seconds, which will save everyone tons of time on their builds. Thanks again to Neil Butterworth for the tip about -ftime-report.
Won't give all the details you want, but try running with the -v (verbose) and -ftime-report flags. The latter produces a summary of what the compiler has been up to.
2,072,838
2,074,662
Is there a better design pattern/method to use?
I've currently completed one of two phases of a project that required I write database information to XML using C++. While use of a third party tool was used to do the actually formatting of XML tags and data, I still had to design a model along with business logic to take the database tables and map them into XML structures. For this I ended up creating an individual class for each XML structure, resulting in a large amount of classes (~75). Each class had the knowledge of how to read its associated table and serialize itself to XML through the third party tool. In the end the system worked very well ( on time and budget ) and output errors were extremely easy to find. Phase two is almost identical however instead of formatted text it will be binary data. So while I am still considering utilizing the same strategy used in phase one, I would like to inquire, is a better method or design pattern that would lend itself to this problem? Particularly, due to the large amount of dependancies in some of the XML classes in phase one, unit testing was very difficult.
Another idea that might also fit: when performance is not an issue, generic data containers could be used. A generic data container could take a specification of one node (like an XML node, an object, or even a table entry) and just store such a container. This way, the ~75 classes could be replaced by one or a handful. Services like serialization could also be provided in a generic fashion. Different instances could thus play the role that is now played by different classes. As far as I understood, the data primitives used are rather straightforward and limited, so this could be implemented rather simply.
2,073,016
2,074,374
C++: how to prevent destructing of objects constructed in argument?
I have a question about the following situation. There is a class A which has an object of type class B as its member. Since I'd like B to be a base class of a group of other classes, I need to use a pointer or reference to the object, not a copy of it, to use virtual methods of B inside A properly. But when I write code like this class B {public: B(int _i = 1): i(_i) {}; ~B() {i = 0; // just to indicate existence of problem: here maybe something more dangerous, like delete [] operator, as well! cout << "B destructed!\n"; }; virtual int GetI () const {return i;}; // for example protected: int i; }; class A {public: A(const B& _b): b(_b) {} void ShowI () {cout <<b.GetI()<<'\n';}; private: const B& b; }; and use it this way B b(1); A a(b); a.ShowI(); it works perfectly: 1 B destructed! But A a(B(1)); a.ShowI(); gives a very unwanted result: object b is created and a.b is set as a reference to it, but just after the constructor of A finishes, object b is destructed! The output is: B destructed! 0 I repeat again that using a copy of b instead of a reference to it in A class A {public: A(B _b): b(_b) {} void ShowI () {cout <<b.GetI()<<'\n';}; private: B b; }; won't work if B is a base class and A calls its virtual functions. Maybe I simply do not know the proper way to write the necessary code to make it work perfectly (then I'm sorry!), or maybe it is not so easy at all :-( Of course, if B(1) were passed to a function or method, not a class constructor, it would work perfectly. And of course, I may use the same approach as described here to make B a properly cloneable base or derived object, but doesn't that seem too heavyweight for such an easy-looking problem? And what if I want to use a class B that I cannot edit?
This is a standard issue, don't fear. First, you could retain your design with a subtle change: class A { public: A(B& b): m_b(b) {} private: B& m_b; }; By using a reference instead of a const reference, the compiler will reject the call to A's constructor that is made with a temporary, because it is illegal to bind a non-const reference to a temporary. There is no (direct) solution that actually retains the const, since unfortunately compilers accept the strange construct &B() even though it means taking the address of a temporary (and they don't even shy away from making it a pointer to non-const...). There are a number of so-called smart pointers. The basic one, in the STL, is called std::auto_ptr. Another (well-known) one is boost::shared_ptr. Those pointers are said to be smart because they allow you not to worry (too much) about the destruction of the object, and in fact guarantee you that it WILL be destroyed, and correctly at that. Thus you never have to worry about the call to delete. One caveat though: don't use std::auto_ptr. It's a mean beast, because it has an unnatural behavior regarding copying. std::auto_ptr<A> a(new A()); // Building a->myMethod(); // Fine std::auto_ptr<A> b = a; // Constructing b from a b->myMethod(); // Fine a->myMethod(); // ERROR (and usually crash) The problem is that copying (using copy construction) or assigning (using the assignment operator) means transfer of ownership from the copied object toward the copying one... VERY SURPRISING. If you have access to the upcoming standard, you can use std::unique_ptr, much like an auto pointer minus the bad behavior: it cannot be copied or assigned. In the meantime, you can simply use boost::shared_ptr or perhaps std::tr1::shared_ptr. They are essentially identical. They are a fine example of "reference counted" pointers. And they are smart at that.
std::vector< boost::shared_ptr<A> > method() { boost::shared_ptr<A> a(new A()); // Create an `A` instance and a pointer to it std::vector< boost::shared_ptr<A> > v; v.push_back(a); // 2 references to the A instance v.push_back(a); // 3 references to the A instance return v; } // a is destroyed, only 2 references now void function() { std::vector< boost::shared_ptr<A> > w = method(); // 2 instances w.erase(w.begin()); // remove w[0], 1 instance } // w is destroyed, 0 instance // upon dying, destroys A instance That's what reference counted means: a copy and its original point to the same instance, and they share its ownership. And as long as there is one of them still alive, the instance of A exists, being destructed by the last one of them to die so you don't have to worry about it!! You should remember though, that they do share the pointer. If you modify the object using one shared_ptr, all its relatives will actually see the change. You can do a copy in the usual mode with pointers: boost::shared_ptr<A> a(new A()); boost::shared_ptr<A> b(new A(*a)); // copies *a into *b, b has its own instance So to sum up: don't use an auto_ptr, you'll have bad surprises use unique_ptr if available, it's your safer bet and the easiest to deal with use share_ptr otherwise, but beware of the shallow copy semantics Good luck!
2,073,054
2,073,084
How to parse complex string with C++?
I'm trying to figure out how I could parse this string using "sstream" and C++. The format of it is: "string,int,int". I need to be able to assign the first part of the string, which contains an IP address, to a std::string. Here is an example of this string: std::string("127.0.0.1,12,324"); I would then need to obtain string someString = "127.0.0.1"; int aNumber = 12; int bNumber = 324; I will mention again that I can't use the boost library, just sstream :-) Thanks
Here's a useful tokenization function. It doesn't use streams, but can easily perform the task you require by splitting the string on commas. Then you can do whatever you want with the resulting vector of tokens. /// String tokenizer. /// /// A simple tokenizer - extracts a vector of tokens from a /// string, delimited by any character in delims. /// vector<string> tokenize(const string& str, const string& delims) { string::size_type start_index, end_index; vector<string> ret; // Skip leading delimiters, to get to the first token start_index = str.find_first_not_of(delims); // While found a beginning of a new token // while (start_index != string::npos) { // Find the end of this token end_index = str.find_first_of(delims, start_index); // If this is the end of the string if (end_index == string::npos) end_index = str.length(); ret.push_back(str.substr(start_index, end_index - start_index)); // Find beginning of the next token start_index = str.find_first_not_of(delims, end_index); } return ret; }
2,073,079
2,073,212
what does compiler do with a[i] which a is array? And what if a is a pointer?
I was told by the c-faq that the compiler does different things to deal with a[i] depending on whether a is an array or a pointer. Here's an example from the c-faq: char a[] = "hello"; char *p = "world"; Given the declarations above, when the compiler sees the expression a[3], it emits code to start at the location ``a'', move three past it, and fetch the character there. When it sees the expression p[3], it emits code to start at the location ``p'', fetch the pointer value there, add three to the pointer, and finally fetch the character pointed to. But I was told that when dealing with a[i], the compiler tends to convert a (which is an array) into a pointer to its first element. So I want to see the assembly code to find out which is right. EDIT: Here's the source of this statement. c-faq And note this sentence: an expression of the form a[i] causes the array to decay into a pointer, following the rule above, and then to be subscripted just as would be a pointer variable in the expression p[i] (although the eventual memory accesses will be different, " I'm pretty confused by this: since a has decayed to a pointer, then what does he mean by "memory accesses will be different"?
Here's my code: // array.cpp #include <cstdio> using namespace std; int main() { char a[6] = "hello"; char *p = "world"; printf("%c\n", a[3]); printf("%c\n", p[3]); } And here's part of the assembly code I got using g++ -S array.cpp .file "array.cpp" .section .rodata .LC0: .string "world" .LC1: .string "%c\n" .text .globl main .type main, @function main: .LFB2: leal 4(%esp), %ecx .LCFI0: andl $-16, %esp pushl -4(%ecx) .LCFI1: pushl %ebp .LCFI2: movl %esp, %ebp .LCFI3: pushl %ecx .LCFI4: subl $36, %esp .LCFI5: movl $1819043176, -14(%ebp) movw $111, -10(%ebp) movl $.LC0, -8(%ebp) movzbl -11(%ebp), %eax movsbl %al,%eax movl %eax, 4(%esp) movl $.LC1, (%esp) call printf movl -8(%ebp), %eax addl $3, %eax movzbl (%eax), %eax movsbl %al,%eax movl %eax, 4(%esp) movl $.LC1, (%esp) call printf movl $0, %eax addl $36, %esp popl %ecx popl %ebp leal -4(%ecx), %esp ret I can not figure out the mechanism of a[3] and p[3] from codes above. Such as: where was "hello" initialized? what does $1819043176 mean? maybe it's the memory address of "hello" (address of a)? I'm sure that "-11(%ebp)" means a[3], but why? In "movl -8(%ebp), %eax", content of poniter p is stored in EAX, right? So $.LC0 means content of pointer p? What does "movsbl %al,%eax" mean? And, note these 3 lines of codes: movl $1819043176, -14(%ebp) movw $111, -10(%ebp) movl $.LC0, -8(%ebp) The last one use "movl" but why did not it overwrite the content of -10(%ebp)? (I know the anser now :), the address is incremental and "movl $.LC0 -8(%ebp) will only overwrite {-8, -7, -6, -5}(%ebp)) I'm sorry but I'm totally confused of the mechanism, as well as assembly code... Thank you very much for your help.
Here, a is an array of chars (not a pointer), while p is a pointer to a char which happens, in this case, to be pointing at a string literal. movl $1819043176, -14(%ebp) movw $111, -10(%ebp) initializes the local "hello" on the stack (that's why it is referenced through ebp): $1819043176 is 0x6C6C6568, the bytes 'h','e','l','l' packed into one 32-bit little-endian word, and $111 is 0x006F, i.e. 'o' plus the terminating '\0'. Since "hello" is more than 4 bytes, it takes two instructions. movzbl -11(%ebp), %eax movsbl %al,%eax references a[3]: because the array lives on the stack, the element is fetched directly from a fixed offset off ebp — the first instruction loads the byte zero-extended, the second sign-extends it to an int for the printf call. movl -8(%ebp), %eax does indeed load the p pointer. LC0 refers to a "relative memory" location: a fixed memory location will be allocated for the literal once the program is loaded in memory. movsbl %al,%eax means "move sign-extended byte to long": it sign-extends the byte in al into the full eax register (al is the low byte of eax).
2,073,366
2,073,403
how to return from a blocked call?
Is it possible to force a thread to return from a call to a blocking function, such as a blocking read from a stream? For example: int x; std::cin >> x;
No, it's not possible. If you want to find out whether there's data to read, use the select() syscall - if you only read when there's data waiting, you'll never block
2,073,510
2,073,914
simple 2d collision problem
I want to find when a collision between a static and a moving ball occurs, but the algorithm I came up with, sometimes doesn't detect a collision and the moving ball goes through the static one. The moving ball is affected by gravity and the static one is not. Here's my collision detection code: GLfloat whenSpheresCollide(const sphere2d &firstSphere, const sphere2d &secondSphere) { Vector2f relativePosition = subtractVectors(firstSphere.vPosition, secondSphere.vPosition); Vector2f relativeVelocity = subtractVectors(firstSphere.vVelocity, secondSphere.vVelocity); GLfloat radiusSum = firstSphere.radius + secondSphere.radius; //We'll find the time when objects collide if a collision takes place //r(t) = P[0] + t * V[0] // //d^2(t) = P[0]^2 + 2 * t * P[0] * V[0] + t^2 * V[0]^2 // //d^2(t) = V[0]^2 * t^2 + 2t( P[0] . V[0] ) + P[0]^2 // //d(t) = R // //d(t)^2 = R^2 // //V[0]^2 * t^2 + 2t( P[0] . V[0] ) + P[0]^2 - R^2 = 0 // //delta = ( P[0] . V[0] )^2 - V[0]^2 * (P[0]^2 - R^2) // // We are interested in the lowest t: // //t = ( -( P[0] . 
V[0] ) - sqrt(delta) ) / V[0]^2 // GLfloat equationDelta = squaref( dotProduct(relativePosition, relativeVelocity) ) - squarev( relativeVelocity ) * ( squarev( relativePosition ) - squaref(radiusSum) ); if (equationDelta >= 0.0f) { GLfloat collisionTime = ( - dotProduct(relativePosition, relativeVelocity) - sqrtf(equationDelta) ) / squarev(relativeVelocity); if (collisionTime >= 0.0f && collisionTime <= 1.0f / GAME_FPS) { return collisionTime; } } return -1.0f; } And here is the updating function that calls collision detection: void GamePhysicsManager::updateBallPhysics() { // //Update velocity vVelocity->y -= constG / GAME_FPS; //v = a * t = g * 1 sec / (updates per second) shouldApplyForcesToBall = TRUE; vPosition->x += vVelocity->x / GAME_FPS; vPosition->y += vVelocity->y / GAME_FPS; if ( distanceBetweenVectors( *pBall->getPositionVector(), *pBasket->getPositionVector() ) <= pBasket->getRadius() + vectorLength(*vVelocity) / GAME_FPS ) { //Ball sphere sphere2d ballSphere; ballSphere.radius = pBall->getRadius(); ballSphere.mass = 1.0f; ballSphere.vPosition = *( pBall->getPositionVector() ); ballSphere.vVelocity = *( pBall->getVelocityVector() ); sphere2d ringSphereRight; ringSphereRight.radius = 0.05f; ringSphereRight.mass = -1.0f; ringSphereRight.vPosition = *( pBasket->getPositionVector() ); //ringSphereRight.vPosition.x += pBasket->getRadius(); ringSphereRight.vPosition.x += (pBasket->getRadius() - ringSphereRight.radius); ringSphereRight.vVelocity = zeroVector(); GLfloat collisionTime = whenSpheresCollide(ballSphere, ringSphereRight); if ( collisionTime >= 0.0f ) { DebugLog("collision"); respondToCollision(&ballSphere, &ringSphereRight, collisionTime, pBall->getRestitution() * 0.75f ); } // //Implement selection of the results that are first to collide collision vVelocity->x = ballSphere.vVelocity.x; vVelocity->y = ballSphere.vVelocity.y; vPosition->x = ballSphere.vPosition.x; vPosition->y = ballSphere.vPosition.y; } Why isn't the collision being detected in 
100% of cases? It's being detected only in 70% of cases. Thanks. UPDATE: Problem seems to be solved when I change FPS from 30 to 10. How does FPS affect my collision detection?
delta = ( P[0] . V[0] )^2 - V[0]^2 * (P[0]^2 - R^2) Shouldn't that be delta = b^2 - 4ac? [Edit] Oh I see, you factored the 4 out. In that case, are you sure you're considering both solutions for t? t = ( -( P[0] . V[0] ) - sqrt(delta) ) / V[0]^2 and t = ( -( P[0] . V[0] ) + sqrt(delta) ) / V[0]^2
2,074,099
2,075,050
Coding Practices which enable the compiler/optimizer to make a faster program
Many years ago, C compilers were not particularly smart. As a workaround, K&R invented the register keyword to hint to the compiler that maybe it would be a good idea to keep this variable in an internal register. They also made the ternary operator to help generate better code. As time passed, the compilers matured. They became very smart in that their flow analysis allowed them to make better decisions about what values to hold in registers than you could possibly do. The register keyword became unimportant. FORTRAN can be faster than C for some sorts of operations, due to alias issues. In theory, with careful coding, one can get around this restriction to enable the optimizer to generate faster code. What coding practices are available that may enable the compiler/optimizer to generate faster code? Identifying the platform and compiler you use would be appreciated. Why does the technique seem to work? Sample code is encouraged. Here is a related question [Edit] This question is not about the overall process of profiling and optimizing. Assume that the program has been written correctly, compiled with full optimization, tested and put into production. There may be constructs in your code that prohibit the optimizer from doing the best job that it can. What can you do to refactor that will remove these prohibitions, and allow the optimizer to generate even faster code? [Edit] Offset related link
Write to local variables and not output arguments! This can be a huge help for getting around aliasing slowdowns. For example, if your code looks like void DoSomething(const Foo& foo1, const Foo* foo2, int numFoo, Foo& barOut) { for (int i=0; i<numFoo; i++) { barOut.munge(foo1, foo2[i]); } } the compiler doesn't know that foo1 != barOut, and thus has to reload foo1 each time through the loop. It also can't read foo2[i] until the write to barOut is finished. You could start messing around with restricted pointers, but it's just as effective (and much clearer) to do this: void DoSomethingFaster(const Foo& foo1, const Foo* foo2, int numFoo, Foo& barOut) { Foo barTemp = barOut; for (int i=0; i<numFoo; i++) { barTemp.munge(foo1, foo2[i]); } barOut = barTemp; } It sounds silly, but the compiler can be much smarter dealing with the local variable, since it can't possibly overlap in memory with any of the arguments. This can help you avoid the dreaded load-hit-store (mentioned by Francis Boivin in this thread).
2,074,173
2,074,241
Resizing Arrays - Difference between two execution blocks?
I have a function which grows an array when trying to add an element if it is full. Which of the execution blocks is better or faster? I think my second block (commented out) may be wrong, because after doubling my array I then go back and point to the original. When creating arrays, does the compiler look for a contiguous block in memory which it entirely fits into? (On the stack/heap? I don't fully understand which; though it is important for me to learn, it is irrelevant to the actual question.) If so, would this mean using the second block could potentially overwrite other information by overwriting adjacent memory? (Since the original would use 20 adjacent blocks of memory, and the latter 40.) Or would it just mean the location of elements in my array would be split, causing poor performance? void Grow() { length *= 2; // double the size of our stack // create temp pointer to this double sized array int* tempStack = new int[length]; // loop the same number of times as original size for(int i = 0; i < (length / 2); i++) { // copy the elements from the original array to the temp one tempStack[i] = myStack[i]; } delete[] myStack; //delete the original pointer and free the memory myStack = tempStack; //make the original point to the new stack //Could do the following - but may not get contiguous memory block, causing overwritten data #if 0 int* tempStack = myStack; //create temp pointer to our current stack delete[] myStack; //delete the original pointer and free memory myStack = new int[length *= 2]; //delete not required due to new? myStack = tempStack; #endif }
The second block wouldn't accomplish what you want at all. When you do myStack = new int[length *= 2]; then the system will return a pointer to wherever it happens to allocate the new, larger array. You then reassign myStack to the old location (which you've already de-allocated!), which means you're pointing at memory that's not allocated (bad!) and you've lost the pointer to the new memory you just allocated (also bad!). Edit: To clarify, your array will be allocated on the heap. Additionally, the (new) pointer returned by your larger array allocation (new int[foo]) will be a contiguous block of memory, like the old one, just probably in a different location. Unless you go out of bounds, don't worry about "overwriting" memory.
2,074,424
2,074,445
Which open-source C++ projects have good (inspiring) tests?
What are inspiring examples of open-source software written in C++ with great tests? I would like to read, study and follow.
I like mysql's exhaustive test suite, where they add a test for almost every bug they fix to prevent regressions.
2,074,494
2,074,520
virtual function question
#include "stdafx.h"
#include <iostream>
#include <vector>
#include <string>

class Helper
{
public:
    Helper() { init(); }

    virtual void print()
    {
        int nSize = m_vItems.size();
        std::cout << "Size : " << nSize << std::endl;
        std::cout << "Items: " << std::endl;
        for(int i = 0; i < nSize; i++)
        {
            std::cout << m_vItems[i] << std::endl;
        }
    }

protected:
    virtual void init()
    {
        m_vItems.push_back("A");
    }

    std::vector<std::string> m_vItems;
};

class ItemsHelper : public Helper
{
public:
    ItemsHelper() { }

protected:
    virtual void init()
    {
        Helper::init();
        m_vItems.push_back("B");
    }
};

int _tmain(int argc, _TCHAR* argv[])
{
    ItemsHelper h;
    h.print();
}

This outputs that the size of the vector is 1. I expected the size to be 2, because in the ItemsHelper::init function I call the base class Helper::init() function and then add a second item to the vector. The problem is, ItemsHelper::init doesn't get called; the base class init function gets called instead. I want the ItemsHelper::init function to get called, and I can do that by calling init in the ItemsHelper ctor rather than in the base class. BUT, the question is, is there a better way to achieve that and still keep the call to init() in the base class? Because what if I want to create a Helper object instead of an ItemsHelper? Then the init function would never get called. btw, this is a simplified version of an issue I'm seeing in a much larger object, I just made these objects up for example.
In a base class constructor, the derived class has not yet been constructed, so the overridden function on the derived class is not yet available. There's a FAQ entry on this somewhere... which I can't find. The simplest solution is to just put the .push_back("A") part of init into the Helper constructor and the .push_back("B") into the ItemsHelper constructor. This seems to do what you are trying to do and cuts out the unnecessary init virtual function.
2,074,579
2,074,884
Should I use _T or _TEXT on C++ string literals?
For example: // This will become either SomeMethodA or SomeMethodW, // depending on whether _UNICODE is defined. SomeMethod( _T( "My String Literal" ) ); // Becomes either AnotherMethodA or AnotherMethodW. AnotherMethod( _TEXT( "My Text" ) ); I've seen both. _T seems to be for brevity and _TEXT for clarity. Is this merely a subjective programmer preference or is it more technical than that? For instance, if I use one over the other, will my code not compile against a particular system or some older version of a header file?
A simple grep of the SDK shows us that the answer is that it doesn't matter—they are the same. They both turn into __T(x). C:\...\Visual Studio 8\VC>findstr /spin /c:"#define _T(" *.h crt\src\tchar.h:2439:#define _T(x) __T(x) include\tchar.h:2390:#define _T(x) __T(x) C:\...\Visual Studio 8\VC>findstr /spin /c:"#define _TEXT(" *.h crt\src\tchar.h:2440:#define _TEXT(x) __T(x) include\tchar.h:2391:#define _TEXT(x) __T(x) And for completeness: C:\...\Visual Studio 8\VC>findstr /spin /c:"#define __T(" *.h crt\src\tchar.h:210:#define __T(x) L ## x crt\src\tchar.h:889:#define __T(x) x include\tchar.h:210:#define __T(x) L ## x include\tchar.h:858:#define __T(x) x However, technically, for C++ you should be using TEXT() instead of _TEXT(), but it (eventually) expands to the same thing too.
2,074,780
2,075,795
how to create a vpn software
I want to create an application which creates a VPN between some endpoints, something like Hamachi, and I do not have a starting point. I haven't found any resource that explains how to create such a network application. I want to use C# because I have some experience with it. I really need some help, anything that can put me on the right way. Thanks.
There are a number of distinct elements of VPN software that you'll have to figure out: What technology/standard will your program use to provide the privacy? Some common ones are IPSEC, L2TP, PPTP, SSH, and SSL. Web searches ought to turn up rich information (including RFCs) on all of these. If you're doing this as a learning exercise, rather than needing actual security, you could also design your own. Are you implementing a client, a server, or both? What operating system(s) will you support? This affects what you need to do to convince it to route packets through your application. Do you plan to interoperate with software implementing some standard?
2,075,078
2,075,099
C++/VS2005: Defining the same class name in two different .cpp files
Somewhat of an academic question, but I ran into this while writing some unit tests. My unit test framework (UnitTest++) allows you to create structs to serve as fixtures. Usually these are customized to the tests in the file, so I put them at the top of my unit test file. //Tests1.cpp struct MyFixture { MyFixture() { ... do some setup things ...} }; TEST_FIXTURE(MyFixture, SomeTest) { ... } //Tests2.cpp struct MyFixture { MyFixture() { ... do some other setup things, different from Tests1}}; TEST_FIXTURE(MyFixture, SomeOtherTest) { ... } However, I found recently (with VS2005 at least) that when you name the fixture struct using the same name (so now two versions of the struct exist with the same name), then one of the versions is silently thrown out. This is pretty surprising, because I have my compiler set to /W4 (highest warning level) and no warning comes out. I guess this is a name clash, and why namespaces were invented, but do I really need to wrap each of my unit test fixtures in a separate namespace? I just want to make sure I'm not missing something more fundamental. Is there a better way to fix this - should this be happening? Shouldn't I be seeing a duplicate symbols error or something?
Try sticking the classes in an anonymous namespace, you may find it less distasteful than having to create and name a new namespace for each file. Don't have access to VS2005 and Cpp unit but this may work.. //Tests1.cpp namespace { struct MyFixture { MyFixture() { ... do some setup things ...} }; } TEST_FIXTURE(MyFixture, SomeTest) { ... } //Tests2.cpp namespace { struct MyFixture { MyFixture() { ... do some other setup things, different from Tests1}}; } TEST_FIXTURE(MyFixture, SomeOtherTest) { ... }
2,075,123
2,075,212
How to get an object of a unknown class with given classname
I am searching for a way to determine at runtime which type of object should be allocated, based on a given class name of type const char*. Well, the simplest way of course is to use loads of ifs/else ifs, but that doesn't seem applicable, because I have > 100 different classes (well, at least they all derive from one base class), and I have to add new classes quite regularly as well. I already came up with a first draft, but sadly it doesn't compile yet (mingw & g++ 4.4):

template<typename TBase, typename TDerived, typename... TArgs>
Base* get_classobject(const char* classname)
{
    if(strcmp(classname, typeid(TDerived).name()) == 0)
        return new TDerived;
    //
    else if(sizeof...(TArgs) > 0)
        return get_classobject<TBase, TArgs...>(classname);
    else
        return 0;
}

int main()
{
    Base* obj = get_classobject<Base,A,Foo,B,C>("Foo");
    // ^- Types A, B, C and Foo are all derived from Base
    delete obj; //of course we got a virtual dtor ;)
    return 0;
}

but that sizeof...(TArgs)>0 doesn't stop gcc from trying to generate code for get_classobject<TBase,const char*>(const char*), which fails. Do you have any idea how to fix this, or any other idea? Thanks.

EDIT: I solved it:

template<typename TBase, typename TDerived>
Base* get_classobject(const char* classname)
{
    if(strcmp(classname, typeid(TDerived).name()) == 0)
        return new TDerived;
    return 0;
}

template<typename TBase, typename TDerived, typename TArg, typename... TArgs>
Base* get_classobject(const char* classname)
{
    if(strcmp(classname, typeid(TDerived).name()) == 0)
        return new TDerived;
    return get_classobject<TBase, TArg, TArgs...>(classname);
}

EDIT For interested readers: You should know that the implementation above is NOT compiler independent at all. The output of typeid(sometype).name() is compiler/implementation specific.
Using a static const char* name variable or function inside all derived classes would fix this, but adds a bunch of work (of course you can use a macro for this, but if you are using macros already, you could as well use another object factory method).
Can't you just declare template<typename TBase, typename TDerived, typename TArg, typename... TArgs> ? Then you can specialize for the case of typename TBase, typename TDerived, typename TArg
2,075,231
2,075,268
Problem with a trainer I'm trying to create (for educational purposes)
I'm trying to create a trainer for Icy Tower 1.4 for educational purposes. I wrote a function that shorten the WriteProcessMemory function like that: void WPM(HWND hWnd,int address,byte data[]) { DWORD proc_id; GetWindowThreadProcessId(hWnd, &proc_id); HANDLE hProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, proc_id); if(!hProcess) return; DWORD dataSize = sizeof(data); WriteProcessMemory(hProcess,(LPVOID)address,&data,dataSize,NULL); CloseHandle(hProcess); } and that's the function that should stop the Icy Tower Clock: void ClockHack(int status) { if(status==1)//enable { //crashes the game byte data[]={0xc7,0x05,0x04,0x11,0x45,0x00,0x00,0x00,0x00,0x00}; WPM(FindIcyTower(),0x00415E19,data); } else if(status==0)//disable { byte data[]={0xA3,0x04,0x11,0x45,0x00}; } } in the else statement there's the orginal AOB of the Opcode. When I call the ClockHack function with the status parameter set to 1, the game crashes. In Cheat Engine I wrote for this a script, that dosen't exactly write to the same address because I did Code Cave and it works great. Someone knows why? Thank you. By the way: it is for educational purposes only.
You can't pass an array to a function like that. Having a byte[] parameter is the same as a byte * parameter, and sizeof(data) will just give you the size of a pointer. Also, you shouldn't use &data since it's already a pointer. So your function should look like: void WPM(HWND hWnd,int address, byte *data, int dataSize) { //.... WriteProcessMemory(hProcess,(LPVOID)address,data,dataSize,NULL); //... }
2,075,247
2,075,256
C++: Overloading operator=
Okay so I have a class that has 'weak typing' I.E. it can store many different types defined as: #include <string> class myObject{ public: bool isString; std::string strVal; bool isNumber; double numVal; bool isBoolean; bool boolVal; double operator= (const myObject &); }; I would like to overload the assignment operator like this: double myObject::operator= (const myObject &right){ if(right.isNumber){ return right.numVal; }else{ // Arbitrary Throw. throw 5; } } So that I can do this: int main(){ myObject obj; obj.isNumber = true; obj.numVal = 17.5; //This is what I would like to do double number = obj; } But when I do that, I get: error: cannot convert ‘myObject’ to ‘double’ in initialization At the assignment. I have also tried: int main(){ myObject obj; obj.isNumber = true; obj.numVal = 17.5; //This is what I would like to do double number; number = obj; } To which I get: error: cannot convert ‘myObject’ to ‘double’ in assignment Is there something I am missing? or is it simply not possible to do a conversion like that by overloading operator=.
Overloading operator= changes the behaviour when assigning to objects of your class type. If you want to provide implicit conversion to other types you need to supply a conversion operator, e.g. operator double() const { if (!isNumber) throw something(); return numVal; }
2,075,299
3,324,185
Any good (really good) material on Poco C++?
Hardly can I find any good material on Poco C++. Their documentation cannot replace a good 3rd party tutorial enriched with real-world examples and lit with creativity. Please refer some if you know any. Thanks in advance.
We have now posted lots of introductory slides to our documentation website: http://pocoproject.org/documentation
2,075,655
2,075,770
Dealing with 32-bit code on Snow Leopard 64-bit?
I would like to create an XPCOM plugin for a XULRunner application that I'm building. Since Mozilla only provides a 32-bit build of the XULRunner SDK I have to link with 32-bit libraries. This means that a lot of libraries need to be built by me. Relating this I have a few questions: Can I do a sudo make install for a 32-bit build? Or will it mess up my system? If I can't do it, then what is the workaround? My current solution is including the lib dir in the configure command: CFLAGS=" -arch i386" CCFLAGS=" -arch i386" CXXFLAGS=" -arch i386" LDFLAGS=" -L`pwd`/../libs/gst-plugins-base -L`pwd`/../libs/liboil -arch i386" ./configure Is this the way to go or are the better alternatives?
sudo make install will be fine, IF the installation location does not conflict with the system libraries. In order to allow the built binaries to find your libraries, otool and install_name_tool are your friends. For deployment, these will allow you to make everything bundle-relative even if it has been built for a different install location.
2,075,814
2,075,826
Can C++ export class from DLL
I would like to know if exporting a class ( __declspec(dllexport) in VC++ ) is covered by any kind of standard (ANSI, ISO, ...). I would also like to know if someone has already tried to do the same with the Intel C++ compiler and gcc (MinGW on Windows), and if it is possible to mix DLLs generated by different compilers (I really doubt that it is possible). Thx
No, __declspec is VC++ specific. One of the reasons that VC++ needs that is by default, DLLs do not expose symbols outside the DLL unless explicitly requested to do that. On Posix, shared objects expose all their (not-static) symbols unless explicitly told to hide them. Update Based on your comment that you want to make your code portable, you want to use the preprocessor and do something like this: #ifdef WIN32 #ifdef EXPORT_CLASS_FOO #define CLASS_FOO __declspec(dllexport) #else #define CLASS_FOO __declspec(dllimport) #endif #else #define CLASS_FOO #endif class CLASS_FOO foo { ... }; In the project implementing the class, make sure to add EXPORT_CLASS_FOO as a preprocessor definition (found in Project | NAME Properties.. under C/C++ | Preprocessor | Preprocess Definitions). This way, you'll export them when building the DLL, import them when you are using the DLL and do nothing special under Unix.
2,075,898
2,076,144
Good input validation loop using cin - C++
I'm in my second OOP class, and my first class was taught in C#, so I'm new to C++ and currently I am practicing input validation using cin. So here's my question: Is this loop I constructed a pretty good way of validating input? Or is there a more common/accepted way of doing it? Thanks! Code: int taxableIncome; int error; // input validation loop do { error = 0; cout << "Please enter in your taxable income: "; cin >> taxableIncome; if (cin.fail()) { cout << "Please enter a valid integer" << endl; error = 1; cin.clear(); cin.ignore(80, '\n'); } }while(error == 1);
I'm not a huge fan of turning on exceptions for iostreams. I/O errors aren't exceptional enough, in that errors are often very likely. I prefer only to use exceptions for less frequent error conditions. The code isn't bad, but skipping 80 characters is a bit arbitrary, and the error variable isn't necessary if you fiddle with the loop (and should be bool if you keep it). You can put the read from cin directly into an if, which is perhaps more of a Perl idiom. Here's my take: int taxableIncome; for (;;) { cout << "Please enter in your taxable income: "; if (cin >> taxableIncome) { break; } else { cout << "Please enter a valid integer" << endl; cin.clear(); cin.ignore(numeric_limits<streamsize>::max(), '\n'); } } Apart from only skipping 80 characters, these are only minor quibbles, and are more a matter of preferred style.
2,075,957
2,075,982
Passing a const vector<pointers> to a method but changing value pointed to
I have the following code (only relevant portions shown for sake of brevity - please let me know if I have been too brief):

class my_class
{
public:
    my_class() { m_i = 0; }
    void set(int i) { m_i = i; }
private:
    int m_i;
};

void CallMod()
{
    // create a bunch of my_class* o = new my_class() and store in
    // vector<my_class*> vObject (left out for brevity)
    Mod(vObject);
    // will vObject contain pointers to objects that have m_i == 2 ?
}

void Mod(vector<my_class*> const & vObject)
{
    BOOST_FOREACH(my_class* o, vObject)
    {
        o->set(2);
    }
}

Does this mean that while vObject is const, the modification done by the call to o->set(2) will be retained once Mod returns? Does that indicate that the "const" qualifier will not allow modify operations on vObject (i.e. the vector) but allows modification on the contained pointers to my_class? Did I understand this right? Any duplicate questions that answer this - I couldn't find one - links most appreciated.
The vector will be const. You can only get const_iterators from it. You can't modify it or it's elements. The elements in the container will be const pointers. Unfortunately, a const pointer doesn't mean the element it points to is const, just that the value of the pointer can't change. If you had a vector<my_class> instead of vector<my_class*>, you would not be able to modify the my_class objects inside the const vector (except if you casted away the const-ness, obviously).
2,076,000
2,076,011
removing strings from a vector via boost::bind
I am trying to remove short strings from a vector. std::vector<std::string> vec; // ... vec.erase(std::remove_if(vec.begin(), vec.end(), boost::bind(std::less<size_t>(), boost::bind(&std::string::length, _1), 5), vec.end()); The compiler spits out a very large error message: qwer.cpp:20: error: no matching function for call to 'remove_if(__gnu_cxx::__nor mal_iterator<std::basic_string<char, std::char_traits<char>, std::allocator<char > >*, std::vector<std::basic_string<char, std::char_traits<char>, std::allocator <char> >, std::allocator<std::basic_string<char, std::char_traits<char>, std::al locator<char> > > > >, __gnu_cxx::__normal_iterator<std::basic_string<char, std: :char_traits<char>, std::allocator<char> >*, std::vector<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::basic_strin g<char, std::char_traits<char>, std::allocator<char> > > > >, boost::_bi::bind_t <boost::_bi::unspecified, std::less<unsigned int>, boost::_bi::list2<boost::_bi: :bind_t<unsigned int, boost::_mfi::cmf0<unsigned int, std::basic_string<char, st d::char_traits<char>, std::allocator<char> > >, boost::_bi::list1<boost::arg<1> > >, boost::_bi::value<int> > >, __gnu_cxx::__normal_iterator<std::basic_string< char, std::char_traits<char>, std::allocator<char> >*, std::vector<std::basic_st ring<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::b asic_string<char, std::char_traits<char>, std::allocator<char> > > > >)' The following solution works: vec.erase(std::remove_if(vec.begin(), vec.end(), boost::bind(&std::string::length, _1) < 5), vec.end()); But I am still curious as to what I did wrong in the first version. Thanks!
It looks like you got your parentheses off (there should be two after 5: one to close the bind, one to close the remove_if). I am surprised this didn't give another error message about an invalid token or something though, as the parens are clearly unbalanced (did you remove an extra close paren from the end while preparing for SO?). It looks like this is the case, because if you read the template arguments to remove_if in the error message, the next-to-last one is a boost bind_t, followed by another gnu::iterator.
2,076,209
2,076,261
passing pointers from C to C++ and vice versa
Is there any tips one can give me about passing pointers to structs, doubles, functions, ... from a C program to a C++ library and back?
Assuming you're coding these in two different libraries, static or dynamic (DLLs on Windows, shared libraries on Linux and other *nix variants), the biggest concerns I have are as follows:

They are compiled with the same compiler. While this isn't necessary if all C++ exports are exported with a C-style naming convention, it is necessary for C++-to-C++ calls to class instances between the two C++ modules. This is necessary because different compilers mangle C++ exports differently.

Do not cast a C++ class as a C struct. They aren't the same under the covers, even if the layout of fields is the same. C++ classes have a "v-table" if they have any virtual members; this v-table allows the proper calling of inherited or base class methods. This is true of C to C or C++ to C++ as well as C to C++.

Ensure both use the same byte alignment for the output library. You can only determine this by reading your compiler or development environment's documentation.

Don't mix malloc/free with new/delete. More specifically, don't allocate memory with new and free that memory with "free", or vice versa. Many compilers and operating systems handle memory management differently between the two.

Passing function pointers: so long as they are exposed to/from C++ as ''extern "C"'', this should be fine. (You'll either need to reference your compiler's documentation on how to determine when a header is being compiled as C or C++ to maintain this in one file, or you will need two separate copies of the same function declaration in each project -- I recommend the first.)

Passing doubles: this is a built-in type in both C and C++ and should be handled the same.

If you must share an instance of a C++ object with a C function, and act on it from within C code, expose a set of C-exported helper functions which call the appropriate methods on the C++ object. Pure C code cannot properly call methods on C++ objects.
Pseudocode-ish Example:

// C++ class
class foo
{
public:
    void DoIt();
};

// export helper declarations
extern "C" void call_doit(foo* pFoo);
extern "C" foo* allocate_foo();
extern "C" void deallocate_foo(foo* pFoo);

// implementation
void call_doit(foo* pFoo) { pFoo->DoIt(); }
foo* allocate_foo() { return new foo(); }
void deallocate_foo(foo* pFoo) { delete pFoo; }

// C consumer
void main()
{
    foo* pFoo = allocate_foo();
    call_doit(pFoo);
    deallocate_foo(pFoo);
}
2,076,238
2,076,263
Dispatch Table in C++
Suppose I have something like the following: class Point : geometry { ... Point(double x, double y) { } double distanceTo(Line) { } double distanceTo(Point) { } } class Line : geometry { ... Line(double x, double y, double slopex, double slopey) { } double distanceTo(Line) { } double distanceTo(Point) { } } struct point_t { double x, y; } struct line_t { double x, y, slope_x, slope_y; } struct Geom_Object_t { int type; union { point_t p; line_t l; } geom; } I am wondering what the best way to define a dispatch table for a function like double distanceTo(Geom_Object_t * geom1, Geom_Object_t * geom2) { } The classes are written in C++, but the distanceTo function and the struct must be externed to C thanks
I would make the class diagram different: an abstract base class GeomObject, subclassing geometry (with a getType accessor, as well as pure virtual distanceTo overloads), and concrete subclasses Line and Point of GeomObject (with overrides of the accessor and overloads). The need to "extern C" the double distanceTo function is not a problem, since you're not talking about overloads of that function anyway: you simply want to return geom1.distanceTo(x) (letting the virtual table do that part of the work;-) where x is an appropriate cast, e.g., assuming the class diagram I've explained: extern "C" double distanceTo(Geom_Object_t * geom1, Geom_Object_t * geom2) { if(geom2->getType() == POINT_TYPE) { return geom1->distanceTo(static_cast<Point*>(geom2)); } else { return geom1->distanceTo(static_cast<Line*>(geom2)); } }
2,076,332
2,076,347
Strange Compiler Behavior Regarding Default Constructors in C++
class TestClass { public: TestClass(int i) { i = i; }; private: int i; } class TestClass2 { private: TestClass testClass; } Why does the above code compile fine even when we have not provided a default constructor? Only if someone instantiates TestClass2 elsewhere in the code, do we get a compile error. What is the compiler doing here? Seems strange... Thanks.
When you specify a non default constructor without specifying a default constructor, the default constructor doesn't exist. You aren't attempting to call the default constructor until you try to call it explicitly as you are in TestClass2. If you instead in TestClass2 specified a constructor that initialized TestClass appropriately, you would have no error. i.e. class TestClass2 { TestClass m_testClass; public: TestClass2():m_testClass(2){} }; also use initializer lists wherever possible for performance, and if you call the parameter name and the member variable name the same it can be confusing for others.
2,076,337
2,076,361
Semantic checking of default template parameters
On page 340 of the C++ Programming Language: Special Edition, Stroustrup writes... The semantic checking of a default argument for a template parameter is done if and (only) when that default argument is actually used. In particular, as long as we refrain from using the default template argument Cmp<T> we can compare() strings of a type for which Cmp<X> wouldn't compile (say, because < wasn't defined for an X). This point is crucial in the design of the standard containers, which rely on a template argument to specify default values. I'm having trouble wrapping my head around the usage of this. Why would this rule allow strings of type X to be compared, when normally it wouldn't compile? Wouldn't this behavior be undesirable?
The given example is: template<class T, class C = Cmp<T> > int compare(const String<T>& str1, const String<T>& str2) { // ... compare using C } The idea is that the class template Cmp might not be defined or illegal for some T. In that case, you can pass a custom comparison class template: compare<char, MyComparer>(str1, str2); If you do that, Cmp isn't used and won't be checked if it actually would compile.
2,076,339
2,076,349
Java wrapper around a PE (.exe)
Is there any way to make a Java program (in Windows) that just acts as a wrapper around a PE (.exe), passing all stdin input to the program and writing out to stdout everything that the PE writes out. I need this because the interface for a program only allows Java classes, but I want it to run some code that I've put together in C++. Thanks in advance. edit: portability is 0% important. This only needs to work in Windows and will never be needed to work anywhere else.
Take a look at ProcessBuilder: ProcessBuilder pb = new ProcessBuilder("myCommand", "myArg1", "myArg2"); Map<String, String> env = pb.environment(); env.put("VAR1", "myValue"); env.remove("OTHERVAR"); env.put("VAR2", env.get("VAR1") + "suffix"); pb.directory("myDir"); Process p = pb.start(); and another example of it.
2,076,409
2,076,413
C++ ...when all the arguments have default values
I guess that this is a very absurd/basic question, but still: class m { public: void f(int ***); /***/ } void m::f(int ***a = NULL) { /***/ } The call to f (as well as any function which has default values for all the arguments) doesn't accept 0 arguments. Why? How should I format the declaration then?
That works fine if the function definition is in the header file. The rule is that whoever is calling the function has to 'see' the default value. So, I'm guessing you have the function definition in a separate source file. Assuming that's the case, just put the default in the function declaration (in the class): class m { public: void f(int *** = 0); /***/ }; You'll also need to remove the default value from the function definition as you can only define the default in a single place (even if the value itself is the same).
2,076,460
2,076,476
C++ NetUserAdd() not working?
I posted earlier about how to do this, and got some great replies, and have managed to get the code written based off the MSDN example. However, it does not seem to be working properly. Its printing out the ERROR_ACCESS_DENIED message, but im not sure why as I am running it as a full admin. I was initially trying to create a USER_PRIV_ADMIN, but the MSDN said it can only use USER_PRIV_USER, but sadly neither work. Im hoping someone can spot a mistake or has an idea. Thanks! void AddRDPUser() { USER_INFO_1 ui; DWORD dwLevel = 1; DWORD dwError = 0; NET_API_STATUS nStatus; ui.usri1_name = L"DummyUserAccount"; ui.usri1_password = L"a2cDz3rQpG8"; //ignored by NetUserAdd //ui.usri1_password_age = -1; ui.usri1_priv = USER_PRIV_USER; //USER_PRIV_ADMIN; ui.usri1_home_dir = NULL; ui.usri1_comment = NULL; ui.usri1_flags = UF_SCRIPT; ui.usri1_script_path = NULL; nStatus = NetUserAdd(NULL, dwLevel, (LPBYTE)&ui, &dwError); switch (nStatus) { case NERR_Success: { Msg("SUCCESS!\n"); break; } case NERR_InvalidComputer: { fprintf(stderr, "A system error has occurred: NERR_InvalidComputer\n"); break; } case NERR_NotPrimary: { fprintf(stderr, "A system error has occurred: NERR_NotPrimary\n"); break; } case NERR_GroupExists: { fprintf(stderr, "A system error has occurred: NERR_GroupExists\n"); break; } case NERR_UserExists: { fprintf(stderr, "A system error has occurred: NERR_UserExists\n"); break; } case NERR_PasswordTooShort: { fprintf(stderr, "A system error has occurred: NERR_PasswordTooShort\n"); break; } case ERROR_ACCESS_DENIED: { fprintf(stderr, "A system error has occurred: ERROR_ACCESS_DENIED\n"); break; } } }
Is you os vista or win 7?, if so then you may need to raise your privilege level.
2,076,532
2,076,547
How does sbrk() work in C++?
Where can I read about sbrk() in some detail? How does it exactly work? In what situations would I want to use sbrk() instead of the cumbersome malloc() and new()? btw, what is the expansion for sbrk()?
Have a look at the specification for brk/sbrk. The call basically asks the OS to allocate some more memory for the application by incrementing the previous "break value" by a certain amount. This amount (the first parameter) is the amount of extra memory your application then gets. Most rudimentary malloc implementations build upon the sbrk system call to get blocks of memory that they split up and track. The mmap function is generally accepted as a better choice (which is why mallocs like dlmalloc support both with an #ifdef). As for "how it works", an sbrk at its most simplest level could look something like this: uintptr_t current_break; // Some global variable for your application. // This would probably be properly tracked by the OS for the process void *sbrk(intptr_t incr) { uintptr_t old_break = current_break; current_break += incr; return (void*) old_break; } Modern operating systems would do far more, such as map pages into the address space and add tracking information for each block of memory allocated.
2,076,618
2,076,627
NetUserAdd() to Remote Desktop Group?
Is there anyway to give a newly created user from NetUserAdd() remote desktop access and/or administrative rights? I know it is possible, at least for Remote Desktop, and I have been reading through the MSDN but nothing seems to hint at what is required to be set for it to work.
Those rights are controlled by alias membership. For Remote Desktop access it's the Remote Desktop Users alias, while for administrators it's the Administrators alias (obviously). You can add users to aliases using NetLocalGroupAddMembers. PS: The proper term is "alias", but the Net* functions use "local group" for some reason. EDIT: If you have trouble using "Administrators" or "Remote Desktop Users", try "Builtin\Administrators" and "Builtin\Remote Desktop Users".
2,076,723
2,076,761
Read from file, clear it, write to it
I'm trying to read data from a text file, clear it, and then write to it, in that order using the fstream class. My question is how to clear a file after reading from it. I know that I can open a file and clear it at the same time, but is there some function I can call on the stream to clear its contents?
You should open it, perform your input operations, and then close it and reopen it with the std::fstream::trunc flag set.

#include <fstream>

int main()
{
    std::fstream f;

    f.open("file", std::fstream::in);
    // read data
    f.close();

    f.open("file", std::fstream::out | std::fstream::trunc);
    // write data
    f.close();

    return 0;
}
2,076,753
2,076,778
auto_ptr released without assigning its return value
So what happens to a pointer if you release an object owned by auto_ptr but do not actually assign it to a raw pointer? It seems like it's supposed to be deleted but it never gets the chance to. So does it get leaked out "into the wild"?

void usingPointer(int* p);

std::auto_ptr<int> point(new int);
*point = 3;

usingPointer(point.release());

Note: I don't use auto_ptr anymore, I use tr1::shared_ptr now. This situation just got me curious.
release isn't supposed to delete the owned pointer; from the docs:

Sets the auto_ptr internal pointer to null pointer (which indicates it points to no object) without destructing the object currently pointed by the auto_ptr.

So nothing is leaked in your snippet as long as usingPointer() takes ownership and eventually deletes the pointer it receives; if it doesn't, the object leaks. Also, it's overkill to replace all uses of your auto_ptr with tr1::shared_ptr - you should be using unique_ptr where a shared one isn't necessary.
2,076,817
2,079,921
Help Using NetuserAdd() and NetLocalGroupAddMembers() in C++
So I think I almost got it. I create my dummy account with one function, and wrote a second function to add it to the Remote Desktop group. Problem is, the Administrator account is the one logged in, so I am not sure how to specify what account to add to the group. Here is my code... The user is being created properly...

void AddRDPUser()
{
    USER_INFO_1 ui;
    DWORD dwLevel = 1;
    DWORD dwError = 0;
    NET_API_STATUS nStatus;

    ui.usri1_name = L"BrettXFactor";
    ui.usri1_password = L"XfactorsServer96";
    ui.usri1_priv = USER_PRIV_USER;
    ui.usri1_home_dir = NULL;
    ui.usri1_comment = NULL;
    ui.usri1_flags = UF_SCRIPT;
    ui.usri1_script_path = NULL;

    nStatus = NetUserAdd(NULL, dwLevel, (LPBYTE)&ui, &dwError);
}

But I don't know how to specify to add them to this group since they are not logged in. Any help would be appreciated

void AddToGroup()
{
    LOCALGROUP_MEMBERS_INFO_3 lgmi3;
    DWORD dwLevel = 3;
    DWORD totalEntries = 1;
    NET_API_STATUS nStatus;
    LPCWSTR TargetGroup = L"Remote Desktop Users";

    LPSTR sBuffer = NULL;
    memset(sBuffer, 0, 255);
    DWORD nBuffSize = sizeof(sBuffer);
    if (GetUserNameEx(NameDnsDomain, sBuffer, &nBuffSize) == 0)
    {
        Msg("Failed to add User to Group\n");
        return;
    }
    LPWSTR user_name = (LPWSTR)sBuffer;
    lgmi3.lgrmi3_domainandname = user_name;

    nStatus = NetLocalGroupAddMembers(NULL, TargetGroup, 3, (LPBYTE)&lgmi3, totalEntries);
}
No offense, but the second function is solving the wrong problem. You're not adding the currently logged-in user to the target group; you're adding the user you just created, right? Then there is no reason to call GetUserNameEx at all - just use the name of the new user directly:

lgmi3.lgrmi3_domainandname = L"BrettXFactor";
2,076,874
2,076,892
friend function in template definition
My question is related a bit to this one. I want to overload the operator << for some class and I found two different notations that both work:

template <class T>
class A {
    T t;
public:
    A(T init) : t(init) {}

    friend ostream& operator<< <> (ostream &os, const A<T> &a); // need forward declaration

    //template <class U> friend ostream& operator<< (ostream &os, const A<U> &a);
};

Do I define identical things with different notations? Or is the first version more restrictive, in which instance (in this case only the instance with the same T as my class A) of << is friend of A?
The first version restricts the friendship to the operator<< for the specific type A<T>, while the second makes any operator<< that takes an A<SomeType> a friend. So yes, the first one is more restrictive:

template<class T>
ostream& operator<< (ostream& os, const A<T>& a)
{
    A<double> b(0.0);
    b.t; // compile error with version 1, fine with version 2
    return os;
}

int main()
{
    A<int> a(0);
    cout << a << endl;
}
2,076,936
2,076,943
An Analog of List.h in .Net
I used to use List.h to work with lists in C++, but are there any similar libraries in .Net? Because I can't use List.h for managed types.
Check out the System.Collections and System.Collections.Generic namespaces. There, you'll find classes like ArrayList, List<T>, etc...
2,077,068
2,077,073
Number of items in a byte array
I've the following C++ array: byte data[] = {0xc7, 0x05, 0x04, 0x11 ,0x45, 0x00, 0x00, 0x00, 0x00, 0x00}; How can I know how many items there are in this array?
For byte-sized elements, you can use sizeof(data). More generally, sizeof(data)/sizeof(data[0]) will give the number of elements. Since this issue came up in your last question, I'll clarify that this can't be used when you pass an array to a function as a parameter:

void f(byte arr[])
{
    // This always prints the size of a pointer, regardless of number of elements.
    cout << sizeof(arr);
}

void g()
{
    byte data[] = {0xc7, 0x05, 0x04, 0x11, 0x45, 0x00, 0x00, 0x00, 0x00, 0x00};
    cout << sizeof(data); // prints 10
}
2,077,091
2,077,114
If I Develop a C++ (native) DLL with VS2010 will I need MSVCRT100.dll to be also deployed?
I'm not using any features of the MSVCRT100.dll (I don't even know if there are new features).
Unfortunately, yes. You'll need the VC10 runtime for your platform (x86 or x64) -- keep in mind the runtime may still change before release, though that is highly unlikely since VStudio has been in its final phases for a while now. It is the core runtime library; you can find out more about your dependencies using Dependency Walker (http://www.dependencywalker.com). Or alternatively, try it :-)
2,077,119
2,077,434
what is the best way to synchronize container access between multiple threads in real-time application
I have std::list<Info> infoList in my application that is shared between two threads. These 2 threads are accessing this list as follows:

Thread 1: uses push_back(), pop_front() or clear() on the list (depending on the situation)
Thread 2: uses an iterator to iterate through the items in the list and do some actions.

Thread 2 is iterating the list like the following:

for (std::list<Info>::iterator i = infoList.begin(); i != infoList.end(); ++i)
{
    DoAction(i);
}

The code is compiled using GCC 4.4.2. Sometimes ++i causes a segfault and crashes the application. The error is caused in std_list.h line 143 at the following line:

_M_node = _M_node->_M_next;

I guess this is a race condition. The list might have changed or even been cleared by thread 1 while thread 2 was iterating it.

I used a mutex to synchronize access to this list and all went ok during my initial test. But the system just freezes under stress test, making this solution totally unacceptable. This application is a real-time application and I need to find a solution so both threads can run as fast as possible without hurting the total application throughput.

My question is this: Thread 1 and Thread 2 need to execute as fast as possible since this is a real-time application. What can I do to prevent this problem and still maintain the application performance? Are there any lock-free algorithms available for such a problem?

It's ok if I miss some newly added Info objects in thread 2's iteration, but what can I do to prevent the iterator from becoming a dangling pointer?

Thanks
In general it is not safe to use STL containers this way. You will have to implement a specific method to make your code thread safe. The solution you choose depends on your needs. I would probably solve this by maintaining two lists, one in each thread, and communicating the changes through a lock free queue (mentioned in the comments to this question). You could also limit the lifetime of your Info objects by wrapping them in boost::shared_ptr, e.g.

typedef boost::shared_ptr<Info> InfoReference;
typedef std::list<InfoReference> InfoList;

enum CommandValue
{
    Insert,
    Delete
};

struct Command
{
    CommandValue operation;
    InfoReference reference;
};

typedef LockFreeQueue<Command> CommandQueue;

class Thread1
{
public:
    Thread1(CommandQueue& queue) : m_commands(queue) {}

    void run()
    {
        while (!finished)
        {
            // Process Items and use
            // deleteInfo() or addInfo()
        }
    }

    void deleteInfo(InfoReference reference)
    {
        Command command;
        command.operation = Delete;
        command.reference = reference;
        m_commands.produce(command);
    }

    void addInfo(InfoReference reference)
    {
        Command command;
        command.operation = Insert;
        command.reference = reference;
        m_commands.produce(command);
    }

private:
    CommandQueue& m_commands;
    InfoList m_infoList;
};

class Thread2
{
public:
    Thread2(CommandQueue& queue) : m_commands(queue) {}

    void run()
    {
        while (!finished)
        {
            processQueue();
            processList();
        }
    }

    void processQueue()
    {
        Command command;
        while (m_commands.consume(command))
        {
            switch (command.operation)
            {
            case Insert:
                m_infoList.push_back(command.reference);
                break;
            case Delete:
                m_infoList.remove(command.reference);
                break;
            }
        }
    }

    void processList()
    {
        // Iterate over m_infoList
    }

private:
    CommandQueue& m_commands;
    InfoList m_infoList;
};

int main()
{
    CommandQueue commands;
    Thread1 thread1(commands);
    Thread2 thread2(commands);

    thread1.start();
    thread2.start();

    waitforTermination();
}

This has not been compiled. You still need to make sure that access to your Info objects is thread safe.
2,077,303
2,077,469
QProgressbar and QNetworkReply signals
I'm writing an application in C++ with the Qt Framework. It should download a file over HTTP and display the download progress with a QProgressBar - but I don't get that part to work! Sample code:

QProgressBar* pbar = new QProgressBar();

// calls the website and returns the QNetworkReply*
QNetworkReply* downloader = Downloader->getFile();

connect(downloader, SIGNAL(downloadProgress(qint64,qint64)),
        pbar, SLOT(setValue(int)));

If I run my code, the following error occurs:

QObject::connect: Incompatible sender/receiver arguments
QNetworkReplyImpl::downloadProgress(qint64,qint64) --> QProgressBar::setValue(int)

But the Qt docs for QNetworkReply say:

This signal is suitable to connecting to QProgressBar::setValue() to update the QProgressBar that provides user feedback.

What is wrong with my code and how do I get it working? I'm running Qt 4.5.3 under Linux. Thanks for help and sorry for my english!
Yeah, it's right, you have to set matching arguments in your SIGNAL/SLOT methods... Anyway, in the Qt Examples And Demos, you can find the following code in the example "FTP Client":

connect(ftp, SIGNAL(dataTransferProgress(qint64, qint64)),
        this, SLOT(updateDataTransferProgress(qint64, qint64)));
...
void FtpWindow::updateDataTransferProgress(qint64 readBytes, qint64 totalBytes)
{
    progressDialog->setMaximum(totalBytes);
    progressDialog->setValue(readBytes);
}

You could copy that part and update your progress bar this way: declare a slot such as updateDataTransferProgress(qint64, qint64) in your own class (QProgressBar itself has no slot with that signature), have it call setMaximum() and setValue() on the bar, and connect to it:

connect(downloader, SIGNAL(downloadProgress(qint64,qint64)),
        this, SLOT(updateDataTransferProgress(qint64,qint64)));

I hope it helps you! More info: http://qt.nokia.com/doc/4.6/network-qftp.html
2,077,664
2,077,784
implicit linking DLL question
I started studying DLL's with implicit linking. I don't really fully understand how it works. Please correct me where I'm wrong. I failed to compile the next code (3 modules):

MyLib.h

#ifdef MYLIBAPI
#else
#define MYLIBAPI extern "C" __declspec(dllimport)
#endif

MYLIBAPI int g_nResult;
MYLIBAPI int Add(int nLeft, int nRight);

As far as I understand, this is the header of the DLL. #define MYLIBAPI extern "C" __declspec(dllimport) means that here we are going to declare some functions/variables that will be described in a devoted .cpp file and will be contained in a DLL.

MyLibFile1.cpp

#include <windows.h>
#define MYLIBAPI extern "C" __declspec(dllexport)
#include "MyLib.h"

int g_nResult;

int Add(int nLeft, int nRight)
{
    g_nResult = nLeft + nRight;
    return (g_nResult);
}

So, this is obviously the file where our functions are implemented. This is the part of the DLL, right?

MyExeFile1.cpp

#include <windows.h>
#include <strsafe.h>
#include <stdlib.h>
#include "MyLib.h"

int WINAPI _tWinMain(HINSTANCE, HINSTANCE, LPTSTR, int)
{
    int nLeft = 10, nRight = 25;
    TCHAR sz[100];

    StringCchPrintf(sz, _countof(sz), TEXT("%d + %d = %d"), nLeft, nRight, Add(nLeft, nRight));
    MessageBox(NULL, sz, TEXT("Calculation"), MB_OK);

    StringCchPrintf(sz, _countof(sz), TEXT("The result from the last Add is: %d"), g_nResult);
    MessageBox(NULL, sz, TEXT("Last Result"), MB_OK);

    return (0);
}

So, this is the executable file where we use the functions from the library. The whole thing doesn't work. I tried to put this all into one directory and compile at once. I tried first to compile a DLL from the first two modules (successfully) and then compile the executable (changing the path to the header file).
However, it resulted in 2 errors both times:

error LNK2019: unresolved external symbol _WinMain@16 referenced in function ___tmainCRTStartup
\Visual Studio 2008\Projects\MyExeFile1\Debug\MyExeFile1.exe : fatal error LNK1120: 1 unresolved externals

What's the correct way to do that - what should I change in the code and how should I compile the code (I use VS2008)? Thanks.
#include <tchar.h> to solve the linker error. Your header file should look like this:

#ifdef BUILDING_DLL
#  define MYLIBAPI extern "C" __declspec(dllexport)
#else
#  define MYLIBAPI extern "C" __declspec(dllimport)
#endif

MYLIBAPI int __stdcall Add(int nLeft, int nRight);

Right-click your DLL project in Solution Explorer, Properties, C/C++, Preprocessor, Preprocessor Definitions, add "BUILDING_DLL". Repeat for the Release configuration. You can verify that your DLL properly exports the functions with Dumpbin.exe /exports.

The __declspec(dllimport) declarator is not strictly necessary; it does however make the import more efficient. The __stdcall attribute is not necessary either; it does however make your DLL usable from any language that supports calling DLL exports.
2,077,919
2,078,013
Is there an algorithm for moving ranges?
In C++98, I can copy ranges with the std::copy algorithm.

std::copy(source.begin(), source.end(), destination.begin());

Is there an algorithm in C++0x that moves the elements from source to destination? Or is std::copy somehow overloaded to accept something like rvalue iterators -- is there even such a thing? The algorithm might look something like this:

#include <utility>

template<class InputIterator, class OutputIterator>
OutputIterator mooove(InputIterator first, InputIterator last,
                      OutputIterator result)
{
    for (; first != last; ++first, ++result)
        *result = std::move(*first);
    return result;
}
It seems to be in the latest draft (see section 25.3.2). I have a hard copy of C++03 which is exactly the same as C++98 (sections 25.2.x) where you can see the same algorithms (without 'move' obviously).
2,078,087
2,078,121
Local classes inside inline non-member function produces LNK2005 with MSVC2005
Apparently, MSVC2005 fails to inline local classes' member functions, which leads to LNK2005. I'm facing this LNK2005 error when compiling the following:

common.h content:

inline void wait_what()
{
    struct wtf
    {
        void ffffuuu() {}
    } local;
}

foo.cpp content:

#include "common.h"
void foo() { wait_what(); }

bar.cpp content:

#include "common.h"
void bar() { wait_what(); }

LNK2005.cpp content:

// forward declarations
void foo();
void bar();

int main()
{
    foo();
    bar();
    return 0;
}

The error message is:

error LNK2005: "public void __thiscall `void __cdecl wait_what(void)'::`2'::wtf::ffffuuu(void)" (?ffffuuu@wtf?1??wait_what@@YAXXZ@QAEXXZ) already defined in bar.obj

About local classes, ISO IEC 14882-2003 says:

9.8 Local class declarations

A class can be defined within a function definition; such a class is called a local class. The name of a local class is local to its enclosing scope. The local class is in the scope of the enclosing scope, and has the same access to names outside the function as does the enclosing function. Declarations in a local class can use only type names, static variables, extern variables and functions, and enumerators from the enclosing scope. An enclosing function has no special access to members of the local class; it obeys the usual access rules (clause 11). Member functions of a local class shall be defined within their class definition, if they are defined at all.

Did I miss something? To me, it looks like it is a compiler bug. GCC and MSVC2008 compile it just fine. However, I wonder whether they would really inline the call or just discard one of the two symbols during the link phase. As an interesting note, you can notice that there is even no call to this local class member function.

I wonder whether there is a workaround for MSVC2005. I tried to search MSDN for this typical problem without much success: I wasn't even capable of finding a list of known bugs for the compiler.

Attachment: LNK2005.zip
It was a bug in Visual Studio 2005; it was fixed in VS 2008.
2,078,220
2,078,288
Help in converting AA script to C++
I've this AA script (Cheat Engine scripting language):

[ENABLE]
alloc(newmem,2048) //2kb should be enough
label(returnhere)
label(exit)

00415e19:
jmp newmem
returnhere:

newmem:
mov [00451104],0 //moves 0 to the clock variable
//nop //nops the clock increaser

exit:
jmp returnhere

[DISABLE]
dealloc(newmem)
00415e19:
mov [00451104],eax
//Alt: db A3 04 11 45 00

It's working - stopping the game clock. Now, I'm trying to convert this code to C++. Here's what I did so far:

#include <windows.h>

HWND FindIcyTower()
{
    return FindWindowA(NULL, "Icy Tower v1.4");
}

void WPM(HWND hWnd, int address, byte *data, int dataSize)
{
    DWORD proc_id;
    GetWindowThreadProcessId(hWnd, &proc_id);
    HANDLE hProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, proc_id);
    if (!hProcess)
        return;
    WriteProcessMemory(hProcess, (LPVOID)address, data, dataSize, NULL);
    CloseHandle(hProcess);
}

void EnableClockHack()
{
    byte patch[] = {0xc7, 0x05, 0x04, 0x11, 0x45, 0x00, 0x00, 0x00, 0x00, 0x00};
    WPM(FindIcyTower(), 0x00415E19, patch, 9);
}

void DisableClockHack()
{
    byte orig[] = {0xA3, 0x04, 0x11, 0x45, 0x00};
    WPM(FindIcyTower(), 0x00415E19, orig, 4);
}

but it crashes the game, instead of stopping the clock. Any ideas?
The opcode for moving an immediate (like you are trying to do) is longer (10 bytes: C7 05, a 4-byte address, and a 4-byte immediate) than the game's original code that moves from a register (5 bytes: A3 plus a 4-byte address). The extra 5 bytes are most likely overwriting an instruction or two after the original mov, causing the program to crash. Just like in Cheat Engine, you'll need to allocate new memory in the target process in which to place your modification code so you don't overwrite any of the game's code. Since this memory will be used by the processor for execution, it will need to have the execute bit set. Once you have your mod code injected, you can then change the game's original mov [00451104],eax to a jmp to the mod code start address. The last instruction in the mod code would need to be a jmp back to the instruction just after your replacement jmp, which would resume the game executing. See VirtualAllocEx for allocating memory in other processes, VirtualProtectEx for setting PAGE_EXECUTE on that memory, and of course VirtualFreeEx for releasing the memory when you're done with it.
2,078,290
2,078,331
Efficient Way to Process Simple but Large Files in C++
I'm working on a project that has me a bit over my head performance-wise. I'm tasked with reading large (50MB or so) files of particle coordinates and displaying them. I'd like to use C++ for this because I am learning it already. The coordinate structure in the files is simple, there's just a lot (say a million or so):

1234.5667 5234.1566 //coordinate 1
8532.6123 5152.6612 //coordinate 2
....

Being a noob, I just want to read in the files line by line and store them in vectors, is this wrong? Maybe I should be reading in the whole file first (buffered?), and then parsing the values? Working example:

clock_t c1 = clock();

vector<double> coords;
double coord;

ifstream fin("file.txt");
while (fin >> coord)
{
    coords.push_back(coord);
}

cout << "done. " << coords.size()/2 << " coords read.\n";
cout << "took " << (clock() - c1)/(double)CLOCKS_PER_SEC << " seconds." << endl;

And corresponding output on a 40MB file with 2 million coordinates:

done. 2000000 coords read.
took 1.74 seconds.

Which is fast in my mind, but I'm thinking my mind isn't a good judge.
You might want to preallocate the vector using .reserve if you have an idea of how large the "average" file is. Efficiency is a tricky game. Don't play tricks early on; design a good basic algorithm first. If it's not fast enough, start looking at the IO routines and at whether you're creating any "extra" objects (explicitly or implicitly, especially if you're passing parameters around). In your example, you might want to make a second call to clock() before printing the summary output -- you'd get a slightly more accurate timing! :)
2,078,365
2,078,387
Resources to Write ANSI C++ Code
The last time I heavily used C++ was years ago, and it was strictly done on the Windows platform. Specifically, I used Microsoft Visual Studio as my IDE and developed some habitual patterns around Microsoft's C++ dialect. For example, I used void main() instead of the standard int main(). Now, I am taking a class where it is required to develop programs to be ANSI C++ compliant, and the code will be compiled using g++. I quickly learned that system("PAUSE") does not work in g++ and is probably a Microsoft thing. Does anyone know of any good resources (sites, tutorials, books) where I can learn how to be more ANSI C++ compliant? Thank you.
I would highly recommend these two:

The comp.lang.c++ Usenet newsgroup. If you can get hold of a good Usenet service provider and use a news reader, you should be able to get rid of the spam. I use eternal-september.org, and like it a lot.

Read the C++ FAQ. It has a lot of great information.

Granted, they both are not terribly great if you want a tutorial introduction to C++, but it looks like you already know some C++ and need to learn more, and correct bad habits. From my personal experience, the above two are highly useful in doing exactly that. About comp.lang.c++, make sure you fully read their FAQ and lurk there a while before posting. The same applies to stackoverflow of course, although lurking may not be necessary here.

Using g++, compile your programs with

g++ -ansi -pedantic -Wall -Wextra -Weffc++

and make sure you understand all the warnings. I use:

g++ -Wextra -Wall -Weffc++ -ansi -pedantic -Woverloaded-virtual \
    -Wcast-align -Wpointer-arith
2,078,474
2,078,627
How to use boost normal distribution classes?
I'm trying to use boost::normal_distribution in order to generate a normal distribution with mean 0 and sigma 1. The following code doesn't work, as some values are over or beyond -1 and 1 (and shouldn't be). Could someone point out what I am doing wrong?

#include <boost/random.hpp>
#include <boost/random/normal_distribution.hpp>

int main()
{
    boost::mt19937 rng; // I don't seed it on purpose (it's not relevant)

    boost::normal_distribution<> nd(0.0, 1.0);

    boost::variate_generator<boost::mt19937&,
                             boost::normal_distribution<> > var_nor(rng, nd);

    for (int i = 0; i < 10; ++i)
    {
        double d = var_nor();
        std::cout << d << std::endl;
    }
}

The result on my machine is:

0.213436
-0.49558
1.57538
-1.0592
1.83927
1.88577
0.604675
-0.365983
-0.578264
-0.634376

As you can see, not all values are between -1 and 1. Thank you all in advance!

EDIT: This is what happens when you have deadlines and avoid studying the theory before doing the practice.
The following code doesn't work as some values are over or beyond -1 and 1 (and shouldn't be). Could someone point out what I am doing wrong?

No, this is a misunderstanding of the standard deviation (the second parameter in the constructor1) of the normal distribution.

The normal distribution is the familiar bell curve. That curve effectively tells you the distribution of values: values close to where the bell curve peaks are more likely than values far away (the tail of the distribution). The standard deviation tells you how spread out the values are. The smaller the number, the more concentrated the values are around the mean; the larger the number, the less concentrated.

Picture three bell curves with the same mean: one with a variance (variance is the square of the standard deviation) of 0.2, one with variance 1.0, and one with variance 5.0. The values under the variance-1.0 curve are more spread out relative to the variance-0.2 curve, and the values under the variance-5.0 curve are more spread out still.

So, this explains why the values are not confined to [-1, 1]. It is, however, an interesting fact that about 68% of the values are always within one standard deviation of the mean. So, as an interesting test for yourself, write a program to draw a large number of values from a normal distribution with mean 0 and standard deviation 1 and count the number that are within one standard deviation of the mean. You should get a number close to 68% (68.2689492137% to be a little more precise).

1: From the boost documentation:

normal_distribution(RealType mean = 0, RealType sd = 1);

Constructs a normal distribution with mean mean and standard deviation sd.