Modifying Git Diff files
We branched off the main project about six months ago. Since then, the master branch has had 100,000 more lines of code added or deleted, while we have added or changed about 50,000 lines. It's now time to merge, so we've made a copy of branch A, merged the changes from their branch B into it, and called the result branch C. Now we want to take all the merged changes in branch C and merge them back into our original branch A. But we want to safeguard against something going horribly wrong, so we want to use the preprocessor to enable or disable the new code depending on whether the product release is beta or final. For example:

    #if _BETA_CODE
    ... their new code (merged)
    #else
    ... our old code
    #endif

Does anyone know of a tool that can take the diff of two projects (branch A and the beta branch C) and merge them together, but mark the changes with #ifdef/#else/#endif directives?
diff has the option you need: -D NAME
Mystery HRESULT, 0x889000D
Decimal: 143196173, Hex: 0x889000D. Results from a call to IAudioSessionControl2->GetProcessId(). GetLastError = 126*, Message = "The specified module could not be found". I'm not really sure how to interpret this error. Additionally, I can't find a description of the HRESULT anywhere. The documented return codes are S_OK, E_POINTER, AUDCLNT_E_NO_SINGLE_PROCESS, and AUDCLNT_E_DEVICE_INVALIDATED. Anyone know what this code indicates? *This is an error marshalled across a managed/unmanaged boundary, obtained by Marshal.GetLastError with a Win32Exception providing the message. It could be bogus, but it's what I've got. The HRESULT is pulled out of the unmanaged code directly. Further investigation: FAILED() doesn't seem to think this is an error. However, the out parameter is cleared (set to 0), which doesn't really make sense. Also, GetErrorInfo returns S_FALSE, so there isn't any additional debug info to go on.
This is AUDCLNT_S_NO_CURRENT_PROCESS - I realized too late that it was somehow left out of the Windows 7 SDK headers. The SDK documentation is going to be updated to reflect this. The result means that the session is a cross-process session. The process ID returned is the ID of the first process that created the session, but if you get this result you really can't depend on the process ID, since the process ID isn't unique.
Why are arrays of references illegal?
The following code does not compile:

    int a = 1, b = 2, c = 3;
    int& arr[] = {a,b,c,8};

What does the C++ standard say about this? I know I could declare a class that contains a reference, then create an array of that class, as shown below. But I really want to know why the code above doesn't compile.

    struct cintref
    {
        cintref(const int & ref) : ref(ref) {}
        operator const int &() { return ref; }
    private:
        const int & ref;
        void operator=(const cintref &);
    };

    int main()
    {
        int a=1,b=2,c=3;
        //typedef const int & cintref;
        cintref arr[] = {a,b,c,8};
    }

It is possible to use struct cintref instead of const int & to simulate an array of references.
Answering your question about the standard, I can cite the C++ Standard, §8.3.2/4: "There shall be no references to references, no arrays of references, and no pointers to references." That's because references are not objects: they don't occupy memory themselves, so they don't have an address. You can think of them as aliases for objects. Declaring an array of nothing makes little sense.
What is a good way to think about C++ references?
I've been programming C, mainly in an embedded environment, for years now and have a perfectly good mental model of pointers - I don't have to explicitly think about how to use them, am 100% comfortable with pointer arithmetic, arrays of pointers, pointers-to-pointers etc. I've written very little C++ and really don't have a good way of thinking about references. I've been advised in the past to "think of them as pointers that can't be NULL" but this question shows that that is far from the full story. So for more experienced C++ programmers - how do you think of references? Do you think of them as a special sort of pointer, or as their own thing entirely? What's a good way for a C programmer to get their head round the concept?
I've become used to thinking of a reference as an alias for the original object. EDIT (due to a request in the comments): I think of a reference as a kind of alias because it behaves in exactly the same way as the original variable; no extra manipulation is needed in order to affect the variable being referenced.
How to update all C/C++ identifier names in a project
After frequently coming across the recommendation not to use leading and double underscores in C/C++ identifiers, I decided to fix all our sources once and for all. What I need to do now is convert _Identifier to Identifier_. Should I use a specialized tool for this task, or will regular expressions do the job? In the latter case, what is the pattern to match a C/C++ identifier?
Although I am one of those who frequently points out that names with leading underscores may be reserved, I strongly recommend you don't do this unless you are experiencing problems caused by the names. Making this global change will make your version control system less useful than it might otherwise be, by causing all sorts of spurious diffs. Also, there is a chance of creating duplicate names. Also, there are many underscore-prefixed names that are perfectly valid. One thinks immediately of __LINE__ and __FILE__ etc., and of all the names of non-standard functions that may be provided by your specific implementation. Filtering out those names will be far from trivial (I would say next to impossible); certainly a simple Perl or sed script will not be enough. So instead, I would encourage you to change the names on a file-by-file basis as you make other changes to the code. Change your programming standards to avoid leading-underscore names, and write new code in accordance with the standards.
Win API C++ Control Edit compulsory
I'm developing a GUI in C++ using Dev-C++. I have an edit control like this:

    hctrl = CreateWindowEx(
        0,
        "EDIT",                 /* Class name */
        "",                     /* Title text (none) */
        ES_LEFT | WS_CHILD | WS_VISIBLE | WS_BORDER |
        WS_TABSTOP | ES_NUMBER, /* Style */
        85, 43,                 /* Position */
        90, 25,                 /* Size */
        hwnd,                   /* Parent window */
        (HMENU)ID_TEXTO2,       /* Control identifier */
        hInstance,              /* Instance */
        NULL);                  /* No window-creation data */
    SendMessage(hctrl, WM_SETFONT, (WPARAM)hfont, MAKELPARAM(TRUE, 0));

I want users to enter a phone number in this field. It's a compulsory field, so I need the OK button of this GUI to be disabled until the field is filled in correctly. Alternatively, the button could stay enabled but show a message saying that the empty field has to be filled in. I tried this:

    switch (HIWORD(wParam)) {
    case BN_CLICKED:
        switch (LOWORD(wParam)) {
        ...
        case ID_BOTON9:
            hctrl = GetDlgItem(hwnd, ID_TEXTO2);
            len = GetWindowTextLength(GetDlgItem(hwnd, ID_TEXTO2));
            if (len == 0)
                MessageBox(hctrl, "Número no válido", "Error",
                           MB_ICONEXCLAMATION | MB_OK);
            break;
        ...
        }
        break;
    }

But this doesn't work. Can anybody shed any light on it? Thanks in advance.
Create a validating function that returns a bool indicating whether the input in your window is correct or not. If it returns false, disable the OK button and optionally show a message box or, preferably, trigger a balloon notification on the edit control, so the user isn't annoyed by yet another OK they have to push in order to correct the mistake. Then you can listen for the EN_CHANGE notification coming from the edit box and validate the input with the above function. But first, debug your application to make sure the BN_CLICKED event is handled properly.
change global variables in c++
Is there a way to define a global variable by user input? Let's say I use:

    #include ...
    #define N 12
    double array[N][N];
    void main();
    ...

But I would like the user to be able to choose what N is. Do I have to make N a local variable, or is there a way around this (without macros)? I have a pretty small program, but with a lot of different variables that need the N value. Alternatively, is there a way I could send a group of variables into a function without having to explicitly write them out every time? For example, instead of myfunction(var1, var2, var3, ...), write something like myfunction(Allvariables). Thanks a lot for your answers! This is a great forum.
    int* data;

    int main()
    {
        int n;
        // get n from the user.
        data = new int[n];
        // use data...
        delete[] data;
    }

or just forget pointers forever and use vector!

    std::vector<int> data;
    data.push_back(55); // just push_back data!

EDIT: If you want to use Edouard A.'s way :)

    #include <iostream>
    #include <sstream>
    #include <vector>

    int main(int argc, char* argv[])
    {
        std::vector<double>::size_type dataSize = 0;
        if (argc > 1)
        {
            std::stringstream convertor(argv[1]);
            convertor >> dataSize;
            if (convertor.fail())
            {
                // do whatever you want here in case
                // the user didn't input a number.
            }
        }
        std::vector<double> data(dataSize);
        // use the vector here.
        return 0;
    }

(Note: the stringstream is constructed inside the argc check, so argv[1] is never touched when no argument was given.) I prefer to use lexical_cast in this case, but I am not sure if you have Boost:

    #include <iostream>
    #include <vector>
    #include <boost/lexical_cast.hpp>

    int main(int argc, char* argv[])
    {
        typedef std::vector<double>::size_type vectorSize;
        if (argc < 2)
        {
            // err! The user didn't input anything.
        }
        vectorSize dataSize = boost::lexical_cast<vectorSize>(argv[1]);
        std::vector<double> data(dataSize);
        // use the vector here.
        return 0;
    }
How to get size of check and gap in check box?
I have a check box that I want to accurately measure so I can position controls on a dialog correctly. I can easily measure the size of the text on the control - but I don't know the "official" way of calculating the size of the check box and the gap before (or after) the text.
I'm pretty sure the size of the check box is equal to:

    int x = GetSystemMetrics( SM_CXMENUCHECK );
    int y = GetSystemMetrics( SM_CYMENUCHECK );

You can then work out the area inside by subtracting the following:

    int xInner = GetSystemMetrics( SM_CXEDGE );
    int yInner = GetSystemMetrics( SM_CYEDGE );

I use that in my code and haven't had a problem thus far.
CppCMS vs. C++ Server Pages vs. Wt
I know Wt is the most stable of them, but it's a bit uncomfortable to use. CppCMS sounds good, but how stable is it? How secure is it? I have also encountered C++ Server Pages, but there's nothing about their security in the documentation. Has anyone had experience with any of these libraries who can enlighten me?
First of all, several differences: Wt is a GUI-like framework; it is quite far from traditional web development. So if you want to develop code as if it were a GUI, it is for you. CppCMS is a traditional MVC framework optimized for performance. It has many features like template engines, forms processing, i18n support, sessions, efficient caching and so on, and supports various web server APIs: FastCGI, SCGI and CGI. If you come from the Django world, you will find yourself at home. I'm less familiar with the third project, but it feels more like PHP: you put the C++ code inside templates, and there is no clear separation of View and Controller. Stability I can speak to only for CppCMS: it is stable, and there are applications running on it 24/7; the author's blog and the Wiki with the CppCMS documentation are themselves written in CppCMS. So there shouldn't be major critical bugs. Disclosure: I'm a developer of CppCMS.
fstream linking error in g++ with -std=gnu++0x
I have an application built with the -std=gnu++0x parameter in tdm-mingw g++ 4.4.0 on Windows. It uses an ofstream object, and when I build, it gives the following linking error:

    c:\opt\Noddler/main_func.cpp:43: undefined reference to `std::basic_ofstream<char, std::char_traits<char> >::basic_ofstream(std::string const&, std::_Ios_Openmode)'

It builds properly when using the default older standard. This is the only error, and trying to link with -lstdc++ doesn't help. Has someone experienced this before? Can I get any suggestions?

Edit: I'm creating the ofstream object like this:

    std::string filename = string("noddler\\") + callobj.get_ucid() + "_"
                         + callobj.gram_counter() + ".grxml";
    ofstream grxml_file(string("C:\\CWorld\\Server\\Grammar\\") + filename);
    ...
    grxml_file.close();

It compiles fine, but does not link.
I would guess that you have some code like this:

    string fname = "foo.txt";
    ifstream ifs( fname );

Try changing it to:

    ifstream ifs( fname.c_str() );

This could happen if the header files you are using are somewhat out of whack with the libraries you are linking to. And if this doesn't work, post the code that causes the problem.
MFC radio buttons - DDX_Radio and DDX_Control behavior
I have an MFC dialog with two radio buttons. I have put them in a nice group, and their IDCs follow one another (RB_LEFT, RB_RIGHT). I want to use DDX_Radio so I can access the buttons through an integer value, so in DoDataExchange I call:

    DDX_Radio(pDX, RB_LEFT, mRBLeftRight);

where mRBLeftRight is a member variable of integer type. I also need to edit the buttons' properties, so I wanted to use DDX_Control to map them to member variables mRBLeft and mRBRight (CButton):

    DDX_Control(pDX, RB_LEFT, mRBLeft);
    DDX_Control(pDX, RB_RIGHT, mRBRight);

Now if I do make the call to DDX_Control, whenever DoDataExchange is called the application crashes, because DDX_Control forces RB_LEFT to handle a message that DDX_Radio cannot handle. This part I understand. So I decided not to use DDX_Control (removed the calls from DoDataExchange) and just keep pointers to my radio buttons (CButton*) in my class. In my OnInitDialog function I do the following:

    mRBLeft = (CButton*)GetDlgItem(RB_LEFT);
    mRBRight = (CButton*)GetDlgItem(RB_RIGHT);

Now as long as I don't use mRBLeft it's going to be fine, but if I do: bam, crash in DoDataExchange. The thing that really puzzles me is that if I change my left radio button using

    ((CButton*)GetDlgItem(RB_LEFT))->SetCheck(true);

it works. Sooo, what's the difference? (I know it's a lot of hassle for little, but I just wanna understand the mechanics.)
TBH it's even easier than JC's post leads you to believe.

    DDX_Control(pDX, IDC_RADIO3, m_r3);
    DDX_Control(pDX, IDC_RADIO4, m_r4);
    m_Val = 0;
    DDX_Radio(pDX, IDC_RADIO3, m_Val);

You need to mark the FIRST radio button in the group with WS_GROUP (in this case IDC_RADIO3). You are now good to go, and it will automatically select IDC_RADIO3. Now, to keep m_Val up to date, it is probably worth putting a click handler on all the radio buttons in the group. Inside that function, simply call UpdateData(TRUE); m_Val will now contain the index of the selected radio button in the group.
How to get the Windows Power State Message (WM_POWERBROADCAST) when not running a Win32 GUI app?
So basically I have a plugin dll that is loaded by a GUI-Application. In this dll I need to detect when Windows enters the Hibernate state. I cannot modify the GUI-App. GetMessage only works if the calling thread is the same thread as the UI-Thread, which it is not. Any ideas?
You could create a hidden window in a separate thread from your DLL code and process messages as shown below. You could use this window class for that:

    #pragma once
    #include <windows.h>
    #include <process.h>
    #include <iostream>
    using namespace std;

    static const char *g_AppName = "Test";

    class CMyWindow
    {
        HWND _hWnd;
        int _width;
        int _height;

    public:
        CMyWindow(const int width, const int height)
            : _hWnd(NULL), _width(width), _height(height)
        {
            _beginthread(&CMyWindow::thread_entry, 0, this);
        }

        ~CMyWindow(void)
        {
            SendMessage(_hWnd, WM_CLOSE, NULL, NULL);
        }

    private:
        static void thread_entry(void *p_userdata)
        {
            CMyWindow *p_win = static_cast<CMyWindow *>(p_userdata);
            p_win->create_window();
            p_win->message_loop();
        }

        void create_window()
        {
            WNDCLASSEX wcex;
            wcex.cbSize = sizeof(WNDCLASSEX);
            wcex.style = CS_HREDRAW | CS_VREDRAW;
            wcex.lpfnWndProc = &CMyWindow::WindowProc;
            wcex.cbClsExtra = 0;
            wcex.cbWndExtra = 0;
            wcex.hInstance = GetModuleHandle(NULL);
            wcex.hIcon = LoadIcon(NULL, IDI_APPLICATION);
            wcex.hCursor = LoadCursor(NULL, IDC_ARROW);
            wcex.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
            wcex.lpszMenuName = NULL;
            wcex.lpszClassName = g_AppName;
            wcex.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
            RegisterClassEx(&wcex);

            _hWnd = CreateWindow(g_AppName, g_AppName, WS_OVERLAPPEDWINDOW,
                                 CW_USEDEFAULT, 0, CW_USEDEFAULT, 0,
                                 NULL, NULL, GetModuleHandle(NULL), NULL);
            ShowWindow(_hWnd, SW_SHOWDEFAULT);
            UpdateWindow(_hWnd);
        }

        void message_loop()
        {
            MSG msg = {0};
            while (GetMessage(&msg, NULL, 0, 0))
            {
                if (msg.message == WM_QUIT)
                    break;
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
        }

        static LRESULT WINAPI WindowProc(HWND hWnd, UINT uMsg,
                                         WPARAM wParam, LPARAM lParam)
        {
            switch (uMsg)
            {
            case WM_DESTROY:
                PostQuitMessage(0);
                return 0;
            case WM_POWERBROADCAST:
                // power management code here
                break;
            }
            return DefWindowProc(hWnd, uMsg, wParam, lParam);
        }
    };

Also make sure to include an exit condition.
Force Program / Thread to use 100% of processor(s) resources
I do some C++ programming related to mapping software and mathematical modeling. Some programs take anywhere from one to five hours to produce a result; however, they only consume 50% of my Core Duo. I tried the code on another dual-processor machine, with the same result. Is there a way to force a program to use all available processor resources and memory? Note: I'm using Ubuntu and g++.
A thread can only run on one core at a time. If you want to use both cores, you need to find a way to do half the work in another thread. Whether this is possible, and if so how to divide the work between threads, is completely dependent on the specific work you're doing. To actually create a new thread, see the Boost.Thread docs, or the pthreads docs, or the Win32 API docs. [Edit: other people have suggested using libraries to handle the threads for you. The reason I didn't mention these is because I have no experience of them, not because I don't think they're a good idea. They probably are, but it all depends on your algorithm and your platform. Threads are almost universal, but beware that multithreaded programming is often difficult: you create a lot of problems for yourself.]
c++ standard practice: virtual interface classes vs. templates
I have to make a decision regarding generalization vs. polymorphism. The scenario is standard: I want to make my monolithic, interdependent code more modular, clean and extensible. It is still at a stage where the change of design principle is doable and, as I look at it, highly desirable. Should I introduce purely virtual base classes (interfaces) or templates? I am aware of the basics regarding the template option: less indirection, better performance, more compiling but no late binding, and so on. The STL does not use much (or any?) inheritance, and Boost doesn't either. But I think those are aimed at being really small basic tools that are used every two lines of code by the programmer. I consider the inheritance and late-binding approach more sensible for plug-in style big pieces of code and functionality that should be interchangeable, updateable etc. after deployment or even during runtime. Well, my scenario lies somewhere in between. I don't need to exchange pieces of code on the fly at runtime; compile time is fine. Usually it is also a very central and frequently used piece of functionality, and it is not logically separable into big blocks. This makes me lean somewhat toward the template solution. To me it also looks somewhat cleaner. Are there any big bad implications? Are interfaces still THE way to go, and when are they not? Which complies more with standard C++ style? I know this is bordering on subjective, but I am really interested in some experiences. I don't own a copy of Scott Meyers' Effective C++, so I set my hopes on you guys :)
You're basically right: dynamic polymorphism (inheritance, virtuals) is generally the right choice when the type should be allowed to change at runtime (for example in plugin architectures). Static polymorphism (templates) is a better choice if the type should only change at compile time. The only potential downside to templates is that they generally have to be defined in the headers (which means more code gets #included), and this often leads to slower compile times. But design-wise, I can't see any problems in using templates when possible. Which complies more with standard C++ style? Depends on what "standard C++ style" is. The C++ standard library uses a bit of everything. The STL uses templates for everything, the slightly older IOStreams library uses inheritance and virtual functions, and the library functions inherited from C use neither, of course. These days, templates are by far the most popular choice, and I'd have to say that is the most "standard" approach.
Declare array size in header file without #define's
I have code like the following (simplified version):

    #define MESSAGE_SIZE_MAX 1024
    #define MESSAGE_COUNT_MAX 20

    class MyClass
    {
    public:
        // ... some stuff
    private:
        unsigned char m_messageStorage[MESSAGE_COUNT_MAX * MESSAGE_SIZE_MAX];
    };

I don't like the defines, which are visible to all users of MyClass. How can I do this in C++ style? Thanks, Dima
The trick to get such things into the class definition is:

    public:
        enum { MESSAGE_SIZE_MAX = 1024, MESSAGE_COUNT_MAX = 20 };

I never liked #defines used as constants. It's always good practice to use an enum.
Accessing vectors of structs
I have a struct:

    struct OutputStore
    {
        int myINT;
        string mySTRING;
    };

If I create an array of type OutputStore as follows:

    OutputStore *OutputFileData = new OutputStore[100];

then I can address it with:

    OutputFileData[5].myINT = 27;

But if I use a vector instead of an array:

    vector<OutputStore> *OutputFileData = new vector<OutputStore>(100);

then I get a '... is not a member of 'std::vector<_Ty>'' error if I try:

    OutputFileData[5].myINT = 27;

Since you can access a vector via its index just as you can an array, why does this line not work? I'm just interested to know, as it suggests I'm missing some fundamental bit of understanding. (I changed to a vector because I wanted to push_back, as I do not know the size my data will reach. I've got it to work by using a constructor for the structure and pushing back via that; I just want to understand what is going on here.)
Don't create a pointer to a vector. Just do

    vector<OutputStore> OutputFileData(100);

and you'll be fine. To make your code above work, you'd need to do the following:

    (*OutputFileData)[5].myINT = 27;
Good Tutorial To Learn C++ Development For Game Boy
I'm learning C++ with Deitel's book C++ How to Program, 5/e, and some tutorials and resources on the internet, but I want to learn how to develop Nintendo Game Boy Advance games using C++. I'd like internet resources only, because I don't want to spend money right now on something I only want to try.
Get devkitPro and a good library like TONC. Also, you can get more help at GBADev. Although you can use C++ in GBA development, plain C is recommended. The choice is yours to make, though.
Dealing with an object corrupting the heap
In my application I am creating an object pretty much like this:

    connect()
    {
        mVHTGlove = new vhtGlove(params);
    }

and once I am about to close the application I call this:

    disconnect()
    {
        if (mVHTGlove)
            delete mVHTGlove;
    }

This call always triggers a breakpoint with the following message:

    Windows has triggered a breakpoint in DesignerDynD.exe.
    This may be due to a corruption of the heap, which indicates a bug in
    DesignerDynD.exe or any of the DLLs it has loaded.
    This may also be due to the user pressing F12 while DesignerDynD.exe has focus.
    The output window may have more diagnostic information.

I cannot modify the vhtGlove class to fix the corruption of the heap, as it is an external library provided only in the form of header files, lib files and DLLs. Is there any way to use this class in a clean way?

EDIT: I tried to strip things down to a bare minimum; however, I get the same results. Here you have the ENTIRE code:

    #include "vhandtk/vhtCyberGlove.h"
    #include "vhandtk/vhtIOConn.h"
    #include "vhandtk/vhtBaseException.h"

    using namespace std;

    int main(int argc, char* argv[])
    {
        vhtCyberGlove* testGlove = NULL;
        vhtIOConn gloveAddress("cyberglove", "localhost", "12345", "com1", "115200");

        try
        {
            testGlove = new vhtCyberGlove(&gloveAddress, false);

            if (testGlove->connect())
                cout << "Glove connected successfully" << endl;
            else
                throw vhtBaseException("testGlove()->connect() returned false.");

            if (testGlove->disconnect())
                cout << "Glove disconnected successfully" << endl;
            else
                throw vhtBaseException("testGlove()->disconnect() returned false.");
        }
        catch (vhtBaseException* e)
        {
            cout << "Error with gloves: " << e << endl;
            system("pause");
            exit(0);
        }

        delete testGlove;
        return 0;
    }

It still crashes on deletion of the glove.

EDIT #2: If I just allocate and delete an instance of vhtCyberGlove, it also crashes:

    int main(int argc, char* argv[])
    {
        vhtCyberGlove* testGlove = NULL;
        vhtIOConn gloveAddress("cyberglove", "localhost", "12345", "com1", "115200");

        testGlove = new vhtCyberGlove(&gloveAddress, false);
        delete testGlove; // <<crash!

        return 0;
    }

Any ideas? Thanks! JC
One possibility is that mVHTGlove isn't being initialized to 0. If disconnect were then called without connect ever having been called, you'd be attempting to deallocate a garbage pointer. Boom. Another possibility is that you are actually corrupting the heap a bit before that point, but this is where the corruption actually causes the crash. A good way to check would be to comment out as much code as you can while still getting the program to run, then see if you still get the corruption. If you don't, slowly bring bits of code back in until you see it come back. Some further thoughts (after your edits): you might check whether the API has its own calls for memory management, rather than expecting you to "new" and "delete" objects manually. The reason I say this is that I've seen some DLLs have issues that looked a lot like this when some memory was managed inside the DLL and some outside.
Why do some c++ compilers let you take the address of a literal?
A C++ compiler that I will not name lets you take the address of a literal:

    int *p = &42;

Clearly 42 is an rvalue, and most compilers refuse to do this. Why would a compiler allow it? What could you do with this other than shoot yourself in the foot?
What if you needed a pointer to an integer with the value of 42? :) C++ references are much like automatically dereferenced pointers. One can create a constant reference to a literal, like this:

    const int &x = 42;

It effectively requires the compiler to initialize a pointer with the address of an integer with the value 42, as you might subsequently do this:

    const int *y = &x;

Combine that with the fact that compilers need to have logic to distinguish between a value which has not had its address taken and one which has, so it knows to store it in memory. The first need not have a memory location, as it can be entirely temporary and stored in a register, or it may be eliminated by optimization. Taking the address of the value potentially introduces an alias the compiler can't track and inhibits optimization. So, applying the & operator may force the value, whatever it is, into memory. So, it's possible you found a bug that combined these two effects.
How to prevent inadvertently using delete and free interchangeably in C++?
When, if ever, can delete and free be used interchangeably in C++? My concern is as follows: say there is an incorrect mix-up in the use of malloc/free and new/delete (not to mention new[]/delete[]). However, delete and free happen to do the same thing, so fortuitously this goes uncaught in testing. Later it may lead to a crash in production. How can I enforce some kind of check to prevent this? Can I be warned if the two are mixed up, if not at compile time, then perhaps by some code instrumentation at run time? How would I approach this? The intention of this question is to find ways to avoid an inadvertent mix-up in the usages.
To answer the second question: if you control both malloc/free and operator new/operator delete, you can stash extra information associated with the pointers returned by both that tells you how they were allocated. When a pointer is passed to free or operator delete, check that it was allocated by the matching function. If not, assert, raise an exception, or do whatever it is you do to report the mismatch. Usually this is done by allocating extra memory; e.g., given malloc(size) or operator new(size), you allocate size plus additional space and shove the extra information in there.
What is the difference when using typedef when declaring a struct?
Possible Duplicates: Why should we typedef a struct so often in C? Difference between 'struct' and 'typedef struct' in C++? What is the difference between the following type declarations?

    struct Person
    {
        int age;
    };

    typedef struct
    {
        int age;
    } Person;

I understand that

    struct
    {
        int age;
    } Person;

creates an instance of an unnamed struct called Person, whereas

    struct Person
    {
        int age;
    };

declares a type called Person, but not an instance. But I still don't get what the typedef does.
I think it's the same as in C: typedef creates an alias for a type. In your first case, the name of the type is "struct Person", while in the second case it is just "Person". Usually, when you have to declare self-referencing structures (like lists), you use both, because the typedef has no effect until the structure is defined (unless you make a forward declaration). For example:

    typedef struct node
    {
        void *data;
        struct node *next;
    } TNode, *PTNode;

So now you can declare variables of the same type in the following ways:

    struct node *node1;
    TNode *node2;
    PTNode node3;

The three variables above have the same type: pointer to the node structure.
ambiguous template weirdness
I have the following code (sorry for the large code chunk, but I could not narrow it down any more):

    template <bool B> struct enable_if_c {
        typedef void type;
    };

    template <> struct enable_if_c<false> {};

    template <class Cond>
    struct enable_if : public enable_if_c<Cond::value> {};

    template <typename X> struct Base {
        enum { value = 1 };
    };

    template <typename X, typename Y = Base<X>, typename Z = void>
    struct Foo;

    template <typename X>
    struct Foo<X, Base<X>, void> {
        enum { value = 0 };
    };

    template <typename X, typename Y>
    struct Foo<X, Y, typename enable_if<Y>::type> {
        enum { value = 1 };
    };

    int main(int, char**)
    {
        Foo<int> foo;
    }

But it fails to compile with gcc (v4.3) with:

    foo.cc: In function 'int main(int, char**)':
    foo.cc:33: error: ambiguous class template instantiation for 'struct Foo<int, Base<int>, void>'
    foo.cc:24: error: candidates are: struct Foo<X, Base<X>, void>
    foo.cc:27: error: struct Foo<X, Y, typename enable_if<Y>::type>
    foo.cc:33: error: aggregate 'Foo<int, Base<int>, void> foo' has incomplete type and cannot be defined

OK, so it's ambiguous, but I wasn't expecting that to be a problem, as with specialization there will almost always be some ambiguity. However, this error is only triggered when using the class with enable_if<...>; if I replace it with a class like the following, there is no problem:

    template <typename X, typename Y>
    struct Foo<X, Y, void> {
        enum { value = 2 };
    };

Why does this class not cause an ambiguity while the others do? Aren't the two the same thing for classes with a true ::value? Anyway, any hints as to what I am doing wrong are appreciated. Thanks for the answers; my real problem (getting the compiler to select my first specialization) was solved by replacing

    struct Foo<X, Base<X>, void>

with

    struct Foo<X, Base<X>, typename enable_if< Base<X> >::type>

which seems to work the way I want.
The gist of your question is that you have:

    template <typename X, typename Y, typename Z> struct Foo {};
    template <typename X> struct Foo<X, Base<X>, void> {};                                 // #1
    template <typename X, typename Y> struct Foo<X, Y, typename whatever<Y>::type> {};     // #2

and you're trying to match it to

    Foo<int, Base<int>, void>

Obviously, both specializations match (the first with X = int, the second with X = int, Y = Base<int>). According to the standard, section 14.5.4, if several specializations match, a partial ordering (as defined in 14.5.5.2) among them is constructed and the most specialized one is used. In your case, however, neither one is more specialized than the other. (Simply put, a template is more specialized than another if you can replace each type parameter of the latter template with some type and as a result get the signature of the former. Also, if you have whatever<Y>::type and you replace Y with Base<X>, you get whatever<Base<X> >::type, not void; i.e. no processing is performed.) If you replace #2 with

    template <typename X, typename Y> struct Foo<X, Y, void> {};  // #3

then the candidate set again contains both templates; however, #1 is more specialized than #3 and as such is selected.
1,166,986
1,167,039
Programmatically verify if a UDP port is bound in C/C++
Without attempting to bind it
This should do the trick... int getsockname(int socket, struct sockaddr *restrict address, socklen_t *restrict address_len);
1,167,028
1,167,046
Porting (unmanaged) C++ to C# vs. using the C++ as a DLL in a C# application
I have a code library written in plain old C++ (no .NET/managed code) and I'm porting the application that uses this code to C#. I'm faced with two options: Rewrite the C++ code in C# to achieve the same functionality; Compile the C++ as a DLL and use it as a library in the C# application. I'm relatively new to C# and am pretty unfamiliar with the implications of using an unmanaged code library in a C# app (or if there even are any). The code itself is moderate in size; it will likely take only a few days to rewrite in C#, but my thought is that leaving the code as a it is would allow me to use it in other applications as well (and to compile it on UNIX, etc). What sort of things should I be aware of when making this decision? Are there any major drawbacks or gotchas to using the DLL in the C# application?
I would make a wrapper library using C++/CLI to expose the library to C#. This can leave your library unchanged, and just wrap it for use from .NET, providing the best of both options.
1,167,086
1,167,104
How can i make a tray icon in Linux using C++?
In Windows, I can use Shell_NotifyIcon(). What is the Linux equivalent of this function? Is it different in GNOME (gtk) and KDE (qt)? Can't find any tutorials on the Internet.
Yes, it depends on a desktop environment you're using. In Qt (KDE), you can do this in a cross-platform manner using QSystemTrayIcon. There is even a sample application in Qt. In GTK, you do this with GtkStatusIcon. There is a sample application in this blog post.
1,167,120
1,167,151
OpenGL Alpha blending with wrong color
I am trying to create a simple ray tracer. I have a perspective view which shows the rays visibly for debugging purposes. In my example screenshot below I have a single white sphere to be raytraced and a green sphere representing the eye. Rays are drawn as lines with glLineWidth(10.0f) If a ray misses the sphere it is given color glColor4ub(100,100,100,100); in my initialization code I have the following: glEnable(GL_ALPHA_TEST); glAlphaFunc(GL_GREATER, 0.0f); glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA,GL_SRC_ALPHA); You can see in the screen shot that for some reason, the rays passing between the perspective view point and the sphere are being color blended with the axis line behind the sphere, rather than with the sphere itself. Here is a screenshot: Can anyone explain what I am doing wrong here? Thanks!!
Is it a possibility you cast those rays before you draw the sphere? Then if Z-buffer is enabled, the sphere's fragments simply won't be rendered, as those parts of rays are closer. When you are drawing something semi-transparent (using blending), you should watch the order you draw things carefully. In fact I think you cannot use Z-buffer in any sensible way together with ray-tracing process. You'll have to track Z-order manually. While we are at it OpenGL might not be the best API to visualize ray-tracing process. (It will do so possibly much slower than pure software ray-tracer)
1,167,342
1,167,391
Are ref and out in C# the same as pointers in C++?
I just made a Swap routine in C# like this:

    static void Swap(ref int x, ref int y)
    {
        int temp = x;
        x = y;
        y = temp;
    }

It does the same thing that this C++ code does:

    void swap(int *d1, int *d2)
    {
        int temp = *d1;
        *d1 = *d2;
        *d2 = temp;
    }

So are the ref and out keywords like pointers for C# without using unsafe code?
They're more limited. You can say ++ on a pointer, but not on a ref or out. EDIT Some confusion in the comments, so to be absolutely clear: the point here is to compare with the capabilities of pointers. You can't perform the same operation as ptr++ on a ref/out, i.e. make it address an adjacent location in memory. It's true (but irrelevant here) that you can perform the equivalent of (*ptr)++, but that would be to compare it with the capabilities of values, not pointers. It's a safe bet that they are internally just pointers, because the stack doesn't get moved and C# is carefully organised so that ref and out always refer to an active region of the stack. EDIT To be absolutely clear again (if it wasn't already clear from the example below), the point here is not that ref/out can only point to the stack. It's that when it points to the stack, it is guaranteed by the language rules not to become a dangling pointer. This guarantee is necessary (and relevant/interesting here) because the stack just discards information in accordance with method call exits, with no checks to ensure that any referrers still exist. Conversely when ref/out refers to objects in the GC heap it's no surprise that those objects are able to be kept alive as long as necessary: the GC heap is designed precisely for the purpose of retaining objects for any length of time required by their referrers, and provides pinning (see example below) to support situations where the object must not be moved by GC compacting. If you ever play with interop in unsafe code, you will find that ref is very closely related to pointers. 
For example, if a COM interface is declared like this:

    HRESULT Write(BYTE *pBuffer, UINT size);

the interop assembly will turn it into this:

    void Write(ref byte pBuffer, uint size);

And you can do this to call it (I believe the COM interop stuff takes care of pinning the array):

    byte[] b = new byte[1000];
    obj.Write(ref b[0], b.Length);

In other words, a ref to the first byte gets you access to all of it; it's apparently a pointer to the first byte.
1,167,374
1,167,577
Visual C++ 2008 Express Or Eclipse Ganymede With CDT
I'm learning C++, and I want to know from those who are very good developers now: What is the best IDE, Visual C++ 2008 Express or Eclipse Ganymede with CDT? Remember that I'm using Microsoft Windows Vista Ultimate. Thanks! The book that I'm reading is from Deitel: C++ How to Program, 5/e, because I don't know if the code of the book supports Microsoft Visual C++ 2008 Express.
I'm using both regularly now. Visual Studio is easier and more user-friendly. I have issues with it, though: it forces you to do a number of things for reasons that benefit Microsoft rather than you. It's free, so you can't complain that much. Support is non-existent, but there's Google for help. Eclipse Galileo does some difficult things startlingly well, but does some simple things startlingly badly. For example, when you compile and there's an error, you get no visual indication; you have to open the Problems window to see the errors. DOH! Eclipse is nearly as good as Visual Studio overall and is one of the best choices when working on Linux. The new version of the debugger has some very nice new features as well. Support is poor to non-existent, but there's Google for help. I tried Code::Blocks. The support ranged from not very good to rude, and I found it difficult to do anything serious with.
1,167,518
1,168,087
unix timestamp to boost::posix_time::ptime
I need to convert a double holding the number of seconds since the epoch to ptime. I'm pretty sure there must be an easy way to do this, but I couldn't find anything. Thanks. Edit: The original timestamp is floating point. I can't change it and I don't want to lose the sub-second precision.
After some fiddling around I came up with this:

    ptime(date(1970, 1, 1),
          time_duration(0, 0, 0,
                        time_duration::ticks_per_second() * 1234567890.0987654321))

I'm not sure this is the best solution, but it seems to do what I need.
1,167,532
1,168,407
Staff Web Service Framework
How does Staff web service framework compare to others for c++?
I cannot answer your question in all details, but I have been searching for C++ SOA / web service frameworks for a year now. My favorites (all open source and platform independent, in no particular order) are currently:

GSOAP - http://www.cs.fsu.edu/~engelen/soap.html
Pros:
- proven, reliable, very fast
- extensive documentation, plenty of support
- still maintained: releases every 3-6 months
Cons:
- WSDL/client generators are not free
- programming and embedding it into existing apps isn't so easy
- seems to be more C than C++

Apache AXIS/C++ - http://ws.apache.org/axis/cpp/index.html
Pros:
- proven, already in use in big projects
- (nearly) good documentation
- up to date; maintenance is ensured by the Apache Foundation
- better/nicer C++ API
Cons:
- heavyweight SDK / too much functionality for me
- not easy to implement / much work to embed it into your own app
- maybe not as fast as GSOAP, and a bigger footprint

Staff - http://code.google.com/p/staff/
Pros:
- very small footprint
- easy and fast to integrate
Cons:
- future maintenance is not clear; it's (only) a Google Summer of Code project
- very early stage
- support partly only in Cyrillic

If I had to decide on a framework right now, I would take Apache AXIS: it's proven and reliable and thus ready for productive use. Further, its future maintenance is guaranteed by the Apache Foundation, and I'm free to use, modify and integrate AXIS as I want, even in my commercial applications. I hope that helped a little bit.
1,167,573
1,209,109
QMake 'subdirs' template - executing a target?
I am putting together a build system for my Qt app using a qmake .pro file that uses the 'subdirs' template. This works fine, and allows me to specify the order in which each target is built, so dependencies work nicely. However, I have now added a tool to the project that generates a version number (containing the build date, SVN revision, etc.) that is used by the main app. I can build this version tool first, but once it is built I want to execute it before any more targets are built (it generates a header file containing the version number that the main app includes). For example, my simple qmake file looks something like this:

    TEMPLATE = subdirs
    CONFIG += ordered
    SUBDIRS = version \
              lib \
              tests \
              mainapp

When 'version' is built I want to execute it (passing some arguments on the command line) before 'lib' is built. Does anyone know if this is possible? I see that qmake has a 'system' command that can execute apps, but I don't know how I could leverage this. A related question concerns my unit tests. These live in the 'tests' project and use the QTest framework. I want to execute the tests exe before building 'mainapp', and if the tests fail (i.e. the tests exe doesn't return zero) I want to quit the build process. I realise that qmake is designed to generate makefiles, so I may be wishing for a little too much here, but if anyone can give me some pointers it would be very welcome.
I posted a message on the Qt Interest mailing list about a 'pre build' step, and it can be done using a combination of PRE_TARGETDEPS and QMAKE_EXTRA_TARGETS. Here is the response:

You can specify custom build steps, e.g. this would call makemyversion.sh to create myversion.cpp every time before it builds something:

    versiontarget.target = myversion.cpp
    versiontarget.commands = ./makemyversion.sh
    versiontarget.depends = FORCE

    PRE_TARGETDEPS += myversion.cpp
    QMAKE_EXTRA_TARGETS += versiontarget

I am now using something similar to this to generate my app's version number each time it is built.
1,167,622
1,167,706
Defining Binary Macros in C++
Can someone explain why the following error happens:

    #define bla "\xA"

    char a[2] = {0};
    memcpy(a, bla, 1); // a[0] = 0x0a <- Correct
    //a[1] = bla;      // '=' : cannot convert from 'const char [2]' to 'char'

Thanks, RM
Try:

    #define bla '\xA'

Although that will stop the memcpy working.
1,167,852
1,167,930
How do you multiply a matrix by itself?
This is what I have so far, but I do not think it is right.

    for (int i = 0; i < 5; i++)
    {
        for (int j = 0; j < 5; j++)
        {
            matrix[i][j] += matrix[i][j] * matrix[i][j];
        }
    }
I don't think you can multiply a matrix by itself in place.

    for (i = 0; i < 5; i++) {
        for (j = 0; j < 5; j++) {
            product[i][j] = 0;
            for (k = 0; k < 5; k++) {
                product[i][j] += matrix[i][k] * matrix[k][j];
            }
        }
    }

Even if you use a less naïve matrix multiplication (i.e. something other than this O(n^3) algorithm), you still need extra storage.
1,167,933
1,168,070
I have a server listening on sockets, whats a good approach to service CPU-bound requests with multiple threads?
I've got an application, written in C++, that uses boost::asio. It listens for requests on a socket, and for each request does some CPU-bound work (e.g. no disk or network i/o), and then responds with a response. This application will run on a multi-core system, so I plan to have (at least) 1 thread per core, to process requests in parallel. Whats the best approach here? Things to think about: I'll need a fixed size thread pool (e.g. 1 thread per CPU) If more requests arrive than I have threads then they'll need to be queued (maybe in the o/s sockets layer?) Currently the server is single threaded: It waits for a client request Once it receives a request, it performs the work, and writes the response back, then starts waiting for the next request Update: More specifically: what mechanism should I use to ensure that if the server is busy that incoming requests get queued up? What mechanism should I use to distribute incoming requests among the N threads (1 per core)?
I don't see that there is much to consider that you haven't already covered. If it is truly CPU-bound then adding threads beyond the number of cores doesn't help you much, except if you are going to have a lot of requests. In that case the listen queue may or may not meet your needs, and it might be better to have some threads to accept the connections and queue them up yourself. Check out the listen backlog values for your system and experiment a bit with the number of threads. UPDATE: listen() has a second parameter that is your requested OS/TCP queue depth. You can set it up to the OS limit. Beyond that you need to play with the system knobs. On my current system it is 128, so it is not huge but not trivial either. Check your system and consider whether you realistically need something larger than the default. Beyond that there are several directions you can go. Consider KISS - no complexity before it is actually needed. Start off with something simple, like just having a thread to accept connections (up to some limit) and plop them in a queue. Worker threads pick them up, process, write the result, and close the socket. At the current pace of my distro's Boost updates (and my lack of will to compile it myself) it will be 2012 before I play with ASIO - so I can't help with that.
1,167,956
1,168,014
Resources to learn bitwise programming?
I am a c++ programmer and occasionally I'll come across some code that is using bitwise operators to manipulate things at the bit level, but I have no real understanding of those concepts. So I would like a resource to help me learn it so well that it becomes second nature. Does anyone know of good resources for this? A google search did not provide me much useful information. Maybe I'm not sure what to look for. Thanks!
I absolutely love Henry Warren's Hacker's Delight book. The website for it offers Chapter 2 ("Basics") as a free sample which, despite the name, offers some fairly sophisticated bit manipulation tricks. It might not be the best place to start, but it's fantastic once you start to get comfortable with bit arithmetic.
1,167,980
1,168,285
How can I use templates to determine the appropriate argument passing method?
As I understand it, when passing an object to a function that's larger than a register, it's preferable to pass it as a (const) reference, e.g.:

    void foo(const std::string& bar) { ... }

This avoids having to perform a potentially expensive copy of the argument. However, when passing a type that fits into a register, passing it as a (const) reference is at best redundant, and at worst slower:

    void foo(const int& bar) { ... }

My problem is, I'd like to know how to get the best of both worlds when I'm using a templated class that needs to pass around either type:

    template <typename T>
    class Foo {
    public:
        // Good for complex types, bad for small types
        void bar(const T& baz);

        // Good for small types, but will needlessly copy complex types
        void bar2(T baz);
    };

Is there a template decision method that allows me to pick the correct type? Something that would let me write

    void bar(const_nocopy<T>::type baz);

and would pick the better method depending on the type? Edit: After a fair amount of timed tests, the difference between the two calling times is measurable, but very small. The solution is probably a dubious micro-optimization for my situation. Still, TMP is an interesting mental exercise.
Use Boost.CallTraits:

    #include <boost/call_traits.hpp>

    template <typename T>
    void most_efficient( typename boost::call_traits<T>::param_type t ) {
        // use 't'
    }

(Note the typename: param_type is a dependent type here, so it is required.)
1,168,338
1,168,344
Why STL algorithm find() doesn't work on maps?
Is there any explanation why find() algorithm doesn't work for maps and one have to use map::find instead?
It does work on maps, but you need to compare against a map::value_type (which is std::pair<const map::key_type, map::mapped_type>), not against the key type. map::find, by contrast, takes just a key and returns an iterator to the key/value pair.
1,168,525
1,827,810
C++ GCC4.4 warning: array subscript is above array bounds
I recently upgraded to GCC 4.4 (MinGW TDM build) and now the follow code produces these warning: In member function 'void Console::print(const std::string&)': warning: array subscript is above array bounds Here's the code: void Console::print( const std::string& str ) { std::string newLine( str ); if( newLine.size() > MAX_LINE_LENGTH ) { sf::Uint32 stringSize = newLine.size(); for( sf::Uint32 insertPos = MAX_LINE_LENGTH; insertPos < stringSize; insertPos += MAX_LINE_LENGTH ) { newLine.insert( insertPos, "\n" ); } } StringList tokens; boost::split( tokens, newLine, boost::is_any_of("\n") ); for( StringList::iterator it = tokens.begin(); it != tokens.end(); ++it ) { addLine( *it ); } } Any ideas? It is the optimizations that are doing it... Also it appears to be this line which is causing it: boost::split( tokens, newLine, boost::is_any_of("\n") ); Ah yes, I found it, it is the argument for boost::is_any_of(), by wrapping it in a string() constructor the warning goes away, thank you all for your help :) boost::split( tokens, newLine, boost::is_any_of( string( "\n" ) ) );
Got the same error. As a workaround I replaced is_any_of(" ") with is_from_range(' ', ' ') which might also be slightly more efficient.
1,168,835
1,169,694
Difference Betwen Visual C++ 2008 And g++
I'm learning C++, and when I was testing Microsoft Visual C++ 2008 Express and Eclipse Ganymede (with the g++ compiler as default), I noticed that the same code gets errors in VC++ but compiles in g++ normally, without errors or warnings, and executes normally. I want to know: what is the difference between VC++ syntax and g++ syntax? Thanks!
Please add this to Pavel's answer: If you are developing a cross-platform product, use g++ (MinGW) and stick to the C++ standard. Use POSIX-compliant APIs provided by MinGW where the standard facilities do not serve the purpose. An executable built using MinGW does not need a special runtime to be installed on the target system, but VC++ 2008 requires one, so deployment is simpler with MinGW applications. An advantage of VC++ is its great IDE. The Microsoft compiler seems to perform some Windows-specific optimizations as well. Still, MinGW complemented with the Code::Blocks IDE can give you a great, free, cross-platform C++ development environment.
1,169,010
1,169,083
Win32 Message Handler Error Propagation
I'm writing a (C++) application that utilizes a single dialog box. After setting up a message pump and handler I started wondering how I would go about propagating C++ exceptions to my original code (i.e., the code that calls CreateDialogParam, for instance). Here's a skeleton example of what I mean: BOOL CALLBACK DialogProc(HWND, UINT msg, WPARAM, LPARAM) { if(msg == WM_INITDIALOG) //Or some other message { /* Load some critical resource(s) here. For instnace: const HANDLE someResource = LoadImage(...); if(someResource == NULL) { ---> throw std::runtime_error("Exception 1"); <--- The exception handler in WinMain will never see this! Maybe PostMessage(MY_CUSTOM_ERROR_MSG)? } */ return TRUE; } return FALSE; } //====================== void RunApp() { const HWND dlg = CreateDialog(...); //Using DialogProc if(dlg == NULL) { throw std::runtime_error("Exception 2"); //Ok, WinMain will see this. } MSG msg = {}; BOOL result = 0; while((result = GetMessage(&msg, ...)) != 0) { if(result == -1) { throw std::runtime_error("Exception 3"); //Ok, WinMain will see this. } //Maybe check msg.message == MY_CUSTOM_ERROR_MSG and throw from here? TranslateMessage(&msg); DispatchMessage(&msg); } } //====================== int WINAPI WinMain(...) { try { RunApp(); //Some other init routines go here as well. } catch(const std::exception& e) { //log the error return 1; } catch(...) { //log the error return 1; } return 0; } As you can see, WinMain will handle "Exception 2" and "3", but not "Exception 1". My fundemental question is simple; what would be an elegant way to propagate these sorts of errors to the original "calling" code? I thought of maybe using custom messages and moving the actual throw-statements out to the message pump (in RunApp()), but I'm not sure how that would work yet as I have relatively little experience with Windows in general. Perhaps I'm looking at this situation all wrong. 
How do you usually bail out when something fatal (i.e., an acquisition of a critical resource fails, and there's no chance for recovery) when you're in the message handler?
I would stay away from registering custom Window messages for error-handling purposes. I mean this approach will work fine, but there's not really a need. By the way, your catch handler above should catch all 3 exceptions. Your dialog procedure runs on the same thread that calls CreateDialog. Creating a modeless dialog doesn't spawn off a worker thread. The modeless dialog still gets its messages via your GetMessage/Translate/Dispatch loop. There's a stack frame there, which means when you throw, it should unwind all the way out to your WinMain try/catch block. Is this not the behavior you're seeing?
1,169,028
1,169,070
How do I generate an HTML file using XSL?
As I understand it, using XSL to generate documents has two parts: 1) An XML document which references an XSL stylesheet 2) An XSL stylesheet Let's say that I want to generate a document using XSL, and then send it to a friend. Without relying on the stylesheet being available on the internet, and without including the stylesheet as a separate file, how can I send him the document as a single file and have it just work? I suppose ideally I'd like to send the "transformed" output, not the XML or XSL itself. Can this be done?
You have two options:

1. Do as you suggest and send your friend the transformed document (the output of the XML/XSL transformation)
2. Embed the XML and XSL in a single file as per the XSLT spec (link text)

If you're not sure whether your friend will be able to process the XML/XSL file himself, then you are really only left with option 1.
1,169,066
1,171,155
What are the performance implications of inheriting a class vs including a ptr to an instance of the class as a member variable?
I am working in a class "A" that requires very high performance and am trying to work out the implications either way. If I inherit this new class "B", the memory profile of "A" should increase by that much. If I include just a ptr to "B" as a member variable in "A", then would I be correct in thinking that, as long as "B" is on the heap (ie new'd up) then "A" will remain as small as it is other than the new ptr reference. Is there something else that I haven't thought of? It is preferable architecture wise for me to inherit "B", however it may be preferable performance wise to just stick with it as a member variable.
I believe inheritance would be the best choice here, but every situation is different, so here are the pros and cons of each option.

Inheriting:
- Usually requires a virtual destructor; this causes a slight overhead during deletion of the object, and increases the size by one pointer (this single vptr serves all the virtual functions of the class, which keeps it a lot cleaner).
- Allows you to override a function so that it acts correctly for your object even when it is cast to the 'base' class. (Note: this does incur a small overhead when calling the function.)
- The size of the class increases by the size of the 'base'.
- Any 'base' functions that aren't virtual have NO overhead when calling them.
- Generally a lot cleaner (as long as it makes sense to inherit).

Pointing to a "base" class:
- Requires you to dereference the pointer to the 'base' class every time you want to call one of its functions (i.e. every 'base' function call has overhead).
- Impossible to override functions properly (calling them on the 'base' class invokes the base version, not the container's) without using function pointers, which are possibly more overhead than virtual functions.
- Requires separate allocations for the two objects, which usually has quite severe performance AND memory implications (allocating is slow, and allocations are usually aligned to a certain boundary, increasing their size, as well as storing extra information so the block can be properly deallocated).
- Allows you to NOT allocate the base class at all, saving memory in that case.
- Allows you to change the 'base' object even after you have created the outer object.

Really, I think that as an optimization this is probably one of the least significant items to investigate. Usually a small change to an algorithm (such as adding early escapes) will make an astronomical difference compared to this sort of optimization.
What should guide this decision is the structure of the program, and on that note, I think the best advice I can give you is this: say out loud the relationship the classes have. If you say "Class A is a Class B" then you should inherit. If you say "Class A has a Class B" then you should keep a pointer to Class B within Class A.
1,169,109
1,169,135
How do you execute an INSERT statement using MySQL (in c++)?
Google is failing me (or I am failing Google.) I am simply looking for the function that executes an INSERT statement using the mysql.h library in C++.
Not too familiar with using MySQL in C, but according to what I can see in the mysql.h file, you should call mysql_stmt_prepare to create the statement, and mysql_stmt_execute to execute said prepared statement. (For a one-off statement you can also just pass the full INSERT string to mysql_query.)
1,169,119
1,175,012
Handling windowStateChanged - Tab change in IE extension
I have the following in my IE extension to handle when a tab is switched in IE, etc. [ATL project, VS2008, C++ using IDispEventImpl] SINK_ENTRY_EX(1, DIID_DWebBrowserEvents2, DISPID_WINDOWSTATECHANGED,WindowStateChanged) . . . void WindowStateChanged (DWORD dwFlags, DWORD dwValidFlagsMask); . . . . void CHelloWorld::WindowStateChanged (DWORD dwFlags, DWORD dwValidFlagsMask){ //I don't do anything here right now. Even if I have some piece of code like //ATLTRACE, IE just hangs } Whenever I run my code, the IE stops working (I get a dialog "Internet Explorer has stopped working") What am I doing wrong? What might be missing in my code? Or, Is this a bug in IE8? I'm working on Windows 7 (eval) BTW.
How stupid of me. I missed this: STDMETHODCALLTYPE. So my code is:

    SINK_ENTRY_EX(1, DIID_DWebBrowserEvents2, DISPID_WINDOWSTATECHANGED, WindowStateChanged)
    ...
    void STDMETHODCALLTYPE WindowStateChanged(DWORD dwFlags, DWORD dwValidFlagsMask);
    ...
    void STDMETHODCALLTYPE CHelloWorld::WindowStateChanged(DWORD dwFlags, DWORD dwValidFlagsMask) {
        // I don't do anything here right now. Even if I have some piece of code like
        // ATLTRACE, IE just hangs
    }

Now, IE hangs no more. :)
1,169,258
1,173,532
accessing windows taskbar icons in c++
I am looking for a way to programmatically get the current taskbar icons (not the system tray) for each program that is in the taskbar. I haven't had much luck with MSDN or Google, because all of the results relate to the system tray. Any suggestions or pointers would be helpful. EDIT: I tried Keegan Hernandez's idea but I think I might have done something wrong. The code is below (c++). #include <iostream> #include <vector> #include <windows.h> #include <sstream> using namespace std; vector<string> xxx; bool EnumWindowsProc(HWND hwnd,int ll) { if(ll=0) { //... if(IsWindowVisible(hwnd)==true){ char tyty[129]; GetWindowText(hwnd,tyty,128); stringstream lmlm; lmlm<<tyty; xxx.push_back(lmlm.str()); return TRUE; } } } int main() { EnumWindows((WNDENUMPROC)EnumWindowsProc,0); vector<string>::iterator it; for(it=xxx.begin();it<xxx.end();it++) {cout<< *it <<endl;} bool empty; cin>>empty; }
There are several problems with your code; please see my corrections. Turn the warnings up (or read the build output) on your compiler: it should have warned (or did warn) you about these!

    #include <iostream>
    #include <vector>
    #include <windows.h>
    #include <sstream>
    using namespace std;

    vector<string> xxx;

    // The CALLBACK part is important; it specifies the calling convention.
    // If you get this wrong, the compiler will generate the wrong code and your
    // program will crash.
    // Better yet, use BOOL and LPARAM instead of bool and int. Then you won't
    // have to use a cast when calling EnumWindows.
    BOOL CALLBACK EnumWindowsProc(HWND hwnd, LPARAM ll)
    {
        if (ll == 0) // I think you meant '=='
        {
            //...
            if (IsWindowVisible(hwnd) == true) {
                char tyty[129];
                GetWindowText(hwnd, tyty, 128);
                stringstream lmlm;
                lmlm << tyty;
                xxx.push_back(lmlm.str());
                //return TRUE; What if either if statement fails? You haven't returned a value!
            }
        }
        return TRUE;
    }

    int main()
    {
        EnumWindows(EnumWindowsProc, 0);
        vector<string>::iterator it;
        for (it = xxx.begin(); it < xxx.end(); it++)
        { cout << *it << endl; }
        bool empty;
        cin >> empty;
    }
1,169,405
1,169,648
c++ Input from text file help
My test file has data like this:

    1 2 3 0
    1, 2
    3, 4
    0, 0
    4, 3
    2, 1
    0, 0

How would I separate the data by line but also separate each section of data by the zeros?

    ifstream data("testData.txt");
    string line, a, b;

    while (getline(data, line))
    {
        stringstream str(line);
        istringstream ins;
        ins.str(line);
        ins >> a >> b;
        hold.push_back(a);
        hold.push_back(b);
    }

How do I separate them by the zeros?
So the lines are significant, and the zero-delimited lists of numbers are also significant? Try something like this:

    std::ifstream data("testData.txt");
    std::vector<int> hold;
    std::string line;
    std::vector<std::string> lines;

    while (std::getline(data, line))
    {
        lines.push_back(line);
        std::stringstream str(line);

        // Read an int and the next character as long as there is one
        while (str.good())
        {
            int val;
            char c;
            str >> val >> c;
            if (val == 0)
            {
                do_something(hold);
                hold.clear();
            }
            else
                hold.push_back(val);
        }
    }

This isn't very fault-tolerant, but it works. It relies on a single character (a comma) being present after every number except the last one on each line.
1,169,429
1,169,475
Using Eclipse C++ CDT in Linux
I want to use Eclipse to develop C++ projects on Linux. Particularly I want to modify stable and widely used open source projects using the Eclipse CDT. One of them is Intel Opencv. There are tutorials to create simple c++ projects like here: http://www.ibm.com/developerworks/opensource/library/os-eclipse-stlcdt/ . I have seen plenty of tutorials for using Eclipse CDT to write programs in OpenCv like here: http://opencv.willowgarage.com/wiki/Eclipse http://tommy.chheng.com/development/windows_development_setup.html http://tommy.chheng.com/index.php/2009/05/opencv-with-eclipse-on-windows/ However I want to use Eclipse to make changes to the OpenCv platform itself and compile it from there. I really like many of Eclipse's features like: Syntax highlighting Outline Code assist Code templates Code history etc. Would someone write a small tutorial on how one can make a project in Eclipse from the OpenCv tarball? I would use Eclipse CDT on Linux. Can Eclipse CDT recognize Makefile as it can do for Ant scripts?
In my experience, using cmake is the way to go for OpenCV. You can unzip the OpenCV source code and use cmake to compile it, even after your changes. There are some tools to integrate cmake into Eclipse, but I found them unstable or not very mature, so I use cmake from a terminal to compile and Eclipse for editing the source files.
1,169,455
1,169,485
read a int from a file wrote by java's writeInt method in C++?
How would one go about doing this? Also, is there an easy way to do it? Using a lib like Boost or something?
Java's DataOutputStream writes the int as 4 bytes, high byte first (big-endian). Read it into a char buffer, reinterpret the pointer, and use ntohl to convert the byte order if needed. ifstream is; is.open ("test.txt", ios::binary ); char* pBuffer = new char[4]; is.read (pBuffer, 4); is.close(); int* pInt = reinterpret_cast<int*>(pBuffer); int myInt = ntohl(*pInt); // This is only required if you are on a little endian box delete [] pBuffer;
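The byte order can also be handled portably, without ntohl or the reinterpret_cast, by assembling the value from the four bytes directly; read_java_int is a made-up helper name for this sketch:

```cpp
#include <cstdint>

// Assemble a 32-bit int from 4 bytes in big-endian (network) order,
// the layout Java's DataOutputStream.writeInt produces. This works
// regardless of the host's endianness or alignment rules.
std::int32_t read_java_int(const unsigned char* b)
{
    return (static_cast<std::int32_t>(b[0]) << 24) |
           (static_cast<std::int32_t>(b[1]) << 16) |
           (static_cast<std::int32_t>(b[2]) << 8)  |
            static_cast<std::int32_t>(b[3]);
}

// Example buffers, as writeInt would produce them:
const unsigned char kBytes258[4] = {0x00, 0x00, 0x01, 0x02};
const unsigned char kBytesBig[4] = {0x12, 0x34, 0x56, 0x78};
```

In the file-reading code above, you would call read_java_int on the 4-byte buffer instead of casting it to int*.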
1,169,721
1,175,865
Get ID of excel worksheet in focus using OLE
Using C++ and OLE, how might I go about obtaining the ID of the worksheet that is currently in focus? For example, I have the following code: Variant excelSheets; Variant excelSheet; excelSheets.OleProcedure("Add"); excelSheet= excelSheets.OlePropertyGet("Item", 1); I would like to add a sheet and then get the sheet that was just added so that I may add content. The above code only works if the user doesn't shift focus away from the sheet which is at the far left. Seth
I ended up using OlePropertyGet( "ActiveSheet" ); because when you add a sheet it becomes the ActiveSheet and you can work with it from there. I put an example of what I did below: Variant excelApp; Variant excelBooks; Variant excelWorkBook; Variant excelSheet; Variant excelSheets; try { mExcelApp = Variant::GetActiveObject("Excel.Application"); } catch(EOleSysError& e) { mExcelApp = Variant::CreateObject("Excel.Application"); //open excel } catch(...) { throw; } mExcelApp.OlePropertySet("ScreenUpdating", true); excelBooks = mExcelApp.OlePropertyGet("Workbooks"); excelWorkBook = excelBooks.OlePropertyGet("Item",1); // a worksheet is added which becomes the active sheet excelSheets.OleProcedure( "Add" ); excelSheet = excelWorkBook.OlePropertyGet( "ActiveSheet" );
1,169,732
1,169,843
WM_KEYDOWN : how to use it?
I'm trying to send a key stroke to another application through PostMessage. I am also using Spy++ to try to understand how to send the message, as I do not fully understand its inner workings. In this picture, the first item (selected item) was made with an actual key stroke made by myself. The one with a red ellipse around it (below) was made with the following code: WinApi.PostMessage(InsideLobbyHandle, WinApi.WM_KEYDOWN, (int)WinApi.VK_UP, 1); I guess it must have something to do with the last PostMessage() parameter, but I can't figure out how it really works. I can see in the original key stroke the ScanCode = 48, and in mine it's 0; also fExtended is 1 and in mine it's 0. How can I make it look the same? From http://msdn.microsoft.com/en-us/library/ms646280(VS.85).aspx I cannot understand how the last parameter works.
Simulate keyboard input using SendInput, not PostMessage. You can't simulate keyboard input with PostMessage. There are still some caveats with respect to keyboard state/async-state: The SendInput function does not reset the keyboard's current state. Therefore, if the user has any keys pressed when you call this function, they might interfere with the events that this function generates. If you are concerned about possible interference, check the keyboard's state with the GetAsyncKeyState function and correct as necessary. The lParam for the WM_KEYDOWN Notification is specified in terms of the bits of the field: The first 16 bits are the repeat count The next 8 bits are the scan code The next bit is 1 for extended key, 0 otherwise The next 4 bits are reserved and must be 0 The next bit is always 0 (for WM_KEYDOWN) The next bit is the previous key state The last bit is always 0 (for WM_KEYDOWN) A warning: Any solution you build based around PostMessage is going to be very brittle.
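The bit layout listed above can be packed with plain shifts. A sketch, where make_keydown_lparam is a hypothetical helper rather than a Win32 API:

```cpp
#include <cstdint>

// Pack the WM_KEYDOWN lParam bit layout described above.
std::uint32_t make_keydown_lparam(std::uint32_t repeat,
                                  std::uint32_t scan_code,
                                  bool extended,
                                  bool previously_down)
{
    std::uint32_t lparam = repeat & 0xFFFFu;      // bits 0-15: repeat count
    lparam |= (scan_code & 0xFFu) << 16;          // bits 16-23: scan code
    lparam |= (extended ? 1u : 0u) << 24;         // bit 24: extended key
    // bits 25-28 are reserved; bit 29 (context code) is 0 for a plain press
    lparam |= (previously_down ? 1u : 0u) << 30;  // bit 30: previous state
    // bit 31 (transition state) is always 0 for WM_KEYDOWN
    return lparam;
}
```

For example, a first press of an extended key with scan code 0x48 (what Spy++ shows for the Up arrow) packs to 0x01480001.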
1,169,745
1,170,001
Having a problem with placement-new!
I am having a problem placing an instance of my reference-counting Pointer<Type> class into my Array class. Using the debugger, it seems that the constructor is never called (which messes up the reference-count and causes a segfault down the line)! My push_back function is: void push_back(const T& element) { if (length >= max) reallocate(max > 0 ? max * 2 : 1); new (&data[length]) T(element); ++length; } The reference-count is the same before new is called as after. I'm very sure this is the problem, but I can't figure out why the constructor wouldn't be called. Additionally Pointer::Pointer(...) compiles whether it takes a Pointer<T>& or a const Pointer<T>& (huh?), and has the problem regardless as well! Maybe there are some details on placement new I am not taking into account. If anyone has some thoughts, they'd be much appreciated! edit: [as requested, a relevant excerpt from Pointer] // ... private: T* p; public: //! Constructor Pointer() : p(0) { } //! Copy Constructor template<class X> Pointer(Pointer<X>& other) : p(other.getPointer()) { if (p) p->incrementRef(); } //! Constructor (sets and increments p) Pointer(T* p) : p(p) { if (p) p->incrementRef(); } //! Destructor (decrements p) ~Pointer() { if (p) p->decrementRef(); } // ... I've also implemented operator = for Pointer<T>& and T*, as well as operator -> and operator T*
Your comment and your code are out of sync: //! Copy Constructor template<class X> Pointer(Pointer<X>& other) A constructor generated from a class template is not a copy constructor (there's a footnote in 12.8 [class.copy] that clarifies this), so it won't prevent the compiler from generating a copy constructor for you. That implicitly generated copy constructor takes a const Pointer<T>& and does not touch the reference count; it is also the only viable constructor when you copy from a const source, which is exactly what push_back's const T& parameter gives you. It appears that you need to write an explicit copy constructor in your pointer class to get the desired effect.
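The distinction can be watched at runtime: a constructor template is never the copy constructor, so copying from a const source falls back to the implicitly generated one. A minimal counter-based sketch, with all names made up for the demo:

```cpp
// Counts how often the constructor template (not a copy ctor!) runs.
int template_calls = 0;

template <class T>
struct P {
    P() {}
    template <class X>
    P(P<X>&) { ++template_calls; }   // looks like a copy ctor, but is not one
    // the compiler still implicitly generates P(const P&)
};

int calls_after_const_copy() {
    template_calls = 0;
    P<int> a;
    const P<int>& cref = a;
    P<int> b(cref);                  // only the implicit copy ctor can bind
    (void)b;
    return template_calls;           // stays 0
}

int calls_after_nonconst_copy() {
    template_calls = 0;
    P<int> a;
    P<int> b(a);                     // non-const lvalue: the template is an
    (void)b;                         // exact match and wins overload resolution
    return template_calls;           // becomes 1
}
```

This is exactly why the reference count is never incremented inside push_back: the const T& parameter forces the implicit copy constructor.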
1,169,858
1,171,517
Global memory management in C++ in stack or heap?
If I declare a data structure globally in a C++ application, does it consume stack memory or heap memory? For example: struct AAA { .../.../. ../../.. }arr[59652323];
Since I wasn't satisfied with the answers, and hope that sameer karjatkar wants to learn more than just a simple yes/no answer, here you go. Typically a process has 5 different areas of memory allocated: Code - text segment Initialized data – data segment Uninitialized data – bss segment Heap Stack If you really want to learn what is saved where, then read and bookmark these: COMPILER, ASSEMBLER, LINKER AND LOADER: A BRIEF STORY (look at Table w.5) Anatomy of a Program in Memory
1,169,875
1,169,915
How to mark a list control item as selected?
In a Win32 application I have a dialog with a list control which is defined in the dialog template: CONTROL "",IDC_LIST_Attributes,"SysListView32",LVS_REPORT | LVS_SINGLESEL | LVS_ALIGNLEFT | WS_BORDER | WS_TABSTOP,7,36,246,110 At runtime I retrieve the handle to that control and perform different operations with it - remove all items, add items, etc. It works fine. The problem is I can't programmatically mark an item as selected. I use the following code for that: LVITEM lvItem; lvItem.stateMask = stateMask; lvItem.state = state; SendMessage( windowHandle, LVM_SETITEMSTATE, indexToSelect, (LPARAM)&lvItem); This code runs but no changes happen to the list control. When I click on items with the mouse they are selected all right. What am I missing?
Have you tried the ListView_SetItemState macro? From the MSDN link: Items will only show as selected if the list-view control has focus or the LVS_SHOWSELALWAYS style is used. Another link that may help.
1,170,153
1,170,172
How to find whether the .NET installed or not in the System by using c++?
Is there any API available to find out whether the .NET Framework is installed on the system? Or at least, can anyone give me an idea of how to do this myself in C++, and also how to find the path where .NET is installed, if it is installed? How can I do this? Any help in this regard will be greatly appreciated.
Here's how: try to LoadLibrary() mscoree.dll, then pass the handle of the just-loaded library to GetProcAddress() to retrieve the entry point for GetCORSystemDirectory(), and then try to call GetCORSystemDirectory() via the retrieved pointer. If all steps succeed, .NET is installed. Don't forget error handling - each step can fail and you need to be sure your program is ready for that.
1,170,427
1,170,461
Visual Studio C++ - unresolved symbol __environ
I'm using VS 2008 and compiling my application with Multi-threaded Debug (/MTd). At link time I receive the following error: error LNK2001: unresolved external symbol __environ Where is the symbol defined? Thanks Dima
When you are using /MD (or variants), the symbols _environ and _wenviron are replaced by function calls. You need to track down the code that uses these (obsolete and deprecated) symbols, and make it use the proper function names. A Google search also turns up lots of people with the same problem as you. I found some more detail here: Polling _environ in a Unicode context is meaningless when /MD or /MDd linkage is used. For the CRT DLL, the type (wide or multibyte) of the program is unknown. Only the multibyte type is created because that is the most likely scenario. If you change the use of the symbol _environ to the wide character version _wenviron, your original code will probably work.
1,170,508
1,170,575
C++ string array issue
I'm just learning c++ coming from a Java background. Just playing around with simple classes now, but for some reason the following won't compile, when the same syntax compiles fine elsewhere: class CardDealer { private: string suits[4]; string values[13]; bool cardTaken[4][13]; int getRand(int top); void getValidSuit(int *suit); void getValidCard(int suit,int *value); public: CardDealer(); string dealCard(); void resetDeck(); }; CardDealer::CardDealer(){ suits = {"hearts", "clubs", "spades", "diamonds"}; values = {"ace","two","three","four","five","six","seven","eight","nine","ten","jack","queen","king"}; cardTaken = {{false,false,false,false,false,false,false,false,false,false,false,false,false},{false,false,false,false,false,false,false,false,false,false,false,false,false}, {false,false,false,false,false,false,false,false,false,false,false,false,false},{false,false,false,false,false,false,false,false,false,false,false,false,false}}; } obviously this is just a part of the class so please don't yell at me for missing '}'s compiler chucks a wobbly when it hits the instantiations in the constructor, spits out errors like these: 1>.\CardDealer.cpp(26) : error C2059: syntax error : '{' 1>.\CardDealer.cpp(26) : error C2143: syntax error : missing ';' before '{' 1>.\CardDealer.cpp(26) : error C2143: syntax error : missing ';' before '}' 1>.\CardDealer.cpp(27) : error C2059: syntax error : '{' 1>.\CardDealer.cpp(27) : error C2143: syntax error : missing ';' before '{' 1>.\CardDealer.cpp(27) : error C2143: syntax error : missing ';' before '}' 1>.\CardDealer.cpp(28) : error C2059: syntax error : '{' line 26 is the one where I've instantiated suits (suits = {...) thanks for taking a look guys, much appreciated
Until C++0x, you can only use the aggregate initializer syntax (ie, braces) when declaring an array. Note that this program gives a similar error: int thing[4]; int main () { thing = { 0, 1, 2, 3 }; } You'll have to initialize your array with the somewhat tedious bracket syntax, one element at a time.
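A runnable sketch of the pre-C++0x workaround for the CardDealer case, copying from file-scope const arrays inside the constructor (only the members needed for the demo are shown):

```cpp
#include <algorithm>
#include <string>

// Brace lists only work at the point of declaration, so copy from a
// const array (or loop) inside the constructor instead.
const std::string kSuits[4] = {"hearts", "clubs", "spades", "diamonds"};

class CardDealer {
public:
    std::string suits[4];
    bool cardTaken[4][13];
    CardDealer() {
        std::copy(kSuits, kSuits + 4, suits);
        for (int s = 0; s < 4; ++s)
            for (int v = 0; v < 13; ++v)
                cardTaken[s][v] = false;
    }
};

// Small helpers for demonstration:
std::string suit_at(int i) { CardDealer d; return d.suits[i]; }
bool taken_at(int s, int v) { CardDealer d; return d.cardTaken[s][v]; }
```

With a C++0x-capable compiler, a constructor initializer list such as suits{"hearts", "clubs", "spades", "diamonds"} works directly.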
1,170,522
1,170,563
How to make this template code work?
The template code is like this: template <class type1> struct DefaultInstanceCreator { type1 * operator ()() { return new type1; } }; template < class type1 , class InstanceCreator = DefaultInstanceCreator<type1> > class objectCache { public: objectCache (InstanceCreator & instCreator) :instCreator_ (instCreator) {} type1* Get() { type1 * temp = instCreator_ (); } private: InstanceCreator instCreator_; }; This code works well with a class like this: class A{ public: A(int num){ number = num; } int number; struct CreateInstance { CreateInstance (int value) : value_ (value) {} A * operator ()() const{ return new A(value_); } int value_; }; }; objectCache< A, A::CreateInstance > intcache(A::CreateInstance(2)); A* temp = intcache.Get(); cout << temp->number <<endl; When I tried this template with a type like int or string: objectCache< int > intcache(); int* temp = intcache.Get(); *temp = 3; cout <<temp <<endl; I get "left of '.Get' must have class/struct/union", and I can't find out where the problem is. When I change it to objectCache< int > intcache; I get "'objectCache' : no appropriate default constructor available". Using objectCache< int > intcache(DefaultInstanceCreator<int>()); I get "left of '.Get' must have class/struct/union" too.
Here, you aren't passing the parameter to the intcache constructor: objectCache< int > intcache(); int* temp = intcache.Get(); This makes the first line an instance of the well-known "most vexing parse" of C++: in short, you are declaring intcache as a function which takes no parameters and returns objectCache<int>. Maybe you mean this: objectCache< int > intcache; But probably you wanted to pass a factory: objectCache< int > intcache((DefaultInstanceCreator<int>()));
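The vexing parse and its fix can be reproduced in miniature; Factory and Cache below are made-up stand-ins for DefaultInstanceCreator and objectCache:

```cpp
// A trivial factory object: calling it yields a value.
struct Factory { int operator()() const { return 42; } };

template <class T, class F>
struct Cache {
    F factory;
    Cache(F f) : factory(f) {}
    T get() { return factory(); }
};

int cached_value()
{
    // Cache<int, Factory> c(Factory());  // parsed as a FUNCTION declaration!
    Cache<int, Factory> c((Factory()));   // extra parentheses force an object
    return c.get();
}
```

The extra pair of parentheses around the argument is what disambiguates the declaration of an object from the declaration of a function.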
1,170,746
1,170,817
How to Use GetCORSystemDirectory()?
HANDLE Proc; HMODULE hDLL; hDLL = LoadLibrary(TEXT("mscoree.dll")); if(hDLL == NULL) cout << "No Dll with Specified Name" << endl; else { cout << "DLL Handle" << hDLL << endl<<endl; cout << "Getting the process address..." << endl; Proc = GetProcAddress(hDLL,"GetRequestedRuntimeVersion"); if(Proc == NULL) { FreeLibrary(hDLL); cout << "Process load FAILED" << endl; } else { cout << "Process address found at: " << Proc << endl << endl; LPWSTR st;DWORD* dwlength; ;DWORD cchBuffer=MAX_PATH; HRESULT hr=GetCORSystemDirectory(st,cchBuffer,dwlength); if(hr!=NULL) { printf("%s",hr); } FreeLibrary(hDLL); } } I did it like this to get the .NET installation path, but I am getting linker errors: error LNK2019: unresolved external symbol _GetCORSystemDirectory@12 referenced in function _main dot.obj
define the GetCORSystemDirectory signature: typedef HRESULT ( __stdcall *FNPTR_GET_COR_SYS_DIR) ( LPWSTR pbuffer, DWORD cchBuffer, DWORD* dwlength); initialise the function pointer: FNPTR_GET_COR_SYS_DIR GetCORSystemDirectory = NULL; get a function pointer from mscoree.dll and use: GetCORSystemDirectory = (FNPTR_GET_COR_SYS_DIR) GetProcAddress (hDLL, "GetCORSystemDirectory"); if( GetCORSystemDirectory!=NULL) { ... //use GetCORSystemDirectory ... } As requested: #ifndef _WIN32_WINNT #define _WIN32_WINNT 0x0600 #endif #include <stdio.h> #include <tchar.h> #include <windows.h> typedef HRESULT (__stdcall *FNPTR_GET_COR_SYS_DIR) ( LPWSTR pbuffer, DWORD cchBuffer, DWORD* dwlength); FNPTR_GET_COR_SYS_DIR GetCORSystemDirectory = NULL; int _tmain(int argc, _TCHAR* argv[]) { HINSTANCE hDLL = LoadLibrary(TEXT("mscoree.dll")); GetCORSystemDirectory = (FNPTR_GET_COR_SYS_DIR) GetProcAddress (hDLL, "GetCORSystemDirectory"); if( GetCORSystemDirectory!=NULL) { TCHAR buffer[MAX_PATH]; DWORD length; HRESULT hr = GetCORSystemDirectory(buffer,MAX_PATH,&length); // buffer should contain the folder name // use it.. } return 0; }
1,170,801
1,170,847
Using boost::lambda to compress whitespace in a string
I am using boost::lambda to remove consecutive whitespace characters in a string, leaving only one space. I tried this program. #include <algorithm> #include <iostream> #include <string> #include <boost/lambda/lambda.hpp> int main() { std::string s = "str str st st sss"; //s.erase( std::unique(s.begin(), s.end(), (boost::lambda::_1 == ' ') && (boost::lambda::_2== ' ')), s.end()); ///< works s.erase( std::unique(s.begin(), s.end(), (boost::lambda::_1 == boost::lambda::_2== ' ')), s.end()); ///< does not work std::cout << s << std::endl; return 0; } The commented line works fine, but the uncommented one does not. How is (boost::lambda::_1 == boost::lambda::_2== ' ') different from ((boost::lambda::_1 == ' ') && (boost::lambda::_2== ' ')) in the above program? The uncommented one also gives me a warning: "warning C4805: '==' : unsafe mix of type 'bool' and type 'const char' in operation" Thanks.
In C and C++, a == b == x is very different from (a == x) && (b == x): the former is interpreted as (a == b) == x, which compares a with b, and the result of that comparison (true or false) is then compared with x. In your case x is a space character, and in a typical implementation that uses ASCII its code is 32; comparing it with a boolean value, which converts to either 0 or 1, always gives false.
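The precedence difference is easy to watch in plain functions; chained and intended are names made up for this sketch:

```cpp
// a == b == x parses as (a == b) == x: the bool result of (a == b)
// converts to 0 or 1 and is then compared with x.
bool chained(char a, char b, char x)  { return a == b == x; }

// What the chained form is usually meant to say:
bool intended(char a, char b, char x) { return a == x && b == x; }
```

For three spaces, chained gives false (true == 32 fails), while intended gives true. The chained form only ever succeeds when x happens to equal 0 or 1.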
1,171,060
1,171,076
where to put break in switch/case statement with blocks
When I use braces around a case code block in C++ to localize variables, should I put break inside or outside the block? case FOO: // 'break' inside { int i; doStuff(); break; } case BAR: // 'break' outside { int i; doStuff(); } break; Thanks.
It's a matter of style. I would put break outside the closing brace just to make it more readable.
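Both placements behave identically; the braces only limit the scope of locals declared inside the case. A small runnable comparison, with classify being a made-up example:

```cpp
// Demonstrates that 'break' inside or outside the case's brace block
// produces the same control flow.
int classify(int code) {
    int result = 0;
    switch (code) {
    case 1: {                 // 'break' inside the block
        int i = 10;
        result = i;
        break;
    }
    case 2: {                 // 'break' outside the block
        int i = 20;
        result = i;
    }
    break;
    default:
        result = -1;
    }
    return result;
}
```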
1,171,279
1,171,317
Can I use DLL written in C++ in my C# project?
The problem is - I want to write a template class in C++, put it in a DLL, and then call it from my C# project. Is it possible? Can you please provide any references or articles on about how to do it? EDIT I want DLL to be written in unmanaged C++
A template class cannot be exported. It does not exist until someone instantiates it. You should explicitly instantiate it and then export it as a usual class.
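A sketch of explicit instantiation, with everything in one file for brevity; in a DLL the template definition and the explicit instantiation would live in the library's source file, so the instantiated members become real, exportable symbols:

```cpp
// A toy template (made up for the demo).
template <class T>
class Stack {
    T data[16];
    int top;
public:
    Stack() : top(0) {}
    void push(const T& v) { data[top++] = v; }
    T pop() { return data[--top]; }
};

// Explicit instantiation: this line forces Stack<int>'s members to be
// generated here, even if nothing in this file uses them yet.
template class Stack<int>;

int demo() { Stack<int> s; s.push(7); return s.pop(); }
```

On Windows you would additionally mark the instantiation with __declspec(dllexport) in the DLL build so the C# side's C++/CLI or C wrapper can link against it.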
1,171,365
1,171,390
Should I use an insertion sort or construct a heap to improve performance?
We have large (100,000+ elements) ordered vectors of structs (operator < overloaded to provide ordering): std::vector < MyType > vectorMyTypes; std::sort(vectorMyType.begin(), vectorMyType.end()); My problem is that we're seeing performance problems when adding new elements to these vectors while preserving sort order. At the moment we're doing something like: for ( a very large set ) { vectorMyTypes.push_back(newType); std::sort(vectorMyType.begin(), vectorMyType.end()); ... ValidateStuff(vectorMyType); // this method expects the vector to be ordered } This isn't exactly what our code looks like since I know this example could be optimised in different ways, however it gives you an idea of how performance could be a problem because I'm sorting after every push_back. I think I essentially have two options to improve performance: Use a (hand crafted?) insertion sort instead of std::sort to improve the sort performance (insertion sorts on a partially sorted vector are blindingly quick) Create a heap by using std::make_heap and std::push_heap to maintain the sort order My questions are: Should I implement an insertion sort? Is there something in Boost that could help me here? Should I consider using a heap? How would I do this? Edit: Thanks for all your responses. I understand that the example I gave was far from optimal and it doesn't fully represent what I have in my code right now. It was simply there to illustrate the performance bottleneck I was experiencing - perhaps that's why this question isn't seeing many up-votes :) Many thanks to you Steve, it's often the simplest answers that are the best, and perhaps it was my over analysis of the problem that blinded me to perhaps the most obvious solution. I do like the neat method you outlined to insert directly into a pre-ordered vector. As I've commented, I'm constrained to using vectors right now, so std::set, std::map, etc aren't an option.
Ordered insertion doesn't need boost: vectorMyTypes.insert( std::upper_bound(vectorMyTypes.begin(), vectorMyTypes.end(), newType), newType); upper_bound provides a valid insertion point provided that the vector is sorted to start with, so as long as you only ever insert elements in their correct place, you're done. I originally said lower_bound, but if the vector contains multiple equal elements, then upper_bound selects the insertion point which requires less work. This does have to copy O(n) elements, but you say insertion sort is "blindingly fast", and this is faster. If it's not fast enough, you have to find a way to add items in batches and validate at the end, or else give up on contiguous storage and switch to a container which maintains order, such as set or multiset. A heap does not maintain order in the underlying container, but is good for a priority queue or similar, because it makes removal of the maximum element fast. You say you want to maintain the vector in order, but if you never actually iterate over the whole collection in order then you might not need it to be fully ordered, and that's when a heap is useful.
1,171,664
1,171,715
C++ strange compile linker error
I am trying to compile a large C++ project and I am getting this strange error. I know that it is a linking error but I couldn't figure out what it is exactly. test_oqlquery.o:(.rodata._ZTV8r_MarrayIhE[vtable for r_Marray]+0x8): undefined reference to r_Marray<unsigned char>::~r_Marray()' test_oqlquery.o:(.rodata._ZTV8r_MarrayIhE[vtable for r_Marray<unsigned char>]+0xc): undefined reference tor_Marray::~r_Marray()' test_oqlquery.o:(.rodata._ZTV8r_MarrayIhE[vtable for r_Marray]+0x28): undefined reference to `r_Marray::print_status(std::basic_ostream >&) const' What does this error mean? And is it possible to see the line number where the error is happening? How? I am mainly concerned with what this means: ".rodata._ZTV8r_MarrayIhE[vtable for r_Marray]+0x28" Actually, my error is like this, but I don't know why everything inside angle brackets is missing, so I am replacing them with " "; here is the detailed error (it has something to do with template instantiation as well): test_oqlquery.o:(.rodata._ZTV8r_MarrayIhE[vtable for r_Marray"unsigned char"]+0x8): undefined reference to `r_Marray"unsigned char"::~r_Marray()' I am using g++ 4.3.3. Please excuse me, I cannot submit the whole source code here as it is very large and spans multiple directories. Thanks a lot.
First, linker errors and compiler errors are different things. Since the linker deals with object files rather than source files, compiler errors have a line number but linker errors don't. Second, it seems that you have declared the destructor for r_Marray but have not implemented it anywhere included in the build. The same goes for print_status.
1,171,818
1,172,046
Annotatable Control Flow Graph with Boost?
I have a control flow graph representing a single procedure of my intermediate language code. Nodes and edges are annotated via vertex/edge properties and contain instructions and branch information, respectively. Now I want to perform data flow analysis on this graph and feed the graph into each data flow analysis module. Each module should be able to annotate the CFG with its own data. Problems I need to solve: I don't know upfront how many annotations are introduced by the data flow analysis modules (because I will implement additional analysis modules in the future) I don't know anything about the type of annotation introduced by a specific data flow analysis module Each data flow analysis module should exist independently from the other modules, i.e. module A shouldn't be concerned about the annotations introduced by module B Do you see any way to realize all of the above requirements? Any comments or advice are highly appreciated. Update: To be more specific, I basically want to decouple my annotations from the Graph type. When using the usual vertex/edge properties the Graph type itself is always "polluted" (and is therefore dependent on the vertex/edge property types) by the contained property types.
See the "Using Property Maps" chapter of the documentation of the boost graph library. Especially the "Constructing an Exterior Property Map" section. If that doesn't answer your question, could you clarify what is missing?
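If it helps, the exterior-property-map idea can be sketched even without Boost: each analysis module keeps its annotations in its own map keyed by vertex descriptor, so the graph type never depends on the annotation types. A minimal illustration with made-up module names:

```cpp
#include <map>

// The graph only needs stable vertex identifiers; the annotations live
// entirely inside each module, decoupled from the graph type.
typedef int Vertex;

struct LivenessModule {                 // hypothetical analysis module A
    std::map<Vertex, int> live_count;
    void annotate(Vertex v, int n) { live_count[v] = n; }
};

struct ReachabilityModule {             // hypothetical, independent module B
    std::map<Vertex, bool> reachable;
    void annotate(Vertex v, bool r) { reachable[v] = r; }
};

int liveness_of(Vertex v) {
    LivenessModule m;
    m.annotate(v, 3);
    return m.live_count[v];
}

bool reach_of(Vertex v) {
    ReachabilityModule m;
    m.annotate(v, true);
    return m.reachable[v];
}
```

Boost's exterior property maps wrap exactly this pattern behind the property-map interface expected by its graph algorithms.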
1,171,923
1,172,031
Prototype for function that allocates memory on the heap (C/C++)
I'm fairly new to C++ so this is probably somewhat of a beginner question. It regards the "proper" style for doing something I suspect to be rather common. I'm writing a function that, in performing its duties, allocates memory on the heap for use by the caller. I'm curious about what a good prototype for this function should look like. Right now I've got: int f(char** buffer); To use it, I would write: char* data; int data_length = f(&data); // ... delete[] data; However, the fact that I'm passing a pointer to a pointer tips me off that I'm probably doing this the wrong way. Anyone care to enlighten me?
In C, that would have been more or less legal. In C++, functions typically shouldn't do that. You should try to use RAII to guarantee memory doesn't get leaked. And now you might say "how would it leak memory, I call delete[] just there!", but what if an exception is thrown at the // ... lines? Depending on what exactly the functions are meant to do, you have several options to consider. One obvious one is to replace the array with a vector: std::vector<char> f(); std::vector<char> data = f(); int data_length = data.size(); // ... //delete[] data; and now we no longer need to explicitly delete, because the vector is allocated on the stack, and its destructor is called when it goes out of scope. I should mention, in response to comments, that the above implies a copy of the vector, which could potentially be expensive. Most compilers will, if the f function is not too complex, optimize that copy away, so this will be fine (and if the function isn't called too often, the overhead won't matter anyway). But if that doesn't happen, you could instead pass an empty array to the f function by reference, and have f store its data in that instead of returning a new vector. If the performance of returning a copy is unacceptable, another alternative would be to decouple the choice of container entirely, and use iterators instead: // definition of f template <typename iter> void f(iter out); // use of f std::vector<char> vec; f(std::back_inserter(vec)); Now the usual iterator operations can be used (*out to reference or write to the current element, and ++out to move the iterator forward to the next element) -- and more importantly, all the standard algorithms will now work. You could use std::copy to copy the data to the iterator, for example. This is the approach usually chosen by the standard library (i.e. it is a good idea ;)) when a function has to return a sequence of data. Another option would be to make your own object taking responsibility for the allocation/deallocation: struct f { // simplified for the sake of example. In the real world, it should be given a proper copy constructor + assignment operator, or they should be made inaccessible to avoid copying the object f(){ // do whatever the f function was originally meant to do here size = ??? data = new char[size]; } ~f() { delete[] data; } int size; char* data; }; f data; int data_length = data.size; // ... //delete[] data; And again we no longer need to explicitly delete because the allocation is managed by an object on the stack. The latter is obviously more work, and there's more room for errors, so if the standard vector class (or other standard library components) do the job, prefer them. This example is only if you need something customized to your situation. The general rule of thumb in C++ is that "if you're writing a delete or delete[] outside a RAII object, you're doing it wrong. If you're writing a new or new[] outside a RAII object, you're doing it wrong, unless the result is immediately passed to a smart pointer"
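A runnable sketch of the vector-returning shape of f suggested above; the payload is made up for the demo:

```cpp
#include <string>
#include <vector>

// The buffer's lifetime is tied to an automatic object, so no delete[]
// is needed and an exception in the caller cannot leak it.
std::vector<char> f()
{
    std::string payload = "hello";   // stand-in for whatever f computes
    return std::vector<char>(payload.begin(), payload.end());
}

int data_length() { return static_cast<int>(f().size()); }
char first_byte() { return f()[0]; }
```

The caller's original pattern collapses to two lines: the vector replaces both the char* and the separate length return.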
1,171,988
1,172,428
Sound Processing - Beat Matching Music Player on Android
So I want to make a new music player for Android, it's going to be open source and if you think this idea is any good feel free to let me know and maybe we can work on it. I know it's possible to speed up and slow down a song and normalize the sound so that the voices and instruments still hit the same pitch. I'd like to make a media player for Android aimed at joggers which will: Beat match successive songs Maintain a constant beat for running to Beat can be established via accelerometer or manually Alarms and notifications automatically at points in the run (Geo located or timer) Now I know that this will fall down with many use cases (Slow songs sounding stupid, beat changes within song getting messed up) but I feel they can be overcome. What I really need to know is how to get started writing an application in C++ (using the Android NDK) which will perform the analysis and adjust the stream. Will it be feasible to do this on the fly? What approach would you use? A server that streams to the phone? Maybe offline analysis of the songs on a desktop that gets synched to your device via tether? If this is too many questions for one post I am most interested in the easiest way of analysing the wave of an MP3 to find the beat. On top of that, how to perform the manipulation, to change the beat, would be my next point of interest. I had a tiny crappy mp3 player that could do double speed on the fly so I'm sure it can be done! Gav
This is technologically feasible on a smartphone-type device, although it is extremely difficult to achieve good-sounding pitch-shifting and time-stretching effects even on a powerful PC and not in realtime. Pitch-shifting and time-stretching can be achieved on a relatively powerful mobile device in realtime (I've done it in .Net CF on a Samsung i760 smartphone) without overly taxing the processor (the simple version is not much more expensive than ordinary MP3 playback). The effect is not great, although it doesn't sound too bad if the pitch and time changes are relatively small. Automatic determination of a song's tempo might be too time-consuming to do in real time, but this part of the process could be performed in advance of playback, or it could be done on the next song well before the current song is finished playing. I've never done this myself, so I dunno. Everything else you mentioned is relatively easy to do. However: I don't know how easy Android's API is regarding audio output, or even whether it allows the low-level access to audio playback that this project would require.
1,172,086
1,172,208
GD Image library: Range of colour component arguments for TrueColor images
I'm trying to output a TrueColor image using GD (specifically bgd.dll) from a C++ program under windows. The API (or at least the examples) seem to suggest that the range of the integer RGB arguments for gdResolveColor spans the values 0-255. Is this correct? I've experimented with higher values and gotten strange results but this could well be to due my own lack of understanding.
That is correct. True color uses one byte for each color component (red, green and blue). The range of a byte is 0 to 255, hence the range indicated in the GD documentation. So 16,777,216 (2^24 or 256^3) different colors can be specified using these 3 bytes (24 bits). I'm not sure how GD handles invalid inputs (i.e. a color component over 255). It likely masks the input and you end up with your submitted value modulo 256.
1,172,281
1,172,419
Forward declaration in multiple source directories; template instantiation
I am looking for a nice book or reference material which deals with forward declaration of classes, esp. when sources are in multiple directories, e.g. class A in dirA is forward declared in class B in dirB. How is this done? Also, any material on template issues, advanced uses and instantiation problems would be highly appreciated. Thanks.
Forward declarations have nothing to do with the directory structure of your project. You can forward declare something even not existing in your project. They are mostly used to resolve cyclic references between classes and to speed up compilation when the complete class declaration is not necessary, and the corresponding #include can be replaced with a forward declaration. To determine when a forward declaration is sufficient, the sizeof() query can usually answer the question. For example, class Wheel; class Car { Wheel wheels[4]; }; In this declaration, a forward declaration cannot be used since the compiler cannot determine the size of a Car: it doesn't know how much data the wheels contain. In other words, sizeof(Car) is unknown. Also regarding templates, forward declared classes cannot be used as template parameters if the template class contains data members of the template parameter (but their pointers can be). For instance, template<class T> class pointer { T *ptr; }; class Test; pointer<Test> testpointer; is legal but std::vector<Test> testvector will not compile. Because of the aforementioned limitations, forward declared classes are generally used as pointers or references. I don't know if there's a book on the subject but you can see this section on c++ faq lite.
1,172,734
1,172,800
How to Create a process in c++ to execute exe?
I have to execute an .exe which is available on some drive. How can I do this using C++? I am doing it like this: #include <stdio.h> #include <conio.h> #include <windows.h> void main() { STARTUPINFO si; PROCESS_INFORMATION pi; ZeroMemory( &si, sizeof(si) ); si.cb = sizeof(si); ZeroMemory( &pi, sizeof(pi) ); if(!CreateProcess(L"c:\\DOTNET.exe",NULL,NULL, NULL,FALSE, 0,NULL,NULL,&si,&pi ) ) { printf( "CreateProcess failed (%d).\n", GetLastError() ); } else { printf("Process Creation Success"); } WaitForSingleObject( pi.hProcess, INFINITE ); CloseHandle( pi.hProcess ); CloseHandle( pi.hThread ); getch(); } But every time, it is showing this error: process creation failed with error code 2 (i.e. cannot find the path specified). But I placed DOTNET.exe at c:\DOTNET.exe only. What is wrong in this code?
I've just tested your code and it's working here with : if(!CreateProcess(L"C:\\Program Files\\Mozilla Firefox\\firefox.exe",NULL,NULL, NULL,FALSE, 0,NULL,NULL,&si,&pi ) ) A C++/Win32 solution for your C/Win32 code :) void ExecuteAndWait (wstring toto) { STARTUPINFO si = { sizeof(si) }; PROCESS_INFORMATION pi; vector<TCHAR> V( toto.length() + 1); for (int i=0;i< (int) toto.length();i++) V[i] = toto[i]; CreateProcess(NULL, &V[0],0, 0, FALSE, 0, 0, 0, &si, &pi); WaitForSingleObject(pi.hProcess, INFINITE); CloseHandle(pi.hProcess); CloseHandle(pi.hThread); }
1,172,792
1,172,820
C++ pure virtual class question
I'm attempting to write a simple B+tree implementation (very early stages). I've got a virtual class with a few functions. Needless to say, I'm very new to these strategies and am running into all sorts of problems. I'm attempting to create a root node within the BTree class. The root node will be a BBranch, which should inherit from BNode? I'm getting errors btree.cpp: In constructor 'BTree::BTree()': btree.cpp:25: error: cannot declare variable 'root' to be of abstract type 'BBranch' btree.cpp:12: note: because the following virtual functions are pure within 'BBranch': btree.cpp:9: note: virtual void BNode::del(int) btree.cpp: In member function 'void BTree::ins(int)': btree.cpp:44: error: 'root' was not declared in this scope The code is this using namespace std; class BNode { public: int key [10]; int pointer [11]; virtual void ins( int num ) =0; virtual void del( int num ) =0; }; class BBranch: public BNode { public: void ins( int num ); }; class BLeaf: public BNode { public: void ins( int num ); }; class BTree { public: BTree() { BBranch root; }; void ins( int num ); }; // Insert into branch node void BBranch::ins( int num ){ // stuff for inserting specifically into branches }; // Insert for node void BTree::ins( int num ){ root.ins( num ); }; int main(void){ return 0; } Thank you for any information you can give me.
The compiler seems to be pretty clear about what's wrong. You can't declare a BBranch because there's still a pure virtual function in that class. You defined ins, but del is still undefined. Define that in BBranch (and BLeaf) and you should be fine. You can't declare instances of abstract classes, which are classes that have pure virtual functions. Furthermore, you have declared root in the constructor. You meant for it to be a member variable, which means it needs to be declared beside the constructor, not inside. class BTree { public: BTree() { }; BBranch root; void ins( int num ); };
1,172,867
1,172,953
Boost unit test failure detected in wrong test suite
I'm learning how to use the Boost Test Library at the moment, and I can't seem to get test suites to work correctly. In the following code 'test_case_1' fails correctly but it's reported as being in the Master Test Suite instead of 'test_suite_1'. Anyone know what I'm doing wrong? #define BOOST_AUTO_TEST_MAIN #include <boost/test/auto_unit_test.hpp> BOOST_AUTO_TEST_SUITE(test_suite_1); BOOST_AUTO_TEST_CASE(test_case_1) { BOOST_REQUIRE_EQUAL(1, 2); } BOOST_AUTO_TEST_SUITE_END(); edit: Ovanes' answer led me to understand the suite hierarchy better - in this case test_suite_1 is a sub-suite of the root suite which by default is named 'Master Test Suite'. The default logging only shows the root suite, which isn't what I expected but I can deal with it :) You can set the root suite name by defining BOOST_TEST_MODULE - so an alternative version of the above example which gives the expected error message is: #define BOOST_TEST_MODULE test_suite_1 #define BOOST_AUTO_TEST_MAIN #include <boost/test/auto_unit_test.hpp> BOOST_AUTO_TEST_CASE(test_case_1) { BOOST_REQUIRE_EQUAL(1, 2); }
It depends how you configure your logger to produce the report. For example passing to your example --log_level=all will result in the following output: Running 1 test case... Entering test suite "Master Test Suite" Entering test suite "test_suite_1" Entering test case "test_case_1" d:/projects/cpp/test/main.cpp(9): fatal error in "test_case_1": critical check 1 == 2 failed [1 != 2] Leaving test case "test_case_1" Leaving test suite "test_suite_1" Leaving test suite "Master Test Suite" *** 1 failure detected in test suite "Master Test Suite" Here is the link to the command line config options of Boost Test Framework. Regards, Ovanes
1,172,928
1,173,020
How to access DOM of a web page in QtWebKit?
How to access DOM of a web page in QtWebKit? I don't see any methods exposing DOM in QtWebKit...
Right now, as of Qt 4.4/4.5, I don't think there is any direct way, but it's coming. See http://labs.trolltech.com/blogs/2009/04/07/qwebelement-sees-the-light-do-i-hear-a-booyakasha/
1,173,525
1,173,680
C++ and Qt - Problem with 2D graphics
Mission: draw two lines of different colors on one graph with automatic clipping, by adding points bit by bit. So, what am I doing? I create a class GraphWidget, inherited from QGraphicsView. I create a QGraphicsScene member. I create 2 QPainterPath instances and add them to the graphicsScene. Then I eventually call graphWidget.Redraw(), where I call QPainterPath.lineTo() for both instances. I expect the lines to appear in the graphics view, but they don't. I'm tired of reading Qt's docs and forums. What am I doing wrong?
We need to know more, what does not happen? Does the window appear at all? Are the lines not drawn? In the meantime try out this sample code if you want :) Edit: updated to show updating. #include ... class QUpdatingPathItem : public QGraphicsPathItem { void advance(int phase) { if (phase == 0) return; int x = abs(rand()) % 100; int y = abs(rand()) % 100; QPainterPath p = path(); p.lineTo(x, y); setPath(p); } }; int main(int argc, char *argv[]) { QApplication a(argc, argv); QGraphicsScene s; QGraphicsView v(&s); QUpdatingPathItem item; item.setPen(QPen(QColor("red"))); s.addItem(&item); v.show(); QTimer *timer = new QTimer(&s); timer->connect(timer, SIGNAL(timeout()), &s, SLOT(advance())); timer->start(1000); return a.exec(); } You should get something like this: The path in any QGraphicsPathItem can of course be updated later. You might want to keep the original painter path somewhere to avoid performance hit caused by all the path copying (I'm not sure if QPainterPath is implicitly shared...) QPainterPath p = gPath.path(); p.lineTo(0, 42); gPath.setPath(p); Animation It seems that you're trying to do some sort of animation/on-the-fly updating. There is entire framework for this in Qt. In the simplest form you can subclass QGraphicsPathItem, reimplement its advance() slot to automatically fetch next point from motion. The only thing left to do then would be calling s.advance() with the required frequency. http://doc.trolltech.com/4.5/qgraphicsscene.html#advance
1,173,962
1,173,989
Run Code Before Every Function Call for a Class in C++
I would like to run some code (perhaps a function) right before every function call for a class and all functions of the classes that inherit from that class. I'd like to do this without actually editing every function. Is such a thing even possible? I would settle for having a function called as the first instruction of every function call instead of it being called right before.
AspectC++ is what you want. I haven't used it myself, but Aspect-Oriented Programming paradigm tries to solve this exact problem.
1,174,169
1,174,193
Function passed as template argument
I'm looking for the rules involving passing C++ templates functions as arguments. This is supported by C++ as shown by an example here: #include <iostream> void add1(int &v) { v+=1; } void add2(int &v) { v+=2; } template <void (*T)(int &)> void doOperation() { int temp=0; T(temp); std::cout << "Result is " << temp << std::endl; } int main() { doOperation<add1>(); doOperation<add2>(); } Learning about this technique is difficult, however. Googling for "function as a template argument" doesn't lead to much. And the classic C++ Templates The Complete Guide surprisingly also doesn't discuss it (at least not from my search). The questions I have are whether this is valid C++ (or just some widely supported extension). Also, is there a way to allow a functor with the same signature to be used interchangeably with explicit functions during this kind of template invocation? The following does not work in the above program, at least in Visual C++, because the syntax is obviously wrong. It'd be nice to be able to switch out a function for a functor and vice versa, similar to the way you can pass a function pointer or functor to the std::sort algorithm if you want to define a custom comparison operation. struct add3 { void operator() (int &v) {v+=3;} }; ... doOperation<add3>(); Pointers to a web link or two, or a page in the C++ Templates book would be appreciated!
Yes, it is valid. As for making it work with functors as well, the usual solution is something like this instead: template <typename F> void doOperation(F f) { int temp=0; f(temp); std::cout << "Result is " << temp << std::endl; } which can now be called as either: doOperation(add2); doOperation(add3()); The problem with this is that it makes it tricky for the compiler to inline the call to add2, since all the compiler knows is that a function pointer type void (*)(int &) is being passed to doOperation. (But add3, being a functor, can be inlined easily. Here, the compiler knows that an object of type add3 is passed to the function, which means that the function to call is add3::operator(), and not just some unknown function pointer.)
1,174,264
1,174,295
Linker problem on VS2005 with VC++
Here's the scenario: Platform: VS2005 and language is VC++ Situation: There's just 1 assembly CMPW32. It has 2 projects: 1 is a DLL project called CMPW32 and the 2nd one is an .exe project called Driver. They both share the same Debug folder under the main assembly folder. I have been able to successfully export a few functions from the DLL. The Driver project accesses 1 of these exported functions. (First of all I am not sure if functions need to be exported for projects in the SAME assembly to be able to use them. I can just include the header files and use the functions I think.) Following are a few lines of code from some files which you might find useful to analyze my problem: //main.cpp file from the Driver project which is meant to generate Driver.exe #pragma comment(lib, "winmm.lib") #include <CM.h> #include "conio.h" #include "CMM.h" #include "CMF.h" #define C_M_F _T("c:\\CannedMessages.en-US") int _tmain(int argc, TCHAR* argv []) { CMM myobjModel; CMF::Read (CANNED_MESSAGES_FILE, myobjModel); getch(); } //CMM.h file #ifndef C_M_M #define C_M_M #include "CMD.h" #include "CMC.h" #include "CM.h" #define _C_M_DLL #include "CMP.h" class CM_DLL_API CMM { //some code here... } //CMF.h #ifndef C_M_F #define C_M_F #include "CMM.h" #define _C_M_DLL #include "CMP.h" class CM_DLL_API CMF { //some code here... } //CMP.h #ifndef C_M_P #define C_M_P #include "CMD.h" #define C_M_B_F _T("CannedMessages.") #ifdef _C_M_DLL #define CM_DLL_API __declspec( dllexport ) #else #define CM_DLL_API __declspec( dllimport ) #endif extern "C" { //list of functions to be exported..
} ERRORS on building the solution: Error13 error LNK2019: unresolved external symbol "public: __thiscall CMM::~CMM(void)" (??1CMM@@QAE@XZ) referenced in function _wmain main.obj Error15 fatal error LNK1120: 2 unresolved externals C:\"somepath here which I can't disclose"\Projects\CMPW32\Debug\Driver.exe Please Note: If I choose to build only the CMPW32 DLL project, there are no errors and the CMPW32.dll file gets generated in the debug folder with the correct functions getting exported. However there seems to be some linking problem that is pretty evident and I don't know what's going on. I have included every required file and also have entered the required .lib in the input of the "Project Settings". The paths have been set correctly too. It would be really helpful if someone could help me out with this. Please let me know if additional information is required. Thanks, Viren
Looks like your Driver.exe project does not include the CPP source files of the CMM class, likely CMM.cpp. Or you have declared a destructor for the CMM class in your .H file (CMM.H) and forgot to implement it in the .CPP file (CMM.CPP).
1,174,296
1,182,108
How do I return a pointer to a user-defined class object using SWIG
I have the following code wrapped by swig: int cluster::myController(controller*& _controller) { _controller = my_controller; return 0; } controller has a private constructor. What's the correct incantation to make something like this not throw an exception? public static void main(String argv[]) { controller c = null; int r = dan_cluster.theCluster().myController(c); System.out.println(r); System.out.println(c.getUUID()); }
It would be helpful if you were to post the exception you're getting, but I don't believe that what you're trying to do is possible. In C++ you can pass a pointer by reference and so make changes to a pointer variable passed to a function, but in Java there is nothing equivalent. You should check the generated wrapper, but my guess is that your C++ code is being wrapped as something like: int myController(controller _controller) { _controller = my_controller; return 0; } ...which clearly won't do what you want, leading to a NullPointerException when you try to getUUID(). Assuming I'm right, the best fix is for myController() to just return a controller*. If you really need the integer, consider returning a std::pair (note: wrapping any part of the stl requires some delicacy, consult the documentation).
1,174,373
1,174,393
Run .exe outside IDE but use break points inside IDE
Using VS .NET 2003. I would like to run the .exe from outside the IDE (i.e. command prompt or double-clicking the .exe icon in Windows). However, I still want breakpoints to be hit in the IDE. How do I set this up? (Running from outside the IDE but the IDE seeing it as run from "Debug" -> "Start") Thanks.
On the Debug menu, choose the "Attach to process" option to attach a debugger to your externally-running application.
1,175,164
1,175,314
Timer Interrupt Service Routine on a host computer running at a rate of 10 microseconds or faster
I am trying to run the following pseudocode at a rate of 10 microseconds or faster on a host computer (512 mb RAM, Intel 2.5 GHz Pentium 4 processor, etc.) running on a Windows XP operating system: int main(void) { while(1){}; } Interrupt service routine: every 10 microseconds, printf("Hello World"); I'm aware that there are MFC timers, but they are not functional if the timers need to trigger faster than 1 ms. What would be the easiest method to accomplish what the goals of my pseudocode? Thanks in advance.
I'm not sure you can get that kind of performance out of Windows XP, at least not reliably from userland. You might have to run your code as a kernel driver, or better yet investigate using a real-time OS like Xenomai instead.
1,175,317
1,175,332
CallNamedPipe & NamedPipeServerStream, access denied?
I'm trying to do some IPC between a managed and unmanaged process. I've settled on named pipes. I'm spinning up a thread in managed code, using NamedPipeServerStream: using (NamedPipeServerStream stream = new NamedPipeServerStream("MyPipe", PipeDirection.In)) { while (true) { stream.WaitForConnection(); stream.Read(buffer, 0, size); //Handle buffer values } } On the unmanaged side I'm using CallNamedPipe: CallNamedPipe(TEXT("\\\\.\\pipe\\MyPipe"), NULL, 0, pData, dataSize, NULL, NMPWAIT_WAIT_FOREVER); However, CallNamedPipe fails with a GetLastError of 5 (Access Denied). Any idea why?
My guess would be that you are running the processes under two different accounts. Since you are using the NamedPipeStream constructor that uses default security the other user does not have access. This can be solved by using the constructor that takes a PipeSecurity instance. Then just give the other account access explicitly. EDIT: I just noticed that you are creating the Pipe as a one-way pipe with the direction in. But I believe that CallNamedPipe attempts to open the pipe for both reading and writing and will fail when connecting to a one-way pipe. EDIT 2: Also that constructor creates a byte type pipe and CallNamedPipe can only connect to message type pipes. So you'll have to use another constructor.
1,175,330
1,175,869
Bad pointer or link issue when creating wstring from vc6 dll
I got a DLL generated on VC6 and using wstring, and I'm trying to use it in a VC9 project. In this DLL there is a higher level class manipulating wstring, called UtfString. I got everything imported correctly in my project, but when I call: std::wstring test; UtfString uTest(test); it won't link, even though the function prototype is in the lib... The other issue is that when I create a new UtfString and debug my app, the new pointer is <Bad Ptr>. I suspect a conflict between the VC6 wstring and the VC9 wstring, but I'm not sure. I want to avoid modifying the original DLL. It would be great if someone could make things clearer for me and explain the real reason for the problem. Thanks in advance for your answer, Boris
DON'T EVEN TRY: the string layouts are different; you can't do that. The string class is entirely different between VC6 and VC9. Even if you were able to link, you would most likely crash. In VC9, strings have a union that is a 16-byte buffer for small strings and a pointer for strings s.t. size() > 15. In VC9, wstrings have a union that is an 8-wchar buffer for small strings and a pointer for strings s.t. size() > 7. In VC6, all string buffer space is allocated on the heap. YOU must recompile the DLL if you pass strings across the boundary. There are other issues too regarding iterators that are too technical to describe here. Sorry, gotta rebuild.
1,175,505
1,176,393
Is it possible to instruct MSVC to use release version of Boost when compiling Debug project?
I have built Boost in Release configuration and have staged it into one folder. Now when I add the Boost libraries to a project and try to build it in Debug configuration, the linker fails because there are no Debug versions of the libraries. Is there a way to make MSVC 9.0 use the Release version of the libraries when building the Debug configuration? Of course, there is an easy solution - build the Debug version of Boost. But I am just curious.
You can do two things: Build the debug version for boost (this is the best option). Add debugging symbols to your release build. You can't use the release version of boost with your debug build because boost depends on the CRT, which is different in debug/release builds.
1,175,646
1,175,664
C++ - when should I use a pointer member in a class
One of the things that has been confusing for me while learning C++ (and Direct3D, but that was some time ago) is when you should use a pointer member in a class. For example, I can use a non-pointer declaration: private: SomeClass instance_; Or I could use a pointer declaration private: Someclass * instance_ And then use new() on it in the constructor. I understand that if SomeClass could be derived from another class, is a COM object, or is an ABC, then it should be a pointer. Are there any other guidelines that I should be aware of?
A pointer has the following advantages: a) You can do lazy initialization, that is, init/create the object only shortly before the first real usage. b) The design: if you use pointers for members of an external class type, you can place a forward declaration above your class and thus don't need to include the headers of those types in your header - instead you include the third-party headers in your .cpp - which reduces compile time and prevents side effects from including too many other headers. class ExtCamera; // forward declaration to external class type in "ExtCamera.h" class MyCamera { public: MyCamera() : m_pCamera(0) { } void init(const ExtCamera &cam); private: ExtCamera *m_pCamera; // do not use it in inline code inside header! }; c) A pointer can be deleted at any time - so you have more control over the lifetime and can re-create an object - for example in case of a failure.
1,175,823
1,175,826
gcc linker errors when using boost to_lower & trim
I'm trying to use the boost library in my code but get the following linker errors under Sparc Solaris platform. The problem code can essentially be summarised to: #include <boost/algorithm/string.hpp> std::string xparam; ... xparam = boost::to_lower(xparam); The linker error is: LdapClient.cc:349: no match for `std::string& = void' operator /opt/gcc-3.2.3/include/c++/3.2.3/bits/basic_string.h:338: candidates are: std::basic_string<_CharT, _Traits, _Alloc>& std::basic_string<_CharT, _Traits, _Alloc>::operator=(const std::basic_string<_CharT, _Traits, _Alloc>&) [with _CharT = char, _Traits = std::char_traits<char>, _Alloc = std::allocator<char>] /opt/gcc-3.2.3/include/c++/3.2.3/bits/basic_string.h:341: std::basic_string<_CharT, _Traits, _Alloc>& std::basic_string<_CharT, _Traits, _Alloc>::operator=(const _CharT*) [with _CharT = char, _Traits = std::char_traits<char>, _Alloc = std::allocator<char>] /opt/gcc-3.2.3/include/c++/3.2.3/bits/basic_string.h:344: std::basic_string<_CharT, _Traits, _Alloc>& std::basic_string<_CharT, _Traits, _Alloc>::operator=(_CharT) [with _CharT = char, _Traits = std::char_traits<char>, _Alloc = std::allocator<char>] gmake: *** [LdapClient.o] Error 1 Any ideas?
boost::to_lower does not return a copy of the string, it operates on the variable passed into the function. For some examples, read this. So no need to reassign: boost::to_lower(xparam); You get an error because you are trying to assign a void return value to the string. If you want to make a copy of it, use the copy version: std::string xparamLowered = boost::to_lower_copy(xparam);
1,175,976
1,176,219
How to do type conversion for the following Scenario?
I am using TCHAR buffer[MAX_SIZE]; after some steps I am getting the relative path of a folder, say for example: c:\Microsoft.NET\Framework\v1.0.037\ Since the above path is in a buffer of type TCHAR, I am trying to concatenate "RegAsm.exe". After appending, I need to convert the path to LPCTSTR since I need to pass it to CreateProcess(), which takes an LPCTSTR argument, but the compiler gives an error. I have tried but am vexed. Can anyone help me with this?
The problem is TCHAR and CreateProcess are macros that expand differently depending on whether you compile for Unicode or not. The caveat is that GetCORSystemDirectory() will only accept a Unicode buffer. To get rid of these ANSI/Unicode problems write this code part explicitly for Unicode. Instead of TCHAR use WCHAR for the buffer. Instead of CreateProcess() use CreateProcessW() - it will happily accept the Unicode buffer. Use wcscat() for string concatenation. Something like this (error handling omitted): WCHAR buffer[MAX_PATH + 1]; DWORD realLength; GetCORSystemDirectory( buffer, MAX_PATH, &realLength ); *( buffer + realLength ) = 0;// Don't forget to null-terminate the string wcscat( buffer, L"regasm.exe" ); CreateProcessW( /*pass buffer here*/ );
1,176,050
1,176,092
Implement IDropTarget
I would like to drag and drop files from windows explorer onto my application which is being built in Codegear RAD studio 2009. Then I would like to be able to access data from the object I am dragging and dropping. I believe I have to implement IDropTarget. Can someone please provide an example of how I might implement IDropTarget to achieve this?
There is a nice example written by Michael Dunn over at codeproject.com which shows how to implement IDropTarget and access data from inside a IDataObject which is used to store data during the drag and drop operation.
1,176,129
1,176,430
libxml for C++: How to add a root node to XML tree?
I have an xml file that looks like the following <siteinfo> ... </siteinfo> <page> <title>...</title> <revision> ... </revision> </page> It does not have a root/enclosing node so I get the "extra content at end of document" on running my program. After opening the file for parsing using libxml, is there a way to easily add this root/enclosing node to the parse tree?
If you can't change the structure of the source XML, an easier way would be to create a string like: string xml = string("<root>") + file_contents + string("</root>"); Then you can parse this string easily. By the way, since you're using C++, you should give libxml++ a try. It is a C++ wrapper for libxml and it is very good.
1,176,131
1,176,200
Library to facilitate the use of the "design by contract" principle
Is there any library that aids in implementing the design by contract principle in a C++ application? In particular, I'm looking for a library that facilities the usage of the principle, something like this.
I followed the teachings of the following articles: An exception or a bug? (Miro Samek, C/C++ Users Journal, 2003) Simple Support for Design by Contract in C++ (Pedro Guerreiro, TOOLS, 2001) What I ultimately applied was pretty much Samek's approach. Just creating macros for REQUIRE, ENSURE, CHECK and INVARIANT (based on the existing assert macro) was very useful. Of course it's not as good as native language support but anyway, it allows you to get most of the practical value from the technique. As for libraries, I don't think that it pays to use one, because one important value of the assertion mechanism is its simplicity. For the difference between debug and production code, see When should assertions stay in production code?.
1,176,298
1,176,317
Best practices of dynamic vs. static memory in terms of cleanliness and speed
I have an array, called x, whose size is 6*sizeof(float). I'm aware that declaring: float x[6]; would allocate 6*sizeof(float) for x in the stack memory. However, if I do the following: float *x; // in class definition x = new float[6]; // in class constructor delete [] x; // in class destructor I would be allocating dynamic memory of 6*sizeof(float) to x. If the size of x does not change for the lifetime of the class, in terms of best practices for cleanliness and speed (I vaguely, and perhaps incorrectly, recall that stack memory operations are faster than dynamic memory operations), should I make sure that x is statically rather than dynamically allocated memory? Thanks in advance.
Declaring the array of fixed size will surely be faster. Each separate dynamic allocation requires finding an unoccupied block, and that's not very fast. So if you really care about speed (and have profiled), the rule is: if you don't need dynamic allocation, don't use it. If you need it, think twice about how much to allocate, since reallocating is not very fast either.
1,176,367
1,176,404
how to catch the power button on wince?
I would like to catch the power button, flagged as VK_TPOWER in the documentation. How is it possible?
The power manager is the only interface I can think of. But I am not so sure that it will let you do anything you want with the power button; it is probably strictly limited in what you can do with it. Here's a link that could help
1,176,400
1,225,801
How to setup eclipse cpp to generate multiple executables
I have made a new project in a clean eclipse installation and imported a lot of source and header files into a source directory. Some of the source files are libraries, some have a main method and are supposed to be compiled into an executable. How can I indicate which source files are supposed to be executables? Should I make different build projects?
Eclipse is able to generate multiple executables when you write your own makefile. Eclipse is then just performing a make all and puts all binaries in Debug or Release. Then you can run them individually. Steps to use your own makefile: New -> Project -> C++ -> Makefile project
1,176,427
1,186,836
Shared libraries and .h files
I have some doubts about how programs use shared libraries. When I build a shared library (with the -shared -fPIC switches) I make some functions available to an external program. Usually I do a dlopen() to load the library and then dlsym() to link the said functions to some function pointers. This approach does not involve including any .h file. Is there a way to avoid doing dlopen() & dlsym() and just include the .h of the shared library? I guess this may be how C++ programs use code stored in system shared libraries, i.e. just including stdlib.h etc.
Nick, I think all the other answers are actually answering your question, which is how you link libraries, but the way you phrase your question suggests you have a misunderstanding of the difference between header files and libraries. They are not the same. You need both, and they are not doing the same thing. Building an executable has two main phases, compilation (which turns your source into an intermediate form, containing executable binary instructions, but is not a runnable program), and linking (which combines these intermediate files into a single running executable or library). When you do gcc -c program.c, you are compiling, and you generate program.o. This step is where headers matter. You need to #include <stdlib.h> in program.c to (for example) use malloc and free. (Similarly you need #include <dlfcn.h> for dlopen and dlsym.) If you don't do that the compiler will complain that it doesn't know what those names are, and halt with an error. But if you do #include the header the compiler does not insert the code for the function you call into program.o. It merely inserts a reference to it. The reason is to avoid duplication of code: The code is only going to need to be accessed once by every part of your program, so if you needed further files (module1.c, module2.c and so on), even if they all used malloc you would merely end up with many references to a single copy of malloc. That single copy is present in the standard library in either its shared or static form (libc.so or libc.a) but these are not referenced in your source, and the compiler is not aware of them. The linker is. In the linking phase you do gcc -o program program.o. The linker will then search all libraries you pass it on the command line and find the single definition of all functions you've called which are not defined in your own code. That is what the -l does (as the others have explained): tell the linker the list of libraries you need to use.
Their names often have little to do with the headers you used in the previous step. For example to get use of dlsym you need libdl.so or libdl.a, so your command-line would be gcc -o program program.o -ldl. To use malloc or most of the functions in the std*.h headers you need libc, but because that library is used by every C program it is automatically linked (as if you had done -lc). Sorry if I'm going into a lot of detail but if you don't know the difference you will want to. It's very hard to make sense of how C compilation works if you don't. One last thing: dlopen and dlsym are not the normal method of linking. They are used for special cases where you want to dynamically determine what behavior you want based on information that is, for whatever reason, only available at runtime. If you know what functions you want to call at compile time (true in 99% of the cases) you do not need to use the dl* functions.
1,176,448
1,176,530
Drawing on the Desktop Background (WIN32)
Is there any way to draw on the desktop background in WIN32 and also receive notifications when the desktop background is repainted? I tried this:

desk = GetDesktopWindow();
dc = GetDC(desk);
MoveToEx(dc, 0, 0, NULL);
LineTo(dc, 1680, 1050);
ReleaseDC(desk, dc);

But it draws on the whole screen, even over windows that are on the screen.
You can use Spy++ to find which window is the desktop background window. On my system I see the following hierarchy:

Window 00010098 "Program Manager" Progman
  Window 0001009E "" SHELLDLL_DefView
    Window 000100A0 "FolderView" SysListView32

I guess you are referring to the SysListView32 - the window with all the icons. You can use FindWindowEx to find this window.

Edit

You should use a combination of FindWindowEx and EnumChildWindows. The code presented below can be compiled in a command line box like this:

cl /EHsc finddesktop.cpp /DUNICODE /link user32.lib

#include <windows.h>
#include <iostream>
#include <string>

BOOL CALLBACK EnumChildProc(HWND hwnd, LPARAM lParam)
{
    std::wstring windowClass;
    windowClass.resize(255);
    unsigned int chars = ::RealGetWindowClass(hwnd, &*windowClass.begin(), windowClass.size());
    windowClass.resize(chars);
    if (windowClass == L"SysListView32")
    {
        HWND* folderView = reinterpret_cast<HWND*>(lParam);
        *folderView = hwnd;
        return FALSE;
    }
    return TRUE;
}

int wmain()
{
    HWND parentFolderView = ::FindWindowEx(0, 0, L"Progman", L"Program Manager");
    if (parentFolderView == 0)
    {
        std::wcout << L"Couldn't find Progman window, error: 0x" << std::hex << GetLastError() << std::endl;
    }

    HWND folderView = 0;
    ::EnumChildWindows(parentFolderView, EnumChildProc, reinterpret_cast<LPARAM>(&folderView));
    if (folderView == 0)
    {
        std::wcout << L"Couldn't find FolderView window, error: 0x" << std::hex << GetLastError() << std::endl;
    }

    HWND desktopWindow = ::GetDesktopWindow();
    std::wcout << L"Folder View: " << folderView << std::endl;
    std::wcout << L"Desktop Window: " << desktopWindow << std::endl;
    return 0;
}

Here are the results after running finddesktop.exe:

Folder View: 000100A0
Desktop Window: 00010014

As you can see, the window handles are quite different.
1,176,580
1,179,664
custom FILE type in C/C++
Is it possible in C/C++ to create my own custom stream of type FILE (from stdio.h) that can be used with fputs(), for example?
If your "custom stream" isn't something you can represent with a file descriptor or file handle, then you're out of luck. The FILE type is implementation-defined, so there's no standard way to associate other things with one. If you can get a C file descriptor for whatever it is you're trying to write to, then you can call fdopen on it to turn it into a FILE*. It's not standard C or C++, but it's provided by Posix. On Windows, it's spelled _fdopen. If you're using Windows and you have a HANDLE, then you can use _open_osfhandle to associate a file descriptor with it, and then use _fdopen from there. Are you really tied to fputs? If not, then replace it with use of a C++ IOStream. Then you can provide your own descendant of std::basic_streambuf, wrap it in a std::ostream, and use standard C++ I/O on it.
1,176,602
1,176,618
Automatically removing unneeded #include statements
Possible Duplicates:
C/C++: Detecting superfluous #includes?
How should I detect unnecessary #include files in a large C++ project?

I've been following numerous discussions about how to reduce the build time for C/C++ projects. Usually, a good optimization is to get rid of #include statements by using forward declarations. Now, I was wondering: is there a tool which can compute the #include dependency tree between C/C++ header files (I know mkdep on Linux can do this) and then run a 'remove header file/recompile' cycle? It would be great if the tool could try to remove nodes from the dependency tree (e.g. remove #include statements from files) and then rebuild the project to see whether it still works. It wouldn't need to be very clever (as in, refactoring the code to make header files unnecessary by using pointers instead of values or the like), but I believe many projects I worked on had plainly unneeded #include statements. This usually happens when refactoring code and moving it around, but then forgetting to take the #include out. Does anybody know whether a tool like this exists?
There have been lots of questions here similar to this. So far, no one has come up with a really good tool to list the dependency graph and highlight multiple includes etc. (the favourite seems to be doxygen), much less perform edits on the files themselves. So I would guess the answer is going to be "No" - I'd be happy to be wrong, however!
1,177,093
1,196,735
question on the use of libmemcached
This one could be a trivial yes-or-no question, but it could still be helpful. Can the C/C++ library libmemcached be used in a distributed file system? I am asking because in all the documentation I came across on the net, memcached was mostly associated with caching in web-service applications. An example: let there be servers A and B and a client C. Client C connects to server A and asks to open a file F. If file F resides on server B, then server A caches that file and then serves it back to client C. Should the libmemcached library be used in the above situation? Is there any alternative C framework proposal?
Yes, but with severe limitations. You'll only be able to cache blocks, not the entire file, because memcached has an upper limit of about 1MB on what can be stored behind each key. The other thing to consider is that you're adding 1-2 ms for each block you have to fetch when assembling the final file. You're better off implementing your own in-memory cache or finding an existing clustered filesystem like Gluster.
1,177,159
1,177,177
MSVC++: how to output something to the "output" window during compilation
Sometimes I see that certain projects write something to the output during compilation. How can that be achieved in MSVC++? Thanks!
Use #pragma message, e.g.

#define MESSAGE(t) message(__FILE__ "(" STRINGXXX(__LINE__) ") : " t)
#define STRINGXXX(x) STRINGYYY(x)
#define STRINGYYY(x) #x

Then if you put

#pragma MESSAGE("TODO: testing")

it will appear as a clickable message, just like the normal compiler messages.
1,177,276
1,177,622
Best way to find a whitespace-delimited word in a CString
Example: "select * from somewhere where x = 1". I want to find the whitespace-delimited "where", but not the "where" within "somewhere". In the example "where" is delimited by spaces, but it could be carriage returns, tabs, etc. Note: I know a regex would make this easy to do (the regex equivalent would be "\bwhere\b"), but I don't want to add a regex library to my project just for this.
If you wanted to use the pure MFC method of string manipulation, then this should work:

CString strSql = _T("select * from somewhere where x = 1");
int nTokenPos = 0;
CString strToken = strSql.Tokenize(_T(" \r\n\t"), nTokenPos);
while (!strToken.IsEmpty())
{
    if (strToken.Trim().CompareNoCase(_T("where")) == 0)
        return TRUE;  // found
    strToken = strSql.Tokenize(_T(" \r\n\t"), nTokenPos);
}
return FALSE;  // not found
1,177,457
1,177,599
Debug heap/STL debugging equivalent for GCC?
I plan on using GCC more (Linux and Windows) and I was wondering if there's an equivalent of the MSVC debug heap and the STL checks available for the GCC CRT and STL. I already know about tools such as Valgrind, but I'm looking for something built in the libraries.
I'm not too familiar with the debug heap and STL checks, but when I have memory problems in GCC on linux I use an environment variable called MALLOC_CHECK_ (from malloc(3)): Recent versions of Linux libc (later than 5.4.23) and GNU libc (2.x) include a malloc implementation which is tunable via environment variables. When MALLOC_CHECK_ is set, a special (less efficient) implementation is used which is designed to be tolerant against simple errors, such as double calls of free() with the same argument, or overruns of a single byte (off-by-one bugs). Not all such errors can be protected against, however, and memory leaks can result. If MALLOC_CHECK_ is set to 0, any detected heap corruption is silently ignored; if set to 1, a diagnostic is printed on stderr; if set to 2, abort() is called immediately. This can be useful because otherwise a crash may happen much later, and the true cause for the problem is then very hard to track down. There is also Electric Fence which can help catch buffer overruns aborting as soon as the overrun / underrun happens. See libefence(3) for more information.
1,177,704
1,177,733
Why does C++ allow an integer to be assigned to a string?
I encountered an interesting situation today in a program where I inadvertently assigned an unsigned integer to a std::string. The Visual Studio C++ compiler did not give any warnings or errors about it, but I happened to notice the bug when I ran the project and it gave me junk characters for my string. This is roughly what the code looked like:

std::string my_string("");
unsigned int my_number = 1234;
my_string = my_number;

The following code also compiles fine:

std::string my_string("");
unsigned int my_number = 1234;
my_string.operator=(my_number);

The following results in an error:

unsigned int my_number = 1234;
std::string my_string(my_number);

What is going on? How come the compiler stops the build with the last code block, but lets the first two build?
Because string is assignable from char (std::string has an operator=(char) overload), and int is implicitly convertible to char. That's why the assignments compile. The constructor case fails because std::string has no constructor taking a single integer or char argument, so there is no conversion for the compiler to use there.