1,065,672
1,065,729
How to link against boost.system with cmake
I use a CMake-generated makefile to compile a C++ file that depends on the Boost filesystem library. During the linking process I get the following error:

Undefined symbols:
  "boost::system::get_generic_category()", referenced from:
      __static_initialization_and_destruction_0(int, int) in FaceRecognizer.cpp.o
      __static_initialization_and_destruction_0(int, int) in FaceRecognizer.cpp.o
      __static_initialization_and_destruction_0(int, int) in FaceRecognizer.cpp.o
  "boost::system::get_system_category()", referenced from:
      __static_initialization_and_destruction_0(int, int) in FaceRecognizer.cpp.o
      __static_initialization_and_destruction_0(int, int) in FaceRecognizer.cpp.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
make[2]: *** [src/ImageMarker] Error 1

The action from the makefile that generates this error is this line:

cd /Users/janusz/Documents/workspace/ImageMarker/Debug/src && /opt/local/bin/cmake -E cmake_link_script CMakeFiles/ImageMarker.dir/link.txt --verbose=1
/usr/bin/c++ -O3 -Wall -Wno-deprecated -g -verbose -Wl,-search_paths_first -headerpad_max_install_names -fPIC CMakeFiles/ImageMarker.dir/ImageMarker.cpp.o CMakeFiles/ImageMarker.dir/Image.cpp.o CMakeFiles/ImageMarker.dir/utils.cpp.o CMakeFiles/ImageMarker.dir/XMLWriter.cpp.o CMakeFiles/ImageMarker.dir/FaceRecognizer.cpp.o -o ImageMarker -L/opt/local/lib ../libTinyXml.a /opt/local/lib/libboost_filesystem-mt.dylib

Some googling showed me that this error seems to be common on Macs with the Boost filesystem library: I have to link against a boost.system library, or make my project depend on the boost.system library. How do I force CMake to link against the library without hardcoding the library path?
Here is the result from otool:

otool -L /opt/local/lib/libboost_filesystem-mt.dylib
/opt/local/lib/libboost_filesystem-mt.dylib:
        /opt/local/lib/libboost_filesystem-mt.dylib (compatibility version 0.0.0, current version 0.0.0)
        /opt/local/lib/libboost_system-mt.dylib (compatibility version 0.0.0, current version 0.0.0)
        /usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 7.4.0)
        /usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
        /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 111.0.0)
On Linux, CMake figures out by itself that boost_filesystem is linked against boost_system. On the Mac you obviously have to tell it explicitly:

find_package(Boost COMPONENTS system filesystem REQUIRED)
#...
target_link_libraries(mytarget
  ${Boost_FILESYSTEM_LIBRARY}
  ${Boost_SYSTEM_LIBRARY}
)
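A fuller CMakeLists.txt along these lines might look like the sketch below; the target name and source list are placeholders taken from the question's link line, not the asker's actual project file:

```cmake
cmake_minimum_required(VERSION 2.6)
project(ImageMarker)

# Let CMake locate Boost instead of hardcoding /opt/local/lib.
find_package(Boost COMPONENTS system filesystem REQUIRED)

include_directories(${Boost_INCLUDE_DIRS})

add_executable(ImageMarker ImageMarker.cpp FaceRecognizer.cpp)

# Listing both components resolves the get_generic_category() /
# get_system_category() undefined symbols without any hardcoded paths.
target_link_libraries(ImageMarker
  ${Boost_FILESYSTEM_LIBRARY}
  ${Boost_SYSTEM_LIBRARY}
)
```

Because find_package fills in the library variables, the same file works whether Boost lives in /opt/local (MacPorts), /usr/lib, or a custom prefix passed via BOOST_ROOT.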
1,065,743
1,065,830
Using Boost::asio in Winx64: I'm stuck, need to figure out how to build libboost_system_xxxx.lib for x64
Unlike this question: Linker Error while building application using Boost Asio in Visual Studio C++ 2008 Express I need an x64 build of the lib files... I'm not even sure how to get started. I'm reading here: http://www.boost.org/doc/libs/1_39_0/more/getting_started/windows.html Or, more generally, how do I build boost for x64?
I'm not on Windows, but I guess adding address-model=64 to the bjam invocation should do the trick.
1,065,774
1,065,800
Initialization of all elements of an array to one default value in C++?
C++ Notes: Array Initialization has a nice list on initialization of arrays. I have

int array[100] = {-1};

expecting it to be full of -1s, but it's not; only the first value is, and the rest are 0s mixed with random values. The code

int array[100] = {0};

works just fine and sets each element to 0. What am I missing here? Can't one initialize it if the value isn't zero? And secondly: is the default initialization (as above) faster than the usual loop through the whole array to assign a value, or does it do the same thing?
Using the syntax that you used,

int array[100] = {-1};

says "set the first element to -1 and the rest to 0", since all omitted elements are set to 0. In C++, to set them all to -1, you can use something like std::fill_n (from <algorithm>):

std::fill_n(array, 100, -1);

In portable C, you have to roll your own loop. There are compiler extensions, or you can depend on implementation-defined behavior as a shortcut, if that's acceptable.
1,066,071
1,067,072
Boost linker error: Unresolved external symbol "class boost::system::error_category const & __cdecl boost::system::get_system_category(void)"
I'm just getting started with Boost for the first time. Details:

- I'm using Visual Studio 2008 SP1
- I'm doing an x64 build
- I'm using boost::asio only (and any dependencies it has)

My code now compiles, and I pointed my project at the boost libraries (after having built x64 libs) and got past simple issues. Now I am facing a linker error:

2>BaseWebServer.obj : error LNK2001: unresolved external symbol "class boost::system::error_category const & __cdecl boost::system::get_system_category(void)" (?get_system_category@system@boost@@YAAEBVerror_category@12@XZ)
2>BaseWebServer.obj : error LNK2001: unresolved external symbol "class boost::system::error_category const & __cdecl boost::system::get_generic_category(void)" (?get_generic_category@system@boost@@YAAEBVerror_category@12@XZ)

Any ideas? I added this define:

#define BOOST_LIB_DIAGNOSTIC

And now in my output I see this:

1>Linking to lib file: libboost_system-vc90-mt-1_38.lib
1>Linking to lib file: libboost_date_time-vc90-mt-1_38.lib
1>Linking to lib file: libboost_regex-vc90-mt-1_38.lib

which seems to indicate it is in fact linking in the system lib.
I solved the problem. I had built 32-bit libraries when I had intended to build 64-bit libraries. I fixed up my build statement, built 64-bit libraries, and now it works. Here is my bjam command line:

C:\Program Files (x86)\boost\boost_1_38>bjam --build-dir=c:\boost --build-type=complete --toolset=msvc-9.0 address-model=64 architecture=x86 --with-system
1,066,137
1,066,200
What is the Preferred Cross-platform 'main' Definition Using boost::program_options?
I'm trying to develop a cross-platform application using C++ with Boost. I typically program in a *nix environment, where I've always defined main as follows:

int main( const int argc, const char* argv[] ) { ... }

For this application, I'm starting in the Windows environment, using Visual Studio 2003. When I try to use boost::program_options with this definition, I get compile errors from program_options::store:

po::options_description desc("Supported options");
desc.add_options()...;
po::variables_map vm;
po::store(po::parse_command_line(argc, argv, desc), vm);

Error:

main.cpp(46) : error C2665: 'boost::program_options::store' : none of the 2 overloads can convert parameter 1 from type 'boost::program_options::basic_parsed_options<charT>' with [ charT=const char ]
c:\boost_1_38_0\boost\program_options\variables_map.hpp(34): could be 'void boost::program_options::store(const boost::program_options::basic_parsed_options<charT> &,boost::program_options::variables_map &,bool)' with [ charT=char ]
c:\boost_1_38_0\boost\program_options\variables_map.hpp(43): or 'void boost::program_options::store(const boost::program_options::basic_parsed_options<wchar_t> &,boost::program_options::variables_map &)'
while trying to match the argument list '(boost::program_options::basic_parsed_options<charT>, boost::program_options::variables_map)' with [ charT=const char ]

I tried to force the wchar_t function by defining main as follows:

int main( const int argc, wchar_t* argv[]){ ...
}

Then it compiles, but I get link errors:

main.obj : error LNK2019: unresolved external symbol "void __cdecl boost::program_options::store(class boost::program_options::basic_parsed_options<unsigned short> const &,class boost::program_options::variables_map &)" referenced in function _main
main.obj : error LNK2019: unresolved external symbol "public: __thiscall boost::program_options::basic_parsed_options<unsigned short>::basic_parsed_options<unsigned short>(class boost::program_options::basic_parsed_options<char> const &)" referenced in function "public: class boost::program_options::basic_parsed_options<unsigned short> __thiscall boost::program_options::basic_command_line_parser<unsigned short>::run(void)"
main.obj : error LNK2019: unresolved external symbol "class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > __cdecl boost::program_options::to_internal(class std::basic_string<unsigned short,struct std::char_traits<unsigned short>,class std::allocator<unsigned short> > const &)" referenced in function "class std::vector<class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::allocator<class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > > > __cdecl boost::program_options::to_internal<class std::basic_string<unsigned short,struct std::char_traits<unsigned short>,class std::allocator<unsigned short> > >(class std::vector<class std::basic_string<unsigned short,struct std::char_traits<unsigned short>,class std::allocator<unsigned short> >,class std::allocator<class std::basic_string<unsigned short,struct std::char_traits<unsigned short>,class std::allocator<unsigned short> > > > const &)"

Finally, if I fall back to the default main definition set up by Visual Studio, it compiles and links:

int main( const int argc, _TCHAR* argv[]){ ... }

So, that's good for Windows, but will this work when I try to take it to *nix? Do those systems typically define a _TCHAR type? I haven't come across it personally. What is the proper way to define main to work on Windows and *nix, plus work with the boost::program_options library?
It seems to be a constness-related problem. Try:

int main( int argc, char* argv[] )
{
    // ...
}
1,066,183
1,086,771
QWinWidget Inside MFC Dialog Not Repainting or Responding to Tab/Arrow keys
I am using a QWinWidget inside of an MFC dialog, and the QWinWidget is not drawing itself correctly and is not handling keyboard input correctly.

Repainting [Unsolved]

Within the QWinWidget, I have a QTableWidget. When I scroll the QTableWidget, it does not redraw itself until I stop scrolling, at which point it redraws everything. Similarly, I can type into cells in the QTableWidget and the control is not updated until I force it to re-update by scrolling up or down (it re-updates when the scrolling stops). Since this QWinWidget is housed in an MFC CDialog, I tried overriding the CDialog's OnPaint method and only calling the QWinWidget::repaint method; however, this has the opposite problem where now only the QWinWidget is updated and the CDialog is never redrawn, resulting in artifacts. If I call QWinWidget::repaint and CDialog::OnPaint, the result is the same as not overriding the OnPaint method. Has anyone ever seen this problem or know how to resolve it?

Keyboard Input [Solved]

None of the controls within the QWinWidget respond to the tab key or arrow keys correctly. The tab/arrow keys simply skip over the entire QWinWidget (and all child controls). Even if I click inside the QWinWidget and select a control, the next time I press the tab key, it skips the focus completely out of the entire QWinWidget. I noticed that the QWinWidget has two functions, QWinWidget::focusNextPrevChild and QWinWidget::focusInEvent, and both of them have a comment header saying "\reimp". Am I supposed to override these functions in order to get correct tab functionality? If so, how can these functions be implemented for correct tab functionality?
I have fixed the keyboard input issue. The QWinWidget class needed some changes. In the QWinWidget::init method, WS_TABSTOP must be added to the window style:

SetWindowLong(winId(), GWL_STYLE, WS_CHILD | WS_CLIPCHILDREN | WS_CLIPSIBLINGS | WS_TABSTOP);

Also, the QWinWidget::winEvent method needs to respond to WM_GETDLGCODE to let Windows know that it is interested in receiving key/tab inputs. I had to add this if block:

if (msg->message == WM_GETDLGCODE)
{
    *result = DLGC_WANTARROWS | DLGC_WANTTAB;
    return true;
}

I am still working on getting the widget to paint properly.
1,066,677
1,066,706
How to iterate over a std::map full of strings in C++
I have the following issue related to iterating over an associative array of strings defined using std::map.

-- snip --
class something
{
    //...
private:
    std::map<std::string, std::string> table;
    //...
};

In the constructor I populate table with pairs of string keys associated with string data. Somewhere else I have a method toString that returns a string object containing all the keys and associated data contained in the table object (in key=data format).

std::string something::toString()
{
    std::map<std::string, std::string>::iterator iter;
    std::string* strToReturn = new std::string("");
    for (iter = table.begin(); iter != table.end(); iter++)
    {
        strToReturn->append(iter->first());
        strToReturn->append('=');
        strToRetunr->append(iter->second());
        //....
    }
    //...
}

When I'm trying to compile I get the following error:

error: no match for call to ‘(std::basic_string<char, std::char_traits<char>, std::allocator<char> >) ()’

Could somebody explain to me what is missing, what I'm doing wrong? I only found some discussion about a similar issue in the case of hash_map, where the user has to define a hashing function to be able to use hash_map with std::string objects. Could it be something similar in my case?
Your main problem is that you are calling a method called first() on the iterator. What you are meant to do is use the property called first:

...append(iter->first)

rather than

...append(iter->first())

As a matter of style, you shouldn't be using new to create that string.

std::string something::toString()
{
    std::map<std::string, std::string>::iterator iter;
    std::string strToReturn; // This is no longer on the heap
    for (iter = table.begin(); iter != table.end(); ++iter)
    {
        strToReturn.append(iter->first); // Not a method call
        strToReturn.append("=");
        strToReturn.append(iter->second);
        //....
        // Make sure you don't modify table here or the iterators will not work as you expect
    }
    //...
    return strToReturn;
}

edit: facildelembrar pointed out (in the comments) that in modern C++ you can now rewrite the loop:

for (auto& item : table)
{
    ...
}
1,066,971
1,066,979
Is C or C++ better for making portable code?
I am trying to have some fun this summer: writing a piece of code that enables presenting the Arabic language on systems that support Unicode but have no support for eastern languages. I am writing only the logic, hopefully with no integration code initially. Should I use C++ or C? Which is the easier language for writing portable code, and easier to integrate with Python possibly?

Edit: I am fairly good with C/C++, though I consider myself closer to C++. But it seems it is easier to write C and plug it in everywhere, or am I wrong? I would write some functions to process Arabic Unicode strings. Presenting the Arabic language needs some processing because ALMOST ALL characters have different shapes in different contexts.

Edit: It seems I will go with C++, just to make it more fun.
I would use C++, mostly because it provides a lot more "stuff" to use, and as far as my experience goes it is as portable as C. However, I have not used a straight C compiler for 10 years or more.

EDIT: A commenter questioned my experience with portability. Mine is limited to Linux and Win32 primarily. I assumed these would be sufficient OSes for this exercise.
1,067,066
1,067,079
Compilation errors through incorrect use of CComPtr objects
I have defined the following CComPtr object and method in my class:

private:
    CComPtr<IRawPdu>& getRawPdu();
    // Returns the RawPdu interface pointer from the mRawPdu data member.
    // mRawPdu is initialized, if necessary.

    CComPtr<IRawPdu> mRawPdu;
    // Initialized to 0 in the ctor. Uses lazy evaluation via getRawPdu().

In the constructor of my class, I initialise mRawPdu to 0 via the initialiser list. The getRawPdu() method uses lazy evaluation if mRawPdu has yet to be initialised. When compiling the code, I get the following errors:

Compiling...
topport.cpp
C:\Program Files\Microsoft Visual Studio 8\VC\atlmfc\include\atlcomcli.h(295) : error C2664: 'ATL::AtlComPtrAssign' : cannot convert parameter 2 from 'const ATL::CComPtr<T>' to 'IUnknown *'
        with [ T=IRawPdu ]
        No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called
C:\Program Files\Microsoft Visual Studio 8\VC\atlmfc\include\atlcomcli.h(292) : while compiling class template member function 'IRawPdu *ATL::CComPtr<T>::operator =(const ATL::CComPtr<T> &) throw()'
        with [ T=IRawPdu ]
sessionutilities.h(186) : see reference to class template instantiation 'ATL::CComPtr<T>' being compiled
        with [ T=IRawPdu ]
topglobals.cpp
[the same three errors repeat for topglobals.cpp]

Any suggestions as to what could be causing this?
Based on the error given by the compiler, it appears that it cannot find a conversion between IRawPdu and IUnknown. Does IRawPdu actually inherit from IUnknown? If so, then it's possibly an include-ordering issue. Can you give more insight into the hierarchy of IRawPdu?
1,067,102
1,154,402
C++ XML comments to generate MSDN style CHM
I have several projects, some using managed code and some using only unmanaged. All have had XML comments added, and the XML documentation is being generated correctly (the generated XML file and the intermediate .xdc files). Surely there's something that can take these files (the output of xdcmake) and generate MSDN-style CHM help!? From what I understand, both Doxygen and Sandcastle ignore that obvious step and re-invent the wheel by re-scanning your code. (There's also the fact that Sandcastle is apparently useless for non-.NET projects.) Having tried Doxygen (horrible output, but fast) and Sandcastle (nice MSDN-style output, but sloooooow), both are begging to be outdone by something much simpler. It would also be nice if there were some kind of associated editor so that we could also write the 'Getting Started' and 'Information' kind of pages that are also needed with any API documentation. Does anyone know of any solutions?
You might want to try DoxyComment. Here is the description from Doxygen's Helper tools & scripts: An addin for Visual Studio 2005 called DoxyComment was created by Troels Gram. It is designed to assist you in inserting context sensitive comment blocks into C/C++ source files. DoxyComment also comes with an xslt template that lets you generate documentation like the MSDN library.
1,067,236
1,067,249
C/C++ testing framework (like JUnit for java)
I've been hitting my head against the wall, as I don't make any test classes when using C/C++ (but instead have a lot of print methods). What is the most common way to perform testing in C/C++ code? Java's JUnit has left me with only good memories while debugging things. I know that using asserts in code with a defined debug header should work, but aren't there any better ways? And since I intend to make test classes in the future, any useful tips are welcome.
You can check these out: http://gamesfromwithin.com/?p=29 http://www.opensourcetesting.org/unit_c.php http://msdn.microsoft.com/en-us/magazine/cc136757.aspx
1,067,346
1,067,684
Alternatives to dlsym() and dlopen() in C++
I have an application, part of which uses shared libraries. These libraries are linked at compile time. At runtime the loader expects the shared object to be in the LD_LIBRARY_PATH; if it is not found, the entire application crashes with the error "unable to load shared libraries". Note that there is no guarantee that the client would have the library; in that case I want the application to leave a suitable error message, and the independent part should still work correctly. For this purpose I am using dlsym() and dlopen() to use the API in the shared library. The problem with this is that if I have a lot of functions in the API, I have to access them individually using dlsym() and pointers, which in my case are leading to memory corruption and code crashes. Are there any alternatives for this?
The common solution to your problem is to declare a table of function pointers, do a single dlsym() to find it, and then call all the other functions through a pointer to that table. Example (untested):

// libfoo.h
struct APIs {
    void (*api1)(void);
    void *(*api2)(int);
    long (*api3)(int, void *);
};

// libfoo.cc
void fn1(void) { ... }
void *fn2(int) { ... }
long fn3(int, void *) { ... }

extern "C" APIs api_table = { fn1, fn2, fn3 }; // extern "C" so dlsym can find the unmangled name

// client.cc
#include "libfoo.h"
...
void *foo_handle = dlopen("libfoo.so", RTLD_LAZY);
if (!foo_handle) {
    return false;  // library not present
}
APIs *table = static_cast<APIs *>(dlsym(foo_handle, "api_table")); // dlsym returns void*
table->api1();               // calls fn1
void *p = table->api2(42);   // calls fn2
long x = table->api3(1, p);  // calls fn3

P.S. Accessing your API functions individually using dlsym and pointers does not in itself lead to memory corruption and crashes. Most likely you just have bugs.

EDIT: You can use this exact same technique with a 3rd-party library. Create a libdrmaa_wrapper.so and put the api_table into it. Link the wrapper directly against libdrmaa.so. In the main executable, dlopen("libdrmaa_wrapper.so", RTLD_NOW). This dlopen will succeed if (and only if) libdrmaa.so is present at runtime and provides all the API functions you used in the api_table. If it does succeed, a single dlsym call will give you access to the entire API.
1,067,535
1,067,561
Custom-typed Reference members in C++ - initialising them
This snippet of code is giving me a headache. Personally, I would like to use references as they are neater compared to pointers, so I tried this:

#include "SomeClass.h"

class FooBar
{
private:
    SomeClass& member_;
public:
    FooBar() : member_(SomeClass()) {}
};

I have read that you need to assign a temporary variable to a class member reference, so in this case I create a dummy SomeClass() (I'm not sure if I am doing it right here; I tried it with and without specifying a default constructor). However, it does not compile in VS 2005, saying that member_ cannot be initialised. How should I be doing this? Thanks in advance!
1) References cannot be changed to refer to another object after they are initialized. Do you really need this behavior?

2) When you initialize a reference with a temporary object, the reference is invalid as soon as the temporary goes out of scope. That's why your code is incorrect: you would be left with a useless member.

I'd recommend thinking about two alternatives:

a) Consider using a pointer instead of a reference.

b) Change your constructor to something like this:

MyClass(type & a) : member_(a) { ... }
1,067,558
1,067,622
Pass an unary predicate to a function in C++
I need a function which establishes a policy for my class for displaying items, e.g.:

SetDisplayPolicy(BOOLEAN_PRED_T f)

This is assuming BOOLEAN_PRED_T is a function pointer to some boolean predicate type like:

typedef bool (*BOOLEAN_PRED_T) (int);

I'm interested only in, e.g., displaying something when the passed predicate is TRUE and not displaying when it's false. The above example works for functions returning bool and taking an int, but I need a very generic pointer for the SetDisplayPolicy argument, so I thought of UnaryPredicate, but it's Boost-related. How can I pass a unary predicate to a function in STL/C++? unary_function<bool, T> won't work, because I need a bool as return value, but I want to ask the user just for a "unary function that returns bool", in the most generic approach. I thought of deriving my own type as:

template<typename T>
class MyOwnPredicate : public std::unary_function<bool, T> {};

Could that be a good approach?
Turn SetDisplayPolicy into a function template:

template<typename Pred>
void SetDisplayPolicy(Pred pred) // take by value: a temporary functor won't bind to a non-const reference
{
    // Depending on what you want exactly, you may want to store a copy of pred,
    // etc. You may need to templatize the appropriate field for this.
}

Then to use, do:

struct MyPredClass
{
    bool operator()(myType a) { /* your code here */ }
};

SetDisplayPolicy(MyPredClass());

In the display code you would then do something like:

if (myPred(/* whatever */))
    Display();

Of course, your functor may need to have state, and you may want its constructor to do stuff, etc. The point is that SetDisplayPolicy doesn't care what you give it (including a function pointer), provided that you can stick a function call onto it and get back a bool.

Edit: And, as csj said, you could inherit from the STL's unary_function, which does the same thing and will also buy you the two typedefs argument_type and result_type.
1,067,607
1,067,661
Closing a thread with select() system call statement?
I have a thread to monitor a serial port using the select system call. The run function of the thread is as follows:

void <ProtocolClass>::run()
{
    int fd = mPort->GetFileDescriptor();
    fd_set readfs;
    int maxfd = fd + 1;
    int res;

    struct timeval Timeout;
    Timeout.tv_usec = 0;
    Timeout.tv_sec = 3;
    //BYTE ack_message_frame[ACKNOWLEDGE_FRAME_SIZE];

    while (true)
    {
        usleep(10);
        FD_ZERO(&readfs);
        FD_SET(fd, &readfs);
        res = select(maxfd, &readfs, NULL, NULL, NULL);
        if (res < 0)
            perror("\nselect failed");
        else if (res == 0)
            puts("TIMEOUT");
        else if (FD_ISSET(fd, &readfs))
        { // IF INPUT RECEIVED
            qDebug("************RECEIVED DATA****************");
            FlushBuf();
            qDebug("\nReading data into a read buffer");
            int bytes_read = mPort->ReadPort(mBuf, 1000);
            mFrameReceived = false;
            for (int i = 0; i < bytes_read; i++)
            {
                qDebug("%x", mBuf[i]);
            }
            // if a complete frame has been received, write the acknowledge message frame to the port.
            if (bytes_read > 0)
            {
                qDebug("\nAbout to Process Received bytes");
                ProcessReceivedBytes(mBuf, bytes_read);
                qDebug("\n Processed Received bytes");
                if (mFrameReceived)
                {
                    int no_bytes = mPort->WritePort(mAcknowledgeMessage, ACKNOWLEDGE_FRAME_SIZE);
                } // if frame received
            } // if bytes read > 0
        } // if input received
    } // end while
}

The problem is that when I exit from this thread using

delete <protocolclass>::instance();

the program crashes with a glibc error of malloc memory corruption. On checking the core with gdb, it was found that when exiting, the thread was still processing data, hence the error. The destructor of the protocol class looks as follows:

<ProtocolClass>::~<ProtocolClass>()
{
    delete [] mpTrackInfo; // delete data
    wait();
    mPort->ClosePort();
    s_instance = NULL; // static instance of singleton
    delete mPort;
}

Is this due to select? Do the semantics for destroying objects change when select is involved? Can someone suggest a clean way to destroy threads involving a select call? Thanks
I'm not sure what threading library you use, but you should probably signal the thread in one way or another that it should exit, rather than killing it. The simplest way would be to keep a boolean that is set true when the thread should exit, and use a timeout on the select() call to check it periodically.

void ProtocolClass::StopThread()
{
    kill_me = true;
    // Wait for thread to die
    Join();
}

void ProtocolClass::run()
{
    struct timeval tv;
    ...
    while (!kill_me)
    {
        ...
        tv.tv_sec = 1;
        tv.tv_usec = 0;
        res = select(maxfd, &readfds, NULL, NULL, &tv);
        if (res < 0)
        {
            // Handle error
        }
        else if (res != 0)
        {
            ...
        }
    }
}

You could also set up a pipe and include it in readfds, and then just write something to it from another thread. That would avoid waking up every second and bring down the thread without delay. Also, you should of course never use a boolean variable like that without some kind of lock, ...
1,067,630
1,067,819
SSE2 option in Visual C++ (x64)
I've added an x64 configuration to my C++ project to compile a 64-bit version of my app. Everything looks fine, but the compiler gives the following warning:

cl : Command line warning D9002 : ignoring unknown option '/arch:SSE2'

Is SSE2 optimization really not available for 64-bit projects?
It seems all 64-bit processors have SSE2. Since this compiler option is effectively always on for x64 targets, there is no need to switch it on manually.

From Wikipedia:

SSE instructions: The original AMD64 architecture adopted Intel's SSE and SSE2 as core instructions. SSE3 instructions were added in April 2005. SSE2 replaces the x87 instruction set's IEEE 80-bit precision with the choice of either IEEE 32-bit or 64-bit floating-point mathematics. This provides floating-point operations compatible with many other modern CPUs. The SSE and SSE2 instructions have also been extended to operate on the eight new XMM registers. SSE and SSE2 are available in 32-bit mode in modern x86 processors; however, if they're used in 32-bit programs, those programs will only work on systems with processors that have the feature. This is not an issue in 64-bit programs, as all AMD64 processors have SSE and SSE2, so using SSE and SSE2 instructions instead of x87 instructions does not reduce the set of machines on which x64 programs can be run. SSE and SSE2 are generally faster than, and duplicate most of the features of, the traditional x87 instructions, MMX, and 3DNow!.
1,067,821
1,068,111
ublas vs. matrix template library (MTL4)
I'm writing software for hyperbolic partial differential equations in C++. Almost all notations are vector and matrix ones. On top of that, I need the linear algebra solver. And yes, the vector and matrix sizes can vary considerably (from, say, 1000 to sizes that can be solved only by distributed-memory computing, e.g. clusters or similar architectures). If I lived in utopia, I'd have a linear solver which scales well for clusters, GPUs and multicores.

When thinking about the data structure that should represent the variables, I came across boost.ublas and MTL4. Both libraries are BLAS level 3 compatible; MTL4 implements a sparse solver and is much faster than ublas. Neither has implemented support for multicore processors, not to mention parallelization for distributed-memory computations. On the other hand, the development of MTL4 depends on the sole effort of 2 developers (at least as I understand it), and I'm sure there is a reason that ublas is in the boost library. Furthermore, Intel's MKL library includes an example for binding their structure with ublas.

I'd like to bind my data and software to a data structure that will be rock solid, developed and maintained for a long period of time. Finally, the question: what is your experience with the use of ublas and/or MTL4, and what would you recommend?

thanx, mightydodol
With your requirements, I would probably go for BOOST::uBLAS. Indeed, a good deployment of uBLAS should be roughly on par with MTL4 regarding speed. The reason is that there exist bindings for ATLAS (hence shared-memory parallelization that you can efficiently optimize for your computer), and also vendor-tuned implementations like the Intel Math Kernel Library or HP MLIB. With these bindings, uBLAS with a well-tuned ATLAS / BLAS library doing the math should be fast enough. If you link against a given BLAS / ATLAS, you should be roughly on par with MTL4 linked against the same BLAS / ATLAS using the compiler flag -DMTL_HAS_BLAS, and most likely faster than MTL4 without BLAS, according to their own observation (for example, where GotoBLAS outperforms MTL4).

To sum up, speed should not be your decisive factor as long as you are willing to use some BLAS library. Usability and support are more important. You have to decide whether MTL or uBLAS is better suited for you. I tend towards uBLAS, given that it is part of BOOST, and MTL4 currently only supports BLAS selectively. You might also find this slightly dated comparison of scientific C++ packages interesting.

One big BUT: for your requirements (extremely big matrices), I would probably skip the "syntactic sugar" of uBLAS or MTL and call the "metal" C interface of BLAS / LAPACK directly. But that's just me... Another advantage is that it should then be easier to switch to ScaLAPACK (distributed-memory LAPACK; I have never used it) for bigger problems. Just to be clear: for household problems, I would not suggest calling a BLAS library directly.
1,067,827
1,067,862
Dangerous ways of removing compiler warnings?
I like to enforce a policy of no warnings when I check someone's code. Any warnings that appear have to be explicitly documented, as sometimes it's not easy to remove some warnings, or doing so might require too many cycles or too much memory etc. But there is a downside to this policy, and that is removing warnings in ways that are potentially dangerous, i.e. the method used actually hides the problem rather than fixes it. The one I'm most acutely aware of is explicit casts, which might hide a bug. What other potentially dangerous ways of removing compiler warnings in C(++) are there that I should look out for?
const correctness can cause a few problems for beginners: // following should have been declared as f(const int & x) void f( int & x ) { ... } later: // n is only used to pass the parameter "4" int n = 4; // really wanted to say f(4) f( n ); Edit1: In a somewhat similar vein, marking all member variables as mutable, because your code often changes them when const correctness says it really shouldn't. Edit2: Another one I've come across (possibly from Java programmers) is to tack throw() specifications onto functions, whether they could actually throw or not.
1,067,986
1,070,240
C++ Linking and COM Registration issue
I've added a new library to my application (multiple projects/DLLs) - SQLite, to perform some in-memory caching. There is just one library/project that is affected by this change - Lib1. A build goes through fine. All libraries are built successfully and no errors are reported, including a couple of COM objects. If I try to register the COM objects, I get the "The DLL could not be loaded. Check to make sure all required application runtime files and other dependent DLLs are available in the component DLL's directory or the system path." message. But all the libs are in the same place. And all are in the path. A copy of this project builds and registers fine (without the few changes made for SQLite of course). Dependency Walker reports no issues. Oddly, if I try to register the DLL of the COM object directly (using regsvr32) it works fine. Also, I have another library, which is dependent on Lib1 (not on SQLite), which also cannot be loaded. Any ideas? Thanks, A'z
You can use Process Monitor (http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx) set to filter process name regsvr32.exe in order to see all file and registry access. Always use full path to your-com-dll when you issue regsvr32 commands, if you have the same dll somewhere else in path (for example c:\windows\system32) regsvr32 will use the other dll and not the one in your current directory. Another trick would be to use "rundll32 your-com-dll,DllRegisterServer". In case of missing dlls it will tell which dll is missing instead of just saying that LoadLibrary failed. Edit: What do you mean by "If I try to register the com objects"? How are you doing this? I'm asking because you say that regsvr32 on the dll which actually implements these com object works fine.
1,068,092
1,068,136
Exposing C# via COM for C++ Client
we're considering exposing some C# types to C++ clients via COM. What problems can we expect to hit over the life of the project? E.g. how will versioning be managed? On versioning, it would seem from reading this that we should decorate our types to be exposed with [ClassInterface(ClassInterfaceType.None)] and use an explicit interface. That way I assume we fully control the interface that will be exposed to COM clients. Thanks in advance.
Since you are using a C++ client you should definitely use explicit interfaces for early binding. Dispatch interfaces are useful when using scripting clients such as VBS but they are rarely useful for C++ clients. The only way to version an interface is to create a new interface (possibly inheriting from the original interface). When using explicit interfaces you have full control over this process. This means you should create an interface for every class that you intend to expose via COM. Don't forget to mark every interface and class with the ComVisible and Guid attributes. Also all your classes must have a default constructor.
1,068,134
1,068,240
Comparing wstring with ignoring the case
I am sure this would have been asked before but I couldn't find it. Is there any built-in way (i.e. either using std::wstring's methods or the algorithms) to do a case-insensitive comparison of two wstring objects?
If you don't mind being tied to the Microsoft implementation, you can use this function declared in <string.h>: int _wcsnicmp( const wchar_t *string1, const wchar_t *string2, size_t count ); But if you want the best performance/compatibility/functionality ratio, you will probably have to look at the boost library (much of it follows STL conventions anyway). Simple example (taken from a different answer to a different question): #include <boost/algorithm/string.hpp> std::wstring wstr1 = L"hello, world!"; std::wstring wstr2 = L"HELLO, WORLD!"; if (boost::iequals(wstr1, wstr2)) { // Strings are identical }
1,068,557
1,069,200
C++ Storing references to values in std::map
Am I right in assuming that adding/removing elements to an std::map does not affect the other elements (i.e. cause them to be relocated in memory), and so that the following is safe? I looked at various sites with info on the container but only found out about the cases where iterators are invalidated, which I already know... std::map<std::string,std::string> map; PopulateMap(map); std::string &a= map["x"]; AddMoreData(map); RemoveRandomKeysExceptX(map); map["x"] = "foo"; std::cout << a << " " << map["x"] << std::endl;//prints "foo foo" a = "bar"; std::cout << a << " " << map["x"] << std::endl;//prints "bar bar" I tested some similar code on VC9, which seems to work; however, that doesn't mean I didn't just get lucky or that it doesn't vary across compilers.
The Standard is clear on this in 23.1.2/8 about associative containers The insert members shall not affect the validity of iterators and references to the container, and the erase members shall invalidate only iterators and references to the erased elements.
1,068,663
1,077,548
How to modify options being passed to ld , without recompiling gcc
I'm trying to compile a shared library on Solaris 2.7 using gcc 3.4.6, which is linking against statically compiled C .a and .o files. Please note that it is using the Sun ld from "/usr/ccs/bin/ld". At link time I got a long list of symbols and the following error: ld: fatal: relocations remain against allocatable but non-writable sections collect2: ld returned 1 exit status Then I tried to build it passing the -z textoff option to ld, but I'm getting the following error: ld: fatal: option -ztextoff and -ztext are incompatible ld: fatal: Flags processing errors Is there any other way where I don't need to recompile gcc and can still modify the options getting passed to ld?
The errors are the result of linking position-dependent code into a shared library. Such code will result in the library not being shareable, and thus wasting RAM. If you can rebuild all the objects you are trying to link into the shared library, the simplest (and most correct) solution is to rebuild all of them with -fPIC flag. However, sometimes you really must link non-PIC object code which you can't rebuild into a shared library, and therefore you need to get rid of the -ztext option. To do that, add -mimpure-text option to your link line.
1,068,762
1,068,797
Calling C# from C++, Reverse P/Invoke, Mixed Mode DLLs and C++/CLI
As I understand it I can use reverse P/Invoke to call C# from C++. Reverse P/Invoke is simply a case of: Create your managed (C#) class. Create a C++/CLI (formerly Managed C++) class library project. Use this to call the managed C# class (presumably via a reference). Call the C++/CLI code from native C++. Questions: Is this correct? Is the DLL created at step 2 known as a mixed-mode DLL? Has C++/CLI completely superseded Managed C++ as far as MS are concerned? Is COM completely avoided using this approach? At what point would the CLR be created and run, and by whom? Thanks in advance
Here are the answers to the best of my knowledge: Yes. Yes, it is a mixed-mode DLL. (In fact, you can make one file of your native C++ project managed, create this C++/CLI class in that file and call the code directly from that file. You don't even need a separate DLL to accomplish this.) C++/CLI and Managed C++ both represent the same thing. The only difference is that in the older versions, up to Visual Studio 2003, it was termed Managed C++. Later on, the syntax was changed quite a lot and it was renamed C++/CLI. Have a look at this link for details. Yes. The CLR will be loaded and used whenever a call to the managed DLL is made.
1,069,335
1,069,367
How to implement 'virtual ostream & print( ostream & out ) const;'
I found this function in the header file of an abstract class: virtual ostream & print( ostream & out ) const; Can anyone tell me what kind of function this is and how to declare it in a derived class? From what I can tell, it looks like it returns a reference to an ostream. If I implement it in my cc file with nothing in it, I get a compiler error: error: expected constructor, destructor, or type conversion before '&' token Can someone show me a simple implementation of how to use it?
Some implementation: ostream& ClassA::print( ostream& out) const { out << myMember1 << myMember2; return out; } Returning the same ostream allows combinations like a.print( myStream) << someOtherVariables; However, it is still strange to use it this way. Regarding the error: ostream is part of the std namespace, and not part of the global namespace or the namespace the class you're referring to is part of, so in your .cc file you need to qualify it as std::ostream (or have a suitable using declaration in scope).
1,069,352
1,069,397
Is it possible to turn off support for "and" / "or" boolean operator usage in gcc?
GCC seems to allow "and" / "or" to be used instead of "&&" / "||" in C++ code; however, as I expected, many compilers (notably MSVC 7) do not support this. The fact that GCC allows this has caused some annoyances for us in that we have different developers working on the same code base on multiple platforms and occasionally, these "errors" slip in as people are switching back and forth between Python and C++ development. Ideally, we would all remember to use the appropriate syntax, but for those situations where we occasionally mess up, it would be really nice if GCC didn't let it slide. Anybody have any ideas on approaches to this? If "and" and "or" are simply #defines then I could #undef when using GCC but I worry that it is more likely built into the compiler at more fundamental level. Thanks.
They are part of the C++ standard, see for instance this StackOverflow answer (which quotes the relevant parts of the standard). Another answer in the same question mentions how to do the opposite: make them work in MSVC. To disable them in GCC, use -fno-operator-names. Note that, by doing so, you are in fact switching to a non-standard dialect of C++, and there is a risk that you end up writing code which might not compile correctly on standard-compliant compilers (for instance, if you declare a variable with a name that would normally be reserved).
1,069,525
1,070,045
How to convert Win Mobile 6 project into Win CE 6.0 RC2
I have a Windows Mobile 6 Professional native project that runs OK on Windows Mobile devices. Now I need a version that runs on Windows Embedded CE 6.0 RC2. What is the best path for this conversion? Can I just change a few project settings / add a new platform with the configuration manager, OR do I have to start with a new smart device project and import the existing files? Further, I will be targeting a device which has still not been delivered to me, so currently I am playing with a Windows CE image I constructed with Platform Builder (I tried to have a very generic OS, with most default components included; of course this will change later). So now I have created an SDK for my OS, installed it, and new smart device projects are targeting this SDK. How does it go in "real world" embedded app development: should the company deliver me an SDK, a BSP or something else? The real hardware will not come soon, so I need to start developing without it.
Adding a new configuration to a native platform is, and always has been, a real nightmare. Your best bet is to just create a new project and add in the source files again. I've complained about this to the Studio for Devices team several times, but it doesn't seem to be a priority to fix. Bear in mind that if you used anything WinMo specific, you're going to have to fix that or come up with a workaround for WinCE. As far as targeting your hardware, you should try to generate an SDK that is as close to what your final OS image will contain as possible. That means the same processor and hopefully the same components. This will prevent you from using libraries or APIs that aren't available in the final OS image. Whether you get an SDK or a BSP depends on how you've worked that out with your vendor. If they are providing just the hardware and you have to roll the OS, then you would get a BSP. If they are providing the hardware and the OS, then they must provide an SDK.
1,069,602
1,656,679
How do I install a c++ library so I can use it?
I have this library called BASS, which is an audio library which I'm going to use to record with the microphone. I have all the files needed to use it, but I don't know how to install the library. I tried taking the example files and putting them in the same directory as the bass.h file, but I got a bunch of errors saying there are function calls that don't exist. So my question is, how do I install it to be able to use it?
Installing a C++ library means specifying to interested software (e.g. a compiler) the location of two kinds of files: headers (typical extensions *.h or *.hpp) and compiled objects (*.dll or *.lib for instance). The headers will contain the declarations exposed to the developer by the library authors, and your program will #include them in its source code; the dll will contain the compiled code, which will be linked together and used by your program, and it will be found by the linker (or loaded dynamically, but this is another step). So you need to: Put the header files in a location which your compiler is aware of (typically IDEs allow you to set so-called include directories; otherwise you specify a flag like -I<path-to-headers> when invoking the compiler). Put the dll files in a location which your linker is aware of (surely your IDE will allow that; otherwise you specify flags like -L<path-to-libraries> -l<name-of-library>). Last but not least, since I see that the BASS library is a commercial product, probably they will have made available some installation instructions?
1,069,621
1,069,634
Are members of a C++ struct initialized to 0 by default?
I have this struct: struct Snapshot { double x; int y; }; I want x and y to be 0. Will they be 0 by default or do I have to do: Snapshot s = {0,0}; What are the other ways to zero out the structure?
They are not zero if you don't initialize the struct. Snapshot s; // receives no initialization Snapshot s = {}; // value initializes all members The second will make all members zero; the first leaves them at unspecified values. Note that it is recursive: struct Parent { Snapshot s; }; Parent p; // receives no initialization Parent p = {}; // value initializes all members The second will make p.s.{x,y} zero. You cannot use these aggregate initializer lists if you've got constructors in your struct. If that is the case, you will have to add proper initialization to those constructors: struct Snapshot { int x; double y; Snapshot():x(0),y(0) { } // other ctors / functions... }; will initialize both x and y to 0. Note that you can use x(), y() to initialize them regardless of their type: that's then value initialization, and usually yields a proper initial value (0 for int, 0.0 for double, calling the default constructor for user defined types that have user declared constructors, ...). This is important especially if your struct is a template.
1,069,860
1,069,888
OpenThread() Returns NULL Win32
I feel like there is an obvious answer to this, but it's been eluding me. I've got some legacy code in C++ here that breaks when it tries to call OpenThread(). I'm running it in Visual C++ 2008 Express Edition. The program first gets the ThreadID of the calling thread, and attempts to open it, like so: ThreadId threadId = IsThreaded() ? thread_id : ::GetCurrentThreadId(); HANDLE threadHandle = OpenThread(THREAD_ALL_ACCESS, FALSE, threadId); Now here's what I don't understand: if the thread ID is the current thread's ID, isn't it already open? Could that be why it's returning NULL? Any feedback would be appreciated.
Maybe you're asking for too much access (THREAD_ALL_ACCESS), though I'd think that you'd have pretty much all permissions to your own thread. Try reducing the access to what you really need. What does GetLastError() return? Update: Take a look at this comment from MSDN: Windows Server 2003 and Windows XP/2000: The size of the THREAD_ALL_ACCESS flag increased on Windows Server 2008 and Windows Vista. If an application compiled for Windows Server 2008 and Windows Vista is run on Windows Server 2003 or Windows XP/2000, the THREAD_ALL_ACCESS flag is too large and the function specifying this flag fails with ERROR_ACCESS_DENIED. To avoid this problem, specify the minimum set of access rights required for the operation. If THREAD_ALL_ACCESS must be used, set _WIN32_WINNT to the minimum operating system targeted by your application (for example, #define _WIN32_WINNT _WIN32_WINNT_WINXP ). For more information, see Using the Windows Headers
1,070,333
1,074,325
Is there an easier way to pop off a directory from boost::filesystem::path?
I have a relative path (e.g. "foo/bar/baz/quux.xml") and I want to pop a directory off so that I will have the subdirectory + file (e.g. "bar/baz/quux.xml"). You can do this with path iterators, but I was hoping there was something I was missing from the documentation or something more elegant. Below is the code that I used. #include <iostream> #include <list> #include <string> #include <boost/filesystem/path.hpp> #include <boost/filesystem/operations.hpp> #include <boost/filesystem/convenience.hpp> #include <boost/filesystem/exception.hpp> #include <boost/assign.hpp> using namespace std; boost::filesystem::path pop_directory(const boost::filesystem::path& path) { list<string> parts; copy(path.begin(), path.end(), back_inserter(parts)); if (parts.size() < 2) { return path; } else { boost::filesystem::path pathSub; for (list<string>::iterator it = ++parts.begin(); it != parts.end(); ++it) { pathSub /= *it; } return pathSub; } } int main(int argc, char* argv[]) { list<string> test = boost::assign::list_of("foo/bar/baz/quux.xml") ("quux.xml")("foo/bar.xml")("./foo/bar.xml"); for (list<string>::iterator i = test.begin(); i != test.end(); ++i) { boost::filesystem::path p(*i); cout << "Input: " << p.native_file_string() << endl; boost::filesystem::path p2(pop_directory(p)); cout << "Subdir Path: " << p2.native_file_string() << endl; } } The output is: Input: foo/bar/baz/quux.xml Subdir Path: bar/baz/quux.xml Input: quux.xml Subdir Path: quux.xml Input: foo/bar.xml Subdir Path: bar.xml Input: ./foo/bar.xml Subdir Path: foo/bar.xml What I was hoping for was something like: boost::filesystem::path p1(someString); boost::filesystem::path p2(p1.pop()); If you look at some test code on codepad.org, I have tried branch_path (returns "foo/bar/baz") and relative_path (returns "foo/bar/baz/quux.xml").
Here is something that a co-worker figured out just using string::find with boost::filesystem::slash. I like this that it doesn't require iterate over the entire path breaking it up, but also using the path's OS-independent definition of the path separation character. Thanks Bodgan! boost::filesystem::path pop_front_directory(const boost::filesystem::path& path) { string::size_type pos = path.string().find(boost::filesystem::slash<boost::filesystem::path>::value); if (pos == string::npos) { return path; } else { return boost::filesystem::path(path.string().substr(pos+1)); } }
1,070,351
1,070,481
GetAdaptersInfo and GetAdaptersAddresses BufferLength Param
I've got some legacy code in C++ here that does some things I don't understand. I'm running it in Visual C++ 2008 Express Edition on a machine running Windows XP. The code uses some Windows functions: GetAdaptersInfo and GetAdaptersAddresses. I realize that the final parameter for both of these is a pointer to the size of the buffer and since it's in_out, it can be changed within the function. My question is: are these functions supposed to change the buffer length? In the code I have, every time these functions are called the buffer length variable is initialized to zero, and after the function is called, it's still 0.
Your code needs to look something like this: // First get the desired size. unsigned long outBufLen = 0; DWORD dwResult = GetAdaptersInfo(NULL, &outBufLen); if (dwResult == ERROR_BUFFER_OVERFLOW) // This is what we're expecting { // Now allocate a structure of the required size. PIP_ADAPTER_INFO pIpAdapterInfo = (PIP_ADAPTER_INFO) malloc(outBufLen); dwResult = GetAdaptersInfo(pIpAdapterInfo, &outBufLen); if (dwResult == ERROR_SUCCESS) { // Yay! } } Edit: See also Jeremy Friesner's answer for why this code isn't quite enough.
1,070,363
1,070,457
Generate HTML Pages from C Structures
I would like to develop an application (I prefer C++) which will take a C header file with a lot of nested structures as input and generate an HTML page where the data will be presented as a hierarchical tree structure which can be collapsed. file.h struct level1 { struct level2 { struct level3 { }; }; }; file.html [+] level1 I can collapse the level1 as below [-] level1 [-] level2 [+] level3 It's for learning purposes, and I am not sure where to start. A few pointers will be really helpful.
The hardest part will be parsing the C header files. GCCXML will do that for you, outputting an XML structure that's then trivial to parse.
1,070,497
1,070,499
C++ convert hex string to signed integer
I want to convert a hex string to a 32 bit signed integer in C++. So, for example, I have the hex string "fffefffe". The binary representation of this is 11111111111111101111111111111110. The signed integer representation of this is: -65538. How do I do this conversion in C++? This also needs to work for non-negative numbers. For example, the hex string "0000000A", which is 00000000000000000000000000001010 in binary, and 10 in decimal.
Use std::stringstream: unsigned int x; std::stringstream ss; ss << std::hex << "fffefffe"; ss >> x; The following example produces -65538 as its result: #include <sstream> #include <iostream> int main() { unsigned int x; std::stringstream ss; ss << std::hex << "fffefffe"; ss >> x; // output it as a signed type std::cout << static_cast<int>(x) << std::endl; } In the new C++11 standard, there are a few new utility functions which you can make use of! Specifically, there is a family of "string to number" functions (http://en.cppreference.com/w/cpp/string/basic_string/stol and http://en.cppreference.com/w/cpp/string/basic_string/stoul). These are essentially thin wrappers around C's string to number conversion functions, but know how to deal with a std::string. So, the simplest answer for newer code would probably look like this: std::string s = "0xfffefffe"; unsigned int x = std::stoul(s, nullptr, 16); NOTE: Below is my original answer, which as the edit says is not a complete answer. For a functional solution, stick to the code above the line :-). It appears that lexical_cast<> is defined to have stream conversion semantics. Sadly, streams don't understand the "0x" notation. So both boost::lexical_cast and my hand-rolled one don't deal well with hex strings. The above solution, which manually sets the input stream to hex, will handle it just fine. Boost has some stuff to do this as well, which has some nice error checking capabilities too. You can use it like this: try { unsigned int x = lexical_cast<int>("0x0badc0de"); } catch(bad_lexical_cast &) { // whatever you want to do... } If you don't feel like using boost, here's a light version of lexical cast which does no error checking: template<typename T2, typename T1> inline T2 lexical_cast(const T1 &in) { T2 out; std::stringstream ss; ss << in; ss >> out; return out; } which you can use like this: // though this needs the 0x prefix so it knows it is hex unsigned int x = lexical_cast<unsigned int>("0xdeadbeef");
1,070,666
1,070,824
Eclipse c++ makefile project output
I have a C++ Makefile project for Eclipse; if I build it, the binary is in the project root. How can I change the build directory to {ROOT}/bin? I tried Project Properties -> C/C++ Build -> Build location (Build directory: MY PATH), but then it can't compile at all.
You use a Makefile project. Everything that has to be done, including where to put an executable, has to be coded into the Makefile by you! Eclipse just kicks off the build by invoking make. A simple example: CXXFLAGS= -g -O0 CXX=g++ all: bin bin/test bin/test: bin/test.o $(CXX) -o bin/test bin/test.o bin/test.o: test.cpp $(CXX) $(CXXFLAGS) -o bin/test.o -c test.cpp bin: mkdir bin clean: rm bin/test.o rm bin/test This is the source I learned how to write a Makefile from: http://www.eng.hawaii.edu/Tutor/Make Plain Makefiles are handy for projects with a small number of files. Once you've been bitten by having a segfault due to missing recompilation (forgot to list a header file at the .o: dependencies) you should move on to a full-blown build system, like cmake. cmake generates Makefiles for you, but it's important to understand the fundamentals of Makefiles to interpret error messages.
1,070,813
1,070,890
Setup main function in eclipse makefile project
I created a new HalloWorld Makefile project. There is a HalloWorld.cpp with my main function. Now I have a file /src/startup.cpp that contains a main function. Now I want to use the main function from /src/startup.cpp. Where can I tell Eclipse to use that one?
Place the following in the file Makefile at the project root CXXFLAGS= -g -O0 CXX=g++ all: bin bin/myprog bin/myprog: bin/startup.o $(CXX) -o bin/myprog bin/startup.o bin/startup.o: src/startup.cpp $(CXX) $(CXXFLAGS) -o bin/startup.o -c src/startup.cpp bin: mkdir bin clean: rm bin/startup.o rm bin/myprog
1,070,882
1,070,897
C++ string.compare()
I'm building a comparator for an assignment, and I'm pulling my hair out because this seems so simple, but I can't figure it out. This function is giving me trouble: int compare(Word *a, Word *b) { string *aTerm = a->getString(); string *bTerm = b->getString(); return aTerm->compare(bTerm); } Word::getString returns a string* Error: In member function `virtual int CompWordByAlpha::compare(Word*, Word*)': no matching function for call to... ...followed by a bunch of function definitions. Any help?
You're comparing a string to a string pointer, and that's not valid. You want return aTerm->compare(*bTerm);
1,071,092
1,071,111
What are the uses of pure virtual functions in C++?
I'm learning about C++ in a class right now and I don't quite grok pure virtual functions. I understand that they are later defined in a derived class, but why would you want to declare one as equal to 0 if you are just going to define it in the derived class?
Briefly, it's to make the class abstract, so that it can't be instantiated, but a child class can override the pure virtual methods to form a concrete class. This is a good way to define an interface in C++.
1,071,119
1,071,461
Accessing types from dependent base classes
Does anyone know why using-declarations don't seem to work for importing type names from dependent base classes? They work for member variables and functions, but at least in GCC 4.3, they seem to be ignored for types. template <class T> struct Base { typedef T value_type; }; template <class T> struct Derived : Base<T> { // Version 1: error on conforming compilers value_type get(); // Version 2: OK, but unwieldy for repeated references typename Base<T>::value_type get(); // Version 3: OK, but unwieldy for many types or deep inheritance typedef typename Base<T>::value_type value_type; value_type get(); // Version 4: why doesn't this work? using typename Base<T>::value_type; value_type get(); // GCC: `value_type' is not a type }; I have a base class with a set of allocator-style typedefs that I'd like to inherit throughout several levels of inheritance. The best solution I've found so far is Version 3 above, but I'm curious why Version 4 doesn't seem to work. GCC accepts the using-declaration, but seems to ignore it. I've checked the C++ Standard, C++ Prog. Lang. 3rd ed. [Stroustrup], and C++ Templates [Vandevoorde, Josuttis], but none seem to address whether using-declarations can be applied to dependent base class types. In case it helps to see another example, here is the same question being asked, but not really answered, on the GCC mailing list. The asker indicates that he has seen 'using typename' elsewhere, but that GCC doesn't seem to support it. I don't have another conforming compiler available to test it.
As Richard Corden points out, this issue was addressed in the C++ Standard Core Language Defect Reports after the 2003 standard was ratified: How do the keywords typename/template interact with using-declarations? Proposed resolution (April 2003, revised October 2003): Add a new paragraph to the bottom of 7.3.3 [namespace.udecl]: If a using-declaration uses the keyword typename and specifies a dependent name (14.7.2 [temp.dep]), the name introduced by the using-declaration is treated as a typedef-name (7.1.3 [dcl.typedef]). This text doesn't seem to appear in the Second Edition standard from October 15, 2003. GCC does not yet implement this resolution, as explained in bug 14258: ------- Comment #3 From Giovanni Bajo 2004-02-27 12:47 [reply] ------- The problem is that our USING_DECL doesn't record the "typename", that is the fact that it is a type which is imported through it. This used to work thanks to the implicit type name extension, I believe. Duplicate bug 21484 indicates that 'using typename' works on Comeau and Intel compilers. Because MSVC treats all names as dependent, the construct is unnecessary (but permitted) for that compiler. Fixed in GCC 4.7 on Dec 13 2011!
1,071,120
1,071,745
How do I use MySQL C++ Connector for storing binary data?
I have a block of binary data defined as: void* address, size_t binarySize; that I want to store to a MySQL database using MySQL C++ Connector. The function setBlob() takes istream. The question: How can I convert from a raw void* address, size_t binarySize to either an istream object or istringstream? Is it possible to do this without "copying" the data? i.e. tell istream the pointer and the size so that it could point to it.
You have to subclass streambuf e.g. like this: class DataBuf : public streambuf { public: DataBuf(char * d, size_t s) { setg(d, d, d + s); } }; Then you can instantiate an istream object which uses a DataBuf as buffer, which itself uses your block of binary data. Supposing that binarySize specifies the size of your binary data in bytes (sizeof(char) should be one byte), you could do this like so: DataBuf buffer((char*)address, binarySize); istream stream(&buffer); That istream object you can now pass to setBlob(). Regards, Elrohir
1,071,417
1,071,598
can't seem to build a Qt project on eclipse (C++, Windows)
I have Qt installed + Qt Eclipse Integration + MinGW, but I can't seem to find a way to build a new Qt GUI project. I'm getting the following error: Error launching builder (mingw32-make debug ) (Cannot run program "mingw32-make": Launching failed) I've updated the Path variable and added everything I can think of that can be related, and nothing.. Path now is: C:\PROGRAM FILES\THINKPAD\UTILITIES;%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;C:\WINDOWS\Downloaded Program Files;C:\Program Files\PC-Doctor for Windows\services;C:\Program Files\SMLNJ\bin\;C:\Program Files\Chez Scheme Version 7.4\bin\i3nt;C:\Program Files\QuickTime\QTSystem\;c:\Program Files\Microsoft SQL Server\100\Tools\Binn\;c:\Program Files\Microsoft SQL Server\100\DTS\Binn\;C:\Qt\2009.03\qt\bin;C:\Qt\2009.03\bin;C:\MinGW\bin; any ideas..?? Thanks, C.
When installing MinGW, did you select the checkbox to install MinGW's make too? You could have a look in c:\mingw\bin to see if there is a mingw32-make executable, and you could try to launch mingw32-make from a Windows console. Probably there is something wrong with your MinGW installation. Hope that helps, Elrohir
1,071,606
1,071,770
How to read "Contributing Artist" metadata in C++?
Windows 7 has a very nifty way of showing "Contributing Artist" metadata in Windows Explorer. I wonder how I can access that metadata from C++? Maybe you could even point to some source code? Greatly appreciated in advance.
Assuming you can limit your app to Windows Vista and later, use IPropertyStore. Otherwise each file type needs to be parsed independently.
1,071,674
1,071,718
Dynamically allocated arrays or std::vector
I'm trying to optimize my C++ code. I've searched the internet on using dynamically allocated C++ arrays vs using std::vector and have generally seen a recommendation in favor of std::vector and that the difference in performance between the two is negligible. For instance here - Using arrays or std::vectors in C++, what's the performance gap?. However, I wrote some code to test the performance of iterating through an array/vector and assigning values to the elements and I generally found that using dynamically allocated arrays was nearly 3 times faster than using vectors (I did specify a size for the vectors beforehand). I used g++-4.3.2. However I feel that my test may have ignored issues I don't know about so I would appreciate any advice on this issue. Thanks Code used - #include <time.h> #include <iostream> #include <vector> using namespace std; int main() { clock_t start,end; std::vector<int> vec(9999999); std::vector<int>::iterator vecIt = vec.begin(); std::vector<int>::iterator vecEnd = vec.end(); start = clock(); for (int i = 0; vecIt != vecEnd; i++) { *(vecIt++) = i; } end = clock(); cout<<"vector: "<<(double)(end-start)/CLOCKS_PER_SEC<<endl; int* arr = new int[9999999]; start = clock(); for (int i = 0; i < 9999999; i++) { arr[i] = i; } end = clock(); cout<<"array: "<<(double)(end-start)/CLOCKS_PER_SEC<<endl; }
When benchmarking C++ containers, it's important to enable most compiler optimisations. Several of my own answers on SO have fallen foul of this - for example, the function call overhead when something like operator[] is not inlined can be very significant.
1,071,720
1,071,859
SQL-Like Selects in Imperative Languages
I'm doing some coding at work in C++, and a lot of the things that I work on involve analyzing sets of data. Very often I need to select some elements from an STL container, and very frequently I write code like this: using std::vector; vector< int > numbers; for ( int i = -10; i <= 10; ++i ) { numbers.push_back( i ); } vector< int > positive_numbers; for ( vector< int >::const_iterator it = numbers.begin(), end = numbers.end(); it != end; ++it ) { if ( *it > 0 ) { positive_numbers.push_back( *it ); } } Over time this for loop and the logic contained within it gets a lot more complicated and unreadable. Code like this is less satisfying than the analogous SELECT statement in SQL, assuming that I have a table called numbers with a column named "num" rather than a std::vector< int > : SELECT * INTO positive_numbers FROM numbers WHERE num > 0 That's a lot more readable to me, and also scales better; over time a lot of the if-statement logic in our codebase has become complicated, order-dependent and unmaintainable. If we could do SQL-like statements in C++ without having to go to a database I think that the state of the code might be better. Is there a simpler way that I can implement something like a SELECT statement in C++ where I can create a new container of objects by only describing the characteristics of the objects that I want? I'm still relatively new to C++, so I'm hoping that there's something magic with either template metaprogramming or clever iterators that would solve this. Thanks! Edit based on first two answers. Thanks, I had no idea that's what LINQ actually was. I program on Linux and OSX systems primarily, and am interested in something cross-platform across OSX, Linux and Windows. So a more educated version of this question would be - is there a cross-platform implementation of something like LINQ for C++?
LINQ is the obvious answer for .NET (or Mono on non-Windows platforms), but in C++ it shouldn't be that difficult to write something like it yourself in STL. Use the Boost.Iterator library to write a "select" iterator, for example, one which skips all elements that do not satisfy a given predicate. Boost already has a few relevant examples in their documentation I believe. Or http://www.boost.org/doc/libs/1_39_0/libs/iterator/doc/filter_iterator.html might actually do what you need out of the box. In any case, in C++, you could achieve the same effect basically by layering iterators. If you have a regular iterator, which visits every element in the sequence, you can wrap that in a filter iterator, which increments the underlying iterator until it finds a value satisfying the condition. Then you could even wrap that in a "select" iterator transforming the value to the desired format. It seems like a fairly obvious idea, but I'm not aware of any complete implementations of it.
1,071,778
1,074,038
Data streaming in MATLAB with input data coming in from a C++ executable
I'm completely new to MATLAB and I want to know what my options are for data streaming from a C++ file. I heard of using the MATLAB "engine" for this purpose, and some of the methods like engPutVariable, etc., but can someone give me a thorough example of how to go about doing it? I'm trying to implement streaming a sine wave, but a simple example of sending a sample set of data through should suffice.
You have two options: the matlab engine and mex functions. It's very important to note that the Matlab API is single-threaded. There is absolutely no way to have user-visible background threads. At best, there are interrupts for UI events. With the Matlab engine, your application is a C++ application that uses Matlab as an add-in library. You can call Matlab functions from C++, but you must make sure that only one thread accesses Matlab at any point in time. So, you could have a thread that feeds data to Matlab from a queue of inputs coming from the rest of your application. The C++ can have as many threads as it wants, but only one can interact with Matlab. The other approach is to have Matlab control the main application and have it call C++ code whenever it wants some more data. The C++ code acts as a plugin for Matlab. The C++ code can have as many threads as it wants, but Matlab polls the C++ when your m-file calls it. Look up the documentation on MEX functions.
1,071,804
1,071,926
yyparse is printing a leading tab
In my bison/flex program, right after yyparse() is called, a leading tab is printed, but I don't know why. Can you see what's wrong? This calls the bison code, and right after yyparse() returns, a tab is printed. void parseArguments(int argc, char** argv) { int i; int sum = 0; // calculate the length of buffer we need for(i = 1; i < argc; i++) { sum += strlen(argv[i]) + 1; } if(sum <= 0) return; // make us a buffer and zero it out char tempBuffer[sum]; memset(tempBuffer, 0, sum); // pointer to walk through our buffer int pos = 0; // copy arguments into the buffer for(i = 1; i < argc; i++) { memcpy(tempBuffer+pos, argv[i], strlen(argv[i])); pos += strlen(argv[i]); sprintf(tempBuffer+pos++, " "); } // give our arguments to lex for parsing configBuffer(tempBuffer); // use bison parsing int returnVal = yyparse(); // after this returns a tab character has been printed if(returnVal != 0) { printf("yyparse failed!\n"); } All my bison rules are just regular expressions paired with a return statement. The only code of interest in bison that I could see affecting this would be this: %% void configBuffer(char* arguments) { #ifdef DEBUG printf("Given the buffer: %s\n", arguments); #endif yy_delete_buffer(YY_CURRENT_BUFFER); yy_scan_string(arguments); } I tried the suggestions given by several people, but still no luck.
Here is my full flex file: %{ #include <string.h> #include "CommandParser.tab.h" %} %% \t { printf("TAB!\n"); } " " { printf("SPACE!\n"); } \n { return; } -p { return PRINTMODE; } -x { return XORMODE; } -n { return NOTMODE; } -a { return ANDMODE; } -o { return ORMODE; } -r { return RANGEFLAG; } -l { return LENGTHFLAG; } 0[xX][0-9a-fA-F]+ { int retVal = sscanf(yytext, "%x",&(yylval.int_val)); if(retVal != 1) return; return NUMBER; } [0-9]+ { yylval.int_val = atoi(yytext); return NUMBER; } ['"].*+['"] { yylval.string_val = strdup(yytext); return ARGUMENT; } [^ \t\n]+ { yylval.string_val = strdup(yytext); return ARGUMENT; } %% void configBuffer(char* arguments) { #define DEBUG #ifdef DEBUG printf("Given the buffer: %s:\n", arguments); #endif yy_delete_buffer(YY_CURRENT_BUFFER); yy_scan_string(arguments); }
Is the tab not handled in your lexer, and therefore the default rule (match and echo) is being applied? Put an extra match \t { printf("TAB"); } into the code before your end code section. If that shows TAB instead of the \t, then turn the printf into an empty statement \t { /*printf("TAB")*/; } After lex posting Edit: Ok, after testing your lexer it would seem you are matching things correctly. I used this code to test it #include <stdio.h> #include "CommandParser.tab.h" YYSTYPE yylval; int main(int argc, char* argv[]) { while(1) { printf("lex:%d\r\n",yylex()); } return 0; } extern "C" int yywrap(); int yywrap () { return 1; } So with the input (via stdin) -a<\ >-x<\t>-p<space>-c<\r> I get lex:103 SPACE! lex:101 TAB! lex:100 SPACE! lex:108 lex:3 for this header file #define PRINTMODE 100 #define XORMODE 101 #define NOTMODE 102 #define ANDMODE 103 #define ORMODE 104 #define LENGTHFLAG 105 #define RANGEFLAG 106 #define NUMBER 107 #define ARGUMENT 108 #define DEFAULT 0 typedef union { int int_val; char* string_val; } YYSTYPE; #ifdef __cplusplus extern "C" int yylex(); extern "C" YYSTYPE yylval; #else // __cplusplus extern YYSTYPE yylval; #endif // __cplusplus So what I'd try next is to replace the yyparse() call with this code and see what you get. while(1) { printf("lex:%d\r\n",yylex()); } If you still get the tab printed it is somehow your lexer; otherwise it is somehow your parser/main program. To find that out I'd replace the magic string building you do with a const string, and see what happens in that case. Basically, binary search your code to find the problem spot.
1,071,888
1,071,897
Can I make C++ in Visual Studio 2008 behave like an earlier version?
I need to work with some old C++ code that was developed in Visual C++ 6.0. Right now it's giving me compile errors galore. (For instance, "cannot open include file: 'iostream.h'"... because now it should say #include <iostream> rather than #include <iostream.h>). How can I work with this code without having to change it all over the place?
Unfortunately, there isn't a targeting feature in VS2008 that lets you do this. You'll just need to clean up your code. Luckily, VS2008 is far more standards-compliant than older versions of Visual C++ (in particular, VC 6). Getting the code clean should help in the future (you're less likely to have to worry about this later), as well as help if you ever decide to port to other platforms.
1,072,085
1,286,193
Control Click to get definition in IDE does not work
I am using C++Builder, I know that to go to a definition of a variable or class you must press control and click on the method name, or any identifier where you want to go to a definition. However, as most of you would notice this does not work all the time. Does anyone have any trick on doing this?
I actually used the Visual Studio keyboard emulation, and because of that I can now right-click for a popup menu and go to the definition. Another benefit of emulating the Visual Studio keyboard setup is that the multi-line Tab and Alt-Tab now work. Sadly, there is no longer a shortcut to compile (F6 in the RAD Studio 2007 default keyboard setup).
1,072,099
1,072,123
Visual Studio 2008, error c2039: 'set_new_handler' : is not a member of 'std'
So the other day I went to compile a VC++ project I am working on and all of a sudden I get errors in almost all of my files saying: new.h: error C2039: 'set_new_handler' : is not a member of 'std' new.h: error C2039: 'set_new_handler' : symbol cannot be used in a using-declaration "new.h" and 'set_new_handler' are not being used in any of my files, so I have no idea how or why these errors are suddenly appearing since they relate to a windows/VS library file. Would anyone know what I can do to clear this error and compile my code again? UPDATE After examining the files being included upon compilation, some files are including <new> and some are including <new.h>. The problem is that one of these headers is being included by afxwin.h and the other by a third-party library. I honestly have no idea what to do with this problem... no other developers that have played with this code are running into it; may it be a settings problem? I am not using precompiled headers.
If I were to hazard a guess, I would say that <new.h> declares set_new_handler in the global namespace and <new> declares it within the std namespace. Some code is including <new.h> and expecting it to act as if it had included <new>. I would suspect either some 3rd party library/header or a precompiled header as suggested by Evan. You can narrow down the culprit using either /showIncludes or pre-processing a source code file (using /E) and examining the output. I usually use the latter and look at the #line directives in the output file to figure out the include chain. Good luck.
1,072,484
1,072,524
Fast string matching algorithm with simple wildcards support
I need to match input strings (URLs) against a large set (anywhere from 1k-250k) of string rules with simple wildcard support. Requirements for wildcard support are as follows: Wildcard (*) can only substitute a "part" of a URL. That is fragments of a domain, path, and parameters. For example, "*.part.part/*/part?part=part&part=*". The only exception to this rule is in the path area where "/*" should match anything after the slash. Examples: *.site.com/* -- should match sub.site.com/home.html, sub2.site.com/path/home.html sub.site.*/path/* -- should match sub.site.com/path/home.html, sub.site.net/path/home.html, but not sub.site.com/home.html Additional requirements: Fast lookup (I realize "fast" is a relative term. Given the max 250k rules, still fall within < 1.5s if possible.) Work within the scope of a modern desktop (e.g. not a server implementation) Ability to return 0:n matches given a input string Matches will have rule data attached to them What is the best system/algorithm for such as task? I will be developing the solution in C++ with the rules themselves stored in a SQLite database.
If I'm not mistaken, you can take string rule and break it up into domain, path, and query pieces, just like it's a URL. Then you can apply a standard wildcard matching algorithm with each of those pieces against the corresponding pieces from the URLs you want to test against. If all of the pieces match, the rule is a match. Example Rule: *.site.com/* domain => *.site.com path => /* query => [empty] URL: sub.site.com/path/home.html domain => sub.site.com path => /path/home.html query => [empty] Matching process: domain => *.site.com matches sub.site.com? YES path => /* matches /path/home.html? YES query => [empty] matches [empty] YES Result: MATCH As you are storing the rules in a database I would store them already broken into those three pieces. And if you want uber-speed you could convert the *'s to %'s and then use the database's native LIKE operation to do the matching for you. Then you'd just have a query like SELECT * FROM ruleTable WHERE @urlDomain LIKE ruleDomain AND @urlPath LIKE rulePath AND @urlQuery LIKE ruleQuery where @urlDomain, @urlPath, and @urlQuery are variables in a prepared statement. The query would return the rules that match a URL, or an empty result set if nothing matches.
1,073,384
1,073,434
What strategies have you used to improve build times on large projects?
I once worked on a C++ project that took about an hour and a half for a full rebuild. Small edit, build, test cycles took about 5 to 10 minutes. It was an unproductive nightmare. What is the worst build times you ever had to handle? What strategies have you used to improve build times on large projects? Update: How much do you think the language used is to blame for the problem? I think C++ is prone to massive dependencies on large projects, which often means even simple changes to the source code can result in a massive rebuild. Which language do you think copes with large project dependency issues best?
1. Forward declarations 2. The pimpl idiom 3. Precompiled headers 4. Parallel compilation (e.g. the MPCL add-in for Visual Studio) 5. Distributed compilation (e.g. Incredibuild for Visual Studio) 6. Incremental builds 7. Split the build into several "projects" so you don't compile all the code if it's not needed. [Later Edit] 8. Buy faster machines.
1,073,543
1,388,914
Qt Creator source files
Is it possible to set up QtCreator to treat .d files as C sources?
There is a file called CppEditor.mimetypes.xml embedded as a resource in the binary executable. This file contains a list of file extensions that are treated as C++ source files. It can be found in the source tree here: src/plugins/cppeditor/CppEditor.mimetypes.xml I don't think you can change the list without recompiling Qt Creator from source.
1,073,754
1,073,767
Linker Error on having non Inline Function defined in header file?
Non inline function defined in header file with guards #if !defined(HEADER_RANDOM_H) #define HEADER_RANDOM_H void foo() { //something } #endif Results in linker error : Already defined in someother.obj file Making the function inline works fine but I am not able to understand why the function is already erroring out in first case.
If the header is included in more than one source file and the function is not marked as "inline" you will have more than one definition. The include guards only prevent multiple inclusions in the same source file.
1,073,958
1,074,030
Extending the C++ Standard Library by inheritance?
It is a commonly held belief that the C++ Standard Library is not generally intended to be extended using inheritance. Certainly, I (and others) have criticised people who suggest deriving from classes such as std::vector. However, this question: c++ exceptions, can what() be NULL? made me realise that there is at least one part of the Standard Library that is intended to be so extended - std::exception. So, my question has two parts: Are there any other Standard Library classes which are intended to be derived from? If one does derive from a Standard Library class such as std::exception, is one bound by the interface described in the ISO Standard? For example, would a program which used an exception class whose what() member function did not return a NTBS (say it returned a null pointer) be standard conforming?
Good question. I really wish that the Standard was a little more explicit about what the intended usage is. Maybe there should be a C++ Rationale document that sits alongside the language standard. In any case, here is the approach that I use: (a) I'm not aware of the existence of any such list. Instead, I use the following list to determine whether a Standard Library type is likely to be designed to be inherited from: If it doesn't have any virtual methods, then you shouldn't be using it as a base. This rules out std::vector and the like. If it does have virtual methods, then it is a candidate for usage as a base class. If there are lots of friend statements floating around, then steer clear since there is probably an encapsulation problem. If it is a template, then look closer before you inherit from it since you can probably customize it with specializations instead. The presence of a policy-based mechanism (e.g., std::char_traits) is a pretty good clue that you shouldn't be using it as a base. Unfortunately I don't know of a nice comprehensive or black-and-white list. I usually go by gut feel. (b) I would apply the LSP here. If someone calls what() on your exception, then its observable behavior should match that of std::exception. I don't think that it is really a standards conformance issue as much as a correctness issue. The Standard doesn't require that subclasses are substitutable for base classes. It is really just a "best practice".
1,074,130
1,074,151
How do I avoid compiler warnings when converting enum values to integer ones?
I created a class CMyClass whose CTor takes a UCHAR as argument. That argument can have the values of various enums (all guaranteed to fit into a UCHAR). I need to convert these values to UCHAR because of a library function demanding its parameter as that type. I have to create a lot of those message objects and to save typing effort I use boost::assign: std::vector<CMyClass> myObjects; boost::assign::push_back(myObjects) (MemberOfSomeEnum) (MemberOfSomeEnum); std::vector<CMyClass> myOtherObjects; boost::assign::push_back(myObjects) (MemberOfAnotherEnum) (MemberOfAnotherEnum); The above code calls the CMessage CTor with each of the two enum members and then puts them in a list. My problem is, that this code throws the warning C4244 (possible loss of data during conversion from enum to UCHAR) on VC++9. My current solution is to create a conversion function for each enum type: static UCHAR ToUchar(const SomeEnum eType) { return static_cast<UCHAR>(eType); } static UCHAR ToUchar(const AnotherEnum eType) { return static_cast<UCHAR>(eType); } And then the above code looks like this: std::vector<CMyClass> myObjects; boost::assign::push_back(myObjects) (ToUchar(MemberOfSomeEnum)) (ToUchar(MemberOfSomeEnum)); std::vector<CMyClass> myOtherObjects; boost::assign::push_back(myObjects) (ToUchar(MemberOfAnotherEnum)) (ToUchar(MemberOfAnotherEnum)); This is the cleanest approach I could think of so far. Are there any better ways? Maybe boost has something nice to offer? I don't want to disable warnings with pragma statements and I cannot modify the enums.
I wouldn't be embarrassed by static_cast here, but if you are: template <class T> inline UCHAR ToUchar(T t) { return static_cast<UCHAR>(t); } saves writing a function for every enum.
1,074,247
1,074,319
Error c2061 when compiling
When I compile a project I get this error: C:\DATOSA~1\FAXENG~1>nmake /f Makefile.vc clean Microsoft (R) Program Maintenance Utility Version 9.00.21022.08 Copyright (C) Microsoft Corporation. All rights reserved. cd src nmake /nologo /f Makefile.vc clean del /F *.obj *.lib *.dll *.exe *.res *.exp cd.. cd tools nmake /nologo /f Makefile.vc clean del *.obj *.lib *.dll *.exe No se encuentra C:\DATOSA~1\FAXENG~1\tools\*.obj cd .. C:\DATOSA~1\FAXENG~1>nmake /f Makefile.vc Microsoft (R) Program Maintenance Utility Version 9.00.21022.08 Copyright (C) Microsoft Corporation. All rights reserved. cd src nmake /nologo /f Makefile.vc cl /nologo /MT /W3 /EHsc /O2 /I "..\..\tiff-3.8.2\libtiff" /D "WIN32" /D "NDEBUG" /D "_WINDOWS" /c ClassOne.cpp ClassOne.cpp cl /nologo /MT /W3 /EHsc /O2 /I "..\..\tiff-3.8.2\libtiff" /D "WIN32" /D "NDEBUG" /D "_WINDOWS" /c ClassOnePointZero. ClassOnePointZero.cpp cl /nologo /MT /W3 /EHsc /O2 /I "..\..\tiff-3.8.2\libtiff" /D "WIN32" /D "NDEBUG" /D "_WINDOWS" /c ClassTwo.cpp ClassTwo.cpp cl /nologo /MT /W3 /EHsc /O2 /I "..\..\tiff-3.8.2\libtiff" /D "WIN32" /D "NDEBUG" /D "_WINDOWS" /c ClassTwoPointOne.c ClassTwoPointOne.cpp cl /nologo /MT /W3 /EHsc /O2 /I "..\..\tiff-3.8.2\libtiff" /D "WIN32" /D "NDEBUG" /D "_WINDOWS" /c ClassTwoPointZero. 
ClassTwoPointZero.cpp cl /nologo /MT /W3 /EHsc /O2 /I "..\..\tiff-3.8.2\libtiff" /D "WIN32" /D "NDEBUG" /D "_WINDOWS" /c ClassZero.cpp ClassZero.cpp cl /nologo /MT /W3 /EHsc /O2 /I "..\..\tiff-3.8.2\libtiff" /D "WIN32" /D "NDEBUG" /D "_WINDOWS" /c CommPort.cpp CommPort.cpp cl /nologo /MT /W3 /EHsc /O2 /I "..\..\tiff-3.8.2\libtiff" /D "WIN32" /D "NDEBUG" /D "_WINDOWS" /c ECMBuffer.cpp ECMBuffer.cpp cl /nologo /MT /W3 /EHsc /O2 /I "..\..\tiff-3.8.2\libtiff" /D "WIN32" /D "NDEBUG" /D "_WINDOWS" /c excepthandler.cpp excepthandler.cpp cl /nologo /MT /W3 /EHsc /O2 /I "..\..\tiff-3.8.2\libtiff" /D "WIN32" /D "NDEBUG" /D "_WINDOWS" /c FaxAPI.cpp FaxAPI.cpp FaxAPI.cpp(143) : error C2061: syntax error : identifier 'CClassZero' NMAKE : fatal error U1077: '"c:\Archivos de programa\Microsoft Visual Studio 9.0\VC\BIN\cl.EXE"' : return code '0x2' Stop. NMAKE : fatal error U1077: '"c:\Archivos de programa\Microsoft Visual Studio 9.0\VC\BIN\nmake.EXE"' : return code '0x2' Stop. The only thing I did was copy and paste ClassTwoPointOne files into ClassZero files and change names... 
ClassTwoPointOne.h: #ifndef CLASSTWOPOINTONE_H #define CLASSTWOPOINTONE_H #include "ClassTwoPointZero.h" class CClassTwoPointOne : public CClassTwoPointZero { public: CClassTwoPointOne(); virtual ~CClassTwoPointOne(); virtual void SetFClass(void); }; #endif // CLASSTWOPOINTONE_H ClassTwoPointOne.cpp: #include "stdafx.h" #include "ClassTwoPointOne.h" // Construction/Destruction CClassTwoPointOne::CClassTwoPointOne() { m_sEIAClass = "2.1"; m_nScanTime = 0; } CClassTwoPointOne::~CClassTwoPointOne() { } void CClassTwoPointOne::SetFClass(void) { SendCommand( COMMAND_SET_FCLASS_2_1); } ClassZero.h: #ifndef CLASSZERO_H #define CLASSZERO_H #include "VoiceModem.h" class CClassZero : public CVoiceModem { public: CClassZero(); virtual ~CClassZero(); }; #endif // CLASSZERO_H ClassZero.cpp: #include "stdafx.h" #include "ClassZero.h" // Construction/Destruction CClassZero::CClassZero() { } CClassZero::~CClassZero() { } I don't understand what's wrong... can anyone help? Thanks a lot
FaxAPI.cpp(143) : error C2061: syntax error : identifier 'CClassZero' The error is at or near line number 143, in file FaxAPI.cpp. The error is related to the identifier CClassZero (Possibly being undefined, or misused. Possibly something as mundane as a missing semicolon). If you cannot find the error in FaxAPI.cpp yourself, you need to provide us with the relevant part of that file.
1,074,362
1,074,911
Embedded resource in C++
How do I create an embedded resource and then access it from C++? Any example on how to read the resource would be great. I am using Visual Studio 2005. Thanks in advance. Edit: I want to put one xsd file which is required while validating schema of the recieved xml file.
I'm doing what @sharptooth explained before, and use the following code to get the resource HRSRC hResInfo = FindResource(hInstance, MAKEINTRESOURCE(resourceId), type); HGLOBAL hRes = LoadResource(hInstance, hResInfo); LPVOID memRes = LockResource(hRes); DWORD sizeRes = SizeofResource(hInstance, hResInfo); Here you have to change resourceId and type. For example for a .png file I use FindResource(hInstance, MAKEINTRESOURCE(bitmapId), _T("PNG")); (the "PNG" string is the type you used when adding a custom resource).
1,074,428
1,074,720
How to write to a varchar(max) column using ODBC
Summary: I'm trying to write a text string to a column of type varchar(max) using ODBC and SQL Server 2005. It fails if the length of the string is greater than 8000. Help! I have some C++ code that uses ODBC (SQL Native Client) to write a text string to a table. If I change the column from, say, varchar(100) to varchar(max) and try to write a string with length greater than 8000, the write fails with the following error [Microsoft][ODBC SQL Server Driver]String data, right truncation So, can anyone advise me on if this can be done, and how? Some example (not production) code that shows what I'm trying to do: SQLHENV hEnv = NULL; SQLRETURN iError = SQLAllocEnv(&hEnv); HDBC hDbc = NULL; SQLAllocConnect(hEnv, &hDbc); const char* pszConnStr = "Driver={SQL Server};Server=127.0.0.1;Database=MyTestDB"; UCHAR szConnectOut[SQL_MAX_MESSAGE_LENGTH]; SWORD iConnectOutLen = 0; iError = SQLDriverConnect(hDbc, NULL, (unsigned char*)pszConnStr, SQL_NTS, szConnectOut, (SQL_MAX_MESSAGE_LENGTH-1), &iConnectOutLen, SQL_DRIVER_COMPLETE); HSTMT hStmt = NULL; iError = SQLAllocStmt(hDbc, &hStmt); const char* pszSQL = "INSERT INTO MyTestTable (LongStr) VALUES (?)"; iError = SQLPrepare(hStmt, (SQLCHAR*)pszSQL, SQL_NTS); char* pszBigString = AllocBigString(8001); iError = SQLSetParam(hStmt, 1, SQL_C_CHAR, SQL_VARCHAR, 0, 0, (SQLPOINTER)pszBigString, NULL); iError = SQLExecute(hStmt); // Returns SQL_ERROR if pszBigString len > 8000 The table MyTestTable contains a single colum defined as varchar(max). The function AllocBigString (not shown) creates a string of arbitrary length. I understand that previous versions of SQL Server had an 8000 character limit to varchars, but not why is this happening in SQL 2005? Thanks, Andy
Are you sure you loaded the SQL Native Client driver for 2005, not the old driver for 2000? The native driver name is {SQL Server Native Client 10.0} for 2k8 or {SQL Native Client} for 2k5. The error message ODBC SQL Server Driver seems to indicate the old 2k driver (I may be wrong, haven't touched ODBC in like 10 years now).
1,074,474
1,074,537
Should I use double or float?
What are the advantages and disadvantages of using one instead of the other in C++?
If you want to know the true answer, you should read What Every Computer Scientist Should Know About Floating-Point Arithmetic. In short, although double allows for higher precision in its representation, for certain calculations it would produce larger errors. The "right" choice is: use as much precision as you need but not more, and choose the right algorithm. Many compilers do extended floating point math in "non-strict" mode anyway (i.e. use a wider floating point type available in hardware, e.g. 80-bit and 128-bit floating point); this should be taken into account as well. In practice, you can hardly see any difference in speed -- they are native to hardware anyway.
1,075,154
1,075,191
memory allocation in C++
Is it possible to allocate an arbitrary memory block using the "new" operator? In C I can do it like "void * p = malloc(7);" - this will allocate 7 bytes if memory alignment is set to 1 byte. How to make the same in C++ with the new operator?
Arbitrary memory blocks can be allocated with operator new in C++; not with the new operator which is for constructing objects. void* pBlock = ::operator new(7); Such blocks can subsequently be freed with operator delete. ::operator delete(pBlock); Note that operator new will allocated memory suitably aligned for any sort of object, so the implementation might not allocate exactly seven bytes and no more, but the same is (usually) true of malloc. C clients of malloc usually need aligned memory too.
1,075,243
1,075,261
Get Last Error from LogonUser function and return it as a String?
I am just wondering how I can get the error causing the LogonUser function to fail in my C++ program and return it as a String? JNIEXPORT jstring JNICALL Java_com_entrust_adminservices_urs_examples_authn_LdapAuthenticator2_takeInfo(JNIEnv *env, jobject obj, jstring domain, jstring id, jstring idca, jstring password) { const char *nt_domain; const char *nt_id; const char *nt_password; nt_domain = env->GetStringUTFChars(domain, NULL); nt_id = env->GetStringUTFChars(id, NULL); nt_password = env->GetStringUTFChars(password, NULL); HANDLE hToken = 0; char *otherString; bool aut = true; aut = LogonUser(nt_id, nt_domain, nt_password, LOGON32_LOGON_NETWORK, LOGON32_PROVIDER_DEFAULT, &hToken ); if(aut) { otherString = "true"; } else { otherString = //how would I get the last error here? } jstring newString = env->NewStringUTF((const char*)otherString); return newString; } int main() { return 0; } Thanks -Pete Edit: Thanks guys, did it with: DWORD dwError = GetLastError(); LPVOID lpMsgBuf; FormatMessage( FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS, NULL, dwError, MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT ), (LPTSTR) &lpMsgBuf, 0, NULL ); otherString = (char*)lpMsgBuf;
In Windows, you can use GetLastError to retrieve the error, and then FormatMessage to turn that into a string you can use.
1,075,614
1,075,709
using copy with multi-dimensional arrays
I'm just trying to get more into stl semantics, and converting old loops over to algorithms where appropriate. I'm having trouble figuring out the best way to transform this loop into a call to copy. Any ideas? vector< vector<float> > rvec; const float * r[Max] = ...; // ... for (int ri=0; ri<N; ri++) for (int rj=0; rj<M; rj++) rvec[ri][rj] = r[ri][rj];
rvec.resize(Max); for (int i = 0; i < Max; ++i) { rvec[i].resize(M); std::copy(r[i], r[i] + M, rvec[i].begin()); } If rvec itself and each vector in rvec already has the correct size, then resizing isn't needed.
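The loop above can be wrapped into a reusable helper; this is a hedged sketch (the function name is illustrative) that copies an n x m block of floats, given as an array of row pointers as in the question, into a vector of vectors:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// One std::copy per row: resize the destination row, then copy the
// half-open range [r[i], r[i] + m) into it.
std::vector<std::vector<float> > copy_rows(const float* const* r,
                                           std::size_t n, std::size_t m) {
    std::vector<std::vector<float> > rvec(n);
    for (std::size_t i = 0; i < n; ++i) {
        rvec[i].resize(m);
        std::copy(r[i], r[i] + m, rvec[i].begin());
    }
    return rvec;
}
```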
1,075,712
1,075,785
Reading delimited files in C++
What is the best way to read in a tab delimited file in C++ and store each line as a record? I have been looking for an open source library to help with this, but have been unsuccessful so it looks like I will have to write my own.
typedef vector<vector<string> > Rows; Rows rows; ifstream input("filename.csv"); char const row_delim = '\n'; char const field_delim = '\t'; for (string row; getline(input, row, row_delim); ) { rows.push_back(Rows::value_type()); istringstream ss(row); for (string field; getline(ss, field, field_delim); ) { rows.back().push_back(field); } } This will get you started. It doesn't do any checking that each row has the same number of fields, allow for escaping field_delim, etc.
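The same getline-with-delimiter idiom, packaged as a function that reads from any input stream (so it can be fed an ifstream or, for testing, an istringstream); the function name is illustrative:

```cpp
#include <sstream>
#include <string>
#include <vector>

typedef std::vector<std::vector<std::string> > Rows;

// Split delimited text into rows of fields: getline with row_delim pulls
// out each record, then getline with field_delim splits it into fields.
Rows parse_delimited(std::istream& input,
                     char row_delim = '\n', char field_delim = '\t') {
    Rows rows;
    for (std::string row; std::getline(input, row, row_delim); ) {
        rows.push_back(Rows::value_type());
        std::istringstream ss(row);
        for (std::string field; std::getline(ss, field, field_delim); )
            rows.back().push_back(field);
    }
    return rows;
}
```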
1,076,190
1,076,218
64 bit floating point porting issues
I'm porting my application from 32 bit to 64 bit. Currently, the code compiles under both architectures, but the results are different. For various reasons, I'm using floats instead of doubles. I assume that there is some implicit upconverting from float to double happening on one machine and not the other. Is there a way to control for this, or specific gotchas I should be looking for? edited to add: 32 bit platform gcc (GCC) 4.1.2 20070925 (Red Hat 4.1.2-33) Dual-Core AMD Opteron(tm) Processor 2218 HE 64 bit platform gcc (Ubuntu 4.3.3-5ubuntu4) 4.3.3 Intel(R) Xeon(R) CPU Applying the -mfpmath=387 helps somewhat, after 1 iteration of the algorithm the values are the same, but beyond that they fall out of sync again. I should also add that my concern isn't that the results aren't identical, it's that porting to a 64 bit platform has uncovered a 32 bit dependency of which I was not aware.
There is no inherent need for floats and doubles to behave differently between 32-bit and 64-bit code, but frequently they do. The answer to your question is going to be platform and compiler specific, so you need to say what platform you are porting from and what platform you are porting to. On Intel x86 platforms, 32-bit code often uses the x87 co-processor instruction set and floating-point register stack for maximum compatibility, whereas on amd64/x86_64 platforms, the SSE* instructions and xmm* registers are often used instead. These have different precision characteristics. Post edit: Given your platform, you might want to consider trying -mfpmath=387 (the default for i386 gcc) on your x86_64 build to see if this explains the differing results. You may also want to look at the settings for all the -fmath-* compiler switches to ensure that they match what you want in both builds.
1,076,316
1,076,394
Excel document parser/importer?
Can anyone recommend a decent Excel (Binary XLS) document importer written in C? I am looking to write a Ruby wrapper around one. I haven't been able to find any via Google.
Have you considered the source code of Gnumeric?
1,076,955
1,077,083
C++ DLL Called From C# on Windows CE for ARM Always Returns 0
I am currently developing an application for Windows CE on the TI OMAP processor, which is an ARM processor. I am trying to simply call a function in a C++ DLL file from C# and I always get a value of 0 back, no matter which data type I use. Is this most likely some kind of calling convention mismatch? I am compiling the DLL and the main EXE from the same Visual Studio solution. C# Code Snippet: public partial class Form1 : Form { private void button1_Click(object sender, EventArgs e) { byte test = LibWrap.test_return(); MessageBox.Show(test.ToString()); } } public class LibWrap { [DllImport("Test_CE.dll")] public static extern byte test_return(); } C++ DLL Code Snippet: extern "C" __declspec (dllexport) unsigned char test_return() { return 95; }
It worked when I changed: extern "C" __declspec (dllexport) unsigned char test_return() { return 95; } to extern "C" __declspec (dllexport) unsigned char __cdecl test_return() { return 95; } In the DLL code. Why it doesn't assume this when compiled for WinCE is beyond me.
1,077,216
1,077,229
How do you Make A Repeat-Until Loop in C++?
How do you make a repeat-until loop in C++, as opposed to a standard while or for loop? I need to check the condition at the end of each iteration, rather than at the beginning.
do { // whatever } while ( !condition );
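A small worked example of the do-while form (the function is illustrative): the body always runs at least once, and the condition is only tested after each iteration.

```cpp
// Count how many times n must be halved before it drops to 1 or below.
// Because do-while tests at the end, the body executes even when n == 1.
int halvings(int n) {
    int count = 0;
    do {
        n /= 2;
        ++count;
    } while (n > 1);
    return count;
}
```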
1,077,258
1,077,271
Windows SearchPath function
I am using the following to search for a file defined by the macro DB_CONFIG_FILE_PATH_1. wchar_t filename[100]; SearchPath( L".\\", DB_CONFIG_FILE_PATH_1, NULL, 100, filename, NULL); If the file is in the C:\ directory, it is found. But if the file is in one of its sub-directories, the function doesn't find it. Can someone explain how to search all the drives, including subdirectories, for a file with the above function? I am not using the FindFirstFile function because I am unable to retrieve the path to the file even though the function returns a handle to the file. To put it simply, I want the full path name of a file. I know the name of the file, but do not know where it is on the computer.
For searching subdirectories in native code on Win32, you need to do it yourself, using FindFirstFile and then recursing into subdirectories. The return value of FindFirstFile isn't a file handle - the file information is contained in the WIN32_FIND_DATA structure returned. The handle is used in calls to FindNextFile to continue the search. To get a full path name during your search, you'll need to keep track of what directory you are currently in and append the discovered directory names to the path. SearchPath only searches in the PATH environment variable or the first parameter if present and doesn't search subdirectories.
1,077,298
1,077,306
How do you Specify a Method to be a Destructor Rather than a Constructor in C++?
How do you specify a method to be a destructor rather than a constructor in C++? This confuses me very much. I can't tell the difference between the two.
Here's an example: MyClass::MyClass() // Constructor MyClass::~MyClass() // Destructor Note the "~" in front of the destructor.
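To make the pairing visible, here is a hedged sketch (the class is illustrative) where the constructor and destructor each update a counter, so you can observe that the destructor runs when the object is destroyed:

```cpp
// A destructor is declared with '~' before the class name and takes no
// parameters and has no return type.
struct Tracked {
    static int live;            // number of currently alive instances
    Tracked()  { ++live; }      // constructor: runs when an object is created
    ~Tracked() { --live; }      // destructor: runs when the object is destroyed
};
int Tracked::live = 0;
```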
1,077,336
1,077,364
OpenSource Instant Messaging APIs
I want to create my own IM and I'm searching an open-source IM APIs. What do you think is the best open-source IM APIs. And what good front end to use? Thanks.
If you are looking into making a client, check out libpurple. This is what pidgin and many other IM clients use to access multiple IM networks. http://developer.pidgin.im/wiki/WhatIsLibpurple If you are just worried about one IM network, the easiest one to work with would be Jabber because it is an open sourced protocol http://www.jabber.org/
1,077,869
1,080,705
Internet Explorer 8 + Deflate
I have a very weird problem.. I really do hope someone has an answer because I wouldn't know where else to ask. I am writing a cgi application in C++ which is executed by Apache and outputs HTML code. I am compressing the HTML output myself - from within my C++ application - since my web host doesn't support mod_deflate for some reason. I tested this with Firefox 2, Firefox 3, Opera 9, Opera 10, Google Chrome, Safari, IE6, IE7, IE8, even wget.. It works with ANYTHING except IE8. IE8 just says "Internet Explorer cannot display the webpage", with no information whatsoever. I know it's because of the compression only because it works if I disable it. Do you know what I'm doing wrong? I use zlib to compress it, and the exact code is: /* Compress it */ int compressed_output_size = content.length() + (content.length() * 0.2) + 16; char *compressed_output = (char *)Alloc(compressed_output_size); int compressed_output_length; Compress(compressed_output, compressed_output_size, (void *)content.c_str(), content.length(), &compressed_output_length); /* Send the compressed header */ cout << "Content-Encoding: deflate\r\n"; cout << boost::format("Content-Length: %d\r\n") % compressed_output_length; cgiHeaderContentType("text/html"); cout.write(compressed_output, compressed_output_length); static void Compress(void *to, size_t to_size, void *from, size_t from_size, int *final_size) { int ret; z_stream stream; stream.zalloc = Z_NULL; stream.zfree = Z_NULL; stream.opaque = Z_NULL; if ((ret = deflateInit(&stream, CompressionSpeed)) != Z_OK) COMPRESSION_ERROR("deflateInit() failed: %d", ret); stream.next_out = (Bytef *)to; stream.avail_out = (uInt)to_size; stream.next_in = (Bytef *)from; stream.avail_in = (uInt)from_size; if ((ret = deflate(&stream, Z_NO_FLUSH)) != Z_OK) COMPRESSION_ERROR("deflate() failed: %d", ret); if (stream.avail_in != 0) COMPRESSION_ERROR("stream.avail_in is not 0 (it's %d)", stream.avail_in); if ((ret = deflate(&stream, Z_FINISH)) != Z_STREAM_END) 
COMPRESSION_ERROR("deflate() failed: %d", ret); if ((ret = deflateEnd(&stream)) != Z_OK) COMPRESSION_ERROR("deflateEnd() failed: %d", ret); if (final_size) *final_size = stream.total_out; return; }
The gzip and deflate methods aren't the same... they are very close, but there are some subtle differences with the header, so, if you change your content-encoding, you should also change your parameters to the encoding method (specifically, the window size)! See: http://apcmag.com/improve_your_site_with_http_compression.htm Probably the other browsers are ignoring your content-encoding specification and doing some automatic recognition, but IE8 is not... See: http://www.zlib.net/manual.html#deflateInit2 Try to use: method=Z_DEFLATED windowBits=-15 (negative so that the header is suppressed) And use "gzip" as the content-encoding
1,078,002
1,078,086
how to write a virtual com port to TCP driver?
Hi, I am trying to write a Windows virtual com port driver which will divert the data to an IP address. Any pointers or best practices would be of help.
I know of an open-source project called com0com, which is a virtual com port redirector. There is a subproject in it called com2tcp which you can look at: com0com. Otherwise, for Windows, I am not sure you have any other open source available. There is also commercially available software, such as the products from Eltima and Tactical Software. There are a few freeware options too; you can search for them under the name "com port redirector".
1,078,218
1,078,254
while (cin >> x) and end-of-file issues
I'm a little confused as to what's going on. I'm playing with some programs from "Accelerated C++", and have hit a problem with one of the early programs (page 35, if you happen to have a copy nearby). It uses this snippet: while (cin >> x) { ++count; sum += x; } ("count" is an integer, "x" is a double) It works as intended, allowing me to enter several values and add them together, but I can't work out what's going wrong with "end-of-file" signalling. The book says the loop will keep running until the program encounters an end-of-file signal, which is Ctrl+Z in Windows. This is all fine, and works, but then my program won't let me use cin again. I usually just set up a program to wait for some random variable in order to stop the console closing immediately after executing (is there a better way to do that, by the way?), which is how I noticed this, and I'm wondering if there's a solution. I've done a bunch of searching, but found little that doesn't say what's already said in the book (press Ctrl+Z, or enter a non-compatible type of input, etc.). I'm using Visual Studio 2008 Express to compile.
From one point of view, once you've hit the end of an input stream then by definition there's nothing left in the stream so trying to read again from it doesn't make sense. However, in the case of 'end-of-stream' actually being caused be a special character like Ctrl-Z on windows, we know that there is the possibility that we could read again from cin. However, the failed read will have caused the eof flag on the stream to be set. To clear this flag (and all the other failure flags) you can use the clear method. std::cin.clear(); After calling this, you can attempt another read.
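The clear-and-read-again behaviour can be demonstrated without a console by driving the same loop from an istringstream (a sketch; the function name is illustrative):

```cpp
#include <istream>
#include <sstream>

// Read doubles until extraction fails (end of input), then call clear()
// to reset the eofbit/failbit so the stream can be used again.
double sum_until_eof(std::istream& in) {
    double x, sum = 0.0;
    while (in >> x)
        sum += x;
    in.clear();  // without this, every later read on the stream also fails
    return sum;
}
```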
1,078,312
1,078,325
Return value of process
How can I get the return value of a process? Basically I'm **ShellExecute()**ing a .NET process from a DLL (in C++). The process does its task, but now I want to know whether it succeeded or failed. How to do that in WinAPI or MFC?
Use ShellExecuteEx instead so you can get a handle to the process which was launched. You should then be able to use GetExitCodeProcess to obtain the exit code. (I've left this answer here despite the similar one from MSalters, as I suspect you're using ShellExecute deliberately to get the shell behaviour instead of explicitly creating the process.)
1,078,768
1,079,040
Is there a relation between integer and register sizes?
I was recently challenged in an interview with a string manipulation problem and asked to optimize for performance. I had to use an iterator to move back and forth between TCHAR characters (with UNICODE support - 2 bytes each). Not really thinking of the array length, I made a crucial mistake by not using size_t but an int to iterate through. I understand it is not compliant and not secure. int i, size = _tcslen(str); for(i=0; i<size; i++){ // code here } But the maximum memory we can allocate is limited. And if there is a relation between int and register sizes, it may be safe to use an integer. E.g.: Without any virtual mapping tools, we can only map 2^register-size bytes. Since TCHAR is 2 bytes long, half of that number. For any system that has a 32-bit int, this is not going to be a problem even if you don't use an unsigned version of int. People with an embedded background used to think of int as 16 bits, but memory size will be restricted on such a device. So I wonder if there is an architectural fine-tuning decision between integer and register sizes.
The C++ standard doesn't specify the size of an int. (It says that sizeof(char) == 1, and sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long).) So there doesn't have to be a relation to register size. A fully conforming C++ implementation could give you 256-byte integers on your PC with 32-bit registers. But it'd be inefficient. So yes, in practice, the size of the int datatype is generally equal to the size of the CPU's general-purpose registers, since that is by far the most efficient option. If an int were bigger than a register, then simple arithmetic operations would require more than one instruction, which would be costly. If it were smaller than a register, then loading and storing the values of a register would require the program to mask out the unused bits, to avoid overwriting other data. (That is why the int datatype is typically more efficient than short.) (Some languages simply require an int to be 32-bit, in which case there is obviously no relation to register size --- other than that 32-bit is chosen because it is a common register size.)
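The guarantees quoted above can be checked directly; this sketch only verifies the orderings the standard promises, not any concrete size:

```cpp
// The standard fixes sizeof(char) == 1 and only guarantees an ordering
// between the other integer types; concrete widths are implementation-defined.
bool sizes_are_ordered() {
    return sizeof(char) == 1
        && sizeof(char)  <= sizeof(short)
        && sizeof(short) <= sizeof(int)
        && sizeof(int)   <= sizeof(long);
}
```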
1,078,775
1,123,964
Is time.h clock() broken on my hardware?
I try to measure the clock cyles needed to execute a piece of code on the TMS32064x+ DSP that comes with the OMAP ZOOM 3430 MDK. I look at the "Programmer's Guide" of the DSP chip and it says that the DSP supports the clock() function. What I do is really simple, I just do start = clock(); for (i=0;i<100;i++){ /* do something here */ } stop = clock(); total = stop - start; and then put the values of "start","stop" and "total" to a previously allocated shared memory with the ARM processor. Then I simply print it to the screen at the ARM side. The problem is, in my first executes, I always get the same "total" value, and then in my next runs I always get 0! The "start" and "stop" values go along with the "total" value. The strangest thing is that they seem to follow a bit pattern! I put the output below: # ./sampleapp Total = 63744 Start clock() value = 0x000000f9 Stop clock() value = 0x0000f9f9 # ./sampleapp Total = 4177526784 Start clock() value = 0x00f9f9f9 Stop clock() value = 0xf9f9f9f9 # ./sampleapp Total clock cyles = 0 Start clock() value = 0xf9f9f9f9 Stop clock() value = 0xf9f9f9f9 Apparantly, clock() is not functioning well, but I'm not sure if this is because of something I do wrong or because this type of thing is not supported with the hardware I have. Any ideas why this might be happening?
From reading the questions so far, I'd say the Original Poster has substantially more knowledge of this matter than the contributors so far, and that the suspicion that the clock() is broken (or not supported, and returns an undefined result) on the DSP seems quite likely.
1,078,939
1,079,007
I'd like to call the Windows C++ function WinHttpGetProxyForUrl from Python - can this be done?
Microsoft provides a method as part of WinHTTP which allows a user to determine which Proxy ought to be used for any given URL. It's called WinHttpGetProxyForUrl. Unfortunately I'm programming in python so I cannot directly access this function - I can use Win32COM to call any Microsoft service with a COM interface. So is there any way to get access to this function from Python? As an additional problem I'm not able to add anything other than Python to the project. That means however convenient it is impossible to add C# or C++ fixes. I'm running Python2.4.4 with Win32 extensions on Windows XP. Update 0: This is what I have so far: import win32inet import pprint hinternet = win32inet.InternetOpen("foo 1.0", 0, "", "", 0) # Does not work!!! proxy = win32inet.WinHttpGetProxyForUrl( hinternet, u"http://www.foo.com", 0 ) Obviously the last line is wrong, however I cannot see any docs or examples on the right way to do it! Update 1: I'm going to re-ask this as a new question since it's now really about win32com.
You can use ctypes to call functions in WinHttp.dll, which is the DLL that contains WinHttpGetProxyForUrl. To call it you will need a HINTERNET session variable, so here I am showing you the first step: it demonstrates how you can use ctypes to call into the DLL, and it produces a HINTERNET which you have to pass to WinHttpGetProxyForUrl. That part I will leave for you as an exercise; if you have difficulty, post the code and I will try to fix it. Read more about ctypes at http://docs.python.org/library/ctypes.html import ctypes winHttp = ctypes.windll.LoadLibrary("Winhttp.dll") WINHTTP_ACCESS_TYPE_DEFAULT_PROXY=0 WINHTTP_NO_PROXY_NAME=WINHTTP_NO_PROXY_BYPASS=0 WINHTTP_FLAG_ASYNC=0x10000000 # http://msdn.microsoft.com/en-us/library/aa384098(VS.85).aspx HINTERNET = winHttp.WinHttpOpen("PyWin32", WINHTTP_ACCESS_TYPE_DEFAULT_PROXY, WINHTTP_NO_PROXY_NAME, WINHTTP_NO_PROXY_BYPASS, WINHTTP_FLAG_ASYNC) print HINTERNET
1,079,020
1,079,032
Can you expand #define's into string literals?
Is there a way to get the C++ pre-processor to expand a #define'd value into a string literal? For example: #define NEW_LINE '\n' Printf("OutputNEW_LINE"); //or whatever This looks to me like it should be possible, as it's before compilation. Or is there a better design pattern to achieve this kind of behaviour (without resorting to runtime fixes like sprintf)? EDIT I understand that #defines can be evil, but for argument's sake... ADDITIONAL Does anyone have any criticism of this approach?
This will do it: #define NEW_LINE "\n" // Note double quotes Printf("Output" NEW_LINE); (Technically it's the compiler joining the strings rather than the preprocessor, but the end result is the same.)
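A sketch of the same idea without printf, so the concatenation is observable as a plain string (the function name is illustrative):

```cpp
#include <cstring>

#define NEW_LINE "\n"  // double quotes: a string literal, not a char

// Adjacent string literals are merged into one by the compiler, so the
// macro expands in place and "Output" NEW_LINE becomes "Output\n".
const char* make_message() {
    return "Output" NEW_LINE;
}
```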
1,079,238
1,079,682
Is it possible to get the debugger to display the name of the function pointed to by a boost function object?
When debugging code using boost function and bind in Visual Studio, I would like to be able to have the debugger show information about the actual function pointed to by the boost functor. For instance the name of the function, the signature of the original function (before bind was used on it), or the state of the functor. At the moment I have to step through the code to find out which function it is, and that requires stepping all the way through the boost code first. Does anyone know if this has been done or even if it can be done? Thank you! Edit I'd also be very happy to find out that someone has developed an answer to this question by now: How to debug code that uses boost w/o losing sanity? (I mean the problem mentioned in the accepted answer: How to step over the boost code but still step into the code called by the boost::function...)
There is an initiative in boost to make debug visualizers. There are already debug visualizers for different types (variant, multi_index, shared_ptr and more). Unfortunately boost::function is not there, but you can write a visualizer yourself as described there (and maybe submit it to boost ;). Alternatively you can make a request to write one for you. Regards, Ovanes
1,079,288
1,079,331
how am i able to declare an array with variable length determined at runtime in C++?
Please check this code out - it compiles and runs absolutely fine. The thing is, when I started learning C++ (Turbo C++) I was never able to declare an array of any type as datatype var[variable_set_at_runtime]; and I took it for granted that this can't be possible, but in the latest gcc compilers, surprisingly, it is possible. So my related question is: what's the need for the new operator then? I know that the new operator does a lot of things, including dynamically allocating memory at runtime on the heap, returning an address to that resource, etc. The difference I see is that my code dynamically allocates the memory on the stack while new does it on the heap - is this the only difference? I am really astonished after writing something like this, as I could not do it when I started learning C++, and to add to it, I can do this for custom data types too. :-O #include<iostream> using namespace std; struct A { int a; int b; }; int main() { int a; cin>>a; int ch[a]; for(int i=0;i<a;i++) ch[i]=i+1; for(int i=0;i<a;i++) cout<<"\n\nch=="<<ch[i]; A obj[a]; cin>>obj[a-1].a; cin>>obj[a-1].b; cout<<"\n\n obj.a,obj.b=="<<obj[a-1].a<<" , "<<obj[a-1].b; } Please shed some light. Thanks.
and i took it for granted that this cant be possible in latest gcc compilers...but surprisingly this is possible... It is legal in C, but not C++. GCC might allow it depending on compiler flags, but if you compile your code as strict C++ (which you should), dynamic-length arrays are not allowed, and you have to use new. (I'm surprised no one has mentioned this little detail yet) Apart from this, the two other big differences are that: data on the stack is automatically cleaned up when it goes out of scope only something like 1MB is typically allocated for the stack. Large datastructures should go on the heap instead. But really, the single most important point is the first one -- it's not valid C++. (And as Neil pointed out, it is not valid in C++0x either. There are no plans of adding this to C++)
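The portable C++ replacement for the variable-length array in the question is std::vector, which puts the storage on the heap and cleans it up automatically (a sketch; the function name is illustrative):

```cpp
#include <vector>

// Instead of 'int ch[a]' (a C99 VLA, not standard C++), size a vector at
// runtime. Its destructor releases the memory when it goes out of scope.
std::vector<int> make_sequence(int a) {
    std::vector<int> ch(a);
    for (int i = 0; i < a; ++i)
        ch[i] = i + 1;
    return ch;
}
```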
1,079,522
1,079,699
Question about operation on files in Windows
I have two HANDLEs and they are created from the same file, in such condition I want to write on offset from 1 to 100 using the first HANDLE, and from 101 to 200 using the 2nd HANDLE, from 201 to 300 using the first HANDLE, ..., How can I make this operation seems like a sequential write and no time is wasted between positioning the the pointers in the HANDLE?
You should be able to do asynchronous overlapped IO. To get you started, look at the WriteFile Win32 API call. It discusses how to use CreateFile with the FILE_FLAG_OVERLAPPED flag. You then call WriteFile and pass in an OVERLAPPED parameter, which contains the offset to start writing at and an event handle, which gets signaled when the IO is complete. Alternatively, you can call WriteFileEx, which calls a function that you supply when the IO is complete, rather than signaling an event. Note that you should write in blocks of 4K (4096) bytes rather than in blocks of 100 bytes, since this is the memory page size in Windows; it will speed up your IO considerably. Also note that this should only require one file handle, rather than multiple.
1,079,587
1,079,604
Qt +hiding window after startup
I'm trying to hide the window after its startup. I have my own window class which is inherited from QMainWindow. I rewrote showEvent like this: void showEvent (QShowEvent *evt) { if (firstShow) { hide(); firstShow = false; } else { QMainWindow::showEvent(evt); } } But it doesn't work. firstShow is a boolean variable which is true at start. Language: c++
I don't quite follow. Surely you just don't call show() on your main window in the first place?
1,079,623
1,079,631
What is the lifetime of class static variables in C++?
If I have a class called Test: class Test { static std::vector<int> staticVector; }; when does staticVector get constructed and when does it get destructed? Is it with the instantiation of the first object of the Test class, or just like regular static variables? Just to clarify, this question came to my mind after reading Concepts of Programming Languages (Sebesta Ch-5.4.3.1), which says: Note that when the static modifier appears in the declaration of a variable in a class definition in C++, Java and C#, it has nothing to do with the lifetime of the variable. In that context, it means the variable is a class variable, rather than an instance variable. The multiple use of a reserved word can be confusing, particularly to those learning the language. did you understand? :(
Exactly like regular static (global) variables.
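A sketch making this concrete (the member is made public here purely so it can be inspected): the static member needs one out-of-class definition, is constructed before main() like any global, and is destroyed after main() returns, whether or not a Test object ever exists.

```cpp
#include <vector>

class Test {
public:
    static std::vector<int> staticVector;
};

// The single definition; its constructor runs during static initialization,
// not when the first Test object is created.
std::vector<int> Test::staticVector(3, 7);  // three elements, each 7
```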
1,079,748
1,079,760
How to print '\n' instead of a newline?
I am writing a program that prints a hex dump of its input. However, I'm running into problems when newlines, tabs, etc. are passed in and destroy my output formatting. How can I use printf (or cout, I guess) to print '\n' instead of printing an actual newline? Do I just need to do some manual parsing for this? EDIT: I'm receiving my data dynamically; it's not just the \n that I'm concerned about, but rather all symbols. For example, this is my printf statement: printf("%c", theChar); How can I make this print \n when a newline is passed in as theChar but still print normal text when theChar is a valid printable character?
Print "\\n" – "\\" produces "\" and then "n" is recognized as an ordinary symbol. For more information see here.
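For the dynamic case in the edit, one approach (a sketch; the function name and the set of handled escapes are illustrative) is to map each character to its printable form before output:

```cpp
#include <string>

// Return the two-character escape sequence for common control characters,
// and the character itself otherwise.
std::string escape_char(char c) {
    switch (c) {
        case '\n': return "\\n";
        case '\t': return "\\t";
        case '\r': return "\\r";
        default:   return std::string(1, c);
    }
}
```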
1,079,808
1,079,900
Problem in using C dynamic loading routines
I have an application consisting of different modules written in C++. One of the modules is meant for handling distributed tasks on Sun Grid Engine. It uses the DRMAA API for submitting and monitoring grid jobs. If the client doesn't support grid, the local machine should be used. The shared object of the API, libdrmaa.so, is linked at compile time and loaded at runtime. If the client using my application has this ".so", everything is fine, but in case the client doesn't have it, the application exits, failing to load shared libraries. To avoid this, I have replaced the API calls with function pointers obtained using dlsym() and dlopen(). Now I can use the local machine instead of the grid if the call to dlopen doesn't succeed, and my objective is achieved. The problem now is that the application runs successfully for small test cases, but with larger test cases it throws a segmentation fault, while the same code using dynamic linking works correctly. Am I missing something while using dlsym() and dlopen()? Is there any other way to achieve the same goal? Any help would be appreciated. Thanx,
It is very unlikely to be a direct problem with the code loaded via dlsym() - in the sense that the dynamic loading makes it seg-fault. What it may be doing is exposing a separate problem, probably by moving stuff around. This probably means a stray (uninitialized) pointer that points somewhere 'legitimate' in the static link case but somewhere else in the dynamic link case - and the somewhere else triggers the seg-fault. Indeed, that is a benefit to you in the long run - it shows that there is a problem that otherwise might remain undetected for a long time. I regard this as particularly likely since you mention that it occurs with larger tests and not with small ones.
1,079,850
1,079,857
Adding MPI support to a C++ program
I have a program that has been implemented in C++ to which I now want to add MPI support. There is an MPI binding for C++, with namespace MPI and everything. In my case I have a specific object that is suitable to be the parallelized process on the cluster. My questions are: Has anyone done something like this before? Can I get some advice on how best to implement this? How do I initialize MPI inside the constructor? After initializing MPI inside the constructor of the class, will all the intermediate calls be parallelized too? For example: MyClass obj; x = x; //this will be parallelized ? obj.calc(); y = x++; //this will be parallelized ? z = obj.result();
I would really recommend picking up the Gropp MPI Book, it really helps for basic MPI!
1,080,313
1,080,562
MFC maximize window feature
I have tried to make the fullscreen feature of a SDI application with splitter windows by following the forum link. However, my status bar, system menu as well as the title bar of the application have disappeared. Do you have any suggestions on any easy ways of getting these back (or if I have to use different method of making the application maximized during startup instead of fullscreen)? Thanks in advance.
I got rid of any manipulations on the cs structure in the PreCreateWindow handler and used a ShowWindow(SW_MAXIMIZE) function call in the OnCreate function implementation of my MainFrame function, and it works quite well.
1,080,401
1,080,421
What is the Basic Structure of a Function in FORTRAN?
This is something I've wanted to know recently, mostly out of curiosity. I'm in the mood to learn some old coding styles, and FORTRAN seems like a good place to start. I guess I should help you guys out by providing a good starting point. So how would this C procedure be written in FORTRAN? int foo ( int x , int y ) { int tempX = x ; x += y / 2 ; y -= tempX * 3 ; // tempX holds x's original value. return x * y ; } I know the entire function could be a single line: return ( x + ( y / 2 ) ) * ( y - ( x * 3 ) ) ; But the point of me asking this question is to see how those four statements would be written individually in FORTRAN, not necessarily combined into a single statement.
Where do you learn FORTRAN from? Just take a look at the wikibooks! Derived from the example, I'd say: function foo(x, y) result(r) integer, intent(in) :: x, y integer :: r integer :: tempX, a, b a = x b = y tempX = a a = a + b / 2 b = b - tempX * 3 r = a * b end function foo Note that because the dummy arguments are declared intent(in) they cannot be modified, so local copies of x and y are used to mirror C's pass-by-value semantics.
1,080,458
1,080,487
Pattern for objects initialization at startup
I'm building an application, and as time goes on, I have more and more objects to initialize at startup. Moreover, some of the newer objects depend on others, so I'm getting some kind of spaghetti initialization where objects are created and then passed to other constructors. I suspect that I'm getting it wrong. For example, I have a WinForm which accepts a "Controller" class and 2 events. The controller needs to be told about the existence of a DataGridView from the WinForm, so it has a method Controller::SetDataGridReference(DataGridView^ dgv) Is there a general method of instantiating objects at startup and then referencing those objects to one another? I've been told that putting all the required classes as constructor parameters is a good practice, but frankly, I don't see how I can do that here. I don't really think that the language matters
This looks like a textbook case for using dependency injection (DI). It will certainly help with your spaghetti code and can even assist with unit testing. If you want to make a gradual migration towards DI, you might want to consider refactoring the objects with similar relationships and using a few sets of factory classes that can handle all the boilerplate chain initialization, as well as centralizing where all that takes place in your code base. I can recommend Google Guice as a good DI framework for Java. Even if you aren't using Java, it is a good DI model to compare against other languages' DI frameworks.
1,080,482
1,080,616
C++ and Qt - encoding from page-content
Here is a link where I got code for fetching web-page content. But I have a problem: I get the text in the wrong encoding. Can I correct it? Thanks. EDIT: I'm trying to get data from this page: http://ru.wiktionary.org/wiki/example And got: (screenshot) http://img44.imageshack.us/img44/6141/kfastwikimainwindow.png EDIT2: I just save all the data to an html file and show it in a QWebView.
I think you're getting it with the correct encoding; it's just not being displayed with the correct encoding. I did a quick test and that's pretty much what it looks like when I display it with the Visual Studio HTML Visualizer, but if I save the data to a file and open it with a browser, it is encoded correctly.
1,080,635
1,080,718
Other's library #define naming conflict
Hard to come up with a proper title for this problem. Anyway... I'm currently working on a GUI for my games in SDL. I've finished the software drawing and was on my way to start on the OpenGL part of it when a weird error came up. I included the "SDL/SDL_opengl.h" header and compile. It throws "error C2039: 'DrawTextW' : is not a member of 'GameLib::FontHandler'", which is a simple enough error, but I don't have anything called DrawTextW, only FontHandler::DrawText. I search for DrawTextW and find it in a #define in the header "WinUser.h"! //WinUser.h #define DrawText DrawTextW Apparently it replaces my DrawText with DrawTextW! How can I stop it from spilling over into my code like that? It's a minor thing changing my own function's name, but naming conflicts like this seem pretty dangerous and I would really like to know how to avoid them altogether. Cheers!
You have a couple of options, all of which suck:

- Add #undef DrawText in your own code.
- Don't include windows.h. If another library includes it for you, don't include that library's header directly. Instead, include it in a separate .cpp file, which can then expose your own wrapper functions in its header.
- Rename your own DrawText.

When possible, I usually go for the middle option. windows.h behaves badly in countless other ways (for example, it doesn't actually compile unless you enable Microsoft's proprietary C++ extensions), so I simply avoid it like the plague. It doesn't get included in my files if I can help it. Instead, I write a separate .cpp file to contain it and expose the functionality I need. Also, feel free to submit it as a bug and/or feedback on connect.microsoft.com. Windows.h is a criminally badly designed header, and if people draw Microsoft's attention to it, there's a (slim) chance that they might one day fix it. The good news is that windows.h is the only header that behaves this badly. Other headers generally try to prefix their macros with some library-specific name to avoid name collisions, they try to avoid creating macros for common names, and they try to avoid using more macros than necessary.
1,080,652
1,080,706
How to check the length of an input? (C++)
I have a program that allows the user to enter a level number, and then it plays that level: char lvlinput[4]; std::cin.getline(lvlinput, 4); char param_str[20] = "levelplayer.exe "; strcat_s(param_str, 20, lvlinput); system(param_str); And the level data is stored in folders \001, \002, \003, etc. However, I have no way of telling whether the user entered three digits, i.e. 1, 01, or 001. And all of the folders are named with three-digit numbers. I can't just check the length of the lvlinput string because it's an array, so how could I make sure the user entered three digits?
Here's how you could do this in C++: std::string lvlinput; std::getline(std::cin, lvlinput); if (lvlinput.size() > 3) { // if the input is too long, there's nothing we can do throw std::runtime_error("input string too long"); } while (lvlinput.size() < 3) { // if it is too short, we can fix it by prepending zeroes lvlinput = "0" + lvlinput; } std::string param_str = "levelplayer.exe "; param_str += lvlinput; system(param_str.c_str()); You've got a nice string class which takes care of concatenation, length and all those other fiddly things for you. So use it. (Note: std::exception has no string constructor in standard C++; that's an MSVC extension, hence std::runtime_error from &lt;stdexcept&gt; above.) Note that I use std::getline instead of cin.getline. The latter writes the input to a char array, while the former writes to a proper string.
1,080,662
1,080,698
Is this a good way to use dlls? (C++?)
I have a system that runs like this: main.exe runs sub.exe, which runs sub2.exe, and etc., etc... Well, would it be any faster or more efficient to change sub and sub2 to DLLs? And if it would, could someone point me in the right direction for making them DLLs without changing a lot of the code?
DLLs would definitely be faster than separate executables. But keeping them separate allows more flexibility and reuse (think Unix shell scripting). This seems to be a good DLL tutorial for Win32. As for not changing the code much, I'm assuming you are just passing information to these subs with command-line arguments. In that case, just rename the main functions, export them from the DLL, and call these renamed "main" functions from the main program.
1,080,757
1,118,240
Why is msbuild and link.exe "hanging" during a build?
We have a few C++ solutions and we run some build scripts using batch files that call msbuild.exe for each of the configurations in the solutions. This had been working fine on 3 developer machines and one build machine, but then one of the projects started to hang when linking. This only happens on the newest machine which is a quad core, 2.8ghz I think. It runs on Windows Server 2003 and the others are on XP or Vista. This happens consistently even if I change the order of builds in the bat file. If I run the build from the IDE on that machine it does not hang. Any ideas about what could possibly be causing this? I am using Visual Studio 2008. Edit: I see now that when it is hung the following are running: link.exe (2 instances) One with large memory usage and one with a small amount of memory usage. vcbuild.exe msbuild.exe vcbuildhelper.exe mspdbsrv.exe Edit: The exe file exists and so does the pdb file. The exe file is locked by some process, and I can't delete it or move it. I can delete the pdb file though. I also have the problem if I just use VCBuild.exe. I decided to try debugging the 2 link.exe processes and the mspdbsrv.exe process. When I attached the debugger/MSdev IDE to them I got a message box saying that the application was deadlocked and/or that "all threads have exited". I guess I will have to check for a service pack for that msdev install on that machine. Edit: In the debug.htm output file I get all sorts of stuff output after the link.exe command is generated. However, for the release buildlog.htm the link.exe line is the last line. This is clearly a hang in the linker. Definitely a Microsoft bug. I am now trying to figure out what the .rsp (linker response) file is. When I issue: link.exe @c:\\Release\RSP00000535202392.rsp /NOLOGO /ERRORREPORT:QUEUE That is the last line in the release build log. The debug one has lots more information after that. Reinstalling a different version of Visual Studio did not solve the problem.
I will open an issue/ticket with Microsoft. I will post an answer if I can.
Whole-program optimization (/GL and /LTCG) and /MP don't mix -- the linker hangs. I raised this on Connect. The upshot is that it's a confirmed bug in VS2008; contact PSS if you want a hotfix; and the fix is included in VS2010. If you can't wait that long, turn off /MP (slower compiles) or /LTCG (slower code).
1,080,770
1,080,791
Including Objective C++ Type in C++ Class Definition
I've got a project that is primarily in C++, but I'm trying to link in a Objective-C++ library. I have a header that looks something like: CPlus.h: #import "OBJCObject.h" class CPlus { OBJCObject *someObj; }; CPlus.mm: CPlus::CPlus() { someObj = [[OBJCObject alloc] init]; } When I import the Objective-C++ header into my code I end up with thousands of errors from inside the iPhone SDK. It seems that something is treating one language as if it were another. Sorry if this description is poor, I'm new to this, and am somewhat confused. Can you include Objective-C / Objective-C++ types in C++ classes? Is there something special you need to do to include the headers for the other types?
Are you #importing CPlus.h from an Objective-C (.m) file? If so, it will not understand the C++ class since it is being compiled with C semantics, and is not Objective-C++ aware. The .m compiler will see class and not know what to do. You can include Objective-C objects in C++ class definitions, and vice versa, as long as the source file is .mm.
1,080,805
1,080,836
C++ - how does Sleep() and cin work?
Just curious. How does the function Sleep() (declared in windows.h) actually work? Maybe not just that implementation, but any implementation. With that I mean: how is it implemented? How can it make the code "stop" for a specific time? I'm also curious about how cin >> and the like actually work. What do they do exactly? The only way I know to "block" something from continuing to run is with a while loop, but considering that that takes a huge amount of processing power in comparison to what's happening when you invoke methods to read from stdin (just compare a while (true) to a read from stdin), I'm guessing that isn't what they do.
The OS uses a mechanism called a scheduler to keep all of the threads or processes it's managing behaving nicely together. Several times per second, the computer's hardware clock interrupts the CPU, which causes the OS's scheduler to become activated. The scheduler then looks at all the processes that are trying to run and decides which one gets to run for the next time slice. The criteria it uses depend on each process's state and how much CPU time it has had recently. So if the current process has been using the CPU heavily, preventing other processes from making progress, the scheduler will make it wait and swap in another process so that it can do some work. More often, though, most processes are in a wait state. For instance, if a process is waiting for input from the console, the OS can look at the process's information and see which I/O ports it's waiting on. It can check those ports to see if they have any data for the process to work on. If they do, it can start the process up again, but if there is no data, that process gets skipped over for the current time slice. As for Sleep(): any process can notify the OS that it would like to wait for a while. The scheduler is then activated even before a hardware interrupt (which is also what happens when a process tries to do a blocking read from a stream that has no data ready to be read), and the OS makes a note of what the process is waiting for. For a sleep, the process is waiting for an alarm to go off, or it may simply yield again each time it's restarted until the timer is up. Since the OS only resumes processes after something causes it to preempt a running process, such as the process yielding or the hardware timer interrupt I mentioned, Sleep() is not very accurate; how accurate depends on the OS and hardware, but it's usually on the order of one or more milliseconds.
If more accuracy is needed, or very short waits, the only option is to use the busy loop construct you mentioned.
1,080,876
1,080,947
Adding a minimize button to a Qt dialog?
I have created a QDialog based app using Qt Creator and all is well other than the dialog has no minimize button. How can I add one? Is there a property in the designer that I can set?
You can't add the minimize button yourself as it is handled by the window manager. You can tell the window manager how your dialog should be handled using Window Manager hints. This is done using the windowFlags property of your widget. There's also an example demonstrating this. setWindowFlags(windowFlags() | Qt::WindowMinimizeButtonHint);
1,080,953
1,080,995
What is the simplest RTTI implementation for C++?
I'm trying to implement exception handling for an embedded OS and I'm stuck on how to detect the type of the thrown "exception" (to select the appropriate handler). The saving and restoring context parts of the exception handling are already done, but I can't have specific handlers since I can't detect the type of the thrown 'exception'. The standard RTTI implementation of C++ is too dependent on other libraries, and for that reason I'm currently considering it unavailable. Considering that my target is an embedded system and for that reason I can't generate much code, what is the smallest implementation of "Runtime Type Information" I can get (or make)? -- Edit -- I'm not working on the compiler; it's ia32-g++.
As you're working in an embedded environment, you presumably favour extremely minimal solutions and you can take advantage of non-standard or non-portable facts about your compiler. If a class is polymorphic (has at least one virtual function of its own) in C++, it probably has a pointer to a vtable embedded somewhere in it. It may be that the vtable pointer appears at the beginning of the object's layout in memory. This is true of many compilers, ones that use the C++ ABI - a related SO question here. If so, you might be able to get at the vtable like this: void *get_vtable(void *obj) { return *(reinterpret_cast<void **>(obj)); } Then you can compare the vtables of two pointers-to-objects to see if they point to the same type of object. So a "type switch" (which is what catch basically is) would do something like this: P p; Q q; if (get_vtable(caught) == get_vtable(&p)) { // it's a P... } else if (get_vtable(caught) == get_vtable(&q)) { // it's a Q... } You could hide that pattern in a CATCH macro. Important point - if you derive a class from a base, but the derived class does not override any virtual functions or add any new virtual functions, then the compiler could conceivably reuse the base class's vtable for the derived class. This means that in order to distinguish between two exception types, they must each override a virtual function, to ensure that they have their own vtables. Note that this is only a tiny fraction of what exception handling involves. There is also the small matter of unwinding the stack! You need to call the destructors of all objects on the stack when you jump to the handler. It's not just a matter of doing setjmp/longjmp.
1,081,040
1,081,047
Linked List: Is this solution good?
I was looking for a way to avoid starting from the head of the list each time I want to find a node, so I thought of assigning indexes to nodes, keeping a pointer to a random (not exactly random; see below) node and then finding the pointer that's closest to the index I want to find. Allow me to explain with code: // head and last are pointers to the first and last items of a doubly-linked list // current is a pointer that will change over time. It's used as a temporary pointer template <class T> Node<T>* List<T>::get_closest(Node<T>* node, int& difference) { int curr_to_i = current->index - node->index; int last_to_i = last->index - node->index; Node<T>* closest = node->index < abs(curr_to_i) ? head : current; closest = closest->index < abs(last_to_i) ? closest : last; difference = closest->index - node->index; return closest; } /* * This function adds a node with the given value at the given index. The node at that * index and all the following are moved, and the new node is inserted before them. */ template <class T> bool List<T>::add(T value, int index) { if (index < 0) { //Invalid index return false; } else if (index == last->index + 1) { push(value); return true; } else if (index > 0) { Node<T>* new_n = new Node<T>; new_n->value = value; new_n->index = index; int difference; Node<T>* closest = get_closest(new_n, difference); if (difference < 0) { for (int i = 0; i < abs(difference); i++) { current = current->previous; } } else if (difference > 0) { for (int i = 0; i < abs(difference); i++) { current = current->next; } } /* current now points to the node we want to move */ new_n->previous = current->previous; new_n->next = current; current->previous->next = new_n; current->previous = new_n; if (index == 0) { head = new_n; } new_n = new_n->next; while (new_n != NULL) { new_n->index++; new_n = new_n->next; } return true; } } Is this more efficient than starting from the head and advancing forward a number of times?
If you need to access elements in the middle of the list, then you're better off using an array. A list is an abstract data type (ADT) that can be implemented various ways. What you've essentially done is create a redundant representation that has the overhead of both methods. The advantage of a linked list is that inserts can be very fast at the head of the list: O(1) vs. O(n) for an array. However, since you have to maintain your indexes, you have O(n) overhead on inserts anyway. If you need indexing, just use an array. Simpler and faster.
1,081,250
1,081,262
Why does a%b produce SIGFPE when b is zero?
Today I was tracking down a floating point exception in some code I had just written. It took a little while to find because it was actually caused by taking an integer mod zero. Obviously doing anything mod zero is not going to be defined but I thought it was strange that the error was so misleading. What is it within the C++ modulo operator that would use floating point for two integers? (I'm using gcc 4.3.2) Here's a simple program to demonstrate the error. int main() { int a=3,b=0; int c=a%b; return 0; }
The operation triggers SIGFPE: SIG is a common prefix for signal names; FPE is an acronym for floating-point exception. Although SIGFPE does not necessarily involve floating-point arithmetic, there is no way to change its name without breaking backward compatibility. GDB is a bit clearer about this and calls it "Arithmetic exception": (gdb) run Starting program: /home/emil/float Program received signal SIGFPE, Arithmetic exception. 0x0804837d in main () at float.c:4 4 int c=a%b;