2,115,977
2,116,020
Reorganize Classes into Static Libraries
I am about to attempt reorganizing the way my group builds a set of large applications that share about 90% of their source files. Right now, these applications are built without any libraries whatsoever involved except for externally linked ones that are not under our control. The applications use the same common source files (we are not maintaining 5 versions of the same .h/.cpp files), but these are not built into any common library. So, at the moment, we are paying the price of building the same code over-and-over per application, each time we intend to release a version. To me, this sounds like a prime candidate for using libraries to capture the shared code and reduce build times. I do not have the option of using DLL's, so the approach is to use static libraries. I would like to know what tips you would have for how to approach this task. I have limited experience with creating/organizing static libraries, so even the basic suggestions towards organization/gotchas are welcome. Maybe even a good book recommendation? I have done a brief exercise by finding the entire subset of files that each application share in common. As a proof of concept, I took these files and placed them in a single "Common Monster" static library. Building the full application using this single static library certainly improves the build time for all of the applications, but should I leave it at this? The purpose of the library in this form is not very focused and seems like a lazy attempt at modularity. There is ongoing development with these applications, and I'm afraid this setup will cause problems further down the line.
It's very hard to give general guidelines in this area - how you structure libraries depends very much on how you use them. Perhaps if I describe my own code libraries this may help: One general purpose library containing code that I expect all applications will have at least a 50/50 chance of needing to use. This includes string utilities, regexes, expression evaluation, XML parsing and ODBC support. Conceivably this should be split up a bit, but it makes distributing my code in FOSS projects easier to keep it monolithic. A library supporting multi-threading, providing wrappers around threads, mutexes, semaphores etc. One supporting SQLite via its native interface, rather than via ODBC. A C++ web server wrapper round the Mongoose C web server. The general purpose library is used in all the stuff I write, the others in more specialised circumstances. Headers for each library are held in separate directories, as are the library binaries themselves (though they should probably be in a single lib directory).
2,116,126
2,116,186
How can I make the preprocessor insert linebreaks into the macro expansion result?
With C/C++ macros it's quite easy to generate long constructs automatically. For example, if I want a huge set of methods to never throw exceptions (a must for COM-exposed methods) I can do something like this:

    #define BEGIN_COM_METHOD \
        try {

    #define END_COM_METHOD \
            return S_OK; \
        } catch( exception& ) { \
            /* set IErrorInfo here */ \
            return E_FAIL; \
        }

To make such macros manageable one can use the "\" character to make the macro definition multiline and more readable. The problem is that sometimes code with such constructs will not compile - something will not expand as expected and invalid code will be presented to the compiler. Compilers usually have a "generate preprocessed file" option to show the developer the preprocessing result. But in the preprocessed file the macro is expanded onto one line and the result is barely readable. Is it possible to make the preprocessor keep the linebreaks present in the macro definition?
You can't do it. The replacement text extends to the end of the line where it is #defined, so it cannot contain newlines. If your problems with compilation are infrequent, you could run the preprocessed file through indent or something similar before compiling when that happens, to get more readable code.
2,116,128
2,116,174
Easier way to do callbacks for vectors (or maybe something else in the STL)? C++
I'm making a simple crime sim game. Throughout it I keep doing the same thing over and over:

    // vector<Drug*> drugSack;
    for (unsigned int i = 0; i < this->drugSack.size(); i++)
        this->sell(drugSack[i]);

Just one example. I hate having all these for loops all over the place omg QQ, anyway to do something like:

    drugSack->DoForAll((void*)myCallBack);

I'm not well versed in the STL.
Time to start learning the STL algorithms:

    #include <algorithm>
    ...
    std::for_each( drugSack.begin(), drugSack.end(),
                   std::bind1st( std::mem_fun( &ThisClass::Sell ), this ) );

The idea is to create an object, called a "functor", that can perform a certain action for each of the elements in the range drugSack.begin(), drugSack.end(). This functor can be created using STL constructs like std::mem_fun, resulting in a functor taking a ThisClass* and a Drug* argument, and a wrapper around it that will substitute/bind the ThisClass* to this.
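To see the idea end to end, here is a minimal, self-contained sketch of the same pattern. Note that std::bind1st and std::mem_fun were removed in C++17, so this version spells the functor out by hand; the Drug/Dealer names are made up for illustration and are not from the question.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Drug { int price; };

class Dealer {
public:
    int money = 0;
    void sell(Drug* d) { money += d->price; }

    // Hand-rolled functor: remembers which Dealer to call sell() on,
    // which is exactly what bind1st(mem_fun(...), this) used to build.
    struct Seller {
        Dealer* self;
        explicit Seller(Dealer* s) : self(s) {}
        void operator()(Drug* d) const { self->sell(d); }
    };

    void sellAll(std::vector<Drug*>& drugSack) {
        std::for_each(drugSack.begin(), drugSack.end(), Seller(this));
    }
};
```

The functor carries the object pointer as state, so for_each only ever has to supply the element.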
2,116,132
2,116,171
How to port C++ code to C++/CLI in Visual Studio?
I have an application written in native C++ which I'd like to get running on the .NET virtual machine. I was thinking of recompiling the C++ code as C++/CLI, using the Visual Studio 2008 compiler. Regrettably, I can't find any documentation on how to do this, hence my questions: Does this actually make sense? Am I attempting the impossible? Where can information on the topic be found?
A lot of native C++ code will actually just compile and run on C++/CLI. This is really a kind of hybrid compiler that can call native Win32 functions and use standard C libraries like OpenGL. You can even call COM interfaces directly (all the stuff you can do with a native C++ compiler). The .Net library is also available but for these you create managed classes (using the ref class keyword). You will use gcnew to allocate memory for these classes (from a garbage collected heap). Memory for your normal classes is still allocated using new and delete (from a standard, non garbage-collected heap). In short, you can migrate to .Net in bits and pieces, though there is still some friction when switching between managed and unmanaged classes. I found this book useful: Pro Visual C++/CLI.
2,116,221
2,116,263
Loop efficiency - C++
Beginner's question on loop efficiency. I've started programming in C++ (my first language) and have been using 'Principles and Practice Using C++' by Bjarne Stroustrup. I've been making my way through the earlier chapters and have just been introduced to the concept of loops. The first exercise regarding loops asks of me the following:

    The character 'b' is char('a'+1), 'c' is char('a'+2), etc. Use a loop to write out a table of
    characters with their corresponding integer values: a 97, b 98, ..., z 122

Although I used uppercase, I created the following:

    int number = 64; // integer value for @ sign, character before A
    char letter = number; // converts integer to char value
    int i = 0;
    while (i <= 25) {
        cout << ++letter << "\t" << ++number << endl;
        ++i;
    }

Should I aim for only having 'i' be present in a loop, or is it simply not possible when converting between types? I can't really think of any other way the above can be done, apart from having the character value being converted to its integer counterpart (i.e. the opposite of the current method) or simply not having the conversion at all and having letter store '@'.
You should aim for clarity first and not try to micro-optimize. You'd do better to rewrite that as a for loop:

    const int offsetToA = 65;
    const int numberOfCharacters = 26;
    for( int i = 0; i < numberOfCharacters; ++i ) {
        const int characterValue = i + offsetToA;
        cout << static_cast<char>( characterValue ) << characterValue << endl;
    }

And yes, you can convert between different types - that's called casting (the static_cast construct in the code above).
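For the lowercase table the exercise actually asks for (a 97 ... z 122), the same shape works with 'a' as the offset. A small sketch, collecting the output into a string so it can be inspected; the function name is made up:

```cpp
#include <cassert>
#include <string>

// Build the table "a 97\nb 98\n...\nz 122\n" requested by the exercise.
std::string letterTable() {
    std::string out;
    for (int c = 'a'; c <= 'z'; ++c) {
        out += static_cast<char>(c);  // the character itself
        out += ' ';
        out += std::to_string(c);     // its integer value
        out += '\n';
    }
    return out;
}
```

Using the character literal 'a' instead of the magic number 97 keeps the intent visible.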
2,116,235
2,116,252
TEMP Environment variable expansion for C++ (Windows)
I need to get the %TEMP% environment variable value as a string on the Windows platform. If I try to use any of the C/C++ methods (getenv(), …) to get this environment variable, the returned string contains "~". For example: C:\DOCUME~1\pkp\LOCALS~1\Temp. But I need the full string, as below: C:\Documents and Settings\pkp\Local Settings\Temp. If anyone knows a method to obtain this, please let me know.
Call GetLongPathName() on the short name.
2,116,434
2,116,649
Sorting std::list using std::set
I'm adding two different elements to both a std::list and a std::set, and I want the std::list to be sorted in the same order as the std::set. One way I tried: when the element is added to the std::set, find that element, get its index using std::distance(begin, found), and then insert the element at that index in the std::list. Is there any other way?
You should use std::map, with the data you put in the set as the key and the data you put in the list as the value. This way your list elements will be ordered.
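A minimal sketch of that suggestion: the "set" data becomes the map key, the "list" data the mapped value, and iterating the map visits the values in key order. The helper name and types are illustrative only.

```cpp
#include <map>
#include <string>
#include <vector>

// Walk the map in key order and collect the mapped values - this is
// the "list" kept sorted by the "set" ordering, for free.
std::vector<std::string> valuesInKeyOrder(const std::map<int, std::string>& m) {
    std::vector<std::string> out;
    for (std::map<int, std::string>::const_iterator it = m.begin(); it != m.end(); ++it)
        out.push_back(it->second);
    return out;
}
```

Insertion order into the map does not matter; the ordering comes from the key comparison, just as with std::set.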
2,116,490
2,116,563
boost::intrusive_ptr constructor ambiguity using a class' 'this' pointer
The offending code:

    template<typename T>
    class SharedObject {
    public:
        typedef boost::intrusive_ptr<T> Pointer;
        typedef boost::intrusive_ptr<T const> ConstPointer;

        inline Pointer GetPointer() {
            return Pointer(this); // Ambiguous call here
        }

        inline ConstPointer GetPointer() const {
            return ConstPointer(this);
        }
        ...

and used like this:

    template <typename T>
    class SomeClass : public SharedObject<SomeClass<T> > {
    public:
        static inline boost::intrusive_ptr<SomeClass<T> > Create() {
            return (new SomeClass)->GetPointer();
        }
    };

    int main() {
        auto v = SomeClass<int>::Create();
    }

GCC (4.4.1) with boost 1.41 gives this error upon instantiating the first (non-const) version of GetPointer():

    error: call of overloaded ‘intrusive_ptr(SharedObject<SomeClass<int> >* const)’ is ambiguous
    boost/smart_ptr/intrusive_ptr.hpp:118: note: candidates are:
        boost::intrusive_ptr<T>::intrusive_ptr(boost::intrusive_ptr<T>&&) [with T = SomeClass<int>] <near match>
    boost/smart_ptr/intrusive_ptr.hpp:94: note:
        boost::intrusive_ptr<T>::intrusive_ptr(const boost::intrusive_ptr<T>&) [with T = SomeClass<int>] <near match>
    boost/smart_ptr/intrusive_ptr.hpp:70: note:
        boost::intrusive_ptr<T>::intrusive_ptr(T*, bool) [with T = SomeClass<int>] <near match>

To my less than arcane skills in C++, I can't see why there is any ambiguity at all. The two candidates at lines 118 and 94 take an existing intrusive_ptr rvalue or lvalue reference, which SharedObject::this certainly is not. The final candidate, however, is a perfect match (the bool argument is optional). Anyone care to enlighten me as to what the problem is?

EDIT + answer: I finally realized that in

    inline Pointer GetPointer() {
        return Pointer(this); // Ambiguous call here
    }

this refers to SharedObject, while the Pointer typedef is for SomeClass. (Which is pretty much what Butterworth pointed out right away.)
    inline Pointer GetPointer() {
        return Pointer(static_cast<T*>(this));
    }

Since I know this to really be a SomeClass, inheriting from SharedObject, a static_cast makes the template class go 'round.
When you say:

    typedef boost::intrusive_ptr<T> Pointer;

you are declaring a type which is an intrusive pointer to an int (because T is an int at that point) when the template is instantiated in your code. Your SharedObject class is not an int, so you can't instantiate such an intrusive pointer using this.

Edit: OK, I misunderstood your code, I'll try again. At:

    return Pointer(this); // Ambiguous call here

this is a SharedObject, as per the error messages; however, the Pointer is typedefed to a SomeClass, I think. Your code is incredibly hard to understand - whatever it is you are trying to do, there must be a simpler way. And you seem to be missing a virtual destructor (and maybe a virtual function) in the base class.
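The underlying pattern (CRTP) can be shown without boost at all. In this stripped-down sketch the template parameter T is the derived class, so inside the base, this has the base type and a static_cast is what recovers the derived type - the same fix the asker arrived at. Class names here are invented for illustration:

```cpp
#include <string>

// CRTP base: T is the class deriving from us, so static_cast<T*>(this)
// converts the base's 'this' back to the derived type.
template <typename T>
class SharedObject {
public:
    T* self() { return static_cast<T*>(this); }
};

class Widget : public SharedObject<Widget> {
public:
    std::string name() { return "widget"; }
};
```

Without the cast, self() would have to return SharedObject<Widget>*, which is exactly the mismatch that made the intrusive_ptr constructor call ambiguous.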
2,116,782
2,116,793
C++ Inheritance, calling a derived function from the base class
How can I call a derived function from a base class? I mean, being able to replace one function from the base to the derived class. Ex.:

    class a {
    public:
        void f1();
        void f2();
    };

    void a::f1() {
        this->f2();
    }
    /* here goes the a::f2() definition, etc */

    class b : public a {
    public:
        void f2();
    };
    /* here goes the b::f2() definition, etc */

    void lala(int s) {
        a *o; // I want this to be class 'a' in this specific example
        switch(s) {
        case 0: // type b
            o = new b();
            break;
        case 1: // type c
            o = new c(); // y'a know, ...
            break;
        }
        o->f1(); // I want this f1 to call f2 on the derived class
    }

Maybe I'm taking a wrong approach. Any comment about different designs around would also be appreciated.
Declare f2() virtual in the base class.

    class a {
    public:
        void f1();
        virtual void f2();
    };

Then whenever a derived class overrides f2(), the version from the most derived class will be called, depending on the type of the actual object the pointer points to, not the type of the pointer.
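A compilable sketch of the whole mechanism, reusing the question's class names but returning strings instead of printing so the dispatch can be observed:

```cpp
#include <string>

class a {
public:
    virtual ~a() {}                          // virtual destructor for polymorphic deletion
    std::string f1() { return f2(); }        // base code calls through the virtual...
    virtual std::string f2() { return "a"; }
};

class b : public a {
public:
    std::string f2() { return "b"; }         // ...and the override is what actually runs
};
```

Calling f1() through an a* that points at a b executes b::f2(), which is precisely the behavior the question wants from lala().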
2,116,828
2,116,871
How to log a C++ exception
Do you know how I can log the exception? Right now the message in the catch statement is printed, but I cannot understand why Manager.Gere() isn't called successfully.

    try {
        Manager.Gere(&par, &Acc, coman, comando, RunComando, log, &parti, comandosS, RunComandosSuper, true);
    } catch (...) {
        log("ERROR ENTER GERE*****");
    }

    Perif::Gere(CString *par, CString *Acc, HANDLE coman, HANDLE comando, HANDLE RunComando, Log &log, CString *parti, HANDLE comandosS, HANDLE RunComandosSuper, bool first)
    {
        log->LogD("Perif :: Gere Enter****** "); // It doesn't get printed
    }
The first thing you need to do is find out which exceptions Manager.Gere can throw. Then catch them specifically, like catch (FirstExceptionGereThrows &exc), and when you catch all possible exceptions you'll know what is failing in Manager.Gere.

    catch (FirstException &exc) {
        log << "Failed because FirstException\n";
    } catch (SecondException &exc) {
        log << "Failed because SecondException\n";
    }

Afterwards, if you are lucky, the exceptions thrown by Manager.Gere may include some extra info about the crash which you could log as well.

    catch (FirstException &exc) {
        log << "Failed because FirstException: " << exc.what() << "\n";
    }
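When the concrete exception types aren't known, a common fallback is to catch the standard hierarchy from most to least specific and log what(). A small self-contained sketch of that ladder (the function and messages are invented for illustration; it returns the log line instead of writing it):

```cpp
#include <exception>
#include <stdexcept>
#include <string>

// Run a task and return a log message describing how it ended.
// Catch order matters: most specific type first, catch-all last.
std::string runAndLog(void (*task)()) {
    try {
        task();
        return "ok";
    } catch (const std::runtime_error& e) {
        return std::string("runtime_error: ") + e.what();
    } catch (const std::exception& e) {
        return std::string("exception: ") + e.what();
    } catch (...) {
        return "unknown exception";     // non-std exceptions carry no message
    }
}
```

Anything derived from std::exception at least yields a what() string; only the final catch (...) is truly silent, which is why logging just "ERROR" from it loses all the detail.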
2,116,959
2,117,123
how to write a function Click() for dynamic created button?
Trying to write a simple VCL program for educational purposes (dynamically created forms, controls etc). I have this sample code:

    void __fastcall TForm1::Button1Click(TObject *Sender)
    {
        TForm* formQuiz = new TForm(this);
        formQuiz->BorderIcons = TBorderIcons() << biSystemMenu >> biMinimize >> biMaximize;
        formQuiz->Position = TPosition::poDesktopCenter;
        formQuiz->Width = 250;
        formQuiz->Height = 250;
        formQuiz->Visible = true;

        TButton* btnDecToBin = new TButton(formQuiz);
        btnDecToBin->Parent = formQuiz;
        btnDecToBin->Left = 88;
        btnDecToBin->Top = 28;
        btnDecToBin->Caption = "Dec to Bin";
        btnDecToBin->Visible = true;
    }

I wonder how I can write a function for the dynamically created button, so it would be called when the button is clicked. In this example I need a 'btnDecToBin->Click();' func, but I don't know where I should place it. Inside 'void __fastcall TForm1::Button1Click(TObject *Sender){}'? I will appreciate any input, some keywords for google too.
You could do two things: you could either create an action and associate it with the button, or you could make a function like so:

    void __fastcall TForm1::DynButtonClick(TObject *Sender)
    {
        // Find out which button was pressed:
        TButton *btn = dynamic_cast<TButton *>(Sender);
        if (btn)
        {
            // Do action here with button (btn).
        }
    }

You bind it to the button instance by setting the OnClick property:

    btnDecToBin->OnClick = DynButtonClick;

Please note that the function is inside the form Form1. This works due to the nature of closures (a compiler-specific addition). The problem comes if you delete Form1 before formQuiz without removing the reference to the click event. In many ways it might be a cleaner solution to use an Action in this case.

Edit: Another way to do this, if you have a standard layout for your quiz forms, is to make a custom TQuizForm class inheriting from TForm. That way you wouldn't have to bind the event each time you create the form.
2,117,042
2,117,246
Capturing real time images from a network camera
What is the best way to capture streamed MJPEG from a network IP camera? I'd like to get frames and process them, using c++ (or python extended with c++). Is OpenCV my best option?
Apart from OpenCV, you can use mplayer with -vo yuv4mpeg redirected to a pipe to get a stream of uncompressed YUV images. You can create the mplayer process and pipe from C++. Another way is to use an RTSP library (your IP camera probably uses RTSP as its protocol).
2,117,173
2,117,187
How to enable the mouse in C++ program under DOS using DJGPP?
I've been using DJGPP for the first time recently and can't seem to enable mouse support. What's the best way? Thanks for any help.
Gosh, this takes me back! You need the software interrupt 33H - see http://www.sentex.net/~ajy/mouseint.html, and a tutorial of sorts at http://www.writeka.com/emage/mouse_events.html.
2,117,312
18,447,136
How can I convert a string into a ZZ number?
I'm using the NTL library to implement the ElGamal encryption/decryption algorithm. I've got it to the point where it's working, but the algorithm wants the message to be converted to integers so it can be encrypted. So if I input a number like 1234 everything works OK, but how would I go about converting a C++ string (std::string) to a ZZ number and then back from that ZZ number to a string?

LE: ZZ is a class that represents a large number. Ex: 18287348238476283658234881728316274273671623781254124517353

So basically I'm looking to take "Hello World", for example, run it char by char, and get the ASCII codes of the chars, so I'll get a number: "72 101 108 108 111 32 87 111 114 108 100". Then I need to convert this number back to the string "Hello World". Or maybe there's a better way.
Here is an easy way to do it:

    std::string str("1234567890");
    NTL::ZZ number(NTL::INIT_VAL, str.c_str());

Now notice that:

    std::cout << str << std::endl;    // prints 1234567890
    std::cout << number << std::endl; // prints 1234567890
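That handles decimal digit strings. For arbitrary text like "Hello World", the usual trick is to treat the bytes as digits of a base-256 number. A sketch of the round trip using plain C++ with an unsigned long long (so it only fits strings up to 8 bytes - with NTL's ZZ the same loop works for any length); the function names are invented:

```cpp
#include <string>

// Pack a short string into one integer, base 256 (most significant byte first).
unsigned long long packString(const std::string& s) {  // s must fit in 8 bytes
    unsigned long long n = 0;
    for (std::string::size_type i = 0; i < s.size(); ++i)
        n = n * 256 + static_cast<unsigned char>(s[i]);
    return n;
}

// Reverse the packing: peel off base-256 digits back into characters.
std::string unpackString(unsigned long long n) {
    std::string s;
    while (n) {
        s.insert(s.begin(), static_cast<char>(n % 256));
        n /= 256;
    }
    return s;
}
```

This keeps the message as one integer (what ElGamal needs) rather than a sequence of separate ASCII codes, which would be ambiguous to split apart again.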
2,117,452
2,117,489
Visual Studio linking error LNK2005 and LNK2020
I'm using Visual Studio 2003 and I'm getting the following linking error in my project:

    Linking...
    LINK : warning LNK4075: ignoring '/EDITANDCONTINUE' due to '/INCREMENTAL:NO' specification
    msvcrtd.lib(MSVCR71D.dll) : error LNK2005: _fprintf already defined in LIBCMTD.lib(fprintf.obj)
    C:\Documents and Settings\mz07\Desktop\project\HLconsoleExample\Debug\HLconsoleExample.exe : fatal error LNK1169: one or more multiply defined symbols found

I then added libcmtd.lib to the "ignore specific library" line and got another error:

    Linking...
    LINK : warning LNK4075: ignoring '/EDITANDCONTINUE' due to '/INCREMENTAL:NO' specification
    LINK : error LNK2020: unresolved token (0A000037) _CxxThrowException
    LINK : error LNK2020: unresolved token (0A000039) delete
    LINK : fatal error LNK1120: 2 unresolved externals

This is the beginning of my main file:

    #include <cstdio>
    #include <iostream>
    #if defined(WIN32)
    #  include <conio.h>
    #else
    #  include "conio.h"
    #endif
    #include <HL/hl.h>
    #include <HD/hd.h>
    #include <HDU/hduVector.h>
    #include <HDU/hduError.h>
    ...
    int main(int argc, char *argv[])
    {
        HHD hHD;
        HHLRC hHLRC;
        HDErrorInfo error;
        ...
    }

I included all the libraries I'm using in the Linker properties. Here is its Command Line output:

    /OUT:"C:\Documents and Settings\mz07\Desktop\project\HLconsoleExample\Debug\HLconsoleExample.exe" /INCREMENTAL /NOLOGO /NODEFAULTLIB:"libcmtd.lib" /DEBUG /ASSEMBLYDEBUG /PDB:"C:\Documents and Settings\mz07\Desktop\project\HLconsoleExample\Debug/HLconsoleExample.pdb" /FIXED:No hl.lib hd.lib HDUD.lib kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib "\Program Files\SensAble\3DTouch\lib\hl.lib" "\Program Files\SensAble\3DTouch\lib\hd.lib" "\Program Files\SensAble\3DTouch\utilities\src\Hdu\Debug\HDUD.lib"

I am new to C++ and I don't really understand how linking works :) so any help is appreciated.
You are linking a .lib whose code was compiled with an incompatible compiler setting. The problem one is Project + Properties, C/C++, Code Generation, Runtime library. /MD is not compatible with /MT. You'll either have to rebuild the .libs to match your .exe project setting or the other way around.
2,117,488
2,119,127
Extract audio from video as wav
I know there is a question similar to mine: Extract wav file from video file. I am new to C++ and understand that the COM library + DirectX are needed for video and audio. I've been looking for tutorials and sample code but with little success. My question is: how do I code the application to take a video file (any type) and save the extracted audio as .wav in my application, rather than using other applications such as GraphEdit or VirtualDub?
I'll second the motion to just use a build of ffmpeg to perform the audio extraction. It can be done in one easy command, as opposed to most likely hundreds of lines of code (if you're going to check for all of the possible problems that could happen when dealing with different video formats and codecs):

    ffmpeg -i video.avi -vn soundfile.wav

You could use libavformat and libavcodec (the libraries behind ffmpeg) to do the same thing, but unless you need to do some processing on the raw audio before outputting to wav, there would be nothing to gain except knowledge. ffmpeg is nice because the executable contains all of the audio and video decoders you'll probably ever need, so the solution is highly portable. You don't have to install codecs or anything. The input video file can be in any format or codec that ffmpeg supports, and you don't have to bother with treating them differently in your code. From C++ you can call ffmpeg by building the command line string in your code and kicking off the process from your code (being new to C++, you'll probably need to research how to do this, but it's pretty easy).
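A minimal sketch of the "build the command line and kick off the process" step, assuming ffmpeg is on the PATH. The function names are invented; std::system is the simplest (if crude) way to launch a process portably:

```cpp
#include <cstdlib>
#include <string>

// Compose the exact ffmpeg invocation shown above.
std::string buildExtractCommand(const std::string& video, const std::string& wav) {
    return "ffmpeg -i " + video + " -vn " + wav;
}

// Run it; returns the process exit status from std::system.
int extractAudio(const std::string& video, const std::string& wav) {
    return std::system(buildExtractCommand(video, wav).c_str());
}
```

In real code you would want to quote or validate the file names before handing them to the shell.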
2,117,536
2,119,919
Creating a library of template functions
I've been developing a library of mostly template functions and managed to keep things organized (to some extent) in the following manner:

    // MyLib.h
    class MyLib {
        template<class T> static void Func1() { }
        template<class T> static void Func2() { }
    };

And obviously calls would be made like this:

    MyLib::Func1();

As you can see, this can get quite ugly as more functions are added. At the very least, I'd like to separate it into different files! I initially considered defining batches of functions in separate files in the MyLib namespace and then using a MyLib.h to consolidate all of them, but I kept getting truckloads of linker errors - of course, I can take a closer look at this approach if it's recommended. Any thoughts?

PS: Since most of these functions have different objectives, it doesn't make sense to group them under a class from which we'd instantiate objects. I've used a class here so I won't have to worry about the order in which I've defined the functions (there is interdependence among functions within MyLib as well).

Linker Errors: So the basic structure is like this: I have two classes (say A & B) which compile to static libraries, and a master application which runs instances of these classes. These classes A & B use functions in MyLib. When A & B are compiling I get the LNK4006 warning, which states that symbols belonging to MyLib have already been defined in an OBJ file within the project and it's ignoring them. When it comes down to the application, it becomes an LNK2005 error, which states that they're already defined in the OBJ files of A & B.

UPDATE: Thank you Mike & Mathieu for the inline idea - it was the problem! Except for one issue: I have some template functions which I've explicitly specialized, and these are causing the "already defined" error (LNK2005):

    template<class t> int Cvt(){}
    template<> int Cvt<unsigned char>(){return 1;}
    template<> int Cvt<char>(){return 2;}
    template<> int Cvt<unsigned short>(){return 3;}

Any ideas?
Conclusion: Solved the explicit specialization problem by defining the template functions in a separate file - thanks for the help!
You should prefer a namespace over your class with static methods:

- a namespace offers the possibility of being spread across several files, one per logical group of methods
- the namespace may be omitted: either because ADL kicks in or with using myNamespace::MyFunc; (note: it's bad practice to write using namespace myNamespace;, and you should shun the practice)

Now, let's speak of organization:

- it's good practice to have your file hierarchy shadow the namespace hierarchy [1]
- it's good practice to split your methods into logical groups, so that the user does not have to include the whole world just because he wanted Hello, World! to be printed; commodity headers can help though (i.e., headers that do a bunch of includes for lazy programmers to use)

[1] Here is what I mean:

    #include "lib/string/manip.hpp" // Okay, this file comes from "lib"

    int main(int argc, char* argv[]) {
        std::string s;
        lib::string::manip(s); // Same hierarchy, easy to remember the header
        return 0;
    }

A motivating example? Boost does it (with commodity headers). And what's more, this does not cost much: just replace class with namespace and remove the static keywords - that's all, folks.

For the linker problem: all methods that are not templated should either be declared inline (try to avoid it unless they're one-liners) or be defined outside of the header (in a separate .cpp file).

UPDATE: The problem with template specialization is that you end up defining a now "normal" method: there is nothing template about it any longer once you've fixed each and every parameter. The solution is thus to do as you would for normal functions: declaration in a header file and definition in a source file (and thus only once).

To be a bit more specific about this strange error: the problem with C++ is that each source file is compiled in isolation: the preprocessor takes the includes and actually creates a single text file that contains every included file (in order) and then your source at the end.
The compiler takes this file and produces a ".o" file (for gcc). Then the linker kicks in and tries to actually make a library (or binary) out of all these ".o" files, and it checks that each method is only defined once, because otherwise how would it choose between the multiple definitions (unfortunately it does not check whether they are equivalent or not...)? There is a special allowance for template methods and classes though, and it picks one (at random) among all of the instantiations (one instantiation for each combination of template parameters). Of course, this assumes that all of them are identical, and you might end up with quite a headache for something like:

    // foo.h
    template <class T> int foo(T) { return 10; }

    // foo.cpp
    #include "foo.h"
    char a;
    std::cout << foo(a) << std::endl;

    // bar.cpp
    #include "foo.h"
    template <> int foo<char>(char) { return 20; }
    char b;
    std::cout << foo(b) << std::endl;

Both lines will print the same output; whether it is 10 or 20 is unknown though, and could change between builds!!!
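Alternatively, if the specializations are to stay in the header, marking them inline also satisfies the one-definition rule. A sketch of the asker's Cvt functions in that form (the primary template is given a return value here, which the original snippet was missing):

```cpp
// Header-only version: an explicit specialization is an ordinary
// function, so it must be 'inline' to be defined in a header that is
// included by several translation units.
template <class T> int Cvt() { return 0; }               // primary fallback
template <> inline int Cvt<unsigned char>()  { return 1; }
template <> inline int Cvt<char>()           { return 2; }
template <> inline int Cvt<unsigned short>() { return 3; }
```

Either approach works; declaring in the header and defining once in a .cpp just keeps the header smaller.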
2,117,597
2,295,570
Reading response data from TCppWebBrowser in Borland C++Builder
How do I access the data returned on a web page using the TCppWebBrowser component in Borland C++Builder 6.0? I have succeeded in posting data using the sample at: http://edn.embarcadero.com/article/27519
One of my colleagues has found an answer for me at: cboard.cprogramming.com/cplusplus-programming/… This works very well.
2,118,047
2,118,096
Returning reference to a pointer- C++
Consider the following class.

    class mapping_items {
    public:
        mapping_items() {}

        void add(const mapping_item* item) {
            items_.push_back( item );
        }

        size_t count() const { return items_.size(); }

        const mapping_item& find(const std::string& pattern) {
            const mapping_item* item = // iterate vector and find item;
            return *item;
        }

    private:
        mapping_items(const mapping_items&);            // not allowed
        mapping_items& operator=(const mapping_items&); // not allowed
        std::vector<const mapping_item*> items_;
    };

The C++ FAQ says, "Use references when you can, and pointers when you have to." So in the above example, should I return const mapping_item& or const mapping_item*? The reason why I chose mapping_item& is that there will always be a default return value available; I will never have null returns. So a reference makes it clear that it can't be null. Is this the correct design?
There is a problem - what happens if your find() function fails? If this is expected never to happen, you are OK returning a reference (and raise an exception if it happens despite the fact it shouldn't). If on the other hand it may happen (e.g. looking up a name in an address book), you should consider returning a pointer, as a pointer can be NULL, indicating the find failed.
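The two contracts can be put side by side in a small sketch; the names are borrowed from the question, but the linear search and the exception choice are illustrative assumptions:

```cpp
#include <stdexcept>
#include <string>
#include <vector>

struct mapping_item { std::string pattern; };

// Pointer version: a null result is the "not found" signal.
const mapping_item* findPtr(const std::vector<mapping_item>& items,
                            const std::string& pattern) {
    for (std::vector<mapping_item>::size_type i = 0; i < items.size(); ++i)
        if (items[i].pattern == pattern) return &items[i];
    return nullptr;
}

// Reference version: there is no null reference, so a failed lookup
// has to become an exception (or be impossible by construction).
const mapping_item& findRef(const std::vector<mapping_item>& items,
                            const std::string& pattern) {
    const mapping_item* p = findPtr(items, pattern);
    if (!p) throw std::out_of_range("no item matches " + pattern);
    return *p;
}
```

So the reference return is the right design exactly when "not found" is a programming error rather than an expected outcome.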
2,118,090
2,118,172
What are the "things to know" when diving into multi-threaded programming in C++
I'm currently working on a wireless networking application in C++ and it's coming to a point where I'm going to want to multi-thread pieces of software under one process, rather than have them all in separate processes. Theoretically, I understand multi-threading, but I've yet to dive in practically. What should every programmer know when writing multi-threaded code in C++?
I would focus on designing the thing to be as partitioned as possible, so that you have a minimal amount of shared state across threads. If you make sure you don't have statics and other resources shared among threads (other than those that you would be sharing if you had designed this with processes instead of threads), you will be fine. Therefore, while yes, you have to keep in mind concepts like locks, semaphores, etc., the best way to tackle this is to try to avoid them.
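A small sketch of that "partition instead of lock" idea, using std::thread (C++11, so later than this answer): each thread gets its own slice of the input and its own output slot, so there is no shared mutable state and no lock at all. Names and the summation task are invented for illustration.

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <thread>
#include <vector>

// Sum 'data' on 'nthreads' threads with no locks: each thread writes
// only to its own element of 'partial', results are merged after join().
long long parallelSum(const std::vector<int>& data, unsigned nthreads) {
    std::vector<long long> partial(nthreads, 0);
    std::vector<std::thread> pool;
    std::size_t chunk = data.size() / nthreads + 1;
    for (unsigned t = 0; t < nthreads; ++t) {
        std::size_t lo = t * chunk;
        std::size_t hi = std::min(data.size(), lo + chunk);
        pool.emplace_back([&, lo, hi, t] {
            for (std::size_t i = lo; i < hi; ++i) partial[t] += data[i];
        });
    }
    for (auto& th : pool) th.join();
    return std::accumulate(partial.begin(), partial.end(), 0LL);
}
```

Because no two threads ever touch the same element, the only synchronization needed is the join() itself.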
2,118,238
2,118,278
storing a type's type for processing variable argument lists
Is it possible to do something along the lines of:

    type t = int; // this would be a function which identifies what type the next argument is
    if( t == int )
        printf( "%d", va_arg( theva_list, t ) );

in a relatively trivial way? The only object I know of which can hold a type is type_info, and I can't work out how to use it in this way. Thanks, Patrick
Generally speaking, no. Types can only really be stored, manipulated, etc., at compile time. If you want something at run time, you have to convert (usually via rather hairy metaprogramming) the type to a value of some sort (e.g., an enumeration). Perhaps it would be better if you gave a somewhat higher level description of what you're really trying to accomplish here -- the combination of variable argument lists with an attempt at "switch on type" sounds like a train crash about to happen...
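The "convert the type to a value" idea usually means tagging each vararg with an enum, the same way printf's format string tags its arguments. A minimal sketch, with made-up names, that reads tag/value pairs until an end marker:

```cpp
#include <cstdarg>
#include <string>

enum ArgType { ARG_INT, ARG_DOUBLE, ARG_END };

// Each value in the ... list is preceded by an ArgType tag telling
// va_arg what to pull out next - the run-time stand-in for a type.
std::string formatArgs(int first, ...) {
    std::string out;
    va_list ap;
    va_start(ap, first);
    for (int tag = first; tag != ARG_END; tag = va_arg(ap, int)) {
        if (!out.empty()) out += ' ';
        if (tag == ARG_INT)
            out += std::to_string(va_arg(ap, int));
        else if (tag == ARG_DOUBLE)
            out += std::to_string(va_arg(ap, double));
    }
    va_end(ap);
    return out;
}
```

Note that va_arg must name the promoted type (int, double), never char or float; getting the tag and the actual argument out of sync is undefined behavior, which is the "train crash" the answer warns about.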
2,118,245
2,118,394
Is there any emulator programming tutorial or guide?
Possible duplicate of How do Emulators Work and How are they Written? I want to program an emulator (maybe NES or C64, I haven't decided yet). I know there are lots of them, so many may ask why someone would want to make one from scratch, but I want to include some specific characteristics in it, and also build it for the sake of building it myself. I'd like to read a guide from someone who has built one and can pass on the experience; it doesn't have to be platform-specific (better if it's not), since I know how to program - what I don't know is how to emulate.
Both the NES and C64 are based on the 8 bit 65xx processor. Writing an instruction set emulator for that chip is pretty trivial since the instruction set is small. The larger issue is to emulate the other support hardware, video controller, etc. It's been a long time since I programmed a C64, and I never programmed an NES, so my memory is foggy. As I recall the C64 had a one or two chip solution for video and interfaces.
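To make "instruction set emulator" concrete, here is a toy fetch-decode-execute loop for three real 65xx opcodes (LDA immediate, INX, NOP). A full emulator adds the remaining opcodes, flags, addressing modes, and cycle counting, but the skeleton stays this shape; the struct layout is an illustrative choice:

```cpp
#include <cstdint>
#include <vector>

// Minimal 65xx-style CPU core: accumulator, X register, program counter,
// and a flat 64 KB memory image.
struct Cpu {
    uint8_t a = 0, x = 0;
    uint16_t pc = 0;
    std::vector<uint8_t> mem = std::vector<uint8_t>(0x10000, 0);

    // Execute exactly one instruction at pc.
    void step() {
        uint8_t op = mem[pc++];          // fetch
        switch (op) {                    // decode + execute
            case 0xA9: a = mem[pc++]; break; // LDA #imm
            case 0xE8: ++x;           break; // INX
            case 0xEA:                break; // NOP
        }
    }
};
```

The support hardware the answer mentions (video, I/O chips) is then emulated by intercepting reads/writes to the memory addresses those chips are mapped at.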
2,118,422
2,118,718
Scope of C libraries in C++ - <X.h> vs <cX>
The C++ Programming Language : Special Edition states on page 431 that... For every header < X.h > defining part of the C standard library in the global namespace and also in namespace std, there is a header < cX > defining the same names in the std namespace only. However, when I use C headers in the < cX > style, I don't need to qualify the namespace. For example... #include <cmath> void f() { double var = sqrt( 17 ); } This would compile fine. Even though the book says that using the < cX > header defines names in the std namespace only, you are allowed to use those names without qualifying the namespace. What am I missing here? P.S. Using the GNU.GCC compiler
Stephan T. Lavavej, a member of the MSVC team, addresses the reality of this situation (and some of the refinements to the standard) in this comment on one of his blog postings (http://blogs.msdn.com/vcblog/archive/2008/08/28/the-mallocator.aspx#8904359): > also, <cstddef>, <cstdlib>, and std::size_t etc should be used! I used to be very careful about that. C++98 had a splendid dream wherein <cfoo> would declare everything within namespace std, and <foo.h> would include <cfoo> and then drag everything into the global namespace with using-declarations. (This is D.5 [depr.c.headers].) This was ignored by lots of implementers (some of which had very little control over the C Standard Library headers). So, C++0x has been changed to match reality. As of the N2723 Working Paper, http://open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2723.pdf , now <cfoo> is guaranteed to declare everything within namespace std, and may or may not declare things within the global namespace. <foo.h> is the opposite: it is guaranteed to declare everything within the global namespace, and may or may not declare things within namespace std. In reality and in C++0x, including <cfoo> is no safeguard against everything getting declared in the global namespace anyways. That's why I'm ceasing to bother with <cfoo>. This was Library Issue 456, http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-defects.html#456 . (C++0x still deprecates the <foo.h> headers from the C Standard Library, which is hilarious.) I'm in 100% agreement with Lavavej, except I never tried to be very careful about using the <cfoo> style headers even when I first started using C++ - the standard C ones were just too ingrained - and there was never any real world problem using them (and apparently there was never any real world benefit to using the <cfoo> style headers).
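In practice this means code like the following compiles on GCC whether or not you qualify the name; the std-qualified form is the only one the standard actually guarantees for &lt;cmath&gt;:

```cpp
#include <cmath>

double f() {
    double a = std::sqrt(17.0); // guaranteed to work with <cmath>
    double b = ::sqrt(17.0);    // also compiles on GCC (and most others),
                                // because the names leak into the global
                                // namespace as described above
    return a + b;
}
```

If you want code that is portable to a hypothetical strict implementation, always write std::sqrt when you include the &lt;cX&gt; headers.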
2,118,493
2,118,818
Asynchronous write to socket and user values (boost::asio question)
I'm pretty new to boost. I needed a cross-platform low-level C++ network API, so I chose asio. Now, I've successfully connected and written to a socket, but since I'm using the asynchronous read/write, I need a way to keep track of the requests (to have some kind of IDs, if you will). I've looked at the documentation/reference, and I found no way to pass user data to my handler. The only option I can think of is creating a special class that acts as a callback and keeps track of its id, then passing it to the socket as a callback. Is there a better way? Or is this the best way to do it?
The async_xxx functions are templated on the type of the completion handler. The handler does not have to be a plain "callback": it can be anything that exposes the right operator() signature. You should thus be able to do something like this: // Warning: Not tested struct MyReadHandler { MyReadHandler(Whatever ContextInformation) : m_Context(ContextInformation){} void operator()(const boost::system::error_code& error, std::size_t bytes_transferred) { // Use m_Context // ... } Whatever m_Context; }; boost::asio::async_read(socket, buffer, MyReadHandler(the_context)); Alternatively, you could also have your handler as a plain function and bind the context at the call site, as described in the asio tutorial. The example above would then be: void HandleRead( const boost::system::error_code& error, std::size_t bytes_transferred, Whatever context ) { //... } boost::asio::async_read(socket, buffer, boost::bind(&HandleRead, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred, the_context ));
2,118,541
2,121,434
Check if parameter pack contains a type
I was wondering if C++0x provides any built-in capabilities to check if a parameter pack of a variadic template contains a specific type. Today, boost::mpl::contains can be used to accomplish this if you are using boost::mpl::vector as a substitute for variadic templates proper. However, it has serious compilation-time overhead. I suppose C++0x has compiler-level support for std::is_same. So I was thinking whether a generalization like the one below is also supported in the compiler. template <typename... Args, typename What> struct is_present { enum { value = (What in Args...)? 1 : 0 }; };
No, you have to use (partial) specialization with variadic templates to do compile-time computations like this: #include <type_traits> template < typename Tp, typename... List > struct contains : std::true_type {}; template < typename Tp, typename Head, typename... Rest > struct contains<Tp, Head, Rest...> : std::conditional< std::is_same<Tp, Head>::value, std::true_type, contains<Tp, Rest...> >::type {}; template < typename Tp > struct contains<Tp> : std::false_type {}; There is only one other intrinsic operation for variadic templates and that is the special form of the sizeof operator which computes the length of the parameter list e.g.: template < typename... Types > struct typelist_len { const static size_t value = sizeof...(Types); }; Where are you getting "it has serious compilation-time overhead" with boost mpl from? I hope you are not just making assumptions here. Boost mpl uses techniques such as lazy template instantiation to try and reduce compile-times instead of exploding like naive template meta-programming does.
2,118,695
2,118,998
Intel Performance Primitive (IPP) runtime error
I have source code that was not written by me, and I cannot contact the author. It is written in C++ and requires libjpeg, boost, and the Intel Performance Primitives. Compilation was a chore, but after days of problem solving, it compiles. Now, I get the following runtime error: error while loading shared libraries: libippi.so.5.1: cannot open shared object file: No such file or directory. The error occurs immediately regardless of the command line arguments. I downloaded the trial version of IPP for Ubuntu 9.04. Under /opt/intel/ipp/6.1.2.051/ia32/sharedlib/, I see a bunch of files beginning with lib* and libippi*, including libippi.so.6.1. So I thought I would try to create a link libippi.so.5.1 that points to libippi.so.6.1, but that doesn't work. I tried creating a similar link in the local directory, and that does not work either. I am not familiar with any of these libraries, so I don't know what else to try. I could not find any solutions on the net or SO. If you could kindly help me fix this error, I would greatly appreciate it. Thank you.
Looks like the app is compiled against an older version of IPP. Since 6.1.2 is called libippi.so.6.1, it may be as simple as installing IPP 5.1.x (though Linux library versioning isn't quite this simple). If you create a login for the Intel non-commercial IPP download area, you can dig around and see if they offer older builds. Alternatively, doing a quick Google search I found this FTP site which seems to have it, but note I have not actually downloaded or tried this code, and cannot verify whether this is a legal mirror or whether it contains the original Intel libraries; you will need to do your own due diligence before using this code: http://21cma.bao.ac.cn/software/21cma/intel/ipp-5.1.1.005/ Note that to use this older version of IPP on a modern Ubuntu, you may need to get older versions of other libraries it depends on (the requirements are listed in the Release Notes), or even just run it under a chroot of a supported Linux distro, at least to test whether it fixes your issue.
2,118,782
2,119,001
Selecting an explicit specialization of a class based on a derived type
Hi I'm having problems selecting the correct version of a templated class which has an explicit specialization. I'm wanting to select a specialization using a derived class of the class used to specialize. The scenario is: #include <stdio.h> class A {}; class B: public A {}; template<typename T> class Foo { public: int FooBar(void) { return 10; } }; // Explicit specialization for A template<> int Foo< A >::FooBar( void ) { return 20; } void main( void) { Foo<B> fooB; // This prints out 10 instead of wanted 20 ie compiler selects the general version printf("%d", fooB.FooBar() ); } As I say in my comments there I want to see 20 being printed out because B is derived from A but 10 gets printed out instead. How do I go about getting the specialization called without resorting to writing a specialization for each and every derived class (my actual scenario has a lot of derived types).
---EDIT : NEW ANSWER Let's make the original approach more maintainable. All the important choices can be found in the definition of Foo. It is supposed to be easy to maintain. #include <boost/mpl/if.hpp> #include <boost/type_traits/is_base_of.hpp> #include <iostream> class A {}; class B: public A {}; class C{}; class D : public C{}; class E{}; struct DefaultMethod { static int fooBar() { return 10; } }; struct Method1 { static int fooBar() { return 20; } }; struct Method2 { static int fooBar() { return 30; } }; template<typename T, typename BaseClass, typename Choice1, typename OtherChoice> struct IfDerivesFrom : boost::mpl::if_< typename boost::is_base_of<BaseClass, T>::type, Choice1, OtherChoice>::type { }; template<typename T> struct Foo : IfDerivesFrom<T, A, Method1, IfDerivesFrom<T, C, Method2, DefaultMethod> > { }; int main() { std::cout << Foo<A>::fooBar() << std::endl; std::cout << Foo<B>::fooBar() << std::endl; std::cout << Foo<C>::fooBar() << std::endl; std::cout << Foo<D>::fooBar() << std::endl; std::cout << Foo<E>::fooBar() << std::endl; return 0; } ---ORIGINAL ANSWER If you can use boost, you can do something like the following: #include <boost/type_traits/is_base_of.hpp> template<bool b> struct FooHelper { static int FooBar(); }; template<> int FooHelper<true>::FooBar(){ return 20; } template<> int FooHelper<false>::FooBar(){ return 10; } template<typename T> class Foo { public: int FooBar(void) { return FooHelper<boost::is_base_of<A, T>::type::value>::FooBar(); } };
2,118,920
2,118,995
How do I copy the binary code of an executable into a new file without using a system copy command?
This is the code I have, but the file is a little smaller and doesn't execute: int WriteFileContentsToNewFile(string inFilename, string outFilename) { ifstream infile(inFilename.c_str(), ios::binary); ofstream outfile(outFilename.c_str(), ios::binary); string line; // Initial read infile >> line; outfile << line; // Read the rest while( infile ) { infile >> line; outfile << line; } infile.close(); outfile.close(); return 0; } What am I doing wrong? Is there a better way to read in the binary of an executable file and immediately write it out to another name? Any code examples? I need to do it without a system copy in order to simulate writing to disk.
One way is to use the stream inserter for a streambuf: int WriteFileContentsToNewFile(string inFilename, string outFilename) { ifstream infile(inFilename.c_str(), ios::binary); ofstream outfile(outFilename.c_str(), ios::binary); outfile << infile.rdbuf(); return 0; }
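If you prefer an explicit loop, a chunked read()/write() copy also preserves every byte. This is a sketch of an alternative; the point is that read()/write() copy raw bytes, unlike operator>>, which skips whitespace and is why the original version produced a smaller, broken file.

```cpp
#include <fstream>
#include <string>

// Copy a file byte-for-byte in fixed-size chunks.
int WriteFileContentsToNewFile(const std::string& inFilename,
                               const std::string& outFilename) {
    std::ifstream infile(inFilename.c_str(), std::ios::binary);
    std::ofstream outfile(outFilename.c_str(), std::ios::binary);
    char buf[4096];
    while (infile) {
        infile.read(buf, sizeof buf);
        outfile.write(buf, infile.gcount()); // write only what was read
    }
    return 0;
}
```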
2,118,992
2,119,163
How do I use gcc builtin __sync_bool_compare_and_swap in g++ on MacOSX?
I have some code that uses: __sync_bool_compare_and_swap It compiles fine on Linux. But when I try to compile it on Mac OS X in g++, I get: error: ‘__sync_bool_compare_and_swap’ was not declared in this scope How do I fix this? (This is Mac OS X 10.5.8, so it's Intel... and it should have this instruction.) Thanks!
If you can't find it, you can use OSCompareAndSwap() on Mac. But it would be nice to have cross-platform code, wouldn't it.
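For reference, where the builtin is available (GCC on Linux, and reportedly Apple's gcc-4.2 rather than the default gcc-4.0 on 10.5), its semantics look like this: atomically set *ptr to newval only if *ptr still equals oldval, and return whether the swap happened.

```cpp
// Minimal illustration of __sync_bool_compare_and_swap semantics.
int shared_value = 5;

// Succeeds only while shared_value is still 5; after the first success
// (shared_value becomes 6), further attempts return false.
bool try_bump() {
    return __sync_bool_compare_and_swap(&shared_value, 5, 6);
}
```

OSCompareAndSwap()/the OSAtomic family on the Mac provide the same compare-and-swap semantics under a different name, which is why the answer suggests it as a fallback.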
2,119,015
2,119,821
ActiveMQ publisher and subscriber in multiple languages (C++ Java)
I want to use ActiveMQ as a message broker communicating between a C++ component and a Java component in two processes. E.g. the C++ component is the publisher and the Java component is the subscriber (there may be multiple subscribers). I looked at the ActiveMQ website and it mentions the tool OpenWire and ActiveMQ-CPP. However, all the examples on the website use the same language for both producer and consumer. My questions are: 1. Can ActiveMQ work for producer/consumer in different languages? 2. In different processes? How?
OpenWire is a protocol and hence can theoretically be implemented anywhere, but that doesn't mean full implementations exist for every language. The fine print of the C++ client says: "As of version 2.0, ActiveMQ-CPP supports the OpenWire v2 protocol, with a few exceptions. ObjectMessage - We cannot reconstruct the object(s) contained in an ObjectMessage in C++, so if your application is subscribed to a queue or topic that has an ObjectMessage sent to it, you will receive the message but will not be able to extract an Object from it." So if you want to send data across processes, you write your C++ and Java components to use the API (making sure not to use ObjectMessage types if you're using ActiveMQ-CPP). Then run the ActiveMQ server... tell your programs to connect to it, and it should work. But if you're really just trying to do interprocess communication when you control both clients, this could be a bit heavy-handed. You might be interested in the responses to What is the best approach for IPC between Java and C++? and Good alternative to shared memory IPC for Java/C++ apps on Linux
2,119,080
2,119,203
Extract multiple words to one string variable
std::stringstream convertor("Tom Scott 25"); std::string name; int age; convertor >> name >> age; if(convertor.fail()) { // it fails of course } I'd like to extract two or more words into one string variable. From what I've read so far, it seems that it is not possible. If so, how else can I do it? I'd like name to get all characters before the number (the age). I'd feel most comfortable using sscanf, but I obviously can't. What I need is the ability to extract all words before age, for example.
Most of the solutions posted so far don't really meet the specification -- that all the data up to the age be treated as the name. For example, they would fail with a name like "Richard Van De Rothstyne". As the OP noted, with scanf you could do something like: scanf("%[^0-9] %d", name, &age);, and it would read this just fine. Assuming this is line oriented input, I'd tend to do that anyway: std::string temp; std::getline(infile, temp); // technically "[^0-9]" isn't required to work right... sscanf(temp.c_str(), "%[^0123456789] %d", name, &age); Unfortunately, iostreams don't provide a direct analog to a scanset conversion like that -- getline can read up to a delimiter, but you can only specify one character as the delimiter. If you really can't use scanf and company, the next stop would be either code it by hand (the beginning of the age would be temp.find_first_of("0123456789");) or use an RE package (TR1 if your compiler supplies it, otherwise probably Boost).
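The hand-coded approach mentioned at the end can be sketched like this (helper names are my own): find the first digit, take everything before it as the name, and parse the rest as the age.

```cpp
#include <cstdlib>
#include <string>

// Split "Firstname ... Lastname <age>" into name and age, treating
// everything before the first digit as the name.
void parseNameAge(const std::string& line, std::string& name, int& age) {
    std::string::size_type pos = line.find_first_of("0123456789");
    if (pos == std::string::npos) {  // no digits: whole line is the name
        name = line;
        age = 0;
        return;
    }
    name = line.substr(0, pos);
    // trim the trailing space(s) between name and age
    while (!name.empty() && name[name.size() - 1] == ' ')
        name.erase(name.size() - 1);
    age = std::atoi(line.substr(pos).c_str());
}
```

Unlike the plain stream extraction in the question, this handles multi-word names such as "Richard Van De Rothstyne".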
2,119,149
2,119,185
moving code from unix to windows xp
I have code written in C++. It's a console app that takes an input and displays output. Now I can just give my a.out to someone without giving them the code, and it should work on another Unix system. But what if they have a Windows environment? I would like to learn how to make a DLL for them so they can run it. Also, if they were going to use it as part of another program, I guess I would need to make an API or function for them. But I am not sure how that works with DLLs, as I have never done this before.
You need to recompile your application for Windows, either on a Windows machine or by using a cross-compiler. This requires that all routines you use are also available under Windows. Either you wrote your application from scratch using portable libraries (read: no Unix/POSIX system calls), or you will run into problems porting your code to Windows. Cygwin can probably help; check it out. Since you say it's a pure console app, I assume you're using std::cout and std::cin or other facilities from the C++ standard library. These are indeed universally available in every C++ implementation.
2,119,161
2,119,262
What is an App Bundle on Mac?
I have a basic C++ application built using g++ and -framework ... When I run it, I get: Working in unbundled mode. You should build a .app wrapper for your Mac OS X applications. (which is not std::cout'ed anywhere by my application). What causes this, and how can I get rid of it? Thanks!
You need to create a folder structure and place the binary in a special location. For an example with explanation see this Qt page Mac OS X handles most applications as "bundles". A bundle is a directory structure that groups related files together. Bundles are used for GUI applications, frameworks, and installer packages. These are presented to the user as one file in the Finder. When set up correctly, bundles make for easy deployment. All one needs to do is to archive the application using some preferred method. Users then open the archive and drag the application to wherever they please and are ready to go. There is something written about this for wxWidgets too MacOSX introduces a new way of putting together an application. Instead of adding a resource fork to the executable file, you can simply create a special directory (folder). This is the preferred method for OSX.
2,119,177
2,119,235
stl vector assign vs insert
I understand the semantics of the two operations: assign erases before replacing with the supplied values; insert inserts values at a specified location (allocating new memory if necessary). Apart from this, is there any reason to prefer one over the other? Or to put it another way, is there any reason to use assign instead of insert?
If you wish to invoke the semantics of assign, call assign; if you wish to invoke the semantics of insert, call insert. They aren't interchangeable. As for calling them on an empty vector, the only difference is that you don't need to supply an iterator to insert at when you call assign. There may be a performance difference, but that's implementation-specific and almost certainly negligible.
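A small sketch of the difference in behavior, starting from the same vector:

```cpp
#include <vector>

// assign replaces the existing contents entirely.
std::vector<int> after_assign(std::vector<int> v) {
    int fresh[] = {7, 8};
    v.assign(fresh, fresh + 2);            // old contents discarded
    return v;                              // -> {7, 8}
}

// insert keeps the existing contents and splices new elements in.
std::vector<int> after_insert(std::vector<int> v) {
    int fresh[] = {7, 8};
    v.insert(v.begin(), fresh, fresh + 2); // old contents shifted right
    return v;                              // -> {7, 8, original...}
}
```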
2,119,223
2,119,507
C/C++ Control Structure Limitations?
I have heard of a limitation in VC++ (not sure which version) on the number of nested if statements (somewhere in the ballpark of 300). The code was of the form: if (a) ... else if (b) ... else if (c) ... ... I was surprised to find out there is a limit to this sort of thing, and that the limit is so small. I'm not looking for comments about coding practice and why to avoid this sort of thing altogether. Here's a list of things that I'd imagine could have some limitation: Number of functions in a scope (global, class, or namespace). Number of expressions in a single statement (e.g., compound conditionals). Number of cases in a switch. Number of parameters to a function. Number of classes in a single hierarchy (either inheritance or containment). What other control structures/language features have limits such as this? Do the language standards say anything about these limits (perhaps minimum requirements for an implementation)? Has anyone run into a particular language limitation like this with a particular compiler/implementation? EDIT: Please note that the above form of if statements is indeed "nested." It is equivalent to: if (a) { //... } else { if (b) { //... } else { if (c) { //... } else { //... } } }
Visual C++ Compiler Limits The C++ standard recommends limits for various language constructs. The following is a list of constructs where the Visual C++ compiler does not implement the recommended limits. The first number is the recommended limit and the second number is the limit implemented by Visual C++: Nesting levels of compound statements, iteration control structures, and selection control structures [256] (256). Parameters in one macro definition [256] (127). Arguments in one macro invocation [256] (127). Characters in a character string literal or wide string literal (after concatenation) [65536] (65535). Levels of nested class, structure, or union definitions in a single struct-declaration-list [256] (16). Member initializers in a constructor definition [6144] (approximately 600, memory dependent, can increase with the /Zm compiler option). Scope qualifications of one identifier [256] (127). Nested external specifications [1024] (10). Template arguments in a template declaration [1024] (64).
2,119,392
2,119,518
c++ builder, label.caption, std::string to unicode conversion
I just need to set lbl.Caption (inside a loop), but the problem is bigger than I thought. I've even tried with a vector of wstrings, but there is no such thing. I've read some pages and tried some functions like WideString() and UnicodeString(); I know I can't, and shouldn't, turn off Unicode in C++Builder 2010. std::vector <std::string> myStringVec(20, ""); myStringVec.at(0) = "SomeText"; std::string s = "something"; // this works .. Form2->lblTxtPytanie1->Caption = "someSimpleText"; // both lines give the same err Form2->lblTxtPytanie1->Caption = myStringVec.at(0); Form2->lblTxtPytanie1->Caption = s; Err: [BCC32 Error] myFile.cpp(129): E2034 Cannot convert 'std::string' to 'UnicodeString' It has eaten a few hours of my time now. Is there any "quick & dirty" solution? It just has to work... UPDATE Solved. I had mixed STL / VCL string classes. Thank you, TommyA.
The problem is that you are mixing standard template library string class with the VCL string class. The caption property expects the VCL string which has all the functionality of the STL one. The example that works is really passing (const char*) which is fine because there is a constructor for this in the VCL UnicodeString class constructor, however there isn't a constructor for copying from STL strings. You could do one of two things, you could use one of the VCL string classes in your vector instead of the STL ones, so that: std::vector <std::string> myStringVec(20, ""); myStringVec.at(0) = "SomeText"; std::string s = "something"; Becomes: std::vector <String> myStringVec(20, ""); myStringVec.at(0) = "SomeText"; String s = "something"; In which case the bottom two lines will also work. Alternatively you can retrieve the actual null terminated character pointer from the STL strings and pass them to the caption, at which point it will be converted into a VCL String class like this: // both lines will now work Form2->lblTxtPytanie1->Caption = myStringVec.at(0).c_str(); Form2->lblTxtPytanie1->Caption = s.c_str(); Which solution you prefer is up to you, but unless you have some specific need for the STL string class I would strongly suggest you going with the VCL string classes (as I showed in my first example). This way you won't have to have two different string classes.
2,119,477
2,119,547
Statement reordering with locks
Here is some C++ code that is accessed from multiple threads in parallel. It has a critical section: lock.Acquire(); current_id = shared_id; // small amounts of other code shared_id = (shared_id + 1) % max_id; lock.Release(); // do something with current_id The class of the lock variable is a wrapper around the POSIX mutex implementation. Because of the modulo operation, it is not possible to use atomic operations. Is it possible that a gcc compiler with the O3 flag optimizes the code so that the assignment of current_id is moved before the lock?
It is safe to compile with O3! The compiler will never optimize across a function call unless the function is marked as pure using function attributes. The mutex functions aren't pure, so it's absolutely safe to use them with O3.
2,119,504
2,136,814
Shell extension installation not recognized by Windows 7 64-bit shell
I have a Copy Hook Handler shell extension that I'm trying to install on Windows 7 64-bit. The shell extension DLL is compiled in two separate versions for 32-bit and 64-bit Windows. The DLL implements DLLRegisterServer, which adds the necessary registry entries. After adding the registry entries, it calls the following line of code to notify the Windows shell: SHChangeNotify(SHCNE_ASSOCCHANGED, SHCNF_IDLIST, NULL, NULL); Everything works great on Windows 7 32-bit. The shell recognizes the extension immediately. On 64-bit, the shell extension is only recognized after the shell is restarted. Is there anything I can do to cause the extension to be recognized without restarting the 64-bit shell?
As it turns out, the problem was not specific to 64-bit Windows. After consulting with Microsoft, I learned that this behavior affects Copy Hook Handlers in both 32 and 64 bit systems. The SHChangeNotify() with SHCNE_ASSOCCHANGED API apparently does not cause the shell to reload Copy Hook Handlers. According to a Microsoft representative: The shell builds and caches a list of registered copy hook handlers the first time copy hook handlers are called in a process. Once the list is created, there is no mechanism for updating or flushing the cache other than terminating the process. This applies to Windows Explorer and any other process that may call shell file functions, such as SHFileOperation. The best option that we can offer at this point is to reboot the system after the copy hook handler is registered. Hope this helps someone!
2,119,708
2,120,132
Dynamic and Static Libraries in C++
In my quest to learn C++, I have come across dynamic and static libraries. I generally get the gist of them: compiled code to include into other programs. However, I would like to know a few things about them: Is writing them any different than a normal C++ program, minus the main() function? How does the compiled program get to be a library? It's obviously not an executable, so how do I turn, say 'test.cpp' into 'test.dll'? Once I get it to its format, how do I include it in another program? Is there a standard place to put them, so that whatever compilers/linkers need them can find them easily? What is the difference (technically and practically) between a dynamic and static library? How would I use third party libraries in my code (I'm staring at .dylib and .a files for the MySql C++ Connector) Everything I have found relating to libraries seems to be targeting those who already know how to use them. I, however, don't. (But would like to!) Thanks! (I should also note I'm using Mac OS X, and although would prefer to remain IDE-neutral or command-line oriented, I use QtCreator/Netbeans)
Is writing them any different than a normal C++ program, minus the main() function? No. How does the compiled program get to be a library? It's obviously not an executable, so how do I turn, say 'test.cpp' into 'test.dll'? Pass the -dynamiclib flag when you're compiling. (The name of the result is still by default a.out. On Mac OS X you should name your dynamic libraries as lib***.dylib, and on Linux, lib***.so (shared objects)) Once I get it to its format, how do I include it in another program? First, make a header file so the the other program can #include to know what functions can be used in your dylib. Second, link to your dylib. If your dylib is named as libblah.dylib, you pass the -lblah flag to gcc. Is there a standard place to put them, so that whatever compilers/linkers need them can find them easily? /usr/lib or /usr/local/lib. What is the difference (technically and practically) between a dynamic and static library? Basically, for a static lib, the whole library is embedded into the file it "links" to. How would I use third party libraries in my code (I'm staring at .dylib and .a files for the MySql C++ Connector) See the 3rd answer.
2,119,731
2,119,762
Limiting try block scope. Does it matter?
Possible Duplicate: Should java try blocks be scoped as tightly as possible? Is there any performance benefit (particularly in C++ or Java) to keeping the size of a try block small, aside from it being more informative to the reader as to which statement can throw? Given the following method, where I do not want to throw out of the method: void function() throws Exception { statement1 statement2 statement3 // can throw statement4 statement5 } Is it better to do this: Option 1 void function() { try { statement1 statement2 statement3 // can throw statement4 statement5 } catch (...) { } } or Option 2 void function() { statement1 statement2 boolean success = false; try { statement3 // can throw success = true; } catch (...) { } if (success) { statement4 statement5 } }
At least with the compilers and exception handling mechanisms I've seen, there should be no difference. The depth at which an exception throw is nested can make a difference, but only when the exception is thrown and the general agreement is that in this case performance is something you can generally ignore.
2,120,030
24,665,400
Can I use glibc under windows?
Is it (or would it be) possible to use glibc under Windows (as a replacement for msvcrt)? I know this is a stupid question, and answers like Cygwin will pop up, but I am really asking: is it possible to link to glibc on Windows and use all library functions like with msvcrt?
A possible workaround could exist: if someone combined http://0xef.wordpress.com/2012/11/17/emulate-linux-system-calls-on-windows/ with http://www.musl-libc.org/ and compiled source code with gcc against musl libc instead of glibc. So, I can't understand why nobody writes such a glibc analog for Windows. :-(
2,120,059
2,120,184
Deserializing unknown inherited type [C++]
Let's say I have a class which routes messages to their handlers. This class gets the messages from another class that receives them through a socket. So, the socket gets a buffer containing some sort of message. The class that routes the messages is aware of the message types. Every message inherits from a Message class which contains a message ID, and of course adds parameters of its own. The problem is, how can I turn the buffer back into an actual message instance of the correct type? For example, I have a DoSomethingMessage that inherits from Message. I get the buffer containing the message, but I somehow need to convert the buffer back into a DoSomethingMessage, without really knowing it's a DoSomethingMessage. I could transfer the buffer to the MessageRouter, and there check the ID and create the right instance, but that seems like really bad design to me. Any suggestions?
You could abstract the message deserialization. Have a "MessageHolder" class that just has the buffer to the object initially. It would have a method: IMessageInterface NarrowToInterface(MessageId id); It wasn't clear to me whether your router would already know what type of message it was or not. If it does, then it would receive the messageholder instance and call the NarrowToInterface method on it. It would pass the id of the appropriate type. If the router didn't know what type it was, then you'd also have a property on the MessageHolder object: MessageId GetMessageType(); that the router would use to learn what message type it was to decide where to route it. More on how that is implemented later. The IMessageInterface is an abstract class or interface that the recipient of the message would down-cast to the appropriate type, since it would know what type to expect. If all of the different messages are well-known and you have generics or templates available to you, you could have the NarrowToInterface method be a template method that takes the return value as a template parameter, so that you get better type safety. If you don't have templates, you could use the double-dispatch technique of the "Visitor" pattern. Google "double-dispatch visitor" for more info. If the types of messages are not well-defined or could grow in the future, you'll just have to live with a (compiler-unverifiable) downcast at some point. The implementation I'm suggesting encapsulates this as much as possible and limits coupling to its absolute minimum, as far as I know. Also, for this to work your messages have to be framed with a standard identifier in the header, i.e. there is a standard header that has the length of the entire message as well as the ID of the message type. That way the socket endpoint can parse the basics of the message and put it into the messageholder.
The MessageHolder can either know about all the different messages types itself to implement the NarrowToInterface() method or there could be a global repository that would return an "IMessageDeserializer" objects to implement NarrowToInterface for each message Type. All of the loaded message clients would register all of the deserializers for all of the messages they support with the repository and also register the message type IDs that they want with the message router.
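The registry idea in the last paragraph can be sketched as follows. All names here (`Registry`, `Route`, the payload layout, the ID value) are my own illustration, not from the question:

```cpp
#include <cstring>
#include <map>

// Base class every concrete message derives from.
struct Message {
    virtual ~Message() {}
    virtual unsigned GetId() const = 0;
};

// One deserializer function per message type.
typedef Message* (*Deserializer)(const char* buf, unsigned len);

// Global registry mapping message IDs to deserializers. Each message
// module registers its own types, so the router never names them.
std::map<unsigned, Deserializer>& Registry() {
    static std::map<unsigned, Deserializer> r;
    return r;
}

// A concrete message whose payload is a single int.
struct DoSomethingMessage : Message {
    enum { ID = 42 };
    int value;
    unsigned GetId() const { return ID; }
    static Message* Deserialize(const char* buf, unsigned /*len*/) {
        DoSomethingMessage* m = new DoSomethingMessage;
        std::memcpy(&m->value, buf, sizeof m->value);
        return m;
    }
};

// The router only reads the generic header (the ID), then delegates.
Message* Route(unsigned id, const char* payload, unsigned len) {
    std::map<unsigned, Deserializer>::iterator it = Registry().find(id);
    return it == Registry().end() ? 0 : it->second(payload, len);
}
```

The router stays ignorant of concrete types; only the final recipient performs the downcast the answer describes.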
2,120,078
2,120,101
Using Function Templates
I have created a structure of different data types and I want to return each type of data. Can this be done using a function template which takes a data argument not included in the structure, or no arguments? I have something like this, struct mystruct{ int _int; char _c; string _str; }; In function template(int i) { mystruct s; switch (getInput) { case 1: return s._int; case 2: return s._c; case 3: return s._str; } } void main() { int getInput = 1; //pass getInput value to function template }
Yes: template<class T> T f() { return 0; // for the sake of example } int main() { return f<int>(); // specify the template parameter } template<class T> vector<T> another_example(); // use another_example<int>() which returns a vector<int>
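If the caller knows at compile time which field it wants, another sketch avoids the impossible runtime switch entirely by selecting the member through explicit specialization. The `get` name and the specializations are my own illustration:

```cpp
#include <string>

struct mystruct {
    int _int;
    char _c;
    std::string _str;
};

// Primary template is only declared; each field gets a specialization,
// so the return type is chosen at compile time, not by a runtime switch.
template <class T> T get(const mystruct& s);

template <> int get<int>(const mystruct& s) { return s._int; }
template <> char get<char>(const mystruct& s) { return s._c; }
template <> std::string get<std::string>(const mystruct& s) { return s._str; }
```

Usage is then `get<int>(s)`, `get<char>(s)`, and so on; a function cannot have a return type that varies at runtime, so the choice has to be made by the caller.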
2,120,082
2,120,234
Flex -- C++ connection?
How do I connect a Flex application (internet site) and C++ together? A minimalistic example of what I mean (user story): Frank goes to www.myflexsite.de. There are 2 textboxes and 1 button (label = "add two numbers"). He inserts 2 in the first textbox and 5 in the other. Now he clicks on the add button. The backend: we have an add.cpp file where an add method is defined: int add(int a, int b ) ... After Frank clicks on the "add two numbers" button, the add method in the C++ file is called and the result will be returned. An alert window with the result appears. This is what I want to accomplish, but I don't know how to build the bridge between these 2 languages. How can they communicate?
The easiest would be to write a small console application in C++ and then invoke it via Apache or any other web server using CGI. There are performance problems with this but it's a good start, and then you can move forward. From Flex just make HTTP requests and let your program parse them - for instance, you can send XML back and forth.
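As a sketch of the C++ side of that CGI approach: parse the query string the Flex client sends and answer with a tiny XML document. The `AddService` name and the `a=...&b=...` query format are assumptions for illustration, not a real API:

```cpp
#include <cstdio>
#include <sstream>
#include <string>

// Parse a query string like "a=2&b=5" and return the sum as a small
// XML document, which the Flex client would parse on its side.
// In a real CGI program the query would come from the QUERY_STRING
// environment variable and the XML would be written to stdout after
// a "Content-Type: text/xml" header.
std::string AddService(const std::string& query) {
    int a = 0, b = 0;
    std::sscanf(query.c_str(), "a=%d&b=%d", &a, &b);
    std::ostringstream xml;
    xml << "<result>" << a + b << "</result>";
    return xml.str();
}
```

The Flex side would issue an HTTP request for something like `/cgi-bin/add?a=2&b=5` and read the `<result>` element from the response.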
2,120,094
2,241,882
how to build dll files in netbeans using ms vc++ compiler?
I tried using the Cygwin GCC compiler along with NetBeans to create DLL files, but it seems there is an issue in the generated DLL file. If I use the MS VC++ compiler and do all compiling on the command line it runs fine, but I don't know how to integrate the MS VC++ tools in NetBeans... can anyone help me with that? Thanks, Jay
NetBeans doesn't provide a way to integrate the Microsoft compiler in its IDE. This is probably because it doesn't have the rights to do so. With NetBeans you can integrate only GNU compilers for C/C++. To use the Microsoft C/C++ compiler, you have to use the MS Visual Studio IDE, or the command line tools that MS Visual Studio provides.
2,120,146
2,120,576
Why doesn't this << overload compile
I can't figure out why the following code doesn't compile. The syntax is the same as my other operator overloads. Is there a restriction that the << overload must be friended? If so, why? Thanks for any help. This doesn't work - #include "stdafx.h" #include <iostream> #include <fstream> #include <string> class Test { public: explicit Test(int var): m_Var(var) { } std::ostream& operator<< (std::ostream& stream) { return stream << m_Var; } private: int m_Var; }; int _tmain(int argc, _TCHAR* argv[]) { Test temp(5); std::cout << temp; return 0; } This does work - #include "stdafx.h" #include <iostream> #include <fstream> #include <string> class Test { public: explicit Test(int var): m_Var(var) { } friend std::ostream& operator<< (std::ostream& stream, Test& temp); private: int m_Var; }; std::ostream& operator<< (std::ostream& stream, Test& temp) { return stream << temp.m_Var; }; int _tmain(int argc, _TCHAR* argv[]) { Test temp(5); std::cout << temp; return 0; }
Here's the fundamental reason why the stream operators have to be friends. Take this code: struct Gizmo { ostream& operator<<(ostream& os) const { os << 42; } }; int main() { Gizmo g; cout << g; return 0; } Consider the context of the call to cout << g; When the compiler compiles this function, it first tries this: cout.operator<<(g); ...and if that isn't found, then it looks in the global namespace for: operator<<(cout, g); ...and if that isn't found, then it can't be compiled. But when you try to implement the stream insertion operator as a member of Gizmo, you are hoping the compiler will resolve your code to: g.operator<<(cout); ...which it can't do unless you change your code to: g << cout; ...which is obviously not what you're going for.
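If you prefer not to grant friendship, a third option is a free `operator<<` that uses only the public interface. Here `Get()` is an accessor name I've added for illustration:

```cpp
#include <ostream>
#include <sstream>

class Test {
public:
    explicit Test(int var) : m_Var(var) {}
    int Get() const { return m_Var; }   // public accessor
private:
    int m_Var;
};

// Free function: the left operand is the stream, exactly the form
// the compiler looks for. No friendship needed because the function
// only touches the public interface.
std::ostream& operator<<(std::ostream& stream, const Test& t) {
    return stream << t.Get();
}
```

Note the `const Test&` parameter, which also lets temporaries and const objects be streamed.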
2,120,298
2,120,311
Not using iterators into a resized vectors
I read in The C++ Programming Language : Special Edition Don't use iterators into a resized vector Consider this example. vector< int >::iterator it = foo.begin(); while ( it != foo.end() ) { if ( // something ) { foo.push_back( // some num ); } ++it; } Is there a problem with this? After the vector was resized, would the foo.end() in the loop condition be pushed forward 1? P.S. In addition, what if vector had reserved space for x number of ints. If push_back didn't violate this space, would it still be an issue ( I would assume so if it.end() points to one past the last element in the vector that contains something ).
Yes, there is a problem with it. Any call to push_back has the potential to invalidate all iterators into a vector. foo.end() will always retrieve the valid end iterator (which may be different from the value last returned by foo.end()), but it may have been invalidated. This means that incrementing it or comparing it may cause undefined behaviour.
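A common workaround is to loop by index and capture the original size up front; indices survive reallocation because `foo[i]` is re-evaluated from the (possibly new) buffer each pass. A minimal sketch, where `duplicate_odds` is an invented example:

```cpp
#include <cstddef>
#include <vector>

// Append a copy of every odd element. Safe even though push_back may
// reallocate, because no iterator is held across the push_back call.
std::vector<int> duplicate_odds(std::vector<int> foo) {
    std::size_t original = foo.size();   // fixed before the loop
    for (std::size_t i = 0; i < original; ++i) {
        if (foo[i] % 2 != 0)
            foo.push_back(foo[i]);
    }
    return foo;
}
```

Regarding the P.S.: a prior reserve() does mean push_back won't reallocate below that capacity, and the standard does guarantee iterator validity in that case; but relying on it couples the loop to the reserve call, which is fragile.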
2,120,303
2,120,389
what's interface vs. methods, abstraction vs. encapsulation in C++
I am confused about these concepts, which came up when I discussed them with my friend. My friend's opinions are: 1) abstraction is about pure virtual functions; 2) an interface is not member functions, but pure virtual functions. I found that in C++ Primer, an interface is the set of operations the data type supports, so member functions are the interface. My opinions are: 1) abstraction is about separation of interface and implementation; 2) member functions are interfaces. So could anybody clarify these concepts for me? 1) the difference among abstraction, abstract data type and abstract class; 2) the difference between interface and member functions; 3) the difference between abstraction and encapsulation.
I think your main problem is that you and your friend are using two different definitions of the word "interface", so you're both right in different ways. You are using "interface" in the everyday sense of "a defined way to inter-operate with something", as in "the interface between my computer and my keyboard is USB" or "the interface between the vacuum and the wall power is an outlet." In that sense, yes, methods (even concrete ones) are interfaces, since they define a way to inter-operate with an object. That's not to say that this is not applicable to software -- it is the sense of "interface" used in the term Application Programming Interface (API). Your friend is using "interface" in the more specific object oriented programming jargon sense of "a separately defined set of operations that a class can choose to guarantee that it will support". Here, the defining characteristic of an "interface" is that it has no implementation of its own. A class is supposed to support an interface by providing an implementation of the methods defined by the interface. Since C++ has no explicit concept of an interface in this sense, the equivalent construct is a class with only pure virtual functions (aka an Abstract Data Type). "Abstraction", on the other hand, is about many things and again you are both right. Abstraction in a general sense means being able to focus on higher-level concepts rather than lower level details. Encapsulation is a type of abstraction because its purpose is to hide the implementation details of the methods of a class; the implementation can change without the class definition changing. Pure virtual functions ("interfaces" in the OO-jargon sense) are another type of abstraction because they can, if used properly, hide not only the implementation but also the true underlying object type; the type being used can change so long as both types implement the same interface.
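In C++ the jargon-sense "interface" is conventionally a class with only pure virtual functions. A minimal illustration, with `Shape` and `Square` as invented example names:

```cpp
// "Interface" in the OO-jargon sense: only pure virtual functions,
// no implementation of its own. Callers program against Shape.
class Shape {
public:
    virtual ~Shape() {}
    virtual double area() const = 0;   // pure virtual
};

// A class "supporting the interface" by implementing it. The side_
// member is encapsulated: it can change without Shape changing.
class Square : public Shape {
public:
    explicit Square(double side) : side_(side) {}
    double area() const { return side_ * side_; }
private:
    double side_;
};
```

Code that takes a `Shape&` works with any implementation, which is the abstraction both definitions of "interface" are ultimately after.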
2,120,725
2,120,816
Help understanding boost::bind placeholder arguments
I was reading a StackOverFlow post regarding sorting a vector of pairs by the second element of the pair. The most obvious answer was to create a predicate, but one answer that used boost caught my eye. std::sort(a.begin(), a.end(), boost::bind(&std::pair<int, int>::second, _1) < boost::bind(&std::pair<int, int>::second, _2)); I've been trying to figure out how boost::bind works, or at least just how to use it, but I can't figure out what the purpose of the placeholder arguments _1 and _2 are, and the boost documentation doesn't sink in at all. Could anyone explain this specific usage of boost::bind? P.S. Original question: How do I sort a vector of pairs based on the second element of the pair?
This expression: boost::bind(&std::pair<int, int>::second, _1) < boost::bind(&std::pair<int, int>::second, _2) namely, the use of the < operator, actually defines a functor from two other functors, both of which are defined by bind. The functor expected by sort needs to have an operator() which looks like this: bool operator()(const T& arg1, const T& arg2); When you're creating a functor using boost's overloaded <, the placeholders _1 and _2 correspond to arg1 and arg2 of the functor you're creating. Each bind call creates a functor that calls ::second on arg1 or arg2. With any luck, the introduction of lambdas in C++0x will make expressions like this obsolete.
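For comparison, here is the same sort written with a C++11 lambda, where the placeholders _1 and _2 become the named parameters a and b (`sort_by_second` is just a wrapper name for the example):

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Lambda equivalent of the boost::bind expression: the two bind
// placeholders turn into the two explicit comparator parameters.
void sort_by_second(std::vector<std::pair<int, int> >& v) {
    std::sort(v.begin(), v.end(),
              [](const std::pair<int, int>& a, const std::pair<int, int>& b) {
                  return a.second < b.second;
              });
}
```

The lambda says directly what the bind expression builds indirectly, which is why lambdas largely displaced this style.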
2,121,003
2,121,115
What all Design Patterns can I use?
1. I need to build a "Web Service Server (Simulator)" which generates the xml files and also sends async calls to the client for notification. At this point, I am writing a code to generate dummy XML files which will be used for testing (FileGeneratorClass-- builder)? 2. Also, can I implement this in a way that I do not have to write a complete code from scratch to simulate another web service server and another file format ? - what pattern can I leverage there ? 3. The objects/classes are generated from a Schema file (for xml File) and WSDLs ( for web service ), how can I make my code immune to changes to these files (newer versions) ? - which design pattern ?? (Please let me know if information I provided is too much or too less, also if you need me to edit) Thank you very much. Disclaimer: I am a complete newbie and using patterns for this small project might be overkill yet I want to do it so that I learn/understand it. Which, I think, will give me confidence and clarity when I need to do that in a more complex project.
Patterns don't do anything. You are asking if you should use prepositional phrases when you are planning to write a mystery novel. You don't start a design saying what patterns do I need. Patterns emerge from the design process. You say my program will need x and y, that's similar to the such-and-such pattern, I should see if that pattern fits. If it does, use it. If it doesn't fit, don't force it to fit. You are treating patterns like classes. Don't do that. That's not their purpose. They are not building blocks. They are not checklist entries. They are exactly what the mundane meaning of patterns implies. They are things you see repeated over and over. Many times you sense their necessity ahead of time and so you include them in the design. But they are not a starting point.
2,121,027
2,121,136
Looking for a metafunction from bool to bool_type
Basically, I am looking for a library solution that does this: #include <boost/type_traits.hpp> template<bool> struct bool_to_bool_type; template<> struct bool_to_bool_type<false> { typedef boost::false_type type; }; template<> struct bool_to_bool_type<true> { typedef boost::true_type type; }; Is there such a metafunction?
Oh wait, true_type is just a typedef for std::integral_constant<bool, true>? Then there's an obvious solution: boost::integral_constant<bool, input_value>
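For completeness, here is how that composes, shown with std::integral_constant, which mirrors the Boost template. The `bool_to_bool_type` name is from the question and `describe` is my own example for tag dispatch:

```cpp
#include <type_traits>

// Map a compile-time bool to true_type/false_type: since true_type is
// just integral_constant<bool, true>, the metafunction is one typedef.
template <bool B>
struct bool_to_bool_type {
    typedef std::integral_constant<bool, B> type;
};

// Typical use: tag dispatch on the resulting type.
inline const char* describe(std::true_type)  { return "true_type"; }
inline const char* describe(std::false_type) { return "false_type"; }
```

`bool_to_bool_type<true>::type` is exactly `std::true_type`, so overload resolution picks the matching `describe` at compile time.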
2,121,172
2,121,359
Possible reasons for tellg() failing?
ifstream::tellg() is returning -13 for a certain file. Basically, I wrote a utility that analyzes some source code; I open all files alphabetically, I start with "Apple.cpp" and it works perfectly.. But when it gets to "Conversion.cpp", always on the same file, after reading one line successfully tellg() returns -13. The code in question is: for (int i = 0; i < files.size(); ++i) { /* For each .cpp and .h file */ TextIFile f(files[i]); while (!f.AtEof()) // When it gets to conversion.cpp (not on the others) // first is always successful, second always fails lines.push_back(f.ReadLine()); The code for AtEof is: bool AtEof() { if (mFile.tellg() < 0) FATAL(format("DEBUG - tellg(): %d") % mFile.tellg()); if (mFile.tellg() >= GetSize()) return true; return false; } After it reads successfully the first line of Conversion.cpp, it always crashes with DEBUG - tellg(): -13. This is the whole TextIFile class (wrote by me, the error may be there): class TextIFile { public: TextIFile(const string& path) : mPath(path), mSize(0) { mFile.open(path.c_str(), std::ios::in); if (!mFile.is_open()) FATAL(format("Cannot open %s: %s") % path.c_str() % strerror(errno)); } string GetPath() const { return mPath; } size_t GetSize() { if (mSize) return mSize; const size_t current_position = mFile.tellg(); mFile.seekg(0, std::ios::end); mSize = mFile.tellg(); mFile.seekg(current_position); return mSize; } bool AtEof() { if (mFile.tellg() < 0) FATAL(format("DEBUG - tellg(): %d") % mFile.tellg()); if (mFile.tellg() >= GetSize()) return true; return false; } string ReadLine() { string ret; getline(mFile, ret); CheckErrors(); return ret; } string ReadWhole() { string ret((std::istreambuf_iterator<char>(mFile)), std::istreambuf_iterator<char>()); CheckErrors(); return ret; } private: void CheckErrors() { if (!mFile.good()) FATAL(format("An error has occured while performing an I/O operation on %s") % mPath); } const string mPath; ifstream mFile; size_t mSize; }; Platform is Visual Studio, 32 bit, 
Windows. Edit: it works on Linux. Edit: I found the cause: line endings. Both Conversion.cpp and Guid.cpp (and others) had \n instead of \r\n. I saved them with \r\n instead and it worked. Still, this is not supposed to happen, is it?
It's difficult to guess without knowing exactly what's in Conversion.cpp. However, using < with stream positions is not defined by the standard. You might want to consider an explicit cast to the correct integer type before formatting it; I don't know what formatting FATAL and format() expect to perform or how the % operator is overloaded. Stream positions don't have to map in a predictable way to integers, certainly not if the file isn't opened in binary mode. You might want to consider an alternative implementation for AtEof(). Say something like: bool AtEof() { return mFile.peek() == ifstream::traits_type::eof(); }
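More generally, the usual idiom is to let getline drive the loop and never compare stream positions at all. A sketch, shown on an istringstream so it is self-contained; it works with any istream, including the ifstream in the question:

```cpp
#include <istream>
#include <sstream>
#include <string>
#include <vector>

// Read all lines by testing the stream state getline returns, instead
// of comparing tellg() against a precomputed size. This is immune to
// the \n vs \r\n translation that tripped up the position arithmetic.
std::vector<std::string> read_lines(std::istream& in) {
    std::vector<std::string> lines;
    std::string line;
    while (std::getline(in, line))   // stops cleanly at EOF or error
        lines.push_back(line);
    return lines;
}
```

In text mode on Windows, \r\n is translated to \n on read, which is exactly why byte positions from tellg() don't line up with the count of characters extracted.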
2,121,258
2,121,274
i violated D.R.Y. help me please?
I'm making a blackjack sim and I want to deal the cards how it would be in a casino, i.e. all players get dealt a card, dealer gets one face down, players get another card, dealer gets one face up BUT LOOK I VIOLATED DRY :( How to redo?? void BlackJack::newHand() { resetHands(); for (unsigned int i = 0; i < players.size(); i++) players[i]->addCard(deck->nextCard()); Card* c = deck->nextCard(); c->setVisible(false); // dealer gets a face down card dealer->addCard(c); for (unsigned int i = 0; i < players.size(); i++) players[i]->addCard(deck->nextCard()); dealer->addCard(deck->nextCard()); }
Move the repeated code into another function: void BlackJack::addDealerCard(bool visible) { Card* c = deck->nextCard(); c->setVisible(visible); dealer->addCard(c); } void BlackJack::addCards() { for (unsigned int i = 0; i < players.size(); i++) players[i]->addCard(deck->nextCard()); } void BlackJack::newHand() { resetHands(); addCards(); addDealerCard(false); addCards(); addDealerCard(true); }
2,121,525
2,121,616
const pointers in overload resolution
GCC treats these two function declarations as equivalent: void F(int* a) { } void F(int* const a) { } test.cpp: In function 'void F(int*)': test.cpp:235: error: redefinition of 'void F(int*)' test.cpp:234: error: 'void F(int*)' previously defined here This makes some sense because a caller will always ignore the const in this case... it only affects the usage of the parameter 'a' inside of the function. What I'm wondering is where (if anywhere) the standard says that it's specifically OK to discard qualifiers on pointers used as function arguments for the purpose of overload resolution. (My real issue is that I'd like to figure out where GCC strips these pointless qualifiers internally, and since the C++ frontend of GCC is littered with comments referencing the standard, the relevant section of the standard might help me find the correct spot.)
Standard says in 8.3.5/3 that for the purposes of determining the function type any cv-qualifiers that directly qualify the parameter type are deleted. I.e. it literally says that a function declared as void foo(int *const a); has function type void (int *). A pedantic person might argue that this is not conclusive enough to claim that the above declaration should match the definition like this one void foo(int *a) { } or that it should make the code with dual declaration (as in your example) ill-formed, since neither of these concepts are described in the standard in terms of function types. I mean, we all know that these const were intended to be ignored for all external purposes, but so far I was unable to find the wording in the standard that would conclusively state exactly that. Maybe I missed something. Actually, in 13.1/3 it has a "Note" that says that function declarations with equivalent parameter declarations (as defined in 8.3.5) declare the same function. But it is just a note, it is non-normative, which suggests that somewhere in the standard there should be some normative text on the same issue.
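A small demonstration of the 8.3.5/3 rule in practice: the declaration and definition below differ only in the top-level const on the parameter, and they name the same function:

```cpp
// Declaration without const: this is the function's type as callers see it.
void F(int* a);

// Definition with const: per 8.3.5/3 the top-level const is deleted from
// the function type, so this defines the F declared above. The const only
// prevents the function body from reassigning the pointer variable 'a';
// it says nothing about the int being pointed at.
void F(int* const a) {
    *a = 42;
}
```

Declaring a second overload `void F(int* a) { }` alongside this definition is what triggers the "redefinition" error in the question, since both spell the same function type `void (int*)`.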
2,121,607
2,121,641
Any RAII template in boost or C++0x
Is there any template available in Boost for RAII? There are classes like scoped_ptr and shared_ptr which basically work on pointers. Can those classes be used for resources other than pointers? Is there any template which works with general resources? Take, for example, some resource which is acquired at the beginning of a scope and has to be somehow released at the end of the scope. Both acquire and release take some steps. We could write a template which takes two (or maybe one) functors which do this task. I haven't thought through how this can be achieved; I was just wondering whether there are any existing ways to do it. Edit: How about one in C++0x with support for lambda functions?
shared_ptr provides the possibility to specify a custom deleter. When the pointer needs to be destroyed, the deleter will be invoked and can do whatever cleanup actions are necessary. This way more complicated resources than simple pointers can be managed with this smart pointer class.
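A minimal sketch of that technique, using std::shared_ptr (boost::shared_ptr has the same deleter constructor). The `Handle` type and its counters are invented purely to make the acquire/release pair observable:

```cpp
#include <memory>

// A toy resource that records how often it was acquired and released.
struct Handle {
    int acquired;
    int released;
};

void release(Handle* h) { ++h->released; }

// shared_ptr as a generic scope guard: the custom deleter runs when the
// last owner goes away, whatever "release" means for the resource. The
// managed thing need not be heap memory at all.
void use_handle(Handle& h) {
    ++h.acquired;                                 // acquire the resource
    std::shared_ptr<Handle> guard(&h, release);   // deleter = cleanup step
    // ... work with h here; release(&h) is guaranteed at scope exit,
    // even if an exception is thrown.
}
```

Note the deleter is called with the stored pointer, so it also works for handles where `delete` would be wrong (files, sockets, locks).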
2,121,617
2,121,638
Can Someone Explain Threads to Me?
I have been considering adding threaded procedures to my application to speed up execution, but the problem is that I honestly have no idea how to use threads, or what is considered "thread safe". For example, how does a game engine utilize threads in its rendering processes, or in what contexts would threads only be considered nothing but a hindrance? Can someone point the way to some resources to help me learn more or explain here?
This is a very broad topic. But here are the things I would want to know if I knew nothing about threads: They are units of execution within a single process that happen "in parallel" - what this means is that the current unit of execution in the processor switches rapidly. This can be achieved via different means. Switching is called "context switching", and there is some overhead associated with this. They can share memory! This is where problems can occur. I talk about this more in depth in a later bullet point. The benefit of parallelizing your application is that logic that uses different parts of the machine can happen simultaneously. That is, if part of your process is I/O-bound and part of it is CPU-bound, the I/O intensive operation doesn't have to wait until the CPU-intensive operation is done. Some languages also allow you to run threads at the same time if you have a multicore processor (and thus parallelize CPU-intensive operations as well), though this is not always the case. Thread-safe means that there are no race conditions, which is the term used for problems that occur when the execution of your process depends on timing (something you don't want to rely on). For example, if you have threads A and B both incrementing a shared counter C, you could see the case where A reads the value of C, then B reads the value of C, then A overwrites C with C+1, then B overwrites C with C+1. Notice that C only actually increments once! A couple of common ways avoid race conditions include synchronization, which excludes mutual access to shared state, or just not having any shared state at all. But this is just the tip of the iceberg - thread-safety is quite a broad topic. I hope that helps! Understand that this was a very quick introduction to something that requires a good bit of learning. I would recommend finding a resource about multithreading in your preferred language, whatever that happens to be, and giving it a thorough read.
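A tiny C++11 illustration of the race-condition bullet above: two threads increment a shared counter, and the lock_guard serializes each read-modify-write so no increment is lost (`counted_increments` is an invented example):

```cpp
#include <mutex>
#include <thread>

// Without the mutex, the two threads could interleave their read/
// increment/write steps and lose updates, exactly as described for
// counter C in the answer. With it, the result is deterministic.
int counted_increments(int per_thread) {
    int counter = 0;
    std::mutex m;
    auto work = [&] {
        for (int i = 0; i < per_thread; ++i) {
            std::lock_guard<std::mutex> lock(m);  // excludes the other thread
            ++counter;
        }
    };
    std::thread a(work), b(work);
    a.join();
    b.join();
    return counter;
}
```

Removing the lock_guard makes the function occasionally return less than `2 * per_thread`, which is the timing-dependent behaviour "thread-safe" code must rule out.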
2,121,633
2,121,645
Why does the interface for auto_ptr specify two copy-constructor-like constructors
I was going through the auto_ptr documentation on this link: auto_ptr. There is something which I could not fully understand, namely why it is done. In the interface section there are two declarations for its copy constructor: 1) auto_ptr(auto_ptr<X>&) throw (); 2) template <class Y> auto_ptr(auto_ptr<Y>&) throw(); What is the purpose of this?
It's there in case you can implicitly convert the pointers: struct base {}; struct derived : base {}; std::auto_ptr<derived> d(new derived); std::auto_ptr<base> b(d); // converts Also, you didn't ask but you'll notice the copy-constructor is non-const. This is because the auto_ptr will take ownership of the pointer. In the sample above, after b is constructed, d holds on to nothing. This makes auto_ptr unsuitable for use in containers, because it can't be copied around. C++0x ditches auto_ptr and makes one called unique_ptr. This pointer has the same goals, but accomplishes them correctly because of move-semantics. That is, while it cannot be copied, it can "move" ownership: std::unique_ptr<derived> d(new derived); std::unique_ptr<base> b(d); // nope, cannot be copied std::unique_ptr<base> b(std::move(d)); // but can be moved This makes unique_ptr suitable for use in containers, because they no longer copy their values, they move them.
2,121,651
2,121,658
Visual Studio 2008 C++ dependencies
I'm developing a C++ simulation (OpenGL) on top of the VS2008 environment. My current operating system is Windows Vista. The trouble is that when trying to execute the application on a Windows XP machine, my application crashes because of incompatibilities between DLLs (namely, msvcrt.dll function entry points)... Has anyone experienced such a problem?
I seriously doubt that's the real reason it crashes. You probably just didn't deploy the CRT libraries to the target machine. Or deployed the debug build. If this is a single EXE with no DLL dependencies then solve your problem by linking the static version of the CRT. Right-click the project in Solution Explorer, Properties, C/C++, Code Generation, Runtime libraries, select /MTd. Repeat for the Release configuration, now choosing /MT.
2,121,844
2,121,877
what is auto_ptr_ref, what it achieves and how it achieves it
auto_ptr_ref documentation here says this This is an instrumental class to allow certain conversions that allow auto_ptr objects to be passed to and returned from functions. Can somebody explain how auto_ptr_ref helps in achieving this. I just want to understand the auto_ptr class and its internals
It is rather confusing. Basically, auto_ptr_ref exists because the auto_ptr copy constructor isn't really a copy constructor in the standard sense of the word. Copy constructors typically have a signature that looks like this: X(const X &b); The auto_ptr copy constructor has a signature that looks like this: X(X &b) This is because auto_ptr needs to modify the object being copied from in order to set its pointer to 0 to facilitate the ownership semantics of auto_ptr. Sometimes, temporaries cannot match a copy constructor that doesn't declare its argument const. This is where auto_ptr_ref comes in. The compiler won't be able to call the non-const version of the copy constructor, but it can call the conversion operator. The conversion operator creates an auto_ptr_ref object that's just sort of a temporary holder for the pointer. The auto_ptr constructor or operator = is called with the auto_ptr_ref argument. If you notice, the conversion operator in auto_ptr that automatically converts to an auto_ptr_ref does a release on the source auto_ptr, just like the copy constructor does. It's kind of a weird little dance that happens behind the scenes because auto_ptr modifies the thing being copied from. Random related tanget about C++0x and unique_ptr In C++0x, auto_ptr is deprecated in favor of unique_ptr. unique_ptr doesn't even have a copy constructor and uses the new 'move constructor' which is explicit about the fact that it will modify the object being moved from and leave it in a useless (but still valid) state. Temporaries (aka rvalues) are explicitly always allowed to be arguments to a move constructor. The move constructor in C++0x has a number of other big benefits. It enables the standard STL containers to store unique_ptrs and do the right thing, as opposed to how auto_ptrs cannot be. 
It also mostly eliminates the need for the 'swap' function as the whole purpose of the swap function is usually to be a move constructor or move assignment operator that never throws. Which is the other expectation. The move constructor and move assignment operator (much like a destructor) are never supposed to throw.
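For comparison, here is the "return ownership from a function" situation that auto_ptr_ref exists to support, shown with its C++11 replacement unique_ptr, where no conversion trick is needed because the returned temporary binds to the move constructor:

```cpp
#include <memory>

// Transferring ownership out of a function. With auto_ptr this requires
// the auto_ptr_ref dance, because the temporary cannot bind to the
// non-const copy constructor; with unique_ptr it is plain move semantics.
std::unique_ptr<int> make_value(int v) {
    return std::unique_ptr<int>(new int(v));  // temporary is moved out
}
```

The caller simply writes `std::unique_ptr<int> p = make_value(5);` and ownership moves cleanly, with no intermediate holder class behind the scenes.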
2,121,889
2,121,898
Why does using boost increase file size so much?
I've noticed that when I use a boost feature the app size tends to increase by about .1 - .3 MB. This may not seem like much, but compared to using other external libraries it is (for me at least). Why is this?
Boost uses templates everywhere. These templates can be instantiated multiple times with the same parameters. A sufficiently smart linker will throw out all but one copy. However, not all linkers are sufficiently smart. Also, templates are instantiated implicitly sometimes and it's hard to even know how many times one has been instantiated.
2,122,018
2,122,019
Does this program show the four card suits (♠♣♥♦) on all standard-ish systems?
The following shows ♠♣♥♦ on Windows XP; will it on all systems? #include <stdio.h> int main(int argc, char *argv[]) { for (int i = 3; i <= 6; ++i) printf("%c", (char)i); getchar(); return 0; }
Nope. Character encoding is a very platform dependent, in my experience. Consider, in ASCII those characters don't even exist. And I have no clue where they are in Unicode. And where ever they are, you would then be depending on how your platform outputs Unicode.
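Those glyphs do exist in Unicode, at code points U+2660, U+2663, U+2665, and U+2666. If the output device is a UTF-8 terminal (typical on Linux/macOS), you can emit their UTF-8 byte sequences directly; a sketch, with no portability guarantee on the Windows console unless its code page is set to UTF-8 first:

```cpp
#include <string>

// The four suits as explicit UTF-8 byte sequences, so the source file's
// own encoding doesn't matter.
std::string suits_utf8() {
    return "\xE2\x99\xA0"   // U+2660 black spade
           "\xE2\x99\xA3"   // U+2663 black club
           "\xE2\x99\xA5"   // U+2665 black heart
           "\xE2\x99\xA6";  // U+2666 black diamond
}
```

Printing the returned string with `std::cout << suits_utf8()` renders the suits wherever the terminal interprets output as UTF-8, which is what the code points 3-6 in the question only happened to do on the old IBM PC code page.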
2,122,025
2,122,031
What is the relationship of .lib and .obj to each other and my project in c++?
How do .lib and .obj files relate to each other? What is their purpose? Is a .lib just a collection of .obj files? If so are the .obj's then stored inside the .lib making the .obj's unnecessary?
Typically, the .obj files refer to object files. An object file is a source file in its compiled form. For example, a main.cpp and foo.cpp would produce main.obj and foo.obj. It is then the linker's job to link them together, so that main.obj can reach functions defined in foo.obj and vice-versa. The linker will output your binary file, which is the .lib (or .a, or .exe, or .dll, etc). So in a loose sense, yes, the binary output (.lib in your case) is the collection of linked .obj files. Once you are finished compiling and want to use the library, you only need other programs to link with the .lib. The .obj files are what's considered intermediate files, and are not needed after linking is completed.
2,122,194
2,122,925
How I print UTF-8 characters C++?
How do I print these UTF-8 characters in C++?
Well, you know it is possible because your browser could render them. On Windows you can use the charmap.exe applet to discover their Unicode code points: ♠ = 0x2660 ♣ = 0x2663 ♥ = 0x2665 ♦ = 0x2666 The challenge is to get a C/C++ program to display them. That's not going to be possible in any kind of non-platform specific way unless you use a cross-platform UI library like Qt or wxWidgets. In a Windows GUI program you can do it like this in the WM_PAINT message handler: case WM_PAINT: { hdc = BeginPaint(hWnd, &ps); HFONT hFont = CreateFont(16, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, L"Arial Unicode MS"); HGDIOBJ oldFont = SelectObject(hdc, hFont); RECT rc = {0, 0, 666, 16}; DrawTextEx(hdc, L"\x2660\x2663\x2665\x2666", -1, &rc, DT_LEFT, 0); SelectObject(hdc, oldFont); DeleteObject(hFont); EndPaint(hWnd, &ps); } break;
2,122,278
19,895,227
How to show a MFC dialog without stealing focus on the other window
I have the dialog shown with ShowWindow(hWnd, SW_SHOWNOACTIVATE); But it doesn't work, the new dialog still steals the focus, why is it? here is the some code snippets from my program, QueryWindow is the MFC dialog class linked with the dialog: QueryWindow window; //window.DoModal(); window.Create(QueryWindow::IDD); window.ShowWindow(SW_SHOWNOACTIVATE);
There are a few ways to keep the dialog from getting focus: Make your OnInitDialog() return a zero value. Example: BOOL QueryWindow::OnInitDialog() { CDialog::OnInitDialog(); return FALSE; // return 0 to tell MFC not to activate dialog window } This is the best and most correct solution. Add the WS_EX_NOACTIVATE style to your dialog window. You can edit the dialog resource properties or change it at runtime: BOOL QueryWindow::PreCreateWindow(CREATESTRUCT& cs) { cs.dwExStyle |= WS_EX_NOACTIVATE; return CDialog::PreCreateWindow(cs); } Side-effect: you can use controls on your window, but it will look as if it was not activated. The last way is to save the foreground window before creating your dialog and restore the foreground window at the end: BOOL QueryWindow::Create(LPCTSTR lpszTemplateName, CWnd* pParentWnd) { CWnd* pForeground = GetForegroundWindow(); const BOOL bRes = CAlertDialog::Create(lpszTemplateName, pParentWnd); if(pForeground) pForeground->SetForegroundWindow(); return bRes; } This is the worst solution because potentially you can get flicker. Important! Don't forget to control the following API calls: ShowWindow - you can use SW_SHOWNOACTIVATE, but can't use SW_SHOW; SetWindowPos - add the flag SWP_NOACTIVATE
2,122,282
2,122,404
Are function-local typedefs visible inside C++0x lambdas?
I've run into a strange problem. The following simplified code reproduces the problem in MSVC 2010: template <typename T> struct dummy { static T foo(void) { return T(); } }; int main(void) { typedef dummy<bool> dummy_type; auto x = []{ bool b = dummy_type::foo(); }; // auto x = []{ bool b = dummy<bool>::foo(); }; // works } The typedef I created locally in the function doesn't seem to be visible in the lambda. If I replace the typedef with the actual type, it works as expected. Here are some other test cases: // crashes the compiler, credit to Tarydon int main(void) { struct dummy {}; auto x = []{ dummy d; }; } // works as expected int main(void) { typedef int integer; auto x = []{ integer i = 0; }; } I don't have g++ available to test it, right now. Is this some strange rule in C++0x, or just a bug in the compiler? From the results above, I'm leaning towards bug. Though the crash is definitely a bug. For now, I have filed two bug reports. All code snippets above should compile. The error has to do with using the scope resolution on locally defined scopes. (Spotted by dvide.) And the crash bug has to do with... who knows. :) Update According to the bug reports, they have both been fixed for the next release of Visual Studio 2010. (Though this doesn't seem to be the case; VS11 perhaps.)
From n3000, 5.1.2/6, The lambda-expression’s compound-statement yields the function-body (8.4) of the function call operator, but for purposes of name lookup (3.4), … the compound-statement is considered in the context of the lambda-expression. Not surprisingly, the local type should be visible.
2,122,319
2,122,347
C++ type traits to check if class has operator/member
Possible Duplicate: Is it possible to write a C++ template to check for a function's existence? Is it possible to use boost type traits or some other mechanism to check if a particular template parameter has an operator/function, e.g. std::vector as a template parameter has operator[], while std::pair does not.
You can't solve this via type traits because you'd have to define it for every possible name. Here are the common solutions listed, which have one problem though: many STL implementations put common code in base classes and this method doesn't check for inherited names. If you need to check for inherited members too, see here. The answer provides a solution that checks whether the class in question has a member of that name and can also check for const-ness and argument count. It fails however to check for the full signature including argument and return types, and member visibility doesn't make a difference. You should be able to solve that partially by using the linked is_call_possible<> (haven't had time yet to look at it).
2,122,397
2,122,434
error LNK2019: unresolved external symbol
Ok, so I'm having a problem trying figure out the problem in my code. I have a lot of code so I'm only going to post the relevant parts that are messing up when I compile. I have the following function inside of a class and it will compile and everything will run fine until I call the function "CalculateProbabilityResults" and it runs the 7th line of code within it. I've "de-commented" this line of code in my program so you can find it easier. I'm pretty sure I have the right #include directives needed since it compiles fine when not calling the function, so that can't be the problem can it? I know some of my naming notation needs a little help, so please bear with me. Thanks in advance for the help guys. int SQLServer::CalculateProbabilityResults(int profile, int frame, int time_period, int TimeWindowSize) { ofstream ResultFile; stringstream searchFileName; stringstream outputName; vector<vector<int>> timeFrameItemsets; int num = getTimeFrameFile(frame*TimeWindowSize, TimeWindowSize); cout << num << endl; //outputName << "Results" << getTimeFrameFile((frame*TimeWindowSize), TimeWindowSize) << ".csv"; cout << outputName.str() << endl; outputName.clear(); //ResultFile.open(outputName.str().c_str()); ResultFile.close(); result.resize(0); return 0; } int getTimeFrameFile(int timeInHours, int timeFrameSize) { int fileNum = 0; int testWin; if (timeInHours > 24) { while (timeInHours >24) timeInHours -= 24; } for (testWin = 0; testWin < 24/timeFrameSize; testWin++) { if (timeInHours >= testWin*timeFrameSize && timeInHours < (testWin+1)*timeFrameSize) fileNum = testWin+1; } if (fileNum == 0) fileNum = testWin+1; return fileNum; } Call Log 1>------ Rebuild All started: Project: MobileSPADE_1.3, Configuration: Debug Win32 ------ 1>Deleting intermediate and output files for project 'MobileSPADE_1.3', configuration 'Debug|Win32' 1>Compiling... 1>main.cpp 1>MobileSPADE.cpp 1>SQLServer.cpp 1>Generating Code... 1>Compiling manifest to resources... 
1>Microsoft (R) Windows (R) Resource Compiler Version 6.0.5724.0 1>Copyright (C) Microsoft Corporation. All rights reserved. 1>Linking... 1>LINK : C:\Users\JoshBradley\Desktop\MobileSPADE_1.3\MobileSPADE_1.3\Debug\MobileSPADE_1.3.exe not found or not built by the last incremental link; performing full link 1>SQLServer.obj : error LNK2019: unresolved external symbol "public: int __thiscall SQLServer::getTimeFrameFile(int,int)" (?getTimeFrameFile@SQLServer@@QAEHHH@Z) referenced in function "public: int __thiscall SQLServer::CalculateProbabilityResults(int,int,int,int)" (?CalculateProbabilityResults@SQLServer@@QAEHHHHH@Z) 1>C:\Users\JoshBradley\Desktop\MobileSPADE_1.3\MobileSPADE_1.3\Debug\MobileSPADE_1.3.exe : fatal error LNK1120: 1 unresolved externals 1>Build log was saved at "file://c:\Users\JoshBradley\Desktop\MobileSPADE_1.3\MobileSPADE_1.3\MobileSPADE_1.3\Debug\BuildLog.htm" 1>MobileSPADE_1.3 - 2 error(s), 0 warning(s) ========== Rebuild All: 0 succeeded, 1 failed, 0 skipped ==========
The compiler thinks that getTimeFrameFile is a SQLServer method: unresolved external symbol "public: int __thiscall SQLServer::getTimeFrameFile(int,int)" but you have it defined as a free function: int getTimeFrameFile(int timeInHours, int timeFrameSize) { Changing it from a free function to a class method will solve the problem: int SQLServer::getTimeFrameFile(int timeInHours, int timeFrameSize)
2,122,425
2,122,428
How do I install g++ on MacOS X?
I want to compile C++ code on MacOS X, using the g++ compiler. How do I install it?
That's the compiler that comes with Apple's XCode tools package. They've hacked on it a little, but basically it's just g++. You can download XCode for free (well, mostly, you do have to sign up to become an ADC member, but that's free too) here: http://developer.apple.com/technology/xcode.html Edit 2013-01-25: This answer was correct in 2010. It needs an update. While XCode tools still has a command-line C++ compiler, recent versions of OS X (I think 10.7 and later) have switched to clang/llvm (mostly because Apple wants all the benefits of Open Source without having to contribute back, and clang is BSD licensed). Secondly, I think all you have to do to install XCode is to download it from the App store. I'm pretty sure it's free there. So, in order to get g++ you'll have to use something like homebrew (seemingly the current way to install Open Source software on the Mac (though homebrew has a lot of caveats surrounding installing gcc using it)), fink (basically Debian's apt system for OS X/Darwin), or MacPorts (basically OpenBSD's ports system for OS X/Darwin) to get it. Fink definitely has the right packages. On 2016-12-26, it had gcc 5 and gcc 6 packages. I'm less familiar with how MacPorts works, though some initial cursory investigation indicates they have the relevant packages as well.
2,122,506
2,122,531
How to create a hidden window in C++
How do I create a hidden window? The purpose of this window is to receive some messages.
When you create the window, omit the WS_VISIBLE flag and don't call ShowWindow.
2,122,567
2,122,575
Access a value from a struct via a pointer? (C++)
Here is my struct: struct Checker { short row; short col; unsigned short number; short color; }; Now, I have to also make another struct to represent a checkers board: struct Board { Checker checkers[2][13]; // Zeroth entry of 13 is not used. Checker *grid[8][8]; // each entry holds Null or an address // of an element in the checkers array }; If it matters, checkers[0-1] represents what side, [1-12] are unique numbers for each piece. Anyways, I have an 8x8 grid that either points to NULL or a checker piece. What I want to do is to be able to access that checker not by its unique ID (checkers[0][5] for instance), but instead by its position on the grid. How can I access the data members in this fashion? I'll try to summarize: Declared a Board gameBoard. Set up the grid so that I have a checkers piece at [0][0]. This would be checkers[0][1]. So instead of doing cout << checkers[0][1].number, I'd like to be able to print the checker's number without knowing its ID, and instead knowing that that specific checker is at [0][0]. Thanks.
cout << grid[0][0]->number; If the grid is a 2-dimensional array of pointers to Checker structs, then grid[0][0] is a pointer to the Checker at that location (0, 0). The -> syntax dereferences the pointer and then accesses the number field. If I am misunderstanding your question or my response fails, please let me know and I'll happily delete. It's late. :)
2,122,573
2,122,763
SIMD or not SIMD - cross platform
I need some ideas on how to write a cross-platform C++ implementation of a few parallelizable problems in a way that lets me take advantage of SIMD (SSE, SPU, etc) if available. I also want to be able to switch at run time between SIMD and non-SIMD. How would you suggest I approach this problem? (Of course I don't want to implement the problem multiple times for all possible options.) I can see how this might not be a very easy task with C++, but I believe that I'm missing something. So far my idea looks like this... A class cStream will be an array of a single field. Using multiple cStreams I can achieve SoA (Structure of Arrays). Then using a few Functors I can fake the Lambda function that I need to be executed over the whole cStream. // just for example I'm not expecting this code to compile cStream a; // something like float[1024] cStream b; cStream c; void Foo() { for_each( AssignSIMD(c, MulSIMD(AddSIMD(a, b), a))); } Where for_each will be responsible for incrementing the current pointer of the streams as well as inlining the functors' body with SIMD and without SIMD. Something like so: // just for example I'm not expecting this code to compile for_each(functor<T> f) { #ifdef USE_SIMD if (simdEnabled) real_for_each(f<true>()); // true means use SIMD else #endif real_for_each(f<false>()); } Notice that whether SIMD is enabled is checked only once, and that the loop is around the main functor.
If someone is interested, this is the dirty code I came up with to test a new idea that I had while reading about the library that Paul posted. Thanks Paul! // This is just a conceptual test // I haven't profiled the code and I haven't verified if the result is correct #include <xmmintrin.h> // This class is doing all the math template <bool SIMD> class cStreamF32 { private: void* m_data; void* m_dataEnd; __m128* m_current128; float* m_current32; public: cStreamF32(int size) { if (SIMD) m_data = _mm_malloc(sizeof(float) * size, 16); else m_data = new float[size]; } ~cStreamF32() { if (SIMD) _mm_free(m_data); else delete[] (float*)m_data; } inline void Begin() { if (SIMD) m_current128 = (__m128*)m_data; else m_current32 = (float*)m_data; } inline bool Next() { if (SIMD) { m_current128++; return m_current128 < m_dataEnd; } else { m_current32++; return m_current32 < m_dataEnd; } } inline void operator=(const __m128 x) { *m_current128 = x; } inline void operator=(const float x) { *m_current32 = x; } inline __m128 operator+(const cStreamF32<true>& x) { return _mm_add_ss(*m_current128, *x.m_current128); } inline float operator+(const cStreamF32<false>& x) { return *m_current32 + *x.m_current32; } inline __m128 operator+(const __m128 x) { return _mm_add_ss(*m_current128, x); } inline float operator+(const float x) { return *m_current32 + x; } inline __m128 operator*(const cStreamF32<true>& x) { return _mm_mul_ss(*m_current128, *x.m_current128); } inline float operator*(const cStreamF32<false>& x) { return *m_current32 * *x.m_current32; } inline __m128 operator*(const __m128 x) { return _mm_mul_ss(*m_current128, x); } inline float operator*(const float x) { return *m_current32 * x; } }; // Executes both functors template<class T1, class T2> void Execute(T1& functor1, T2& functor2) { functor1.Begin(); do { functor1.Exec(); } while (functor1.Next()); functor2.Begin(); do { functor2.Exec(); } while (functor2.Next()); } // This is the implementation of the problem template <bool SIMD> class cTestFunctor { private: cStreamF32<SIMD> a; cStreamF32<SIMD> b; cStreamF32<SIMD> c; public: cTestFunctor() : a(1024), b(1024), c(1024) { } inline void Exec() { c = a + b * a; } inline void Begin() { a.Begin(); b.Begin(); c.Begin(); } inline bool Next() { a.Next(); b.Next(); return c.Next(); } }; int main (int argc, char * const argv[]) { cTestFunctor<true> functor1; cTestFunctor<false> functor2; Execute(functor1, functor2); return 0; }
2,122,739
2,122,756
Lifetime of a thrown object caught by reference
The C++ Standard, paragraph 15.1.4, says: The memory for the temporary copy of the exception being thrown is allocated in an unspecified way, except as noted in 3.7.3.1. The temporary persists as long as there is a handler being executed for that exception. I'm wondering why this code crashes (I know that it's not best practice): class magicException { private: char* m_message; public: magicException(const char* message) { m_message = new char[strlen(message) + 1]; strcpy(m_message, message); } ~magicException() { cout << "Destructor called." << endl; delete[] m_message; } char* getMessage() { return m_message; } }; void someFunction() { throw magicException("Bang!"); } int main(int argc, char * argv[]) { try { someFunction(); } catch (magicException& ex) { cout << ex.getMessage() << endl; } return 0; } Specifically, the destructor of the thrown magicException object gets called before the catch block. If I however add a copy constructor to my class: magicException(const magicException& other) { cout << "Copy constructor called." << endl; m_message = new char[strlen(other.m_message) + 1]; strcpy(m_message, other.m_message); } Then the code works, the destructor gets called at the expected place (the end of the catch block), but interestingly the copy constructor still doesn't get called. Is it optimized away by the compiler (Visual C++ 2008 with optimizations turned off), or am I missing something?
Specifically, the destructor of the thrown magicException object gets called before the catch block. Yes, as your quote from the standard says, a copy is taken by the compiler, and the original (probably) discarded. Your problem is the lack of a copy constructor in your original code. However, a C++ compiler is allowed to remove (or add) copy constructor calls in all sorts of situations, including this one.
2,122,863
2,122,876
Generating a unique id of std::string
I want to generate a unique id as a std::string of limited size (i.e. of size 6) in a 32-bit application. What would be the best and quickest way to do this?
Look up hashing of strings, e.g. the Jenkins hash function. But you will never get unique hashes, because strings can be much longer than your size 6, and the pigeonhole principle shows trivially that hashes must collide as a consequence.
2,122,986
2,123,011
Why does endl get used as a synonym for "\n" even though it incurs significant performance penalties?
This program: #include <iostream> #include <cstdlib> #include <string> int main(int argc, const char *argv[]) { using ::std::cerr; using ::std::cout; using ::std::endl; if (argc < 2 || argc > 3) { cerr << "Usage: " << argv[0] << " [<count>] <message>\n"; return 1; } unsigned long count = 10000; if (argc > 2) { char *endptr = 0; count = ::std::strtoul(argv[1], &endptr, 10); if ((argv[1][0] == '\0') || (*endptr != '\0')) { cerr << "Usage: " << argv[0] << " [<count>] <message>\n"; return 1; } } const ::std::string msg((argc < 3) ? argv[1] : argv[2]); for (unsigned long i = 0; i < count; ++i) { cout << i << ": " << msg << '\n'; } return 0; } when timed like so: $ time ./joe 10000000 fred >/dev/null real 0m15.410s user 0m10.551s sys 0m0.166s takes 15.4 seconds of real time to execute. Replace the output line with this: cout << i << ": " << msg << endl; and you end up with something like this: $ time ./joe 10000000 fred >/dev/null real 0m39.115s user 0m16.482s sys 0m15.803s As you can see, the time to run more than doubles, and the program goes from spending minimal time in the OS to spending nearly half of it's time in the OS. Both versions of the program have identical output, and are guaranteed by the standard to have identical output on every platform. Given this, why do people persist in using endl as a synonym for '\n'? Edit: In case it isn't obvious, this question is intended to be a leading question and is here for instructional purposes. I know why the performance penalty exists.
I'm not certain. Inserting std::endl into the output stream is defined as being equivalent to inserting .widen('\n') and then calling flush() and yet many programmers persist in using std::endl even when there is no cause to flush, for example they go on to immediately output something else. My assumption is that it comes from an incorrect belief that it is somehow more portable because it doesn't explicitly use a specific newline character. This is incorrect, as \n must always be mapped to the system's correct newline sequence for non-binary files by the stream library.
2,123,146
2,129,209
Application does not start in debugger
The application I'm working on does not start in the debugger of Visual Studio 2005. Here's what I do: I rebuild the application and hit F5 to start it The title of the VS2005-window says "projectname (Running) ..." The debugger buttons appear but are greyed out The application appears in the Windows task manager, but it has only 80k in memory usage Nothing happens for a long while, and finally I get a window with the following error message: "Debugging is being stopped but is not yet complete. You can force debugging to stop immediately, but any process being detached may be terminated instead. This window will automatically close when the debugging has completely stopped". The window does not disappear, so after a while I press the "Stop now" button. Nothing happens for a while (the debugger buttons still visible, but greyed) Some time later a new window appears: "Unable to start program '(path to exe)'. OLE har skickat en begäran och väntar på svar". The last sentence is Swedish for "OLE has sent a request and is waiting for response". I press OK and the debugger buttons are gone. The application is still running, and still has only 80k in memory usage. I try to end the process with the task manager, but it is not killed. I quit Visual Studio and finally the process is gone. The application is an unmanaged C++ project, that uses a lot of DLL-files as plugins. I'm using the "multi-threaded debug" runtime, and I've made sure all dependencies are compiled against the same runtime. It was while doing that this problem appeared all of a sudden. I've tried to reverse my changes, but it doesn't help. Restarting the computer doesn't help either. I've got the application running once or twice at random. If I then ended the application and started it again it wasn't started. So I don't think this is because of my configuration. Any ideas? One more note: the application starts and runs as it should if I start it from outside Visual Studio.
Ok, I've solved my problem, but I have no idea how. One thing i tried was deleting all build files and exe and dll files, and then recompile everything. But that didn't help. I then tried one thing at random: the plugins were in the same solution. So I removed them and tried to run again. And this time it worked! So I added all the plugin-projects back, and it still works! So, I guess I will never know what happened. But removing and adding a project to a solution might solve someone elses problem too ... :)
2,123,163
2,123,180
What does this stack trace possibly mean?
I'm having segfault problem in my application written using C++ and compiled using GCC 4.3.2. It is running under Debian 5 x64. The process crashed on the following line of code: #0 0x00000000007c720f in Action::LoadInfoFromDB (this=0x7fae10d38d90) at ../../../src/server/Action.cpp:1233 1233 m_tmap[tId]->slist[sId] = pItem; The stack trace that i got from the core dump is as follows: #0 0x00000000007c720f in Action::LoadInfoFromDB (this=0x7fae10d38d90) at ../../../src/server/Action.cpp:1233 ItemGuid = <value optimized out> ItemEntry = <value optimized out> pItem = (class Item *) 0x2b52bae0 fields = <value optimized out> tId = 1 '\001' sId = 0 '\0' result = (QueryResult *) 0x7fadcae3d8e0 #1 0x00000000007c7584 in Action::DisplayInfo (this=0x0, session=0x7fadbdd44a20) at ../../../src/server/Action.cpp:1090 data = {<ByteBuffer> = {static DEFAULT_SIZE = 4096, _rpos = 220043248, _wpos = 5469086, _storage = {<std::_Vector_base<unsigned char, std::allocator<unsigned char> >> = { _M_impl = {<std::allocator<unsigned char>> = {<__gnu_cxx::new_allocator<unsigned char>> = {<No data fields>}, <No data fields>}, _M_start = 0x41200000 <Address 0x41200000 out of bounds>, _M_finish = 0x0, _M_end_of_storage = 0x7fad00000000 <Address 0x7fad00000000 out of bounds>}}, <No data fields>}}, m_code = 51152} #2 0x00000000007d01a3 in Session::HandleAction (this=0x7fadbdd44a20, recv_data=@0x25d83780) at ../../../src/server/ActionHandler.cpp:862 pAction = (Action *) 0x0 ActionId = 1079 GoGuid = <value optimized out> In frame #1, Action::DisplayInfo was called from Session::HandleAction on pAction. However frame #1 shows this=0x0, and frame #2 shows pAction = (Action *) 0x0. I can't understand why this caused a crash. What does this possibly mean? DisplayInfo can't be called on a null reference ! Any help is most appreciated. Thanks
m_tmap[tId]->slist[sId] = pItem; If that's the crash position, you're most likely indexing into non-existent data. If m_tmap is a std::map it's ok - but did you verify slist[sId] is a valid subscript? Or - you called a member function on a NULL (or otherwise invalid)-Pointer and crash the first time you're accessing a member of the object directly, even if it's a few frames away. Are you sure pAction can't be NULL? Stack traces needn't be valid. Firstly, you can corrupt them in your application. Secondly, optimizing compilers optimize that much away that the resulting stack traces are not reliable. Try a build with compiler optimizations disabled and use assert to verify your array subscripting is ok.
2,123,265
2,123,964
In wxwidgets, how do I make one thread wait for another to complete before proceeding?
I have a system where my singleton class spawns a thread to do a calculation. If the user requests another calculation while another calculation is still running, I want it to tear down the existing thread and start a new one. But, it should wait for the first thread to exit completely before proceeding. I have all the tear down working but I seem to have an issue with making sure that only one thread runs. My approach is for the StartCalculation function to call mutex->Lock(). And the thread in the destructor releases the lock. It's not working. Am I right in assuming that if Lock() can't get the lock, it spins and keeps trying to reacquire the lock? Can this Lock() be called from my main application thread? Any ideas is helpful. Maybe wxMutex locks are the right mechanism for this.
To wait for a thread you need to create it joinable and simply use wxThread::Wait(). However I agree with the remark above: this is not something you'd normally do at all and definitely not from the main GUI thread as you should never block in it because this freezes the UI. Consider using a message queue to simply tell the existing thread about the new task it needs to perform instead.
2,123,480
2,123,506
Are object files platform independent?
Is it possible to compile a program on one platform and link it on another? What does an object file contain? Can we de-link an executable to produce object files?
No. In general object file formats might be the same, e.g. ELF, but the contents of the object files will vary from system to system. An object file contains stuff like: Object code that implements the desired functionality A symbol table that can be used to resolve references Relocation information to allow the linker to locate the object code in memory Debugging information The object code is usually not only processor specific, but also OS specific if, for example, it contains system calls. Edit: Is it possible to compile program on one platform and link with other ? Absolutely. If you use a cross-compiler. This compiler specifically targets a platform and generates object files (and programs) that are compatible with the target platform. So you can use an X86 Linux system, for example, to make programs for a powerpc or ARM based system using the appropriate cross compiler. I do it here.
2,123,699
2,123,751
Where does my C++ compiler look to resolve my #includes?
this is a really basic question. I've been learning C++ and thus far I have only used the standard library. I have been including things like <iostream> and with no problems. Now I want to use Apache Xerces, so I've installed it on my machine (a Debian system) and am following a tutorial which says I need to include: #include <xercesc/sax2/SAX2XMLReader.hpp> but g++ says "error: xercesc/sax2/SAX2XMLReader.hpp: No such file or directory". Where is it looking? Do I need to give it more information? Thanks.
Use the --verbose option: [...] #include "..." search starts here: #include <...> search starts here: /usr/lib/gcc/i686-pc-linux-gnu/4.4.2/../../../../include/c++/4.4.2 /usr/lib/gcc/i686-pc-linux-gnu/4.4.2/../../../../include/c++/4.4.2/i686-pc-linux-gnu /usr/lib/gcc/i686-pc-linux-gnu/4.4.2/../../../../include/c++/4.4.2/backward /usr/local/include /usr/lib/gcc/i686-pc-linux-gnu/4.4.2/include /usr/lib/gcc/i686-pc-linux-gnu/4.4.2/include-fixed /usr/include End of search list. [...] You can use the -I option to add search directories, as explained here: http://gcc.gnu.org/onlinedocs/gcc-4.4.3/gcc/Directory-Options.html#Directory-Options You can also use environment variables to change this permanently: http://gcc.gnu.org/onlinedocs/gcc-4.4.3/gcc/Environment-Variables.html#Environment-Variables In your case, you could use CPLUS_INCLUDE_PATH.
2,123,823
2,123,851
Dump class/struct member variables in g++
Is there a flag in g++ or tools to dump the member variables of a struct/class? To illustrate, consider source code like this struct A { virtual void m() {}; }; struct B : public A { int b; virtual void n() = 0; }; struct C : public B { int c1, c2; void o(); }; struct D : public C { virtual void n() {}; A d; }; I want to get something similar to A: 0 = (vptr) B: 0 = (vptr) 4 = b C: 0 = (vptr) 4 = b 8 = c1 12 = c2 D: 0 = (vptr) 4 = b 8 = c1 12 = c2 16 = d (-fdump-class-hierarchy does not work. It only prints the member functions.) (Assume I don't know the classes A to D, or there are so many classes that I don't want to list them out myself.) (Specifically, I want to dump the member variables of http://www.opensource.apple.com/source/xnu/xnu-1456.1.26/iokit/IOKit/IOUserClient.h).
Use the right tool for the right job. g++ isn't much of a hierarchy viewing tool. You can always use a external tool like doxygen, that can dump graphviz diagrams. For power-solutions there is gcc-xml, that can dump your whole program into an xml file that you can parse at will.
2,123,877
2,123,889
The problem with header files
I have 3 header files in the project: Form1.h - this is a header with the implementation there, TaskModel.h with TaskModel.cpp, TaskController.h with TaskController.cpp. These are the contents of the files: //----- TaskController.h #pragma once #include "TaskModel.h" .......... //---- Form1.h #pragma once #include "TaskModel.h" #include "TaskController.h" ......... The problem: How do I get Form1.h included into TaskModel.h? When I include it directly, Form1.h in TaskModel.h produces many errors. If I use a forward declaration, how do I organize that?
You can forward declare classes, not header files. The problem with cyclic dependencies is usually a mark of bad design. Do you want TaskModel.h to include Form1.h? Why is that? Can it be avoided? Couldn't you just include Form1.h into TaskModel.cpp? For forward declaration do: // in TaskModel.h class Form1; // or other classes that are used in TaskModel.h //... task model code // in TaskModel.cpp #include "Form1.h" Basically what you are doing here is declaring that such classes exist. Then in the cpp file you include them. Mind however that this has some limitations: you can only use the forward declared classes for simple tasks you cannot pass them to methods by value, you cannot make them members of classes As a rule of thumb, if the forward-declared class's size is needed to compile the given piece of code, you cannot use a forward.
2,123,907
2,123,941
concurrent reference counter class and scoped retain: is this ok?
This is a question regarding coding design, so please forgive the long code listings: I could not resume these ideas and the potential pitfalls without showing the actual code. I am writing a ConcurrentReferenceCounted class and would appreciate some feedback on my implementation. Sub-classes from this class will receive "release" instead of a direct delete. Here is the class: class ConcurrentReferenceCounted : private NonCopyable { public: ConcurrentReferenceCounted() : ref_count_(1) {} virtual ~ConcurrentReferenceCounted() {} void retain() { ScopedLock lock(mutex_); ++ref_count_; } void release() { bool should_die = false; { ScopedLock lock(mutex_); should_die = --ref_count_ == 0; } if (should_die) delete this; } private: size_t ref_count_; Mutex mutex_; }; And here is a scoped retain: class ScopedRetain { public: ScopedRetain(ConcurrentReferenceCounted *object) : object_(object) { retain(); } ScopedRetain() : object_(NULL) {} ~ScopedRetain() { release(); } void hold(ConcurrentReferenceCounted *object) { assert(!object_); // cannot hold more then 1 object object_ = object; retain(); } private: ConcurrentReferenceCounted *object_; void release() { if (object_) object_->release(); } void retain() { object_->retain(); } }; And finally this is a use case: Object *target; ScopedRetain sr; if (objects_.get(key, &target)) sr.hold(target); else return; // use target // no need to 'release'
Your ConcurrentReferenceCounted seems to use a full mutex, which is not necessary and not very fast. Reference counting can be implemented atomically using architecture-dependent interlocked instructions. Under Windows, the InterlockedXXX family of functions simply wraps these instructions.
2,124,097
2,125,299
C++ container/array/tuple consistent access interface
Is there, perhaps in boost, consistent element access semantics which works across containers? Something along the lines of: element_of(std_pair).get<1>(); element_of(boost_tuple).get<0>(); element_of(pod_array).get<2>(); In principle I can write it myself, but I would rather not reinvent the wheel. Thanks.
I'm not aware of such a thing. You could most probably just implement a free get function for the types you're interested in. Boost.Tuple already has it. std::pair has it in C++0x. And the rest shouldn't be too complicated. E.g #include <iostream> #include <utility> #include <vector> #include <boost/tuple/tuple.hpp> namespace getter { template <size_t Index, class Container> typename Container::reference get(Container& c) { return c[Index]; } template <size_t Index, class Container> typename Container::const_reference get(const Container& c) { return c[Index]; } template <size_t Index, class T> T& get(T *arr) { return arr[Index]; } namespace detail { template <size_t Index, class T, class U> struct PairTypeByIndex; template <class T, class U> struct PairTypeByIndex<0u, T, U> { typedef T type; type& operator()(std::pair<T, U>& p) const { return p.first; } const type& operator()(const std::pair<T, U>& p) const { return p.first; } }; template <class T, class U> struct PairTypeByIndex<1u, T, U> { typedef U type; type& operator()(std::pair<T, U>& p) const { return p.second; } const type& operator()(const std::pair<T, U>& p) const { return p.second; } }; } template <size_t Index, class T, class U> typename detail::PairTypeByIndex<Index, T, U>::type& get(std::pair<T, U>& p) { return detail::PairTypeByIndex<Index, T, U>()(p); } template <size_t Index, class T, class U> const typename detail::PairTypeByIndex<Index, T, U>::type& get(const std::pair<T, U>& p) { return detail::PairTypeByIndex<Index, T, U>()(p); } using boost::get; } int main() { boost::tuple<int, int> tuple(2, 3); std::cout << getter::get<0>(tuple) << '\n'; std::vector<int> vec(10, 1); vec[2] = 100; std::cout << getter::get<2>(vec) << '\n'; const int arr[] = {1, 2, 3, 4, 5}; std::cout << getter::get<4>(arr) << '\n'; std::pair<int, float> pair(41, 3.14); ++getter::get<0>(pair); const std::pair<int, float> pair_ref = pair; std::cout << getter::get<0>(pair_ref) << ' ' << getter::get<1>(pair_ref) << '\n'; }
2,124,161
2,124,174
Manipulating scrollbars in third-party application
I need to create an application which does the following: at the beginning we have a Notepad window open with a lot of text in it. Our application must scroll through this file and take a screenshot of the Notepad window after each scroll action. I've tried to achieve this using SBM_GETRANGE and SBM_SETPOS, but it does not work for me. Please note that emulating keyboard events (e.g. PageDown, PageUp) is not an option for me because this application should also work with other applications which may not support keyboard shortcuts for manipulating scrollbars. Thanks.
Don't try to manipulate the scrollbar directly - instead SetFocus() to the text window, then send Page Down messages. If there are applications where you must manipulate the scrollbar, you should get its window handle and send the messages there.
2,124,339
2,124,385
C++ preprocessor __VA_ARGS__ number of arguments
Simple question for which I could not find answer on the net. In variadic argument macros, how to find the number of arguments? I am okay with boost preprocessor, if it has the solution. If it makes a difference, I am trying to convert variable number of macro arguments to boost preprocessor sequence, list, or array for further reprocessing.
This is actually compiler dependent, and not supported by any standard. Here however you have a macro implementation that does the count: #define PP_NARG(...) \ PP_NARG_(__VA_ARGS__,PP_RSEQ_N()) #define PP_NARG_(...) \ PP_ARG_N(__VA_ARGS__) #define PP_ARG_N( \ _1, _2, _3, _4, _5, _6, _7, _8, _9,_10, \ _11,_12,_13,_14,_15,_16,_17,_18,_19,_20, \ _21,_22,_23,_24,_25,_26,_27,_28,_29,_30, \ _31,_32,_33,_34,_35,_36,_37,_38,_39,_40, \ _41,_42,_43,_44,_45,_46,_47,_48,_49,_50, \ _51,_52,_53,_54,_55,_56,_57,_58,_59,_60, \ _61,_62,_63,N,...) N #define PP_RSEQ_N() \ 63,62,61,60, \ 59,58,57,56,55,54,53,52,51,50, \ 49,48,47,46,45,44,43,42,41,40, \ 39,38,37,36,35,34,33,32,31,30, \ 29,28,27,26,25,24,23,22,21,20, \ 19,18,17,16,15,14,13,12,11,10, \ 9,8,7,6,5,4,3,2,1,0 /* Some test cases */ PP_NARG(A) -> 1 PP_NARG(A,B) -> 2 PP_NARG(A,B,C) -> 3 PP_NARG(A,B,C,D) -> 4 PP_NARG(A,B,C,D,E) -> 5 PP_NARG(1,2,3,4,5,6,7,8,9,0, 1,2,3,4,5,6,7,8,9,0, 1,2,3,4,5,6,7,8,9,0, 1,2,3,4,5,6,7,8,9,0, 1,2,3,4,5,6,7,8,9,0, 1,2,3,4,5,6,7,8,9,0, 1,2,3) -> 63
2,124,483
2,124,505
Controlling cursor and keyboard with C++/Visual C++
This time I have a question about C++. I'm using Dev-C++ for programming, but I also have Visual C++ Express installed, so both are fine. I'm creating a program for automated tasks, a macro of sorts. As I'm a noob in C++ (I started a week ago), I need help. Please keep the answers simple :-D This is a part of my learning progress in C++. Also, I tried to Google these for a long time without success, so I'd rather not get answers telling me to search. How is it possible to move the cursor around the screen? This should be possible, as everything is, but is there any simple way/function to do this? I also need to click mouse buttons. This one must be a simple one: pressing keys. I have nothing to add to this. Hope you can help. Martti Laine
If you're writing to the console, you'd rather use something like conio.h or curses.
2,124,514
2,124,521
How to ensure a member is 4-byte aligned?
In order to use OSAtomicDecrement (a Mac-specific atomic operation), I need to provide a 4-byte aligned SInt32. Does this kind of cooking work? Is there another way to deal with alignment issues? struct SomeClass { SomeClass() { member_ = &storage_ + ((4 - (&storage_ % 4)) % 4); *member_ = 0; } SInt32 *member_; struct { SInt32 a; SInt32 b; } storage_; };
If you're on a Mac, that means GCC. GCC can auto align variables for you: __attribute__((__aligned__(4))) int32_t member_; Please note that this is not portable across compilers, as this is GCC specific.
2,124,633
2,124,677
Atomic increment on mac OS X
I have googled for atomic increment and decrement operators on Mac OS X and found "OSAtomic.h", but it seems you can only use this in kernel space. Jeremy Friesner pointed me at a cross-platform atomic counter in which they use assembly or a mutex on OS X (as far as I understood the interleaving of ifdefs). Isn't there something like InterlockedDecrement or atomic_dec() on OS X?
What makes you think OSAtomic is kernel space only? The following compiles and works fine. #include <libkern/OSAtomic.h> #include <stdio.h> int main(int argc, char** argv) { int32_t foo = 1; OSAtomicDecrement32(&foo); printf("%d\n", foo); return 0; }
2,124,749
2,125,150
Compiling libmagic statically (c/c++ file type detection)
Thanks to the guys who helped me with my previous question (linked just for reference). I can place the files fileTypeTest.cpp, libmagic.a, and magic in a directory, and I can compile with g++ -lmagic fileTypeTest.cpp -o fileTypeTest. Later, I'll be testing to see if it runs in Windows compiled with MinGW. I'm planning on using libmagic in a small GUI application, and I'd like to compile it statically for distribution. My problem is that libmagic seems to require the external file, magic. (I'm actually using my own shortened and compiled version, magic_short.mgc, but I digress.) A hacky solution would be to code the file into the application, creating (and deleting) the external file as needed. How can I avoid this? added for clarity: magic is a text file that describes properties of different filetypes. When asked to identify a file, libmagic searches through magic. There is a compiled version, magic.mgc, that works faster. My application only needs to identify a handful of filetypes before deciding what to do with them, so I'll be using my own magic_short file to create magic_short.mgc.
This is tricky; I suppose you could do it this way. By the way, I have downloaded the libmagic source and have been looking at it. There's a function in there called magic_read_entries within minifile.c (this is the pure vanilla source that I downloaded from SourceForge), where it reads from the external file. You could append the magic file (which is found in the /etc directory) to the end of the library, like this: cat magic >> libmagic.a. On my system, magic is 474443 bytes and libmagic.a is 38588 bytes. In the magic.c file, you would need to change the magichandle_t* magic_init(unsigned flags) function: at the end of the function, add a call to magic_read_entries, and modify the function itself to read at the offset of the library itself to pull in the data, treating it as a pointer to pointers to chars (char **) and using that instead of reading from the file. Since you know where the offset into the library is for reading, that should not be difficult. The function magic_read_entries will then no longer be used in its old form, as it is not going to read from a file anymore; magichandle_t* magic_init(unsigned flags) will take care of loading the entries and you should be OK there. If you need further help, let me know. Edit: I have used the old libmagic from sourceforge.net and here is what I did: Extract the downloaded archive into your home directory; ungzipping/untarring the archive will create a folder called libmagic. Create a folder within libmagic and call it Test. Copy the original magic.c and minifile.c into Test. Using the enclosed diff output highlighting the difference, apply it onto the magic.c source: 48a49,51 > #define MAGIC_DATA_OFFSET 0x971C > #define MAGIC_STAT_LIB_NAME "libmagic.a" > 125a129,130 > /* magic_read_entries is obsolete... */ > magic_read_entries(mh, MAGIC_STAT_LIB_NAME); 251c256,262 < --- > > if (!fseek(fp, MAGIC_DATA_OFFSET, SEEK_SET)){ > if (ftell(fp) != MAGIC_DATA_OFFSET) return 0; > }else{ > return 0; > } > Then issue make. The magic file (which I copied from /etc, under Slackware Linux 12.2) is concatenated to the libmagic.a file, i.e. cat magic >> libmagic.a. The SHA checksum for magic is (4abf536f2ada050ce945fbba796564342d6c9a61 magic); here's the exact data for magic (-rw-r--r-- 1 root root 474443 2007-06-03 00:52 /etc/file/magic) as found on my system. Here's the diff for the minifile.c source; apply it and rebuild the minifile executable by running make again: 40c40 < magic_read_entries(mh,"magic"); --- > /*magic_read_entries(mh,"magic");*/ It should work then. If not, you will need to adjust the offset into the library for reading by modifying MAGIC_DATA_OFFSET. If you wish, I can stick the magic data file up on pastebin. Let me know. Hope this helps. Best regards, Tom.
2,124,836
2,124,846
redefine a non-virtual function in C++
When I read Effective C++, it says to never redefine an inherited non-virtual function in C++. However, when I tested it, the code below compiles correctly. So what's the point? Is it a mistake or just bad practice? class A { public: void f() { cout<<"a.f()"<<endl;}; }; class B: public A { public: void f() { cout<<"b.f()"<<endl;}; }; int main(){ B *b = new B(); b->f(); return 0; }
Redefining a non-virtual function is fine so long as you aren't depending on virtual dispatch behavior. The author of the book is afraid that you will pass your B* to a function that takes an A* and then be upset when the result is a call to the base method, not the derived method.
2,124,921
2,124,954
static binding of default parameter
In Effective C++, the book gives just one sentence on why default parameters are statically bound: "If default parameter values were dynamically bound, compilers would have to come up with a way to determine the appropriate default values for parameters of virtual functions at runtime, which would be slower and more complicated than the current mechanism of determining them during compilation." Can anybody elaborate on this a bit more? Why is it complicated and inefficient? Thanks so much!
Whenever a class has virtual functions, the compiler generates a so-called v-table to calculate the proper addresses that are needed at runtime to support dynamic binding and polymorphic behavior. Lots of class optimizers work toward removing virtual functions for this reason exactly. Less overhead, and smaller code. If default parameters were also calculated into the equation, it would make the whole virtual function mechanism all the more cumbersome and bloated.
2,124,963
2,124,983
Global variables in C++
I am supposed to write a program that reads numbers from input in the main() part, and then makes some calculations in other bool functions. I don't want to pass the whole arrays of numbers and all the other parameters to the functions every time I call them. My question is this: can I somehow make C++ read input into some variables in a way that other functions outside of main() will also "know" these variables and what's inside them, so I don't have to pass a lot of arguments when I call the functions? This is the code: #include <iostream> using namespace std; inline bool del(int n) { int i; for(i=0;i<s1;i++) { if((n % a[i]) == 0) return true; } return false; } inline bool ned(int n) { int i; for(i=0;i<s2;i++) { if((n % b[i]) != 0) return true; } return false; } int main(void) { int s1, s2, a[25], b[25]; int m, n, i, k=0; bool d, nd; cin >> s1 >> s2 >> m >> n; for(i=0;i<s1;i++) cin >> a[i]; for(i=0;i<s2;i++) cin >> b[i]; for(i=m;i<=n;i++) { d = del(i); nd = ned(i); if(d == true && nd == true) ++k; } cout << k << endl; return 0; } int s1, s2, a[25], b[25] <- These are the vars I need to be seen by the other functions (because I use them, as you can see). I tried declaring them as global, but that didn't work; I got errors like "was not declared in this scope". Thank you for the help.
Making variables global for this reason is bad habit. Either just pass the arrays to the functions, or make the whole thing into an object and make the arrays and functions members of the class. This is what OOP is about.
2,124,986
2,125,040
Can't exchange widget in QSplitter (Qt)
I have a QSplitter with two widgets. One of them is static; the other one is supposed to change on the press of a button. But the problem is the widget does not change. I have a pointer to the widget that is changing: this->content. The widget to switch to is in the pointer named widget. Here's a code snippet where I switch the widget: qDebug() << "before: " << this->content; this->content = widget; qDebug() << "after: " << this->content; this->content->update(); this->content->repaint(); My debug output there verifies that the pointer points to the other widget: before: QLineEdit(0x363850) after: SCTableView(0x3644c0) I tried to make it show by calling update() and repaint(), without any success. Any ideas?
Problem solved. Got help from some people in #qt on freenode. Thanks. I forgot to call setVisible(true) on this->content after switching to the new widget.
2,125,003
2,125,651
Design choice for sound effects
I'm trying to decide how I want to implement sound effects in my program. I've been debating between two options. 1) Create an abstract interface SoundEffect and have every sound effect derive from it. Each sound effect is its own class. Upon construction, it opens the sound file and plays it, and upon destruction it closes the file. The main drawback I see to this approach is that I'll have a lot of very small classes, which would greatly increase the number of files. I could put multiple sound effects in a single header (ones that are related), but I'm not sure. 2) Since playing any sound effect calls the same code, with the only difference being the file it opens, I could create a single SoundEffect class whose constructor takes an enumerator naming the sound effect. The class would use a switch to play the appropriate sound. Obviously I'm debating over an OOP approach vs a more "traditional" approach, and I'm wondering what the best design choice is here. I am heavily leaning towards the OOP approach, but I'm not sure how I want to structure the files. If you have any other recommendations, I'd be glad to hear them.
If I understand that right, you are hard-coding the sound effects for all possible sounds? That sounds wrong: you create different subclasses for differing behaviour, not for differing data. If you have certain sound effect types that need preprocessing of the data, subclasses make sense. If the project is bigger, you might want to separate effect handling code and effect parameters so you can change effects without rebuilding the application (e.g. FMOD separates coding and sound design). For playing different sound files, just let the class's constructor take the path or some resource id for the sound file; there is no switch needed here. If you're dealing with a large number of sound files that are used repeatedly, a pool-based approach would be useful to avoid reloading files every time you play them. One idiom for that is the flyweight pattern (see e.g. Boost.Flyweight for an implementation).
2,125,021
2,125,317
Can I make this C++ code faster without making it much more complex?
here's a problem I've solved from a programming problem website(codechef.com in case anyone doesn't want to see this solution before trying themselves). This solved the problem in about 5.43 seconds with the test data, others have solved this same problem with the same test data in 0.14 seconds but with much more complex code. Can anyone point out specific areas of my code where I am losing performance? I'm still learning C++ so I know there are a million ways I could solve this problem, but I'd like to know if I can improve my own solution with some subtle changes rather than rewrite the whole thing. Or if there are any relatively simple solutions which are comparable in length but would perform better than mine I'd be interested to see them also. Please keep in mind I'm learning C++ so my goal here is to improve the code I understand, not just to be given a perfect solution. Thanks Problem: The purpose of this problem is to verify whether the method you are using to read input data is sufficiently fast to handle problems branded with the enormous Input/Output warning. You are expected to be able to process at least 2.5MB of input data per second at runtime. Time limit to process the test data is 8 seconds. The input begins with two positive integers n k (n, k<=10^7). The next n lines of input contain one positive integer ti, not greater than 10^9, each. Output Write a single integer to output, denoting how many integers ti are divisible by k. 
Example Input: 7 3 1 51 966369 7 9 999996 11 Output: 4 Solution: #include <iostream> #include <stdio.h> using namespace std; int main(){ //n is number of integers to perform calculation on //k is the divisor //inputnum is the number to be divided by k //total is the total number of inputnums divisible by k int n,k,inputnum,total; //initialize total to zero total=0; //read in n and k from stdin scanf("%i%i",&n,&k); //loop n times and if k divides into n, increment total for (n; n>0; n--) { scanf("%i",&inputnum); if(inputnum % k==0) total += 1; } //output value of total printf("%i",total); return 0; }
I tested the following on 28311552 lines of input. It's 10 times faster than your code. What it does is read a large block at once, then finishes up to the next newline. The goal here is to reduce I/O costs, since scanf() is reading a character at a time. Even with stdio, the buffer is likely too small. Once the block is ready, I parse the numbers directly in memory. This isn't the most elegant of codes, and I might have some edge cases a bit off, but it's enough to get you going with a faster approach. Here are the timings (without the optimizer my solution is only about 6-7 times faster than your original reference) [xavier:~/tmp] dalke% g++ -O3 my_solution.cpp [xavier:~/tmp] dalke% time ./a.out < c.dat 15728647 0.284u 0.057s 0:00.39 84.6% 0+0k 0+1io 0pf+0w [xavier:~/tmp] dalke% g++ -O3 your_solution.cpp [xavier:~/tmp] dalke% time ./a.out < c.dat 15728647 3.585u 0.087s 0:03.72 98.3% 0+0k 0+0io 0pf+0w Here's the code. #include <iostream> #include <stdio.h> using namespace std; const int BUFFER_SIZE=400000; const int EXTRA=30; // well over the size of an integer void read_to_newline(char *buffer) { int c; while (1) { c = getc_unlocked(stdin); if (c == '\n' || c == EOF) { *buffer = '\0'; return; } *buffer++ = c; } } int main() { char buffer[BUFFER_SIZE+EXTRA]; char *end_buffer; char *startptr, *endptr; //n is number of integers to perform calculation on //k is the divisor //inputnum is the number to be divided by k //total is the total number of inputnums divisible by k int n,k,inputnum,total,nbytes; //initialize total to zero total=0; //read in n and k from stdin read_to_newline(buffer); sscanf(buffer, "%i%i",&n,&k); while (1) { // Read a large block of values // There should be one integer per line, with nothing else. // This might truncate an integer! nbytes = fread(buffer, 1, BUFFER_SIZE, stdin); if (nbytes == 0) { cerr << "Reached end of file too early" << endl; break; } // Make sure I read to the next newline. 
read_to_newline(buffer+nbytes); startptr = buffer; while (n>0) { inputnum = 0; // I had used strtol but that was too slow // inputnum = strtol(startptr, &endptr, 10); // Instead, parse the integers myself. endptr = startptr; while (*endptr >= '0') { inputnum = inputnum * 10 + *endptr - '0'; endptr++; } // *endptr might be a '\n' or '\0' // Might occur with the last field if (startptr == endptr) { break; } // skip the newline; go to the // first digit of the next number. if (*endptr == '\n') { endptr++; } // Test if this is a factor if (inputnum % k==0) total += 1; // Advance to the next number startptr = endptr; // Reduce the count by one n--; } // Either we are done, or we need new data if (n==0) { break; } } // output value of total printf("%i\n",total); return 0; } Oh, and it very much assumes the input data is in the right format.
2,125,189
2,125,265
Generic Alpha Beta Search with C++
I'm trying to design a function template which searches for the best move for any game; the user of this function template has to implement some game-specific functions. What I'm trying to do is generalize the alpha-beta search algorithm with a function template. The declaration of this function template looks like this: template<class GameState, class Move, class EndGame, class Evaluate, class GetMoves, class MakeMove> int alphaBetaMax(GameState g, int alpha, int beta, int depthleft); Among other things the function has to: Determine if a game has ended: bool EndGame(g) Evaluate the state of a game: int Evaluate(g) Get the possible moves: std::vector<Move> moves = GetMoves(g) Make a move: Gamestate gnew = MakeMove(g, moves[i]) Do you think the function has too many template arguments? Is there a way to reduce the number of arguments? One idea is to extend the GameState class with members that evaluate the game state or decide if the game has ended, but an alpha-beta search tree contains a lot of GameState instances, which may lead to unnecessary memory requirements, so I'd like to keep GameState small. In general, is a function template actually the right way?
You could define an abstract interface, say game_traits, and have a specialized game_traits implementation for each game: template<typename Game> class game_traits { ... }; class Chess { ... }; template<> class game_traits<Chess> { static bool endGame(Chess game); ... }; template <typename Game, typename traits = game_traits<Game> > int alphaBetaMax(Game game, int alpha, int beta, int depthleft) { ended = traits::endGame(game); ... } See char_traits in the C++ standard library for how it is used there. Alternatively, you could make them just methods of the Game classes; you don't need inheritance here from some abstract class since you supply it as a template argument. You will just get a, perhaps not so transparent, compile error when your template function tries to access, say, game.has_ended(), when no such method exists. This kind of mechanism is also used a lot in the standard template library. By the way, there was a new feature planned for this, Concepts: auto concept GameType<typename Game> { bool has_ended(Game&); ... }; template<typename Game> requires GameType<Game> int alphaBetaMax(Game game, int alpha, int beta, int depthleft) { bool ended = game.has_ended(); ... } Unfortunately Concepts have been postponed to a future version of the standard and will not yet appear in C++0x :(
2,125,209
2,125,336
Linker errors even though I prevent them with #ifndef
I am getting linker errors that suggest I am not using #ifndef and #define. 1>TGALoader.obj : error LNK2005: "struct TGA tga" (?tga@@3UTGA@@A) already defined in main.obj 1>TGALoader.obj : error LNK2005: "struct TGAHeader tgaheader" (?tgaheader@@3UTGAHeader@@A) already defined in main.obj 1>TGALoader.obj : error LNK2005: "unsigned char * uTGAcompare" (?uTGAcompare@@3PAEA) already defined in main.obj 1>TGALoader.obj : error LNK2005: "unsigned char * cTGAcompare" (?cTGAcompare@@3PAEA) already defined in main.obj 1>LINK : warning LNK4098: defaultlib 'LIBCMTD' conflicts with use of other libs; use /NODEFAULTLIB:library I have included a header file Texture.h and tga.h from the nehe opengl tutorials into my project. I have #ifndef TGAISCOOL #define TGAISCOOL #endif in my tga.h file. If I include this more than once, I get the errors from the linker that I pasted above. The first two are from texture.h though the situation is the same. Any ideas on what is wrong?
You're not doing anything wrong. The problem is with the Tga.h file you got from NeHe. This header file defines four objects which means that if you include the file in different translation units the symbols for these will appear multiple times and that is what the linker is complaining about. The solution is to move the definitions of these objects into the Tga.cpp file. The lines in Tga.h that previously had the definitions should now read extern TGAHeader tgaheader; extern TGA tga; extern GLubyte uTGAcompare[12]; extern GLubyte cTGAcompare[12]; with the original versions of these lines now in Tga.cpp
2,125,330
2,125,420
Heap corruption when changing member variable order
I have a quite strange problem. My class has, among others, the following members: GLboolean has_alpha; GLuint width; GLuint height; GLuint length; GLuint millisPerFrame; GLfloat uv[2]; GLuint texsize[2]; GLint compsize; // location2 long preload_interval_next; long preload_interval; If I put has_alpha at (location2), I get a) a different object size (sizeof reports 248 instead of 252 bytes) and b) hefty heap corruption. GLboolean is defined as unsigned char, but since I use NO optimization at all (double checked this), this should be padded to 4 bytes anyway. And in the end, if it pads, it should do it at both locations. Compilers tested: Clang (c++), GCC 4.2 (com.apple.compilers.llvmgcc42). Anyone have an idea how to track this down?
The problem here is almost certainly not in the members you have listed, but another one, possibly an int, pointer or bool that is not properly initialised in the constructor. Please post a larger example that fails, and make sure you initialise all members using the constructor initialisation list.
2,125,476
2,125,507
Invalid ESP when using multiple inheritance in C++ (VS2005)
I've been making a game which uses the Box2D physics engine, and I've come across some weirdness with the stack pointer (ESP) and multiple inheritance. I've managed to reproduce it in a minimal amount of code, and it seems that the order in which I declare the classes to be used in multiple inheritance seems to dictate whether the program crashes or not. #include <iostream> #include <string.h> using namespace std; class IPhysicsObject { public: virtual void Collide(IPhysicsObject *other, float angle, int pos)=0; }; class IBoardFeature { public: IBoardFeature(){}; ~IBoardFeature(){}; virtual bool OnAttach(int x){ return true; } virtual bool Update(int x, float dt)=0; }; /* class CScorezone : public IBoardFeature, public IPhysicsObject // this breaks !!! class CScorezone : public IPhysicsObject, public IBoardFeature // this works !!! */ class CScorezone : public IBoardFeature, public IPhysicsObject { public: CScorezone(){} ~CScorezone(void){} virtual bool Update(int x, float dt) { return true; } virtual void Collide(IPhysicsObject *other, float angle, int pos) { } virtual bool OnAttach(int x){ return true; } }; int main(int argc, char *argv[]) { CScorezone *scoreZone = new CScorezone(); CScorezone *otherZone = new CScorezone(); void *voidZone = scoreZone; IPhysicsObject *physZone = static_cast<IPhysicsObject*>(voidZone); physZone->Collide(otherZone, 10, 1); delete scoreZone; delete otherZone; // wait for user input int x; cin >> x; return 0; } Running this in debug mode causes the following error Run-Time Check Failure #0 - The value of ESP was not properly saved across a function call. This is usually a result of calling a function declared with one calling convention with a function pointer declared with a different calling convention. When I step in to the following line of code: physZone->Collide(otherZone, 10, 1); I notice it's going into CScoreZone::OnAttach, not CScoreZone::Collide. Why is this? 
When I change the order of inheritance for CScorezone, it works fine: class CScorezone : public IPhysicsObject, public IBoardFeature I'm running VS2005 SP2 (8.0.50727.768) on Windows XP. Any ideas?
The problem is that you cast the pointer to void* first. The compiler then doesn't know how to perform a static cast on the pointer. With multiple inheritance it needs to adjust the pointer value during the cast so that it points at the second superclass's subobject (with its own virtual table). Just cast the pointer back to CScorezone* before using static_cast.
2,125,485
2,125,496
C++: Dynamically choose which subclass to create
I am new to C++ and I have a question. Let's say we have a base class Base and two derived classes, Derived1 and Derived2. E.g. Derived1 has a constructor taking an integer and Derived2 a constructor taking a boolean. Is it possible to determine at run time (or at compile time) which of those two subclasses to create and assign it to the base class? Something like this: Base b = ???(value), where value is of type integer or boolean. Thanks in advance!
You probably want the Factory design pattern.