3,099,075
3,099,233
Bitwise setting in C++
#define OUTGOING_MASK      0x0c
#define OUTGOING_DISABLED  0x04
#define OUTGOING_ENABLED   0x08
#define OUTGOING_AUTO      0x00
#define REFER_SUPPORTED    0x80

Assume support is some value of type int. I have a getter function:

int get()
{
    if (OUTGOING_DISABLED == (support & OUTGOING_MASK))
        return 1;
    else if (OUTGOING_ENABLED == (support & OUTGOING_MASK))
        return 2;
    else if (OUTGOING_AUTO == (support & OUTGOING_MASK))
        return 3;
}

I need to write a set function for this, like:

void set(int val)
{
    if (val == 1)
        // todo
    else if (val == 2)
        // todo
    else if (val == 3)
        // todo
}

How do I write the getter and setter functions for this? I need to get/set the support variable here. REFER_SUPPORTED will always be set in support.
I have a statement such as a1 = b & a2; how can I know the value of b using bitwise operators?

You can't recover the value of b unless a has ALL bits set: & is irreversible.

Explanation: the & operation has the following truth table:

a   b   a & b
1 & 1 = 1
0 & 1 = 0
1 & 0 = 0
0 & 0 = 0

which means that, to recover b, you could try to use the following table:

a   result   b
0   0        unknown - could be 1 or 0
0   1        invalid/impossible - could not happen
1   0        0
1   1        1

As you can see, it isn't possible to determine b in all cases. In the expression a & b = c, if you know c and a, you can't recover b, because for every zeroed bit of c whose corresponding bit of a is also zero, there are two possible states for the corresponding bit of b. You can reliably recover b only if every bit of a is set to 1.
3,099,082
3,099,279
twofaced file problem
I'm reading some files from the system32 folder in my program, and one of these files (C:\Windows\System32\gdi32.dll) demonstrates very strange behavior. When I read it from my program, it shows a size of 310'784 bytes; when I view its size from Explorer, it shows 404'480 bytes. How could that be?
The most likely explanation is that your program is 32-bit and Explorer is 64-bit. When a 32-bit program opens files in C:\Windows\System32 (which contains 64-bit DLLs), it's actually redirected to C:\Windows\SysWOW64 (which contains 32-bit DLLs). The size difference you're seeing is the difference between the C:\Windows\SysWOW64\gdi32.dll and C:\Windows\System32\gdi32.dll files. For more information, see KB article 896456.
3,099,135
3,099,139
c++ type error message from compiler, what does it mean?
I'm using g++ on Fedora Linux 13. I'm just practicing some exercises from my C++ textbook and can't get this one program to compile. Here is the code:

double *MovieData::calcMed()
{
    double medianValue;
    double *medValPtr = &medianValue;
    *medValPtr = (sortArray[numStudents-1] / 2);
    return medValPtr;
}

Here is the class declaration:

class MovieData
{
private:
    int *students;         // students points to int, will be dynamically allocated an array of integers.
    int **sortArray;       // A pointer that is pointing to an array of pointers.
    double average;        // Average movies seen by students.
    double *median;        // Median value of movies seen by students.
    int *mode;             // Mode value, or most frequent number of movies seen by students.
    int numStudents;       // Number of students in sample.
    int totalMovies;       // Total number of movies seen by all students in the sample.
    double calcAvg();      // Method which calculates the average number of movies seen.
    double *calcMed();     // Method that calculates the mean value of data.
    int *calcMode();       // Method that calculates the mode of the data.
    int calcTotalMovies(); // Method that calculates the total amount of movies seen.
    void selectSort();     // Sort the Data using selection sort algorithm.
public:
    MovieData(int num, int movies[]); // constructor
    ~MovieData();                     // destructor
    double getAvg() { return average; }          // returns the average
    double *getMed() { return median; }          // returns the mean
    int *getMode() { return mode; }              // returns the mode
    int getNumStudents() { return numStudents; } // returns the number of students in sample
};

Here are my constructor, destructor and selectSort():

MovieData::MovieData(int num, int movies[])
{
    numStudents = num;
    // Now I will allocate memory for student and sortArray:
    if(num > 0)
    {
        students = new int[num];
        sortArray = new int*[num];
        // The arrays will now be initialized:
        for(int index = 0; index < numStudents; index++)
        {
            students[index] = movies[index];
            sortArray[index] = &students[index];
        }
        selectSort(); // sort the elements of sortArray[] that point to the elements of students.
        totalMovies = calcTotalMovies();
        average = calcAvg();
        median = calcMed();
        mode = calcMode();
    }
}

// Destructor:
// Delete the memory allocated in the constructor.
MovieData::~MovieData()
{
    if(numStudents > 0)
    {
        delete [] students;
        students = 0;
        delete [] sortArray;
        sortArray = 0;
    }
}

// selectSort()
// performs selection sort algorithm on sortArray[],
// an array of pointers. Sorted on the values its
// elements point to.
void MovieData::selectSort()
{
    int scan, minIndex;
    int *minElement;
    for(scan = 0; scan < (numStudents - 1); scan++)
    {
        minIndex = scan;
        minElement = sortArray[scan];
        for(int index = 0; index < numStudents; index++)
        {
            if(*(sortArray[index]) < *minElement)
            {
                minElement = sortArray[index];
                minIndex = index;
            }
        }
        sortArray[minIndex] = sortArray[scan];
        sortArray[scan] = minElement;
    }
}

The compiler is giving this error:

moviedata.cpp: In member function 'double* MovieData::calcMed()':
moviedata.cpp:82: error: invalid operands of types 'int*' and 'double' to binary 'operator/'

I'm not sure what to make of this error. I've tried static casting the types with no luck. What does this error message mean?
sortArray[numStudents - 1] is a pointer to int, which can't be on the left side of a division (when you remember pointers are addresses, this makes sense). If you post more of your code, we can help you correct it. Perhaps you want something like: int *MovieData::calcMed() { return sortArray[(numStudents - 1) / 2]; } This returns the middle element in your array, which should be a pointer to the middle student. I'm not clear why you're sorting lists of pointers (not the actual values), or why you're returning a pointer here. The return value + 1 will be a pointer to the next value in students, which is not the next greater value numerically. So you might as well return the actual student (int from students). If you do this, you can also average the two middle elements when the count is even (this rule is part of the typical median algorithm). Note that I changed the return type to int *, the type of sortArray's elements. Also, your comment is incorrect. This is the median, not the mean. Also, your selection sort is wrong. The inner loop should start at scan + 1.
3,099,229
3,099,295
In SDL, does SDL_Quit() free every surface?
Basically, on surfaces that are going to exist right until the program terminates, do I need to run SDL_FreeSurface() for each of them, or would SDL_Quit() take care of all this for me? I ask mainly because the pointers to a number of my surfaces are class members, and therefore I would need to keep track of each class instance (in a global array or something) if I wanted to run SDL_FreeSurface() on each of their respective surfaces. If SDL_Quit() will do it all in one fell swoop for me, I'd much rather go with that :D
It's been a while since I used SDL, but I'm pretty sure SDL_Quit just cleans up the screen surface (the main screen buffer that you set up at the beginning). You have to free the other surfaces you create manually or you get leaks. Of course, since they're already class members, one way to do that easily would be to just free them up in the class destructor.
3,099,360
3,101,748
Threading Model for a Game Engine
I'm interested in getting threading into the small engine I'm working on in my spare time, but I'm curious about the best approach. I'm curious about the recommended way to sync the physics thread with the rest of the engine, similar to ThisGuy. I'm working with the Bullet Physics SDK, which already uses the data copy method he was describing, but I was wondering: once Bullet goes through one simulation step and then syncs the data back to the other threads, won't it result in something like vertical sync, where the rendering thread, halfway through processing data, suddenly starts using a newer and different set of information? Is this something the viewer will be able to notice? What if an explosion of some sort appears with an object that is meant to be destroyed? If this is an issue, what then is the best way to solve it? Lock the physics thread so it can't do anything until the rendering thread (and basically every other thread) has gone through its frame? That seems like it would waste some CPU time. Or is the preferable method to triple buffer: copy the physics data to a second location, continue the physics simulation, then copy that data to the rendering thread once it's ready? What approaches do you recommend?
The easiest and probably most used variant is to run the physics, render, AI, ... threads in parallel and synchronize them after each of them has finished a frame/timestep. This is not the fastest solution, but the one with the fewest problems. Writing the data back to the rendering thread while it is running leads to massive synchronization problems (e.g. you have to lock each vector/matrix while updating it). To make the parallelization efficient, you have to minimize the amount of data to synchronize, e.g. only write data to the render thread that can possibly be rendered. When not synchronizing after each frame, you can get the effect that the physics/AI uses all the CPU power producing 60 updates per second while the renderer only manages 10fps, which in most cases is not what you want. Double buffering would also increase performance, but you still need to synchronize your threads. A remaining problem is AI and physics (or similar) threads, because they may want to modify the same data.
3,099,385
3,100,047
Inheritance and storing static class information
I'm trying to set up some stuff with Lua, but the specifics of Lua aren't important for my question. What I would like to be able to do is call a function, say OpenLib<T>(L), and have it get the table name for a particular class (as well as its table) and register it with Lua. It essentially boils down to this:

template <class T>
static void OpenLib(lua_State* L)
{
    // this func does some other stuff too that I'm omitting, important bit below
    if (T::myTable && T::myTableName)
    {
        luaL_openlib(L, T::myTableName, T::myTable, 0);
    }
}

I've tried this a few different ways and I can't get it to work right. I tried making a base class that contains myTable and myTableName, like so:

class LuaInfo
{
public:
    static const char* myTableName;
    static luaL_reg* myTable;
};

Then I could just inherit from LuaInfo and fill in the info that I needed. That didn't work because all classes that inherit from LuaInfo would get the same info, so I looked around and got the idea of doing this:

template <class T>
class LuaInfo
// ...

which made the syntax to initialize it a little silly, as I now have to do class Widget : public LuaInfo<Widget>, but it was closer to working:

template <class T>
void OpenLib(lua_State* L)
{
    if (T::myTable && T::myTableName)
    {
        luaL_openlib(L, LuaInfo<T>::myTableName, LuaInfo<T>::myTable, 0);
    }
}

I've tried a few variants of this to try to get it right, but I keep getting errors like

undefined reference to `ag::LuaInfo<ag::ui::Widget>::myTable'

Is what I want to do possible, and if so, what's the right way to go about doing it?
Using

template<typename T>
class LuaInfo
{
public:
    static const char* myTableName;
    static luaL_reg* myTable;
};

should work OK. Your problem is that you need to define your static variables. A single source file containing a bunch of lines like this will solve it:

template<> luaL_reg* LuaInfo<ag::ui::Widget>::myTable = 0;
template<> const char* LuaInfo<ag::ui::Widget>::myTableName = 0;
template<> luaL_reg* LuaInfo<ag::ui::OtherClass>::myTable = 0;
template<> const char* LuaInfo<ag::ui::OtherClass>::myTableName = 0;

and so on. You may want to define a macro to make this nicer:

#define LUAINFOIMPL(X) template<> luaL_reg* LuaInfo<X>::myTable = 0; template<> const char* LuaInfo<X>::myTableName = 0

LUAINFOIMPL( ag::ui::Widget );
LUAINFOIMPL( ag::ui::OtherClass );

However, it's a bit ugly to scale that way. I was thinking traits-style templates might solve this... but I'm not sure they scale any better.
3,099,445
3,099,530
Please help me understand this syntax (implementing static assert in C++)
This syntax was used as part of an answer to this question:

template <bool> struct static_assert;
template <> struct static_assert<true> {};  // only true is defined

#define STATIC_ASSERT(x) static_assert<(x)>()

I do not understand that syntax. How does it work? Suppose I do STATIC_ASSERT(true); it gets converted to static_assert<true>(); Now what?
STATIC_ASSERT(true); indeed means static_assert<true>(); which evaluates to nothing. static_assert<true> is just an empty structure without any members. static_assert<true>() creates an object of that structure and does not store it anywhere. This simply compiles and does nothing. On the other hand STATIC_ASSERT(false); means static_assert<false>(); which results in compilation error. static_assert has no specialization for false. So a general form is used. But the general form is given as follows: template <bool> struct static_assert; which is just a declaration of a structure and not its definition. So static_assert<false>() causes compilation error as it tries to make an object of a structure which is not defined.
3,099,595
3,099,658
problem with va_arg()
I want to write a function with variable arguments in this way:

static void configElement(U32 localFaultId,
                          char* name,
                          U32 report,
                          U32 localId,
                          U32 detectTime,
                          U32 ceaseTime, ...)
{
    U32 i = 0;
    U32 tmpNo = 0;
    va_list ap;

    if (nofFaults >= MAX_NOF_LOCAL_FAULTS)
    {
        // something here
        return;
    }
    else
    {
        faultList[nofFaults].ceaseTime = ceaseTime;
        va_start(ap, ceaseTime);
        tmpNo = va_arg(ap, U32);
        while ((tmpNo != END_MARK) && (i < MAX_NOF_DEPEND))
        {
            faultList[nofFaults].dependList[i++].faultNo = tmpNo;
        }
        faultList[nofFaults].dependList[i].faultNo = END_MARK;
        /* Finish by increment nofFaults parameter */
        va_end(ap);
        nofFaults++;
    }
}

However, I get this error message when compiling the code:

fault_manager.cc:3344: error: expected primary-expression before ',' token
fault_manager.cc:3387: error: expected primary-expression before 'U32'
fault_manager.cc:3387: error: expected `)' before 'U32'
fault_manager.cc:3387: error: expected `)' before ';' token
fault_manager.cc:3387: error: expected `)' before ';' token

I have no idea what is going wrong here. My platform is Windows, and I'm using Cygwin + Eclipse (CDT). The version of gcc is 4.1.1. Any ideas will be appreciated!
It looks like the compiler does not know what U32 is. Did you include all necessary headers?
3,099,638
3,099,660
The difference between unsigned long and UINT64
What is the difference between unsigned long and UINT64? I think they are the same, but I'm not sure. The definition of UINT64 is : typedef unsigned __int64 UINT64 (by using StdAfx.h)
UINT64 is specific and declares your intent. You want a type that is an unsigned integer that is exactly 64 bits wide. That this may be equal to an unsigned long on some platforms is coincidence.
3,099,674
3,099,750
Virtual function and Classes
I need some answers to basic questions. I'm lost again. :(

q1 - Is this statement valid: whenever we define a function to be a pure virtual function, that function has no body.

q2 - And what is the concept of dynamic binding? I mean, if the compiler resolves the calls using VTABLEs and VPTRs, then how is it run-time polymorphism?

q3 - What are VTABLEs and VPTRs, and how do their sizes change?

q4 - Please see this code:

class base
{
public:
    virtual void display() { cout << "Displaying from base"; }
};

class derived : public base
{
public:
    void display() { cout << "\nDisplaying from derived"; }
};

int main()
{
    base b, *bptr;
    derived d;
    bptr = &b;
    bptr->display();
    bptr = &d;
    bptr->display();
}

Output:

Displaying from base
Displaying from derived

Can somebody please explain why a pointer of the base class can call the member function of a derived class, while the vice versa is not possible?
1. False. It just means any derived class must implement said function. You can still provide a definition for the function, and it can be called via Base::Function().*

2. Virtual tables are a way of implementing virtual functions. (The standard doesn't mandate that this is the method, though.) When making a polymorphic call, the generated code looks up the function in the virtual table and calls that one, enabling run-time binding. (The table itself is generated at compile time.)

3. See above. Their sizes change as there are more virtual functions. However, instances don't store a table but rather a pointer to the table, so a class only grows by a single pointer.

4. Sounds like you need a book.

* A classic example of this is:

struct IBase
{
    virtual ~IBase(void) = 0;
};

inline IBase::~IBase(void) {}

This wouldn't be an abstract class without a pure virtual function, but a destructor requires a definition (since it will be called when derived classes destruct).
3,099,695
3,099,706
C++ Typedefs and operator overloading
If you define a type like

typedef int MY_INT;

and go on to overload, say, the addition operator of MY_INT like

MY_INT operator+(MY_INT a, MY_INT b);

will

MY_INT a, b;
a + b;

be different from

int A, B;
A + B;

? Sorry for any syntax errors; I'm not near a compiler and I want to ask this before I forget about it.
No. A typedef is actually an alias for another type. The original and typedef-ed types are the same.
3,099,947
3,099,990
Multimap containing pairs?
Is it possible for a multimap to contain pairs within it? I.e., rather than being defined as multimap<char,int>, for instance, it would be defined as multimap<pair, pair>? How would this multimap then be sorted? Also, how would one access the individual contents of each pair?
Is it possible for a multimap to contain within it pairs?

Yes, it's possible.

How would this multimap then be sorted?

By the key (the first pair): i.e., first by the first element of the first pair, then by the second element of the first pair.

Also, how would one access the individual contents of each pair?

multimap<pair<T1, T2>, pair<T3, T4> >::iterator it = mymultimap.begin();
it->first.first;
it->first.second;
it->second.first;
it->second.second;

In other words, a multimap of pairs works exactly as expected! Update: I'd also like to add that I discourage any use of pairs of pairs; it makes the code very hard to read. Use structs with real variable names instead.
3,100,095
3,100,159
Festival C/C++ API compiling an example, linking libraries error
I am having problems with the Festival C++ API (Windows XP). After I make both festival and speech_tools successfully (Cygwin), I have a file called festival_example.cc, which contains:

#include <stdio.h>
#include <festival.h>

int main(int argc, char **argv)
{
    EST_Wave wave;
    int heap_size = 210000;   // default scheme heap size
    int load_init_files = 1;  // we want the festival init files loaded
    festival_initialize(load_init_files, heap_size);

    // Say simple file
    //festival_say_file("/etc/motd");

    festival_eval_command("(voice_ked_diphone)");
    // Say some text;
    festival_say_text("hello world");

    // Convert to a waveform
    festival_text_to_wave("hello world", wave);
    wave.save("/tmp/wave.wav", "riff");

    // festival_say_file puts the system in async mode so we better
    // wait for the spooler to reach the last waveform before exiting
    // This isn't necessary if only festival_say_text is being used (and
    // your own wave playing stuff)
    festival_wait_for_spooler();
    return 0;
}

Then (in Cygwin) I type:

g++ festival_example.cc -I./festival/src/include -I./speech_tools/include -L./festival/src/lib -libFestival -L./speech_tools/lib -libestools -libestbase -libeststring

It cannot find the libraries. If I write -I/cygdrive/c/0621/source/build/festival/src/include (and the same in all the others), the error persists. I have my program in C:\0621\source\build, and inside I have the folders \festival\ and \speech_tools\ :)
Replace -lib* with -l*. For instance -libFestival won't work. Do g++ festival_example.cc -I./festival/src/include -I./speech_tools/include -L./festival/src/lib -lFestival
3,100,121
3,100,288
c++ Compression library - Deflate or Gzip
I'm looking for a useful compression library for C++ (on Windows). I need preferably Deflate or Gzip, and I need it to be compatible with .NET's System.IO.Compression. Also, if it gives me a decorator over a stream, that would be great, so I could do:

std::ostringstream stringStream;
CompressionStream cs(stringStream);
cs << object;
cs.flush();
magicalThingy.Send(stringStream.str());

Thank you
Take a look at Boost.Iostreams, which provides such filters, allowing you to compress a std::iostream to the gzip or zlib formats (it actually uses zlib under the hood but has a nicer interface). These formats are standard, so anybody (.NET too) should be able to open them.
3,100,311
3,100,438
Emitting native code (for a specific platform)
How do you get started generating native code for a target platform? I've got (some) experience and (some) skill in C++, and am interested in going and writing my own compiler (for C++). But I've got little idea how I'm going to turn the end result into native code to execute on my target platform, which at the moment is just Windows, x86. I've had a look at LLVM, but couldn't understand their documentation for shiz. Edit: In addition, LLVM won't build on VS2010. I went through and cleaned up all the places where they apparently added .in on the end of the filename for fun, and fixed up the typedefs, and now they have some strange .def files that appear to be utterly needless that won't compile, and I have no idea where in the trillion headers they're called from. More edit: I'm already building/built my own AST/parser/lexer. I just need to know how to turn the results into native.
Look at some of the other questions on writing compilers, such as learning-to-write-a-compiler, and having worked through something like the dragon book or Programming Language Pragmatics, you'll understand enough of LLVM or gnu lightning to use them as your back end. Don't try to write a C++ compiler unless you have several years to devote to the exercise; on the other hand creating something based on an existing back-end and expression templates might only take a couple of months.
3,100,322
3,125,903
boost.test vs. CppUnit
I've been using CppUnit for quite a while now (and am happy with it). As we are using more and more parts of the boost library I had a short look on boost.test and I'm wondering now if I should switch to boost.test in a new project or not. Can anyone here tell me about the differences between the two frameworks and the benefits (if there are any) of using boost.test?
Do yourself a favor and go straight to Google Test, which makes CppUnit and boost::unit_test look clunky and repetitive. For example, say you have a simple fixture:

class MyFixture : public ::testing::Test
{
protected:
    int foo;
    virtual void SetUp() { foo = 0; }
};

To add a test to your fixture, write it!

TEST_F(MyFixture, FooStartsAtZero)
{
    EXPECT_EQ(0, foo);
}

That's all you need. Notice the lack of explicit test-suite declarations or a separate agenda that repeats all your tests' names. Compile it as in

$ g++ -o utest utest.cpp -lgtest -lgtest_main

and run your test to get

Running main() from gtest_main.cc
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from MyFixture
[ RUN      ] MyFixture.FooStartsAtZero
[       OK ] MyFixture.FooStartsAtZero (0 ms)
[----------] 1 test from MyFixture (0 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (0 ms total)
[  PASSED  ] 1 test.

(Run it yourself to see the nice green text for passing tests!) This is only the beginning. Take a look at the Google Test primer and the advanced guide to see what else is possible.
3,100,341
3,106,081
Boost-Python raw pointers constructors
I am trying to expose a C++ library to Python using Boost.Python. The library actually wraps an underlying C API, so it uses raw pointers a lot.

// implementation of function that creates a Request object
inline Request Service::createRequest(const char* operation) const
{
    blpapi_Request_t *request;
    ExceptionUtil::throwOnError(
        blpapi_Service_createRequest(d_handle, &request, operation)
    );
    return Request(request);
}

// request.h
class Request {
    blpapi_Request_t *d_handle;
    Element d_elements;
    Request& operator=(const Request& rhs); // not implemented
public:
    explicit Request(blpapi_Request_t *handle);
    Request(RequestRef ref);
    Request(Request &src);
};

// request.cpp
BOOST_PYTHON_MODULE(request)
{
    class_<blpapi_Request_t>;
    class_<Request, boost::noncopyable>("Request", init<blpapi_Request_t *>())
        .def(init<Request&>())
        ;
}

Although request.cpp compiles successfully, when I try to use the object I get the following error:

// error output
TypeError: No to_python (by-value) converter found for C++ type: class Request

In order to call this, the Python code looks like:

from session import *
from service import *
from request import *

so = SessionOptions()
so.setServerHost('localhost')
so.setServerPort(8194)

session = Session(so)

# start session
if not session.start():
    print 'Failed to start session'
    raise Exception

if not session.openService('//blp/refdata'):
    print 'Failed to open service //blp/refdata'
    raise Exception

service = session.getService('//blp/refdata')
request = service.createRequest('ReferenceDataRequest')

The other objects (SessionOptions, Session, Service, etc.) are also C++ objects that I have successfully created Boost.Python wrappers for. As I understand from the Boost.Python docs, this has something to do with passing a raw pointer around, but I don't really understand what else I should do...
Your class_<blpapi_Request_t>; does not declare anything; is that code the correct version? If so, then update it: class_<blpapi_Request_t>("blpapi_Request_t"); That said, what that error indicates is that you are trying to use the Request object with an automatic conversion to a python object which has not been defined. The reason you get this error is because you have wrapped Request as boost::noncopyable, then provided a factory method which returns a Request object by value; the boost::noncopyable means no copy constructors are generated and therefore there's no automatic to-python converter. Two ways out of this: one is to remove the noncopyable hint; the other would be to register a converter which takes a C++ Request and returns a Python Request object. Do you really need the noncopyable semantics for Request?
3,100,343
3,100,775
Connecting to oracle database using C++, the basics
I have a question about the theory here. I'm just starting a project which is based on C++ applications integrating with Oracle DBs, and I've come down to two choices: OCCI and OCI. OCCI is said to be aimed at the C++ environment, but I was wondering: would it be any good to use the OCI libraries from my C++ app, since OCI is said to have better performance, or would I run into compatibility issues? Thanks in advance :)
You can have a look at OTL. It's a wrapper above OCI or OCCI (not sure which) and will give you some templates and samples to start with for an Oracle connection in C++.
3,100,365
3,100,936
Why is −1 > sizeof(int)?
Consider the following code:

template<bool> class StaticAssert;
template<> class StaticAssert<true> {};

StaticAssert< (-1 < sizeof(int)) > xyz1; // Compile error
StaticAssert< (-1 > sizeof(int)) > xyz2; // OK

Why is -1 > sizeof(int) true? Is it true that -1 is promoted to unsigned(-1), and then unsigned(-1) > sizeof(int) is evaluated? Is it true that -1 > sizeof(int) is equivalent to -1 > size_t(4) if sizeof(int) is 4? If this is so, why would -1 > size_t(4) be false? Is this C++ standard conformant?
The following is how the standard (ISO 14882) explains -1 > sizeof(int).

The relational operator > is defined in 5.9 (expr.rel/2):

The usual arithmetic conversions are performed on operands of arithmetic or enumeration type. ...

The usual arithmetic conversions are defined in 5 (expr/9):

... This pattern is called the usual arithmetic conversions, which are defined as follows: If either operand is of type long double, ... Otherwise, if either operand is double, ... Otherwise, if either operand is float, ... Otherwise, the integral promotions shall be performed on both operands. ...

The integral promotions are defined in 4.5 (conv.prom/1):

An rvalue of type char, signed char, unsigned char, short int, or unsigned short int can be converted to an rvalue of type int if int can represent all the values of the source type; otherwise, the source rvalue can be converted to an rvalue of type unsigned int.

The result of sizeof is defined in 5.3.3 (expr.sizeof/6):

The result is a constant of type size_t

size_t is defined in the C standard (ISO 9899) and is an unsigned integer type.

So for -1 > sizeof(int), the > triggers the usual arithmetic conversions. The usual arithmetic conversions convert -1 to an unsigned type, because int cannot represent all the values of size_t. -1 thus becomes a very large number, the exact value depending on the platform. So -1 > sizeof(int) is true.
3,100,499
3,100,836
Signal a thread accross Process boundary
Would it be possible for a COM client to signal a thread in a COM Server?
To let a COM client signal the server, you'd need some COM interface like this: interface IClientServerSignalling { void SignalMyServer(); } The COM Client would QueryInterface on some existing object (or you could implement a specific object just for this purpose) and then call the method, which gets marshalled across to the COM server where it gets executed. The method could then do whatever you need. If you're trying to get an invocation on a specific worker thread on the COM server, then your SignalMyServer() method could use synchronization mechanisms such as CreateEventEx() and the wait functions to talk across. Arguably, you could do this from COM Client to COM Server without using a COM API but that assumes you know where the COM Server is running and that you have the right security privileges and permissions to do so.
3,100,554
3,100,786
Wrapping C++ dynamic array with Python+ctypes, segfault
I wanted to wrap a small piece of C++ code allocating an array with ctypes, and there is something wrong with storing the address in a c_void_p object. (Note: the pointers are intentionally cast to void*, 'cause later I want to do the allocation the same way for arrays of C++ objects, too.) The C(++) functions to be wrapped:

void* test_alloc()
{
    const int size = 100000000;
    int* ptr = new int[size];
    std::cout << "Allocated " << size * sizeof(int) << " bytes @ " << ptr << std::endl;
    return static_cast<void*>(ptr);
}

void test_dealloc(void* ptr)
{
    int* iptr = static_cast<int*>(ptr);
    std::cout << "Trying to free array @ " << iptr << std::endl;
    delete[] iptr;
}

The Python wrapper (assume the former functions are already imported with ctypes):

class TestAlloc(object):
    def __init__(self):
        self.pointer = ctypes.c_void_p(test_alloc())
        print "self.pointer points to ", hex(self.pointer.value)

    def __del__(self):
        test_dealloc(self.pointer)

For small arrays (e.g. size = 10), it seems OK:

In [5]: t = TestAlloc()
Allocated 40 bytes @ 0x1f20ef0
self.pointer points to 0x1f20ef0

In [6]: del t
Trying to free array @ 0x1f20ef0

But if I want to allocate a large one (size = 100 000 000), problems occur:

In [2]: t = TestAlloc()
Allocated 400000000 bytes @ 0x7faec3b71010
self.pointer points to 0xffffffffc3b71010L

In [3]: del t
Trying to free array @ 0xffffffffc3b71010
Segmentation fault

The address stored in ctypes.c_void_p is obviously wrong; the upper 4 bytes are invalid. Somehow 32-bit and 64-bit addresses are mixed, and with the large array allocation the memory manager is (in this case) forced to return an address not representable in 32 bits (thx TonJ). Can someone please provide a workaround for this? The code has been compiled with g++ 4.4.3 and run on Ubuntu Linux 10.04 x86_64 with 4G RAM. Python version is 2.6.5. Thank you very much!

UPDATE: I managed to solve the problem. I forgot to specify restype for test_alloc().
The default value for restype was ctypes.c_int, into which the 64-bit address did not fit. Adding test_alloc.restype = ctypes.c_void_p before the call to test_alloc() solved the problem.
From just looking at it, it seems that the problem is not in the small/big array allocation, but in a mix of 32bit and 64bit addresses. In your example, the address of the small array fits in 32 bits, but the address of the big array doesn't.
3,100,714
3,101,090
GetDIBits() is failing with PNG compression
I am trying to get the size of a PNG image (without storing it to a file). I am using this code as reference. When calling GetDIBits(), the size of the image is written into bi.biSizeImage. Everything works fine when bi.biCompression is BI_RGB, but when I changed the compression mode from BI_RGB to BI_PNG, GetDIBits() started to fail. Please help me to solve this.
According to http://msdn.microsoft.com/en-us/library/dd145023%28VS.85%29.aspx: "This extension is not intended as a means to supply general JPEG and PNG decompression to applications, but rather to allow applications to send JPEG- and PNG-compressed images directly to printers having hardware support for JPEG and PNG images." using GetDIBits() with BI_PNG is not allowed.
3,100,842
3,100,893
C++ dynamic review tools
What's the best tool (commercial/open source) you've used for dynamic review/memory analysis of a C++ application? EDIT: removed 'static' as there is already a great question on this topic (thanks Iulian!)
For dynamic memory analysis definitely Valgrind.
3,100,989
3,101,403
Problem in Communicating with Digitally SIgned c# com dll from c++ in WIN 7 ultimate
I have a c# com dll which I register to registry Using regasm . I communicate with this c# dll from a c++ exe . I use create instance to initialize the dll . when both these components are not signed digital it is working perfectly . when they are digitally signed cocreate instance fails. get last error says "Token not found" . this happens in WIN 7 ultimate and PRo where as in xp and vista it works.
Do you re-register your DLLs using regasm once you sign them? Use regedit to check the registration info at HKEY_CLASSES_ROOT\CLSID\{your-guid-here}\InprocServer32, e.g.: Assembly REG_SZ YourComObjectName, Version=1.0.0.0, Culture=neutral, PublicKeyToken=abcdfc550b465bd3 If your PublicKeyToken=null then this is the cause. To get the token from your DLL use the sn.exe tool, e.g.: sn.exe -T path\to\your.dll
3,100,997
3,101,049
C++ - STL vector question
Is there any way to make std::vector faster on reserving + resizing? I would like to achieve the performance which would be somewhat equivalent to plain C arrays. See the following code snippets: TEST(test, vector1) { for (int i = 0; i < 50; ++i) { std::vector<int> a; a.reserve(10000000); a.resize(10000000); } } TEST(test, vector2) { for (int i = 0; i < 50; ++i) { std::vector<int> a(10000000); } } TEST(test, carray) { for (int i = 0; i < 50; ++i) { int* new_a = new int[10000000]; delete[] new_a; } } First two tests are two times slower (4095 ms vs 2101 ms) and, obviously, that happens because std::vector is nulling the elements in it. Any ideas on how this could be avoided? Or probably there is some standard (boost?) container that implements a fixed-size and heap-based array? Thank you
Well naturally the first 2 tests are slower. They explicitly go through the entire vector and call "int()" on each element. Edit: This has the effect of setting all the elements to "0". Just try reserving. There is some very relevant info to your question in this question i asked a while back: std::vector reserve() and push_back() is faster than resize() and array index, why?
3,101,160
3,116,336
ide code information
I've been annoyed lately by the fact that PyDev doesn't information about classes and function when it code completes wxPython code. Can anybody tell me FOSS IDE's or extensions that offer code information (function params, returns etc.) when it code completes for C/C++ and Python. I am a fan of CodeLite, Eclipse CDT and CodeBlocks in that order for C/C++ (excepting non-FOSS) and PyScripter, PyDev for Python in that order.
Vim + Exuberant Ctags See here, here and here for C++ autocompletion (also referred to as IntelliSense, taken from the name for Visual Studio's autocomplete). And here for Python autocomplete/"intellisense" for vim. (I should point out I found the link to that from this post on SO). If that doesn't include the ctags for wxPython as you require, you might want to check out this guy's ctags-based highlighting which apparently does work for wxPython (and perhaps take the ctags file from that?) Probably also worth checking out this enormous list of Python IDEs on SO (specifically those with "AC" tags) if you've not already seen that? I realise your question is a bit more specific than just basic Auto Complete, but perhaps there's some new options in there for you...
3,101,185
3,101,207
Why C++ STL does not provide hashtable and union data structures?
At various places, I've read that STL does not provide hashtable and union data structures. How could these be implemented using other existing STL data structures?
Try the std::tr1::unordered_map for your hash map. std::map is ordered, so it's not really as efficient as hash. Not sure what you mean by a union data structure, but you can have unioned structs in C++ EDIT: Additionally there are many other implementations of hash maps that some have done. Boost has an unordered map, Prasoon mentioned one in the question comments, and Google has sparsehash.
3,101,211
3,101,232
c++: explain this function declaration
class PageNavigator { public: // Opens a URL with the given disposition. The transition specifies how this // navigation should be recorded in the history system (for example, typed). virtual void OpenURL(const GURL& url, const GURL& referrer, WindowOpenDisposition disposition, PageTransition::Type transition) = 0; }; I don't understand what is that =0; part...what are we trying to communicate?
'= 0' means it's a pure virtual method. It must be overridden in an inheriting class. If a class has a pure virtual method it is considered abstract. Instances (objects) of abstract classes cannot be created. They are intended to be used as base classes only. Curious detail: '= 0' doesn't mean the method has no definition (no body). You can still provide a method body, e.g.: class A { public: virtual void f() = 0; virtual ~A() {} }; void A::f() { std::cout << "This is A::f.\n"; } class B : public A { public: void f(); }; void B::f() { A::f(); std::cout << "And this is B::f.\n"; }
3,101,225
3,119,212
Problems calling Python from C++
test.py def add(a,b): """ """ print a,b,a+b return a+b c program #include <python.h> int _tmain(int argc, _TCHAR* argv[]) { try { PyObject *pName,*pModule,*pDict,*pFunc,*pArgs1,*pArgs2,*pOutput; Py_Initialize(); if(!Py_IsInitialized()) return -1; pModule=PyImport_ImportModule("test"); pDict=PyModule_GetDict(pModule); pFunc=PyDict_GetItemString(pDict,"add"); pArgs1=Py_BuildValue("ii", 1,2); //pArgs2=Py_BuildValue("i", 2); pOutput=PyEval_CallObject(pFunc,pArgs1); int c=0; PyArg_Parse(pOutput, "d", &c); cout<<c; //PyRun_SimpleString(""); Py_Finalize(); } catch(exception* ex) { cout<<ex->what(); } char c; cin>>c; return 0; } Console print nothing and closed. What's wrong? Thanks!
I found that the first line contained some Chinese characters: #XXX. It also didn't work in PythonWin; it reported an error. So I deleted them, and then it was OK!
3,101,387
3,101,425
Setting processor affinity for a process and its Effects
I have an application which has a UI module and other supporting services. These other services have memory leaks and thread synchronization issues, so on some machines, especially quad-core and dual-core machines, the services crash every now and then. I know the best way to fix this is to clean up the memory leaks and synchronization issues. But as a workaround we set the processor affinity for 2 of those services and observed that the crash did not happen after that. Now my question is: will my services take a performance hit because I am limiting them to using only one processor? Edit 1: Note: these services are multi-threaded.
This certainly depends on whether those services rely on threading. If they are single-threaded you won't notice much difference. In your case, since the services are multithreaded, they might or might not experience a performance penalty - this will depend on the actual design, specifically on whether they rely on several threads being executed in parallel.
3,101,487
3,107,571
Release mode static library much larger than debug mode version
today i found out that the compiled static library i'm working on is much larger in Release mode than in Debug. I found it very surprising, since most of the time the exact opposite happens (as far as i can tell). The size in debug mode is slightly over 3 MB (its a fairly large project), but in release it goes up to 6,5 MB. Can someone tell me what could be the reason for this? I'm using the usual Visual Studio (2008) settings for a static library project, changed almost nothing in the build configuration settings. In release, i'm using /O2 and "Favor size or speed" is set to "Neither". Could the /O2 ("Maximize speed") cause the final .lib to be so much larger than the debug version with all the debugging info in it? EDIT: Additional info: Debug: - whole program optimization: No - enable function level linking: No Release: - whole program optimization: Enable link-time code generation - enable function level linking: Yes
The difference is specifically because of link-time code generation. Read the chapter Link-Time Code Generation in Compilers - What Every Programmer Should Know About Compiler Optimizations on MSDN - it basically says that with LTCG turned on the compiler produces much more data that is packed into the static library so that the linker can use that extra data for generating better machine code while actually linking the executable file. Since you have LTCG off in Debug configuration the produced library is noticeably smaller since it doesn't have that extra data. PS: Original Link (not working at 11/09/2015)
3,101,771
3,103,660
ESP error when sending window messages between threads
I have an Observer class and a Subscriber class. For testing purposes, the observer creates a thread that generates fake messages and calls CServerCommandObserver::NotifySubscribers(), which looks like this: void CServerCommandObserver::NotifySubscribers(const Command cmd, void const * const pData) { // Executed in worker thread // for (Subscribers::const_iterator it = m_subscribers.begin(); it != m_subscribers.end(); ++it) { const CServerCommandSubscriber * pSubscriber = *it; const HWND hWnd = pSubscriber->GetWindowHandle(); if (!IsWindow(hWnd)) { ASSERT(FALSE); continue; } SendMessage(hWnd, WM_SERVERCOMMAND, cmd, reinterpret_cast<LPARAM>(pData)); } } The subscriber is a CDialog derived class, that also inherits from CServerCommandSubscriber. In the derived class, I added a message map entry, that routes server commands to the subscriber class handler. // Derived dialog class .cpp ON_REGISTERED_MESSAGE(CServerCommandObserver::WM_SERVERCOMMAND, HandleServerCommand) // Subscriber base class .cpp void CServerCommandSubscriber::HandleServerCommand(const WPARAM wParam, const LPARAM lParam) { const Command cmd = static_cast<Command>(wParam); switch (cmd) { case something: OnSomething(SomethingData(lParam)); // Virtual method call break; case // ... }; } The problem is, that I see strange crashes in the HandleServerCommand() method: It looks something like this: Debug Error! Program: c:\myprogram.exe Module: File: i386\chkesp.c Line: 42 The value of ESP was not properly saved across a function call. This is usually the result of calling a function declared with one calling convention with a function pointer declared with a different calling convention. I checked the function pointer that AfxBeginThread() wants to have: typedef UINT (AFX_CDECL *AFX_THREADPROC)(LPVOID); // AFXWIN.H static UINT AFX_CDECL MessageGeneratorThread(LPVOID pParam); // My thread function To me, this looks compatible, isn't it? I don't know, what else I have to look for. Any ideas? 
I made another strange observation, that might be related: In the NotifySubscribers method, I call IsWindow() to check if the window to which the handle points exists. Apparently it does. But calling CWnd::FromHandlePermanent() returns a NULL pointer.
I eventually decided to do it without window messages and am now posting my workaround here. Maybe it will help someone else. Instead of letting the observer post window messages to its subscribers, I let the observer put data into synchronized subscriber buffers. The dialog class subscriber uses a timer to periodically check its buffers and call the appropriate handlers if those aren't empty. There are some disadvantages: It's more coding effort because for each data type, a buffer member needs to be added to the subscriber. It's also more space consuming, as the data exists for each subscriber and not just once during the SendMessage() call. One also has to do the synchronization manually instead of relying on the observer thread being suspended while the messages are handled. A - IMO - huge advantage is that it has better type-safety. One doesn't have to cast some lParam values into pointers depending on wParam's value. Because of this, I think this workaround is very acceptable if not even superior to my original approach.
3,101,811
3,101,898
how to configure dynamic linking of libxml2?
I don't want to look stupid, but how should I link libxml2 to my g++ project (Linux environment)? What should I add to my code besides #include <libxml/tree.h>? Thanks for a link or a quick hint! ps. I added this to my CXXFLAGS: xml2-config --cflags --libs. Enough?
CXXFLAGS are for the compiler, LDFLAGS for the linker. So add xml2-config --libs to your LDFLAGS and xml2-config --cflags to your CXXFLAGS
3,101,906
3,102,168
Two-way inclusion of classes & template instances
I'm having a problem when trying to compile these two classes (Army and General) in their own header files: #ifndef ARMY_H #define ARMY_H #include "definitions.h" #include "UnitBase.h" #include "UnitList.h" #include "General.h" class Army { public: Army(UnitList& list); ~Army(void); UnitBase& operator[](const ushort offset); const UnitBase& operator[](const ushort offset) const; const uint getNumFightUnits() const; const ushort getNumUnits() const; const General<Warrior>* getWarrior() const; private: UnitBase** itsUnits; uint itsNumFightUnits; ushort itsNumUnits; WarriorGeneral* itsGeneral; }; #endif and #ifndef GENERAL_H #define GENERAL_H #include "generalbase.h" #include "Warrior.h" class Army; template <class T> class General : public GeneralBase, public T { public: General(void); ~General(void); void setArmy(Army& army); const Army& getArmy() const; private: Army* itsArmy; }; typedef General<Warrior> WarriorGeneral; #endif I have tried forward declaring WarriorGeneral in Army.h, but it doesn't seem to work, perhaps because it's a template instance? Anyway, the errors I'm getting with the above version are several of this kind and related problems: Army.h(21): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int They're not even unresolved linker problems... Note I put the typedef of WarriorGeneral in the General.h file. I don't know whether this is correct. Is there anything that can be done to make this work? Thanks in advance!
I can't tell what Army.h line 21 is because the one you posted doesn't have that many lines. The only thing I can see that's not declared in that header is UnitList. Is it properly forward-declared or have a header include you aren't showing us? Do generalbase.h or Warrior.h include Army.h? If so, that would cause the seemingly circular includes. Try having it not do the include but forward declare Army instead.
3,102,008
3,102,061
Choose a number from an array
Is there any function or method to randomly choose a number (or 2 numbers or more) from an array?
Depending on how many numbers you need, the size of the array, and whether the array needs to retain its order, you could use std::random_shuffle to reorder the array and then just loop from 0..n-1 to get n random numbers. This works better when you want to get a lot of numbers relative to the length of the array. If that doesn't seem appropriate, you can just use srand() and rand() % n as an index into the array to get a pretty good approximation of a random selection.
3,102,058
3,110,475
Is MsiOpenProduct the correct way to read properties from an installed product?
Given an MSI product code I want to get the upgrade code (among other properties) from an already installed product. I have tried this by calling the MsiOpenProduct method, followed by MsiGetProductProperty(). An (abbreviated) example looks like this: MSIHANDLE handle = NULL; MsiOpenProduct(strProductCode,&handle); CString strUpgradeCode; MsiGetProductProperty(handle,_T("UpgradeCode"), strUpgradeCode.GetBuffer(GUID_LENGTH), &dwSize); strUpgradeCode.ReleaseBuffer(); MsiCloseHandle(handle); This gets me the desired value, and judging from the MSDN documentation this seems like a valid way to do this: The MsiOpenProduct function opens a product for use with the functions that access the product database. The MsiCloseHandle function must be called with the handle when the handle is no longer needed. However the call to MsiOpenProduct() pops up the "Windows installer is preparing the installation..." dialog. The call to MsiCloseHandle() makes it disappear again. This left me wondering: What does the call to MsiOpenProduct() do under the hood? I do not want to trigger any actions, I just want to read properties. I don't mind the dialog popping up, as this is only for unit test code as long as this has no side effects. And as there are many unit tests that do this, it must still work when opening and closing handles in rapid succession. Although I stumbled over the MsiGetProductInfo method, there seems to be no way to get the upgrade code. Am I right? Is MsiOpenProduct the correct way to read properties like the upgrade code?
MsiOpenProduct should be fine. So long as you don't run any sequences or actions, it won't do anything. If you want to silence the dialog, you can, with careful use of either MsiSetInternalUI() or MsiSetExternalUI(). Another approach you can take, as long as the ProductCode and UpgradeCode are safely static (i.e. as long as they aren't changed by transforms), is to locate the database using MsiGetProductInfo() and call MsiOpenDatabase() on that. The difference is that MsiOpenProduct() (or similarly MsiOpenPackage()) applies the transforms that were used at installation time and prepares a session, whereas MsiOpenDatabase() does neither.
3,102,096
3,102,134
When do C++ POD types get zero-initialized?
Coming from a C background, I've always assumed the POD types (eg ints) were never automatically zero-initialized in C++, but it seems this was plain wrong! My understanding is that only 'naked' non-static POD values don't get zero-filled, as shown in the code snippet. Have I got it right, and are there any other important cases that I've missed? static int a; struct Foo { int a;}; void test() { int b; Foo f; int *c = new(int); std::vector<int> d(1); // At this point... // a is zero // f.a is zero // *c is zero // d[0] is zero // ... BUT ... b is undefined }
Assuming you haven't modified a before calling test(), a has a value of zero, because objects with static storage duration are zero-initialized when the program starts. d[0] has a value of zero, because the constructor invoked by std::vector<int> d(1) has a second parameter that takes a default argument; that second argument is copied into all of the elements of the vector being constructed. The default argument is T(), so your code is equivalent to: std::vector<int> d(1, int()); You are correct that b has an indeterminate value. f.a and *c both have indeterminate values as well. To value initialize them (which for POD types is the same as zero initialization), you can use: Foo f = Foo(); // You could also use Foo f((Foo())) int* c = new int(); // Note the parentheses
3,102,485
3,102,541
How to select groups of numbers with the same sum from an array
I have an array from ten (or more) numbers, for example: arr[1,2,3,4,5,6,7,8,9,10] I want a method to check if three numbers from this arrays have the same sum to another three numbers (I want to get all the possibilities!), for example: {1.8.10} {2.10.7} {3.7.9} {4.9.6} {5.6.8} {Sum of each set: 19} {1.6.10} {3.10.4} {5.4.8} {7.8.2} {9.2.6} {Sum of each set: 17} {6.3.5} {7.5.2} {8.2.4} {9.4.1} {10.1.3} {Sum of each set: 14} Update: That's another example to what exactly I want to do: alt text http://img208.imageshack.us/img208/1131/77603708.png
Create a multimap of sets. Set up a triple nested for loop and start adding numbers. Every sum becomes a key into the multimap; the set is the 3 numbers you used. Should only be a dozen or so lines of code. multimap<int, set<int> > sums; for(int i=0; i<ARRAY_SIZE; i++) { for(int j=i+1; j<ARRAY_SIZE; j++) { for(int k=j+1; k<ARRAY_SIZE; k++) { int localSum = arr[i] + arr[j] + arr[k]; set<int> thisSum; thisSum.insert(arr[i]); thisSum.insert(arr[j]); thisSum.insert(arr[k]); sums.insert(make_pair(localSum, thisSum)); } } } and just iterate through the sums multimap and display your sets as you want. Note that it has to be a multimap rather than a plain map, since several triples can share the same sum (a map would silently drop the duplicates). The idea will work, though it's at least cubic in complexity.
3,102,580
3,102,598
Using a class type as map key_value
Could a variable of Data be used as a map key? struct Data { Data(int X, int Y) {x=X; y=Y;} int x; int y; } int main() { std::map<Data, int> map_; map_.insert(std::make_pair(Data(1,2), 0)); //error inserting }
Yes, but you either need to define operator< for the class type or use a custom comparison function for the std::map. There is an example of using a custom comparison function in the STL documentation.
3,102,805
3,102,829
What does boost::thread do if it fails to create the thread?
What does boost::thread do if it fails to create the thread? The winAPI returns a NULL (I guess posix does something similar) but as thread is an object how do I test to see if the thread was created?
According to the API, it throws a boost::thread_resource_error if the thread creation fails. I guess you cannot create an invalid boost::thread object: if something goes wrong, the constructor throws thus preventing the object creation.
3,102,888
3,102,930
Game Development: How to limit FPS?
I'm writing a game, and I saw the FPS algorithm doesn't work correctly (when he have to calculate more, he sleeps longer...) So, the question is very simple: how to calculate the sleeptime for having correct FPS? I know how long it took to update the game one frame in microseconds and of course the FPS I want to reach. I'm searching crazy for a simple example, but I can't find one.... The code may be in Java, C++ or pseudo....
The number of microseconds per frame is 1000000 / frames_per_second. If you know that you've spent elapsed_microseconds calculating, then the time that you need to sleep is: (1000000 / frames_per_second) - elapsed_microseconds
3,102,937
3,102,995
C++: why is the program reading the last blank from an input file as an element
My input file is: 2 5 <-- extra space at the end 4 <--extra space at the end int main(){ ifstream input("input.txt"); istream& in = input; string line1; while( getline(in,line1)){ istringstream number1(line1); while(number1.good()){ number1 >> temp1; cout<<temp1<<endl; } } input.close(); } The problem is with the extra space at the end of the line my output is: 2 5 5 4 4 which is not what i want.. but if i remove the extra space it would work: 2 5 4 why is this happening? and how can i fix it so that even with extra spaces it reads the correct input? Any help would be appreciated. Thanks!
The problem is with the while (number1.good()) loop. The number1 fail state will not be set until after the number1 >> temp1 extraction fails, but you don't test the fail state until the next time the loop condition is tested, which is after you print out the result of that extraction. You should change the inner loop to: while (number1 >> temp1) { std::cout << temp1 << std::endl; } This will extract the value, then test whether the extraction succeeded and will break out of the loop if the extraction fails, which is the behavior you want.
3,102,991
3,103,078
Menu disappears after running program later
Okay, so at first when i run my win32 program the menu works fine, however when i open the application later the next day or such the menu is gone but the code never changed. im making the menu with a .rc file. is this the recommended way? resource.rc #include "resource.h" IDR_MYMENU MENU BEGIN POPUP "&File" BEGIN MENUITEM "E&xit", ID_FILE_EXIT END END resource.h #define IDR_MYMENU 101 #define IDI_MYICON 201 #define ID_FILE_EXIT 9001 #define ID_STUFF_GO 9002 main.cpp #include "resource.h" wincl.lpszMenuName = MAKEINTRESOURCE(IDR_MYMENU); also i noticed that MSVC++ has a very very complex windows templates, vs bloodshed. should i maybe give up on bloodshed and use MSVC++? I am use to blooshed, but i want to have an edge when i finally learn this stuff? HWND hwnd; /* This is the handle for our window */ MSG messages; /* Here messages to the application are saved */ WNDCLASSEX wincl; /* Data structure for the windowclass */ /* The Window structure */ wincl.hInstance = hThisInstance; wincl.lpszClassName = szClassName; wincl.lpfnWndProc = WindowProcedure; /* This function is called by windows */ wincl.style = CS_DBLCLKS; /* Catch double-clicks */ wincl.cbSize = sizeof (WNDCLASSEX); /* Use default icon and mouse-pointer */ wincl.hIcon = LoadIcon (GetModuleHandle(NULL), MAKEINTRESOURCE(IDI_MYICON)); wincl.hIconSm = (HICON) LoadImage(GetModuleHandle(NULL), MAKEINTRESOURCE(IDI_MYICON), IMAGE_ICON, 16, 16 ,0); wincl.hCursor = LoadCursor (NULL, IDC_CROSS); wincl.lpszMenuName = MAKEINTRESOURCE(IDR_MYMENU); /* No menu */ wincl.cbClsExtra = 0; /* No extra bytes after the window class */ wincl.cbWndExtra = 0; /* structure or the window instance */ /* Use Windows's default color as the background of the window */ wincl.hbrBackground = (HBRUSH) GetStockObject(BLACK_BRUSH); /* Register the window class, and if it fails quit the program */ if (!RegisterClassEx (&wincl)) return 0; /* The class is registered, let's create the program*/ hwnd = CreateWindowEx ( 0, /* Extended 
possibilites for variation */ szClassName, /* Classname */ "Windows App", /* Title Text */ WS_OVERLAPPEDWINDOW, /* default window */ CW_USEDEFAULT, /* Windows decides the position */ CW_USEDEFAULT, /* where the window ends up on the screen */ 544, /* The programs width */ 375, /* and height in pixels */ HWND_DESKTOP, /* The window is a child-window to desktop */ NULL, /* No menu */ hThisInstance, /* Program Instance handler */ NULL /* No Window Creation data */ );
The content of your RC file looks fine, so I don't think the problem is there. I doubt the problem is in Bloodshed either -- while I'm not particularly fond of Dev-C++, I doubt it's causing anything like this. That leaves your code for the application as the most likely culprit for causing the problem. Unfortunately, you haven't shown enough of that to even guess at likely sources of the problem.
3,103,007
3,103,049
How to convert char* to unsigned short in C++
I have a char* name which is a string representation of the short I want, such as "15" and need to output this as unsigned short unitId to a binary file. This cast must also be cross-platform compatible. Is this the correct cast: unitId = unsigned short(temp); Please note that I am at an beginner level in understanding binary.
I assume that your char* name contains a string representation of the short that you want, i.e. "15". Do not cast a char* directly to a non-pointer type. Casts in C don't actually change the data at all (with a few exceptions)--they just inform the compiler that you want to treat one type into another type. If you cast a char* to an unsigned short, you'll be taking the value of the pointer (which has nothing to do with the contents), chopping off everything that doesn't fit into a short, and then throwing away the rest. This is absolutely not what you want. Instead use the std::strtoul function, which parses a string and gives you back the equivalent number: unsigned short number = (unsigned short) strtoul(name, NULL, 0); (You still need to use a cast, because strtoul returns an unsigned long. This cast is between two different integer types, however, and so is valid. The worst that can happen is that the number inside name is too big to fit into a short--a situation that you can check for elsewhere.)
3,103,153
3,103,314
Overloaded virtual function call resolution
Please consider the following code: class Abase{}; class A1:public Abase{}; class A2:public A1{}; //etc class Bbase{ public: virtual void f(Abase* a); virtual void f(A1* a); virtual void f(A2* a); }; class B1:public Bbase{ public: void f(A1* a); }; class B2:public Bbase{ public: void f(A2* a); }; int main(){ A1* a1=new A1(); A2* a2=new A2(); Bbase* b1=new B1(); Bbase* b2=new B2(); b1->f(a1); // calls B1::f(A1*), ok b2->f(a2); // calls B2::f(A2*), ok b2->f(a1); // calls Bbase::f(A1*), ok b1->f(a2); // calls Bbase::f(A2*), no- want B1::f(A1*)! } I'm interested to know why C++ chooses to resolve the function call on the last line by upcasting the this pointer of the object to the base class, rather than upcasting the argument of f()? Is there any way that I can get the behaviour I want?
The choice of which version of f to call is made by looking at the compile-time type of the parameter. The run-time type isn't considered for this name resolution. Since b1 is of type Bbase*, all of Bbase's members are considered; the one that takes an A2* is the best match, so that's the one that gets called.
3,103,164
3,103,194
Return dynamic array in C++
I need to return a unsigned int* from a function. The code below will compile but will crash at run time on a Windows 64 bit machine. I know I am making a silly mistake somewhere and can someone point it out for me. :p. I also have declared the function in my header, so I know its not that error. Please note I have censored the variable names and numbers because the problem in which this function resides is not for public release yet. Function: unsigned int* convertTime(unsigned int inputInteger, unsigned short inputFrac) { unsigned int* output = new unsigned int[2]; double messageTimeFraction = double(inputFrac) * 20e-6; output[1] = unsigned int(inputInteger + 2209032000); output[2] = unsigned int(messageTimeFraction * 2e32); return output; // Seconds } Implementation: unsigned int* timeStamp; timeStamp = convertTime(inputInteger,inputFrac);
Well, for starters you have output[1] and output[2]. Arrays are zero-indexed in c/c++, so these should be: output[0] and output[1]. But, since you're asking about c++... I urge you to use std::vector or std::pair. (Of course, for readability's sake, you might just want to use a trivial struct with useful field names)
3,103,445
3,103,483
How do I use C functions that have "this" arguments in a C++ program?
I want to copy and paste a C function I found in another program into my C++ program. One of the function arguments uses the "this" pointer. void cfunction( FILE *outfilefd, const VARTYPEDEFINED this); The C++ compiler errors here on the function prototype: error C2143: syntax error : missing ')' before 'this' How do I make this C++ usable? Thanks. EDIT ( per Betamoo comment ) void cfunction( FILE *outfilefd, const VARTYPEDEFINED this); { UINT8 temp = 0; temp = (UINT8)( this & 0x000000FF ); if ( ( temp > LIMIT ) ) ...... else { ...... } }
You have two choices. You can leave the code as C, and just create a C++ header to let you call that C code from C++: #ifdef __cplusplus extern "C" { #endif void cfunction(FILE *, const VAR); #ifdef __cplusplus } #endif Or you can rewrite that function enough to get it to compile as C++ (probably just rename its this parameter to something else like thisvar).
3,103,560
3,231,839
Convert C++ Code to Assembly for SPIM
I'm having a lot of trouble getting my compiled assembly file working on SPIM. Basically I want to write a c++ file, and then generate a .s file that I can open in SPIM without error. This means that the assembly must be in MIPS32 ABI using MIPS I instructions (some MIPS II). How do I do this? Right now I'm using g++ but I'm having major errors when I try ot run the file in SPIM. I'm working on MAC OSx 10.6.3 and I'm compiling remotely on a linux machine. Is there a special compiler I can use that will make this easy for me?
Give the compiler the -S option; it will generate the assembly code. Then you will have to edit the code so that SPIM accepts it. You'll also want -fno-delayed-branch, and enabling light optimization like -O1 or -Og makes the code more readable. Usually SPIM is configured to simulate a MIPS without branch-delay slots, but gcc will assume that the instruction after a branch is run even if it's taken. -fno-delayed-branch gets gcc to fill any branch-delay slots with nop. Another useful option is -fverbose-asm to have gcc add comments with the C variable name of each operand. You'll want to avoid using C++ libraries like std::vector that compile to a lot of extra code vs. local arrays, especially without optimization, unless you really need those features.
3,103,568
3,103,586
How to write portable code in c++?
What are the things that I should keep in mind to write portable code? Since I'm a c++ beginner, I want to practice it since beginning. Thanks.
Learn to use the standard library; read books (eg. this one); when you're experienced, learn to use Boost.
3,103,691
3,103,701
Hot-pluggable C++ library possible?
I'm looking to "hot plug" a library of C++ code. I'm interested in having this technique work cross platform between Linux/Mac/Windows. Basically I want to have the main program #include "StateMachine.h" which defines all callable interfaces. Then at runtime and DURING EXECUTION load and unload StateMachineLibrary.a to have my application use different state machines. One thought I have is maybe do something like write a wrapper that loads this compiled code in to my own malloc'd memory and creates function pointers in to that memory? Motivation is that the State Machine portions of my project will be frequently changing and need recompilation, also would allow the main app to continue running with different State Machines being loaded. I'm hoping to use a "hot-pluggable" library INSTEAD OF something like Lua scripts due to some concerns, so considering that as an alternative has already been explored.
Define a base interface and derive your implementations from it. Put these into dynamic libraries (DLL/SO) and load them at runtime. The library will just need a static factory function to deliver an instance of its implementation to you. // shared class Base { public: virtual void DoTheWork() = 0; }; // within the DLL/SO class Hotplugged : public Base { public: virtual void DoTheWork() { std::cout<<"I got hotplugged!"<<std::endl; } }; extern "C" Base* CreateIt() { return new Hotplugged(); } // within the app (sample for Windows/MSVC) ... ::LoadLibrary("mydll"); Base* (*fpCreateIt)() = (Base*(*)())::GetProcAddress(... "CreateIt"); // call the function pointer to obtain a Base instance Base* mybase = fpCreateIt(); // prints the above text mybase->DoTheWork(); delete mybase; Note: this is just a sketch. It has some flaws, for example I'm ignoring ownership semantics, and no actual checks are done if the DLL we just loaded is binary compatible with us. Think about it a bit, or look for existing implementations (some are mentioned in other responses).
3,103,934
3,104,006
issues with C++ executable
I have a C++ generated executable in Solaris 8. The problem I have is that this executable uses a command line parameter to run. For example: $ myprog 123412341234AB This is a valid 14 digit hexdecimal value. However, if for some reason there are symbols like ^ > < >> << & etc., then the program does not behave properly per se. I am not talking core dumps per se but for example one of the checks I do is via isxdigit. Apparently it is not good enough to catch something like 1234123412341^ or 12341234(12341, so I am just trying to see if I can detect all these symbols in an effort to just exit properly. I mean, some of these symbols have special meaning in Unix and I guess that is why the program does not understand how to handle it. Do you have any thoughts on how to address this? Do I just try to find all these symbols and the moment I detect them in the command, I just exit out with an error message? How would I go about doing this? I am using std::string. So maybe a list like !@#$%^&*()<><<>> etc., where I can detect and get out. I am not sure if there is an easier way to do this so Unix does not think I am giving it a system command when in fact it is just an input to a program, albeit it just happens to be a wrong/invalid input.
You can't fix this by modifying your program -- those special characters are being interpreted by the shell before your code ever sees them. You can prevent this by single-quoting the command-line argument: myprog 'some_string_<with_special&>!chars' or by escaping the special characters (by preceding each one with a backslash): myprog some_string_\<with_special\&\>\!chars
3,103,938
3,104,078
p/invoke a 32-bit dll from a C# program running on an x64 machine
I have a C# program that I compile with all of the default settings on an x64 computer. I want to p/invoke a DLL which I know is a 32-bit (unmanaged) C++ DLL. I can get this to work when my C# program runs on a 32-bit machine, but not a 64-bit machine. How can I specify in the DllImport call that I am calling into a 32-bit dll? Example of what I have now: [DllImport("test32bitdll.dll", SetLastError=true)] public static extern void MyFunc(); I do not have the source code of the test32bitdll.dll file.
Running 32-bit unmanaged code in a 64-bit process is not possible. Or the reverse. The options you have available: Force the EXE to run in x86 mode with the Target Platform setting in the Build tab Recompile the C++ DLL in x64 mode. That's often possible without too many hassles, provided you have the source code and not a dependency on some 3rd party code that is only available in 32-bits Run the C++ DLL in a surrogate process that is forced to run in 32-bit mode. You'll need to use an interprocess communication mechanism to get your 64-bit process to talk to the 32-bit surrogate. Named pipes, sockets, .NET Remoting, WCF are typical choices in .NET. The 3rd option can give you the most bang for your buck but it can be slow if there's a lot of data exchanged and tends to be fragile. It can be difficult to deal with failure of the surrogate process.
3,103,968
3,104,119
#include being ignored
So, I've got this code I'm trying to update. It was written for visual studio 6, and I'm trying to get it to compile in visual studio 2010. In stdafx.h, it includes afx.h and afxwin.h and a few other things necessary for the program to work. Notably, there's usage of CString in other header files. At the top of the includes in stdafx.h, I added in a #pragma message, to verify that it was being compiled first. There's one at the top of the header file which throws the error, as well. I can see from the compiler output that stdafx.h was being compiled first, so that's good. However, there was the error. (CString wasn't being recognized as a type.) So, I decided to make sure that it got through all of the includes. So, I put another #pragma message after #include and that message is not printed. Does that mean is not actually being included?
Promoting my comment to an answer. In VS 6 times CString was a class; it was changed afterwards to be a template. Maybe it has something to do with that? The problem had to do with using typedef with CString. Post VS 6, that's not possible. I just changed the references by hand, and it compiles now.
3,104,282
3,104,322
Question About Which Design Pattern To Use
Given two classes: First class performs AES encryption / decryption and returns encrypted / decrypted data given a certain key and chain. Second class gathers data to be encrypted and then passes it to the encryption / decryption class. Is it proper design to call directly the encryption class from the class that gathers the data or should there be an object between the two classes which abstracts the process further? Should I have one abstract class instance and one encryption instance to handle all of these types of requests during the program's lifetime?
Personally, I would create some kind of abstract interface representing an encryption algorithm, with a factory function taking the key and producing a concrete instance of an encryption algorithm with a key installed. So the 'second class' here would call directly to the 'first class', but there would be a 'third class' in charge of instantiating the class. Something like: /* Core encryption framework definitions */ class CipherInstance { // ... public: virtual void encrypt(void *, size_t) = 0; virtual void decrypt(void *, size_t) = 0; virtual size_t blocksize() const = 0; // ... virtual ~CipherInstance() { } }; class Cipher { public: virtual CipherInstance *instantiate(const void *key, size_t keylen) = 0; virtual size_t minkeysize() const = 0; virtual size_t maxkeysize() const = 0; }; /* AES implementation */ class privateAESImpl : public Cipher { /* ... */ }; // This is the only public definition in the AES implementation. The privateAESImpl // class is a stateless singleton, and this is the only instance. Doing this instead // of static functions allows AES to be passed to a function taking a Cipher * extern privateAESImpl AES; // Much later: CipherInstance *aes = AES.instantiate(key, keylen); aes->encrypt(data, datalen); // or, to be more general: void frob(Cipher *cipher, void *key, size_t keylen, void *data, size_t datalen) { CipherInstance *inst = cipher->instantiate(key, keylen); inst->encrypt(data, datalen); } C#'s System.Security.Cryptography libraries use a similar approach - see, eg, System.Security.Cryptography.SymmetricAlgorithm. Note however that since C# supports introspection, there's no need for a factory class - instead there's simply a static method taking a name. With C++ a full factory class is needed.
3,104,356
3,106,427
In Visual Studio 2010 why is the .NETFramework,Version=v4.0.AssemblyAttributes.cpp file created, and can I disable this?
I've recently upgraded to Visual Studio 2010. Now when I build projects I get a line that reads: 1> .NETFramework,Version=v4.0.AssemblyAttributes.cpp I've learned that this is the result of the new build engine, msbuild.exe, but this file is actually auto-created and placed in my local temp directory (c:\Documents and Settings\me\Local Settings\Temp). Does anyone know why this file is created, and whether I can disable its creation? BTW, it doesn't seem to have anything useful in it, to my mind. See below: #using <mscorlib.dll> [assembly: System::Runtime::Versioning::TargetFrameworkAttribute(L".NETFramework,Version=v4.0", FrameworkDisplayName=L".NET Framework 4")]; And occasionally, as reported http://social.msdn.microsoft.com/Forums/en-US/vcgeneral/thread/15d65667-ac47-4234-9285-32a2cb397e32, it causes problems. So any information on this file, and how I can avoid its auto-creation would be much appreciated. Thank you!
This is common to all languages (C#, VB, and F# have something similar too). One way you can disable it is to override the GenerateTargetFrameworkMonikerAttribute target thusly: <!-- somewhere after the Import of Microsoft.somelanguage.targets --> <Target Name="GenerateTargetFrameworkMonikerAttribute" /> in your project file.
3,104,389
3,175,564
Can I bind an existing method to a LLVM Function* and use it from JIT-compiled code?
I'm toying around with the LLVM C++ API. I'd like to JIT compile code and run it. However, I need to call a C++ method from said JIT-compiled code. Normally, LLVM treats method calls as function calls with the object pointer passed as the first argument, so calling shouldn't be a problem. The real problem is to get that function into LLVM. As far as I can see, it's possible to use external linkage for functions and get it by its name. Problem is, since it's a C++ method, its name is going to be mangled, so I don't think it's a good idea to go that way. Making the FunctionType object is easy enough. But from there, how can I inform LLVM of my method and get a Function object for it?
The dudes from the LLVM mailing list were helpful enough to provide a better solution. They didn't say how to get the pointer from the method to the function, but I've already figured out this part so it's okay. EDIT A clean way to do this is simply to wrap your method into a function: int Foo_Bar(Foo* foo) { return foo->bar(); } Then use Foo_Bar's address instead of trying to get Foo::bar's. Use llvm::ExecutionEngine::addGlobalMapping to add the mapping as shown below. As usual, the simplest solution has some interesting benefits. For instance, it works with virtual functions without a hiccup. (But it's so much less entertaining. The rest of the answer is kept for historical purposes, mainly because I had a lot of fun poking at the internals of my C++ runtime. Also note that it's non-portable.) You'll need something along these lines to figure the address of a method (be warned, that's a dirty hack that probably will only be compatible with the Itanium ABI): template<typename T> const void* void_cast(const T& object) { union Retyper { const T object; void* pointer; Retyper(T obj) : object(obj) { } }; return Retyper(object).pointer; } template<typename T, typename M> const void* getMethodPointer(const T* object, M method) // will work for virtual methods { union MethodEntry { intptr_t offset; void* function; }; const MethodEntry* entry = static_cast<const MethodEntry*>(void_cast(&method)); if (entry->offset % sizeof(intptr_t) == 0) // looks like that's how the runtime guesses virtual from static return getMethodPointer(method); const void* const* const vtable = *reinterpret_cast<const void* const* const* const>(object); return vtable[(entry->offset - 1) / sizeof(void*)]; } template<typename M> const void* getMethodPointer(M method) // will only work with non-virtual methods { union MethodEntry { intptr_t offset; void* function; }; return static_cast<const MethodEntry*>(void_cast(&method))->function; } Then use llvm::ExecutionEngine::addGlobalMapping to map a 
function to the address you've gotten. To call it, pass it your object as the first parameter, and the rest as usual. Here's a quick example. class Foo { void Bar(); virtual void Baz(); }; class FooFoo : public Foo { virtual void Baz(); }; Foo* foo = new FooFoo; const void* barMethodPointer = getMethodPointer(&Foo::Bar); const void* bazMethodPointer = getMethodPointer(foo, &Foo::Baz); // will get FooFoo::Baz llvm::ExecutionEngine* engine = llvm::EngineBuilder(module).Create(); llvm::Function* bar = llvm::Function::Create(/* function type */, Function::ExternalLinkage, "foo", module); llvm::Function* baz = llvm::Function::Create(/* function type */, Function::ExternalLinkage, "baz", module); engine->addGlobalMapping(bar, const_cast<void*>(barMethodPointer)); // LLVM always takes non-const pointers engine->addGlobalMapping(baz, const_cast<void*>(bazMethodPointer));
3,104,467
3,105,071
Type conditionals in C++ templates?
I have a method in C# as follows (which wraps a number across a range, say 0 to 360... if you pass 0-359 you get the same value, if you pass 360 you get 0, 361 gets 1, etc.): /// <summary> /// Wraps the value across the specified boundary range. /// /// If the value is in the range <paramref name="min"/> (inclusive) to <paramref name="max"/> (exclusive), /// <paramref name="value"/> will be returned. If <paramref name="value"/> is equal to <paramref name="max"/>, /// <paramref name="min"/> will be returned. The method essentially creates a loop between <paramref name="min"/> /// and <paramref name="max"/>. /// </summary> /// <param name="value">The value to wrap.</param> /// <param name="min">The minimum value of the boundary range, inclusive.</param> /// <param name="max">The maximum value of the boundary range, exclusive.</param> /// <returns>The value wrapped across the specified range.</returns> public static T Wrap<T>(T value, T min, T max) where T : IComparable<T> { // If it's positive or negative infinity, we just return the minimum, which is the "origin" bool infinityDouble = typeof(T) == typeof(double) && (double.IsPositiveInfinity(Convert.ToDouble(value)) || double.IsNegativeInfinity(Convert.ToDouble(value))); bool infinityFloat = typeof(T) == typeof(float) && (float.IsPositiveInfinity(Convert.ToSingle(value)) || float.IsNegativeInfinity(Convert.ToSingle(value))); if (infinityDouble || infinityFloat) { return min; } // If the value is between the origin (inclusive) and the maximum value (exclusive), just return the value if (value.CompareTo(min) >= 0 && value.CompareTo(max) < 0) { return value; } // The range of the wrapping function var range = (dynamic)max - (dynamic)min; return ((((value % range) + range) - min) % range) + min; } I also needed this method in C++, which I defined as follows: /*! Wraps the value across the specified boundary range. If the value is in the range \a min (inclusive) to \a max (exclusive), \a value will be returned. 
If \a value is equal to \a max, \a min will be returned. The method essentially creates a loop between \a min and \a max. \param value The value to wrap. \param min The minimum value of the boundary range, inclusive. \param max The maximum value of the boundary range, exclusive. \return The value wrapped across the specified range. */ template <typename T> const T& MathHelper::wrap(const T &value, const T &min, const T &max) { // If it's positive or negative infinity, we just return the minimum, which is the "origin" bool infinityDouble = value == std::numeric_limits<double>::infinity() || value == -std::numeric_limits<double>::infinity(); bool infinityFloat = value == std::numeric_limits<float>::infinity() || value == -std::numeric_limits<float>::infinity(); if (infinityDouble || infinityFloat) { return min; } // If the value is between the origin (inclusive) and the maximum value (exclusive), just return the value if (value >= min && value < max) { return value; } // The range of the wrapping function T range = max - min; return ((((value % range) + range) - min) % range) + min; } Now my question is: am I checking for infinity correctly in the C++ version? I can't see any way to say "if double, do these checks, if float, do these checks". If it's not the type I want, will it just return false? Also, why is the % operator not defined for float and double? I guess I'll have to implement the modulo operator myself. The method is pretty much intended for numeric types - byte, short, int, long, float, double.
With the facilities provided by numeric_limits, you don't really need to use any complex specializations or anything like that for the infinity check. template <typename T> const T& MathHelper::wrap(const T &value, const T &min, const T &max) { bool isInfinity = std::numeric_limits<T>::has_infinity && (std::abs(value) == std::numeric_limits<T>::infinity()); //the rest } (Note has_infinity is a static data member, not a function, so no parentheses.) Your final step, involving operator%, will be more complicated. You will need to provide a custom mod function that is overloaded to pass the floating-point types into std::fmod (the floating-point analog of %; std::modf does something different, splitting a value into integer and fractional parts) instead of using operator%. You might be able to use type traits [either via boost or TR1] to minimize the repetitive aspects of this, although I'm not sure what the most elegant method of doing so would be. Perhaps something along the lines of: template<typename T> typename std::enable_if<std::is_floating_point<T>::value, T>::type mod(T, T) { //use std::fmod } template<typename T> typename std::enable_if<std::is_integral<T>::value, T>::type mod(T, T) { //use % }
3,104,509
3,104,797
Getting input if the window is not active (Windows)
Short version: How can I receive input messages in Windows with C++/C when the window is not active? Background information: I'm currently working on an Input System that should not depend on any window, so it can e.g. be used in the console as well. My idea is to create an invisible window only receiving messages, which is possible using HWND_MESSAGE as hWndParent. It only receives input messages when it's active though, and I don't want this. It should always receive input (unless the application requests it no longer does so, e.g. because it lost focus). I know this is possible somehow, many applications support global shortcuts (e.g. media players (playback control) or instant messengers (opening the contact list)), I just don't know how. Do you know?
Options: RegisterHotKey if you need to register just one or a few hotkeys SetWindowsHookEx with WH_KEYBOARD / WH_KEYBOARD_LL. Use when you need to filter many or all keyboard events. However, the hook code needs to be implemented in a DLL (which is loaded into other processes). You need separate 32 bit and 64 bit versions of the DLL
3,104,543
3,104,591
C++ allocator<X>::deallocate(NULL,1) allowed?
Both free(NULL) and ::operator delete(NULL) are allowed. Does the allocator concept (e.g. std::allocator also allow deallocate(NULL,1), or is it required to put your own guard around it?
You'll need to add your own check. According to §20.4.1.1/8, deallocate requires: p shall be a pointer value obtained from allocate(). n shall equal the value passed as the first argument to the invocation of allocate which returned p. allocate throws an exception when storage can't be given (§20.4.1.1/7). In other words, allocate never returns 0, and therefore deallocate should never get a 0. Passing a 0 would lead to undefined behavior.
3,104,554
3,104,643
inline functions in c++
here is a small question about inline functions in c++. At what stage of the compilation in C++ are the inline functions actually inlined at the call? how does that basically work. lets say if the compiler has decided that a particualr function has to be inline after the programmer has requested with an inline keyword in front of the function ,when does the compiler does that for the programmer .i mean at what stage of the compilation. is it at the preprocessing stage like in c macros are expanded?
It will vary by compiler. And some stages in some compilers will have no corresponding stages in other compilers. So your question doesn't really have a definite answer. But generally it's done after the parse tree for the function is created, but before code is actually generated or many optimizations are done. This is the optimal place to do it because you want the maximum amount of information available for the optimizer to work with. Doing it like a preprocessor macro expansion would generally be too early. The compiler doesn't then have enough information to do the appropriate type checking, and it's also easier to make mistakes that cause side effects to happen more than once and so on. And GMan provided an excellent Wikipedia link in a comment that goes into much more detail about the function inlining process than I do here. My answer is generally true, but there is a LOT of variation, even more than I thought there was.
3,104,772
3,104,827
Should I use the C or C++ implementation for returning a maximum size of a variable type?
From a standards standpoint, should I use the following from the C++ <limits> header? UCHAR_MAX which is the c implementation or std::numeric_limits<unsigned char>::max() which is the C++ implementation. The result is equivalent between the two versions but should I choose an implementation based on some standard or on readability and portability in this case. Note this implementation must be cross-platform compatible. I am writing C++ code.
If you want the code to be able to compile as C, then you pretty much need to use <limits.h>. If you're writing C++, it's probably better to use the C++ <limits> header instead. The latter lets you write code that will work in templates that can't really be duplicated with the C header: template <class T> class mytemplate { T x; void somefunc() { x = std::numeric_limits<T>::max(); } // or whatever... };
3,104,838
3,105,167
How to detect client connection to a named pipe server using overlapped I/O?
I was studying the MSDN examples of using named pipes: Named pipe server using overlapped I/O Named pipe client The server easily detects when the client is disconnected and creates a instance of a named pipe. But I cannot figure out how the server knows that a client is connected to a pipe before any data from client is sent. Can server detect a connceted client before client sends any data? If server calls DisconnectNamedPipe before client disconnects itself first, will this disconnect the client as well? Can server disconnect a client from a pipe without negotiating it with the client?
Not sure I understand the hang-up. The server calls ConnectNamedPipe to wait for a client connection. No data needs to be sent. Nor can it be sent, you cannot issue a ReadFile until a client is connected. Note that the SDK sample uses this as well. If the server disconnects ungracefully (without notifying the client with some kind of message so it can close its end of the pipe) then the client will get an error, ERROR_PIPE_NOTCONNECTED (I think). There's little reason to rely on that for a normal shutdown, you need to do something reasonable when the pipe server process crashed and burned unexpectedly. Beware that pipes are tricky to get right due to their asynchronous nature. Getting errors that are not actually problems is common and you'll need to deal with it. My pipe code deals with these errors: ConnectNamedPipe: ERROR_PIPE_CONNECTED on connection race, ignore FlushFileBuffers: race on pipe closure, ignore all errors WaitNamedPipe: ERROR_FILE_NOT_FOUND if the timeout expired, translate to WAIT_TIMEOUT CreateFile: ERROR_PIPE_BUSY if another client managed to grab the pipe first, repeat
3,104,926
3,106,234
change value inside multimap
I have a multimap<key1,pair<key2, value2>> I want to change value2 in this multimap. typedef std::pair<int, int> comp_buf_pair; //pair<comp_t, dij> typedef std::pair<int, comp_buf_pair> node_buf_pair; typedef std::multimap<int, comp_buf_pair> buf_map; //key=PE, value = pair<comp_t, dij> typedef buf_map::iterator It_buf; buf_map bufsz_map; bufsz_map.insert(node_buf_pair(target(*ei,g), comp_buf_pair(comp_t[target(*ei,g)], dij))); for(It_buf it = bufsz_map.equal_range(*u_iter).first; it!= bufsz_map.equal_range(*u_iter).second;) { comp_buf_pair it1 = it->second; if(it1.first < c_i) { std::cout << it1.first << " : " << it1.second << std::endl; old_c_i = it1.first; old_dij = it1.second; updated_dij = (c_i-old_c_i) + old_dij; // I would like to erase the it1.second value and add the updated_dij value in the bufsz_map } } What would be a better way to do it?
The statement it->second.second = updated_dij; will overwrite the old value. Since you don't change the key field or indicate what a new key would be, this seems sufficient. Is this what you are looking for or do you want to insert updated_dij under a new key and delete the old key?
3,104,993
3,105,028
How to hide a function based on a template parameter
I'm using Visual Studio 2008. I have this class: template <bool T1> class Foo { public: void doSomething() {} Foo<T1>& operator=(int a) { doSomething(); return *this; } }; But I want that the method operator= be hidden (by simply doing: return *this) if the template parameter T1 is false. I need that for instances of Foo, the lines: Foo<false> foo; foo = 20; //this should give a compilation error So I tried specializing the class definition: template<> class Foo<false> { private: Foo<false>& operator=(int a) { return *this; } }; However, by doing this I lose the method doSomething() on instances that are Foo<false>, which is not what I need. I've tried removing the operator= with boost::enable_if, like this: typename boost::enable_if< boost::mpl::bool_<T1> , Foo<T1> >::type& operator=(int a) { callProxy(); return *this; } But that makes me unable to have a class like the following: class Bar { public: Foo<true> assignable; Foo<false> unassignable; }; I've also tried putting both methods in Foo and removing them with boost::enable_if and boost::disable_if, like this: template <bool T1> class Foo { public: void doSomething() {} typename boost::enable_if< boost::mpl::bool_<T1> , Foo<T1> >::type& operator=(int a) { doSomething(); return *this; } private: typename boost::disable_if< boost::mpl::bool_<T1> , Foo<T1> >::type& operator=(int a) { return *this; } }; Which didn't work too (I expected that, but it was worth trying). So, is it possible to get the behaviour I need, and if it is, how could I do it?
You can statically assert the condition: Foo<T1>& operator=(int a) { BOOST_STATIC_ASSERT(T1); doSomething(); return *this; }
3,105,001
3,105,769
Why is there no reallocation functionality in C++ allocators?
In C the standard memory handling functions are malloc(), realloc() and free(). However, C++ stdlib allocators only parallel two of them: there is no reallocation function. Of course, it would not be possible to do exactly the same as realloc(), because simply copying memory is not appropriate for non-aggregate types. But would there be a problem with, say, this function: bool reallocate (pointer ptr, size_type num_now, size_type num_requested); where ptr is previously allocated with the same allocator for num_now objects; num_requested >= num_now; and semantics as follows: if allocator can expand given memory block at ptr from size for num_now objects to num_requested objects, it does so (leaving additional memory uninitialized) and returns true; else it does nothing and returns false. Granted, this is not very simple, but allocators, as I understand, are mostly meant for containers and containers' code is usually complicated already. Given such a function, std::vector, say, could grow as follows (pseudocode): if (allocator.reallocate (buffer, capacity, new_capacity)) capacity = new_capacity; // That's all we need to do else ... // Do the standard reallocation by using a different buffer, // copying data and freeing the current one Allocators that are incapable of changing memory size altogether could just implement such a function by unconditional return false;. Are there so few reallocation-capable allocator implementation that it wouldn't worth it to bother? Or are there some problems I overlooked?
From: http://www.sgi.com/tech/stl/alloc.html This is probably the most questionable design decision. It would have probably been a bit more useful to provide a version of reallocate that either changed the size of the existing object without copying or returned NULL. This would have made it directly useful for objects with copy constructors. It would also have avoided unnecessary copying in cases in which the original object had not been completely filled in. Unfortunately, this would have prohibited use of realloc from the C library. This in turn would have added complexity to many allocator implementations, and would have made interaction with memory-debugging tools more difficult. Thus we decided against this alternative.
3,105,105
3,105,216
What is the most appropriate way to initialize a bi-dimensional array of objects?
I need a dynamically allocated bi-dimensional array of objects. I declared a static pointer to pointer to object as follows: server_session **server_session::Pglvcl_sess; There's a method to populate dynamically the array of array of object: int server_session::createSession() { int ret = -1; // looks for the next available position on the array for (int i = 0; i <= MAX_SESSION; i++) { // if there's an available position, instantiates an object if (server_session::Pglvcl_sess[i] == NULL) { // instantiates object server_session::Pglvcl_sess[i] = new server_session(); ret = i; break; } } return ret; } Should I malloc the server_session::Pglvcl_sess variable? What is the proper way to initialize it? EDIT: The application executes this method at startup, is this OK? void server_session::initializeSessions() { server_session::Pglvcl_sess = ( server_session * * ) malloc(MAX_SESSION * sizeof(server_session)); for (int i = 0; i <= MAX_SESSION; i++) { if (server_session::Pglvcl_sess[i] != NULL) { server_session::Pglvcl_sess[i] = NULL; } } }
Any reason not to use std::vector<std::vector<server_session> > and let it do the dynamic allocation and management for you?
3,105,114
3,105,382
How to get printf style compile-time warnings or errors
I would like to write a routine like printf, not functionally-wise, but rather I'd like the routine to have the same time compile check characteristics as printf. For example if i have: { int i; std::string s; printf("%d %d",i); printf("%d",s.c_str()); } The compiler complains like so: 1 cc1plus: warnings being treated as errors 2 In function 'int main()': 3 Line 8: warning: too few arguments for format 4 Line 9: warning: format '%d' expects type 'int', but argument 2 has type 'const char*' code example Are printf and co special functions that the compiler treats differently or is there some trick to getting this to work on any user defined function? The specific compilers I'm interested in are gcc and msvc
Different compilers might implement this functionality differently. In GCC it is implemented through __attribute__ specifier with format attribute (read about it here). The reason why the compiler performs the checking is just that in the standard header files supplied with GCC the printf function is declared with __attribute__((format(printf, 1, 2))) In exactly the same way you can use format attribute to extend the same format-checking functionality to your own variadic functions that use the same format specifiers as printf. This all will only work if the parameter passing convention and the format specifiers you use are the same as the ones used by the standard printf and scanf functions. The checks are hardcoded into the compiler. If you are using a different convention for variadic argument passing, the compiler will not help you to check it.
3,105,265
3,105,456
Subtle syntax error in default parameter not caught by compiler
I started getting the error, "error C2059: syntax error : 'default argument'" for a line of code that declared a function with a string argument that was given a default parameter. This was obviously a bit frustrating, as the error message was not exactly enlightening (I know it's a 'default argument'!), and the exact declaration would work elsewhere. After shifting about the declaration a bit, I found its position in its containing class actually had an effect. Narrowing it down, I found that I was declaring a different function somewhat erroneously, by including a semicolon after one of its default parameters. The compiler seemed perfectly fine with that, which seemed a bit odd. I investigated a bit more, and came up with the following test case to try to figure out the essence of what was going on: enum TestEnum1 { TEST_ONE }; class TestClass { public: enum TestEnum2 { TEST_TWO, TEST_THREE, TEST_FOUR }; void Func1( int iParm = TEST_ONE; ); // additional semicolon here void Func2( std::string strParm = "" ); }; As the code above stands, Func2 will produce the compilation error I mentioned above. If I move Func2 above Func1, then everything compiles fine. If I switch the default parameter in Func1 to an explicit number or use an enum declared within TestClass, then I get an expected syntax error for that line. So essentially, the strange thing seems to be that if I set a default parameter's value to an enum not defined directly in the current class and am a little too semicolon-happy, the compiler will ignore the syntax error, until some other seemingly-unrelated thing finally causes the parser to die in a very inscrutable way. Am I just missing something completely? Is this expected behavior? I'm hesitant to go calling it a bug in the compiler, certainly, but this hardly seems correct. If it's just me misunderstanding something about the standard, then I'd like to know where I'm wrong.
Agreed with @tlayton. Having dabbled a bit in parsers myself, I can attest that generating good error messages for syntax errors that confuse the parser's sense of scope can be very hard to do. This particular case is however close to a defect. The irony is that in VS2010, the compiler still generates the same lousy error message but the IntelliSense parser actually catches it: 3 IntelliSense: expected a ')' c:\projects\cpptemp14\cpptemp14.cpp 20 36 cpptemp14 That's borked. You can report it at connect.microsoft.com. Let me know if you don't want to take the time, I'll report it (MVP duty).
3,105,381
3,105,548
how to call C++ dll function with int& and int* parameters from C#?
My C++ DLL have a function like this: void func1(int& count, int* pValue); this function will determine the "count", and put some values into the pValue int array which is the length of "count". how can I define my C# code? you can ignor the [DllImport ...] part. thanks,
By ref & isn't going to happen as far as I've worked out. MSDN has the answers you seek and don't forget to export your function from C++ as extern "C" According to MSDN: Default marshaling for arrays on the C# side I think you want something like the following public static extern void func1( out int count, [MarshalAs(UnmanagedType.LPArray, SizeParamIndex=0)] int[] values ); Where SizeParamIndex tells .net which argument will hold the size of the array to be marshaled.
3,105,476
3,105,477
In C++, does adding a friend to a class change its memory layout?
Also, does it matter where in the class you declare the friend ? Does it matter if you add a friend class or a friend function ?
No it doesn't. It's a purely compile-time thing: similar to access modifiers themselves. Despite the fact that you write the declaration inside the class, you don't really add a friend to a class. You'd basically declare something else as a friend of the class and simply allow it to access the class's private members, as if they were public.
3,105,739
3,105,902
Is there a way to Boost.Assign a ptr_vector?
Usually like this: #include <boost/assign/std/vector.hpp> vector<int> v; v += 1,2,3,4,5; Except for a: #include <boost/ptr_container/ptr_vector.hpp> boost::ptr_vector<int> v; If you need to know the reason; I'm using ptr_vector instead of vector only so I don't have to delete elements, but I need to initialize it using Boost.Assign as I want the ptr_vector to be const (can't use push_back() or pop_back() anywhere else in code.) Thanks in advance for you answers, it's possible I'm using the wrong container type?
Use Boost.Assigns ptr_list_of(): #include <boost/assign/ptr_list_of.hpp> // ... const boost::ptr_vector<int> pv = boost::assign::ptr_list_of<int>(1)(2)(3);
3,105,798
3,106,307
Why must the copy assignment operator return a reference/const reference?
In C++, the concept of returning reference from the copy assignment operator is unclear to me. Why can't the copy assignment operator return a copy of the new object? In addition, if I have class A, and the following: A a1(param); A a2 = a1; A a3; a3 = a2; //<--- this is the problematic line The operator= is defined as follows: A A::operator=(const A& a) { if (this == &a) { return *this; } param = a.param; return *this; }
Strictly speaking, the result of a copy assignment operator doesn't need to return a reference, though to mimic the default behavior the C++ compiler uses, it should return a non-const reference to the object that is assigned to (an implicitly generated copy assignment operator will return a non-const reference - C++03: 12.8/10). I've seen a fair bit of code that returns void from copy assignment overloads, and I can't recall when that caused a serious problem. Returning void will prevent users from 'assignment chaining' (a = b = c;), and will prevent using the result of an assignment in a test expression, for example. While that kind of code is by no means unheard of, I also don't think it's particularly common - especially for non-primitive types (unless the interface for a class intends for these kinds of tests, such as for iostreams). I'm not recommending that you do this, just pointing out that it's permitted and that it doesn't seem to cause a whole lot of problems. These other SO questions are related (probably not quite dupes) that have information/opinions that might be of interest to you. Has anyone found the need to declare the return parameter of a copy assignment operator const? Overloading assignment operator in C++
3,105,857
3,105,890
Visual Studio 2010 indentation after for loop
Why am I getting this behavior right after the if block? Am I missing something? for (;;) if (/*...*/) { // statements } // statements indented to match the if indentation instead of the for loop;
About the only way to keep VS doing indentation reasonably is to always use a block to enclose the statement controlled by a for, if, while, etc. In your case that would mean: for (;;) { if (/* ... */) { // ... } } // further statements here indented to match for loop.
3,105,966
3,106,043
Set background color of a control? (WinAPI)
How can I set the backcolor of a control (equivilant of control.backcolor in .Net). I tried setBKColor with no luck. ex: TabHwnd = createWindowEx(NULL,WC_TAB.... then how could I set the back color of TabHwnd? Thanks
Windows will generate a message when it's painting the background of a control, and it is up to your program to respond to the message appropriately. WM_CTLCOLORBTN WM_CTLCOLOREDIT WM_CTLCOLORDLG WM_CTLCOLORLISTBOX WM_CTLCOLORSCROLLBAR WM_CTLCOLORSTATIC
3,106,015
3,106,023
How do I increment an enum in VS C++ 6.0?
I copy and pasted some code that increments an enum: myenum++; This code worked fine as it was compiled in VS.NET C++ 2003 I am now developing in VS 6.0 and get the error: error C2676: binary '++' : 'enum ID' does not define this operator or a conversion to a type acceptable to the predefined operator How can I get this to behave the same in 6.0?
Please try to convert to int, add one (+1) and convert back to the enum.
3,106,026
3,106,060
Is there a stoll()/stroll() (string to long long) alternative in Visual Studio 2008
Is there an alternative, either built into windows or apache license compatible, to stoll() for Visual Studio 2008. Even installing the windows 7 platform SDK does not add stoll() to the string header. On unix the same function is be called strtoll().
You can use _strtoi64. It's declared in stdlib.h. long long num = _strtoi64(str, NULL, 10); Similarly to strtoll, you can pass a char ** endptr as a second parameter. If so, *endptr will be set to the first invalid character (which may be the normal null-terminator).
3,106,038
3,106,068
Polymorphism: Accessing inherited class variable
I have a class that is inherited from an abstract base class. class CStateBase { friend class CApplication; friend class CGraphics; virtual int Update() =0; }; class CStateTitle: private CStateBase { friend class CApplication; friend class CGraphics; CApplication *f_App; int m_iR; int Update(); CStateTitle(CApplication *App); ~CStateTitle(); }; In a method of another class, CStateTitle is dynamically allocated into a CStateBase pointer. However, if I use that pointer to try and access the variable int m_iR, the compiler looks for the variable in CStateBase and therefore makes an error. If I could declare virtual int m_iR in the base class I would think it would work fine, but for some reason it's not letting me declare virtual data members. What is the recommended way to get around this problem? Thanks for any help.
The best way is to abstract the access of m_iR into some virtual function. Another choice would be to move m_iR from CStateTitle into CStateBase. This only makes sense if every class needs an m_iR. A last resort would be to do a dynamic cast: CStateBase *csb = ...; CStateTitle *cst = dynamic_cast<CStateTitle *>(csb); if (cst) { // have a valid CStateTitle } else { // csb is not pointing at a CStateTitle, do whatever is appropriate }
3,106,042
3,106,058
Is there a simple library for C++ to draw to the screen?
I'm just starting out in C++ programming and I want to try creating a space invaders clone in C++, I want to avoid using game libraries and things that would solve a lot of the problems (like game loop and vector maths etc) so I can tackle these myself, but, I have no idea how to begin drawing things to a screen. I was wondering if there's a good library I should use to simply allow myself to draw lines or graphics to the screen or whether I can do this without the use of a library? I'd appreciate any advice, Thanks.
I recommend either Allegro or SDL, even though they are mostly 2D: Allegro: http://alleg.sourceforge.net/ SDL: http://www.libsdl.org/
3,106,188
3,106,232
Programmatically Save an Excel File Using OLE
How do you programmatically save an excel workbook using OLE and C++ Builder? I'm guessing it might be something like: Variant excel = Variant::CreateObject("Excel.Application"); excel.OleProcedure("Save"); // but how might you specify the file name
Oh just found the answer from here: excel.OlePropertyGet("Workbooks").OlePropertyGet("Item",1).OleProcedure("SaveAs","d:\\case1.xls"); First you get the workbooks object followed by the workbook. Then you can do a SaveAs.
3,106,206
3,106,218
Template object as static member of the template class
Imagine the following template class (setter and getter for the member _t omitted): template<class T> class chain { public: static chain<T> NONE; chain() : _next(&NONE) {} ~chain() {} chain<T>& getNext() const { return *_next; } void setNext(chain<T>* next) { if(next && next != this) _next = next; } chain<T>& getLast() const { if (_next == &NONE) return *this; else return _next->getLast(); } private: T _t; chain<T>* _next; }; The basic idea of this concept is, instead of using null-pointers, I have a static default element that takes in this role while still being a technically valid object; this could prevent some of the issues with null pointers while making the code more verbose at the same time... I can instantiate this template just fine, but the linker gives an unresolved-external error on the static member object NONE. I would have assumed that when instantiating the template, the line static chain<T> NONE; would effectively be a definition, too, as it actually happens within the implementation instantiating the template. However, it turns out not to be... My question is: is something like possible at all, and if so, how, without explicitly defining the NONE element before each and every template instantiation?
You still need to define NONE outside the class, just as with a non-template class: inside the class definition you have only declared it. template<class T> class chain { // the same as your example }; // Just add this template <class T> chain<T> chain<T>::NONE;
3,106,482
3,106,486
template fails to compile: 'double' is not a valid type for a template constant parameter
template<typename T, T Min> class LowerBoundedType {}; template<typename T> class vectorelement {}; template<> class vectorelement<Categorical> { typedef LowerBoundedType<double, 0.0> type; }; with error: error: 'double' is not a valid type for a template constant parameter
The only numeric types valid for a nontype template parameter are integers and enumerations. So, you can't have a nontype template parameter of type double.
3,106,517
3,109,770
Make child windows inherit parent background color?
Is there a way for windows created with WS_CHILD to inherit the parent's background color rather than default to Dialog Color? I'm trying to do this for a tab control, mainly the part that extends due to the tabs not filling up the area. I want this color to be the parent window's BG. Thanks
For the "content" of the tab, you would call EnableThemeDialogTexture (Visual Styles can have complex tab content backgrounds), for the area to the right of the actual tabs I think you might have to owner draw (TCS_OWNERDRAWFIXED style) unless just using the TCS_RIGHTJUSTIFY style is enough.
3,106,731
3,106,776
How to check current mouse button state using Win32/User32 library?
I know how to stimulate clicks using User32 SendInput method and what I need is a similar User32 method but to obtain the current mouse button state. Something similar to: public static extern bool GetCursorPos(ref System.Drawing.Point lpPoint); Function GetCursorPos gives me the current cursor position. What I need is the left button state (if it's clicked or not). Is there such a function?
There's a method called GetAsyncKeyState. The method signature looks like this: [DllImport("user32.dll")] public static extern short GetAsyncKeyState(UInt16 virtualKeyCode); Then you simply call it passing the left mouse key code (VK_LBUTTON = 0x01) and off you go. More information directly from MSDN.
3,106,740
3,106,799
timeval to string (converting between the two)
I'm trying to pull the two components out of a timeval struct and place them into strings. I'm not having much luck with this. I've attempted casting and converting first to a long and then to a string. I need the most efficient way to do this. Any ideas? I do NOT want to convert to another data structure first (localtime, etc). I need the seconds and the microseconds in their original state. EDIT: I know stringstream is an option here -- I'm just not sure how efficient that is. Every microsecond counts here, so I'm looking for the fastest implementation.
Boost's lexical_cast should be reasonably fast. Edit: Let me elaborate. Here's an example of its usage: std::string strSeconds = lexical_cast<std::string>(time.tv_sec); std::string strMicroSec = lexical_cast<std::string>(time.tv_usec); For more complicated string formatting, the Boost documentation recommends the basic std::stringstream. Something like: std::stringstream ss; ss << time.tv_sec << " seconds, " << (time.tv_usec/1000L) << " milliseconds"; return ss.str(); Reasonably fast, readable, safe and standard. You might be able to get a little more speed by using sprintf from the cstdio header (preferably sprintf_s if available). The %ld specifier handles long values, so cast the fields to long to be safe: std::string tvtostr(timeval time) { // unless corrupted, the number of microseconds is always less than 1 second assert(time.tv_sec >= 0 && time.tv_usec >= 0 && time.tv_usec < 1000000L); // space for one unbounded positive long, one long from 0 to 999, // the string literal below, and a '\0' string terminator boost::array<char, 20+3+23+1> buffer; sprintf_s(buffer.data(), buffer.size(), "%ld seconds, %ld milliseconds", (long)time.tv_sec, (long)(time.tv_usec/1000L) ); return buffer.data(); }
3,106,767
3,106,778
A question about cin in C++
When I declare int weight and then input a double value 165.1 a 2nd cin >> height; doesn't work and there is no any error message. Can you tell me why? VS2010 Console Application used. #include <iostream> using namespace std; const double lbs_to_kg = 2.2046, inches_to_meter = 39.370; int main() { int weight, height; double kilograms, meters; cout << "\nEnter weight in pounds: "; cin >> weight; kilograms = weight / lbs_to_kg; cout << "\nEnter height in inches: "; cin >> height; meters = height / inches_to_meter; cout << "\nYour BMI is approximately " << "\nbody fat ratio is " << kilograms / (meters * meters) << ". Under 25 is good." << endl; } output: Enter weight in pounds: 165.1 Enter height in inches: Your BMI is approximately body fat ratio is 1.57219e-013. Under 25 is good.
If you try to have cin extract data into a variable that can't hold it, the data is left in the input stream and cin is flagged as having failed. You need to check whether it has failed with !cin, and use cin.clear() to clear the fail flag so you can read again (future extraction operations will automatically fail until the flag is cleared). You can either extract the data into a different variable that's capable of holding it, or use cin.ignore() to discard it.
3,106,882
3,106,947
C++ windows32 winsock UDP routing?
In C++ using Windows32 using windows socket library using UDP is there a way to give a client routing information to another client to establish a connection between clients without having to route through the server Clarification: server - waits for computers and gives routing info - a detached server client - sends a ack request and waits for routing info - a normal user computer but ok so its not posible to give routing info to clients to interconnect clients without requiring the data to be forwarded through the server?
Short answer: no Long answer: yes --- but you have to use IPPROTO_IP, not IPPROTO_UDP. Use IP_OPTIONS option in setsockopt() to set source routing.
3,106,958
3,107,336
Simple question regarding an equation inside of a function
Hey, so basically I have this issue, where I'm trying to put an equation inside of a function however it doesn't seem to set the value to the function and instead doesn't change it at all. This is a predator prey simulation and I have this code inside of a for loop. wolves[i+1] = ((1 - wBr) * wolves[i] + I * S * rabbits[i] * wolves[i]); rabbits[i+1] = (1 + rBr) * rabbits[i] - I * rabbits[i] * wolves[i]; When I execute this, it works as intended and changes the value of both of these arrays appropriately, however when I try to put it inside of a function, int calcRabbits(int R, int rBr, int I, int W) { int x = (1 + rBr) * R - I * R * W; return x; } int calcWolves(int wBr, int W, int I, int S, int R) { int x = ((1 - wBr) * W + I * S * R * R); return x; } And set the values as such rabbits[i+1] = calcRabbits ( rabbits[i], rBr, I, wolves[i]); wolves[i+1] = calcWolves(wBr, wolves[i], I, S, rabbits[i]); The values remain the same as they were when they were initialized and it doesn't seem to work at all, and I have no idea why. I have been at this for a good few hours and it's probably something that I'm missing, but I can't figure it out. Any and all help is appreciated. 
Edit: I realized the parameters were wrong, but I tried it before with the correct parameters and it still didnt work, just accidentally changed it to the wrong parameters (Compiler mouse-over was showing the old version of the parameters) Edit2: The entire section of code is this days = getDays(); // Runs function to get Number of days to run the simulation for dayCycle = getCycle(); // Runs the function get Cycle to get the # of days to mod by int wolves[days]; // Creates array wolves[] the size of the amount of days int rabbits[days]; // Creates array rabbits [] the size of the amount of days wolves[0] = W; // Sets the value of the starting number of wolves rabbits[0] = R; // sets starting value of rabbits for(int i = 0; i < days; i++) // For loop runs the simulation for the number of days { // rabbits[i+1] = calcRabbits ( rabbits[i], rBr, I, wolves[i]); // // //This is the code to change the value of both of these using the function // wolves[i+1] = calcWolves(wBr, wolves[i], I, S, rabbits[i]); // This is the code that works and correctly sets the value for wolves[i+1] wolves[i+1] = calcWolves(wBr, wolves[i], I, S, rabbits[i]); rabbits[i+1] = (1 + rBr) * rabbits[i] - I * rabbits[i] * wolves[i]; } Edit: I realized my mistake, I was putting rBr and wBr in as ints, and they were floats which were numbers that were below 1, so they were being automatically converted to be 0. Thanks sje
I was using int parameters for values that were actually doubles: rBr and wBr were below 1, so converting them to int truncated them to 0.
3,107,005
3,107,053
Does the license on the .NET framework allow distribution of derivative forms of it's code?
I'd like to mimic a .NET class signature (and possibly some major pieces of implementation) in a C++ application. I plan on releasing said application under the Boost Software License. Specifically, I'd like to use the TimeSpan and DateTime interfaces, as well as some implementation I found using the handy dandy Reflector tool. Would use of code obtained through Reflector this way violate the terms of the .NET Framework's license?
You cannot redistribute modifications of their BCL implementation (mscorlib) for commercial purposes. It is available under the Shared Source license, which allows you to use it for academic/personal use only. The ECMA 335 standard requires that the BCL be implemented in any CLR implementation though, so you can duplicate the class and method names, but provide your own implementation behind them. You might want to look at the mcs module from the mono-project, which is a free-to-use BCL implementation (MIT license, you can use it as you want).
3,107,246
3,107,289
Reference to a Two-Dimesional Array
I want to implement a function with OpenGL to render a cylinder in C++. The signature of my function is as follows: #define POINTS_NUM 15 #define DEMESION 3 void drawCylinder( int slices, int segments, GLfloat (&vertices)[ POINTS_NUM ][ DEMESION ] ); I want to use a reference to a two-dimensional array to limit user input, but some strange behavior is happening. When I implement the function declared as above, an linker error occurs: Error 1 error LNK2005: "float (* vase)[3]" (?vase@@3PAY02MA) already defined in shapes.obj vase.obj VaseAndAnimation Here vase is defined as: GLfloat vase[ POINTS_NUM ][ DEMESION ]; At first, I thought there was something wrong with the last dimension. So I omitted it in my second trial. This time declaration of my function is like this: void drawCylinder( int slices, int segments, GLfloat (&vertices)[ POINTS_NUM ][] ); Now a compile-time error occurs when invoked as (vase definition isn't changed): drawCylinder( 10, 10, vase ); Compile error: Error 1 error C2087: 'vertices' : missing subscript d:\visual studio 2008\projects\project1\computer graphics\vaseandanimation\shapes.h 25 VaseAndAnimation Error 2 error C2664: 'drawCylinder' : cannot convert parameter 3 from 'GLfloat [14][3]' to 'GLfloat (&)[14][1]' d:\Visual Studio 2008\Projects\Project1\Computer Graphics\VaseAndAnimation\vase.cpp 64 VaseAndAnimation Error 3 error C2087: 'vertices' : missing subscript d:\visual studio 2008\projects\project1\computer graphics\vaseandanimation\shapes.h 25 VaseAndAnimation Error 4 error C2087: 'vertices' : missing subscript d:\Visual Studio 2008\Projects\Project1\Computer Graphics\VaseAndAnimation\shapes.cpp 12 VaseAndAnimation From this error, I can see that parameter vertices is really treated a reference to a two-dimensional array, but why is vase parsed as float (* vase)[3] in my first version? My IDE is Visual Studio 2008. I haven't tried it with GCC; is that behavior compiler-dependent? Hope someone can give me a hand to get rid of the trap.
Your first declaration is fine. It seems you have defined vase in a header file, so every translation unit that includes it gets its own definition. Declare it extern in the header and define it in exactly one .cpp file.
3,107,441
4,821,441
Prevent Up-Down control for Tab Control?
As documented: "If not all tabs can be shown at once, the tab control displays an up-down control so that the user can scroll additional tabs into view." http://msdn.microsoft.com/en-us/library/bb760550%28VS.85%29.aspx I don't want this. I don't want an up down control to show if I have too many and I don't want multiline tabs. I want a single strip. I will handle the case of too many tabs with a control I create myself, but I don't want the up-down control. Thanks
There's no style for that, so I believe the only way is a bit of hacking. From what I can see with my Spy++, the updown control is a true child control of the tab control with id = 1. So, you can actually hide it with ShowWindow().
3,107,697
3,107,719
What is the meaning of new int[25,2]?
What is the meaning of using the second parameter with a comma in the below code? int *num = new int[25,2];
That's the comma operator in action: it evaluates its operands and returns the last one, in your case 2. So it is equivalent to: int *num = new int[2]; It's probably safe to say that the 25,2 part was not what was intended, unless it's a trick question. Edit: thank you Didier Trosset.
3,107,699
3,107,790
Is Visual C++ as powerful as gcc?
My definition of powerful is ability to customize. I'm familiar with gcc I wanted to try MSVC. So, I was searching for gcc equivalent options in msvc. I'm unable to find many of them. controlling kind of output Stop after the preprocessing stage; do not run the compiler proper. gcc: -E msvc: ??? Stop after the stage of compilation proper; do not assemble. gcc: -S msvc: ??? Compile or assemble the source files, but do not link. gcc: -c msvc:/c Useful for debugging Print (on standard error output) the commands executed to run the stages of compilation. gcc: -v msvc: ??? Store the usual “temporary” intermediate files permanently; gcc: -save-temps msvc: ??? Is there some kind of gcc <--> msvc compiler option mapping guide? gcc Option Summary lists more options in each section than Compiler Options Listed by Category. There are hell lot of important and interesting things missing in msvc. Am I missing something or msvc is really less powerful than gcc.
MSVC is an IDE, gcc is just a compiler. CL (the MSVC compiler) can do most of the steps that you are describing from gcc's point of view. CL /? gives help. E.g. Pre-process to stdout: CL /E Compile without linking: CL /c Generate assembly (unlike gcc, though, this doesn't prevent compiling): CL /Fa CL is really just a compiler, if you want to see what commands the IDE generates for compiling and linking the easiest thing to look at the the command line section of the property pages for an item in the IDE. CL doesn't call a separate preprocessor or assembler, though, so there are no separate commands to see. For -save-temps, the IDE performs separate compiling and linking so object files are preserved anyway. To preserve pre-processor output and assembler output you can enable the /P and /Fa through the IDE. gcc and CL are different but I wouldn't say that the MSVC lacks "a hell lot" of things, certainly not the outputs that you are looking for.
3,107,771
5,681,448
Pixel coordinates to 3D line (opencv)
I have an image displayed on screen which is undistorted via cvInitUndistortMap & cvRemap (having done camera calibration), and the user clicks on a feature in the image. So I have the (u,v) pixel coordinates of the feature, and I also have the intrinsic matrix and the distortion matrix. What I'm looking for is the equation of the 3D line in camera/real-world coordinates on which the feature the user clicked must lie. I already have the perpendicular distance between the camera's image plane and the feature, so I can combine that with the aforementioned equation to give me the (X,Y,Z) coordinate of the feature in space. Sounds easy (inverse intrinsic matrix or something?) but I can't find step-by-step instructions anywhere. C++ or C# code preferred.
This is a bit of an old question but it might still be useful for someone. All lines go through the point (0,0,0), so: line.x0 = 0; line.y0 = 0; line.z0 = 0; direction vector is as follows: line.A = (u/fx) - (cx/fx); line.B = (v/fy) - (cy/fy); line.C = 1; cx, cy, fx, fy are parameters from the camera matrix. The equations are explained in the "Learning OpenCV" book.
3,107,924
3,108,000
C++ Type Casting: benefit of using explicit casts?
What are benefits of using these operators instead of implicit casting in c++? dynamic_cast <new_type> (expression) reinterpret_cast <new_type> (expression) static_cast <new_type> (expression) Why, where, in which situation we should use them? And is it true that they are rarely used in OOP?
From the list of casts you provided, the only one that makes sense as a substitute for an implicit cast is the static_cast. dynamic_cast is used to downcast a pointer (or reference) to a superclass into one to its subclass. This cannot happen implicitly and is actually something that is not that rare in OOP. static_cast could be used in such a cast too; it is, however, more dangerous, as it does not check at run time that the downcast is valid. The last cast, reinterpret_cast, should be used very carefully as it is the most dangerous of all. You can essentially cast anything into anything with it - but you as the programmer will have to make sure that such a cast makes sense semantically, as you essentially turn off type checking by doing it.
3,107,974
3,108,006
Call a C++ base class method automatically
I'm trying to implement the command design pattern, but I'm stumbling across a conceptual problem. Let's say you have a base class and a few subclasses, like in the example below:

class Command : public boost::noncopyable
{
    virtual ResultType operator()()=0;

    //Restores the model state as it was before command's execution.
    virtual void undo()=0;

    //Registers this command on the command stack.
    void register();
};

class SomeCommand : public Command
{
    virtual ResultType operator()(); // Implementation doesn't really matter here
    virtual void undo(); // Same
};

The thing is, every time operator() is called on a SomeCommand instance, I'd like to add *this to a stack (mostly for undo purposes) by calling the Command's register method. I'd like to avoid calling "register" from SomeCommand::operator()(), but to have it called automatically (someway ;-) ). I know that when you construct a subclass such as SomeCommand, the base class constructor is called automatically, so I could add a call to "register" there. The thing is, I don't want to call register until operator()() is called. How can I do this? I guess my design is somewhat flawed, but I don't really know how to make this work.
It looks as if you can benefit from the NVI (Non-Virtual Interface) idiom. There the interface of the command object would have no virtual methods, but would call into private extension points:

class command {
public:
    void operator()() {
        do_command();
        add_to_undo_stack(this);
    }
    void undo();
private:
    virtual void do_command();
    virtual void do_undo();
};

There are different advantages to this approach, the first of which is that you can add common functionality in the base class. Other advantages are that the interface of your class and the interface of the extension points are not bound to each other, so you could offer different signatures in your public interface and the virtual extension interface. Search for NVI and you will get much more and better explanations.

Addendum: The original article by Herb Sutter where he introduces the concept (yet unnamed)
3,108,142
3,108,173
C++: proper inlining syntax
Is it preferred to define inline methods like:

// math.h
class Math
{
public:
    inline static int sum(int a, int b)
    {
        return a + b;
    }
};

or

// math.h
class Math
{
public:
    static int sum(int a, int b);
};

// math.cpp
inline int Math::sum(int a, int b)
{
    return a + b;
}

and why? Is there any difference? Should you leave inlining entirely up to the compiler? (I know this is pretty subjective, but I'd like to hear some opinions and reasons.) Also, I believe it is true that most compilers will inline or not inline of their own accord regardless of the presence or absence of the inline keyword?
Inlining is entirely up to the compiler. The inline keyword, like the old C register keyword is a suggestion to the compiler to do some optimisation. However, compiler writers know so much more than we lesser beings about their target architectures that this is really unnecessary. Because of that, I don't ever use inline so the question is moot (for me) but I would prefer to keep all code out of headers since that always led to double-defined symbols in C compilers. Putting code in header files also leaks information unnecessarily. People using your headers should not be able to see the implementation details.
3,108,220
3,108,279
Porting windows code, what to use instead of __int64 _tmain and _TCHAR*?
I'm currently porting some windows code and trying to make it available for use in Ubuntu. The project was originally compiled in VC++ without any issues. Also I should note that this only needs to work in Ubuntu, but more platform independent ideas are certainly welcome. Most of the code is easy to port as it is mostly a numerical simulation project with few OS specific parts. There is no UNICODE used in the ported version and there is not going to be any need to support this. I'd like to know what the best practices are when trying to get this code to compile with GCC, in particular: What is considered to be the best replacement for: __int64, _tmain and _TCHAR* ? Thanks!
For the 64-bit type:

#include <inttypes.h>
typedef int64_t __int64;

As for the TCHAR problem: I actually find TCHARs rather useful, so I have a file with all the _t functions I use in it, e.g.:

#ifdef UNICODE
#define _tcslen wcslen
#define _tcscpy wcscpy
#define _tcscpy_s wcscpy_s
#define _tcsncpy wcsncpy
#define _tcsncpy_s wcsncpy_s
#define _tcscat wcscat
#define _tcscat_s wcscat_s
#define _tcsupr wcsupr
#define _tcsupr_s wcsupr_s
#define _tcslwr wcslwr
#define _tcslwr_s wcslwr_s
#define _stprintf_s swprintf_s
#define _stprintf swprintf
#define _tprintf wprintf
#define _vstprintf_s vswprintf_s
#define _vstprintf vswprintf
#define _tscanf wscanf
#define TCHAR wchar_t
#else
#define _tcslen strlen
#define _tcscpy strcpy
#define _tcscpy_s strcpy_s
#define _tcsncpy strncpy
#define _tcsncpy_s strncpy_s
#define _tcscat strcat
#define _tcscat_s strcat_s
#define _tcsupr strupr
#define _tcsupr_s strupr_s
#define _tcslwr strlwr
#define _tcslwr_s strlwr_s
#define _stprintf_s sprintf_s
#define _stprintf sprintf
#define _tprintf printf
#define _vstprintf_s vsprintf_s
#define _vstprintf vsprintf
#define _tscanf scanf
#define TCHAR char
#endif

As for the _s functions... basically, I implemented them. It takes about an hour of coding to do, but it makes porting projects to other platforms or compilers IMMENSELY easier.
3,108,282
3,108,294
Unicode supported isdigit and isspace function
I have the following code:

// mfc.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include "mfc.h"

#ifdef _DEBUG
#define new DEBUG_NEW
#endif

#include <cctype>
#include <string>
#include <sstream>
#include <tchar.h>
#include <iostream>
#include <Strsafe.h>
#include <algorithm>
#include <cmath>
#include <limits>
#include <functional>
#include <cassert>

std::wstring toStringWithoutNumerical(const std::wstring& str)
{
    std::wstring result;
    bool alreadyAppendSpace = false;
    for (int i = 0, length = str.length(); i < length; i++) {
        const TCHAR c = str.at(i);
        if (isdigit(c)) {
            continue;
        }
        if (isspace(c)) {
            if (false == alreadyAppendSpace) {
                result.append(1, c);
                alreadyAppendSpace = true;
            }
            continue;
        }
        result.append(1, c);
        alreadyAppendSpace = false;
    }
    return result;
}

// The one and only application object
CWinApp theApp;

using namespace std;

int _tmain(int argc, TCHAR* argv[], TCHAR* envp[])
{
    int nRetCode = 0;

    // initialize MFC and print and error on failure
    if (!AfxWinInit(::GetModuleHandle(NULL), NULL, ::GetCommandLine(), 0)) {
        // TODO: change error code to suit your needs
        _tprintf(_T("Fatal Error: MFC initialization failed\n"));
        nRetCode = 1;
    } else {
        // TODO: code your application's behavior here.
    }

    std::wstring me = toStringWithoutNumerical(_T("My Leg 1 Long"));
    AfxMessageBox(me.c_str());

    // Crash!
    std::wstring he = toStringWithoutNumerical(L"我的脚1盘");
    AfxMessageBox(he.c_str());

    return nRetCode;
}

For the 1st message box, "My Leg Long" will be shown. For the 2nd message box, a crash happens, with an assertion failure in isctype.c:

_ASSERTE((unsigned)(c + 1) <= 256);

How can I get a standard function (isdigit, isspace, ...) to support Unicode characters beyond the 0..255 range?
iswdigit?

http://www.opengroup.org/onlinepubs/007908775/xsh/iswdigit.html
http://msdn.microsoft.com/en-us/library/fcc4ksh8(VS.71).aspx

Also, look up iswspace. ;)
3,108,311
3,108,528
Undefined references to functions with C++ template methods
Why am I getting undefined references to the methods in this class when I call them? Will I be forced to include the implementation in the header file, or is there a better way to do this?

class MathHelper
{
public:
    /*!
        Represents the ratio of the circumference of a circle to its diameter,
        specified by the constant, p. This value is accurate to 5 decimal places.
    */
    static const double pi = 3.14159;

    template <typename T>
    static const T modulo(const T &numerator, const T &denominator);

    static const double modulo(double numerator, double denominator);
    static const float modulo(float numerator, float denominator);

    template <typename T>
    static const T& clamp(const T &value, const T &min, const T &max);

    template <typename T>
    static const T wrap(const T &value, const T &min, const T &max);

    template <typename T>
    static bool isPowerOfTwo(T number);

    template <typename T>
    static T nearestPowerOfTwo(T number);

    static float aspectRatio(const QSize &size);

    template <typename T>
    static float aspectRatio(T width, T height);

    template <typename T>
    static T degreesToRadians(T degrees);

    template <typename T>
    static T radiansToDegrees(T radians);

    template <typename T>
    static T factorial(T n);

private:
    MathHelper() { }
};
I think the explanation and answer to your question is in this C++ FAQ Lite answer and the ones that follow it. Basically, since templates are patterns to instantiate, any code unit needing them must know how to instantiate them. Therefore, the simplest way is to define your templates in header files (like Boost does). The C++ FAQ Lite gives another way to do that, but in my humble opinion it is cumbersome... my2c
3,108,340
3,108,356
Double Pointer Usage
Please take a look at the following code:

#include <stdio.h>
#include <iostream>
using namespace std;

typedef struct MyType {
    int num1;
};

void test(MyType **src)
{
    MyType *ret = new MyType;
    ret->num1 = 666;
    *src = ret;
}

int main()
{
    MyType *mSrc;
    test(&mSrc);
    printf("%d Address\n", mSrc);
    printf("%d Value \n", mSrc->num1);
}

I am wondering: if the test() method had been implemented as follows, why is the pointer assignment inside test() not visible to the caller?

void test(MyType *src)
{
    MyType *ret = new MyType;
    ret->num1 = 666;
    src = ret; // Why is this assignment only valid inside this method?
}

How do I implement this function without using a double pointer in the signature?
void test(MyType *src)

Here src is just a local variable within the test function; it's a copy of the pointer you passed in. Assigning to a local variable doesn't have any effect on the caller. If you want src to refer to the same variable you passed in, use a reference to a pointer:

void test(MyType *&src)