2,022,870
2,022,899
Assignment operator with Inheritance and virtual base class
I have an abstract virtual base class Foo from which I derive many other classes that differ in small ways. I have a factory that creates the derived classes and returns Foo*. One of my bigger problems is in my operator overloads: I need to make sure that the DFoo does not get operated on by DFoo1 (not shown). I have currently handled this by checking whether a cast fails, but I'm pretty unhappy with that approach. I have to use the base signature, because I can only return the base class from the factory. If this is the best way to do it, that's fine, I just want to make sure that this makes sense and that there isn't a pattern I'm missing. Any suggestions about how to handle this sort of thing are very much appreciated. class Foo { public: Foo(int x){...}; Bar m_bar; virtual Foo& operator=(const Foo& f) { m_bar = f.m_bar; return *this; } }; Now, my derived class: class DFoo : public Foo { DFoo(int x, int y) : Foo(x) {...} FooBar m_foobar; Foo& operator=(const Foo& rhs) { if(this != &rhs) { Foo::operator=(rhs); const DFoo* temp = dynamic_cast<const DFoo*>(&rhs); if(temp != NULL) { m_foobar = temp->m_foobar; } else throw exception(ex); } return *this; } };
You are probably looking for boost::noncopyable.
2,023,007
2,023,227
gcc version 4.1.2 in mac os x
I am taking a programming class and we are required to use the gcc 4.1.2 compiler to compile our c++ projects. I will be creating my projects in xcode and can't find how to set that compiler. I went to the get info window on the project and hit the drop down under Compiler Version, however I do not have 4.1.2 on the list. It seems that this compiler is not installed on my computer. Does anybody know where I can download it and how I can set it as my system default gcc compiler for the term?
You can probably get away with using whatever version of GCC is on your Mac, and doing a final compile on the university machines as a check. In general, the main user-visible change in a later version is stricter syntax checking, so you might write something on the Mac that won't pass a newer compiler, but that generally isn't too common. I haven't had to change more than a few lines in our 50k-line codebase.
2,023,032
2,023,045
catch exception by pointer in C++
I found that there are three ways to catch an exception, what are the differences? 1) catch by value; 2) catch by reference; 3) catch by pointer; I only know that catch by value will invoke two copies of the object, catch by reference will invoke one. So how about catch by pointer? When to use catch by pointer? In addition to throw an object, can I throw a pointer to an object like this? class A {} void f() { A *p = new A(); throw p; }
The recommended way is to throw by value and catch by reference. Your example code throws a pointer, which is a bad idea since you would have to manage memory at the catch site. If you really feel you should throw a pointer, use a smart pointer such as shared_ptr. Anyway, Herb Sutter and Andrei Alexandrescu explain this really well in their book C++ Coding Standards, which I paraphrased. See C++ Coding Standards: Throw by Value, Catch by Reference.
2,023,046
2,024,120
Is there any OpenSSL function to convert PKCS7 file to PEM
Is there any openssl api function to convert PKCS7 file to PEM. I am able to convert a PKCS12 file to PEM using PKCS12_parse() function which returns key and certificate given the password. There is no similar function for pkcs7. My pkcs7 input has just the certificate in binary format. I am able to do the conversion using command openssl pkcs7 -inform DER -in input.p7b -printcerts -text How do I do this in a C program? I am able to read it to a PKCS7 structure like this FILE* fp; if (!(fp = fopen("ca.p7b", "rb"))) { fprintf(stderr, "Error reading input pkcs7 file\n" ); exit(1); } PKCS7 *p7; p7 = d2i_PKCS7_fp(cafp, NULL);
After some googling I was able to do it like this: if(p7->d.sign->cert != NULL){ PEM_write_X509(fp, sk_X509_value(p7->d.sign->cert, 0)); } where p7 is a pointer to the PKCS7 struct and fp is the FILE pointer for the output PEM file. Note this writes only the first certificate in the bundle; to write them all, loop from 0 to sk_X509_num(p7->d.sign->cert) - 1.
2,023,519
2,023,600
convert a string to int
I have a large file where each line contains space-separated integers. The task is to parse this file line by line. For the string-to-int conversion I have three solutions: static int stringToIntV1(const string& str) { return (atoi(str.c_str())); } However, if I pass a malformed string, it doesn't produce any error. For instance the string "123error" is converted to 123. Second solution: static int stringToIntV2(const string& str) { int result; istringstream myStream(str); if (myStream >> result) { return result; } // else throw domain_error(str + " is not an int!"); } I have the same problem here, malformed strings don't raise an error. Third solution with Boost (found at Boost Library): static int stringToIntV3(const string& str) { int iResult = 0; try { iResult = lexical_cast<int>(str); } catch(bad_lexical_cast &) { throw domain_error(str + " is not an int!"); } return iResult; } This one gives the correct result. However, there is a significant difference in the execution time. Testing on a large text file (32 MB), I got the following times: (1) with atoi: 4.522s (winner) (2) with istringstream: 15.303s (very slow) (3) with lexical_cast: 10.958s (between the two) My question: do you know how to notice malformed strings with atoi? It would give the fastest solution. Or do you know a better solution? Update: Thanks for the answers. Following the tips, I came up with this solution: static int stringToIntV4(const string& str) { char * pEnd; const char * c_str = str.c_str(); int result = strtol(c_str, &pEnd, 10); if (pEnd == c_str+str.length()) { return result; } // else throw domain_error("'" + str + "'" + " is not an int!"); } The good news is that it throws if there is a problem and is as efficient as the atoi version.
I'd use strtol. It takes a parameter that it sets to point at the first character it couldn't convert, so you can use that to determine whether the entire string was converted. Edit: as far as speed goes, I'd expect it to be slightly slower than atoi, but faster than the others you tried.
2,023,808
2,023,835
how to take a very large hex string and format the output
I have a very long hex string in a byte array and I would like to format the output of that byte array so that it shows 0x??,0x??. The reason is that I have a key that I am generating and I don't want to type out a 512-bit key like that. Any native code that could help me do that would be appreciated. Basically, I have an RSA key generated in hex and was hoping to use that as a static byte array, but I don't want to type it all out as {0x??, 0x??} etc. Thanks in advance! You guys are the best!
For each byte in the array: cout << "0x" << hex << unsigned(theByte) << ","; where theByte is the value (hopefully an unsigned char) that you want to print.
2,023,952
2,023,956
C++ extern class definition
I'm reading some code that goes: extern class MyClass : BaseClass { ... } MyInstance; Does the extern refer to the class declaration or the instance?
Instance. Classes cannot be extern. The code smells, though: this snippet suggests that the true declaration of that instance uses a separate copy of the class definition elsewhere. Bad, bad idea - defining the class twice.
2,023,962
2,024,093
Organising .libs in a codebase of several C++ projects
Let's say you have several bespoke C++ projects in separate repositories or top-level directories in the same repository. Maybe 10 are library projects for stuff like graphics, database, maths, etc and 2 are actual applications using those libraries. What's the best way to organise those 2 application projects to have the .libs they need? Each lib project builds the .lib in its own directory, developers have to copy these across to the application area manually and make sure to get the right version Application projects expect lib projects to be in particular paths and look for .libs inside those locations A common /libs directory is used by all projects Something else This is focused on C++, but I think it's pretty similar with other languages, for instance organising JARs in a Java project.
I'd suggest this approach: Organise your code in a root folder. Let's call it code. Now put your projects and libraries as subfolders (e.g. Projects and Libraries). Build your libraries as normal and add a post-build step that copies the resulting headers and .lib files into a set of shared folders. For example, Libraries\include and Libraries\lib. It's a good idea to use subfolders or a naming convention (myLib.lib, myLib_d.lib) to differentiate different builds (e.g. debug and release) so that any lib reference explicitly targets a single file that can never be mixed up. It sucks when you accidentally link against the wrong variant of a lib! You can also copy third-party libraries that you use into these folders as well. Note: To keep them organised, include your files with #include "Math\Utils.h" rather than just "Utils.h". And put the headers for the whole Math library into include\Math, rather than dropping them all in the root of the include folder. This way you can have many libraries without name clashes. It also lets you have different versions of libraries (e.g. Photoshop 7, Photoshop 8) which allows you to multi-target your code at different runtime environments. Then set up your projects to reference the libraries in one of two ways: 1) Tell your IDE/compiler where the libs are using its global lib/include paths. This means you set up the IDE once on each PC and never have to specify where the libs are for any projects. 2) Or, set each project to reference the libs with its own lib/include paths. This gives you more flexibility and avoids the need to set up every PC, but means you have to set the same paths in every new project. (Which is best depends on the number of projects versus the number of developer PCs) And the most important part: When you reference the includes/libs, use relative paths. e.g. from Projects\WebApp\WebApp.proj, use "..\..\Libraries\include" rather than "C:\Code\Libraries\Include". 
This will allow other developers and your buildserver to have the source code elsewhere (D:\MyWork instead of C:\Code) for convenience. If you don't do this, it'll bite you one day when you find a developer without enough disk space on C:\ or if you want to branch your source control.
2,023,976
2,024,481
Costs and benefits of Linux-like Windows development environment
I'm taking an Introduction to C++ this semester, so I need to set up development environments in both my Windows and Ubuntu partitions (I switch between them). I was planning to use GCC in both environments for consistency and because I plan to do my serious C++ developing in Linux with GCC. It appears that installing MSYS and MinGW is the best way to use GCC and replicate my Linux dev environment. However, just setting up MSYS and MinGW in Windows appears to be a long and arduous process, and I'm imagining that I will have limitations or compatibility problems in the future. Do the benefits of setting up a MSYS Linux-like development environment on Windows outweigh the costs? Will I be able to use all the libraries that I could if I were using Visual C++?
I think you're going about this the wrong way - I would actually suggest you use Visual Studio on the Windows environment, rather than going out of your way to setup GCC. It's a benefit, not a drawback, to run your code on multiple compilers from multiple vendors. Both GCC and Visual Studio are highly conformant (at least recent versions). You won't have any trouble with standard libraries and going between them, and if you do have trouble, it's probably an issue in your code.
2,023,977
2,024,173
Difference of keywords 'typename' and 'class' in templates?
For templates I have seen both declarations: template < typename T > template < class T > What's the difference? And what exactly do those keywords mean in the following example (taken from the German Wikipedia article about templates)? template < template < typename, typename > class Container, typename Type > class Example { Container< Type, std::allocator < Type > > baz; };
typename and class are interchangeable in the basic case of specifying a template: template<class T> class Foo { }; and template<typename T> class Foo { }; are equivalent. Having said that, there are specific cases where there is a difference between typename and class. The first one is in the case of dependent types. typename is used to declare when you are referencing a nested type that depends on another template parameter, such as the typedef in this example: template<typename param_t> class Foo { typedef typename param_t::baz sub_t; }; The second one you actually show in your question, though you might not realize it: template < template < typename, typename > class Container, typename Type > When specifying a template template, the class keyword MUST be used as above -- it is not interchangeable with typename in this case (note: since C++17 both keywords are allowed in this case). You also must use class when explicitly instantiating a template: template class Foo<int>; I'm sure that there are other cases that I've missed, but the bottom line is: these two keywords are not equivalent, and these are some common cases where you need to use one or the other.
2,024,185
2,024,333
(C++ QT) QList only allows appending constant class objects?
I'm pretty new to QT. I've been messing with it for a week now. I came across a error while I was trying to add a custom datatype to a Qlist like so QObject parent; QList<MyInt*> myintarray; myintarray.append(new const MyInt(1,"intvar1",&parent)); myintarray.append(new const MyInt(2,"intvar2",&parent)); myintarray.append(new const MyInt(3,"intvar3",&parent)); and my MyInt class is a simple wrapper for int and looks something like this #ifndef MYINT_H #define MYINT_H #include <QString> #include <QObject> class MyInt : public QObject { Q_OBJECT public: MyInt(const QString name=0, QObject *parent = 0); MyInt(const int &value,const QString name=0, QObject *parent = 0); MyInt(const MyInt &value,const QString name=0,QObject *parent = 0); int getInt() const; public slots: void setInt(const int &value); void setInt(const MyInt &value); signals: void valueChanged(const int newValue); private: int intStore; }; #endif the error i'm getting during the Qlist append error: invalid conversion from 'const MyInt*' to 'MyInt*' error: initializing argument 1 of 'void QList::append(const T&) [with T = MyInt*]' If anyone can point out what i'm doing wrong, that would be awesome.
So you created a list of: QList<MyInt*> myintarray; Then you later try to append myintarray.append(new const MyInt(1,"intvar1",&parent)); The problem is new const MyInt is creating a const MyInt *, which you can't assign to a MyInt * because it loses the constness. You either need to change your QList to hold const MyInts like so : QList<const MyInt*> myintarray; or you need to not create a const MyInt * by changing your appends to: myintarray.append(new MyInt(1,"intvar1",&parent)); The method you will choose will depend on exactly how you want to use your QList. You only want const MyInt * if you never want to change the data in your MyInt
2,024,595
2,024,622
c++ getting dynamic generic type of pointer?
the title probably is misleading, but i didn't really know how to name it. let's say I have the following structs template <typename T> struct SillyBase{ void doFunnyStuff(vector<T> vec){ dummyField = T(); for(int i=0; i<10; i++) vec.push_back(dummyField++); } T dummyField; }; struct A : public SillyBase<char>{}; struct B : public SillyBase<float>{}; now let's further assume i have a pointer ISillyBase* ptr; which is pointing to an object of a DECENDANT class (A or B) of SillyBase - however, i DON'T KNOW which one (i just know it's either A or B); Is there ANY way for me to call doFunnyStuff() ? maybe something like: vector<dynamic_generic_type_of(ptr)> vec; ptr->doFunnyStuff(vec); thanks!
In your example you can't have SillyBase* because SillyBase is defined as template <typename T> struct SillyBase {...} so you need to provide the type ... Another problem is that you pass a copy of vector<T> into doFunnyStuff(), which you then populate ... that does not seem right, because when the method returns you lose your vec. Was it supposed to be a reference, vector<T>&?
2,024,650
2,024,678
Parsing a bit field parameter, how to "discard" bits in an unsigned long?
First of all, I want to know if this is possible: let's say I have an unsigned long long which contains some arbitrary unsigned shorts, which may or may not be present in the number. For example: unsigned short int id1 = 3456, id2 = 30998; unsigned long long bitfld = id1|id2; Can the other 2 fields be assumed to be 0? And is OR the right operation for this? After that let's say I pass bitfld as an argument: void dostuff (unsigned long long bf) { //pseudo code if( the first field exists in bf) get first field; if( the second field exists in bf) get second field; //etc... } I think I have to pull out the first 16 bits of the bitfield and check those, then repeatedly pull the rest, verify them and store them if they are greater than 0. But I'm not sure how to do this; bit shifting only shifts left or right, thus it only divides or multiplies, right? Sorry for the bump. Thanks all for your answers, but I ended up using a simpler and more efficient method: an internal structure. You see, I could have done this easily with a string, but my purpose was transparency for the user of the code - easy to program, so to say. I created an internal structure to hold my values and then a public method to create and return such a structure, so it is easy to use and faster to parse (though it has the overhead of allocating a (albeit small) structure on the stack, which the bit field solution hasn't, but alas). So thank you all for your answers.
short int is 2 bytes long, but long long is 8 bytes, so you have a length mismatch; OR-ing the two ids directly just overlaps their bits in the same 16-bit slot. You may have meant this: unsigned long long bitfld = id1 | ((unsigned long long)id2 << 16); You can check whether a field is occupied by ANDing it with a mask: unsigned dostuff (unsigned long long bf) { //pseudo code if(bf & 0xFFFF) return bf & 0xFFFF; if(bf & 0xFFFF0000) return (bf & 0xFFFF0000) >> 16; //etc... }
2,024,933
2,105,840
Warning "might be clobbered" on C++ object with setjmp
#include <setjmp.h> #include <vector> int main(int argc, char**) { std::vector<int> foo(argc); jmp_buf env; if (setjmp(env)) return 1; } Compiling the above code with GCC 4.4.1, g++ test.cc -Wextra -O1, gives this confusing warning: /usr/include/c++/4.4/bits/stl_vector.h: In function ‘int main(int, char**)’: /usr/include/c++/4.4/bits/stl_vector.h:1035: warning: variable ‘__first’ might be clobbered by ‘longjmp’ or ‘vfork’ Line 1035 of stl_vector.h is in a helper function used by the vector(n, value) constructor that I invoke while constructing foo. The warning disappears if the compiler can figure out the argument value (e.g. it is a numeric literal), so I use argc in this test case because the compiler cannot determine the value of that. I guess the warning might be because of compiler optimizing the vector construction so that it actually happens after the setjmp landing point (which seems to be the case here when the constructor argument depends on a parameter of the function). How can I avoid the problem, preferably without having to break the setjmp part to another function? Not using setjmp is not an option because I am stuck with a bunch of C libraries that require using it for error handling.
The rule is that any non-volatile, non-static local variable in the stack frame calling setjmp might be clobbered by a call to longjmp. The easiest way to deal with it is to ensure that the frame you call setjmp doesn't contain any such variables you care about. This can usually be done by putting the setjmp into a function by itself and passing in references to things that have been declared in another function that doesn't call setjmp: #include <setjmp.h> #include <vector> int wrap_libcall(std::vector<int> &foo) { jmp_buf env; // no other local vars if (setjmp(env)) return 1; // do stuff with your library that might call longjmp return 0; } int main(int argc, char**) { std::vector<int> foo(argc); return wrap_libcall(foo); } Note also that in this context, clobbering really just means resetting to the value it had when setjmp was called. So if longjmp can never be called after a modification of a local, you're ok too. Edit The exact quote from the C99 spec on setjmp is: All accessible objects have values, and all other components of the abstract machine have state, as of the time the longjmp function was called, except that the values of objects of automatic storage duration that are local to the function containing the invocation of the corresponding setjmp macro that do not have volatile-qualified type and have been changed between the setjmp invocation and longjmp call are indeterminate.
2,025,019
2,025,043
What object is rethrown in C++?
I am quite confused about the type of the object which is rethrown in C++. For example, in the code below, why is the output 241? My understanding is that in Line 1, an object of class Bar is thrown. It is caught in Line 2. The object of type Bar is sliced to type Foo. However, when the exception is rethrown, what's the type of that? Why is Line 3 executed? It's not Foo any more? What's the basic policy of rethrow? Does the type remain the same? Or does anything change? #include <iostream> using namespace std; class Foo{ public: Foo(int i): i(i) {} Foo(){} int i; }; class Bar: public Foo{ public: Bar(int i): Foo(i) {} Bar(const Bar& b){i=b.i;} }; int main () { Bar b(1); try{ try{ throw b; //Line 1 } catch(Foo& e){ e.i=2; //Line 2 cout<<e.i; throw; } catch(Bar& e){ e.i = 3; cout<<e.i; throw e; } } catch (Bar e) { e.i*=2; //Line 3 cout<<e.i; } catch (Foo e) { e.i*=3; cout<<e.i; } cout<<b.i; return 0; }
throw; on its own throws the same object. The object is really a Bar, even though your reference to it is a Foo&. So when you say, "It is caught in Line 2. The object of type Bar is sliced to type of Foo", that's not right. It's not sliced either by the catch or by the rethrow. If you change the line throw; to throw e;, then it will be sliced, and you'll see 261. Also, when you catch by value the object is copied, and so potentially can be sliced. The reason you get 1 at the end is that the object b is never thrown, and is not modified in any of the catch blocks. throw <expression>; throws a copy of its operand.
2,025,104
2,025,132
Fast conversion from YUY2 to RGB24
I'm writing a program, that will do some transformations with image from a webcam in real-time. As almost all other webcams, my noname gives data in YUY2 format (as written in bmiHeader.biCompression). I tried straight conversion on CPU side according to http://www.fourcc.org/yuv.php#YUY2, but it is VERY slow and wrong :). When I connect source output pin to renderer, picture and framerate are well. So I'm looking for a DirectShow filter, or, maybe, some codec, that will do fast conversion from YUY2 to RGB24 on video card and will give me result without rendering it on screen. I'm using: 32 bit Windows XP SP3. VC++ 2005 Recently downloaded Windows SDK. DirectX August'09.
I don't have the code available at the moment, but take a look at using the GDI to do the conversion; it's very fast. Basically, capture the source frame, create a memory DIB in the correct format (RGB24) and blit to the bitmap. The conversion occurs during the blitting and in my experience is very fast. I use this to grab frames from a webcam, and I can then drop the frames to the hard drive at the required frame rate as either JPEGs or bitmaps. DC
2,025,119
2,025,122
How do I peek at the next value of a string iterator
In a loop running over an entire string, how do I peek at the next value of the iterator? for (string::iterator it = inp.begin(); it!= inp.end(); ++it) { // Just peek at the next value of it, without actually incrementing the iterator } This is quite simple in C: for (i = 0; i < strlen(str); ++i) { if (str[i] == str[i+1]) { // Processing } } Is there an efficient way to do the above in C++? Note: I'm not using Boost.
if ( not inp.empty() ) { for (string::iterator it = inp.begin(); it != inp.end(); ++it) if (it + 1 != inp.end() and *it == *(it + 1)) { // Processing } } or if ( not inp.empty() ) { for (string::iterator it = inp.begin(); it != inp.end() - 1; ++it) if ( *it == *(it+1) ) { // Processing } }
2,025,145
2,032,797
Debugging C++ virtual multiple inheritance in Visual Studio 2008 watch window
I'm having trouble debugging a project in Visual Studio C++ 2008 with pointers to objects that have virtual multiple inheritance. I'm unable to examine the fields in the derived class, if the pointer is a type of a base. A simple test case I made: class A { public: A() { a = 3; }; virtual ~A() {} int a; }; class B : virtual public A { public: B() { b = 6; } int b; }; class C : virtual public A { public: C() { c = 9; } int c; }; class D : virtual public B, virtual public C { public: D() { d = 12; } int d; }; int main(int argc, char **argv) { D *pD = new D(); B *pB = dynamic_cast<B*>(pD); return(0); } Put a breakpoint on the "return(0)", and put pD and pB in the watch window. I can't figure out a way to see "d" in the pB in the watch window. The debugger won't accept a C style cast, or dynamic_cast. Expanding to the v-table shows that the debugger knows it's actually pointing a D destructor, but no way to see "d". Remove the "virtual's" from the base class definitions (so D has 2 A's) and the debugger will let me expand pB and see that it's really a D* object which can be expanded. This is what I want to see in the virtual case as well. Is there any way to make this work? Do i need to figure out the actual offsets of the object layout to find it? Or is it time to just say I'm not smart enough for virtual multiple inheritance and redesign, cause the actual project is much more complicated, and if I can't debug, I should make it simpler :)
This link also indicates that the debug symbol engine has problems with multiple inheritance with virtual base classes. But if you just want help debugging, why not add a helper function on the class A to get a D pointer if available. You can watch pB->GetMyD(). class D; class A { ... D* GetMyD(); ... } class D... D* A::GetMyD() { return dynamic_cast<D*>(this); } That will leave the pointer arithmetic to the compiler.
2,025,153
2,026,225
C++ Language template question
Below is a small test case that demonstrates a problem that I am trying to solve using templates in C++: template<typename T> void unused(T const &) { /* Do nothing. */ } int main() { volatile bool x = false; unused(!x); // type of "!x" is bool } As written below, the g++ v3.4.6 compiler complains: test.cc: In constructor `test::test()': test.cc:11: error: invalid initialization of reference of type 'const volatile bool&' from expression of type 'volatile bool' test.cc:3: error: in passing argument 1 of `void unused(const T&) [with T = volatile bool]' The goal here is to have unused suppress unused variable warnings that occur in optimized code. I have a macro that does an assertion check and in optimized code the assertion goes away, but I want any variables in the assertion's expression to remain referenced so that I don't get unused variable warnings only in optimized code. In the definition for unused() template function, I use a reference so that no copy constructor code gets inadvertently run so that the call to unused can be completely elided by the compiler. For those interested, the assertion macro looks like this: #ifdef NDEBUG # define Assert(expression) unused(expression) #else // not NDEBUG # define Assert(expression) \ { \ bool test = (expression); \ \ if (!test) { \ if (StopHere(__LINE__, __FILE__, __PRETTY_FUNCTION__, \ #expression, false)) { \ throw Exit(-1); /* So that destructors are run. */ \ } \ } \ } #endif // else not NDEBUG For the above test case, I can make the error go away by adding another similar unused function like this: template<typename T> void unused(T const) { /* Do nothing. 
*/ } However, then other cases calling unused() fail due to ambiguity when the argument can be made a reference to with something like: file.h:176: error: call of overloaded `unused(bool)' is ambiguous myAssert.h:27: note: candidates are: void unused(T) [with T = bool] myAssert.h:34: note: void unused(const T&) [with T = bool] So my question is, how can I change unused() or overload it so that it meets the following requirements: The call to unused() can be optimized away into a no-op by the compiler. It causes any variables that are present in the expression passed to unused() to appear used and thus not result in a warning about them being defined but not used. The argument to unused() may or may not be able to be referenced. The argument to unused() may be an object with an expensive copy constructor which should not be invoked when unused() is invoked. Thanks. -William
As Johannes said in the comments, you hit a compiler bug. You can work around it by explicitly converting to bool: unused( bool( !readWriteActivated) ); // add bool() to any (!volatile_bool_var) Old answer (but still not a bad idea) If I recall the const-volatile qualification rules, all you need is to qualify the dummy variable more. Essentially, you just want to parrot the error message back in the declared type :vP . template<typename T> void unused(T const volatile &) { // only change is to add "volatile" /* Do nothing. */ } Also, nice that you put the const after the type, where it belongs.
2,025,159
2,025,170
What's the use of const here
In int salary() const { return mySalary; } - as far as I understand, the const refers to the this pointer, but I'm not sure. Can anyone tell me what the use of const is here?
Sounds like you've got the right idea: in C++, const on a method of an object means that the method cannot modify the object. For example, this would not compile: class Animal { int _state; void changeState() const { _state = 1; // error: assignment of member in a const member function } };
2,025,217
2,025,374
Need help with C++ templates
I'm fairly sure this is a template question, since I can't seem to solve it any other way - but non-template solutions are also welcome. A Finite State Machine has a number of program States and each state can react to a number of Events. So, I want to define classes for Event, State and FSM. FSM has a collection (probably vector, might be linked list if STL gives problems in an embedded system) of States, and State has a collection of Events. Each state and event have a unique Id and a name string for debugging porpoises. To be awkward, I don't want the Ids to be integers, but elements of an enum. Each FSM has different enums for its states & events. How best to code this? Can you give an example with two simple FSMs, or one FSM with two states, each with two events? For example, if I have enum myEvents {a, b, c}; enum hisEvents {d, e, f, g}; I want to be able to declare an Event class which accepts constructor params (myEvents a, char* "event_a") and (hisEvents g, char* "event_g") Note that I don't want to just overload the constructor, since that is restrictive - what if new event enums are added? And similarly with states; then have my FSMs each have a list of states. Or am I just being anal, insisting on enums for eventId, when it would be much simpler to pass an int? Thanks. Btw, I'd rather avoid Boost as it is itself undecided on how well it works in embedded systems. I prefer in-house developed code, for complete control.
I'm not sure if I'm understanding things correctly but I'll take a stab at it. I'm assuming you want to define a state machine by defining the transitions; e.g. "when in state 'myEvents' and you see 'a' do 'event_a'": class State {}; template<typename T> struct RealState : State { static void Add(T event, const char*) { /* save stuff */ } }; class Event {}; template<typename T> struct RealEvent : Event { RealEvent(T event, const char* name) { RealState<T>::Add(event, name); } }; Somehow you would need to tack on actions and whatnot, and you will want to muck it up a bit to get more than one state machine, but I hope that gets you started.
2,025,228
2,025,532
Creating function pointers to functions created at runtime
I would like to do something like: for(int i=0;i<10;i++) addresses[i] = & function(){ callSomeFunction(i) }; Basically, having an array of addresses of functions with behaviours related to a list of numbers. If it's possible with external classes like Boost.Lambda is ok. Edit: after some discussion I've come to conclusion that I wasn't explicit enough. Please read Creating function pointers to functions created at runtime What I really really want to do in the end is: class X { void action(); } X* objects; for(int i=0;i<0xFFFF;i++) addresses[i] = & function(){ objects[i]->action() }; void someFunctionUnknownAtCompileTime() { } void anotherFunctionUnknowAtCompileTime() { } patch someFunctionUnknownAtCompileTime() with assembly to jump to function at addresses[0] patch anotherFunctionUnknownAtCompileTime() with assembly to jump to function at addresses[1] sth, I don't think your method will work because of them not being real functions but my bad in not explaining exactly what I want to do.
If I understand you correctly, you're trying to fill a buffer with machine code generated at runtime and get a function pointer to that code so that you can call it. It is possible, but challenging. You can use reinterpret_cast<> to turn a data pointer into a function pointer, but you'll need to make sure that the memory you allocated for your buffer is flagged as executable by the operating system. That will involve a system call (VirtualAlloc() with PAGE_EXECUTE_READWRITE on Windows, mmap() with PROT_EXEC on Unix) rather than a "plain vanilla" malloc/new call. Assuming you've got an executable block of memory, you'll have to make sure that your machine code respects the calling convention indicated by the function pointer you create. That means pushing/popping the appropriate registers at the beginning of the function, etc. But, once you've done that, you should be able to use your function pointer just like any other function. It might be worth looking at an open source JVM (or Mono) to see how they do it. This is the essence of JIT compilation.
2,025,287
2,025,377
sending back a vector from a function
How to translate properly the following Java code to C++? Vector v; v = getLargeVector(); ... Vector getLargeVector() { Vector v2 = new Vector(); // fill v2 return v2; } So here v is a reference. The function creates a new Vector object and returns a reference to it. Nice and clean. However, let's see the following C++ mirror-translation: vector<int> v; v = getLargeVector(); ... vector<int> getLargeVector() { vector<int> v2; // fill v2 return v2; } Now v is a vector object, and if I understand correctly, v = getLargeVector() will copy all the elements from the vector returned by the function to v, which can be expensive. Furthermore, v2 is created on the stack and returning it will result in another copy (but as I know modern compilers can optimize it out). Currently this is what I do: vector<int> v; getLargeVector(v); ... void getLargeVector(vector<int>& vec) { // fill vec } But I don't find it an elegant solution. So my question is: what is the best practice to do it (by avoiding unnecessary copy operations)? If possible, I'd like to avoid normal pointers. I've never used smart pointers so far, I don't know if they could help here.
Most C++ compilers implement return value optimization which means you can efficiently return a class from a function without the overhead of copying all the objects. I would also recommend that you write: vector<int> v(getLargeVector()); So that you copy construct the object instead of default construct and then operator assign to it.
2,025,380
2,025,423
unpredictable behavior of Inline functions with different definitions
I have the following source files: //test1.cpp #include <iostream> using namespace std; inline void foo() { cout << "test1's foo" << endl; } void bar(); int main(int argc, char *argv[]) { foo(); bar(); } and //test2.cpp #include <iostream> using namespace std; inline void foo() { cout << "test2's foo" << endl; } void bar() { foo(); } The output: test1's foo test1's foo Huh??? Ok, so I should have declared the foos static... but shouldn't this kind of thing generate a linker error, or at least a warning? And how does the compiler "see" the inline functions from across compilation units? EDIT: This is using gcc 4.4.1.
You are running into the one-definition rule. You are not seeing any error because: [Some] violations, particularly those that span translation units, are not required to be diagnosed. What's going on under the covers is that the compiler is not inlining those functions (many compilers will not inline a function unless the code is compiled with the optimizer). Since the function is inline and can appear in multiple translation units, the compiler will mark the function as link-once, which tells the linker not to treat multiple definitions as an error but to just pick one of them. If you really want them to be different, make them static (or put them in an anonymous namespace).
2,025,795
2,026,407
Private inheritance from std::basic_string
I've been trying to learn more about private inheritance and decided to create a string_t class that inherits from std::basic_string. I know a lot of you will tell me inheriting from STL classes is a bad idea and that it's better to just create global functions that accept references to instances of these classes if I want to extend their functionality. I agree, but like I said earlier, I'm trying to learn how to implement private inheritance. This is what the class looks like so far: class string_t : #if defined(UNICODE) || defined(_UNICODE) private std::basic_string<wchar_t> #else private std::basic_string<char> #endif { public: string_t() : basic_string<value_type>() {} string_t( const basic_string<value_type>& str ) : basic_string<value_type>( str ) {} virtual ~string_t() {} using std::basic_string<value_type>::operator=; /* Line causing error */ std::vector<string_t> split( const string_t& delims ) { std::vector<string_t> tokens; tokens.push_back( substr( 0, npos ) ); } }; I get the following errors: 1>c:\program files\microsoft visual studio 9.0\vc\include\xutility(3133) : error C2243: 'type cast' : conversion from 'const string_t *' to 'const std::basic_string &' exists, but is inaccessible 1> with 1> [ 1> _Elem=wchar_t, 1> _Traits=std::char_traits, 1> _Ax=std::allocator 1> ] 1> c:\program files\microsoft visual studio 9.0\vc\include\xutility(3161) : see reference to function template instantiation 'void std::_Fill(_FwdIt,_FwdIt,const _Ty &)' being compiled 1> with 1> [ 1> _Ty=string_t, 1> _FwdIt=string_t * 1> ] 1> c:\program files\microsoft visual studio 9.0\vc\include\vector(1229) : see reference to function template instantiation 'void std::fill(_FwdIt,_FwdIt,const _Ty &)' being compiled 1> with 1> [ 1> _Ty=string_t, 1> _FwdIt=string_t * 1> ] 1> c:\program files\microsoft visual studio 9.0\vc\include\vector(1158) : while compiling class template member function 'void std::vector::_Insert_n(std::_Vector_const_iterator,unsigned int,const _Ty &)' 1> with 1> 
[ 1> _Ty=string_t, 1> _Alloc=std::allocator 1> ] 1> c:\work\c++\string_t\string_t.h(658) : see reference to class template instantiation 'std::vector' being compiled 1> with 1> [ 1> _Ty=string_t 1> ] The line number (658) in the last error points to the opening brace of the split() function definition. I can get rid of the error if I comment out the using std::basic_string<value_type>::operator=; line. As I understand it, the using keyword specifies that the assignment operator is being brought from private to public scope within string_t. Why am I getting this error and how can I fix it? Also, my string_t class doesn't contain a single data member of its own, much less any dynamically allocated members. So if I don't create a destructor for this class, doesn't that mean that if someone were to delete an instance of string_t using a base class pointer, the base class destructor would be called? The following code throws an exception when I have a destructor defined for string_t but works when I comment out the destructor when compiled with VS2008. basic_string<wchar_t> *p = new string_t( L"Test" ); delete p;
Your default constructor should not be explicit. I think explicitness may be the reason it can't convert std::string to string_t as well, but you erased that constructor from your snippet :vP . This program compiles and runs fine with GCC 4.2: #include <iostream> #include <string> #include <vector> using namespace std; class string_t : #if defined(UNICODE) || defined(_UNICODE) private std::basic_string<wchar_t> #else private std::basic_string<char> #endif { public: string_t() : basic_string<value_type>() {} string_t( const basic_string<value_type>& str ) : basic_string<value_type>( str ) {} virtual ~string_t() {} using std::basic_string<value_type>::operator=; /* Line causing error */ std::vector<string_t> split( const string_t& delims ) { std::vector<string_t> tokens; for ( size_t pen = 0, next = 0; next != npos; pen = next + 1 ) { next = find_first_of( delims, pen ); if ( pen != next ) tokens.push_back( substr( pen, next - pen ) ); } return tokens; } template<class os> friend os &operator<<(os &, string_t const&); }; template< class os_t > os_t &operator<<( os_t &os, string_t const &str ) { return os << static_cast< string >(str); } int main( int argc, char ** argv ) { vector<string_t> mytoks = string_t( argv[1] ).split( string( "_" ) ); for ( vector<string_t>::iterator it = mytoks.begin(); it != mytoks.end(); ++ it ) { cerr << * it << endl; } return 0; }
2,025,938
2,025,952
Searching c++ std vector of structs for struct with matching string
I'm sure I'm making this harder than it needs to be. I have a vector... vector<Joints> mJointsVector; ...comprised of structs patterned after the following: struct Joints { string name; float origUpperLimit; float origLowerLimit; }; I'm trying to search mJointsVector with "std::find" to locate an individual joint by its string name - no luck so far, but the examples from the following have helped, at least conceptually: Vectors, structs and std::find Can anyone point me further in the right direction?
A straight-forward-approach: struct FindByName { const std::string name; FindByName(const std::string& name) : name(name) {} bool operator()(const Joints& j) const { return j.name == name; } }; std::vector<Joints>::iterator it = std::find_if(m_jointsVector.begin(), m_jointsVector.end(), FindByName("foo")); if(it != m_jointsVector.end()) { // ... } Alternatively you might want to look into something like Boost.Bind to reduce the amount of code.
2,026,217
2,026,296
Difference in linkage between C and C++?
I have read the existing questions on external/internal linkage over here on SO. My question is different - what happens if I have multiple definitions of the same variable with external linkage in different translation units under C and C++? For example: /*file1.c*/ typedef struct foo { int a; int b; int c; } foo; foo xyz; /*file2.c*/ typedef struct abc { double x; } foo; foo xyz; Using Dev-C++ and as a C program, the above program compiles and links perfectly; whereas it gives a multiple redefinition error if the same is compiled as a C++ program. Why should it work under C and what's the difference with C++? Is this behavior undefined and compiler-dependent? How "bad" is this code and what should I do if I want to refactor it (i've come across a lot of old code written like this)?
Both C and C++ have a "one definition rule" which is that each object may only be defined once in any program. Violations of this rule cause undefined behaviour which means that you may or may not see a diagnostic message when compiling. There is a language difference between the following declarations at file scope, but it does not directly concern the problem with your example. int a; In C this is a tentative definition. It may be amalgamated with other tentative definitions in the same translation unit to form a single definition. In C++ it is always a definition (you have to use extern to declare an object without defining it) and any subsequent definitions of the same object in the same translation unit are an error. In your example both translation units have a (conflicting) definition of xyz from their tentative definitions.
2,026,287
2,026,309
Exception handling before and after main
Is it possible to handle exceptions in these scenarios: thrown from constructor before entering main() thrown from destructor after leaving main()
You can wrap your constructor's body in a try-catch within it. No, you should never allow an exception to escape a destructor. The funny, lesser-known feature of how to embed try-catch in a constructor: object::object( int param ) try : optional( initialization ) { // ... } catch(...) { // ... } Yes, this is valid C++. The added benefit here is the fact that the try will catch exceptions thrown by the constructors of the data members of the class, even if they're not mentioned in the ctor initializer or there is no ctor initializer: struct Throws { int answer; Throws() : answer(((throw std::runtime_error("whoosh!")), 42)) {} }; struct Contains { Throws baseball; Contains() try {} catch (std::exception& e) { std::cerr << e.what() << '\n'; } };
2,026,305
2,026,349
std::min is being redefined, but how?
Do streflop or boost libraries change the definition of std::min? I have a project that compiles fine with g++/make UNTIL I merge it with the CMake build of another project (using add_directory). Suddenly I get: no matching function for call to min(double&,float) The line number it claims the error is on is wrong (it's pointing to the last line of the source file) but I'm going to assume the relevant code is this: first = std::min (first, key.mTime); Where first is declared as a double. The 'parent' project (Spring RTS) uses boost and streflop but even after replacing all includes for <math.h> with "streflop_cond.h" in the child project (assimp) the problem remains. Maybe some compiler flags are responsible, I'm not sure. Any theories would be appreciated. The source for both projects is available online. I've spent nearly 7 hours on this now and I don't seem any closer to a solution. The full error and build flags are: [ 61%] Building CXX object rts/lib/assimp/code/CMakeFiles/assimp.dir/ScenePreprocessor.cpp.o cd /mnt/work/workspace/spring-patch-git/linux/build/rts/lib/assimp/code && /usr/bin/g++ -Dassimp_EXPORTS -DSYNCCHECK -DNO_AVI -DSPRING_DATADIR=\"/usr/local/share/games/spring\" -DSTREFLOP_SSE -DASSIMP_BUILD_DLL_EXPORT -msse -mfpmath=sse -fsingle-precision-constant -frounding-math -mieee-fp -pipe -fno-strict-aliasing -fvisibility=hidden -fvisibility-inlines-hidden -pthread -O0 -Wall -Wno-sign-compare -DDEBUG -D_DEBUG -DNO_CATCH_EXCEPTIONS -gstabs -fPIC -I/mnt/work/workspace/spring-patch-git/spring/rts/System -I/mnt/work/workspace/spring-patch-git/spring/rts/lib/lua/include -I/mnt/work/workspace/spring-patch-git/spring/rts/lib/streflop -I/usr/include/SDL -I/usr/include/boost-1_39 -I/mnt/work/workspace/spring-patch-git/spring/rts -I/usr/include/AL -I/usr/include/freetype2 -I/mnt/work/workspace/spring-patch-git/spring/rts/lib/assimp/include -I/mnt/work/workspace/spring-patch-git/spring/rts/lib/assimp/../streflop -o CMakeFiles/assimp.dir/ScenePreprocessor.cpp.o -c
/mnt/work/workspace/spring-patch-git/spring/rts/lib/assimp/code/ScenePreprocessor.cpp /mnt/work/workspace/spring-patch-git/spring/rts/lib/assimp/code/ScenePreprocessor.cpp: In member function void Assimp::ScenePreprocessor::ProcessAnimation(aiAnimation*): /mnt/work/workspace/spring-patch-git/spring/rts/lib/assimp/code/ScenePreprocessor.cpp:280: error: no matching function for call to min(double&, float) make[2]: *** [rts/lib/assimp/code/CMakeFiles/assimp.dir/ScenePreprocessor.cpp.o] Error 1
Try std::min<double>(first, key.mTime); The two arguments have different types, so the compiler can't deduce the template argument to std::min. EDIT3: I actually took a look at the assimp library and from your error message, it's line 280 of ScenePreprocessor.cpp that's the cause of the problems: anim->mDuration = last - std::min( first, 0. ); There's nothing wrong with this line in isolation: first is declared as a double, and 0. normally means a zero double literal. But your build line passes -fsingle-precision-constant, which tells GCC to treat unsuffixed floating constants like 0. as single precision — so the call becomes min(double&, float) and template argument deduction fails. The STREFLOP-related build settings are the likely source of that flag.
2,026,437
2,026,465
Writing a filter for incoming connections
I'm using C++/boost::asio under Win7. I'm trying to "sniff" traffic over a given TCP/IP port. Hence, I'd like to listen on that port, receive messages, analyze them, but also immediately allow them to flow further, as if I never intercepted them. I want them to sink into the program that normally listens and connects on that port. Imagine a transparent proxy, but not for HTTP. I'd rather find a code-based solution, but barring that, maybe you would suggest a tool?
What you are trying to do is basically a firewall/filter program. On Windows there are several approaches: you can hook Winsock, but a better (less hacky) option is a TDI filter or an NDIS filter driver. Microsoft has also introduced newer APIs, WFP (the Windows Filtering Platform) and LSPs (Layered Service Providers). I think you are better off using those, because TDI and NDIS filters involve driver programming, which is complicated and can be time consuming.
2,026,516
2,027,088
How can a QToolBar know where it is?
In Qt, when moving a QToolBar, one can use the QToolBar::topLevelChanged(bool) signal to know if the QToolBar is floating or docked. When the QToolBar is docked, how can one get the Qt::ToolBarArea (LeftToolBarArea, RightToolBarArea, TopToolBarArea, BottomToolBarArea) where the QToolBar is docked? Thanks.
I would try this : Qt::ToolBarArea QMainWindow::toolBarArea ( QToolBar * toolbar ) const; Hope this helps !
2,026,652
2,026,709
Macro's with n number of arguments
Possible Duplicate: C/C++: How to make a variadic macro (variable number of arguments) Just wondering if this is at all possible. Instead of how I'm currently handling logging and messages with multiple parameters, where I'm having to have a number of different macros for each case, such as: #define MSG( msg ) #define MSG1( fmt, arg1 ) #define MSG2( fmt, arg1, arg2 ) #define MSG3( fmt, arg1, arg2, arg3 ) #define MSG4( fmt, arg1, arg2, arg3, arg4 ) #define MSG5( fmt, arg1, arg2, arg3, arg4, arg5 ) #define MSG6( fmt, arg1, arg2, arg3, arg4, arg5, arg6) is there any way of defining just one macro that can accept any number of arguments? Thanks
Well since @GMan didn't want to put that as an answer himself, have a look at variadic macros which are part of the C99 standard. Your question is tagged C++ though. Variadic macros are not part of the C++ standard but they are supported by most compilers anyway: GCC and MSVC++ starting from MSVC2005.
2,026,724
2,029,203
Eclipse CDT Editor support for altivec C++ extensions?
Does the Eclipse CDT C++ editor have a means of supporting the Altivec C++ language extensions, as implemented for example in the GNU g++ compilers when compiling with -maltivec? Specifically, can it be made to stop reporting the vector data types as syntax errors? e.g. vector unsigned char foo; declares a 128-bit vector variable named "foo" containing sixteen 8-bit unsigned chars.
The Eclipse CDT has two C++ parsers, one of which aims for GNU compatibility and currently lacks support for Altivec. The second aims for compatibility with XLC, and has syntactic support for Altivec types in program code (but not semantic support!), with support for some GNU extensions too. That can be gotten from Eclipse CDT CVS (look for the java package org.eclipse.cdt.core.lrparser.xlc) Once the XLC parser is installed, it can be selected using the Language Mappings properties page to switch to the XLC C++ parser.
2,026,853
2,065,622
Unable to attach to created process with Visual Studio 2005
I'm having problems attaching to a process spawned from one of my own processes. When I attempt to attach to the process using Visual Studio 2005 (Debug -> Attach to process) I receive the error message: "Unable to attach to the process. The system cannot find the file specified." In my program, I spawned the process that I later want to attach to using the command BOOL res = CreateProcess(exe, cmdLine, NULL, NULL, FALSE, 0, NULL, workingDir, &startupInfo, &procInfo); If I manually start the second process from the command prompt, I can attach to it without any problems. I am also able to attach to it using WinDbg, just not Visual Studio 2005. There is no difference whether I've started the first process from within VS (thus running as an administrator) or if I've started it from the command prompt as a regular user. I am running Visual Studio as an administrator under Vista 64 bit and the executables are all 64-bit. Has anyone seen this before or have any ideas of what I might be doing wrong? Any help is appreciated. Update: I've also tried to set the security attributes for the new process and thread using the following code: DWORD dwRes, dwDisposition; PSID pEveryoneSID = NULL, pAdminSID = NULL; PACL pACL = NULL; PSECURITY_DESCRIPTOR pSD = NULL; EXPLICIT_ACCESS ea[2]; SID_IDENTIFIER_AUTHORITY SIDAuthWorld = SECURITY_WORLD_SID_AUTHORITY; SID_IDENTIFIER_AUTHORITY SIDAuthNT = SECURITY_NT_AUTHORITY; SECURITY_ATTRIBUTES sa; LONG lRes; HKEY hkSub = NULL; // Create a well-known SID for the Everyone group. if(!AllocateAndInitializeSid(&SIDAuthWorld, 1, SECURITY_WORLD_RID, 0, 0, 0, 0, 0, 0, 0, &pEveryoneSID)) {...} // Initialize an EXPLICIT_ACCESS structure for an ACE. // The ACE will allow Everyone read access to the key. 
ZeroMemory(&ea, 2 * sizeof(EXPLICIT_ACCESS)); ea[0].grfAccessPermissions = GENERIC_ALL; ea[0].grfAccessMode = SET_ACCESS; ea[0].grfInheritance= SUB_CONTAINERS_AND_OBJECTS_INHERIT; ea[0].Trustee.TrusteeForm = TRUSTEE_IS_SID; ea[0].Trustee.TrusteeType = TRUSTEE_IS_WELL_KNOWN_GROUP; ea[0].Trustee.ptstrName = (LPTSTR) pEveryoneSID; // Create a SID for the BUILTIN\Administrators group. if(! AllocateAndInitializeSid(&SIDAuthNT, 2, SECURITY_BUILTIN_DOMAIN_RID, DOMAIN_ALIAS_RID_ADMINS, 0, 0, 0, 0, 0, 0, &pAdminSID)) {...} // Initialize an EXPLICIT_ACCESS structure for an ACE. // The ACE will allow the Administrators group full access to // the key. ea[1].grfAccessPermissions = GENERIC_ALL; ea[1].grfAccessMode = SET_ACCESS; ea[1].grfInheritance= SUB_CONTAINERS_AND_OBJECTS_INHERIT; ea[1].Trustee.TrusteeForm = TRUSTEE_IS_SID; ea[1].Trustee.TrusteeType = TRUSTEE_IS_GROUP; ea[1].Trustee.ptstrName = (LPTSTR) pAdminSID; // Create a new ACL that contains the new ACEs. dwRes = SetEntriesInAcl(2, ea, NULL, &pACL); if (ERROR_SUCCESS != dwRes) {...} // Initialize a security descriptor. pSD = (PSECURITY_DESCRIPTOR) LocalAlloc(LPTR, SECURITY_DESCRIPTOR_MIN_LENGTH); if (NULL == pSD) {...} if (!InitializeSecurityDescriptor(pSD, SECURITY_DESCRIPTOR_REVISION)) {...} // Add the ACL to the security descriptor. if (!SetSecurityDescriptorDacl(pSD, TRUE, pACL, FALSE)) {...} // Initialize a security attributes structure. sa.nLength = sizeof (SECURITY_ATTRIBUTES); sa.lpSecurityDescriptor = pSD; sa.bInheritHandle = FALSE; CreateProcess(exe, cmdLine, &sa, &sa, ... with no luck. Update: I am also able to attach to the process using Visual Studio 2008 (still compiled using VS2005), which solves my immediate needs. Since this is under Vista x64, could there be some form of "Vista magic" at play here, where VS2005 does not play nice with Vista? Why this is the case only for processes that I've built and started from my code I cannot really understand...
Ok, I finally found out what caused this problem. I'll post it here in case anyone else encounters this (from the scarcity of answers I guess it ain't that common, but hey...). The problem was that the path used to launch the executable contained a path element consisting of a single dot, like this: c:\dir1\.\dir2\program.exe which apparently made VS2005 go look for an executable at c:\dir1\dir1\dir2\program.exe that of course does not exist... Thank you Mark for Process Monitor! Removing the . made attaching to the process work as expected again.
2,027,079
2,027,111
Why insert from std::map doesn't want to update? [C++]
I'm trying to insert the same key into a map multiple times, but with different values. It doesn't work. I know that operator[] does this job, but my question is: is this behaviour of insert correct? Shouldn't insert() insert? I wonder what the standard says. Unfortunately I don't have it (the C++ standard), so I can't check. Thank you for helpful answers.
If you want to insert the same key with different values, you need std::multimap instead. std::map::insert will not do anything if the key already exists. std::map::operator[] will overwrite the old value. For an STL reference you don't necessarily need the C++ standard itself; something like http://www.cplusplus.com/reference/ will do too.
2,027,363
2,027,452
using tapi to monitor multiple phones and dial or hangup
I have, with a good level of success, got a C# application to use TAPI to connect to my office PBX and dial and hang up calls, but I need to go further and be able to monitor activity and provide CTI to client PCs as well as integration back to my company's web-based CRM. I am focusing on the client app for CTI popups and dial/hangup functions, as the phone number lookup to the CRM is relatively easy. I initially started by registering one handset in TAPI that I could then dial and hang up. I even seem to have registered all the handsets on the system and to be able to dial from any of them, but I don't seem to be able to get any activity logs as to when any of the handsets are ringing etc. Does anyone have any example TAPI code that can get me started or point me in the right direction? I can work with C++, C# or VB.Net as I am okay with any of them.
To monitor multiple devices you will need a 3rd-party TAPI driver from your PBX manufacturer (and they don't all supply them.) The default Windows driver will probably be a 1st-party driver that can only handle one device at a time. You should consider using a central server to monitor all devices and use a hand-rolled socket-based protocol to talk to your CTI clients - that's what we do and it means you don't need TAPI drivers on every PC (which I assure you is a massive PITA.)
2,027,472
2,028,244
Why is the type library in my dll corrupt (registering returns TYPE_E_CANTLOADLIBRARY)?
We have a mature c++ COM codebase that has been building, registering and running for many years. This includes numerous developer machines and autobuild machines. The codebase builds several dlls and exes. Some of these are COM servers. The typical setup is Xp64 using both visual studio 2005 and 2008. We have both 32 bit and 64bit builds. Suddenly our xp64 2005 autobuild machine is failing. The only code change was a trivial change within a c++ helper method and an update to some version numbers. The failure that we see is a failure to register the x64 release version of a dll. The failure appears to be caused by a corrupt dll. The dll builds successfully but attempts to register it fail with TYPE_E_CANTLOADLIBRARY. The dll is supposed to have the type library built in (through an include in the rc file). This has always worked before and still works on our other machines, xp64 VS 2005 and 2008. When inspecting the binary of the broken dll the typelibrary idl source can be seen - although it is at a different location than in a non broken version of the dll. The broken dll fails to register on our other machines - the same machines successfully register their own local builds of the same dll. Oleview also fails with the same error when opening the dll. I'm looking for any suggestions or similar experiences that might help?
Well I think we have nailed this as a Visual Studio bug. We found that the path where our autobuild runs had recently been changed - increasing the absolute pathname lengths of any files that the compiler generates. We also know that the 64bit release build's target folder would have the longest pathname of any of our configurations. We have shortened the path (by renaming our top level directory under which our source tree is checked out) and the problem looks to have gone away - obviously we will repeat this a few times to make sure it isn't a fluke. I'm thinking that when Visual Studio inserts absolute path names in the binary - as it still does - it might be overrunning a buffer... and corrupting the binary.
2,027,508
2,028,513
Simple tool for callgraph in C++
Is there a simple tool that can be used to determine where a function is called from, and which other functions a function calls, etc.? Edit: I'm using Mac OS X (10.6) and just want to do static analysis. Thanks!
How about cscope? Check out 3rd & 4th bullet items on the page: functions called by a function functions calling a function It's been a while since I used cscope on C++, I seem to remember it being rock-solid on C code, but not as strong with C++.
2,027,556
2,027,582
c++ why is constructor in this example called twice?
I'm just trying to understand the behaviour of the following situation: template <typename T1> struct A{ template <typename T2> A(T2 val){ cout<<"sizeof(T1): "<<sizeof(T1)<<" sizeof(T2): "<<sizeof(T2)<<endl; } T1 dummyField; }; So - the class is templated with T1 and the constructor is templated with T2. Now - if I write: A<bool> a = A<bool>(true); the output is as expected: sizeof(T1): 1 sizeof(T2): 1 However - if I write: A<bool> a = A<float>(3.5f); the output is: sizeof(T1): 4 sizeof(T2): 4 sizeof(T1): 1 sizeof(T2): 4 Why is the constructor called twice with template parameter float? Thanks for satisfying my curiosity.
How to avoid copying? In both cases two constructors are called, however you do not see it in the first case as one of them is the compiler generated one. If you want to avoid copying, you need to use a different syntax, like this: A<bool> a(true); A<bool> a(3.5f); Why (and what) copy constructor is called? A<bool> a = A<bool>(true); Here the A (bool val) constructor is used to construct the temporary value, while default compiler generated copy constructor is used to perform the copy of A to A. You are copying the same type, and for same type copy constructor is used. The interesting and not obvious fact here is: Template constructor is never used as a copy constructor, even if it looks like one. A<bool> a = A<float>(3.5f); Here A<float>(float val) constructor is used first to construct the temporary value, and then A<bool>( A<float> val) templated constructor is used to perform the copy.
2,027,558
2,027,585
Communication between processes
I'm looking for some data to help me decide which would be the better/faster for communication between two independent processes on Linux: TCP Named Pipes Which is worse: the system overhead for the pipes or the tcp stack overhead? Updated exact requirements: only local IPC needed will mostly be a lot of short messages no cross-platform needed, only Linux
In the past I've used local domain sockets for that sort of thing. My library determined whether the other process was local to the system or remote and used TCP/IP for remote communication and local domain sockets for local communication. The nice thing about this technique is that local/remote connections are transparent to the rest of the application. Local domain sockets use the same mechanism as pipes for communication and don't have the TCP/IP stack overhead.
2,027,790
2,027,852
c/c++ passing argument by pointer/argument by reference stack frame layout
Will the compiler produce the same code for both of these functions? void foo1(int* val){(*val)++;} void foo2(int& val){val++;} Will it simply write a pointer into the parameter part of foo's stack frame? Or, in the second case, will the callers' and foos' stack frames somehow overlap such that the callers' local variable takes the same memory on the stack as the parameter for foo?
The stacks cannot be made to overlap. Consider that the argument could be a global, a heap object, or even if stored in the stack it could be not the very last element. Depending on the calling convention, other elements might be placed in between one stack frame and the parameters passed into the function (i.e. return address)... And note that even if nothing was added in the stack, the decision cannot be made while compiling the function, but rather when the compiler is processing the calling function. Once the function is compiled, it will not change depending on where it is called from.
2,027,873
2,027,914
Copy constructors and Assignment Operators
I wrote the following program to test when the copy constructor is called and when the assignment operator is called: #include <iostream> class Test { public: Test() : iItem (0) { std::cout << "This is the default ctor" << std::endl; } Test (const Test& t) : iItem (t.iItem) { std::cout << "This is the copy ctor" << std::endl; } ~Test() { std::cout << "This is the dtor" << std::endl; } const Test& operator=(const Test& t) { iItem = t.iItem; std::cout << "This is the assignment operator" << std::endl; return *this; } private: int iItem; }; int main() { { Test t1; Test t2 = t1; } { Test t1; Test t2 (t1); } { Test t1; Test t2; t2 = t1; } } This results in the following output (just added empty lines to make it more understandable): doronw@DW01:~$ ./test This is the default ctor This is the copy ctor This is the dtor This is the dtor This is the default ctor This is the copy ctor This is the dtor This is the dtor This is the default ctor This is the default ctor This is the assignment operator This is the dtor This is the dtor The second and third set behave as expected, but in the first set the copy constructor is called even though the assignment operator is used. Is this behaviour part of the C++ standard or just a clever compiler optimization? (I am using gcc 4.4.1)
No assignment operator is used in the first test-case. It just uses the initialization form called "copy initialization". Copy initialization does not consider explicit constructors when initializing the object. struct A { A(); explicit A(A const&); // explicit copy constructor explicit A(int); // explicit constructor A(char const*c); // non-explicit "converting" constructor }; A a; A b = a; // fail A b1(a); // succeeds, "direct initialization" A c = 1; // fail, no converting constructor found A d(1); // succeeds A e = "hello"; // succeeds, converting constructor used Copy initialization is used in those cases that correspond to implicit conversions, where one does not explicitly kick off a conversion, as in function argument passing, and returning from a function.
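The distinction can be verified with a type that counts which special member runs (a sketch with instrumented counters, not from the original question):

```cpp
#include <cassert>

// Counts copy-constructions and assignments separately.
struct Counted {
    static int copies;
    static int assigns;
    int v;
    Counted() : v(0) {}
    Counted(const Counted& o) : v(o.v) { ++copies; }
    Counted& operator=(const Counted& o) { v = o.v; ++assigns; return *this; }
};
int Counted::copies = 0;
int Counted::assigns = 0;

inline void copyInitDemo() {
    Counted a;
    Counted b = a;   // copy initialization: copy ctor, no assignment
    (void)b;
}
```

After `copyInitDemo()`, the copy counter is 1 and the assignment counter is still 0, confirming `T b = a;` never touches `operator=`.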
2,027,973
2,028,227
parallel Bubble sort using openmp
I wrote C++ code for the bubble sort algorithm and I don't know how to make it parallel using OpenMP, so please help me. This is the code: #include "stdafx.h" #include <iostream> #include <time.h> #include <omp.h> using namespace std; int a[40001]; void sortArray(int [], int); int q=0; int _tmain(int argc, _TCHAR* argv[]) { int x=40000; int values[40000]; for (int i=0;i<x;i++) { values[i]=rand(); } cout << "Sorting Array .......\n"; clock_t start = clock(); sortArray(values, x); cout << "The Array Now Sorted\n"; printf("Elapsed Time : %f\n", ((double)clock() - start) / CLOCKS_PER_SEC); cout << "\n"; } void sortArray(int array[], int size) { bool swap; int temp; do { swap = false; for (int count = 0; count < (size - 1); count++) { if (array[count] > array[count + 1]) { temp = array[count]; array[count] = array[count + 1]; array[count + 1] = temp; swap = true; } } }while (swap); } It currently takes about 13 seconds. I tried to put #pragma omp parallel for before the for statement in the sortArray method and it didn't make any difference; it also takes about 13 seconds... so please help me as fast as you can.
Try this Parallel Bubble Sort algorithm: 1. For k = 0 to n-2 2. If k is even then 3. for i = 0 to (n/2)-1 do in parallel 4. If A[2i] > A[2i+1] then 5. Exchange A[2i] ↔ A[2i+1] 6. Else 7. for i = 0 to (n/2)-2 do in parallel 8. If A[2i+1] > A[2i+2] then 9. Exchange A[2i+1] ↔ A[2i+2] 10. Next k Parallel Analysis Steps 1-10 form one big loop that is repeated n - 1 times. Therefore, the parallel time complexity is O(n). In the algorithm, even-numbered phases need n/2 processors and odd-numbered phases require (n/2) - 1 processors, so it needs O(n) processors. You can still use a swap flag check to stop the routine right before Next k. Of course don't expect great speed improvement without hundreds of physical processors :)
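A sketch of this odd-even transposition idea in OpenMP form (run for n phases, the usual bound that guarantees a sorted result; without OpenMP enabled the pragma is simply ignored and the code runs serially):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Odd-even transposition sort: each phase compares disjoint adjacent
// pairs, so the inner loop's iterations are independent and safe to
// run in parallel.
inline void oddEvenSort(std::vector<int>& a) {
    const int n = static_cast<int>(a.size());
    for (int k = 0; k < n; ++k) {
        // Even phase compares (0,1),(2,3),...; odd phase (1,2),(3,4),...
        int start = k % 2;
        #pragma omp parallel for
        for (int i = start; i + 1 < n; i += 2)
            if (a[i] > a[i + 1])
                std::swap(a[i], a[i + 1]);
    }
}
```

Unlike the original do/while bubble sort, no swap in one phase depends on another swap in the same phase, which is what makes the `parallel for` legal here.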
2,027,991
2,028,018
List of standard header files in C and C++
Where could I find the list of all header files in C and C++? While I am building a library, I am getting an error like 'tree.h not found'. I suppose this is a standard header file in C and C++. This made me curious to know all the header files and what they provide. Is there a place I can search for them? I am working on Solaris Unix.
Try here: http://en.cppreference.com/w/ However, you may also be referring to the header files of your OS. These can be found either on MSDN (Windows) or via the man command (POSIX systems). Or another source if you're on another OS.
2,028,107
2,028,286
STL-friendly pImpl class?
I am maintaining a project that can take a considerable time to build so am trying to reduce dependencies where possible. Some of the classes could make use of the pImpl idiom and I want to make sure I do this correctly and that the classes will play nicely with the STL (especially containers.) Here is a sample of what I plan to do - does this look OK? I am using std::auto_ptr for the implementation pointer - is this acceptable? Would using a boost::shared_ptr be a better idea? Here is some code for a SampleImpl class that uses classes called Foo and Bar: // SampleImpl.h #ifndef SAMPLEIMPL_H #define SAMPLEIMPL_H #include <memory> // Forward references class Foo; class Bar; class SampleImpl { public: // Default constructor SampleImpl(); // Full constructor SampleImpl(const Foo& foo, const Bar& bar); // Copy constructor SampleImpl(const SampleImpl& SampleImpl); // Required for std::auto_ptr? ~SampleImpl(); // Assignment operator SampleImpl& operator=(const SampleImpl& rhs); // Equality operator bool operator==(const SampleImpl& rhs) const; // Inequality operator bool operator!=(const SampleImpl& rhs) const; // Accessors Foo foo() const; Bar bar() const; private: // Implementation forward reference struct Impl; // Implementation ptr std::auto_ptr<Impl> impl_; }; #endif // SAMPLEIMPL_H // SampleImpl.cpp #include "SampleImpl.h" #include "Foo.h" #include "Bar.h" // Implementation definition struct SampleImpl::Impl { Foo foo_; Bar bar_; // Default constructor Impl() { } // Full constructor Impl(const Foo& foo, const Bar& bar) : foo_(foo), bar_(bar) { } }; SampleImpl::SampleImpl() : impl_(new Impl) { } SampleImpl::SampleImpl(const Foo& foo, const Bar& bar) : impl_(new Impl(foo, bar)) { } SampleImpl::SampleImpl(const SampleImpl& sample) : impl_(new Impl(*sample.impl_)) { } SampleImpl& SampleImpl::operator=(const SampleImpl& rhs) { if (this != &rhs) { *impl_ = *rhs.impl_; } return *this; } bool SampleImpl::operator==(const SampleImpl& rhs) const { return impl_->foo_ == rhs.impl_->foo_ && impl_->bar_ == rhs.impl_->bar_; } bool SampleImpl::operator!=(const SampleImpl& rhs) const { return !(*this == rhs); } SampleImpl::~SampleImpl() { } Foo SampleImpl::foo() const { return impl_->foo_; } Bar SampleImpl::bar() const { return impl_->bar_; }
You should consider using copy-and-swap for assignment if it's possible that Foo or Bar might throw as they're being copied. Without seeing the definitions of those classes, it's not possible to say whether they can or not. Without seeing their published interface, it's not possible to say whether they will in future change to do so, without you realising. As jalf says, using auto_ptr is slightly dangerous. It doesn't behave the way you want on copy or assignment. At a quick look, I don't think your code ever allows the impl_ member to be copied or assigned, so it's probably OK. If you can use scoped_ptr, though, then the compiler will do that tricky job for you of checking that it's never wrongly modified. const might be tempting, but then you can't swap.
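The copy-and-swap suggestion can be sketched on a minimal pImpl class (hypothetical names, raw pointer used for brevity): the copy happens in the by-value parameter, and a non-throwing pointer swap gives the strong exception guarantee.

```cpp
#include <algorithm>
#include <cassert>

// Minimal pImpl with copy-and-swap assignment.
class Sample {
public:
    explicit Sample(int x) : impl_(new Impl(x)) {}
    Sample(const Sample& other) : impl_(new Impl(*other.impl_)) {}
    ~Sample() { delete impl_; }

    Sample& operator=(Sample other) {   // pass by value: the copy is made here
        std::swap(impl_, other.impl_);  // steal the copy's state; never throws
        return *this;                   // 'other' destroys our old state
    }

    int value() const { return impl_->x; }

private:
    struct Impl { int x; explicit Impl(int v) : x(v) {} };
    Impl* impl_;
};
```

If constructing the copy throws, the left-hand object is untouched; self-assignment also works without an explicit `this != &rhs` check.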
2,028,331
2,028,385
Qt use-case for same signal to 2 slots on same object?
I am a total newbie to Qt. As I was reading the documentation, I came across this configuration: connect( Object1, Signal1, Object2, slot1 ) connect( Object1, Signal1, Object2, slot2 ) What could possibly be the use-case for this? Looks odd to me coming from an Erlang/Python background. It must have to do with C++ inheritance twists and turns I guess.
This is for cases when you have something like one button that changes two parts of another. It may sound silly, but it would be equivalent to calling the second slot function from the first slot. Say, clicking the play/pause button makes the stop button active or inactive and also changes the tool tip. This could easily be done with one slot, but you may want the option to do them independently at other times. To promote reuse, you use the above method of connecting one signal to 2 slots.
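The mechanism can be modeled without Qt (a toy sketch, not Qt's actual implementation): a "signal" is just a list of slot callbacks invoked in connection order, so connecting two slots to one signal means both run on every emit.

```cpp
#include <cassert>
#include <vector>

typedef void (*Slot)(int);

// Toy signal: mirrors connect(obj1, sig, obj2, slot1) followed by
// connect(obj1, sig, obj2, slot2).
struct Signal {
    std::vector<Slot> slots;
    void connect(Slot s) { slots.push_back(s); }
    void emitSignal(int arg) {
        for (std::size_t i = 0; i < slots.size(); ++i)
            slots[i](arg);
    }
};

// Two independent reactions to the same event, e.g. toggling a stop
// button's enabled state and updating its tool tip.
static int stopEnabled = 0;
static int tooltipVersion = 0;
static void updateStopButton(int v) { stopEnabled = v; }
static void updateToolTip(int v) { tooltipVersion = v + 1; }
```

Each slot stays independently connectable and reusable, which is the point of the two-connect pattern in the question.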
2,028,361
2,028,398
How does Google Maps know my position?
I have a Nokia 5500 Sport mobile phone, and I found that after I installed Google Maps, it can automatically locate my current position. I want to know how Google Maps gets my position and marks it on the map. How can I implement this feature with Symbian C++ (the Nokia 5500's operating system is Symbian 3rd edition)?
This is one of those questions where you need to know the jargon in order to find the answer. The magic word is "Location API". Using it, I found this: http://wiki.forum.nokia.com/index.php/Google_Maps_using_Location_Api_in_Symbian For mobile devices, a location API sometimes more or less hides the details of how the location is actually discovered, and just tells the app where you are. If the phone has GPS, it will use that. Otherwise, the phone network will help it out - it might calculate based on distance and/or direction to one or more base stations, or it might just say "sorry, I can't be very accurate, but I know you're in this cell, so you must be within a few hundred yards of the base station (city cells), or a few miles of it (rural cells)". The API used on the page I link to seems to be specifically GPS. Not sure whether it exists on non-GPS handsets, but if you poke around the Symbian docs long enough you can probably find an alternate location API.
2,028,464
2,028,481
Logic differences in C and Java
Compile and run this code in C #include <stdio.h> int main() { int a[] = {10, 20, 30, 40, 50}; int index = 2; int i; a[index++] = index = index + 2; for(i = 0; i <= 4; i++) printf("%d\n", a[i]); } Output : 10 20 4 40 50 Now for the same logic in Java class Check { public static void main(String[] ar) { int a[] = {10, 20, 30, 40, 50}; int index = 2; a[index++] = index = index + 2; for(int i = 0; i <= 4; i++) System.out.println(a[i]); } } Output : 10 20 5 40 50 Why is there a difference in output between the two languages? The output is understandable for Java, but I cannot understand the output in C. One more thing: if we apply the prefix ++ operator, we get the same result in both languages. Why?
That is because a[index++] = index = index + 2; invokes Undefined Behavior in C. Have a look at this From the link: ..the second sentence says: if an object is written to within a full expression, any and all accesses to it within the same expression must be directly involved in the computation of the value to be written. This rule effectively constrains legal expressions to those in which the accesses demonstrably precede the modification. For example, the old standby i = i + 1 is allowed, because the access of i is used to determine i's final value. The example a[i] = i++ is disallowed because one of the accesses of i (the one in a[i]) has nothing to do with the value which ends up being stored in i (which happens over in i++), and so there's no good way to define--either for our understanding or the compiler's--whether the access should take place before or after the incremented value is stored. Since there's no good way to define it, the Standard declares that it is undefined, and that portable programs simply must not use such constructs. Similar to a[i++]=i (which invokes UB) your expression too invokes UB. Your expression also has a similar behavior. The behavior is well defined in Java.
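To make the undefined expression well-defined, split it into separate statements, each with its own sequence point — the version below (a sketch) pins down Java's left-to-right order explicitly:

```cpp
#include <cassert>

// Java evaluates a[index++] = index = index + 2 in a fixed order:
// subscript first, then the post-increment, then the right-hand side.
// Writing those steps as separate C statements reproduces Java's result.
inline int thirdElementAfterJavaOrder() {
    int a[] = {10, 20, 30, 40, 50};
    int index = 2;
    int subscript = index;   // a[index++]: subscript evaluated first
    index = index + 1;       // ...then the post-increment takes effect
    index = index + 2;       // index = index + 2  ->  5
    a[subscript] = index;    // a[2] = 5, matching the Java output
    return a[2];
}
```

Any C compiler must produce 5 for this rewritten version, because every read of `index` now demonstrably precedes the next modification.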
2,028,561
2,028,708
c++ Garbage collection and calling destructors
Per-frame I need to allocate some data that needs to stick around until the end of the frame. Currently, I'm allocating the data off a different memory pool that allows me to mark it with the frame count. At the end of the frame, I walk the memory pool and delete the memory that was allocated in a particular frame. The problem I'm running into is that in order to keep a hold on the data, I have to place it in a structure thusly: struct FrameMemory { uint32 frameIndex; bool allocatedType; //0 = new(), 1 = new[] void* pMemPtr; } So later, when i get around to freeing the memory, it looks something like this: { for(all blocks) if(block[i].frameIndex == targetIndex) if(block[i].allocatedType == 0) delete block[i].pMemPtr; else if (block[i].allocatedType ==1) delete[] block[i].pMemPtr; } The issue is that, because I have to overload the pointer to the memory as a void*, the DELETE operator doesn't properly DELETE the memory as its' native base type. IE the destructor NEVER gets called for the object. I've attempted to find ways to use smart-pointer templated objects for the solution, but in order to do that, I have to overload the templated class to a non-templated base-type, which makes deletion even more difficult. Does anyone have a solution to a problem like this?
If you don't want to force all the objects to inherit from Destructible, you can store a pointer to a deleter function (or functor) along with the pointer to the data itself. The client code is responsible for providing a function that knows how to delete the data correctly, typically something like: void xxx_deleter(void *data) { xxx *ptr = static_cast<xxx *>(data); delete ptr; } Though the deleter will usually be a lot like the above, this also gives the client the option of storing complex data structures and still getting them deleted correctly.
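A compilable sketch of that idea (names are illustrative): store a deleter function pointer next to the `void*`, with a template stamping out one correctly-typed deleter per stored type, so the real destructor runs.

```cpp
#include <cassert>

typedef void (*Deleter)(void*);

// One instantiation per stored type: casts back before deleting, so
// the destructor is invoked, unlike delete on a raw void*.
template <typename T>
void deleteAs(void* p) {
    delete static_cast<T*>(p);
}

// What a pool entry might carry instead of just a void*.
struct Block {
    void* mem;
    Deleter del;
    void release() { del(mem); mem = 0; }
};

// Instrumented type proving the destructor actually ran.
struct Probe {
    bool* destroyed;
    explicit Probe(bool* flag) : destroyed(flag) {}
    ~Probe() { *destroyed = true; }
};

inline bool destructorRuns() {
    bool destroyed = false;
    Block b;
    b.mem = new Probe(&destroyed);
    b.del = &deleteAs<Probe>;
    b.release();
    return destroyed;
}
```

The array case from the question would get a second template (`deleteArrayAs<T>`, calling `delete[]`) chosen at allocation time, replacing the `allocatedType` flag.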
2,028,862
2,040,900
SSL_CTX_use_PrivateKey_file fail under Linux
I'm trying to use the SSL_CTX_use_PrivateKey_file function in OpenSSL under Linux, but it returns false. The surrounding code has been ported from Windows, where everything runs fine. Is there something that must be done differently depending on the system? I've compiled the OpenSSL library myself (default config etc) under Ubuntu and am using pre-compiled binaries for Windows (linked from the OpenSSL site). The certificates are in .pem files as well as the key. Also, there's a password established. The following is basically what's done; SSL_CTX_set_default_passwd_cb( pContext, passwdCallback ); SSL_CTX_set_default_passwd_cb_userdata( pContext, (void*)this ); SSL_CTX_use_certificate_file( pContext, strCertificateFile, SSL_FILETYPE_PEM ); SSL_CTX_use_PrivateKey_file( pContext, strPrivateKeyFile, SSL_FILETYPE_PEM ); // fails in Linux but works fine in Windows Does anyone have an idea?
To keep things simple, I removed all code from my password callback and had simply pBuf = "mypass"; return 6; This would be the bare minimum for the callback function. This worked fine. So what was different between the Windows code and the Linux code? Well, a call to strcpy_s and strcpy, respectively, was the only difference in the code. What's different between those two (apart from the additional validation parameters)? To validate the string copy operation's success, the code simply checked the return value for equality to 0. However, the two copy functions have different specifications for their return values: strcpy returns the destination pointer (never 0 on success), while Microsoft's strcpy_s returns an error code where 0 means success. Sigh...
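The trap in miniature (a sketch of the bug, not the poster's actual callback): a success check written against strcpy_s-style return codes inverts its meaning when applied to plain strcpy.

```cpp
#include <cassert>
#include <cstring>

// strcpy returns the destination pointer, which is never 0 on success,
// so "== 0 means success" — correct for strcpy_s, which returns an
// errno_t of 0 on success — always reports failure here.
inline bool strcpyLooksLikeFailureUnderStrcpySRules() {
    char buf[16];
    return std::strcpy(buf, "mypass") == 0;  // always false for strcpy
}
```

This is why the password callback appeared to fail only on the Linux (strcpy) side of the port.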
2,029,258
2,029,435
How can i compare queues in cpp?
I need to compare the size of 10 queues and determine the smallest one, to insert the next element into. Creating normal if statements will take A LOT of cases, so is there any way to do it using a queue of queues, for example, or an array of queues? Note: I will need to compare my queues based on 2 separate things in 2 situations: 1- based on size (the number of nodes in it) 2- based on the total of the data in the nodes in it (which I have a separate function to calculate)
You could do something like that std::queue<int> queue1; std::vector<std::queue<int> > queues; // Declare a vector of queues queues.push_back(queue1); // Add all of your queues to the vector // insert other queues here ... std::vector<std::queue<int> >::const_iterator minItt = queues.begin(); // Get the first queue in the vector // Iterate over the remaining queues in the vector to find the one with the smallest size for(std::vector<std::queue<int> >::const_iterator itt = minItt + 1; itt != queues.end(); ++itt) { if(itt->size() < minItt->size()) minItt = itt; } If it's not fast enough for you, you could always make your search in the vector with std::for_each() and a functor.
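Another option worth sketching: `std::min_element` with a comparator does the loop for you, and swapping in a different comparison (e.g. one based on the poster's summed-data function) covers the second criterion.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <queue>
#include <vector>

// Comparator: order queues by element count.
inline bool sizeLess(const std::queue<int>& a, const std::queue<int>& b) {
    return a.size() < b.size();
}

// Index of the shortest queue; replace sizeLess with another predicate
// to compare by total stored data instead.
inline std::size_t indexOfShortest(const std::vector<std::queue<int> >& qs) {
    return std::min_element(qs.begin(), qs.end(), sizeLess) - qs.begin();
}
```

With 10 queues this replaces the cascade of if statements with a single call.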
2,029,272
2,029,330
How to declare a global variable that could be used in the entire program
I have a variable that I would like to use in all my classes without needing to pass it to the class constructor every time I would like to use it. How would I accomplish this in C++? Thanks.
global.h extern int myVar; global.cpp #include "global.h" int myVar = 0; // initialize class1.cpp #include "global.h" ... class2.cpp #include "global.h" ... class3.cpp #include "global.h" ... myVar will be known and usable in every module as a global variable. You do not have to have global.cpp. You could initialize myVar in any of the class .cpp files, but I think this is cleaner for larger programs.
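An alternative sketch (not from the answer above): hide the variable behind a function with a local static. It gives the same global access without a dedicated global.cpp and sidesteps the static initialization order problem across translation units.

```cpp
#include <cassert>

// Accessor-style global: the static is initialized on first use,
// so any translation unit calling myVar() sees a ready value.
inline int& myVar() {
    static int value = 0;
    return value;
}
```

Callers read and write through the reference: `myVar() = 42;` anywhere, `int x = myVar();` anywhere else.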
2,029,278
2,056,599
Forward declaring a function that uses enable_if : ambiguous call
I have some trouble forward declaring a function that uses boost::enable_if: the following piece of code gives me a compiler error: // Declaration template <typename T> void foo(T t); // Definition template <typename T> typename boost::enable_if<boost::is_same<T, int> >::type foo(T t) { } int main() { foo(12); return 0; } When compiling, I get an "ambiguous call to foo" error. According to the definition of enable_if, the 'type' typedef corresponds to void when the condition is true, so as far as I can see, the two signatures of foo match. Why does the compiler think they are different, and is there a correct way to forward declare foo (preferably without repeating the enable_if part)?
This is not only a problem with enable_if. You get the same error on Visual Studio and gcc with the following code: struct TypeVoid { typedef void type; }; template<typename T> void f(); template<typename T> typename T::type f() { } int main() { f<TypeVoid>(); return 0; } I think the main problem is that the return type (before instantiation) is part of the signature of a template function. There is more information here. Regarding your code, if the declaration refers to the definition, you should match both: // Declaration template <typename T> typename boost::enable_if<boost::is_same<T, int> >::type foo(T t); // Definition template <typename T> typename boost::enable_if<boost::is_same<T, int> >::type foo(T t) { } If the declaration refers to a different function, the compiler would never be able to choose the correct one for ints, because they both are valid. However, you can disable the first one for ints using disable_if: // Other function declaration template <typename T> typename boost::disable_if<boost::is_same<T, int> >::type foo(T t); // Defition template <typename T> typename boost::enable_if<boost::is_same<T, int> >::type foo(T t) { }
2,029,283
2,029,331
Reading and writing to a file in c++
I am trying to write a triple vector to a file and then be able to read back into the data structure afterward. When I try to read the file back after its been saved the first fifty values come out correct but the rest of the values are garbage. I'd be really appreciative if someone could help me out here. Thanks a lot! File declaration: fstream memory_file("C:\\Users\\Amichai\\Pictures\\output.txt", ios::in | ios::out); Save function: void save_training_data(fstream &memory_file, vector<vector<vector<long double> > > &training_data) { int sizeI = training_data.size(); memory_file.write((const char *)&sizeI, sizeof(int)); for (int i=0; i < sizeI; i++) { int sizeJ = training_data[i].size(); memory_file.write((const char *)&sizeJ, sizeof(int)); for (int j=0; j < sizeJ; j++) { int sizeK = training_data[i][j].size(); memory_file.write((const char *)&sizeK, sizeof(int)); for (int k = 0; k < sizeK; k++) { int temp; temp = (int)training_data[i][j][k]; memory_file.write((const char *)&temp, sizeof(int)); } } } } Read function: void upload_memory(fstream &memory_file, vector<vector<vector<long double> > > &training_data) { memory_file.seekg(ios::beg); int temp=0; int sizeK, sizeJ, sizeI; memory_file.read((char*)&sizeI, sizeof(int)); training_data.resize(sizeI); for (int i=0; i < sizeI; i++) { memory_file.read((char*)&sizeJ, sizeof(int)); training_data[i].resize(sizeJ); for (int j=0; j < sizeJ; j++) { memory_file.read((char*)&sizeK, sizeof(int)); training_data[i][j].resize(sizeK); for (int k = 0; k < sizeK; k++) { memory_file.read((char*)&temp, sizeof(int)); training_data[i][j][k]=temp; } } } }
Since you're writing binary data (and apparently working under Windows) you really need to specify ios::binary when you open the fstream.
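A small round-trip sketch of the fix (illustrative file name): open both streams with `ios::binary` so bytes like CR, LF, and TAB pass through untranslated.

```cpp
#include <cassert>
#include <cstdio>
#include <fstream>

// Writes raw ints and reads them back; the payload deliberately
// contains 0x09/0x0A/0x0B/0x0D bytes that Windows text mode or
// formatted extraction would mangle or skip.
inline bool binaryRoundTrip(const char* path) {
    int out[3] = {1, 0x0B0D0A09, -7};
    {
        std::ofstream f(path, std::ios::out | std::ios::binary);
        f.write(reinterpret_cast<const char*>(out), sizeof(out));
    }
    int in[3] = {0, 0, 0};
    {
        std::ifstream f(path, std::ios::in | std::ios::binary);
        f.read(reinterpret_cast<char*>(in), sizeof(in));
    }
    std::remove(path);
    return in[0] == out[0] && in[1] == out[1] && in[2] == out[2];
}
```

On Windows, omitting `ios::binary` lets the runtime translate 0x0A into 0x0D 0x0A on output (and the reverse on input), which is exactly the kind of corruption that leaves the first few values intact and the rest garbage.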
2,029,507
2,029,579
How to create a map function in c++?
Say there is a list of integers [1,2,3,4,5] and a map function that multiplies each element by 10 and returns the modified list as [10,20,30,40,50], without modifying the original list. How can this be done efficiently in C++?
Here's an example: #include <vector> #include <iostream> #include <algorithm> using namespace std; int multiply(int); int main() { vector<int> source; for(int i = 1; i <= 5; i++) { source.push_back(i); } vector<int> result; result.resize(source.size()); transform(source.begin(), source.end(), result.begin(), multiply); for(vector<int>::iterator it = result.begin(); it != result.end(); ++it) { cout << *it << endl; } } int multiply(int value) { return value * 10; }
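A small variant of that example, sketched here, uses `std::back_inserter` so the destination grows on the fly — no `resize()` call and no default-constructed placeholder elements:

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <vector>

inline int timesTen(int v) { return v * 10; }

// Map [1..n] -> [10..10n] without touching the source vector.
inline std::vector<int> mapTimesTen(const std::vector<int>& src) {
    std::vector<int> result;
    result.reserve(src.size());  // avoid reallocations during transform
    std::transform(src.begin(), src.end(),
                   std::back_inserter(result), timesTen);
    return result;
}
```

`back_inserter` turns each assignment through the output iterator into a `push_back`, so the pattern also works when the output size isn't known up front.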
2,029,565
2,029,610
Troubleshooting compile time link errors
I'm trying to statically link to libcrypto.a (from the openssl library) after building it from source with a new toolchain. However whenever I try to use any of the functions from that library, I keep receiving "undefined reference" errors. I've made sure the right header file was included. I've also double checked the symbol table of libcrypto.a and made sure these functions are indeed defined. Is there anything else I can do to debug this error- like getting more info out of the linker or examining libcrypto.a itself, to find out why the linker is spitting out "undefined reference" error when the indicted symbols shows up in the symbol table?
Undefined reference/symbol is a linker error indicating that the linker can't find the specified symbol in any of the object modules being linked. This indicates one or more of the following: The specified class/method/function/variable/whatever is not defined anywhere in the project. The symbol is inaccessible, probably due to an access specifier (in C++, doesn't apply to ANSI C) The reference is incorrect. This can be due to a spelling error or similar problem. (unlikely) You've neglected to include the required library in your build script when calling the linker, or listed it in the wrong position - with static libraries such as libcrypto.a, most Unix linkers resolve symbols left to right, so the library must appear after the object files that reference it on the command line.
2,029,651
2,030,018
How do you initialise a dynamic array in C++?
How do I achieve the dynamic equivalent of this static array initialisation: char c[2] = {}; // Sets all members to '\0'; In other words, create a dynamic array with all values initialised to the termination character: char* c = new char[length]; // how do i amend this?
char* c = new char[length]();
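The trailing `()` value-initializes the array, zero-filling every element; plain `new char[length]` would leave the contents indeterminate. A quick check (sketch):

```cpp
#include <cassert>
#include <cstddef>

// Verifies that new char[n]() produces an all-'\0' buffer.
inline bool allZero(std::size_t length) {
    char* c = new char[length]();  // value-initialization zero-fills
    bool zero = true;
    for (std::size_t i = 0; i < length; ++i)
        if (c[i] != '\0')
            zero = false;
    delete[] c;
    return zero;
}
```

This is the dynamic counterpart of `char c[2] = {};` from the question.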
2,029,676
2,029,762
Why can't a Visual C++ interface contain operators?
As per the MSDN doc on __interface, a Visual C++ interface "Cannot contain constructors, destructors, or operators." Why can't an interface contain an operator? Is there that much of a difference between a get method that returns a reference: SomeType& Get(WORD wIndex); and the overloaded indexer operator? SomeType& operator[](WORD wIndex);
The __interface modifier is a Visual C++ extension to help implementing COM interfaces. This allows you to specify a COM 'interface' and enforces the COM interface rules. And because COM is a C compatible definition, you cannot have operators, Ctor or Dtors.
2,029,741
2,029,798
Why does the C++ linker require the library files during a build, even though I am dynamically linking?
I have a C++ executable and I'm dynamically linking against several libraries (Boost, Xerces-c and custom libs). I understand why I would require the .lib/.a files if I choose to statically link against these libraries (relevant SO question here). However, why do I need to provide the corresponding .lib/.so library files when linking my executable if I'm dynamically linking against these external libraries?
The compiler isn't aware of dynamic linking, it just knows that a function exists via its prototype. The linker needs the lib files to resolve the symbol. The lib for a DLL contains additional information like what DLL the functions live in and how they are exported (by name, by ordinal, etc.) The lib files for DLL's contain much less information than lib files that contain the full object code - libcmmt.lib on my system is 19.2 MB, but msvcrt.lib is "only" 2.6 MB. Note that this compile/link model is nearly 40 years old at this point, and predates dynamic linking on most platforms. If it were designed today, dynamic linking would be a first class citizen (for instance, in .NET, each assembly has rich metadata describing exactly what it exports, so you don't need separate headers and libs.)
2,030,750
2,031,002
Using DB Api in a portable manner
I need to develop some kind of application and use a DB in it. Let's say I want to develop it on Windows currently; however, in a couple of months I may have to migrate it to Linux. I started reading a little bit about it, but couldn't get to the point I needed. Is there or isn't there a generic/portable/standard API for using a DB? I read there is ODBC, JDBC, iODBC, unixODBC? Why do all of these exist? Can someone help clear things up and set my head straight regarding the issue? Edit - I'm using C++ - so please advise in that direction, even though I'll appreciate inter-language/inter-platform recommendations
There's a bunch of C++ "wrapper" libraries for generic DB access, here's couple of top of my head: SOCI - modern C++ syntax, active development, plays nice with boost, supports multiple backends OTL - header-only (templates), very light-weight Both of these grew out of Oracle-specific work, but support at least several other databases now. Of course you can't really hide vendor differences, but that is general law of leaky abstractions.
2,031,003
2,031,269
Plugin application hangs when invoking functionality from another DLL
I am trying to render a GStreamer pipeline running on top of a XUL window. For this I wrote an XPCOM plugin. A XPCOM plugin is basically a dll file that gets loaded by the Gecko engine. My plugin links with GStreamer and as a consequence it depends on many other GStreamer plugins (also dll files). Invoking GStreamer code (for example a simple function like gst_pipeline_new) causes the application to crash. More specifically it freezes and hangs in glib consuming an entire CPU core (50% of total CPU): Would someone be willing to help me figure out what's going wrong? Edit A few remarks: Stand-alone GStreamer projects on Windows work fine. The XUL plugin works fine as well (as long as I don't make any GStreamer calls). From within the XUL plugin I can call a simple glib functions like g_strndup without any problems. Calling GStreamer functions from within the plugin crashes the app. This code can reproduce the problem. (I'm not sure if it's helpful though..) The plugin .idl file defines the property videoWindow: #include "nsISupports.idl" interface nsIDOMXULElement; [scriptable, uuid(BFE3F1BF-1C7B-4da2-8EAB-12F7683FAF71)] interface IVideo : nsISupports { attribute nsIDOMXULElement videoWindow; }; Its implementation can reproduce the problem: NS_IMETHODIMP Video::SetVideoWindow(nsIDOMXULElement * inXULVideoWindow) { GstElement * pipeline = gst_pipeline_new("test"); // freezes here return NS_OK; } Edit Problem is fixed. See my own answer to this post.
Ok, I'm embarrased... I forgot to execute the GStreamer initialization function: gst_init(NULL, NULL); Problem is fixed now.
2,031,007
2,031,091
fstream skipping characters without reading in bitmap
I am trying to read a bmp file using fstream. However it skips the values between 08 and 0E (hex) for example, for values 42 4d 8a 16 0b 00 00 00 00 00 36 it reads 42 4d 8a 16 00 00 00 00 00 36 skipping 0b like it does not even exist in the document. What to do? code: ifstream in; in.open("ben.bmp", ios::binary); unsigned char a='\0'; ofstream f("s.txt"); while(!in.eof()) { in>>a; f<<a; } EDIT: using in.read(a,1); instead of in>>a; solves the reading problem but I need to write unsigned chars and f.write(a,1); does not accept unsigned chars. Anybody got a function to do the writing with unsigned chars?
#include <fstream> #include <iostream> #include <string> int main(int argc, char *argv[]) { const char *bitmap; const char *output = "s.txt"; if (argc < 2) bitmap = "ben.bmp"; else bitmap = argv[1]; std::ifstream in(bitmap, std::ios::in | std::ios::binary); std::ofstream out(output, std::ios::out | std::ios::binary); char a; while (in.read(&a, 1) && out.write(&a, 1)) ; if (!in.eof() && in.fail()) std::cerr << "Error reading from " << bitmap << std::endl; if (!out) std::cerr << "Error writing to " << output << std::endl; return 0; }
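For the question's EDIT about unsigned chars: the stream interface is expressed in terms of `char`, so cast the pointer when calling write/read — a sketch (illustrative file name) that round-trips the very byte, 0x0B, that `operator>>` was skipping as whitespace:

```cpp
#include <cassert>
#include <cstdio>
#include <fstream>

// Writes one unsigned char and reads it back unchanged.
inline bool unsignedCharRoundTrip(const char* path) {
    unsigned char out = 0x0B;  // vertical tab: skipped by formatted input
    {
        std::ofstream f(path, std::ios::binary);
        f.write(reinterpret_cast<const char*>(&out), 1);
    }
    unsigned char in = 0;
    {
        std::ifstream f(path, std::ios::binary);
        f.read(reinterpret_cast<char*>(&in), 1);
    }
    std::remove(path);
    return in == 0x0B;
}
```

The `reinterpret_cast` between `unsigned char*` and `char*` is the standard idiom for raw byte I/O; the bit pattern is written and read back untouched.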
2,031,483
2,037,511
Using C# how to clean up MSMQ message format to work with C++ IXMLDOMDocument2
I'm trying to get a C++ service to load an XML document from a MSMQ message generated by C#. I can't really change the C++ side of things because I'm trying to inject test messages into the queue. The C++ service is using the following to load the XML. CComPtr<IXMLDOMDocument2> spDOM; CComPtr<IXMLDOMNode> spNode; CComBSTR bstrVal; if(_FAILED(hr = spDOM.CoCreateInstance(CLSID_DOMDocument30))) { g_infoLog->LogCOMError(hr, "CWorker::ProcessBody() Can't Create DOM"); pWork->m_nFailure = WORKFAIL_BADXML; goto Exit; } hr = spDOM->loadXML(bstrBody, &vbResult); The C# code to send the MSMQ message looks like this (just test code not pretty): // open the queue var mq = new MessageQueue(destinationQueue) { // store message on disk at all intermediaries DefaultPropertiesToSend = { Recoverable = true }, // set the formatter to Binary, default is XML Formatter = new BinaryMessageFormatter() }; // send message mq.Send(messageContent, "TestMessage"); mq.Close(); I tried to send the same message using BinaryMessageFormatter but it puts what I think are unicode characters at the top before the XML starts. .....ÿÿÿ ÿ....... ......À) If I use the default XML formatter the message has the following top element. The C++ service doesn't seem to handle this. <?xml version="1 .0"?>..<string>& lt; Do you know of a way I could easily clean up the unicode characters when using the binary formatter? If so I think it might work.
Have you tried the ActiveXMessageFormatter? It might not compile with it as the formatter - I have no way to test here - but it might. EDIT: just tried and it compiles OK; whether the result is any better I still couldn't say for sure.
2,031,524
2,031,555
C++ STL data structure alignment, algorithm vectorization
Is there a way to enforce STL container alignment to a specific byte boundary, perhaps using __attribute__((aligned))? The target compilers are not Microsoft Visual C++. What libraries, if any, provide specialized templates of STL algorithms which have explicit vectorization, e.g. SSE? My compilers of interest are g++, Intel, and IBM XL.
With STL containers, you can provide your own allocator via an optional template parameter. I wouldn't recommend writing an entire allocator from scratch, but you could write one that's just a wrapper around new and delete but ensures that the returned memory meets your alignment requirement. (E.g., if you need n bytes with 16-byte alignment, you use new to allocate n + 15 bytes and return a pointer to the first 16-byte aligned address in that block.) But it might be enough just to add the alignment attribute to the element type. That's outside the scope of the standard, so you'd have to check your compiler documentation and try it.
2,031,746
2,031,775
C++: Continue execution after SIGINT
Okay, I am writing a program that is doing some pretty heavy analysis and I would like to be able to stop it quickly. I added signal(SIGINT, terminate); to the beginning of main and defined terminate like: void terminate(int param){ cout << endl << endl << "Exit [N]ow, or [A]fter this url?" << endl; std::string answer; cin >> answer; if(answer[0] == 'n' || answer[0] == 'N'){ terminateParser(); exit(1); }else if(answer[0] == 'a' || answer[0] == 'A'){ quitAfterUrl = true; } } In Linux, this worked as I expected it to; that is, it waited for user input. But when I try to do the same in Windows, it shows the message and exits anyway. Is there any way to stop SIGINT from closing the program immediately? Update: when I tried BOOL WINAPI handler(DWORD dwCtrlType) { if (CTRL_C_EVENT == dwCtrlType) { // ask the user } return FALSE; } as Gregory suggested, the program still unceremoniously exited without stopping for user input. Update 2: I am not exactly sure what did it, but the code is working now. Thank you all for the help.
From MSDN: Note SIGINT is not supported for any Win32 application, including Windows 98/Me and Windows NT/2000/XP. When a CTRL+C interrupt occurs, Win32 operating systems generate a new thread to specifically handle that interrupt. This can cause a single-thread application such as UNIX, to become multithreaded, resulting in unexpected behavior. This means you will have to use a preprocessing directive and implement a Windows-specific solution. BOOL WINAPI handler(DWORD dwCtrlType) { if (CTRL_C_EVENT == dwCtrlType) { // ask the user } return FALSE; } And at the beginning of main you do: SetConsoleCtrlHandler(handler, TRUE);
2,031,849
2,031,937
OpenCV's IplImage* as function parameter error
I am using OpenCV library and I want to clone picture in separate function, but I cannot send address to the function IplImage* image = cvLoadImage( path, CV_LOAD_IMAGE_GRAYSCALE ); // loading is ok showFoundPoints(image); // -> here it shows errors ... //my function int showFoundPoints(IplImage*image) {...} And I got this build error: error: conversion from IplImage*' to non-scalar typestd::string' requested I don't get why is there conversion. It looks very simple, but I really don't know what to do with this. Thanks for help.
Is the definition of showFoundPoints consistent in the header and the source? It would seem you have it declared differently; one taking a std::string and the other not.
2,031,922
2,034,294
Tips for writing a DBMS
I have taken a graduate level course which is just one big project - to write a DBMS. The objective is not to reinvent the wheel and make an enterprise DBMS to rival Oracle. Only a small subset of SQL commands need to be supported. Nor is the objective to create some fancy hybrid model DBMS for storing multimedia or something. It has to be a traditional RDBMS. The main goal of the project is to use programming techniques to take advantage of modern architectures (multicore processors) to build a high-performing database (speed, load). I was just wondering if there were any resources on query evaluation, optimizers, data structures ideal for DBMSes, or basically anything that could help me create a standout project. The professor was throwing around terms like metaprogramming, for example. The project must be done entirely in C++. Thanks for the replies so far! I cannot optimize an existing DBMS such as MySQL, as the project requires you to build your own DBMS from scratch. Yes, I know this is pretty much reinventing the wheel for the most part, but there is scope for some novel query evaluation and optimization algorithms. If you know any good resources or books dealing with this specific area, then please tell me!
Since your professor mentioned metaprogramming, you might want to look at the following: WAM - Warren Abstract Machine. This compiles prolog code into a set of instructions that can be executed on an abstract machine. The idea is similar to jvm and cli. You don't need to go into this in detail, just understand the idea of an abstract machine. JVM, CLI - same as above. Tools such as lex, yacc, flex, bison. Since you will be writing essentially an interpreter/compiler for SQL commands, you probably want to use some tools. This can be viewed as a form of metaprogramming, since you are using a language to write a tool - so you are programming at the meta-level. Again, the idea of meta-programming - perhaps you can augment your language with constructs that will allow your SQL compiler/interpreter to automatically optimize for parallel queries. These can be implemented as hints etc. to the compiler. Recompilers - you might want to write an interpreter/compiler that recompiles the initial queries into ones that can run in parallel for your target architecture. For example, for an N-core architecture, it might recompile a query into N-subqueries that execute in parallel, then combine the results. I'm not sure that you should go into a great deal of research into standard optimization practices. These can be complex, and the subject of a lifetime of research in themselves. Since the object of the exercise is to take advantage of parallel processing, and meta-programming, that should be the focus of your research.
2,031,940
2,032,126
How to use boost::array with unknown size as object variable
I'd like to use boost::array as a class member, but I do not know the size at compile time. I thought of something like this, but it doesn't work: int main() { boost::array<int, 4> array = {{1,2,3,4}}; MyClass obj(array); } class MyClass { private: boost::array<int, std::size_t> array; public: template<std::size_t N> MyClass(boost::array<int, N> array) : array(array) {}; }; The compiler, gcc, says: error: type/value mismatch at argument 2 in template parameter list for ‘template<class _Tp, long unsigned int _Nm> struct boost::array’ error: expected a constant of type ‘long unsigned int’, got ‘size_t’ Which obviously means that one cannot use variable-sized arrays as class members. If so, this would negate all the advantages of boost::array over vectors or standard arrays. Can you show me what I did wrong?
Boost's array is fixed-size based on the second template parameter, and boost::array<int,4> is a different type from boost::array<int,2>. You cannot have instances of the same class (MyClass in your example) which have different types for their members. However, std::vectors can have different sizes without being different types: struct MyClass { template<std::size_t N> explicit MyClass(boost::array<int, N> const& array) : data(array.begin(), array.end()) {} private: std::vector<int> data; }; int main() { boost::array<int, 4> a = {{1,2,3,4}}; MyClass obj(a); boost::array<int, 2> a2 = {{42,3}}; MyClass obj2(a2); // notice obj.data.size() != obj2.data.size() return 0; } That said, boost::array is still useful (it's even useful in this example code), just not in the exact way you want to use it.
2,032,056
2,048,266
Compilable C++ code to implement a secure SLL/TLS client using MS SSPI
As described here http://www.ddj.com/cpp/184401688 I do not have time to write this from scratch. Asked and not answered https://stackoverflow.com/questions/434961/implementing-ssl THE QUESTION IS: I am looking for some compilable working source code that implements MS SSPI (as alluded to in the thread above), procedural not OOP preferred. I have looked at the code projects sample here: http://www.codeproject.com/KB/IP/sslclasses.aspx But this is C# OOP. Converting this to C++ code is not trivial. OpenSSL SChannel calls follow GSS API standards. There are, of course, some alternatives -- OpenSSL for example. This package is a complete and thorough implementation of the protocol and for someone all too familiar with UNIX is undoubtedly the best choice. The package originally targeted the UNIX community and to compile it relies on the Perl runtime, so some learning curve is required for Windows developers who never worked with UNIX-type systems. Apart from that, OpenSLL does some very non-standard things Nikolai, Having contibuted a lot of COMPILABLE source code (www.coastrd.com) I was hoping to find someone willing to do the same.
This SSPI SChannel SMTPS example should compile and run in Visual Studio 2008 as is http://www.coastrd.com/c-schannel-smtp (the original site seems dead; fortunately WaybackMachine has it archived) SChannel is the Microsoft implementation of the GSS API that wraps the SSL/TLS protocol. Advantages of utilizing SChannel: gory details are shielded from the developer by the SSPI. No extra setup is required to run the final application: SChannel is an integral part of the operating system On Windows ME/2000/XP/... platforms, SChannel is installed and configured by default SChannel calls follow GSS API standards. You do not need to create/install any certificates no third party dll's (1MB or larger) to ship and install The code should produce a session that looks like this: ----- SSPI Initialized ----- WinSock Initialized ----- Credentials Initialized ----- Connectd To Server 70 bytes of handshake data sent 974 bytes of handshake data received 182 bytes of handshake data sent 43 bytes of handshake data received Handshake was successful ----- Client Handshake Performed ----- Server Credentials Authenticated Server subject: C=US, S=California, L=Mountain View, O=Google Inc, CN=smtp.gmail.com Server issuer: C=ZA, S=Western Cape, L=Cape Town, O=Thawte Consulting cc, OU=Certification Services Division, CN=Thawte Premium Server CA, E=premium-server@thawte.com ----- Certificate Chain Displayed ----- Server Certificate Verified ----- Server certificate context released Protocol: TLS1 Cipher: RC4 Cipher strength: 128 Hash: MD5 Hash strength: 128 Key exchange: RSA Key exchange strength: 1024 ----- Secure Connection Info 64 bytes of (encrypted) application data received Decrypted data: 43 bytes 220 mx.google.com ESMTP 6sm17740567yxg.66 Sending 7 bytes of plaintext: EHLO 28 bytes of encrypted data sent 169 bytes of (encrypted) application data received Decrypted data: 148 bytes 250-mx.google.com at your service, [22.33.111.222] 250-SIZE 35651584 250-8BITMIME 250-AUTH LOGIN PLAIN 
250-ENHANCEDSTATUSCODES 250 PIPELINING Sending 7 bytes of plaintext: QUIT 28 bytes of encrypted data sent 69 bytes of (encrypted) application data received Decrypted data: 48 bytes 221 2.0.0 closing connection 6sm17740567yxg.66 ----- SMTP session Complete Sending Close Notify 23 bytes of handshake data sent ----- Disconnected From Server ----- Begin Cleanup ----- All Done -----
2,032,325
2,032,340
C++ virtual function execution efficiency
I am trying to get a better idea of the performance of virtual functions. Here is some example code: struct Foo { virtual void function1(); virtual void function2() { function1(); } }; struct Bar : Foo { virtual void function1(); }; Bar b; Foo &f = b; b.function2(); b.function1(); f.function2(); For each of the three calls in the last three lines of the code sample, do all of them have to look up a function pointer in the virtual table? How many lookups have to be done for the f object? Which ones can be inlined by the compiler? Thanks
The calls on b are static - the compiler knows for sure at compilation time what the type of b will be at runtime (obviously a Bar), so it will directly use the addresses of the methods that will be invoked. Virtual only matters when you make a call via pointer/reference, as the call could have different targets at runtime. This would matter if, for example, you called function1 on a pointer and during runtime changed the actual type that the pointer pointed to. Now the situation here, where you call function2 on f, is tricky for two reasons: the function is never overridden, and you use a reference which cannot be reassigned. Therefore, a really smart compiler that sees all input files could conceivably figure out what the target of the call really will be with 100% confidence (since you're not going to add new classes to the already compiled code). However, AFAIK, compilers do not have to do it, so you would pay the cost. Generally speaking, if you don't plan to override a function ever, don't make it virtual.
2,032,361
2,032,368
what's polymorphic type in C++?
I found an article saying "static_cast is used for non-polymorphic type casting and dynamic_cast is used for polymorphic type casting". I understand that int and double are not polymorphic types. However, I also found that static_cast can be used between a base class and a derived class. What does polymorphic type mean here? Some people say a polymorphic type means a base class with virtual functions. Is that right? Is this the only situation? What else? Could anybody elaborate on this for me?
First of all, the article is not completely correct. dynamic_cast checks the type of an object and may fail, static_cast does not check and largely requires the programmer to know what they're doing (though it will issue compile errors for some egregious mistakes), but they may both be used in polymorphic situations. (dynamic_cast has the additional requirement that at least one of the involved types has a virtual method.) Polymorphism in C++, in a nutshell, is using objects through a separately-defined interface. That interface is the base class, and it is almost always only useful to do this when it has virtual methods. However, it's rare-but-possible to have polymorphism without any virtual methods; often this is a sign of either bad design or having to meet external requirements, and because of that, there's no way to give a good example that will fit here. ("You'll know when to use it when you see it," is, unfortunately, the best advice I can give you here.) Polymorphism example: struct Animal { virtual ~Animal() {} virtual void speak() = 0; }; struct Cat : Animal { virtual void speak() { std::cout << "meow\n"; } }; struct Dog : Animal { virtual void speak() { std::cout << "wouf\n"; } }; struct Programmer : Animal { virtual void speak() { std::clog << "I refuse to participate in this trite example.\n"; } }; Exercising the above classes slightly—also see my generic factory example: std::auto_ptr<Animal> new_animal(std::string const& name) { if (name == "cat") return std::auto_ptr<Animal>(new Cat()); if (name == "dog") return std::auto_ptr<Animal>(new Dog()); if (name == "human") return std::auto_ptr<Animal>(new Programmer()); throw std::logic_error("unknown animal type"); } int main(int argc, char** argv) try { std::auto_ptr<Animal> p = new_animal(argc > 1 ? 
argv[1] : "human"); p->speak(); return 0; } catch (std::exception& e) { std::clog << "error: " << e.what() << std::endl; return 1; } It's also possible to use polymorphism without inheritance, as it's really a design technique or style. (I refuse to use the buzzword pattern here... :P)
2,032,502
2,032,508
Why is Application Binary Interface important for programming
I don't understand why the ABI is important in the context of developing user-space applications. Is the set of system calls for an operating system considered an ABI? But if so, then aren't all the complexities regarding system calls encapsulated within standard libraries? So then is ABI compatibility only relevant for running statically linked applications on different platforms, since the system calls would be embedded into the binary?
An ABI defines a set of alignment, calling convention, and data types that are common to a system. This makes an ABI awfully important if you're doing any sort of dynamic linking; as without it code from one application has no way of calling code provided by another. So, no. ABI compatibility is relevant for all dynamic linking (less so for static). Its worth emphasizing again that a system's ABI affects inter-application work as well as application-to-operating-system work.
2,032,651
2,032,905
Firefox basic modification
I have to modify Firefox to make it an automated client for testing some personal servers. I have to: 1. Have Firefox connect normally, send the HTTP GET, and run all scripts on that web page. 2. Have Firefox not display the page but save it to a file. I have not yet read the documentation, or the source, sorry. I want some hints on what functions Firefox uses for these actions. Also, these functions should be like an internal API; I mean that I should always find the functions in version n+1. I also want to remove all the junk that remains and that I do not use; this will probably be an ugly task. Also, I want some hints on the memory structure of Firefox. Example: how do I access the variables that are available to the client-side scripts Firefox will run? What about the cookies? Thank you.
Instead of modifying the internal code of Firefox you should try implementing what you need in an extension. Better yet, use something already created, like Selenium. You generally don't get useful answers to general questions like this.
2,032,654
2,032,750
Can you call a copy constructor from another method?
/** @file ListP.cpp * ADT list - Pointer-based implementation. */ #include <iostream> #include <cstddef> // for NULL #include <new> // for bad_alloc #include "ListP.h" // header file using namespace std; List::List() : size(0), head(NULL) { } // end default constructor List::List(const List& aList) : size(aList.size) { if (aList.head == NULL) head = NULL; // original list is empty else { // copy first node head = new ListNode; head->item = aList.head->item; // copy rest of list ListNode *newPtr = head; // new pointer // newPtr points to last node in new list // origPtr points to nodes in original list for (ListNode *origPtr = aList.head->next; origPtr != NULL; origPtr = origPtr->next) { newPtr->next = new ListNode; newPtr = newPtr->next; newPtr->item = origPtr->item; } // end for newPtr->next = NULL; } // end if } // end copy constructor void List::copy(const List& aList) { List::List(aList); } // end copy I am trying to create a method called copy that simply calls the copy constructor. When I test this method in main the target list still remains empty. I have stepped through it and all the right lines are executed, but when the copy constructor returns nothing seems to be saved. I feel this has something to do with scope, but cannot pinpoint the problem. Here is the driver program: #include <iostream> using namespace std; #include "ListP.h" int main () { List aList; ListItemType dataItem; aList.insert(1, 9); aList.insert(2, 4); aList.insert(3, 1); aList.insert(4, 2); List bList; bList.copy(aList); bList.retrieve(1, dataItem); cout << dataItem << endl; cout << bList.getLength() << endl; return 0; }
If I understand your question, you cannot do what you are trying to do. Before you can call any other methods on an object, the object must be fully constructed (there is an exception here, I'll get back to that). Furthermore, an object can only be constructed once (*). Therefore, by the time you could call your copy method, the object would already be constructed and you can't (and shouldn't) construct it a second time. The one exception to not being able to call a method on an object that is not fully constructed (i.e. the constructor has not yet returned) is that a constructor itself can call a method on the partially constructed object. So, you could call a copy method from the copy constructor, but not vice versa. That said, if your object provides an optimized swap function, there is a standard trick that you may be thinking of: void List::copy(const List& aList) { List acopy(aList); swap(*this, acopy); } This makes a copy of aList and then swaps the current contents of your object with this copy. acopy, which now has the previous contents of your list, will be properly destructed when copy returns. Finally, if you are going to do it, the current recommendation is actually to tweak it a bit and write it this way: void List::copy(List aList) { swap(*this, aList); } Under certain circumstances, this can be more efficient (and is never less efficient). * - you can do weird things and construct an object twice with placement new. But there is no good reason to do that and many reasons why not to.
2,032,719
2,032,725
C++ cin.fail() question
When running the following code and enter a number, it works fine. But when entering a letter, the program enters an infinite loop, displaying "Enter a number (0 to exit): cin failed." My intent was to handle the cin fail case and prompt the user again. int number; do{ cout << "Enter a number (0 to exit): "; cin >> number; if(cin.fail()){ cout << "cin failed." << endl; cin.clear(); }else{ cout << "cin succeeded, " << number << " entered." << endl; } }while(number != 0);
You need to clear the line from cin, using cin.ignore, in addition to clearing the stream state (which is what cin.clear does). I have several utility functions to make this easier (you'll be interested in clearline in particular, which clears the stream state and the current line) and almost an exact example of what you want. Your code, more or less, using my clearline: #include "clinput.hpp" // move my file to a location it can be used from int main() { using namespace std; while (true) { cout << "Enter a number (0 to exit): "; int number; if (cin >> number) { cout << "Read " << number << '\n'; if (number == 0) { break; } } else { if (cin.eof()) { // tested only *after* failed state cerr << "Input failed due to EOF, exiting.\n"; return 1; } cerr << "Input failed, try again.\n"; clearline(cin); // "cin >> clearline" is identical } } return 0; } There is still a potential issue here (fixed in my clinput_loop.cpp with blankline), with leaving input in the buffer that will screw up later IO (see "42 abc" in the sample session). Extracting the above code into a separate and self-contained function is left as an exercise for the reader, but here's a skeleton: template<class Type, class Ch, class ChTr> Type read(std::basic_istream<Ch,ChTr>& stream, Ch const* prompt) { Type value; // *try input here* if (could_not_get_input or more_of_line_left) { throw std::runtime_error("..."); } return value; } template<class Type, class Ch, class ChTr> void read_into( Type& value, std::basic_istream<Ch,ChTr>& stream, Ch const* prompt ) { value = read<Type>(stream, prompt); } Example use: int n; try { read_into(n, std::cin, "Enter a number: "); } catch (std::runtime_error& e) { //... 
throw; } cout << "Read " << n << '\n'; clearline function extracted for posterity, in case the above links ever break (and slightly changed to make self-contained): #include <istream> #include <limits> template<class C, class T> std::basic_istream<C,T>& clearline(std::basic_istream<C,T>& s) { s.clear(); s.ignore(std::numeric_limits<std::streamsize>::max(), s.widen('\n')); return s; } The template stuff is a bit confusing if you're not used to it, but it's not hard: std::istream is a typedef of std::basic_istream<char, std::char_traits<char> >; std::wistream is a typedef of std::basic_istream<wchar_t, std::char_traits<wchar_t> >; widen allows '\n' to become L'\n' as appropriate; this code works for both of the common char and wchar_t cases, but also any compatible instantiation of basic_istream; it's written to be called as clearline(stream) or stream >> clearline, compare to other manipulators like std::endl, std::ws, or std::boolalpha
2,032,939
2,032,969
Why is COM (Component Object Model) language-independent?
I know that COM provides reusability at the binary level across languages and applications. I read that all components built for COM must adhere to a standard memory layout in order to be language-independent. I do not understand what "standard memory layout" means. What makes COM language-independent?
First, some technical background: C++ compilers usually generate something called a "vtable" for any class with virtual functions. This is basically a table of function pointers. The vtable contains a function pointer to every virtual method implemented by a class. In COM, interfaces are basically abstract base classes which a component implements, e.g.: class CSomeComponent : IUnknown, ISomeOtherInterface { ... }; The vtable for CSomeComponent will include function pointers for all methods defined in these two interfaces. struct __imaginary_vtable_for_CSomeComponent { // methods required by IUnknown HRESULT (*QueryInterface)( const IID& iid, void** ppv ); ULONG (*AddRef)(); ULONG (*Release)(); // methods required by ISomeOtherInterface void (*foo)(); ... }; Any instantiated object has a reference to the vtable of its dynamic type. This is how the program knows how to call the proper method in cases where a base method is overridden in a derived class: class Base { public: virtual void foo() { ... } } class Derived : public Base { public: virtual void foo() { ... } // overrides Base::foo() virtual void bar() { ... } } ... Base* X = new Derived; X->foo(); The last line should call Derived::foo. This works because object X has a reference to the vtable for class Derived. As said, the vtable is like a list of function pointers. Now, vtables have a fixed layout: If class Derived inherits from class Base, the function pointer for method foo will be at the same relative location in Derived's vtable than in Base's vtable: struct __imaginary_vtable_for_Base { void (*foo)(); }; // __imaginary_vtable_for_Base::foo = Base::foo struct __imaginary_vtable_for_Derived { void (*foo)(); void (*bar)(); }; // __imaginary_vtable_for_Derived::foo = Derived::foo Now, if the compiler sees something like X->foo(), it knows that all for all classes derived from Base, method foo corresponds to the first entry in the vtable. 
So it issues a call to the first function pointer, which in X's case is a call to Derived::foo. Answer to your question: Compilers can only generate COM components if they generate the same layout for vtables that the COM specification demands. vtables can be implemented in various different ways, especially when it comes to multiple inheritance (which is required with COM components). Adhering to a certain vtable format is necessary so that when you call a component's method f, you will actually call method f and not some other method g which happens to sit at f's position in the component class's vtable. I suppose COM-compliant compilers essentially have to produce the same vtable layouts as Microsoft Visual C++, since the COM technology was defined by Microsoft. P.S.: Sorry for being so technical, I hope the above information is of some use to you.
2,033,110
2,033,112
Passing a string literal as a type argument to a class template
I want to declare a class template in which one of the template parameters takes a string literal, e.g. my_class<"string">. Can anyone give me some compilable code which declares a simple class template as described? Note: The previous wording of this question was rather ambiguous as to what the asker was actually trying to accomplish, and should probably have been closed as insufficiently clear. However, since then this question became multiple times referred-to as the canonical ‘string literal type parameter’ question. As such, it has been re-worded to agree with that premise.
Sorry, C++ does not currently support the use of string literals (or floating-point literals) as template parameters. But re-reading your question, is that what you are asking? You cannot say: foo <"bar"> x; but you can say: template <typename T> struct foo { foo( T t ) {} }; foo <const char *> f( "bar" );
2,033,258
2,033,308
Creating libraries for Arduino
I want to write a library for my Arduino (header and class files), but I don't know what tools to use for this job or how to test and debug them. The Arduino IDE just helps in writing plain programs for direct bootloading; it is not a full project management tool (correct me if I am wrong and guide me appropriately with relevant references). Please help.
The compiler supports the #include directive; you can write your library, then #include it. This is expanded on in this tutorial about writing libraries for the Arduino.
2,033,306
2,033,365
Looking for a permissive and active cross-platform image processing library in C/C++
I'm looking for a cross-platform image processing library in C/C++ which is under active development. One more requirement: no GPL license. Some references: Fast Cross-Platform C/C++ Image Processing Libraries Cross-platform drawing library
We used ImageMagick for some courses at university. It worked quite well.
2,033,473
2,033,495
How to sort filenames with possibly unpadded numbers in c++?
I need to sort filenames that can have a common root, but are then followed by numbers that are not necessarily padded uniformely; one example is what you obtain when you rename multiple files in Windows. filenamea (1).txt filenamea (2).txt ... filenamea (10).txt ... filenamea (100).txt ... filenameb.txt ... filenamec (1).txt filenamec (2).txt and so on...
There are already similar questions; I know of Sort on a string that may contain a number and How to implement a natural sort algorithm in C. So you can also look there for more inspiration and help. Both questions' answers suggest http://www.davekoelle.com/alphanum.html, which is basically what Pascal Cuoq suggested. You can also look at the Coding Horror article, where some other algorithms are linked: Sorting for Humans: Natural Sort Order
2,033,608
2,033,632
MinGW linker error: winsock
I am using MinGW compiler on Windows to compile my C++ application with sockets. My command for linking looks like: g++.exe -Wall -Wno-long-long -pedantic -lwsock32 -o dist/Windows/piskvorky { there are a lot of object files } and I have also tried g++.exe -Wall -Wno-long-long -pedantic -lws2_32 -o dist/Windows/piskvorky { there are a lot of object files } but in both case I get this error: build/Windows/MinGW-Windows/src/utils/tcpunit.o:tcpunit.cpp:(.text+0x33): undefined reference to `closesocket@4' build/Windows/MinGW-Windows/src/utils/tcpunit.o:tcpunit.cpp:(.text+0xd0): undefined reference to `send@16' build/Windows/MinGW-Windows/src/utils/tcpunit.o:tcpunit.cpp:(.text+0x1ee): undefined reference to `recv@16' build/Windows/MinGW-Windows/src/utils/tcpdevice.o:tcpdevice.cpp:(.text+0x184): undefined reference to `WSAStartup@8' build/Windows/MinGW-Windows/src/utils/tcpdevice.o:tcpdevice.cpp:(.text+0x1a5): undefined reference to `closesocket@4' build/Windows/MinGW-Windows/src/utils/tcpdevice.o:tcpdevice.cpp:(.text+0x1cb): undefined reference to `closesocket@4' build/Windows/MinGW-Windows/src/utils/tcpdevice.o:tcpdevice.cpp:(.text+0x1d3): undefined reference to `WSACleanup@0' build/Windows/MinGW-Windows/src/utils/tcpdevice.o:tcpdevice.cpp:(.text+0x6fe): undefined reference to `bind@12' build/Windows/MinGW-Windows/src/utils/tcpdevice.o:tcpdevice.cpp:(.text+0x724): undefined reference to `listen@8' build/Windows/MinGW-Windows/src/utils/tcpdevice.o:tcpdevice.cpp:(.text+0x7f0): undefined reference to `gethostbyaddr@12' build/Windows/MinGW-Windows/src/utils/tcpdevice.o:tcpdevice.cpp:(.text+0x83c): undefined reference to `socket@12' build/Windows/MinGW-Windows/src/utils/tcpdevice.o:tcpdevice.cpp:(.text+0x86f): undefined reference to `htons@4' build/Windows/MinGW-Windows/src/utils/tcpdevice.o:tcpdevice.cpp:(.text+0x8b5): undefined reference to `connect@12' build/Windows/MinGW-Windows/src/utils/tcpdevice.o:tcpdevice.cpp:(.text+0x9c6): undefined reference to `accept@12' Do you 
have any ideas where the problem can be, please?
Put the -lws2_32 AFTER the list of object files - GCC searches libraries and object files in the order they appear on the command line. Just to help the other viewers out there: gcc hello.c -o hello.o -lws2_32
2,033,809
2,033,920
I'm using Crypto++ for RSA encryption. My plain text exceeds FixedMaxPlaintextLength. What should I do?
Should I break the text into chunks? Is RSA the wrong encryption scheme?
Wrong scheme. The standard technique for message encryption (for example, PGP and CMS) is to generate a random symmetric session key K for something like AES and encrypted the message with AES using key K. Then encrypt K with the public key of each recipient of the message.
2,033,878
2,034,182
Cross-platform redirect of standard input and output of spawned process in native C/C++ (edit with solution)
I have a string command I'd like to execute asynchronously while writing to its input and reading its output. Sounds easy, right, the devil is in the cross-platform. I'm targeting both MSVC/Win32 and gcc/Linux and obviously want to write the minimum amount of platform-specific code. My google-fu has failed me, I get too much noise for my queries, so I started with what I know. popen - nice and easy, returns FILE* that is easy to consume everywhere. But here's what MSDN have to say about _popen: If used in a Windows program, the _popen function returns an invalid file pointer that causes the program to stop responding indefinitely. _popen works properly in a console application. To create a Windows application that redirects input and output, see Creating a Child Process with Redirected Input and Output in the Platform SDK. and so popen is out of the question (edit: because I'd like my code to work in GUI application). The Windows way to do it is in my opinion rather ugly and verbose. I could live with platform specific spawn code but I'd want at least the I/O code to be the same. Here, however, I hit a wall between the WinAPI HANDLEs and C FILE*, and int file descriptor. Is there a way to "convert" a HANDLE to FILE* or int fd or vice-versa? (Google failed me once more on this one, all the keywords I tried are way overused) Is there better way to do the whole thing with little platform-specific code? External libraries are not out of the question, however dependency maintenance is a pain, especially so on multiple platforms so I'd like to reduce dependencies. I didn't find such library in boost also. Just for the record, what worked for me in the end. On Windows/MSVC, CreatePipe() + CreateProcess() as outlined here, using _open_osfhandle() followed by _fdopen() to get FILE* to the process input and output. On Linux/GCC, nothing new here, creating pipe()s; fork() then dup2() the pipes; exec(); fdopen() on the relevant file descriptors. 
That way, only the process spawning code is platform dependent (which is ok, as on Windows I'd like to control additional STARTUPINFO parameters), writing input and reading output is done through standard FILE* and related functions.
for converting windows HANDLEs to C file descriptors use _open_osfhandle http://msdn.microsoft.com/en-us/library/bdts1c9x%28VS.71%29.aspx EDIT: this example once helped me aswell with a similar problem: http://www.halcyon.com/~ast/dload/guicon.htm
2,033,903
2,034,131
How many palindromes can be formed by selections of characters from a string?
I'm posting this on behalf of a friend since I believe this is pretty interesting: Take the string "abb". By leaving out any number of letters less than the length of the string we end up with 7 strings. a b b ab ab bb abb Out of these 4 are palindromes. Similarly for the string "hihellolookhavealookatthispalindromexxqwertyuiopasdfghjklzxcvbnmmnbvcxzlkjhgfdsapoiuytrewqxxsoundsfamiliardoesit" (a length 112 string) 2^112 - 1 strings can be formed. Out of these how many are palindromes?? Below there is his implementation (in C++, C is fine too though). It's pretty slow with very long words; he wants to know what's the fastest algorithm possible for this (and I'm curious too :D). #include <iostream> #include <cstring> using namespace std; void find_palindrome(const char* str, const char* max, long& count) { for(const char* begin = str; begin < max; begin++) { count++; const char* end = strchr(begin + 1, *begin); while(end != NULL) { count++; find_palindrome(begin + 1, end, count); end = strchr(end + 1, *begin); } } } int main(int argc, char *argv[]) { const char* s = "hihellolookhavealookatthis"; long count = 0; find_palindrome(s, strlen(s) + s, count); cout << count << endl; }
First of all, your friend's solution seems to have a bug since strchr can search past max. Even if you fix this, the solution is exponential in time. For a faster solution, you can use dynamic programming to solve this in O(n^3) time. This will require O(n^2) additional memory. Note that for long strings, even 64-bit ints as I have used here will not be enough to hold the solution. #define MAX_SIZE 1000 long long numFound[MAX_SIZE][MAX_SIZE]; //intermediate results, indexed by [startPosition][endPosition] long long countPalindromes(const char *str) { int len = strlen(str); for (int startPos=0; startPos<=len; startPos++) for (int endPos=0; endPos<=len; endPos++) numFound[startPos][endPos] = 0; for (int spanSize=1; spanSize<=len; spanSize++) { for (int startPos=0; startPos<=len-spanSize; startPos++) { int endPos = startPos + spanSize; long long count = numFound[startPos+1][endPos]; //if str[startPos] is not in the palindrome, this will be the count char ch = str[startPos]; //if str[startPos] is in the palindrome, choose a matching character for the palindrome end for (int searchPos=startPos; searchPos<endPos; searchPos++) { if (str[searchPos] == ch) count += 1 + numFound[startPos+1][searchPos]; } numFound[startPos][endPos] = count; } } return numFound[0][len]; } Explanation: The array numFound[startPos][endPos] will hold the number of palindromes contained in the substring with indexes startPos to endPos. We go over all pairs of indexes (startPos, endPos), starting from short spans and moving to longer ones. For each such pair, there are two options: The character at str[startPos] is not in the palindrome. In that case, there are numFound[startPos+1][endPos] possible palindromes - a number that we have calculated already. character at str[startPos] is in the palindrome (at its beginning). We scan through the string to find a matching character to put at the end of the palindrome. 
For each such character, we use the already-calculated results in numFound to find number of possibilities for the inner palindrome. EDIT: Clarification: when I say "number of palindromes contained in a string", this includes non-contiguous substrings. For example, the palindrome "aba" is contained in "abca". It's possible to reduce memory usage to O(n) by taking advantage of the fact that calculation of numFound[startPos][x] only requires knowledge of numFound[startPos+1][y] for all y. I won't do this here since it complicates the code a bit. Pregenerating lists of indices containing each letter can make the inner loop faster, but it will still be O(n^3) overall.
2,033,908
2,033,943
How do you determine full paths from filename command line arguments in a c++ program?
I am writing a program in c++ that accepts a filename as an argument on the command line: >> ./myprogram ../path/to/file.txt I know I can simply open an fstream using argv[1], but the program needs more information about the exact location (i.e. full pathname) of the file. I thought about appending argv[1] to getcwd(), however obviously in the example above you'd end up with /path/../path/to/file.txt. Not sure whether fstream would resolve that path automatically, but even if it did, I still don't have the full path without a lot of string processing. Of course, that method wouldn't work at all if the path provided was already absolute. And since this program may be run on Linux/Windows/etc, simply detecting a starting '/' character won't work to determine whether the argument was a full path or not. I would think this is a fairly common issue to deal with path names across multiple OSs. So how does one retrieve the full path name of a command line argument and how is this handled between operating systems?
Pathname handling is highly OS-specific: some OS have a hierarchy with just one root (e.g. / on Unix ), some have several roots a la MS-DOS' drive letters; some may have symbolic links, hard links or other kinds of links, which can make traversal tricky. Some may not even have the concept of a "canonical" path to a file (e.g. if a file has hard links, it has multiple names, none of which is more "canonical"). If you've ever tried to do path-name manipulation across multiple OS in Java, you know what I mean :-). In short, pathname handling is system-specific, so you'll have to do it separately for each OS (family), or use a suitable library. Edit: You could look at Apache Portable Runtime, or at Boost (C++ though), both have pathname handling functions.
2,033,997
2,034,007
How to compile for Windows on Linux with gcc/g++?
I have written some effects in C++ (g++) using freeglut on Linux, and I compile them with g++ -Wall -lglut part8.cpp -o part8 So I was wondering if it is possible to have g++ make static compiled Windows executables that contains everything needed? I don't have Windows, so it would be really cool, if I could do that on Linux :)
mingw32 exists as a package for Linux. You can cross-compile and -link Windows applications with it. There's a tutorial here at the Code::Blocks forum. Mind that the command changes to x86_64-w64-mingw32-gcc-win32, for example. Ubuntu, for example, has MinGW in its repositories: $ apt-cache search mingw [...] g++-mingw-w64 - GNU C++ compiler for MinGW-w64 gcc-mingw-w64 - GNU C compiler for MinGW-w64 mingw-w64 - Development environment targeting 32- and 64-bit Windows [...]
2,034,450
2,034,454
size of dynamically allocated array
Is it true that a pointer assigned to the starting address of a dynamically allocated array does not have the information of the size of the array? So we have to use another variable to store its size for later processing the array through the pointer. But when we free the dynamically allocated array, we don't specify the size, instead we just "free ptr" or "delete [] ptr". How could free or delete know the size of the array? Can we use the same scheme to avoid storing the size of the array in another variable? Thanks!
Yes, this is true. delete knows the size of the memory chunk because new adds extra information to the chunk (usually before the area returned to the user), containing its size, along with other information. Note that this is all very much implementation specific and shouldn't be used by your code. So to answer your last question: No - we can't use it - it's an implementation detail that's highly platform and compiler dependent. For example, in the sample memory allocator demonstrated in K&R2, this is the "header" placed before each allocated chunk: typedef long Align; /* for alignment to long boundary */ union header { /* block header */ struct { union header *ptr; /* next block if on free list */ unsigned size; /* size of this block */ } s; Align x; /* force alignment of blocks */ }; typedef union header Header; size is the size of the allocated block (that's then used by free, or delete).
2,034,465
2,034,493
How to make exe in Qt?
I'm starting to learn Qt and I'm stuck on particular step, which is: I cannot create executable file. My steps are as follows: Creation of *.cpp In console typing qmake -project (this creates .pro file) In console typing qmake -makefile (now I have makefile + some other files) I'm trying to create .exe by typing qmake but this isn't working. I've also tried nmake, bmake and make but no results. Any help will be appreciated. Thank you.
It depends on what compiler you are using. If you're using GCC or MinGW, type make. If make cannot be found, either it is not installed, or it's not in your path (more likely to be the case). Try using the command prompt shortcut Qt provides you (if on Windows). If on a POSIX-based/-like system, make should exist. If it doesn't, then it depends if you're on a Mac or on Linux/BSD. On a Mac, make should come with the developer tools, which is one of the last CDs in the OS X installation CDs. If you're on Linux, use your package manager. rpm for Red Hat based systems, apt for Debian based systems, and so on. Google about them. If you're using Visual C++ and nmake doesn't work, it could mean that nmake isn't on your path. Try using the Visual C++ command prompt instead of the normal command prompt (should be somewhere in your start menu). It would be more helpful if you could mention how you installed Qt, and on what system.
2,034,635
2,034,756
explicit copy constructor or implicit parameter by value
I recently read (and unfortunately forgot where), that the best way to write operator= is like this: foo &operator=(foo other) { swap(*this, other); return *this; } instead of this: foo &operator=(const foo &other) { foo copy(other); swap(*this, copy); return *this; } The idea is that if operator= is called with an rvalue, the first version can optimize away construction of a copy. So when called with a rvalue, the first version is faster and when called with an lvalue the two are equivalent. I'm curious as to what other people think about this? Would people avoid the first version because of lack of explicitness? Am I correct that the first version can be better and can never be worse?
You probably read it from: http://cpp-next.com/archive/2009/08/want-speed-pass-by-value/ I don't have much to say since I think the link explains the rationale pretty well. Anecdotally I can confirm that the first form results in fewer copies in my builds with MSVC, which makes sense since compilers might not be able to do copy-elision on the second form. I agree that the first form is a strict improvement and is never worse than the second. Edit: The first form might be a bit less idiomatic, but I don't think it's much less clear. (IMO, it's not any more surprising than seeing the copy-and-swap implementation of the assignment operator for the first time.) Edit #2: Oops, I meant to write copy-elision, not RVO.
2,034,835
2,034,851
linked-list in C++ how to go to "next element" using STL list
I have a very basic question. I want to use STL's list instead of creating my own linked-list ( my code is shown below) struct myList { myList *next; myList *previous; }; myList->next = NULL; Using STL list: #include <list> std::list<int> L; L.push_back(1); My question is, how to access the "next" element in STL's list?
std::list is a container. To access individual nodes, you need to use an iterator. For example, to get the head node, you use std::list<int>::const_iterator cit = L.begin(); To move to the next node, you use ++ cit;
2,034,916
2,034,936
Is it okay to inherit implementation from STL containers, rather than delegate?
I have a class that adapts std::vector to model a container of domain-specific objects. I want to expose most of the std::vector API to the user, so that they may use familiar methods (size, clear, at, etc...) and standard algorithms on the container. This seems to be a reoccurring pattern for me in my designs: class MyContainer : public std::vector<MyObject> { public: // Redeclare all container traits: value_type, iterator, etc... // Domain-specific constructors // (more useful to the user than std::vector ones...) // Add a few domain-specific helper methods... // Perhaps modify or hide a few methods (domain-related) }; I'm aware of the practice of preferring composition to inheritance when reusing a class for implementation -- but there's gotta be a limit! If I were to delegate everything to std::vector, there would be (by my count) 32 forwarding functions! So my questions are... Is it really so bad to inherit implementation in such cases? What are the risks? Is there a safer way I can implement this without so much typing? Am I a heretic for using implementation inheritance? :) Edit: What about making it clear that the user should not use MyContainer via a std::vector<> pointer: // non_api_header_file.h namespace detail { typedef std::vector<MyObject> MyObjectBase; } // api_header_file.h class MyContainer : public detail::MyObjectBase { // ... }; The boost libraries seem to do this stuff all the time. Edit 2: One of the suggestions was to use free functions. I'll show it here as pseudo-code: typedef std::vector<MyObject> MyCollection; void specialCollectionInitializer(MyCollection& c, arguments...); result specialCollectionFunction(const MyCollection& c); etc... A more OO way of doing it: typedef std::vector<MyObject> MyCollection; class MyCollectionWrapper { public: // Constructor MyCollectionWrapper(arguments...) 
{construct coll_} // Access collection directly MyCollection& collection() {return coll_;} const MyCollection& collection() const {return coll_;} // Special domain-related methods result mySpecialMethod(arguments...); private: MyCollection coll_; // Other domain-specific member variables used // in conjunction with the collection. }
The risk is deallocating through a pointer to the base class (delete, delete[], and potentially other deallocation methods). Since these classes (deque, map, string, etc.) don't have virtual dtors, it's impossible to clean them up properly with only a pointer to those classes: struct BadExample : vector<int> {}; int main() { vector<int>* p = new BadExample(); delete p; // this is Undefined Behavior return 0; } That said, if you're willing to make sure you never accidentally do this, there's little major drawback to inheriting them—but in some cases that's a big if. Other drawbacks include clashing with implementation specifics and extensions (some of which may not use reserved identifiers) and dealing with bloated interfaces (string in particular). However, inheritance is intended in some cases, as container adapters like stack have a protected member c (the underlying container they adapt), and it's almost only accessible from a derived class instance. Instead of either inheritance or composition, consider writing free functions which take either an iterator pair or a container reference, and operate on that. Practically all of <algorithm> is an example of this; and make_heap, pop_heap, and push_heap, in particular, are an example of using free functions instead of a domain-specific container. So, use the container classes for your data types, and still call the free functions for your domain-specific logic. But you can still achieve some modularity using a typedef, which allows you to both simplify declaring them and provides a single point if part of them needs to change: typedef std::deque<int, MyAllocator> Example; // ... Example c (42); example_algorithm(c); example_algorithm2(c.begin() + 5, c.end() - 5); Example::iterator i; // nested types are especially easier Notice the value_type and allocator can change without affecting later code using the typedef, and even the container can change from a deque to a vector.
2,034,951
2,034,971
Enforcing File Integrity
I've been working on a project in C++ using openGL and am looking to save the current scene to a text file. Something simple along the lines of, cube at x,y,z and its color etc. My question is about how to make sure that the file has not been changed by a user. I thought about calculating a checksum of the string and including that in the file. e.g. checksum, string But again this is open to the user modifying the values. Any recommendations or is this just a case of writing a good parser? Cheers
theoretically: you can't. practically: encrypt it and obfuscate the key within your program (this is how much of DRM works) although you will never be able to stop a determined user. Why is it so important that the user can't modify it? If you want users to be able to read, but not modify make the last line a HMAC of the file and a secret key.
2,034,955
2,037,190
VC++ Library Clashing Problem
I am working on a C++ project that uses Qt (gui lib), VTK (graphics lib) and another library which is so obscure I won't mention its name and will instead call it LIB_X. The project uses Qt for the gui components and VTK (more precisely the QVTKWidget extension provided by VTK that supports Qt) for rendering geometry.. and it uses LIB_X to gather and manipulate geometry. The problem is that it turns out that LIB_X actually uses VTK (where and how, I don't know, it's closed source). At first there was no problem, compiling with both libs linked was going fine, but at some point I called a certain (and highly needed) LIB_X function and compiling led to a bunch of 'blah blah something about a VTK lib/obj already defined in LIB_X dll' errors. e.g. (and note this is with /FORCE:MULTIPLE so it's a warning here, let me know if you want the error without /FORCE:MULTIPLE and I'll post it): 1>LIB_X.lib(LIB_X.dll) : warning LNK4006: "public: __thiscall std::vector<double,class std::allocator<double> >::~vector<double,class std::allocator<double> >(void)" (??1?$vector@NV?$allocator@N@std@@@std@@QAE@XZ) already defined in vtkCommon.lib(vtkInformationDoubleVectorKey.obj); I tried using /FORCE:MULTIPLE and it seemed to be a miracle at first, but I am getting random errors in code that would mostly give heap errors. I decided to remove all references to LIB_X from the main project and created a static lib that would handle all LIB_X stuff. I'm not a C++ expert, so I'm not certain how it handles lib clashing when you're using a pre-compiled lib, but I still received lib clashing errors when linking my static lib into my main project, so I still have to use /FORCE:MULTIPLE. 
Once I had the static lib it seemed like the random errors had gone away, I was able to do a lot with LIB_X methods in the main project via the static lib, BUT out of nowhere, I added a new data member to my main project's class (a std::vector of doubles) and suddenly I was getting a heap error in one of my static library's methods. If I commented out the new data member, the static library's method would run fine. I hate to give the current error, because honestly I'm not sure if examining it will be worthwhile, but here it is anyway in case it can help: note: it crashes to xutility on about line 151, pops up assertion: "file: dbgheap.c line: 1279 expression: _CrtIsValidHeapPointer(pUserData)" The error comes after adding a vector vector double to a vector vector vector double, crashing on the push_back line: std::vector<std::vector<double>> tmpVec; for(srvl_iter = srvl.begin(); srvl_iter != srvl.end(); ++srvl_iter) { tmpVec.push_back((*srvl_iter).getControlPoints()); } this->_splines.push_back(tmpVec); //CRASH It only started crashing here when I added a new data member to my main project (separate from the static lib!) Commenting out the new data member takes the error away. std::vector<std::vector<std::vector<double>>> _geometry; So, /FORCE:MULTIPLE seems bad, I get random errors that just don't make sense to me. Are there other solutions? Am I screwed? Is there something I can do with LIB_X's linking of VTK?
I encountered a bunch of LNK4006 errors when linking my app to a library (call it library LIB_Y) that made heavy use of std::vector<std::string>, which I also did in my app. After a bit of experimenting I found one solution that worked -- wrap LIB_Y in a separate DLL that calls LIB_Y (LIB_Y_WRAPPER, say), and then link the main app against LIB_Y_WRAPPER. To try out my suggestion you will need to: Change your "static lib that handles all LIB_X stuff" from a static LIB project into a DLL project (which I will call LIB_X_WRAPPER). Make sure the header files of LIB_X_WRAPPER don't include any of the LIB_X header files. This is really important because the wrapper needs to completely isolate your app from the data types declared in the LIB_X header files (such as std::vector<double>). Only refer to LIB_X's header files from within the source files of LIB_X_WRAPPER. Change the declaration of all classes and functions in your static lib to ensure they are exported from the DLL (see this answer if you need details about exporting from a DLL). This solution worked for me because it kept the instantiation (compiler generated functions) of the std::vector<std::string> class used by LIBY completely separate from the instantiation of std::vector<std::string> in my app. As an aside, I suspect the cause of the crash you are seeing (you comment it is in the destructor of std::vector<double>) is because the instantiation of std::vector<double> in your app is different to that in LIB_X.
2,035,083
2,035,104
Compile to a stand-alone executable (.exe) in Visual Studio
how can I make a stand-alone exe in Visual Studio. Its just a simple Console application that I think users would not like to install a tiny Console application. I compiled a simple cpp file using the visual studio command prompt. Will the exe work even if the .NET framework is not installed? I used native C++ code.
Anything using the managed environment (which includes anything written in C# and VB.NET) requires the .NET framework. You can simply redistribute your .EXE in that scenario, but they'll need to install the appropriate framework if they don't already have it.
2,035,243
2,035,271
Java C++ without JNI
My app is written in Java. There is a C++ library I need to utilize. I don't want to use JNI. 60 times a second, the C++ app needs to send the Java app 10MB of data; and the Java app needs to send the C++ app 10 MB of data. Both apps are running on the same machine; the OS is either Linux or Mac OS X. What is the most efficient way to do this? (At the moment, I'm considering TCPIP ports; but in C++, I can do memory mapping -- can I do something similar in Java?) Thanks!
Using mapped files is a way of hand-rolling a highly optimized rpc. You might consider starting with a web service talking over local sockets, using MTOM for attaching the data, or just dropping it into a file. Then you could measure the performance. If the data was a problem, you could then use mapping. Note that there are some odd restrictions on this that make your code sensitive to whether it is running on Windows or not. On Windows, you can't delete something that is open. I should point out that I have done exactly what you are proposing here. It has a control channel on a socket, and the data is shared via a file that is mmapped in C++ (or the Windows equivalent) and NIO mapped in Java. It works. I've never measured maximum throughput, though.
2,035,287
2,035,316
Static Runtime Library Linking for Visual C++ Express 2008
How do you tell Visual C++ Express 2008 to statically link runtime libraries instead of dynamically? My exes do not currently run on computers w/o some sort of VS installed and I would love to change that. :)
Sorry, I do not have VC++ Express to test, but in Standard edition I use Project Properties -> Configuration Properties -> C/C++ -> Code Generation -> Runtime Library. Dll and Dll Debug are for dynamic linking.
2,035,348
2,035,471
Can we design singleton by setting all the data member and method of a class to be static?
how to answer this question?
EDIT: Oops, the answer is no. As others have pointed out, simply setting all methods/members to static follows the Monostate pattern (of which I was not aware). I was too eager to show off my shiny Singleton template (a simplified version of Alexandrescu's SingletonHolder, really). This answer should be downvoted. Original Answer: Yes. But it is less flexible than other ways of designing singletons. See Modern C++ Design by Alexandrescu and his Loki library: http://en.wikipedia.org/wiki/Loki_%28C%2B%2B%29 If you have several static singletons that depend on each other and on other global objects, you risk having problems because the order of their initialization (before main() kicks in) is tricky and can lead to unexpected results. Using templates, you can convert normal classes into singletons. If you later decide that your singleton is no longer a singleton (i.e. you can have multiple instances), then you don't have to convert all the class's methods to non-static. One way, using templates, is something like this: template <class T> class Singleton { public: static T& instance() { static T singleton; return singleton; } private: Singleton() {} // Disallow construction of Singleton<T> instances }; class Foo { public: void print() {std::cout << "Hello world\n";} }; typedef Singleton<Foo> TheFoo; int main() { TheFoo::instance().print(); } Note that this does not prevent you from creating Foo instances, unless you make the Foo constructor private (and make Singleton a friend of Foo). An advantage with this method over all-static classes is that you have more control over when the singleton object is constructed. It'll be constructed the first time you access the singleton. So you can have something like this: int main() { TheFoo::instance(); // Make sure the Foo is constructed before the Bar TheBar::instance(); } There is debate over the appropriateness of Singletons. Some say they are global objects in disguise and can make your code less reusable.
I will not comment further on that, as I have not made up my mind myself. EDIT: If you find TheFoo::instance() too verbose, you can always provide an inline shortcut function or use references: inline Foo& theFoo() {return TheFoo::instance();} int main() { theFoo().makeMeSomeCoffee("1 milk, 1 sugar"); Foo& foo = theFoo(); foo.makeMeASandwich("BLT"); } Mmmm.... off to make coffee & sandwich.
2,035,595
2,035,779
C++ Xcode assert evaluated in release
I'm quite new to Xcode (and Mac in general). I started a little iPhone project - coding in C++ whatever possible - and just noted that my assert(); commands are executed also in release mode. Is this a known problem and how do I solve it properly? thanks!
Xcode does not add -DNDEBUG for release mode automatically. To add it, go to Project | Edit Project Settings. You need to be on the Build tab and choose Configuration Release. Add a new Preprocessing Macro of NDEBUG.
2,035,609
2,037,177
How do I implement port redirecting/mapping on windows?
I am programming a web proxy that is installed on client machines (same machine as the web browser). I would like for all traffic that is web traffic to get redirected into my proxy server. I am using a windows operating system (XP, Windows 2003, Windows 2008, Vista, Windows 7). What would I need to do, to redirect all internet traffic on port 80 to my web proxy? I noticed this software allows you to create port mappings in windows. And I can get my proxy to work in that way. One of the components they install is called NDIS and I assume they use that technology to be able to port redirect. So my question is... what is the best (best as in easiest to implement) API or technology to use to program a port redirector in Windows?
Your options are Application Layer Proxy (= browser proxy settings), Layered Service Provider; TDI driver (or more modern equivalent?); NDIS driver. There is no simple answer to what is best or easiest. You need to take your own skill set into account, as well as the potential compatibility issues. Lots of other programs use each of these methods and you need to be compatible with them, or at least not break them.
2,035,838
2,035,850
What language do they build other languages with?
What language is used to build low level languages like c++ and java? How could you build the first language with no language?
In the context of compilers, this operation is often called bootstrapping. In particular, see the "Chicken and egg problem" section for a direct answer to your question. The very first compiler would have been hand-written in assembly language. If your next question is "how was the first assembler written?" then the answer would be that the first assembler was hand-written in binary machine code, possibly with front panel toggle switches. This is undoubtedly a simplification of what really happened, but the concept is the same. There is also an excellent article titled Reflections on Trusting Trust by Ken Thompson about the risks of using a compiler for a language to build the compiler for that language.