3,283,901
3,283,957
Why am I getting error LNK2001 when linking to zlib.lib?
I'm working on a project that already contains the gzip library as follows:

    zlib\zlib.h
    zlib\zlib.lib
    zlib\zconf.h

I would like to use the gzip functions from this .lib but am getting the following errors:

    Compress.cpp
    Linking...
    Compress.obj : error LNK2001: unresolved external symbol _gzclose
    Compress.obj : error LNK2001: unresolved external symbol _gzerror
    Compress.obj : error LNK2001: unresolved external symbol _gzwrite
    Compress.obj : error LNK2001: unresolved external symbol _gzopen
    .\Debug/files.exe : fatal error LNK1120: 4 unresolved externals
    Error executing link.exe.

The link settings include: Object/library modules: zlib.lib. Project Options: zlib.lib. The file using the gzX() functions has #include "zlib/zlib.h". What else needs to be done here to use these functions? Thank you.

EDIT: Using Visual Studio 6.0 C++

EDIT2: It turned out the static library I was using had the gz() functions taken out of it. The header file still had them, which was misleading.
I grabbed the one off here to get zlib to build on Windows. If you did the same, you may have forgotten to #define ZLIB_WINAPI before including zlib.h.
3,284,061
3,284,166
WebBrowser Control from MFC — How to Inject Javascript?
This is like the question How to inject Javascript in WebBrowser control? But I don't know how to transform that code into good old MFC C++ code. For example, what is the InvokeScript method? I can't find it anywhere. Isn't the .NET WebBrowser the same as IWebBrowser2?
You can find a CodeGuru article describing how to accomplish this here.
3,284,103
3,284,294
Heavy computations analysis/optimization
First of all, I don't have multiplication or division operations, so I could use shifting/adding, overflow-multiplication, precalculations, etc. I'm just comparing one n-bit binary number to another, but according to the algorithm the quantity of such operations seems to be huge. Here it is:

There is given a sequence of 0's and 1's that is divided into blocks. Let the length of the sequence be S; the length of a block is N, which is some power of two (4, 8, 16, 32, etc.). The quantity of blocks is n = S/N, no rocket science here. According to the chosen N I'm building a set of all possible N-bit binary numbers, which is a collection of 2^N-1 objects. After this I need to compare each binary number with each block from the source sequence and calculate how many times there was a match for each binary number, for example:

    S : 000000001111111100000000111111110000000011111111... (0000000011111111 is repeated 6 times, 16 bits x 6 = 96 bits overall)
    N : 8
    blocks : {00000000, 11111111, 00000000, 11111111, ...}

calculations:

    // _n = S/N; _N2 = Math.Pow(2,N)-1
    // S=96, N=8, n=12, 2^N-1=255 for this specific case
    // sourceEpsilons = list of blocks from input, List<string>[_n]
    var X = new int[_n]; // result array of frequencies
    for (var i = 0; i < X.Length; i++) X[i] = 0; // setting up
    for (ulong l = 0; l <= _N2; l++) // loop from 0 to max N-bit binary number
    {
        var currentl = l.ToBinaryNumberString(_N/8); // converting counter to string, getting "current binary number as string"
        var sum = 0; // quantity of currentl numbers in blocks array
        for (long i = 0; i < sourceEpsilons.LongLength; i++)
        {
            if (currentl == sourceEpsilons[i]) sum++; // evaluations of strings, evaluation of numbers (longs) takes the same time
        }
        // sum is different each time, != blocks quantity
        for (var j = 0; j < X.Length; j++) if (sum - 1 == j) X[j]++; // further processing
    }
    // result : 00000000 was matched 6 times, 11111111 6 times, X[6]=2.
Don't ask me why I need this >_< Even with small S I seem to have (2^N-1)(S/N) iterations, and with N=64 the number grows to 2^64 (the max value of type long), so that ain't pretty. I'm sure there is a need to optimize the loops and maybe change the approach radically (a C# implementation for N=32 takes 2h on a dual-core PC with Parallel.For). Any ideas how to make the above scheme less time- and resource-consuming? It seems like I have to precompute the binary numbers and get rid of the first loop by reading "i" from a file and evaluating it against the blocks on-the-fly, but the file size will be (2^N)*N bytes (((2^N-1)+1)*N), which is somehow unacceptable too.
It seems like what you want is a count of how many times each specific block occurred in your sequence; if that's the case, comparing every block to all possible blocks and then tallying is a horrible way to go about it. You're much better off making a dictionary that maps blocks to counts; something like this:

    var dict = new Dictionary<int, int>();
    for (int j = 0; j < blocks_count; j++)
    {
        int count;
        if (dict.TryGetValue(block[j], out count)) // block seen before, so increment
        {
            dict[block[j]] = count + 1;
        }
        else // first time seeing this block, so set count to 1
        {
            dict[block[j]] = 1;
        }
    }

After this, the count q for any particular block will be in dict[the_block], and if that key doesn't exist, then the count is 0.
3,284,283
3,284,329
Calculate high and low value of array
    struct WeatherStation
    {
        string Name;
        double Temperature;
    };

    void Initialize(WeatherStation[]);
    void HL(WeatherStation List[]);

    int main()
    {
        string Command;
        WeatherStation Stations[5];
        //some commands
    }

    void Initialize(WeatherStation StationList[])
    {
        StationList[0].Name = "A";
        StationList[0].Temperature = 0.0;
        StationList[1].Name = "B";
        StationList[1].Temperature = 0.0;
        StationList[2].Name = "C";
        StationList[2].Temperature = 0.0;
        StationList[3].Name = "D";
        StationList[3].Temperature = 0.0;
        StationList[4].Name = "E";
        StationList[4].Temperature = 0.0;
    }

    void HL(WeatherStation List[])
    {
        int K;
        int Low = List[0];
        int High = List[0];
        for (K = 0; K < 5; K++)
            if (List[K] < Low)
                Low = List[K];
        for (K = 0; K < 5; K++)
            if (List[K] > High)
                High = List[K];
        cout << "Lowest Temperature: " << Low << endl;
        cout << "Highest Temperature: " << High << endl;
    }

The last part is tripping me up.

    chief.cpp: In function ‘void HL(WeatherStation*)’:
    chief.cpp:124: error: cannot convert ‘WeatherStation’ to ‘int’ in initialization
    chief.cpp:125: error: cannot convert ‘WeatherStation’ to ‘int’ in initialization
    chief.cpp:128: error: no match for ‘operator<’ in ‘*(List + ((unsigned int)(((unsigned int)K) * 12u))) < Low’
    chief.cpp:129: error: cannot convert ‘WeatherStation’ to ‘int’ in assignment
    chief.cpp:132: error: no match for ‘operator>’ in ‘*(List + ((unsigned int)(((unsigned int)K) * 12u))) > High’
    chief.cpp:133: error: cannot convert ‘WeatherStation’ to ‘int’ in assignment
It cannot convert WeatherStation to int because WeatherStation is a structure, not a number. To get at a member of the structure, write, for instance, List[0].Temperature.
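A sketch of the corrected function (names kept from the question; returning the min/max through output parameters is an addition here so the result can be checked, where the original just printed them). The key change: compare the Temperature member, not the whole struct.

```cpp
#include <string>

struct WeatherStation {
    std::string Name;
    double Temperature;
};

// Track the lowest/highest Temperature member; the struct itself has no
// operator< or operator>, which is exactly what the compiler complained about.
void HL(const WeatherStation List[], int count, double& low, double& high) {
    low  = List[0].Temperature;
    high = List[0].Temperature;
    for (int k = 1; k < count; ++k) {
        if (List[k].Temperature < low)  low  = List[k].Temperature;
        if (List[k].Temperature > high) high = List[k].Temperature;
    }
}
```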
3,284,641
3,284,662
Private variable needs to be initialized only in constructor. How?
I have a class called Foo with a constructor that needs arguments, and another class Bar with a Foo private member:

    class Foo
    {
    public:
        Foo(string);
    };

    class Bar
    {
    public:
        Bar()
        {
            this->foo = Foo("test");
        }
    private:
        Foo foo;
    };

However, when I try to compile this, I get a compile error that there is no Foo::Foo() constructor. It looks like the private variable foo in class Bar gets initialized before getting a value assigned in the constructor. How can I have a private foo variable that waits to get initialized in my constructor?
You need to use an initializer list. If you don't, the compiler will try to default-construct the member before your constructor body runs, and Foo has no default constructor.

    Bar::Bar()
        : foo("test")
    {
        // stuff
    }
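Put together, a self-contained sketch (the str() and getFoo() accessors are added here only to make the example observable; they are not part of the question):

```cpp
#include <string>

class Foo {
public:
    explicit Foo(const std::string& s) : s_(s) {}  // no default constructor
    const std::string& str() const { return s_; }
private:
    std::string s_;
};

class Bar {
public:
    Bar() : foo("test") {}  // foo is built directly; Foo::Foo() is never needed
    const Foo& getFoo() const { return foo; }
private:
    Foo foo;
};
```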
3,284,646
3,284,768
About pointer downcasts/inheritance
So I'm about finishing up Prata's C++ Primer and I'm up to RTTI. He showed a line of downcasting and just said it's wrong, but I want to see a better example.

    class Grand
    {
    private:
        int hold;
    public:
        Grand(int h = 0) : hold(h) {}
        virtual void Speak() const { cout << "I am a grand class\n"; }
        virtual int Value() const { return hold; }
        void Gah() const { cout << "ok" << endl; }
    };

    class Superb : public Grand
    {
    public:
        Superb(int h = 0) : Grand(h) {}
        void Speak() const { cout << "I am a superb class!!\n"; }
        virtual void Say() const { cout << "I hold the superb value of " << Value() << "!\n"; }
        void Sah() const { cout << "Noak" << endl; }
    };

    class Magnificent : public Superb
    {
    private:
        char ch;
    public:
        int hour;
        Magnificent(int h = 0, char c = 'A') : Superb(h), ch(c) {}
        void Speak() const { cout << "I am a magnificent class!!!\n"; }
        void Say() const { cout << "I hold the character " << ch << " and the integer " << Value() << "!\n"; }
        void Mah() const { cout << "Ok" << endl; }
    };

    Grand * GetOne();

    int _tmain(int argc, _TCHAR* argv[])
    {
        /*
        srand(time(0));
        Grand * pg;
        Superb * ps;
        */
        Grand * pg = new Grand;
        Grand * ps = new Superb;
        Grand * pm = new Magnificent;

        Magnificent * ps2 = (Magnificent *)pg;
        ps2->Gah();
        cout << ps2->hour << endl;
        system("pause");
    }

So above, I'm casting a base to a derived, which is generally not to be done. However, in this example, what am I really limited to? When I cast pg, I still have access through ps2 to all of the Grand/Superb/Magnificent properties and methods. In other words, nothing fails here. Can anyone give me an example, or add something to the code, which will clearly show me how assigning a base to a derived can mess things up?
Do not use C-style casts. They are not safe. C++ introduced four new casts; the one you are looking for is dynamic_cast<>:

    Magnificent * ps2 = dynamic_cast<Magnificent*>(pg);
    // If pg actually points at a Magnificent (or at an object of a class
    // derived from Magnificent) it works fine.
    // If pg is not a Magnificent (as in this case) it will return NULL.

When you use a C-style cast you are telling the compiler to ignore all the rules and do what you tell it (which the compiler is happy to do). There is no checking done to make sure what you are doing makes any sense. The C++-style casts are much more limiting, and each does a specific range of casting. dynamic_cast is used to cast up and down the class hierarchy.
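A stripped-down sketch of the run-time check (simplified hierarchy and a hypothetical asMagnificent helper, not from the question): a C-style cast "succeeds" on any Grand*, while dynamic_cast reports failure with a null pointer.

```cpp
struct Grand {
    virtual ~Grand() {}  // dynamic_cast needs a polymorphic (virtual) base
};
struct Magnificent : Grand { int hour; };

// Returns the object as Magnificent* only if it really is one; null otherwise.
Magnificent* asMagnificent(Grand* g) {
    return dynamic_cast<Magnificent*>(g);  // checked at run time
}
```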
3,284,720
3,284,770
Non-owning holder with assignment semantics
I have a class that should hold a reference to some data, without owning that data (i.e. the actual data is guaranteed not to go out of scope). In particular, the class cannot make a copy – the data is easily several gigabytes in size. Now, the usual implementation (I assume) is to have a reference to the data:

    struct holder_ref {
        type const& value;
        holder_ref(type const& value) : value(value) { }
    };

(Please note that the constness has absolutely no bearing on the problem.) Now, I absolutely need this class to be assignable (i.e. have a working operator =). I thought this was a fairly common problem but I can’t remember how (if ever) I’ve solved it before. The problem is that a reference cannot be reseated and there’s simply no way around this. The only solution I’ve come up with uses placement new in place of the assignment operator:

    // x = other_x; gets replaced with:
    x.~T();
    new (&x) T(other_x);

Now, this works and is standard compliant. But it sure is ugly. No – unacceptable. So I’m searching for alternatives. One idea is to use pointers, but I’m unsure whether my constructor is actually guaranteed to work (and passing a pointer is impossible due to the interface I have to adhere to):

    struct holder_ptr {
        type const* value;
        // Is this legal?
        holder_ptr(type const& value = 0) : value(&value) { }
    };

But I’d rather use a reference, if at all possible. Only – how to implement the assignment operator?

    struct holder_ref {
        type const& value;
        holder_ref(type const& value = 0) : value(value) { }

        holder_ref& operator =(holder_ref const& other) {
            // Now what?!
            return *this;
        }
    };

As a test case, consider the following code:

    int main() {
        int const TEST1 = 23;
        int const TEST2 = 13;
        int const TEST3 = 42;

        std::vector<holder_ptr> hptr(1);
        std::vector<holder_ref> href(2);

        // Variant 1. Pointer.
        hptr[0] = holder_ptr(TEST1);

        // Variant 2. Placement new.
        href[0].~holder_ref();
        new (&href[0]) holder_ref(TEST2);

        // Variant 3. ???
        href[1] = holder_ref(TEST3);

        assert(*hptr[0].value == TEST1); // Works (?)
        assert(href[0].value == TEST2);  // Works
        assert(href[1].value == TEST3);  // BOOM!
    }

(Also, just to make this clear – the type we’re talking about is non-POD and I need a standard-compliant solution.)
I don't see anything wrong with using a holder_ptr. It can be implemented something like so:

    struct bad_holder : std::exception { };

    struct holder_ptr {
        holder_ptr() : value(0) { }
        holder_ptr(type const& value) : value(&value) { }

        type const& get() {
            if (value == 0) throw bad_holder();
            return *value;
        }

    private:
        type const* value;
    };

So long as you always assign to the pointer from a reference, you know that you have a valid object (that, or you ended up with a "null reference" previously, in which case you have other, bigger problems since you'll already have invoked undefined behavior). With this solution, the interface is implemented entirely in terms of references, but under the hood a pointer is used so that the type is assignable. The use of references in the interface ensures there are none of the concerns that come with using pointers (namely, you never have to worry whether the pointer is null).

Edit: I've updated the example to allow for the holder to be default constructible.
3,284,784
3,284,801
C++ removing from list while iterating over list
I have a std::list of Bananas, and I want to get rid of the bad ones. Is there any relatively simple way to perform the following pseudocode?

    foreach (Banana banana in bananaList)
    {
        if (banana.isBad())
            bananaList.remove(banana);
    }

(Making a transition from C# and Java to C++ has been a rocky road.)
    bananaList.remove_if(std::mem_fun_ref(&Banana::isBad));

Note that you should probably be using std::vector instead of std::list though -- vector performs better in 99.9% of cases, and it's easier to work with.

EDIT: If you were using vectors, vectors don't have a remove_if member function, so you'd have to use the plain remove_if in namespace std:

    bananaVector.erase(
        std::remove_if(bananaVector.begin(), bananaVector.end(),
                       std::mem_fun_ref(&Banana::isBad)),
        bananaVector.end());
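A self-contained sketch (a made-up Banana with a bruised flag; the lambda is C++11, where the answer's mem_fun_ref spelling predates that, but the effect is the same):

```cpp
#include <list>

struct Banana {
    bool bruised;
    bool isBad() const { return bruised; }
};

// list::remove_if erases every element for which the predicate returns true,
// avoiding the iterator-invalidation trap of erasing inside your own loop.
void pruneBad(std::list<Banana>& bananas) {
    bananas.remove_if([](const Banana& b) { return b.isBad(); });
}
```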
3,285,019
3,285,050
can I specialize operator<<?
I want to specialize operator<< but this code is not compiling:

    template<>
    std::ostream& operator<< < my_type >( std::ostream& strm, my_type obj);
To specialize a template, first you have to have a template declared. In the case of a free operator<< you don't need a template; you can just overload it for your my_type class:

    std::ostream& operator<<( std::ostream& strm, my_type obj );

If your object isn't trivial in size, you may want to consider passing via a const reference so that you don't copy it every time you stream it:

    std::ostream& operator<<( std::ostream& strm, const my_type& obj );

(Technically you can explicitly specialize an operator<<, but I don't think that this is what you want or need. In order to be able to use a template operator<< with the usual << syntax you need to make the template specialization deducible from one of the parameter types. E.g.

    // template op <<
    template< class T >
    std::ostream& operator<<( std::ostream&, const MyTemplClass<T>& );

    // specialization of above
    template<>
    std::ostream& operator<< <int>( std::ostream&, const MyTemplClass<int>& );

)
3,285,038
3,285,962
XMLRPCPP asynchronously handling multiple calls?
I have a remote server which handles various different commands, one of which is an event fetching method. The event fetch returns right away if there is 1 or more events listed in the queue ready for processing. If the event queue is empty, this method does not return until a timeout of a few seconds. This way I don't run into any HTTP/socket timeouts. The moment an event becomes available, the method returns right away. This way the client only ever makes connections to the server, and the server does not have to make any connections to the client. This event mechanism works nicely. I'm using the boost library to handle queues, event notifications, etc. Here's the problem. While the server is holding back on returning from the event fetch method, during that time, I can't issue any other commands. In the source code, XmlRpcDispatch.cpp, I'm seeing in the "work" method, a simple loop that uses a blocking call to "select". Seems like while the handling of a method is busy, no other requests are processed. Question: am I not seeing something and can XmlRpcpp (xmlrpc++) handle multiple requests asynchronously? Does anyone know of a better xmlrpc library for C++? I don't suppose the Boost library has a component that lets me issue remote commands? I actually don't care about the XML or over-HTTP feature. I simply need to issue (asynchronous) commands over TCP in any shape or form? I look forward to any input anyone might offer.
I had some problems with XMLRPC also, and investigated many solutions like gSOAP and XMLRPC++, but in the end I gave up and wrote the whole HTTP+XMLRPC stack from scratch using Boost.ASIO and TinyXML++ (later I swapped TinyXML for Expat). It wasn't really that much work; I did it myself in about a week, starting from scratch and ending up with many RPC calls fully implemented. Boost.ASIO gave great results. It is, as its name says, totally async, with excellent performance and little overhead, which to me was very important because it was running in an embedded environment (MIPS). Later, and this might be your case, I changed XML to Google's Protocol Buffers, and was even happier. Its API, as well as its message containers, are all type safe (i.e. you send an int and a float, and it never gets converted to string and back, as is the case with XML), and once you get the hang of it, which doesn't take very long, it's a very productive solution. My recommendation:

- If you can ditch XML, go with Boost.ASIO + Protobuf
- If you need XML: Boost.ASIO + Expat

Doing this stuff from scratch is really worth it.
3,285,057
3,285,071
c++ constructor with new
I'm making a very dumb mistake just wrapping a pointer to some new'ed memory in a simple class.

    class Matrix
    {
    public:
        Matrix(int w, int h) : width(w), height(h)
        {
            data = new unsigned char[width*height];
        }
        ~Matrix()
        {
            delete data;
        }
        Matrix& operator=(const Matrix& p)
        {
            width = p.width;
            height = p.height;
            data = p.data;
            return *this;
        }
        int width, height;
        unsigned char *data;
    };

    .........
    // main code
    std::vector<Matrix> some_data;
    for (int i = 0; i < N; i++)
    {
        some_data.push_back(Matrix(100,100)); // all Matrix.data pointers are the same
    }

When I fill the vector with instances of the class, the internal data pointers all end up pointing to the same memory?
1. You're missing the copy constructor.
2. Your assignment operator should not just copy the pointer, because that leaves multiple Matrix objects with the same data pointer, which means that pointer will be deleted multiple times. Instead, you should create a deep copy of the matrix. See this question about the copy-and-swap idiom, in which @GMan gives a thorough explanation of how to write an efficient, exception-safe operator= function.
3. You need to use delete[] in your destructor, not delete.
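All three fixes together, as a sketch using copy-and-swap (the by-value operator= is one common way to write the idiom; it is not the only one):

```cpp
#include <algorithm>
#include <cstring>

class Matrix {
public:
    Matrix(int w, int h) : width(w), height(h),
        data(new unsigned char[w * h]()) {}            // value-initialized to 0

    Matrix(const Matrix& p) : width(p.width), height(p.height),
        data(new unsigned char[p.width * p.height]) {  // deep copy, fix #1
        std::memcpy(data, p.data, width * height);
    }

    Matrix& operator=(Matrix p) {                      // copy-and-swap, fix #2
        std::swap(width, p.width);
        std::swap(height, p.height);
        std::swap(data, p.data);
        return *this;                                  // p's destructor frees the old buffer
    }

    ~Matrix() { delete[] data; }                       // delete[], fix #3

    int width, height;
    unsigned char* data;
};
```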
3,285,130
3,285,182
standards compliant way to typedef my enums
How can I get rid of the warning without explicitly scoping the enum properly? The standards-compliant code would be to compare against foo::bar::mUpload (see here), but the explicit scopes are really long and make the darn thing unreadable. Maybe there's another way that doesn't use typedef? I don't want to modify the enum; I didn't write it and it's in use elsewhere.

    warning C4482: nonstandard extension used: enum 'foo::bar::baz' used in qualified name

    namespace foo {
        class bar {
            enum baz { mUpload = 0, mDownload };
        };
    }

    typedef foo::bar::baz mode_t;

    mode_t mode = getMode();
    if (mode == mode_t::mUpload) //C4482
    {
        return uploadthingy();
    }
    else
    {
        assert(mode == mode_t::mDownload); //C4482
        return downloadthingy();
    }
If the enum is defined within a class, the best that you can do is bring the class into your own scope and just use class_name::value, or define a typedef of the class. In C++03 the values of an enum are part of the enclosing scope (which in your case is the class). In C++0x/11 you will be able to qualify the values with the enum name:

    namespace first { namespace second {
        struct enclosing {
            enum the_enum { one_value, another };
        };
    }}

    using first::second::enclosing;
    typedef first::second::enclosing the_enclosing;

    assert( enclosing::one_value != the_enclosing::another );

In the future (C++11), your usage will be correct:

    typedef first::second::enclosing::the_enum my_enum;
    assert( my_enum::one_value != my_enum::another );
3,285,351
3,285,401
Differences between a `typename` parameterized template and an integral type one
I've been trying to work with templates for a while now, and the more I do, the less I realise I understand. This latest problem feels like it has unearthed a rather fundamental misunderstanding on my part, and I'm starting to think more than ever that, "Right, tomorrow I shouldn't write any code but instead find a library with a good CS section and just read everything they have on templates"! I wonder if in the meantime you can help me. So, the following code,

    template <typename T> // or replace `typename` with `class`
    struct Foo
    {
        struct Bar {};
        Foo(Bar) {}
    };

    Foo<float>::Bar x;
    Foo<int> y (x);

doesn't compile, since x is of type Foo<float>::Bar but to construct y we need a Foo<int>::Bar. That's fine, and expected, but now consider the following,

    template <int I>
    struct Foo
    {
        struct Bar {};
        Foo(Bar) {}
    };

    Foo<0>::Bar x;
    Foo<1> y (x);

I was hoping/thinking (although, thankfully, not as yet relying on it) that x would be of type Foo<0>::Bar and to construct y we would need a Foo<1>::Bar, and as such it would not compile, as in the previous example. But it seems that both are in fact of type Foo<int>::Bar, and so this will compile. So, I wonder: what is the correct terminology to describe this difference between a typename/class parameterized template and an integral type parameterized one? What other differences in behaviour does this entail? And what method could I use to solve this problem and get the desired behaviour for this simple example, so that Foo<0> and Foo<1> will describe incompatible types? And, before that trip to the library, any links to "essential" online reading materials on the subject would be welcomed too. Thanks.
On gcc 4.4.3 your second example fails to compile with the message "error: no matching function for call to 'Foo<1>::Foo(Foo<0>::Bar&)'", which is exactly what you expected to happen. So you did not misunderstand anything. If this compiles for you, that's non-standard behavior by your compiler.
3,285,429
3,286,325
Profiling embedded application
I have an application that runs on an embedded processor (ARM), and I'd like to profile the application to get an idea of where it's using system resources, like CPU, memory, IO, etc. The application is running on top of Linux, so I'm assuming there's a number of profiling applications available. Does anyone have any suggestions? Thanks! edit: I should also add the version of Linux we're using is somewhat old (2.6.18). Unfortunately I don't have a lot of control over that right now.
As bobah said, gprof and valgrind are useful. You might also want to try OProfile. If your application is in C++ (as indicated by the tags), you might want to consider disabling exceptions (if your compiler lets you) and avoiding dynamic casts, as mentioned above by sashang. See also Embedded C++.
3,285,519
3,285,622
Some final questions about inheritance/casting
I asked a question an hour or two ago that is similar, but this is fundamentally different to me. After this I should be good.

    class base
    {
    private:
        string tame;
    public:
        void kaz() {}
        virtual ~base() {}
        void print() const { cout << tame << endl; }
    };

    class derived : public base
    {
    private:
        string taok;
    public:
        std::string name_;
        explicit derived( const std::string& n ) : name_( n ) {}
        derived() {}
        void blah() { taok = "ok"; }
        void print() const { std::cout << "derived: " << name_ << std::endl; }
    };

    int _tmain(int argc, _TCHAR* argv[])
    {
        base b;
        derived d;
        base * c = &b;
        derived * e = (derived *)&b;
        e->kaz();
        system("pause");
        return 0;
    }

I know downcasting in this example is not good practice, but I'm just using it as an example. So when I am now pointing to a base object through a derived pointer, I don't get why I am still able to do certain operations belonging only to the base class. For example, the base class's interface has a kaz() method but the derived class does not. When I downcast, why does the compiler not yell at me even though kaz() is not part of the derived class's interface? Why is the compiler not complaining about using members of the base class when I am using a derived pointer? And why do I run into trouble only when I access a member of the base class interface from within a method, but not directly? For example:

I can't do this:

    e->print(); // Program crashes

But I can do this:

    e->tame = "Blah";
    cout << e->tame << endl;
The derived class inherits all the members of the base class, so kaz() exists for derived objects too. If you call kaz() on a derived object, the method that was inherited from base is simply called. Whether you access the inherited members from within a method or directly doesn't matter.

The problem with e is that it is really pointing to a base object, not a derived one. With the cast e = (derived *)&b you tell the compiler "I know it doesn't look like it, but this really is a derived*, believe me!". And the compiler believes you, since you are the master. But you lied, and &b was actually not a derived*. Therefore horrible things happen when the compiler tries to call derived::print() on it; in this case it leads to a crash of the program. When you access e->tame directly, horrible things could also happen (the compiler still treats e as a derived* while it only is a base*). In this case, by chance, it happens to print out the expected value anyway.
3,285,558
3,285,626
Simple MIPS Instructions and Compilers
Is it common for compilers (gcc, for instance) to generate an instruction that loads some empty memory element into a register? Like...

    lw at,0(sp)

where memory[sp + 0] = 0. This basically just places 0 into $at ($R1). I ask because I'm looking through an executable file's hex dump (the executable file is the result of the compilation of a C++ file) and I'm manually verifying it, and if I start at the objdump stated entry point I run into an instruction that does this. I'm not sure whether I should take this to be an error or just a common compiler action. It seems like a poor way to zero a register. ADDU $at,$0,$0 would be better. Or SLL $at,$0,$0. The entry point is 400890. The jump target of the jal at the end is an empty memory location (tells me something is probably wrong...). Note that my previous example was purposefully arbitrary. And just to be clear, -32636+gp is an empty memory location. I can post the memory contents at that point if you'd like proof :).

    00400890 <__start>:
      400890: 03e00021  move zero,ra
      400894: 04110001  bal 40089c <__start+0xc>
      400898: 00000000  nop
      40089c: 3c1c0fc0  lui gp,0xfc0
      4008a0: 279c7864  addiu gp,gp,30820
      4008a4: 039fe021  addu gp,gp,ra
      4008a8: 0000f821  move ra,zero
      4008ac: 8f848034  lw a0,-32716(gp)
      4008b0: 8fa50000  lw a1,0(sp)
      4008b4: 27a60004  addiu a2,sp,4
      4008b8: 2401fff8  li at,-8
      4008bc: 03a1e824  and sp,sp,at
      4008c0: 27bdffe0  addiu sp,sp,-32
      4008c4: 8f878054  lw a3,-32684(gp)
      4008c8: 8f888084  lw t0,-32636(gp)   <------ this instruction
      4008cc: 00000000  nop
      4008d0: afa80010  sw t0,16(sp)
      4008d4: afa20014  sw v0,20(sp)
      4008d8: afbd0018  sw sp,24(sp)
      4008dc: 8f998068  lw t9,-32664(gp)
      4008e0: 00000000  nop
      4008e4: 0320f809  jalr t9
      4008e8: 00000000  nop

The jal target is 4010c0.

      4010c0: 8f998010  lw t9,-32752(gp)
      4010c4: 03e07821  move t7,ra
      4010c8: 0320f809  jalr t9
Perhaps it's being placed after a jump statement? If so, that statement is run before the jump occurs and could be a do nothing instruction (nop). Beyond that, it could just be the compiler on a lower optimization setting. Another possibility is that the compiler is preserving the CPU flags field. Shift and Add play with flags while a load I don't believe does.
3,285,707
3,285,775
How do I use less CPU with loops?
I've got a loop that looks like this:

    while (elapsedTime < refreshRate)
    {
        timer.stopTimer();
        elapsedTime = timer.getElapsedTime();
    }

I read something similar to this elsewhere (C Main Loop without 100% cpu), but this loop is running a high resolution timer that must be accurate. So how am I supposed to not take up 100% CPU while still keeping it high resolution?
You shouldn't busy-wait but rather have the OS tell you when the time has passed. http://msdn.microsoft.com/en-us/library/ms712704(VS.85).aspx High resolution timers (Higher than 10 ms) http://msdn.microsoft.com/en-us/magazine/cc163996.aspx
3,285,970
3,285,986
C++ and returning a null - what worked in Java doesn't work in C++
So I'm having a rather tumultuous conversion to C++ from Java/C#. Even though I feel like I understand most of the basics, there are some big fat gaping holes in my understanding. For instance, consider the following function:

    Fruit& FruitBasket::getFruitByName(std::string fruitName)
    {
        std::map<std::string, Fruit>::iterator it = _fruitInTheBascit.find(fruitName);
        if (it != _fruitInTheBascit.end())
        {
            return (*it).second;
        }
        else
        {
            // I would so love to just return null here
        }
    }

where _fruitInTheBascit is a std::map<std::string, Fruit>. If I query getFruitByName("kumquat") you know it's not going to be there - who eats kumquats? But I don't want my program to crash. What should be done in these cases?

P.S. Tell me of any other stupidity that I haven't already identified.
There is no such thing in C++ as a null reference, so if the function returns a reference, you can't return null. You have several options:

1. Change the return type so that the function returns a pointer; return null if the element is not found.
2. Keep the reference return type but have some sort of "sentinel" fruit object and return a reference to it if the object is not found.
3. Keep the reference return type and throw an exception (e.g., FruitNotFoundException) if the fruit is not found in the map.

I tend to use (1) if a failure is likely and (3) if a failure is unlikely, where "likely" is a completely subjective measure. I think (2) is a bit of a hack, but I've seen it used neatly in some circumstances.

As an example of an "unlikely" failure: in my current project, I have a class that manages objects and has a function is_object_present that returns whether an object is present and a function get_object that returns the object. I always expect that a caller will have verified the existence of an object by calling is_object_present before calling get_object, so a failure in this case is quite unlikely.
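Option (1) as a minimal sketch (member names borrowed from the question; the add helper is invented here for the demo):

```cpp
#include <map>
#include <string>

struct Fruit { int weight; };

class FruitBasket {
public:
    void add(const std::string& name, const Fruit& f) { _fruitInTheBascit[name] = f; }

    // Returns a pointer so "not found" has a natural representation: null.
    Fruit* getFruitByName(const std::string& fruitName) {
        std::map<std::string, Fruit>::iterator it = _fruitInTheBascit.find(fruitName);
        return it != _fruitInTheBascit.end() ? &it->second : 0;
    }
private:
    std::map<std::string, Fruit> _fruitInTheBascit;
};
```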
3,286,129
3,290,190
What is the preferred STL collection when that's all you need?
I just need a "bag of things". It doesn't need to be a set, a map or even have any particular order. I just need to be able to add things and iterate over it, nothing more. I don't expect it to be very large but it can't get really bad perf if it does. What container should I use?
The standard recommends using vector as your default container. But Herb Sutter actually makes a case for using deque as your first choice.
3,286,236
3,286,382
Executing member function of class through pointer to abstract parent of said class
I have created an abstract base class Animal which has a public virtual abstract method makeSound(). I created a subclass Cow which implements Animal.makeSound() as you would expect (you know... "moo"). And I have a Farm class which holds a private member variable std::vector<Animal*> animals. In one of the Farm methods I iterate over all animals and make them make their sound.

    for (unsigned int i = 0; i < animals.size(); i++)
    {
        animals[i]->makeSound();
    }

Unfortunately I get an error:

    Unhandled exception at 0x65766974 in TestBed.exe: 0xC0000005: Access violation reading location 0x65766974.

Any idea what's going on here?

UPDATE: adding more code per request

    class Farm
    {
    public:
        Farm();
        virtual ~Farm(void);
        void setBarnOnFire();
    private:
        vector<Animal*> animals;
    };

    Farm::Farm()
    {
        animals.push_back(new Dog());
        animals.push_back(new Cat());
        animals.push_back(new Chicken());
        animals.push_back(new Horse());
        animals.push_back(new Cow());
    }

    Farm::setBarnOnFire()
    {
        for (unsigned int i = 0; i < animals.size(); i++)
        {
            animals[i]->makeSound();
        }
    }

Is there something I'm supposed to do to initialize animals?

RESOLUTION: So you were all correct. I was accessing memory that I didn't own. But it took me forever to track it down. It was due to a misunderstanding about how object initialization takes place. Basically, in an effort to "initialize" a member variable I was actually overwriting it with a local variable. I then gave the local to all the animals that I created. Later, the animals would try to call the local variable, which no longer existed.
ok, let me take a guess: "Unhandled exception at 0x65766974 in TestBed.exe: 0xC0000005: Access violation reading location 0x65766974." it seems that the code pointer is being sent to 0x65766974 ("exception at 0x65766974") but this is not a valid place to be reading, let alone code: ("Access violation reading location 0x65766974", note, the same number) so is it possible the vtable, or vtable pointer, is being corrupted? perhaps the object is being overwritten by a string? as it is being stored in a vector, perhaps you have something overflowing a buffer (maybe a char array?) in the preceding object in the vector, and this is corrupting the next object's vtable pointer?
3,286,363
3,286,403
What does C++ add to C?
What does C++ add to C? What features of the language are the Clang/LLVM projects, the parts of GCC that are being written in C++, chromium, and any others all taking advantage of? What features are they avoiding?
Because despite academic efforts such as Singularity, there's not a single mainstream OS where drivers can be written in a high-level language. Note that anything that can be done in C++ can also be done in C, but some things are a lot easier in C++.
3,286,422
3,313,510
Eclipse CDT Build Configs - Testing a DLL with CPP Unit
I'm making a DLL (and probably a Linux port at some later date) in C++ using eclipse. The situation is as follows: I am trying to make two separate build configurations, one that will build a DLL and one that will build an executable CppUnit test. Currently I have all of the DLL build working, and I can make a separate project to test the DLL with; however, I was wondering if there was any way to do this all in one project. Help on this matter would be greatly appreciated! Thanks, Chris
Well, I found out how to do it, so if anyone else stumbles across this... If you go into "Project->Properties->C/C++ Build->Settings", then select a debug configuration (or create a new one). Go to the "Build Artifact" tab, and change the "Artifact Type" to executable. Now to avoid having all of your source code compiled into all Build Configurations (such as your main() being built into a DLL, which doesn't make much sense), go to "Project->Properties->C/C++ General->Paths and Symbols". Select your build configuration and go to the "Source Location" tab. Here you can add new source folders/remove source folders that already exist.
3,286,448
3,310,608
Calling a python method from C/C++, and extracting its return value
I'd like to call a custom function that is defined in a Python module from C. I have some preliminary code to do that, but it just prints the output to stdout. mytest.py import math def myabs(x): return math.fabs(x) test.cpp #include <Python.h> int main() { Py_Initialize(); PyRun_SimpleString("import sys; sys.path.append('.')"); PyRun_SimpleString("import mytest;"); PyRun_SimpleString("print mytest.myabs(2.0)"); Py_Finalize(); return 0; } How can I extract the return value into a C double and use it in C?
As explained before, using PyRun_SimpleString seems to be a bad idea. You should definitely use the methods provided by the C-API (http://docs.python.org/c-api/). Reading the introduction is the first thing to do to understand the way it works. First, you have to learn about PyObject that is the basic object for the C API. It can represent any kind of python basic types (string, float, int,...). Many functions exist to convert for example python string to char* or PyFloat to double. First, import your module : PyObject* myModuleString = PyString_FromString((char*)"mytest"); PyObject* myModule = PyImport_Import(myModuleString); Then getting a reference to your function : PyObject* myFunction = PyObject_GetAttrString(myModule,(char*)"myabs"); PyObject* args = PyTuple_Pack(1,PyFloat_FromDouble(2.0)); Then getting your result : PyObject* myResult = PyObject_CallObject(myFunction, args); And getting back to a double : double result = PyFloat_AsDouble(myResult); You should obviously check the errors (cf. link given by Mark Tolonen). If you have any question, don't hesitate. Good luck.
3,286,524
3,286,553
Help understanding class example code for C++, templates, operator()
I'm not sure exactly what the following class does that we have for a class example. In the following code, what does the operator() do in this case? I don't quite get the *(begin + first) and pretty much the whole return expression as what is being evaluated. Any help would be great. Thanks! // IndexCompare.h - interface for IndexCompare class template #ifndef _INDEXCOMPARE_H_ #define _INDEXCOMPARE_H_ #pragma once template <class random_iterator> class IndexCompare { public: IndexCompare(random_iterator begin, random_iterator end) : begin(begin), end(end) {} ~IndexCompare() {} bool operator() (unsigned int first, unsigned int second) { return (*(begin + first) < *(begin + second)); } private: random_iterator begin; random_iterator end; }; #endif
If you're asking what operator () does, it allows you to call the object like a function. See this article for an example. If you're asking what the function in your example is doing, it's comparing the values of two elements specified by the indices passed to the function. begin + first refers to the element at index first starting from the iterator begin, similarly begin + second. *(begin + first) gets the value at that location. You can use this class with any STL container (that supports random access) by passing in a pair of iterators. For example, you could use it with a vector like this: vector<int> vec; /* add some elements here */ IndexCompare<vector<int>::iterator> compare(vec.begin(), vec.end()); Now calling compare(2, 5) for example would compare the values of vec[2] and vec[5].
3,286,572
3,287,207
Which features of C++ are particularly resource intensive at compile time?
I believe C is generally faster to compile than C++, because it lacks features like late binding and operator overloading. I'm interested in knowing which features of C++ tend to slow the compilation process the most?
This is a difficult question to answer in a meaningful way. If you look purely at lines of code per second (or something on that order), there's no question that a C compiler should be faster than a C++ compiler. By itself, that doesn't mean much though. The mention of late-binding in the question is an excellent case in point: it's almost certainly true that compiling a C++ virtual function is at least somewhat slower than compiling a C (non-virtual) function. That doesn't mean much though -- the two aren't equivalent at all. The C equivalent of a C++ virtual function will typically be a pointer to a function or else code that uses a switch on an enumerated type to determine which of a number of pieces of code to invoke. By the time you create code that's actually equivalent, it's open to question whether C will have any advantage at all. In fact, my guess would be rather the opposite: at least in the compilers I've written, an awful lot of the time is spent on the front-end, doing relatively simple things like just tokenizing the input stream. Given the extra length I'd expect from code like this in C, by the time you had code that was actually equivalent, it wouldn't surprise me a whole lot if it ended up about the same or even somewhat slower to compile. Operator overloading could give somewhat the same effect: on one hand, the code that overloads the operator almost certainly takes a bit of extra time to compile. At the same time, the code that uses the overloaded operator will often be shorter specifically because it uses an overloaded operator instead of needing to invoke functions via names that will almost inevitably be longer. That's likely to reduce that expensive up-front tokenization step, so if you use the overloaded operator a lot, overall compilation time might actually be reduced. Templates can be a bit the same way, except that in this case it's often substantially more difficult to even conceive of a reasonable comparison. 
Just for example, when you're doing sorting in C, you typically use qsort, which takes a pointer to a function to handle the comparison. The most common alternative in C++ is std::sort, which is a template that includes a template argument for the comparison. The difference is that since that is a template argument, the code for the comparison is typically generated inline instead of being invoked via a pointer. In theory I suppose one could perhaps write a giant macro to do the same -- but I'm pretty sure I've never seen such a thing actually done, so it's extremely difficult to guess at how much slower or faster it might be to use if it one existed. Given the simplicity of macros versus templates, I'd guess it would compile faster, but exactly how much faster will probably remain forever a mystery; I'm certainly not going to try to write a complete Quicksort or Introsort in a C macro!
3,286,669
3,287,393
Zooming into the mouse, factoring in a camera translation? (OpenGL)
Here is my issue, I have a scale point, which is the unprojected mouse position. I also have a "camera which basically translates all objects by X and Y. What I want to do is achieve zooming into mouse position. I'v tried this: 1. Find the mouse's x and y coordinates 2. Translate by (x,y,0) to put the origin at those coordinates 3. Scale by your desired vector (i,j,k) 4. Translate by (-x,-y,0) to put the origin back at the top left But this doesn't factor in a translation for the camera. How can I properly do this. Thanks glTranslatef(controls.MainGlFrame.GetCameraX(), controls.MainGlFrame.GetCameraY(),0); glTranslatef(current.ScalePoint.x,current.ScalePoint.y,0); glScalef(current.ScaleFactor,current.ScaleFactor,0); glTranslatef(-current.ScalePoint.x,-current.ScalePoint.y,0);
Instead of using glTranslate to move all the objects, you should try glOrtho. It takes as parameters the wanted left coords, right coords, bottom coords, top coords, and min/max depth. For example if you call glOrtho(-5, 5, -2, 2, ...); your screen will show all the points whose coords are inside a rectangle going from (-5,2) to (5,-2). The advantage is that you can easily adjust the zoom level. If you don't multiply by any view/projection matrix (which I assume is the case), the default screen coords range from (-1,1) to (1,-1). But in your project it can be very useful to control the camera. Call this before you draw any object instead of your glTranslate: float left = cameraX - zoomLevel * 2; float right = cameraX + zoomLevel * 2; float top = cameraY + zoomLevel * 2; float bottom = cameraY - zoomLevel * 2; glOrtho(left, right, bottom, top, -1.f, 1.f); Note that cameraX and cameraY now represent the center of the screen. Now when you zoom on a point, you simply have to do something like this: cameraX += (cameraX - screenX) * 0.5f; cameraY += (cameraY - screenY) * 0.5f; zoomLevel += 0.5f;
3,286,822
3,286,860
Reading a file to a string in C++
As somebody who is new to C++ and coming from a python background, I am trying to translate the code below to C++ f = open('transit_test.py') s = f.read() What is the shortest C++ idiom to do something like this?
The C++ STL way to do this is this: #include <string> #include <iterator> #include <fstream> using namespace std; wifstream f(L"transit_test.py"); wstring s(istreambuf_iterator<wchar_t>(f), (istreambuf_iterator<wchar_t>()) );
3,286,882
3,286,914
Why does this code result in 0?
I have the following code #include <stdio.h> #include <iostream> #include <stdlib.h> #include <stdint.h> using namespace std; int main(){ int x; cin>>x; uint32_t Ex; Ex=(x<<1)>>24; cout<<Ex<<endl; return 0; } but it gives 0 for any value of x? My task is the following: Computation of the biased exponent Ex of a binary32 datum x.
It is not so much that you get zero for 'any value of x' but that you get zero for any positive value of x smaller than 0x00800000 (which is 8388608), because after the left shift such a value is still below 2^24, so shifting right by 24 discards it entirely. None of this helps much with explaining a 'biased exponent of a binary32 datum'. That sounds like the exponent of a 32-bit floating point (IEEE) number. You probably have to worry about endianness of the representation, amongst other things.
3,286,901
3,286,927
How is the memory use in a queue?
In my project I use the std::queue class. I would like to know what happen if I do the following. Get a a pointer of a element inside the queue (note: a pointer and not a iterator). I make modification in the queue like push and pop in the queue (pop element which is not the pointed by the previous pointer) Does my pointer still point on the same element I specify in the beginning ? Is it defined by the queue specification?
std::queue uses a sequence container for its implementation. By default, std::deque is used. With a std::deque, so long as all the insertions and erasures are at the beginning or the end of the container, references and pointers to elements in the container are not invalidated. However, I don't know how you are going to get a pointer to an element in the queue; it doesn't provide functionality for that (you can only get a reference to the first and last elements in the queue).
3,287,052
3,287,086
Famous design patterns that a C++ programmer should know
Possible Duplicate: What C++ idioms should C++ programmers use? After reading books like C++ Primer, Effective C++ and TC++PL I want to learn some important design patterns. So, what are the famous design patterns that every C++ programmer should know?
The obvious answer is the Gang-Of-Four patterns from the famous book. These are the same patterns that get listed all over the place. http://en.wikipedia.org/wiki/Design_Patterns Beyond that, have a look around Martin Fowler's web site... http://martinfowler.com/ There's a fair bit on there - the "famous" one is probably "dependency injection". Most others are pretty domain specific, though. "Mixin layers" can be interesting for C++. A template class takes its own base as a template parameter, so that the template can be used to add the same functionality to many different classes, or as a composition method so that various features can be easily included/excluded for a library. The curiously recurring template trick is sometimes used as well (the original base is the final fully-composed class) so that the various mixin layers can do some degree of "reflection", so that intermediate methods can be defined in terms of fully-composed member types etc. Of course it can be a bit prone to unresolvable cyclic dependencies, if you're not careful. http://portal.acm.org/citation.cfm?id=505148 Note - "the original base" doesn't mean the original base class that's inherited from as that would cause an illegal inheritance cycle - it's just a template parameter used to refer to, to access the types/constants/etc in the final result and perhaps for metaprogramming reflection techniques. I honestly don't know at this point if I was confused when I wrote "base", or just chose a confusing word.
3,287,079
3,287,238
Verify the structure of a database? (SQLite in C++ / Qt)
I was wondering what the "best" way to verify the structure of my database is with SQLite in Qt / C++. I'm using SQLite so there is a file which contains my database, and I want to make sure that, when launching the program, the database is structured the way it should be- i.e., it has X tables each with their own Y columns, appropriately named, etc. Could someone point my in the right direction? Thanks so much!
You can get a list of all the tables in the database with this query: select tbl_name from sqlite_master; And then for each table returned, run this query to get column information pragma table_info(my_table); For the pragma, each row of the result set will contain: a column index, the column name, the column's type affinity, whether the column may be NULL, and the column's default value. (I'm assuming here that you know how to run SQL queries against your database in the SQLite C interface.)
3,287,087
3,287,206
Attach a video stream onto an existing application
Is it possible to show a video that is playing onto an existing application? Application A is running. Get Video A and place it on top of Application A and then play it. Thanks! Cheers!
If you mean to load a video and play it, you can use the DirectShow API, which will use the installed Windows codecs to attempt playback. You can also use ffmpeg for a selection of codecs that may not be installed on the computer.
3,287,096
3,287,116
What is the simplest way to "cast" a member function pointer to a function pointer in C++?
I want to provide a member function for the "comp" parameter of an STL algorithm like lower_bound( ..., Compare comp ). The comp() function accesses a non-static member field so it must itself be a non-static member but the type of a non-static member function pointer is different from that of an ordinary function pointer. What is the best way around this problem?
This is the most common use of std::mem_fun and std::mem_fun_ref. They're templates that create functors that invoke the specified member function. TR1 adds an std::tr1::bind that's also useful and more versatile (and if you don't have TR1 available, that's based on Boost::bind). C++0x will include std::bind in the standard library (virtually unchanged from TR1).
3,287,102
3,297,600
C++ inheritance pattern + CRTP
am trying to understand pattern used in ublas. pattern is such: struct vector : vector_expression<vector> where vector_expression is like this: template<class E> class vector_expression { ... // no constructor or E pointer/reference in class // const E &operator () () const { return *static_cast<const E*>(this); } complete source code is here: http://www.tena-sda.org/doc/5.2.2/boost/dd/d44/vector__expression_8hpp-source.html#l00088 my question is, how does *static_cast<const E*>(this) work? does it rely on inheritance? next question: if I derive template<class E> class vector_expression2 : private vector_expression<E> { //friend class ublas::vector_expression<E>; // this is the fix typedef vector_expression<E> base; const E& operator()() const { return base::operator()(); } }; i get compiler error regarding inaccessible vector_expression base in static cast. why does it happen? Thank you
This is a trick to constrain function templates -- to restrict the class of types. There are lots of concepts like vector expression, scalar expression, matrix expression etc. If you want to write a function template that multiplies a vector with a scalar you could try to write template<typename V, typename S> some_type operator*(V v, S s); // vector * scalar template<typename V, typename S> some_type operator*(S s, V v); // scalar * vector but this is not going to work because both declarations are essentially equivalent and nobody said that V is supposed to be a vector expression and S is supposed to be a scalar expression. So, what the uBlas developers did is to use the CRTP to constrain these templates: template<typename V, typename S> some_type operator*(vector_expression<V> ve, scalar_expression<S> se); To make this work all scalar expressions S have to derive from scalar_expression<S> and all vector expressions V have to derive from vector_expression<V>. This way this operator is only considered if the first operand is really an expression for a vector and the second argument is really an expression for a scalar. You can overload this function template with a second one that swaps both parameters and everything is okay. Now, to be able to access anything from V and S (the derived types) we need a cast from base class to derived class. This is what the conversion operator in the base class is for. Since the base class knows the derived class (it is a template parameter), this is not a problem. It makes sense to choose the weakest cast operator that allows this cast to avoid errors. This is the static_cast. It can be used to convert base* to derived* without any significant overhead.
I don't understand what you try to do with your code template<class E> class vector_expression2 : private vector_expression<E>; If you want to write your own vector expression as a template you would do it like this: template<class E> class my_parameterized_vector_expression : public vector_expression<my_parameterized_vector_expression<E> >; I don't think it works with private inheritance. At least all the function templates that take a vector expression as argument won't be able to access the conversion operator from the base class if you use private inheritance here.
3,287,472
3,287,493
Why is my .cpp file not being processed?
I'm trying to compile (make) a game source and it seems that my gRace.cpp file is being excluded or something because it keeps returning undefined reference errors for all my gRace class methods. libtron.a(libtron_a-gGame.o): In function `gGame::StateUpdate()': gGame.cpp:(.text+0x99e9): undefined reference to `gRace::Reset()' libtron.a(libtron_a-gGame.o): In function `gGame::Analysis(float)': gGame.cpp:(.text+0xad48): undefined reference to `gRace::Sync(int, int, int)' gGame.cpp:(.text+0xad4d): undefined reference to `gRace::Done()' gGame.cpp:(.text+0xad61): undefined reference to `gRace::Winner()' gGame.cpp:(.text+0xb786): undefined reference to `gRace::End()' libtron.a(libtron_a-gWinZone.o): In function `gWinZoneHack::OnEnter(gCycle*, float)': gWinZone.cpp:(.text+0x9206): undefined reference to `gRace::ZoneHit(ePlayerNetID*)' libtron.a(libtron_a-gWinZone.o): In function `gWinZoneHack::gWinZoneHack(eGrid*, eCoord const&, bool)': gWinZone.cpp:(.text+0xda96): undefined reference to `gRace::NewZone(gWinZoneHack*)' libtron.a(libtron_a-gWinZone.o): In function `gWinZoneHack::gWinZoneHack(eGrid*, eCoord const&, bool)': gWinZone.cpp:(.text+0xdcc6): undefined reference to `gRace::NewZone(gWinZoneHack*)' collect2: ld returned 1 exit status I'm including the gRace.h file in both files via: #include "gRace.h" Any ideas on what might be causing it to not be processed?
Not including the header file would cause undefined function compiler errors. These are linker errors, which means the actual source file isn't being linked with the other files (that is, it has nothing to do with whether or not you included gRace.h in the right places). Check your build script to ensure gRace.cpp is being linked in properly
3,287,540
3,365,018
How do i view source code in totalview?
I just fired up totalview on my "hello world" application (c++) and i only get to view the assembly code. Is there any settings/flags i need to set to view the source code? Menubar->View->Source As->Source does not work for me. the application im trying to debug is just a cout << "Hello World" application, just to get the debugger up and running.
Let's start with the simple stuff. Did you compile your application with the '-g' debugging flag? The debugger relies on the compiler to provide it with a symbol table and line number table to map what happens in the executable back to your source code. Without that -g flag (or if you subsequently strip your application) that info won't be present and assembly debugging is the best you can hope for. If you did compile with -g, are the source and the executable all together in the same directory, or if not, have they been moved since you compiled them? The compiler only knows the locations of the source and executable at the time they are created; if you move them around then sometimes the debugger won't be able to locate the source code file. In that case you might need to give it some help by defining a source code search path. Write back here and let me know if -g fixed your problem. If not we can look into the search path and such. Cheers, Chris
3,287,664
3,287,693
C++ Classes and Overloaded Operators
I have been trying to make a StringTable class that holds a simple unordered_map<string, string>, and has the array index operator '[]' overloaded to work for accessing the map; however, the compiler will tell me that I have yet to define the overloaded operator when I try to use it. My code is as follows: CStringTable.h #include <string> #include <fstream> #include <tr1/unordered_map> class CStringTable { public: bool Load(const char* filename); inline const char* operator [](const char* key); const char* Get(const char* key); private: std::tr1::unordered_map<std::string, std::string>* m_StringMap; }; CStringTable.cpp #include "CStringTable.h" inline const char* CStringTable::operator [](const char* key) { std::string sKey = key; return (*m_StringMap)[sKey].c_str(); } I try to access the map as follows: (*m_cStringTable)[msgKey] where m_cStringTable is a pointer to an instance of the CStringTable class and msgKey is a const char*. Can anybody enlighten me as to why this won't work?
Regarding the inline keyword, the compiler needs to be able to see the body of that method as well in the .h file. So either move the implementation of the operator from the .cpp file to the .h file, or include the body of the operator in the class declaration.
3,287,716
3,287,786
Problems understanding iterators and operator overload in c++
We have a class example and I just don't get it. I don't quite understand how the operator() works in this case, and everything starting with sort. I looked at the output after running the program, and I don't see how those values are obtained. sort indices array: 2 8 10 4 1 7 5 3 0 9 6 11 replay numbers array: 37 33 29 36 32 35 39 34 30 38 31 40 number array via indices 29 30 31 32 33 34 35 36 37 38 39 40 I tried looking up functors on this board since the title is functor example, but I guess I don't see how functors are in play here. Any thoughts would be GREATLY appreciated as I am COMPLETELY lost. Thanks! #include <iostream> #include <vector> #include <algorithm> #include <numeric> #include "IndexCompare.h" using namespace std; template <class ForwardIterator, class T> void iota(ForwardIterator first, ForwardIterator last, T value) { while (first != last) { *first++ = value++; } } const int MAX = 12; int main() { int numbers[] = {37, 33, 29, 36, 32, 35, 39, 34, 30, 38, 31, 40}; vector<int> vecNum(numbers, numbers + MAX); // Display original number array. cout << "--- initial numbers array ---" << endl; vector<int>::iterator iter = vecNum.begin(); for (; iter != vecNum.end(); iter++ ) { cout << *iter << " "; } cout << "\n"; vector<int> indices( vecNum.size() ); // fill indices array cout << "\n--- invoke 'iota' on indices array ---"; iota( indices.begin(), indices.end(), 0 ); // Display original indices array. 
cout << "\n linear indices array: "; vector<int>::iterator iterIdx = indices.begin(); for (; iterIdx != indices.end(); iterIdx++ ) { cout << *iterIdx << " "; } cout << "\n"; // sort indices array cout << "\n--- invoke 'Sort' on indices based on number array ---"; sort(indices.begin(), indices.end(), IndexCompare<vector<int>::iterator>(vecNum.begin(),vecNum.end())); // Display sorted indices array cout << "\n Sorted indices array: "; for (iterIdx = indices.begin(); iterIdx != indices.end(); iterIdx++ ) { cout << *iterIdx << " "; } cout << "\n"; cout << "\n--- Run check on number array indexed normally ---"; // Display original numbers array. cout << "\n replay numbers array: "; iter = vecNum.begin(); for (; iter != vecNum.end(); iter++ ) { cout << *iter << " "; } cout << "\n"; cout << "\n--- Run check on number array indexed with sorted indices ---"; // Print original nums array indirectly through indices. cout << "\n number array via indices: "; for (int index = 0; index < vecNum.size(); index++ ) cout << vecNum[indices[index]] << " "; cout << "\n"; getchar(); return 0; } // IndexCompare.h - interface for IndexCompare class template #ifndef _INDEXCOMPARE_H_ #define _INDEXCOMPARE_H_ #pragma once template <class random_iterator> class IndexCompare { public: IndexCompare(random_iterator begin, random_iterator end) : begin(begin), end(end) {} ~IndexCompare() {} bool operator() (unsigned int first, unsigned int second) { return (*(begin + first) < *(begin + second)); } private: random_iterator begin; random_iterator end; }; #endif
I am not sure I will be able to explain this correctly. Here is my try: (1). vector<int> indices( vecNum.size() ); You are creating a vector to hold the indexes for the elements in vector vecNum. Obviously the number of elements in this vector is same as number of elements in vecNum. (2). iota( indices.begin(), indices.end(), 0 ); Initializing the indices with values from 0 - vecNum.size() - 1 (3). sort(indices.begin(), indices.end(), IndexCompare<vector<int>::iterator>(vecNum.begin(),vecNum.end())); For each element in the indices vector invoke the functor IndexCompare. This functor in its operator() gets the value from the vecNum vector corresponding to the given index position. So basically you are sorting the indices vector (not vecNum) based on the values in vecNum. Hence vecNum remains unaffected and indices gets sorted based on the values from vecNum. To make it clearer (I hope), the initial state of the indices vector will be say: indices = 0,1,2 and vecNum = 20,10,30 Now you are calling std::sort on this with your own functor. So to determine whether 0 is less than 1 sort algorithm will use your functor. Inside the functor you are determining whether 0 < 1 using the logic whether vecNum[0] (i.e. 20) < vecNum[1] (i.e. 10). So the sorted output will be indices = 1,0,2.
3,287,801
3,287,828
Pointers to elements of std::vector and std::list
I'm having a std::vector with elements of some class ClassA. Additionally I want to create an index using a std::map<key,ClassA*> which maps some key value to pointers to elements contained in the vector. Is there any guarantee that these pointers remain valid (and point to the same object) when elements are added at the end of the vector (not inserted). I.e, would the following code be correct: std::vector<ClassA> storage; std::map<int, ClassA*> map; for (int i=0; i<10000; ++i) { storage.push_back(ClassA()); map.insert(std::make_pair(storage.back().getKey(), &(storage.back())); } // map contains only valid pointers to the 'correct' elements of storage How is the situation, if I use std::list instead of std::vector?
Vectors - No. Because the capacity of vectors never shrinks, it is guaranteed that references, pointers, and iterators remain valid even when elements are deleted or changed, provided they refer to a position before the manipulated elements. However, insertions may invalidate references, pointers, and iterators. Lists - Yes, inserting and deleting elements does not invalidate pointers, references, and iterators to other elements
3,287,834
3,287,841
What is this at the end of function ,...) in c++
Possible Duplicate: In a C function declaration, what does “…” as the last parameter do? What does this mean ,...); it is written at the end of a function in a code i am debuging. like this void abc( int a, int b, ...);
It means the function can take any number of extra arguments. For example, consider printf; the first argument is the format string, and then there can be any number of arguments after that for all of the modifiers. This would be represented by using ... after the first argument when defining the function.
3,287,933
3,287,984
Convert LPTSTR to string or char * to be written to a file
I want to convert LPTSTR to string or char * to be able to write it to file using ofstream. Any Ideas?
Most solutions presented in the other threads unnecessarily convert to an obsolete encoding instead of an Unicode encoding. Simply use reinterpret_cast<const char*> to write UTF-16 files, or convert to UTF-8 using WideCharToMultiByte. To depart a bit from the question, using LPTSTR instead of LPWSTR doesn't make much sense nowadays since the old 9x series of Windows is completely obsolete and unsupported. Simply use LPWSTR and the accompanying "wide character" (i.e., UTF-16 code unit) types like WCHAR or wchar_t everywhere. Here is an example that (I hope) writes UTF-16 or UTF-32 (the latter on Linux/OS X): #include <fstream> #include <string> int main() { std::ofstream stream("test.txt"); // better use L"test.txt" on Windows if possible std::wstring string = L"Test\n"; stream.write(reinterpret_cast<const char*>(string.data()), string.size() * sizeof(wchar_t)); }
3,288,037
3,288,054
Why was the array type of formal parameter of a function converted to pointer?
The output of the following function is "int *", which means the formal parameter is converted to an integer pointer. Is there any necessary reason for this design? Why can't we preserve the array type?

// the output is "int *"
#include <cstdio>
#include <typeinfo>

void Func(int ar[5])
{
    printf("%s\n", typeid(ar).name());
}

int main()
{
    int ar[5];
    Func(ar);
    return 0;
}
Is there any necessary reason for this design? This is historical baggage from C. Supposedly this was for convenience, as you can't pass arrays by value anyway. If you want to preserve the type, you can use a reference or pointer:

void Func(int (&ar)[5]);

Or use a function template to accept an arbitrarily sized array:

template<std::size_t N>
void Func(int (&ar)[N]);
3,288,422
3,288,481
How to calculate the cumulative sum for a vector of doubles in C++?
I have a vector of doubles and I need to create another array which is a cumulative sum of the elements of the first. For example:

vector<double> Array(10,1);
vector<double> Sum(10);

Sum[0] = Array[0];
for(unsigned int i=1; i<Array.size(); i++)
    Sum[i] = Sum[i-1] + Array[i];

Is there a built-in function that will perform the above cumulative sum?
Without having tested it, something like std::partial_sum(Array.begin(), Array.end(), Sum.begin(), std::plus<double>()); should do the trick, if it's C++. (std::partial_sum is declared in <numeric>; the std::plus<double>() argument can actually be omitted, since addition is the default operation.)
3,288,551
3,288,571
Cannot convert from const Point to const D2D1_POINT_2F
class ADot : public Shape
{
private:
    Point me_;
    operator D2D1_POINT_2F() const; // HERE I HAVE CONVERSION OPERATOR BUT IT DOES NOT WORK
public:
    ADot(signed, signed);
    ~ADot(void);
    void draw() const;
    Point center() const;
    Point north() const;
    Point south() const;
    Point east() const;
    Point west() const;
    Point nw() const;
    Point ne() const;
    Point sw() const;
    Point se() const;
};

error:

Error 7 error C2664: 'D2D1::Ellipse' : cannot convert parameter 1 from 'const Point' to 'const D2D1_POINT_2F &'

I'm getting this error but I do not know how to write an operator which would convert my const object to a const D2D1_POINT_2F. Thank you.
The operator is declared PRIVATE; make it public. You are also trying to convert a Point to a D2D1_POINT_2F, but the operator is declared in the ADot class, so it converts an ADot, not a Point. The conversion operator belongs on the Point class.
3,288,848
3,288,884
Initialization of object static members
Static members confuse me sometimes. I understand how to initialize a simple built-in type such as int with something along the lines of int myClass::statVar = 10;, which you place in a .cpp file, but I have something of the following sort:

class myClass
{
public:
    // Some methods...
protected:
    static RandomGenerator itsGenerator;
};

The basic idea is simple enough: myClass needs access to a random generator for one of its member functions. I can also have only a few instances of the generator since each object is quite big. However, the RandomGenerator type needs to be "initialized", so to speak, by a call to RandomGenerator::Randomize(), which the compiler won't allow you to do since it's not a const rvalue (is that right?). So how can I make this work? Or perhaps should I not make use of a static variable in this case, and do it some other way?
You could create a wrapper class which holds a RandomGenerator instance and calls RandomGenerator::Randomize in its constructor.
3,289,028
3,289,062
Any ideas for a dissertation?
I was wondering whether anyone had some ideas for a dissertation I have to do for university. It will be a 12-month project and I will probably be looking to do something in C++, but I'm open to anything. I was thinking about looking into AI but I'm not sure. Thanks in advance.
I would suggest you look for people who work in this field at your university and ask them for project suggestions. You will eventually end up with someone from your university as a supervisor anyway, so why not get in touch with them right away? On the other hand, if you really want some suggestions, look at the numerous AI competitions on the web. http://www.thousandparsec.net/tp/comp.php http://eis.ucsc.edu/StarCraftAICompetition ... and more http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=ai+competition
3,289,106
3,289,155
How to write console data into a text file in C++?
I'm working on a file sharing application in C++. I want to write console output into a separate file and at the same time I want to see the output in console also. Can anybody help me...Thanks in advance.
I don't program in C++, but here is my advice: create a new class that takes an input stream (istream in C++ or something similar), and then forwards every incoming byte both to standard output and to the file. I am sure there is a way to replace the standard output stream with the aforementioned class; as I remember, the output target is some kind of property of cout. Then again, I spent one week on C++ more than half a year ago, so there is a chance that all I've said is garbage.
3,289,149
3,289,879
Tracking - and correctly ending - native and managed threads in a C# - C++/CLI - C++ Windows forms application prior to exit
This is a follow-on from: Debugging a Multithreaded C# - C++/CLI - C++ Solution in Visual Studio 2008: What are these threads? Please excuse the format, I've just repeated some of the description of the application here: I've inherited a project consisting of three levels of code. The lowest layer is native C++ that interacts with hardware. This is mature, stable and well-tested. The intermediate level code is C++/CLI, which interacts with top-level C# code that contains the UI elements and some additional functionality. This C# code is incomplete and was rushed in development: it crashes frequently and is not fit for purpose. My task is debug it and complete it. I received some very helpful info from the last question I asked - but now there's more issues! My problem at the moment is that when I invoke Application.Exit() to shut down the UI and quit the application, an exception is thrown: System.InvalidOperationException: Collection was modified; enumeration operation may not execute I understand that this is because I need to ensure that all of my threads are ended before I call Application.Exit() (or Application.ExitThread()). I've tried using MainForm.Close() as quick fix while I investigate further but it doesn't alleviate the problem. I don't want to just called Thread.CurrentThread.Abort(), mainly because some of the threads originate in the C++ section of the code, via Boost::Thread, and I'm unsure exactly what resources I may leave in an undesirable state (a lot of the code consists of objects for interaction with hardware, such as a serial port - it's not been implemented via RAII so I'm rather cautious of brute forcing things). What I'd like to be able to do is identify what threads are doing what, and gracefully end them prior to exiting the application. 
However, in VS 2008, the stack trace - with 'Show External Code' activated - only reveals [Native to managed transition] [Managed to native transition] so I'm still having difficulty tracing the individual native threads and thus working out the best way to end them. I tried using Allinea DDTLite, which seemed excellent - but I've had some issues with my VS installation and I had to disable the plug-ins, so it wasn't a solution. To summarise: what is the most effective way to ensure that all threads - both managed and native - are correctly finished so that the UI and then the entire application can exit cleanly?
What I'd like to be able to do is identify what threads are doing what You cannot make this work. There are ways to enumerate the threads in a process, like the managed Process.Threads property or the native Thread32First/Next but you don't get nearly enough info about the threads to know what they do. And most certainly not to shut them down cleanly. Further complicated by the .NET framework using threads for its own purposes, like the debugger thread and the finalizer thread and a smattering of threadpool threads. You can kill these threads rudely with TerminateThread, albeit that killing the finalizer thread will crash the program immediately, but that's no different from rudely terminating the process with Environment.Exit(). With the caveat that nothing is cleaned-up nicely. Windows will clean up most of the shrapnel though. This should not normally be a problem. You know what threads you started, there should also be a mechanism to ask them to shut down. That's normally done by signaling an event, something that's tested in thread's main loop. Waiting on the thread handle confirms that the thread indeed exited. After which you can close the windows. But that's probably plumbing that's missing, you'll have to add it. If the current native C++ code has no mechanism to take care of thread shutdown then you've got a fairly big problem. I'll guess that maintaining this native C++ code is the real problem. You may have to hire a gun to get this done.
3,289,290
3,289,933
Qt - How to specify and make constant an element size in a layout?
Say there is a QHBoxLayout and some widgets in it. How do I specify a widget's width and height in the layout, so that while resizing the widget which contains the layout, the given width and height stay constant?
You can use void QWidget::setFixedSize ( int w, int h ), which sets the width of the widget to w and the height to h. This will keep the size of that particular widget fixed when the window is resized. You can also use a combination of void QWidget::setFixedHeight ( int h ) and void QWidget::setFixedWidth ( int w ), whichever fits your needs. Hope it helps.
3,289,321
3,291,311
C# - Capturing Windows Messages from a specific application
I'm writing a C# application which needs to intercept Window Messages that another application is sending out. The company who wrote the application I'm monitoring sent me some example code, however it's in C++ which I don't really know. In the C++ example code I've got, they use the following code:

UINT uMsg = RegisterWindowMessage(SHOCK_MESSAGE_BROADCAST);
ON_REGISTERED_MESSAGE(WM_SHOCK_BROADCAST_MESSAGE, OnShockStatusMessage)
LRESULT OnShockStatusMessage(WPARAM wParam, LPARAM lParam);

As I understand it, this retrieves an Id from Windows for the specific message we want to listen for. Then we're asking C++ to call OnShockStatusMessage whenever a message matching the Id is intercepted. After a bit of research I've put together the following in C#:

[DllImport("user32.dll", SetLastError = true)]
public static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

[DllImport("user32.dll", SetLastError = true, CharSet = CharSet.Auto)]
static extern uint RegisterWindowMessage(string lpString);

private IntPtr _hWnd; // APS-50 class reference
private List<IntPtr> _windowsMessages = new List<IntPtr>(); // APS-50 messages

private const string _className = "www.AuPix.com/SHOCK/MessageWindowClass";

// Windows Messages events
private const string _messageBroadcast = "www.AuPix.com/SHOCK/BROADCAST";
private const string _messageCallEvents = "www.AuPix.com/SHOCK/CallEvents";
private const string _messageRegistrationEvents = "www.AuPix.com/SHOCK/RegistrationEvents";
private const string _messageActions = "www.AuPix.com/SHOCK/Actions";

private void DemoProblem()
{
    // Find hidden window handle
    _hWnd = FindWindow(_className, null);

    // Register for events
    _windowsMessages.Add( new IntPtr( RegisterWindowMessage( _messageActions ) ) );
    _windowsMessages.Add( new IntPtr( RegisterWindowMessage( _messageBroadcast ) ) );
    _windowsMessages.Add( new IntPtr( RegisterWindowMessage( _messageCallEvents ) ) );
    _windowsMessages.Add( new IntPtr( RegisterWindowMessage( _messageRegistrationEvents ) ) );
}

protected override void WndProc(ref Message m)
{
    base.WndProc(ref m);

    // Are they registered Windows Messages for the APS-50 application?
    foreach (IntPtr message in _windowsMessages)
    {
        if ((IntPtr)m.Msg == message)
        {
            Debug.WriteLine("Message from specified application found!");
        }
    }

    // Are they coming from the APS-50 application?
    if ( m.HWnd == shock.WindowsHandle)
    {
        Debug.WriteLine("Message from specified application found!");
    }
}

As I understand it, this should do the same basic thing, in that it: finds the application I wish to monitor, registers the Window Messages I wish to intercept, and watches for all Window Messages - then strips out the ones I need. However, in my override of the WndProc() method neither of my checks intercepts any of the specific messages or any message from the application I'm monitoring. If I Debug.WriteLine every message that comes through, I can see that it's monitoring them; however, it never filters out the messages that I want. By running the example monitoring application written in C++ I can see that Window Messages are being sent and picked up - it's just that my C# implementation doesn't do the same.
Turns out I also needed to send the other application a PostMessage asking it to send my application the Window Messages. PostMessage((int)_hWnd, _windowsMessages[0], SHOCK_REQUEST_ACTIVE_CALLINFO, (int)_thisHandle); PostMessage((int)_hWnd, _windowsMessages[0], SHOCK_REQUEST_ALL_REGISTRATIONINFO, (int)_thisHandle); PostMessage((int)_hWnd, _windowsMessages[0], SHOCK_REQUEST_CALL_EVENTS, (int)_thisHandle); PostMessage((int)_hWnd, _windowsMessages[0], SHOCK_REQUEST_REGISTRATION_EVENTS, (int)_thisHandle); Not pretty code, but good enough to prove it works which is all I need for now :)
3,289,354
3,289,439
Output difference in gcc and turbo C
Why is there a difference in the output produced when the code is compiled using the two compilers gcc and Turbo C?

#include <stdio.h>

int main()
{
    char *p = "I am a string";
    char *q = "I am a string";
    if(p==q)
    {
        printf("Optimized");
    }
    else
    {
        printf("Change your compiler");
    }
    return 0;
}

I get "Optimized" on gcc and "Change your compiler" on Turbo C. Why?
Your question has been tagged C as well as C++, so I'll answer for both languages. [C] From ISO C99 (Section 6.4.5/6): It is unspecified whether these arrays are distinct provided their elements have the appropriate values. That means it is unspecified whether p and q point to the same string literal or not. In the case of gcc they both point to "I am a string" (gcc optimizes your code), whereas in Turbo C they do not. Unspecified behavior: use of an unspecified value, or other behavior where this International Standard provides two or more possibilities and imposes no further requirements on which is chosen in any instance. [C++] From ISO C++98 (Section 2.13.4/2): Whether all string literals are distinct (that is, are stored in nonoverlapping objects) is implementation-defined. In C++ your code invokes implementation-defined behavior. Implementation-defined behavior: unspecified behavior where each implementation documents how the choice is made. Also see this question.
3,289,726
3,289,913
In C++, any general guidelines for handling memory allocation/deletion?
Probably all that I'm asking for is a link to a website that I have yet to find. But coming from a Java background, what are the general guidelines for handling memory allocation and deletion in C++? I feel like I may be adding all sorts of memory leaks to my application. I realize that there are several variants of smart pointers, and you can mention them too me as well, but I'd like to focus on standard C++ pointers.
My usual policy is this:

- Use smart pointers where usage is at all complex.
- All raw pointers are owned by a specific object that is responsible for deleting them.
- The constructor always either allocates the pointer or initializes it to null if it's to be set later.
- The destructor always deletes any contained pointers.

Those rules ensure that pointers get deleted when their owning objects are deleted, eliminating the most common memory leak situations.

- Never pass an internal pointer into another object; always pass the container object and have the called function call member functions of the container object to act on the "pointer".
- Disable copying of the container object. In rare cases, implement the copy so that it copies the pointed-to object. But never allow the owning object to be copied without also copying the contained object.

The previous two rules ensure that you can't have copies of the pointer pointing to deleted memory.

- Don't try to implement reference counting. If you need a reference-counted pointer, use a smart pointer class and contain that.

I've found those rules generally ensure you can use raw pointers safely and efficiently, and if you want to break those rules then use a smart pointer instead.
3,289,818
3,289,887
Hybrid Inheritance Example
Can anyone suggest any real life example of Hybrid inheritance?
Hybrid inheritance is a method where one or more types of inheritance are combined together. I use multilevel inheritance + single inheritance almost all the time when I need to implement an interface.

struct ExtraBase
{
    void some_func();
};

struct Base : public ExtraBase {};

struct Derived : public Base, public IUnknown {};

...

Derived* x = new Derived;
x->AddRef();
x->some_func();

Here is an example where Derived uses some_func from ExtraBase (multilevel inheritance) and Derived uses AddRef from IUnknown, which is inherited a single time. Surely it is not from production code, but the idea is close to it.
3,290,134
5,570,956
vtune - no symbols available
I have used vtune several times in the past, usually without too much trouble. Unfortunately the gaps between each use are often so long that I forget some aspects of how to use it each time. I know that the line number and symbols information needs to be stored somehow. I thought that all that was required was to compile your exe with "Program Database" (/Zi), but I have just done a sampling and found that vtune reports there are no symbols available. Is there anything I missed?
The problem has been solved: It turned out that it was a mistake in setting the working directory; "/Zi" appears to be all that is required after all. I don't need to switch off optimization.
3,290,282
3,290,591
typedef and non-simple type specifiers
Why is this code invalid? typedef int INT; unsigned INT a=6; whereas the following code is valid typedef int INT; static INT a=1; ? As per my understanding unsigned int is not a "simple type specifier" and so the code is ill-formed. I am not sure though. Can anyone point to the relevant section of the Standard which makes the first code invalid(and the second code valid)? EDIT Although Johannes Schaub's answer seemed to be correct and to the point(he had deleted his answer BTW) I accepted James Curran's answer for its correctness and preciseness.
typedefs are not like macros; they are not just text substitution. A typedef creates a new type name. Now when you say unsigned int, the unsigned isn't a modifier which is tacked onto the int. unsigned int is the complete type name; it just happens to have a space in it. So, when you say typedef int INT;, then INT is the complete type name. It can't be modified. static, on the other hand, is a storage class specifier; it's not part of the type name at all, so it combines freely with the typedef.
3,290,332
3,293,032
var arg list to tempfile, why is it needed?
I have this code inside a constructor of a class (not written by me) and it writes a variable arg list to a tmp file. I wondered why this would be needed? The tmpfile is removed after this ctor goes out of scope and the var arg list sits inside the m_str vector. Can someone suggest a better way of doing this without the use of a tmpfile?

DString(const char *fmt, ...)
{
    DLog::Instance()->Log("Inside DString with ellipses");
    va_list varptr;
    va_start(varptr, fmt);
    FILE *f = tmpfile();
    if (f != NULL)
    {
        int n = ::vfprintf(f, fmt, varptr) + 1;
        m_str.resize(n + 1);
        ::vsprintf(&m_str[0], fmt, varptr);
        va_end(varptr);
    }
    else
        DLog::Instance()->Log("[ERROR TMPFILE:] Unable to create TmpFile for request!");
}
Since this is C++ code, I think you may be trying to solve the wrong problem here. The need for a temp file would go away completely if you consider using a C++-style design instead of continuing to use the varargs. It may seem like a lot of work to convert all the calling sites to use a new mechanism, but varargs provide a wide variety of possibilities to mis-pass parameters, leaving you open to insidious bugs, not to mention you can't pass non-POD types at all. I believe that in the long (or even medium) term it will pay off in reliability, clarity, and ease of debugging. Instead, try to implement a C++-style streams interface that provides type safety and even the ability to disallow certain operations if needed.
3,290,389
3,291,200
Is there an alternative for boost::phoenix::at_c in combination with boost::spirit::qi::grammar
I have created a test application to illustrate my problem. It parses a list of integers preceded by "a=" or "b=" and is separated by "\r\n". The list contains multiple occurrences of those fields in any order. #include <string> #include <vector> #include <iostream> #include <boost/spirit/include/qi.hpp> #include <boost/spirit/include/phoenix.hpp> #include <boost/fusion/include/adapt_struct.hpp> typedef std::vector<unsigned int> uint_vector_t; std::ostream& operator<<(std::ostream& out, const uint_vector_t &data) { for (unsigned int i(0); i < data.size(); i++) { out << data[i] << '\n'; } return out; } struct MyStruct { uint_vector_t m_aList; uint_vector_t m_bList; }; BOOST_FUSION_ADAPT_STRUCT ( MyStruct, (uint_vector_t, m_aList) (uint_vector_t, m_bList) ) ; template<typename Iterator> struct MyParser : public boost::spirit::qi::grammar<Iterator, MyStruct()> { MyParser() : MyParser::base_type(Parser, "Parser") { using boost::spirit::qi::uint_; using boost::spirit::qi::_val; using boost::spirit::qi::_1; using boost::phoenix::at_c; using boost::phoenix::push_back; Parser = *( aParser [push_back(at_c<0>(_val), _1)] | bParser [push_back(at_c<1>(_val), _1)] ); aParser = "a=" >> uint_ >> "\r\n"; bParser = "b=" >> uint_ >> "\r\n"; } boost::spirit::qi::rule<Iterator, MyStruct()> Parser; boost::spirit::qi::rule<Iterator, unsigned int()> aParser, bParser; }; int main() { using boost::spirit::qi::phrase_parse; std::string input("a=0\r\nb=7531\r\na=2\r\na=3\r\nb=246\r\n"); std::string::const_iterator begin = input.begin(); std::string::const_iterator end = input.end(); MyParser<std::string::const_iterator> parser; MyStruct result; bool succes = phrase_parse(begin, end, parser, "", result); assert(succes); std::cout << "===A===\n" <<result.m_aList << "===B===\n" << result.m_bList << std::endl; } In practice there are more fields with different types which need to be parsed. 
My objection with this approach lies in the following expression: [push_back(at_c<0>(_val), _1)] Here is a 'hidden dependency' between the assignment and the first element of MyStruct. This makes the code fragile to changes. If the struct is changed it might still compile, but no longer do what is expected. I'm hoping for a construction like: [push_back(at_c<0>bind(&MyStruct::aList, arg1)(_val), _1)] See this. So that it is really bound by name. Is something like this possible? Or should I take a total different approach?
Phoenix allows you to bind data members as well, so you can write:

Parser = *(
    aParser [push_back(bind(&MyStruct::m_aList, _val), _1)]
  | bParser [push_back(bind(&MyStruct::m_bList, _val), _1)]
);

Moreover, in this case you don't need the FUSION_ADAPT magic for your structure anymore.
3,290,408
3,290,530
How to list threads opened by every application in Linux?
Is there a way to know, in real time, which threads are open and which application opened them?
You can look in /proc/<PID>/task/ (where <PID> is a process-ID) which will have a number of subdirectories, each with the name equal to the thread-ID of one of the threads in that task. Note that this is only sort-of real-time though -- unless you were to "freeze" the entire system for the duration, the information you get can always be stale, because a process may create or destroy threads even as you're looking at the data.
3,290,561
3,358,035
Problems accessing uccapi.dll COM interface C++
I'm working on a project involving the Microsoft Unified Communications Client API; uccapi.dll. I'm also using Codegear C++Builder 2010, not Visual Studio. After registering the dll with regsvr32 and importing it as type library into C++Builder 2010, uccapi_tlb- and uccapi_ocx-files were generated. When having imported these into my new project I'm trying to follow the msdn guideline for creating a Office Communicator Client able of signing into the Office Communication server. In this regard I have two questions: What is the correct way of accessing the com-interfaces made available through the ocx? I've so far found several ways of creating access points, such as. TCOMIUccPlatform plat; plat = CoUccPlatform::Create(); and IUccPlatformPtr im; im = CreateComObject(CLSID_UccPlatform); and IUccPlatform* pIUccPlatform; hr = CoCreateInstance(CLSID_UccPlatform, NULL, CLSCTX_INPROC_SERVER, __uuidof(IUccPlatform), (void**)&pIUccPlatform); and IUccPlatformPtr pIPlat; pIPlat.CreateInstance(__uuidof(IUccPlatform)); The three first seem to work well. The latter will give me an Assertion failed: intf!=0 error with 0×40000015 exception. Using any of the three top ones I can access methods and initialize the platform interface. However when trying any of the same tactics to access any other interface, such as IUccContext, IUccUriManager or IUccUri, all of which have a clsid defined in the _tlb.h file, I either get a "class not registered" error in the first two cases, or a hresult failure in the third case. Which brings me to my next question. Using ole-viewer all interfaces are registered as they should. Why wouldn't all co-creatable classes in the dll be registered when registering the dll? And what could be the reasons why don't they act similarly? 
Edit1 from UCCAPILib_tlb.h: // // COCLASS DEFAULT INTERFACE CREATOR // CoClass : UccPlatform // Interface: TCOMIUccPlatform // typedef TCoClassCreatorT<TCOMIUccPlatform, IUccPlatform, &CLSID_UccPlatform, &IID_IUccPlatform> CoUccPlatform; // // COCLASS DEFAULT INTERFACE CREATOR // CoClass : UccUriManager // Interface: TCOMIUccUriManager // typedef TCoClassCreatorT<TCOMIUccUriManager, IUccUriManager, &CLSID_UccUriManager, &IID_IUccUriManager> CoUccUriManager;
This issue is already being discussed in detail in the Embarcadero forums.
3,290,729
3,306,698
CodeSourcery giving compilation error: missing bits/c++config.h
In my project I'm making use of the Eigen C++ library for linear algebra. ONLY when I turn on the vectorization flags (-mfpu=neon -mfloat-abi=softfp) for ARM NEON, I get a compiler error - c++config.h no such file or directory. I'm not able to understand what's going wrong; what is this bits/c++config.h? What should I do to fix this problem? Vikram

main.c

#include <iostream>
#include <Eigen/Core>

// import most common Eigen types
using namespace Eigen;

int main(int, char *[])
{
    Matrix4f m3;
    m3 << 1, 2, 3, 0,
          4, 5, 6, 0,
          7, 8, 9, 0,
          0, 0, 0, 0;
    Matrix4f m4;
    asm("#begins here");
    m4 = m3*m3;
    asm("#ends here");
    std::cout << "m3\n" << m3 << "\nm4:\n" << m4 << std::endl;
    std::cout << "DONE!!";
}

makefile

CPP= /home/ubuntu/CodeSourcery/Sourcery_G++/bin/arm-none-linux-gnueabi-c++

all: main

main: main.cpp
	$(CPP) -mfpu=neon -mfloat-abi=softfp -I /home/ubuntu/Documents/eigen/ main.cpp -o main

clean:
	rm -rf *o main

Errors

**** Build of configuration Debug for project Test_Eigen ****
make all
/home/ubuntu/CodeSourcery/Sourcery_G++/bin/arm-none-linux-gnueabi-c++ -mfpu=neon -mfloat-abi=softfp -I /home/ubuntu/Documents/eigen/ main.cpp -o main
In file included from main.cpp:1:
/home/ubuntu/CodeSourcery/Sourcery_G++/bin/../lib/gcc/arm-none-linux-gnueabi/4.4.1/../../../../arm-none-linux-gnueabi/include/c++/4.4.1/iostream:39: fatal error: bits/c++config.h: No such file or directory
compilation terminated.
make: *** [main] Error 1
I got a response from the CodeSourcery team. The problem was caused because I had not installed all the add-ons. Installing the add-ons is a very simple step: if you are running in CodeSourcery's Eclipse environment, just go to Help > Install New Software; after that it's pretty straightforward (for more, follow the 3rd chapter of the getting-started guide). Once the add-ons were installed, I stopped getting the fatal error: bits/c++config.h: No such file or directory. (Read more about the compiler options in the 3rd chapter.)
3,290,780
3,290,814
Does myVector.erase(myObject) call delete on myObject?
Similar to this question, but with objects instead of pointers. If I have the following code:

Foo f;
vector<Foo> vect;
vect.push_back(f);
vect.erase(vect.begin());

Where does my object go? Is delete called on it? What if someone else holds a pointer to it? Is this a memory leak?
push_back stores a copy of f in the vector, and erase destroys it. f itself is not affected by that. All pointers, references and iterators to an element in a vector are invalidated when you erase it. Using them to access the element after erase yields undefined behavior.
3,290,874
3,290,933
Consequences of this buffer overflow?
So here I believe I have a small buffer overflow problem I found when reviewing someone else's code. It immediately struck me as incorrect, and potentially dangerous, but admittedly I couldn't explain the ACTUAL consequences of this "mistake", if any. I had written up a test app to demonstrate the error, but found (to my dismay) that it seems to run correctly regardless of the overflow. I want to believe that this is just by chance, but wanted some feedback to determine if my thinking were wrong, or if there truly is a problem here that just isn't showing its head in my test app. The problem code (I think it is, anyway):

char* buffer = new char[strlen("This string is 27 char long" + 1)];
sprintf(buffer, "This string is 27 char long");

Now, the reason this stood out to me and I want to flag it as a possible buffer overflow is because of the first strlen. Due to pointer arithmetic, the 'incorrect' placement of the + 1 will cause the strlen to return 26 instead of 27 (taking the length of "his string is 27 char long"). sprintf, I believe, then prints 27 char into the buffer and has caused a buffer overflow. Is that a correct assessment? I wrote a test app to demonstrate this for the person whose code I was looking at, and found that even in the debugger the string will print correctly. I also attempted putting other variables on the stack and heap before and after this code to see if I could affect neighboring areas of memory, but was still receiving correct output. I realize that my newly allocated heap memory might not be adjacent, which would explain the lack of useful overflow, but I just really wanted to confirm with others' opinions if this is in fact an issue. Since this is a pretty simple "question", it'd be nice if you could support your answer with some sort of reference as well. While I value and welcome your input, I'm not going to accept "yes it is" as the final answer. Thank you kindly in advance.
Update: Many good answers with a lot of additional insight. Unfortunately, I can't accept them all. Thank you for sharing your knowledge and for being my 'second opinion'. I appreciate the help.
Your assessment is correct. [edit] with the addition of the correction mentioned by James Curran.[/edit] Likely, your test app didn't show the problem because the allocation is rounded up to the next multiple of 4, 8 or 16 (which are common allocation granularities). This means you should be able to demonstrate with a 31 character long string. Alternatively, use an "instrumenting" native memory profiler that can place guard bytes closely around such an allocation.
3,291,047
3,291,315
How do I print the string which __FILE__ expands to correctly?
Consider this program:

#include <stdio.h>

int main()
{
    printf("%s\n", __FILE__);
    return 0;
}

Depending on the name of the file, this program works - or not. The issue I'm facing is that I'd like to print the name of the current file in an encoding-safe way. However, in case the file has funny characters which cannot be represented in the current code page, the compiler yields a warning (rightfully so):

?????????.c(3) : warning C4566: character represented by universal-character-name '\u043F' cannot be represented in the current code page (1252)

How do I tackle this? I'd like to store the string given by __FILE__ in e.g. UTF-16 so that I can properly print it on any other system at runtime (by converting the stored UTF-16 representation to whatever the runtime system uses). To do so, I need to know:

What encoding is used for the string given by __FILE__? It seems that, at least on Windows, the current system code page (in my case, Windows-1252) is used - but this is just guessing. Is this true?

How can I store the UTF-8 (or UTF-16) representation of that string in my source code at build time?

My real-life use case: I have a macro which traces the current program execution, writing the current source code/line number information to a file. It looks like this:

struct LogFile {
    // Write message to file. The file should contain the UTF-8 encoded data!
    void writeMessage( const std::string &msg );
};

// Global function which returns a pointer to the 'active' log file.
LogFile *activeLogFile();

#define TRACE_BEACON activeLogFile()->write( __FILE__ );

This breaks in case the current source file has a name which contains characters which cannot be represented by the current code page.
You can use the token pasting operator, like this: #define WIDEN2(x) L ## x #define WIDEN(x) WIDEN2(x) #define WFILE WIDEN(__FILE__) int main() { wprintf(L"%s\n", WFILE); return 0; } (Note that wprintf expects a wide format string, hence the L prefix on "%s\n".)
3,291,167
3,291,411
How can I take a screenshot in a windows application?
How can I take a screenshot of the current screen using Win32?
HDC hScreenDC = GetDC(nullptr); // CreateDC("DISPLAY",nullptr,nullptr,nullptr); HDC hMemoryDC = CreateCompatibleDC(hScreenDC); int width = GetDeviceCaps(hScreenDC,HORZRES); int height = GetDeviceCaps(hScreenDC,VERTRES); HBITMAP hBitmap = CreateCompatibleBitmap(hScreenDC,width,height); HBITMAP hOldBitmap = static_cast<HBITMAP>(SelectObject(hMemoryDC,hBitmap)); BitBlt(hMemoryDC,0,0,width,height,hScreenDC,0,0,SRCCOPY); hBitmap = static_cast<HBITMAP>(SelectObject(hMemoryDC,hOldBitmap)); DeleteDC(hMemoryDC); ReleaseDC(nullptr,hScreenDC); // a DC obtained with GetDC must be released with ReleaseDC, not DeleteDC
3,291,218
3,291,854
Convert Mouse Points to Quadratic BSplines
I'm writing a drawing program. I'm trying to take an ordered list mouse positions, and approximate a smooth Quadratic BSpline Curve. Does anyone know how to accomplish this? Thanks!
"B-spline curve fitting based on adaptive curve refinement using dominant points" by Park & Lee and "Fair interpolation and approximation of B-splines by energy minimization and points insertion" by Vassilev seem to be solving this problem. Also there look like a few references on the first link that should help you. Converting data points to control points in areas of high curvature and removing data points in areas of little curvature is a general approach.
3,291,258
3,291,319
square-root and square of vector doubles in C++
I'd like to calculate the square and square-root of a vector of doubles. For example given: vector<double> Array1(10,2.0); vector<double> Array2(10,2.0); for(unsigned int i=0; i<Array1.size(); i++) Array1[i] = sqrt(Array1[i]); for(unsigned int i=0; i<Array2.size(); i++) Array2[i] = Array2[i] * Array2[i]; Is there a way of doing the above using a STL function such as transform? Perhaps there is an in-built sqrt function that acts on arrays?
Same answer as your previous question... static inline double computeSquare (double x) { return x*x; } ... std::transform(Array1.begin(), Array1.end(), Array1.begin(), (double(*)(double)) sqrt); std::transform(Array2.begin(), Array2.end(), Array2.begin(), computeSquare); (The (double(*)(double)) cast is to force the sqrt function to use the double variant — it's an overloaded function. You could use std::ptr_fun<double, double>(sqrt) to avoid the cast.)
3,291,440
3,291,697
How to write only regularly spaced items from a char buffer to disk in C++
How can I write only every third item in a char buffer to file quickly in C++? I get a three-channel image from my camera, but each channel contains the same info (the image is grayscale). I'd like to write only one channel to disk to save space and make the writes faster, since this is part of a real-time, data collection system. C++'s ofstream::write command seems to only write contiguous blocks of binary data, so my current code writes all three channels and runs too slowly: char * data = getDataFromCamera(); int dataSize = imageWidth * imageHeight * imageChannels; std::ofstream output; output.open( fileName, std::ios::out | std::ios::binary ); output.write( data, dataSize ); I'd love to be able to replace the last line with a call like: int skipSize = imageChannels; output.write( data, dataSize, skipSize ); where skipSize would cause write to put only every third into the output file. However, I haven't been able to find any function that does this. I'd love to hear any ideas for getting a single channel written to disk quickly. Thanks.
Let's say your buffer is 24-bit RGB, and you're using a 32-bit processor (so that operations on 32-bit entities are the most efficient). For the most speed, let's work with a 12-byte chunk at a time. In twelve bytes, we'll have 4 pixels, like so: AAABBBCCCDDD Which is 3 32-bit values: AAAB BBCC CDDD We want to turn this into ABCD (a single 32-bit value). We can create ABCD by applying a mask to each input and ORing. ABCD = A000 | 0BC0 | 000D In C++, with a little-endian processor, I think it would be: unsigned int turn12grayBytesInto4ColorBytes( unsigned int buf[3] ) { return (buf[0]&0x000000FF) // mask seems reversed because of little-endianness | (buf[1]&0x00FFFF00) | (buf[2]&0xFF000000); } It's probably fastest to do this another conversion to another buffer and THEN dump to disk, instead of going directly to disk.
3,291,507
3,291,787
Does explicitly calling destructor result in Undefined Behavior here?
In my opinion, the following code (from some C++ question) should lead to UB, but it seems it is not. Here is the code: #include <iostream> using namespace std; class some{ public: ~some() { cout<<"some's destructor"<<endl; } }; int main() { some s; s.~some(); } and the answer is: some's destructor some's destructor I learned from the C++ FAQ Lite that we should not explicitly call the destructor. I think after the explicit call to the destructor, the object s should be deleted. The program automatically calls the destructor again when it's finished, so it should be UB. However, I tried it on g++, and got the same result as the above answer. Is it because the class is too simple (no new/delete involved)? Or is it not UB at all in this case?
The behavior is undefined because the destructor is invoked twice for the same object: Once when you invoke it explicitly Once when the scope ends and the automatic variable is destroyed Invoking the destructor on an object whose lifetime has ended results in undefined behavior per C++03 §12.4/6: the behavior is undefined if the destructor is invoked for an object whose lifetime has ended An object's lifetime ends when its destructor is called per §3.8/1: The lifetime of an object of type T ends when: — if T is a class type with a non-trivial destructor (12.4), the destructor call starts, or — the storage which the object occupies is reused or released. Note that this means if your class has a trivial destructor, the behavior is well-defined because the lifetime of an object of such a type does not end until its storage is released, which for automatic variables does not happen until the end of the function. Of course, I don't know why you would explicitly invoke the destructor if it is trivial. What is a trivial destructor? §12.4/3 says: A destructor is trivial if it is an implicitly-declared destructor and if: — all of the direct base classes of its class have trivial destructors and — for all of the non-static data members of its class that are of class type (or array thereof), each such class has a trivial destructor. As others have mentioned, one possible result of undefined behavior is your program appearing to continue running correctly; another possible result is your program crashing. Anything can happen and there are no guarantees whatsoever.
3,291,519
3,548,766
Weird behavior of UuidCreateSequential
I have software that runs on over 2,000 computers at my company, without any issues. This software, at some point, generates a GUID (or UUID) using UuidCreateSequential() (MSDN link). The call usually returns RPC_S_OK on every computer. But on one of them, it always returns RPC_S_UUID_LOCAL_ONLY. The documentation states that: The UuidCreateSequential function returns RPC_S_UUID_LOCAL_ONLY when the originating computer does not have an ethernet/token ring (IEEE 802.x) address. However, there seem to be no networking issues with this computer. It has a network card with both a valid and unique MAC address and IP address, and it is working perfectly. What else could cause UuidCreateSequential() to always return RPC_S_UUID_LOCAL_ONLY? Have you ever experienced a similar situation? If it can help, the computer which has the issue runs an up-to-date Windows XP, with Service Pack 3.
I contacted Microsoft and it seems that the bug occurs only on Windows XP, when the first byte of the MAC address is greater than or equal to 0x80. This has been fixed for Windows Vista and Windows 7. It won't be fixed for Windows XP.
3,291,521
3,324,173
How are the Poco C++ events handled?
Let's say I have a Poco::Thread: thread Parent has an event handler method within it. The parent then spawns two child threads, which are given events that the parent subscribes the event handler to. So two events both have the same event handler attached. If Child A triggers its event, and Parent starts to execute it, what would happen if Child B triggered its event before Parent was finished? Are these requests queued up automatically, or do I have to lock everything myself?
Event delegates are called within the thread of the caller (unless you're using notifyAsync()), so in the case of multiple threads triggering the same event you'll have to take care of synchronization in your event handlers yourself.
3,291,568
3,291,620
"import" a definition of a function from a base class to implement abstract interface (multiple inheritance in C++)
Say we have a class inheriting from two base classes (multiple inheritance). Base class A is abstract, declaring a pure virtual function foo, the other base class B declares and implements a function foo of the very same signature. struct A { virtual void foo(int i) = 0; }; struct B { virtual void foo(int i) {} }; struct C : public A, public B {}; I want to use the implementation of foo from base class B in my derived class C. However, if I do not implement the function foo a second time in my derived class C, I cannot instantiate any object of it (it remains abstract). Virtual inheritance does not help here as expected (class A and class B have no common base class). I wonder if there is a way to "import" the implementation of foo from class B into class C in order not to have to repeat the same code. Above example is of course contrived. The reason I want implement foo in class B is that I want to derive class D : public B and use class Bs implementation of foo. I know that inheritance is not (primarily) intended for code reuse, but I'd still like to use it in that way.
In java, your sample code works. In C++ it doesn't. A subtle difference between those languages. Your best option in C++ is to define C::foo() by forwarding to B::foo(): struct C : public A, public B { virtual void foo(int i) { B::foo(i); } };
3,291,644
3,291,788
C++: Creating an uninitialized placeholder variable rather than a default object
I'm moving from Java to C++ right now and I'm having some difficulties whenever a commonly used concept in Java doesn't map directly into C++. For instance, in Java I would do something like: Fruit GetFruit(String fruitName) { Fruit fruit; if(fruitName == "apple") fruit = new Fruit("apple"); else if(fruitName == "banana") fruit = new Fruit("banana"); else fruit = new Fruit("kumquat"); //'cause who really wants to eat a kumquat? return fruit; } Of course, in C++ the Fruit fruit; statement actually creates a fruit. Does this mean I have to have a default constructor? This seems unsafe! What if my default fruit escaped?
C++ gives you much more headache when it comes to creating fruits. Depending on your needs, you can choose one of the following options: 1) create a Fruit on the stack and return a copy (you need a copy constructor) and must provide some default fruit in case the name does not match: Fruit GetFruit(const std::string &name) { if ( name == "banana" ) return Fruit("banana"); if ( name == "apple" ) return Fruit("apple"); return Fruit("default"); } 2) create a Fruit on the heap and take care that there could be a null pointer returned, and also remember to delete this fruit somewhere, and take care that it is deleted only once and only by its owner (and take care that no one holds a pointer to the deleted fruit): Fruit* GetFruit(const std::string &name) { if ( name == "banana" ) return new Fruit("banana"); if ( name == "apple" ) return new Fruit("apple"); return NULL; } 3) and finally, use a smart pointer to avoid many possible pointer problems (but take care of null pointers). This option is the closest to your Java programming experience (note that boost::shared_ptr's constructor is explicit, so the raw pointer has to be wrapped explicitly): typedef boost::shared_ptr<Fruit> FruitRef; FruitRef GetFruit(const std::string &name) { if ( name == "banana" ) return FruitRef(new Fruit("banana")); if ( name == "apple" ) return FruitRef(new Fruit("apple")); return FruitRef(); }
3,291,923
3,292,143
How to get rgb value by cimg?
CImg<unsigned char> src("image.jpg"); int width = src.width(); int height = src.height(); unsigned char* ptr = src.data(10,10); How can I get rgb from ptr?
From the CImg documentation -- section 6.13 on page 34, and section 8.1.4.16 on page 120 -- it looks like the data method can take four arguments: x, y, z, and c: T* data(const unsigned int x, const unsigned int y = 0, const unsigned int z = 0, const unsigned int c = 0) ...where c refers to the color channel. I'm guessing that if your image is indeed an RGB image, then using values of 0, 1, or 2 for c will give you the red, green, and blue components at a given x, y location. For example: unsigned char *r = src.data(10, 10, 0, 0); unsigned char *g = src.data(10, 10, 0, 1); unsigned char *b = src.data(10, 10, 0, 2); (But this is just a guess!) Edit: It looks like there's also an operator() for CImg that works in a similar manner: unsigned char r = src(10, 10, 0, 0);
3,291,992
3,292,104
Shunting-yard: missing argument to operator
I'm implementing the shunting-yard algorithm. I'm having trouble detecting when there are missing arguments to operators. The wikipedia entry is very bad on this topic, and their code also crashes for the example below. For instance 3 - (5 + ) is incorrect because the + is missing an argument. Just before the algorithm reaches the ), the operator stack contains - ( + and the operand stack contains 3 5. Then it goes like this: it pops + from the operator stack discovers that + is a binary operator pops two operands, apply operator and push result (8) to operand stack then it pops the matching ( from the stack, and continues So how can I detect that the + is missing an argument? extra kudos if you also update wikipedia :-)
For binary operator only expressions, the postfix expression has the invariant that in any prefix of the expression, numbers of operands > numbers of operators and in the end, that difference is exactly one. So you can verify the RPN expression for validity at each stage of the shunting yard by maintaining a running count of number of operands - number of operators. If that drops below one, or becomes more than one at the end, you have an error. It does not pinpoint the error, but at least lets you know there is one. (Note: I haven't tried proving the above fact, but seems like it will work)
3,292,103
3,380,768
How do I use a class wstringstream variable?
I have a std::wstringstream that I'm using as sort of a buffer in my class and it is used by a good portion of the methods in this class. However, when I try to do something like this: #include <sstream> class foo { public: void methodA(int x, int y); // Uses mBufferStream void methodB(int x, int y); // Uses mBufferStream private: std::wstringstream mBufferStream; }; I get the following error: error C2248: 'std::basic_ios<_Elem,_Traits>::basic_ios' : cannot access private member declared in class 'std::basic_ios<_Elem,_Traits>' This isn't my exact class obviously, but it is the same setup. Any thoughts as to what I may be doing wrong? I am using Microsoft Visual Studio 2005. [Edit] showing use in method body in .cpp file (as an example of it's use): void foo::methodA(int x, int y) { mBufferStream << "From " << x << " To " << y; externalfunction(mBufferStream.str()); // Prints to message service mBufferStream.str(L""); }
This is because the compiler is implicitly declaring a copy constructor for class foo. std::wstringstream is noncopyable, because it inherits from ios_base. Change your class to this: #include <sstream> class foo { public: void methodA(int x, int y); // Uses mBufferStream void methodB(int x, int y); // Uses mBufferStream private: std::wstringstream mBufferStream; foo(const foo&); // noncopyable void operator=(const foo&); // noncopyable }; and the compiler should point you at the culprit.
3,292,107
3,292,157
What's the difference between istringstream, ostringstream and stringstream? / Why not use stringstream in every case?
When would I use std::istringstream, std::ostringstream and std::stringstream and why shouldn't I just use std::stringstream in every scenario (are there any runtime performance issues?). Lastly, is there anything bad about this (instead of using a stream at all): std::string stHehe("Hello "); stHehe += "stackoverflow.com"; stHehe += "!";
Personally, I find it very rare that I want to perform streaming into and out of the same string stream. Usually I want to either initialize a stream from a string and then parse it; or stream things to a string stream and then extract the result and store it. If you're streaming to and from the same stream, you have to be very careful with the stream state and stream positions. Using 'just' istringstream or ostringstream better expresses your intent and gives you some checking against silly mistakes such as accidental use of << vs >>. There might be some performance improvement but I wouldn't be looking at that first. There's nothing wrong with what you've written. If you find it doesn't perform well enough, then you could profile other approaches, otherwise stick with what's clearest. Personally, I'd just go for: std::string stHehe( "Hello stackoverflow.com!" );
3,292,145
3,292,273
C++ Template Specialization Compilation
I'm going to outline my problem in detail to explain what I'm trying to achieve, the question is in the last paragraph if you wish to ignore the details of my problem. I have a problem with a class design in which I wish to pass a value of any type into push() and pop() functions which will convert the value passed into a string representation that will be appended to a string inside the class, effectively creating a stream of data. The reverse will occur for pop(), taking the stream and converting several bytes at the front of the stream back into a specified type. Making push() and pop() templates tied with stringstream is an obvious solution. However, I wish to use this functionality inside a DLL in which I can change the way the string is stored (encryption or compression, for example) without recompilation of clients. A template of type T would need to be recompiled if the algorithm changes. My next idea was to just use functions such as pushByte(), pushInt(), popByte(), popInt() etc. This would allow me to change the implementation without recompilation of clients, since they rely only on a static interface. This would be fine. However, it isn't so flexible. If a value was changed from a byte to a short, for example, all instances of pushByte() corresponding to that value would need to be changed to pushShort(), similarly for popByte() to popShort(). Overloading pop() and push() to combat this would cause conflicts in types (causing explicit casting, which would end up causing the same problem anyway). With the above ideas, I could create a working class. However, I wondered how specialized templates are compiled. If I created push<byte>() and push<short>(), it would be a type specific overload, and the change from byte to short would automatically switch the template used, which would be ideal. Now, my question is, if I used specialized templates only to simulate this kind of overloading (without a template of type T), would all specializations compile into my DLL allowing me to dispatch a new implementation without client recompilation? Or are specialized templates selected or dropped in the same way as a template of type T at client compilation time?
First of all, you can't just have specialized templates without a base template to specialize. It's just not allowed. You have to start with a template, then you can provide specializations of it. You can explicitly instantiate a template over an arbitrary set of types, and have all those instantiations compiled into your DLL, but I'm not sure this will really accomplish much for you. Ultimately, templates are basically a compile-time form of polymorphism, and you seem to need (at least a limited form of) run-time polymorphism. I'd probably just use overloading. The problem that I'd guess you're talking about arises with something on the order of: int a; byte b; a = pop(); b = pop(); Where you'd basically just be overloading pop on the return type (which, as we all know, isn't allowed). I'd avoid that pretty simply -- instead of returning the value, pass a reference to the value to be modified: int a; byte b; pop(a); pop(b); This not only lets overload resolution work, but at least to me looks cleaner as well (though maybe I've just written too much assembly language, so I'm accustomed to things like "pop ax").
3,292,259
3,292,304
Can I make Visual Studio create the Debug DLL as XXXd.DLL instead of XXX.DLL?
I have found a solution for this, but it only works if you use .DEF files (I don't). I wonder if this can be done without .DEF files.
Project > Properties. Then Configuration Properties > Linker > General > Output file. Here you should have something like: $(OutDir)\$(ProjectName).dll just put $(OutDir)\$(ProjectName)d.dll
3,292,537
3,292,623
How to make a function call bulletproof?
I need to call a function (an LLVM JIT to be specific) from a C++ application. This call might fail or even signal abort() or exit(). How can I avoid or at least reduce effects on my host application? Someone suggested using fork(), however I need a solution for both windows and posix. Even if I would use fork() ... would it be possible for the two processes to communicate (pass some pointers around)?
You basically have to isolate the call that might fail spectacularly, so yes, you probably have to create a separate process for it. I'd actually be tempted to create a small executable just containing this particular call and the necessary supporting functionality and call that from your main executable. This gets you around the lack of fork() on Windows and allows you to use the same mechanisms to communicate. You can't pass pointers around between processes as they're not sharing the same address space. What I would do is have the spawned process reading data from stdin and write to stdout with the controlling process piping data into the child's stdin and reading from the child's stdout. Basically the way a Unix (command line) filter works. Another alternative if you're passing around a lot of data would be to write/read to/from a file on disk (better, a RAM disk) and communicate that way, but unless you're talking a lot of data, that's overkill. As Eugen pointed out in the comments, you can also use shared memory if you want to pass pointers around or another inter-process communication mechanism depending on how much data you need to pass around. That said, choose the simplest possible method as nested executables like these aren't that easy to debug in the first place.
3,292,683
3,292,870
Where is a reference to regex header on MSDN?
I can include this file directly now without tr1 in VS 2010 but can't find description of this file anywhere on MSDN. Where is a reference to regex header on MSDN?
The regex description on MSDN: http://msdn.microsoft.com/en-us/library/bb982382.aspx Basically, you create "basic_regex" objects, then call the "regex_match" or "regex_replace" functions
3,292,700
3,292,729
uint64 flag or uint32 flag[2] for function argument to 32bit compiler?
I have a uint32 variable for some bit field flag state and an enum for these flags. This variable is passed as argument to some functions by value. I need to add about 20 more flags to it. What are my best options? Extend it to 64 bits or treat it as an array of 2 32bits? Extending to 64 bits will lead me to use compiler 'extensions', as enum is 32bits wide. But using an array seems to lead to more work. My code compiles on 32 and 64 bit compilers, and runs on windows, linux and mac. Thanks in advance.
I would use an STL bitset. Then my enum wouldn't have to be a power of 2. With normal enumeration, it would just be the index into the bitset. You could also use a vector<bool>. The chief benefit would be portability since your flag set would now fit in any platform's normal enum range. example: enum errorFlags { parity, framing, overrun, errorFlags_size }; std::bitset<errorFlags_size> flagSet; . . . // either of these set the framing error condition flagSet[framing] = true; flagSet.set(framing); // either of these clear the framing error condition flagSet[framing] = false; flagSet.reset(framing); // this tests for the framing error condition if (flagSet[framing]) { // we have a framing error }
3,292,795
3,292,852
How to declare a templated struct/class as a friend?
I'd like to do the following: template <typename T> struct foo { template <typename S> friend struct foo<S>; private: // ... }; but my compiler (VC8) chokes on it: error C3857: 'foo<T>': multiple template parameter lists are not allowed I'd like to have all possible instantiations of template struct foo friends of foo<T> for all T. How do I make this work ? EDIT: This template <typename T> struct foo { template <typename> friend struct foo; private: // ... }; seems to compile, but is it correct ? Friends and templates have very unnatural syntax.
template<typename> friend class foo; This will, however, make all instantiations of the template friends of each other. But I think this is what you want?
3,292,811
3,292,828
C# - DLLImport and function default values
I'm interfacing with a native 3rd party C++ DLL via C# and the provided interop layer looks like below: C#: [DllImport("csvcomm.dll")] public static extern int CSVC_ValidateCertificate(byte[] certDER, int length); C++: CSVC_Status_t CSVCOMM_API CSVC_ValidateCertificate(BYTE* certDER, DWORD length, DWORD context = CONTEXT_DEFAULT); Note, there are only two parameters in the C# extern definition since the the C++ function provides a default value for the third parameter. Is this correct? I was receiving some non-deterministic results when using the provided definition, but when I added the third parameter like below, it seems to be working correctly each time rather than sporadically. [DllImport("csvcomm.dll")] public static extern int CSVC_ValidateCertificate(byte[] certDER, int length, int context); Any ideas? Would the addition of the 3rd parameter really fix this issue?
The optional parameter in C++ is resolved at compile time. When you call into this via P/Invoke, you need to always specify all three parameters. If you want to have an optional parameter, you'll need to make a C# wrapper around this method with an overload that provides the optional support (or a C# 4 optional parameter). The actual call into the C++ library should always specify all three arguments, however.
3,292,862
3,294,786
Floating Point Div/Mul > 30 times slower than Add/Sub?
I recently read this post: Floating point vs integer calculations on modern hardware and was curious as to the performance of my own processor on this quasi-benchmark, so I put together two versions of the code, one in C# and one in C++ (Visual Studio 2010 Express) and compiled them both with optimizations to see what falls out. The output from my C# version is fairly reasonable: int add/sub: 350ms int div/mul: 3469ms float add/sub: 1007ms float div/mul: 67493ms double add/sub: 1914ms double div/mul: 2766ms When I compiled and ran the C++ version something completely different shook out: int add/sub: 210.653ms int div/mul: 2946.58ms float add/sub: 3022.58ms float div/mul: 172931ms double add/sub: 1007.63ms double div/mul: 74171.9ms I expected some performance differences, but not this large! I don't understand why the division/multiplication in C++ is so much slower than addition/subtraction, where the managed C# version is more reasonable to my expectations. The code for the C++ version of the function is as follows: template< typename T> void GenericTest(const char *typestring) { T v = 0; T v0 = (T)((rand() % 256) / 16) + 1; T v1 = (T)((rand() % 256) / 16) + 1; T v2 = (T)((rand() % 256) / 16) + 1; T v3 = (T)((rand() % 256) / 16) + 1; T v4 = (T)((rand() % 256) / 16) + 1; T v5 = (T)((rand() % 256) / 16) + 1; T v6 = (T)((rand() % 256) / 16) + 1; T v7 = (T)((rand() % 256) / 16) + 1; T v8 = (T)((rand() % 256) / 16) + 1; T v9 = (T)((rand() % 256) / 16) + 1; HTimer tmr = HTimer(); tmr.Start(); for (int i = 0 ; i < 100000000 ; ++i) { v += v0; v -= v1; v += v2; v -= v3; v += v4; v -= v5; v += v6; v -= v7; v += v8; v -= v9; } tmr.Stop(); // I removed the bracketed values from the table above, they just make the compiler // assume I am using the value for something so it doesn't optimize it out. cout << typestring << " add/sub: " << tmr.Elapsed() * 1000 << "ms [" << (int)v << "]" << endl; tmr.Start(); for (int i = 0 ; i < 100000000 ; ++i) { v /= v0; v *= v1; v /= v2; v *= v3; v /= v4; v *= v5; v /= v6; v *= v7; v /= v8; v *= v9; } tmr.Stop(); cout << typestring << " div/mul: " << tmr.Elapsed() * 1000 << "ms [" << (int)v << "]" << endl; } The code for the C# tests is not generic, and is implemented thus: static double DoubleTest() { Random rnd = new Random(); Stopwatch sw = new Stopwatch(); double v = 0; double v0 = (double)rnd.Next(1, int.MaxValue); double v1 = (double)rnd.Next(1, int.MaxValue); double v2 = (double)rnd.Next(1, int.MaxValue); double v3 = (double)rnd.Next(1, int.MaxValue); double v4 = (double)rnd.Next(1, int.MaxValue); double v5 = (double)rnd.Next(1, int.MaxValue); double v6 = (double)rnd.Next(1, int.MaxValue); double v7 = (double)rnd.Next(1, int.MaxValue); double v8 = (double)rnd.Next(1, int.MaxValue); double v9 = (double)rnd.Next(1, int.MaxValue); sw.Start(); for (int i = 0; i < 100000000; i++) { v += v0; v -= v1; v += v2; v -= v3; v += v4; v -= v5; v += v6; v -= v7; v += v8; v -= v9; } sw.Stop(); Console.WriteLine("double add/sub: {0}", sw.ElapsedMilliseconds); sw.Reset(); sw.Start(); for (int i = 0; i < 100000000; i++) { v /= v0; v *= v1; v /= v2; v *= v3; v /= v4; v *= v5; v /= v6; v *= v7; v /= v8; v *= v9; } sw.Stop(); Console.WriteLine("double div/mul: {0}", sw.ElapsedMilliseconds); sw.Reset(); return v; } Any ideas here?
For the float div/mul tests, you're probably getting denormalized values, which are much slower to process that normal floating point values. This isn't an issue for the int tests and would crop up much later for the double tests. You should be able to add this to the start of the C++ to flush denormals to zero: _controlfp(_DN_FLUSH, _MCW_DN); I'm not sure how to do it in C# though (or if it's even possible). Some more info here: Floating Point Math Execution Time
3,293,013
3,293,167
C++ Base class destructor order problem
Does anyone know any trick I could use to keep the Derived class alive until the base class destructor has been called? i.e.: #include <iostream.h> class Base { public: Base(){ cout<<"Constructor: Base"<<endl;} virtual ~Base(){ cout<<"Destructor : Base"<<endl;} }; class Derived: public Base { //Doing a lot of jobs by extending the functionality public: Derived(){ cout<<"Constructor: Derived"<<endl;} ~Derived(){ cout<<"Destructor : Derived"<<endl;} }; void main() { Base *Var = new Derived(); delete Var; } This will result in the Derived class being destroyed, then the Base class being destroyed. The reason I need something like this is that I have a custom Event (signal/slot) class. The Event class provides an Observer class. If I define: class A : public Event::Observer and then delete an instance of class A, ~Observer automatically removes any signal connected to this observer. But since class A is destroyed before the Observer, if something on a different thread calls a slot on A after ~A and before ~Observer gets called, everything goes to hell... I can always call the Observer.release method from ~A, which fixes the timing issue. But it would be cleaner if I didn't need to. Any ideas?
You definitely don't want to change destruction order, which is good, because you can't. What you really want to do is to dispose/disconnect/shut down the Observer. What I would do is add this to your Event::Observer class:

void Event::Observer::Shutdown()
{
    if (!isShutdown)
    {
        // Shut down any links to this observer
        isShutdown = true;
    }
}

Event::Observer::~Observer()
{
    Shutdown();
    // rest of Event::Observer destruction
}

and then:

A::~A()
{
    Shutdown();
    // clean up any other A resources
}

If you did something like IDisposable suggested by David, that would work too -- just call Observer::Dispose() in your destructor for class A. My code all assumes that you have only a single thread accessing these objects. Thread synchronization is an entirely separate subject.
3,293,062
3,293,145
How do I fix this lvalue warning?
My code is:

void main()
{
    person student[10];
    student[0].names[0] = 'C';
    student[0].names[1] = 'a';
    student[0].names[2] = 'm';
    student[0].names[3] = 'i';
    student[0].ages = 16;
    student[0].sex[0] = 'F';
    student[0].sex[1] = 'e';
    student[0].sex[2] = 'm';
    student[0].sex[3] = 'a';
    student[0].sex[4] = 'l';
    student[0].sex[5] = 'e';
    student[0].month = 8;
    student[0].day = 2;
    student[0].year = 1993;
}

All of the "student" is underlined saying expression must be a modifiable lvalue. How can I fix this?

person:

typedef struct person
{
    char names[20][10];
    char sex[6][10];
    int ages[10];
    int month[10];
    int day[10];
    int year[10];
} person;
Array usage

You say you have:

typedef struct person
{
    char names[20][10];
    char sex[6][10];
    int ages[10];
    int month[10];
    int day[10];
    int year[10];
} person;

There's no need for the [10]'s. You already have that in the person student[10] declaration, which is the proper place for the [10]. Remove the extraneous arrays:

typedef struct person
{
    char name[20];
    char sex[6];
    int age;
    int month;
    int day;
    int year;
} person;

String handling

Also your strings aren't null-terminated. In C strings need to have an extra '\0' character at the end to indicate where the end of the string is. Your name assignment, for example, should be:

student[0].name[0] = 'C';
student[0].name[1] = 'a';
student[0].name[2] = 'm';
student[0].name[3] = 'i';
student[0].name[4] = '\0';

Actually though, there's an easier way to assign to a string than to do it a character at a time. The strcpy function will copy an entire string in one go:

strcpy(student[0].name, "Cami");

Or, the easiest option of all is to use the string class available in C++. It makes string-handling a whole lot easier than the C way of manipulating character arrays. With the string class your code would look like this:

// Modified struct declaration.
typedef struct person
{
    std::string name;
    std::string sex;
    int age;
    // ...
} person;

// Modified assignment.
student[0].name = "Cami";
3,293,248
3,293,272
how to write a cast-to-reference-to-array operator for a class?
I have the following class:

template <size_t size>
class Araye
{
public:
    Araye(int input[])
    {
        for (int i = 0; i < size; i++)
            araye[i] = input[i];
    }
    int araye[size];
};

How should I write a cast-to-reference-to-array operator for this class so that the following works:

int adad[3] = {1, 2, 3};
Araye<3> araye(adad);
int (&reference)[3] = araye;
template <size_t size>
class Araye
{
public:
    typedef int (&array_ref)[size];

    operator array_ref() { return araye; }
    // ...

Or with identity (thanks Johannes):

operator typename identity<int[size]>::type&() { return araye; }

With that your example works, but I'd prefer the following declaration instead:

Araye<3>::array_ref reference = araye;

There usually should be no need for this though, as the subscripting operator should cover most needs:

int& operator[](size_t i) { return araye[i]; }

Note that if you are ok with limiting your class to being an aggregate, you could shorten your sample to the following instead:

template <size_t size>
struct Araye
{
    int araye[size];
    typedef int (&array_ref)[size];
    operator array_ref() { return araye; }
};

Araye<3> araye = {1, 2, 3};
Araye<3>::array_ref reference = araye;
3,293,279
3,293,461
How do you import an enum into a different namespace in C++?
I have an enum in a namespace and I'd like to use it as if it were in a different namespace. Intuitively, I figured I could use 'using' or 'typedef' to accomplish this, but neither actually works. Code snippet to prove it, tested on GCC and Sun CC:

namespace foo {
    enum bar { A };
}

namespace buzz {
    // Which of these two methods I use doesn't matter,
    // the results are the same.
    using foo::bar;
    //typedef foo::bar bar;
}

int main() {
    foo::bar f;           // works
    foo::bar g = foo::A;  // works
    buzz::bar x;          // works
    //buzz::bar y = buzz::A; // doesn't work
    buzz::bar z = foo::A;
}

The problem is that the enum itself is imported but none of its elements. Unfortunately, I can't change the original enum to be encased in an extra dummy namespace or class without breaking lots of other existing code. The best solution I can think of is to manually reproduce the enum:

namespace buzz {
    enum bar { A = foo::A };
}

But it violates the DRY principle. Is there a better way?
Wrap the existing namespace in a nested namespace which you then "use" in the original namespace.

namespace foo {
    namespace bar_wrapper {
        enum bar { A };
    }
    using namespace bar_wrapper;
}

namespace buzz {
    using namespace foo::bar_wrapper;
}
3,293,471
3,295,141
Accessing negative pixel values OpenCV
I am attempting to perform a zero-crossing edge detection on an image in OpenCV. I blur and use the cvLaplace() then scale it from (0, max). My question is: How can I access the pixel values in that image in such a way as to correctly identify negative values? Using the function provided by OpenCV (cvPtr2D) returns unsigned chars. Any ideas or comments? Thank you
Pixels are stored internally as IPL_DEPTH_8U, which means 8-bit unsigned char, ranging from 0 to 255. But you could also pack them as IPL_DEPTH_16S (signed integer) and even IPL_DEPTH_32F (single-precision floating-point number). cvConvertScale() probably will do the job! But if you want to convert it manually: OpenCV need to convert IPL_DEPTH_32S to IPL_DEPTH_32F

The basic idea is to create a new image with cvCreateImage() and the format you need, then use cvConvertScale() to copy the original data to the new format. In the end, your code might look something like the following:

IplImage* img = cvLoadImage("file.png", CV_LOAD_IMAGE_UNCHANGED);
// then retrieve size of loaded image to create the new one
IplImage* new_img = cvCreateImage(img_size, IPL_DEPTH_16S, 1);
cvConvertScale(img, new_img, 1/255.0, -128);

I think this answers the question of the thread. Answering your comment, you could access the pixel information like this:

IplImage* pRGBImg = cvLoadImage(input_file.c_str(), CV_LOAD_IMAGE_UNCHANGED);
int width = pRGBImg->width;
int height = pRGBImg->height;
int bpp = pRGBImg->nChannels;

for (int i = 0; i < width*height*bpp; i += bpp)
{
    if (!(i % (width*bpp)))  // print empty line for better readability
        std::cout << std::endl;

    std::cout << std::dec << "R:" << (int) pRGBImg->imageData[i]
              << " G:" << (int) pRGBImg->imageData[i+1]
              << " B:" << (int) pRGBImg->imageData[i+2] << " ";
}
3,293,534
3,293,624
C++ append one vector to another
I fully understand this question has been asked a lot, but I'm asking for a specific variation and my search-foo has given up, as I've only found algorithms that append one existing vector to another, but not one returned from a function. I have this function that lists all files in a directory:

vector<string> scanDir( const string& dir )

which may call itself internally (for subdirectories). I need a short way of appending the returned value to the caller's vector. I have in my mind something like this (but of course it doesn't exist :( ):

vector<string> fileList;
//...
fileList.append( scanDir(subdirname) );

I fear that storing the return value and inserting it in fileList would bring performance badness. What I mean is this:

vector<string> temp( scanDir(subdirname) );
copy( temp.begin(), temp.end(), back_inserter(fileList) );

Thanks!

PS: I'm not forcing myself to use vector; any other container that performs equally well and can prevent the potential large copy operation is fine by me.
If you're in the position to change scanDir, make it a (template) function accepting an output iterator:

template <class OutIt>
void scanDir(const std::string& dirname, OutIt it)
{
    // ...
    // Scan subdir
    scanDir(subdir, it);
    // ...
}

You'll have the additional benefit of being able to fill all sorts of data structures like

std::vector<string> vector;
scanDir(dir1, std::back_inserter(vector));

std::set<string> fileset;
scanDir(dir1, std::inserter(fileset, fileset.begin()));

etc.

EDIT (see comment ...)

For using this function for class member initialization, you could either call it in the constructor as in

class MyClass
{
private:
    std::vector<string> m_fileList;

public:
    MyClass(const std::string& dirname)
    {
        scanDir(dirname, std::back_inserter(m_fileList));
    }
};

or use a wrapper function

std::vector<string> scanDir(const std::string& dirname)
{
    std::vector<string> result;
    scanDir(dirname, std::back_inserter(result));
    return result;
}

class MyClass
{
    // Same as above..
    MyClass(const std::string& dirname) : m_fileList(scanDir(dirname)) {}
};

I would prefer the first version for performance (and other) reasons ...
3,293,796
3,293,992
typedef declaration of template class
Is there a difference, from the perspective of meta-programming for example, between the two declarations?

template<typename T>
struct matrix
{
    typedef matrix self_type;
    // or
    typedef matrix<T> self_type;
};

Thank you
In this particular situation (inside a class template), matrix is a shorthand for matrix<T>. When you write lots of hairy templates all day long while trying to fit everything in 80 columns, the shorthand is welcome. Note that you can also abbreviate method arguments:

template <typename T>
struct matrix
{
    typedef matrix my_type;
    matrix();  // constructor is abbreviated too
    matrix& operator=(matrix);
};

// Method argument types can be abbreviated too,
// but not result types.
template <typename T>
matrix<T>& matrix<T>::operator=(matrix m)
{
    // ...
}
3,293,883
3,293,917
Call class constructor from new operator on GNU - use invalid class
The closest thread to my question is here. I am trying to compile the following code with gcc:

#include <malloc.h>

class A
{
public:
    A() {}
    ~A() {}
}; // class A

int main()
{
    A* obj = (A*) malloc( sizeof(A) );
    if (obj == 0)
        return 1;

    obj->A::A();   /* error: invalid use of 'class A' */
    obj->A::~A();

    free(obj);
    return 0;
}

From the command line I compile the code with:

$ g++ -o main main.cpp
main.cpp: In function 'int main()':
main.cpp:22: error: invalid use of 'class A'

Can you please point me in the right direction?
You can't call a constructor on an object; a constructor can only be called in the creation of an object, so by definition the object can't exist yet. The way to do this is with placement new.

There's no need to cast your malloc return. It should be void* as it doesn't return a pointer to an A; only a pointer to raw memory in which you plan to construct an A. E.g.

void* mem = malloc( sizeof(A) );
A* obj = new (mem) A();
obj->~A();
free(mem);
3,294,972
3,295,024
setting max frames per second in openGL
Is there any way to calculate how many updates should be made to reach a desired frame rate, NOT system-specific? I found one for Windows, but I would like to know if something like this exists in OpenGL itself. It should be some sort of timer. Or how else can I prevent the FPS from dropping or rising dramatically? At the moment I'm testing it by drawing a large number of vertices in a line, and using Fraps I can see the frame rate go from 400 to 200 fps with an evident slowdown of the drawing.
You have two different ways to solve this problem:

1. Suppose that you have a variable called maximum_fps, which contains the maximum number of frames you want to display. Then you measure the amount of time spent on the last frame (a timer will do). Now suppose that you said that you wanted a maximum of 60 FPS in your application. Then you want the time measured to be no lower than 1/60. If the time measured is lower, then you call sleep() to reach the amount of time left for a frame.

2. Or you can have a variable called tick, that contains the current "game time" of the application. With the same timer, you increment it at each main loop of your application. Then, in your drawing routines you calculate the positions based on the tick var, since it contains the current time of the application.

The big advantage of option 2 is that your application will be much easier to debug, since you can play around with the tick variable, going forward and back in time whenever you want. This is a big plus.
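Neither approach is OpenGL-specific, since OpenGL itself provides no timing API; an ordinary clock is all you need. Below is a minimal sketch of option 1 using standard C++ <chrono>. All function names here are made up for illustration; they are not part of OpenGL or any framework.

```cpp
#include <chrono>
#include <thread>

// How long one frame is allowed to take at the given frame cap.
double target_frame_time_ms(double max_fps) {
    return 1000.0 / max_fps;  // e.g. 60 FPS -> ~16.67 ms per frame
}

// Sleep off whatever time is left in the current frame's budget.
void cap_frame_rate(std::chrono::steady_clock::time_point frame_start,
                    double max_fps) {
    using namespace std::chrono;
    auto elapsed = steady_clock::now() - frame_start;
    duration<double, std::milli> budget(target_frame_time_ms(max_fps));
    if (elapsed < budget)
        std::this_thread::sleep_for(budget - elapsed);
}
```

In the main loop you would record frame_start, do your drawing, then call cap_frame_rate(frame_start, 60.0) before swapping buffers. Note that sleep granularity varies by OS, so this caps the rate only approximately.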
3,295,053
3,296,077
(C/C++/C#) DirectX 9 Overlay, preferably the same way xfire or Steam does it
I wonder what techniques xfire and/or Steam uses to overlay into games. I'm trying to do something similar and I really would like to know what is the least intrusive way, I.e. won't alert any anti-cheat systems. I don't need any kind of information from the game (no wall-hack BS). I would basically just like to display a rectangle with custom contents in the game. PS. I have seen some examples by googling but most of them seem very old. Are they still good options?
A few days back, I had asked this question. This might help you too. Read Alan's reply and the links he's mentioned : How to overlay graphics on Windows games?
3,295,337
3,296,434
Template specialization with struct and bool
I have a template class in which I am specializing a couple of methods. For some reason, when I added a specialization for a struct, it seems to be conflicting with the specialization for bool. I am getting a type conversion error because it is trying to set the struct = bool (resolving to the wrong specialization). Here is some code.

.h:

typedef struct foo
{
    ...
}

template <class T>
class bar
{
    template <class T>
    void method1() {...}

    template <>
    void method1<bool>() {...}

    template <>
    void method1<foo>() {...}
}

.cpp:

template class bar<bool>;
template class bar<foo>;

I am getting the error inside method1<bool> because it is setting T=foo instead of resolving it to method1<foo>. Any ideas?
(EDITED) You may try the following, which delegates the method implementation to a templated helper class.

.h:

typedef struct Foo
{
    ...
}

template<class T_Bar, class T2>
struct BarMethod1;

template <class T>
class Bar
{
    template<class T2>
    void method1(...)
    {
        BarMethod1<Bar, T2>(...)(...);
    }
};

template <class T_Bar, class T2>
class BarMethod1
{
    void operator()(...) {...}
};

template <class T_Bar>
class BarMethod1<T_Bar, bool>
{
    void operator()(...) {...}
};

template <class T_Bar>
class BarMethod1<T_Bar, Foo>
{
    void operator()(...) {...}
};

.cpp:

template class Bar<bool>;
template class BarMethod1<Bar<bool>, bool>;
template class BarMethod1<Bar<bool>, Foo>;
template class Bar<Foo>;
template class BarMethod1<Bar<Foo>, bool>;
template class BarMethod1<Bar<Foo>, Foo>;
3,295,628
3,295,716
Can I compile using VS2008's C++ compiler using VS2010 and only the Server 2008 Platform SDK?
I'd rather not install the entire VS 2008 installation given that I'm not going to be using anything other than the compiler. Will VS 2010's multitargeting work correctly using only the Platform SDK instead of the full VS2008 install?
The custom setup options are not nearly fine-grained enough to allow you to leave the big chunks like the IDE out. It isn't just the SDK that's used, at least the VC subdirectory needs to be there. And bits of Common7, also the folder that contains the IDE. Rename the folders, delete them later if it works out.
3,295,690
3,295,761
What is the end of line character when reading a file in using C++ get(char& c);?
My issue is that I am making my first attempt at writing a very basic lexical analyzer for ASCII text files. So far, it reads and compares to my token list properly; however, I am unable to grab the final token without a space or pressing enter. I've tried using the delimiter ^Z (ASCII 26) as another selection before comparing the string to my token list. This failed to work. I've also tried moving the f->eof() check to below the comparison location to see if it will snag it and then check the eof flag. I've had no luck. Could anyone possibly enlighten me? The code is below for the read method. m_TokenList is just a vector of type string.

void CelestialAnalyzer::ReadInTokens(ifstream *f){
    vector<string> statement;
    vector<string> tokens;
    string token;
    char c;

    do{
        f->get(c);  // Read in each character
        if(f->eof())
            break;

        if(c == '\n' || c == ' ' || c == '^Z' || c == '\r'){  // 26 ASCII ^Z (end of file marker)
            for(unsigned int i=0; i<m_TokenList.size(); i++){
                if(!token.compare(m_TokenList[i])){
                    tokens.push_back(token);
                    token.clear();
                }
            }
        } else {
            token.push_back(c);  // Add it to the token array
        }
    } while (true);

    f->close();

    for(unsigned int i=0; i<tokens.size(); i++){
        cout << "Found Token: " << tokens[i].c_str() << endl;
    }
}

The m_TokenList is initialized as

CelestialAnalyzer::CelestialAnalyzer(){
    m_TokenList.push_back("KEY");       // Prints data
    m_TokenList.push_back("GETINPUT");  // Grabs user data
    m_TokenList.push_back("+");         // Addition/Concatenation
    m_TokenList.push_back("-");         // Subtraction
    m_TokenList.push_back("==");        // Equator
    m_TokenList.push_back("=");         // Assignment
    m_TokenList.push_back(";");         // End statement
    m_TokenList.push_back(" ");         // Blank
    m_TokenList.push_back("{");         // Open Grouping
    m_TokenList.push_back("}");         // Close Grouping
    m_TokenList.push_back("(");         // Parameter opening
    m_TokenList.push_back(")");         // Parameter closing

    for(unsigned int i=48; i<=57; i++){
        string s;
        s.push_back((char)i);
        m_TokenList.push_back(s);
        s.clear();
    }
}

A test file for reading is this simple example:

1 + 2 = KEY

It will register all but 'KEY' unless there is a space or a newline after it.
Why don't you just delete:

if(f->eof())
    break;

and use

if(f->eof() || c == '\n' || c == ' ' || c == '^Z' || c == '\r'){

then break afterwards? That way, when you hit EOF, you will add whatever remaining token you have.

Alternately, you could just check if the token is nonempty after you break out of the loop, and add it in that case.
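The suggested fix can be sketched as a standalone tokenizer. This is a simplified, hypothetical version of the reader above (it splits only on whitespace and skips the token-list matching): end-of-file is treated as just another delimiter, so the final 'KEY' is flushed even without a trailing space or newline.

```cpp
#include <istream>
#include <sstream>
#include <string>
#include <vector>

// Sketch of the fix: treat EOF like any other delimiter so the
// token still buffered when input runs out is flushed too.
std::vector<std::string> tokenize(std::istream& in) {
    std::vector<std::string> tokens;
    std::string token;
    char c;
    while (true) {
        bool at_end = !in.get(c);                  // get() fails at EOF
        if (at_end || c == '\n' || c == ' ' || c == '\r') {
            if (!token.empty()) {                  // flush pending token
                tokens.push_back(token);
                token.clear();
            }
            if (at_end)
                break;
        } else {
            token.push_back(c);
        }
    }
    return tokens;
}
```

Running it on the sample input "1 + 2 = KEY" yields all five tokens, including the trailing "KEY".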
3,295,742
3,295,778
Implementing a stack based virtual machine for a subset of C
Hello everyone. I'm currently implementing a simple programming language for the learning experience, but I'm in need of some advice. Currently I'm designing my interpreter and I've come upon a problem. My language is a subset of C and I'm having a problem regarding the stack interpreter implementation. In the language the following will compile:

somefunc () {
    1 + 2;
}

main () {
    somefunc ();
}

Now this is alright, but when "1+2" is computed the result is pushed onto a stack and then the function returns, but there's still a number on the stack, and there shouldn't be. How can I get around this problem? I've thought about saving a "state" of the stack before a function call and restoring the "state" after the function call. For example, saving the number of elements on the stack, then executing the function code, returning, and then popping from the stack until we have the same number of elements as before (or maybe +1 if the function returned something). Any ideas? Thanks for any tips!
Great question! One of my hobbies is writing compilers for toy languages, so kudos for your excellent programming taste.

An expression statement is one where the code in the statement is simply an expression. This means anything of the form <expression> ;, which includes things like assignments and function calls, but not ifs, whiles, or returns. Any expression statement will have a left-over value on the stack at the end, which you should discard.

1 + 2 is an expression statement, but so are these:

x = 5;

The assignment expression leaves the value 5 on the stack, since the result of an assignment is the value of the left-hand operand. After the statement is finished you pop off the unused value 5.

printf("hello world!\n");

printf() returns the number of characters output. You will have this value left over on the stack, so pop it when the statement finishes.

Effectively every expression statement will leave a value on the stack unless the expression's type is void. In that case you either special-case void statements and don't pop anything afterwards, or push a pretend "void" value onto the stack so you can always pop a value.
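The rule can be illustrated with a toy operand stack. The names below are made up for illustration and not tied to any particular VM design; the point is just that the expression pushes exactly one result, and the enclosing expression statement pops and discards it.

```cpp
#include <vector>

// Toy operand stack: an expression leaves one value behind, and the
// expression *statement* that contains it discards that value.
struct ToyVm {
    std::vector<int> stack;

    void push(int v) { stack.push_back(v); }
    int pop() { int v = stack.back(); stack.pop_back(); return v; }

    // Evaluate the expression "a + b" -- leaves one value on the stack.
    void eval_add(int a, int b) {
        push(a);
        push(b);
        int rhs = pop();
        int lhs = pop();
        push(lhs + rhs);
    }

    // Execute "a + b;" as a statement: evaluate, then drop the result.
    void exec_add_statement(int a, int b) {
        eval_add(a, b);
        pop();  // statement boundary: discard the unused value
    }
};
```

After exec_add_statement(1, 2) the stack is empty again, so a function body like the one in the question leaves nothing behind when it returns.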
3,295,817
3,295,853
Any good recursive tutorials? Python?
Wondering if anyone could point me towards a good recursion tutorial. I am a bit rusty on it as I learned about it in my Data Structures class first semester. Would like to brush up on my recursion...any help?
Consider this.

More seriously…

Recursion is a way of solving problems that have a clearly defined base case (or cases, but I'm keeping it simple here.) For example, the commonly cited factorial problem is a great one. What does factorial do? Let's see some examples:

factorial(0) = 1
factorial(1) = 1
factorial(2) = 2
factorial(3) = 6
factorial(4) = 24

The factorial of a number is that number multiplied by the factorial of the number that comes before it, unless (now, this is the base case) the number is 0. The factorial of 0 is 1. (You can't take the factorial of a negative number; only positive integers.)

So we have our clearly defined base case. And we know what to do with numbers that aren't our base case (we multiply them by the factorial of the number one less than them.) We're ready to write our function.

def factorial(x):
    if x == 0:  # this is our base case
        return 1  # and this is what we do when we see it
    else:  # this is what we do with all other numbers
        return x * factorial(x-1)

So you

Clearly define your base case.
Find a way to reduce your problem from a non-base case to the base case.
Formally express that in a function that (when it's simple!) looks like

function:
    if base case:
        this
    else:
        something + function(something closer to the base case)

If you want something more advanced, Google's got a lot of info.
3,295,979
3,295,987
Is it valid to compare iterators which are got from the container separately?
For example, is this expression semantically valid?

container.begin() == container.begin();
Yes, so long as neither iterator has been invalidated. For example, the following would not be valid:

std::deque<int> d;
std::deque<int>::iterator begin1 = d.begin();
d.push_front(42);  // invalidates begin1!
std::deque<int>::iterator begin2 = d.begin();
assert(begin1 == begin2);  // wrong; you can't use begin1 anymore.