2,631,926
2,631,977
C++ iostream not setting eof bit even if gcount returns 0
I'm developing an application under Windows, and I'm using fstreams to read and write the file. I'm writing with an fstream opened like this: fs.open(this->filename.c_str(), std::ios::in|std::ios::out|std::ios::binary); and writing with this command: fs.write(reinterpret_cast<char*>(&e.element), sizeof(T)); closing the file after each write with fs.close(). Reading with an ifstream opened like this: is.open(filename, std::ios::in); and reading with this command: is.read(reinterpret_cast<char*>(&e.element), sizeof(T)); The write is going fine. However, I read in a loop this way: while(!is.eof()) { is.read(reinterpret_cast<char*>(&e.element), sizeof(T)); } and the program keeps reading, even though the end of file should be reached. The tellg() position is 0, and gcount is equal to 0 too, but neither the fail bit nor the eof bit is set. I'm going crazy over this, need some help ...
Try this: while(is.read(reinterpret_cast<char*>(&e.element), sizeof(T))) {} Also, you should open the istream with the binary flag as well: is.open(filename, std::ios::in | std::ios::binary); If it reads forever, what does it read? What type is T?
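For reference, a minimal runnable sketch of that read-the-stream-in-the-loop-condition pattern as a complete function (the readAll name and the template are illustrative, not from the original post):

```cpp
#include <istream>
#include <sstream>
#include <vector>

// Read fixed-size binary records until the stream runs out. Because the
// loop condition tests the stream state *after* the read, a short or
// failed read at end of file ends the loop cleanly.
template <typename T>
std::vector<T> readAll(std::istream& is) {
    std::vector<T> out;
    T value;
    while (is.read(reinterpret_cast<char*>(&value), sizeof(T)))
        out.push_back(value);
    return out;
}
```

An in-memory istringstream works the same way as an ifstream here, which makes the pattern easy to test.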
2,632,021
2,632,075
Can sizeof return 0 (zero)
Is it possible for the sizeof operator to ever return 0 (zero) in C or C++? If it is possible, is it correct from a standards point of view?
In C++ an empty class or struct has a sizeof at least 1 by definition. From the C++ standard, 9/3 "Classes": "Complete objects and member subobjects of class type shall have nonzero size." In C an empty struct is not permitted, except by extension (or a flaw in the compiler). This is a consequence of the grammar (which requires that there be something inside the braces) along with this sentence from 6.7.2.1/7 "Structure and union specifiers": "If the struct-declaration-list contains no named members, the behavior is undefined". If a zero-sized structure is permitted, then it's a language extension (or a flaw in the compiler). For example, in GCC the extension is documented in "Structures with No Members", which says: GCC permits a C structure to have no members: struct empty { }; The structure will have size zero. In C++, empty structures are part of the language. G++ treats empty structures as if they had a single member of type char.
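A quick check of the C++ rule; the assertions below rely only on the standard's guarantee that complete objects and member subobjects have nonzero size (the exact size 1 is typical but not mandated):

```cpp
// An empty class still occupies at least one byte in C++, so that
// distinct objects (and distinct member subobjects) get distinct addresses.
struct Empty {};
struct TwoEmpties { Empty a, b; };  // two subobjects must not overlap
```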
2,632,309
2,632,334
Never ending function problem
I'm trying to run a function that will never end (until the program is killed). How would I start such a function and be able to continue on past it? At the moment the program will not run past the never-ending function. Regards, Paul
You'd need to start a new thread. A given thread can only execute one function at a time, so if you want two separate functions executing at the same time, you need multiple threads. You'll need to look into multi-threaded programming. If you're on Linux or another Unix-based system, the library to look at would be pthreads.
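As an illustrative sketch, here is the same idea using C++11's std::thread, the portable equivalent of the pthreads suggestion (the function and variable names are made up for the example; the demo loop is made stoppable so it can be joined):

```cpp
#include <atomic>
#include <thread>

std::atomic<bool> keepRunning(true);
std::atomic<long> ticks(0);

// The "never ending" work loop: runs until another thread asks it to stop.
void workForever() {
    while (keepRunning)
        ++ticks;
}

// Start the loop on its own thread so the caller can continue past it.
std::thread startWorker() {
    return std::thread(workForever);
}
```

The caller gets control back immediately after startWorker() returns, while workForever keeps spinning on its own thread.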
2,632,430
2,632,440
Running for loop depending on size of string vector
I have made a string vector vector<string> actor_; and then added elements in it using push_back. I now want to display all of them, for which I need to run a loop according to the number of elements in the vector. For that, I need to run the following loop: for (int i = 0; i < (int)actor_.size; i++) { } but this returns the following error: error C2440: 'type cast' : cannot convert from 'unsigned int (__thiscall std::vector<_Ty>::* )(void) const' to 'int' 1> with 1> [ 1> _Ty=std::string 1> ] 1> There is no context in which this conversion is possible
size is a member function; you mean: for (unsigned int i = 0; i < actor_.size(); i++) { } (it's a good idea to use std::size_t instead of unsigned int as well)
2,632,466
2,632,491
What does binding mean exactly?
I always see people mention "Python binding", "C# binding", etc. when I am actually using their C++ libraries. What does binding mean? If the library is written in C, does a Python binding mean that they use a SWIG-like tool to generate a Python interface? Newbie in this field; any suggestion is welcome.
When someone talks about something like a "C# binding" of a library, they are indicating that an API is being provided in C# for a library written in a different language. This may or may not involve an autogeneration tool like SWIG.
2,632,503
2,632,512
m.find(...) == m.end() - which is used, iterator or const_iterator
std::map's find/end both provide const_iterator and iterator overloads, e.g. iterator end(); const_iterator end() const; Out of curiosity, if I have a std::map, which will be called/compared here, an iterator or a const_iterator? if(m.find(key) != m.end()) { ... } And should I care?
If m is const, then a const_iterator will be returned; otherwise an iterator will be returned. If all you are doing is testing for existence of an element in the map, then it doesn't really matter which one is used.
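To illustrate, a small sketch (the hasKey helper is hypothetical): because the map parameter is const, the const overloads of find and end are selected, and the comparison works exactly as in the non-const case:

```cpp
#include <map>
#include <string>

// With a const map, find() and end() return const_iterator;
// with a non-const map they would return iterator. Either way,
// comparing the result of find() against end() tests existence.
bool hasKey(const std::map<std::string, int>& m, const std::string& key) {
    std::map<std::string, int>::const_iterator it = m.find(key);
    return it != m.end();
}
```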
2,632,601
2,632,676
Why are forward declarations necessary?
Possible Duplicate: Should C++ eliminate header files? In languages like C# and Java there is no need to declare (for example) a class before using it. If I understand correctly, this is because the compiler does two passes on the code: in the first it just "collects the information available", and in the second it checks that the code is correct. In C and C++ the compiler does only one pass, so everything needs to be available at that time. So my question basically is: why isn't it done this way in C and C++? Wouldn't it eliminate the need for header files?
The short answer is that computing power and resources advanced exponentially between the time that C was defined and the time that Java came along 25 years later. The longer answer... The maximum size of a compilation unit -- the block of code that a compiler processes in a single chunk -- is going to be limited by the amount of memory that the compiling computer has. In order to process the symbols that you type into machine code, the compiler needs to hold all the symbols in a lookup table and reference them as it comes across them in your code. When C was created in 1972, computing resources were much more scarce and at a high premium -- the memory required to store a complex program's entire symbolic table at once simply wasn't available in most systems. Fixed storage was also expensive, and extremely slow, so ideas like virtual memory or storing parts of the symbolic table on disk simply wouldn't have allowed compilation in a reasonable timeframe. The best solution to the problem was to chunk the code into smaller pieces by having a human sort out which portions of the symbol table would be needed in which compilation units ahead of time. Imposing a fairly small task on the programmer of declaring what he would use saved the tremendous effort of having the computer search the entire program for anything the programmer could use. It also saved the compiler from having to make two passes on every source file: the first one to index all the symbols inside, and the second to parse the references and look them up. When you're dealing with magnetic tape where seek times were measured in seconds and read throughput was measured in bytes per second (not kilobytes or megabytes), that was pretty meaningful. C++, while created almost 17 years later, was defined as a superset of C, and therefore had to use the same mechanism. 
By the time Java rolled around in 1995, average computers had enough memory that holding a symbolic table, even for a complex project, was no longer a substantial burden. And Java wasn't designed to be backwards-compatible with C, so it had no need to adopt a legacy mechanism. C# was similarly unencumbered. As a result, their designers chose to shift the burden of compartmentalizing symbolic declaration back off the programmer and put it on the computer again, since its cost in proportion to the total effort of compilation was minimal.
2,632,741
2,632,997
Targeting .NET Framework 4.0
I just downloaded MSVS 2010 from my university's MSDN AA. The IDE itself is wonderful, I can't complain, but... I'm developing a project that combines C#, C++/CLI and C++ (native core, CLI bridge DLL and C# GUI). But VS 2010 seems NOT TO support targeting .NET for C++/CLI projects unless VS 2008 is installed. Requiring both VS 2010 and 2008 installed is, in my opinion, kind of unreasonable for an open-source project. The only other solution is targeting .NET 4.0. Do you think it is already time to start releasing applications requiring .NET 4.0? Couldn't it deter potential users, since it is so new and not yet exactly widespread?
I don't see why it's too soon; users can download it easily, and they're going to have to download it eventually anyway, right?
2,632,846
2,632,979
understanding z buffer formats direct x
A z buffer is just a 3d array that shows which object should be drawn in front of another object. Each element in the array represents a pixel that holds a value from 0.0 to 1.0. My question is: if that is all a z buffer does, then why are some buffers 24-bit, 32-bit, and 16-bit?
A Z-Buffer is not a 3D array. It's a 2D array that has a value at each pixel. That value represents the depth of the last pixel written to that position. If the pending pixel has a depth that's behind the current value on the Z-Buffer, the pixel is not visible and so it is skipped. This is what allows objects to be rendered in any order: pixel behind won't overwrite pixel in front; they will be discarded. The thing is, that value has differing precision. That's where the bits come in. A 16-bit Z-Buffer takes half as much memory as a 32-bit Z-Buffer, but cannot represent the same range. Memory is not exactly cheap (well, that's changing, but still), so if you don't need lots of precision use 16-bit and save memory. (This was more important in the past, where memory truly was scarce.) Trying to store too many values in a buffer that can't hold them will cause them to combine (16.5 and 15.5 both becoming 16, for example), and you get artifacts.
2,632,895
2,637,627
Is XULRunner suitable as a replacement for other C++ desktop application frameworks such as Qt?
XULRunner/Gecko seems really interesting for developing GUI-intensive applications (by using widely used technologies such as HTML / CSS / SVG / XUL / JavaScript). But the underlying C++ APIs (XPCOM, Necko, ...) look so old and complex. Moreover, the general lack of documentation/developer tools is really frightening. On the other hand, Qt has a quite nice platform and is well documented and supported. The UI part is really "traditional", though. What are your experiences with XULRunner, especially compared to other C++ desktop application frameworks such as Qt/GTK/MFC? What is missing? What is awesome? Side question: if I wanted to migrate an existing MFC app to a cross-platform C++ desktop application framework, would it be wise to use XULRunner instead of Qt or GTK?
There aren't actually that many applications built using XulRunner, as far as I'm aware. And I should know, as I was Tech Lead for one of them and we tried to hire experienced people. In hindsight, this doesn't surprise me. Our decision to use XulRunner was made by a non-developer, against my advice. Many things took twice the time they would have taken in wxWidgets, which we used before. Now I have also used Qt in other projects, and I'd have to say it's even better than wxWidgets. So I can fairly reliably state that Qt will be more than twice as efficient as XulRunner, and besides you will have a much easier time finding experienced developers. Sure, Javascript in XulRunner is nice. But Qt also comes with QtScript, which wraps JavaScriptCore. And when it comes to building truly rich UI's - i.e. more than just a stack of images - then HTML+SVG+CSS+JS just don't cut it. They were developed to make simple things easy, not to make complex things possible. Just look at the newest feature, video. HTML5's solution: a tag, and let some C++ code behind the scenes do the real work. Even though video is just a big stack of images shown one at a time. So, the problem isn't so much that there are things missing. It's just that development is slow, and the result is slow. On the awesome side, the plugin mechanism actually works quite well. Now, this all applies if you start from scratch. If you already have a lot of MFC/C++ code, stick with C++ and drop only the MFC part. That means Qt or possibly wxWidgets are the obvious winners.
2,633,085
2,633,175
implement the URL match in C++
Given a list of URLs, such as a/b/e/f, b/c/d/k/g, s/e/c/d, how to match an input URL to the one in the list, for example, an input c/d should be matched to s/e/c/d, not b/c/d/k/g
Why not b/c/d/k/g? Are the "Url"s simply strings? If so, simply search with strstr or one of its derivatives (wcsstr, _mbsstr, _mbsstr_l).
2,633,092
2,633,127
istringstream in C++
I'm sure I'm just doing something stupid here, but I can't quite figure out what it is. When I try to run this code: #include <iostream> #include <string> #include <sstream> using namespace std; int main(int argc, char *argv[]) { string s("hello"); istringstream input(s, istringstream::in); string s2; input >> s2; cout << s; } I get this error: malloc: *** error for object 0x100016200: pointer being freed was not allocated *** set a breakpoint in malloc_error_break to debug The only thing I can think of is that I allocated s2 on the stack, but I thought strings manage their own content on the heap. Any help here would be appreciated. Thanks, helixed EDIT: Fixed the last line of main, where cout << s should have been cout << s2. It runs without error if I initialized s2 to "hi", but not otherwise. Is this just a weird gcc compilation problem?
Works for me. But I have never done this: istringstream input(s, istringstream::in); Try istringstream input(s);
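A minimal runnable sketch of that suggestion (the firstWord helper is illustrative): constructing the istringstream from the string alone is enough, since in-mode is the default:

```cpp
#include <sstream>
#include <string>

// Extract the first whitespace-delimited token from a string.
std::string firstWord(const std::string& s) {
    std::istringstream input(s);  // no need to pass istringstream::in
    std::string word;
    input >> word;
    return word;
}
```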
2,633,302
2,633,356
Google Bot information?
Does anyone know any more details about google's web-crawler (aka GoogleBot)? I was curious about what it was written in (I've made a few crawlers myself and am about to make another) and if it parses images and such. I'm assuming it does somewhere along the line, b/c the images in images.google.com are all resized. It also wouldn't surprise me if it was all written in Python and if they used all their own libraries for most everything, including html/image/pdf parsing. Maybe they don't though. Maybe it's all written in C/C++. Thanks in advance-
Officially allowed languages at Google, I think, are Python/C++/Java. The bot likely uses all 3 for different tasks.
2,633,314
2,635,384
Pattern for UI configuration
I have a Win32 C++ program that validates user input and updates the UI with status information and options. Currently it is written like this: void ShowError() { SetIcon(kError); SetMessageString("There was an error"); HideButton(kButton1); HideButton(kButton2); ShowButton(kButton3); } void ShowSuccess() { SetIcon(kError); std::String statusText (GetStatusText()); SetMessageString(statusText); HideButton(kButton1); HideButton(kButton2); ShowButton(kButton3); } // plus several more methods to update the UI using similar mechanisms I do not like this because it duplicates code and causes me to update several methods if something changes in the UI. I am wondering if there is a design pattern or best practice to remove the duplication and make the functionality easier to understand and update. I could consolidate the code inside a config function and pass in flags to enable/disable UI items, but I am not convinced this is the best approach. Any suggestions and ideas?
I would recommend the Observer pattern and State pattern: when a validation turns out successful or unsuccessful, attached buttons can change their state according to information provided in the "notify" method. Please refer to the GoF book for further details, or just google them. Hope it helps.
2,633,330
2,634,467
Microsoft C++ Language Reference
Whenever any question is asked, and a reference text is needed, I never see MSDN C++ Language Reference being referred. I was browsing through it and I personally feel that it is extremely well written. Is there some specific reason it is not used as often as a standard? Is it because it contains some VC++ specific features?
The answer is fairly simple: The MSDN reference is not authoritative. It tells you how Microsoft's compiler behaves, and yes, it usually happens to coincide with what the standard says. But when someone asks how the C++ language deals with some situation, only one text has any authority: the ISO standard. So when answering questions about C++, people tend to reference the standard. If you ask specifically about how MSVC implements it, then MSDN would be a perfectly valid source. But most questions are simply about C++. Or to put it another way: if MSDN contains a typo, then MSDN is wrong. If the ISO standard contains a typo, then that's how the language is defined.
2,633,400
2,633,413
C/C++ efficient bit array
Can you recommend an efficient/clean way to manipulate an arbitrary-length bit array? Right now I am using a regular int/char bitmask, but those are not very clean when the array length is greater than the datatype length. std::vector<bool> is not available for me.
boost::dynamic_bitset if the length is only known in run time. std::bitset if the length is known in compile time (although arbitrary).
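For the compile-time-size case, a small std::bitset sketch (the 1920-bit size and the makeMask name are just examples):

```cpp
#include <bitset>

// A fixed-size 1920-bit array with clean set/flip/test/count operations,
// no manual masking or shifting required.
std::bitset<1920> makeMask() {
    std::bitset<1920> bits;
    bits.set(0);       // turn bit 0 on
    bits.set(1000);    // turn bit 1000 on...
    bits.flip(1000);   // ...and back off again
    return bits;
}
```

boost::dynamic_bitset offers essentially the same interface when the length is only known at run time.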
2,633,670
2,633,748
High Precision Constants for Templated Code
I am writing a template class which takes a floating-point-like type (float, double, decimal, GMP) as a parameter. However, my class requires various numeric constants. Some of these are rational numbers (int/int) while others are irrational and available to 30 or so decimal places. What is the best way to go about initialising these constants, so: T c1 = <constant>; where T is the templated type? While I could always fall-back on doubles (T c1 = 0.1415926535...) and rely on the compiler/implicit initialiser to convert to the appropriate type I would like to retain the extra precision if at all possible. I am interested in both current solutions and those which C++0x (or is it C++1x?) might bring to the table.
I think the easiest way to do this is to create a specialized container class that holds the constants. Note that in-class initialization of a static const member only works for integral types, so for floating-point-like types it's safer to expose each constant through a small static function: template<class T> class Constants { public: static T pi() { return T(3.1415); } }; //Example specialization: template<> class Constants<double> { public: static double pi() { return 3.1415926535897932384626433832795; } }; In your real class you can then do something like this: const T c1 = Constants<T>::pi(); This avoids having to write complete specialization classes only to redefine those constants. Note that the default behavior falls back to implicit conversion from double.
2,633,702
2,633,807
Templates --> How to decipher, decide if necessary and create?
I have a few classes in a project that I inherited that are really old, last I knew they compiled with CodeWarrior 8. I am now in XCode 3.2 Here is an example of what I struggle with: template <class registeredObject> typename std::vector<registeredObject>::iterator FxRegistry<registeredObject>::begin(void) { return mRegistryList.begin(); } The errors are: no 'typename std::vector<registeredObject, std::allocator<_CharT> >::iterator FxRegistry<registeredObject>::begin()' member function declared in class 'FxRegistry<registeredObject>' template definition of non-template 'typename std::vector<registeredObject, std::allocator<_CharT> >::iterator FxRegistry<registeredObject>::begin()' How do I decide how to solve these and where do I start looking? UPDATE: Here is the FxRegistry Class: template <class registeredObject> class FxRegistry { public: // Construction FxRegistry() {} ~FxRegistry(); // List management void Register(const registeredObject &ob); void Unregister(registeredObject &ob); // List iteration typedef std::vector<registeredObject>::iterator iterator; typedef std::vector<registeredObject>::const_iterator const_iterator; std::vector<registeredObject>::iterator begin(void); std::vector<registeredObject>::const_iterator begin(void) const; std::vector<registeredObject>::iterator end(void); std::vector<registeredObject>::const_iterator end(void) const; FxSize size(void); void Insert(iterator iter,const registeredObject &ob); registeredObject &operator[](FxInt32 index) { return mRegistryList[index]; } void clear() {mRegistryList.clear();}; protected: vector<registeredObject> mRegistryList; }; I get errors on every line above almost that are: error: type 'std::vector >' is not derived from type 'FxRegistry' error: expected ';' before 'iterator' I thought an iterator was of vector type, so I would declare one vector iterator.
typedef std::vector<registeredObject>::iterator iterator; typedef std::vector<registeredObject>::const_iterator const_iterator; std::vector<registeredObject>::iterator begin(void); std::vector<registeredObject>::const_iterator begin(void) const; std::vector<registeredObject>::iterator end(void); std::vector<registeredObject>::const_iterator end(void) const; In all those places you should be using: typename std::vector<registeredObject>::iterator iterator and const_iterator are dependent names (their meaning depends on what a particular instantiation of vector<X> might turn out to contain), and the language requires that you tell whether it is a type name or not, so the compiler can know without instantiating vector<X>. Also it might make sense to use the typedefs you define. As to what to start from - the first error on the list. :)
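To illustrate, here is a stripped-down registry with the typename keyword in place (a sketch, not the original FxRegistry; the Registry name and members are illustrative):

```cpp
#include <cstddef>
#include <vector>

// Minimal registry showing where `typename` is required: the nested
// iterator types depend on the template parameter, so the compiler must
// be told they name types.
template <class T>
class Registry {
public:
    typedef typename std::vector<T>::iterator iterator;
    typedef typename std::vector<T>::const_iterator const_iterator;

    void add(const T& v) { items_.push_back(v); }
    iterator begin() { return items_.begin(); }
    iterator end()   { return items_.end(); }
    std::size_t size() const { return items_.size(); }

private:
    std::vector<T> items_;
};
```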
2,633,787
2,633,822
Compile time type determination in C++
A coworker recently showed me some code that he found online. It appears to allow compile time determination of whether a type has an "is a" relationship with another type. I think this is totally awesome, but I have to admit that I'm clueless as to how this actually works. Can anyone explain this to me? template<typename BaseT, typename DerivedT> inline bool isRelated(const DerivedT&) { DerivedT derived(); char test(const BaseT&); // sizeof(test()) == sizeof(char) char (&test(...))[2]; // sizeof(test()) == sizeof(char[2]) struct conversion { enum { exists = (sizeof(test(derived())) == sizeof(char)) }; }; return conversion::exists; } Once this function is defined, you can use it like this: #include <iostream> class base {}; class derived : public base {}; class unrelated {}; int main() { base b; derived d; unrelated u; if( isRelated<base>( b ) ) std::cout << "b is related to base" << std::endl; if( isRelated<base>( d ) ) std::cout << "d is related to base" << std::endl; if( !isRelated<base>( u ) ) std::cout << "u is not related to base" << std::endl; }
It declares two overloaded functions named test, one taking a Base and one taking anything (...), and returning different types. It then calls the function with a Derived and checks the size of its return type to see which overload is called. (It actually calls the function with the return value of a function that returns Derived, to avoid using memory) Because enums are compile-time constants, all of this is done within the type system at compile-time. Since the functions don't end up getting called at runtime, it doesn't matter that they have no bodies.
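Note that C++11 packages this exact overload/sizeof trick ready-made as std::is_base_of (the original question predates it); a small sketch:

```cpp
#include <type_traits>

class Base {};
class Derived : public Base {};
class Unrelated {};

// std::is_base_of performs the same compile-time "is a" test as the
// hand-rolled sizeof technique above.
bool derivedIsRelated()   { return std::is_base_of<Base, Derived>::value; }
bool unrelatedIsRelated() { return std::is_base_of<Base, Unrelated>::value; }
```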
2,633,795
2,633,905
Need to make sure value is within 0-255
This is probably really easy, but I'm lost on how to "make sure" it is in this range.. So basically we have class Color and many functions to implement from it. this function I need is: Effects: corrects a color value to be within 0-255 inclusive. If value is outside this range, adjusts to either 0 or 255, whichever is closer. This is what I have so far: static int correctValue(int value) { if(value<0) value=0; if(value>255) value=255; } Sorry for such a simple question ;/
I agree with the other answers, with one modification; this should be an else-if statement. There is no need to test if the value is over 255 if you already know it is less than 0 static unsigned char correctValue(int value) { if(value<0) value=0; else if(value>255) value=255; return value; }
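The same correction can also be written as a single expression with std::min/std::max (an alternative sketch, not the asker's required signature):

```cpp
#include <algorithm>

// Clamp to 0-255: min() caps the top end, max() caps the bottom end.
int correctValue(int value) {
    return std::max(0, std::min(255, value));
}
```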
2,633,999
2,634,030
Using a map with set_intersection
Not used set_intersection before, but I believe it will work with maps. I wrote the following example code but it doesn't give me what I'd expect: #include <map> #include <string> #include <iostream> #include <algorithm> using namespace std; struct Money { double amount; string currency; bool operator< ( const Money& rhs ) const { if ( amount != rhs.amount ) return ( amount < rhs.amount ); return ( currency < rhs.currency ); } }; int main( int argc, char* argv[] ) { Money mn[] = { { 2.32, "USD" }, { 2.76, "USD" }, { 4.30, "GBP" }, { 1.21, "GBP" }, { 1.37, "GBP" }, { 6.74, "GBP" }, { 2.55, "EUR" } }; typedef pair< int, Money > MoneyPair; typedef map< int, Money > MoneyMap; MoneyMap map1; map1.insert( MoneyPair( 1, mn[0] ) ); map1.insert( MoneyPair( 2, mn[1] ) ); map1.insert( MoneyPair( 3, mn[2] ) ); // (3) map1.insert( MoneyPair( 4, mn[3] ) ); // (4) MoneyMap map2; map2.insert( MoneyPair( 3, mn[2] ) ); // (3) map2.insert( MoneyPair( 4, mn[3] ) ); // (4) map2.insert( MoneyPair( 5, mn[4] ) ); map2.insert( MoneyPair( 6, mn[5] ) ); map2.insert( MoneyPair( 7, mn[6] ) ); MoneyMap out; MoneyMap::iterator out_itr( out.begin() ); set_intersection( map1.begin(), map1.end(), map2.begin(), map2.end(), inserter( out, out_itr ) ); cout << "intersection has " << out.size() << " elements." << endl; return 0; } Since the pairs labelled (3) and (4) appear in both maps, I was expecting that I'd get 2 elements in the intersection, but no, I get: intersection has 0 elements. I'm sure this is something to do with the comparator on the map / pair but can't figure it out.
MoneyMap map2; map1.insert( MoneyPair( 3, mn[3] ) ); // (3) map1.insert( MoneyPair( 4, mn[4] ) ); // (4) map1.insert( MoneyPair( 5, mn[5] ) ); map1.insert( MoneyPair( 6, mn[6] ) ); map1.insert( MoneyPair( 7, mn[7] ) ); Unless this is a typo, you are just reinserting stuff into map1 instead of inserting into map2. I tested it out with the corrected code and it outputted "Intersection has 2 elements."
2,634,278
2,635,399
How to compile Microsoft Silverlight for Symbian as .SIS (stand alone) application?
So on Nokia we can have a sort of Microsoft Silverlight installed on the system. We can see Silverlight apps in the browser and interact with them. But how do we compile such a Silverlight application into a .SIS application?
http://www.silverlight.net/getstarted/devices/symbian/ Sounds like it is still a beta, so you may try its beta forum where experts are available.
2,634,279
2,634,303
Assigning a vector of one type to a vector of another type
I have an "Event" class. Due to the way dates are handled, we need to wrap this class in a "UIEvent" class, which holds the Event and the date of the Event in another format. What is the best way of allowing conversion from Event to UIEvent and back? I thought overloading the assignment operator or copy constructor of UIEvent to accept Events (and vice versa) might be best.
There are two simple options that I can think of. The first option would be the one you describe: create a constructor that takes an object of the other type: struct UIEvent { UIEvent(const Event&); }; and use std::copy to copy elements from a vector of one type to a vector of the other: std::vector<Event> eventVector; std::vector<UIEvent> uiEventVector; std::copy(eventVector.begin(), eventVector.end(), std::back_inserter(uiEventVector)); The second option would be to write a non-member conversion function: UIEvent EventToUIEvent(const Event&); and use std::transform: std::transform(eventVector.begin(), eventVector.end(), std::back_inserter(uiEventVector), &EventToUIEvent); The advantage of doing it this way is that there is less direct coupling between the classes. On the other hand, sometimes classes are naturally coupled anyway, in which case the first option might make just as much sense and could be less cumbersome.
2,634,558
2,634,566
converting string to int in C++
I am trying to convert a string I read in from a file to an int value so I can store it in an integer variable. This is what my code looks like: ifstream sin; sin.open("movie_output.txt"); string line; getline(sin,line); myMovie.setYear(atoi(line)); Over here, setYear is a mutator in the Movie class (myMovie is an object of Movie class) that looks like this: void Movie::setYear(unsigned int year) { year_ = year; } When I run the code, I get the following error: error C2664: 'atoi' : cannot convert parameter 1 from 'std::string' to 'const char *' 1> No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called
myMovie.setYear(atoi(line.c_str()));
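If you also want to detect lines that don't parse as a number (atoi silently returns 0 on failure), a stream-based sketch can help (the parseYear helper and its fallback argument are illustrative, not from the original code):

```cpp
#include <sstream>
#include <string>

// Convert a line to int, reporting failure via a caller-supplied fallback.
int parseYear(const std::string& line, int fallback) {
    std::istringstream in(line);
    int year;
    if (in >> year)
        return year;      // extraction succeeded
    return fallback;      // line did not start with an integer
}
```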
2,634,580
2,634,720
string parsing to double fails in C++
Here's a fun one I've been trying to figure out. I have the following program: #include <iostream> #include <string> #include <sstream> using namespace std; int main(int argc, char *argv[]) { string s("5"); istringstream stream(s); double theValue; stream >> theValue; cout << theValue << endl; cout << stream.fail(); } The output is: 0 1 I don't understand why this is failing. Could somebody please tell me what I'm doing wrong? Thanks, helixed EDIT: Okay, sorry to turn this into a double post, but this looks like a problem specific to Xcode. If I compile this in g++, the code works without a problem. Does anybody have an idea why this is happening in Xcode, and how I could possibly fix it?
Perhaps this is the problem you're having: stringstream question See the accepted answer and the link therein. An example in the Apple discussion link sounds very much like what you're experiencing.
2,634,636
2,634,641
Properties declared beside the constructor
I am very, very new to C/C++ and not sure what this method is called, but that's why I am here trying to find the answer. Let me show you an example: MyClass::MyClass() : valueOne(1), valueTwo(2) { //code } Where valueOne and valueTwo are class properties that are assigned values outside of the body, what is this method called and why is it done this way? Why not do it this way: MyClass::MyClass() { valueOne = 1; valueTwo = 2 //code } If anyone can help me out that will be great.
That is an initializer list. You can initialize your member variables using an initializer list after the constructor. By default the constructor will automatically create the member objects by calling their default constructors; by using an initializer list you can specify other constructors to use. Sometimes, if a member variable's type has no default constructor, you have to use an initializer list.
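A small sketch of the case where the initializer list is mandatory (the Ratio class is made up for the example):

```cpp
// Ratio has no default constructor, so a class containing one *must*
// construct it in the initializer list; assigning in the constructor
// body would be too late.
class Ratio {
public:
    Ratio(int num, int den) : num_(num), den_(den) {}
    int num_, den_;
};

class MyClass {
public:
    MyClass() : valueOne(1), valueTwo(2), ratio(1, 2) {}
    int valueOne, valueTwo;
    Ratio ratio;
};
```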
2,634,661
2,634,729
C++ detecting end of file
I have a C++ program that is reading in info from a file that stores info separated by "$$$$". An example of how the file looks: abc bcd $$$$ def $$$$ qwe gfh $$$$ How can I detect that the file has ended, without using the eof() function? I have it set up like this: getline(sin,temp) if (temp=="$$$$") { } After reading the last set of four dollar signs, how can I check the next line to ensure another set of data isn't beginning and that the file has ended? If the file has indeed ended, I plan on using a boolean flag to terminate the while loop.
while (getline(sin, temp)) { // use temp. } You will exit the loop once you reach the end of the file.
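Applied to the asker's "$$$$" format, a runnable sketch (countRecords is an illustrative name; an istringstream stands in for the file):

```cpp
#include <istream>
#include <sstream>
#include <string>

// Count "$$$$"-terminated records. The getline loop condition ends
// cleanly at end of file, with no explicit eof() check or boolean flag.
int countRecords(std::istream& sin) {
    int records = 0;
    std::string temp;
    while (std::getline(sin, temp)) {
        if (temp == "$$$$")
            ++records;
    }
    return records;
}
```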
2,634,690
2,634,715
Good hash function for a 2d index
I have a struct called Point. Point is pretty simple: struct Point { Row row; Column column; // some other code for addition and subtraction of points is there too } Row and Column are basically glorified ints, but I got sick of accidentally transposing the input arguments to functions and gave them each a wrapper class. Right now I use a set of points, but repeated lookups are really slowing things down. I want to switch to an unordered_set. So, I want to have an unordered_set of Points. Typically this set might contain, for example, every point on a 80x24 terminal = 1920 points. I need a good hash function. I just came up with the following: struct PointHash : public std::unary_function<Point, std::size_t> { result_type operator()(const argument_type& val) const { return val.row.value() * 1000 + val.col.value(); } }; However, I'm not sure that this is really a good hash function. I wanted something fast, since I need to do many lookups very quickly. Is there a better hash function I can use, or is this OK?
The following technique is given in Effective Java (2nd edition), and quoted from there in Programming in Scala. Have a prime constant (we'll say 53, but you may find something larger will give a more even distribution here), and perform multiplication and addition as follows: (53 + int_hash(row)) * 53 + int_hash(col) For more values (say you add a z coordinate), just keep nesting, like ((53 + int_hash(row)) * 53 + int_hash(col)) * 53 + int_hash(z) where int_hash is a function for hashing a single integer. You can visit this page to find a bunch of good hash functions for single integers.
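As a sketch of the combiner (the identity function is used here as a stand-in int_hash for small ints; a real implementation might substitute one of the stronger single-integer hashes the answer refers to):

```cpp
#include <cstddef>

// Stand-in int_hash: the identity is often adequate for small non-negative
// ints; swap in a real integer hash for better mixing.
inline std::size_t int_hash(int v) { return static_cast<std::size_t>(v); }

// The Effective-Java-style combiner described in the answer.
inline std::size_t point_hash(int row, int col) {
    return (53 + int_hash(row)) * 53 + int_hash(col);
}
```

Note that unlike the row * 1000 + col scheme from the question, this one does not collide for transposed coordinates.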
2,634,732
2,634,757
Find number of differences in 2 strings
int n = string.numDifferences("noob", "newb"); // 2 ??
The number you are trying to find is called the edit distance. Wikipedia lists several algorithms you might want to use; the Hamming distance is a very common way of finding the edit difference between two strings of the same length (it's often used in error-correcting codes); the Levenshtein distance is similar, but also takes insertions and deletions into account. Wikipedia, of course, lists several others (e.g. Damerau-Levenshtein distance, which includes transpositions); I don't know which you want, as I'm no expert and the choice is domain-specific. One of these, though, should do the trick.
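For instance, the Levenshtein distance can be sketched with the classic dynamic-programming table (a textbook version, not tuned for speed or memory):

```cpp
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// Classic DP Levenshtein distance: d[i][j] is the distance between the
// first i characters of a and the first j characters of b.
std::size_t levenshtein(const std::string& a, const std::string& b) {
    std::vector<std::vector<std::size_t> > d(
        a.size() + 1, std::vector<std::size_t>(b.size() + 1));
    for (std::size_t i = 0; i <= a.size(); ++i) d[i][0] = i;
    for (std::size_t j = 0; j <= b.size(); ++j) d[0][j] = j;
    for (std::size_t i = 1; i <= a.size(); ++i)
        for (std::size_t j = 1; j <= b.size(); ++j)
            d[i][j] = std::min({ d[i-1][j] + 1,                       // deletion
                                 d[i][j-1] + 1,                       // insertion
                                 d[i-1][j-1] + (a[i-1] != b[j-1]) }); // substitution
    return d[a.size()][b.size()];
}
```

On equal-length strings with no insertions or deletions needed, this degenerates to the Hamming distance, which is why "noob"/"newb" gives 2.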
2,634,800
2,634,961
Can I use Visual Studio's testing facilities in native code?
Is it possible to use Visual Studio's testing system with native code? I have no objection to recompiling the code itself under C++/CLI if it's possible the code can be recompiled without changes -- but the production code shipped has to be native code. The Premium Edition comes with code coverage support which I might be able to get cheaply from my University -- but I can get the Professional Edition for free from DreamSpark -- and that's the only thing I can see that I'd use. (But I'd use it a LOT)
Well, I hate to answer my own question, but the answer is no: this is not going to fly. Unit testing requires /clr:safe, and you cannot use any unmanaged code when compiling with /clr:safe; that prevents use of any of the unmanaged CRT headers. In other words, no, this is not supported. sigh
2,634,978
2,634,994
Is it possible to reinterpret pointer as dimensioned array reference?
Suppose I have some pointer, which I want to reinterpret as static dimension array reference: double *p; double (&r)[4] = ?(p); // some construct? // clarify template< size_t N> void function(double (&a)[N]); ... double *p; function(p); // this will not work. // I would like to cast p as to make it appear as double[N] Is it possible to do so? how do I do it?
It's ugly: double arr[4]; double* d = arr; double (&a)[4] = *static_cast<double(*)[4]>(static_cast<void*>(d)); Be sure the array type matches what the pointer originally came from.
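Wrapped up, the cast can feed the size-templated function from the question. A single reinterpret_cast is used here in place of the two static_casts through void*, which is equivalent for this purpose; either way it is only valid when the pointer really does address at least four contiguous doubles (the helper name is mine):

```cpp
#include <cstddef>

// The size-templated function from the question.
template <std::size_t N>
double sum(double (&a)[N]) {
    double s = 0.0;
    for (std::size_t i = 0; i < N; ++i) s += a[i];
    return s;
}

// Only valid if d really points at (at least) four contiguous doubles.
double sum4_from_pointer(double* d) {
    double (&a)[4] = *reinterpret_cast<double (*)[4]>(d);
    return sum(a);   // N is deduced as 4 from the reference type
}
```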
2,635,036
2,635,110
C++ meta Programming to create a new typelist with remove_const applied to each element
Hi Could anyone give me a sample program to "Create an ApplyRemoveConst template that constructs a new typelist with remove_const applied to each element" For example: typedef TYPELIST_3(A, const B, B) TL; typedef ApplyRemoveConst<TL>::Result TL2; // TL2 is the same as TYPELIST_3(A, B, B) //Typelist Definition: template<class T, class U> struct Typelist { typedef T Head; typedef U Tail; // Lets us a print a typelist inline static ostream &print(ostream &os) { return printInternal(os, "["); } inline static ostream &printInternal(ostream &os, string delimiter) { os << delimiter << typeid(Head).name(); return Tail::printInternal(os, ", "); } private: Typelist(); // Cannot create! }; #define TYPELIST_1(T1) Typelist<T1, NullType> #define TYPELIST_2(T1, T2) Typelist<T1, TYPELIST_1(T2)> #define TYPELIST_3(T1, T2, T3) Typelist<T1, TYPELIST_2(T2, T3)> // Null type definition class NullType{ public: // NullType ends a typelist (just like NULL ends a C string) inline static ostream &printInternal(ostream &os, string delimiter) { return os << "]"; } };
I think what you want is something like this: template <typename, template <typename> class> struct transform; template <template <typename> class Func> struct transform<NullType, Func> { typedef NullType type; // nothing to do }; template <typename T, typename U, template <typename> class Func> struct transform<Typelist<T, U>, Func> { typedef typename Func<T>::type Head; // apply to head typedef typename transform<U, Func>::type Tail; // tail is transformed tail typedef Typelist<Head, Tail> type; // put together }; This applies something recursively. It makes a new list, by applying to the head, then taking everything else as the tail, applied. That in turn applies the head, and so on, until a NullType is reached in which we just get NullType. Then you just need a meta-functor: template <typename T> struct remove_const { typedef T type; }; template <typename T> struct remove_const<const T> { typedef T type; }; Then put them together: typedef TYPELIST_3(A, const B, B) TL; typedef transform<TL, remove_const>::type TL2; // TL2 is the same as TYPELIST_3(A, B, B) It should be noted I haven't tried any of this.
2,635,123
2,635,143
activate RTTI in c++
Can anybody tell me how to activate RTTI in c++ when working on unix. I heard that it can be disabled and enabled. on my unix environment,how could i check whether RTTI is enabled or disabled? I am using the aCC compiler on HPUX.
Are you using g++ or some other compiler? In g++ RTTI is enabled by default IIRC, and you can disable it with -fno-rtti. To test whether it is active or not use dynamic_cast or typeid UPDATE I believe that HPUX's aCC/aC++ also has RTTI on by default, and I am unaware of a way to disable it. Check your man pages.
2,635,272
6,412,333
fastest (low latency) method for Inter Process Communication between Java and C/C++
I have a Java app, connecting through TCP socket to a "server" developed in C/C++. both app & server are running on the same machine, a Solaris box (but we're considering migrating to Linux eventually). type of data exchanged is simple messages (login, login ACK, then client asks for something, server replies). each message is around 300 bytes long. Currently we're using Sockets, and all is OK, however I'm looking for a faster way to exchange data (lower latency), using IPC methods. I've been researching the net and came up with references to the following technologies: shared memory pipes queues as well as what's referred as DMA (Direct Memory Access) but I couldn't find proper analysis of their respective performances, neither how to implement them in both JAVA and C/C++ (so that they can talk to each other), except maybe pipes that I could imagine how to do. can anyone comment about performances & feasibility of each method in this context ? any pointer / link to useful implementation information ? EDIT / UPDATE following the comment & answers I got here, I found info about Unix Domain Sockets, which seem to be built just over pipes, and would save me the whole TCP stack. it's platform specific, so I plan on testing it with JNI or either juds or junixsocket. next possible steps would be direct implementation of pipes, then shared memory, although I've been warned of the extra level of complexity... thanks for your help
Just tested latency from Java on my Corei5 2.8GHz, only single byte send/received, 2 Java processes just spawned, without assigning specific CPU cores with taskset: TCP - 25 microseconds Named pipes - 15 microseconds Now explicitly specifying core masks, like taskset 1 java Srv or taskset 2 java Cli: TCP, same cores: 30 microseconds TCP, explicit different cores: 22 microseconds Named pipes, same core: 4-5 microseconds !!!! Named pipes, taskset different cores: 7-8 microseconds !!!! so TCP overhead is visible scheduling overhead (or core caches?) is also the culprit At the same time Thread.sleep(0) (which as strace shows causes a single sched_yield() Linux kernel call to be executed) takes 0.3 microsecond - so named pipes scheduled to single core still have much overhead Some shared memory measurement: September 14, 2009 – Solace Systems announced today that its Unified Messaging Platform API can achieve an average latency of less than 700 nanoseconds using a shared memory transport. http://solacesystems.com/news/fastest-ipc-messaging/ P.S. - tried shared memory next day in the form of memory mapped files, if busy waiting is acceptable, we can reduce latency to 0.3 microsecond for passing a single byte with code like this: MappedByteBuffer mem = new RandomAccessFile("/tmp/mapped.txt", "rw").getChannel() .map(FileChannel.MapMode.READ_WRITE, 0, 1); while(true){ while(mem.get(0)!=5) Thread.sleep(0); // waiting for client request mem.put(0, (byte)10); // sending the reply } Notes: Thread.sleep(0) is needed so 2 processes can see each other's changes (I don't know of another way yet). If 2 processes forced to same core with taskset, the latency becomes 1.5 microseconds - that's a context switch delay P.P.S - and 0.3 microsecond is a good number! 
The following code takes exactly 0.1 microsecond, while doing a primitive string concatenation only: int j=123456789; String ret = "my-record-key-" + j + "-in-db"; P.P.P.S - hope this is not too much off-topic, but finally I tried replacing Thread.sleep(0) with incrementing a static volatile int variable (JVM happens to flush CPU caches when doing so) and obtained - record! - 72 nanoseconds latency java-to-java process communication! When forced to same CPU Core, however, volatile-incrementing JVMs never yield control to each other, thus producing exactly 10 millisecond latency - Linux time quantum seems to be 5ms... So this should be used only if there is a spare core - otherwise sleep(0) is safer.
2,635,373
2,635,517
parse localtime in c++
Is there an easy "beginner" way to convert the current time, using <ctime>, into a Date object that has int month int day int year for its member variables? Thanks.
time_t tt = time(NULL); // get current time as time_t struct tm* t = localtime(&tt); // convert the time_t to a struct tm cout << "Month " << t->tm_mon + 1 << ", Day " << t->tm_mday << ", Year " << t->tm_year + 1900 << endl; Note the offsets: tm_mon is 0-based (0 = Jan, 1 = Feb), tm_mday is already 1-based, and tm_year counts years since 1900. You can also get various day measures: day in month (tm_mday), week (tm_wday) and year (tm_yday).
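Folding this into a Date struct like the one in the question (the struct layout and function name here are illustrative), and using gmtime with a fixed time_t so the example is deterministic, i.e. independent of the local timezone:

```cpp
#include <ctime>

struct Date { int month, day, year; };   // as described in the question

// Normalise the tm offsets into ordinary calendar values.
Date to_date(std::time_t tt) {
    std::tm* t = std::gmtime(&tt);       // gmtime: no timezone dependence
    Date d;
    d.month = t->tm_mon + 1;             // tm_mon is 0-based
    d.day   = t->tm_mday;                // tm_mday is already 1-based
    d.year  = t->tm_year + 1900;         // tm_year counts from 1900
    return d;
}
```

to_date(time(NULL)) gives today's date in UTC; swap gmtime for localtime if you want local-time fields instead.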
2,635,609
4,117,427
Error while Trying to Hook "TerminateProcess" Function. Target Process crashes. Can anyone help me?
Debugging with visual studio 2005 The following Error Displayed : Unhandled exception at 0x00000000 in procexp.exe: 0xC0000005: Access violation reading location 0x00000000. And Thread Information: 2704 Win32 Thread 00000000 Normal 0 extern "C" VDLL2_API BOOL WINAPI MyTerminateProcess(HANDLE hProcess,UINT uExitCode) { SetLastError(5); return FALSE; } FARPROC HookFunction(char *UserDll,FARPROC pfn,FARPROC HookFunc) { DWORD dwSizeofExportTable=0; DWORD dwRelativeVirtualAddress=0; HMODULE hm=GetModuleHandle(NULL); FARPROC pfnOriginalAddressToReturn; PIMAGE_DOS_HEADER pim=(PIMAGE_DOS_HEADER)hm; PIMAGE_NT_HEADERS pimnt=(PIMAGE_NT_HEADERS)((DWORD)pim + (DWORD)pim->e_lfanew); PIMAGE_DATA_DIRECTORY pimdata=(PIMAGE_DATA_DIRECTORY)&(pimnt->OptionalHeader.DataDirectory); PIMAGE_OPTIONAL_HEADER pot=&(pimnt->OptionalHeader); PIMAGE_DATA_DIRECTORY pim2=(PIMAGE_DATA_DIRECTORY)((DWORD)pot+(DWORD)104); dwSizeofExportTable=pim2->Size; dwRelativeVirtualAddress=pim2->VirtualAddress; char *ascstr; PIMAGE_IMPORT_DESCRIPTOR pimexp=(PIMAGE_IMPORT_DESCRIPTOR)(pim2->VirtualAddress + (DWORD)pim); while(pimexp->Name) { ascstr=(char *)((DWORD)pim + (DWORD)pimexp->Name); if(strcmpi(ascstr,UserDll) == 0) { break; } pimexp++; } PIMAGE_THUNK_DATA pname=(PIMAGE_THUNK_DATA)((DWORD)pim+(DWORD)pimexp->FirstThunk); LPDWORD lpdw=&(pname->u1.Function); DWORD dwError=0; DWORD OldProtect=0; while(pname->u1.Function) { if((DWORD)pname->u1.Function == (DWORD)pfn) { lpdw=&(pname->u1.Function); VirtualProtect((LPVOID)lpdw,sizeof(DWORD),PAGE_READWRITE,&OldProtect); pname->u1.Function=(DWORD)HookFunc; VirtualProtect((LPVOID)lpdw,sizeof(DWORD),PAGE_READONLY,&OldProtect); return pfn; } pname++; } return (FARPROC)0; } FARPROC CallHook(void) { HMODULE hm=GetModuleHandle(TEXT("Kernel32.dll")); FARPROC fp=GetProcAddress(hm,"TerminateProcess"); HMODULE hm2=GetModuleHandle(TEXT("vdll2.dll")); FARPROC fpHook=GetProcAddress(hm2,"MyTerminateProcess"); dwAddOfTerminateProcess=HookFunction("Kernel32.dll",fp,fpHook); 
if(dwAddOfTerminateProcess == 0) { MessageBox(NULL,TEXT("Unable TO Hook Function."),TEXT("Parth"),MB_OK); } else { MessageBox(NULL,TEXT("Success Hooked."),TEXT("Parth"),MB_OK); } return 0; } Thanks in advance for any help. 004118AC mov esi,esp 004118AE push 0 004118B0 mov eax,dword ptr [hProc] 004118B3 push eax 004118B4 call dword ptr[__imp__TerminateProcess@8(4181E4h)] 004118BA cmp esi,esp esi returned zero. why ?
What is VDLL2_API defined as? It may be interfering with the calling convention (which is meant to be WINAPI for this function, as you write it later on the same line). Stack problems on exit (ESI, ESP) usually indicate that you have your calling conventions mixed up. You appear to have used FARPROC consistently everywhere else, but since you know the exact prototype of the function, try typedef-ing that as the type to use instead: typedef BOOL (WINAPI *TERMINATEPROCESS_PROC)(HANDLE, UINT); Now use TERMINATEPROCESS_PROC everywhere instead of FARPROC.
2,635,720
2,635,833
log4j/log4cxx : exclusive 1 to 1 relation between logger and appender
Using the xml configuration of log4cxx (which is identical in configuration to log4j). I want to have a certain logger output exclusively to a specific appender (have it the only logger which outputs to that appender). I found that it's possible to bind a logger to a specific appender like this: <logger name="LoggerName"> <level value="info"/> <appender-ref ref="AppenderName"/> </logger> but it that logger still outputs to the root appender because I have this standard piece in the conf file: <root> <priority value="DEBUG"/> <appender-ref ref="OtherAppender"/> </root> How can I exclude that logger from the root logger? in other words, how do I configure the log such that all loggers inherit the appenders of the root logger except a specific logger?
You use the following piece of configuration for this: <logger name="TRACER" additivity="false"> <level value="Debug" /> <appender-ref ref="DebugAppender" /> </logger> All loggers with a name that starts with TRACER will log to the appender DebugAppender. For more info, check here or here. additivity="false" means messages to this logger will not propagate up the logger hierarchy, so it will not print anything to the root logger.
2,635,724
2,635,738
Size of abstract class
How can I find the size of an abstract class? class A { virtual void PureVirtualFunction() = 0; }; Since this is an abstract class, I can't create objects of this class. How will I be able to find the size of the abstract class A using the 'sizeof' operator?
You can use the sizeof operator: int a_size = sizeof(A);
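A small sketch to underline the point: sizeof is evaluated at compile time from the type alone, so no instance of the abstract class is ever needed:

```cpp
#include <cstddef>

class A {
    virtual void PureVirtualFunction() = 0;
};

// sizeof needs only the complete type, never an object, so it works on an
// abstract class. The exact value is implementation-defined; with a virtual
// function present it is typically at least the size of one vtable pointer.
const std::size_t a_size = sizeof(A);
```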
2,635,882
2,636,897
RAII tutorial for C++
I'd like to learn how to use RAII in c++. I think I know what it is, but have no idea how to implement it in my programs. A quick google search did not show any nice tutorials. Does any one have any nice links to teach me RAII?
There's nothing to it (that is, I don't think you need a full tutorial). RAII can be shortly explained as "Every resource requiring cleanup should be given to an object's constructor." In other words: Pointers should be encapsulated in smart pointer classes (see std::auto_ptr, boost::shared_ptr and boost::scoped_ptr for examples). Handles requiring cleanup should be encapsulated in classes that automatically free/release the handles upon destruction. Synchronization should rely on releasing the mutex/synchronization primitive upon scope exit (see boost::mutex::scoped_lock usage for an example). I don't think you can really have a tutorial on RAII (not anymore than you can have one on design patterns for example). RAII is more of a way of looking at resources than anything else. For example, at the moment I'm coding using WinAPI and I wrote the following class: template<typename H, BOOL _stdcall CloseFunction(H)> class checked_handle { public: typedef checked_handle<H,CloseFunction> MyType; typedef typename H HandleType; static const HandleType NoValue; checked_handle(const HandleType value) : _value(value) { } ~checked_handle() { Close(); } HandleType* operator &() { return &_value; } operator HandleType() { return _value; } private: HandleType _value; void Close(const HandleType newValue = NoValue) { CloseFunction(_value); _value = newValue; } }; template<typename H,BOOL _stdcall CloseFunction(H)> const typename checked_handle<H,CloseFunction>::HandleType checked_handle<H,CloseFunction>::NoValue = checked_handle<H,CloseFunction>::HandleType(INVALID_HANDLE_VALUE); typedef checked_handle<HANDLE,::CloseHandle> CheckedHandle; typedef checked_handle<HWINSTA,::CloseWindowStation> WinStationHandle; typedef checked_handle<HDESK,::CloseDesktop> DesktopHandle; typedef checked_handle<HDEVNOTIFY,::UnregisterDeviceNotification> DevNotifyHandle; typedef checked_handle<HWND,::DestroyWindow> WindowHandle; BOOL __stdcall CloseKey(HKEY hKey); typedef checked_handle<HKEY,CloseKey> 
RegHandle; This class doesn't include assignment and copy semantics (I removed them to provide a minimal example) so returning by value, will cause the handles to be closed twice. Here's how it's used: class declaration: class Something { public: // ... private: WindowHandle _window; }; This member is allocated but I never call ::CloseWindow(_window._handle) explicitely (it will be called when instances of Something go out of scope (as Something::~Something -> WindowHandle::WindowHandle -> ::Close(_window._value) ).
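Stripped of the Win32 details, the whole idea fits in a few lines. This is a toy sketch (the Resource/Guard names are made up for illustration) showing acquisition in a constructor and guaranteed release in the destructor:

```cpp
// A toy "resource" whose acquire/release calls we can count.
struct Resource {
    static int open_count;
    static void acquire() { ++open_count; }
    static void release() { --open_count; }
};
int Resource::open_count = 0;

// The whole RAII idea: acquire in the constructor, release in the destructor.
class Guard {
public:
    Guard()  { Resource::acquire(); }
    ~Guard() { Resource::release(); }
};

int use_resource() {
    Guard g;                         // released automatically on any scope exit,
    return Resource::open_count;     // including early return or an exception
}
```

After use_resource returns, the guard's destructor has already run, so the resource is back to released no matter how the function exited.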
2,636,303
2,636,338
How to initialize a private static const map in C++?
I need just a dictionary or associative array string => int. There is the map type in C++ for this case. But I need only one map for all instances (-> static) and this map can't be changed (-> const). I have found this way with the boost library: std::map<int, char> example = boost::assign::map_list_of(1, 'a') (2, 'b') (3, 'c'); Is there another solution without this lib? I have tried something like this, but there are always some issues with map initialization. class myClass{ private: static map<int,int> create_map() { map<int,int> m; m[1] = 2; m[3] = 4; m[5] = 6; return m; } static map<int,int> myMap = create_map(); };
#include <map> using namespace std; struct A{ static map<int,int> create_map() { map<int,int> m; m[1] = 2; m[3] = 4; m[5] = 6; return m; } static const map<int,int> myMap; }; const map<int,int> A:: myMap = A::create_map(); int main() { }
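One detail worth noting when reading the resulting map: operator[] is non-const, so a const map must be read with find() (or at() in C++11). A sketch of that, using a function-local static as an alternative way to get a single shared, effectively read-only map (names here are my own):

```cpp
#include <map>

// One shared map for all callers, built on first use (function-local static).
const std::map<int, int>& my_map() {
    static std::map<int, int> m;
    if (m.empty()) { m[1] = 2; m[3] = 4; m[5] = 6; }
    return m;
}

// A const map has no usable operator[]; look values up with find().
int lookup(int key) {
    std::map<int, int>::const_iterator it = my_map().find(key);
    return it == my_map().end() ? -1 : it->second;
}
```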
2,636,314
2,636,355
How can I get identity of a disk?
I want to identify a disk in C++ in my Windows application. For example: I have a disk on E:\ Then I changed the disk, and replaced it with another one. The name is still E:\ How can I know the disk has changed and is not the original one? If I have no administrator privileges in Win7, can I still use some method to identify different disks? Many thanks!
Probably the relevant methods are: GetLogicalDrives() BOOL WINAPI GetVolumeInformation( __in_opt LPCTSTR lpRootPathName, __out LPTSTR lpVolumeNameBuffer, __in DWORD nVolumeNameSize, __out_opt LPDWORD lpVolumeSerialNumber, __out_opt LPDWORD lpMaximumComponentLength, __out_opt LPDWORD lpFileSystemFlags, __out LPTSTR lpFileSystemNameBuffer, __in DWORD nFileSystemNameSize ) GetDriveType(string vol) GetVolumeInformation will give you the serial number. If this isn't enough you will probably have to resort to WMI_PhysicalMedia. I believe all of this should work without Administrator privileges. A page listing a bunch of relevant functions is here: http://msdn.microsoft.com/en-us/library/aa365730(v=VS.85).aspx
2,636,320
2,636,348
Read multiple strings from a file C++
I need to read different values stored in a file one by one. So I was thinking I can use ifstream to open the file, but since the file is set up in such a way that a line might contain three numbers, and the other line one number or two numbers I'm not sure how to read each number one by one. I was thinking of using stringstream but I'm not sure if that would work. The file is a format like this. 52500.00 64029.50 56000.00 65500.00 53780.00 77300.00 44000.50 80100.20 90000.00 41000.00 60500.50 72000.00 I need to read each number and store it in a vector. What is the best way to accomplish this? Reading one number at a time even though each line contains a different amount of numbers?
Why not read them as numbers from the file? double temp; vector<double> vec; ifstream myfile ("file.txt"); if (myfile.is_open()) { while ( myfile >> temp) { vec.push_back(temp); } myfile.close(); }
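As a self-contained sketch of that loop (an istringstream stands in for the ifstream so it can be tried without a file on disk; the function name is mine):

```cpp
#include <istream>
#include <sstream>
#include <vector>

// operator>> skips spaces and newlines alike, so the ragged layout
// (one, two or three numbers per line) needs no special handling.
std::vector<double> read_numbers(std::istream& in) {
    std::vector<double> vec;
    double temp;
    while (in >> temp)       // extraction fails cleanly at end of file
        vec.push_back(temp);
    return vec;
}
```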
2,636,479
2,644,164
HTML file: add annotations through IHTMLDocument
I need to add "annotations" to existing HTML documents - best in the form of string property values I can read & write by name. Apparently (to me), meta elements in the header seem to be the common way - i.e. adding/modifying elements like <head> <meta name="unique-id_property-name" content="property-value"/> ... </head> Question 1: Ist that "acceptable" / ok, or is there a better way to add meta data? I have a little previous experience with getting/mut(il)ating HTML contents through the document in an web browser control. For this task, I've already loaded the HTML document into a HTMLDocument object, but I'm not sure how to go on: // what I have: IHTMLDocument2Ptr doc; doc.CreateInstance(__uuidof(HTMLDocument)); IPersistFile pf = doc; pf->Load(fileName, STGM_READ); // ... what now? Questions 2: Should I be using anything else than HTMLDocument? Questions 3..N: How do I get the head element? How do I get the value of a meta element with a given name? How do I set the value of a meta element (adding the item if and only if it doesn't exist yet)? doc->all returns a collection of all tags, which I can enumerate even though count returns 0. I could scan that for head, then scan that for all meta where the name starts with a certain string, etc. - but this feels very clumsy.
I do not have enough experience working with HTML using C++. I am not sure if this answers your question, but you could do the following enum all elements using doc->all for each IHtmlElement you can compare the tagname to meta (use IHtmlElement::get_tagName) for these meta tags you can get and set property using IHtmlElement::getAttribute and setAttribute methods) Or Write a Javascript function that accepts a property name and value. This should be able to update the meta tag you have You can then get the script object using IHtmlDocument2::get_script as IDispatch Use this script dispatch to Invoke your javascript function.
2,636,563
2,636,596
Question about compilers and how they work
This is the C code that frees memory of a singly linked list. It is compiled with Visual C++ 2008 and code works as it should be. /* Program done, so free allocated memory */ current = head; struct film * temp; temp = current; while (current != NULL) { temp = current->next; free(current); current = temp; } But I also encountered ( even in a books ) same code written like this: /* Program done, so free allocated memory */ current = head; while (current != NULL) { free(current); current = current->next; } If I compile that code with my VC++ 2008, program crashes because I am first freeing current and then assigning current->next to current. But obviously if I compile this code with some other complier ( for example, compiler that book author used ) program will work. So question is, why does this code compiled with specific compiler work? Is it because that compiler put instructions in binary file that remember address of current->next although I freed current and my VC++ doesn't. I just want to understand how compilers work.
The second program is invoking undefined behavior. It is not a difference in the compiler, but rather a difference in the implementation of the C standard library and the function free(). The compiler will store the pointer current as a local variable, but it will not store a copy of the memory that it references. When you invoke free(), you give up ownership of the memory being pointed-to by the pointer passed to the free() function. It is possible that after you relinquish ownership, the contents of the memory pointed-to are still reasonable and are still valid memory locations in your process's address space. Consequently, it is possible that accessing them will appear to work (note that you can silently corrupt memory this way). A pointer that is non-null and points to memory that has already been relinquished is known as a dangling pointer and is incredibly dangerous. Just because it may appear to work does not mean it is correct. I should also point out that it is possible to implement free() in such a way as to catch these errors, such as using a separate page per allocation, and unmapping the page when free() is called (so that the memory address is no longer a valid address for that process). Such implementations are highly inefficient, but are sometimes used by certain compilers when in debugging mode to catch dangling pointer errors.
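For contrast, a correct version of the loop, the same shape as the first program in the question, with a hypothetical counting wrapper around free() added here purely so the behavior can be observed:

```cpp
#include <cstdlib>

struct film { struct film* next; };

int freed_count = 0;                              // illustrative instrumentation
void counted_free(void* p) { std::free(p); ++freed_count; }

// Save the next pointer *before* freeing the node that holds it.
void free_list(struct film* head) {
    struct film* current = head;
    while (current != NULL) {
        struct film* temp = current->next;  // read while the memory is still ours
        counted_free(current);
        current = temp;
    }
}

struct film* make_list(int n) {
    struct film* head = NULL;
    for (int i = 0; i < n; ++i) {
        struct film* node = (struct film*)std::malloc(sizeof(struct film));
        node->next = head;
        head = node;
    }
    return head;
}
```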
2,636,594
2,637,105
Microsoft Detours - DetourUpdateThread?
I have a few quick questions about the Microsoft Detours Library. I have used it before (successfully), but I just had a thought about this function: LONG DetourUpdateThread(HANDLE hThread); I read elsewhere that this function will actually suspend the thread until the transaction completes. This seems odd since most sample code calls: DetourUpdateThread(GetCurrentThread()); Anyway, apparently this function "enlists" threads so that, when the transaction commits (and the detours are made), their instruction pointers are modified if they lie "within the rewritten code in either the target function or the trampoline function." My questions are: When the transaction commits, is the current thread's instruction pointer going to be within the DetourTransactionCommit function? If so, why should we bother enlisting it to be updated? Also, if the enlisted threads are suspended, how can the current thread continue executing (given that most sample code calls DetourUpdateThread(GetCurrentThread());)? Finally, could you suspend all threads for the current process, avoiding race conditions (considering that threads could be getting created and destroyed at any time)? Perhaps this is done when the transaction begins? This would allow us to enumerate threads more safely (as it seems less likely that new threads could be created), although what about CreateRemoteThread()? Thanks, Paul For reference, here is an extract from the simple sample: // DllMain function attaches and detaches the TimedSleep detour to the // Sleep target function. The Sleep target function is referred to // through the TrueSleep target pointer. 
BOOL WINAPI DllMain(HINSTANCE hinst, DWORD dwReason, LPVOID reserved) { if (dwReason == DLL_PROCESS_ATTACH) { DetourTransactionBegin(); DetourUpdateThread(GetCurrentThread()); DetourAttach(&(PVOID&)TrueSleep, TimedSleep); DetourTransactionCommit(); } else if (dwReason == DLL_PROCESS_DETACH) { DetourTransactionBegin(); DetourUpdateThread(GetCurrentThread()); DetourDetach(&(PVOID&)TrueSleep, TimedSleep); DetourTransactionCommit(); } return TRUE; }
How embarrassing: I forgot that the source was available! DetourUpdateThread silently ignores the enlisting of the current thread. Otherwise, the given thread is suspended. I wonder why ALL code examples enlist the current thread anyway! This answers the first 2 questions. As for the 3rd question: I found another detouring library that attempts to suspend all threads by doing the following: Get a snapshot of all threads Loop through the snapshot and suspend threads that we have not already suspended. If threads were suspended, then go back to 1 (we still keep track of threads that we have suspended). If no threads were suspended then we are done. I think the assumption is that if we can loop through all threads and they are all already suspended (i.e. from before we took the snapshot), then no more threads can have been created. Not so sure about CreateRemoteThread though! Edit: Re: CreateRemoteThread. "Only one thread in a process can be in a DLL initialization or detach routine at a time." CreateRemoteThread "results in a call to the entry point of each DLL in the process". http://msdn.microsoft.com/en-us/library/ms682437%28VS.85%29.aspx A new thread cannot start executing if you are in a DllMain function (as long as the new thread has not yet made the call to the entry point of each DLL in the process). So if you apply your detours within a DllMain function, you may just be safe from the race condition of a new remote thread being created and having its instruction pointer within your rewritten target/trampoline function. Thanks, Paul
2,636,846
2,637,495
How to detect first or last element iterating over a container?
How to do the following in more stylish/short way? for(i=container.begin(); i!=container.end(); ++i) { if (i!=container.begin()) { cout << ", "; } cout << *i; j=i; if (++j==container.end()) { cout << "!" << endl; } } Solutions like foreach are acceptable (actions on first and last elements need to be configurable, though). P.S. There are many answers that are handling first element, but not last. Here is what I mean by handling last element: for(i=container.begin(); i!=container.end(); ++i) { j=i; if (i==container.begin()) { cout << "[" << *i << "]" << endl; } else if (++j==container.end()) { cout << ", (" << *i << ")" << "!" << endl; } else { cout << ", " << *i; } } Don't you think it's very easy to handle first element outside the cycle body? The real problem is the last one! I'm sorry for not being able to clarify the important point asking the question. I think I'll just accept the top ranked answer eventually.
Boost has next / prior which can sometimes help in such situations. for(i=container.begin(); i!=container.end(); ++i) { if (boost::next(i) == container.end()) { std::cout << "!" << std::endl; } } Although for this specific case, I'd simply output the first element, loop from second till last while always outputting the ',' and then output the '!' after the loop has ended. (as others have suggested already) I don't see the point in moving the special cases inside the loop, and then checking inside the loop for them....
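The "print the first element, then loop from the second" suggestion can be sketched as a join function: the separator is emitted before each remaining element and the terminator after the loop, so no begin/end tests run inside the loop (the function name and int element type are my own choices):

```cpp
#include <sstream>
#include <string>
#include <vector>

// First element before the loop, ", " before each remaining element,
// "!" after the loop: the edge cases never enter the loop body.
std::string join(const std::vector<int>& c) {
    std::ostringstream out;
    if (!c.empty()) {
        std::vector<int>::const_iterator i = c.begin();
        out << *i;
        for (++i; i != c.end(); ++i)
            out << ", " << *i;
    }
    out << "!";
    return out.str();
}
```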
2,636,907
2,637,187
how to generate String difference vectors?
A bit of a vague question, but I am looking for pointers as to how I can generate String diff vectors in C++. The scenario is such that given a paragraph I want to store the various differences (Edit, cut copy paste etc.) it goes through in a draft mode to review Audit history. Any hints in this regard will be really appreciated.
An idea using C++ polymorphism: class Action { public: virtual ~Action() {} virtual void revert(std::string& base) = 0; }; class InsertAction : public Action { private: int pos, len; public: InsertAction(int pos, std::string& base, const std::string& in) : pos(pos), len(in.size()) { base.insert(pos, in); } virtual void revert(std::string& base) { base.erase(pos,len); } }; int main() { std::string text("hello !"); std::cout << text << std::endl; Action* action = new InsertAction(5, text, " world"); std::cout << text << std::endl; action->revert(text); std::cout << text << std::endl; delete action; } You can then add and pop Actions from a LIFO queue as you want. (Note the virtual destructor in the base class: it is needed because we delete through an Action*.) It's a simple example; you could also try to tie it more closely to a string instead of always passing it as a param, but that's up to your own design. I know it's not 'real' diffing, but I think this solution is more closely coupled to the problem than really storing general string differences.
2,636,958
2,637,085
Create unmanaged c++ object in c#
I have an unmanaged dll with a class "MyClass" in it. Now is there a way to create an instance of this class in C# code? To call its constructor? I tried but the visual studio reports an error with a message that this memory area is corrupted or something. Thanks in advance
C# cannot create an instance of a class exported from a native Dll. You have two options: Create a C++/CLI wrapper. This is a .NET Class Library which can be added as a Reference to any other .NET project. Internally, the C++/CLI class works with the unmanaged class, linking to the native Dll by standard C++ rules. For a .NET client, this C++/CLI class looks like a .NET class. Write a C wrapper for the C++ class, which can be used by a .NET client with PInvoke. For example, an over-simplified C++ class: class MyClass { public: MyClass(int n){data=n;} ~MyClass(){} int GetData(){return data;} private: int data; }; C API wrapper for this class (the functions should be declared extern "C" and exported from the Dll so that PInvoke can find them by name): extern "C" { void* CreateInstance(int n) { MyClass* p = new MyClass(n); return p; } void ReleaseInstance(void* pInstance) { MyClass* p = (MyClass*)pInstance; delete p; } int GetData(void* pInstance) { MyClass* p = (MyClass*)pInstance; return p->GetData(); } } // Write a wrapper function for every MyClass public method. // The first parameter of every wrapper function should be the class instance. CreateInstance, ReleaseInstance and GetData may be declared in the C# client using PInvoke, and called directly. The void* parameter should be declared as IntPtr in the PInvoke declaration.
2,637,235
2,637,288
Problem Loading multiple textures using multiple shaders with GLSL
I am trying to use multiple textures in the same scene but no matter what I try the same texture is loaded for each object. So this what I am doing at the moment, I initialise each shader: rightWall.SendShaders("wall.vert","wall.frag","brick3.bmp", "wallTex", 0); demoFloor.SendShaders("floor.vert","floor.frag","dirt1.bmp", "floorTex", 1); The code in SendShaders is: GLuint vert,frag; glEnable(GL_DEPTH_TEST); glEnable(GL_TEXTURE_2D); char *vs = NULL,*fs = NULL; vert = glCreateShader(GL_VERTEX_SHADER); frag = glCreateShader(GL_FRAGMENT_SHADER); vs = textFileRead(vertFile); fs = textFileRead(fragFile); const char * ff = fs; const char * vv = vs; glShaderSource(vert, 1, &vv, NULL); glShaderSource(frag, 1, &ff, NULL); free(vs); free(fs); glCompileShader(vert); glCompileShader(frag); program = glCreateProgram(); glAttachShader(program, frag); glAttachShader(program, vert); glLinkProgram(program); glUseProgram(program); LoadGLTexture(textureImage, texture); And then in the main loop: rightWall.UseShader("wallTex"); rightWall.Draw(); demoFloor.UseShader("floorTex"); demoFloor.Draw(); The code in UseShader: void GraphicsObject::UseShader(char textureName []){ glUseProgram(program); GLint location = glGetUniformLocation(program, textureName); glUniform1i(location, 0); glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, texture); } And finally, the texture load methods: int GraphicsObject::LoadGLTexture(const char fileName []){ AUX_RGBImageRec *TextureImage[1]; // Create Storage Space For The Texture memset(TextureImage,0,sizeof(void *)*1); // Set The Pointer To NULL int arraySize = strlen(fileName); arraySize += 1; if (TextureImage[0]=LoadBMP(fileName, arraySize)) { glGenTextures(1, &texture); glBindTexture(GL_TEXTURE_2D, texture); glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, TextureImage[0]->sizeX, TextureImage[0]->sizeY, 0, GL_RGB, GL_UNSIGNED_BYTE, TextureImage[0]->data); glTexParameteri(GL_TEXTURE_2D, 
GL_TEXTURE_WRAP_S, GL_REPEAT); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST); } if (TextureImage[0]) { if (TextureImage[0]->data) { free(TextureImage[0]->data); } free(TextureImage[0]); } return 1; } AUX_RGBImageRec* GraphicsObject::LoadBMP(const char fileName[], int arraySize){ FILE *File=NULL; LPWSTR pwszFileName; int lenW; BSTR unicodestr; lenW = MultiByteToWideChar(CP_ACP, 0, fileName, arraySize, 0,0); DWORD bottom = GetLastError(); unicodestr = SysAllocStringLen(0, lenW); MultiByteToWideChar(CP_ACP,0, fileName, arraySize, unicodestr,lenW); SysFreeString(unicodestr); DWORD tit = GetLastError(); if (!fileName) { return NULL; } File=fopen(fileName,"r"); if (File) { fclose(File); return auxDIBImageLoad((LPCWSTR)unicodestr); } return NULL; } Which ever shader is initialised last is the texture which is used for both objects. Thank you for your time and I appreciate any comments.
Just as you have to call glUseProgram(program) before using a shader for rendering, you also have to bind the right texture directly before rendering. The code should look like this: glUseProgram(program1); glBindTexture(GL_TEXTURE_2D, texture1); // render calls for object 1 glUseProgram(program2); glBindTexture(GL_TEXTURE_2D, texture2); // render calls for object 2 Moreover, uniforms must be set directly before rendering, and not at shader initialization. So in your case the part GLint location = glGetUniformLocation(program, textureName); glUniform1i(location, 0); glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, texture); does not belong in the function SendShaders, but in UseShader!
2,637,338
2,637,557
class which cannot be derived
I found this code here: class Usable; class Usable_lock { friend class Usable; private: Usable_lock() {} Usable_lock(const Usable_lock&) {} }; class Usable : public virtual Usable_lock { // ... public: Usable(); Usable(char*); // ... }; Usable a; class DD : public Usable { }; DD dd; // error: DD::DD() cannot access // Usable_lock::Usable_lock(): private member Could anybody explain this code to me? EDIT: Another question I have is: what is virtual derivation, and when is it needed?
It's a property of virtual derivation. The idea of virtual derivation is to solve the "Dreaded Diamond Pattern": struct Base {}; struct D1: Base {}; struct D2: Base {}; struct TopDiamond: D1, D2 {}; The problem here is that TopDiamond has 2 instances of Base. To solve this problem, peculiar to multiple inheritance, C++ uses the virtual keyword and what is thus called "virtual inheritance". If we change the way D1 and D2 are defined such that: struct D1: virtual Base {}; struct D2: virtual Base {}; Then there will only be one instance of Base within TopDiamond: the job of actually instantiating it is left to the most-derived class's constructor (here TopDiamond). Thus, the little trick you have shown is simply explained: because Usable derives virtually from Usable_lock, it's up to the derived class to instantiate the Usable_lock part of the object; and because the Usable_lock constructor is private, only itself and Usable (a friend) can access that constructor. It's clever, I had never thought of that. I wonder what the cost of virtual inheritance is here (extra memory / speed overhead)?
2,637,546
2,638,257
Python and C++ Sockets converting packet data
First of all, to clarify my goal: There exist two programs written in C in our laboratory. I am working on a Proxy Server (bidirectional) for them (which will also mainpulate the data). And I want to write that proxy server in Python. It is important to know that I know close to nothing about these two programs, I only know the definition file of the packets. Now: assuming a packet definition in one of the C++ programs reads like this: unsigned char Packet[0x32]; // Packet[Length] int z=0; Packet[0]=0x00; // Spare Packet[1]=0x32; // Length Packet[2]=0x01; // Source Packet[3]=0x02; // Destination Packet[4]=0x01; // ID Packet[5]=0x00; // Spare for(z=0;z<=24;z+=8) { Packet[9-z/8]=((int)(720000+armcontrolpacket->dof0_rot*1000)/(int)pow((double)2,(double)z)); Packet[13-z/8]=((int)(720000+armcontrolpacket->dof0_speed*1000)/(int)pow((double)2,(double)z)); Packet[17-z/8]=((int)(720000+armcontrolpacket->dof1_rot*1000)/(int)pow((double)2,(double)z)); Packet[21-z/8]=((int)(720000+armcontrolpacket->dof1_speed*1000)/(int)pow((double)2,(double)z)); Packet[25-z/8]=((int)(720000+armcontrolpacket->dof2_rot*1000)/(int)pow((double)2,(double)z)); Packet[29-z/8]=((int)(720000+armcontrolpacket->dof2_speed*1000)/(int)pow((double)2,(double)z)); Packet[33-z/8]=((int)(720000+armcontrolpacket->dof3_rot*1000)/(int)pow((double)2,(double)z)); Packet[37-z/8]=((int)(720000+armcontrolpacket->dof3_speed*1000)/(int)pow((double)2,(double)z)); Packet[41-z/8]=((int)(720000+armcontrolpacket->dof4_rot*1000)/(int)pow((double)2,(double)z)); Packet[45-z/8]=((int)(720000+armcontrolpacket->dof4_speed*1000)/(int)pow((double)2,(double)z)); Packet[49-z/8]=((int)armcontrolpacket->timestamp/(int)pow(2.0,(double)z)); } if(SendPacket(sock,(char*)&Packet,sizeof(Packet))) return 1; return 0; What would be the easiest way to receive that data, convert it into a readable python format, manipulate them and send them forward to the receiver?
You can receive the packet's 50 bytes with a .recv call on a properly connected socket (it might actually take more than one call in the unlikely event the TCP packet gets fragmented, so check the incoming length until you have exactly 50 bytes in hand;-). After that, understanding that C code is puzzling. The assignments of ints (presumably 4 bytes each) to Packet[9], Packet[13], etc, give the impression that the intention is to set 4 bytes at a time within Packet, but that's not what happens: each assignment sets exactly one byte in the packet, from the lowest byte of the int that's the RHS of the assignment. But those bytes are the bytes of (int)(720000+armcontrolpacket->dof0_rot*1000) and so on... So must those last 44 bytes of the packet be interpreted as 11 4-byte integers (signed? unsigned?) or 44 independent values? I'll guess the former, and do...: import struct f = '>x4Bx11i' values = struct.unpack(f, packet) the format f indicates: big-endian; 4 unsigned-byte values surrounded by two ignored "spare" bytes; 11 4-byte signed integers. The tuple values ends up with 15 items: the four single bytes (50, 1, 2, 1 in your example), then 11 signed integers. You can use the same format string to pack a modified version of the tuple back into a 50-byte packet to resend. Since you explicitly place the length in the packet, it may be that different packets have different lengths (though that's incompatible with the fixed-length declaration in your C sample), in which case you need to be a bit more accurate in receiving and unpacking it; however such details depend on information you don't give, so I'll stop trying to guess;-).
2,637,571
2,637,838
Creating simple c++.net wrapper. Step-by-step
I've a c++ project. I admit that I'm a complete ZERO in c++. But still I need to write a c++.net wrapper so I could work with an unmanaged c++ library using it. So what I have: 1) unmanaged project's header files. 2) unmanaged project's libraries (.dll's and .lib's) 3) an empty C++.NET project which I plan to use as a wrapper for my c# application How can I start? I don't even know how to set a reference to an unmanaged library. S.O.S.
http://www.codeproject.com/KB/mcpp/quickcppcli.aspx#A8 This is general direction. You need to create C++/CLI Class Library project, add .NET class to it (StudentWrapper in this sample), create unmanaged class instance as managed class member, and wrap every unmanaged class function. Unmanaged library is added to C++/CLI project using linker dependencies list, and not as reference. In the Project - Properties - Linker open Additional Dependencies and add .lib name there. Note: since we are talking about C++/CLI wrapper, no PInvoke! PInvoke is used to call exported functions (API), and not classes.
2,637,671
2,637,687
where does main() return its value?
I'm newly using Code::Blocks with the MinGW compiler. If I don't type return 0 at the end of the program, I can see that main() still returns some integer. I learnt that main() returning 0 implies the program executed successfully. I don't find any flaw in my code, so why is it returning some integer? Secondly, any function returns its value to its caller; to where does main() return its value?
The C++ Standard says that if you don't explicitly return a value from main, the compiler must generate code as if you had typed: return 0; Exactly what the return value means and how it is returned is implementation-specific. For most OSes, the return value becomes the exit code of the process.
2,637,700
2,637,823
Is it possible to roll a significantly faster version of sqrt
In an app I'm profiling, I found that in some scenarios this function is able to take over 10% of total execution time. I've seen discussion over the years of faster sqrt implementations using sneaky floating-point trickery, but I don't know if such things are outdated on modern CPUs. MSVC++ 2008 compiler is being used, for reference... though I'd assume sqrt is not going to add much overhead. See also here for similar discussion on the modf function. EDIT: for reference, this is one widely-used method, but is it actually much quicker? How many cycles is SQRT anyway these days?
Yes, it is possible even without trickery: sacrifice accuracy for speed: the sqrt algorithm is iterative, re-implement with fewer iterations. lookup tables: either just for the start point of the iteration, or combined with interpolation to get you all the way there. caching: are you always sqrting the same limited set of values? if so, caching can work well. I've found this useful in graphics applications where the same thing is being calculated for lots of shapes the same size, so results can be usefully cached. Hello from 11 years in the future. Considering this still gets occasional votes, I thought I'd add a note about performance, which now even more than then is dramatically limited by memory accesses. You absolutely must use a realistic benchmark (ideally, your whole application) when optimising something like this - the memory access patterns of your application will have a dramatic effect on solutions like lookup tables and caches, and just comparing 'cycles' for your optimised version will lead you wildly astray: it is also very difficult to assign program time to individual instructions, and your profiling tool may mislead you here. On a related note, consider using simd/vectorised instructions for calculating square roots, like _mm512_sqrt_ps or similar, if they suit your use case. Take a look at section 15.12.3 of intel's optimisation reference manual, which describes approximation methods, with vectorised instructions, which would probably translate pretty well to other architectures too.
2,637,975
2,637,990
Zero out array sent as parameter in C++
How do you make all elements = 0 in the array sent as a parameter? int myArrayFunction(int p_myArray[]) { p_myArray[] = {0};//compiler error syntax error: ']' . . }
No, you can't. There's not enough information: you need to pass the length of the array too. int myArrayFunction(int p_myArray[], int arrayLength) { // --------------------------------------^ !!! Then you can use memset or std::fill to fill the array with zero. (= {0} only works in initialization.) std::fill(p_myArray, p_myArray + arrayLength, 0); Alternatively, switch to using a std::vector; then you don't need to keep track of the length. int myArrayFunction(std::vector<int>& p_myArray) { std::fill(p_myArray.begin(), p_myArray.end(), 0); }
2,638,015
2,638,070
How slow are bit fields in C++
I have a C++ application that includes a number of structures with manually controlled bit fields, something like #define FLAG1 0x0001 #define FLAG2 0x0002 #define FLAG3 0x0004 class MyClass { ' ' unsigned Flags; int IsFlag1Set() { return Flags & FLAG1; } void SetFlag1Set() { Flags |= FLAG1; } void ResetFlag1() { Flags &= 0xffffffff ^ FLAG1; } ' ' }; For obvious reasons I'd like to change this to use bit fields, something like class MyClass { ' ' struct Flags { unsigned Flag1:1; unsigned Flag2:1; unsigned Flag3:1; }; ' ' }; The one concern I have with making this switch is that I've come across a number of references on this site stating how slow bit fields are in C++. My assumption is that they are still faster than the manual code shown above, but is there any hard reference material covering the speed implications of using bit fields on various platforms, specifically 32bit and 64bit windows. The application deals with huge amounts of data in memory and must be both fast and memory efficient, which could well be why it was written this way in the first place.
The two examples should be very similar in speed because the compiler will have to end up issuing pretty much the same instructions for bit-masking in both cases. To know which is really best, run a few simple experiments. But don't be surprised if the results are inconclusive; that's what I'm predicting... You might be better saying that the bitfields are of type bool though.
2,638,095
2,639,122
Trouble compiling C/C++ project in NetBeans 6.8 with MinGW on Windows
I am learning C and because VC++ 2008 doesn't support C99 features I have just installed NetBeans and configure it to work with MinGW. I can compile single file project ( main.c) and use debugger but when I add new file to project I get error "undefined reference to ... function(code) in that file..". Obviously MinGW does't link my files or I don't know how properly add them to my project (c standard library files work fine). /bin/make -f nbproject/Makefile-Debug.mk SUBPROJECTS= .build-conf make[1]: Entering directory `/c/Users/don/Documents/NetBeansProjects/CppApplication_7' /bin/make -f nbproject/Makefile-Debug.mk dist/Debug/MinGW-Windows/cppapplication_7.exe make[2]: Entering directory `/c/Users/don/Documents/NetBeansProjects/CppApplication_7' mkdir -p dist/Debug/MinGW-Windows gcc.exe -o dist/Debug/MinGW-Windows/cppapplication_7 build/Debug/MinGW-Windows/main.o build/Debug/MinGW-Windows/main.o: In function `main': C:/Users/don/Documents/NetBeansProjects/CppApplication_7/main.c:5: undefined reference to `X' collect2: ld returned 1 exit status make[2]: *** [dist/Debug/MinGW-Windows/cppapplication_7.exe] Error 1 make[2]: Leaving directory `/c/Users/don/Documents/NetBeansProjects/CppApplication_7' make[1]: *** [.build-conf] Error 2 make[1]: Leaving directory `/c/Users/don/Documents/NetBeansProjects/CppApplication_7' make: *** [.build-impl] Error 2 BUILD FAILED (exit value 2, total time: 1s) main.c #include "header.h" int main(int argc, char** argv) { X(); return (EXIT_SUCCESS); } header.h #ifndef _HEADER_H #define _HEADER_H #include <stdio.h> #include <stdlib.h> void X(void); #endif source.c #include "header.h" void X(void) { printf("dsfdas"); }
I found what was wrong. I was adding the files in the physical view, not in the logical view.
2,638,100
2,638,165
High memory usage for dummies
I've just restarted my Firefox web browser again because it started stuttering and slowing down. This happens every other day due to (my understanding) excessive memory usage. I've noticed it takes 40M when it starts and then, by the time I notice the slowdown, it goes to 1G and my machine has nothing more to offer unless I close other applications. I'm trying to understand the technical reasons behind why it's such a difficult problem to solve. Mozilla have a page about high memory usage: http://support.mozilla.com/en-US/kb/High+memory+usage But I'm looking for a slightly more in-depth and satisfying explanation. Not super technical, but enough to give the issue more respect and please the crowd here. Some questions I'm already pondering (they could be silly so take it easy): When I close all tabs, why doesn't the memory usage go all the way down? Why are there no limits on extensions/themes/plugins memory usage? Why does the memory usage increase if it's left open for long periods of time? Why are memory leaks so difficult to find and fix? App and language agnostic answers also much appreciated.
Browsers are like people - they get old, they get bloated, and they get ditched for younger and leaner models. Firefox is not just a browser, it's an ecosystem. While I feel that recent versions are quite bloated, the core product is generally stable. However, Firefox is an ecosystem/platform for: 1) Badly written plug-ins 2) Badly written JavaScript code that executes within it. 3) Adobe Flash as a platform for heavyweight video and for poorly written ad scripts such as 'hit Osama bin Laden with a duck to reduce your mortgage rate and receive a free iPod' (participation required). 4) Quicktime and other media players. 5) Some embedded Java code. The description of a memory leak suggests a script running amok or a third-party tool requesting more memory. If you ever run Flash on a Mac, that's almost a given, along with 90% CPU utilization. The goal of most programming languages is not to save you but to give you tools to save yourself. You can write bad and bloated code with memory leaks in any language, including ones with garbage collection. Third-party tools are usually not as well tested as the platform itself. Web pages that try to do too much are also not uncommon. If you want to do an experiment to demonstrate this, get a Mac with Firefox, go to a well-written site like Stack Overflow and spend an hour. Your memory usage shouldn't grow much. Then spend 5 minutes visiting random pages on Myspace. Now let me try and answer your questions based on my guesses, since I'm not familiar with the source code. * When I close all tabs, why doesn't the memory usage go all the way down? Whereas each browser instance is an independent process with its own memory, the tabs in a single window are all within the same process. Firefox used to have some sort of in-memory caching, and merely closing a tab doesn't clear the relevant information immediately from the in-memory cache. If you reopened a tab to the same site, you might get better performance.
There was some advanced option to allow you to disable it, something like browser.cache.memory.enable. Or just search for disabling the memory cache. * Why are there no limits on extensions/themes/plugins memory usage? For the same reason that Windows or Linux doesn't have a vetting process on applications you can run on them. It's an open environment and you assume the risk. If you want an environment where applications and extensions are 'validated', Apple might be the way to go :) * Why does the memory usage increase if it's left open for long periods of time? Not all calculations and actions in a script have visual manifestations. A script could be doing some stuff in the background (like requesting extra materials, pre-fetching stuff, or just bugs) even if you don't see it. * Why are memory leaks so difficult to find and fix? It's about bookkeeping. Think about every item you ever borrowed (even a pen), or that someone borrowed from you, in your entire life. Are they all accounted for? Memory leaks are the same way (you borrow memory from the system), except that you pass items around. Then look at the stuff on your desk: did you leave anything lying around because 'you might need it soon' even though you probably won't? Same story.
2,638,329
2,638,969
Detect a USB drive being inserted - Windows Service
I am trying to detect a USB disk drive being inserted within a Windows Service, I have done this as a normal Windows application. The problem is the following code doesn't work for volumes. Registering the device notification: DEV_BROADCAST_DEVICEINTERFACE notificationFilter; HDEVNOTIFY hDeviceNotify = NULL; ::ZeroMemory(&notificationFilter, sizeof(notificationFilter)); notificationFilter.dbcc_size = sizeof(DEV_BROADCAST_DEVICEINTERFACE); notificationFilter.dbcc_devicetype = DBT_DEVTYP_DEVICEINTERFACE; notificationFilter.dbcc_classguid = ::GUID_DEVINTERFACE_VOLUME; hDeviceNotify = ::RegisterDeviceNotification(g_serviceStatusHandle, &notificationFilter, DEVICE_NOTIFY_SERVICE_HANDLE); The code from the ServiceControlHandlerEx function: case SERVICE_CONTROL_DEVICEEVENT: PDEV_BROADCAST_HDR pBroadcastHdr = (PDEV_BROADCAST_HDR)lpEventData; switch (dwEventType) { case DBT_DEVICEARRIVAL: ::MessageBox(NULL, "A Device has been plugged in.", "Pounce", MB_OK | MB_ICONINFORMATION); switch (pBroadcastHdr->dbch_devicetype) { case DBT_DEVTYP_DEVICEINTERFACE: PDEV_BROADCAST_DEVICEINTERFACE pDevInt = (PDEV_BROADCAST_DEVICEINTERFACE)pBroadcastHdr; if (::IsEqualGUID(pDevInt->dbcc_classguid, GUID_DEVINTERFACE_VOLUME)) { PDEV_BROADCAST_VOLUME pVol = (PDEV_BROADCAST_VOLUME)pDevInt; char szMsg[80]; char cDriveLetter = ::GetDriveLetter(pVol->dbcv_unitmask); ::wsprintfA(szMsg, "USB disk drive with the drive letter '%c:' has been inserted.", cDriveLetter); ::MessageBoxA(NULL, szMsg, "Pounce", MB_OK | MB_ICONINFORMATION); } } return NO_ERROR; } In a Windows application I am able to get the DBT_DEVTYP_VOLUME in dbch_devicetype, however this isn't present in a Windows Service implementation. Has anyone seen or heard of a solution to this problem, without the obvious, rewrite as a Windows application?
Windows 7 supports "trigger started services". If you want to start your service, go around in a sleeping loop, and react whenever something is plugged in, I think you would be better off (assuming Windows 7 is an option) going with a trigger started service where the OS starts the service when a USB device is plugged in. (There are other triggers but you mentioned this one.) The sample application XP2Win7 (http://code.msdn.microsoft.com/XP2Win7) includes this functionality. It comes with full source code. Most is in VB and C# but the trigger started services part is in (native) C++.
2,638,361
2,638,438
Porting Python algorithm to C++ - different solution
Thank you all for helping. Below this post I put the corrected version's of both scripts which now produce the equal output. Hello, I have written a little brute string generation script in python to generate all possible combinations of an alphabet within a given length. It works quite nice, but for the reason I wan't it to be faster I try to port it to C++. The problem is that my C++ Code is creating far too much combination for one word. Heres my example in python: ./test.py gives me aaa aab aac aad aa aba .... while ./test (the c++ programm gives me) aaa aaa aaa aaa aa Here I also get all possible combinations, but I get them twice ore more often. Here is the Code for both programms: #!/usr/bin/env python import sys #Brute String Generator #Start it with ./brutestringer.py 4 6 "abcdefghijklmnopqrstuvwxyz1234567890" "" #will produce all strings with length 4 to 6 and chars from a to z and numbers 0 to 9 def rec(w, p, baseString): for c in "abcd": if (p<w - 1): rec(w, p + 1, baseString + "%c" % c) print baseString for b in range(3,4): rec(b, 0, "") And here the C++ Code #include <iostream> using namespace std; string chars="abcd"; void rec(int w,int b,string p){ unsigned int i; for(i=0;i<chars.size();i++){ if(b < (w-1)){ rec(w, (b+1), p+chars[i]); } cout << p << "\n"; } } int main () { int a=3, b=0; rec (a+1,b, ""); return 0; } Does anybody see my fault ? I don't have much experience with C++. Thanks indeed Here the corrected version: C++ #include <iostream> using namespace std; string chars="abcd"; void rec(int w,int b,string p){ unsigned int i; for(i=0;i<chars.size();i++){ if(b < (w)){ rec(w, (b+1), p+chars[i]); } } cout << p << "\n"; } int main () { rec (3,0, ""); return 0; } Python #!/usr/bin/env python import sys def rec(w, b, p): for c in "abcd": if (b < w - 1): rec(w, b + 1, p + "%c" % c) print p rec(4, 0, "") Equal Output: $ ./test > 1 $ ./test.py 3 3 "abcd" "" > 2 $ diff 1 2 $
I think the Python code is also broken, but maybe you don't notice because the print is indented by one space too many (hey, now I've seen a Python program with an off-by-one error!). Shouldn't the output only happen in the else case? And the reason why the output happens more often is that you call print/cout 4 times. I suggest changing the code to: def rec(w, p, baseString): if w == p: print baseString else: for ...
2,638,409
2,638,485
Just introducing myself to TMPing, and came across a quirk
I was just trying to learn the syntax of the beginner things, and how it worked when I was making this short bit of code in VS2008. The code below works in adding numbers 1 to 499, but if I add 1 to 500, the compiler bugs out giving me: fatal error C1001: An internal error has occurred in the compiler. And I was just wondering why that is. Is there some limit to how much code the compiler can generate or something and it just happened to be a nice round number of 500 for me? #include <iostream> using namespace std; template < int b > struct loop { enum { sum = loop< b - 1 >::sum + b }; }; template <> struct loop< 0 > { enum { sum = 0 }; }; int main() { cout << "Adding the numbers from 1 to 499 = " << loop< 499 >::sum << endl; return 0; }
I assume with gcc (and by extension g++) the default maximum template recursion depth is 500 as at least on my machine I managed to reproduce your problems with a (slightly better) warning message. Compiling loop<500>::sum worked fine but trying to compile loop<501>::sum failed. If you are using gcc (or g++) the solution is to compile it with -ftemplate-depth-## (Where ## is the maximum allowed depth). So for example to compile main.cpp with a maximum template recursion depth of 2000 g++ -ftemplate-depth-2000 main.cpp Or convert the code to this: template < int b > struct loop { enum { sum = (b*(b+1))/2 }; }; (But I will admit the above code doesn't help you learn about template metaprogramming)
2,638,443
2,638,465
iPhone c++ development / compiler on a non-Mac PC? (Windows? Linux?)
According to the (in)famous iPhone Developer Program License Agreement change: 3.3.1 — Applications may only use Documented APIs in the manner prescribed by Apple and must not use or call any private APIs. Applications must be originally written in Objective-C, C, C++, or JavaScript as executed by the iPhone OS WebKit engine, and only code written in C, C++, and Objective-C may compile and directly link against the Documented APIs (e.g., Applications that link to Documented APIs through an intermediary translation or compatibility layer or tool are prohibited). So it is allowed to develop iPhone apps using C++. My questions: Is there a compiler / IDE for developing iPhone apps using C++? Is that compiler / IDE available on non-Mac environments? (Windows? Linux?) If not, why? I mean, an Eclipse C++ plugin for iPhone development would be quite popular; or is there already any serious attempt to do that?
1) Yes (Xcode, though you'll still need a bit of Objective-C glue code to init your application). 2) No. 3) Because they don't want you to, and you have to accept the license agreement. EDIT: here you go for restrictions on 3). Simply put, you agree to only use the SDK provided by Apple, in conditions restricted by Apple. 1.2 Definitions "SDK" (Software Development Kit) means the Documentation, software (source code and object code), applications, sample code, simulator, tools, libraries, APIs, data, files, and materials provided by Apple for use by You in connection with Your Application development, and includes any Updates that may be provided by Apple to You pursuant to this Agreement. - 2.1 Permitted Uses and Restrictions Subject to the terms and conditions of this Agreement, Apple hereby grants You during the Term, a limited, non-exclusive, personal, revocable, non-sublicensable and non-transferable license to: (a) Install a reasonable number of copies of the SDK portion of the Apple Software on Apple-branded computers owned or controlled by You, to be used internally by You or Your Authorized Developers for the sole purpose of developing or testing Applications; - 2.6 No Other Permitted Uses You agree not to install, use or run the SDK on any non-Apple-branded computer, not to install, use or run the iPhone OS and Provisioning Profiles on or in connection with devices other than iPhone OS Products, or to enable others to do so.
You may not and You agree not to, or to enable others to, copy (except as expressly permitted under this Agreement), decompile, reverse engineer, disassemble, attempt to derive the source code of, modify, decrypt, or create derivative works of the Apple Software or any services provided by the Apple Software or otherwise provided hereunder, or any part thereof (except as and only to the extent any foregoing restriction is prohibited by applicable law or to the extent as may be permitted by licensing terms governing use of open-sourced components or sample code included with the Apple Software).
2,638,460
2,638,494
Is it possible to simultaneously debug VB6 and a C++ COM dll?
I have a VB6 dll that is loaded by a VB6 frontend. This VB6 dll calls a C++ ATL dll via its COM interface. So, I can run from code in VB6 and I can debug in C++ also, however I can't seem to step through the VB6 code and then get into the C++ code. I feel that this should be possible. Currently I am doing the following steps Start VB6 debugging Start C++ debugging. This involves starting the VB6 front end and setting the working directory to the VB6 front end directory using the VS2008 Debugging Properties in the Options. Execute the code and step through the VB6 code to the point where I should be entering the C++ code. I see the loaded symbols window changing in the VS2008 IDE. Now, it looks like it should work, but I never hit any breakpoints in my C++ code. I hit the breakpoints if I don't start the VB6 debugging first.
You should be able to set vb6.exe as the startup program for your project in C++ and start debugging. Then in VB6, open the project and start debugging.
2,638,654
4,457,138
Redirect C++ std::clog to syslog on Unix
I work on Unix on a C++ program that sends messages to syslog. The current code uses the syslog system call, which works like printf. Now I would prefer to use a stream for that purpose instead, typically the built-in std::clog. But clog merely redirects output to stderr, not to syslog, and that is useless for me as I also use stderr and stdout for other purposes. I've seen in another answer that it's quite easy to redirect it to a file using rdbuf(), but I see no way to apply that method to call syslog, as openlog does not return a file handle I could tie a stream to. Is there another method to do that? (It looks pretty basic for Unix programming.) Edit: I'm looking for a solution that does not use an external library. What @Chris is proposing could be a good start but is still a bit vague to become the accepted answer. Edit: using Boost.IOStreams is OK as my project already uses Boost anyway. Linking with an external library is possible but is also a concern, as it's GPL code. Dependencies are also a burden as they may conflict with other components, not be available on my Linux distribution, introduce third-party bugs, etc. If this is the only solution, I may consider avoiding streams completely... (a pity).
I needed something simple like this too, so I just put this together: log.h: #include <streambuf> #include <syslog.h> enum LogPriority { kLogEmerg = LOG_EMERG, // system is unusable kLogAlert = LOG_ALERT, // action must be taken immediately kLogCrit = LOG_CRIT, // critical conditions kLogErr = LOG_ERR, // error conditions kLogWarning = LOG_WARNING, // warning conditions kLogNotice = LOG_NOTICE, // normal, but significant, condition kLogInfo = LOG_INFO, // informational message kLogDebug = LOG_DEBUG // debug-level message }; std::ostream& operator<< (std::ostream& os, const LogPriority& log_priority); class Log : public std::basic_streambuf<char, std::char_traits<char> > { public: explicit Log(std::string ident, int facility); protected: int sync(); int overflow(int c); private: friend std::ostream& operator<< (std::ostream& os, const LogPriority& log_priority); std::string buffer_; int facility_; int priority_; char ident_[50]; }; log.cc: #include <cstring> #include <ostream> #include "log.h" Log::Log(std::string ident, int facility) { facility_ = facility; priority_ = LOG_DEBUG; strncpy(ident_, ident.c_str(), sizeof(ident_)); ident_[sizeof(ident_)-1] = '\0'; openlog(ident_, LOG_PID, facility_); } int Log::sync() { if (buffer_.length()) { syslog(priority_, "%s", buffer_.c_str()); buffer_.erase(); priority_ = LOG_DEBUG; // default to debug for each message } return 0; } int Log::overflow(int c) { if (c != EOF) { buffer_ += static_cast<char>(c); } else { sync(); } return c; } std::ostream& operator<< (std::ostream& os, const LogPriority& log_priority) { static_cast<Log *>(os.rdbuf())->priority_ = (int)log_priority; return os; } In main() I initialize clog: std::clog.rdbuf(new Log("foo", LOG_LOCAL0)); Then whenever I want to log, it's easy: std::clog << kLogNotice << "test log message" << std::endl; std::clog << "the default is debug level" << std::endl;
2,638,664
2,641,084
Is there any free OCaml to C translator?
I have a nice OCaml codebase (50000 lines) that I want to port to C. Is there any free OCaml-to-C translator?
This probably isn't what you want, but you can get the OCaml compiler to dump its runtime code in C: ocamlc -output-obj -o foo.c foo.ml What you get is basically a static dump of the bytecode. The result will look something like: #include <caml/mlvalues.h> CAMLextern void caml_startup_code( code_t code, asize_t code_size, char *data, asize_t data_size, char *section_table, asize_t section_table_size, char **argv); static int caml_code[] = { 0x00000054, 0x000003df, 0x00000029, 0x0000002a, 0x00000001, 0x00000000, /* ... */ } static char caml_data[] = { 132, 149, 166, 190, 0, 0, 3, 153, 0, 0, 0, 118, /* ... */ }; static char caml_sections[] = { 132, 149, 166, 190, 0, 0, 21, 203, 0, 0, 0, 117, /* ... */ }; /* ... */ void caml_startup(char ** argv) { caml_startup_code(caml_code, sizeof(caml_code), caml_data, sizeof(caml_data), caml_sections, sizeof(caml_sections), argv); } You can compile it with gcc -L/usr/lib/ocaml foo.c -lcamlrun -lm -lncurses For more information, see the OCaml manual.
2,638,781
2,638,924
C++: conjunction of binds?
Suppose the following two functions: #include <iostream> #include <cstdlib> // atoi #include <cstring> // strcmp #include <boost/bind.hpp> bool match1(const char* a, const char* b) { return (strcmp(a, b) == 0); } bool match2(int a, const char* b) { return (atoi(b) == a); } Each of these functions takes two arguments, but can be transformed into a callable object that takes only one argument by using (std/boost)bind. Something along the lines of: boost::bind(match1, "a test"); boost::bind(match2, 42); I want to be able to obtain, from two functions like these that take one argument and return bool, a callable object that takes two arguments and returns the && of the bools. The type of the arguments is arbitrary. Something like an operator&& for functions that return bool.
The return type of boost::bind overloads operator && (as well as many others). So you can write boost::bind(match1, "a test", _1) && boost::bind(match2, 42, _2); If you want to store this value, use boost::function. In this case, the type would be boost::function<bool(const char *, const char *)> Note that this isn't the return type of boost::bind (that is unspecified), but any functor with the right signature is convertible to a boost::function.
2,638,843
2,639,602
Should I use C++0x Features Now?
With the official release of VS 2010, is it safe for me to start using the partially-implemented C++0x feature set in my new code? The features that are of interest to me right now are both implemented by VC++ 2010 and recent versions of GCC. These are the only two that I have to support. In terms of the "safety" mentioned in the first sentence: can I start using these features (e.g., lambda functions) and still be guaranteed that my code will compile in 10 years on a compiler that properly conforms to C++0x when it is officially released? I guess I'm asking if there is any chance that VC++ 2010 or GCC will end up like VC++ 6; it was released before the language was officially standardized and consequently allowed grossly ill-formed code to compile. After all, Microsoft does say that "10 is the new 6". ;)
There are several items I've already discovered that are not written to the standard. For instance, this would not work: struct test { int operator()(int); }; std::cout << typeid( std::result_of<test(int)>::type ).name() << std::endl; According to the wikipedia site on C++0x it should. Apparently VS2010 uses the TR1 definition of result_of, which is different from what C++0x will have (based on decltype). Also, this does not work: std::bind<int>([](int i)->int {return i; }); It fails because calling std::result_of (well, the implementation of it), fails because the lambda type has no result_of typedef. This is of course why you supply the return type to the bind call but apparently it ignores it for some reason and continues to search on its own. The boost version of bind works as expected. For this reason we're continuing to use the boost version of bind in our project. Also, if you'll note on http://blogs.msdn.com/vcblog/archive/2010/04/06/c-0x-core-language-features-in-vc10-the-table.aspx?CommentPosted=true#commentmessage that there's some changes yet to be implemented by VS2010 that will effect lambda expressions. I haven't been able to break them but then I haven't used nested lambdas and probably never will. You should also keep in mind that boost::shared_ptr and std::shared_ptr are incompatible. Not surprising, but you must know this if you intend to use one or the other...I'd recommend not both and we're just going to stick with boost. There's also no declval in VS2010. Easy enough to make though: template < typename T > T&& declval(); Example of use: template < typename T > struct point { T x,y; }; template < typename T1, typename T2 > point<decltype(declval<T1>() + declval<T2>())> operator + (point<T1> const& lh, point<T2> const& rh) { ... } You'll also note in the page I linked above that I've already discussed with members of the dev team (or the PR part or whatever) that there's a bug in decltype. 
There's more than just the one I mention so I'll show both: template < typename T1, typename T2 > auto operator + (point<T1> const& lh, point<T2> const& rh) -> point<decltype(lh.x + rh.x)> { ... } point<int> x; point<double> y; point<double> pt = x + y; // fails, operator + returned point<const double> void f(); auto ptr = &f; std::cout << typeid( decltype(*ptr) ).name() << std::endl; std::cout << typeid( decltype(*&f) ).name() << std::endl; // should output the same thing...outputs void (*)() Also...according to some email exchanges about decltype and result_of, this should work: std::result_of< decltype(f)() >::type x = f(); With my home brewed version of std::result_of that uses decltype this would work if the decltype(f)() expression worked correctly. It does not. Gives some error about function returning a function. You have to use "decltype(&f)()" to make the expression work. So, sure...we're using it. There's some bugs and crap though. The benefits outweigh waiting IMHO. Don't expect your code to be standard when the standard comes out though and future MS compilers may break it.
2,638,911
2,638,937
Trouble declaring and recognizing global functions
I've created some mathematical functions that will be used in main() and by member functions in multiple host classes. I was thinking it would be easiest to make these math functions global in scope, but I'm not sure how to do this. I've currently put all the functions in a file called Rdraws.cpp, with the prototypes in Rdraws.h. Even with all the #includes and externs, I'm getting a "symbol not found" compiler error at the first function call in main(). Here's what I have: // Rdraws.cpp #include <cstdlib> using namespace std; #include <cmath> #include "Rdraws.h" #include "rng.h" extern RNG rgen // this is the PRNG used in the simulation; global scope void rmultinom( double p_trans[], int numTrials, int numTrans, int numEachTrans[] ) { // function 1 def } void rmultinom( const double p_trans[], const int numTrials, int numTrans, int numEachTrans[]) { // function 2 def } int rbinom( int nTrials, double pLeaving ) { // function 3 def } // Rdraws.h #ifndef RDRAWS #define RDRAWS void rmultinom( double[], int, int, int[] ); void rmultinom( const double[], const int, int, int[] ); int rbinom( int, double ); #endif // main.cpp ... #include "Rdraws.h" ... extern void rmultinom(double p_trans[], int numTrials, int numTrans, int numEachTrans[]); extern void rmultinom(const double p_trans[], const int numTrials, int numTrans, int numEachTrans[]); extern int rbinom( int n, double p ); RNG rgen; // global PRNG object created for simulation int main() { ... } I'm pretty new to programming. If there's a dramatically smarter way to do this, I'd love to know. Update I'm a moron and didn't realize I still hadn't included Rdraws.cpp in my compiler. As a poster noted, I also forgot a semicolon. I would still appreciate suggestions if the method outlined here could be improved upon.
Which compiler are you using? You need to first compile all of the source files into object files and then link all of the object files together. Example: g++ -c -Wall -O2 main.cpp g++ -c -Wall -O2 Rdraws.cpp And then to get the executable... g++ -s main.o Rdraws.o
2,638,956
2,639,052
C++ open() fails for no apparent reason
The following code: char filename[64]; ifstream input; cout << "Please enter the filename: " << endl; cin >> filename; input.open(filename); if (!input.is_open()) { cout << "Opening file " << filename << " failed." << endl; exit(1); } fails, it enters the if() and exits. What could possibly be the cause for this? I'm using Microsoft Visual C++. When I hardcoded the filename as a constant it instead ended up garbled: http://pici.se/pictures/CNQEnwhgo.png Suggestions? [Edit] I managed to condense it into this minimal test case that fails: #include <iostream> #include <fstream> using namespace std; int main(int argc, char *argv[]){ ifstream input; input.open("C:\\test.txt"); if (!input.is_open()) { cout << "Failed." << endl; exit(1); } return 0; } I was wondering if there might be some discrepancy with the keymaps? That I'm inputting the filename in some charset while the filesystem knows it under some other name? I'm using Windows, by the way. [Edit] Thanks for all your help but I give up now. I'll use C style fopen instead. :) [Edit] Oh my god. Now I feel so stupid. Turns out the file was actually named test.txt.txt and Windows hid the second .txt Once again, thanks for all your help...
Can you make sure that the filename is what you think it is? cin >> filename; cout << filename; ifstream myFile(filename); if ( myFile.is_open() ) { // ... } On Unix/Linux systems, remember that file names are case sensitive. ThisIsMyFile thisIsMyFile Are two distinct and separate files. [EDIT] ifstream::open is defined as: void open ( const char * filename, ios_base::openmode mode = ios_base::in ); Opens a file whose name is s, associating its content with the stream object to perform input/output operations on it. The operations allowed and some operating details depend on parameter mode. The function effectively calls rdbuf()->open(filename,mode). If the object already has a file associated (open), the function fails. On failure, the failbit flag is set (which can be checked with member fail), and depending on the value set with exceptions an exception may be thrown. Try changing "C:\test.txt" to simply "test.txt" and run this program from the "C:\" directory. Here is an exact similar sample: // ifstream::is_open #include <iostream> #include <fstream> using namespace std; int main () { ifstream infile; infile.open ("test.txt"); if (infile.is_open()) { while (infile.good()) cout << (char) infile.get(); infile.close(); } else { cout << "Error opening file"; } return 0; } If something this obvious isn't working, it's time to fire up the debugger.
2,638,986
2,639,025
Can a Java class implement a C++ interface
Is it possible for a Java class to implement a C++ interface (by interface I mean a fully abstract class)? Where can I read more about this?
Not directly, no. However, you can create a C++ implementation of that interface that simply delegates to a Java implementation via JNI. See the JNI Specification for more details.
2,639,017
2,639,071
Calling assignment operator in copy constructor
Are there any drawbacks to implementing the copy constructor like this? Foo::Foo(const Foo& i_foo) { *this = i_foo; } As I remember, some book recommended calling the copy constructor from the assignment operator and using the well-known swap trick, but I don't remember why...
Yes, that's a bad idea. All member variables of user-defined types will be initialized first, and then immediately overwritten. That swap trick is this: Foo& operator=(Foo rhs) // note the copying { rhs.swap(*this); //swap our internals with the copy of rhs return *this; } // rhs, now containing our old internals, will be deleted
2,639,049
2,639,080
c++, when do you need/have to pass data as (*&)
I came across people passing data objects as: declaration: DataObject * data = 0; calling it as: SomeMethod( data ); definition of SomeMethod: void SomeMethod(SomeObject * & object) My obvious question is, when and why do you have to do this (*&)? Is it passing the pointer by reference?
Yes, exactly! So, not only can your function SomeMethod refer to the same data as the caller, but it can actually change what the caller points to! int main() { SomeObject* data = new SomeObject(1); cout << data << " -> " << *data; // "0xfffee314 -> 1" SomeMethod(data); cout << data << " -> " << *data; // "0xeee32435 -> 2" } void SomeMethod(SomeObject*& object) { delete object; object = new SomeObject(2); }
2,639,130
2,639,149
accessing the hidden 'this' pointer
I have a GUI architecture wherein elements fire events like so: guiManager->fireEvent(BUTTON_CLICKED, this); Every single event fired passes 'this' as the caller of the event. There is never a time I dont want to pass 'this', and further, no pointer except for 'this' should ever be passed. This brings me to a problem: How can I assert that fireEvent is never given a pointer other than 'this', and how can I simplify (and homogenize) calls to fireEvent to just: guiManager->fireEvent(BUTTON_CLICKED); At this point, I'm reminded of a fairly common compiler error when you write something like this: class A { public: void foo() {} }; class B { void oops() { const A* a = new A; a->foo(); } }; int main() { return 0; } Compiling this will give you In member function ‘void B::oops()’: error: passing ‘const A’ as ‘this’ argument of ‘void A::foo()’ discards qualifiers because member functions pass 'this' as a hidden parameter. "Aha!" I say. This (no pun intended) is exactly what I want. If I could somehow access the hidden 'this' pointer, it would solve both issues I mentioned earlier. The problem is, as far as I know you can't (can you?) and if you could, there would be outcries of "but it would break encapsulation!" Except I'm already passing 'this' every time, so what more could it break. So, is there a way to access the hidden 'this', and if not are there any idioms or alternative approaches that are more elegant than passing 'this' every time?
You could define void Element::fireEvent(EVENT e) { guiManager->fireEvent(e, this); } to save yourself a bit of writing each time. You're going to have to call it with this at some point, since the guiManager needs to know which Element called fireEvent.
2,639,144
2,639,587
Delphi - Capture stdout and stderr output from statically linked MSVC++ compiled DLL
I have been trying to capture stdout and stderr output from a DLL compiled in MSVC++ that my Delphi app statically links to, but so far have been unsuccessful. procedure Test; var fs: TFileStream; begin fs := TFileStream.Create('C:\temp\output.log', fmCreate or fmShareDenyWrite); SetStdHandle(STD_OUTPUT_HANDLE, fs.Handle); SetStdHandle(STD_ERROR_HANDLE, fs.Handle); dllFunc(0); // Writes to stdout in MSVC++ console app, but not here // fs.Length is always zero fs.Free; end; Thought I was on the right track, but it does not work. Is SetStdHandle() enough? Is TFileStream the right thing to use here? Am I using TFileStream properly for SetStdHandle()? Is it possible that the DLL sets its stdout/stderr handles when the app loads? If so, where is the best place to use SetStdHandle() or equivalent? Any help would be appreciated.
If the DLL grabs the stdout handles when it is loaded, then you will need to dynamically load the DLL after you have changed the stdout handles in your code.
2,639,145
2,639,182
look for evaluation function in tictactoe 3d game
I'm trying to apply the minimax algorithm to the game of 3D tic-tac-toe in C++. I'm struggling to find a good evaluation function for it. Does anybody know of a good resource for evaluation functions? Thank you.
Here's what I'd use: Go over all rows (in all directions). For each row, if it has only one player's marks, award that player points based on how many marks there are. You can have a lookup table mapping number of marks to score, which can be adjusted to get the best results. The final result will be the difference between the two players' scores. Example (pseudocode): const int markScore[4+1] = {0, 1, 3, 5, 99999}; //assuming 4x4x4 board //The above values are arbitrary - adjust to what you think makes sense. score = 0; for all rows in all directions: count Xs and Os if (xs>0 && os==0) score += markScore[xs]; else if (os>0 && xs==0) score -= markScore[os]; return score; This should work well, because any row with only one player's marks gives that player a chance to win.
2,639,199
2,639,233
int foo(type& bar); is a bad practice?
Well, here we are. Yet another proposed practice that my C++ book has an opinion on. It says "a returning-value(non-void) function should not take reference types as a parameter." So basically if you were to implement a function like this: int read_file(int& into){ ... } and used the integer return value as some sort of error indicator (ignoring the fact that we have exceptions) then that function would be poorly written and it should actually be like void read_file(int& into, int& error){ } Now to me, the first one is much clearer and nice to use. If you want to ignore the error value, you do so with ease. But this book suggests the later. Note that this book does not say returning value functions are bad. It rather says that you should either only return a value or you should only use references. What are your thoughts on this? Is my book full of crap? (again)
The advice is silly. A direct return value is much smaller and easier to type. Direct return: if (read_file(...)) { ... handle problem ... } Indirect return: int status; read_file(..., status); if (status) { ... handle problem ... } Edit: a bigger issue is whether to use non-const reference parameters at all. It can be surprising to have side effects come flying out of the parameters. One coding standard says that reference parameters should be const and output parameters should use pointers. That way the reader gets a & at the point of call that shouts out "something happens to this parameter".
2,639,255
2,639,268
Return a "NULL" object if search result not found
I'm pretty new to C++ so I tend to design with a lot of Java-isms while I'm learning. Anyway, in Java, if I had class with a 'search' method that would return an object T from a Collection< T > that matched a specific parameter, I would return that object and if the object was not found in the collection, I would return null. Then in my calling function I would just check if(tResult != null) { ... } In C++, I'm finding out that I can't return a null value if the object doesn't exist. I just want to return an 'indicator' of type T that notifies the calling function that no object has been found. I don't want to throw an exception because it's not really an exceptional circumstance. This is what my code looks like right now: class Node { Attr& getAttribute(const string& attribute_name) const { //search collection //if found at i return attributes[i]; //if not found return NULL; // what should this be? } private: vector<Attr> attributes; } How can I change it so I can give that kind of marker?
In C++, references can't be null. If you want to optionally return null if nothing is found, you need to return a pointer, not a reference: Attr *getAttribute(const string& attribute_name) const { //search collection //if found at i return &attributes[i]; //if not found return nullptr; } Otherwise, if you insist on returning by reference, then you should throw an exception if the attribute isn't found. (By the way, I'm a little worried about your method being const and returning a non-const attribute. For philosophical reasons, I'd suggest returning const Attr *. If you also may want to modify this attribute, you can overload with a non-const method returning a non-const attribute as well.)
2,639,282
2,639,333
opengl texture cube c++
Hello, I created a cube and want a texture on one side. glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D, texture[0]); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, filterMode); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, filterMode); glBegin(GL_POLYGON); //front face glColor4f(1.0f,0.0f,0.0f,1.0f); //RED glVertex3f(-fSeitenL/2.0f,-fSeitenL/2.0f,+fSeitenL/2.0f); glColor4f(1.0f,1.0f,0.0f,1.0f); //YELLOW glVertex3f(+fSeitenL/2.0f,-fSeitenL/2.0f,+fSeitenL/2.0f); glColor4f(1.0f,1.0f,1.0f,1.0f); //WHITE glVertex3f(+fSeitenL/2.0f,+fSeitenL/2.0f,+fSeitenL/2.0f); glColor4f(1.0f,0.0f,1.0f,1.0f); //MAGENTA glVertex3f(-fSeitenL/2.0f,+fSeitenL/2.0f,+fSeitenL/2.0f); glEnd(); glDisable(GL_TEXTURE_2D); but I can't see my texture. What did I do wrong? Thanks.
First of all, this doesn't seem to be a cube but just a quad; a cube is made of 6 different quads (and you could use GL_QUADS instead of GL_POLYGON). Second, you are loading the texture but not mapping it to the vertices. You need to supply texture coordinates to map how the texture should fit onto the quad. You can do it like this: glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 1.0f); glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 1.0f); glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f, 1.0f, 1.0f); glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f, 1.0f, 1.0f); The example is taken from the NeHe OpenGL guide, and I really suggest you take a look since it's quite well explained: http://nehe.gamedev.net Check tutorial 6 about texture mapping: http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=06
2,639,300
2,639,322
std::list or std::multimap
Hey, I currently have a list of a struct that I made, and I sort this list every time I add a new object, using the std::list sort method. I want to know which would be faster for this, std::multimap or std::list, since I'm iterating the whole list every frame (I am making a game). I would like to hear your opinion on what I should use in this case.
std::multimap will probably be faster, as it is O(log n) per insertion, whereas an insert and sort of the list is O(n log n). Depending on your usage pattern, you might be better off with sorted vectors. If you insert a whole bunch of items at once and then do a bunch of reads -- i.e. reads and writes aren't interleaved -- then you'll have better performance with vector, std::sort, and std::binary_search.
2,639,557
2,639,581
Program to implement the is_same_type type trait in c++
Hi, could anyone give a sample program that implements the is_same_type type trait in C++?
#include <iostream> template< typename T1, typename T2 > struct is_same_type { enum { result = false }; }; template< typename T> struct is_same_type<T,T> { enum { result = true }; }; int main() { std::cout << is_same_type<int,float>::result << '\n' << is_same_type<char,char>::result << '\n'; return 0; }
2,639,565
2,640,302
Visual Studio 2005 to VS 2008
I am a newbie to the VS IDE and don't have much experience with how the different libraries and files are linked in it. I have to build an OpenCV project, which was made in VS2005 by one of my colleagues, in VS2008. The project is for blob detection. The following is what he has to say in the readme: Steps to use the library (using MSVC++ sp 5): open the project of the library and build it in the project where the library should be used, add: 2.1 In "Project/Settings/C++/Preprocessor/Additional Include directories" add the directory where the blob library is stored 2.2 In "Project/Settings/Link/Input/Additional library path" add the directory where the blob library is stored and in "Object/Library modules" add the cvblobslib.lib file Include the file "BlobResult.h" where you want to use blob variables. To see an example on using the blob library, see the file example.txt inside the zip file. NOTE: Verify that in the project where the cvblobslib.lib is used, the MFC Runtime Libraries are not mixed: Check in "Project->Settings->C/C++->Code Generation->Use run-time library" of your project and set it to Debug Multithreaded DLL (debug version) or to Multithreaded DLL (release version). Check in "Project->Settings->General" how it uses the MFC. It should be "Use MFC in a shared DLL". NOTE1: The library can be compiled and used in .NET using these steps, but the menu options may differ a little NOTE2: In the .NET version, the character sets must be equal in the .lib and in the project. [OpenCV yahoo group: Msg 35500] Can anyone explain how to go about doing this in VS2008? I would also appreciate it if someone could explain how the different libraries are linked, what Debug and Release are, and what everything in a Visual Studio project folder is for.
I got confused by this at first as well, as it's not very well explained by MSDN. Your best hope for learning it is to try linking to a library that has VS2008 instructions (like boost). Anyway, additional include directories are in Project->Properties->C++->General and additional libraries are in Project->Properties->Linker->General. You can put library names in additional libraries under Properties->Linker->Input. So whenever you include a file, it looks in your include directories plus your additional include directories for that file. Whenever you specify an additional library (.lib file), it looks in your library directories plus any additional library directories. If you use an include or library directory a lot, you can make it available for every solution by going to Tools->Options->Projects and Solutions->VC++ Directories and putting the directories you want in Include Files and Library Files respectively. As for Debug vs Release, under Project->Properties there is a drop-down menu in the top left which tells you which configuration you are currently editing the properties of. To change which one you currently want to build, click the "Configuration Manager" in the top right of project properties, and then use the drop-down list to select configurations for each project. Release usually has optimizations turned on etc., debug usually builds debugging databases (pdb files) etc. They also link to their respective libraries. In other words, you usually need the libraries you link to to match your configuration, so a debug build needs to link to a debug library and a release build needs to link to a release library. Usually libraries that are debug have a d following their name, and if they are multi-threaded they have an mt in their name. The C++->Code Generation advice they give you is still the same and is a common source of problems. It means the library you are linking to links to a different C runtime library than the one you are using, which can cause problems.
It can get confusing. Usually your best solution if you are confused is to ask the mailing list of whatever project you are trying to link to. If you get specific errors you can't figure out, try playing around with the configuration, and if you still don't know, ask somewhere like here with your specific problem.
2,639,710
2,639,795
Can we write a portable include guard that doesn’t use the preprocessor in C++?
Can we write a portable include guard that doesn’t use the preprocessor in C++? If so how could that be done?
No. You cannot use #include without the preprocessor. Without preprocessor directives, including the same file twice will always result in the same sequence of tokens. There are a couple non-portable ways to do this (both use the preprocessor), such as: #pragma once and #import "file.h" But header guards work everywhere, and your compiler is probably optimized to check for header guards so it won't even bother processing a duplicate #include directive.
2,639,733
2,639,746
C++ Vector of class objects
I have a class called Movie with the following private data members: private: string title_ ; string director_ ; Movie_Rating rating_ ; unsigned int year_ ; string url_ ; vector<string> actor_; It also contains the following copy constructor: Movie::Movie(Movie& myMovie) { title_ = myMovie.title_; director_ = myMovie.director_; rating_ = myMovie.rating_; year_ = myMovie.year_; url_ = myMovie.url_; actor_ = myMovie.actor_; } When I try to create a vector of this class, vector<Movie> myMovies; and then accept all the info from a user into a temp Movie object (myMovie1), and then use push back: myMovies.push_back(myMovie1); I get the following error: 1>c:\program files (x86)\microsoft visual studio 9.0\vc\include\vector(1233) : error C2558: class 'Movie' : no copy constructor available or copy constructor is declared 'explicit' Where am I going wrong? It seems it wants a copy constructor but I do have one defined.
My guess is that it's protesting against binding a temporary to a non-const reference. Try Movie::Movie(const Movie & myMovie) as the signature to your copy constructor.
2,639,848
2,639,972
How do I link against Intel TBB on Mac OS X with GCC?
I can't for the life of me figure out how to compile and link against the Intel TBB library on my Mac. I've run the commercial installer and the tbbvars.sh script but I can't figure this out. I have a feeling it is something really obvious and it's just been a bit too long since I've done this kind of thing. tbb_test.cpp #include <tbb/concurrent_queue.h> int main() { tbb::concurrent_queue<int> q; } g++ tbb_test.cpp -I /Library/Frameworks/TBB.framework/Headers -ltbb ...can't find the symbols. Cheers! UPDATE: g++ tbb_test.cpp -I /Library/Frameworks/TBB.framework/Headers -L /Library/Frameworks/TBB.framework/Libraries/libtbb.dylib works!
Since you are using a framework instead of a traditional library, you need to use -framework, like: g++ tbb_test.cpp -o tbb_test -framework TBB Instead of: g++ tbb_test.cpp -o tbb_test -I /Library/Frameworks/TBB.framework/Headers -ltbb
2,640,069
2,640,081
Which cast am I using?
I'm trying to cast away const from an object but it doesn't work. But if I use old C-way of casting code compiles. So which casting I'm suppose to use to achieve this same effect? I wouldn't like to cast the old way. //file IntSet.h #include "stdafx.h" #pragma once /*Class representing set of integers*/ template<class T> class IntSet { private: T** myData_; std::size_t mySize_; std::size_t myIndex_; public: #pragma region ctor/dtor explicit IntSet(); virtual ~IntSet(); #pragma endregion #pragma region publicInterface IntSet makeUnion(const IntSet&)const; IntSet makeIntersection(const IntSet&)const; IntSet makeSymmetricDifference(const IntSet&)const; void insert(const T&); #pragma endregion }; //file IntSet_impl.h #include "StdAfx.h" #include "IntSet.h" #pragma region ctor/dtor template<class T> IntSet<T>::IntSet():myData_(nullptr), mySize_(0), myIndex_(0) { } IntSet<T>::~IntSet() { } #pragma endregion #pragma region publicInterface template<class T> void IntSet<T>::insert(const T& obj) { /*Check if we are initialized*/ if (mySize_ == 0) { mySize_ = 1; myData_ = new T*[mySize_]; } /*Check if we have place to insert obj in.*/ if (myIndex_ < mySize_) {//IS IT SAFE TO INCREMENT myIndex while assigning? myData_[myIndex_++] = &T(obj);//IF I DO IT THE OLD WAY IT WORKS return; } /*We didn't have enough place...*/ T** tmp = new T*[mySize_];//for copying old to temporary basket std::copy(&myData_[0],&myData_[mySize_],&tmp[0]); } #pragma endregion Thanks.
There's a dedicated C++ casting operator for dealing with const: const_cast. Note that its target must be a reference or pointer type, so here it's const_cast<T&>, not const_cast<T>: myData_[myIndex_++] = &const_cast<T&>(obj);
2,640,101
2,640,116
How do you compare two unknown numbers to see if they're equal in a special case template?
Here, is my code. Just trying to wrap my head around some of the basic things you can do with TMP. I'm trying to supply two numbers with which the compiler will add up that range of numbers. I'm just not sure how to write the syntax for the "constraint" template. template < int b, int e > struct add { enum { sum = add< b + 1, e >::sum + b }; }; template <> struct add< e, e > { enum { sum = 0 }; }; int main() { cout << add< 4, 8 >::sum << endl; //30 return 0; }
template <int e> struct add< e, e > { ... }; And the result is 4 + 5 + 6 + 7 + 0 == 22, not 4 + 5 + 6 + 7 + 8 == 30. Once b==e in add<...>, add<...>::sum==0, not e.
2,640,424
2,640,443
run a command in C++ program using "system" API
I want to run a DOS command in my C++ program. The point is that I want my program stops while the DOS command is executed. I have used "System" API. My question is "Does 'system' make a new thread/process to run the DOS command in it or it just stops the program until the command is done?" If it creates a new process, how can I stop the program while 'system' is running? Thank you so much, Shadi.
It creates a new process and waits for it to exit. http://www.cplusplus.com/reference/clibrary/cstdlib/system/
2,640,446
2,640,505
Why do some people prefer "T const&" over "const T&"?
So, I realize that const T& and T const& are identical and both mean a reference to a const T. In both cases, the reference is also constant (references cannot be reassigned, unlike pointers). I've observed, in my somewhat limited experience, that most C++ programmers use const T&, but I have come across a few people who use T const&. I use const T& simply because I learned it that way, and so T const& looks a little bit funny to me. What is the reason that you use the variant that you use? Do any of you work at an organization for which the coding standards mandate the use of one variant over the other? Edit Based on the answers, it would appear that one reason for choosing between the two is whether you want to read it like the compiler (right-to-left) or like English (left-to-right). If one reads it like the compiler, then "T const&" reads as "& (reference) const (to a constant) T (of type T)". If one reads it like English, from left-to-right, then "const T&" is read as "a constant object of type T in the form of a reference". I prefer to read it like English prose, but I can certainly see the sense in interpreting it the way that the compiler does. No one has answered the organization or coding standards question, but I strongly suspect that most organizations do not mandate one over the other, although they might strive for consistency.
I think some people simply prefer to read the declarations from right to left. const applies to the left-hand token, except when there is nothing there, in which case it applies to the right-hand token. Hence const T& involves the "except" clause and can perhaps be thought of as more complicated (in reality both should be as easy to understand). Compare: const T* p; (pointer to T that is const) T const* p; (pointer to const T) //<- arguably more natural to read T* const p; (const pointer to T)
2,640,476
2,806,636
How does one port c++ functions to the internet?
I have a few years experience programming c++ and a little less then that using Qt. I built a data mining software using Qt and I want to make it available online. Unfortunately, I know close to nothing about web programming. Firstly, how easy or hard is this to do and what is the best way to go about it? Supposing I am looking to hire someone to make me a secure, long-term, extensible, website for an online software service, what skill set should I be looking for? Edit: I want to make my question a little more specific: How can I take a bunch of working c++ functions and port the code so I can run it server side on a website? Once this is done, would it be easy to make changes to the c++ code and have the algorithm automatically update on the site? What technologies would be involved? Are there any cloud computing platforms that would be good for something like this? @Niklaos-what does it mean to build a library and how does one do that?
Port the functions to Java, easily done from C++, you can even find some tools to help - don't trust them implicitly but they could provide a boost. See longer answer below. Wrap them in a web application, and deploy them on Google App-Engine. Java version of a library would be a jar file. If you really want to be able to update the algorithm implementation dynamically, then you could implement them in Groovy, and upload changes through a form on your webapp, either as files or as a big text block, need to consider version control.
2,640,542
2,640,597
Retrieving values of static const variables at a constructor of a static variable
I understand that the code below would result segmentation fault because at the cstr of A, B::SYMBOL was not initialized yet. But why? In reality, A is an object that serves as a map that maps the SYMBOLs of classes like B to their respective IDs. C holds this map(A) static-ly such that it can provide the mapping as a class function. The primary function of A is to serve as a map for C that initializes itself at startup. How should I be able to do that without segmentation fault, provided that I can still use B::ID and B::SYMBOL in the code (no #define pls)? (ps. assume I have implemented the include guards) //A.h #include "B.h" class A { public: A() { std::cout<<B::ID<<std::endl; std::cout<<B::SYMBOL<<std::endl; } }; //B.h class B { public: static const int ID; static const std::string SYMBOL; } //B.cpp #include "B.h" const int B::ID = 1; const std::string B::SYMBOL = "B"; //C.h #include "A.h" class C { public: static A s_A; }; //C.cpp #include "C.h" A C::s_A; //main.cpp #include "C.h" int main(int c, char** p) { }
Use lazy initialization of s_A. This might work: class C { public: static A& getA() { static A s_A; return s_A; } }; Or: class C { public: static A& getA() { if( ps_A == NULL) ps_A = new A; return *ps_A; } private: static A* ps_A; }; A* C::ps_A = NULL; Neither solution is thread safe.
2,640,601
2,640,716
Does using ReadDirectoryChangesW require administrator rights?
The MSDN says that using ReadDirectoryChangesW implies the calling process having the Backup and Restore privileges. Does this mean that only process launched under administrator account will work correctly? I've tried the following code, it fails to enable the required privileges when running as a restricted user. void enablePrivileges() { enablePrivilege(SE_BACKUP_NAME); enablePrivilege(SE_RESTORE_NAME); } void enablePrivilege(LPCTSTR name) { HANDLE hToken; DWORD status; if (::OpenProcessToken(::GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &hToken)) { TOKEN_PRIVILEGES tp = { 1 }; if( ::LookupPrivilegeValue(NULL, name, &tp.Privileges[0].Luid) ) { tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED; BOOL result = ::AdjustTokenPrivileges(hToken, FALSE, &tp, 0, NULL, NULL); verify (result != FALSE); status = ::GetLastError(); } ::CloseHandle(hToken); } } Am I doing something wrong? Is there any workaround for using ReadDirectoryChangesW from a non-administrator user account? It seems that the .NET's FileSystemWatcher can do this. Thanks! 
Update: Here is the full code of the class: class DirectoryChangesWatcher { public: DirectoryChangesWatcher(wstring directory) { enablePrivileges(); hDir = ::CreateFile(directory.c_str(), FILE_LIST_DIRECTORY | FILE_FLAG_OVERLAPPED, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL); ensure (hDir != INVALID_HANDLE_VALUE, err::SystemException); ::ZeroMemory(&overlapped, sizeof(OVERLAPPED)); overlapped.hEvent = dirChangedEvent.getHandle(); } ~DirectoryChangesWatcher() { ::CloseHandle(hDir); } public: Event& getEvent() { return dirChangedEvent; } FILE_NOTIFY_INFORMATION* getBuffer() { return buffer; } public: void startAsyncWatch() { DWORD bytesReturned; const BOOL res = ::ReadDirectoryChangesW( hDir, &buffer, sizeof(buffer), TRUE, FILE_NOTIFY_CHANGE_LAST_WRITE | FILE_NOTIFY_CHANGE_SIZE, &bytesReturned, &overlapped, NULL); ensure(res != FALSE, err::SystemException); } private: void enablePrivileges() { enablePrivilege(SE_BACKUP_NAME); enablePrivilege(SE_RESTORE_NAME); } void enablePrivilege(LPCTSTR name) { HANDLE hToken; DWORD status; if (::OpenProcessToken(::GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &hToken)) { TOKEN_PRIVILEGES tp = { 1 }; if( ::LookupPrivilegeValue(NULL, name, &tp.Privileges[0].Luid) ) { tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED; BOOL result = ::AdjustTokenPrivileges(hToken, FALSE, &tp, 0, NULL, NULL); verify (result != FALSE); status = ::GetLastError(); } ::CloseHandle(hToken); } } private: HANDLE hDir; OVERLAPPED overlapped; Event dirChangedEvent; FILE_NOTIFY_INFORMATION buffer[1024]; }; } Update: Good news! It turned out the problem really was in the FILE_SHARE_WRITE flag in the call to CreateFile. The notifications did not come unless I was an admin. When I removed this flag, everything is now working ona non-admin account too.
I have used ReadDirectoryChangesW without requiring administrator rights, at least on Vista. I don't think you need to manually elevate the process in order to use it on a folder the user already has permissions to see. It would be more helpful to see the actual code you are using to call ReadDirectoryChangesW, including how you create the handle you pass in.
2,640,642
2,640,756
C++: Implementing Named Pipes using the Win32 API
I'm trying to implement named pipes in C++, but either my reader isn't reading anything, or my writer isn't writing anything (or both). Here's my reader: int main() { HANDLE pipe = CreateFile(GetPipeName(), GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL); char data[1024]; DWORD numRead = 1; while (numRead >= 0) { ReadFile(pipe, data, 1024, &numRead, NULL); if (numRead > 0) cout << data; } return 0; } LPCWSTR GetPipeName() { return L"\\\\.\\pipe\\LogPipe"; } And here's my writer: int main() { HANDLE pipe = CreateFile(GetPipeName(), GENERIC_WRITE, 0, NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL); string message = "Hi"; WriteFile(pipe, message.c_str(), message.length() + 1, NULL, NULL); return 0; } LPCWSTR GetPipeName() { return L"\\\\.\\pipe\\LogPipe"; } Does that look right? numRead in the reader is always 0, for some reason, and it reads nothing but 1024 -54's (some weird I character). Solution: Reader (Server): while (true) { HANDLE pipe = CreateNamedPipe(GetPipeName(), PIPE_ACCESS_INBOUND | PIPE_ACCESS_OUTBOUND , PIPE_WAIT, 1, 1024, 1024, 120 * 1000, NULL); if (pipe == INVALID_HANDLE_VALUE) { cout << "Error: " << GetLastError(); } char data[1024]; DWORD numRead; ConnectNamedPipe(pipe, NULL); ReadFile(pipe, data, 1024, &numRead, NULL); if (numRead > 0) cout << data << endl; CloseHandle(pipe); } Writer (client): HANDLE pipe = CreateFile(GetPipeName(), GENERIC_READ | GENERIC_WRITE, 0, NULL, OPEN_EXISTING, 0, NULL); if (pipe == INVALID_HANDLE_VALUE) { cout << "Error: " << GetLastError(); } string message = "Hi"; cout << message.length(); DWORD numWritten; WriteFile(pipe, message.c_str(), message.length(), &numWritten, NULL); return 0; The server blocks until it gets a connected client, reads what the client writes, and then sets itself up for a new connection, ad infinitum. Thanks for the help, all!
You must use CreateNamedPipe() to create the server end of a named pipe. Be sure to specify a non-zero buffer size, zero (documented by MSDN as 'use system default buffer size') doesn't work. MSDN has decent samples for a multi-threaded client&server.
2,640,742
2,640,780
C++ How to copy text in string (from i.e. 8 letter to 12 letter)
This is not homework, I need this for my program :) I ask this question, because I searched for this in Google about 1 hour, and I don't find anything ready to run. I know that is trivial question, but if you will help me, you will make my day :) Question: How to copy text in string (from for example 8 letter to 12 letter) and send to other string? I have string: string s = "RunnersAreTheBestLovers"; and I want text from 8 letter to 17 letter in next string Alice90
I assume you're trying to get the 8th - 17th characters into another string. If so you should use the substring method string::substr; note that it takes a 0-based start position and a length, not an end position: string s = "RunnersAreTheBestLovers"; string other = s.substr(8, 9);
2,640,823
2,640,897
Is it possible to create a CImageList with alpha blending transparency?
I would like to know if it is possible to create a CImageList with alpha blending transparency. Sample code that creates a CImageList with ugly transparency (no alpha blending): CGdiPlusBitmapResource m_pBitmap; m_pBitmap.Load(IDB_RIBBON_FILESMALL,_T("PNG"),AfxGetResourceHandle()); HBITMAP hBitmap; m_pBitmap.m_pBitmap->GetHBITMAP(RGB(0,0,0),&hBitmap ); CImageList *pList=new CImageList; CBitmap bm; bm.Attach(hBitmap); pList->Create(16, 16, ILC_COLOR32 | ILC_MASK, 0, 4); pList->Add(&bm, RGB(255,0,255));
Don't use the ILC_MASK flag (from MSDN): Using 32 Bit Anti-Aliased Icons Windows XP imagelists, which are collections of images used with certain controls such as list-view controls, support the use of 32-bit anti-aliased icons and bitmaps. Color values use 24 bits, and 8 bits are used as an alpha channel on the icons. To create an imagelist that can handle a 32-bits-per-pixel (bpp) image, call the ImageList_Create function passing in an ILC_COLOR32 flag.
2,641,154
2,641,171
TRY/CATCH_ALL vs try/catch
I've been using c++ for a while, and I'm familiar with normal try/catch. However, I now find myself on Windows, coding in VisualStudio for COM development. Several parts of the code use things like: TRY { ... do stuff } CATCH_ALL(e) { ... issue a warning } END_CATCH_ALL; What's the point of these macros? What benefit do they offer over the built-in try/catch? I've tried googling this, but "try vs TRY" is hard to search for.
It's an MFC macro: http://msdn.microsoft.com/en-us/library/t8dwzac0%28VS.71%29.aspx This page says they're a remnant from MFC 1.0 - use normal C++ exceptions in new code: MFC versions lower than 3.0 did not support the C++ exception mechanism. MFC provided macros to deal with exceptions.
2,641,560
2,641,577
Unwanted SDL_QUIT Event on mouseclick
I'm having a slight problem with my SDL/Opengl code, specifically, when i try to do something on a mousebuttondown event, the program sends an sdl_quit event to the stack, closing my application. I know this because I can make the program work (sans the ability to quit out of it :| ) by checking for SDL_QUIT during my event loop, and making it do nothing, rather than quitting the application. If anyone could help make my program work, while retaining the ability to, well, close it, it'd be much appreciated. Code attached below: #include "SDL/SDL.h" #include "SDL/SDL_opengl.h" void draw_polygon(); void init(); int main(int argc, char *argv[]) { SDL_Event Event; int quit = 0; GLfloat color[] = { 0.0f, 0.0f, 0.0f }; init(); glColor3fv (color); glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0); draw_polygon(); while(!quit) { while(SDL_PollEvent( &Event )) { switch(Event.type) { case SDL_MOUSEBUTTONDOWN: for (int i = 0; i <= sizeof(color); i++) { color[i] += 0.1f; } glColor3fv ( color ); draw_polygon(); break; case SDL_KEYDOWN: switch(Event.key.keysym.sym) { case SDLK_ESCAPE: quit = 1; break; default: break; } default: break; } } } SDL_Quit(); return 0; } void draw_polygon() { glBegin(GL_POLYGON); glVertex3f (0.25, 0.25, 0.0); glVertex3f (0.75, 0.25, 0.0); glVertex3f (0.75, 0.75, 0.0); glVertex3f (0.25, 0.75, 0.0); glEnd(); SDL_GL_SwapBuffers(); } void init() { SDL_Init(SDL_INIT_EVERYTHING); SDL_SetVideoMode( 640, 480, 32, SDL_OPENGL ); glClearColor (0.0, 0.0, 0.0, 0.0); glMatrixMode( GL_PROJECTION | GL_MODELVIEW ); glLoadIdentity(); glClear (GL_COLOR_BUFFER_BIT); SDL_WM_SetCaption( "OpenGL Test", NULL ); } If it matters in this case, I'm compiling via the included compiler with Visual C++ 2008 express.
You're missing a break statement in the end of your SDL_MOUSEBUTTONDOWN event handler, resulting in unintentional fall-through to the SDL_KEYDOWN handler. Just add a break after the call to draw_polygon() and you're good to go; you should also add a break to the end of your SDL_KEYDOWN handler to avoid falling through into the default case, though that's not a problem now since the default case doesn't do anything, but if it does in the future, you'll have another bug. EDIT You also have a buffer overflow. sizeof(color) is the total size in bytes of the array, which in this case is 12 (3 values times 4 bytes/value). So, you're looping 13 times (12, plus 1 for using <= instead of <) instead of 3 when changing the color. It just happens that the compiler has laid out the local variable quit immediately after color, so you're accidentally writing out over quit, plus other unknown data on the stack. The fix for this is to divide by the size of the array member when calculating the number of members. This is often done using a macro: #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0])) ... for (int i = 0; i < ARRAY_SIZE(color); i++) You could also just hardcode the number of color components (3), which isn't likely to change -- you're also hardcoding this implicitly in the call to glColor3fv().
2,641,639
2,641,690
Fstream's tellg / seekg returning higher value than expected
Why does this fail, it's supposed to be simple and work ? fisier.seekg(0, ios::end); long lungime = fisier.tellg(); This returns a larger value than that of the file resulting in a wrong char *continut = new char[lungime]; Any idea what the problem could be ? I also tried counting to the end of the file one char at a time, that rendered the same result, a higher number than expected. But upon using getline() to read one line at a time, it works, there are no extra spaces...
At a guess, you're opening the file in translated mode, probably under Windows. When you simply seek to the end of the file, the current position doesn't take the line-end translations into account. The end of a line (in the external file) is marked with the pair "\r\n" -- but when you read it in, that's converted to just a "\n". When you use getline to read one line at a time, the \ns all get discarded as well, so even on a system (e.g. Unix/Linux) that does no translation from external to internal representation, you can still expect those to give different sizes. Then again, you should really forget that new [] exists at all. If you want to read an entire file into a string, try something like this: std::stringstream continut; continut << fisier.rdbuf(); continut.str() is then an std::string containing the data from the file.
2,641,855
2,641,970
standard rectangle class
I have a project that has a GUI (written in QT) and a command-line version. I made use of the rectangle class included in QT: QRect. I would like to break the command-line version's dependency on QT, so I need a drop-in rectangle class that supports intersection and union. I could write one, but I'd prefer including one if possible. Any ideas?
If you're going to find one to include, it's probably part of another dependency. So your best bet is to try to write your own. Now is a good time to practice making a template class. :) template <typename T> struct point { // or maybe you'd prefer to make these private T x; T y; }; template <typename T> struct rectangle { public: typedef point<T> point_type; bool contains(const point_type& pPoint) { return !(pPoint.x < topleft.x) && (pPoint.x < bottomright.x) && !(pPoint.y < topleft.y) && (pPoint.y < bottomright.y); } T width(void) const { return bottomright.x - topleft.x; } // and more stuff // or maybe you'd prefer to make these private, nor // is this the only way to represent a rectangle. point_type topleft; point_type bottomright; }; Sorry it's not the answer you're expecting. Just about your design, I hope you're not taking your GUI version, performing a copy, then modifying it into a console version. Better would be to make a library; then GUI versus console is merely a matter of presentation.
2,641,907
31,479,261
Do variable references (aliases) incur runtime cost?
Maybe this is a compiler specific thing. If so, how about for gcc (g++)? If you use a variable reference/alias like this: int x = 5; int& y = x; y += 10; Does it actually require more cycles than if we didn't use the reference. int x = 5; x += 10; In other words, does the machine code change, or does the "alias" happen only at the compiler level? This may seem like a dumb question, but I am curious. Especially in the case where maybe it would be convenient to temporarily rename some member variables just so that the math code is a little easier to read. Sure, we're not exactly talking about a bottleneck here... but it's something that I'm doing and so I'm just wondering if there is any 'actual' difference... or if it's only cosmetic.
I compared 2 programs on Gnu/Linux. Only GCC output is shown below, but clang results lead to identical conclusions. GCC version: 4.9.2 Clang version: 3.4.2 The programs 1.cpp #include <stdio.h> int main() { int x = 3; printf("%d\n", x); return 0; } 2.cpp #include <stdio.h> int main() { int x = 3; int & y = x; printf("%d\n", y); return 0; } The test Attempt 1: No optimizations gcc -S --std=c++11 1.cpp gcc -S --std=c++11 2.cpp 1.cpp's resulting assembly was shorter. Attempt 2: Optimizations on gcc -S -O2 --std=c++11 1.cpp gcc -S -O2 --std=c++11 2.cpp The resulting assembly was completely identical. The assembly output 1.cpp, no optimization .file "1.cpp" .section .rodata .LC0: .string "%d\n" .text .globl main .type main, @function main: .LFB0: .cfi_startproc pushq %rbp .cfi_def_cfa_offset 16 .cfi_offset 6, -16 movq %rsp, %rbp .cfi_def_cfa_register 6 subq $16, %rsp movl $3, -4(%rbp) movl -4(%rbp), %eax movl %eax, %esi movl $.LC0, %edi movl $0, %eax call printf movl $0, %eax leave .cfi_def_cfa 7, 8 ret .cfi_endproc .LFE0: .size main, .-main .ident "GCC: (Debian 4.9.2-10) 4.9.2" .section .note.GNU-stack,"",@progbits 2.cpp, no optimization .file "2.cpp" .section .rodata .LC0: .string "%d\n" .text .globl main .type main, @function main: .LFB0: .cfi_startproc pushq %rbp .cfi_def_cfa_offset 16 .cfi_offset 6, -16 movq %rsp, %rbp .cfi_def_cfa_register 6 subq $16, %rsp movl $3, -12(%rbp) leaq -12(%rbp), %rax movq %rax, -8(%rbp) movq -8(%rbp), %rax movl (%rax), %eax movl %eax, %esi movl $.LC0, %edi movl $0, %eax call printf movl $0, %eax leave .cfi_def_cfa 7, 8 ret .cfi_endproc .LFE0: .size main, .-main .ident "GCC: (Debian 4.9.2-10) 4.9.2" .section .note.GNU-stack,"",@progbits 1.cpp, with optimizations .file "1.cpp" .section .rodata.str1.1,"aMS",@progbits,1 .LC0: .string "%d\n" .section .text.unlikely,"ax",@progbits .LCOLDB1: .section .text.startup,"ax",@progbits .LHOTB1: .p2align 4,,15 .globl main .type main, @function main: .LFB12: .cfi_startproc subq $8, %rsp 
.cfi_def_cfa_offset 16 movl $3, %esi movl $.LC0, %edi xorl %eax, %eax call printf xorl %eax, %eax addq $8, %rsp .cfi_def_cfa_offset 8 ret .cfi_endproc .LFE12: .size main, .-main .section .text.unlikely .LCOLDE1: .section .text.startup .LHOTE1: .ident "GCC: (Debian 4.9.2-10) 4.9.2" .section .note.GNU-stack,"",@progbits 2.cpp, with optimizations .file "1.cpp" .section .rodata.str1.1,"aMS",@progbits,1 .LC0: .string "%d\n" .section .text.unlikely,"ax",@progbits .LCOLDB1: .section .text.startup,"ax",@progbits .LHOTB1: .p2align 4,,15 .globl main .type main, @function main: .LFB12: .cfi_startproc subq $8, %rsp .cfi_def_cfa_offset 16 movl $3, %esi movl $.LC0, %edi xorl %eax, %eax call printf xorl %eax, %eax addq $8, %rsp .cfi_def_cfa_offset 8 ret .cfi_endproc .LFE12: .size main, .-main .section .text.unlikely .LCOLDE1: .section .text.startup .LHOTE1: .ident "GCC: (Debian 4.9.2-10) 4.9.2" .section .note.GNU-stack,"",@progbits Conclusion There is no runtime cost when it comes to optimized GCC output. Same goes with clang (tested with version 3.4.2): when optimizations are on, the generated assembly code is identical in both programs.
2,642,022
2,642,029
abstract class in C++
I have a class derived from an abstract class. The code is below. I have a FishTank class which is derived from an Aquarium, and Aquarium is derived from Item. My question is: should I put the definition of virtual int minWidth() const = 0; in Aquarium again, or is the code below sufficient? class Item{ public: virtual int minWidth() const = 0; }; class Aquarium: public Item{ public: virtual int calWidth() = 0; // Pure virtual function. }; class FishTank : public Aquarium{ public: FishTank(int base1, int base2, int height); ~FishTank(); int calWidth(); int minWidth(); };
There's no reason to do it again. It only serves to waste space and give you the opportunity to get compile errors from typos. :) Once you inherit, it's just like it had been there anyway. However, your FishTank never actually overrides it, because the signatures don't match! Why? You're missing const in FishTank: int minWidth() const; // <-- const!