1,193,693
1,194,670
Behaviour of static variables in dynamically linked libraries (C/C++)
As discussed here, a static variable is stored in the .BSS or .DATA segment. Where is this memory stored if the static variable is inside a function that's in a dynamically linked library? Does storage for this variable get allocated in the .BSS or .DATA segment of the linking process at link time?
The static variable is going to end up in the .BSS or .DATA section of the DLL file. The executable that links to the DLL probably won't even know it exists. When the EXE loads the DLL, the system sets up the DLL's data sections for it and then calls DllMain(). That's when the DLL's statics come into existence and get initialized.
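A platform-neutral sketch of the "comes into existence and gets initialized" part. The function below is a hypothetical stand-in for something a DLL would export; nothing here is Windows-specific:

```cpp
#include <cassert>

// Hypothetical stand-in for a function exported from a DLL. The static
// lives in the .DATA/.BSS of the module that defines it; a file-scope
// static is set up around load time (DllMain), while a function-local
// static like this one is initialized on the first call.
int call_count() {
    static int count = 0;   // storage belongs to this module, not the caller
    return ++count;
}
```

The executable calling `call_count()` never sees the variable itself, only the behavior it produces.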
1,193,748
1,193,769
How to create a BMP file from raw byte[] in Java
I have a C++ application which communicates with a camera and fetches raw image data. I then have a byte[] in C++, which I want to send to Java with JNI. However, I need to convert the raw byte[] to a real file format (.bmp was my first choice). I can easily do this if I write it from C++ to a file on the hard drive, using BITMAPFILEINFO and BITMAPHEADERINFO, but I do not know how one would go about sending the entire format to Java. Then I thought about sending only the raw byte[] data using JNI and then converting it to .bmp, but I can't seem to find any good library for doing this in Java. What would be my best choice? Converting the image in C++ and then sending it using JNI, or sending the RAW data to Java and then converting it to .bmp? How would I most easily achieve this?
It's just two lines in Java 1.5:

BufferedImage image = ImageIO.read( new ByteArrayInputStream( byteArray ) );
ImageIO.write(image, "BMP", new File("filename.bmp"));

Java (on Windows) knows how to export jpg, png and bmp, as far as I know.
1,193,756
1,202,685
Right side Explorer context menu (IID_IContextMenu?)
One of my applications has a Windows Explorer-like file list control. When the user right-clicks on a file, I can successfully show the Explorer context menu (with some extra options of my own). However, if the user right-clicks on the list control itself (no items selected), then I'm unable to show the 'correct' context menu. I'd like to show the one you see in Windows Explorer on the right side rather than the one in the tree on the left side. I've tried a bunch of variations in my calls to GetUIObjectOf, I've searched Google, etc., but I haven't found a solution yet. Any help?
Call IShellFolder::CreateViewObject() to get the IContextMenu for a folder itself. IShellFolder::GetUIObjectOf() is meant for retrieving interfaces for individual items inside of a folder, not for a folder itself. This is stated in MSDN's documentation:

IShellFolder::CreateViewObject Method

This method is also used to request objects that expose one of several optional interfaces, including IContextMenu or IExtractIcon. In this context, CreateViewObject is similar in usage to IShellFolder::GetUIObjectOf. However, you call IShellFolder::GetUIObjectOf to request an object for one of the items contained by a folder. Call IShellFolder::CreateViewObject to request an object for the folder itself.
1,194,090
1,194,101
nested struct with array
Please help me to create a nested struct with an array. How do I fix this code?

class CMain {
public:
    CMain();
    ~CMain();
private:
    struct {
        CCheckSum() : BufferSize(500) { memset(Buffer, 0, BufferSize); }
        const int BufferSize;
        char Buffer[BufferSize];
    } SmallBuffer;
};

Thanks.
Static arrays need to know their length at compile time, or you need to dynamically allocate the memory:

struct CCheckSum {
    CCheckSum() : BufferSize(500), Buffer(new char[BufferSize]) {
        memset(Buffer, 0, BufferSize);
    }
    ~CCheckSum() { delete[] Buffer; } // Note the use of delete[]!
    const int BufferSize;
    char* Buffer;
} SmallBuffer;

You're probably better off using std::vector, though:

struct CCheckSum {
    CCheckSum() : Buffer(500, 0) {}
    std::vector<char> Buffer; // A std::vector keeps track of its size internally
} SmallBuffer;
1,194,371
1,197,117
Which one to use when static_cast and reinterpret_cast have the same effect?
Possible Duplicate: Should I use static_cast or reinterpret_cast when casting a void* to whatever

Often, especially in Win32 programming, it is required to cast from one opaque type to another. For example:

HFONT font = cast_here<HFONT>( ::GetStockObject( SYSTEM_FONT ) );

Both static_cast and reinterpret_cast are applicable here and have exactly the same effect, since HFONT is a pointer to a dummy struct specifically introduced for defining HFONT, and HGDIOBJ, returned by GetStockObject(), is a void* pointer. Which one - static_cast or reinterpret_cast - is preferable?
Everybody has noted that reinterpret_cast<> is more dangerous than static_cast<>. This is because reinterpret_cast<> ignores all type information and just assigns a new type without any real processing; as a result, the processing done is implementation-defined (though usually the bit patterns of the pointers are the same).

The thing everybody fails to mention is that reinterpret_cast<> is a means to document your program. It tells somebody reading the code that we had to compromise something, and as a result we have ended up with a dangerous cast: be careful when you mess with this code. Use reinterpret_cast<> to highlight these dangerous areas in the code.

When casting from a void* there is no type information for the cast to work with. So you are either doing an invalid cast or casting back to the original type that was previously cast into a void*. Any other type of cast is going to end up with some undefined behavior. This is the perfect situation to use reinterpret_cast<>, as the standard guarantees that casting a pointer to void* and back to its original type using reinterpret_cast<> will work. And by using reinterpret_cast<> you are pointing out to the humans that come along afterwards that something bad is happening here.
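A minimal sketch of the round trip being discussed (the struct and helper names are mine, chosen to stand in for an opaque handle type):

```cpp
#include <cassert>

// Stand-in for an opaque handle type such as the dummy struct behind HFONT.
struct Dummy { int v; };

// Losing the type: any object pointer converts to void* implicitly or
// via static_cast.
void* to_opaque(Dummy* p) { return static_cast<void*>(p); }

// Recovering the type: reinterpret_cast here flags the "dangerous" spot
// for human readers; static_cast<Dummy*>(p) would be equally valid for a
// void* source, and both must return the original pointer.
Dummy* from_opaque(void* p) { return reinterpret_cast<Dummy*>(p); }
```

The round trip is only guaranteed when the void* really did come from a pointer of the type you cast back to.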
1,194,453
1,194,715
Fetching rows in a MySQL database table using MySQL C API and C++
I'm confused when trying to fetch table rows in MySQL using C++ with the MySQL C API. I can do it easily in PHP, but C++ is a strongly-typed language, so we also need to take care of the dirty process. This is how I did it in PHP:

$data = array();
$i = 0;
$query = mysql_query("SELECT * FROM `my_table`");
while($fetch = mysql_fetch_array($query)) {
    $data[$i] = $fetch['columntobefetched'];
    $i++;
}

But how do I do the same in C++ with the MySQL API? Here's my code so far... with a confusing dead end... x__x

MYSQL *sqlhnd = mysql_init(NULL);
mysql_real_connect(sqlhnd, "server", "user", "pass", "database", port, NULL, 0);

mysql_query(sqlhnd, "SELECT * FROM `my_table`");
MYSQL_RES *confres = mysql_store_result(sqlhnd);
int totalrows = mysql_num_rows(confres);
int numfields = mysql_num_fields(confres);
MYSQL_FIELD *mfield;

while((row = mysql_fetch_row(confres)))
{
    for(i = 0; i < numfields; i++)
    {
        while(mfield = mysql_fetch_field(confres))
        {
            mfield->//??? I'm dead
        }
    }
}

Basically I wanted to get a value from a field in the database table and store it in a variable. Any kind of help would be appreciated :) Thanks
In the MySQL C API, mysql_fetch_row returns a MYSQL_ROW object, which is essentially an array of values in the current row. So your code should be something like:

mysql_query(sqlhnd, "SELECT * FROM `my_table`");
MYSQL_RES *confres = mysql_store_result(sqlhnd);
int totalrows = mysql_num_rows(confres);
int numfields = mysql_num_fields(confres);
MYSQL_ROW row;

while((row = mysql_fetch_row(confres)))
{
    for(int i = 0; i < numfields; i++)
    {
        char *val = row[i];
        // do something with val...
    }
}

Better yet, don't do a "SELECT * FROM my_table" in a program. It would be much better to name the fields you expect, so that you can be sure of the order of the fields returned.
1,194,479
1,194,685
Write your own memory manager
I'd like to write my own memory manager. The target language is C++, and the goal of the memory manager is mainly to help debugging. It should detect double frees, memory overwrites, and so on. And of course, I'd like to learn about memory management. Can someone give me a hint on resources where I can learn how to write such a memory manager? Thank you for your help.
I think this is a very interesting project that you might learn a lot from. Here's a little bit of reading material on the subject of memory management. It goes over some of the basics of memory management, leads into a simple malloc implementation, then touches on a couple more advanced topics: Inside memory management.

Also, since you mention that you want to make a memory manager that is useful for debugging, you may want to consider reading this paper by the developers of Memcheck/Valgrind (a wonderful memory debugger for Linux): How to Shadow Every Byte of Memory Used by a Program. It details how they track all the metadata (whether or not a particular byte is defined, initialized, etc.) in Memcheck. It's a bit detailed, but it's good reading on how to make a scalable and efficient dynamic memory checker.
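As a taste of the bookkeeping such a manager does, here is a deliberately tiny sketch (all names are mine; a real manager would also record call sites, add guard bytes around each block, remember freed pointers to catch double frees, and so on):

```cpp
#include <cstdlib>
#include <cassert>

// Toy illustration of debugging-allocator bookkeeping: count live
// allocations so that, at shutdown, a nonzero count signals a leak.
// Real tools like Memcheck shadow every byte with metadata instead.
static int live_allocations = 0;

void* debug_alloc(std::size_t n) {
    ++live_allocations;
    return std::malloc(n);
}

void debug_free(void* p) {
    if (p) --live_allocations;   // freeing NULL is a no-op, as with free()
    std::free(p);
}
```

From here one would grow the design: store each pointer's size and origin in a table, poison freed memory, and check guard bytes on free to detect overwrites.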
1,194,709
1,194,795
Refactoring C++ in Eclipse CDT
I've installed the Galileo release (Eclipse 3.5/CDT 5.1) in hopes of utilizing the better refactoring support mentioned in "What is the state of C++ refactor support in Eclipse?" However, I do not see all the mentioned refactoring options listed. I don't see any plug-ins related to refactoring on http://download.eclipse.org/tools/cdt/releases/galileo. Attempts to add the plugin directly from the refactoring site (http://ifs.hsr.ch/cdtrefactoring/updatesite/) fail due to outdated dependencies, so I'm assuming the newest release should have the latest refactoring abilities. Am I correct in this? How do others refactor in Eclipse CDT? Do people use other tools/plugins?
You should install CDT 6.0. However, my guess is that the options mentioned in the question you linked are not yet ready for mainline. My CDT offers Rename, Extract Variable/Constant/Function, Hide Method. From those, I only use Rename regularly, the others do not yet seem finished. One of the problems with such tools for C++ is that the language is way more complex than other languages (think templates, macros etc.) so implementing such a tool needs tremendously more effort than for, say, Java.
1,194,842
1,196,195
Is it safe to serialize a raw boost::variant?
boost::variant claims that it is a value type. Does this mean that it's safe to simply write out the raw representation of a boost::variant and load it back later, as long as it only contains POD types? Assume that it will be reloaded by code compiled by the same compiler, and same version of boost, on the same architecture. Also, (probably) equivalently, can boost::variant be used in shared memory?
Regarding serialisation: It should work, yes. But why don't you use boost::variant's visitation mechanism to write out the actual type contained in the variant?

struct variant_serializer : boost::static_visitor<void> {
    template <typename T>
    typename boost::enable_if<boost::is_pod<T>, void>::type
    operator()( const T & t ) const {
        // ... serialize here, e.g.
        std::cout << t;
    }
};

int main() {
    const boost::variant<int,char,float,double> v( '1' );
    variant_serializer s;
    boost::apply_visitor( s, v );
    return 0;
}

Regarding shared memory: boost::variant does not perform heap allocations, so you can place it into shared memory just like an int, assuming proper synchronisation, of course.

Needless to say, as you said, the above is only valid if the variant can only contain POD types.
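For later readers: C++17's std::variant offers the same visitation idea in the standard library. A minimal sketch along the lines of the visitor above (the type list and the serialize helper are chosen for illustration, not taken from the question):

```cpp
#include <variant>
#include <string>
#include <cassert>

// A variant restricted to simple value types, as in the question.
using Value = std::variant<int, double>;

// Visit the actual held type instead of dumping raw bytes; the generic
// lambda plays the role of boost::static_visitor's operator().
std::string serialize(const Value& v) {
    return std::visit([](auto x) { return std::to_string(x); }, v);
}
```

The same caveat applies: this only sidesteps the raw-representation question; it does not make dumping the variant's bytes any safer.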
1,194,941
1,195,074
Does SQL Server Compact Edition depend on the .net framework?
My company is looking at using SQL Server Compact Edition 3.5 as a datastore for a few programs that have been developed over the last several years. Most of these programs were written in C++ and are installed on some older machines with Windows 2000, XP, or Vista installed. Does SQL Server Compact Edition 3.5 depend on the .NET framework? In other words, can I package and deploy a C++ program with a SQL Server Compact Edition 3.5 database on older versions of Windows without needing to install the .NET framework? If it does depend on the .NET framework, which version of the framework? Thank you!
SQL Server Compact Edition does not depend on the .NET framework and can be deployed and used with a purely unmanaged, C++ client using the OLEDB provider. Most of the management and development tools do however rely on the .NET framework and most development is probably done using the ADO.NET provider, which does require the framework. This article goes into the subject a bit further. http://msdn.microsoft.com/en-us/library/ms172914.aspx
1,195,206
1,195,221
Is there a Java equivalent or methodology for the typedef keyword in C++?
Coming from a C and C++ background, I found judicious use of typedef to be incredibly helpful. Do you know of a way to achieve similar functionality in Java, whether that be a Java mechanism, pattern, or some other effective way you have used?
Java has primitive types, objects and arrays and that's it. No typedefs.
1,195,572
1,195,611
compiling "standard" C++ in visual studio (non .net)
(This is probably a very beginner question and I may be missing something obvious.) I've just moved from my Mac back to Windows, and I'm trying to get set up with C++. I have Visual Studio 2008: how do I compile "normal" non-.NET/CLR C++? I want a command line application, and the only project that seemed fit was "CLR console application." My stuff works and compiles fine with Xcode or Dev-C++, but VS gives me 57 errors and seems to only want me to do .NET stuff. I have to include "stdafx.h" before it will even do anything. Can I just use plain old C++, with std::string and vectors, without the .NET stuff? Or do I need Dev-C++? Thanks guys!
Select the General -> Empty Project project type.
1,195,656
1,195,679
Integer storage - Hexadecimal/Octal
I understand that integers are stored in binary notation, but I was wondering how this affects the reading of them. For example, assuming

cin.unsetf(ios::dec);
cin.unsetf(ios::hex);
cin.unsetf(ios::oct);

the user inputs 0x43, 0123, and 65, which are stored as integers. Now assume that the program wants to recognize these values as hex, oct, or dec and does something like this:

void number_sys(int num, string& s)
{
    string number;
    stringstream out;
    out << num;
    number = out.str();
    if(number[0] == '0' && (number[1] != 'x' && number[1] != 'X'))
        s = "octal";
    else if(number[0] == '0' && (number[1] == 'x' || number[1] == 'X'))
        s = "hexadecimal";
    else
        s = "decimal";
}

The function will read all of the integers as decimal. I put in some test code after the string conversion to output the string, and the string is the number in decimal form. I was wondering if there is a way for integers to keep their base notation.

Of course you could input the numbers as strings and test that way, but then there is the problem of reading the string back as an int. For example:

string a = "0x43";
int num = atoi(a.c_str());
cout << num; // will output 43

It seems like keeping/converting base notation can get very tricky. As an added problem, the hex, dec, and oct manipulators wouldn't even help with the issue shown above, since the integer is being stored completely incorrectly; it's not even converting to a decimal.
Integers (and all other data) are not stored using "binary notation"; they are stored as binary numbers. And no, there is no way for integers to retain their input format (which is what you are actually asking).
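A small sketch of that point: the same int prints in any base you ask for, because the base is a property of the textual form, chosen again at output time; it is not stored with the number. (The helper name is illustrative.)

```cpp
#include <sstream>
#include <string>
#include <cassert>

// 0x43, 0103 and 67 are all the same stored value; only the rendering
// differs, and the caller picks it with stream manipulators.
std::string show_all_bases(int n) {
    std::ostringstream out;
    out << std::dec << n << ' ' << std::hex << n << ' ' << std::oct << n;
    return out.str();  // e.g. "67 43 103" for n == 67
}
```

This is why the question's number_sys() saw everything as decimal: by the time the int reached it, the original notation was gone.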
1,195,675
1,195,690
convert a char* to std::string
I need to use an std::string to store data retrieved by fgets(). To do this I need to convert the char* return value from fgets() into an std::string to store in an array. How can this be done?
std::string has a constructor for this:

const char *s = "Hello, World!";
std::string str(s);

Note that this construct deep-copies the character array at s, and s should not be a null pointer, or else behavior is undefined.
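Since the question mentions fgets(), here is a minimal sketch of that exact path (the helper name read_line is mine, not a standard API): read into a char buffer, then let the std::string constructor deep-copy it.

```cpp
#include <cstdio>
#include <string>
#include <cassert>

// Read one line from f into a fixed buffer, then copy it into a
// std::string; the buffer can safely go out of scope afterwards.
std::string read_line(std::FILE* f) {
    char buf[256] = {0};
    if (std::fgets(buf, sizeof buf, f) == nullptr)
        return std::string();          // EOF or error: empty string
    return std::string(buf);           // deep copy, newline included
}
```

Note that fgets() keeps the trailing '\n' (when the line fit in the buffer), so strip it afterwards if you don't want it stored.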
1,195,979
1,196,011
How to interpret numbers correctly (hex, oct, dec)
I'm trying to write a program that takes input of hexadecimals, octals, and decimals, stores them in integer variables, and outputs them along with their conversion to decimal form. For example:

User inputs: 0x43, 0123, 65

Program outputs:
0x43 hexadecimal converts to 67 decimal
0123 octal converts to 83 decimal
65 decimal converts to 65 decimal

So obviously I need a way to interpret the numbers, but I'm not sure how to go about doing it. I've tried various methods, such as reading them into a function and converting them into a string, and vice versa (see here for code examples), but interpreting the numbers always requires conversion to some format that trashes the original input.

The only thing I can think of is overloading a >> operator that reads a character at a time and, if it sees 0x or 0 at the beginning of the input, stores the whole input into a string before it is read into an int. Then the program would somehow have to determine the right manipulator during output. Not sure if there is a simpler way to do this; any help is appreciated.

Edit: This has been solved, but I decided to post the code in case anyone is interested.

#include "std_lib_facilities.h"

void number_sys(string num, string& s)
{
    if(num[0] == '0' && (num[1] != 'x' && num[1] != 'X'))
        s = "octal";
    else if(num[0] == '0' && (num[1] == 'x' || num[1] == 'X'))
        s = "hexadecimal";
    else
        s = "decimal";
}

int main()
{
    cout << "Input numbers in hex, dec, or oct. Use 0xx to cancel.\n";

    string a;
    while(cin >> a) {
        if(a == "0xx") break;
        string atype;
        number_sys(a, atype);
        int anum = strtol(a.c_str(), NULL, 0);
        cout << a << setw(20 - a.length()) << atype << setw(20) << "converts to"
             << setw(10) << anum << setw(10) << "decimal\n";
    }

    keep_window_open();
}
Take a look at the strtol function.

char * args[3] = {"0x43", "0123", "65"};

for (int i = 0; i < 3; ++i)
{
    long int value = strtol(args[i], NULL, 0);
    printf("%s converts to %ld decimal\n", args[i], value);
}

Outputs:

0x43 converts to 67 decimal
0123 converts to 83 decimal
65 converts to 65 decimal
1,196,000
1,196,054
How to keep items sorted based on dynamic attribute?
I'm using an STL std::multiset<> as a sorted list of pointers. The sort order is determined by a property of the items being pointed to, something along the lines of this simplified example:

struct A { int x; };

struct CompareAPointers {
    bool operator()(const A* lhs, const A* rhs) const {
        return lhs->x < rhs->x;
    }
};

std::multiset<A*, CompareAPointers> sorted_set;

The complication is that the values of the property used to sort the set can change (you can change A.x in the example above), which can make the sort order incorrect:

A a1, a2;
a1.x = 1;
a2.x = 2;
sorted_set.insert(&a1);
sorted_set.insert(&a2);
a1.x = 3;

I'm able to keep the list sorted by erasing and reinserting elements when the relevant attribute changes, but the bookkeeping is getting to be a bit of a pain. I feel like I'm going about this all wrong. Can anyone suggest a better way to keep a list sorted when the sort order can dynamically change? The changes occur in predictable ways at predictable times, but my current approach just feels wrong.
Boost Multi-Index supports sorting anything you want and supports changing the fields the list gets ordered by, although you can't just type a1.x = 1 anymore; instead, you have to use MultiIndex::replace(). I can't think of a faster/more natural way of doing this, as deleting and reinserting the element would have to be done anyway.
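If Boost is not available, the erase-and-reinsert bookkeeping the asker describes can at least be confined to one helper. A sketch (the helper name is mine; note that std::multiset::erase(key) removes all elements with an equivalent key, so this assumes distinct keys, or an iterator-based erase otherwise):

```cpp
#include <set>
#include <cassert>

struct A { int x; };

struct CompareAPointers {
    bool operator()(const A* lhs, const A* rhs) const {
        return lhs->x < rhs->x;
    }
};

using SortedSet = std::multiset<A*, CompareAPointers>;

// Centralize the re-sort dance: erase while the old key is still in
// effect, mutate, then reinsert under the new key.
void update_key(SortedSet& s, A* item, int new_x) {
    s.erase(item);     // must happen before the key changes
    item->x = new_x;   // now the mutation is safe
    s.insert(item);    // re-placed according to the new key
}
```

Mutating the key while the element is still in the container is the thing that must never happen; funneling every change through one function is what Boost Multi-Index's replace() enforces for you.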
1,196,039
1,196,130
How to use MS SOAP toolkit?
I know that the Microsoft SOAP toolkit has been deprecated for a while now (.NET has all this stuff built in), but I was wondering if anyone has a quick bit of info on setting up a simple app that uses it. I was referred to http://www.devarticles.com/c/a/Cplusplus/Building-A-SOAP-Client-With-Visual-C-plus/ but the service in the example is no longer functioning, and I can't seem to find any documentation online. I've looked into gSOAP, but it seems overly complicated for what I have to do, and despite not being able to get it to work, the SOAP toolkit seems (relatively) elegant.
Don't do it. It's 5 years deprecated, and it was 2 years out of date when it became deprecated. Don't. Assuming you are running on Windows (since you mentioned the MS SOAP Toolkit), use the imminently-arriving WWSAPI instead. Also see this post: http://blogs.msdn.com/vcblog/archive/2009/03/31/interested-in-using-web-services-in-your-native-c-c-code.aspx EDIT: If you want something more basic, you can use MSXML and build your SOAP requests manually.
1,196,540
1,199,192
Assigning result of function which returns a Foo to a const Foo&
I've got a function which returns an object of type Foo: Foo getFoo(); I know the following will compile and will work, but why would I ever do it? const Foo& myFoo = getFoo(); To me, the following is much more readable, and doesn't force me to remember that C++ allows me to assign an r-value to a const reference: const Foo myFoo = getFoo(); What are the differences between the two? Why would I use the first over the second? Why would I use the second over the first?
Contrary to popular opinion, there is no guarantee that assigning the result of a function returning an object by value to a const reference will result in fewer copies than assigning it to the object itself. When you assign an rvalue to a const reference, the compiler may bind the reference in one of two ways. It may create a new temporary by copying the rvalue and bind the reference to that, or it may bind the reference directly to the rvalue itself. If the compiler is not able to make the 'obvious' optimization to remove the temporary and elide the copy constructor for the return value of getFoo, then how likely is it to be able to do the more efficient form of binding an rvalue to a const reference without making a new temporary?

One reason to use a const reference would be to make the function more robust against potential slicing. If the return type were actually a type derived from Foo, then assigning to a base class const reference would be guaranteed not to slice, even if the compiler did make a temporary object from the rvalue returned by the function. The compiler will also generate the correct call to the derived class destructor whether or not the destructor in the base class is virtual. This is because the type of the temporary object created is based on the type of the expression being assigned and not on the type of the reference which is being initialized.

Note that the issue of how many copies of the return value are made is entirely separate from the return value optimization and the named return value optimization. These optimizations refer to eliminating the copy of either the rvalue result of evaluating a return expression or of a named local variable into the return value of a function in the body of the function itself. Obviously, in the best possible case, both a return value optimization can be made and the temporary for the return value can be eliminated, resulting in no copies being performed on the returned object.
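One concrete, checkable piece of the discussion: binding the returned rvalue to a const reference extends the temporary's lifetime to the reference's scope, so both forms from the question remain valid to read afterwards. A minimal sketch (Foo shrunk to std::string for illustration):

```cpp
#include <string>
#include <cassert>

// Stand-in for the question's Foo getFoo(): returns by value.
std::string getFoo() { return "foo"; }

// Both of these are then legal at the call site and observe "foo":
//   const std::string& ref = getFoo();  // temporary's lifetime extended
//   const std::string  val = getFoo();  // ordinary copy/elided copy
```

Which one copies less is exactly the question the answer above says has no guaranteed answer; what is guaranteed is only that the const-reference form does not dangle.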
1,196,808
1,196,895
How to detect "Use MFC" in preprocessor
For a static Win32 library, how can I detect in the preprocessor that any of the "Use MFC" options is set? I.e.:

#ifdef ---BuildingForMFC---
....
#else
...
#endif
I have always checked for the symbol _MFC_VER being defined. This is the version number of the MFC being used, e.g. 0x0700 = MFC 7.0. It is listed under "Predefined Macros" in MSDN.
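A sketch of the check (the function wrapper is mine; in a real MFC build _MFC_VER is defined by the MFC headers, while a plain build takes the #else branch):

```cpp
#include <cassert>

// Compile-time detection of an MFC build, surfaced as a runtime value
// purely so it can be exercised; normally you would put MFC-only code
// directly inside the #ifdef.
bool built_with_mfc() {
#ifdef _MFC_VER
    return true;    // e.g. _MFC_VER == 0x0700 for MFC 7.0
#else
    return false;   // plain Win32 / portable build
#endif
}
```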
1,197,106
1,197,129
static constructors in C++? I need to initialize private static objects
I want to have a class with a private static data member (a vector that contains all the characters a-z). In Java or C#, I can just make a "static constructor" that will run before I make any instances of the class, and sets up the static data members of the class. It only gets run once (as the variables are read-only and only need to be set once), and since it's a function of the class it can access its private members.

I could add code in the constructor that checks to see if the vector is initialized, and initializes it if it's not, but that introduces many necessary checks and doesn't seem like the optimal solution to the problem.

The thought occurs to me that since the variables will be read-only, they can just be public static const, so I can set them once outside the class, but once again, it seems sort of like an ugly hack.

Is it possible to have private static data members in a class if I don't want to initialize them in the instance constructor?
To get the equivalent of a static constructor, you need to write a separate ordinary class to hold the static data and then make a static instance of that ordinary class.

class StaticStuff {
    std::vector<char> letters_;
public:
    StaticStuff() {
        for (char c = 'a'; c <= 'z'; c++)
            letters_.push_back(c);
    }
    // provide some way to get at letters_
};

class Elsewhere {
    static StaticStuff staticStuff; // constructor runs once, single instance
};
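A runnable condensation of the idiom, with the out-of-class definition the snippet above leaves implied (the letters() accessor is mine, filling in the "provide some way to get at letters_" comment):

```cpp
#include <vector>
#include <cassert>

class StaticStuff {
    std::vector<char> letters_;
public:
    StaticStuff() {
        for (char c = 'a'; c <= 'z'; c++)
            letters_.push_back(c);
    }
    const std::vector<char>& letters() const { return letters_; }
};

class Elsewhere {
public:
    static StaticStuff staticStuff;  // declaration inside the class
};

// The definition: this single instance is constructed once, before
// Elsewhere is used, playing the role of the "static constructor".
StaticStuff Elsewhere::staticStuff;
```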
1,197,111
1,197,136
Bit setting and code readability
I have an Arduino application (well, actually a library) that has a number of status flags in it. Originally I simply declared them as ints (well, uint8_t, so 8-bit unsigned chars in this case). But I could have combined them all into one integer and used bitmask operations to set and test the status.

An example of the former:

if (_shift == HIGH) {
    _shift = LOW;
} else {
    _shift = HIGH;
}

An example of the latter:

#define SHIFT_BIT 0

if (bitRead(_flags, SHIFT_BIT) == HIGH) {
    bitWrite(_flags, SHIFT_BIT, LOW);
} else {
    bitWrite(_flags, SHIFT_BIT, HIGH);
}

The former reads better, but the latter is more efficient (space and time). Should the space and time efficiency always win in this situation, or is this a kind of optimization that should only happen when needed?

(Added) For completeness, here's the Wiring definition of those bitWrite etc. macros:

#define bitRead(value, bit) (((value) >> (bit)) & 0x01)
#define bitSet(value, bit) ((value) |= (1UL << (bit)))
#define bitClear(value, bit) ((value) &= ~(1UL << (bit)))
#define bitWrite(value, bit, bitvalue) (bitvalue ? bitSet(value, bit) : bitClear(value, bit))
Check out Raymond Chen's excellent take on this issue. In summary, you need to do some detailed calculation to find out whether the latter case is actually more efficient, depending on how many objects there are vs. how many callsites actually set these states. As far as readability, it looks like you're doing this with member variables, which means you've probably encapsulated them in nice functions. In that case, I'm not as concerned with readability because at least the code for the people using the class looks nice. However, you could always encapsulate it in private functions if it's a concern.
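For readers outside the Arduino environment, the Wiring macros above can be written as plain functions, which keeps the packed-flags approach while regaining some of the readability of the first form. A sketch (function names are mine):

```cpp
#include <cstdint>
#include <cassert>

// Equivalent of bitRead: extract one flag bit from a packed status byte.
inline bool bit_read(std::uint8_t value, int bit) {
    return (value >> bit) & 0x01;
}

// Equivalent of bitWrite: set or clear one flag bit in place.
inline void bit_write(std::uint8_t& value, int bit, bool on) {
    if (on)
        value = static_cast<std::uint8_t>(value | (1u << bit));
    else
        value = static_cast<std::uint8_t>(value & ~(1u << bit));
}
```

Wrapping the operations in named (inline) functions costs nothing on a modern compiler and keeps the call sites closer to the readable first version.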
1,197,148
1,197,182
How do I use the registry?
In the simplest possible terms (I'm an occasional programmer who lacks up-to-date detailed programming knowledge), can someone explain the simplest way to make use of the registry in CodeGear C++ (2007)? I have a line of code in an old (OLD!) program I wrote which is causing a significant delay in startup:

DLB->Directory = pIniFile->ReadString("Options", "Last Directory", "no key!");

The code is making use of an ini file. I would like to be able to use the registry instead (to write variables such as the last directory the application was using). But the specifics are not important; I'd just like a generic how-to about using the registry that's specific to CodeGear C++Builder. I've googled this, but as usual with this type of thing I get lots of pages about C++Builder and a few pages about the Windows registry, but no pages that explain how to use one with the other.
Use the TRegistry class (include registry.hpp):

// Untested, but something like...
TRegistry *reg = new TRegistry;
reg->RootKey = HKEY_CURRENT_USER; // Or whatever root you want to use
reg->OpenKey("theKey", true);
reg->ReadString("theParam", defaultValue);
reg->CloseKey();

Note that opening and reading an ini file is usually pretty fast, so maybe you need to test your assumption that reading the ini file is actually your problem. I don't think that just grabbing your directory name from the registry instead is going to fix it.
1,197,340
1,197,490
Is it possible to write an impure template in C++?
Is it possible to write an impure template in C++? That is, a template that will sometimes give a different resulting type or int for the same template parameters. For example, is it possible to write a template Foo<T> where Foo<int>::type is sometimes char and at other times float? Or a template Foo<T> where Foo<double>::my_static_const_int is sometimes 10 and other times 20?
It's not possible. If you have a template that behaves that way, it violates the ODR and/or other rules, such as the rule that a specialization shall be declared before it would be instantiated. So you can't just add a specialization that would somehow change a typedef member to make it resolve to a different type for all following references. Remember that Foo<T> references a class if Foo is a class template. If the class's typedef member is defined to be one type at one point in the program, and another type at another point, then something must have gone wrong.

Here are various Standard quotes that concern this:

A specialization for a function template, a member function template, or of a member function or static data member of a class template may have multiple points of instantiation within a translation unit. A specialization for a class template has at most one point of instantiation within a translation unit. A specialization for any template may have points of instantiation in multiple translation units. If two different points of instantiation give a template specialization different meanings according to the one definition rule (3.2), the program is ill-formed, no diagnostic required.

If a template, a member template or the member of a class template is explicitly specialized then that specialization shall be declared before the first use of that specialization that would cause an implicit instantiation to take place, in every translation unit in which such a use occurs; no diagnostic is required.

(Various "noise" skipped)

[..Various entities that may be defined multiple times in the whole program..]
Given such an entity named D defined in more than one translation unit, then each definition of D shall consist of the same sequence of tokens; in each definition of D, corresponding names, looked up according to 3.4, shall refer to an entity defined within the definition of D, or shall refer to the same entity, after overload resolution (13.3) and after matching of partial template specialization (14.8.3)... If D is a template, and is defined in more than one translation unit, then the last four requirements from the list above shall apply to names from the template’s enclosing scope used in the template definition (14.6.3), and also to dependent names at the point of instantiation (14.6.2). If the definitions of D satisfy all these requirements, then the program shall behave as if there were a single definition of D. If the definitions of D do not satisfy these requirements, then the behavior is undefined.
1,197,357
1,197,374
What happens when you compare two objects?
If I wrote an operator== for class Foo (in C++), what happens, exactly? Does it compare each data member against each other?

class Foo
{
private:
    int bar;

public:
    bool operator==(const Foo& other)
    {
        return *this == other; // what?
        // is this the same as bar == bar?
    }
};
The above code will call itself recursively until you get a stack overflow (snicker) and the program crashes. The method itself (the one you wrote) is the equality operator, which is then called again explicitly within the body.

The idea behind overriding the equality operator (operator==) is that you can decide for yourself how equality should be implemented. So you would probably want to make the body of your method do something like:

return this->bar == other.bar;

which would do what you most likely want.

One of the reasons that you might not want C++ to be "smart" about equality and automatically do a member-wise comparison is that you may have very different ideas about what "equality" means than the C++ standards body. As an example, you might consider a class with a pointer member to be equal only if the pointers point to the exact same object, or you might only consider them to be equal if the pointed-to objects are member-wise equal. Or the pointers might be (note: bad practice here, but people still do it) pointing to some random address because they haven't been initialized yet, and dereferencing them will cause a crash ("you" might know this because of some flag variable, but C++ wouldn't when it tried to "helpfully" dereference them).
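A minimal corrected version of the class from the question, comparing the member as the answer suggests (the constructor is added here just so the class can be exercised):

```cpp
#include <cassert>

class Foo {
    int bar;
public:
    explicit Foo(int b) : bar(b) {}
    bool operator==(const Foo& other) const {
        return bar == other.bar;   // member-wise comparison, no recursion
    }
};
```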
1,197,566
1,197,590
Is it ever not safe to throw an exception in a constructor?
I know that it's not safe to throw exceptions from destructors, but is it ever unsafe to throw exceptions from constructors? e.g. what happens for objects that are declared globally? A quick test with gcc and I get an abort, is that always guaranteed? What solution would you use to cater for that situation? Are there any situations where constructors can throw exceptions and not leave things how we expect. EDIT: I guess I should add that I'm trying to understand under what circumstances I could get a resource leak. Looks like the sensible thing to do is manually free up resources we've obtained part way through construction before throwing the exception. I've never needed to throw exceptions in constructors before today so trying to understand if there are any pitfalls. i.e. Is this also safe? class P{ public: P() { // do stuff... if (error) throw exception } } dostuff(P *p){ // do something with P } ... try { dostuff(new P()) } catch(exception) { } will the memory allocated to the object P be released? EDIT2: Forgot to mention that in this particular case dostuff is storing the reference to P in an output queue. P is actually a message and dostuff takes the message, routes it to the appropriate output queue and sends it. Essentially, once dostuff has hold of it, it gets released later in the innards of dostuff. I think I want to put an autoptr around P and call release on the autoptr after dostuff to prevent a memory leak, would that be correct?
Throwing exceptions from a constructor is a good thing. When something fails in a constructor, you have two options: Maintain a "zombie" state, where the class exists but does nothing, or Throw an exception. And maintaining zombie classes can be quite a hassle, when the real answer should have been, "this failed, now what?". According to the Standard at 3.6.2.4: If construction or destruction of a non-local static object ends in throwing an uncaught exception, the result is to call terminate (18.6.3.3). Where terminate refers to std::terminate. Concerning your example, no. This is because you aren't using RAII concepts. When an exception is thrown, the stack will be unwound, which means all objects get their destructors called as the code gets to the closest corresponding catch clause. A pointer doesn't have a destructor. Let's make a simple test case: #include <stdexcept> #include <string> int main(void) { try { std::string str = "Blah."; int *pi = new int; throw std::runtime_error("oops"); delete pi; // cannot be reached } catch(...) { } } (Note: a bare throw; with no active exception would call std::terminate directly, with no stack unwinding, so we throw a real exception object here.) Here, str will allocate memory, and copy "Blah." into it, and pi will be initialized to point to an integer in memory. When an exception is thrown, stack-unwinding begins. It will first "call" the pointer's destructor (do nothing), then str's destructor, which will free the memory that was allocated to it. The int that pi points to, however, is leaked. If you use RAII concepts, you'd use a smart pointer: #include <memory> #include <stdexcept> #include <string> int main(void) { try { std::string s = "Blah."; std::auto_ptr<int> pi(new int); throw std::runtime_error("oops"); // no need to manually delete. } catch(...) { } } Here, pi's destructor will call delete and no memory will be leaked. This is why you should always wrap your pointers, and is the same reason we use std::vector rather than manually allocating, resizing, and freeing pointers. (Cleanliness and Safety) Edit I forgot to mention. You asked this: I think I want to put an autoptr around P and call release on the autoptr after dostuff to prevent a memory leak, would that be correct? 
I didn't state it explicitly, and only implied it above, but the answer is no. All you have to do is place it inside of auto_ptr and when the time comes, it will be deleted automatically. Releasing it manually defeats the purpose of placing it in a container in the first place. I would also suggest you look at more advanced smart pointers, such as those in boost. An extraordinarily popular one is shared_ptr, which is reference counted, making it suitable for storage in containers and being copied around. (Unlike auto_ptr. Do not use auto_ptr in containers!)
1,197,858
1,197,881
Binary file I/O issues
Edit: I'm trying to convert a text file into bytes. I'm not sure if the code is turning it into bytes or not. Here is the link to the header so you can see the as_bytes function. link #include "std_lib_facilities.h" int main() { cout << "Enter input file name.\n"; string file; cin >> file; ifstream in(file.c_str(), ios::binary); int i; vector<int> bin; while(in.read(as_bytes(i), sizeof(int))) bin.push_back(i); ofstream out(file.c_str(), ios::out); for(int i = 0; i < bin.size(); ++i) out << bin[i]; keep_window_open(); } Note that now the out stream just outputs the contents of the vector. It doesn't use the write function or the binary mode. This converts the file to a large line of numbers - is this what I'm looking for? Here is an example of the second code's file conversion: that guy likes to eat lots of pie (not sure if this was exact text) turns to 543518319544825700191924850016351970295432362115448292821701667182186922608417526375411952522351186935715718643976841768956006
The reason your first method didn't change the file is because all files are stored in the same way. The only "difference" between text files and binary files is that text files contain only bytes that can be shown as ASCII characters, while binary files* have a much more random variety and order of bytes. So you are reading bytes in as bytes and then outputting them as bytes again! *I'm including Unicode text files as binary, since they can have multiple bytes to denote one character point, depending on the character point and the encoding used. The second method is also fairly simple. You are reading in the bytes, as before, and storing them in integers (which are probably 4 bytes long). Then you are just printing out the integers as if they are integers, so you are seeing a string of numbers. As for why your first method cut off some of the bytes, you're right in that it's probably some bug in your code. I thought it was more important to explain what the ideas are in this case, rather than debug some test code.
1,198,044
1,198,075
Function pointer as template argument?
Is it possible to pass a function pointer as a template argument without using a typedef? template<class PF> class STC { PF old; PF& ptr; public: STC(PF pf, PF& p) : old(*p), ptr(p) { p = pf; } ~STC() { ptr = old; } }; void foo() {} void foo2() {} int main() { void (*fp)() = foo; typedef void (*vfpv)(); STC<vfpv> s(foo2, fp); // possible to write this line without using the typedef? }
Yes: STC<void (*)()> s(foo2, fp); // like this It's the same as taking the typedef declaration and removing the typedef keyword and the name.
1,198,055
1,198,077
Best way to use a C++ Interface
I have an interface class similar to: class IInterface { public: virtual ~IInterface() {} virtual methodA() = 0; virtual methodB() = 0; }; I then implement the interface: class AImplementation : public IInterface { // etc... implementation here } When I use the interface in an application is it better to create an instance of the concrete class AImplementation. Eg. int main() { AImplementation* ai = new AIImplementation(); } Or is it better to put a factory "create" member function in the Interface like the following: class IInterface { public: virtual ~IInterface() {} static std::tr1::shared_ptr<IInterface> create(); // implementation in .cpp virtual methodA() = 0; virtual methodB() = 0; }; Then I would be able to use the interface in main like so: int main() { std::tr1::shared_ptr<IInterface> test(IInterface::create()); } The 1st option seems to be common practice (not to say its right). However, the 2nd option was sourced from "Effective C++".
One of the most common reasons for using an interface is so that you can "program against an abstraction" rather then a concrete implementation. The biggest benefit of this is that it allows changing of parts of your code while minimising the change on the remaining code. Therefore although we don't know the full background of what you're building, I would go for the Interface / factory approach. Having said this, in smaller applications or prototypes I often start with concrete classes until I get a feel for where/if an interface would be desirable. Interfaces can introduce a level of indirection that may just not be necessary for the scale of app you're building. As a result in smaller apps, I find I don't actually need my own custom interfaces. Like so many things, you need to weigh up the costs and benefits specific to your situation.
1,198,110
1,198,157
Is there an equivalent to 'AssemblyInfo.cs' in a Win32 DLL project?
I already looked at this topic, but I need the answer flipped around. How would I set the assembly information attributes in a Win32 DLL?
Okay, I figured it out with a little more looking. Right click the Visual Studio Project, and select Add -> Resource... Select 'Version', then click 'New...' Visual Studio will generate the files for you, and you can simply edit the information.
1,198,117
1,198,179
Getting commands from string input
I have a program which gets commands as a string. Each character in the string represents a command. An example of command is given below OBIPC O - Open a file B - Make the text in Bold I - Make the text in italics P - Print the text C - Close the file My program has to parse this string and do respective job. Each command is executed sequentially. I came up with the following idea to do this. Create action classes for each command. Each will have a Execute() method and implements an interface IExecutable. So if I have 5 commands, I will have 5 classes each for processing each command. Keep the command and associated object that can perform action in an associative container (std::map in C++ and Dictionary in .NET). Loop through each character in the input string. Check any action associated for each character. If yes, get the action object and call Execute(). Some sort of command design pattern. If some charcters don't have action associated, throw error or skip. Will this approach be the best for my problem or do you see any easy/efficient method to do this? This is not specific to any programming language, I am OK with any programming language. All I am looking for is ideas to do this. Any thoughts?
Yep. That's exactly how I'd do it, except using Runnable::run() instead of IExecutable::Execute().
1,198,159
1,198,167
Many small files or one big file? (Or, Overhead of opening and closing file handles) (C++)
I have created an application that does the following: Make some calculations, write calculated data to a file - repeat for 500,000 times (overall, write 500,000 files one after the other) - repeat 2 more times (overall, 1.5 mil files were written). Read data from a file, make some intense calculations with the data from the file - repeat for 1,500,000 iterations (iterate over all the files written in step 1.) Repeat step 2 for 200 iterations. Each file is ~212k, so overall I have ~300Gb of data. It looks like the entire process takes ~40 days on a Core 2 Duo CPU with 2.8 Ghz. My problem (as you can probably guess) is the time it takes to complete the entire process. All the calculations are serial (each calculation is dependent on the one before), so I can't parallelize this process across different CPUs or PCs. I'm trying to think how to make the process more efficient and I'm pretty sure most of the overhead goes to file system access (duh...). Every time I access a file I open a handle to it and then close it once I finish reading the data. One of my ideas to improve the run time was to use one big file of 300Gb (or several big files of 50Gb each), and then I would only use one open file handle and simply seek to each relevant data and read it, but I'm not sure what the overhead of opening and closing file handles is. Can someone shed some light on this? Another idea I had was to try and group the files into bigger ~100Mb files and then I would read 100Mb each time instead of many 212k reads, but this is much more complicated to implement than the idea above. Anyway, if anyone can give me some advice on this or has any idea how to improve the run time I would appreciate it! Thanks. Profiler update: I ran a profiler on the process; it looks like the calculations take 62% of runtime and the file read takes 34%. Meaning that even if I miraculously cut file I/O costs by a factor of 34, I'm still left with 24 days, which is quite an improvement, but still a long time :)
Opening a file handle isn't probable to be the bottleneck; actual disk IO is. If you can parallelize disk access (by e.g. using multiple disks, faster disks, a RAM disk, ...) you may benefit way more. Also, be sure to have IO not block the application: read from disk, and process while waiting for IO. E.g. with a reader and a processor thread. Another thing: if the next step depends on the current calculation, why go through the effort of saving it to disk? Maybe with another view on the process' dependencies you can rework the data flow and get rid of a lot of IO. Oh yes, and measure it :)
1,198,215
1,210,219
MFC: child dialog behavior
I'm trying to make my child dialog box to be created as a member of the main application class as follows: class ParentWindow : public CWinApp { public: // Other MFC and user-implemented classes before this line MiscSettings activeMiscSettings; public: ParentWindow(); ~ParentWindow(); // Overrides virtual BOOL InitInstance(); // Implementation afx_msg void OnAppAbout(); afx_msg void OnMiscSettingsPrompt(); DECLARE_MESSAGE_MAP() }; I would like to have the dialog box described by MiscSettings to be instantiated when the program starts up, destructed when the program exits, and show/hide according to whether the user select a particular menu option vs. the user clicking a "OK" or "Cancel" button of the dialog box. However, when I implemented the OnMiscSettingsPrompt() handler function as follows: void ParentWindow::OnMiscSettingsPrompt() { float temp; INT_PTR status = activeMiscSettings.DoModal(); switch(status) { case IDOK: temp = activeMiscSettings.GetSpeed(); break; case IDCANCEL: default: break; } } I cannot access activeMiscSettings.GetSpeed() method b/c the handle is invalid after the DoModal() call. I used this method similar to other examples on showing child dialog boxes. However, the contents of activeMiscSettings were not accessible by ParentWindow class. I know I can put handlers in MiscSettings class to transfer the contents properly in the OK button handler of the edit control and other user control settings to the appropriate class contents of the rest of the application. At this point, I'm not sure what would be the cleanest way of transferring the settings on the child popup dialog to the rest of the application. Another specification that I am trying to achieve is to have the misc. settings pop-up dialog to show pre-configured settings when it first appears when the user selected the menu option for the first time. 
After changing some settings and pressing ok, if the user opens the settings window again, I would like to have the current settings show up in the user controls rather than showing the preconfigured settings previously seen in the very first instance. Is this an easily achievable goal? Thanks in advance for the comments.
I have ended up deciding to create a struct containing the settings to be configured in the child dialog in the parent dialog class, passing in the pointer to the struct when calling a constructor, and have the child dialog's OK button handler modify the struct's contents as it is a pointer. I think this is as clean as I can make the implementation for now.
1,198,260
1,201,902
How can you iterate over the elements of an std::tuple?
How can I iterate over a tuple (using C++11)? I tried the following: for(int i=0; i<std::tuple_size<T...>::value; ++i) std::get<i>(my_tuple).do_sth(); but this doesn't work: Error 1: sorry, unimplemented: cannot expand ‘Listener ...’ into a fixed-length argument list. Error 2: i cannot appear in a constant expression. So, how do I correctly iterate over the elements of a tuple?
Boost.Fusion is a possibility: Untested example: struct DoSomething { template<typename T> void operator()(T& t) const { t.do_sth(); } }; tuple<....> t = ...; boost::fusion::for_each(t, DoSomething());
1,198,573
1,200,320
C API for getting CPU load in linux
In linux, is there a built-in C library function for getting the CPU load of the machine? Presumably I could write my own function for opening and parsing a file in /proc, but it seems like there ought to be a better way. Doesn't need to be portable Must not require any libraries beyond a base RHEL4 installation.
If you really want a c interface use getloadavg(), which also works in unixes without /proc. It has a man page with all the details.
1,198,577
1,198,584
std::auto_ptr, delete[] and leaks
Why this code does not cause memory leaks? int iterCount = 1000; int sizeBig = 100000; for (int i = 0; i < iterCount; i++) { std::auto_ptr<char> buffer(new char[sizeBig]); } WinXP sp2, Compiler : BCB.05.03
Because you're (un)lucky. auto_ptr calls delete, not delete []. This is undefined behavior. Try doing something like this and see if you get as lucky: struct Foo { char *bar; Foo(void) : bar(new char[100]) { } ~Foo(void) { delete [] bar; } }; int iterCount = 1000; int sizeBig = 100000; for (int i = 0; i < iterCount; i++) { std::auto_ptr<Foo> buffer(new Foo[sizeBig]); } The idea here is that your destructor for Foo will not be called. The reason is something like this: When you say delete[] p, the implementation of delete[] is supposed to go to each element in the array, call its destructor, then free the memory pointed to by p. Similarly, delete p is supposed to call the destructor on p, then free the memory. char's don't have a destructor, so it's just going to delete the memory pointed to by p. In my code above, it is not going to destruct each element in the array (because it's not calling delete[]), so some Foo's will leave their local bar variable un-deleted.
1,198,717
1,198,734
creating zip file from a folder - in c++
I want to create a program that , when executed, will compress a selected folder. Can it be done?
If you don't want to use boost, there's also zlib, along with minizip, which is a wrapper around zlib for managing zip files.
1,198,893
1,198,946
Access to global data in a dll from an exported dll function
I am creating a C++ Win32 dll with some global data. There is a std::map defined globally and there are exported functions in the dll that write data into the map (after acquiring a write lock, ofcourse). My problem is, when I call the write function from inside the dll DllMain, it works without any problems. But when I load the dll from another program and call the function that writes data into the global map, it gives me this error: WindowsError: exception: access violation reading 0x00000008 Is there something that can be done about this? The same function when called from DllMain has access to the global data in the dll, but when called from a different process, it doesn't have access to the global data. Please advice. I am using the TDM-MinGW gcc 4.4.0 Compiler. EDIT: Ok, I've figured out what the problem is, and thanks for the help guys, but the problem was not with a constructor issue or inability to have maps in global space, but an issue in the boost::python that I'm using. I had tested it, but since I was calling the dll from within python or maybe something, the urllib2 module wasn't getting loaded into the dll. Now I have to see how to fix it.
Looks like the constructor of std::map did not run yet when your code was called. Lifetime of global non-PODs in a Win32 DLL is pretty tricky, and I'm not certain as to how MinGW specifically handles it. But it may be that the way you're compiling the DLL, you've set your own function (DllMain?) as an entry point, and thus overrode the CRT initialization routine that calls constructors.
1,198,918
1,198,955
How linker resolves the symbol in assembly code
I wanted to know how linker resolves the printf symbol in the following assembly code. #include<stdio.h> void main() { printf("Hello "); } .file "test.c" .def ___main; .scl 2; .type 32; .endef .section .rdata,"dr" LC0: .ascii "Hello \0" .text .globl _main .def _main; .scl 2; .type 32; .endef _main: pushl %ebp movl %esp, %ebp subl $8, %esp andl $-16, %esp movl $0, %eax addl $15, %eax addl $15, %eax shrl $4, %eax sall $4, %eax movl %eax, -4(%ebp) movl -4(%ebp), %eax call __alloca call ___main movl $LC0, (%esp) **call _printf** leave ret .def **_printf**; .scl 3; .type 32; .endef Bit of Low Level Explanation will be highly appreciated. Thanks in advance.
Assuming ELF file format, the assembler will generate an undefined symbol reference in the object file. This'll look like this: Symbol table '.symtab' contains 11 entries: Num: Value Size Type Bind Vis Ndx Name 0: 00000000 0 NOTYPE LOCAL DEFAULT UND 1: 00000000 0 FILE LOCAL DEFAULT ABS test.c 2: 00000000 0 SECTION LOCAL DEFAULT 1 3: 00000000 0 SECTION LOCAL DEFAULT 3 4: 00000000 0 SECTION LOCAL DEFAULT 4 5: 00000000 0 SECTION LOCAL DEFAULT 5 6: 00000000 0 SECTION LOCAL DEFAULT 6 7: 00000000 0 SECTION LOCAL DEFAULT 7 8: 00000000 52 FUNC GLOBAL DEFAULT 1 main 9: 00000000 0 NOTYPE GLOBAL DEFAULT UND printf 10: 00000000 0 NOTYPE GLOBAL DEFAULT UND exit It'll also create a relocation entry to point to the part of the code image that needs to be updated by the linker with the correct address. It'll look like this: $ readelf -r test.o Relocation section '.rel.text' at offset 0x358 contains 3 entries: Offset Info Type Sym.Value Sym. Name 0000001f 00000501 R_386_32 00000000 .rodata 00000024 00000902 R_386_PC32 00000000 printf 00000030 00000a02 R_386_PC32 00000000 exit The linker's job is then to walk through the relocation table fixing up the code image with the final symbol addresses. There's an excellent book, but I can't find the details right now (and it's out of print). However, this looks like it may be useful: http://www.linuxjournal.com/article/6463
1,199,018
1,199,057
Is it possible to make both a managed and unmanaged versions of the same C++ assembly?
We use a software from another company for one of our products. A developer from that company is kinda 'old' and works in C (no offence). We work in .Net 3.5 (C#). He asked me if it is possible, with the same source code (presumably in C, maybe C++), to create an assembly that he could compile both a managed and unmanaged version. Are there any good reason to do this?
In order to compile to managed assembly the code needs to be written using Managed C++ Extensions. Please note that C is not an OO language so you cannot compile to a managed assembly. The primary reason for doing this is if you have an existing code base written in C++ that you want to use directly in .NET application without resorting to P/Invoke.
1,199,336
1,199,649
cancellation handler won't run if pthread_exit called from C source instead of C++ source
I'm linking a C++ source with a C source and a C++ source. I create a thread with pthread, a cancellation point and then I call pthread_exit either through the C or the C++ source file. If the pthread_exit call comes from the C source, the cancellation handler does not fire! What may be the reason for this? b.cc: #include <cstdio> #include <cstdlib> #include <stdbool.h> #include <pthread.h> extern "C" void V(); extern "C" void Vpp(); extern "C" void Vs(); #define PTHREAD_EXIT Vs void cleanup(void*v) { fprintf(stderr, "Aadsfasdf\n"); exit(0); } void* f(void*p) { pthread_cleanup_push(cleanup, NULL); PTHREAD_EXIT(); pthread_cleanup_pop(true); return NULL; } int main() { pthread_t p; if (pthread_create(&p, NULL, f, NULL)) abort(); for(;;); } vpp.cc: #include <pthread.h> extern "C" void Vpp(); void Vpp() { pthread_exit(0); } v.c: #include <pthread.h> void V() { pthread_exit(0); } vs.s: .text Vs: .global Vs call pthread_exit spin: jmp spin compilation with g++ -c vpp.cc -g -o vpp.o -Wall gcc -c v.c -g -o v.o -Wall as vs.s -o vs.o g++ b.cc vpp.o v.o vs.o -o b -lpthread -g -Wall If PTHREAD_EXIT is Vpp the program displays a message and terminates, if it is V or Vs it doesn't. the disassembly for V and Vpp is identical, and changing the definition of PTHREAD_EXIT between V and Vpp merely changes between call V and call Vpp in the disassembly. EDIT: Not reproducible on another computer, so I guess I hit an error in the library or something.
Inspired by Ch. Vu-Brugier I took a look at pthread.h and found out that I have to add #undef __EXCEPTIONS before including pthread.h. This is a satisfactory workaround for my current needs.
1,199,513
1,199,526
Restart a process [exe] in Windows
I have a C++ exe; under a particular scenario I need to stop the exe and start it up again. This has to be done from within the same exe and not from outside. What is the best way to achieve this? My guess is to start a new instance of the process and then kill the running process. But is there any straight forward API to do this, like RestartProcess() or something? If not what do you suggest?
No, there's no such built-in method. You really have to detect the path to the executable (GetModuleFileName() with a NULL module handle returns the path of the current process's EXE), run the new process (CreateProcess()), then exit the current process (ExitProcess()).
1,199,764
1,199,795
How to check if a SQL query is valid for writing with ADO?
My app has an advanced feature that accepts SQL queries written by the user. The feature should include a "Validate" button to check if the query is valid. The most simple way I found to do this using ADO is just trying to run the query and catch possible exceptions. But how can I also check if the query enables to add new records or to edit existing ones?
Transactions, anyone? begin transaction // Query being validated goes here rollback transaction
1,199,920
1,199,960
How to Find CPU Utilization of a Single Thread within a Process
I am looking a Tool on How to Find CPU Utilization of a Single Thread within a Process in VC++. It would be great full if any one could provide me a tool. Also it could be better if you guys provide how to do programmatically. Thank you in Advance.
Perhaps using GetThreadTimes would help? To elaborate, if the thread belongs to another executable, that would be something (not tested) along the lines of: // Returns true if thread times could be queried and its results are usable, // false otherwise. Error handling is minimal; consider throwing detailed // exceptions instead of returning a simple boolean. bool get_remote_thread_times(DWORD thread_id, FILETIME & kernel_time, FILETIME & user_time) { FILETIME creation_time = { 0 }; FILETIME exit_time = { 0 }; HANDLE thread_handle = OpenThread(THREAD_QUERY_INFORMATION, FALSE, thread_id); if (thread_handle == NULL) // OpenThread returns NULL, not INVALID_HANDLE_VALUE, on failure return false; bool success = GetThreadTimes(thread_handle, &creation_time, &exit_time, &kernel_time, &user_time) != 0; CloseHandle(thread_handle); return success; }
1,200,021
1,200,058
Small question about precompiled headers
Looking at an open source code base i came across this code: #include "StableHeaders.h" #include "polygon.h" #include "exception.h" #include "vector.h" ... Now the StableHeaders.h is a precompiled header which is included by a 'control' cpp to force it's generation. The three includes that appear after the precompiled header are also included in the StableHeaders.h file anyway. My question is, are these files included twice so that the code base will build on compilers that don't support precompiled headers? As im assuming that include guards/header caching will make the multiple includes redundant anyway... EDIT btw, the stableheaders.h file has a check for win32 (roughly) so again im assuming that the includes inside stableheaders.h wont be included on compilers that don't support precompiled headers.
Compilers that don't support precompiled headers would just include StableHeaders.h and reparse it every time (rather than using the precompiled file). It won't cause any problems, nor does it fix any problems for certain compilers, as you asked. I think it's just a minor 'mistake' that probably happened over time during development.
1,200,026
1,200,213
Error on dlopen: St9bad_alloc
I have some c++ code I'm using for testing in which the first line is a call to dlopen in an attempt to load my shared object. Upon hitting this line I get the following error: Terminate called after throwing an instance of std::bad_alloc: what() : St9bad_alloc I've upped the memory (free -m now reports that I have ~120 MB free when my exe is loaded in gdb) and I still get the same message. Anyone any ideas on what else could be causing this & what I can do to resolve it?
Take a look at the C++ dlopen mini HOWTO, hope that helps.
1,200,188
1,200,218
How to convert std::string to LPCSTR?
How can I convert a std::string to LPCSTR? Also, how can I convert a std::string to LPWSTR? I am totally confused with these LPCSTR LPSTR LPWSTR and LPCWSTR. Are LPWSTR and LPCWSTR the same?
str.c_str() gives you a const char *, which is an LPCSTR (Long Pointer to Constant STRing) -- meaning that it's a pointer to a 0-terminated string of characters. W means wide string (composed of wchar_t instead of char), and C means constant, so LPWSTR is wchar_t * while LPCWSTR is const wchar_t *; they are not the same, they differ only in constness.
1,200,475
1,200,594
overload operator<< within a class in c++
I have a class that uses a struct, and I want to overload the << operator for that struct, but only within the class: typedef struct my_struct_t { int a; char c; } my_struct; class My_Class { public: My_Class(); friend ostream& operator<< (ostream& os, my_struct m); } I can only compile when I declare the operator<< overload w/ the friend keyword, but then the operator is overloaded everywhere in my code, not just in the class. How do I overload the << operator for my_struct ONLY within the class? Edit: I will want to use the overloaded operator to print a my_struct which IS a member of My_Class
How do I overload the << operator for my_struct ONLY within the class? Define it as static std::ostream & operator<<( std::ostream & o, const my_struct & s ) { //... or namespace { std::ostream & operator<<( std::ostream & o, const my_struct & s ) { //... } in the .cpp file in which you implement MyClass. EDIT: If you really, really need to scope on the class and nothing else, then define it as a private static function in said class. It will only be in scope in that class and it's subclasses. It will hide all other custom operator<<'s defined for unrelated classes, though (again, only inside the class, and it's subclasses), unless they can be found with ADL, or are members of std::ostream already.
1,200,727
1,200,836
Cross-platform drawing library
I've been looking for a good cross-platform 2D drawing library that can be called from C++ and can be used to draw some fairly simple geometry; lines, rectangles, circles, and text (horizontal and vertical) for some charts, and save the output to PNG. I think a commercial package would be preferable over open source because we would prefer not to have to worry about licensing issues (unless there's something with a BSD style license with no credit clause). I've looked at Cairo Graphics which seemed promising, but the text rendering looks like crap out of the box, and upgrading the text back-end brings us into murky license land. I need it for Windows, Mac and Linux. Preferably something fairly lightweight and simple to integrate. I've thought about Qt but that's way too heavy for our application. Any ideas on this would be awesome.
Try Anti-Grain Geometry. From the description: Anti-Grain Geometry (AGG) is an Open Source, free of charge graphic library, written in industrially standard C++. The terms and conditions of use AGG are described on The License page. AGG doesn't depend on any graphic API or technology. Basically, you can think of AGG as of a rendering engine that produces pixel images in memory from some vectorial data. But of course, AGG can do much more than that. The ideas and the philosophy of AGG are: Anti-Aliasing. Subpixel Accuracy. The highest possible quality. High performance. Platform independence and compatibility. Flexibility and extensibility. Lightweight design. Reliability and stability (including numerical stability).
1,201,051
1,204,926
floating point precision
I have a program written in C# and some parts are written in native C/C++. I use doubles to calculate some values and sometimes the result is wrong because of too small precision. After some investigation I figured out that someone is setting the floating-point precision to 24-bits. My code works fine when I reset the precision to at least 53-bits (using _fpreset or _controlfp), but I still need to figure out who is responsible for setting the precision to 24-bits in the first place. Any ideas how I could achieve this?
This is caused by the default Direct3D device initialisation. You can tell Direct3D not to mess with the FPU precision by passing the D3DCREATE_FPU_PRESERVE flag to CreateDevice. There is also a managed code equivalent to this flag (CreateFlags.FpuPreserve) if you need it. More information can be found at Direct3D and the FPU.
1,201,179
1,211,242
How to identify top-level X11 windows using xlib?
I'm trying to get a list of all top level desktop windows in an X11 session. Basically, I want to get a list of all windows that are shown in the window managers application-switching UI (commonly opened when the user presses ALT+TAB). I've never done any X11 programming before, but so far I've managed to enumerate through the entire window list, with code that looks something like this: void CSoftwareInfoLinux::enumerateWindows(Display *display, Window rootWindow) { Window parent; Window *children; Window *child; quint32 nNumChildren; XTextProperty wmName; XTextProperty wmCommand; int status = XGetWMName(display, rootWindow, &wmName); if (status && wmName.value && wmName.nitems) { int i; char **list; status = XmbTextPropertyToTextList(display, &wmName, &list, &i); if (status >= Success && i && *list) { qDebug() << "Found window with name:" << (char*) *list; } status = XGetCommand(display, rootWindow, &list, &i); if (status >= Success && i && *list) { qDebug() << "... and Command:" << i << (char*) *list; } Window tf; status = XGetTransientForHint(display, rootWindow, &tf); if (status >= Success && tf) { qDebug() << "TF set!"; } XWMHints *pHints = XGetWMHints(display, rootWindow); if (pHints) { qDebug() << "Flags:" << pHints->flags << "Window group:" << pHints->window_group; } } status = XQueryTree(display, rootWindow, &rootWindow, &parent, &children, &nNumChildren); if (status == 0) { // Could not query window tree further, aborting return; } if (nNumChildren == 0) { // No more children found. Aborting return; } for (int i = 0; i < nNumChildren; i++) { enumerateWindows(display, children[i]); } XFree((char*) children); } enumerateWindows() is called initially with the root window. This works, in so far as it prints out information about hundreds of windows - what I need, is to work out which property I can interrogate to determine if a given Window is a top-level Desktop application window (not sure what the official terminology is), or not. 
Can anyone shed some light on this? All the reference documentation I've found for X11 programming has been terribly dry and hard to understand. Perhaps someone could point me to a better resource?
I have a solution! Well, sort of. If your window manager uses the extended window manager hints (EWMH), you can query the root window using the "_NET_CLIENT_LIST" atom. This returns a list of client windows the window manager is managing. For more information, see here. However, there are some issues with this. For a start, the window manager in use must support the EWMH. KDE and GNOME do, and I'm sure some others do as well. However, I'm sure there are many that don't. Also, I've noticed a few issues with KDE. Basically, some non-KDE applications don't get included in the list. For example, if you run xcalc under KDE it won't show up in this list. If anyone can provide any improvements on this method, I'd be glad to hear them. For reference, the code I'm using is listed below: Atom a = XInternAtom(m_pDisplay, "_NET_CLIENT_LIST" , true); Atom actualType; int format; unsigned long numItems, bytesAfter; unsigned char *data =0; int status = XGetWindowProperty(m_pDisplay, rootWindow, a, 0L, (~0L), false, AnyPropertyType, &actualType, &format, &numItems, &bytesAfter, &data); if (status >= Success && numItems) { // success - we have data: Format should always be 32: Q_ASSERT(format == 32); // cast to proper format, and iterate through values: quint32 *array = (quint32*) data; for (quint32 k = 0; k < numItems; k++) { // get window Id: Window w = (Window) array[k]; qDebug() << "Scanned client window:" << w; } XFree(data); }
1,201,198
1,201,282
Syntax error compiling header containing "char[]"
I am trying to build a Visual C++ 2008 DLL using SDL_Mixer 1.2: http://www.libsdl.org/projects/SDL_mixer/ This is supposedly from a build made for Visual C++, but when I include SDL_mixer.h I get error C2143: "syntax error : missing ';' before '['". The problem line is: const char[] MIX_EFFECTSMAXSPEED = "MIX_EFFECTSMAXSPEED"; Is this because of the use of the dynamic array construct "char[]", instead of "char*"? All the expressions in the file are wrapped by "extern "C" {".
My bad. Although the answers here are correct regarding C construct, the actual problem was that I had included a "D" language file instead of the C version.
1,201,261
1,202,176
What is the Fastest Method for High Performance Sequential File I/O in C++?
Assuming the following for... Output: The file is opened... Data is 'streamed' to disk. The data in memory is in a large contiguous buffer. It is written to disk in its raw form directly from that buffer. The size of the buffer is configurable, but fixed for the duration of the stream. Buffers are written to the file, one after another. No seek operations are conducted. ...the file is closed. Input: A large file (sequentially written as above) is read from disk from beginning to end. Are there generally accepted guidelines for achieving the fastest possible sequential file I/O in C++? Some possible considerations: Guidelines for choosing the optimal buffer size Will a portable library like boost::asio be too abstracted to expose the intricacies of a specific platform, or can they be assumed to be optimal? Is asynchronous I/O always preferable to synchronous? What if the application is not otherwise CPU-bound? I realize that this will have platform-specific considerations. I welcome general guidelines as well as those for particular platforms. (my most immediate interest in Win x64, but I am interested in comments on Solaris and Linux as well)
Are there generally accepted guidelines for achieving the fastest possible sequential file I/O in C++? Rule 0: Measure. Use all available profiling tools and get to know them. It's almost a commandment in programming that if you didn't measure it you don't know how fast it is, and for I/O this is even more true. Make sure to test under actual work conditions if you possibly can. A process that has no competition for the I/O system can be over-optimized, fine-tuned for conditions that don't exist under real loads. Use mapped memory instead of writing to files. This isn't always faster but it allows the opportunity to optimize the I/O in an operating system-specific but relatively portable way, by avoiding unnecessary copying, and taking advantage of the OS's knowledge of how the disk actually being used. ("Portable" if you use a wrapper, not an OS-specific API call). Try and linearize your output as much as possible. Having to jump around memory to find the buffers to write can have noticeable effects under optimized conditions, because cache lines, paging and other memory subsystem issues will start to matter. If you have lots of buffers look into support for scatter-gather I/O which tries to do that linearizing for you. Some possible considerations: Guidelines for choosing the optimal buffer size Page size for starters, but be ready to tune from there. Will a portable library like boost::asio be too abstracted to expose the intricacies of a specific platform, or can they be assumed to be optimal? Don't assume it's optimal. It depends on how thoroughly the library gets exercised on your platform, and how much effort the developers put into making it fast. Having said that a portable I/O library can be very fast, because fast abstractions exist on most systems, and it's usually possible to come up with a general API that covers a lot of the bases. 
Boost.Asio is, to the best of my limited knowledge, fairly fine tuned for the particular platform it is on: there's a whole family of OS and OS-variant specific APIs for fast async I/O (e.g. epoll, /dev/epoll, kqueue, Windows overlapped I/O), and Asio wraps them all. Is asynchronous I/O always preferable to synchronous? What if the application is not otherwise CPU-bound? Asynchronous I/O isn't faster in a raw sense than synchronous I/O. What asynchronous I/O does is ensure that your code is not wasting time waiting for the I/O to complete. It is faster in a general way than the other method of not wasting that time, namely using threads, because it will call back into your code when I/O is ready and not before. There are no false starts or concerns with idle threads needing to be terminated.
1,201,295
1,201,323
private non-const and public const member function - coexisting in peace?
I am trying to create a class with two methods with the same name, used to access a private member. One method is public and const qualified, the other is private and non-const (used by a friend class to modify the member by way of return-by-reference). Unfortunately, I am receiving compiling errors (using g++ 4.3): When using a non-const object to call the method, g++ complains that the non-const version of my method is private, even though a public (const) version exists. This seems strange, because if the private non-const version does not exist, everything compiles fine. Is there any way to make this work? Does it compile on other compilers? Thanks. Example: class A { public: A( int a = 0 ) : a_(a) {} public: int a() const { return a_; } private: int & a() { return a_; } /* Comment this out, everything works fine */ friend class B; private: int a_; }; int main() { A a1; A const a2; cout << a1.a() << endl; /* not fine: tries to use the non-const (private) version of a() and fails */ cout << a2.a() << endl; /* fine: uses the const version of a() */ }
Overload resolution happens before access checking, so when you call the a method on a non-const A, the non-const member is chosen as a better match. The compiler then fails due to the access check. There is no way to "make this work", my recommendation would be to rename the private function. Is there any need to have a private accessor?
1,201,388
1,202,550
Create thumbnails in C++
Wondering if anyone knows how to create thumbnails in C++ from NITF 2.1 images
Using the package below you should be able to read a NITF image and then generate your own smaller version to save as a thumbnail. NITRO is a full-fledged, extensible library solution for reading and writing National Imagery Transmission Format (NITF) files, a U.S. Department of Defense standard format. It is written in cross-platform C, with bindings available for other languages (C++, Java, Python). NITRO was originally developed by General Dynamics - Advanced Information Systems in 2004 and is continuously being improved. It is now released as open-source software under the Lesser GNU Public License. http://nitro-nitf.sourceforge.net/wikka.php?wakka=HomePage
1,201,537
1,201,573
How to enclose the path stored in a variable in quotes?
Suppose I have a path C:\Program Files\TestFolder. I obtained this path programmatically and stored it in a variable dirpath (for example). I then concatenated the string dirpath=getInstallationPath()+"\\test.dll /codebase /tlb"; so dirpath becomes C:\Program Files\TestFolder\test.dll /codebase /tlb. My problem is that I have to enclose the path in double quotes: "C:\Program Files\TestFolder\test.dll". When I pass dirpath directly as the command line for regasm in a CreateProcess(), it only accepts C:\Program because of the white space. So I tried lots of stunts like dirpath="\ "+getInstallationPath()+" \test.dll /codebase /tlb " but they did not work... So please help me in this regard... Thanks in advance...
I can see two issues with that line. First of all, you need to escape the backslash preceding test.dll. Secondly, wrapping the path in quotation marks requires that you also escape the quotation marks. After these changes, it should look like this: dirpath="\""+getInstallationPath()+"\\test.dll\" /codebase /tlb " Edit: Fixed the assignment per Martin's request. Forgot a closing quotation mark for the first string!
1,201,560
1,201,664
Is it possible to implement a recursive Algorithm with an Iterator?
I am given a tree like this: http://www.seqan.de/dddoc/html/streePreorder.png I can access each node with the next operator. // postorder dfs Iterator< Index<String<char> >, BottomUp<> >::Type myIterator(myIndex); for (; !atEnd(myIterator); goNext(myIterator)) // do something with myIterator but I want to use a recursive algorithm on the tree. Is there a way I can make the recursive algorithm (excluding the biggest subtree on each node) iterative? Or how can I access the elements non-recursively? Edit: The actual problem: I am given a recursive algorithm that works on trees (recursive). I also use a library where I can only access the items with an iterator (non-standard, iterative). Recursive <-> iterative. How can I solve this?
If your iterator only supports forward (and possibly backward) traversal, but not following links on the tree or fast random access, you will have a very hard time adapting a tree algorithm to it. However, in the end any answer will depend on the interface presented by your custom iterators, which you have not provided. For example, consider the easy algorithm of tree search. If the only operation given by your iterator is "start from the first element and move on one-by-one", you obviously cannot implement tree search efficiently. Nor can you implement binary search. So you must provide a list of exactly what operations are supported, and (critically) the complexity bounds for each.
1,201,593
1,201,840
Where is C not a subset of C++?
I read in a lot of books that C is a subset of C++. Some books say that C is a subset of C++, except for the little details. What are some cases when code will compile in C, but not C++?
If you compare C89 with C++ then here are a couple of things No tentative definitions in C++ int n; int n; // ill-formed: n already defined int[] and int[N] not compatible (no compatible types in C++) int a[1]; int (*ap)[] = &a; // ill-formed: a does not have type int[] No K&R function definition style int b(a) int a; { } // ill-formed: grammar error Nested struct has class-scope in C++ struct A { struct B { int a; } b; int c; }; struct B b; // ill-formed: b has incomplete type (*not* A::B) No default int auto a; // ill-formed: type-specifier missing C99 adds a whole lot of other cases No special handling of declaration specifiers in array dimensions of parameters // ill-formed: invalid syntax void f(int p[static 100]) { } No variable length arrays // ill-formed: n is not a constant expression int n = 1; int an[n]; No flexible array member // ill-formed: fam has incomplete type struct A { int a; int fam[]; }; No restrict qualifier for helping aliasing analysis // ill-formed: two names for one parameter? void copy(int *restrict src, int *restrict dst);
1,201,875
1,201,940
When doing a parallel search, when will memory bandwidth become the limiting factor?
I have some large files (from several gigabytes to hundreds of gigabytes) that I'm searching and trying to find every occurrence of a given string. I've been looking into making this operate in parallel and have some questions. How should I be doing this? I can't copy the entire file into memory since it's too big. Will multiple FILE* pointers work? How many threads can I put on the file before the disk bandwidth becomes a limiting factor, rather than the CPU? How can I work around this? Currently, what I was thinking is I would use 4 threads, task each with a FILE* at either 0%, 25%, 50%, and 75% of the way through the file, and have each save its results to a file or memory, and then collect the results as a final step. Though with this approach, depending on bandwidth, I could easily add more threads and possibly get a bigger speedup. What do you think? EDIT: When I said memory bandwidth, I actually meant disk I/O. Sorry about that.
With this new revised version of the question, the answer is "almost immediately". Hard disks aren't very good at reading from two places on the disk at the same time. :) If you had multiple hard drives and split your file across them, you could probably take advantage of some threading. To be fair, though, I would say that the disk speed is already the limiting factor. I highly doubt that your disk can read data faster than the processor can handle it.
1,201,959
1,202,052
How do I reference one VC++ project from another in the same project?
I am new to Visual Studio. Need your help with the following. Visual Studio 2005, VC++ 1 solution has 2 projects. Lets call the solution as 'solution' Project 1 is named 'p1' and project 2 is called 'p2' Do I need to export functions and classes from 'p1' so that I can use them by importing in 'p2'? What if I simply include the header files and then use the functions by calling them straight away? Thanks, viren
If I remember correctly (haven't used C++ for a while), there were two different kinds of C++ libraries - a static library (a .lib file) and a dynamic library (a .dll file). In the case of a static library you had to configure p2 so that it links to p1.lib (in project properties); add p1 to dependancies of p2, so that it is always built first; and then include the .h files from p1 as necessary. The .dll file was a bit more tricky - the .h files had to have __declspec(dllimport) and __declspec(dllexport) I think. And there was some more magic. Not sure really. But these are the keywords that might get you up and running. Note that this is a MS specific keyword and will not work on other compilers.
1,202,003
1,202,038
How do I see if another process is running on windows?
I have a VC++ console app and I need to check to see if another process is running. I don't have the window title, all I have is the executable name. How do I get the process handle / PID for it? Can I enumerate the processes running with this .exe ?
You can use EnumProcesses to enumerate the processes on a system. You'll need to use OpenProcess to get a process handle, then QueryFullProcessImageName to get the processes executable.
1,202,283
1,206,928
Native C/Managed C++ Debugging
I have a native C Dll that calls 'LoadLibrary' to load another Dll that has the /clr flag turned on. I then use 'GetProcAddress' to get a function and call it on dynamically loaded dll. I would like to step into the dynamic library in the debugger, but the symbols never load. Any idea? And I should have said I'm using Visual Studio 2008. Update: Thanks to some tips below, I changed the project debugging to Mixed. It didn't work, but I think I know why. I am developing an addin to an existing application. The app I'm connecting to starts one exe then it starts another. So I have to use "Attach to process" to fire up the debugger. My guess is launching the debugger that way will default to "Auto". Is there a way to change the default behavior of VS to use "Mixed" debugging?
This is from VS2008, but if I remember correctly VS2005 was similar. In the native project's properties, under "Configuration Properties->Debugging" there is a "Debugger Type" which is set to "Auto" by default. You'll need to change it to "Mixed", because VS isn't smart enough to realize you need managed debugging
1,202,365
1,202,392
Ways to speed up build time? (C#/Unmanaged C++)
A legacy app I am working on currently takes ~2 hours to build. The solution has about 170 projects with 150 or so being unmanaged C++ and the other 30 C#.Net 2.0. What are some suggestions on ways to improve build times for something like this?
Focus on the C++ projects - they are almost guaranteed to be the largest time drains for building. Some tips on getting the C++ build times down: Make sure that you're only including headers that you need in the C++ projects! Use forward declarations whenever possible in headers instead of including other headers Use the /MP switch to build in parallel, when possible Use abstraction effectively Be sparing in the use of inline functions, since these cost more at compile time Get the dependencies correct, so you're not building more often that required Use pre-compiled headers appropriately Aside from that, if you're talking 2 hour build times, often there is a simple, cheap (in a big picture way) solution, as well: Upgrade your hardware to help reduce the computation times
1,202,446
1,202,464
Is there any free program for making diagrams of file dependencies extracted from header files?
Looking for something similar to Modelmaker that was for Delphi. Showing dependencies of modules. Any help is appreciated. Doxygen has been great so far. If someone knows if it's possible to achieve what I want with Doxygen, then please let me know :)
Is this what you are looking for?
1,202,653
1,203,011
Check for environment variable in another process?
In Windows, is there a way to check for the existence of an environment variable for another process? Just need to check existence, not necessarily get value. I need to do this from code.
If you know the virtual address at which the environment is stored, you can use OpenProcess and ReadProcessMemory to read the environment out of the other process. However, to find the virtual address, you'll need to poke around in the Thread Information Block of one of the process' threads. To get that, you'll need to call GetThreadContext() after calling SuspendThread(). But in order to call those, you need a thread handle, which you can get by calling CreateToolhelp32Snapshot with the TH32CS_SNAPTHREAD flag to create a snapshot of the process, Thread32First to get the thread ID of the first thread in the process, and OpenThread to get a handle to the thread.
1,202,714
1,202,830
Hypothetical, formerly-C++0x concepts questions
(Preamble: I am a late follower to the C++0x game and the recent controversy regarding the removal of concepts from the C++0x standard has motivated me to learn more about them. While I understand that all of my questions are completely hypothetical -- insofar as concepts won't be valid C++ code for some time to come, if at all -- I am still interested in learning more about concepts, especially given how it would help me understand more fully the merits behind the recent decision and the controversy that has followed) After having read some introductory material on concepts as C++0x (until recently) proposed them, I am having trouble wrapping my mind around some syntactical issues. Without further ado, here are my questions: 1) Would a type that supports a particular derived concept (either implicitly, via the auto keyword, or explicitly via concept_maps) also need to support the base concept independently? In other words, does the act of deriving a concept from another (e.g. concept B<typename T> : A<T>) implicitly include an 'invisible' requires statement (within B, requires A<T>;)? The confusion arises from the Wikipedia page on concepts which states: Like in class inheritance, types that meet the requirements of the derived concept also meet the requirements of the base concept. That seems to say that a type only needs to satisfy the derived concept's requirements and not necessarily the base concept's requirements, which makes no sense to me. I understand that Wikipedia is far from a definitive source; is the above description just a poor choice of words? 2) Can a concept which lists typenames be 'auto'? If so, how would the compiler map these typenames automatically? If not, are there any other occasions where it would be invalid to use 'auto' on a concept?
To clarify, consider the following hypothetical code: template<typename Type> class Dummy {}; class Dummy2 { public: typedef int Type; }; auto concept SomeType<typename T> { typename Type; } template<typename T> requires SomeType<T> void function(T t) {} int main() { function(Dummy<int>()); //would this match SomeType? function(Dummy2()); //how about this? return 0; } Would either of those classes match SomeType? Or is a concept_map necessary for concepts involving typenames? 3) Finally, I'm having a hard time understanding what axioms would be allowed to define. For example, could I have a concept define an axiom which is logically inconsistent, such as concept SomeConcept<typename T> { T operator*(T&, int); axiom Inconsistency(T a) { a * 1 == a * 2; } } What would that do? Is that even valid? I appreciate that this is a very long set of questions and so I thank you in advance.
I've used the most recent C++0x draft, N2914 (which still has concepts wording in it) as a reference for the following answer. 1) Concepts are like interfaces in that. If your type supports a concept, it should also support all "base" concepts. Wikipedia statement you quote makes sense from the point of view of a type's client - if he knows that T satisfies concept Derived<T>, then he also knows that it satisfies concept Base<T>. From type author perspective, this naturally means that both have to be implemented. See 14.10.3/2. 2) Yes, a concept with typename members can be auto. Such members can be automatically deduced if they are used in definitions of function members in the same concept. For example, value_type for iterator can be deduced as a return type of its operator*. However, if a type member is not used anywhere, it will not be deduced, and thus will not be implicitly defined. In your example, there's no way to deduce SomeType<T>::Type for either Dummy or Dummy1, as Type isn't used by other members of the concept, so neither class will map to the concept (and, in fact, no class could possibly auto-map to it). See 14.10.1.2/11 and 14.10.2.2/4. 3) Axioms were a weak point of the spec, and they were being constantly updated to make some (more) sense. Just before concepts were pulled from the draft, there was a paper that changed quite a bit - read it and see if it makes more sense to you, or you still have questions regarding it. For your specific example (accounting for syntactic difference), it would mean that compiler would be permitted to consider expression (a*1) to be the same as (a*2), for the purpose of the "as-if" rule of the language (i.e. the compiler permitted to do any optimizations it wants, so long as the result behaves as if there were none). However, the compiler is not in any way required to validate the correctness of axioms (hence why they're called axioms!) - it just takes them for what they are.
1,202,883
1,202,906
Detecting string argument in a loop
The program takes a string using getline, and then passes that string to a function where it stores the string into substrings separated by whitespace. I did that just by reading characters with a loop. However, now I'm trying to pass a second string argument that separates the strings into substrings if the loop encounters characters in the 2nd string argument. This is what I have so far. #include "std_lib_facilities.h" vector<string> split(const string& s, const string& w) // w is second argument { vector<string> words; string altered; for(int i = 0; i < s.length(); ++i) { altered+=s[i]; if(i == (s.length()-1)) words.push_back(altered); else if(s[i] == ' ') { words.push_back(altered); altered = ""; } } return words; } int main() { vector<string> words; cout << "Enter words.\n"; string word; getline(cin,word); words = split(word, "aeiou"); // this for example would make the letters a, e, i, o, // and u divide the string for(int i = 0; i < words.size(); ++i) cout << words[i]; cout << endl; keep_window_open(); } However, obviously I can't do something like if(s[i] == w) because s[i] is a char and w is a string. Do I need to use a stringstream to parse the string instead of the loop I implemented? I actually played around with stringstream, but don't really know how it could help because either way I have to read the characters 1 by 1. P.S. The arguments to split must be passed as strings, and the input form in main() must be a getline.
Look at std::string::find_first_of. This allows you to easily ask a std::string object for the position of the next of any characters in another string object. For example: string foo = "This is foo"; cout << foo.find_first_of("aeiou"); // outputs 2, the index of the 'i' in 'This' cout << foo.find_first_of("aeiou", 3); // outputs 5, the index of the 'i' in 'is' Edit: whoops, wrong link
1,203,451
1,203,504
How to write an application for the system tray in Linux
How do I write my application so it'll live in the system tray on Linux? In fact, just like CheckGmail. As with CheckGmail, I'd also like some sort of popup box to appear when I hover the tray icon. Is there an API, class or something for doing this? All I'm able to find seems to be for Windows. If I have to be language specific, then preferably in C/C++ but a solution in Python will most likely also do. Thanks.
The Qt framework contains a QSystemTrayIcon class. This means that you can write an application in C++ or Python (or any other language with Qt bindings, including C#, Ada, Pascal, Perl, PHP and Ruby) and run your application on Windows, Linux, Mac or any other supported Qt operating system. I should add that Qt applications generally do a pretty good job of looking native on whatever operating system you are using without very much effort (even between Gnome/KDE on Linux). Qt also has excellent documentation, lots of sample code, a generous license and is well-maintained.
1,203,765
1,203,772
How to dump the symbols in a .a file
Can you please tell me how I can dump all the symbols in a .a file on MacOS X? I am getting a linking error while compiling my C++ program on MacOS X. I would like to find out if the symbols exist in the .a file that I am linking with. Thank you.
man nm Nm displays the name list (symbol table) of each object file in the argument list. If an argument is an archive, a listing for each object file in the archive will be produced. File can be of the form libx.a(x.o), in which case only symbols from that member of the object file are listed. ... etc
1,204,118
1,204,160
Can I include iostream header file into custom namespace?
namespace A { #include <iostream> }; int main(){ A::std::cout << "\nSample"; return 0; }
Short answer: No. Long answer: Well, not really. You can fake it, though. You can declare it outside and use using statements inside the namespace, like this: #include <iostream> namespace A { using std::cout; }; int main(){ A::cout << "\nSample"; system("PAUSE"); return 0; } You cannot localize a library, because even if it had access in A, it would not have access in the standard namespace. Also, "The other problem is that the qualified names inside the namespace would be A::std::cout, but the library would not contain names qualified with the outer namespace." thanks Jonathon Leffler. If the problem is that you don't want to let other people know what all your code can do, you could have your own cpp file to include iostream in, and have the namespace defined there. Then you just include that in main (or whatever) and let the programmer know what he can and cannot do.
1,204,313
1,204,337
Counting occurrences in a vector
This program reads strings of numbers from a txt file, converts them to integers, stores them in a vector, and then tries to output them in an organized fashion like so.... If txt file says: 7 5 5 7 3 117 5 The program outputs: 3 5 3 7 2 117 so if the number occurs more than once it outputs how many times that happens. Here is the code so far. #include "std_lib_facilities.h" int str_to_int(string& s) { stringstream ss(s); int num; ss >> num; return num; } int main() { cout << "Enter file name.\n"; string file; cin >> file; ifstream f(file.c_str(), ios::in); string num; vector<int> numbers; while(f>>num) { int number = str_to_int(num); numbers.push_back(number); } sort(numbers.begin(), numbers.end()); for(int i = 0; i < numbers.size(); ++i) { if(i = 0 && numbers[i]!= numbers[i+1]) cout << numbers[i] << endl; if(i!=0 && numbers[i]!= numbers[i-1]) { cout << numbers[i] << '\t' << counter << endl; counter = 0; } else ++counter; } } Edit: Program is getting stuck. Looking for an infinite loop right now.
How about using a map, where the key is the number you're tracking and the value is the number of occurrences? If you must use a vector, you've already got it sorted. So just keep track of the number you previously saw. If it is the same as the current number, increment the counter. Every time the number changes: print out the current number and the count, reset the count, set the last_seen number to the new number.
1,204,452
1,204,554
Passing D3DFMT_UNKNOWN into IDirect3DDevice9::CreateTexture()
I'm kind of wondering about this, if you create a texture in memory in DirectX with the CreateTexture function: HRESULT CreateTexture( UINT Width, UINT Height, UINT Levels, DWORD Usage, D3DFORMAT Format, D3DPOOL Pool, IDirect3DTexture9** ppTexture, HANDLE* pSharedHandle ); ...and pass in D3DFMT_UNKNOWN format what is supposed to happen exactly? If I try to get the surface of the first or second level will it cause an error? Can it fail? Will the graphics device just choose a random format of its choosing? Could this cause problems between different graphics card models/brands?
I just tried it out, and it mostly does not fail. When Usage is set to D3DUSAGE_RENDERTARGET or D3DUSAGE_DYNAMIC, the format consistently came out as D3DFMT_A8R8G8B8, no matter what I did to the back buffer format or other settings. I don't know if that has to do with my graphics card or not. My guess is that specifying unknown means, "pick for me", and that the 32-bit format is easiest for my card. When the usage was D3DUSAGE_DEPTHSTENCIL, it failed consistently. So my best conclusion is that specifying D3DFMT_UNKNOWN as the format gives DirectX the choice of what it should be, or perhaps it always just defaults to D3DFMT_A8R8G8B8. Sadly, I can't confirm any of this in any documentation anywhere. :|
1,204,553
1,204,576
Are there any good libraries for solving cubic splines in C++?
I'm looking for a good C++ library to give me functions to solve for large cubic splines (on the order of 1000 points) anyone know one?
Try the Cubic B-Spline library: https://github.com/NCAR/bspline and ALGLIB: http://www.alglib.net/interpolation/spline3.php
1,204,693
1,204,806
Is C++0x collapsing under the weight of new features and the standardization process?
From Dr. Dobbs: Concepts were to have been the central new feature in C++0x Even after cutting "concepts," the next C++ standard may be delayed. Sadly, there will be no C++0x (unless you count the minor corrections in C++03). We must wait for C++1x, and hope that 'x' will be a low digit. There is hope because C++1x is now feature complete (excepting the possibility of some national standards bodies effectively insisting on some feature present in the formal proposal for the standard). "All" that is left is the massive work of resolving outstanding technical issues and comments. I was on the bleeding edge of MT- and MP-safe C++ programming circa 1997 - 2000. We had to do many things ourselves. It's a bit shocking that the standard has not addressed concurrency in the 9 years since. So what's the big deal?
Stroustrup was one of the voters who finally removed Concepts. I don't see C++ collapsing; instead I see the C++ committee doing its job. Half-baked features are not the solution for a robust language like C++. A look at what is going to be in C++0x tells you the opposite of what you are saying. Finally, I don't mind waiting forever to get something good, instead of getting something that is only good for a while :)
1,204,697
1,204,739
Reading a UTF-8 Unicode file through non-unicode code
I have to read a text file which is Unicode with UTF-8 encoding and have to write this data to another text file. The file has tab-separated data in lines. My reading code is C++ code without unicode support. What I am doing is reading the file line-by-line in a string/char* and putting that string as-is to the destination file. I can't change the code so code-change suggestions are not welcome. What I want to know is that while reading line-by-line can I encounter a NULL terminating character ('\0') within a line since it is unicode and one character can span multiple bytes. My thinking was that it is quite possible that a NULL terminating character could be encountered within a line. Your thoughts?
UTF-8 uses 1 byte for all ASCII characters, which have the same code values as in the standard ASCII encoding, and up to 4 bytes for other characters. The upper bits of each byte are reserved as control bits. For code points using more than 1 byte, the control bits are set. Thus there shall not be a 0 byte in your UTF-8 file other than an actual NUL character. Check Wikipedia for UTF-8.
1,204,935
1,225,070
Problem with IDropTarget when using with a VCL Form
I have a VCL gui developed in Codegear. I have created a DropTarget for the mainform and the DropTarget object implements the IDropTarget interface which allows me to drag and drop files from explorer. Now because I only want some of the child components to be drop targets (not the whole form), I only have the DragEnter method return S_OK when the POINTL coordinates are within the bounds of the component. However, if I drag the item slowly into the bounds of the form but not the component, DragEnter returns E_NOINTERFACE, therefore not allowing a drop. If I continue to drag into the dropzone, DragEnter won't fire, I understand why it isn't firing. So my question is how can I manually fire the DragEnter event?
Sounds like you are ignoring that IDropTarget has a DragOver() method that you need to use in addition to DragEnter(). If DragEnter() does not begin with coordinates that you allow, then you have to return S_OK with the pdwEffect parameter set to DROPEFFECT_NONE, and then let DragOver() continue doing its own coordinate checking afterwards. In addition, since you only want to drag onto specific control, you should be calling RegisterDragDrop() for each of those individual controls (assuming they are TWinControl descendants), not for the TForm itself.
1,204,975
1,204,984
Does the compiler decide when to inline my functions (in C++)?
I understand you can use the inline keyword or just put a method in a class declaration ala short ctor or a getter method, but does the compiler make the final decision on when to inline my methods? For instance: inline void Foo::vLongBar() { //several function calls and lines of code } Will the compiler ignore my inline declaration if it thinks it will make my code inefficient? As a side issue, if I have a getter method declared outside my class like this: void Foo::bar() { std::cout << "baz"; } Will the compiler inline this under the covers?
Whether or not a function is inlined is, at the end of the day, entirely up to the compiler. Typically, the more complex a function is in terms of flow, the less likely the compiler is to inline it, and some functions, such as recursive ones, simply cannot be inlined. The major reason for not inlining a function is that it would greatly increase the overall size of the code, preventing it from being held in the processor's cache. This would in fact be a pessimisation, rather than an optimisation. As to letting the programmer decide to shoot himself in the foot, or elsewhere, you can inline the function yourself - write the code that would have gone in the function at what would have been the function's call site.
1,204,983
1,204,992
is it possible to store output of system call in windows?
E.g. I want to store the output of system("dir");
You can either use redirection to a file (system( "dir > file" )), read that file and delete it or go the unnamed pipes way as in Unix - call CreatePipe() to create a pipe and attach it as the input/output stream in PROCESS_INFORMATION structure and pass that structure into CreateProcess().
1,205,419
1,206,241
Writing a QNetworkReply to a file
I'm downloading a file using QNetworkAccessManager::get but unlike QHttp::get there's no built-in way to directly write the response to a different QIODevice. The easiest way would be to do something like this: QIODevice* device; QNetworkReply* reply = manager.get(url); connect(reply, SIGNAL(readyRead()), this, SLOT(newData())); and then in newData slot: device->write(reply->readAll()); But I'm not sure if this is the right way, maybe I missed something.
That looks correct. I would use the lower-level forms of read() and write(), not the QByteArray ones, which do not properly support error handling, but other than that, it looks fine. Are you having problems with it?
1,205,516
1,205,542
MS VC++ 6 : Why return !false rather than true?
Looking at some code I noticed that another dev had changed every instance of true to !false. Why would you do that? thx
There is no good reason to write !false instead of true. Bad reasons could include obfuscation (making the code harder to read), personal preferences, badly considered global search-and-replace, and shenanigans converting boolean values to integers. It's possible that some confusion has been caused by the TRUE and FALSE definitions in Win32, which are not of bool type but ints, and which may trigger warnings when used in boolean statements. Mainly, anything non-zero is "true", but if you want to make sure that "true" is always one when using integers instead of booleans, you sometimes see shenanigans like this. It's still not a good reason ;-)
1,205,753
1,209,811
which embedded web server to use for my app GUI
I'm writing an application in c++ and I was thinking to use an embedded simple web server that will be my gui, so i could set up my application port on localhost. What such web server would you recommend to use in c++/c? Thanks
If you are using Boost then rolling your own with boost::asio is simple. I assume by embedded you mean a built-in webserver, not that you are running on some tiny embedded hardware. If you want something simpler, look at Mongoose; also see https://stackoverflow.com/questions/738273/open-source-c-c-embedded-web-server
1,206,540
1,354,689
WebBrowser control: Detect navigation failure
I am hosting a webbrowser control that usually loads an external document, then makes some modifications using the HTML DOM. We also embed custom application links using a fake protocol, such as "Close This", that are caught and handled in BeforeNavigate2. When the link target is misspelled (say, "spp:CloseWindow"), BeforeNavigate will not trigger the custom handling. The browser control does not show a navigation error, but remains in READYSTATE_INTERACTIVE and doesn't fire a NavigateComplete or DocumentComplete. My problem: most operations (e.g. retrieving or updating the contents) are delayed and wait for the readystate to reach READYSTATE_COMPLETE. After such an invalid link is clicked, the browser doesn't get updated anymore - a state I'd like to avoid. How can I do that? Can I detect in "DownloadComplete" that navigation failed? (So I could relax the test to "READYSTATE_COMPLETE, or READYSTATE_INTERACTIVE and the last DownloadComplete was broken") Can I "reset" the browser control to READYSTATE_COMPLETE (probably not)? Could I detect the pseudo-protocols actually supported by the browser? (In hindsight, using an xxxx: prefix wasn't such a good idea, but changing that now is a bit of a problem.)
Internet Explorer and Windows have an extensible list of available protocols implemented in UrlMon.dll, I believe. See here for a bit about IE architecture. The reason you cannot detect the bad protocol in BeforeNavigate is that the protocol is unknown, so no real navigation is happening. The browser decides to show an error page instead. Error page navigation evidently does not raise all the normal events. However, there is a way to detect when navigation has gone into the weeds. If you hook up to the web browser's DocumentCompleted event, you can scan for particular URLs associated with errors, or more generally, anything with a URL that starts with res://ieframe.dll. Examples: res://ieframe.dll/unknownprotocol.htm#spp:CloseWindow res://ieframe.dll/dnserrordiagoff_webOC.htm#http://192... A cleaner way is to hook into the NavigateError event of the DWebBrowserEvents2 interface.
1,206,690
1,207,161
how to print the unicode characters in hexadecimal codes in c++
I am reading strings of data, which may or may not contain Unicode characters, from an Oracle database into a C++ program. Is there any way to check whether a string extracted from the database contains Unicode characters (UTF-8)? If any Unicode characters are present, they should be converted to hexadecimal format and displayed.
There are two aspects to this question. Distinguish UTF-8-encoded characters from ordinary ASCII characters. UTF-8 encodes any code point higher than 127 as a series of two or more bytes. Values at 127 and lower remain untouched. The resultant bytes from the encoding are also higher than 127, so it is sufficient to check a byte's high bit to see whether it qualifies. Display the encoded characters in hexadecimal. C++ has std::hex to tell streams to format numeric values in hexadecimal. You can use std::showbase to make the output look pretty. A char isn't treated as numeric, though; streams will just print the character. You'll have to force the value to another numeric type, such as int. Beware of sign-extension, though. Here's some code to demonstrate: #include <iostream> void print_characters(char const* s) { std::cout << std::showbase << std::hex; for (char const* pc = s; *pc; ++pc) { if (*pc & 0x80) std::cout << (*pc & 0xff); else std::cout << *pc; std::cout << ' '; } std::cout << std::endl; } You could call it like this: int main() { char const* test = "ab\xef\xbb\xbfhu"; print_characters(test); return 0; } Output on Solaris 10 with Sun C++ 5.8: $ ./a.out a b 0xef 0xbb 0xbf h u The code detects UTF-8-encoded characters, but it makes no effort to decode them; you didn't mention needing to do that. I used *pc & 0xff to convert the expression to an integral type and to mask out the sign-extended bits. Without that, the output on my computer was 0xffffffbb, for instance.
1,206,829
1,206,845
How to read code without any struggle
I am new to professional development; I have only 5 months of professional development experience. Before that I studied on my own and at university. So I was looking over questions and found one here about code quality, and it raised a related question of my own. How do I improve my code understanding/reading skills? Will that also improve the quality of the code I write? Is there a better naming notation than Hungarian notation? And are there any really good books on C++ design patterns (or does the language not matter)? Thank you in advance for answering these questions and helping me improve :) P.S. - I forgot to mention that I develop in C++ and C#.
There is only one way I've found to get better at reading other people's code, and that is to read other people's code; when you find a method or language construct you don't understand, look it up and play with it until you understand what is going on. Hungarian notation is terrible; very few people use it today, and it's more of an in-joke among programmers. In fact the name Hungarian notation is a joke itself: "The term Hungarian notation is memorable for many people because the strings of unpronounceable consonants vaguely resemble the consonant-rich orthography of some Eastern European languages." From How To Write Unmaintainable Code: "Hungarian Notation is the tactical nuclear weapon of source code obfuscation techniques; use it! Due to the sheer volume of source code contaminated by this idiom nothing can kill a maintenance engineer faster than a well planned Hungarian Notation attack." And the ever-popular Linus has a few words to say on the matter: "Encoding the type of a function into the name (so-called Hungarian notation) is brain damaged—the compiler knows the types anyway and can check those, and it only confuses the programmer." - Linus Torvalds EDIT: Taken from a comment by Tobias Langner: "For the differences between Apps Hungarian and Systems Hungarian see Joel on Software". Joel on Software also has tips on how to read other people's code, called Reading Code is Like Reading the Talmud.
1,206,854
1,206,896
Returning multiple auto_ptrs from a function
I have a function that allocates two variables on the heap and returns them to the caller. Something like this: void Create1(Obj** obj1, Obj** obj2) { *obj1 = new Obj; *obj2 = new Obj; } Usually, in similar cases, when I have a function with one variable I use the "source" trick with auto_ptr: auto_ptr<Obj> Create2() { return new Obj; } I would like to rewrite Create1 using auto_ptr but not sure how to do it. As far as I understand I cannot return auto_ptr by reference, am I right? So is it possible at all?
You can assign to a std::auto_ptr by calling its reset method: void f( std::auto_ptr<Obj>& pObj1, std::auto_ptr<Obj>& pObj2 ) { pObj1.reset( new Obj ); pObj2.reset( new Obj ); } The reset call will properly delete whatever the auto_ptr was pointing to before.
1,206,871
1,207,224
C++ MVC Model - How should it be implemented?
I'm a little confused as to how the model should 'work' in my basic C++ implementation, or rather how the data from say the database backend should be encapsulated/worked with. My thoughts at the moment are for a model with for example a static findById() method, which would return an instance of that same model, which would then represent a single 'row' of data (permitting manipulation etc etc) might be a suitable approach. In terms then of creating a new row in the database, I suppose a new instance of the model would be created, populated with data, and effectively a save() method called? Hopefully I'm on the right lines, but am just a little confused over the implementation/design here.
So, I think you are asking what kind of interface would be appropriate to "translate" between a relational database and an object oriented application, particularly in the context of an MVC application written in C++. A common approach is called object-relational mapping, or ORM. I'm only familiar with how Ruby on Rails implements ORM, but if you carry it over to C++ it looks like this: A database table maps to a class Operations on a table (such as queries) map to static member functions of the corresponding class Rows in a table correspond to instances of the corresponding class Fields in a table correspond to member variables of the corresponding class There are probably C++ libraries to do the ORM mapping for you. I'm not familiar with any myself since I've never had to do this in C++. Edit: This question asks about ORM libraries for C++.
1,206,879
1,206,912
How to map a resource file in Qt?
Is it possible to map a resource file in Qt? For example: QFile file(resource_name); file.open(QIODevice::ReadOnly); uchar* ptr = file.map(0, file.size()); When I try this, ptr == 0, indicating an error. It works fine if I try to map a regular file. I am running Qt on linux, which supports QFile::Map.
Yes, it is possible. There is one thing to keep in mind though. By default the qt resource compiler rcc compresses the resources. The file.size() call will return the actual, un-compressed size of the original file. However, the embedded resource is compressed and is most likely a different size. The file.map(0, file.size()) returns an error since the size passed to map() is larger than the resource being mapped. Even if you pass the correct size to map(), or a smaller size, the memory will contain the compressed data, not the un-compressed data. The solution is to not compress the embedded resource. This can be accomplished by adding: QMAKE_RESOURCE_FLAGS += -no-compress to your qt project file. See here for explanation of QMAKE_RESOURCE_FLAGS.
1,207,106
1,208,083
Calling base class definition of virtual member function with function pointer
I want to call the base class implementation of a virtual function using a member function pointer. class Base { public: virtual void func() { cout << "base" << endl; } }; class Derived: public Base { public: void func() { cout << "derived" << endl; } void callFunc() { void (Base::*fp)() = &Base::func; (this->*fp)(); // Derived::func will be called. // In my application I store the pointer for later use, // so I can't simply do Base::func(). } }; In the code above the derived class implementation of func will be called from callFunc. Is there a way I can save a member function pointer that points to Base::func, or will I have to use using in some way? In my actual application I use boost::bind to create a boost::function object in callFunc which I later use to call func from another part of my program. So if boost::bind or boost::function have some way of getting around this problem that would also help.
When you call a virtual method via a reference or a pointer you will always activate the virtual call mechanism that finds the most derived type. Your best bet is to add an alternative function that is not virtual.
1,207,265
1,207,310
In qt 4.5, is it possible to have resources in a statically linked plugin?
I have a custom QT plugin module that has embedded resources. I want to statically link this plugin with an application: LIBS += -lstatic_plugin_with_resources In the application I am using the Q_IMPORT_PLUGIN() macro, which allows the application to use the plugin; however the plugin can not access its embedded resources. Using the plugin as a shared library does work.
It is possible. In the application you need to explicitly initialize the resources that are contained in the static plugin. This is accomplished by calling the Q_INIT_RESOURCE(resource_base_name), where resource_base_name is the base name of the .qrc file that specifies the resources. This should probably be called in main() or at application startup. Optionally you can call Q_CLEANUP_RESOURCE() if the plugin is no longer being used. See the last section of the QT 4.5 resource doc. Also see the documentation for Q_INIT_RESOURCE. This worked for me on the linux version of QT 4.5.
1,207,456
1,207,484
Am I crazy to recreate a tiny garbage collection system inside my functions?
I have some (C++) functions each containing several calls creating similar arrays of the same basic type on the heap. At various points in these functions, I may need to throw an exception. Keeping track of which arrays have been deleted is a pain, and quite error prone, so I was thinking about just adding the array pointers to a Set<ArrType*>, of which I can just delete every item when I catch an exception, like this: try { set<ArrType*> sHeap; ArrType* myArr = new ArrType[5]; sHeap.Add(myArr); someExternalRoutine(myArr); ... } catch(CString s) { DeleteAllPointersInMyHeap(sHeap); throw(s); } It feels a bit like adding epicycles, but I can't get around the fact that any one of several external calls may throw an exception, and I need to definitely delete all the pointers allocated up to that point. Is this just foolishness? Should I just add smaller try-catch blocks around the external calls? I'd still end up with little lists of delete A; delete B; delete D; after each one...
You don't have to rely on garbage collection. You have std::auto_ptr that provides pointer like syntax and wraps a dynamically allocated object. When destroyed, it automatically destroys the object it points to. You could implement something similar for arrays.
1,207,478
1,207,517
Does it make sense to implement iterators for containers which has no obvious end - e.g. trees?
I'm writing a binary search tree template for two reasons: learning C++ and learning the most common algorithms and data structures. So here is the question: since I want to implement iterators, it seems to me that there is no strict definition of where the tree ends. What are your suggestions? How do I do this?
For trees, there are standards for traversing the tree, i.e. enumerating the nodes: Preorder traversal, inorder traversal, and postorder traversal. Rather than describe all these here, I'll redirect you to http://en.wikipedia.org/wiki/Tree_traversal. The concepts are mostly applied to binary trees, but you can extend the idea to arbitrary trees by adding more cases: Effectively, handle a node then recurse, recurse then handle a node, handle all children then recurse into each...etc. There's no canonical categorization of this approach that I'm aware of.
1,207,762
1,207,851
What is c printf %f default precision?
I'm curious: If you do a printf("%f", number); what is the precision of the statement? I.e. How many decimal places will show up? Is this compiler dependent?
The ANSI C standard, in section 7.19.6.1, says this about the f format specifier: If the precision is missing, 6 digits are given
1,207,906
1,207,933
Why is it illegal/immoral to reseat a reference?
Possible Duplicate: Why are references not reseatable in C++ I am trying to more or less swap two reference variables (as practice, I could have swapped the actual variables). I tried doing this by making a temporary variable and making one of the references equal the other, but this got shot down by the compiler. Here is an example: void Foo() { //code int& ref1 = a; int& ref2 = b; int temp; temp = ref1; ref1 = ref2; ref2 = temp; //or, better yet std::swap(ref1, ref2); } I got an error, and looked on the faq lite. It details that they cannot be reseated, but does not explain why. Why? Here is a link to the Faq Lite for reference (<---, get it?).
Because there is no syntax to do it: int x = 0; int y = 1; int & r = x; Now if I say: r = y; I assign the value of y to x. If I wanted to reseat I would need some special syntax: r @= y; // maybe? As the main reason for using references is as parameters and return types of functions, where this is not an issue, it didn't seem to C++'s designers that this was a path worth going down.
1,208,028
1,208,062
Significance of a .inl file in C++
What are the advantages of having declarations in a .inl file? When would I need to use the same?
.inl files are never mandatory and have no special significance to the compiler. It's just a way of structuring your code that provides a hint to the humans that might read it. I use .inl files in two cases: For definitions of inline functions. For definitions of function templates. In both cases, I put the declarations of the functions in a header file, which is included by other files, then I #include the .inl file at the bottom of the header file. I like it because it separates the interface from the implementation and makes the header file a little easier to read. If you care about the implementation details, you can open the .inl file and read it. If you don't, you don't have to.
1,208,153
1,208,207
C# generics compared to C++ templates
Possible Duplicate: What are the differences between Generics in C# and Java… and Templates in C++? What are the differences between C# generics compared to C++ templates? I understand that they do not solve exactly the same problem, so what are the pros and cons of both?
You can consider C++ templates to be an interpreted, functional programming language disguised as a generics system. If this doesn't scare you, it should :) C# generics are very restricted; you can parameterize a class on a type or types, and use those types in methods. So, to take an example from MSDN, you could do: public class Stack<T> { T[] m_Items; public void Push(T item) {...} public T Pop() {...} } And now you can declare a Stack<int> or Stack<SomeObject> and it'll store objects of that type, safely (ie, no worried about putting SomeOtherObject in by mistake). Internally, the .NET runtime will specialize it into variants for fundamental types like int, and a variant for object types. This allows the representation for Stack<byte> to be much smaller than that of Stack<SomeObject>, for example. C++ templates allow a similar use: template<typename T> class Stack { T *m_Items; public void Push(const T &item) {...} public T Pop() {...} }; This looks similar at first glance, but there are a few important differences. First, instead of one variant for each fundamental type and one for all object types, there is one variant for each type it's instantiated against. That can be a lot of types! The next major difference is (on most C++ compilers) it will be compiled in each translation unit it's used in. That can slow down compiles a lot. Another interesting attribute to C++'s templates is they can by applied to things other than classes - and when they are, their arguments can be automatically detected. For example: template<typename T> T min(const T &a, const T &b) { return a > b ? b : a; } The type T will be automatically determined by the context the function is used in. These attributes can be used to good ends, at the expense of your sanity. Because a C++ template is recompiled for each type it's used against, and the implementation of a template is always available to the compiler, C++ can do very aggressive inlining on templates. 
Add to that the automatic detection of template values in functions, and you can make anonymous pseudo-functions in C++, using boost::lambda. Thus, an expression like: _1 + _2 + _3 Produces an object with a seriously scary type, which has an operator() which adds up its arguments. There are plenty of other dark corners of the C++ template system - it's an extremely powerful tool, but can be painful to think about, and sometimes hard to use - particularly when it gives you a twenty-page long error message. The C# system is much simpler - less powerful, but easier to understand and harder to abuse.
1,208,540
1,208,690
How can I get a printers device context?
I'm on Windows and trying to print an Enhanced Metafile (EMF) using PlayEnhMetaFile(). I'm currently displaying it using a device context for a window on the screen, but now I want to send it to a printer. How can I get a device context for the printer and pass it into this function properly?
The easiest way is to construct the device context from PRINTDLG.hDevMode and PRINTDLG.hDevNames after calling PrintDlg when using the win32 API, or by calling CPrintDialog::GetPrinterDC if you're using MFC. If using MFC: CPrintDialog dlgPrint(FALSE, PD_USEDEVMODECOPIES); HDC hPrinterDC = dlgPrint.GetPrinterDC(); or win32 API: HDC hPrinterDC = NULL; PRINTDLG dlgPrint = {0}; dlgPrint.lStructSize = sizeof(dlgPrint); if (PrintDlg(&dlgPrint) && dlgPrint.hDevNames != NULL) { DEVNAMES* pDevNames = (DEVNAMES*)GlobalLock(dlgPrint.hDevNames); DEVMODE* pDevMode = NULL; if (dlgPrint.hDevMode != NULL) pDevMode = (DEVMODE*)GlobalLock(dlgPrint.hDevMode); hPrinterDC = CreateDC((LPCTSTR)pDevNames + pDevNames->wDriverOffset, (LPCTSTR)pDevNames + pDevNames->wDeviceOffset, (LPCTSTR)pDevNames + pDevNames->wOutputOffset, pDevMode); GlobalUnlock(dlgPrint.hDevNames); if (dlgPrint.hDevMode != NULL) GlobalUnlock(dlgPrint.hDevMode); }
1,208,644
1,208,667
How can I get the size of a memory block allocated using malloc()?
Possible Duplicates: How can I get the size of an array from a pointer in C? Is there any way to determine the size of a C++ array programmatically? And if not, why? I get a pointer to a chunk of allocated memory out of a C style function. Now, it would be really interesting for debugging purposes to know how big the allocated memory block that this pointer points is. Is there anything more elegant than provoking an exception by blindly running over its boundaries? Thanks in advance, Andreas EDIT: I use VC++2005 on Windows, and GCC 4.3 on Linux EDIT2: I have _msize under VC++2005 Unfortunately it results in an exception in debug mode.... EDIT3: Well. I have tried the way I described above with the exception, and it works. At least while I am debugging and ensuring that immediately after the call to the library exits I run over the buffer boundaries. Works like a charm. It just isn't elegant and in no way usable in production code.
It's not standard, but if your runtime library has an _msize() function, that will give you the size. A common solution is to wrap malloc with your own function that logs each request along with the size and resulting memory range; in the release build you can switch back to the 'real' malloc.
1,208,766
1,208,858
quicksort in C++ is slow
I have 9 values in the form of a matrix and need to compute the median of these values as part of a simulation process. I sort them with qsort() in C++, which makes the process run slowly (as this step iterates several times). Is there a better algorithm that I could use?
Sorting to get a median is very inefficient. You could use STL nth_element instead:

#include <algorithm>

// Assuming you keep the elements in a vector v of size len
std::nth_element(v.begin(), v.begin() + len/2, v.end());
median = v[len/2];

// ... or, if the elements are in a simple array v[len], then
std::nth_element(v, v + len/2, v + len);
median = v[len/2];

Note: nth_element will modify the vector/array v. Make a copy first if you need to preserve the original.
1,208,853
1,208,869
How to call C++ static method
Is it possible to return an object from a static method in C++ like there is in Java? I am doing this:

class MyMath {
public:
    static MyObject calcSomething(void);
private:
};

And I want to do this:

int main() {
    MyObject o = MyMath.calcSomething(); // error happens here
}

There are only static methods in the MyMath class, so there's no point in instantiating it. But I get this compile error:

MyMath.cpp:69: error: expected primary-expression before '.' token

What am I doing wrong? Do I have to instantiate MyMath? I would rather not, if it is possible.
Use :: instead of .

MyObject o = MyMath::calcSomething();

When you call a static method without an object of the class, you should use the :: notation. You may also call a static method via a class object or a pointer to one; in that case you use the usual . or -> notation:

MyObject obj;
MyObject* p = new MyObject();
MyObject::calcSomething();
obj.calcSomething();
p->calcSomething();
1,208,961
1,209,000
Can an object instance null out the "this" pointer to itself safely?
class A {
public:
    NullIt() { this = NULL; }
    Foo() { NullIt(); }
};

A* a = new A;
a->Foo();
assert(a); // should assert here

Is there a way to achieve this effect, memory leaks aside?
No. The object knows nothing about the external references to it (in this case, "a"), so it can't change them. If you want the caller to forget your object, then you can do this:

class MyClass {
public:
    void Release(MyClass** ppObject) {
        assert(*ppObject == this); // Ensure the pointer passed in points at us
        *ppObject = NULL;          // Clear the caller's pointer
    }
};

MyClass* pA = new MyClass;
pA->Release(&pA);
assert(pA); // This assert will fire, as pA is now NULL

When you call Release, you pass in the pointer you hold to the object, and it NULLs that pointer out, so that after the call your pointer is NULL. (Release() can also do "delete this;" so that it destroys itself at the same time.)
1,208,963
1,208,976
Looking for a bitwise operator
I have a list of objects of the same class. The order of the list is not important. What I want to do is to determine (using bitwise operations) whether I should set some field with an incremental value or not. But the trick is that I need this operation to return false (must not set the field) only for the first element.

for (Obj obj : list) {
    if (obj.isZero() (op) some_flag) {
        // set field
    }
}

Here are some things I'm certain of: when called on the first element of the list, isZero() will return true. When called on the rest of the elements, it's uncertain. But if isZero() returns true for the second element, then it will return true for all the remaining elements ([3..last]).

Any way of doing this? I don't feel like keeping a counter and incrementing it, but if it's THE BEST PRACTICE to do so, then I'll do it.
I think you need a counter. There is no bitwise trick that can distinguish "first element" from "later element" by looking at the element alone, because the elements carry no positional information; the simplest reliable state is a counter or a first-iteration flag.