1,976,598
1,976,647
Template Return Types / Cast as function of Template
I'm working with some generated classes with broken polymorphism. For every generated class T, there are a handful of T_type_info, T_writer, T_reader classes which are only related to T conceptually. What I'm trying to do is something like this:

    template <class T> class Wrapper {
    public:
        template <class W> W topic_cast(BrokenBaseClassWriter* p);
        // other operations with the same problem ...
    };

    template <> class Wrapper<MyTopic> {
    public:
        template <> MyTopicWriter* topic_cast(BrokenBaseClassWriter* p) { ... }
    };

So that I can do things like:

    void Write(const Wrapper<T>& topic)
    {
        BrokenBaseClassWriter p = not_important;
        topic.topic_cast(p)->do_stuff();
    }

My T classes are generated from an IDL and are concepts that exist in application space. They don't derive from anything. In my example above, W is not really an independent parameter; it's "something not T that depends on T". I'm trying to keep all knowledge of T in the app, and all knowledge of T' (without knowing about T) in the backend. The compiler, however, says my topic_cast function is not a template function - I think because the template occurs in the return type, and it wouldn't be distinguishable from any other instantiations. I know that templates differing only by return type are not legal. Only in my case it really would be unique, because W is not an independent parameter. But arguing with the compiler is seldom helpful. Can I do this, or is there another way to do this "cast as function of template type"?
Could this not be achieved with a traits system?

    template <typename T> struct my_traits { };

    template <> struct my_traits<MyClass> {
        typedef MyWriter writer_type;
    };

    template <typename T> struct Wrapper {
        typename my_traits<T>::writer_type topic_cast();
    };
1,976,719
1,977,185
Learning C++ on Linux Mint (for a .NET developer)
My goal is to get started with the C++ programming language by doing a homework project on Linux Mint, learning some Linux and C++ at the same time. I intend to write a small desktop application that shows current network traffic (like DU Meter on Windows). I have the following questions: I noticed Mint ships an application called 'System Monitor' which also shows network history, with info and a graph of current download/upload data and speed. Is it a good idea to get started by looking at its code? How can I find that code in Mint and dig into it? Please help with some steps here if possible. What tools do I need for writing a C++ application for/in Linux Mint? Which GUI library should I use? (Like WinForms in C#, which offers user controls as part of the GDI library.) On Linux Mint, what do we have that offers user controls like windows/buttons/panels/etc.? Links to beginner-level tutorials would be helpful. I'm hoping NOT to completely re-invent the wheel here; I'd love to re-use a library that does the network-traffic part. Ideas? PS: I know this post reads like a 'wannabe', but I am really excited to kick-start some C++. I will rephrase this post with more precise questions; I'm hunting in the dark at this point, being a C# developer totally spoiled by Windows. Thanks in advance for any tips!
The Mint distribution is based on Ubuntu/Debian, so I assume that my Ubuntu approach also works on Mint. First you need some tools, libraries and headers:

    # install the standard toolchain (g++, make, etc.)
    sudo aptitude install build-essential

    # install the build dependencies for a desktop based networking tool
    sudo aptitude build-dep gnome-nettool

Optionally, because you mentioned the system monitor, it might be helpful to build gnome-system-monitor from source:

    # install the build dependencies for gnome-system-monitor
    sudo aptitude build-dep gnome-system-monitor

    # get the sources for the gnome-system-monitor
    mkdir example
    cd example
    apt-get source gnome-system-monitor

    # build the gnome-system-monitor
    # note: you might have a different version. But I'm sure you get the idea ;-)
    cd gnome-system-monitor-2.28.0
    sh configure
    make

Finally you need something to develop and debug with. A lot of Unix developers recommend emacs or vi(m), but my personal opinion is that you should start with a "modern" GUI-based IDE. Here's a collection of some commonly used IDEs:

    Eclipse with CDT
    NetBeans
    Code::Blocks
    Anjuta (was this used to develop the gnome-system-monitor?)
    CodeLite (which is my personal favorite)

See also: the discussion on SOF regarding "the best" C++ IDE for Linux.
1,976,850
1,976,921
3ds Max integration with C++, Cal3D: where to start?
Okay, I'm making a game using C++ (for the engine) and OpenGL. I've had lots of trouble using the Cal3D library for importing my 3ds Max models into my C++ project. As a matter of fact, I don't even know where to start: I can't find any decent guide, and the documentation is really poor. I've been searching and trying things for over a month, but so far I don't even understand the file structure it uses. I really need some help. Are there any other libraries? Any decent guide I can use? I'm stuck. Thanks a lot.
Rather than write your own exporter, consider using one of the built-in exporters for FBX, COLLADA, Crosswalk (.XSI), the Quake/Doom3 .MD3/.MD4 format, or even OBJ. It'll be much easier to parse the resulting file format on your end than to write and maintain a brand-new exporter.
1,976,867
1,977,094
Starting a program fails with error code 1
I made an application and a DLL, which work this way: I have to register the DLL. After registering it, if I right-click on an .exe file, the pop-up menu appears; I have inserted one line into this menu ("Start MyApp"), and if I click there, it should start MyApp. MyApp has one parameter, which is the full path of the selected .exe file. After starting MyApp with this path, it creates a process with CreateProcessWithLogonW(). The application reads the username, password and domain from an .ini file. My problem is that after MyApp starts, it always fails because it can't find the ini file. The error code is 1 ("Incorrect function"). If I start MyApp manually, it works fine. Does anyone have any idea why this is, and how I could fix this problem? Thanks in advance! kampi

Update1: Here is the code which reads from the ini file:

    int main( int argc, char *argv[] )
    {
        int i, slash = 0, j;
        char application[size];
        wchar_t wuser[65], wdomain[33], wpass[129];

        memset( user, 0, sizeof( user ) );
        memset( password, 0, sizeof( password ) );
        memset( domain, 0, sizeof( domain ) );

        file_exists( "RunAs.ini" );
        readfile( "RunAs.ini" );
        ....
    }

    void file_exists( const char * filename )
    {
        if (FILE * file = fopen(filename, "r")) {
            fclose(file);
        } else {
            printf("\nCan't find %s!\n", filename);
            getch();
            exit(1);
        }
    }//file_exists

    void readfile( char * filename )
    {
        FILE *inifile;
        char tmp[256], buf[256], what[128];
        int i, j;

        inifile = fopen( "RunAs.ini", "r" );
        while ( fgets(tmp, sizeof tmp, inifile) != NULL )
        {
            if ( tmp[ strlen(tmp) - 1 ] == '\n' ) {
                tmp[ strlen(tmp) - 1 ] = '\0';
            }//if
            memset( buf, 0, sizeof( buf ) );
            for ( i = 0; tmp[i] != '='; i++ ) {
                buf[i] = tmp[i];
            }
            buf[i] = '\0';
            i++;
            // memset ( what, 0, sizeof( what ) );
            SecureZeroMemory( what, sizeof(what) * 128 );
            for ( j = 0; i != strlen(tmp); i++ ) {
                what[j] = tmp[i];
                j++;
            }
            what[j] = '\0';
            upcase( buf );
            removespace( what );
            if ( strcmp( buf, "USERNAME" ) == 0 ) { strcpy( user, what ); }
            if ( strcmp( buf, "PASSWORD" ) == 0 ) { strcpy( password, what ); }
            if ( strcmp( buf, "DOMAIN" ) == 0 )   { strcpy( domain, what ); }
        }//while
        fclose(inifile);
    }//readfile
As others have said, your problem is here:

    file_exists( "RunAs.ini" );
    readfile( "RunAs.ini" );

Neither of the function calls provides a path. You're expecting the current working directory to be the folder where your application is located, but it doesn't have to be (in fact, you should never assume that it is). The context menu isn't setting the working directory first. Your safest bet is to retrieve the path to your folder using the path provided in argv[] (the 0th element is the fully qualified path and name of the application itself, and you can extract the path from that). You'll then have exact knowledge of where the file is located, and can append the name of the ini file to that path.
1,976,983
1,977,006
Why won't my C++ program link when my class has static members?
I have a little class called Stuff that I want to store things in. These things are a list of type int. Throughout my code, in whatever classes I use, I want to be able to access these things inside the Stuff class.

Main.cpp:

    #include "Stuff.h"

    int main()
    {
        Stuff::things.push_back(123);
        return 0;
    }

Stuff.h:

    #include <list>

    class Stuff {
    public:
        static list<int> things;
    };

but I get some build errors with this code:

    error LNK2001: unresolved external symbol "public: static class std::list<int,class std::allocator<int> > Stuff::things" (?things@Stuff@@2V?$list@HV?$allocator@H@std@@@std@@A) Main.obj CSandbox
    fatal error LNK1120: 1 unresolved externals C:\Stuff\Projects\CSandbox\Debug\CSandbox.exe CSandbox

I am a C# guy, and I am trying to learn C++ for a side project. I think I don't understand how C++ treats static members, so please explain what I have got wrong here.
Mentioning a static member in a class declaration is a declaration only. You must include one definition of the static member for the linker to hook everything up properly. Normally you would include something like the following in a Stuff.cpp file:

    #include "Stuff.h"

    list<int> Stuff::things;

Be sure to include Stuff.cpp in your program along with Main.cpp.
1,977,174
1,977,236
C++ library to interface .dmg files on Mac
I want to write a C++ program that spawns off a thread to execute a .dmg file and monitor its completion (success/fail) on Snow Leopard. Would this be as trivial as fork/exec a shell script on Linux? Would I need a 3rd party C++ library to interface .dmg files?
A .dmg file on OS X is a container for an image of a volume or single file system, so it's not clear what you mean by "execute" a .dmg file. If you mean mount the file systems contained in the .dmg file, the easiest way to do that is with the hdiutil command:

    hdiutil attach /path/to/file.dmg

If you need to parse the information about the file systems mounted, use the -plist argument, which will return that information in OS X plist format via stdout.
1,977,212
1,977,974
Asynchronous request using wininet
I have already used wininet to send some synchronous HTTP requests. Now, I want to go one step further and want to request some content asynchronously. The goal is to get something "reverse proxy"-like. I send an HTTP request which gets answered delayed - as soon as someone wants to contact me. My thread should continue as if there was nothing in the meanwhile, and a callback should be called in this thread as soon as the response arrives. Note that I don't want a second thread which handles the reply (if it is necessary, it should only provide some mechanism which interrupts the main thread to invoke the callback there)! Update: Maybe, the best way to describe what I want is a behaviour like in JavaScript where you have only one thread but can send AJAX requests which then result in a callback being invoked in this main thread. Since I want to understand how it works, I don't want library solutions. Does anybody know some good tutorial which explains me how to achieve my wanted behavior?
My thread should continue as if there was nothing in the meanwhile, and a callback should be called in this thread as soon as the response arrives. What you're asking for here is basically COME FROM (as opposed to GO TO). This is a mythical instruction which doesn't really exist. The only way you can get your code called is to either poll in the issuing thread, or to have a separate thread which is performing the synchronous IO and then executing the callback (in that thread, or in yet another spawned thread) with the results. When I was working in C++ with sockets I set up a dedicated thread to iterate over all the open sockets, poll for data which would be available without blocking, take the data and stuff it in a buffer, sending the buffer to a callback on a given circumstance (EOL, EOF, that sort of thing).
1,977,339
1,977,633
C++ range/xrange equivalent in STL or boost?
Is there a C++ equivalent of Python's xrange generator in either the STL or Boost? xrange basically generates an incremented number with each call to the ++ operator. The constructor is like this:

    xrange(first, last, increment)

I was hoping to do something like this using Boost's foreach:

    foreach(int i, xrange(N))

I am aware of the for loop; in my opinion it is too much boilerplate.

My reasons: my main reason for wanting this is that I use speech-to-text software, and programming loops the usual way is difficult, even with code completion. It is much more efficient to have pronounceable constructs. Many loops start at zero and increment by one, which is the default for range. I find the Python construct more intuitive:

    for(int i = 0; i < N; ++i)
    foreach(int i, range(N))

Functions which need to take a range as an argument:

    Function(int start, int end, int inc);
    Function(xrange r);

I understand the differences between the languages; however, if a particular construct in Python is very useful for me and can be implemented efficiently in C++, I do not see a reason not to use it. The foreach construct is foreign to C++ as well, yet people use it. I put my implementation at the bottom of the page, along with example usage.

In my domain I work with multidimensional arrays, often rank-4 tensors, so I often end up with four nested loops with different ranges/increments to compute normalization, indexes, etc. Those are not necessarily performance-critical loops, and I am more concerned with correctness, readability and the ability to modify. For example:

    int function(int ifirst, int ilast, int jfirst, int jlast, ...);

versus

    int function(range irange, range jrange, ...);

In the above, if different strides are needed, you have to pass more variables, modify loops, etc. Eventually you end up with a mass of integers and nearly identical loops. foreach and range solve my problem exactly.

Familiarity to the average C++ programmer is not high on my list of concerns; the problem domain is rather obscure, with a lot of meta-programming, SSE intrinsics and generated code.
Boost has counting_iterator as far as I know, which seems to allow only incrementing in steps of 1. For full xrange functionality you might need to implement a similar iterator yourself. All in all it could look like this (edit: added an iterator for the third overload of xrange, to play around with Boost's iterator facade):

    #include <iostream>
    #include <boost/iterator/counting_iterator.hpp>
    #include <boost/range/iterator_range.hpp>
    #include <boost/foreach.hpp>
    #include <boost/iterator/iterator_facade.hpp>
    #include <cassert>

    template <class T>
    boost::iterator_range<boost::counting_iterator<T> > xrange(T to)
    {
        //these assertions are somewhat problematic:
        //might produce warnings, if T is unsigned
        assert(T() <= to);
        return boost::make_iterator_range(boost::counting_iterator<T>(0),
                                          boost::counting_iterator<T>(to));
    }

    template <class T>
    boost::iterator_range<boost::counting_iterator<T> > xrange(T from, T to)
    {
        assert(from <= to);
        return boost::make_iterator_range(boost::counting_iterator<T>(from),
                                          boost::counting_iterator<T>(to));
    }

    //iterator that can do increments in steps (positive and negative)
    template <class T>
    class xrange_iterator:
        public boost::iterator_facade<xrange_iterator<T>, const T, std::forward_iterator_tag>
    {
        T value, incr;
    public:
        xrange_iterator(T value, T incr = T()): value(value), incr(incr) {}
    private:
        friend class boost::iterator_core_access;
        void increment() { value += incr; }
        bool equal(const xrange_iterator& other) const
        {
            //this is probably somewhat problematic, assuming that the "end iterator"
            //is always the right-hand value?
            return (incr >= 0 && value >= other.value) ||
                   (incr < 0 && value <= other.value);
        }
        const T& dereference() const { return value; }
    };

    template <class T>
    boost::iterator_range<xrange_iterator<T> > xrange(T from, T to, T increment)
    {
        assert((increment >= T() && from <= to) || (increment < T() && from >= to));
        return boost::make_iterator_range(xrange_iterator<T>(from, increment),
                                          xrange_iterator<T>(to));
    }

    int main()
    {
        BOOST_FOREACH(int i, xrange(10)) {
            std::cout << i << ' ';
        }
        BOOST_FOREACH(int i, xrange(10, 20)) {
            std::cout << i << ' ';
        }
        std::cout << '\n';
        BOOST_FOREACH(int i, xrange(0, 46, 5)) {
            std::cout << i << ' ';
        }
        BOOST_FOREACH(int i, xrange(10, 0, -1)) {
            std::cout << i << ' ';
        }
    }

As others are saying, I don't see this buying you much over a normal for loop.
1,977,486
1,977,681
Changing a value in an STL map in place
I understand that when we insert values into an STL map, a copy is made and stored. I have code that essentially does a find on the map and obtains an iterator. I then intend to use the iterator to change the value in the map. The results are not what I would expect, i.e. the value is not changed when accessed from another part of the program. I suspect it's because the change I am applying is to a copy of the value. The relevant code is as follows:

    ObjectMappingType::iterator it = objectMapping_.find(symbol);
    if (it == objectMapping_.end()) {
        throw std::invalid_argument("Unknown symbol: " + symbol);
    }
    get<3>(it->second) = value;

NOTE: I am actually trying to change a value inside a boost::tuple that is stored as the 'value' part of the map.
Hmm... both methods seem to work fine for me. Here's the entire example that I used:

    #include <iostream>
    #include <map>
    #include <string>
    #include <boost/tuple/tuple.hpp>

    typedef boost::tuple<int, std::string> value_type;
    typedef std::map<int, value_type> map_type;

    std::ostream& operator<<(std::ostream& os, value_type const& v)
    {
        os << " number " << boost::get<0>(v)
           << " string " << boost::get<1>(v);
        return os;
    }

    int main()
    {
        map_type m;
        m[0] = value_type(0, "zero");
        m[1] = value_type(0, "one");
        m[2] = value_type(0, "two");

        std::cout << "m[0] " << m[0] << "\n"
                  << "m[1] " << m[1] << "\n"
                  << "m[2] " << m[2] << "\n" << std::endl;

        boost::get<0>(m[1]) = 1;

        map_type::iterator iter = m.find(2);
        boost::get<0>(iter->second) = 2;

        std::cout << "m[0] " << m[0] << "\n"
                  << "m[1] " << m[1] << "\n"
                  << "m[2] " << m[2] << "\n" << std::endl;
        return 0;
    }

The output is exactly what I would have expected:

    lorien$ g++ -I/opt/include -gdwarf-2 foo.cpp
    lorien$ ./a.out
    m[0]  number 0 string zero
    m[1]  number 0 string one
    m[2]  number 0 string two

    m[0]  number 0 string zero
    m[1]  number 1 string one
    m[2]  number 2 string two

    lorien$
1,977,576
1,977,612
Efficiently finding multiple items in a container
I need to find a number of objects in a large container. The only way I can think of to do that seems to be to search the container for one item at a time in a loop; however, even with an efficient search with an average case of, say, log n (where n is the size of the container), this gives me m log n (where m is the number of items I'm looking for) for the entire operation. That seems highly suboptimal to me, and as it's something I am likely to need to do on a frequent basis, it's something I'd definitely like to improve if possible. Neither part has been implemented yet, so I'm open to suggestions on the format of the main container, the "list" of items I'm looking for, etc., as well as the actual search algorithm. The items are complex objects, but the search key is just a simple integer.
Hash tables have basically O(1) lookup. This gives you O(m) to look up m items; obviously you can't look up m items faster than O(m), because you need to get the results out.
1,977,737
1,977,866
OpenGL Rotations around World Origin when they should be around Local Origin
I'm implementing a simple camera system in OpenGL. I set up gluPerspective under the projection matrix and then use gluLookAt on the ModelView matrix. After this I have my main render loop, which checks for keyboard events and, if any of the arrow keys are pressed, modifies angular and forward speeds (I only rotate about the y axis and move along z, i.e. forwards). Then I move the view using the following code (deltaTime is the amount of time since the last frame was rendered, in seconds, in order to decouple movement from framerate):

    //place our camera
    newTime = RunTime();          //get the time since app start
    deltaTime = newTime - time;   //get the time since the last frame was rendered
    time = newTime;
    glRotatef(view.angularSpeed*deltaTime, 0, 1, 0);   //rotate
    glTranslatef(0, 0, view.forwardSpeed*deltaTime);   //move forwards

    //draw our vertices
    draw();

    //swap buffers
    Swap_Buffers();

Then the code loops around again. My draw algorithm begins with a glPushMatrix() and ends with a glPopMatrix(). Each call to glRotatef() and glTranslatef() pushes the view forwards by the forward speed in the direction of view. However, when I run the code, my object is drawn in the correct place, but when I move, the movement is done with the orientation of the world origin (0,0,0, facing along the Z axis) as opposed to the local orientation (where I'm pointing), and when I rotate, the rotation is done about (0,0,0) and not the position of the camera. I end up with this strange effect of my camera orbiting (0,0,0) as opposed to rotating on the spot. I do not call glLoadIdentity() anywhere inside the loop, and I am sure that the matrix mode is set to GL_MODELVIEW for the entire loop. Another odd effect: if I put a glLoadIdentity() call inside the draw() function (between the PushMatrix and PopMatrix calls), the screen just goes black, and no matter where I look I can't find the object I draw. Does anybody know what I've messed up to make this orbit (0,0,0) instead of rotating on the spot?
glRotatef() rotates the ModelView matrix around the world origin, so to rotate around some arbitrary point you need to translate your matrix so that point is at the origin, rotate, and then translate back to where you started. I think what you need is this:

    float x, y, z;   //point you want to rotate around
    glTranslatef(0, 0, view.forwardSpeed*deltaTime);   //move forwards
    glTranslatef(x, y, z);                             //translate to origin
    glRotatef(view.angularSpeed*deltaTime, 0, 1, 0);   //rotate
    glTranslatef(-x, -y, -z);                          //translate back

    //draw our vertices
    draw();

    //swap buffers
    Swap_Buffers();
1,977,742
1,977,885
Inter-thread communication. How to send a signal to another thread
In my application I have two threads: a "main thread", which is busy most of the time, and an "additional thread", which sends out some HTTP requests and blocks until it gets a response. However, the HTTP response can only be handled by the main thread, since it relies on the main thread's thread-local storage and on non-threadsafe functions. I'm looking for a way to tell the main thread when an HTTP response was received, along with the corresponding data. The main thread should be interrupted by the additional thread, process the HTTP response as soon as possible, and afterwards continue working from the point where it was interrupted. One way I can think of is that the additional thread suspends the main thread using SuspendThread, copies the TLS from the main thread using some inline assembler, executes the response-processing function itself, and resumes the main thread afterwards. Another way I've thought of is setting a breakpoint on some specific address in the second thread's callback routine, so that the main thread gets notified when the second thread's instruction pointer reaches that breakpoint and has therefore received the HTTP response. However, neither method seems nice at all; they hurt even to think about, and they don't look really reliable. What can I use to interrupt my main thread, telling it that it should be polite and process the HTTP response before doing anything else? Answers without dependencies on libraries are appreciated, but I would also accept a dependency if it provides a nice solution. A follow-up question (regarding the QueueUserAPC solution) was answered, explaining that there is no safe method to get push behaviour in my case.
This may be one of those times where one works themselves into a very specific idea without reconsidering the bigger picture. There is no singular mechanism by which a single thread can stop executing in its current context, go do something else, and resume execution at the exact line from which it broke away. If it were possible, it would defeat the purpose of having threads in the first place. As you already mentioned, without stepping back and reconsidering the overall architecture, the most elegant of your options seems to be using another thread to wait for an HTTP response, have it suspend the main thread in a safe spot, process the response on its own, then resume the main thread. In this scenario you might rethink whether thread-local storage still makes sense or if something a little higher in scope would be more suitable, as you could potentially waste a lot of cycles copying it every time you interrupt the main thread.
1,977,783
1,977,798
Link a member function directly to C method declared in a header
Can I link a member function like this in some way, redeclaring the method as a member and getting it to call the Mmsystem.h function, so that I don't have to wrap it?

    #include <windows.h>
    #include <Mmsystem.h>

    namespace SoundLib {
        public class CWave {
        public:
            // WaveIn call external
            UINT waveOutGetNumDevs(VOID);
        };
    }
No, you have to wrap it. Additionally, your code has some errors, such as external versus extern (though that was theoretical anyway) and public before your class.
1,977,917
1,979,921
How to convert DirectShow Filter to C++\C#?
We have a filter for DirectShow. It works and uses standard Windows DLLs. We want to convert that filter into some sort of program that doesn't rely on DirectShow: it should call the DLLs in the right order and do everything DirectShow does, but not depend on DirectShow in any way, only on the filter's DLLs. So... how do we convert a DirectShow filter to C++/C#?
A better solution is to use the filter within a single-purpose graph, in which you have a custom source feeding the filter from the app, and a custom sink receiving the output and delivering it to the app. There's an example of this on www.gdcl.co.uk. I know this isn't quite what you are asking for, but your dependencies on dshow are very limited, and it's hard to see an environment in which the filter binary works but dshow basics are not available. G
1,978,297
1,978,761
Qt, Signals without naming?
Is there any way to use signals without the MOC and without connecting via names? My one problem with Qt is that you have something like this:

    this->connect(this->SaveBtn, SIGNAL(click()), SLOT(SaveClicked()));

and there is no error detection to tell you that this is wrong, other than finding out that the button doesn't work, or searching through the documentation to discover that the signal doesn't exist. Also, it seems pointless and a waste of cycles to connect via names instead of classes.
There is error detection: the connect function returns false when it fails to connect, and a warning is output on standard error (or, on Windows, to the weird place that DebugView reads from). You can also make these warnings fatal errors by setting QT_FATAL_WARNINGS=1 in your environment. And it's not pointless to connect by name; for example, it means that connections can be established where the signal/slot names are generated at runtime.
1,978,709
1,978,859
Are memory leaks "undefined behavior" class problem in C++?
It turns out that many innocent-looking things are undefined behavior in C++. For example, once a non-null pointer has been delete'd, even printing out that pointer value is undefined behavior. Now, memory leaks are definitely bad. But what class of situation are they: defined behavior, undefined behavior, or some other class of behavior?
Memory leaks: there is no undefined behavior. It is perfectly legal to leak memory. Undefined behavior is actions the standard specifically does not want to define and leaves up to the implementation, so that implementations are flexible enough to perform certain types of optimizations without breaking the standard. Memory management is well defined: if you dynamically allocate memory and don't release it, the memory remains the property of the application, to manage as it sees fit. The fact that you have lost all references to that portion of memory is neither here nor there. Of course, if you continue to leak, then you will eventually run out of available memory and the application will start to throw bad_alloc exceptions. But that is another issue.
1,978,754
1,979,161
What's a good convex optimization library?
I am looking for a C++ convex optimization library; I am dealing with convex objective and constraint functions.
I am guessing your problem is non-linear. Where I work, we use SNOPT, Ipopt and another proprietary solver (not for sale). We have also tried, and heard good things about, Knitro. As long as your problem is convex, all these solvers work well. They all have their own API, but they all ask for the same information: values, and first and second derivatives.
1,978,883
1,988,213
How to use SDL with OGRE?
When I go to use OGRE with SDL (as described in this article), I seem to be having trouble with a second window that appears behind my main render window. Basically, the code I'm using is this:

    SDL_Init(SDL_INIT_VIDEO);
    SDL_Surface *screen = SDL_SetVideoMode(640, 480, 0, SDL_OPENGL);

    Ogre::Root *root = new Ogre::Root();
    root->restoreConfig();
    root->initialise(false);

    Ogre::NameValuePairList windowSettings;
    windowSettings["currentGLContext"] = Ogre::String("True");
    Ogre::RenderWindow *window =
        root->createRenderWindow("MainRenderWindow", 640, 480, false, &windowSettings);
    window->setVisible(true);

The question is, how do I get rid of the extra window? Just for posterity, I'm using OGRE 1.6.4, Mac OS X 10.6.2, and SDL 1.2.14.
I ended up figuring this out on my own. The problem ends up being that OGRE's Mac GL backend does not honor the currentGLContext option, so the best solution is to change to SDL 1.3 (directly from Subversion, as of time of writing) and use the SDL_CreateWindowFrom call to start getting events from a window created by OGRE. It should also be noted that the OGRE window needs to have the macAPI set to cocoa, or else SDL won't recognize the window handle.
1,978,967
1,979,720
How to get Python code to work with C++ App?
I have the following Python 3 file:

    import base64
    import xxx

    str = xxx.GetString()
    str2 = base64.b64encode(str.encode())
    str3 = str2.decode()
    print(str3)

xxx is a module exported by some C++ code. This script does not work, because calling Py_InitModule on this script returns NULL. The weird thing is, if I create a stub xxx.py in the same directory:

    def GetString():
        return "test"

and run the original script under python.exe, it works and outputs the base64 string. My question is: why doesn't it like the return value of xxx.GetString? In the C++ code, it returns a string object. I hope I have explained my question well enough... this is a strange error.
Py_InitModule() is for initializing extension modules written in C, which is not what you are looking for here. If you want to import a module from C, there is a wealth of functions available in the C API: http://docs.python.org/c-api/import.html But if your aim is really to run a script rather than import a module, you could also use one of the PyRun_XXX() functions described here: http://docs.python.org/c-api/veryhigh.html
1,978,975
1,981,071
How to solve a problem in using RAII code and non-RAII code together in C++?
We have 3 different libraries, each developed by a different developer, and each was (presumably) well designed. But since some of the libraries use RAII and some don't, and some of the libraries are loaded dynamically and the others aren't, it doesn't work. Each of the developers says that what he is doing is right, and that a methodology change just for this case (e.g. creating a RAII singleton in B) would solve the problem but would look like an ugly patch. How would you recommend solving this problem? Please see the code to understand it.

My code:

    static A* Singleton::GetA()
    {
        static A* pA = NULL;
        if (pA == NULL) {
            pA = CreateA();
        }
        return pA;
    }

    Singleton::~Singleton() // <-- static object's destructor,
                            //     executed at the unloading of my DLL
    {
        if (pA != NULL) {
            DestroyA();
            pA = NULL;
        }
    }

"A" code (in another DLL, linked statically to my DLL):

    A* CreateA()
    {
        // Load the B DLL dynamically,
        // do all other initializations and return A*
    }

    void DestroyA()
    {
        DestroyB();
    }

"B" code (in another DLL, loaded dynamically from A):

    static SomeIfc* pSomeIfc;

    void DestroyB()
    {
        if (pSomeIfc != NULL) {
            delete pSomeIfc; // <-- crashes because the B DLL was unloaded already,
                             //     since it was loaded dynamically, so it is unloaded
                             //     before the statically linked DLLs are unloaded
            pSomeIfc = NULL;
        }
    }
At first this looks like a problem of dueling APIs, but really it's just another static destructor problem. Generally it's best to avoid doing anything nontrivial from a global or static destructor, for the reason you've discovered but also for other reasons. In particular: On Windows, destructors for global and static objects in DLLs are called under special circumstances, and there are restrictions on what they may do. If your DLL is linked with the C run-time library (CRT), the entry point provided by the CRT calls the constructors and destructors for global and static C++ objects. Therefore, these restrictions for DllMain also apply to constructors and destructors and any code that is called from them. — http://msdn.microsoft.com/en-us/library/ms682583%28VS.85%29.aspx The restrictions are explained on that page, but not very well. I would just try to avoid the issue, perhaps by mimicking A's API (with its explicit create and destroy functions) instead of using a singleton.
1,979,335
1,979,349
Calculating the balance factor of a node in avl tree
I want to calculate the balance factor of a node in an AVL tree without using any recursive procedure. How can I do that? Please tell me a method or provide a C++ code snippet.
You can save the balance factor as part of the information each node saves. Specifically, you can save the heights of the left and right subtrees, and update the values with every insertion/deletion along the insertion/deletion path. Example:

    class Node
    {
    public:
        // stuff...
        int GetBF() { return lHeight - rHeight; }
    private:
        int data;
        Node* right;
        Node* left;
        Node* parent; // optional...
        int rHeight;  // Height of the right subtree
        int lHeight;  // Height of the left subtree
    };
1,979,989
1,980,011
Is using "operator &" on a reference a portable C++ construct?
Suppose I have:

    void function1( Type* object ); // whatever implementation

    void function2( Type& object )
    {
        function1( &object );
    }

Supposing Type doesn't have an overloaded operator&(), will this construct - using operator& on a reference - obtain the actual address of the object (a variable of type Type) on all decently standard-compliant C++ compilers?
Yes, and the reason is that on the very beginning of evaluating any expression, references are being replaced by the object that's referenced, as defined at 5[expr]/6 in the Standard. That will make it so the &-operator doesn't see any difference: If an expression initially has the type "reference to T" (8.3.2, 8.5.3), the type is adjusted to "T" prior to any further analysis, the expression designates the object or function denoted by the reference, and the expression is an lvalue. This makes it so that any operator that operates on an expression "sees through" the reference.
1,980,145
1,981,590
Callback, specified in QueueUserAPC , does not get called
In my code, I use QueueUserAPC to interrupt the main thread from its current work in order to invoke some callback first before going back to its previous work.

    std::string buffer;
    std::tr1::shared_ptr<void> hMainThread;

    VOID CALLBACK myCallback (ULONG_PTR dwParam)
    {
        FILE * f = fopen("somefile", "a");
        fprintf(f, "CALLBACK WAS INVOKED!\n");
        fclose(f);
    }

    void AdditionalThread ()
    {
        // download some file using synchronous wininet and store the
        // HTTP response in buffer
        QueueUserAPC(myCallback, hMainThread.get(), (ULONG_PTR)0);
    }

    void storeHandle ()
    {
        HANDLE hUnsafe;
        DuplicateHandle(GetCurrentProcess(), GetCurrentThread(),
                        GetCurrentProcess(), &hUnsafe, 0, FALSE,
                        DUPLICATE_SAME_ACCESS);
        hMainThread.reset(hUnsafe, CloseHandle);
    }

    void startSecondThread ()
    {
        CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)AdditionalThread, 0, 0, NULL);
    }

storeHandle and startSecondThread are exposed to a Lua interpreter which is running in the main thread along with other things. What I do now is:

- invoke storeHandle from my Lua interpreter. DuplicateHandle returns a non-zero value and therefore succeeds.
- invoke startSecondThread from my Lua interpreter. The additional thread gets started properly, and QueueUserAPC returns a nonzero value, stating that all went well.

As far as I understood QueueUserAPC, myCallback should now get called from the main thread. However, it doesn't.

- If QueueUserAPC is the correct way to accomplish my goal (==> see my other question): How can I get this working?
- If I should use some other method to interrupt the main thread: What other method should I use?

(Note that I don't want to use a pull-ing method in the main thread for this, like WaitForSingleObject or polling. I want the additional thread to push its data straight into the main thread, as soon as possible.)
Yeah, QueueUserAPC is not the solution here. Its callback will only run when the thread blocks and the programmer has explicitly allowed the wait to be alertable. That's unlikely. I hesitate to post the solution because it is going to get you into enormous trouble. You can implement a thread interrupt with SuspendThread(), GetThreadContext(), SetThreadContext() and ResumeThread(). The key is to save the CONTEXT.Eip value on the thread's call stack and replace it with the address of the interrupt function. The reason you cannot make this work is because you'll have horrible re-entrancy problems. There is no way you can guess at which point of execution you'll interrupt the thread. It may well be right in the middle of it mutating state, the state that you need so badly that you are contemplating doing this. There is no way to not fall into this trap, you can't block it with a mutex or whatnot. It is also extremely hard to diagnose because it will work so well for so long, then randomly fail when the interrupt timing just happens to be unlucky. A thread must be in a well known state before it can safely run injected code. The traditional one has been mentioned many times before: when a thread is pumping a message loop, it is implicitly idle and not doing anything dangerous. QueueUserAPC has the same approach: a thread explicitly signals the operating system that it is in a state where the callback can be safely executed, both by blocking (not executing dangerous code) and by setting the bAlertable flag. A thread has to explicitly signal that it is in a safe state. There is no safe push model, only pull.
1,980,326
1,982,329
A way to do c++ "typedef struct foo foo;" for c
Going by gcc version 4.4.2, it appears that saying

    typedef struct foo foo;
    // more code here - like function declarations taking/returning foo*

    // then, in its own source file:
    typedef struct foo {
        int bar;
    } foo;

is legal in C++ but not in C. Of course I have a body of code that compiles fine in C++ by using the foo type, but it appears I must make it use struct foo (in the header file) to get it to work with some C code another developer wrote. Is there a way to predeclare a struct typedef foo in gcc C without getting a "redefinition of typedef 'foo'" error when compiling for C? (I don't want the marginally illegal and less clean underscore solution of typedef struct _foo foo.)
One of the differences between C++ and C is that in C++ it is legal to repeat a typedef in the same scope, as long as all these typedefs are equivalent. In C a repeated typedef is illegal.

    typedef int TInt;
    typedef int TInt; /* OK in C++. Error in C */

This is what you have in your above code. If you are trying to write code that can be compiled as both C and C++, get rid of the superfluous second typedef and just do

    typedef struct foo foo;
    ...
    struct foo {
        int bar;
    };

(although in C++ the first typedef is superfluous as well).
1,980,571
1,980,687
How could I refactor this code with performance in mind?
I have a method where performance is really important (I know premature optimization is the root of all evil. I know I should and I did profile my code. In this application every tenth of a second I save is a big win.) This method uses different heuristics to generate and return elements. The heuristics are used sequentially: the first heuristic is used until it can no longer return elements, then the second heuristic is used until it can no longer return elements, and so on until all heuristics have been used. On each call of the method I use a switch to move to the right heuristic. This is ugly, but works well. Here is some pseudo code:

    class MyClass
    {
    private:
        unsigned int m_step;
    public:
        MyClass() : m_step(0) {};

        Elem GetElem()
        {
            // This switch statement will be optimized as a jump table by the compiler.
            // Note that there are no break statements between the cases.
            switch (m_step)
            {
            case 0:
                if (UseHeuristic1())
                {
                    m_step = 1; // Heuristic one is special: it will never
                                // provide more than one element.
                    return theElem;
                }
                m_step = 1;
            case 1:
                DoSomeOneTimeInitialisationForHeuristic2();
                m_step = 2;
            case 2:
                if (UseHeuristic2())
                {
                    return theElem;
                }
                m_step = 3;
            case 3:
                if (UseHeuristic3())
                {
                    return theElem;
                }
                m_step = 4; // But the method should not be called again
            }
            return someErrorCode;
        };
    }

As I said, this works and is efficient, since at each call the execution jumps right to where it should. If a heuristic can't provide an element, m_step is incremented (so the next time we don't try this heuristic again) and, because there is no break statement, the next heuristic is tried. Also note that some steps (like step 1) never return an element, but are a one-time initialization for the next heuristic. The reason initializations are not all done up front is that they might never be needed. It is always possible (and common) for GetElem not to get called again after it returned an element, even if there are still elements it could return.
While this is an efficient implementation, I find it really ugly. The case statement is a hack; using it without break is also hackish; the method gets really long, even if each heuristic is encapsulated in its own method. How should I refactor this code so it's more readable and elegant while keeping it as efficient as possible?
To my mind if you do not need to modify this code much, eg to add new heuristics then document it well and don't touch it. However if new heuristics are added and removed and you think that this is an error prone process then you should consider refactoring it. The obvious choice for this would be to introduce the State design pattern. This will replace your switch statement with polymorphism which might slow things down but you would have to profile both to be sure.
1,980,642
1,980,908
Static template field of template class?
I've got this code to port from Windows to Linux.

    template<class T, int Size>
    class CVector { /* ... */ };

    template<int n, int m>
    class CTestClass
    {
    public:
        enum { Size = 1 << n };
    private:
        static CVector<int, Size> a; // main.cpp:19
    };

    template<int n, int m>
    CVector<int, CTestClass<n, m>::Size> CTestClass<n, m>::a; // main.cpp:24

It compiles with VS2008, but doesn't with g++ 4.3.2. The error I receive is:

    main.cpp:24: error: conflicting declaration 'CVector CTestClass::alpha_to'
    main.cpp:19: error: 'CTestClass< n, m >::alpha_to' has a previous declaration as 'CVector< int, CTestClass< n, m >::Size > CTestClass< n, m >::alpha_to'
    main.cpp:24: error: declaration of 'CVector< int, CTestClass< n, m >::Size > CTestClass< n, m >::alpha_to' outside of class is not definition

Does someone know how to make it compilable via g++? Thanks!
This works with gcc 3.4 & 4.3 as well as VC8:

    template<class T, int Size>
    class CVector { /* ... */ };

    template<int n, int m>
    class CTestClass
    {
    public:
        enum { Size = 1 << n };
        typedef CVector<int, Size> Vector;
    private:
        static Vector a;
    };

    template<int n, int m>
    typename CTestClass<n, m>::Vector CTestClass<n, m>::a;
1,980,761
1,980,878
Why is this error: reference to ‘statusBar’ is ambiguous.. coming? Is this a bug?
I created a QMainWindow using Qt Designer. As we know, it has a status bar by default, and by default Qt Designer gave it the object name "statusBar". Now, when I try to call

    statusBar()->showMessage(tr("File successfully loaded."), 3000);

since we have a function with the prototype

    QStatusBar * QMainWindow::statusBar () const

the compiler shows the error:

    error: reference to 'statusBar' is ambiguous
    error: candidates are: QStatusBar* Ui_MainWindow::statusBar
    error:                 QStatusBar* QMainWindow::statusBar() const

Actually, I was following the book "The Art of Building Qt Applications" by Daniel Molkentin, and I am compiling the same code given in the book. The above code is in mainwindow.cpp and I have included "mainwindow.h" & "ui_mainwindow.h" in it... Is this a bug in Qt 4?
Ask for a specific version of the method statusBar(): Ui_MainWindow::statusBar()->showMessage(tr("File successfully loaded."), 3000);
1,981,286
1,983,054
How to check if file is/isn't an image without loading full file? Is there an image header-reading library?
edit: Sorry, I guess my question was vague. I'd like to have a way to check if a file is not an image without wasting time loading the whole image, because then I can do the rest of the loading later. I don't want to just check the file extension. The application just views the images. By 'checking the validity', I meant 'detecting and skipping the non-image files' also in the directory. If the pixel data is corrupt, I'd like to still treat it as an image. I assign page numbers and pair up these images. Some images are the single left or right page. Some images are wide and are the "spread" of the left and right pages. For example, pagesAt(3) and pagesAt(4) could return the same std::pair of images or a std::pair of the same wide image. Sometimes, there is an odd number of 'thin' images, and the first image is to be displayed on its own, similar to a wide image. An example would be a single cover page. Not knowing which files in the directory are non-images means I can't confidently assign those page numbers and pair up the files for displaying. Also, the user may decide to jump to page X, and when I later discover and remove a non-image file and reassign page numbers accordingly, page X could appear to be a different image. original: In case it matters, I'm using c++ and QImage from the Qt library. I'm iterating through a directory and using the QImage constructor on the paths to the images. This is, of course, pretty slow and makes the application feel unresponsive. However, it does allow me to detect invalid image files and ignore them early on. I could just save only the paths to the images while going through the directory and actually load them only when they're needed, but then I wouldn't know if the image is invalid or not. I'm considering doing a combination of these two. i.e. While iterating through the directory, reading only the headers of the images to check validity and then load image data when needed. 
So, will just loading the image headers be much faster than loading the whole image? Or does doing a bit of I/O to read the header mean I might as well finish loading the image in full? Later on, I'll be uncompressing images from archives as well, so this also applies to uncompressing just the header vs. uncompressing the whole file. Also, I don't know how to load/read just the image headers. Is there a library that can read just the headers of images? Otherwise, I'd have to open each file as a stream and code image header readers for all the filetypes on my own.
The Unix file tool (which has been around since almost forever) does exactly this. It is a simple tool that uses a database of known file headers and binary signatures to identify the type of the file (and potentially extract some simple information). The database is a simple text file (which gets compiled for efficiency) that describes a plethora of binary file formats, using a simple structured format (documented in man magic). The source is in /usr/share/file/magic (in Ubuntu). For example, the entry for the PNG file format looks like this:

    0    string    \x89PNG\x0d\x0a\x1a\x0a    PNG image
    !:mime   image/png
    >16  belong    x    \b, %ld x
    >20  belong    x    %ld,
    >24  byte      x    %d-bit
    >25  byte      0    grayscale,
    >25  byte      2    \b/color RGB,
    >25  byte      3    colormap,
    >25  byte      4    gray+alpha,
    >25  byte      6    \b/color RGBA,
    >28  byte      0    non-interlaced
    >28  byte      1    interlaced

You could extract the signatures for just the image file types, and build your own "sniffer", or even use the parser from the file tool (which seems to be BSD-licensed).
1,981,400
1,981,416
Functional Programming in C++
Can someone guide me how do functional programming in C++? Is there some good online material that I can refer? Please note that I know about the library FC++. I want to know how to do that with C++ standard library alone. Thanks.
Update August 2014: This answer was posted in 2009. C++11 improved matters considerably for functional programming in C++, so this answer is no longer accurate. I'm leaving it below for a historical record. Since this answer stuck as the accepted one - I'm turning it into a community Wiki. Feel free to collaboratively improve it to add real tips on function programming with modern C++. You can not do true functional programming with C++. All you can do is approximate it with a large amount of pain and complexity (although in C++11 it's a bit easier). Therefore, this approach isn't recommended. C++ supports other programming paradigms relatively well, and IMHO should not be bent to paradigms it supports less well - in the end it will make unreadable code only the author understands.
1,981,413
1,982,338
Why is typeid not compile-time constant like sizeof
Why is typeid(someType) not constant like sizeof(someType)? This question came up because recently I tried something like:

    template <class T>
    class Foo
    {
        static_assert(typeid(T) == typeid(Bar) || typeid(T) == typeid(FooBar));
    };

And I am curious why the compiler knows the size of types (sizeof) at compile time, but not the type itself (typeid).
When you are dealing with types, you'd rather use simple metaprogramming techniques:

    #include <type_traits>

    template <class T>
    void Foo()
    {
        static_assert((std::is_same<T, int>::value || std::is_same<T, double>::value));
    }

    int main()
    {
        Foo<int>();
        Foo<float>();
    }

where is_same could be implemented like this:

    template <class A, class B>
    struct is_same
    {
        static const bool value = false;
    };

    template <class A>
    struct is_same<A, A>
    {
        static const bool value = true;
    };

typeid probably isn't compile-time because it has to deal with runtime polymorphic objects, and that is where you'd rather use it (if at all).
1,981,568
1,981,587
memset, memcpy with new operator
Can I reliably use memset and memcpy in C++ with memory that has been allocated with new? Edited: Yes, to allocate native data types. Example:

    BYTE *buffer = 0;
    DWORD bufferSize = _fat.GetSectorSize();
    buffer = new BYTE[bufferSize];
    _fat.ReadSector(streamChain[0], buffer, bufferSize);
    ULONG header = 0;
    memcpy(&header, buffer, sizeof(ULONG));
So long as you are only using new to allocate the built-in and/or POD types, then yes. However, with something like this:

    std::string * s = new string;
    memset( s, 0, sizeof(*s) );

then you would be looking at disaster. I have to ask though, why you and others seem so enamoured with these functions - I don't believe I ever use them in my own code. Using std::vector, which has its own copy and assignment facilities, seems like a better bet than memcpy(), and I've never really believed in the magic of setting everything to zero, which seems to be the main use for memset().
1,981,576
1,981,957
Convert Hex Char To Int - Is there a better way?
I have written a function to take in the data from a Sirit IDentity MaX AVI reader and parse out the facility code and keycard number. How I am currently doing it works, but is there a better way? Seems a little hackish...

- buff & buf are size 264
- buf and buff are char

Data received from reader:

    2009/12/30 14:56:18 epc0 LN:001 C80507A0008A19FA 0000232F Xlat'd

    char TAccessReader::HexCharToInt(char n)
    {
        if (n >= '0' && n <= '9')
            return (n - '0');
        else if (n >= 'A' && n <= 'F')
            return (n - 'A' + 10);
        else
            return 0;
    }

    bool TAccessReader::CheckSirit(char *buf, long *key_num, unsigned char *fac)
    {
        unsigned short i, j, k;

        *key_num = 0; // Default is zero
        memset(buff, 0, sizeof(buff));
        i = sscanf(buf, "%s %s %s %s %s %s %s",
                   &buff[0], &buff[20], &buff[40], &buff[60],
                   &buff[80], &buff[140], &buff[160]);
        if (i == 7 && buff[147] && !buff[148])
        {
            // UUGGNNNN  UU=spare, GG=Facility Code, NNNN=Keycard Number (all HEX)
            // get facility code
            *fac = HexCharToInt(buff[142]) * 16 + HexCharToInt(buff[143]);
            *key_num = (unsigned short)HexCharToInt(buff[144]) * 4096 +
                       (unsigned short)HexCharToInt(buff[145]) * 256 +
                       (unsigned short)HexCharToInt(buff[146]) * 16 +
                       HexCharToInt(buff[147]);
        }
        // do some basic checks.. return true or false
    }
Here's an easy way to get at the data you want. I do work in the access control business, so this was something that interested me...

    template<typename TRet, typename Iterator>
    TRet ConvertHex(Iterator begin)
    {
        unsigned long result;
        Iterator end = begin + (sizeof(TRet) * 2);
        std::stringstream ss(std::string(begin, end));
        ss >> std::hex >> result;
        return result;
    }

    bool TAccessReader::CheckSirit(char *buf, long *key_num, unsigned char *fac)
    {
        *key_num = 0; // Default is zero

        std::istringstream sbuf(std::string(buf, buf + 264));

        // Stuff all of the string elements into a vector
        std::vector<std::string> elements;
        std::copy(std::istream_iterator<std::string>(sbuf),
                  std::istream_iterator<std::string>(),
                  std::back_inserter(elements));

        // We're interested in the 6th element
        std::string read = elements[5];
        if (read.length() == 8)
        {
            // UUGGNNNN  UU=spare, GG=Facility Code, NNNN=Keycard Number (all HEX)
            // get facility and card code
            std::string::const_iterator iter = read.begin();
            *fac = ConvertHex<unsigned char>(iter + 2);
            *key_num = ConvertHex<unsigned short>(iter + 4);
        }
        // do some basic checks.. return true or false
    }
1,981,628
1,981,629
Check if a binary number has a '0' or a '1' at a specific position
I'd like to check if a binary number has a '0' or a '1' at a specific position. Example: if the binary number is 101000100,

- checking at position zero (that is, at the rightmost '0') should result in '0'
- checking at position 2 should result in '1'
- checking at position 3 should result in '0'
- checking at position 6 should result in '1'

etc... I'm coding in C, so obviously I could use sprintf/scanf and the like, but I guess there must be something better (read: more time efficient / easier)! What would be a good mechanism to do this?
This will filter out the bit you're looking for:

    number & (1 << position)

If you really need a 1 or 0 response, you can use this to make it a boolean value:

    !!(number & (1 << position))

Or even better (thanks Vadim K.):

    (number >> position) & 1
1,981,723
1,981,746
Unix socket: hostent makes memory leaks
I am writing a client for a TCP connection, and the conversion from IP to sockaddr produces memory leaks. There is the following process:

    #include <netdb.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /** there is some code like method header etc. */

    hostent * host = gethostbyaddr( ip, 4, AF_INET ); // ip is char[4], I use IPv4
    if ( !host )
        return -2; // bad IP

    netSocket = socket( AF_INET, SOCK_STREAM, IPPROTO_TCP );
    if ( netSocket == -1 )
        return -3; // error during socket opening

    sockaddr_in serverSock;
    serverSock.sin_family = AF_INET;
    serverSock.sin_port = htons( port );
    memcpy( &( serverSock.sin_addr ), host->h_addr, host->h_length );

    // and now there is function connect(...);
    /** end of method */

This code works fine, but when I tracked memory usage I got 5 memory leaks. They are created by this line:

    hostent * host = gethostbyaddr( ip, 4, AF_INET ); // ip is char[4], I use IPv4

I have tried to delete it with delete host; but this causes a segmentation fault. Do you have any ideas how I can clean up the memory, please? This is my school project and we have to work with memory correctly. EDIT: I am using Linux Ubuntu 9.04, g++ 4.3.3 and for memory testing the mudflap library.
You don't say what platform you are on, but typically the memory returned by gethostbyaddr will be allocated and managed by the sockets library you are using - you don't free it yourself. Whatever you are using to diagnose leaks is probably giveing false positives. For example, this man page http://www.opengroup.org/onlinepubs/009695399/functions/gethostbyaddr.html says that the pointer returned may be to static data, while MS use thread local storage. In neither case can or should the data be freed, and in neither case is there a leak.
1,981,804
1,981,918
Can getline() be used to get a char array from a fstream
I want to add a new (fstream) function to a program that already uses char arrays to process strings. The problem is that the code below yields strings, and the only way I can think of getting this to work would be to have an intermediary function that would copy the strings, char by char, into a new char array, pass these on to the functions in the program, get back the results, and then copy the results char by char back into the string. Surely (hopefully) there must be a better way? Thanks!

    void translateStream(ifstream &input, ostream& cout)
    {
        string inputStr;
        string translated;

        getline(input, inputStr, ' ');
        while (!input.eof())
        {
            translateWord(inputStr, translated);
            cout << translated;
            getline(input, inputStr, ' ');
        }
        cout << inputStr;
    }

The translateWord function:

    void translateWord(char orig[], char pig[])
    {
        bool dropCap = false;
        int len = strlen(orig) - 1;
        int firstVowel = findFirstVowel(orig);
        char tempStr[len];

        strcpy(pig, orig);
        if (isdigit(orig[0])) return;

        // remember if dropped cap
        if (isupper(orig[0])) dropCap = true;

        if (firstVowel == -1)
        {
            strcat(pig, "ay");
            // return;
        }
        if (isVowel(orig[0], 0, len))
        {
            strcat(pig, "way");
            // return;
        }
        else
        {
            splitString(pig, tempStr, firstVowel);
            strcat(tempStr, pig);
            strcat(tempStr, "ay");
            strcpy(pig, tempStr);
        }
        if (dropCap)
        {
            pig[0] = toupper(pig[0]);
        }
    }
You can pass a string as the first parameter to translateWord by making the first parameter a const char *. Then you call the function with inputStr.c_str() as the first parameter. Do deal with the second (output) parameter though, you need to either completely re-write translateWord to use std::string (the best solution, IMHO), or pass a suitably sized array of char as the second parameter. Also, what you have posted is not actually C++ - for example: char tempStr[len]; is not supported by C++ - it is an extension of g++, taken from C99.
1,982,131
1,983,382
Is Loop Hoisting still a valid manual optimization for C code?
Using the latest gcc compiler, do I still have to think about these types of manual loop optimizations, or will the compiler take care of them for me well enough?
If your profiler tells you there is a problem with a loop, and only then, a thing to watch out for is a memory reference in the loop which you know is invariant across the loop but the compiler does not. Here's a contrived example, bubbling an element out to the end of an array:

    for ( ; i < a->length - 1; i++)
        swap_elements(a, i, i+1);

You may know that the call to swap_elements does not change the value of a->length, but if the definition of swap_elements is in another source file, it is quite likely that the compiler does not. Hence it can be worthwhile hoisting the computation of a->length out of the loop:

    int n = a->length;
    for ( ; i < n - 1; i++)
        swap_elements(a, i, i+1);

On performance-critical inner loops, my students get measurable speedups with transformations like this one. Note that there's no need to hoist the computation of n-1; any optimizing compiler is perfectly capable of discovering loop-invariant computations among local variables. It's memory references and function calls that may be more difficult. And the code with n-1 is more manifestly correct. As others have noted, you have no business doing any of this until you've profiled and have discovered that the loop is a performance bottleneck that actually matters.
1,982,178
1,983,183
Intel C++ compiler as an alternative to Microsoft's?
Is anyone here using the Intel C++ compiler instead of Microsoft's Visual c++ compiler? I would be very interested to hear your experience about integration, performance and build times.
The Intel compiler is one of the most advanced C++ compilers available. It has a number of advantages over, for instance, the Microsoft Visual C++ compiler, and one major drawback. The advantages include:

- Very good SIMD support; as far as I've been able to find out, it is the compiler with the best support for SIMD instructions.
- Support for both automatic parallelization (multi-core optimizations) and manual parallelization (through OpenMP), and it does both very well.
- Support for CPU dispatching. This is really important, since it allows the compiler to target the processor for optimized instructions when the program runs. As far as I can tell it is the only C++ compiler available that does this, unless G++ has introduced it in the meantime.
- It often ships with optimized libraries, such as math and image libraries.

However, it has one major drawback: the dispatcher mentioned above only works on Intel CPUs. This means that advanced optimizations will be left out on AMD CPUs. There is a workaround, but it is still a major problem with the compiler. To work around the dispatcher problem, it is possible to replace the dispatcher code produced with a version that works on AMD processors; one can for instance use Agner Fog's asmlib library, which replaces the compiler-generated dispatcher function. Much more information about the dispatching problem, and more detailed technical explanations of some of the topics, can be found in the "Optimizing software in C++" paper - also from Agner (which is really worth reading).

On a personal note, I have used the Intel C++ compiler with Visual Studio 2005, where it worked flawlessly. I didn't experience any problems with Microsoft-specific language extensions; it seemed to understand the ones I used, but perhaps the ones mentioned by John Knoeller were different from the ones in my projects. While I like the Intel compiler, I'm currently working with the Microsoft C++ compiler, simply because of the extra financial investment the Intel compiler requires. I would only use the Intel compiler as an alternative to Microsoft's or the GNU compiler if performance were critical to my project and I had the financial part in order ;)
1,982,508
1,982,545
C# and C++ in relation to C
I've never programmed using C or whatever but I use this site a lot so as you can imagine I run into them quite a lot. And due to the fact I don't really understand the languages this is a question Google can't really answer. So in simple terms what are the differences between each of these languages. I assume they are related. All I know is that C++ is what brought object orientated programming to C.
They're loosely related in terms of syntax. In general, C++ added a huge number of capabilities to C, mostly object orientation and generic programming constructs. However, it did so in a way that tried to maintain as much backwards compatibility with C as possible. C#, on the other hand, is a very different animal. It completely abandoned all attempts at backwards compatibility, and instead tries to keep a superficial syntax similarity to C++. However, all three languages are very unique in practical terms. Development is done very differently in C vs. C++ vs. C#, due to the vast differences in supporting libraries and technologies.
1,982,595
1,982,675
boost asio io_service.run()
I was just going over the asio chat server example. My question is about their usage of the io_service.run() function. The documentation for io_service.run() says: The run() function blocks until all work has finished and there are no more handlers to be dispatched, or until the io_service has been stopped. Multiple threads may call the run() function to set up a pool of threads from which the io_service may execute handlers. All threads that are waiting in the pool are equivalent and the io_service may choose any one of them to invoke a handler. The run() function may be safely called again once it has completed only after a call to reset(). It says that the run function will return, and I'm assuming that when it does return, the network thread stops until it is called again. If that is true, then why isn't the run function called in a loop, or at least given its own thread? The io_service.run() function is pretty much a mystery to me.
"until all work has finished and there are no more handlers to be dispatched, or until the io_service has been stopped" Notice that you DO install a handler, named handle_accept, that reinstalls itself at each execution. Hence, the io_service.run will never return, at least until you quit it manually. Basically, at the moment you run io_service.run in a thread, io_services proactor takes over program flow, using the handler's you installed. From that point on, you handle the program based on events (like the handle_accept) instead of normal procedural program flow. The loop you're mentioning is somewhere deep in the scary depths of the asio's proactor ;-).
1,982,743
1,982,761
C++: Get data from MIDI message (DWORD)
I've written a simple MIDI console application in C++. Here's the whole thing: #include <windows.h> #include <iostream> #include <math.h> using namespace std; void CALLBACK midiInputCallback(HMIDIIN hMidiIn, UINT wMsg, DWORD_PTR dwInstance, DWORD_PTR dwParam1, DWORD_PTR dwParam2) { switch (wMsg) { case MIM_MOREDATA: case MIM_DATA: cout << dwParam1 << " "; PlaySound("jingle.wav", NULL, SND_ASYNC | SND_FILENAME); break; } } int main() { unsigned int numDevs = midiInGetNumDevs(); cout << numDevs << " MIDI devices connected:" << endl; MIDIINCAPS inputCapabilities; for (unsigned int i = 0; i < numDevs; i++) { midiInGetDevCaps(i, &inputCapabilities, sizeof(inputCapabilities)); cout << "[" << i << "] " << inputCapabilities.szPname << endl; } int portID; cout << "Enter the port which you want to connect to: "; cin >> portID; cout << "Trying to connect with the device on port " << portID << "..." << endl; LPHMIDIIN device = new HMIDIIN[numDevs]; int flag = midiInOpen(&device[portID], portID, (DWORD)&midiInputCallback, 0, CALLBACK_FUNCTION); if (flag != MMSYSERR_NOERROR) { cout << "Error opening MIDI port." << endl; return 1; } else { cout << "You are now connected to port " << portID << "!" << endl; midiInStart(device[portID]); } while (1) {} } You can see that there's a callback function for handling the incoming MIDI messages from the device. Here is the description of this function on MSDN. On that page they say that the meaning of dwParam1 and dwParam2 are specified to the messagetype (wMsg), like MIM_DATA. If I look up the documentation of MIM_DATA, I can see that it is a doubleword (DWORD?) and that it has a 'high word' and a 'low word'. How can I now get data like the name of the control on the MIDI device that sended the data and what value it sends? I would appreciate it if somebody can correct my code if it can be done better. Thanks :)
To access the data you need to use dwParam1 and dwParam2 and call the macros HIWORD and LOWORD to get the high and low word from them. Respectively use HIBYTE and LOBYTE to get the data out of those words. In case of MIM_DATA, unfortunately that's byte encoded MIDI data, so you'll have to find the specific meanings for those -- these are documented here -- MIDI Messages. Your code however has a potential problem -- as we read in the MSDN pages: "Applications should not call any multimedia functions from inside the callback function, as doing so can cause a deadlock. Other system functions can safely be called from the callback". And you're calling PlaySound in the Callback...
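To make the HIWORD/LOWORD chain concrete, here is a small sketch of what decoding dwParam1 for a MIM_DATA message boils down to. It uses plain shifts so it compiles without <windows.h> (the macros expand to exactly these operations); the field meanings follow the MIDI short-message layout the linked page describes. Note there is no "name of the control" in the data — for a control-change message (status 0xB0-0xBF) data1 is only the controller number, and mapping that to a human-readable name is up to you.

```cpp
#include <cassert>

// Decoder for the dwParam1 of a MIM_DATA message. The packed dword holds:
// status byte (low byte), then the first and second MIDI data bytes.
struct MidiMessage {
    unsigned char status; // e.g. 0x90 = note-on on channel 0
    unsigned char data1;  // e.g. note number, or controller number for CC
    unsigned char data2;  // e.g. velocity, or controller value for CC
};

inline MidiMessage decodeMidiDword(unsigned long dwParam1) {
    MidiMessage m;
    m.status = static_cast<unsigned char>(dwParam1 & 0xFF);         // LOBYTE(LOWORD(dwParam1))
    m.data1  = static_cast<unsigned char>((dwParam1 >> 8) & 0xFF);  // HIBYTE(LOWORD(dwParam1))
    m.data2  = static_cast<unsigned char>((dwParam1 >> 16) & 0xFF); // LOBYTE(HIWORD(dwParam1))
    return m;
}
```

dwParam2 is just the timestamp in milliseconds since midiInStart, so no unpacking is needed there.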
1,982,788
1,982,873
writing pexpect like program in c++ on Linux
Is there any way of writing pexpect like small program which can launch a process and pass the password to that process? I don't want to install and use pexpect python library but want to know the logic behind it so that using linux system apis I can build something similar.
You could just use "expect". It is very lightweight and is made to do what you're describing.
1,982,986
1,983,410
Scrolling different Widgets at the same time
I have different types of QWidgets into a DockWindow: 1 Qwt plot 1 QWidget 3 QGraphicsView And I need scrolling all of them at the same time with the same scrollbar when I zoom in. I know two solutions for this: Create one scrollbar and connect it to each widget. Create one scrollArea and manipulate all the widgets. What is the best solution to this? Do you know any scrollArea tutorial? Thank you so much
I would try to make it so that each of the items that needs to scroll in concert is inside its own QScrollArea. I would then put all those widgets into one widget, with a QScrollBar underneath (and/or to the side, if needed). Designate one of the interior scrolled widgets as the "master", probably the plot widget. Then do the following: Set every QScrollArea's horizontal scroll bar policy to never show the scroll bars. Connect the master QScrollArea's horizontalScrollBar()'s rangeChanged( int min, int max ) signal to a slot that sets the main widget's horizontal QScrollBar to the same range. Additionally, it should set the same range for the other scroll area widgets' horizontal scroll bars. The horizontal QScrollBar's valueChanged( int value ) signal should be connected to every scroll area widget's horizontal scroll bar's setValue( int value ) slot. Repeat for vertical scroll bars, if doing vertical scrolling. There is one place where I think this could go wrong, and that is mouse-wheel scrolling. You could solve this in a couple of ways. One would be to connect all the scrolling areas to a slot that triggers when their value changes, which updates all the other scroll bars. The other would be to install event filters on those widgets, and either ignore the scroll or process it with the main scroll bars.
1,983,141
1,983,298
Problem in setting up boost library on ubuntu
I have compiled and installed my boost library in '/media/data/bin' in ubuntu 9.10. And I have setup the INCLUDE_PATH, LIBRARY_PATH env: $ echo $INCLUDE_PATH /media/data/bin/boost/include: $ echo $LIBRARY_PATH /media/data/bin/boost/lib: But when I compile the asio example, I get the following error: $ g++ blocking_tcp_echo_server.cpp blocking_tcp_echo_server.cpp:13:26: error: boost/bind.hpp: No such file or directory blocking_tcp_echo_server.cpp:14:31: error: boost/smart_ptr.hpp: No such file or directory blocking_tcp_echo_server.cpp:15:26: error: boost/asio.hpp: No such file or directory blocking_tcp_echo_server.cpp:16:28: error: boost/thread.hpp: No such file or directory blocking_tcp_echo_server.cpp:18: error: ‘boost’ has not been declared blocking_tcp_echo_server.cpp:22: error: ‘boost’ has not been declared blocking_tcp_echo_server.cpp:22: error: expected initializer before ‘<’ token blocking_tcp_echo_server.cpp:24: error: variable or field ‘session’ declared void blocking_tcp_echo_server.cpp:24: error: ‘socket_ptr’ was not declared in this scope
What is wrong with sudo apt-get install libboost-dev, after which you don't need to set any -I and -L flags? If you need Boost 1.40, you can still rebuild the current Debian unstable package.
1,983,303
1,983,525
Using bts assembly instruction with gcc compiler
I want to use the bts and bt x86 assembly instructions to speed up bit operations in my C++ code on the Mac. On Windows, the _bittestandset and _bittest intrinsics work well, and provide significant performance gains. On the Mac, the gcc compiler doesn't seem to support those, so I'm trying to do it directly in assembler instead. Here's my C++ code (note that 'bit' can be >= 32): typedef unsigned long LongWord; #define DivLongWord(w) ((unsigned)w >> 5) #define ModLongWord(w) ((unsigned)w & (32-1)) inline void SetBit(LongWord array[], const int bit) { array[DivLongWord(bit)] |= 1 << ModLongWord(bit); } inline bool TestBit(const LongWord array[], const int bit) { return (array[DivLongWord(bit)] & (1 << ModLongWord(bit))) != 0; } The following assembler code works, but is not optimal, as the compiler can't optimize register allocation: inline void SetBit(LongWord* array, const int bit) { __asm { mov eax, bit mov ecx, array bts [ecx], eax } } Question: How do I get the compiler to fully optimize around the bts instruction? And how do I replace TestBit by a bt instruction?
inline void SetBit(LongWord* array, const int bit) { asm("bts %1,%0" : "+m" (*array) : "r" (bit)); }
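Building on that form, here is a sketch covering the TestBit half of the question as well (bt plus setc to capture the carry flag). It assumes a 32-bit LongWord (unsigned int, since unsigned long is 64-bit on x86-64 Linux, which would break the question's DivLongWord/ModLongWord math), and it falls back to plain shifts on non-x86 targets:

```cpp
#include <cassert>

typedef unsigned int LongWord; // 32-bit words, as the question's bit math assumes

inline void SetBit(LongWord* array, int bit) {
#if defined(__x86_64__) || defined(__i386__)
    // "memory" clobber: with a memory operand, bts may touch a word beyond
    // array[0] when bit >= 32, which the compiler cannot otherwise see.
    asm("btsl %1, %0" : "+m" (*array) : "r" (bit) : "cc", "memory");
#else
    array[bit >> 5] |= 1u << (bit & 31); // portable fallback
#endif
}

inline bool TestBit(const LongWord* array, int bit) {
#if defined(__x86_64__) || defined(__i386__)
    unsigned char ret;
    // bt sets the carry flag to the selected bit; setc copies it to a byte.
    asm("btl %2, %1\n\tsetc %0" : "=q" (ret) : "m" (*array), "r" (bit) : "cc");
    return ret != 0;
#else
    return (array[bit >> 5] & (1u << (bit & 31))) != 0;
#endif
}
```

With the constraints written this way the compiler is free to keep array and bit wherever it likes, instead of being forced through eax/ecx as in the MSVC-style block.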
1,983,639
1,983,663
Linking Error When Implementing Templated Based Operator Assignment Function
I try to implement the following function : template<typename T> class a { private: T var; friend bool operator==(const a<T> &, const a<T> &); }; template<typename T> inline bool operator==(const a<T> &r1, const a<T> &r2) { return r1.var==r2.var; } int main () { a<int> var0; a<int> var1; var0 == var1; } However, I get main.obj : error LNK2001: unresolved external symbol "bool __cdecl operator==(class a<int> const &,class a<int> const &)" (??8@YA_NABV?$a@H@@0@Z) under VC++ 2008 May I know how I can fix the linking error?
What you have declares the friend op== as a non-template, but you implement it as a template. That is why the definition is not found when linking. How I usually overload op== for class templates: template<class T> struct A { friend bool operator==(A const& a, A const& b) { return a.var == b.var; } private: T var; };
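If you would rather keep the operator as a standalone template, as in the question, the other fix is to forward-declare the template and befriend its matching specialization with <>. A sketch (the constructor is added here only so the demo can build values):

```cpp
#include <cassert>

// Forward declarations so the friend declaration below can name the template.
template<typename T> class a;
template<typename T> bool operator==(const a<T>&, const a<T>&);

template<typename T>
class a {
public:
    explicit a(T v = T()) : var(v) {} // added just for this demo
private:
    T var;
    // The <> marks this as a specialization of the template declared above,
    // not a brand-new non-template function (which was the linker problem).
    friend bool operator== <>(const a<T>&, const a<T>&);
};

template<typename T>
bool operator==(const a<T>& r1, const a<T>& r2) {
    return r1.var == r2.var;
}
```

The in-class definition shown above is usually simpler, though; this variant is mainly useful when the operator must live outside the class.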
1,984,295
1,984,635
setCentralWidget() causing the QMainWindow to crash.. Why?
MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent), ui(new Ui::MainWindow) { this->setupUi(this); this->setupActions(); this->setWindowTitle(tr("CuteEdit")); label = new QLabel(tr("No Open Files")); this->setCentralWidget(label); label->setAlignment(Qt::AlignCenter); } By above code, I get a GUI like this(Its a screenshot of whole screen, Only observe the window displayed in middle of page of ebook). (I used QT Designer) Now, i want user to select File->Open.. A Dialog appears and file gets selected.. Its contents are to be displayed in *textEdit widget.. Function for that is below.. void MainWindow::loadFile() { QString filename = QFileDialog::getOpenFileName(this); QFile file(filename); if (file.open(QIODevice::ReadOnly|QIODevice::Text)) { label->hide(); textEdit->setPlainText(file.readAll()); mFilePath = filename; QMainWindow::statusBar()->showMessage(tr("File successfully loaded."), 3000); } } The window crashes at line:- textEdit->setPlainText(file.readAll()); But if i comment the line:- this->setCentralWidget(label); i mean i remove label as being the central widget, the program runs as expected.. Why? And also, I am not clear about the concept of CentralWidget. Pls guide.
JimDaniel is right in his last edit. Take a look at the source code of setCentralWidget(): void QMainWindow::setCentralWidget(QWidget *widget) { Q_D(QMainWindow); if (d->layout->centralWidget() && d->layout->centralWidget() != widget) { d->layout->centralWidget()->hide(); d->layout->centralWidget()->deleteLater(); } d->layout->setCentralWidget(widget); } Do you see that if your MainWindow already had centralWidget() Qt schedules this object for deletion by deleteLater()? And centralWidget() is the root widget for all layouts and other widgets in QMainWindow. Not the widget which is centered on window. So each QMainWindow produced by the wizard in Qt Creator already has this root widget. (Take a look at your ui_mainwindow.h as JimDaniel proposed and you will see). And you schedule this root widget for deletion in your window constructor! Nonsense! =) I think for you it's a good idea to start new year by reading some book on Qt. =) Happy New Year!
1,984,492
1,984,596
runtime determine type for C++
I am wondering if type can be determined as runtime information in C++. (1) Although my question is quite general, for simplicity, I will start from a simple example: #include <stdio.h> #include <iostream> #include <cstring> using namespace std; int main(int argc, char * argv[]) { if (strcmp(argv[1], "int")==0) { int t = 2; }else if (strcmp(argv[1], "float")==0) { float t = 2.2; } cout << t << endl; // error: ‘t’ was not declared in this scope return 0; } For this example, there are two questions: (a) "argv[1] to t" is wrong, but can the type info in the C string argv[1] be converted to the actual type keyword? So we don't need to check for every type by if-else clause and strcmp. (b) how to make variable t defined inside the local scope of the if clause still valid outside. i.e how to "export" a local variable to the outside of its scope? (2) Generally speaking, not specific to the simple example above, what are the usual ways to runtime determine types? It seems to me that there might be some ways: (a) one can put the processing of the variable defined from the type inside the same scope its definition. e.g. #include <stdio.h> #include <iostream> #include <cstring> using namespace std; int main(int argc, char * argv[]) { if (strcmp(argv[1], "int")==0) { int t = 2; cout << t << endl; }else if (strcmp(argv[1], "float")==0) { float t = 2.2; cout << t << endl; } return 0; } And possibly use template function to make the common code for various types reusable. (b) or one may use abstract class type and polymorphism to indirectly export the definition out, but I am not sure how exactly. Thanks for your advice!
1a: No, types are not objects or values in C++ (as they are, for example, in Python). You can, however, use various values selected by the value of argv[1]. 1b: Sorry, just can't do that. 2: dynamic_cast and typeid (both operators) are the only tools currently provided by the language to query type (not unusual, most languages have very few, but dedicated, tools for that), and using them solely to query type is often discouraged depending on situation (also not unusual among other languages). 2a: Yes, and as that is the simple, obvious, and works here—there's no reason to use anything else, but as it's example code, let's assume you need a different solution. You could call a function template instantiated on the right type, but as this is pretty much the same thing as the rest of 2a, I don't go into it. 2b: Example using a subclass template, just because it's handy: struct Base { virtual ~Base() {} friend std::ostream& operator<<(std::ostream& s, Base const& v) { v._print(s); return s; } private: virtual void _print(std::ostream&) const = 0; }; template<class T> struct Value : Base { T data; explicit Value(T const& data) : data(data) {} private: virtual void _print(std::ostream& s) const { s << data; } }; Use: int main(int argc, char** argv) { using namespace std; auto_ptr<Base> p; string const type = argc > 1 ? argv[1] : "int"; if (type == "int") { p.reset(new Value<int>(2)); } else if (type == "float") { p.reset(new Value<double>(2.2)); } cout << *p << '\n'; return 0; } This is starting to merge the two types into one type, and they both present the same interface, Base, here. However, this doesn't lend itself well to every solution, and a variant such as boost.variant can be better, particularly when the various types required are small in number and known well in advance.
1,984,657
1,984,736
how to know whether disk is basic or dynamic?
In windows is it possible to know what kind of disk we are dealing with from a c/c++ program? forget about gpt or mbr, how to know whether it is basic or dynamic? Program input can be drive letter or any info related to disk, output should be dynamic or basic. No need of a direct way of doing, even if it is lengthy process, its okay. I couldn't find much in msdn. Please help me out.
There is a way in Windows, but it's not straightforward. There is no direct API to determine if a disk is Basic or Dynamic, however all dynamic disks will have LDM information. So if a drive has a partition with LDM information on it, then it's going to be a dynamic disk. The DeviceIoControl() function with the IOCTL_DISK_GET_DRIVE_LAYOUT_EX control code can be used to get this information. Here is a post with a sample console application to do what you're asking for.
1,984,877
2,211,539
Profiling C++ with Xcode
is it possible to profile C++ apps with Xcode so one gets; memory leaks like with valgrind possible errors before running the program Thanks, I am very new to mac and xcode Where can one find a good tutorial for this?
Regarding memory leaks, run XCode and then launch Start with Performance Tool -> Leaks Alternatively and necessarily for old pre-Panther users of XCode, it is possible to debug with guard malloc, detailed explanation in the Mac development docs, but here is a quick walk-through.
1,985,157
1,985,226
Issue writing to Excel file using C++
We have a requirement to parse a file and write the data to an Excel file using C++. I did a search and was able to find a project which serves my purpose. http://www.codeproject.com/KB/office/ExcelFormat.aspx Please find a few lines of code below where exactly the errors occur. typedef std::ctype<wchar_t> CT; CT const& ct = std::_USE(std::locale(), CT); ct.widen(&str[0], &*str.begin()+str.size(), &ret[0]); The above functions (e.g. _USE) seem to be related to VC++. Can somebody please let me know what exactly the above piece of code does, and how to implement the same with standard C++ functions? Please let me know if any other information is needed... Thanks!!
It is difficult to answer without further information. In any case I would try CT const & ct = std::use_facet<CT>( std::locale() );
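To spell out what the snippet is doing: std::_USE is just MSVC's internal shorthand for std::use_facet, and the code looks up the ctype<wchar_t> facet of a locale and uses it to widen a narrow string into a wide one. A portable sketch of the same operation (the function name and empty-string guard are mine):

```cpp
#include <locale>
#include <string>

// Portable equivalent of the MSVC-specific snippet: fetch the ctype<wchar_t>
// facet from a locale and widen a narrow string to a wide string with it.
std::wstring widen(const std::string& str,
                   const std::locale& loc = std::locale()) {
    if (str.empty()) return std::wstring();
    typedef std::ctype<wchar_t> CT;
    CT const& ct = std::use_facet<CT>(loc);
    std::wstring ret(str.size(), L'\0');
    ct.widen(str.data(), str.data() + str.size(), &ret[0]);
    return ret;
}
```
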
1,985,521
1,985,529
Explaining the declaration/definition of HRESULT
I just looked at the definition of HRESULT in VS2008. WinNT.h has the following line: typedef __success(return >= 0) long HRESULT; What exactly does it mean? It doesn't even look like C or C++ to my untrained eye
It is an annotation. In short, __success(expr) means that expr describes the conditions under which a function is considered to have succeeded. For functions returning HRESULT, that condition is that the return value (since HRESULT is a long) is non-negative. All functions returning HRESULT have this annotation applied to them because of this typedef. MSDN has probably way more detail than you will ever want: SAL Annotations, The Evolution of HRESULT From Win32, and Success and Failure Annotations.
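A small illustration of why "non-negative" is the success condition: an HRESULT is a 32-bit signed integer whose top (sign) bit is the severity bit, so failure codes come out negative. These helpers mirror what the real SUCCEEDED/FAILED macros in winerror.h compute; the names are altered here to avoid clashing with the actual Windows headers, and int is used as the 32-bit type so the sign bit lands where it should even on LP64 systems:

```cpp
// Stand-in for HRESULT outside <windows.h>; int is 32 bits on the platforms
// this sketch assumes, which keeps 0x80004005 (E_FAIL) negative.
typedef int Hresult32;

inline bool succeeded(Hresult32 hr) { return hr >= 0; } // __success(return >= 0)
inline bool failed(Hresult32 hr)    { return hr < 0; }

const Hresult32 kS_OK   = 0;                                   // success
const Hresult32 kE_FAIL = static_cast<Hresult32>(0x80004005u); // severity bit set
```
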
1,985,705
1,985,918
Need help on asynchrous non-blocking file loading with boost::asio and boost::iostreams ( or something different? )
I'm coding in C++, and I'm trying to load an image file asynchronously. After some research, I found some mentions of using boost::asio and boost::iostreams to do it. However, the documentation and examples for boost::asio are mostly socket related, so they don't help me much. Here is what I need: Load a file asynchronously and, upon load completion, execute a callback function. (In my case, the callback function executes a JavaScript function object using the v8 JavaScript engine.) The callback function must be executed within the same thread as the main function. (Because v8 is not thread safe.) Need to work on Linux and Windows. (Separate implementations are ok.) So, something like this would be really nice: async_read("test.jpg", &the_callback_function); The function should not block, and upon file load completion, it should run 'the_callback_function'. Edit: as joshperry pointed out, boost::asio might not be able to dispatch back to the main thread. So, I guess I don't have to limit myself to only boost::asio and boost::iostreams. Any C/C++ library that can help with this requirement should be fine. Thanks!
You can do what you want with a little more scaffolding, but in order for the callback to be executed on your main thread, the main thread must be waiting on something which signals that the callback is ready. Here's one way to do it. I'm assuming that your main thread already has some form of execution loop. Add a thread safe notification queue, which background threads can use to notify the main thread of callbacks to be executed. Modify your main execution loop to wait on that queue along with whatever other event sources it waits on (obviously, I'm assuming you have an event-driven loop in your main thread, adjust to taste if you don't :). Continue to use asio async_read with a callback, only the callback won't directly execute the completion function, instead it will queue it to the main thread for execution. It's not hard to build such a queue portably using STL and synchronization objects (such as those provided by Boost.Thread). Building it to scale to high-throughput without contention is another matter, but it doesn't sound like you need that kind of scaling. Even if other libraries will do all this for you, building something like this yourself is a great exercise for anyone doing multithreaded programming, particular if one or more of the components involved are not re-entrant.
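A minimal sketch of the thread-safe notification queue described above: background (I/O) threads push completion callbacks, and the main thread drains and executes them, so v8 is only ever touched from one thread. C++11 primitives are used here for brevity; Boost.Thread provides the same pieces (boost::mutex, boost::condition_variable) if C++11 is unavailable:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>

// Background threads call push(); the main thread's event loop calls
// waitAndPop() and invokes the returned callback on the main thread.
class CallbackQueue {
public:
    void push(std::function<void()> cb) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(cb));
        }
        cond_.notify_one();
    }

    // Blocks until a callback is available, then hands it back.
    std::function<void()> waitAndPop() {
        std::unique_lock<std::mutex> lock(mutex_);
        cond_.wait(lock, [this] { return !queue_.empty(); });
        std::function<void()> cb = std::move(queue_.front());
        queue_.pop();
        return cb;
    }

private:
    std::mutex mutex_;
    std::condition_variable cond_;
    std::queue<std::function<void()> > queue_;
};
```

In a real event loop you would wake the loop (e.g. via a pipe or an asio post to a main-thread io_service) rather than block in waitAndPop, but the queue itself stays the same.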
1,985,881
1,985,960
How to install and use libtool shared library (.lo files)?
So after I ran libtool and got out a libfoo.lo and foo.o file from my library source, how do I convert the libfoo.lo file into a normal Linux shared library, like libfoo.so.1.0.0 so I can install and link to it on my target system?
From the outputs mentioned in the question, it looks like you ran libtool with --mode=compile mode. You will need to run libtool again with --mode=link to produce .a and .so libraries. libtool is just a simple wrapper for gcc, ld, ar and ranlib which is needed to produce libraries. All it does is run gcc adding the necessary options to ensure that your static and shared libraries are created correctly. When compiling, libtool inserts the -fPIC flag to ensure the generation of position-independent code needed for shared libraries. The .o files are normal object files that can be archived into a static .a file. The .lo files are object files with position-independent code which can now be linked into a .so file. When linking, libtool will run ar to create a static library or ld to link the object files into a .so shared library. libtool can also install the library when desired using --mode=install. See http://www.gnu.org/software/libtool/manual/libtool.html for more info. Please remember that when building an executable there are always two stages, compiling and linking.
1,985,978
1,986,048
Combining a vector of strings
I've been reading Accelerated C++ and I have to say it's an interesting book. In chapter 6, I have to use a function from <algorithm> to concatenate from a vector<string> into a single string. I could use accumulate, but it doesn't help because string containers can only push_back characters. int main () { using namespace std; string str = "Hello, world!"; vector<string> vec (10, str); // Concatenate here? return 0; } How do I join the strings together?
Assuming this is question 6.8, it doesn't say you have to use accumulate - it says use "a library algorithm". However, you can use accumulate: #include <numeric> int main () { std::string str = "Hello World!"; std::vector<std::string> vec(10,str); std::string a = std::accumulate(vec.begin(), vec.end(), std::string("")); std::cout << a << std::endl; } All that accumulate does is set 'sum' to the third parameter, and then for all of the values 'val' from first parameter to second parameter, do: sum = sum + val it then returns 'sum'. Despite the fact that accumulate is declared in <numeric> it will work for anything that implements operator+() Note: This solution, while elegant, is inefficient, as a new string will be allocated and populated for each element of vec.
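If the quadratic copying matters, the usual fix is to reserve the final size once and append in place. A sketch (same result as the accumulate version, just without a fresh temporary string per element):

```cpp
#include <string>
#include <vector>

// Concatenate all strings in vec; reserve() up front avoids the repeated
// allocations that accumulate's sum-by-value incurs.
std::string join(const std::vector<std::string>& vec) {
    std::string::size_type total = 0;
    for (std::vector<std::string>::const_iterator it = vec.begin();
         it != vec.end(); ++it)
        total += it->size();

    std::string result;
    result.reserve(total);
    for (std::vector<std::string>::const_iterator it = vec.begin();
         it != vec.end(); ++it)
        result += *it;
    return result;
}
```

For the book's exercise the accumulate version is the intended answer; this is just the efficient alternative the note alludes to.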
1,986,199
1,986,369
Change string locale
I'm not very familiar with locale-specific conversions so I may be using the wrong terminology here. This is what I want to have happen. I want to write a function std::string changeLocale( const std::string& str, const std::locale& loc ) such that if I call this function as follows: changeLocale( std::string( "1.01" ), std::locale( "french_france" ) ) the output string will be "1,01" Thanks for your help!
Something like this ought to do the trick #include <iostream> #include <sstream> #include <locale> int main (int argc,char** argv) { std::stringstream ss; ss.imbue(std::locale("fr_FR.UTF8")); double value = 1.01; ss << value; std::cout << ss.str() << std::endl; return 0; } Should give you output of 1,01 (at least it does on g++). You might have to fiddle with the locale specification since it's very specific to platform.
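Wrapping that idea into the requested changeLocale shape might look like the sketch below: parse the number in the classic ("C") locale, then re-print it imbued with the target locale. Since named locales such as "french_france" or "fr_FR.UTF8" are not guaranteed to be installed, the demo builds a locale from a custom numpunct facet instead (the CommaPoint facet is a stand-in for a real French locale):

```cpp
#include <locale>
#include <sstream>
#include <string>

// Parse the numeric string under the "C" locale, then format it under loc.
std::string changeLocale(const std::string& str, const std::locale& loc) {
    double value;
    std::istringstream in(str);
    in.imbue(std::locale::classic());
    in >> value;

    std::ostringstream out;
    out.imbue(loc);
    out << value;
    return out.str();
}

// Hypothetical facet standing in for a French locale: ',' as decimal point,
// no thousands grouping. A named locale would carry the same information.
struct CommaPoint : std::numpunct<char> {
    char do_decimal_point() const { return ','; }
    std::string do_grouping() const { return ""; }
};
```

The locale takes ownership of the facet, so std::locale(std::locale::classic(), new CommaPoint) needs no manual delete.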
1,986,325
1,990,104
Operator overloading for a class containing boost::numeric::ublas::matrix<double>
I have a class which contains a few boost::numeric::ublas::matrix's within it. I would like to overload the class's operators (+-*/=) so that I can act on the set of matrices with one statement. However this seems to require temporary instances of my class to carry values around without modifying the original class. This makes sense to me, however, when I create a new instance within the function and return it I get: warning: reference to local variable ‘temp’ returned I'm pretty new to c++ and the examples of operator overloading seem to all return new temporary objects. I would also like to avoid the overhead in instantiating new matrix's, which leads me towards looping over all elements. How should I go about this? Performance is a concern.
If you're using boost already, I'd strongly suggest using boost::operators along with your example. You'll get several benefits: You'll only need to overload the +=, -=, *= and /= operators, and get +, -, * and / for free. You'll have an optimal implementation of the freely implemented operators. You'll get rid of the problem you posted, because you'll be implementing the op= versions, which require less design.
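To show the shape of what boost::operators generates, here it is written out by hand for a toy type (the two doubles stand in for your ublas matrices; deriving from boost::addable<MatrixSet> would produce an operator+ essentially identical to the one below):

```cpp
// The canonical pattern: the op= version mutates in place and returns *this
// by reference (no warning about returning a local), and the non-member +
// is derived from it.
struct MatrixSet {
    double a, b; // stand-ins for the boost::numeric::ublas matrices

    MatrixSet& operator+=(const MatrixSet& rhs) {
        a += rhs.a;
        b += rhs.b;
        return *this;
    }
};

inline MatrixSet operator+(MatrixSet lhs, const MatrixSet& rhs) {
    lhs += rhs; // 'lhs' is the one necessary copy, reused as the result
    return lhs; // returned by value, so no dangling reference
}
```

This also explains the original warning: operator+ must return its temporary by value, not by reference, since the local dies when the function returns.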
1,986,418
1,986,485
'typeid' versus 'typeof' in C++
I am wondering what the difference is between typeid and typeof in C++. Here's what I know: typeid is mentioned in the documentation for type_info which is defined in the C++ header file typeinfo. typeof is defined in the GCC extension for C and in the C++ Boost library. Also, here is test code that I've created where I've discovered that typeid does not return what I expected. Why? main.cpp #include <iostream> #include <typeinfo> //for 'typeid' to work class Person { public: // ... Person members ... virtual ~Person() {} }; class Employee : public Person { // ... Employee members ... }; int main () { Person person; Employee employee; Person *ptr = &employee; int t = 3; std::cout << typeid(t).name() << std::endl; std::cout << typeid(person).name() << std::endl; // Person (statically known at compile-time) std::cout << typeid(employee).name() << std::endl; // Employee (statically known at compile-time) std::cout << typeid(ptr).name() << std::endl; // Person * (statically known at compile-time) std::cout << typeid(*ptr).name() << std::endl; // Employee (looked up dynamically at run-time // because it is the dereference of a pointer // to a polymorphic class) } output: bash-3.2$ g++ -Wall main.cpp -o main bash-3.2$ ./main i 6Person 8Employee P6Person 8Employee
C++ language has no such thing as typeof. You must be looking at some compiler-specific extension. If you are talking about GCC's typeof, then a similar feature is present in C++11 through the keyword decltype. Again, C++ has no such typeof keyword. typeid is a C++ language operator which returns type identification information at run time. It basically returns a type_info object, which is equality-comparable with other type_info objects. Note, that the only defined property of the returned type_info object has is its being equality- and non-equality-comparable, i.e. type_info objects describing different types shall compare non-equal, while type_info objects describing the same type have to compare equal. Everything else is implementation-defined. Methods that return various "names" are not guaranteed to return anything human-readable, and even not guaranteed to return anything at all. Note also, that the above probably implies (although the standard doesn't seem to mention it explicitly) that consecutive applications of typeid to the same type might return different type_info objects (which, of course, still have to compare equal).
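The guaranteed properties — the equality comparisons — can be shown directly. The "unexpected" strings like "6Person" in the question's output are simply GCC's mangled, implementation-defined name() output (c++filt -t will demangle them); portable code should compare type_info objects rather than their names:

```cpp
#include <typeinfo>

struct Person   { virtual ~Person() {} }; // polymorphic: has a virtual function
struct Employee : Person {};

// Only equality/inequality of type_info objects is portable; name() strings
// are implementation-defined and may be mangled or even empty.
bool checkTypeidGuarantees() {
    Employee employee;
    Person* ptr = &employee;

    return typeid(ptr)  == typeid(Person*)    // static type of the pointer itself
        && typeid(*ptr) == typeid(Employee)   // dynamic type: run-time lookup
        && typeid(*ptr) != typeid(Person)
        && typeid(int)  != typeid(float);
}
```

Note that the run-time lookup for *ptr only happens because Person is polymorphic; without the virtual destructor, typeid(*ptr) would statically be Person.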
1,986,424
1,986,783
Continue C++ project in VB.Net?
I was given a half-finished project to finish. It was written in C++ using Visual Studio 2005. Is it possible to somehow continue the project in VB.Net? If it is, can you guide me? Thanks
If the app isn't done, then I don't recommend trying to do the "rest" in VB unless there's a reasonable segmentation of the existing and new code such that you could turn the existing C++ stuff into a library to be used by the VB code. But only if it makes any kind of sense (think encapsulation here -- is the code suitable to stand (or at least lean) on its own?) Otherwise, it sounds like a maintenance nightmare, where parts of a routine are in one codebase and parts are in another and debugging and enhancing become 10x as hard.
1,986,641
1,987,792
Why is it important for C / C++ Code to be compilable on different compilers?
I'm interested in different aspects of portability (as you can see when browsing my other questions), so I read a lot about it. Quite often, I read/hear that code should be written in a way that makes it compilable on different compilers. Without any real life experience with gcc / g++, it seems to me that it supports every major platform one can imagine, so code that compiles on g++ can run on almost any system. So why would someone bother to have his code run on the MS Compiler, the Intel compiler and others? I can think of some reasons, too. As the FAQ suggests, I'll try to post them as an answer, as opposed to including them in my own question. Edit: Conclusion You people got me completely convinced that there are several good reasons to support multiple compilers. There are so many reasons that it was hard to choose an answer to be the accepted one. The most important reasons for me: Contributors are much more likely to work on my project or just use it if they can use the compiler of their choice Being compilable everywhere, being usable with future compilers and tools, and adhering to the standards reinforce each other, so it's a good idea On the other hand, I still believe that there are other things which are more important, and now I know that sometimes it isn't important at all. And last of all, there was no single answer that could convince me not to choose GCC as the primary or default compiler for my project.
For most languages I care less about portability and more about conforming to international standards or accepted language definitions, from which properties portability is likely to follow. For C, however, portability is a useful idea, because it is very hard to write a program that is "strictly conforming" to the standard. (Why? Because the standards committees felt it necessary to grandfather some existing practice, including giving compilers some freedom you might not like them to have.) So why try to conform to a standard or make your code acceptable to multiple compilers as opposed to simply writing whatever gcc (or your other favorite compiler) happens to accept? Likely in 2015 gcc will accept a rather different language than it does today. You would prefer not to have to rewrite your old code. Perhaps your code might be ported to very small devices, where the GNU toolchain is not as well supported. If your code compiles with any ANSI C compiler straight out of the box with no errors and no warnings, your users' lives will be easier and your software may be widely ported and used. Perhaps someone will invent a great new tool for analyzing C programs, refactoring C programs, improving performance of C programs, or finding bugs in C programs. We're not sure what version of C that tool will work on or what compiler it might be based on, but almost certainly the tool will accept standard C. Of all these arguments, it's the tool argument I find most convincing. People forget that there are other things one can do with source code besides just compile it and run it. In another language, Haskell, tools for analysis and refactoring lagged far behind compilers, but people who stuck with the Haskell 98 standard have access to a lot more tools. A similar situation is likely for C: if I am going to go to the effort of building a tool, I'm going to base it on a standard with a lifetime of 10 years or so, not on a gcc version which might change before my tool is finished. That said, lots of people can afford to ignore portability completely. For example, in 1995 I tried hard to persuade Linus Torvalds to make it possible to compile Linux with any ANSI C compiler, not just gcc. Linus had no interest whatever—I suspect he concluded that there was nothing in it for him or his project. And he was right. Having Linux compile only with gcc was a big loss for compiler researchers, but no loss for Linux. The "tool argument" didn't hold for Linux, because Linux became so wildly popular; people building analysis and bug-finding tools for C programs were willing to work with gcc because operating on Linux would allow their work to have a big impact. So if you can count on your project becoming a wild success like Linux or Mosaic/Netscape, you can afford to ignore standards :-)
1,986,660
1,986,684
C++ length of file and vectors
Hi, I have a file with some text in it. Is there some easy way to get the number of lines in the file without traversing through it? I also need to put the lines of the file into a vector. I am new to C++, but I think vector is like ArrayList in Java, so I wanted to use a vector and insert things into it. So how would I do it? Thanks.
You would need to traverse the file to detect the number of lines (or at least call a library method that traverses the file). Here is some sample code for parsing a text file, assuming that you pass the file name as an argument, using the getline method:

#include <string>
#include <vector>
#include <fstream>
#include <iostream>

int main(int argc, char* argv[])
{
    std::vector<std::string> lines;
    std::string line;

    lines.clear();

    // open the desired file for reading
    std::ifstream infile (argv[1], std::ios_base::in);

    // read each line individually (watch out for Windows new lines)
    while (getline(infile, line, '\n'))
    {
        // add line to vector
        lines.push_back (line);
    }

    // do anything you like with the vector. Output the size for example:
    std::cout << "Read " << lines.size() << " lines.\n";
    return 0;
}

Update: The code could fail for many reasons (e.g. file not found, concurrent modifications to the file, permission issues, etc.). I'm leaving that as an exercise to the reader.
1,986,918
1,986,963
Why don't we have <cstdfloat> in C++?
Why doesn't C++ have a <cstdfloat> header for floats like it has <cstdint> for integers? EDIT: By <cstdfloat> I mean a header that provides typedefs for float and double, much like the qreal typedef in Qt. I hope my question is clear now.
Often an application needs exactly 16 bits for an integer for, say, a bitfield, but having exactly 16 bits for a float is kind of useless. Manipulating bits in an integer is easy, so having exactly 16 is nice. Manipulating bits in a float requires casting it to an integer, making a float16 type rather extraneous. By the same token, having an integral type capable of storing (and also performing math on) pointers is useful, but who ever needs to convert a pointer value to a floating point value, then perform floating point math on it, then convert it back to a pointer? The point is that most of the functionality in stdint.h (or cstdint for C++, except that stdint.h is a C99 header and isn't technically part of C++) doesn't apply to floating point values.
1,986,966
1,986,974
Does "&s[0]" point to contiguous characters in a std::string?
I'm doing some maintenance work and ran across something like the following: std::string s; s.resize( strLength ); // strLength is a size_t with the length of a C string in it. memcpy( &s[0], str, strLength ); I know using &s[0] would be safe if it was a std::vector, but is this a safe use of std::string?
A std::string's allocation is not guaranteed to be contiguous under the C++98/03 standard, but C++11 forces it to be. In practice, neither I nor Herb Sutter know of an implementation that does not use contiguous storage. Notice that the &s[0] thing is always guaranteed to work by the C++11 standard, even in the zero-length-string case. It would not be guaranteed if you did str.begin() or &*str.begin(), but for &s[0] the standard defines operator[] as:

"Returns: *(begin() + pos) if pos < size(), otherwise a reference to an object of type T with value charT(); the referenced value shall not be modified."

Continuing on, data() is defined as:

"Returns: A pointer p such that p + i == &operator[](i) for each i in [0, size()]."

(Notice the square brackets at both ends of the range.) Notice: pre-standardization C++0x did not guarantee &s[0] to work with zero-length strings (actually, it was explicitly undefined behavior), and an older revision of this answer explained this; this has been fixed in later standard drafts, so the answer has been updated accordingly.
1,986,969
1,986,986
static initialization in c
I have a function which is passed a list of ints, until one value is "-1", and calculates the minimum. If the function gets called a couple of times, it is supposed to return the minimum across all calls. So I wrote something like this:

int min_call(int num, ...)
{
    va_list argptr;
    int number;
    va_start(argptr, num);
    //static int min = va_arg(argptr, int); //////// the questioned line
    static int all_min = -1;
    int min = va_arg(argptr, int);
    if (min != -1)
    {
        while ((number = va_arg(argptr, int)) != -1)
        {
            if (number < min)
            {
                min = number;
            }
        }
    }
    if (min < all_min || all_min == -1)
    {
        all_min = min;
    }
    return all_min;
}

I want to know something about the marked line... why can't I use it? The compiler says the expression being used to initialize the static int is not constant. For some reason I remember that in C++ I can initialize a static variable this way and know that the initializing statement will be executed only once (the first time). If that line were available, it would have saved me a couple of variables. Is there a difference between C and C++ in this matter?
Yes, C++ allows statics to be lazily initialized at runtime. Effectively C++ turns the static initialization into this:

static int XX_first_time = 1;
if (XX_first_time)
{
    // run the initializer
    XX_first_time = 0;
}

While this is convenient, it is not thread safe. The standard does not require this to be thread safe, although some compilers have done that anyway (gcc 4.x does thread-safe initialization unless explicitly asked not to with -fno-threadsafe-statics). C requires statics to have their value determined at compile time. Yes, this is more limited, but it is in line with C doing little work for you behind your back (C can be thought of as portable assembly).
1,987,284
1,987,307
Native C++ and C# interop
So I'm architecting an application that does some work that necessarily must be C++, but MFC/ATL is too messy for my liking, so I had this brilliant idea of doing all the "thinking" code in native C++ and all the pretty UI code in C#. The problem, though, is interoperability between the two of them. Before I get too carried away with this, I was wondering if this is a solved problem and whether there's a really good way to do it. Note that I don't want to mix logic and display in the same module, as it gives rise to annoyingly high coupling. Here's what I have so far: So tell me, can it be done better?
The easiest way to handle this is to use C++/CLI and expose your logic as .NET types. It's very easy to wrap a native C++ class in a ref class that's usable directly from a C# user interface. That being said - this was my plan, originally, in my current project. My thinking was that I'd need the native code for some of the heavy math work we typically do. I've found, however, that it's been easier, faster, and nicer to just move most of my logic directly into C# (separated from the UI code, but still in a C# assembly) rather than try to implement it in C++. My experience has been that speed has not been an issue - unsafe C# code has nearly always managed to be as fast as or faster than the equivalent C++ when tuned, and it's easier to profile and tune the C# code.
1,987,286
1,987,290
Get type of variable
If I understand correctly, typeid can determine the actual type in polymorphism, while typeof cannot. Is it also true that their results are used for different purposes: the result of typeof can be used as a type name to define a variable, but the result of typeid cannot? Is there any way to both get the actual type under polymorphism and use the result as a type name to define another variable? I hope to get the derived class type from a pointer pointing to the base class, and define a variable of, or a pointer to, the derived class. Something like:

baseclass *p = new derivedclass;
typexxx(*p) *pp = dynamic_cast<typexxx(*p) *>(p);
// would like to convert the pointer from pointing to a base class
// to its derived class

Thank you very much!
C++0x will have decltype, which can be used like this:

int someInt;
decltype(someInt) otherIntegerVariable = 5;

but for plain old C++, unfortunately, no. I suppose that decltype won't really be much help either, though, since you want the polymorphic type, not the declared type. The most straightforward way to do what you want is to attempt a dynamic_cast to a particular type and check for NULL.

struct A { virtual ~A() {} };
struct B : public A {};
struct C : public A {};

int main()
{
    A* x = new C;
    if (B* b_ptr = dynamic_cast<B*>(x))
    {
        // it's a B
    }
    else if (C* c_ptr = dynamic_cast<C*>(x))
    {
        // it's a C
    }
}
1,987,413
1,987,495
Inclusion of unused symbols in object files by compiler in C vs C++
This might be a dumb question, but maybe someone can provide some insight. I have some global variables defined in a header file (yes, yes, I know that's bad, but this is just a hypothetical situation). I include this header file in two source files, which are then compiled into two object files. The global symbols are not referenced anywhere in the code. If the source files are C, then it looks like the compiler omits the global symbols and everything links without errors. If the source files are C++, the symbols are included in both object files and then I get linker errors. For C++ I am using extern "C" when I include the header. I am using the Microsoft compiler from VS2005. Here is my code:

Header file (test.h):

#ifndef __TEST_H
#define __TEST_H

/* declaration in header file */
void *ptr;

#endif

C source files:

test1.c

#include "test.h"

int main( )
{
    return 0;
}

test2.c

#include "test.h"

C++ source files:

test1.cpp

extern "C" {
#include "test.h"
}

int main( )
{
    return 0;
}

test2.cpp

extern "C" {
#include "test.h"
}

For C, the object files look something like this:

Dump of file test1.obj

File Type: COFF OBJECT

COFF SYMBOL TABLE
000 006DC627 ABS    notype       Static       | @comp.id
001 00000001 ABS    notype       Static       | @feat.00
002 00000000 SECT1  notype       Static       | .drectve
    Section length   2F, #relocs    0, #linenums    0, checksum        0
004 00000000 SECT2  notype       Static       | .debug$S
    Section length  228, #relocs    7, #linenums    0, checksum        0
006 00000004 UNDEF  notype       External     | _ptr
007 00000000 SECT3  notype       Static       | .text
    Section length    7, #relocs    0, #linenums    0, checksum 96F779C9
009 00000000 SECT3  notype ()    External     | _main
00A 00000000 SECT4  notype       Static       | .debug$T
    Section length   1C, #relocs    0, #linenums    0, checksum        0

String Table Size = 0x0 bytes

And for C++ they look something like this:

Dump of file test1.obj

File Type: COFF OBJECT

COFF SYMBOL TABLE
000 006EC627 ABS    notype       Static       | @comp.id
001 00000001 ABS    notype       Static       | @feat.00
002 00000000 SECT1  notype       Static       | .drectve
    Section length   2F, #relocs    0, #linenums    0, checksum        0
004 00000000 SECT2  notype       Static       | .debug$S
    Section length  228, #relocs    7, #linenums    0, checksum        0
006 00000000 SECT3  notype       Static       | .bss
    Section length    4, #relocs    0, #linenums    0, checksum        0
008 00000000 SECT3  notype       External     | _ptr
009 00000000 SECT4  notype       Static       | .text
    Section length    7, #relocs    0, #linenums    0, checksum 96F779C9
00B 00000000 SECT4  notype ()    External     | _main
00C 00000000 SECT5  notype       Static       | .debug$T
    Section length   1C, #relocs    0, #linenums    0, checksum        0

String Table Size = 0x0 bytes

I notice that _ptr is listed as UNDEF when I compile the C source, and it is defined when I compile the C++ source, which results in linker errors. I understand that this is not a good thing to do in real life; I am just trying to understand why this is different. Thanks.
In C, identifiers have three different types of "linkage":

- external linkage: roughly, this is what people mean by "global variables". In common terms, it refers to identifiers that are visible "everywhere".
- internal linkage: these are objects that are declared with the static keyword.
- no linkage: these are objects that are "temporary", or "automatic", such as variables declared inside a function (commonly referred to as "local variables").

For objects with external linkage, you can have only one definition. Since your header file defines such an object and is included in two C files, it is undefined behavior (but see below). The fact that your C compiler doesn't complain does not mean it is OK to do so in C. For this, you must read the C standard. (Or, assuming no bugs in your compiler, if it is invoked in a standards-compliant mode and it complains about something [gives a diagnostic], it probably means your program isn't compliant.) In other words, you can't test what is allowed by the language by trying something and checking if your compiler allows it. For this, you must read the standard. Note that there is a subtle difference between a definition and a tentative definition.
$ cat a.c
int x = 0;
$ cat b.c
#include <stdio.h>
int x = 0;
int main(void)
{
    printf("%d\n", x);
    return 0;
}
$ gcc -ansi -pedantic -W -Wall -c a.c
$ gcc -ansi -pedantic -W -Wall -c b.c
$ gcc -o def a.o b.o
b.o:(.bss+0x0): multiple definition of `x'
a.o:(.bss+0x0): first defined here
collect2: ld returned 1 exit status

Now, let's change a.c:

$ cat a.c
int x; /* Note missing " = 0", so tentative definition */

Now compile it:

$ gcc -ansi -pedantic -W -Wall -c a.c
$ gcc -o def a.o b.o
$ ./def
0

We can change b.c instead:

$ cat a.c
int x = 0;
$ cat b.c
#include <stdio.h>
int x; /* tentative definition */
int main(void)
{
    printf("%d\n", x);
    return 0;
}
$ gcc -ansi -pedantic -W -Wall -c a.c
$ gcc -ansi -pedantic -W -Wall -c b.c
$ gcc -o def a.o b.o
$ ./def
0

A "tentative definition" becomes a "real definition" in C if there is no other definition. So, we could have changed both files to contain int x;, and it would be legal C. So, you may have a tentative definition in the header file. We need to see the actual code to be sure. The C standard says that the following is undefined behavior (appendix J.2p1): "An identifier with external linkage is used, but in the program there does not exist exactly one external definition for the identifier, or the identifier is not used and there exist multiple external definitions for the identifier." C++ may have different rules.

Edit: As per this thread on comp.lang.c++, C++ does not have tentative definitions. The reason given is that this avoids having different initialization rules for built-in types and user-defined types. (The thread deals with the same question, btw.) Now I am almost sure that the OP's code contains what C calls a "tentative definition" in the header file, which makes it legal in C and illegal in C++. We will know for sure only when we see the code, though. More information on "tentative definitions" and why they are needed is in this excellent post on comp.lang.c (by Chris Torek).
1,987,541
4,427,060
Cannot marshal a struct that contains a union
I have a C++ struct that looks like this:

struct unmanagedstruct
{
    int flags;
    union
    {
        int offset[6];
        struct
        {
            float pos[3];
            float q[4];
        } posedesc;
    } u;
};

And I'm trying to marshal it like so in C#:

[StructLayout(LayoutKind.Explicit)]
public class managedstruct
{
    [FieldOffset(0)]
    public int flags;

    [FieldOffset(4), MarshalAsAttribute(UnmanagedType.ByValArray, SizeConst = 6)]
    public int[] offset;

    [StructLayout(LayoutKind.Explicit)]
    public struct posedesc
    {
        [FieldOffset(0), MarshalAsAttribute(UnmanagedType.ByValArray, SizeConst = 3)]
        public float[] pos;

        [FieldOffset(12), MarshalAsAttribute(UnmanagedType.ByValArray, SizeConst = 4)]
        public float[] q;
    }

    [FieldOffset(4)]
    public posedesc pose;
}

However, when I try loading data into my struct, only the first 3 elements of the offset array are there (the array's length is 3). I can confirm that their values are correct - but I still need the other 3 elements. Am I doing something obviously wrong? I'm using these functions to load the struct:

private static IntPtr addOffset(IntPtr baseAddress, int byteOffset)
{
    switch (IntPtr.Size)
    {
        case 4:
            return new IntPtr(baseAddress.ToInt32() + byteOffset);
        case 8:
            return new IntPtr(baseAddress.ToInt64() + byteOffset);
        default:
            throw new NotImplementedException();
    }
}

public static T loadStructData<T>(byte[] data, int byteOffset)
{
    GCHandle pinnedData = GCHandle.Alloc(data, GCHandleType.Pinned);
    T output = (T)Marshal.PtrToStructure(addOffset(pinnedData.AddrOfPinnedObject(), byteOffset), typeof(T));
    pinnedData.Free();
    return output;
}

Loading example:

managedstruct mystruct = loadStructData<managedstruct>(buffer, 9000);

Let me know if you need more information.
I'm not 100% sure about this, but I believe that the union means that the same memory is used for both members: in the case of the C++ structure, either an int[6] or a posedesc structure. A union occupies the size of its largest member, not the sum, so the size of the structure will be sizeof(int) for flags plus sizeof(posedesc) for the union (posedesc being the larger of the two members). Meaning, the union doesn't give you both an int[] and a posedesc; you have shared memory that can be viewed as either of those types in C++ land, but only as one or the other in managed land. So I think you probably need two managed structures, one that has offset and one that has posedesc. You can pick one or the other in your call to loadStructData. Optionally, you could create a byte[] field and have calculated properties that convert those bytes into the desired types.
1,987,602
1,987,647
Pure virtual method called
I understand why calling a virtual function from a constructor is bad, but I'm not sure why defining a destructor would result in a "pure virtual method called" exception. The code uses const values to reduce the use of dynamic allocation - possibly also the culprit.

#include <iostream>
using namespace std;

class ActionBase
{
public:
    ~ActionBase() { } // Comment out and works as expected
    virtual void invoke() const = 0;
};

template <class T>
class Action : public ActionBase
{
public:
    Action( T& target, void (T::*action)())
        : _target( target ), _action( action ) { }

    virtual void invoke() const
    {
        if (_action) (_target.*_action)();
    }

    T& _target;
    void (T::*_action)();
};

class View
{
public:
    void foo() { cout << "here" << endl; }
};

class Button : public View
{
public:
    Button( const ActionBase& action ) : _action( action ) { }

    virtual void mouseDown() { _action.invoke(); }

private:
    const ActionBase& _action;
};

int main( int argc, char* argv[] )
{
    View view;
    Button button = Button( Action<View>( view, &View::foo ) );
    button.mouseDown();
    return 0;
}
You have Undefined Behavior. As the parameter to Button's ctor is a const& from a temporary, it is destroyed at the end of that line, right after the ctor finishes. You later use _action, after Action's dtor has already run. Since this is UB, the implementation is allowed to let anything happen, and apparently your implementation happens to do something slightly different depending on whether you have a trivial dtor in ActionBase or not. You get the "pure virtual called" message because the implementation is providing behavior for calling ActionBase::invoke directly, which is what happens when the implementation changes the object's vtable pointer in Action's dtor. I recommend using boost.function or a similar 'action callback' library (boost has signals and signals2, for example).
1,987,679
1,987,716
C++0x static initializations and thread safety
I know that as of the C++03 standard, function-scope static initializations are not guaranteed to be thread safe: void moo() { static std::string cat("argent"); // not thread safe ... } With the C++0x standard finally providing standard thread support, are function-scope static initializations required to be thread safe?
It seems the initialization would be thread safe, since in the case where the object is dynamically initialized upon entering the function, the initialization is guaranteed to be executed in a critical section:

§6.7 [stmt.decl] 4: "...such an object is initialized the first time control passes through its declaration... If control enters the declaration concurrently while the object is being initialized, the concurrent execution shall wait for completion of the initialization..."

There is a potential edge case: if, after returning from main(), the destructor of a static object calls the function after the static local has already been destroyed, the behavior is undefined. However, that should be easy to avoid.
1,988,192
1,988,667
ACE_Mutex::acquire problem
I have a mutex in my class with the following definition:

ACE_Mutex m_specsMutex;

When I use the acquire() method that takes no parameters, everything works just fine. But when I use it with a time value (as follows), it just immediately returns -1:

m_specsMutex.acquire(ACE_OS::gettimeofday() + ACE_Time_Value(30))

I'm sure that this mutex hasn't been acquired anywhere else, so it shouldn't return -1. Am I doing anything wrong?
Browsing through the doxygen docs for ACE_Mutex, I don't understand how your code could possibly compile. The time-out value (tv) is passed either by reference or by pointer so that acquire() can update the absolute time at which the mutex was acquired. You cannot pass the expression directly. Try it like this:

ACE_Time_Value time = ACE_OS::gettimeofday() + ACE_Time_Value(30);
m_specsMutex.acquire(&time);
1,988,385
1,988,571
How are open source projects commonly organized and deployed?
I am looking for documentation on how the technical part of publishing the source of one's first open source projects is commonly done, in particular with library-intensive stuff in C/C++, Java, Python. To give an example, if I built a C++ project with an IDE like NetBeans and various libraries like Xerces-C and Boost, I would like to find out about these questions: Which are the most common tools to organize the build process for such a project outside of my own environment, and more importantly, how do I learn them in the way that it is 'generally being done'? I use many open source projects and can certainly read the build code (makefiles and config options and so on), but that doesn't tell me how to get there, what the important details are, and what is generally expected. Is there, for specific languages (like the ones mentioned), something like 'coding style' guidance on deployment? Are there open source projects that have guidelines on that? When deploying source code (rather than packages with apt/port/etc., where you can resolve dependencies), what is the typical way to deploy library dependencies? I know that I can read all the manpages and all the documentation, but I would like to read about the 'conventions' and how they are implemented and expected, rather than all the possible technical options. I found this one in another Stack Overflow post; it's nice, but not very specific: http://producingoss.com/en/producingoss.html
Let's look at one feature of open source. If you want to learn how something is deployed, download a couple of similar open-source projects and learn from them. So, find one that's organized like yours and study its sources. Why should it help? The thing is that open-source projects have to be able to build on users' machines easily. Otherwise no one will be able to contribute to them. Therefore all necessary information about how they should be deployed is usually included in INSTALL or README files within the sources you downloaded. They usually consist of several simple steps. For the same purpose, checking availability and versions of prerequisites is automated (in configure scripts), and sometimes such scripts even aid in installing them. What is generally expected is something like:

# Download sources (this line is read from your website)
wget http://myapp.org/myapp-source-2.15.tgz
tar -xzf myapp-source-2.15.tgz
cd myapp-2.15
less INSTALL       # read INSTALL file, where instructions about installing prerequisites are
./configure --help # Read help, learn about possible options
./configure --prefix=/install/here --without-sound
make
make install

Nowadays some applications use cmake instead of autotools (the stuff with the configure script). I doubt that Linux projects actually require NetBeans as a build system--that would be overkill. But this IDE seems to generate makefiles, so ship them. You may also commit IDE-specific project files into the repository, for convenience, but it shouldn't be the primary way of building your software. There are some more things users expect to find in your package:

- Licensing information (usually in a LICENSE file and at the beginning of every source file as well)
- Link to project homepage (where to report bugs)
- Coding guidelines (as a text file)
- Information for maintainers (how to bump version, how to add a module, etc.)
1,988,459
1,988,477
global low level keyboard hook being called when SendInput is made. how to prevent it?
I have a Win32 application written in C++ which sets a low-level keyboard hook. Now I want to SendInput to any app like Word/Notepad. How do I do this? I have already done enough with FindWindow/SendMessage; for all of those, I need to know the edit controls, and finding the edit control is very difficult. Since SendInput works for any Windows application, I want to use it. The problem is I get a call to my callback function with the pressed key. For example, I pressed A and I want to send the U+0BAF Unicode character to the active application window. In this case, assume it is Notepad. The problem is I get two characters, U+0BAF and A, in Notepad. A is being sent because I am calling CallNextHookEx( NULL, nCode, wParam, lParam); if I return 1 after SendInput, then nothing is sent to Notepad. Any suggestion?
If I understood your problem correctly, you should ignore "injected" key events in your hook procedure, like this:

LRESULT CALLBACK hook_proc( int code, WPARAM wParam, LPARAM lParam )
{
    KBDLLHOOKSTRUCT* kbd = (KBDLLHOOKSTRUCT*)lParam;

    // Ignore injected events
    if (code < 0 || (kbd->flags & LLKHF_INJECTED))
    {
        return CallNextHookEx(kbdhook, code, wParam, lParam);
    }
    ...

Update: additionally, you have to eat the characters and notify some other routine of a character press through Windows messages. Example:

...
// Pseudocode
if (kbd->vkCode is character)
{
    if (WM_KEYDOWN == wParam)
    {
        PostMessage(mainwnd, WM_MY_KEYDOWN, kbd->vkCode, 0);
        return 1; // eat the char, ie 'a'
    }
}
return CallNextHookEx(kbdhook, code, wParam, lParam);

And, in some other module, you handle WM_MY_KEYDOWN:

#define WM_MY_KEYDOWN (WM_USER + 1)

and call the appropriate routine that will generate new key events.
1,988,642
1,988,650
set visual studio 2008 compiler for c not c++
I have installed Visual Studio 2008 and I want to create some simple applications using the C language. I do this by creating C++ console applications, but I want the compiler to work as a C compiler, not a C++ one. Is there any way to accomplish this, or do I need another compiler if I want to deal with C?
Use the .c file extension instead of .cpp; those files will be compiled as C-only code by default in a C/C++ Visual Studio project. (You can also force C compilation regardless of extension with the /TC compiler switch.)
1,988,685
2,125,751
Is it possible to specify specific flags/define for DLL/SO build?
How can I specify some unique flags for DLL-only builds? By default libtool adds -DDLL_EXPORT, which is fine for most projects that follow GNU conventions, but if I work, for example, with Boost, I may need to specify for my library the flags:

-DDLL_EXPORT -DBOOST_ALL_DYN_LINK

for DLL-only builds. Also, sometimes I want conditional builds, so I need some specific defines for DLL/SO builds. Unfortunately I can't find a way to do this with libtool; what flags should I use?

P.S.: Don't even try to suggest moving to CMake.

Example: I use library foo that links to bar and requires -DBAR_EXPORTS to get symbols for the dynamic library only. Is there something like:

libfoo_la_dynamic_CXXFLAGS = -DBAR_EXPORTS

Anybody?

Alternative solution (quite ugly): Create a file defines.h:

#if defined(PIC) || defined(DLL_EXPORT)
#define BAR_EXPORTS
#endif

And then:

libfoo_la_CXXFLAGS += -include defines.h

Ugly, but should work.
You can disable building the shared library by default with LT_INIT([disable-shared]); then you can use AM_CONDITIONAL combined with --enable-shared and set the extra definitions if a shared library is explicitly requested. In other words, enable building static or shared, but not both at the same time.
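A sketch of how those pieces could fit together (hypothetical fragments; the conditional name SHARED_BUILD and the defines are placeholders for your project, not libtool built-ins):

```
# configure.ac
LT_INIT([disable-shared])
# after LT_INIT, $enable_shared reflects whether --enable-shared was requested
AM_CONDITIONAL([SHARED_BUILD], [test "x$enable_shared" = xyes])

# Makefile.am
if SHARED_BUILD
libfoo_la_CPPFLAGS = -DBAR_EXPORTS -DBOOST_ALL_DYN_LINK
endif
```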
1,988,814
1,988,851
Changing a pointer to an array in C
I have got a structure, and in it a pointer to a 2D array. But when I try to assign an actual 2D array to that pointer, I do not succeed - the compiler says that my pointer is a pointer to a 1D array. Here's my code:

typedef GLfloat Vertex2f[2];
typedef GLfloat TextureCoordinate[2];

typedef struct
{
    GLuint texture_name;                // OpenGL ID of texture used for this sprite
    Vertex2f *vertices;                 // array of vertices
    TextureCoordinate *texture_coords;  // texture vertices to match
    GLubyte *vertex_indices;
} game_sprite;

void loadState()
{
    game_sprite ballSprite;
    createAndLoadTexture("ball.png", &ballSprite.texture_name);

    const Vertex2f tempVerticesArray[4] = {
        {-100.0f, -100.0f},
        {-100.0f,  100.0f},
        { 100.0f,  100.0f},
        { 100.0f, -100.0f}
    };

    ballSprite.vertices = &tempVerticesArray; // The problem appears to be here
}

How can I make it work? Thanks.
You have two problems. First is that tempVerticesArray is const. You can't assign a pointer to a const value (&tempVerticesArray) to a pointer to a non-const variable (ballSprite.vertices) without a typecast, so the compiler is complaining. You should modify the vertices data member to be of the type const Vertex2f *, assuming you're not actually modifying that data ever. The second problem is that as soon as loadState() ends, the variable tempVerticesArray goes out of scope, and so any dangling pointers to it (specifically ballSprite.vertices) are invalid. You should make tempVerticesArray a static variable so it's not a stack variable that can go out of scope. This is assuming that the ballSprite object is used after that function ends, which I'm guessing it does based on context. If you do need to modify your vertices after initialization, you'll need to allocate for each ballSprite its own set of vertex data (e.g. using malloc()) and copy the vertex data in (e.g. using memcpy()). If you don't, all ballSprite instances will share a pointer to the same vertex data, and when you modify it, they will all be affected.
1,988,849
2,030,130
Using enums with Pococapsule (C++ IoC-container)
Is there a way of supplying enum values as method-args in PocoCapsule without resorting to factory methods? Let's say I have a class that takes an enum value in its constructor:

class A
{
    A(MyEnum val);
};

Using PocoCapsule XML configuration, I would like to express something like this:

<bean id="A" class="A">
    <method-arg type="MyEnum" value="MyEnum::Value1" />
</bean>

However, since PocoCapsule's basic types only include built-in types such as short, char, etc., this is not possible. How would I go about instantiating a class A using PocoCapsule? I could resort to using factory methods, something like this:

MyEnum GetMyEnumValue1()
{
    return MyEnum::Value1;
}

<bean id="A" class="A">
    <method-arg factory-method="GetMyEnumValue1" />
</bean>

which isn't very practical. I would have to implement a new factory method for every possible value of each and every enum used. Some would argue that enums shouldn't be passed in constructors or setter methods, as it is a sign of a class doing too much. Yes, I agree. However, there is a lot of third-party code and C++ frameworks out there that use this style, so I need to be able to do this.

Edit: The issue was resolved on PocoCapsule's discussion forum. The workaround in this specific case was to have factory methods perform the desired action. It is not as flexible as declaring enum use in the XML config file, but it moved the project forward. Thanks, Ke, for your help.
(repost, as the XML code was filtered out in previous reply) In C/C++ enums are able to be passed as int implicitly, therefore, you can simply have type="long" in the method-arg element. You can also use the DSM feature to define your own extend schema that supports your specific enum (it should be similar to the user example in examples/basic-ioc/ext-schema where DSM is used to typesafely support user defined Map type). -Ke
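The implicit enum-to-integral conversion the answer relies on (so that type="long" works in the XML) can be sketched like this; MyEnum, A, and createA are made-up names for illustration, not Pococapsule API:

```cpp
// Made-up types illustrating the answer's point: enums round-trip through
// an integral type such as long.
enum MyEnum { Value1 = 0, Value2 = 1 };

struct A {
    explicit A(MyEnum v) : val(v) {}
    MyEnum val;
};

// A factory the container could call with a plain long (type="long" in the
// config); casting back to the enum is well-defined for in-range values.
A createA(long raw) {
    return A(static_cast<MyEnum>(raw));
}
```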
1,988,914
1,989,073
Avoid excessive function parameters: class-centered or function-centered approach?
How would you fix the following bad code that passes too many parameters around? void helper1(int p1, int p3, int p5, int p7, int p9, int p10) { // ... } void helper2(int p1, int p2, int p3, int p5, int p6, int p7, int p9, int p10) { // ... } void foo(int p1, int p2, int p3, int p4, int p5, int p6, int p7, int p8, int p9, int p10) { helper1(p1, p3, p5, p7, p9, p10); helper2(p1, p2, p3, p5, p6, p7, p9, p10); } I see two different approaches: Approach 1: Put all functions in a class class Foo { private: int p1, p2, p3, p4, p5, p6, p7, p8, p9, p10; void helper1() {} void helper2() {} public: void foo() { helper1(); helper2(); } // add constructor }; Approach 2: Just pass parameters as a class struct FooOptions { int p1, p2, p3, p4, p5, p6, p7, p8, p9, p10; }; void helper1(const FooOptions& opt) { // ... } void helper2(const FooOptions& opt) { // ... } void foo(const FooOptions& opt) { helper1(opt); helper2(opt); } What are the advantages and disadvantages of the approaches? An advantage of Approach 1 is that -- if you make the helper functions virtual -- you can subclass and override them, adding flexibility. But then, in my case (outside of the toy mini example that I gave) such helpers are often templated, so they cannot be virtual anyway. An advantage of Approach 2 is that the helper functions can easily be called from other functions, too. (This question is related, but does not discuss these two alternatives.)
Short Answer: Happy New Year! I'd avoid option #1 and only go with option #2 if the parameters can be separated into clear and logical groups that make sense away from your function. Long Answer I have seen many examples of functions as you described from coworkers. I'll agree with you on the fact that it's a bad code smell. However, grouping parameters into a class just so you don't have to pass parameters and deciding rather arbitrarily to group them based on those helper functions can lead to more bad smells. You have to ask yourself if you're improving readability and understanding for others who come after you. calcTime(int p1, int p2, int p3, int p4, int p5, int p6) { dist = calcDistance( p1, p2, p3 ); speed = calcSpeed( p4, p5, p6 ); return speed == 0 ? 0 : dist/speed; } There you can group things to be more understandable because there is a clear distinction amongst parameters. Then I would suggest approach #2. On the other hand, code I've been handed often looks like: calcTime(int p1, int p2, int p3, int p4, int p5, int p6) { height = height( p1, p2, p3, p6 ); type = getType( p1, p4, p5, p6 ); if( type == 4 ) { return 2.345; //some magic number } value = calcValue( p2, p3, type ); //what a nice variable name... a = resetA( p3, height, value ); return a * value; } which leaves you with a feeling that these parameters aren't exactly friendly to breaking up into something meaningful class-wise. Instead you'd be better served flipping things around such as calcTime(Type type, int height, int value, int p2, int p3) and then calling it calcTime( getType( p1, p4, p5, p6 ), height( p1, p2, p3, p6 ), p3, p4 ); which may send shivers up your spine as that little voice inside your head screams "DRY, DRY, DRY!" Which one is more readable and thus maintainable? Option #1 is a no-go in my head as there is a very good possibility someone will forget to set one of the parameters.
This could very easily lead to a hard to detect bug that passes simple unit tests. YMMV.
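The grouping idea from the answer, written out as compilable code; the helper bodies are invented stand-ins purely so the sketch runs, and the real ones would do domain-specific math:

```cpp
// Stand-in helpers, assumed for illustration only.
int calcDistance(int a, int b, int c) { return a + b + c; }
int calcSpeed(int a, int b, int c)    { return a * b * c; }

int calcTime(int p1, int p2, int p3, int p4, int p5, int p6) {
    int dist  = calcDistance(p1, p2, p3);  // p1..p3 clearly belong together
    int speed = calcSpeed(p4, p5, p6);     // p4..p6 form the second group
    return speed == 0 ? 0 : dist / speed;  // guard the divide-by-zero
}
```

When the parameters split this cleanly, each group is a natural candidate for its own small struct (Approach 2).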
1,988,973
1,988,979
How to reinterpret the bits of a float as an int
What is the Java equivalent of following C++ code? float f=12.5f; int& i = reinterpret_cast<int&>(f);
float f = 12.5f; int i = Float.floatToIntBits(f);
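For completeness, a portable C++ version of the same bit reinterpretation; copying bytes with memcpy avoids the undefined behavior that reinterpret_cast<int&> has under strict-aliasing rules:

```cpp
#include <cstring>

// Copy the float's bytes into an int of the same size -- the portable way
// to do what Java's Float.floatToIntBits does (and what the
// reinterpret_cast in the question attempts). Assumes 32-bit int and
// IEEE-754 float, which holds on common platforms.
int floatToIntBits(float f) {
    int i;
    std::memcpy(&i, &f, sizeof i);
    return i;
}
```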
1,988,978
1,989,082
Overloading for_each for specific iterator types
I'm using a typedef to define the type of a container in my program so that I can easily switch between using normal STL containers and STXXL containers, along the lines of: typedef stxxl::vector<Data> MyContainer; or typedef std::vector<Data> MyContainer; One difficulty is that STXXL provides a special version of std::for_each, stxxl::for_each that is optimised for use with STXXL containers. I'd prefer to use this function when MyContainer is typedeffed as a stxxl::vector. One solution would be to define my own for_each function that calls the right for_each function and use that whenever I want to call for_each. Another solution that I'm currently investigating is to overload/specialize std::for_each so that it calls stxxl::for_each whenever it is called with a stxxl::vector<Data>::(const_)iterator as first and second argument. I cannot get the second idea to work though. I've tried the following: namespace std { template <class UnaryFunction> UnaryFunction for_each(stxxl::vector<Data>::const_iterator first, stxxl::vector<Data>::const_iterator last, UnaryFunction f) { return stxxl::for_each(first, last, f, 4); } } Along with a similar function for non-const iterators. They don't get called though. What would be the preferred solution to this problem? How can I get my version of std::for_each for stxxl::vector iterators to get called? Update: I got the second idea to work now, as posted. The problem was that I was including the wrong file (ouch...). The first question remains though: What is the preferred solution to this problem? Is it okay to overload std::for_each, as the std namespace is not intended for mere mortals?
You can specialize templates in std (17.4.3.1), but you can't add overloads. Your definition is an overload, not a specialization of the standard for_each template, and in any case functions can't be partially specialized. So it's undefined to put any definition in namespace std that might do what you want. ADL is supposed to make this work smoothly without any need for that, though. I assume the stxxl iterators are in the stxxl namespace, so for_each(first, last, f, 4); should call stxxl::for_each. If you want std::for_each, you fully qualify the name when you call it.
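A toy model of the ADL mechanism the answer describes; stxxl_like is a stand-in namespace with a minimal fake iterator, not the real STXXL API:

```cpp
namespace stxxl_like {                    // stand-in for the stxxl namespace
    struct Iter { int* p; };              // minimal iterator-like type

    // An unqualified for_each(first, last, f) call whose arguments are of
    // type Iter will find this overload via argument-dependent lookup.
    template <class F>
    F for_each(Iter first, Iter last, F f) {
        for (int* p = first.p; p != last.p; ++p) f(*p);
        return f;
    }
}

struct Sum {
    int total;
    void operator()(int v) { total += v; }
};

int sumRange(stxxl_like::Iter b, stxxl_like::Iter e) {
    // Unqualified call: ADL picks stxxl_like::for_each because Iter lives
    // in that namespace -- no additions to namespace std are needed.
    Sum s = { 0 };
    return for_each(b, e, s).total;
}
```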
1,989,222
1,989,433
How to calculate quantization error from 16bit to 8bit?
Does anyone know how to calculate the error of quantizing from 16bit to 8bit? I have looked at the Wikipedia article about Quantization, but it doesn't explain this. Can anyone explain how it is done? Lots of love, Louise Update: My function looks like this. unsigned char quantize(double d, double max) { return (unsigned char)((d / max) * 255.0); }
It is there in the Wikipedia article, expressed as signal to noise ratio. But I guess the real question is, in what units do you want the result? As a signal to noise power ratio, it's 20 log(2^8) ≈ 48 dB. You probably need to read this: http://en.wikipedia.org/wiki/Decibel
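The rule of thumb as a few lines of code, so the numbers can be checked; 20·log10(2^n) works out to roughly 6.02 dB per bit:

```cpp
#include <cmath>

// Ideal quantization SNR in dB for an n-bit quantizer: 20*log10(2^n).
double quantizationSnrDb(int bits) {
    return 20.0 * bits * std::log10(2.0);
}
```

So 8-bit audio gives about 48 dB of dynamic range and 16-bit about 96 dB; the difference between the two is the headroom lost by the 16-to-8 quantization.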
1,989,552
2,118,537
GCC error with variadic templates: "Sorry, unimplemented: cannot expand 'Identifier...' into a fixed-length argument list"
While doing variadic template programming in C++11 on GCC, once in a while I get an error that says "Sorry, unimplemented: cannot expand 'Identifier...' into a fixed-length argument list." If I remove the "..." in the code then I get a different error: "error: parameter packs not expanded with '...'". So if I have the "..." in, GCC calls that an error, and if I take the "..." out, GCC calls that an error too. The only way I have been able to deal with this is to completely rewrite the template metaprogram from scratch using a different approach, and (with luck) I eventually come up with code that doesn't cause the error. But I would really like to know what I was doing wrong. Despite Googling for it and despite much experimentation, I can't pin down what it is that I'm doing differently between variadic template code that does produce this error, and code that does not have the error. The wording of the error message seems to imply that the code should work according to the C++11 standard, but that GCC doesn't support it yet. Or perhaps it is a compiler bug? Here's some code that produces the error. Note: I don't need you to write a correct implementation for me, but rather just to point out what it is about my code that is causing this specific error // Used as a container for a set of types. template <typename... Types> struct TypePack { // Given a TypePack<T1, T2, T3> and T=T4, returns TypePack<T1, T2, T3, T4> template <typename T> struct Add { typedef TypePack<Types..., T> type; }; }; // Takes the set (First, Others...) and, while N > 0, adds (First) to TPack. // TPack is a TypePack containing between 0 and N-1 types. template <int N, typename TPack, typename First, typename...
Others> struct TypePackFirstN { // sorry, unimplemented: cannot expand ‘Others ...’ into a fixed-length argument list typedef typename TypePackFirstN<N-1, typename TPack::template Add<First>::type, Others...>::type type; }; // The stop condition for TypePackFirstN: when N is 0, return the TypePack that has been built up. template <typename TPack, typename... Others> struct TypePackFirstN<0, TPack, Others...> //sorry, unimplemented: cannot expand ‘Others ...’ into a fixed-length argument list { typedef TPack type; }; EDIT: I've noticed that while a partial template instantiation that looks like does incur the error: template <typename... T> struct SomeStruct<1, 2, 3, T...> {}; Rewriting it as this does not produce an error: template <typename... T> struct SomeStruct<1, 2, 3, TypePack<T...>> {}; It seems that you can declare parameters to partial specializations to be variadic; i.e. this line is OK: template <typename... T> But you cannot actually use those parameter packs in the specialization, i.e. this part is not OK: SomeStruct<1, 2, 3, T...> The fact that you can make it work if you wrap the pack in some other type, i.e. like this: SomeStruct<1, 2, 3, TypePack<T...>> to me implies that the declaration of the variadic parameter to a partial template specialization was successful, and you just can't use it directly. Can anyone confirm this?
There is a trick to get this to work with gcc. The feature isn't fully implemented yet, but you can structure the code to avoid the unimplemented sections. Manually expanding a variadic template into a parameter list won't work. But template specialization can do that for you. template< char head, char ... rest > struct head_broken { static const char value = head; }; template< char ... all > struct head_works; // make the compiler happy template< char head, char ... rest > struct head_works<head,rest...> // specialization { static const char value = head; }; template<char ... all > struct do_head { static const char head = head_works<all...>::value; //Sorry, unimplemented: cannot expand 'all...' into a fixed-length argument list //static const char head = head_broken<all...>::value; }; int main() { std::cout << head_works<'a','b','c','d'>::value << std::endl; std::cout << head_broken<'a','b','c','d'>::value << std::endl; std::cout << do_head<'a','b','c','d'>::head << std::endl; } I tested this with gcc 4.4.1
1,989,708
1,989,780
Type casting with printf statements under Mac OSX and Linux
I have some piece of code that behaves differently under Mac OSX and Linux (Ubuntu, Fedora, ...). This is regarding type casting in arithmetic operations within printf statements. The code is compiled with gcc/g++. The following #include <stdio.h> int main () { float days = (float) (153*86400) / 86400.0; printf ("%f\n", days); float foo = days / 30.6; printf ("%d\n", (int) foo); printf ("%d\n", (int) (days / 30.6)); return 0; } generates on Linux 153.000000 5 4 and on Mac OSX 153.000000 5 5 Why? To my surprise this here works on both Mac OSX and Linux printf ("%d\n", (int) (((float)(153 * 86400) / 86400.0) / 30.6)); printf ("%d\n", (int) (153 / 30.6)); printf ("%.16f\n", (153 / 30.6)); Why? I don't have a clue at all. THX.
try this: #include <stdio.h> int main () { float days = (float) (153*86400) / 86400.0; printf ("%f\n", days); float foo = days / 30.6; printf ("%d\n", (int) foo); printf ("%d\n", (int) (days / 30.6)); printf ("%d\n", (int) (float)(days / 30.6)); return 0; } Notice what happens? The double to float conversion is the culprit. Remember float is always converted to double in a varargs function. I'm not sure why macos would be different, though. Better (or worse) implementation of IEEE arithmetic?
1,989,796
1,999,622
Qt question to fullscreen flash application
I am using Qt to develop an application and inside we have access to select flash streaming videos like youtube. Is there a way to programmatically fullscreen the flash application without requiring interaction from the user? I am using a "QWebView" control.
I would say: locate the button for the fullscreen application on the page, and send a click using QEVent. Tricky, but might work. If the button is inside the flash application, you will have difficulties to locate it but if you succeed, you can probably send the click to the flash application area.
1,989,805
1,989,837
Different outputs after debugging and compiling C++ programs
I'm running CodeBlocks on the MingW compiler in an XP virtual machine. I wrote in some simple code, accessible at cl1p , which answers the algorithm question at CodeChef (Well it only answers it partly, as I have not yet included the loop for multiple test cases. However, my problem is, that while running it in debug mode, it gives the correct output of 5, for the input: 3 1 2 1 1 2 3 However, when I build and run it, it gives the absurd, huge output 131078, what seems like garbage to me. I do not understand how the hell this is happening, but am guessing it's something to do with the dynamic memory allocation. What's the problem here, and how can I fix it? I even ran it through the online compiler at BotSkool, and it worked fine. After adding the loop for test cases, the code even worked correctly on CodeChef! #include <iostream> using namespace std; int main() { // Take In number of rows int numofrows; cin >> numofrows; // Input Only item in first row int * prevrow; prevrow = new int[1]; cin >> prevrow[0]; // For every other row for (int currownum = 1; currownum < numofrows; currownum++) { // Declare an array for that row's max values int * currow; currow = new int[currownum+1]; int curnum; cin >> curnum; // If its the first element, max is prevmax + current input currow[0] = prevrow[0] + curnum; // for every element int i = 1; for (; i <= currownum; i++) { cin >> curnum; // if its not the first element, check whether prevmax or prev-1max is greater. Add to current input int max = (prevrow[i] > prevrow[i-1]) ? prevrow[i] : prevrow[i-1]; // save as currmax. currow[i] = max + curnum; } // save entire array in prev prevrow = new int[i+1]; prevrow = currow; } // get highest element of array int ans = 0; for (int j=0; j<numofrows; j++) { if (prevrow[j] > ans) { ans = prevrow[j]; } } cout << ans; }
For one thing, this: //save entire array in prev prevrow = new int [i+1]; prevrow = currow; copies the pointer, not the whole array.
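The difference between copying the pointer and copying the elements, in miniature; with std::copy (or simply a std::vector) each row gets its own storage instead of aliasing the previous one:

```cpp
#include <algorithm>

// Copies `n` ints into freshly allocated storage, so the result no longer
// aliases `src` -- unlike `prevrow = currow`, which only copies the pointer
// (and leaks the array allocated on the line before it).
int* cloneRow(const int* src, int n) {
    int* dst = new int[n];
    std::copy(src, src + n, dst);
    return dst;
}
```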
1,989,819
23,449,090
Theory on error handling?
Most advice concerning error handling boils down to a handful of tips and tricks (see this post for example). These hints are helpful but I think they don't answer all questions. I feel that I should design my application according to a certain philosophy, a school of thought that provides a strong foundation to build upon. Is there such a theory on the topic of error handling? Here's a few practical questions: How to decide if an error should be handled locally or propagated to higher level code? How to decide between logging an error, or showing it as an error message to the user? Is logging something that should only be done in application code? Or is it ok to do some logging from library code. In case of exceptions, where should you generally catch them? In low-level or higher level code? Should you strive for a unified error handling strategy through all layers of code, or try to develop a system that can adapt itself to a variety of error handling strategies (in order to be able to deal with errors from 3rd party libraries). Does it make sense to create a list of error codes? Or is that old fashioned these days? In many cases common sense is sufficient for developing a good-enough strategy to deal with error conditions. However, I would like to know if there is a more formal/"scholarly" approach? PS: this is a general question, but C++ specific answers are welcome too (C++ is my main programming language for work).
A couple of years ago I thought about exactly the same question :) After searching and reading several things, I think that the most interesting reference I found was Patterns for Generation, Handling and Management of Errors from Andy Longshaw and Eoin Woods. It is a short and systematic attempt to cover the basic idioms you mention and some others. The answer to these questions is quite controversial, but the authors above were brave enough to present their position at a conference, and then put their thoughts on paper.
1,989,842
1,989,885
Storing several data into a file
Can somebody give me some tips about storing a lot of data into a file? For example: I'm creating an audio sequencer with C++, and I want to save all the audio sample names (the file paths), info about the audio tracks in the project (name, volume, mute, solo, etc.) and where the samples are placed on the timeline into a file. I really have no idea what's the best way to do this. I don't want to use 3th party library's for this, and I'm a beginning programmer of the language. Thanks!
When you want to save different information in the same file, there are two popular ways to go: fixed-length fields and delimited fields. With fixed-length fields, each part is stored in a chunk of the same size. So if you wanted to store 5 things, and you store them in 80-character blocks, you can go to offset 160 in the file to read the third one. In delimited files, you put a character (or series of characters) between each piece of data, which can be of any length. Since your data can vary greatly in length, I would suggest using delimited storage, probably with each one on a separate line ("\n" printed between each one).
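A sketch of the newline-delimited idea, using a stringstream in place of a real file stream; the '|' sub-delimiter inside each record and the field set (name, volume, mute) are arbitrary choices for illustration:

```cpp
#include <sstream>
#include <string>

// One record per line; fields inside a record separated by '|'.
void writeTrack(std::ostream& out, const std::string& name,
                int volume, bool mute) {
    out << name << '|' << volume << '|' << mute << '\n';
}

// Reads back just the name field of the next record.
std::string readTrackName(std::istream& in) {
    std::string name;
    std::getline(in, name, '|');  // stop at the first delimiter
    std::string rest;
    std::getline(in, rest);       // consume the remainder of the line
    return name;
}
```

The same functions work unchanged with an std::ofstream/std::ifstream on a real project file.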
1,989,908
1,989,973
Interlocked*64 on WinXP 32bit
How should I implement these 64-bit interlocked functions on WinXP? Of course I can use full mutex, but I think it's needlessly heavyweight for this task. There must be some better way.
You shouldn't. This is much more complex than you think. If you insist, your best bet is to use a critical section to make sure you get the barriers right. If you really think a critical section is too heavy weight, read up on memory barriers
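A portable sketch of the lock-guarded approach; it is shown with std::mutex so it stands alone, whereas on WinXP the lock would be a CRITICAL_SECTION (EnterCriticalSection/LeaveCriticalSection) around the same 64-bit operations:

```cpp
#include <mutex>

// Every access to the 64-bit value goes through one lock, which also
// provides the memory barriers the answer warns about.
class Locked64 {
    std::mutex m_;
    long long value_;
public:
    Locked64() : value_(0) {}
    long long increment() {
        std::lock_guard<std::mutex> g(m_);
        return ++value_;
    }
    long long load() {
        std::lock_guard<std::mutex> g(m_);
        return value_;
    }
};
```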
1,989,969
1,990,028
C++ multiple inheritance off identically named operator
Is it possible to inherit identically named operators which differ only in return type, from two different abstract classes. If so, then: what is the syntax for implementing operators what is the syntax for using/resolving operators what is the overhead in general case, same as for any other virtual function? if you can provide me with a reference or sample code that would be helpful thanks struct abstract_matrix { virtual double& operator()(int i, int j); }; struct abstract_block_matrix { virtual double* operator()(int i, int j); }; struct block_matrix : abstract_matrix, abstract_block_matrix { }; block_matrix needs to provide implementations for both operators, so that it is either a matrix or a block matrix, depending on the context. I do not know how to provide an implementation specific to the block_matrix class. Right now, it is done by passing the wrapped object type as the last argument, but that does not seem very clean. I would like to retain pure matrix notation.
The return type of a function is not part of its signature, so you can't have two operator()(i, j) overloads in block_matrix - that would be an ambiguous call. So multiple inheritance is sort of a red herring here on this point. You just can't do that. What are you really trying to do, and why? In any event, for your other question: virtual operators are exactly like virtual functions in terms of performance and the way they operate. There are just slight semantic differences in how you use them - but under the hood they're just functions like any other.
1,989,977
1,990,160
Writing data chunks while processing - is there a convergence value due to hardware constraints?
I'm processing data from a hard disk from one large file (processing is fast and not a lot of overhead) and then have to write the results back (hundreds of thousands of files). I started writing the results straight away in files, one at a time, which was the slowest option. I figured it gets a lot faster if I build a vector of a certain amount of the files and then write them all at once, then go back to processing while the hard disk is occupied in writing all that stuff that i poured into it (that at least seems to be what happens). My question is, can I somehow estimate a convergence value for the amount of data that I should write from the hardware constraints ? To me it seems to be a hard disk buffer thing, I have 16MB buffer on that hard disk and get these values (all for ~100000 files): Buffer size time (minutes) ------------------------------ no Buffer ~ 8:30 1 MB ~ 6:15 10 MB ~ 5:45 50 MB ~ 7:00 Or is this just a coincidence ? I would also be interested in experience / rules of thumb about how writing performance is to be optimized in general, for example are larger hard disk blocks helpful, etc. Edit: Hardware is a pretty standard consumer drive (I'm a student, not a data center) WD 3,5 1TB/7200/16MB/USB2, HFS+ journalled, OS is MacOS 10.5. I'll soon give it a try on Ext3/Linux and internal disk rather than external).
Can I somehow estimate a convergence value for the amount of data that I should write from the hardware constraints? Not in the long term. The problem is that your write performance is going to depend heavily on at least four things: Which filesystem you're using What disk-scheduling algorithm the kernel is using The hardware characteristics of your disk The hardware interconnect you're using For example, USB is slower than IDE, which is slower than SATA. It wouldn't surprise me if XFS were much faster than ext2 for writing many small files. And kernels change all the time. So there are just too many factors here to make simple predictions easy. If I were you I'd take these two steps: Split my program into multiple threads (or even processes) and use one thread to deliver system calls open, write, and close to the OS as quickly as possible. Bonus points if you can make the number of threads a run-time parameter. Instead of trying to estimate performance from hardware characteristics, write a program that tries a bunch of alternatives and finds the fastest one for your particular combination of hardware and software on that day. Save the fastest alternative in a file or even compile it into your code. This strategy was pioneered by Matteo Frigo for FFTW and it is remarkably effective. Then when you change your disk, your interconnect, your kernel, or your CPU, you can just re-run the configuration program and presto! Your code will be optimized for best performance.
1,990,012
1,990,025
Daemon writing output to file twice instead of once in C++
I've written a daemon that writes the word "Beat" to a file, followed by the current date and time at 15 second intervals. However, each time I check the output file, the daemon appears to be outputting twice like this: Beat: Fri Jan 1 18:09:01 2010 Beat: Fri Jan 1 18:09:01 2010 where it should only have one entry. The entire code is located at http://pastebin.com/m27a81981 (I didn't want to paste it here as the entire thing is a bit long.). The function for writing to the file is get_time(); ofstream outputFile("heart.txt", ios::app); beat = "\nBeat: " + gtime + "\n"; outputFile << beat; outputFile.close(); Thanks in advance.
It's because you fork() at the beginning, creating two running instances of the daemon...
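A compilable sketch of the pattern: after fork() both processes keep executing, so exactly one of them must do the work. The helper name and its exit-code plumbing are invented for the example:

```cpp
#include <sys/wait.h>
#include <unistd.h>

// Runs `work` exactly once, in the child, and reports its exit code back
// to the parent. Without the pid checks, BOTH processes would fall through
// into the work loop -- which is why the question's file gets two entries.
int runOnceInChild(int (*work)()) {
    pid_t pid = fork();
    if (pid < 0) return -1;          // fork failed
    if (pid == 0) _exit(work());     // child: do the work, then leave
    int status = 0;                  // parent: wait, do NOT repeat the work
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

In a real daemon the parent would typically exit(0) immediately after the fork instead of waiting, leaving only the child in the heartbeat loop.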
1,990,032
1,990,052
Using C++, how do I correctly inherit from the same base class twice?
This is our ideal inheritance hierarchy: class Foobar; class FoobarClient : Foobar; class FoobarServer : Foobar; class WindowsFoobar : Foobar; class UnixFoobar : Foobar; class WindowsFoobarClient : WindowsFoobar, FoobarClient; class WindowsFoobarServer : WindowsFoobar, FoobarServer; class UnixFoobarClient : UnixFoobar, FoobarClient; class UnixFoobarServer : UnixFoobar, FoobarServer; This is because our inheritance hierarchy would try to inherit from Foobar twice, and as such, the compiler would complain of ambiguous references on any members of Foobar. Allow me to explain why I want such a complex model. This is because we want to have the same variable accessible from WindowsFoobar, UnixFoobar, FoobarClient, and FoobarServer. This wouldn't be a problem, only I'd like to use multiple inheritance with any combination of the above, so that I can use a server/client function on any platform, and also use a platform function on either client or server. I can't help but feel this is a somewhat common issue with multiple inheritance... Am I approaching this problem from completely the wrong angle? Update 1: Also, consider that we could use #ifdef to get around this, however, this will tend to yield very ugly code such as: CFoobar::CFoobar() #if SYSAPI_WIN32 : m_someData(1234) #endif { } ... yuck! Update 2: For those who want to read more into the background of this issue, I really suggest skimming over the appropriate mailing list thread. Things start to get interesting around the 3rd post. Also there is a related code commit with which you can see the real life code in question here.
It would work, although you'd get two copies of the base Foobar class. To get a single copy, you'd need to use virtual inheritance. Read up on multiple inheritance here. class Foobar; class FoobarClient : virtual public Foobar; class FoobarServer : virtual public Foobar; class WindowsFoobar : virtual public Foobar; class UnixFoobar : virtual public Foobar; However, there are many problems associated with multiple inheritance. If you really want to have the model presented, why not make FoobarClient and FoobarServer take a reference to Foobar at construction time, and then have Foobar& FoobarClient/Server::getFoobar ? Composition is often a way out of multiple inheritance. Take an example now: class WindowsFoobarClient : public WindowsFoobar { FoobarClient client; public: WindowsFoobarClient() : client( *this ) {} FoobarClient& getClient() { return client; } } However care must be taken in using this in the constructor.
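The effect of the virtual keyword can be checked directly: in the diamond below (reduced from the question's hierarchy) there is a single shared Foobar subobject, so someData is unambiguous from every path:

```cpp
// The question's hierarchy, reduced to the part virtual inheritance fixes.
struct Foobar              { int someData; Foobar() : someData(0) {} };
struct FoobarClient        : virtual public Foobar {};
struct WindowsFoobar       : virtual public Foobar {};
struct WindowsFoobarClient : public WindowsFoobar, public FoobarClient {};
```

With plain (non-virtual) inheritance the same member access would fail to compile as ambiguous, since each base path would carry its own Foobar copy.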
1,990,135
1,990,153
Is it better to use `#ifdef` or inheritance for cross-compiling?
To follow from my previous question about virtual and multiple inheritance (in a cross platform scenario) - after reading some answers, it has occurred to me that I could simplify my model by keeping the server and client classes, and replacing the platform specific classes with #ifdefs (which is what I was going to do originally). Will using this code be simpler? It'd mean there'd be fewer files at least! The downside is that it creates a somewhat "ugly" and slightly harder to read Foobar class since there are #ifdefs all over the place. Note that our Unix Foobar source code will never be passed to the compiler, so this has the same effect as #ifdef (since we'd also use #ifdef to decide what platform specific class to call). class Foobar { public: int someData; #if WINDOWS void someWinFunc1(); void someWinFunc2(); #elif UNIX void someUnixFunc1(); void someUnixFunc2(); #endif void crossPlatformFunc(); }; class FoobarClient : public Foobar; class FoobarServer : public Foobar; Note: Some stuff (ctor, etc) left out for a simpler example. Update: For those who want to read more into the background of this issue, I really suggest skimming over the appropriate mailing list thread. Things start to get interesting around the 3rd post. Also there is a related code commit with which you can see the real life code in question here.
Preferably, contain the platform dependant nature of the operations within the methods so the class declaration remains the same across platforms. (ie, use #ifdefs in the implementations) If you can't do this, then your class ought to be two completely separate classes, one for each platform.
1,990,156
1,990,162
CPP | .h files (C++)
I was just wondering what the difference between .cpp and .h files is? What would I use a header file (.h) for and what would I use a cpp file for?
In general, and it really could be a lot less general: .h (header) files are for declarations of things that are used many times, and are #included in other files .cpp (implementation) files are for everything else, and are almost never #included
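A minimal illustration of the split; both files are shown in one listing and the names are invented:

```cpp
// ---- button.h : the declaration, #included by every user of Button ----
#ifndef BUTTON_H
#define BUTTON_H
struct Button {
    int w, h;
    int area() const;   // declared here...
};
#endif

// ---- button.cpp : the definition, compiled exactly once ----
// #include "button.h"
int Button::area() const { return w * h; }  // ...defined here
```

Because only the declaration is #included, many .cpp files can use Button while the function body is compiled (and linked) just once.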
1,990,277
1,990,308
c++ Syntax of a constructor's initialization list with a data member struct?
Possible Duplicate: Member initialization of a data structure’s members EDIT: I typed the title in last, and it gave me a list of related problems, as it usually does. At the bottom of this list was the exact same problem. (Using the exact same code ;)). Member initialization of a data structure's members AraK answers it fully, really. It appears I need to vote in order to close my own question? Hi, I have a class that looks like this: class Button { private: SDL_Rect box; public: Button(int x, int y, int w, int h); } Where box is one of these guys from SDL. Running with GCC with -Weffc++, just because I wanted to know what the warnings would be like, complains about the initialiser list, file.cpp||In constructor 'Button::Button(int, int, int, int)':| file.cpp|168|error: 'Button::box' should be initialized in the member initialization list| I would like to appease it. I can't figure out the stupid syntax though. I've tried Button::Button(int x, int y, int w, int h ) : box(0,0,0,0) but that just results in file.cpp||In constructor 'Button::Button(int, int, int, int)':| file.cpp|171|error: expected identifier before '{' token| file.cpp|171|error: member initializer expression list treated as compound expression| file.cpp|171|error: left-hand operand of comma has no effect| file.cpp|171|error: right-hand operand of comma has no effect| file.cpp|171|error: right-hand operand of comma has no effect| file.cpp|171|error: no matching function for call to 'SDL_Rect::SDL_Rect(int)'| c:\programming\mingw-4.4.0\bin\..\lib\gcc\mingw32\4.4.0\..\..\..\..\include\SDL\SDL_video.h|50|note: candidates are: SDL_Rect::SDL_Rect(const SDL_Rect&)| c:\programming\mingw-4.4.0\bin\..\lib\gcc\mingw32\4.4.0\..\..\..\..\include\SDL\SDL_video.h|50|note: SDL_Rect::SDL_Rect()| I tried box = blah or box.x = blah or box.x(blah), but they failed.
I also tried box({0,0,0,0}), and box{0,0,0,0}, file.cpp|169|error: extended initializer lists only available with -std=c++0x or -std=gnu++0x| file.cpp|171|error: expected identifier before '{' token| I don't really want to be compiling against c++0x, really. Especially as I want this to be cross platform, and I don't think many things support c++0x. In the end I managed to get away with: Button::Button(int x, int y, int w, int h ) : box() { box.x = x; box.y = y; box.w = w; box.h = h; } Which seems entirely pointless to me. Is this the 'correct' way to do this? Isn't this just the same as without the initialiser list?
I see you found your solution, but please note that you could also get away with writing a class wrapper for SDL_rect, or even a global function SDL_rect createRect( int x, int y, int w, int h )
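The global-function idea in compilable form; SDL_Rect is mimicked here by a plain struct with the same members so the sketch stands alone:

```cpp
// Stand-in with the same members as SDL_Rect, for illustration.
struct SDL_Rect { int x, y, w, h; };

// Free function: builds a fully initialized rect in a single expression...
SDL_Rect createRect(int x, int y, int w, int h) {
    SDL_Rect r;
    r.x = x; r.y = y; r.w = w; r.h = h;
    return r;
}

// ...so the constructor can initialize the member in the initializer list,
// which is exactly what -Weffc++ asks for.
class Button {
    SDL_Rect box;
public:
    Button(int x, int y, int w, int h) : box(createRect(x, y, w, h)) {}
    int width() const  { return box.w; }
    int height() const { return box.h; }
};
```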
1,990,283
2,076,474
C++ Dialog box With timer
I couldn't find a simple tutorial on how to make a dialog box with a decrementing timer. I don't need the timer to be accurate or to actually reflect my program's inner timer.
Ended up using SetTimer: http://msdn.microsoft.com/en-us/library/ms644906%28VS.85%29.aspx Thanks!
1,990,526
1,990,607
Compiling Festival on MinGW32
I'm trying to compile Festival on MinGW32, so I can have a Windows binary. I couldn't find a Windows binary on their site. Anyone have one they can post? If not, here's what I have so far.

I ran ./configure and make and got the following message:

    $ make
    config/config:43: ../speech_tools/config/config: No such file or directory

So, I downloaded the speech_tools tarball, ran ./configure and make on it, and got:

    $ make
    config/config:156: config/systems/ix86_unknown.mak: No such file or directory
    ../config/config:156: ../config/systems/ix86_unknown.mak: No such file or directory
    make: *** No rule to make target `../config/systems/ix86_unknown.mak'. Stop.
    config/rules/modules.mak:133: config/modincludes.inc: No such file or directory
    make --no-print-directory -C ./config MADE_FROM_ABOVE=1 MODINCLUDES=1 INCLUDE_EVERYTHING='' modincludes.inc
    ../config/config:156: ../config/systems/ix86_unknown.mak: No such file or directory
    make[1]: *** No rule to make target `../config/systems/ix86_unknown.mak'. Stop.
    make: *** [config/modincludes.inc] Error 2

So, I copied config/systems/ix86_CYGWIN32.mak to ix86_unknown.mak and tried again. Now I get this message:

    g++ -c -fno-implicit-templates -O3 -Wall -Wno-non-template-friend -Wno-deprecated -DSUPPORT_EDITLINE -I../include slib.cc
    In file included from slib.cc:85:
    ../include/EST_unix.h:53:25: sys/wait.h: No such file or directory
    ../include/EST_unix.h:54:29: sys/resource.h: No such file or directory
    In file included from ../include/EST_String.h:50,
                     from ../include/siod.h:17,
                     from slib.cc:88:
    ../include/EST_iostream.h:54:26: strstream.h: No such file or directory
    In file included from ../include/EST_TList.h:50,
                     from ../include/EST_string_aux.h:43,
                     from ../include/siod.h:18,
                     from slib.cc:88:

Where do I get sys/wait.h, sys/resource.h and strstream.h? I'd rather not have to try this whole bit in Cygwin and carry around those annoying DLLs. Any advice?
Windows binaries are available here
1,990,535
1,990,550
Win32 files locked for reading: how to find out who's locking them
In C++ (specifically on Visual C++), sometimes you cannot open a file because another executable has it opened and is not sharing it for reads. If I try to open such a file, how can I programmatically find out who's locking the file?
In Windows 2000 and higher, you cannot do this without using a kernel-mode driver. Process Explorer and other similar tools load a driver automatically to accomplish this. This is because the file handles are in kernel space and not accessible by user-mode applications (EXE files). If you are really interested in doing this, take a look at this project.
1,990,645
1,990,687
general boost asio stream reads question
I'm a little confused about how reading data into a stream works in asio. My main questions are:

1. What happens if there are multiple asynchronous writes from one computer going on at the same time, and only one asynchronous read on the receiving computer? Over a TCP protocol, is there any chance that the data will get interleaved?

2. How does the asio library know when to call the handler that handles new data in the read stream? Would it call on every received byte? When the client disconnects?

3. Are there any good (and simple) examples that use a stream, as opposed to a buffer, to read from a TCP socket with asio?

Thanks.
"What happens if there are multiple asynchronous writes from one computer going on at the same time, and only one asynchronous read on the receiving computer. Over a TCP protocol, is there any chance that the data will get interleaved?"

If you call async_write while another asynchronous write operation is in progress, the result is undefined. It's similar to doing two simultaneous write() syscalls on the same socket from two different threads. You could make a big data mess.

"How does the ASIO library know when to call the handler that handles new data in the read stream? Would it call on every received byte? When the client disconnects?"

If you call async_read, it will call the callback when all the requested amount of data has been received. If you call async_read_some, it will call the callback when there is at least one byte, but there may be more — probably the contents of a single TCP packet sent by the other end.

"Are there any good (and simple) examples that use a stream, as opposed to a buffer to read from a tcp socket with asio?"

You mean asio::iostream? There are examples in the asio documentation.
1,990,665
1,990,670
memory overhead of pointers in c/c++
I'm on a 64-bit platform, so all memory addresses are 8 bytes. So to get an estimate of the memory usage of an array, should I add 8 bytes to sizeof(DATATYPE) for each entry in the array?

Example:

    short unsigned int *ary = new short unsigned int[1000000]; // length 1 million
    // sizeof(short unsigned int)  = 2 bytes
    // sizeof(short unsigned int*) = 8 bytes

So does each entry take up 10 bytes, and will my million-entry array therefore use at least 10 megabytes?

Thanks
No, you don't get a pointer for each and every array index. You get a single pointer pointing to the array, which is a contiguous block of memory, which is why the address of any index can be calculated from the index itself plus the array address.

For example, if the variable a known by the memory location 0xffff0012 is set to 0x76543210, then they could be laid out in memory as:

                 +-------------+   This is on the stack or global.
      0xffff0012 | 0x76543210  |
                 +-------------+

                 +-------------+   This is on the heap (and may also
      0x76543210 |    a[ 0]    |   have some housekeeping information).
                 +-------------+
      0x76543212 |    a[ 1]    |
                 +-------------+
      0x76543214 |    a[ 2]    |
                 +-------------+
      0x76543216 |    a[ 3]    |
                 +-------------+
                 :             :
                 +-------------+
      0x7672B68E |  a[999999]  |
                 +-------------+

and you can see that the address of index n is 0x76543210 + n * 2.

So you will actually have one 8-byte pointer and a million 2-byte shorts which, in your case, totals 2,000,008 bytes. This is on top of any malloc housekeeping overhead which, like the pointer itself, is minuscule compared to your actual array.
1,990,864
1,990,869
how to convert a hexadecimal string to a corresponding integer in c++?
I have a Unicode mapping stored in a file, as tab-delimited lines like the one below:

    a	0B85	0	0B85

The second column is a Unicode character. I want to convert that to 0x0B85, to be stored in an int variable. How do I do it?
You could use strtol, which can parse numbers into longs, which you can then assign to your int. strtol can parse numbers with any radix from 2 to 36 (i.e. any radix that can be represented with alphanumeric characters). For example:

    #include <cstdlib>
    using namespace std;

    char *token;
    ... // assign data from your file to token ...

    char *err; // points to location of error, or final '\0' if no error.
    int x = strtol(token, &err, 16); // convert hex string to int