1,580,063
1,580,359
High performance comparison of signed int arrays (using Intel IPP library)
We're trying to compare two equally sized native arrays of signed int values using the inequality operations <, <=, > and >=, in a high performance way. As many values are compared, the true/false results would be stored in a char array of the same size as the input, where 0x00 means false and 0xff means true. To accomplish this, we're using the Intel IPP library. The problem is that the function we found that does this operation, named ippiCompare_*, from the image and video processing lib, supports only the types unsigned char (Ipp8u), signed/unsigned short (Ipp16s/Ipp16u) and float (Ipp32f). It does not directly support signed int (Ipp32s). I (only) envision two possible ways of solving this:

Casting the array to one of the directly supported types and executing the comparison in more steps (it would become a short array of twice the size or a char array of four times the size), then merging the intermediate results.

Using another function directly supporting signed int arrays, from IPP or from another library that could do something equivalent in terms of performance.

But there may be other creative ways... So I ask your help with that! :) PS: The advantage of using Intel IPP is the performance gain for large arrays: it uses multi-value processor instructions and many cores simultaneously (and maybe more tricks). So simple looped solutions wouldn't do it as fast, AFAIK. PS2: link for the ippiCompare_* doc
You could do the comparison with PCMPEQD followed by a PACKSSDW and PACKSSWB. This would be something along the lines of

#include <emmintrin.h>

void cmp(const __m128i* a, const __m128i* b, __m128i* result, unsigned count)
{
    for (unsigned i = 0; i < count / 16; ++i) {
        __m128i result0 = _mm_cmpeq_epi32(a[0], b[0]); // each line compares 4 integers
        __m128i result1 = _mm_cmpeq_epi32(a[1], b[1]);
        __m128i result2 = _mm_cmpeq_epi32(a[2], b[2]);
        __m128i result3 = _mm_cmpeq_epi32(a[3], b[3]);
        a += 4; b += 4;
        __m128i wresult0 = _mm_packs_epi32(result0, result1); // pack 2*4 integer results into 8 words
        __m128i wresult1 = _mm_packs_epi32(result2, result3);
        *result = _mm_packs_epi16(wresult0, wresult1); // pack 2*8 word results into 16 bytes
        ++result;
    }
}

Needs aligned pointers, a count divisible by 16 and probably a lot of debugging, of course. The integer compare and pack intrinsics are _mm_cmpeq_epi32, _mm_packs_epi32 and _mm_packs_epi16, all declared in <emmintrin.h>.
1,580,332
1,580,351
std::pow gives a wrong approximation for fractional exponents
Here is what I mean, trying to do:

double x = 1.1402;
double pow = 1/3;
std::pow(x, pow) - 1;

The result is 0, but I expect 0.4465. The equation is (1 + x)^3 = 1.1402; find x.
1/3 is done as integer arithmetic, so you're assigning 0 to pow. Try pow(x, 1.0/3.0);
1,580,471
1,658,902
How to mix C++ and external buttons on a separate window?
I want to place a C++ button on another application's window (Start > Run, for example), but when I do, it does not deliver its clicked event to me. I'm sorry if the question wasn't clear. Basically, I create a button with CreateWindowEx() and put it on a different window with SetParent(), which I have already done; but now the button does not work, so I need my program to somehow be told when the button is clicked on the Run window, as an example. And yes, making the button is not the problem; the problem is getting notified in my program when it's clicked, since the button does not belong to my program anymore.
You need to apply the ancient but still-supported technique known in Windows as subclassing; it is well explained here (a 15-year-old article, but still quite valid;-). As this article puts it:

Subclassing is a technique that allows an application to intercept messages destined for another window. An application can augment, monitor, or modify the default behavior of a window by intercepting messages meant for another window.

You'll want "instance subclassing", since you're interested only in a single window (either your new button, or the one you've SetParent'ed your new button to); if you decide to subclass a window belonging to another process, you'll also need to use the injection techniques explained in the article, such as injecting your DLL into system processes and watching over events with a WH_CBT hook, and the like. But I think you could keep the button in your own process even though you're SetParent'ing it to a window belonging to a different process, in which case you can do your instance subclassing entirely within your own process, which is much simpler, if feasible. "Superclassing" is an alternative to "subclassing", also explained in the article, but it doesn't seem to offer that many advantages when compared to instance subclassing (though it may compared with global subclassing... but that's not what you need here anyway). You'll find other interesting articles on such topics here, here, and here (developing a big, rich C++ library for subclassing, but also showing a simpler approach based on hooks which you might prefer). Each article has a pretty different stance and code examples, so I think that having many to look at may help you find the right mindset and code for your specific needs!
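Instance subclassing within your own process boils down to swapping the button's window procedure with SetWindowLongPtr and forwarding everything you don't handle via CallWindowProc. A minimal sketch (the names ButtonSubclassProc and SubclassButton are mine, and this only works while the button lives in your own process):

```cpp
#include <windows.h>

static WNDPROC g_oldButtonProc = NULL;

// Replacement window procedure: sees every message destined for the button.
LRESULT CALLBACK ButtonSubclassProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    if (msg == WM_LBUTTONUP) {
        // The button was clicked: react in our own code here.
        MessageBeep(MB_OK);
    }
    // Everything else goes to the original button procedure.
    return CallWindowProc(g_oldButtonProc, hwnd, msg, wp, lp);
}

void SubclassButton(HWND hButton)
{
    // Instance subclassing: swap in our procedure for this one window only.
    g_oldButtonProc = (WNDPROC)SetWindowLongPtr(hButton, GWLP_WNDPROC,
                                                (LONG_PTR)ButtonSubclassProc);
}
```

Call SubclassButton right after CreateWindowEx/SetParent; from then on your procedure runs first for every message the button receives.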
1,580,614
1,596,962
http client blocks on recv()
I need some help writing an http client. The trouble comes when I try to receive data from a webserver. The recv() call blocks the program. Any better direction would be extremely helpful. I'll post my code below:

if ( argc != 2 ) {
    cerr << "Usage: " << argv[0];
    cerr << " <URI>" << endl;
    return 1;
} else {
    uri_string = argv[1];
}

// Create URI object and have it parse the uri_string
URI *uri = URI::Parse(uri_string);
if ( uri == NULL ) {
    cerr << "Error: Cannot parse URI." << endl;
    return 2;
}

// Check the port number specified, if none use port 80
unsigned port = 80;
if ( uri->Is_port_defined() ) {
    port = uri->Get_port();
}

// Create TCP socket and connect to server
int tcp_sock = socket( AF_INET, SOCK_STREAM, 0 );
if ( tcp_sock < 0 ) {
    cerr << "Unable to create TCP socket." << endl;
    return 3;
}

sockaddr_in server;
socklen_t slen = sizeof(server);
server.sin_family = AF_INET;
server.sin_port = htons( port );
hostent *hostp = gethostbyname( uri->Get_host().c_str() );
memcpy( &server.sin_addr, hostp->h_addr, hostp->h_length );

if ( connect( tcp_sock, (sockaddr*)&server, slen ) < 0 ) {
    cerr << "Unable to connect to server via TCP." << endl;
    close( tcp_sock );
    return 4;
}

// Build HTTP request to send to server
HTTP_Request *request = HTTP_Request::Create_GET_request( uri->Get_path() );
request->Set_host( uri->Get_host() );
string request_string = "";
request->Print( request_string );
//cout << request_string << endl;

// Send it to the server, wait for reply and use HTTP_Response to get reply
send( tcp_sock, &request_string, sizeof(request_string), 0 );

char recv_buffer[1024];
int bytes_recv = 0;
while ( bytes_recv < 1024 ) {
    int recv_len = recv( tcp_sock, recv_buffer + bytes_recv, 1024 - bytes_recv, 0 );
    if ( recv_len == -1 ) {
        cerr << "Error receiving response from server." << endl;
        close( tcp_sock );
        return 5;
    }
    bytes_recv += recv_len;
}

HTTP_Response *response = HTTP_Response::Parse(recv_buffer, bytes_recv);
string response_string = "";
response->Print( response_string );
cout << response_string << endl;

return 0;
}
You are using a blocking TCP/IP socket, but you are not looking at the HTTP reply's "Content-Length" header to know how many bytes to read. Your current reading logic calls recv() in a loop until 1024 bytes max have been received. If the server sends fewer than 1024 bytes and keeps the connection open, you will block indefinitely inside recv(); and if the server closes the connection, recv() will return 0, which your loop never checks for, so it will spin forever. (Separately, note that send( tcp_sock, &request_string, sizeof(request_string), 0 ) sends the bytes of the std::string object itself, not the request text; you want request_string.c_str() and request_string.size().)
1,580,737
1,580,743
Question about pointers and objects?
Just wondering, if I statically create an object that has a pointer as a data member and then the object goes out of scope, what happens to the pointer? Chuma
Nothing happens to the pointer itself; it simply ceases to exist along with the rest of the object. If it was pointing to something that needed to be freed, you have a memory leak. Either add code to the destructor that does the proper cleanup of the pointer, or use "smart pointers" that clean up after themselves automatically. Edit: If you actually meant that you were creating a static object, by declaring it with the static keyword inside a function, then the answer is different. A static object, once constructed by the first execution of the function that declares it, continues to live until the program ends. Its data members, including pointers, will remain valid. Subsequent calls to the function will access the same object. If the object has allocated any memory, it will remain allocated unless something explicitly deletes it.
1,580,757
1,580,778
What is the official name of C++'s arrow (->) operator?
I always call it the "arrow operator", but I'm sure it has an official name. I quickly skimmed the C++ standard and didn't see it mentioned by name.
The C++ standard just calls it "arrow" (§5.2.5).
1,580,935
1,580,959
Declaring char[][512]?
I have a C++ SDK that requires a char[][512] as a parameter. I know that this is supposed to be a list of file names, and the number of files could vary. For the life of me I cannot figure out how to declare this. I have an array of CStrings and I am trying to copy them over using strcpy_s and then pass them into the SDK. Any idea on how to do this?
This declaration has a special meaning when used to declare a parameter of a function. Within the parameter list it is equivalent to char[100][512], char[123][512], char[3][512] (you get the idea: the first size can be just anything, it is simply ignored) and also to char (*)[512]. Effectively, it will accept as an argument a 2D array of chars with a flexible (arbitrary) first size. The array that you will actually pass to this function should be declared with a concrete first size, for example

char names[3][512] = { "abc", "cde", "fgh" };

if you know the first size at compile time, of course. If the first size is only known at run time (say, n), you'll have to allocate the array dynamically

char (*names)[512] = new char[n][512];
// Now fill it with names

or, more elegantly, with a typedef

typedef char TName[512];
TName* names = new TName[n];
// Now fill it with names

I expect that the SDK function you are talking about also asks you to pass the first size of the name array as another parameter.
1,580,960
1,580,981
Are these placement new macros correct?
I made a couple of macros to make using placement new a bit easier. I was just wondering if there were any obvious cases where these would not work. Thanks.

#define CONSTRUCT_INPLACE(TYPE,STORAGE,INIT) ::new((TYPE*)STORAGE) TYPE INIT
#define DESTRUCT_INPLACE(TYPE,STORAGE) ((TYPE*)STORAGE)->~TYPE()
I'm not an expert in placement new, but there are a couple of issues with how you are defining the macro.

Issue 1

The most obvious problem is the use of the cast (TYPE*)STORAGE for the storage location. This is incorrect. Placement new is just another C++ function, and it participates in operations like overload resolution. Arbitrarily casting the memory to a specific type could cause the placement new to bind to a different operator new than the user expected. For example, it's valid to have the following two definitions of placement new; your macro would potentially cause the wrong one to be called.

void* __cdecl operator new(size_t cbSize, void* pv);
void* __cdecl operator new(size_t cbSize, SomeType* pv);
...
// These two call different overloads
void* p = malloc(sizeof(SomeType));
SomeType* f1 = CONSTRUCT_INPLACE(SomeType, p, ());
SomeType* f2 = new (p) SomeType();

I wrote a blog post awhile back on how you can use this type of overload resolution to implement custom allocators. http://blogs.msdn.com/jaredpar/archive/2007/10/17/c-placement-new-and-allocators.aspx

Issue 2

The expression STORAGE in the macro should be wrapped in parens to prevent evil macro expansion bugs:

::new((TYPE*)(STORAGE)) TYPE INIT
1,581,105
1,610,124
Running C++ binaries without the runtime redistributable (Server2k3, XPSP3)
Having written a CGI application in Visual Studio 2008 and debugged it locally, I uploaded it to a Windows Server 2003 OS, where it promptly failed to run. I am guessing I need to install the wretched runtime redistributable, but after reading this: http://kobyk.wordpress.com/2007/07/20/dynamically-linking-with-msvcrtdll-using-visual-c-2005/ I am wondering if it makes more sense to ignore this side-by-side stuff and just rewrite the app. I am guessing Windows Server 2003 does not have the MSVCRT version I need? Does Windows Server 2003 have it? When it comes to deploying thick clients, I would like to distribute the required DLLs with my app. What are they, assuming I just #include <iostream>, <sstream> and <string>? Does it change significantly if I add windows.h?

Added: Using the /MT switch recommended below (C/C++ -> Code Generation -> Runtime Library -> Multi-threaded (/MT); you will probably need to do a clean, Build -> Clean, in order to avoid the error message "Failed to save the updated manifest to the file") bloated my app from 38k to 573k. That's what I call significant (imagine if that were your salary). Since many instances of this app will be loaded and unloaded constantly (requiring precious memory and processor resources) I would like to find a better (smaller) solution. I understand this is not important for many situations today and not the focus of many developers, hence the trend to .NOT and 60MB runtimes, but this is what I want to do.

Added: After removing the debugging to get the project to compile (Project -> Properties -> C/C++ -> Preprocessor -> Preprocessor Definitions; remove DEBUG;) the size was reduced to 300k, and it will run.

Added: As suggested by Chris Becke below, copying:

msvcm90.dll
msvcp90.dll
msvcr90.dll
Microsoft.VC90.CRT.manifest

to the directory of the application will provide all the runtime needed.

Using Visual Studio 6 has been suggested a few times, but it does not support Vista (or, we assume, Windows 7). Other solutions that do not require a runtime redistributable would probably be MASM or even a flavor of BASIC. Unfortunately that defeats the purpose of using a high-level OOP language like C++. So long as I do need to require the C++ redistributable to be installed, the trade-off is an additional 260k. That's acceptable.
A fuller list of options:

Rewrite the app so that there's no C/C++ usage at all.

Switch to Visual Studio 6 or a mingw-based toolset like Code::Blocks; these use the already-distributed msvcrt.dll as their runtime.

Build using the /MT switch. This builds the necessary runtime functions into your exe, which will bloat it. But that bloat (frankly) is less overhead than loading separate DLLs.

Distribute the VS9 runtime as a 'private sxs' install. This entails copying the contents of C:\Program Files\Microsoft Visual Studio 9.0\VC\redist\x86\Microsoft.VC90.CRT into the same folder as your application's exe. If you have applied SP1, then this folder will contain the SP1 runtime, but your application will be asking for the RTM runtime (go figure). Add _BIND_TO_CURRENT_CRT_VERSION to your project's defines, rebuild, and that should sort itself out.

There is apparently a vc_redist.exe that can be obtained for VS9. Find that, or figure out the MSI or installer mojo required to actually install the above assembly (being the contents of Microsoft.VC90.CRT) into the shared sxs store.
1,581,348
1,581,359
Allocate room for null terminating character when copying strings in C?
const char* src = "hello";

Calling strlen(src) returns 5... Now say I do this:

char* dest = new char[strlen(src)];
strcpy(dest, src);

That doesn't seem like it should work, but when I output everything it looks right. It seems like I'm not allocating space for the null terminator on the end... is this right? Thanks
You are correct that you are not allocating space for the terminator; however, the failure to do this will not necessarily cause your program to fail. You may be overwriting adjacent data on the heap, or your heap manager may round allocation sizes up to a multiple of 16 bytes or so, so you won't necessarily see any visible effect of this bug. If you run your program under Valgrind or another heap debugger, you may be able to detect the problem sooner.
1,581,390
1,581,393
Expected constructor, destructor, or type conversion before '::' token
I get the below error when compiling my file.

// Error
PluginDiskstats.cpp:107: error: expected constructor, destructor, or type conversion before '::' token
scons: *** [PluginDiskstats.o] Error 1

// destructor
~PluginDiskstats::PluginDiskstats()   // line 107
{
    if (stream != NULL) {
        fclose(stream);
        stream = NULL;
    }
    hash_destroy(&DISKSTATS);
}

// header file
#ifndef __PLUGIN_DISKSTATS_H__
#define __PLUGIN_DISKSTATS_H__

#include <QObject>
#include "Hash.h"

class PluginDiskstats : public QObject {
    Q_OBJECT

    HASH DISKSTATS;
    FILE *stream;
    int ParseDiskstats();

public:
    PluginDiskstats();
    ~PluginDiskstats();

public slots:
    double Diskstats(QString arg1, QString arg2, double arg3);
};

#endif
Change line 107 to:

PluginDiskstats::~PluginDiskstats()

The destructor's name is ~PluginDiskstats, so the tilde goes after the class qualifier, not before it.
1,581,443
1,582,397
Profiler for a C++ module in a C# app
I rewrote two pages of number-crunching code from C# to unmanaged C++ in my project, which with full optimizations gave a 3x speedup. I want to keep optimizing that code, but now my profiler of choice, dotTrace, can't do it, because it only looks at managed code. How do I profile the P/Invoked C++ module when it's running in the C# app? And a tangential question: calling the following function via P/Invoke doesn't produce any unneeded overhead (such as copying the arrays), right? Just making sure. (Note that Foo both reads from bar and writes to it.)

// From the C# side:
[DllImport("foo.dll")]
static extern void Foo(float[,] bar);

// From the C++ side:
extern "C" __declspec(dllexport) void Foo(float* bar);

Compiler: MSVC, on Vista.
How about making a standalone test harness for the C++ code and profiling it alone with any number of tools like VTune, callgrind, Quantify, oprofile...?
1,581,508
1,581,512
Virtual non-method members
Is something similar to this possible? This way it produces an error.

class A {
public:
    virtual std::string key;
};

class B : public A {
public:
    std::string key;
};

int main() {
    A a;
    a.key = "Foo";
    return 1;
}
No, because this doesn't really make sense. Remember that a subclass contains all members of its parent class; therefore, B still has A's std::string key. Moreover, since B's std::string key is the same type, it's absolutely identical to A's - so what was the point of overriding it? Also, note that during construction, when we're running A's constructor, B's virtual methods won't be called. This means that if we access key during A's construction, we'd get A's key - but then when B is constructed, that key would be shadowed, its data completely inaccessible. That said, if you really do want to do something like this, for some reason, you'll need to use a virtual accessor function:

class A {
private:
    std::string m_key;
public:
    virtual std::string &key() { return m_key; }
    virtual const std::string &key() const { return m_key; }
};

class B : public A {
private:
    std::string m_key;
public:
    virtual std::string &key() { return m_key; }
    virtual const std::string &key() const { return m_key; }
};

int main() {
    B b;
    b.key() = "Foo";
    return 0;
}
1,581,539
1,581,567
How to identify a new line in C++?
I have to take integer input into an integer array, and I have to identify the newlines in the input as well. To be more clear, here is an example. The input I am giving is:

2 3 4
45 6
78 45 34 34

I want to process the input according to the newlines in the input. The programming language is C++ and the compiler is g++. I don't want to store the newlines in the array; I just want to process accordingly.
You can use std::getline from <string> to read whole lines and use a std::stringstream, from <sstream> to parse the lines.
1,581,763
1,581,858
Difference between A* pA = new A; and A* pA = new A();
In C++, what is the exact difference between the two following dynamic object creations:

A* pA = new A;
A* pA = new A();

I did some tests, but it seems that in both cases the default constructor is called, and only it. I'm looking for any difference in performance... Thanks
If A is a POD type, then new A will allocate a new A object but leave it with an indeterminate value; otherwise new A will default-initialize the new object. In all cases new A() will value-initialize the new A object. This is obviously different behaviour for POD types, but it also affects non-POD, non-union class types without a user-declared constructor. E.g.

struct A {
    int a;
    std::string s;
};

A is a non-POD class type without a user-declared constructor. When an A is default-initialized, the implicitly defined constructor is called, which calls the default constructor for s (a non-POD type), but a is not initialized. When an A is value-initialized, as it has no user-declared constructor, all of its members are value-initialized, which means that the default constructor for s is called and a is zero-initialized.

ISO 14882:2003 references:

5.3.4 [expr.new]/15: How objects allocated by a new expression are initialized, depending on whether the initializer is omitted, a pair of parentheses, or otherwise.
8.5 [dcl.init]/5: The meaning of zero-initialize, default-initialize and value-initialize.
12.1 [class.ctor]/7,8: The form of a user-written constructor that matches the behaviour of an implicitly defined default constructor.
12.6.2 [class.base.init]/4: How bases and members which are not listed in a member initializer list of a constructor are initialized.
1,581,778
1,646,213
How do you rotate a sprite around its center by calculating a new x and y position?
I'm using Dark GDK and C++ to create a simple 2D game. I'm rotating an object, but it rotates around the top left corner of the sprite. I have the following variables available:

PlayerX
PlayerY
PlayerWidth
PlayerHeight
RotateAngle (360 > x > 0)

Is there an algorithm that will modify the pivot point of the sprite, preferably to the center? Here is a small code sample:

void Player::Move( void )
{
    if ( checkLeft() ) {
        PlayerX -= PlayerSpeed;
        if ( PlayerX < 0 )
            PlayerX = 0;
    }
    if ( checkRight() ) {
        PlayerX += PlayerSpeed;
        if ( PlayerX > 800 - PlayerWidth )
            PlayerX = 800 - PlayerWidth;
    }
    if ( checkUp() ) {
        PlayerY -= PlayerSpeed;
        if ( PlayerY < 0 )
            PlayerY = 0;
    }
    if ( checkDown() ) {
        PlayerY += PlayerSpeed;
        if ( PlayerY > 600 - PlayerHeight )
            PlayerY = 600 - PlayerHeight;
    }

    RotateAngle += 5;
    if ( RotateAngle > 360 )
        RotateAngle -= 360;

    dbRotateSprite( Handle, RotateAngle );
    dbSprite( 1, PlayerX, PlayerY, Handle );
}

Edit: I'm considering opening up some reputation for this question; I have yet to be provided with an answer that works for me. If someone can provide an actual code sample that does the job, I'd be very happy. The problem with Blindy's answer is that no matter how much I simply translate it back or forth, the sprite still rotates around the top left hand corner; moving it somewhere, rotating around the top left corner, then moving it back to the same position accomplishes nothing. Here is what I believe to be going on: http://img248.imageshack.us/img248/6717/36512474.png Just so there is no confusion, I have created an image of what is going on. The left shows what is actually happening and the right shows what I need to happen: http://img101.imageshack.us/img101/1593/36679446.png
The answers so far are correct in telling you how it should be done, but I fear that the Dark GDK API seems to be too primitive to be able to do it that simple way. Unfortunately dbRotateSprite rotates the sprite about the top left regardless of the sprite's transform, which is why you're having no luck with the other suggestions. To simulate rotation about the centre you must manually correct the position of the sprite, i.e. you simply have to rotate the sprite and then move it as a two-step process. I'm not familiar with the API and I don't know if y is measured up or down and which way the angle is measured, so I'm going to make some assumptions. If y is measured down like in many other 2D graphics systems, and the angle is measured from the x-axis increasing as it goes from the positive x-axis to the positive y-axis, then I believe the correct pseudo-code would look like

// PlayerX and PlayerY denote the sprite centre
// RotateAngle is an absolute rotation, i.e. not a relative, incremental rotation
RotateAngle += 5;
RotateAngle %= 360;
RadiansRotate = (RotateAngle * PI) / 180;

dbRotateSprite( Handle, RotateAngle );

HalfSpriteWidth = dbSpriteWidth( Handle ) / 2;
HalfSpriteHeight = dbSpriteHeight( Handle ) / 2;

SpriteX = PlayerX - HalfSpriteWidth * cos(RadiansRotate) + HalfSpriteHeight * sin(RadiansRotate);
SpriteY = PlayerY - HalfSpriteHeight * cos(RadiansRotate) - HalfSpriteWidth * sin(RadiansRotate);

// Position the top left of the sprite at ( SpriteX, SpriteY )
dbSprite( 1, SpriteX, SpriteY, Handle );
1,581,809
1,581,933
create a object : A.new or new A?
Just out of curiosity: why did C++ choose a = new A instead of a = A.new as the way to instantiate an object? Doesn't the latter seem more object-oriented?
Just out of curiosity: why did C++ choose a = new A instead of a = A.new as the way to instantiate an object? Doesn't the latter seem more object-oriented?

Does it? That depends on how you define "object-oriented". If you define it the way Java did, as "everything must have syntax of the form X.Y, where X is an object and Y is whatever you want to do with that object", then yes, you're right. This isn't object-oriented, and Java is the pinnacle of OOP programming. But luckily, there are also a few people who feel that "object-oriented" should relate to the behavior of your objects, rather than to which syntax is used on them. Essentially it should boil down to what the Wikipedia page says:

Object-oriented programming is a programming paradigm that uses "objects" – data structures consisting of datafields and methods together with their interactions – to design applications and computer programs. Programming techniques may include features such as information hiding, data abstraction, encapsulation, modularity, polymorphism, and inheritance.

Note that it says nothing about the syntax. It doesn't say "and you must call every function by specifying an object name followed by a dot followed by the function name". And given that definition, foo(x) is exactly as object-oriented as x.foo(). All that matters is that x is an object, that is, it consists of datafields and a set of methods by which it can be manipulated. In this case, foo is obviously one of those methods, regardless of where it is defined and regardless of which syntax is used in calling it. C++ gurus realized this long ago, and have written articles such as this. An object's interface is not just the set of member methods (which can be called with the dot syntax). It is the set of functions which can manipulate the object. Whether they are members or friends doesn't really matter.
It is object-oriented as long as the object is able to stay consistent, that is, it is able to prevent arbitrary functions from messing with it. So, why would A.new be more object-oriented? How would this form give you "better" objects? One of the key goals behind OOP was to allow more reusable code. If new had been a member of each and every class, that would mean every class had to define its own new operation. Whereas when it is a non-member, every class can reuse the same one. Since the functionality is the same (allocate memory, call constructor), why not put it out in the open where all classes can reuse it? (Preemptive nitpick: Of course, the same new implementation could have been reused in this case as well, by inheriting from some common base class, or just by a bit of compiler magic. But ultimately, why bother, when we can just put the mechanism outside the class in the first place)
1,581,818
1,581,838
How to call a function from binary data
I have some binary data which contains a bunch of functions, and I want to call one of them. I know the signatures of these functions along with their offsets relative to the start of the file. The calling convention is the default one, __cdecl. The file was already loaded into a memory page with execute permission. For example (A, B, C being some types):

void myFunction (A *arg1, B arg2, C arg3); // Signature
int myOffset = 0x42; // Offset

How can I specify that myOffset points to myFunction?
// define a function pointer type matching the signature
typedef void (__cdecl *your_function)(A *arg1, B arg2, C arg3);

your_function fptr;
char *memory = (char *)0x123456;        // base segment address
fptr = (your_function)(memory + 0x42);  // calculate memory address
(*fptr)(a, b, c);                       // call function
1,581,839
1,581,853
what's the mechanism of sizeof() in C/C++?
It seems sizeof is not a real function? For example, if you write this:

int i = 0;
printf("%d\n", sizeof(++i));
printf("%d\n", i);

you may get output like:

4
0

And when you dig into the assembly code, you'll find something like this:

movl $4, %esi
leaq LC0(%rip), %rdi
xorl %eax, %eax
call _printf

So the compiler put the constant 4 directly as a parameter of printf and called it. Then what does sizeof do?
You know, there's a reason why there are standard documents (3.8MB PDF); C99, section 6.5.3.4, §2:

The sizeof operator yields the size (in bytes) of its operand, which may be an expression or the parenthesized name of a type. The size is determined from the type of the operand. The result is an integer. If the type of the operand is a variable length array type, the operand is evaluated; otherwise, the operand is not evaluated and the result is an integer constant.

In response to ibread's comment, here's an example for the C99 variable length array case:

#include <stdio.h>

size_t sizeof_int_vla(size_t count) {
    int foo[count];
    return sizeof foo;
}

int main(void) {
    printf("%u", (unsigned)sizeof_int_vla(3));
}

The size of foo is no longer known at compile time and has to be determined at run time. The generated assembly looks quite weird, so don't ask me about implementation details...
1,581,925
1,582,111
Operations and functions that increase Virtual Bytes
Having some out-of-memory problems with a 32-bit process in Windows, I began using Performance Monitor to log certain counters for that process. Though it is normal for Virtual Bytes to be higher than both Private Bytes and Working Set, I found that in my case there was a substantial difference: Virtual Bytes was much higher than both Private Bytes and Working Set. What specific operations and Win32/CRT functions (in C or C++) would increase Virtual Bytes but not Private Bytes and Working Set? I guess it would be some kind of shared resource, if I understand the description of the different counters in Performance Monitor correctly. As there seems to be some (to say the least) confusion on the naming conventions used for the memory counters in different releases of Windows, as well as in different applications in the same release of Windows, I put together the following:

Information from MSDN

According to MSDN - Memory Limits for Windows Releases, the user-mode virtual address space limit in 32-bit Windows for each 32-bit process is normally 2 GB. It can be up to 3 GB with IMAGE_FILE_LARGE_ADDRESS_AWARE and 4GT. Below is a description of the different counters in Performance Monitor along with the corresponding columns in Task Manager and the Win32 structure which holds the information, according to MSDN - Memory Performance Information.

Virtual Bytes

Virtual Bytes is the current size, in bytes, of the virtual address space the process is using. Use of virtual address space does not necessarily imply corresponding use of either disk or main memory pages. Virtual space is finite, and the process can limit its ability to load libraries.

Task Manager XP: N/A
Task Manager Vista: N/A
Structure: MEMORYSTATUSEX.ullTotalVirtual - MEMORYSTATUSEX.ullAvailVirtual

Private Bytes

Private Bytes is the current size, in bytes, of memory that this process has allocated that cannot be shared with other processes.

Task Manager XP: VM Size
Task Manager Vista: Commit Size
Structure: PROCESS_MEMORY_COUNTERS_EX.PrivateUsage

Working Set

Working Set is the current size, in bytes, of the Working Set of this process. The Working Set is the set of memory pages touched recently by the threads in the process. If free memory in the computer is above a threshold, pages are left in the Working Set of a process even if they are not in use. When free memory falls below a threshold, pages are trimmed from Working Sets. If they are needed they will then be soft-faulted back into the Working Set before leaving main memory.

Task Manager XP: Mem Usage
Task Manager Vista: Working Set
Structure: PROCESS_MEMORY_COUNTERS_EX.WorkingSetSize
Things I can think of right now that may increase Virtual Bytes without increasing Private Bytes: Binaries are often shared (i.e. not private), but occupy significant address space. This can be even larger than the size of the binary. Using VirtualAlloc to reserve sequential address space without committing / accessing it. Custom memory managers might do that. Using a memory-mapped file (without completely accessing it)
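VirtualAlloc with MEM_RESERVE is Windows-specific, so as a hedged illustration of the same idea (reserving address space without committing it), here is a POSIX sketch using mmap with PROT_NONE; this is my own example, not from the answer above:

```cpp
#include <sys/mman.h>
#include <cstddef>

// Reserve `size` bytes of address space without making it usable.
// The virtual size of the process grows, but no private/committed
// memory is charged until the pages are made accessible and touched
// (on Windows, VirtualAlloc(MEM_RESERVE) behaves analogously).
void* reserve_address_space(std::size_t size) {
    void* p = mmap(nullptr, size, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return (p == MAP_FAILED) ? nullptr : p;
}
```

Touching any byte in this range would fault, because the pages were only reserved, never committed.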
1,582,095
1,585,448
Test driven development for C++ XPCOM component?
I want to create a Firefox extension using c++ XPCOM component which in turn uses javascript XPCOM component. Is there any framework that allows test driven development of C++ XPCOM component/firefox extension ?
You could copy what Mozilla uses to test native code. You just have to make sure you link with libxul (which may require that you build Firefox). An example file that uses TestHarness.h can be found here.
1,582,372
1,582,509
How can I convert from DWORD RGBA to ints?
I have to convert a DWORD (unsigned long) RGBA to four int vars (R, G, B, and A) So far, I have this function to convert the 4 ints to a DWORD: unsigned long RGBA2DWORD(int iR, int iG, int iB, int iA) { return ((iA << 24) | (iR << 16) | (iG << 8) | iB); } How can I convert it back? Something like struct RGBA { int R, G, B, A; }; RGBA DWORD2RGBA(unsigned long dwColor) { static RGBA tmp; //.......conversion process return tmp; } Any kind of help would be appreciated! :) Thanks
If I were you, I'd stick with multiplicative-additive operations in the packing/unpacking functions. Something like this unsigned long RGBA2DWORD(int iR, int iG, int iB, int iA) { return ((iA * 256 + iR) * 256 + iG) * 256 + iB; } with a symmetrical unpacking function RGBA DWORD2RGBA(unsigned long dwColor) { RGBA tmp; /* why did you declare it static??? */ tmp.B = dwColor % 256; dwColor /= 256; tmp.G = dwColor % 256; dwColor /= 256; tmp.R = dwColor % 256; dwColor /= 256; tmp.A = dwColor % 256; /* dwColor /= 256; */ return tmp; } Note that there's only one "magic constant" in the whole code. Of course, if you have an external specification that is written in terms of bit patterns in the packed data, a version based on bit and shift operations might be preferable. Still unsigned long RGBA2DWORD(int iR, int iG, int iB, int iA) { return (((((iA << 8) + iR) << 8) + iG) << 8) + iB; } RGBA DWORD2RGBA(unsigned long dwColor) { RGBA tmp; /* why did you declare it static??? */ tmp.B = dwColor & 0xFF; dwColor >>= 8; tmp.G = dwColor & 0xFF; dwColor >>= 8; tmp.R = dwColor & 0xFF; dwColor >>= 8; tmp.A = dwColor & 0xFF; /* dwColor >>= 8; */ return tmp; } has far fewer "magic constants". Now you can wrap the repetitive actions/subexpressions in macros or, better, inline functions and arrive at a very compact and readable packer/unpacker.
1,582,404
1,635,107
QtCreator delete file is not working
I'm writing a "custom makefile" project using QtCreator and I want to delete a file of my project, so, I select the file in the tree view, press the right click and the "delete" option is disabled and I did not find any way of enable it. My environment: QtCreator 1.2.1 on SnowLeopard: Thanks in advance, Ernesto
By the way, I downloaded and installed Qt creator 1.3 beta for Mac OS X and deleting files is working properly.
1,582,704
1,582,716
Need help with three Visual Studio errors - C++ errors occurring when trying to build solution
I get the following errors when I try to build this project: error C2182: 'read_data':illegal use of type 'void' error C2078: too many initializers errors c2440: 'initializing': cannot convert from 'std::ofstream' to int All of the above seem to be pointing to my function call on line 72, which is this line: void read_data(finput, foutput); I've looked up these error codes on the MSDN site but wasn't able to use the description to deduce what might be wrong. Any ideas/tips are appreciated. #include <iostream> #include <fstream> #include <iomanip> #include <string> using namespace std; void read_data(ifstream& finput, ofstream& foutput); //PRE: The address of the ifstream & ofstream objects is passed to the function //POST: The data values are read in until the end of the file is reached void print_data(ofstream& foutput, string fname, string lname, int largest, int smallest); //PRE: The address of the ofstream object and the values of fname, lname and largest and smallest integer // in each row is passed to the function //POST: The values are outpur to the file with formatting int max(int num1, int num2, int num3, int num4); //PRE: Four integer values are passed to the function //POST: The largest of the four integer values is returned int min(int num1, int num2, int num3, int num4); //PRE: Four integer values are passed to the function //POST: The smallest of the four integer values is returned int main() { //Declare the filestream objects ifstream finput; ofstream foutput; //Attempt to open the input & output files //In each case, print an error message and quit if they fail to open finput.open("program4_input.txt"); if (finput.fail()) { cout <<"The input file failed to open!" << endl; return exit(1); } foutput.open("output.txt"); if (foutput.fail()) { cout <<"The output file failed to open!" 
<< endl; return exit(2); } void read_data(finput, foutput); return 0; } //Function definitions void read_data(ifstream& finput, ofstream& foutput) { string fname, lname; int num1, num2, num3, num4, largest, smallest; while(finput >> fname) { finput >> lname >> num1 >> num2 >> num3 >> num4; largest = max(num1, num2, num3, num4); smallest = min(num1, num2, num3, num4); print_data(foutput, fname, lname, largest, smallest); } } void print_data(ofstream& foutput, string fname, string lname, int largest, int smallest) { foutput << setw(15) << fname << setw(15) << lname << setw(10) << largest << setw(10) << smallest << endl; } int max(int num1, int num2, int num3, int num4) { int lnum, lnum1, lnum2; if (num1 > num2) { lnum1 = num1; } else lnum1 = num2; if (num3 > num4) { lnum2 = num3; } else lnum2 = num4; if (lnum1 > lnum2) { lnum = lnum1; } else lnum = lnum2; return lnum; } int min(int num1, int num2, int num3, int num4) { int snum, snum1, snum2; if (num1 < num2) { snum1 = num1; } else snum1 = num2; if (num3 > num4) { snum2 = num3; } else snum2 = num4; if (snum1 > snum2) { snum = snum1; } else snum = snum2; return snum; }
Yes, the problem is the line void read_data(finput, foutput); inside the main function. Don't specify the return type when calling the function. Only when declaring it. In other words, the line should just read read_data(finput, foutput);
1,582,737
1,582,763
What is the simplest way to execute arbitrary process with stdin, stdout forwarded to a socket?
I'm interested in two situations: How to do it from C++? How to do it from system's shell? Answers for Linux, Windows and OSX are welcome.
Linux/OSX (actually POSIX), programming (any language that has POSIX calls), general scheme: set up a socket... fork() close(0), close(1) (not necessary, dup2 will close it too... but added for clarity) dup2(socket, 0), dup2(socket, 1) exec() Shell: use nc. Example in my other answer: https://stackoverflow.com/questions/1269400/is-this-a-fair-question-to-ask-in-a-software-engineering-interview-phase-1/1269577#1269577
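The scheme above can be sketched in C++ with POSIX calls; this is my illustrative version, using a socketpair rather than a network socket so the example stays self-contained:

```cpp
#include <sys/socket.h>
#include <unistd.h>

// Spawn `path` with its stdin and stdout attached to one end of a
// socketpair; returns the parent's end of the socket, or -1 on failure.
int spawn_on_socket(const char* path) {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0) return -1;
    pid_t pid = fork();
    if (pid < 0) return -1;
    if (pid == 0) {                  // child
        close(sv[0]);
        dup2(sv[1], 0);              // stdin  <- socket
        dup2(sv[1], 1);              // stdout -> socket
        close(sv[1]);
        execl(path, path, (char*)nullptr);
        _exit(127);                  // only reached if exec failed
    }
    close(sv[1]);                    // parent keeps sv[0]
    return sv[0];
}
```

For a real TCP socket, replace the socketpair with the fd returned by accept() or connect(); the dup2/exec part is identical.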
1,582,790
1,582,931
Network programming: SOAP vs DIY marshalling with XML library?
I know that there are a lot of discussions already on SO about SOAP, bloat, XML, and alternative mechanisms like REST. Here's the situation. A new team member is really talking up SOAP based upon the difficulty of implementing protocols by hand. He recommends gSOAP (project is all in C++.) He is stating things like WSDL cleaning up lots of messy hand coded C++. Right now I am handling most networking with XML based text messages, and the expat XML library. So I have some programming effort (not much) associated with modifications to message formats or additions to parameter lists. At the sender end I package up an XML request and send it over a plain old TCP socket. At the receiver I parse the XML using DOM or SAX. Etc. It has worked fine so far. The XML messages are quite compact and average a couple of hundred characters, at most. And I understand every item in those messages. We want one portion of the product (a server) to be accessible to web sites that are coded using PHP. That is partly driving this idea here, that a SOAP interface will be "easier" for script writers. Everyone on this project believes that SOAP is their salvation. I see the introduction of a new large library like gSOAP as highly disruptive to the momentum of a mature project. What I am wondering is if there is a different and more compact way of doing what SOAP gives us. And how to balance claims of gSOAP or other SOAP tools making development life easier against hard reality. IE, I am being told that WSDL is better, easier, more workmanlike, etc than hand coding C++ using an XML library. That it puts the semantics of the C++ objects right into the declaration of the network messages. The thing is, many of the XML messages that I have defined don't map one for one to a single distinct object at the receiving end. Or, it is possible that I am worrying about nothing. But the reality as I scan messages here seems to contradict what I have been told locally.
I think that you will find that PHP developers are more likely to prefer RESTful interfaces. Here is a 2003 article about it. http://onlamp.com/pub/a/php/2003/10/30/amazon_rest.html RESTful interfaces are a growing phenomenon and if you need to attract developers to your platform it will be easier if you catch the wave. Having said that, is there a good reason why you cannot support multiple interfaces? This is fairly common in web services that do not have a captive audience. You could support your legacy model, a clean RESTful model and a SOAP/WSDL model. Then take stock after 6 months to a year to see which model is the most popular and least effort to support. When it comes to making the site more accessible to outsiders, REST has more widespread usage. As far as saving your project, it is possible that SOAP would do this because it demands a certain amount of rigor in interface design, however the same could be said of REST. If this is a key criterion, then you should probably abandon the hand-coded XML and go with a high-level interface design that could be implemented as both REST and SOAP. I know some people believe that SOAP and REST are fundamentally different approaches, but if you take a RESTful approach to the interface design, you shouldn't have great difficulty in creating a SOAP version. Don't try to do it the other way around though.
1,582,877
1,582,918
How to find free memory within a specific address range
I want to write a small amount of memory inside of a specific address range of my process. Example amount of memory to allocate: 5 bytes lower bound for address: 0x 00 40 00 00 upper bound for address: 0x 00 A0 00 00 The range in which I want to write is already allocated by the process. Therefore, I can't simply allocate new mem with VirtualAlloc. However, since the pages in the desired address space are used for program code, they are not 100% used. There exists enough space somewhere to write my 5 bytes. What do I have to do to ensure that I don't overwrite necessary memory?
I don't think there's a nice, general way to do what you're wanting. Since it looks like you're talking about Windows and about where the default spot to load a PE is, I'll make some assumptions here that might help you. If you're willing to parse the PE-header, you can generally find slack-space in there. Check out the areas between the sections and before the functions. Depending on how the application was built, you might find areas between functions to be filled with INT3's that would probably be sufficient for what you're looking for. If you gave us more information on what you're trying to do specifically, we could probably help more. Can you just patch the binary before loading it or do you have to do everything at run-time?
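As a rough illustration of finding slack between functions, here is a generic byte-scanning sketch over an in-memory buffer (not a full PE parser; in practice you would first walk the section headers to locate the code region, and this assumes MSVC-style 0xCC padding):

```cpp
#include <cstddef>

// Find the start of a run of at least `need` INT3 (0xCC) padding bytes
// inside [code, code+len). Returns the offset, or `len` if no run exists.
std::size_t find_int3_slack(const unsigned char* code, std::size_t len,
                            std::size_t need) {
    std::size_t run = 0;
    for (std::size_t i = 0; i < len; ++i) {
        run = (code[i] == 0xCC) ? run + 1 : 0;
        if (run == need) return i + 1 - need;  // start of the run
    }
    return len;
}
```

Remember that code pages are normally not writable, so after locating slack you would still need VirtualProtect before patching.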
1,583,011
1,583,037
Compiler warning with nested vectors of depth 3 or more
I am trying to use a class member that uses nested vectors of depth 3: vector< vector< vector > > classVariable_; However, I then get compiler warnings throughout my code when I try do something as simple as classVariable_.clear(): /usr/lib/gcc/i686-pc-cygwin/3.4.4/include/c++/bits/stl_vector.h: In member function `std::vector<_Tp, _Alloc>& std::vector<_Tp, _Alloc>::operator=(const std::vector<_Tp, _Alloc>&) [with _Tp = std::vector >, _Alloc = std::allocator > >]': /usr/lib/gcc/i686-pc-cygwin/3.4.4/include/c++/bits/stl_vector.h:715: warning: '__result' might be used uninitialized in this function /usr/lib/gcc/i686-pc-cygwin/3.4.4/include/c++/bits/stl_uninitialized.h:82: warning: '__cur' might be used uninitialized in this function /usr/lib/gcc/i686-pc-cygwin/3.4.4/include/c++/bits/stl_uninitialized.h:82: warning: '__cur' might be used uninitialized in this function The strange thing is that this works with nested vectors of depth 2, but not of depth 3 or more. Is this something to do with missing default operators/constructors in the stl libraries? Does anyone know a clean solution around this? I am compiling this using cygwin but that should not have an effect on this. Thank you.
If you actually mean vector< vector< vector< int> > > classVariable_, then classVariable_.clear() shouldn't produce any warning. Might be this bug. Disable the warning manually, or avoid nested vectors of depth 3, which might not be a good idea anyway.
1,583,184
1,583,204
How to print result of C++ evaluation with GDB?
I've been looking around but was unable to figure out how one could print out in GDB the result of an evaluation. For example, in the code below: if (strcmp(current_node->word,min_node->word) > 0) min_node = current_node; (above I was trying out a possible method for checking alphabetical order for strings, and wasn't absolutely certain it works correctly.) Now I could watch min_node and see if the value changes but in more involved code this is sometimes more complicated. I am wondering if there is a simple way to watch the evaluation of a test on the line where GDB / program flow currently is.
There is no expression-level single stepping in gdb, if that's what you are asking for. Your options are (from most commonly to most infrequently used): evaluate the expression in gdb, doing print strcmp(current_node->word,min_node->word). Surprisingly, this works: gdb can evaluate function calls, by injecting code into the running program and having it execute the code. Of course, this is fairly dangerous if the functions have side effects or may crash; in this case, it is so harmless that people typically won't think about potential problems. perform instruction-level (assembly) single-stepping (ni/si). When the call instruction is done, you find the result in a register, according to the processor conventions (%eax on x86). edit the code to assign intermediate values to variables, and split that into separate lines/statements; then use regular single-stepping and inspect the variables.
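The third option can be sketched like this; it is a hypothetical rewrite of the questioner's snippet, with the comparison logic kept exactly as in the question, so that `print cmp` works at a regular breakpoint:

```cpp
#include <cstring>

struct Node { const char* word; };

// Splitting the comparison into a named intermediate makes the result
// visible to the debugger with ordinary single-stepping: `print cmp`.
Node* pick(Node* current_node, Node* min_node) {
    int cmp = std::strcmp(current_node->word, min_node->word);
    if (cmp > 0)                  // same test as in the question
        min_node = current_node;
    return min_node;
}
```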
1,583,234
1,583,259
C++ system() function — How to collect the output of the issued command?
I'm running some commands with the C++ system() function: int system ( const char * command ); How can I collect the standard output from the issued commands? To be specific, I want to collect the output of the issued command (for example, the directory listing output from issuing the dir command).
Are you looking for returned value (as in "exit status") of the executed command, or for its output (as in "what did it print")? If the latter, use popen() and pclose() instead. If the former, look at the return value from system() (and use the documentation for waitpid() to interpret it).
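A minimal popen() sketch for the "what did it print" case, assuming a POSIX system:

```cpp
#include <cstdio>
#include <string>

// Run `cmd` through the shell and capture its standard output.
// pclose() returns the command's exit status (ignored here for brevity).
std::string run_and_capture(const char* cmd) {
    std::string out;
    FILE* p = popen(cmd, "r");   // "r" reads the child's stdout
    if (!p) return out;
    char buf[256];
    while (fgets(buf, sizeof buf, p) != nullptr)
        out += buf;
    pclose(p);
    return out;
}
```

On Windows the equivalents are _popen()/_pclose().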
1,583,413
1,583,422
Find a prime number?
To find whether N is a prime number we only need to look for all numbers less or equal to sqrt(N). Why is that? I am writing a C code so trying to understand a reason behind it.
N is prime if it is a positive integer which is divisible by exactly two positive integers, 1 and N. Since a number's divisors cannot be larger than that number, this gives rise to a simple primality test: If an integer N, greater than 1, is not divisible by any integer in the range [2, N-1], then N is prime. Otherwise, N is not prime. However, it would be nice to modify this test to make it faster. So let us investigate. Note that the divisors of N occur in pairs. If N is divisible by a number M, then it is also divisible by N/M. For instance, 12 is divisble by 6, and so also by 2. Furthermore, if M >= sqrt(N), then N/M <= sqrt(N). This means that if no numbers less than or equal to sqrt(N) divide N, no numbers greater than sqrt(N) divide N either (excepting 1 and N themselves), otherwise a contradiction would arise. So we have a better test: If an integer N, greater than 1, is not divisible by any integer in the range [2, sqrt(N)], then N is prime. Otherwise, N is not prime. if you consider the reasoning above, you should see that a number which passes this test also passes the first test, and a number which fails this test also fails the first test. The tests are therefore equivalent.
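The second test translates directly into code; a minimal C++ sketch, using d * d <= n so no floating-point sqrt is needed:

```cpp
// Trial division up to sqrt(n): if no d in [2, sqrt(n)] divides n,
// then no larger proper divisor can either (it would pair with a
// smaller one already checked).
bool is_prime(unsigned n) {
    if (n < 2) return false;          // 0 and 1 are not prime
    for (unsigned d = 2; d * d <= n; ++d)
        if (n % d == 0) return false;
    return true;
}
```

Note the <= in the loop condition: 49 must test d = 7, which sits exactly at sqrt(49).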
1,583,436
1,583,506
What is the most efficient way to keep a steady frame rate with DirectX and C++?
I'm learning DirectX from a book about game programming, and it uses the following method for a game loop: long int start = GetTickCount(); while(true) GameRun(); void GameRun() { if(GetTickCount() - start >= 30) //do stuff } This makes start equal whatever the time is (I'm guessing get tick count gives the number of 'ticks' since the program started), and then, 30 ticks later, does all of the AI, rendering, etc. My question is, wouldn't it be more efficient to do all of the AI, etc. first, then, if theres time left over, wait until the frame needs to be changed? So, what would be a better way to keep a steady frame rate? (Preferably only using DirectX headers that I would already be using for sound, images and imput (like d3d9.h)) And, on a related note, what exactly does GetTickCount() do, and how long is a 'tick'
For your game loop, read this article. It sums up your options and makes the choice clear. Now for GetTickCount(): it's the simplest way to get the time in Windows (I don't know the equivalent for Unix OSes). QueryPerformanceFrequency and QueryPerformanceCounter give far more precision but can be hard to use correctly. You might be interested in reading the way the Ogre::Timer class works (Ogre is a well-known open-source real-time 3D rendering library). They added some code to deal with some potential problems with those functions, and there is code for several platforms. You can take the code and use it in your apps; the license of the coming version of Ogre is MIT. That way you can concentrate on your game loop and the game code. UPDATE ON TIME PRECISION: if you can use the latest Boost version, there is boost.chrono, which provides a high-precision, cross-platform clock, steady_clock, that uses QueryPerformanceFrequency and QueryPerformanceCounter on Windows. In fact, chrono is a proposal for addition to the C++ standard library.
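The accumulator-based fixed-timestep pattern that article describes can be sketched as follows; this is a generic illustration, not code from the book or article, and the clock is injected as a parameter purely so the loop can be exercised deterministically (in a real game it would wrap GetTickCount or QueryPerformanceCounter):

```cpp
#include <functional>

// Fixed timestep: run game logic in constant-dt steps, render with
// whatever time is left over. Returns how many updates actually ran.
int run_updates(std::function<double()> now, double dt, int max_updates) {
    int updates = 0;
    double last = now();
    double acc = 0.0;
    while (updates < max_updates) {
        double t = now();
        acc += t - last;          // accumulate real elapsed time
        last = t;
        while (acc >= dt && updates < max_updates) {
            acc -= dt;            // GameUpdate(dt) would go here
            ++updates;
        }
        // Render() would go here; leftover time can be slept away
    }
    return updates;
}
```

This answers the question in the post directly: do the AI/update work first, then spend the remaining frame time idle, rather than spinning until 30 ticks have passed.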
1,583,444
1,583,451
Does d3d9.h include windows.h? (C++)
When I use #include <d3d9.h> in my programs, I no longer need to include windows.h to use windows functions like WinMain, and CreateWindow. Is this because d3d9.h &c. include windows.h? Mainly, I'm wondering if it is possible to substitute windows.h with d3d9.h, etc, and still be able to se any functions I could use with windows.h.
Yes, if you open d3d9.h you will see # include <windows.h>.
1,583,509
1,583,525
Why won't my sorting method work?
I have an issue with my C++ code for college. I can't seem to understand why my sRecSort() method isn't working. Any help? This is really confusing me! #include <iostream> #include <algorithm> #include <string> #include <fstream> #include <sstream> using namespace std; void sRecSort(string n[], int s[], string e[], int len){ for (int i = 0; i < len; i++){ for (int j = 1; j < len; j++){ if (s[j] < s[i]){ string tempName = " "; string tempName2 = " "; int tempGrade,tempGrade2; string tempEmail = " "; string tempEmail2 = " "; tempName = n[i]; tempName2 = n[j]; tempGrade = s[i]; tempGrade2 = s[j]; tempEmail = e[i]; tempEmail2 = e[j]; s[i] = tempGrade2; s[j] = tempGrade; n[i] = tempName2; n[j] = tempName; e[i] = tempEmail2; e[j] = tempEmail; } } } } void printLowestRecord(char inFileName[]){ string tempSubString = " "; string names[12] = {" "}; int grades[12] = {0}; string emails[12] = {""}; int firstSpace = -1; int secondSpace = -1; ifstream inputMe(inFileName); while (!inputMe.eof()){ for (int i = 0; i < 12; i++){ getline(inputMe, tempSubString); for (int w = 0; w < strlen(tempSubString.c_str()); w++){ if (tempSubString[w] != ' '){ continue; } else{ if (firstSpace == -1){ firstSpace = w; } else if (firstSpace != -1 && secondSpace == -1){ secondSpace = w; names[i] = tempSubString.substr(0, firstSpace); grades[i] = atoi((tempSubString.substr(firstSpace + 1, secondSpace - (firstSpace + 1))).c_str()); emails[i] = tempSubString.substr(secondSpace + 1, tempSubString.length() - (secondSpace + 1)); break; } } } firstSpace = -1; secondSpace = -1; } } sRecSort(names,grades,emails,12); cout << names[0] << " " << grades[0] << " " << emails[0] << endl; inputMe.close(); } void sortFileRecords(char inFileName[], char outFileName[]){ ifstream inputFile(inFileName); ofstream outputFile(outFileName); string tempSubString = " "; string names[12] = {" "}; int grades[12] = {0}; string emails[12] = {" "}; int firstSpace = -1; int secondSpace = -1; while (!inputFile.eof()){ for (int i = 0; 
i < 12; i++){ getline(inputFile, tempSubString); for (int w = 0; w < strlen(tempSubString.c_str()); w++){ if (tempSubString[w] != ' '){ continue; } else{ if (firstSpace == -1){ firstSpace = w; } else if (firstSpace != -1 && secondSpace == -1){ secondSpace = w; names[i] = tempSubString.substr(0, firstSpace); grades[i] = atoi((tempSubString.substr(firstSpace + 1, secondSpace - (firstSpace + 1))).c_str()); emails[i] = tempSubString.substr(secondSpace + 1, tempSubString.length() - (secondSpace + 1)); break; } } } firstSpace = -1; secondSpace = -1; } } int tempSmallest = grades[0]; int idxCatch = 0; for (int x = 1; x < 12; x++){ if (grades[x] < tempSmallest){ tempSmallest = grades[x]; idxCatch = x; } } for (int e = 0; e < 12; e++){ cout << names[e] << " " << grades[e] << " " << emails[e] << endl; } //string tmpStringForInt = " "; //stringstream tmpSS; /*for (int q = 0; q < 12; q++){ tmpSS << grades[q]; tmpStringForInt = tmpSS.str(); outputFile << names[q] << " " << tmpStringForInt << " " << emails[q] << endl; }*/ inputFile.close(); outputFile.close(); } int main (int argc, char * const argv[]) { printLowestRecord("gradebook.txt"); sortFileRecords("gradebook.txt", "sortedGradebook.txt"); return 0; }
Okay, the problem is in the inner loop condition. Can't tell you where exactly -- that's homework. for (int i = 0; i < len; i++){ for (int j = 1; j < len; j++){ // <--- this line is wrong The first element of your "sorted" array will correctly be the lowest. But the others... P.S. Irrelevant to the problem, but, please, read a chapter about structures in your C++ book. P.P.S. You have chosen the worst sorting algorithm I could possibly imagine. Try "Bubble sort", at least.
1,583,547
1,583,568
How does get object from point work?
I'm new to programming. I want to make a card game with C++ / Allegro. The graphics api is irrelevant though. I want it to have many buttons you can click. I'm wondering the proper way this is done. For instance, how does windows know which control you click on from your cursor. I would use an array of rectangles and check each rectangle to ee if my mouse is is 1 of their bounds, but this doesn't seem very good. What about if I draw a line from 2 points and want to be able to drag any part of the line? I doubt i'm doing this right either. Any insight on this would be very helpful. Thanks
Basically, you want to make a mouse-driven user interface. This is very difficult to do from scratch; that's why Allegro has a built-in GUI system. If you don't like it, you'd be better off using a GUI library than writing one yourself. I'd recommend MASkinG, an add-on for Allegro for writing graphical interfaces in C++.
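For the specific question of how a window system finds the control under the cursor: the questioner's instinct of checking rectangles is essentially what real GUI systems do, walked from the topmost (last-drawn) control down. A minimal sketch:

```cpp
struct Rect { int x, y, w, h; };

// Point-in-rectangle test, half-open on the right/bottom edges so
// adjacent controls never both claim the same pixel.
bool contains(const Rect& r, int px, int py) {
    return px >= r.x && px < r.x + r.w &&
           py >= r.y && py < r.y + r.h;
}

// Walk controls from topmost (last drawn) to bottommost; first hit wins.
// Returns the control's index, or -1 if the click landed on nothing.
int hit_test(const Rect* rects, int n, int px, int py) {
    for (int i = n - 1; i >= 0; --i)
        if (contains(rects[i], px, py)) return i;
    return -1;
}
```

For dragging a line, the same idea applies with a point-to-segment distance test instead of a rectangle test: the line is "hit" if the cursor is within a few pixels of it.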
1,583,652
1,583,656
How to read cin with whitespace up until a newline character?
I wish to read from cin in C++ from the current position up until a newline character into a string. The characters to be read may include spaces. My first pass fails because it stops on the first space: string result; cin >> result; If cin is given: (cd /my/dir; doSometing)\n The variable result only gets: (cd I would think I should be able to use stream manipulators to accomplish this, but the skipws was not quite right in that it throws carriage returns in with spaces and tabs, plus it sounds like that is for leading whitespace to be skipped. Perhaps I need to use streambuf something like this? streambuf buf; cin >> buf;
std::string str; std::getline( std::cin, str);
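A self-contained illustration of the difference; std::istringstream stands in for std::cin here so the behaviour is reproducible:

```cpp
#include <istream>
#include <sstream>
#include <string>

// getline consumes everything up to and including '\n',
// keeping embedded spaces, unlike operator>> which stops at whitespace.
std::string read_line(std::istream& in) {
    std::string line;
    std::getline(in, line);
    return line;
}
```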
1,583,791
1,584,079
constexpr and endianness
A common question that comes up from time to time in the world of C++ programming is compile-time determination of endianness. Usually this is done with barely portable #ifdefs. But does the C++11 constexpr keyword along with template specialization offer us a better solution to this? Would it be legal C++11 to do something like: constexpr bool little_endian() { const static unsigned num = 0xAABBCCDD; return reinterpret_cast<const unsigned char*> (&num)[0] == 0xDD; } And then specialize a template for both endian types: template <bool LittleEndian> struct Foo { // .... specialization for little endian }; template <> struct Foo<false> { // .... specialization for big endian }; And then do: Foo<little_endian()>::do_something();
Assuming N2116 is the wording that gets incorporated, then your example is ill-formed (notice that there is no concept of "legal/illegal" in C++). The proposed text for [decl.constexpr]/3 says its function-body shall be a compound-statement of the form { return expression; } where expression is a potential constant expression (5.19); Your function violates the requirement in that it also declares a local variable. Edit: This restriction could be overcome by moving num outside of the function. The function still wouldn't be well-formed, then, because expression needs to be a potential constant expression, which is defined as An expression is a potential constant expression if it is a constant expression when all occurrences of function parameters are replaced by arbitrary constant expressions of the appropriate type. IOW, reinterpret_cast<const unsigned char*> (&num)[0] == 0xDD would have to be a constant expression. However, it is not: &num would be an address constant-expression (5.19/4). Accessing the value of such a pointer is, however, not allowed for a constant expression: The subscripting operator [] and the class member access . and -> operators, the & and * unary operators, and pointer casts (except dynamic_casts, 5.2.7) can be used in the creation of an address constant expression, but the value of an object shall not be accessed by the use of these operators. Edit: The above text is from C++98. Apparently, C++0x is more permissive in what it allows for constant expressions. The expression involves an lvalue-to-rvalue conversion of the array reference, which is banned from constant expressions unless it is applied to an lvalue of effective integral type that refers to a non-volatile const variable or static data member initialized with constant expressions It's not clear to me whether (&num)[0] "refers to" a const variable, or whether only a literal num "refers to" such a variable. 
If (&num)[0] refers to that variable, it is then unclear whether reinterpret_cast<const unsigned char*> (&num)[0] still "refers to" num.
1,583,834
1,583,992
Is this the right approach for a thread-safe Queue class?
I'm wondering if this is the right approach to writing a thread-safe queue in C++? template <class T> class Queue { public: Queue() {} void Push(T& a) { m_mutex.lock(); m_q.push_back(a); m_mutex.unlock(); } T& Pop() { m_mutex.lock(); T& temp = m_q.pop(); m_mutex.unlock(); return temp; } private: std::queue<t> m_q; boost::mutex m_mutex; }; You get the idea... I'm just wondering if this is the best approach. Thanks! EDIT: Because of the questions I'm getting, I wanted to clarify that the mutex is a boost::mutex
Herb Sutter wrote an excellent article last year in Dr. Dobb's Journal, covering all of the major concerns for a thread-safe, lock-free, single-producer, single-consumer queue implementation. (Which made corrections over an implementation published the previous month.) His follow-up article in the next issue tackled a more generic approach for a multi-user concurrent queue, along with a full discussion of potential pitfalls and performance issues. There are a few more articles on similar concurrency topics. Enjoy.
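For reference, a corrected version of the queue from the question might look like this. It is a sketch, not Sutter's code: std::mutex stands in for boost::mutex, RAII locking replaces the manual lock/unlock so the mutex is released even if a copy constructor throws, and Pop returns through an out-parameter because returning a reference to a just-popped element would dangle:

```cpp
#include <mutex>
#include <queue>

template <class T>
class SafeQueue {
public:
    void Push(const T& a) {
        std::lock_guard<std::mutex> lk(m_mutex);  // unlocks on scope exit
        m_q.push(a);                              // std::queue has push, not push_back
    }
    // Returns false instead of blocking when the queue is empty.
    bool TryPop(T& out) {
        std::lock_guard<std::mutex> lk(m_mutex);
        if (m_q.empty()) return false;
        out = m_q.front();                        // copy out BEFORE popping
        m_q.pop();                                // std::queue::pop returns void
        return true;
    }
private:
    std::queue<T> m_q;
    std::mutex m_mutex;
};
```

A production version would usually add a condition variable for a blocking Pop; the articles above cover the lock-free alternatives.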
1,583,966
1,583,973
C++ stack overflow - visual studio 2008
I declared and initialized an array having [100][1000][1000] char elements(100MB), it didn't say about a stack overflow at the first time.But when I running it after a while it throws a Stack overflow exception! I increased the -Stack Reserve Size- to 200,000,000 in project options->linker->system but it didn't worked! I'm working using 2GB Ram.How do i avoid stack over flow.
Stop using the stack! Use heap memory!
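For example, a sketch of the heap-based alternative: one contiguous vector, indexed manually, standing in for char data[100][1000][1000]:

```cpp
#include <cstddef>
#include <vector>

// 100 MB lives on the heap; the Cube object itself is tiny, so it can
// safely be a local variable. The vector frees itself automatically.
struct Cube {
    std::vector<char> d;
    Cube() : d(std::size_t(100) * 1000 * 1000) {}   // zero-initialized
    char& at(std::size_t i, std::size_t j, std::size_t k) {
        return d[(i * 1000 + j) * 1000 + k];        // row-major indexing
    }
};
```

The default stack on Windows is only about 1 MB, which is why the 100 MB array overflowed; raising the linker's stack reserve to fit it is fighting the tool instead of fixing the design.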
1,583,989
1,583,996
Most optimal way to find the sum of 2 numbers represented as linked lists
I was trying to write a program for the problem I mentioned above, the numbers (i.e the lists) can be of unequal length, I was not able to figure out a way to do this other than the most commonly thought of approach i.e reverse list-1 reverse list-2 find the sum and store it in a new list represented by list-3 reverse the list. The complexity of this should be of the O(n+m). Is there anyway to reduce it, or do it better?
Ideally the first thing I would do is store the numbers in reverse digit order, so 43,712 is stored as: 2 -> 1 -> 7 -> 3 -> 4 It makes arithmetic operations much easier. Displaying a number can be done either iteratively or more simply with a recursive algorithm. Note: all this assumes singly-linked lists. Edit: But you've since stated you have no choice in the storage format. As such, your best bet is to reverse both the lists, do the addition and then reverse the result.
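With digits stored least-significant first, addition with carry becomes a single pass; a minimal sketch (singly-linked, leaks memory for brevity):

```cpp
struct Node { int digit; Node* next; };

// a and b store digits least-significant first, as recommended above,
// e.g. 999 is 9 -> 9 -> 9. Handles unequal lengths and a final carry.
Node* add(const Node* a, const Node* b) {
    Node dummy{0, nullptr};
    Node* tail = &dummy;
    int carry = 0;
    while (a || b || carry) {
        int sum = carry;
        if (a) { sum += a->digit; a = a->next; }
        if (b) { sum += b->digit; b = b->next; }
        carry = sum / 10;
        tail->next = new Node{sum % 10, nullptr};
        tail = tail->next;
    }
    return dummy.next;
}
```

If the input lists are fixed in most-significant-first order, reverse them, run this, and reverse the result, which is exactly the O(n+m) scheme from the question.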
1,584,090
1,584,098
Visual Studio Debugger can't view arrays after they have been passed to functions
I have a function like this: MyFunction(double matrix[4][4]) {/*do stuff*/} I am calling this from an outer function (the otuer function is a member function of a class, in case that matters): OuterFunction() { double[4][4] x; initialize(x); //this function puts the data I want in the matrix MyFunction(x); } I am trying to debug this progaram using the Visual Studio debugger. The problem is that when I am looking at the locals for the OuterFunction, I can see all the elements of the array just fine, but when I am looking at the locals for MyFunction, I can only see the first row of the array, and it says it's a matrix[4]* rather than a matrix[4][4]. This even happens when I am only passing a one dimensional array - I pass in a matrix[4], then the debugger identifies it as a matrix* and only lets me see the first element of the array. Is it possible to fix this so I can see all of the array in the debugger?
It can be solved by using a vector of vectors, or by putting your matrix variable in the watch window as "matrix,4". The ",4" is a format specifier that tells the debugger to show 4 elements.
1,584,100
1,584,114
Converting multidimensional arrays to pointers in c++
I have a program that looks like the following: double startMatrix[4][4]; double inverseMatrix[4][4]; initialize(startMatrix); //this puts the information I want in startMatrix I now want to calculate the inverse of startMatrix and put it into inverseMatrix. I have a library function for this purpose whose prototype is the following: void MatrixInversion(double** A, int order, double** B) that takes the inverse of A and puts it in B. The problem is that I need to know how to convert the double[4][4] into a double** to give to the function. I've tried just doing it the "obvious way": MatrixInversion((double**)startMatrix, 4, (double**)inverseMatrix); but that doesn't seem to work. Is that actually the right way to do it?
No, there's no right way to do specifically that. A double[4][4] array is not convertible to a double ** pointer. These are two alternative, incompatible ways to implement a 2D array. Something needs to be changed: either the function's interface, or the structure of the array passed as an argument. The simplest way to do the latter, i.e. to make your existing double[4][4] array compatible with the function, is to create temporary "index" arrays of type double *[4] pointing to the beginnings of each row in each matrix double *startRows[4] = { startMatrix[0], startMatrix[1], startMatrix[2] , startMatrix[3] }; double *inverseRows[4] = { /* same thing here */ }; and pass these "index" arrays instead MatrixInversion(startRows, 4, inverseRows); Once the function finished working, you can forget about the startRows and inverseRows arrays, since the result will be placed into your original inverseMatrix array correctly.
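Put together, the row-pointer adapter looks like this (MatrixInversion is stubbed out with a trivial copy here, just to show the double** calling pattern; the real library function is assumed to do the actual inversion):

```cpp
#include <cassert>

// Stand-in for the library function; the real one inverts A into B.
// Here it just copies, to demonstrate the double** calling convention.
void MatrixInversion(double** A, int order, double** B) {
    for (int r = 0; r < order; ++r)
        for (int c = 0; c < order; ++c)
            B[r][c] = A[r][c];
}

void invert4x4(double start[4][4], double inverse[4][4]) {
    // "Index" arrays pointing at the beginning of each row.
    double* startRows[4]   = { start[0], start[1], start[2], start[3] };
    double* inverseRows[4] = { inverse[0], inverse[1], inverse[2], inverse[3] };
    MatrixInversion(startRows, 4, inverseRows);
    // The result is already in 'inverse': the index arrays alias its rows,
    // so nothing needs to be copied back.
}
```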
1,584,202
42,268,952
GDI+ Bitmap Save problem
Bitmap bff(L"1.jpg"); bff.Save(L"2.jpg", &Gdiplus::ImageFormatJPEG, NULL); This creates a new file 2.jpg with zero-bytes length. Isn't it supposed to write an image file that is identical to 1.jpg? Why I'm having zero-bytes length files? I'm doing this test because writing other Bitmaps to files, result in the same output.
Here's a quick way to get an encoder CLSID without the custom GetEncoderClsid helper from the GDI+ samples: //Save to PNG CLSID pngClsid; CLSIDFromString(L"{557CF406-1A04-11D3-9A73-0000F81EF32E}", &pngClsid); bmp.Save(L"file.png", &pngClsid, NULL); and here are the CLSIDs for other formats: bmp: {557cf400-1a04-11d3-9a73-0000f81ef32e} jpg: {557cf401-1a04-11d3-9a73-0000f81ef32e} gif: {557cf402-1a04-11d3-9a73-0000f81ef32e} tif: {557cf405-1a04-11d3-9a73-0000f81ef32e} png: {557cf406-1a04-11d3-9a73-0000f81ef32e} If Save still produces a zero-byte file, also check that GdiplusStartup was called before any Bitmap object was created; without that initialization, GDI+ calls fail silently.
1,584,206
1,606,745
Problems communicating with external editor in Qt4
I am writing a command-line Qt4 script (using QCoreApplication) on Mac OS X. I am using this code adapted from C++ Programming with Qt 4, 2nd ed. p. 313: QTemporaryFile outFile; if (!outFile.open()) return; QString fileName = outFile.fileName(); QTextStream out(&outFile); out << initial_text; outFile.close(); QProcess::execute(editor, QStringList() << fileName); QFile inFile(fileName); if (!inFile.open(QIODevice::ReadOnly)) return; QTextStream in(&inFile); QString text = in.readAll(); std::cout << text.toStdString() << std::endl; When the above is run with editor set to "/usr/bin/vim", "Vim: Warning: Input is not from terminal" is printed, then vim launches with the initial text (the string initial_text); however, I am unable to edit or quit because pressing escape prints a blue ^[ at the position of the cursor, similar to every other key. When editor is instead set to "/Users/jason/bin/mate" (the TextMate command-line utility), TextMate launches, without the initial text. I can edit and save the document, and when I quit, the application reads in the initial text (which should have been overwritten). I am puzzled since this code is in a printed book so it should work. Am I using the wrong strings for the editor variable?
It seems your editor "/Users/jason/bin/mate" doesn't support a filename being passed as an argument. From your terminal, try something like: echo "Test text" > test.txt /Users/jason/bin/mate test.txt To see if it opens the editor with the right text.
1,584,296
1,584,306
unsigned char array to unsigned int back to unsigned char array via memcpy is reversed
This isn't cross-platform code... everything is being performed on the same platform (i.e. endianess is the same.. little endian). I have this code: unsigned char array[4] = {'t', 'e', 's', 't'}; unsigned int out = ((array[0]<<24)|(array[1]<<16)|(array[2]<<8)|(array[3])); std::cout << out << std::endl; unsigned char buff[4]; memcpy(buff, &out, sizeof(unsigned int)); std::cout << buff << std::endl; I'd expect the output of buff to be "test" (with a garbage trailing character because of the lack of '/0') but instead the output is "tset." Obviously changing the order of characters that I'm shifting (3, 2, 1, 0 instead of 0, 1, 2, 3) fixes the problem, but I don't understand the problem. Is memcpy not acting the way I expect? Thanks.
This is because your CPU is little-endian. In memory, the array is stored as: +----+----+----+----+ array | 74 | 65 | 73 | 74 | +----+----+----+----+ This is represented with increasing byte addresses to the right. However, the integer is stored in memory with the least significant bytes at the left: +----+----+----+----+ out | 74 | 73 | 65 | 74 | +----+----+----+----+ This happens to represent the integer 0x74657374. Using memcpy() to copy that into buff reverses the bytes from your original array.
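If the goal is to get "test" back regardless of the host's byte order, extract the bytes with shifts (undoing how out was built) instead of using memcpy. A small sketch (the function names are illustrative):

```cpp
#include <cassert>
#include <cstring>

// Pack four bytes into an int, exactly like the question's shift expression.
unsigned int pack(const unsigned char a[4]) {
    return (a[0] << 24) | (a[1] << 16) | (a[2] << 8) | a[3];
}

// Endian-independent unpacking: undo the shifts instead of memcpy'ing,
// so the resulting byte order is the same on every platform.
void unpack(unsigned int v, unsigned char b[4]) {
    b[0] = (v >> 24) & 0xff;
    b[1] = (v >> 16) & 0xff;
    b[2] = (v >> 8)  & 0xff;
    b[3] =  v        & 0xff;
}
```

memcpy would reproduce the int's in-memory layout (reversed on little-endian CPUs), while the shifts always recover the bytes in the order they were packed.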
1,584,380
1,584,679
In C++, how system implement the buffered stream?
Can someone write some sample code to explain this concept? I know what a buffered stream is used for, but I also would like to know how to implement that. Thanks in advance!
You can look into your platform's implementation, the C++ standard or "Standard C++ IOstreams and Locales" by Angelika Langer and Klaus Kreft. Be prepared for quite a learning curve. Streams are old and a complicated matter. (Francis Glassborow: "I have very few doubts that I/O libraries are amongst the most difficult aspects of any language.")
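For a taste of the mechanics, the buffering idea can be sketched on top of std::streambuf: writes accumulate in a private array, and data only moves to the underlying sink when the array fills up (overflow) or the stream is flushed (sync). This is a toy illustration of the concept, not how any real standard library implements it:

```cpp
#include <cassert>
#include <ostream>
#include <streambuf>
#include <string>

// Toy buffered streambuf: collects characters in buf_ and "flushes"
// them to a std::string sink, counting how many flushes happen.
class BufferedBuf : public std::streambuf {
public:
    std::string sink;
    int flushes;
    BufferedBuf() : flushes(0) { setp(buf_, buf_ + sizeof buf_); }
protected:
    // Called when the put area is full.
    int_type overflow(int_type ch) {
        flushOnce();
        if (ch != traits_type::eof()) {
            *pptr() = traits_type::to_char_type(ch);
            pbump(1);
        }
        return traits_type::not_eof(ch);
    }
    // Called by std::ostream::flush() / std::endl.
    int sync() { flushOnce(); return 0; }
private:
    void flushOnce() {
        if (pptr() != pbase()) {
            sink.append(pbase(), pptr());
            ++flushes;
            setp(buf_, buf_ + sizeof buf_); // reset the put area
        }
    }
    char buf_[8]; // tiny on purpose, so the buffering is visible
};
```

Hooking it up is just std::ostream os(&myBuf); characters written through os only reach sink in chunks, which is the whole point of buffering: fewer, larger transfers to the slow device.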
1,584,421
1,584,478
Can't use DLL (written in C++) in Delphi: The procedure entry point could not be located
I've compiled a DLL in Visual Studio (the source code is in C++, which I barely understand). Here's a piece of Scraper.h: struct SWin { char title[512]; HWND hwnd; }; SCRAPER_API bool ScraperGetWinList(SWin winList[100]); Now I'm trying to use the above function in my Delphi application: type tWin = record title: String; hwnd: HWND; end; function ScraperGetWinList(var WinList: Array of tWin): Boolean; external 'Scraper.dll'; var myWinList: Array [1..100] of tWin; procedure TMainForm.GetWinListButtonClick(Sender: TObject); begin ScraperGetWinList(myWinList); ... The project doesn't compile, and I get the following message: The procedure entry point ScraperGetWinList could not be located in the dynamic link library: Scraper.dll. What am I doing wrong?
From my Linux experience, I'd say that you've encountered the so-called "name mangling" issue. The entry point of your procedure is not called "ScraperGetWinList", but something like "_ZN18ScraperGetWinListEpN4SWin". The thing is that, unlike in C, in C++ the name of the entry point is not the same as the function name. No wonder: assume you have a set of overloaded functions; they must have different entry points in your DLL. That's where name mangling comes into play. The most common solution to this problem is to define the interface of your library in such a way that it uses C linkage; no name mangling will happen with the interface functions then. Note that you don't have to write the whole library in C; you only need to declare the functions so that they emit C-like entry points. Usually it's written like this: extern "C" { SCRAPER_API bool ScraperGetWinList(SWin winList[100]); // More functions } Recompile your library and use it in Delphi without problems. Note that you should also adjust the calling conventions (stdcall or cdecl) so that they match in your C++ header and Delphi code. However, that's best explained in another question.
1,584,502
1,584,553
LoadLibraryW doesn't work while LoadLibraryA does the job
I have written some sample program and DLL to learn the concept of DLL injection. My injection code to inject the DLL to the sample program is as follows (error handling omitted): std::wstring dll(L"D:\\Path\\to\\my\\DLL.dll"); LPTHREAD_START_ROUTINE pLoadLibraryW = (LPTHREAD_START_ROUTINE)GetProcAddress(hKernel32, "LoadLibraryW"); int bytesNeeded = WideCharToMultiByte(CP_UTF8, 0, dll.c_str(), dll.length(), NULL, 0, NULL, NULL); std::vector<byte> dllName(bytesNeeded); WideCharToMultiByte(CP_UTF8, 0, dll.c_str(), dll.length(), (LPSTR)&dllName[0], bytesNeeded, NULL, NULL); // Memory is a class written by me to simplify memory processes. // Constructor takes desired permissions. Memory mem (pid, false, true, false, true, false, false, false, false, false, true, true, true, false); // Ensures deletion of the allocated range. // true / true / false = read and write access, no execute permissions std::tr1::shared_ptr<void> allocated = mem.AllocateBytes(dllName.size(), true, true, false); mem.WriteBytes((unsigned int)allocated.get(), dllName); mem.CreateThread(pLoadLibraryW, allocated.get()); Memory::CreateThread is as follows: void Memory::CreateThread(LPTHREAD_START_ROUTINE address, LPVOID parameter) const { std::tr1::shared_ptr<void> hThread(CreateRemoteThread(m_hProcess.get(), NULL, 0, address, parameter, 0, NULL), CloseHandle); if (hThread.get() == NULL) { throw std::runtime_error("Memory::CreateThread: CreateRemoteThread failed"); } DWORD returned = WaitForSingleObject(hThread.get(), INFINITE); if (returned != WAIT_OBJECT_0) { throw std::runtime_error("Memory::CreateThread: The remote thread did not complete properly"); } } The problem is, that the module isn't loaded. However, when I change the second line to LPTHREAD_START_ROUTINE pLoadLibraryW = (LPTHREAD_START_ROUTINE)GetProcAddress(hKernel32, "LoadLibraryA"); it works (since the test dll has no unicode characters in it's name). How to make it work with LoadLibraryW?
HMODULE WINAPI LoadLibrary( __in LPCTSTR lpFileName ); It takes a TCHAR -- so the argument for LoadLibraryW has to be a wide string; the code above passes the multi-byte form of the argument, which is the form that LoadLibraryA wants. To fix it, drop the WideCharToMultiByte conversion and instead write the wide string itself (that is, (dll.length() + 1) * sizeof(wchar_t) bytes starting at dll.c_str()) into the remote process, then pass that remote address to LoadLibraryW.
1,584,702
1,584,712
Some questions on C++ struct and array
I am relatively new to C++ and am having problems understanding struct. I have a struct declared as follows: struct MyNode { int level; int index; MyNode children[4]; }; However the code fails to compile and reports error C2148: total size of array must not exceed 0x7fffffff bytes. But the following code compiles: struct MyNode { int level; int index; MyNode* children; }; Can I code MyNode as in the first example or is there something that I am missing? Thanks!
This fails to compile, because the compiler needs to know the size of each type. So, what's sizeof(MyNode)? It's sizeof(int) * 2 + sizeof(MyNode): the recursion makes the size impossible to figure out. You need a pointer, as in your second example. Why does this work? Because sizeof(MyNode*) is known: it's the size of an address on the target platform.
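With the pointer version, each node has a fixed, finite size (two ints plus a pointer), and the children live in separately allocated storage. A sketch along these lines (the allocation scheme is illustrative):

```cpp
#include <cstddef>

struct MyNode {
    int level;
    int index;
    MyNode* children; // sizeof(MyNode*) is known, so sizeof(MyNode) is finite

    MyNode() : level(0), index(0), children(NULL) {}
    ~MyNode() { delete[] children; }

    // The four children are allocated outside the node itself,
    // which is what breaks the size recursion.
    void makeChildren() {
        children = new MyNode[4];
        for (int i = 0; i < 4; ++i) {
            children[i].level = level + 1;
            children[i].index = i;
        }
    }
};
```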
1,584,754
1,584,775
How to include lua scripting language into C++ XCode?
I have successfully compiled lua in my mac. It works. It runs. Now, I want to mix it with C++ in XCode. So I looked up in the Linked Library, the usual place to add more libraries in XCode. But of course, lua isn't there. Now, how do I tell XCode the existence of lua so that the program can include lua.h correctly?
Up to you : use the "Header Search Path" and "Library Search Paths" in the properties to add the location of Lua headers and liblua.a OR just copy the Lua headers and liblua.a in your project directory, then add them to your project
1,584,835
1,584,856
In what order do C++ objects passed as arguments to constructors of other objects go out of scope?
When I compile the following code with g++, the object of class A seems not to be destructed when the object of class C is constructed, and the B.ref_a reference is not broken when accessed by the constructor of object of class C: #include <iostream> struct A { A(int aa) { a = aa; } ~A() { std::cout << "A out" << std::endl; } int a; }; struct B { B(const A& a) : ref_a(a) { } ~B() { std::cout << "B out" << std::endl; } const A& ref_a; }; struct C { C(const B& b) { c = b.ref_a.a + 1; } int c; }; int main(void) { C c(B(A(1))); std::cout << c.c << std::endl; } However, is it guaranteed by the C++ language?
Here, the temporary objects are destroyed at the end of the full expression in which they were created, that is, just after the constructor of C has returned. And yes, this is guaranteed by the C++ standard.
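The guarantee can be made visible by logging construction and destruction, as in this trimmed-down version of the question's program (the destructors record to a string instead of printing, and the observable order is: B destroyed, then A, both before the statement after the declaration runs):

```cpp
#include <string>

std::string log_; // records destruction order

struct A {
    A(int aa) : a(aa) {}
    ~A() { log_ += "A out;"; }
    int a;
};

struct B {
    B(const A& a) : ref_a(a) {}
    ~B() { log_ += "B out;"; }
    const A& ref_a;
};

struct C {
    C(const B& b) { c = b.ref_a.a + 1; }
    int c;
};

int run() {
    C c(B(A(1)));          // both temporaries live until the end of this full expression
    log_ += "stmt done;";  // by now B and A have been destroyed, in that order
    return c.c;
}
```

The temporaries are destroyed in reverse order of construction (A(1) is built first, then B), which is why B's destructor runs before A's.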
1,584,956
1,586,277
How to handle execvp(...) errors after fork()?
I do the regular thing: fork() execvp(cmd, ) in child If execvp fails because no cmd is found, how can I notice this error in parent process?
The well-known self-pipe trick can be adapted for this purpose. #include <errno.h> #include <fcntl.h> #include <stdio.h> #include <string.h> #include <sys/wait.h> #include <sysexits.h> #include <unistd.h> int main(int argc, char **argv) { int pipefds[2]; int count, err; pid_t child; if (pipe(pipefds)) { perror("pipe"); return EX_OSERR; } if (fcntl(pipefds[1], F_SETFD, fcntl(pipefds[1], F_GETFD) | FD_CLOEXEC)) { perror("fcntl"); return EX_OSERR; } switch (child = fork()) { case -1: perror("fork"); return EX_OSERR; case 0: close(pipefds[0]); execvp(argv[1], argv + 1); write(pipefds[1], &errno, sizeof(int)); _exit(0); default: close(pipefds[1]); while ((count = read(pipefds[0], &err, sizeof(errno))) == -1) if (errno != EAGAIN && errno != EINTR) break; if (count) { fprintf(stderr, "child's execvp: %s\n", strerror(err)); return EX_UNAVAILABLE; } close(pipefds[0]); puts("waiting for child..."); while (waitpid(child, &err, 0) == -1) if (errno != EINTR) { perror("waitpid"); return EX_SOFTWARE; } if (WIFEXITED(err)) printf("child exited with %d\n", WEXITSTATUS(err)); else if (WIFSIGNALED(err)) printf("child killed by %d\n", WTERMSIG(err)); } return err; } Here's a complete program. $ ./a.out foo child's execvp: No such file or directory $ (sleep 1 && killall -QUIT sleep &); ./a.out sleep 60 waiting for child... child killed by 3 $ ./a.out true waiting for child... child exited with 0 How this works: Create a pipe, and make the write endpoint CLOEXEC: it auto-closes when an exec is successfully performed. In the child, try to exec. If it succeeds, we no longer have control, but the pipe is closed. If it fails, write the failure code to the pipe and exit. In the parent, try to read from the other pipe endpoint. If read returns zero, then the pipe was closed and the child must have exec successfully. If read returns data, it's the failure code that our child wrote.
1,585,056
1,610,085
How to make XCode put required resources in "build" folder?
I am trying out lua script with C++ in Mac OS X. I was finding a way to make the program returning the current working directory. That's no problem with getcwd, but then I came one thing: My foo.lua stays at its initial path only. When I compile program, it is not being copied over to the build/Debug directory. Sure, I can grab my script there, but that's just impractical. XCode or any IDE should carry resources to the build zone. XCode does this automatically with iPhone app, but this seems to be a different case. For this case, how to command XCode to put the respective resources in the build directories? int main (int argc, char * const argv[]) { ... ... luaL_dofile(luaVM,"/Users/yourNameHere/Desktop/LuaSandbox/LetsTryLua/foo.lua"); //typing the whole absolute path here is just ugly and impractical. ... ... printf("working directory: %s", buffer); //output is: working directory: /Users/yourNameHere/Desktop/LuaSandbox/LetsTryLua/build/Debug ... ...
Rather than hard code the path to your Lua script you may want to use the NSBundle APIs to find it: NSBundle * mainNSBundle = [NSBundle mainBundle]; NSString * luaFilePath = [mainNSBundle pathForResource:@"foo" ofType:@"lua" inDirectory:NULL forLocalization:NULL]; luaL_dofile(luaVM,[luaFilePath UTF8String]); This will find the file in the bundle's folder (if you added the "Copy Bundle Resources" build step to your target as suggested).
1,585,137
1,585,164
What happens in assembly language when you call a method/function?
If I have a program in C++/C that (language doesn't matter much, just needed to illustrate a concept): #include <iostream> void foo() { printf("in foo"); } int main() { foo(); return 0; } What happens in the assembly? I'm not actually looking for assembly code as I haven't gotten that far in it yet, but what's the basic principle?
In general, this is what happens:

1. Arguments to the function are stored on the stack, in platform-specific order.
2. A location for the return value is "allocated" on the stack.
3. The return address for the function is also stored on the stack or in a special-purpose CPU register.
4. The function (or actually, the address of the function) is called, either through a CPU-specific call instruction or through a normal jmp or br instruction (jump/branch).
5. The function reads the arguments (if any) from the stack and then runs the function code.
6. The return value from the function is stored in the specified location (stack or special-purpose CPU register).
7. Execution jumps back to the caller and the stack is cleared (by restoring the stack pointer to its initial value).

The details of the above vary from platform to platform and even from compiler to compiler (see e.g. the STDCALL vs CDECL calling conventions). For instance, in some cases CPU registers are used instead of storing stuff on the stack. The general idea is the same, though.
1,585,188
1,588,486
SDL_Mixer sound problems
Basic Info: Programming Language - C++ Platform - Windows Audio Formats - wav and mid I recently finished a game and was fooling around with figuring out the best way to upload it to a file hosting site. I eventually decided on using 7zip's self-extracting feature. However, I think the mistake I made was that instead of just copying what I needed to another folder and zipping that up for the distribution (i.e., not copying source files, etc.) I rearranged the actual folder that held all of my source files etc. and split it into 2 sub folders for the C++ files, and then everything else (that folder being the one that got zipped up.) I tested downloading it and playing it and it worked fine. However, I went back because I decided to change the background music and that's when the problem started happening. To sum the problem up, Mix_PlayMusic() is being called and is working correctly. However, for some reason no sound is playing (and neither are any of the sound effects called from Mix_PlayChannel()). The odd thing is that you can hear the music when Mix_FadeOutMusic() is called. I also have a sound toggling feature, but after thorough testing I've come to the conclusion that it isn't the problem. I finally decided to create a completely new project and just bring all of the files I needed into that project in the same "organization" that they were in originally. However, the problem is still there. I have no idea what's wrong. The files are being loaded in fine, it's just that when the music is supposed to be playing (and according to testing it is), it's not playing. This also applies to sound effects. Edit: I actually wrote a test for each game loop for whether the music is playing and apparently the music is playing. It's just that for some reason it isn't being heard.
This could be a number of things. It could be an issue with the SDL_Mixer library you have, so you could try getting it again to rule that out. Your volume may have somehow got set to zero somewhere, so I would check the volume as a test. And the final thought would be that the source sound file you are playing is incompatible in some way (not likely if you can play it in another sound player, but possible). Besides those suggestions I don't believe I can help you any further with the data you have provided.
1,585,302
1,585,332
Which STL reference book would you recommend?
I am considering to put one of the following as a reference on my desk (as I am sick and tired to google every time I have a STL question): The C++ Standard Library: A Tutorial and Reference STL Tutorial and Reference Guide: C++ Programming with the Standard Template Library Generic Programming and the STL: Using and Extending the C++ Standard Template Library Using the STL: The C++ Standard Template Library (why is this guy so overpriced -- $110?)
All of Scott Meyers' books are excellent, including "Effective STL". It's not a handbook or a tutorial, but worth having.
1,585,515
1,585,522
What does typedef do in C++
typedef set<int, less<int> > SetInt; Please explain what this code does.
This means that whenever you create a SetInt, you are actually creating an object of set<int, less<int> >. For example, it makes the following two pieces of code equivalent: SetInt somevar; and set<int, less<int> > somevar;
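A small sketch showing the alias in use (the function is illustrative): a SetInt and a spelled-out set<int, less<int> > are the same type, so they are accepted interchangeably, with no conversion involved.

```cpp
#include <cassert>
#include <functional>
#include <set>

typedef std::set<int, std::less<int> > SetInt;

// A SetInt *is* a std::set<int, std::less<int> >; the typedef only
// introduces a shorter name, not a new type.
int sumUpTo(const SetInt& s, int limit) {
    int sum = 0;
    for (SetInt::const_iterator it = s.begin(); it != s.end() && *it <= limit; ++it)
        sum += *it;
    return sum;
}
```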
1,585,674
1,590,924
Subclass of QGraphicsLayoutItem for stretchers?
The specializations of QGraphicsLayout (e.g. QGraphicsLinearLayout) include an insertStretch method. What kind of object does the QGraphicsLinearLayout::insertStretch method insert into the list of items managed by the layout? Better asked: what type of object is returned by the QGraphicsLayout::itemAt method when called for a stretch position?
I've never investigated this, so I don't know, but if you are truly curious, you could ask for an object at that position. Assuming it doesn't return NULL, you could then work your way through the meta information and find out quite a bit of information about it. I wouldn't be too surprised, however, if it were a stock QWidget (empty, drawing nothing).
1,585,708
1,585,711
Copy Constructor and default constructor
Do we have to explicitly define a default constructor when we define a copy constructor for a class?? Please give reasons. eg: class A { int i; public: A(A& a) { i = a.i; //Ok this is corrected.... } A() { } //Is this required if we write the above copy constructor?? }; Also, if we define any other parameterized constructor for a class other than the copy constructor, do we also have to define the default constructor?? Consider the above code without the copy constructor and replace it with A(int z) { z.i = 10; } Alrite....After seeing the answers I wrote the following program. #include <iostream> using namespace std; class X { int i; public: //X(); X(int ii); void print(); }; //X::X() { } X::X(int ii) { i = ii; } void X::print() { cout<<"i = "<<i<<endl; } int main(void) { X x(10); //X x1; x.print(); //x1.print(); } ANd this program seems to be working fine without the default constructor. Please explain why is this the case?? I am really confused with the concept.....
Yes. Once you explicitly declare absolutely any constructor for a class, the compiler stops providing the implicit default constructor. If you still need the default constructor, you have to explicitly declare and define it yourself. P.S. It is possible to write a copy constructor (or conversion constructor, or any other constructor) that is also default constructor. If your new constructor falls into that category, there's no need to provide an additional default constructor anymore :) For example: // Just a sketch of one possible technique struct S { S(const S&); S(int) {} }; S dummy(0); S::S(const S& = dummy) { } In the above example the copy constructor is at the same time the default constructor.
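This also answers the follow-up: the test program compiles because X x(10); only needs the int constructor; it is the commented-out X x1; that would require the no-longer-generated default constructor. A compact illustration:

```cpp
#include <cassert>

// A user-declared constructor suppresses the implicitly generated
// default constructor, so X can no longer be default-constructed.
struct X {
    int i;
    explicit X(int ii) : i(ii) {}
};
// X x1;   // would not compile: no default constructor available
```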
1,586,286
1,586,298
C++ fork() and execv() problems
I am kind of newbie on C++, and working on a simple program on Linux which is supposed to invoke another program in the same directory and get the output of the invoked program without showing output of the invoked program on console. This is the code snippet that I am working on: pid_t pid; cout<<"General sentance:"<<endl<<sentence<<endl; cout<<"==============================="<<endl; //int i=system("./Satzoo"); if(pid=fork()<0) cout<<"Process could not be created..."<<endl; else { cout<<pid<<endl; execv("./Satzoo",NULL); } cout<<"General sentance:"<<endl<<sentence<<endl; cout<<"==============================="<<endl; One of the problem I encounter is that I am able to print the first two lines on console but I cant print the last two lines. I think the program stops working when I invoke the Satzoo program. Another thing is that this code invokes Satzoo program twice, I dont know why? I can see the output on screen twice. On the other hand if I use system() instead of execv(), then the Satzoo works only once. I haven't figured out how to read the output of Satzoo in my program. Any help is appreciated. Thanks
You aren't distinguishing between the child and the parent process after the call to fork(). So both the child and the parent run execv() and thus their respective process images are replaced. You want something more like: pid_t pid; printf("before fork\n"); if((pid = fork()) < 0) { printf("an error occurred while forking\n"); } else if(pid == 0) { /* this is the child */ printf("the child's pid is: %d\n", getpid()); execv("./Satzoo",NULL); printf("if this line is printed then execv failed\n"); } else { /* this is the parent */ printf("parent continues execution\n"); }
1,586,368
1,586,382
Assign a C++ out reference to something that was destroyed?
So I'm looking through some code, and I see this: class whatever { public: void SomeFunc(SomeClass& outVal) { outVal = m_q.front(); m_q.pop(); } private: std::queue<SomeClass> m_q; }; This doesn't seem like outVal would be a valid reference any more... However, it appears to work. I've seen this in other code before too, is this valid? Thanks
Remember that references are not like pointers: they cannot be rebound after their creation. That means that if I do int a; int b; int &c = a; Then throughout that scope, an assignment to c will actually mean an assignment to a. So, int a = 2; { int b = 3; int &c = a; c = b; b = -5; } printf("%d",a); // prints "3". So, in this case, the reference is not being pointed at a deleted object. Rather, the return value of m_q.front() is copied into whatever outVal references, via the assignment operator.
1,586,393
1,586,462
Custom C++ Preprocessor / Typeful Macros
Having seen the advantages of metaprogramming in Ruby and Python, but being bound to lower-level languages like C++ and C for actual work, I'm thinking of manners by which to combine the two. One instance comes in the simple problem for sorting lists of arbitrary structures/classes. For instance: struct s{ int a; int b; }; vector<s> vec; for(int x=0;x<10;x++){ s inst; inst.a = x; inst.b = x+10; vec.push_back(inst); } Ultimately, I'd like to be able to sort inst arbitrarily with a minimal amount of boilerplate code. The easiest way I can see to do this is to make use of STL's sort: sort(vec.begin(),vec.end()); Yet this requires me to write a method that can compare "struct s"s. What I'd rather do is: sort(vec,a ASC,b DESC); Which is very clearly not valid C++. What is the best way to accomplish my dream? If I had some sort of typeful macro, that would reveal to me what the type of a vector's elements were, then it would be trivial to write C preprocessor macros to create the function required to do the sorting. The alternative seems to be to write my own preprocessor. This works well, up until the point where I have to deduce the type of "vec" again. Is there an easy way to do this? Context: Less code = less bugs, programming competitions.
For the above, you can use Boost.Lambda to write your comparison function inline, just like a Python lambda: using namespace boost::lambda; std::sort(vec.begin(), vec.end(), (_1 ->* &s::a) < (_2 ->* &s::a)); This of course assumes that you are sorting by a. If the expressions you are looking for are far more complex, you are better off writing a separate function; even in languages like Python and Ruby with native support for closures, complex closures become quite unreadable anyway. Warning: The code above is untested. Hope this helps!
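With a C++11-capable compiler (not available when this answer was written), a plain lambda gets very close to the sort(vec, a ASC, b DESC) wish without any library at all:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct s {
    int a;
    int b;
};

// Sorts by a ascending, then b descending; the comparator is written
// inline, so no separate named comparison function is needed.
void sortByAThenBDesc(std::vector<s>& vec) {
    std::sort(vec.begin(), vec.end(), [](const s& x, const s& y) {
        if (x.a != y.a) return x.a < y.a;   // a ASC
        return x.b > y.b;                   // b DESC
    });
}
```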
1,586,584
1,586,637
Why won't this code output to a file?
I've debugged my program and the arrays seem to be allocated well. However for some strange and stupid reason, the code doesn't output the arrays into the file. Please help me spot my bug or such! #include <iostream> #include <algorithm> #include <string> #include <fstream> using namespace std; void sRecSort(string *n, int *s, string *e, int len){ for (int i = 0; i < len; i++){ for (int j = i + 1; j < len; j++){ if (s[j] < s[i]){ swap(s[i],s[j]); swap(e[i],e[j]); swap(n[i],n[j]); } } } } void printLowestRecord(char inFileName[]){ string tempSubString = " "; string names[12] = {" "}; int grades[12] = {0}; string emails[12] = {""}; int firstSpace = -1; int secondSpace = -1; ifstream inputMe(inFileName); while (!inputMe.eof()){ for (int i = 0; i < 12; i++){ getline(inputMe, tempSubString); for (int w = 0; w < strlen(tempSubString.c_str()); w++){ if (tempSubString[w] != ' '){ continue; } else{ if (firstSpace == -1){ firstSpace = w; } else if (firstSpace != -1 && secondSpace == -1){ secondSpace = w; names[i] = tempSubString.substr(0, firstSpace); grades[i] = atoi((tempSubString.substr(firstSpace + 1, secondSpace - (firstSpace + 1))).c_str()); emails[i] = tempSubString.substr(secondSpace + 1, tempSubString.length() - (secondSpace + 1)); break; } } } firstSpace = -1; secondSpace = -1; } } sRecSort(names, grades, emails, 12); inputMe.close(); } void sortFileRecords(char inFileName[], char outFileName[]){ ifstream inputFile(inFileName); ofstream outputFile(outFileName); string tempSubString = " "; string names[12] = {" "}; int grades[12] = {0}; string emails[12] = {" "}; int firstSpace = -1; int secondSpace = -1; while (!inputFile.eof()){ for (int i = 0; i < 12; i++){ getline(inputFile, tempSubString); for (int w = 0; w < strlen(tempSubString.c_str()); w++){ if (tempSubString[w] != ' '){ continue; } else{ if (firstSpace == -1){ firstSpace = w; } else if (firstSpace != -1 && secondSpace == -1){ secondSpace = w; names[i] = tempSubString.substr(0, firstSpace); grades[i] = 
atoi((tempSubString.substr(firstSpace + 1, secondSpace - (firstSpace + 1))).c_str()); emails[i] = tempSubString.substr(secondSpace + 1, tempSubString.length() - (secondSpace + 1)); break; } } } firstSpace = -1; secondSpace = -1; } } sRecSort(names, grades, emails, 12); for (int q = 0; q < 12; q++){ outputFile << names[q] << " " << grades[q] << " " << emails[q] << endl; } inputFile.close(); outputFile.close(); } int main (int argc, char * const argv[]) { printLowestRecord("gradebook.txt"); sortFileRecords("gradebook.txt", "sortedGradebook.txt"); return 0; } Here's my data: Sean 80 sean@csi.edu James 100 james@yahoo.com Issac 99 issac@mail.csi.edu Thomas 88 tom@cix.csi.edu Alice 78 alice@myclass.com Jone 75 jone@hotmail.com Zach 89 zach@yahoo.com Mark 86 mark@gmail.com Nick 79 nick@bmail.com Amy 95 amy@hotmail.com Claire 89 claire@yahoo.com Eve 97 eve@nytimes.com
The code seems to be correct so far, I think your test data is wrong. If I test with this input file: a 10 c d 2 f g 9 i j 4 l m 8 o p 6 r s 7 u v 8 x y 6 a b 10 d e 5 g h 12 j The output file is like this, which is the expected behaviour: d 2 f j 4 l e 5 g y 6 a p 6 r s 7 u m 8 o v 8 x g 9 i b 10 d a 10 c h 12 j So either your test data is wrong or there is some additional error handling you have to do (file can't be opened etc.). By the way, this part of your code else if (firstSpace != -1 && secondSpace == -1){ can be reduced to else { because you have a break statement in there and set secondSpace back to -1 right after it. EDIT: Your data also works fine - output is this: Jone 75 jone@hotmail.com Alice 78 alice@myclass.com Nick 79 nick@bmail.com Sean 80 sean@csi.edu Mark 86 mark@gmail.com Thomas 88 tom@cix.csi.edu Zach 89 zach@yahoo.com Claire 89 claire@yahoo.com Amy 95 amy@hotmail.com Eve 97 eve@nytimes.com Issac 99 issac@mail.csi.edu James 100 james@yahoo.com
1,586,590
1,586,638
Instantiating objects and object members
For some reason the following doesn't crash like my program does, but I'm pretty sure it's similar in design. For one, the output's not correct. It outputs something similar to: 0x537ff4 5471612 While the main program outputs (nil) for the pointer address. The key to the problem might be display_ in Drv. Here's the code: #include <iostream> #include "debug.h" class LCDText { public: int rows_; LCDText() { rows_ = 10; }; }; class Generic { LCDText *lcdText_; public: Generic(LCDText *lcdText) { lcdText_ = lcdText; }; void Setup() { Error("%p %d", lcdText_, lcdText_->rows_); } }; class Display : public LCDText { Generic *visitor_; public: Display(Generic *visitor) { visitor_ = visitor; }; }; class Drv : public Generic { Display *display_; public: Drv() : Generic((LCDText *)display_) { display_ = new Display((Generic *)this); }; ~Drv() { delete display_; }; }; int main() { Drv drv; drv.Setup(); return 0; }
This code: Drv() : Generic((LCDText *)display_) { display_ = new Display((Generic *)this); }; first runs the parent class's ctor, with a yet-uninitialized value of display_, then independently sets display_, but, too late to change the parent class. So the pointer held by the parent class will never be set correctly. I guess you need to add a protected setter method (or make the parent-class-held pointer member itself protected).
1,586,749
1,587,158
What is the difference between _itoa and itoa?
Visual Studio is yelling at me about using itoa() saying to use _itoa() instead? It looks to me like they are the same function. What gives?
A C run time library implementation is not supposed to introduce names that aren't in the standard unless they follow a certain naming convention (like starting with an underscore). The earlier versions of Microsoft's compiler didn't follow this rule particularly closely, but over time, Microsoft has been moving toward making their implementation more standards compliant. So functions they used to supply that intruded on the user's namespace have been re-implemented under names reserved for the implementation, and the old names have been deprecated. If _CRT_NONSTDC_NO_WARNINGS is defined, the MS compiler won't complain about the itoa() function being deprecated. But it will still complain about it being unsafe (you have to define _CRT_SECURE_NO_WARNINGS to quiet that warning). Or use the safer version of the function (_itoa_s()), which provides the function with the destination buffer size. Both _itoa() and itoa() resolve to the exact same function in the library down to the same address - there is no difference except in the name.
1,586,773
1,586,817
Checking if a Socket has closed in C++
I have a small application that redirects the stdout/in of another app (usually command prompt or bash for windows). The problem is that if the connection is interrupted my process has no idea and it never closes because of this line: WaitForSingleObject(childProcess.hThread, INFINITE) I was thinking of having a loop with something like this: while(true) { if(ProcessIsDead(childProcess.hThread)) // close socket and exit if(SocketIsDisocnnected(hSocket)) // close process and exit } What functions would I use to accomplish this? For ProcessIsDead I know there is a winapi for getting the exit code but I don't know how to check if the socket is disconnected without calling recv (which I can't do)
Note: I'm assuming the connected socket is communicating over a network link, because I'm not sure how it would become disconnected if it were a local pipe, except by one process or the other dying. Use the select() function in the socket API to query the read status of the socket. select() will tell you the socket is "readable" if any of the following is true: If listen has been called and a connection is pending, accept will succeed. Data is available for reading (includes OOB data if SO_OOBINLINE is enabled). Connection has been closed/reset/terminated. So, if select() says that the socket is readable, it is safe to call recv(), which will give you WSACONNRESET or return 0 bytes if the connection was reset or closed respectively. select() takes a 'timeout' parameter, which you can set to an appropriate time interval or zero if you want to poll the socket state. Information on the select() function is at http://msdn.microsoft.com/en-us/library/ms740141(VS.85).aspx
1,586,787
1,586,815
What is paging effect in C++?
I came across this as I was trying to learn arrays and vectors in C++. What is the "paging effect" mentioned in the post? Also, just to check my own understanding, I think the vector uses more time because of the dynamic memory allocation. Am I right? Additional question: but with vector<int> arr( 10000 ), isn't there already enough memory for 10000 ints allocated? Or, to put it this way, does arr still grow if all I do is iterate through each element and initialize them?
Vector uses dynamic allocation if you use push_back(), but you can force it to pre-allocate memory with reserve(). (And vector<int> arr(10000) already allocates and zero-initializes all 10000 elements up front, so iterating over them and assigning values never makes the vector grow.) Checked builds (common in debug libraries) also check the bounds for vector operations, which can slow them down in debug mode. Release builds should be no slower than raw C. Paging means moving memory out to disk when physical memory is full. You have to be careful with timing if you think memory is being paged out. A common technique is to run the task multiple times and reject the longest times. Edit: You should (almost) never use the raw 'C' type instead of the STL for efficiency. The people who wrote the STL are both really smart and really care about performance. Used properly it should never be worse than 'C' and is often better. The same goes double for using STL algorithms rather than your own hand-rolled loops.
1,586,795
1,586,887
Why does write() to pipe exit program when pipe writes to stdout?
I have a server application that writes to a popen("myCommand", "w") file descriptor in a separate thread, and if the command passed to popen() results in any output to stdout or stderr, my application exits. However, this is only an issue when my server application was invoked via inetd; if I use ssh to launch the server, it does not have this issue. Similarly, when my server application reads from a popen("myCommand2", "r") file descriptor in a separate thread and the command passed to popen() results in any output to stderr (stdin is going to my pipe), the application exits. Again, this only occurs with inetd summoning, not ssh summoning.
You need to close all existing file descriptors of the process before opening the pipe, then do the I/O redirection. That's because when launched from inetd the process runs as a daemon, with its standard descriptors attached to the network socket.
1,586,812
1,618,247
Why can't gdb attach to server application summoned with inetd?
I have a server application that can be summoned for the client using inetd. However, if I try to attach to the server process that was launched with inetd, I get the following response: ptrace: Operation not permitted. gdb --annotate=3 /my/app/here <processId> Current directory is /usr/local/bin/ GNU gdb 6.8 Copyright (C) 2008 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "i686-pc-linux-gnu"... Reading symbols from /usr/local/bin/flumed...done. Using host libthread_db library "/lib/tls/libthread_db.so.1". Attaching to program: /my/app/here, process <processId> ptrace: Operation not permitted. /usr/local/bin/<processId>: No such file or directory. (gdb)
Apparently, things get funky with inetd where the userid is not root. You end up with a process with weird permissions. For example, you are unable to read /proc/self/exe even though the permissions are 777. I suspect this issue is more of the same. Even though I'm the userid of the process, I don't have permissions. Using root for gdb is a work around.
1,586,834
1,586,837
Escape sequence for ? in c++
I was looking at the escape sequences for characters in strings in c++ and I noticed there is an escape sequence for a question mark. Can someone tell me why this is? It just seems a little odd and I can't figure out what ? does in a string. Thanks.
It's to keep a question mark from getting misinterpreted as part of a trigraph. For example, in "What??!" the ??! would be interpreted as the | character. So, you have to escape the question marks as follows: "What\?\?!" Example courtesy of http://msdn.microsoft.com/en-us/library/bt0y4awe%28VS.80%29.aspx
1,586,907
1,586,952
C++ Simple file reading
I have a file,named f1.txt, whose contents are 75 15 85 35 60 50 45 70 Here is my code to read each integer and print them. #include <iostream> #include <fstream> using namespace std; int main(int argc, char *argv[]) { fstream file("f1.txt", ios::in); int i; while(!file.eof()) { file >> i; cout << i << " "; } return 0; } But when I compiled and run the program, the output is 75 15 85 35 60 50 45 70 70. Why it is reading the last integer twice ? Any clues ?
The eof() flag is only set after a read attempt fails, so the loop body runs one extra time: the final extraction fails, i keeps its previous value (70), and it is printed again. Test the stream state directly instead: while(file >> i) cout << i << " ";
1,587,005
1,587,011
Unusual HTTP Response in Basic C++ Socket Programming
I've got a basic HTTP client set up in C++, which works ok so far. It's for a school assignment, so there's lots more to do, but I'm having a problem. I use the recv() function in a while loop, to repeatedly add pieces of the response to my response buffer, and then output that buffer each time. The problem is, at the end of each piece of the response, the HTTP Request is getting tacked on as well. For example, the response will be a chunk of the page's source code, followed by "GET / HTTP/1.1...", followed by the next chunk, and then the "GET..." again, and so on. Here's my relevant code: // Prepare request char request[] = "HEAD /index.html HTTP/1.1\r\nHOST: www.google.com\r\nCONNECTION: close\r\n\r\n"; // Send request len = send(sockfd, request, sizeof(request), 0); // Write/output response while (recv(sockfd, buf, sizeof(buf), 0) != 0) { // Read & output response printf("%s", buf); }
The buffer isn't null terminated, which is required for strings in C++. When you see the "extra GET", you are seeing memory that you shouldn't be because the stdlib tried to print your buffer, but never found a '\0' character. A quick fix is to force the buffer to be terminated: int n = 1; while (n > 0) { n = recv(sockfd, buf, sizeof(buf) - 1, 0); if (n > 0) { // null terminate the buffer so that we can print it buf[n] = '\0'; // output response printf("%s", buf); } } (Note the sizeof(buf) - 1: it leaves room for the terminator even when the buffer fills completely.)
1,587,034
1,587,050
std::map::iterator crashes program on increment
What could cause this? Here's the stack trace: #0 0x0645c0f5 in std::_Rb_tree_increment (__x=0x83ee5b0) at ../../../../libstdc++-v3/src/tree.cc:69 #1 0x0805409a in std::_Rb_tree_iterator<std::pair<std::string const, Widget*> >::operator++ ( this=0xbffff144) at /usr/lib/gcc/i586-redhat-linux/4.4.1/../../../../include/c++/4.4.1/bits/stl_tree.h:192 #2 0x08053d32 in Generic::StartLayout (this=0x8287d68) at Generic.cpp:195 #3 0x0804f6e1 in LCDControl::ConfigSetup (this=0xbffff26c) at LCDControl.cpp:91 #4 0x0804ed7c in LCDControl::Start (this=0xbffff26c, argc=1, argv=0xbffff404) at LCDControl.cpp:21 #5 0x08050964 in main (argc=1, argv=0xbffff404) at Main.cpp:11 And here's the code: for(std::map<std::string,Widget *>::iterator w = widgets_.begin(); w != widgets_.end(); w++){ if( w->second->GetType() & WIDGET_TYPE_BAR) { w->second->SetupChars(); } w->second->Start(); } Edit: This next problem is related, so I won't open a whole new question. I'll leave the answer acceptance like it is. I just need to know something. I have two iterators, one main and one within that main after a function call. They both relate to the same map. Well, the one inside gets all corrupted, and the main one's loop stops iterating. Here's the code. 
Here's StartLayout: void Generic::StartLayout() { Error("StartLayout: %s", key.c_str()); for(std::map<std::string,Widget *>::iterator w = widgets_.begin(); w != widgets_.end(); w++){ Error("Starting widget %s", w->first.c_str()); if( w->second->GetType() & WIDGET_TYPE_SPECIAL) { w->second->SetupChars(); } w->second->Start(); } } And here's SetupChars(): void WidgetGif::SetupChars() { Error("SetupChars <%s> <%s>", name_.c_str(), widget_base_.c_str()); Error("Size of widgets: %d", visitor_->Widgets().size()); std::map<std::string, Widget *> widgets = visitor_->Widgets(); for(std::map<std::string, Widget *>::iterator ii=visitor_->Widgets().begin(); ii != visitor_->Widgets().end(); ii++) { Error("<%s> Widget base %s == %s", ii->first.c_str(), ii->second->GetWidgetBase().c_str(), widget_base_.c_str()); if(ii->second->GetWidgetBase() == widget_base_ && ((WidgetGif *)ii->second)->HasChars()) { Error("Using chars from %s", ii->first.c_str()); for(unsigned int i = 0; i < rows_ * cols_; i++ ) { if(i >= visitor_->GetLCDText()->CHARS) { Error("1) GIF too large: %s, %d", name_.c_str(), visitor_->GetLCDText()->CHARS); if(update_) delete update_; update_ = new Property(visitor_, section_, "", new Json::Value("-1")); return; } ch_[i] = ((WidgetGif *)widgets[ii->first])->GetChars()[i]; } hasChars_ = true; return; } } // It goes on, but I snipped it here. } And this is what happens: StartLayout: display_qt Starting widget widget_gif_american_flag:layout_american_flag:0 SetupChars <widget_gif_american_flag:layout_american_flag:0> <layout_american_flag> Size of widgets: 5 <widget_gif_american_flag:layout_american_flag:1> Widget base layout_american_flag == layout_american_flag <widget_gif_american_flag:layout_american_flag:4> Widget base layout_american_flag == layout_american_flag <(n (n ��S> Widget base ГS == layout_american_flag ^C Last edit: I figured it out. I just needed a copy of the original map for the new iterator.
There could be quite a few reasons for that. For one, it may be that GetType or SetupChars or Start do something that causes your map to change - which would invalidate the current iterator (note that using operator[] on the map, even just to read the value, is technically a mutating operation, and can cause a crash with iterator debugging enabled!). Alternatively, your map could be corrupted in memory by some code that executed before, e.g. because of a buffer overrun overwriting part of the map's tree.
1,587,252
1,587,272
What is a popular, multi-platform, free and open source socket library
Is there any free open source library (in C/C++) for sockets that is widely used and supports a wide range of operating systems (Windows, Unix/Linux, FreeBSD, etc.)? Just like pthreads. Otherwise the only solution left would be to write a socket wrapper for each operating system. Or would writing a wrapper against Winsock and the GNU C sys/socket.h libraries be enough? Wouldn't it be possible for me to implement it against the socket library provided with GNU C? GNU C is available for a wide range of platforms, so my code would work on all those platforms?
I believe both the Apache Portable Runtime and GTK+'s GLib libraries have socket APIs. Since your question is tagged c and c++ I suspect you really want C++-centric answers, but both of these are good as pure C libraries.
1,587,410
1,587,413
Delphi: Access violation after calling function from external DLL (C++)
There's a function, written in C++ and compiled as DLL, which I want to use in my Delphi application. Scraper.cpp: SCRAPER_API bool ScraperGetWinList(SWin winList[100]) { iCurrWin=0; memset(winList,0,100 * sizeof(SWin)); return EnumWindows(EnumProcTopLevelWindowList, (LPARAM) winList); } Scraper.h: #ifdef SCRAPER_EXPORTS #define SCRAPER_API __declspec(dllexport) #else #define SCRAPER_API __declspec(dllimport) #endif struct SWin { char title[512]; HWND hwnd; }; extern "C" { SCRAPER_API bool ScraperGetWinList(SWin winList[100]); } This is how I declare the function in the Delphi application: type tWin = record Title: Array [0..511] of Char; hWnd: HWND; end; tWinList = Array [0..99] of tWin; function ScraperGetWinList(var WinList: tWinList): Boolean; stdcall; external 'Scraper.dll'; The function works, but when it's finished, I receive Debugger Fault Notification: Project ... faulted with message: ''access violation at 0x0012f773: write of address 0xffffffc0'. Process Stopped. Use Step or Run to continue. If I add __stdcall (after SCRAPER_API bool) in Scraper.cpp and Scraper.h, then the Delphi application doesn't start at all: The procedure entry point ScraperGetWinList could not be located in the dynamic link library Scraper.dll.
You need to put __stdcall after bool. The complete declaration, after all macros expand, should look like this: extern "C" { __declspec(dllexport) bool __stdcall ScraperGetWinList(SWin winList[100]); } EDIT: Looks like you'll also need a .def file there. It's a file that lists every function exported in the DLL, and in this case it's needed only to force C++ compiler not mangle the exported names. Contents would be this: EXPORTS ScraperGetWinList I'm not sure which C++ compiler you're using, but normally you'd just specify the .def file along with .cpp; for example, the following works for VC++: cl.exe foo.cpp foo.def Also, you will need to tell Delphi to use stdcall as well, by inserting stdcall keyword right before external in your Delphi function declaration.
1,587,521
1,587,589
Keep windows trying to read a file
I'm working on a sort of encapsulation of the Windows filesystem. When the user requests to open a file, Windows calls my driver to provide the data. In normal operation the driver returns the file contents, which are cached. However, in some cases the real file is not cached and I need to download it from the network. The question is whether it's possible to keep Windows trying to read the file without blocking either the entire drive operation or the software which opened the file, giving the user a chance to cancel the opening process. At first, I tried to block the driver until data is available. This solution is the most straightforward to implement, but the user experience isn't the best. Moreover, relying on the network transfer isn't a good idea: the transfer could take a long time, and the driver would be blocked all that time. The second way, which I have implemented, consists in returning data only when the file is cached; when the file isn't available, I tell Windows that the file has size 0 and download the file in a background process. With this, the driver doesn't block Windows, and the user experience improves, but the user needs to open the file N times until the data is available. I think the best solution would be to return Windows a message like "No data available, try again in 5 secs". I think this could be achieved if the driver returned the appropriate error code to Windows, but the error list is long and the names aren't always as descriptive as you'd want. Do you have any advice on implementing this? Thanks in advance.
The behavior you implement is correct for a driver. The responsibility for handling slow I/O lies at a higher level. For instance, Windows Explorer is very careful in NOT trying to retrieve even a single byte from any file, relying purely on metadata. However, do not return a failure code when you're busy. It's that failure which would force users to repeatedly open a file.
1,587,716
1,591,537
How can I get the IP address of a network printer given the port name using the Win32 API?
How can I get the IP address of a network printer given the port name, using win32 API? I tried looking into the PRINTER_INFO_* structs, but it seems it is not present there.
I don't think there's a standard way to get the IP address. There are probably different incompatible implementations of network port monitors. For my network printer, the IP address is part of the port name (e.g., IP_192_168.1.104). If it's of that form, then you might be able to parse it out, but I don't think this is universal. Using EnumPorts you can determine if it's a network printer, but I still don't see a way to get the IP address.
1,588,131
1,588,289
C++ Array size x86 and for x64
Simple question: I'm writing a program that needs to open huge image files (8kx8k) but I'm a little bit confused about how to initialize the huge arrays to hold the images in C++. I've been trying something like this: long long SIZE = 8092*8092; ///8096*8096 double* array; array = (double*) malloc(sizeof(double) * SIZE); if (array == NULL) { fprintf(stderr,"Could not allocate that much memory"); } But sometimes my NULL check does not catch that the array was not initialized, any idea why? Also I can't initialize more than 2 or 3 arrays, even when running on an x64 machine with 12 GB of RAM, any idea why? I would really prefer not to have to work with sections of the array instead. Any help is welcome. Thanks.
Are you compiling your application as a 32-bit application (the default in Visual Studio, if that's what you're using), or as a 64-bit application? You shouldn't have troubles if you build it as a 64-bit app. A 32-bit process has only 2 GB of user address space (up to 4 GB with /LARGEADDRESSAWARE) no matter how much RAM the machine has, and each of your arrays needs roughly 500 MB of contiguous address space - so two or three of them, plus fragmentation, exhaust it. malloc allocates (reserves memory and returns a pointer), calloc initializes (writes all zeros to that memory).
1,588,665
1,588,714
Forcing an error when a function doesn't explicitly return a value on the default return path?
Is there a way, in VC++ (VSTS 2008), to force a compiler error for functions that do not explicitly return a value on the default return path (or any other quick way to locate them)? On the same issue, is there any guarantee as to what such functions actually return?
The warning for this case is C4715 ("not all control paths return a value"). You can use #pragma warning to force a specific warning to be treated as an error: Example: #pragma warning( error: 4715 ) will treat warning 4715 as an error. As for what such functions actually return: nothing is guaranteed - flowing off the end of a value-returning function is undefined behavior; in practice you typically get whatever happens to be in the return register.
1,588,788
1,590,534
Wrapping C++ class API for C consumption
I have a set of related C++ classes which must be wrapped and exported from a DLL in such a way that it can be easily consumed by C / FFI libraries. I'm looking for some "best practices" for doing this. For example, how to create and free objects, how to handle base classes, alternative solutions, etc... Some basic guidelines I have so far is to convert methods into simple functions with an extra void* argument representing the 'this' pointer, including any destructors. Constructors can retain their original argument list, but must return a pointer representing the object. All memory should be handled via the same set of process-wide allocation and free routines, and should be hot-swappable in a sense, either via macros or otherwise.
For each public method you need a C function. You also need an opaque pointer to represent your class in the C code. It is simpler to just use a void* though you could build a struct that contains a void* and other information (For example if you wanted to support arrays?). Fred.h -------------------------------- #ifdef __cplusplus class Fred { public: Fred(int x,int y); int doStuff(int p); }; #endif // // C Interface. typedef void* CFred; // // Hide the extern "C" braces from a plain C compiler; the declarations // themselves are legal in both languages. #ifdef __cplusplus extern "C" { #endif // // Need an explicit constructor and destructor. CFred newCFred(int x,int y); void delCFred(CFred); // // Each public method. Takes an opaque reference to the object // that was returned from the above constructor plus the method's parameters. int doStuffCFred(CFred,int p); #ifdef __cplusplus } #endif The implementation is trivial. Convert the opaque pointer to a Fred and then call the method. CFred.cpp -------------------------------- // Functions implemented in a cpp file. // But note that they were declared above as extern "C" this gives them // C linkage and thus are available from a C lib. CFred newCFred(int x,int y) { return reinterpret_cast<void*>(new Fred(x,y)); } void delCFred(CFred fred) { delete reinterpret_cast<Fred*>(fred); } int doStuffCFred(CFred fred,int p) { return reinterpret_cast<Fred*>(fred)->doStuff(p); }
1,588,866
1,591,746
Gtkmm - "Gtk::DrawingArea" in "Gtk::ScrolledWindow"?
I am making a GUI program using "gtkmm". I would like to draw some graphics to the "Gtk::DrawingArea" widget but I want that widget to be "resizable", so when I draw let's say a line from "(0, 0)" to "(50, 50)" pixel - the drawing area should be resized to a square of size "(50, 50)"; and when I for example draw a line from "(0, 0)" to let's say "(100, 70)" pixel - the drawing area should be resized to a rectangle of size "(100, 70)". And to do this - I put the "Gtk::DrawingArea" widget into the "Gtk::ScrolledWindow" container. So when I put a picture with size "(100, 100)" to the drawing area -> I will call the appropriate member function "Gtk::DrawingArea::set_size_request(100, 100)", which will set the size of the drawing area to (100, 100) pixels. So if the "Gtk::ScrolledWindow" is "smaller" than the "Gtk::DrawingArea" inside it, the scrolled window should change to "scrollable" - but the problem is that it does not work! So could anybody explain to me how I can do this? Thanks.
Add a Gtk::Viewport in between your ScrolledWindow and DrawingArea.
1,588,949
1,588,967
How to check if the first char in the line is # (beginning of a comment)
I have been following this convention thus far: std::string line; while(std::getline(in,line)) { if(line.size() && line[0] =='#') continue; /* parse text*/ } The obvious drawback is that comment may not begin at the first character, in the case of leading whitespace. What is the good way to deal with this sort of a thing?
Simple enhancement: you may want to use line.find_first_not_of(" \t") to get the first non-whitespace character and then check whether it is a '#'. That also covers the zero-length case. Something like this fragment: std::string::size_type found = line.find_first_not_of(" \t"); if( found != std::string::npos) { if( line[found] == '#') continue; }
1,588,970
1,589,081
How to determine where code spends a lot of time in a kernel space (system calls)
I noticed that 10% of my code's run time is spent in system space. However, I do NOT know in which system calls. I suspect, though, that it has to do with either files or timestamps. Is there a tool to figure out which system calls are the culprits? Also, I want to know the frequency (and location) of the calls (and callees). I am on AS3. Thanks
Both strace and truss will help you see which system calls are taking time. Two useful options for strace are: -T to show the time spent in each system call, -c to summarize syscall counts, calls, error counts as a table. The two options are mutually exclusive though. You may want a full system profiling tool, to allow you to profile the kernel in more detail. DTrace is probably the best if you have it on your platform. By platform, here are some options: Linux: strace, oprofile, SystemTap. Solaris: dtrace (the original) FreeBSD: dtrace OS X: dtrace and Instruments; the latter is a graphical UI over DTrace and comes with Xcode. DTrace can even help you profile your C/C++ code with the pid provider, e.g. see here.
1,588,976
1,589,073
Do pointers to string literals remain valid after a function returns?
Is the pointer returned by the following function valid? const char * bool2str( bool flg ) { return flg ? "Yes" : "No"; } It works well in Visual C++ and g++. What does C++ standard say about this?
On storage duration: 2.13.4 Ordinary string literals and UTF-8 string literals are also referred to as narrow string literals. A narrow string literal has type “array of n const char”, where n is the size of the string as defined below, and has static storage duration. Read in conjunction with 3.7.1: 3.7.1 All objects which do not have dynamic storage duration, do not have thread storage duration, and are not local have static storage duration. The storage for these objects shall last for the duration of the program (3.6.2, 3.6.3). On type: Annex C Subclause 2.13.4: Change: String literals made const The type of a string literal is changed from “array of char” to “array of const char.” The type of a char16_t string literal is changed from “array of some-integer-type” to “array of const char16_t.” The type of a char32_t string literal is changed from “array of some-integer-type” to “array of const char32_t.” The type of a wide string literal is changed from “array of wchar_t” to “array of const wchar_t.” Rationale: This avoids calling an inappropriate overloaded function, which might expect to be able to modify its argument. Effect on original feature: Change to semantics of well-defined feature. Difficulty of converting: Simple syntactic transformation, because string literals can be converted to char*; (4.2). The most common cases are handled by a new but deprecated standard conversion: char* p = "abc"; // valid in C, deprecated in C++ char* q = expr ? "abc" : "de"; // valid in C, invalid in C++ How widely used: Programs that have a legitimate reason to treat string literals as pointers to potentially modifiable memory are probably rare. Dynamically allocated memory (the word 'heap' is never used for an area of memory in the standard, AFAIK) requires a function call, which can happen no earlier than main - long after the static storage has been allocated.
1,589,235
1,589,254
How to make functions with flag parameters? (C++)
How could I make a function with flags like how Windows' CreateWindow(...style | style,...), for example, a createnum function: int CreateNum(flag flags) //??? { int num = 0; if(flags == GREATER_THAN_TEN) num = 11; if(flags == EVEN && ((num % 2) == 1) num++; else if(flags == ODD && ((num % 2) == 0) num++; return num; } //called like this int Number = CreateNum(GREATER_THAN_TEN | EVEN); Is this possible, and if so, how?
You can define an enum specifying "single bit" values (note that the enclosing struct is acting here only as a naming context, so that you can write e.g. MyFlags::EVEN): struct MyFlags{ enum Value{ EVEN = 0x01, ODD = 0x02, ANOTHER_FLAG = 0x04, YET_ANOTHER_FLAG = 0x08, SOMETHING_ELSE = 0x10, SOMETHING_COMPLETELY_DIFFERENT = 0x20 }; }; and then use it like this: int CreateNum(MyFlags::Value flags){ if (flags & MyFlags::EVEN){ // do something... } } void main(){ CreateNum((MyFlags::Value)(MyFlags::EVEN | MyFlags::ODD)); } or simply like this: int CreateNum(int flags){ if (flags & MyFlags::EVEN){ // do something... } } void main(){ CreateNum(MyFlags::EVEN | MyFlags::ODD); } You could also simply declare integer constants, but the enum is clearer in my opinion. Note: I updated the post to take some comments into account, thanks!
1,589,370
1,592,604
CComBSTR memory allocation
I have a "const char* str" with a very long string. I need to pass it from a cpp client to a .Net COM method which expects BSTR type. Currently I use: CComBSTR bstr = str; This has the following issues: Sometimes this line fails with out of memory message When I pass the bstr to the COM class it takes a lot of memory (much more than the string size) so it can fail with out of memory Questions: Am I converting to CComBSTR wisely? e.g. is there a way to use the heap or something Is it better to use BSTR instead? Any other suggestion is also welcomed...
If a method is expecting a BSTR, passing a BSTR is the only correct way. To convert a char* to a BSTR you use the MultiByteToWideChar() Win32 API function for the conversion and SysAllocStringLen() for the memory allocation. You can't get around that - you need SysAllocStringLen() for memory allocation because otherwise the COM server will fail if it calls SysStringLen(). When you use CComBSTR and assign a char* to it, the same sequence is run - ATL is available as headers and you can enjoy reading it to see how it works. So in fact CComBSTR does exactly the minimal set of necessary actions. When you pass a BSTR to a COM server, CComBSTR::operator BSTR() const is called, which simply returns a pointer to the wrapped BSTR - the BSTR body is not copied. Anything that happens next is up to the COM server or the interop being used - they decide for themselves whether they want to copy the BSTR body or just read it directly. Your best bet for resolving the out-of-memory failures is to change the COM interface so that it accepts some reader and requests the data in chunks through that reader.
1,589,425
1,589,469
Templated Stl Containers
template <class T>
class container
{
    typedef list<T> ObjectList;
public:
    ~container ()
    {
        for (typename ObjectList::iterator item = _Container.begin(); item != _Container.end(); item++)
        {
            if (*item)
                delete (*item)
        }
    }
}

how can i free the container items by deleting the pointer content? g++ not allow this code
You don't want to do that -- the container (ObjectList, in your case) owns the items it contains, so to delete them, you need to tell it what you want, as in: ObjectList.erase(item);. Since you're (apparently) deleting all the items, you might as well use ObjectList.clear(); and skip the explicit loop entirely. Then again, since ObjectList is apparently a std::list, you don't need to do any of the above -- when you destroy a std::list, it automatically destroys whatever objects it contains.

Edit: (mostly in response to UncleBens' comment): If you're trying to create a container of pointers that manages deleting the items pointed to by the pointers it contains, then the first thing you want to do is ensure that it really contains pointers:

template <class T>
class container {
    std::list<T*> items; // Note 'T*' rather than just "T"
public:
    ~container() {
        typedef typename std::list<T*>::iterator iter;
        for (iter p = items.begin(); p != items.end(); ++p)
            delete *p; // *p yields a pointer from the container; delete its pointee
    }
};
1,589,459
1,589,486
What is the point of `void func() throw(type)`?
I know this is a valid C++ program. What is the point of the throw in the function declaration? AFAIK it does nothing and isn't used for anything.

#include <exception>

void func() throw(std::exception)
{
}

int main()
{
    return 0;
}
That is an exception specification, and it is almost certainly a bad idea. It states that func may throw a std::exception, and any other exception that func emits will result in a call to unexpected().
1,589,630
1,589,954
Assigning vector::iterator to char array post VS 2003
I am trying to get some C++ code originally written in Microsoft Visual Studio (VS) 2003 to compile under VS 2008 and I am having trouble finding an efficient solution to assigning a vector::iterator to the beginning of a char array. I know that iterators went from being defined as a simple pointer type (T*) to a class type between VS 2003 and VS 2005. Here is a simple example of what I am talking about:

typedef std::vector<char> CharContainer;
typedef CharContainer::iterator InputIt;

int FindNumMsgs( InputIt _inputIter, int _len );

int ProcessBufferForMsgs( char buf[], const size_t maxlen )
{
    int numMsgs = FindNumMsgs( InputIt(buf), maxlen );
    ...
}

So, in VS 2003, this compiles and works with no problem (since iterators are defined as T*). In VS 2008, this errors with C2440 (function-style-cast) since I can no longer just assign the iterator with the buf pointer. What would I do to get this to work in VS 2008 now that iterators are a class type? I could copy the buffer into a vector, then pass in myVec.begin(), but I have to think that I can avoid this overhead.
The proper solution would be to template FindNumMsgs so that it can work with either iterators or pointers (since pointers can be used as iterators just fine). Something like this:

template <class T>
int FindNumMsgs(T it, int count)
{
    int n = 0;
    while (count--) {
        // do whatever, incrementing n for each message found
        ++it;
    }
    return n;
}
1,589,742
1,589,792
Size of 64-bit dll 50% larger than 32-bit
I have a VC++ project (2005) that generates both 32-bit and 64-bit dlls. The 32-bit dll is 1044 KB whereas the 64-bit version is 1620 KB. I'm curious why the size is so large. Is it just because of the larger address size, or is there a compiler option that I'm missing?
Maybe your code contains a lot of pointers. The Free Lunch Is Over .... (Aside: Here’s an anecdote to demonstrate “space is speed” that recently hit my compiler team. The compiler uses the same source base for the 32-bit and 64-bit compilers; the code is just compiled as either a 32-bit process or a 64-bit one. The 64-bit compiler gained a great deal of baseline performance by running on a 64-bit CPU, principally because the 64-bit CPU had many more registers to work with and had other code performance features. All well and good. But what about data? Going to 64 bits didn’t change the size of most of the data in memory, except that of course pointers in particular were now twice the size they were before. As it happens, our compiler uses pointers much more heavily in its internal data structures than most other kinds of applications ever would. Because pointers were now 8 bytes instead of 4 bytes, a pure data size increase, we saw a significant increase in the 64-bit compiler’s working set. That bigger working set caused a performance penalty that almost exactly offset the code execution performance increase we’d gained from going to the faster processor with more registers. As of this writing, the 64-bit compiler runs at the same speed as the 32-bit compiler, even though the source base is the same for both and the 64-bit processor offers better raw processing throughput. Space is speed.)
1,589,854
1,589,976
Cast between function pointers
I am currently implementing a timer/callback system using Don Clugston's fastdelegates. (see http://www.codeproject.com/KB/cpp/FastDelegate.aspx) Here is the starting code:

struct TimerContext { };

void free_func( TimerContext* ) { }

struct Foo
{
    void member_func( TimerContext* ) { }
};

Foo f;
MulticastDelegate< void (TimerContext*) > delegate;
delegate += free_func;
delegate += bind( &Foo::member_func, &f );

Okay, but now, I wish the user to be able to subclass TimerContext to store and send his own structures to the callbacks. The purpose here is to prevent the user from having to downcast the TimerContext himself:

struct TimerContext { };
struct MyTimerContext : TimerContext { int user_value; };

void free_func( TimerContext* ) { }
void free_func2( MyTimerContext* ) { }

struct Foo
{
    void member_func( TimerContext* ) { }
    void member_func2( MyTimerContext* ) { }
};

Foo f;
MulticastDelegate< void (TimerContext*) > delegate;
delegate += free_func;
delegate += free_func2;
delegate += bind( &Foo::member_func, &f );
delegate += bind( &Foo::member_func2, &f );

As you guessed, GCC won't let me do that :)

error: invalid conversion from `void (*)(MyTimerContext*)' to `void (*)(TimerContext*)'
error: initializing argument 1 of `delegate::Delegate<R ()(Param1)>::Delegate(R (*)(Param1)) [with R = void, Param1 = TimerContext*]'

So now my question is: If I force the cast using reinterpret_cast, it'll work, but will it be safe? PS: These are time-critical callbacks, heavy virtual-oriented solutions are considered impracticable :/
The C++ Standard says in 13.4/7 that there are no standard conversions (clause 4) of one pointer-to-function type into another. In particular, even if B is a public base of D, we have

D* f();
B* (*p1)() = &f;    // error

void g(D*);
void (*p2)(B*) = &g;    // error

Still, you may be able to use a function adapter for storing pointers to functions with one argument, something like boost::function, but I'm not sure right now whether it will solve your problem.
1,589,950
1,589,987
Initializer list *argument* evaluation order
So, the C++ standard requires that class members be initialized in the order in which they are declared in the class, rather than the order that they're mentioned in any constructor's initializer list. However, this doesn't imply anything about the order in which the arguments to those initializations are evaluated. I'm working with a system that frequently passes references to serialization objects around, and wondering if I can ensure that bits are read from it in the right order, independent of the order in which those bits get written into the object's fields.

struct Foo
{
    int a;
    double b;

    // I want to be able to do this
    Foo(SerObj &s)
        : b(s.readDouble()), a(s.readInt())
    { }

    // Rather than this
    Foo (SerObj &s)
    {
        b = s.readDouble();
        a = s.readInt();
    }
};

Obviously, reordering things like ints and doubles in the declaration is not too big a deal, but bigger objects and things requiring dynamic allocation sometimes can be.
C++ Standard 12.6.2/3: There is a sequence point (1.9) after the initialization of each base and member. The expression-list of a mem-initializer is evaluated as part of the initialization of the corresponding base or member. The order of the initialization is the one you specified in the question. Evaluation is part of this initialization, and the initializations can't interleave (because there is a sequence point between them). That means that the function calls in your initializer lists are not called in the desired order, but in the order in which the member declarations appear.
1,590,062
1,590,231
Stream from std::string without making a copy?
I have a network client with a request method that takes a std::streambuf*. This method is implemented by boost::iostreams::copy-ing it to a custom std::streambuf-derived class that knows how to write the data to a network API, which works great. This means I can stream a file into the request without any need to read it all into memory. There are some cases, however, where large blocks of data must be sent which are not in a file, so I included an overload that takes a string. To avoid duplicating all the network code in the stream, it seemed obvious that I should set up a streambuf representing the string and call the other method. The only way I could figure out to make this work was something like:

std::istringstream ss(data);
send(ss.rdbuf());

Unfortunately, istringstream makes a copy of the data, which in some cases is several megabytes. It makes perfect sense in the general case, of course, if you hand a const reference to some object you don't want that object assuming it can continue using that reference. I worked around this with the following:

struct zerocopy_istringbuf : public std::stringbuf
{
    zerocopy_istringbuf(std::string const* s)
        : std::stringbuf(std::ios::in)
    {
        char* p = const_cast<char*>(s->c_str());
        setg(p, p, p + s->length());
    }
};

...
send(&zerocopy_istringbuf(data));

This seems to work just fine, but I wonder if it's really necessary. Why doesn't std::istringstream have an overload taking a std::string const *? Is there a better way to do this?
The reason you're having these problems is that std::string isn't really suited to what you're doing. A better idea is to use a vector of char when passing around raw data. If it's possible, I would just change everything to use vector, using vector::swap and references to vectors as appropriate to eliminate all your copying. If you like the iostreams/streambuf API, or if you have to deal with something that takes a streambuf, it would be trivial to create your own streambuf that uses a vector, like yours. It would effectively do the same thing that you do, with the same issues as listed in the other answers, but you wouldn't be violating the class's contract. Otherwise, I think what you've got is probably the best way forward, short of passing around an istringstream everywhere.
1,590,106
1,591,607
Best way to pass char* to .Net
I want to pass a big char* from cpp to .Net (preferably using COM). What is the best way (in terms of memory)? If I use CComBSTR it takes a lot of memory both when creating the BSTR in CPP and especially when moving it to .Net inside the COM call.
You can pass a StringBuilder as an input parameter and the C++ code can write into that. From a FAQ on PInvoke: To solve this problem (since many of the Win32 APIs expect string buffers) in the full .NET Framework, you can, instead, pass a System.Text.StringBuilder object; a pointer will be passed by the marshaler into the unmanaged function that can be manipulated. The only caveat is that the StringBuilder must be allocated enough space for the return value, or the text will overflow, causing an exception to be thrown by P/Invoke.
1,590,148
1,590,322
Vs2008 C++: how can I make recursive include directories?
I am including a complicated project as a library in C++ using Visual Studio 2008. I have a set of include files that are scattered throughout a very complicated directory tree structure. The root of the tree has around ten directories, and then each directory could have multiple subdirectories, subsubdirectories, etc. I know that all the header files in that structure are necessary, and they are hopelessly interlinked; I can't just include one directory, because then dependencies in another directory will feel left out and cause the compiler to crash in their annoyance at not being invited to the party. So, everyone has to be included. I can do this by adding the directories one at a time to the project (right click->properties->additional include directories), but that can be fraught with pain, especially when one of the dependencies has children and makes a brand new subsubsubdirectory. Is there a way to specify an include directory in a header file itself, so that I can just include that header whenever I need to use the functions it contains? That way, I get an easier way to edit the include files, and I don't have to make sure that the debug and release versions agree with each other (since the properties right click defaults to the current build, not all builds, a feature that has led to much crashing when switching from debug to release). Even better, is there a way to point to the directory root and force everything to be recursively included? EDIT for all those replies so far: I cannot edit the structure of this project. I can only link to it. I don't like the way the code is organized anymore than anyone else seems to, but I have to work within this constraint. Rather than spending potentially hours in the error-prone process of finding all the interdependencies and putting them in a project file, is there a way to do this programmatically?
That's clearly not a good idea, really. These directories are a way to organize the code in logical groups.

/web
    /include
        /web
            /stackoverflow
            /language-agnostic
            /algorithm
            /database
            /meta
                /bug
                /feature-request
    /src
/local
    /include
        /local
            /my-favorites
    /src

Now if I type

#include "exception.h"

What the heck am I trying to include? Where's that file? How can I see its content? On the other hand if I type

#include "local/my-favorites/exception.h"

Then it's perfectly clear. (and I just have two includes: -Iweb/include -Ilocal/include)

And this way, I can have multiple files that have the exact same name and there would be no ambiguity, nifty when you wish to integrate two different 3rd party libraries which both have such an 'exception.h'. Also note that for clarity, the namespace nesting should reflect the directory organization. So the file "web/include/web/meta/bug/exception.h" would contain:

namespace web { namespace meta { namespace bug {

    struct exception: std::runtime_error {};

} } } // namespace bug, namespace meta, namespace web

This way it's easy to think of what header you have to include when you want a class. Also note that, for example, if you look at Boost, they put headers for 'lazy' programmers in each directory, which include the headers of all subdirectories.

file: "web/include/web/meta/bug.h"

#include "web/meta/bug/file1.h"
#include "web/meta/bug/file2.h"
#include "web/meta/bug/file3.h"
#include "web/meta/bug/file4.h"
#include "web/meta/bug/file5.h"

file: "web/include/web/meta.h"

#include "web/meta/bug.h"
#include "web/meta/feature-request.h"

These includes might also 'pull' names into a more generic namespace with a using declaration:

namespace web { namespace meta {

    using ::web::meta::bug::exception;

} } // namespace meta, namespace web

To make it less painful for developers.
So as you can see, the language already provides you with a very good way of organizing your code cleanly; if you go with the 'all includes' option, you'll just end up with an unmaintainable mess:

#include "exception.h"
#include "bug.h"
#include "myString.h"
#include "database_connect.h"
#include "helper.h" // really love this one...
#include "file.h" // not bad either eh ?

I've had some of these at work... think 20 unqualified includes when you depend on 25+ components... now, do you think that it would be possible to remove a dependency on component X? ;)

EDIT: How to deal with a 3rd party library?

Sometimes a 3rd party library does not live up to your expectations, whether it is:

headers that are not self-sufficient (i.e. you need to include 3 files to use 1 object)
warnings at compilation
a header organization problem

You always have the opportunity to wrap them in headers of your own. For example, say I have:

/include
    /file1.h
    /file2.h
    /detail
        /mustInclude.h
        /exception.h

And any time you wish to include a file, you have to include 'exception.h' BEFORE and 'mustInclude.h' AFTER, and of course you have the problem that it is difficult to spot that the included files come from this 3rd party library and not your own (current) project. Well, just wrap:

/include
    /3rdParty
        /file1.h (same name as the one you would like to include, it's easier)

file: "/include/3rdParty/file1.h"

#pragma warning(push)  // ignore warnings caused
#include "detail/exception.h" // necessary to include it before anything
#include "file1.h"
#include "detail/mustInclude.h"
#pragma warning(pop)

And then in your code:

#include "3rdParty/file1.h"

You have just isolated the problem, and all the difficulty now lies within your wrapper files. Note: I just realized that you may have the problem that the 3rd party headers reference each other without taking the relative path into account; in this case, you can still avoid the 'multiple include' syndrome (even without editing), but that might be ill-fated.
I suppose you don't have the opportunity not to use such crap :x ?
1,590,270
1,590,334
GoogleMock - Matchers and MFC\ATL CString
I asked this question on the Google Group but I think I will get a faster response on here. I'm trying to use Google's Mocking framework to test my code. I am also utilizing their test framework as well. I'm compiling in VC9. I'm having issues matching arguments that are MFC\ATL CStrings. GMock says the objects are not equal and it appears it is evaluating on the pointer addresses. The method I am attempting to mock is structured like so: void myMethod(const CString & key, const CString & value); thus: MOCK_METHOD2(myMethod, void(const CString & key , const CString & value); When setting up my expectations I am doing to following comparison: CString szKey = _T("Some key"); CString szValue = _T("Some value"); EXPECT_CALL(myMock, myMethod(Eq(szKey), Eq(szValue))).WillOnce(Return (true)); I have tried many different combinations of the matchers such as: EXPECT_CALL(myMock, myMethod(StrCaseEq(_T("Some Key")), StrCaseEq(_T (""Some value)))).WillOnce(Return(true)); EXPECT_CALL(myMock, myMethod(TypedEq<const CString &>(szKey), TypedEq<const CString &>(szValue))).WillOnce(Return(true)); EXPECT_CALL(myMock, myMethod(TypedEq<const CString &>(szKey), TypedEq<const CString &>(szValue))).WillOnce(Return(true)); Any of the above calls have produced the same result. Anyone else run into this issue? This is the output: Google Mock tried the following 2 expectations, but none matched: :80: tried expectation #0 Expected arg #1: is equal to 006D430C pointing to "Some value" Actual: 4-byte object <A8EF 1102> Expected: to be called once Actual: never called - unsatisfied and active :83: tried expectation #1 Expected arg #1: is equal to (ignoring case) "" Actual: 4-byte object <A8EF 1102> Expected arg #2: is equal to (ignoring case) "Some value" Actual: 4-byte object <C0EE 1102> Expected: to be called once Actual: never called - unsatisfied and active Adam
Since you are not making a copy of the strings when they are passed to your method, do you really need to check their values? It should suffice to write the following expectation:

CString szKey = _T("Some key");
CString szValue = _T("Some value");
EXPECT_CALL(myMock, myMethod(szKey, szValue)).WillOnce(Return(true));

... which will check that the strings given to the mock method are indeed the ones you expect (validated by address), and not a copy or other string. Regarding why the pre-canned matchers don't work with CString, I suspect it is either because CString doesn't override operator== or because the matcher implementations don't have an explicit specialization for CString.
1,590,676
1,592,207
Detect if a computer is a NetApp filer? (Unmanaged C++)
What is the best way to detect if a computer on a network is a netapp filer? I have tried some general querying of the computers attributes, but nothing has stuck out.
SNMP is enabled by default on filers ( though it may later be disabled ). Info on the available MIB can be found here.
1,590,688
1,590,804
Class 'is not a template type'
What does this error mean?

Generic.h:25: error: 'Generic' is not a template type

Here's Generic.

template <class T>
class Generic: public QObject, public CFG, public virtual Evaluator {
    Q_OBJECT

    std::string key_;
    std::vector<std::string> layouts_;
    std::vector<std::string> static_widgets_;
    std::map<std::string, std::vector<widget_template> > widget_templates_;
    std::map<std::string, Widget *> widgets_;
    int type_;
    LCDWrapper *wrapper_;

protected:
    LCDText *lcdText_;

public:
    Generic(Json::Value *config, int type);
    ~Generic();
    void CFGSetup(std::string key);
    void BuildLayouts();
    void StartLayout();
    int GetType() { return type_; }
    //T *GetLCD() { return lcd_; }
    LCDText *GetLCDText() { return lcdText_; }
    virtual void Connect(){};
    virtual void SetupDevice(){};
    std::map<std::string, Widget *> Widgets();
    std::string CFG_Key();
    LCDWrapper *GetWrapper() { return wrapper_; }
};

Is the problem that it subclasses other classes? I tried an experiment testing that theory, but it didn't produce this error.

Edit: Ok, so a couple of you guys pointed out I could be forward declaring Generic elsewhere without making it a template class. That's true. And when I make it a template, I get another error.

Property.h:15: error: ISO C++ forbids declaration of 'Generic' with no type

template <class T> class Generic;

class Property : public CFG {
    Generic *visitor; // line 15
    bool is_valid;
    QScriptValue result;
    Json::Value *expression;

public:
    Property(const Property &prop);
    Property(Generic *v, Json::Value *section, std::string name, Json::Value *defval);
    ~Property();
    bool Valid();
    int Eval();
    double P2N();
    int P2INT();
    std::string P2S();
    void SetValue(Json::Value val);
    Property operator=(Property prop);
};
I'm not sure if this is your problem, but you can't subclass QObject with a template class. Here is more information about that.
1,590,702
1,590,714
How to receive feedback from a Windows MessageBox?
I know it's possible to do something like this with Windows:

MessageBox(hWnd, "Yes, No, or Cancel?", "YNC", MB_YESNOCANCEL);

But how do I react to what the user pressed (like closing the window if they clicked "yes")?
MessageBox will return an integer referring to the button pressed. From the previous link:

Return Value
IDABORT - Abort button was selected.
IDCANCEL - Cancel button was selected.
IDCONTINUE - Continue button was selected.
IDIGNORE - Ignore button was selected.
IDNO - No button was selected.
IDOK - OK button was selected.
IDRETRY - Retry button was selected.
IDTRYAGAIN - Try Again button was selected.
IDYES - Yes button was selected.

So something like:

int result = MessageBox(hWnd, "Save work?", "Prompt", MB_YESNOCANCEL);
if (result == IDYES)
{
    // ...
}
else if (result == IDNO)
{
    // ...
}
else // cancel
{
    // ...
}

Note that with MB_YESNOCANCEL the possible return values are IDYES, IDNO, and IDCANCEL (never IDOK).
1,590,773
1,591,733
handling central data buffer for many processes in C++
I ran into the following problem and cannot decide how to proceed: I have a class, Reader, getting a chunk of data every 1/T seconds (actually the data is from video frames, 30 frames per second). The chunks are to be passed to several objects, Detectors that process the chunks and output a decision. However, the number of chunks that each detector needs to read before making a decision varies, e.g. some may need only one chunk, some 51. I am thinking of having a data buffer where Reader places the read data chunks, implementing publish/subscriber to register each Detector and sending it a signal when there are enough number of frames in the data buffer for it to process. Is this a good approach? Also, what's the best way to manage the buffer and have Detectors read data from it without making their own copies? Shared pointers? Thanks a lot! C
I think (also based on your comment to Maciek) you have to start by understanding the difference between threads and processes and how they can communicate. Regarding the design problem: Try to start with a simple design. For instance, use only threads and pass each of the subscribers a shared_ptr to the job via its own synchronized queue*. Since the access to the data is read-only and, AFAICR, boost::shared_ptr is thread-safe for such a use, there are no synchronization problems and the data is cleaned up automatically. Don't worry about memory reallocations (yet). Just make sure you are using a finite amount of memory, O(1) (as you said, about 51 shared_ptrs at most) per subscriber/thread. When you have this working skeleton, you will be able to start optimizing based on the problems you encounter. If reallocations are the problem, you can move to a ring buffer (as suggested by bcat), or you can replace your allocator (/new operator) with a pool allocator. If you have many subscribers, it might be effective to merge the queues into a single one used by all the threads. Doing that requires more information (what if one thread is very slow due to a very long computation? do you have some way to signal it to stop processing? or should the queue grow? if this is the case, a cyclic buffer may not work so well...) and may have its complications, but remember we are only trying to save the room occupied by the shared_ptrs (and not the jobs). Bottom line: try to avoid premature optimizations. Instead, write it with reasonable optimization and extendability in design and go on from there based on what you learn. Good luck! * synchronized queue - a queue shared between threads. push(j) adds a job and pop() waits until the queue is not empty and returns the top job (unlike std::queue; this is important when the queue is read by more than one thread). I usually implement it by wrapping a std::queue and protecting it using a boost::mutex.