1,824,245
1,824,283
scope of local variables of a function in C
I have heard about the following scenario ever since I started programming in C: "Trying to access a function's local variable from outside will result in an error (or a garbage value), since the stack gets cleared when we return from the function." But my code sample below prints a value of 50. I am compiling the code with the latest GCC compiler.

#include <stdio.h>

int *left();

int main()
{
    int *p = left();
    printf("%d\n", *p);
    return 0;
}

int *left()
{
    int i = 50;
    return &i;
}

Enlighten me on this issue. Also, can I know the behaviour in C++? Is it similar to C?
Modify it to add a second call to printf and you'll see a different value from the first time. Compile it with optimizations turned on and you'll see another set of values. Do anything with the value and you're stepping into undefined territory, which means that the compiler is free to summon demons through your nasal passages. On my system, I see 50 and then 0; with optimizations I see 0 and then 32767.

If you make the local variable static, then you can return its address, since it becomes just like a global (but remember that there is only one instance of it).

When a function returns, the local storage it was using on the stack is now considered "unused" by the program, since the stack doesn't go that high anymore. Typically, though, the values are still there, since there's no urgent need to clear them. The memory is also still owned by the program, since there's no sense in returning memory to the operating system a few bytes at a time. So for your specific example, under the circumstances in which you compiled it, the memory pointed to still contains the value 50. Officially, though, the value of *p is indeterminate, and attempts to use it result in undefined behavior.

One existential crisis of the C language is how, on the one hand, it says nothing about the stack and the various bits of hexadecimal sludge that make up a running process; on the other hand, it's necessary to understand those in order to protect yourself from crashes, buffer overflows, and undefined behavior. Just remember that you're lucky that GCC gives a warning for this.
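A minimal sketch of the two legal alternatives mentioned above (the function names are illustrative, not from the original code):

```cpp
#include <cstdio>

// Returning the address of an automatic local is undefined behaviour:
// the storage is reclaimed when the function returns.
// int* bad() { int i = 50; return &i; }   // don't do this

// Alternative 1 (from the answer): make the local static.
// It now has one program-wide instance with a stable address.
int* good() {
    static int i = 50;
    return &i;
}

// Alternative 2: let the caller own the storage and write through a pointer.
void fill(int* out) {
    *out = 50;
}
```

Note that the static version means every caller shares the same object, which is exactly the "only one instance" caveat above.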
1,824,279
1,824,293
How to get IP address from sockaddr
I want to get the IP address of a client after calling accept. This is what I have so far, but I just end up getting some long number that is clearly not an IP address. What could be wrong?

int tcp_sock = socket(AF_INET, SOCK_STREAM, 0);
sockaddr_in client;
client.sin_family = AF_INET;
socklen_t c_len = sizeof(client);
int acc_tcp_sock = accept(tcp_sock, (sockaddr*)&client, &c_len);
cout << "Connected to: " << client.sin_addr.s_addr << endl;
That long number is the IP address, in integer form (an IP address is just a 32-bit integer, after all; it's just easier for people to read when we split the octets apart and put it into dot notation). You can use inet_ntoa to convert the integer value (which is in network byte order) to standard dot notation.
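A small sketch of the conversion (the helper name is made up; inet_ntop is the reentrant, IPv6-capable sibling of inet_ntoa):

```cpp
#include <arpa/inet.h>   // inet_ntop, inet_pton
#include <netinet/in.h>  // sockaddr_in, INET_ADDRSTRLEN
#include <string>

// Convert the integer in sin_addr to dotted-decimal text.
std::string addr_to_string(const sockaddr_in& client) {
    char buf[INET_ADDRSTRLEN];
    // inet_ntop does the actual integer-to-dots conversion.
    inet_ntop(AF_INET, &client.sin_addr, buf, sizeof(buf));
    return buf;
}
```

So after accept, `addr_to_string(client)` yields the familiar "a.b.c.d" form instead of the raw integer.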
1,824,420
1,824,433
Is there a string equivalent to LPTSTR?
Is there a string equivalent to LPTSTR? I know of string and wstring. Is there a tstring?
You could define one:

typedef std::basic_string<TCHAR> mystring;
...
mystring test = _T("Hello World!");
1,824,533
1,824,550
What is meant by the C++ runtime?
What are all the activities done by the C++ runtime?
It handles startup and shutdown of your application, and the core services it needs to run - things like initialising the stack and static variables, and providing the heap and default memory allocators.
1,824,685
8,964,016
Eclipse-C++-Debugging: see content of an Array
Is it possible to see the content of a dynamically allocated array, as in:

int *array = new int[dimension];

I only see the value of the pointer.

Edit: I just found the option "Display As Array", but I always have to manually enter the size of the array. Is it possible to get that automagically?
In Eclipse, in order to see the content of a dynamically allocated array (for anyone else who stumbles over this question):

Make sure you are in the debugging perspective.
Look for the "Variables" window; if you don't see it, click "Window" > "Show View" > "Variables".
Right-click on the array variable.
Click "Display As Array...".
Eclipse does not know how big your array is, so type 0 for the start index and the number of elements you allocated for the length. Of course, you can use these values to display any part of the array you like.

When dealing with a pointer, take care to click "Display As Array" while hovering on the pointer itself, not on the value it references. Otherwise you get an error of the form:

Failed to execute MI command: -data-evaluate-expression [specifics]
Error message from debugger back end: Cannot access memory at address 0x[address]

showing up in the dialogue window just below the variable list.
1,824,772
1,824,834
How many requests can SQL Server handle per second?
I am using JMeter to test our application's performance. When I send 20 requests from JMeter, the result should be 20 new records added to SQL Server, but I only find 5 new records, which means SQL Server discarded the other requests (I took a log and made sure the insert statements were actually sent to SQL Server). Does anyone have ideas? What's the threshold number of requests SQL Server can handle per second? Or do I need to do some configuration? In my application I tried, but it seems that only 5 requests are accepted, and I don't know how to configure it to accept more.
I'm not convinced the number of requests per second is directly related to SQL Server throwing away your inserts. Perhaps there's an application logic error that rolls back or fails to commit the inserts, or the application fails to handle concurrency and inserts data violating the constraints. I'd check the server logs for deadlocks as well.
1,824,787
1,836,580
OpenCV multi-channel element access
I'm trying to learn how to use OpenCV's new C++ interface. How do I access elements of a multi-channel matrix? For example:

Mat myMat(Size(3, 3), CV_32FC2);
for (int i = 0; i < 3; ++i)
{
    for (int j = 0; j < 3; ++j)
    {
        //myMat_at_(i,j) = (i,j);
    }
}

What is the easiest way to do this? Something like cvSet2D of the old interface. What is the most efficient way? Similar to using direct pointers in the old interface.
With the old interface:

typedef struct elem_ {
    float f1;
    float f2;
} elem;

elem data[9] = { 0.0f };
CvMat mat = cvMat(3, 3, CV_32FC2, data);

float f1 = CV_MAT_ELEM(mat, elem, row, col).f1;
float f2 = CV_MAT_ELEM(mat, elem, row, col).f2;

CV_MAT_ELEM(mat, elem, row, col).f1 = 1212.0f;
CV_MAT_ELEM(mat, elem, row, col).f2 = 326.0f;

Update for OpenCV 2.0:

1. Choose one type to represent the element.

Mat (or CvMat) has 3 dimensions: row, col, channel. We can access one element (or pixel) in the matrix by specifying the row and col. CV_32FC2 means the element is a 32-bit floating point value with 2 channels. So elem in the code above is one acceptable representation of CV_32FC2. You can use other representations you like, for example:

typedef struct elem_ { float val[2]; } elem;
typedef struct elem_ { float x; float y; } elem;

OpenCV 2.0 adds some new types to represent the element in the matrix, like:

template<typename _Tp, int cn> class CV_EXPORTS Vec  // cxcore.hpp (208)

So we can use Vec<float,2> to represent CV_32FC2, or use:

typedef Vec<float, 2> Vec2f;  // cxcore.hpp (254)

See the source code for more types that can represent your element. Here we use Vec2f.

2. Access the element.

The easiest and most efficient way to access the element in the Mat class is Mat::at. It has 4 overloads:

template<typename _Tp> _Tp& at(int y, int x);              // cxcore.hpp (868)
template<typename _Tp> const _Tp& at(int y, int x) const;  // cxcore.hpp (870)
template<typename _Tp> _Tp& at(Point pt);                  // cxcore.hpp (869)
template<typename _Tp> const _Tp& at(Point pt) const;      // cxcore.hpp (871)
// defined in cxmat.hpp (454-468)

// we can access the element like this:
Mat m(Size(3, 3), CV_32FC2);
Vec2f& elem = m.at<Vec2f>(row, col);  // or m.at<Vec2f>(Point(col, row));
elem[0] = 1212.0f;
elem[1] = 326.0f;
float c1 = m.at<Vec2f>(row, col)[0];  // or m.at<Vec2f>(Point(col, row))[0];
float c2 = m.at<Vec2f>(row, col)[1];
m.at<Vec2f>(row, col)[0] = 1986.0f;
m.at<Vec2f>(row, col)[1] = 326.0f;

3. Interact with the old interface.

Mat provides 2 conversion functions:

// converts header to CvMat; no data is copied  // cxcore.hpp (829)
operator CvMat() const;  // defined in cxmat.hpp
// converts header to IplImage; no data is copied
operator IplImage() const;

// we can use a Mat object with the old interface:
Mat new_matrix( ... );
CvMat old_matrix = new_matrix;  // be careful about its lifetime
CV_MAT_ELEM(old_matrix, elem, row, col).f1 = 1212.0f;
1,824,835
1,824,867
How can I create a type based lookup table in order to implement multiple-dispatch in C++?
I'm attempting to make a messaging system in which any class derived from "Messageable" can receive messages based on how the function handleMessage() is overloaded. For example:

class Messageable
{
public:
    void takeMessage(Message& message) {
        this->dispatchMessage(message);
    }
protected:
    void bindFunction(std::type_info type, /* Need help here */ func) {
        m_handlers[type] = func;
    }
    void dispatchMessage(Message& message) {
        m_handlers[typeid(message)](message);
    }
private:
    std::map<std::type_info, /* Need help here */ > m_handlers;
};

class TestMessageable : public Messageable
{
public:
    TestMessageable() {
        this->bindFunction(typeid(VisualMessage),
            void (TestMessageable::*handleMessage)(VisualMessage));
        this->bindFunction(typeid(DanceMessage),
            void (TestMessageable::*handleMessage)(DanceMessage));
    }
protected:
    void handleMessage(VisualMessage visualMessage) {
        // Do something here with visualMessage
    }
    void handleMessage(DanceMessage danceMessage) {
        // Do something here with danceMessage
    }
};

In a nutshell, I want the correct version of handleMessage to be called based on the RTTI value of any given message. How can I implement this, preferably without some sort of monolithic switch/case statement?
You should look into the Double Dispatch pattern. See information here. You should be able to implement VisualMessage as a class like this:

class VisualMessage : public Message
{
public:
    virtual void dispatch(Messageable& inMessageable) {
        inMessageable.handleMessage(*this);
    }
};

and then call it like this:

VisualMessage vMessage;
TestMessageable tMessageable;
Message& message = vMessage;
message.dispatch(tMessageable);

It will then call TestMessageable::handleMessage(VisualMessage& visualMessage). This works because the virtual call to Message::dispatch resolves to VisualMessage::dispatch, and when VisualMessage::dispatch calls inMessageable.handleMessage(*this), the static type of *this is VisualMessage, not Message, so the right handleMessage overload is chosen.
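A complete minimal sketch of the pattern (the class and member names follow the question; the "last" field is added purely so the dispatch target is observable):

```cpp
#include <string>

class VisualMessage;
class DanceMessage;

// The receiver declares one overload per concrete message type.
class Messageable {
public:
    virtual ~Messageable() {}
    virtual void handleMessage(VisualMessage&) = 0;
    virtual void handleMessage(DanceMessage&) = 0;
};

// Each message knows how to hand itself to a receiver.
class Message {
public:
    virtual ~Message() {}
    virtual void dispatch(Messageable& m) = 0;
};

class VisualMessage : public Message {
public:
    // *this has static type VisualMessage here, so overload
    // resolution picks handleMessage(VisualMessage&).
    virtual void dispatch(Messageable& m) { m.handleMessage(*this); }
};

class DanceMessage : public Message {
public:
    virtual void dispatch(Messageable& m) { m.handleMessage(*this); }
};

class TestMessageable : public Messageable {
public:
    std::string last;  // records which overload ran, for illustration
    virtual void handleMessage(VisualMessage&) { last = "visual"; }
    virtual void handleMessage(DanceMessage&) { last = "dance"; }
};
```

No std::map or typeid is needed; the two virtual calls select the handler.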
1,824,838
1,824,900
Iterating over all pairs of elements in std-containers (C++)
What's the best way to iterate over all pairs of elements in a std container like std::list, std::set, std::vector, etc.? Basically to do the equivalent of this, but with iterators:

for (int i = 0; i < A.size()-1; i++)
    for (int j = i+1; j < A.size(); j++)
        cout << A[i] << A[j] << endl;
The easiest way is just rewriting the code literally:

for (auto i = foo.begin(); i != foo.end(); ++i) {
    for (auto j = i; ++j != foo.end(); /**/) {
        std::cout << *i << *j << std::endl;
    }
}

Replace auto with a const_iterator for C++98/03. Or put it in its own function:

template<typename It>
void for_each_pair(It begin, It end) {
    for (It i = begin; i != end; ++i) {
        for (It j = i; ++j != end; /**/) {
            std::cout << *i << *j << std::endl;
        }
    }
}
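A variant of the same traversal that collects the pairs through an output iterator instead of printing them, which makes the visiting order easy to verify (the signature is my own generalization, not from the answer):

```cpp
#include <iterator>  // std::back_inserter, for the usage example
#include <utility>   // std::pair, std::make_pair
#include <vector>

// Visit every unordered pair {*i, *j} with i before j, exactly once,
// writing each pair to the output iterator.
template <typename It, typename Out>
void for_each_pair(It begin, It end, Out out) {
    for (It i = begin; i != end; ++i) {
        for (It j = i; ++j != end; /**/) {
            *out++ = std::make_pair(*i, *j);
        }
    }
}
```

For a container {1, 2, 3} this produces (1,2), (1,3), (2,3), matching the index-based double loop in the question.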
1,824,910
1,824,923
Is there an occasion where using the catch-all clause catch (...) is justified?
Each time I have seen the catch-all statement:

try {
    // some code
}
catch (...) {
}

it has always been an abuse. The arguments against using catch-all clauses are obvious: it will catch anything, including OS-generated exceptions such as access violations. Since the exception handler can't know what it's dealing with, in most cases the exceptions will manifest as obscure log messages or some incoherent message box. So catch(...) seems inherently evil. But it is still part of C++, and other languages (Java, C#) implement similar mechanisms. So are there cases where its usage is justified?
"The arguments against using catch-all clauses are obvious: it will catch anything including OS generated exceptions such as access violations. Since the exception handler can't know what it's dealing with, in most cases the exceptions will manifest as obscure log messages or some incoherent message box."

And if those same exceptions aren't caught you get... an incoherent message box. catch(...) lets me at least present my own message box (and invoke custom logging, save a crash dump, etc.).

I think there are also reasonable uses of catch(...) in destructors. Destructors can't throw--well, I mean, they can throw, but if a destructor throws during stack unwinding due to an in-progress exception, the program terminates, so they should never allow exceptions to escape. It is in general better to allow the first exception to continue to be unwound than to terminate the program.

Another situation is a worker thread that can run arbitrary functions; generally you don't want an unceremonious crash if the task throws an exception. A catch(...) in the worker thread provides the opportunity for semi-orderly clean-up and shutdown.
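The worker-thread point can be sketched as a small task runner (the function name and return convention are illustrative): known exception types get a useful report, and catch(...) is the last-chance net that keeps an arbitrary throw from escaping.

```cpp
#include <stdexcept>
#include <string>

// Run an arbitrary task; never let an exception escape.
template <typename Task>
std::string run_task(Task task) {
    try {
        task();
        return "ok";
    } catch (const std::exception& e) {
        // Known exception types: report what happened.
        return std::string("failed: ") + e.what();
    } catch (...) {
        // Anything else: still shut down in an orderly way.
        return "failed: unknown exception";
    }
}
```

In a real thread pool the return value would feed logging or a crash-dump path rather than a string.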
1,825,065
1,825,183
Is it OK for an abstract base class to have non-abstract methods?
An abstract base class (interface class) usually has all its member functions abstract. However, I have several cases where member functions consisting of calls to the abstract methods of the interface are used. I can implement them in a derived-but-still-abstract class, or I can implement the methods as non-abstract, non-virtual methods of the interface class. Are there any problems design-wise with implementing the methods in the interface class? Is it bad style, and if so, why? Does the same hold for static methods? For example:

class IFoo
{
public:
    virtual ~IFoo();
    virtual double calcThis(InputType p) const = 0;
    virtual double calcThat(InputType p) const = 0;
    double calcFraction(InputType p) { return calcThis(p) / calcThat(p); }
    static BarType bar(InputType p);
};

class MyFoo : public IFoo
{
public:
    // implements IFoo
    virtual double calcThis(InputType p) const;
    // implements IFoo
    virtual double calcThat(InputType p) const;
};

versus

class IFoo
{
public:
    virtual ~IFoo();
    virtual double calcThis(InputType p) const = 0;
    virtual double calcThat(InputType p) const = 0;
};

class FooBase : public IFoo
{
public:
    virtual ~FooBase();
    double calcFraction(InputType p) { return calcThis(p) / calcThat(p); }
    static BarType bar(InputType p);
};

class MyFoo : public FooBase
{
public:
    // implements IFoo
    virtual double calcThis(InputType p) const;
    // implements IFoo
    virtual double calcThat(InputType p) const;
};
If you're calling it an interface (i.e. which you seem to be by your use of the naming convention "IFoo") then it should be a pure interface (no implementations). If it's merely an abstract class then a mix of pure virtual and implemented methods is perfectly reasonable.
1,825,089
1,825,223
Using a C++ dll in C#
I'm attempting to consume a DLL written in C++ from a C# application. I have third-party source code for the C++ DLL (the Cyclone physics engine) and do not want to manually port it over to C#. In the C++ project I changed it to output a DLL, changed it to use the /clr flag, and changed it to use Multi-threaded Debug DLL (/MDd), because that was the only option compatible with /clr that also compiled. In the C# project I added a reference to the DLL, and I'm using the cyclone namespace. At first there was absolutely nothing under the namespace. I think this is because in the C++ code all classes were declared with no access modifiers, and the default is private. So for the class "Particle" I changed the definition to:

public class Particle
{
    //...
}

Now I can successfully declare a variable of type Particle from the C# code. However, IntelliSense and the object browser report Particle to be a struct (?) and it doesn't contain any methods at all. The C++ code declares a bunch of methods after "public:" access modifiers, so I don't know what the problem is. For example:

public:
    void integrate(real duration);

What am I doing wrong?
The Particle class is not a managed class, hence it is treated as a struct. You need to declare it with the ref keyword (a C++/CLI "ref class") to make it managed and garbage collected. You would also need to do the same to every other class that references it, which might be a problem. The best solution, I think, is to create a managed wrapper class that uses the Particle class internally. This wrapper class can then be referenced by .NET. See here:
1,825,094
1,825,151
Is there an automated program to find C++ linker errors?
I'm working in a Linux environment with C++, using the GCC compiler. I'm currently working on modifying and upgrading a large pre-existing body of code. As part of this, it has been necessary to add quite a large number of small references throughout the code in a variety of places to link things together, and also to add in several new external code libraries. There is also quite a large and complex structure of Makefiles linked to a configure.ac file to handle the build process. Upon starting the build process everything compiles without a problem, but comes back with the dreaded linker error when trying to use a newly added custom code library we've created. We have now been through a vast amount of code with a fine tooth comb looking for spelling mismatches, checking the order that all the libraries are included in the build process, and checked that the .o files created contain what we need using dumps, and all are as and where they should be. We've also tested the library separately and the problem definitely doesn't lie there. In short, we've tried most things that you should normally do in these scenarios. Is there a tool for C++ that can detect linker errors automatically, in a similar vein to cppcheck or splint (both of which we have run to no avail) that could help here?
I don't know your platform, but I spent some time with linker problems in GCC until I realized that linking static libraries (.a) requires a specific ordering: gcc object.o first.a second.a is not the same as gcc object.o second.a first.a.
1,825,338
1,825,369
Video streaming using C++
I'm going to build an application in C++ that creates a stream of photos and then sends them as a video stream to another application. Any ideas about how I can start? What libraries should I use, and what encoding? I'm thinking about MJPEG, and UDP or RTP as the protocol. Any help would be greatly appreciated.
If your input data is just a bunch of random images, not video, you're not going to do "video streaming". You're just going to be sending a bunch of full images. There's no need to involve video encoding technology; just do the simplest possible transmission of images. Video encoders rely on each frame having various relationships to the previous ones, as is common in actual video. For inputs of random images, they're not going to be able to compress much, and single-frame compression (e.g. JPEG/PNG/whatever) is very likely already applied to your input data. It's probably easiest to send the contents of each file together with the original filename, have the receiving client re-create the file on disk, and use existing disk-oriented libraries to open and decode the image. You should probably just use TCP for this; there's nothing in your requirements that indicates you need the more complicated and error-prone UDP/RTP-based solutions.
1,825,553
2,101,039
C# pass int and string by reference to C++ ActiveX Control: type mismatch
I have a problem passing int or string variables by reference to a C++ ActiveX control. I also pass these variables by reference to a C++ DLL, and there everything works fine.

C++ DLL:

__declspec(dllexport) void Execute(LPCTSTR cmd, int& resultCode, LPCTSTR& message, long& receiptNumber)
{
    message = _T("ReplyProblem");
    resultCode = 100;
    receiptNumber = -1;
}

C#:

[DllImport("MyCOM.dll", CharSet = CharSet.Unicode)]
public static extern void Execute(string cmd, out int resultCode, out string message, out int receiptNumber);
...
int resultCode = 0;
string message = "";
int receiptNumber = 0;
Execute("cmd", out resultCode, out message, out receiptNumber); // OK

How do I get this done in an ActiveX control? I tried to define the methods using the & reference symbol, but the MIDL compiler did not allow that.

MyCOM.idl:

[id(1025315)] void Execute(LPCTSTR cmd, [out]long& returnCode); // MIDL2025: syntax error

So I modified the methods to use pointers (*) instead.

MyCOM.idl:

[id(1025315)] void Execute(LPCTSTR cmd, [out]long* returnCode);

MyCOMCtrl.h:

// Dispatch maps
afx_msg void Execute(LPCTSTR cmd, long* resultCode);

MyCOMCtrl.cpp:

// Dispatch map
...
DISP_FUNCTION_ID(MyCOMCtrl, "Execute", DISPID_EXECUTE_METHOD, Execute, VT_EMPTY, VTS_PI4)
...
void MyCOMCtrl::Execute(LPCTSTR cmd, long* resultCode)
{
    *resultCode = 111;
}

C#:

using MyCOMLib;
...
MyCOM client = new MyCOM();
int resultCode = 0;
// COMException: Type mismatch. (Exception from HRESULT: 0x80020005 (DISP_E_TYPEMISMATCH))
client.Execute("Test command", out resultCode);

The same exception occurs when using the string type in C# and LPCTSTR* in the C++ ActiveX control instead. Any tips or suggestions will be appreciated.
SOLVED. In MyCOMCtrl.cpp, this dispatch map entry:

DISP_FUNCTION_ID(MyCOMCtrl, "Execute", DISPID_EXECUTE_METHOD, Execute, VT_EMPTY, VTS_PI4)

must be:

DISP_FUNCTION_ID(MyCOMCtrl, "Execute", DISPID_EXECUTE_METHOD, Execute, VT_EMPTY, VTS_BSTR VTS_PI4) // two VTS arguments

The dispatch map must list one VTS type per parameter; the original entry omitted the type for the LPCTSTR argument, which caused the type mismatch.
1,825,653
1,825,698
How to check code generated by the C++ compiler?
Just like in the topic: is there any software to open and inspect the code the compiler generates? And what would I even open: the file with the object code, or the exe? My today's questions (if only today's ;)) may seem a bit odd, but I'm going through the exercises in "The C++ Programming Language" by B.S. and sometimes I'm just stuck on a particular question. I'm sometimes a bit irritated by the style of this book (excellent in many aspects): he (B.S.) asks some questions whose answers you won't find in the book, without even a hint of how to do it or where to start. Like this one, for example:

"Run some tests to see if your compiler really generates equivalent code for iteration using pointers and iteration using indexing. If different degrees of optimization can be requested, see if and how that affects the quality of the generated code."

That's from chapter 5, question 8. Up to this point the book never even mentions testing and analyzing code generated by the compiler. Anyway, if someone could help me with this, I'll be grateful. Thank you.
The debugger will help you. Most debuggers let you halt the program and look at the disassembly. The nice thing is they point you right at the disassembly of the line you set the breakpoint on, not just at the entire compilation result. Once in a while I do that in Visual Studio: compile the program, put a breakpoint at the beginning of the code of interest, start the program, and when it halts, open the disassembly and immediately see the code corresponding to that C++ code.
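Since the exercise targets GCC, another route is to ask the compiler for the assembly directly: compile with g++ -S -O2 (the file name below is made up) and compare the output for two semantically identical functions, for example:

```cpp
// Compile with:  g++ -S -O2 sum.cpp   (hypothetical file name)
// then compare the assembly emitted for the two functions.

// Iteration using indexing.
long sum_indexed(const int* a, int n) {
    long s = 0;
    for (int i = 0; i < n; ++i)
        s += a[i];
    return s;
}

// Iteration using a pointer.
long sum_pointer(const int* a, int n) {
    long s = 0;
    for (const int* p = a; p != a + n; ++p)
        s += *p;
    return s;
}
```

Repeating this at -O0, -O1, -O2, etc. answers the second half of the exercise about optimization levels.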
1,825,720
1,825,929
C++ high precision time measurement in Windows
I'm interested in measuring a specific point in time down to the nanosecond using C++ on Windows. Is this possible? If it isn't, is it possible to get the specific time in microseconds at least? Any library should do, though I suppose it's also possible with managed code. Thanks.
If you have a threaded application running on a multicore computer, QueryPerformanceCounter can (and will) return different values depending on which core the code is executing on. See this MSDN article. (rdtsc has the same problem.) This is not just a theoretical problem; we ran into it with our application and had to conclude that the only reliable time source is timeGetTime, which only has ms precision (fortunately sufficient in our case). We also tried fixing the thread affinity for our threads to guarantee that each thread always got a consistent value from QueryPerformanceCounter; this worked, but it absolutely killed the performance of the application. To sum things up: there isn't a reliable timer on Windows that can be used to time things with microsecond precision (at least not when running on a multicore computer).
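For later readers: since C++11 (which postdates this answer), the standard library provides a portable monotonic clock, std::chrono::steady_clock, which on current Windows implementations is backed by QueryPerformanceCounter and is consistent across cores on modern hardware. A small sketch (the helper name is illustrative):

```cpp
#include <chrono>

// Time a callable and return the elapsed wall time in nanoseconds.
// steady_clock is monotonic: it never jumps backwards.
long long elapsed_ns(void (*work)()) {
    auto t0 = std::chrono::steady_clock::now();
    work();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
}
```

Note that nanosecond units in the interface do not guarantee nanosecond resolution of the underlying counter; check steady_clock::period for the actual tick size.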
1,825,794
1,825,813
Passing pointer argument by reference under C?
#include <stdio.h>
#include <stdlib.h>

void getstr(char *&retstr)
{
    char *tmp = (char *)malloc(25);
    strcpy(tmp, "hello,world");
    retstr = tmp;
}

int main(void)
{
    char *retstr;
    getstr(retstr);
    printf("%s\n", retstr);
    return 0;
}

gcc would not compile this file, but after adding #include <cstring> I could use g++ to compile this source file. The problem is: does the C programming language support passing a pointer argument by reference? If not, why not? Thanks.
No, C doesn't support references; that is by design. Instead of a reference you can use a pointer to a pointer in C. References are available only in the C++ language.
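The question's getstr rewritten with the pointer-to-pointer idiom might look like this (it compiles as C too, with <stdlib.h>/<string.h> and without the cast and std:: qualifiers):

```cpp
#include <cstdlib>
#include <cstring>

// C-style equivalent of the reference version: pass the address of the
// caller's pointer and write through the extra level of indirection.
void getstr(char** retstr) {
    char* tmp = (char*)std::malloc(25);
    std::strcpy(tmp, "hello,world");
    *retstr = tmp;  // updates the caller's pointer
}
```

The caller then writes getstr(&retstr) instead of getstr(retstr), which is the visible syntactic cost of not having references.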
1,825,868
1,825,930
How to prevent window resizing temporarily?
I have a window which can be resized, but there are some situations when resizing is not possible because of the application state. Is there a way to prevent resizing the window temporarily? I want to disable resizing by all means available to the users, which include window menu, dragging edges by mouse, user initiated window tiling performed by OS - and perhaps some other I am not aware of?
One way is to use GetWindowLong() with the GWL_STYLE flag to get the window style, and reset/remove any styles you need, e.g. the WS_THICKFRAME style so that the window can't be resized. You then apply the new style with SetWindowLong().
1,825,964
1,825,996
C/C++ maximum stack size of program on mainstream OSes
I want to do DFS on a 100 x 100 array. (Say the elements of the array represent graph nodes.) Assuming the worst case, the depth of recursive function calls can go up to 10,000, with each call taking up to, say, 20 bytes. So is it feasible, i.e. is there a possibility of stack overflow? What is the maximum size of the stack in C/C++? Please specify for gcc for both 1) cygwin on Windows and 2) Unix. What are the general limits?
In Visual Studio the default stack size is 1 MB, I think, so with a recursion depth of 10,000 each stack frame can be at most ~100 bytes, which should be sufficient for a DFS algorithm. Most compilers, including Visual Studio, let you specify the stack size. On some (all?) Linux flavours the stack size isn't part of the executable but an environment setting in the OS. You can check the stack size with ulimit -s and set it to a new value with, for example, ulimit -s 16384. Here's a link with default stack sizes for gcc.

DFS without recursion (pseudocode):

std::stack<Node> dfs;
dfs.push(start);
do {
    Node top = dfs.top();
    if (top is what we are looking for) {
        break;
    }
    dfs.pop();
    for (outgoing nodes from top) {
        dfs.push(outgoing node);
    }
} while (!dfs.empty());
1,826,159
1,826,175
Swapping two variables' values without using a third variable
One of the very tricky questions asked in an interview: swap the values of two variables, e.g. a = 10 and b = 15. Generally, to swap two variables' values, we need a third variable, like:

temp = a;
a = b;
b = temp;

Now the requirement is to swap the values of the two variables without using a third variable.
Using the XOR swap algorithm:

void xorSwap (int* x, int* y)
{
    if (x != y) { // ensure that memory locations are different
        *x ^= *y;
        *y ^= *x;
        *x ^= *y;
    }
}

Why the test?

The test is to ensure that x and y have different memory locations (rather than different values). This is because (p xor p) = 0, and if both x and y share the same memory location, when one is set to 0, both are set to 0. When both *x and *y are 0, all other xor operations on *x and *y will equal 0 (as they are the same), which means that the function would set both *x and *y to 0. If they have the same values but not the same memory location, everything works as expected:

*x = 0011
*y = 0011  // Note, x and y do not share an address: x != y
*x = *x xor *y  // *x = 0011 xor 0011, so *x is 0000
*y = *x xor *y  // *y = 0000 xor 0011, so *y is 0011
*x = *x xor *y  // *x = 0000 xor 0011, so *x is 0011

Should this be used?

In general cases, no. The compiler will optimize away the temporary variable, and given that swapping is a common procedure, it should output the optimum machine code for your platform. Take for example this quick test program written in C:

#include <stdlib.h>
#include <math.h>

#define USE_XOR

void xorSwap(int* x, int *y)
{
    if (x != y) {
        *x ^= *y;
        *y ^= *x;
        *x ^= *y;
    }
}

void tempSwap(int* x, int* y)
{
    int t;
    t = *y;
    *y = *x;
    *x = t;
}

int main(int argc, char* argv[])
{
    int x = 4;
    int y = 5;
    int z = pow(2,28);
    while (z--) {
#ifdef USE_XOR
        xorSwap(&x, &y);
#else
        tempSwap(&x, &y);
#endif
    }
    return x + y;
}

Compiled using: gcc -Os main.c -o swap

The xor version takes:

real 0m2.068s
user 0m2.048s
sys  0m0.000s

whereas the version with the temporary variable takes:

real 0m0.543s
user 0m0.540s
sys  0m0.000s
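For completeness, the other temp-free variant sometimes given in interviews uses addition and subtraction. A sketch (not from the answer above): it is written for unsigned integers on purpose, because unsigned wrap-around is well defined, whereas the intermediate sum can overflow a signed int, which is undefined behaviour.

```cpp
// Add/subtract swap; same aliasing caveat as the XOR version.
void addSwap(unsigned* x, unsigned* y) {
    if (x != y) {        // guard against x and y being the same object
        *x = *x + *y;    // may wrap, which is fine for unsigned
        *y = *x - *y;    // now holds the original *x
        *x = *x - *y;    // now holds the original *y
    }
}
```

The same "should this be used?" verdict applies: a plain temporary (or std::swap) is clearer and at least as fast.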
1,826,165
1,826,361
WM_ENTERSIZEMOVE / WM_EXITSIZEMOVE - when using menu, not always paired
To prevent my application from changing the window content while the user is moving its window around, I capture the messages WM_ENTERSIZEMOVE / WM_EXITSIZEMOVE and pause the application between the messages. However, sometimes I receive WM_ENTERSIZEMOVE but no WM_EXITSIZEMOVE at all. One repro is: open the window menu, click on Size, then do not resize the window but rather click into the window. Notice the window never receives any WM_EXITSIZEMOVE. While checking how this works, I also checked a Microsoft DirectX sample and noticed the same problem. Once you follow the repro steps above, the sample application looks frozen (I have tried it just now with the BasicHLSL sample from the March 2009 SDK). How is the application expected to respond to this? Are there some other conditions which should terminate the "moving or sizing modal loop"?
As a temporary workaround, I now un-pause the application whenever I receive a WM_ACTIVATE message. This seems to have kind of solved this particular case (you can recover the application by activating it again) and did not seem to break anything. This solution smells to me, though. I would rather understand how it should work than rely on limited testing only.
1,826,172
1,826,188
C++ type casting vector class
I have two vector classes:

typedef struct D3DXVECTOR3 {
    FLOAT x;
    FLOAT y;
    FLOAT z;
} D3DXVECTOR3, *LPD3DXVECTOR3;

and

class MyVector3 {
    FLOAT x;
    FLOAT y;
    FLOAT z;
};

and a function:

void function(D3DXVECTOR3* Vector);

How is it possible (if it's possible) to achieve something like this:

MyVector3 vTest;
function(&vTest);
function(reinterpret_cast<D3DXVECTOR3*>(&vTest));

Generally speaking, you should avoid reinterpret_cast, though: it silences the compiler rather than proving the layouts match, and accessing an object of one type through a pointer to the other formally violates the strict aliasing rules, even when the members line up as they do here.
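A safer alternative is to convert by copying the three fields instead of reinterpreting the object; for a three-float struct the copy is essentially free. A sketch (the struct below stands in for the real DirectX type, and the toD3D helper is made up):

```cpp
// Stand-in for the DirectX type from the question.
struct D3DXVECTOR3 {
    float x, y, z;
};

struct MyVector3 {
    float x, y, z;

    // Hypothetical conversion helper: build a D3DXVECTOR3 by value.
    D3DXVECTOR3 toD3D() const {
        D3DXVECTOR3 v = { x, y, z };
        return v;
    }
};
```

The call site then becomes: D3DXVECTOR3 tmp = vTest.toD3D(); function(&tmp); which stays entirely within the type system.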
1,826,203
1,826,373
Swapping addresses of pointers in C++
How can one swap pointer addresses within a function with this signature? Let's say:

int weight, height;
void swap(int* a, int* b);

So that after returning from this function, the addresses of the actual parameters (weight and height) would be changed. Is it possible at all?
If you want to swap the addresses that the pointers are pointing to, not just the values stored at those addresses, you'll need to pass the pointers by reference (or pointer to pointer):

#include <cassert>

void swap(int*& a, int*& b)
{
    int* c = a;
    a = b;
    b = c;
}

int main()
{
    int a, b;
    int* pa = &a;
    int* pb = &b;
    swap(pa, pb);
    assert(pa == &b); // pa now stores the address of b
    assert(pb == &a); // pb now stores the address of a
}

Or you can use the STL swap function and pass it the pointers:

#include <algorithm>
std::swap(pa, pb);

Your question doesn't seem very clear, though.
1,826,464
1,826,505
C-Style Strings as template arguments?
Can C-Style strings be used as template arguments? I tried: template <char *str> struct X { const char *GetString() const { return str; } }; int main() { X<"String"> x; cout<<x.GetString(); } And although I get no complaints about the class definition, the instantiation yields 'X' : invalid expression as a template argument for 'str' (VC).
A string literal cannot be used as a template argument. Update: Nowadays, a few years after this question was asked and answered, it is possible to use string literals as template arguments. With C++11, we can use character packs as template arguments (template<char ...c>) and it is possible to pass a literal string to such a template. This would work, however: template <char const *str> struct X { const char *GetString() const { return str; } }; char global_string[] = "String"; int main() { X<global_string> x; cout<<x.GetString(); }
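A hedged sketch of the C++11 character-pack approach mentioned in the update. Note that in standard C++11 the characters still have to be spelled out individually (passing a literal like "String" directly requires compiler extensions or macro tricks):

```cpp
#include <cassert>
#include <cstring>

// C++11: individual characters can be non-type template arguments even
// though a string literal cannot. The pack is expanded back into an array.
template <char... Cs>
struct X {
    const char* GetString() const {
        static const char str[] = {Cs..., '\0'};  // pack expanded into a string
        return str;
    }
};
```

Usage: X<'H','i'> x; x.GetString() yields "Hi".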
1,826,534
1,826,644
How to input realtime data to do realtime process in c/c++
I am doing a project about processing realtime data from a cyber glove (Virtual Hand). I need to write an application that gets realtime data from the glove and feeds it to some algorithm. I don't know how to deal with processing realtime data; does anybody have some resources?
I'm pretty sure the cyber glove you're using comes with an SDK as well as examples on how to get the data from the device. From there, I'm afraid we can't tell you much. I see you tagged your question with "recognition" but what are you trying to recognize exactly? Recognizing gestures would typically mean analyzing a trajectory in 3d space. I've never worked with such a glove but I can imagine it streams a sequence of data the same way a Wacom tablet would stream a sequence of (x,y, pressure) and eventually proximity and pen tilt data. So, you will need to extract pertinent features out of this raw data in order to form what's commonly called a "feature vector". For instance you could resample the data using an interpolation scheme to end up with n tuples, each tuple containing information such as: position orientation velocity acceleration curvature etc You will have to experiment in order to decide which features are best for the problem you're trying to solve. Once you are able to convert a raw 3d trajectory into a normalized feature vector you will need to make a decision about the method you want to use, for instance: example based analysis with a DTW (Dynamic Time Warping) approach neural network training support vector machines there are many!!! Unfortunately, pattern recognition is a vast subject and I can't tell everything about it in such a short answer. It's now up to you to study the literature. Good luck.
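The resampling step mentioned above can be sketched roughly as follows (hypothetical code: the Sample struct and the choice of linear interpolation are illustrative assumptions, since the glove's real data format is unknown):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical glove sample: just a 3D position here. A real feature vector
// would also carry orientation, velocity, acceleration, etc. as listed above.
struct Sample { double x, y, z; };

// Resamples a raw trajectory to exactly n evenly spaced samples using linear
// interpolation, a common normalization step before feature extraction.
std::vector<Sample> resample(const std::vector<Sample>& raw, std::size_t n) {
    if (raw.size() < 2 || n < 2) return raw;
    std::vector<Sample> out;
    for (std::size_t i = 0; i < n; ++i) {
        // Fractional position of sample i along the raw trajectory.
        double pos = static_cast<double>(i) * (raw.size() - 1) / (n - 1);
        std::size_t lo = static_cast<std::size_t>(pos);
        std::size_t hi = (lo + 1 < raw.size()) ? lo + 1 : lo;
        double t = pos - lo;  // interpolation weight between lo and hi
        out.push_back({raw[lo].x + t * (raw[hi].x - raw[lo].x),
                       raw[lo].y + t * (raw[hi].y - raw[lo].y),
                       raw[lo].z + t * (raw[hi].z - raw[lo].z)});
    }
    return out;
}
```

The resampled tuples would then be fed into whichever classifier (DTW, neural network, SVM) you settle on.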
1,826,885
1,826,916
Visual Studio 2008 C++ debugger drops out of single-step mode under Vista
I have a fairly large C++ project, and am trying to use the debugger to step through some code. Unfortunately, it sometimes decides to drop out of that mode, and just execute the code without paying attention to the fact that I pressed F10, and not breaking at subsequent breakpoints. I don't know when it's going to drop out, but it seems to do so consistently when it does. To be specific, I'm trying to see how a certain element of a display is calculated. I put breakpoints where the calculation occurs. The debugger will stop at a few of them (not getting as far as I need), and then disregard all the rest, and the finished image appears on the window. Some other times, I've been single-stepping through code, and suddenly it simply starts executing. This is using Visual C++ in Visual Studio 2008 SP1, running on 64-bit Vista. The code is compiled in Debug mode, with no optimizations enabled. I have done a clean and complete rebuild without fixing this. Does anybody know what could be causing this? Is there anything I can do about it? Edit: There are no threads involved where I lost the breaks, and I just installed this recommended fix and still have the problem.
I think I've encountered this before. You can download hotfixes which will correct this and other issues (available here): http://code.msdn.microsoft.com/Project/ProjectDirectory.aspx?TagName=Visual%20Studio%202008,Hotfix I installed a bunch and have not since had the problem. After installing, you can see them listed in your About box. I won't take the time to pick out the most important ones, since I'm at work.. but there are probably at least 3 or 4 which you'll want to install. I think some of the hotfixes may have been grouped into an SP1 of some sort at some time (which is also available in the link). Edit (in response to an edit in the original post): I am certain that a hotfix addresses an issue resembling what you've described, since installing a bunch of hotfixes addressed the complaints that a bunch of my coworkers were making (the main complaint was along the lines of "it sometimes ignores my breakpoints and keeps running right past them!"). I recommend that you keep installing whatever could apply.
1,826,901
1,834,333
Should I add .vcxproj.filter files to source control?
While evaluating Visual Studio 2010 Beta 2, I see that in the converted directory, my vcproj files became vcxproj files. There are also vcxproj.filter files alongside each project which appear to contain a description of the folder structure (\Source Files, \Header Files, etc.). Do you think these filter files should be kept per-user, or should they be shared across the whole dev group and checked into SCC? My current thinking is to check them in, but I wonder if there are any reasons not to do that, or perhaps good reasons why I should definitely check them in. The obvious benefit is that the folder structures will match if I'm looking at someone else's machine, but maybe they'd like to reorganize things logically?
Previous versions of Visual Studio (at least versions 6.0 and 2008) store that information in their own project file (.dsp and .vcproj files respectively), which of course is good to add to SCC. I cannot think of any reason not to include these .filter files in SCC
1,826,902
1,826,920
How to avoid memory leak with shared_ptr?
Consider the following code. using boost::shared_ptr; struct B; struct A{ ~A() { std::cout << "~A" << std::endl; } shared_ptr<B> b; }; struct B { ~B() { std::cout << "~B" << std::endl; } shared_ptr<A> a; }; int main() { shared_ptr<A> a (new A); shared_ptr<B> b (new B); a->b = b; b->a = a; return 0; } There is no output. No destructor is called. Memory leak. I have always believed that the smart pointer helps avoid memory leaks. What should I do if I need cross-references in the classes?
If you have circular references like this, one object should hold a weak_ptr to the other, not a shared_ptr. From the shared_ptr introduction: Because the implementation uses reference counting, cycles of shared_ptr instances will not be reclaimed. For example, if main() holds a shared_ptr to A, which directly or indirectly holds a shared_ptr back to A, A's use count will be 2. Destruction of the original shared_ptr will leave A dangling with a use count of 1. Use weak_ptr to "break cycles." Thanks, Glen, for the link.
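A minimal sketch of the fix, using std::shared_ptr/std::weak_ptr (C++11) for the sake of a self-contained example; boost::weak_ptr works identically. The destruction counters are added here only to make the cleanup observable:

```cpp
#include <cassert>
#include <memory>

// Same structure as in the question, but B holds a weak_ptr back to A,
// which breaks the strong-reference cycle.
struct B;
struct A {
    std::shared_ptr<B> b;
    ~A() { ++destroyed; }
    static int destroyed;
};
struct B {
    std::weak_ptr<A> a;   // weak: does not contribute to A's use count
    ~B() { ++destroyed; }
    static int destroyed;
};
int A::destroyed = 0;
int B::destroyed = 0;

// Builds the cross-referencing pair in a scope and lets it go out of scope.
inline void demo() {
    auto a = std::make_shared<A>();
    auto b = std::make_shared<B>();
    a->b = b;
    b->a = a;   // weak reference, so no cycle of strong counts
}   // both objects are destroyed here
```

When B needs to actually use the A it points to, it calls b->a.lock(), which yields a shared_ptr that is empty if A has already been destroyed.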
1,826,934
1,827,049
Copy Constructor Needed with temp object
The following code only works when the copy constructor is available. When I add print statements (via std::cout) and make the copy constructor available it is not used (I assume there is some compiler trick happening to remove the unnecessary copy). But in both the output operator << and the function plop() below (where I create a temporary object) I don't see the need for the copy constructor. Can somebody explain why the language needs it when I am passing everything by const reference (or what I am doing wrong). #include <iostream> class N { public: N(int) {} private: N(N const&); }; std::ostream& operator<<(std::ostream& str,N const& data) { return str << "N\n"; } void plop(std::ostream& str,N const& data) { str << "N\n"; } int main() { std::cout << N(1); // Needs copy constructor (line 25) plop(std::cout,N(1)); // Needs copy constructor N a(5); std::cout << a; plop(std::cout,a); } Compiler: [Alpha:~/X] myork% g++ -v Using built-in specs. Target: i686-apple-darwin10 Configured with: /var/tmp/gcc/gcc-5646~6/src/configure --disable-checking --enable-werror --prefix=/usr --mandir=/share/man --enable-languages=c,objc,c++,obj-c++ --program-transform-name=/^[cg][^.-]*$/s/$/-4.2/ --with-slibdir=/usr/lib --build=i686-apple-darwin10 --with-gxx-include-dir=/include/c++/4.2.1 --program-prefix=i686-apple-darwin10- --host=x86_64-apple-darwin10 --target=i686-apple-darwin10 Thread model: posix gcc version 4.2.1 (Apple Inc. build 5646) [Alpha:~/X] myork% g++ t.cpp t.cpp: In function ‘int main()’: t.cpp:10: error: ‘N::N(const N&)’ is private t.cpp:25: error: within this context t.cpp:10: error: ‘N::N(const N&)’ is private t.cpp:26: error: within this context This is a simplified version of some real code. In the real code I have a class that contains a std::auto_ptr.
This means that a copy constructor that takes a const reference is not valid (without some work) and I was getting an error indicating that the copy constructor was not available because of it. Change the class to: class N { public: N(int) {} private: std::auto_ptr<int> data; }; The error is then: t.cpp:25: error: no matching function for call to ‘N::N(N)’
From http://gcc.gnu.org/gcc-3.4/changes.html When binding an rvalue of class type to a reference, the copy constructor of the class must be accessible. For instance, consider the following code: class A { public: A(); private: A(const A&); // private copy ctor }; A makeA(void); void foo(const A&); void bar(void) { foo(A()); // error, copy ctor is not accessible foo(makeA()); // error, copy ctor is not accessible A a1; foo(a1); // OK, a1 is a lvalue } This might be surprising at first sight, especially since most popular compilers do not correctly implement this rule (further details). This will be fixed in C++1x by Core Issue 391.
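Since the answer mentions that Core Issue 391 fixes this in "C++1x": under C++11 rules, binding a temporary to a const reference no longer requires an accessible copy constructor. A minimal sketch (using C++11's = delete to make the point explicit):

```cpp
#include <cassert>

// Under the C++98 rule quoted above, passing A(42) to read() requires an
// accessible copy constructor. In C++11 it does not, so this compiles even
// though the copy constructor is deleted outright.
struct A {
    int v;
    explicit A(int v_) : v(v_) {}
    A(const A&) = delete;   // no copy constructor at all
};

inline int read(const A& a) { return a.v; }

// Well-formed in C++11: the temporary binds directly to the const reference.
inline int demo() { return read(A(42)); }
```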
1,827,265
1,873,408
error C2027 and error C2227
I get error C2027: use of undefined type 'Bridge' and error C2227: left of '->receive' must point to class/struct/union/generic type on line *connection1->receive(newMessage,2); #include <stdio.h> #include <stdlib.h> #include <iostream> #define MAXHOST 10 #define MAXPORT 5 #define MAXLAN 8 #define MAXBRIDGE 5 #define MAXLANBRIDGECON 2 using namespace std; class Bridge; class Host; class Message; class Simulation; class Lan; class Message{ ////////////// }; class Host{ Lan * lan1; int id; int nextMessageTime; public: ///////// }; class Lan{ Bridge *connection1, *connection2; int bridgeConnection; Host hostList[MAXHOST]; int id; int hostCount; public: void connect(Bridge * const newBridge) { if(bridgeConnection==0) { connection1 = newBridge; } if(bridgeConnection==1) { connection2 = newBridge; } bridgeConnection++; } void receive(Message newMessage){ *connection1->receive(newMessage,2); } }; class Bridge{ ///////////////////// }; void main(){ Simulation newSim; newSim.create(); return; }
All the previous posts are right: forward declaring is used to prevent circular includes in header files. The Bridge class is forward declared so that you can declare pointers of that type within your class definition of Lan. Since pointers all have the same size, this is OK. When it comes to using this class, the compiler has to know more about the Bridge class, at least its size. But there is no information other than that there is a class named Bridge. The solution would be either to include the header where Bridge is defined (and delete the class Bridge definition if you do this), or to move the implementations of Lan::connect() and Lan::receive() into their own implementation file Lan.cpp and include the Bridge header there, which is probably the cleaner solution.
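A minimal sketch of the pattern (collapsed into one translation unit here; normally the pieces would live in Lan.h, Bridge.h and Lan.cpp, and the behavior of Bridge::receive is a made-up placeholder):

```cpp
#include <cassert>

class Bridge;                 // forward declaration: declaring pointers is fine

class Lan {
    Bridge* connection1 = nullptr;
public:
    void connect(Bridge* b) { connection1 = b; }  // only stores the pointer
    int receive(int value);   // body needs the full Bridge definition, so
                              // it is only declared here
};

// Full definition (normally in Bridge.h, included by Lan.cpp).
class Bridge {
public:
    int receive(int value) { return value * 2; }  // hypothetical behavior
};

// Out-of-line definition: Bridge is complete here, so -> compiles.
int Lan::receive(int value) { return connection1->receive(value); }
```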
1,827,406
1,827,545
How much does the order of case labels affect the efficiency of switch statements?
Consider: if (condition1) { // Code block 1 } else { // Code block 2 } If I know that condition1 will be true the majority of the time, then I should code the logic as written, instead of: if (!condition1) { // Code block 2 } else { // Code block 1 } since I will avoid the penalty of the jump to the second code block (note: I have limited knowledge of assembly language). Does this idea carry forward to switch statements and case labels? switch (myCaseValue) { case Case1: // Code block 1 break; case Case2: // Code block 2 break; // etc. } If I know that one of the cases will happen more often, can I rearrange the order of the case labels so that it's more efficient? Should I? In my code I've been ordering the case labels alphabetically for code readability without really thinking about it. Is this micro-optimization?
Some facts for modern hardware like x86 or x86_64: An unconditionally taken branch has almost no additional costs, besides the decoding. If you want a number, it's about a quarter clock cycle. A conditional branch, which was correctly predicted, has almost no additional costs. A conditional branch, which was not correctly predicted, has a penalty equal to the length of the processor pipelines; this is about 12-20 clocks, depending on the hardware. The prediction mechanisms are very sophisticated. Loops with a low number of iterations (on Core 2 for example up to 64) can be perfectly predicted. Small repeating patterns like "taken-taken-nottaken-taken" can be predicted, if they are not too long (IIRC 6 on Core2). You can read more about branch prediction in Agner Fog's excellent manual. Switch statements are usually replaced by a jump table by the compiler. In most cases the order of cases won't make a difference at all. There are prediction mechanisms for indirect jumps as well. So the question isn't if your jumps are more likely to be taken, it is if they are well predictable, at least for the hardware you intend to run your code on. This isn't an easy question at all. But if you have branches depending on a random (or pseudo random) condition, you could try to reformulate it as a branchless statement if possible.
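A small sketch of what "reformulate it as a branchless statement" can look like (illustrative only; modern compilers frequently perform this transformation themselves, e.g. via cmov):

```cpp
#include <cassert>

// Branchy version: a mispredicted condition costs a pipeline flush.
inline int clamp_branchy(int x, int limit) {
    if (x > limit) return limit;
    return x;
}

// Branchless reformulation of the same logic: the comparison becomes a 0/1
// integer that selects the result arithmetically, leaving nothing for the
// branch predictor to get wrong.
inline int clamp_branchless(int x, int limit) {
    int take_limit = (x > limit);                   // 0 or 1
    return take_limit * limit + (1 - take_limit) * x;
}
```

As always with micro-optimization, measure before and after; the branchless form can be slower when the branch was well predicted anyway.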
1,827,477
1,827,552
How can I access my class instance from a boost thread?
I have the following code (this is some semi-pseudo code, which may not compile): class FooBar { public: void a(); void b(); boost::shared_ptr<boost::thread> m_thread; std::string m_test; }; void FooBar::a() { m_test = "Foo bar" m_thread = shared_ptr<thread>(new thread(bind(&FooBar::b, this))); } void FooBar::b() { cout << m_test; } The code cout << test does not yield any output, because m_test is "" instead of "Foo bar". Why is this? I thought that passing this as the 2nd argument to bind would allow me to access the same instance from b() - am I incorrect?
Yes, that works. Here's the "real" version, which does in fact print "Foo bar": #include <boost/make_shared.hpp> #include <boost/thread.hpp> #include <boost/bind.hpp> using namespace boost; struct FooBar { void a(); void b(); shared_ptr<thread> m_thread; std::string m_test; }; void FooBar::a() { m_test = "Foo bar"; m_thread = make_shared<thread>(bind(&FooBar::b, this)); } void FooBar::b() { std::cout << m_test; } int main() { FooBar fb; fb.a(); fb.m_thread->join(); return 0; } The code cout << test does not yield any output, because m_test is "" I suspect this is because the object was being destroyed before the thread got around to evaluating the member variable. Note the join(), it's very important.
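For completeness, the same pattern with std::thread (C++11) as a hedged sketch; std::bind is not even needed, since the thread constructor takes the member-function pointer and `this` directly. The m_result member is added here only so the outcome can be checked after join():

```cpp
#include <memory>
#include <string>
#include <thread>

struct FooBar {
    std::shared_ptr<std::thread> m_thread;
    std::string m_test;
    std::string m_result;   // added for observability, not in the original

    void a() {
        m_test = "Foo bar";  // set BEFORE the thread starts
        // No bind needed: pass the member pointer and this directly.
        m_thread = std::make_shared<std::thread>(&FooBar::b, this);
    }
    void b() { m_result = m_test; }
};

inline std::string demo() {
    FooBar fb;
    fb.a();
    fb.m_thread->join();   // as noted above: join before fb is destroyed
    return fb.m_result;
}
```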
1,827,522
1,827,695
Default encoding for variant bstr to std::string conversion
I have a variant bstr that was pulled from MSXML DOM, so it is in UTF-16. I'm trying to figure out what default encoding occurs with this conversion: VARIANT vtNodeValue; pNode->get_nodeValue(&vtNodeValue); string strValue = (char*)_bstr_t(vtNodeValue); From testing, I believe that the default encoding is either Windows-1252 or Ascii, but am not sure. Btw, this is the chunk of code that I am fixing and converting the variant to a wstring and going to a multi-byte encoding with a call to WideCharToMultiByte. Thanks!
The operator char* method calls _com_util::ConvertBSTRToString(). The documentation is pretty unhelpful, but I assume it uses the current locale settings to do the conversion. Update: Internally, _com_util::ConvertBSTRToString() calls WideCharToMultiByte, passing zero for all the code-page and default character parameters. This is the same as passing CP_ACP, which means to use the system's current ANSI code-page setting (not the current thread setting). If you want to avoid losing data, you should probably call WideCharToMultiByte directly and use CP_UTF8. You can still treat the string as a null-terminated single-byte string and use std::string, you just can't treat bytes as characters.
1,827,705
1,852,359
C++ Buildsystem with ability to compile dependencies beforehand
I'm in the middle of setting up a build environment for a C++ game project. Our main requirement is the ability to build not just our game code, but also its dependencies (Ogre3D, Cegui, boost, etc.). Furthermore we would like to be able to build on Linux as well as on Windows, as our development team consists of members using different operating systems. Ogre3D uses CMake as its build tool. This is why we based our project on CMake too so far. We can compile perfectly fine once all dependencies are set up manually on each team member's system, as CMake is able to find the libraries. The question is if there is a feasible way to get the dependencies set up automatically. As a Java developer I know of Maven, but what tools exist in the world of C++? Update: Thanks for the nice answers and links. Over the next few days I will be trying out some of the tools to see what meets our requirements, starting with CMake. I've indeed had my share with autotools so far and as much as I like the documentation (the autobook is a very good read), I fear autotools are not meant to be used on Windows natively. Some of you suggested to let some IDE handle the dependency management. We consist of individuals using all possible technologies to code from pure Vim to fully blown Eclipse CDT or Visual Studio. This is where CMake allows use some flexibility with its ability to generate native project files.
In the latest CMake 2.8 version there is the new ExternalProject module. This allows you to download/check out code, configure and build it as part of your main build tree. It should also allow you to set dependencies. At my work (medical image processing group) we use CMake to build all our own libraries and applications. We have an in-house tool to track all the dependencies between projects (defined in a XML database). Most of the third party libraries (like Boost, Qt, VTK, ITK etc..) are built once for each system we support (MSWin32, MSWin64, Linux32 etc..) and are committed as zip files in the version control system. CMake will then extract and configure the correct zip file depending on which system the developer is working on.
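A hypothetical sketch of the ExternalProject module in use (the target name and URL are placeholders, not real project locations):

```cmake
# Requires CMake >= 2.8: fetch, build and install a third-party dependency
# as part of the main build tree.
include(ExternalProject)

ExternalProject_Add(ogre3d                      # name is our choice
  URL https://example.com/ogre3d-src.tar.gz     # placeholder URL
  CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=${CMAKE_BINARY_DIR}/deps
)

# Make our own target wait until the dependency has been built.
add_executable(game main.cpp)
add_dependencies(game ogre3d)
```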
1,827,858
1,828,049
How to mitigate class declaration being far from its owner namespace declaration in a file?
So, I've seen how useful namespaces can be to organize declarations into their respective groups, but now comes an issue with this. The difference between making a library in C and a library in C++ is in C you must prefix your declarations with what they belong to, for example a library we'll dub MyMath might have a vector class, well the name might be MM_Vector. In C++, you would have a namespace MyMath with a Vector class declared as a part of it. Now the difference here is in C, just by going to the class declaration you immediately know how to use it. In C++, you would have to check which namespace a particular class belongs to (really only a problem in files where the declaration isn't near the namespace declaration, which can be common if there are constants and enumerations declared between the two). While I prefer using a namespace for organization, in my opinion this is still a valid argument as an annoyance. What have people done to reduce this annoyance?
In chapter 8 of his book, Stroustrup recommends a style such as the following: MyMath.h namespace MyMath { class Vector; }; Vector.h #include "MyMath.h" class MyMath::Vector { public: Vector(); // ... }; Vector.cc #include "Vector.h" MyMath::Vector::Vector() { /* ... */ } Limiting open namespace-declarations to declarations of their contents produces brief summaries. Fully-qualified definitions allow the compiler to catch typos. As applied to your concern, class declarations and definitions in this style make plain the parent namespace of each class—at the expense of what the Go folks call stuttering.
1,828,009
1,828,092
Can you Hide a virtual method in c++?
I have a base class with a virtual function. virtual CString& Foo(); I want to overload this in a subclass like so: CString Foo(); Is there a way to hide the base class's virtual function? Something like the new keyword in VB.NET or C#
Why would anyone do something like that? It breaks the base class contract. If you don't want to implement a subclass that has the same interface as the base class, why do you inherit at all? Use composition. There is no equivalent of the C# new keyword in C++, so you cannot cancel a method's 'virtualness'. If you really want to do this you can always: override a method in the subclass as private. create an overload. But the overload has to have different parameters. But if you do this, IMHO something is wrong with your design. I wish each C++ compiler caught both of these situations at least as warnings.
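A minimal sketch (hypothetical code, not from the post) of why overriding as private does not really hide anything: access control is checked statically against the pointer's type, while dispatch is dynamic, so the "private" override is still reachable through a base pointer:

```cpp
#include <cassert>
#include <string>

class Base {
public:
    virtual ~Base() {}
    virtual std::string foo() { return "base"; }
};

class Derived : public Base {
private:
    // Private override: d.foo() would not compile, but the virtual call
    // through a Base* still lands here.
    std::string foo() override { return "derived"; }
};

inline std::string call_through_base() {
    Derived d;
    Base* b = &d;
    return b->foo();   // dynamic dispatch ignores the access specifier
}
```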
1,828,021
1,828,183
Storing variable sized strings in structures
I'm reading a file in C++ using streams, specifically, fstream, not ifstream. blah blah blah\n blah blah\n blah blah blah blah \n end This repeats over and over with a variable number of blah's in each line, constant number of lines between each end, end is the delimiter here I want to read one set of data, then store it in a character array, in a C style structure. I started by trying to use getline() but the delimiter can only be one character, not three. I obviously can't try to read a set number of bytes using just read(), as the number will be different for each set. So I'm torn over what the easiest (and most robust) thing to do here is. Should I call getline until I find an 'end' string, while appending each string over and over? I tried a 2D char array but copying to it was kind of a pain. Can I use strncpy here? I don't think this worked char buf[10][10]; strncpy(buf[1], "blah blah",10); I have a few ideas here, but I'm just not sure which one (or the one I haven't thought of) is the best. EDIT: So this is for a networking application, so the size of the char array (or string) should always be the same. Also, there should be no pointers in the structure. Related question: is the way that a char array and a std::string are stored in memory the same? I always thought there was some overhead with std::string.
(My push_back utility described at the bottom.) typedef std::vector<std::string> Block; int main() { using namespace std; vector<Block> blocks; string const end = "end"; // no real difference from using ifstream, btw for (fstream file ("filename", file.in); file;) { Block& block = push_back(blocks); for (string line; getline(file, line);) { if (line == end) { break; } push_back(block).swap(line); } if (!file && block.empty()) { // no lines read, block is a dummy not represented in the file blocks.pop_back(); } } return 0; } Example serialization: template<class OutIter> void bencode_block(Block const& block, OutIter dest) { int len = 0; for (Block::const_iterator i = block.begin(); i != block.end(); ++i) { len += i->size() + 1; // include newline } *dest++ = len; *dest++ = ':'; for (Block::const_iterator i = block.begin(); i != block.end(); ++i) { *dest++ = *i; *dest++ = '\n'; } } I've used a simple bencoding serialization format. Example suitable output iterator, which just writes to a stream: struct WriteStream { std::ostream& out; WriteStream(std::ostream& out) : out(out) {} WriteStream& operator++() { return *this; } WriteStream& operator++(int) { return *this; } WriteStream& operator*() { return *this; } template<class T> void operator=(T const& value) { out << value; } }; Example use: bencode_block(block, WriteStream(std::cout)); Another possible output iterator, which writes to a file descriptor (such as a network socket): struct WriteFD { int out; WriteFD(int out) : out(out) {} WriteFD& operator++() { return *this; } WriteFD& operator++(int) { return *this; } WriteFD& operator*() { return *this; } template<class T> void operator=(T const& value) { if (write(value) == -1) { throw std::runtime_error(strerror(errno)); } } //NOTE: write methods don't currently handle writing less bytes than provided int write(char value) { return write(out, &value, 1); } int write(std::string const& value) { return write(out, value.data(), value.size()); } int write(int value) 
{ char buf[20]; // handles INT_MAX up to 9999999999999999999 // handles INT_MIN down to -999999999999999999 // that's 19 and 18 nines, respectively (you did count, right? :P) int len = sprintf(buf, "%d", value); return write(out, buf, len); } }; Poor man's move semantics: template<class C> typename C::value_type& push_back(C& container) { container.push_back(typename C::value_type()); return container.back(); } This allows easy use of move semantics to avoid unnecessary copies: container.push_back(value); // copies // becomes: // (C is the type of container) container.push_back(C::value_type()); // add empty container.back().swap(value); // swap contents
1,828,037
1,828,048
What's the point of g++ -Wreorder?
The g++ -Wall option includes -Wreorder. What this option does is described below. It is not obvious to me why somebody would care (especially enough to turn this on by default in -Wall). -Wreorder (C++ only) Warn when the order of member initializers given in the code does not match the order in which they must be executed. For instance: struct A { int i; int j; A(): j (0), i (1) { } }; The compiler will rearrange the member initializers for i and j to match the declaration order of the members, emitting a warning to that effect. This warning is enabled by -Wall.
Consider: struct A { int i; int j; A() : j(0), i(j) { } }; Now i is initialized to some unknown value, not zero. Alternatively, the initialization of i may have some side effects for which the order is important. E.g. A(int n) : j(n++), i(n++) { }
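The declaration-order rule can be made observable with a side-effecting initializer (a minimal sketch, not from the original answer; the counter exists only to record the order):

```cpp
// Members are initialized in declaration order (i before j), regardless of
// the order written in the constructor's initializer list. g++ -Wall warns
// about the mismatch below with -Wreorder.
inline int next(int& counter) { return ++counter; }

struct A {
    int i;
    int j;
    // Written j-first, but i is still initialized first:
    A(int& counter) : j(next(counter)), i(next(counter)) {}
};
```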
1,828,132
1,828,224
C++ text file pointer problems
I am writing a function which should (if the file already exists) increment the first number by one and append the parameters of the function to the end of the file. Example: append (4,9); append (5,6); File contents at 1: 1 \n 4 \n 9 File contents at 2: 2 \n 4 \n 9 \n 5 \n 6 int append (int obj, int objType) { ifstream infile; infile.open("stuff.txt"); if (infile.fail()){ infile.close(); ofstream outfile; outfile.open("stuff.txt"); outfile << 1 << endl << obj << endl << objType; outfile.close(); } else { int length = 0; while (!infile.eof()){ int temp; infile >> temp; length ++; } infile.close(); infile.open("stuff.txt"); int fileContents[length]; int i = 0; while (!infile.eof()){ /*PROGRAM DOES NOT ENTER HERE*/ infile >> fileContents[i]; i ++; } infile.close(); ofstream outfile; outfile.open("stuff.txt"); fileContents[0] +=1; for (i = 0; i < length; i++){ outfile << fileContents[i] << endl ; } outfile << obj << endl << objType; } The program never enters the second while loop, so the contents are never copied to the array and then into the file. I am unsure exactly what the problem is or how to fix it. Any help would be greatly appreciated. :)
Instead of closing and reopening the file this way (I'm not sure that operation will reset the file position as you require!), why not use std::fstream::seekg() and just "rewind" the file to the beginning: infile.seekg(0, ios::beg)
1,828,452
1,828,572
What on earth would compel C++ to call this function?
I'm working on a programming language that uses C++ as its target language for now. I'm hitting an exceptionally strange backtrace. #1 0x08048d09 in factorial (n=0x8052160) at ir.cpp:35 35 shore::builtin__int * __return = NULL; (gdb) bt #0 shore::builtin__int::__mul__ (this=0x8052160, other=0x8052288) at /home/alex/projects/shore/shore/runtime/int.h:36 #1 0x08048d09 in factorial (n=0x8052160) at ir.cpp:35 #2 0x08048cfa in factorial (n=0x80520b8) at ir.cpp:35 #3 0x08048cfa in factorial (n=0x8052018) at ir.cpp:35 #4 0x08048d6f in main () at ir.cpp:43 Specifically it appears that declaring the type of __return is somehow triggering the __mul__ method on builtin__int to be called, and I have no idea why. builtin__int looks like: #ifndef _SHORE_INT_H #define _SHORE_INT_H #include "gc.h" namespace shore { class builtin__int : public shore::Object { public: // Some day this will be arbitrary precision, but not today. long long value; static builtin__int* new_instance(long long value_) { builtin__int* val = new builtin__int(value_); shore::GC::register_object(val); return val; } builtin__int(long long value_) { this->value = value_; } builtin__bool* __eq__(builtin__int* other) { return builtin__bool::new_instance(this->value == other->value); } builtin__int* __add__(builtin__int* other) { return builtin__int::new_instance(this->value + other->value); } builtin__int* __sub__(builtin__int* other) { return builtin__int::new_instance(this->value - other->value); } builtin__int* __mul__(builtin__int* other) { return builtin__int::new_instance(this->value * other->value); } }; } #endif Any ideas as to what on earth is compelling C++ to call the __mul__ method?
EDIT: Added the source of ir.cpp #include "builtins.h" #include "frame.h" #include "object.h" #include "state.h" std::vector < shore::Frame * >shore::State::frames; shore::GCSet shore::GC::allocated_objects; class factorial__frame: public shore::Frame { public: shore::builtin__int * n; shore::GCSet __get_sub_objects() { shore::GCSet s; s. insert(this->n); return s; }}; class main__frame: public shore::Frame { public: shore::GCSet __get_sub_objects() { shore::GCSet s; return s; }}; shore::builtin__int * factorial(shore::builtin__int * n) { shore::builtin__int * __return = NULL; factorial__frame frame; shore::State::frames.push_back(&frame); frame.n = NULL; frame.n = n; if (((frame.n)->__eq__(shore::builtin__int::new_instance(0)))->value) { __return = shore::builtin__int::new_instance(1); shore::GC::collect(); shore::State::frames.pop_back(); return __return; } __return = (frame.n)-> __mul__(factorial ((frame.n)-> __sub__(shore::builtin__int::new_instance(1)))); shore::GC::collect(); shore::State::frames.pop_back(); return __return; } int main() { main__frame frame; shore::State::frames.push_back(&frame); builtin__print(factorial(shore::builtin__int::new_instance(3))); shore::State::frames.pop_back(); }
A bit of a guess: the initialization in the line shore::builtin__int * __return = NULL; does nothing, since it's always overwritten. The compiler would be perfectly entitled to (a) reorder it down to where __return is assigned, by the statement that does call __mul__ and then (b) remove the code entirely. But maybe it's left the source line in the debugging info, and either the linker or gdb has ended up thinking the call instruction belongs to the wrong one of the several source lines in the vicinity. Never trust source debugging unless you can see the disassembly too. Compiled languages - bah, humbug. And so forth.
1,828,535
1,828,699
Fastest socket method for a lot of data between a lot of files
I'm building a socket application that needs to shuffle a lot of small/medium sized files, something like 5-100kb files, to a lot of different clients (sort of like a web server, but still not quite). Should I just go with the standard poll/epoll (Linux) or async sockets in Winsock (Win32), or are there any methods with even more performance around (overlapped I/O on Win32, for example)? Both Linux and Windows are possible platforms!
On Windows you may try using TransmitFile, which has the potential of boosting your performance by avoiding kernel-space <-> user-space data copying.
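Since the question covers Linux as well, the same zero-copy idea exists there as sendfile(2). A minimal sketch, with hypothetical file paths and error handling trimmed (note that using a regular file as the destination, as the test below does, requires a reasonably recent kernel; with a socket it works everywhere):

```cpp
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cassert>
#include <cstring>

// Send the whole file at `path` to the descriptor out_fd (typically a socket).
// Returns the number of bytes transferred, or -1 on error.
ssize_t send_whole_file(const char* path, int out_fd) {
    int in_fd = open(path, O_RDONLY);
    if (in_fd < 0) return -1;
    struct stat st;
    if (fstat(in_fd, &st) < 0) { close(in_fd); return -1; }
    off_t offset = 0;
    // The kernel moves the bytes directly; no user-space buffer is involved.
    ssize_t sent = sendfile(out_fd, in_fd, &offset, st.st_size);
    close(in_fd);
    return sent;
}
```

For many small files the per-file open/stat overhead still matters, so caching descriptors or metadata can help too.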
1,828,700
1,829,260
Using C++ in xcode for image and video processing
I am studying in the area of image and video processing - specifically in the field of pattern recognition (objects, people etc.). I wish to use a programming language to apply the transformation to images and video (more importantly video). I am thinking of using C++ in Xcode to do this. The algorithms I wanna build I want to take data from the web (e.g. submitted videos) - process them and then give an output. My question has several parts: (1) Is C++ the best language to do this in? Can this be done in Python? (I'm guessing C++ is faster than Python and can probably handle larger files/more intense algos) (2) What is the best way for setting up a project for this in xcode - is it a straight (A) Command-line tools "vanilla" project or should I go for (B) a Cocoa application in objective C? (I will need to learn Obj-C) My short term objective is to write some simple alorithms and see how they work on video. Then to hook this up (at the back end) to a front end web GUI (so I can submit videos to my code). Volumes wont be huge - but file sizes may be substantial. Any insight will be hugely appreciated. Max.
If you choose to use C++ (which seems sensible for that scenario) you should check out these links: IPP - also useful for non-Intel processors, especially in combination with the Intel C++ compiler, but it's expensive. Intel offers many tools that are useful for parallelising high-performance number crunching (CPU-bound stuff), e.g. TBB and OpenMP (I think that one isn't Intel's). Maybe OpenCV is useful for you. Hope that helps somewhat.
1,829,013
1,829,034
Trouble understanding C++ `virtual`
I'm having trouble understanding what the purpose of the virtual keyword in C++. I know C and Java very well but I'm new to C++ From wikipedia In object-oriented programming, a virtual function or virtual method is a function or method whose behavior can be overridden within an inheriting class by a function with the same signature. However I can override a method as seen below without using the virtual keyword #include <iostream> using namespace std; class A { public: int a(); }; int A::a() { return 1; } class B : A { public: int a(); }; int B::a() { return 2; } int main() { B b; cout << b.a() << endl; return 0; } //output: 2 As you can see below, the function A::a is successfully overridden with B::a without requiring virtual Compounding my confusion is this statement about virtual destructors, also from wikipedia as illustrated in the following example, it is important for a C++ base class to have a virtual destructor to ensure that the destructor from the most derived class will always be called. So virtual also tells the compiler to call up the parent's destructors? This seems to be very different from my original understanding of virtual as "make the function overridable"
Make the following changes and you will see why: #include <iostream> using namespace std; class A { public: int a(); }; int A::a() { return 1; } class B : public A { // Notice public added here public: int a(); }; int B::a() { return 2; } int main() { A* b = new B(); // Notice we are using a base class pointer here cout << b->a() << endl; // This will print 1 instead of 2 delete b; // Added delete to free b return 0; } Now, to make it work like you intended: #include <iostream> using namespace std; class A { public: virtual int a(); // Notice virtual added here }; int A::a() { return 1; } class B : public A { // Notice public added here public: virtual int a(); // Notice virtual added here, but not necessary in C++ }; int B::a() { return 2; } int main() { A* b = new B(); // Notice we are using a base class pointer here cout << b->a() << endl; // This will print 2 as intended delete b; // Added delete to free b return 0; } The note that you've included about virtual destructors is exactly right. In your sample there is nothing that needs to be cleaned-up, but say that both A and B had destructors. If they aren't marked virtual, which one is going to get called with the base class pointer? Hint: It will work exactly the same as the a() method did when it was not marked virtual.
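To make the destructor point concrete, here is a minimal sketch (names made up) that records destruction order in a string instead of printing it, so it can be inspected:

```cpp
#include <cassert>
#include <string>

std::string g_log;  // records destruction order

struct Base {
    virtual ~Base() { g_log += "~Base;"; }  // virtual: delete-through-base dispatches
};
struct Derived : Base {
    ~Derived() { g_log += "~Derived;"; }
};

// Deleting a Derived through a Base* runs ~Derived first, then ~Base,
// because ~Base is declared virtual. Without `virtual`, deleting through
// a Base* would skip ~Derived entirely (formally undefined behavior).
std::string destroy_via_base_pointer() {
    g_log.clear();
    Base* b = new Derived();
    delete b;
    return g_log;
}
```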
1,829,119
1,829,158
C++ -- Pointers to Arrays -- Arrays of Pointers
I notice this has caused confusion for several people, but after reading a couple of posts on here and the cplusplus tutorial my brain is still scrambled. Suppose I have the following variables in a header file - int numberOfLinePoints; D3DXVECTOR3* line; //confused as to what it is Then in the implementation C++ file I initialize them as follows - //both initialized in constructor numberOfLinePoints = 25; line = new D3DXVECTOR3[numPoints]; //array of pointers? What does my line variable now represent? As far as I can tell from reading links on stackoverflow it should represent an array of pointers. I then read the following however... (1) Pointers for Beginners ...where (A) arrays of pointers, and (B) pointers to arrays, are both discussed. This left me confused once again as they both seem to work similarly. The fact that I define my pointers in a seperate location to where I allocate (correct?) them seems to be where my confusion stems from. Am I correct that this is an array of pointers to D3DXVECTOR3 objects? To finish - if variable line holds information about one line segment, how would I create an array of line segments? I currently have the following - //HEADER FILE int numberOfLineSegments; D3DXVECTOR3** lineCollection; //array of pointers - each of which //points to an array of pointers? //CPP FILE numberOfLineSegments = 8; //constructor for(i = 0; i < numberOfLineSegments; i++) //initialization { //and allocation CORRECT? lineCollection[i] = new D3DXVECTOR*[numPoints]; //of memory for Y/N } //lineCollection VOID createLineSegments(startPoint, endPoint) //could return array instead { //pseudo to generate one line segment while != numberOfLinePoints line[sentinel++] = interpolate(startPoint, endPoint, time_T) //pseudo to generate array of line segments while != numberOfLines lineCollection[sentinel++] = line } Any help is much appreciated.
int numberOfLinePoints; D3DXVECTOR3* line; //confused as to what it is //both initialized in constructor numberOfLinePoints = 25; line = new D3DXVECTOR3[numPoints]; //array of pointers? line is an array of D3DXVECTOR3 objects. It would only be an array of pointers if D3DXVECTOR3 were itself a pointer; however, since I don't know the C++ D3D headers very well, I'm not sure. D3DXVECTOR3** lineCollection; is an array of pointers, each pointer likely being a pointer to a line (that is, an array of D3DXVECTOR3). You have two options. Memory-wise, the best would be to set each entry in lineCollection to just point to the corresponding line. This is safe if you either know the lines aren't going to change (and aren't going to be freed), or if they do change you want the changes to be reflected immediately inside your collection. The other option would be to create a new array for each entry in lineCollection, and copy the points from each line into this new array. There is no correct answer; it depends on the functionality you want.
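The distinction can be sketched with a stand-in struct (Vec3 here is a hypothetical substitute for D3DXVECTOR3):

```cpp
#include <cassert>
#include <cstddef>

struct Vec3 { float x, y, z; };  // stand-in for D3DXVECTOR3

// new Vec3[n] gives one contiguous block of n Vec3 objects -- *not* pointers.
std::size_t sizeof_element_of_object_array() {
    Vec3* line = new Vec3[4];
    std::size_t s = sizeof(line[0]);  // sizeof(Vec3): each element is a full object
    delete[] line;
    return s;
}

// new Vec3*[n] gives n pointers; each one must still be pointed at
// (or allocated) separately before use.
std::size_t sizeof_element_of_pointer_array() {
    Vec3** collection = new Vec3*[4];
    std::size_t s = sizeof(collection[0]);  // sizeof(Vec3*): each element is a pointer
    delete[] collection;
    return s;
}
```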
1,829,499
1,829,552
How Does PHP's main.c Start Execution
I was poking around the PHP 5.3.1 source tree, and decided to take a look at main.c. I was curious what was happening behind the scenes whenever PHP runs. I was under the impression that any C or C++ program starts execution in a function named main, but I don't see a function with that name in main.c. Where does PHP code actually start executing (a different for command-line vs. MOD_PHP vs. CGI?), and what am I missing w/r/t no main function in the main.c file that would let me answer this question myself the next time?
I don't think I've ever seen any clear answer to that kind of question on the Internet, but you might be interested in some paragraphs of the book Extending and Embedding PHP, which is probably the reference book when it comes to writing PHP extensions and the internals of the PHP engine. An interesting couple of sentences, quoting chapter 1, "The PHP Life Cycle", is: In a common webserver environment, you'll never explicitly start the PHP interpreter; you'll start Apache or some other web server that will load PHP and process scripts as needed... And, just after: ... the CLI binary actually behaves just the same way. A php command, entered at the system prompt, starts up the "command line API", which acts as a mini-web server designed to service a single request. You'll probably be able to find some pages on Google Books, if you want to try reading a bit more...
1,829,639
1,829,691
How can I use a custom type for keys in a boost::unordered_map?
I'm using Boost's implementation of a hash map in a project right now, and I'm trying to implement a custom type for keys. I have four unsigned integers which I'd like to combine into a single 128-bit datatype to use as a key. I've created a struct with a 32-bit integer array of four elements, which serves as my storage. To be honest, I'm not sure how Boost's hash map works, so I'm not sure what I'm doing here, but I followed the Boost documentation (http://www.boost.org/doc/libs/1_37_0/doc/html/hash/custom.html) for extending boost::hash, and I created a hash function, as well as a custom comparison operator. I have this custom type defined in a header. This is my code: #ifndef INT128_H_ #define INT128_H_ // Custom 128-bit datatype used to store and compare the results of a weakened hash operation. struct int128 { unsigned int storage[4]; /* Assignment operation that takes a 32-bit integer array of four elements. This makes assignment of values a shorter and less painful operation. */ void operator=(const unsigned int input[4]) { for(int i = 0; i < 4; i++) storage[i] = input[i]; } }; bool operator==(int128 const &o1, int128 const &o2) { if(o1.storage[0] == o2.storage[0] && o1.storage[1] == o2.storage[1] && o1.storage[2] == o2.storage[2] && o1.storage[3] == o2.storage[3]) return true; return false; } // Hash function to make int128 work with boost::hash. std::size_t hash_value(int128 const &input) { boost::hash<unsigned long long> hasher; unsigned long long hashVal = input.storage[0]; for(int i = 1; i < 3; i++) { hashVal *= 37; hashVal += input.storage[1]; } return hasher(hashVal); } #endif Now when I actually use this type in Boost's unordered map, my code compiles, but fails to link. The linker claims that I have a symbol defined multiple times in several object files. I'd really like to get my 128-bit type working with this map. Any tips on what I'm screwing up, or a better way to do this?
The involvement of unordered_map is almost incidental to the problem you're encountering. The real problem is that you're defining hash_value and operator== in every file that includes the header above. You can cure this by either: Defining both of those as inline functions, or just declaring them in the header. If you do the latter (and it's what you'll usually want) you'll move the definitions of those functions into a .cpp file (or whatever extension you use for C++ source files). You'll then compile that file, and link the resulting object with your other code that uses the int128 type. Edit: You can still make your comparison cleaner, something like: bool operator==(int128 const &o1, int128 const &o2) { return o1.storage[0] == o2.storage[0] && o1.storage[1] == o2.storage[1] && o1.storage[2] == o2.storage[2] && o1.storage[3] == o2.storage[3]; }
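A sketch of the header with the inline fix applied — using std::unordered_map and std::hash for illustration in place of the Boost equivalents, and with the hash loop iterating over all four words (the loop bounds in the original only ever mix in storage[1]):

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <unordered_map>

struct int128 {
    unsigned int storage[4];
};

// `inline` means every translation unit including this header gets the
// same definition without violating the one-definition rule.
inline bool operator==(int128 const& o1, int128 const& o2) {
    return o1.storage[0] == o2.storage[0] &&
           o1.storage[1] == o2.storage[1] &&
           o1.storage[2] == o2.storage[2] &&
           o1.storage[3] == o2.storage[3];
}

struct int128_hash {
    std::size_t operator()(int128 const& input) const {
        unsigned long long h = input.storage[0];
        for (int i = 1; i < 4; i++)         // mix in all four words
            h = h * 37 + input.storage[i];
        return std::hash<unsigned long long>()(h);
    }
};
```

Usage then looks like `std::unordered_map<int128, Value, int128_hash>`; with Boost you would keep the `hash_value` free function instead, marked `inline` the same way.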
1,829,741
1,836,862
Lightweight debugging on embedded Linux
I'm developing an application that runs on a small Linux-based SBC (~32MB RAM). Sadly, my app recently became too large to run under GDB anymore. Does anyone know of any good, lightweight debugging methods that I can use in embedded Linux? Even being able to view a thread's stack trace would be extremely helpful. I should mention that this application is written in C++ and runs multiple threads, so gdbserver is a no-go as it doesn't work with multithreaded apps. Thanks in advance, Maha
gdbserver definitely works with multi-threaded applications; I'm working on an embedded project right now with >25 threads and we use gdbserver all the time. info threads lists all the threads in the system. thread <thread number from info threads> switches to that thread of execution. thread apply XXX <command> runs <command> on the thread designated by XXX, which can also be 'all'. So if you want the back trace from all running threads, do thread apply all bt. Once you're in the execution flow of a given thread, all your typical commands work as they would in a single-threaded process.
1,829,898
1,829,917
Drawing with c++ visual studio 2010 beta?
please tell me how to draw any shape (a small square e.g) using visual studio 2010 with the c++ language ? PUT THEM STEP BY STEP PLEASE I don't know what type of file i have to choose nor how to check it out
I think you mean drawing in win32? I would suggest you to check this out: http://www.codeproject.com/KB/GDI/paint_beginner.aspx
1,829,905
1,831,813
What is the copy constructor bug causing parsing errors?
I'm writing a compiler for a small language, and my Parser class is currently in charge of building an AST for use later. However, recursive expressions are not working correctly because the vector in each AST node that holds child nodes are not working correctly. Currently my AST's header file looks like this: class AST { public: enum ASTtype {nil, fdecl, pdecl, vdecl, rd, wr, set, rdLV, setLV, exprLV, add, sub, mul, fcall, divide, mod, lt, gt, lte, gte, eq, ne, aAnd, aOr, aNot, aNeg, nConst, t, f, vs, dl, loop, cond, ss}; enum scalarType {tNA, tINVALID, tINT, tLONG, tBOOL}; AST (); AST (AST const&); AST (ASTtype); AST (ASTtype, std::string); void addChild(AST); ASTtype getNodeType(); std::string text; ASTtype nodeType; int size; scalarType evalType; std::vector<AST> children; }; Here's the expression parsing code that is causing trouble: void Parser::e(AST& parent) { AST expr; AST::ASTtype check = AST::nil; bool binOp = false; switch (lookahead.type) { case Lexer::AND : check = AST::aAnd ; binOp = true; break; case Lexer::OR : check = AST::aOr ; binOp = true; break; case Lexer::NOT : check = AST::aNot ; break; case Lexer::NEG : check = AST::aNeg ; break; case Lexer::PLUS : check = AST::add ; binOp = true; break; case Lexer::MINUS : check = AST::sub ; binOp = true; break; case Lexer::SPLAT : check = AST::mul ; binOp = true; break; case Lexer::FSLASH : check = AST::divide; binOp = true; break; case Lexer::MOD : check = AST::mod ; binOp = true; break; case Lexer::EQ : check = AST::eq ; binOp = true; break; case Lexer::LT : check = AST::lt ; binOp = true; break; case Lexer::GT : check = AST::gt ; binOp = true; break; case Lexer::GTE : check = AST::gte ; binOp = true; break; case Lexer::LTE : check = AST::lte ; binOp = true; break; case Lexer::NE : check = AST::ne ; binOp = true; break; } if (check != AST::nil && binOp) { match(lookahead.type); expr = AST(check); e(expr); e(expr); } else if (check != AST::nil && !binOp) { match(lookahead.type); expr = AST(check); } 
else if (lookahead.type == Lexer::IDENT) { if (symbols.resolve(lookahead.text).sym_type == symbol::FUNC) { expr = AST(AST::fcall, lookahead.text); match(Lexer::IDENT); while (lookahead.type != Lexer::BANG) { e(expr); } match(Lexer::BANG); } else { expr = AST(AST::exprLV); lv(expr); } } else if (lookahead.type == Lexer::T) { match(Lexer::T); //true expr = AST(AST::t); } else if (lookahead.type == Lexer::F) { match(Lexer::F); //false expr = AST(AST::f); } else { expr = AST(AST::nConst, lookahead.text); match(Lexer::NUM); } parent.children.push_back(expr); } An example expression that doesn't work is + 1 + 2 + 3 4. It should parse into an AST like this: + [1, + [2, + [3, 4]], but instead I get this: + [1, + []] Any advice on what I'm doing wrong?
parent.children.push_back(expr) copies the expression. Hence, it calls AST::AST(AST const&). A bug in that could certainly cause the problem you see. However, without the code, we can't find bugs in it.
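For illustration, here is a minimal stand-in for the AST class showing what a hand-written copy constructor must do. Forgetting to copy `children` (so it gets default-constructed empty) would reproduce exactly the `+ [1, + []]` symptom, because push_back copies the sub-tree via that constructor:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

struct Node {  // hypothetical stand-in for the AST class
    std::string text;
    std::vector<Node> children;

    Node() {}
    Node(std::string t) : text(t) {}
    // Correct copy constructor: copies text AND the children vector.
    // Omitting `children(o.children)` would leave the copy childless.
    Node(Node const& o) : text(o.text), children(o.children) {}
};

std::size_t copied_child_count() {
    Node root("plus");
    root.children.push_back(Node("1"));
    root.children.push_back(Node("2"));
    Node copy(root);              // invokes the copy constructor, just as push_back does
    return copy.children.size();  // 2 when children are copied correctly
}
```

In the real code, since AST's members all copy correctly on their own, simply deleting the user-declared `AST(AST const&)` and letting the compiler generate it would also work.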
1,829,906
1,829,923
Member value changes between successive calls of the same function
I have a CognitiveEntity class, defined this way: class CognitiveEntity : public Object { public: CognitiveEntity (FuzzyCognitiveMap fcm, SystemState s); ~CognitiveEntity (); template <typename T> void RegisterChange (std::string context, T value); bool operator!= (const CognitiveEntity& rhs) const; private: FuzzyCognitiveMap m_fuzzyCognitiveMap; SystemState m_systemState; std::vector <SystemState> RunFuzzyCognitiveMap (); }; As shown, a CognitiveEntity has a SystemState object, which in turn has a vector of Concept objects (only the most relevant lines are shown): class SystemState { public: SystemState (); ~SystemState (); void AddConcept (Concept c) { m_L.push_back(c); } std::vector <Concept> m_L; }; Inside the CognitiveEntity::RegisterChange, I mark a Concept as a potential cause (by calling Concept::IsPotentialCause (bool) which merely sets a private member with the value passed): template <typename T> void CognitiveEntity::RegisterChange (std::string context, T value) { std::string name = context.substr(context.find_last_of ("/") +1); int pos = m_systemState.FindConcept(name); if (pos > -1) { int intValue = value ? 1 : 0; m_systemState.m_L[pos].SetConceptValue (intValue, false); if (m_systemState.m_L[pos].CheckVariation ()) { m_systemState.m_L[pos].IsPotentialCause (true); // Mark this concept as a potential cause for (int cause = 0; cause < m_systemState.GetSize (); cause++) { if ( (cause != pos) && (m_systemState.m_L[cause].MayBeCause ())) { m_fuzzyCognitiveMap.UpdateFuzzyCognitiveMapEntry (cause, pos, m_systemState); m_systemState.m_L[cause].IsPotentialCause (false); } } } } } What happens is that as soon as RegisterChange is called another time, the Concept that was marked as potential cause, is marked no more. I tried running gdb and I am sure that that member is not set elsewhere. 
I'm not sure if this little information is enough for you to give me some hints about such a behavior (I didn't want to flood the post with the code of both SystemState and Concept classes). Regards, Jir
If this was a multi-threaded system, I'd say it sounds like a classic case of shared, mutable state that wasn't properly synchronized. If you don't have a multi-threaded situation, I'd say set a watch on that variable and see what changes it.
1,829,930
1,829,936
Multi-statement Macros in C++
In C++, is it possible to make a multi-statement macro with nested if statements inside of it like the one below? I've been attempting it for a while now and I'm getting a scope issue for the second if statement not being able to see 'symbol'. Maybe I need to understand macros further. #define MATCH_SYMBOL( symbol, token) if(something == symbol){ if( symbol == '-'){ }else if (symbol != '-'){ } other steps; }
For a multi-line macro you need to add a \ character to the end of all but the last line to tell the macro processor to continue parsing the macro on the next line, like so: #define MATCH_SYMBOL( symbol, token) \ if(something == symbol){ \ if( symbol == '-'){ \ }else if (symbol != '-'){ \ } \ other steps; \ } Right now, it's trying to interpret it as a 1-line macro and then some actual code at the top of your file, which isn't what you want: #define MATCH_SYMBOL( symbol, token) // and then... wrongly thinking this is separate... if(something == symbol){ // symbol was never defined, because the macro was never used here! if( symbol == '-'){ }else if (symbol != '-'){ } other steps; }
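A common refinement is to additionally wrap the body in `do { ... } while (0)`, which makes the macro expand to a single statement (safe after an unbraced `if`) and swallows the trailing semicolon. A sketch with hypothetical arguments standing in for the original ones:

```cpp
#include <cassert>
#include <string>

#define MATCH_SYMBOL(something, symbol, out) \
    do { \
        if ((something) == (symbol)) { \
            if ((symbol) == '-') { \
                (out) = "minus"; \
            } else { \
                (out) = "other"; \
            } \
        } \
    } while (0)

std::string classify(char something, char symbol) {
    std::string out = "no-match";
    if (something != '\0')
        MATCH_SYMBOL(something, symbol, out);  // safe even without braces here
    return out;
}
```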
1,830,043
1,830,089
C++ pass pointer by reference and assign default value
I would like to pass a pointer by reference to a function, such that i can actually change the address the passed pointer is pointing to and i'd like to assign this argument a default value. something like this: in the declaration void myFunc(SomeType* &var=NULL); and the definition: void MyClass::myFunc(SomeType* &var){ if(var!=NULL) (*var)=(*someOtherPointer); if(someCondition) var=NULL; } such that a callee can decide whether he wants to call the function with one argument or without argument. And sucht that if he decides to pass an argument, and someCondition holds, the passed pointer will point to NULL afterwards however - if i try to do it like this i get a: Error C2440: 'default argument': 'int' cannot be conveted to 'SomeType *&' Thanks for the help!
The error message says it all: you are passing an integer instead of a reference-to-a-pointer-to-SomeType. To do what you want, you can use a pointer-to-a-pointer-to-SomeType: void myFunc(SomeType** var=NULL); void MyClass::myFunc(SomeType** var){ if(var!=NULL && *var!=NULL) (**var)=(*someOtherPointer); if(var!=NULL && someCondition) *var=NULL; }
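A self-contained sketch of that approach (variable names hypothetical):

```cpp
#include <cassert>
#include <cstddef>

static int someOtherValue = 99;  // stands in for *someOtherPointer

// Callers may omit the argument entirely, or pass &theirPointer to let the
// function both write through it and re-seat it.
void myFunc(int** var = NULL, bool someCondition = false) {
    if (var != NULL && *var != NULL)
        **var = someOtherValue;  // write through the caller's pointer
    if (var != NULL && someCondition)
        *var = NULL;             // re-seat the caller's pointer itself
}
```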
1,830,158
1,830,240
How to call erase with a reverse iterator
I am trying to do something like this: for ( std::list< Cursor::Enum >::reverse_iterator i = m_CursorStack.rbegin(); i != m_CursorStack.rend(); ++i ) { if ( *i == pCursor ) { m_CursorStack.erase( i ); break; } } However erase takes an iterator and not a reverse iterator. is there a way to convert a reverse iterator to a regular iterator or another way to remove this element from the list?
After some more research and testing I found the solution. Apparently according to the standard [24.4.1/1] the relationship between i.base() and i is: &*(reverse_iterator(i)) == &*(i - 1) (from a Dr. Dobbs article): So you need to apply an offset when getting the base(). Therefore the solution is: m_CursorStack.erase( --(i.base()) ); EDIT Updating for C++11. reverse_iterator i is unchanged: m_CursorStack.erase( std::next(i).base() ); reverse_iterator i is advanced: std::advance(i, 1); m_CursorStack.erase( i.base() ); I find this much clearer than my previous solution. Use whichever you require.
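A sketch of the original loop rewritten with this fix, erasing the last occurrence of a value from a list of ints:

```cpp
#include <cassert>
#include <iterator>
#include <list>

// Erase the last element equal to `value`; return true if one was removed.
bool erase_last(std::list<int>& l, int value) {
    for (std::list<int>::reverse_iterator i = l.rbegin(); i != l.rend(); ++i) {
        if (*i == value) {
            // std::next(i).base() points at the element *i refers to,
            // per the &*(reverse_iterator(i)) == &*(i - 1) relationship.
            l.erase(std::next(i).base());
            return true;
        }
    }
    return false;
}
```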
1,830,780
1,831,073
How can I interface with a third party module that only provides JTAPI API From C++?
I'm supporting a large system written in C++ and we now have a requirement for our application to talk with a third party system which only provides a JTAPI interface. It would appear that I am stuck writing a JTAPI proxy in Java that talks JTAPI on one side and some more language-neutral API on the other. However, this feels like it should be a solved problem and I don't want to unnecessarily re-invent the wheel. What is the best solution to interface to JTAPI from C++? Does such a proxy already exist, or perhaps is there a solution that does not require a Java layer?
This article shows a way to call Java objects from C++. You can also think of embedding the JVM in your C++ program. This page talks about a possible way to do this. Also see: Embed Java code into your native apps If your C++ system provides an API, then the easier approach is to write a Java program that wraps the C++ API (using JNI) and call the JTAPI library from there.
1,831,290
1,832,693
Static variable initialization?
I want to know why exactly static variables in C, C++ and Java are initialized by zero by default? And why this is not true for local variables?
Why are static variables deterministically initialized while local variables aren't? See how static variables are implemented. The memory for them is allocated at link time, and the initial value for them is also provided at link time. There is no runtime overhead. On the other hand, the memory for local variables is allocated at run time. The stack has to grow; you don't know what was there before. If you want, you can clear that memory (zero it), but that would incur a runtime overhead. The C++ philosophy is "you don't pay for things you don't use", so it doesn't zero that memory by default. OK, but why are static variables initialized to zero, and not some other value? Well, you generally want to do something with that variable. But then, how do you know if it has been initialized? You could create a static boolean variable. But then it also has to be reliably initialized to something (preferably false). How about a pointer? You'd rather want it initialized to NULL than to some random garbage. How about a struct/record? It has some other data members inside. It makes sense to initialize all of them to their default values. But for simplicity, if you use the "initialize to 0" strategy, you don't have to inspect the individual members and check their types; you can just initialize the entire memory area to 0. This is not really a technical requirement. The semantics of initialization could still be considered sane if the default value were something other than 0, but still deterministic. But then, what should that value be? You can quite easily explain why 0 is used (although indeed it sounds slightly arbitrary), but explaining -1 or 1024 seems even harder (especially as the variable may not be large enough to hold that value, etc.). And you can always initialize the variable explicitly. And you always have paragraph 8.5.6 of the C++ standard, which says "Every object of static storage duration shall be zero-initialized at program startup".
For more info, please refer to these other questions: Is global memory initialized in C++? What do the following phrases mean in C++: zero-, default- and value-initialization?
1,831,316
1,832,051
Is this "*ptr++ = *ptr + a" undefined behavior?
Well, I'm not really in serious need of this answer, I am just inquisitive. Expressions like *ptr++ = a are perfectly valid since we are operating on two objects ptr and *ptr but if i write *ptr++ = *ptr + a is it still valid ? For example consider the following snippet: int main(void){ int a[] = {5,7,8,9,2}; int* p =a; *p++ = 76; /*altering the first element */ *p++ = *p + 32; /*altering the second element */ p = a; int i; for(i = 0;i<5; i++) printf("%d ",*p++); return 0; } I think that there is nothing to worry about with the expression *p++ = *p + 32; but I am unsure about the sequence points involved.
First let us assume that 'p' is a pointer type; otherwise all the operations are just syntactic sugar for function calls. Let us break the statement down into parts. int* p = a; *p++ = *p + 32; << Sequence Point >> // Part 1: p++ // Note: the definition of post-increment in the standard (5.2.6) is: // the result of the expression p++ is the value of 'p', while the value of the // object represented by 'p' is incremented. This can be represented in pseudo code as: (A) int* p1 = p; (B) p = p + 1; // Part 2: *p (on the result of Part 1, i.e. on *p++) (C) int& p2 = *p1; // Note the use of p1 // Part 3: *p (in *p + 32) // Note: there is no linkage between this use of 'p' and the 'p' in Parts 1 & 2 (D) int& p3 = *p; // Part 4: *p + 32 (E) int p5 = p3 + 32; // Note the use of p3 // Part 5: assignment (F) p2 = p5; << Sequence Point >> Ordering that must be preserved: (A) before (B); (A) before (C); (D) before (E); (C) before (F); (E) before (F). Given the above constraints, the compiler can re-order these instructions in several ways, but the main point to note is that (B) can happen anywhere; the only constraint on (B) is that it happen after (A). Thus the value of p3 as defined in (D) could be one of two different values depending on the exact position of (B). As the value of p3 cannot be determined here, the resulting statement has undefined behavior.
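The well-defined rewrite is simply to split the statement so that the read of *p and the increment of p are separated by sequence points. A sketch, assuming the intent was "add `a` to the current element, then advance":

```cpp
#include <cassert>

// Equivalent to the intended effect of *p++ = *p + a, but with each
// side effect in its own full expression, so the order is unambiguous.
void shift_and_advance(int*& p, int a) {
    int tmp = *p + a;  // read the current element
    *p = tmp;          // write it back
    ++p;               // then advance the pointer
}
```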
1,831,369
1,831,499
Writing binary files using C++: does the default locale matter?
I have code that manipulates binary files using fstream with the binary flag set and using the unformatted I/O functions read and write. This works correctly on all systems I've ever used (the bits in the file are exactly as expected), but those are basically all U.S. English. I have been wondering about the potential for these bytes to be modified by a codecvt on a different system. It sounds like the standard says using unformatted I/O behaves the same as putting characters into the streambuf using sputc/sgetc. These will lead to the overflow or underflow functions in the streambuf getting called, and it sounds like these lead to stuff going through some codecvt (e.g., see 27.8.1.4.3 in the c++ standard). For basic_filebuf the creation of this codecvt is specified in 27.8.1.1.5. This makes it look like the results will depend on what basic_filebuf.getloc() returns. So, my question is, can I assume that a character array written out using ofstream.write on one system can be recovered verbatim using ifstream.read on another system, no matter what locale configuration either person might be using on their system? I would make the following assumptions: The program is using the default locale (i.e., the program is not changing the locale settings itself at all). The systems both have CHAR_BIT 8, have the same bit order within each byte, store files as octets, etc. The stream objects have the binary flag set. We don't need to worry about any endianess differences at this stage. If any bytes in the array are to be interpretted as a multi-byte value, endianess conversions will be handled as required at a later stage. If the default locale isn't guaranteed to pass through this stuff unmodified on some system configuration (I don't know, Arabic or something), then what is the best way to write binary files using C++?
On Windows it should be fine, but on other OSes you should also check the line endings (just to be safe). The default C/C++ locale is "C", which does not depend on the system's locale; this is not a guarantee, though. As you know, C/C++ compilers and their target machines vary greatly, so you're asking for trouble if you keep all those assumptions. There is negligible overhead in changing the locale unless you try to do it hundreds of times per second.
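For completeness, a byte-for-byte round-trip sketch with the binary flag set (hypothetical temp-file path); on a conforming implementation the bytes, including 0x0A and 0x0D, come back unmodified:

```cpp
#include <cassert>
#include <cstring>
#include <fstream>

// Write 8 raw bytes (including LF, CR, and a non-ASCII byte) and read them
// back through streams opened with ios::binary, using unformatted I/O.
bool roundtrip(const char* path) {
    const char out[8] = {0, 1, 2, '\n', '\r', (char)0xFF, 42, 7};
    {
        std::ofstream f(path, std::ios::out | std::ios::binary);
        f.write(out, sizeof(out));
    }
    char in[8] = {0};
    std::ifstream f(path, std::ios::in | std::ios::binary);
    f.read(in, sizeof(in));
    return f.gcount() == (std::streamsize)sizeof(in) &&
           std::memcmp(out, in, sizeof(out)) == 0;
}
```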
1,831,529
2,308,294
Is C++ code generation in ANTLR 3.2 ready?
I was trying hard to make ANTLR 3.2 generate parser/lexer in C++. It was fruitless. Things went well with Java & C though. I was using this tutorial to get started: http://www.ibm.com/developerworks/aix/library/au-c_plusplus_antlr/index.html When I checked the *.stg files, I found that: CPP has only ./tool/src/main/resources/org/antlr/codegen/templates/CPP/CPP.stg C has so many files: ./tool/src/main/resources/org/antlr/codegen/templates/C/AST.stg ./tool/src/main/resources/org/antlr/codegen/templates/C/ASTDbg.stg ./tool/src/main/resources/org/antlr/codegen/templates/C/ASTParser.stg ./tool/src/main/resources/org/antlr/codegen/templates/C/ASTTreeParser.stg ./tool/src/main/resources/org/antlr/codegen/templates/C/C.stg ./tool/src/main/resources/org/antlr/codegen/templates/C/Dbg.stg And so other languages. My C.g file: grammar C; options { language='CPP'; } /** Match things like "call foo;" */ r : 'call' ID ';' {System.out.println("invoke "+$ID.text);} ; ID: ('a'..'z'|'A'..'Z'|'_')('0'..'9'|'a'..'z'|'A'..'Z'|'_')* ; WS: (' ' |'\n' |'\r' )+ {$channel=HIDDEN;} ; // ignore whitespace Errors: error(10): internal error: group Cpp does not satisfy interface ANTLRCore: missing templates [lexerRuleRefAndListLabel, parameterSetAttributeRef, scopeSetAttributeRef, returnSetAttributeRef, lexerRulePropertyRef_text, lexerRulePropertyRef_type, lexerRulePropertyRef_line, lexerRulePropertyRef_pos, lexerRulePropertyRef_index, lexerRulePropertyRef_channel, lexerRulePropertyRef_start, lexerRulePropertyRef_stop, ruleSetPropertyRef_tree, ruleSetPropertyRef_st] error(10): internal error: group Cpp does not satisfy interface ANTLRCore: mismatched arguments on these templates [outputFile(LEXER, PARSER, TREE_PARSER, actionScope, actions, docComment, recognizer, name, tokens, tokenNames, rules, cyclicDFAs, bitsets, buildTemplate, buildAST, rewriteMode, profile, backtracking, synpreds, memoize, numRules, fileName, ANTLRVersion, generatedTimestamp, trace, scopes, superClass, literals), optional 
headerFile(LEXER, PARSER, TREE_PARSER, actionScope, actions, docComment, recognizer, name, tokens, tokenNames, rules, cyclicDFAs, bitsets, buildTemplate, buildAST, rewriteMode, profile, backtracking, synpreds, memoize, numRules, fileName, ANTLRVersion, generatedTimestamp, trace, scopes, superClass, literals), lexer(grammar, name, tokens, scopes, rules, numRules, labelType, filterMode, superClass), rule(ruleName, ruleDescriptor, block, emptyRule, description, exceptions, finally, memoize), alt(elements, altNum, description, autoAST, outerAlt, treeLevel, rew), tokenRef(token, label, elementIndex, hetero), tokenRefAndListLabel(token, label, elementIndex, hetero), listLabel(label, elem), charRangeRef(a, b, label), ruleRef(rule, label, elementIndex, args, scope), ruleRefAndListLabel(rule, label, elementIndex, args, scope), lexerRuleRef(rule, label, args, elementIndex, scope), lexerMatchEOF(label, elementIndex), tree(root, actionsAfterRoot, children, nullableChildList, enclosingTreeLevel, treeLevel)] error(10): internal error: C.g : java.lang.IllegalArgumentException: Can't find template actionGate.st; group hierarchy is [Cpp] ... and so on. Please kindly advise. Thank you! I'm using Leopard 10.5.8 with CLASSPATH=:/Users/vietlq/projects/antlr-3.2.jar:/Users/vietlq/projects/stringtemplate-3.2.1/lib/stringtemplate-3.2.1.jar:/Users/vietlq/projects/stringtemplate-3.2.1/lib/antlr-2.7.7.jar
It sounds like you've answered your own question: ANTLR's C++ lexer/parser generators are not yet functional. For what it's worth, it's still possible to use ANTLR for parsing from C++, via the C target. I use ANTLR to generate a C language lexer and parser, which I then compile and link to my C++ code. I have one C++ file that translates an ANTLR parse tree to my target abstract syntax tree classes, and the rest of my code doesn't care where the AST comes from. It works pretty well in practice! It would be easy to replace ANTLR with a different parser generator, and I find that the separation leads to cleaner ANTLR grammars.
1,831,635
1,831,688
vptr - virtual tables
There is something I still don't get. For every class I declare there is a hidden vptr member pointing to the class's virtual table. Let's say I have this declaration : class BASE { virtual_table* vptr; //that's hidden of course , just stating the obvious virtual void foo(); } class DERIVED : public BASE { virtual_table* vptr; //that's hidden of course also virtual void foo(); virtual void cho(); } First, I want to understand something: is it really the same member name for the vptr in both the derived and the base? Second, what happens in this situation : base* basic = new derived(); I get it, the basic variable gets derived's vptr, but how does that happen? Because usually when a conversion takes place, derived's base part (including base's vptr) should be assigned to basic, and not derived's vptr. Maybe it's different if there is a variable with the same name in both classes, I don't know. Third and last question : when I have base* basic = new derived(); is there a way to call, through basic, base's member function even though it's virtual? thanks
First, yes, it is the same member. It is automatically assigned a first time when running the base constructor, and assigned a second time when running the derived constructor. (In the case of default empty constructors, the useless assignment in base is optimized away.) Second, there is no real conversion. In fact, derivation can be described as an "is a" relationship. In this case, derived "is a" base. If you consider the first bytes of memory of a derived object, they have the same meaning as the first bytes of a base object. Third, you can call base's member function through basic as follows: basic->base::foo();
1,831,991
1,832,104
C++: Safe way to cast an integer to a pointer
I need to convert an integral type which contains an address to the actual pointer type. I could use reinterpret_cast as follows: MyClass *mc1 = reinterpret_cast<MyClass*>(the_integer); However, this does not perform any run-time checks to see if the address in question actually holds a MyClass object. I want to know if there is any benefit in first converting to a void* (using reinterpret_cast) and then using dynamic_cast on the result. Like this: void *p = reinterpret_cast<void*>(the_integer); MyClass *mc1 = dynamic_cast<MyClass*>(p); assert(mc1 != NULL); Is there any advantage in using the second method?
Type checking on dynamic_cast is implemented in different ways by different C++ implementations; if you want an answer for your specific implementation you should mention what implementation you are using. The only way to answer the question in general is to refer to ISO standard C++. By my reading of the standard, calling dynamic_cast on a void pointer is illegal: dynamic_cast<T>(v) "If T is a pointer type, v shall be an rvalue of a pointer to complete class type" (from 5.2.7.2 of the ISO C++ standard). void is not a complete class type, so the expression is illegal. Interestingly, the type being cast to is allowed to be a void pointer, i.e. void * foo = dynamic_cast<void *>(some_pointer); In this case, the dynamic_cast always succeeds, and the resultant value is a pointer to the most-derived object pointed to by v.
1,832,003
1,837,665
Instantiating classes by name with factory pattern
Suppose I have a list of classes A, B, C, ... which all inherit from Base. I get the class name as a string from the user, and I want to instantiate the right class and return a pointer to Base. How would you implement this? I thought of using a hash-table with the class name as the key and a function pointer to a function that instantiates the right class and returns a Base *. However, I think I might be able to use the factory pattern here and make it a lot easier; I just can't quite remember it well, so I thought I'd ask for suggestions.
Here is a generic factory example implementation: template<class Interface, class KeyT=std::string> struct Factory { typedef KeyT Key; typedef std::auto_ptr<Interface> Type; typedef Type (*Creator)(); bool define(Key const& key, Creator v) { // Define key -> v relationship, return whether this is a new key. return _registry.insert(typename Registry::value_type(key, v)).second; } Type create(Key const& key) { typename Registry::const_iterator i = _registry.find(key); if (i == _registry.end()) { throw std::invalid_argument(std::string(__PRETTY_FUNCTION__) + ": key not registered"); } else return i->second(); } template<class Base, class Actual> static std::auto_ptr<Base> create_func() { return std::auto_ptr<Base>(new Actual()); } private: typedef std::map<Key, Creator> Registry; Registry _registry; }; This is not meant to be the best in every circumstance, but it is intended to be a first approximation and a more useful default than manually implementing the type of function stijn mentioned. How each hierarchy should register itself isn't mandated by Factory, but you may like the method gf mentioned (it's simple, clear, and very useful, and yes, this overcomes the inherent problems with macros in this case). Here's a simple example of the factory: struct Base { typedef ::Factory<Base> Factory; virtual ~Base() {} virtual int answer() const = 0; static Factory::Type create(Factory::Key const& name) { return _factory.create(name); } template<class Derived> static void define(Factory::Key const& name) { bool new_key = _factory.define(name, &Factory::template create_func<Base, Derived>); if (not new_key) { throw std::logic_error(std::string(__PRETTY_FUNCTION__) + ": name already registered"); } } private: static Factory _factory; }; Base::Factory Base::_factory; struct A : Base { virtual int answer() const { return 42; } }; int main() { Base::define<A>("A"); assert(Base::create("A")->answer() == 42); return 0; }
1,832,087
1,832,218
Is Network Up? C++ Fedora/Unix
Does anyone have a snippet of code that checks whether networking is enabled on a machine and whether it has an active IP address? I have networking software that connects to other client machines. It works when the machine is connected, but if I unplug the cable or disable the network, it throws a whole ream of exceptions. It would be nice to just put a check on top :D Thanks in advance
The network is always in a dynamic state, so a simple check at the beginning of the run is not enough for correct operation. Unfortunately, you have to check the success state of every network operation. As for not even starting the program when the network is disconnected: consider what happens if your program is started automatically after the computer has crashed or lost power, or if any other component has suffered something similar, or a glitch. These happen surprisingly often, and restarting the program on n+1 computers just because some dweeb stumbled on a network cable is quite annoying. For checking the general availability of networking, you can always run "ping -q -c 1 127.0.0.1"; the return value is 1 if localhost does not answer. This should go in a startup script; it is quite unnecessary to code it in the application.
1,832,160
1,832,206
Can I use C++ templates to generate Unicode/ANSI variants of a function, rather than using the preprocessor?
We've got a bunch of legacy code that doesn't support Unicode, so a transitional pattern we use in our code is to move the function to a .inl file, change char to CHAR_TYPE, and then wrap it up like this: #define CHAR_TYPE wchar_t #define STRING_TYPE std::wstring #define MyFunctionName MyFunctionNameW #include "MyFunction.inl" #undef CHAR_TYPE #define CHAR_TYPE char #undef STRING_TYPE #define STRING_TYPE std::string #undef MyFunctionName #define MyFunctionName MyFunctionNameA #include "MyFunction.inl" ...where MyFunction.inl then defines MyFunctionName, using the macros to generate both an 'A' version and a 'W' version. This is icky, but it's unfortunately necessary until we get all of our code converted to support Unicode. Is there an alternative way I could do this with templates? I'm thinking that something like the following would be nice: typedef MyFunctionName<wchar_t, std::wstring> MyFunctionNameW typedef MyFunctionName<char, std::string> MyFunctionNameA Is this possible?
Roger Pate is entirely correct about the interface: you shouldn't bother with A and W suffixes. However, this still leaves the problem of implementation. As you suspected, templates are the correct solution. And since you don't need the different names, you can leave out the typedefs. You would just have template <typename TSTRING> void MyFunctionName (TSTRING const&);
1,832,504
1,832,574
boost::any test code compiles with Sun CC but not g++
The following noddy test code: #include <iostream> #include <list> #include <boost/any.hpp> #include <boost/foreach.hpp> #include <typeinfo.h> using boost::any_cast; using std::cout; using std::cerr; typedef std::list<boost::any> many; template <typename T> inline bool is_any(const boost::any& op) { return (op.type() == typeid(T)); } int main() { many theStrangeList; theStrangeList.push_back("Can you really..."); theStrangeList.push_back(std::string ("do random types in 1 container?")); theStrangeList.push_back(6.359); theStrangeList.push_back(7); BOOST_FOREACH(boost::any a, theStrangeList) { try { if (is_any<const char*>(a)) { cout << any_cast<const char*>(a) << '\n'; } else if (is_any<std::string>(a)) { cout << any_cast<std::string>(a) << '\n'; } else if (is_any<double>(a)) { cout << "double = " << any_cast<double>(a) << '\n'; } } catch (const boost::bad_any_cast& e) { cerr << e.what(); cerr << "\n"; } } return 0; } Compiles and works fine using Sun's CC compiler and default settings. However when using g++ I get the following : $ g++ -I$BOOST_ROOT -o myany myany.cpp myany.cpp:5:22: typeinfo.h: No such file or directory /ilx/boost_1_41_0/boost/any.hpp: In constructor `boost::any::holder<ValueType>::holder(const ValueType&) [with ValueType = char[18]]': /ilx/boost_1_41_0/boost/any.hpp:47: instantiated from `boost::any::any(const ValueType&) [with ValueType = char[18]]' myany.cpp:21: instantiated from here /ilx/boost_1_41_0/boost/any.hpp:122: error: ISO C++ forbids assignment of arrays This is g++ version 3.4.3, so it might be different on a 4.x version, I'll try it later. Is this the reason why there isn't a 'is_any' template included with boost any, or is it a compiler bug? I get the same result if I remove the template, as you would expect with an inlined function. (related question)
Seems I only answered the second part of the question, so here I go with the first part as well: Is this the reason why there isn't an 'is_any' template included with boost any? There is no actual need for is_any; do the following instead: if (const std::string* s = boost::any_cast<std::string>(&a)) { std::cout << "string = " << *s << '\n'; } else if (const double* d = boost::any_cast<double>(&a)) { std::cout << "double = " << *d << '\n'; } But this isn't extensible; prefer using boost::variant instead. Is it a compiler bug? It is a compiler bug in Sun CC. gcc is correct: the type of "Can you really..." is char[18], which doesn't satisfy the requirements of boost::any: A ValueType is CopyConstructible. A ValueType is optionally Assignable. The strong exception-safety guarantee is required for all forms of assignment. The destructor for a ValueType upholds the no-throw exception-safety guarantee.
1,832,521
1,832,570
How do I avoid popping up an error dialog when my MSVS C++ app crashes
When my Visual Studio 2008 C++ command-line application crashes, it sometimes produces this dialog. CommandProcessor.exe has encountered a problem and needs to close. We are sorry for the inconvenience. If you were in the middle of something, the information you were working on might be lost. For more information about this error, click here. I tried this in Release and in Debug mode. (By the way, the debugger shows that this is a divide by zero error.) If it is going to crash, I don't want this dialog, which blocks the application. How do I compile my application so that crashes do not produce the dialog?
With /EHa option you can use catch(...) to catch all exceptions included structured exceptions and write a console message. You can also use VC++ - specific __try for structured exception handling instead, but that's a bit harder to code. However this will not protect you against situations when terminate() is called by the C++ runtime - like when an exception escapes a destructor during stack unwinding - you will also have to change the terminate() handler by calling set_terminate().
1,832,621
1,832,645
C++ CPU Register Usage
In C++, local variables are always allocated on the stack. The stack is a part of the allowed memory that your application can occupy. That memory is kept in your RAM (if not swapped out to disk). Now, does a C++ compiler always create assembler code that stores local variables on the stack? Take, for example, the following simple code: int foo( int n ) { return ++n; } In MIPS assembler code, this could look like this: foo: addi $v0, $a0, 1 jr $ra As you can see, I didn't need to use the stack at all for n. Would the C++ compiler recognize that, and directly use the CPU's registers? Edit: Wow, thanks a lot for your almost immediate and extensive answers! The function body of foo should of course be return ++n;, not return n++;. :)
Disclaimer: I don't know MIPS, but I do know some x86, and I think the principle should be the same.. In the usual function call convention, the compiler will push the value of n onto the stack to pass it to the function foo. However, there is the fastcall convention that you can use to tell gcc to pass the value through the registers instead. (MSVC also has this option, but I'm not sure what its syntax is.) test.cpp: int foo1 (int n) { return ++n; } int foo2 (int n) __attribute__((fastcall)); int foo2 (int n) { return ++n; } Compiling the above with g++ -O3 -fomit-frame-pointer -c test.cpp, I get for foo1: mov eax,DWORD PTR [esp+0x4] add eax,0x1 ret As you can see, it reads in the value from the stack. And here's foo2: lea eax,[ecx+0x1] ret Now it takes the value directly from the register. Of course, if you inline the function the compiler will do a simple addition in the body of your larger function, regardless of the calling convention you specify. But when you can't get it inlined, this is going to happen. Disclaimer 2: I am not saying that you should continually second-guess the compiler. It probably isn't practical and necessary in most cases. But don't assume it produces perfect code. Edit 1: If you are talking about plain local variables (not function arguments), then yes, the compiler will allocate them in the registers or on the stack as it sees fit. Edit 2: It appears that calling convention is architecture-specific, and MIPS will pass the first four arguments on the stack, as Richard Pennington has stated in his answer. So in your case you don't have to specify the extra attribute (which is in fact an x86-specific attribute.)
1,832,704
1,832,776
Default assignment operator in inner class with reference members
I've run into an issue I don't understand and I was hoping someone here might provide some insight. The simplified code is as follows (original code was a custom queue/queue-iterator implementation): class B { public: B() {}; class C { public: int get(); C(B&b) : b(b){}; private: B& b; }; public: C get_c() { return C(*this); } }; int main() { B b; B::C c = b.get_c(); c = b.get_c(); return EXIT_SUCCESS; } This, when compiled, gives me the following error: foo.cpp: In member function 'B::C& B::C::operator=(const B::C&)': foo.cpp:46: error: non-static reference member 'B& B::C::b', can't use default assignment operator foo.cpp: In function 'int main()': foo.cpp:63: note: synthesized method 'B::C& B::C::operator=(const B::C&)' first required here I can go around this by using two separate C variables, as they are supposed to be independent 'C' objects, but this only hides the problem (I still don't understand why I can't do this). I think the reason is that the reference cannot be copied, but I don't understand why. Do I need to provide my own assignment operator and copy constructor?
This problem has nothing to do with inner classes. In C++ you just can't (re)assign references - they need to be initialised when defined. A simpler example is: class B { public: B(int& i) : ir(i) {}; int& ir; }; int main() { int i; B b(i); // Constructor - OK int j; B bb = B(j); // Copy constructor - OK bb = b; // Assignment - Error return 0; }
1,832,809
1,833,378
How to catch divide-by-zero error in Visual Studio 2008 C++?
How can I catch a divide-by-zero error (and not other errors; and to be able to access exception information) in Visual Studio 2008 C++? I tried this: try { int j=0; int i= 1/j;//actually, we call a DLL here, which has divide-by-zero } catch(std::exception& e){ printf("%s %s\n", e.what()); } catch(...){ printf("generic exception"); } But this goes to the generic ... catch block. I understand that the MS-specific __try may be useful here, but I'd prefer standard C++, and in any case I have destructors which prevent the use of __try. CLARIFICATION: The code above is simplified for discussion purposes. Actually, the divide-by-zero is a bug which occurs deep in a third-party DLL for which I do not have the source code. The error depends on the parameter (a handle to a complex structure) which I pass to the library, but not in any obvious way. So, I want to be able to recover gracefully.
Assuming that you can't simply fix the cause of the exception generating code (perhaps because you don't have the source code to that particular library and perhaps because you can't adjust the input params before they cause a problem). You have to jump through some hoops to make this work as you'd like but it can be done. First you need to install a Structured Exception Handling translation function by calling _set_se_translator() (see here) then you can examine the code that you're passed when an SEH exception occurs and throw an appropriate C++ exception. void CSEHException::Translator::trans_func( unsigned int code, EXCEPTION_POINTERS *pPointers) { switch (code) { case FLT_DIVIDE_BY_ZERO : throw CMyFunkyDivideByZeroException(code, pPointers); break; } // general C++ SEH exception for things we don't need to handle separately.... throw CSEHException(code, pPointers); } Then you can simply catch your CMyFunkyDivideByZeroException() in C++ in the normal way. Note that you need to install your exception translation function on every thread that you want exceptions translated.
1,833,224
1,833,237
operator/ overloading
For learning purposes I'm creating big integer class in C++. There are 2 files: big_int.h #ifndef BIG_INT_H #define BIG_INT_H #include class big_int { public: big_int(void); big_int(char*); big_int(QString); ~big_int(); big_int operator+(big_int); big_int operator-(big_int); big_int operator*(big_int); big_int operator/(big_int); }; #endif // BIG_INT_H big_int.cpp #include "big_int.h" big_int::big_int() { } big_int::big_int(QString str) { } big_int::~big_int() { } big_int operator+(big_int b) { return big_int(); } big_int operator-(big_int b) { return big_int(); } big_int operator*(big_int b) { return big_int(); } big_int operator/(big_int) { return big_int(); } Qt Creator returns: C:/Documents and Settings/Admin/My Documents/calculator_1_0/big_int.cpp:31: error: big_int operator/(big_int) must take exactly two arguments. But operator/ takes only 1 parameter. What`s wrong?
That's a typo: you forgot the class name. big_int big_int::operator+(big_int b) { return big_int(); } big_int big_int::operator-(big_int b) { return big_int(); } big_int big_int::operator*(big_int b) { return big_int(); } big_int big_int::operator/(big_int) { return big_int(); } By the way, you should take constant references instead of values : big_int big_int::operator/(const big_int& v) { //... }
1,833,318
1,833,500
Not able to execute CreateProcess with PhotoViewer.dll
In my application there is an interface where user can select any file and open in its default application depending on the file association. I am using FindExecutable and CreateProcessAsUser with Explorer token. Now the problem is in the case of picture files say .jpg, FindExecutable returns "C:\Program Files\Windows Photo Gallery\PhotoViewer.dll", then CreateProcessAsUser returns "ERROR_BAD_EXE_FORMAT(193) %1 is not a valid Win32 application.". I was passing NULL as the second parameter for CreateProcessAsUser, sending executable path in lpCommandLine(eg: CreateProcessAsUser(hToken, NULL, szCmdline, ...)). Can anyone help me in solving this? Regards, Manoj
A Win32 executable has the extension .EXE; a DLL is not an executable, and CreateProcess cannot create a process from just a .DLL. The missing host .EXE here is "rundll32.exe". However, that's not what you are after: you want the Shell behavior. ShellExecuteEx() is usually the most convenient function. AssocQueryString() may be appropriate in this case, with the right flags: ASSOCSTR_EXECUTABLE to get the executable in case it's not yet running, and ASSOCSTR_DDEAPPLICATION etc. in case the application is already running.
1,833,356
1,833,381
Detach a pointer from a shared_ptr?
Possible Duplicate: How to release pointer from boost::shared_ptr? A function of my interface returns a pointer to an object. The user is supposed to take ownership of that object. I do not want to return a Boost.shared_ptr, because I do not want to force clients to use boost. Internally however, I would like to store the pointer in a shared_ptr to prevent memory leaks in case of exceptions etc. There seems to be no way to detach a pointer from a shared pointer. Any ideas here?
What you're looking for is a release function; shared_ptr doesn't have a release function. Per the Boost manual: Q. Why doesn't shared_ptr provide a release() function? A. shared_ptr cannot give away ownership unless it's unique() because the other copy will still destroy the object. Consider: shared_ptr<int> a(new int); shared_ptr<int> b(a); // a.use_count() == b.use_count() == 2 int * p = a.release(); // Who owns p now? b will still call delete on it in its destructor. Furthermore, the pointer returned by release() would be difficult to deallocate reliably, as the source shared_ptr could have been created with a custom deleter. Two options you might consider: You could use std::tr1::shared_ptr, which would require your users to use a C++ library implementation supporting TR1 or to use Boost; at least this would give them the option between the two. You could implement your own boost::shared_ptr-like shared pointer and use that on your external interfaces. You might also look at the discussion at this question about using boost::shared_ptr in a library's public interface.
1,833,362
1,833,389
What is the correct way to initialize a Matrix in C++98?
I'm not a programmer. I want to compile Inkscape on win32, and stumbled on these error messages: ui/dialog/filedialogimpl-win32.cpp:1379: error: in C++98 'matrix' must be initialized by constructor, not by '{...}' make[1]: *** [ui/dialog/filedialogimpl-win32.o] Error 1 The suspect code in filedialogimpl-win32.cpp: ... // Draw the image if(_preview_bitmap_image) // Is the image a pixbuf? { // Set the transformation const Matrix matrix = { scaleFactor, 0, 0, scaleFactor, svgX, svgY }; context->set_matrix (matrix); ... So how is this written in the C++98 standard? I've googled it but no one has encountered that way; could it be unrelated?
Instead of const Matrix matrix = { scaleFactor, 0, 0, scaleFactor, svgX, svgY }; there should be something like this: const Matrix matrix( scaleFactor, 0, 0, scaleFactor, svgX, svgY );
1,833,379
1,835,894
C macro to transform a SVN revision to an integer
I am looking for a C/C++ macro that can transform a random SVN revision like "$Revision: 9 $" or "$Revision: 9999999 $" into an integer or a string. I know that simple functions exist to achieve this, but I want this to be done at compile time. My wish is to write things like: unsigned int rev = SVN_TO_INT("$Revision$");
I agree that it is not doable with macros, however, I found a trick using compiler optimizations. The result is that the expression JL_SvnRevToInt("$Revision: 12345 $") is reduced to a single unsigned integer: 12345 inline unsigned int JL_SvnRevToInt(const char *r) { if ( r == NULL || r[0] == '\0' || r[10] == '\0' || r[11] == '\0' || r[12] == '\0' || r[13] == '\0' ) return 0; const unsigned int count = r[11] == ' ' ? 1 : r[12] == ' ' ? 10 : r[13] == ' ' ? 100 : r[14] == ' ' ? 1000 : r[15] == ' ' ? 10000 : r[16] == ' ' ? 100000 : r[17] == ' ' ? 1000000 : r[18] == ' ' ? 10000000 : r[19] == ' ' ? 100000000 : 0; return (r[11] == ' ' ? 0 : (r[11]-'0') * (count/10) + (r[12] == ' ' ? 0 : (r[12]-'0') * (count/100) + (r[13] == ' ' ? 0 : (r[13]-'0') * (count/1000) + (r[14] == ' ' ? 0 : (r[14]-'0') * (count/10000) + (r[15] == ' ' ? 0 : (r[15]-'0') * (count/100000) + (r[16] == ' ' ? 0 : (r[16]-'0') * (count/1000000) + (r[17] == ' ' ? 0 : (r[17]-'0') * (count/10000000) + (r[18] == ' ' ? 0 : (r[18]-'0') * (count/100000000) + (r[19] == ' ' ? 0 : (r[19]-'0') * (count/1000000000) + 0))))))))); } It supports9 digits revision number, NULL and empty and "$Revision$" strings.
1,833,447
1,833,499
A good example for boost::algorithm::join
I recently wanted to use boost::algorithm::join but I couldn't find any usage examples and I didn't want to invest a lot of time learning the Boost Range library just to use this one function. Can anyone provide a good example of how to use join on a container of strings? Thanks.
#include <boost/algorithm/string/join.hpp> #include <vector> #include <iostream> int main() { std::vector<std::string> list; list.push_back("Hello"); list.push_back("World!"); std::string joined = boost::algorithm::join(list, ", "); std::cout << joined << std::endl; } Output: Hello, World!
1,833,484
1,833,628
C++ frontend only compiler (convert C++ to C)
I'm currently managing some C++ code that runs on multiple platforms from a single source tree (Win32, Linux, Verifone CC terminals, MBED and even the Nintendo GBA/DS). However I need to build an app targeted at an embedded platform for which there is no C++ compiler (C only). I remember that many of the early C++ compilers were only front-ends sitting on existing C compilers (Glockenspiel for example used MSC). Are there any such 'frontend' C++ compilers in use today that will generate C code? Tools Platform ----------- ------------ ______Visual C++ _____ WIN32 / /_______MBED (ARM)_______MBED (ARM dev board). / /_________GCC (x86)________Linux / Source____/___________GCC (ARM)________GBA/DS \ \__________SDA______________Verifone Verix CC Terminals \ \________ARM SDT__________Verifone VerixV CC terminals \ \______????_____________Renesas M8/16/32. \ \____????_____________Z8 family. The last two platforms I have good C compilers for but no C++. As you can see I'm supporting a large variety of platforms and I share a large body of library code (and some app code).
If you use LLVM, llvm-g++ will compile your C++ code to LLVM bitcode, and llc has a backend which converts bitcode to C. You could write commands like this: llvm-g++ -emit-llvm -c foo.cpp -o foo.o llc -march=c <foo.o >foo.c
1,833,683
1,839,660
Boost: how to build Boost under MacOSX
I am trying to build MacOSX universal binaries (I need at least i386/ppc for >=macosx10.3) of Boost. I tried a lot of different methods and options and versions and it all fails in the end with this crash: Boost: what could be the reasons for a crash in boost::slot<>::~slot? I guess this crash is because of a bad Boost build. With Boost-1.41.0, I think the most correct options I tried were this: ./bootstrap.sh && sudo ./bjam architecture=combined macosx-version=10.4 install
It was already the correct command. I found out what caused my crash: you must use in your project exactly the same STL preprocessor definitions that Boost was compiled with. I.e. you cannot enable _GLIBCXX_DEBUG or _GLIBCXX_DEBUG_PEDANTIC in your project when Boost was compiled without them.
1,833,982
1,834,074
In C++, is there a difference between “throw” and “throw ex”?
I'd like to ask this question (also here), but this time about C++. What is the difference in C++ between try { /*some code here*/} catch(MyException& ex) { throw ex;} //not just throw and try { /*some code here*/} catch(MyException& ex) { throw;} //not throw ex Is it just in the stack trace (which in C++ is in any case not a standard as in C# or Java)? (If it makes any difference, I use MSVS 2008.)
throw; rethrows the same exception object it caught, while throw ex; throws a new exception. Beyond the cost of creating a new exception object, there is a real semantic difference. If you have an exception hierarchy where other exception classes derive from MyException and you throw a DerivedClassException;, it can be caught by catch(MyException&). If you then modify the caught exception object and rethrow it using throw;, the type of the exception object will still be DerivedClassException. If you do throw ex; instead, object slicing occurs and the newly thrown exception will be of type MyException.
1,834,164
1,834,274
Access violation writing location
I have the following code: #include <openssl/bn.h> #include <openssl/rsa.h> unsigned char* key; RSA* rsa = RSA_new(); rsa = RSA_generate_key(1024,65537,NULL,NULL); //init pubkey key[BN_num_bytes(rsa->n)] = '\0'; BN_bn2bin(rsa->n, key); printf("RSA Pub: %s\n", key); RSA_free( rsa ); rsa = NULL; The debugger is telling me that I have an issue "Access violation writing location" on the line key[BN_num_bytes(rsa->n)] = '\0'; If I comment out that line the issue just moves down to BN_bn2bin(rsa->n, key); any suggestions on how to fix this issue would be great.
Since key does not point to anything and you dereference it with an array subscript, that is the source of the problem. How does key get its value? You are overwriting or trampling on some other memory block that is not yours, hence the 'Access violation' trapped by Windows. Double-check your code and make sure that the variable has been malloc'd or new'd. As a side note, for your sanity it is best to declare it like this: unsigned char *key = NULL; That way, if you try to access key without it being malloc'd/new'd, you will get a memory exception error (which can easily be nailed down to this), and debugging becomes much easier. Hope this helps, Best regards, Tom.
1,834,230
1,834,330
How to encapsulate a std::set properly?
I have a class named Particle which has a std::set as a member. The class looks like this: class Particle { private: std::set<vtkIdType> cells; std::set<vtkIdType>::iterator ipc; public: Particle() {}; enum state {EXISTS = -1, SUCCESS = 0, ERROR = 1}; state addCell(const vtkIdType cell); int numCells() { return static_cast<int>(cells.size()); } vtkIdType getFirstCell() { return (*(ipc = this->cells.begin()));} vtkIdType getNextCell() { return *(++ipc); } vtkIdType hasNextCell() { ++ipc; if (ipc == this->cells.end()) return false; --ipc; return true; } std::string getOutput(); }; I'm very unhappy with the getFirstCell(), getNextCell() and especially hasNextCell(), they exist because I don't want to expose the set itself. I had to use the way through ++ipc and --ipc because if((ipc+1) == this->cells.end()) gives a compiler error, ipc+1 seems to be the problem. What would be a good way to encapsulate a set and access it? Also, is there a nice way to get rid of the getFirstCell() function? Thanks in advance. Edit: The code I posted is just an example of the classes structure. The "real" class contains more sets and other data that is not that important for this question (I assumed).
I'm not sure why you do not want to expose the set itself, but if it is because you want to ensure that the content of the set cannot be altered outside class Particle just return const iterators which makes the set "read-only", e.g. typedef std::set<vtkIdType>::const_iterator CellIterator; CellIterator beginCell() const { return this->cells.begin(); } CellIterator endCell() const { return this->cells.end(); }
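A minimal, self-contained sketch of this approach (with `int` standing in for `vtkIdType`, which is an assumption for illustration):

```cpp
#include <cassert>
#include <set>

// A class exposing read-only iteration over a private std::set.
class Particle {
public:
    typedef std::set<int>::const_iterator CellIterator;

    // returns true if the cell was newly inserted
    bool addCell(int cell) { return cells.insert(cell).second; }
    int numCells() const { return static_cast<int>(cells.size()); }

    // Const iterators let callers walk the set without modifying it.
    CellIterator beginCell() const { return cells.begin(); }
    CellIterator endCell() const { return cells.end(); }

private:
    std::set<int> cells;
};
```

Callers can then write `for (Particle::CellIterator it = p.beginCell(); it != p.endCell(); ++it)` without being able to modify the set, which also makes getFirstCell/getNextCell/hasNextCell unnecessary.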
1,834,434
1,834,577
How to use protocol buffers?
Could someone please help and tell me how to use protocol buffers? Actually I want to exchange data through sockets between a program running on Unix and another running on Windows in order to run simulation studies. The programs that use sockets to exchange data are written in C/C++, and I would be glad if someone could help me to use protocol buffers in order to exchange data in the form of : struct snd_data{ char *var="temp"; int var1=1; float var2; double var2; } I tried several ways, but still the data are not exchanged correctly. Any help would be very appreciated. Thanks for your help,
You start by defining your message in a .proto file: package foo; message snd_data { required string var= 1; required int32 var1 = 2; optional float var2 = 3; optional double var3 = 4; } (I guess the float and double actually are different variables...) Then you compile it using protoc and then you have code implementing your buffer. For further information see: http://code.google.com/apis/protocolbuffers/docs/cpptutorial.html
1,834,666
1,834,688
In which header file c++ STL hash function object is declared?
If I want to use the hash function object provided in the STL, which header file should I include on Linux? e.g. hash Hf;
#include <hash_map> on some Linux distros it's available here: #include <ext/hash_map> More info here. The hash_map is currently not part of the official STL but it's in TR1 as <unordered_map>.
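On a toolchain with TR1/C++0x support, the standardized spelling is a sketch like this (`std::hash` in `<functional>`, or `std::tr1::hash` on TR1-only compilers; this is the successor to the non-standard hash object, not what ships with the old GCC `hash_map`):

```cpp
#include <cassert>
#include <functional>
#include <string>

// The hash function object itself, used directly rather than through a map.
std::size_t hashOf(const std::string& s) {
    std::hash<std::string> hf;   // the function object
    return hf(s);
}
```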
1,834,769
1,834,783
Character arrays question C++
Is there any difference between the below two snippets? One is a char array, whereas the other is a character array pointer, but they do behave the same, don't they? Example 1: char * transport_layer_header; // Memory allocation for char * - allocate memory for a 2 character string char * transport_layer_header = (char *)malloc(2 * sizeof(char)); sprintf(transport_layer_header,"%d%d",1,2); Example 2: char transport_layer_header[2]; sprintf(transport_layer_header,"%d%d",1,2);
Yes, there is a difference. In the first example, you dynamically allocate a two-element char array on the heap. In the second example you have a local two-element char array on the stack. In the first example, since you don't free the pointer returned by malloc, you also have a memory leak. They can often be used in the same way, for example using sprintf as you demonstrate, but they are fundamentally different under the hood.
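A side note worth showing in code: sprintf("%d%d",1,2) produces "12" plus a terminating '\0', i.e. three bytes, so both two-byte buffers in the question are actually one byte too small. A sketch of the two allocation styles with correctly sized buffers:

```cpp
#include <cassert>
#include <cstdio>
#include <cstdlib>
#include <cstring>

// Heap version: the buffer outlives the function but must be freed.
char* make_header_heap() {
    char* p = static_cast<char*>(std::malloc(3 * sizeof(char)));
    std::sprintf(p, "%d%d", 1, 2);
    return p;
}

// Stack version: the local buffer is released automatically on return,
// so the result must be copied out before the function ends.
void make_header_stack(char out[3]) {
    char buf[3];
    std::sprintf(buf, "%d%d", 1, 2);
    std::memcpy(out, buf, 3);
}
```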
1,835,040
1,835,152
C++ inheritance designing a linked list
I wanted to make a linked list class ListList that inherits from a class List. ListList uses functions from List, but has its own functions. It has its own start pointer that points to the beginning of the list, and its own Node struct that holds a different number of elements. But it looks like, when one of List's functions is called from ListList, List uses its own start pointer and Node. But I want ListList's start pointer and Node to be used. Can someone help me figure this out? I could post some code, but I don't know what part would be relevant... this is the one I called List above class LinkList { public: LinkList(); //constructor that sets the start pointer to NULL, to show that the list is empty ~LinkList(); //destructor that deletes each node in the linked list LinkList(const LinkList &original); //copy constructor void addToken(string token); //creates a node with the given token and places it at the beginning of the linked list string showList(); //returns a string of tokens, separated by commas and spaces bool findToken(string token); //searches linked list for the given token, returns true if the token is in the list string getToken(string word); //searches linked list for a token that begins with the given word. //Returns the full token if there's a token that begins with the given word, else returns an empty string void deleteList(); protected: struct Node //each node of the linked list, held together by the next pointer { string token; bool second_word; //tells whether or not there is a space within the token (a two-word keyword) //This could be easily changed to an int that tells how many words are within the keyword (for multi-word keywords) Node *next; //pointer to the next node of the linked list.
NULL if there is no next node }; Node *start; //pointer to the beginning of the linked list, and the last added node bool twoWordToken(string token); //returns true if there is a space located within a token, meaning the token consists of two words. }; This is the one I called ListList above class LinkListList: public LinkList { public: LinkListList(); //modified constructor initiates the pointers start and ptrNode ~LinkListList(); //modified destructor deletes all nodes and secondaryList nodes LinkListList(const LinkListList &original); //copy constructor bool addSubList(LinkList subList, string commandWord); //calls setPtrNode, then adds the given subList to that node bool findSubToken(string commandWord, string token); //calls setPtrNode, then calls on that node's secondaryList's findToken function //returns true if the findToken function returns true, else returns false string showSubList(string commandWord); //returns a string of tokens, separated by commas and spaces, representing the subList of the given token string getSubToken(string word, string commandWord); //searches commandWord's subList for a token that begins with the given word. //Returns the full token if there's a token that begins with the given word, else returns an empty string private: struct Node //each node of the linked list, held together by the next pointer { string token; bool second_word; //tells whether or not there is a space within the token (a two-word keyword) LinkList secondaryList; //keeps a list of all related words Node *next; }; Node *start; //pointer to the beginning of the linked list Node *ptrNode; //this pointer is used for the functions void setPtrNode(string token); //sets ptrNode to point to the node containing the specified token. ptrNode is NULL if the token could not be found };
Edit: I see. Ideally, you'd have just one linked list implementation that can hold any kind of value, including — and here's the kicker — a compound data structure that has a linked list as one of its fields. In the code you have right now, the inheritance is actually unnecessary as far as I can tell, you're generally duplicating all the hard work of creating a linked list, and you're mixing the linked list data structure with your higher-level object representing the various lists of words. Here is one possible way I might structure the data structures here: The generic linked list: template <typename T> class LinkedList { ... }; A class that uses linked lists to represent whatever list of words you're making: class TokenList { struct Token { string word; LinkedList<string> related; }; LinkedList<Token> list; // Methods to add/search/remove tokens from the lists and sublists }; (Also, I suspect the data structure you're actually seeking is a map, but that's another discussion.)
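A rough sketch of the "one generic linked list" idea (the names and the minimal API are invented for illustration, not taken from the poster's project):

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// A single singly linked list template that can hold any value type,
// including a compound Token that itself contains another list.
template <typename T>
class LinkedList {
public:
    LinkedList() : head(0), count(0) {}
    ~LinkedList() {
        while (head) { Node* n = head; head = head->next; delete n; }
    }
    void addFront(const T& value) { head = new Node(value, head); ++count; }
    bool contains(const T& value) const {
        for (Node* n = head; n; n = n->next)
            if (n->value == value) return true;
        return false;
    }
    std::size_t size() const { return count; }
private:
    struct Node {
        Node(const T& v, Node* nx) : value(v), next(nx) {}
        T value;
        Node* next;
    };
    LinkedList(const LinkedList&);            // copying omitted in this sketch
    LinkedList& operator=(const LinkedList&);
    Node* head;
    std::size_t count;
};
```

A TokenList as described above would then hold a `LinkedList<Token>` where each Token contains its own `LinkedList<std::string>` of related words, so no inheritance between list types is needed.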
1,835,106
1,835,181
Lockless Deque in Win32 C++
I'm pretty new to lockless data structures, so for an exercise I wrote (What I hope functions as) a bounded lockless deque (No resizing yet, just want to get the base cases working). I'd just like to have some confirmation from people who know what they're doing as to whether I've got the right idea and/or how I might improve this. class LocklessDeque { public: LocklessDeque() : m_empty(false), m_bottom(0), m_top(0) {} ~LocklessDeque() { // Delete remaining tasks for( unsigned i = m_top; i < m_bottom; ++i ) delete m_tasks[i]; } void PushBottom(ITask* task) { m_tasks[m_bottom] = task; InterlockedIncrement(&m_bottom); } ITask* PopBottom() { if( m_bottom - m_top > 0 ) { m_empty = false; InterlockedDecrement(&m_bottom); return m_tasks[m_bottom]; } m_empty = true; return NULL; } ITask* PopTop() { if( m_bottom - m_top > 0 ) { m_empty = false; InterlockedIncrement(&m_top); return m_tasks[m_top]; } m_empty = true; return NULL; } bool IsEmpty() const { return m_empty; } private: ITask* m_tasks[16]; bool m_empty; volatile unsigned m_bottom; volatile unsigned m_top; };
Looking at this I would think this would be a problem: void PushBottom(ITask* task) { m_tasks[m_bottom] = task; InterlockedIncrement(&m_bottom); } If this is used in an actual multithreaded environment I would think you'd collide when setting m_tasks[m_bottom]. Think of what would happen if you have two threads trying to do this at the same time - you couldn't be sure of which one actually set m_tasks[m_bottom]. Check out this article which is a reasonable discussion of a lock-free queue.
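For what it's worth, here is a sketch of the "claim the slot before writing it" fix, using std::atomic as a portable stand-in for the Win32 Interlocked* calls. The class name and the bounds handling are invented for illustration, and a real queue would still need a per-slot "ready" flag before consumers could read safely:

```cpp
#include <atomic>
#include <cassert>

// Each producer claims a unique index with an atomic fetch-and-increment
// *before* touching the array, so two threads can never write the same slot
// (unlike the original, which writes first and increments afterwards).
struct BoundedQueue {
    void* slots[16];
    std::atomic<unsigned> bottom;

    BoundedQueue() : bottom(0) {}

    bool push(void* task) {
        unsigned idx = bottom.fetch_add(1);   // claim a slot atomically
        if (idx >= 16) return false;          // full (no reuse in this sketch)
        slots[idx] = task;                    // safe: idx is unique to us
        return true;
    }
};
```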
1,835,209
1,837,286
How can I disable and gray the top level menu item using MFC
I have a dialog application in which I want to have clickable menu items at the top of the dialog. These items do not show a drop-down menu but actually run the associated commands. I did this by setting Popup=False in the dialog's properties and assigning a message ID, but my problem is not having the ability to disable the item properly when it makes no sense for the item to be clickable (depending on internal state stored in the dialog). I have already found out how to disable any popup-parent menu items from http://www.microsoft.com/msj/0299/c/c0299.aspx, but this isn't exactly what I want. I have also found out how to add menu command routing to dialogs from the MSDN knowledge base article KB242577. This works fine for sub-menu items, but not for the top-level menu. I am currently using the following function to do the disabling: void CYourDlg::EnableMenuItem(UINT nCommand, BOOL bEnable) { CMenu* pMenu = GetMenu(); pMenu->EnableMenuItem(nCommand, bEnable ? 0 : MF_DISABLED | MF_GRAYED); } This half works: if you Alt-Tab away from the app it does show as disabled, otherwise it doesn't. Is there a way to invalidate the area programmatically? I think a non-client area message may be involved.
I have not tried it, but in a regular window (not a dialog) CWnd::DrawMenuBar should do what you want. It might work with dialog-based applications as well. void CYourDlg::EnableMenuItem(UINT nCommand, BOOL bEnable) { CMenu* pMenu = GetMenu(); pMenu->EnableMenuItem(nCommand, bEnable ? 0 : MF_DISABLED | MF_GRAYED); DrawMenuBar(); }
1,835,399
1,835,431
Const correctness: const char const * const GetName const (//stuff);
Labelled as homework because this was a question on a midterm I wrote that I don't understand the answer to. I was asked to explain the purpose of each const in the following statement: const char const * const GetName() const { return m_name; }; So, what is the explanation for each of these consts?
Take them from the right. The one before the ; tells the client this is a design level const i.e. it does not alter the state of the object. (Think of this as a read-only method.) Okay, now the return value: const char const *const This is a constant pointer to okay ... here we go boom! You have an extra const -- a syntax error. The following are equivalent: const T or T const. If you take out a const you get a constant pointer to a constant characters. Does that help?
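A compilable sketch of the repaired declaration (class and member names invented for illustration). As noted above, `const T` and `T const` are equivalent, so dropping the duplicate const leaves `char const * const`: a constant pointer to constant characters, returned from a read-only (trailing-const) method:

```cpp
#include <cassert>
#include <cstring>

class Named {
public:
    explicit Named(const char* n) : m_name(n) {}
    // constant pointer to constant chars, from a method that
    // does not alter the object's state
    char const * const GetName() const { return m_name; }
private:
    const char* m_name;
};
```

In practice the top-level const on the returned pointer value is ignored by the language, so `const char* GetName() const` says the same thing more simply.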
1,835,489
1,835,566
Linking an application to libbz2.so.1 rather than libbz2.so.1.0
Here's the current situation I'm in: I want to distribute a binary app on Linux that would run on several distros (not all of them, just the main ones matter at the moment, let's focus on Ubuntu and Fedora for the sake of this discussion). The app in question links to libbz2 for some of its work. A simple "Hello World" will illustrate the situation : /* main.cpp */ #include <iostream> int main(int argc, char* argv[]) { std::cout << "Hello World!\n"; return 0; } The app is built as such : g++ -lbz2 -o test.bin main.cpp My build system is on Ubuntu. When I perform a check with ldd on the resulting binary, it lists libbz2.so.1.0 as a runtime dependency. When I take this app to a Fedora machine, the app doesn't run and ldd reveals that it can't find libbz2.so.1.0. Fedora only has libbz2.so.1 and libbz2.so.1.0.4, but not libbz2.so.1.0. Red Hat's Bugzilla database reveals that this behavior is not a bug, but a feature. I don't really need libbz2.so.1.0, and I would be satisfied with simply linking to libbz2.so.1, but I have yet to figure out how. I have seen a similar question asked here previously, but the accepted answer (You can pass the actual .so file instead of -l on the linker command line) doesn't seem to work. I tried building with the following command : g++ /lib/libbz2.so.1 -o test.bin main.cpp However, ldd still mentions that the app depends on libbz2.so.1.0, even though I passed the full name to g++. Now, the question is, is there a way on Ubuntu to build the app to have it depend only on libbz2.so.1 rather than on libbz2.so.1.0? Thanks.
Why don't you just link statically instead? I have done that in the past for builds on Ubuntu and deployment on RHEL, and it works just fine.
1,835,626
1,835,647
C++ DLL fails when run from different drive letter
I've written a C++ DLL that connects to a Sybase database using the native C library for Sybase. I can build and run the program on my C drive, and others can run it from their C drives, and everything works. But in some situations both my DLL and the Sybase DLL are located on the F drive instead of the C drive. In those cases my DLL apparently fails to load the Sybase DLL. I'm a little unclear on how linking works, but based on my incomplete understanding my best guess is that the C-drive location of the DLL is what gets built into the final DLL, which is what causes it to fail when it runs from a different drive letter. Does that sound like a reasonable explanation? Any other reasons my DLL would fail to load the Sybase DLL when run from a different drive letter? Any idea how I can resolve this? EDIT: Turns out this was the wrong question, but it led me in the right direction. The Sybase DLL uses an ini file to determine database connection details, and I had the path for that hard-coded to the C drive.
Generally speaking, absolute locations are not used inside DLLs. Only the name of the DLL is stored. The places where the system looks for DLLs are described here: http://msdn.microsoft.com/en-us/library/ms682586(VS.85).aspx It IS possible to load a DLL by absolute path - with a technique known as run-time DLL loading - but I suspect not many programs do so.
1,835,761
1,836,117
Why does C# not have C++ style static libraries?
Lately I've been working on a few little .NET applications that share some common code. The code has some interfaces introduced to abstract away I/O calls for unit testing. I wanted the applications to be standalone EXEs with no external dependencies. This seems like the perfect use case for static libraries. Come to think of it third party control vendors could benefit from this model too. Are there some hidden nasties with static libraries that I've missed? Is there any reason why the C# designers left them out? Edit: I'm aware of ILMerge but it doesn't offer the same convenience as static libraries.
.NET does in fact support the moral equivalent of a static library. It's called a netmodule (file extension is usually .netmodule). Read more about it in this blog post. Beware that it isn't well supported by the Visual Studio build tool chain. I think extension methods are a problem as well. ILMerge is the better tool to get this done.
1,835,988
1,836,067
Why can I not access a public function of a base class with a pointer of a subClass?
I am not sure why I am getting an "error C2660: 'SubClass::Data' : function does not take 2 arguments" when I try to compile my project. I have a base class with a function called Data. The function takes one argument, and there is an overload of Data that takes 2 arguments. In my subclass I override the Data function taking 1 argument. Now when I try to call the overload of Data from a pointer to SubClass I receive the above compile error. class Base : public CDocument { public: virtual CString& Data( UINT index); CString Data( UINT index, int pos); }; class SubClass : public Base { public: virtual CString& Data( UINT index); }; void SomeOtherFunction() { SubClass* test = new SubClass(); test->Data( 1, 1);// will not compile ((Base*)test)->Data(1,1); // compiles fine. }
The C++ Programming Language by Bjarne Stroustrup (p. 392, 2nd ed.): 15.2.2 Inheritance and Using-Declarations Overload resolution is not applied across different class scopes (§7.4) … You can access it with a qualified name: void SomeOtherFunction() { SubClass* test = new SubClass(); test->Base::Data(1, 1); } or by adding a using-declaration to SubClass: class SubClass : public Base { public: using Base::Data; virtual CString& Data( UINT index); }; void SomeOtherFunction() { SubClass* test = new SubClass(); test->Data(1, 1); }
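A self-contained sketch of the name-hiding rule (std::string stands in for CString so it compiles anywhere): the one-argument Data in the derived class hides *all* Base overloads until a using-declaration re-imports them.

```cpp
#include <cassert>
#include <string>

class Base {
public:
    virtual ~Base() {}
    virtual std::string Data(unsigned index) { return "base1"; }
    std::string Data(unsigned index, int pos) { return "base2"; }
};

class Derived : public Base {
public:
    using Base::Data;                          // un-hides the two-arg overload
    virtual std::string Data(unsigned index) { return "derived1"; }
};
```

Without the using-declaration, `Derived d; d.Data(1, 1);` fails to compile exactly as in the question, even though the two-argument overload is public in Base.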
1,836,253
1,836,612
How To Remove Characters?
I'm starting (really just starting) an assembly tool. At the moment it only converts a decimal to a hexadecimal, but I want to remove the leading zeros from the result. Here is the code: // HexConvert.cpp #include <iostream> using namespace std; int main() { int decNumber; while (true) { cout << "Enter the decimal number: "; cin >> decNumber; // Print hexadecimal with leading zeros cout << "Hexadecimal: "; for (int i = 2*sizeof(int) - 1; i >= 0; i--) { cout << "0123456789ABCDEF"[((decNumber >> i*4) & 0xF)]; } cout << endl; } return 0; } How can I do this?
You can call this function directly from C++, but you may have to save some registers, depending on the compiler. Have fun retranslating to C++. ;number to convert in [esp+4] ;pointer to string in [esp+8] itoh: mov edi, [esp+8] ;pointer to c string bsr ecx, [esp+4] ;calculate highest set bit and cl, $fc ;round down to nearest multiple of 4 loop: mov eax, [esp+4] shr eax, cl ;mov hex digit to lowest 4 bit and eax, $f ;mask hex digit cmp eax, 10 ;test if digit is in A..F jl numdgt ;it is not add eax, 'A'-'0'-10 ;it is numdgt: add eax, '0' ;ascii converted digit mov [edi], al ;store to string inc edi ;and increment pointer sub cl,4 ;decrement loop counter jnc loop mov byte[edi], 0 ;terminate string ret
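Taking up the "retranslate to C++" invitation, here is one hedged sketch that also does what the question actually asked for, skipping the leading zeros (the starting-shift loop plays the role of the BSR instruction in the assembly):

```cpp
#include <cassert>
#include <string>

// Convert to uppercase hex, starting from the highest non-zero nibble
// so no leading zeros are emitted.
std::string toHex(unsigned value) {
    if (value == 0) return "0";
    int shift = 28;                                     // highest nibble of 32 bits
    while (((value >> shift) & 0xF) == 0) shift -= 4;   // skip leading zeros
    std::string out;
    for (; shift >= 0; shift -= 4)
        out += "0123456789ABCDEF"[(value >> shift) & 0xF];
    return out;
}
```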
1,836,622
1,837,422
Surely there is a way to obtain the full View pulldown for the current folder view?
Motivation: Creating our own file dialog that looks & acts much like the std common dialog Problem: How to obtain the view pull-down for the current folder/shell container Apparent Dead Ends: Query the IShellFolder for its IContextMenu < NULL interface pointer. Query the IShellView for its IContextMenu < NULL interface pointer. IShellFolder::CreateViewObject(IID_IContextMenu...) < very limited context menu (new). IShellFolder::GetUIObjectOf(IID_IContextMenu...) < limited context menu (open, explore,...). Implement IShellBrowser's InsertMenusSB, RemoveMenusSB, and SetMenuSB < The menu is never populated beyond what I populate it with I have spent some time reading Implementing a Folder View and How to host an IContextMenu. This seems to indicate that the final approach above (implementing InsertMenuSB, ...) should work. The IShellView should be populating the shared menu for the IShellBrowser, including its View submenu, with the appropriate items. However, so far all I get from that is an empty menu (unless I populate it with items - in which case, I just get the items I populate it with). Surely there is a way to do this. Windows Explorer arrives at the menu it displays (if you press down ALT on Vista or above) from somewhere. And I cannot imagine that this menu is statically built by Explorer itself - it surely is dynamically created somehow in concert with the currently displayed IShellView to allow for shell extensions to display the correct list of view options (and other menu options). But the documentation on InsertMenuSB, RemoveMenuSB, and SetMenuSB is confusing. It seems to indicate that, as the container server, I should populate the supplied OLEMENUGROUPWIDTHS, "in elements 0, 2, and 4 to reflect the number of menu elements it provided in the File, View, and Window menu groups." 
I have implemented the following to attempt to properly fulfill this contract: HRESULT STDMETHODCALLTYPE ShellBrowserDlgImpl::InsertMenusSB(__RPC__in HMENU hmenuShared, /* [out][in] */ __RPC__inout LPOLEMENUGROUPWIDTHS lpMenuWidths) { TRACE("IShellBrowser::InsertMenusSB\n"); // insert our main pull-downs struct { UINT id; LPCTSTR label; } pull_downs[] = { { FCIDM_MENU_FILE, "File" }, { FCIDM_MENU_EDIT, "Edit" }, { FCIDM_MENU_VIEW, "View" }, { FCIDM_MENU_TOOLS, "Tools" }, { FCIDM_MENU_HELP, "Help" }, }; for (size_t i = 0; i < countof(pull_downs); ++i) { VERIFY(AppendMenu(hmenuShared, MF_POPUP, pull_downs[i].id, pull_downs[i].label)); ASSERT(GetMenuItemID(hmenuShared, i) == pull_downs[i].id); } // set the count of menu items we've inserted into each *group* lpMenuWidths->width[0] = 2; // FILE: File, Edit lpMenuWidths->width[2] = 2; // VIEW: View, Tools lpMenuWidths->width[4] = 1; // WINDOW: Help return S_OK; } Has anyone implemented an Explorer like project that properly exposes the current IShellView's menus to the end-user? Is there documentation / examples on IOLEInPlaceFrame implementations that might shed some light on this murky subject? Ugh!@ - I feel like I must be close - yet not close enough!
Use SVGIO_BACKGROUND to get the background menu of the folder, which should have a View submenu. The index, name, and command ID of the "View" menu item may vary between Windows versions and local languages, so this is something of a hack.
1,836,671
1,837,945
CSocket is not blocking on send
This function is called to serve each client in a new thread. In the Consume function, these archives are read from and written to, but the function returns before the client finishes reading the whole response, so the socket goes out of scope and is closed, creating an exception on the client. I'm assuming that any write on the CArchive should block until it is read on the client side. Am I making a wrong assumption here? The code works fine if I add a delay before going out of scope (see the try block), but that is not a good way to do it; I wonder, is there any way to block until all the data is transferred? Thanks UINT CNetServer::serveClient(LPVOID p) { serveClientParams* params = reinterpret_cast<serveClientParams*>(p); try { AfxSocketInit(); CSocket clientSocket; clientSocket.Attach(params->ClientSocket); CSocketFile file(&clientSocket); CArchive arIn (&file, CArchive::load); CArchive arOut(&file, CArchive::store); params->ServerInstance->Consumer.Consume(arIn, arOut); arOut.Flush(); file.Flush(); //SleepEx(1000,true); works fine if I wait till the data is sent. } catch(int ex) { CMisc::LogWriteWarning(ex, GetLastError(), "Listen Loop Communication"); } catch(CException* ex) { char buffer[1024]; ex->GetErrorMessage(buffer, sizeof(buffer)); CMisc::LogWriteError(buffer, SOCKET_COMUNICATION_FAILED); } catch(...) { CMisc::LogWriteWarning(0, GetLastError(), "abnormal communication termination."); } delete params; return 0; }
I found the solution. In order to close the connection without losing any unsent data, you should basically use the SO_LINGER option; it's a very long story and you can see the details in this article. But the strange part is that MSDN seems very inaccurate when it comes to shutdown: in my experience, the LINGER options had no effect on shutdown, and if you call shutdown before close, then the subsequent close won't block anymore! Finally, here is the new code: UINT CNetServer::serveClient(LPVOID p) { serveClientParams* params = reinterpret_cast<serveClientParams*>(p); try { AfxSocketInit(); CSocket clientSocket; clientSocket.Attach(params->ClientSocket); struct linger linger; linger.l_linger = 9; linger.l_onoff = 128; int fls = 0; int i = clientSocket.SetSockOpt(SO_LINGER, &linger, sizeof(linger)); i = clientSocket.SetSockOpt(SO_DONTLINGER, &fls, sizeof(fls)); CSocketFile file(&clientSocket); CArchive arIn (&file, CArchive::load); CArchive arOut(&file, CArchive::store); params->ServerInstance->Consumer.Consume(arIn, arOut); arOut.Flush(); //BOOL b = clientSocket.ShutDown(SD_BOTH); } catch(int ex) { CMisc::LogWriteWarning(ex, GetLastError(), "Listen Loop Communication"); } catch(CException* ex) { char buffer[1024]; ex->GetErrorMessage(buffer, sizeof(buffer)); CMisc::LogWriteError(buffer, SOCKET_COMUNICATION_FAILED); } catch(...) { CMisc::LogWriteWarning(0, GetLastError(), "abnormal communication termination."); } delete params; return 0; }
1,837,024
1,837,336
Problem with returning arguments that are const references
I know why the following does not work correctly, so I am not asking why. But what bothers me is that it seems to me to be a very big programming hindrance. #include <iostream> #include <string> using namespace std; string ss("hello"); const string& fun(const string& s) { return s; } int main(){ const string& s = fun("hello"); cout<<s<<endl; cout<<fun("hello")<<endl; } The first cout will not work. The second cout will. My concern is the following: is it not possible to imagine a situation where a method implementor wants to return an argument that is a const reference, and where this is unavoidable? I think it is perfectly possible. What would you do in C++ in this situation? Thanks.
I think it is a slight weakness of C++. There's an unfortunate combination of two factors: The function's return is only valid as long as its argument is. Implicit conversion means that the function's argument is not the object it may appear to be. I have no sympathy for people who fail to think about the lifetime of objects they have pointers/references to. But the implicit conversion, which certainly is a language feature with subtle pros and cons, is not making the analysis very easy here. Sometimes implicit conversion is bad news, which why the explicit keyword exists. But the problem isn't that conversion to string is bad in general, it's just bad for this function, used in this incorrect way. The author of the function can in effect disable implicit conversion, by defining an overload: const char *fun(const char *s) { return s; } That change alone means the code which previously was bad, works. So I think it's a good idea in this case to do that. Of course it doesn't help if someone defines a type which the author of fun has never heard of, and which has an operator std::string(). Also, fun is not a realistic function, and for more useful routines you might not want to provide an equivalent which operates on char*. In that case, void fun(const char *); at least forces the caller to explicitly cast to string, which might help them use the function correctly. Alternatively, the caller could note that he's providing a char*, and getting back a reference to a string. That appears to me to be a free lunch, so alarm bells should be ringing where this string came from, and how long it's going to last.
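A compilable sketch of the overload trick described above (fun is the toy function from the question): with a char* overload present, fun("hello") no longer triggers an implicit conversion to a temporary std::string, so nothing dangles.

```cpp
#include <cassert>
#include <string>

const std::string& fun(const std::string& s) { return s; }
const char* fun(const char* s) { return s; }   // chosen for string literals
```

A string literal binds to the `const char*` overload (array-to-pointer decay beats a user-defined conversion), so the returned pointer is the caller's own argument; an actual std::string lvalue still takes the reference overload and is equally safe.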
1,837,092
1,837,140
C++ destruction of temporary object in an expression
Given the following code: #include <iostream> struct implicit_t { implicit_t(int x) : x_m(x) { std::cout << "ctor" << std::endl; } ~implicit_t() { std::cout << "dtor" << std::endl; } int x_m; }; std::ostream& operator<<(std::ostream& s, const implicit_t& x) { return s << x.x_m; } const implicit_t& f(const implicit_t& x) { return x; } int main() { std::cout << f(42) << std::endl; return 0; } I get the following output: ctor 42 dtor While I know this is correct, I'm not certain why. Is there anyone with stdc++ knowledge who can explain it to me?
Temporary objects are destroyed as the last step in evaluating the full-expression (1.9) that (lexically) contains the point where they were created. [12.2/3]
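A small sketch that makes the quoted rule observable (the counter and type are invented for illustration): the temporary lives until the end of the full-expression, so the reference parameter is valid inside the call, and the destructor runs only after the whole statement finishes.

```cpp
#include <cassert>

static int live_temporaries = 0;

struct Tracked {
    Tracked() { ++live_temporaries; }
    ~Tracked() { --live_temporaries; }
};

// Reports how many Tracked objects exist while the call is in progress.
int countDuringCall(const Tracked&) { return live_temporaries; }
```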
1,837,159
1,837,186
Encapsulating a private enum
Previously I've defined enumerated types that are intended to be private in the header file of the class. private: enum foo { a, b, c }; However, I don't want the details of the enum exposed anymore. Is defining the enum in the implementation similar to defining class invariants? const int ClassA::bar = 3; enum ClassA::foo { a, b, c }; I'm wondering if this is the correct syntax.
C++ doesn't have forward declarations of enums, so you can't separate enum "type" from enum "implementation". The following will be possible in C++0x: // foo.h class foo { enum bar : int; // must specify base type bar x; // can use the type itself, members still inaccessible }; // foo.cpp enum foo::bar : int { baz }; // specify members
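A namespace-scope sketch of the same C++0x/C++11 feature (simplified from the class-member form shown above; requires a C++11 compiler):

```cpp
#include <cassert>

enum Bar : int;                         // opaque declaration: base type fixed

struct Holder { Bar value; };           // usable as a complete type already

enum Bar : int { baz = 7 };             // definition supplies the enumerators

int toInt(Bar b) { return static_cast<int>(b); }
```

Because the underlying type is specified up front, the compiler knows the enum's size at the point of the opaque declaration, which is exactly what makes the declaration/definition split possible.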
1,837,165
1,837,206
can two classes see each other using C++?
So I have a class A, where I want to call some class B functions. So I include "b.h". But, in class B, I want to call a class A function. If I include "a.h", it ends up in an infinite loop, right? What can I do about it?
Each class (A and B) should have a header file and an implementation file. Each header file (e.g. A.h) should not include the other header file (e.g. B.h) but may include a forward reference to the other class (e.g. a statement like class B;), and may then use pointers and/or references to the other class in its declaration (e.g. class A may contain a B* as a data member and/or as a method parameter). Each CPP file (e.g. A.cpp) may include more than one header file (e.g. A.h and B.h). It's recommended that each CPP file should include its own header file first (e.g. A.cpp should include A.h and then B.h, whereas B.cpp should include B.h and then A.h). Each header file should contain only the declaration, and not the definition of the class: for example it will list the signatures of the class' methods, but not the method bodies/implementations (the method bodies/implementations will be in the .cpp file, not in the header file). Because the header files don't contain implemention details, they therefore don't depend on (don't need to see) details of other classes; at most they need to know that, for example, B is the name of a class: which it can get from a forward declaratin, instead of by including a header file in another header file.
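A single-file sketch of the pattern (in a real project the pieces would live in A.h, B.h, A.cpp, and B.cpp as described):

```cpp
#include <cassert>

class B;                    // forward declaration: all "A.h" needs

// "A.h": only pointers/references to B appear, so B's definition isn't needed
class A {
public:
    int useB(B* b);
    int value() const { return 1; }
};

// "B.h": likewise mentions A only through a pointer
class B {
public:
    int useA(A* a);
    int value() const { return 2; }
};

// "A.cpp" / "B.cpp": both classes are fully defined here, so members
// of the other class can finally be called
int A::useB(B* b) { return b->value(); }
int B::useA(A* a) { return a->value(); }
```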
1,837,312
1,837,552
Which version of boost should I use with c++ visual-studio-2005?
Does anyone know what version of the Boost Library to use with Visual Studio 2005?
The latest version, 1.41.0. If you're interested, Boost maintains a page with the current status of the regression tests on a variety of platforms, including Visual C++ 7.1, 8.0, and 9.0 (Visual Studio 2003, 2005, and 2008, respectively).
1,837,350
1,837,378
Problem with cross casting in Visual Studio 2003
I am using Visual Studio 2003 to compile and run the following program. There are 4 assignment operations, where I expect 2 of them to run OK and 2 of them to raise exceptions. There is a dynamic cast inside the overloaded = operator which is expected to fail during improper cross casting (casting from Apple to Mango or Mango to Apple). But in my case all 4 operations are failing (raising exceptions). I have run the same code in Visual Studio 2008 and it is working fine, as expected. But moving the entire project to Visual Studio 2008 is difficult. Is this a problem with Visual Studio 2003? If so, is there any way to fix it? Note: class Fruit is read-only and cannot be changed. class Fruit { public: virtual void operator = ( const Fruit& fruit ) { } }; class Apple : public Fruit { public: virtual void operator = ( const Fruit& fruit ) { const Apple& apple = dynamic_cast<const Apple&>( fruit ); } }; class Mango : public Fruit { public: virtual void operator = ( const Fruit& fruit ) { const Mango& mango = dynamic_cast<const Mango&>( fruit ); } }; int main( void ) { Apple apple; Mango mango; Fruit* fruit[] = { &apple, &mango }; *fruit[0] = *fruit[0]; /* Expect to work ok */ *fruit[0] = *fruit[1]; /* Expect an exception */ *fruit[1] = *fruit[0]; /* Expect an exception */ *fruit[1] = *fruit[1]; /* Expect to work ok */ }
You need to check Enable Run-time type information (RTTI) in Project Settings. Category "C++ Language". In VC6 at least it will not be enabled by default.
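Once RTTI is enabled, the behaviour the poster expects can be seen with a small sketch like this (note that because the source reference is const, the cast target must be const too):

```cpp
#include <cassert>
#include <typeinfo>

struct Fruit { virtual ~Fruit() {} };   // polymorphic base: RTTI needs a vtable
struct Apple : Fruit {};
struct Mango : Fruit {};

// A reference dynamic_cast to the wrong derived type throws std::bad_cast.
bool sameKindAsApple(const Fruit& f) {
    try {
        const Apple& a = dynamic_cast<const Apple&>(f);
        (void)a;
        return true;
    } catch (const std::bad_cast&) {
        return false;
    }
}
```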