3,136,930
3,137,164
C++ libcurl http response code issues
This issue/quirk/side-effect is driving me crazy. Near the bottom of the code, the response code of the HTTP interaction is passed by reference into responseCode_. However, it often comes out as 0 even though the site can otherwise be accessed, and it returns too quickly to be a timeout...

All variables are defined; the code below is just a snippet of a C++ method in a class. Any var_ variables are instance based. It runs on several threads, but that should not be a problem: each class that uses libcurl has its own instance on the respective threads. Thanks in advance for any ideas or advice...

```cpp
CURL *curl;
curl = curl_easy_init();

//The URL
curl_easy_setopt(curl, CURLOPT_URL, url.getURLString().c_str());

//Timeout
curl_easy_setopt(curl, CURLOPT_TIMEOUT, &timeout_);

//disable signals to use with threads
curl_easy_setopt(curl, CURLOPT_NOSIGNAL, 1);

//Redirecting
curl_easy_setopt(curl, CURLOPT_MAXREDIRS, 5);
curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1);

//Writing callback
curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, &writerh);
curl_easy_setopt(curl, CURLOPT_HEADERDATA, &head_);

//Writing callback
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, &writerb);
curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body_);

//Headers
struct curl_slist *headers = NULL;
for (std::map<std::string, std::string>::iterator itr = requestHeaders_.begin();
     itr != requestHeaders_.end(); itr++)
{
    std::stringstream header;
    header << itr->first << ": " << itr->second;
    headers = curl_slist_append(headers, header.str().c_str());
}
curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);

//UA
curl_easy_setopt(curl, CURLOPT_USERAGENT, "RDFaS-Bot/1.0 (+http://www.rdfas.com/bot)");

curl_easy_perform(curl); /* ignores error */

//Response code
curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &responseCode_);

//clean headers
curl_slist_free_all(headers);
curl_easy_cleanup(curl);
```

Update: curl_easy_perform was not returning CURLE_OK when the response code was 0, as the marked answer explains. However, debug hooks are very useful too and an excellent suggestion.
The response code is only going to be set if curl_easy_perform() returns CURLE_OK so you should check that first to make sure curl actually performed the request successfully. Are you sure the callback functions for writing the header and body are set up correctly? Also, make sure curl_global_init(CURL_GLOBAL_ALL) is called before these easy_perform threads start. Assuming nothing in the curl handle returned by curl_easy_init() is shared across threads, then the code looks correct.
3,137,105
3,137,392
Sed to remove underscores and promote character
I am trying to migrate some code from an old naming scheme to a new one. The old naming scheme is:

int some_var_name;

The new one is:

int someVarName_;

So what I would like is some form of sed/regexy goodness to ease the process. Fundamentally, what needs to happen is: find a lowercase word with a contained _, replace each underscore with nothing and promote the character to the right of the _ to uppercase, then append an _ to the end of the match. Is it possible to do this with sed and/or awk and regex? If not, why not? Any example scripts would be appreciated. Thanks very much for any assistance.

EDIT: For a bit of clarity, the renaming is for a number of files that were written with the wrong naming convention and need to be brought into line with the rest of the codebase. It is not expected that this do a perfect replace that leaves everything in a compilable state. Rather, the script will be run and then looked over by hand for any anomalies. The replace script would purely ease the burden of having to correct everything by hand, which I'm sure you would agree is considerably tedious.
```shell
sed -re 's,[a-z]+(_[a-z]+)+,&_,g' -e 's,_([a-z]),\u\1,g'
```

Explanation: this is a sed command with two expressions (each in quotes after a -e). s,,,g is a global substitution. You usually see it with slashes instead of commas, but I think this is easier to read when you're using backslashes in the patterns (and no commas). The trailing g (for "global") means to apply the substitution to all matches on each line, rather than just the first.

The first expression appends an underscore to every token made up of a lowercase word ([a-z]+) followed by a nonzero number of lowercase words separated by underscores ((_[a-z]+)+). We replace this with &_, where & means "everything that matched", and _ is just a literal underscore. So in total, this expression says to add an underscore to the end of every underscore_separated_lowercase_token.

The second expression matches the pattern _([a-z]), where everything between ( and ) is a capturing group. This means we can refer back to it later as \1 (because it's the first capturing group; if there were more, they would be \2, \3, and so on). So we're saying to match a lowercase letter following an underscore, and remember the letter. We replace it with \u\1, which is the letter we just remembered, made uppercase by that \u.

This code doesn't do anything clever to avoid munging #include lines or the like; it will replace every instance of a lowercase letter following an underscore with its uppercase equivalent.
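To see the command in action on the question's own example (GNU sed, since the \u case-conversion escape is a GNU extension):

```shell
# feed the old-style declaration through both expressions
echo "int some_var_name;" | sed -re 's,[a-z]+(_[a-z]+)+,&_,g' -e 's,_([a-z]),\u\1,g'
# prints: int someVarName_;
```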
3,137,113
3,137,154
Organizing test project and main executable - C & C++
I have the following directory structure:

root
--src
---tests

src contains the source & header files (C files) for the application. When this application is built, it generates an executable. The tests directory contains unit test cases (C++ files, using UnitTest++ as the testing framework) for the application. In the testing project, I can include header files from the src directory and compilation passes. Problems occur at link time: the linker can't find the object files from the source directory. How can I solve this? What is the normal practice in C & C++ projects for having one executable for the main application and another for tests, where both need the same source files to work with?

Application type: cross platform. Current development env: Linux. Build tool: CMake. Any help would be great!
What I've always done for this is have three projects. I'd have one build setup which builds a static library, containing most of my code. Then I'd have a test project that links with the static library, and a project that contains UI code and such that isn't typically unit tested. Because both projects share the same compiled static library, no recompilation of the files between the projects is necessary. Note: when I say "projects" above, I mean whatever has the scope of a "project" for your build system. For Visual Studio that's going to be a proj file; for CMake it should be a build target.
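With CMake as the build tool, the three-target split might look roughly like this (all file and target names are made up for illustration, and the UnitTest++ link line in particular depends on how the library is installed on your system):

```cmake
# root/CMakeLists.txt -- hypothetical sketch
cmake_minimum_required(VERSION 2.8)
project(MyApp)

# core code shared by the app and the tests, built once as a static library
add_library(mycore STATIC src/engine.c src/parser.c)

# main executable: just the entry point plus the core library
add_executable(myapp src/main.c)
target_link_libraries(myapp mycore)

# unit-test executable links the same compiled library -- no recompilation
add_executable(mytests tests/test_parser.cpp)
target_link_libraries(mytests mycore UnitTest++)
```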
3,137,166
3,137,228
Why does my timer stop ticking?
I'm creating a drawing application that renders OpenGL when it gets a WM_SCROLL or WM_MOUSEMOVE. The thing is that there are a lot of mouse moves and I only need it to render a maximum of 60 frames per second. So I created a bool in my engine class called CanRender, and in my render() proc I do:

```cpp
if (!CanRender) {
    return;
}
CanRender = false;
```

Basically it prevents rendering more than 60 FPS. I create the timer in WM_CREATE, and when I get a WM_TIMER I set CanRender to true. I made it beep so I know the timer is running. As soon as I start scrolling or moving the mouse, the beeping stops and I no longer see rendering. Why would it stop my timer? Also, when I minimize, the timer starts again; then when I re-maximize, it stops again. Thanks.

Message pump:

```cpp
// Main message loop:
while (GetMessage(&msg, NULL, 0, 0))
{
    if (!TranslateAccelerator(msg.hwnd, hAccelTable, &msg))
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
}
return (int) msg.wParam;
```

Creation:

```cpp
case WM_CREATE:
    //Set Window Title
    SetWindowText(hWnd, engineGL.current.caption.c_str());

    SetTimer(hWnd,              // handle to main window
             120,               // timer identifier
             17,                // 60 fps interval
             (TIMERPROC) NULL); // no timer callback
```
Why make it so complicated? Drawing in a Windows application is usually done only in the WM_PAINT message, triggered by the RedrawWindow function. You can call RedrawWindow within WM_SCROLL and WM_MOUSEMOVE. Multiple calls to RedrawWindow (WM_PAINT messages) will be collapsed if your application can't keep up with drawing. Also, if you set OpenGL to synchronize with your monitor's vertical retrace, you will not exceed a certain refresh rate.

As to your question... I guess there are many WM_SCROLL and WM_MOUSEMOVE messages, and those cannot be collapsed. So if you do your drawing inside them (which takes time), you block your message queue, and the low-priority WM_TIMER messages cannot be handled. Thus, you don't hear beeps.
3,137,231
3,137,240
Is it possible not to include a class variable in a class header file?
I want to hide an implementation in the implementation file. If the object is not public, I don't want the object's header to leak everywhere my class is used. Suppose I have header file A.h for my class A:

```cpp
#include "Foo.h"

class A {
private:
    Foo foo;
public:
    void do_stuff();
};
```

Now wherever I include A.h, Foo.h is also included. But I have no use for class Foo anywhere outside of class A. I would rather not have this #include "Foo.h" line. Is there any way to move the declaration of the 'foo' variable inside the implementation A.cpp? I suspect one possible solution involves adding a layer of abstract class (interface analogy). Is it the best solution? Thank you.
Use a pointer to Foo and allocate it dynamically, rather than using a member object. Then you only need to include Foo.h in A.cpp:

```cpp
class Foo;  // forward declaration -- no #include "Foo.h" needed

class A {
private:
    Foo* foo;
public:
    void do_stuff();
};
```
3,137,267
3,137,288
C++: using std::wstring in API function
I'm using the SHGetSpecialFolderLocation API function. My application is set to "Use Unicode Character Set". Here's what I have so far:

```cpp
int main ( int, char ** )
{
    LPITEMIDLIST pidl;
    HRESULT hr = SHGetSpecialFolderLocation(NULL, CSIDL_PERSONAL, &pidl);

    /* Confused at this point */
    wstring wstrPath;
    wstrPath.resize ( _MAX_PATH );
    BOOL f = SHGetPathFromIDList(pidl, wstrPath.c_str () );
    /* End confusion */
```

The error I'm getting is:

error C2664: 'SHGetPathFromIDListW' : cannot convert parameter 2 from 'const wchar_t *' to 'LPWSTR'

Can someone help? What's the proper C++ way to do this? Thanks!
The second parameter is an out parameter, so you can't just pass c_str() (which is const) directly. It would probably be simplest just to do:

```cpp
wchar_t wstrPath[MAX_PATH];
BOOL f = SHGetPathFromIDList(pidl, wstrPath);
```

MAX_PATH is currently 260 characters.
3,137,336
3,137,551
difference between cstdint and tr1/cstdint
What is the difference between <cstdint> and <tr1/cstdint>? (Apart from the fact that one puts things in namespace std:: and the other in std::tr1::.) Since this stuff isn't standard yet, I guess it's compiler specific, so I'm talking about gcc. To compile with the non-tr1 one I must compile with -std=c++0x, but there is no such restriction when using tr1. Is the answer perhaps that there is no difference, but you can't go around adding things to std:: unless they are, well, standard? So until C++0x is standardised an error must be issued when using <cstdint>, but you don't need to worry when adding to the tr1:: namespace, which makes no claim to things in it being standard? Or is there more to this? Thanks.

p.s. - If you read "std" as standard, as I do, I do apologise for the overuse of the word in this Q.
At least as far as I know, there was no intent to change <cstdint> between TR1 and C++0x. There's no requirement for #including <cstdint> outside of C++0x mode to result in an error, though -- officially, it's nothing more or less than undefined behavior. An implementation is allowed to specify the exact behavior, and in this case gcc does.
3,137,601
3,934,294
Preprocessor macro based code yields a C2400 error
```cpp
#define CANCEL_COMMON_DIALOG_HOOK(name) \
    void __declspec(naked) ##name##CancelCommonDialogHook(void) \
    { \
        __asm \
        { \
            add esp, [k##name##CancelCommonDialogStackOffset] \
            jz RESTORE \
            jmp [k##name##CancelCommonDialogNewFileRetnAddr] \
        RESTORE: \
            pushad \
            call DoSavePluginCommonDialogHook \
            test eax, eax \
            jnz REMOVE \
            popad \
            jmp [k##name##CancelCommonDialogRestoreRetnAddr] \
        REMOVE: \
            popad \
            jmp [k##name##CancelCommonDialogRemoveRetnAddr] \
        } \
    }
```

Using the above macro causes the compiler to throw this error:

error C2400: inline assembler syntax error in 'second operand'; found 'RESTORE'

What have I done incorrectly?

EDIT:

```cpp
void __declspec(naked) #name##CancelCommonDialogHook(void) \
{ \
    __asm add esp, [k##name##CancelCommonDialogStackOffset] \
    __asm jz RESTORE \
    __asm jmp [k##name##CancelCommonDialogNewFileRetnAddr] \
RESTORE: \
    __asm pushad \
    __asm call DoSavePluginCommonDialogHook \
    __asm test eax, eax \
    __asm jnz REMOVE \
    __asm popad \
    __asm jmp [k##name##CancelCommonDialogRestoreRetnAddr] \
REMOVE: \
    __asm popad \
    __asm jmp [k##name##CancelCommonDialogRemoveRetnAddr] \
}
```

The above code doesn't work either:

error C2447: '{' : missing function header (old-style formal list?)
error C2014: preprocessor command must start as first nonwhite space
Fixed it by enclosing the function body in another scope.
3,137,622
3,137,686
C++ Importing and Renaming/Resaving an Image
Greetings all. I am currently a rising sophomore (CS major), and this summer I'm trying to teach myself C++ (my school codes mainly in Java). I have read many guides on C++ and gotten to the part with ofstream, saving and editing .txt files. Now I am interested in simply importing an image (jpeg, bitmap, not really important) and renaming the aforementioned image. I have googled and asked around, but to no avail. Is this process possible without the download of external libraries (I downloaded CImg)? Any hints or tips on how to expedite my goal would be much appreciated.
Renaming an image is typically about the same as renaming any other file. If you want to do more than that, you can also change the data in the Title field of the IPTC metadata. This does not require JPEG decoding, or anything like that -- you need to know the file format well enough to be able to find the IPTC metadata, and study the IPTC format well enough to find the Title field, but that's about all. Exactly how you'll get to the IPTC metadata will vary -- navigating a TIFF (for one example) takes a fair amount of code all by itself.
3,138,053
3,139,534
SiteLock Implementing IObjectSafety BUT Not Working in IE
I have used the SiteLock 1.15 template to restrict domain access to my ActiveX control so that only a list of pre-approved domain can use it. Everything compiles ok, and even the SiteList.exe application that is supplied with the SiteLock template correctly shows the list of domains that I defined inside the ActiveX Control. Also, The OLE Object Viewer correctly shows my ActiveX component exposing the IObjectSafety and IObjectWithSite interfaces. It's only IE8 that is acting up and not honoring the IObjectSafety interface, so what could be wrong?
It turns out I had the test website in Exploder's trusted zone, with all security options turned off; so Exploder didn't even negotiate with the IObjectSafety interface. When I modified the security option, Exploder started communicating with the interface so all is rainbows and bubbles.
3,138,090
3,138,154
How to rotate yuv420 data?
I need to know how to rotate an image, which is in yuv420p format by 90 degrees. The option of converting this to rgb, rotating and again reconverting to yuv is not feasible. Even an algorithm would help. Regards, Anirudh.
I suppose it is not planar YUV; if it is already planar, it's quite easy (skip the first and last steps). You say you have YUV 4:2:0 planar, in which case I do not understand why you have difficulties. Otherwise:

1. Convert it to planar first: allocate space for the planes and put the bytes in the right places according to the packed YUV format you have.
2. Rotate the Y, U and V planes separately. The "colour" (U, V) information for each block is kept the same.
3. Recombine the planes to obtain the packed YUV you had at the beginning.

This always works fine if your image dimensions are multiples of 4. If not, take care...
3,138,283
3,138,290
CreateThread issue in c under window OS
I have the following code, which initiates a thread:

```c
int iNMHandleThread = 1;
HANDLE hNMHandle = 0;

hNMHandle = CreateThread(NULL, 0, NMHandle, &iNMHandleThread, 0, NULL);

if (hNMHandle == NULL)
    ExitProcess(iNMHandleThread);
```

My question is: what will happen if I run this code while the thread is already in the running state? I want to initiate multiple independent threads of NMHandle. Kindly give me some hints to solve this problem.
Each time you call CreateThread, a new thread is started that is independent of any other currently-running threads. Whether your "NMHandle" function is capable of running on more than one thread at a time is up to you: for example, does it rely on any global state?
3,138,732
5,360,631
How to implement Outlook Express alike address field control
I was thinking about inserting some object (button, panel or static text) into a textctrl, like Outlook Express does. You can see from a pic that "group1" is an object: you can double click on it, and when you delete it, the whole text gets deleted, not just a part of it. I did some research, and this text field is just a simple RichEdit20W. I understand that I could do it by implementing some logic in a text field and so on, but it would not be the proper way of doing it. I wonder how they did that. Should I implement the IRichEditOleCallback interface to achieve that? I will appreciate your answer very much. Thanks!
The ability to insert an object is built-in to the RichEdit control, that's what Outlook is using, and you can do the same yourself. It seems you would need to implement your own OLE object for your own item, and then use the RichEdit's COM interface to insert it. You can see a sample on MSDN that gets the COM interface and inserts an object here.
3,138,937
3,139,717
How to convert UTM Coordinate in C to Latitude/Longitude using WGS84 Datum?
Does anyone know where I can find open source code (in c++) that converts a UTM point to Geo (WGS 84)? Thanks, Liran
Take a look at GDAL. Specifically the code used here. There is also a Warp API tutorial here which outlines the basic use of the Warp API. Alternatively, you can use the more lightweight PROJ.4 library (GDAL uses this internally).
3,138,977
5,322,523
Building ActiveQt (COM) applications with MinGW
I am using Qt 4.6.3 with MinGW on Windows to build Qt apps and now need to add a COM interface to my application. I enabled ActiveQt but was getting post-link errors because I was missing a copy of the MIDL compiler. I downloaded a copy of the latest MS Windows SDK, which includes MIDL, but now MIDL complains it cannot find cl.exe. The only conclusion I can draw is that you can only build ActiveQt applications using the MS compiler, which I would rather avoid. Is a way to get this working with MinGW or am I out of luck?
Using the MS compiler and tools seems to be the only reliable way to get this working.
3,139,086
3,139,154
c++ boost conditional remove from container
I want to do something like the C# LINQ style:

```cpp
SomeColection<SomeType> someColection;
someColection.Remove(something => something > 2);
```

and have it remove all the things that are bigger than 2 (or match any other boolean condition)... I'm using Boost in the project...
First, you need a simple template wrapper:

```cpp
template <class Container, class UnaryPredicate>
void erase_if(Container& container, UnaryPredicate pred)
{
    container.erase(
        std::remove_if(container.begin(), container.end(), pred),
        container.end()
    );
}
```

It's a well-known idiom; however, it won't be possible with map or set, as they maintain their own order. Then, you can use Boost.Lambda to get the syntax you wish for writing the predicate itself:

```cpp
using boost::lambda::_1;

SomeCollection<Type> someCollection;
erase_if(someCollection, _1 > 2);
```
3,139,144
3,139,226
Definitive function for get elapsed time in miliseconds
I have tried clock_gettime(CLOCK_REALTIME) and gettimeofday() without luck, and the most basic, clock(), which returns 0 to me (?). But none of them count the time under sleep. I don't need a high resolution timer, but I need something for getting the elapsed time in ms.

EDIT: Final program:

```cpp
#include <iostream>
#include <string>
#include <time.h>
#include <sys/time.h>
#include <sys/resource.h>

using namespace std;

// Non-system sleep (wasting cpu)
void wait ( int seconds )
{
    clock_t endwait;
    endwait = clock () + seconds * CLOCKS_PER_SEC ;
    while (clock() < endwait) {}
}

int show_time()
{
    timeval tv;
    gettimeofday(&tv, 0);

    time_t t = tv.tv_sec;
    long sub_sec = tv.tv_usec;

    cout<<"t value: "<<t<<endl;
    cout<<"sub_sec value: "<<sub_sec<<endl;
}

int main()
{
    cout<<show_time()<<endl;
    sleep(2);
    cout<<show_time()<<endl;
    wait(2);
    cout<<show_time()<<endl;
}
```
You need to try gettimeofday() again; it certainly counts wall clock time, so it counts while the process sleeps as well. (Note that gettimeofday() takes two arguments; pass NULL for the unused timezone parameter.)

```cpp
long long getmsofday()
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (long long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

...

long long start = getmsofday();
do_something();
long long end = getmsofday();

printf("do_something took %lld ms\n", end - start);
```
3,139,287
3,139,308
question on string in c++
Does this work on a string in C++?

```cpp
string s="lomi";
cout<<s<<endl;
```

And what is bad in this code?

```cpp
#include <iostream>
#include <cstring>
using namespace std;

int main(){
    string s=string("lomi");
    for (int i=0;i<s.length();i++){
        s[i]= s[i]+3;
    }
    std::cout<<s<<std::endl;
    return 0;
}
```
Yes (after you have #included the corresponding headers, are using the std namespace, etc.).

Edit: What's wrong with your code is that you should #include <string> instead of #include <cstring>. <cstring> is C's string.h header, which defines functions like strlen, strcpy, etc. that manipulate a C string, i.e. a char*. <string> defines C++'s string class, which is what you're using.
3,139,414
3,139,545
Qt programming: More productive in Python or C++?
Trying to dive into Qt big time but haven't done a large project with it yet. Currently using Python, but I've been thinking -- which is really the better language to use in terms of programmer productivity? In most comparisons between the languages, Python is the obvious answer, because you don't have to mess with memory management and all that. However, with Qt I'm not so sure. It provides enough added features to C++ that (from what I can tell) a line of Python code is roughly equal to a line of C++ code most of the time (excluding some additional things like class definitions and structure components). Qt does nearly all the memory management for you as long as you stick with its classes, and provides equivalents to the nice containers you would find in Python. I've always preferred statically typed languages, but have gotten on the Python bandwagon for various reasons. If programmer productivity is similar with C++, however, I may jump back that way for its other benefits -- more efficient code and fewer dependencies for users to install. Thoughts?
My opinion (having tried out C++ and Python, both in general and specifically in the Qt case): Python always wins in terms of programmer productivity and peace of mind. PyQt represents Qt very well, so the question isn't really "Qt with Python" vs. "Qt with C++"; in general Python is more productive, unless of course you need speed or something that isn't available in Python. The best way for you to arrive at the answer would be to write a simple project first in C++ and then the same project in Python and compare. That could be biased towards Python, though, since after coding the project once you may find it easy in Python, so try another project too and this time do it first in Python and then in C++.
3,139,558
3,139,577
Scope of pure virtual functions during derived class destruction - In C++
During destruction of a derived class object, I first hit the derived class destructor and then the base class destructor (which is as expected). But I was curious to find out: at what point do the functions of the derived class go out of scope (are destroyed)? Does it happen as soon as control leaves the derived class destructor and goes toward the base? Or does it happen once we are done with the base class destructor as well? Thanks.
Once the destructor of the most derived class finishes, the dynamic type of the object can be considered that of the next less-derived type. That is, a call to a virtual method in the base destructor will find that the final overrider at that point in time is at base level. (The opposite occurs during construction.)

```cpp
struct base {
    base() { std::cout << type() << std::endl; }
    virtual ~base() { std::cout << type() << std::endl; }
    virtual std::string type() const { return "base"; }
};

struct derived : base {
    virtual std::string type() const { return "derived"; }
};

int main() {
    base *p = new derived;
    std::cout << p->type() << std::endl;
    delete p;
}
// output:
// base
// derived
// base
```
3,139,814
3,139,834
How to refactor an existing class to become polymorphic?
I have a class that is used as a member in many places in my project. Now, instead of this class I want to have polymorphism, with the actual object created by some kind of factory. I have to choose between:

1. Changing all the places where I use the class - calling the factory and using a pointer instead of the object directly.
2. Changing the class to be just a wrapper that calls functions of a new polymorphic class I will create.

Which strategy should I choose?
Change all the places where you use the class to call the factory and use a pointer instead of the object directly. That's best. It seems painful at first, but it's clean and more extensible than implementing a wrapper because you didn't feel like doing a search for `new MyClass(`. Once you list all the places with the new, you'll see that it isn't really all that bad a job.
3,139,862
3,139,886
Simple simple template returning odd numbers?
EDIT BEFORE YOU READ: Sorry... I didn't add newlines so it appeared jumbled. I can't delete the question because I'm not registered yet; sorry for wasting your time, guys.

I just used a template for the first time (for finding the MIN of two numbers) instead of a macro, and I liked it! But when I tried to modify it and make my own template it failed completely. Here is my code:

```cpp
#include <stdio.h>

template <class T>
T min(T a, T b) {
    return a < b ? a : b;
};

//My attempt now.. because add could be int, float, etc; I wanted a template.
template <class T>
T add(T a, T b) {
    return a + b;
};

int main(){
    printf("%f\n", min(1.3, 2.2)); //(Does not appear in console?)
    printf("%d", add(1, 10));      //1.300000 (how is an int shown as float? lol)
    printf("%f", add(5.1, 7.34));  //1112.440000
    return 0;
}
```

Now the strange results are in the comments. min works fine, but when I change it from comparison to "a + b" it stops min from working and hands me weird float values?! Am I using it the wrong way? Is it supposed to be something else? What does that mean? I understand the basics, so a simple explanation would be alright. Thank you!
Try adding linebreaks after the other lines too. What happens is this:

1. It prints min(1.3, 2.2), which is 1.300000.
2. It prints a linebreak.
3. It prints add(1, 10), which is 11.
4. It prints add(5.1, 7.34), which is 12.440000.

Since there is no linebreak between steps 3 and 4, it prints the numbers directly after each other, making it look like this: 1112.440000.
3,140,027
3,140,508
Fixing "comparison is always false ..." warning in GCC
I'm having a problem which I'm sure is simple to fix, but I'm at a loss... I have a template that performs the following code:

```cpp
T value = d;
if ( std::numeric_limits< T >::is_signed )
{
    if ( value < 0 )
    {
        *this += _T( "-" );
        value = -(signed)value;
    }
}
```

Now, for obvious reasons, GCC is giving me a warning (comparison is always false due to limited range of data type) when this code is compiled for an unsigned type. I fully understand the reasoning behind this, and I put in the numeric_limits check to see if I could get the compiler to shut up about it (it worked for MSVC). Alas, under GCC I get the warning. Is there any way (short of disabling the warning, which I don't even know if you can do with GCC) to fix this warning? The code will never get called anyway, and I would assume the optimiser will compile it out as well, but I can't get rid of the warning. Can someone give me a solution to this? Cheers!
Simpler solution:

```cpp
template <typename T>
inline bool isNegative(T value)
{
    return std::numeric_limits< T >::is_signed && value < 0; // Doesn't trigger warning.
}

T value = d;
if ( isNegative(value) ) // Doesn't trigger warning either.
{
    *this += _T( "-" );
    value = -1 * value;
}
```
3,140,088
3,140,194
Test for overhead of virtual functions
I set up a (perhaps very unscientific) small test to determine the overhead of virtual functions in one-level single inheritance, and the results I got were, well, exactly the same when accessing the derived class polymorphically or when accessing it directly. What was a bit surprising was the order of magnitude of computation time that is introduced when any function is declared virtual (see results below). Is there so much overhead when declaring member functions as such, and why is it still present even when accessing the derived class directly? The code is as follows:

```cpp
class base
{
public:
    virtual ~base() {}
    virtual uint func(uint i) = 0;
};

class derived : public base
{
public:
    ~derived() {}
    uint func(uint i) { return i * 2; }
};

uint j = 0;
ulong k = 0;
double l = 0;
ushort numIters = 10;

base* mybase = new derived; // or derived* myderived = ...

for(ushort i = 0; i < numIters; i++)
{
    clock_t start2, finish2;
    start2 = clock();
    for (uint j = 0; j < 100000000; ++j)
        k += mybase->func(j);
    finish2 = clock();
    l += (double) (finish2 - start2);
    std::cout << "Total duration: " << (double) (finish2 - start2)
              << " ms." << std::endl;
}

std::cout << "Making sure the loop is not optimized to nothing: " << k << std::endl;
std::cout << "Average duration: " << l / numIters << " ms." << std::endl;
```

Results:

base* mybase = new derived; gives an average of ~338 ms.
derived* myderived = new derived; gives an average of ~338 ms.
Eliminating inheritance and removing virtual functions gives an average of ~38 ms. That's almost 10 times less!

So basically, if any function is declared virtual the overhead will always be identically present, even if I don't use it polymorphically? Thanks.
Accessing it "directly" is doing the same work as accessing it "indirectly". When you call the function on myderived, the pointer stored there could point to some object of some class derived from derived. The compiler can't assume that it really is a derived object; it might be an object of a further derived class that overrides the virtual function, so there needs to be virtual function dispatch just like in the mybase case. In both cases the function is looked up in the virtual function table before it is called.

To call the function non-polymorphically, don't use a pointer:

```cpp
derived myderived;
myderived.func(1);
```

When you remove the virtual functions, the compiler can inline the function call, so that you basically end up with a simple loop:

```cpp
for (uint j = 0; j < 100000000; ++j)
    k += j * 2;
```

This is much faster, since you save the overhead of 100000000 function calls, and the compiler might even be able to optimize the loop further in ways it couldn't if there were a function call in it. Note also that the difference between the inlined version and the virtual function call would be much smaller if the function did some real work. In this example the function body takes almost no time at all, so the cost of calling the function outweighs the cost of executing the body.
3,140,190
3,140,221
When is (this != this) in C++?
I have a very strange question. I have a class/function:

```cpp
class MCBSystem {
    [...]
    template <class Receiver>
    void setCallBack(int i, Receiver* receiver, void(Receiver::*function)(void*)) {
        iCallBacks.at(i) = new CallBack<Receiver>(receiver, function, this);
    };
};
```

And I inherit it (multiply) in another class:

```cpp
class MenuBox : public OverlayBox, public HIDListener, public FANLib::MCBSystem {
    [...]
};
```

Now, if I call the setCallBack function:

```cpp
menuBox->setCallBack(MenuBox::CLICKED, this, &SubMain::widgetClicked);
```

then menuBox has a value of, say, 0x06cf22b8, but inside setCallBack, this is 0x06cf2370. Can someone explain what on earth is going on?

[EDIT:] The true question is: if I need to store 'this' inside 'setCallBack', how can I check later that 'menuBox == this'?

Many thanks in advance!
Yes, the this pointer has to be patched to allow for multiple inheritance polymorphism. As a zeroth-order approximation, an instance of a class C that inherits from A and B can be thought of as an instance of A followed by an instance of B. Now if you have a pointer to a C instance and convert that to a pointer to B, the this pointer must be different because the B subobject is located after the A subobject in memory. See this paper for an in-depth discussion. Little test program: #include <iostream> struct A { int i; }; struct B { int j; }; struct C: A, B { }; #define PRINT(expr) std::cout << #expr " = " << expr << std::endl int main() { C* c = new C; B* b = c; PRINT(b); PRINT(c); PRINT(static_cast<B*>(c)); }
3,140,294
3,140,326
Any performance reason to put attributes protected/private?
I "learned" C++ at school, but there are several things I don't know, like where or what a compiler can optimize; it seems I already know that inline and const can boost a little... If performance is an important thing (game programming for example), does making class attributes non-public (private or protected) allow the compiler to produce more optimized code? Because all my previous teachers were saying was that it's more "secure" or "prevents unwanted or unauthorized class access/behavior", but in the end, I'm wondering if making attributes non-public can limit the scope and thus speed things up. I don't criticize my teachers (should I?), but the programming class I was in wasn't very advanced...
The teachers were right to tell you to use private and protected to hide implementation, and to teach you about information hiding instead of proposing questionable performance optimizations. Try to think of an appropriate design first and of performance second; in 99% of cases this will be the better choice (even in performance critical scenarios). Performance bottlenecks can appear in a lot of unpredicted places and are much easier to deal with if your design is sound. To directly answer your question however: any reduction in scope may help the compiler to do certain optimizations, but off the top of my head I cannot think of any right now that relate to making members private.
3,140,387
3,140,393
What is purpose of _p.h files?
In Qt Source files, there are two versions of header files, such as: qxmlstream.h qxmlstream_p.h Why are there _p.h files?
They're generally private header files, used so that components of a subsystem know about everything but users don't need to. In other words, something that multiple C source files in Qt might want to know about would be in the private header files if the users of Qt didn't need to know about them. One example might be a customised memory allocator for your subsystem. Perhaps you know that every memory allocation you make is 128 bytes; then you can provide such an allocator: void * malloc128 (void) { ... } Since this is likely to be of dubious value to users of your subsystem, there's no point publishing it as part of the official API, but every single one of your own source files needs the prototype, so you'd put it in the private header file. Then your own code uses: #include "mysubsystem_p.h" while users of your API use: #include "mysubsystem.h"
3,140,683
3,140,711
what is a good alternative to this ugly construct, in c++?
This is my code (simplification of a real-life problem): class Foo { public: void f(const string& s) { if (s == "lt") { return lt(); } else if (s == "lte") return lte(); } else if (s == "gt") return gt(); } else if (s == "gte") return gte(); } } void lt() { /* skipped */ } void lte() { /* skipped */ } void gt() { /* skipped */ } void gte() { /* skipped */ } }; This is how I would do it in PHP/Python/JavaScript/many other languages (example in PHP): class Foo { function f($s) { return $this->$s(); } function lt() { /* skipped */ } function lte() { /* skipped */ } function gt() { /* skipped */ } function gte() { /* skipped */ } } How can I make my C++ code as elegant as this PHP example? Thanks in advance.
There is no reflection in C++. However, something like a std::map<std::string, void (Foo::*)()> should do the trick. EDIT: Here is some ugly code to do it maintainably. Note the following: this can probably be improved in various ways; please add code to deal with non-existent tokens, as I did no error checking. #define BEGIN_TOKEN_MAP \ template <int n> \ struct add_to_ \ { \ static void act() {} \ }; \ std::map<std::string, void (Foo::*)()> map_; #define DECLARE_TOKEN(str, n) \ template <> struct add_to_<n> \ { \ static void act() { map_[#str] = &Foo::str; add_to_<n+1>::act();} \ };\ void str() #define END_TOKEN_MAP \ void init_map() { add_to_<0>::act(); } \ void process_token(std::string s) { (this->*map_[s])(); } class Foo { BEGIN_TOKEN_MAP DECLARE_TOKEN(lt, 0) { ... } DECLARE_TOKEN(gt, 1) { ... } ... END_TOKEN_MAP Foo() { init_map(); } void f(const std::string& s) { process_token(s); } };
3,140,797
3,140,957
Rotate a 2D array in-place without using a new array - best C++ solution?
One of my students asked me about this kind of homework with C++ arrays. It seemed quite interesting to me, so, though I have solved this problem, I wanted to share my solution with you and hear other variants and opinions. The problem is the following: Problem Given is a 2D dynamic square matrix (array) A(nxn). It is required to rotate the array by 90 degrees anticlockwise, that is to say, after rotation the A[1,1] field should contain the value of A[1,n] and the A[1,n] field should contain the value of A[n,n]. It is also required that while solving this problem you do not use any other array. My solution I told the student to do the following (representing the steps schematically): I suggested defining a class which, as its member, will have the 2D array, and defining an operation which will return a reference to the A[j,n+1-i] element when the user requests the A[i,j] one. In short, I suggested creating a wrapper for the array and manipulating the array through the wrapper.
Wikipedia has an article on in-place matrix transposition. Consider: a b c e f g x y z transpose: a e x b f y c g z rotated 90 deg CCW: c g z b f y a e x So after you have the transpose, reverse the order of the rows, which you can do in place easily.
3,140,875
3,140,901
How to get rid of "C++ exception specification ignored" warning
I recently got a dll that has been implemented by others. I have to use it in my application. In the header file of their class they have the function declaration void func1() throw (CCustomException); Now when i compile it am getting the warning, C++ exception specification ignored except to indicate a function is not _declspec(nothrow) I read the MSDN - Documentation but couldn't understand it clearly. Also, I don't want to disable the warning just because it is showing up. I want to know what I am doing wrong instead of disabling it. I thought my function, say myfunc() accessing that func1() from the dll doesn't have that Exception specification list. Hence I tried having the corresponding exception specification list in my function too as, void myfunc1() throw (CCustomException); But I am still getting the warning. What is that warning is all about and how to get rid of it? I am using Qt 4.5 in Windows XP.
Ok, it is a non-answer, but I would throw away the exception specification and never use it again. EDIT: I read too fast, and I didn't see you did not write the class yourself. Best way to get rid of warnings in msvc is via #pragma warning(push) followed by #pragma warning(disable:xxxx) where xxxx is the warning code : #ifdef _MSC_VER #pragma warning(push) #pragma warning(disable:xxxx) #endif ... #ifdef _MSC_VER #pragma warning(pop) #endif EDIT: It is perfectly safe to disable the warning. Exception specifications are evil, and the compiler is only telling you it is disabling them for you. Even if it breaks the standard.
3,141,087
3,141,107
What is meant with "const" at end of function declaration?
I got a book, where there is written something like: class Foo { public: int Bar(int random_arg) const { // code } }; What does it mean?
A "const function", denoted with the keyword const after a function declaration, makes it a compiler error for this class function to change a member variable of the class. However, reading of class variables is okay inside the function, but writing inside this function will generate a compiler error. Another way of thinking about such a "const function" is by viewing a class function as a normal function taking an implicit this pointer. So a method int Foo::Bar(int random_arg) (without the const at the end) results in a function like int Foo_Bar(Foo* this, int random_arg), and a call such as Foo f; f.Bar(4) will internally correspond to something like Foo f; Foo_Bar(&f, 4). Now adding the const at the end (int Foo::Bar(int random_arg) const) can then be understood as a declaration with a const this pointer: int Foo_Bar(const Foo* this, int random_arg). Since the type of this in such a case is const, no modifications of member variables are possible. It is possible to loosen the "const function" restriction of not allowing the function to write to any variable of a class. To allow some of the variables to be writable even when the function is marked as a "const function", these class variables are marked with the keyword mutable. Thus, if a class variable is marked as mutable, and a "const function" writes to this variable, then the code will compile cleanly and the variable can change. As usual when dealing with the const keyword, changing the location of the const keyword in a C++ statement gives it an entirely different meaning. The above usage of const only applies when adding const to the end of the function declaration after the parenthesis. const is a highly overused qualifier in C++: the syntax and ordering are often not straightforward in combination with pointers. Some readings about const correctness and the const keyword: Const correctness The C++ 'const' Declaration: Why & How
3,141,199
3,141,522
If an overridden C++ function calls the parent function, which calls another virtual function, what is called?
I'm learning about polymorphism, and I am confused by this situation: Let's say I have the following C++ classes: class A{ ... virtual void Foo(){ Boo(); } virtual void Boo(){...} }; class B : public A{ ... void Foo(){ A::Foo(); } void Boo(){...} }; I create an instance of B and call its Foo() function. When that function calls A::Foo(), will the Boo() method used be that of class A or B? Thanks!
Unless you qualify a function call with the class, all method calls will be treated equal, that is dynamic dispatch if virtual, static dispatch if not virtual. When you fully qualify with the class name the method you are calling you are effectively disabling the dynamic dispatch mechanism and introducing a direct method call. class A{ virtual void Foo(){ Boo(); // will call the final overrider A::Boo(); // will call A::Boo, regardless of the dynamic type } virtual void Boo(); }; class B : public A{ void Foo(){ //Foo(); // Would call the final overrider // (in this case B: infinite recursion) A::Foo(); // Will call A::Foo, even if the object is B } void Boo(); }; The implicit this pointer is not an important part of the discussion here, as exactly the same happens when the call is made with an explicit object: B b; b.Foo(); // will call B::Foo -- note 1 b.A::Foo(); // will call A::Foo Note 1: in this example, the compiler can elide the dynamic dispatch mechanism as it knows the concrete type of the instance (it sees the definition and it is not a reference/pointer) but you can imagine the same would happen if b was a reference, or equivalently if it was a pointer with -> instead of .
3,141,432
3,141,475
Any weird purpose of switch / default in this code?
I am porting some code from C to C++ and I found this code: if(ErrorCode >= SOME_CONSTANT) { Status = RETVAL_OK; switch ( ErrorCode ) { default: Status = RETVAL_FAILED; break; } } This code generates a compilation warning: warning C4065: switch statement contains 'default' but no 'case' labels The question: Is there any purpose of the switch statement (that I failed to grasp) or is it simply crufty code? That is, is there any reason (when compiling for ANSI C or C++) to not write it like this? if(ErrorCode >= SOME_CONSTANT) Status = RETVAL_FAILED; EDIT: To address all the questions that appeared: The code was not meant to be expanded: It was the final release of a module that was delivered four years ago (it hasn't been touched since then, so I am inclined to believe it's cruft). There were also no removed case statements as far as I could see (the same code construct was placed in three different places in code (the same switch/default inside an if checking on the error constant). If there were removed case statements, the code should have been refactored anyway when the cases were removed. Thanks everyone.
Two things I can think of: 1) the code was automatically generated 2) the original coder thought they might add different processing for error codes later, but never did. In either case, I can't see any reason not to change it to a simple if statement
3,141,455
3,141,539
cmath compilation error when compiling old C++ code in VS2010
I've inherited a few C++ files and an accompanying makefile, which I'm trying to bring into VS2010 as a solution. I've created an empty project and added the appropriate C++ and header (.hpp) files for one of the makefile targets. When I try to compile the project, however, I immediately get a large number of C2061 (syntax error identifier) errors coming from cmath regarding acosf, asinf, atanf, etc. The error line in cmath: #pragma once #ifndef _CMATH_ #define _CMATH_ #include <yvals.h> #ifdef _STD_USING #undef _STD_USING #include <math.h> #define _STD_USING #else /* _STD_USING */ #include <math.h> #endif /* _STD_USING */ #if _GLOBAL_USING && !defined(RC_INVOKED) _STD_BEGIN using _CSTD acosf; using _CSTD asinf; The top block of the relevant C++ file (though named as a .C): #include <fstream> #include <iostream> #include <stdio.h> #include <stdlib.h> #include <string.h> using namespace std; Followed by the main() function, which doesn't call any of the trig functions directly. This has to be something really obvious, but I'm missing it. Can anyone help? Thanks!
Are you sure it's compiling as C++? Most compilers will compile .C file as C and .cpp files as C++, compiling a C++ file with a C-compiler will probably fail. Also, that code mixes oldstyle ('c') headers and newstyle ('c++') headers. It should be more like this (I doubt that is the error however). #include <fstream> #include <iostream> #include <cstdio> #include <cstdlib> #include <cstring> using namespace std; That's all I can see with what you've given. But most of the time when you get errors in library files of C/C++ itself, it still is code of you that's wrong somewhere, like forgetting the ; after a class statement in a header file.
3,141,555
3,145,269
Are there any tools for tracking down bloat in C++?
A carelessly written template here, some excessive inlining there - it's all too easy to write bloated code in C++. In principle, refactoring to reduce that bloat isn't too hard. The problem is tracing the worst offending templates and inlines - tracing those items that are causing real bloat in real programs. With that in mind, and because I'm certain that my libraries are a bit more bloat-prone than they should be, I was wondering if there are any tools that can track down those worst offenders automatically - i.e. identify those items that contribute most (including all their repeated instantiations and calls) to the size of a particular target. I'm not much interested in performance at this point - it's all about the executable file size. Are there any tools for this job, usable on Windows, and fitting with either MinGW GCC or Visual Studio? EDIT - some context I have a set of multiway-tree templates that act as replacements for the red-black tree standard containers. They are written as wrappers around non-typesafe non-template code, but they were also written a long time ago and as a "will better cache friendliness boost real performance" experiment. The point being, they weren't really written for long-term use. Because they support some handy tricks, though (search based on custom comparisons/partial keys, efficient subscripted access, search for smallest unused key) they ended up being in use just about everywhere in my code. These days, I hardly ever use std::map. Layered on top of those, I have some more complex containers, such as two-way maps. On top of those, I have tree and digraph classes. On top of those... Using map files, I could track down whether non-inline template methods are causing bloat. That's just a matter of finding all the instantiations of a particular method and adding the sizes. But what about unwisely inlined methods? 
The templates were, after all, meant to be thin wrappers around non-template code, but historically my ability to judge whether something should be inlined or not hasn't been very reliable. The bloat impact of those template inlines isn't so easy to measure. I have some idea which methods are heavily used, but that's the well-known optimization-without-profiling mistake.
Check out Symbol Sort. I used it a while back to figure out why our installer had grown by a factor of 4 in six months (it turns out the answer was static linking of the C runtime and libxml2).
3,141,556
3,172,064
How to setup timer resolution to 0.5 ms?
I want to set a machine timer resolution to 0.5ms. Sysinternal utility reports that the min clock resolution is 0.5ms so it can be done. P.S. I know how to set it to 1ms. P.P.S. I changed it from C# to more general question (thanks to Hans) System timer resolution
NtSetTimerResolution Example code: #include <windows.h> extern "C" NTSYSAPI NTSTATUS NTAPI NtSetTimerResolution(ULONG DesiredResolution, BOOLEAN SetResolution, PULONG CurrentResolution); ... ULONG currentRes; NtSetTimerResolution(5000, TRUE, &currentRes); Link with ntdll.lib.
3,141,572
3,141,783
C++ Map Gives Bus Error when trying to set a value
I have the following function as the constructor for a class: template<typename T> void Pointer<T>::Pointer(T* inPtr) { mPtr = inPtr; if (sRefCountMap.find(mPtr) == sRefCountMap.end()) { sRefCountMap[mPtr] = 1; } else { sRefCountMap[mPtr]++; } } Here is the definition for the map: static std::map<T*, int> sRefCountMap; I get a Bus Error sometimes when this code is run: #0 0x95110fc0 in std::_Rb_tree_decrement () #1 0x00017ccc in std::_Rb_tree_iterator<std::pair<Language::Value* const, int> >::operator-- (this=0xbfffe014) at stl_tree.h:196 #2 0x0001b16c in std::_Rb_tree<Language::Value*, std::pair<Language::Value* const, int>, std::_Select1st<std::pair<Language::Value* const, int> >, std::less<Language::Value*>, std::allocator<std::pair<Language::Value* const, int> > >::insert_unique (this=0x2a404, __v=@0xbfffe14c) at stl_tree.h:885 #3 0x0001b39c in std::_Rb_tree<Language::Value*, std::pair<Language::Value* const, int>, std::_Select1st<std::pair<Language::Value* const, int> >, std::less<Language::Value*>, std::allocator<std::pair<Language::Value* const, int> > >::insert_unique (this=0x2a404, __position={_M_node = 0x2a408}, __v=@0xbfffe14c) at stl_tree.h:905 #4 0x0001b5a0 in __gnu_norm::map<Language::Value*, int, std::less<Language::Value*>, std::allocator<std::pair<Language::Value* const, int> > >::insert (this=0x2a404, position={_M_node = 0x2a408}, __x=@0xbfffe14c) at stl_map.h:384 #5 0x0001b6e0 in __gnu_norm::map<Language::Value*, int, std::less<Language::Value*>, std::allocator<std::pair<Language::Value* const, int> > >::operator[] (this=0x2a404, __k=@0x2e110) at stl_map.h:339 Thanks.
From your comments, you say that you're initialising a static Pointer. This most likely means you've encountered the "static initialisation order fiasco" - if two static objects are in different compilation units, then it's not defined which order they're initialised in. So if the constructor of one depends on the other already being initialised, then you might get away with it, or you might not. Sod's Law dictates that the code will work during testing, then mysteriously break when it's deployed. The best solution is to avoid static objects; they're rarely a good idea. Another possibility is lazy instantiation, something like this: typedef std::map<T*, int> RefCountMap; static RefCountMap& GetRefCountMap() { static RefCountMap map; return map; } This may have issues of its own; it's guaranteed to be constructed before it's used, but might be destroyed before you've finished with it, if a static destructor accesses it, and there may be thread safety issues. For the gory details, see the many discussions on the Singleton pattern, which requires a static instance. Singletons in C++ are a whole world of pain, best avoided if possible.
3,141,902
3,142,052
How to extract the contents of an OLE container?
I need to break open a MS Word file (.doc) and extract its constituent files ('[1]CompObj', 'WordDocument' etc). Something like 7-zip can be used to do this manually but I need to do this programatically. I've gathered that a Word document is an OLE container (hence why 7-zip can be used to view its contents) but I can't work out how to (using C++): open the OLE container extract each constituent file and save it to disk I've found a couple of examples of OLE automation (eg here) but what I want to do seems to be less common and I've found no specific examples. If anyone has any idea of either an API (?!) and tutorial for working with OLE I'd be grateful. Ditto any code samples.
It is called Compound Files, part of the Structured Storage API. You start with StgOpenStorageEx(). It buys you little for a Word .doc file; the streams themselves have a sophisticated binary format. To really read the document content you want to use automation, letting Word read the file. That's rarely done in C++ but that project shows you how.
3,141,907
3,142,036
C++ <algorithm> permutation
Why is this code not working (the code compiles and runs fine, but is not actually showing the permutations): int main(int argc, char *argv[]) { long number; vector<long> interval; vector<long>::const_iterator it; cout << "Enter number: "; cin >> number; while(number-->0){ interval.push_back(number); } do{ for(it = interval.begin(); it < interval.end(); ++it){ cout << *it << " "; } cout << endl; } while(next_permutation(interval.begin(), interval.end())); return (0); } But it works after changing this line: while(next_permutation(interval.begin(), interval.end())); to: while(prev_permutation(interval.begin(), interval.end())); Isn't permutation changing the elements in the vector by acting on positions? PS: I've edited the code now.
Permutations are lexicographically ordered; that is the order the std::next_permutation and std::prev_permutation algorithms traverse. Your while(number-->0) loop fills the vector in descending order, which is the lexicographically largest permutation, so there is no next one in that order and next_permutation returns false on the first call.
3,141,963
3,142,074
reversible float sort in c/c++
I need to sort some arrays of floats, modify the values, and then construct an array with the original ordering, but the modified values. In R, I could use the rank() and order() functions to achieve this: v a vector v[order(v)] is sorted v[i] goes in the rank(v)th spot in the sorted vector Is there some equivalent of these functions in the standard c or c++ libraries? A permutation matrix or other way of encoding the same information would be fine too. O(n) space and O(nlogn) time would be ideal.
There is the equivalent to the rank function in C++: it's called nth_element and can be applied to any model of Random Access Container (among which vector and deque are prominent). Now, the issue, it seems to me, is that operating on the values might actually modify them, and thus the ranks would change. Therefore I would advise storing the ranks: go from std::vector<float> to std::vector< std::pair<float, rank_t> >; sort the vector (works without any predicate); operate on the values; then go from std::vector< std::pair<float, rank_t> > back to std::vector<float>. Unless of course you want nth_element to be affected by the current modifications of the values that occurred.
3,142,038
13,130,289
QextSerialPort connection problem to Arduino
I'm trying to make a serial connection to an Arduino Diecimila board with QextSerialPort. My application hangs though everytime I call port->open(). The reason I think this is happening is because the Arduino board resets itself everytime a serial connection to it is made. There's a way of not making the board reset described here, but I can't figure out how to get QextSerialPort to do that. I can only set the DTR to false after the port has been opened that's not much help since the board has already reset itself by that time. The code for the connection looks like this: port = new QextSerialPort("/dev/tty.usbserial-A4001uwj"); port->open(QIODevice::ReadWrite); port->setBaudRate(BAUD9600); port->setFlowControl(FLOW_OFF); port->setParity(PAR_NONE); port->setDataBits(DATA_8); port->setStopBits(STOP_1); port->setDtr(false); port->setRts(false); Any ideas on how to get this done. I don't necessarily need to use QextSerialPort should someone know of another library that does the trick. I'm new to C++ and Qt. UPDATE: I noticed that if I run a python script that connects to the same port (using pySerial) before running the above code, everything works just fine.
I had a similar problem. In my case QExtSerial would open the port, I'd see the RX/TX lights on the board flash, but no data would be received. If I opened the port with another terminal program first QExtSerial would work as expected. What solved it for me was opening the port, configuring the port settings, and then making DTR and RTS high for a short period of time. This was on Windows 7 w/ an ATMega32u4 (SFE Pro Micro). bool serialController::openPort(QString portName) { QString selectPort = QString("\\\\.\\%1").arg(portName); this->port = new QextSerialPort(selectPort,QextSerialPort::EventDriven); if (port->open(QIODevice::ReadWrite | QIODevice::Unbuffered) == true) { port->setBaudRate(BAUD38400); port->setFlowControl(FLOW_OFF); port->setParity(PAR_NONE); port->setDataBits(DATA_8); port->setStopBits(STOP_1); port->setTimeout(500); port->setDtr(true); port->setRts(true); Sleep(100); port->setDtr(false); port->setRts(false); connect(port,SIGNAL(readyRead()), this, SLOT(onReadyRead())); return true; } else { // Device failed to open: port->errorString(); } return false; }
3,142,294
3,142,343
Receiving console output
Is there a way to execute a program and receive the console output in c++ instead of displaying the console window? I am trying to do a command line call but provide a GUI instead of the console output.
You can do this on most systems using popen (or on some compilers _popen). If that isn't versatile enough for your purposes, you'll probably have to do something platform specific (e.g., fork on a POSIX-like system, or CreateProcess on Windows).
3,142,420
3,143,391
Convert some code from C++ to C
Possible Duplicate: C code compiles as C++, but not as C Edit: I recompiled the source for the library as C, and that fixed it. I've got this code I need to use in my application. It's for writing to the serial port, and I can't figure out how to get it to run in C. I've got a version in C++, as well as a version that looks more like C, designed to compile with the Borland C++ 5.5 compiler, but I can't get it to compile there or in my project. Edit: I should note that it compiles (and links) when I compile as c++, but not when I compile as c. Here's the linker error I get: 1>InpoutTest.obj : error LNK2019: unresolved external symbol _Out32@8 referenced in function _main 1>InpoutTest.obj : error LNK2019: unresolved external symbol _Inp32@4 referenced in function _main Here's the c++ code. I don't need the command line functionality, I just need to be able to call Out32(). I don't even need to be able to read. #include "stdafx.h" #include "stdio.h" #include "string.h" #include "stdlib.h" /* ----Prototypes of Inp and Outp--- */ short _stdcall Inp32(short PortAddress); void _stdcall Out32(short PortAddress, short data); /*--------------------------------*/ int main(int argc, char* argv[]) { int data; if(argc<3) { //too few command line arguments, show usage printf("Error : too few arguments\n\n***** Usage *****\n\nInpoutTest read <ADDRESS> \nor \nInpoutTest write <ADDRESS> <DATA>\n\n\n\n\n"); } else if(!strcmp(argv[1],"read")) { data = Inp32(atoi(argv[2])); printf("Data read from address %s is %d \n\n\n\n",argv[2],data); } else if(!strcmp(argv[1],"write")) { if(argc<4) { printf("Error in arguments supplied"); printf("\n***** Usage *****\n\nInpoutTest read <ADDRESS> \nor \nInpoutTest write <ADDRESS> <DATA>\n\n\n\n\n"); } else { Out32(atoi(argv[2]),atoi(argv[3])); printf("data written to %s\n\n\n",argv[2]); } } return 0; } Here's the other sample: #include <stdio.h> #include <conio.h> #include <windows.h> /* Definitions in the build of inpout32.dll are: */ /* short 
_stdcall Inp32(short PortAddress); */ /* void _stdcall Out32(short PortAddress, short data); */ /* prototype (function typedef) for DLL function Inp32: */ typedef short _stdcall (*inpfuncPtr)(short portaddr); typedef void _stdcall (*oupfuncPtr)(short portaddr, short datum); int main(void) { HINSTANCE hLib; inpfuncPtr inp32; oupfuncPtr oup32; short x; int i; /* Load the library */ hLib = LoadLibrary("inpout32.dll"); if (hLib == NULL) { printf("LoadLibrary Failed.\n"); return -1; } /* get the address of the function */ inp32 = (inpfuncPtr) GetProcAddress(hLib, "Inp32"); if (inp32 == NULL) { printf("GetProcAddress for Inp32 Failed.\n"); return -1; } oup32 = (oupfuncPtr) GetProcAddress(hLib, "Out32"); if (oup32 == NULL) { printf("GetProcAddress for Oup32 Failed.\n"); return -1; } /***************************************************************/ /* now test the functions */ /* Try to read 0x378..0x37F, LPT1: */ for (i=0x378; (i<0x380); i++) { x = (inp32)(i); printf("port read (%04X)= %04X\n",i,x); } /***** Write the data register */ i=0x378; x=0x77; (oup32)(i,x); printf("port write to 0x%X, datum=0x%2X\n" ,i ,x); /***** And read back to verify */ x = (inp32)(i); printf("port read (%04X)= %04X\n",i,x); /***** One more time, different value */ i=0x378; x=0xAA; (oup32)(i,x); printf("port write to 0x%X, datum=0x%2X\n" ,i ,x); /***** And read back to verify */ x = (inp32)(i); printf("port read (%04X)= %04X\n",i,x); FreeLibrary(hLib); return 0; } Any help would be appreciated.
I had to recompile the library as C, then use that version. The existing version was compiled as C++.
3,142,630
3,142,693
Implementation of string literal concatenation in C and C++
AFAIK, this question applies equally to C and C++ Step 6 of the "translation phases" specified in the C standard (5.1.1.2 in the draft C99 standard) states that adjacent string literals have to be concatenated into a single literal. I.e. printf("helloworld.c" ": %d: Hello " "world\n", 10); Is equivalent (syntactically) to: printf("helloworld.c: %d: Hello world\n", 10); However, the standard doesn't seem to specify which part of the compiler has to handle this - should it be the preprocessor (cpp) or the compiler itself. Some online research tells me that this function is generally expected to be performed by the preprocessor (source #1, source #2, and there are more), which makes sense. However, running cpp in Linux shows that cpp doesn't do it: eliben@eliben-desktop:~/test$ cat cpptest.c int a = 5; "string 1" "string 2" "string 3" eliben@eliben-desktop:~/test$ cpp cpptest.c # 1 "cpptest.c" # 1 "<built-in>" # 1 "<command-line>" # 1 "cpptest.c" int a = 5; "string 1" "string 2" "string 3" So, my question is: where should this feature of the language be handled, in the preprocessor or the compiler itself? Perhaps there's no single good answer. Heuristic answers based on experience, known compilers, and general good engineering practice will be appreciated. P.S. If you're wondering why I care about this... I'm trying to figure out whether my Python based C parser should handle string literal concatenation (which it doesn't do, at the moment), or leave it to cpp which it assumes runs before it.
The standard doesn't specify a preprocessor vs. a compiler, it just specifies the phases of translation you already noted. Traditionally, phases 1 through 4 were in the preprocessor, phases 5 through 7 in the compiler, and phase 8 the linker -- but none of that is required by the standard.
3,142,701
3,142,740
Building in 64 bit Windows on VS2008 gives C2632 error
So I am trying to build a 32-bit application in 64-bit. I am linking to all 64-bit libraries, and I have recompiled everything we used for 64-bit. I am getting weird errors now. I have seen some similar errors over the net but nothing useful in those topics. Any idea what could be wrong that causes this behavior? warning C4091: 'typedef ' : ignored on left of 'float' when no variable is declared C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\windef.h error C2632: 'float' followed by 'double' is illegal C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\windef.h warning C4091: 'typedef ' : ignored on left of 'double' when no variable is declared C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\wtypes.h Error 44 error C2632: 'double' followed by 'double' is illegal C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\wtypes.h Warning and error are for the same line. Clearly it's not a problem with wtypes.h and windef.h (and if it was I can't do anything about it): typedef float FLOAT; typedef double DOUBLE; Clearly these are fine by themselves, so it has to be something else. File in my project that causes this just includes Any ideas?
It looks like FLOAT and DOUBLE have been previously #defined to double. This might be a result of another library, although it seems unlikely to be caused by switching to 64-bit compilation. Try doing #undef FLOAT #undef DOUBLE Prior to including windows.h or windef.h or whichever file is directly responsible for the warning.
3,142,764
3,143,144
How to switch on the "auto-build" option in VS2008
What option (and where is it located in the VS2008 menus) needs to be switched on so that VS2008 compiles and builds the solution before launch (native C++ project)? Thanks.
Tools + Options, Projects and Solutions, Build and Run. The setting "On Run, when projects are out of date" is relevant. You'll probably want "Always build". The setting for the next one has "Do not launch" as the only sane option.
3,142,941
3,142,997
Porting C++ code from Windows to Linux - Header files case sensitivity issue
I am porting a large C++ project from Windows to Linux. My C++ files include header files that do not match those in the project directory due to the case sensitivity of file names on Linux file systems. Any help? I would prefer finding a flag for gcc (or the ext4 file system) over manually editing or sed'ing my files. Thanks to all!
You're out of luck on your preference. Linux is case-sensitive, and always will be. Just identify the names that need to be changed, and sed away.
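A sketch of the sed approach on a throwaway file; the header name MyHeader.h is a placeholder, not from the question:

```shell
# Create a file with a wrong-case include, then fix it in place
# (GNU sed; the question targets Linux).
mkdir -p /tmp/portfix && cd /tmp/portfix
printf '#include "MyHeader.h"\n' > main.cpp
sed -i 's/"MyHeader\.h"/"myheader.h"/' main.cpp
cat main.cpp
```

Run over the real tree, you would generate one such substitution per mismatched header and apply it with `sed -i` across the source files.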
3,143,000
3,143,117
How do I specify 64-bit machine architecture when building boost libraries with bjam on solaris?
How do I specify 64-bit machine architecture when building boost libraries with bjam on solaris?
Not a real answer, just a note - the Sun compiler is something Boost has always had trouble with. Only fairly recent versions are supported and you need STLport. Take a look here and here. You might want to play with the [compiler options] part of the module syntax. Edit: Found this specific link that says this should work: bjam toolset=sun stdlib=sun-stlport address-model=64 stage No doubt, it requires Sun Studio 12.
3,143,052
3,143,166
C code compiles as C++, but not as C
Possible Duplicate: Convert some code from C++ to C I've got some code that appears to be straight C. When I tell the compiler (I'm using Visual Studio 2008 Express) to compile it as c++, it compiles and links fine. When I try to compile it as C, though, it throws this error: 1>InpoutTest.obj : error LNK2019: unresolved external symbol _Out32@8 referenced in function _main 1>InpoutTest.obj : error LNK2019: unresolved external symbol _Inp32@4 referenced in function _main The code reads from and writes to the parallel port, using Inpout.dll. I have both Inpout.lib and Inpout.dll. Here's the code: // InpoutTest.cpp : Defines the entry point for the console application. // #include "stdafx.h" #include "stdio.h" #include "string.h" #include "stdlib.h" /* ----Prototypes of Inp and Outp--- */ short _stdcall Inp32(short PortAddress); void _stdcall Out32(short PortAddress, short data); /*--------------------------------*/ int main(int argc, char* argv[]) { int data; if(argc<3) { //too few command line arguments, show usage printf("Error : too few arguments\n\n***** Usage *****\n\nInpoutTest read <ADDRESS> \nor \nInpoutTest write <ADDRESS> <DATA>\n\n\n\n\n"); } else if(!strcmp(argv[1],"read")) { data = Inp32(atoi(argv[2])); printf("Data read from address %s is %d \n\n\n\n",argv[2],data); } else if(!strcmp(argv[1],"write")) { if(argc<4) { printf("Error in arguments supplied"); printf("\n***** Usage *****\n\nInpoutTest read <ADDRESS> \nor \nInpoutTest write <ADDRESS> <DATA>\n\n\n\n\n"); } else { Out32(atoi(argv[2]),atoi(argv[3])); printf("data written to %s\n\n\n",argv[2]); } } return 0; } I previously asked this question, incorrectly, here. Any help would be appreciated.
You're trying to link to a C++ function from C. That doesn't work due to name mangling: the linker doesn't know where to look for your function. If you want to call a C function from C++, you must mark it extern "C". C does not support extern "C++", as far as I know. One of the other answers says there is. Alternatively, recompile its source code as C. Edit: Why ever would you compile as C if you could compile as C++, anyway?
3,143,068
3,143,450
How can I log which thread called which function from which class and at what time throughout my whole project?
I am working on a fairly large project that runs on embedded systems. I would like to add the capability of logging which thread called which function from which class and at what time. E.g., here's what a typical line of the log file would look like: Time - Thread Name - Function Name - Class Name I know that I can do this by using the _penter hook function, which would execute at the beginning of every function called within my project (Source: http://msdn.microsoft.com/en-us/library/c63a9b7h%28VS.80%29.aspx). I could then find a library that would help me find the function, class, and thread from which _penter was called. However, I cannot use this solution since it is VC++ specific. Is there a different way of doing this that would be supported by non-VC++ implementations? I am using the ARM/Thumb C/C++ Compiler, RVCT3.1. Additionally, is there a better way of tracking problems that may arise from multithreading? Thank you, Borys
I've worked with a system that had similar requirements (ARM embedded device). We had to build much of it from scratch, but we used some CodeWarrior stuff to do it, and then the map file for the function name lookup. With CodeWarrior, you can get some code inserted into the start and end of each function, and using that, you can track when you enter each function, and when you switch threads. We used assembly, and you might have to as well, but it's easier than you think. One of your registers will be your return value, which is a hex value. If you compile with a map file, you can then use that hex value to look up the (mangled) name of that function. You can find the class name in the function name. But, basically, get yourself a stream to somewhere (ideally to a desktop), and yell to the stream: Entered Function ##### Left Function ##### Switched to Thread # (PS - Actual encoding should be more like 1 21361987236, 2 1238721312, since you don't actually want to send characters) If you're only ever processing one thread at a time, this should give you an accurate record of where you went, in the order you went there. Attach clock tick info for function profiling, add a message for allocations (and deallocations) and you get memory tracking. If you're actually running multiple threads, it could get substantially harder, or be more of the same - I don't know. I'd put timing information on everything, and then have a separate stream for each thread. Although you might just be able to detect which processor you're running on, and report that, for which thread.... I don't, however, know if any of that will work. Still, the basic idea was: Report on each step (function entry/exit, thread switching, and allocation), and then re-assemble the information you care about on the desktop side, where you have processing to spare.
3,143,125
3,143,307
reuse function logic in a const expression
I think my question is, is there any way to emulate the behaviour that we'll gain from C++0x's constexpr keyword with the current C++ standard (that is, if I understand what constexpr is supposed to do correctly). To be more clear, there are times when it is useful to calculate a value at compile time but it is also useful to be able to calculate it at runtime too, e.g. if we want to calculate powers, we could use the code below. template<int X, unsigned int Y> struct xPowerY_const { static const int value = X*xPowerY_const<X,Y-1>::value; }; template<int X> struct xPowerY_const<X, 1> { static const int value = X; }; int xPowerY(int x, unsigned int y) { return (y==1) ? x : x*xPowerY(x,y-1); } This is a simple example but in more complicated cases being able to reuse the code would be helpful. Even if, for runtime performance, the recursive nature of the function is suboptimal and a better algorithm could be devised, it would be useful for testing the logic if the templated version could be expressed in a function, as I can't see a reasonable method of testing the validity of the constant template method in a wide range of cases (although perhaps there is one and I just can't see it, and perhaps that's another question). Thanks. Edit Forgot to mention, I don't want to #define Edit2 Also my code above is wrong, it doesn't deal with x^0, but that doesn't affect the question.
Template metaprogramming implements logic in an entirely different (and incompatible) way from "normal" C++ code. You're not defining a function, you're defining a type. It just happens that the type has a value associated with it, which is built up from a combination of other types. Because the templates define types, there is no program logic involved. The logic is simply a side effect of the compiler trying to resolve relationships between the templated types. There really isn't any way to automatically extract the high level logic from a template "program" into a function. FWIW, template metaprogramming wasn't even a glimmer in Bjarne's eye when templates were first implemented. They were actually discovered later on in the language's life by users of the language. It's an "unintended" side-effect of the type system that just happened to become very popular. It's precisely because of this discovery that new features are being added to the language to more thoroughly support the idioms that have evolved.
3,143,180
3,143,416
How to do static de-initialization if the destructor has side effects and the object is accessed from another static object's destructor?
There is a simple and well-known pattern to avoid the static initialization fiasco, described in section 10.13 of the C++ FAQ Lite. In this standard pattern, there is a trade-off made in that either the constructed object gets never destructed (which is not a problem if the destructor does not have important side effects) or the static object cannot safely be accessed from another static object's destructor (see section 10.14 of the C++ FAQ Lite). So my question is: How do you avoid the static de-initialization fiasco if a static object's destructor has important side effects that must eventually occur and the static object must be accessed by another static object's destructor? (Note: the FAQ-lite mentions this question is answered in FAQ 16.17 of C++ FAQs: Frequently Asked Questions by M. Cline and and G. Lomow. I do not have access to this book, which is why I ask this question instead.)
Function static objects, like global objects, are guaranteed to be destroyed (assuming they are created). The order of destruction is the inverse of creation. Thus if an object depends on another object during destruction you must guarantee that it is still available. This is relatively simple as you can force the order of destruction by making sure the order of creation is done correctly. The following link is about singletons but describes a similar situation and its solution: Finding C++ static initialization order problems Extrapolating to the general case of lazy initialized globals as described in the FAQ lite we can solve the problem like this: namespace B { class B { ... }; B& getInstance_Bglob() { static B instance_Bglob; return instance_Bglob; } B::~B() { A::getInstance_abc().doSomething(); // The object abc is accessed from the destructor. // Potential problem. // You must guarantee that abc is destroyed after this object. // To guarantee this you must make sure it is constructed first. // To do this just access the object from the constructor. } B::B() { A::getInstance_abc(); // abc is now fully constructed. // This means it was constructed before this object. // This means it will be destroyed after this object. // This means it is safe to use from the destructor. } } namespace A { class A { ... }; A& getInstance_abc() { static A instance_abc; return instance_abc; } }
3,143,212
3,143,277
conversion from std::vector<char> to wchar_t*
I'm trying to read ID3 frames and their values with TagLib (1) and index them with CLucene (2). The former returns frame IDs as std::vector<char> (3) and the latter writes field names as tchar* [wchar_t* on Linux] (4). I need to make a link between the two. How can I convert from std::vector<char> to wchar_t* by means of the STL? Thank you (1)http://developer.kde.org/~wheeler/taglib.html (2)http://clucene.sourceforge.net/ (3)http://developer.kde.org/~wheeler/taglib/api/classTagLib_1_1ID3v2_1_1Frame.html#6aac53ec5893fd15164cd22c6bdb5dfd (4)http://ohnopublishing.net/doc/clucene-0.9.21b/html/classlucene_1_1document_1_1Field.html#59b0082e2ade8c78a51a64fe99e684b2
In a simple case where your chars don't contain any accented characters or anything like that, you can just copy each one to the destination and use it: std::vector<char> frameID; std::vector<wchar_t> field_name; std::copy(frameID.begin(), frameID.end(), std::back_inserter(field_name)); lucene_write_field(&field_name[0], field_name.size()); My guess is that for ID3 frame IDs you don't have accented characters and such, so that'll probably be all you need. If you do have a possibility of accented characters and such, things get more complex in a hurry -- you'll need to convert from something like ISO 8859-x to (probably) UTF-16 Unicode. To do that, you need a code-page that tells you how to interpret the input (i.e., there are several varieties of ISO 8859, and one for French input will be different from one for Russian, for example).
3,143,323
3,143,375
To implement properties or not?
I've found a few methods online on how to implement property-like functionality in C++. There seem to be some sound workarounds for getting it to work well. My question is, with the prevalence of properties in managed languages, should I spend the effort and risk the possibility of code breakage (or whatever) to implement properties in my code? Say I'm going to dev up a library of calls for someone else to use; would properties be desired enough to validate the extra code?
Unless you add reflection to the mix (being able to identify at runtime what properties exist on an object), properties are nothing more than syntactic sugar for getters and setters. Might as well just use getters and setters, in that case. Properties with reflection can indeed be useful for C++ programs, though. Qt handles this quite nicely.
3,143,325
3,143,402
Problem compiling VS8 C++ program with boost signals
So I am wanting to use boost signals in my C++ program. I add: #include <boost/signal.hpp> But I get this error when I build. fatal error LNK1104: cannot open file 'libboost_signals-vc90-mt-gd-1_42.lib' The lib file is not contained within my boost directory. Typing 'libboost_signal' (with variations) into google hasn't helped. Anyone encountered this problem before? Any help is greatly appreciated.
most of Boost is header-file-only source, so you just need to #include <boost/whatever.hpp> and you're done. However, there are a few sections that require a dll - examples are date/time, regex and signals. So you need to build the signals dll. Instructions are on the boost website and are easy - so easy I've forgotten how I did it last time. (check out section 5.2 on the site).
3,143,832
3,144,095
Problems upgrading VS2008 to VS2010 with Managed and Unmanaged C++
I have a VS2008 Professional solution that I tried to convert to VS2010 Professional (RTM from MSDN download) today and I am experiencing some problems with some unmanaged and managed C++ DLLs that are referenced by a C# application. The C# application is set to target .NET 3.5 (as it was in the VS2008 version) but when I try and compile it I get a lot of warnings like: The primary reference "xxxx.dll" could not be resolved because it had an indirect dependency on the .NET Framework assembly "(various assembly names)", Version 4.0.0.0 ... which has a higher version "4.0.0.0" than the version "3.5.0.0" in the current target framework and ultimately I get a failure to build. From this I understand that it is a mismatch in .Net framework version. So I look at the properties of the unmanaged C++ DLL project and under "Common Properties->Framework and References" I can see "Targeted framework: .NetFramework, Version=v4.0" So I go WTF!?!?!?, why does a pure C++ DLL now target a .Net framework when it sure as hell didn't in the VS2008 version. I then added on to that exclamation as there appears to be no way to change this. I also look at the managed C++ and see the same thing: targeting .Net version=v4.0 and again no way to change this at all. In the C++ General properties there is an entry for "Common language runtime support" and I have set this to "No common language run time support" but that doesn't seem to have done anything. So I have two questions: Why has my pure C++ DLL now been tagged as targeting a .Net framework? How can I change/remove this targeting? Solution As per Hans' reply and the link he supplied I now see that I have 3 choices: Stay at VS2008 and everything works Keep both VS2008 SP1 and VS2010 installed so that I can have .Net 3.5 c# applications and c++ managed code as per the link supplied by Hans.
Move everything to VS2010 and move to a minimum of .Net 4.0 for all my c# apps I am really annoyed to have to make that choice as MS has deliberately chosen to break functionality when moving from VS2008 to VS2010. This is not the sort of behavior I expected. I was expecting to convert the project and have it compile with no issues in the same manner that moving from VS2005 to VS2008 worked. Fortunately I do have a need to go to .Net 4.0, but I just wasn't expecting to have to do it so soon. Update I decided to move to .Net 4 framework and encountered problems with referencing managed c++ projects from c# projects. I was getting errors like the following when trying to add the reference to the c++ managed code project A reference to 'myproj' could not be added. An Assembly must have a 'dll' or 'exe' extension in order to be referenced. Google led me down the path to "cli c project cannot be referenced from c project allowing only assembly dll" which turned up that there was an extraneous "\" in the output path of the managed c++ project. The original VS2008 output path was specified as $(SolutionDir)\$(ProjectName)\$(Configuration)\ But in the VS2010 project the SolutionDir macro has a trailing "\" (or the VS2008 version didn't care about it) giving a path like c:\projects\thisproject\solution\\projectname\configuration\ And VS2010 barfed over that path when trying to add a reference to the managed c++ code. My solution was to change the output path to be $(SolutionDir)$(ProjectName)\$(Configuration)\ And now I am (sort of) happy
Keep your eyes on the ball, the warning you get is for a managed C++ assembly. And the platform target setting for an unmanaged DLL is of no consequence, it won't use any .NET references while being built. Yes, they could not make the platform target setting editable in the C++ IDE, the VS2008 tool chain is required to build C++/CLI assemblies for 3.5. This blog post explains the workaround. You can upvote this feedback article if you're unhappy with that.
3,143,881
3,144,045
assign values to selective items using STL multimap
typedef std::pair<int, bool> acq_pair; //edge, channel_quality typedef std::pair<int, acq_pair> ac_pair; typedef std::multimap<int, acq_pair> ac_map; typedef ac_map::iterator It_acq; int bits = acq_map.size(); std::cout << "bits = " << bits << std::endl; std::vector<std::vector<bool> > c_flags (1 << bits); for (i = 0; i < c_flags.size(); ++i) { for (j = 0; j < bits; ++j) { c_flags[i].push_back( (i & (1 << j)) > 0); } } std::cout << "c_flags.size() = " << c_flags.size() << std::endl; for(i = 0; i < c_flags.size(); ++i) { for(j = 0; j < bits; ++j) { std::cout << c_flags[i][j] << std::endl; for(It_acq itc = acq_map.begin(); itc!= acq_map.end(); ++itc) { acq_pair it1 = itc->second; itc->second.second = c_flags[i][j]; std::cout << itc->first << " : " << it1.first << " : " << it1.second << std::endl; } } std::cout << "\n" << std::endl; } How can I access only one item from the multimap container at a time? I want to update only the jth value in the map, but when I iterate through the map all the bool values are changed. Is there a selective way to access the map container values?
The line itc->second.second = c_flags[i][j]; performed in a loop with itc from begin() to end() indeed performs assignment to every value of the map. If the goal was to modify only the j'th value in the map, there was no need for a loop over the entire map: for(size_t j = 0; j < bits; ++j) { std::cout << c_flags[i][j] << std::endl; It_acq itc = acq_map.begin(); // itc points at the beginning std::advance(itc, j); // itc points at the j'th element itc->second.second = c_flags[i][j]; // the assignment for(It_acq itc = acq_map.begin(); itc!= acq_map.end(); ++itc) { acq_pair it1 = itc->second; // itc->second.second = c_flags[i][j]; // no assignment here std::cout << itc->first << " : " << it1.first << " : " << it1.second << std::endl; } } If this map is used for indexed access in this manner, it may be worthwhile to consider switching to vector, though.
3,143,895
3,143,957
Syntax for std::binary_function usage
I'm a newbie at using the STL Algorithms and am currently stuck on a syntax error. My overall goal of this is to filter the source list like you would using Linq in c#. There may be other ways to do this in C++, but I need to understand how to use algorithms. My user-defined function object to use as my function adapter is struct is_Selected_Source : public std::binary_function<SOURCE_DATA *, SOURCE_TYPE, bool> { bool operator()(SOURCE_DATA * test, SOURCE_TYPE ref)const { if (ref == SOURCE_All) return true; return test->Value == ref; } }; And in my main program, I'm using as follows - typedef std::list<SOURCE_DATA *> LIST; LIST* localList = new LIST; LIST* msg = GLOBAL_DATA->MessageList; SOURCE_TYPE _filter_Msgs_Source = SOURCE_TYPE::SOURCE_All; std::remove_copy(msg->begin(), msg->end(), localList->begin(), std::bind1st(is_Selected_Source<SOURCE_DATA*, SOURCE_TYPE>(), _filter_Msgs_Source)); I'm getting the following error in RAD Studio 2010. The error means "Your source file used a typedef symbol where a variable should appear in an expression. " "E2108 Improper use of typedef 'is_Selected_Source'" Edit - After doing more experimentation in VS2010, which has better compiler diagnostics, I found the problem is that the definition of remove_copy only allows unary functions. I changed the function to unary and got it to work.
(This is only relevant if you didn't accidentally omit some of your code from the question, and may not address the exact problem you're having) You're using is_Selected_Source as a template even though you didn't define it as one. The last line in the 2nd code snippet should read std::bind1st(is_Selected_Source()... Or perhaps you did want to use it as a template, in which case you need to add a template declaration to the struct. template<typename SOURCE_DATA, typename SOURCE_TYPE> struct is_Selected_Source : public std::binary_function<SOURCE_DATA *, SOURCE_TYPE, bool> { // ... };
3,144,225
3,148,489
Where can I see printf output in an MFC application?
Where can I see printf output in an MFC application during debugging? Is there a "console" window I can view in the debugger? (Visual Studio C++ 6.0) Thanks.
If you use the API OutputDebugString, the strings you output will appear in the Visual C Output window (in debug mode). In release mode, you'll need a separate app to capture them, such as DBWIN32.EXE The advantage of using a separate application is that you can get debug output from several applications serialised into a single window, which can be very handy for debugging some scenarios. The downside of course is that you can get debug output from other apps (nothing to do with your own) appearing because they've forgotten to flag out their debug in the release build. TRACE will do this automatically, but of course there might be cases where you WANT to get at the output in the release build. I prefer to be in charge, so I wsprintf/sprintf into a string, use OutputDebugString, and retain that control for myself.
3,144,340
3,144,970
How to draw on given bitmap handle (C++ / Win32)?
I'm writing an unmanaged Win32 C++ function that gets a handle to a bitmap, and I need to draw on it. My problem is that to draw I need to get a device context, but when I do GetDC (NULL), it gives me a device context for the WINDOW! The parameter for GetDC () is a window handle (HWND), but I don't have a window; just a bitmap handle. How can I draw on this bitmap? Thanks!
In addition to Pavel's answer, the "compatible with the screen" always bugged me too, but, since CreateCompatibleDC(NULL) is universally used for that purpose, I assume it is correct. I think that the "compatible" thing is related just to DDB (the DC is set up to write on the correct DDB type for the current screen), but does not affect read/writes on DIBs. So, to be safe, always use DIBs and not DDBs if you need to work on bitmaps that doesn't just have to go temporarily onscreen, nowadays the difference in performance is negligible. See here for more info about DIBs and DDBs.
3,144,349
3,144,763
boost threads mutex array
My problem is, I have a block matrix updated by multiple threads. Multiple threads may be updating disjoint blocks at a time, but in general there may be race conditions. Right now the matrix is locked using a single lock. The question is, is it possible (and if it is, how?) to implement an efficient array of locks, so that only portions of the matrix may be locked at a time. The matrix in question can get rather large, on the order of 50^2 blocks. My initial guess is to use a dynamically allocated vector/map of mutexes. Is that a good approach? Is it better to use multiple condition variables instead? Is there a better approach?
Use a single lock. But instead of using it to protect the entire matrix use it to guard a std::set (or a boost::unordered_set) which says which blocks are "locked". Something like this. class Block; class Lock_block { public: Lock_block( Block& block ) : m_block(&block) { boost::unique_lock<boost::mutex> lock(s_mutex); while( s_locked.find(m_block) != s_locked.end() ) { s_cond.wait(lock); } bool success = s_locked.insert(m_block).second; assert(success); } ~Lock_block() { boost::lock_guard<boost::mutex> lock(s_mutex); std::size_t removed = s_locked.erase(m_block); assert(removed == 1); s_cond.notify_all(); } private: Block* m_block; static boost::mutex s_mutex; static boost::condition s_cond; static std::set<Block*> s_locked; };
3,144,604
3,144,609
'std::vector<T>::iterator it;' doesn't compile
I've got this function: template<typename T> void Inventory::insertItem(std::vector<T>& v, const T& x) { std::vector<T>::iterator it; // doesn't compile for(it=v.begin(); it<v.end(); ++it) { if(x <= *it) // if the insertee is alphabetically less than this index { v.insert(it, x); } } } and g++ gives these errors: src/Item.hpp: In member function ‘void yarl::item::Inventory::insertItem(std::vector<T, std::allocator<_CharT> >&, const T&)’: src/Item.hpp:186: error: expected ‘;’ before ‘it’ src/Item.hpp:187: error: ‘it’ was not declared in this scope it must be something simple, but after ten minutes of staring at it I can't find anything wrong. Anyone else see it?
Try this instead: typename std::vector<T>::iterator it; Here's a page that describes how to use typename and why it's necessary here.
3,144,726
3,144,748
Am I failing to follow the standard?
If I have something like this: MyStruct clip; clip = {16, 16, 16, 16}; I get the following warning from the compiler: warning: extended initializer lists only available with -std=c++0x or -std=gnu++0x If I activate -std=c++0x in the compiler, it does not give any warning. But I'm not sure if I am following the standard. So should I deactivate that flag and initialize each member of the structure separately? Thank you.
For initialization you should be able to use MyStruct clip = {16, 16, 16, 16}; but as you discovered in the current C++ standard you can't assign to a bracketed list. In C++1x you can use the extended syntax.
3,144,904
3,144,917
May I take the address of the one-past-the-end element of an array?
Possible Duplicate: Take the address of a one-past-the-end array element via subscript: legal by the C++ Standard or not? int array[10]; int* a = array + 10; // well-defined int* b = &array[10]; // not sure... Is the last line valid or not?
Yes, you can take the address one beyond the end of an array, but you can't dereference it. For your array of 10 items, array+10 would work. It's been argued a few times (by the committee, among others) whether &array[10] really causes undefined behavior or not (and if it does, whether it really should). The bottom line with it is that at least according to the current standards (both C and C++) it officially causes undefined behavior, but if there's a single compiler for which it actually doesn't work, nobody in any of the arguments has been able to find or cite it. Edit: For once my memory was half correct -- this was (part of) an official Defect Report to the committee, and at least some committee members (e.g., Tom Plum) thought the wording had been changed so it would not cause undefined behavior. OTOH, the DR dates from 2000, and the status is still "Drafting", so it's open to question whether it's really fixed, or ever likely to be (I haven't looked through N3090/3092 to figure out). In C99, however, it's clearly not undefined behavior.
3,145,399
3,145,512
strlen() not working
Basically, I'm passing a pointer to a character string into my constructor, which in turn initializes its base constructor when passing the string value in. For some reason strlen() is not working, so it does not go into the right if statement. I have checked to make sure that there is a value in the variable and there is. Here is my code, I've taken out all the irrelevant parts: Label class contents: Label(int row, int column, const char *s, int length = 0) : LField(row, column, length, s, false) { } Label (const Label &obj) : LField(obj) { } ~Label() { } Field *clone() const { return new Label(*this); } LField class contents: LField(int rowNumVal, int colNumVal, int widthVal, const char *valVal = "", bool canEditVal = true) { if(strlen(valVal) > 0) { } else { //This is where it jumps to, even though the value in //valVal is 'SFields:' val = NULL; } } Field *clone() const { return new LField(*this); } LField(const LField &clone) { delete[] val; val = new char[strlen(clone.val) + 1]; strcpy(val, clone.val); rowNum = clone.rowNum; colNum = clone.colNum; width = clone.width; canEdit = clone.canEdit; index = clone.index; } Screen class contents: class Screen { Field *fields[50]; int numOfFields; int currentField; public: Screen() { numOfFields = 0; currentField = 0; for(int i = 0; i < 50; i++) fields[i] = NULL; } ~Screen() { for (int i = 0; i < 50; i++) delete[] fields[i]; } int add(const Field &obj) { int returnVal = 0; if (currentField < 50) { delete[] fields[currentField]; fields[currentField] = obj.clone(); numOfFields += 1; currentField += 1; returnVal = numOfFields; } return returnVal; } Screen& operator+=(const Field &obj) { int temp = 0; temp = add(obj); return *this; } }; Main: int main () { Screen s1; s1 += Label(3, 3, "SFields:"); } Hopefully someone is able to see if I am doing something wrong.
Marcin at this point the problem will come down to debugging, I copied your code with some minor omissions and got the correct result. Now it needs to be said, you should be using more C++ idiomatic code. For instance you should be using std::string instead of const char* and std::vector instead of your raw arrays. Here is an example of what the LField constructor would look like with std::string: #include <string> // header for string LField(int rowNumVal, int colNumVal, int widthVal, const std::string& valVal = "", bool canEditVal = true) { std::cout << valVal; if(valVal.length() > 0) { } else { //This is where it jumps to, even though the value in //valVal is 'SFields:' //val = NULL; } } Using these types will make your life considerably easier and if you make the change it may just fix your problem too. PREVIOUS: So you can be CERTAIN that the string is not being passed in correctly add a printline just before the strlen call. Once you do this work backward with printlines until you find where the string is not being set. This is a basic debugging technique. Label(int row, int column, const char *s, int length = 0) : LField(row, column, length, s, false) { } LField(int rowNumVal, int colNumVal, int widthVal, const char *valVal = "", bool canEditVal = true) { std::cout << valVal << std::endl; if(strlen(valVal) > 0) { } else { //This is where it jumps to, even though the value in //valVal is 'SFields:' val = NULL; } }
3,145,528
3,153,967
Problems with Qt 4.6 in VS 2008
sys info : win xp SP3 , Microsoft Visual Studio 2008 Version 9.0.21022.8 RTM Microsoft .NET Framework Version 3.5 SP1 Qt Add-in 1.1.5 I installed Qt 4.6.3 from the site http://qt.nokia.com/downloads/windows-cpp-vs2008. Then I added the Add-in Qt 1.1.5 and configured the PATH variable. When I open a new QT project , default example works just fine. On Nokia (qt) site I found some examples but it seems that things are not working properly. Here is one of many examples that do not work : #include <QtGui> #include <QWidget> class QLabel; class QLineEdit; class QTextEdit; class AddressBook : public QWidget { Q_OBJECT public: AddressBook(QWidget *parent = 0); private: QLineEdit *nameLine; QTextEdit *addressText; }; AddressBook::AddressBook(QWidget *parent) : QWidget(parent) { QLabel *nameLabel = new QLabel(tr("Name:")); nameLine = new QLineEdit; QLabel *addressLabel = new QLabel(tr("Address:")); addressText = new QTextEdit; QGridLayout *mainLayout = new QGridLayout; mainLayout->addWidget(nameLabel, 0, 0); mainLayout->addWidget(nameLine, 0, 1); mainLayout->addWidget(addressLabel, 1, 0, Qt::AlignTop); mainLayout->addWidget(addressText, 1, 1); setLayout(mainLayout); setWindowTitle(tr("Simple Address Book")); } int main(int argc, char *argv[]) { QApplication app(argc, argv); AddressBook addressBook; addressBook.show(); return app.exec(); } Compiler says this :: Output Window Linking... 
main.obj : error LNK2001: unresolved external symbol "public: virtual struct QMetaObject const * __thiscall AddressBook::metaObject(void)const " (?metaObject@AddressBook@@UBEPBUQMetaObject@@XZ) main.obj : error LNK2001: unresolved external symbol "public: virtual void * __thiscall AddressBook::qt_metacast(char const *)" (?qt_metacast@AddressBook@@UAEPAXPBD@Z) main.obj : error LNK2001: unresolved external symbol "public: virtual int __thiscall AddressBook::qt_metacall(enum QMetaObject::Call,int,void * *)" (?qt_metacall@AddressBook@@UAEHW4Call@QMetaObject@@HPAPAX@Z) main.obj : error LNK2001: unresolved external symbol "public: static struct QMetaObject const AddressBook::staticMetaObject" (?staticMetaObject@AddressBook@@2UQMetaObject@@B) C:\Documents and Settings\nik\My Documents\Visual Studio 2008\Projects\vs_03\Debug\vs_03.exe : fatal error LNK1120: 4 unresolved externals Results Build log was saved at "file://c:\Documents and Settings\nik\My Documents\Visual Studio 2008\Projects\vs_03\vs_03\Debug\BuildLog.htm" vs_03 - 5 error(s), 0 warning(s) It seems to me that the thing has to do with the use of macro Q_OBJECT but just dont know what to do that thing starts to work properly. Maybe wrong installation or ... NO IDEA Any help is appreciated.
I found the solution. Read all the details about the installation on this page >> http://dcsoft.wordpress.com/?aspxerrorpath=/community_server/blogs/dcsoft/archive/2009/03/06/how-to-setup-qt-4-5-visual-studio-integration.aspx. After a whole day of studying and configuration, I finally managed to get Qt 4.6.3 working with VS 2008. Follow the detailed instructions and there should not be a problem. My problem was that I had used the following option: "Starting with Qt 4.6, the LGPL version now comes pre-built for Visual Studio 2008 RTM. If this fits your need, you can simply install it and skip to INSTALL VISUAL STUDIO ADD-IN." This was wrong in my case, so I went on to the next chapter, DOWNLOAD QT SOURCE CODE: "As the option to download only the Qt source code is a bit obfuscated on the current Nokia website, please follow these directions:" (read the rest on the site). For now everything works great. There are no errors in linking, and moc works fine.
3,145,727
3,145,733
Compare a value with all array elements in one statement
For example: if (value == array[size]) //if the value (unique) is present in an array then do something can this be done in one statement without having to call a function or a basic for loop statement?
std::find can do it in one statement, but it's not as trivial as other languages :( int array[10]; if (array + 10 != find(array, array + 10, 7)) { cout << "Array contains 7!"; } Or with std::count: if (int n = count(array, array + 10, 7)) { cout << "Array contains " << n << " 7s!"; }
3,145,799
3,145,825
How to delete a pointer after returning its value inside a function
I have this function: char* ReadBlock(fstream& stream, int size) { char* memblock; memblock = new char[size]; stream.read(memblock, size); return(memblock); } The function is called every time I have to read bytes from a file. I think it allocates new memory every time I use it but how can I free the memory once I have processed the data inside the array? Can I do it from outside the function? Processing data by allocating big blocks gives better performance than allocating and deleting small blocks of data? Thank you very much for your help!
Dynamic arrays are freed using delete[]: char* block = ReadBlock(...); // ... do stuff delete[] block; Ideally however you don't use manual memory management here: std::vector<char> ReadBlock(std::fstream& stream, int size) { std::vector<char> memblock(size); stream.read(&memblock[0], size); return memblock; }
3,145,802
3,155,026
JavaScript Standard Library for V8
In my application, I allow users to write plugins using JavaScript. I embed V8 for that purpose. The problem is that developers can't use things like HTTP, sockets, streams, timers, threading, cryptography, unit tests, et cetera. I searched Stack Overflow and I found node.js. The problem with it is that you can actually create HTTP servers, start processes, and do more things that I do not want to allow. In addition, node.js has its own environment (./node script.js) and you can't embed it. And it doesn't support Windows - I need it to be fully cross-platform. If those problems can be solved, it will be awesome :) But I'm open to other frameworks too. Any ideas? Thank you!
In the end, I built my own library.
3,145,992
3,167,705
Esoteric JScript hosting problem: where is the error code when IDispatch::Invoke returns SCRIPT_E_PROPAGATE?
Our application hosts the Windows Scripting Host JScript engine and exposes several domain objects that can be called from script code. One of the domain objects is a COM component that implements IDispatch (actually, IDispatchEx) and which has a method that takes a script-function as a call-back parameter (an IDispatch* as a parameter). This COM component is called by script, does some things, and then calls back into script via that supplied IDispatch parameter before returning to the calling script. If the call-back script happens to throw an exception (e.g., makes a call to another COM component which returns something other than S_OK), then the call to IDispatch::Invoke on the call-back script will return SCRIPT_E_PROPAGATE instead of the HRESULT from the other COM component; not the expected HRESULT from the other COM object. If I return that HRESULT (SCRIPT_E_PROPAGATE) back to the caller of the first COM component (e.g., to the calling script), then the script engine correctly throws an error with the expected HRESULT from the other COM object. However, the ACTUAL ERROR is nowhere to be found. It's not returned from the Invoke call (the return value is SCRIPT_E_PROPAGATE). It's not returned via the EXCEPINFO supplied to Invoke (the structure remains empty). AND, it's not available via GetErrorInfo (the call returns S_FALSE)! Script Defines ScriptCallback = function() { return ComComponentB.doSomething(); } Invokes ComComponentA.execute(ScriptCallback) Invokes ScriptCallback() Invokes ComComponentB.doSomething() Returns E_FAIL (or some other HRESULT) Throws returned HRESULT Receives SCRIPT_E_PROPAGATE <--- WHERE IS THE ACTUAL ERROR? 
Returns SCRIPT_E_PROPAGATE Throws E_FAIL (or whatever HRESULT was returned from ComComponentB) I'd really like to get my hands on that error, because it would be useful to cache it and return the same error on subsequent calls (getting to the error often involves an expensive operation that is defined by the script-function passed as a parameter, but I do know how to cache the error). Is there a way for a scripted COM component to get to an exception thrown during a call-back into a supplied script-function???
Wow, this was seriously underdocumented. The answer is to: In the COM component making a callback into script... QI to get an IDispatchEx pointer on the script function to be called. Construct an object implementing both IServiceProvider & ICanHandleException; e.g. CScriptErrorCapturer. IServiceProvider::QueryService can return E_NOINTERFACE. If the script callback function throws, but does not catch, an exception when InvokeEx'd (see below), then ICanHandleException::CanHandleException will get an EXCEPINFO and VARIANT* (look on MSDN for documentation). The variant will contain the object thrown, which might be an Error object. Try to get the "number" and "message" properties from the IDispatch on this Error object, where "number" represents the actual script error (HRESULT). These values can/should be used to update the EXCEPINFO scode and (optionally) bstrDescription in order to propagate the error up to the calling script. If you don't update the scode, then the engine will throw an "Exception thrown but not caught" (0x800A139E), which is what the EXCEPINFO contains before you modify it. Not sure if pfnDeferredFillIn should be cleared, but it works without doing this. In my code, I capture the error here in my CScriptErrorCapturer. Return S_OK. Returning E_FAIL here will abort the entire script run, and not allow the exception to be thrown back up to the original calling script. Call IDispatchEx::InvokeEx and pass your CScriptErrorCapturer as the IServiceProvider parameter. Upon return from InvokeEx, query your CScriptErrorCapturer to see if it caught an error. According to code in the Google WebKit source, sometimes InvokeEx may return S_OK, even if an error is thrown. Don't touch the return value from InvokeEx, especially if it is SCRIPT_E_PROPAGATE (0x80020102). Note: this link contains some of the undocumented JScript HRESULTs described above.
3,146,017
3,146,035
How do I share a constant between C# and C++ code?
I'm writing two processes using C# and WCF for one and C++ and WWSAPI for the second. I want to be able to define the address being used for communication between the two in a single place and have both C# and C++ use it. Is this possible? The closest I've come is defining the constant in an IDL, then using MIDL and TLBIMP to get it into a DLL that can be consumed by C#. However this doesn't seem to expose the constant, or at least I can't figure out how to make it do so. Maybe it is limited to type definitions only. Any other suggestions?
C# and C++ have differing models for constants. Typically, the constant won't even be emitted in the resulting C++ binary -- it's automatically replaced where it is needed most of the time. Rather than using the constant, make a function which returns the constant, which you can P/Invoke from C#. Thus, #include <iostream> const double ACCELERATION_DUE_TO_GRAVITY = 9.8; int main() { std::cout << "Acceleration due to gravity is: " << ACCELERATION_DUE_TO_GRAVITY; } becomes #include <iostream> extern "C" double AccelerationDueToGravity() { return 9.8; } int main() { std::cout << "Acceleration due to gravity is: " << AccelerationDueToGravity(); } which you should be able to P/Invoke from C#.
3,146,048
3,146,979
Does this cause a memory leak?
I create my VBO like this: glGenBuffersARB(1,&polyvbo); glBindBufferARB(GL_ARRAY_BUFFER_ARB,polyvbo); glBufferDataARB(GL_ARRAY_BUFFER_ARB,sizeof(GLfloat) * tempvct.size(),&tempvct[0],GL_DYNAMIC_COPY); Then to update it I just do the same thing: glBindBufferARB(GL_ARRAY_BUFFER_ARB,polyvbo); glBufferDataARB(GL_ARRAY_BUFFER_ARB,sizeof(GLfloat) * tempvct.size(),&tempvct[0],GL_DYNAMIC_COPY); (needless to say, the data in tempvct changes) I'm just wondering if the above produces a memory leak. do I need to delete the vbo and recreate it, or will it automatically delete the old and update? Thanks
It doesn't cause a memory leak: re-specifying the data with glBufferData() replaces the buffer object's old data store rather than leaking it, so you don't need to delete and recreate the VBO. But why not use glBufferSubData()? It will probably be faster, since it updates the existing store in place, and it does basically the same thing.
3,146,223
3,146,288
sendmessage does not work
I try to sendmessage to an IE rebar/toolbar, but it seems that my toolbar does not take the message effect. Can someone tell me where is the fault ? HRESULT CButtonDemoBHO::onDocumentComplete(IDispatch *pDisp, VARIANT *vUrl) { m_hWnd = NULL; SHANDLE_PTR nBrowser = NULL; HRESULT hr = m_spWebBrowser2->get_HWND(&nBrowser); m_hWnd = (HWND)nBrowser; SendMessage(m_hWnd, WM_test, 0, 0); return S_OK; }
I would strongly recommend that you check the values of hr and m_hWnd and the return value of SendMessage(). I doubt that "SendMessage does not work", but am willing to believe "my message does not arrive". Are you sure that you are sending it to a valid destination?
3,146,231
3,146,286
How do I convert an unsigned long array to byte in C++?
How do I convert an unsigned long array to bytes in C++? I'm developing using VS2008 C++. Edit: I need to evaluate the size of the converted result; I want to split this long array into 29-byte packets. For example, we have long array = 12345; it should be converted to bytes, and then I need its length divided by 29 to see how many packets that is. Losing data is a concern, but right now I just want to get a result.
long array[SOME_SIZE]; char* ptr = reinterpret_cast<char*>( array ); // or just ( char* )array; in C // PC is little-endian platform for ( size_t i = 0; i < SOME_SIZE*sizeof( long ); i++ ) { printf( "%x", ptr[i] ); } Here's a more robust solution for you with no endianness dependence (this does not cover weird DSP devices where char can be a 32-bit entity; those are special): long array[SOME_SIZE]; for ( size_t i = 0; i < SOME_SIZE; i++ ) { for ( size_t j = 0; j < sizeof( long ); j++ ) { // my characters are 8 bit printf( "%x", (( array[i] >> ( j << 3 )) & 0xff )); } }
3,146,344
3,146,629
compatibility of native code C++ and openGL in Windows Phone 7
We have a windows mobile 6.5 gaming application which uses openGL . Now we planned to port it to WP7 (windows phone 7). When I check the compatibility of native code C++ and openGL in WP7, they are telling that there is no support in the WP7. WP7 support only Silverlight, XNA and the .NET Framework. So what we thought of use XNA.Is it is the right to use this? Please let me know how to proceed with this. And which is the best way to go about it.
Well, I am doing exactly the same thing now. I'm currently going through the painstaking process of just manually converting all the code to C#. There is no little saviour like the Android NDK here with WinMo 7: you HAVE to use C#. If I had my time again I would, and WILL, definitely look into something that converts from C++ to C#; it is completely unrealistic to try to manage a multi-platform project across multiple languages. Depending on your app: Silverlight I believe is meant for the more 'applicationy' type apps, whereas XNA is meant for games (or 3D apps), but I think both are coded in C#. EDIT: lol, sorry, I skipped over the part about how you were porting an OpenGL game. Definitely use XNA; converting from OpenGL to XNA (DirectX-like) will be the least of your worries, it's fairly straightforward. It's converting the code that's the pain. XNA is meant for n00bs writing stuff from scratch, and for them it is awesome. To that end, it helps if you still have all your assets' sources: hopefully you still have your Max or Maya model files and tga/bmp/png texture source files. If so, the content pipeline will automatically convert textures and, for models, converts .x or .fbx files exported from Max or Maya.
3,146,351
3,146,366
C++ getline or cin not accepting a string with spaces, I've searched Google and I'm still stumped!
First of all, thanks to everyone who helps me, it is much appreciated! I am trying to store a string with spaces and special characters intact into MessageToAdd. I am using getline (cin,MessageToAdd); and I have also tried cin >> MessageToAdd;. I am so stumped! When I enter the sample input Test Everything works as intended. However if I were to use Test Test Test The whole console would just blink fast until I pressed CtrlC. My style of putting variables at the top I've been told is obsolete. Please forgive me as I am still teaching myself and it's simply force of habit. I will be changing my style shortly after I get this solved :) void AddMessage() { ifstream myReadFile; string str; string MessageToAdd; string myMessages[10]; int i; // of course my famous i static string rowHtmlCloseTags; static string rowHtmlOpenTags; string replacement; myReadFile.open("C:\\Users\\Andrews\\Documents\\Visual Studio 2010\\Projects\\computerclass\\Debug\\outages.htm",ios::in); i = 0; //the start of my array rowHtmlCloseTags = "</b></td>"; // value that I want to replace with nothing rowHtmlOpenTags = "<td><b>"; if(!myReadFile) // is there any error? { cout << "Error opening the file! Aborting…\n"; exit(1); } if (myReadFile.is_open()) { cout << endl; while (!myReadFile.eof()) { getline(myReadFile, str); if (str == "<tr>") { getline(myReadFile, str); //get the next line cause thats where the <td><b>Outage Message</b></td> is. size_t foundIndex = str.find(rowHtmlCloseTags); //does the sought string exist in this this line? if (foundIndex != str.npos) //if not no position str.replace(foundIndex, rowHtmlCloseTags.size(), replacement); //replace the string else std::cout << "Oops.. didn't find " << rowHtmlCloseTags << std::endl; //else throw a bitch foundIndex = str.find(rowHtmlOpenTags); //does the sought string exist in this this line? 
if (foundIndex != str.npos) //if not no position str.replace(foundIndex, rowHtmlOpenTags.size(), replacement); //replace the string else std::cout << "Oops.. didn't find " << rowHtmlOpenTags << std::endl; //else throw a bitch myMessages[i]=str; i++; } } } system("cls"); i=0; while (i < 10) { cout << i << ") " << myMessages[i] << endl; i++; if (myMessages[i]=="") { break; } } myReadFile.close(); cout << endl; cout << endl; cout << "Enter the message you would like to see on the reader board.\n"; cout << "Or enter 911 to go back to the main menu: "; cin.ignore(1080); getline (cin,MessageToAdd); if (str == "911") //go back to the main menu { system("cls"); mainMenu(); } else //insert the message into a blank spot in the array { i=0; while (i < 10) { if (myMessages[i].empty()) { myMessages[i]=MessageToAdd; break; } else { i++; } } } //now rebuild the htm file with the new array CreateHtmlFile(myMessages); }
I'll tell you one thing that's immediately wrong with your code, not your specific problem but a hairy one nonetheless. I'm presuming that your mainMenu() function is calling this one. In that case, you appear to be under the misapprehension that: if (str == "911") //go back to the main menu { system("cls"); mainMenu(); } will return to your menu. It will not do that. What it will do is to call your main menu code afresh and eventually you will run out of stack space. I suspect that what you should be doing is having a loop in mainMenu() and that code above should just use return; rather than calling mainMenu() recursively. That and the fact that I think you should be comparing MessageToAdd against "911" rather than str. Another thing I would do would be to put some temporary debug code in: cout << "DEBUG A\n"; i=0; while (i < 10) { cout << "DEBUG B " << i << "\n"; if (myMessages[i].empty()) { cout << "DEBUG C\n"; myMessages[i]=MessageToAdd; break; } else { i++; cout << "DEBUG D " << i << "\n"; } cout << "DEBUG E\n"; } cout << "DEBUG F\n"; and see what gets printed. Of course, you could trace the execution in a debugger but that would require you to do the work yourself. If you just post the output (first 100 lines if it's huge), then we can probably tell you what's wrong easily. Actually, I think your problem is the cin.ignore. When I run your code, nothing works, neither Test nor Test Test Test. That's because it's ignoring the first 1080 characters I'm trying to input. Proof can be seen when you change those statements to: cin.ignore(1); getline (cin,MessageToAdd); cout << MessageToAdd << "\n"; and you get est output when you enter test. Take out the ignore line and try again. I'm not certain of this since you seem to indicate that Test works but I can't see this as being correct. So here's what you need to do (at a bare minimum): get rid of the cin.ignore altogether. use return rather than mainMenu(). use if (MessageToAdd == "911") instead of if (str == "911"). 
let us know how it goes then.
3,146,372
3,162,646
How can I track my input position with multiple inputs using Boost::Spirit::Qi?
I'd like to support something like C++'s #include mechanism in a boost spirit parser. Essentially, I have a script command that asks my parser to load a sub script from a file. I'd like to be able to report error messages as described in the tracking input position while parsing post, but they don't cover parsing for multiple inputs. Can this be reasonably accomplished using boost::spirit::qi? I've worked around getting the differing inputs in using a smarter iterator type. I'd still like to see accurate positioning though.
IMHO, using a smart iterator is the way to go. What needs to be done is to have a stack of input contexts maintained by the iterator. Each input context stores the information related to a specific file. Whenever a new file needs to be read (i.e. after seeing an #include statement) a new input context is created. The current input context gets pushed onto the stack, while the new context gets to be the active one. On EOF you pop the next input context from the stack, returning to the point right after the #include. If the stack is empty you reached the EOF of the main file. In any case, the iterator only gets its input from the active input context.
3,146,438
3,147,859
Should Direct3D be used over OpenGL in Windows?
Since Microsoft is generally a bit biased toward Direct3D, would a scene using VBOs in Direct3D be faster than the same scene using VBOs in OpenGL, or would it be the same since it's up to the graphics card driver? Thanks
Performance-wise, and assuming decent GPU drivers, there is no difference overall. Some operations are inherently faster in OpenGL than in DirectX9, although DX10 remedied that. But a good rule of thumb when working with external hardware is that it's not the API you're using that determines performance. When writing network code, the bottleneck is the network adapter, and it doesn't matter if your socket code is written in .NET, plain Berkeley sockets in C, or perhaps using some Python library. When writing code to use the GPU, the GPU is the limiting factor. The biggest difference between DirectX and OpenGL is that one might require a function call or two more than the other to achieve certain tasks -- and the performance cost of that is pretty much nonexistent. What happens on the GPU is the same in either case, because that's determined by your GPU driver, and because both OpenGL and DirectX try to be as efficient as possible. There are valid reasons to prefer either API though. DirectX has much better tool support. Microsoft does an extremely good job of that. Debugging and optimizing DirectX code is much easier with tools such as PIX. And Microsoft also provides the helper library D3DX which provides efficient implementations of a lot of commonly used functionality. OpenGL has the advantage that it's not tied to a specific OS. DirectX9 only works on Windows. DX10 and above only works on Vista and above. OpenGL works on any OS where an OpenGL driver has been written. On Windows, the situation is sometimes a bit awkward though. Windows itself only comes with ancient implementations of OpenGL. (XP with v1.1, I believe, and Vista/7 with 1.5). So OpenGL apps on Windows rely on the GPU vendor to provide an updated implementation with their drivers. ATI and NVidia do provide very good implementations, so it's not that much of a problem. Intel's OpenGL drivers are generally lagging behind, both in quality and in supported features.
3,146,545
3,146,630
how to get country name and/or code by IP address?
Sorry for so noobish question but how can I do it with C++? I need to get country names and their 2-letter or 3-letter ISO codes (or whatever it's called) using a given IP address. Is there any way to do this easily? The only two ways I found are to use GeoIP C library or to download a free database but is there something more dynamic? I mean, is there a free web-service with up-to-date db I can utilize for my purpose using curl or something? Thanks in advance.
MaxMind.com exposes a set of web services for geolocation. They're not free, but cheap per lookup. One of my clients uses their city-level lookup service, with each result cached for a reasonable period (from memory, 2 or 3 weeks), since IP address blocks don't change location that often. The application also counts the number of actual lookups to send an admin notification when the number remaining in the subscription gets low, so it can be topped up manually. I also use the downloadable database in several applications, with a scheduled update process to keep it up to date.
3,146,550
3,146,559
Is there any reason Google uses the cc extension instead of cpp in their open source projects?
Is there any reason Google uses the cc extension instead of the standard cpp in their open source projects? Should I do that too?
It's entirely a matter of personal preference, at least for the person(s) starting the project. Whatever you choose, be consistent.
3,146,628
3,146,688
How to cast wchar_t into int for displaying the code point?
I have a simple function in my program, from when I wanted to mess around with Unicode and do stuff with it. In this function, I wished to display the code value of the character the user entered. It SEEMED possible, here's my function: wstring listcode(wchar_t arg) { wstring str = L""; str += static_cast<int> (arg); //I tried (int) arg as well str += L": "; str += (wchar_t) arg; return str; } Now as you see I just wanted to display the integer value (like an ASCII character, such as (int) 'a'), but something like listcode(L'&') will be displayed as &: & ! Is it not possible to find the integer value of a wide character like that?
In C++, you cannot add anything to strings but characters and other strings. There is no implicit conversion from int (or anything else) to string; that's just the way the string type is designed. (In fact, str += static_cast<int>(arg) compiles only because the int is converted back to a wchar_t and appended as a character, which is exactly why you see &: & instead of a number.) What you do instead is to use string streams: std::wstring listcode(wchar_t arg) { std::wostringstream oss; oss << static_cast<int>(arg); oss << L": "; oss << arg; return oss.str(); } In practice, however, when converting to strings in C++, it's often better to have functions write to a stream than return a string: void listcode(std::wostream& os, wchar_t arg) { os << static_cast<int>(arg); os << L": "; os << arg; } That way, if you want to output something to the console or to a file, you can directly pass std::wcout or a file stream, and if you want a string, you just pass a string stream.
3,146,641
3,146,663
How to make a global array with variable number of elements?
Is it posible to declare a global array of a struct, and add elements dynamically to it? Thanks.
If you want to dynamically add elements to something, you might consider using a list. You could create a global list, and dynamically add elements to it as needed. If you really need array type functionality, a vector might be more your speed. In this case, the STL is likely to provide what you need. It's also good to note that globals aren't always a good idea. If you're using globals a lot, you may want to consider refactoring your code so they won't be necessary. Many people consider global variables to be a code smell.
3,146,675
3,151,067
What is the problem with this simple boost::spirit::qi parser?
I've got this simple parser intended to parse VB style double quoted strings. Thus, the parser should turn something like "This is a quoted string containing quotes ("" "")" into an output of This is a quoted string containing quotes (" ") Here is the grammar I came up with for this: namespace qi = boost::spirit::qi; namespace wide = qi::standard_wide; class ConfigurationParser : public qi::grammar<std::wstring::iterator, std::wstring()> { qi::rule<std::wstring::iterator, std::wstring()> quotedString; qi::rule<std::wstring::iterator> doubleQuote; public: ConfigurationParser() : ConfigurationParser::base_type(quotedString, "vFind Command Line") { doubleQuote = (wide::char_(L'"') >> wide::char_(L'"')); quotedString = L'"' >> +(doubleQuote[qi::_val = L'"'] | (wide::char_ - L'"'))>> L'"'; } }; However, the attribute I'm getting is a single quote mark ("), rather than the full parsed message.
You can do it without any semantic actions: class ConfigurationParser : public qi::grammar<std::wstring::iterator, std::wstring()> { qi::rule<std::wstring::iterator, std::wstring()> quotedString; qi::rule<std::wstring::iterator, wchar_t()> doubleQuote; public: ConfigurationParser() : ConfigurationParser::base_type(quotedString, "vFind Command Line") { doubleQuote = wide::char_(L'"') >> qi::omit[wide::char_(L'"')]; quotedString = L'"' >> +(doubleQuote | (wide::char_ - L'"')) >> L'"'; } }; The qi::omit[] directive still executes the embedded parser but doesn't expose any attribute, so the doubleQuote rule returns a single L'"'.
3,146,948
3,154,386
P/Invoke code works on WinXP, exception on Win2k8
I'm attempting to access a function in a DLL in C# and C++. C++ is working fine, as is C# on WinXP. However I'm getting the following error when attempting to access the function on a Win2k8 system: Unhandled Exception: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. at Router.GetAddress() The declaration in C# is: [DllImport("Constants.dll")] static extern String GetAddress(); Usage in C# (at the moment) is just outputting it: Console.WriteLine(GetAddress()); And the contents of the DLL's function are just: const static WCHAR* szAddress= L"net.tcp://localhost:4502/TestAddress"; extern "C" __declspec(dllexport) const WCHAR* GetAddress() { return szAddress; } I really didn't think there was anything controversial here. The only thing I can think of is the const return from GetAddress, but I'm not sure how to apply the corresponding keyword to C# as I'm not as familiar with that language yet. Any suggestions would be greatly appreciated.
I ended up fixing this problem using the details in http://social.msdn.microsoft.com/Forums/en-US/csharpgeneral/thread/4e387bb3-6b99-4b9d-91bb-9ec00c47e3a4. I changed the declaration to: [DllImport("Constants.dll", CharSet = CharSet.Unicode)] static extern int GetAddress(StringBuilder strAddress); The usage therefore became: StringBuilder sb = new StringBuilder(1000000); // Arbitrary length for the time being GetAddress(sb); Console.WriteLine(sb.ToString()); And the DLL was changed to: const static WCHAR* szAddress = L"net.tcp://localhost:4502/TestAddress"; extern "C" __declspec(dllexport) int GetAddress(WCHAR* strAddress) { wcscpy(strAddress, szAddress); return 0; }
3,147,014
3,147,048
Is heap memory per-process? (or) Common memory location shared by different processes?
Every process can use heap memory to store and share data within the process. We have a rule in programming: whenever we allocate some space in heap memory, we need to release it once the job is done, or else it leads to memory leaks. int *pIntPtr = new int; . . . delete pIntPtr; My question: Is heap memory per-process? If YES, then a memory leak is possible only while a process is running. If NO, then it means the OS is able to retain data in memory somewhere. If so, is there a way to access this memory from another process? This could also become a way for inter-process communication. I suppose the answer to my question is YES. Please provide your valuable feedback.
On almost every system currently in use, heap memory is per-process. On older systems without protected memory, heap memory was system-wide. (In a nutshell, that's what protected memory does: it makes your heap and stack private to your process.) So in your example code on any modern system, if the process terminates before delete pIntPtr is called, pIntPtr will still be freed (though its destructor, not that an int has one, would not be called.) Note that protected memory is an implementation detail, not a feature of the C++ or C standards. A system is free to share memory between processes (modern systems just don't because it's a good way to get your butt handed to you by an attacker.)
3,147,156
3,147,233
casting comparison between Objective-C and C++
Okay, so this might be a bit of an academic question. Can someone tell me if/how C++'s casting operators might translate to Objective-C... or how/why they're not necessary? I've been out of the loop with C++ for a few years now and it seems like every time I turn around they add a few new keywords. I was recently introduced to C++'s various casting operators reinterpret_cast, static_cast, dynamic_cast, and const_cast. I'm a little fuzzy on exactly when those situations come up that you would need to use all these types of casts. I've been using Objective-C for a year or two now and feel fairly comfortable with it. (Been mostly a C person until then). I'm trying to understand why C++ seems to have all this complexity. Or to put it another way, what's Objective-C missing that it doesn't seem to have (or need?) this many casting types?
See this answer to the question When should static_cast, dynamic_cast and reinterpret_cast be used? on the meaning of each kind of casts. what's Objective-C missing that it doesn't seem to have (or need?) this many casting types? C++ focuses a lot more in type safety than C. The many cast operators are added to make the many different casting intentions clear (and to discourage people from using it due to its ugly form). And, There is no const objects (const NSObject*) in Objective-C, and other const parameters aren't so emphasized unlike in C++, so const_cast is useless. Objective-C instances always use dynamic typing, so dynamic_cast is not needed. (Type checking in ObjC is usually done with -isKindOfClass:.) static_cast and reinterpret_cast are the same in C, but not so in C++. Because C++ supports multiple inheritance (missing in ObjC), a pointer casting is not as simple as a no-op: #include <cstdio> struct A { int x; A() : x(12) {} }; struct B { int y; B() : y(33) {} int get() const { return y; } }; struct C : A, B { int z; C() : A(), B(), z(41) {} }; int main () { C* c = new C; printf("%d\n", c->get()); // 33 printf("%d\n", static_cast<B*>(c)->get()); // 33 printf("%d\n", reinterpret_cast<B*>(c)->get()); // 12 }
3,147,274
3,147,283
C++ Default argument for vector<int>&?
I have a function, void test( vector<int>& vec ); How can I set the default argument for vec ? I have tried void test( vector<int>& vec = vector<int>() ); But there's a warning "nonstandard extension used : 'default argument' : conversion from 'std::vector<_Ty>' to 'std::vector<_Ty> &'" Is there a better way to do this ? Instead of void test() { vector<int> dummy; test( dummy ); } Regards, Voteforpedro
Have you tried: void test(const vector<int>& vec = vector<int>()); C++ does not allow temporaries to be bound to non-const references. If you really need to have a vector<int>& (not a const one), you can declare a static instance and use it as a default (thus non-temporary) value. static vector<int> DEFAULT_VECTOR; void test(vector<int>& vec = DEFAULT_VECTOR); But beware, because DEFAULT_VECTOR will (can) be modified and won't reset on each call! Not sure that this is what you really want. Thanks to stinky472, here is a thread-safe alternative: instead of providing a default value, you might as well overload test() with a zero-parameter version which calls the other version: void test() { vector<int> vec; test(vec); }
3,147,359
3,149,750
I am looking for C++ wrapper around built-in Perl functions
A while ago I found a library that allowed calling individual built-in Perl functions from C++, but I cannot find it now. Can you tell me where I can find it on the net? Thanks.
You might want to try libperl++. It's still kind of beta, but the part that involves calling perl from C++ has been mature for quite some time. It's much easier to use than the perl API itself. Full disclosure: I'm the author of libperl++
3,147,561
3,147,580
seekg tellg end of line
I have to read lines from an external text file and need the first character of some lines. Is there a function which can tell me which line the pointer is in, and another function which can set the pointer to the beginning of line x? I have to jump to lines before and after the current position.
I don't think there is such a function. You will have to implement this functionality yourself, probably using getline(), or by scanning the file for newline characters ('\n') one character at a time and storing just the one character after each. You may find a vector (probably vector<size_t>) helpful for storing the offsets of line starts; this way you can jump around the file in a line-based way. I haven't tried this, though, so it may need tweaking.
3,147,754
3,150,286
Python-dependency, windows (CMake)
I have a large, cross-platform, Python-dependent project which is built by CMake. On Linux, Python is either preinstalled or easily retrieved by a shell script. But for the Windows build, I have to install Python manually from the .msi before running CMake. Is there any good workaround using CMake scripts? PS: All other external dependencies are downloaded from a dedicated FTP server.
Python doesn't really have to be installed to function properly. For my own CMake based projects on Windows, I just use a .zip file containing the entire python tree. All you need to do is extract it to a temporary directory, add it to your path, and set your PYTHONHOME/PYTHONPATH environment variables. Once that's done, you have a fully operational Python interpreter at your disposal. About the only 'gotcha' on Windows is to make sure you remember to copy the Python DLL out of C:\Windows\system32 into the top-level Python directory prior to creating the .zip.
3,147,900
3,150,210
How to read file which contains \uxxxx in vc++
I have a txt file whose contents are: \u041f\u0435\u0440\u0432\u044b\u0439_\u0438\u043d\u0442\u0435\u0440\u0430\u043a\u0442\u0438\u0432\u043d\u044b\u0439_\u0438\u043d\u0442\u0435\u0440\u043d\u0435\u0442_\u043a\u0430\u043d\u0430\u043b How can I read such a file to get a result like this: "Первый_интерактивный_интернет_канал" If I type this: string str = _T("\u041f\u0435\u0440\u0432\u044b\u0439_\u0438\u043d\u0442\u0435\u0440\u0430\u043a\u0442\u0438\u0432\u043d\u044b\u0439_\u0438\u043d\u0442\u0435\u0440\u043d\u0435\u0442_\u043a\u0430\u043d\u0430\u043b"); then the result in str is good, but if I read it from the file it stays the same as in the file. I guess it is because '\u' becomes '\u'. Is there a simple way to convert \uxxxx notation to the corresponding symbols in C++?
Here is an example for MSalters's suggestion: #include <iostream> #include <string> #include <fstream> #include <algorithm> #include <sstream> #include <iomanip> #include <locale> #include <boost/scoped_array.hpp> #include <boost/regex.hpp> #include <boost/numeric/conversion/cast.hpp> std::wstring convert_unicode_escape_sequences(const std::string& source) { const boost::regex regex("\\\\u([0-9A-Fa-f]{4})"); // NB: no support for non-BMP characters boost::scoped_array<wchar_t> buffer(new wchar_t[source.size()]); wchar_t* const output_begin = buffer.get(); wchar_t* output_iter = output_begin; std::string::const_iterator last_match = source.begin(); for (boost::sregex_iterator input_iter(source.begin(), source.end(), regex), input_end; input_iter != input_end; ++input_iter) { const boost::smatch& match = *input_iter; output_iter = std::copy(match.prefix().first, match.prefix().second, output_iter); std::stringstream stream; stream << std::hex << match[1].str() << std::ends; unsigned int value; stream >> value; *output_iter++ = boost::numeric_cast<wchar_t>(value); last_match = match[0].second; } output_iter = std::copy(last_match, source.end(), output_iter); return std::wstring(output_begin, output_iter); } int wmain() { std::locale::global(std::locale("")); const std::wstring filename = L"test.txt"; std::ifstream stream(filename.c_str(), std::ios::in | std::ios::binary); stream.seekg(0, std::ios::end); const std::ifstream::streampos size = stream.tellg(); stream.seekg(0); boost::scoped_array<char> buffer(new char[size]); stream.read(buffer.get(), size); const std::string source(buffer.get(), size); const std::wstring result = convert_unicode_escape_sequences(source); std::wcout << result << std::endl; } I'm always surprised how complicated seemingly simple things like this are in C++.
3,148,081
3,148,155
regenerating connection point methods
I've created a connection point interface _IPlayerEvents. I've added a couple of methods OnConnect() OnDisconnect() I've built the project, and VS2008 has generated code in the CProxy_IPlayerEvents class: HRESULT Fire_OnConnect(){...} HRESULT Fire_OnDisconnect() {...} Now I've added a further method to the _IPlayerEvents interface OnMessage([out, retval]BSTR* pbstrMessage) When I build, no code is added to the CProxy_IPlayerEvents class for the OnMessage function - I'd expected that VS2008 would generate: HRESULT Fire_OnMessage(BSTR* pbstrMessage){...} I'd prefer to avoid having to update the CProxy_IPlayerEvents manually if I could. How can I force VS2008 to regenerate the CProxy_IPlayerEvents class?
I found an answer! Open Class View in VS2008, right-click your COM object and from its context menu, select Add -> Add Connection Point... Move the source interface from the list on the left over to the right, then click Finish. This will generate or regenerate the proxy class when you next build your project. This step is crucial - and a real pain if you haven't done connection points in a while!
3,148,319
3,148,783
Is `volatile` required for shared memory accessed via access function?
[edit] For background reading, and to be clear, this is what I am talking about: Introduction to the volatile keyword When reviewing embedded systems code, one of the most common errors I see is the omission of volatile for thread/interrupt shared data. However, my question is whether it is 'safe' not to use volatile when a variable is accessed via an access function or member function? A simple example; in the following code... volatile bool flag = false ; void ThreadA() { ... while (!flag) { // Wait } ... } interrupt void InterruptB() { flag = true ; } ... the variable flag must be volatile to ensure that the read in ThreadA is not optimised out, however if the flag were read via a function thus... volatile bool flag = false ; bool ReadFlag() { return flag; } void ThreadA() { ... while ( !ReadFlag() ) { // Wait } ... } ... does flag still need to be volatile? I realise that there is no harm in it being volatile, but my concern is for when it is omitted and the omission is not spotted; will this be safe? The above example is trivial; in the real case (and the reason for my asking), I have a class library that wraps an RTOS such that there is an abstract class cTask that task objects are derived from. Such "active" objects typically have member functions that access data that may be modified in the object's task context but accessed from other contexts; is it critical then that such data is declared volatile? I am really interested in what is guaranteed about such data rather than what a practical compiler might do. I may test a number of compilers and find that they never optimise out a read through an accessor, but then one day find a compiler or a compiler setting that makes this assumption untrue. I could imagine for example that if the function were in-lined, such an optimisation would be trivial for a compiler because it would be no different than a direct read.
My reading of C99 is that unless you specify volatile, how and when the variable is actually accessed is implementation defined. If you specify the volatile qualifier, then the code must work according to the rules of an abstract machine. The relevant parts in the standard are: 6.7.3 Type qualifiers (volatile description) and 5.1.2.3 Program execution (the abstract machine definition). For some time now I have known that many compilers actually have heuristics to detect when a variable should be reread and when it is okay to use a cached copy. Volatile makes it clear to the compiler that every access to the variable should actually be an access to memory. Without volatile, it seems the compiler is free to never reread the variable. And BTW, wrapping the access in a function doesn't change that, since a function even without inline might still be inlined by the compiler within the current compilation unit. P.S. For C++ it is probably worth checking C89, which the former is based on. I do not have C89 at hand.
3,148,392
3,148,605
ICC 11.1 has strange behaviour regarding PTHREADS on ia64
I'm working on an ia64 machine using ICC 11.1. The following program compiles nicely with icc test.cpp -o test: #include <pthread.h> #include <iostream> using namespace std; int main() { cout << PTHREAD_STACK_MIN << '\n'; return 0; } But when I change the contents of the file to: #include <pthread.h> #include <stdio.h> int main() { printf("%d\n", PTHREAD_STACK_MIN); return 0; } I suddenly get: icc -c test.cpp -o test.o test.cpp(6): error: identifier "PTHREAD_STACK_MIN" is undefined printf("%d\n", PTHREAD_STACK_MIN); ^ compilation aborted for test.cpp (code 2) Can anyone explain to me why? Or more importantly: how can I work around this issue so that the second code example will also compile?
Well, that's easy: you forgot to include <limits.h>, where PTHREAD_STACK_MIN is supposed to be declared (as per POSIXv6/SUSv3). From the error one can conclude that <iostream> internally also includes <limits.h>, which is why the error doesn't happen in the <iostream> version.
3,148,571
3,148,584
Strange class declaration
In Qt's qrect.h I found a class declaration starting like this: class Q_CORE_EXPORT QRect { }; As you can see, there are two identifiers after the class keyword. How should I understand this? Thank you.
Q_CORE_EXPORT is a macro that gets expanded to different values depending on the context in which it's compiled. A snippet from that source: #ifndef Q_DECL_EXPORT # ifdef Q_OS_WIN # define Q_DECL_EXPORT __declspec(dllexport) # elif defined(QT_VISIBILITY_AVAILABLE) # define Q_DECL_EXPORT __attribute__((visibility("default"))) # endif # ifndef Q_DECL_EXPORT # define Q_DECL_EXPORT # endif #endif #ifndef Q_DECL_IMPORT # ifdef Q_OS_WIN # define Q_DECL_IMPORT __declspec(dllimport) # else # define Q_DECL_IMPORT # endif #endif // ... # if defined(QT_BUILD_CORE_LIB) # define Q_CORE_EXPORT Q_DECL_EXPORT # else # define Q_CORE_EXPORT Q_DECL_IMPORT # endif Those values (__declspec(dllexport), __attribute__((visibility("default"))), etc.) are compiler-specific attributes indicating visibility of functions in dynamic libraries.
3,148,794
3,191,111
c++ boost::serialization setting a fixed class_id for a class
I'm using boost to serialize and deserialize some classes, like so: boost::archive::xml_oarchive xmlArchive(oStringStream); xmlArchive.register_type(static_cast<BaseMessage *>(NULL)); xmlArchive.register_type(static_cast<IncomingTradeMessage *>(NULL)); xmlArchive.register_type(static_cast<InternalRequestInfo *>(NULL)); xmlArchive.register_type(static_cast<InternalTradeTransInfo *>(NULL)); const BaseMessage* myMessage = message; xmlArchive << make_nvp("Message", myMessage); Now my classes get a class_id according to the order in which they are registered. I don't want that; I want to control the class_ids so I can do something like BOOST_SET_CLASS_ID(1234, BaseMessage); and everywhere in my project BaseMessage would have a class_id of 1234. How can I do such a thing?
Can't you use BOOST_CLASS_EXPORT_GUID or similar instead? I.e. BOOST_CLASS_EXPORT_GUID(IncomingTradeMessage, "IncomingTradeMessage") ... It will use some more bandwidth since strings are transmitted rather than integers, but it will solve your problem. Refer to this and this for more info. EDIT: This compile just fine: #include <fstream> #include <boost/serialization/export.hpp> #include <boost/archive/text_oarchive.hpp> class Foo { friend class boost::serialization::access; template<class Archive> void serialize(Archive & ar, const unsigned int version) { ar & dummy1; } int dummy1; public: virtual ~Foo() {} }; class Bar : public Foo { friend class boost::serialization::access; template<class Archive> void serialize(Archive & ar, const unsigned int version) { // serialize base class information ar & boost::serialization::base_object<Foo>(*this); ar & dummy2; } int dummy2; }; BOOST_CLASS_EXPORT_GUID(Foo, "Foo") BOOST_CLASS_EXPORT_GUID(Bar, "Bar") int main(int argc, char *argv[]) { std::ofstream ofs("filename"); boost::archive::text_oarchive oa(ofs); Foo *f = new Bar; oa << f; return 0; }
3,148,850
3,149,059
Is using enum for integer bit oriented operations in C++ reliable/safe?
Consider the following (simplified) code: enum eTestMode { TM_BASIC = 1, // 1 << 0 TM_ADV_1 = 1 << 1, TM_ADV_2 = 1 << 2 }; ... int m_iTestMode; // a "bit field" bool isSet( eTestMode tsm ) { return ( (m_iTestMode & tsm) == tsm ); } void setTestMode( eTestMode tsm ) { m_iTestMode |= tsm; } Is this reliable, safe and/or good practice? Or is there a better way of achieving what i want to do apart from using const ints instead of enum? I would really prefer enums, but code reliability is more important than readability.
I can't see anything bad in that design. However, keep in mind that enum types can hold unspecified values. Depending on who uses your functions, you might want to check first that the value of tsm is a valid enumeration value. Since enums are integer values, one could do something like: eTestMode tsm = static_cast<eTestMode>(17); // We consider here that 17 is not a valid value for your enumeration. However, doing this is ugly and you might just consider that doing so results in undefined behavior.
3,148,896
3,149,136
Cryptography libraries conflict (MCrypt, libgcrypt)
I'm trying to perform encryption and decryption (Rijndael 256, ECB mode) in two different components: 1. PHP - server side (using mcrypt) 2. C++ - client side (using gcrypt) I ran into a problem: the client side could not correctly decrypt the data encrypted by the server side, so... I checked: 1. the initial vector - the same on both sides (length 32) 2. the key - again the same key on both sides.. so I wrote some code in C++ that encrypts the data (with the same parameters as in the PHP) and I found out that the encrypted data contains different bytes (maybe an encoding issue??) I'll be more than glad to get some help PHP - MCrypt // Encrypt Function function mc_encrypt($encrypt, $mc_key) { $iv_size = mcrypt_get_iv_size(MCRYPT_RIJNDAEL_256, MCRYPT_MODE_ECB); $iv = mcrypt_create_iv($iv_size, MCRYPT_RAND); $iv = "static_init_vector_static_init_v"; echo "IV-Size: " . $iv_size . "\n"; echo "IV: " . $iv . "\n"; $passcrypt = mcrypt_encrypt(MCRYPT_RIJNDAEL_256, $mc_key, $encrypt, MCRYPT_MODE_ECB, $iv); print_hex($passcrypt); return $encode; } mc_encrypt("Some text which should be encrypted...","keykeykeykeykeykeykeykeykeykeyke"); I'll post the C++ code in a comment. Thanks, Johnny Depp
OK, I'll make my comment an answer: an Initialization Vector (IV) isn't used in ECB mode. If one is provided, different implementations might behave differently. If you want to be sure the implementations will work correctly, use an IV of 0 (zero). Even though you provide the IV, both implementations SHOULD ignore it, but one can never be sure about that. Not providing an IV in ECB mode should work as well but again, it all depends on the implementations. According to the PHP documentation, MCrypt will ignore it. GCrypt I'm not sure about. mcrypt_get_iv_size(MCRYPT_RIJNDAEL_256, MCRYPT_MODE_ECB) should actually return 0 since you specify ECB mode. Edit: Do not call mcrypt_get_iv_size or mcrypt_create_iv. Instead call mcrypt_encrypt without an IV. According to the PHP documentation, all bytes in the IV will then be set to '\0'. The same goes for the C++ code: no need to set any IV at all. The libgcrypt code is complex, but from glancing at the source of version 1.4.5, it seems the IV isn't used at all in ECB mode. If the resulting ciphertext still differs then the problem is something else. A couple of possibilities come to mind: Encoding - Is the same encoding used in both the server and the client? Endianness - What type of systems are the server and the client? Big- vs little-endian?
3,148,903
3,149,250
Recursive function returns unexpected result
My function takes a number input from the user and recursively sums the number 'n' to one. Inputting a 5 would sum 1/5 + 1/4 + 1/3 + 1/2 + 1/1. #include<stdio.h> #include<conio.h> //to float recursion(float num,float sum); void main(void) { float num=5,sum=0; //input num printf("%d",recursion(num,sum)); getch(); } float recursion(float num,float sum) { // int sum=0; every time u run it the sum is assigned 0 if( num==1) return 1; else { sum=sum+(1/num); num--; recursion(num,sum); } return sum; }//recursion function ends The problem is that it is giving 0. Can anyone help, please?
Why the printf("%d") when it's supposed to print a float? Passing a float to %d is undefined behaviour, and in practice it often displays 0. float recursion(float num) { if( num==1.0f) { printf("1/1 = "); return 1.0f; } float inverse = 1.0f/num; printf("1/%.0f + ", num); return (inverse + recursion(--num)); }//recursion function ends Here's the test code: float num=5,sum=0; float expected = 0; for (int i = 1; i <= num; ++i) { expected += 1.0f/i; } //input num printf("Expected %f and got %f",expected, recursion(num)); Output: 1/5 + 1/4 + 1/3 + 1/2 + 1/1 = Expected 2.283334 and got 2.283334 Hope this helps.
3,149,263
3,149,285
Why is there no boost::filesystem::move_file?
I'm using Boost Filesystem to replace Windows C++ functions like CopyFile and MoveFile to get some kind of portability between Windows and Linux. I'm using copy_file, but I have not been able to find anything that moves files, like a 'move_file' function. Does Boost have a move-file function? I would very much prefer to use Boost, but alternative suggestions are welcome.
It's called rename, see the manual. Like the corresponding OS functions, this might or might not work if the source and destination paths are on different file systems. If it doesn't work, use a copy operation followed by a delete operation.