2,982,167
2,982,197
Integration (math) in C++
I'm looking for a library to find the integral of a given set of random data (rather than a function) in C++ (or C, but preferably C++). There is another question asking about integration in C, but the answers mostly discuss how to integrate a function (I think...). I understand that this can be done simply by calculating the area under the line segment between each pair of points from start to finish, but I'd rather not reinvent the wheel if this has already been done. I apologize in advance if this is a duplicate; I searched pretty extensively to no avail. My math isn't as strong as I'd like, so it's entirely possible I'm using the wrong terminology. Thanks in advance for any help! Chris Edit: In case anybody is interested, I feel like an idiot. Even adding in a bunch of OO abstraction to make my other code easier to use, that was maybe 30 lines of code. This is what 3 years away from any sort of math will do to you... thanks for all of the help!
This is trivial. If the points are (x0, y0), (x1, y1), ..., (xN, yN), and the points are ordered so that x0 <= x1 <= ... <= xN, then the integral is y0 * (x1 - x0) + y1 * (x2 - x1) + ... using no interpolation (summing areas of rectangles), and (y0 + y1)/2 * (x1 - x0) + (y1 + y2)/2 * (x2 - x1) + ... using linear interpolation (summing areas of trapezia). The problem is especially simple if your data is y0, y1, ..., yN and the corresponding x values are assumed to be 0, 1, ..., N. Then you get y0 + y1 + ... using no interpolation (summing areas of rectangles), and (y0 + y1)/2 + (y1 + y2)/2 + ... using linear interpolation (summing areas of trapezia). Of course, using some simple algebra, the trapezia formulae can be simplified. For instance, in the last case, you get y0/2 + y1 + y2 + ...
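The trapezium formula from the answer can be sketched in a few lines. This is a minimal illustration (function and parameter names are my own, not from the original), assuming the x values are sorted ascending and both vectors have the same length:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Trapezoidal rule over sampled data points:
// sum of (y[i] + y[i+1])/2 * (x[i+1] - x[i]).
double trapezoid(const std::vector<double>& x, const std::vector<double>& y)
{
    double area = 0.0;
    for (std::size_t i = 0; i + 1 < x.size(); ++i)
        area += (y[i] + y[i + 1]) * 0.5 * (x[i + 1] - x[i]);
    return area;
}
```

For evenly spaced points at x = 0, 1, ..., N this reduces to the y0/2 + y1 + ... + yN/2 form mentioned at the end of the answer.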
2,982,224
2,982,229
Access element of pointed std::vector
I have a function where I provide a pointer to a std::vector. I want to assign vector[element] to a variable, but I'm getting compiler errors. I'm doing: void Function(std::vector<int> *input) { int a; a = *input[0]; } What is the right way to do this? Thanks
Should be: void Function(std::vector<int> *input) { // note: why split the initialization of a onto a new line? int a = (*input)[0]; // this deferences the pointer (resulting in) // a reference to a std::vector<int>), then // calls operator[] on it, returning an int. } Otherwise you've got *(input[0]), which is *(input + 0), which is *input. Of course, why not just do: void Function(std::vector<int>& input) { int a = input[0]; } And if you don't modify input, mark it as const: void Function(const std::vector<int>& input) { int a = input[0]; }
2,982,325
2,983,171
Quantifying the Performance of Garbage Collection vs. Explicit Memory Management
I found this article here: Quantifying the Performance of Garbage Collection vs. Explicit Memory Management http://www.cs.umass.edu/~emery/pubs/gcvsmalloc.pdf In the conclusion section, it reads: Comparing runtime, space consumption, and virtual memory footprints over a range of benchmarks, we show that the runtime performance of the best-performing garbage collector is competitive with explicit memory management when given enough memory. In particular, when garbage collection has five times as much memory as required, its runtime performance matches or slightly exceeds that of explicit memory management. However, garbage collection’s performance degrades substantially when it must use smaller heaps. With three times as much memory, it runs 17% slower on average, and with twice as much memory, it runs 70% slower. Garbage collection also is more susceptible to paging when physical memory is scarce. In such conditions, all of the garbage collectors we examine here suffer order-of-magnitude performance penalties relative to explicit memory management. So, if my understanding is correct: if I have an app written in native C++ requiring 100 MB of memory, to achieve the same performance with a "managed" (i.e. garbage collector based) language (e.g. Java, C#), the app should require 5*100 MB = 500 MB? (And with 2*100 MB = 200 MB, the managed app would run 70% slower than the native app?) Do you know if current (i.e. latest Java VM's and .NET 4.0's) garbage collectors suffer the same problems described in the aforementioned article? Has the performance of modern garbage collectors improved? Thanks.
You seem to be asking two things: have GC's improved since that research was performed, and can I use the conclusions of the paper as a formula to predict required memory. The answer to the first is that there have been no major breakthroughs in GC algorithms that would invalidate the general conclusions: GC'ed memory management still requires significantly more virtual memory. If you try to constrain the heap size the GC performance drops significantly. If real memory is restricted, the GC'ed memory management approach results in substantially worse performance due to paging overheads. However, the conclusions cannot really be used as a formula: The original study was done with JikesRVM rather than a Sun JVM. The Sun JVM's garbage collectors have improved in the ~5 years since the study. The study does not seem to take into account that Java data structures take more space than equivalent C++ data structures for reasons that are not GC related. On the last point, I have seen a presentation by someone that talks about Java memory overheads. For instance, it found that the minimum representation size of a Java String is something like 48 bytes. (A String consists of two primitive objects; one an Object with 4 word-sized fields and the other an array with a minimum of 1 word of content. Each primitive object also has 3 or 4 words of overhead.) Java collection data structures similarly use far more memory than people realize. These overheads are not GC-related per se. Rather they are direct and indirect consequences of design decisions in the Java language, JVM and class libraries. For example: Each Java primitive object header1 reserves one word for the object's "identity hashcode" value, and one or more words for representing the object lock. The representation of a String has to use a separate "array of characters" because of JVM limitations. Two of the three other fields are an attempt to make the substring operation less memory intensive. 
The Java collection types use a lot of memory because collection elements cannot be directly chained. So for example, the overheads of a (hypothetical) singly linked list collection class in Java would be 6 words per list element. By contrast an optimal C/C++ linked list (i.e. with each element having a "next" pointer) has an overhead of one word per list element. 1 - In fact, the overheads are less than this on average. The JVM only "inflates" a lock following use & contention, and similar tricks are used for the identity hashcode. The fixed overhead is only a few bits. However, these bits add up to a measurably larger object header ... which is the real point here.
2,982,369
2,982,382
rand () for c++ with variables
int userHP = 100; int enemyHP = rand() % ((userHP - 50) - (userHP - 75)) + 1; Okay, for some reason this doesn't seem to work right; I'm trying to get 50-25 HP for enemies. Also, I'd rather it be a percentage... like int enemyHP = rand() % ((userHP / 50%) - (userHP / 75%)) + 1; but I'd like to stick with integers and not mess with floats or doubles... can someone help me?
Perform some algebra on this: rand() % ((userHP - 50) - (userHP - 75)) + 1; rand() % (userHP - 50 - userHP + 75) + 1; rand() % (userHP - userHP - 50 + 75) + 1; rand() % (-50 + 75) + 1; ...and you can quickly see what's going wrong. Why not use doubles?
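Following the answer's algebra, the expression collapses to rand() % 25 + 1, which is why the range looks wrong. One integer-only way to do what the question seems to want (25%-50% of the player's HP), sketched with hypothetical helper names of my own:

```cpp
#include <cassert>
#include <cstdlib>

// Random value in [lo, hi] inclusive, using only integers.
// rand() % (hi - lo + 1) yields [0, hi - lo]; adding lo shifts the range.
int randomInRange(int lo, int hi)
{
    return lo + std::rand() % (hi - lo + 1);
}

// 25%-50% of the player's HP without floating point:
// multiply first, then divide, to stay in integer math.
int enemyHPFor(int userHP)
{
    return randomInRange(userHP * 25 / 100, userHP * 50 / 100);
}
```

With userHP = 100 this produces values between 25 and 50 inclusive, matching the "50-25 hp" the question asks for.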
2,982,387
2,982,486
global std::unordered_map com server init problems
I want to have a static global std::unordered_map in the cpp of my entry point for my COM server. Relevant header code: typedef unordered_map<HWND,IMyInterface*> MyMapType; Relevant body: static MyMapType MyMap; void MyFunction(HWND hWnd, IMyInterface* pObj){ MyMap[hWnd] = pObj; } HINSTANCE g_hInstModule = NULL; BOOL WINAPI DllMain ( __in HINSTANCE hInstDLL, __in DWORD fdwReason, __in LPVOID lpvReserved ) { if( fdwReason == DLL_PROCESS_ATTACH ) { g_hInstModule = hInstDLL; return true; } else if( fdwReason == DLL_PROCESS_DETACH ) { return true; } return false; } MyCoClass::MyCoClass() { DRM_Refcount = 1; } HRESULT STDMETHODCALLTYPE MyCoClass::InitMyCoClass() { CoInitializeEx(NULL, COINIT_APARTMENTTHREADED); //replace with make window code MyFunction(hWnd,ISomeInterface); return S_OK; } The only way I can get this to work is by making a MyMapType pointer, creating an instance of MyMapType on the heap, and pointing at it with the global pointer. :/ WHY?
You need to modify DllMain to explicitly initialize the C runtime: http://support.microsoft.com/kb/814472 Search for "To Modify COM-based DLL". Static objects with constructors usually get initialized through the CRT entry point, which for .exes then calls your program's main function. With DLLs, you have to do the work yourself.
2,982,388
2,982,421
Translating C++'s sprintf format string to C#'s string.Format
I found the following C++ code (comments added myself): // frame_name is a char array // prefix is std::string // k is a for loop counter // frames is a std::vector string sprintf(frameName, "%s_%0*s.bmp", prefix.c_str(), k, frames[k].c_str()); I then try to translate it to C# // prefix is string // k is a for loop counter // frames is List<string> string frameName = string.Format("{0}_(what goes in here?).bmp", prefix, k, frames[k]); Basically, what would be the C# equivalent of the C++ format string "%s_%0*s.bmp"? Edit, @Mark Byers: I've tried your code and made a little test program: static void Main(string[] args) { List<string> frames = new List<string>(); frames.Add("blah"); frames.Add("cool"); frames.Add("fsdt"); string prefix = "prefix"; int n = 2; int k = 0; string frameName = string.Format("{0}_{1}.bmp", prefix, frames[k].PadLeft(n, '0')); Console.WriteLine(frameName); // outputs prefix_blah.bmp, should output prefix_00blah.bmp Console.ReadLine(); } It's not padding for some reason. Edit: Got it working; won't pad if n = 2.
To pad a string with zeros use string.PadLeft: frames[k].PadLeft(n, '0') In combination with string.Format: int n = 15; // Get this from somewhere. string frameName = string.Format("{0}_{1}.bmp", prefix, frames[k].PadLeft(n, '0')); Note that I have changed k to n, as I assume that this is a bug in the original code. I think it's unlikely that the length of the padding on the file name was meant to increase by one in each iteration of the loop.
2,982,447
2,982,450
simple question on C
I have this snippet of code: char *str = "123"; if(str[0] == 1) printf("Hello\n"); Why don't I get my "Hello"? Thanks in advance! How exactly does the compiler do the comparison if(str[0] == 1)?
You want to do this: if (str[0] == '1') ... The difference is that you are comparing str[0] to the number 1, while my code above is comparing str[0] to the character '1' (which has ASCII value 49). Not all programming languages treat characters and numbers interchangeably in this way, but C does. Check out ASCII for more information about how computers map numbers to characters.
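The distinction can be made concrete with a tiny sketch (helper name is mine, and the numeric values assume an ASCII execution character set):

```cpp
#include <cassert>

// '1' is the character code of the digit (49 in ASCII),
// not the integer value 1, so we compare against the character.
bool firstCharIsDigitOne(const char* s)
{
    return s[0] == '1';
}
```

Comparing s[0] == 1 instead asks whether the first byte holds the control character with code 1, which is why the original condition is never true for the string "123".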
2,982,505
2,982,566
Did I implement this correctly?
I'm trying to implement line thickness as denoted here: start = line start = vector(x1, y1) end = line end = vector(x2, y2) dir = line direction = end - start = vector(x2-x1, y2-y1) ndir = normalized direction = dir*1.0/length(dir) perp = perpendicular to direction = vector(dir.x, -dir.y) nperp = normalized perpendicular = perp*1.0/length(perp) perpoffset = nperp*w*0.5 diroffset = ndir*w*0.5 p0, p1, p2, p3 = polygon points: p0 = start + perpoffset - diroffset p1 = start - perpoffset - diroffset p2 = end + perpoffset + diroffset p3 = end - perpoffset + diroffset I have implemented this like this: void OGLENGINEFUNCTIONS::GenerateLinePoly(const std::vector<std::vector<GLdouble>> &input, std::vector<GLfloat> &output, int width) { output.clear(); float temp; float dirlen; float perplen; POINTFLOAT start; POINTFLOAT end; POINTFLOAT dir; POINTFLOAT ndir; POINTFLOAT perp; POINTFLOAT nperp; POINTFLOAT perpoffset; POINTFLOAT diroffset; POINTFLOAT p0, p1, p2, p3; for(int i = 0; i < input.size() - 1; ++i) { start.x = input[i][0]; start.y = input[i][1]; end.x = input[i + 1][0]; end.y = input[i + 1][1]; dir.x = end.x - start.x; dir.y = end.y - start.y; dirlen = sqrt((dir.x * dir.x) + (dir.y * dir.y)); ndir.x = dir.x * (1.0 / dirlen); ndir.y = dir.y * (1.0 / dirlen); perp.x = dir.x; perp.y = -dir.y; perplen = sqrt((perp.x * perp.x) + (perp.y * perp.y)); nperp.x = perp.x * (1.0 / perplen); nperp.y = perp.y * (1.0 / perplen); perpoffset.x = nperp.x * width * 0.5; perpoffset.y = nperp.y * width * 0.5; diroffset.x = ndir.x * width * 0.5; diroffset.y = ndir.y * width * 0.5; // p0 = start + perpoffset - diroffset //p1 = start - perpoffset - diroffset //p2 = end + perpoffset + diroffset // p3 = end - perpoffset + diroffset p0.x = start.x + perpoffset.x - diroffset.x; p0.y = start.y + perpoffset.y - diroffset.y; p1.x = start.x - perpoffset.x - diroffset.x; p1.y = start.y - perpoffset.y - diroffset.y; p2.x = end.x + perpoffset.x + diroffset.x; p2.y = end.y + perpoffset.y + diroffset.y; 
p3.x = end.x - perpoffset.x + diroffset.x; p3.y = end.y - perpoffset.y + diroffset.y; output.push_back(p0.x); output.push_back(p0.y); output.push_back(p1.x); output.push_back(p1.y); output.push_back(p2.x); output.push_back(p2.y); output.push_back(p3.x); output.push_back(p3.y); } } but right now the lines look perpendicular and wrong; it should be giving me quads to render which is what I'm rendering, but the points it is outputting are strange. Have I done this wrong? Thanks
You are calculating perp incorrectly. It should be (y, -x), not (x, -y). I don't know if that's the only bug. This one just jumped out at me. As an aside, I strongly recommend that you define a useful vec2 type, with useful helpers like: vec2 perp(vec2 v) { return vec2(v.y, -v.x); } That way, your code will look almost the same as your pseudocode. Manipulating x and y individually is much more error-prone and harder to read. It is quite simple to build a basic class for this purpose, though you might be better off finding a third-party implementation to avoid mistakes like the above one. Most game/graphics/physics engines provide a bunch of useful types and functions.
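The vec2 helper suggested in the answer can be sketched as follows (a minimal illustration, not a full vector class; a perpendicular vector is easy to sanity-check because its dot product with the original is zero):

```cpp
#include <cassert>

// Minimal 2-D vector type, just enough for the perpendicular helper.
struct vec2 {
    float x, y;
};

// Rotate 90 degrees: (x, y) -> (y, -x), as the answer suggests.
vec2 perp(vec2 v) { return vec2{v.y, -v.x}; }

// Dot product; perpendicular vectors have a dot product of zero.
float dot(vec2 a, vec2 b) { return a.x * b.x + a.y * b.y; }
```

With helpers like these, the line-thickness code reads almost exactly like the pseudocode in the question, and mistakes such as (x, -y) become much harder to make.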
2,982,514
2,982,655
Boost.Asio: The difference between async_read and async_receive
What's the difference between async_read and async_receive?
async_receive is a function that just receives into a buffer, but may not receive the amount you asked for. (It'll be equal or less, never more.) async_read, however, will always receive the amount you asked for, as it states: This function is used to asynchronously read a certain number of bytes of data from a stream. The function call always returns immediately. The asynchronous operation will continue until one of the following conditions is true: The supplied buffers are full. That is, the bytes transferred is equal to the sum of the buffer sizes. An error occurred. The only thing the page is a bit vague on is what async_read does if it doesn't get that many bytes, and the connection closes gracefully. (Does that count as "error"?) This can probably be determined with a quick test. (async_receive, however, would just give you what it got.)
2,982,592
2,982,599
Does a no-op "do nothing" function object exist in C++(0x)?
I realize this is a ludicrous question for something that takes less than 2 seconds to implement. But I vaguely remember reading that one was introduced with the new standard. I grep'ed VC10's headers and came up with nothing. Can you help? It's bugging me! :) edit: On second thought, the new functor I was remembering was probably the unrelated std::default_deleter.
You could always write a no-op lambda: []{}
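For illustration, here is how the empty lambda slots into a callback-taking API (the wrapper function is a hypothetical example of mine, not from the standard library):

```cpp
#include <cassert>
#include <functional>

// A stateless lambda converts to std::function (or even to a plain
// function pointer), so []{} works anywhere a no-op callback is needed.
void runCallback(const std::function<void()>& cb) { cb(); }
```

Calling runCallback([]{}) simply does nothing, which is exactly the behavior the question asks about.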
2,982,660
2,982,681
problem with template inheritance
I'm trying to understand why I get an error on this code (the error is under the g++ Unix compiler; VS compiles OK): template<class T> class A { public: T t; public: A(const T& t1) : t(t1) {} virtual void Print() const { cout<<*this<<endl;} friend ostream& operator<<(ostream& out, const A<T>& a) { out<<"I'm "<<typeid(a).name()<<endl; out<<"I hold "<<typeid(a.t).name()<<endl; out<<"The inner value is: "<<a.t<<endl; return out; } }; template<class T> class B : public A<T> { public: B(const T& t1) : A<T>(t1) {} const T& get() const { return t; } }; int main() { A<int> a(9); a.Print(); B<A<int> > b(a); b.Print(); (b.get()).Print(); return 0; } This code is giving the following error: main.cpp: In member function 'const T& B::get() const': main.cpp:23: error: 't' was not declared in this scope It did compile when I changed the code of B to this: template<class T> class B : public A<T> { public: B(const T& t1) : A<T>(t1) {} const T& get() const { return A<T>::t; } }; I just can't understand what the problem with the first code is... It doesn't make sense that I really need to write "A<T>::" every time...
You can also use this->t to access the base class template member. In B::get(), the name t is not dependent on the template parameter T, so it is not a dependent name. Base class A<T> is obviously dependent on the template parameter T and is thus a dependent base class. Nondependent names are not looked up in dependent base classes. A detailed description of why this is the case can be found in the C++ FAQ Lite.
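A stripped-down sketch of both fixes side by side (class names are mine, chosen to mirror the question):

```cpp
#include <cassert>

template <class T>
struct Base {
    T t;
    explicit Base(const T& v) : t(v) {}
};

template <class T>
struct Derived : Base<T> {
    explicit Derived(const T& v) : Base<T>(v) {}
    // Either spelling makes the name dependent on T, so lookup is
    // deferred to instantiation and finds the inherited member:
    const T& viaThis() const { return this->t; }
    const T& viaBase() const { return Base<T>::t; }
};
```

Writing plain t in either accessor fails under conforming two-phase lookup for exactly the reason the answer describes: nondependent names are not looked up in dependent base classes.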
2,982,919
2,982,946
Loading and saving a class to a binary file
I don't know if this is possible, but I have a class and I've made an instance of it. I also put things in it. It has vectors and other things. I was wondering if I could save its contents (the instance) to a binary file, then reload it and cast it back in from the file. Thanks
Yes, sometimes, kinda... Serialization is a tricky problem. Don't solve it yourself (i.e. don't reinvent the wheel... plenty of smart people have already done this). What you've described works in a constrained environment: Your reading and writing machines have the same endianness. Your class contains data only within its footprint (no pointers or objects with pointers). That isn't enough for the real world: the real world usually needs something better, usually wants backward compatibility across changes, and usually can't anticipate hardware changes. You probably want to look into different serialization schemes. They have their own pluses and minuses, which you'll find plenty of information detailing on StackOverflow. To get you started, look into Google's protocol buffers, Boost serialization and XML.
2,982,985
2,983,012
Algorithm to zoom into mouse(OpenGL)
I have an OpenGL scene with a top left coordinate system. When I glScale it zooms in from (0,0) the top left. I want it to zoom in from the mouse's coordinate (relative to the OGL frame). How is this done? Thanks
I believe this can be done in four steps: Find the mouse's x and y coordinates using whatever function your windowing system (i.e. GLUT or SDL) has for that, and use gluUnProject to get the object coordinates that correspond to those window coordinates Translate by (x,y,0) to put the origin at those coordinates Scale by your desired vector (i,j,k) Translate by (-x,-y,0) to put the origin back at the top left
2,983,066
2,983,069
Rationale behind introducing protected access specifier
What was the rationale behind introducing protected access specifier in C++. An example would be helpful.
The protected access level is used when classes need to work together with their inheritors. For example, imagine an abstract Shape class that can report its area to the outside world. Different shapes, such as triangles, squares, and circles, are described differently (angle, side, radius) and calculate their areas differently. The Shape class might have a public getArea() method that returns a private variable holding the area. The best way to set this variable would be a protected method called setArea(double) which would be called by the child classes. Thus, Circle would call setArea(PI * radius * radius), Square would call setArea(side * side), etc. Note that this is not necessarily a good design (but it's a great example of protected)
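The Shape example from the answer can be sketched directly (a minimal illustration; the class shapes and method names follow the answer's description):

```cpp
#include <cassert>

// setArea is protected: subclasses may set the privately stored
// area, but outside code can only read it through getArea().
class Shape {
public:
    double getArea() const { return area; }
protected:
    void setArea(double a) { area = a; }
private:
    double area = 0.0;
};

class Square : public Shape {
public:
    explicit Square(double side) { setArea(side * side); }
};
```

Code outside the hierarchy cannot call setArea at all; a statement like s.setArea(1.0) on a Square fails to compile, which is exactly the access level protected provides.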
2,983,091
2,983,101
Templates and function overloading
If function overloading and templates serve more or less the same purpose, which one should we go for, templates or function overloading, and what are the corresponding benefits?
With overloaded functions, you have to explicitly write out each overload: int max(int x, int y) { return x > y ? x : y; } long max(long x, long y) { return x > y ? x : y; } char max(char x, char y) { return x > y ? x : y; } // etc. This is tedious, but can be beneficial if the function body needs to be different based on the type. Templates are nice when the same source code can be used for any type. You specify the pattern, and the compiler generates the expansions as needed: // Can be used with any type that supports ">". template<typename T> T max(T x, T y) { return x > y ? x : y; }
2,983,146
2,983,211
Class templates and template class
Is there a difference between a class template and template class. If so what is it?
When both terms are used there is a very subtle difference. It is more linguistic than semantic, it depends on which word you are modifying. In short, a class template is a particular kind of template. Templates can define either classes or functions. A class template is a template that defines a class. See the difference: template <typename T> class SomeClass {...}; // this is a class template template <typename T> int some_function(T&) {...} // this is a function template A template class is a particular kind of class. There are many kinds of classes, and in particular, template classes are those defined using a class template. Contrast: SomeClass sc; // SomeClass is an ordinary (non-template) class SomeClass<int> sc; // SomeClass<int> is a template class From Stroustrup's C++ Glossary: template class - class parameterized by types, values, or templates. The template arguments necessary to identify the class to be generated for the class template must be provided where a template class is used. For example "vector<int> v;" generates a vector of ints from the vector template. See also template. TC++PL 13.2, D&E 15.3. Both expressions are used in Stroustrup's book The C++ Programming Language, and the ISO/IEC C++ standard until 1998. Note: As discussed in the comments below, it seems that C++03 doesn't use the term "template class" anymore (although I don't have a copy of it), presumably to reduce confusion. As I said before, they are fundamentally the same thing, it is just a linguistic difference: in the templates context you refer to a particular kind of template or in the classes context you refer to a particular kind of class. If you just stick to "class template", you won't lose anything. More food for thought: What is the difference between a template class and a class template? Is there a difference between a function template and a template function, or between a class template and a template class? — link suggested by Josh Haberman
2,983,182
2,983,185
New, delete, malloc, free
new and delete are said to be preprocessors while malloc and free are functions. What is meant by new and delete being preprocessors?
new and delete are C++ operators (like +, (), etc.) whereas malloc and free are (C) functions. Some operators (including new and delete) can be overloaded.
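Since new and delete are operators, a class can overload them. A sketch of what that looks like (the allocation-counting class is a hypothetical example of mine, purely for illustration; a production version would also throw std::bad_alloc on failure):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <new>

// Overloading operator new/delete for one class,
// here just to count live allocations.
struct Tracked {
    static int liveCount;
    static void* operator new(std::size_t n) {
        ++liveCount;
        return std::malloc(n);
    }
    static void operator delete(void* p) {
        --liveCount;
        std::free(p);
    }
};
int Tracked::liveCount = 0;
```

malloc and free, by contrast, are plain functions: they only allocate raw memory and never run constructors or destructors, which is the practical reason C++ code prefers new and delete.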
2,983,264
2,983,323
Will this SQL cause any problems?
I'm sure everyone knows the joys of concurrency when it comes to threading. Imagine the following scenario on every page-load on a noobily set up MySQL db: UPDATE stats SET visits = (visits+1) If a thousand users load the page at the same time, will the count cause any issues? Is this the table-locking/row-locking mechanism? Which one does MySQL use?
You have two potential problems: Will you get the right answer? Will you get unreasonable locking: will your whole app go very slow or even deadlock? The right answer depends upon whether two users could compute (visit + 1) on the same value of visit. We can imagine that the database needs to do these actions: Read visit count Add one to visit count Write visit count So if two users are working at the same time, could they both read the same old value? That's where the isolation level of the transaction comes into play. As observed by Artefacto, the default isolation level is repeatable read, and hence we get: Grab a lock Read, increment, Write Release lock Rather than Read (two users, same old value) Increment First user Grab Lock, second waits Write Release, second user grabs lock Write (same value!) Release However the level of contention could be quite high, and very much depends on the scope of your transaction. Suppose you have: Begin transaction Do the visit increment stuff Do some serious business work End transaction <==== visit lock is held until here Then you will get a lot of folks waiting for that visit lock. We don't know the overall structure of your app, or whether you are using large transaction scopes like this. Very likely you are getting a default behaviour of a single transaction per SQL statement, in which case your contention is just for the duration of the SQL statement, pretty much as you would be hoping. Other folks might not be so fortunate: there are environments (e.g. Java EE Servlets) where implicit transaction scopes can be created by the infrastructure, and then the longer-lived transactions I show above happen by default. 
Worse is the possibility that, if your code is not written consistently (with the visit increment always first, or always last), you can get: Begin transaction Do the visit increment stuff Do some serious business work End transaction <==== visit lock and business locks held until here and Begin transaction Do some other serious business work Do the visit increment stuff End transaction <==== visit lock and maybe the same business locks held until here And bingo: Deadlock For high-volume sites you could consider writing a "Visit" event to a queue, and having a daemon listening for those events and maintaining the count. More complex, but possibly fewer contention issues.
2,983,270
2,983,334
Using a function with variable argument strings
I was playing around a bit with functions with variable arguments, and decided to make a function to create vectors with the arguments. My function for creating an int vector worked... vector<int> makeIntVector(int numArgs, ...) { va_list listPointer; va_start(listPointer, numArgs); vector<int> made; for(int a = 0; a < numArgs; a++) made.push_back(va_arg(listPointer, int)); va_end(listPointer); return made; } but not my function for creating a string vector: vector<string> makeStringVector(int numArgs, string something, ...) { va_list listPointer; va_start(listPointer, something); vector<string> made; for(int a = 0; a < numArgs; a++) made.push_back(va_arg(listPointer, string)); va_end(listPointer); return made; } which crashes the program. What am I doing wrong?
Attempting to pass a string as a variadic parameter gives undefined behavior: "If the argument has a non-POD class type (clause 9), the behavior is undefined." (§5.2.2/7 of the standard).
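One working variant of the question's function, sketched under the assumption that callers pass C string literals (a POD pointer type) rather than std::string objects:

```cpp
#include <cassert>
#include <cstdarg>
#include <string>
#include <vector>

// Non-POD types such as std::string cannot travel through "...",
// but const char* can; convert to std::string on the receiving side.
std::vector<std::string> makeStringVector(int numArgs, ...)
{
    va_list args;
    va_start(args, numArgs);
    std::vector<std::string> made;
    for (int i = 0; i < numArgs; ++i)
        made.push_back(va_arg(args, const char*));  // POD pointer: OK
    va_end(args);
    return made;
}
```

This also explains why the int version in the question worked: int is a POD type, so va_arg can extract it safely.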
2,983,278
2,983,333
Problem Initializing an Array Of Structs
I am trying to initialize the following array of the following struct, but my code isn't compiling. Can anybody help me out? The struct/array: struct DiningCarSeat { int status; int order; int waiterNum; Lock customerLock; Condition customer; DiningCarSeat(int seatNum) { char* tempLockName; sprintf(tempLockName, "diningCarSeatLock%d", seatNum); char* tempConditionName; sprintf(tempConditionName, "diningCarSeatCondition%d", seatNum); status = 0; order = 0; waiterNum = -1; customerLock = new Lock(tempLockName); customer = new Condition(tempConditionName); } } diningCarSeat[DINING_CAR_CAPACITY]; The relevant errors: ../threads/threadtest.cc: In constructor `DiningCarSeat::DiningCarSeat(int)': ../threads/threadtest.cc:58: error: no matching function for call to `Lock::Lock()' ../threads/synch.h:66: note: candidates are: Lock::Lock(const Lock&) ../threads/synch.h:68: note: Lock::Lock(char*) ../threads/threadtest.cc:58: error: no matching function for call to `Condition::Condition()' ../threads/synch.h:119: note: candidates are: Condition::Condition(const Condition&) ../threads/synch.h:121: note: Condition::Condition(char*) ../threads/threadtest.cc:63: error: expected primary-expression before '.' token ../threads/threadtest.cc:64: error: expected primary-expression before '.' token ../threads/threadtest.cc: At global scope: ../threads/threadtest.cc:69: error: no matching function for call to `DiningCarSeat::DiningCarSeat()' ../threads/threadtest.cc:51: note: candidates are: DiningCarSeat::DiningCarSeat(const DiningCarSeat&) ../threads/threadtest.cc:58: note: DiningCarSeat::DiningCarSeat(int) Thanks in advance!
There are multiple issues here: These should both be pointers, since you are newing them in your constructor: Lock customerLock; Condition customer; You don't declare a type for seatNum: DiningCarSeat(seatNum) { You don't allocate memory for tempLockName or tempConditionName: char* tempLockName; sprintf(tempLockName, "diningCarSeatLock%d", seatNum); char* tempConditionName; sprintf(tempConditionName, "diningCarSeatCondition%d", seatNum);
2,983,289
2,983,296
syntax for calling virtual functions outside of the class?
what is the syntax for defining virtual functions outside the class body? class random{ public: random(int i = 0); virtual ~random(){}; virtual void print() const; protected: int id; }; is it? virtual void random::print() { } ?
Just: struct foo { virtual void bar(void) const; }; void foo::bar(void) const { } virtual only goes on the declaration.
2,983,315
2,983,336
Multiple Socket Connections
I need to write a server which accepts connections from multiple client machines, keeps track of connected clients and sends individual clients data as necessary. Sometimes, all clients may be contacted at once with the same message; other times, it may be one individual client or a group of clients. Since I need confirmation that the clients received the information and don't want to build an ACK structure for a UDP connection, I decided to use a TCP streaming method. However, I've been struggling to understand how to maintain multiple connections and keep them idle. I seem to have three options: use fork for each incoming connection to create a separate child process, use pthread_create to create an entire new thread for each connection, or use select() to wait on all open socket IDs for a connection. Recommendations as to how to attack this? I've begun working with pthreads, but since performance will likely not be an issue, multicore processing is not necessary and perhaps there is a simpler way.
Child processes are not nice, because you just move the goalpost. You will need to make your child processes communicate between each other, then you are back to the same problem. It is possible to use threads, but you will have other problems if your threads keep blocking on socket receive. select() (or poll() on newer (POSIX) Unixes) is still the best solution. You tell either select() or poll() which sockets or descriptors you want to monitor for events (probably just input (read) events is enough for you), then you do the read only on that socket or descriptor that was flagged by select()/poll(). It is guaranteed that recv() will not block.
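The core select() pattern can be sketched in isolation. This is a minimal POSIX illustration (using a pipe in place of a socket, since the readiness logic is identical for any descriptor; the helper name is mine):

```cpp
#include <cassert>
#include <sys/select.h>
#include <unistd.h>

// Block until fd is readable; afterwards a read() will not block.
// A real server would put every client socket into the fd_set and
// loop over FD_ISSET to find the ones that are ready.
bool waitReadable(int fd)
{
    fd_set readSet;
    FD_ZERO(&readSet);
    FD_SET(fd, &readSet);
    // nfds is the highest descriptor + 1; a NULL timeout blocks forever.
    int ready = select(fd + 1, &readSet, nullptr, nullptr, nullptr);
    return ready == 1 && FD_ISSET(fd, &readSet);
}
```

In a full server, the listening socket also goes into the set, so a single loop handles both new connections and data from existing clients without threads or child processes.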
2,983,341
2,983,381
What is a way to fill a multiline editbox with line of text
What I'm after is a greyed-out editbox you see at the bottom of some programs: a list of results. I'm having problems getting the text properly formatted. So, starting from scratch, how is it usually done, the filling of the half-page-sized editbox with text? One big long string with line breaks? Results->Text = System::Convert::ToString(Var); That seems to be the only way I can input to the editbox, but I can't seem to build a multiline string that that line will accept.
.NET or WinAPI? Don't add too many tags. In a Win32 dialog template, I use the following styles to create such an edit box: Auto HScroll false, Auto VScroll false, Horizontal Scroll false, Multiline true, Read Only true, Vertical Scroll true. You can add new information by concatenating new text with the existing text. If you want a new line, add "\r\n" (Win32 edit controls expect CR-LF, not just "\n"). Long lines are wrapped. To scroll down automatically, post a WM_VSCROLL message with the SB_BOTTOM parameter to this text box.
2,983,383
2,983,465
destructor and copy-constructor calling..(why does it get called at these times)
I have the following code #include <iostream> using namespace std; class Object { public: Object(int id){ cout << "Construct(" << id << ")" << endl; m_id = id; } Object(const Object& obj){ cout << "Copy-construct(" << obj.m_id << ")" << endl; m_id = obj.m_id; } Object& operator=(const Object& obj){ cout << m_id << " = " << obj.m_id << endl; m_id = obj.m_id; return *this; } ~Object(){ cout << "Destruct(" << m_id << ")" << endl; } private: int m_id; }; Object func(Object var) { return var; } int main(){ Object v1(1); cout << "( a )" << endl; Object v2(2); v2 = v1; cout << "( b )" << endl; Object v4 = v1; Object *pv5; pv5 = &v1; pv5 = new Object(5); cout << "( c )" << endl; func(v1); cout << "( d )" << endl; delete pv5; } which outputs Construct(1) ( a ) Construct(2) 2 = 1 ( b ) Copy-construct(1) Construct(5) ( c ) Copy-construct(1) Copy-construct(1) Destruct(1) Destruct(1) ( d ) Destruct(5) Destruct(1) Destruct(1) Destruct(1) I have some issues with this. Firstly, why does Object v4 = v1; call the copy constructor and produce Copy-construct(1) after the printing of ( b )? Also, after the printing of ( c ) the copy constructor is again called twice; I'm not certain how this function works to produce that: Object func(Object var) { return var; } And just after that, Destruct(1) gets called twice before ( d ) is printed. Sorry for the long question; I'm confused by the above.
Object v1(1); // Construct(1) Regular constructor call for an automatic stack variable (destroyed at the end of the function). cout << "( a )" << endl; // ( a ) Object v2(2); // Construct(2) Another constructor call. v2 = v1; // 2 = 1 The assignment operator is called because v2 was already created (we called the constructor for it) and now we're assigning one existing object to another. cout << "( b )" << endl; // ( b ) Object v4 = v1; // Copy-construct(1) The copy constructor is called here because Object v4 is still not created, so we create it as a copy of v1. The assignment is taken here to mean the same as if you did Object v4(v1). Object *pv5; pv5 = &v1; pv5 = new Object(5); // Construct(5) Call the constructor for a heap object (destroyed explicitly with delete). cout << "( c )" << endl; // ( c ) func(v1); // Copy-construct(1) Copy-construct(1) Destruct(1) Destruct(1) The copy constructor is first called to copy v1 to the parameter var. It is called again to create a copy of var as the return value to the caller. var is destroyed as it's popped off the stack when exiting the function. The return value is destroyed at the end of the full expression func(v1). cout << "( d )" << endl; // ( d ) delete pv5; // Destruct(5) The object pointed at by pv5 is manually destroyed. } // end of main // Destruct(1) Destruct(1) Destruct(1) The automatic variables v1, v2, v4 (all having copied the id of v1 from either assignment or copy construction) are popped off the stack and the destructor is called for each.
2,983,418
2,983,509
C++ corrupted my thinking; how do I trust the automatic garbage collector?
I used to program mainly in C/C++, which had me dealing with pointers and memory management daily. These days I'm trying to develop using other tools, such as Java, Python and Ruby. The problem is that I keep thinking in C++ style: I write code the way it is usually written in C++ in almost every programming language, and the biggest problem is memory management; I keep writing bad code using references in Java and just get as close as I can to the C++ style. So I need two things here. One is to learn to trust the garbage collector, say by seeing benchmarks and proofs that it really works in Java, and to know what I should never do in order to make my code the best it can be. The second thing is knowing how to write code in the other languages. I mean, I know what to do; I just never write code the way most Java or Python programmers usually do. Are there any books for C++ programmers to introduce me to the writing conventions? (By the way, forgive me for my English mistakes.)
One difference to bear in mind is that in C++ the destructor can be used to clean up any kind of resource, not just memory (i.e. RAII). In Java you have to explicitly close files, sockets, datastore connections etc in a try - finally block. If you put resource cleanup code in a Java finalize method then it may get called at some indeterminate time in the future, or never, so is not recommended. So in some ways this puts a bigger burden on the programmer, not less. Python is somewhere in between - you can use the 'with' statement to handle automatic cleanup for most resources. The two problems in C++ memory management are memory leaks and trying to use an object that has already been destroyed. As others have pointed out you can also get memory leaks in Java (and Python) if you keep a reference to an object that you no longer need, which in turn may have references to other objects. Memory leaks in Java may be less frequent but when they do occur they can be much bigger than in C++. Judicious use of weak references can help, as well as assigning null to variables that are no longer needed. However this leads to the second problem - if you then try to use the variable you will get a NullPointerException. This is more helpful than the segmentation fault you would probably get in C++, but is still an issue. So all the things you learnt about memory management in C++ still apply in Java, but you have to do it for other resources too.
2,983,558
2,983,579
Reuse C++ Header files
I have a Visual C++ solution with 2 projects AlgorithmA & AlgorithmB and both share a common header file RunAlgo.h with the class declaration. Each project in the solution has its own unique implementation for the header file. I am trying to compile a DLL out of the common header file RunAlgo.h and add reference to this DLL in the projects AlgorithmA & AlgorithmB. I have then included separate RunAlgo.cpp definition file in both my projects. The problem is that I am getting linker errors while compiling the new DLL project which has only the header file. So, the question is Can a header file with only class declaration be compiled into a DLL (Similar to class library containing an Interface in C#)? For the above scenario, is there a better approach to reuse the common Header file among projects? Should the above method work (re-check my code?)
1 & 3: No, that doesn't make sense in C++. Libraries (dynamic or otherwise) are only used during linking. During compilation declarations must be visible to the compiler in source-code form. This is why, for example, you have to explicitly #include standard library headers in addition to linking against the standard library. 2: What you're already doing is basically the only solution. Put the common header files in their own directory, and add that directory to the include path of each of the two projects.
2,983,570
2,983,598
Can you recommend a full-text search engine?
Can you recommend a full-text search engine? (Preferably open source) I have a database of many (though relatively short) HTML documents. I want users to be able to search this database by entering one or more search words in my C++ desktop application. Hence, I’m looking for a fast full-text search solution to integrate with my app. Ideally, it should: Skip common words, such as the, of, and, etc. Support stemming, i.e. search for run also finds documents containing runner, running and ran. Be able to update its index in the background as new documents are added to the database. Be able to provide search word suggestions (like Google Suggest) Have a well-documented API To illustrate, assume the database has just two documents: Document 1: This is a test of text search. Document 2: Testing is fun. The following words should be in the index: fun, search, test, testing, text. If the user types t in the search box, I want the application to be able to suggest test, testing and text (Ideally, the application should be able to query the search engine for the 10 most common search words starting with t). A search for testing should return both documents. Other points: I don't need multi-user support I don't need support for complex queries The database resides on the user's computer, so the indexing should be performed locally. Can you suggest a C or C++ based solution? (I’ve briefly reviewed CLucene and Xapian, but I’m not sure if either will address my needs, especially querying the search word indexes for the suggest feature).
I have used the dtSearch module with great success. They have a DLL that you can use with your application to index just about anything, and it does more than what you ask. Note: it is not free. I do not see in the question that you asked for a free one, so I am writing about my favourite. dtSearch inspired me to create an indexer for my language, Ellinika, for my sites, because I did not find what I was looking for in my language. There are also modules just for stemming if you only need to find suggestions for your words; I got a reference from here: http://tartarus.org/~martin/PorterStemmer/ For example, if you have a database like MS SQL that already does some basic indexing, and someone searches for a word and you find nothing, you can do the stemming on that word yourself and search again...
2,983,708
3,028,901
Qt not displaying PNG images on Linux
I have a software written using Qt which is meant to work on both Windows and Linux. I use PNG images as icons for my QActions, embedded in a resource. I have the following tree directory: / resources/ icons.qrc image.png Here is the content of icons.qrc: <RCC> <qresource prefix="/resources" lang="fr"> <file alias="image.png">image.png</file> </qresource> </RCC> I declare my QIcon like that: QIcon(":/resources/image.png") Under Windows, it works well but on Linux (I only tried on Ubuntu 10.4 so far), the images aren't displayed. Is there anything special I have to do for this to work ? Is this a configuration problem ? Thank you.
Actually, I found out what was wrong. It had nothing to do with being on Linux or Windows; it was due to the locale. My Linux system is in English while my Windows one is in French. Since the resources had the lang="fr" flag, nothing was shown on non-French OSes... A stupid mistake!
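For reference (assuming the icon is not meant to vary by language), a locale-independent icons.qrc simply drops the lang attribute:

```xml
<RCC>
    <!-- No lang attribute: the resource is available under every locale -->
    <qresource prefix="/resources">
        <file alias="image.png">image.png</file>
    </qresource>
</RCC>
```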
2,983,819
2,983,841
How to initialize an array of classes?
I have this class constructor: Pairs (int Pos, char *Pre, char *Post, bool Attach = true); How can I initialize an array of Pairs classes? I tried: Pairs Holder[3] = { {Input.find("as"), "Pre", "Post"}, {Input.find("as"), "Pre", "Post"}, {Input.find("as"), "Pre", "Post"} }; Apparently it's not working. I also tried to use () brackets instead of {}, but the compiler keeps moaning all the time. Sorry if it is a lame question; I googled quite hard but wasn't able to find an answer :/
Call the constructor explicitly: Pairs Holder[3] = { Pairs(Input.find("as"), "Pre", "Post"), Pairs(Input.find("as"), "Pre", "Post"), Pairs(Input.find("as"), "Pre", "Post") };
2,984,148
2,984,218
Where does C++ really shine?
I know C and Python, and I'm moving toward another language for learning purposes. My problem is that I like to learn things with something to do (for example contributing to some project or do something amazing, not boring plain algebra). I would like to hear suggestions about the fields in which C++ shines and where I can find interesting programming with C++. (For fields I mean networking/GUI programming/algorithms/games ...) I confirm that I'm interested in open source projects/development.
I will share which fields I use the language in and why I use the language over others. Perhaps you can decide if my reasons qualify as 'shines'. Which fields: Device drivers, file system drivers, GUI development, algorithm modules, protocols and communications, application frameworks, data manipulation, storage handlers, system emulation. Why: I want to write code that is portable across a broad scale of architectures. From small 16-bit embedded systems to large enterprise platforms. This is because I dislike solving the same problems over and over again. C++ compilers are available for more platforms I target than any other OO language. I do lose this capability on very very small (i.e. 8-bit) systems but I'm not spending much time in that space anymore. System code can be written (i.e. device drivers, FS drivers, etc.) as those require a language that compiles to native code. With careful selection of language features and libraries used it can be nearly as compact as C. Broad usage among compiled languages so there is peer experience to draw upon as well as available libraries and source code. Deterministic and predictable behavior over long execution runs (months to years) since the memory management scheme may be carefully selected for the application's needs. Acceptability to my clients. They are assured that the work is maintainable since a significant pool of developers exist in the market. I hope that helped a little.
2,984,166
2,984,246
How to put/get INT to/from a WCHAR array?
How can I put an INT type variable into a WCHAR array? Thanks EDIT: Sorry for the short question. Yes, we can cast INT to a WCHAR array using WCHAR*, but when we are retrieving back the result (WCHAR[] to INT), I just realized that we need to read a size of 2 from the WCHAR array, since INT is 4 bytes, which is equal to 2 WCHARs. WCHAR arData[20]; INT iVal = 0; wmemcpy((WCHAR*)&iVal, arData, (sizeof(INT))/2); Is this the safest way to retrieve back the INT value from a WCHAR array?
Technically, the way you do it is unsafe due to strict aliasing and alignment. The safest and the most portable way would be to read chars one by one and combine them with bit shifts. While your code would work on a Windows PC, don't expect it to be portable or work for all compilers and compiler settings. Basic example (can be improved to be more portable with regard to integer sizes, byte order, etc): WCHAR arData[20]; ... // Read little-endian 32-bit integer from two 16-bit chars: INT iVal = arData[0] | arData[1] << 16;
2,984,206
2,984,212
Is iterator being invalidated?
Is the iterator invalidated after: string b = "Some string"; auto beg_ = b.begin(); auto end_ = b.end(); b.erase(beg_);
Yes, but erase returns a valid iterator you can use to continue in a loop: "For the remaining members, the function returns an iterator of member type string::iterator referring to the character that now occupies the position of the first character erased, or, if no such character exists, returns end()." Source: http://www.cplusplus.com/reference/string/string/erase/
2,984,211
2,990,232
Using custom coordinates with QGraphicsScene
I am experimenting with a WYSIWYG editor that allows a user to draw shapes on a page and the Qt graphics scene support seems perfect for this. However, instead of working in pixels I want all my QGraphicsItem objects to work in tenths of a millimetre but I don't know how to achieve this. For example: // Create a scene that is the size if an A4 page (2100 = 21cm, 2970 = 29.7cm) QGraphicsScene* scene = new QGraphicsScene(0, 0, 2100, 2970); // Add a rectangle located 1cm across, 1cm down, 5cm wide and 2cm high QGraphicsItem* item = scene->addRect(100, 100, 500, 200); ... QGraphicsView* view = new QGraphicsView(scene); setCentralWidget(view); Now, when I display the scene above I want the shapes to appear at correct size for the screen DPI. Is this simply a case of using QGraphicsView::scale or do I have to do something more complicated? Note that if I was using a custom QWidget instead then I would use QPainter::setWindow and QPainter::setViewport to create a custom mapping mode but I can't see how to do this using the graphics scene support.
QGraphicsView::scale should do the job, but I prefer setting the transform. It gives me much more control over how the scene is displayed, because I need things like rotation, flipping, etc. It also allows me to track what I did to the scene.
2,984,287
2,984,435
QTableWidget::itemAt() returns seemingly random items
I've just started using Qt, so please bear with me. When I use QTableWidget::itemAt(), it returns a different item from the one I get when I use currentItemChanged and click the same item. I believe it's necessary to use itemAt(), since I need to get the first column of whatever row was clicked. Some example code is below: MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent), ui(new Ui::MainWindow) { ui->setupUi(this); QList<QString> rowContents; rowContents << "Foo" << "Bar" << "Baz" << "Qux" << "Quux" << "Corge" << "Grault" << "Garply" << "Waldo" << "Fred"; for(int i =0; i < 10; ++i) { ui->tableTest->insertRow(i); ui->tableTest->setItem(i, 0, new QTableWidgetItem(rowContents[i])); ui->tableTest->setItem(i, 1, new QTableWidgetItem(QString::number(i))); } } //... void MainWindow::on_tableTest_currentItemChanged(QTableWidgetItem* current, QTableWidgetItem* previous) { ui->lblColumn->setText(QString::number(current->column())); ui->lblRow->setText(QString::number(current->row())); ui->lblCurrentItem->setText(current->text()); ui->lblCurrentCell->setText(ui->tableTest->itemAt(current->row(), current->column())->text()); } For the item at 1x9, lblCurrentItem displays "9" (as it should), whereas lblCurrentCell displays "Quux". Am I doing something wrong?
The Qt documentation says: QTableWidgetItem * QTableWidget::itemAt ( int ax, int ay ) const Returns the item at the position equivalent to QPoint(ax, ay) in the table widget's coordinate system, or returns 0 if the specified point is not covered by an item in the table widget. See also item(). So you should use item(row, column) instead: ui->lblCurrentCell->setText(ui->tableTest->item(current->row(), current->column())->text());
2,984,434
2,996,578
Small objects allocator
Has anybody used the SmallObjectAllocator from Modern C++ Design by Andrei Alexandrescu in a big project? I want to implement this allocator, but I need some opinions about it before using it in my project. I made some tests and it seems very fast, but the tests were made in a small test environment. I want to know how fast it is when there are lots of small objects (like events, smart pointers, etc.) and how much extra memory it uses.
I suggest you ask Rich Sposato. He has done extensive work on Loki's small object allocator, including testing and benchmarks.
2,984,441
2,984,452
When to use () with classes?
This is really starting to confuse the hell out of me. When do I use them, when don't I? For example I was reading a .cpp on linked lists whose class declaration was: struct CarPart { long PartNumber; char Partname[40]; double UnitPrice; CarPart *next; }; class ListOfParts { int size; public: CarPart *head; ListOfParts(); ~ListOfParts(); const int count() const; void insert( CarPart *item ); CarPart *retrieve( int pos ); }; With this code, why am I allowed to write ListOfParts *pPart = new ListOfParts(); CarPart *pCarPart = new CarPart; Declaring an instance of ListOfParts requires (), but not my CarPart? That's confusing me. When I asked a question before and people told me that such a declaration is a function that returns a ListOfParts object, but not the actual constructor. So I'm guessing this is still something different. What's happening here? PS: Am I correct to assume that the const to the right of count() means I cannot modify any values in count?
Declaring an instance of the ListOfParts class does not require () when allocating on the heap. Both forms are valid: ListOfParts *pPart1 = new ListOfParts(); ListOfParts *pPart2 = new ListOfParts; EDIT: As the commenters have pointed out, it makes a difference when initialising a POD type (however it's not relevant to your code sample). However, when declaring a stack variable or a static variable, it matters, because the form with () is the same as declaring a function. ListOfParts pPart1(); // a function prototype ListOfParts pPart2; // an object construction const to the right of count() means you cannot modify any values inside the current object in this function, which would be this->size and this->head (note, you can still change the object pointed to by head).
2,984,505
2,987,159
simulating atm communication without atm switch
Can anybody tell me how to make file descriptors behave like ATM nodes in the /dev directory? Since I don't have an ATM switch to test my program, I have to test with normal files. Is there any method to make special file descriptors that behave like ATM nodes?
You can write a dummy device driver that simulates the behavior that you expect from your ATM switch. This dummy driver would then provide a device driver node in /dev/atmXYZ. Writing a minimal linux driver is not much work. See Linux Device Drivers, Third Edition http://lwn.net/Kernel/LDD3/ for the details. The link points to a full copy of the book. I guess most work would be to figure out what behavior you expect from the switch and then to implement that correctly. It might turn out that its not worth the trouble.
2,984,693
2,984,731
Does Visual Studio 2008 use make utility?
I have checked in the build directory and have not found a makefile there. How does Visual Studio 2008 build the project? Does it use a makefile?
The NMAKE utility has been distributed with Visual C++ since back when it was called Microsoft C/C++ Optimizing Compiler, and is very similar to Unix make. Previous versions of the IDE actually used NMAKE makefiles, but this isn't true anymore. You can write NMAKE makefiles yourself if you want, but it sounds like you want to know what the IDE does. Starting with VS2010, the build system changes to MSBUILD, which bertelmonster mentioned. But not in VS2008. In VC++ 6.0, C++ projects have their own build engine integrated into msdev.exe. In VS2002 - VS2008, it's a separate tool, VCBUILD. But you can still invoke it via the main IDE, devenv.exe, see the /BUILD option, and devenv is the best way if you have inter-project dependencies in your solution.
2,984,706
2,984,735
Special parameters for texture binding?
Do I have to set up my gl context in a certain way to bind textures. I'm following a tutorial. I start by doing: #define checkImageWidth 64 #define checkImageHeight 64 static GLubyte checkImage[checkImageHeight][checkImageWidth][4]; static GLuint texName; void makeCheckImage(void) { int i, j, c; for (i = 0; i < checkImageHeight; i++) { for (j = 0; j < checkImageWidth; j++) { c = ((((i&0x8)==0)^((j&0x8))==0))*255; checkImage[i][j][0] = (GLubyte) c; checkImage[i][j][1] = (GLubyte) c; checkImage[i][j][2] = (GLubyte) c; checkImage[i][j][3] = (GLubyte) 255; } } } void initt(void) { glClearColor (0.0, 0.0, 0.0, 0.0); makeCheckImage(); glPixelStorei(GL_UNPACK_ALIGNMENT, 1); glGenTextures(1, &texName); glBindTexture(GL_TEXTURE_2D, texName); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, checkImageWidth, checkImageHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, checkImage); engineGL.current.tex = texName; } In my rendering I do: PolygonTesselator.Begin_Contour(); glEnable(GL_TEXTURE_2D); glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL); glBindTexture(GL_TEXTURE_2D, current.tex); if(layer[currentlayer].Shapes[i].Contour[c].DrawingPoints.size() > 0) { glColor4f( layer[currentlayer].Shapes[i].Color.r, layer[currentlayer].Shapes[i].Color.g, layer[currentlayer].Shapes[i].Color.b, layer[currentlayer].Shapes[i].Color.a); } for(unsigned int j = 0; j < layer[currentlayer].Shapes[i].Contour[c].DrawingPoints.size(); ++j) { gluTessVertex(PolygonTesselator.tobj,&layer[currentlayer].Shapes[i].Contour[c].DrawingPoints[j][0], &layer[currentlayer].Shapes[i].Contour[c].DrawingPoints[j][0]); } PolygonTesselator.End_Contour(); } glDisable(GL_TEXTURE_2D); } However it still renders the color and not the texture at all. 
I'd at least expect to see black or something, but it's as if the bind fails. Am I missing something? Thanks
It looks from that code like you don't set any UVs (texture coordinates). Edit: Does it make any difference using GL_MODULATE instead of GL_DECAL? (I am taking guesses here because I suspect the problem lies in code you haven't provided, or in gluTessVertex itself...)
2,984,729
2,984,816
C++ template overloading - wrong function called
template<typename T> T* Push(T* ptr); template<typename T> T* Push(T& ref); template<typename T, typename T1> T* Push(T1&& ref); I have int i = 0; Push<int>(i); But the compiler calls it ambiguous. How is that ambiguous? The second function is clearly the preferred match since it's more specialized. Especially since the T1&& won't bind to an lvalue unless I explicitly forward/move it. Sorry - i is an int. Otherwise, the question would make no sense, and I thought people would infer it since it's normally the loop iterator.
If i is an int, then the first isn't viable. Last two remain. Then, for deduction of i, the second and the third both yield the same function types for overload resolution (both int& as parameter). So you have to rely on partial ordering. However, partial ordering can't tell them apart. For a function call partial ordering context, only the parameters are used to determine an order (and the return type in your example is not considered), and any reference modifier is peeled off from them. So you will succeed deducing the parameter type from one against the other in both directions - both parameter types will be at least as specialized as the other parameters respectively. And neither has const applied, so neither is more specialized than the other. There is an issue report placeholder that aims at clarifying anything related to rvalue/lvalue reference difficulties during partial ordering. See this usenet question for details. If any of the two should be more specialized, I would say it should be the first one. After all, it accepts fewer arguments than the other one (the other one being a potential perfect forwarder). Especially since the T1&& won't bind to an lvalue unless I explicitly forward/move it. Actually, it will accept anything. Having a parameter of type T&& in a template will switch to the "perfect-forwarding-deduction-mode", which will deduce T to the type of the argument if it's an rvalue, and add an lvalue-reference modifier to the type of T if it's an lvalue. So if the argument is an lvalue, the resulting parameter type is T& && collapsed to T&, which accepts lvalues fine (just like in your case). On a second look, what you seem to be trying to do is to overload a function for taking objects by moving them. But this won't work because of the special deduction done for T&&.
Just erase the first function and write your code as template<typename T, typename T1> T* Push(T1&& ref) { /* for lvalues, T1 is U& and rvalues it is U, with U being the * argument type. */ T t1(std::forward<T1>(ref)); /* whatever needs to be done ... */ } This will move-construct t1 if the argument was an rvalue, and copy ref if the argument was an lvalue or if T doesn't have a move constructor. This is just an illustration, it may not be what you actually should do depending on your real use-case. I'm also not sure why you have two template parameter types here. I propose to get rid of the T, and say typename remove_reference<T1>::type * for the return type, instead. So that you can gain from argument deduction.
2,984,740
2,984,758
Error in creating template class
I found this vector template class implementation, but it doesn't compile on XCode. Header file: // File: myvector.h #ifndef _myvector_h #define _myvector_h template <typename ElemType> class MyVector { public: MyVector(); ~MyVector(); int size(); void add(ElemType s); ElemType getAt(int index); private: ElemType *arr; int numUsed, numAllocated; void doubleCapacity(); }; #include "myvector.cpp" #endif Implementation file: // File: myvector.cpp #include <iostream> #include "myvector.h" template <typename ElemType> MyVector<ElemType>::MyVector() { arr = new ElemType[2]; numAllocated = 2; numUsed = 0; } template <typename ElemType> MyVector<ElemType>::~MyVector() { delete[] arr; } template <typename ElemType> int MyVector<ElemType>::size() { return numUsed; } template <typename ElemType> ElemType MyVector<ElemType>::getAt(int index) { if (index < 0 || index >= size()) { std::cerr << "Out of Bounds"; abort(); } return arr[index]; } template <typename ElemType> void MyVector<ElemType>::add(ElemType s) { if (numUsed == numAllocated) doubleCapacity(); arr[numUsed++] = s; } template <typename ElemType> void MyVector<ElemType>::doubleCapacity() { ElemType *bigger = new ElemType[numAllocated*2]; for (int i = 0; i < numUsed; i++) bigger[i] = arr[i]; delete[] arr; arr = bigger; numAllocated*= 2; } If I try to compile as is, I get the following error: "Redefinition of 'MyVector::MyVector()'" The same error is displayed for every member function (.cpp file). In order to fix this, I removed the '#include "myvector.h"' on the .cpp file, but now I get a new error: "Expected constructor, destructor, or type conversion before '<' token". A similar error is displayed for every member as well. Interestingly enough, if I move all the .cpp code to the header file, it compiles fine. Does that mean I can't implement template classes in separate files?
It's always a good idea to place your template definitions in a header file: the compiler must see them at every point of instantiation, so they cannot live in a separately compiled .cpp file the way ordinary class methods can. Your setup almost works, but you are both #including myvector.cpp from the header and compiling myvector.cpp as its own translation unit, and that .cpp in turn #includes myvector.h again, hence the circular inclusion and the redefinition errors. Either move the definitions straight into the header, or keep the #include "myvector.cpp" trick but exclude myvector.cpp from the project's list of compiled files (and remove its #include "myvector.h").
2,984,923
2,985,162
Drawing shapes with wxWidgets
I want to learn how to draw shapes with wxWidgets. Where do I start? In case there are multiple ways, I prefer ease of use over cross-platform compatibility. I'm a Windows user.
This is done by creating a wxPanel, connecting to the paint event, and using the DC provided in that paint event to draw various things. The DC has a number of drawing related functions. This will probably be using Windows GDI or something similar, which means performance probably won't be fantastic, but it should work for simple purposes. You can find a tutorial with sample code on the Wiki. Look for the documentation for the wxDC class to see a list of drawing functions you can use. If you need something with more performance, look into the wxGLCanvas which renders a hardware accelerated OpenGL canvas.
2,985,004
2,985,062
Using Boost.Asio to get "the whole packet"
I have a TCP client connecting to my server which is sending raw data packets. How, using Boost.Asio, can I get the "whole" packet every time (asynchronously, of course)? Assume these packets can be any size up to the full size of my memory. Basically, I want to avoid creating a statically sized buffer.
Typically, when you do async I/O, your protocol should support it. One easy way is to prefix a byte array with its length at the logical level, and have the reading code buffer up until it has a full buffer ready for parsing. If you don't do it, you will end up with this logic scattered all over the place (think about reading a null-terminated string, and what it means if you just get a part of it every time select/poll returns).
2,985,034
2,985,069
GlGenTextures keeps returing 0's
I'm trying to generate textures like so: #define checkImageWidth 64 #define checkImageHeight 64 static GLubyte checkImage[checkImageHeight][checkImageWidth][4]; static GLubyte otherImage[checkImageHeight][checkImageWidth][4]; static GLuint texName[2]; void makeCheckImages(void) { int i, j, c; for (i = 0; i < checkImageHeight; i++) { for (j = 0; j < checkImageWidth; j++) { c = ((((i&0x8)==0)^((j&0x8))==0))*255; checkImage[i][j][0] = (GLubyte) c; checkImage[i][j][1] = (GLubyte) c; checkImage[i][j][2] = (GLubyte) c; checkImage[i][j][3] = (GLubyte) 255; c = ((((i&0x10)==0)^((j&0x10))==0))*255; otherImage[i][j][0] = (GLubyte) c; otherImage[i][j][1] = (GLubyte) 0; otherImage[i][j][2] = (GLubyte) 0; otherImage[i][j][3] = (GLubyte) 255; } } } void init(void) { glClearColor (1.0, 0.0, 0.0, 0.0); glShadeModel(GL_FLAT); glEnable(GL_DEPTH_TEST); makeCheckImages(); glPixelStorei(GL_UNPACK_ALIGNMENT, 1); glGenTextures(2, texName); glBindTexture(GL_TEXTURE_2D, texName[0]); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, checkImageWidth, checkImageHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, checkImage); glBindTexture(GL_TEXTURE_2D, texName[1]); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, checkImageWidth, checkImageHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, otherImage); glEnable(GL_TEXTURE_2D); engineGL.current.tex = texName[1]; } But when I check the values of texName[0] and [1] they are both 0, I do not understand why, what am I doing wrong. 
Thanks.
Try calling glGetError. It should tell you in more detail what went wrong. In general, if an OpenGL function fails, the first thing you do should be to ask OpenGL why it failed. It knows, because it just tried to execute the function. It's much harder for us to guess at what might have gone wrong.
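To make that check readable, a small error-to-string helper is handy. The GL error constants are inlined below (with their standard values from the OpenGL spec) so the sketch compiles standalone; in real code you'd include the GL headers and call glGetError() right after the failing call. Note also that a common cause of glGenTextures returning zeros is calling it before an OpenGL context has been created and made current.

```cpp
#include <cstring>
#include <cassert>

// Standard OpenGL error codes, inlined so this sketch builds without
// GL headers; real code would use the constants from <GL/gl.h>.
enum {
    MY_GL_NO_ERROR          = 0,
    MY_GL_INVALID_ENUM      = 0x0500,
    MY_GL_INVALID_VALUE     = 0x0501,
    MY_GL_INVALID_OPERATION = 0x0502,
    MY_GL_OUT_OF_MEMORY     = 0x0505
};

// Translate an error code (as returned by glGetError) to a readable name.
const char* glErrorToString(unsigned int err) {
    switch (err) {
        case MY_GL_NO_ERROR:          return "GL_NO_ERROR";
        case MY_GL_INVALID_ENUM:      return "GL_INVALID_ENUM";
        case MY_GL_INVALID_VALUE:     return "GL_INVALID_VALUE";
        case MY_GL_INVALID_OPERATION: return "GL_INVALID_OPERATION";
        case MY_GL_OUT_OF_MEMORY:     return "GL_OUT_OF_MEMORY";
        default:                      return "unknown GL error";
    }
}
```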
2,985,039
2,985,048
Pointer initialization
Sorry if this question has been asked before. On my search through SO I didn't find one that asked what I wanted to know. Basically, when I have this: typedef struct node { int data; node *node; } *head; and do node *newItem = new node; I am under the impression that I am declaring and reserving space, but not defining, a pointer to struct node, is that correct? So when I do newItem->data = 100 and newItem->next = 0 I get confused. newItem = 0would declare what exactly? Both data and next? The object as a whole? I'm especially confused when I use typedef. Which part is the macro? I assume node because that's how I call it, but why do I need it? Finally, what happens when I do: node *temp; temp = new node; temp = head->next; head->next = newItem; newItem->next = temp; I mean, head->next is a pointer pointing to object newItem, so I assume not to newItem.data or next themselves. So how can I use an uninitialized pointer that I described above safely like here? is head now not pointing to an uninitialized pointer?
I am under the impression that I am declaring and reserving space, but not defining, a pointer to struct node, is that correct? No. You are declaring a pointer, allocating space on the stack for the pointer, and dynamically allocating storage for a node and assigning its address to it. Don't confuse yourself by writing stuff like this: typedef struct node { int data; node * next; } *head; The way to write the struct in C++ is: struct node { int data; node * next; }; You can now create a pointer: node * pnode; which allocates storage for the pointer, and you can dynamically allocate storage for a node, and make the pointer point to it: pnode = new node; or do it all in one: node * pnode = new node; Now when you say: pnode->data = 10; you are not allocating anything. You are assigning 10 to the member called data of the node instance pointed to by pnode. Of course, if you had given your node a constructor (which you should normally do), you could do it all in one: struct node { int data; node * next; node( int n, node * np ) : data( n ), next( np ) {} }; node * pnode = new node( 10, 0 );
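Putting the constructor version together, here is a short runnable sketch (the helper function names are mine, not from the question):

```cpp
#include <cassert>

struct node {
    int data;
    node* next;
    node(int n, node* np) : data(n), next(np) {}  // members always initialized
};

// Allocate a new node on the heap and link it in front of the old head.
node* prepend(node* head, int value) {
    return new node(value, head);
}

// Free the whole list.
void destroy(node* head) {
    while (head) {
        node* next = head->next;
        delete head;
        head = next;
    }
}
```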
2,985,058
2,985,295
Extremely big integer multiplication and addition
Greetings, I need to multiply two extremely long integer values stored in a text file (exported via GMP (MPIR, to be exact), so they can be in any base). Now, I would usually just import these integers via the mpz_inp_str() function and perform the multiplication in RAM; however, these values are so long that I can't really load them (about 1 GB of data each). What would be the fastest way to do this? Perhaps there are some external libraries that do this sort of thing already? Are there any easily implementable methods for this (performance is not incredibly important as this operation would only be performed once or twice)? tl;dr: I need to multiply values so large they don't fit into process memory limits (Windows). Thank you for your time.
I don't know if there is a library that supports this, but you could use GMP/MPIR on parts of each really big number (RBN). That is, start by breaking each RBN into manageable, uniformly sized chunks (e.g. 10M-digit chunks; expect an undersized chunk for the most significant digits, also see below). RBN1 --> A B C D E RBN2 --> F G H I J The chunking can be done in base 10, so just read <chunk_size> characters from the file for each piece. Then multiply chunks from each number one at a time. AxF BxF CxF DxF ExF + AxG BxG CxG DxG ExG + AxH BxH CxH DxH ExH + AxI BxI CxI DxI ExI + AxJ BxJ CxJ DxJ ExJ Perform each column of the final sum in memory. Then, keeping the carry in memory, write the column out to disk, and repeat for the next column... For carries, convert each column sum to a string with GMP, write out the bottom <chunk_size> portion of the result, and read the top portion back in as a GMP int for the carry. I'd suggest selecting a chunk size dynamically for each multiplication in order to keep each column addition in memory; the larger the numbers, the more column additions will need to be done, and the smaller the chunk size will need to be. For both reading and writing, I'd suggest using memory-mapped files; Boost has a nice interface for this (note that this does not load the entire file, it just buffers the IO on virtual memory). Open one mapped file for each input RBN, and one for the output with size = size(RBN1) + size(RBN2) + 1. With memory-mapped files, file access is treated as a raw char*, so you can read/write chunks directly using GMP's C-string I/O methods. You will probably need to read into an intermediate buffer in order to get NULL-terminated strings for GMP (unless you want to temporarily alter the memory-mapped file). This isn't very easy to implement correctly, but then again this isn't a particularly easy problem (maybe just tedious). 
This approach has the advantage that it exactly mirrors what GMP is doing in memory, so the algorithms are well known.
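The column scheme above can be modelled in-memory with small chunks standing in for the on-disk pieces. This sketch uses base 10^4 (four decimal digits per chunk) purely for illustration; a real implementation would use far larger chunks, keep them on disk, and let GMP do the per-chunk products:

```cpp
#include <vector>
#include <string>
#include <algorithm>
#include <cstdio>
#include <cstdlib>
#include <cstddef>
#include <cassert>

// Least significant chunk first; each chunk holds 4 decimal digits.
typedef std::vector<long long> Chunks;
const long long BASE = 10000;

Chunks toChunks(const std::string& decimal) {
    Chunks c;
    for (int end = (int)decimal.size(); end > 0; end -= 4) {
        int begin = std::max(0, end - 4);
        c.push_back(std::atoll(decimal.substr(begin, end - begin).c_str()));
    }
    return c;
}

Chunks multiply(const Chunks& a, const Chunks& b) {
    Chunks r(a.size() + b.size(), 0);
    for (std::size_t i = 0; i < a.size(); ++i)       // the AxF, AxG, ... grid
        for (std::size_t j = 0; j < b.size(); ++j)
            r[i + j] += a[i] * b[j];                 // accumulate column sums
    for (std::size_t k = 0; k + 1 < r.size(); ++k) { // propagate carries
        r[k + 1] += r[k] / BASE;
        r[k] %= BASE;
    }
    return r;
}

std::string toString(const Chunks& c) {
    std::string s;
    char buf[16];
    for (int i = (int)c.size() - 1; i >= 0; --i) {
        if (s.empty() && c[i] == 0 && i > 0) continue;     // skip leading zeros
        if (s.empty()) std::snprintf(buf, sizeof buf, "%lld", c[i]);
        else           std::snprintf(buf, sizeof buf, "%04lld", c[i]);
        s += buf;
    }
    return s;
}
```

In the disk-based version, `multiply`'s inner loop corresponds to one column of the diagram, and `toString`'s carry handling corresponds to writing the bottom portion of each column out while carrying the top portion forward.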
2,985,068
2,985,081
does memcpy params have to be of the same type?
I was reading that memcpy copies a given number of bytes from a source location to a destination location. Does this mean that memcpy could possibly change the datatype entirely? memcpy(DoubleOne, CharTwo, strlen(CharTwo)); considering that both values are empty still.
No, they don't have to be the same type. int test = 3; char dest[sizeof(int)]; memcpy(&dest[0], &test, sizeof(int)); is valid C (and C++).
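Since memcpy is oblivious to types, it only moves raw bytes; the classic, well-defined use of this is inspecting one type's bytes as another, e.g. reading a float's IEEE-754 bit pattern (this sketch assumes the near-universal 32-bit float):

```cpp
#include <cstring>
#include <cstdint>
#include <cassert>

// Copy the 4 bytes of a float into an integer of the same size;
// the bytes are unchanged, only the interpretation differs.
uint32_t floatBits(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);
    return bits;
}
```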
2,985,142
3,032,281
Exit code 3 (not my return value, looking for source)
Greetings, my program exits with the code 3. No error messages, no exceptions, and the exit is not initiated by my code. The problem occurs when I am trying to read extremely long integer values from a text file (the text file is present and correctly opened, with successful prior reading). I am using very large amounts of memory (in fact, I think that this might be the cause, as I am nearly sure I go over the 2GB per process memory limit). I am also using the GMP (or, rather, MPIR) library to multiply bignums. I am fairly sure that this is not a file I/O problem as I got the same error code on a previous program version that was fully in-memory. System: MS Visual Studio 2008 MS Windows Vista Home Premium x86 MPIR 2.1.0 rc2 4GB RAM Where might this error code originate from? EDIT: this is the procedure that exits with the code void condenseBinSplitFile(const char *sourceFilename, int partCount){ //condense results file into final P and Q std::string tempFilename; std::string inputFilename(sourceFilename); std::string outputFilename(BIN_SPLIT_FILENAME_DATA2); mpz_class *P = new mpz_class(0); mpz_class *Q = new mpz_class(0); mpz_class *PP = new mpz_class(0); mpz_class *QQ = new mpz_class(0); FILE *sourceFile; FILE *resultFile; fpos_t oldPos; int swapCount = 0; while (partCount > 1){ std::cout << partCount << std::endl; sourceFile = fopen(inputFilename.c_str(), "r"); resultFile = fopen(outputFilename.c_str(), "w"); for (int i=0; i<partCount/2; i++){ //Multiplication order: //Get Q, skip P //Get QQ, mul Q and QQ, print Q, delete Q //Jump back to P, get P //Mul P and QQ, delete QQ //Skip QQ, get PP //Mul P and PP, delete P and PP //Get Q, skip P mpz_inp_str(Q->get_mpz_t(), sourceFile, CALC_BASE); fgetpos(sourceFile, &oldPos); skipLine(sourceFile); skipLine(sourceFile); //Get QQ, mul Q and QQ, print Q, delete Q mpz_inp_str(QQ->get_mpz_t(), sourceFile, CALC_BASE); (*Q) *= (*QQ); mpz_out_str(resultFile, CALC_BASE, Q->get_mpz_t()); fputc('\n', resultFile); (*Q) = 0; //Jump 
back to P, get P fsetpos(sourceFile, &oldPos); mpz_inp_str(P->get_mpz_t(), sourceFile, CALC_BASE); //Mul P and QQ, delete QQ (*P) *= (*QQ); (*QQ) = 0; //Skip QQ, get PP skipLine(sourceFile); skipLine(sourceFile); mpz_inp_str(PP->get_mpz_t(), sourceFile, CALC_BASE); //Mul P and PP, delete PP, print P, delete P (*P) += (*PP); (*PP) = 0; mpz_out_str(resultFile, CALC_BASE, P->get_mpz_t()); fputc('\n', resultFile); (*P) = 0; } partCount /= 2; fclose(sourceFile); fclose(resultFile); //swap filenames tempFilename = inputFilename; inputFilename = outputFilename; outputFilename = tempFilename; swapCount++; } delete P; delete Q; delete PP; delete QQ; remove(BIN_SPLIT_FILENAME_RESULTS); if (swapCount%2 == 0) rename(sourceFilename, BIN_SPLIT_FILENAME_RESULTS); else rename(BIN_SPLIT_FILENAME_DATA2, BIN_SPLIT_FILENAME_RESULTS); } EDIT2: completely in-memory version that also exits with 3 void binarySplitE(const ULONG first, const ULONG last, mpz_class *P, mpz_class *Q){ //P(first, last) = P(first, mid)*Q(mid, last) + P(mid, last) //Q(first, last) = Q(first, mid)*Q(mid, last) if (last - first == 1){ calcP(P, first, last); calcQ(Q, first, last); return; } ULONG mid = (first+last)/2; mpz_class *PP = new mpz_class(*P); mpz_class *QQ = new mpz_class(*Q); //Calculate P(first, mid) and Q(first, mid) binarySplitE(first, mid, P, Q); //Calculate P(mid, last) and Q(mid, last) binarySplitE(mid, last, PP, QQ); //P(first, last) == P(first, mid) *P *= (*QQ); //P(first, last) == P(first, mid)*Q(mid, last) *P += (*PP); //P(first, last) == P(first, mid)*Q(mid, last) + P(mid, last); //Q(first, last) == Q(first, mid) *Q *= (*QQ); //Q(first, last) == Q(first, mid)*Q(mid, last) delete PP; delete QQ; }
It appears this exit code was returned by MPIR (GMP) because it could not allocate a large amount of memory. It's a bit annoying that this was not mentioned in any of the documentation, though.
2,985,173
2,985,186
Passing a template func. as a func. ptr to an overloaded func. - is there a way to compile this code?
Just a general c++ curiosity: This code below shouldn't compile because it's impossible to know which to instantiate: temp(const int&) or temp(const string&) when calling func(temp) - this part i know. What i would like to know is if there is anything i can do to the line marked PASSINGLINE to get the compiler to deduce that i want FPTR1 called and not FPTR2 ? #include<iostream> using std::cout; using std::endl; /*FPTR1*/ void func(void(*fptr)(const int&)){ fptr(1001001);} /*FPTR2*/ void func(void(*fptr)(const string&)){ fptr("1001001"); } template <typename T> void temp(const T &t){ cout << t << endl; } int main(){ /*PASSINGLINE*/ func(temp); return 0; } Thank you.
func(temp<int>); There's no way to make the compiler infer the template argument, that is more succinct or clearer than just explicitly specifying it, in this case. Edit: The following code compiles without warning and produces the expected result: #include<iostream> #include<string> using std::string; using std::cout; using std::endl; /*FPTR1*/ void func(void(*fptr)(const int&)){ fptr(1001001);} /*FPTR2*/ void func(void(*fptr)(const string&)){ fptr("1001001"); } template <typename T> void temp(const T &t){ cout << t << endl; } int main(){ /*PASSINGLINE*/ func(temp<int>); return 0; }
2,985,372
2,985,387
C++ - Where to code a member function for an inherited object
I have a few classes (heat, gas, contact, pressure) inheriting from a main one (sensor). I have a need to store them in a vector<Sensor *> (part of the specification). At some point in time, I need to call a function that indiscriminately stores those Sensor *. (also part of the specification, not open for discussion) Something like this: for(size_t i = 0; i < Sensors.size(); ++i) Sensors[i]->storeSensor(os) //os is an ofstream kind of object, passed onwards by reference Where and how shall storeSensor be defined? Is there any simple way to do this or will I need to disregard the specification? Mind you, I'm a beginner! Thanks for your time!
You need to make it a pure virtual function, which you then implement in each of the derived classes. The implementation can be pretty simple: class sensor { .... virtual void storeSensor( ostream & os ) = 0; }; class heat : public sensor { .... void storeSensor( ostream & os ) { os << * this; } }; class light : public sensor { .... void storeSensor( ostream & os ) { os << * this; } }; This assumes you have defined a suitable operator<< for each of your classes. If not, you need to write the specific sensors member variables explicitly. And that is the easy bit - the difficulty starts when you want to read the sensors back in again :-)
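A trimmed, compilable sketch of the mechanism (writing simple tags instead of real sensor data, so it stands alone):

```cpp
#include <ostream>
#include <sstream>
#include <vector>
#include <cstddef>
#include <cassert>

class Sensor {
public:
    virtual ~Sensor() {}                            // virtual dtor: safe delete via base*
    virtual void storeSensor(std::ostream& os) = 0; // pure virtual
};

class Heat : public Sensor {
public:
    void storeSensor(std::ostream& os) { os << "heat\n"; }
};

class Gas : public Sensor {
public:
    void storeSensor(std::ostream& os) { os << "gas\n"; }
};

// The caller needs no knowledge of the concrete types.
void storeAll(std::vector<Sensor*>& sensors, std::ostream& os) {
    for (std::size_t i = 0; i < sensors.size(); ++i)
        sensors[i]->storeSensor(os);
}
```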
2,985,449
2,985,471
Are compilers smart enough to detect a no-op function?
If I write a function like this: void doMaybeNothing() { #ifndef IM_LAZY doSomething(); #endif } Are modern compilers smart enough to detect a no-op function and optimize so that there are no cycles wasted? Or is there always a small performance impact?
Assuming the body of the function is available at compile-time or link-time (i.e., it's not in a dynamically linked library), most modern compilers should get rid of calls to functions that do nothing (if optimizations are enabled, of course). Effectively, this is just a form of inline expansion, which allows the body of a function to be expanded anywhere it is called, so long as the results are the same. If the function does nothing, then it will simply expand to nothing wherever it is inlined.
2,985,461
2,985,712
Rich text edit control for C++?
I'm looking for a control to edit rich text. These are my requirements: Fast/lightweight Support for bulleted/numbered lists Colored text and highlighting Targeting Windows, but cross-platform would be a bonus Fine control over undo/redo and easy way to monitor what parts of the documents have changed All the other usual features: text styles, copy/paste, etc. Can easily be a subclassed and extended I'm considering Qt's QTextEDit, but wanted to see if you have any other suggestions.
My first choice would be Qt. From what I tested, it is the best toolkit around, and it is the only one I know that has all of your requirements. My second choice would be wxWidgets, but I didn't like its architecture and API very much.
2,985,478
3,026,066
How can I render 3d graphics in a directshow source filter
I need to render a simple texture mapped model as the output of a directshow source filter. The 3d rendering doesnt need to come from Direct3D, but that would be nice. OpenGL or any other provider would be fine assuming I can fit it into the context of the DirectShow source filter. visual studio 2008 c++
With direct3d I have found that you can call GetRenderTargetData from the d3d device to get you access to the raw image bytes that you can then copy into the source filters image buffer Here is example code of how to get the d3d render void CaptureRenderTarget(IDirect3DDevice9* pdev) { IDirect3DSurface9* pTargetSurface=NULL; HRESULT hr=pdev->GetRenderTarget(0,&pTargetSurface); if(SUCCEEDED(hr)) { D3DSURFACE_DESC desc; hr=pTargetSurface->GetDesc(&desc); if(SUCCEEDED(hr)) { IDirect3DTexture9* pTempTexture=NULL; hr=pdev->CreateTexture(desc.Width,desc.Height,1,0,desc.Format,D3DPOOL_SYSTEMMEM,&pTempTexture,NULL); if(SUCCEEDED(hr)) { IDirect3DSurface9* pTempSurface=NULL; hr=pTempTexture->GetSurfaceLevel(0,&pTempSurface); if(SUCCEEDED(hr)) { hr=pdev->GetRenderTargetData(pTargetSurface,pTempSurface); if(SUCCEEDED(hr)) { //D3DXSaveTextureToFile(L"Output.png",D3DXIFF_PNG,pTempTexture,NULL); D3DLOCKED_RECT data; hr=pTempTexture->LockRect(0, &data, NULL, 0); if(SUCCEEDED(hr)) { BYTE *d3dPixels = (BYTE*)data.pBits; } pTempTexture->UnlockRect(0); } pTempSurface->Release(); } pTempTexture->Release(); } } pTargetSurface->Release(); } }
2,985,492
2,985,503
What exactly are signals and slots in Qt?
I know how they work conceptually, but how are signals and slots implemented in the Qt framework? Qt Creator treats them as keywords, but are they simply a set of macros, or is a special pre-processor required before these source files can be compiled? In other words, if I use Qt's signal/slot features in my code, can I easily compile it on any C++ compiler?
Many features of Qt, including signals, require preprocessing the source using the Qt Meta-Object Compiler (MOC). If you use Qt's signals feature, you can compile your source code on any compiler and platform that is supported by Qt. If you need to compile on a platform not supported by Qt, you may be out of luck or at least in for a lot of fun integration work (this is true of any library or framework, of course). If you are looking for a cross-platform signals implementation, you might also consider Boost.Signals.
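To see the concept without Qt's machinery, here is a bare-bones, single-argument observer in plain C++11, with std::function standing in for what MOC generates. This is an illustration of the idea only, not how Qt actually implements signals and slots:

```cpp
#include <functional>
#include <vector>
#include <cstddef>
#include <cassert>

// NOT Qt's implementation -- just the observer idea behind signals
// and slots, expressed with std::function instead of MOC-generated code.
class Signal {
    std::vector<std::function<void(int)> > slots_;
public:
    void connect(std::function<void(int)> slot) { slots_.push_back(slot); }
    void fire(int value) {            // "emit" is a Qt macro, so avoid that name
        for (std::size_t i = 0; i < slots_.size(); ++i)
            slots_[i](value);
    }
};
```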
2,985,532
2,985,535
Overloading the == operator to compare two char[] using a string comparison
Can the == operator be overloaded so as to compare two char[] using a string comparison?
No; at least one parameter of an operator overload must be of class or enumeration type. char[] and char* are array and pointer types, which are not class or enumeration types.
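As a workaround, wrapping the pointer in a thin class gives the compiler a class type to hook the overload on. (Note that for raw char pointers, == compares addresses, not contents.)

```cpp
#include <cstring>
#include <cassert>

// A wrapper supplies the class type that operator overloading requires.
struct CStr {
    const char* p;
    CStr(const char* s) : p(s) {}
};

// Compare contents with strcmp, not addresses.
bool operator==(const CStr& a, const CStr& b) {
    return std::strcmp(a.p, b.p) == 0;
}
```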
2,985,571
2,985,578
Pass temporary object to function that takes pointer
I tried following code : #include<iostream> #include<string> using namespace std; string f1(string s) { return s="f1 called"; } void f2(string *s) { cout<<*s<<endl; } int main() { string str; f2(&f1(str)); } But this code doesn't compile. What I think is : f1 returns by value so it creates temporary, of which I am taking address and passing to f2. Now Please explain me where I am thinking wrong?
The unary & takes an lvalue (or a function name). Function f1() doesn't return an lvalue, it returns an rvalue (for a function that returns something, unless it returns a reference, its return value is an rvalue), so the unary & can't be applied to it.
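Two conventional fixes, sketched: store the temporary in a named variable first (making it an lvalue whose address you can take), or have f2 take a const reference, which is allowed to bind to temporaries:

```cpp
#include <string>
#include <cassert>

std::string f1(std::string s) { return s = "f1 called"; }

// A const reference binds to rvalues, so f2(f1(...)) becomes legal.
std::string f2(const std::string& s) { return s; }
```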
2,985,622
2,985,634
Making use of WCHAR as a CHAR?
GDI+ makes use of WCHAR instead of what the WinAPI allows, which is CHAR. Usually I can do: char *str = "C:/x.bmp"; but how do I do this for wchar? I can't just do wchar_t *file = "C:/x.bmp"; Thanks
wchar_t *file = L"C:/x.bmp"; L introduces a wide string. In Windows, it's customary to use macros that behave differently according to some preprocessor definitions. See http://msdn.microsoft.com/en-us/library/c426s321(VS.71).aspx You would write: _TCHAR *file = _TEXT("C:/x.bmp");
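A small check of what the L prefix produces: each element is a wchar_t, and (in modern C++) string literals should be bound to const pointers:

```cpp
#include <cwchar>
#include <cassert>

// Wide string literal: note the L prefix and the const element type.
const wchar_t* file = L"C:/x.bmp";
```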
2,985,676
2,985,710
Setting Up OpenCV and .lib files
I have been trying to set up OpenCV for the past few days with no results. I am using Windows 7 and VS C++ 2008 express edition. I have downloaded and installed OpenCV 2.1 and some of the examples work. I downloaded CMake and ran it to generate the VS project files and built all of them but there with several errors, and couldn't get any farther than that. When I ran CMake I configured it to use the VS 9 compiler, and then it brought up a list of items in red such as BUILD_EXAMPLES, BUILD_LATEX_DOCS, ect. All of them were unchecked except BUILD_NEW_PYTHON_SUPPORT, BUILD_TESTS, ENABLE_OPENMP, and OPENCV_BUILD_3RDPARTY_LIBS. I configured and generate without changing anything and then it generated the VS files such as ALL_BUILD.vcproj. I built the OpenCV VS solution in debug mode and it had 15 failures (maybe this is part of the problem or is it because I don't have python and stuff like that?) Now there was a lib folder created after building but inside there was just this VC++ Minimum Rebuild Dependency file and Program Debug Database file, both called cvhaartraining. I believe it should have created the .lib files I need instead of this. Also, the bin folder now has a folder called Debug with the same types of files with names like cv200d and cvaux200d. Believe I need those .lib files to move forward. I would also greatly appreciate if someone could direct me to a reliable tutorial to set up VS for OpenCV because I have been reading a lot of tutorials and they all say different things such as some say to configure Window's environment variables and other say files are located in folders such as OpenCV/cv which I don't have. I have gotten past the point of clear headed thinking so if anyone could offer some direction or a simple list of the files I need to link then I would be thankful. Also a side question: why when linking the OpenCV libs do you have to put them in quotes?
If you're just getting started, you should probably grab the prebuilt libraries for OpenCV instead. It's OpenCV-2.1.0-win32-vs2008.exe from this page. Once you have that, there is really no setup. Just link to the (already built) lib files in any VS project you create, and make sure the OpenCV include directory is in the projects include path.
2,985,840
2,986,116
OpenGL texture misaligned on quad
I've been having trouble with this for a while now, and I haven't gotten any solutions that work yet. Here is the problem, and the specifics: I am loading a 256x256 uncompressed TGA into a simple OpenGL program that draws a quad on the screen, but when it shows up, it is shifted about two pixels to the left, with the cropped part appearing on the right side. It has been baffling me for the longest time, people have suggested clamping and such, but somehow I think my problem is probably something really simple, but I just can't figure out what it is! Here is a screenshot comparing the TGA (left) and how it appears running in the program (right) for clarity. Also take note that there's a tiny black pixel on the upper right corner, I'm hoping that's related to the same problem. alt text http://img64.imageshack.us/img64/2686/helpmed.png Here's the code for the loader, I'm convinced that my problem lies in the way that I'm loading the texture. Thanks in advance to anyone who can fix my problem. 
bool TGA::LoadUncompressedTGA(char *filename,ifstream &texturestream) { cout << "G position status:" << texturestream.tellg() << endl; texturestream.read((char*)header, sizeof(header)); //read 6 bytes into the file to get the tga header width = (GLuint)header[1] * 256 + (GLuint)header[0]; //read and calculate width and save height = (GLuint)header[3] * 256 + (GLuint)header[2]; //read and calculate height and save bpp = (GLuint)header[4]; //read bpp and save cout << bpp << endl; if((width <= 0) || (height <= 0) || ((bpp != 24) && (bpp !=32))) //check to make sure the height, width, and bpp are valid { return false; } if(bpp == 24) { type = GL_RGB; } else { type = GL_RGBA; } imagesize = ((bpp/8) * width * height); //determine size in bytes of the image cout << imagesize << endl; imagedata = new GLubyte[imagesize]; //allocate memory for our imagedata variable texturestream.read((char*)imagedata,imagesize); //read according the the size of the image and save into imagedata for(GLuint cswap = 0; cswap < (GLuint)imagesize; cswap += (bpp/8)) //loop through and reverse the tga's BGR format to RGB { imagedata[cswap] ^= imagedata[cswap+2] ^= //1st Byte XOR 3rd Byte XOR 1st Byte XOR 3rd Byte imagedata[cswap] ^= imagedata[cswap+2]; } texturestream.close(); //close ifstream because we're done with it cout << "image loaded" << endl; glGenTextures(1, &texID); // Generate OpenGL texture IDs glBindTexture(GL_TEXTURE_2D, texID); glPixelStorei(GL_UNPACK_ALIGNMENT, 1); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE); glTexImage2D(GL_TEXTURE_2D, 0, type, width, height, 0, type, GL_UNSIGNED_BYTE, imagedata); delete imagedata; return true; } //Public loading function for TGA images. 
Opens TGA file and determines //its type, if any, then loads it and calls the appropriate function. //Returns: TRUE on success, FALSE on failure bool TGA::loadTGA(char *filename) { cout << width << endl; ifstream texturestream; texturestream.open(filename,ios::binary); texturestream.read((char*)header,sizeof(header)); //read 6 bytes into the file, its the header. //if it matches the uncompressed header's first 6 bytes, load it as uncompressed LoadUncompressedTGA(filename,texturestream); return true; }
It sure looks like you are consuming the header twice (once in TGA::loadTGA and then again in LoadUncompressedTGA). But if that were the case then I'd think width, height, and bpp would all be radically wrong and it wouldn't look as correct as it does.
2,985,943
2,985,964
C++ MACRO that will execute a block of code and a certain command after that block
void main() { int xyz = 123; // original value { // code block starts xyz++; if(xyz < 1000) xyz = 1; } // code block ends int original_value = xyz; // should be 123 } void main() { int xyz = 123; // original value MACRO_NAME(xyz = 123) // the macro takes the code code that should be executed at the end of the block. { // code block starts xyz++; if(xyz < 1000) xyz = 1; } // code block ends << how to make the macro execute the "xyz = 123" statement? int original_value = xyz; // should be 123 } Only the first main() works. I think the comments explain the issue. It doesn't need to be a macro but to me it just sounds like a classical "macro-needed" case. By the way, there's the BOOST_FOREACH macro/library and I think it does the exact same thing I'm trying to achieve but it's too complex for me to find the essence of what I need. From its introductory manual page, an example: #include <string> #include <iostream> #include <boost/foreach.hpp> int main() { std::string hello( "Hello, world!" ); BOOST_FOREACH( char ch, hello ) { std::cout << ch; } return 0; }
The cleanest way to do this is probably to use an RAII container to reset the value: // Assumes T's assignment does not throw template <typename T> struct ResetValue { ResetValue(T& o, T v) : object_(o), value_(v) { } ~ResetValue() { object_ = value_; } T& object_; T value_; }; used as: { ResetValue<int> resetter(xyz, 123); // ... } When the block ends, the destructor will be called, resetting the object to the specified value. If you really want to use a macro, as long as it is a relatively simple expression, you can do this using a for-block: for (bool b = false; b == false; b = true, (xyz = 123)) { // ... } which can be turned into a macro: #define DO_AFTER_BLOCK(expr) \ for (bool DO_AFTER_BLOCK_FLAG = false; \ DO_AFTER_BLOCK_FLAG == false; \ DO_AFTER_BLOCK_FLAG = true, (expr)) used as: DO_AFTER_BLOCK(xyz = 123) { // ... } I don't really think the macro approach is a good idea; I'd probably find it confusing were I to see this in production source code.
2,986,103
2,986,601
C++ constructor problem, values not being set
I'm new to C++ and I'm trying to figure out this problem I'm having with my constructor for one of my classes. What happens is... all my variables are initialized properly except two (health and type). #pragma once #include <irrlicht.h> #include <vector> #include <cassert> using namespace irr; using namespace core; using namespace scene; enum { PLAYER = 0, NPC = 1, SOLDIER = 2, CHAINGUNNER = 3 }; class Model { public: Model(void); Model(int id, std::vector<ISceneNode*> modelVec, int modType); ~Model(void); std::vector<int> path; std::vector<ISceneNode*> model; int endNode; int type; int animate; int health; u32 lastAnimation; private: int mId; }; #include "Model.h" Model::Model(void) { //assert(false); } Model::Model(int id, std::vector<ISceneNode*> modelVec, int modType) { path = std::vector<int>(); model = modelVec; endNode = 0; type = modType; animate = 0; health = 100; lastAnimation = 0; mId = id; } Model::~Model(void) {} I create a model with Model soldier(id, model, SOLDIER) Everything is set properly except type and health. I've tried many different things, but I cannot figure out my problem. I'm not sure but the default constructor is being called. It doesn't make sense because I make no called to that constructor. Thanks, vector<ISceneNode*> model; model.push_back(soldierBody); model.push_back(soldierHead); model.push_back(soldierWeapon); cout << "Id of char: " << id << endl; Model soldier(id, model, SOLDIER); modelMap[id] = soldier;
This line: modelMap[id] = soldier; first default-constructs the Model inside the map. The returned reference is then used with the assignment operator to copy the value of soldier into the value contained inside the map. To test if it is working try: Model soldier(id, model, SOLDIER); std::cout << "TYPE(" << soldier.type << ") HEALTH(" << soldier.health << ")" << std::endl; modelMap[id] = soldier; std::cout << "TYPE(" << modelMap[id].type << ") HEALTH(" << modelMap[id].health << ")" << std::endl; If your class is not designed to be default constructible, then do not have a default constructor (this will just lead to problems). Declare a default constructor in the private part of the class (no need for a body). Without a default constructor you will not be able to use operator[] on the map. But you can get around this by using insert: modelMap.insert(std::map<XX, Model>::value_type(id, soldier));
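A compilable reduction of this behaviour (the field values are hypothetical). The assertions show that assigned values do survive the default-construct-then-assign sequence, so if values go missing in the original program, the cause must be elsewhere (e.g. a stale copy of the map or object):

```cpp
#include <map>
#include <cassert>

struct Model {
    int type;
    int health;
    Model() : type(-1), health(-1) {}         // what operator[] calls first
    Model(int t, int h) : type(t), health(h) {}
};
```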
2,986,229
2,986,251
Is it good practice to use std::size_t all over the place?
Possible Duplicate: When to use std::size_t? I have a lot of constants in my code that are unsigned numbers, e.g. counters, frequency cutoffs, lengths, etc. I started using std::size_t for all of these, instead of int or unsigned int. Is that the right thing to do? I started it because the STL containers use it for their sizes, it's used for string position, etc.
std::size_t is the type to use for counting memory locations, like array lengths, the size of objects, etc. STL containers use container_type::size_type, which will usually map to std::size_t, but isn't guaranteed to do so. If you need types for holding non-negative integral values which are not used for the above-mentioned purposes, what's wrong with unsigned short, unsigned int, and unsigned long?
2,986,306
2,986,331
Standard C++ method for validating a URLs format?
Is there a easy standards-compliant way to check if a URL string is a valid format? Either through a specific URL-type class or maybe someone could show me how to do a regex validation of it?
No, there isn't one in the standard library. On Windows, you could take a look at the IsValidURL() function
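With a C++11 compiler, a rough format-only check can be sketched with std::regex. The pattern below is an illustrative assumption (scheme://host[/path]), nowhere near full RFC 3986 validation:

```cpp
#include <regex>
#include <string>
#include <cassert>

// Rough *format* check only; real URL validation is far more involved.
bool looksLikeUrl(const std::string& s) {
    static const std::regex re(R"(^[A-Za-z][A-Za-z0-9+.\-]*://[^\s/]+(/\S*)?$)");
    return std::regex_match(s, re);
}
```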
2,986,324
2,986,327
Float compile-time calculation not happening?
A little test program: #include <iostream> const float TEST_FLOAT = 1/60; const float TEST_A = 1; const float TEST_B = 60; const float TEST_C = TEST_A / TEST_B; int main() { std::cout << TEST_FLOAT << std::endl; std::cout << TEST_C << std::endl; std::cin.ignore(); return 0; } Result : 0 0.0166667 Tested on Visual Studio 2008 & 2010. I worked on other compilers that, if I remember well, made the first result like the second result. Now my memory could be wrong, but shouldn't TEST_FLOAT have the same value than TEST_C? If not, why? Is TEST_C value resolved at compile time or at runtime? I always assumed the former but now that I see those results I have some doubts...
In 1/60 Both of the operands are integers, so integer arithmetic is performed. To perform floating point arithmetic, at least one of the operands needs to have a floating point type. For example, any of the following would perform floating point division: 1.0/60 1.0/60.0 1/60.0 (You might choose to use 1.0f instead, to avoid any precision reduction warnings; 1.0 has type double, while 1.0f has type float) Shouldn't TEST_FLOAT have the same value than TEST_C? In the TEST_FLOAT case, integer division is performed and then the result of the integer division is converted to float in the assignment. In the TEST_C case, the integer literals 1 and 60 are converted to float when they are assigned to TEST_A and TEST_B; then floating-point division is performed on those floats and the result is assigned to TEST_C. Is TEST_C value resolved at compile time or at runtime? It depends on the compiler; either method would be standards-conforming.
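A minimal demonstration of the difference:

```cpp
#include <cassert>

// Both operands are int, so the division truncates to 0 *before*
// the result is converted to float.
const float INT_DIV = 1 / 60;

// One floating-point operand is enough to get floating-point division.
const float FLOAT_DIV = 1.0f / 60;
```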
2,986,351
2,986,443
What's a good way to write XML in C++?
There are plenty of Libraries to parse XML, but it seems there aren't many good (?) ways to write XML in C++. Libraries I've been using so far: PugiXML: really lightweight, very straightforwarded API, but it seems to lack a way to write XML (Or I haven't found it yet) RapidXML: I don't have much experience with RapidXML; But it does look nice. TinyXML: I find it odd that the STL TinyXML needs to be explicitly "enabled" in TinyXML - I mean, if your compiler doesn't support the STL, get a better one! Anyway, to make my point clear, I have written a PHP Script that does what I plan to do in C++: http://codepad.org/RyhQSgcm I really appreciate any help!
Xerces DOM Parser Xerces-C++ is a validating XML parser written in a portable subset of C++. Xerces-C++ makes it easy to give your application the ability to read and write XML data. A shared library is provided for parsing, generating, manipulating, and validating XML documents using the DOM, SAX, and SAX2 APIs. For an introduction to programming with Xerces-C++ refer to the Programming Guide. The POCO project also has an XMLWriter that will allow you to generate XML. Of course there is always this existing StackOverflow post.
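If pulling in a full library feels like overkill for simple output, a minimal writer can also be hand-rolled; this sketch (purely illustrative, not part of Xerces or POCO) shows the two essential pieces, entity escaping and element wrapping:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Escape the five predefined XML entities.
inline std::string xml_escape(const std::string& in) {
    std::string out;
    for (std::string::size_type i = 0; i < in.size(); ++i) {
        switch (in[i]) {
            case '<':  out += "&lt;";   break;
            case '>':  out += "&gt;";   break;
            case '&':  out += "&amp;";  break;
            case '"':  out += "&quot;"; break;
            case '\'': out += "&apos;"; break;
            default:   out += in[i];    break;
        }
    }
    return out;
}

// Wrap escaped text in a named element.
inline std::string xml_element(const std::string& tag, const std::string& text) {
    std::ostringstream os;
    os << '<' << tag << '>' << xml_escape(text) << "</" << tag << '>';
    return os.str();
}
```

This covers flat elements only; for attributes, nesting, and proper encoding declarations, one of the libraries above is the safer choice.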
2,986,383
2,986,399
C++ - Implementing my own stream
Hello! My problem can be described the following way: I have some data which actually is an array, and it could be represented as char* data with some size. I also have some legacy code (a function) that takes some abstract std::istream object as a param and uses that stream to retrieve the data it operates on. So, my question is the following - what would be an easy way to map my data to some std::istream object so that I can pass it to my function? I thought about creating a std::stringstream object from my data, but that means copying and (as I assume) isn't the best solution. Any ideas how this could be done so that my std::istream operates on the data directly? Thank you.
If you're looking at actually creating your own stream, I'd look at the Boost.Iostreams library. It makes it easy to create your own stream objects.
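Without Boost, a common zero-copy approach is to derive a tiny streambuf whose get area points straight at the existing array; this is a sketch (the name MemBuf is made up) which assumes the data outlives the stream:

```cpp
#include <cassert>
#include <istream>
#include <streambuf>
#include <string>

// A stream buffer that reads directly from an existing char array;
// no copy of the data is made.
struct MemBuf : std::streambuf {
    MemBuf(char* data, std::size_t size) {
        setg(data, data, data + size);  // begin, current position, end of get area
    }
};
```

Usage: construct a MemBuf over your char* data, then build a std::istream on top of it and hand that to the legacy function.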
2,986,570
2,986,602
C++ compiler maximum number of classes
In meta-programming, the number of classes grows quite fast. Is the maximum number of classes a modern compiler allows, for example g++, something to be concerned about? Thank you
I'd guess this question is best answered by the standard published by the C++ committee. But looking at this place, I can't see any upper limit on the number of classes although there is minimum quantity limit on many items (saying at least the given number of items of each type should be supported by the compiler but that is not a binding limit). If your compiler can support these minimum limits, you should be OK. But what factors would have the say on the upper limits on the number of classes kindles my academic curiosity. I'd be glad to know if a compiler guru can answer that.
2,986,644
2,986,699
converting a timestring to a duration
At the moment I am trying to read in a formatted time string and create a duration from it. I am currently trying to use the boost date_time time_duration class to read and store the value. boost date_time provides a method time_duration duration_from_string(std::string) that allows a time_duration to be created from a time string, and it accepts strings formatted appropriately ("[-]h[h][:mm][:ss][.fff]"). Now this method works fine if you use a correctly formatted time string. However if you submit something invalid like "ham_sandwich" or "100" then you will instead be returned a time_duration that is not valid. Specifically, if you try to pass it to a standard output stream then an assertion will occur. My question is: does anyone know how to test the validity of the boost time_duration? And failing that, can you suggest another method of reading a time string and getting a duration from it? Note: I have tried the obvious testing methods that time_duration provides; is_not_a_date_time(), is_special() etc. and they don't pick up that there is an issue. Using boost 1.38.0
From the documentation, it looks like you may want to try using the stream operators (operator<<, operator>>); error conditions are described at Date Time Input/Output. Alternatively, I suppose you could validate the string before passing it in. Right offhand, it doesn't look like that particular method has any error handling. Edit: I'm not sure I would have thought to check the return value like this if it weren't for Brian's answer, but for completeness here's a full example that takes a string as input. You can either check the return value or have it throw an exception (I believe you'd want to catch std::ios_base::failure): #include <iostream> #include <sstream> #include <string> #include <boost/date_time/posix_time/posix_time.hpp> using namespace std; using namespace boost::posix_time; int main(int argc, char **argv) { if (argc < 2) { cout << "Usage: " << argv[0] << " TIME_DURATION" << endl; return 2; } // No exception stringstream ss_noexcept(argv[1]); time_duration td1; if (ss_noexcept >> td1) { cout << "Valid time duration: " << td1 << endl; } else { cout << "Invalid time duration." << endl; } // Throws exception stringstream ss2; time_duration td2; ss2.exceptions(ios_base::failbit); ss2.str(argv[1]); try { ss2 >> td2; cout << "Time duration: " << td2 << endl; } catch (ios_base::failure e) { cout << "Invalid time duration (exception caught). what():\n" << e.what() << endl; } }
2,986,844
2,986,856
Reading Unicode files line by line C++
What is the correct way to read Unicode files line by line in C++? I am trying to read a file saved as Unicode (LE) by Windows Notepad. Suppose the file contains simply the characters A and B on separate lines. In reading the file byte by byte, I see the following byte sequence (hex): FE FF 41 00 0D 00 0A 00 42 00 0D 00 0A 00 So: 2-byte BOM, 2-byte 'A', 2-byte CR, 2-byte LF, 2-byte 'B', 2-byte CR, 2-byte LF. I tried reading the text file using the following code: std::wifstream file("test.txt"); file.seekg(2); // skip BOM std::wstring A_line; std::wstring B_line; getline(file,A_line); // I get "A" getline(file,B_line); // I get "\0B" I get the same results using the >> operator instead of getline: file >> A_line; file >> B_line; It appears that the single-byte CR character is being consumed only as a single byte, or that CR NULL LF is being consumed but not the high NULL byte. I would expect that wifstream in text mode would read the 2-byte CR and 2-byte LF. What am I doing wrong? It does not seem right that one should have to read a text file byte by byte in binary mode just to parse the new lines.
std::wifstream exposes the wide character set to your program, which is typically UCS-2 on Windows and UTF-32 on Unix, but assumes that the input file is still using narrow characters. If you want it to behave using wide characters on disk, you need to use a std::codecvt<wchar_t, wchar_t> facet. You should just be able to find your compiler's implementation of std::codecvt<char, char> which is also a non-converting code conversion facet, and change the chars to wchar_ts.
2,986,891
2,986,902
How to publicly inherit from a base class but make some of public methods from the base class private in the derived class?
For example, class Base has two public methods: foo() and bar(). Class Derived is inherited from class Base. In class Derived, I want to make foo() public but bar() private. Is the following code the correct and natural way to do this? class Base { public: void foo(); void bar(); }; class Derived : public Base { private: void bar(); };
Section 11.3 of the C++ '03 standard describes this ability: 11.3 Access declarations The access of a member of a base class can be changed in the derived class by mentioning its qualified-id in the derived class declaration. Such mention is called an access declaration. The effect of an access declaration qualified-id ; is defined to be equivalent to the declaration using qualified-id So there are 2 ways you can do it. Note: As of ISO C++ '11, access-declarations (Base::bar;) are prohibited as noted in the comments. A using-declaration (using Base::bar;) should be used instead. 1) You can use public inheritance and then make bar private: class Base { public: void foo(){} void bar(){} }; class Derived : public Base { private: using Base::bar; }; 2) You can use private inheritance and then make foo public: class Base { public: void foo(){} void bar(){} }; class Derived : private Base { public: using Base::foo; }; Note: If you have a pointer or reference of type Base which contains an object of type Derived then the user will still be able to call the member.
2,986,974
2,986,985
Overloading stream insertion without violating information hiding?
I'm using yaml-cpp for a project. I want to overload the << and >> operators for some classes, but I'm having an issue grappling with how to "properly" do this. Take the Note class, for example. It's fairly boring: class Note { public: // constructors Note( void ); ~Note( void ); // public accessor methods void number( const unsigned long& number ) { _number = number; } unsigned long number( void ) const { return _number; } void author( const unsigned long& author ) { _author = author; } unsigned long author( void ) const { return _author; } void subject( const std::string& subject ) { _subject = subject; } std::string subject( void ) const { return _subject; } void body( const std::string& body ) { _body = body; } std::string body( void ) const { return _body; } private: unsigned long _number; unsigned long _author; std::string _subject; std::string _body; }; The << operator is easy sauce. In the .h: YAML::Emitter& operator << ( YAML::Emitter& out, const Note& v ); And in the .cpp: YAML::Emitter& operator << ( YAML::Emitter& out, const Note& v ) { out << v.number() << v.author() << v.subject() << v.body(); return out; } No sweat. Then I go to declare the >> operator. In the .h: void operator >> ( const YAML::Node& node, Note& note ); But in the .cpp I get: void operator >> ( const YAML::Node& node, Note& note ) { node[0] >> ? node[1] >> ? node[2] >> ? node[3] >> ? return; } If I write things like node[0] >> v._number; then I would need to change the access specifiers to make all of the Note fields public (which defeats everything I was taught (by professors, books, and experience) about data hiding). I feel like doing node[0] >> temp0; v.number( temp0 ); all over the place is not only tedious, error-prone, and ugly, but rather wasteful (what with the extra copies).
Then I got wise: I attempted to move these two operators into the Note class itself, and declare them as friends, but the compiler (GCC 4.4) didn't like that: src/note.h:44: error: ‘YAML::Emitter& Note::operator<<(YAML::Emitter&, const Note&)’ must take exactly one argument src/note.h:45: error: ‘void Note::operator>>(const YAML::Node&, Note&)’ must take exactly one argument Question: How do I "properly" overload the >> operator for a class Without violating the information hiding principle? Without excessive copying?
The typical way to do this without violating encapsulation is to make the operator>> a friend function. There must have been a syntax problem with your declaration of a friend operator (not clear what exactly from the error message). I don't use YAML, but from your question the following is the gist of it: class Note{ ... friend void operator >> ( const YAML::Node& node, Note& note ); .... }; void operator >> ( const YAML::Node& node, Note& note ){ node[0] >> note._number; node[1] >> note._author; node[2] >> note._subject; node[3] >> note._body; } A friend function has the same access rights to private members as a member function. Alternatively, you can declare setters for all member data, but the friend function method is cleaner.
2,987,062
3,042,278
Configuring the GCC compiler switches in Qt, QtCreator, and QMake
I recently tried to use Qt Creator 1.3.2, Qt 4.6.2, and GCC 4.4.0 (32-bit version) on Windows 7 (64-bit) to compile an application using some of the experimental C++0x extensions and encountered the following (fatal) error: This file requires compiler and library support for the upcoming ISO C++ standard, C++0x. This support is currently experimental, and must be enabled with the -std=c++0x or -std=gnu++0x compiler options. In my search for a solution, I came across the thread qmake and compiler flags?, and added the following to the .pro file: CXXFLAGS += -std=c++0x but that didn't seem to make a difference. So, I expect there's some tag I need to add to the .pro (project) file, but I've never messed with the GCC compiler switches in Qt, QMake, and QtCreator before, and I am uncertain about the proper invocation / incantation. So, my question is how do you set GCC compiler switches when using QtCreator, QMake, and Qt?
It boils down to reading the manual. Instead of using CXXFLAGS in the .pro file, you need to use QMAKE_CXXFLAGS as in: main.cpp: #include <cinttypes> int main() { return 0; } main.pro: SOURCES += main.cpp QMAKE_CXXFLAGS += -std=c++0x
2,987,505
2,996,538
How can I modify the application file of an application that is currently running (on Linux)?
I have an application running called AppFS. This application has an ext2 filesystem just attached to the end of the file (it's positioned so that the application binary exists in a 1MB spacing area, followed by the ext2 data). Now I've got FUSE embedded in the program and I've managed to extract the filesystem out of the application data into a temporary file so that FUSE can mount / use it. The problem I have now is writing the temporary file back into the application file. I get "Text file busy" presumably because the application has locked itself and won't let writes occur. Is there a way I can force the file to become unlocked so I can write data to it? (It's important to note that I'm not changing the application binary area - just rewriting the ext2 component.) It needs to be unlocked without requiring root permissions (unlocked by the same user who started the application).
The solution to this problem was to rename the existing application file (to a location within a temporary directory) and to then move the new (generated) file back into its place and apply the same umask / uid / gid that the old one had. Once it's moved, you can safely just unlink the running executable. It's a bit of a hacky workaround (I especially don't like the fact that the application is entirely removed and replaced), but it works.
2,987,524
2,987,542
Any high-level languages that can use c libraries?
I know this question could be in vain, but it's just out of curiosity, and I'm still much a newb^^ Anyways I've been loving python for some time while learning it. My problem is obviously speed issues. I'd like to get into indie game creation, and for the short future, 2d and pygame will work. But I'd eventually like to branch into the 3d area, and python is really too slow to make anything 3d and professional. So I'm wondering if there has ever been work to create a high-level language able to import and use c libraries? I've looked at Genie and it seems to be able to use certain libraries, but I'm not sure to what extent. Will I be able to use it for openGL programing, or in a c game engine? I do know some lisp and enjoy it a lot, but there aren't a great many libraries out there for it. Which leads to the problem: I can't stand C syntax, but C has libraries galore that I could need! And game engines like irrlicht. Is there any language that can be used in place of C around C? Thanks so much guys
Python can call functions in dynamically loaded C libraries (.so in unix, .dll in Windows) using the ctypes module. There is also cython - a variation of python that compiles to C and can call C libraries directly. You can mix modules written in pure Python and cython. You may also want to look at the numerous 3D game engines either written specifically for Python or with a python interface. The ones I have heard the most about (but not used) are Blender and Python-Ogre.
2,987,741
2,987,748
What is the difference between bit shifting and arithmetical operations?
int aNumber; aNumber = aValue / 2; aNumber = aValue >> 1; aNumber = aValue * 2; aNumber = aValue << 1; aNumber = aValue / 4; aNumber = aValue >> 2; aNumber = aValue * 8; aNumber = aValue << 3; // etc. What is the "best" way to do these operations? When is it better to use bit shifting?
The two are functionally equivalent in the examples you gave (except for the final one, which ought to read aValue * 8 == aValue << 3), if you are using positive integers. This is only the case when multiplying or dividing by powers of 2. Bit shifting is never slower than arithmetic. Depending on your compiler, the arithmetic version may be compiled down to the bit-shifting version, in which case they will both be equally efficient. Otherwise, bit-shifting should be significantly faster than arithmetic. The arithmetic version is often more readable, however. Consequently, I use the arithmetic version in almost all cases, and only use bit shifting if profiling reveals that the statement is in a bottleneck: Programs should be written for people to read, and only incidentally for machines to execute.
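The equivalence for non-negative values is easy to check directly; this is a small illustrative sketch (the helper names are made up):

```cpp
#include <cassert>

// For non-negative integers, shifting left by n multiplies by 2^n,
// and shifting right by n divides by 2^n (truncating, like integer /).
inline int times_eight(int v)  { return v << 3; }  // same as v * 8
inline int quarter(unsigned v) { return v >> 2; }  // same as v / 4
```

For negative signed values the two are not interchangeable (right shift and integer division round differently), which is why the answer restricts the claim to positive integers.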
2,987,956
2,987,974
Copy constructor demo (crashing... case 2)
Please have a glance at this program: class CopyCon { public: char *name; CopyCon() { name = new char[20]; name = "Hai";//_tcscpy(name,"Hai"); } CopyCon(const CopyCon &objCopyCon) { name = new char[_tcslen(objCopyCon.name)+1]; _tcscpy(name,objCopyCon.name); } ~CopyCon() { if( name != NULL ) { delete[] name; name = NULL; } } }; int main() { CopyCon obj1; CopyCon obj2(obj1); cout<<obj1.name<<endl; cout<<obj2.name<<endl; } This program crashes on execution. Error: "Expression: _BLOCK_TYPE_IS_VALID(pHead->nBlockUse)" If I assign "Hai" to name using the assignment operator, it's crashing. Whereas when I use the string function _tcscpy to assign "Hai" to name, it's working perfectly. Can someone explain why?
In the default constructor CopyCon() { name = new char[20]; name = "Hai";//_tcscpy(name,"Hai"); } you assign the address of a string literal to the pointer and in the destructor you call delete[] on it, that's undefined behavior. delete[] should only be called on addresses returned by new[]. When you instead use _tcscpy() you copy the literal content to the buffer allocated by new[] and then the destructor runs fine.
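A corrected version of the class looks like this; it is a sketch using the portable strcpy/strlen in place of _tcscpy/_tcslen, and the key change is that the default constructor copies INTO the new[]'d buffer instead of reassigning the pointer to a literal:

```cpp
#include <cstring>

class CopyCon {
public:
    char* name;

    CopyCon() {
        name = new char[20];
        std::strcpy(name, "Hai");  // copy the characters; don't overwrite the pointer
    }

    CopyCon(const CopyCon& other) {
        name = new char[std::strlen(other.name) + 1];
        std::strcpy(name, other.name);
    }

    ~CopyCon() {
        delete[] name;  // now always matches a new[]
    }
};
```

Note that a production-quality version would also need a copy-assignment operator (rule of three), which is omitted here for brevity.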
2,987,973
2,999,711
Problem with receiving data form serial port in c#?
Hello, I have a problem with receiving data from the serial port in C#. I am inserting a new-line character at the end of the data buffer, then I send this data buffer over the serial port. After this my C# GUI receiver takes the data via the ReadLine() function, but it always gives me raw data, not the actual data. How can I resolve this problem? //configuring the serial port (this is the C# code with the problem) serialPort.PortName = "COM1"; serialPort.BaudRate = 9600; serialPort.DataBits = 8; serialPort.Parity = Parity.None; serialPort.StopBits = StopBits.One; //opening the serial port if(!serialPort.IsOpen) serialPort.Open(); //read 2byte data for msG code from serial port string strReadData=serialPort.ReadLine(); char[] temp=new char[350]; //strReadData.CopyTo(1, temp, 0, strReadData.Length - 2); //strReadData = temp.ToString(); //string strReadData = serialPort.ReadExisting(); //strReadData.Replace(' ', '\0'); //strReadData.Replace(' ', ''); byte[] RecievedData = Encoding.ASCII.GetBytes(strReadData); RecievedDataDecoder(RecievedData); //close the port if(serialPort.IsOpen) serialPort.Close(); But my C++ receiver is working perfectly, and I don't know what the problem is. Here is the working C++ code: // variables used with the com port BOOL m_bPortReady; HANDLE m_hCom; DCB m_dcb; COMMTIMEOUTS m_CommTimeouts; BOOL bWriteRC; BOOL bReadRC; DWORD iBytesWritten; DWORD iBytesRead; DWORD dwCommEvent; DWORD dwRead; char sBuffer[128]; m_hCom = CreateFile("Com1", GENERIC_READ | GENERIC_WRITE, 0, // exclusive access NULL, // no security OPEN_EXISTING, 0, // no overlapped I/O NULL); // null template m_bPortReady = SetupComm(m_hCom, 128, 128); // set buffer sizes m_bPortReady = GetCommState(m_hCom, &m_dcb); m_dcb.BaudRate = 9600; m_dcb.ByteSize = 8; m_dcb.Parity = NOPARITY; m_dcb.StopBits = ONESTOPBIT; m_dcb.fAbortOnError = TRUE; m_bPortReady = SetCommState(m_hCom, &m_dcb); m_bPortReady = GetCommTimeouts (m_hCom, &m_CommTimeouts); m_CommTimeouts.ReadIntervalTimeout = 50; m_CommTimeouts.ReadTotalTimeoutConstant = 50;
m_CommTimeouts.ReadTotalTimeoutMultiplier = 10; m_CommTimeouts.WriteTotalTimeoutConstant = 50; m_CommTimeouts.WriteTotalTimeoutMultiplier = 10; m_bPortReady = SetCommTimeouts (m_hCom, &m_CommTimeouts); if (!SetCommMask(m_hCom, EV_RXCHAR)) { printf("Error in set comm mask"); } while(1) { if (WaitCommEvent(m_hCom, &dwCommEvent, NULL)) { if (ReadFile(m_hCom, &sBuffer, 128, &iBytesRead, NULL)) printf(""); else { printf("Error in reading"); break; } } else { printf("Error in Waiting"); break; } printf("%s",sBuffer); strcpy(sBuffer,""); } CloseHandle(m_hCom); getch(); exit(0);
Your question is a bit vague, but the ReadLine() method will return when the port receives a carriage return and line feed byte combination by default on Windows. Or \r\n, or 0x0d 0x0a if you prefer. If the 'buffer' you are sending in the first place contains several messages delimited by \r\n, then ReadLine() will only return the first one, then the port will close in the C# code you have posted. Maybe your code is in a loop, but it is not shown. Beyond that, after any data that you have received, you are converting it back to an array of bytes, determined by ASCII encoding. Are you sending ASCII in the first place? If not, it is likely that you could be missing information. Also, if all you do is convert the received string into bytes, you could just receive the data as bytes in the first place. I think you need to explain in a bit more detail exactly what is in the buffer you are sending, and what exactly you receive.
2,988,066
2,988,077
Decorator pattern in C++
Can someone give me an example of the Decorator design pattern in C++ ? I have come across the Java version of it, but found it difficult to understand the C++ version of it (from the examples I found). Thanks.
Vince Huston Design Patterns, even though its layout is poor, has C++ implementations for most design patterns in the Gang of Four book. Click for Decorator. There isn't much difference with Java, except the manual memory handling that you'd better wrap with smart pointers :)
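For a self-contained illustration, here is a minimal sketch of the pattern (the coffee example and all names are invented for this answer, not taken from any of the linked material): a component interface, one concrete component, and a decorator that wraps and forwards to another component while adding behavior.

```cpp
#include <string>

// Component interface
struct Coffee {
    virtual ~Coffee() {}
    virtual double cost() const = 0;
    virtual std::string label() const = 0;
};

// Concrete component
struct PlainCoffee : Coffee {
    double cost() const { return 1.0; }
    std::string label() const { return "coffee"; }
};

// Decorator base: holds and forwards to a wrapped Coffee (takes ownership)
struct CoffeeDecorator : Coffee {
    explicit CoffeeDecorator(Coffee* inner) : inner_(inner) {}
    ~CoffeeDecorator() { delete inner_; }
    double cost() const { return inner_->cost(); }
    std::string label() const { return inner_->label(); }
private:
    Coffee* inner_;
};

// Concrete decorator: adds its own cost on top of whatever it wraps
struct Milk : CoffeeDecorator {
    explicit Milk(Coffee* inner) : CoffeeDecorator(inner) {}
    double cost() const { return CoffeeDecorator::cost() + 0.25; }
    std::string label() const { return CoffeeDecorator::label() + " + milk"; }
};
```

Decorators can be stacked arbitrarily (new Milk(new Milk(new PlainCoffee))), and clients only ever see the Coffee interface. As the answer notes, in real code you would hold the wrapped component in a smart pointer rather than a raw owning pointer.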
2,988,168
2,988,888
Where my memory is alloced, Stack or Heap, Can I find it at Run-time?
I know that memory allocated using new gets its space on the heap, and so we need to delete it before the program ends, to avoid a memory leak. Let's look at this program... Case 1: char *MyData = new char[20]; _tcscpy(MyData,"Value"); . . . delete[] MyData; MyData = NULL; Case 2: char *MyData = new char[20]; MyData = "Value"; . . . delete[] MyData; MyData = NULL; In case 2, instead of assigning a value to the heap memory, the pointer is made to point to a string literal. Now when we do a delete it crashes, AS EXPECTED, since it is not deleting heap memory. Is there a way to know whether a pointer is pointing to the heap or the stack? With this, the programmer will not try to delete any stack memory, and he can investigate why this pointer, which was pointing to heap memory initially, was made to refer to a local literal. What happened to the heap memory in the middle? Is it being pointed to by another pointer and deleted elsewhere, and so on?
Is there a way to know where the pointer is pointing to heap or stack? You can know this only if you remember it at the point of allocation. What you do in this case is to store your pointers in smart pointer classes and store this in the class code. If you use boost::shared_ptr as an example you can do this: template<typename T> void no_delete(T* ptr) { /* do nothing here */ } class YourDataType; // defined elsewhere boost::shared_ptr<YourDataType> heap_ptr(new YourDataType()); // delete at scope end YourDataType stackData; boost::shared_ptr<YourDataType> stack_ptr(&stackData, &no_delete); // never deleted
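The same technique works with std::shared_ptr on a C++11 compiler; this is a sketch of the idea with an invented Widget type standing in for YourDataType:

```cpp
#include <memory>

struct Widget { int value; };

// Deleter that does nothing, for pointers to objects this pointer does not own.
inline void no_delete(Widget*) { /* intentionally empty */ }

// Heap object: deleted automatically when the last shared_ptr copy goes away.
inline std::shared_ptr<Widget> wrap_heap() {
    return std::shared_ptr<Widget>(new Widget());
}

// Stack (non-owned) object: the no-op deleter means it is never deleted.
inline std::shared_ptr<Widget> wrap_stack(Widget& w) {
    return std::shared_ptr<Widget>(&w, &no_delete);
}
```

Either way, the knowledge of how the object must be disposed of travels with the pointer, so the question "heap or stack?" never has to be answered at the point of deletion.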
2,988,273
2,988,305
C++ pointer to objects
In C++ do you always have to initialize a pointer to an object with the new keyword? Or can you just have this too: MyClass *myclass; myclass->DoSomething(); I thought this was a pointer allocated on the stack instead of the heap, but since objects are normally heap-allocated, I think my theory is probably faulty? Please advise.
No, you can have pointers to stack allocated objects: MyClass *myclass; MyClass c; myclass = & c; myclass->DoSomething(); This is of course common when using pointers as function parameters: void f( MyClass * p ) { p->DoSomething(); } int main() { MyClass c; f( & c ); } One way or another though, the pointer must always be initialised. Your code: MyClass *myclass; myclass->DoSomething(); leads to that dreaded condition, undefined behaviour.
2,988,347
2,988,366
Abstract class reference
Can I have a class Class Room{ ~Room(); virtual cost() =0; } Class Hotel{ map<int, Room> rooms; /* */ }; Will my hotel become abstract? Can it hold a list of concrete Room objects that are derived from Room?
The code you have written is not valid C++. If you mean: class Room{ ~Room(); virtual int cost() =0; }; then yes, the class is abstract. You cannot then create a map like this: map <int, Room> rooms; but you can like this: map <int, Room *> rooms; Then assuming you have a class SingleRoom that is derived from Room and implements cost(), you can say: rooms.insert( make_pair( 101, new SingleRoom ) ); Also, please note that abstract classes must have virtual destructors.
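Putting the answer's pieces together, a compilable sketch looks like this (DoubleRoom and the costs are invented for illustration):

```cpp
#include <map>

struct Room {
    virtual ~Room() {}            // virtual destructor, as the answer notes
    virtual int cost() const = 0; // pure virtual: Room is abstract
};

struct SingleRoom : Room {
    int cost() const { return 80; }
};

struct DoubleRoom : Room {
    int cost() const { return 120; }
};

class Hotel {
public:
    ~Hotel() {
        // The hotel owns its rooms, so it must delete them.
        for (std::map<int, Room*>::iterator it = rooms_.begin(); it != rooms_.end(); ++it)
            delete it->second;
    }
    void add(int number, Room* room) { rooms_[number] = room; }
    int costOf(int number) const {
        // Assumes the room number exists; real code would check find() first.
        return rooms_.find(number)->second->cost();
    }
private:
    std::map<int, Room*> rooms_;  // pointers, so any Room-derived type fits
};
```

Hotel itself is not abstract: holding pointers (or references) to an abstract type does not make the holder abstract.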
2,988,467
2,990,195
How to know coordinates in a real image from a scaled image
First of all thanks for your time reading my question :-) I have an original image (w': 2124, h': 3204) and the same image scaled (w: 512, h: 768). The ratio for width is 4.14 (rw) and the ratio for height is 4.17 (rh). I'm trying to find the coordinates (x', y') in the original image when I receive the coordinates (x, y) in the scaled image. I'm using the formula: x' = x * rw and y' = y * rh. But when I paint a line or a rectangle, a shift always appears that increases as x or y gets higher. Does anybody know how I can transform coordinates without losing accuracy? Thanks in advance! Oscar.
Or you can use QTransform::quadToQuad to create a transform and use it to map points, rects, lines, etc.: QVector<QPointF> p1; p1 << scaledRect.topLeft() << scaledRect.topRight() << scaledRect.bottomRight() << scaledRect.bottomLeft(); QVector<QPointF> p2; p2 << originalRect.topLeft() << originalRect.topRight() << originalRect.bottomRight() << originalRect.bottomLeft(); QTransform::quadToQuad(p1, p2, mappingTransform); ... QPointF originalPoint = mappingTransform.map(scalePoint);
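As an alternative without Qt: the growing shift in the question comes from the rounded ratios (4.14 and 4.17); a sketch that avoids the problem is to keep the exact sizes and multiply before dividing in integer arithmetic, so no rounding error accumulates as x or y grows:

```cpp
// Map a coordinate from the scaled image back to the original image.
// Multiplying first and dividing last keeps the computation exact
// up to the final truncation (e.g. 2124/512 instead of the rounded 4.14).
inline int to_original(int scaled_coord, int original_size, int scaled_size) {
    return scaled_coord * original_size / scaled_size;
}
```

For sizes this large the intermediate product still fits easily in an int; for very large images a wider integer type would be the safe choice.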
2,988,628
2,988,652
Inheritance vs specific types in Financial Modelling for cashflows
I have to program some financial applications where I have to represent a schedule of flows. The flows can be of 3 types: fee flow (just a lump payment at some date) floating rate flow (the flow is dependent on an interest rate to be determined at a later date) fixed rate flow (the flow is dependent on an interest rate determined when the deal is done) I need to keep all the information and I need to represent a schedule of these flows. Originally I wanted to use inheritance and create three classes FeeFlow, FloatingFlow, FixedFlow, all inheriting from ICashFlow, and implement some method GetFlowType() returning an enum; then I could dynamic_cast the object to the correct type. That would allow me to have only one vector<ICashFlow> to represent my schedule. What do you think of this design? Should I rather use three vectors vector<FeeFlow>, vector<FloatingFlow> and vector<FixedFlow> to avoid the dynamic casts?
Why do you actually need the dynamic casts? Make your flow subclasses implement the same interface polymorphically, then there is no need to cast anything. If they need very different inputs, you could try passing the different inputs as constructor parameters, thus clearing up the common interface. However, if you really can't define a common interface for them, maybe they are better implemented as independent classes.
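A sketch of the polymorphic design the answer suggests (the amount() method and the simplistic rate handling are purely illustrative; a real cash-flow class would also carry dates, day-count conventions, and so on):

```cpp
#include <cstddef>
#include <vector>

// Common interface: each flow type computes its own amount,
// so one container of base-class pointers suffices -- no dynamic_cast.
struct ICashFlow {
    virtual ~ICashFlow() {}
    virtual double amount() const = 0;
};

struct FeeFlow : ICashFlow {
    explicit FeeFlow(double fee) : fee_(fee) {}
    double amount() const { return fee_; }
private:
    double fee_;
};

struct FixedFlow : ICashFlow {
    FixedFlow(double notional, double rate) : notional_(notional), rate_(rate) {}
    double amount() const { return notional_ * rate_; }
private:
    double notional_, rate_;
};

// The whole schedule is processed through the interface alone.
inline double total(const std::vector<ICashFlow*>& schedule) {
    double sum = 0.0;
    for (std::size_t i = 0; i < schedule.size(); ++i)
        sum += schedule[i]->amount();
    return sum;
}
```

A FloatingFlow would fit the same interface, deferring its rate lookup until amount() is called. Note also that a vector of ICashFlow by value would not work (the type is abstract, and even with a concrete base it would slice); the schedule has to hold pointers, preferably smart ones.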
2,988,752
2,988,910
C++ finding pointer errors
I am currently on a project which has been buggy for a long while. Now I suspect, among other things, that there are pointer errors in the code. The program is written in native C++ using COM and uses out-of-process COM servers. Can anybody give me some tips as to how one would go about finding these errors? Are there specific things to keep in mind, ways to do it, etc...
I have a couple of suggestions here: How did you figure out there was a pointer error? Why don't you put it in the debugger and then see where the program crashes? That'd be a start. Put a watch on the offending variable(s). What makes you think something is messed up in the COM part of the code? On the contrary, I imagine that there are 2 processes running here (may or may not be on same machine) and you should run these in parallel and then see how they respond to each other -- I am mostly sure something would be wrong in the handling of data in application code. COM is a fairly mature technology. Perhaps you are making incorrect API calls, or as is the usual case with all things Microsoft there are 1000s of arguments, may be you need to tweak there. Hope this helps. You can also try things like remote debugging and all, or instrument the code with Vtune or something but lets keep things simple for a start. Arpan
2,988,964
2,988,982
C++ reference variable clarification
I think most would be surprised about the topic again. However, I am referring to a book "C++ Common Knowledge: Essential Intermediate Programming" written by "Stephen C. Dewhurst". In the book, he quotes a particular sentence (in the section under Item 5. References Are Aliases, Not Pointers), which is as below: A reference is an alias for an object that already exists prior to the initialization of the reference. Once a reference is initialized to refer to a particular object, it cannot later be made to refer to a different object; a reference is bound to its initializer for its whole lifetime Can anyone please explain the context of "cannot later be made to refer to a different object"? The code below works for me: #include <iostream> using namespace std; int main(int argc, char *argv[]) { int i = 100; int& ref = i; cout<<ref<<endl; int k = 2000; ref = k; cout<<ref<<endl; return 0; } Here I am referring the variable ref to both the i and k variables. And the code works perfectly fine. Am I missing something? I have used SUSE 10 64-bit Linux for testing my sample program. Thanks for your input in advance.
You aren't binding ref to another object: when you do ref = k, it is as though you did i = k, which simply reassigned the value of i. In other words, you're calling i's operator=. Here's something that better demonstrates the idea: #include <iostream> int main() { std::ostream& stream = std::cout; stream = std::cerr; // fails to compile! Cannot access ostream::operator= return 0; }
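The same point can be checked with addresses rather than streams; this small sketch (function name invented) shows that the reference stays bound to i even after the assignment:

```cpp
inline bool reference_stays_bound() {
    int i = 100;
    int& ref = i;
    int k = 2000;
    ref = k;            // copies k's VALUE into i; does not rebind ref
    return &ref == &i   // ref still aliases i...
        && i == 2000    // ...and i's value was overwritten by the assignment
        && &ref != &k;  // ref never pointed at k
}
```

So the program in the question never made ref refer to k; it assigned k's value through the alias to i.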
2,989,016
2,989,024
Delete array of size 1
This is possibly a candidate for a one-line answer. I would like to know it anyway.. I am writing a simple circular buffer and for some reasons that are not important for the question I need to implement it using an array of doubles. In fact I have not investigated other ways to do it, but since an array is required anyway I have not spent much time on looking for alternatives. template<typename T> class CircularBuffer { public: CircularBuffer(unsigned int size); ~CircularBuffer(); void Resize(unsigned int new_size); ... private: T* buffer; unsigned int buffer_size; }; Since I need to have the buffer dynamically sized, the buffer_size is neither const nor a template parameter. Now the question: During construction and in function Resize(int) I only require the size to be at least one, although a buffer of size one is effectively no longer a buffer. Of course using a simple double instead would be more appropriate but anyway. Now when deleting the internal buffer in the destructor - or in function resize for that matter - I need to delete the allocated memory. The question is, how? The first candidate is of course delete[] buffer; but then again, if I have allocated a buffer of size one, that is if the pointer was acquired with buffer = new T[1], is it still appropriate to call delete[] on the pointer or do I need to call delete buffer; (without brackets) ? Thanks, Arne
If you allocate the memory with new T[x], you always delete it with delete[], even if x ≤ 1.
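A tiny illustrative sketch of the rule (the ScopedArray name is invented; it is a minimal RAII wrapper, not a replacement for std::vector):

```cpp
// delete[] pairs with new[], whatever the element count was -- even 1.
template <typename T>
struct ScopedArray {
    explicit ScopedArray(unsigned n) : data(new T[n]) {}
    ~ScopedArray() { delete[] data; }  // correct even for n == 1
    T* data;
private:
    ScopedArray(const ScopedArray&);             // non-copyable
    ScopedArray& operator=(const ScopedArray&);
};
```

Calling plain delete on a new[]'d pointer is undefined behavior regardless of the size, so the bracket form is the only correct one here.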
2,989,125
2,989,184
Using forward declarations for built in datatypes
I understand that wherever possible we shall use forward declarations instead of includes to speed up the compilation. I have a class Person like this. #pragma once #include <string> class Person { public: Person(std::string name, int age); std::string GetName(void) const; int GetAge(void) const; private: std::string _name; int _age; }; and a class Student like this #pragma once #include <string> class Person; class Student { public: Student(std::string name, int age, int level = 0); Student(const Person& person); std::string GetName(void) const; int GetAge(void) const; int GetLevel(void) const; private: std::string _name; int _age; int _level; }; In Student.h, I have a forward declaration class Person; to use Person in my conversion constructor. Fine. But I have done #include <string> to avoid compilation error while using std::string in the code. How to use forward declaration here to avoid the compilation error? Is it possible? Thanks.
Since you use string as a concrete member: std::string _name; //^^^^^^^^^ concrete member the full definition of string is needed, so the include is required. You must #include <string>. The definition of string could be omitted if you only used a pointer or reference, e.g. std::string* _name; //^^^^^^^^^^ pointer or reference for which a forward declaration would suffice, but I still recommend you not to do so, because std::string is not a simple class type like Person or Student, but a typedef for a rather complex template: template<class charT, class traits = char_traits<charT>, class Allocator = allocator<charT> > class basic_string { ... }; typedef basic_string<char> string; If you forward-declare it wrongly (e.g. class string;), the compilation will fail when you actually use it because of the conflicting type.
2,989,173
2,989,223
runtime error when calling a C++ dll from C# .NET windows application
I had a C# .NET windows application having a C# user interface, and all the code-behind processing is done by calling a C++ dll (C++ class library project) which is added as a reference to the C# project. However recently when I formatted my computer and again tried to run my project which was backed up, in visual studio 2005 it gave the following exception: An unhandled exception of type 'System.IO.FileNotFoundException' occurred in System.Windows.Forms.dll Additional information: The specified module could not be found. (Exception from HRESULT: 0x8007007E) This exception is thrown when I put the following code (for example) in the button click event. private void button3_Click(object sender, EventArgs e) { CyclopiaDll.Class1 cc = new CyclopiaDll.Class1(); // calling dll cc.clearData(); } However the exception is actually shown to be thrown in this line even though the form gets loaded without a problem: Application.Run(new Form1()); I tried building the new project and adding the referenced dll again but I'm still getting the exception. This happened to me before also when I tried to run this project on another computer. However after my machine was formatted I can't even run the application. The only way I can think of solving this is to recreate the project from scratch as before, which is time-consuming. Is there a way to avoid this problem so that I can run this project on my computer as well as on another computer?
It sounds like you have a missing dependency i.e. another dll that your C++ dll depends on, that is not present on your machine. You can use a utility like "Dependency Walker" to load your C++ dll on a machine where you have a problem, and it will point out any missing dependencies. It is then a case of working out what this dependency is (e.g. a missing C++ runtime version) and then ensuring this is packaged with your application, either directly or via a merge module, for instance. Dependency Walker: http://www.dependencywalker.com/
2,989,200
2,989,523
C++ Iterators and inheritance
Have a quick question about what would be the best way to implement iterators in the following: Say I have a templated base class 'List' and two subclasses "ListImpl1" and "ListImpl2". The basic requirement of the base class is to be iterable i.e. I can do: for(List<T>::iterator it = list->begin(); it != list->end(); it++){ ... } I also want to allow iterator addition e.g.: for(List<T>::iterator it = list->begin()+5; it != list->end(); it++){ ... } So the problem is that the implementation of the iterator for ListImpl1 will be different to that for ListImpl2. I got around this by using a wrapper ListIterator containing a pointer to a ListIteratorImpl with subclasses ListIteratorImpl1 and ListIteratorImpl2, but it's all getting pretty messy, especially when you need to implement operator+ in the ListIterator. Any thoughts on a better design to get around these issues?
If you can get away with making List<T>::iterator non-virtual, then delegating the virtualness of add to List makes things simple: template<typename T> class List { public: class iterator; private: virtual void add_assign(iterator& left, int right) const = 0; public: class iterator { const List* list; const T* item; public: iterator(const List* list, const T* item) : list(list), item(item) {} iterator& operator+=(int right) { list->add_assign(*this, right); return *this; } friend iterator operator+(iterator left, int right) { left += right; return left; } }; virtual iterator begin() const = 0; virtual iterator end() const = 0; }; Otherwise (if the iterators need to store significantly different data, for example), then you have to do the regular, boring pointer-to-implementation to get your virtualness: template<typename T> class List { protected: class ItImpl { public: virtual ~ItImpl() {} virtual ItImpl* clone() = 0; virtual void increment() = 0; virtual void add(int right) = 0; }; public: class iterator { ItImpl* impl; public: // Boring memory management stuff. iterator() : impl() {} iterator(ItImpl* impl) : impl(impl) {} iterator(iterator const& right) : impl(right.impl ? right.impl->clone() : 0) {} ~iterator() { delete impl; } iterator& operator=(iterator const& right) { ItImpl* tmp = right.impl ? right.impl->clone() : 0; delete impl; impl = tmp; return *this; } // forward operators to virtual calls through impl. iterator& operator+=(int right) { impl->add(right); return *this; } iterator& operator++() { impl->increment(); return *this; } }; }; template<typename T> typename List<T>::iterator operator+(typename List<T>::iterator const& left, int right) { typename List<T>::iterator result = left; result += right; return result; } template<typename T> class MagicList : public List<T> { class MagicItImpl : public List<T>::ItImpl { const MagicList* list; const magic* the_magic; // implement ... }; public: typename List<T>::iterator begin() const { return typename List<T>::iterator(new MagicItImpl(this, begin_magic)); } typename List<T>::iterator end() const { return typename List<T>::iterator(new MagicItImpl(this, end_magic)); } };
2,989,235
2,989,271
How to store an XML file content into a std::string or CString, at compilation time?
The XML contains symbols that the C++ may misinterpret (like the // in http://www.w3.org/2000/10/XMLSchema) I am wondering if you know about clean ways to include some hard-coded XML into a std::string or a CString, at compile time. (indeed I'm trying to encode an .xsd file for validating some xml input, because I don't want to depend upon the presence or not of this xsd file, so I want it to be compiled into the binary)
const char* XML_STRING = "<?xml version=\"1.0\"?><Test></Test>"; std::string aTestXmlStr(XML_STRING);
2,989,419
2,989,481
How can I programmatically find the IP address/netmask/gateway configured for a specific network device in Linux?
I would like to write a piece of code which checks, for each network device (e.g. eth0, lo, master devices) some statistics and configuration data about that device. I could find the statistics data (and most of the configuration data) in /sys/class/net/..., however, I couldn't find any C/C++ API or any entry in procfs/sysfs listing the inet addr, netmask and gateway. Some alternatives I checked: parsing the output from ifconfig/route/some other utilities: I don't want to start a subprocess every time I need to do the inspection. parsing /etc/sysconfig/network-scripts/: will give me only the start-up configuration, and not the current state. Also, since this code is intended for a product in my workplace, where every external library is inspected thoroughly (meaning it will take me forever to add any external library) I prefer solutions which rely on Linux native API and not external libraries. Thanks!
Sure there is: using a struct ifreq and ioctl() calls you can grab all the interface information. Man page is here Ifreq manpage /* local interface info */ typedef struct{ char *iface; struct ether_addr hwa; struct in_addr ipa; struct in_addr bcast; struct in_addr nmask; u_short mtu; } ifcfg_t; /* * Grabs local network interface information and stores in a ifcfg_t * defined in network.h, returns 0 on success -1 on failure */ int get_local_info(int rsock, ifcfg_t *ifcfg) { struct ifreq ifr; memset(&ifr, 0, sizeof(ifr)); strncpy(ifr.ifr_name, ifcfg->iface, IF_NAMESIZE); if((ioctl(rsock, SIOCGIFHWADDR, &ifr)) == -1){ perror("ioctl():"); return -1; } memcpy(&(ifcfg->hwa), &ifr.ifr_hwaddr.sa_data, 6); memset(&ifr, 0, sizeof(ifr)); strncpy(ifr.ifr_name, ifcfg->iface, IF_NAMESIZE); if((ioctl(rsock, SIOCGIFADDR, &ifr)) == -1){ perror("ioctl():"); return -1; } memcpy(&ifcfg->ipa, &(*(struct sockaddr_in *)&ifr.ifr_addr).sin_addr, 4); memset(&ifr, 0, sizeof(ifr)); strncpy(ifr.ifr_name, ifcfg->iface, IF_NAMESIZE); if((ioctl(rsock, SIOCGIFBRDADDR, &ifr)) == -1){ perror("ioctl():"); return -1; } memcpy(&ifcfg->bcast, &(*(struct sockaddr_in *)&ifr.ifr_broadaddr).sin_addr, 4); memset(&ifr, 0, sizeof(ifr)); strncpy(ifr.ifr_name, ifcfg->iface, IF_NAMESIZE); if((ioctl(rsock, SIOCGIFNETMASK, &ifr)) == -1){ perror("ioctl():"); return -1; } memcpy(&ifcfg->nmask.s_addr, &(*(struct sockaddr_in *)&ifr.ifr_netmask).sin_addr, 4); memset(&ifr, 0, sizeof(ifr)); strncpy(ifr.ifr_name, ifcfg->iface, IF_NAMESIZE); if((ioctl(rsock, SIOCGIFMTU, &ifr)) == -1){ perror("ioctl():"); return -1; } ifcfg->mtu = ifr.ifr_mtu; return 0; } Quick edit: this function requires that the interface name has been assigned before it is called, like so: strcpy(if_cfg->iface, iface); (ensure you have allocated the memory for it first), then call it like so: if((get_local_info(sock, if_cfg)) != 0){ printf("Unable to get network device info\n"); return -1; }
2,989,537
2,990,013
Deriving from streambuf without rewriting a corresponding stream
Some days ago, I decided that it would be fun to write a streambuf subclass that would use mmap and read-ahead. I looked at how my STL (SGI) implemented filebuf and realized that basic_filebuf contains a FILE*. So inheriting from basic_filebuf is out of the question. So I inherited from basic_streambuf. Then I wanted to bind my mmapbuf to a fstream. I thought the only thing that I would have to do would be to copy the implicit interface of filebuf... but that was a clear mistake. In the SGI, basic_fstream owns a basic_filebuf. No matter if I call basic_filestream.std::ios::rdbuf( streambuf* ), the filestream completely ignores it and uses its own filebuf. So now I'm a bit confused... sure, I can create my own mmfstream, that would be the exact copy/paste of the fstream but that sounds really not DRY-oriented. What I can't understand is: why is fstream so tightly coupled with filebuf, so that it is not possible to use anything other than a filebuf? The whole point of separating streams and bufs is that one can use a stream with a different buffer. Solutions: => filestream should rely on the implicit interface of filebuf. That is, fstream should be templated by a streambuf class. That would allow everyone to provide its own streambuf subclass to a fstream as long as it implements filebuf's implicit interface. Problem: we cannot add a template parameter to fstream since it would break template selectors while using fstream as template template parameter. => filebuf should be a pure virtual class without any additional attributes. So that one can inherit from it without carrying all its FILE* garbage. Your ideas on the subject?
Check out mapped_file in the Boost.Iostreams library. I've never used it myself, but it seems like it might already do what you need. EDIT: Oops, reread your question and I see you're doing this for fun. Perhaps you can draw inspiration from Boost.Iostreams?
2,989,713
2,993,774
Get drive type with SetupDiGetDeviceRegistryProperty
I would like to know whether I can get the drive information using the SP_DEVICE_INTERFACE_DETAIL_DATA's DevicePath. My device path looks like below "\?\usb#vid_04f2&pid_0111#5&39fe81e&0&2#{a5dcbf10-6530-11d2-901f-00c04fb951ed}" Also, in the winapi they say "To determine whether a drive is a USB-type drive, call SetupDiGetDeviceRegistryProperty and specify the SPDRP_REMOVAL_POLICY property." I too use SetupDiGetDeviceRegistryProperty like below while ( !SetupDiGetDeviceRegistryProperty( hDevInfo,&DeviceInfoData, SPDRP_REMOVAL_POLICY,&DataT,( PBYTE )buffer,buffersize,&buffersize )) but I don't know how I can get the drive type using the above. Please help me out
Probably what you are looking for will be found here: http://support.microsoft.com/kb/264203/en. Another link, http://support.microsoft.com/kb/305184/en, can also be interesting for you. UPDATED: The example from http://support.microsoft.com/kb/264203/en shows you how to determine whether a USB drive is removable. You can also use SetupDiGetDeviceRegistryProperty with SPDRP_REMOVAL_POLICY on the device instance (use SetupDiEnumDeviceInfo, SetupDiGetDeviceInstanceId and then SetupDiGetDeviceRegistryProperty). If the returned DWORD has CM_REMOVAL_POLICY_EXPECT_SURPRISE_REMOVAL or CM_REMOVAL_POLICY_EXPECT_ORDERLY_REMOVAL as its value, the drive is removable. Moreover, the code example shows how to open a device handle which you can use with the DeviceIoControl function to retrieve a lot of the useful information you may need. IOCTL_STORAGE_QUERY_PROPERTY (see http://msdn.microsoft.com/en-us/library/ff566997%28v=VS.85%29.aspx) with different QueryType and PropertyId values is only one example. You can use IOCTL_STORAGE_GET_DEVICE_NUMBER, for example, to receive storage volumes and their disk numbers. Once you have the full STORAGE_DEVICE_NUMBER information about your USB device you can find all the other information about it in different ways. One of the easiest is: just enumerate all drive letters with QueryDosDevice and query STORAGE_DEVICE_NUMBER for every drive. If you find a full match in STORAGE_DEVICE_NUMBER you have found the drive letter.
2,989,810
2,990,581
Which Cross Platform Preprocessor Defines? (__WIN32__ or __WIN32 or WIN32 )?
I often see __WIN32, WIN32 or __WIN32__. I assume that this depends on the used preprocessor (either one from visual studio, or gcc etc). Do I now have to check first for os and then for the used compiler? We are using here G++ 4.4.x, Visual Studio 2008 and Xcode (which I assume is a gcc again) and ATM we are using just __WIN32__, __APPLE__ and __LINUX__.
It depends what you are trying to do. You can check the compiler if your program wants to make use of some compiler-specific functions (from the gcc toolchain for example). You can check for the operating system ( _WINDOWS, __unix__ ) if you want to use some OS-specific functions (regardless of compiler - for example CreateProcess on Windows and fork on unix). Macros for Visual C Macros for gcc You must check the documentation of each compiler in order to be able to detect the differences when compiling. I remember that the gnu toolchain (gcc) has some functions in the C library (libc) that are not in other toolchains (like Visual C for example). If you want to use those functions for convenience then you must detect that you are using GCC, so the code you must use would be the following: #ifdef __GNUC__ // do my gcc specific stuff #else // ... handle this for other compilers #endif
2,990,051
2,992,149
Qt and finding partial matches in a QList
I have a struct viz: struct NameKey { std::string fullName; std::string probeName; std::string format; std::string source; } which are held in a QList: QList<NameKey> keyList; what I need to do is find an occurence in keyList of a partial match where the search is for a NameKey that only has two members filled. All the keyList entries are full NameKey's. My current implementation is , well, boring in the extreme with too many if's and conditions. So, If I have a DataKey with a fullName and a format I need to find all the occurences in keyList which match. Any useful Qt/boost things available?
QList is compatible with STL, so you can use it with an STL algorithm. For a partial match, treat an empty member as a wildcard and require each filled member to match exactly: struct NameKeyMatch { NameKeyMatch(const std::string & s1, const std::string & s2, const std::string & s3, const std::string & s4) : fullName(s1), probeName(s2), format(s3), source(s4) {} bool operator()(const NameKey & x) const { return (fullName.empty() || x.fullName == fullName) && (probeName.empty() || x.probeName == probeName) && (format.empty() || x.format == format) && (source.empty() || x.source == source); } std::string fullName; std::string probeName; std::string format; std::string source; }; QList<NameKey>::iterator i = std::find_if(keyList.begin(), keyList.end(), NameKeyMatch("Full Name", "", "Format", "")); I don't know if Qt will actively maintain STL compatibility though.
2,990,055
2,990,108
does such a c++ tool exist? [I dont know what to call it]
I am going through the process of trying to figure out how a library of code works. I would like some sort of tool that would analyze the program that I run off of the library and tells me what functions are called in what order by each thread. Does such a tool exist? What google terms would I use to find such a program? Note: Using VS2008/Win7/C++
I think a call graph may help you. Most profilers can generate a call graph after profiling. Profiling can also help you identified what code is being used most often. Another possibility is using a tool to generate sequence diagrams. This won't show you specifically what happened during runtime, but it will give a clear idea what the code is doing. Regards Dirk
2,990,060
3,002,600
Qt - QPushButton text formatting
I have a QPushButton and on that I have a text and and icon. I want to make the text on the button to be bold and red. Looked at other forums, googled and lost my hope. Seems there is no way to do that if the button has an icon (of course if you don't create a new icon which is text+former icon). Is that the only way? Anyone has a better idea?
You really don't need to subclass to change the formatting of your button, rather use stylesheets e.g. QPushButton { font-size: 18pt; font-weight: bold; color: #ff0000; } Applying this to the button that you want to change will make the buttons text 18pt, bold and red. You can apply via widget->setStyleSheet() Applying this to a widget in the hierarchy above will style all the buttons underneath, the QT stylesheet mechanism is very flexible and fairly well documented. You can set stylesheets in the designer too, this will style the widget that you are editing immediately
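Applied in code it is a one-liner (assuming button is your QPushButton pointer; this needs a Qt build, so it is shown as a sketch only):

```cpp
// Style just this one button: 18pt bold red text; the icon is unaffected.
button->setStyleSheet(
    "QPushButton { font-size: 18pt; font-weight: bold; color: #ff0000; }");
```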
2,990,172
2,990,202
Is there a list of preprocessor defines for various operating systems (and versions)?
e.g. a mapping for Mac OS 10.6.3 aka Snow Leopard => __APPLE__ && __LP64__? Windows 7, Windows XP => __WIN32__ Linux => __LINUX__
Here you go: http://predef.sourceforge.net/
2,990,342
2,990,378
boost for each problem
std::map< std::string , std::string > matrix_int; typedef std::pair< std::string , std::string > lp_type; BOOST_FOREACH( lp_type &row, matrix_int ){ } this cannot be compiled: error C2440: 'initializing' : cannot convert from 'std::pair<_Ty1,_Ty2>' to 'lp_type &' when I have ',' in the element type, the boost doc says I can use a typedef or predefine a var; but what should I do when I want to get a reference?
Your typedef is incorrect; it needs to be: typedef std::pair< const std::string , std::string > lp_type; ^ note the added const The key element in the map pair is const-qualified. It would be a bit cleaner to use the value_type typedef; this way you don't repeat the type information: typedef std::map<std::string, std::string> map_t; map_t matrix_int; BOOST_FOREACH(map_t::value_type& row, matrix_int){ }
2,990,717
2,991,005
C++'s unordered_map / hash_map / Google's dense_hash - how to input binary data (buf+len) and insert operation
I have two questions about Google's dense_hash_map, which can be used instead of the more standard unordered_map or hash_map: How do I use an arbitrary binary data memory segment as a key: I want a buffer+length pair, which may still contain some NUL (\0) characters. I can see how I use a NUL-terminated char * string , but that's not what I want. How do I implement an operation where I look if a key exists, and if not - insert it and if it does return the pointer to the existing key and let me know what actually happened. I'd appreciate it if anyone can shed any light on this subject. Regards, -- Shlomi Fish
I would disagree with Neil. I would use insert for number 2. Using find then insert causes two look-ups, while insert alone causes only one look-up and does not overwrite the value if the element is already present. insert returns an iterator to the corresponding key/value pair (even if nothing was updated) plus a boolean which indicates whether or not the insertion took place: std::pair<iterator, bool> result = map.insert(std::make_pair(key(), value())); result.first->second; // accesses the value at key: `key()`