question_id | answer_id | title | question | answer |
|---|---|---|---|---|
1,603,251 | 1,603,282 | In C++, how can a class take a const std::string& parameter in the constructor but also handle NULL? | I'm trying to work through ways to create a class with a std::string argument, but which also handles NULL without throwing an exception. Here's an example of the code:
class myString {
public:
myString(const std::string& str) : _str(str) {}
std::string _str;
};
int main() {
myString mystr(NULL);
printf("mystr = <%s>\n", mystr._str.c_str());
return 0;
}
Intuitively, you'd think that this program should print "mystr = <>" and exit successfully, but instead with g++ it gives this error:
terminate called after throwing an instance of 'std::logic_error'
what(): basic_string::_S_construct NULL not valid
How can I change this class so that it translates NULL into "" instead of throwing a logic_error exception?
(Postscript: As you'd suspect, the "real" implementation is a bit more complicated than this example, having several arguments in the constructor and doing more interesting things with _str -- and properly keeping it private. The motivation behind this example is that programmers on our team tend to use NULL in this context for historical reasons, and the compiler won't catch it. Retraining everyone to always use "" is relatively hard; a clever code-level solution would be easier.)
| What actually happens here is that NULL is interpreted as a char const*, which the compiler then tries to convert to std::string (you can convert C-strings of type char* into STL strings, right?).
To handle it, you may add another constructor
class myString {
public:
myString(const std::string& str) : _str(str) {}
myString(const char* str) : _str(str==NULL?"":std::string(str))
{ /* and do something special in the body */ }
std::string _str;
};
But if you need nothing special there, the better way would be
class myString {
public:
myString(const std::string& str = "") : _str(str) {}
std::string _str;
};
int main() {
myString mystr; //no () here!
printf("mystr = <%s>\n", mystr._str.c_str());
return 0;
}
|
1,603,300 | 1,603,613 | Xcode 3.2.1 and C++ string fails! | In Xcode 3.2.1 on Mac OS X Snow Leopard, I open a project under:
Command Line Tool of type C++ stdc++.
I have the following simple code:
#include <iostream>
#include <string>
using namespace std;
int main(){
string myvar;
cout << "Enter something: " << endl;
cin >> myvar;
cout << endl << myvar << endl;
return 0;
}
The program compiles fine, and prompts me to "Enter Something". When I type in something, and then press enter, I get the following error:
myproject(766) malloc: *** error for object 0x1000041c0: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
Program received signal: “SIGABRT”.
sharedlibrary apply-load-rules all
(gdb)
When compiling on an earlier version of Xcode (3.1.2) on a different computer (opened the project with the 'command line utility' option, which does not exist in 3.2.1), the code runs with NO PROBLEM.
Does anybody know what's going on?
Thanks,
Yuval
| As far as I can tell, I'm not experiencing this issue in Release mode for x86_64. But I am seeing the issue in Debug x86_64. If I follow the directions given by Howard in this post, I'm able to get it running in debug mode:
Project -> Edit Active Target ...
Click Build tab
Search for "preprocessor"
Delete _GLIBCXX_DEBUG=1 _GLIBCXX_DEBUG_PEDANTIC=1
Build and run, you'll notice it works. Another interesting observation is that using __gnu_debug::string (from the <debug/string> header) alone does not trigger the error.
EDIT: from the horse's mouth (known issues in Xcode 3.2.1)
The default gcc 4.2 compiler is not compatible with the Standard C++ Library Debug Mode. C++ programs compiled with Xcode 3.2 may not work in the Debug configuration. To fix this, set the Compiler Version to 4.0, or edit the Debug configuration’s Preprocessor Macros and remove the entries:
_GLIBCXX_DEBUG=1 _GLIBCXX_DEBUG_PEDANTIC=1
You can do this for all projects by navigating to /Developer/Library/Xcode/Project Templates/Application/Command Line Tool/C++ Tool/C++Tool.xcodeproj/ and editing project.pbxproj and deleting the lines around line 138:
"_GLIBCXX_DEBUG=1",
"_GLIBCXX_DEBUG_PEDANTIC=1",
|
1,603,401 | 1,606,056 | CUBLAS memory allocation error | I tried to allocate 17,338,896 floating-point elements as follows (which is roughly 70 MB):
state = cublasAlloc(theSim->Ndim*theSim->Ndim,
sizeof(*(theSim->K0)),
(void**)&K0cuda);
if(state != CUBLAS_STATUS_SUCCESS) {
printf("Error allocating video memory.\n");
return -1;
}
However, I'm receiving the error CUBLAS_STATUS_ALLOC_FAILED in the variable state. Would this have anything to do with the amount of video card memory available on the machine (128 MB on mine), or is there a limit on the amount of memory I can allocate using the cublasAlloc() function (i.e. unrelated to the amount of memory available on the machine)? I tried using the cudaMalloc() function and ran into the same problem. Thanks in advance for looking into this.
--------------Addition of Error Reproduction-------------------------------------
#include <cuda.h>
#include <cublas.h>
#include <stdio.h>
int main (int argc, char *argv[]) {
// CUDA setup
cublasStatus state;
if(cublasInit() == CUBLAS_STATUS_NOT_INITIALIZED) {
printf("CUBLAS init error.\n");
return -1;
}
// Instantiate video memory pointers
float *K0cuda;
// Allocate video memory needed
state = cublasAlloc(20000000,
sizeof(float),
(void**)&K0cuda);
if(state != CUBLAS_STATUS_SUCCESS) {
printf("Error allocating video memory.\n");
return -1;
}
// Copy K0 from CPU memory to GPU memory
// Note: before so, decide whether to integrate as a part of InsertionSim or
// CUDA content as a separate class
//state = cublasSetMatrix(theSim->Ndim, theSim->Ndim, sizeof(*theSim->K0),
// theSim->K0, theSim->Ndim, K0cuda, theSim->Ndim);
//if(state != CUBLAS_STATUS_SUCCESS) {
// printf("Error copy to video memory.\n");
// return -1;
//}
// Free memory
if(cublasFree(K0cuda) != CUBLAS_STATUS_SUCCESS) {
printf("Error freeing video memory.\n");
return -1;
}
// CUDA shutdown
if(cublasShutdown() != CUBLAS_STATUS_SUCCESS) {
printf("CUBLAS shutdown error.\n");
return -1;
}
// (theSim cleanup from the full program omitted; theSim is not defined in this repro)
return 0;
}
| Memory can fragment, which means that you can still allocate multiple smaller blocks but not a single large block. Your video card will obviously need some memory for its normal 2D tasks. If that happens to split the 128 MB into two blocks of almost 64 MB each, then you'd see this kind of failure.
|
1,603,834 | 1,604,031 | DllImport a c++ DLL in to a C# app, BYTE * p | I have a exported function in a c++ DLL
// C++ DLL (Blarggg.dll)
extern "C"
{
USHORT ReadProperty( BYTE * message, USHORT length, BYTE * invokeID )
{
if( invokeID != NULL ) {
* invokeID = 10 ;
}
return 0;
}
}
That I would like to make it available to my C# application
// C# app
[DllImport("Blarggg.dll")]
public static extern System.UInt16 ReadProperty(
/* [OUT] */ System.Byte[] message,
/* [IN] */ System.UInt16 length,
/* [OUT] */ System.Byte[] invokeID );
private void DoIt()
{
System.Byte[] message = new System.Byte[2000];
System.Byte[] InvokeID = new System.Byte[1];
System.UInt16 ret = ReadProperty( message, 2000, InvokeID ); // Error
}
The problem is that I keep getting the following error message.
An unhandled exception of type 'System.NullReferenceException' occurred in Blarggg.dll
Additional information: Object reference not set to an instance of an object.
I'm using VS2008 to build both the DLL and the C# application.
I'm not a C# programmer.
What am I doing wrong?
| I pasted your code directly into VS2008 and it runs perfectly on my 32-bit machine (added a .def file to set the exported name).
Is your C++ library definitely a pure win32 project? The error message you gave seems to imply that it threw a CLR exception.
|
1,604,132 | 1,604,745 | How to find location of executable on Linux when normal methods fail? | In another question, the answer states that on Unixes with /proc, the really straight and reliable way is to readlink("/proc/self/exe", buf, bufsize) and it then proceeds to give backup solutions as follows:
On Unixes without /proc (i.e. if above fails):
If argv[0] starts with "/" (absolute path) this is the path.
Otherwise if argv[0] contains "/" (relative path) append it to cwd (assuming it hasn't been changed yet).
getcwd(buf, bufsize); strncat(buf, "/", bufsize-strlen(buf)-1);
strncat(buf, argv[0], bufsize-strlen(buf)-1);
Otherwise search directories in $PATH for executable argv[0].
Afterward it may be reasonable to check whether the executable isn't actually a symlink. If it is resolve it relative to the symlink directory.
Now in my case, unfortunately, none of the above works:
/proc/self/exe exists but fail to readlink() due to permission denied errno 13.
The argv[0] has no / for absolute or relative path.
The $PATH does not contain the executable found in argv[0].
It appears this issue is faced also when sgid applications run. In my case, it is not sgid, but an inetd launch.
| Try looking in /proc from a suid binary.
|
1,604,176 | 1,604,216 | Size of virtual pointer-C++ | What is the size of virtual pointer(VPTR) for a virtual table in C++? Also this is not a homework question...just a question that came to my mind while I was reading a C++ book.
| An excellent article related to this topic is Member Function Pointers and the Fastest Possible C++ Delegates. This article delves deeply into the implementation of member function pointers for many different compilers. This article talks about all the nuances of vtable pointers particularly in light of multiple (and virtual) inheritance.
|
1,604,196 | 1,604,301 | STL custom allocators to manage different memory spaces | I would like to use different instances of an STL custom allocator class to manage different memory spaces, and then be able to specify an allocator instance to an STL container such that each container only draws from its assigned memory space. But I don't see how I can do that. I see how I can pass an allocator type into the template parameters of an STL container, but I want something akin to passing an allocator instance into the constructor of an STL container. Is there a way to do this in STL?
| Unfortunately STL allocators cannot have state (or at least have to be very careful how that state is used) - each instance of a particular allocator type must be equivalent for STL containers to work effectively with them. I don't recall the details right now, but I know that Scott Meyers discusses this problem at length in "Effective STL", Item 10: Be aware of allocator conventions and restrictions.
However, you can have templated allocators that are very similar with the differences between the allocators being encapsulated in the allocator type and use different 'instantiations' of the allocator template (each template 'instantiation' is a different type). Again, my recollection is that Meyers discusses this pretty clearly.
For example see this paragraph from an article by Anthony Aue, "Improving Performance with Custom Pool Allocators for STL":
A potentially more serious caveat is that, since the allocator uses nonstatic data, it's not technically Standard compliant because the Standard requires that allocators of the same type be equivalent. See Effective STL (Item 10) for a thorough explanation of the issue. This amounts to requiring that an allocator for a given type be able to deallocate memory allocated by any other instance of an allocator for that type. For many uses of standard containers, this requirement is unnecessary (some might say Draconian). However, there are two cases where this requirement is absolutely necessary: list::splice and swap(). The case of swap() is especially serious because it is needed in order to implement certain operations on containers in an exception-safe manner (see Exceptional C++, Item 12). Technically, swap could be (and in some cases, is) implemented in the face of allocators that don't compare equally—items could be copied or the allocators could be swapped along with the data—but this is not always the case. For this reason, if you're using swap() or list::splice, you should make sure to use HoldingPolicySingleton; otherwise, you're bound to run into some really nasty behavior.
See also Stephan T. Lavavej's discussion in this newsgroup thread.
I'll update later tonight if someone else doesn't give the details in the meantime.
|
1,604,268 | 1,604,294 | How can I clear a SDL_Surface to be replaced with another one? | Been trying to find this online for a while now.
I have an SDL_Surface with some content (in one it's text, in another it's part of a sprite). Inside the game loop I get the data onto the screen fine. But then it loops again and doesn't replace the old data, it just draws over it. So in the case of the text, it becomes a mess.
I've tried SDL_FreeSurface and it didn't work, anyone know another way?
fpsStream.str("");
fpsStream << fps.get_ticks();
fpsString = fpsStream.str();
game.fpsSurface = TTF_RenderText_Solid(game.fpsFont, fpsString.c_str(), textColor);
game.BlitSurface(0, 0, game.fpsSurface, game.screen);
| Try something like:
SDL_FillRect(screen, NULL, 0x000000);
at the beginning of your loop.
|
1,604,440 | 1,604,542 | How to set selected filter on QFileDialog? | I have a open file dialog with three filters:
QString fileName = QFileDialog::getOpenFileName(
this,
title,
directory,
tr("JPEG (*.jpg *.jpeg);; TIFF (*.tif);; All files (*.*)")
);
This displays a dialog with "JPEG" selected as the default filter. I wanted to put the filter list in alphabetical order so "All files" was first in the list. If I do this however, "All files" is the default selected filter - which I don't want.
Can I set the default selected filter for this dialog or do I have to go with the first specified filter?
I tried specifying a 5th argument (QString) to set the default selected filter but this didn't work. I think this might only be used to retrieve the filter that was set by the user.
| Like this:
QString selfilter = tr("JPEG (*.jpg *.jpeg)");
QString fileName = QFileDialog::getOpenFileName(
this,
title,
directory,
tr("All files (*.*);;JPEG (*.jpg *.jpeg);;TIFF (*.tif)" ),
&selfilter
);
The docs are a bit vague about this, so I found this out via guessing.
|
1,604,582 | 1,643,777 | Timing program runtimes in visual C++ | Is there a quick and easy way of timing a section of a program (or the entire thing) without having to setup a timer class, functions, and variables inside my program itself?
I'm specifically referring to Visual C++ (Professional 2008).
Thanks,
-Faken
Edit: none of these answers does what I ask for. I would like to be able to time a program inside Visual C++ WITHOUT having to write extra bits of code inside it, similar to how people do it with Bash on Linux.
| In the Intel and AMD CPUs there is a high speed counter. The Windows API includes function calls to read the value of this counter and also the frequency of the counter - i.e. how many times per second it is counting.
Here's an example of how to time your code in microseconds:
#include <iostream>
#include <windows.h>
int main()
{
__int64 ctr1 = 0, ctr2 = 0, freq = 0;
// Start timing the code.
if (QueryPerformanceCounter((LARGE_INTEGER *) &ctr1) != 0) {
// Do what ever you do, what ever you need to time...
//
//
//
// Finish timing the code.
QueryPerformanceCounter((LARGE_INTEGER *) &ctr2);
QueryPerformanceFrequency((LARGE_INTEGER *) &freq);
// Print the time spent in microseconds to the console.
std::cout << ((ctr2 - ctr1) * 1.0 / freq) << std::endl;
}
}
|
1,604,588 | 1,604,603 | iterate vector, remove certain items as I go | I have a std::vector m_vPaths; I will iterate this vector and call ::DeleteFile(strPath) as I go. If I successfully delete the file, I will remove it from the vector. My question is can I get around having to use two vectors? Is there different data structure that might be better suited for what I need to do?
example:
using iterators almost does what I want, but problem is once you erase using an iterator, all iterators become invalid.
std::vector<std::string>::iterator iter = m_vPaths.begin();
for( ; iter != m_vPaths.end(); iter++) {
std::string strPath = *iter;
if(::DeleteFile(strPath.c_str())) {
m_vPaths.erase(iter);
//Now my interators are invalid because I used erase,
//but I want to continue deleteing the files remaining in my vector.
}
}
I can use two vectors and I will no longer have a problem, but is there a better, more efficient method of doing what I'm trying to do?
btw, incase it is unclear, m_vPaths is declared like this (in my class):
std::vector<std::string> m_vPaths;
| Check out std::remove_if:
#include <windows.h>   // for ::DeleteFile
#include <algorithm>   // for remove_if
#include <functional>  // for unary_function
struct delete_file : public std::unary_function<const std::string&, bool>
{
bool operator()(const std::string& strPath) const
{
return ::DeleteFile(strPath.c_str());
}
};
m_vPaths.erase(std::remove_if(m_vPaths.begin(), m_vPaths.end(), delete_file()),
m_vPaths.end());
Use a std::list to stop the invalid iterators problem, though you lose random access. (And cache performance, in general)
For the record, the way you would implement your code would be:
typedef std::vector<std::string> string_vector;
typedef std::vector<std::string>::iterator string_vector_iterator;
string_vector_iterator iter = m_vPaths.begin();
while (iter != m_vPaths.end())
{
if(::DeleteFile(iter->c_str()))
{
// erase returns the new iterator
iter = m_vPaths.erase(iter);
}
else
{
++iter;
}
}
But you should use std::remove_if (reinventing the wheel is bad).
|
1,604,699 | 1,604,722 | Generators in C++ -- invalid use of nonstatic data member | I sort of understand this, at least the function of generators (I've used them in Python). I understand how the switch statement and its content is formed. However, I get these errors.
test.cpp: In constructor 'Foo::descent::descent(int)':
test.cpp:46: error: invalid use of nonstatic data member 'Foo::index_'
test.cpp: In member function 'bool Foo::descent::operator()(std::string&)':
test.cpp:50: error: invalid use of nonstatic data member 'Foo::bars_'
test.cpp:50: error: invalid use of nonstatic data member 'Foo::index_'
test.cpp:51: error: invalid use of nonstatic data member 'Foo::index_'
test.cpp:51: error: invalid use of nonstatic data member 'Foo::bars_'
test.cpp:52: error: invalid use of nonstatic data member 'Foo::index_'
Here's the code. If you have a better way of dealing with this, by all means share please.
#include <math.h>
#include <string>
#include <vector>
#include <iostream>
#ifndef __generator_h__
#define __generator_h__
// generator/continuation for C++
// author: Andrew Fedoniouk @ terrainformatica.com
// idea borrowed from: "coroutines in C" Simon Tatham,
// http://www.chiark.greenend.org.uk/~sgtatham/coroutines.html
struct _generator
{
int _line;
_generator():_line(0) {}
};
#define $generator(NAME) struct NAME : public _generator
#define $emit(T) bool operator()(T& _rv) { \
switch(_line) { case 0:;
#define $stop } _line = 0; return false; }
#define $yield(V) \
do {\
_line=__LINE__;\
_rv = (V); return true; case __LINE__:;\
} while (0)
#endif
class Foo {
int index_;
std::vector<std::string> bars_;
public:
Foo() {
index_ = 0;
bars_.push_back("Foobar");
bars_.push_back("Barfoo");
}
$generator(descent){
int j;
descent(int j) {
index_+=j;
}
$emit(std::string)
while(true) {
$yield(bars_[index_++]);
if(index_ >= bars_.size())
index_ = 0;
}
$stop;
};
//descent bar;
void InitGenerator() { index_ = 0; }
};
using namespace std;
int main()
{
//Foo::descent gen(1);
//for(int n; gen(n);) // "get next" generator invocation
// cout << n << endl;
return 0;
}
| I'm not entirely sure what you're going for here, but here's where your error is occurring:
Let's expand the macros to see how this really looks:
class Foo {
int index_;
std::vector<std::string> bars_;
public:
Foo() {
index_ = 0;
bars_.push_back("Foobar");
bars_.push_back("Barfoo");
}
struct descent: public _generator {
int j;
descent(int j) {
index_+=j;
}
bool operator()(std::string& _rv) {
switch(_line) { case 0:;
while(true) {
do {
_line=__LINE__;
_rv = (bars_[index_++]); return true; case __LINE__:;
} while (0);
if(index_ >= bars_.size())
index_ = 0;
}
} _line = 0; return false; }
};
//descent bar;
void InitGenerator() { index_ = 0; }
};
As you can see, we declare an inner structure Foo::descent. However, unlike in some other languages, inner classes in C++ do not automatically have a pointer to an instance of their outer class. You need to either add to descent a Foo * which is passed in via the descent constructor, and use that Foo * to reference index_ and bars_ - or move the necessary members right into descent.
To be honest, I don't really understand what Foo here is for at all... Everything it does seems to belong right in descent.
|
1,604,853 | 1,605,132 | Nested class' access to enclosing class' private data members | I'm having trouble implementing a nested class who's constructor is initialized with some of the enclosing class' private data members.
Example:
Header File:
class Enclosing {
//...Public members
//...Private members
int x, int y
class Inner; // Declaration for nested class
};
Impl. File:
// Stuff...
class Enclosing::Inner {
explicit Inner() : foo(x), bar(y) // foo and bar are data members of Inner
//...
};
I get an invalid use of non-static data member error. Is there something I'm missing when it comes to nested class access to its enclosing class' members?
| Members x and y are non-static data members of Enclosing, which means that they only exist within a concrete object of the Enclosing class. Without a concrete object, neither x nor y exists. Meanwhile, you are trying to refer to x and y without an object. That can't be done, which is what the compiler is trying to tell you.
If you want to initialize members Inner::foo and Inner::bar from x and y, you have to pass a concrete object of Enclosing type into Inner's constructor. For example
class Enclosing::Inner {
explicit Inner(const Enclosing& e) : foo(e.x), bar(e.y)
{}
//...
};
Extra note: in the original C++98 the inner class has no special privileges in accessing the outer class. With a C++98 compiler you'd either have to give the inner class the necessary privileges (friendship) or expose the members x and y as public. However, this situation was classified as a defect in C++98, and it was decided that inner classes should have full access to outer class members (even private ones). So, whether you have to do anything extra with regard to access privileges depends on your compiler.
|
1,604,968 | 1,604,972 | What does a colon in a struct declaration mean, such as :1, :7, :16, or :32? | What does the following C++ code mean?
unsigned char a : 1;
unsigned char b : 7;
I guess it creates two char a and b, and both of them should be one byte long, but I have no idea what the ": 1" and ": 7" part does.
| The 1 and the 7 are bit sizes to limit the range of the values. They're typically found in structures and unions. For example, on some systems (depends on char width and packing rules, etc), the code:
typedef struct {
unsigned char a : 1;
unsigned char b : 7;
} tOneAndSevenBits;
creates an 8-bit value, one bit for a and 7 bits for b.
Typically used in C to access "compressed" values such as a 4-bit nybble which might be contained in the top half of an 8-bit char:
typedef struct {
unsigned char leftFour : 4;
unsigned char rightFour : 4;
} tTwoNybbles;
For the language lawyers amongst us, section 9.6 of the C++11 standard explains this in detail, slightly paraphrased:
Bit-fields [class.bit]
A member-declarator of the form
identifieropt attribute-specifieropt : constant-expression
specifies a bit-field; its length is set off from the bit-field name by a colon. The optional attribute-specifier appertains to the entity being declared. The bit-field attribute is not part of the type of the class member.
The constant-expression shall be an integral constant expression with a value greater than or equal to zero. The value of the integral constant expression may be larger than the number of bits in the object representation of the bit-field’s type; in such cases the extra bits are used as padding bits and do not participate in the value representation of the bit-field.
Allocation of bit-fields within a class object is implementation-defined. Alignment of bit-fields is implementation-defined. Bit-fields are packed into some addressable allocation unit.
Note: bit-fields straddle allocation units on some machines and not on others. Bit-fields are assigned right-to-left on some machines, left-to-right on others. - end note
|
1,605,274 | 1,605,491 | Graph in C++ using gnuPlot | I am using the gnuplot library for printing a graph in C++ code. It's printing the graph fine, but only in blue. Does somebody know how I can change the colors?
Cheers
| Use linestyle for setting a color. Some examples are here.
|
1,605,400 | 1,605,426 | C++ std::sort with predicate function in Class | I want to sort vector of certain struct with certain order in certain class. I've wrote definition of struct and predicate function in a class and run std::sort in a method of the class with these struct and function. But compilation error has occurred. gcc version is 4.0.1 and OS is Mac OSX. The code is following:
class sample {
public:
struct s {
int x;
int y;
};
bool cmp (struct s a, struct s b) {
if (a.x == b.x)
return a.y < b.y;
else
return a.x < b.x;
}
int func(void) {
std::vector <struct s> vec;
// ...
sort(vec.begin(), vec.end(), cmp); // compilation error
// ...
return 0;
}
};
int main(void) {
sample *smp = new sample();
smp->func();
return 0;
}
The error message was huge and complex, so here are the first two lines of it.
sortSample.cpp: In member function 'int sample::func()':
sortSample.cpp:51: error: argument of type 'bool (sample::)(sample::s, sample::s)' does not match 'bool (sample::*)(sample::s, sample::s)'
...
Instead of the above approach, the code runs correctly in either of the following ways:
1) Define struct s and function cmp() outside of class sample.
2) Remove function cmp() and overload operator< in struct s.
Sample code of each approach is below.
1)
struct s {
int x;
int y;
};
bool cmp (struct s a, struct s b) {
if (a.x == b.x)
return a.y < b.y;
else
return a.x < b.x;
}
class sample {
// ...
2)
struct s {
int x;
int y;
bool operator<(const struct s & a) const {
if (x == a.x)
return y < a.y;
else
return x < a.x;
}
};
Can anyone explain the mechanism of this behavior? Why does the first approach cause a compilation error?
Thanks.
| In the first case, cmp is declared as a member function of the class sample and hence requires a this pointer to call it. Since the this pointer is not available, the compiler is complaining about it. You can make it work by declaring cmp as a static function, since static functions do not require a this pointer. In the second case, cmp is declared as a stand-alone function, which again behaves the same as a static function. In the third case (with the overloaded operator), the sort algorithm takes care of calling the function for each object in the vector, and hence it compiles.
|
1,605,409 | 1,605,419 | function definition does not declare parameters | What's wrong with TextLayoutTransition? Can function pointers not be declared virtual?
LCDWrapper.h:23: error: function definition does not declare parameters
Here's the class.
class LCDInterface {
public:
// Slots
virtual void TextSetSpecialChars() = 0;
virtual void LayoutChangeBefore() = 0;
virtual void LayoutChangeAfter() = 0;
virtual void TextSpecialCharChanged(unsigned int i) = 0;
virtual void ChangeLayout() = 0;
virtual void (*TextLayoutTransition)(Generic<LCDText> *v){}; // line 23
virtual void TransitionFinished() = 0;
};
Edit: Slightly related, and related to Qt, can function pointers be declared as slots/signals?
| No, you can't. It doesn't make sense to put virtual on a function pointer; you can't override a variable.
|
1,605,674 | 1,605,683 | Why/When to use (!!p) instead of (p != NULL) | In the following code, what is the benefit of using (!!p) instead of (p != NULL)?
AClass *p = getInstanceOfAClass();
if( !!p )
// do something
else
// do something without having valid pointer
| That's a matter of style, in fact they are equivalent. See this very similar question for discussion.
IMO comparing against null pointer is clearer.
|
1,605,743 | 1,605,817 | MFC/C++ equivalent of VB's AppActivate | AppActivate seems to be what i need, I am fairly sure there must be an c++/mfc equivalent. Is there one?
| You can try these:
SetForegroundWindow(FindWindow(NULL, "window title"));
// or
SetForegroundWindow(AfxGetMainWnd());
|
1,605,807 | 1,605,994 | boost::unordered_map maintains order of insertion? | I am looking for a container which provides std::map like interface but maintains the order in which elements are inserted. Since there will not be too many elements in the map, the lookup performance is not a big issue. Will boost::unordered_map work in this case? i.e. does it maintain the order of insertion. I am new to boost library and hence want to know what exactly meant by 'unordered' ?
| unordered_map doesn't maintain the order of insertion. Unordered in this case means that the observable order of elements (i.e. when you enumerate them) is unspecified and arbitrary. In fact, I would expect that the order of elements in an unordered_map can change during the lifetime of the map, due to rehashing when resizing the map (that's implementation dependent though)
|
1,606,299 | 1,606,344 | How does the compiler choose between template specializations featuring an array? | I just came across the std::tr1::extent template and it puzzled me. I have never dealt with array type parameters before, so I don't understand how they work. So, given this code from gcc's type_traits:
template<typename _Tp, unsigned _Uint, std::size_t _Size>
struct extent<_Tp[_Size], _Uint>
template<typename _Tp, unsigned _Uint>
struct extent<_Tp[], _Uint>
how does the compiler choose between those specializations? What type should I pass to extent to get it to choose the second one?
| extent<int[], 0>::value == 0 // second one chosen
int[] is an incomplete type, the compiler doesn't know its sizeof value. The outermost dimension may stay incomplete, because it's not important for the array to function correctly in most contexts (in particular, indexing will still work). Something like int[1][] wouldn't be a correct type anymore.
extent<int[2], 0>::value == 2 // first one chosen
Sure this can be nested:
extent<int[][2], 0>::value == 0 // second one chosen, with `_Tp` being `int[2]`
extent<int[][2], 1>::value == 2 // second one chosen again
|
1,606,400 | 1,760,819 | How to sleep or pause a PThread in C on Linux | I am developing an application in which I do multithreading. One of my worker threads displays images on a widget. Another thread plays sound. I want to stop/suspend/pause/sleep the threads on a button-click event, the same as clicking a video player's play/pause button.
I am developing my application in C++ on the Linux platform, using the pthread library for threading.
Can somebody tell me how to achieve thread pause/suspend?
| You can use a mutex, condition variable, and a shared flag variable to do this. Let's assume these are defined globally:
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
int play = 0;
You could structure your playback code like this:
for(;;) { /* Playback loop */
pthread_mutex_lock(&lock);
while(!play) { /* We're paused */
pthread_cond_wait(&cond, &lock); /* Wait for play signal */
}
pthread_mutex_unlock(&lock);
/* Continue playback */
}
Then, to play you can do this:
pthread_mutex_lock(&lock);
play = 1;
pthread_cond_signal(&cond);
pthread_mutex_unlock(&lock);
And to pause:
pthread_mutex_lock(&lock);
play = 0;
pthread_mutex_unlock(&lock);
|
1,606,407 | 1,607,985 | Index, assignment and increment in one statement behaves differently in C++ and C#. Why? | Why does this example code behave differently in C++ and C#?
[C++ Example]
int arr[2];
int index = 0;
arr[index] = ++index;
The result of which will be arr[1] = 1;
[C# Example]
int[] arr = new int[2];
int index = 0;
arr[index] = ++index;
The result of which will be arr[0] = 1;
I find this very strange. Surely there must be some rationale for both languages implementing it differently? I wonder what C++/CLI would output.
| As others have noted, the behaviour of this code is undefined in C/C++. You can get any result whatsoever.
The behaviour of your C# code is strictly defined by the C# standard.
Surely there must be some rationale for both languages to implement it differently?
Well, suppose you were designing C#, and wished to make the language easy for C++ programmers to learn. Would you choose to copy C++'s approach to this problem, namely, leave it undefined? Do you really want to make it easy for perfectly intelligent developers to accidentally write code that the compiler can just make up any meaning for that it wants?
The designers of C# do not believe that undefined behaviour of simple expressions is a good thing, and therefore we have strictly defined what expressions like this mean. We cannot possibly agree with what every C++ compiler does because different C++ compilers give you different results for this sort of code, and so we cannot agree with all of them.
As for why the designers of C++ believe that it is better to leave simple expressions like this to have undefined behaviour, well, you'll have to ask one of them. I could certainly make some conjectures, but those would just be educated guesses.
I've written a number of blog articles about this sort of issue; my most recent one was about almost exactly the code you mention here. Some articles you might want to read:
How the design of C# encourages elimination of subtle bugs:
http://blogs.msdn.com/ericlippert/archive/2007/08/14/c-and-the-pit-of-despair.aspx
Exactly what is the relationship between precedence, associativity, and order of execution in C#?
http://blogs.msdn.com/ericlippert/archive/2008/05/23/precedence-vs-associativity-vs-order.aspx
In what order do the side effects of indexing, assignment and increment happen?
http://blogs.msdn.com/ericlippert/archive/2009/08/10/precedence-vs-order-redux.aspx
|
1,606,530 | 1,627,349 | PInvoke - how to represent a field from a COM interface | I am referencing a COM structure that starts as follows:
[scriptable, uuid(ae9e84b5-3e2d-457e-8fcd-5bbd2a8b832e)]
interface nsICacheSession : nsISupports
{
/**
* Expired entries will be doomed or evicted if this attribute is set to
* true. If false, expired entries will be returned (useful for offline-
* mode and clients, such as HTTP, that can update the valid lifetime of
* cached content). This attribute defaults to true.
*/
attribute PRBool doomEntriesIfExpired;
...
Source: http://dxr.proximity.on.ca/dxr/mozilla-central/netwerk/cache/public/nsICacheSession.idl.html#58
I found code for importing that interface into my C# app. The code must be wrong though, as the set method doesn't seem to be useful and also throws an error when I try to call it just to see what happens:
[Guid("ae9e84b5-3e2d-457e-8fcd-5bbd2a8b832e"), ComImport, InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
interface nsICacheSession
{
[return: MarshalAs(UnmanagedType.Bool)]
void set_doomEntriesIfExpired();
[return: MarshalAs(UnmanagedType.Bool)]
bool get_doomEntriesIfExpired();
...
What is the correct way to set the value of doomEntriesIfExpired and how do I reference this from my code?
EDIT
I changed my code to the following, which yielded "System.AccessViolationException: Attempt to read or write protected memory yada yada...":
[Guid("ae9e84b5-3e2d-457e-8fcd-5bbd2a8b832e"), ComImport, InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
interface nsICacheSession
{
void set_doomEntriesIfExpired(bool enabled);
bool get_doomEntriesIfExpired();
...
| The answer you inserted is good. In COM Interop bools are marshaled as VARIANT_BOOL by default so your addition of the MarshalAs attribute to tell the marshaler to use a standard 4 byte BOOL type is correct, though the getter part of the equation needs the attribute added as well.
In general, I like to leave properties defined in the interface as properties rather than breaking them out into their getters and setters. It matches the semantics of the interface definition better and is usually easier to read as well. You should be able to re-write your COM import definition as follows to retain the attribute nature of doomEntriesIfExpired:
[Guid("ae9e84b5-3e2d-457e-8fcd-5bbd2a8b832e"), ComImport,
InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
interface nsICacheSession
{
bool doomEntriesIfExpired
{
[param:MarshalAs(UnmanagedType.Bool)]set;
[return:MarshalAs(UnmanagedType.Bool)]get;
}
...
|
1,606,596 | 1,606,622 | Is this a good way to do a strcmp that returns false when strings are empty? | I want another condition -- still maintaining fast execution time but safer -- where I return false if either or both strings are empty:
int special_strcmp(char *str1, char *str2)
{
if(*str1==*str2 =='\0')
return 0;
return strcmp(str1,str2);
}
| No, that's not a good way to do it, because it doesn't work.
if(*str1==*str2 =='\0')
will get evaluated as:
bool tmp1 = *str1==*str2;
bool tmp2 = tmp1 == '\0';
if (tmp2)
In other words, because the bool will get promoted to an integer, your test will return true whenever the strings start with different characters (tmp1 will be false, which gets converted to 0, and so tmp2 becomes true)
Don't try to outsmart the compiler. Writing fast code is not about writing as few lines of code as possible, or even as short lines as possible. Even if chaining together == in this manner was meaningful, there's no reason why it'd be faster. Just write code that you understand, and can write correctly.
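Assuming the intent was "if both strings are empty, report them equal (0) without calling strcmp", the fixed comparison needs a separate test per pointer; a sketch (the name special_strcmp just mirrors the question):

```cpp
#include <cstring>

// Both-empty case short-circuits; everything else falls through to strcmp.
// If you instead want *either* empty string to count as a mismatch, test
// with || and return a nonzero value.
int special_strcmp(const char *str1, const char *str2)
{
    if (*str1 == '\0' && *str2 == '\0')
        return 0;
    return std::strcmp(str1, str2);
}
```

In practice strcmp("", "") already returns 0, so the special case buys correctness of intent rather than speed.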
|
1,606,738 | 1,606,975 | Why does SFINAE not apply to this? | I'm writing some simple point code while trying out Visual Studio 10 (Beta 2), and I've hit this code where I would expect SFINAE to kick in, but it seems not to:
template<typename T>
struct point {
T x, y;
point(T x, T y) : x(x), y(y) {}
};
template<typename T, typename U>
struct op_div {
typedef decltype(T() / U()) type;
};
template<typename T, typename U>
point<typename op_div<T, U>::type>
operator/(point<T> const& l, point<U> const& r) {
return point<typename op_div<T, U>::type>(l.x / r.x, l.y / r.y);
}
template<typename T, typename U>
point<typename op_div<T, U>::type>
operator/(point<T> const& l, U const& r) {
return point<typename op_div<T, U>::type>(l.x / r, l.y / r);
}
int main() {
point<int>(0, 1) / point<float>(2, 3);
}
This gives error C2512: 'point<T>::point' : no appropriate default constructor available
Given that it is a beta, I did a quick sanity check with the online comeau compiler, and it agrees with an identical error, so it seems this behavior is correct, but I can't see why.
In this case some workarounds are to simply inline the decltype(T() / U()), to give the point class a default constructor, or to use decltype on the full result expression, but I got this error while trying to simplify an error I was getting with a version of op_div that did not require a default constructor*, so I would rather fix my understanding of C++ rather than to just do what works.
Thanks!
*: the original:
template<typename T, typename U>
struct op_div {
static T t(); static U u();
typedef decltype(t() / u()) type;
};
Which gives error C2784: 'point<op_div<T,U>::type> operator /(const point<T> &,const U &)' : could not deduce template argument for 'const point<T> &' from 'int', and also for the point<T> / point<U> overload.
| Not 100% sure. It appears that the compiler needs to instantiate both overloads to determine which is better, but while trying to instantiate the other op_div with T = int and U = point<float>, this leads to an error that is not covered by SFINAE (the error is not that op_div doesn't have type in this case, but that type cannot be determined).
You could try to disable the second overload if the second type is a point (boost::disable_if).
Also, what seems to work is postponed return type declaration (doing away with the op_div struct, but depending on which C++0x features are supported by your compiler):
template<typename T, typename U>
auto
operator/(point<T> const& l, point<U> const& r) -> point<decltype(l.x / r.x)> {
return {l.x / r.x, l.y / r.y};
}
template<typename T, typename U>
auto
operator/(point<T> const& l, U const& r) -> point<decltype(l.x / r)> {
return {l.x / r, l.y / r};
}
|
1,606,894 | 1,606,909 | std::pair<int, int> vs struct with two int's | In an ACM example, I had to build a big table for dynamic programming. I had to store two integers in each cell, so I decided to go for a std::pair<int, int>. However, allocating a huge array of them took 1.5 seconds:
std::pair<int, int> table[1001][1001];
Afterwards, I have changed this code to
struct Cell {
int first;
int second;
}
Cell table[1001][1001];
and the allocation took 0 seconds.
What explains this huge difference in time?
| The std::pair<int, int> default constructor value-initializes the fields (zero in the case of int), and your struct Cell doesn't: its auto-generated default constructor does nothing, leaving the members uninitialized.
Initializing requires writing to each field, which for a 1001x1001 array means roughly two million memory accesses that are relatively time consuming. With struct Cell nothing is done instead, and doing nothing is a bit faster.
|
1,606,969 | 1,609,131 | operator new/delete and class hierarchies | Suppose, we have hierarchy of classes and we want to make them allocate/deallocate their memory only throughout our memory manager. What is a classical C++ way to achieve this behavior?
Is it a MUST to have additional checks such as:
class Foo{
public:
virtual ~Foo(){}
void* operator new(size_t bytes)
{
if (bytes != sizeof(Foo)){
return ::operator new(bytes);
}
return g_memory_manager.alloc(bytes);
}
void operator delete(void *space, size_t bytes)
{
if (bytes != sizeof(Foo)){
return ::operator delete(space);
}
g_memory_manager.dealloc(space, bytes);
}
}
| No the checks are unnecessary.
Have a look at the SmallObject allocator in Alexandrescu's Loki library; you just inherit from SmallObject and it does all the heavy lifting for you!
And do not forget to overload all versions of new and delete:
simple version
array version
placement version
Otherwise you might have some troubles.
|
1,607,188 | 1,607,233 | Why is a type qualifier on a return type meaningless? | Say I have this example:
char const * const
foo( ){
/* which is initialized to const char * const */
return str;
}
What is the right way to do it to avoid the compiler warning "type qualifier on return type is meaningless"?
| The way you wrote it, it was saying "the returned pointer value is const". But non-class-type rvalues are not modifiable (a rule inherited from C), and thus the Standard says non-class-type rvalues are never const-qualified (the right-most const is ignored even though you specified it), since the const would be redundant. One doesn't write it - example:
int f();
int main() { f() = 0; } // error anyway!
// const redundant. returned expression still has type "int", even though the
// function-type of g remains "int const()" (potential confusion!)
int const g();
Notice that for the type of "g", the const is significant, but for rvalue expressions generated from type int const the const is ignored. So the following is an error:
int const f();
int f() { } // different return type but same parameters
There is no way known to me that you could observe the "const" other than getting at the type of "g" itself (passing &g to a template and deducing its type, for example). Finally, notice that "char const" and "const char" signify the same type. I recommend settling on one notation and using it throughout the code.
|
1,607,357 | 1,609,076 | warning #411: class foo defines no constructor to initialize the following: | I have some legacy code built with a C++ compiler that gives the warning in the subject line:
typedef struct foo {
char const * const str;
} Foo;
and a lot of places in the code (meaning I cannot change all of them) use it in a C style initialization:
Foo arr[] = {
{"death"},
{"torture"},
{"kill"}
};
What is a good workaround to silence this warning?
| If it works anyway, you might as well disable the warning in the command line for those files.
With the Intel compiler (where diagnostic #411 comes from) the flag is -wd411; gcc doesn't number its warnings this way.
Not so elegant, but if it works and if the code appears to be compliant to the standard (in this order) there's no point sweating over it!
|
1,607,380 | 1,626,166 | PInvoke - reading the value of a string field - "Attempted to read or write protected memory" | I'm having trouble accessing the some string fields in a COM interface. Calling the integer fields does not result in an error. When attempting call clientID(), deviceID() or key(), I get the old "Attempted to read or write protected memory" error.
Here is the source interface code: (code sourced from here)
[scriptable, uuid(fab51c92-95c3-4468-b317-7de4d7588254)]
interface nsICacheEntryInfo : nsISupports
{
readonly attribute string clientID;
readonly attribute string deviceID;
readonly attribute ACString key;
readonly attribute long fetchCount;
readonly attribute PRUint32 lastFetched;
readonly attribute PRUint32 lastModified;
readonly attribute PRUint32 expirationTime;
readonly attribute unsigned long dataSize;
boolean isStreamBased();
};
Here is the C# code for accessing the interface:
[Guid("fab51c92-95c3-4468-b317-7de4d7588254"), ComImport, InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
interface nsICacheEntryInfo
{
string clientID();
string deviceID();
nsACString key();
int fetchCount();
Int64 lastFetched();
Int64 lastModified();
Int64 expirationTime();
uint dataSize();
[return: MarshalAs(UnmanagedType.Bool)]
bool isStreamBased();
}
Any suggestions as to why simply trying to read a field should throw access violations at me?
| The strings in this interface are variants on C style strings (char*'s) but COM Interop by default treats strings as BSTRs. You have the marshaller trying to read the wrong kind of string and then free it with the CoTask memory allocator, so it's no surprise you get an access violation. If your strings were [In] parameters you could just adorn them with the appropriate MarshalAs attribute but that won't work with return value parameters. So you need to marshal them as IntPtrs and then manually marshal and free the underlying memory.
I would try the following:
[Guid("fab51c92-95c3-4468-b317-7de4d7588254"), ComImport,
InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
interface nsICacheEntryInfo
{
IntPtr clientID { get; }
IntPtr deviceID { get; }
IntPtr key { get; }
int fetchCount { get; }
uint lastFetched { get; }
uint lastModified { get; }
uint expirationTime { get; }
uint dataSize { get; }
[return: MarshalAs(UnmanagedType.Bool)]
bool isStreamBased();
}
As Chris mentioned above, the PRUint32s are actually 32-bit, not 64-bit, unsigned integers, so I've changed them. Also, I've changed the methods to properties, since that captures the meaning of the IDL better; as they are all readonly, it doesn't actually affect the layout.
The strings for clientID and deviceID can be read using Marshal.PtrToStrAnsi as so:
nsIMemory memoryManagerInstance = /*maybe get this from somewhere*/;
nsICacheEntryInfo cacheEntryInstance = /*definitely get this from somewhere*/;
IntPtr pClientID = cacheEntryInstance.clientID;
string clientID = Marshal.PtrToStringAnsi(pClientID);
NS_Free(pClientID);
//or
memoryManagerInstance.free(pClientID);
Whether you use NS_Free or the memory interface to free the strings depends on how you are using the whole xpcom setup. The key value is an abstract string so its representation depends on where it comes from. It appears though that it can usually be treated as a pointer to an ANSI string like the others.
I don't have the setup to try any of this for you and the documentation is, to say the least, somewhat opaque so you may need to do a little tweaking on this.
|
1,607,402 | 1,607,567 | How to Identify a Missing Dependency | We have a legacy 3rd party program that is failing with the error "Class Not Registered" when it tries to execute certain functionality. Is there a way to tell what class it's looking for? Sometimes it says "Catastrophic error" instead.
I tried Dependency Walker, both statically and in profiling mode; it showed Kernel32.dll errors. I'm guessing that's the instantiation failing. It didn't reveal any missing dependencies.
Process Monitor didn't reveal any failed attempts to access the registry or a particular file path.
Have spent several days with a debugger and tools trying to figure out what file it is. We have a running production system that we can access with a non-Admin user account.
The 3rd party app was written in Delphi if that makes a difference. Our calling app is C++.
Update:
Using the Process Monitor from Sysinternals and a filter to narrow the natural registry read failures from the unusual ones I was able to find the GUID that was missing and register the corresponding dll.
| It is very probably a COM problem. Here's an article on how to debug it. Basically, use RegMon. It will show COM reading registry keys, trying to find the class provider.
|
1,607,409 | 1,607,436 | Why doesn't this while loop work? | Ok, so I'm trying to create a program using a while loop to find the greatest common divisor of two numbers. This is what I came up with. However, from what I can tell, the program just seems to skip the loop entirely when I run it. (opers remains 0, divisor always comes back as equal to num1). Anyone out there that can help out a newbie?
/* Define variables for divisors and number of operations */
int num1, num2, divisor, opers;
opers = 0;
/* Prompt user for integers and accept input */
cout << "Please enter two integers with the smaller number first, separated by a space. ";
cout << endl;
cin >> num1 >> num2;
/* Make divisor the smaller of the two numbers */
divisor = num1;
/* While loop to calculate greatest common divisor and number of calculations */
while ( (num1 % divisor != 0 ) && ( num2 % divisor != 0 ) )
{
divisor--;
opers++;
}
/* Output results and number of calculations performed */
cout << "The greatest common divisor of " << num1 << " and " << num2 << " is: ";
cout << divisor << endl << "Number of operations performed: " << opers;
| The && condition keeps the loop running only while both modulo operations are nonzero; as soon as either one returns 0, the while loop terminates. (And since you set divisor = num1, num1 % divisor is 0 right away, so the loop is never entered at all.)
What you probably want:
while ( (num1 % divisor != 0 ) || ( num2 % divisor != 0 ) )
{
divisor--;
opers++;
}
This continues the loop until both modulo operations result in 0.
|
1,607,515 | 1,607,536 | PyQt custom widget in c++ | Can I write custom Qt widget in pure C++, compile it and use in PyQt?
I'm trying to use ctypes-opencv with Qt, and I have performance problems with the Python code for displaying OpenCV's image in a Qt form.
| You will have to write a Python wrapper for the widget, using the sip library (which is used by PyQt). There is a simple example for a Qt/C++ widget in the documentation.
|
1,607,678 | 1,608,358 | C++ operator overloading and implicit conversion | I have a class that encapsulates some arithmetic, let's say fixed point calculations. I like the idea of overloading arithmetic operators, so I write the following:
class CFixed
{
CFixed( int );
CFixed( float );
};
CFixed operator* ( const CFixed& a, const CFixed& b )
{ ... }
It all works. I can write 3 * CFixed(0) and CFixed(3) * 10.0f. But now I realize I can implement operator* with an integer operand much more efficiently. So I overload it:
CFixed operator* ( const CFixed& a, int b )
{ ... }
CFixed operator* ( int a, const CFixed& b )
{ ... }
It still works, but now CFixed(0) * 10.0f calls the overloaded version, converting float to int (and I expected it to convert float to CFixed). Of course, I can overload float versions as well, but that seems like a combinatorial explosion of code to me. Is there any workaround (or am I designing my class wrong)? How can I tell the compiler to call the overloaded version of operator* ONLY with ints?
| Assuming you'd like the specialized version to be picked for any integral type (and not just int in particular), one thing you could do is provide it as a template function and use Boost.EnableIf to remove those overloads from the available overload set when the operand is not an integral type.
#include <cstdio>
#include <boost/utility/enable_if.hpp>
#include <boost/type_traits/is_integral.hpp>
class CFixed
{
public:
CFixed( int ) {}
CFixed( float ) {}
};
CFixed operator* ( const CFixed& a, const CFixed& )
{ puts("General CFixed * CFixed"); return a; }
template <class T>
typename boost::enable_if<boost::is_integral<T>, CFixed>::type operator* ( const CFixed& a, T )
{ puts("CFixed * [integer type]"); return a; }
template <class T>
typename boost::enable_if<boost::is_integral<T>, CFixed>::type operator* ( T , const CFixed& b )
{ puts("[integer type] * CFixed"); return b; }
int main()
{
CFixed(0) * 10.0f;
5 * CFixed(20.4f);
3.2f * CFixed(10);
CFixed(1) * 100u;
}
Naturally, you could also use a different condition to make those overloads available only if T=int: typename boost::enable_if<boost::is_same<T, int>, CFixed>::type ...
As to designing the class, perhaps you could rely on templates more. E.g, the constructor could be a template, and again, should you need to distinguish between integral and real types, it should be possible to employ this technique.
|
1,607,748 | 1,607,895 | template meta-programming OR operation | I have a class that can be decorated with a set of add-on templates to provide additional functionality. Each add-on has an identifying addon_value that the base class needs to know.
The code below is an example of what I would like to do. Obviously, the main() function fails to compile. The goal is for CBase::GetValueOfAddOns() to know the value of OR-ing the addon_value for each add-on. The calculation does not actually have to be performed in GetValueOfAddOns(), it just has to be able to get at the result.
template< class T >
class AddOn_A : public T
{
public:
AddOn_A( int x ) : T( x )
{};
enum { addon_value = 0x00000001 };
};
template< class T >
class AddOn_B : public T
{
public:
AddOn_B( int x ) : T( x )
{};
enum { addon_value = 0x00000010 };
};
class CBase
{
public:
explicit CBase( int x ) : x_( x )
{
// error LNK2001: unresolved external symbol "public: virtual int __thiscall CBase::GetValueOfAddOns(void)const " (?GetValueOfAddOns@CBase@@UBEHXZ)
int z = GetValueOfAddOns();
};
virtual int GetValueOfAddOns() const = 0;
private:
int x_;
};
// define an empty AddOn
template< class > class empty
{
public:
enum { addon_value = 0x00000000 };
};
// forward declaration and Add-On defaults
template< template< class > class AddOn1 = empty,
template< class > class AddOn2 = empty,
template< class > class AddOn3 = empty >
class CMyClass;
// specialized template for the default case
template<> class CMyClass< empty, empty, empty > : public CBase
{
public:
CMyClass( int x ) : CBase( x )
{};
enum { addon_value = 0x00000000 };
};
// actual definition
template< template< class > class AddOn1,
template< class > class AddOn2,
template< class > class AddOn3 >
class CMyClass : public AddOn1< CBase >,
public CMyClass< AddOn2, AddOn3 >
{
public:
CMyClass( int x ) : AddOn1< CBase >( x ),
CMyClass< AddOn2, AddOn3 >( x )
{};
enum { addon_value = AddOn1< CBase >::addon_value | CMyClass< AddOn2, AddOn3 >::addon_value };
int GetValueOfAddOns() const
{
return addon_value;
}
};
int _tmain( int argc, _TCHAR* argv[] )
{
CMyClass< AddOn_A > A( 0 );
_ASSERT( A.GetValueOfAddOns() == AddOn_A< CBase >::addon_value );
CMyClass< AddOn_A, AddOn_B > AB( 0 );
_ASSERT( AB.GetValueOfAddOns() == ( AddOn_A< CBase >::addon_value | AddOn_B< CBase >::addon_value ) );
return 0;
}
Thanks for any help,
PaulH
| Not sure this is the most elegant way, but the following is fairly straightforward:
Add this to CMyClass:
enum {AddonsValues = AddOn1<CBase>::addon_value | CMyClass<AddOn2, AddOn3>::AddonsValues};
int GetValueOfAddOns()
{
// return the result of OR-ing the addon_value of each add-on.
return AddonsValues;
};
and this to the specialized CMyClass<empty, empty, empty>:
enum {AddonsValues = 0};
|
1,607,953 | 1,608,028 | c++ OpenGL coordinate transformation | I just don't seem to be able to figure this out in my head. I'm trying to move an object in 3D space.
If I have a point at 5,15,5 and use opengl functions to change the model view....
glTranslatef( 10.0f, 4.0f, 4.0f );
glRotatef( 33.0f, 1.0f, 0.0f, 0.0f );
glTranslatef( 10.0f, 4.0f, 4.0f );
Is there a way I can find out where that point ends up (in world / global coordinates)?
Can I do some kind of matrix calculation that will give me back 20,26,23 (or whatever the new coordinate position is)?
Please help, I've been stuck on this for so long!
| Try the following:
1) Push the current matrix into stack;
2) Load identity and apply your transformations;
3) Get the resulting transformation matrix into some temp variable; glGetFloatv(GL_MODELVIEW_MATRIX, ...) will do it;
4) Pop the matrix from the stack;
Now you have your transformation matrix. Multiply your point by this matrix to predict the point's coordinates after the transformation.
|
1,608,252 | 1,608,271 | C++ features to learn | C++ has too many features, and I can't see how any programmer is able to remember all these features while programming. (We can see how this affected the design of newer languages, such as Java)
So, what I need is a list of features that are enough to know, disregarding all the others, to create C++ programs, perhaps a list created by someone who thought the same way I do.
Hope I was clear enough.
| This is really an impossible list to create. Every place I've worked has a different acceptable subset of C++, so it's going to differ depending on what you're developing. I've seen everything from C++ that truly is just C with occasional use of the class keyword, to heavily run-time-polymorphism-oriented code, to template-metaprogramming-heavy code. The practices will also change based on which frameworks/libraries/platforms you are targeting.
The best I can suggest is reading various coding standards and seeing how they suggest you write C++ code.
Google's Coding Standard
Sutter's Coding Standard Book
|
1,608,291 | 1,608,347 | Including header files in Visual Studio | Suppose I have a solution with 3 projects X,Y, and E.
E will generate an executable and X and Y will generate static libraries such that Y includes the header files of X and E includes the header files of Y.
Now, my question is: why do I have to include the directory of the header files of X in E?
| Here's why:
It is possible that some function in project Y takes an argument (or returns a value) which is of a type declared in X.
If so, the compiler may have to create these argument (or return value) objects while compiling E.
If that's the case, header files from X are absolutely needed in E.
|
1,608,565 | 1,608,749 | Is there a method to procedurally detect if a network router supports DHCP using C/C++? | There is a scenario where an application tells a device on a network to get its IP address from the network router's DHCP server. If a DHCP server is not available, the device's behavior becomes erratic. Is there a method to procedurally detect if the network router supports DHCP? Or is this something the device needs to do when attempting to get its IP address from the DHCP server?
| The DHCP client on the device is doing exactly that. Well, almost.
The DHCP DISCOVER message is broadcast on the link. Then, if there are any DHCP servers willing to serve this particular MAC address, each one reserves an IP address from its pool and answers with an OFFER message. The client then picks which server it wants to "bind" to and issues the REQUEST message. The server confirms the granted lease of the IP address with an ACK message.
The client discovers "non-availability" of DHCP servers by simply timing out waiting for offers.
Take a look at the DHCPing project. Might give you some ideas.
Links to the DHCP RFCs:
RFC 2131, Dynamic Host Configuration Protocol
RFC 2132, DHCP Options and BOOTP Vendor Extensions
|
1,608,732 | 1,608,774 | which cast is faster static_cast<int>() or int() | Trying to see which cast is faster (not necessarily better): the new C++ cast or the old-fashioned C-style cast. Any ideas?
| There should be no difference at all if you compare int() to equivalent functionality of static_cast<int>().
Using VC2008:
double d = 10.5;
013A13EE fld qword ptr [__real@4025000000000000 (13A5840h)]
013A13F4 fstp qword ptr [d]
int x = int(d);
013A13F7 fld qword ptr [d]
013A13FA call @ILT+215(__ftol2_sse) (13A10DCh)
013A13FF mov dword ptr [x],eax
int y = static_cast<int>(d);
013A1402 fld qword ptr [d]
013A1405 call @ILT+215(__ftol2_sse) (13A10DCh)
013A140A mov dword ptr [y],eax
Obviously, it is 100% the same!
|
1,608,733 | 1,608,950 | ambiguous access when calling a decorated base class | I have a class that can be decorated with a set of add-on templates to provide additional functionality. Each add-on needs to be able to call the base class and the user needs to be able to call the base class (either directly or using the CMyClass as a proxy).
Unfortunately, the compiler can't tell which base class I'm calling and I get ambiguous access errors.
template< class T >
class AddOn_A : public T
{
public:
AddOn_A( int x ) : T( x )
{};
int AddOne()
{
T* pT = static_cast< T* >( this );
return pT->GetValue() + 1;
};
};
template< class T >
class AddOn_B : public T
{
public:
AddOn_B( int x ) : T( x )
{};
int AddTwo()
{
T* pT = static_cast< T* >( this );
return pT->GetValue() + 2;
};
};
class CBase
{
public:
explicit CBase( int x ) : x_( x )
{
};
int GetValue()
{
return x_;
};
private:
int x_;
};
// define an empty AddOn
template< class > struct empty {};
// forward declaration and Add-On defaults
template< template< class > class AddOn1 = empty,
template< class > class AddOn2 = empty,
template< class > class AddOn3 = empty >
class CMyClass;
// specialized template for the default case
template<> class CMyClass< empty, empty, empty > : public CBase
{
public:
CMyClass( int x ) : CBase( x )
{};
};
// actual definition
template< template< class > class AddOn1,
template< class > class AddOn2,
template< class > class AddOn3 >
class CMyClass : public AddOn1< CBase >,
public CMyClass< AddOn2, AddOn3 >
{
public:
CMyClass( int x ) : AddOn1< CBase >( x ),
CMyClass< AddOn2, AddOn3 >( x )
{};
};
int _tmain( int argc, _TCHAR* argv[] )
{
CMyClass< AddOn_A > A( 100 );
// error C2385: ambiguous access of 'GetValue'
// 1> could be the 'GetValue' in base 'CBase'
// 1> or could be the 'GetValue' in base 'CBase'
_ASSERT( A.GetValue() == 100 );
// error C2385: ambiguous access of 'GetValue'
// 1> could be the 'GetValue' in base 'CBase'
// 1> or could be the 'GetValue' in base 'CBase'
_ASSERT( A.AddOne() == A.GetValue() + 1 );
// works
_ASSERT( A.AddOne() == 101 );
CMyClass< AddOn_A, AddOn_B > AB( 100 );
// same errors as above
_ASSERT( AB.GetValue() == 100 );
// same errors as above
_ASSERT( AB.AddTwo() == AB.GetValue() + 2 );
// works
_ASSERT( AB.AddTwo() == 102 );
return 0;
}
Can anybody point out what I may be doing wrong?
Thanks,
PaulH
| Well, since I launched on the Decorator approach, I might as well :)
EDIT: let's add the AddOnValues to solve this as well
The problem here is the multiple inheritance. Tracing such a hierarchy is not easy, but if you look closely you'll see that CMyClass<AddOn_A> inherits twice from CBase:
CMyClass<AddOn_A> <-- AddOn_A<CBase> <-- CBase
CMyClass<AddOn_A> <-- CMyclass<empty,empty,empty> <-- CBase
The problem is that you used a policy approach, instead of a Decorator approach. In a proper Decorator approach, the hierarchy is strictly linear and you only have one template parameter at a time. Let's get the basis:
// Note that the static_cast are completely unnecessary
// If you inherit from T then you can freely enjoy
// its public and protected methods
template< class T >
class AddOn_A : public T
{
public:
enum { AddOnValues = T::AddOnValues | 0x01 }; // this hides T::AddOnValues
AddOn_A( int x ) : T( x ) {};
int AddOne()
{
return this->GetValue() + 1;
};
};
template< class T >
class AddOn_B : public T
{
public:
enum { AddOnValues = T::AddOnValues | 0x02 }; // this hides T::AddOnValues
AddOn_B( int x ) : T( x ) {};
int AddTwo()
{
return this->GetValue() + 2;
};
};
class CBase
{
public:
enum { AddOnValues = 0x00 };
explicit CBase( int x ) : x_( x ) {}
virtual ~CBase() {} // virtual destructor for inheritance
int GetValue() const { return x_; }; // const method
private:
int x_;
};
Now we can get to the actual use!
// First, the typedef approach
typedef AddOn_B< AddOn_A< CBase > > CMyClass;
CMyClass myObject(3);
std::cout << myObject.GetValue() << std::endl;
std::cout << myObject.AddOne() << std::endl;
std::cout << myObject.AddTwo() << std::endl;
Quite easy isn't it ? The obvious drawback is that you don't add functionality there...
// I want more!
template < class T >
class CMyClassImpl: public T
{
// Whatever you want
};
CMyClassImpl< AddOn_B< AddOn_A< CBase > > > myObject(3);
Okay... not so beautiful I guess... Even better ? Well, we can just use a wrapper!
// Even better: partially specialize the wrapper (with the primary template
// forward-declared as before, taking three AddOns that default to empty)
template <>
class CMyClass< empty, empty, empty >: public CMyClassImpl < CBase > {};
template < template <class> class AddOn1 >
class CMyClass< AddOn1, empty, empty >: public CMyClassImpl < AddOn1< CBase > > {};
template < template <class> class AddOn1,
           template <class> class AddOn2 >
class CMyClass< AddOn1, AddOn2, empty >: public CMyClassImpl < AddOn2< AddOn1< CBase > > > {};
template < template <class> class AddOn1,
           template <class> class AddOn2,
           template <class> class AddOn3 >
class CMyClass: public CMyClassImpl < AddOn3 < AddOn2< AddOn1< CBase > > > > {};
// Go on with as many specializations as you wish
CMyClass < AddOn_A, AddOn_B > myObject(3);
Of course, the last solution saves typing at the calling site, but you've got to really work on your class :)
Also, you have to repeat the various constructors at each step of the inheritance, which may rapidly prove... boring.
There are preprocessor macros out there, but... last time it took me some 500 lines to generate something quite simple, so don't bother and type, really :)
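For the record, here is a condensed, self-contained sketch of the linear-decorator shape (stripped of the AddOnValues machinery) showing that a single CBase subobject removes the ambiguity from the question:

```cpp
#include <cassert>

class CBase {
public:
    explicit CBase(int x) : x_(x) {}
    int GetValue() const { return x_; }
private:
    int x_;
};

template <class T>
class AddOn_A : public T {
public:
    explicit AddOn_A(int x) : T(x) {}
    int AddOne() { return this->GetValue() + 1; }
};

template <class T>
class AddOn_B : public T {
public:
    explicit AddOn_B(int x) : T(x) {}
    int AddTwo() { return this->GetValue() + 2; }
};

int demo() {
    // Strictly linear hierarchy: AddOn_B -> AddOn_A -> CBase.
    // Exactly one CBase subobject, so GetValue() is unambiguous.
    AddOn_B<AddOn_A<CBase> > obj(100);
    return obj.GetValue() + obj.AddOne() + obj.AddTwo();  // 100 + 101 + 102
}
```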
|
1,608,909 | 1,608,955 | Why don't structs work in Xcode, when they do in Visual C++? Help needed! | For some reason, this very basic code will compile with no errors in Visual C++, but gives errors in XCode. I will need to know why, in order to continue working in Xcode for my Computer Science class.
#include <iostream>
#include <string>
using namespace std;
struct acct { // bank account data
int num; // account number
string name; // owner of account
float balance; // balance in account
};
int main() {
acct account;
cout << "Enter new account data: " << endl;
cout << "Account number: ";
cin >> account.num;
cout << "Account name: ";
cin >> account.name;
cout << "Account balance: ";
cin >> account.balance;
return 0;
}
It gives two errors, one saying that it expected ';' before account (after main is declared), and the second that account was not declared for cin >> account.num;
| The problem is not actually in your code: while C does require you to prefix your variables with struct in this case, C++ does not. The problem is actually that there is a global function on Unix named acct - it is this that is confusing the compiler. If you renamed your struct to something else, say bank_account, it will behave as you expected.
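A quick sketch of the fix (alternatively, the elaborated form struct acct account; also disambiguates in C++, though renaming is cleaner):

```cpp
#include <cassert>
#include <string>

struct bank_account {  // renamed so it no longer collides with the Unix acct() function
    int num;
    std::string name;
    float balance;
};

int demo() {
    bank_account account;
    account.num = 7;
    account.name = "Alice";
    account.balance = 10.5f;
    return account.num;
}
```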
|
1,608,953 | 1,608,978 | Very strange char array behaviour | .
unsigned int fname_length = 0;
//fname length equals 30
file.read((char*)&fname_length,sizeof(unsigned int));
//fname contains random data as you would expect
char *fname = new char[fname_length];
//fname contains all the data 30 bytes long as you would expect, plus 18 bytes of random data on the end (intellisense display)
file.read((char*)fname,fname_length);
//m_material_file (std:string) contains all 48 characters
m_material_file = fname;
// count = 48
int count = m_material_file.length();
now when trying this way, intellisense still shows the 18 bytes of data after setting the char array to all ' ' and I get exactly the same results. even without the file read
char name[30];
for(int i = 0; i < 30; ++i)
{
name[i] = ' ';
}
file.read((char*)fname,30);
m_material_file = name;
int count = m_material_file.length();
Any idea what's going wrong here? It's probably something completely obvious, but I'm stumped!
thanks
| Sounds like the string in the file isn't null-terminated, and intellisense is assuming that it is. Or perhaps when you wrote the length of the string (30) into the file, you didn't include the null character in that count. Try adding:
fname[fname_length] = '\0';
after the file.read(). Oh yeah, you'll need to allocate an extra character too:
char * fname = new char[fname_length + 1];
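A self-contained sketch of the combined fix (std::memcpy stands in for file.read here, since the actual file isn't available):

```cpp
#include <cassert>
#include <cstring>
#include <string>

// Copies 'len' raw bytes (as file.read would) and terminates explicitly.
std::string readName(const char* raw, unsigned len) {
    char* fname = new char[len + 1];   // one extra byte for the terminator
    std::memcpy(fname, raw, len);      // stands in for file.read(fname, len)
    fname[len] = '\0';                 // without this, the std::string sees trailing garbage
    std::string result = fname;
    delete[] fname;
    return result;
}
```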
|
1,608,954 | 1,608,998 | Can the loop continuation condition in for loop be anything that will eventually return a false/null value? | This is out of deitel's c++ book and I'm trying to understand a bit more about why the continuation condition works and how it knows to quit. s1 and s2 are arrays so when s2 tries to assign the '\n' to s1 does it return null?
void mystery1( char *s1, const char *s2 )
{
while ( *s1 != '\0' )
s1++;
for ( ; *s1 = *s2; s1++, s2++ )
; // empty statement
}
| *s1 = *s2
Is an expression. Expressions in C and C++ evaluate to values, and in this case the assignment evaluates to the value assigned to *s1. When the '\0' is assigned to *s1, the expression evaluates to 0, which is clearly false, so the loop stops.
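In other words, mystery1 is a hand-rolled strcat: the loop stops on the very iteration that copies the terminating '\0'. A quick self-contained check (the extra parentheses around the assignment silence gcc's -Wparentheses warning):

```cpp
#include <cassert>
#include <cstring>

void mystery1(char* s1, const char* s2) {
    while (*s1 != '\0')
        s1++;                        // advance to the end of s1
    for (; (*s1 = *s2); s1++, s2++)  // copy; stops once '\0' has been assigned
        ;                            // empty statement
}

int demo() {
    char buf[16] = "foo";
    mystery1(buf, "bar");
    return std::strcmp(buf, "foobar");  // 0 when the concatenation worked
}
```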
|
1,609,121 | 1,609,183 | How to have a vector of byvalue and use a vector of pointers in conjunction? | I have some vectors of class A objects:
std::vector<A> *V1;
std::vector<A> *V2;
etc
there is a function with a vector of pointers of A:
std::vector<A *> *arranged;
what I need to do is put the vectors from V1, V2 etc inside arranged without destroying them in the end, so I thought that a vector of pointers to those objects... is this possible?
if yes, can you give me an example of an iteration over V1 that adds pointers to those objects into arranged?
imagine that you, temporarily, need to sort 3 vectors of objects into one vector but you don't want to mess up the memory of the 3 vectors.
ty,
Joe
| You could write your own comparator. In this case, the comparator would work on A*. A simple example using int type:
#include <vector>
#include <algorithm>
using namespace std;

void fun(vector<int*>* vec)
{
    // work with the merged, sorted view here
}
bool comp(int* lhs, int* rhs)
{
    return *lhs < *rhs; // compare the pointees, not the pointers
}
int main()
{
    vector<int> first, second; // fill these with your data
    vector<int*> vec;
    // note: don't resize first/second afterwards, or these pointers are invalidated
    for(vector<int>::size_type i = 0; i < first.size(); ++i)
        vec.push_back(&first[i]);
    for(vector<int>::size_type i = 0; i < second.size(); ++i)
        vec.push_back(&second[i]);
    // write your own comparator! provided above: comp
    sort(vec.begin(), vec.end(), comp);
    fun(&vec);
    return 0;
}
|
1,609,127 | 1,609,198 | C++ Serial Port Only Responding Once Using Write() | All the code below works. My device responds, C,7 is a reset. When I run this the second time it doesn't respond. If I manually turn my device off and on, then run this script again it works. But not if I press the button to run the script the second time.
RS232: 57600,8,N,1
Any ideas?? Is there any more information needed to solve this?
*Also when I get this working I'm going to have to use the read() function to get the devices responses. Does anyone know the correct format I need to use, based on the below code? Sorry I'm new to C++...I'm more of a PHP guy.
*I also don't know if 1024 is right, but it seems to work so eh...
#include <termios.h>
#include <fcntl.h>
#include <unistd.h>
int fd;
struct termios options;
fd=open("/dev/tty.KeySerial1", O_RDWR | O_NOCTTY | O_NDELAY);
fcntl(fd, F_SETFL, 0);
tcgetattr(fd,&options);
options.c_ispeed=57600;
options.c_ospeed=57600;
options.c_cflag |= (CLOCAL | CREAD);
options.c_lflag &= ~(ICANON | ECHO | ECHOE | ISIG);
options.c_cflag &= ~CSTOPB;
options.c_lflag &= ~ECHO;
options.c_oflag &= ~ECHO;
options.c_oflag &= ~OPOST;
options.c_cflag |= CS8;
options.c_cflag |= CRTSCTS;
options.c_cc[VMIN] = 0;
options.c_cc[VTIME] =10;
tcflush(fd, TCIFLUSH);
tcsetattr(fd,TCSANOW,&options);
write(fd, "C,7\r\n", 1024);
close(fd);
| The 1024 may in fact be your problem. The third paramter to the write() function indicates the number of bytes to be written:
ssize_t write(int fildes, const void *buf, size_t nbyte);
See the man page for write() for details.
In your case, the number should be 5, since you are sending 5 characters ('C' ',' '7' '\r' and '\n').
By providing a value of 1024, you are actually sending another 1019 garbage characters over the serial channel.
update:
The read() function has almost the same arguments:
ssize_t read(int fildes, void *buf, size_t nbyte);
Note that you must provide a writable buffer as the second parameter. For example, to read 12 bytes you would use:
char someData[12];
read(fd, someData, 12);
I'm not quite sure how to determine the number of characters you need to read, but the ssize_t number returned by the function will tell you how many were actually read.
|
1,609,163 | 1,609,185 | What is the difference between static_cast<> and C style casting? | Is there any reason to prefer static_cast<> over C style casting? Are they equivalent? Is there any sort of speed difference?
| C++ style casts are checked by the compiler: a static_cast between unrelated types is a compile-time error. C style casts aren't checked — they silently fall back to reinterpret_cast or const_cast, so the mistake only shows up at runtime.
Also, C++ style casts can be searched for easily, whereas it's really hard to search for C style casts.
Another big benefit is that the 4 different C++ style casts express the intent of the programmer more clearly.
When writing C++ I'd pretty much always use the C++ casts over the C style ones.
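A small sketch of the difference (hypothetical Apple/Orange types; the commented-out line is the one static_cast rejects at compile time):

```cpp
#include <cassert>

struct Apple  { int a; };
struct Orange { int o; };

int demo() {
    Apple apple = {1};
    // C-style cast compiles silently by degrading to reinterpret_cast:
    Orange* wrong = (Orange*)&apple;
    // static_cast refuses the unrelated conversion at compile time:
    // Orange* better = static_cast<Orange*>(&apple);  // error: invalid static_cast
    (void)wrong;  // dereferencing it would be undefined behaviour

    double d = 3.9;
    int n = static_cast<int>(d);  // intent is explicit and easy to grep for
    return n;
}
```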
|
1,609,472 | 1,609,505 | friend class with limited access | I want to make a class A friend class of class B. I want to do this as these interact very much and A needs to change internals of class B (which I dont want to expose using public). But I want to make sure it has access to only a few selected functions not all the functions.
Example:
class A
{
};
class B
{
private:
void setState();
void setFlags();
friend class A
};
I want A to be able to access setState but not setFlags... Is there a design pattern or a nice way of doing this or am I left with giving full access or no access at all in this case.
Thanks
| It depends on what you mean by "a nice way" :) At comp.lang.c++.moderated we had the same question a while ago. You may see the discussion it generated there.
IIRC, we ended up using the "friend of a nested key" approach. Applied to your example, this would yield:
class A
{
};
class B
{
public:
class Key{
friend class A;
Key(){}            // note: must be defined, not just declared, so friends can construct one
};
void setState(Key){setState();}
private:
void setState();
void setFlags();   // no keyed wrapper for this one
};
The idea is that the public setState() must be called with a "Key", and only friends of Key can create one, as its ctor is private. Since setFlags() gets no keyed wrapper, A cannot reach it.
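A complete, runnable version of the idea (a sketch gating setState as the question asked; the Key constructor needs a body so that friends can actually construct one):

```cpp
#include <cassert>

class B {
public:
    class Key {
        friend class A;
        Key() {}                  // private: only friends of Key can make one
    };
    void setState(Key) { setState(); }   // gated public wrapper
    int state() const { return state_; }
private:
    void setState() { state_ = 1; }
    void setFlags() { flags_ = 1; }      // no wrapper: stays out of A's reach
    int state_ = 0;
    int flags_ = 0;
};

class A {
public:
    void poke(B& b) { b.setState(B::Key()); }  // OK: A is a friend of Key
};

int demo() {
    A a;
    B b;
    a.poke(b);
    return b.state();
}
```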
|
1,609,550 | 1,609,576 | looking for an efficient data structure to do quick searches | I have a list of around 1000 elements. Each element (objects that I read from a file, hence I can arrange them efficiently at the beginning) contains 4 variables. So now I am doing the following, which is very inefficient in the grand scheme of things:
void func(double value1, double value2, double value3)
{
fooArr[1000];
for(int i=0;i<1000; ++i)
{
//they are all numeric! ranges are < 1000
if(fooArr[i].a == value1
&& fooArr[i].b >= value2
&& fooArr[i].c <= value2 //yes again value2
&& fooArr[i].d <= value3
)
{
/* yay found now do something!*/
}
}
}
Space is not too important!
MODIFIED per REQUEST
| If space isn't too important, the easiest thing to do is to create a hash based on "a". Depending on how many conflicts you get on "a", it may make sense to make each node in the hash table point to a binary tree based off of "b". If b has a lot of conflicts, do the same for c.
That first index into the hash, depending on how many conflicts, will save you a lot of time for very little coding or data structures work.
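A sketch of that first level using the standard library (assuming the a field is integral, since the question says all values are numeric and below 1000; std::unordered_multimap plays the role of the hash table):

```cpp
#include <cassert>
#include <unordered_map>
#include <vector>

struct Foo { int a; double b, c, d; };

// Scan only the handful of entries whose 'a' matches exactly,
// then apply the range tests to those few.
int countMatches(const std::unordered_multimap<int, Foo>& index,
                 int value1, double value2, double value3) {
    int hits = 0;
    auto range = index.equal_range(value1);
    for (auto it = range.first; it != range.second; ++it) {
        const Foo& f = it->second;
        if (f.b >= value2 && f.c <= value2 && f.d <= value3)
            ++hits;
    }
    return hits;
}

int demo() {
    std::vector<Foo> arr = { {5, 2.0, 1.0, 0.0},
                             {5, 0.5, 1.0, 0.0},
                             {7, 2.0, 1.0, 0.0} };
    std::unordered_multimap<int, Foo> index;  // built once, reused per query
    for (const Foo& f : arr)
        index.emplace(f.a, f);
    return countMatches(index, 5, 1.5, 1.0);  // only the first element passes
}
```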
|
1,609,606 | 1,609,617 | LNK2001 using a std::vector of a custom struct | I want to have some data cache which contains some objects which I can update over an UpdateCache function. However, I'm getting problems with a LNK2001 followed by a LNK1120.
HeaderFile.h
#ifndef headerfile_included
#define headerfile_included
#include <vector>
struct MyObject {
unsigned int A;
unsigned int B;
};
class MyClass {
private:
static std::vector<MyObject> myObjectCache;
public:
static void UpdateCache ();
};
#endif
CodeFile.cpp
#include "HeaderFile.h"
void MyClass::UpdateCache () {
myObjectCache.clear();
/* Repopulate cache with new data */
}
The error message I get from the linked is
error LNK2001: unresolved external symbol ""private: static class std::vector<struct MyObject,class std::allocator<struct MyObject> > MyClass::myObjectCache" (?myObjectCache@MyClass@@0V?$vector@UMyObject@@V?$allocator@UMyObject@@@std@@@std@@A)".
fatal error LNK1120: 1 unresolved externals
My opinion is that it is some problem with the partitioning into the header file and the code file as I have had similar problems with improper partitioning. If it is again such a problem, it would be nice if you could post some rule on what to put into the header file and what into the code file since it's pretty confusing.
| You need to add this to a cpp file:
std::vector<MyObject> MyClass::myObjectCache;
The reason is that a static member exists independently of any instance of the class, so it needs exactly one definition outside the class. The line above provides that definition, and thus the member exists whether or not you ever create an instance of the class itself.
|
1,609,669 | 1,610,128 | Why does new / malloc fail on Win x64 although there is plenty of free RAM? | I have a strongly recursive function, that creates a (very small) std::multimap locally for each function instance using new (which recurses to malloc/calloc in the std lib). After some hundred recursions new fails although i am using a native 64Bit application on Windows XP x64. The machine has 10 GB RAM, The application only uses about 1GB. No other big apps are running.
This happens a few minutes after starting the program and starting the recursive function. The recursive function has been called about 150.000 times at this point with a probably max. recursion of some hundreds. The problem occurring is not a stack overflow.
I am using Visual Studio 2005 and the dinkumware STL. The fault occurs in a release build.
EDIT:
Ok, here is some code.
I rearranged the code now and put the map on the stack, but it uses new to initialize - there it fails. I also tried a std::multimap instead of hash_multimap. None of this changed the behavior.
int TraceBackSource(CalcParams *CalcData, CKnoObj *theKno, int qualNo,
double maschFak, double partAmount, int MaschLevel, char *MaschID,
double *totalStrFlow, int passNo,
CTraceBackData *ResultData)
{ typedef std::hash_multimap<double, CStrObj *>StrFMap;
StrFMap thePipes;
for(...)
{
...
thePipes.insert(std::make_pair(thisFlow, theStr));
}
// max. 5 elements in "thePipes"
for(StrFMap::iterator it = thePipes.begin(); it != thePipes.end(); it++)
{
...
try
{
TraceBackSource(CalcData, otherKno, qualNo, maschFak * nodeFak, nodeAmount, SubMaschlevel, newMaschID, totalStrFlow, passNo, ResultData);
}
catch(std::exception &it)
{
Trace(0, "*** Exception, %s", it.what());
return 0;
}
return 0;
}
}
Interestingly, the first failure runs into the catch handler; quite a bit later on I end up with an ACCESS VIOLATION and a corrupted stack.
| Your numbers suggest the easily-hit default 1 MB stack size (c. 150K x 8). So from a quick look at your code (that map::insert especially, and the for(...) body you haven't provided), you are running into a stack overflow :)
You are probably hitting the default limit for the OS you're running on. On Windows, use the VS linker settings, or editbin.exe, or some exotic unportable API, to triple your stack size, and see whether it significantly changes the observed recursion count at the time of the exception.
|
1,609,975 | 1,610,606 | "Elements of Programming" real world examples? | I'm eager to learn about Stepanov's approach to programming described in the book Elements of Programming. Does anyone here have experience with these methods, or can point me to some online resource where this topic matter is discussed?
I've seen the Adobe's Google Tech Talk on A Possible future of software development.
| I was a proofreader for the book, and my feedback to Alex greatly influenced the style of presentation. I am happy to call myself one of his disciples. I find the material fascinating, and it has totally changed the way I write code, even Java code.
Some of Alex's "methods" are radical despite the vague wording I am providing:
Programming is about two things: algorithms and data structures.
Think very hard about every line of code you write.
Understand the structure of the problem you are trying to solve.
Good code is often rewritten to be better.
The goal is to write code that is both as general and as fast as possible.
Naming things correctly is very important.
Often the core of an algorithm is already a well-understood idea.
If your function computes something interesting, return it.
How does that turn into source code? My posts on StackOverflow were done after this improvement in my understanding, so you can read those if you want. Otherwise, I'm not sure how to answer your question to your satisfaction.
|
1,610,029 | 1,610,065 | getters and setters style | (Leaving aside the question of should you have them at all.)
I have always preferred to just use function overloading to give you the same name for both getter and setters.
int rate() { return _rate; }
void rate(int value) { _rate = value; }
// instead of
int getRate() { return _rate; }
void setRate(int value) { _rate = value; }
// mainly because it allows me to write the much cleaner
total( period() * rate() );
// instead of
setTotal( getPeriod() * getRate() );
Naturally I am correct, but I wondered if the library writers had any good reason?
| I would prefer the get/set versions because it is more clear as to what is going on. If I saw rate() and rate(10), how do I know that rate(10) isn't simply using 10 in the calculation to return the rate? I don't, so now I have to start searching to figure out what is going on. A single function name should do one thing, not two opposing things.
Also, as others have pointed out, some prefer to omit the 'get' and leave the 'set', i.e.,
int Rate( );
void SetRate( int value );
That convention is pretty clear as well, I wouldn't have any problem reading that.
|
1,610,030 | 1,610,454 | Why does flowing off the end of a non-void function without returning a value not produce a compiler error? | Ever since I realized, many years ago, that this doesn't produce an error by default (in GCC at least), I've always wondered why.
I understand that you can issue compiler flags to produce a warning, but shouldn't it always be an error? Why does it make sense for a non-void function not returning a value to be valid?
An example as requested in the comments:
#include <stdio.h>
int stringSize()
{
}
int main()
{
char cstring[5];
printf( "the last char is: %c\n", cstring[stringSize()-1] );
return 0;
}
...compiles.
| C99 and C++ standards require non-void functions to return a value, except main. The missing return statement in main will be defined (to return 0). In C++ it's undefined behaviour if execution actually reaches the end of a non-void function other than main, while in C it's only UB if the caller uses the return value.
This means functions can look like they might reach the end without returning a value, but actually can't reach the closing }. John Kugelman's answer shows some examples, like a noreturn function called from one side of an if. It's only undefined behaviour if execution actually does get to the end without reaching a return earlier. The rationale includes that checking if every real code path returns a value is quite difficult (without knowing which functions never return), so it's not illegal to compile a function like your example, only to actually call it like your main does.
As an extension, at least one compiler (MSVC) allows a return value to be set with inline assembly, but most others still require a return statement in functions that use inline asm.
From C++11 draft:
§ 6.6.3/2
Flowing off the end of a function [...] results in undefined behavior in a value-returning function.
§ 3.6.1/5
If control reaches the end of main without encountering a return statement, the effect is that of executing
return 0;
Note that the behaviour described in C++ 6.6.3/2 is not the same in C.
gcc will give you a warning if you call it with -Wreturn-type option.
-Wreturn-type Warn whenever a function is defined with a return-type that
defaults to int. Also warn about any
return statement with no return-value
in a function whose return-type is not
void (falling off the end of the
function body is considered returning
without a value), and about a return
statement with an expression in a
function whose return-type is void.
This warning is enabled by -Wall.
Just as a curiosity, look what this code does:
#include <iostream>
int foo() {
int a = 5;
int b = a + 1;
}
int main() { std::cout << foo() << std::endl; } // may print 6
This code has formally undefined behaviour, and in practice it's calling convention and architecture dependent. On one particular system, with one particular compiler, the return value is the result of last expression evaluation, stored in the eax register of that system's processor, if you disable optimization.
This seems to be a consequence of GCC internals with optimization disabled, because in that case it picks the return-value register if it needs any to implement a statement. With optimization enabled in C++ mode, GCC and clang assume this path of execution is unreachable because it contains undefined behaviour. They don't even emit a ret instruction, so execution falls into the next function in the .text section. Of course undefined behaviour means that anything could happen.
|
1,610,158 | 1,610,171 | the "this" pointer inside a class | the question is simple...
is there any difference in using this->yourvariable or yourvariable directly for some reason?
I am not finding any problem with that, but I am using this-> a lot and would like to know if there is any difference before going further.
I saw a comment on a post here and I don't remember which thread, but the person said something about using the keyword "this".
Personally, I find it nicer to use than the variable directly. It makes the code easier to read and prettier.
Joe
| No, there is no real difference, it is simply a scope qualifier. However, suppose a method
void SetFoo( Foo foo )
{
this->foo = foo;
}
where this->foo is a private member. Here it allows you to take a parameter with the same name as a class/instance variable.
|
1,610,210 | 1,610,229 | C++ Graphic Drawing Library | Does anyone know what's the best graphic drawing library for C++, I want a lib that can draw basic shapes and can make image editing, gradients and vector or 3D would be great to.
The windows drawing functions are complicated and are not very advanced.
| May I suggest using Cairo?
This vector library is very fast, verbose and powerful! Just look at those pretty examples!
There's even integration with OpenGL if you need vectorized 3D textures!
|
1,610,283 | 1,610,312 | Step execution of release code / post-mortem debugging (VS/C++) | Is there any sense to step-execute release code? I noticed that some lines of code are omitted, i.e. some method calls. Also variable preview doesn't show some variables and shows invalid (not real) values for some others, so it's all quite misleading.
I'm asking this question, because loading WinDbg crashdump file into Visual Studio brings the same stack and variables partial view as step-execution. Are there any way to improve crashdump analyze experience, except recompiling application without optimalizations?
Windows, Visual Studio 2005, unmanaged C++
| Recompile just the file of interest without optimisations :)
In general:
Switch to interleaved disassembly mode. Single-stepping through the disassembly will enable you to step into function calls that would otherwise be skipped over, and make inlined code more evident.
Look for alternative ways of getting at values in variables the debugger is not able to directly show you. If they were passed in as arguments, look up the callstack - you will often find they are visible in the caller. If they were retrieved via getters from some object, examine that object; glance over the assembly generated by the code that calculates them to work out where they were stored; etc. If all else fails and disabling optimisations / adding a printf() distorts timings sufficiently to affect debugging, add a dummy global variable and set it to the value of interest on entry to the section of interest.
|
1,610,551 | 1,610,631 | When or where was the term "Most vexing parse" coined? | There are countless articles and blogs discussing C++'s most vexing parse, but I can't seem to find any with a more substantial reference than "the C++ literature."
Where did this term come from?
| Scott Meyers book Effective STL: 50 Specific Ways to Improve Your Use of the Standard Template Library of 2001 might be first published use.
|
1,610,668 | 1,610,963 | Project level c++ exception handling strategy | Say I have nested methods a, b, c, d, e. On each level we return errors in the normal course of operations, but e could also throw an exception (e.g. out of memory on an STL insert). The exceptions are very rare, and how fast or slow the actual unwinding happens is not an issue.
What is the most appropriate strategy for exception handling in this case?
Put it into the lowest level and convert to a normal error condition.
Pros: do not need to write exception safe code, simplest to implement, easiest to test, easiest to understand, minimum compile time information required for unwinding.
Cons: does not look cool, adds noticeable try/catch clatter - practically around every insert and push_back, up to the extent of writing exception safe wrappers around STL containers, there are opinions that there is run time performance penalty for try blocks (and there are opinions that there is no penalty at all).
Handle it at the top.
Pros: looks cool, no clatter.
Cons: really hard to visually verify that all the code in between is indeed exception safe, and testing all exception unwinding paths will be hard.
Handle it at the very top as a complete restart of the application: zap everything that was not zapped by exception handling and start again
Pros: predictable, will tolerate minor blemishes in exception safe code, way better than crash.
Cons: way too harsh
Writing a custom allocator, allowing a() to check the memory reserve before diving down the call stack.
void a()
{
...
x = b();
...
}
int b()
{
y = c();
...
return y + d();
}
int d()
{
...
z = e();
...
}
| Regardless of what you decide here, I would encourage you to pound--or at least gently tap--the notion of exception-safety into the other developers' heads. In my experience, the process of writing exception-safe code has resulted in more cleanly-designed, transactional code.
As a benefit, that coding style works regardless of the presence of exceptions, whereas the reverse is not true.
|
1,610,836 | 1,610,852 | Branchless code that maps zero, negative, and positive to 0, 1, 2 | Write a branchless function that returns 0, 1, or 2 if the difference between two signed integers is zero, negative, or positive.
Here's a version with branching:
int Compare(int x, int y)
{
int diff = x - y;
if (diff == 0)
return 0;
else if (diff < 0)
return 1;
else
return 2;
}
Here's a version that may be faster depending on compiler and processor:
int Compare(int x, int y)
{
int diff = x - y;
return diff == 0 ? 0 : (diff < 0 ? 1 : 2);
}
Can you come up with a faster one without branches?
SUMMARY
The 10 solutions I benchmarked had similar performance. The actual numbers and winner varied depending on compiler (icc/gcc), compiler options (e.g., -O3, -march=nocona, -fast, -xHost), and machine. Canon's solution performed well in many benchmark runs, but again the performance advantage was slight. I was surprised that in some cases some solutions were slower than the naive solution with branches.
| int Compare(int x, int y) {
return (x < y) + ((y < x) << 1); // parentheses needed: + binds tighter than <<
}
Edit: Bitwise only? Guess < and > don't count, then?
int Compare(int x, int y) {
int diff = x - y;
return (!!diff) | (!!(diff & 0x80000000) << 1);
}
But there's that pesky -.
Edit: Shift the other way around.
Meh, just to try again:
int Compare(int x, int y) {
int diff = y - x;
return (!!diff) << ((diff >> 31) & 1);
}
But I'm guessing there's no standard ASM instruction for !!. Also, the << can be replaced with +, depending on which is faster...
Bit twiddling is fun!
Hmm, I just learned about setnz.
I haven't checked the assembler output (but I did test it a bit this time), and with a bit of luck it could save a whole instruction!:
IN THEORY. MY ASSEMBLER IS RUSTY
subl %edi, %esi
setnz %eax
sarl $31, %esi
andl $1, %esi
sarl %eax, %esi
mov %esi, %eax
ret
Rambling is fun.
I need sleep.
|
1,610,875 | 1,618,075 | Is LLVM an exception to the rule for avoiding dynamic casts? | LLVM has its own hand-rolled alternative to RTTI that is a speed improvement over built-in RTTI and allows dynamic casting to classes with no vtable (dyn_cast). However, it can still be used in exactly the way that dynamic_cast<> is used, though it does allow it to be used with more classes.
dyn_cast<> template documentation
LLVM is a reputable C++ project, so this seems to fly in the face of the common saying that too many dynamic casts are a sign of bad design, also known as a code smell. Surely a better-performing dynamic cast does nothing to make its use better design than a standard dynamic_cast. So who is right here? Are there cases where large-scale use of dynamic casting is a good design choice in C++ code? Google turns up 690 occurrences of this kind of dynamic casting in the LLVM trunk source code.
Uses of dyn_cast<> in LLVM trunk
| While performance hits are a reason to avoid dynamic_cast<> for large class hierarchies, it's not the only reason you might want to avoid them. Better performing or not, one should not be more encouraged to use dyn_cast<> because of this claim.
On the other hand, there's absolutely nothing wrong with using dynamic_cast<> when it's the best tool for the job. If its use is justified, and the cleanest way to solve a problem, then it's always right, regardless of the "common saying".
I would certainly not steer clear of popular projects simply because they use dynamic_cast<>s, gotos or any other idiom that's fallen out of favour.
|
1,610,906 | 1,611,001 | debugging C++ code with templates and STL with gdb | What do gdb users here think about its capabilities in regards to debugging code with templates and STL?
Do you use any tricks to make the debugging any simpler? Perhaps some Python scripts? Or are you satisfied the way it is at present in gdb (ver 6.x, haven't tried 7.x yet)?
Thanks.
| I am assuming that you mean visualizing STL code better (and not the debug mode, which gives safe iterators and additional runtime checks). I am not sure if you have looked at these posts:
GNU GCC Documentation: Debugging Support
Using gdb
Starting with version 7.0, GDB includes support for writing pretty-printers in Python. Pretty printers for STL classes are distributed with GCC from version 4.5.0. The most recent version of these printers are always found in libstdc++ svn repository. To enable these printers, check-out the latest printers to a local directory:
Pretty printing STL lists
Also, try using KDevelop/DDD if possible -- they do help.
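To make the check-out step concrete: a minimal ~/.gdbinit that registers the libstdc++ printers looks roughly like the following. The path /home/user/gdb_printers is a placeholder for wherever you checked the printers out; the module name follows the libstdc++ documentation:

```
python
import sys
sys.path.insert(0, '/home/user/gdb_printers/python')
from libstdcxx.v6.printers import register_libstdcxx_printers
register_libstdcxx_printers(None)
end
```

With this in place, print my_vector in GDB 7.x shows the elements rather than the raw internal pointers.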
|
1,611,165 | 1,611,205 | How to use different ifstream modes in c++? |
According to the reference, if I use ifstream infile ( "test.txt" , ifstream::in ); it will Allow input operations on the stream. But what are some of the examples of the "input operations"?
Is ifstream infile ( "test.txt" , ifstream::in | ifstream::binary ); the right syntax to use multiple flags?
Will it make a difference if I change ifstream:: to iso:: ?
Thank you
|
According to the reference, if I use ifstream infile ( "test.txt" ,
ifstream::in ); it will Allow input
operations on the stream. But what are
some of the examples of the "input
operations"?
Reading from a file which would mean everything an input stream can support. See istream member functions. Typically, you can do both formatted (using >>) and unformatted reads (using read). Remember that ifstream is a specialization of the basic_ifstream template for char type. Depending on your needs, say to read UTF-16 encoded file, you may have to use a different specialization (wifstream) or even use a special locale (read this to know more about locales).
Is ifstream infile ( "test.txt" , ifstream::in | ifstream::binary );
the right syntax to use multiple
flags?
Yes.
Will it make a difference if I change ifstream:: to iso:: ?
No.
|
1,611,420 | 1,611,430 | ifstream position in c++ | I am trying to write a simple UTF-8 decoder for my assignment. I'm fairly new with C++ so bear with me here...
I have to determine whether the encoding is valid or not, and output the value of the UTF-8 character in hexadecimal in either case. Say that I have read the first byte and used it to determine the number of bytes in this UTF-8 character. The problem is that after I read the first byte, I'm having trouble setting the ifstream position back one byte to read the whole UTF-8 character. I've tried seekg() and putback(), but I always get a BUS error or some weird output that's not my test data. Please help, thanks.
Even though I can use peek() for the first byte, I still have to read the following bytes to determine whether the encoding is valid or not. The problem of setting back the stream position is still there.
| I would suggest you use peek() to read the first byte instead. seekg() should work to rewind, but a BUS error is usually caused by your code breaking alignment issues, which points to you doing something else evil in your code.
|
1,611,526 | 1,611,602 | How do you std::vector in XCode + C++? | For various reasons (and I assure you they are valid, so no "use Cocoa" talk please), I must work with XCode, C++, OpenGL, OpenCL (with a little GLUT on the side) to rebuild a few graphics demos on Mac (coming from XP + Visual Studio 2005 development). The project was built as a Command Line Tool using "c++ stdc++".
My Program.h file connects my shader objects together, compiles, links, and otherwise prepares them for use as OpenGL shader programs. Contained within this file are the following relevant lines of code:
#include <vector>
using std::vector;
and within the private section of the class:
vector<int> shaderHandles;
and when adding shader handles:
shaderHandles.push_back(shaderHandle);
and finally, when using the pushed shader handles:
for (int s = 0; s < (int) shaderHandles.size(); s++)
{
glAttachShader(handle, shaderHandles[s]);
}
In all my experience and research, there's nothing wrong with these lines within C++. However, when compiling (whether debug or release, so it's not related to the _GLIBCXX_DEBUG problem), the following 4 errors are generated:
/Developer/SDKs/MacOSX10.6.sdk/usr/include/c++/4.2.1/bits/stl_bvector.h:916: error: 'size' is not a member of 'std'
/Developer/SDKs/MacOSX10.6.sdk/usr/include/c++/4.2.1/bits/stl_bvector.h:961: error: 'size' is not a member of 'std'
/Developer/SDKs/MacOSX10.6.sdk/usr/include/c++/4.2.1/bits/vector.tcc:350: error: '__old_size' is not a member of 'std'
/Developer/SDKs/MacOSX10.6.sdk/usr/include/c++/4.2.1/bits/vector.tcc:453: error: '__old_size' is not a member of 'std'
Also, the file that links to stl_bvector.h and vector.tcc is:
/Developer/SDKs/MacOSX10.6.sdk/usr/include/c++/4.2.1/vector
Thus far, numerous Google searches have turned up nothing. All this code works flawlessly on Windows. Worse, if I replace the above code with the list equivalents:
#include <list>
using std::list;
and,
list<int> shaderHandles;
and,
for (list<int>::iterator s = shaderHandles.begin(); s != shaderHandles.end(); s++)
{
glAttachShader(handle, *s);
}
The program works as expected.
But one can't blame this ENTIRELY on the vector implementation, because the following program:
#include <iostream>
#include <vector>
using std::vector;
int main (int argc, char * const argv[])
{
vector<int> test;
test.push_back(1);
test.push_back(2);
test.push_back(3);
test.clear();
return 0;
}
Works with no problems.
I'll be happy to provide more information as necessary.
Please don't tell me I should use Cocoa/Objective-C; it's not really an option right now. And while yes, I can use lists to accomplish this functionality, other parts of my demo are not so easy to rework.
| I'm SO sorry everyone. Mere minutes after posting this, I decided to go on with what I could, saving this issue for later. I found a similar problem occurring with fstream. With this new information available, a Google search brought up this topic, and ultimately the solution.
I had defined my own min and max macros in my completely unrelated vector math files. The solution was to remove my macros and put std:: in front of the min and max calls.
|
1,611,615 | 1,615,422 | boost::spirit and generating different nodes | greetings.
I've been interested in how to get boost::spirit to produce nodes of different classes when parsing the grammar and generating the AST. Say I want to have different node types such as VariableNode (which has the name of the variable as a member), ValueNode (which has the value as a member), etc.
This would be very useful when dealing with a tree walker. In that case we would write a base abstract class for walking all the different nodes (applying the visitor pattern) and extend it for the semantics-checking phase, the code-generation phase and so on.
boost::spirit allows us to parameterize the factory being used for trees, but I've been unable to find a proper way to tune its behavior.
Any ideas or code? Thanks in advance.
| I'm not sure I understand your question, do you mean something like this? :
typedef boost::variant<VariableNode, ValueNode> AbstractNode;
template <typename Iterator>
struct NodeGrammar: public boost::spirit::qi::grammar<Iterator, AbstractNode(), boost::spirit::ascii::space_type>
{
NodeGrammar() : NodeGrammar::base_type(start)
{
start %= variableNode | valueNode >> eps;
variableNode %= /*something*/;
valueNode %= /*something*/;
}
//start
boost::spirit::qi::rule<Iterator, AbstractNode(), boost::spirit::ascii::space_type> start;
boost::spirit::qi::rule<Iterator, VariableNode(), boost::spirit::ascii::space_type> variableNode;
boost::spirit::qi::rule<Iterator, ValueNode(), boost::spirit::ascii::space_type> valueNode;
};
You can then use boost::apply_visitor (see boost::variant documentation) with a visitor class to do the behavior you want.
|
1,611,673 | 1,611,689 | Constant strings address | I have several identical string constants in my program:
const char* Ok()
{
return "Ok";
}
int main()
{
const char* ok = "Ok";
}
Is there guarantee that they are have the same address, i.e. could I write the following code? I heard that GNU C++ optimize strings so they have the same address, could I use that feature in my programs?
int main()
{
const char* ok = "Ok";
if ( ok == Ok() ) // is it ok?
;
}
| There's certainly no guarantee, but it is a common (I think) optimization.
The C++ standard says (2.13.4/2 "String literals):
Whether all string literals are distinct (that is, are stored in nonoverlapping objects) is implementation-defined.
To be clear, you shouldn't write code that assumes this optimization will take place - as Chris Lutz says, C++ code that relies on this is code that's waiting to be broken.
|
1,611,756 | 1,611,777 | Memory management and realloc | I'm going through my program with valgrind to hunt down memory leaks. Here's one that I'm not sure what to do with.
==15634== 500 (224 direct, 276 indirect) bytes in 2 blocks are definitely lost in loss record 73 of 392
==15634== at 0x4007070: realloc (vg_replace_malloc.c:429)
==15634== by 0x807D5C2: hash_set_column(HASH*, int, char const*) (Hash.cpp:243)
==15634== by 0x807BB15: LCD::PluginDiskstats::PluginDiskstats() (PluginDiskstats.cpp:102)
==15634== by 0x806E021: LCD::Evaluator::Evaluator() (Evaluator.cpp:27)
==15634== by 0x8066A87: LCD::LCDControl::LCDControl() (LCDControl.h:16)
==15634== by 0x80667F5: main (Main.cpp:8)
Here's the code:
/* add an entry to the column header table */
void hash_set_column(HASH * Hash, const int number, const char *column)
{
if (Hash == NULL)
return;
Hash->nColumns++;
Hash->Columns = (HASH_COLUMN *)realloc(Hash->Columns, Hash->nColumns * sizeof(HASH_COLUMN)); // line 243
Hash->Columns[Hash->nColumns - 1].key = strdup(column);
Hash->Columns[Hash->nColumns - 1].val = number;
qsort(Hash->Columns, Hash->nColumns, sizeof(HASH_COLUMN), hash_sort_column);
}
Should I be doing something here in regards to memory management?
| The problem is that if realloc() fails the function will return NULL but the original block will still be allocated. However, you've just overwritten the pointer to that block and can't free (or use) it anymore.
|
1,611,771 | 1,611,795 | Deleting object with private destructor | How is that possible that it is allowed to delete object with private destructor in the following code? I've reduced real program to the following sample, but it still compiles and works.
class SomeClass;
int main(int argc, char *argv[])
{
SomeClass* boo = 0; // in real program it will be valid pointer
delete boo; // how it can work?
return -1;
}
class SomeClass
{
private:
~SomeClass() {}; // ! private destructor !
};
| You are trying to delete an object of incomplete class type. The C++ Standard says that you'll get undefined behavior in this case (5.3.5/5):
If the object being deleted has incomplete class type at the point of deletion and the complete class has a non-trivial destructor or a deallocation function, the behavior is undefined.
To detect such cases you could use boost::checked_delete:
template<typename T>
inline void checked_delete( T* p )
{
typedef char type_must_be_complete[ sizeof(T)? 1: -1 ];
(void) sizeof(type_must_be_complete);
delete p;
}
|
1,612,031 | 1,612,051 | Is there any danger in calling free() or delete instead of delete[]? |
Possible Duplicate:
( POD )freeing memory : is delete[] equal to delete ?
Does delete deallocate the elements beyond the first in an array?
char *s = new char[n];
delete s;
Does it matter in the above case seeing as all the elements of s are allocated contiguously, and it shouldn't be possible to delete only a portion of the array?
For more complex types, would delete call the destructor of objects beyond the first one?
Object *p = new Object[n];
delete p;
How can delete[] deduce the number of Objects beyond the first, wouldn't this mean it must know the size of the allocated memory region? What if the memory region was allocated with some overhang for performance reasons? For example one could assume that not all allocators would provide a granularity of a single byte. Then any particular allocation could exceed the required size for each element by a whole element or more.
For primitive types, such as char, int, is there any difference between:
int *p = new int[n];
delete p;
delete[] p;
free p;
Except for the routes taken by the respective calls through the delete->free deallocation machinery?
| It's undefined behaviour (most likely will corrupt heap or crash the program immediately) and you should never do it. Only free memory with a primitive corresponding to the one used to allocate that memory.
Violating this rule may lead to proper functioning by coincidence, but the program can break once anything is changed - the compiler, the runtime, the compiler settings. You should never rely on such proper functioning and expect it.
delete[] uses compiler-specific service data for determining the number of elements. Usually a bigger block is allocated when new[] is called, the number is stored at the beginning and the caller is given the address behind the stored number. Anyway delete[] relies on the block being allocated by new[], not anything else. If you pair anything except new[] with delete[] or vice versa you run into undefined behaviour.
|
1,612,058 | 1,612,075 | Building 64bit libraries | I want to build 64bit libraries for some of my C++ components. Is it required to compile/link the libraries in OS running on physical machine directly? Or can i use a OS running as virtual machine in ESX server to build the libraries? Would i need to take care of anything if i am building in a virtual machine? Please advice
I was interested in 32/64bit for linux.
Thanks in advance!
| If the guest OS is 64-bit and you have a 64-bit compiler, there is no reason why you can't compile and run a 64-bit library/application in your virtualised OS.
|
1,612,235 | 1,612,243 | Upcasts in COM automatic? | In COM, if I have an interface IBase and an interface IX which inherits from IBase, can I call methods of IBase through an IX pointer, and if not, why can I call Release() and AddRef() on any COM interface pointer without an upcast?
| Yes, you can call whatever method of the base through the pointer to the derived. That's exactly why you can call AddRef(), Release() and QueryInterface() through any interface pointer.
|
1,612,451 | 1,616,679 | Executing external program via system() does not run properly | I try to call a program (ncbi blast, for those who need to know) from my code, via calling the command in a system() call.
If I execute the string directly in the shell, it works as intended, but if I try the same string via system(), the program returns much faster, without the intended results. The output file is created, but the file size is 0. The returned error code is also 0. I even tried appending "> output.log 2> error.log" but these files are not created.
I guess it has something to do with environment variables or the path...
The output file name is given via -o command line parameter, not output redirection.
I read something about the popen command being possibly better suited for my use-case, but I can not find it, which library is that from?
| The most usual cause of such problems is an incorrect environment variable setting in one's ~/.bashrc.
You should be able to see what ncbi is unhappy about by executing
$SHELL -c '<exact string you pass to system()>'
Another common way to debug this is with strace. Execute:
strace -fo /tmp/strace.out ./myProgram
and look in /tmp/strace.out for clues.
|
1,612,619 | 1,614,402 | Large Xml files are being truncated by MSXML4 / FreeThreadedDOMDocument40 (COM string Interop issue) | I'm using the following code to load a large Xml document (~5 MB):
int _tmain(int argc, _TCHAR* argv[])
{
::CoInitialize(NULL);
HRESULT hr;
CComPtr< IXMLDOMDocument > spXmlDocument;
hr = spXmlDocument.CoCreateInstance(__uuidof(FreeThreadedDOMDocument60));
if(FAILED(hr)) return FALSE;
spXmlDocument->put_preserveWhiteSpace(VARIANT_TRUE);
spXmlDocument->put_async(VARIANT_FALSE);
spXmlDocument->put_validateOnParse(VARIANT_FALSE);
VARIANT_BOOL bLoadSucceeded = VARIANT_FALSE;
hr = spXmlDocument->load( CComVariant( L"C:\\XMLFile1.xml" ), &bLoadSucceeded );
if(FAILED(hr) || bLoadSucceeded==VARIANT_FALSE) return FALSE;
CComVariant bstrDoc;
hr = spXmlDocument->get_nodeValue(&bstrDoc);
CComPtr< IXMLDOMNode > spNode;
hr = spXmlDocument->selectSingleNode(CComBSTR(L"//SpecialNode"), &spNode );
}
I'm finding that the contents of bstrDoc is truncated (there are no exceptions / failed HResults)
Anyone know why? You can try this yourself just by creating a large Xml file of just <xml></xml> elements (~5 MB should do it)
UPDATE: Updating to use MSXML 6 made no difference, also setting Async to false and using get_nodeValue / get_text made no difference (sample updated)
I noticed that if I did selectSingleNode for a node placed at the end of the document it worked fine - it appears that the document loads successfully, and the issue is instead with getting the text for a single node. I'm perplexed however as I'm yet to find anyone else on the internet having this issue.
UPDATE 2: The problem appears to be related to COM interop itself - I've created a simple C# class that does the same thing and exposed it as a COM object. I can see that although the Xml is fine in my C# app, by the time I look at it in my debugger in the C++ app it looks exactly as it did when using MSXML.
| It appears I was a victim of my own foolishness - the Xml / strings were in fact not being truncated, the viewer in Visual Studio was simply lying to me.
Outputting the strings to a file showed that the strings were all as they should be.
|
1,612,982 | 1,612,995 | How does automatic memory allocation actually work in C++? | In C++, assuming no optimization, do the following two programs end up with the same memory allocation machine code?
int main()
{
int i;
int *p;
}
int main()
{
int *p = new int;
delete p;
}
| No, without optimization ...
int main()
{
int i;
int *p;
}
does almost nothing - just a couple of instructions to adjust the stack pointer, but
int main()
{
int *p = new int;
delete p;
}
allocates a block of memory on heap and then frees it, that's a whole lot of work (I'm serious here - heap allocation is not a trivial operation).
|
1,613,041 | 1,764,696 | Embedded console tools functionality in application | I'm currently developing an application that happens to require some file preprocessing before actually reading the data.
Doing it externally was not a possibility so I came up with a fork & execve of "cut options filename | sort | uniq -c" etc... and I execute it like that.
However I thought that maybe there was already another option to reuse all those ancient and good working tools directly in my code and not having to invoke them through a shell.
I am currently looking at busybox to see if there is an easy way of statically link and programatically call those utils but no luck yet.
| Arkaitz, the answer is no, because of how you've phrased the question.
You ask for "another option to reuse all those ancient and good working tools directly in my code and not having to invoke them through a shell"
The problem with that is, the proper and accepted way of reusing all those ancient and good working tools is exactly what you're saying you want to avoid - invoking them via a shell (or at least, firing them up as child processes via popen for example) - and it's definitely not recommended to try to subsume, copy, or duplicate these tools into your code.
The UNIX (and Linux) model for data manipulation is robust and proven - why would you want to avoid it?
|
1,613,074 | 1,613,094 | In C++, what's the use of having a function void foo(int** p)? | I have been told by my colleague that void foo(int** p) is used like an out parameter in C#. Can someone precisely explain how?
I get the idea, but there is something that is missing. I know that if we pass the pointer p itself to foo(*p) and the function body does p = new int(), we might have a dangling pointer! But how does foo(**p) prevent something like that from happening?
| void foo(int** p)
{
*p = new int;
}
int main()
{
int *p = NULL;
foo(&p);
//unless you pass the pointer to p, the memory allocated in the
//function foo is not available here.
foo(p); // lets assume foo is foo(int* p)
//when you pass only pointer p to `foo()` then the value of the pointer will be
//passed ( pass by value) to foo() and hence, any modification to pointer p ( in
// the form of allocating memory also) will not be available after the
// control returns from foo()
// The p will still points to NULL here, not the memory allocated in foo().
}
|
1,613,217 | 1,616,507 | How do I convert from _TCHAR * to char * when using C++ variable-length args? | We need to pass a format _TCHAR * string, and a number of char * strings into a function with variable-length args:
inline void FooBar(const _TCHAR *szFmt, const char *cArgs, ...) {
//...
}
So it can be called like so:
char *foo = "foo";
char *bar = "bar";
FooBar(_T("Test %s %s"), foo, bar);
Obviously a simple fix would be to use _TCHAR instead of char, but we don't have that luxury unfortunately.
We need to use this with va_start, etc so we can format a string:
va_list args;
_TCHAR szBuf[BUFFER_MED_SIZE];
va_start(args, cArgs);
_vstprintf_s(szBuf, BUFFER_MED_SIZE, szFmt, args);
va_end(args);
Unfortunately we cannot use this because it give us this error:
Unhandled exception at 0x6a0d7f4f (msvcr90d.dll) in foobar.exe:
0xC0000005: Access violation reading location 0x2d86fead.
I'm thinking we need to convert our char * to _TCHAR * - but how?
| Use %hs or %hS instead of %s. That will force the parameters to be interpretted as char* in both Ansi and Unicode versions of printf()-style functions, ie:
inline void LogToFile(const _TCHAR *szFmt, ...)
{
va_list args;
TCHAR szBuf[BUFFER_MED_SIZE];
va_start(args, szFmt);
_vstprintf_s(szBuf, BUFFER_MED_SIZE, szFmt, args);
va_end(args);
}
{
char *foo = "foo";
char *bar = "bar";
LogToFile(_T("Test %hs %hs"), foo, bar);
}
|
1,613,230 | 1,618,867 | Uses of C comma operator | You see it used in for loop statements, but it's legal syntax anywhere. What uses have you found for it elsewhere, if any?
| C language (as well as C++) is historically a mix of two completely different programming styles, which one can refer to as "statement programming" and "expression programming". As you know, every procedural programming language normally supports such fundamental constructs as sequencing and branching (see Structured Programming). These fundamental constructs are present in C/C++ languages in two forms: one for statement programming, another for expression programming.
For example, when you write your program in terms of statements, you might use a sequence of statements separated by ;. When you want to do some branching, you use if statements. You can also use loops and other kinds of control transfer statements.
In expression programming the same constructs are available to you as well. This is actually where the , operator comes into play. The , operator is nothing else than a separator of sequential expressions in C, i.e. the , operator in expression programming serves the same role as ; does in statement programming. Branching in expression programming is done through the ?: operator and, alternatively, through the short-circuit evaluation properties of the && and || operators. (Expression programming has no loops though, and to replace them with recursion you'd have to fall back on statement programming.)
For example, the following code
a = rand();
++a;
b = rand();
c = a + b / 2;
if (a < c - 5)
d = a;
else
d = b;
which is an example of traditional statement programming, can be re-written in terms of expression programming as
a = rand(), ++a, b = rand(), c = a + b / 2, a < c - 5 ? d = a : d = b;
or as
a = rand(), ++a, b = rand(), c = a + b / 2, d = a < c - 5 ? a : b;
or
d = (a = rand(), ++a, b = rand(), c = a + b / 2, a < c - 5 ? a : b);
or
a = rand(), ++a, b = rand(), c = a + b / 2, (a < c - 5 && (d = a, 1)) || (d = b);
Needless to say, in practice statement programming usually produces much more readable C/C++ code, so we normally use expression programming in very well measured and restricted amounts. But in many cases it comes handy. And the line between what is acceptable and what is not is to a large degree a matter of personal preference and the ability to recognize and read established idioms.
As an additional note: the very design of the language is obviously tailored towards statements. Statements can freely invoke expressions, but expressions can't invoke statements (aside from calling pre-defined functions). This situation is changed in a rather interesting way in GCC compiler, which supports so called "statement expressions" as an extension (symmetrical to "expression statements" in standard C). "Statement expressions" allow user to directly insert statement-based code into expressions, just like they can insert expression-based code into statements in standard C.
As another additional note: in C++ language functor-based programming plays an important role, which can be seen as another form of "expression programming". According to the current trends in C++ design, it might be considered preferable over traditional statement programming in many situations.
|
1,613,341 | 1,613,578 | What do the following phrases mean in C++: zero-, default- and value-initialization? | What do the following phrases mean in C++:
zero-initialization,
default-initialization, and
value-initialization
What should a C++ developer know about them?
| One thing to realize is that 'value-initialization' is new with the C++ 2003 standard - it doesn't exist in the original 1998 standard (I think it might be the only difference that's more than a clarification). See Kirill V. Lyadvinsky's answer for the definitions straight from the standard.
See this previous answer about the behavior of operator new for details on the different behavior of these types of initialization and when they kick in (and when they differ from C++98 to C++03):
Do the parentheses after the type name make a difference with new?
The main point of the answer is:
Sometimes the memory returned by the new operator will be initialized, and sometimes it won't depending on whether the type you're newing up is a POD, or if it's a class that contains POD members and is using a compiler-generated default constructor.
In C++1998 there are 2 types of initialization: zero and default
In C++2003 a 3rd type of initialization, value initialization was added.
To say they least, it's rather complex and when the different methods kick in are subtle.
One thing to certainly be aware of is that MSVC follows the C++98 rules, even in VS 2008 (VC 9 or cl.exe version 15.x).
The following snippet shows that MSVC and Digital Mars follow C++98 rules, while GCC 3.4.5 and Comeau follow the C++03 rules:
#include <cstdio>
#include <cstring>
#include <new>
struct A { int m; }; // POD
struct B { ~B(); int m; }; // non-POD, compiler generated default ctor
struct C { C() : m() {}; ~C(); int m; }; // non-POD, default-initialising m
int main()
{
char buf[sizeof(B)];
std::memset( buf, 0x5a, sizeof( buf));
// use placement new on the memset'ed buffer to make sure
// if we see a zero result it's due to an explicit
// value initialization
B* pB = new(buf) B(); //C++98 rules - pB->m is uninitialized
//C++03 rules - pB->m is set to 0
std::printf( "m is %d\n", pB->m);
return 0;
}
|
1,613,494 | 1,613,922 | Why was wchar_t invented? | Why is wchar_t needed? How is it superior to short (or __int16 or whatever)?
(If it matters: I live in Windows world. I don't know what Linux does to support Unicode.)
|
Why is wchar_t needed? How is it superior to short (or __int16 or whatever)?
In the C++ world, wchar_t is its own type (I think it's a typedef in C), so you can overload functions based on this. For example, this makes it possible to output wide characters and not to output their numerical value. In VC6, where wchar_t was just a typedef for unsigned short, this code
wchar_t wch = L'A';
std::wcout << wch;
would output 65 because
std::basic_ostream<wchar_t>::operator<<(unsigned short)
was invoked. In newer VC versions wchar_t is a distinct type, so
operator<<(std::basic_ostream<wchar_t>&, wchar_t)
is called, and that outputs A.
|
1,613,678 | 1,613,721 | Why can't C++ Builder find my headers? | I am required to recompile a C++ builder project, and am come across this problem.
one of the unit contains the followings:
#include "LMDBaseControl.hpp"
#include "LMDBaseGraphicControl.hpp"
#include "LMDBaseLabel.hpp"
#include "LMDBaseMeter.hpp"
#include "LMDControl.hpp"
:
When I compiled this unit, I got the following error messages:
MHSS_ISS_HMI_v3_2.cpp(41): #include
....\include\MHSS\iss_hmi_gui_cached.h
[C++ Error] iss_hmi_gui_cached.h(68):
E2209 Unable to open include file
'LMDBaseControl.hpp'
Full parser context
MHSS_ISS_HMI_v3_2.cpp(41): #include
....\include\MHSS\iss_hmi_gui_cached.h
[C++ Error] iss_hmi_gui_cached.h(69):
E2209 Unable to open include file
'LMDBaseGraphicControl.hpp'
Full parser context
MHSS_ISS_HMI_v3_2.cpp(41): #include
....\include\MHSS\iss_hmi_gui_cached.h
[C++ Error] iss_hmi_gui_cached.h(70):
E2209 Unable to open include file
'LMDBaseLabel.hpp'
Full parser context
MHSS_ISS_HMI_v3_2.cpp(41): #include
....\include\MHSS\iss_hmi_gui_cached.h
[C++ Error] iss_hmi_gui_cached.h(71):
E2209 Unable to open include file
'LMDBaseMeter.hpp'
MHSS_ISS_HMI_v3_2.cpp(41): #include
....\include\MHSS\iss_hmi_gui_cached.h
[C++ Error] iss_hmi_gui_cached.h(72):
E2209 Unable to open include file
'LMDControl.hpp' :
I have installed LMD Tools 7.0 on my Borland C++ builder 6, and set the library to the folder where I installed the LMD Tool.
| You need to add both the path to the library and the path to the H files (2 separate options in the Borland options dialog).
|
1,614,379 | 1,615,158 | how to determine which files has been changed from those in rcs | I am working with a c++ codebase using rcs repository (agh, old I know), during major code changes, I modify a lot of files, which I sometimes lose track of. So I would like to have a small script which will list the files that are different (those that I changed) from files in the repository. It is sort of unrealistic to do rcsdiff on each file. Assume that all files are in the same directory. Is there a small script or alias that will list all the files I have modified?
thanks
| Wow.
RCS is designed for source control of a single file.
It is rarely used directly. CVS is a system that is layered on top of RCS to extend the functionality to a bunch of files (ie a project).
You could do it in the shell with:
tcsh
foreach a ( *.cpp )
echo ${a}
echo "=============================================================="
rcsdiff ${a}
echo "=============================================================="
echo
echo
echo
end
If you had been using cvs I would suggest using tags:
Tag before you start modifying and then tag at regular check points.
cvs tag <startChangeTag>
Then you can compare any of the files against the tagged version.
|
1,614,393 | 1,680,334 | SDL GL program terminates immediately | I'm using Dev-C++ 4.9.9.2 (don't ask why) and SDL 1.2.8.
Next I've created new project: SDL&GL. This project contains already some code:
#include <SDL/SDL.h>
#include <gl/gl.h>
int main(int argc, char *argv[]){
SDL_Event event;
float theta = 0.0f;
SDL_Init(SDL_INIT_VIDEO);
SDL_SetVideoMode(600, 300, 0, SDL_OPENGL | SDL_HWSURFACE | SDL_NOFRAME);
glViewport(0, 0, 600, 300);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClearDepth(1.0);
glDepthFunc(GL_LESS);
glEnable(GL_DEPTH_TEST);
glShadeModel(GL_SMOOTH);
glMatrixMode(GL_PROJECTION);
glMatrixMode(GL_MODELVIEW);
int done;
for(done = 0; !done;){
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glTranslatef(0.0f,0.0f,0.0f);
glRotatef(theta, 0.0f, 0.0f, 1.0f);
glBegin(GL_TRIANGLES);
glColor3f(1.0f, 0.0f, 0.0f);
glVertex2f(0.0f, 1.0f);
glColor3f(0.0f, 1.0f, 0.0f);
glVertex2f(0.87f, -0.5f);
glColor3f(0.0f, 0.0f, 1.0f);
glVertex2f(-0.87f, -0.5f);
glEnd();
theta += .5f;
SDL_GL_SwapBuffers();
SDL_PollEvent(&event);
if(event.key.keysym.sym == SDLK_ESCAPE)
done = 1;
}
SDL_Quit();
return(0);
}
Next I compiled project and try to run it. After run the program shows for less than 1 second and immediately terminates. Debugger returns following error: "An Access Violation (Segmentation Fault) raised in your program".
I'm using Windows 2003 and Radeon x1950 PRO with latest drivers.
I've tested the program on a laptop with Windows XP and it works perfectly. Why doesn't this program work on my computer?
| I finally found some time to solve this problem. I completely uninstalled the old graphics card drivers and installed the 9.8 ATI drivers with Catalyst Control Center. Now everything is working.
There was no problem in the code itself; the problem was with the graphics drivers on my system. Anyway, thanks for your answers!
|
1,614,595 | 1,614,670 | Converting wide char string to lowercase in C++ | How do I convert a wchar_t string from upper case to lower case in C++?
The string contains a mixture of Japanese, Chinese, German and Greek characters.
I thought about using towlower...
http://msdn.microsoft.com/en-us/library/8h19t214%28VS.80%29.aspx
.. but the documentation says that:
The case conversion of towlower is locale-specific. Only the characters relevant to the current locale are changed in case.
Edit: Maybe I should describe what I'm doing. I receive a Unicode search query from a user. It's originally in UTF-8 encoding, but I'm converting it to a widechar (I may be wrong on the wording). My debugger (VS2008) correctly shows the Japanese, German, etc. characters in the "variable quick watch". I need to go through another set of data in Unicode and find matches of the search string. While this is no problem for me to do when the search is case sensitive, it's more problematic to do it case insensitive. My (maybe naive) approach to solve the problem would be to convert all input data and output data to lower case and then compare it.
| If your string contains all those characters, the codeset must be Unicode-based. If implemented properly, Unicode (Chapter 4 'Character Properties') defines character properties including whether the character is upper case and the lower case mapping, and so on.
Given that preamble, the towlower() function from <wctype.h> is the correct tool to use. If it doesn't do the job, you have a QoI (Quality of Implementation) problem to discuss with your vendor. If you find the vendor unresponsive, then look at alternative libraries. In this case, you might consider ICU (International Components for Unicode).
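To illustrate, here is a minimal sketch (the helper name is mine, and how much gets mapped beyond ASCII depends on the locale installed via setlocale(LC_CTYPE, ...)):

```cpp
#include <cwctype>  // std::towlower
#include <string>

// Lowercase a wide string using towlower(). The mapping is
// locale-dependent: call std::setlocale(LC_CTYPE, "") first so the
// C library uses the environment's (ideally Unicode-based) locale.
std::wstring to_lower(const std::wstring& in) {
    std::wstring out;
    out.reserve(in.size());
    for (std::wstring::size_type i = 0; i < in.size(); ++i)
        out += static_cast<wchar_t>(std::towlower(static_cast<wint_t>(in[i])));
    return out;
}
```

A case-insensitive match can then be done by lowercasing both the query and the data before comparing, as described in the question.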
|
1,614,641 | 1,614,681 | C++ and Windows, CRT | I am developing an application using C++ in VS 2008.
Now I need to either install the respective MSM or install the redist on the customer machine to get this working.
Is there any way I can just copy those CRT DLLs and get the application running?
The private assembly option seems complicated.
| If you just depend on the CRT, then yes you can simply XCOPY deploy it as a private assembly and it will work just fine. Put it in the same folder as your application.
Doing this will prevent your application from taking advantage of servicing releases of the CRT though. This may or may not be an issue for you.
|
1,614,651 | 1,615,226 | Do sequence points prevent code reordering across critical section boundaries? | Suppose that one has some lock based code like the following where mutexes are used to guard against inappropriate concurrent read and write
mutex.get() ; // get a lock.
T localVar = pSharedMem->v ; // read something
pSharedMem->w = blah ; // write something.
pSharedMem->z++ ; // read and write something.
mutex.release() ; // release the lock.
If one assumed that the generated code was created in program order, there is still a requirement for appropriate hardware memory barriers like isync,lwsync,.acq,.rel. I'll assume for this question that the mutex implementation takes care of this part, providing a guarantee that the pSharedMem reads and writes all occur "after" the get, and "before" the release() [but that surrounding reads and writes can get into the critical section as I expect is the norm for mutex implementations]. I'll also assume that volatile accesses are used in the mutex implementation where appropriate, but that volatile is NOT used for the data protected by the mutex (understanding why volatile does not appear to be a requirement for the mutex-protected data is really part of this question).
I'd like to understand what prevents the compiler from moving the pSharedMem accesses outside of the critical region. In the C and C++ standards I see that there is a concept of sequence point. Much of the sequence point text in the standards docs I found incomprehensible, but if I was to guess what it was about, it is a statement that code should not be reordered across a point where there is a call with unknown side effects. Is that the gist of it? If that is the case what sort of optimization freedom does the compiler have here?
With compilers doing tricky optimizations like profile driven interprocedural inlining (even across file boundaries), even the concept of unknown side effect gets kind of blurry.
It is perhaps beyond the scope of a simple question to explain this in a self contained way here, so I am open to being pointed at references (preferably online and targeted at mortal programmers, not compiler writers and language designers).
EDIT: (in response to Jalf's reply)
I'd mentioned the memory barrier instructions like lwsync and isync because of the CPU reordering issues you also mentioned. I happen to work in the same lab as the compiler guys (for one of our platforms at least), and having talked to the implementers of the intrinsics I happen to know that at least for the xlC compiler __isync() and __lwsync() (and the rest of the atomic intrinsics) are also a code reordering barrier. In our spinlock implementation this is visible to the compiler since this part of our critical section is inlined.
However, suppose you weren't using a custom-built lock implementation (like we happen to be, which is likely uncommon), and just called a generic interface such as pthread_mutex_lock(). There the compiler isn't informed of anything more than the prototype. I've never seen it suggested that code like
pthread_mutex_lock( &m ) ;
pSharedMem->someNonVolatileVar++ ;
pthread_mutex_unlock( &m ) ;
pthread_mutex_lock( &m ) ;
pSharedMem->someNonVolatileVar++ ;
pthread_mutex_unlock( &m ) ;
would be non-functional unless the variable was changed to volatile. That increment is going to have a load/increment/store sequence in each of the back to back blocks of code, and would not operate correctly if the value of the first increment is retained in-register for the second.
It seems likely that the unknown side effects of the pthread_mutex_lock() is what protects this back to back increment example from behaving incorrectly.
I'm talking myself into the conclusion that the semantics of a code sequence like this in a threaded environment are not really strictly covered by the C or C++ language specs.
| In short, the compiler is allowed to reorder or transform the program as it likes, as long as the observable behavior on a C++ virtual machine does not change. The C++ standard has no concept of threads, and so this fictive VM only runs a single thread. And on such an imaginary machine, we don't have to worry about what other threads see. As long as the changes don't alter the outcome of the current thread, all code transformations are valid, including reordering memory accesses across sequence points.
understanding why volatile does not appear to be a requirement for the mutex protected Data is really part of this question
Volatile ensures one thing, and one thing only: reads from a volatile variable will be read from memory every time -- the compiler won't assume that the value can be cached in a register. And likewise, writes will be written through to memory. The compiler won't keep it around in a register "for a while, before writing it out to memory".
But that's all. When the write occurs, a write will be performed, and when the read occurs, a read will be performed. But it doesn't guarantee anything about when this read/write will take place. The compiler may, as it usually does, reorder operations as it sees fit (as long as it doesn't change the observable behavior in the current thread, the one that the imaginary C++ CPU knows about). So volatile doesn't really solve the problem. On the other hand, it offers a guarantee that we don't really need. We don't need every write to the variable to be written out immediately, we just want to ensure that they get written out before crossing this boundary. It's fine if they're cached until then - and likewise, once we've crossed the critical section boundary, subsequent writes can be cached again for all we care -- until we cross the boundary the next time. So volatile offers a too strong guarantee which we don't need, but doesn't offer the one we do need (that reads/writes won't get reordered)
So to implement critical sections, we need to rely on compiler magic. We have to tell it that "ok, forget about the C++ standard for a moment, I don't care what optimizations it would have allowed if you'd followed that strictly. You must NOT reorder any memory accesses across this boundary".
Critical sections are typically implemented via special compiler intrinsics (essentially special functions that are understood by the compiler), which 1) force the compiler to avoid reordering across that intrinsic, and 2) makes it emit the necessary instructions to get the CPU to respect the same boundary (because the CPU reorders instructions too, and without issuing a memory barrier instruction, we'd risk the CPU doing the same reordering that we just prevented the compiler from doing)
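The compiler-reordering half of such an intrinsic can be sketched with GCC-style inline assembly (an assumption on my part that you're on a GCC-compatible compiler; a real lock additionally needs the CPU barrier instruction, which this sketch omits):

```cpp
// An empty asm statement with a "memory" clobber: it emits no
// instructions, but the compiler must assume memory was touched, so
// it cannot keep shared variables cached in registers across this
// point, nor move loads/stores past it.
inline void compiler_barrier() {
    asm volatile("" ::: "memory");
}

// Sketch of use around a critical section: the two increments of the
// shared counter cannot be merged or hoisted across the barriers.
int bump_twice(int* shared) {
    compiler_barrier();   // stands in for the "acquire" side of a lock
    ++*shared;
    compiler_barrier();   // "release" side
    compiler_barrier();
    ++*shared;
    compiler_barrier();
    return *shared;
}
```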
|
1,614,794 | 1,614,858 | Modify PL/SQL statement strings in C++ | This is my use case: Input is a string representing an Oracle PL/SQL statement of arbitrary complexity. We may assume it's a single statement (not a script).
Now, several bits of this input string have to be rewritten.
E.g. table names need to be prefixed, aggregate functions in the selection list that don't use a column alias should be assigned a default one:
SELECT SUM(ABS(x.value)),
TO_CHAR(y.ID,'111,111'),
y.some_col
FROM
tableX x,
(SELECT DISTINCT ID
FROM tableZ z
WHERE ID > 10) y
WHERE
...
becomes
SELECT SUM(ABS(x.value)) COL1,
TO_CHAR(y.ID,'111,111') COL2,
y.some_col
FROM
pref.tableX x,
(SELECT DISTINCT ID, some_col
FROM pref.tableZ z
WHERE ID > 10) y
WHERE
...
(Disclaimer: just to illustrate the issue, statement does not make sense)
Since aggregate functions might be nested and subSELECTs are a b_tch, I dare not use regular expressions. Well, actually I did and achieved 80% success, but I do need the remaining 20%.
The right approach, I presume, is to use grammars and parsers.
I fiddled around with c++ ANTLR2 (although I do not know much about grammars and parsing with the help of such). I do not see an easy way to get the SQL bits:
list<string> *ssel = theAST.getSubSelectList(); // fantasy land
Could anybody maybe provide some pointers on how "parsing professionals" would pursue this issue?
EDIT: I am using Oracle 9i.
| Maybe you can use this; it changes a select statement into an XML block:
declare
cl clob;
begin
dbms_lob.createtemporary (
cl,
true
);
sys.utl_xml.parsequery (
user,
'select e.deptno from emp e where deptno = 10',
cl
);
dbms_output.put_line (cl);
dbms_lob.freetemporary (cl);
end;
/
<QUERY>
<SELECT>
<SELECT_LIST>
<SELECT_LIST_ITEM>
<COLUMN_REF>
<SCHEMA>MICHAEL</SCHEMA>
<TABLE>EMP</TABLE>
<TABLE_ALIAS>E</TABLE_ALIAS>
<COLUMN_ALIAS>DEPTNO</COLUMN_ALIAS>
<COLUMN>DEPTNO</COLUMN>
</COLUMN_REF>
....
....
....
</QUERY>
See here: http://forums.oracle.com/forums/thread.jspa?messageID=3693276
Now you 'only' need to parse this XML block.
Edit1:
Sadly I don't fully understand the needs of the OP, but I hope this can help (it is another way of asking for the 'names' of the columns of, for example, the query select count(*), max(dummy) from dual):
set serveroutput on
DECLARE
c NUMBER;
d NUMBER;
col_cnt PLS_INTEGER;
f BOOLEAN;
rec_tab dbms_sql.desc_tab;
col_num NUMBER;
PROCEDURE print_rec(rec in dbms_sql.desc_rec) IS
BEGIN
dbms_output.new_line;
dbms_output.put_line('col_type = ' || rec.col_type);
dbms_output.put_line('col_maxlen = ' || rec.col_max_len);
dbms_output.put_line('col_name = ' || rec.col_name);
dbms_output.put_line('col_name_len = ' || rec.col_name_len);
dbms_output.put_line('col_schema_name= ' || rec.col_schema_name);
dbms_output.put_line('col_schema_name_len= ' || rec.col_schema_name_len);
dbms_output.put_line('col_precision = ' || rec.col_precision);
dbms_output.put_line('col_scale = ' || rec.col_scale);
dbms_output.put('col_null_ok = ');
IF (rec.col_null_ok) THEN
dbms_output.put_line('True');
ELSE
dbms_output.put_line('False');
END IF;
END;
BEGIN
c := dbms_sql.open_cursor;
dbms_sql.parse(c,'select count(*),max(dummy) from dual ',dbms_sql.NATIVE);
dbms_sql.describe_columns(c, col_cnt, rec_tab);
for i in rec_tab.first..rec_tab.last loop
print_rec(rec_tab(i));
end loop;
dbms_sql.close_cursor(c);
END;
/
(See here for more info: http://www.psoug.org/reference/dbms_sql.html)
The OP also wants to be able to change the schema name of the table in a query. I think the easiest way to achieve that is to query the table names from user_tables, search the SQL statement for those table names and prefix them, or to do an 'alter session set current_schema = ....'.
|
1,614,867 | 1,614,878 | why would std::vector max_size() function return -1? | I have a std::vector<unsigned char> m_vData;
m_vData.max_size() always returns -1. Why does that happen?
| Probably because you're assigning it to a signed type before viewing. The return value of max_size is typically size_t which is an unsigned type. A straight conversion to say int on many platforms would return -1.
Try the following instead
std::vector<unsigned char>::size_type v1 = myVector.max_size();
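To see why the raw conversion misbehaves, here's a sketch (the -1 result is implementation-defined, though typical on two's-complement machines):

```cpp
#include <cstddef>

// max_size() is typically a value near SIZE_MAX; squeezing such a
// value into a signed int wraps it around to -1 on common platforms
// (the conversion is implementation-defined for out-of-range values).
int as_int(std::size_t n) {
    return static_cast<int>(n);
}
```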
|
1,614,988 | 1,615,004 | Vector initializing slower than array...why? | I tried 2 things: (pseudo code below)
int arr[10000];
for (int i = 0; i < 10000; i++)
{
for (int j = 0; j < 10000; j++)
{
arr[j] = j;
}
}
and
vector<int> arr(10000);
for (int i = 0; i < 10000; i++)
{
for (int j = 0; j < 10000; j++)
{
arr[j] = j;
}
}
I ran both the programs and timed it using the "time" shell command. Program 1 runs in 5 seconds, program 2 runs in 30 seconds. I ran both programs with compiler optimization turned on, and both programs ran in about the same time (0.38s). I am confused by these results. Can someone please explain to me why this is happening?
Thanks!
| For the vector template, subscripting is done with operator[]. With optimization turned off, that'll usually be generated as a real function call, adding a lot of overhead to something as simple as subscripting into an array. When you turn on optimization, it's generated inline, removing that overhead.
|
1,615,109 | 1,615,162 | Does Windows 7 render old programs' controls with GDI or the new DWM/WDDM? | In Windows XP the Win32 API renders the controls using GDI/GDI+.
Now I'm on 7, so if I use the API's functions, will the rendering automatically be handled by the DWM/WDDM (so by DirectX)? or will it continue to render with GDI?
Or likewise, will an old app written with WinAPI, be rendered with GDI also in Windows 7?
Thank you in advance for the help :)
| In my experience, if the Aero display is on everything will render via that system, it just won't be obvious to your application. You'll still render in GDI, but it will be to a back buffer and not directly to the screen buffer (in fact it's more complicated then that). That way your older app can get the benefits of the new features, like the live preview effects, without having to be aware of them.
Really though, your application doesn't really notice a difference. The API is still the same API as before and works as you expect it. There are ways to take advantage of this, but you have to opt in to really use it.
|
1,615,197 | 1,615,253 | Templates and std::numeric_limits | I have a class called Atomic which is basically an _Atomic_word plus methods that call the gcc atomic builtins.
class Atomic{
mutable volatile _Atomic_word value_;
public:
Atomic(int value = 0): value_(value) {}
**** blah blah ****
};
I would like std::numeric_limits<Atomic> to instantiate to std::numeric_limits<underlying integer type> (e.g. on my system _Atomic_word is just a typedef for int).
Is there a way to do this?
| std::numeric_limits<Atomic> will instantiate with Atomic as the type, you can't subvert that. However you could specialise std::numeric_limits for Atomic like this
template<>
class numeric_limits< Atomic > : public numeric_limits< Atomic::UnderlyingType >
{
};
where you obviously expose UnderlyingType as a type in Atomic.
|
1,615,215 | 1,615,259 | Does ASP.NET support C++? | When I go to New -> Web site, in the drop-down menu "Language" there are only 2 languages: Visual C# and Visual Basic. No Visual C++. Maybe, I'm using wrong version of Visual Studio? (mine is 9.0.21022)
I tried to google this problem. I found a topic which tells that using C++ in ASP.NET is impossible. But it was posted in 2002 and I hope that something has changed since that year.
Is it possible to write ASP.NET applications using C++? If it does, can I use visual designer with this language?
| Visual Studio generates C# and VB code and that's why it provides you only those options, because the visual designers from which code is generated don't understand C++. There's nothing preventing you from creating a C++ project that uses the managed .NET codebase like the System, System.Web.* namespaces, etc. You won't have the designers or code generators working for you, which means comparatively more coding for you; however arguably a C++ programmer is accustomed to not having a lot of visual design support.
Microsoft provides information about ways of programming .NET using C++.
The caveat is you might not be able to use the version of Visual Studio you wanted to use. Worst case scenario is you use a text editor and invoke the compiler from the command-line.
|
1,615,518 | 1,615,557 | vector.resize function corrupting memory when size is too large | What's happening is I'm reading encryption packets, and I encounter a corrupted packet that gives back a very large random number for the length.
size_t nLengthRemaining = packet.nLength - (packet.m_pSource->GetPosition() - packet.nDataOffset);
seckey.SecretValues.m_data.resize(nLengthRemaining);
In this code m_data is a std::vector<unsigned char>. nLengthRemaining is too large due to a corrupted data packet, therefore the resize function throws. The problem isn't that resize throws (we handle the exceptions), but that resize has corrupted memory already and this leads to more exceptions later.
What I want to do is know if the length is too large before I call resize, then only call resize if it's ok. I have tried putting this code before the call to resize:
std::vector<unsigned char>::size_type nMaxSize = seckey.SecretValues.m_data.max_size();
if(seckey.SecretValues.m_data.size() + nLengthRemaining >= nMaxSize) {
throw IHPGP::PgpException("corrupted packet: length too big.");
}
seckey.SecretValues.m_data.resize(nLengthRemaining);
This code is using the std::vector max_size member function to test if the nLengthRemaining is larger. That must not be reliable though, because nLengthRemaining is still less than nMaxSize, but apparently still large enough to cause resize to have a problem (nMaxSize was 4xxxxxxxxx and nLengthRemaining is 3xxxxxxxxx).
Also, I haven't determined what exception resize is throwing. It's not a std::length_error and it's not a std::bad_alloc. What exception it's throwing really isn't too important to me, but I'm curious to know.
By the way, just so you know, this code does work correctly in normal cases. This case of a corrupted data packet is the only place where it goes crazy. Please help! Thanks.
UPDATE:
@Michael. For now I'll just ignore the packet if it's larger than 5 MB. I'll discuss with other team members about possibly validating the packets (it may already be there and I just don't know it). I'm starting to think it really is a bug in our version of the STL; the exception it throws isn't even a std::exception, which surprised me. I'll try to find out from my supervisor what version of the STL we're running too (how would I check?).
ANOTHER UPDATE:
I just proved that it is a bug in the STL version I'm using on my Visual Studio 6 development machine. I wrote this sample app:
// VectorMaxSize.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <vector>
#include <iostream>
#include <math.h>
#include <typeinfo>
typedef std::vector<unsigned char> vector_unsigned_char;
void fill(vector_unsigned_char& v) {
for (int i=0; i<100; i++) v.push_back(i);
}
void oput(vector_unsigned_char& v) {
std::cout << "size: " << v.size() << std::endl;
std::cout << "capacity: " << v.capacity() << std::endl;
std::cout << "max_size: " << v.max_size() << std::endl << std::endl;
}
int main(int argc, char* argv[]) {
{
vector_unsigned_char v;
fill(v);
try{
v.resize(static_cast<size_t>(3555555555));
}catch(std::bad_alloc&) {
std::cout << "caught bad alloc exception" << std::endl;
}catch(const std::exception& x) {
std::cerr << typeid(x).name() << std::endl;
}catch(...) {
std::cerr << "unknown exception" << std::endl;
}
oput(v);
v.reserve(500);
oput(v);
v.resize(500);
oput(v);
}
std::cout << "done" << std::endl;
}
On my VS6 dev machine it has the same behavior as the encryption project: it causes all kinds of havoc. When I build and run it on my Visual Studio 2008 machine, resize will throw a std::bad_alloc exception and the vector will not be corrupted, just as we would have expected! Time for some EA Sports NCAA football hehe!
| I think that vector::max_size() is pretty much always a 'hard coded' thing - it's independent of how much memory the system/library is prepared to dynamically allocate. Your problem seems to be a bug in the vector implementation that corrupts things when an allocation fails.
'Bug' might be too strong of a word. vector::resize() is defined in terms of vector::insert() and the standard says this about vector::insert():
If an exception is thrown other than by the copy constructor or assignment operator of T there are no effects
So it seems like there may be times when the resize() operation is allowed to corrupt a vector, but it would still be nice if the operation were exception safe (and I think it wouldn't be out of line to expect the library to do that, but maybe it's harder than I imagine).
You seem to have a couple reasonable options:
change or update to a library that doesn't have the corruption bug (what compiler/library version are you using?)
instead of checking against vector::max_size() set nMaxSize to your own reasonable maximum and do what you have above but using that threshold instead.
Edit:
I see that you're using VC6 - there's definitely a bug in vector::resize() that might have something to do with your problem, though looking at the patch I honestly don't see how (actually it's a bug in vector::insert(), but as mentioned, resize() calls insert()). I'd guess it would be worthwhile to visit Dinkumware's page of bug fixes for VC6 and apply them.
The problem might also have something to do with the <xmemory> patch on that page - it's unclear what the bug is that's discussed there, but vector::insert() does call _Destroy() and vector<> does define the name _Ty so you might be running into that problem. One nice thing - you won't have to worry about managing the changes to the headers, as Microsoft is never touching them again. Just make sure the patches make it into version control and get documented.
Note that Scott Meyers in "Effective STL" suggests using SGI's or STLPort's library to get better STL support than comes with VC6. I haven't done that so I'm not sure how well those libraries work (but I also haven't used VC6 with STL very much). Of course, if you have the option to move to a newer version of VC, by all means do it.
One more edit:
Thanks for the test program...
VC6's _Allocate() implementation for the default allocator (in <xmemory>) uses a signed int to specify the number of elements to allocate, and if the size passed in is negative (which apparently is what you're doing - certainly in the test program you are) the _Allocate() function forces the requested allocation size to zero and proceeds. Note that a zero-sized allocation request will pretty much always succeed (not that vector checks for a failure anyway), so the vector::resize() function merrily tries to move its contents into the new block, which isn't quite big enough to say the least. So the heap gets corrupted, it'll likely hit a invalid memory page, and regardless - your program is hosed.
So the bottom line is don't ever ask VC6 to allocate more than INT_MAX objects in one go. Probably not a great idea in most circumstances (VC6 or otherwise).
Also, you should keep in mind that VC6 uses a pre-standard idiom of returning 0 from new when an allocation fails rather than throwing bad_alloc.
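Given all that, a defensive guard with a hand-picked ceiling (rather than max_size()) is probably the safest bet on VC6. A sketch, with the 5 MB cap borrowed from your update:

```cpp
#include <vector>
#include <cstddef>

// Reject implausible lengths from untrusted packet data before ever
// calling resize(); the ceiling is application-chosen (5 MB here).
bool safe_resize(std::vector<unsigned char>& v, std::size_t n) {
    const std::size_t kMaxPacketBytes = 5 * 1024 * 1024;
    if (n > kMaxPacketBytes)
        return false;  // treat as a corrupted packet
    v.resize(n);
    return true;
}
```

The caller can then throw its own PgpException when safe_resize() returns false, without the vector ever seeing the bogus length.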
|
1,615,555 | 1,615,572 | Why does my C++ divide program not compile | I tried to make a program that has a correct Divide function.
My code was:
#include <iostream>
using namespace std;
double x,y,z,a;
double divide(x,y) {
if (x >= y) {
x=z;
z=y;
y=x;
return(x/y);
}
else
return(y/x);
}
int main()
{
double x,y,z ;
cout << "Enter x " <<endl;
cin >> x;
cout << "Enter y " <<endl;
cin >> y;
a = divide (x,y);
cout << a <<endl;
system("pause");
return 0;
}
And I have 2 errors:
expected `,' or `;' before '{' token
on the { line. Right under the double divide (x, y) line
And another error
divide cannot be used as a function
on the a = divide (x, y); line.
I am using Code::Blocks
| You need to specify a proper function signature for the function divide. Specifically, the arguments to the function are missing their types:
double divide(double x, double y)
{
...
}
It's also good practice to use braces for each branch of an if statement:
if (x > y)
{
...
}
else
{
...
}
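For completeness, a compilable sketch with both fixes applied (note the swap through z in your original is unnecessary — and reads the uninitialized z — so here the larger value is simply divided by the smaller directly):

```cpp
// Divide the larger argument by the smaller one.
double divide(double x, double y) {
    if (x >= y) {
        return x / y;
    }
    else {
        return y / x;
    }
}
```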
|
1,615,634 | 1,864,256 | Can anyone recommend a decent DSP/speech library in C++? | Google returns too many results, although SPUC caught my attention.
Is there a standard recommended library like OpenCV for vision?
The necessary features would be:
Free Open Source
filter design (Butterworth, Chebyshev, etc)
FFT
if possible, some speech processing features, like MFCC computation, although that's secondary, as I could use SPTK (sp-tk.sourceforge.net) for that part.
| The Synthesis Toolkit, https://ccrma.stanford.edu/software/stk/, has a class that can model different Phonemes. It also has tools for all sorts of DSP including different types of filters. I recommend checking it out as it will be a fantastic learning experience no matter what you use it for.
|
1,615,660 | 1,615,812 | Why this warning from IBM XL C/C++ compiler? | Here's a minimum code example that illustrates the problem:
#include <iostream>
class Thing
{
// Non-copyable
Thing(const Thing&);
Thing& operator=(const Thing&);
int n_;
public:
Thing(int n) : n_(n) {}
int getValue() const { return n_;}
};
void show(const Thing& t)
{
std::cout << t.getValue() << std::endl;
}
int main()
{
show(3);
}
This yields the same error:
int main()
{
show( Thing(3) );
}
IBM XL C/C++ 8.0 compiler under AIX emits these warnings:
"testWarning.cpp", line 24.9: 1540-0306 (W) The "private" copy constructor "Thing(const Thing &)" cannot be accessed.
"testWarning.cpp", line 24.9: 1540-0308 (I) The semantics specify that a temporary object must be constructed.
"testWarning.cpp", line 24.9: 1540-0309 (I) The temporary is not constructed, but the copy constructor must be accessible.
I also tried g++ 4.1.2 with "-Wall" and "-pedantic" and got no diagnostic. Why is access to the copy constructor required here? How can I eliminate the warning, besides making the object copyable (which is outside my control) or making an explicit copy to pass (when the real-life object is expensive to copy)?
| The rules for this are in §8.5.3/5 of the standard. There are three basic situations identified. The first involve the initializer ('3' in your case) being either an lvalue, or having class type. Since neither of those is true, what you have is the third case: initializing a const reference with an rvalue that does not have a class type. This case is covered by the final bullet in 8.5.3/5:
Otherwise, a temporary of type “cv1 T1” is created and initialized from the initializer expression using the rules for a non-reference copy initialization (8.5). The reference is then bound to the temporary. If T1 is reference-related to T2, cv1 must be the same cv-qualification as, or greater cv-qualification than, cv2; otherwise, the program is ill-formed.
Edit: rereading, I think IBM has it right. I was previously thinking of the possibility of having to copy the temporary, but that's not the source of the problem. To create the temporary using non-reference copy initialization as specified in §8.5, it needs the copy ctor. In particular, at this point it's equivalent to an expression like:
T x = a;
This is basically equivalent to:
T x = T(a);
I.e. it's required to create a temporary, then copy the temporary to the object being initialized (which, in this case, is also a temporary). To summarize the required process, it's roughly equivalent to code like:
T temp1(3);
T temp2(temp1); // requires copy ctor
show(temp2); // show's reference parameter binds directly to temp2
|
1,615,813 | 1,616,143 | How to use C++ classes with ctypes? | I'm just getting started with ctypes and would like to use a C++ class that I have exported in a dll file from within python using ctypes.
So lets say my C++ code looks something like this:
class MyClass {
public:
int test();
...
I would know create a .dll file that contains this class and then load the .dll file in python using ctypes.
Now how would I create an Object of type MyClass and call its test function? Is that even possible with ctypes? Alternatively I would consider using SWIG or Boost.Python but ctypes seems like the easiest option for small projects.
| The short story is that there is no standard binary interface for C++ in the way that there is for C. Different compilers output different binaries for the same C++ dynamic libraries, due to name mangling and different ways to handle the stack between library function calls.
So, unfortunately, there really isn't a portable way to access C++ libraries in general. But, for one compiler at a time, it's no problem.
This blog post also has a short overview of why this currently won't work. Maybe after C++0x comes out, we'll have a standard ABI for C++? Until then, you're probably not going to have any way to access C++ classes through Python's ctypes.
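That said, the standard workaround for a single compiler is to export a flat extern "C" wrapper around the class — C linkage avoids the mangling problem, giving ctypes stable symbols to call (the names below are illustrative, not anything ctypes mandates):

```cpp
// Compiled into the DLL/shared library alongside the class.
class MyClass {
public:
    int test() { return 42; }  // illustrative body
};

// C linkage disables name mangling, so ctypes can find these
// symbols by name and call them as plain C functions.
extern "C" {
    MyClass* MyClass_new()                 { return new MyClass(); }
    int      MyClass_test(MyClass* self)   { return self->test(); }
    void     MyClass_delete(MyClass* self) { delete self; }
}
```

On the Python side you'd then load the library with ctypes.CDLL and call MyClass_new/MyClass_test/MyClass_delete, setting restype/argtypes (e.g. to c_void_p) as appropriate.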
|
1,615,902 | 1,615,917 | C++ - "Member function not declared" in derived class | I have a problem in MSVC++ 2008 where VS2008 is throwing this compile error:
error C2509: 'render' : member function not declared in 'PlayerSpriteKasua'
Now, what's confusing me is that render() is declared, but in a base class.
The class definition works like this:
SpriteBase -Inherited By-> PlayerSpriteBase -Inherited By-> PlayerSpriteKasua
So, a pared-down version of SpriteBase.h is the following:
class SpriteBase {
public:
//Variables=============================================
-snip-
//Primary Functions=====================================
virtual void think()=0; //Called every frame to allow the sprite to process events and react to the player.
virtual void render(long long ScreenX, long long ScreenY)=0; //Called every frame to render the sprite.
//Various overridable and not service/event functions===
virtual void died(); //Called when the sprite is killed either externally or via SpriteBase::kill().
-snip-
//======================================================
};
PlayerSpriteBase.h is this:
class PlayerSpriteBase : public SpriteBase
{
public:
virtual void pose() = 0;
virtual void knockback(bool Direction) = 0;
virtual int getHealth() = 0;
};
And finally, PlayerSpriteKasua.h is this:
class PlayerSpriteKasua : public PlayerSpriteBase
{
public:
};
I know there are no members in it yet, but that's simply because I hadn't gotten to adding them. Same goes for PlayerSpriteBase; there's other stuff left to go in to it.
The code in PlayerSpriteKasua.cpp is this:
#include "../../../MegaJul.h" //Include all the files needed in one go
void PlayerSpriteKasua::render(long long ScreenX, long long ScreenY) {
return;
}
void PlayerSpriteKasua::think() {
return;
}
int PlayerSpriteKasua::getHealth() {
return this->Health;
}
When I type, say, void PlayerSpriteKasua::, Intellisense pops up listing all the members of PlayerSpriteBase and SpriteBase just fine, but on compile it fails like I said above.
Is there any particular reason I'm getting this error?
PlayerSpriteBase.cpp is empty and has nothing in it as of yet.
SpriteBase.cpp has plenty of function definitions for SpriteBase, and uses the same format as PlayerSpriteKasua.cpp:
void SpriteBase::died() {
return;
}
is an example.
| In PlayerSpriteKasua.h you need to re-declare whatever methods you're going to override/implement (without the "=0" to say that those methods are not abstract anymore). So you need to write it like follows:
class PlayerSpriteKasua : public PlayerSpriteBase
{
public:
virtual void think();
virtual void render(long long ScreenX, long long ScreenY);
virtual int getHealth();
};
...or did you omit that to keep your post shorter?
|