question_id int64 25 74.7M | answer_id int64 332 74.7M | title stringlengths 20 150 | question stringlengths 23 4.1k | answer stringlengths 20 4.1k |
|---|---|---|---|---|
1,724,332 | 1,724,505 | How do I install GDI+ version 1.1 on Windows XP? | Has anyone done this successfully? As I understand, GDI+ 1.1 only ships with Vista. I'm trying to get my hands on the different Effects classes.
I'm using C++ VS2008 (VC9)
| GDI+ 1.1 is not redistributable according to the EULA of Windows Vista.
|
1,724,842 | 1,724,884 | Dynamic output filenames (C++) | I'm trying to create output files subscripted by a dynamic index ( d = {0,...,NUM_DEMES-1}). Currently, I'm only getting output files for the first value (d=0).
#include <sstream>
#include <string>
void Simulation::updateSimulation( double t )
{
...
ofstream abundanceStream;
ofstream abHeaderStream;
if ( step == 1 ) {
for ( int d = 0; d < NUM_DEMES; d++ ) {
abundanceStream.open( makeFilename( "Abundances_", d ).c_str(),ios::out);
abHeaderStream.open( makeFilename( "Abundances_IDs_", d ).c_str(),ios::out);
}
}
for ( int d = 0; d < NUM_DEMES; d++ ) {
abundanceStream.open( makeFilename( "Abundances_", d ).c_str(),ios::app);
abHeaderStream.open( makeFilename( "Abundances_IDs_", d ).c_str(),ios::app);
}
}
string Simulation::makeFilename( const string& basename, int index )
{
ostringstream result;
result << basename << index;
return result.str();
}
This creates Abundances_0 and Abundances_IDs_0 but nothing else. I can write to those files. I can create the other filenames just fine, but the files just don't appear.
I'm probably missing something basic about streams, but I haven't been able to figure out what.
EDIT
The following code prints to screen the right filenames:
for ( int d = 0; d < NUM_DEMES; d++ ) {
abundanceStream.open( makeFilename( "Abundances_", d ).c_str(),ios::app);
abundanceStream << "stuff\n";
cout << makeFilename( "Abundances_", d ).c_str() << endl;
abHeaderStream.open( makeFilename( "Abundances_IDs_", d ).c_str(),ios::app);
abHeaderStream << "more stuff\n";
cout << makeFilename( "Abundances_IDs_", d ).c_str() << endl;
}
But "stuff" and "more stuff" only appear in the Abundances_0 and Abundances_IDs_0.
| You are always using the same objects. You can either close the streams after "use" or use different objects for each file.
|
1,725,147 | 1,725,182 | Templates, nested classes, and "expected constructor, destructor, or conversion before '&' token" | While working with some templates and writing myself a basic container class with iterators, I found myself needing to move the body of member functions from a template class into a separate file to conform to style guidelines. However, I've run into an interesting compile error:
runtimearray.cpp:17: error: expected
constructor, destructor, or type
conversion before '&' token
runtimearray.cpp:24: error: expected
constructor, destructor, or type
conversion before '&' token
runtimearray.cpp:32: error: expected
constructor, destructor, or type
conversion before '&' token
runtimearray.cpp:39: error: expected
constructor, destructor, or type
conversion before '&' token
runtimearray.cpp:85: error: expected
constructor, destructor, or type
conversion before 'RuntimeArray'
runtimearray.cpp:91: error: expected
constructor, destructor, or type
conversion before 'RuntimeArray'
runtimearray.h:
#ifndef RUNTIMEARRAY_H_
#define RUNTIMEARRAY_H_
template<typename T>
class RuntimeArray
{
public:
class Iterator
{
friend class RuntimeArray;
public:
Iterator(const Iterator& other);
T& operator*();
Iterator& operator++();
Iterator& operator++(int);
Iterator& operator--();
Iterator& operator--(int);
bool operator==(Iterator other);
bool operator!=(Iterator other);
private:
Iterator(T* location);
T* value_;
};
RuntimeArray(int size);
~RuntimeArray();
T& operator[](int index);
Iterator Begin();
Iterator End();
private:
int size_;
T* contents_;
};
#endif // RUNTIMEARRAY_H_
runtimearray.cpp:
#include "runtimearray.h"
template<typename T>
RuntimeArray<T>::Iterator::Iterator(const Iterator& other)
: value_(other.value_)
{
}
template<typename T>
T& RuntimeArray<T>::Iterator::operator*()
{
return *value_;
}
template<typename T>
RuntimeArray<T>::Iterator& RuntimeArray<T>::Iterator::operator++()
{
++value_;
return *this;
}
template<typename T>
RuntimeArray<T>::Iterator& RuntimeArray<T>::Iterator::operator++(int)
{
Iterator old = *this;
++value_;
return old;
}
template<typename T>
RuntimeArray<T>::Iterator& RuntimeArray<T>::Iterator::operator--()
{
--value_;
return *this;
}
template<typename T>
RuntimeArray<T>::Iterator& RuntimeArray<T>::Iterator::operator--(int)
{
Iterator old = *this;
--value_;
return old;
}
template<typename T>
bool RuntimeArray<T>::Iterator::operator==(Iterator other)
{
return value_ == other.value_;
}
template<typename T>
bool RuntimeArray<T>::Iterator::operator!=(Iterator other)
{
return value_ != other.value_;
}
template<typename T>
RuntimeArray<T>::Iterator::Iterator(T* location)
: value_(location)
{
}
template<typename T>
RuntimeArray<T>::RuntimeArray(int size)
: size_(size),
contents_(new T[size])
{
}
template<typename T>
RuntimeArray<T>::~RuntimeArray()
{
if(contents_)
delete[] contents_;
}
template<typename T>
T& RuntimeArray<T>::operator[](int index)
{
return contents_[index];
}
template<typename T>
RuntimeArray<T>::Iterator RuntimeArray<T>::Begin()
{
return Iterator(contents_);
}
template<typename T>
RuntimeArray<T>::Iterator RuntimeArray<T>::End()
{
return Iterator(contents_ + size_);
}
How can I make these errors go away? The files make sense to me, but alas, it's the compiler's say that matters.
| I think that you are missing the typename keyword.
e.g.
template<typename T>
RuntimeArray<T>::Iterator& RuntimeArray<T>::Iterator::operator++()
should be
template<typename T>
typename RuntimeArray<T>::Iterator& RuntimeArray<T>::Iterator::operator++()
'Nested' types which are dependent on a template parameter need the typename keyword to tell the compiler that they should be types where this otherwise would be ambiguous.
|
1,725,237 | 1,726,046 | Outputting unicode characters in windows terminal | Over the past week I've been working on a roguelike game in C++ along with a friend. Mostly to learn the language.
I'm using:
pdcurses
Windows 7
Visual Studio C++
to output wchar_t's wherever I want in the console. I have succeeded in outputting some Unicode characters such as \u263B (☻), but others such as \u2638 (☸) will just end up as question marks (?).
Here's the relevant code I use for output.
// Container of room information
struct RoomInfo
{
wchar_t * layout;
int width;
int height;
};
// The following function builds RoomInfo
RoomInfo Room::examine(IActor * examinor)
{
RoomInfo ri;
ri.width = this->width;
ri.height = this->height;
ri.layout = new wchar_t[height * width];
for(unsigned int y = 0; y < height; y++)
{
for(unsigned int x = 0; x < width; x++)
{
ri.layout[y*width + x] = L'\u263B'; // works
//ri.layout[y*width + x] = L'\u2638'; // will not work
}
}
return ri;
}
// The following function outputs RoomInfo
void CursesConsole::printRoom(RoomInfo room)
{
int w = room.width;
int h = room.height;
WINDOW * mapw = newwin(h, w, 1, 0);
for(int y = 0; y < h; y++)
{
wmove(mapw, y, 0);
for(int x = 0; x < w; x++)
{
int c = y*w + x;
waddch(mapw, room.layout[c]);
}
}
wrefresh(mapw);
delwin(mapw);
}
I could of course fall back on boring ANSI characters, but it would be really awesome to have the complete Unicode set of characters to play with.
To sum it up: How do you make sure that unicode characters are outputted correctly?
Edit:
Ok, so I figured out my encoding is working correctly. The problem is that I need to force the terminal to switch to a more Unicode-rich font face. Is there a cross-platform way to do this? Is there even a Windows-specific way to do this?
|
The problem is that I need to force the terminal to switch to a more Unicode-rich font face. Is there a cross-platform way to do this? Is there even a Windows-specific way to do this?
I had a look for this, but couldn't find a Windows API call to do it (which may just mean I didn't find it, of course). I would not expect to find a cross-platform way to do it.
The best solution I can think of is to launch the console using a specially constructed Shell Link (.LNK) file. If you read the file format documentation, you'll see that it allows you to specify a font.
But your problems don't end there. A Western locale install of Windows provides Lucida Console, but that font only provides a limited subset of graphemes. I assume that you can output/input Japanese text in the console on a Japanese Windows PC. You'd need to check what is available on a Japanese Windows if you wanted to be sure it would work there.
Linux (Ubuntu, at least) seems to have much better support here, using UTF-8 and providing a font with broad grapheme support. I haven't checked other distros to see what the story is there, nor have I checked how fonts are resolved in the terminal (whether it's an X thing, a Gnome thing or whatever).
|
1,725,498 | 1,725,954 | Is anybody working on a high level standard library for C++ | STL/Boost cover all the low level stuff.
But what about the higher level concepts?
Windows: We have multiple windowing libs
KDE(Qt)
Gnome
Motif(C but written in OO style)
MS Windows
etc
But is anybody working on a unified standard for windowing?
Something that wrapped all the above would be acceptable. (even if it only accessed the common stuff it would be a starting point).
Networking:
There are a couple out there (including the Boost low level stuff).
But is there anybody working on a Service based network layer?
All the other stuff that Java/C# have in their standard libraries.
The stuff that makes it simpler for a beginner to jump in, say "Wow, done," and have it work everywhere (nearly).
Anyway. Here's hoping there are some cool projects out there.
Edit
Maybe there is not one.
But if there are a couple that could be bundled together as a starting point (and potentially modified over time (where is that deprecated keyword)) into a nice consolidated whole.
Note: Windows is just a small part of what I am looking for. The Java/C# languages consolidate a lot more under the hood than just the GUI. What would be a good set of libraries to get all the functionality in one place.
| The Poco C++ project aims to deliver all that you ask, except for Windowing:
The POCO C++ Libraries aim to be for
network-centric, cross-platform C++
software development what Apple's
Cocoa is for Mac development, or Ruby
on Rails is for Web development — a
powerful, yet easy to use platform to
build your applications upon.
|
1,725,714 | 1,725,770 | Why ofstream would fail to open the file in C++? Reasons? | I am trying to open an output file which I am sure has a unique name but it fails once in a while. I could not find any information for what reasons the ofstream constructor would fail.
EDIT:
It starts failing at some point in time, and after that it continuously fails until I stop the running program which writes this file.
EDIT:
once in a while = 22-24 hours
Code snippet (I don't think this would help, but someone asked for it):
ofstream theFile( sLocalFile.c_str(), ios::binary | ios::out );
if ( theFile.fail() )
{
std::string sErr = " failed to open ";
sErr += sLocalFile;
log_message( sErr );
return FILE_OPEN_FAILED;
}
| Too many file handles open? Out of space? Access denied? Intermittent network drive problem? File already exists? File locked? It's awfully hard to say without more details. Edit: Based on the extra details you gave, it sounds like you might be leaking file handles (opening files and failing to close them and so running out of a per-process file handle limit).
I assume that you're familiar with using the exceptions method to control whether iostream failures are communicated as exceptions or as status flags.
In my experience, the iostream classes give very little details on what went wrong when they fail during an I/O operation. However, because they're generally implemented using lower-level Standard C and OS API functions, you can often get at the underlying C or OS error code for more details. I've had good luck using the following function to do this.
std::string DescribeIosFailure(const std::ios& stream)
{
std::string result;
if (stream.eof()) {
result = "Unexpected end of file.";
}
#ifdef WIN32
// GetLastError() gives more details than errno.
else if (GetLastError() != 0) {
result = FormatSystemMessage(GetLastError());
}
#endif
else if (errno) {
#if defined(__unix__)
// We use strerror_r because it's threadsafe.
// GNU's strerror_r returns a string and may ignore buffer completely.
char buffer[255];
result = std::string(strerror_r(errno, buffer, sizeof(buffer)));
#else
result = std::string(strerror(errno));
#endif
}
else {
result = "Unknown file error.";
}
boost::trim_right(result); // from Boost String Algorithms library
return result;
}
|
1,726,104 | 1,726,190 | visual studio intellisense error | template <typename T>
class Test {
friend Test<T> & operator * (T lhs, const Test<T> & rhs) {
Test<T> r(rhs);
// return r *= lhs;
}
};
4 IntelliSense: identifier "T" is undefined
Why is T defined on line 3 but not line 4? I mean I guess it's not a real error just an intellisense error... It works anyway but is there something wrong? Can I fix it? Or remove the red squiggles somehow?
I am using Visual Studio 2010. I wonder if this happens in other versions as well?
| Intellisense shows T as undefined because it is a generic template type. Depending on how you instantiate the class, T will be a different type. For example, if you have Test<int> A, T is of type int, but if you declare Test<string> A, T is of type string for that class and its methods.
|
1,726,122 | 1,726,165 | SDL_image/C++ OpenGL Program: IMG_Load() produces fuzzy images | I'm trying to load an image file and use it as a texture for a cube. I'm using SDL_image to do that.
I used this image because I've found it in various file formats (tga, tif, jpg, png, bmp)
The code :
SDL_Surface * texture;
//load an image to an SDL surface (i.e. a buffer)
texture = IMG_Load("/Users/Foo/Code/xcode/test/lena.bmp");
if(texture == NULL){
printf("bad image\n");
exit(1);
}
//create an OpenGL texture object
glGenTextures(1, &textureObjOpenGLlogo);
//select the texture object you need
glBindTexture(GL_TEXTURE_2D, textureObjOpenGLlogo);
//define the parameters of that texture object
//how the texture should wrap in s direction
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
//how the texture should wrap in t direction
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
//how the texture lookup should be interpolated when the face is smaller than the texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
//how the texture lookup should be interpolated when the face is bigger than the texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
//send the texture image to the graphic card
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture->w, texture->h, 0, GL_RGB, GL_UNSIGNED_BYTE, texture-> pixels);
//clean the SDL surface
SDL_FreeSurface(texture);
The code compiles without errors or warnings !
I've tried all the file formats but this always produces that ugly result:
I'm using : SDL_image 1.2.9 & SDL 1.2.14 with XCode 3.2 under 10.6.2
Does anyone know how to fix this?
| The reason the image is distorted is because it's not in the RGBA format that you've specified. Check the texture->format to find out the format it's in and select the appropriate GL_ constant that represents the format. (Or, transform it yourself to the format of your choice.)
|
1,726,236 | 1,727,044 | system call does not work as in the command line | OK, I have two programs, and one calls the other using its executable.
I am running it on Ubuntu terminal
This is folder structure in place
.../src/pgm1/pgm1
.../src/pgm0/pgm0
pgm1 and pgm0 are executables.
This is how I call the other executable
char cmd[1000];
string path = "/home/usr/src/";
// call pgm0 for each instance...
sprintf( cmd, "../pgm0/pgm0 xRes 400 xRes 400 inFile tmp_output/%s.%04d.sc > tmp_output/%s.%04d.ppm", g_outFile.c_str(), ti, g_outFile.c_str(), ti);
cout << cmd << endl;
system (cmd);
....
I looked over and the cmd is generated properly:
../pgm0/pgm0 yRes 400 xRes 400 inFile tmp_output/sph0.0000.sc > tmp_output/sph0.0000.ppm
So if I run this command from command line it works perfectly well.
If I run it using system call it hangs and fails to parse input file sph0.0000.sc
I tried adding full path (hence path variable up)
But still no luck.
Any ideas why this would work from the command line and not from a system call within another executable?
Just to make it clear it works from command line in folder pgm1.
Thanks
| You are using > which means something to many shells, but I suspect not to system. Try this:
snprintf( cmd, sizeof cmd,
"/bin/bash -c '../pgm0/pgm0 xRes 400 xRes 400"
" inFile tmp_output/%s.%04d.sc > tmp_output/%s.%04d.ppm'",
g_outFile.c_str(), ti, g_outFile.c_str(), ti);
And let us know how that goes.
|
1,726,242 | 1,726,325 | Is there a way to customize the tool tip of a custom object in the VS Debugger? | Is there a way to customize the tool tip of a custom object in the VS Debugger?
Is there any way to do the same for unmanaged C++? Thanks.
| Have you had a look at <VSInstallDir>\Common7\Packages\Debugger?
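That directory holds autoexp.dat, which drives native-type tooltips in the debugger. A sketch of an [AutoExpand] entry for a hypothetical type (MySize and its members are made up for illustration):

```
; In the [AutoExpand] section of autoexp.dat:
MySize=w=<width> h=<height>
```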
|
1,726,425 | 1,726,653 | Adding elements to a vector inside a c++ class not being stored | Edit: My debugger was lying to me. This is all irrelevant.
Howdy all,
I had a peek at Adding element to vector, but it's not helpful for my case.
I'm trying to add an element (custom class LatLng) to another object (Cluster) from a third object (ClusterManager).
When I pass my LatLng to Cluster (last line of ClusterManager.cpp) and jump into Cluster::addLocation, gdb says at the end of the function's execution that my new LatLng has been added to the Cluster. But the moment I jump back into the scope of the highest class, ClusterManager, the new LatLng added to the vector 'locStore' is not present, at runtime or in the debugger.
Any ideas?
DJS.
IDE: Xcode 3.2 (Targeted to Debug 10.5)
OS: OSX 10.6
Compiler: GCC 4.2
Arch: x86_64
ClusterManager.cpp (where it's all being called from):
void ClusterManager::assignPointsToNearestCluster()
{
//Iterate through the points.
for (int i = 0; i < locationStore.size(); i++)
{
double closestClusterDistance = 100.1;
// Make sure to chuck the shits if we don't find a cluster.
int closestCluster = -1;
int numClusters = clusterStore.size();
// Iterate through the clusters.
for (int j = 0; j < numClusters; j++) {
double thisDistance = locationStore[i].getDistanceToPoint( *(clusterStore[j].getCentroid()) );
// If there's a closer cluster, make note of it.
if (thisDistance < closestClusterDistance) {
closestClusterDistance = thisDistance;
closestCluster = j;
}
}
// Remember the penultimate closest cluster.
this->clusterStore[closestCluster].addLocation( this->locationStore[i] );
}
}
ClusterManager.h
#include "Cluster.h"
#include "LatLng.h"
#include <vector>
class ClusterManager{
private:
std::vector<Cluster> clusterStore;
std::vector<LatLng> locationStore;
public:
ClusterManager();
void assignPointsToNearestCluster();
void addLocation(int,double,double);
};
Cluster.h:
#include <vector>
#include <string>
#include "LatLng.h"
class Cluster {
private:
std::vector<LatLng> locStore;
LatLng newCentroid;
bool lockCentroid;
int clusterSize;
int clusterID;
public:
Cluster(int,LatLng&);
void addLocation(LatLng&);
LatLng* getCentroid();
};
Cluster.cpp
Cluster::Cluster(int newId, LatLng &startPoint)
{
this->clusterID = newId;
this->newCentroid = startPoint;
};
void Cluster::addLocation(LatLng &newLocation)
{
(this->locStore).push_back( newLocation );
};
LatLng* Cluster::getCentroid()
{
return &newCentroid;
};
| The debugger is possibly lying. I've found Xcode has issues viewing the contents of vectors; try using some asserts to make sure the vector in question is actually being filled.
|
1,726,462 | 1,726,528 | Memory Leak Analysis |
There is a memory leak in my application. The memory consumption shoots up after a couple of days of running the application. I need to dump call stack information of each orphaned block address. How is it possible with WinDbg?
I tried referring to a document created by my colleague, but I'm confused about how to specify the symbol path and stuff like that. It didn't work out. Where can I get a step-by-step document?
| You can use umdh.exe to capture and compare snapshots of the process before and after leak happens. This works best with Debug binaries - it will give you the callstacks of memory allocated between the 1st and the 2nd snapshot.
http://support.microsoft.com/kb/268343
|
1,726,541 | 1,738,013 | Failing to set COM+ ConstructorString on Win7 - CryptProtectData changes? | UPDATED
I'm trying to programmatically set a COM+ component's ConstructorString with a value for later initialization.
The code in question works fine on WinXP, Win2k3, Vista and Win2k8.
I'm failing on Win7 - Home Premium version.
I've determined by trial and error that there seems to be a size limit on the constructor string - if the string is 512 characters (wchar) or less, it saves. Longer, and the SaveChanges call on the CatalogCollection object fails with a 0x80110437 - COMADMIN_E_PROPERTYSAVEFAILED error.
Turns out, all systems have that limit - 512 characters.
We use CryptProtectData to encrypt a password before putting it into the string.
On win7 (x64) the output of the string is longer than on XP (x32) and W2k3 (x64).
So - CryptProtectData has changed - why is the output longer?
if (!CryptProtectData(&dataIn,L" ",&optionalEntropy,NULL,NULL,
CRYPTPROTECT_LOCAL_MACHINE | CRYPTPROTECT_UI_FORBIDDEN, &dataOut))
| What do you do with dataOut to turn it into a string? I can't remember the exact details now, but I assume the constructor string is a BSTR. dataOut is a byte buffer, so you need to be very careful when converting it to a string, so you don't trip on embedded NUL characters, etc.
Could you update your question to include the conversion from the output buffer of CryptProtectData to string?
|
1,726,740 | 1,726,777 | c++ error: operator []: 2 overloads have similar conversions | template <typename T>
class v3 {
private:
T _a[3];
public:
T & operator [] (unsigned int i) { return _a[i]; }
const T & operator [] (unsigned int i) const { return _a[i]; }
operator T * () { return _a; }
operator const T * () const { return _a; }
v3() {
_a[0] = 0; // works
_a[1] = 0;
_a[2] = 0;
}
v3(const v3<T> & v) {
_a[0] = v[0]; // Error 1 error C2666: 'v3<T>::operator []' : 2 overloads have similar conversions
_a[1] = v[1]; // Error 2 error C2666: 'v3<T>::operator []' : 2 overloads have similar conversions
_a[2] = v[2]; // Error 3 error C2666: 'v3<T>::operator []' : 2 overloads have similar conversions
}
};
int main(int argc, char ** argv)
{
v3<float> v1;
v3<float> v2(v1);
return 0;
}
| If you read the rest of the error message (in the output window), it becomes a bit clearer:
1> could be 'const float &v3<T>::operator [](unsigned int) const'
1> with
1> [
1> T=float
1> ]
1> or 'built-in C++ operator[(const float *, int)'
1> while trying to match the argument list '(const v3<T>, int)'
1> with
1> [
1> T=float
1> ]
The compiler can't decide whether to use your overloaded operator[] or the built-in operator[] on the const T* that it can obtain by the following conversion function:
operator const T * () const { return _a; }
Both of the following are potentially valid interpretations of the offending lines:
v.operator float*()[0]
v.operator[](0)
You can remove the ambiguity by explicitly casting the integer indices to be unsigned so that no conversion is needed:
_a[0] = v[static_cast<unsigned int>(0)];
or by changing your overloaded operator[]s to take an int instead of an unsigned int, or by removing the operator T*() const (and probably the non-const version too, for completeness).
|
1,727,081 | 1,727,187 | typedef'ing STL wstring | Why is it that when I do the following, I get errors relating to wchar_t?
namespace Foo
{
typedef std::wstring String;
}
Now I declare all my strings as Foo::String throughout the program, but whenever I attempt to create a new Foo::String from a wchar_t* I get an error, e.g.:
namespace Bar
{
static const wchar_t* COMMON_BAR = L"Hello";
}
int main()
{
Foo::String A(Bar::COMMON_BAR);
};
I get the following error from visual studio:
error C2440: '<function-style-cast>' : cannot convert from 'const wchar_t *' to 'Foo::String'
What's wrong?
EDIT: Sorry, I meant to say Bar::COMMON_BAR :(
Also I am compiling on Visual Studio 2008. I am really frustrated by this.
EDIT #2
Sorry I couldn't respond to this sooner. The problem turns out to be a bigger issue than I thought. I've now noticed how many using std::wstring;'s there are floating around in the code.
This is at the absolute bottom of the global header file.
// WStrings are a waste
// Feb 2007
#define wstring string
Ugh. Thanks, sorry this is kinda a waste of space :(
| The following code compiled and ran under llvm-gcc:
#include <string>
namespace Foo
{
typedef std::wstring String;
}
namespace Bar
{
static const wchar_t* COMMON_BAR = L"Hello";
}
int main()
{
Foo::String A(Bar::COMMON_BAR);
};
Notice how you accidentally had COMMON_DATA_PATH instead. I'm not sure which compiler you're using, but llvm-gcc gives the following error:
/tmp/webcompile/_1569_0.cc: In function 'int main()':
/tmp/webcompile/_1569_0.cc:14: error: 'COMMON_DATA_PATH' is not a member of 'Bar'
/tmp/webcompile/_1569_0.cc: At global scope:
/tmp/webcompile/_1569_0.cc:9: warning: 'Bar::COMMON_BAR' defined but not used
Try out the live demo here.
|
1,727,143 | 1,727,161 | C++ enum value initialization | I have an enum declared in my code as:
enum REMOTE_CONN
{
REMOTE_CONN_DEFAULT = 0,
REMOTE_CONN_EX_MAN = 10000,
REMOTE_CONN_SD_ANNOUNCE,
REMOTE_CONN_SD_IO,
REMOTE_CONN_AL,
REMOTE_CONN_DS
};
I expect the value of REMOTE_CONN_SD_IO to be 10002, but when debugging the value of ((int)REMOTE_CONN_SD_IO) was given as 3.
Another component uses the same enum and it gives the expected value of 10002 to REMOTE_CONN_SD_IO.
What could be the reason for this?
| OK, I'll guess.
The first component was built before you changed the code in the header. Try rebuilding the offending component.
|
1,727,174 | 1,727,182 | Open source C/C++ 3d renderer (with support of 3ds max models) | Best, smallest, fastest, open source C/C++ 3D renderer (with support for 3ds Max models), preferably not GPL.
It should support lights, textures (preferably dynamic), and simple objects. It should be really fast, and it should have lots of usage examples.
| I would advise Ogre; it's pretty mature and has a really good API.
Ogre license
There is a plugin to export 3ds Max models to the Ogre format there.
|
1,727,282 | 1,729,020 | C++ Equivalent of Tidy | Is there an equivalent to tidy for HTML code for C++? I have searched on the internet, but I find nothing but C++ wrappers for tidy, etc... I think the keyword tidy is what has me hung up.
I am basically looking for something to take code written by two people, and clean it up to a standardized style. Does such an app exist?
Thanks a bunch!
| Artistic Style
is a source code indenter, formatter,
and beautifier for the C, C++, C# and
Java programming languages.
GC GreatCode
is a well known C/C++ source code
beautifier.
|
1,727,361 | 1,745,044 | How to read/write data into excel 2007 in c++? | How to read/write data into excel 2007 in c++?
| Excel provides a COM interface which you can use from your C++ application.
I have experience with Excel 2003 only, but I think it will also work for Excel 2007.
This can be done e.g. with #import or in the way described in this article:
http://support.microsoft.com/kb/216686
|
1,727,471 | 1,729,994 | c++: generate function call tree | I want to parse current c++ files in a project and list out all the methods/functions in it and then generate the function call and caller trees.
E.g., see how Doxygen generates the call tree.
I have checked gccxml but it doesn't list the functions called from another function.
Please suggest some lightweight tools (open source) which I can use.
thanks!
| The static call tree isn't necessarily the runtime call tree. Callbacks and virtual functions muddy the water. So static analysis can only give you part of the answer.
The only way I've ever been able to get a reliable call tree was to run gprof on the compiled executable. The output can be massaged into a very accurate call tree.
|
1,727,569 | 1,727,606 | What are the benefits to passing integral types by const ref | The question: Is there benefit to passing an integral type by const reference as opposed to simply by value.
ie.
void foo(const int& n); // case #1
vs
void foo(int n); // case #2
The answer is clear for user defined types, case #1 avoids needless copying while ensuring the constness of the object. However in the above case, the reference and the integer (at least on my system) are the same size, so I can't imagine there being a whole lot of difference in terms of how long it takes for the function call (due to copying). However, my question is really related to the compiler inlining the function:
For very small inline functions, will the compiler have to make a copy of the integer in case #2? By letting the compiler know we won't change the reference can it inline the function call without needless copying of the integer?
Any advice is welcome.
| Passing a built-in int type by const ref will actually be a minor de-optimization (generally). At least for a non-inline function. The compiler may have to actually pass a pointer that has to be de-referenced to get the value. You might think it could always optimize this away, but aliasing rules and the need to support separate compilation might force the compiler's hand.
However, for your secondary question:
For very small inline functions, will the compiler have to make a copy of the integer in case #2? By letting the compiler know we won't change the reference can it inline the function call without needless copying of the integer?
The compiler should be able to optimize away the copy or the dereference if semantics allow it, since in that situation the compiler has full knowledge of the state at the call site and the function implementation. It'll likely just load the value into a register have its way with it and just use the register for something else when it's done with the parameter. Of course,all this is very dependent on the actual implementation of the function.
|
1,727,594 | 1,733,370 | Optimal datafile format loading on a game console | I need to load large models and other structured binary data on an older CD-based game console as efficiently as possible. What's the best way to do it? The data will be exported from a Python application. This is a pretty elaborate hobby project.
Requirements:
no reliance on fully standard compliant STL - i might use uSTL though.
as little overhead as possible. Aim for a solution so good that it could be used on the original PlayStation, and yet as modern and elegant as possible.
no backward/forward compatibility necessary.
no copying of large chunks around - preferably files get loaded into RAM in background, and all large chunks accessed directly from there later.
should not rely on the target having the same endianness and alignment, i.e. a C plugin in Python which dumps its structs to disc would not be a very good idea.
should allow to move the loaded data around, as with individual files 1/3 the RAM size, fragmentation might be an issue. No MMU to abuse.
robustness is a great bonus, as my attention span is very short, i.e. I'd change the saving part of the code and forget the loading part or vice versa, so at least a dumb safeguard would be nice.
exchangeability between loaded data and runtime-generated data without runtime overhead and without severe memory management issues would be a nice bonus.
I kind of have a semi-plan: parsing trivial, limited-syntax C headers in Python, which would use structs with offsets instead of pointers, plus convenience wrapper structs/classes in the main app with getters that convert the offsets to properly typed pointers/references, but I'd like to hear your suggestions.
Clarification: the request is primarily about data loading framework and memory management issues.
| This is a common game development pattern.
The usual approach is to cook the data in an offline pre-process step. The resulting blobs can be streamed in with minimal overhead. The blobs are platform dependent and should contain the proper alignment & endian-ness of the target platform.
At runtime, you can simply cast a pointer to the in-memory blob file. You can deal with nested structures as well. If you keep a table of contents with offsets to all the pointer values within the blob, you can then fix-up the pointers to point to the proper address. This is similar to how dll loading works.
I've been working on a ruby library, bbq, that I use to cook data for my iphone game.
Here's the memory layout I use for the blob header:
// Memory layout
//
// p = beginning of file in memory.
// p + 0 : num_pointers
// p + 4 : offset 0
// p + 8 : offset 1
// ...
// p + (num_pointers * 4)       : offset n-1
// p + ((num_pointers + 1) * 4) : num_pointers // again, so we can figure out
//                                                what memory to free
// p + ((num_pointers + 2) * 4) : start of cooked data
//
Here's how I load binary blob file and fix up pointers:
void* bbq_load(const char* filename)
{
unsigned char* p;
int size = LoadFileToMemory(filename, &p);
if(size <= 0)
return 0;
// get the start of the pointer table
unsigned int* ptr_table = (unsigned int*)p;
unsigned int num_ptrs = *ptr_table;
ptr_table++;
// get the start of the actual data
// the 2 is to skip past both num_pointer values
unsigned char* base = p + ((num_ptrs + 2) * sizeof(unsigned int));
// fix up the pointers
while ((ptr_table + 1) < (unsigned int*)base)
{
unsigned int* ptr = (unsigned int*)(base + *ptr_table);
*ptr = (unsigned int)((unsigned char*)ptr + *ptr);
ptr_table++;
}
return base;
}
My bbq library isn't quite ready for prime time, but it could give you some ideas on how to write one yourself in python.
Good Luck!
|
1,727,608 | 1,727,755 | How do I get hardware information on Linux/Unix? | How can I get hardware information from a Linux / Unix machine?
Is there a set of APIs?
I am trying to get information like:
OS name.
OS version.
available network adapters.
information on network adapters.
all the installed software.
I am looking for an application which collects this information and show it in a nice format.
I have used something similar with the "system_profiler" command line tool for Mac OS X. I was wondering if something similar is available for Linux as well.
| If you need a simple answer, use:
cat /proc/cpuinfo
cat /proc/meminfo
lspci
lsusb
and harvest any info you need from the output of these commands. (Note: the cut command may be your friend here if you are writing a shell script.)
Should you need more detail, add a -v switch to get verbose output from the lspci and lsusb commands.
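As a concrete illustration of the harvesting idea, here is a small shell sketch (field names like "model name" and "MemTotal" are common on x86 Linux but can differ by architecture or distribution):

```shell
# A minimal harvesting sketch; adjust field names for your platform.
os_name=$(uname -s)
os_version=$(uname -r)
cpu_model=$(grep -m1 'model name' /proc/cpuinfo | cut -d: -f2-)
mem_total=$(grep -m1 'MemTotal' /proc/meminfo | cut -d: -f2-)
echo "OS: $os_name $os_version"
echo "CPU:$cpu_model"
echo "Memory:$mem_total"
# available network adapters
ls /sys/class/net
```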
If what you are looking for is a more feature-complete API, then use HAL, though that may be overkill for what you are trying to build.
|
1,727,824 | 1,727,939 | Does using callbacks in C++ increase coupling? |
Q1. Why are callback functions used?
Q2. Are callbacks evil? Fun for those
who know, for others a nightmare.
Q3. Any alternative to callback?
| Regardless of whether "using callbacks in C++ increases coupling" or not, I suggest using an event-handler style, especially for processing event-like things. For example, make the design pattern concrete instead of the raw callback concept, as below:
class MyClass
{
public:
virtual bool OnClick(...) = 0;
virtual bool OnKey(...) = 0;
virtual bool OnTimer(...) = 0;
virtual bool OnSorting(...) = 0;
...
};
You may still consider the above functions to be callbacks, but when you treat them as a well-known design pattern, you won't get confused, because you are doing OO and writing C++.
Effo UPD@2009nov13 -
Typical cases: Framework, Events System or Concurrent Programming Model, and so on. Following examples should be useful.
A framework controls the overall flow, as the Hollywood Principle states: "Don't call us, we'll call you" (that's exactly what "callback" means), while with a normal function or lib the caller controls the flow.
A well-known C framework is the Linux kernel; a Linux driver writer knows that s/he'd implement a "struct file_operations", in which "read()" means OnRead(), "write()" means OnWrite(), etc.
struct file_operations {
struct module *owner;
loff_t (*llseek) (struct file *, loff_t, int);
ssize_t (*read) (struct file *, char __user *, size_t, loff_t *);
ssize_t (*write) (struct file *, const char __user *, size_t, loff_t *);
ssize_t (*aio_read) (struct kiocb *, const struct iovec *, unsigned long, loff_t);
ssize_t (*aio_write) (struct kiocb *, const struct iovec *, unsigned long, loff_t);
...
};
But the simplest example of a framework should be:
The Framework | A developer to do
----------------------------+--------------------------------------
| class MyClass : public Actor
| {
| public:
pApplication->Init(...); | virtual bool OnInit(...) {}
pApplication->Run(...); | virtual int OnRun(...) {}
pApplication->Quit(...); | virtual void OnQuit(...) {}
| ...
| };
and pApplication->Init() will call pActor->OnInit, and pApplication->Run() calls pActor->OnRun(), and so on internally. Most Windows GUI developers had experienced implementing OnMouseClick() or OnButtonPress() etc.
I agree with the other answers in this thread; they give correct explanations from their respective viewpoints, such as handlers in a layered approach, generalized callbacks, asynchronous operations, and so on. It's up to you which idea(s) would be suitable for you.
|
1,727,881 | 1,727,896 | How to use the PI constant in C++ | I want to use the PI constant and trigonometric functions in some C++ program. I get the trigonometric functions with #include <math.h>. However, there doesn't seem to be a definition for PI in this header file.
How can I get PI without defining it manually?
| On some (especially older) platforms (see the comments below) you might need to
#define _USE_MATH_DEFINES
and then include the necessary header file:
#include <math.h>
and the value of pi can be accessed via:
M_PI
In my math.h (2014) it is defined as:
# define M_PI 3.14159265358979323846 /* pi */
but check your math.h for more. An extract from the "old" math.h (in 2009):
/* Define _USE_MATH_DEFINES before including math.h to expose these macro
* definitions for common math constants. These are placed under an #ifdef
* since these commonly-defined names are not part of the C/C++ standards.
*/
However:
on newer platforms (at least on my 64 bit Ubuntu 14.04) I do not need to define the _USE_MATH_DEFINES
On (recent) Linux platforms there are long double values too provided as a GNU Extension:
# define M_PIl 3.141592653589793238462643383279502884L /* pi */
|
1,728,158 | 1,728,189 | Why is new[] allocating extra memory? | I'm reading "Thinking in C++" and I'm confused by the new operator. Here is the code from the book:
//: C13:ArrayOperatorNew.cpp
// Operator new for arrays
#include <new> // Size_t definition
#include <fstream>
using namespace std;
ofstream trace("ArrayOperatorNew.out");
class Widget
{
enum { sz = 10 };
int i[sz];
public:
Widget() { trace << "*"; }
~Widget() { trace << "~"; }
void* operator new(size_t sz)
{
trace << "Widget::new: "
<< sz << " bytes" << endl;
return ::new char[sz];
}
void operator delete(void* p)
{
trace << "Widget::delete" << endl;
::delete []p;
}
void* operator new[](size_t sz)
{
trace << "Widget::new[]: "
<< sz << " bytes" << endl;
return ::new char[sz];
}
void operator delete[](void* p)
{
trace << "Widget::delete[]" << endl;
::delete []p;
}
};
int main()
{
trace << "new Widget" << endl;
Widget* w = new Widget;
trace << "\ndelete Widget" << endl;
delete w;
trace << "\nnew Widget[25]" << endl;
Widget* wa = new Widget[25];
trace << "\ndelete []Widget" << endl;
delete []wa;
} ///:~
and here is the content of the trace in "ArrayOperatorNew.out"
new Widget
Widget::new: 40 bytes
*
delete Widget
~Widget::delete
new Widget[25]
Widget::new[]: 1004 bytes
*************************
delete []Widget
~~~~~~~~~~~~~~~~~~~~~~~~~
Widget::delete[]
I'm confused about the number 1004. Why it's not 1000? The book says:
This extra four bytes is where the
system keeps information about the
array, in particular, the number of
objects in the array.
But what's the system? How is this accomplished? Compiler helps here?
| When using new[] the runtime needs some way to remember the size of the array allocated, so it knows how much to deallocate when using delete[]. In your particular implementation, its way of remembering is to allocate an extra four bytes which hold the size (it doesn't have to work this way).
You can read more about this in the C++ FAQ.
|
1,728,772 | 1,728,798 | Is it correct to use declaration only for empty private constructors in C++? | For example is this correct:
class C
{
private:
C();
C(const C& other);
}
or you should rather provide definition(s):
class C
{
private:
C() {};
C(const C& other) {};
}
?
Thanks for the current answers. Let's extend this question - does the compiler generate better code in one of these examples? I can imagine that providing a body for the ctor forces the compiler to include some (empty) code in the compilation unit? Is this also true for auto-generated code?
| If you do not wish your object to be copyable, then there is no need to provide the implementation. Just declare the copy ctor private without any implementation. The same holds for other ctors: if you do not want anybody to use them, just declare them private without any implementation.
|
1,728,847 | 1,728,890 | How can i find a value in a map using binders only | Searching on the second value of a map, I use something like the following:
typedef std::map<int, int> CMyList;
static CMyList myList;
template<class t> struct second_equal
{
typedef typename t::mapped_type mapped_type;
typedef typename t::value_type value_type;
second_equal(mapped_type f) : v(f) {};
bool operator()(const value_type &a) { return a.second == v;};
mapped_type v;
};
...
int i = 7;
CMyList::iterator it = std::find_if(myList.begin(), myList.end(),
second_equal<CMyList>(i));
Question: How can i do such a find in a single line without supplying a self written template?
| Use a selector to select the first or the second element from the value_type that you get from the map.
Use a binder to bind the value (i) to one of the arguments of the std::equal_to function.
Use a composer to use the output of the selector as the other argument of the equal_to function.
//STL version (note: compose1 and select2nd are SGI/STLport extensions, not standard C++)
CMyList::iterator it = std::find_if(
myList.begin(),
myList.end(),
std::compose1(
std::bind2nd(equal_to<CMyList::mapped_type>(), i),
std::select2nd<CMyList::value_type>()));
//Boost.Lambda or Boost.Bind version
CMyList::iterator it = std::find_if(
myList.begin(),
myList.end(),
bind( &CMyList::value_type::second, _1) == i);
|
1,728,909 | 1,728,925 | How to find the number of data added in CStringArray | Hello guys,
I am working with CStringArray and I want to know how to find the number of items added to a CStringArray. In the code below I have defined the size of the array as 10, but I have added only three items, so I want to know the number of items added to the array (here it's 3). Is there a way to find the number of items added to a CStringArray?
CStringArray filepaths[10] =
{path1, path2, path3};
| CStringArray::GetCount()
Edit: In your code above you have actually created an array of CStringArray's. I am presuming you mean CStringArray from the Microsoft MFC library?
I think you want to be doing something like:
CStringArray filepaths;
filepaths.Add( path1 );
filepaths.Add( path2 );
filepaths.Add( path3 );
filepaths.GetCount(); //should ==3
|
1,728,961 | 1,728,980 | Extract statically linked libraries from an executable | I'm not sure if this is even possible, but given an executable file (foo.exe) which has many libraries linked into it statically:
Is there any software that extracts from this file the .lib (or .a) files that lie inside the executable?
Thanks.
| Incredibly unlikely since, typically, you don't get the entire contents of the library injected into your executable.
You only get enough to satisfy all the undefined symbols. This may actually only be a small part of the library. A library generally consists of a set of object files of which only those that are required are linked into your executable.
For example, if the only thing you called in the C runtime library was exit(), you would be very unlikely to have the printf() family of functions in your executable.
If you linked with the object files directly, you may have a chance, since they would be included whether used or not (unless your linker is a smart one).
But even that would be a Herculean task as there may be no information in the executable as to what code sections came from specific object files. It's potentially doable but, if there's another way, I'd be looking at that first.
Let me clarify the typical process:
Four object files, a.o, b.o, c.o and d.o contain the functions a(), b(), c() and d() respectively. They are all added to the abcd.a archive.
They are all standalone (no dependencies) except for the fact that b() calls c().
You have a main program which calls a() and b() and you compile it then link it with the abcd.a library.
The linker drags a.o and b.o out of the library and into your executable, satisfying the need for a() and b() but introducing a need for c(), because b() needs it.
The linker then drags c.o out of the library and into your executable, satisfying the need for c(). Now all undefined symbols are satisfied, the executable is done and dusted, you can run it when ready.
At no stage in that process was d.o dragged into your executable so you have zero hope of getting it out.
Update: Re the "if there's another way, I'd be looking at that first" comment I made above, you have just stated in a comment to one of the other answers that you have the source code that made the libraries you want extracted. I need to ask: why can you not rebuild the libraries with that source? That seems to me a much easier solution than trying to recreate the libraries from a morass of executable code.
|
1,729,326 | 1,729,354 | templated class can't redefine operator[] | I have this class:
namespace baseUtils {
template<typename AT>
class growVector {
int size;
AT **arr;
AT* defaultVal;
public:
growVector(int size, AT* defaultVal ); //Expects number of elements (5) and default value (NULL)
AT*& operator[](unsigned pos);
int length();
void reset(int pos); //Resets an element to default value
void reset(); //Resets all elements to default value
~growVector();
};
}
and this is the implementation for operator[]
template<typename AT>
AT*& growVector<AT>::operator [](unsigned pos){
if (pos >= size){
int newSize = size*2;
AT** newArr = new AT*[newSize];
memcpy(newArr, arr, sizeof(AT*)*size);
for (int i = size; i<newSize; i++)
newArr[i] = defaultVal;
size = newSize;
delete[] arr;
arr = newArr;
}
return arr[pos];
}
(yes, I do realize I don't check whether size*2 >= pos... but that's not the point now)
if I use it in code like:
int main() {
growVector<char> gv();
char* x = NULL;
for (int i = 0; i< 50; i++){
gv[i] = x;
}
gv.reset();
return 0;
}
the compiler says
../src/base.cpp:98: warning: pointer to a function used in arithmetic
../src/base.cpp:98: error: assignment of read-only location ‘*(gv + ((unsigned int)i))’
../src/base.cpp:98: error: cannot convert ‘char*’ to ‘baseUtils::growVector<char>()’ in assignment
referring to the line gv[i] = x; (seems like it doesn't see the redefinition of [])
Why???? What am I missing?
After correcting the constructor problem I have the linker saying:
/home/dario/workspace/base/Debug/../src/base.cpp:95: undefined reference to `baseUtils::growVector<char>::growVector(int, char*)'
/home/dario/workspace/base/Debug/../src/base.cpp:98: undefined reference to `baseUtils::growVector<char>::operator[](unsigned int)'
/home/dario/workspace/base/Debug/../src/base.cpp:100: undefined reference to `baseUtils::growVector<char>::reset()'
/home/dario/workspace/base/Debug/../src/base.cpp:101: undefined reference to `baseUtils::growVector<char>::~growVector()'
/home/dario/workspace/base/Debug/../src/base.cpp:101: undefined reference to `baseUtils::growVector<char>::~growVector()'
like it cannot link... why??? :O
| The problem is your declaration
growVector<char> gv();
The compiler interprets this as declaring a function called gv which returns a growVector<char>, not as an object as you indend. Since there isn't a default constructor, this wouldn't compile anyway. Change it to:
growVector<char> gv(0,0);
|
1,729,430 | 1,729,504 | Cross compiler exception handling - Can it be done safely? | I am doing some maintenance on a C++ windows dll library that is required to work with different VC++ compilers (as I don’t want to address different mangling schemes).
I have already eliminated any use of the STL in the interface.
I have insured that heap corruption will not occur from mixing different new/delete’s.
The final item is exceptions. Can I throw an exception and expect it to be caught correctly by a different version of the compiler?
| Even considering your additional comment: This will fail whenever MS changes either the compiler's ABI or the class layout of the exception class(es) or even with different compiler settings. In fact, the latter might cause failure even with the same compiler.
So I guess the answer is: No you cannot do this safely. If you want to pass non-POD objects between executables, they should be compiled with the same compiler, the same std library, and the same (to some degree) compiler settings.
|
1,729,699 | 1,730,718 | How do I correctly use SDL_FreeSurface when dealing with a vector of surfaces | I have set up a small shooter game as a tutorial for myself in SDL. I have a struct for a projectile:
struct projectile
{
SDL_Surface* surface;
int x;
int y;
};
And I put that into a vector.
vector<projectile> shot;
projectile one_shot;
And when I press space I create a new projectile and add it to the vector and then they're blitted when they're rendered.
This works fine, but I'm in seemingly random cases getting a "program has stopped working" error.
So I'm wondering what is the proper way to free the surfaces.
Do I free them all afterwards?
Do I free each individual shot when it exits the screen?
Or some other choice?
UPDATE:
I have found where it crashes: when I quit, after I have fired a few shots and they have all exited the screen. I have tried replacing the code that adds the surface to the vector with the "proper way to duplicate" as seen in this example, and it still behaves in the same way.
This is how I free the surface.
if(shot.at(i).y < 0 - shot.at(i).surface->h)
{
SDL_FreeSurface(shot.at(i).surface);
shot.erase(shot.begin() + i);
}
Anyone have an idea or some sample code I can look at to figure this out.
| If several projectiles use the same sprite (as in almost all sprite-based games), it's probably better to use an image cache containing all the images used by your game and do memory management only there. Fill it at start or on demand and flush it when exiting. Then projectiles just need to ask this cache for a pointer to "arrow.png"; the cache loads it (if needed) and returns the surface pointer.
Such a cache can be a simple std::map<string, SDL_Surface*> with just functions like get_surface(string) and flush().
EDIT: an implementation of this idea:
class image_cache{
map<string, SDL_Surface*> cache_;
public:
SDL_Surface* get_image(string file){
map<string, SDL_Surface*>::iterator i = cache_.find(file);
if(i == cache_.end()) {
SDL_Surface* surf = SDL_LoadBMP(file.c_str());
i = cache_.insert(i, make_pair(file, surf));
}
return i->second;
}
void flush(){
map<string, SDL_Surface*>::iterator i = cache_.begin();
for(;i != cache_.end();++i)
SDL_FreeSurface(i->second);
cache_.clear();
}
~image_cache() {flush();}
};
image_cache images;
// you can also use specialized caches instead of a global one
image_cache projectiles_images;
int main()
{
...
SDL_Surface* surf = images.get_image("sprite.png");
...
}
|
1,729,772 | 1,730,718 | getline vs istream_iterator | Is there a reason to prefer either getline or istream_iterator if you are doing line-by-line input from a file (reading each line into a string for tokenization)?
| I sometimes (depending on the situation) write a line class so I can use istream_iterator:
#include <string>
#include <vector>
#include <iterator>
#include <iostream>
#include <algorithm>
struct Line
{
std::string lineData;
operator std::string() const
{
return lineData;
}
};
std::istream& operator>>(std::istream& str,Line& data)
{
std::getline(str,data.lineData);
return str;
}
int main()
{
    // extra parentheses around the first argument avoid the "most vexing parse"
    std::vector<std::string> lines((std::istream_iterator<Line>(std::cin)),
                                   std::istream_iterator<Line>());
}
|
1,729,834 | 1,730,162 | What's the purpose of IUnknown member functions in END_COM_MAP? | ATL END_COM_MAP macro is defined as follows:
#define END_COM_MAP() \
__if_exists(_GetAttrEntries) {{NULL, (DWORD_PTR)_GetAttrEntries, _ChainAttr }, }\
{NULL, 0, 0}}; return _entries;} \
virtual ULONG STDMETHODCALLTYPE AddRef( void) throw() = 0; \
virtual ULONG STDMETHODCALLTYPE Release( void) throw() = 0; \
STDMETHOD(QueryInterface)(REFIID, void**) throw() = 0;
It is intended to be used within definition of classes inherited from COM interfaces, for example:
class ATL_NO_VTABLE CMyClass :
public CComCoClass<CMyClass, &MyClassGuid>,
public CComObjectRoot,
public IMyComInterface
{
public:
BEGIN_COM_MAP( CMyClass )
COM_INTERFACE_ENTRY( IMyComInterface)
END_COM_MAP()
};
This means that QueryInterface(), AddRef() and Release() are declared as pure virtual in this class. Since I don't define their implementation this class should be uncreatable. Yet ATL successfully instantiates it.
How does it work and why are those IUnknown member functions redeclared here?
| It's been a while since I used ATL but, IIRC, what ends up being instantiated is not CMyClass, but CComObject<CMyClass>.
CComObject implements IUnknown and inherits from its template parameter.
Edit: The "Fundamentals of ATL COM Objects" page on MSDN nicely illustrates what's going on.
|
1,730,427 | 1,734,006 | Display message in windows dialogue box using "cout" - C++ | Can a Windows message box be displayed using the cout syntax?
I also need the command prompt window to be suppressed / hidden.
There are ways to call the messagebox function and display text through its usage, but the main constraint here is that cout syntax must be used.
cout << "message";
I was thinking of invoking the VB msgbox command in the cout output, but couldn't find anything that worked.
Any ideas?
| The first thing you should take into account is that MessageBox blocks the thread until you close the window. If that is the behavior you desire, go ahead.
You can create a custom streambuf and set it to std::cout:
#include <windows.h>
#include <sstream>
#include <iostream>
namespace {
    class mb_streambuf : public std::stringbuf {
    public: // the destructor must be accessible to destroy the static instance below
        virtual ~mb_streambuf() { if (str().size() > 0) sync(); }
    protected:
        virtual int sync() {
            MessageBoxA(0, str().c_str(), "", MB_OK);
            str("");
            return 0;
        }
    } mb_buf;
    struct static_initializer {
        static_initializer() {
            std::cout.rdbuf(&mb_buf);
        }
    } cout_buffer_switch;
}
int main()
{
std::cout << "Hello \nworld!"; // Will show a popup
}
A popup will be shown whenever std::cout stream is flushed.
|
1,730,452 | 1,730,462 | Link errors on Snow Leopard | I am creating a small desktop application using Qt and Poco on Mac OS X Snow Leopard.
Qt works fine, but once I started linking with Poco I get the following warning:
ld: warning: in /Developer/SDKs/MacOSX10.6.sdk/usr/local/lib/libPocoFoundation.8.dylib, file is not of required architecture
Also when I link against the 10.5 SDK:
ld: warning: in /Developer/SDKs/MacOSX10.5.sdk/usr/local/lib/libPocoFoundation.8.dylib, file is not of required architecture
Any hints on how to solve this?
Solved!
Here's my workaround (I also posted it on the Poco forums btw):
The problem is that when the architecture is not specified Snow Leopard defaults to 64-bit, while older versions of the OS default to 32-bit. In the Poco build system, the Darwin config file does not specify the architecture, so it picks 32 bit. My solution was to copy the Darwin_x86_64 config file over the Darwin file and replace the textual instances of 'x86_64' with 'i386'. This forces a 32 bit build.
A somewhat hackish solution though, let me know if it can be done in a cleaner way...
| Did you pull down the libraries from somewhere? Poco comes with all the source. Recompile it.
|
1,730,515 | 1,731,318 | How to create a VB6 collection object with ATL | ...or a VB6-compatible collection object.
We provide hooks into our .net products through a set of API's.
We need to continue to support customers that call our API's from VB6, so we need to continue supporting VB6 collection objects (simple with VBA.Collection in .net).
The problem is supporting some sites that use VBScript to call our API's. VBScript has no concept of a collection object, so to create a collection object to pass to our API we built a VB6 ActiveX DLL that provides a "CreateCollection" method. This method simply creates and passes back a new collection object. Problem solved.
After many years of pruning, porting and re-building, this DLL is the only VB6 code we have. Because of it we still need to install Visual Studio 6 on our Dev & build Machines.
I'm not happy with our reliance on this DLL for several reasons (my personal dislike of VB6 is not one of them). Top of the list is that Microsoft no longer support Visual Studio 6.
My question is, how do I get ATL to create a collection object that implements the same interface as the VB6 collection object.
I've a good handle on C++, but only a loose grasp of ATL - I can create simple objects and implement simple methods, but this is beyond me.
| Collections are more or less based on convention. They implement IDispatch and expose some standard methods and properties:
Add() - optional
Remove() - optional
Item()
Count - read-only
_NewEnum - hidden, read-only, returns pointer to enumerator object that implements IEnumVariant
The _NewEnum property is what allows Visual Basic For Each.
In the IDL you use a dual interface and:
DISPID_VALUE for Item()
[propget, id(DISPID_NEWENUM), restricted] HRESULT _NewEnum([out, retval] IUnknown** pVal)
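Putting those conventions together, a hypothetical IDL sketch of a VB6-compatible collection interface (the uuid and all names are placeholders, not a real interface):

```idl
[
    object,
    dual,
    uuid(00000000-0000-0000-0000-000000000000),  // placeholder GUID only
    oleautomation
]
interface IMyCollection : IDispatch
{
    [id(DISPID_VALUE)]
    HRESULT Item([in] long Index, [out, retval] VARIANT* pVal);

    [propget]
    HRESULT Count([out, retval] long* pVal);

    [propget, id(DISPID_NEWENUM), restricted]
    HRESULT _NewEnum([out, retval] IUnknown** pVal);
};
```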
Here are some MSDN entries: Design Considerations for ActiveX Objects
And here is some ATL specific convenience: ATL Collections and Enumerators
|
1,730,739 | 1,730,790 | Using non-abstract class as base | I need to finish another developer's work, but the problem is that he started in a different way...
So now I find myself in the situation of either using existing code, where he chose to inherit from a non-abstract class (a very big class, without any virtual functions) that already implements a bunch of interfaces, or dismissing that code (which shouldn't be too much work) and writing another class that implements the interfaces I need.
What are the pros and cons that would help me to choose the better approach.
p.s. please note that I don't have too much experience
Many Thanks
| Although it is very tempting to say write it from scratch again, don't do it! The existing code may be ugly, but it looks like it does work. Since the class is big, I assume there is fair bit of history behind it as well. It might have solutions for some very obscure cases which you might not have imagined till now. What I suggest is, if possible first talk to the person who developed that class, understand how it works, then derive from it (after making its destructor virtual of course) and complete your work. Then as and when time permits slowly refactor the parts of the class into smaller more manageable classes. Also, don't forget to write a good unit-tester before you start so that you can validate the new behavior against the existing class's behavior. One more thing, there is nothing wrong in inheriting from a non-abstract base class as long as it makes sense and the base class destructor is virtual.
|
1,730,966 | 1,730,981 | C# for UI, c++ for library | I have a numerical library coded in C++.
I am going to make a UI for the library. I know some MFC. So one solution is to use MFC and make a native application.
The alternative is C#. I know nothing about C#. But I think it should be easy to learn.
Some tutorial for mixed programming of C++ and C# would be very helpful to me.
Thanks!
Yin
| I would recommend using Windows Forms or WPF via C# for your GUI.
Take your numerical library, and use C++/CLI to make a .NET wrapper for it. This makes it trivial to use from C# (it looks like any other C# library).
I highly recommend Nishant Sivakumar's C++/CLI articles on CodeProject for learning about C++/CLI and how to wrap C++ libraries. They're fairly well written.
MSDN is a good reference for how to use Windows Forms from C#.
|
1,731,226 | 1,731,444 | Optimizing a LAN server for a game | I'm the network programmer on a school game project. We want to have up to 16 players at once on a LAN. I am using the Server-Client model and am creating a new thread per client that joins. However, a lot of CPU time is wasted just checking on each thread if the non-blocking port has received anything from the client.
I've been reading "Network Programming for Microsoft Windows" by Anthony Jones and Jim Ohlund. They mention two different models for server-client apps.
1.)
Using overlapped IO socket options
Pass the overlapped struct and a WorkerRoutine to WSARecv
Call WSAWaitForMultipleEvents() or SleepEX() to set thread to alertable.
Handle Received data in the WorkerRoutines.
2.)
Using overlapped IO socket options
Create Io Completion Port
Create ServerWorkerThreads (however many CPUs you have)
Associate Completion port with Socket.
Call GetQueuedCompletionStatus in the ServerWorkerThread and handle received data.
I wanted to know which method would best fit my circumstance. The book says the Completion Port model is great for thousands of clients but that makes me think its made for a big server not for a small LAN game. The WorkerRoutines/Event system seems simpler.
| You should stop using non-blocking ports. They're really most useful for concurrency, and you have threads for that.
Just make each thread in the team wait on the port, and when it unblocks, it'll have something to do so you don't waste time spinning. If you have other things to do (like update game state), do that with other threads that don't do any network I/O at all.
And make sure to use synchronization primitives (semaphores) to keep from getting out of sync with yourself.
That's the super multi-threaded way to do it, and you probably should use it. The other way is to have a lock-step advancing state machine that updates all clients round-robin. This is rather useless if you want to play across the internet since varying pings will make your game run at varying speeds.
|
1,731,404 | 1,731,512 | Using a non-static class member inside a comparison function | I'm currently developing a syntactic analyser class that needs, at one point in the code, to sort structs holding info about operators. Each operator has a priority, which is user-defined through public member functions of my analyser class. Thus, when sorting, I need my sorting function to order elements based on the priority of the corresponding operator. I'm using the following code to compare elements:
bool parser::op_comp(const op_info& o1, const op_info& o2) {
op_def& op1 = operators[o1.op_char];
op_def& op2 = operators[o2.op_char];
return op1.priority > op2.priority;
}
Note that I had to make this function static, since it's defined inside of a class.
In fact, my compare function compares elements of type op_char, and I retrieve the operator def from a map which contain elements of type op_def, which have a field "priority".
The problem I'm facing is that I can't manage to use the std::sort(ops.begin(), ops.end(), std::mem_fun_ref(&parser::op_comp)) method (where ops is a vector of op_info). I get the following error, which sounds quite logical:
error: invalid use of member `parser::operators' in static member function
Here is my question : how can I force std::sort to use a comp function that makes use of elements from non-static members of the class ? Obviously the function should be non-static, but I can't manage to use it if I don't make it static...
| Use a functor instead of a function:
struct op_comp : std::binary_function<op_info, op_info, bool>
{
op_comp(parser * p) : _parser(p) {}
bool operator() (const op_info& o1, const op_info& o2) {
return _parser->op_comp(o1, o2);
}
parser * _parser;
};
This way the method op_comp can stay non-static. However, the caller needs an instance of the parser, where all the operators are stored. This is the usage of our new functor:
std::sort(ops.begin(), ops.end(), op_comp(&my_parser));
Where my_parser is the instance of parser you are using. Alternatively, if you are calling std::sort from the parser, you can simply write:
std::sort(ops.begin(), ops.end(), op_comp(this));
|
1,731,675 | 1,731,691 | Boost: what is a "convenience header"? | What is the difference between "header" and "convenience header" in boost?
| A convenience header is typically (not just in Boost) a header which includes a number of other headers (that contain actual code) which are commonly used together, even though there are no hard dependencies between them (which is why they're separate in the first place).
|
1,731,784 | 1,731,824 | Boost: how do we specify "any port" for a TCP server? |
How can I specify "pick any available port" for a TCP based server in Boost?
And how do I retrieve the port once a connection is accepted?
UPDATED: By "available port" I mean: the OS can pick any available port i.e. I do not want to specify a port.
| Question 1: Use port number 0
Question 2: Use acceptor.local_endpoint().port()
|
1,731,838 | 1,731,871 | Magic in placement new? | I'm playing with dynamic memory allocation "by hand" and I wanted to see how placement new is implemented by guys from MS but when debugging I "stepped into" it moved me to code:
inline void *__CRTDECL operator new(size_t, void *_Where) _THROW0()
{ // construct array with placement at _Where
return (_Where);
}
Could anyone explain to me how on earth this code places my object into place pointed by my pointer when all I can see in this code is line with return statement with what I've supplied as an argument. I don't think that saying in comment what I would like this fnc to do is actually enough for it to work.
Thank you for any constructive answers.
| The purpose of operator new is only to allocate memory for an object, and return the pointer to that memory. When you use placement new, you're essentially telling the compiler "I know this memory is good, skip allocation, and use this pointer for my object." Your object's constructor is then called using the pointer provided by operator new whether or not it was memory that was just allocated, or specified by using placement new. operator new itself does not have any bearing on how your object is constructed.
|
1,731,996 | 1,743,346 | Explicitly calling a destructor in a signal handler | I have a destructor that performs some necessary cleanup (it kills processes). It needs to run even when SIGINT is sent to the program. My code currently looks like:
typedef boost::shared_ptr<PidManager> PidManagerPtr
void PidManager::handler(int sig)
{
std::cout << "Caught SIGINT\n";
instance_.~PidManagerPtr(); //PidManager is a singleton
exit(1);
}
//handler registered in the PidManager constructor
This works, but there seem to be numerous warnings against explicitly calling a destructor. Is this the right thing to do in this situation, or is there a "more correct" way to do it?
| Turns out that doing this was a very bad idea. The amount of weird stuff going on is tremendous.
What was happening
The shared_ptr had a use_count of two going into the handler. One reference was in PidManager itself, the other was in the client of PidManager. Calling the destructor of the shared_ptr (~PidManagerPtr()) reduced the use_count by one. Then, as GMan hinted at, when exit() was called, the destructor for the statically initialized PidManagerPtr instance_ was called, reducing the use_count to 0 and causing the PidManager destructor to be called. Obviously, if PidManager had more than one client, the use_count would not have dropped to 0, and this wouldn't have worked at all.
This also gives some hints as to why calling instance_.reset() didn't work. The call does indeed reduce the reference count by 1. But the remaining reference is the shared_ptr in the client of PidManager. That shared_ptr is an automatic variable, so its destructor is not called at exit(). The instance_ destructor is called, but since it was reset(), it no longer points to the PidManager instance.
The Solution
I completely abandoned the use of shared_ptrs and decided to go with the Meyers Singleton instead. Now my code looks like this:
void handler(int sig)
{
exit(1);
}
typedef PidManager * PidManagerPtr
PidManagerPtr PidManager::instance()
{
static PidManager instance_;
static bool handler_registered = false;
if(!handler_registered)
{
signal(SIGINT,handler);
handler_registered = true;
}
return &instance_;
}
Explicitly calling exit allows the destructor of the statically initialized PidManager instance_ to run, so no other clean up code need be placed in the handler. This neatly avoids any issues with the handler being called while PidManager is in an inconsistent state.
|
1,732,155 | 1,732,174 | Question about the garbage collector in .NET (memory leak) | I guess this is very basic but since I'm learning .NET by myself, I have to ask this question.
I'm used to coding in C, where you have to free() everything. In C++/.NET, I have read about the garbage collector. From what I understand, when an instance is no longer used (in the scope of the object), it is freed by the garbage collector.
So, having that in mind, I built a little testing application. But, it seems I didn't get something because when doing the same things a few times (like, opening a form, closing it, reopening it, etc), memory leaks. And big time.
I tried to look this up on Google but I didn't find anything good for a beginner.
Is the garbage collector really freeing every objects when no longer used or there are exceptions that I have to handle? What am I missing?
Are there free tools to look for memory leaks?
| Yeah, the garbage collector is freeing your objects when they're not used anymore.
What we usually call a memory leak in .NET is more like:
You're using external resources (that are not garbage collected) and forgetting to free them. This is solved usually by implementing the IDisposable interface.
You think there are no more references to a given object, but in fact there are, somewhere; you're not using the object any more, but as long as those references exist the garbage collector cannot free it.
In addition, memory is only reclaimed when needed, meaning the garbage collector activates at given times and performs a collection, determining what memory can be freed and freeing it. Until it does, the memory isn't reclaimed, so it might look like memory is being lost.
Here, I think I'll provide a more complex answer just to clarify.
First, the garbage collector runs in its own thread. You have to understand that, in order to reclaim memory, the garbage collector needs to stop all other threads so that it can follow the reference chains and determine what depends on what. That's the reason memory isn't freed right away; a garbage collector implies certain performance costs.
Internally the garbage collector manages memory in generations. In a very simplified way, there are several generations for long-lived, short-lived, and big objects. The garbage collector moves an object from one generation to another each time it performs a collection, which happens more often for the short-lived generation than for the long-lived one. It also reallocates objects to give you the most contiguous space possible, so again, performing a collection is a costly process.
If you really want to see the effects of freeing the form (when it gets out of scope and has no more references to it) you can call GC.Collect(). Do that just for testing; it's highly unwise to call Collect except for the very few cases where you know exactly what you're doing and the implications it will have.
A little more explanation about the Dispose method of the IDisposable interface.
Dispose isn't a destructor in the usual C++ way; it isn't a destructor at all. Dispose is a way to deterministically get rid of unmanaged resources. An example: suppose you call an external COM library that happens to allocate 1GB of memory due to what it is doing. If you have no Dispose, that memory will sit there, wasting space until the GC initiates a collection and reclaims the unmanaged memory via the object's finalizer. So if you want to free the memory right away you have to call the Dispose method, but you're not "forced" to do so.
If you don't use the IDisposable interface then you have to free your unmanaged resources in the Finalize method. Finalize is called automatically when the GC attempts to reclaim the object. So if you have a proper Finalize method, the unmanaged memory will get freed either way; calling Dispose only makes it deterministic.
|
1,732,208 | 1,732,281 | What is a safe way to pass an array of arrays to a DLL in C#? | I have an array of arrays that I want to pass into a DLL. I am running into the error "There is no marshaling support for nested arrays."
I can pass a single array in fine but if I stack them up it fails. I need/want a "safe" way of passing in the array of arrays.
private static extern int PrintStuff(string[][] someStringsInGroups, int numberOfGroups, int[] lengthSetsInGroups);
EDIT: I am also willing, with enough discouragement and anguish, to accept a solution involving marshaling.
| You could convert the double array to a single array (i.e. flatten it). This can be done by keeping width and height variables, and accessing the indices as such:
string atXY = someStringsInSingleArray[(y * width) + x];
The array can then be converted as such:
string * array = new string[width * height];
for (unsigned int y = 0; y < height; ++y)
{
for (unsigned int x = 0; x < width; ++x)
{
array[(y * width) + x] = someStringsInGroups[x][y];
}
}
// (pass single array to dll)
delete [] array;
|
1,732,376 | 1,732,420 | Is nesting namespaces an overkill? | I'm writing a C++ program that has a large number of classes. In my head I can visualise them as distinct collections. For example there's a collection of classes for reading and storing config data, and another collection for drawing a user interface with various widgets.
Each of those collections could be neatly stored inside separate namespaces, which seems sensible. The config part has a "screen" class, and the GUI part also has a "screen" class, but they are different to each other. I could rename one "gui_screen" and the other "config_screen", but that's the point of a namespace isn't it? To remove these prefixes we invent to separate things.
Part of me then thinks it'd be tidy to store those namespaces inside one main one so that none of my code can interfere with the namespaces of anything else. And I suppose it might also make the code more readable too.
Or am I just making overly complex hierarchies of data for no reason at all?
| It is not overkill. It is reasonable to nest your namespaces, e.g. piku::gui::screen etc.
You could also collapse them to piku_gui_screen, but with the separate, nested namespaces you get the advantage that, if you're inside piku::gui, you can access all names in that namespace easily (e.g. screen will automatically resolve to piku::gui::screen).
|
1,732,643 | 1,735,531 | Choosing the right subclass to instantiate programmatically | Ok, the context is some serialization / deserialization code that will parse a byte stream into an 'object' representation that's easier to work with (and vice-versa).
Here's a simplified example with a base message class and then depending on a 'type' header, some more data/function are present and we must choose the right subclass to instantiate:
class BaseMessage {
public:
enum Type {
MyMessageA = 0x5a,
MyMessageB = 0xa5,
};
BaseMessage(Type type) : mType(type) { }
virtual ~BaseMessage() { }
Type type() const { return mType; }
protected:
Type mType;
virtual void parse(void *data, size_t len);
};
class MyMessageA : public BaseMessage {
public:
MyMessageA() : BaseMessage(BaseMessage::MyMessageA) { }
/* message A specific stuff ... */
protected:
virtual void parse(void *data, size_t len);
};
class MyMessageB : public BaseMessage {
public:
MyMessageB() : BaseMessage(BaseMessage::MyMessageB) { }
/* message B specific stuff ... */
protected:
virtual void parse(void *data, size_t len);
};
In a real example, there would be hundreds of different message types and possibly several levels of hierarchy, because some messages share fields/functions with each other.
Now, to parse a byte string, I'm doing something like:
BaseMessage *msg = NULL;
Type type = (Type)data[0];
switch (type) {
case MyMessageA:
msg = new MyMessageA();
break;
case MyMessageB:
msg = new MyMessageB();
break;
default:
/* protocol error */
}
if (msg)
msg->parse(data, len);
But I don't find this huge switch very elegant, and I have the information about which message has which 'type value' twice (once in the constructor, once in this switch).
It's also quite long ...
I'm looking for a better way that would just be better ... How to improve this?
| It's a pretty basic question in fact (as you can imagine, you are definitely not the only one deserializing in C++).
What you are looking for is called Virtual Construction.
C++ does not define Virtual Construction, but it's easy to approximate it using the Prototype Design Pattern or using a Factory method.
I personally prefer the Factory approach, for the reason that the Prototype one means having some kind of default instance that is replicated and THEN defined... the problem is that not all classes have a meaningful default, and for that matter, a meaningful Default Constructor.
The Factory approach is easy enough.
You need a common base class for the Messages, and another for the Parsers
Each Message has both a Tag and an associated Parser
Let's see some code:
// Framework
class Message
{
public:
virtual ~Message();
};
class Parser
{
public:
virtual ~Parser();
virtual std::auto_ptr<Message> parse(std::istream& serialized) const = 0;
};
// Factory of Messages
class MessageFactory
{
public:
void registerParser(std::string const& tag, Parser const& parser); // 'register' is a reserved keyword
std::auto_ptr<Message> build(std::string const& tag, std::istream& serialized) const;
private:
std::map<std::string,Parser const*> m_parsers;
};
And with this framework (admittedly simple), some derived classes:
class MessageA: public Message
{
public:
MessageA(int a, int b);
};
class ParserA: public Parser
{
public:
virtual std::auto_ptr<Message> parse(std::istream& serialized) const
{
int a = 0, b = 0;
char space = 0;
serialized >> a >> space >> b;
// Need some error control there
return std::auto_ptr<Message>(new MessageA(a,b));
}
};
And at last, the use:
int main(int argc, char* argv[])
{
// Register the parsers
MessageFactory factory;
factory.registerParser("A", ParserA());
// take a file
// which contains 'A 1 2\n'
std::ifstream file("file.txt");
std::string tag;
file >> tag;
std::auto_ptr<Message> message = factory.build(tag, file);
// message now points to an instance of MessageA built by MessageA(1,2)
}
It works, I know for I use it (or a variation).
There are some things to consider:
You may want to make MessageFactory a singleton; this allows it to be called at library load time, so you can register your parsers by instantiating static variables. This is very handy if you don't want main to have to register every single parser type: locality > fewer dependencies.
The tags have to be shared. It is not unusual either for the tag to be served by a virtual method of the Message class (called tag).
Like:
class Message
{
public:
virtual ~Message();
virtual const std::string& tag() const = 0;
virtual void serialize(std::ostream& out) const;
};
The logic for serialization has to be shared too, it is not unusual for an object to handle its own serialization/deserialization
Like:
class MessageA: public Message
{
public:
static const std::string& Tag();
virtual const std::string& tag() const;
virtual void serialize(std::ostream& out) const;
MessageA(std::istream& in);
};
template <class M>
class ParserTemplate: public Parser // not really a parser now...
{
public:
virtual std::auto_ptr<Message> parse(std::istream& in) const
{
return std::auto_ptr<Message>(new M(in));
}
};
What's great about templates is that they never cease to amaze me
class MessageFactory
{
public:
template <class M>
void registerParser()
{
m_parsers[M::Tag()] = new ParserTemplate<M>();
}
};
//skipping to registration
factory.registerParser<MessageA>();
Now isn't it pretty :) ?
|
1,732,717 | 33,875,437 | How to determine how much free space on a drive in Qt? | I'm using Qt and want a platform-independent way of getting the available free disk space.
I know in Linux I can use statfs and in Windows I can use GetDiskFreeSpaceEx(). I know boost has a way, boost::filesystem::space(Path const & p).
But I don't want those. I'm in Qt and would like to do it in a Qt-friendly way.
I looked at QDir, QFile, QFileInfo -- nothing!
I know it's quite an old topic, but somebody may still find it useful.
Since Qt 5.4, QSystemStorageInfo is discontinued; instead there is a new class, QStorageInfo, which makes the whole task really simple and is cross-platform.
QStorageInfo storage = QStorageInfo::root();
qDebug() << storage.rootPath();
if (storage.isReadOnly())
qDebug() << "isReadOnly:" << storage.isReadOnly();
qDebug() << "name:" << storage.name();
qDebug() << "fileSystemType:" << storage.fileSystemType();
qDebug() << "size:" << storage.bytesTotal()/1024/1024 << "MB";
qDebug() << "availableSize:" << storage.bytesAvailable()/1024/1024 << "MB";
Code has been copied from the example in the Qt 5.5 docs.
|
1,732,772 | 1,732,783 | Cocoa WebView cross-thread access | I have a C++ class running in its own thread that needs to execute some javascript in a WebView that's part of a Cocoa app. I have the C++ app call a method in the Cocoa window's controller and it in turns runs the javascript, passing in the data. It seems to work part of the time, but crash a lot of the time as well (somewhere in WebView's code). I tried using the @synchronized on the webview instance, but it doesn't seem to be doing anything.
Can anyone offer any advice?
| Maybe [yourWebView performSelectorOnMainThread:...] and friends? (Or call a mediating controller class.)
|
1,732,821 | 1,732,886 | What are your experiences with Code::Blocks? | I looked at Code::Blocks and it certainly looks great for c++ development, I like it's multiplatform capabilities (runs everywhere), but I wanted to get your feedback.
Is it good/stable enough to be used in a professional environment?
Thanks.
I have tried Code::Blocks for Windows and this is what I found:
Pros:
1.) Supports and generates code using many compilers - GNU GCC for x86, GCC for ARM, MS-VS2005 compiler, ... many more(See list in Project Build options)
2.) Has a decent source code browser with the necessary stuff (syntax highlighting for the multiple programming languages supported; source code file statistics like code lines, comment lines, blank lines, good for KLOC statistics of code)
3.) Has a decent debugger to step, break, and analyse the code and data while debugging
Cons:
1.) Per se I did not find any problems to point out, but I found some glitches in code generation and at times faced some problems executing the generated executable.
2.) The profiling tool is not very detailed and not that great.
3.) There isn't a tool which gives a call graph (caller-callee relation).
Given all this,
So overall, my opinion is: if possible, start with smaller and easier projects and get more familiar with the tool as you go, because you might have to spend as much time learning about the tool's settings and plugins/add-ons as on your development itself, so keep your main development work relatively easy and less complex.
hope it helps,
-AD
|
1,733,112 | 1,733,114 | What is the reason for the entire C++ STL code to be included in the .h rather than .cpp/.c files? | I just downloaded the STL source code and I noticed all the definition for the STL template classes are included in the .h file. The actual source code for the function definition is in the .h file rather than .cpp/.c file. What is the reason for this?
http://www.sgi.com/tech/stl/download.html
| Because very few compilers implement linking of templates. It's hard.
Here's a brief but (I think) informative article about it: http://www.informit.com/guides/content.aspx?g=cplusplus&seqNum=53
I say "I think" because it's really not something I'm very familiar with other than that it's widely unimplemented. I initially said the standard didn't require it, but looking at the definition of "export" in C++03, I don't see any indication that it's optional. Maybe it's just a failed standard.
|
1,733,143 | 1,733,153 | Converting between C++ std::vector and C array without copying | I would like to be able to convert between std::vector and its underlying C array int* without explicitly copying the data.
Does std::vector provide access to the underlying C array? I am looking for something like this
vector<int> v (4,100);
int* pv = v.c_array();
EDIT:
Also, is it possible to do the converse, i.e. how would I initialize an std::vector from a C array without copying?
int pv[4] = { 4, 4, 4, 4};
vector<int> v (pv);
| You can get a pointer to the first element as follows:
int* pv = &v[0];
This pointer is only valid as long as the vector is not reallocated. Reallocation happens automatically if you insert more elements than will fit in the vector's remaining capacity (that is, if v.size() + NumberOfNewElements > v.capacity()). You can use v.reserve(NewCapacity) to ensure the vector has a capacity of at least NewCapacity.
Also remember that when the vector gets destroyed, the underlying array gets deleted as well.
|
1,733,272 | 1,736,542 | IThumbnailProvider and IInitializeWithItem | I am trying to develop an IThumbnailProvider for use in Windows 7. Since this particular thumbnail would also be dependant on some other files in the same directory, I need to use something other than IInitializeWithStream to a path to work with, this being IInitializeWithItem. (Alternatively, I could use IInitializeWithFile, but that is even more frowned upon apparently.)
No matter what I do, I cannot get it to work. I have Microsoft's FileTypeVerifier.exe tool which gives the green light on using IInitializeWithItem, but when explorer calls it, it only seems to try IInitializeWithStream, ever. (This was tested by temporarily implementing said interface, and Beep()ing away in its Initialize()) Did I forget to configure something?
In short: how do I get this to work?
| Okay, I finally found out what is the matter. To quote the Building Thumbnail Providers link on the MSDN website:
There are cases where initialization with streams is not possible. In scenarios where your thumbnail provider does not implement IInitializeWithStream, it must opt out of running in the isolated process where the system indexer places it by default when there is a change to the stream. To opt out of the process isolation feature, set the following registry value.
HKEY_CLASSES_ROOT
CLSID
{66742402-F9B9-11D1-A202-0000F81FEDEE}
DisableProcessIsolation = 1
I knew I was running out of process since I read elsewhere that thumbnail providers ALWAYS run out of process. But since that particular snippet is on almost -all- shell extension handlers, I interpreted it to be an overly happy copy-paste job, since it was -required- to run in-process the way I understood it.
And I was wrong. I hope this will help someone else in the near future. :)
|
1,733,484 | 1,734,057 | Unit testing real-time / concurrent software |
Possible Duplicate:
How should I unit test threaded code?
Classical unit testing is basically just putting x in and expecting y out, and automating that process. So it's good for testing anything that doesn't involve time. But then, most of the nontrivial bugs I've come across have had something to do with timing. Threads corrupt each other's data, or cause deadlocks. Nondeterministic behavior happens – in one run out of a million. Hard stuff.
Is there anything useful out there for "unit testing" parts of multithreaded, concurrent systems? How do such tests work? Isn't it necessary to run the subject of such a test for a long time and vary the environment in some clever manner, to become reasonably confident that it works correctly?
| Most of the work I do these days involves multi-threaded and/or distributed systems. The majority of bugs involve "happens-before" type errors, where the developer assumes (wrongly) that event A will always happen before event B. But every 1000000th time the program is run, event B happens first, and this causes unpredictable behavior.
Additionally, there aren't really any good tools to detect timing issues, or even data corruption caused by race conditions. Tools like Helgrind and drd from the Valgrind toolkit work great for trivial programs, but they are not very useful in diagnosing large, complex systems. For one thing, they report false positives quite frequently (Helgrind especially). For another thing, it's difficult to actually detect certain errors while running under Helgrind/drd simply because programs running under Helgrind run almost 1000x slower, and you often need to run a program for quite a long time to even reproduce the race condition. Additionally, since running under Helgrind totally changes the timing of the program, it may become impossible to reproduce a certain timing issue. That's the problem with subtle timing issues; they're almost Heisenbergian in the sense that altering a program to detect timing issues may obscure the original issue.
The sad fact is, the human race still isn't adequately prepared to deal with complex, concurrent software. So unfortunately, there's no easy way to unit-test it. For distributed systems especially, you should plan your program carefully using Lamport's happens-before diagrams to help you identify the necessary order of events in your program. But ultimately, you can't really get away from brute-force unit testing with randomly varying inputs. It also helps to vary the frequency of thread context-switching during your unit-test by, e.g. running another background process which just takes up CPU cycles. Also, if you have access to a cluster, you can run multiple unit-tests in parallel, which can detect bugs much quicker and save you a lot of time.
|
1,733,488 | 1,733,655 | Terrain minimap in OpenGL? | So I have what is essentially a game... There is terrain in this game. I'd like to be able to create a top-down view minimap so that the "player" can see where they are going. I'm doing some shading etc on the terrain so I'd like that to show up in the minimap as well. It seems like I just need to create a second camera and somehow get that camera's display to show up in a specific box. I'm also thinking something like a mirror would work.
I'm looking for approaches that I could take that would essentially give me the same view I currently have, just top down... Does this seem feasible? Feel free to ask questions... Thanks!
| One way to do this is to create an FBO (frame buffer object) with a render buffer attached, render your minimap to it, and then bind the FBO to a texture. You can then map the texture to anything you'd like, generally a quad. You can do this for all sorts of HUD objects. This also means that you don't have to redraw the contents of your HUD/menu objects as often as your main view; update the the associated buffer only as often as you require. You will often want to downsample (in the polygon count sense) the objects/scene you are rendering to the FBO for this case. The functions in the API you'll want to check into are:
glGenFramebuffersEXT
glBindFramebufferEXT
glGenRenderbuffersEXT
glBindRenderbufferEXT
glRenderbufferStorageEXT
glFrambufferRenderbufferEXT
glFrambufferTexture2DEXT
glGenerateMipmapEXT
There is a write-up on using FBOs on gamedev.net. Another potential optimization applies if the contents of the minimap are static and you are simply moving a camera over this static view (truly just a map): you can render a portion of the map that is much larger than what you actually want to display to the player and fake a camera by adjusting the texture coordinates of the object it's mapped onto. This only works if your minimap is in orthographic projection.
|
1,733,627 | 1,830,639 | Benchmarks for Intel C++ compiler and GCC | I have an AMD Opteron server running CentOS 5. I want to have a compiler for a fairly large C++ Boost based program. Which compiler I should choose?
| I hope this helps more than hurts :)
I did a little compiler shootout sometime over a year ago, and I am going off memory.
GCC 4.2 (Apple)
Intel 10
GCC 4.2 (Apple) + LLVM
I tested multiple template heavy audio signal processing programs that I'd written.
Compilation times: The Intel compiler was by far the slowest compiler, more than '2x slower', as another poster cited.
GCC handled deep templates very well in comparison to Intel.
The Intel compiler generated huge object files.
GCC+LLVM yielded the smallest binary.
The generated code may have significant variance due to the program's construction, and where SIMD could be used.
For the way I write, I found that GCC + LLVM generated the best code. For programs which I'd written before I took optimization seriously (as I wrote), Intel was generally better.
Intel's results varied; it handled some programs far better, and some programs far worse. It handled raw processing very well, but I give GCC+LLVM the cake because when put into the context of a larger (normal) program... it did better.
Intel won for out of the box, number crunching on huge data sets.
GCC alone generated the slowest code, though it can be as fast with measurement and nano-optimizations. I prefer to avoid those because the wind may change direction with the next compiler release, so to speak.
I never measured poorly written programs in this test (i.e. results outperformed distributions of popular performance libraries).
Finally, the programs were written over several years, using GCC as the primary compiler in that time.
Update: I was also enabling optimizations/extensions for Core2Duo. The programs were clean enough to enable strict aliasing.
|
1,733,656 | 1,734,647 | Has anyone seen a 2-Sat implementation | I have been looking for a while, but I just can't seem to find any implementation of the 2-Sat algorithm.
I am working in c++ with the boost library (which has a strongly connected component module) and need some guidance to either create an efficient 2-Sat program or find an existing library for me to utilise through c++.
| I suppose you know how to model a 2-Sat problem to solve it with SCC.
The way I handle vars and its negation isn't very elegant, but allows a short implementation:
Given n variables numbered from 0 to n-1: in the clauses, -i means the negation of variable i, and in the graph, node i+n represents that negation (am I clear? Note that this encoding cannot express the negation of variable 0, since -0 == 0; the example below leaves variable 0 unused.)
#include <boost/config.hpp>
#include <iostream>
#include <vector>
#include <boost/graph/strong_components.hpp>
#include <boost/graph/adjacency_list.hpp>
#include <boost/foreach.hpp>
typedef std::pair<int, int> clause;
//Properties of our graph. By default oriented graph
typedef boost::adjacency_list<> Graph;
const int nb_vars = 5;
int var_to_node(int var)
{
if(var < 0)
return (-var + nb_vars);
else
return var;
}
int main(int argc, char ** argv)
{
std::vector<clause> clauses;
clauses.push_back(clause(1,2));
clauses.push_back(clause(2,-4));
clauses.push_back(clause(1,4));
clauses.push_back(clause(1,3));
clauses.push_back(clause(-2,4));
//Creates a graph with twice as many nodes as variables
Graph g(nb_vars * 2);
//Let's add all the edges
BOOST_FOREACH(clause c, clauses)
{
int v1 = c.first;
int v2 = c.second;
boost::add_edge(
var_to_node(-v1),
var_to_node(v2),
g);
boost::add_edge(
var_to_node(-v2),
var_to_node(v1),
g);
}
// Every node will belong to a strongly connected component
std::vector<int> component(num_vertices(g));
std::cout << strong_components(g, &component[0]) << std::endl;
// Let's check if there is a variable having its negation
// in the same SCC
bool satisfied = true;
for(int i=0; i<nb_vars; i++)
{
if(component[i] == component[i+nb_vars])
satisfied = false;
}
if(satisfied)
std::cout << "Satisfied!" << std::endl;
else
std::cout << "Not satisfied!" << std::endl;
}
|
1,733,857 | 23,632,871 | How to set the padding of QTableView cells through CSS? | Is it possible to define the padding of QTableView cells? I would expect this to be possible using CSS stylesheets, but the documentation does not describe a method to do this.
The following stylesheet does not have the desired effect:
QTableView {
padding: 5px;
}
as it influences the padding property of the widget as a whole, not of the individual cells.
| I know this is an old question but I've been struggling with this recently.
I found out that by setting
tableView->verticalHeader()->setSectionResizeMode(QHeaderView::ResizeToContents);
the padding set in your CSS will then be applied to the top and bottom of the cell, too!
|
1,734,026 | 1,734,050 | Do types in QT applications for different platforms have similar size? | I created an application for Windows in C++ using QT. If I want to port it to Linux or Mac OS, will sizeof(int) or sizeof(long) change? In other words, do types in QT applications for different platforms have similar size?
| In addition to stefaanv's answer, if you're worried about their size, use quint32 and friends. Qt guarantees them to be the same size on all supported platforms.
Check out QtGlobal:
The header file declares several type definitions that guarantee a specified bit-size on all platforms supported by Qt for various basic types, for example qint8 which is a signed char guaranteed to be 8-bit on all platforms supported by Qt. The header file also declares the qlonglong type definition for long long int (__int64 on Windows).
Several convenience type definitions are declared: qreal for double, uchar for unsigned char, uint for unsigned int, ulong for unsigned long and ushort for unsigned short.
Or, if you want a more standard solution, there's stdint.h. I like using Qt all the way, because then the porting is literally just running qmake on a different platform.
|
1,734,059 | 1,734,111 | How do I set the icon for my program in Visual C++ 2008 Express Edition? | I want to set the embedded icon for my executable console program to a custom icon.
I know this is pretty much the same question as this one here, but I'm using Visual C++ 2008 Express Edition, and the Resources View is not available (and the Project-context-menu->Add->Resource... is grayed out), so I'm at loss here. Will I have to upgrade to a pro edition for such a basic task?
| You can still add a resource in Express edition, but there is no resource editor GUI, you have to create the resource yourself using external tools.
The Win32 Platform SDK has a resource compiler (rc.exe) that will compile a resource script which is just a text file that you can write yourself. There are also free resource editors out there if you'd rather not create the file by hand.
The process is:
Create the resource script
Compile it using rc.exe to create a binary .res file
Add the compiled .res file to the Visual Studio C++ project
Recompile
|
1,734,522 | 1,734,542 | Looking for a Java Graphics alternative | Any time I embark on a project that requires some rendering of primitive shapes and lines, I usually turn to Java because it's just so easy. For my latest project, I decided I might like to learn another API similar to but not Java Graphics2D. I would preferably like something that will work with C++ on Linux. Does anybody have any good recommendations for me? Thanks!
| Anti-Grain geometry gives high quality 2D rendering from path and font primitives, is a good example of idiomatic use of templates in C++, and looks fantastic. It has more documentation on the algorithms than on the API, so be prepared to look at the examples for how to use it. It requires some OS specific code to take the in-memory bitmap and blit it onto the screen. The other disadvantage is that when you next look at Java 2D or GDI+ applications you'll think Ewww as they're so badly rendered.
|
1,734,556 | 1,734,600 | C++ generic classes & inheritance | I'm new to generic class programming so maybe my question is silly - sorry for that. I'd like to know whether the following thing is possible and - if so, how to do it
I have a simple generic class Provider, which provides values of the generic type:
template <class A_Type> class Provider{
public:
A_Type getValue();
void setSubProvider(ISubProvider* subProvider)
private:
A_Type m_value;
ISubProvider* m_subProvider;
};
The getValue function shall return m_value in case of m_subProvider is NULL. But if SubProvider is not Null, the value shall be calculated by the SubProvider class.
so subprovider must be of generic type too, but i create it as an abstract class without implementation:
template <class A_Type> class ISubProvider{
public:
virtual A_Type getValue() = 0;
};
now I want the actual implementations of ISubProvider to be nongeneric! for example I want to implement IntegerProvider which returns type Integer
class IntegerProvider : public ISubProvider{
int getValue(){return 123;}
};
and maybe a StringProvider:
class StringProvider : public ISubProvider{
string getValue(){return "asdf";}
};
now - how can I code the whole thing, such that i can use the
void setSubProvider(ISubProvider* subProvider)
function of class Provider only with a subprovider that corresponds to the generic type of Provider?
for example, if i instanciate a provider of type int:
Provider<int> myProvider = new Provider<int>();
then it shall be possible to call
myProvider.setSubProvider(new IntegerProvider());
but it must be impossible to call
myProvider.setSubProvider(new StringProvider());
I hope you understand my question and can tell me how to create that code properly :)
Thank you!
| C++ has templates (class templates and function templates), not generics.
Given this declaration:
template <class A_Type> class ISubProvider;
you can't do this:
class IntegerProvider : public ISubProvider{
int getValue(){return 123;}
};
because ISubProvider is not a class, it's a class template.
You can though, do this.
class IntegerProvider : public ISubProvider<int> {
int getValue(){return 123;}
};
I think that this is what you want to do, in any case.
This also won't work for the same reason.
template <class A_Type> class Provider {
public:
A_Type getValue();
void setSubProvider(ISubProvider* subProvider)
private:
A_Type m_value;
ISubProvider* m_subProvider;
};
You have to do something like this.
template <class A_Type> class Provider {
public:
A_Type getValue();
void setSubProvider(ISubProvider<A_Type>* subProvider);
private:
A_Type m_value;
ISubProvider<A_Type>* m_subProvider;
};
Now you have achieved what you wanted: setSubProvider must be given a pointer to an ISubProvider instantiation for the same template type parameter as the Provider class template.
Note, however, that you haven't really gained anything by using a base class templated on type in this case.
|
1,734,628 | 1,734,640 | Copy constructor and = operator overload in C++: is a common function possible? | Since a copy constructor
MyClass(const MyClass&);
and an = operator overload
MyClass& operator = (const MyClass&);
have pretty much the same code, the same parameter, and only differ on the return, is it possible to have a common function for them both to use?
| Yes. There are two common options. One - which is generally discouraged - is to call the operator= from the copy constructor explicitly:
MyClass(const MyClass& other)
{
operator=(other);
}
However, providing a good operator= is a challenge when it comes to dealing with the old state and issues arising from self assignment. Also, all members and bases get default initialized first even if they are to be assigned to from other. This may not even be valid for all members and bases and even where it is valid it is semantically redundant and may be practically expensive.
An increasingly popular solution is to implement operator= using the copy constructor and a swap method.
MyClass& operator=(const MyClass& other)
{
MyClass tmp(other);
swap(tmp);
return *this;
}
or even:
MyClass& operator=(MyClass other)
{
swap(other);
return *this;
}
A swap function is typically simple to write as it just swaps the ownership of the internals and doesn't have to clean up existing state or allocate new resources.
An advantage of the copy-and-swap idiom is that it is automatically self-assignment safe and - providing that the swap operation is no-throw - it is also strongly exception safe.
To be strongly exception safe, a 'hand' written assignment operator typically has to allocate a copy of the new resources before de-allocating the assignee's old resources so that if an exception occurs allocating the new resources, the old state can still be returned to. All this comes for free with copy-and-swap but is typically more complex, and hence error prone, to do from scratch.
The one thing to be careful of is to make sure that the swap method is a true swap, and not the default std::swap which uses the copy constructor and assignment operator itself.
Typically a memberwise swap is used. std::swap works and is 'no-throw' guaranteed with all basic types and pointer types. Most smart pointers can also be swapped with a no-throw guarantee.
|
1,734,893 | 1,734,905 | Overloading a method in a subclass in C++ | Suppose I have some code like this:
class Base {
public:
virtual int Foo(int) = 0;
};
class Derived : public Base {
public:
int Foo(int);
virtual double Foo(double) = 0;
};
class Concrete : public Derived {
public:
double Foo(double);
};
If I have a object of type Concrete, why can I not call Foo(int)?
If I change the name of Foo(double) so that it isn't overloading Foo, then all is well and both methods are accessible, but this isn't what I want.
Similarly, if I remove Concrete class and implement Foo(double) in Derived, then both are accessible, but again, not what I want.
| Name lookup happens before overload resolution, so once Foo has been found in Concrete, base classes won't be searched for other methods called Foo. int Foo(int) in Derived is hidden by the Foo in Concrete.
You have a number of options.
Change the call to be explicit.
concrete.Derived::Foo(an_int);
Add a using declaration to Concrete.
class Concrete : public Derived {
public:
using Derived::Foo;
double Foo(double);
};
Call the function through a base reference.
Derived& dref = concrete;
dref.Foo(an_int);
|
1,734,927 | 1,734,961 | Compare characters at the end of the string C++ | This program is supposed to find command line arguments entered on Unix which end with “.exe”. For some reason it doesn't work. Here is the code:
int main( int argc, char* argv[] )
{
for ( int i = 1; i < argc; i++)
if( findExe( argv[i] ) )
cout << argv[i] << endl;
return 0;
}
bool findExe( char* argument )
{
if ( strlen( argument ) >= 4 )
{
string testExe = ".exe";
string initialWord=argument; //converts c-string to string
string temp( initialWord,( initialWord.size() - 4 ),4 );//creates temp with last four characters from initialWord
if ( !temp.compare(testExe) )
return true;
}
else
return false;
}
| Remove the else, I think (although I haven't compiled the code to check). In the case where the length is at least 4, but the string comparison returns non-zero, you reach the end of the function without returning. Your compiler should have warned you: turn on more warnings.
|
1,735,038 | 1,735,101 | Why "not all control paths return a value" is warning and not an error? | I was trying to answer this question. As suggested by the accepted answer, the problem with that code is that not all control paths are returning a value. I tried this code on the VC9 compiler and it gave me a warning about the same. My question is why is just a warning and not an error? Also, in case the path which doesn't return a value gets executed, what will be returned by the function (It has to return something) ? Is it just whatever is there on top of the stack or is the dreaded undefined behavior again?
| Failing to return a value from a function that has a non-void return type results in undefined behaviour, but is not a semantic error.
The reason for this, as far as I can determine, is largely historical.
C originally didn't have void and implicit int meant that most functions returned an int unless explicitly declared to return something else even if there was no intention to use the return value.
This means that a lot of functions returned an int but without explicitly setting a return value, and that was OK because the callers would never use the return value for these functions.
Some functions did return a value, but used the implicit int because int was a suitable return type.
This means that pre-void code had lots of functions which nominally returned int but which could be declared to return void and lots of other functions that should return an int with no clear way to tell the difference. Enforcing return on all code paths of all non-void functions at any stage would break legacy code.
There is also the argument that some code paths in a function may be unreachable but this may not be easy to determine from a simple static analysis so why enforce an unnecessary return?
|
1,735,085 | 1,735,116 | C vs C++ (Objective-C vs Objective-C++) for iPhone | I would like to create a portable library for iPhone, that also could be used for other platforms.
My question is the following:
Does anyone know what is best to use on the iPhone: Objective-C or Objective-C++? Does it work with C++ the same way as Objective-C works with C, or not?
Reasons: Objective-C is a superset of C, but Objective-C++ is not a superset of C++.
Thanks in advance!
UPDATE: What about memory usage and speed for the same implementation of a use case?
UPDATE1: If anyone can provide any more information, he'll be welcome.
| They're not really different languages. Objective-C++ is just Objective-C with slightly limited support for including C++ code. Objective-C is the standard dialect, but if you need to work with C++, there's no reason not to use it. AFAIK, the biggest practical difference (aside from allowing use of different libraries) is that Objective-C++ seems to compile a bit slower. Just be sure to read up on it first if you do decide to go that route, because the merging of C++ and Objective-C is not 100% seamless.
|
1,735,102 | 1,735,141 | error C2664 + generic classes + /Wp64 | I've got the following lines of code:
p_diffuse = ShaderProperty<Vector4>(Vector4(1,1,1,1));
addProperty(&p_diffuse, "diffuse");
p_shininess = ShaderProperty<float>(10.0f);
addProperty(&p_shininess, "shininess");
the addProperty function is implemented as follows:
template <class A_Type>
void IShader<A_Type>::addProperty( ShaderProperty<A_Type>* shaderProperty,
std::string propertyName )
{
m_shaderProperties[propertyName] = shaderProperty;
}
now i get a strange compiler error on the last line of the first chunk of code. addProperty works fine in the first case, but in the second (when trying to add p_shininess) i get:
error C2664: 'IShader<A_Type>::addProperty': cannot convert parameter 1 from 'ShaderProperty<A_Type> *' to 'ShaderProperty<A_Type> *'
Huh!?
a hint of the problem could be the following: if I go to the project settings and set in the C++ general tab "check for 64-bit compatibility problems" from "no" to "yes(/Wp64)" then the error reads slightly different:
error C2664: 'IShader<A_Type>::addProperty': cannot convert parameter 1 from 'ShaderProperty<A_Type> *__w64 ' to 'ShaderProperty<A_Type> *'
what's going on?? what is __w64??
edit: class definition of IShader:
template <class A_Type> class IShader {
public:
virtual ~IShader(void) {};
virtual A_Type shade(IntersectionData* iData, Scene* scene) = 0;
protected:
ShaderProperty<A_Type>* getProperty(std::string propertyName);
void addProperty(ShaderProperty<A_Type>* shaderProperty, std::string propertyName);
private:
std::map<std::string, ShaderProperty<A_Type>*> m_shaderProperties;
};
| float != Vector4. Your whole class (IShader) is templated on A_Type, not just the addProperty method. /Wp64 has nothing to do with anything. The solution to this problem will need more context; you may want to define addProperty as a template member function instead of (or in addition to) IShader being templated.
Again this will be hard to get right without knowing exactly what you are doing, but I suspect what you want is a heterogeneous collection of properties. To do this safely you'll need to employ some runtime checking.
class IShaderProperty {
public:
virtual ~IShaderProperty() {}
};
template<typename ShadeType>
class IShader;
template <typename T>
class ShaderProperty : public IShaderProperty {
IShader<T> *m_shader;
...
};
template<typename ShadeType>
class IShader {
virtual ShadeType shade(...) = 0;
protected:
map<string, IShaderProperty*> m_shaderProperties;
template<typename T>
void addProperty(ShaderProperty<T>* prop, string name) {
m_shaderProperties[name] = prop;
}
template<typename T>
void getProperty(const string& name, ShaderProperty<T>** outProp) {
map<string, IShaderProperty*>::iterator i = m_shaderProperties.find(name);
*outProp = NULL;
if( i != m_shaderProperties.end() ) {
*outProp = dynamic_cast<ShaderProperty<T>*>( i->second );
}
}
};
You'll have to use getProperty like
ShaderProperty<float> *x;
ashader.getProperty("floatprop", &x);
if( x ) {
...
}
Alternatively, getProperty could directly return the value, but then you'll need to mention T twice, e.g.
ShaderProperty<float> *x = ashader.getProperty<float>("floatprop");
if( x ) { ... }
You'll note I use dynamic_cast and check for NULL. If you have some other mechanism for mapping property names to property types you can use that instead and static_cast. There is some runtime overhead associated with dynamic_cast.
|
1,735,138 | 1,735,147 | Copying a class that inherits from a class with pure virtual methods? | I've not used C++ in a while, and I've become far too comfortable with the ease-of-use of real languages.
At any rate, I'm attempting to implement the Command pattern, and I need to map a number of command object implementations to string keys. I have an STL map of string to Command, and I'd like to copy the Command.
Essentially,
Command * copiedCommand = new Command( commandImplementation );
And I'd like to retain the functionality of commandImplementation. Since Command has the pure virtual function execute, this doesn't work. What's the correct way to do this?
| One way to do it would be to add this to your Command class:
public:
virtual Command * Clone() const = 0;
... and then in the various subclasses of Command, implement Clone() to return a copy of the object:
public:
virtual Command * Clone() const {return new MyCommandSubclass(*this);}
Once that's done, you can then do what you want like this:
Command * copiedCommand = commandImplementation->Clone();
|
1,735,179 | 1,735,195 | Why would a blocking socket repeatedly return 0-length data? | I'm having a significant problem using a standard BSD-style socket in a C++ program. In the code below, I connect to a local web server, send a request, and simply create a loop waiting for data to return. I actually do receive the data, but then I get an endless stream of 0-length data as if it was a non-blocking socket. The web server presumably didn't kill the connection, because if so I would have received a length of -1.
Please ignore simple typos I make below, as I'm writing the code from memory, not a direct copy/paste. The code produces the same result on OSX and Windows.
int sock = socket(AF_INET, SOCK_STREAM, 0);
//assume serv_addr has been created correctly
connect(sock, (sockaddr*)&serv_addr, sizeof(serv_addr)) < 0);
std::string header = "GET / HTTP/1.1\r\n"
"Host: 127.0.0.1:80\r\n"
"Keep-Alive: 300\r\n"
"Connection: keep-alive\r\n\r\n";
send(sock, header.c_str(), header.length()+1, 0);
for (;;) {
char buffer[1024];
int len = recv(sock, buffer, 1024, 0);
cout << len << endl;
//this outputs two numbers around 200 and 500,
//which are the header and html, and then it
//outputs and endless stream of 0's
}
| From the man page of recv
For TCP sockets, the return value 0 means the peer has closed its half
side of the connection.
|
1,735,225 | 1,735,549 | C++ print out limit number of words | I'm a beginner to C++ and I wonder how to do this.
I want to write code which takes in a text line. E.g. "Hello stackoverflow is a really good site"
From the output I want only to print out the first three words and skip the rest.
Output I want: "Hello stackoverflow is"
If it was Java I would've used the string split(). As for C++ I don't really know. Is there anything similar, or what is the approach for C++?
| The operator >> breaks a stream into words, but it does not detect the end of a line.
What you can do is read a line then get the first three words from that line:
#include <string>
#include <iostream>
#include <sstream>
int main()
{
std::string line;
// Read a line.
// If it succeeds then loop is entered. So this loop will read a file.
while(std::getline(std::cin,line))
{
std::string word1;
std::string word2;
std::string word3;
// Get the first three words from the line.
std::stringstream linestream(line);
linestream >> word1 >> word2 >> word3;
}
// Expanding to show how to use with a normal string:
// In a loop context.
std::string test("Hello stackoverflow is a really good site!");
std::stringstream testStream(test);
for(int loop=0;loop < 3;++loop)
{
std::string word;
testStream >> word;
std::cout << "Got(" << word << ")\n";
}
}
|
1,735,229 | 1,735,247 | c++ overload < using a friend function in C++ for multiple classes? | I am experimenting with overloading the < operator (I'll add the > later) as a means of checking whether one of the goldfish (please see code below) is within the territory of another (the territories can overlap). I'm getting multiple compile errors with the code - primarily to do with not being able to access the private variables, even though I've made the overloaded operator a friend function for both of them.
Am I doing the syntax wrong, perhaps?
/* fish program:
each fish will start out with a static size and a territory.
they may attack each other, in which case the outcomes with depend on
1) their location (ie. if the attacking fish is on the territory of another goldfish
2) the relative size of the two fish
depending on the outcome of the attack - the fish may retreat or triumph.
triumph will increase its territory.
*/
#include <iostream>
using namespace std;
class point {
private:
float x;
float y;
public:
point(float x_in, float y_in) { //the 2 arg constructor
x = x_in;
y = y_in;
}
friend bool operator< ( const point &point_a, const territory &the_territory);
};
class territory {
private:
point ne, sw;
public:
territory(float x_ne, float y_ne, float x_sw, float y_sw)
: ne(x_ne, y_ne), sw(x_sw,y_sw) {
cout << ne.x << " and " << ne.y <<endl; //debug
}
friend bool operator< ( const point &point_a, const territory &the_territory) {
if((point_a.x < the_territory.ne.x && point_a.x > the_territory.ne.x) &&
(point_a.y < the_territory.ne.y && point_a.y > the_territory.ne.y))
return true;
}
};
class goldfish {
private:
float size;
point pos;
territory terr;
public:
goldfish(float x, float y) : pos(x,y), terr(x-1,y-1,x+1,y+1) { //constructor
size = 2.3;
}
void retreat() { //what happens in the case of loss in attack
}
void triumph() {
}
void attack() {
}
};
int main() {
goldfish adam(1,1); // fish1
goldfish eve(1.2,1.2); // fish2
}
| I haven't checked the logic of your program, but there are some simple notes. I commented the additions:
class territory; // declare the class used in operator<
class point {
private:
float x;
float y;
public:
point(float x_in, float y_in) {
x = x_in;
y = y_in;
}
float getx() { return x;} // add public "getter" for x
float gety() { return y;} // same for y
friend bool operator< ( const point &point_a, const territory &the_territory);
};
Also, don't forget to use the getters we defined above:
territory(float x_ne, float y_ne, float x_sw, float y_sw)
: ne(x_ne, y_ne), sw(x_sw,y_sw) {
// access x and y through getters
cout << ne.getx() << " and " << ne.gety() <<endl;
}
|
1,735,324 | 1,735,343 | Is there a linked list predefined library in C++? | Is there a linked list in C++ that I could just #include? Or do I need to create my own if I want to use one?
| As daniel notes, yes, std::list. Usage would be:
#include <list>
// ...
std::list<int> listOfInts;
listOfInts.push_back(1);
// ...
And so on.
You can find a complete list of STL classes here. The section you're after is 3.2, Container classes. Another useful reference of the C++ Standard Library is here.
|
1,735,539 | 1,735,649 | GDB debugger problems - No source file named | For some reason I can't get gdb to recognize the files in my project when debugging. I've tried a variety of things, including downloading different versions, etc., and the last thing I did was completely overwrite all of MinGW with Twilight Dragon Media's Bundle Package.
Does anyone know how to solve this issue? The odd thing was that when I first started using it, it was working fine, and then it suddenly stopped working.
I'm using Code::Blocks and I have the -g checked for the debug version of the project. I'm running on Windows Vista.
| GDB locates source files by file path/name from the object module debug information. So if you moved your sources you should rebuild your project. This can also happen if you are running on a different host which has no visibility to your source at all.
The most important thing to remember is:
Debug information doesn't contain the source itself, but only the path where to find it. People often don't know this.
|
1,735,576 | 1,735,587 | Are there any major differences between Visual C++ and C++ | I want to make win32 apps and games. Is there any major differences between C++ and Visual C++? What should I use.
| Visual C++ is the IDE, C++ the language... Use the language you want, there is no best language...
I would go for the .NET platform if you start something new; there you can use any language that targets the platform (Managed C++, C#, VB.NET, J# ...), interoperability with native code is great, and support for DirectX is provided too.
EDIT:
You want to develop for the Win32 platform, are you serious? I would not start projects for a slowly dying platform now...
|
1,735,640 | 1,735,701 | Some memory seems to be left allocated after malloc() and free() | I am new to C. I am trying to get comfortable with malloc + free. I have coded the following test, but for some reason the memory isn't freed completely (top still indicates about 150MB of memory allocated to the process). Why is that?
#include <stdio.h>
#include <malloc.h>
typedef struct {
char *inner;
} structure;
int main()
{
int i;
structure** structureArray;
structureArray = (structure**)malloc(sizeof(structure*)*1000*10000);
for (i = 0; i < 1000*10000;i++)
{
structureArray[i] = (structure*) malloc(sizeof(structure));
structureArray[i]->inner = (char*) malloc(sizeof(char)*1000*1000*1000);
}
printf("freeing memory");
for (i = 0; i < 1000*10000;i++)
{
free(structureArray[i]->inner);
free(structureArray[i]);
}
free(structureArray);
system("sleep 100");
return 0;
}
corresponding Makefile:
all: test.c
gcc -o test test.c
./test &
top -p `pidof ./test`
killall ./test
| top will tell you the amount of physical memory assigned to your process. Virtual memory is an abstraction on top of physical memory, and malloc/free provide an abstraction on top of that.
malloc reserves space from the heap of your program. The heap is simply an area of your program's virtual address space used for temporary storage. As you call malloc more, the heap is expanded using the brk system call. However, although the virtual size of your heap increases, physical memory isn't actually assigned until you read or write to your newly allocated memory. For instance, since you never write to the memory allocated to the inner fields of your records, those allocations will not take up any physical RAM.
free just releases parts of the heap allocated by malloc. This doesn't necessarily reduce the virtual size of the heap, so the physical memory associated with it may not be released. This is why you're not seeing a reduction in physical memory usage.
|
1,735,796 | 1,735,816 | Is it possible to choose a C++ generic type parameter at runtime? | Is there a way to choose the generic type of a class at runtime or is this a compile-time thing in C++?
What I want to do is something like this (pseudocode):
Generictype type;
if(someval==1)
type = Integer;
if(someval==2)
type = String;
list<type> myList;
Is this possible in C++? and if yes, how?
| It's a compile time thing. Template parameter types must be known to the compiler at compile-time.
That being said, using certain template meta-programming techniques, you can choose one type or another AT compile-time, but only if all possible types are known at compile-time, and only if the condition for selecting a type can be resolved at compile time.
For example, using partial specialization you could select a type at compile time based on an integer:
template <typename T>
class Foo
{ };
template <int N>
struct select_type;
template<>
struct select_type<1>
{
typedef int type;
};
template<>
struct select_type<2>
{
typedef float type;
};
int main()
{
Foo<select_type<1>::type> f1; // will give you Foo<int>
Foo<select_type<2>::type> f2; // will give you Foo<float>
}
|
1,735,841 | 1,735,910 | Initializing a ublas vector from a C array | I am writing a Matlab extension using the C++ ublas library, and I would like to be able to initialize my ublas vectors from the C arrays passed by the Matlab interpreter.
How can I initialize the ublas vector from a C array without (for the sake of efficiency) explicitly copying the data. I am looking for something along the following lines of code:
using namespace boost::numeric::ublas;
int pv[10] = { 5, 5, 5, 5, 5, 5, 5, 5, 5, 5 };
vector<int> v (pv);
In general, is it possible to initialize a C++ std::vector from an array? Something like this:
#include <iostream>
#include <vector>
using namespace std;
int main()
{
int pv[4] = { 4, 4, 4, 4};
vector<int> v (pv, pv+4);
pv[0] = 0;
cout << "v[0]=" << v[0] << " " << "pv[0]=" << pv[0] << endl;
return 0;
}
but where the initialization would not copy the data. In this case the output is
v[0]=4 pv[0]=0
but I want the output to be the same, where updating the C array changes the data pointed to by the C++ vector
v[0]=0 pv[0]=0
| Both std::vector and ublas::vector are containers. The whole point of containers is to manage the storage and lifetimes of their contained objects. This is why when you initialize them they must copy values into storage that they own.
C arrays are areas of memory fixed in size and location so by their nature you can only get their values into a container by copying.
You can use C arrays as the input to many algorithm functions so perhaps you can do that to avoid the initial copy?
|
1,735,865 | 1,735,880 | C++ Constructor and Destructor | I'm getting some errors when compiling my program. They relate to the constructor and destructor of my class Instruction.
Errors are:
/tmp/ccSWO7VW.o: In function `Instruction::Instruction(std::basic_string<char, std::char_traits<char>, std::allocator<char> >, int)':
ale.c:(.text+0x241): undefined reference to `vtable for Instruction'
/tmp/ccSWO7VW.o: In function `Instruction::Instruction(std::basic_string<char, std::char_traits<char>, std::allocator<char> >, int)':
ale.c:(.text+0x2ab): undefined reference to `vtable for Instruction'
/tmp/ccSWO7VW.o: In function `Instruction::~Instruction()':
ale.c:(.text+0x315): undefined reference to `vtable for Instruction'
/tmp/ccSWO7VW.o: In function `Instruction::~Instruction()':
ale.c:(.text+0x38d): undefined reference to `vtable for Instruction'
collect2: ld returned 1 exit status
Here is my code:
//classses.h
#include <iostream>
#include <string>
using namespace std;
class Instruction{
protected:
string name;
int value;
public:
Instruction(string _name, int _value);
~Instruction();
void setName(string _name);
void setValue(int _value);
string getName();
int getValue();
virtual void execute();
};
//constructor
Instruction::Instruction(string _name, int _value){
name = _name;
value = _value;
}
//destructor
Instruction::~Instruction(){
name = "";
value = 0;
}
void Instruction::setName(string _name){
name = _name;
}
void Instruction::setValue(int _value){
value = _value;
}
string Instruction::getName(){
return name;
}
int Instruction::getValue(){
return value;
}
/////////////////////////////////////////////////////////////////////
//ale.cpp
#include "headers.h"
#include "functions.h"
#include "classes.h"
#include <list>
using namespace std;
int main(){
return 0;
}
| I would guess the problem is due to you declaring a virtual method 'execute' in the Instruction class, and never defining it anywhere. Compilers have to produce a vtable object for a class with virtual methods and really only want one copy of it, so they usually just do it in the compilation unit (source file) that defines the first virtual function...
|
1,735,938 | 1,735,967 | Problems with deltaTicks combined with high speed in a game loop | My game uses a d = vt calculation for movement of objects where t is the time since the last frame (one frame per loop).
I'm using SDL and the gist of the timing calculation is that I create an instance of a Timer class and start it. I call GetSeconds() when it's needed which returns the difference between when the timer was started and the current time (divided by 1000 because everything is in milliseconds).
Ex:
return (SDL_GetTicks() - m_StartingTicks) / MILLISECONDS_PER_SECOND;
After each loop the timer is reset, i.e. m_StartingTicks = SDL_GetTicks(). However, I recently changed this so that it's only reset if m_StartingTicks < SDL_GetTicks(), but it didn't fix the problem.
This was all hunky dory until I recently wrote a game engine to handle different game states and various other things which are used in my main game loop. This seriously improved performance, but unfortunately each iteration of the game loop now occurs in less than 1 millisecond, so when I call GetSeconds(), 0 is returned and things on the screen don't move.
The easiest way to handle this is a simple kludge where, if (SDL_GetTicks() - m_StartingTicks) == 0, I change it to 1 (as in 1 millisecond instead of 0). I don't really like this though, and I'd like to hear any suggestions, fixes, improvements, etc.
If you need more info I'd be happy to offer it.
| If you're running at 60Hz, then each frame should be roughly 16ms. Sounds like you have disabled vertical sync?
|
1,736,017 | 1,736,052 | Getting Union, Intersection, or Difference of Sets in C++ | I have a couple questions about how to use C++ sets (std::set)
Is there a way to get the union, intersection, or difference of two C++ sets? (It's pretty easy to write my own function to do that, but I wanted to know if there was a built-in function for it.)
Can C++ sets be used as keys in a map?
| Use the set_difference(), set_union(), set_intersection() and set_symmetric_difference() functions.
Sets and maps support any key type that can be compared. By default this means the type has operator<() defined, but you can provide your own comparator. Note that std::set, like the other standard containers, does define operator<() (a lexicographic element-wise comparison), so a set whose element type is itself comparable can in fact be used as a map key without a custom comparator.
|
1,736,056 | 1,736,069 | How can you cast between wchar_t* and an int? | I have a function which returns the inner text of an xml element. It returns it, however, as a const wchar_t*. I wish to return this value as an integer (And a float in some other cases). What is the best method for doing so?
| The C++ way would be:
wchar_t* foo = L"123";
std::wistringstream s(foo);
int i = 0;
s >> i;
With Boost, you could do:
try {
int i2 = boost::lexical_cast<int>(foo);
} catch (boost::bad_lexical_cast const&) {
...
}
Depending on which CRT implementation you're using you may have "wide" atoi/strtol functions:
int i = _wtoi(foo);
long l = _wcstol(foo, NULL, 10);
|
1,736,080 | 1,736,086 | C++ cout cin string manipulation | I'm trying to get a line as input from the command line. My problem is that I'm not getting the whole line, but it's being tokenized by space.
So if I entered something such as "I like Math a lot" instead of getting
"you enterend: I like Math a lot"
I get the follwoing:
EDITING MODE: Enter a command
i like Math a lot
you entered i
EDITING MODE: Enter a command
you entered like
EDITING MODE: Enter a command
you entered Math
EDITING MODE: Enter a command
you entered a
EDITING MODE: Enter a command
you entered lot
void enterEditingMode(){
editingMode = TRUE;
static string CMD = "\nEDITING MODE: Enter a command\n";
string input;
while(editingMode == TRUE){
cout << CMD;
cin >> input;
//we assume input is always correct
// here we need to parse the instruction
cout << "you entered " << input <<endl;
| std::getline is the standard way to read a line of input at a time.
You can use it like this:
std::string line;
std::getline(std::cin, line);
It returns a reference to the input stream which has an implicit conversion to void* so you can check for success easily like this:
if (std::getline(std::cin, line))
{
// successfully read a line...
}
|
1,736,199 | 1,736,203 | Assigning to pointer immediately after deleting | I was wondering if it is safe to do this...
delete p_pointer;
p_pointer = p_otherPointer;
Rather than...
delete p_pointer;
p_pointer = 0;
p_pointer = p_otherPointer;
I would assume so since there aren't any new memory allocations between the deletion and assignment, but I just want to make sure.
| Yes it is safe. It's useless to set the deleted pointer to NULL if you're about to reassign it anyway. The reason people set deleted pointers to NULL is so they can "mark" it as deleted, so later they can check if it has already been deleted.
|
1,736,267 | 1,736,309 | C++ cout printing slowly | I noticed if I print out a long string(char*) using cout it seems to print 1 character at a time to the screen in Windows 7, Vista, and Linux(using putty) using Visual C++ 2008 on Windows and G++ on Linux. Printf is so much faster I actually switched from cout to printf for most printing in a project of mine. This is confusing me because this question makes it seem like I'm the only one having this issue.
I even wrote a cout replacement that looks like it beats the pants off of cout on my comp -
class rcout
{
public:
char buff[4096];
unsigned int size;
unsigned int length;
rcout()
{
size = 4096;
length = 0;
buff[0] = '\0';
}
~rcout()
{
printf("%s", buff);
}
rcout &operator<<(char *b)
{
strncpy(buff+length, b, size-length);
unsigned int i = strlen(b);
if(i+length >= size)
{
buff[size-1] = '\0';
printf("%s", buff);
b += (size-length) -1;
length = 0;
return (*this) << b;
}
else
length += i;
return (*this);
}
rcout &operator<<(int i)
{
char b[32];
_itoa_s(i, b, 10);
return (*this)<<b;
}
rcout &operator<<(float f)
{
char b[32];
sprintf_s(b, 32, "%f", f);
return (*this)<<b;
}
};
int main()
{
char buff[65536];
memset(buff, 0, 65536);
for(int i=0;i<3000;i++)
buff[i] = rand()%26 + 'A';
rcout() << buff << buff <<"\n---"<< 121 <<"---" << 1.21f <<"---\n";
Sleep(1000);
cout << "\n\nOk, now cout....\n\n";
cout << buff << buff <<"\n---"<< 121 <<"---" << 1.21f <<"---\n";
Sleep(1000);
cout << "\n\nOk, now me again....\n\n";
rcout() << buff << buff <<"\n---"<< 121 <<"---" << 1.21f <<"---\n";
Sleep(1000);
return 0;
}
Any ideas why cout is printing so slowly for me?
| NOTE: This experimental result is valid for MSVC. In other library implementations, the result will vary.
printf can be (much) faster than cout. Although printf parses the format string at runtime, it requires far fewer function calls and far fewer instructions to do the same job, compared to cout. Here is a summary of my experimentation:
The number of static instructions
In general, cout generates a lot more code than printf. Say that we have the following cout code to print out with some formats.
os << setw(width) << dec << "0x" << hex << addr << ": " << rtnname <<
": " << srccode << "(" << dec << lineno << ")" << endl;
On a VC++ compiler with optimizations, it generates around 188 bytes of code. But when you replace it with printf-based code, only 42 bytes are required.
The number of dynamically executed instructions
The static instruction count only tells the difference in binary code size. What is more important is the actual number of instructions dynamically executed at runtime. I also did a simple experiment:
Test code:
int a = 1999;
char b = 'a';
unsigned int c = 4200000000;
long long int d = 987654321098765;
long long unsigned int e = 1234567890123456789;
float f = 3123.4578f;
double g = 3.141592654;
void Test1()
{
cout
<< "a:" << a << "\n"
<< "a:" << setfill('0') << setw(8) << a << "\n"
<< "b:" << b << "\n"
<< "c:" << c << "\n"
<< "d:" << d << "\n"
<< "e:" << e << "\n"
<< "f:" << setprecision(6) << f << "\n"
<< "g:" << setprecision(10) << g << endl;
}
void Test2()
{
fprintf(stdout,
"a:%d\n"
"a:%08d\n"
"b:%c\n"
"c:%u\n"
"d:%I64d\n"
"e:%I64u\n"
"f:%.2f\n"
"g:%.9lf\n",
a, a, b, c, d, e, f, g);
fflush(stdout);
}
int main()
{
DWORD A, B;
DWORD start = GetTickCount();
for (int i = 0; i < 10000; ++i)
Test1();
A = GetTickCount() - start;
start = GetTickCount();
for (int i = 0; i < 10000; ++i)
Test2();
B = GetTickCount() - start;
cerr << A << endl;
cerr << B << endl;
return 0;
}
Here is the result of Test1 (cout):
# of executed instruction: 423,234,439
# of memory loads/stores: approx. 320,000 and 980,000
Elapsed time: 52 seconds
Then, what about printf? This is the result of Test2:
# of executed instruction: 164,800,800
# of memory loads/stores: approx. 70,000 and 180,000
Elapsed time: 13 seconds
On this machine and compiler, printf was much faster than cout. Both the number of executed instructions and the number of memory loads/stores (an indicator of cache misses) show a 3-4x difference.
I know this is an extreme case. Also, I should note that cout is much easier when you're handling 32/64-bit data and require 32-/64-bit platform independence. There is always a trade-off. I use cout when the type handling is very tricky.
Okay, cout in MSVS just sucks :)
|
1,736,295 | 1,736,359 | C++ logging framework suggestions | I'm looking for a C++ logging framework with the following features:
logs have a severity (info, warning, error, critical, etc)
logs are tagged with a module name
framework has a UI (or CLI) to configure for which modules we will actually log to file, and the minimum severity required for a log to be written to file.
has a viewer which lets me search per module, severity, module name, error name, etc
| Not sure about the configuration from a UI or CLI. I've used both of these logging frameworks at one point or another.
https://sourceforge.net/projects/log4cplus/
https://logging.apache.org/log4cxx/index.html
It wouldn't be too hard to drive your logging based on a configuration file that could be editable by hand or through a quick and dirty GUI or CLI app. Might be a bit harder to adjust these dynamically but not too bad.
Update:
It looks like the proposed Boost.Log is now in Boost 1.54 which is at a stable release. If you are already using Boost than I would take a look at it.
|
1,736,304 | 1,939,807 | How to write bitmaps as frames to Ogg Theora in C\C++? | How to write bitmaps as frames to Ogg Theora in C\C++?
Some examples with source code would be great!
| Here's the libtheora API and example code.
Here's a micro howto that shows how to use the theora binaries. As the encoder reads raw, uncompressed 'yuv4mpeg' data for video you could use that from your app, too by piping the video frames to the encoder.
|
1,736,403 | 1,736,414 | Wait for a detached thread to finish in C++ | How can I wait for a detached thread to finish in C++?
I don't care about an exit status, I just want to know whether or not the thread has finished.
I'm trying to provide a synchronous wrapper around an asynchronous third-party tool. The problem is a weird race condition crash involving a callback. The progression is:
I call the thirdparty, and register a callback
when the thirdparty finishes, it notifies me using the callback -- in a detached thread I have no real control over.
I want the thread from (1) to wait until (2) is called.
I want to wrap this in a mechanism that provides a blocking call. So far, I have:
class Wait {
public:
void callback() {
pthread_mutex_lock(&m_mutex);
m_done = true;
pthread_cond_broadcast(&m_cond);
pthread_mutex_unlock(&m_mutex);
}
void wait() {
pthread_mutex_lock(&m_mutex);
while (!m_done) {
pthread_cond_wait(&m_cond, &m_mutex);
}
pthread_mutex_unlock(&m_mutex);
}
private:
pthread_mutex_t m_mutex;
pthread_cond_t m_cond;
bool m_done;
};
// elsewhere...
Wait waiter;
thirdparty_utility(&waiter);
waiter.wait();
As far as I can tell, this should work, and it usually does, but sometimes it crashes. As far as I can determine from the corefile, my guess as to the problem is this:
When the callback broadcasts the end of m_done, the wait thread wakes up
The wait thread is now done here, and Wait is destroyed. All of Wait's members are destroyed, including the mutex and cond.
The callback thread tries to continue from the broadcast point, but is now using memory that's been released, which results in memory corruption.
When the callback thread tries to return (above the level of my poor callback method), the program crashes (usually with a SIGSEGV, but I've seen SIGILL a couple of times).
I've tried a lot of different mechanisms to try to fix this, but none of them solve the problem. I still see occasional crashes.
EDIT: More details:
This is part of a massively multithreaded application, so creating a static Wait isn't practical.
I ran a test, creating Wait on the heap, and deliberately leaking the memory (i.e. the Wait objects are never deallocated), and that resulted in no crashes. So I'm sure it's a problem of Wait being deallocated too soon.
I've also tried a test with a sleep(5) after the unlock in wait, and that also produced no crashes. I hate to rely on a kludge like that though.
EDIT: ThirdParty details:
I didn't think this was relevant at first, but the more I think about it, the more I think it's the real problem:
The thirdparty stuff I mentioned, and why I have no control over the thread: this is using CORBA.
So, it's possible that CORBA is holding onto a reference to my object longer than intended.
| Yes, I believe that what you're describing is happening (a race condition on deallocation). One quick way to fix this is to create a static instance of Wait, one that won't get destroyed. This will work as long as you don't need to have more than one waiter at the same time.
You will also permanently use that memory; it will never be deallocated. But that doesn't look too bad here.
The main issue is that it's hard to coordinate lifetimes of your thread communication constructs between threads: you will always need at least one leftover communication construct to communicate when it is safe to destroy (at least in languages without garbage collection, like C++).
EDIT:
See comments for some ideas about refcounting with a global mutex.
|
1,736,458 | 1,736,477 | Help un-noobify my C++ homework | #include <iostream>
#include <stdlib.h>
#include <time.h>
using namespace std;
int twoify(int num, int times)
{
num *= 2;
if (times > 0)
{
times--;
return twoify(num, times);
}
return num;
}
int main()
{
srand(time(NULL));
const int BET = 1;
const int TIMES = 100000;
const int CHANCE = 50;
int wins = 0;
int losses = 0;
int wstreak = 0;
int lstreak = 0;
int cwstreak = 0;
int clstreak = 0;
for (int i = 0; i < TIMES; i++)
{
int num = rand() % 100 + 1;
if (num <= CHANCE) // win?
{
wins++;
cwstreak++;
clstreak = 0;
if (cwstreak > wstreak)
wstreak = cwstreak;
}
else
{
losses++;
clstreak++;
cwstreak = 0;
if (clstreak > lstreak)
lstreak = clstreak;
}
}
cout << "Wins: " << wins << "\tLosses: " << losses << endl;
cout << "Win Streak: " << wstreak << "\tLoss Streak: " << lstreak << endl;
cout << "Worst lose bet: " << twoify(BET, lstreak) << endl;
system("PAUSE");
cout << endl << endl;
return main();
}
In particular, the twoify() function seems noobish. This is a martingale betting pattern: after every loss you double your previous bet until you win.
| First, avoid the useless recursion by turning it into iteration:
int twoify(int num, int times)
{
do {
num *= 2;
--times;
} while (times >= 0);
return num;
}
But, you can do better (if times > 0 is guaranteed, which would also simplify the version above by allowing you to use a while instead of the do/while, but, anyway...):
int twoify(int num, int times)
{
return num << (times + 1);
}
The reason this works is that it's equivalent to multiplying num by 2 raised to the (times + 1)th power, which is what the recursive and iterative versions both do.
|
1,736,480 | 1,736,524 | C++ new operator. Creating a new instance | I'm having some trouble creating an object in C++. I create a class called Instruction, and I am trying to create a new instance, but I get compiler errors.
Class code:
class Instruction{
protected:
string name;
int value;
public:
Instruction(string _name, int _value);
~Instruction();
void setName(string _name);
void setValue(int _value);
string getName();
int getValue();
virtual void execute();
};
//constructor
inline Instruction::Instruction(string _name, int _value){
name = _name;
value = _value;
}
//destructor
inline Instruction::~Instruction(){
//name = "";
//value = 0;
}
inline void Instruction::setName(string _name){
name = _name;
}
inline void Instruction::setValue(int _value){
value = _value;
}
inline string Instruction::getName(){
return name;
}
int Instruction::getValue(){
return value;
}
inline void Instruction::execute(){
cout << "still have to implement";
}
This is how I try to create a new object:
Instruction* inst;
inst = new Instruction("instruction33", 33);
I get the following compiler errors:
functions.h:70: error: no matching function for call to ‘operator new(unsigned int, std::string&, int&)’
/usr/include/c++/4.3/new:95: note: candidates are: void* operator new(size_t)
/usr/include/c++/4.3/new:99: note: void* operator new(size_t, const std::nothrow_t&)
/usr/include/c++/4.3/new:105: note: void* operator new(size_t, void*)
You guys are correct. The error comes from this line of code:
instList.push_back(inst);
where instList is created like this:
list <Instruction> instList; //#include <list> is in the file
| inst is a pointer to an Instruction object and instList is a list of Instruction objects. So when you try instList.push_back(inst) it doesn't work (it expects a real object not the pointer to it). You should instead have instList.push_back(*inst).
|
1,736,502 | 1,736,505 | Compile C++ in Eclipse? | How can I compile C++ .cpp files in the Eclipse IDE? I have CDT installed, but when I try to execute it, I get a "Launch Failed. Binary not found." error. I do not want to install CYGWIN unless it is absolutely necessary.
| The CDT only provides you with the facilities in Eclipse to edit and understand C files. It does not, to my knowledge, incorporate a compiler (unlike the JDT).
You need to install and configure a C compiler that the CDT can use.
If you're on Linux, you'll probably already have gcc installed that you can use. The only time I ever had to install a C development environment under Windows, I actually used MinGW although you could use Cygwin since it comes with the gcc compiler as well.
I used MinGW since it's only the development suite (hence the "minimalist" in "Minimalist GNU for Windows"), whereas Cygwin includes all sorts of extra stuff.
|
1,736,620 | 1,736,781 | SDL_Event.type always empty after polling | I have a general function that is supposed to handle any event in the SDL event queue. So far, the function looks like this:
int eventhandler(void* args){
cout << "Eventhandler started.\n";
while (!quit){
while (SDL_PollEvent(&event)){
cout << "Got event to handle: " << event.type << "\n";
switch (event.type){
SDL_KEYDOWN:
keyDownHandler(event.key.keysym.sym);
break;
default:
break;
}
}
}
}
However, when I test the function, I get a whole bunch of events but none of them seem to have a type. It doesn't even print 0 or anything — just nothing. The output when pressing any key looks like this:
Got event to handle:
And nothing else. Any tutorial and the SDL docs say that I should handle events like this, but it isn't working. Anybody else have this problem or a solution?
By the way, the eventhandler runs in an SDL_Thread, but I don't think that's the problem.
| That nothing happens is a result of the missing case in front of SDL_KEYDOWN.
With case missing, the compiler sees a jump label (the kind you would use with e.g. goto SDL_KEYDOWN;), which leaves default as the only case label in the switch statement.
As for why event.type prints as nothing: event.type is a Uint8, which SDL typedefs to unsigned char. Streaming an unsigned char through cout prints it as a character, not a number, and most event type values are unprintable control characters, so the output looks empty. Cast it to an integer to see the numeric value, e.g. cout << static_cast<int>(event.type).
|
1,736,654 | 1,736,711 | Macros as arguments to preprocessor directives | Being faced with the question whether it's possible to choose #includes in the preprocessor I immediately thought not possible.
.. Only to later find out that it is indeed possible and you only need to watch out for argument expansions (which e.g. Boost.Preprocessor can take care of).
While I'd avoid actually doing that for includes if possible, I'd like to know why this works. At the moment I fail to get a useful understanding in the C++ or C standard.
Are parameterized macros allowed for any preprocessor-directive? (except #define/#undef)
Can someone reference where this is allowed and summarize it?
Example for the curious utilizing Boost.Preprocessor for simplicity:
#include <boost/preprocessor/cat.hpp>
#include <boost/preprocessor/stringize.hpp>
#define INC_LOCAL(a,b) BOOST_PP_STRINGIZE(BOOST_PP_CAT(BOOST_PP_CAT(a,b),.h))
#define INC_GLOBAL(a,b) BOOST_PP_CAT(BOOST_PP_CAT(<,a),BOOST_PP_CAT(b,>))
#include INC_LOCAL(loc,al) // #include "local.h"
#include INC_GLOBAL(vect,or) // #include <vector>
Update:
Referenced C standard, clarified question.
| From § 16.2-4 ("Source file inclusion") of C++ 2003 draft:
A preprocessing directive of the form
# include pp-tokens new-line
(that does not match one of the two previous forms) is permitted. The preprocessing tokens after include in the directive are processed just as in normal text (each identifier currently defined as a macro name is replaced by its replacement list of preprocessing tokens).
§ 6.10.2-4 of C99 says the same.
The "two previous forms" mentioned above are # include <h-char-sequence> and # include "q-char-sequence". The section seems too simple to summarize.
For other directives, macro expansion isn't performed on any identifier preprocessing token (note this behavior is not defined by the grammar, but by C++ § 16 / C § 6.10):
# if constant-expression new-line [group]
# ifdef identifier new-line [group]
# ifndef identifier new-line [group]
# elif constant-expression new-line [group]
# else new-line [group]
# endif new-line
# include pp-tokens new-line
# define identifier replacement-list new-line
# define identifier lparen [identifier-list] ) replacement-list new-line
# undef identifier new-line
# line pp-tokens new-line
# error [pp-tokens] new-line
# pragma [pp-tokens] new-line
# new-line
#line is explicitly macro-expanded by C++ § 16.4-5 / C § 6.10.4-5. Expansion for #error (C++ § 16.5 / C § 6.10.5) and #pragma (C++ § 16.6 / C § 6.10.6) isn't mentioned. C++ § 16.3-7 / C 6.10.3-8 states:
If a # preprocessing token, followed by an identifier, occurs lexically at the point at which a preprocessing directive could begin, the identifier is not subject to macro replacement.
C++ § 16.3.1 / C § 6.10.3.1-1 tells us that when the arguments to a macro function are substituted into the replacement-list, they are first macro expanded. Similarly, C++ § 16.3.4 / C § 6.10.3.4 has the preprocessor macro-expand the replacement-list after substitution.
In summary, macro expansion is done for #if, #elif, #include, #line, the arguments to a macro function and the body of a macro function when substituted. I think that's everything.
|
1,736,745 | 1,736,750 | C++ Class Inheritance problem | Hi, I have two classes: one called Instruction, and one called LDI which inherits from the Instruction class.
class Instruction{
protected:
string name;
int value;
public:
Instruction(string _name, int _value){ //constructor
name = _name;
value = _value;
}
~Instruction(){}
Instruction (const Instruction &rhs){
name = rhs.name;
value = rhs.value;
}
void setName(string _name){
name = _name;
}
void setValue(int _value){
value = _value;
}
string getName(){
return name;
}
int getValue(){
return value;
}
virtual void execute(){}
virtual Instruction* Clone() {
return new Instruction(*this);
}
};
/////////////end of instruction super class //////////////////////////
class LDI : public Instruction{
void execute(){
//not implemented yet
}
virtual Instruction* Clone(){
return new LDI(*this);
}
};
Then I create a pointer of type Instruction and try to make point to a new instance of type LDI.
Instruction* ptr;
ptr = new LDI("test", 22);
I get the following compiler errors. Any ideas what I'm doing wrong?
functions.h:71: error: no matching function for call to ‘LDI::LDI(std::string&, int&)’
classes.h:54: note: candidates are: LDI::LDI()
classes.h:54: note: LDI::LDI(const LDI&)
| The code: new LDI(name, val) specifically says "Call the LDI constructor with a name and val."
There is no LDI constructor that takes name / val.
In fact, I don't see a constructor for LDI at all.
If you want to use the constructor of a base-class, here is how:
class LDI : public Instruction {
public:
    LDI(string _name, int _value)       // public constructor for LDI
        : Instruction(_name, _value)    // delegate to the base-class constructor
    {
        // do more LDI-specific construction here
    }
};
|
1,736,833 | 1,736,841 | void pointers: difference between C and C++ | I'm trying to understand the differences between C and C++ with regards to void pointers. the following compiles in C but not C++ (all compilations done with gcc/g++ -ansi -pedantic -Wall):
int* p = malloc(sizeof(int));
Because malloc returns void*, which C++ doesn't allow to assign to int* while C does allow that.
However, here:
void foo(void* vptr)
{
}
int main()
{
int* p = (int*) malloc(sizeof(int));
foo(p);
return 0;
}
Both C++ and C compile it with no complains. Why?
K&R2 say:
Any pointer to an object may be
converted to type void * without loss
of information. If the result is
converted back to the original pointer
type, the original pointer is
recovered.
And this pretty sums all there is about void* conversions in C. What does C++ standard dictate?
| In C, pointer conversions to and from void* were always implicit.
In C++, conversions from T* to void* are implicit, but void* to anything else requires a cast.
|
1,736,919 | 1,736,932 | <list> retrieving items problem with iterator | I have a list of type Instruction*. Instruction is a class that I made. This class has a function called execute().
I create a list of Instruction*
list<Instruction*> instList;
I create an Instruction*
Instruction* instPtr;
instPtr = new Instruction("test",10);
If I call
instPtr->execute();
the function will be executed correctly, however if I store instPtr in the instList I cannot call the execute() function anymore from the list.
//add to list
instList.push_back(instPtr);
//create iterator for list
list<Instruction*>::iterator p = instList.begin();
//now p should be the first element in the list
//if I try to call execute() function it will not work
p -> execute();
I get the following error:
error: request for member ‘execute’ in ‘* p.std::_List_iterator<_Tp>::operator-> [with _Tp = Instruction*]()’, which is of non-class type ‘Instruction*’
| p is an iterator of Instruction * pointers. You can think of it as if it were of type Instruction **. You need to double dereference p like so:
(*p)->execute();
*p will evaluate to an Instruction *, and further applying the -> operator on that will dereference the pointer.
|
1,737,111 | 1,737,622 | C++ Debug Problem - pdb file | I have several DLLs in my project that are defined to create their *.pdb file (debug info) inside the same folder. The problem is that the only *.pdb file that actually exist inside this folder is the one that belongs to the last compiled DLL. As a result I can debug only the last project (each compilation deletes all the *.pdb in the folder and creates the current compiled DLL pdb file.)
How can I resolve it?
I'm using Visual Studio 2003.
| Alternatively, you can create a separate folder for each release, and always compile and execute your program from the newest release folder. This way you can also manage all the other external files, like images and whatnot.
|